{ "arxiv_id": "2302.11352", "language": "en", "timestamp": "2023-02-23T02:14:21", "url": "https://arxiv.org/abs/2302.11352", "yymm": "2302" }
\section{Introduction} The promise of automated deep learning systems to assist radiologists is enormous. At the moment, important milestones, such as better consistency or even better performance, have been achieved on an increasing number of use-cases~\cite{zhou2021review,liu2019comparison}. A source of inspiration for further improving these efforts is the way humans register and analyze images, an approach that has proven effective for deep learning in the past \cite{litjens2017survey,zhou2021review}. In any analysis, a doctor provides the memory and knowledge to place what is currently seen in the context of what has been seen before. In principle, this can be compared to what implicitly happens at scale in any deep learning method. A doctor's analysis, however, is not implicit: their analysis process can be described and verified. We wonder whether (medical) deep learning methods could benefit from an explicit memory/knowledge infusion. Making deep learning methods more explicit in terms of using past observations has already been studied in Natural Language Processing (NLP), in the form of retrieval augmentation~\cite{lewis2020retrieval,pasupat2021controllable}. Supplementing data with relevant retrieved information can lead to performance gains \cite{gur2021cross}. This process can be thought of as both an enrichment and a regularization mechanism. A benefit of retrieval augmentation is that context from a trusted knowledge source is used as a supplement \cite{siriwardhana2022improving,komeili2022internet}. The versatility of retrieval augmentation, which essentially provides a non-parametric memory expansion, is gaining traction in the multi-modal field \cite{gur2021cross,ramos2022smallcap}. The modalities in multi-modal data typically have different strengths, leading to a strong and a weak data modality~\cite{zhou2021review}. For instance, radiology reports generally contain richer and more complete information than X-rays, since the report is essentially a clinician's annotation~\cite{pooch2019can}. With retrieval augmentation, information can be transferred explicitly from the strong to the weak modality. A reason retrieval augmentation methods are not yet adopted for medical applications lies in the weakness of retrieval methods for the medical domain. Retrieval in the general domain is focused on global image regions~\cite{li2018large,ionescu2022overview}, whereas in medical images global features, such as body/organ structure, are similar across patients. Meanwhile, more fine-grained aspects are more discriminating as disease indicators, but are easily overlooked. The need for fine-grained results makes medical image retrieval orders of magnitude more complex. We propose X-Ray Task Retrieval Augmentation (X-TRA), a framework for retrieval augmentation in a multi-modal medical setting, specifically designed for X-ray and radiology report analysis. To do so, we introduce a cross-modal retrieval model and retrieval augmentation method. We make the following contributions. \begin{itemize} \item We propose a CLIP-based multi-modal retrieval framework with a dedicated fine-tuning component for efficient content alignment of medical information, which improves state-of-the-art results in multi- and single-modal retrieval on radiology images and reports. \item We introduce a multi-modal retrieval augmentation component for disease classification and report retrieval pipelines. 
\item We show that our method (1) reaches state-of-the-art performance in both multi-label disease classification and report retrieval, (2) is competitive with dedicated report generation methodologies on report retrieval, and (3) generalizes across datasets, and we analyze its limitations. \end{itemize} \section{Related Work} \paragraph{Multi-modal alignment} The introduction of Transformers for natural language processing (NLP) accelerated the development of integrated vision-language (VL) alignment models suitable for various VL-tasks, such as ViLBERT~\cite{lu2019vilbert}, LXMERT~\cite{tan2019lxmert} and SimVLM~\cite{wang2021simvlm}. These methods provide alignment on a region-to-sentence or region-to-word scale. The next step in multi-modal alignment was made by methods using contrastive learning combined with substantially larger datasets. Examples are CLIP~\cite{radford2021learning} and ALIGN~\cite{jia2021scaling}, which significantly outperform existing methods by training on datasets consisting of 400M and 1.8B VL-pairs, respectively. Domain-specific versions of CLIP, which is open-source, have been fine-tuned with additional data, such as PubMedCLIP \cite{EslamiDeMeloMeinel2021CLIPMedical}. \paragraph{Retrieval Augmentation} The origin of retrieval augmentation lies in the NLP field. It was created to fully utilise the power of large datasets. With retrieval augmentation, we are not only dependent on a parametric model, but can also supplement data as a non-parametric component. Previous methods have shown that retrieval augmentation is simple yet effective and versatile across a number of applications~\cite{komeili2022internet,guu2020retrieval,siriwardhana2022improving}. \paragraph{Retrieval in Medical Imaging} Until recently, the only retrieval methods in medical imaging were tailored, hand-crafted methods \cite{li2018large}. With access to large datasets and pre-trained models, the balance shifted towards automated retrieval methods \cite{qayyum2017medical,Hu_2022_WACV}. Especially in the histopathology and radiology domains, major strides were made with retrieval methods~\cite{endo2021retrieval,ionescu2022overview}. The use of text to improve image retrieval has also been adopted for chest X-ray retrieval. Yu \textit{et al.}~\cite{yu2021multimodal} use CNN and word2vec features for multi-modal alignment and retrieval. Zhang \textit{et al.}~\cite{zhang2022category} approach this problem with a hash-based retrieval method. \paragraph{Retrieval for chest X-ray analysis} Common tasks in chest X-ray analysis are disease classification and report generation \cite{johnson2019mimic,chambon2022roentgen,li2022self}. Using retrieval for report generation has been a common approach. The approaches often entail the use of retrieved information as an input or template for a decoder that crafts a custom report \cite{pino2021clinically,wang2022cross,yang2021writing}. Augmentation of chest X-ray tasks with synthetically generated diffusion-based images was shown to be possible~\cite{chambon2022roentgen}; however, the clinical use of non-genuine images can lead to complications and remains disputed~\cite{zhou2021review}. \section{Methods} Our method is composed of two separate parts (\autoref{fig:genarch}). The first part is the alignment of the two modalities and construction of the retrieval model. The second part uses the output of the retriever as a non-parametric component in (cross-modal) retrieval augmentation to enhance the downstream tasks. 
We consider a dataset $\Theta^{N}_{\{\mathbf{x},\mathbf{r}\}}$ consisting of pairs containing an X-ray ($\mathbf{x}_i$) and a radiology report ($\mathbf{r}_i$). To align these modalities, we make use of the powerful CLIP vision-language aligner. Our objective is to minimize the distance between $\mathbf{x}$ and $\mathbf{r}$, to make cross-modal tasks possible. These aligned features will be used for retrieval augmentation with multi-label classification and report retrieval as downstream tasks. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{images/gen_arch.pdf} \caption{Architecture overview of X-TRA.} \label{fig:genarch} \end{figure} \subsection{Stage I: Multi-modal content alignment} We leverage the pre-trained features from CLIP for initial feature representations. However, there is a domain shift between the natural image data CLIP is trained on and the medical images we want to use in our method. Medical images can be visually very similar, while holding drastically different information. Small localized markers can be indicators for disease. In natural images, global representations are more decisive and thus more suitable for unsupervised contrastive alignment. Alignment in CLIP is performed as follows~\cite{radford2021learning}, \begin{equation} \mathcal{L}_{CLIP}=-\frac{1}{N} \sum_{z \in Z}\sum_{i=1}^N \log \frac{\mathrm{e}^{\left(\operatorname{sim}\left(z_i^0, z_i^1\right) / \tau\right)}}{\sum_{j=1}^N \mathrm{e}^{ \left(\operatorname{sim}\left(z_i^0, z_j^1\right) / \tau\right)}}\;\; \text{with} \;\; Z = \{(\mathbf{x},\mathbf{r}),(\mathbf{r},\mathbf{x})\}. \end{equation} We need to overcome the obvious domain shift between medical images and the natural images on which CLIP is trained. Therefore, we require a more specific type of fine-tuning that is especially geared towards content-based extraction. We introduce the following loss, requiring a global class label for each dataset. With this fine-tuning step, we create a supervised content-based alignment method with content classifier $C$: \begin{equation}\mathcal{L}_{ours} = -\frac{1}{N}\sum_{z \in Z}\sum_{i=1}^N y_i \log\bigl(\widehat{C(z_i)}\bigr)\;\; \text{with} \;\; Z = \{\mathbf{x},\mathbf{r},(\mathbf{x},\mathbf{r})\}.\end{equation} This content-based alignment loss should prioritize the alignment of fine-grained content-level details over the global visual appearance of the image. \subsubsection{Creating a retrieval index} At retrieval time, we need to retrieve images that have a high similarity with query images. To do so efficiently, we make use of Facebook AI Similarity Search (FAISS) \cite{johnson2019billion}. This retrieval tool efficiently performs nearest-neighbour similarity search. After multi-modal alignment, we encode our data into a FAISS index $I$ conditioned on the entire training dataset. We can construct indices that only retrieve images ($I^{\mathbf{x}}$), only reports ($I^{\mathbf{r}}$), or both ($I^{\mathbf{x}\mathbf{r}}$). Given a query $\mathcal{Q}_s$ in source modality $s$, we can obtain its $k$ neighbours of target modality $t$ through: \begin{equation}\mathcal{N}_{\mathbf{s\hspace{.2mm}\scalebox{.8}{\Arrow[.1cm]}\hspace{.2mm} t}}^{k} = I^{t}(\mathcal{Q}_s,k),\end{equation} where the target modality $t$ can be $\mathbf{x}$, $\mathbf{r}$, or both. Once the retrieval index $I$ is trained on the newly aligned training dataset, we can consider the retriever as a non-parametric component that retrieves information from a fixed dataset in the subsequent retrieval augmentation steps. 
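For concreteness, the following is a minimal sketch of how such an index could be built and queried with the FAISS Python bindings; the embedding dimension, the placeholder arrays, and the helper names are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import numpy as np
import faiss  # Facebook AI Similarity Search

# Assumed inputs: embeddings of the aligned training set produced by the
# fine-tuned image/report encoders (placeholder random data, d is assumed).
d = 512
train_img_emb = np.random.rand(1000, d).astype("float32")  # encoded X-rays
train_txt_emb = np.random.rand(1000, d).astype("float32")  # encoded reports
faiss.normalize_L2(train_img_emb)
faiss.normalize_L2(train_txt_emb)

# One index per target modality: I^x (images) and I^r (reports).
index_img = faiss.IndexFlatIP(d)  # inner product = cosine after L2-normalisation
index_txt = faiss.IndexFlatIP(d)
index_img.add(train_img_emb)
index_txt.add(train_txt_emb)

def retrieve(query_emb, index, k=10):
    """Return the k nearest training samples for an encoded query (Eq. 3)."""
    q = np.ascontiguousarray(query_emb.astype("float32"))
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)  # ids point into the fixed training set
    return scores, ids

# Example: cross-modal retrieval of reports for a batch of query images.
scores, neighbour_ids = retrieve(train_img_emb[:4], index_txt, k=10)
\end{verbatim}
In this sketch, the index is built once over the aligned training set and then kept fixed, so retrieval adds no trainable parameters.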
Note that at test time, a query from the test set is used to retrieve neighbours from the training set. \subsection{Stage II: Retrieval Augmentation} The purpose of retrieval augmentation is to effectively leverage similar representations to adopt a more informative representation of a given input. With the already trained retrieval index, we retrieve similar representations. To obtain a richer representation of $\mathbf{x}_i$, we retrieve intra- $\mathcal{N}_{\mathbf{\mathbf{x}\hspace{.2mm}\scalebox{.8}{\Arrow[.1cm]}\hspace{.2mm} \mathbf{x}}}^{k}$ and inter-modal neighbours $\mathcal{N}_{\mathbf{\mathbf{x}\hspace{.2mm}\scalebox{.8}{\Arrow[.1cm]}\hspace{.2mm} \mathbf{r}}}^{k}$ from $I^{\mathbf{x}}$ and $I^{\mathbf{r}}$, respectively. To integrate the retrieved neighbouring samples, we can use various fusion methods~\cite{priyasad2021memory}. The simplest one is concatenation: $(\mathbf{x}_i, \mathcal{N}_{\mathbf{\mathbf{x}\hspace{.2mm}\scalebox{.8}{\Arrow[.1cm]}\hspace{.2mm} \mathbf{x}}}^{k},\mathcal{N}_{\mathbf{\mathbf{x}\hspace{.2mm}\scalebox{.8}{\Arrow[.1cm]}\hspace{.2mm} \mathbf{r}}}^{k})$. A more suitable method is multi-head attention (MHA), which is able to capture the long-range dependencies between the original image and the retrieved information~\cite{vaswani2017attention}: \begin{equation} \mathbf{x}^{TRA}_i = (\mathbf{x}_i, \mathrm{MHA}(\mathcal{N}_{\mathbf{\mathbf{x}\hspace{.2mm}\scalebox{.8}{\Arrow[.1cm]}\hspace{.2mm} \mathbf{x}}}^{k},\mathbf{x}_i),\mathrm{MHA}(\mathcal{N}_{\mathbf{\mathbf{x}\hspace{.2mm}\scalebox{.8}{\Arrow[.1cm]}\hspace{.2mm} \mathbf{r}}}^{k},\mathbf{x}_i)). \end{equation} \subsection{Downstream tasks} We tackle two common tasks in chest X-ray analysis: multi-label disease classification and report retrieval. For the latter task, our objective is simply to show how well a retriever can perform on the task of report generation. We measure the effect of retrieval augmentation by comparing the task performance of $\mathbf{x}^{TRA}$ with that of $\mathbf{x}$. A useful property of our retrieval index would be the usability of an already trained model across datasets. Three clinically relevant scenarios for this are: training from scratch on the new dataset, frozen usage of the trained retrieval model, and fine-tuning of the existing retrieval model with another image-report dataset. \subsection{Datasets} The primary dataset to which our method is applied is \textbf{MIMIC-CXR} \textit{(200k image-report pairs)}~\cite{johnson2019mimic}. Disease labels for each pair are extracted from the report through a rule-based extraction method~\cite{irvin2019chexpert}. To evaluate the versatility and cross-domain capabilities of our method, we use the small \textbf{openI} \textit{(4k image-report pairs)}~\cite{openi} and image-only \textbf{CheXpert} \textit{(200k images)}~\cite{irvin2019chexpert} datasets. Official train-test splits are used. \subsection{Experimental setup} As a pre-processing step, the X-ray images are normalized and standardized by rescaling and center-cropping to $256\times256$, from which images of size $224\times224$ are sampled. The maximum number of tokens for representing radiology reports in the text encoder is set to $256$. Three different VL models are used as encoders. The first is a CNN-BERT model, composed of a DenseNet121 image encoder and a ClinicalBERT~\cite{huang2019clinicalbert} text encoder. This combination has been shown to work well in previous multi-modal chest X-ray work. 
Given the strong performance of large vision-language models, we also use CLIP (ViT-32 image encoder and text encoder)~\cite{radford2021learning} and its medically fine-tuned equivalent PubMedCLIP~\cite{EslamiDeMeloMeinel2021CLIPMedical}. The latter is fine-tuned using the Radiology Objects in COntext (ROCO) dataset~\cite{pelka2018radiology}. Multi-modal alignment is implemented as a single pass through a two-layer ReLU-activated MLP, with dimension $z_{enc}$, a dropout rate of $0.5$, and layer normalization. $z_{enc}$ is the output dimension of the encoder. We implement $C$ as a three-layer classifier head with dimensions $\{z_{enc},256,14\}$. During retrieval, we make use of $k=10$ retrieved neighbours. To prevent overfitting, early stopping with a tolerance of 3 is applied to all training operations. \section{Results} \subsection{Cross-modal Retrieval} We compare the performance of our retrieval method against previous methods in \autoref{tab:retrieval_main} in terms of class-based mean average precision~(mAP). Due to the powerful alignment of CLIP and our tailor-made fine-tuning, we outperform all existing retrieval approaches for radiology images and/or reports by a large margin. The performance difference with the similarly fine-tuned encoder combination of DenseNet121 and ClinicalBERT further underscores the power of CLIP in building a strong retrieval method, specifically on cross-domain retrieval. Interestingly, we observe that PubMedCLIP does not outperform CLIP. This can be explained by a domain shift between MIMIC-CXR and ROCO, together with the ability of CLIP to generalize well out-of-domain~\cite{radford2021learning}. In our downstream tasks, image-based retrieval is most important; it performs similarly on inter- and intra-modal retrieval tasks. \input{tables/ir_cls} \subsection{Multi-label disease classification} Disease classification results in terms of AUC in \autoref{tab:main_cls} show that retrieval augmentation gives a clear improvement across different disease classes. It is interesting to see that we find a positive, albeit weak, correlation (R$\approx$0.60) between the increase in class AUC performance and retrieval mAP. Moreover, the performance gain from retrieval augmentation ($0.80\rightarrow0.85$) is similar to additional training with synthetic diffusion-generated X-rays ($0.80\rightarrow0.84$)~\cite{chambon2022roentgen}. The benefit of our method is that the supplemented information originates from the trusted dataset itself and is not synthetically generated. \input{tables/ml_cls} \subsection{Report generation} With retrieval-augmented report retrieval, we show competitive performance on the report generation metrics compared to a selection of previous methods. While simple retrieval should not be expected to outperform dedicated report generation methods, we are able to provide a result that can be considered competitive (\autoref{tab:reportgen}). On the METEOR and ROUGE metrics, we even outperform most existing methods. The metrics reflect that the strength of report retrieval is indeed in the global representation of the report. Our retriever is fine-tuned to retrieve samples with equivalent label spaces, hence the good results on metrics that reward global similarity. An interesting outlook is the application of this method in a dedicated report generation framework, which could boost performance further. 
\input{tables/ra_rg} \subsection{Cross-dataset} By evaluating the cross-dataset scenarios (\autoref{tab:cross_dataset}) with the CheXpert and openI datasets, we can conclude that transferability to images from other domains is limited. However, we do see that if retrieval augmentation is not useful, it can be ignored by the model and will not be detrimental to performance. The domain shift between different chest X-ray datasets remains a problem \cite{pooch2019can}. Currently, the most practical solution to this problem is the addition of a fine-tuning step. Cross-domain results on openI show that learning across modalities is possible with fine-tuning. By adding the openI dataset to the existing retrieval index, we can integrate this new dataset into the index. We can see that X-TRA benefits openI in this setting. In the updated retrieval index, $23\%$ of the retrieved information originates from openI and $77\%$ from MIMIC-CXR. \input{tables/cross_dataset} \begin{figure}[!b] \centering \begin{subfigure}[b]{0.90\linewidth} \includegraphics[width=\linewidth]{images/ablations.pdf} \caption{} \label{fig:ab2} \end{subfigure} \begin{subfigure}[b]{0.90\linewidth} \includegraphics[width=\linewidth]{images/ablation2.pdf} \caption{} \label{fig:ab1} \end{subfigure} \caption{Ablation studies of X-TRA on disease classification, for five different random seeds, with (a) different compositions of the retrieval index for $\mathcal{L}_{CLIP}$ and $\mathcal{L}_{ours}$ and (b) partial usage of the retrieval index.} \label{fig:abl} \end{figure} \subsection{Ablation studies} We study the effect of the components in our retrieval augmentation method in \autoref{fig:abl}. Specifically, we look at the influence of each component in content-based and CLIP-based alignment. Interestingly, the composition of data modalities in retrieval augmentation does not have a big effect, since the retriever has similar results in inter- and intra-modal retrieval. When randomly selected data is used instead of retrieved information, we achieve results comparable to our method without X-TRA. This is in accordance with the cross-dataset results, showing that if X-TRA-supplemented information is not useful, it can be ignored. Using a partial retrieval index, we can conclude that X-TRA can be useful even with a small retrieval index; however, performance only reaches optimal levels when $N>100k$. \subsection{Insight and limitations} Qualitative results from our retrieval method for two different query images are shown in \autoref{fig:q}. We retrieve from both the image index and the report index. The retrieved images match well in terms of the labels attributed to them, showing that our fine-tuning prevents the retrieval of images that are only globally similar. Fine-tuning the entire CLIP model on domain-specific data is an interesting prospect. Potentially, this could further improve the performance of our retrieval model. However, as we have shown in this paper regarding the performance of CLIP against PubMedCLIP, the loss of generalization can also be detrimental. This is a promising avenue to explore in future studies. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{images/retrieval_results.pdf} \caption{Examples of image-image and image-text retrieval including disease class labels. 
A green outline indicates a correct retrieval; an orange or dashed outline indicates a missed or extra disease label, respectively.} \label{fig:q} \end{figure} \section{Conclusion} In this work, we present X-TRA, a simple yet effective method to improve multiple tasks on radiology images. Our method is composed of a content alignment step and a retrieval augmentation step. With a new label-based alignment loss, we are able to leverage pre-trained CLIP features to create a powerful cross-modal retrieval model. The general CLIP model appears to be more useful for our retrieval model than the slightly out-of-domain, medically fine-tuned PubMedCLIP. We use this retrieval model to improve chest X-ray analysis through retrieval augmentation. With this, we add an enrichment and regularization component that improves both multi-label disease classification and report retrieval by up to more than $5\%$. On the latter task, we even show competitive performance against dedicated report generation methods. This opens up possibilities for retrieval augmentation as a generic tool in medical imaging. \bibliographystyle{splncs04}
{ "arxiv_id": "2302.11299", "language": "en", "timestamp": "2023-02-23T02:13:08", "url": "https://arxiv.org/abs/2302.11299", "yymm": "2302" }
\section{Introduction}\label{sec:introduction}} \IEEEPARstart{R}{ecent} years have witnessed the rapid development of object detection, aided by a number of benchmark datasets~\cite{voc,coco,object365,imagenet} and methods~\cite{rcnn,fastrcnn,fasterrcnn,yolov3,yolox}. Despite great success, the application of object detection has long been plagued by expensive instance-level annotations. To this end, numerous efforts~\cite{stac-ssod-ts,unbiasedteacher-ssod-ts,tang2021humbleteacher-ssod-ts,softteacher-ssod-ts} have been devoted to semi-supervised object detection (SSOD). Inspired by the advances in image classification~\cite{meanteacher-ssl-ts,mixmatch-ssl,fixmatch-ssl}, recent endeavors also resort to teacher-student learning for SSOD~\cite{stac-ssod-ts,unbiasedteacher-ssod-ts,instant-ssod-ts,tang2021humbleteacher-ssod-ts,softteacher-ssod-ts}. The main principle of this paradigm is to use a teacher network to generate pseudo labels for the optimization of the student one, where strong and weak data augmentations are separately applied to enforce consistency between the two networks~\cite{cutout,autoaugment,dataaug_forconsistency-ssl}. This paradigm can well exploit large amounts of unlabeled data based on limited label information, showing great advantages over previous semi-supervised approaches~\cite{meanteacher-ssl-ts,fixmatch-ssl,unbiasedteacher-ssod-ts,stac-ssod-ts}. Therefore, a flurry of methods~\cite{stac-ssod-ts,unbiasedteacher-ssod-ts,instant-ssod-ts,tang2021humbleteacher-ssod-ts,softteacher-ssod-ts} applying teacher-student learning have recently been proposed for SSOD and have achieved notable success. However, these methods mainly focus on two-stage detection networks like FasterRCNN~\cite{fasterrcnn}, while the exploration of widely-used one-stage models like the YOLO series~\cite{yolov3,yolov4,yolov5} has yet to materialize. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{fig1.pdf} \vspace{-1em} \caption{ \textbf{ Qualities of pseudo-labels of YOLOv5~\cite{yolov5} and FasterRCNN~\cite{fasterrcnn} in Unbiased Teacher~\cite{unbiasedteacher-ssod-ts}.} The subplots of (a) and (b) show that the average pseudo-label quality of YOLOv5 is much lower than that of FasterRCNN, especially in the initial training phase. Subplot (c) illustrates the different pseudo-label requirements for box regression and classification. } \label{fig1} \vspace{-1em} \end{figure*} Due to the huge gap between one-stage and two-stage detection networks, it is often a sub-optimal solution to directly apply existing teacher-student approaches to one-stage SSOD. On one hand, one-stage networks typically adopt a dense prediction paradigm~\cite{yolo,yolo9000,yolov3,yolov5}, which is prone to producing more noisy pseudo-boxes. Concretely, two-stage models like FasterRCNN can use multi-stage filtering to ensure the quality of the predicted bounding boxes, \emph{e.g.}, the proposal selection by the RPN and the further refinement by the ROI head~\cite{fasterrcnn}. However, one-stage models directly enumerate over 100$k$ locations for prediction. In this case, low-quality and noisy pseudo-boxes will account for the vast majority when the teacher is not sufficiently trained, as shown in Fig.~\ref{fig1} (a)-(b). Therefore, how to select and produce high-quality pseudo-labels is more critical in one-stage detectors than in two-stage ones. On the other hand, the joint multi-task optimization of one-stage models also degrades the efficiency of teacher-student learning. 
Specifically, two-stage detectors complete the proposal of candidate bounding boxes and the classification of object categories through the RPN and the RoI head, respectively. In this case, different types of pseudo-labels can be collected for the optimization of each task without conflicts. In contrast, one-stage networks like the YOLO series~\cite{yolov3,yolov4,yolo,yolov5,yolo9000} unify these tasks in one prediction layer, but the pseudo labels for different objectives are selected by the same criteria in existing SSOD schemes. Considering that these two tasks usually have different pseudo-label requirements and noise tolerances, this setting will exacerbate the multi-task optimization conflict in one-stage detection~\cite{yolox,wu2020rethinking,jiang2018acquisition}, which ultimately reduces the efficiency of teacher-student learning. In this paper, we present a novel teacher-student learning approach for one-stage SSOD, termed \textit{OneTeacher}, with two innovative designs, namely \emph{Multi-view Pseudo-label Refinement} (MPR) and \textit{Decoupled Semi-supervised Optimization} (DSO). Specifically, MPR is used to improve the quality of pseudo-boxes via multi-view refinements. It first employs augmented-view comparisons to enhance the robustness of pseudo-labels at the instance level, and then filters the low-quality ones according to image-level predictions. In this case, MPR can help OneTeacher obtain more reliable pseudo-labels during training. Meanwhile, DSO is deployed to address the multi-task optimization conflicts~\cite{yolox}. It disentangles the joint multi-task optimization with a simple structure modification, and also performs task-specific pseudo-labeling to maximize the effect of SSOD. With these innovative designs, {OneTeacher} can maximize the benefit of teacher-student learning for one-stage SSOD. To validate {OneTeacher}, we use YOLOv5~\cite{yolov5} as our base model. As one of the most advanced detection networks, YOLOv5 deploys a series of training techniques in its implementation, such as {EMA}~\cite{meanteacher-ssl-ts}, {cosine learning rate decay} and {various data augmentations}~\cite{yolov4,yolov3,cutout}. These techniques make its implementation much more complex than that of FasterRCNN in SSOD~\cite{unbiasedteacher-ssod-ts,stac-ssod-ts,softteacher-ssod-ts,mi2022active}, and some of them, \emph{e.g.}, EMA and strong data augmentations, have already been used in existing SSOD methods to improve performance~\cite{meanteacher-ssl-ts,cutout}, which also greatly reduces the additional benefit of teacher-student learning. In this case, we carefully revise the conventional teacher-student learning settings to make {OneTeacher} applicable to YOLOv5. For example, we reorganize the data augmentation scheme to accommodate both YOLOv5 and teacher-student learning. Meanwhile, some hyper-parameters are also adjusted according to the semi-supervised training status of one-stage models. These modifications are also shared with other SSOD methods~\cite{stac-ssod-ts,unbiasedteacher-ssod-ts} for fair comparison. We validate our {OneTeacher} on two benchmark datasets of object detection, \emph{i.e.,} COCO~\cite{coco} and Pascal VOC~\cite{voc}. The experimental results show that {OneTeacher} achieves significant improvements over the supervised baseline, up to a +33.5\% relative AP gain on 10\% COCO data. This result is even more significant than that of two-stage SSOD~\cite{unbiasedteacher-ssod-ts,stac-ssod-ts,instant-ssod-ts,softteacher-ssod-ts}. 
Meanwhile, OneTeacher also achieves better performance than the state-of-the-art SSOD methods, \emph{e.g.,} Unbiased Teacher~\cite{unbiasedteacher-ssod-ts}, under all experimental settings. In addition, extensive quantitative and qualitative analyses also show that the issues of pseudo-label quality and optimization conflict are well handled by {OneTeacher}. These results greatly validate the effectiveness of {OneTeacher} for one-stage SSOD. In summary, our contributions are three-fold: \begin{itemize} \item We identify two key challenges in one-stage semi-supervised object detection, \emph{i.e.}, low-quality pseudo-labels and multi-task optimization conflict, and also present the first attempt at SSOD on the advanced one-stage model YOLOv5, whose implementation is much more complex than that of two-stage models. \item To address these challenges, we propose a novel one-stage semi-supervised learning scheme called OneTeacher with two innovative designs, namely Multi-view Pseudo-label Refinement and Decoupled Semi-supervised Optimization. \item Under all experimental settings, the proposed OneTeacher obtains distinct performance gains over both supervised and semi-supervised methods on COCO and Pascal VOC. \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{framework.pdf} \vspace{-1em} \caption{\textbf{The framework of OneTeacher.} The teacher network produces high-quality pseudo-labels for the student via the proposed \emph{Multi-view Pseudo-label Refinement (MPR)}. Its parameters are updated from the student ones via EMA~\cite{meanteacher-ssl-ts}. OneTeacher also deploys a novel \emph{Decoupled Semi-supervised Optimization} (DSO) scheme to deal with the multi-task optimization conflict~\cite{yolox,wu2020rethinking}. The base model used is YOLOv5~\cite{yolov5}. } \label{framework} \end{figure*} \section{Related Work} \subsection{Object Detection} Object detection is a fundamental task in computer vision~\cite{yolo,yolo9000,yolov3,ssd,rcnn,fpn,fastrcnn,fasterrcnn,maskrcnn,cascadercnn}. In the era of deep learning, its modeling strategies can be roughly categorized into two main groups, \emph{i.e.}, the two-stage and one-stage approaches~\cite{yolo,yolo9000,yolov3,ssd,rcnn,fastrcnn,fasterrcnn,maskrcnn,fpn,yolox}. Specifically, two-stage approaches~\cite{rcnn,fastrcnn,fasterrcnn,maskrcnn,cascadercnn} first generate a manageable number of candidate object regions, and then the features of these regions are pooled to predict the corresponding box coordinates and categories. In practice, two-stage methods such as FasterRCNN~\cite{fasterrcnn} and Cascade RCNN~\cite{cascadercnn} often exhibit better robustness and performance than the single-stage ones, but their inference is less efficient. Compared to two-stage approaches, one-stage approaches~\cite{ssd,yolo,yolo9000,yolov3,ssd,yolox,yolov5,centernet} directly predict the coordinates and categories of objects based on the convolutional feature maps. Aided by their simple and flexible structure, one-stage approaches can easily achieve a good trade-off between performance and inference speed~\cite{yolov3,yolo,yolo9000,ssd}. To this end, the study of one-stage detection has gained increasing attention, which has led to a flurry of innovative detection networks~\cite{ssd,yolo,yolo9000,yolov3,ssd,yolox,yolov5,centernet}, such as CenterNet~\cite{centernet} and the YOLO series~\cite{yolo,yolo9000,yolov3,yolox,yolov5}. 
In this paper, we focus on semi-supervised learning for one of the most advanced one-stage detectors, YOLOv5~\cite{yolov5}. Compared with previous YOLO models~\cite{yolov3,yolov4,yolo9000,yolo}, YOLOv5 applies a stronger visual backbone and also involves more training techniques, such as EMA and data augmentations, which makes its SSOD implementation more difficult than that of two-stage detection networks like Faster-RCNN. \subsection{Semi-supervised Learning} Semi-supervised learning (SSL) aims to train a model with limited label information and massive amounts of unlabeled data. In computer vision, the research of SSL on image classification has garnered an influx of interest~\cite{consistency-based-ssod,consistency-ssl,mixmatch-ssl,remixmatch-ssl,meanteacher-ssl-ts,fixmatch-ssl,learning-with-pseudo-ensembles-ssl}. In early SSL methods~\cite{mixmatch-ssl,remixmatch-ssl,meanteacher-ssl-ts,fixmatch-ssl}, one popular solution is to apply consistency regularization, which encourages the model to make consistent predictions under different perturbations, \emph{e.g.,} data augmentations~\cite{meanteacher-ssl-ts}, stochastic regularization~\cite{temporal-ssl,sajjadi2016regularization} and adversarial perturbations~\cite{miyato2018virtual}. Another direction of SSL is pseudo-label based learning~\cite{arazo2020pseudo,pham2019semi}, which uses the predictions of the model as hard labels for semi-supervised training. Recently, researchers have resorted to teacher-student learning and combined the merits of these two methodologies for better SSL~\cite{meanteacher-ssl-ts,mixmatch-ssl,remixmatch-ssl,fixmatch-ssl}. In particular, Mean Teacher~\cite{meanteacher-ssl-ts} applies data augmentation to the student and calculates its consistency loss with the teacher. To improve the training stability, it also employs \textit{exponential moving average} (EMA) to update the teacher's parameters from the student ones, which helps prevent confirmation bias~\cite{unbiasedteacher-ssod-ts}. MixMatch~\cite{mixmatch-ssl} applies $k$ types of different stochastic data augmentations to unlabeled images, and averages their predictions as the final pseudo-labels. Based on MixMatch~\cite{mixmatch-ssl}, ReMixMatch~\cite{remixmatch-ssl} proposes distribution alignment to align the prediction distributions of unlabeled and labeled data. It also proposes strong-weak augmentation strategies to ensure consistency between the teacher and the student. More recently, some methods also aim to filter noisy pseudo-labels by the prediction probability~\cite{fixmatch-ssl}, and to address the class-imbalance problem~\cite{guo2022class}. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{decouple.pdf} \vspace{-2em} \caption{\textbf{Comparison of the default optimization of YOLOv5 and the proposed Decoupled Semi-supervised Optimization (DSO).} The default optimization scheme exacerbates multi-task optimization conflicts~\cite{yolox,wu2020rethinking} during teacher-student learning. The proposed DSO addresses this issue via a structure tweak and task-specific pseudo-labeling. } \label{decouple} \end{figure*} \subsection{Semi-supervised Object Detection} Due to the expensive instance-level annotations, semi-supervised object detection (SSOD) has long been a research hotspot in computer vision. Early SSOD approaches such as CSD~\cite{consistency-based-ssod} and ISD~\cite{jeong2021interpolation-ssod} introduce consistency-based learning schemes into object detection networks. 
Recently, the teacher-student framework~\cite{meanteacher-ssl-ts} has become popular in SSOD. The teacher network is used to produce pseudo-boxes, and the student network is trained with both ground-truth boxes and pseudo-boxes. In particular, STAC~\cite{stac-ssod-ts} is a representative teacher-student based SSOD approach, which divides SSL into two steps, \emph{i.e.}, the pseudo-labeling process for unlabeled data and the re-training step based on the pseudo-labels. Despite its effectiveness, STAC still suffers from critical imbalance issues and overfitting. To address these issues, Unbiased Teacher~\cite{unbiasedteacher-ssod-ts} proposes an end-to-end SSOD method, which introduces EMA~\cite{meanteacher-ssl-ts} to update the parameters of the teacher network from the student one, and applies the focal loss~\cite{lin2017focalloss} to address the issue of error accumulation. In addition to Unbiased Teacher, a set of SSOD approaches~\cite{instant-ssod-ts,tang2021humbleteacher-ssod-ts,softteacher-ssod-ts,mi2022active} have been proposed recently. Among them, most work focuses on strategies for improving pseudo-label quality, \emph{e.g.,} the co-rectify strategy~\cite{instant-ssod-ts}, the detection-specific data ensemble strategy~\cite{tang2021humbleteacher-ssod-ts} and the box jittering approach~\cite{softteacher-ssod-ts}. Different from these works, Active Teacher~\cite{mi2022active} aims to select the optimal labeled examples for SSOD via three metrics, \emph{i.e.,} \textit{difficulty}, \textit{information} and \textit{diversity}. Overall, most existing teacher-student based approaches are designed for the two-stage detector FasterRCNN. Very recently, some advances~\cite{chen2022dense,liu2022unbiased} also explore SSOD for one-stage detection networks like FCOS~\cite{tian2019fcos}. In particular, Liu \textit{et al.}~\cite{liu2022unbiased} propose modeling localization uncertainty to select reliable pseudo-labels. Chen \textit{et al.}~\cite{chen2022dense} improve pseudo-label alignment and quality for one-stage SSOD via three novel designs, \emph{i.e.,} an adaptive filtering strategy, an aggregated teacher and an uncertainty-consistency-regularization strategy. In this paper, we are committed to exploring the teacher-student learning paradigm for advanced one-stage detection networks, \emph{i.e.,} YOLOv5 \cite{yolov5}. Compared to these works, this paper elaborates on the great gap between one-stage and two-stage SSOD in more detail and depth, especially for the popular YOLO series. Meanwhile, we also investigate how to make the advanced training techniques used in one-stage detectors compatible with teacher-student learning. \section{OneTeacher} \subsection{ Overview} The framework of the proposed OneTeacher is depicted in Fig.~\ref{framework}, which consists of two detection networks with the same configurations, namely the \emph{teacher} and the \emph{student}. The teacher network is in charge of generating pseudo-labels, based on which the student one is trained together with the ground-truth labels. To this end, the optimization of the student network is defined by \begin{equation} \mathcal{L} = \mathcal{L}_{sup} + \lambda \cdot \mathcal{L}_{unsup}, \label{eq:semi-loss} \end{equation} where $\lambda$ is a hyper-parameter to adjust the contribution of the unsupervised loss. 
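To make this objective concrete, the following PyTorch-style sketch illustrates one semi-supervised training step: the teacher pseudo-labels an unlabeled batch and the student is optimized with the combined loss of Eq.~\ref{eq:semi-loss}. The helper functions, the box layout, and the confidence threshold are illustrative assumptions rather than the exact OneTeacher implementation.
\begin{verbatim}
import torch

def semi_supervised_step(student, teacher, labeled_batch, unlabeled_batch,
                         optimizer, lam=1.0, conf_thresh=0.4):
    """One illustrative step of L = L_sup + lambda * L_unsup."""
    images_l, targets_l = labeled_batch

    # Supervised loss on labeled data (detection loss of the base detector).
    loss_sup = student.compute_loss(student(images_l), targets_l)  # assumed helper

    # The teacher pseudo-labels the weakly augmented unlabeled images.
    with torch.no_grad():
        pseudo_boxes = teacher.predict(unlabeled_batch.weak)        # assumed helper
        # Keep only confident boxes; column 4 is assumed to be the confidence.
        pseudo_boxes = [b[b[:, 4] > conf_thresh] for b in pseudo_boxes]

    # Unsupervised loss: the student sees the strongly augmented view.
    loss_unsup = student.compute_loss(student(unlabeled_batch.strong), pseudo_boxes)

    loss = loss_sup + lam * loss_unsup
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # The teacher would then be updated from the student via EMA
    # (formalized in the next equation).
    return loss.item()
\end{verbatim}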
During training, the parameters of the teacher network $\theta_t$ are updated from the student ones $\theta_s$ via \textit{exponential moving average} (EMA)~\cite{meanteacher-ssl-ts}, which can be formulated as \begin{equation} \label{equ:train-equ} \begin{aligned} &\theta_s \leftarrow \theta_s - \gamma \frac{\partial (\mathcal{L}_{sup} + \lambda \mathcal{L}_{unsup}) }{\partial\theta_s}, \\ &\theta_{t} \leftarrow \alpha \theta_{t} + (1-\alpha) \theta_{s}. \end{aligned} \end{equation} Here, $\gamma$ denotes the learning rate and $\alpha$ is the EMA coefficient. The use of EMA allows the teacher network to generate stable pseudo-labels during training, thereby alleviating the effects of pseudo-label bias~\cite{unbiasedteacher-ssod-ts}. In practice, the teacher network can be regarded as an ensemble of the student network at different training stages, and it is also used as the target model after training. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{MPL.pdf} \vspace{-2em} \caption{\textbf{Illustration of Multi-view Pseudo-Label Refinement (MPR).} In MPR, the pseudo-labels are first refined by fusing the matched predictions from the augmented view. Global-view filtering is then performed to remove the pseudo-labels of categories whose global prediction score is lower than the threshold. } \label{fig:mpr} \vspace{-1em} \end{figure*} \subsection{ Semi-supervised Learning for One-stage Object Detection} One-stage detection networks often adopt an end-to-end prediction paradigm, which is obviously different from the two-stage ones. Specifically, given a labeled image $I_l \in \mathbb{R}^{H \times W \times 3 }$, a one-stage detection network like the YOLO series~\cite{yolov3,yolov4,yolov5} will output a tensor $t_l \in \mathbb{R}^{h\times w\times (c+5)}$ for the joint multi-task prediction, where $c$ denotes the number of object categories and 5 refers to the confidence score and the coordinates of the bounding box. To this end, the supervised loss $\mathcal{L}_{sup}$ can be defined by \begin{equation} \label{equ:full_sup} \begin{aligned} \mathcal{L}_{sup}(t_l,Y_l)&=\mathcal{L}_{iou}(t_l, Y_l^{coor}) + \mathcal{L}_{conf}(t_l, Y_l^{conf}) \\&+ \mathcal{L}_{cls}(t_l,Y_l^{cls}), \end{aligned} \end{equation} where $Y_l=\{Y_l^{cls}, Y_l^{conf}, Y_l^{coor}\}$ are the labels of categories, confidence and coordinates. $\mathcal{L}_{cls}$ and $\mathcal{L}_{conf}$ are the binary cross entropy losses for category classification and confidence regression, and $\mathcal{L}_{iou}$ denotes the IoU loss for bounding box regression. In terms of the unsupervised loss, a straightforward solution is to select the predicted bounding boxes above a fixed threshold as pseudo-labels, which can be formulated as \begin{equation} \label{equ:full_unsup} \begin{aligned} \mathcal{L}_{unsup}(t_u,Y_u)&=\mathcal{L}_{iou}(t_u, Y_u^{coor}) + \mathcal{L}_{conf}(t_u, Y_{u}^{conf}) \\&+ \mathcal{L}_{cls}(t_u, Y_{u}^{cls}), \end{aligned} \end{equation} where $Y_u=\{Y_u^{cls}, Y_u^{conf}, Y_u^{coor}\}$ are the pseudo-label sets of object categories, confidence and coordinates\footnote{In some methods, bounding box regression is ignored in the unsupervised loss.}. A potential problem is the optimization conflict between regression and classification in one-stage detection, as revealed in \cite{yolox,wu2020rethinking}. This issue becomes more severe in semi-supervised learning. To explain, pseudo-labels are of much lower quality than the ground-truth ones. 
Meanwhile, classification and regression have different requirements for pseudo-labels, which are nevertheless selected by the same threshold in existing methods~\cite{unbiasedteacher-ssod-ts,instant-ssod-ts,stac-ssod-ts}. \subsubsection{Decoupled semi-supervised optimization.} \label{DSO_sec} To alleviate the multi-task optimization conflict, we propose a novel \textit{Decoupled Semi-supervised Optimization} (DSO) scheme for {OneTeacher}, which decouples the joint optimization via a simple branching structure and a task-specific pseudo-labeling strategy. As shown in Fig.~\ref{decouple}, for each image, we decompose the prediction branch into two separate ones, and then obtain the prediction tensors $t_l^{cls} \in \mathbb{R}^{h\times w\times c}$ and $t_l^{reg} \in \mathbb{R}^{h\times w\times 5}$ for classification and regression, respectively. Thus, the supervised loss defined in Eq.~\ref{equ:full_sup} can be reformulated as \begin{equation} \label{equ:supervised-equ-detail} \small \begin{aligned} \mathcal{L}_{sup}(t_l^{cls},t_l^{reg},Y_l)&=\mathcal{L}_{iou}(t_l^{reg}, Y_l^{coor}) + \mathcal{L}_{conf}(t_l^{reg}, Y_l^{conf}) \\&+ \mathcal{L}_{cls}(t_l^{cls},Y_l^{cls}). \end{aligned} \end{equation} This modification can largely avoid conflicts across tasks. Afterwards, we perform task-specific pseudo-labeling for the unsupervised loss. Specifically, given the teacher's predictions for an unlabeled image, we use the product of the confidence probability and the max classification score as the indicator. Based on this indicator, we set two different thresholds $\sigma_{reg}$ and $\sigma_{cls}$ to select pseudo-labels for regression and classification, denoted as $Y_{ur}$ and $Y_{uc}$, respectively. Then, these two tasks are trained with the corresponding pseudo-labels, and the unsupervised loss of {OneTeacher} can be written as \begin{equation} \label{equ:unsupervised-equ-detail} \begin{aligned} \mathcal{L}_{unsup}(t^{cls}_u,t^{reg}_u,Y_{ur},Y_{uc}) &=\mathcal{L}_{conf}(t_u^{reg}, Y_{ur}^{conf}) \\&+ \mathcal{L}_{cls}(t_u^{cls}, Y_{uc}^{cls}). \end{aligned} \end{equation} Here, we follow the setting of \cite{unbiasedteacher-ssod-ts} and discard the unsupervised optimization of bounding box regression. This task-specific pseudo-labeling strategy can flexibly adjust the noise level of different tasks, thereby improving the efficiency of teacher-student learning. In our implementation, we also add a multi-label classification task to the model, which is used in the pseudo-label refinement discussed in Sec.~\ref{MPR}. In this case, the final objective function of \textit{OneTeacher} is defined as \begin{equation} \label{equ:loss-equ-detail-final} \begin{aligned} \mathcal{L}&=\mathcal{L}_{sup}(t_l^{cls},t_l^{reg}, t_l^{gls}, Y_l) \\&+\lambda \cdot \mathcal{L}_{unsup}(t_u^{cls}, t_u^{reg}, t_u^{gls}, Y_{ur}, Y_{uc}), \end{aligned} \end{equation} where $t^{gls}$ is the prediction tensor of the multi-label classification branch. $Y_{ur}$ and $Y_{uc}$ are the pseudo-labels selected by different thresholds. \begin{algorithm}[t] \caption{Pseudo Code of Augmented-view Refinement} \label[Algorithm]{alg:pseudocode_of_aug_view} \begin{algorithmic}[1] \Require Pseudo-labels of the original view $Y_{uo}$ and the augmented view $Y_{ua}$. \Ensure Refined pseudo-labels $Y_u$. 
\ForAll{$y_{uo}^i \in Y_{uo}$} \ForAll{$y_{ua}^j \in Y_{ua}$} \State Calculate the IoU value between $y_{uo}^i$ and $y_{ua}^j$: \State $\text{iou}_{oa}^{j}=\text{IoU}(y_{uo}^i, y_{ua}^j)$ \EndFor \State Select the $y_{ua}^j$ with the largest $\text{iou}_{oa}^{j}$ \If {$\text{iou}_{oa}^{j} > \delta$} \State $y_u^i= \frac{y_{uo}^i+y_{ua}^j}{2}$ \Else \State $y_u^i=y_{uo}^i$ \EndIf \EndFor \\ \Return $\{y_u^{1},...,y_u^{n}\}$ \end{algorithmic} \end{algorithm} \subsection{Multi-view Pseudo-label Refinement} In two-stage networks like FasterRCNN, the teacher's predictions will be screened by the RPN and the ROI head successively to obtain the final pseudo-label set~\cite{unbiasedteacher-ssod-ts,instant-ssod-ts,stac-ssod-ts}. This multi-step selection can somewhat ensure the quality of pseudo-labels, but it is not applicable to one-stage models due to their different prediction paradigms. A compromise solution for one-stage models is to directly use their confidence scores to determine pseudo-labels, which is insufficient for evaluating the quality of pseudo-labels. \label{MPR} To address this issue, we propose a novel \textit{ Multi-view Pseudo-label Refinement} (MPR) scheme, which consists of two main processes, namely \textit{augmented-view refinement} and \textit{global-view filtering}. The augmented-view refinement process of MPR is summarized in Algorithm \ref{alg:pseudocode_of_aug_view}. Specifically, given an unlabeled image $I_u$, we first apply augmented-view refinement to adjust its pseudo-label information. As shown in Fig.~\ref{fig:mpr}, the teacher network predicts the bounding boxes of $I_u$ and its flipped augmented view, denoted as $Y_{uo}$ and $Y_{ua}$, respectively. Afterwards, we compare each bounding box in $Y_{uo}$ with the ones in $Y_{ua}$ by \textit{IoU} value, and select the matched bounding boxes from the two views, denoted as $(y_{uo}, y_{ua})$. In practice, this process consists of two steps. Firstly, given a $y_{uo}^i$, we find the $y_{ua}^j$ that has the max IoU value with $y_{uo}^i$, which can be formulated as: \begin{equation} \begin{aligned} j=\mathop{\arg \max}_{y_{ua}^j \in Y_{ua}} \text{IoU}(y_{ua}^j,y_{uo}^i). \end{aligned} \end{equation} Then, if the IoU value is larger than a threshold $\delta$, they are regarded as a pair $(y_{uo}^i,y_{ua}^j)$. In practice, the threshold is set to 0.45. Lastly, the final pseudo box $y_u^i$ averages the information of $(y_{uo}^i, y_{ua}^j)$, including confidence scores, classification probabilities and bounding box coordinates, which can be written as \begin{equation} \begin{aligned} y_u^i=\frac{y_{uo}^i+ y_{ua}^j}{2}. \end{aligned} \end{equation} The intuition behind this augmented-view refinement is that the ensemble of predictions can alleviate the bias of the teacher to some extent, thereby making the final pseudo-labels more reliable. \begin{table}[t] \centering \caption{\textbf{The implementation of YOLOv5 for OneTeacher. The first block shows different training settings. } ``Opt.'' denotes the optimization strategies. ``ImageNet init'' denotes the pre-training on ImageNet. ``BR Aug.'' and ``BIR Aug.'' denote the box-relevant and box-irrelevant data augmentations, respectively. 
} \setlength \tabcolsep{4pt} \begin{tabular}{ll|ccc} \toprule[1.2pt] &{Method} &FRCNN & YOLOv5 & OneTeacher \\ \midrule[0.8pt] \multirow{3}{*}{Opt.} &{EMA~\cite{meanteacher-ssl-ts} } & $\times$ & $\checkmark$ & $\checkmark$ \\ &{ImageNet init} & $\checkmark$ & $\times$ & $\times$ \\ &{Learning rate decay} & step & cosine & constant \\ \midrule[0.8pt] \multirow{3}{*}{BR Aug.} & Flip-lr & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ & Mosaic~\cite{yolov4} & $\times$ & $\checkmark$ & $\checkmark$ \\ &Scale Jitter & $\times$ & $\checkmark$ & $\checkmark$ \\ \midrule[0.8pt] \multirow{6}{*}{BIR Aug.} & HSV translating & $\times$ & $\checkmark$ & $\checkmark$ \\ & Image translating & $\times$ & $\checkmark$ & $\checkmark$ \\ & ColorJitter~\cite{yolov3} & $\times$ & $\times$ & $\checkmark$ \\ & Grayscale & $\times$ & $\times$ & $\checkmark$ \\ & GaussianBlur & $\times$ & $\times$ & $\checkmark$ \\ & RandomErasing~\cite{zhong2020random} & $\times$ & $\times$ & $\checkmark$ \\ \bottomrule[1.2pt] \end{tabular} \label{tab_adap} \end{table} Inspired by recent SSOD methods~\cite{zhang2021semi}, we also introduce an additional multi-label classification task to enhance MPR via global-view filtering. Concretely, the teacher network outputs an image-level multi-class probability distribution. If the global probability of a specific category is lower than a threshold $\sigma_g$, we filter out the pseudo-boxes of this class. Our assumption is that the category recognition of local pseudo bounding boxes should be consistent with the global one; otherwise, these pseudo-boxes are often of inferior quality. Overall, the proposed MPR can filter out a vast number of low-quality bounding boxes and greatly improve the quality of pseudo-labels. After MPR, we apply task-specific thresholds to select the final pseudo-label sets for classification and regression, as discussed in Sec.~\ref{DSO_sec}. \subsection{ Implementation on YOLOv5} \label{imp} To validate the proposed {OneTeacher}, we further apply it to YOLOv5~\cite{yolov5}, one of the most advanced one-stage detection networks. However, it is not feasible to directly apply existing teacher-student learning based SSOD methods~\cite{stac-ssod-ts,unbiasedteacher-ssod-ts} to YOLOv5, which is mainly attributed to the conflicts between the training techniques used by YOLOv5 and those used in SSOD. This problem is usually ignored in existing SSOD approaches since the default implementation of two-stage detection networks like Faster-RCNN is relatively simple~\cite{fasterrcnn}. However, in real-world applications, a set of training techniques is usually used to improve model performance and robustness. The first problem encountered is the data augmentation setup. In YOLOv5~\cite{yolov5} and other advanced one-stage detectors~\cite{yolov4,yolox}, strong data augmentations like \textit{color jittering}~\cite{yolov3} have become the \emph{de facto} training setting, which, however, are also used as a solution to enforce the consistency learning of the student network in SSOD~\cite{unbiasedteacher-ssod-ts,instant-ssod-ts}. Abandoning these data augmentations will inevitably degrade the performance of YOLOv5, especially considering that YOLOv5 is trained from scratch without ImageNet pre-training. A direct solution is to keep the data augmentation settings unchanged and deploy additional augmentations for the student network. However, the teacher will then inevitably produce lower-quality pseudo-labels when receiving images with strong perturbations. 
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{conf_distribution.pdf} \caption{\textbf{Confidence score distribution of OneTeacher on COCO (10\%). } The blue and red points refer to the correct and wrong predictions, respectively. A threshold of 0.4 can achieve the best trade-off between quality and quantity. } \label{conf_distri_fig} \end{figure} \begin{table*}[t] \centering \caption{\textbf{Comparison of the proposed {OneTeacher} and other SSOD methods on {COCO}.} The ``supervised'' denotes the fully supervised baseline. We use {YOLOv5-s} as the base model and report the mAP scores on COCO \textit{val2017}.} \setlength\tabcolsep{10pt} \vspace{-2mm} \scalebox{0.95}[0.95]{ \begin{tabular}{lcccccccccc} \toprule[1.2pt] \multicolumn{11}{c}{COCO} \\ \midrule[0.8pt] \multicolumn{1}{c|}{\multirow{2}{*}{Model}} & \multicolumn{2}{c|}{1\%} & \multicolumn{2}{c|}{2\%} & \multicolumn{2}{c|}{5\%} & \multicolumn{2}{c|}{10\%} & \multicolumn{2}{c}{20\%} \\ \multicolumn{1}{c|}{} & mAP & \multicolumn{1}{c|}{AP50} & mAP & \multicolumn{1}{c|}{AP50} & mAP & \multicolumn{1}{c|}{AP50} & mAP & \multicolumn{1}{c|}{AP50} & mAP & AP50 \\ \midrule[0.8pt] \multicolumn{1}{l|}{Supervised~\cite{yolov5}} & 5.2\mypm{+0.0} & \multicolumn{1}{c|}{10.2\mypm{+0.0}} & 9.1\mypm{+0.0} & \multicolumn{1}{c|}{17.7\mypm{+0.0}} & 15.4\mypm{+0.0} & \multicolumn{1}{c|}{28.5\mypm{+0.0}} & 21.8\mypm{+0.0} & \multicolumn{1}{c|}{37.4\mypm{+0.0}} & 26.7\mypm{+0.0} & 44.2\mypm{+0.0} \\ \midrule[0.8pt] \multicolumn{1}{l|}{STAC~\cite{stac-ssod-ts}} & 6.4\mypm{+1.2} & \multicolumn{1}{c|}{12.9\mypm{+2.7}} & 10.6\mypm{+1.5} & \multicolumn{1}{c|}{20.4\mypm{+2.7}} & 14.2\mypm{-1.2} & \multicolumn{1}{c|}{27.5\mypm{-1.0}} & 18.3\mypm{-3.5} & \multicolumn{1}{c|}{34.7\mypm{-2.7}} & 21.7\mypm{-5.0} & 40.0\mypm{-4.2} \\ \multicolumn{1}{l|}{STAC~\cite{stac-ssod-ts}+EMA} & 8.2\mypm{+3.0} & \multicolumn{1}{c|}{15.2\mypm{+5.0}} & 14.1\mypm{+5.0} & \multicolumn{1}{c|}{25.5\mypm{+7.8}} & 20.7\mypm{+5.3} & \multicolumn{1}{c|}{36.1\mypm{+7.6}} & 25.6\mypm{+3.8} & \multicolumn{1}{c|}{43.0\mypm{+5.6}} & 29.3\mypm{+2.6} & 48.1\mypm{+3.9} \\ \multicolumn{1}{l|}{Unbiased Teacher~\cite{unbiasedteacher-ssod-ts}} & 8.3\mypm{+3.1} & \multicolumn{1}{c|}{14.8\mypm{+4.6}} & 14.0\mypm{+4.9} & \multicolumn{1}{c|}{25.5\mypm{+7.8}} & 20.7\mypm{+5.3} & \multicolumn{1}{c|}{35.4\mypm{+6.9}} & 25.3\mypm{+3.5} & \multicolumn{1}{c|}{42.0\mypm{+4.6}} & 30.2\mypm{+3.5} & 48.6\mypm{+4.4} \\ \midrule[0.8pt] \multicolumn{1}{l|}{OneTeacher} & \textbf{8.6}\mypm{+3.4} & \multicolumn{1}{c|}{\textbf{15.4}\mypm{+5.2}} & \textbf{14.7}\mypm{+5.6} & \multicolumn{1}{c|}{\textbf{25.7}\mypm{+8.0}} & \textbf{22.9}\mypm{+7.5} & \multicolumn{1}{c|}{\textbf{36.7}\mypm{+8.2}} & \textbf{29.1}\mypm{+7.3} & \multicolumn{1}{c|}{\textbf{45.3}\mypm{+7.9}} & \textbf{33.2}\mypm{+6.5} & \textbf{50.9}\mypm{+6.7} \\ \bottomrule[1.2pt] \end{tabular}} \label{sota} \end{table*} \begin{table}[t] \centering \caption{\textbf{Comparison of {OneTeacher} and the baselines on Pascal VOC.} COCO20 denotes the images of the same categories as VOC in COCO \emph{train2017}.} \setlength\tabcolsep{5pt} \vspace{-2mm} { \begin{tabular}{lcccc} \toprule[1.2pt] \multicolumn{5}{c}{VOC} \\ \midrule[0.8pt] \multicolumn{1}{c|}{Methods} & Labeled & \multicolumn{1}{c|}{Unlabeled} & mAP & AP50 \\ \midrule[0.8pt] \multicolumn{1}{l|}{Supervised} & VOC07 & \multicolumn{1}{c|}{None} & 41.3\mypm{+0.0} & 66.0\mypm{+0.0} \\ \midrule[0.8pt] \multicolumn{1}{l|}{STAC~\cite{stac-ssod-ts}} & VOC07 & \multicolumn{1}{c|}{VOC12} & 
38.9\mypm{-2.4} & 66.8\mypm{+1.8} \\ \multicolumn{1}{l|}{STAC~\cite{stac-ssod-ts}+EMA} & VOC07 & \multicolumn{1}{c|}{VOC12} & 44.5\mypm{+3.2} & 70.5\mypm{+4.5} \\ \multicolumn{1}{l|}{UbTeacher~\cite{unbiasedteacher-ssod-ts}} & VOC07 & \multicolumn{1}{c|}{VOC12} & 42.1\mypm{+0.8} & 68.0\mypm{+2.0} \\ \multicolumn{1}{l|}{OneTeacher(ours)} & VOC07 & \multicolumn{1}{c|}{VOC12} & \textbf{45.3}\mypm{+4.0} & \textbf{70.8}\mypm{+4.8} \\ \midrule[0.8pt] \multicolumn{1}{l|}{STAC~\cite{stac-ssod-ts}} & VOC07 & \multicolumn{1}{c|}{VOC12+COCO20} & 38.4\mypm{-2.9} & 66.4\mypm{+0.4} \\ \multicolumn{1}{l|}{STAC+EMA~\cite{stac-ssod-ts}} & VOC07 & \multicolumn{1}{c|}{VOC12+COCO20} & 44.7\mypm{+3.4} & 70.8\mypm{+4.8} \\ \multicolumn{1}{l|}{UbTeacher~\cite{unbiasedteacher-ssod-ts}} & VOC07 & \multicolumn{1}{c|}{VOC12+COCO20} & 44.2\mypm{+2.9} & 70.2\mypm{+4.2} \\ \multicolumn{1}{l|}{OneTeacher(ours)} & VOC07 & \multicolumn{1}{c|}{VOC12+COCO20} & \textbf{46.1}\mypm{+4.8} &\textbf{ 71.4}\mypm{+5.4} \\ \bottomrule[1.2pt] \end{tabular}} \label{voc} \end{table} To address this issue, we categorize data augmentations into two groups, \emph{i.e.}, the \textit{box-relevant} and \textit{box-irrelevant} augmentations, as shown in Tab.~\ref{tab_adap}. Particularly, box-relevant augmentations, such as \textit{flip} and \textit{mosaic}~\cite{yolov4}, can augment the bounding box information, while having less impact on the image representation. On the contrary, box-irrelevant methods will not affect the ground-truth boxes but strongly perturb image content, such as \textit{color transformation} and \textit{Gaussian blur}. Therefore, we keep box-relevant methods as the weak data augmentation for the teacher, and use both box-relevant and box-irrelevant methods as the strong data augmentation for the student. This strategy can minimize the perturbations to the teacher network, while preserving the original settings of YOLOv5. In addition, we also adjust some common hyper-parameters in teacher-student learning for YOLOv5. Specifically, we lower the threshold for pseudo-labeling to 0.4, which is often set as a high value in two-stage SSOD~\cite{unbiasedteacher-ssod-ts,instant-ssod-ts}, \emph{e.g.}, 0.7. This change is attributed to the noisy pseudo-labeling issue in one-stage detection, where the model often fails to provide high-confidence pseudo labels in the initial stage. As shown in Fig.~\ref{conf_distri_fig}, we analyze the pseudo-label distributions, which show that the threshold of 0.4 can achieve a good trade-off between the quality and quantity of pseudo-labels. Meanwhile, other hyper-parameters like the weight of focal loss~\cite{lin2017focalloss} $\lambda$ are also set according to the training status of one-stage SSOD. Notably, these adaptations are shared with other SSOD methods for fair comparisons. Nevertheless, we still notice that it is difficult to maximize the effectiveness of one-stage teacher-student learning based on these adaptations alone. \vspace{-1em} \section{Experiments} \subsection{Datasets and Metric} We validate the proposed OneTeacher on two object detection datasets, namely COCO~\cite{coco} and Pascal VOC~\cite{voc}. Specifically, COCO contains three splits, namely \textit{train2017}, \textit{val2017} and \textit{test2017}. We build the label sets from \emph{train2017} with percentages of 1\%, 2\%, 5\%, 10\% and 20\% labeled data, respectively, and the remaining images of \emph{train2017} are used as the unlabeled sets. In all COCO experiments, we evaluate the model on \textit{val2017}. 
For VOC, we use the VOC07 \textit{train}+\textit{val} and the VOC012 \textit{train}+\textit{val} as the labeled and unlabeled datasets, respectively. Based on these settings, we further conduct the experiments using additional \textit{COCO20} as the unlabeled datasets. The models are evaluated on the VOC07 \textit{test}. For both COCO and VOC, we use \textit{AP{\tiny{50:95}}}, also known as \textit{mAP}, as the metric. \subsection{Implementation Details} \label{detail} \noindent\textbf{Fully-supervised baseline.} We use YOLOv5-s~\cite{yolov5} as our base model, and its backbone is randomly initialized\footnote{Note that YOLOv5 is trained from scratch without ImageNet~\cite{imagenet} pre-training.}. During training, the learning rate is set to 0.01 with a momentum of 0.937 and a weight decay of 0.0005. The batch size is 64, and the total training steps are 500$k$, of which 2$k$ steps are for \textit{warm-up}. We also use \textit{EMA}~\cite{meanteacher-ssl-ts} to temporally ensemble the network parameters, and the coefficient is set to 0.9999. We keep all data augmentations in YOLOv5, including random horizontal flip, mosaic~\cite{yolov4}, random image scale, random image translate and HSV color-space augmentation. \noindent\textbf{OneTeacher.} In {OneTeacher}, we remove the cosine learning rate decay, while keeping the other training configurations the same as the fully-supervised baseline. For semi-supervised learning, we use SGD as the optimizer with a learning rate of 0.01, a momentum of 0.937 and a weight decay of 0.0005. The EMA coefficient is set to 0.9996. Training takes 500$k$ steps with 2$k$ warm-up steps, and the number of \textit{burn-up} steps~\cite{unbiasedteacher-ssod-ts} for the student network is 3$k$. The batch size for labeled data and unlabeled data are all set to 64. As shown in Tab.~\ref{tab_adap}, we use random horizontal flip, mosaic, random image scale and random image translate as the weak data augmentation. The strong data augmentation for the student network includes the weak ones and color jittering, HSV color-space augmentation, grayscale, gaussian blur and cutout. By default, the pseudo-labeling thresholds for regression and classification are set to 0.4 and 0.5, respectively. For MPR, the global threshold $\sigma_g$ is set to 0.25. For the unsupervised loss, we employ the focal loss~\cite{lin2017focalloss} with $\alpha$ = 0.25 and $\gamma$ = 1.5. \noindent\textbf{The compared methods.} In addition to the supervised baseline, we also compare our method with the latest two-stage SSOD approach called \textit{Unbiased Teacher}~\cite{unbiasedteacher-ssod-ts} and a representative teacher-student method named by \textit{STAC}~\cite{stac-ssod-ts}. We also apply the aforementioned adaptions in Sec.~\ref{imp} to these baselines to make them applicable to YOLOv5. The other settings, such as learning rate, training steps and data augmentations, are the same as {OneTeacher}. The confidence threshold $\sigma$ in the pseudo-labeling is set to 0.4 for both baselines. Since YOLOv5 uses EMA during training, we also keep this setting in the student network of STAC, denoted as \emph{\textbf{STAC+EMA}}. 
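The exponential moving average update of the teacher and the task-specific pseudo-label filtering described above can be sketched as follows; the function names and the scores used for thresholding are illustrative and do not reproduce the YOLOv5-specific implementation details.
\begin{verbatim}
import torch

def ema_update(teacher, student, decay=0.9996):
    # Exponential moving average of the student parameters into the teacher.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def split_pseudo_labels(boxes, scores, t_reg=0.4, t_cls=0.5):
    # Boxes above t_reg supervise the regression branch, while the stricter
    # threshold t_cls selects the boxes used for the classification branch.
    return boxes[scores > t_reg], boxes[scores > t_cls]
\end{verbatim}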
\begin{table}[t] \centering \caption{\textbf{Comparisons of OneTeacher and baselines with ImageNet pre-training on COCO (1\%) and VOC.} For VOC, we use VOC07 \textit{train}+\textit{val} as the labeled data and VOC12 \textit{train}+\textit{val} as the unlabeled data.} \vspace{-1em} \begin{tabular}{lccccc} \toprule[1.2pt] \multicolumn{6}{c}{COCO (1\%)} \\ \midrule[0.8pt] \multicolumn{1}{l|}{Methods} & mAP & AP50 & APl & APm & APs \\ \midrule[0.8pt] \multicolumn{1}{l|}{Supervised} & 8.4\mypm{+0.0} & 17.0\mypm{+0.0} & 12.1\mypm{+0.0} & 9.1\mypm{+0.0} & 3.0\mypm{+0.0} \\ \midrule[0.8pt] \multicolumn{1}{l|}{UbTeacher~\cite{unbiasedteacher-ssod-ts}} & 12.7\mypm{+4.3} & 23.7\mypm{+6.7} & 16.9\mypm{+4.8} & 13.5\mypm{+4.4} & 4.8\mypm{+1.8} \\ \multicolumn{1}{l|}{OneTeacher (ours)} & \textbf{16.0}\mypm{+7.6} & \textbf{28.2}\mypm{+11.2} & \textbf{21.8}\mypm{+9.7} & \textbf{17.2}\mypm{+8.1} & \textbf{5.8}\mypm{+2.8} \\ \bottomrule \toprule \multicolumn{6}{c}{VOC} \\ \midrule[0.8pt] \multicolumn{1}{l|}{Methods} & mAP & AP50 & APl & APm & APs \\ \midrule[0.8pt] \multicolumn{1}{l|}{Supervised} & 43.5\mypm{+0.0} & 71.2\mypm{+0.0} & 48.6\mypm{+0.0} & 31.0\mypm{+0.0} & 11.3\mypm{+0.0} \\ \midrule[0.8pt] \multicolumn{1}{l|}{UbTeacher~\cite{unbiasedteacher-ssod-ts}} & 47.7\mypm{+4.2} & 75.1\mypm{+3.9} & 52.7\mypm{+4.1} & 33.7\mypm{+2.7} & \textbf{14.9}\mypm{+3.6} \\ \multicolumn{1}{l|}{OneTeacher (ours)} & \textbf{50.0}\mypm{+6.5} & \textbf{76.1}\mypm{+4.9} & \textbf{55.8}\mypm{+7.2} & \textbf{36.2}\mypm{+5.2} & {14.4}\mypm{+3.1} \\ \bottomrule[1.2pt] \end{tabular} \label{IN} \end{table} \begin{table}[t] \centering \caption{\textbf{Performance comparison of YOLOv5-s, YOLOv5-m and YOLOv5-x, which have different parameter scales. } OneTeacher is more effective for larger models. 
} \vspace{-1em} \setlength\tabcolsep{6pt} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ccccc} \toprule[1.2pt] \multicolumn{4}{c}{COCO (10\%)} \\ \midrule[0.8pt] \multicolumn{1}{l|}{Model} & \multicolumn{1}{l|}{Params} & \multicolumn{1}{c|}{Methods} & mAP & AP50 \\ \midrule[0.8pt] \multicolumn{1}{c|}{\multirow{3}{*}{YOLOv5-s}} & \multicolumn{1}{c|}{\multirow{3}{*}{7.3M}} & \multicolumn{1}{c|}{Supervised} & 21.8\mypm{+0.0} & 37.4\mypm{+0.0} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Unbiased Teacher~\cite{unbiasedteacher-ssod-ts}} & 25.3\mypm{+3.5} & 42.0\mypm{+2.6} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{OneTeacher (ours)} & \textbf{29.1}\mypm{+8.2} & \textbf{45.3}\mypm{+7.9} \\ \midrule[0.8pt] \multicolumn{1}{c|}{\multirow{3}{*}{YOLOv5-m}} &\multicolumn{1}{c|}{\multirow{3}{*}{21.4M}}& \multicolumn{1}{c|}{Supervised} & 22.5\mypm{+0.0} & 37.9\mypm{+0.0} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Unbiased Teacher~\cite{unbiasedteacher-ssod-ts}} & 28.4\mypm{+5.9} & 44.3 \mypm{+6.4} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{OneTeacher (ours)} & \textbf{33.3}\mypm{+10.8} & \textbf{49.7}\mypm{+11.8} \\ \midrule[0.8pt] \multicolumn{1}{c|}{\multirow{3}{*}{YOLOv5-x}} & \multicolumn{1}{c|}{\multirow{3}{*}{87.7M}}& \multicolumn{1}{c|}{Supervised} & 23.7\mypm{+0.0} & 38.8\mypm{+0.0} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Unbiased Teacher~\cite{unbiasedteacher-ssod-ts}} & 36.5\mypm{+14.8} & 52.1\mypm{+16.3} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{OneTeacher (ours)} & \textbf{38.9}\mypm{+17.2} & \textbf{55.3}\mypm{+19.5} \\ \bottomrule[1.2pt] \end{tabular}} \label{yolox} \end{table} \subsection{Experimental Results} \subsubsection{Comparison with existing methods} We first compare {OneTeacher} with the supervised baseline and existing SSOD methods on COCO~\cite{coco} and VOC~\cite{voc}, of which results are given in Tab.~\ref{sota} - \ref{yolox}. \label{sota_dis} Tab.~\ref{sota} shows the performance comparison on COCO at different label scales. We can observe that {OneTeacher} achieves distinct performance gains over the supervised baseline, \emph{e.g.,} +33.5\% (relative improvement) on 10\% COCO labeled data. Such an improvement is even more significant than that of two-stage SSOD~\cite{softteacher-ssod-ts,unbiasedteacher-ssod-ts,stac-ssod-ts}, \emph{e.g.}, +32\% by Unbiased Teacher on FasterRCNN~\cite{fasterrcnn}. Compared to existing SSOD methods, the advantage of {OneTeacher} is also very obvious, \emph{e.g.}, up to +15\% than Unbiased Teacher (UbTeacher) and up to +59\% than STAC. Meanwhile, we also notice that these SSOD baselines do not perform as well on YOLOv5 as they do on FasterRCNN~\cite{unbiasedteacher-ssod-ts,stac-ssod-ts}. STAC, in particular, performs even worse than the supervised scheme when deploying its default two-stage setup. These results show that YOLOv5 is a strong and challenging base model, whose training techniques partially offset the benefit of SSOD. Meanwhile, we also notice that when using the less COCO labeled data, the benefits of OneTeacher and other SSOD methods become relatively smaller, as shown in Tab.~\ref{sota}. We attribute this problem to the random parameter initialization of YOLOv5~\cite{yolov5}. Without ImageNet pre-training, YOLOv5 is prone to over-fitting to a small amount of labeled data. 
In this case, we also provide the experimental results with ImageNet pre-training in Tab.~\ref{IN}, which can prove the effectiveness of {OneTeacher} on smaller datasets. From Tab.~\ref{IN}, we can see that OneTeacher achieves significant performance gains over baselines on less label data, \emph{e.g.,} +3.3 mAP than UbTeacher on COCO (1\%). On VOC, the merits of OneTeacher are also obvious, \emph{e.g.,} +2.3 mAP than UbTeacher. These results well confirm the effectiveness of OneTeacher with ImageNet pre-training. Although OneTeacher works well with ImageNet pre-training, we still report the results with the default random initialization setting in our paper, which is more challenging. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{MPR_ablation.pdf} \vspace{-2em} \caption{\textbf{Comparison of OneTeacher with and without MPR in terms of the averaged IoU score (a), accuracy (b) and number (c) of pseudo-labels on COCO (10\%).} MPR can significantly improve the quality of pseudo-labels. } \label{MPR_fig} \end{figure*} \begin{table}[t] \centering \caption{\textbf{Ablation Study of Multi-view Pseudo-Label Refinement (MPR) and Decoupled Semi-supervised Optimization (DSO) on {COCO (10\%)}. } } \footnotesize { \setlength \tabcolsep{22pt} \begin{tabular}{cccc} \toprule[1.2pt] \multicolumn{4}{c}{COCO (10\%)} \\ \midrule[0.8pt] \multicolumn{1}{c|}{MPR} & \multicolumn{1}{c|}{DSO} & mAP & AP50 \\ \midrule[0.8pt] \multicolumn{1}{c|}{$\times$} & \multicolumn{1}{c|}{$\times$} & 25.3 & 42.0 \\ \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{$\times$} & 26.1 & 43.1 \\ \multicolumn{1}{c|}{$\times$} & \multicolumn{1}{c|}{$\checkmark$} & 28.1 & 44.0 \\ \midrule[0.8pt] \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{$\checkmark$} & 29.1 & 45.3 \\ \bottomrule[1.2pt] \end{tabular} } \label{ablation} \end{table} \begin{table}[t] \centering \caption{\textbf{Effects of different pseudo-labeling thresholds in decoupled semi-supervised optimization.} $t_r$ and $t_c$ are thresholds for regression and classification, respectively. } \footnotesize { \setlength \tabcolsep{10pt} \begin{tabular}{ccccccc} \toprule[1.2pt] \multicolumn{7}{c}{COCO (10\%)} \\ \midrule[0.8pt] \multicolumn{1}{c|}{$t_r$} & \multicolumn{1}{c|}{$t_c$} & mAP & AP50 & APl & APm & APs \\ \midrule[0.8pt] \multicolumn{1}{c|}{0.2} & \multicolumn{1}{c|}{0.3} & 27.7 & 43.7 & 36.3 & 30.2 & 14.3 \\ \multicolumn{1}{c|}{0.2} & \multicolumn{1}{c|}{0.4} & 27.8 & 43.5 & 36.6 & 30.6 & 14.7 \\ \multicolumn{1}{c|}{0.2} & \multicolumn{1}{c|}{0.5} & 28.3 & 44.4 & 36.7 & 31.7 & 14.2 \\ \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c|}{0.3} & 27.9 & 43.7 & 36.8 & 30.3 & 14.8 \\ \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c|}{0.4} & 28.8 & 45.0 & 37.6 & 31.7 & 15.1 \\ \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c|}{0.5} & 29.0 & 45.4 & 37.6 & 32.4 & 14.7 \\ \multicolumn{1}{c|}{0.4} & \multicolumn{1}{c|}{0.4} & 28.1 & 43.9 & 37.3 & 30.5 & 14.3 \\ \multicolumn{1}{c|}{\textbf{0.4}} & \multicolumn{1}{c|}{\textbf{0.5}} & \textbf{29.1} & \textbf{45.3} & \textbf{38.8} & \textbf{31.8} & \textbf{14.6 }\\ \multicolumn{1}{c|}{0.5} & \multicolumn{1}{c|}{0.3} & 28.7 & 44.9 & 38.0 & 31.5 & 14.8 \\ \multicolumn{1}{c|}{0.5} & \multicolumn{1}{c|}{0.4} & 28.6 & 44.5 & 37.7 & 31.3 & 15.0 \\ \bottomrule[1.2pt] \end{tabular}} \label{thre} \end{table} \begin{table}[t] \centering \caption{\textbf{Comparisons of our design with other candidates. 
AR denotes the augmented-view refinement in MPR.} The first block compares the default augmentation strategy of UbTeacher with our adaptation. The second block compares the common flip ensemble~\cite{tang2021humbleteacher-ssod-ts} with AR. } \setlength\tabcolsep{20pt} {\begin{tabular}{l|cc} \toprule[1.2pt] \multicolumn{3}{c}{COCO (10\%)} \\ \midrule[0.8pt] \multirow{1}{*}{Settings} & \multirow{1}{*}{mAP} & \multirow{1}{*}{AP50} \\ \midrule[0.8pt] +Default Aug~\cite{yolov5} & 12.6 & 24.1 \\ \multicolumn{1}{l|}{+Adapted Aug (Ours)} & 26.1 & \multicolumn{1}{c}{43.1} \\ \midrule[0.8pt] +Flip Ensemble~\cite{tang2021humbleteacher-ssod-ts} & 24.8 & 41.4 \\ +AR (ours) & 26.1 & 43.1 \\ \bottomrule[1.2pt] \end{tabular}} \label{ablationv2} \end{table} The results of VOC are given in Tab.~\ref{voc}. Compared to COCO, VOC has fewer object categories and the visual scenes involved are relatively simpler. In this case, the fully supervised baseline can already achieve good performance with limited label information, \emph{i.e.,} 41.3 mAP. We also observe that the performance gains of UbTeacher are not significant on VOC07+12, \emph{i.e.,} +0.8 mAP against the supervised baseline. We conjecture that the reasons are twofold. Firstly, the fully supervised baseline is highly competitive due to its advanced training techniques. Secondly, these SSOD baselines still suffer from the problems of pseudo-label noise and optimization conflict. In contrast to these SSOD baselines, OneTeacher still achieves a clear gain over the supervised baseline, \emph{i.e.}, +4.8 mAP. We further apply {OneTeacher} to a larger one-stage network, \emph{i.e.,} YOLOv5-x, whose results are reported in Tab.~\ref{yolox}. A first observation is that YOLOv5-x is prone to over-fitting on a small label set, resulting in worse performance than YOLOv5-s. However, when {OneTeacher} is applied, the performance of YOLOv5-x is boosted from 21.7 mAP to 38.9 mAP, which is a much more significant improvement than that of YOLOv5-s. Additionally, {OneTeacher} also outperforms {Unbiased Teacher}~\cite{unbiasedteacher-ssod-ts} on YOLOv5-x. These results strongly validate the generalization ability of {OneTeacher} on large detection networks. Overall, these results not only show the challenges of YOLOv5, but also confirm the effectiveness of OneTeacher for one-stage SSOD. \subsubsection{Ablation study.} We further ablate two key designs of OneTeacher, \emph{i.e.}, \textit{Multi-view Pseudo-label Refinement} (MPR) and \textit{Decoupled Semi-supervised Optimization} (DSO) on COCO, whose results are given in Tab.~\ref{ablation}. We find that both of these two schemes are beneficial to model performance. Among them, the benefit of DSO is the most significant, which further confirms the issue of multi-task optimization conflict we identified. Meanwhile, when combining MPR and DSO, the performance can be improved by up to 15\%, suggesting the validity of these two schemes. To better understand the impact of MPR and DSO on semi-supervised learning, we further visualize their training processes in Fig.~\ref{MPR_fig} and Fig.~\ref{DSO_fig}. In Fig.~\ref{MPR_fig}, we compare the quality of pseudo-boxes of {OneTeacher} with and without MPR. The first observation is that with the help of MPR, the quality of pseudo-boxes is significantly improved, especially in the initial training phase. For instance, at about 4$k$ steps, the averaged IoU and accuracy scores are almost doubled.
Meanwhile, the number of high-quality pseudo-boxes is also noticeably increased. These results strongly confirm the effectiveness of MPR in tackling the low-quality pseudo-label problem of one-stage SSOD. Fig.~\ref{DSO_fig} shows that DSO can greatly improve the training efficiency of OneTeacher, which also helps the base model achieve better performance. In Tab.~\ref{thre}, we also validate the effects of different pseudo-labeling thresholds in DSO. The first observation from this table is that the optimal thresholds for YOLOv5 are much smaller than those for Faster-RCNN, \emph{e.g.}, 0.4 \emph{vs.} 0.7, reflecting the difference between one-stage and two-stage SSODs. More importantly, we can also observe that setting different pseudo-labeling thresholds for regression and classification is indeed beneficial for one-stage SSOD, \emph{e.g.}, +0.9 mAP, suggesting that these two tasks in YOLOv5 have different tolerances to pseudo-label noise. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{DSO_ablation.pdf} \vspace{-1em} \caption{\textbf{The training process of OneTeacher with and without DSO.} DSO can address the conflicts of the coupled multi-task optimization and significantly improve the efficiency of semi-supervised learning. } \label{DSO_fig} \vspace{-1em} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=1\textwidth]{vis3.pdf} \vspace{-2em} \caption{\textbf{Visualizations of the pseudo-labels generated by OneTeacher with and without MPR.} With the help of MPR, OneTeacher can obtain more high-quality pseudo-labels. } \label{vis} \end{figure*} Tab.~\ref{ablationv2} reports the results of the alternative designs for OneTeacher. In the first block, we compare two types of data augmentation settings for YOLOv5. We observe that compared to the default augmentation scheme used in two-stage SSODs like UbTeacher~\cite{unbiasedteacher-ssod-ts}, our new deployment for YOLOv5 greatly improves the model performance, \emph{i.e.,} +13.5 mAP. This result indicates that the original augmentations of YOLOv5 greatly contribute to its performance. In this case, the weak augmentation setting of UbTeacher, which does not include YOLOv5's augmentation methods, significantly degrades the SSOD performance of YOLOv5. In Tab.~\ref{ablationv2}, we also compare the augmented-view refinement (AR) in MPR with its alternative called \textit{Flip Ensemble}~\cite{tang2021humbleteacher-ssod-ts}, which refines pseudo-labels by directly averaging the predictions of the original view and the flipped view. It can be seen that the benefit of Flip Ensemble is much less significant than that of our augmented-view refinement. One possible reason is that this approach ensembles the predictions of all views without comparison and filtering, thereby introducing more label noise to one-stage SSOD. Instead, the proposed AR only fuses the confident predictions from the two views, which can effectively improve the quality of pseudo-labels. \subsubsection{Qualitative Analysis} We visualize the pseudo-labels produced by \textit{OneTeacher} with and without MPR in Fig.~\ref{vis}. From these examples, we first observe that MPR can effectively filter the incorrect detections, \emph{e.g.}, the ``\textit{dining table}'' in Exp. (b). Meanwhile, the pseudo-boxes with incorrect category predictions can also be refined by MPR, \emph{e.g.}, the wrong prediction of ``\textit{clock}'' in Exp. (a) is corrected to ``\textit{vase}'' by MPR.
In addition, MPR also brings more high-quality pseudo-labels to OneTeacher, \emph{e.g.}, the previously missed ``\textit{pizza}'' in Exp. (e) is detected after the refinement of MPR. These results confirm the effectiveness of MPR in addressing the issue of low-quality pseudo-labeling. \section{Conclusion} In this paper, we focus on one-stage semi-supervised object detection (SSOD), which is often overlooked in the existing literature. Specifically, we identify its two key challenges compared to two-stage SSOD, namely \textit{low-quality pseudo-labeling} and \textit{multi-task optimization conflict}. To address these issues, we present a novel teacher-student learning paradigm for one-stage detection networks, termed \textit{OneTeacher}. To handle the first issue, OneTeacher applies a novel design called \emph{Multi-view Pseudo-label Refinement} (MPR) to improve pseudo-label quality from micro to macro views. Meanwhile, OneTeacher adopts a \emph{Decoupled Semi-supervised Optimization} (DSO) scheme to address the multi-task optimization conflicts via a branching structure and task-specific pseudo-labeling. In addition, we also adopt the advanced one-stage detection network YOLOv5 as our base model and carefully revise its implementation to maximize the benefits of SSOD. To validate OneTeacher, we conduct extensive experiments on {COCO} and {Pascal VOC}. The experimental results show that OneTeacher greatly outperforms both supervised and semi-supervised methods under different settings, and also confirm its effectiveness in addressing the aforementioned key issues of one-stage SSOD. \section*{Acknowledgment} This work was supported by the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U21B2037, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, and No. 62002305), Guangdong Basic and Applied Basic Research Foundation (No.2019B1515120049), and the Natural Science Foundation of Fujian Province of China (No.2021J01002). We also thank Huawei Ascend Enabling Laboratory for the continuous support in this work. We are particularly grateful to Qing Xu and Rongrong Fu for their efforts and valuable suggestions on this paper. \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.11420", "language": "en", "timestamp": "2023-02-23T02:16:10", "url": "https://arxiv.org/abs/2302.11420", "yymm": "2302" }
\chapter*{Acknowledgments} \addcontentsline{toc}{chapter}{Acknowledgments} \setcounter{page}{3} I want to thank my supervisor Prof. Hidetoshi Awata for his advice and support. I also want to thank Prof. Noriaki Ikeda for his comments and suggestions. Finally, I want to thank my family for their understanding and support. {% \hspace{1em} }% \tableofcontents \chapter{Introduction} \pagenumbering{arabic} \setcounter{page}{1} A Courant algebroid is a 4-tuple ($E,\rho,\langle,\rangle,[,]$) where $E$ is a vector bundle over a smooth manifold $M$, $\rho$ is an anchor map to the tangent bundle, $\langle,\rangle$ is a non-degenerate metric, and $[,]$ is a Courant bracket on the sections of the bundle, satisfying a set of compatibility conditions. It first appeared in $\cite{C90}$ as the generalized tangent bundle $TM\oplus T^{*}M$ with a natural projection $\rho:TM\oplus T^{*}M\rightarrow TM$, a natural pairing $\langle,\rangle$, and a Dorfman bracket $[,]$, and a general definition was given in $\cite{LWX95}$ to generalize the double of Lie bialgebroids (the Lie algebroid analogue of Lie bialgebras$\cite{D83}$). We can get a map $d:C^{\infty}(M)\rightarrow\Gamma(E)$ by defining $\langle df,e\rangle=\rho(e)f$ for $f\in C^{\infty}(M),e\in\Gamma(E)$. Courant algebroids play an important role in some areas of mathematics and physics, for example, generalized geometries$\cite{G04}$, T-dualities$\cite{CG11}$, topological sigma models$\cite{R06}$, supergravity$\cite{CSW11}$, and double field theories$\cite{V12}$. Moreover, there is a one-to-one correspondence between isomorphism classes of differential-graded (dg for short) symplectic manifolds of degree 2 and isomorphism classes of Courant algebroids$\cite{R02}$. A Courant-Dorfman algebra is a 5-tuple ($R,E,\partial,\langle,\rangle,[,]$), where $R$ is a commutative algebra, $E$ is an $R$-module, $\langle ,\rangle:E\otimes E\rightarrow R$ is a symmetric bilinear form, $\partial:R\rightarrow E$ is a derivation, and $[,]:E\otimes E\rightarrow E$ is a Dorfman bracket, satisfying a set of compatibility conditions. A Courant algebroid gives a Courant-Dorfman algebra via ($C^{\infty}(M),\Gamma(E),d,\langle,\rangle,[,]$). Courant-Dorfman algebras generalize Courant algebroids in two directions: first allowing for more general commutative algebras $R$ and modules $E$ than algebras of smooth functions and modules of smooth sections, and second allowing for degenerate $\langle,\rangle$. The relation between Courant-Dorfman algebras and Poisson vertex algebras was found in the context of current algebras. Current algebras are Poisson algebras consisting of functions on mapping spaces. In classical field theories, the Poisson algebraic structure of currents plays an important role when we consider symmetries of fields. The most basic example is the Kac-Moody algebra, which is the Lie algebraic structure on $\mathrm{Map}(S^{1},G)$, where $G$ is a Lie group. Let $\mathfrak{g}$ be the Lie algebra of $G$ and $e_{a}$ be generators of $\mathfrak{g}$ such that $[e_{a},e_{b}]=f^{c}_{ab}e_{c}$. The bracket is of the form \begin{equation} \label{KM} \{J_{a}(\sigma),J_{b}(\sigma')\}=f^{c}_{ab}J_{c}(\sigma)\delta(\sigma-\sigma')+k\delta_{ab}\delta'(\sigma-\sigma'), \end{equation} where $k$ is a constant. The algebra plays an important role as the symmetry of the Wess-Zumino-Witten model, the 2-dimensional conformally invariant sigma model whose target space is a Lie group$\cite{KZ84}$.
Alekseev and Strobl observed that there is a more general current algebra whose source manifold is $S^{1}$ but whose target is a general smooth manifold$\cite{AS04}$. Let $M$ be a smooth manifold and choose a vector field $v=v^{i}(x)\partial_{i}$ and a 1-form $\alpha=\alpha_{i}(x)dx^{i}$ on $M$. We associate to them a current, \begin{equation} J_{(v,\alpha)}(\sigma)=v^{i}(x(\sigma))p_{i}(\sigma)+\alpha_{i}(x(\sigma))\partial_{\sigma}x^{i}(\sigma). \end{equation} The Poisson bracket of these currents is of the form \begin{equation} \{J_{(v,\alpha)}(\sigma),J_{(u,\beta)}(\sigma')\}=J_{[(v,\alpha),(u,\beta)]}(\sigma)\delta(\sigma-\sigma')+\langle(v,\alpha),(u,\beta)\rangle(\sigma)\delta'(\sigma-\sigma'), \end{equation} where $u,v$ are vector fields on $M$, $\alpha,\beta$ are 1-forms on $M$, $[(v,\alpha),(u,\beta)]=([v,u],L_{v}\beta-\iota_{u}d\alpha)$ is the Dorfman bracket on the generalized tangent bundle $TM\oplus T^{*}M$ and $\langle(v,\alpha),(u,\beta)\rangle=\iota_{u}\alpha+\iota_{v}\beta$. Let $M=G$ be a Lie group, and consider an Alekseev-Strobl current of the form \begin{equation} J=p(\sigma)-\frac{k}{4\pi}g^{-1}(\sigma)\partial_{\sigma}g(\sigma), \end{equation} where $p(\sigma)$ is a left-invariant momentum. When we decompose $J$ on a basis $e_{a}$ of $\mathfrak{g}$, the Poisson bracket of the $J_{a}$'s is the Kac-Moody algebra ($\ref{KM}$). Alekseev-Strobl currents appear in the description of symmetries of 2-dimensional $\sigma$-models. Inspired by $\cite{AS04}$, Ekstrand and Zabzine studied the algebraic structure underlying more general current algebras on loop spaces$\cite{EZ09}$. They found that a weak notion of Courant-Dorfman algebras (weak Courant-Dorfman algebras) appears when we consider the Poisson bracket of currents. In $\cite{E11}$, (weak) Courant-Dorfman algebras were derived using the language of Lie conformal algebras (LCA for short) and Poisson vertex algebras (PVA for short). A Lie conformal algebra is a module with a $\lambda$-bracket satisfying conditions similar to those of a Lie algebra, and a Poisson vertex algebra is defined as an algebra which has the structure of a Lie conformal algebra and satisfies the Leibniz rule. They first appeared in the context of vertex algebras, and the relation with the Poisson bracket of currents was investigated in $\cite{BSK09}$. We can get a Lie conformal algebra from the Poisson bracket of currents, and we can get a Poisson vertex algebra by taking into account the multiplication of currents. A Poisson vertex algebra can be seen as an algebraic generalization of a Poisson algebraic structure on loop spaces, while a Lie conformal algebra can be seen as an algebraic generalization of a Lie bracket on loop spaces. In $\cite{E11}$, Ekstrand derived weak Courant-Dorfman algebras from Lie conformal algebras and showed that the graded Poisson vertex algebras generated by elements of degree 0 and 1 are in one-to-one correspondence with the Courant-Dorfman algebras. The above discussions are summarized as follows.
\begin{equation} \xymatrix{ \fbox{degree 2 dg symplectic manifolds} \ar@{<->}[r]^-{1-to-1} & \fbox{Courant algebroids} } \end{equation} \begin{equation} \label{2} \xymatrix{ \fbox{Kac-Moody algebras}\ar@{^{(}->}[d] & \fbox{generalized tangent bundles of Lie groups} \ar[l]^-{target} \ar@{^{(}->}[d] \\ \fbox{Alekseev-Strobl current algebras} \ar@{^{(}->}[d] & \fbox{Courant algebroids} \ar[l]^-{target} \ar@{^{(}->}[d] \\ \fbox{Poisson vertex algebras} \ar@{<->}[r]^-{1-to-1} \ar@{^{(}->}[d] & \fbox{Courant-Dorfman algebras} \ar@{^{(}->}[d]\\ \fbox{Lie conformal algebras} \ar[r]^-{derive}& \fbox{weak Courant-Dorfman algebras} } \end{equation} Courant algebroids are in one-to-one correspondence with degree 2 dg symplectic manifolds, and Alekseev-Strobl current algebras can be described in the language of dg symplectic geometry$\cite{IK11}$. Moreover, Poisson algebras on the mapping space whose source manifold is higher dimensional were constructed (for example, $\cite{BZ05},\cite{HK12}$) and a general framework explaining these current algebras were given using dg symplectic geometry.$\cite{IX13},\cite{BHIW15},\cite{A21}$ These currents are called BFV(Batalin-Fradkin-Vilkovisky) current algebras. There Courant algebroids(degree 2 dg symplectic manifolds) are generalized to degree $n$ dg symplectic manifolds. BFV current algebras and degree $n$ dg symplectic manifolds can be seen as a higher analog of the second line of ($\ref{2}$). The aim of this paper is to give a higher analog of the third line and fourth line of ($\ref{2}$). In other words, we consider how to make higher Poisson vertex algebras, higher Courant-Dorfman algebras, higher Lie conformal algebras and higher weak Courant-Dorfman algebras which are generalizations of BFV current algebras and algebras of functions of degree $n$ dg symplectic manifolds. In particular, with higher Courant-Dorfman algebras and higher Poisson vertex algebras, we may be able to find and unify more general current algebras including the BFV current algebras, and use the techniques of Poisson vertex algebras in the higher setting. In this paper, we give a higher analog of the relation between Poisson vertex algebras and Courant-Dorfman algebras. First, we define higher Courant-Dorfman algebras by taking an algebraic structure of functions of degree $n$ dg symplectic manifolds. We give some examples, including ordinary Courant-Dorfman algebras and higher Dorfman bracket on $TM\oplus\wedge^{n-1}T^{*}M$. We also give an extended version of higher Courant-Dorfman algebras, whose definition is more natural when we consider the relation with higher PVAs. Second, we check that non-degenerate higher Courant-Dorfman algebras have a similar property to the non-degenerate Courant-Dorfman algebras. In particular, we make a graded Poisson algebra of degree $-n$ from a non-degenerate higher Courant-Dorfman algebra. This graded Poisson algebra is a generalization of the graded Poisson algebra of degree $-2$ introduced in $\cite{KW08},\cite{R09}$. For a non-degenerate higher Courant-Dorfman algebra from a finite-dimensional graded vector bundle, this graded Poisson algebra is isomorphic to the algebra of functions of degree $n$ dg symplectic manifolds. Third, we define a higher analog of Lie conformal algebras and Poisson vertex algebras, which are to higher Courant-Dorfman algebras what Poisson vertex algebras are to Courant-Dorfman algebras. 
We derive a weak notion of higher Courant-Dorfman algebras from higher Lie conformal algebras, and give the correspondence between higher Poisson vertex algebras and higher Courant-Dorfman algebras. This correspondence is a higher generalization of the correspondence between Courant-Dorfman algebras and Poisson vertex algebras, and the main result of this thesis. \begin{th.*} There is a bijection between higher Poisson vertex algebras generated by elements of degree $0\leq i\leq n-1$ and extended higher Courant-Dorfman algebras. \end{th.*} Moreover, we check higher Lie conformal algebras and higher Poisson vertex algebras have LCA-like and PVA-like properties. In particular, we show we can construct a graded Lie algebra out of the tensor product of a higher LCA and an arbitrary differential graded-commutative algebra (dgca for short) and a graded Poisson algebra out of the tensor product of a higher PVA and an arbitrary dgca. Taking a tensor product of the higher Courant-Dorfman algebra arising from a dg symplectic manifold of degree $n$ and de-Rham complex of a $n-1$ dimensional manifold, we see the associated Poisson algebras can be seen as an algebraic description of BFV current algebras. This is the higher generalization of Alekseev-Strobl Poisson vertex algebras. The higher generalization of ($\ref{2}$) are summarized as follows. \begin{equation} \xymatrix{ \fbox{BFV current alegbras} \ar@{^{(}->}[d] & \fbox{functions of degree $n$ dg symplectic manifolds} \ar[l]^-{target} \ar@{^{(}->}[d] \\ \fbox{\textbf{higher Poisson vertex algebras}} \ar@{<->}[r]^-{\textbf{1-to-1}} \ar@{^{(}->}[d] & \fbox{\textbf{(extended) higher Courant-Dorfman algebras}} \ar@{^{(}->}[d]\\ \fbox{\textbf{higher Lie conformal algebras}} \ar[r]^-{\textbf{derive}}& \fbox{\textbf{higher weak Courant-Dorfman algebras}} } \end{equation} In the case of $n=2$, this coincides with ($\ref{2}$). The bold parts (second line and third line) are defined and studied in this paper. The organization of this thesis is as follows. In chapter 2, we recall some basics about dg symplectic geometry. In chapter 3, we review the relation between Poisson vertex algebras and Courant-Dorfman algebras, focusing on the Poisson structure of loop spaces. In chapter 4, we define the higher Courant-Dorfman algebra and give some examples. In chapter 5, we construct graded Poisson algebras of degree $-n$, generalizing Keller-Waldman Poisson algebras. In chapter 6, we define higher Lie conformal algebras and Poisson vertex algebras and see the relation with higher Courant-Dorfman algebras. Moreover, we show how we can see these algebras as higher generalization of ordinary LCAs and PVAs. \chapter{dg symplectic manifolds} In this chapter, we review some basics of dg symplectic manifolds. We refer to $\cite{CS10},\cite{QZ11}$. A graded vector space is a collection of vector spaces $V=\oplus_{i\in\mathbb{Z}} V_{i}$, where $V_{i}$ is the vector space of degree $i$. Denote the dual of $V$ by $V^{*}=\oplus_{i\in\mathbb{Z}} (V^{*}_{i})^{-i}$. Define the tensor algebra of $V^{*}$ by \begin{equation} \mathrm{Tens}(V^{*})=\oplus_{i\geq0}(V^{*})^{\otimes i}, \end{equation} and the symmetric algebra of $V^{*}$ by \begin{equation} \mathrm{Sym}(V^{*})=\mathrm{Tens}(V^{*})/(v\otimes w-(-1)^{|v||w|}w\otimes v), \end{equation} where $|v|,|w|$ is the degree of the homogeneous elements $v,w\in V^{*}$. The algebra of functions on $V$ is identified with $\mathrm{Sym}(V^{*})$. 
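For instance, if $V$ is concentrated in degree 1, then every homogeneous element of $V^{*}$ has odd degree, the relation $v\otimes w-(-1)^{|v||w|}w\otimes v$ becomes $v\otimes w+w\otimes v$, and
\begin{equation}
\mathrm{Sym}(V^{*})\simeq\wedge V^{*},
\end{equation}
while if $V$ is concentrated in an even degree, $\mathrm{Sym}(V^{*})$ is the usual polynomial algebra on $V^{*}$.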
A graded manifold $\mathcal{M}$ is a locally ringed space $(M,C^{\infty}(\mathcal{M}))$ which is locally isomorphic to $(U,C^{\infty}(U)\otimes \mathrm{Sym} V^{*})$, where $U\subset\mathbb{R}^{n}$ is open, and $V$ is a finite-dimensional graded vector space. A morphism of graded manifolds is a morphism of graded-commutative algebras of functions. Let $V$ be a graded vector space with homogeneous coordinates $(z^{i})^{n}_{i=1}$ corresponding to a basis of $V^{*}$. A vector field $X$ on $V$ is an $\mathbb{R}$-linear derivation of $\mathrm{Sym}(V^{*})$ satisfying the Leibniz rule \begin{equation} X(fg)=X(f)g+(-1)^{k|f|}fX(g) \end{equation} for $f,g\in \mathrm{Sym}(V^{*})$, where $k$ is the degree of $X$ defined below. It is of the form \begin{equation} X=\sum^{n}_{i=1}X^{i}\frac{\partial}{\partial z^{i}} \end{equation} where $X^{i}\in \mathrm{Sym}(V^{*})$, and $\frac{\partial}{\partial z^{i}}$ is the dual basis of $V$. A vector field $X$ acts on $\mathrm{Sym}(V^{*})$ according to the following rules: \begin{align} \frac{\partial}{\partial z^{i}}(z^{j})&=\delta^{j}_{i},\\ \frac{\partial}{\partial z^{i}}(fg)&=\left(\frac{\partial}{\partial z^{i}}(f)\right)g+(-1)^{|z^{i}||f|}f\frac{\partial}{\partial z^{i}}(g). \end{align} A vector field $X$ is graded if $|Xf|=|f|+k$ for homogeneous $f$ and fixed $k\in\mathbb{Z}$. $k$ is called the degree of $X$. A graded vector field on a graded manifold $\mathcal{M}$ of degree $k$ is a graded linear map \begin{equation} X:C^{\infty}(\mathcal{M})\rightarrow C^{\infty}(\mathcal{M})[k], \end{equation} where $W[k]^{i}=W^{k+i}$, which satisfies the graded Leibniz rule, i.e. \begin{equation} X(fg)=X(f)g+(-1)^{k|f|}fX(g) \end{equation} holds for all homogeneous smooth functions $f,g\in C^{\infty}(\mathcal{M})$. \begin{ex.} The Euler vector field $E$ on $\mathcal{M}$ is a vector field of degree 0 which satisfies \begin{equation} Ef=|f|f, \end{equation} for a homogeneous element $f\in C^{\infty}(\mathcal{M})$. Locally, it is of the form, \begin{equation} E=\sum_{i}|z^{i}|z^{i}\frac{\partial}{\partial z^{i}}. \end{equation} \end{ex.} \begin{de.}[{$\cite[Definition 3.3.]{CS10}$}] A cohomological vector field $Q$ is a graded vector field of degree 1 which satisfies $Q^{2}=0$. \end{de.} Every cohomological vector field on $\mathcal{M}$ corresponds to a differential on $C^{\infty}(\mathcal{M})$. A morphism of dg manifolds is a morphism of dg algebras of functions. The space of graded 1-forms consists of homomorphisms from the graded vector fields on $\mathcal{M}$ to the functions on $\mathcal{M}$, \begin{equation} \Omega^{1}(\mathcal{M}):=\mathrm{Hom}_{C^{\infty}(\mathcal{M})}(\mathfrak{X}(\mathcal{M}),C^{\infty}(\mathcal{M})). \end{equation} Locally, the algebra of differential forms on a graded manifold $\mathcal{M}$ is constructed by adding new coordinates $dz^{i}$ to $z^{i}$ ($|dz^{i}|=|z^{i}|+1$). We denote the space of $k$-th differential forms by $\Omega^{k}(\mathcal{M})$. Define the de-Rham differential and the Lie derivative $L_{V}$ ($V$: a vector field) by \begin{equation} d\omega(V_{1},...,V_{n+1})=\sum_{i=1}^{n-1}\omega(V_{1},...,V_{i-1},[V_{i},V_{i+1}],V_{i+2},...,V_{n}), \end{equation} \begin{equation} L_{V}=\iota_{V}d+(-1)^{|V|}d\iota_{V}, \end{equation} where $\omega\in\Omega^{n}(\mathcal{M}),V_{i}\in\mathfrak{X}$ and $\iota_{V}$ is the contraction.
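As an illustration, on a graded manifold with coordinates $x^{i}$ of degree 0 and $p_{i}$ of degree $k$, the Euler vector field is locally $E=k\sum_{i}p_{i}\frac{\partial}{\partial p_{i}}$ and the two-form
\begin{equation}
\omega=\sum_{i}dp_{i}dx^{i}
\end{equation}
satisfies $L_{E}\omega=k\omega$, i.e. it is homogeneous of degree $k$; this local computation reappears in the degree 1 and degree 2 examples below.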
\begin{de.}[{$\cite[Definition 4.3.]{CS10}$}] A graded symplectic form of degree $k$ on a graded manifold $\mathcal{M}$ is a two-form $\omega$ which has the following properties; \begin{itemize} \item$\omega$ is homogeneous of degree $k$, \item$\omega$ is closed with respect to the de-Rham differential, \item$\omega$ is non-degenerate, i.e. the induced morphism, \begin{equation} \omega:T\mathcal{M}\rightarrow T^{*}[k]\mathcal{M}, \end{equation} is an isomorphism. There $[k]$ means degree shifting the fibres of the vector bundle. \end{itemize} \end{de.} A graded symplectic manifold of degree $k$ is a pair $(\mathcal{M},\omega)$ of a graded manifold $\mathcal{M}$ and a graded symplectic form $\omega$ of degree $k$ on $\mathcal{M}$. \begin{le.}[{$\cite[Lemma 4.5.]{CS10}$}] Let $\omega$ be a graded symplectic form of degree $k\neq0$. Then $\omega$ is exact. \end{le.} \begin{proof} Let $E$ be the Euler vector field. Then, \begin{equation} k\omega=L_{E}\omega=(d\iota_{E}+\iota_{E}d)\omega=d(\iota_{E}\omega), \end{equation} which implies $\omega=\frac{d\iota_{E}\omega}{k}$($E$:Euler vector field). \end{proof} \begin{de.}[{$\cite[Definition 4.6.]{CS10}$}] Let $\omega$ be a graded symplectic form on a graded manifold $\mathcal{M}$. A vector field $X$ is called symplectic if $L_{X}\omega=0$, and Hamiltonian if there is a smooth function $H$ such that $\iota_{X}\omega=dH$. \end{de.} \begin{le.}[{$\cite[Lemma 4.7.]{CS10}$}] Let $\omega$ be a graded symplectic form of degree $k\neq0$ and $X$ be a symplectic vector field of degree $l$. If $k+l\neq0$, then $X$ is Hamiltonian. \end{le.} \begin{proof} For the Euler vector field $E$, \begin{align} -l\iota_{X}\omega&=\iota_{[E,X]}\omega=\iota_{X}d(\iota_{E}\omega)-d(\iota_{X}\iota_{E}\omega) \notag \\ &=k\iota_{X}\omega+d(\iota_{E}\iota_{X}\omega) \end{align} Let $H:=\iota_{E}\iota_{X}\omega$, Then \begin{equation} dH=(k+l)\iota_{X}\omega. \end{equation} Hence $\iota_{X}\omega=\frac{dH}{k+l}$. \end{proof} For a degree $k$ graded symplectic manifold $(\mathcal{M},\omega)$, we can define a Poisson bracket $\{-,-\}$ on $C^{\infty}(\mathcal{M})$ via \begin{equation} \{f,g\}:=(-1)^{|f|+1}X_{f}(g) \end{equation} where $X_{f}$ is the unique graded vector field that satisfies $\iota_{X_{f}}\omega=df$. $X_{f}$ is called a Hamiltonian vector field of $f$. If the vector field $Q$ is Hamiltonian, one can find a Hamiltonian function $S$ such that \begin{equation} Q=\{S,-\}. \end{equation} Since \begin{equation} Q^{2}(f)=\{\{S,S\},f\} \end{equation} $Q^{2}=0$(i.e. $Q$ is cohomological) is equivalent to $\{S,S\}$ being a constant. Assume that $Q$ is a cohomological vector field. Then, $|S|=k+1$, while $|\{-,-\}|=-k$. Consequently, $|\{S,S\}|=k+2$. If $k\neq-2$, then \begin{equation} \{S,S\}=0. \end{equation} This equation is known as the classical master equation. A cohomological vector field with a Hamiltonian function $S$ such that $Q=\{S,-\}$ is called a symplectic cohomological vector field. \begin{de.}[{$\cite[Definition 4.10.]{CS10}$}] A graded manifold endowed with a graded symplectic form and a symplectic cohomological vector field is called a differential graded symplectic manifold, or dg symplectic manifold for short. \end{de.} A morphism between two dg symplectic manifolds is a morphism of the Poisson algebras of functions respecting the differential induced by the symplectic cohomological vector field. We consider some special cases of dg symplectic manifolds $(M,\omega,S)$, where $S$ is the Hamiltonian function associated to a cohomological vector field. 
When $k=-1$, these manifolds correspond to BV theories. A BV theory is a formulation of a Lagrangian formalism of a gauge theory based on dg manifold($\cite{BV81}$). In this case, the Poisson bracket induced by $\omega$ corresponds to the BV antibracket and the Hamiltonian function corresponds to the BV action. When $k=0$, they emerge in the BFV theories. A BFV theory is a formulation of a constrained Hamiltonian system based on a dg manifold, which is a Hamiltonian counterpart of the BV theory($\cite{BV77},\cite{BF83}$). In this case, the Poisson bracket induced by $\omega$ corresponds to the BFV Poisson bracket and the Hamiltonian function corresponds to the BRST charge. Note that the physical Hamiltonian cannot be decided from the dg symplectic manifold. Suppose $k>0$ and that all the coordinates are of non-negative degree. Then $\mathcal{M}$ is called an N-manifold. N-manifolds of degree 1 and 2 are analyzed in $\cite{R02}$. \paragraph{$k=1.$} Every graded symplectic manifold of degree 1 is canonically isomorphic to the graded cotangent bundle $T^{*}[1]M$ of the base manifold $M$. We denote the coordinates of degree 0 by $x^{i}$, and the coordinates in degree 1 by $p_{i}$. The Hamiltonian $S$ has degree 2, so locally it must be of the form, \begin{equation} S=\frac{1}{2}\sum^{n}_{i,j=1}\pi^{ij}(x)p_{i}p_{j}. \end{equation} Hence, locally $S$ corresponds to a bivector field $\Pi=\pi^{ij}(x)\partial_{i}\wedge\partial_{j}$ and $\{S,S\}=0$ implies that $S$ corresponds to a Poisson bivector field. Let $C^{i}(C^{\infty}(\mathcal{M}))$ be the subspace of $C^{\infty}(\mathcal{M})$ generated by degree $i$ coordinates. For $f,g\in C^{0}(C^{\infty}(\mathcal{M}))$ \begin{align} \{\{f,S\},g\}&=\{\sum^{n}_{i,j=1}\frac{\partial f}{\partial{x^{i}}}\pi^{ij}p_{j},g\} \notag \\ &=\{\sum^{n}_{i,j=1}\frac{\partial f}{\partial{x^{i}}}\frac{\partial g}{\partial{x^{j}}}\pi^{ij}\}, \end{align} which is a Poisson manifold structure. Hence, there is a one-to-one correspondence between isomorphism class of dg symplectic manifolds of degree 1 and isomorphism class of Poisson manifolds. \paragraph{$k=2.$} The graded symplectic structure induces an isomorphism between the coordinates of degree 0, which we denote by $x^{i}$, and the coordinates in degree 2, which we denote by $p_{i}$. We denote the coordinates in degree 1 by $\eta^{\alpha}$. The graded symplectic form can be written as \begin{equation} \omega=\sum^{n}_{i=1}dp_{i}dx^{i}+\frac{1}{2}\sum^{n}_{\alpha,\beta=1}d(g_{\alpha\beta}(x)\eta^{\alpha})d\eta^{\beta} \end{equation} where $g_{\alpha\beta}$ is a symmetric non-degenerate form. Globally, the dg symplectic manifold corresponds to the symplectic realization of $E[1]$ for a vector bundle $E$ over $M$, equipped with a non-degenerate fibre pairing $g$. The Hamiltonian $S$ has degree 3, so locally it must be of the form \begin{equation} S=\sum_{i,\alpha}\rho^{i}_{\alpha}(x)p_{i}\eta^{\alpha}+\frac{1}{6}\sum_{\alpha,\beta,\gamma}f_{\alpha\beta\gamma}(x)\eta^{\alpha}\eta^{\beta}\eta^{\gamma}. \end{equation} For $e\in\Gamma(E)$, the first term corresponds to a bundle map $\rho:E\rightarrow TM$ defined by $\rho(e_{\alpha})=\rho^{i}_{\alpha}(e)\partial_{i}$, while the second one gives a bracket $[,]$ on $\Gamma(E)$ defined by $[e_{\alpha},e_{\beta}]=f^{\alpha\beta}_{\gamma}e_{\gamma}$. $\{S,S\}=0$ implies that $(E,g)$ is a Courant algebroid$\cite{R02}$. 
\begin{de.}[{$\cite[Definition 4.2.]{R02}$}] A Courant algebroid is a vector bundle $E$ over a smooth manifold $M$, with a non-degenerate symmetric bilinear form $\langle ,\rangle $, and a bilinear bracket $*$ on $\Gamma(E)$. The form and the bracket must be compatible, in the sense defined below, with the vector fields on $M$. We must have a smooth bundle map, the anchor \begin{equation} \pi:E\rightarrow TM. \end{equation} These structures satisfy the following five axioms, for all $A,B,C\in\Gamma(E)$ and $f\in C^{\infty}(M)$. \begin{description} \item[Axiom.1]: $\pi(A*B)=[\pi(A),\pi(B)]$ (the bracket on the right-hand side is the Lie bracket of vector fields). \item[Axiom.2]: $A*(B*C)=(A*B)*C+B*(A*C)$. \item[Axiom.3]: $A*(fB)=(\pi(A)f)B+f(A*B)$. \item[Axiom.4]: $\langle A,B*C+C*B\rangle =\pi(A)\langle B,C\rangle $. \item[Axiom.5]: $\pi(A)\langle B,C\rangle =\langle A*B,C\rangle +\langle B,A*C\rangle $. \end{description} \end{de.} From the above data, we can define a map $\partial:C^{\infty}(M)\rightarrow\Gamma(E)$ by \begin{equation} \langle \partial f,A\rangle =\pi(A)f \end{equation} for all $A\in\Gamma(E)$. A morphism of Courant algebroids is a bundle map respecting all the operations. We give a correspondence between Courant algebroids and dg symplectic manifolds of degree 2. Denote $C^{i}(C^{\infty}(\mathcal{M}))=\{f\in C^{\infty}(\mathcal{M}):|f|\leq i\}$. Then \begin{equation} C^{0}(C^{\infty}(\mathcal{M}))\simeq C^{\infty}(M), C^{1}(C^{\infty}(\mathcal{M}))\simeq \Gamma(E). \end{equation} For $f\in C^{0}(C^{\infty}(\mathcal{M}))$ and $A,B\in C^{1}(C^{\infty}(\mathcal{M}))$, we define the anchor $\pi$ and the bilinear bracket $*$ as the derived brackets, \begin{align} \{\{A,S\},B\}&=A*B, \\ \{\{A,S\},f\}&=\pi(A)f=\langle\partial f,A\rangle=\{\{S,f\},A\}. \end{align} We can check that this definition satisfies the conditions of a Courant algebroid. Conversely, given a Courant algebroid $(E,M,\langle,\rangle,*,\pi)$, we can associate a degree 2 dg symplectic manifold $(\mathcal{M},\omega,S)$. Locally, \begin{equation} S=\sum_{i,\alpha}\pi(e_{\alpha})x^{i}p_{i}\eta^{\alpha}+\frac{1}{6}\sum_{\alpha,\beta,\gamma}\langle [e_{\alpha},e_{\beta}],e_{\gamma}\rangle(x)\eta^{\alpha}\eta^{\beta}\eta^{\gamma}, \end{equation} where $e_{\alpha},e_{\beta},e_{\gamma}\in\Gamma(E)$. Hence, there is a one-to-one correspondence between isomorphism classes of dg symplectic manifolds of degree 2 and isomorphism classes of Courant algebroids. \begin{th.}[{$\cite[Theorem 4.5.]{R02}$}] Dg symplectic manifolds of degree 2 are in 1-1 correspondence with Courant algebroids. \end{th.} \chapter{Courant-Dorfman algebras and Poisson vertex algebras} In this chapter we review the definitions of Courant-Dorfman algebras and Poisson vertex algebras and the relation between these algebras. Courant-Dorfman algebras were defined by Roytenberg in $\cite{R09}$ as an algebraic generalization of Courant algebroids$\cite{LWX95}$. These are to Courant algebroids what Lie-Rinehart algebras are to Lie algebroids.
\begin{de.}[{$\cite[Definition 2.1.]{R09}$}] A Courant-Dorfman algebra consists of the following data: \begin{itemize} \item a commutative algebra $R$, \item an $R$-module $E$, \item a symmetric bilinear form $\langle ,\rangle:E\otimes E\rightarrow R$, \item a derivation $\partial:R\rightarrow E$, \item a Dorfman bracket $[,]:E\otimes E\rightarrow E$, \end{itemize} which satisfies the following conditions; \begin{equation} [e_{1},fe_{2}]=f[e_{1},e_{2}]+\langle e_{1},\partial f\rangle e_{2}, \end{equation} \begin{equation} \langle e_{1},\partial\langle e_{2},e_{3}\rangle\rangle=\langle [e_{1},e_{2}],e_{3}\rangle+\langle e_{2},[e_{1},e_{3}]\rangle, \end{equation} \begin{equation} [e_{1},e_{2}]+[e_{2},e_{1}]=\partial\langle e_{1},e_{2}\rangle, \end{equation} \begin{equation} [e_{1},[e_{2},e_{3}]]=[[e_{1},e_{2}],e_{3}]+[e_{2},[e_{1},e_{3}]], \end{equation} \begin{equation} [\partial f,e]=0, \end{equation} \begin{equation} \langle \partial f,\partial g\rangle=0, \end{equation} where $f,g\in R$ and $e_{1},e_{2},e_{3}\in E$. \end{de.} For a Courant-Dorfman algebra, when $\langle ,\rangle$ is non-degenerate, we can make a graded Poisson algebras of degree $-2$, and when $R=C^{\infty}(M)$ and $E=\Gamma(F)$ for a vector bundle $F\rightarrow M$ (i.e. $E$ is a Courant algebroid), the graded Poisson algebra is isomorphic to the Poisson algebra of functions of the associated degree $n$ dg symplectic manifolds($\cite{KW08},\cite{R09}$). An important property of Courant-Dorfman algebras is a relation with Poisson vertex algebras. \begin{de.}[{$\cite[Definition 2.7]{K}$}] A Lie conformal algebra is a $\mathbb{C}[\partial]$-module $W$ (i.e.$\partial$ acts on elements of $W$) with a $\lambda$-bracket $\{_{\lambda}\}:W\otimes W\rightarrow W[\lambda]$, $\{a_{\lambda}b\}=\sum_{j\in\mathbb{Z}_{+}}\lambda^{j}a_{(j)}b$ (a product $a_{(j)}b\in W$ is called $j$-th bracket) which satisfies the following conditions. (Here $\lambda$ is an indeterminate.) \begin{description} \item[Sesquilinearity]: \begin{equation} \{\partial a_{\lambda}b\}=\lambda\{a_{\lambda}b\},\ \{a_{\lambda}\partial b\}=(\partial+\lambda)\{a_{\lambda}b\} \end{equation} ($\partial$ is a derivation of the $\lambda$-bracket.) \item[Skew-symmetry]: \begin{equation} \{a_{\lambda}b\}=-\{b_{-\lambda-\partial}a\} \end{equation} \item[Jacobi-identity]: \begin{equation} \{a_{\lambda}\{b_{\mu}c\}\}=\{\{a_{\lambda}b\}_{\mu+\lambda}c\}+\{b_{\mu}\{a_{\lambda}c\}\} \end{equation} \end{description} \end{de.} \begin{de.}[{$\cite[Definition 1.14.]{BSK09}$}] A Poisson vertex algebra is a commutative algebra $W$ with a derivation $\partial$(i.e.$\partial(ab)=(\partial a)b+a(\partial b)$) and $\lambda$-bracket $\{_{\lambda}\}:W\otimes W\rightarrow W[\lambda]$ such that $W$ is a Lie conformal algebra and satisfies the Leibniz rule. \begin{description} \item[Leibniz rule]: \begin{equation} \{a_{\lambda}b\cdot c\}=\{a_{\lambda}b\}\cdot c+b\cdot\{a_{\lambda}c\} \end{equation} \end{description} \end{de.} Poisson vertex algebras appear when we consider functions on phase spaces $T^{*}LM$ of loop spaces $LM=Map(S^{1},M)$. We denote local coordinates on $T^{*}LM$ by $X^{i}(\sigma),P_{i}(\sigma)$ with a coordinate $\sigma$ on $S^{1}$, and define a Poisson bracket by \begin{equation} \{X^{i}(\sigma),P_{i}(\sigma')\}=\delta^{i}_{j}\delta(\sigma-\sigma'). \end{equation} We can construct local functions on $T^{*}LM$ out of the coordinates $X$, $P$ and $\partial=\partial_{\sigma}$. 
We consider local functions of the form \begin{equation} A(X,\partial X,...,\partial^{k}X,P,...,\partial^{l}P) \end{equation} where $k,l$ are finite. We can create a functional out of $A$ by \begin{equation} \epsilon(\sigma)\in C^{\infty}(S^{1})\mapsto J_{\epsilon}(A)=\int_{S^{1}}\epsilon(\sigma)A(X,\partial X,...,\partial^{k}X,P,...,\partial^{l}P)d\sigma. \end{equation} Considering the Poisson brackets between them, we can find geometric and algebraic structures on $M$. In $\cite{AS04}$, the Poisson brackets between currents parametrised by sections of a generalized tangent bundle $TM\oplus T^{*}M$ are written in terms of the Dorfman bracket. In $\cite{EZ09}$, considering more general currents, weak Courant-Dorfman algebras were derived. \begin{de.}[{$\cite[Definition 4.1]{E11}$}] A weak Courant-Dorfman algebra $(E,R,\partial,\langle,\rangle,[,])$ is defined by the following data: \begin{itemize} \item a vector space $R$, \item a vector space $E$, \item a symmetric bilinear form $\langle,\rangle:E\otimes E\rightarrow R$, \item a map $\partial:R\rightarrow E$, \item a Dorfman bracket $[,]:E\otimes E\rightarrow E$, \end{itemize} which satisfy the following conditions: \begin{equation} [A,[B,C]]=[[A,B],C]+[B,[A,C]], \end{equation} \begin{equation} [A,B]+[B,A]=\partial \langle A,B\rangle, \end{equation} \begin{equation} [\partial f,A]=0. \end{equation} \end{de.} The differences from the definition of a Courant-Dorfman algebra are the properties related to the algebraic structures of $R$ and $E$. The relation between Poisson brackets on the local functionals and Lie conformal and Poisson vertex algebras is discussed in $\cite{BSK09}$. Denote the coordinates on $T^{*}LM$ by $u^{\alpha}(\sigma)=\{X^{i}(\sigma),P_{i-d}(\sigma)\}^{\alpha}$, where $\alpha=1,...,2d$ and let $u^{\alpha(n)}=\partial^{n}u^{\alpha}$. The local functions can be written as polynomials \begin{equation} a(u^{\alpha},...,u^{\alpha(N)}). \end{equation} We define the total derivative operator by \begin{equation} \partial=u^{\alpha(1)}\frac{\partial}{\partial u^{\alpha}}+\cdots+u^{\alpha(N+1)}\frac{\partial}{\partial u^{\alpha(N)}}. \end{equation} The algebra of these polynomials with the total derivative is called the algebra of differential functions $\mathcal{V}$. When we integrate functions over $S^{1}$, functions of the form $\partial_{\sigma}(\cdots)$ do not contribute, so we can take the quotient $\mathcal{V}/\partial\mathcal{V}$. We denote the image of $a\in\mathcal{V}$ by $\int a\in\mathcal{V}/\partial\mathcal{V}$. A local Poisson bracket on the phase space can be described by \begin{equation} \{u^{\alpha}(\sigma),u^{\beta}(\sigma')\}=H^{\alpha\beta}_{0}(\sigma')\delta(\sigma-\sigma')+H^{\alpha\beta}_{1}(\sigma')\partial_{\sigma'}\delta(\sigma-\sigma')+\cdots+H^{\alpha\beta}_{N}(\sigma')\partial^{N}_{\sigma'}\delta(\sigma-\sigma'). \end{equation} For $a,b\in\mathcal{V}$, we have \begin{equation} \{a(\sigma),b(\sigma')\}=\sum_{m,n}\frac{\partial a(\sigma)}{\partial u^{\alpha(m)}}\frac{\partial b(\sigma')}{\partial u^{\beta(n)}}\partial^{m}_{\sigma}\partial^{n}_{\sigma'}\{u^{\alpha}(\sigma),u^{\beta}(\sigma')\}. \end{equation} Using the Fourier transformation of this Poisson bracket, we get a Poisson vertex algebra. Define the Fourier transformed bracket by \begin{equation} \{a_{\lambda}b\}=\int_{S^{1}}e^{\lambda(\sigma-\sigma')}\{a(\sigma),b(\sigma')\}d\sigma.
\end{equation} This bracket (called a $\lambda$-bracket) with $\mathcal{V}$ and $\partial$ satisfies the axioms of a Lie conformal algebra$\cite{BSK09}$. The algebra of differential functions $\mathcal{V}$ with $\partial$, the $\lambda$-bracket and the multiplication of polynomials on $\mathcal{V}$ is a Poisson vertex algebra. So we can translate the relation between (weak) Courant-Dorfman algebras and currents on the phase space into that between (weak) Courant-Dorfman algebras and Poisson vertex algebras (Lie conformal algebras). From Lie conformal algebras and Poisson vertex algebras, we can make Lie algebras and Poisson algebras using formal power series. For a Lie conformal algebra $W$, $W\otimes\mathbb{C}[[t,t^{-1}]]/Im(\partial+\partial_{t})$ is a Lie algebra with the Lie bracket \begin{equation} [a\otimes t^{m},b\otimes t^{n}]=\sum_{j\in\mathbb{Z}_{+}}\binom{m}{j}(a_{(j)}b)t^{m+n-j}. \end{equation} Moreover, for a Poisson vertex algebra $W$, $W\otimes\mathbb{C}[[t,t^{-1}]]/Im(\partial+\partial_{t})\cdot W\otimes\mathbb{C}[[t,t^{-1}]]$ is a Poisson algebra with the same Lie bracket. If we define a formal distribution $a(z)(a\in W)$ by \begin{equation} a(z):=\sum_{m\in\mathbb{Z}}z^{-1-m}a t^{m} \end{equation} and the formal $\delta$-function \begin{equation} \delta(z-w):=\sum_{m\in\mathbb{Z}}z^{-m-1}w^{m}, \end{equation} then we get \begin{equation} [a(z),b(w)]=\sum_{j\geq0}(a_{(j)}b)(w)\partial^{j}_{w}\delta(z-w). \end{equation} This Lie bracket has a similar form to the bracket of local functions. We can derive the properties of a weak Courant-Dorfman algebra from a Lie conformal algebra by comparing the independent terms of $\lambda$ on both sides of the axioms. Let \begin{equation} [a_{\lambda}b]=\sum_{j\geq0}a_{(j)}b\lambda^{j},\ a_{(0)}b=[a,b],\ [a_{\lambda}b]-[a,b]=\langle a_{\lambda}b\rangle, \end{equation} \begin{equation} \ \langle a,b\rangle =\frac{1}{2}(\langle a_{-\partial}b\rangle +\langle b_{-\partial}a\rangle ). \end{equation} Then the sesquilinearity says that \begin{equation} [\partial a,b]+o(\lambda)=\{\partial a_{\lambda}b\}=\lambda\{a_{\lambda}b\}\Rightarrow[\partial a,b]=0, \end{equation} the skew-symmetry says that \begin{equation} [a,b]+o(\lambda)=\{a_{\lambda}b\}=-\{b_{-\lambda-\partial}a\}=-[b,a]+\partial\langle b_{-\partial}a\rangle +o(\lambda)\Rightarrow[a,b]+[b,a]=\partial\langle a,b\rangle, \end{equation} and the Jacobi-identity says that \begin{equation} [a,[b,c]]+o(\lambda)=[[a,b],c]+[b,[a,c]]+o(\lambda)\Rightarrow[a,[b,c]]=[[a,b],c]+[b,[a,c]]. \end{equation} The right formulas are the conditions of a weak Courant-Dorfman algebra. Moreover, in $\cite{E11}$, a one-to-one correspondence between graded Poisson vertex algebras generated by elements of degree 0 and 1 and Courant-Dorfman algebras is established as Theorem 1. In this case, the $\lambda$-bracket is of the form \begin{equation} \{a_{\lambda}b\}=[a,b]+\lambda \langle a,b\rangle. \end{equation} Substituting this for the axioms of Poisson vertex algebras, we can get the axioms of Courant-Dorfman algebraic structure. 
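For example, combining this $\lambda$-bracket with $\{e_{\lambda}f\}=\langle e,\partial f\rangle$ for $f$ of degree 0 (as in the theorem below), the Leibniz rule gives
\begin{equation}
[e,fe']+\lambda\langle e,fe'\rangle=\{e_{\lambda}f\cdot e'\}=\{e_{\lambda}f\}\cdot e'+f\cdot\{e_{\lambda}e'\}=\langle e,\partial f\rangle e'+f[e,e']+\lambda f\langle e,e'\rangle,
\end{equation}
and the terms independent of $\lambda$ reproduce the Courant-Dorfman axiom $[e,fe']=f[e,e']+\langle e,\partial f\rangle e'$.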
\begin{th.}[{$\cite[Theorem 4.1]{E11}$}] The Poisson vertex algebras that are graded and generated by elements of degree 0 and 1 are in a one-to-one correspondence with the Courant-Dorfman algebras via \begin{equation} W^{0}=R,\ W^{1}=E,\ \partial=\partial \end{equation} \begin{equation} [e_{\lambda}e']=[e,e']+\lambda\langle e,e'\rangle,\ [e_{\lambda}f]=\langle e,\partial f\rangle \end{equation} \end{th.} In the case of $E=TM\oplus T^{*}M$, the associated Poisson vertex algebra can be seen as the algebraic description of Alekseev-Strobl currents$\cite{AS04}$. This correspondence is used to study the duality of currents$\cite{HM12}$, and non-commutative analog is considered $\cite{AFH21}$. \chapter{Definitions and examples of higher Courant-Dorfman algebras} In this chapter, we define higher Courant-Dorfman algebras of degree $n$ and give examples. The definition of these algebras of degree $2$ coincides with that of Courant-Dorfman algebras. Let $R=E^{0}$ be a commutative algebra over a ring $K\supset\mathbb{Q}$, and $E=\oplus_{1\leq i\leq n-1}E^{i}$ be a graded $R$-module, where $E^{i}$ has degree $i$. Define a pairing $\langle ,\rangle:E\otimes E\rightarrow R$ such that $\langle a,b\rangle=0$ unless $|a|+|b|=n$. Consider the graded-commutative algebra freely generated by $E$ and denote it by $\tilde{\mathcal{E}}=(\mathcal{E}^{k})_{k\in\mathbb{Z}}$. We restrict this graded-commutative algebra to the elements of degree $n-1\geq k\geq0$ and denote it by $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$. The pairing $\langle ,\rangle$ can be extended to $\mathcal{E}$ by the Leibniz rule \begin{equation} \langle a,b\cdot c\rangle=\langle a,b\rangle\cdot c+(-1)^{(|a|-n)|b|}b\cdot\langle a,c\rangle. \end{equation} \begin{de.} $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$ is \textit{a higher Courant-Dorfman algebra} of degree $n$ if $\mathcal{E}$ has a differential $d:\mathcal{E}^{k}\rightarrow\mathcal{E}^{k+1}$ which satisfies $d^{2}=0$ and $d(a\cdot b)=(da)\cdot b+(-1)^{|a|}a\cdot (db)$ and a bracket $[,]:\mathcal{E}\otimes\mathcal{E}\rightarrow\mathcal{E}$ of degree $1-n$ which satisfies the following condition: \begin{description} \item[sesquilinearity]: \begin{equation} \langle da,b\rangle=-(-1)^{|a|-n}[a,b], [da,b]=0. \end{equation} \item[skew-symmetry]: \begin{equation} [a,b]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,a]=-(-1)^{|a|}d\langle a,b\rangle, \end{equation} \begin{equation} \langle a,b\rangle=-(-1)^{(|a|-n)(|b|-n)}\langle b,a\rangle. \end{equation} \item[Jacobi identity]: \begin{equation} [a,[b,c]]=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]], \end{equation} \begin{equation} [a,\langle b,c\rangle]=\langle [a,b],c\rangle+(-1)^{(|a|+1-n)(|b|+1-n)}\langle b,[a,c]\rangle, \end{equation} \begin{equation} \langle a,\langle b,c\rangle\rangle=\langle \langle a,b\rangle,c\rangle+(-1)^{(|a|-n)(|b|-n)}\langle b,\langle a,c\rangle\rangle. \end{equation} \item[Leibniz rule]: \begin{equation} [a\cdot b,c]=[a,b]\cdot c+(-1)^{(|a|+1-n)|b|}b\cdot[a,c]. \end{equation} \end{description} \end{de.} Restricting the bracket to $\mathcal{E}^{n-1}\otimes\mathcal{E}^{n-1}\rightarrow\mathcal{E}^{n-1}$, it follows that $\mathcal{E}^{n-1}$ is a Leibniz algebra by the Jacobi identity. \begin{de.} A Leibniz algebra is an $R$-module $E$ with a bracket $[,]:E\otimes E\rightarrow E$ satisfying the Leibniz identity; \begin{equation} [a,[b,c]]=[[a,b],c]+[b,[a,c]]. \end{equation} \end{de.} Next, we define the non-degeneracy condition, and fullness condition, like Courant-Dorfman algebras. 
\begin{de.} The bilinear form $\langle ,\rangle$ gives rise to a map \begin{equation} (-)^{\flat}:E^{i}\rightarrow (E^{n-i})^{\vee}=Hom_{R}(E^{n-i},R) \end{equation} defined by \begin{equation} e^{\flat}(e')=\langle e,e'\rangle. \end{equation} $\langle ,\rangle$ is non-degenerate if $(-)^{\flat}$ is an isomorphism, and a higher Courant-Dorfman algebra is non-degenerate if $\langle ,\rangle$ is strongly non-degenerate. \end{de.} When a higher Courant-Dorfman algebra is non-degenerate, the inverse map is denoted by \begin{equation} (-)^{\sharp}:(E^{i})^{\vee}\rightarrow E^{n-i} \end{equation} and there is a graded-symmetric bilinear form \begin{equation} \{-,-\}:E^{\vee}\otimes_{R}E^{\vee}\rightarrow R \end{equation} defined by \begin{equation} \{\lambda,\mu\}=\langle \lambda^{\sharp},\mu^{\sharp}\rangle. \end{equation} \begin{de.} $\langle ,\rangle$ is full if ,for every $1\leq i\leq n-1$, every $a\in R$ can be written as a finite sum $a=\sum_{j}\langle x_{j},y_{j}\rangle$ with $x_{j}\in E^{i},y\in E^{m-i}$. \end{de.} Define the anchor map \begin{equation} \rho:E^{n-1}\rightarrow\mathfrak{X}=Der(R,R) \end{equation} by setting \begin{equation} \rho(e)\cdot f=\langle e,df\rangle. \end{equation} We can define a Dirac submodule, like an ordinary Couarnt-Dorfman algebra. \begin{de.} Suppose $\mathcal{E}$ is a higher Courant-Dorfman algebra. An $R$-submodule $\mathcal{D}\subset \mathcal{E}$ is said to be a Dirac submodule if $\mathcal{D}$ is isotropic with respect to $\langle ,\rangle$ and closed under $[-,-]$. \end{de.} We give some examples. \begin{ex.} Consider the case $n=2$. In this case, there is an $R$-module $E^{1}$, a pairing $\langle ,\rangle:E^{1}\otimes E^{1}\rightarrow R$, a derivation $d:R\rightarrow E^{1}$, and three brackets $[,]:R\otimes E^{1}\rightarrow R$,$[,]:E^{1}\otimes R\rightarrow R$,and $[,]:E^{1}\otimes E^{1}\rightarrow E^{1}$. From the sesquilinearity, we get $[e,f]=\langle e,df\rangle, [f,e]=-\langle df,e\rangle$. For other operations, one can see that the above definition reduces to the definition of a Courant-Dorfman algebra. \end{ex.} \begin{ex.} Given a commutative algebra $R$, let $E^{n-1}=\mathfrak{X}=Der(R,R),E^{1}=\Omega^{1}(K\ddot{a}hler\ differential)$. In this case, $\mathcal{E}^{n-1}=\mathfrak{X}\oplus\Omega^{n-1}$. It becomes a higher Courant-Dorfman algebra with respect to \begin{equation} \langle v,\alpha\rangle=\iota_{v}\alpha, \end{equation} \begin{equation} [v,\alpha]=L_{v}\alpha, [\alpha,v]=d(\iota_{v}\alpha)-L_{v}\alpha, \end{equation} \begin{equation} [v_{1},v_{2}]=\iota_{v_{1}}\iota_{v_{2}}\omega\ (\omega\in\Omega^{n+1,cl}). \end{equation} and $d$ is the de-Rham differential on $\Omega^{i}$. In the case of $R=C^{\infty}(M)$, $\mathcal{E}^{n-1}=TM\oplus\wedge^{n-1}T^{*}M$, and the bracket $[,]$ is called a higher Dorfman bracket. \end{ex.} \begin{ex.} Let $(\mathcal{M},\omega,\Theta)$ be a degree $n$ dg symplectic manifold and $C=C^{n-1}(C^{\infty}(\mathcal{M}))=\{f\in C^{\infty}(\mathcal{M}:|f|\leq n-1\}$. This is a higher Courant-Dorfman algebra with \begin{equation} [a,b]=\{\{a,\Theta\},b\},\ \langle a,b\rangle=\{a,b\},\ da=\{\Theta,a\}. \end{equation} In the previous example, the higher Courant-Dorfman algebra on $\mathcal{E}^{n-1}=TM\oplus\wedge^{n-1}T^{*}M$ coincides the algebra on $C=C^{n-1}(C^{\infty}(T^{*}[n]T[1]M))$. \end{ex.} \begin{ex.} As a variant of Example 2, we can replace $\mathfrak{X}$ by a Lie-Rinehart algebra $(R,L)$ and let $E^{n-1}=L,E^{1}=\Omega^{1}$. In this case, $\mathcal{E}^{n-1}=L\oplus\Omega^{n-1}$. 
It becomes a higher Courant-Dorfman algebra with respect to \begin{equation} \langle a,\alpha\rangle=\iota_{\rho(a)}\alpha, \end{equation} \begin{equation} [v,\alpha]=L_{\rho(a)}\alpha, [\alpha,v]=d(\iota_{\rho(a)}\alpha)-L_{\rho(a)}\alpha, \end{equation} \begin{equation} [v_{1},v_{2}]=\iota_{(\rho(v_{1}))}\iota_{(\rho(v_{2}))}\omega\ (\omega\in\Omega^{n+1,cl}), \end{equation} and $d$ is the de-Rham differential on $\Omega^{i}$. \end{ex.} In order to focus on the relation with higher Poisson vertex algebras, we should define extended higher Courant-Dorfman algebras, relaxing the condition on $\langle ,\rangle$. \begin{de.} Let $R=E^{0}$ be a commutative algebra, and $E=E^{i}(1\leq i\leq n-1)$ be a graded $R$-module. Consider the graded-commutative algebra freely generated by $E$ and denote it by $\tilde{\mathcal{E}}=(\mathcal{E}^{k})_{k\in\mathbb{Z}}$. We restrict this graded-commutative algebra to the elements of degree $n-1\geq k\geq0$ and denote it by $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$. $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$ is \textit{an extended higher Courant-Dorfman algebra} of degree $n$ if $\mathcal{E}$ has a differential $d:\mathcal{E}^{k}\rightarrow\mathcal{E}^{k+1}$ which satisfies $d^{2}=0$ and $d(a\cdot b)=(da)\cdot b+(-1)^{|a|}a\cdot (db)$, a pairing $\langle ,\rangle :\mathcal{E}\otimes\mathcal{E}\rightarrow\mathcal{E}$ of degree $-n$ and a bracket $[,]:\mathcal{E}\otimes\mathcal{E}\rightarrow\mathcal{E}$ of degree $1-n$ which satisfies the sesquilinearity, skew-symmetry, Jacobi identity, and Leibniz rule. \end{de.} The difference with a higher Courant-Dorfman algebra is that an extended Courant-Dorfman algebra allows the pairing $\langle,\rangle:E^{i}\otimes E^{j}\rightarrow E^{i+j-n}$ with $i+j\geq n+1$. From the viewpoint of graded geometry, these algebras include the case that the base manifold is a graded manifold. \chapter{Non-degenerate higher Courant-Dorfman algebras and degree $n$ dg symplectic manifolds} In this chapter, we consider the case that $\langle ,\rangle$ is non-degenerate, and study the relationship between the algebras and functions of degree $n$ dg symplectic manifolds. We construct a graded Poisson algebra of degree $-n$, generalizing the Keller-Waldman Poisson algebras$\cite{KW08}$. We assume that each $E^{i}$ is a projective, finitely generated module over $R$, and that $\langle ,\rangle $ is non-degenerate and full. \begin{de.} We assume $r\geq n$. $\mathcal{C}^{r}(\mathcal{E})\subset\oplus_{1\leq j\leq n-1}\oplus_{1\leq k\leq r-j}\oplus_{\sum^{k}_{t=1}i_{t}=r-j}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{k}},E^{j})$ consists of elements $C$ for which there exists a $K$-multilinear map \begin{equation} \sigma_{C}\in \oplus_{1\leq l\leq r-n}\oplus_{\sum^{l}_{t'=1} i_{t'}=r-n}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{l}},\mathfrak{X}), \end{equation} satisfying the following conditions: (1)For all $x_{1},...,x_{l},u,w\in E$, we have \begin{equation} \sigma_{C}(x_{1},...,x_{l})\langle u,w\rangle=\langle C(x_{1},...,x_{l},u),w\rangle+\langle u,C(x_{1},...,x_{l},w)\rangle. \end{equation} (2)For all $x_{1},...,x_{k},u\in E$, we have \begin{align} &\langle C(x_{1},...x_{i},x_{i+1},...,x_{k})-(-1)^{(|x_{i}|-n)(|x_{i+1}|-n)}C(x_{1},....,x_{i+1},x_{i},...,x_{k}),u\rangle \notag \\ &=\sigma_{C}(x_{1},...,x_{i-1},x_{i+2},....,x_{k})\langle x_{i},x_{i+1}\rangle. 
\end{align} Furthermore, $\mathcal{C}^{0}(\mathcal{E})=R, \mathcal{C}^{i}(\mathcal{E})=\mathcal{E}^{i}$ for $1\leq i\leq n-1$ and define \begin{equation} \mathcal{C}^{\bullet}(\mathcal{E})=\oplus_{r\geq0}\mathcal{C}^{r}(\mathcal{E}). \end{equation} We call $\sigma_{C}$ the symbol of $C$. \end{de.} Define $d_{C}\in\oplus_{1\leq l\leq r-n-k}\oplus_{\sum^{l}_{t'=1} i_{t'}=r-n-k}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{l}},\mathfrak{X}\otimes E^{k})$ by \begin{equation} \langle d_{C}(x_{1},...,x_{l})a,y\rangle:=\sigma_{C}(x_{1},...,x_{l},y)a. \end{equation} We can use instead of elements $C\in\mathcal{C}^{r}(\mathcal{E})$ $K$-multilinear forms $\omega\in \oplus_{1\leq k\leq r}\oplus_{\sum^{k}_{t=1}i_{t}=r}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{k}},R)$ defined by $\omega(x_{1},...,x_{t})=\langle C(x_{1},...,x_{t-1}),x_{t}\rangle $. \begin{de.} For $r\geq1$ the subspace $\Omega^{r}_{\mathcal{C}}(\mathcal{E})\subset\oplus_{1\leq k\leq r}\oplus_{\sum^{k}_{t=1}i_{t}=r}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{k}},R)$ consists of elements $\omega$ satisfying the following conditions; (1) \begin{equation} \omega(x_{1},...,ax_{k})=a\omega(x_{1},...,x_{k}), \end{equation} for all $a\in R$. (2)For $r\geq2$, there exists a multilinear map, \begin{equation} \sigma_{\omega}\in \oplus_{1\leq l\leq r-n}\oplus_{\sum^{l}_{t'=1} i_{t'}=r-n}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{l}},\mathfrak{X}), \end{equation} such that \begin{align} &\omega(x_{1},...x_{i},x_{i+1},...,x_{k})-(-1)^{(|x_{i}|-n)(|x_{i+1}|-n)}\omega(x_{1},....,x_{i+1},x_{i},...,x_{k}) \notag \\ &=\sigma_{\omega}(x_{1},...^{\wedge^{i}}...^{\wedge^{i+1}},x_{k})\langle x_{i},x_{i+1}\rangle. \end{align} \end{de.} By the non-degeneracy of $\langle,\rangle$, we get the following Lemma: \begin{le.} There is an isomorphism of graded $R$-modules \begin{equation} \mathcal{C}^{\bullet}(\mathcal{E})\rightarrow \Omega^{\bullet}_{\mathcal{C}}(\mathcal{E}), \end{equation} given by \begin{equation} \omega(x_{1},...,x_{t})=\langle C(x_{1},...,x_{t-1}),x_{t}\rangle. \end{equation} \end{le.} \begin{pr.} The map \begin{equation} [,]:\mathcal{C}^{r}(\mathcal{E})\otimes\mathcal{C}^{s}(\mathcal{E})\rightarrow\mathcal{C}^{r+s-n}(\mathcal{E}), \end{equation} defined by \begin{equation} \label{l1} [a,b]=0, [a,x]=0=[x,a], [x,y]=\langle x,y\rangle, [D,a]=\sigma_{D}a=-[a,D], \end{equation} \begin{equation} \label{l2} [C,x]=\iota_{x}C=-(-1)^{(r+n)(|x|+n)}[x,C], \end{equation} for elements $a,b\in R, x,y\in\mathcal{C}^{s}(\mathcal{E})$ for $s\leq n$, $D\in\mathcal{C}^{n}(\mathcal{E}),C\in\mathcal{C}^{r}(\mathcal{E})$ for $r\geq n$, and by the recursion, \begin{equation} \label{r} \iota_{x}[C_{1},C_{2}]=[[C_{1},C_{2}],x]=[C_{1},[C_{2},x]]-(-1)^{(|C_{1}|+n)(|C_{2}|+n)}[C_{2},[C_{1},x]], \end{equation} is well-defined and makes $\mathcal{C}^{\bullet}(\mathcal{E})$ a graded Lie algebra. \end{pr.} \begin{proof} It suffices to show that the recursion ($\ref{r}$) is consistent with ($\ref{l1}$) and ($\ref{l2}$), that $[C_{1},C_{2}]\in\mathcal{C}^{r+s-n}(\mathcal{E})$, and that the bracket satisfies the conditions for a graded Lie algebra. The consistency can be checked as follows: \begin{align} [[D,x],y]&=\langle D(x),y\rangle=(-1)^{(|x|-n)(|y|-n)}\langle D(y),x\rangle+\sigma_{C}\langle x,y\rangle \notag \\ &=(-1)^{(|x|-n)(|y|-n)}[[D,y],x]+[D,[x,y]]. 
\end{align} \begin{align} [[C,x],y]&=\iota_{y}\iota_{x}C=(-1)^{(|x|-n)(|y|-n)}\iota_{x}\iota_{y}C+d_{C}\langle x,y\rangle \notag \\ &=(-1)^{(|x|-n)(|y|-n)}[[C,y],x]+[[C,[x,y]]. \end{align} Next, we check that $[C_{1},C_{2}]$ is an element in $\mathcal{C}^{r+s-n}(\mathcal{E})$. For $N\leq 2n-1$, the claim is clear. For $N=2n$, we consider three cases. If $a\in R$ and $C\in\mathcal{C}^{2n}(\mathcal{E})$, then $[C,a]=d_{C}a\in\mathcal{C}^{n}(\mathcal{E})$ and \begin{equation} [[C,a],b]=[C,[a,b]]-(-1)^{n}[a,[C,b]]. \end{equation} If $x\in E^{i}$ and $C\in\mathcal{C}^{2n-i}(\mathcal{E})$, then $[C,a]=d_{C}a\in\mathcal{C}^{n}(\mathcal{E})$ and \begin{equation} [[C,x],a]=[C_{1},[x,a]]-(-1)^{(n-i)(i+n)}[x,[C,a]]. \end{equation} If $D_{1},D_{2}\in\mathcal{C}^{n}(\mathcal{E})$, then $[D_{1},D_{2}]\in\mathcal{C}^{r}(\mathcal{E})$ with \begin{equation} \sigma_{[D_{1},D_{2}]}a=\sigma_{D_{1}}\sigma_{D_{2}}a-\sigma_{D_{2}}\sigma_{D_{1}}a, \end{equation} \begin{equation} [[D_{1},D_{2}],a]=[D_{1},[D_{2},a]]-[D_{2},[D_{1},a]]. \end{equation} Let $C_{1}\in\mathcal{C}^{r}(\mathcal{E}),C_{2}\in\mathcal{C}^{s}(\mathcal{E})$ with $r+s\geq2n+1$. Consider a map $h:R\rightarrow\mathcal{C}^{r+s-2n}(\mathcal{E})$ defined by \begin{equation} h(a)=[C_{1},[C_{2},a]]-(-1)^{(r+n)(s+n)}[C_{2},[C_{1},a]]. \end{equation} Then $[C_{1},C_{2}]\in\mathcal{C}^{r+s-n}(\mathcal{E})$ and the symbol is \begin{equation} \sigma_{[C_{1},C_{2}]}(x_{1},...,x_{t})a=\langle h(a)(x_{1},...,x_{t-1}),x_{t}\rangle. \end{equation} The skew symmetry is clear by the construction. We check the Jacobi identity. It suffices to show \begin{equation} J(C_{1},C_{2},C_{3}):=[[C_{1},C_{2}],C_{3}]-[C_{1},[C_{2},C_{3}]]-(-1)^{(|C_{1}|+n)(|C_{2}+n|)}[C_{2},[C_{1},C_{3}]]=0. \end{equation} We prove the claim by induction for $N=\sum|C^{i}|$. For $1\leq N\leq 2n$, it is clear. By the recursion, \begin{align} &[J(C_{1},C_{2},C_{3}),x] \notag \\ =&(-1)^{(|C_{2}-n|)(|x|-n)+(|C_{3}|-n)(|x|-n)}J([C_{1},x],C_{2},C_{3}) \notag \\ +&(-1)^{(|C_{3}|-n)(|x|-n)}J(C_{1},[C_{2},x],C_{3})+J(C_{1},C_{2},[C_{3},x]) \end{align} and by induction, we obtain $[J(C_{1},C_{2},C_{3}),x]=0$. For $N\geq2n+1$, $|J(C_{1},C_{2},C_{3})|\geq1$, therefore we conclude $J(C_{1},C_{2},C_{3})=0$. \end{proof} \begin{pr.} There exists an associative, graded-commutative $K$-bilinear product $\wedge$ of degree 0 on $\mathcal{C}^{\bullet}(\mathcal{E})$ uniquely defined by \begin{equation} a\wedge b=ab=b\wedge a, a\wedge x=ax=x\wedge a, \end{equation} for $a,b\in R$ and $x\in E$ and by the recursion rule \begin{equation} [C_{1}\wedge C_{2},x]=(-1)^{(r-n)s}C_{2}\wedge[C_{1},x]+C_{1}\wedge[C_{2},x]. \end{equation} \end{pr.} \begin{proof} We prove that \begin{equation} (x_{1},...,x_{t})\rightarrow[C_{1}\wedge C_{2},x_{1}](x_{2},...,x_{t}), \end{equation} is an element in $\mathcal{C}^{r+s}(\mathcal{E})$, and that \begin{equation} [C_{1}\wedge C_{2},a]=(-1)^{(r-n)s}C_{2}\wedge[C_{1},a]+C_{1}\wedge[C_{2},a]. \end{equation} If $N\leq n$, the claim is clear. If $N=r+s\geq n+1$, the map \begin{equation} h(a)=(-1)^{(r-n)s}C_{2}\wedge[C_{1},a]+C_{1}\wedge[C_{2},a], \end{equation} is $d_{C_{1}\wedge C_{2}}$. \end{proof} \begin{th.} $(\mathcal{C}^{\bullet}(\mathcal{E}),[,],\wedge)$ is a graded Poisson algebra of degree $-n$. \end{th.} \begin{proof} It suffices to show the Leibniz rule \begin{equation} [C_{1}\wedge C_{2},x]=(-1)^{(r-n)s}C_{2}\wedge[C_{1},C_{3}]+C_{1}\wedge[C_{2},C_{3}]. \end{equation} We can check by direct calculations and the recursions. 
\end{proof} Since $\mathcal{C}^{\bullet}(\mathcal{E})\simeq \Omega^{\bullet}_{\mathcal{C}}(\mathcal{E})$, we can define a graded Poisson algebraic structure on $\Omega^{\bullet}_{\mathcal{C}}(\mathcal{E})$. This bracket is an extension of $\{-,-\}:E^{\vee}\otimes_{R}E^{\vee}\rightarrow R$. We can construct $m\in\Omega^{\bullet}_{C}(\mathcal{E})\simeq\mathcal{C}^{r}(\mathcal{E})$ from the map $\phi:\mathcal{E}^{i_{1}}\otimes\mathcal{E}^{i_{2}}\otimes\cdots\otimes\mathcal{E}^{i_{m}}\rightarrow\mathcal{E}^{i_{1}+\cdots+i_{m}-mn+r}$ by \begin{equation} \omega_{\phi}(e_{1},e_{2},...,e_{k})=\langle \cdots\langle \phi(e_{1},...,e_{m}),e_{m+1}\rangle\cdots\rangle,e_{k}\rangle. \end{equation} Let $\phi$ be the bracket of the higher Courant-Dorfman algebra. Then, $\omega_{\phi}$ satisfies $|\omega_{\phi}|=n+1$ and $[\omega_{\phi},\omega_{\phi}]=0$ and the map $[\omega_{\phi},-]$ is degree 1 and squares to 0, so it defines a differential on $\mathcal{C}^{\bullet}(\mathcal{E})$. This is a higher derived bracket of this algebra $\cite{V03}$. Next, we define another Poisson algebra $\mathcal{R}^{\bullet}(\mathcal{E})$ generalizing the Rothstein algebra. \begin{de.} A connection $\nabla$ for the graded module $E=(E^{i})$ is a map $\nabla:\mathfrak{X}\times E\rightarrow E$ of degree 0 such that \begin{equation} \nabla_{aD}x=a\nabla_{D}x, \end{equation} \begin{equation} \nabla_{D}(ax)=a\nabla_{D}x+D(a)x, \end{equation} for all $a\in R$ $x,y\in E$ and $D\in\mathfrak{X}$. If $\langle ,\rangle :E\otimes E\rightarrow R$ is a $K$-bilinear form, then $\nabla$ is called metric if in addition \begin{equation} D\langle x.y\rangle =\langle \nabla_{D}x,y\rangle+\langle x,\nabla_{D}y\rangle, \end{equation} for all $x,y\in E$ and $D\in\mathfrak{X}$. \end{de.} If each $E^{i}$ is finitely generated and projective then it allows for a connection $\nabla$. If $\langle ,\rangle :E\otimes E \rightarrow R$ is non-degenerate, then $\nabla$ can be chosen to be a metric connection. Indeed, if $\tilde{\nabla}$ is any connection and $\langle ,\rangle$ is strongly non-degenerate then $\nabla$ defined by \begin{equation} \langle \nabla_{D}x,y\rangle=\frac{1}{2}(\langle \tilde{\nabla}_{D}x,y\rangle-\langle x,\tilde{\nabla}_{D}y\rangle+D\langle x,y\rangle) \end{equation} is a metric connection. \begin{de.} The higher Rothstein algebra is defined as a graded symmetric algebra by \begin{equation} \mathcal{R}^{\bullet}(\mathcal{E})=\mathrm{Sym}(\oplus_{1\leq i\leq n-1}E^{i}[-i]\oplus\mathfrak{X}[-n]). \end{equation} \end{de.} Next we introduce the curvature of $\nabla$. A given connection for $E$ extends to $\mathrm{Sym}(E)$ by imposing the Leibniz rule. Thus we can consider \begin{equation} R(D_{1},D_{2})\xi:=\nabla_{D_{1}}\nabla_{D_{2}}\xi-\nabla_{D_{2}}\nabla_{D_{1}}\xi-\nabla_{[D_{1},D_{2}]}\xi, \end{equation} for $D_{i}\in\mathfrak{X}$ and $\xi\in \mathrm{Sym}(E)$. It defines an element \begin{equation} R(D_{1},D_{2})\in \mathrm{End}(\mathrm{Sym}(E)). \end{equation} Restricting $R(D_{1},D_{2})$ to $E$ gives a map $R(D_{1},D_{2}):E\rightarrow E$. For $x\in E^{i}$ and $y\in E^{n-i}$, \begin{equation} \langle R(D_{1},D_{2})x,y\rangle=(-1)^{i(n-i)}\langle R(D_{1},D_{2})y,x\rangle. \end{equation} $E^{i}$ is projective and finitely generated, so using the strongly non-degenerate inner product $\langle ,\rangle$ on $E$ we can define $r(D_{1},D_{2})\in Sym^{2}E|_{deg=n}$ by \begin{equation} R(D_{1},D_{2})x=\langle r(D_{1},D_{2}),x\rangle. \end{equation} With this preparation a Poisson structure can now be defined. 
\begin{th.} Let $\nabla$ be a metric connection on $E$. Then there exists a unique graded Poisson structure $\{-,-\}_{R}$ on $\mathcal{R}^{\bullet}(\mathcal{E})$ of degree $-n$ such that \begin{align} \{a,b\}_{R}&=0=\{a,x\}_{R},\\ \{x,y\}_{R}&=\langle x,y\rangle=-(-1)^{(|x|-n)(|y|-n)}\{y,x\}_{R}, \\ \{D,a\}_{R}&=-D(a)=-\{a,D\}_{R},\\ \{D,x\}_{R}&=-\nabla_{D}x=-\{x,D\}_{R},\\ \{D_{1},D_{2}\}_{R}&=-[D_{1},D_{2}]-r(D_{1},D_{2})=-\{D_{2},D_{1}\}_{R}, \end{align} for $a,b\in R$, $x,y\in E$ and $D_{1},D_{2}\in\mathfrak{X}$. \end{th.} \begin{proof} We can extend the bracket $\{\}_{R}$ to $\mathcal{R}(\mathcal{E})$ by the Leibniz rule from the above definition. The skew symmetry is clear by construction. Jacobi identity follows from the following Bianchi identity. \begin{align} &\ &\nabla_{D_{1}}r(D_{2},D_{3})+\nabla_{D_{2}}r(D_{3},D_{1})+\nabla_{D_{3}}r(D_{1},D_{2}) \notag \\ &+&r(D_{1},[D_{2},D_{3}])+r(D_{2},[D_{3},D_{1}])+r(D_{3},[D_{1},D_{2}])=0. \end{align} \end{proof} Next, we find the relation between $\mathcal{R}^{\bullet}(\mathcal{E})$ and $\mathcal{C}^{\bullet}(\mathcal{E})$. \begin{de.} Let the $R$-linear map $\mathcal{J}:\mathcal{R}^{\bullet}(\mathcal{E})\rightarrow\mathcal{C}^{\bullet}(\mathcal{E})$ be defined by \begin{equation} \mathcal{J}(a)=a,\mathcal{J}(x)=x,\mathcal{J}(D)=-\nabla_{D} \end{equation} for $a\in R,x\in E$ and $D\in\mathfrak{X}$ and extend by the Leibniz rule. \end{de.} \begin{pr.} (1) The map $\mathcal{J}$ is a homomorphism of Poisson algebras. (2) Let $\phi\in\mathcal{R}^{\bullet}(\mathcal{E})$ with $r\geq n$, then \begin{equation} \mathcal{J}(\phi)(x_{1},...,x_{k})=\{\{...\{\phi,x_{1}\}_{R},...\}_{R},x_{k}\}_{R}, \end{equation} and \begin{equation} \sigma_{\mathcal{J}(\phi)}(x_{1},...,x_{k-1})a=\{\{...\{\phi,x_{1}\}_{R},...\}_{R},x_{k-1}\}_{R},a\}_{R}, \end{equation} for all $x_{i}\in E$ and $a\in R$. \end{pr.} \begin{proof} (1):From the definition this is obvious for generators and it is true for all $\mathcal{R}^{\bullet}(\mathcal{E})$ by the Leibniz rule. (2)$[\mathcal{J}(\phi),x]=[\mathcal{J}(\phi),\mathcal{J}(x)]=\mathcal{J}(\{\phi,x\}_{R})$ and induction for $k$. \end{proof} \begin{le.} Let $\phi\in\mathcal{R}^{r}(\mathcal{E})$ with $r\geq 1$, then \begin{equation} \mathcal{J}(\phi)(x_{1},...,x_{k})=\{\{...\{\phi,x_{1}\}_{R},...\}_{R},x_{k}\}_{R}=0, \end{equation} if and only if $\phi=0$. \end{le.} \begin{proof} It is true for $r=1,...,n-1$ due to the non-degeneracy of $\langle ,\rangle$. Suppose for it is true for $1,2,...,r-1$. For $\phi\in\mathcal{R}^{r}(\mathcal{E})$, we have $|\{\phi,x\}_{R}|<r$, so it satisfies the condition if and only if $\{\phi,x\}_{R}=0$ Then \begin{equation} \{\phi,\langle x,y\rangle\}=\{\phi,\{x,y\}\}=\{\{\phi,x\},y\}+(-1)^{(\phi+n)(|x|+n)}\{x,\{\phi,y\}\}=0, \end{equation} and due to fullness $\{\phi,a\}_{R}=0$. Then, $\phi=0$. \end{proof} \begin{co.} Let $\hat{\mathcal{C}}^{\bullet}(\mathcal{E})$ be the subalgebra of $\mathcal{C}^{\bullet}(\mathcal{E})$ generated by $R,E$ and $\mathcal{C}^{n}(\mathcal{E})$. Then $\hat{\mathcal{C}}^{\bullet}(\mathcal{E})$ is closed under the bracket $[,]$ and $\mathcal{J}$ is an isomorphism of Poisson algebras \begin{equation} \mathcal{J}:\mathcal{R}^{\bullet}(\mathcal{E})\rightarrow\hat{\mathcal{C}}^{\bullet}(\mathcal{E}). \end{equation} \end{co.} \begin{proof} $\mathcal{J}$ is injective due to the above lemma. 
If $D\in\mathcal{C}^{n}(\mathcal{E})$ we can define an element $\xi\in Sym(E)|_{deg.=n}$ by $\langle \xi,x\rangle=D(x)-\Delta_{\sigma_{D}}x$, hence $D\in\mathcal{J}(\mathcal{R}^{n}(\mathcal{E}))$, therefore $\mathcal{C}^{n}(\mathcal{E})\simeq\mathcal{R}^{n}(\mathcal{E})$. \end{proof} \begin{le.} We have $\hat{\mathcal{C}}^{n+1}(\mathcal{E})=\mathcal{C}^{n+1}(\mathcal{E})$. \end{le.} \begin{proof} Let $C\in\mathcal{C}^{n+1}(\mathcal{E})$ and let $d_{C}\in Der(R,E^{1})$ be given $\langle d_{C}r,x\rangle =\sigma_{C}(x)r$. We can find $D^{1},...,D^{n}\in\mathfrak{X}$ and $e_{1},...,e_{n}\in\mathcal{E}$ such that $d_{C}(r)=D^{i}(r)e_{i}$. Let $T=C-\nabla_{D^{i}}\wedge e_{i}$. Then, $T\in\mathcal{E}^{n+1}$. \end{proof} Let $m\in\mathcal{C}^{n+1}(\mathcal{E})$ with $[m,m]=0$. Then $\delta_{m}=[m,-]$ squares to 0, and we get a subcomplex $\hat{\mathcal{C}}^{\bullet}(\mathcal{E})$. This complex is isomorphic to $\mathcal{R}^{\bullet}(\mathcal{E})$ with differential $\delta_{\mathcal{J}(m)}=[\mathcal{J}(m),-]$. When $R=C^{\infty}(M)$ and $E^{i}=\Gamma(M,F^{i})$ for a graded vector bundle $F^{i}\rightarrow M$, this Poisson algebra is isomorphic to the associated dg symplectic manifold$(\mathcal{M},\omega,\Theta)$. \begin{le.} Let $(F^{i}(1\leq i\leq n-1))$ be a graded bundle over a smooth manifold $M$, and $\langle ,\rangle:F^{i}\otimes F^{n-i}\rightarrow C^{\infty}(M)$ a fiberwise non-degenerate graded-symmetric bilinear form. Degree $n$ graded symplectic manifolds (with a choice of splitting in the sense of Remark$\ref{split}$) are in one-to-one correspondence with such graded vector bundles with $\langle ,\rangle$. \end{le.} \begin{proof} Any graded manifold is noncanonically diffeomorphic to a graded manifold associated to a graded vector bundle($\cite{GN12}$,Theorem 1). Let $(\mathcal{M},\omega)$ be a degree $n$ symplectic manifold and let $F^{i}$ the associated graded vector bundle. Then, $E^{n}=\Gamma(TM)$ and the Poisson bracket of degree $-n$ induced by $\omega$ is an extension of $\langle ,\rangle$ as a derivation. (In this case $C^{\infty}(\mathcal{M})\simeq\mathcal{R}^{\bullet}(\mathcal{E})$.) \end{proof} \begin{re.} \label{split} The diffeomorphism between a graded manifold and a graded manifold associated to a graded vector bundle is noncanonical. Denote the algebra of degree $i$ functions of a graded manifold $\mathcal{M}$ by $\mathcal{A}^{i}$. There exists a short exact sequence \begin{equation} \xymatrix{ 0 \ar[r] & (\mathcal{A}^{1})^{2} \ar[r] & \mathcal{A}^{2} \ar[r] & \Gamma(F^{2}) \ar[r] & 0, } \end{equation} where $F^{2}$ is a vector bundle over the base manifold $M$ of $\mathcal{M}$. Fixing a splitting, we can identify $\mathcal{A}^{2}$ with $(\mathcal{A}^{1})^{2}\oplus\Gamma(F^{2})$. For $\mathcal{A}^{i}(i\geq 2)$, we can choose such a splitting. Thus graded manifolds with a choice of splittings are in one-to-one correspondence with graded vector bundles. \end{re.} \begin{th.} Let $(R,E^{i}(1\leq i\leq n-1),\langle ,\rangle,d,[-,-])$ be a higher Courant-Dorfman algebra. Suppose $R=C^{\infty}(M)$ for a smooth manifold $M$, and each $E^{i}=\Gamma(F^{i})$ for a graded vector bundle $F^{i}$ over M. Degree $n$ dg symplectic manifolds are in one-to-one correspondence with higher Courant-Dorfman algebras of these types. \end{th.} \begin{proof} Let $(\mathcal{M},\omega)$ be a degree $n$ symplectic manifold corresponding to $(E^{i},\langle ,\rangle)$, with $\mathcal{A}$ its graded Poisson algebra of polynomial functions. 
Then $\mathcal{A}^{0}=C^{\infty}(M)$ and $\mathcal{A}^{i}=\mathcal{E}^{i}$ for $1\leq i\leq n-1$, and $\{-,-\}$ restricted to $\mathcal{A}^{i}$ is an extension of $\langle ,\rangle$. Let $\Theta\in\mathcal{A}^{n+1}$ satisfy $\{\Theta,\Theta\}=0$. Given arbitrary $e,e_{1},e_{2}\in\mathcal{A}^{i}$, define a differential $d$ and bracket $[,]$ by \begin{equation} d(e)=\{\Theta,e\}, [e_{1},e_{2}]=\{\{e_{1},\Theta\},e_{2}\}. \end{equation} This construction gives a higher Courant-Dorfman algebra. Conversely, given a higher Courant-Dorfman algebraic structure on $(E^{i},\{,\})$, we can define $\Theta=\mathcal{J}(\omega_{\phi})$. Locally, $\Theta$ can be written as follows. In a Darboux chart $(\xi^{a(k)})=(q^{a(l)},p^{a(n-l)})(1\leq k\leq n, 1\leq l\leq\lfloor\frac{n}{2}\rfloor)$, corresponding to a chart $(x_{i})$ on $M$ and a local basis $e^{a(k)}$ of sections of $E^{k}$ such that $\langle e^{a(k)},e^{b(n-k)}\rangle=\delta^{ab}$ \begin{equation} \Theta=\sum_{\sum i_{t}=n+1}\phi(q)\xi^{a_{1}(i_{1})}\cdots\xi^{a_{m}{i_{m}}} \end{equation} \begin{equation} \phi(q)=\langle \cdots\langle [e^{a_{1}(n-i_{1})}, e^{a_{2}(n-i_{2})}],e^{a_{3}(n-i_{3})}\rangle ,\cdots,e^{a_{m}(n-i_{m})}\rangle . \end{equation} This satisfies $\{\Theta,\Theta\}=0$ due to the properties of a higher Coutant-Dorfman algebra. \end{proof} \chapter{Higher PVAs from higher Courant-Dorfman algebras} In this chapter, we define higher PVAs corresponding to higher Courant-Dorfman algebras and check these algebras have a PVA-like property. In particular, a tensor product of a higher PVA and an arbitrary dgca has a structure of degree 0 graded Poisson algebra. First, we define higher Lie conformal algebras and derive properties of higher weak Courant-Dorfman algebras, in a similar way that we derive the properties of Courant-Dorfman algebras from Lie conformal algebras. \begin{de.} \textit{A higher Lie conformal algebra} of degree $n$ is a graded $\mathbb{C}[d]$-module $W=\oplus_{m\in\mathbb{Z}_{\geq0}} W^{m}$(i.e. $d$ acts on elements of $W$) with $|d|=1$, which has a degree $1-n$ map which we call $\Lambda$-bracket $[_{\Lambda}]:W\otimes W\rightarrow W[\Lambda]$ with $|\Lambda|=1$ which satisfy the conditions. (Here, $\Lambda$ is an indeterminate.) \begin{description} \item[Sesquilinearity] \begin{equation} [da_{\Lambda}b]=-(-1)^{-n}\Lambda[a_{\Lambda}b], [a_{\Lambda}db]=-(-1)^{|a|-n}(d+\Lambda)[a_{\Lambda}b] \end{equation} \item[Skewsymmetry] \begin{equation} [a_{\Lambda}b]=-(-1)^{(|a|+1-n)(|b|+1-n)}[b_{-\Lambda-d}a] \end{equation} \item[Jacobi identity] \begin{equation} [a_{\Lambda}[b_{\Gamma}c]]=[[a_{\Lambda}b]_{\Lambda+\Gamma}c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b_{\Gamma}[a_{\Lambda}c]]. \end{equation} \end{description} \end{de.} We derive the properties of a higher weak Courant-Dorfman algebra from a higher Lie conformal algebra. The $\Lambda$-bracket is of the form \begin{equation} [a_{\Lambda}b]=\sum_{j\geq0}\Lambda^{j}a_{(j)}b\ (a_{(j)}b\in W^{|a|+|b|+1-n-j}). \end{equation} Let \begin{equation} [a,b]=a_{(0)}b,\ \langle a_{\Lambda}b\rangle=\sum_{j\geq1}\Lambda^{j}a_{(j)}b. \end{equation} \begin{equation} \langle a,b\rangle=\langle a_{-d}b\rangle. \end{equation} Then we derive the properties of a higher Courant-Dorfman algebra by comparing the independent terms of $\Lambda$ on the both sides of the axioms. 
From the sesquilinearity, we can get \begin{equation} [da,b]+o(\Lambda)=\{da_{\Lambda}b\}=(-1)^{-n}\Lambda\{a_{\Lambda}b\}\Rightarrow[da,b]=0, \end{equation} from the skew-symmetry, we can get \begin{align} [a,b]+o(\Lambda)&=\{a_{\Lambda}b\}=-(-1)^{(|a|+1-n)(|b|+1-n)}\{b_{-\Lambda-d}a\} \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)}([b,a]+d\langle b_{-d}a\rangle)+o(\Lambda) \notag \\ \Rightarrow&[a,b]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,a]=(-1)^{(|a|+1-n)(|b|+1-n)}d\langle b,a\rangle, \end{align} and from the Jacobi-identity, we can get \begin{align} [a,[b,c]]+o(\Lambda)&=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]]+o(\Lambda) \notag \\ \Rightarrow[a,[b,c]]&=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]]. \end{align} These are properties of a higher weak Courant-Dorfman algebras. \begin{de.} \textit{A higher weak Courant-Dorfman algebra} of degree $n$ consists of the following data: \begin{itemize} \item a graded vector space $E=(E^{i})$, \item a graded symmetric bilinear form of degree $-n$ $\langle ,\rangle:E\otimes E\rightarrow E$, \item a map of degree 1 $d:E\rightarrow E$, \item a Dorfman bracket of degree $1-n$ $[,]:E\otimes E\rightarrow E$, \end{itemize} which satisfies the following conditions. \begin{equation} [e_{1},[e_{2},e_{3}]]=[[e_{1},e_{2}],e_{3}]+(-1)^{(|e_{1}|+1-n)(|e_{2}|+1-n)}[e_{2},[e_{1},e_{3}]], \end{equation} \begin{equation} [e_{1},e_{2}]+(-1)^{(|e_{1}|+1-n)(e_{2}+1-n)}[e_{2},e_{1}]=(-1)^{(|e_{1}|+1-n)(|e_{2}|+1-n)}d\langle e_{2},e_{1}\rangle, \end{equation} \begin{equation} [de_{1},e_{2}]=0. \end{equation} \end{de.} Next, we define higher Poisson vertex algebras. We did not assume that $d$ is a differential so far. From now on, we assume $d^{2}=0$. Then, $C=(C^{k},d)$ is a cochain complex. \begin{de.} Let $C=(C^{k},d)$ a cochain complex. $C$ is a higher Lie conformal algebra of degree $n$ if it endows with a $\Lambda$-bracket $[_{\Lambda}]:C\otimes C\rightarrow C[\Lambda]$ defined by \begin{equation} a\otimes b\mapsto [a_{\Lambda}b]=a_{(0)}b+\Lambda a_{(1)}b \end{equation} satisfying the axioms of higher Lie conformal algebras. $C$ is a higher Poisson vertex algebra of degree $n$ if it is a higher LCA and a differential graded-commutative algebra which satisfies \begin{description} \item[the Leibniz rule] \begin{equation} [a_{\Lambda}bc]=[a_{\Lambda}b]c+(-1)^{(|a|+1-n)|b|}b[a_{\Lambda}c]. \end{equation} \end{description} \end{de.} From extended higher Courant-Dorfman algebras, we get the following theorem. \begin{th.} The above higher Poisson vertex algebras generated by elements of degree $0\leq i\leq n-1$ are in one-to-one correspondence with the extended higher Courant-Dorfman algebras \end{th.} \begin{proof} Assume we have a higher PVA $(C=(C^{k},d),\{_{\Lambda}\})$. We denote $R=C^{0},\mathcal{E}^{i}=C^{i}(1\leq i\leq n-1)$. $C=(C^{k},d)$ is a dgca, so $R$ is a commutative algebra and each $\mathcal{E}^{i}$ is an $R$-module. We denote the $\Lambda$-bracket by \begin{equation} a_{(0)}b=[a,b]\ a_{(1)}b=(-1)^{|a|}\langle a,b\rangle. \end{equation} Sesquilinearity says that \begin{equation} (da)_{(0)}b+\Lambda(da)_{(1)}b=-(-1)^{-n}\Lambda(a_{(0)}b+\Lambda a_{(1)}b). \end{equation} Comparing the 0th-order terms and the first-order terms of $\Lambda$, we have \begin{equation} [da,b]=0,\ \langle a,b\rangle=-(-1)^{|a|-n}[a,b]. 
\end{equation} In a similar way, from the skewsymmetry, \begin{equation} a_{(0)}b+\Lambda a_{(1)}b=-(-1)^{(|a|+1-n)(|b|+1-n)}(b_{(0)}a-(\Lambda+d)b_{(1)}a), \end{equation} we can get \begin{equation} [a,b]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,a]=-(-1)^{|a|}d\langle b,a\rangle, \end{equation} \begin{equation} \langle a,b\rangle=-(-1)^{(|a|-n)(|b|-n)}\langle b,a\rangle. \end{equation} From the Jacobi-identity \begin{align} \ &a_{(0)}(b_{(0)}c)+a_{(0)}(\Gamma b_{(1)}c)+\Lambda a_{(1)}(b_{(0)}c)+\Lambda a_{(1)}(\Gamma b_{(1)}c) \notag \\ &=(a_{(0)}b)_{(0)}c+(\Lambda+\Gamma)(a_{(0)}b)_{(1)}c+(\Lambda a_{(1)}b)_{(0)}c+(\Lambda+\Gamma) (\Lambda a_{(1)} b)_{(1)}c \notag \\ &+(-1)^{(|a|+1-n)(|b|+1-n)}\{b_{(0)}(a_{(0)}c)+b_{(0)}(\Lambda a_{(1)}c)+\Gamma b_{(1)}(a_{(0)}c)+\Gamma b_{(1)}(\Lambda a_{(1)}c)\}, \end{align} we can get \begin{equation} [a,[b,c]]=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]], \end{equation} \begin{equation} [a,\langle b,c\rangle]=\langle [a,b],c\rangle+(-1)^{(|a|+1-n)(|b|+1-n)}\langle b,[a,c]\rangle, \end{equation} \begin{equation} \langle a,\langle b,c\rangle\rangle=\langle \langle a,b\rangle,c\rangle+(-1)^{(|a|-n)(|b|-n)}\langle b,\langle a,c\rangle\rangle. \end{equation} From the Leibniz rule \begin{equation} a_{(0)}(bc)+\Lambda a_{(1)}(bc)=(a_{(0)}b)c+\Lambda (a_{(1)}b)c+(-1)^{(|a|+1-n)|b|}b(a_{(0)}c+\Lambda a_{(1)}c), \end{equation} we can get \begin{equation} [a\cdot b,c]=[a,b]\cdot c+(-1)^{(|a|+1-n)|b|}b\cdot[a,c], \end{equation} \begin{equation} \langle a\cdot b,c\rangle=\langle a,b\rangle\cdot c+(-1)^{(|a|-n)|b|}b\cdot\langle a,c\rangle. \end{equation} The conditions coincide the definition of extended higher Courant-Dorfman algebras. Conversely, assuming that we have an extended Courant-Dorfman algebra $\mathcal{E}=(\mathcal{E}^{k},d),\langle ,\rangle,[,]$, define a $\Lambda$-bracket $\{a_{\Lambda}b\}=[a,b]+(-1)^{|a|}\Lambda\langle a,b\rangle$. Then, this bracket satisfies the conditions of a $\Lambda$-bracket. \end{proof} Next, we check this algebra has a PVA-like property. In particular, we show we can construct a graded Lie algebra from a tensor product of a higher LCA and an arbitrary differential graded-commutative algebra (dgca for short) and a graded Poisson algebra out of that of a higher PVA and an arbitrary dgca. \begin{le.} Let $C=(C^{k},d_{1})$ be a higher LCA and $(E,d_{2})$ be a dgca. Then, the tensor product $C\otimes E$ of cochain complexes is also a higher LCA by defining a bracket as $[a\otimes f_{\Lambda}b\otimes g]=(-1)^{(|b|+1-n)|f|}[a_{\Lambda+d_{2}}b]\otimes fg, d(a\otimes f)=d_{1}a\otimes f+(-1)^{|a|}a\otimes d_{2}f$. \end{le.} \begin{proof} sesquilinearity: \begin{align} [d(a\otimes f)_{\Lambda}b\otimes g]&=[d_{1}a\otimes f_{\Lambda}b\otimes g]+(-1)^{|a|}[a\otimes d_{2}f_{\Lambda}b\otimes g] \notag \\ &=(-1)^{(|b|+1-n)|f|}\{[d_{1}a_{\Lambda+d_{2}}b]\otimes fg+(-1)^{|a|+|b|+1-n}[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g\} \notag \\ &=(-1)^{(|b|+1-n)|f|}\{-(-1)^{-n}(\Lambda+d_{2})[a_{\Lambda+d_{2}}b]\otimes fg+[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g\} \notag \\ &=(-1)^{(|b|+1-n)|f|}\{-(-1)^{-n}\Lambda[a_{\Lambda+d_{2}}b]\otimes fg-(-1)^{|a|+|b|+1-n}[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g \notag \\ &+(-1)^{|a|+|b|+1-n}[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g\} \notag \\ &=-(-1)^{-n}\Lambda[a\otimes f_{\Lambda}b\otimes g]. 
\end{align} skew-symmetry: \begin{align} [a\otimes f_{\Lambda}b\otimes g]&=[a_{\Lambda+d_{2}}b]\otimes fg] \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)+(|b|+1-n)|f|}[b_{-\Lambda-d_{2}-d_{1}}a]\otimes fg \notag \\ &=-(-1)^{(|a|+|f|+1-n)(|b|+|g|+1-n)-(|a|+1-n)|g|}[b_{-\Lambda-d}a]\otimes gf \notag \\ &=-(-1)^{(|a|+|f|+1-n)(|b|+|g|+1-n)}[b\otimes g_{-\Lambda-d}a\otimes f]. \end{align} Using the Jacobi identity of the original higher LCAs, we can check the Jacobi identity in a similar way. \end{proof} \begin{de.} A graded Lie algebra $\mathcal{C}$ of degree $N\in\mathbb{Z}$ is a cochain complex of vector spaces with a biilinear operation $[,]:\mathcal{C}\otimes\mathcal{C}\rightarrow\mathcal{C}$ of degree $N$ satisfying: (1)skew-symmetry:$[a,b]=-(-1)^{(|a|+N)(|b|+N)}[b,a]$, (2)Jacobi identity:$[a,[b,c]]=[[a,b],c]+(-1)^{(|a|+N)(|b|+N)}[b,[a,c]]$. \end{de.} \begin{le.} Let $C=(C^{k},d)$ be a higher Lie conformal algebra of degree $n$. Then $C/\mathrm{Im}d$ is naturally a graded Lie algebra of degree $(1-n)$ with bracket \begin{equation} [a+dC,b+dC]=[a_{\Lambda}b]_{\Lambda=0}+dC \end{equation} \end{le.} \begin{proof} The well-definedness follows from the sesquilinearity. \begin{equation} [d\alpha,b]=-(-1)^{-n}(\Lambda[\alpha_{\Lambda}b])_{\Lambda=0}=0, \end{equation} \begin{equation} [a,d\beta]=-(-1)^{|a|-n}((\Lambda+d)[a_{\Lambda}\beta])_{\Lambda=0}=d[a_{\Lambda}\beta]\simeq0. \end{equation} The skew-symmetry follows from the skew-symmetry of the complex. \begin{align} [a,b]&=[a_{\Lambda}b]_{\Lambda=0} \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)}[b_{-\Lambda-d}a]_{\Lambda=0} \notag \\ &\simeq-(-1)^{(|a|+1-n)(|b|+1-n)}[b_{\Lambda}a]_{\Lambda=0} \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)}[b,a] \end{align} In the similar way, we can check the Jacobi-identity follows from the Jacobi-identity of the complex. \end{proof} \begin{le.} Let $L$ be a graded Lie algebra of degree $N$. Then, $L[-N]$ is a graded Lie algebra with the same bracket. \end{le.} \begin{proof} It satisfies the skewsymmetry and the Jacobi identity due to the grade-shifting. \end{proof} For any higher LCA of degree $n$ $C$ and dgca $E$, we put $L(C,E)=C\otimes E/\mathrm{Im}d$ and $\mathrm{Lie}(C,E)=L(C,E)[n-1]$. By the above lemmas, $\mathrm{Lie}(C,E)$ is a graded-Lie algebra via \begin{equation} \{a\otimes f,b\otimes g\}=(-1)^{(|b|+1-n)|f|}(a_{(0)}b\otimes fg+(-1)^{|a|}a_{(1)}b\otimes (df)g). \end{equation} Next, we discuss the Poisson algebraic structure. Let $C=(C^{n},d)$ be a higher PVA of degree $n$. Then, $C\otimes E[n-1]$ is a dgca with products $a\otimes f\cdot b\otimes g=(-1)^{|b||f|}a\cdot b\otimes f\cdot g$, and $\mathrm{Lie}(C,E)$ is a graded Lie algebra. We put $P(C,E)=C\otimes E[n-1]/(Im d)\cdot (C\otimes E[n-1])$. \begin{th.} $P(C,E)$ is a graded Poisson algebra with \begin{equation} [a\otimes f]\cdot[b\otimes g]=(-1)^{|b||f|}[a\cdot b\otimes fg], \end{equation} \begin{equation} \{[a\otimes f],[b\otimes g]\}=(-1)^{(|b|+1-n)|f|}(a_{(0)}b\otimes fg+(-1)^{|a|}a_{(1)}b\otimes (df)g). \end{equation} \end{th.} \begin{proof} Let $I_{d}=(Im d)\cdot (C\otimes E[n-1])$. If $a,b\in I_{d}$, then $a\cdot b,da\in I_{d}$. Therefore, $I_{d}$ is a dg ideal of $C\otimes E[n-1]$ and $P(C,E)$ is a dgca. If $a,b\in I_{d}/(Imd)$, then $[a,b]\in I_{d}/(Imd)$ by the Leibiniz identity of $C$, so $I_{d}/(Im d)$ is a graded Lie ideal of $Lie(C,E)$ and $P(C,E)$ is a Lie algebra with the Lie bracket. The Leibiniz identity follows from the Leibniz identity of $C$. So $P(C,E)$ is a Poisson algebra. 
\end{proof} By the above theorem, we get a graded Poisson algebra from a higher PVA and a dgca. \begin{ex.} We define the BFV analog of formal distribution Lie algebras. Define the algebra of power series \begin{equation} \mathbb{C}[[t_{1},t^{-1}_{1},...t_{n},t^{-1}_{n}]][\theta_{1},...,\theta_{n}] \end{equation} where $t_{i}$ are even coordinates of degree 0, $\theta_{i}$ are odd coordinates of degree 1. Define the "de-Rham differential" as \begin{equation} df:=\sum_{i}\frac{\partial f}{\partial t^{i}}\theta_{i}. \end{equation} Let $C=(C^{n},Q)$ be a higher LCA of degree $n+1$. For \begin{equation} V:=C\otimes\mathbb{C}\left[[t_{1},t^{-1}_{1},...t_{n},t^{-1}_{n}]][\theta_{1},...,\theta_{n}\right][n]/((Q\alpha)\otimes f+\alpha\otimes df), \end{equation} the bracket \begin{equation} [a\otimes t_{1}^{p_{1}}\cdots t_{n}^{p_{n}}\theta^{J},b\otimes t_{1}^{q_{1}}\cdots t_{n}^{q_{n}}\theta^{K}] \end{equation} \begin{equation} =(a_{(0)}b)t_{1}^{p_{1}+q_{1}}\cdots t_{n}^{p_{n}+q_{n}}\theta^{J\cdot K}+\sum^{n}_{k=1}(a_{(1)}b)p_{k}t_{1}^{p_{1}+q_{1}}\cdots t_{k}^{p_{k}+q_{k}-1} t_{n}^{p_{n}+q_{n}}\theta^{J\cdot\{k\}\cdot K}, \end{equation} \begin{equation} J,K\subset\{1,...,n\},J\cdot K=\left\{\begin{array}{ll} \phi&(J\cap K\neq\phi),\\ J\cup K&(J\cap K=\phi), \end{array}\right. \end{equation} makes the graded Lie algeraic structure. We define a formal distribution, \begin{align} a(Z_{1},...,Z_{n})&=a(z_{1},...,z_{n},\zeta_{1},...,\zeta_{n}) \notag \\ &=\sum_{m_{i}\in\mathbb{Z},J\subset\{1,...,n\}}z_{1}^{-1-m_{1}}\cdots z_{n}^{-1-m_{n}}\zeta^{\{1,...,n\}\backslash J}\alpha t_{1}^{m_{1}}\cdots t_{n}^{m_{n}}\theta^{J}, \end{align} and the formal $\delta$-function, \begin{align} \delta(Z-W)&=\delta(z_{1}-w_{1})\cdots\delta({z_{n}-w_{n}})\delta({\zeta_{1}-\xi_{1}})\cdots\delta(\zeta_{n}-\xi_{n}) \notag \\ &=\sum_{m_{i}\in\mathbb{Z}}z_{1}^{-m_{1}-1}w_{1}^{m_{1}}\cdots z_{n}^{-m_{n}-1}w_{n}^{m_{n}}(\zeta_{1}-\xi_{1})\cdots(\zeta_{n}-\xi_{n}), \end{align} Then, we get \begin{equation} [a(Z),b(W)]=[a,b](W)\delta(Z-W)+\langle a,b\rangle (W)d(\delta(Z-W)). \end{equation} (For another example of formal distribution Lie algebra using superfields, see $\cite{KH06}$.) Consider the case $n=2$. Let $C=(C^{n},Q)$ be a higher PVA of degree $2$. Then $P(C,\mathbb{C}[[t,t^{-1}]][\theta])$ is a graded Poisson algebra via \begin{equation} \{a t^{m},b t^{n}\}=(a_{(0)}b)t^{m+n}+(a_{(1)}b)mt^{m+n-1}\theta,\ \{a t^{m}\theta,b t^{n}\}=(a_{(0)}b)t^{m+n}\theta. \end{equation} An extended higher Courant-Dorfman algebra of degree 2 is the same as a Courant-Dorfman algebra, and there is a PVA corresponding to a given higher PVA. We denote the PVA by $\tilde{C}$. We restrict $P(C,\mathbb{C}[[t,t^{-1}]][\theta])$ to the degree 0 part. Explicitly, \begin{equation} P(C,\mathbb{C}[[t,t^{-1}]][\theta])|_{degree 0}=\{a t^{m_{1}},b t^{m_{2}}\theta|a\in C^{1},b\in C^{0},m_{1},m_{2}\in\mathbb{Z}\}. \end{equation} We can define an isomorphism of Poisson algebras between $P(C,\mathbb{C}[[t,t^{-1}]][\theta])|_{degree 0}$ and the Poisson algebra arising from the associated Poisson vertex algebra $\tilde{C}\otimes\mathbb{C}[[t,t^{-1}]]/Im(d+\partial_{t})\cdot \tilde{C}\otimes\mathbb{C}[[t,t^{-1}]]$ by sending $a t^{m_{1}}$($a\in C^{1}$) to $a t^{m_{1}}$, and $b t^{m_{2}}\theta$($b\in C^{0}$) to $b t^{m_{2}}$. This subalgebra corresponds to the physical current algebra of a BFV current algebra. 
\end{ex.} \begin{ex.} Let $(\mathcal{M},\omega,Q=\{\Theta,-\})$ be a degree $n$ dg symplectic manifold and $C=C^{n-1}(C^{\infty}(\mathcal{M}))=\{a\in C^{\infty}(\mathcal{M}):|a|\leq n-1\}$ and consider a higher Courant-Dorfman algebra on $C$. Let $\Sigma_{n-1}$ be a $n-1$ dimensional manifold and $E=(\Omega^{\bullet}(\Sigma_{n-1}),D)$ be their de-Rham complex. Then, $P(C,E)$ is equipped with degree 0 Poisson bracket with \begin{equation} \label{HP} [a\otimes\epsilon_{1},b\otimes\epsilon_{2}]=\{\{a,\Theta\},b\}\otimes \epsilon_{1}\epsilon_{2}+\{a,b\}\otimes (D\epsilon_{1})\epsilon_{2}, \end{equation} where $a,b\in C$ and $\epsilon_{1},\epsilon_{2}\in E$. This is an algebraic description of BFV current algebras from dg symplectic manifolds$\cite{IX13},\cite{A21}$. BFV current algebras are Poisson brackets on $C^{\infty}(Map(T[1]\Sigma_{n-1},\mathcal{M}))$, where $T[1]\Sigma_{n-1}$ is the shifted tangent space of $\Sigma_{n-1}$. In order to get the currents, we have to take a proper Lagrangian submanifold of $Map(T[1]\Sigma_{n-1},\mathcal{M})$. One way is the zero-locus reduction$\cite{GSYT}$. \begin{pr.}[{$\cite[Proposition 1]{A21}$}] We take a degree $-n$ graded Poisson algebra $P$ with a differential $Z$, and take a quotient $P/I_{Z}$, where $I_{Z}$ is the ideal of $P$ generated by $Z$-exact terms. Then, $P/I_{Z}$ is a degree $-n+1$ Poisson algebra with the derived bracket \begin{equation} \{[a],[b]\}=[\{a,Z(b)\}]. \end{equation} \end{pr.} Applying to the BFV current algebras we get the Poisson bracket on $C^{\infty}(Map(T[1]\Sigma_{n-1},\mathcal{M}))/I_{\tilde{D}+\tilde{Q}}$, where $\tilde{D}$ and $\tilde{Q}$ is a differential on $Map(T[1]\Sigma_{n-1},\mathcal{M})$ induced by $D$ and $Q$. For $a\in C^{\infty}(\mathcal{M})$ and $\epsilon\in C^{\infty}(T[1]\Sigma_{n-1})$, We define $J_{\epsilon}\left(a\right)\in C^{\infty}(Map(T[1]\Sigma_{n-1},\mathcal{M}))$ by \begin{equation} J_{\epsilon}\left(a\right)(\phi)=\int_{T[1]\Sigma_{n-1}}\epsilon\cdot\phi^{*}(a)(\sigma,\theta)d^{n-1}\sigma d^{n-1}\theta, \end{equation} where $\epsilon\in C^{\infty}(\Sigma_{n-1})$ are test functions on $T[1]\Sigma_{n-1}$, $\sigma,\theta$ are coordinates on $T[1]\Sigma_{n-1}$ of degree 0 and 1, $\phi\in Map(T[1]\Sigma_{n-1},\mathcal{M})$ and $\phi^{*}(a)$ is the pullback of $a$. Then the Poisson bracket is \begin{align} \label{CA} &\{J_{\epsilon_{1}}\left(a\right),J_{\epsilon_{2}}\left(b\right)\}(\phi) \notag \\ &=\int_{T[1]\Sigma_{n-1}}\epsilon_{1}\epsilon_{2}\cdot\phi^{*}(\{\{a,\Theta\},b\})(\sigma,\theta)d^{n-1}\sigma d^{n-1}\theta \notag \\ &+\int_{T[1]\Sigma_{n-1}}(D\epsilon_{1})\epsilon_{2}\cdot\phi^{*}(\{a,b\})(\sigma,\theta)d^{n-1}\sigma d^{n-1}\theta, \end{align} where $\epsilon_{1},\epsilon_{2}\in C^{\infty}(T[1]\Sigma_{n-1})$ are test functions on $T[1]\Sigma_{n-1}$, $\sigma,\theta$ are coordinates on $T[1]\Sigma_{n-1}$ of degree 0 and 1, Comparing to $\ref{HP}$ and $\ref{CA}$, we see that taking the quotient corresponds to the zero-locus reduction. \end{ex.} \chapter{Outlooks} In this thesis, we gave higher analogs of Lie conformal algebras and Poisson vertex algebras. It is natural to ask whether they have the same applications as ordinary Lie conformal algebras and Poisson vertex algebras. For example, our higher PVAs may be used to analyze multi-variable Hamiltonian PDEs. Considering the algebraic description of more general currents would be important. Another interesting problem is the non-commutative analog. 
In $\cite{AFH21}$, non-commutative versions of Courant-Dorfman algebras and Poisson vertex algebras, called double Courant-Dorfman algebras and double Poisson vertex algebras, are considered. A higher generalization of these could be formulated using our algebras. Another direction toward a non-commutative version is quantization, in analogy with vertex algebras.
{ "arxiv_id": "2302.11369", "language": "en", "timestamp": "2023-02-23T02:14:40", "url": "https://arxiv.org/abs/2302.11369", "yymm": "2302" }
\section{Introduction} \label{sec:introduction} Alpha particles are born in stellarators as a product of the fusion reaction. Born with $3.5$ MeV, alpha particles carry a substantial amount of energy which, if confined, will heat the plasma and sustain the reaction. On the other hand, poor confinement of the alphas can have destructive effects on the plasma-facing components, and detract from plasma self-heating. Hence, confinement of fast ions is, and has been, a focal point in stellarator design \cite{ku2008physics,henneberg2019properties,bader2019stellarator,landremanBullerDrevlak,leviness2022energetic}. Stellarator design is generally split into two stages. In the first stage the plasma shape is optimized such that the magnetohydrodynamic (MHD) equilibrium meets specified performance criteria, such as particle confinement, stability, and/or reduced turbulence. The second stage is then devoted to finding electromagnetic coil shapes and currents which generate the desired magnetic field. Due to the computational expense of simulating particle trajectories for long times, typically stage-one configurations are designed using proxy metrics for confinement, such as quasisymmetry (QS) \cite{boozer1983transport,landreman2022magnetic,rodriguez2022measures}, $\Gamma_c$ \cite{nemov2008poloidal,bader2021modeling,leviness2022energetic, bader2019stellarator} and epsilon effective, $\epsilon_{eff}$ \cite{nemov1999evaluation}. Recently, numerical optimization of a QS metric has been particularly successful in improving particle confinement in stellarators, leading to configurations with less than $1\%$ fast-ion losses \cite{landreman2022magnetic,wechsung2022precise,landremanBullerDrevlak}. Despite the success of QS and other proxies, it is unclear what is being sacrificed by using proxies to design the device rather than relying on exact calculations. For example, since QS is a sufficient condition for confinement, rather than a necessary condition, it may be overly stringent. Similarly, proxies in general only approximate the true goal of improving particle confinement, and do not capture the goal holistically or exactly. In this study we opt for a direct approach to achieve fast-ion confinement: we optimize stellarator designs by simulating fast-ion trajectories and minimizing the empirical loss of energy. Our model takes the form \begin{equation} \begin{split} \operatornamewithlimits{minimize}_{\mathbf{w} \in \mathbb{R}^{{n_{\wb}}}} \quad &\mathcal{J}(\mathbf{w}) := \mathbb{E}_{\mathbf{x},v_{\parallel}}[\mathcal{J}_{\text{energy}}(\mathbf{x},v_{\parallel},\mathbf{w})] \\ & B_{-}^* \le B(\mathbf{x},\mathbf{w}) \le B_+^* \quad \forall \mathbf{x} \in \mathcal{P} \end{split} \label{eq:main} \end{equation} The objective $\mathcal{J}$ measures the expected value of the energy lost, $\mathcal{J}_{\text{energy}}$, due to alpha particles born with a random initial position, $\mathbf{x}$, and parallel velocity, $v_{\parallel}$, drifting through the last closed flux surface of the plasma. The decision variables $\mathbf{w} \in\mathbb{R}^{{n_{\wb}}}$ are Fourier coefficients representing the shape of the plasma boundary. Motivated by physical and engineering requirements, the infinite dimensional nonlinear bound constraints restrict the strength of the magnetic field $B$ to an interval $[B_{-}^*,B_{+}^*]$ at each point throughout the plasma volume $\mathbf{x}\in\mathcal{P}$. By varying the shape of the plasma boundary we seek MHD equilibria that minimize the loss of alpha particle energy. 
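Concretely, and anticipating the Monte Carlo procedure described below, the expectation in \cref{eq:main} is replaced in practice by a sample average over randomly sampled alpha-particle birth conditions. A minimal sketch of this estimator is \begin{equation} \mathcal{J}(\mathbf{w}) \approx \frac{1}{N}\sum_{k=1}^{N} \mathcal{J}_{\text{energy}}(\mathbf{x}_{k},v_{\parallel,k},\mathbf{w}), \qquad (\mathbf{x}_{k},v_{\parallel,k}) \sim f, \end{equation} where $N$ is the number of sampled particles and $f$ is the alpha-particle birth distribution defined in \Cref{sec:physics}.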
The expected energy lost, $\mathcal{J}(\mathbf{w})$, is computed empirically from Monte Carlo simulation of collision-less guiding center trajectories by use of an approximation for the alpha particle energy in terms of confinement time. Due to the lack of analytical derivatives, we solve \cref{eq:main} using derivative-free optimization methods. In this document, we discuss practical challenges such as the noisy objective computation, high computational cost, and choice of derivative-free optimization algorithm. Our numerical results show that the approach is indeed effective at finding desirable configurations, and that the configurations we find are visibly not quasi-symmetric. To the author's knowledge, only two stellarators have been designed by simulating alpha particle losses within the design loop: the ARIES-CS stellarator \cite{ku2008physics} and a design by Gori et. al. \cite{gori2001alpha}. In the design of ARIES-CS, the average confinement time of $\sim 500$ particles was included as a term in an optimization objective. The initial particle locations were held fixed during the optimization, leading to an \say{effective and robust} technique. Similarly, Gori et al. included the average confinement time of reflected particles in their optimization objective. To mitigate the high computational cost and time required to simulate particle trajectories both studies limited the particle simulation to a fixed number of toroidal transits. Despite the empirical success of these designs, there is not a clear description of the methods used and challenges faced. As part of our work we bring light to this approach. The paper is structured as follows. In \Cref{sec:physics} we discuss the life cycle of alpha particles and the relevant physics to our numerical simulations. \Cref{sec:simsopt} describes the computational workflow for modeling and evaluating candidate stage-one designs. In \Cref{sec:optimization_model} we mathematically formulate our design problem as an optimization problem. \Cref{sec:objective_approximation} compares methods of computing the objective function via the simulation of alpha particle trajectories. Numerical results are presented in \Cref{sec:numerical}, prior to a brief discussion of future research directions in \Cref{sec:conclusion}. \section{Physical Model} \label{sec:physics} We consider toroidal plasma configurations that are static MHD equilibria, satisfying $\mu_0^{-1}(\nabla\times\mathbf{B})\times\mathbf{B}=\nabla p$, where $p$ is the pressure and $\mathbf{B}\in\mathbb{R}^3$ is the magnetic field. It is assumed that nested toroidal flux surfaces exist. For the numerical experiments in this work, we adopt the low $\beta$ (plasma pressure divided by magnetic pressure) limit of $p\approx 0$ and $\nabla\times\mathbf{B} \approx 0$ for simplicity, but the methods here are fully applicable to MHD equilibria with substantial pressure and current. A convenient coordinate system for MHD equilibria is Boozer coordinates $\mathbf{x} = (s,\theta,\zeta)$, where $s$ is the toroidal flux normalized to be 1 at the plasma boundary, and $\theta$ and $\zeta$ are poloidal and toroidal angles. The domain of the coordinates is $s,\theta,\zeta \in \mathcal{P} := [0,1]\times [0,2\pi) \times [0,2\pi/n_{\text{fp}})$ for a stellarator with $n_{\text{fp}}$ field periods. Motion of alpha particles in the equilibrium is modeled using the collisionless guiding center equations. 
For the case considered here of low $\beta$, these equations are \begin{align} \begin{split} \frac{d\mathbf{R}}{dt} &= v_{\parallel}\mathbf{b} + \frac{m}{qB^3}\left(v_{\parallel}^2+ \frac{v_{\perp}^2}{2}\right)\mathbf{B}\times \nabla B, \\ \frac{dv_{\parallel}}{dt} &= -\frac{v_{\perp}^2}{2B}\mathbf{b}\cdot\nabla B. \end{split} \label{eq:gc_motion} \end{align} Here, $\mathbf{R}$ is the guiding center location, $t$ is time, $m$ is the particle's mass, $q$ is the particle's charge, $B=|\mathbf{B}|$ is the field strength, $\mathbf{b}=\mathbf{B}/B$, and $v_{\parallel}$ and $v_{\perp}$ are the components of velocity parallel and perpendicular to $\mathbf{B}$. The magnetic moment $\mu = v_{\perp}^2/(2B)$ is conserved, as is the speed $v=\sqrt{v_{\parallel}^2+v_{\perp}^2}$. Trapped particles, which have sufficiently small $|v_{\parallel}/v_{\perp}|$, experience reversals in the sign of $v_{\parallel}$. Particles that do not experience $v_{\parallel}$ sign reversals are called ``passing''. Alpha particles are born isotropically with an energy of 3.5 MeV. We consider two models for the initial spatial distribution. The first model is based on the local fusion reaction rate, resulting in alpha particle birth throughout the plasma volume. The second model distributes alpha particles across a single specified flux surface. Either way, after birth, alpha particle guiding centers are followed for a specified amount of time, or until they exit the plasma boundary surface, at which time they are considered lost. The birth distribution of alpha particles is derived in a standard manner \cite{drevlak2014, bader2021modeling,landremanBullerDrevlak,leviness2022energetic}, as follows. For calculations in which alpha particles are born throughout the volume, the spatial birth distribution is proportional to the local reaction rate \cite{NRLPlasma2019}, $f(s,\theta,\zeta) \propto n_D n_T (\overline{\sigma v})_{DT}$. Here, $D$ and $T$ subscripts indicate deuterium and tritium, $n_D$ and $n_T$ are the species densities, which we assume to be equal, and $(\overline{\sigma v})_{DT}$ is the Maxwellian-averaged fusion reactivity, computed in \cite{NRLPlasma2019} by \begin{equation} (\overline{\sigma v})_{DT}(s) = 3.68 \times 10^{-18}T_i^{-2/3}(s)\exp(-19.94T_i^{-1/3}(s)) \;\text{m}^3\text{sec}^{-1}, \end{equation} where $T_i$ is the ion temperature in keV. Within the numerical experiments, we assume the following density and temperature profiles: \begin{align} n_D(s) &= n_T(s) = 2\times10^{20}(1 - s^5) \; \text{m}^{-3}, \\ T_i(s) &= 12(1 - s) \; \text{keV}. \end{align} These density and temperature profiles reflect plausible reactor parameters \cite{ku2008physics,alonso2022physics}, and the fact that temperature profiles in experiments are typically more peaked than density profiles. In this study the temperature and density profiles are held fixed in order to focus on the optimization of particle trajectories. The radial birth distribution of particles is thus proportional to \begin{equation} f_s(s) \propto (1 - s^5)^2 (1 - s)^{-2/3}\exp(-19.94 (12(1 - s))^{-1/3}), \label{eq:birth_distribution_radial} \end{equation} as depicted in \Cref{fig:sample_density} (left). Alternatively, to only consider particles born on a single flux surface, the localized initial radial distribution can be expressed as $f_s(s) = \delta(s - s_0)$, where $s_0=0.25$ is used in numerical experiments. \begin{figure}[tbh!] 
\includegraphics[scale=0.5]{figures/density_plot.pdf} \centering \caption{(left) Radial probability density $f_s(s)$ derived from the fusion reaction rate. (right) Density over Boozer coordinates $\theta$ and $\zeta$, $f_{\theta,\zeta}$ for configuration \textbf{A} which will be discussed in \Cref{sec:numerical}. } \label{fig:sample_density} \end{figure} For either initial radial distribution, particles are initialized uniformly over flux surfaces. This uniformity is expressed by a determinant of the Jacobian from Boozer to Cartesian coordinates $\sqrt{g}$, \begin{equation} f_{\theta,\zeta}(\theta,\zeta \, | \, s) \propto |\sqrt{g}| . \label{eq:birth_distribution_angles} \end{equation} \Cref{fig:sample_density} (right) shows $f_{\theta,\zeta}$ for configuration \textbf{A}, which will be discussed in \Cref{sec:numerical}. Lastly, the isotropic velocity birth distribution corresponds to a uniform distribution of $v_{\parallel}$ over $[-v_{\max},v_{\max}]$, where $v_{\max}=\sqrt{2E/m}$ and $E=3.5$ MeV. Defining the associated distribution \begin{equation} f_{v_{\parallel}}(v_{\parallel}) = \frac{1}{2v_{\max}}, \label{eq:birth_distribution_vpar} \end{equation} the total birth distribution is \begin{equation} f(s,\theta,\zeta,v_{\parallel}) = f_s(s)f_{\theta,\zeta}(\theta,\zeta \, | \, s)f_{v_{\parallel}}(v_{\parallel}). \label{eq:birth_distribution} \end{equation} Several mechanisms exist by which trapped particles are lost \cite{gibson1967single,galeev1969plasma,goldston1981confinement,beidler2001stochastic,paul2022energetic}. ``Ripple trapped'' particles, those trapped in a single field period or in coil ripple, typically experience a nonzero average radial magnetic drift and so are quickly lost. Other trapped particle trajectories may resemble the banana orbits of a tokamak, but with radial diffusion due to imperfect symmetry. Particles that transition between these two types of trapped states make additional radial excursions. Particles with wide banana orbits may also be directly lost. Generally, passing particles are not lost unless they are born very close to the plasma boundary. \section{Modeling and optimization software} \label{sec:simsopt} To evaluate candidate stage-one stellarator designs we rely on the \code{SIMSOPT} code \cite{landreman2021simsopt}. \code{SIMSOPT} is a framework for stellarator modeling and optimization which interfaces with MHD equilibrium solvers such as \code{VMEC} \cite{hirshman1986three} and \code{SPEC} \cite{hudson2012computation}, and houses infrastructure for defining magnetic fields, computing coordinate transformations, tracing particles, and computing properties of fields and equilibria. Certain rate-limiting computations in \code{SIMSOPT}, such as evaluating magnetic fields, are executed in C++. For ease of use, however, Python bindings are used through the PyBind11 library, allowing users to interface with \code{SIMSOPT} solely through the Python interface. In order to design stage-one configurations we first find an ideal MHD equilibrium by evaluating \code{VMEC} with a prescribed plasma boundary shape, current profile, and pressure profile. Subsequently, the magnetic field is transformed to Boozer coordinates which is used within the guiding center equations when tracing particles. 
The plasma boundary is parameterized as a Fourier series in the poloidal and toroidal angles $\theta$ and $\phi$, \begin{equation} \begin{split} R(\theta,\phi) &= \sum_{n=0}^{n_{\text{mode}}} R_{0,n}\cos(- n_{\text{fp}}n\phi) + \sum_{m =1}^{n_{\text{mode}}} \sum_{n=-n_{\text{mode}}}^{n_{\text{mode}}} R_{m,n}\cos(m\theta - n_{\text{fp}}n\phi), \\ Z(\theta,\phi) &= \sum_{n=1}^{n_{\text{mode}}} Z_{0,n}\sin(- n_{\text{fp}}n\phi) + \sum_{m =1}^{n_{\text{mode}}} \sum_{n=-n_{\text{mode}}}^{n_{\text{mode}}} Z_{m,n}\sin(m\theta - n_{\text{fp}}n\phi), \end{split} \label{eq:vmec_fourier_rep} \end{equation} where $n_{\text{mode}} \in \{0,1,2,\ldots\}$ can be increased to achieve more complicated boundary representations. Field period symmetry with $n_{\text{fp}}$ periods and stellarator symmetry have been assumed. Upon computing the equilibrium, a Boozer-coordinate representation of the magnetic field is computed using the \code{BoozXform} code, via \code{SIMSOPT}. Working in Boozer coordinates reduces the number of interpolations required to integrate the guiding center equations. Initial particle positions and parallel velocities can then be generated, and particles traced using the vacuum guiding center equations in Boozer coordinates up to a terminal time $t_{\max}$ or until stopping criteria are satisfied. The guiding center equations are solved using the adaptive Runge-Kutta scheme RK45. \begin{figure}[tbh!] \includegraphics[scale=0.35]{figures/timing_plot.pdf} \centering \caption{Wall-clock time required to trace a single particle until a terminal time $t_{\max}$ using a single processor on a computing cluster. Timing results were averaged over $2000$ particles randomly generated throughout a four field period configuration, all of which were confined to their terminal time $t_{\max}$. The total time of an objective evaluation also includes the fixed time of evaluating VMEC, computing the Boozer transformation, and building interpolants of the $\mathbf{B}$-field, which took $19.07$ seconds for this configuration. } \label{fig:timing} \end{figure} VMEC and the particle tracing codes allow parallelism through MPI. While VMEC can be run efficiently on a single core, particle tracing is embarrassingly parallel and benefits from the use of numerous cores. Even with dozens of MPI processes, particle tracing can take anywhere from seconds to minutes of wall-clock time to complete. In addition, there are substantial costs, typically $\sim 20$ seconds, associated with running VMEC, computing the Boozer transform, and interpolating the fields required for tracing. Timing results for simulating particle orbits are shown in \Cref{fig:timing}. In total, computing an equilibrium and tracing enough particles to evaluate the objective function, defined in \Cref{sec:objective}, often takes between 30 and 130 seconds of wall-clock time, depending on the terminal trace time, the configuration, and the number of particles. The optimization process, which can be run on a single node, or multiple nodes on a computing cluster, is time consuming, often running for one to two days. For example, solving an optimization problem would consume $26$ hours of wall-clock time when using 48 MPI processes and a computational budget of 1000 function evaluations, each of which requires tracing 3500 particles to $10$ms. This poses a serious challenge in performing the optimization. In \Cref{sec:conclusion} we discuss future work that could reduce this burden.
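As a concrete illustration of the boundary representation \cref{eq:vmec_fourier_rep}, the following Python sketch evaluates $R(\theta,\phi)$ and $Z(\theta,\phi)$ from a given set of coefficients and checks the free-coefficient count ${n_{\wb}} = 4n_{\text{mode}}^2+4n_{\text{mode}}$ quoted in \Cref{sec:decision_variables}. The array layout is an illustrative assumption and does not reproduce the \code{VMEC}/\code{SIMSOPT} data structures.
\begin{verbatim}
import numpy as np

def boundary_RZ(theta, phi, R_mn, Z_mn, n_fp, n_mode):
    """Evaluate the stellarator-symmetric boundary Fourier series (eq:vmec_fourier_rep).

    R_mn, Z_mn are (n_mode+1, 2*n_mode+1) arrays; column j holds the coefficient
    with n = j - n_mode (illustrative layout only).
    """
    R = np.zeros_like(np.broadcast_arrays(theta, phi)[0], dtype=float)
    Z = np.zeros_like(R)
    for m in range(n_mode + 1):
        for j in range(2 * n_mode + 1):
            n = j - n_mode
            if m == 0 and n < 0:
                continue                      # m = 0 terms only run over n >= 0
            ang = m * theta - n_fp * n * phi
            R += R_mn[m, j] * np.cos(ang)
            if not (m == 0 and n == 0):       # there is no Z_{0,0} term
                Z += Z_mn[m, j] * np.sin(ang)
    return R, Z

def n_free_coefficients(n_mode):
    # R_{0,0} is fixed by the major-radius constraint and Z_{0,0} is absent
    n_R = n_mode + n_mode * (2 * n_mode + 1)
    n_Z = n_mode + n_mode * (2 * n_mode + 1)
    assert n_R + n_Z == 4 * n_mode**2 + 4 * n_mode
    return n_R + n_Z

if __name__ == "__main__":
    n_mode, n_fp = 1, 4
    R_mn = np.zeros((2, 3)); Z_mn = np.zeros((2, 3))
    R_mn[0, 1] = 1.7 * 7.0      # R_{0,0} = a* A*
    R_mn[1, 1] = 1.7            # circular cross-section of minor radius a*
    Z_mn[1, 1] = 1.7
    theta = np.linspace(0.0, 2 * np.pi, 5)
    print(boundary_RZ(theta, 0.0, R_mn, Z_mn, n_fp, n_mode))
    print("free coefficients for n_mode = 1:", n_free_coefficients(1))  # 8
\end{verbatim}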
\section{Optimization model formulation} \label{sec:optimization_model} We now outline a mathematical optimization problem that seeks stellarator configurations with good confinement of fast ions. By varying the shape of the plasma boundary we minimize the energy lost due to alpha particles exiting the last closed flux surface. In the following, we describe the salient characteristics of the problem: the representation of decision variables, nonlinear constraints on the magnetic field strength, and an objective that quantifies the confinement of alpha particle energy. \subsection{Decision variables} \label{sec:decision_variables} The independent decision variables for optimization are the Fourier coefficients $R_{m,n},Z_{m,n}$ which define the shape of the plasma boundary in \code{VMEC} via \cref{eq:vmec_fourier_rep}. The number of modes used in the boundary description is controlled by the parameter $n_{\text{mode}}\in\{0,1,2,\ldots\}$. Increasing $n_{\text{mode}}$ increases the complexity of the boundary shape, allowing for potential improvements in confinement, while setting $n_{\text{mode}}=0$ only allows the major radius $R_{0,0}$ to vary. The total number of decision variables satisfies ${n_{\wb}} = 4n_{\text{mode}}^2 + 4n_{\text{mode}}$. The major radius of a design is central to the simulation of particle trajectories, since the Larmor radius and guiding center drifts scale with the square of the ratio of major radius to aspect ratio, $\propto (R_{0,0}/A)^2$. Standardization of the device size is thus necessary in order to have realistic particle losses, and to prevent the optimization from shrinking the aspect ratio arbitrarily. In confinement studies, device size is typically standardized by constraining the minor radius or the plasma volume. We opt to constrain the minor radius \textit{implicitly} to $a \approx a^* := 1.7$m (the minor radius of ARIES-CS), by fixing the major radius, fixing the toroidal flux, and constraining the field strength. In particular, we fix the major radius based on the target aspect ratio $A^*:=7$, \begin{equation} R_{0,0} = a^*A^*. \label{eq:major_radius_constraint} \end{equation} In \Cref{sec:constraints}, the toroidal flux and mean field strength will be selected to encourage the design to have an aspect ratio near $A^*$. If the design achieves the aspect ratio $A^*$, it also has an average minor radius of $a^*$. Otherwise, the minor radius will only be near $a^*$. The decision variables are collected into the vector $\mathbf{w}\in\mathbb{R}^{{n_{\wb}}}$ via $\mathbf{w} = (R_{0,1},\ldots, Z_{0,1},\ldots)$. \subsection{Nonlinear constraints} \label{sec:constraints} Engineering limitations on electromagnetic coils and the associated support structure place an upper limit on the magnetic field strength. For low-temperature superconductors, the field strength is limited to be no more than $15$T in the coil and approximately $5$T throughout the plasma volume \cite{ku2008physics}. To achieve reactor-relevant scaling of the magnetic field, we fix the toroidal flux so that if the plasma has an average minor radius of $a^*$, the volume-averaged magnetic field strength is $B^*:=5$T, \begin{equation} \Psi_T = \pi (a^*)^2B^*. \label{eq:toroidal_flux_constraint} \end{equation} The value of toroidal flux set in \cref{eq:toroidal_flux_constraint} is used as an input parameter to the MHD equilibrium calculations, and does not need to be treated as a constraint in the optimization.
When paired with the major radius constraint, \cref{eq:major_radius_constraint}, the toroidal flux constraint, \cref{eq:toroidal_flux_constraint}, to zeroth order fixes the ratio of the squared aspect ratio to the volume-averaged magnetic field strength, i.e. $A^2/B \approx (A^*)^2/B^*$. Thus, by placing bound constraints on the field strength we can constrain the range of the aspect ratio. In addition, bound constraints on the field strength are necessary in order to constrain the mirror ratio $\max_{\mathbf{x}}{B(\mathbf{x})}/\min_{\mathbf{x}}{B(\mathbf{x})}$, which we find increases to unphysically large values when left unconstrained in optimization. We globally bound the field strength, \begin{equation} B_{-}^* \le B(\mathbf{x}) \le B_{+}^* \quad \forall \, \mathbf{x} \in \mathcal{P}. \label{eq:b_field_constraint_infinite} \end{equation} The upper and lower bounds $B^*_+ = B^*\frac{2r^*}{r^*+1}$ and $B^*_- = B^*\frac{2}{r^*+1}$ enforce that the mirror ratio is at most $r^* :=1.35$, similar to W7-X and the Compact Helical System (CHS) \cite{beidler2011benchmarking, nishimura1990compact}. The upper bound on the field strength is derived from material properties and tolerances in coil engineering, and the lower bound is motivated by requirements on confinement and transport-related phenomena. The constraints \cref{eq:b_field_constraint_infinite} are \say{soft constraints} by nature, in that a small violation of the constraints is tolerable. To handle the infinite-dimensional constraints, \cref{eq:b_field_constraint_infinite}, we discretize the domain of the constraint into a uniform, ${n_{s}}\times {n_{\theta}} \times{n_{\zeta}}$ grid. We then apply the magnetic field constraints at each of the ${n_{\text{grid}}} = {n_{s}} {n_{\theta}} {n_{\zeta}}$ grid points $\mathbf{x}_i$, \begin{equation} \begin{split} &B_{-}^* - B(\mathbf{x}_i) \le 0 \quad i=1,\ldots,{n_{\text{grid}}}, \\ &B(\mathbf{x}_i) - B_{+}^*\le 0 \quad i=1,\ldots,{n_{\text{grid}}}, \end{split} \label{eq:b_field_constraint_discrete} \end{equation} totaling $2{n_{\text{grid}}}$ nonlinear simulation-based constraints. \subsection{Optimization objective} \label{sec:objective} Fast-ion optimization has two principal goals: minimizing the thermal energy lost from the system, and dispersing or concentrating the load of fast-ions on the plasma-facing components. We focus solely on the first goal of achieving excellent confinement of energy, noting that this also makes progress towards the second goal. The confinement of fast-ions is often measured by the loss fraction, the fraction of particles lost within a terminal time $t_{\max}$. While the loss fraction measures particle confinement, it does not reflect the fact that particles lost quickly, with energy of nearly 3.5 MeV, contribute more to the heat flux on plasma-facing components and detract more from plasma self-heating than particles lost at late times, which have slowed substantially. If collisions were included in the particle tracing calculations, the energy loss fraction could be computed straightforwardly. However, particle tracing is often done without collisions, because it is easier to implement and because efficient algorithms can be applied in the collisionless case \cite{albert2020accelerated,albert2020symplectic}. Therefore, here we describe a physically motivated objective function that places greater weight on minimizing prompt losses within collisionless calculations.
Fusion-produced alpha particles primarily experience collisions with electrons during which they deposit most of their energy. This follows from the fact that the slowing-down collision frequency \cite{NRLPlasma2019} for alpha particles with background electrons is higher than the slowing-down collision frequency with ions as long as the alpha energy exceeds $\sim 50 T_e$ \cite[page 40]{helander2005collisional}. If reactor temperatures satisfy $ T_e \le 16$ keV, then collisions with electrons dominate until the alphas have slowed to $\le 0.8$ MeV. This process can be described by \begin{equation} \frac{d v}{dt} = -\nu_s^{\alpha/e} v, \label{eq:slowing_down} \end{equation} where $\nu_s^{\alpha/e}$ is the alpha-electron slowing-down collision frequency, which is approximately independent of alpha energy \cite{NRLPlasma2019}. $\nu_s^{\alpha/e}$ will vary with time as the particle traverses regions of different density and temperature. We neglect this complexity treating $\nu_s^{\alpha/e}$ as approximately constant, in which case the solution of \cref{eq:slowing_down} becomes \begin{equation} v(t) = v(0) e^{- \nu_s^{\alpha/e} t }. \label{eq:energy_model} \end{equation} The slowing-down time, $1 / \nu_s^{\alpha/e}$, is typically on the order of $100\text{ms}$ for plausible reactor parameters. Assuming an initial energy of 3.5 MeV, the energy lost associated with an alpha particle lost at time $\mathcal{T}$ is $3.5e^{-2 \nu_s^{\alpha/e} \mathcal{T}}\text{MeV}$. \begin{figure}[tbh!] \includegraphics[scale=0.5]{figures/collisional_energy.pdf} \centering \caption{Energy of alpha particles at the time they are lost. Data points were generated by tracing particles \textit{with collisions} from 10 configurations: NCSX, ARIES-CS, a QA from NYU, CFQS, a QH from IPP, a QA from IPP, HSX, Wistell-A, LHD, and W7-X. $20,000$ particles were traced for each configuration. The solid black line indicates the regressed mean of the data, and the dashed red line is the energy decay model $3.5\exp(-2t\nu_{s}^{\alpha/e})$ where the slowing-down time $1/\nu_s^{\alpha/e} \approx 0.057$sec was computed analytically using the volume averaged density and temperature \cite{NRLPlasma2019}. The energy model very closely matches the mean particle energy.} \label{fig:objective_validation} \end{figure} In \Cref{fig:objective_validation} we see that the energy decay model \cref{eq:energy_model} is almost identical to the mean energy of alpha particles at any given time. Data for \Cref{fig:objective_validation} was generated using \textit{collisional tracing} in the ANTS code \cite{drevlak2014}. $20,000$ particles were traced from each of 10 configurations: the National Compact Stellarator eXperiment (NCSX) \cite{zarnstorff2001physics}, Advanced Research Innovation and Evaluation Study - Compact Stellarator (ARIES-CS) \cite{najmabadi2008aries}, a quasi-axisymmetric (QA) stellarator developed at New York University (NYU) \cite{garabedian2008three,garabedian2009design}, the Chinese First Quasi-axisymmetric Stellarator (CFQS) \cite{liu2018magnetic}, a quasi-helically (QH) symmetric stellarator developed at the Max Planck Institute for Plasma Physics (IPP) \cite{nuhrenberg1988quasi}, a QA stellarator developed at IPP \cite{henneberg2019properties}, the Helically Symmetric eXperiment (HSX) \cite{anderson1995helically}, Wistell-A \cite{bader2020advancing}, the Large Helical Device (LHD) \cite{iiyoshi1999overview}, and the Wendelstein 7-X (W7-X) \cite{klinger2016performance}. 
The scattered points are alpha particle energies at the moment they are lost. The mean of the particle energies (solid black line) is shown against the energy model \cref{eq:energy_model} (dashed red line). The accuracy of the energy model in predicting the mean energy justifies its use as an optimization objective. We take the expectation of this energy measure to compute our optimization objective, replacing $\nu_s^{\alpha/e}$ by the inverse of the fixed tracing time $t_{\max}$: \begin{equation} \mathcal{J}_{\text{energy}}(\mathbf{x},v_{\parallel},\mathbf{w}) = 3.5e^{-2\mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w}) / t_{\max}}. \label{eq:Jenergy} \end{equation} We write the confinement time as $\mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w})$ to explicitly denote its dependence on the initial particle position, parallel velocity, and decision variables. For a particle that is lost at time $t$ the confinement time is calculated as $\mathcal{T} = \min\{t,t_{\max}\}$. To compute our optimization objective, the expected energy lost, $\mathcal{J}(\mathbf{w}) = \mathbb{E}[\mathcal{J}_{\text{energy}}(\mathbf{x},v_{\parallel},\mathbf{w})]$, we integrate $\mathcal{J}_{\text{energy}}$ against the distribution $f(\mathbf{x},v_{\parallel})$ of initial particle positions and parallel velocities, \begin{equation} \mathcal{J}(\mathbf{w}) := \int_{\mathbf{x}}\int_{v_{\parallel}} 3.5e^{-2\mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w}) / t_{\max}} \,\, f(\mathbf{x},v_{\parallel}) \, \, dv_{\parallel} \, d\mathbf{x} . \label{eq:objective_integral} \end{equation} In \Cref{sec:objective_approximation} we discuss three possible methods of computing this integral: Monte Carlo (MC), Simpson's rule, and Quasi-Monte Carlo (QMC) \cite{lemieux2009MC}. As a simple alternative to this objective we can also minimize the energy lost from particles born on a single flux surface. This has the advantage of reducing the dimension of the objective computation. Hence we define the surface objective as \begin{equation} \mathcal{J}_{s}(\mathbf{w}) := \int_{\theta,\zeta}\int_{v_{\parallel}} 3.5e^{-2\mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w}) / t_{\max}} \,\, f(\theta,\zeta,v_{\parallel} | s) \, \, dv_{\parallel} \, d\theta \, d\zeta . \label{eq:surface_objective_integral} \end{equation} For the choice $s=1/4$ we abbreviate this objective as $\mathcal{J}_{1/4}$. The previous stellarator designs that leveraged optimization of empirical alpha particle losses, ARIES-CS and the design by Gori et al., used as optimization objectives the expected confinement time and the conditional expectation of the confinement time over reflected (bouncing) particles, respectively. In this study, we opt to use the energy loss objective $\mathcal{J}$ rather than mean confinement time because of its interpretation as an energy. However, the mean confinement time and $\mathcal{J}$ may be related through Jensen's inequality, \begin{equation} \mathcal{J}_{\text{energy}}(\mathbb{E}[\mathcal{T}]) \le \mathbb{E}[\mathcal{J}_{\text{energy}}(\mathcal{T})] = \mathcal{J}(\mathbf{w}). \end{equation} By a straightforward computation, $\mathbb{E}[\mathcal{T}] \ge -\frac{t_{\max}}{2}\ln(\frac{\mathcal{J}(\mathbf{w})}{3.5})$. Hence minimizing $\mathcal{J}$ raises this lower bound on the mean confinement time, and, conversely, maximizing the mean confinement time should reduce $\mathcal{J}$. The set of local minima for these two objectives is not in general the same. However, if there exist configurations with $0\%$ losses, then the objectives share the set of global minimizers.
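To make the per-particle energy score concrete, the following sketch implements \cref{eq:Jenergy} and illustrates, with synthetic confinement times (not data from this work), the Jensen-type relation between $\mathcal{J}$ and the mean confinement time.
\begin{verbatim}
import numpy as np

def J_energy(T, t_max):
    """Energy (MeV) attributed to a particle with confinement time T, Eq. (eq:Jenergy):
    prompt losses contribute nearly 3.5 MeV, particles confined to t_max contribute
    3.5*exp(-2) MeV."""
    return 3.5 * np.exp(-2.0 * np.minimum(T, t_max) / t_max)

if __name__ == "__main__":
    t_max = 1e-2                                # 10 ms terminal tracing time
    rng = np.random.default_rng(0)
    # synthetic confinement times: 5% of particles lost early, the rest confined
    T = np.where(rng.random(10_000) < 0.05,
                 rng.exponential(1e-3, 10_000), t_max)
    J = J_energy(T, t_max).mean()               # Monte Carlo estimate of the objective
    bound = -0.5 * t_max * np.log(J / 3.5)      # Jensen lower bound on E[T]
    print(f"J = {J:.3f} MeV,  mean T = {T.mean():.4e} s  >=  {bound:.4e} s")
\end{verbatim}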
\section{Numerical computation of objective} \label{sec:objective_approximation} Monte Carlo quadrature and deterministic numerical quadrature methods can be used to approximate the integral \cref{eq:objective_integral}. Whether spawned on a mesh or randomly according to some distribution, particles with initial position and parallel velocity $(\mathbf{x}_i,(v_{\parallel})_i)$ are traced through time until breaching the last closed flux surface, $s=1$, at some time $t \le t_{\max}$, or until the terminal tracing time $t=t_{\max}$ is reached. The confinement time is calculated as $\mathcal{T} = \min\{t,t_{\max}\}$, which can be converted to the approximate energy lost due to potential particle ejection via \cref{eq:Jenergy}. Quadrature methods combine the integrand values computed from evaluation points as a weighted sum, \begin{equation} \mathcal{J}(\mathbf{w}) \approx \sum_{i=1}^N \omega_i\mathcal{J}_{\text{energy}}(\mathbf{x}_i,(v_{\parallel})_i, \mathbf{w})f(\mathbf{x}_i,(v_{\parallel})_i, \mathbf{w}) \label{eq:objective_quadrature} \end{equation} where the weights $\{\omega_i\}_{i=1}^N$ and nodes $\{\mathbf{x}_i,(v_{\parallel})_i\}_{i=1}^N$ are determined by the quadrature method. We briefly explore three different methods for approximating our objectives: MC, QMC, and Simpson's rule \cite{burden2015numerical}. MC quadrature samples $N$ nodes randomly from some density $\{\mathbf{x}_i,(v_{\parallel})_i\}_{i=1}^N \sim g(\mathbf{x},v_{\parallel})$ and approximates the integral via \cref{eq:objective_quadrature} with weights $\omega_i = (g(\mathbf{x}_i,(v_{\parallel})_i)N)^{-1}$. In our setting, $f_{\theta,\zeta}(\theta,\zeta |s)$ varies depending on the MHD equilibrium computed from $\mathbf{w}$. To simplify the sampling procedure, we opt to sample $\theta$ and $\zeta$ from a uniform distribution. Hence, initial particle positions and velocities are sampled from \begin{equation} g(s,\theta,\zeta,v_{\parallel}) := f_s(s)f_{v_{\parallel}}(v_{\parallel})n_{\text{fp}}/4\pi^2. \label{eq:sample_density} \end{equation} The standard deviation, and hence convergence rate, of the MC estimator is $\sigma/\sqrt{N}$ where $\sigma$ is the standard deviation of $\mathcal{J}_{\text{energy}} f/g$. On the one hand MC is slow to deliver accurate estimates; on the other hand it does not rely on smoothness assumptions to achieve its convergence rate, unlike Simpson's rule. When used in the optimization loop, Monte Carlo methods can be applied in two ways: by regenerating the samples $\{(\mathbf{x}_i,(v_{\parallel})_i)\}_{i=1}^N$ at each iteration, or by generating the samples once and holding them fixed throughout the optimization. We denote the former method as generic MC. The latter method is known as the Sample Average Approximation method (SAA) \cite{shapiro2001monte}. A great benefit of using SAA is that it forms deterministic optimization problems which can be solved by any conventional optimization method. The principal drawback of SAA is the slight bias it incurs in the solution, similar to quadrature methods. When using generic MC to compute the optimization objective, stochastic optimization methods must be used to solve the optimization problem. Stochastic solvers tend to converge slowly, but arrive at unbiased solutions. Quasi-Monte Carlo methods are a deterministic analog of MC methods. Similar to MC, they approximate integrals as sample averages.
However, the points used in the sample average are not truly random, rather they are \textit{low discrepancy sequences}: deterministic point sets designed to cover the domain more uniformly than random samples. Quasi-Monte Carlo methods boast a convergence rate of $O(1/N)$, when using $N$ points in the approximation, which is an impressive improvement over MC and SAA. The constant in the convergence rate depends on the \textit{total variation} of the integrand, a measure of its rate of change, rather than its variance. Since the integrand in \cref{eq:objective_integral} depends on the confinement time, which is non-smooth, and perhaps even discontinuous in $\mathbf{x}$ and $v_{\parallel}$, the total variation of the integrand is large, and so QMC may not outperform MC until the number of samples is large. \begin{figure}[tbh!] \includegraphics[scale=0.4]{figures/1d_slice.pdf} \includegraphics[scale=0.4]{figures/error_in_objective_approximations.pdf} \centering \caption{(Left) Approximations of the objective function $\mathcal{J}_{1/4}$ using Simpson's rule, QMC, SAA, and MC across a one dimensional slice of space, around a point $\mathbf{w}_0$. The curves were computed by tracing $4096$ particles per point, with particles on a $16^3$ mesh for Simpson's rule. The shaded region is the $95\%$ confidence interval for the objective value computation with MC. The black line (Actual) represents the actual value of the objective, and is computed using MC with $32,000$ samples. (Right) Relative error of MC, QMC, and Simpson's rule in computing the objective at a single point. The MC curve represents the expected relative error of the MC estimator given the sample size, and was computed by bootstrapping.} \label{fig:quadrature_comparison} \end{figure} Simpson's rule uses quadratic interpolation of a function on a mesh to approximate the function's integral. High order quadrature methods, like Simpson's rule, achieve high-order convergence rates when the integrand can be well-approximated by a low-degree polynomial. However, since particle confinement times may jump chaotically under small perturbations in $\mathbf{x},v_{\parallel}$, Simpson's rule and other high-order quadrature schemes are not expected to achieve a high-order convergence rate. In \Cref{fig:quadrature_comparison}, we compare the approximation quality of four methods of computing $\mathcal{J}_{1/4}$: generic MC, SAA, Simpson's rule, and QMC. \Cref{fig:quadrature_comparison} (right) shows the relative error of MC, Simpson's rule and QMC in approximating the objective $\mathcal{J}_{1/4}$ at a single point. At the sample sizes accessible to us, MC achieves accuracy similar to Simpson's rule and QMC. QMC performs slightly better than MC, but does not reliably do so at the sample sizes shown. \Cref{fig:quadrature_comparison} (left) shows the objective approximations over a one dimensional slice of space near an arbitrary configuration $\mathbf{w}_0$. Spatially, we find that SAA provides a smooth approximation to the objective, which is beneficial for optimization. For this reason we use SAA to compute the objectives in the numerical experiments. Unfortunately, due to the extraordinarily high standard deviation of the confinement times, typically $>2000$ points are required to reduce the noise in the objective enough so that it can be tractably minimized by an optimization routine. The standard deviation of the confinement times is often of the same order of magnitude as the mean, though it decreases as the loss fraction decays to zero.
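A minimal sketch of the SAA estimator used here is given below. The particle-tracing routine \code{confinement\_times} is a stand-in for the \code{SIMSOPT}-based tracing of \Cref{sec:simsopt}, and, for brevity, the reweighting by the angular density $f_{\theta,\zeta}\propto|\sqrt{g}|$ of \cref{eq:birth_distribution_angles} is omitted, so the sketch treats angles as uniformly weighted (a simplification relative to \cref{eq:objective_quadrature}).
\begin{verbatim}
import numpy as np

E0, N_FP = 3.5, 4            # alpha birth energy (MeV), number of field periods

def sample_birth_states(N, rng):
    """Draw (s, theta, zeta, v_par/v_max) from the sampling density g of
    Eq. (eq:sample_density): f_s via rejection sampling, angles and v_par uniform."""
    f_s = lambda s: (1 - s**5)**2 * (1 - s)**(-2/3) \
                    * np.exp(-19.94 * (12 * (1 - s))**(-1/3))
    grid = np.linspace(1e-3, 1 - 1e-3, 2001)       # avoid the singular endpoints
    f_max = f_s(grid).max()
    s = np.empty(N); k = 0
    while k < N:
        cand = rng.uniform(1e-3, 1 - 1e-3, N - k)
        keep = cand[rng.uniform(0.0, f_max, cand.size) < f_s(cand)]
        s[k:k + keep.size] = keep; k += keep.size
    theta = rng.uniform(0.0, 2 * np.pi, N)
    zeta = rng.uniform(0.0, 2 * np.pi / N_FP, N)
    vpar = rng.uniform(-1.0, 1.0, N)               # in units of v_max
    return s, theta, zeta, vpar

def objective_SAA(w, samples, confinement_times, t_max):
    """Sample-average approximation of Eq. (eq:objective_integral) over a FIXED
    sample set; confinement_times(w, samples, t_max) is assumed to run the
    equilibrium-plus-tracing pipeline and return one time per sample."""
    T = np.minimum(confinement_times(w, samples, t_max), t_max)
    return np.mean(E0 * np.exp(-2.0 * T / t_max))
\end{verbatim}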
In future work, variance reduction techniques \cite{law2023meta,law2022accelerating,hammersley2013monte} should be used to improve the accuracy of the objective computation and reduce the computational burden associated with tracing particles. \section{Numerical results} \label{sec:numerical} In this section we explore numerical solutions of \cref{eq:main}. We show physical properties of two four-field-period vacuum configurations: configuration \textbf{A} was optimized using the surface initialization loss $\mathcal{J}_{1/4}$, and configuration \textbf{B} was optimized using the volumetric initialization loss $\mathcal{J}$. We find that minimizers of $\mathcal{J}_{1/4}$ also perform well under $\mathcal{J}$, and that quasi-symmetry need not be satisfied for good confinement. Furthermore, we analyze the local relationship of particle losses with a quasi-symmetry metric, finding that reducing the violation of quasi-symmetry can increase particle losses. While our numerical solutions are vacuum configurations, the optimization model and numerical methods can be applied to finite-$\beta$ configurations as well. Due to the computational expense of repeated particle tracing, our configurations were optimized with a terminal trace time of $t_{\max}=10$ms. A three-dimensional view of the configurations is shown in \cref{fig:3d_configs}. The data that support the findings of this study are openly available at \url{https://github.com/mishapadidar/alpha_particle_opt}. \begin{figure}[tbh!] \includegraphics[scale=0.21]{figures/3d_config_iota_0.89_top_view.pdf} \hspace{15pt} \includegraphics[scale=0.21]{figures/3d_config_iota_1.043_top_view.pdf} \includegraphics[scale=0.3]{figures/cross_sections.pdf} \centering \caption{Three dimensional views of configuration \textbf{A} (left) and configuration \textbf{B} (middle). Cross sections of configuration \textbf{A} (solid) and \textbf{B} (dashed) at four cylindrical angles $\phi$ across a field period (right).} \label{fig:3d_configs} \end{figure} \subsection{Methods} \label{sec:methods} Initial experimentation demonstrated that the optimization landscape contains many local solutions. It therefore proved useful to search the optimization space by generating a host of initial points with varied rotational transform values, $\iota$. Starting points for the fast-ion optimization were generated by solving the optimization problem, \begin{align} \operatornamewithlimits{minimize}_{\mathbf{w}} \ \ (A-A^*)^2 + (\iota-\iota^*)^2 + \sum_{i=1}^{{n_{\text{grid}}}} \max(B(\mathbf{x}_i) - B_+^*,0)^2 + \max(B_-^* - B(\mathbf{x}_i),0)^2 , \label{eq:phase_one} \end{align} in \code{SIMSOPT} using concurrent function evaluations to compute forward-difference gradients, and the default solver in Scipy's least-squares optimization routine \cite{virtanen2020scipy}. Solutions matching the targets to within $5\%$ were found for each target rotational transform $\iota^*$. The decision variables were characterized by $n_{\text{mode}}=1$, i.e., ${n_{\wb}}=8$. The fast-ion optimization was initialized from the solutions of \cref{eq:phase_one}.
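A minimal sketch of this initialization step: the residuals of \cref{eq:phase_one} are passed to SciPy's least-squares routine with forward-difference gradients. The routines \code{aspect\_ratio}, \code{mean\_iota}, and \code{field\_strength\_grid} are illustrative stand-ins for the \code{VMEC}/\code{SIMSOPT} evaluations and are not part of the actual workflow.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

B_STAR, R_MIRROR = 5.0, 1.35
B_PLUS = B_STAR * 2 * R_MIRROR / (R_MIRROR + 1)    # upper field-strength bound
B_MINUS = B_STAR * 2 / (R_MIRROR + 1)              # lower field-strength bound

def phase_one_residuals(w, A_target, iota_target,
                        aspect_ratio, mean_iota, field_strength_grid):
    """Residual vector of Eq. (eq:phase_one); least_squares minimizes its squared
    norm.  The three callables are assumed to run VMEC and return scalars or a
    flat array of B values on the constraint grid."""
    B = field_strength_grid(w)
    return np.concatenate((
        [aspect_ratio(w) - A_target, mean_iota(w) - iota_target],
        np.maximum(B - B_PLUS, 0.0),
        np.maximum(B_MINUS - B, 0.0),
    ))

def solve_phase_one(w0, A_target, iota_target, evals, step=1e-4):
    # default 2-point (forward-difference) Jacobian and default trust-region solver
    return least_squares(phase_one_residuals, w0,
                         args=(A_target, iota_target, *evals), diff_step=step)
\end{verbatim}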
The magnetic field bound constraints \cref{eq:b_field_constraint_discrete} were treated with a quadratic penalty method with penalty weights all equal to one, \begin{align} \operatornamewithlimits{minimize}_{\mathbf{w}} \ \ \mathcal{J}_{\text{penalty}}(\mathbf{w}) := \mathcal{J}(\mathbf{w}) + \sum_{i=1}^{{n_{\text{grid}}}} \max(B(\mathbf{x}_i) - B_+^*,0)^2 + \max(B_-^* - B(\mathbf{x}_i),0)^2, \label{eq:penalty_method} \end{align} with an analogous form when using the surface objective $\mathcal{J}_{1/4}$. The particle loss objective $\mathcal{J}(\mathbf{w})$ was computed using SAA, since it provided a reasonably smooth approximation of the objective. The penalty method was used because the field strength constraints are \say{soft constraints} --- they do not need to be satisfied exactly. Powell's BOBYQA algorithm \cite{powell2009bobyqa} within the Python package PDFO \cite{ragonneau2021pdfo} was used to solve \cref{eq:penalty_method}. BOBYQA is a derivative-free trust region method that uses local quadratic approximations of the objective to make progress towards a minimum. BOBYQA performed particularly well on this problem due to its ability to handle computational noise and use samples efficiently \cite{cartis2019improving}. Empirically, we find that the efficiency of the optimization with terminal time $t_{\max}$ can be substantially improved by warm-starting the optimization from a solution with near-zero losses at a shorter value of $t_{\max}$, say $t_{\max}/10$. The optimization up to the terminal time $t_{\max}=10$ms was performed by solving a sequence of optimization problems, where at each step $t_{\max}$ and the number of Fourier modes were increased: $(t_{\max},n_{\text{mode}}) =$(0.1\text{ms}, 1), (1\text{ms}, 1), (1\text{ms}, 2), (10\text{ms}, 2), (10\text{ms}, 3). For $t_{\max}=0.1,1,10$ms we use $10^4,7^4,6^4$ particles, respectively, and $8,48,48$ MPI processes to trace particles. Particles were traced until reaching the terminal tracing time of $t_{\max}$, or until the particle reached the $s=1$ flux surface or $s=0.01$ flux surface. Particles reaching the $s=1$ flux surface were deemed lost, while particles reaching the $s=0.01$ flux surface were deemed to be confined to the terminal time $t_{\max}$. The $s=0.01$ stopping criterion is currently required as part of the tracing code in \code{SIMSOPT}, but should not be used in future work. \begin{table}[htb!] \renewcommand{\arraystretch}{2} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Config. & Aspect Ratio & Mirror ratio & Mean $\iota$ & Volume loss fraction & $s=1/4$ loss fraction \\ \hline \textbf{A} &6.67 &1.33 &0.856 &0.022 &0.0046 \\ \hline \textbf{B} &6.61 &1.32 &1.023 & 0.0215 &0.0094 \\ \hline \end{tabular} \caption{Properties of configurations \textbf{A} and \textbf{B}. Loss fractions were computed by tracing 10,000 particles to $t_{\max}=10$ms. } \label{table:config_A_B} \end{table} \subsection{Two solutions} \label{sec:two_solutions} We present two solutions found by solving \cref{eq:main}. Configuration $\textbf{A}$, with solution vector $\mathbf{w}_{\textbf{A}}$, was found by minimizing the surface initialization objective $\mathcal{J}_{1/4}$, which measures the energy lost by particles born on the $s=0.25$ flux surface. Configuration $\textbf{B}$, with solution vector $\mathbf{w}_{\textbf{B}}$, was found by minimizing the energy lost by particles born throughout the entire volume, i.e. objective $\mathcal{J}$. Properties of configurations \textbf{A} and \textbf{B} can be seen in \Cref{table:config_A_B}.
All configurations presented in this section were scaled to the same $1.7$m minor radius and $B_{0,0}(s=0) = 5.7$T field strength on the magnetic axis as the ARIES-CS configuration \cite{ku2008physics}. Configuration $\textbf{A}$ \textit{almost} reaches the global minimum value of $\mathcal{J}_{1/4}$, attaining an objective value of $\mathcal{J}_{1/4}(\mathbf{w}_{\textbf{A}}) = 0.475$ and a loss fraction of $0.0046$ for particles born on the $s=0.25$ flux surface; a global minimum would have zero particle losses and $\mathcal{J}_{1/4} = 0.473$. Configuration $\textbf{A}$ also has a low loss fraction, $0.022$, for particles born throughout the volume according to $f$. Similarly, configuration \textbf{B} has a loss fraction of $0.0215$ for particles born throughout the volume and a loss fraction of $0.0094$ for particles born on the $s=0.25$ flux surface. While the two configurations were optimized for different objectives, both perform well under both objectives. Optimizing using the surface loss $\mathcal{J}_{1/4}$ reduces the dimension of the integral \cref{eq:surface_objective_integral} and potentially the variance of the objective. Since improvement in the two objectives is highly correlated, in future work the surface loss objective could be used in place of the volume loss objective $\mathcal{J}$, unless confinement times turn out to depend strongly on the radial birth distribution. Neither configuration \textbf{A} nor \textbf{B} has active constraints at the solution, and so the constraints do not limit the performance of the solutions. We do find, however, that in general the constraints on the field strength are active throughout the optimization. Without constraining the field strength, the mirror ratio becomes unphysically large, the contours of $B$ close poloidally, and solutions become approximately quasi-isodynamic \cite{subbotin2006integrated}. In \Cref{fig:loss_profile} we compare the alpha particle loss curves of configurations \textbf{A} and \textbf{B} to those of the stellarator configurations introduced in \Cref{fig:objective_validation}, as well as the QA and QH configurations from Landreman and Paul (labeled LP-QA and LP-QH)~\cite{landreman2022magnetic}. To compute the curves, $5000$ particles born throughout the volume (left) or on the $s=0.25$ flux surface (right) were traced until the terminal time of 10ms, until they crossed the $s=1$ flux surface and were considered lost, or until they reached $s=0.01$ and were considered confined. Our configurations demonstrate good particle confinement up to the terminal time $t_{\max}=10$ms used in the optimization. Configurations \textbf{A} and \textbf{B} outperform all but LP-QA, LP-QH, and Wistell-A in terms of particle losses from the $s=0.25$ flux surface, and are only outperformed by Wistell-A and LP-QH in terms of losses of volumetrically initialized particles. The lowest loss fraction from the $s=0.25$ flux surface, $0\%$, is that of LP-QH. The QS optimization problem posed by Landreman and Paul is computationally much less expensive to solve, and has a much smoother objective than $\mathcal{J}$ and $\mathcal{J}_{1/4}$, allowing for solutions to be refined to a much higher degree with gradient-based optimization methods. \begin{figure}[tbh!] \includegraphics[scale=0.5]{figures/loss_profile.pdf} \centering \caption{Collisionless loss curves for $5000$ particles born throughout the volume with distribution $f$ (left) or born on the $s=0.25$ flux surface (right).
Configurations were scaled to a $1.7$m minor radius and a $B_{0,0}(s=0)=5.7$T field strength on the magnetic axis, like the ARIES-CS reactor \cite{ku2008physics}. Wistell-A reported less than $0.1\%$ losses from the $s=0.25$ flux surface, and LP-QH reported no losses from the $s=0.25$ flux surface and a loss fraction of $0.0002$ for particles distributed throughout the volume.} \label{fig:loss_profile} \end{figure} \subsection{Local analysis of Quasi-symmetry} \label{sec:quasisymmetry_analysis} Neither configuration \textbf{A} nor configuration \textbf{B} is QS. This is seen most clearly in \Cref{fig:B_contour} by viewing the contours of the magnetic field strength in Boozer coordinates. QS fields have the representation $B = B(s,m\theta - n\zeta)$ for some numbers $m,n$ in Boozer coordinates, implying that the contours of $B$ are straight when viewed as a function of $\theta$ and $\zeta$ \cite{imbert2019introduction}. Near the magnetic axis only $m=1$ is possible \cite{cary1997helical}, and to preserve field period symmetry $n$ must be a multiple of the number of field periods, $n = kn_{\text{fp}}$ for some integer $k$. Quasi-axisymmetry occurs when $m=1,\, n=0$ and quasi-helical symmetry occurs when $m=1,\, n\neq 0$. \begin{figure}[tbh!] \includegraphics[scale=0.4]{figures/nfp4_QH_cold_high_res_phase_one_mirror_1.35_aspect_7.0_iota_0.89_field_strength.pdf} \includegraphics[scale=0.4]{figures/nfp4_QH_cold_high_res_phase_one_mirror_1.35_aspect_7.0_iota_1.043_field_strength.pdf} \centering \caption{Contours of $B$ in Boozer coordinates on four flux surfaces $s=0.05,0.25,0.5,1.0$, for configuration \textbf{A} (left four) and configuration \textbf{B} (right four).} \label{fig:B_contour} \end{figure} Exact quasi-symmetry is a sufficient condition for perfect confinement. In addition, Landreman and Paul \cite{landreman2022magnetic} showed that configurations which are precisely, though not exactly, quasi-symmetric can have excellent confinement properties. However, in general it is not clear how particle confinement degrades when QS is broken, or how particle confinement improves as the violation of QS is reduced. To explore this relationship, we modify configuration \textbf{A} to reduce the violation of QS and examine how the corresponding particle losses are affected. The degree of $(m,n)$-quasi-symmetry of a configuration can be measured by the metric proposed in \cite{landreman2022magnetic}, which we denote $Q_{m,n}(\mathbf{w})$. Configuration \textbf{A} can be modified to have reduced violation of $(m,n)$-quasi-symmetry by moving the solution vector $\mathbf{w}_{\textbf{A}}$ in the negative gradient direction of $Q_{m,n}$. For small step sizes $\alpha$, configurations with decision variables \begin{equation} \mathbf{w}_{m,n}(\alpha) = \mathbf{w}_{\textbf{A}} -\alpha\nabla Q_{m,n}(\mathbf{w}_{\textbf{A}}) \label{eq:step_in_qs_dir} \end{equation} will have a lower departure from QS than configuration \textbf{A}. As seen in \Cref{fig:local_qs}, as the violation of QS is reduced along this path, particle losses increase substantially, from approximately $0.43\%$ to approximately $1\%$, for all three types of QS considered, $(m,n) = (1,0), (1,4), (1,-4)$. Locally, then, reducing the violation of QS degrades confinement, and so quasi-symmetric configurations may be isolated from non-quasi-symmetric configurations with low losses. \begin{figure}[tbh!]
\includegraphics[scale=0.6]{figures/pareto_loss_frac.pdf} \centering \caption{Fraction of alpha particles lost as the violation of QS is reduced (right to left) along the line segment $\mathbf{w}_{\textbf{A}} -\alpha\nabla Q_{m,n}(\mathbf{w}_{\textbf{A}})$, for three different types of QS: $(m,n) = (1,0), (1,4), (1,-4)$. Reducing the violation of QS increases the fraction of lost particles. The shaded region indicates the $95\%$ confidence interval of the loss fraction.} \label{fig:local_qs} \end{figure} \section{Future work} \label{sec:conclusion} In the design of the ARIES-CS reactor, configurations with good alpha confinement were found by including alpha particle tracing calculations as part of an objective function within the optimization loop. In this study we have expanded upon this method, showing that it can be used to find configurations with low alpha particle loss fractions. However, in its current form, fast-ion optimization is computationally expensive, often taking multiple days to complete. Proxy metrics, on the other hand, can be used to design stellarators in only a few hours on a computing cluster. To reduce the wall-clock time of fast-ion optimization we propose three improvements: the use of variance reduction techniques to reduce the number of traced particles, symplectic particle tracing algorithms to improve the speed and accuracy of confinement time calculations, and multi-fidelity optimization methods to reduce the number of times particle tracing needs to be performed altogether. Law et al. found in \cite{law2021accelerating,law2023meta} that combining variance reduction techniques such as importance sampling, control variates, and information reuse \cite{ng2014multifidelity} can reduce the number of particles that must be traced by a factor of 100. In addition, variance reduction techniques can be implemented relatively quickly, making them a natural addition to fast-ion optimization methods. The time spent tracing can also be reduced by using faster orbit integration schemes. Albert et al. \cite{albert2020accelerated,albert2020symplectic} showed that symplectic tracing algorithms can trace particle trajectories three times faster than adaptive integration algorithms, such as RK45, while maintaining the same statistical accuracy. Lastly, we propose using multi-fidelity optimization methods to reduce the number of expensive particle tracing simulations \cite{march2012provably,peherstorfer2018survey}. Multi-fidelity optimization methods for fast-ion optimization would rely on \say{low-fidelity models} of $\mathcal{J}$ to take reliable steps towards minima without performing many expensive particle tracing simulations. Low-fidelity models of the energy loss objective could leverage particle tracing simulations with larger step sizes or simply be proxies, such as quasi-symmetry metrics. In addition to improvements in optimization efficiency, there are improvements to be made in constructing objective functions. Thus far, particle tracing has only been used to measure confinement. However, now that particle losses can be tractably reduced, the destructive effects of alphas on plasma-facing components become a central design consideration. A \say{wall-loading} objective function can either concentrate or disperse the alpha particle load on the wall, depending on engineering considerations. \section{Acknowledgments} We thank Max Ruth, Shane Henderson, and Rogerio Jorge for their useful discussions. This work was supported by a grant from the Simons Foundation (No.
560651, D.B.).
{ "arxiv_id": "2302.11384", "language": "en", "timestamp": "2023-02-23T02:14:57", "url": "https://arxiv.org/abs/2302.11384", "yymm": "2302" }
\section{Introduction} Many-body localization (MBL) is a spatio-temporal phenomenon that is believed to exist in interacting fermion systems at strong enough disorder~\cite{Basko2006, Gornyi2005, Oganesyan2007, Nandkishore2015, Bera2015, Prelovsek-review-2017,AbaninBloch-Review-2018,AletReview2018}. It manifests itself as a strongly reduced (preMBL) or even fully inhibited tendency (proper MBL) towards thermalization at large enough disorder strength. The requirement of strong disorder implies that all remnants of relaxation dynamics in the preMBL-regime are necessarily very slow, reminiscent of the familiar behavior of glasses. ``Slowness'' originates from the fact that long-time behavior typically is dominated by collective reorganization processes in a highly disordered many-body-energy landscape. The salient collective events are associated with a broad distribution of time scales; there is no single characteristic rate that would lend itself as a measure of time. We briefly elaborate on the issue of time- and length scales by formulating a scenario: Consider an ensemble of disordered quantum wires of a finite length $L$. At strong enough disorder, typical many-body states can be thought of essentially as single (dressed) Slater-determinants constructed from localized single-particle wavefunctions; this is the gist of the celebrated concept of local integrals of motion (LIOM)~\cite{Serbyn2013, Huse2014, Vosk2015, OBrien2016, Imbrie-review2017}. While many, perhaps even most, wires may follow the LIOM-paradigm, a fraction ${\mathfrak q}(L)$ of the wires will not; they form a thermalizing sub-ensemble that we refer to as ``ergodic bubbles''~\cite{deroeckPRB17, Thiery2017}. A new ensemble of wires with the length $2L$ can be formed by combining samples, giving rise to a new fraction ${\mathfrak q}(2L)$. The evolution of ${\mathfrak q}(2L)$ with system size has been investigated in RG-studies of toy models~\cite{Vosk2015, Zhang2016, Dumitrescu2018, Dumitrescu2017, Goremykina2018, Morningstar2019, MorningstarPRB20}. We emphasize two important implications of this rough picture for observations, numerical and experimental, in thermalizing phases when growing the system size at ${\mathfrak q}(L)\ll 1$: (i) strong finite size effects occur because thermalization sets in only once the system exceeds a certain length scale, i.e. once it is long enough to typically contain an ergodic bubble; (ii) the time scale for relaxation can be exponentially long because it reflects how long it takes a bubble to thermalize an almost localized and large sample region. We refer to this slow relaxation process as `creep'. Since creep emerges from destabilizing an interacting but localized (i.e. LIOM-dominated) sample region via remote thermal bubbles, creep indicates the dominant mechanism for many-body de-localization and is prevalent in the preMBL-regime. Signatures of creep are the exponentially long observation times required in numerical and experimental simulations, as well as an appreciable dependence of transport-sensitive observables on the simulation volume. We interpret the gradual increase of the estimate for the disorder strength beyond which the localization transition was expected to occur as a signature of creep: while early works favored a value $W{\approx} 3.8$~\cite{Luitz2015, MaceMultifractality2018}, later authors reported significant finite size effects and favored larger values $W{\approx 4.5-5.5}$ \cite{Devakul2015, Doggen2018, SierantLargeWc20, DoggenRevAnnPhy21}.
Creep manifests, in particular, in the spatio-temporal behavior of correlation functions; in the preMBL-regime, they exhibit a spatial decay much slower than exponential with tails slowly increasing with time; for a discussion of how creep appears in various observables see \textcite{Weiner19}. It is only recently that creep in its diverse manifestations and the resulting implications for the (pre)MBL phenomenology in finite size systems have been appreciated: A weak tendency towards equilibration has been analyzed in the relaxation dynamics even at large disorder in the local charge density, the sublattice imbalance, and the density matrix~\cite{Bera2017,Weiner19,TaliaPRB19, SierantPRB2021,Morningstar2022,SelsBathPRB22}; the evolution of the entanglement and the number entropy has been investigated ~\cite{SirkerPRL20,SierantPRB2021, SirkerPRB21,MaximilianAP21, Kiefer2022, Luitz2020, Ghosh2022, SirkerComment22, GhoshComment22}; attempts have been made to interpret the shift of intersection points (``critical points") and the flow of the spectral function with increasing system sizes~\cite{Vidmar2020, Panda2019, SierentPRL20, SierantLargeWc20, SuntasPRB20, Polkovnikov2021, AbaninAOP21}. The status as we see it is summarized in the phase diagram~Fig.~\ref{f1}. \begin{figure}[t] \centering {\includegraphics[width=1.0\columnwidth]{phasediag.pdf}} \caption{Schematic phase diagram of the disordered t-V (isotropic Heisenberg) model~\eqref{eq:H}. Shaded region: thermalizing regime exhibiting (transient) power-laws, e.g., for the decay of imbalance (or time-fluctuations) and growth of entanglement and second Renyi entropy. $W_c{\gtrsim 4.5}$: crossover from accelerating to decelerating dynamics as indicated by the temporal increase/decrease of an effective diffusion exponent $\beta(t)$~\cite{Bera2017,Weiner19}. Beyond $W_\text{freeze}{\approx}10$: fractal dimension of the dynamically active part of the many-body Hilbert space vanishes (freezing)~\cite{NandyPRB21}. This region is largely unexplored in finite size and time numerical studies and hosts the proper MBL phase, if it exists.} \label{f1} \end{figure} An implication of creep is the absence of characteristic time scales; one rather expects broad distributions of rates characterizing dynamical processes. This poses the question of at which time it is meaningful to compare the dynamical status of two samples that nominally belong to the same ensemble but differ by the specific disorder realization. We here pursue the implications of the following hypothesis: The time evolution in both samples can at least partially be synchronized when introducing an `internal sample time'. We here propose to model the internal time by a form of entropy. While the model has its limitations when adopted on a per-sample level, it works incredibly well for the disorder-averaged `internal ensemble time'. To illustrate the power of the concept, we plot in Fig.~\ref{f2} the average charge imbalance $\overline{\msr{I}}(t)$ over the entanglement entropy $\aSe(t)$: a data collapse is observed in a wide time window for a range of disorder values situated in the (transient) sub-diffusive regime (Fig.~\ref{f1}). 
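Operationally, the re-parameterization behind Fig.~\ref{f2} amounts to trading the time axis for the entropy axis. A minimal sketch of how such a collapse plot can be produced from (ensemble-averaged) time traces is given below; this is our own illustration, not necessarily the exact post-processing used for the figure.
\begin{verbatim}
import numpy as np

def imbalance_vs_entropy(imbalance, entropy, s_page, n_bins=200):
    """Use the entanglement entropy as 'internal clock': interpolate the imbalance
    trace onto a common grid of S_e/S_Page values so that curves for different L
    and W can be overlaid.  Assumes the entropy trace grows monotonically in time."""
    x = entropy / s_page
    grid = np.linspace(x.min(), x.max(), n_bins)
    return grid, np.interp(grid, x, imbalance)

# Page value for an equal bipartition, S_Page = ln(2) L/2 - 1/2
s_page = lambda L: 0.5 * L * np.log(2.0) - 0.5
\end{verbatim}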
On a more intuitive level, two considerations motivate us to consider an entropic entity $\mS_L(t)$ as a model of the internal clock of a system with spatial extension $L$: (i) The time evolution of the entanglement entropy is, in a sense, the fastest and most stable relaxation process: it is relatively fast because, unlike the redistribution of, e.g., energy and particle number, it is not restricted by a local conservation law; even in the proper MBL phase, one expects a logarithmic-in-time growth of the entanglement entropy, when energy and particle densities have long ceased to relax. The saturation dynamics of physical observables will thus be represented by a plateau when plotted against $\msr{S}_\mathrm{e}$ in the thermodynamic limit. (ii) On a more formal level, the rate of entropy production can be considered as a generalized driving force for the relaxation of (ensemble-averaged) physical observables~\cite{brenig89}. \begin{figure}[t] \centering {\includegraphics[width=1\columnwidth]{se_collapse}} \caption{Evolution of the ensemble-averaged imbalance after quenching a product (Ne\'el)-state in a disordered wire of length $L{=}16,20,24, 26$ at four moderate disorder strengths $W{=}1.5, 2.0, 2.5, 3.0$. Time is measured in terms of the average entanglement entropy in units of the Page-value $\mS_\text{Page}{\coloneqq} \ln 2 (L/2) {-} 1/2$. } \label{f2} \end{figure} Subjecting the synchronization concept to a further test on a second observable, we apply it to the time fluctuations of the local density, $\overline{\msr{F}}(t)$. Again an excellent scaling collapse is observed, demonstrating the power of the concept. Further, we find that $\overline{\msr{F}} \propto \aSe^{-\rho}$, where the exponent depends on the disorder strength, $\rho(W)$; a similar relationship also holds for typical values of ${\msr{F}}$. The corresponding exponent functions, $\rho_\text{ave}(W)$ and $\rho_\text{typ}(W)$, reveal opposing trends with increasing disorder strength; in particular, they intersect at a disorder value $W_{{\msr{F}}}$. Since $W_{{\msr{F}}}$ is sufficiently close to $W_c$ we take this as fresh evidence in favor of the existence of (at least) two subphases in the thermalizing regime indicated in the phase diagram Fig~\ref{f1} (`Subphase' is meant here in a weak sense indicating that the thermalizing phase has regions with qualitatively different relaxation behavior that are connected via a cross-over.). \section{Theoretical setting} \label{section2} \subsection{Model and Method} We consider spinless fermions within the $t{-}V$ model \eqq{ \mathscr{H} = & -\frac{1}{2}\sum_{i=1}^{L-1}\left(c_{i}^{\dagger}c_{i+1}+\text{h.c.}\right)+\sum_{i=1}^{L} \epsilon_i(n_i-1/2) \nonumber \\ & + V \sum_{i=1}^{L-1}(n_i-1/2)(n_{i+1}-1/2), \label{eq:H} } where $n_i\coloneqq c_{i}^{\dagger}c_{i}$, $i$ denotes the site index, $L$ the system size, and $V$ the nearest neighbor interaction strength; $V{=}1$ throughout this work. The on-site potentials, $\epsilon_i$, are uncorrelated and distributed within $[-W, W]$. All calculations are done at half-filling, $N/L{=}1/2$. We study quench dynamics starting from the (product state) charge density wave state $|\Psi\rangle = |1 0 1 0 \ldots\rangle$. Time propagation employs the standard Chebyshev expansion of the time evolution operator, which reduces the exponentiation of $\mathscr{H}$ to several sparse matrix-vector multiplications~\cite{Wei06, Bera2017}. A comparison of the Chebyshev method with exact time evolution is presented in Appendix~\ref{app:conv}.
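For modest system sizes the quench protocol can be reproduced with exact sparse propagation in place of the Chebyshev expansion (a stand-in that is adequate only for small $L$). The sketch below builds the Hamiltonian \eqref{eq:H} in the occupation-number basis, evolves the Ne\'el state, and evaluates the imbalance and half-chain entanglement entropy defined in the next subsection.
\begin{verbatim}
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import expm_multiply

def build_H(L, eps, V=1.0):
    """t-V Hamiltonian, Eq. (eq:H), in the full 2^L occupation basis (bit i = n_i).
    Nearest-neighbour hopping carries no Jordan-Wigner sign."""
    dim = 2 ** L
    occ = ((np.arange(dim)[:, None] >> np.arange(L)) & 1).astype(float)
    diag = (eps * (occ - 0.5)).sum(axis=1) \
         + V * ((occ[:, :-1] - 0.5) * (occ[:, 1:] - 0.5)).sum(axis=1)
    H = lil_matrix((dim, dim)); H.setdiag(diag)
    for a in range(dim):
        for i in range(L - 1):
            if (a >> i) & 1 and not (a >> (i + 1)) & 1:   # pattern 1,0 -> 0,1
                b = a ^ (1 << i) ^ (1 << (i + 1))
                H[a, b] = H[b, a] = -0.5
    return csr_matrix(H)

def observables(psi, L):
    """Sublattice imbalance and half-chain entanglement entropy of a state vector.
    (The overall sign of the imbalance depends on counting sites from 0 or 1.)"""
    occ = ((np.arange(2 ** L)[:, None] >> np.arange(L)) & 1).astype(float)
    n = (np.abs(psi) ** 2) @ occ
    imbalance = ((-1.0) ** np.arange(L) * n).sum()
    lam = np.linalg.svd(psi.reshape(2 ** (L // 2), 2 ** (L // 2)),
                        compute_uv=False)                 # Schmidt values
    p = lam ** 2; p = p[p > 1e-14]
    return imbalance, -(p * np.log(p)).sum()

if __name__ == "__main__":
    L, W = 10, 2.0
    rng = np.random.default_rng(1)
    H = build_H(L, rng.uniform(-W, W, L))
    neel = sum(1 << i for i in range(0, L, 2))            # |1 0 1 0 ...> pattern
    psi = np.zeros(2 ** L, dtype=complex); psi[neel] = 1.0
    for t in (1.0, 10.0, 100.0):
        I_t, S_t = observables(expm_multiply(-1j * t * H, psi), L)
        print(f"t = {t:6.1f}   imbalance = {I_t:+.3f}   S_e = {S_t:.3f}")
\end{verbatim}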
\subsection{Observables} In most of this work, we focus on two main observables. For the ``internal clock'' we adopt the entanglement entropy \begin{align} \msr{S}_\mathrm{e} \coloneqq - \text{Tr}_\text{A} \rho_\text{A} \ln \rho_\text{A}, \end{align} where $\rho_\text{A}$ denotes the density operator of subsystem $\text{A}$ as obtained after integrating out the complement of $\text{A}$, i.e. subsystem $\text{B}$. We use an equal partition $L_\text{A}{=}L_\text{B}{=}L/2$. With an eye to the measurement of internal times, we mention that $\msr{S}_\mathrm{e}(t)$ behaves similarly to the second Renyi-entropy $ \mS_2\coloneqq - \ln \text{Tr}_\text{A} \hat \rho_\text{A}^2, $ see Appendix~\ref{app1}. Recently, $\msr{S}_\mathrm{e}$~\cite{LukinScience19} as well as the Renyi entropy ${\mS_2}$ have been measured~\cite{BrydgesRenyiScience19} in ion trap experiments. Our second observable is the sublattice imbalance \begin{align} {\msr I}(t) \coloneqq \sum_{j}^{L} (-)^j\ n_j(t) \end{align} and derived quantities such as the density fluctuations \begin{align} {\msr{F}}(t) \coloneqq \frac{1}{L} \sum_{j=1}^L \langle [n_j(t) - \langle n_j(t) \rangle_{\Delta t}]^2 \rangle_{\Delta t}, \label{e4} \end{align} where $\langle \ldots \rangle_{\Delta t}$ denotes a sliding time window average over a set of given time traces $n_j(t)$, $j=1,\ldots,L$, also see ~\cite{NandyPRB21}; $\Delta t=12.5$ in our calculations. Observables averaged over different disorder configurations will be denoted by an overline, e.g., $\aSe(t), \overline{\msr{I}}(t)$. \section{Clean case - Results \& Discussion} Figs. \ref{f3a} and \ref{f4a} display the evolution of $\msr{S}_\mathrm{e}(t)$ and ${\msr I}(t)$ in the clean system, i.e., $W{=}0$. \subsection{Entanglement entropy - evolution in time and system size} \begin{figure}[t] \centering {\includegraphics[width=1\columnwidth]{clean_imb_s_v2}} \caption{(a)~Time evolution of the sublattice imbalance, ${\msr I}(t)$ and (b) the entanglement entropy, $\msr{S}_\mathrm{e}(t)$, after a quench in a clean quantum wire, Eq.~\eqref{eq:H}. $\alpha$ denotes the saturation value of $\msr{S}_\mathrm{e}$ in units of the Page value $\mS_\text{Page}$ for the respective system sizes. The inset shows the infinite-system size extrapolation of the saturation value, $\msr{S}_\mathrm{e}^\infty(L)$, as read off the main plot. We take the curvature seen in the data as an indication of corrections to scaling that are non-analytic in $1/L$, possibly logarithmic~\cite{Calabrese_2005}. Fitting the trace to Eq. \eqref{e5} yields $a_\text{V}=0.50(1), a_\text{S}=-0.29(4), a=1.8(1).$ } \label{f3a} \end{figure} We identify three windows of characteristic times in the traces $\msr{S}_\mathrm{e}(t)$, see Fig. \ref{f3a}b): Let the characteristic velocity of pair-excitations be $v$; following the kinetic energy given in Eq. \eqref{eq:H} one expects $v\approx 1$. In the intermediate window, $1\lesssim t\lesssim L/2v$, we have $\msr{S}_\mathrm{e}(t)=c_\mS t + \text{subleading terms}$, reflecting the asymptotic behavior expected for the large-system limit, $L\to\infty$, of the ballistic system~\cite{Calabrese_2005}. Based on our data in Fig. \ref{f3a}(b) we obtain $c_\mS\approx 0.52$. Due to the limited time window and the unknown subleading terms, reliable error bars are difficult to establish. Relaxation processes in the short time window occur at rates given by the (inverse) bandwidth; they reflect the local physics of the clean system. As seen in Fig.~\ref{f3a}a), the imbalance decays on these ultra-short time scales.
At large times, $t\gtrsim L/2v$, the entanglement entropy $\msr{S}_\mathrm{e}(t)$ reaches a saturation value, $\msr{S}_\mathrm{e}^\infty(L)\coloneqq \msr{S}_\mathrm{e}(L;t\to\infty)$, that exhibits pronounced system size effects. Its analytical form is well described by \begin{align} \msr{S}_\mathrm{e}^\infty(L)= a_\text{V} L/2 + a_\text{S}\ln(L/2a) + \ldots \label{e5} \end{align} where $\ldots$ indicate terms that vanish in the thermodynamic limit and $a$ denotes a microscopic scale that accounts for terms $\mathcal{O}(L^0)$. The first term incorporates the expectation that the entanglement entropy of the ballistic system after saturation is an extensive observable, so the leading behavior is described by a volume law. Based on quantum field theory, one would expect a prefactor ratio $a_\text{V}/c_\mS=1$, which is consistent with our numerical estimate, $\approx 96$\%, within our extrapolation uncertainties~\cite{Calabrese_2005}. An earlier estimate of this ratio, 88\%, has been reported for free fermions (XX model)~\cite{zhao2016}. The second term in Eq. \eqref{e5} we read as a manifestation of interface (surface) effects: we interpret $L^{d-1}$ in the limit $d{\to}1$ as a logarithm. It accounts for interaction mediated long-range correlations in the one-dimensional bulk phase that lead to anomalously strong surface effects~\cite{Calabrese_2005}. For a fully chaotic system, $\msr{S}_\mathrm{e}^\infty(L)$ is expected to saturate at the Page value, $\mS_\text{Page}\coloneqq \ln(2) L/2 - 1/2$~\footnote{In Ref. \cite{Page1993} it is shown that the average entanglement entropy of random pure states to leading order is of the form \begin{align} \mS_{m,n} = \ln m - \frac{m^2-1}{2mn} + \ldots \nonumber \end{align} where $n,m$ is the Hilbert space dimensions of the respective subsystems assumed to be large. For the present case, we have $n=m=2^{L/2}$ motivating the definition of $\mS_\text{Page}$ given in the text. }. Confirming results of earlier authors~\cite{KimPRL13}, we find that clean systems do not exhaust the Page limit, meaning $\alpha <1$, see Fig.~\ref{f3a}b), with $\alpha(L)\coloneqq \msr{S}_\mathrm{e}(t\to\infty)/\mS_\text{Page}$ and $\alpha(L\to\infty)=a_\text{V}/\ln 2$. For the latter coefficient, we here report an estimate significantly below unity, $\alpha\approx 0.72$. We interpret this result, $\alpha{<}1$, as a consequence of the fact that the clean model exhibits local conservation laws, such as momentum conservation, that (apparently) can limit the phase-space volume accessible during relaxation as compared to a fully chaotic situation. \subsection{Clock rate and scaling} With an eye on the disordered case and its internal clock, we mention that entanglement has been introduced as a suitable time unit before already in the clean case. \textcite{Calabrese_2005} observe that the entanglement evolution $\msr{S}_\mathrm{e}(L;t)$ corresponding to quenches with different initializing states collapse onto a master curve after normalization with respect to the asymptotic value, i.e. when plotting the ratio $\msr{S}_\mathrm{e}(L;t)/\msr{S}_\mathrm{e}^\infty(L)$. At intermediate times, $1\lesssim t\lesssim L/2v$, $\msr{S}_\mathrm{e}(t)\propto t$ and therefore the collapse implies $\dot \msr{S}_\mathrm{e}(t)/\msr{S}_\mathrm{e}^\infty$ independent of time and the initial condition. In this sense, one might say that the rate of entropy production is controlled by the (saturation) entanglement $\msr{S}_\mathrm{e}^\infty$. 
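The infinite-system extrapolation of the saturation values amounts to a three-parameter fit of Eq.~\eqref{e5}. A minimal sketch is given below; the data points are placeholders standing in for values read off a plot such as Fig.~\ref{f3a}, not the actual data of this work.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def s_inf_model(L, a_V, a_S, a):
    # saturation entropy: volume-law term plus a logarithmic surface correction
    return a_V * L / 2 + a_S * np.log(L / (2 * a))

L_vals = np.array([12.0, 16.0, 20.0, 24.0])     # system sizes
S_inf  = np.array([2.65, 3.57, 4.50, 5.45])     # placeholder saturation values

popt, pcov = curve_fit(s_inf_model, L_vals, S_inf, p0=(0.5, -0.3, 2.0))
a_V, a_S, a = popt
print("a_V =", a_V, " a_S =", a_S, " a =", a)
print("alpha(L -> infinity) = a_V/ln2 =", a_V / np.log(2))
\end{verbatim}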
Taking the idea further, \textcite{KimPRL13} have postulated a scaling form \begin{align} \msr{S}_\mathrm{e}(t) \approx \msr{S}_\mathrm{e}^\infty(L) \ F(t/\msr{S}_\mathrm{e}^\infty(L)) \label{e6c} \end{align} for a situation in which the system size $L$ rather than the initializing state is varied; $F(x)$ denotes a scaling function. Also here, the saturation value $\msr{S}_\mathrm{e}^\infty$ sets the rate for entanglement growth. In Fig.~\ref{f4a} we test the hypothesis \eqref{e6c} and show the scaling function corresponding to the data given in Fig.~\ref{f3a}. While a reasonable collapse is observed, corrections to scaling are appreciable, which we tentatively assign to logarithmic corrections such as those displayed in \eqref{e5}. Hence, existing reports on quenches in clean systems lend support to the main hypothesis of our work, i.e. that the entanglement entropy can be a good model for an internal clock in interacting quantum wires. \begin{figure}[t] \centering {\includegraphics[width=1\columnwidth]{clean_entg_collapse}} \caption{(Near) scaling collapse of the entanglement entropy after employing the saturation value, $\msr{S}_\mathrm{e}^\infty(L){\coloneqq}\msr{S}_\mathrm{e}(L;t\to\infty)$, as a measure of time. Alternatively, the inset employs $L/2v$ for units in order to highlight the finite-size convergence near $t\approx L/2v$, consistent with expectations based on Ref.~\cite{Calabrese_2005}. } \label{f4a} \end{figure} \section{Disorder - Results \& Discussion} \subsection{Time evolution of entanglement entropy} \begin{figure*}[t] \centering {\includegraphics[width=1\textwidth]{SEvsTime}} \caption{Ensemble-averaged entanglement entropy $\msr{S}_\mathrm{e}(L,t)$ for system sizes $L{=}16,20,24$ and four moderate disorder values $W$. (a) Three temporal regimes are identified: ballistic, (sub-)diffusive, and saturating. (b,c): the logarithmic time derivative $\dot \msr{S}_\mathrm{e}$ reveals a (nearly) power-law asymptotics. The shaded region indicates the window underlying the fits following Eq.~\eqref{e2}. (Parameters in Tab.~\ref{t1}.) (d) Saturation value, as read off the left panels, in units of the Page limit $\mS_\text{Page}$, plotted over the linear system size. } \label{f4} \end{figure*} We will begin our analysis of disordered systems with the ensemble-averaged entanglement entropy $\aSe(t)$ displayed in Figs. \ref{f4}(a) and \ref{f4b}. Similar to the clean case, the time traces here, too, suggest introducing three time regimes: In the short-time regime, all traces (nearly) collapse irrespective of disorder $W$ (ballistic period). This regime we leave unattended here since disorder effects are (relatively) weak. Our focus is on the sub-diffusive regime (significant disorder effects, system size converged) and on the saturation regime (strong effects of disorder and system sizes). \subsubsection{Sub-diffusive Regime} The sub-diffusive regime will eventually display the asymptotic dynamics of bulk systems at large enough $L$ and long enough times, i.e. the thermodynamic limit. Our nomenclature reflects an important result of our investigations also indicated in Fig. \ref{f1}: In the parametric window that we have investigated, we consistently observe dynamical signatures which are expected for the preMBL-regime; MBL proper is never observed. Earlier studies have characterized the time evolution of observables in the sub-diffusive regime by power-laws, e.g. $\aSe(t){\propto} t^{1/{z_\text{ee}}}$~\cite{Luitz2016, Luitz:2017cp}, especially in the range up to $W\simeq 3$ displayed in Fig. \ref{f4}.
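Testing the scaling hypothesis \eqref{e6c} numerically is equally simple: both axes are rescaled by the saturation value before the traces for different $L$ are overlaid. A schematic helper (assuming the time traces and saturation values are available as arrays and dictionaries) could read:

\begin{verbatim}
import matplotlib.pyplot as plt

def collapse_plot(traces, s_inf):
    """traces: {L: (t_array, S_array)}, s_inf: {L: saturation value}.
    Plots S_e(L;t)/S_e^inf(L) against t/S_e^inf(L); a data collapse
    supports the scaling form postulated in the text."""
    for L, (t, S) in traces.items():
        plt.plot(t / s_inf[L], S / s_inf[L], label=f"L = {L}")
    plt.xlabel("t / S_inf(L)")
    plt.ylabel("S_e(t) / S_inf(L)")
    plt.legend()
    plt.show()
\end{verbatim}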
However, as is clearly visible in Fig.~\ref{f4}(a) for the paradigmatic case of the entanglement entropy, the putative time window of power-law dynamics has not yet opened up in the range of computationally available system sizes.\footnote{To be extracted from this data is only a lower bound for the dynamical exponent ${z_\text{ee}}$, i.e. the slope at the turning point in the (sub-)diffusive regime.} It is thus apparent that finite-size effects in disordered wires are dramatically stronger than in the clean case, Fig.~\ref{f3a}. \begin{figure}[b] \centering \includegraphics[width=1\columnwidth]{se_vs_time_largeW.pdf} \caption{(a)~Traces similar to Fig.~\ref{f4}(a) for disorder in the regime of decelerating dynamics, Fig.~\ref{f1}. For $W=4.0, 6.0$ the system sizes are $L=16,20,24$, and $L=20,24$ otherwise. (b)~Logarithmic time derivative of $\aSe$ for $W=4.0, 6.0$. The plot highlights the emergence of a power-law increase with effective exponents that quickly grow with the system size $L$. An increase faster than $\ln t$ rules out MBL within the respective disorder regime. } \label{f4b} \end{figure} The same conclusion also holds at larger disorder in the regime of decelerating dynamics shown in Fig.~\ref{f4b}(a). Notice the left-hand side (lhs) curvature in the intermediate time window where the traces are nearly system-size converged (subdiffusive regime): the crucial observation is that the lhs curvature becomes more pronounced in a larger window of time with ever growing system size. As illustrated in Fig.~\ref{f4b}(b), the curvature signals an emergent power law with effective exponents that rapidly increase with the growing system size: a change in system size by 50\% increases the effective exponent at long times by 200\% ($W{=}4$) resp. 50\% ($W{=}6$). The phase diagram proposed in Fig.~\ref{f1} is based on the assumption that the trend observed in Fig. \ref{f4b} is indicative of the thermodynamic limit and, therefore, does not reverse at much larger system sizes. While a trend reversal towards insulating behavior (``reentrance'') cannot be rigorously excluded in finite-size studies, we don't see any hints pointing toward its existence in numerical work in the MBL context~\footnote{A kind of trend reversal, if only a different one, is observed in the model of regular-random graphs (RRG) \cite{TikhonovRRG16}: this reversal is similar to creep in the sense that the reversed flow in RRG is directed away from the localized to the ergodic regime. Opposed to this, the reentrance described in the main text implies a flow reversal towards localizing behavior. }. In this sense, our result Fig. \ref{f4b} is inconsistent with the expected behavior in the MBL phase, $\aSe(t)\propto \ln t$~\cite{Znidaric2008, Bardarson2012,Serbyn2013a}, and indicates the absence of many-body localization in the respective window of disorder values. We mention the implications of our result Fig.~\ref{f4b} for Refs.~\cite{SirkerPRL20,SirkerPRB21}. These authors quench random half-filled product states and study the number entropies $\mS_\mathrm{N}^{(\alpha)}$, which they relate to the second Renyi entropy, $\overline{\mS}_\mathrm{N}^{(2)} {\sim} \ln \overline{\mS}_2$, and to the entanglement entropy, $\overline{\mS}_\mathrm{N}^{(1)} {\sim} \ln \aSe$ \cite{SirkerPRL20, SirkerPRB21}. Fitting their data in the time domain, they conclude a double-logarithmic growth of the number entropies, $\overline{\mS}_\mathrm{N}^{(\alpha)} \sim \ln\ln t$.
These statements imply $\aSe(t), \overline{\mS}_2(t) \propto \ln t$, which taken at face value is inconsistent with our claim of growth of $\aSe(t)$ faster than logarithmic, Fig. \ref{f4b}. \begin{figure*}[t] \centering {\includegraphics[width=\textwidth]{sample_imbalance}} \caption{Plot illustrating extremely strong sample-to-sample fluctuations. Left: Imbalance ${\msr I}(t)$ over simulation time $t$ for $10$ samples and four disorder values. The plot illustrates the logarithmically wide distribution of ${\msr I}$ at large times. Second column: Logarithmic derivative (effective exponent $\beta(t)$) of data shown in lhs column. The logarithmically broad distribution of $\beta$ is illustrated. Third column: Corresponding entanglement entropy $\msr{S}_\mathrm{e}(t)$. At large disorder, at a given time $\msr{S}_\mathrm{e}$ is logarithmically distributed. Right: Imbalance as a function of entanglement $\mS_\mathrm{e}(t)$; sample-to-sample fluctuations are reduced if times with similar values for $\mS_\mathrm{e}(t)$ are compared. } \label{f5} \end{figure*} We assign this apparent discrepancy to a significant difference in data analysis. The $\ln t$ behavior in Refs.~\cite{SirkerPRL20,SirkerPRB21} has been extracted from time traces obtained from system sizes up to $L{=}24$. In particular, the fit for $L{=}24$ was targeting the largest observation times, similar to our data Fig.~\ref{f4b}(a). Indeed, in this regime also our data is consistent with a logarithmic fitting, as is seen by the lack of curvature in the time traces for $L{=}24$ in Fig.~\ref{f4b}(a) at $t{\gtrsim}10^2$. In this sense, there is no discrepancy here. However, as is also seen in Fig.~\ref{f4b}(a) the regime of large times is strongly affected by finite-size effects: with increasing system sizes the traces quickly develop an ever-growing tendency for lhs-curvature that connects smoothly to the lhs-curvature already secured in the sub-diffusive time window. In Fig. \ref{f4b}(b) this flow away from logarithmic towards power-law dynamics is particularly apparent. The main result of Refs. \cite{SirkerPRL20,SirkerPRB21}, i.e. the absence of MBL-proper in the investigated parameter regime, remains unchallenged here; the reported $\aSe(t)\sim \ln t$ from our point of view represents a lower bound~ \footnote{In our analysis we adopt the perspective that there is no qualitative difference between quenching a Neel-state - as we do - and a random product state as done in Ref.~\cite{SirkerPRL20,SirkerPRB21} for the entanglement dynamics. However, subsequent work noted that averaging over random product states could make it more difficult to extrapolate the long-time dynamics~\cite{MaximilianAP21}.}. \subsubsection{Saturation regime - disorder enhanced entanglement} Confirming earlier results, we readily observe in Fig.~\ref{f4} that also in the disordered situation, $\aSe(t)$ does not exhaust the Page-value, $\mS_\text{Page}$, in the limit of long observation times~\cite{Page93}. To extract the saturation value, $\aSe(L,W)$, we feed the asymptotic form \begin{align} \aSe(L,W;t) = \aSe(L,W) - c(L,W)/t^{\gamma_L(W)} \label{e2} \end{align} into a three-parameter fit of the data Fig.~\ref{f4} (a-c). To estimate the amplitude $c(L,W)$ and the exponent $\gamma_L(W)$, we employ the asymptotics of the time derivative, $\dot \aSe_W(L,t)$, reproduced in Fig.~\ref{f4}(b,c). With this input, the saturation value $\aSe^\infty(L,W)$ is fitted adopting \eqref{e2} to the data in the main panel, Fig. 
\ref{f4}(a); the fitting parameters are reproduced in the appendix, Tab.~\ref{t1}. We emphasize that the exponent $\gamma_L(W)$ is an effective one. Close scrutiny of Fig. \ref{f4}(b,c) reveals a slow time evolution of the slope (exponent) that is nearly but (perhaps) not fully converged within our window of observation times. As a consequence, the fitting parameters contain a systematic error that is not accounted for in the error estimates given in the appendix, Tab.~\ref{t1}. The saturation values of the entanglement entropy, $\aSe(L,W)$, are reproduced in Fig. \ref{f4}(d). A striking aspect revealed here is {\it disorder-enhanced entanglement}; there is a wide regime in the $L,W$-plane in which the subsystem entanglement {\it grows} with the disorder. This is always the case in the limit of weak disorder, where it is perhaps less surprising: weak disorder destroys integrability and therefore brings the system closer to chaotic dynamics, so $\aSe(L,W)$ moves upwards towards the Page limit. This increase has been seen previously in Ref.~\cite{Singh2016}. Remarkably, the trend appears to continue even to moderate and strong disorder, $1.5\lesssim W \lesssim 3.0$, provided the system size is large enough - as seen from the traces with family parameter $W$ intersecting each other in Fig.~\ref{f4}(d). \subsection{Entanglement entropy as an internal clock} In this subsection, we explore the potential of the entanglement entropy as a model for an internal clock. We begin by pointing out a possible advantage of using internal clocks. Time traces that display the relaxation of two different transport-related observables, both measured in a given sample, tend to be correlated. For example, consider $\msr{S}_\mathrm{e}(t)$ and ${\msr I}(t)$: When employing $\msr{S}_\mathrm{e}(t)$ as a measure of time on a per-sample basis, sample-to-sample fluctuations should reduce considerably, because slow growth of the former will, in general, indicate strong localization and therefore hints at slow relaxation of the latter; ergo, ${\msr I}(\msr{S}_\mathrm{e})$ fluctuates less than ${\msr I}(t)$ between different samples and therefore should be easier to interpret. The reduction of sample-to-sample fluctuations is indeed seen in Fig.~\ref{f5}, rightmost column. We elaborate on the qualitative argument. It can be rephrased employing concepts underlying the traditional mode-coupling theory. It foresees that two slow observables can each follow their own internal clock rate in their time evolution, but only if they are `orthogonal', i.e., sufficiently decoupled. The generic expectation is that orthogonality is an exception; at least quantities deriving from the same (conserved) field, e.g. the particle density, follow the same clock rate. Adopting $\msr{S}_\mathrm{e}$ as a model for the internal clock illustrates a benefit of the concept, i.e. reducing the sample-to-sample fluctuations in Fig.~\ref{f5} significantly. At the heart of the concept is the hypothesis that improved models (as compared to $\msr{S}_\mathrm{e}$) exist, at least in principle, that further reduce sample-to-sample fluctuations, nearly eliminating them in generic observables at large enough $L$. The optimal model for the internal time on a per-sample level still has to be found. However, as we will see in the next paragraphs, $\aSe$ is close to an optimal model for the disorder-averaged internal clock, i.e. the `internal ensemble time'.
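On a per-sample basis, employing $\msr{S}_\mathrm{e}$ as the clock simply means re-expressing ${\msr I}(t)$ as ${\msr I}(\msr{S}_\mathrm{e})$, as done in the rightmost column of Fig.~\ref{f5}. A minimal sketch of this reparameterization is given below; the running maximum, which enforces a monotonic clock in the presence of the transient `regression' discussed below, is our own regularization choice.

\begin{verbatim}
import numpy as np

def imbalance_vs_entropy(S_e, I, S_grid):
    """Re-express one sample's imbalance trace I(t) as I(S_e) on a common
    entropy grid S_grid; S_e and I are recorded on the same time grid."""
    S_clock = np.maximum.accumulate(S_e)            # monotonic internal time
    keep = np.diff(S_clock, prepend=-np.inf) > 0    # drop non-advancing steps
    return np.interp(S_grid, S_clock[keep], I[keep])
\end{verbatim}

Averaging the reparameterized traces of many samples over a common \texttt{S\_grid} then yields curves analogous to those in Fig.~\ref{f5}, rightmost column.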
\subsubsection{Excursion: the arrow of time - progress and regression} A conservative interpretation of $\msr{S}_\mathrm{e}(t)$ as the internal system clock requires that the time evolution of the entanglement entropy exhibits monotonic growth not only on average but also on a sample-to-sample basis. This requirement is not obviously met, because far from equilibrium the entanglement entropy is not bound to grow strictly in time. As seen in Fig.~\ref{f5}, first and third columns, the time evolution of the entanglement entropy $\msr{S}_\mathrm{e}(t)$, similar to that of the charge imbalance ${\msr I}(t)$~\cite{NandyPRB21}, exhibits strong sample-to-sample fluctuations. And indeed, as seen in the third column of Fig.~\ref{f5}, at strong disorder $W\gtrsim 4$, $\msr{S}_\mathrm{e}(t)$ becomes highly non-monotonic. Hence, a conservative interpretation of $\msr{S}_\mathrm{e}(t)$ as an internal system clock will confine itself to moderate disorder $W\lesssim 4$. We would like to argue, however, that strict monotonicity is not a necessary condition for an observable to be a useful measure of time. To this end we recall that the arrow of time is bound to point in a single direction only in the macroscopic limit; relaxing finite-size systems may always exhibit transient periods of entropy shrinkage (`regression'), so that the condition of strict monotonicity can be relaxed, at least in principle. \subsubsection{Collapsing averaged imbalance traces - $\overline{\msr{I}}(t)$ over $\aSe(t)$} In the spirit of the preceding paragraphs, we adopt $\aSe(t)$ as a legitimate model for the internal clock of the ensemble dynamics also at larger disorder values, even if individual samples display periods of regression. In fact, we will continue to bravely use $\aSe(t)$ also at large disorder as displayed, e.g., in Fig. \ref{fig:dephasing}. \begin{figure}[t] \centering \includegraphics[width=0.975\columnwidth]{S2_imb_collapse} \caption{Data from Fig.~\ref{f2} plotted over the second Renyi entropy. A good collapse is achieved here as well, confirming that $\aSe(t)$ and $\overline{\mS}_2(t)$ model the internal clock with comparable quality. A direct comparison of $\aSe(t)$ and $\overline{\mS}_2$ is given in the appendix, Fig.~\ref{f12}. Parameters: $L{=}16,20,24, 26$ at four moderate disorder strengths $W{=}1.5, 2.0, 2.5, 3.0$.} \label{f7a} \end{figure} As we have argued, internal clocks are useful to the extent that they reveal similarities in the time evolution of two different samples that are not apparent when using the lab-time $t$. A particularly strong case in favor of internal clocks can be made if similarities can be revealed in the dynamics of systems - or ensembles - that nominally exhibit strong differences, e.g., in the disorder strength, so that at least `naively' similarities are not expected. Such a strong validation of the ``internal clock'' concept has been given with Figs.~\ref{f2} and \ref{f7a}: imbalance traces $\overline{\msr{I}}(t)$ taken from samples of fixed length $L$, and for disorder varying from moderate values ($W{\approx} 1.5$) to the beginning of the strong-disorder regime ($W{\approx} 3$), collapse to a master curve - within the available window of observation times. For collapsing $\overline{\msr{I}}$, rescaling of the abscissa is not required, which reflects the absence of relevant microscopic time scales - at least within the observation window and outside the short-time regime.
The fact that the data collapse requires a rescaling of the ordinate is not unexpected, but still merits a remark: Our computations operate in the limit of infinite temperature, $T^{-1}{=}0$; the corresponding equilibrium density is spatially homogeneous, so that the equilibrium imbalance vanishes, ${\msr I}{=}0$. Therefore, this observable or quantities derived thereof do not lend themselves to compare to the relaxation behavior seen in $\overline{\msr{I}}(t)$. Due to the apparent lack of an obvious scale derived from an equilibrium quantity, it is not inconceivable that the scaling factor of the ordinate, $f_L(W)$ (inset of Fig.~\ref{f2}), reflects an initial condition, here the N\'eel state, and changes upon other choices. We mention that at late times when entanglement approaches its saturation value, the master curve exhibits a large - possibly diverging - slope. The implication is that entanglement propagates fast and may saturate long before other physical observables do, which are subject to a local conservation law. \begin{figure}[t] \centering {\includegraphics[width=0.975\columnwidth]{collapse_Ft_Se.pdf}} \caption{Approximate collapse to a single master curve of the traces for $\overline{\msr{F}}(t)$ for different disorder values $W{=}1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0$ displayed in Fig.~\ref{f6a} for $L=24$. } \label{fig:dephasing} \end{figure} \subsubsection{Collapsing of density fluctuations -- $\overline{\msr{F}}$ over $\aSe$} Traditional mode-coupling theory suggests that if $\aSe$ is suitable as an internal clock for $\overline{\msr{I}}$ then it should also be for related observables that derive from the evolution of density modes. In this spirit, we show in Fig. \ref{fig:dephasing} that also the time evolution of imbalance fluctuations \eqref{e4} can be collapsed employing the entanglement evolution, $\aSe$, as an internal clock. Since we have already demonstrated the concept of a system-internal clock for $\overline{\msr{I}}(t)$ in the regime of accelerated dynamics, $W\lesssim 3$, the excellent scaling observed for $\overline{\msr{F}}$ in this regime is comforting but not, perhaps, surprising. Interestingly, a reasonably good data collapse is seen for $\overline{\msr{F}}(t)$ even in the regime of large disorder, $W\approx 8$, which we take as an additional encouragement for the concept of internal clocks, here proposed. Incidentally, it is implied here that at least with respect to the observable density fluctuations (as seen within our observation time) there is no indication of a phase transition all the way from moderate to strong disorder. \subsection{Intermediate power-laws in average \& typical $\overline{\msr{F}}$} The raw data underlying the master curve is displayed in Fig. \ref{f6a}. The data confirms a trend already observed in our previous work \textcite{NandyPRB21}: the imbalance fluctuations exhibit a rather benign convergence with the system size. We, therefore, are confident to identify for each trace an intermediate regime between short times (plateau region) and longest times (pronounced system-size dependency) that defines an (effective) power law \begin{align} \log \overline{\msr{F}}/\overline{\msr{F}}_1 &= - (\rho_\text{ave}\ \xi_\text{sp} \log 2)\log \aSe \label{e6a} \end{align} with $\overline{\msr{F}}_1$ denoting the prefactor; $\xi_\text{sp}$ is the non-interacting localization length extracted in the long time limit of the second moment of the density-density correlation function at infinite temperature~\cite{Bera2017}. 
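Extracting the exponent of Eq.~\eqref{e6a} then reduces to a linear fit in a double-logarithmic representation. A sketch of this step (our own helper; the boolean array \texttt{mask} selects the intermediate fit window, and natural logarithms are used throughout) reads:

\begin{verbatim}
import numpy as np

def extract_rho(S_e_bar, F_bar, xi_sp, mask):
    """Fit log F-bar = log F-bar_1 - (rho * xi_sp * log 2) * log S_e-bar
    within the intermediate window selected by the boolean array mask."""
    x = np.log(S_e_bar[mask])
    y = np.log(F_bar[mask])
    slope, intercept = np.polyfit(x, y, 1)
    rho = -slope / (xi_sp * np.log(2))
    return rho, np.exp(intercept)     # exponent and prefactor F-bar_1
\end{verbatim}

The same routine applied to the typical (geometrically averaged) traces yields $\rho_\text{typ}(W)$.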
\begin{figure}[t] \centering {\includegraphics[width=0.95\columnwidth]{FtvsS2.pdf}} \caption{Ensemble-averaged temporal fluctuations of the imbalance ${\msr I}(t)$ as defined in \eqref{e4}. Following our earlier work [\onlinecite{NandyPRB21}], we have dressed $\overline{\msr{F}}$ with a power $1/\xi_\text{sp}\log 2$. For selected traces, two system sizes, $L=20~(\mathrm{lines}),24~(\mathrm{symbols})$, are given to expose finite-size effects. } \label{f6a} \end{figure} The exponent function $\rho_\text{ave}(W)$ is extracted fitting the data Fig.~\ref{f6a} to the form \eqref{e6a}; an analogous fitting procedure can also be performed for the typical traces of ${\msr{F}}$ and $\msr{S}_\mathrm{e}$ yielding $\rho_\text{typ}(W)$. The results for both exponents are displayed in Fig.~\ref{f8}. We first note that the evolution of the traces with the disorder is rather smooth and slow; there is no indication of a nearby MBL-transition. \footnote{ One might ask why we expect $\rho(W)$ to exhibit a signature of the transition into the proper MBL phase: the definition of the exponent, $\overline{\msr{F}}\propto \aSe^\rho$, at least partially reflects the fact that $\overline{\msr{F}}(t)$ and $\aSe(t)$ exhibit a power-law growth at long times (in a large-enough system). At least for the entanglement entropy, $\aSe(t)\propto t^{1/z_\text{ee}}$, the growth becomes slower than any power, $1/z_\text{ee}\to 0$, so that generically the proportionality $\overline{\msr{F}}\propto \aSe^\rho$ should cease to hold. } Remarkably, the exponent functions display opposing trends with increasing disorder. The typical value follows the {\it naive} expectation, which is that with the increasing disorder, the interaction-induced damping of temporal fluctuations is less effective, so $\rho_\text{typ}$ has a tendency to decrease. While the typical exponent $\rho_\text{typ}(W)$ decreases, the average exponent $\rho_\text{ave}(W)$ increases with disorder and there is, indeed, a common intersection point $W_{{\msr{F}}}\approx 6$. At weaker disorder, $W{<}W_{{\msr{F}}}$, rare samples exist that exhibit rather long decoherence times, so in this regime $\rho_\text{typ}{>}\rho_\text{ave}$. In the other regime, $W{>}W_{\overline{\msr{F}}}$ the situation is reversed and the average is dominated by rare samples in which damping is unusually effective. Notice that the crossover value is situated near the regime in which the system dynamics changes from accelerated to decelerated, $W_{\overline{\msr{F}}}\approx W_c$, cf. Fig. \ref{f1}. Summarizing, we interpret Fig.~\ref{f8} in conjunction with Fig.~\ref{f4b} as supplying fresh support in favor of the phase-diagram Fig.~\ref{f1} and its main claim: In the range $W_c,W_{\overline{\msr{F}}}\approx 3-6$ there is a crossover between two markedly different thermalizing regimes; there is no evidence that the $t-V$-model exhibits a phase transition at disorder values below $W\lesssim 10$. % We thus advocate the point of view that until recently this crossover has been widely misinterpreted in numerical work as indicating an ergodicity-breaking (i.e. MBL-) transition, e.g. in Refs.~\cite{Pal2010, Berkelbach2010, BarLev14, Devakul2015, Luitz2015, Khemani2017, Loic2018, MaceMultifractality2018, SierantLargeWc20, Laflorencie2020, ChandaPRB20, DoggenRevAnnPhy21}. 
\begin{figure}[t] \centering {\includegraphics[width=0.95\columnwidth]{ExpntRatio.pdf}} \caption{(Effective) exponents characterizing the evolution of the imbalance fluctuations with the entanglement entropy, ${\msr{F}}{\propto} \msr{S}_\mathrm{e}^\rho$, for typical values, $\rho_\text{typ}$, and average values, $\rho_\text{ave}$. The intersection point indicates the crossover from accelerated to decelerated dynamics, see Fig.~\ref{f1}. } \label{f8} \end{figure} Recent numerical work claims that the MBL transition, if it exists, occurs at a disorder strength $W^*\gtrsim 20$~\cite{Morningstar2022, SelsBathPRB22}. We notice that already at the disorder value $W{\approx}3$ the non-interacting localization length $\xi_{\mathrm{sp}}$ is of the order of the lattice spacing $a$, which implies $\xi_{\mathrm{sp}}(W^*)\ll a$. In this sense, if the MBL transition occurs at all, it certainly takes place at extremely strong disorder. This immediately prompts the question of its physical significance. \subsection{Sample-to-sample fluctuations and thermalization} \subsubsection{Distribution of exponents} The importance of sample-to-sample fluctuations is demonstrated in Fig.~\ref{f5}, lhs column. The plot shows that the distribution of ${\msr I}$ is logarithmically broad at large observation times already at moderate disorder, $W{=}1.5$. In fact, a fraction of samples exhibit traces ${\msr I}(t)$ that do not indicate a discernible trend towards equilibration, ${\msr I}(t){\to} 0$ in the limit $t\to\infty$, at all. This point is also illustrated in Fig. \ref{f5}, second column. It shows an effective exponent \begin{align} \beta(t)\coloneqq \frac{\partial \log{\msr I}(t)}{\partial \log t} \label{e6} \end{align} that characterizes the time evolution of ${\msr I}(t)$ for an individual sample; the corresponding (logarithmic) distribution functions ${\msr{P}}(\log \beta)$ are given in Fig. \ref{f6}. As is clearly illustrated by the data, the exponent distribution is logarithmically wide. Extremely strong sample-to-sample fluctuations have also been observed by other authors. As an interesting example, we mention \textcite{Doggen2018}: these authors use a machine-learning algorithm in order to distinguish localized from thermal samples. Within a regime of intermediate disorder values and system sizes, the algorithm identifies the existence of two different sample types: those with insulating character and others that manifestly thermalize. \begin{figure}[t] \centering {\includegraphics[width=1\columnwidth]{beta_distribution}} \caption{Distribution function ${\msr{P}}$ of the effective exponents $\beta(t)$ (Eq. \eqref{e6}) that characterize the long-term behavior of the imbalance ${\msr I}(t)$ after a quench in systems of sizes $L=16,20,24$ and for two disorder values in the (sub-)diffusive regime. } \label{f6} \end{figure} \subsubsection{Restoration of self-averaging} The tails of ${\msr{P}}$ are seen to be shrinking with increasing system size in Fig. \ref{f6}, if only exceedingly slowly. The general expectation is that at $W<W^*$ the imbalance, ${\msr I}$, in interacting, disordered wires is self-averaging. The statement implies that in the limit of large systems, $L\gg 32$, the distribution ${\msr{P}}$ eventually acquires zero width. In other words, in the thermodynamic limit all samples, except for a subset of measure zero, are expected to thermalize with a relaxation behavior that is characterized by the same ($W$-dependent) exponents.
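For completeness, the per-sample effective exponent of Eq.~\eqref{e6} and its ensemble histogram can be obtained with a few lines. In the sketch below the smoothing width is an illustrative choice of ours, and the histogram is taken over $\log|\beta|$, discarding vanishing values.

\begin{verbatim}
import numpy as np

def effective_exponent(t, I, smooth=5):
    """beta(t) = d(log I)/d(log t) for a single sample; a short moving
    average (smooth grid points) tames temporal noise beforehand."""
    I_s = np.convolve(np.abs(I), np.ones(smooth) / smooth, mode='same')
    return np.gradient(np.log(I_s), np.log(t))

def exponent_histogram(beta_samples, bins=40):
    """Histogram of log|beta| over an ensemble of samples, a proxy for
    the distribution P(log beta) discussed in the text."""
    b = np.abs(np.concatenate(beta_samples))
    b = b[b > 0]
    return np.histogram(np.log(b), bins=bins, density=True)
\end{verbatim}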
Due to computational limitations in system sizes and observation times, we can neither confirm nor dispute this expectation. If it is true, then all traces shown in Fig.~\ref{f8} will undergo a slow evolution so that they eventually collapse in the thermodynamic limit. However, also in this case it should not go unnoticed that for the range of system sizes traditionally studied in numerics, e.g. $L\lesssim 32$, individual samples hardly ever show the typical behavior because the exponent distribution ${\msr{P}}$ is so wide. \subsection{Relation to previous work: creep and RG} For further discussion, we recall the RG-scenario outlined in the introduction. It assumes that samples fall into roughly two categories, thermalizing (ergodic) ``bubbles'' and non-thermalizing ``grains''. Fig.~\ref{f5}, lhs column, supports this picture and allows rough quantitative estimates: the thermalizing fraction of samples comprises seven or eight out of ten samples at $W{=}1.5$, so ${\mathfrak q}(L{=}24,W{=}1.5){\approx}80$\%; this fraction decreases rapidly for larger disorder, e.g., we have only ${\mathfrak q}(L{=}24, W{=}2.5){\approx} 50$\% at $W{=}2.5$ and ${\mathfrak q}(L{=}24,W{=}5.0){\approx}0$. One can consider growing the sample size, e.g., by merging two samples of $L=24$ each. The RG assumes that the bubble-and-grain scenario continues to hold after merging and then predicts the evolution of ${\mathfrak q}(L,W)$ with $L$ by postulating phenomenological growth rules~\cite{Vosk2015,Zhang2016,Dumitrescu2017, Dumitrescu2018, Goremykina2018, Morningstar2019}. Of interest to us is the general picture of the relaxation dynamics that take place immediately after merging two samples. Specifically, at moderate to strong disorder, $W\gtrsim 2.5$, most samples behave like grains since ${\mathfrak q}<1/2$. Hence, even when doubling the sample size, the most likely outcome is that a grain is added to a grain, so the relaxation dynamics after merging is extremely slow and does not support thermalization. Only upon growing the system size even further will a bubble eventually be added, which can foster the delocalization process. Since a rather small bubble will need to equilibrate a large grain, thermalization is especially slow. An exponentially slow equilibration process that is enhanced by growing the system size is the hallmark of ``creep''~\cite{Bera2017, Weiner19}. In this sense, our numerical observations of large sample-to-sample fluctuations seen in Fig.~\ref{f5}, the slow flow seen in Fig.~\ref{f6}, and creep are all qualitatively consistent with the simplified RG-picture of thermalization through bubbles and grains.\footnote{ In recent numerical works, attempts have been made to substantiate the picture and identify correlated/entangled clusters in disordered samples~\cite{Loic2018,TomaszBubblePRB21, Hemery2022}. Since the data analysis in these papers relies on the assumption of a critical point near $W\approx3.8$, it remains to be seen to what extent the conclusions are affected once creep is taken into account.} The alert reader will have realized that we adopted from the RG-concept its intuitive phenomenological building blocks, i.e. bubbles and grains, and left aside an important result of the reported RG-flow, which predicts a critical point separating a localized from an ergodic phase \cite{Goremykina2018, Dumitrescu2018, Morningstar2019}.
We have neglected this part of the RG studies, because, within the parameter regimes investigated, we don't see support for a critical scenario provided by ``ab-initio" simulations of microscopic models. We mention two possible reasons for the discrepancy - other than reentrance behavior at a larger system size outside of our observation window. (i) The critical point might be situated at extremely large disorder values, as suggested by \textcite{Morningstar2022} and \textcite{SelsBathPRB22}. (ii) The phenomenological RG-equations are oversimplifications that miss relevant terms with a delocalizing effect. \section{Conclusion} Short quantum wires at relatively weak interactions and intermediate disorder strength can exhibit localization behavior: initial deviations from the equilibrium density are highly resilient against equilibration; a non-vanishing fraction of them may not, in fact, equilibrate at all. Understanding the fate of shorter samples with respect to their relaxation dynamics when successively growing their length is the central theme of many-body (de-) localization (MBL). The numerical investigation we have presented in this work motivates three statements: (i) We follow how the relaxation dynamics of the sublattice imbalance $\overline{\msr{I}}$ and its fluctuations $\overline{\msr{F}}$ evolve over a large range of disorder values, reaching from below the clean bandwidth to four times its value. The temporal flow of the ensemble dynamics is conveniently parameterized by $\aSe$, which acts as a model for an internal (ensemble) clock. The usefulness of the ``internal clock" concept becomes most apparent by demonstrating that for a broad range of disorder values time traces $\overline{\msr{I}}(t)$ and $\aSe(t)$ can be collapsed to a single master curve - for fixed system size. The collapse works within the entire window of investigated disorder values, which implies the absence of a localization transition even when the disorder exceeds the clean bandwidth by a factor of four. (ii) We observe extremely strong sample-to-sample fluctuations: For systems with $L=24$ sites and moderate disorder non-ergodic samples (grains) coexist with highly ergodic samples (bubbles). Previous work has reported an extremely slow flow toward equilibration when growing the system size (``creep")~\cite{Weiner19}. In this work we have argued that the creep phenomenon is closely related to the strong sample-to-sample fluctuations here observed; the relationship has been established by borrowing ideas from a real-space renormalization group approach and the avalanche concept \cite{Thiery2017}. (iii) While the existing evidence points to a thermodynamic limit that represents an equilibrating thermal phase, the flow towards this limit suggests the existence of subphases. Specifically, when the disorder strength reaches about two times the bandwidth, accelerated dynamics gives way to decelerated dynamics as indicated in Fig.~\ref{f1}. In this work, we present additional numerical evidence for this scenario, based on the observation that the evolution of ${\msr{F}}(t)$ exhibits two regimes: At weak disorder, typical fluctuations are damped more strongly than average fluctuations; at larger disorder, it is the other way round. \paragraph*{Outlook.} As an outlook, we express our belief that the neglect of creep in many, if not most, of the earlier computational studies of MBL, invalidates in parts the data interpretation that has been offered in these works, presumably often in crucial ways. 
For the time being, it is too early to write an MBL review from the perspective of many-body de-localization (MBdL), i.e. ``creep''. Nonetheless, firm ground has been laid that allows for a more careful analysis of the physical phenomena to be encountered. Many fascinating ideas have been expressed in the past, some of them formulated as mathematical theorems, some of them in terms of toy models, all of them making predictions that merit a careful computational test. Of course, creep will have to be included as a hallmark of delocalization physics in the data analysis - and no longer be ignored. The most important statement supporting the existence of MBL goes back to Imbrie \cite{Imbrie2016,Imbrie-review2017}. While Imbrie's proof is believed to be rigorous, it also relies upon assumptions, e.g., concerning the spectral statistics of the sample Hamiltonian. It will be interesting to see in future work whether the lack of evidence for MBL in the XXZ-Heisenberg model is due to the disorder being still too weak, due to the proof not being fully complete yet, or due to a much simpler reason, which is that the theorem does not apply to the XXZ-chain because its assumptions are not met. Another pressing open question concerns the nature of MBdL in correlated disorder, such as represented, e.g., by the Andr\'e-Aubry potential (AA). In our previous study of charge-density relaxations~\cite{Weiner19}, we did not detect a qualitative difference between correlated and fully uncorrelated randomness; similarly, also here we have no indication that the dominating physics is related to the effects of rare regions. Both observations together prompt the expectation that the AA-model exhibits creep, i.e., MBdL. If true, the interpretation of cold-atom experiments in terms of the observation of MBL proper is challenged~\cite{Schreiber2015, Luschen2017, RispoliExp18}. One more crucial issue that merits closer scrutiny relates to the choice of the initial state used for time propagation, e.g., a N\'eel state. In thermalizing phases, it is tempting to assume that the qualitative dynamics seen in a quench mostly reflect properties of the phase rather than of the initial state. To what extent this remains true also in marginally thermalizing situations remains to be seen. We conclude with a caveat: In this article, we have adopted a manner of speaking that is established in the MBL community, according to which a phase is ``equilibrating'' or ``thermal'' if the temporal evolution of the entanglement entropy is at large times faster than $\ln t$ and if simultaneously the sublattice imbalance and derived quantities decay to zero in the thermodynamic limit. It should be noted that the traditional meaning of equilibration implies that {\it all} local observables eventually relax towards their equilibrium values. Whether the equilibrating/thermalizing phase(s) indicated in Fig.~\ref{f1} also satisfy such a stronger condition is a matter of ongoing research. \section{ACKNOWLEDGMENTS} We would like to thank J. Bardarson, I. Gornyi, A. Mirlin, M. Kiefer-Emmanouilidis, J. Sirker, and J. Zakrzewski for critical reading of the manuscript and for their valuable comments, which improved our manuscript considerably. SB would like to thank S. Nandy for several discussions and for an earlier collaboration on a closely related topic. FE expresses his gratitude to A. Rosch for an enjoyable set of conversations on the topic. SB acknowledges support from SERB-DST, India, through Matrics (No.
MTR/2019/000566), and MPG for funding through the Max Planck Partner Group at IITB. Funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through EV30/11-2, EV30/12-1, EV30/14-1, EV30/14-2 is gratefully acknowledged.
{ "arxiv_id": "2302.11377", "language": "en", "timestamp": "2023-02-23T02:14:49", "url": "https://arxiv.org/abs/2302.11377", "yymm": "2302" }
\section{Introduction} \label{sec: intro} New light pseudoscalar particles are among the most popular and simplest extensions of the Standard Model (SM). They are often motivated in models with spontaneous violations of global symmetries and are called axion-like particles (ALPs). Their masses are generated by explicit small violations of the global symmetries, whereas their interactions are characterized by the symmetries. They affect low-energy observables via interactions with the SM particles. In this paper, we revisit ALP contributions to the electroweak precision observables (EWPOs). The ALPs are assumed to be coupled primarily with the SM ${\rm U(1)}_Y$ and ${\rm SU(2)}_L$ gauge bosons, yielding ALP couplings with the photon ($\gamma$) and $Z$ boson as well as the charged $W$ boson after the electroweak (EW) symmetry breaking. Then, the EWPOs are affected via vacuum polarizations of $\gamma$, $Z$, and $W$. Such contributions have been studied in Ref.~\cite{Bauer:2017ris}. The authors argued that they are expressed by the oblique parameters, $S,~T$, and $U$~\cite{Peskin:1991sw}; $S$ and $U$ are generated, while $T$ is absent at least at the one-loop level. Interestingly, $U$ becomes comparable to $S$, unlike in a wide class of new physics models. Also, the CDF~II collaboration recently reported a new result of the $W$ mass measurement~\cite{CDF:2022hxs}. The result is inconsistent with the SM prediction as well as with the previous experimental values. In Ref.~\cite{Yuan:2022cpw}, the CDF result has been discussed in the ALP models based on Ref.~\cite{Bauer:2017ris}, and it was concluded that a light ALP can solve the disagreement marginally. We note that the above studies ignored the other ALP contributions, {\em i.e.}, those except for $S,~T$, and $U$. In many analyses of the oblique parameters, {\em e.g.}, those in Refs.~\cite{Baak:2014ora, ParticleDataGroup:2022pth}, $S,~T$, and $U$ have been restricted/determined by fitting them globally to the EWPOs under the assumption that there are no additional contributions from new physics. However, we will show in this paper that this assumption is not valid in the ALP models. These three parameters are not enough to parameterize the ALP effects; rather, there are additional contributions via vacuum polarizations, such as the oblique parameters beyond $S$, $T$, and $U$ (cf., Refs.~\cite{Maksymyk:1993zm, Barbieri:2004qk}). Although these extra contributions are suppressed in a wide class of models, this is not the case for the ALP; they can be comparable to $S$ and $U$. Furthermore, the $Z$ boson can decay into a light ALP and a photon. This decay proceeds at the tree level and contributes to the total width of the $Z$ boson. Since those contributions affect the EWPOs simultaneously with $S$, $T$, and $U$, they must be analyzed {\it collectively} in the global fit. It will be shown that the global-fit results are changed drastically. The ALPs are subject to experimental constraints. Although cosmological limits are very severe, they can be avoided if the ALPs are heavier than $\sim 1\GeV$~\cite{Jaeckel:2010ni, Cadamuro:2011fd, Proceedings:2012ulb}. Even in such a case, the ALPs affect meson decays via the interactions with the $W$ boson~\cite{Izaguirre:2016dfi, Alonso-Alvarez:2018irt, Gavela:2019wzg, Guerrera:2021yss, Bauer:2021mvw, Guerrera:2022ykl}. In particular, the $B$-meson decay into a $K^{(*)}$ meson with photons via $a \to \gamma\gamma$ or with leptons via $a \to \ell^+\ell^-$ is very sensitive to the ALP contributions.
Besides, the ALPs have been studied particularly in the LEP and LHC experiments~\cite{Mimasu:2014nea, Jaeckel:2015jla, Jaeckel:2012yz, Bauer:2017ris, Bauer:2018uxu, Florez:2021zoo, Wang:2021uyb, dEnterria:2021ljz, Knapen:2016moh, CMS:2018erd, ATLAS:2020hii, Craig:2018kne, Bonilla:2022pxu}. In this paper, their results are applied to the current model setup, and we will compare all those constraints with the EWPO fit results. The recent CDF result of the $W$ mass measurement~\cite{CDF:2022hxs} will also be discussed under those constraints. This paper is organized as follows. In Sec.~\ref{sec: alp}, we introduce the ALP model and provide its decay rates. In Sec.~\ref{sec: ewpt}, we explain the ALP contributions to the EWPOs and the analysis strategy. The experimental constraints are summarized in Sec.~\ref{sec: constraint}. We show the numerical results in Sec.~\ref{sec: result}, and Sec.~\ref{sec: conclusion} is devoted to the conclusion. In Appendix~\ref{app: PV_funcs}, the Passarino-Veltman function~\cite{Passarino:1978jh} is given explicitly. In Appendix~\ref{app: aToZstarA}, we give the analytic formula of the three-body decay width for $a\to Z^{*}\gamma$. \section{Model} \label{sec: alp} We consider an ALP $(a)$ coupled with the ${\rm SU(2)}_L$ gauge boson $(W_\mu^a)$ and the ${\rm U(1)}_Y$ gauge boson $(B_\mu)$. The Lagrangian is shown as~\cite{Georgi:1986df} \begin{align} \mathcal{L}_{\mathrm{ALP}} = \frac{1}{2}\partial_{\mu}a\partial^{\mu}a -\frac{1}{2} m_{a}^{2} a^{2} -c_{WW}\frac{a}{f_{a}}W_{\mu\nu}^{a}\widetilde{W}^{a\mu\nu} -c_{BB}\frac{a}{f_{a}}B_{\mu\nu}\widetilde{B}^{\mu\nu}, \label{eq: Lagrangian} \end{align} where $m_a$ is the ALP mass, and $f_a$ is the ALP decay constant. The coefficients, $c_{WW}$ and $c_{BB}$, as well as $m_a$ and $f_a$ are regarded as free parameters, though $m_a < f_a$ is satisfied. We set $f_a = 1\TeV$ throughout this paper. The field strengths of the ${\rm SU(2)}_{L}$ and ${\rm U(1)}_{Y}$ gauge bosons are defined as \begin{align} W_{\mu\nu}^{a} &= \partial_{\mu}W_{\nu}^{a}-\partial_{\nu}W_{\mu}^{a}+g\epsilon^{abc}W_{\mu}^{b}W_{\nu}^{c}, \\ B_{\mu\nu} &= \partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}, \end{align} where $g$ is the ${\rm SU(2)}_L$ gauge coupling constant. The dual is expressed by \begin{align} \widetilde{X}_{\mu\nu} = \frac{1}{2}\epsilon^{\mu\nu\rho\sigma}X_{\rho\sigma}\qc (X = W^{a}, B). \end{align} The totally antisymmetric tensors are defined with $\epsilon^{012}=1$ and $\epsilon^{0123}=1$. After the EW symmetry breaking, the above interactions are rewritten as \begin{align} \mathcal{L}_{\mathrm{int}} &= -\frac{1}{4}g_{a\gamma\gamma}aF_{\mu\nu}\widetilde{F}^{\mu\nu} -\frac{1}{2}g_{a\gamma Z}aZ_{\mu\nu}\widetilde{F}^{\mu\nu} \notag \\ &\quad -\frac{1}{4}g_{aZZ}aZ_{\mu\nu}\widetilde{Z}^{\mu\nu} -\frac{1}{2}g_{aWW}aW_{\mu\nu}^{+}\widetilde{W}^{-\mu\nu} + \ldots, \label{eq: effective_coupling} \end{align} where quartic interaction terms are omitted. Here, the field strengths are given by \begin{align} X'_{\mu\nu} = \partial_{\mu}X'_{\nu}-\partial_{\nu}X'_{\mu}. \end{align} where $X'_{\mu} = A_\mu$ for the photon $(\gamma)$, $Z_\mu$ for the $Z$ boson, and $W_\mu^{\pm}$ for the charged $W$ boson. Note that $F_{\mu\nu}$ corresponds to $A_\mu$. 
Then, the coupling constants are expressed by $c_{WW}$ and $c_{BB}$ with $f_a$ as \begin{align} g_{a\gamma\gamma} &= \frac{4}{f_{a}}\qty(s_{W}^{2}c_{WW}+c_{W}^{2}c_{BB}), \label{eq: gaAA} \\ g_{aZ\gamma} &= \frac{2}{f_{a}}\qty(c_{WW}-c_{BB})s_{2W}, \label{eq: gaZA} \\ g_{aZZ} &= \frac{4}{f_{a}}\qty(c_{W}^{2}c_{WW}+s_{W}^{2}c_{BB}), \label{eq: gaZZ} \\ g_{aWW} &= \frac{4}{f_{a}}c_{WW}, \label{eq: gaWW} \end{align} where $c_{W} = \cos{\theta_{W}}$, $s_{W} = \sin{\theta_{W}}$, and $s_{2W} = \sin{2\theta_{W}}$ with the Weinberg angle $\theta_{W}$. \subsection{Decay of ALP} \label{sec: decay} The ALP decays into a pair of SM gauge bosons. The partial decay widths for $a\to V_{i}V_{j}\ (V_{i,j}=\gamma,Z,W^{\pm})$ are obtained as~\cite{Bauer:2017ris, Craig:2018kne, Bonilla:2021ufe} \begin{align} \Gamma(a\to V_{i}V_{j}) = \frac{m_{a}^{3}}{32\pi(1+\delta_{ij})}\lambda^{3/2} \qty(\frac{m_{V_{i}}^{2}}{m_{a}^{2}}, \frac{m_{V_{j}}^{2}}{m_{a}^{2}})\abs{g_{aV_{i}V_{j}}^{\rm eff}}^{2}, \label{eq: decay_a_VV} \end{align} with $\lambda(x,y) = (1-x-y)^{2}-4xy$. Here $m_{V}$ is the gauge-boson mass ($m_\gamma=0$). Note that $\delta_{ij}=0$ for $a\to Z\gamma$ and $a\to W^+ W^-$. The ALP coupling to the SM gauge bosons $g^{\rm eff}_{aV_{i}V_{j}}$ is obtained both at the tree and loop levels. In particular, the coupling to the di-photon is given at the tree level by $g_{a\gamma\gamma}$ in Eq.~\eqref{eq: gaAA} and is generated by loop corrections with $g_{aWW}$ in Eq.~\eqref{eq: gaWW} as~\cite{Bauer:2017ris} \begin{align} g_{a\gamma\gamma}^{\rm eff} = g_{a\gamma\gamma}+\frac{2\alpha}{\pi} g_{aWW}B_{2}(\tau_{W}), \end{align} where $\alpha\equiv e^{2}/(4\pi)$ with the QED coupling $e=g s_{W}$. The loop function $B_{2}$ is defined as \begin{align} B_{2}(\tau) = 1-(\tau-1)f^{2}(\tau), \end{align} where $f(\tau)$ is given by \begin{align} f(\tau) = \left\{\mqty{ \arcsin{\frac{1}{\sqrt{\tau}}}, \\ \frac{\pi}{2}+\frac{i}{2}\log{\frac{1+\sqrt{1-\tau}}{1-\sqrt{1-\tau}}}, }\right.\qquad \mqty{ \text{for}~\tau\geq 1\\ \text{for}~\tau< 1 }, \end{align} with $\tau_{W}=4m_{W}^{2}/m_{a}^{2}$. In the light ALP limit, $\tau_{W}\gg 1$, the function is approximated as $B_{2}\to m_{a}^{2}/(6m_{W}^{2})$, {\em i.e.}, suppressed by $m_{W}$. In the heavy limit, $\tau_{W}\ll 1$, we obtain $B_{2}\to 1+\pi^{2}/4-\log^{2}{(m_{a}/m_{W})}$, which is enlarged by the logarithm. On the other hand, it is sufficient for us to evaluate the decay widths for $a\to Z\gamma,\, ZZ,\, W^{+}W^{-}$ at the tree level, {\em i.e.}, $g_{aV_{i}V_{j}}^{\rm eff} = g_{aV_{i}V_{j}}$. When the ALP is lighter than the $Z$ boson, and if $g_{a\gamma\gamma}^{\rm eff}$ is suppressed, the ALP decays into either a pair of SM fermions, $a \to f\bar f$, or the fermions with a photon, $a \to Z^{*}\gamma\to f\bar f \gamma$, by exchanging an off-shell $Z$ boson. Since the ALP does not couple with SM fermions directly, the former proceeds via radiative corrections, as will be explained later. On the other hand, even though the latter is a three-body decay, it proceeds at the tree level, and thus, its decay width can be comparable to the former. For $m_a \ll m_{Z}$, the decay width is obtained as \begin{align} \Gamma(a \to f\bar f \gamma) = N_{c}^{f}\frac{g_{Z}^{2}g_{aZ\gamma}^{2}}{30720\pi^{3}} \qty[(g_{V,f})^{2}+(g_{A,f})^{2}]\frac{m_{a}^{7}}{m_{Z}^{4}}, \end{align} where $N_{c}^{f} = 1~ (3)$ is the color factor for leptons (quarks), and $g_Z = g/c_{W}$ is the $Z$-boson coupling constant. 
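To make these relations concrete, the short Python sketch below evaluates the tree-level couplings, Eqs.~\eqref{eq: gaAA}--\eqref{eq: gaWW}, and the two-body width, Eq.~\eqref{eq: decay_a_VV}. All numerical inputs are purely illustrative, and the loop-induced pieces of $g_{aV_iV_j}^{\rm eff}$ are omitted.

\begin{verbatim}
import numpy as np

mW, mZ, sw2, f_a = 80.4, 91.19, 0.231, 1000.0   # GeV; illustrative inputs

def tree_couplings(cWW, cBB):
    """Tree-level g_{a gamma gamma}, g_{a Z gamma}, g_{aZZ}, g_{aWW}."""
    cw2 = 1.0 - sw2
    s2w = 2.0 * np.sqrt(sw2 * cw2)
    return (4/f_a*(sw2*cWW + cw2*cBB),
            2/f_a*(cWW - cBB)*s2w,
            4/f_a*(cw2*cWW + sw2*cBB),
            4/f_a*cWW)

def width_VV(ma, g_eff, mi, mj, identical=False):
    """Two-body width Gamma(a -> Vi Vj); returns 0 below threshold."""
    if ma <= mi + mj:
        return 0.0
    x, y = (mi/ma)**2, (mj/ma)**2
    lam = (1 - x - y)**2 - 4*x*y
    delta = 1.0 if identical else 0.0
    return ma**3 / (32*np.pi*(1 + delta)) * lam**1.5 * abs(g_eff)**2

# example: a 200 GeV ALP with c_WW = 1, c_BB = 0
g_agg, g_aZg, g_aZZ, g_aWW = tree_couplings(1.0, 0.0)
print(width_VV(200.0, g_agg, 0.0, 0.0, identical=True),   # a -> gamma gamma
      width_VV(200.0, g_aZg, mZ, 0.0),                    # a -> Z gamma
      width_VV(200.0, g_aWW, mW, mW))                      # a -> W+ W-
\end{verbatim}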
The vector and axial form factors of the $Z$ boson are defined as $g_{V,f} = I_{3}^f - 2\, Q_f s_W^2$ and $g_{A,f} = I_{3}^f$. The formula valid for any $m_a$ is provided in Appendix~\ref{app: aToZstarA}. The ALP decays into a pair of leptons and heavy quarks, $a\to f\bar{f}$, via gauge-boson loops. The partial decay widths are obtained as~\cite{Bauer:2017ris, Bauer:2020jbp} \begin{align} \Gamma(a\to f\bar{f}) = N_{c}^{f}\frac{m_{a}m_{f}^{2}}{8\pi} \abs{g_{aff}^{\rm eff}}^{2} \sqrt{1-\frac{4m_{f}^{2}}{m_{a}^{2}}}. \end{align} The effective coupling induced by radiative corrections is given by \begin{align} g_{aff}^{\rm eff} &= 3Q_{f}^{2}\frac{\alpha}{4\pi}g_{a\gamma\gamma} \ln{\frac{\Lambda^{2}}{m_{f}^{2}}} +\frac{3}{4s_{W}^{2}}\frac{\alpha}{4\pi}g_{aWW} \ln{\frac{\Lambda^{2}}{m_{W}^{2}}} \notag \\ &\quad +\frac{3}{s_{W}c_{W}}\frac{\alpha}{4\pi}g_{a\gamma Z}Q_{f} \qty(I_{3}^{f}-2Q_{f}s_{W}^{2}) \ln{\frac{\Lambda^{2}}{m_{Z}^{2}}} \notag \\ &\quad +\frac{3}{s_{W}^{2}c_{W}^{2}}\frac{\alpha}{4\pi}g_{aZZ} \qty(Q_{f}^{2}s_{W}^{4}-I_{3}^{f}Q_{f}s_{W}^{2}+\frac{1}{8}) \ln{\frac{\Lambda^{2}}{m_{Z}^{2}}}, \label{eq: cff_eff} \end{align} where $Q_{f}$ and $I_{3}^{f}$ are the electric charge and the weak isospin of the fermion $f$, and $\tau_{f}=4m_{f}^{2}/m_{a}^{2}$. Here, we keep the terms enhanced by $\ln{(\Lambda^2/m_f^2)}$ or $\ln{(\Lambda^2/m_{V}^2)}$ $(V=Z, W)$. The cutoff scale $\Lambda$ is determined by a UV theory of the ALP model and treated as a model parameter in this paper. The subleading terms are given in Refs.~\cite{Bauer:2017ris, Bauer:2020jbp, Bonilla:2021ufe}, though they are numerically irrelevant here. Note that the above formula is valid for decays which conserve the lepton/quark flavors. The $W$ boson contribution can generate flavor-violating interactions of quarks. In particular, those for the down-type quarks can be sizable due to top-quark loops. Keeping the up-type quark masses non-zero, the following term arises in addition to Eq.~\eqref{eq: cff_eff}~\cite{Izaguirre:2016dfi, Alonso-Alvarez:2018irt, Gavela:2019wzg, Guerrera:2021yss, Bauer:2021mvw, Guerrera:2022ykl}: \begin{align} g_{ad_{i}d_{j}}^{\rm eff} = - \frac{3}{4s_{W}^{2}}\frac{\alpha}{4\pi}g_{aWW}\sum_{q=u,c,t}V_{qi}V_{qj}^{*}G(x_{q}), \label{eq: flavor_violating _coupling} \end{align} where $V_{ij}$ is the Cabibbo-Kobayashi-Maskawa (CKM) matrix, and the loop function is defined as \begin{align} G(x) = \frac{x\qty(1-x+x\ln{x})}{(1-x)^{2}}, \end{align} with $x_q = m_q^2/m_{W}^2$. On the other hand, the inclusive rate for the ALP decaying into light hadrons is given by (cf., Ref.~\cite{Bauer:2017ris}) \begin{align} \Gamma(a\to{\rm hadrons}) &= 32\pi\alpha_{s}^{2}m_{a}^{3}\qty(1+\frac{83}{4}\frac{\alpha_{s}}{\pi})\abs{g_{agg}^{\rm eff}}^{2} +\sum_{f=u,d,s}\Gamma(a\to f\bar{f}), \end{align} where $m_{a}\gg\Lambda_{\rm QCD}$ is assumed. The effective coupling with gluons is given by \begin{align} g_{agg}^{\rm eff}=\frac{1}{32\pi^{2}}\sum_{f=u,d,s}g_{aff}^{\rm eff}. \end{align} Note that the above expression is not valid for $m_{a} \lesssim 3~{\rm GeV}$, where perturbative QCD is not applicable, and various hadronic decay channels appear (cf., Refs.~\cite{Bauer:2017ris, Aloni:2018vki}). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/BR_summary.pdf} \caption{Branching ratios of the ALP with $f_{a}=1\TeV$ and $\Lambda = 4\pi f_{a}$. Here, $c_{BB}=0$ (left), $c_{WW}=0$ (middle), and $c_{\gamma\gamma} \propto g_{a\gamma\gamma} =0$ (right) are set.
The decay into light hadrons is denoted as ``had'' in the legend.} \label{fig: BRa} \end{figure} In Fig.~\ref{fig: BRa}, we show the branching ratios of the ALP with $\Lambda = 4\pi f_{a}$. Among the ALP couplings, $c_{BB}$ and $c_{WW}$, we set $c_{BB}=0$ on the left panel, $c_{WW}=0$ on the middle, and $c_{\gamma\gamma} \propto g_{a\gamma\gamma} = 0$ on the right. Here, the decay rate of $a \to Z\gamma$ includes those from the three-body decays, $a \to Z^{*}\gamma\to f\bar f \gamma$. In the case with $c_{\gamma\gamma}\neq0$, the decay of $a\to \gamma\gamma$ is dominant for $m_{a}\leq m_{Z}$. When $c_{BB}=0$, the decay of $a\to WW$ is dominant for $m_{a} > 2m_{W}$, and the decays of $a\to Z\gamma$ and $a\to ZZ$ become relevant above $m_{Z}$ and $2m_{Z}$, respectively. When $c_{WW}=0$, the decay of $a\to \gamma\gamma$ is still dominant for $m_{a}> m_{Z}$, while the decays of $a\to Z\gamma$ and $a\to ZZ$ become sub-dominant above the thresholds. In the case with $c_{\gamma\gamma}=0$, the fermionic decay modes are relevant for $m_{a} < m_{Z}$, while the bosonic channels become relevant for $m_{a} \gtrsim m_{Z}$. \section{Electroweak precision test (EWPT)} \label{sec: ewpt} New physics models as well as the SM have been tested by using the EWPOs. The theoretical predictions are compared with the experimental data by performing global fit analyses. In particular, the former values are determined within the SM by a set of input observables: the fine-structure constant $\alpha$, the Fermi coupling constant $G_F$, and the $Z$ mass $m_{Z}$. The ALP contributions arise in vacuum polarizations due to the ALP interactions with the EW gauge bosons. Such contributions have been studied in terms of the oblique parameters, $S$, $T$, and $U$~\cite{Bauer:2017ris}. It has been known that the ALP generates $S$ and $U$, while $T$ is absent at least at the one-loop level. Besides, $U$ becomes as large as $S$, in contrast to the case in a wide class of new physics models. However, $S$ and $U$ are not enough to represent the ALP contributions. We introduce {\it three} additional variables, $\Delta_Z$, $\Delta_W$ and $\Delta\alpha$.\footnote{ No specific parameters are assigned for these variables in Ref.~\cite{Hagiwara:1994pw}. } The parameters $\Delta_{Z}$ and $\Delta_{W}$ represent radiative corrections to the gauge coupling constants of the $Z$ and $W$ bosons, respectively. Also, $\Delta\alpha$ is those to the QED coupling. Although they are negligible compared to $S$ and $T$ in a wide class of new physics models, this is not the case for the ALP models. Moreover, the decay of $Z \to a\gamma$ alters the $Z$-pole observables, such as the hadronic and total decay widths of the $Z$ boson (see Eq.~\eqref{eq: decay_a_VV}). Since the decay proceeds at the tree level, its impact on EWPOs is likely stronger than those via vacuum polarizations. We provide their formulae as well as those for $S$, $T$, and $U$ in this section. Since the EW data, except for the recent CDF result of the $W$ mass, are consistent with the SM predictions, the ALP contributions should be smaller than the SM ones. Hence, we retain only the leading contributions from the ALP, which stem from interference terms between the ALP and SM transition amplitudes. The ALP contributions are estimated up to the one-loop level. Besides, since we are interested in the observables at the EW scale, we focus on the leading terms in the $m_f/m_{W}$ expansions, where $m_f$ and $m_{W}$ are the SM fermion and $W$ masses, respectively. 
\subsection{Formulation} \label{subsec: Formulation} In this section, we basically follow Ref.~\cite{Hagiwara:1994pw} for the formulation of the EWPOs.\footnote{ It is also checked that the following results are consistent with the formulae in the literature, {\em e.g.}, Refs.~\cite{Hollik:1993cg, Hollik:1995dv}. See also, Ref.~\cite{Cirigliano:2013lpa}. } We consider the ALP contributions to vacuum polarizations. They generally appear in the gauge-boson propagators as \begin{align} \Gamma^{ab}_{\mu\nu}(k) = - i g_{\mu\nu} (k^2 - m_{a,0}^2) \delta^{ab} - i \left(g_{\mu\nu} - \frac{k_\mu k_\nu}{k^2} \right) \Pi_T^{ab}(k^2) - i \frac{k_\mu k_\nu}{k^2} \Pi_L^{ab}(k^2), \end{align} with $a,b=\gamma,W,Z$. The mass parameter $m_{a,0}$ is related to the pole mass $m_{a}$ as $m_{a,0}^2 = m_{a}^2 + {\rm Re}\,\Pi_T^{aa}(m_a^2)$. Here, $\Pi_T^{ab}(k^2)$ and $\Pi_L^{ab}(k^2)$ are the unrenormalized transverse and longitudinal self-energy corrections, respectively.\footnote{ Although the radiative corrections are assumed to include the pinch terms~\cite{Cornwall:1981zr, Cornwall:1989gv, Degrassi:1992ue, Degrassi:1992ff, Binosi:2009qm} in Ref.~\cite{Hagiwara:1994pw}, they are irrelevant in the following ALP analysis because gauge-dependent terms do not arise. } We focus on $\Pi_T^{ab}(k^2)$ because the terms proportional to $k_\mu k_\nu$ are subdominant with respect to the $m_f/m_{W}$ expansion and hereafter neglected. For convenience, we define \begin{align} \Pi_{T,V}^{ab}(k^2) = \frac{\Pi_T^{ab}(k^2)-\Pi_T^{ab}(m_{V}^2)}{k^2-m_{V}^2}. \end{align} It is also useful to parameterize the vacuum polarizations as \begin{align} \Pi_{T}^{\gamma\gamma}(k^2) &= e^2 \, \Pi_{T}^{QQ}(k^2), \\ \Pi_{T}^{Z\gamma}(k^2) &= e g_Z \left[ \Pi_{T}^{3Q}(k^2) - s_W^2 \Pi_{T}^{QQ}(k^2) \right], \\ \Pi_{T}^{ZZ}(k^2) &= g_Z^2 \left[ \Pi_{T}^{33}(k^2) - 2s_W^2 \Pi_{T}^{3Q}(k^2) + s_W^4 \Pi_{T}^{QQ}(k^2) \right], \\ \Pi_{T}^{WW}(k^2) &= g^2 \, \Pi_{T}^{11}(k^2). \end{align} Transition amplitudes for charged currents mediated by the $W$ boson, {\em e.g.}, $\mu \to e\bar\nu_e\nu_\mu$, are shifted from the tree-level ones as \begin{align} \mathcal{M}^{\rm CC} \to \mathcal{M}^{\rm CC} \left[ 1 - {\rm Re}\,\Pi_{T,W}^{WW}(k^2) \right], \label{eq:CCamp} \end{align} where $k$ is the momentum carried by the $W$ boson. Here and hereafter, we ignore imaginary parts of the vacuum polarizations because they correspond to higher-order corrections. Hence, the extra vacuum-polarization contributions can be implemented by replacing the ${\rm SU(2)}_L$ coupling as \begin{align} g^2 \to \bar g^2(k^2) = g^2 \left[ 1 - {\rm Re}\,\Pi_{T,W}^{WW}(k^2) \right]. \label{eq: running_g} \end{align} Let us next consider neutral currents, $f \bar f \to f'\bar f'$. Their transition amplitudes are expressed as \begin{align} \mathcal{M}^{\rm NC} = \mathcal{M}^{\rm NC}_{ij} [\bar\psi_f \gamma_\mu P_a \psi_f] [\bar\psi_{f'} \gamma^\mu P_{a'} \psi_{f'}], \end{align} where $i = f_a$ and $j = f'_{a'}$ with the chiral projector $P_{a,a'}$. 
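For later convenience, the parameterization of the vacuum polarizations introduced above can be inverted by simple algebra, so that the combinations entering the electroweak parameters below follow directly from the self-energies in the mass basis, \begin{align} \Pi_{T}^{QQ}(k^2) &= \frac{1}{e^2}\,\Pi_{T}^{\gamma\gamma}(k^2), & \Pi_{T}^{3Q}(k^2) &= \frac{1}{e g_Z}\,\Pi_{T}^{Z\gamma}(k^2) + \frac{s_W^2}{e^2}\,\Pi_{T}^{\gamma\gamma}(k^2), \\ \Pi_{T}^{11}(k^2) &= \frac{1}{g^2}\,\Pi_{T}^{WW}(k^2), & \Pi_{T}^{33}(k^2) &= \frac{1}{g_Z^2}\,\Pi_{T}^{ZZ}(k^2) + \frac{2s_W^2}{e g_Z}\,\Pi_{T}^{Z\gamma}(k^2) + \frac{s_W^4}{e^2}\,\Pi_{T}^{\gamma\gamma}(k^2), \end{align} with no additional assumption involved.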
Similar to the charged currents, the leading contributions from new physics via the vacuum polarizations can be implemented via the effective coupling and weak mixing angles as \begin{align} \mathcal{M}^{\rm NC}_{ij} &= \frac{\bar e^2(s)}{s} Q_iQ_j + \frac{\bar g_Z^2(s)}{s-m_{Z}^2+is\Gamma_Z/m_{Z}} [I_3^i - Q_i \bar s^2(s)][I_3^j - Q_j \bar s^2(s)], \label{eq:NCamp} \end{align} where the first term on the right-hand side has a pole at $s=0$, identified as the photon-exchange contribution, and the second one peaks at $s=m_{Z}^2$, corresponding to the $Z$ amplitude. Here, $s=k^2$ is the squared momentum carried by the photon or $Z$ boson. The effective parameters are defined as \begin{align} \bar e^2(k^2) &= e^2 \left[ 1 - {\rm Re}\,\Pi_{T,\gamma}^{\gamma\gamma}(k^2) \right], \label{eq: running_e} \\ \bar g_Z^2(k^2) &= g_Z^2 \left[ 1 - {\rm Re}\,\Pi_{T,Z}^{ZZ}(k^2) \right], \label{eq: running_gZ} \\ \bar s^2(k^2) &= s_W^2 \left[ 1 + \frac{c_W}{s_W} {\rm Re}\,\Pi_{T,\gamma}^{Z\gamma}(k^2) \right]. \end{align} Since the ALP couples only with the gauge bosons, there are no contributions from vertex and box corrections. The fine-structure constant, $\alpha$, is associated with the photon-exchange amplitude in the Thomson limit, $k^2\to0$. The effective coupling at a scale $k^2$ satisfies \begin{align} \frac{1}{\bar \alpha(k^2)} - \frac{1}{\alpha} &= 4\pi \, {\rm Re}\! \left[ \Pi_{T,\gamma}^{QQ}(k^2) - \Pi_{T,\gamma}^{QQ}(0) \right], \label{eq:alpha} \end{align} where $\bar\alpha(k^2) = \bar e^2(k^2)/4\pi$. Hence, the leading ALP contribution at $k^2 = m_{Z}^2$ gives \begin{align} \bar \alpha(m_{Z}^2) &= \alpha \left\{ 1 - {\rm Re}\left[ \Pi_{T,\gamma}^{\gamma\gamma}(m_{Z}^2) - \Pi_{T,\gamma}^{\gamma\gamma}(0) \right] \right\} \equiv \alpha \left( 1 + \Delta\alpha \right). \end{align} On the other hand, the measured value of the Fermi coupling constant, $G_F$, is related to the effective coupling of the charged current amplitude at $k^2 \simeq 0$ as \begin{align} G_F = \frac{\bar g^2(0)}{4\sqrt{2}\,m_{W}^2}. \label{eq:defGF} \end{align} Here, since we focus on radiative corrections from the ALP, vertex and box corrections are ignored. It is noted that the $W$ mass is not the input observable, but its theoretical value is determined from Eq.~\eqref{eq:defGF}. The effective couplings and weak mixing angle are expressed in terms of $\alpha$, $G_F$, and $m_{Z}$ with the oblique parameters, $S$, $T$, and $U$, as \begin{align} & \frac{1}{\bar g_Z^2(0)} = \frac{1 - \alpha T}{4\sqrt{2}\,G_F m_{Z}^2}, \\ & \bar s^2(m_{Z}^2) = \frac{1}{2} - \sqrt{\frac{1}{4} - \bar \alpha(m_{Z}^2) \left[ \frac{4\pi}{\bar g_Z^2(0)} + \frac{S}{4} \right]}, \\ & \frac{4\pi}{\bar g_W^2(0)} = \frac{\bar s^2(m_{Z}^2)}{\bar \alpha(m_{Z}^2)} - \frac{S+U}{4}. \end{align} In a given model, the oblique parameters are evaluated as \begin{align} &S = 16\pi \, {\rm Re}\! \left[ \Pi_{T,\gamma}^{3Q}(m_{Z}^2) - \Pi_{T,Z}^{33}(0) \right], \label{eq:S} \\ &T = \frac{4\sqrt{2}\,G_F}{\alpha} \, {\rm Re}\! \left[ \Pi_{T}^{33}(0) - \Pi_{T}^{11}(0) \right], \label{eq:T} \\ &U = 16\pi \, {\rm Re}\! \left[ \Pi_{T,Z}^{33}(0) - \Pi_{T,W}^{11}(0) \right]. 
\label{eq:U} \end{align} Thus, the leading contributions from new physics are expressed as \begin{align} \bar g_Z^2(0) &= g_Z^2 \left( 1 + \alpha T \right), \label{eq: gZsqbar} \\ \bar s^2(m_{Z}^2) &= s_W^2 \left[ 1 + \frac{c_W^2}{c_W^2-s_W^2} \left( \Delta\alpha - \alpha T \right) + \frac{\alpha S}{4 s_W^2 (c_W^2-s_W^2)} \right], \label{eq: sWsqbar} \\ \bar g^2(0) &= g^2 \left[ 1 - \frac{s_W^2 \Delta\alpha}{c_W^2-s_W^2} - \frac{\alpha S}{2(c_W^2-s_W^2)} + \frac{c_W^2 \alpha T}{c_W^2-s_W^2} + \frac{\alpha U}{4 s_W^2} \right]. \label{eq: gWsqbar} \end{align} It is noticed that the $Z$- and $W$-pole observables, where the gauge bosons carry $k^2 = m_{Z}^2$ and $m_{W}^2$ respectively, require the effective couplings, $\bar g_Z^2(m_{Z}^2)$ and $\bar g^2(m_{W}^2)$. From Eqs.~\eqref{eq: running_g} and \eqref{eq: running_gZ}, they are related to $\bar g_Z^2(0)$ and $\bar g^2(0)$ as \begin{align} \bar g_Z^2(m_{Z}^2) &= \bar g_Z^2(0) \left\{ 1 - {\rm Re}\left[ \Pi_{T,Z}^{ZZ}(m_{Z}^2) - \Pi_{T,Z}^{ZZ}(0) \right] \right\} \equiv \bar g_Z^2(0) \left( 1 + \Delta_Z \right), \\ \bar g^2(m_{W}^2) &= \bar g^2(0) \left\{ 1 - {\rm Re}\left[ \Pi_{T,W}^{WW}(m_{W}^2) - \Pi_{T,W}^{WW}(0) \right] \right\} \equiv \bar g^2(0) \left( 1 + \Delta_W \right). \end{align} Consequently, the EWPOs are evaluated theoretically in terms of $\Delta_Z$, $\Delta_W$, and $\Delta\alpha$ as well as $S$, $T$, and $U$ (see also Sec.~\ref{sec: observables}). \subsection{ALP contributions} \label{sec: ALP_contribution} As given in Eq.~\eqref{eq: effective_coupling}, the ALP couples to $\gamma$, $Z$, and $W$. In the $R_{\xi}$ gauge, its contributions to the vacuum polarizations at the one-loop level are derived as \begin{align} \Pi_T^{WW}(k^2) &= \frac{1}{288\pi^2}g_{aWW}^{2}F(k^2;a,W), \\ \Pi_T^{\gamma\gamma}(k^2) &= \frac{1}{288\pi^2} \left[g_{a\gamma\gamma}^2 F(k^2;a,\gamma) + g_{aZ\gamma}^2 F(k^2;a,Z) \right], \\ \Pi_T^{Z\gamma}(k^2) &= \frac{1}{288\pi^2} \left[g_{a\gamma\gamma}g_{aZ\gamma} F(k^2;a,\gamma) + g_{aZ\gamma}g_{aZZ} F(k^2;a,Z) \right], \\ \Pi_T^{ZZ}(k^2) &= \frac{1}{288\pi^2} \left[g_{aZ\gamma}^2 F(k^2;a,\gamma) + g_{aZZ}^2 F(k^2;a,Z) \right]. \end{align} These results are valid for any ALP mass. The loop function is defined as \begin{align} F(k^2;a, V) &= 3 \left[k^2 - (m_a + m_{V})^2\right] \left[k^2 - (m_a - m_{V})^2\right] \notag \\ &\quad\quad \times \left[ B_0(k^2;m_a,m_{V}) - B_0(0;m_a,m_{V}) \right] \notag \\ &\quad - 3 k^2 \left[ A_0(m_a) + A_0(m_{V}) + (2m_a^2 + 2m_{V}^2 - k^2) B_0(0;m_a,m_{V}) \right] \notag \\ &\quad + 7 k^2 (3m_a^2 + 3m_{V}^2 - k^2), \label{eq: F_function} \end{align} where $A_0(x)$ and $B_0(k^{2}; x, y)$ are the Passarino-Veltman functions~\cite{Passarino:1978jh} and shown explicitly in Appendix~\ref{app: PV_funcs}. It is noticed that the results are independent of the gauge parameters. Then, the ALP contributions to $S$, $T$, and $U$ are given by \begin{align} \alpha S &= \frac{2c_{W}^{2}s_{W}^{2}}{9\pi^{2}m_{Z}^{2}} \frac{c_{WW}c_{BB}}{f_{a}^{2}} \qty[F(m_{Z}^{2}; a, \gamma)-F(m_{Z}^{2}; a, Z)], \\ \alpha T &= 0, \\ \alpha U &= \frac{2s_{W}^{4}}{9\pi^{2}m_{Z}^{2}} \frac{c_{WW}^{2}}{f_{a}^{2}} \qty[F(m_{Z}^{2}; a, \gamma)+\frac{c_{W}^{2}}{s_{W}^{2}}F(m_{Z}^{2}; a, Z)-\frac{1}{s_{W}^{2}c_{W}^{2}}F(m_{W}^{2}; a, W)]. \end{align} It is noticed that the ALP does not contribute to $T$.
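The vanishing of $T$ can be checked directly from the loop function: setting $k^{2}=0$ in Eq.~\eqref{eq: F_function} gives \begin{align} F(0; a, V) = 3\,(m_a+m_{V})^{2}(m_a-m_{V})^{2} \qty[ B_0(0;m_a,m_{V}) - B_0(0;m_a,m_{V}) ] = 0, \end{align} since the remaining terms carry an explicit factor of $k^{2}$. All the ALP-induced self-energies therefore vanish at zero momentum, and the combination in Eq.~\eqref{eq:T} is zero.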
On the other hand, the contributions to $\Delta\alpha$, $\Delta_{Z}$, and $\Delta_{W}$ are given by \begin{align} \Delta\alpha &= -\frac{g_{a\gamma\gamma}^{2}}{288\pi^{2}}\qty[\frac{F(m_{Z}^{2};a,\gamma)}{m_{Z}^{2}}-F^{\prime}(0; a,\gamma)] -\frac{g_{aZ\gamma}^{2}}{288\pi^{2}}\qty[\frac{F(m_{Z}^{2};a,Z)}{m_{Z}^{2}}-F^{\prime}(0; a,Z)], \label{eq: delta_alpha_limit} \\ \Delta_{Z} &= \frac{g_{aZ\gamma}^{2}}{288\pi^{2}}\qty[\frac{F(m_{Z}^{2};a,\gamma)}{m_{Z}^{2}}-F^{\prime}(m_{Z}^{2}; a,\gamma)] +\frac{g_{aZZ}^{2}}{288\pi^{2}}\qty[\frac{F(m_{Z}^{2};a,Z)}{m_{Z}^{2}}-F^{\prime}(m_{Z}^{2}; a,Z)], \\ \Delta_{W} &= \frac{1}{288\pi^{2}}g_{aWW}^{2}\qty[\frac{F(m_{W}^{2};a,W)}{m_{W}^{2}}-F^{\prime}(m_{W}^{2};a,W)]. \label{eq: delta_W_limit} \end{align} The relation between $(c_{BB}, c_{WW})$ and $(g_{a\gamma\gamma}, g_{aZ\gamma}, g_{aZZ})$ is found in Eqs.~\eqref{eq: gaAA}--\eqref{eq: gaZZ}. It is noticed that $\Delta\alpha$, $\Delta_{Z}$, and $\Delta_{W}$ can be as large as $\alpha S$ and $\alpha U$, and thus, must not be neglected. Let us show the above results in the light ALP-mass limit, $m_{a}\ll m_{V}\, (V=Z,W)$. They are obtained as \begin{align} \alpha S &= -\frac{2c_{W}^{2}s_{W}^{2}m_{Z}^{2}}{\pi^{2}} \frac{c_{WW}c_{BB}}{f_{a}^{2}} \qty(\ln{\frac{m_{Z}^{2}}{\Lambda^{2}}}+1), \\ \alpha U &= -\frac{2s_{W}^{4}m_{Z}^{2}}{3\pi^{2}} \frac{c_{WW}^{2}}{f_{a}^{2}} \qty(\ln{\frac{m_{Z}^{2}}{\Lambda^{2}}}+\frac{1}{3}+\frac{2c_{W}^{2}}{s_{W}^{2}}\ln{\frac{m_{W}^{2}}{m_{Z}^{2}}}), \\ \Delta \alpha &= \frac{m_{Z}^{2}}{96\pi^{2}}\qty[ g_{a\gamma\gamma}^{2}\qty(\ln{\frac{m_{Z}^{2}}{\Lambda^{2}}}+\frac{11}{3}) +g_{aZ\gamma}^{2}\qty(\ln{\frac{m_{Z}^{2}}{\Lambda^{2}}}+\frac{11}{6})], \\ \Delta_{Z} &= \frac{m_{Z}^{2}}{96\pi^{2}}\qty(g_{aZ\gamma}^{2}+g_{aZZ}^{2})\qty(\ln{\frac{m_{Z}^{2}}{\Lambda^{2}}}+\frac{4}{3}), \\ \Delta_{W} &= \frac{m_{W}^{2}}{96\pi^{2}}g_{aWW}^{2}\qty(\ln{\frac{m_{W}^{2}}{\Lambda^{2}}}+\frac{4}{3}). \end{align} The results for $S$, $T$, $U$, and $\Delta \alpha$ are consistent with those in Ref.~\cite{Bauer:2017ris}. Let us comment on the ALP contributions, $U$, $\Delta\alpha$, $\Delta_{Z}$, and $\Delta_{W}$. In a wide class of new physics models in a high-energy scale, they are suppressed compared to $S$ and $T$. However, this is not the case for the ALP models. This is because the ALP contributions to the vacuum polarizations vanish at $k^{2}=0$ (see Eq.~\eqref{eq: F_function}). Then, the leading contribution to $S$ arises from the first-order derivative of the vacuum polarization, which is comparable to $U$, $\Delta\alpha$, $\Delta_{Z}$, and $\Delta_{W}$. \subsection{Observables} \label{sec: observables} The ALP contributions to the $Z$-pole observables are evaluated by the last term of the transition amplitude in Eq.~\eqref{eq:NCamp}. It is noticed that this term has the same form as the tree-level one except that the coupling and weak mixing are replaced by the effective ones. Hence, the ALP contributions are represented in terms of the following effective charges of the $Z$ coupling, \begin{align} g_{V,f} &= \sqrt{\rho_Z} \left[ I_{3}^f - 2\, Q_f \bar s^2(m_{Z}^2) \right] \equiv \hat g_{V,f} + \Delta g_{V, f}, \\ g_{A,f} &= \sqrt{\rho_Z} \, I_{3}^f \equiv \hat g_{A,f} + \Delta g_{A, f}, \end{align} with $\hat g_{V,f} = I_{3}^f - 2\, Q_f s_W^2$ and $\hat g_{A,f} = I_{3}^f$. Note that the effective parameters should be estimated at $k^2=m_{Z}^2$. Here, $\rho_Z$ is defined as \begin{align} \rho_Z = \bar g_Z^2(m_{Z}^2)/g_Z^2 = 1 + \alpha \Delta T + \Delta_Z. 
\end{align} Here and hereafter, variables with a hat $(\,\hat{}\,)$, {\em e.g.}, $\hat g_{V,f}$ and $\hat g_{A,f}$, represent the SM contributions at the tree level, and those with $\Delta$ such as $\Delta g_{V, f}$ and $\Delta g_{A, f}$ denote the ALP corrections. On the other hand, those with a subscript ``SM'' show the SM predictions including radiative corrections. When we evaluate the EWPOs, the ALP corrections are dominated by interference terms between the ALP and SM transition amplitudes. In particular, it is sufficient for the latter to be evaluated at the tree level when we focus on the leading ALP contributions. Hence, expressing the partial decay width for $Z\to f\bar{f}$ as \begin{align} \Gamma_f = (\Gamma_f)_{\rm SM} + \Delta \Gamma_f, \end{align} the SM contribution at the tree level, $\hat \Gamma_f$, is given by \begin{align} \hat \Gamma_f = N_C^f \frac{G_F m_{Z}^3}{6\sqrt{2}\pi} \left[ \left(\hat g_{V,f}\right)^2 + \left(\hat g_{A,f}\right)^2 \right], \end{align} and $\Delta \Gamma_f$ is obtained as \begin{align} \Delta \Gamma_f = N_C^f \frac{G_F m_{Z}^3}{3\sqrt{2}\pi} \qty( \hat g_{V,f}\,\Delta g_{V, f} + \hat g_{A,f}\,\Delta g_{A, f} ). \end{align} The radiative corrections to the SM value will be mentioned in Sec.~\ref{sec: analysis_strategy}. When $m_{a}$ is smaller than $m_{Z}$, the decay channel $Z \to a \gamma$ is open. Its partial decay width is given by \begin{align} \Gamma_{a\gamma} \equiv \Gamma(Z \to a \gamma) = \frac{m_{Z}^{3}}{96\pi}g_{aZ\gamma}^{2}\qty(1-\frac{m_{a}^{2}}{m_{Z}^{2}})^{3}. \end{align} Then, the hadronic and total decay widths of the $Z$ boson are modified as \begin{align} \Gamma_{\rm had} &= \Gamma_u + \Gamma_d + \Gamma_c + \Gamma_s + \Gamma_b \equiv (\Gamma_{\rm had})_{\rm SM} + \Delta\Gamma_{\rm had}, \\ \Gamma_Z &= \Gamma_e + \Gamma_\mu + \Gamma_\tau + 3 \Gamma_\nu + \Gamma_{\rm had} + \Gamma_{a\gamma} \equiv (\Gamma_Z)_{\rm SM} + \Delta\Gamma_Z, \end{align} where $\Delta\Gamma_{\rm had}$ is the sum of $\Delta \Gamma_q$ over the quarks, and $\Delta\Gamma_{Z}$ includes $\Gamma_{a\gamma}$ in addition to the sum of $\Delta \Gamma_f$ over all the fermions. It is noticed that $\Gamma_{a\gamma}$ appears in $\Gamma_Z$ and is generated at the tree level with $g_{aZ\gamma}$. Hence, its impact on the EWPO global fit is likely stronger than that of the contributions via vacuum polarizations. For instance, $g_{aZ\gamma} = 1~{\rm TeV}^{-1}$ and $m_{a} \ll m_{Z}$ give $\Gamma_{a\gamma} \simeq 2.5\MeV$, which is already comparable to the experimental uncertainty of $\Gamma_Z$ quoted in Table~\ref{tab:EWPO}. The other $Z$-pole observables relevant for the following analysis are represented explicitly as follows. First, the hadronic cross-section becomes \begin{align} \sigma_{\rm had}^0 &= (\sigma_{\rm had}^0)_{\rm SM} + \frac{12\pi}{m_{Z}^2} \frac{\hat\Gamma_{e}\hat\Gamma_{\rm had}}{\hat\Gamma_{Z}^2} \qty( \frac{\Delta \Gamma_{e}}{\hat\Gamma_{e}} +\frac{\Delta \Gamma_{\rm had}}{\hat\Gamma_{\rm had}} -2\frac{\Delta \Gamma_{Z}}{\hat\Gamma_{Z}} ). \end{align} Next, the ratios of the partial decay widths become \begin{align} R^0_\ell &= (R^0_\ell)_{\rm SM} + \frac{\hat\Gamma_{\rm had}}{\hat\Gamma_{\ell}} \qty( \frac{\Delta \Gamma_{\rm had}}{\hat\Gamma_{\rm had}} -\frac{\Delta \Gamma_{\ell}}{\hat\Gamma_{\ell}} ), \\ R^0_q &= (R^0_q)_{\rm SM} + \frac{\hat\Gamma_{q}}{\hat\Gamma_{\rm had}} \qty( \frac{\Delta \Gamma_{q}}{\hat\Gamma_{q}} -\frac{\Delta \Gamma_{\rm had}}{\hat\Gamma_{\rm had}} ), \end{align} where $\ell$ denotes the SM leptons. Also, $\sin^2\theta^f_{\rm eff}$ is given by \begin{align} \sin^2\theta^f_{\rm eff} &= (\sin^2\theta^f_{\rm eff})_{\rm SM} -\frac{1}{4|Q_f|} \frac{\hat g_{V,f}}{\hat g_{A,f}} \left( \frac{\Delta g_{V,f}}{\hat g_{V,f}} -\frac{\Delta g_{A,f}}{\hat g_{A,f}}\right).
\end{align} The left-right asymmetry is shown as \begin{align} \mathcal{A}_f &= (\mathcal{A}_f)_{\rm SM} +\mathcal{\hat A}_f \qty(1-\frac{\hat g_{V,f}}{\hat g_{A,f}}\mathcal{\hat A}_f) \qty( \frac{\Delta g_{V,f}}{\hat g_{V,f}} -\frac{\Delta g_{A,f}}{\hat g_{A,f}}). \end{align} Here, the SM contribution at the tree level is \begin{align} \mathcal{\hat A}_f = \frac{2 \hat g_{V,f}/\hat g_{A,f}}{1 + (\hat g_{V,f}/\hat g_{A,f})^2}. \end{align} Finally, the forward-backward asymmetry is given by \begin{align} A^0_{\rm FB} = (A^0_{\rm FB})_{\rm SM} + \frac{3}{4}\mathcal{\hat A}_e\mathcal{\hat A}_f \qty( \frac{\Delta \mathcal{A}_e}{\mathcal{\hat A}_e} +\frac{\Delta \mathcal{A}_f}{\mathcal{\hat A}_f} ). \end{align} On the other hand, the mass and decay widths of the $W$ boson are also evaluated theoretically. From Eqs.~\eqref{eq:defGF} and \eqref{eq: gWsqbar}, the mass is obtained as \begin{align} m_{W}^2 &= (m_{W}^2)_{\rm SM} + \frac{\alpha c_W^2 m_{Z}^2}{c_W^2-s_W^2} \left[ - s_W^2 \frac{\Delta\alpha}{\alpha} - \frac{S}{2} + c_W^2 T + \frac{c_W^2-s_W^2}{4 s_W^2} U \right] \notag \\ &\equiv (m_{W}^2)_{\rm SM} + \Delta m_{W}^2. \end{align} Then, the $W$ partial decay width becomes \begin{align} \Gamma(W \to f_i f_j) &= [(\Gamma_W)_{ij}]_{\rm SM} + (\hat\Gamma_W)_{ij} \bigg[ \frac{3\Delta m_{W}^2}{2c_W^2 m_{Z}^2} + \Delta_W \bigg]. \end{align} Here, $\Delta_W$ arises because the $W$ coupling is estimated at $k^2 = m_{W}^2$. The SM prediction at the tree level is expressed as \begin{align} (\hat\Gamma_W)_{ij} &= N_c^f |V_{ij}|^2 \frac{g^2}{48\pi} m_{W}. \end{align} where $V_{ij}$ is the CKM matrix when $i,j$ are quarks, while it is the identity matrix for leptons. \subsection{Analysis strategy} \label{sec: analysis_strategy} \begin{table}[t] \centering \begin{tabular}{ccc|ccc} \hline & Measurement & Ref. & & Measurement & Ref. \\ \hline $\alpha_s(m_{Z}^2)$ & $0.1177 \pm 0.0010$ & \cite{deBlas:2022hdk} & $m_{Z}$ [GeV] & $91.1875 \pm 0.0021$ & \cite{Janot:2019oyi} \\ \cline{1-3} $\Delta\alpha_{\mathrm{had}}^{(5)} (m_{Z}^2)$ & $0.02766 \pm 0.00010$ & \cite{deBlas:2022hdk} & $\Gamma_Z$ [GeV] & $2.4955 \pm 0.0023$ & \\ \cline{1-3} $m_t$ [GeV] & $172.69 \pm 0.30$ & \cite{ParticleDataGroup:2022pth} & $\sigma_{h}^{0}$ [nb] & $41.4802 \pm 0.0325$ & \\ \cline{1-3} $m_h$ [GeV] & $125.21 \pm 0.17$ & \cite{ParticleDataGroup:2022pth} & $R^{0}_{\ell}$ & $20.7666 \pm 0.0247$ & \\ \cline{1-3} $m_{W}$ [GeV] & $80.377 \pm 0.012$ & \cite{ParticleDataGroup:2022pth} & $A_{\mathrm{FB}}^{0, \ell}$ & $0.0171 \pm 0.0010$ & \\ \cline{4-6} & $80.4133 \pm 0.0080$ & \cite{deBlas:2022hdk} & $R^{0}_{b}$ & $0.21629 \pm 0.00066$ & \cite{ALEPH:2005ab,Bernreuther:2016ccf} \\ \cline{1-3} $\Gamma_{W}$ [GeV] & $2.085 \pm 0.042$ & \cite{ParticleDataGroup:2022pth} & $R^{0}_{c}$ & $0.1721 \pm 0.0030$ & \\ \cline{1-3} $\mathcal{B}(W\to \ell\nu)$ & $0.10860 \pm 0.00090$ & \cite{Schael:2013ita} & $A_{\mathrm{FB}}^{0, b}$ & $0.0996 \pm 0.0016$ & \\ \cline{1-3} $\mathcal{A}_{\ell}$ (LEP) & $ 0.1465 \pm 0.0033 $ & \cite{ALEPH:2005ab} & $A_{\mathrm{FB}}^{0, c}$ & $0.0707 \pm 0.0035$ & \\ $\mathcal{A}_{\ell}$ (SLD) & $ 0.1513 \pm 0.0021 $ & \cite{ALEPH:2005ab} & $\mathcal{A}_b$ & $0.923 \pm 0.020$ & \\ & & & $\mathcal{A}_c$ & $0.670 \pm 0.027$ & \\ \hline \end{tabular} \caption{Experimental data of the SM input parameters and EWPOs.} \label{tab:EWPO} \end{table} As a test of the ALP models, we perform a global fit analysis on the EWPOs. 
A likelihood function is defined by a multivariate Gaussian, $-2 \ln L = (\vb{y}-\vb*{\mu})^{T}V^{-1}(\vb{y}-\vb*{\mu})$, where $\vb{y}$ is a vector of the measured quantities, $\vb*{\mu}$ is the vector of the corresponding theoretical predictions, and $V$ is the covariance matrix. In Table~\ref{tab:EWPO}, we summarize the measured values of the SM input and the EWPOs. The covariance matrices are provided in the references for those listed in the right column. The other SM input parameters, such as $G_{F}$, $\alpha$, and the masses of the light SM fermions, are fixed to be the observed central values~\cite{ParticleDataGroup:2022pth}. In the table, we show two values of the $W$ mass $m_{W}$: the one provided by the Particle Data Group (PDG)~\cite{ParticleDataGroup:2022pth} and a value combined with the recent CDF result, for which we adopt the averaged value in Ref.~\cite{deBlas:2022hdk}. They will be denoted by $m_{W}^{\rm PDG}$ and $m_{W}^{\rm CDF}$, respectively. In the following analysis, we study both cases in the ALP models. On the other hand, the SM prediction of $m_{W}$ has been evaluated at the two-loop level~\cite{Awramik:2003rn}. We also include theoretical uncertainties from unknown higher-order corrections, $\delta m_{W}=0\pm4\MeV$ (cf., Ref.~\cite{deBlas:2022hdk}). As a result, based on the SM input in the table, the SM prediction is derived as \begin{align} (m_{W})_{\rm SM} = 80.3552 \pm 0.0055\GeV. \end{align} The result is consistent with the PDG value, $m_{W}^{\rm PDG}$, but deviates from the value including the CDF result, $m_{W}^{\rm CDF}$, at the $6\sigma$ level; the central values differ by about $58\MeV$, while the combined uncertainty is about $10\MeV$.\footnote{ The largest uncertainty of the SM prediction originates in $m_{Z}$. Although the uncertainty from the top quark mass seems to be smaller, it may involve a potentially larger (theoretical) uncertainty, which reduces the discrepancy (cf., Ref.~\cite{deBlas:2022hdk}). } The SM predictions for the $Z$-pole observables as well as the $W$ mass have been evaluated up to the two-loop level including the electroweak corrections~\cite{Awramik:2006uz, Dubovyk:2019szj}. On the other hand, for the $W$ decay widths, we use the SM predictions provided in Ref.~\cite{dEnterria:2020cpv}. In particular, those assuming the CKM unitarity are adopted. Theoretical uncertainties from unknown higher-order corrections are irrelevant for those observables and neglected in the following analysis, unlike for $m_{W}$. We implement the ALP contributions to the EWPOs, as explained in Sec.~\ref{sec: ALP_contribution}. Obviously, $\Delta\alpha$, $\Delta_{Z}$, $\Delta_{W}$ and $\Gamma_{a\gamma}$ must be taken into account at the same time as $S$, $T$, and $U$ when we perform the EWPT. This is in contrast to the analyses explored in the previous works~\cite{Bauer:2017ris, Bauer:2018uxu, Yuan:2022cpw}, where $\Delta\alpha$, $\Delta_{Z}$, $\Delta_{W}$ and $\Gamma_{a\gamma}$ were ignored in the analyses of $S$ and $U$. In particular, they focused on the ALP much lighter than the $Z$ boson; in such a case $\Gamma_{a\gamma}$ alters $\Gamma_Z$ drastically. We will show the numerical results in Sec.~\ref{sec: result}. \section{Experimental constraints} \label{sec: constraint} In this section, we summarize experimental constraints on the ALP models. Although there are very severe bounds from cosmological measurements~\cite{Jaeckel:2010ni, Cadamuro:2011fd, Proceedings:2012ulb}, the model can avoid them if the ALP mass is $m_a \gtrsim 1\GeV$. Then, constraints from flavor and collider experiments become relevant.
\subsection{Flavor constraints} Let us first consider the flavor constraints. If the ALPs have an interaction with the $W$ boson, they generate quark-flavor transitions via $W$ loops (see Eq.~\eqref{eq: flavor_violating _coupling})~\cite{Izaguirre:2016dfi, Alonso-Alvarez:2018irt, Gavela:2019wzg, Guerrera:2021yss, Bauer:2021mvw, Guerrera:2022ykl}. In particular, for $m_a = \mathcal{O}(1)\GeV$, the $B$-meson decays provide the best sensitivities (see, {\em e.g.}, Ref.~\cite{Bauer:2021mvw}). The decay rate for $B^{+}\to K^{+}a$ is obtained as~\cite{Izaguirre:2016dfi, Gavela:2019wzg, Bauer:2021mvw, Guerrera:2022ykl} \begin{align} \Gamma(B^{+}\to K^{+}a) = \frac{m_{B}^{3}}{64\pi}\abs{g_{abs}^{\rm eff}}^{2}f_{0}^{2}(m_{a}^{2})\,\lambda_{Ka}^{1/2}\qty(1-\frac{m_{K}^{2}}{m_{B}^{2}})^{2}, \end{align} where $\lambda_{Ka} = \lambda(m_a^2/m_B^2,m_K^2/m_B^2)$ with the $B$ $(K)$ meson mass, $m_B$ $(m_K)$. Here, $f_{0}(q^{2})$ is the scalar form factor, which is evaluated by following Ref.~\cite{FlavourLatticeAveragingGroupFLAG:2021npn}. In the ALP parameter region of interest, the ALP produced from the meson is likely to decay within the detectors. If its lifetime is short enough, it decays promptly. As the lifetime increases, the ALP propagates inside the detectors before it decays. Then, the vertex constructed from the final-state particles of the ALP decay becomes displaced from the primary interaction vertex. Among various such channels (see {\em e.g.}, Ref.~\cite{Bauer:2021mvw}), the following two studies provide the best sensitivities: \begin{itemize} \item When $g_{a\gamma\gamma}$ is large enough, the ALP decays predominantly into a pair of photons. The most severe constraint is obtained from $B^+ \to K^+ a$, $a\to\gamma\gamma$ in the mass range of $0.175 < m_a < 4.78\GeV$ with various lifetimes. The analysis was performed by the BaBar collaboration~\cite{BaBar:2021ich}. \item When $g_{a\gamma\gamma}$ is suppressed, the ALP decays into SM fermions (see Sec.~\ref{sec: decay}). Among the decay channels, the process $B^+ \to K^+ a$, $a\to\mu\mu$, though ${\rm Br}(a\to\mu\mu)$ is $\sim 10^{-3}$, provides a severe bound for $m_a = \mathcal{O}(1)\GeV$. The analysis was performed by the LHCb collaboration in the mass range of $0.25<m_a<4.7\GeV$ and the lifetime range $0.1<\tau_a<1000\,{\rm ps}$~\cite{LHCb:2016awg}. \end{itemize} These constraints are very severe and unavoidable as long as the $B$-meson decays into the ALP, {\em i.e.}, for $m_a < m_B-m_K \simeq 4.8\GeV$.\footnote{ There are several mass regions in which the SM backgrounds are huge and the flavor constraints become absent or relaxed drastically. See Refs.~\cite{BaBar:2021ich, LHCb:2016awg} for the details. } Thus, the ALPs are favored to be heavier than $4.8\GeV$ to avoid the constraints and contribute to the EWPOs effectively. \subsection{Collider constraints} Even when the ALP is heavier than $4.8\GeV$ and the flavor constraints are avoided, the model is subject to the collider constraints. In the mass range of $m_a \gtrsim 1\GeV$, the ALP productions followed by $a\to\gamma\gamma$ have been studied with the experimental data of LEP~\cite{Mimasu:2014nea, Jaeckel:2015jla, Knapen:2016moh}, Belle II~\cite{Belle-II:2020jti}, CDF~\cite{Mimasu:2014nea, CDF:2013lma}, the LHC (proton collisions)~\cite{Jaeckel:2012yz, Bauer:2017ris, Bauer:2018uxu, Florez:2021zoo, dEnterria:2021ljz, Wang:2021uyb}, and the LHC (Pb collisions)~\cite{Knapen:2016moh, CMS:2018erd, ATLAS:2020hii}.
There are also studies focusing on the ALP interaction with the $W$ bosons~\cite{Craig:2018kne, Bonilla:2022pxu}. Let us first consider the case in which the produced ALP decays into a pair of photons. The collider constraints are summarized, {\em e.g.}, in Ref.~\cite{dEnterria:2021ljz}. For $1 \lesssim m_a \lesssim 5\GeV$, the process $e^+e^- \to \gamma a \to 3\gamma$ provides the best sensitivity~\cite{Mimasu:2014nea, Jaeckel:2015jla, Knapen:2016moh, Belle-II:2020jti}. The scattering cross section of $e^+e^- \to \gamma a$ is expressed as~\cite{Bauer:2017ris} \begin{align} \frac{d\sigma(e^+e^-\to\gamma a)}{d\Omega} = \frac{\alpha}{128\pi}s^2\qty(1-\frac{m_a^2}{s})^3 (1+\cos^2\theta) \qty(\abs{V}^2 + \abs{A}^2), \label{eq: ee_to_agamma} \end{align} where $\theta$ is the photon angle relative to the beam direction, and $V,\ A$ are \begin{align} V &= \frac{g_{a\gamma\gamma}}{s}+\frac{1-4s_W^2}{4c_Ws_W}\frac{g_{aZ\gamma}}{s-m_{Z}^2+is\Gamma_Z/m_{Z}}, \\ A &= \frac{1}{4c_Ws_W}\frac{g_{aZ\gamma}}{s-m_{Z}^2+is\Gamma_Z/m_{Z}}. \end{align} The first term on the right-hand side of $V$ corresponds to a photon-exchange contribution, and the second one and $A$ to the $Z$ boson. The LEP and LEP II experiments have measured $e^+e^- \to \gamma\gamma (\gamma)$ at center-of-mass energies around the $Z$ pole and $200\GeV$, respectively. The results have been used to constrain the ALP: \begin{itemize} \item The former case, {\em i.e.}, for $\sqrt{s} \sim m_{Z}$, was analyzed by Ref.~\cite{Jaeckel:2015jla}. Since the on-shell $Z$ boson contributions dominate the cross section, the results provided by Ref.~\cite{Jaeckel:2015jla} can be interpreted as a bound on $\Gamma(Z\to \gamma a) \times {\rm Br}(a \to \gamma\gamma)$.\footnote{ Although the ATLAS collaboration reported a stronger bound on ${\rm BR}(Z\to3\gamma)$~\cite{ATLAS:2015rsn}, the result is applied to a higher $m_a$ region because of photon kinematical cuts~\cite{Bauer:2017ris}. } On the other hand, when $g_{aZ\gamma}$ is suppressed, the scattering proceeds by exchanging an off-shell photon. Since the cross section is small, its constraint is sufficiently weak for the parameter region of interest. \item The LEP II data in the latter case, {\em i.e.}, for $\sqrt{s} \sim 200\GeV$, were analyzed by Ref.~\cite{Knapen:2016moh}. In the above mass range, since the photons from the ALP decay are likely to be collimated, the inclusive $e^+e^- \to 2\gamma$ signal regions were studied. Although the scattering was assumed to proceed only via an off-shell photon in the literature, the $Z$ boson contributes generally as well. It is noticed from Eq.~\eqref{eq: ee_to_agamma} that the latter contribution does not affect the kinematic distributions of the final-state particles. Thus, we rescale the constraints provided in Ref.~\cite{Knapen:2016moh} by $\sigma(e^+e^-\to\gamma a) \times {\rm Br}(a \to \gamma\gamma)$. \end{itemize} The LHC results~\cite{Jaeckel:2012yz, Bauer:2017ris, Bauer:2018uxu, Florez:2021zoo, dEnterria:2021ljz, Wang:2021uyb, Knapen:2016moh, CMS:2018erd, ATLAS:2020hii} give the best sensitivity for $m_a \gtrsim 5\GeV$, where the ALPs are assumed to be produced via vector gauge-boson fusions (VBFs): \begin{itemize} \item Resonant productions of the ALPs from photon fusions, $\gamma\gamma \to a \to \gamma\gamma$, have been studied in the Pb-Pb collision at the LHC~\cite{Knapen:2016moh, CMS:2018erd, ATLAS:2020hii}. The initial photons are emitted from the Pb nuclei. Heavy-ion collisions have been considered because the electric charge of the nucleus is larger than that of the proton.
This process provides the best sensitivity for $5 \lesssim m_a \lesssim 100\GeV$. \item For $100 \lesssim m_a \lesssim 160\GeV$, resonant productions of the ALPs via VBFs, $VV \to a \to \gamma\gamma$ $(V=\gamma, Z, W)$, have been studied by using the ATLAS analyses on Higgs boson productions via VBFs in the p-p collisions~\cite{Jaeckel:2012yz}. Here, the initial gauge bosons are emitted from quarks in the incoming protons. \item For $150 \lesssim m_a \lesssim 400\GeV$, VBF productions of the ALPs in the p-p collisions contribute to the signals~\cite{Jaeckel:2012yz}. It is found in the literature that a large kinematical cut is imposed on the invariant mass of the final-state photons, and their constraints can be interpreted as those on the non-resonant productions of the ALPs via photon fusions, $\gamma\gamma \to a^* \to \gamma\gamma$, where the initial photons are emitted from the proton. \item More recently, the reference~\cite{Bauer:2018uxu} has derived a bound on $\gamma\gamma \to a \to \gamma\gamma$ by recasting the ATLAS result on the search for spin-0 resonances~\cite{ATLAS:2017ayi}. The result shows the best sensitivity for $200 \lesssim m_a < 2700\GeV$. \end{itemize} The constraints from the on-shell productions of the ALPs via $\gamma\gamma \to a \to \gamma\gamma$ are interpreted as those on $\abs{g_{a\gamma\gamma}^{\rm eff}}^2{\rm Br}(a\to \gamma\gamma)$. On the other hand, those from the non-resonant productions, $\gamma\gamma \to a^* \to \gamma\gamma$, are recast by $\abs{g_{a\gamma\gamma}^{\rm eff}}^4$. Next, let us consider the collider constraints for the ALPs decaying into particles other than a pair of photons. They become significant especially when $g_{a\gamma\gamma}$ is absent at the tree level. The LEP and LHC bounds, in this case, have been studied in Ref.~\cite{Craig:2018kne}, where the ALPs are assumed to be produced at the on-shell. The most relevant ones on each mass region are summarized as follows: \begin{itemize} \item For $1 \lesssim m_a \lesssim 40\GeV$, the leading constraint is from $Z \to \gamma a \to \gamma jj$, where $j$ denotes a jet (gluon and quarks). The decay was measured on the $Z$ pole at the LEP experiment. The bounds are understood as those on $\Gamma(Z \to \gamma a) \times \sum_{j}{\rm Br}(a \to jj)$ with $j = {\rm light\ hadrons}, c, b$. \item For $40\GeV \lesssim m_a \lesssim m_{Z}$, the search for $Z \to \gamma a$, $a \to \nu\bar\nu \gamma$ at LEP gives a significant constraint. Here, the neutrinos are produced by exchanging an off-shell $Z$ boson, {\em i.e.}, $a \to Z^* \gamma \to \nu\bar\nu \gamma$. Such a three-body decay can compete with $a \to ff$ because the former (latter) proceeds at the tree level (via radiative corrections). The bounds are applied to $\Gamma(Z \to \gamma a) \times {\rm Br}(a \to \gamma\nu\bar\nu)$. \item The LHC tri-boson searches give the best sensitivities on the ALP models in larger mass regions, especially above the $Z$ threshold. In most of the ALP mass regions, the leading constraint is obtained by $pp \to \gamma^*, Z^* \to \gamma a$, $a \to Z\gamma \to \nu\bar\nu\gamma$.\footnote{Although the same final state is yielded by $pp \to aZ \to a\nu\bar\nu$ with $a \to \gamma\gamma$, the process is unlikely to contribute because the analysis requires a di-photon invariant mass larger than the ALP mass.} Hence, the bounds are imposed on $\sigma(pp \to a\gamma) \times {\rm Br}(a \to Z\gamma)$. The results are obtained up to $m_a = 500\GeV$ in the literature. 
The same process but with $Z \to \mu\bar\mu$ provides the best limit around $m_a = 110\GeV$. Also, for general choices of $(c_{BB}, c_{WW})$ the constraints from $pp \to W^* \to W a$, $a \to W^+W^-$ may become significant especially around $m_a = 300\GeV$, though they should be compared to the constraints with $g_{a\gamma\gamma} \neq 0$. \end{itemize} These results are based on the assumption that the ALPs are produced directly, {\em i.e.}, on-shell. In contrast, the reference~\cite{Bonilla:2022pxu} has studied non-resonant productions of the ALPs at the LHC. Here, they analyzed off-shell ALP contributions to vector-boson scattering processes such as $V_1V_2 \to a^* \to V_3V_4$ or t-channel ALP exchange, where $V_i$ denotes the gauge boson. The constraints are derived for various $c_{BB}$, $c_{WW}$, and $m_a$. Note that $m_{a}\lesssim 100~{\rm GeV}$ is assumed in the literature, otherwise on-shell ALP contributions cannot be neglected generally. \section{Results} \label{sec: result} In this section, we show the EWPT results of the ALPs and compare them with the experimental constraints. Among the ALP contributions, the $Z$-boson decay into $a + \gamma$ affects the probability likelihood only when the ALP is lighter than the $Z$ boson. Hence, we will focus on the case of $m_{a} \ll m_{Z}$ in Sec.~\ref{sec: result_light_ALP}. We will also show how much the overlooked contributions affect the EWPOs by comparing the results with those based on the analyses in the previous studies. In Sec.~\ref{sec: result_heavier_ALP}, we will study the case when the ALP is not much lighter than the $Z$ boson. In Sec.~\ref{sec: ewpt}, we have provided the formulae of the ALP contributions that can be applied to $m_{a} \gtrsim m_{Z}$. In this case, the ALPs are free from the flavor constraints but are subject to the collider ones, particularly from the LHC. We will compare the EWPT results with those bounds. In Sec.~\ref{sec: result_W_mass}, we will discuss the goodness of fit by focusing on the $W$ mass. In particular, the recent CDF result of the $W$ mass measurement is inconsistent with the SM prediction. It will be shown that the tension may be solved in the ALP model if the ALP is heavier than $500\GeV$, and thus, the goodness of fit is improved against the SM case. \subsection{Light ALP case} \label{sec: result_light_ALP} Let us first consider the case when the ALP is much lighter than the $Z$ boson. In this case, the ALP contributions to the EWPOs are insensitive to the ALP mass, as seen from Eqs.~\eqref{eq: delta_alpha_limit}--\eqref{eq: delta_W_limit}. In Fig.~\ref{fig: EWPT_2D}, the EWPT results are shown for $f_a = 1\TeV$ and $\Lambda = 4\pi f_a$. The probability likelihoods are calculated by globally fitting the ALP model to the EWPOs on the $(g_{a\gamma\gamma}, g_{aWW})$ plane. The red (blue) colored region is obtained by adopting $m_{W}^{\rm CDF}$ $(m_{W}^{\rm PDG})$ for the experimental data. The probability distributions are normalized on the coupling parameter plane. The darker- (lighter-) colored regions enclosed by the dashed lines correspond to the 68\% (95\%) level. \begin{figure}[t] \begin{minipage}{0.5\linewidth} \centering \includegraphics[scale=0.35]{figs/EWPO_gAAgWW_ma4GeV_Lam4pi.pdf} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \includegraphics[scale=0.35]{figs/EWPO_gAAgWW_ma5GeV_Lam4pi.pdf} \end{minipage} \caption{EWPO probability distribution on the $(g_{a\gamma\gamma}, g_{aWW})$ plane in units of ${\rm TeV}^{-1}$. 
For the red (blue) colored regions, $m_{W}^{\rm CDF}$ $(m_{W}^{\rm PDG})$ is adopted as the experimental value of the $W$ mass. The darker- (lighter-) colored regions enclosed by the dashed lines represent the 68\% (95\%) level. The ALP mass is $4\GeV$ (left) and $5\GeV$ (right). Also, $f_{a}=1\TeV$ and $\Lambda=4\pi f_{a}$ are taken. The other colored solid lines denote the experimental constraints. The regions including $(g_{a\gamma\gamma}, g_{aWW}) = (0,0)$ are allowed. In the cyan region, all the constraints are satisfied. } \label{fig: EWPT_2D} \end{figure} On the left panel, the ALP mass is set as $m_{a} = 4\GeV$. The flavor constraints impose very severe limits on $|g_{aWW}|$. The vertical axis is shown on a logarithmic scale to represent its tightness. In particular, the region above the gray line is excluded by $B^{+}\to K^{+}\gamma\gamma$. Although this decay is absent for $g_{a\gamma\gamma}=0$, such a parameter region is instead constrained by $B^{+}\to K^{+}\mu\mu$, by which the region above the magenta line is excluded. On the other hand, $g_{a\gamma\gamma}$ is limited by the collider constraints. In particular, $e^{+}e^{-}\to\gamma\gamma(\gamma)$ at LEP-I provides the best sensitivity except for $g_{a\gamma\gamma} \simeq 0$, and $e^{+}e^{-}\to\gamma jj$ on the $Z$ pole (LEP-I) does around $g_{a\gamma\gamma} = 0$. These collider bounds are satisfied in the region between the green lines. \begin{figure}[t] \begin{minipage}{0.5\linewidth} \centering \includegraphics[scale=0.35]{figs/EWPO_gAAgWW_ma5GeV_Lam4pi_CDF_comparison.pdf} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \includegraphics[scale=0.35]{figs/EWPO_gAAgWW_ma5GeV_Lam4pi_PDG_comparison.pdf} \end{minipage} \caption{EWPO probability distribution on $(g_{a\gamma\gamma}, g_{aWW})$. Here, $m_{a} = 5~{\rm GeV}$, $f_{a}=1\TeV$, and $\Lambda=4\pi f_{a}$ are taken. The red and blue regions are the same as Fig.~\ref{fig: EWPT_2D}. On the left (right) panel, the green (yellow) regions are the EWPT result only via $S$ and $U$, {\em i.e.}, ignoring the ALP contributions via $\Delta\alpha$, $\Delta_{Z}$, $\Delta_{W}$ and $\Gamma_{a\gamma}$. The darker (lighter) regions with the dashed lines correspond to the 68\% (95\%) level. } \label{fig: EWPT_2D_comp} \end{figure} On the right panel, the ALP mass is set as $m_{a} = 5\GeV$. The EWPT results are almost the same as those on the left panel: Note that the vertical axis is shown on a linear (logarithmic) scale on the right (left). For this ALP mass, the flavor constraints are absent because the heavy-meson decay into the ALP is kinematically blocked, and the collider bounds become relevant. The green lines correspond to the constraints from the LEP-I data, as in the case with $m_{a}=4\GeV$. Since this constraint is governed by $g_{aZ\gamma}$, the region around $g_{aZ\gamma} = 0$ is allowed. The scattering $e^{+}e^{-}\to\gamma\gamma(\gamma)$ at LEP-II also gives an independent constraint, and the region inside the magenta lines is allowed. In particular, this constraint works even for $g_{aZ\gamma} \simeq 0$, because a photon-exchange diagram contributes to the scattering. Although the LHC constraint by the non-resonant searches for the ALPs is shown by the black line, where the region inside the line is allowed, the result is weaker than those from LEP~I and II. In both plots, the regions allowed by all the experimental constraints are filled by the cyan. 
It is found that the EWPT result becomes consistent with the constraints at the 68\% level only when the PDG value is adopted for the $W$ mass. Besides, although the region around $g_{aZ\gamma}=0$ is relatively favored by the constraints, the probability likelihood of the EWPOs is not improved well. Therefore, we conclude that the recent CDF result of $m_{W}$ cannot be explained by the ALP as long as the mass is $m_{a} \ll m_{Z}$. Let us compare our results with those based on the analyses in the previous studies~\cite{Bauer:2017ris, Bauer:2018uxu, Yuan:2022cpw}. They considered the ALP much lighter than the $Z$ boson, which is the same as the setup in this subsection. However, as mentioned in Sec.~\ref{sec: ewpt}, the contributions via $\Delta\alpha$, $\Delta_{Z}$, $\Delta_{W}$ and $\Gamma_{a\gamma}$ have been ignored, {\em i.e.}, the ALP contributions have been supposed to arise only via $S$ and $U$. In Fig.~\ref{fig: EWPT_2D_comp}, we show the EWPT results with and without including $\Delta\alpha$, $\Delta_{Z}$, $\Delta_{W}$ and $\Gamma_{a\gamma}$. Here, $m_{a}=5~{\rm GeV}$, $f_a=1\TeV$ and $\Lambda=4\pi f_{a}$ are set. The former result is the same as those on the right panel in Fig~\ref{fig: EWPT_2D}, while the latter corresponds to the setup in the previous studies. On the left (right) panel, $m_{W}^{\rm CDF}$ $(m_{W}^{\rm PDG})$ is adopted as the experimental value of the $W$ mass. In both cases, it is obvious that the EWPT results are modified drastically by the new contributions. In particular, the impact of $Z\to a\gamma$ is strong, because it proceeds at the tree level, and thus, worsens the global fit significantly in the region with $g_{aZ\gamma} \neq 0$. In addition, the effects via $\Delta\alpha$, $\Delta_{Z}$, and $\Delta_{W}$ are comparable to those via $S$ and $U$. Therefore, it is found that the analyses based only on $S$ and $U$ are not valid. The ALP contributions via $\Delta\alpha$, $\Delta_{Z}$, $\Delta_{W}$ and $\Gamma_{a\gamma}$ must be taken into account in the EWPO analysis. \subsection{Heavier ALP case} \label{sec: result_heavier_ALP} As shown in the previous subsection, the EWPT results for $m_{a} \ll m_{Z}$ are tightly limited by the flavor and collider constraints. In this subsection, we consider the case of $m_{a} \gtrsim m_{Z}$. In Fig.~\ref{fig: EWPT_2D_ma_gtr_mZ}, we show the EWPT results on the $(g_{a\gamma\gamma}, g_{aWW})$ plane. Here, we take $m_{a} = 195\GeV$ and $600\GeV$ on the left and right panels, respectively. Also, $\Lambda=4\pi f_{a}$ with $f_{a}=1\TeV$ is set for both masses. The collider constraints depend on the ALP mass. For $m_{a}=195~{\rm GeV}$, the photon scattering via the off-shell ALP exchange, $\gamma\gamma\to a^{*}\to \gamma\gamma$, gives the best constraint on $g_{a\gamma\gamma}$. The region between the green lines is allowed by this bound. On the other hand, the constraint from $pp\to Z^{*}\to a\gamma$ with $a\to Z\gamma\to\nu\nu\gamma$ is shown by the orange lines, where the region between the lines is allowed. Note that the production cross section becomes smaller as $g_{aZ\gamma}$ is suppressed. The region that satisfies both constraints is shown by the cyan. It is found that the EWPT result is consistent with them at the 68\% level if the PDG value is adopted for the $W$ mass, while there is no overlap for the CDF value. 
\begin{figure}[t] \begin{minipage}{0.5\linewidth} \centering \includegraphics[scale=0.35]{figs/EWPO_gAAgWW_ma195GeV_Lam4pi.pdf} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \includegraphics[scale=0.35]{figs/EWPO_gAAgWW_ma600GeV_Lam4pi.pdf} \end{minipage} \caption{Same as Fig.~\ref{fig: EWPT_2D} but $m_{a} = 195\GeV$ (left) and $600\GeV$ (right). The collider bounds are shown by the orange and green solid lines, and both of them are satisfied in the cyan region. } \label{fig: EWPT_2D_ma_gtr_mZ} \end{figure} As mentioned in Sec.~\ref{sec: constraint}, many of the collider bounds can be avoided if the ALP is heavier than $500\GeV$. From the right panel, where the mass is $m_{a}=600~{\rm GeV}$, only the search for the on-shell ALP production via the photon fusion, $\gamma\gamma\to a\to \gamma\gamma$, gives a constraint. This is displayed by the green lines, and the region between them is allowed (filled by the cyan). It is seen that the ALP coupling is favored to satisfy $g_{a\gamma\gamma} \simeq 0$, and the constraint becomes weaker as $g_{aWW}$ grows because the branching ratio of $a\to\gamma\gamma$ decreases. We found that the EWPT result can be consistent with the constraints at the 68\% level even for $m_{W}^{\rm CDF}$ as well as $m_{W}^{\rm PDG}$. The $W$ mass will be discussed more in Sec.~\ref{sec: result_W_mass}. As the ALP coupling to di-photon is favored to be suppressed, let us focus on the case with $g_{a\gamma\gamma} = 0$. In Fig.~\ref{fig: EWPT_1D_Lam4pi}, we show the EWPT result for various $m_{a}$ with $g_{a\gamma\gamma} = 0$ and compare them with the experimental constraints. Here, we take $\Lambda=4\pi f_{a}$ with $f_{a}=1\TeV$. The probability distribution is normalized on the $g_{aWW}$ parameter space for each $m_{a}$, and the 68\% probability range is shown. The red (blue) colored band is the result obtained for $m_{W}^{\rm CDF}$ $(m_{W}^{\rm PDG})$. The regions excluded by the flavor, LEP, and LHC are shown by the brown, orange, and green regions, respectively. It is found that the EWPT result for $m_{W}^{\rm PDG}$ can be consistent with the constraints at the 68\% level except for $50 \lesssim m_{a} \lesssim 160\GeV$, though a part of the parameter regions is excluded for $m_{a} \lesssim 50\GeV$ (see also the discussion in Sec.~\ref{sec: result_W_mass}). In contrast, the result for $m_{W}^{\rm CDF}$ is already excluded unless the ALP is heavier than $500\GeV$. \begin{figure}[t] \centering \includegraphics[scale=0.5]{figs/gWW_maScan_Lam4pi.pdf} \caption{EWPO probability distribution for various $m_{a}$. Here, $g_{a\gamma\gamma}=0$, $f_{a}=1\TeV$, and $\Lambda=4\pi f_{a}$ are taken. The distribution function is normalized on the $g_{aWW}$ parameter region for each $m_{a}$. The red (blue) colored region corresponds to the 68\% level for $m_{W}^{\rm CDF}$ $(m_{W}^{\rm PDG})$. The regions excluded by the flavor, LEP, and LHC constraints are filled by brown, orange, and green, respectively.} \label{fig: EWPT_1D_Lam4pi} \end{figure} \subsection{Goodness of fit and $W$-boson mass} \label{sec: result_W_mass} In Secs.~\ref{sec: result_light_ALP} and \ref{sec: result_heavier_ALP}, we showed the EWPT results, where the likelihood functions were studied by performing the global fit to the EWPOs. It should be stressed that the probability distribution was normalized on the ALP-coupling plane, {\em i.e.}, the total probability is equal to unity on the plane, for each ALP mass. 
Therefore, the parameter region corresponding to ``68\% probability'' does not always mean that {\it all} theoretical values are consistent with the experimental data but indicates that the fit inside the region is {\it relatively} better than outside. For instance, the tension between the theoretical and CDF values of the $W$ mass is not always solved even in that region. In this subsection, let us discuss how well the ALP contributions improve the global fit, particularly paying attention to the case when $m_{W}^{\rm CDF}$ is adopted in the likelihood. Since the largest tension arises in the $W$ mass, the goodness of fit is governed by $m_{W}$ among the EWPOs listed in Table~\ref{tab:EWPO}. In Fig.~\ref{fig: MwValue}, we show the theoretical values of $m_{W}$ for various $m_{a}$. Here, $g_{a\gamma\gamma} = 0$, $f_{a}=1\TeV$ and $\Lambda=4\pi f_{a}$ are set. The results are obtained by performing the global fit to the EWPOs. In particular, the indirect prediction of $m_{W}$ is displayed by the black bar, where $m_{W}$ is not included in the fit observables. On the other hand, the red bar represents the theoretical value for which $m_{W}$ is included in the likelihood. We derive the probability distributions as a function of $m_{W}$. Then, the central value, which is denoted by the dot on the bar, is determined such that the likelihood becomes maximum, while the 68\% uncertainties are shown by the bars. The results are compared with the SM prediction (green), PDG (cyan), and CDF-averaged (orange) values, where the $1\sigma$ range is shown. \begin{figure}[t] \centering \includegraphics[scale=0.7]{figs/MwValue.pdf} \caption{ALP predictions of $m_{W}$. Here, $g_{a\gamma\gamma}=0$, $f_{a}=1\TeV$, and $\Lambda=4\pi f_{a}$ are taken. The red (black) bars represent the results obtained with (without) including $m_{W}$ in the global fit. The orange and cyan bands correspond to $m_{W}^{\rm CDF}$ and $m_{W}^{\rm PDG}$ with the $1\sigma$ uncertainty, while the green band is the SM prediction with the $1\sigma$ error. The ALP coupling is restricted by the experimental constraints during the fit analysis for the results with the label ``w/ Exp.''} \label{fig: MwValue} \end{figure} For $m_{a}=5\GeV$, the black and red bars are obtained without taking the experimental constraints into account. Even though the constraints are ignored, the theoretical values cannot be consistent with $m_{W}^{\rm CDF}$ because of the effects of $Z \to a\gamma$. Since the ALP contribution deteriorates the fit rapidly, the model cannot shift the theoretical values of the EWPOs away from the SM prediction, {\em i.e.}, does not solve the $m_{W}$ tension. Although the result is shown for $m_{a}=5\GeV$, the same conclusion holds as long as $Z \to a\gamma$ is effective, {\em i.e.}, for $m_{a} \lesssim 90\GeV$. Similarly, the goodness of fit for the case with $m_{W}^{\rm PDG}$ is not improved much relative to the SM case. If the ALP is heavier than the $Z$ boson, the decay of $Z \to a\gamma$ is kinematically forbidden. We show the results for $m_{a}=195\GeV$ without and with restricting the parameter space by the experimental constraints, which are labeled ``w/o Exp'' and ``w/ Exp,'' respectively. According to the former result, although the indirect prediction seems to be around the SM value and does not overlap with the $m_{W}^{\rm CDF}$ range, the uncertainty is asymmetric between the smaller and larger $m_{W}$ values. Namely, the probability distribution has a long tail toward larger $m_{W}$.
Consequently, the theoretical prediction including $m_{W}$ in the fit (the red bar) becomes consistent with $m_{W}^{\rm CDF}$ at the 68\% level. Once the experimental constraints are taken into account, the parameter space is restricted tightly for $m_{a}=195\GeV$. Then, the ALP contribution cannot make $m_{W}$ shift from the SM prediction sufficiently, as shown by the result for ``w/ Exp.'' It is understood from Fig.~\ref{fig: EWPT_1D_Lam4pi} that the same conclusion holds as long as the collider bounds are tight. The collider constraints as well as those from flavor are relaxed if the ALP is heavier than $500\GeV$. In the same figure, we show the theoretical values for $m_{a} = 600\GeV$. As shown in Fig.~\ref{fig: EWPT_2D_ma_gtr_mZ}, the model is free from the experimental constraints for $g_{a\gamma\gamma} = 0$. Similar to the case for $m_{a}=195\GeV$ without including the experimental constraints, it is found that the ALP contributions can solve the tension between the SM and CDF-averaged values of the $W$ mass. In summary, although the quality of the global fit to the EWPOs can be improved effectively relative to the SM case by the ALP with a mass larger than $m_{Z}$, the model is constrained tightly by the collider measurements for $m_{a}<500\GeV$. Thus, the tension between the SM and CDF-averaged values of the $W$ mass can be solved only when the ALP is heavier than $500\GeV$. In Fig.~\ref{fig: MwValue}, we have focused on the ALP model with $m_{W}^{\rm CDF}$ for the experimental value of the $W$ mass. Here, let us comment on the case when $m_{W}^{\rm PDG}$ is adopted. Similar to the above case, the probability likelihood is only weakly affected by the ALP as long as $Z \to a\gamma$ is open and/or the experimental bounds are severe. The goodness of fit is improved effectively compared to the SM case if the ALP is heavier than about $160\GeV$ under the collider constraints (see also Fig.~\ref{fig: EWPT_1D_Lam4pi}). \begin{figure}[t] \centering \includegraphics[scale=0.5]{figs/gWW_maScan_Lam1.pdf} \caption{Same as Fig.~\ref{fig: EWPT_1D_Lam4pi} but $\Lambda=f_{a}$.} \label{fig: EWPT_1D_Lam1} \end{figure} Before closing this section, let us comment on the cutoff dependence of the results. The cutoff appears in the loop functions and should be determined by a UV theory of the ALP model. In Fig.~\ref{fig: EWPT_1D_Lam1}, we perform the same analysis as Fig.~\ref{fig: EWPT_1D_Lam4pi} but with $\Lambda=f_{a}$. It is noticed that the EWPT regions drop rapidly above $m_{a}=400\GeV$. This is because the ALP contributions, especially the one to the $W$ mass, flip sign. As a result, the goodness of fit is not improved much by the ALP contributions. Although the $m_{W}$ tension seems to be solved for $90 \lesssim m_{a} \lesssim 400\GeV$, all the parameter regions are excluded by the experimental constraints. Therefore, if the recent CDF result is confirmed in future experiments, a larger cutoff scale is favored. \section{Conclusions and discussion} \label{sec: conclusion} We have studied the EWPOs in the ALP model. The ALP is assumed to couple with the SM ${\rm SU(2)}_L$ and the ${\rm U(1)}_Y$ gauge bosons. We have provided the formulae of the ALP contributions to the EWPOs valid for any ALP mass and shown that the contributions to $\Delta\alpha$, $\Delta_{Z}$, and $\Delta_{W}$ as well as the oblique parameter $U$ can be comparable to $S$ ($T=0$ in the ALP model). Furthermore, the decay of $Z\to a\gamma$ contributes to the total width of the $Z$-boson decay significantly.
We have analyzed the probability likelihood generated from the ALP contributions by performing the global fit to the EWPOs and compared the results with the flavor and collider constraints. Because of $Z\to a\gamma$, the ALP contributions to the EWPOs are forced to be small as long as the ALP is lighter than the $Z$ boson. Besides, since the experimental constraints are tight, the EWPT-favored regions are largely excluded unless the ALP is sufficiently heavy. Also, even when the ALP is heavy, the parameter regions are limited as long as the ALP couples sizably to a pair of photons. Therefore, a heavy ALP with interactions satisfying $g_{a\gamma\gamma} \sim 0$ is preferred to improve the global fit relative to the SM case. In the numerical analysis, we have studied both cases where the recent CDF result of the $W$-mass measurement is included in or excluded from the experimental data. If the PDG value is adopted, {\em i.e.}, the CDF result is neglected in the $W$ mass average, the ALP is favored to be heavier than about $160\GeV$ while satisfying $g_{a\gamma\gamma} \sim 0$. On the other hand, if the recent CDF result, which is inconsistent with the SM prediction, is included, it has been concluded that the ALP model can solve the tension only for $m_{a} > 500\GeV$ with $g_{a\gamma\gamma} \sim 0$. Let us comment on contributions from higher-dimensional operators that are not included in the ALP Lagrangian \eqref{eq: Lagrangian}. They are likely to be non-negligible when $m_{a}$ is comparable to $f_a$. The higher-dimensional operators should depend on a UV theory of the ALP model. The theory also determines the cutoff scale, which has been set as a model parameter in our analysis. Since we have set $f_{a}=1\TeV$, the results might be altered especially around $m_{a} \sim 1\TeV$. We have argued that the EWPOs are sensitive to the ALP models. In particular, if the recent CDF result of the $W$ mass measurement is confirmed in future experiments, the model could provide an attractive solution. Although the model parameters are restricted by the experimental constraints, the model setup for solving the tension has not been explored sufficiently by colliders such as the LHC. Therefore, further collider studies would be helpful to establish the scenario. \section*{Acknowledgements} This work is supported by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research on Innovative Areas (No.~21H00086 [ME] and No.~22J01147 [MA]) and Scientific Research B (No.~21H01086 [ME]).
{ "arxiv_id": "2302.11392", "language": "en", "timestamp": "2023-02-23T02:15:08", "url": "https://arxiv.org/abs/2302.11392", "yymm": "2302" }
\section{Introduction} A significant fraction of field stars are formed as part of a binary system \citep{Eggleton2008, Raghavan2010}. Of these binaries, around 25 per cent are formed with sufficiently small orbital separations such that at some stage in their lives the two stars will interact with each other \citep{willems2004}, transferring material between them and affecting their future evolution. For many of these interacting systems, the mass transfer will lead to a common-envelope phase, initiated by the post-main-sequence evolution of the more massive star (the primary). This involves both stars being engulfed by the expanding outer envelope of the more massive star, with the resulting drag forces causing the hot core of the primary star and its main-sequence companion to spiral in to small orbital separations and therefore short orbital periods ranging from hours to a few days. The orbital energy and angular momentum lost from the binary are imparted into the envelope, ejecting it \citep{Paczynski1976}, where it may then be ionised and lit up by the hot remnant core, appearing as a planetary nebula for a period of time \citep{Jones2017} before the core cools and stratifies to become a white dwarf (WD). As well as being a key tracer of the common-envelope phase, these short-period detached post-common-envelope binaries (PCEBs) made up of a WD and a main-sequence star are the progenitors of many of the most interesting and exotic astrophysical objects and phenomena in the Universe, including the cosmologically important type Ia supernovae. Binaries made up of a WD and a main-sequence star are typically split into two categories, one containing the WDs with solar-type companions (WD+FGK) and the other made up of WDs with companions of spectral type M and later (referred to as WD+dM). These categories reflect the differences in observational properties, with the WD+dM binaries being relatively easy to find due to the two stars often contributing a similar amount of flux at optical wavelengths. This has allowed a large sample to be extracted from spectroscopic surveys \citep{Rebassa-Mansergas2007, Rebassa-Mansergas2010, Rebassa-Mansergas2012b, Rebassa-Mansergas2016}, making them the most commonly identified category for many years. More recently, WD+FGK binaries have been found by using UV excesses to discern systems with WDs that would otherwise be outshone by their companion at optical wavelengths \citep{Parsons2016a, Rebassa-Mansergas2017}. While both types can be found with short orbital periods \citep{Hernandez2022b} -- and therefore with small orbital separations and a relatively high chance of eclipse -- the advantage of the WD+dM PCEBs is that the WD contributes enough of the total flux that the eclipses can be detected, enabling them to be found in photometric surveys \citep{Parsons2013b, Parsons2015}. Eclipsing systems are a gold standard in astrophysics, allowing for incredibly precise measurements of the stellar and binary parameters, with typical precisions at or below the per cent level. 
The result of this is that eclipsing PCEBs are some of the best laboratories of stellar and binary physics available to us and, as such, have been used to test and study a multitude of effects including, but not limited to: precisely measuring mass-radius relations of WDs \citep{Parsons2017a}, confirming the over-inflation of M~dwarfs relative to theoretical models \citep{Parsons2018a}, distinguishing the transition between helium and carbon-oxygen core compositions in WDs \citep{Parsons2017a}, finding systems with brown dwarf companions \citep{Beuermann2013, Parsons2017b, Casewell2020b, vanRoestel2021}, and identifying unusual systems such as merger products and extremely low-metallicity systems \citep{O'Brien2001, Rebassa-Mansergas2019a}. In the current era of wide-field time-domain photometric sky surveys, such as the Zwicky Transient Facility (ZTF; \citealt{Masci2019, Bellm2019, Graham2019}), the number of known eclipsing PCEBs is increasing drastically, with ZTF alone so far contributing more than an order of magnitude increase (van Roestel et al. in prep) over the previously known sample \citep{Parsons2015}. The Legacy Survey of Space and Time (LSST; \citealt{Ivezic2019}), to be carried out by the Vera Rubin Observatory in the near future, will only accelerate this increase. Follow-up of this vast quantity of systems will be an ongoing challenge, particularly as many of these will be extremely faint, but they will provide much-needed insight into the relatively uncertain physics of the common-envelope phase as well as uncovering rare systems that may have implications for specific areas of stellar or binary physics. These include, but are not limited to, systems containing magnetic, pulsating, or high-mass WDs, as well as those with brown dwarf companions. Previous work has shown that high-cadence multi-colour photometric observations of the primary eclipse are enough to accurately and efficiently characterize detached eclipsing PCEBs \citep{Brown2022}. This method makes use of the eclipse to cleanly disentangle the spectral energy distributions of the two components and constrain the effective temperatures, while using the shape of the eclipse to measure the orbital inclination and the stellar radii. These, in turn, provide information about the stellar masses through the use of mass-radius relations. A photometric method such as this is especially important as fainter systems are discovered -- particularly in the LSST era -- making spectroscopic follow-up even more difficult, and in many cases, impractical. With this in mind, we have undertaken a program of high-cadence photometric follow-up of eclipsing WD+dM PCEBs (first discovered by van Roestel et al. in prep) with the goal of characterizing a significant fraction and discovering a number of rare systems among them. Here we present the first results of this follow-up. \section{Observations} \subsection{Target selection} Our targets for follow-up were selected from the detached eclipsing WD+dM systems discovered by van Roestel et al. (in prep) using data from ZTF. In brief, this sample was created by searching for periodic outliers in the ZTF photometry, indicative of eclipses. The primary biases are therefore related to the probability of a given system eclipsing as viewed from Earth and the ability to detect an eclipse within the ZTF data. 
The former is dominated by the orbital period (with a very weak dependence on the secondary radius), while the latter is dominated by the signal-to-noise ratio of the eclipse, with a heavy dependence on the depth of the eclipse (and a much weaker dependence on the duration of the eclipse). A more detailed description of the full ZTF eclipsing WD+dM sample identification method and the biases within it will be presented in van Roestel et al. (in prep). We restricted our target list to systems visible from the La Silla Observatory (Dec < +25~deg) and brighter than $g=19.5~\rm{mag}$. We typically observed systems with eclipse timings that made for the most efficient use of telescope time on a particular night; however, we also tried to prioritise systems with longer periods where possible, since the eclipses of these systems are more difficult to observe. Systems with ZTF light curves that indicated they may be of particular interest were also prioritised. This includes systems with in-eclipse flux measurements at or below the detection threshold of ZTF (indicative of brown dwarf companions) and systems with unusual ZTF light curves, showing variability inconsistent with typical binary variability mechanisms and indicating the presence of a magnetic WD. A journal of observations is included in \autoref{tab:observations}. \subsection{High speed photometry} Our photometric follow-up observations made use of the three-band frame-transfer camera, ULTRACAM \citep{Dhillon2007}, mounted on the 3.6\,m New Technology Telescope (NTT) at the ESO La Silla Observatory in Chile, to obtain high-cadence multi-colour photometry of the primary eclipse of each system -- the eclipse of the WD by its companion. For all targets observed with ULTRACAM we used the higher-throughput Super-SDSS $u_{s}\,g_{s}\,i_{s}$ filters \citep{Dhillon2021}, with the exception of one observation where the $r_{s}$ filter was used in place of $i_{s}$. For a few of the systems thought to harbour magnetic WDs, we obtained high-speed photometry with the quintuple-band frame-transfer camera, HiPERCAM \citep{Dhillon2021}, mounted on the 10.4\,m Gran Telescopio Canarias (GTC) at the Roque de los Muchachos Observatory in La Palma, again equipped with Super-SDSS $u_{s}\,g_{s}\,r_{s}\,i_{s}\,z_{s}$ filters. All observations were bias-subtracted and flat-field corrected (and fringe corrected in the case of the HiPERCAM $z_{s}$ band) using the HiPERCAM pipeline\footnote{\url{https://github.com/HiPERCAM/hipercam}}. Differential aperture photometry was then extracted using a variable aperture radius set to scale with the measured full width at half-maximum (FWHM) in each frame in order to remove effects due to seeing and transparency variations. For this we use a target aperture radius of $1.8\times FWHM$. In observations with lower signal-to-noise ratios, optimal extraction \citep{Naylor1998} was also performed, with whichever extraction method produced the highest signal-to-noise light curve being the one that was used. Flux calibration was then performed by fitting the atmospheric extinction in each band using one or more observing runs taken on the same night as the target observations (each spanning a minimum of 0.2 airmasses). The atmospheric extinction measurements were combined with an observation of an ULTRACAM flux standard star \citep[see][table A3]{Brown2022}, reduced using a larger target aperture radius of $2.5\times FWHM$, in order to measure the instrumental zeropoint for the night. 
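As a rough, purely illustrative sketch of this calibration step (the star, numbers, and magnitude convention below are invented for the example; the actual reduction and calibration are performed with the packages described in the text), the extinction coefficient and nightly zeropoint could be estimated along these lines:
\begin{verbatim}
import numpy as np

# Illustrative sketch only: estimate the atmospheric extinction coefficient
# k and the nightly instrumental zeropoint ZP from (invented) instrumental
# magnitudes of a constant comparison star observed over a range of airmass,
# plus a single observation of a flux standard of known magnitude.
# Convention assumed here: m_true = m_inst - k * airmass + ZP.

airmass = np.array([1.10, 1.18, 1.25, 1.33])
m_inst  = np.array([14.62, 14.65, 14.68, 14.71])  # -2.5*log10(counts/s)

# Extinction coefficient from the slope of instrumental magnitude vs airmass
k, intercept = np.polyfit(airmass, m_inst, 1)

# Zeropoint from a flux standard of known magnitude m_std observed at X_std
m_inst_std, m_std, X_std = 13.90, 16.45, 1.15
ZP = m_std - (m_inst_std - k * X_std)

# Calibrated magnitude of the comparison star at airmass 1.20
m_cal = 14.66 - k * 1.20 + ZP
print(f"k = {k:.3f} mag/airmass, ZP = {ZP:.2f}, m_comp = {m_cal:.2f}")
\end{verbatim}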
The calibrated flux of the comparison star was then determined using the same target aperture radius as for the flux standard star, and this was then used to flux calibrate the target. When using optimally extracted photometry, the flux calibration was still performed on the data reduced using a standard aperture photometry extraction. This calibration was then applied to the optimally extracted photometry to prevent systematic absolute flux errors between the two methods. These flux calibration steps were performed using the \textsc{cam\_cal}\footnote{\url{https://github.com/Alex-J-Brown/cam_cal}} package. \section{Method} We fit the flux-calibrated eclipse photometry using the \textsc{pylcurve}\footnote{\url{https://github.com/Alex-J-Brown/pylcurve}} package, a \textsc{python} wrapper for \textsc{lcurve}'s \textsc{lroche} routine \citep{Copperwheat2010}. In general, we follow the method of \citet{Brown2022}, which involves fitting the eclipse photometry in multiple filters simultaneously with eight free parameters. These are the effective temperatures, $\rm{T_{1}}$ and $\rm{T_{2}}$, which define the spectral energy distributions (SEDs) of both stars through the use of stellar atmosphere models \citep{Claret2020a, Husser2013}; the stellar masses, $\rm{M_{1}}$ and $\rm{M_{2}}$; the binary inclination, $i$; the parallax, $\varpi$; the interstellar reddening, $E(B-V)$; and the time of mid-eclipse, $\rm{T_{0}}$. With the use of mass-radius relations and a given (fixed) orbital period, $\rm{P}$, the radii of both stars and the orbital separation of the binary can be defined, allowing model light curves to be generated for each filter. See \citet{Brown2022} for more details on this method. For this work, however, we implement two changes to the methodology mentioned above, both regarding the spectral modelling of the secondary star: \begin{enumerate} \item \label{itm:first} Previously, PHOENIX stellar atmospheres \citep{Husser2013} were used to model the SED of the secondary star \citep{Brown2022}. However, these models are limited to a minimum effective temperature of 2300~K, preventing the modelling of systems with brown dwarf companions. We have therefore switched to using the BT-Settl CIFIST stellar atmosphere grid \citep{Allard2012}, which extends down to 1200~K, allowing for a seamless transition to the brown dwarf regime and keeping our modelling consistent throughout. \item \label{itm:second} It is well known that there are significant differences in the synthetic photometry of low-mass stars calculated using different spectral models for a given effective temperature and surface gravity. This is most apparent for lower effective temperatures (<3500~K), with models struggling to reproduce the transitions from M~dwarfs to L~dwarfs to T~dwarfs \citep{Saumon2008, Allard2012, Best2021}. Rigidly defining the SED of the secondary from these spectral models could therefore introduce problems where the model photometry cannot reproduce the observed SED of the star in question to the precision of our observations. We counter this by allowing the secondary to have a separate effective temperature in each observed bandpass. Despite being allowed to vary, these individual filter-specific effective temperatures should still be broadly consistent with one another. We implement this consistency requirement using priors to favour solutions where these effective temperatures are similar across the different filters. 
\end{enumerate} In order to inform the priors on the filter-specific secondary temperatures mentioned in \autoref{itm:second}, we use a sample of 15\,279 well-characterised M~dwarfs \citep{Morrell2019}. Cross-matching this sample with SDSS DR13 returns a sample of 5\,222 M~dwarfs, on which we then make colour cuts informed by synthetic photometry of the BT-Settl CIFIST model atmospheres ($4.0 < (u'-i') < 6.4$ and $1.5 < (g'-i') < 3.4$) to remove many of the extreme outliers. This leaves 4\,158 M~dwarfs with SDSS photometry. We then fit fifth-order polynomials to the measured effective temperature as a function of the $u'-i'$ and $g'-i'$ colours individually, using an iterative sigma clipping procedure with a $3\sigma$ cut to remove any outliers that remain after the initial colour cuts (\autoref{fig:temp_var}). The standard deviations of the residuals of the remaining points are 80~K for the $u'-i'$ colour and 30~K for the $g'-i'$ colour. We therefore implement Gaussian priors on the difference in effective temperature between the $u'$ and $i'$, and $g'$ and $i'$ bands of 80~K and 30~K respectively, both centred at zero. As this method yields as many temperature measurements for the secondary as there are filters used, we take the $i_{s}$-band measurement as being representative of the true secondary temperature. We make this choice based on it being the band in which the secondary is brightest and is therefore the most strongly constrained by the photometry. \begin{figure} \includegraphics[width=\columnwidth]{figures/temp_variance_1col.pdf} \caption{Effective temperatures of M~dwarfs measured by \citet{Morrell2019} against their SDSS colours. Blue crosses show points discarded by the sigma clipping procedure and the solid black lines show the final polynomial fits to these sigma-clipped distributions. The residuals of these fits, from which we calculate the standard deviations, are shown in the panels below. The gap in the sample at an effective temperature of $4000~\rm{K}$ is due to a discontinuity in the model grid used by \citet{Morrell2019}.} \label{fig:temp_var} \end{figure} As in \citet{Brown2022}, we use a Markov Chain Monte Carlo (MCMC) method to fit each light curve, implemented through the \textsc{python} package, \textsc{emcee}\footnote{\url{https://emcee.readthedocs.io/en/stable/}} \citep{Foreman-Mackey2013}. We run each fit for a minimum of 10\,000 steps using 100 walkers and inspect each fit manually for convergence and stability. Each system is first fit using a carbon-oxygen (CO) core WD mass-radius relation \citep{Bedard2020, Blouin2018, Tremblay2011}, with the fit then being repeated using a helium (He) core model \citep{Panei2007} if the best-fit CO-core WD mass is below $0.5~\rm{M_{\odot}}$. If this subsequent fit using the He-core model is restricted by the upper mass limit of the He-core models -- $0.5~\rm{M_{\odot}}$ -- then we consider the WD to have a CO core composition; if not, we assume the WD to possess a He core. \section{Results} The results of our light curve fits are presented in \autoref{tab:stellar_results} and \autoref{tab:binary_results} -- note that of the 43 systems that we have followed up, 9 do not have measured parameters because they either harbour magnetic WDs or are strong candidates for doing so (see section \ref{sec:magnetic_wd}). Our best-fit values are taken to be the median of the posterior distributions of the MCMC, with lower and upper uncertainties taken as the 16th and 84th percentiles respectively. 
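As a schematic illustration of how the filter-tied temperature priors and this percentile summary fit together (a minimal sketch: the \textsc{pylcurve} eclipse model and the real photometry are replaced by a toy Gaussian likelihood, and all values other than the 80~K and 30~K prior widths and the walker and step counts are invented), the fit could be organised as follows:
\begin{verbatim}
import numpy as np
import emcee

def log_prior(theta):
    # Gaussian priors tying the filter-specific secondary temperatures
    # together: sigma = 80 K for T2(u)-T2(i) and 30 K for T2(g)-T2(i).
    t2_u, t2_g, t2_i = theta
    return -0.5 * (((t2_u - t2_i) / 80.0) ** 2
                   + ((t2_g - t2_i) / 30.0) ** 2)

def log_posterior(theta):
    # Toy stand-in for the eclipse-photometry likelihood; the real fit
    # compares pylcurve model light curves with the data in every band.
    return log_prior(theta) - 0.5 * np.sum(((theta - 3000.0) / 100.0) ** 2)

ndim, nwalkers, nsteps = 3, 100, 10000
p0 = 3000.0 + 10.0 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps, progress=False)

# Summarise the i-band secondary temperature by its median and the
# 16th/84th percentiles of the burn-in-trimmed, flattened chain.
flat = sampler.get_chain(discard=nsteps // 2, flat=True)
lo, med, hi = np.percentile(flat[:, 2], [16, 50, 84])
print(f"T2(i) = {med:.0f} +{hi - med:.0f} / -{med - lo:.0f} K")
\end{verbatim}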
As in \citet{Brown2022}, the formal uncertainties from the MCMC do not include contributions from systematic errors and so we attempt to take this into account by adding estimated systematic uncertainties in quadrature with the formal uncertainties of the MCMC. We add 1.5~per~cent in quadrature with the uncertainties on the primary temperature \citep{Gianninas2011}, $\rm{T_{1}}$, and 100~K in quadrature with the secondary temperature, $\rm{T_{2}}$. We also add 1~per~cent in quadrature with the WD mass, $\rm{M_{1}}$, and 5 per cent in quadrature with the secondary mass, $\rm{M_{2}}$ (for the reasons explained in \citet{Brown2022}). These contributions are included in the uncertainties shown in \autoref{tab:stellar_results} and in all figures. An example ULTRACAM eclipse light curve and best-fit model is shown in \autoref{fig:lc_example} with all best-fit light curves shown in Appendix \ref{sec:light_curves}. \begin{figure} \includegraphics[width=\columnwidth]{figures/lightcurves/ZTFJ0410.pdf} \caption{ULTRACAM $u_{s}\,g_{s}\,i_{s}$ eclipse light curve (coloured points) of ZTF\,J041016.82$-$083419.5 with the best-fit light curve model over-plotted in black and the residuals of this fit shown below. The zero-flux level is shown by the horizontal grey line.} \label{fig:lc_example} \end{figure} \begin{table*} \centering \caption{Best fit stellar parameters to the ULTRACAM eclipse photometry. Uncertainties include estimated systematic errors added in quadrature with the formal uncertainties of the MCMC. These estimated systematics are 1.5~per~cent on $\rm{T_{1}}$ \citep{Gianninas2011}, 100~K on $\rm{T_{2}}$, 1 per cent for $\rm{M_{1}}$, and 5 per cent for $\rm{M_{2}}$ \citep{Brown2022}.} \tabcolsep=0.13cm \label{tab:stellar_results} \begin{tabular}{@{}lcllllccll@{}} \hline Target & He/CO & $\rm{T_{1}~(K)}$ & $\rm{M_{1}~(M_{\odot})}$ & $\rm{R_{1}~(R_{\odot})}$ & $\rm{log(g_{1})}$ & $\rm{T_{2}~(K)}$ & $\rm{M_{2}~(M_{\odot})}$ & $\rm{R_{2}~(R_{\odot})}$ & $\rm{R_{2}/R_{L1}}$ \\ \hline ZTF\,J041016.82$-$083419.5 & He & $14690^{+560}_{-550}$ & $0.355^{+0.015}_{-0.011}$ & $0.0204^{+0.0004}_{-0.0006}$ & $7.37^{+0.04}_{-0.03}$ & $2840^{+110}_{-110}$ & $0.123^{+0.009}_{-0.008}$ & $0.151^{+0.008}_{-0.006}$ & $0.680^{+0.033}_{-0.021}$ \\[0.8ex] ZTF\,J051902.06$+$092526.4 & He & $10750^{+770}_{-580}$ & $0.391^{+0.019}_{-0.029}$ & $0.0178^{+0.0009}_{-0.0005}$ & $7.53^{+0.04}_{-0.08}$ & $2800^{+140}_{-110}$ & $0.177^{+0.014}_{-0.019}$ & $0.214^{+0.012}_{-0.018}$ & $0.842^{+0.079}_{-0.084}$ \\[0.8ex] ZTF\,J052848.24$+$215629.0 & CO & $12100^{+700}_{-630}$ & $0.787^{+0.025}_{-0.025}$ & $0.0105^{+0.0003}_{-0.0003}$ & $8.29^{+0.04}_{-0.04}$ & $3130^{+110}_{-110}$ & $0.184^{+0.014}_{-0.013}$ & $0.220^{+0.011}_{-0.009}$ & $0.408^{+0.014}_{-0.011}$ \\[0.8ex] ZTF\,J053708.26$-$245014.6 & He & $16100^{+440}_{-410}$ & $0.397^{+0.009}_{-0.007}$ & $0.0191^{+0.0002}_{-0.0002}$ & $7.48^{+0.02}_{-0.02}$ & $2970^{+100}_{-100}$ & $0.204^{+0.012}_{-0.011}$ & $0.241^{+0.007}_{-0.005}$ & $0.333^{+0.006}_{-0.004}$ \\[0.8ex] ZTF\,J061530.96$+$051041.8 & CO & $15220^{+600}_{-510}$ & $0.560^{+0.011}_{-0.011}$ & $0.0139^{+0.0002}_{-0.0002}$ & $7.90^{+0.02}_{-0.02}$ & $3380^{+110}_{-110}$ & $0.533^{+0.030}_{-0.029}$ & $0.547^{+0.013}_{-0.011}$ & $0.531^{+0.008}_{-0.008}$ \\[0.8ex] ZTF\,J063808.71$+$091027.4 & CO & $22500^{+1200}_{-1000}$ & $0.604^{+0.013}_{-0.011}$ & $0.0136^{+0.0001}_{-0.0002}$ & $7.95^{+0.02}_{-0.02}$ & $3320^{+110}_{-110}$ & $0.410^{+0.024}_{-0.022}$ & $0.432^{+0.012}_{-0.008}$ & $0.295^{+0.005}_{-0.004}$ \\[0.8ex] 
ZTF\,J063954.70$+$191958.0 & CO & $15980^{+520}_{-520}$ & $0.701^{+0.011}_{-0.009}$ & $0.0117^{+0.0001}_{-0.0001}$ & $8.15^{+0.01}_{-0.01}$ & $3200^{+100}_{-100}$ & $0.210^{+0.011}_{-0.011}$ & $0.246^{+0.004}_{-0.002}$ & $0.398^{+0.004}_{-0.002}$ \\[0.8ex] ZTF\,J064242.41$+$131427.6 & CO & $14560^{+540}_{-500}$ & $0.633^{+0.011}_{-0.008}$ & $0.0127^{+0.0001}_{-0.0001}$ & $8.03^{+0.02}_{-0.01}$ & $3110^{+100}_{-100}$ & $0.150^{+0.008}_{-0.008}$ & $0.183^{+0.004}_{-0.001}$ & $0.438^{+0.006}_{-0.002}$ \\[0.8ex] ZTF\,J065103.70$+$145246.2 & CO & $13140^{+560}_{-670}$ & $0.515^{+0.019}_{-0.020}$ & $0.0145^{+0.0003}_{-0.0003}$ & $7.83^{+0.03}_{-0.04}$ & $3170^{+120}_{-110}$ & $0.242^{+0.018}_{-0.019}$ & $0.276^{+0.012}_{-0.013}$ & $0.589^{+0.018}_{-0.019}$ \\[0.8ex] ZTF\,J070458.08$-$020103.3 & CO & $9280^{+230}_{-250}$ & $0.500^{+0.012}_{-0.015}$ & $0.0143^{+0.0003}_{-0.0002}$ & $7.82^{+0.02}_{-0.03}$ & $3300^{+100}_{-100}$ & $0.344^{+0.018}_{-0.020}$ & $0.370^{+0.006}_{-0.010}$ & $0.915^{+0.040}_{-0.043}$ \\[0.8ex] ZTF\,J071759.04$+$113630.2 & CO & $21110^{+920}_{-750}$ & $0.528^{+0.016}_{-0.017}$ & $0.0149^{+0.0003}_{-0.0003}$ & $7.81^{+0.03}_{-0.03}$ & $3150^{+120}_{-110}$ & $0.296^{+0.020}_{-0.022}$ & $0.326^{+0.013}_{-0.015}$ & $0.320^{+0.008}_{-0.009}$ \\[0.8ex] ZTF\,J071843.68$-$085232.1 & CO & $18940^{+870}_{-880}$ & $0.794^{+0.019}_{-0.018}$ & $0.0106^{+0.0002}_{-0.0002}$ & $8.28^{+0.03}_{-0.03}$ & $3120^{+110}_{-110}$ & $0.306^{+0.020}_{-0.019}$ & $0.335^{+0.012}_{-0.011}$ & $0.555^{+0.014}_{-0.012}$ \\[0.8ex] ZTF\,J080441.95$-$021545.7 & CO & $13430^{+560}_{-550}$ & $0.577^{+0.010}_{-0.009}$ & $0.0134^{+0.0001}_{-0.0001}$ & $7.94^{+0.01}_{-0.01}$ & $<1510^{+260}_{-200}$ & $<0.069^{+0.007}_{-0.007}$ & $0.098^{+0.002}_{-0.001}$ & $0.377^{+0.008}_{-0.006}$ \\[0.8ex] ZTF\,J080542.98$-$143036.3 & He & $26500^{+1200}_{-9000}$ & $0.393^{+0.013}_{-0.013}$ & $0.0239^{+0.0007}_{-0.0006}$ & $7.28^{+0.03}_{-0.04}$ & $3250^{+120}_{-110}$ & $0.291^{+0.020}_{-0.023}$ & $0.331^{+0.013}_{-0.017}$ & $0.586^{+0.016}_{-0.021}$ \\[0.8ex] ZTF\,J094826.35$+$253810.6 & CO & $11290^{+480}_{-450}$ & $0.504^{+0.026}_{-0.024}$ & $0.0145^{+0.0004}_{-0.0004}$ & $7.82^{+0.05}_{-0.05}$ & $3120^{+120}_{-120}$ & $0.169^{+0.015}_{-0.014}$ & $0.205^{+0.013}_{-0.012}$ & $0.546^{+0.024}_{-0.024}$ \\[0.8ex] ZTF\,J102254.00$-$080327.3 & CO & $8330^{+260}_{-250}$ & $0.605^{+0.027}_{-0.025}$ & $0.0127^{+0.0003}_{-0.0003}$ & $8.01^{+0.04}_{-0.04}$ & $3170^{+110}_{-110}$ & $0.405^{+0.030}_{-0.029}$ & $0.428^{+0.021}_{-0.020}$ & $0.620^{+0.023}_{-0.021}$ \\[0.8ex] ZTF\,J102653.47$-$101330.3 & He & $19320^{+710}_{-670}$ & $0.376^{+0.012}_{-0.010}$ & $0.0214^{+0.0004}_{-0.0007}$ & $7.35^{+0.04}_{-0.02}$ & $2840^{+110}_{-110}$ & $0.105^{+0.008}_{-0.006}$ & $0.134^{+0.007}_{-0.004}$ & $0.558^{+0.021}_{-0.012}$ \\[0.8ex] ZTF\,J103448.82$+$005201.9 & He & $10060^{+410}_{-370}$ & $0.455^{+0.007}_{-0.007}$ & $0.0159^{+0.0001}_{-0.0001}$ & $7.69^{+0.01}_{-0.01}$ & $<1550^{+250}_{-230}$ & $<0.067^{+0.005}_{-0.006}$ & $0.097^{+0.001}_{-0.001}$ & $0.460^{+0.010}_{-0.008}$ \\[0.8ex] ZTF\,J104906.96$-$175530.7 & He & $13000^{+440}_{-460}$ & $0.426^{+0.010}_{-0.007}$ & $0.0173^{+0.0001}_{-0.0002}$ & $7.59^{+0.02}_{-0.01}$ & $3170^{+100}_{-110}$ & $0.198^{+0.012}_{-0.010}$ & $0.235^{+0.007}_{-0.003}$ & $0.402^{+0.008}_{-0.003}$ \\[0.8ex] ZTF\,J122009.98$+$082155.0 & CO & $10170^{+270}_{-260}$ & $0.580^{+0.017}_{-0.018}$ & $0.0132^{+0.0003}_{-0.0002}$ & $7.96^{+0.03}_{-0.03}$ & $3140^{+110}_{-110}$ & $0.275^{+0.019}_{-0.020}$ & 
$0.306^{+0.012}_{-0.013}$ & $0.157^{+0.004}_{-0.004}$ \\[0.8ex] ZTF\,J125620.57$+$211725.8 & CO & $5073^{+79}_{-79}$ & $0.479^{+0.010}_{-0.009}$ & $0.0141^{+0.0001}_{-0.0001}$ & $7.82^{+0.02}_{-0.01}$ & $2950^{+100}_{-100}$ & $0.101^{+0.005}_{-0.005}$ & $0.125^{+0.001}_{-0.001}$ & $0.152^{+0.001}_{-0.001}$ \\[0.8ex] ZTF\,J130228.34$-$003200.2 & CO & $11790^{+400}_{-330}$ & $0.811^{+0.021}_{-0.016}$ & $0.0102^{+0.0002}_{-0.0002}$ & $8.33^{+0.03}_{-0.02}$ & $3030^{+100}_{-100}$ & $0.179^{+0.012}_{-0.010}$ & $0.216^{+0.008}_{-0.005}$ & $0.502^{+0.013}_{-0.009}$ \\[0.8ex] ZTF\,J134151.70$-$062613.9 & CO & $58300^{+8400}_{-8700}$ & $0.509^{+0.038}_{-0.035}$ & $0.0225^{+0.0009}_{-0.0016}$ & $7.43^{+0.09}_{-0.04}$ & $2800^{+210}_{-220}$ & $0.126^{+0.015}_{-0.009}$ & $0.159^{+0.018}_{-0.007}$ & $0.617^{+0.062}_{-0.021}$ \\[0.8ex] ZTF\,J140036.65$+$081447.4 & CO & $13340^{+650}_{-610}$ & $0.563^{+0.009}_{-0.008}$ & $0.0137^{+0.0001}_{-0.0001}$ & $7.92^{+0.01}_{-0.01}$ & $2970^{+100}_{-100}$ & $0.232^{+0.012}_{-0.012}$ & $0.268^{+0.003}_{-0.001}$ & $0.418^{+0.003}_{-0.001}$ \\[0.8ex] ZTF\,J140423.86$+$065557.7 & CO & $14980^{+470}_{-460}$ & $0.736^{+0.016}_{-0.015}$ & $0.0113^{+0.0002}_{-0.0002}$ & $8.20^{+0.02}_{-0.02}$ & $3100^{+100}_{-100}$ & $0.409^{+0.023}_{-0.023}$ & $0.432^{+0.010}_{-0.010}$ & $0.884^{+0.045}_{-0.031}$ \\[0.8ex] ZTF\,J140537.34$+$103919.0 & He & $29900^{+9000}_{-1100}$ & $0.404^{+0.008}_{-0.008}$ & $0.0279^{+0.0006}_{-0.0006}$ & $7.15^{+0.02}_{-0.02}$ & $3430^{+130}_{-140}$ & $0.085^{+0.005}_{-0.005}$ & $0.112^{+0.003}_{-0.003}$ & $0.234^{+0.004}_{-0.004}$ \\[0.8ex] ZTF\,J140702.57$+$211559.7 & He & $10870^{+350}_{-350}$ & $0.406^{+0.018}_{-0.014}$ & $0.0173^{+0.0004}_{-0.0004}$ & $7.57^{+0.04}_{-0.03}$ & $3160^{+110}_{-110}$ & $0.263^{+0.021}_{-0.016}$ & $0.296^{+0.015}_{-0.009}$ & $0.702^{+0.029}_{-0.016}$ \\[0.8ex] ZTF\,J145819.54$+$131326.7 & CO & $9420^{+260}_{-260}$ & $0.581^{+0.010}_{-0.010}$ & $0.0131^{+0.0001}_{-0.0001}$ & $7.97^{+0.01}_{-0.01}$ & $<1730^{+240}_{-270}$ & $<0.067^{+0.006}_{-0.006}$ & $0.095^{+0.001}_{-0.000}$ & $0.446^{+0.011}_{-0.006}$ \\[0.8ex] ZTF\,J162644.18$-$101854.3 & CO & $36700^{+2700}_{-2700}$ & $0.499^{+0.015}_{-0.012}$ & $0.0180^{+0.0002}_{-0.0003}$ & $7.62^{+0.02}_{-0.01}$ & $3180^{+110}_{-110}$ & $0.212^{+0.013}_{-0.011}$ & $0.259^{+0.008}_{-0.003}$ & $0.425^{+0.008}_{-0.004}$ \\[0.8ex] ZTF\,J163421.00$-$271321.7 & He & $10680^{+790}_{-630}$ & $0.436^{+0.042}_{-0.054}$ & $0.0166^{+0.0013}_{-0.0009}$ & $7.64^{+0.09}_{-0.12}$ & $2400^{+130}_{-120}$ & $0.134^{+0.016}_{-0.020}$ & $0.163^{+0.019}_{-0.022}$ & $0.759^{+0.128}_{-0.099}$ \\[0.8ex] ZTF\,J164441.18$+$243428.2 & He & $13270^{+520}_{-460}$ & $0.382^{+0.020}_{-0.018}$ & $0.0188^{+0.0007}_{-0.0007}$ & $7.47^{+0.05}_{-0.05}$ & $2500^{+110}_{-110}$ & $0.103^{+0.009}_{-0.009}$ & $0.129^{+0.009}_{-0.008}$ & $0.607^{+0.033}_{-0.028}$ \\[0.8ex] ZTF\,J180256.45$-$005458.3 & He & $10770^{+630}_{-500}$ & $0.458^{+0.019}_{-0.021}$ & $0.0160^{+0.0004}_{-0.0003}$ & $7.69^{+0.03}_{-0.04}$ & $3150^{+110}_{-110}$ & $0.150^{+0.010}_{-0.011}$ & $0.182^{+0.008}_{-0.010}$ & $0.319^{+0.010}_{-0.012}$ \\[0.8ex] ZTF\,J182848.77$+$230838.0 & CO & $16620^{+560}_{-650}$ & $0.594^{+0.009}_{-0.008}$ & $0.0134^{+0.0001}_{-0.0001}$ & $7.96^{+0.01}_{-0.01}$ & $<2290^{+110}_{-120}$ & $<0.068^{+0.007}_{-0.006}$ & $0.096^{+0.002}_{-0.000}$ & $0.392^{+0.009}_{-0.005}$ \\[0.8ex] ZTF\,J195456.71$+$101937.5 & CO & $21500^{+1000}_{-1100}$ & $0.509^{+0.015}_{-0.012}$ & $0.0154^{+0.0002}_{-0.0002}$ & $7.77^{+0.03}_{-0.02}$ & 
$3480^{+110}_{-110}$ & $0.449^{+0.028}_{-0.026}$ & $0.470^{+0.016}_{-0.013}$ & $0.523^{+0.012}_{-0.010}$ \\[0.8ex] \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Best fit binary parameters to the ULTRACAM eclipse photometry. The orbital periods are listed here for reference but are not fitted parameters and so do not have corresponding uncertainties. The \textit{Gaia} DR3 parallax measurements are included for comparison.} \tabcolsep=0.13cm \label{tab:binary_results} \begin{tabular}{@{}llllllll@{}} \hline Target & $i(\degree)$ & $\rm{a~(R_{\odot})}$ & $E(B-V)$ & $\rm{\varpi_{UCAM}}$ & $\rm{\varpi_{Gaia}}$ & $\rm{T_{0}~(BMJD(TDB))}$ & $\rm{P~(d)}$ \\ \hline ZTF\,J041016.82$-$083419.5 & $86.6^{+2.1}_{-1.7}$ & $0.616^{+0.009}_{-0.006}$ & $0.031^{+0.017}_{-0.017}$ & $3.863^{+0.091}_{-0.078}$ & $4.07\pm0.11$ & 59646.0489782(16) & 0.0811093 \\ [0.8ex] ZTF\,J051902.06$+$092526.4 & $76.3^{+1.1}_{-0.6}$ & $0.715^{+0.012}_{-0.020}$ & $0.112^{+0.028}_{-0.023}$ & $2.835^{+0.140}_{-0.140}$ & $2.92\pm0.30$ & 59251.0519387(57) & 0.0929131 \\ [0.8ex] ZTF\,J052848.24$+$215629.0 & $87.7^{+1.4}_{-1.0}$ & $1.546^{+0.017}_{-0.016}$ & $0.090^{+0.020}_{-0.021}$ & $5.666^{+0.104}_{-0.111}$ & $5.59\pm0.13$ & 59932.215321(52) & 0.2259952 \\ [0.8ex] ZTF\,J053708.26$-$245014.6 & $88.1^{+0.7}_{-0.6}$ & $1.688^{+0.014}_{-0.010}$ & $0.015^{+0.011}_{-0.010}$ & $4.580^{+0.044}_{-0.047}$ & $4.574\pm0.049$ & 59251.2246115(52) & 0.3277936 \\ [0.8ex] ZTF\,J061530.96$+$051041.8 & $85.0^{+0.7}_{-0.7}$ & $2.146^{+0.015}_{-0.014}$ & $0.019^{+0.019}_{-0.013}$ & $3.163^{+0.060}_{-0.051}$ & $3.166\pm0.081$ & 59280.12536567(83) & 0.3481742 \\ [0.8ex] ZTF\,J063808.71$+$091027.4 & $88.2^{+0.5}_{-0.6}$ & $3.197^{+0.024}_{-0.019}$ & $0.021^{+0.018}_{-0.015}$ & $1.709^{+0.047}_{-0.047}$ & $1.65\pm0.14$ & 59252.1564861(10) & 0.6576453 \\ [0.8ex] ZTF\,J063954.70$+$191958.0 & $88.9^{+0.7}_{-0.7}$ & $1.659^{+0.008}_{-0.005}$ & $0.028^{+0.013}_{-0.015}$ & $5.394^{+0.070}_{-0.075}$ & $5.387\pm0.085$ & 59251.17799186(52) & 0.2593556 \\ [0.8ex] ZTF\,J064242.41$+$131427.6 & $89.1^{+0.7}_{-0.8}$ & $1.195^{+0.006}_{-0.003}$ & $0.022^{+0.016}_{-0.013}$ & $3.583^{+0.075}_{-0.073}$ & $3.77\pm0.20$ & 59252.10345653(59) & 0.1710542 \\ [0.8ex] ZTF\,J065103.70$+$145246.2 & $85.3^{+1.5}_{-1.0}$ & $1.166^{+0.016}_{-0.017}$ & $0.037^{+0.016}_{-0.018}$ & $2.567^{+0.073}_{-0.076}$ & $2.70\pm0.17$ & 59252.2124933(16) & 0.1677075 \\ [0.8ex] ZTF\,J070458.08$-$020103.3 & $74.3^{+0.4}_{-0.2}$ & $1.079^{+0.006}_{-0.010}$ & $0.052^{+0.011}_{-0.013}$ & $3.715^{+0.076}_{-0.075}$ & $3.643\pm0.088$ & 59253.2216462(43) & 0.1413708 \\ [0.8ex] ZTF\,J071759.04$+$113630.2 & $84.9^{+0.4}_{-0.3}$ & $2.326^{+0.027}_{-0.030}$ & $0.018^{+0.017}_{-0.012}$ & $2.812^{+0.065}_{-0.072}$ & $2.74\pm0.13$ & 59251.1312794(93) & 0.4527638 \\ [0.8ex] ZTF\,J071843.68$-$085232.1 & $84.6^{+0.7}_{-0.7}$ & $1.563^{+0.014}_{-0.013}$ & $0.064^{+0.017}_{-0.019}$ & $2.157^{+0.062}_{-0.058}$ & $2.39\pm0.22$ & 59283.1026109(12) & 0.2158113 \\ [0.8ex] ZTF\,J080441.95$-$021545.7 & $85.3^{+0.1}_{-0.1}$ & $0.889^{+0.007}_{-0.004}$ & $0.027^{+0.015}_{-0.014}$ & $5.631^{+0.092}_{-0.089}$ & $5.47\pm0.11$ & 59646.09050723(97) & 0.1209762 \\ [0.8ex] ZTF\,J080542.98$-$143036.3 & $81.0^{+1.0}_{-0.7}$ & $1.260^{+0.015}_{-0.019}$ & $0.010^{+0.012}_{-0.008}$ & $1.102^{+0.034}_{-0.039}$ & $1.39\pm0.16$ & 59646.18599526(74) & 0.1981669 \\ [0.8ex] ZTF\,J094826.35$+$253810.6 & $79.9^{+0.6}_{-0.6}$ & $1.003^{+0.017}_{-0.017}$ & $0.018^{+0.008}_{-0.009}$ & $2.911^{+0.116}_{-0.108}$ & $2.94\pm0.26$ & 
59239.2668295(95) & 0.1418270 \\ [0.8ex] ZTF\,J102254.00$-$080327.3 & $76.9^{+0.6}_{-0.6}$ & $1.592^{+0.024}_{-0.023}$ & $0.019^{+0.013}_{-0.012}$ & $5.750^{+0.158}_{-0.160}$ & $5.58\pm0.21$ & 59280.2576703(43) & 0.2314179 \\ [0.8ex] ZTF\,J102653.47$-$101330.3 & $87.4^{+1.6}_{-1.5}$ & $0.677^{+0.008}_{-0.006}$ & $0.033^{+0.010}_{-0.010}$ & $2.027^{+0.075}_{-0.063}$ & $1.65\pm0.19$ & 59237.2453759(14) & 0.0929868 \\ [0.8ex] ZTF\,J103448.82$+$005201.9 & $87.8^{+0.3}_{-0.2}$ & $0.688^{+0.003}_{-0.003}$ & $0.031^{+0.016}_{-0.015}$ & $3.551^{+0.138}_{-0.138}$ & $3.20\pm0.28$ & 59253.2517553(11) & 0.0915591 \\ [0.8ex] ZTF\,J104906.96$-$175530.7 & $88.6^{+1.1}_{-1.1}$ & $1.407^{+0.012}_{-0.006}$ & $0.029^{+0.006}_{-0.008}$ & $2.579^{+0.070}_{-0.070}$ & $2.47\pm0.17$ & 59238.3654607(11) & 0.2447332 \\ [0.8ex] ZTF\,J122009.98$+$082155.0 & $87.5^{+0.2}_{-0.2}$ & $4.592^{+0.052}_{-0.056}$ & $0.024^{+0.004}_{-0.008}$ & $3.874^{+0.093}_{-0.106}$ & $3.53\pm0.17$ & 59252.2559984(25) & 1.2329254 \\ [0.8ex] ZTF\,J125620.57$+$211725.8 & $89.8^{+0.2}_{-0.2}$ & $2.374^{+0.013}_{-0.012}$ & $0.005^{+0.006}_{-0.004}$ & $22.221^{+0.095}_{-0.094}$ & $22.171\pm0.096$ & 59641.3540758(33) & 0.5560572 \\ [0.8ex] ZTF\,J130228.34$-$003200.2 & $86.0^{+0.5}_{-0.6}$ & $1.268^{+0.012}_{-0.008}$ & $0.016^{+0.013}_{-0.011}$ & $8.554^{+0.071}_{-0.068}$ & $8.555\pm0.073$ & 59252.387889(43) & 0.1661310 \\ [0.8ex] ZTF\,J134151.70$-$062613.9 & $86.8^{+2.4}_{-2.7}$ & $0.764^{+0.018}_{-0.017}$ & $0.030^{+0.009}_{-0.009}$ & $0.894^{+0.089}_{-0.093}$ & $0.97\pm0.12$ & 59237.3073649(28) & 0.0969505 \\ [0.8ex] ZTF\,J140036.65$+$081447.4 & $89.2^{+0.6}_{-0.7}$ & $1.589^{+0.006}_{-0.005}$ & $0.005^{+0.008}_{-0.004}$ & $2.155^{+0.070}_{-0.075}$ & $1.58\pm0.30$ & 59253.2966645(14) & 0.2602766 \\ [0.8ex] ZTF\,J140423.86$+$065557.7 & $84.5^{+1.0}_{-0.9}$ & $1.342^{+0.010}_{-0.009}$ & $0.025^{+0.007}_{-0.009}$ & $2.538^{+0.059}_{-0.057}$ & $2.24\pm0.14$ & 59239.3665054(12) & 0.1683096 \\ [0.8ex] ZTF\,J140537.34$+$103919.0 & $88.5^{+0.4}_{-0.3}$ & $1.389^{+0.009}_{-0.009}$ & $0.016^{+0.010}_{-0.009}$ & $0.752^{+0.031}_{-0.024}$ & $0.78\pm0.26$ & 59251.334651(12) & 0.2714122 \\ [0.8ex] ZTF\,J140702.57$+$211559.7 & $86.5^{+2.4}_{-2.0}$ & $1.008^{+0.016}_{-0.011}$ & $0.051^{+0.010}_{-0.013}$ & $4.077^{+0.072}_{-0.070}$ & $4.079\pm0.091$ & 59643.3349542(63) & 0.1432802 \\ [0.8ex] ZTF\,J145819.54$+$131326.7 & $86.8^{+0.1}_{-0.2}$ & $0.742^{+0.004}_{-0.003}$ & $0.025^{+0.009}_{-0.012}$ & $5.067^{+0.152}_{-0.154}$ & $4.86\pm0.21$ & 59252.3531663(17) & 0.0920516 \\ [0.8ex] ZTF\,J162644.18$-$101854.3 & $88.7^{+0.9}_{-1.1}$ & $1.503^{+0.013}_{-0.010}$ & $0.291^{+0.007}_{-0.013}$ & $1.733^{+0.087}_{-0.085}$ & $1.91\pm0.20$ & 59253.3679108(15) & 0.2530067 \\ [0.8ex] ZTF\,J163421.00$-$271321.7 & $80.6^{+2.1}_{-1.5}$ & $0.637^{+0.020}_{-0.028}$ & $0.182^{+0.029}_{-0.026}$ & $4.127^{+0.238}_{-0.230}$ & $4.24\pm0.26$ & 59253.3310632(36) & 0.0780396 \\ [0.8ex] ZTF\,J164441.18$+$243428.2 & $80.3^{+0.6}_{-0.7}$ & $0.614^{+0.011}_{-0.011}$ & $0.031^{+0.017}_{-0.015}$ & $2.197^{+0.087}_{-0.086}$ & $2.43\pm0.22$ & 59283.3945858(11) & 0.0801054 \\ [0.8ex] ZTF\,J180256.45$-$005458.3 & $84.2^{+0.3}_{-0.3}$ & $1.485^{+0.020}_{-0.023}$ & $0.114^{+0.028}_{-0.024}$ & $4.700^{+0.077}_{-0.101}$ & $4.38\pm0.15$ & 59646.3585478(21) & 0.2690033 \\ [0.8ex] ZTF\,J182848.77$+$230838.0 & $88.7^{+0.1}_{-0.3}$ & $0.852^{+0.005}_{-0.002}$ & $0.088^{+0.014}_{-0.018}$ & $4.955^{+0.079}_{-0.077}$ & $4.914\pm0.097$ & 59695.37741036(84) & 0.1120067 \\ [0.8ex] ZTF\,J195456.71$+$101937.5 & 
$84.5^{+0.7}_{-0.8}$ & $1.901^{+0.020}_{-0.016}$ & $0.078^{+0.022}_{-0.023}$ & $3.495^{+0.051}_{-0.050}$ & $3.449\pm0.057$ & 59697.3389707(12) & 0.3102884 \\ [0.8ex] \hline \end{tabular} \end{table*} \section{Discussion} \subsection{Comparison with previous parameters} \begin{figure*} \includegraphics[width=\textwidth]{figures/jan_comparison.pdf} \caption{Comparison of our parameters from the NTT-ULTRACAM photometry against the initial parameters of van Roestel et al. (in prep) from ZTF photometry.} \label{fig:comparison} \end{figure*} Initial parameter estimates for these systems were made by fitting the ZTF time-series photometry alongside photometric measurements from other surveys, where available, covering a wide wavelength range (van Roestel et al. in prep). Comparing our parameters determined from the three-band eclipse photometry against these initial estimates demonstrates general agreement between the two methods (\autoref{fig:comparison}). The WD temperatures, in particular, show excellent agreement but there are some significant differences in the measured masses for certain systems. This may be due, in part, to the survey SED data used by van Roestel et al. (in prep) being taken at a range of different orbital phases and therefore suffering from increased systematics due to ellipsoidal modulation or reflection effect. As the method we have used in this work has previously been shown to retrieve accurate parameters \citep{Brown2022}, this may imply a slight underestimation in the uncertainties determined by combining SED fitting with ZTF photometry. The parameters determined using the high-speed eclipse photometry are typically more precise than those measured by van Roestel et al. (in prep). This is most apparent for the primary and secondary masses with a median uncertainty in the WD mass from the ULTRACAM photometry of 2.6~per~cent, and 7.2~per~cent for the secondary mass. These values are 6.0~per~cent and 13.7~per~cent respectively from the ZTF photometry for the same systems and so the ULTRACAM measurements are typically a factor of 2 more precise. This is likely due to the high time resolution of the ULTRACAM photometry, enabling the duration of the eclipse as well as the ingress and egress to be measured very precisely. In addition to the initial parameter estimates discussed above, two of the systems fit in this work have been included in previously published analyses -- ZTF~J125620.57$+$211725.8 and ZTF~J164441.18$+$243428.2. Comparisons with these previous works are made below. \subsubsection{ZTF~J125620.57$+$211725.8} ZTF~J125620.57$+$211725.8 was previously fitted by \citet{Rebassa-Mansergas2021}, using the Virtual Observatory SED Analyser (VOSA) to fit the available survey photometry. Out of the 112 systems that they analysed, 13 systems were determined to possess a WD with a mass below $0.2~\rm{M_{\odot}}$. It is not known how such low mass WDs could form in PCEBs with low-mass main sequence companions -- with any mass transfer initiating a common envelope phase in which the envelope would most likely not gain sufficient energy to be ejected, leading to a merger scenario \citep{Rebassa-Mansergas2021}. This system is -- as far as we know -- the only one of these 13 systems that eclipses, enabling a valuable check on the system parameters. Our fit to the eclipse photometry determines the WD mass to be $0.48\pm0.01~\mathrm{M_{\odot}}$, discrepant with the $0.155\pm0.02~\mathrm{M_{\odot}}$ obtained from VOSA by over $14\sigma$. 
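For reference, this figure follows from treating the two mass estimates as independent and adding their quoted uncertainties in quadrature:
\[
\frac{0.48 - 0.155}{\sqrt{0.01^{2} + 0.02^{2}}} \approx \frac{0.325}{0.022} \approx 14.5 .
\]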
We encourage spectroscopic follow-up of this system in order to determine the cause of this large discrepancy and the true WD mass. \subsubsection{ZTF~J164441.18$+$243428.2} ZTF~J164441.18$+$243428.2 was one of the four deeply eclipsing PCEBs found and fitted by \citet{Kosakowski2022}. For this target in particular they did not detect the eclipse minimum and so their parameters from the light curve fit represent limits rather than specific values. As would be expected, our light curve fit to the ULTRACAM photometry is consistent with these parameter limits. As well as fitting the eclipse light curve, \citet{Kosakowski2022} performed a spectroscopic fit to the WD, determining the effective temperature, surface gravity, and mass (determined from the surface gravity using CO-core composition models). From our fit to the ULTRACAM photometry we find an effective temperature of $13270\pm490~\rm{K}$, cooler than the $14900\pm760~\rm{K}$ determined by their spectroscopic fit but still consistent to within $2\sigma$. For the WD mass there is a little more deviation, with our fit finding a WD mass of $0.38\pm0.02~\mathrm{M_{\odot}}$, $2.3\sigma$ below the $0.55\pm0.07~\mathrm{M_{\odot}}$ found from their spectroscopic fit and suggesting a He-core composition rather than a CO core. For the companion, \citet{Kosakowski2022} estimate a mass of $0.084\pm0.004~\mathrm{M_{\odot}}$ by fitting the Pan-STARRS SED with a composite model, placing it close to the hydrogen-burning limit. We find a higher mass of $0.103\pm0.009~\mathrm{M_{\odot}}$ from our light curve fit, placing it in more typically stellar territory. Again though, these two values are consistent to within $2\sigma$. Overall, our fit to the ULTRACAM photometry is fully consistent with their light curve fit and consistent with their spectroscopic and Pan-STARRS SED fits at around the $2\sigma$ level. \subsection{Brown dwarf companions} WDs with brown dwarf companions are rare, with around 0.5 per cent of WDs expected to have substellar partners \citep{Steele2011}. Eclipsing examples are, predictably, even rarer, with only four systems currently confirmed \citep{Beuermann2013, Littlefair2014, Parsons2017b, Casewell2020a, vanRoestel2021}. These eclipsing WD-brown dwarf binaries are valuable as they are one of the few places where both the brown dwarf's radius and mass can be measured precisely and are therefore important benchmarks for brown dwarf models. Additionally, as some of the lowest-mass objects thought to survive the common-envelope phase \citep{Casewell2018b}, brown dwarfs in PCEBs occupy an important area of the parameter space when studying common-envelope evolution, with the study of the common-envelope phase in this low-mass regime having implications for systems with planetary-mass companions \citep{Vanderburg2020}. In our ULTRACAM follow-up we have found four systems so far that our light curve fits suggest have brown dwarf companions. These are ZTF\,J080441.95$-$021545.7, ZTF\,J103448.82$+$005201.9, ZTF\,J145819.54$+$131326.7, and ZTF\,J182848.77$+$230838.0. As our mass-radius relation for M~dwarfs \citep{Brown2022} is horizontal below $0.07~\rm{M_{\odot}}$ -- and therefore uninformative in this regime -- the best-fit secondary masses can only be regarded as upper limits. Additionally, as none of the secondaries for these systems are detected in-eclipse, only an upper limit can be given for their effective temperatures. One of these systems, ZTF\,J182848.77$+$230838.0, has a high secondary temperature for a brown dwarf. 
In order to rule out problems with the photometry, we stack the in-eclipse images (\autoref{fig:ZTFJ1828_image}). This reveals a faint ($G=20.88~\rm{mag}$) source 2.79~arcsec away from the target, which results in a spurious marginal `detection' in eclipse and therefore a higher-than-expected temperature. The true upper limit for the secondary temperature will therefore be lower than that given by our fit. \begin{figure} \includegraphics[width=\columnwidth]{figures/ZTFJ1828_image.pdf} \caption{Stacked images of ZTF\,J1828$+$2308 taken with ULTRACAM in the $i_{s}$ filter before and during the eclipse. The red dashed aperture shows the location of ZTF\,J1828$+$2308 itself while the solid blue aperture shows the fainter background source 2.79~arcsec away (Gaia\,DR3\,4529477702982880512) that is marginally affecting our in-eclipse photometry.} \label{fig:ZTFJ1828_image} \end{figure} In addition to these four systems with sub-stellar companions, we have measured one system with a companion mass just above the hydrogen-burning limit, ZTF\,J140537.34$+$103919.0, hereafter ZTF\,J1405$+$1039. The best-fit parameters for this system suggest that the secondary is significantly hotter than would be expected for its mass (shown as the blue point in \autoref{fig:t2_m2}). Again, we stack the in-eclipse images to rule out problems in the photometry (\autoref{fig:ZTFJ1405_image}), demonstrating that the source is indeed detected in-eclipse. We believe that the most likely explanation for this is that ZTF\,J1405$+$1039 is actually a triple system, with a tertiary companion contributing a significant fraction of the in-eclipse flux. \begin{figure} \includegraphics[width=\columnwidth]{figures/Mdwarf_teff_m2.pdf} \caption{Measured masses and effective temperatures of the M~dwarf components with an inset plot zoomed in around the brown dwarfs (which are shown in red). The solid black line shows the 1\,Gyr track from \citet{Baraffe2015} and the shaded blue area denotes the region where our mass-radius relation is horizontal (i.e. the radius is constant in this mass range). For the brown dwarfs we plot the masses and temperatures as upper limits centred on the $84^{\rm{th}}$ percentile of the fit. The blue point denotes ZTF~J1405$+$1039, which has a best-fit secondary temperature that is much hotter than expected for its mass.} \label{fig:t2_m2} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figures/ZTFJ1405_image.pdf} \caption{Stacked images of ZTF\,J1405$+$1039 taken with ULTRACAM in the $i_{s}$ filter before and during the eclipse. The red dashed aperture shows the location of ZTF\,J1405$+$1039. It is clear that the source is still detected in-eclipse.} \label{fig:ZTFJ1405_image} \end{figure} \subsection{ZZ Ceti WDs} ZZ Cetis are pulsating WDs, possessing hydrogen atmospheres and pulsation periods ranging from tens of seconds to tens of minutes \citep{Fontaine2008, Winget2008, Romero2022a}. The presence of pulsations enables asteroseismological analyses to be performed, providing insight into the internal structure of the WD, which is otherwise concealed by its highly stratified nature. In PCEBs, the possibility of measuring the internal structure of the WD is especially interesting as it can reveal how the WD itself is affected by the common-envelope phase \citep{Hermes2015}. Previously, only one ZZ Ceti WD in a detached eclipsing binary was known \citep{Parsons2020}. 
This system is a double WD binary, however, and as such its evolutionary history is less well defined, with the number of common-envelope events it has passed through being uncertain. ZZ Cetis found in WD-main sequence PCEBs do not have this problem, with their evolutionary past known to comprise a single common-envelope phase. They are therefore potentially very interesting systems to find. Currently there is one known ZZ Ceti WD in a detached, albeit not eclipsing, PCEB \citep{Pyrzas2015}. Although this is an important find, \citet{Hermes2015} noted that there were a large number of free parameters, limiting the precision of the asteroseismological analysis. Eclipsing examples of such systems would reduce these free parameters and enable a more precise analysis. \begin{figure} \includegraphics[width=\columnwidth]{figures/instabilitystrip.pdf} \caption{The ZZ Ceti instability strip (blue region) with known pulsating (dark grey) and non-pulsating (light grey) WDs from \citet{Gianninas2011, Steinfadt2012, Hermes2012, Hermes2013a, Hermes2013b, Hermes2013c, Romero2022a}. Points in red show the measured parameters of the WD components of binaries fit in this work, with the confirmed pulsators, ZTF~J1407+2115 and ZTF~J0528+2156, shown by the yellow and cyan stars respectively.} \label{fig:instability_strip} \end{figure} Comparing our best-fit parameters for the WD components to the ZZ Ceti instability strip (\autoref{fig:instability_strip}), we find that eight of our systems have WDs that lie within $1\sigma$ of the instability strip (shown in \autoref{tab:pulsating}). Closer inspection of their light curves does not reveal any clear photometric variability indicative of pulsations in six of the systems; however, the out-of-eclipse data for many of these systems typically span less than 30 minutes and so are not enough to rule out pulsations either. Furthermore, the WD temperatures are not necessarily precise enough to say with certainty whether a particular WD lies within the instability strip or not. Of these eight systems with WDs that lie in the instability strip, we have found two that show clear variability due to pulsations. These represent the first two ZZ Ceti WDs found in eclipsing WD+dM PCEBs. \begin{table} \centering \begin{tabular}{ccccc} \hline Target & RA & Dec & G & Pulsating \\ \hline ZTF\,J0519$+$0925 & 05:19:02.1 & +09:25:26.38 & 19.0 & Candidate \\ ZTF\,J0528$+$2156 & 05:28:48.2 & +21:56:28.94 & 17.7 & Confirmed \\ ZTF\,J0948$+$2538 & 09:48:26.4 & +25:38:10.68 & 18.7 & Candidate \\ ZTF\,J1034$+$0052 & 10:34:48.8 & +00:52:01.69 & 19.0 & Candidate \\ ZTF\,J1302$-$0032 & 13:02:28.3 & -00:32:00.11 & 16.8 & Candidate \\ ZTF\,J1407$+$2115 & 14:07:02.6 & +21:15:59.75 & 17.4 & Confirmed \\ ZTF\,J1634$-$2713 & 16:34:21.0 & -27:13:21.54 & 18.8 & Candidate \\ ZTF\,J1802$-$0054 & 18:02:56.4 & -00:54:58.47 & 18.0 & Candidate \\ \hline \end{tabular} \caption{Eclipsing PCEBs with either confirmed or candidate ZZ Ceti WDs.} \label{tab:pulsating} \end{table} \subsubsection{ZTF~J1407$+$2115} ZTF~J140702.56$+$211559.7, hereafter ZTF~J1407$+$2115, was first observed with ULTRACAM in February 2021. Unusual out-of-eclipse variation was noticed but the data taken in this run were insufficient to confirm pulsations. We observed ZTF~J1407+2115 again for 1\,h on the $2^{\rm{nd}}$ of March 2022, detecting three clear pulsations and confirming it as the first eclipsing detached PCEB containing a ZZ Ceti WD. 
With this confirmation, we observed ZTF~J1407+2115 in two long observing runs on the $4^{\rm{th}}$ and $26^{\rm{th}}$ of March 2022 using the $u_{s}\,g_{s}\,i_{s}$ and $u_{s}\,g_{s}\,r_{s}$ filters and lasting $\sim2\,\rm{h}$ and $\sim5\,\rm{h}$ respectively (Lomb-Scargle periodograms of these two long runs are shown in \autoref{fig:periodograms}). It is the photometry from the long observing run on the $4^{\rm{th}}$ of March that we use to fit the system parameters. We choose this observation primarily for consistency with the modelling performed on the other systems in this work, but also as the wider wavelength range provided by the $i_{s}$-band strengthens the constraints on the WD temperature. Additionally, chromospheric variability in the H$\alpha$ feature can lead to higher scatter of M~dwarf fluxes in the $r_{s}$-band. In order to fit the eclipse photometry of this system, the pulsations need to be included in the light curve model to prevent them from introducing large systematic errors in the best-fit parameters. We do this using a Gaussian process (GP) implemented through the \textsc{python} package, \textsc{george}\footnote{\url{https://george.readthedocs.io/en/latest/}} \citep{Ambikasaran2015}. The GP is applied to the residuals of the \textsc{pylcurve} model at each MCMC walker position, with the posterior log probability calculated as the sum of the GP marginalised log likelihood, the log likelihood from comparing the model WD SED with the measured eclipse depths, and the log priors (parallax and interstellar reddening). We use the \texttt{ExpSquaredKernel}, defined by an amplitude, temperature, and scale-length, with the temperature scaling the pulsation amplitude between the light curves in different filters according to a blackbody law. These three GP parameters are included as free parameters in our fit. We switch the GP off between the second and third contact points, where the WD is totally eclipsed by its M~dwarf companion, with the contact points being calculated for every walker position. We then use \textsc{emcee} \citep{Foreman-Mackey2013} to sample from the posterior probability distribution and determine the best-fit parameters. This best-fit model is shown in \autoref{fig:GP_LC}. We find the WD to have an effective temperature of $10\,900\pm300~\rm{K}$ and a mass of $0.41\pm0.01~\rm{M_{\odot}}$, suggesting a core composed primarily of helium. This mass and temperature correspond to a surface gravity of $7.57\pm0.04~\rm{dex}$, placing it in a relatively sparsely sampled region in the middle of the instability strip (\autoref{fig:instability_strip}). We subtract our best-fit eclipse light curve model from the $g_{s}$-band photometry of the longer run on the $26^{\rm{th}}$ of March, leaving just the pulsation signal. Running a periodogram on this determines the main pulsation mode to have a frequency of 1.11~mHz (898~s) with an amplitude of around 47~parts per thousand (ppt) (\autoref{fig:periodograms}). We calculate the $3\sigma$ significance threshold to be 8~ppt following the method of \citet{Greiss2014}, shuffling the flux values 10\,000 times and taking the amplitude of the $99.7^{\rm{th}}$ percentile highest peak. \begin{figure} \includegraphics[width=\columnwidth]{figures/periodograms.pdf} \caption{Lomb-Scargle periodograms (shown in parts per thousand relative to the flux of the WD) of the ULTRACAM $g_{s}$ light curves of ZTF~J1407$+$2115 and ZTF~J0528$+$2156 with their respective eclipse light curve models subtracted. 
Horizontal dashed lines show the $3\sigma$ significance levels calculated using the bootstrapping method described by \citet[Section 4.1]{Greiss2014}.} \label{fig:periodograms} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{figures/lightcurves/gp_plotted.pdf} \caption{ULTRACAM $u_{s}\,g_{s}\,i_{s}$ light curves of ZTF~J1407$+$2115 (\textbf{a}) and ZTF~J0528$+$2156 (\textbf{b}). The top row of each plot shows the observed light curve (coloured points) with the combined eclipse plus mean Gaussian process pulsation model (black line). The second row shows the observed light curve with the eclipse model subtracted (coloured points) as well as the same data binned up by a factor of ten (dark grey points) with the mean Gaussian process model (black line). The third row shows the observed light curve with the mean Gaussian process subtracted with the black line showing the eclipse model. The bottom row shows the residuals of the full light curve model. The filled region shows the phase range where the Gaussian process is switched off (between the second and third eclipse contact points).} \label{fig:GP_LC} \end{figure*} \subsubsection{ZTF~J0528$+$2156} ZTF~J052848.24$+$215629.0, hereafter ZTF~J0528$+$2156, was first observed with ULTRACAM in February 2021. Attempts at fitting the eclipse light curve showed some possible structure in the residuals, prompting us to observe it again to search for pulsations. We observed ZTF~J0528$+$2156 again on the $18^{\rm{th}}$ of December 2022 for 1.8~h, detecting pulsations with a period of around 11~minutes and amplitude of around 5~per~cent. We fit the ULTRACAM photometry in the same way as for ZTF~J1407$+$2115 -- using a Gaussian process to model the pulsations. We find the WD to have an effective temperature of $11\,900\pm600~\rm{K}$ and a mass of $0.78\pm0.02~\rm{M_{\odot}}$, corresponding to a surface gravity of $8.27\pm0.044~\rm{dex}$ and placing it comfortably within the instability strip (\autoref{fig:instability_strip}). Computing the periodogram of the residuals of the eclipse light curve model in the same way as for ZTF~J1407$+$2115, we find the main mode to have a frequency of 1.5~mHz (670~s) and amplitude of around 19~ppt with a $3\sigma$ significance threshold of 7~ppt (\autoref{fig:periodograms}). \subsection{Magnetic WDs} \label{sec:magnetic_wd} Around 36 per cent of WDs in cataclysmic variables (CVs) are observed to be strongly magnetic \citep{Pala2020}. This is in stark contrast with their progenitor population -- the detached PCEBs -- of which only a handful possess WDs with strong magnetic fields. \citet{Schreiber2021} propose an evolutionary channel between the magnetic CVs and the detached magnetic population to explain this discrepancy. This relies on a rotation-driven dynamo in which a crystallising WD, spun up due to accretion during the CV phase, can generate the strong magnetic fields that we observe in CVs. Interactions between the newly-formed magnetic field of the WD and the magnetic field of the M~dwarf then act to detach the binary, halting mass transfer and causing the binary to appear as a strongly magnetic detached PCEB for a period of time before angular momentum loss due to magnetic braking and gravitational wave radiation brings the two stars back into a mass-transferring state as a polar or intermediate polar. 
A test of this model was performed by \citet{Parsons2021}, using spectroscopic observations of detached magnetic PCEBs to constrain their evolutionary history, attempting to assess whether or not they are consistent with having undergone a mass-transferring phase in the past. All systems studied were found to be consistent with a previous CV phase but spectroscopic observations alone were not powerful enough to draw strong conclusions. More powerful constraints can be made if such systems are found to be eclipsing, enabling more precise measurements to be made from the eclipse photometry and therefore a robust test of the model. As part of our follow-up program we have discovered 6 new eclipsing PCEBs (\autoref{tab:magnetic}) that we have confirmed from our high-speed photometry as having magnetic WDs -- showing clear evidence of a bright magnetic pole in the eclipse ingress/egress, with one previously known as a magnetic system but not known to be eclipsing. We have additionally found 3 candidate systems that show out-of-eclipse variation that disappears when the WD is eclipsed but for which the ingress/egress of the eclipse do not confirm a bright magnetic pole. These systems have been found by searching for unusual out-of-eclipse variation in their ZTF light curves (\autoref{fig:ztf_ucam_mag}), inconsistent with the ellipsoidal modulation or reflection effect that is common in PCEBs. This unusual out-of-eclipse variability was noted in the pre-intermediate polar, SDSS\,J0303$+$0054, \citep{Parsons2013b} and is due to additional emission in the form of cyclotron radiation from the magnetic poles of the WD. The effect of the cyclotron emission on the eclipse profiles -- introducing steps in the ingress and egress due to the eclipse of the small, bright magnetic pole (\autoref{fig:ztf_ucam_mag}) -- makes the light curves of the magnetic systems more complicated to fit and so the analysis of these systems will be the subject of a future paper. \begin{figure*} \includegraphics[width=\textwidth]{figures/lightcurves/ztf_ucam_combined.pdf} \caption{{\it left}: ZTF g-band (black) and r-band (grey) light curves of the 6 new confirmed, and 3 new candidate eclipsing PCEBs with magnetic WDs. All show out-of-eclipse variation inconsistent with reflection effect or ellipsoidal modulation in at least one filter. Some light curves have been binned for clarity. {\it right}:~Normalised ULTRACAM/HiPERCAM $g_{s}$-band primary eclipse light curves (zoomed in on the ingress and egress) of the 6 new confirmed, and 3 new candidate eclipsing PCEBs with magnetic WDs. 
The solid grey line shows a flux of zero while the red dashed line shows the mean flux of the first 10 points shown.} \label{fig:ztf_ucam_mag} \end{figure*} \begin{table} \centering \begin{tabular}{ccccc} \hline Target & RA & Dec & G & Magnetic \\ \hline ZTF\,J0126$+$1210 & 01:26:07.8 & $+$12:10:49.14 & 18.8 & Candidate \\ ZTF\,J0220$+$6303 & 02:20:04.6 & $+$63:03:59.63 & 17.4 & Candidate \\ ZTF\,J0406$+$0958 & 04:06:27.2 & $+$09:58:26.97 & 18.1 & Candidate \\ ZTF\,J0618$-$0919 & 06:18:09.9 & $-$09:19:04.28 & 16.5 & Confirmed \\ ZTF\,J1206$+$5100 & 12:06:15.7 & $+$51:00:46.77 & 18.6 & Confirmed \\ ZTF\,J1922$+$1038 & 19:22:15.3 & $+$10:38:38.13 & 16.0 & Confirmed \\ ZTF\,J2142$+$4309 & 21:42:32.0 & $+$43:09:28.97 & 19.3 & Confirmed \\ ZTF\,J2220$+$0721 & 22:20:07.5 & $+$07:21:29.74 & 18.3 & Confirmed \\ ZTF\,J2353$+$4153 & 23:53:55.0 & $+$41:53:04.40 & 18.7 & Confirmed \\ \hline \end{tabular} \caption{eclipsing PCEBs with -- either confirmed or candidate -- magnetic WD components. } \label{tab:magnetic} \end{table} \section{Conclusions} Through our dedicated program of high-speed photometric follow-up we have obtained multi-band eclipse light curves for 43 new PCEBs found using ZTF. We have characterized 34 of these systems from the eclipse light curves alone -- finding four that contain sub-stellar companions, doubling the number of eclipsing examples known, and two with pulsating WDs representing the first ZZ Ceti WDs known in eclipsing WD+dM binaries. Of the remaining nine systems, we have found six to contain strongly magnetic WDs from their eclipse photometry with three further candidates. These will be invaluable to the study of magnetic field generation in binary WDs. Our results demonstrate that a photometric approach to the follow-up of eclipsing systems can effectively discern interesting sub-types of PCEBs, including those that would be otherwise missed by spectroscopic follow-up. \section*{Acknowledgements} SGP acknowledges the support of the UK's Science and Technology Facilities Council (STFC) Ernest Rutherford Fellowship. ARM acknowledges support from Grant RYC-2016-20254 funded by MCIN/AEI/10.13039/501100011033 and by ESF Investing in your future and from MINECO under the PID2020-117252GB-I00 grant. VSD, HiPERCAM, and ULTRACAM are supported by the STFC. IP and TRM acknowledge support from the STFC, grant ST/T000406/1 and a Leverhulme Research Fellowship. JM was supported by funding from a Science and Technology Facilities Council (STFC) studentship. Based on observations collected at the European Southern Observatory under ESO programme 0106.D-0824. Based on observations made with the Gran Telescopio Canarias (GTC), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, in the island of La Palma. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. We thank the anonymous referee for their helpful comments. 
\section*{Data Availability} The data underlying this article will be shared upon reasonable request to the corresponding author. \bibliographystyle{mnras}
{ "arxiv_id": "2302.11345", "language": "en", "timestamp": "2023-02-23T02:14:12", "url": "https://arxiv.org/abs/2302.11345", "yymm": "2302" }
\subsection{Agent-based model} In the simplified setting of this model, cells are represented as discrete agents that can proliferate and move on a one-dimensional uniform lattice, which constitutes the spatial domain, and can also degrade the surrounding ECM, which is regarded as being composed of discrete constitutive elements. The novel aspect of this model is the introduction of volume-filling effects, similar to the model described in~\cite{morris2020identifying}, which uses the methods described in~\cite{painter2002volume}, but extended to multiple populations~\cite{simpson2009multi}. Let the number of cells and ECM elements on lattice site $i=1,2,\ldots$ of width $\Delta$ at time $\tilde{t} \in \mathbb{R}^+$ of realisation $j = 1,2,\ldots, J$ of the model be denoted, respectively, by $u^j_i(\tilde{t})$ and $m^j_i(\tilde{t})$. We assume that ECM elements have constant density and are chosen to occupy a volume equal to that of a cell, such that at most $N$ cells or ECM elements can occupy each lattice site. The dynamics of the cells are governed by two mechanisms: proliferation, in which a cell places a daughter cell into the same lattice site it occupies; and motility, whereby cells can move to one of their two adjacent lattice sites. Moreover, ECM elements can be degraded by cells in the same lattice site as them. To incorporate volume-filling effects into the model, we prescribe that each lattice site has a maximum occupancy level $N$~\cite{taylor2016coupling} and assume that: \begin{enumerate}[label=(A\arabic*)] \item if a cell attempts a move to a neighbouring lattice site, then the probability that the move is successful decreases linearly with the occupancy level of the target site, such that the probability of a successful move to a target site with occupancy level $N$ is zero; \label{Ass1} \item if a cell attempts to proliferate, then the probability of success decreases linearly with the occupancy level of the site where the cell is located, such that the probability of a successful proliferation event in a site with occupancy level $N$ is zero. \label{Ass2} \end{enumerate} \paragraph{Probability of cell movement.} A cell attempts a movement in a time step $\tau$ with probability $p_{\rm m} \in[0,1]$, and the attempted movement from lattice site $i$ to either of the neighbouring lattice sites $i\pm1$ occurs with equal probability $1/2$. Using assumption~\ref{Ass1}, we can define the probability of movement to the left, $T_{i-}^{{{{\rm {m}}^j}}}(\tilde{t})$, or right, $T_{i+}^{{{\rm {m}}^j}}(\tilde{t})$, during the time interval $[\tilde{t}, \tilde{t}+\tau)$ of realisation $j$, as \begin{equation}\label{def:Tm} T_{i\pm}^{{{\rm {m}}^j}}(\tilde{t}) = \frac{p_{{\rm m}}}{2}\bigg(1-\frac{ u^j_{i\pm1}(\tilde{t})+ m^j_{i\pm1}(\tilde{t}) }{N}\bigg). \end{equation} \paragraph{Probability of cell proliferation.} A cell in lattice site $i$ attempts to proliferate in time step $\tau$ with probability $p_{\rm p} \in[0,1]$. If proliferation occurs, then the cell places a daughter cell into the same lattice site as itself. Using assumption~\ref{Ass2}, we can define the probability of proliferation, $T_i^{{{\rm {p}}^j}}(\tilde{t})$, during the time interval $[\tilde{t}, \tilde{t}+\tau)$ of realisation $j$, as \begin{equation}\label{def:Tp} T_i^{{{\rm {p}}^j}}(\tilde{t}) = p_{\rm p} \, \bigg(1- \frac{u^j_{i}(\tilde{t})+ m^j_{i}(\tilde{t}) }{N}\bigg). 
\end{equation} \medskip Note that, the initial distributions of cells and ECM elements must be such that at most $N$ cells or ECM elements can occupy each lattice site to ensure the probabilities $T_{i\pm}^{{{\rm {m}}^j}}(\tilde{t}),\, T_i^{{{\rm {p}}^j}}(\tilde{t}) \geq0$ are well-defined. Under the assumption that the initial distributions of cells and ECM elements satisfy $0 \leq u^j_i(0) + m^j_i(0) \leq N \; \text{for all} \; j = 1,2,\ldots, J \; \text{and} \; i = 1,2,\ldots$, the definitions for the probabilities of cell movement and proliferation given by Equations~\eqref{def:Tm} and~\eqref{def:Tp} ensure that \begin{equation}\label{eq:aprestAB} 0 \leq u^j_i(\tilde{t}) + m^j_i(\tilde{t}) \leq N \;\; \text{for all} \;\; j = 1,2,\ldots, J \;\; \text{and} \;\; i = 1,2,\ldots \;\; \text{for any} \;\; \tilde{t} \in \mathbb{R}^+. \end{equation} \paragraph{Probability of ECM degradation.} During the time interval $[\tilde{t}, \tilde{t}+\tau)$ of realisation $j$, an element of ECM in lattice site $i$ is degraded by a cell on the same lattice site with probability $p_{\rm d}\in[0,1]$, such that the degradation per unit element of ECM, $T_i^{{{\rm {d}}^j}}(\tilde{t})$, is \begin{equation} T_i^{{{\rm {d}}^j}}(\tilde{t})=p_{\rm d} u_i^j(\tilde{t}). \nonumber \end{equation} \subsection{Corresponding coarse-grained model} In order to derive a coarse-grained description of the agent-based model, we introduce the average occupancy of lattice site $i$ at time $\tilde{t}$ by cells and ECM elements over $J$ realisations of the model, denoted, respectively, by $$ \langle u_i(\tilde{t}) \rangle = \frac{1}{J} \sum_{j=1}^J u^j_i(\tilde{t}) \;\;\;\; \text{and} \;\;\;\; \langle m_i(\tilde{t}) \rangle= \frac{1}{J} \sum_{j=1}^J m^j_i(\tilde{t}). $$ \paragraph{Coarse-grained model of cell dynamics.} We proceed to derive a coarse-grained model by considering how the average occupancy in lattice site $i$ changes during the time interval $[\tilde{t}, \tilde{t}+\tau)$: \begin{align} \label{cell ABM eq} \langle u_i(\tilde{t}+\tau) \rangle &= \langle u_{i}(\tilde{t}) \rangle \nonumber\\ &\qquad+ \frac{p_{\rm m}}{2}\langle u_{i+1}(\tilde{t}) \rangle \bigg(1- \frac{ \langle u_{i}(\tilde{t})\rangle + \langle m_{i}(\tilde{t})\rangle }{N}\bigg)\nonumber\\&\qquad+\frac{p_{\rm m}}{2}\langle u_{i-1}(\tilde{t}) \rangle \bigg(1- \frac{ \langle u_{i}(\tilde{t})\rangle + \langle m_{i}(\tilde{t})\rangle }{N}\bigg) \nonumber\\&\qquad-\frac{p_{\rm m}}{2}\langle u_{i}(\tilde{t}) \rangle \bigg(1- \frac{\langle u_{i+1}(\tilde{t})\rangle+\langle m_{i+1}(\tilde{t})\rangle }{N}\bigg) \nonumber\\&\qquad-\frac{p_{\rm m}}{2}\langle u_{i}(\tilde{t}) \rangle \bigg(1- \frac{\langle u_{i-1}(\tilde{t})\rangle + \langle m_{i-1}(\tilde{t})\rangle }{N}\bigg) \nonumber\\ &\qquad+ p_{\rm p}\langle u_{i}(\tilde{t}) \rangle \bigg(1- \frac{\langle u_{i}(\tilde{t})\rangle + \langle m_{i}(\tilde{t})\rangle }{N}\bigg). \end{align} Note that, in writing down Equation~\eqref{cell ABM eq} we have used probabilistic approximations of the mean-field type which are frequently used for the coarse-graining of agent-based models and involve assuming independence of lattice sites (see, for example,~\cite{penington2011building}). 
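For illustration, the averages introduced above can be estimated by direct simulation of the agent-based model; a minimal Python sketch of a single time step of one realisation is given below. The random-sequential update order, the treatment of the lattice ends, and the within-step ordering of movement and division attempts are illustrative choices that are not prescribed by assumptions~\ref{Ass1}--\ref{Ass2}, and this sketch is not the implementation used to generate the results reported in this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def abm_step(cells, m, n_sites, N, p_m, p_p, p_d):
    """One time step of the volume-filling ABM (illustrative sketch only).

    cells : 1D integer array of cell lattice positions.
    m     : 1D integer array of ECM-element counts per lattice site.
    """
    occ = np.bincount(cells, minlength=n_sites) + m  # total occupancy per site

    new_cells = []
    for i in cells:
        # Movement attempt: probability p_m, left/right with equal probability;
        # success probability decreases with the target-site occupancy (A1).
        if rng.random() < p_m:
            j = i + rng.choice((-1, 1))
            if 0 <= j < n_sites and rng.random() < 1.0 - occ[j] / N:
                occ[i] -= 1
                occ[j] += 1
                i = j
        # Proliferation attempt: probability p_p; success probability decreases
        # with the occupancy of the cell's own site (A2); the daughter is placed
        # in the same site. (The move/division ordering is an illustrative choice.)
        if rng.random() < p_p and rng.random() < 1.0 - occ[i] / N:
            new_cells.append(i)
            occ[i] += 1
        new_cells.append(i)

    cells = np.array(new_cells)

    # ECM degradation: each ECM element in site i is degraded with probability
    # p_d per co-located cell during the time step (clipped to at most one).
    u = np.bincount(cells, minlength=n_sites)
    m = m - rng.binomial(m, np.clip(p_d * u, 0.0, 1.0))
    return cells, m
\end{verbatim}
A single realisation then consists of repeatedly applying this step and recording the occupancies $u^j_i(\tilde{t})$ and $m^j_i(\tilde{t})$, with the averages $\langle u_i(\tilde{t}) \rangle$ and $\langle m_i(\tilde{t}) \rangle$ obtained over $J$ such realisations.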
Rearranging Equation~\eqref{cell ABM eq} and dividing both sides by $\tau$ yields: \begin{align} \label{cell ABM 2} \frac{\langle u_i(\tilde{t}+\tau) \rangle - \langle u_i(\tilde{t}) \rangle }{\tau}&= \frac{p_{\rm m} \Delta^2}{ 2\tau}\bigg[\frac{\langle u_{i-1}(\tilde{t}) \rangle -2 \langle u_{i}(\tilde{t}) \rangle +\langle u_{i+1}(\tilde{t}) \rangle}{ \Delta^2}\bigg]\nonumber \\ &\qquad+ \frac{p_{\rm m}\Delta^2}{2\tau N}\bigg[\frac{\langle u_{i}(\tilde{t}) \rangle (\langle m_{i-1}(\tilde{t}) \rangle -2\langle m_{i}(\tilde{t}) \rangle+\langle m_{i+1}(\tilde{t}) \rangle )}{\Delta^2}\bigg]\nonumber \\&\qquad-\frac{p_{\rm m}\Delta^2}{2\tau N}\bigg[\frac{\langle m_{i}(\tilde{t}) \rangle (\langle u_{i-1}(\tilde{t}) \rangle -2\langle u_{i}(\tilde{t}) \rangle +\langle u_{i+1}(\tilde{t}) \rangle)}{\Delta^2}\bigg] \nonumber \\&\qquad+\frac{p_{\rm p}}{\tau}\langle u_{i}(\tilde{t}) \rangle \bigg(1- \frac{\langle u_{i}(\tilde{t})\rangle + \langle m_{i}(\tilde{t})\rangle}{N}\bigg). \end{align} We now divide both sides of Equation~\eqref{cell ABM 2} by length scale $\Delta$, perform a Taylor expansion and take limits as $\Delta,\tau\to0$ to obtain a description of the cell density dynamics in terms of the variables $\tilde{u}(\tilde{x},\tilde{t})$ and $\tilde{m}(\tilde{x},\tilde{t})$, which are the continuum counterparts of $\langle u_i(\tilde{t}) \rangle/\Delta$ and $\langle m_i(\tilde{t}) \rangle/(\mu\Delta)$ which represent, respectively, the number density of cells and the density of ECM at position $\tilde{x} \in \mathbb{R}$ and time $\tilde{t}\in(0,\infty)$. The factor $\mu$ represents the number of cells equivalent to a unit mass of ECM and is introduced as a conversion factor between the density of ECM, as defined by mass of ECM per unit volume, and the number density of ECM elements, given by $\mu \tilde{m}(\tilde{x},\tilde{t})$. Under the assumptions \begin{equation} \lim_{\Delta, \tau\to0}\frac{p_{\rm m} \Delta ^2}{2\tau} = \tilde{D}, \qquad \lim_{\tau\to0}\frac{p_{\rm p}}{\tau}=\tilde{r}, \qquad \lim_{\Delta\to0}\frac{N}{\Delta}=\tilde{K}, \label{params} \end{equation} we obtain the following PDE for the cell density $\tilde{u}(\tilde{x},\tilde{t})$: \begin{equation} \frac{\partial \tilde{u}}{\partial \tilde{t}}=\tilde{D}\frac{\partial}{\partial \tilde{x}}\bigg[\bigg(1-\frac{\tilde{u}+\tilde{\mu}\tilde{m}}{\tilde{K}}\bigg)\frac{\partial \tilde{u}}{\partial \tilde{x}}+\tilde{u}\frac{\partial}{\partial \tilde{x}}\bigg(\frac{\tilde{u}+\tilde{\mu}\tilde{m}}{\tilde{K}}\bigg)\bigg]+\tilde{r}\tilde{u}\bigg(1-\frac{\tilde{u}+\tilde{\mu}\tilde{m}}{\tilde{K}}\bigg), \label{dimu} \end{equation} where $\tilde{x}\in\mathbb{R}$ and $\tilde{t}\in(0,\infty)$. Note that the first term on the right-hand side of Equation~\eqref{dimu} describes the movement of cells down gradients in cell density, with movement prevented by the presence of surrounding cells and ECM, as expected by the introduction of volume-filling effects. The second term models the motion of the cells down the ``total density gradient" of cells and ECM, $\tilde{u}+\tilde{\mu}\tilde{m}$. The third term captures cell proliferation, which is also impacted by volume-filling effects. 
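For clarity, we record the Taylor expansions underlying this formal limit. Writing $\langle u_i(\tilde{t}) \rangle = \Delta\,\tilde{u}(\tilde{x}_i,\tilde{t})$ with $\tilde{x}_i = i\Delta$, and assuming $\tilde{u}$ is sufficiently smooth, we have
\begin{equation*}
\langle u_{i\pm1}(\tilde{t}) \rangle = \Delta\bigg[\tilde{u} \pm \Delta\frac{\partial \tilde{u}}{\partial \tilde{x}} + \frac{\Delta^2}{2}\frac{\partial^2 \tilde{u}}{\partial \tilde{x}^2} + \mathcal{O}(\Delta^3)\bigg],
\end{equation*}
so that, for example,
\begin{equation*}
\frac{\langle u_{i-1}(\tilde{t}) \rangle - 2\langle u_{i}(\tilde{t}) \rangle + \langle u_{i+1}(\tilde{t}) \rangle}{\Delta^3} = \frac{\partial^2 \tilde{u}}{\partial \tilde{x}^2} + \mathcal{O}(\Delta^2),
\end{equation*}
with analogous expansions for $\langle m_{i\pm1}(\tilde{t}) \rangle$ and with $\big(\langle u_i(\tilde{t}+\tau)\rangle - \langle u_i(\tilde{t})\rangle\big)\big/(\Delta\tau) \to \partial \tilde{u}/\partial \tilde{t}$ as $\Delta,\tau\to0$. Substituting these expansions into Equation~\eqref{cell ABM 2}, dividing by $\Delta$, and applying the limits in Equation~\eqref{params} then yields Equation~\eqref{dimu}.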
From Equation~\eqref{dimu} it is clear that the parameter $\tilde{D}\geq0$, which is defined via Equation~\eqref{params}, can be regarded as the diffusion coefficient of the cells in the absence of ECM, while the parameters $\tilde{r}\geq0$ and $\tilde{K}>0$, which are also defined via Equation~\eqref{params}, are the intrinsic growth rate of the cell population, and the density corresponding to the maximum occupancy level (i.e. the carrying capacity), respectively. \paragraph{Coarse-grained model of ECM dynamics.} Probabilistic approximations similar to those underlying Equation~\eqref{cell ABM eq} give the following conservation equation for the evolution of ECM elements in lattice site $i$ during the time interval $[\tilde{t}, \tilde{t}+\tau)$: \begin{equation} \label{mABM} \langle m_{i}(\tilde{t}+\tau)\rangle = \langle m_{i}(\tilde{t})\rangle -p_{\rm d} \langle u_{i}(\tilde{t})\rangle\langle m_{i}(\tilde{t})\rangle . \end{equation} Rearranging Equation~\eqref{mABM}, dividing by $\Delta$ and $\tau$ and taking limits as $\Delta,\tau\to0$, under the assumption \begin{equation} \lim_{\Delta, \tau\to0} \frac{p_{\rm d}\Delta}{\tau} = \tilde{\lambda}, \label{params2} \end{equation} we formally obtain the following differential equation for ECM density $\tilde{m}(\tilde{x},\tilde{t})$: \begin{align} \frac{\partial \tilde{m}}{\partial \tilde{t}}=-\tilde{\lambda} \tilde{m} \tilde{u}, \label{dimm} \end{align} where $\tilde{x} \in \mathbb{R}$ and $\tilde{t}\in(0,\infty)$. Here, the parameter $\tilde{\lambda} \geq 0$ defined via Equation~\eqref{params2} is the per cell degradation rate of ECM. We observe that when there is no ECM degradation (i.e. if $\tilde{\lambda}=0$) and ECM is uniformly distributed at $\tilde{t}=0$ (i.e. if $\tilde{m}(\tilde{x},0) \equiv \tilde{m}^0$ where $\tilde{m}^0 \in \mathbb{R}^+$ with $0 \leq \tilde{m}^0 \leq \tilde{K}$), the mathematical model defined via Equations~\eqref{dimu} and~\eqref{dimm} simplifies to the following FKPP model of cell dynamics~\cite{fisher_wave_1937}: \begin{equation} \label{FKPP} \frac{\partial \tilde{u}}{\partial \tilde{t}}=\hat{D}\frac{\partial ^2 \tilde{u}}{\partial \tilde{x}^2}+\hat{r}\tilde{u}\bigg(1-\frac{\tilde{u}}{\hat{K}}\bigg), \end{equation} where $$ \hat{D} = \left(1 - \dfrac{\tilde{\mu} \tilde{m}^0}{\tilde{K}} \right) \tilde{D}, \quad \hat{r} = \left(1 - \dfrac{\tilde{\mu} \tilde{m}^0}{\tilde{K}} \right) \tilde{r}, \quad \hat{K} = \left(1 - \dfrac{\tilde{\mu} \tilde{m}^0}{\tilde{K}} \right) \tilde{K}. $$ \begin{subsection}{Non-dimensional coarse-grained model} The mathematical model defined via Equations~\eqref{dimu} and~\eqref{dimm} can be non-dimensionalised by the introduction of the following non-dimensional variables: \begin{equation*} u=\frac{\tilde{u}}{\tilde{K}},\quad m=\frac{\tilde{\mu}\tilde{m}}{\tilde{K}}, \quad t=\tilde{t}\tilde{r}, \quad x=\sqrt{\frac{\tilde{r}}{\tilde{D}}}\tilde{x}, \end{equation*} and written as: \begin{align}{} \frac{\partial u}{\partial t}&= \frac{\partial}{\partial x}\bigg[ (1- m)\frac{\partial u}{\partial x}+u\frac{\partial m}{\partial x}\bigg]+ u(1-u-m), \label{NDeqn2_u} \\ \frac{\partial m}{\partial t}&=-\lambda m u, \label{NDeqn2_m} \end{align} where $x\in\mathbb{R}$ and $t\in(0,\infty)$. Here, the only remaining parameter is $\lambda={\tilde{\lambda}\tilde{ K}}/{\tilde{r}}\geq0$ which is interpreted as the rescaled ECM degradation rate. 
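As a brief check of this rescaling, substituting the non-dimensional variables into Equation~\eqref{dimm} gives
\begin{equation*}
\frac{\partial m}{\partial t} = \frac{\tilde{\mu}}{\tilde{K}\tilde{r}}\frac{\partial \tilde{m}}{\partial \tilde{t}} = -\frac{\tilde{\mu}\tilde{\lambda}}{\tilde{K}\tilde{r}}\,\tilde{m}\,\tilde{u} = -\frac{\tilde{\lambda}\tilde{K}}{\tilde{r}}\bigg(\frac{\tilde{\mu}\tilde{m}}{\tilde{K}}\bigg)\bigg(\frac{\tilde{u}}{\tilde{K}}\bigg) = -\lambda\, m\, u,
\end{equation*}
so that $\lambda$ can be interpreted as the ratio of the ECM degradation rate at the carrying-capacity cell density, $\tilde{\lambda}\tilde{K}$, to the intrinsic cell proliferation rate, $\tilde{r}$.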
We complement the model defined via Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} with no flux boundary conditions for Equation~\eqref{NDeqn2_u}: \begin{equation}\label{NFBC} (1- m)\frac{\partial u}{\partial x}+u\frac{\partial m}{\partial x} = 0 \bigg|_{x=0}, \end{equation} and $u,\,{\partial u}/{\partial x}\to 0$ as $x\to\infty$. We also have the following initial conditions: \begin{equation}\label{ass:ICPDE} u(x,0)=u_0(x) \geq 0, \quad m(x,0)=m_0(x) \geq 0, \quad 0 \leq u_0(x) + m_0(x) \leq 1 \quad \forall \, x \in \mathbb{R}. \end{equation} We note that by assuming at the single-cell level that both the presence of cells and ECM elements impair the movement and proliferation of the cells, the resulting population-level description for cell density evolution in Equation~\eqref{NDeqn2_u} exhibits a number of differences to similar models without volume-filling effects. For example, the model studied by El Hachem et al.~in \cite{el2021travelling} does not consider volume-filling of cells to impair cell movement, and therefore contains one less flux term, namely that which accounts for movement of cells down the ``total density gradient''. This model can be recovered from Equation~\eqref{NDeqn2_u} by employing different underlying assumptions such that the probability of movement depends on the average available space (where space is only filled by ECM) between the target lattice site and the lattice site the cell occupies at time $\tilde{t}$. \end{subsection} \begin{subsection}{Numerical exploration of possible travelling wave solutions} We are interested in the possible constant profile, constant speed travelling wave solutions displayed by the model defined via Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m}. As such, we first explore the range of possible behaviours numerically. We report on the results of numerical simulations carried out for the model posed on the spatial domain $(0,L)$, with $L>0$ sufficiently large so that the no flux boundary condition~\eqref{NFBC} at $x=L$ does not interact with the travelling wave. The simulations were subject to the following initial conditions: \begin{equation} u(x,0)=\begin{cases} 1, \qquad & \text{if} \qquad x<\alpha, \\ 0 \qquad & \text{if} \qquad x\geq\alpha, \label{hIC_u} \end{cases} \end{equation} \begin{equation} m(x,0)=\begin{cases} 0, &\text{if} \qquad x<\alpha, \\ m_0 & \text{if} \qquad x\geq\alpha, \label{hIC_m} \end{cases} \end{equation} where $0<\alpha\ll L$ represents the width of the initially invaded region at $t=0$ and $m_0\in[0,1)$ corresponds to the uninvaded density of ECM ahead of the cells. We note that, by design, the model~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} does not permit travelling waves when there are initial conditions with compactly supported cell density and $m_0=1$. This is because cells require space ahead of the wave in order to invade; in any regions initially devoid of cells, the ECM cannot be degraded to allow cells to invade. As such, we proceed by considering $m_0\in[0,1).$ Further specifics of the parameter values and the numerical methods used in this paper can be found in Appendix~\ref{appNS}. \begin{figure}[h!] 
\centering \hspace*{-.4cm} \includegraphics[scale=0.6]{Figs/heaviside/alpha1/Hegplots_plasma_alpha1.pdf} \caption{Numerical solutions of Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} subject to the initial conditions~\eqref{hIC_u}-\eqref{hIC_m}, for $m_0=0.2$ in the top row and $m_0=0.8$ in the bottom row, and for rescaled ECM degradation rates $\lambda=5,\,50,\,500.$ Cell densities are shown in purple and ECM densities in orange at times $t=25,\,50,\,75,\,100$ from left to right. Further specifics of the parameter values and the numerical methods used can be found in Appendix~\ref{appNS}.} \label{fig:miscTWprofile} \end{figure} As shown in Figure~\ref{fig:miscTWprofile}, when $m_0 \in [0,1)$ the solutions to Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} subject to the initial conditions~\eqref{hIC_u}-\eqref{hIC_m} converge to travelling waves whereby the cell density, $u$, decreases monotonically from one to zero and the ECM density, $m$, increases monotonically from zero to $m_0$. The numerical results in Figure~\ref{fig:miscTWprofile} also indicate that the speed of the travelling waves changes as the values of the parameters $\lambda$ and $m_0$ are changed. This is illustrated in more detail in Figure~\ref{fig:cvsdmvsm0}, which also shows that (in agreement with the analytical results presented in Section~\ref{TWA}) when $m_0 \in (0,1)$: if $\lambda \to 0^+$ then the speed of the travelling waves converges to $c=2 \, (1-m_0)$; whereas if $\lambda \to \infty$ then the speed of the travelling waves converges to $c=2.$ We also note that when $m_0=0$, the solutions to Equation~\eqref{NDeqn2_m} subject to the initial condition~\eqref{hIC_m} are such that $m(x,t) \equiv 0$ for all $t \geq 0$ and thus the model simplifies to the FKPP model~\eqref{FKPP} with $\hat{D}=\hat{r}=\hat{K}=1$, that is \begin{equation} \label{FKPP1} \frac{\partial u}{\partial t}= \frac{\partial ^2 u}{\partial x^2}+ u \left(1-u\right). \end{equation} Consistent with this, numerical simulations indicate that when $m_0=0$, the cell density $u$ converges to a travelling wave that decreases monotonically from one to zero (results not shown), and travels with speed $c=2$ (i.e. the minimal speed of travelling wave solutions to the FKPP model~\eqref{FKPP1}), see Figure~\ref{fig:cvsdmvsm0}. The numerical results summarised by Figure~\ref{fig:cvsdmvsm0} for $m_0 \in [0,1)$ show similar behaviours to those in~\cite{el2021travelling}, where no volume-filling effects of cells prevent cell movement, whilst a marked difference is observed for the case $m_0=1$, as discussed in Appendix~\ref{m0_1}. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{Figs/heaviside/alpha1/Hcomb_c_plasma1_alpha1.pdf} \caption{The relationship between the numerically estimated speed (solid lines) of travelling wave solutions of Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m}, subject to the initial conditions~\eqref{hIC_u}-\eqref{hIC_m}, and the parameters $\lambda$ and $m_0$. The dashed lines in the plot on the left highlight the value of $2(1-m_0)$. The numerically estimated travelling wave speed is obtained by tracing the point $X(t)$ such that $u(X(t), t)=0.1.$ Further specifics of the parameter values and the numerical methods used can be found in Appendix~\ref{appNS}.} \label{fig:cvsdmvsm0} \end{figure} \end{subsection} \end{section} \begin{section}{Travelling wave analysis}\label{TWA} We seek travelling wave solutions of Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} by adopting the usual travelling wave ansatz $u(x,t)=U(z)$ and $m(x,t)=M(z)$ where $z=x-ct$ with $c>0$.
Since numerical simulations indicate that, for our chosen initial conditions, travelling waves do not emerge when $m_0=1$ (see Appendix~\ref{m0_1}), we proceed with this study by exclusively considering the case where $m_0\in[0,1)$, which gives \begin{align}{} \frac{\mathrm{d}}{\mathrm{d} z}\bigg[(1-M) \frac{\mathrm{d} U}{\mathrm{d} z}+U \frac{\mathrm{d} M}{\mathrm{d} z}\bigg]+c\frac{\mathrm{d} U}{\mathrm{d} z}+U\big(1-U-M\big)&=0, \label{stdTW_u} \\ c\frac{\mathrm{d} M}{\mathrm{d} z}-\lambda M U &=0, \label{stdTW_m} \end{align} for $-\infty<z<\infty$ with boundary conditions \begin{align} U(z)\to1 \quad &\text{as} \quad z\to-\infty, \label{BCUz}\\ U(z)\to0\quad &\text{as}\quad z\to\infty, \label{BCU2z} \\ M(z)\to m_0 \quad &\text{as} \quad z\to\infty. \label{BCMz} \end{align} Equation~\eqref{stdTW_m}, subject to the boundary condition~\eqref{BCMz}, has a semi-explicit solution. That is, if $U(z)$ is known, then we can evaluate $M(z)$ as \begin{equation}M(z)=m_0\, \exp\left\{-\frac{\lambda}{c}\int_{z}^{\infty}U(s)\text{d}s\right\}\label{semiexplicitsoln},\end{equation} which gives \begin{equation} M(z)\to0 \quad \text{as} \quad z\to-\infty, \label{exBCMz} \end{equation} and $M\leq m_0$ for all $z\in\mathbb{R}$. Moreover, writing $\mathrm{d} U/\mathrm{d} z=V,$ we can rewrite Equations~\eqref{stdTW_u}-\eqref{stdTW_m} as a system of three first-order ordinary differential equations \begin{align}{} \frac{\mathrm{d} U}{\mathrm{d} z}&=V, \label{stdTW3_u}\\ \frac{\mathrm{d} V}{\mathrm{d} z}&=\frac{1}{(1-M)}\bigg[-cV-\frac{\lambda}{c}M U V-\frac{\lambda^2}{c^2}MU^3 -U(1-U-M)\bigg], \label{stdTW3_v}\\ \frac{\mathrm{d} M}{\mathrm{d} z}&=\frac{\lambda}{c} M U, \label{stdTW3_m} \end{align} with boundary conditions given by \begin{align} U(z)\to1, \quad V(z)\to 0 \quad &\text{and} \quad M(z)\to0 \quad \text{as} \quad z\to-\infty, \label{BCz3_1}\\ U(z)\to0, \quad V(z)\to 0 \quad &\text{and} \quad M(z)\to m_0 \quad \text{as} \quad z\to\infty. \label{BCz3_2} \end{align} The steady states of the system~\eqref{stdTW3_u}-\eqref{stdTW3_m} with boundary conditions~\eqref{BCz3_1}-\eqref{BCz3_2} are given by $\mathcal{S}_1=(1,0,0)$ and $\mathcal{S}_2=(0,0,m_0)$. Travelling wave analysis based on standard linear stability techniques (i.e. standard travelling wave analysis) \cite{curtin2020speed} seeks trajectories in the phase space that connect $\mathcal{S}_1$ at $z=-\infty$ to $\mathcal{S}_2$ at $z=\infty$~\cite{murray2001mathematical, lam2022introduction}. The eigenvalues of the linearised system at $(U,V,M)=(1,0,0)$ are \begin{equation} \sigma_1=\frac{\lambda}{c}, \quad \sigma_{2,3}=\frac{-c\pm\sqrt{c^2+4}}{2}, \end{equation} which implies that $(1,0,0)$ is a saddle point. The eigenvalues of the linearised system at $(U,V,M)=(0,0,m_0)$ are \begin{equation}\sigma_1=0, \quad \sigma_{2,3}=\frac{-c\pm\sqrt{c^2-4(1-m_0)^2}}{2(1-m_0)}.\end{equation} The requirement that $U$ and $M$ are non-negative demonstrates the existence of a minimum wave speed, $c_{\text{min}}=2(1-m_0),$ and ensures the steady state is a stable node, see Figure~\ref{fig:cODEpp}. It is important to note that $c_{\text{min}}$ is a lower bound on the travelling wave speed, which is only actually attained for this system when the rescaled ECM degradation rate is sufficiently small, that is, $\lambda\to0^{+}$ (see Section~\ref{lam0}). This is clearly shown in Figure~\ref{fig:cvsdmvsm0}, which also demonstrates that decreasing $m_0$ results in an increase in the travelling wave speed. \begin{figure}[h!] 
\centering \includegraphics[scale=0.6]{Figs/ODEswboxwlegend.pdf} \caption{Phase plane plot of the ODE system~\eqref{stdTW3_u}-\eqref{stdTW3_v}, for different travelling wave speeds, $c$, demonstrating the change from a stable spiral to a stable node as the travelling wave speed exceeds $c_{\text{min}}$. Further specifics of the parameter values and the numerical methods used can be found in Appendix~\ref{appNS}.} \label{fig:cODEpp} \end{figure} Travelling wave analysis has also been performed on a PDE model for melanoma invasion into the skin \cite{browning2019bayesian}, where volume-filling effects of cells are not considered to impact cell movement \cite{el2021travelling}, as described earlier. Since travelling wave analysis is always performed on the linearised system, it follows that the additional term describing cell movement prevented by the presence of other cells is lost from Equation~\eqref{stdTW3_v} during linearisation and the minimum travelling wave speed is the same as that derived in \cite{el2021travelling}: $c_{\text{min}}=2(1-m_0)$. Another minimal model for tumour growth was proposed in \cite{colson_travelling-wave_2021}, where the volume-filling effects of cells were not accounted for in describing cell movement or cell proliferation. Both of these models have the same equation for ECM density as Equation~\eqref{NDeqn2_m}, and the models in \cite{el2021travelling, colson_travelling-wave_2021} have the same flux terms in the equation for cell density evolution, but the model in \cite{colson_travelling-wave_2021} has one less reaction term since proliferation is unimpeded by the local ECM density. Since all volume-filling effects are encoded in non-linear terms, changes to the flux terms alone (within this suite of models) have no effect on the predicted minimum travelling wave speed, as they are all identical after linearisation. However, alterations to the net proliferation terms do significantly impact the minimum travelling wave speeds predicted by standard travelling wave analysis. Further information regarding these models and their differences can be found in Appendix~\ref{APPcomp}. As previously described, we are particularly interested in investigating the dependence of travelling wave solutions on the parameters $\lambda$, the rescaled ECM degradation rate, and $m_0$, the density of ECM far ahead of the wave. Having determined that the minimum travelling wave speed decreases linearly as $m_0$ increases, we now aim to explore the relationship between the numerically estimated travelling wave speed and $\lambda$. Since the travelling wave speed depends on $\lambda$, standard perturbation techniques are difficult to apply to the travelling wave equations~\eqref{stdTW_u}-\eqref{stdTW_m}. As a result, we examine Figure~\ref{fig:cvsdmvsm0} for clues as to how to proceed. We immediately see that for sufficiently small $\lambda$ it appears that the numerically estimated travelling wave speed is independent of $\lambda$ and matches the speed predicted by standard travelling wave analysis. It can also be seen from the contour plot in Figure~\ref{fig:cvsdmvsm0} that for large values of $\lambda$, the speed converges for all values of $m_0\in[0,1)$. As such, we now investigate the asymptotic limits corresponding to slow and fast rescaled ECM degradation rates, $\lambda\to0^{+}$ and $\lambda\to\infty$, respectively.
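Before doing so, we note for illustration how the numerically estimated speeds reported in Figure~\ref{fig:cvsdmvsm0} can be obtained in practice. The following minimal Python sketch advances Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} with a simple explicit finite-difference scheme and estimates the front speed by tracking the point $X(t)$ at which $u(X(t),t)=0.1$; the discretisation, domain size, and initial-condition parameters below are illustrative choices and are not those used to produce the results in this paper (see Appendix~\ref{appNS}).
\begin{verbatim}
import numpy as np

def estimate_wave_speed(lam, m0, alpha=1.0, L=400.0, nx=4000, T=100.0, dt=2e-3):
    """Explicit finite-difference sketch with front tracking (illustrative)."""
    dx = L / nx
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < alpha, 1.0, 0.0)      # Heaviside-type initial data
    m = np.where(x < alpha, 0.0, m0)

    front = []                             # samples of (t, X(t)) with u(X(t), t) = 0.1
    for n in range(int(T / dt)):
        du = np.diff(u) / dx               # gradients at interior cell faces
        dm = np.diff(m) / dx
        u_face = 0.5 * (u[1:] + u[:-1])
        m_face = 0.5 * (m[1:] + m[:-1])
        flux = (1.0 - m_face) * du + u_face * dm      # flux term of the cell equation
        flux = np.concatenate(([0.0], flux, [0.0]))   # no-flux boundaries

        u = u + dt * (np.diff(flux) / dx + u * (1.0 - u - m))
        m = m + dt * (-lam * m * u)

        if n % 500 == 0:
            front.append((n * dt, x[np.argmax(u < 0.1)]))

    t_arr, X_arr = np.array(front[len(front) // 2:]).T   # discard the transient
    return np.polyfit(t_arr, X_arr, 1)[0]                # slope = speed estimate

# For large lam the estimate should approach 2, while for sufficiently small lam
# (and long enough simulation times) it should approach 2 * (1 - m0).
\end{verbatim}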
\begin{subsection}{Asymptotic analysis for $\lambda\to0^{+}$} \label{lam0} By considering Equation~\eqref{stdTW_m}, it is clear that as $\lambda\to0^{+}$, $M(z)\to m_0$ for all $z\in(-\infty,\infty)$. Returning to Equation~\eqref{stdTW_u}, substitution of $M\equiv m_0$ gives \begin{equation}\label{rescaledFKPP} (1-m_0)\frac{\mathrm{d}^2U}{\mathrm{d} z^2}+c\frac{\mathrm{d}U}{\mathrm{d}z}+U(1-m_0-U)=0. \end{equation} Equation~\eqref{rescaledFKPP} is equivalent to the FKPP model~\eqref{FKPP} in travelling wave co-ordinates: \begin{equation}\label{rescaledFKPPog} \hat{D}\frac{\mathrm{d}^2\hat{U}}{\mathrm{d} z^2}+\hat{c}\frac{\mathrm{d}\hat{U}}{\mathrm{d}z}+\hat{r}\hat{U}\bigg(1-\frac{\hat{U}}{\hat{K}}\bigg)=0, \end{equation} with $\hat{D}=\hat{r}=\hat{K}=1-m_0,$ which gives $\hat{c}_{\text{min}} =2(1-m_0),$ as predicted earlier. An excellent match between the solution to the FKPP model~\eqref{FKPP} and the PDE~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} for low values of the rescaled ECM degradation rate can be seen in the plot on the left in Figure~\ref{fig:increasingdmvsfkpp}. Similar models, such as those described at the end of Section~\ref{TWA}, which do not take volume-filling effects into account, demonstrate qualitatively similar behaviour. In all of these models, at very low rescaled ECM degradation rates we observe convergence of the solutions to those of the FKPP model with rescaled parameters. For models with the same cell proliferation term as in Equation~\eqref{NDeqn2_u}, the rescaled parameters are the same and the convergence has qualitatively similar behaviour, as displayed in the plot on the left in Figure~\ref{fig:increasingdmvsfkpp}. As a result, in the limit of very small rescaled ECM degradation rates, $\lambda\to0^{+}$, the model~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} can be simplified to that presented in \cite{el2021travelling} which neglects the volume-filling effects of cells upon cell movement. This model can, in turn, be well approximated by the FKPP model~\eqref{FKPP} with rescaled parameters $\hat{D}=\hat{r}=\hat{K}=1-m_0.$ This result is consistent with predictions from standard travelling wave analysis. However, for the model presented in \cite{colson_travelling-wave_2021}, the parameters of the rescaled FKPP model to which the model converges are, instead, $\hat{D}=1-m_0$ and $\hat{r}=\hat{K}=1,$ which entails a higher cell carrying capacity density since proliferation is not impacted by the surrounding ECM. See Appendix~\ref{APPcomp} for a more detailed comparison. As such, the model~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} is poorly approximated using models, such as that in \cite{colson_travelling-wave_2021}, with different underlying assumptions for cell proliferation. These differences highlight the importance of fully laying out all of the model assumptions at the single-cell level before deriving the PDE model, so that the population-level model fully captures behaviours associated with the underlying cell-level assumptions, in all parameter regimes. \end{subsection} \begin{subsection}{Asymptotic analysis for $\lambda\to\infty$} \label{laminf} In the case of very large rates of ECM degradation, by considering the semi-explicit solution for $M$ in terms of $U$ given by Equation~\eqref{semiexplicitsoln}, it is clear that $M(z)\to 0$ as $\lambda\to\infty$ for all $z < \infty.$ Setting $M\equiv 0$ in Equation~\eqref{stdTW_u} demonstrates that, as expected, as $\lambda\to\infty$, the cell density evolves according to the FKPP model~\eqref{FKPP1}.
In travelling wave co-ordinates, when $M\equiv 0$, Equation~\eqref{stdTW_u} reduces to \begin{equation} \label{highdmconv} \frac{\mathrm{d}^2 U}{\mathrm{d} z^2}+c\frac{\mathrm{d} U}{\mathrm{d} z}+U(1-U)=0, \end{equation} which is the FKPP model~\eqref{FKPP1} in travelling wave co-ordinates, for which $c_{\text{min}}=2.$ This result can also be observed numerically in Figures~\ref{fig:cvsdmvsm0} and~\ref{fig:increasingdmvsfkpp}. The same behaviour is observed in similar models without volume-filling effects \cite{el2021travelling, colson_travelling-wave_2021}, demonstrating that the model~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} can easily be approximated with any of these simpler models in the parameter regime $\lambda\to\infty.$ \begin{figure}[h!] \centering \hspace*{-.3cm} \includegraphics[scale=0.525]{Figs/heaviside/alpha1/Hasy_plasma_alpha1.pdf} \caption{Left: plot of the cell density, $u$, obtained through numerical simulations of Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} subject to the initial conditions~\eqref{hIC_u}-\eqref{hIC_m} (solid lines) for small values of $\lambda$, and numerical simulations of the FKPP model~\eqref{FKPP} with rescaled coefficients $\hat{D}=\hat{r}=\hat{K}=1-m_0$ (dashed black line), with $t=100$ and $m_0=0.6$. Right: plot of the cell density, $u$, obtained through numerical simulations of Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} subject to the initial conditions~\eqref{hIC_u}-\eqref{hIC_m} (solid lines) for large values of $\lambda$, and numerical simulations of the FKPP model~\eqref{FKPP1} (dashed black line), with $t=50$ and $m_0=0.4$. Qualitatively, the same behaviour is observed for all $m_0\in[0,1)$. Further specifics of the parameter values and the numerical methods used for the simulations can be found in Appendix~\ref{appNS}.} \label{fig:increasingdmvsfkpp} \end{figure} \end{subsection} \end{section} \begin{section}{Discussion and conclusions} In this paper, a model for cell invasion into the surrounding ECM has been studied primarily by considering its travelling wave solutions. In this model, derived from first principles starting from an agent-based model describing cell-level behaviours, cells evolve under the action of diffusion and proliferation, which are coupled to the degradation of the surrounding ECM. As a result of volume-filling effects, cells require space ahead of the wave front in order to invade the domain. \begin{table}[h!]
\begin{center} \begin{tabular}{ |c||c|c|c|c|c|c| } \hline \rule{0pt}{10pt} \multirow{3}{*}{\textbf{Model}} & \multicolumn{2}{c|}{Volume-filling} & \multirow{3}{*}{Diffusion term} & \multicolumn{2}{c|}{Volume-filling} & \multirow{3}{*}{Reaction term} \\ & \multicolumn{2}{c|}{in movement} & & \multicolumn{2}{c|}{in proliferation} & \\ & by cells & by ECM & & by cells & by ECM &\\ \hline \rule{0pt}{20pt} Colson \cite{colson_travelling-wave_2021} & - & + & $\frac{\partial}{\partial x}\bigg[ (1- m)\frac{\partial u}{\partial x}\bigg]$ & + & - & $u(1-u)$\\ \rule{0pt}{20pt} Browning \cite{el2021travelling, browning2019bayesian} & - & + & $\frac{\partial}{\partial x}\bigg[ (1- m)\frac{\partial u}{\partial x}\bigg]$ & + & + & $u(1-u-m)$ \\ \rule{0pt}{20pt} Equations~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} & + & + & $\frac{\partial}{\partial x}\bigg[ (1- m)\frac{\partial u}{\partial x}+u\frac{\partial m}{\partial x}\bigg]$ & + & + & $u(1-u-m)$ \\ \hline \end{tabular} \end{center} \caption{Description of the volume-filling effects of cells and ECM considered by the models compared in this study.} \label{table:modelcomp} \end{table} Numerical solutions of the PDE model~\eqref{NDeqn2_u}-\eqref{NDeqn2_m} demonstrate a complex relationship between the travelling wave speed, $c$, the density of ECM far ahead of the wave of cells, $m_0$, and the rescaled ECM degradation rate, $\lambda.$ Partial relationships between these parameters in asymptotic regimes of interest have been established, including that $c\to2(1-m_0)$ as $\lambda\to0^{+},$ and that $c\to2^{-}$ as $\lambda\to\infty$. A good agreement with the FKPP model~\eqref{FKPP} has been demonstrated in the case where $\lambda\to\infty,$ and we showed that the impacts of introducing volume-filling effects of cells to reduce cell movement (in comparison to the model in \cite{el2021travelling}) are minimal. As such, the FKPP model~\eqref{FKPP} provides a suitable model simplification to reproduce the qualitative behaviours of the fully dimensional system in the case of a large ECM degradation rate, $\tilde{\lambda}$, compared to the proliferation rate, $\tilde{r}$. Since $\lambda={\tilde{\lambda}{\tilde{K}}}/{\tilde{r}}$, the results equivalently suggest that as $\tilde{K}\to\infty$, the system can be well modelled by the FKPP model~\eqref{FKPP}. This describes a model where volume-filling effects are negligible, and thus the speed of the invasion front is given by $c_{\text{min}}=2.$ For $\lambda\to0^{+}$, which is representative of very large proliferation rates compared to the rescaled ECM degradation rates, or extremely small carrying capacities, the system can be studied by considering the simplification to a rescaled FKPP model~\eqref{rescaledFKPP}. In this case, travelling waves are observed for $m_0\in[0,1)$, but the speed of the invasion front is now given by $c_{\text{min}}=2(1-m_0).$ Converting back to dimensional variables, as with the FKPP model~\eqref{FKPP}, the analytically predicted travelling wave speed increases with the cell proliferation rate; in the intermediate regions of parameter space, however, the relationship between the travelling wave speed and the rescaled ECM degradation rate is not yet well established. It is clear that qualitatively similar results are observed between this new model with volume-filling, and previously studied models outside this framework, as described by Table~\ref{table:modelcomp}, in all cases where $m_0\in[0,1)$.
Therefore, it could be said that the model originally proposed in \cite{browning2019bayesian} provides a good model simplification for any case where $m_0\in[0,1)$. In the case where $m_0=1$, the region which is initially uninvaded by cells is completely filled with ECM, such that proliferation and movement of cells into this region are entirely prevented. This result provides the starkest difference between the model studied in this paper and those previously studied elsewhere \cite{el2021travelling, colson_travelling-wave_2021}. It is observed that in the case of compactly-supported initial cell density, cell invasion cannot occur into the region where $m(x,0)=1,\, u(x,0)=0,$ and thus travelling waves cannot be formed. It is biologically reasonable to assume that an invading cell population might have zero density far ahead of the invading front. However, it is important to note that the model considered here is a very simplistic model for cell invasion into ECM, and if further biological complications, such as the secretion of matrix metalloproteinases (MMPs) by cells to degrade and remodel ECM, were introduced then these phenomenological results would no longer be observed \cite{perumpanani_extracellular_1998}. This is because we could reasonably assume MMPs could still diffuse into regions occupied entirely by ECM, and then degrade it. The overall conclusion of our study is that there exist simpler models for cell invasion into ECM such as \cite{fisher_wave_1937, el2021travelling, colson_travelling-wave_2021}, which are defined by similar guiding principles and can be used to reproduce the qualitative behaviours of the travelling waves observed in the model presented in this work whilst reducing computational complexity and making the resulting PDE model more analytically tractable. An advantage of the similarities between the suite of models is that certain model predictions, such as the relationship between the cell density and ECM density for $c>c_{\text{min}}$, are independent of specific underlying assumptions such as the volume-filling effect of cells impacting cell movement. The disadvantage of this conclusion, however, is that in order to use these models to infer parameters from data, extra steps would be required to validate whether the correct model has been selected. For example, analysis of cell trajectories can help infer the cell-cell interactions underlying the motility mechanism, and distinguish between the suite of models with qualitatively similar behaviours \cite{simpson2009pathlines, bowden2013design, ross2015inference}. Our results reveal that the reaction term significantly impacts the travelling wave speed for small and intermediate values of $\lambda$. This observation could therefore be used to inform model development, by choosing the reaction term according to whether space or nutrients are the limiting factor for cell invasion into ECM, and model selection, by comparing the expected wave speeds to the data. There are a variety of possible extensions to the work presented in this paper. The underlying on-lattice agent-based model of cell movement involves a number of simplifying assumptions, such as that cells can only degrade ECM agents in the same lattice site. By varying these assumptions, there would be the possibility to expand the biological applicability of the study to determine under which regimes the resulting models can also be approximated by simpler seminal models of cell invasion.
Different proliferation terms could be included, as well as terms accounting for ECM evolution in more detail, such as ECM remodelling by cells or elastic deformation \cite{malik2020impact}. Beyond this, another clear extension of this work would be to introduce further spatial dimensions, or different geometries, which are particularly interesting for studying cancer cell invasion, and to investigate the stability of the travelling wave solutions for the different possible models. It would also be of particular interest to arrive at some functional form for the travelling wave speed, $c(\lambda, m_0)$, for all possible parameter values, and to define the critical value $\lambda_c$ such that, for $\lambda<\lambda_c$, the minimum travelling wave speed observed numerically matches that predicted by standard travelling wave analysis, $c=c_{\text{min}}$. If possible, this knowledge could then further aid an investigation, using perturbation methods, into the shape of the wave front for intermediate values of $\lambda$. By characterising this behaviour, this model could be used to describe biological scenarios such as tumour growth, where $\lambda$ would represent the rate at which the tumour cells were able to degrade ECM in the surrounding environment. \end{section} \section*{Acknowledgements} R. M. C. is supported by funding from the Engineering and Physical Sciences Research Council (EPSRC) and Wolfson College, University of Oxford. T. L. gratefully acknowledges support from the Italian Ministry of University and Research (MUR) through the grant ``Dipartimenti di Eccellenza 2018-2022'' (Project no. E11G18000350001), the PRIN 2020 project (No. 2020JLWP23) ``Integrated Mathematical Approaches to Socio–Epidemiological Dynamics'' (CUP: E15F21005420006), and the INdAM group GNFM. The authors are grateful to Kevin Painter and Chloe Colson for their interesting discussions regarding travelling waves in cell invasion models. \printbibliography \begin{appendices}
{ "arxiv_id": "2302.11283", "language": "en", "timestamp": "2023-02-23T02:12:45", "url": "https://arxiv.org/abs/2302.11283", "yymm": "2302" }
\section{Introduction} % % % % % \IEEEPARstart{T}{he} autonomy and intelligence of the inland waterways surveillance system play a significant role in the development of inland waterborne transportation, which can effectively reduce the labor cost of the supervisory departments and ensure the safety of vessel navigation. To accomplish this objective, the vessel traffic service (VTS) system is capable of providing effective situational awareness by using the automatic identification system (AIS), radar, closed circuit television (CCTV) \cite{bloisi2016enhancing}, etc. To further increase the capability of situational awareness, many intelligent technologies for single sensor have been presented, e.g., AIS-based vessel trajectory prediction \cite{zhang2022vessel}, radar-based object detection \cite{kim2021bernoulli}, and video-based object detection \cite{feng2022rapid}. It is well known that each type of sensor has its own advantages and disadvantages under the same scenarios. As a consequence, numerous efforts have been devoted to simultaneously exploiting the multi-source data \cite{bloisi2016enhancing, chen2008tracking, man2016information, lu2021fusion, huang2021identity, liu2022intelligent} to promote the traffic situational awareness for maritime transportation systems. However, these fusion methods mainly just take into consideration the positional relationship of the same target at a certain moment. It thus becomes difficult to guarantee high-quality data fusion, especially for the existence of time delay, missing data, random outliers, etc. The same moving vessels essentially share similar navigation behaviors, which could be represented using the time-series data, e.g., spatio-temporal trajectories. To further improve the stability and accuracy of data fusion, we will first extract the vessel trajectories from the raw sensing data, and then propose a trajectory matching-based fusion method (termed DeepSORVF) in this work. % \subsection{Motivation and Contribution} % Owing to the remote, intuitive, and real-time advantages of CCTV, terrestrial video surveillance systems have been widely used in inland waterborne transportation to improve the ability of traffic situational awareness and vessel abnormal behavior monitoring\cite{el2013target}. In particular, massive monitoring cameras can provide indispensable visual information for guaranteeing maritime safety. To fully use these visual features, many efforts have focused on the research of vessel detection and tracking to meet the requirement of intelligent supervision \cite{zhou2021deep, liu2021enhanced, chen2021visual}. However, these methods could only detect the moving vessel from the video images. It is intractable to achieve the important identity information (e.g., vessel name and size, etc.) and dynamic information (e.g., vessel speed and course, etc.). % % Other maritime awareness equipment, such as AIS and radar, could provide much richer attribute information about the vessel. In particular, the AIS data contains rich vessel identity and spatio-temporal information, which makes it play an essential role in analyzing vessel abnormal behaviors. The AIS data mainly contains the static and dynamic information, e.g., maritime mobile service identification (MMSI), vessel size, speed, course, position, etc. However, the AIS data essentially suffers from the inconsistency of time intervals, which limits its application in maritime intelligent transportation \cite{gao2021novel}. 
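To make the structure of such AIS data concrete, the following illustrative Python snippet (with example field names and values that are not taken from any real dataset) groups a stream of decoded dynamic AIS reports into per-vessel trajectories keyed by MMSI; note the irregular reporting intervals, which are one of the issues that the fusion method proposed in this work has to handle.
\begin{verbatim}
from collections import defaultdict

# Illustrative decoded dynamic AIS reports (field names are examples only):
# each report carries an identity (MMSI) plus a time-stamped kinematic state.
reports = [
    {"mmsi": 413000001, "t": 0.0,  "lon": 114.300, "lat": 30.600, "sog": 8.2, "cog": 95.0},
    {"mmsi": 413000001, "t": 9.0,  "lon": 114.302, "lat": 30.600, "sog": 8.1, "cog": 96.0},
    {"mmsi": 413000002, "t": 2.0,  "lon": 114.310, "lat": 30.598, "sog": 5.4, "cog": 270.0},
    {"mmsi": 413000001, "t": 31.0, "lon": 114.308, "lat": 30.599, "sog": 8.3, "cog": 97.0},
]

# Group reports into per-vessel trajectories, keyed by MMSI.
trajectories = defaultdict(list)
for r in sorted(reports, key=lambda r: r["t"]):
    trajectories[r["mmsi"]].append(r)

# The reporting interval is irregular (here 9 s, then 22 s for the first vessel),
# so positions at intermediate times must be predicted rather than read directly.
for mmsi, traj in trajectories.items():
    gaps = [b["t"] - a["t"] for a, b in zip(traj, traj[1:])]
    print(mmsi, "report gaps (s):", gaps)
\end{verbatim}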
The radar has been widely used in near port supervision since it can provide the accurate distance and bearing of vessels. Unfortunately, some radar equipment is forbidden to be installed in populated regions to avoid the high-frequency electromagnetic radiation harming the health of people \cite{bloisi2016enhancing}. In the literature \cite{xiao2019traffic}, many methods have been proposed to robustly and accurately fuse the AIS and radar data. As long we simultaneously collect the AIS, radar, and video data, we can directly adopt the existing advanced methods to fuse the AIS and radar data, and then implement fusion with the video data. Intuitively, the fusion of AIS and video data seems more difficult because of the different coordinate systems, asynchronous data collection, different data structures, etc. Therefore, we tend to only fuse the AIS and video data to enhance the traffic situational awareness for intelligent surveillance in inland waterways. % % In this work, we propose a deep learning-based simple online and real-time vessel data fusion method (termed DeepSORVF) for promoting inland waterways surveillance. The main contributions of this work are as follows: % \begin{itemize} % \item We build two simple yet efficient methods to respectively extract the AIS- and video-based vessel trajectories for data fusion. To avoid the interference of vessel occlusion on video-based trajectory extraction, we propose a prior knowledge-driven anti-occlusion tracking method. % \item We design a novel asynchronous trajectory matching method to achieve the robust fusion of AIS and video data. The proposed method adopts an enhanced fast dynamic time warping algorithm for trajectory similarity measure and employs an AIS/video association method to decrease the computational cost and increase the stability. % \item We construct a public benchmark dataset (termed FVessel) for vessel detection, tracking, and data fusion, which consists of many videos and the corresponding AIS data collected in various weather conditions and locations. % \end{itemize} % % To our best knowledge, our DeepSORVF is the first trajectory matching-based computational method to fuse the AIS and video data for inland waterways surveillance. Meanwhile, we have verified the effectiveness and robustness of the proposed method on our newly-developed FVessel dataset. % \subsection{Organization} % The rest of this paper is organized as follows. Section \ref{sec:rw} briefly reviews the recent research on object detection, tracking, AIS and video data fusion. In Section \ref{sec:shipsort}, the proposed data fusion framework is described in detail. Section \ref{sec:exp} implements extensive comprehensive experiments to demonstrate the effectiveness of our method. Finally, Section \ref{con} summarizes the main contributions of this work. % \section{Related Works} \label{sec:rw} % This section mainly introduces the recent studies related to our work, i.e., multi-object detection and tracking, AIS and video data fusion. % \subsection{Multi-Object Detection and Tracking} % Multi-object detection and tracking methods are generally divided into two categories, namely traditional and deep learning methods. Due to the particularity of the research issue, this section mainly reviews the related works on vessel detection and tracking. % \subsubsection{Traditional Methods} % Background subtraction (BS) is a classic object detection method. 
Although many BS-based methods are proposed to detect conventional objects, these methods still achieve poor precision in vessel detection \cite{prasad2018object}. To improve the precision of the BS-based method, Hu \emph{et al.} \cite{hu2011robust} designed a robust foreground detection and background update method to effectively reduce the influence of waves. Bloisi \emph{et al.} \cite{bloisi2014background} proposed an independent multi-modal background subtraction (IMBS) algorithm. In particular, this algorithm models highly dynamic backgrounds (e.g., water) by creating a ``discretization'' of an unknown distribution. Furthermore, other types of vessel detection methods are proposed. For instance, Zhu \emph{et al.} \cite{zhu2010novel} designed a hierarchical complete-based vessel detection approach for spaceborne optical images. Zhang \emph{et al.} \cite{zhang2017ship} proposed a vessel detection algorithm using the discrete cosine transform (DCT)-based Gaussian mixture model (GMM) for efficient visual maritime surveillance on non-stationary surface platforms. Chen \emph{et al.} \cite{chen2019robust} achieved the vessel object tracking using multi-view learning and sparse representation. Although many techniques have been introduced to improve the performance of detectors, hand-designed features still produce poor robustness in vessel detection. Meanwhile, the high computational complexity of some methods will hinder their practical applications. % % \begin{figure*}[ht] \centering \includegraphics[width=0.98\linewidth]{images/Figure01_Flowchart} \caption{The architecture of the proposed deep learning-based simple online and real-time vessel data fusion method (termed DeepSORVF). The DeepSORVF consists of AIS-based vessel trajectory extraction, video-based vessel trajectory extraction, and asynchronous vessel trajectory matching.} \label{fig:flowchart} \end{figure*} % \subsubsection{Deep Learning Methods} % With the emergence and rapid development of graphics processing units (GPU), deep learning technology is widely used in the field of image processing. Many deep learning methods are proposed for object detection, e.g., region-based convolutional neural network (R-CNN) \cite{girshick2015region, ren2016faster}, single shot multibox detector (SSD) \cite{liu2016ssd, li2017fssd}, and you only look once network (YOLO) \cite{redmon2016you, redmon2017yolo9000, ge2021yolox, redmon2018yolov3}. Based on these object detection networks, many vessel detection methods are further researched. Shao \emph{et al.} \cite{shao2019saliency} proposed a YOLOv2-based saliency-aware network for vessel detection, which combined the salient features and coastline features to predict more accurate vessel positions. Liu \emph{et al.} \cite{liu2021enhanced} built an enhanced YOLOv3 network to promote vessel detection in video-based maritime surveillance. To reduce the impact of poor weather environments on vessel detection, this method constructed a data enhancement strategy to improve vessel detection precision in low-light, hazy, and rainy images. Furthermore, Chen \emph{et al.} \cite{chen2020deep} proposed a small vessel detection method based on an improved generative adversarial network (GAN) and a convolutional neural network (CNN). Feng \emph{et al.} \cite{feng2022rapid} proposed a ship detection method based on the multi-size gradient features and multi-branch support vector machine (SVM). 
Yang \emph{et al.} \cite{yang2021enhanced} applied the visual object tracking and semi-supervised object segmentation to the vessel tracking task, and proposed an enhanced SiamMask network. % \subsection{AIS and Video Data Fusion} % In current literature, many AIS and video data fusion methods have been proposed. For instance, Chen \emph{et al.} \cite{chen2008tracking} proposed a single-vessel tracking method by combining AIS and video data. In particular, this method could make the camera focus on the vessel according to the position information provided by the AIS, and use the Kalman filter to ensure the smoothness of the tracking. However, the operator fails to accurately obtain the identities and attributes of each vessel when the field of view exists multiple vessels. Therefore, more researchers began to focus on the information fusion of multiple vessels. For instance, Man \emph{et al.} \cite{man2016information} fused the AIS and video data with the Kalman filter to obtain the optimal vessel trajectory. Bloisi \emph{et al.} \cite{bloisi2016enhancing} proposed an automated maritime surveillance system that replaces radar sensors with vision sensors, which can be deployed in densely populated regions. Lu \emph{et al.} \cite{lu2021fusion} proposed a vision and AIS fusion method, which estimated the distance and azimuth of the detected visual vessel from the camera and fused it with the position information in the AIS data. Huang \emph{et al.} \cite{huang2021identity} designed a novel multi-vessel tracking technology based on the improved single shot multi-box detector (SSD) \cite{liu2016ssd} and DeepSORT \cite{wojke2017simple} algorithm, and used a multi-modal data fusion algorithm to display the AIS information of visual targets. Recently, Liu \emph{et al.} \cite{liu2022intelligent} constructed an intelligent edge-enabled shipboard navigation system based on augmented reality, deep object detection, and multi-source data fusion technologies. This system can achieve stable vessel detection under various complex weather conditions and fuse the detected vessel targets with synchronized AIS information. % \section{DeepSORVF: Deep Learning-based Simple Online and Real-time Vessel Data Fusion} \label{sec:shipsort} % In this section, the details of our method will be introduced. Fig. \ref{fig:flowchart} displays the flowchart of our data fusion method, including AIS-based vessel trajectory extraction, video-based vessel trajectory extraction, and asynchronous vessel trajectory matching. For the AIS data, we perform data cleaning and delayed data prediction to obtain high-quality AIS data. To guarantee that the AIS and video data are in the same coordinate system, we use the pinhole model to project the AIS data to the pixel coordinate system. For the video data, we first use the YOLOX network to detect vessel targets. To avoid the impact of vessel occlusion on the video-based trajectory, a prior knowledge-driven anti-occlusion tracking method is then used for video-based vessel trajectory extraction. During trajectory matching, we adopt the enhanced fast dynamic time warping algorithm (E-FastDTW) to calculate the similarity between trajectories and combine the Hungarian algorithm to obtain the matching results. It is worth mentioning that the matching result will be input into the video-based vessel trajectory extraction task at the next moment as prior knowledge. Based on our matching results, the AIS information (including, MMSI, longitude, latitude, speed, course, heading, etc.) 
and the visual vessel can be easily fused to facilitate inland waterway surveillance.
%
%
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{images/Figure02_AIS} \caption{The flowchart of the AIS data processing, which consists of data cleaning, data prediction, and data re-cleaning.} \label{fig:AIS} \end{figure}
%
\subsection{AIS-Based Vessel Trajectory Extraction} \label{se:AIS}
%
The AIS is widely used in maritime services since it provides integrated and rich vessel information. However, due to the limitation of the AIS working principle, AIS data cannot be acquired in real time. Meanwhile, abnormal and redundant AIS information can affect the accuracy and robustness of AIS-based trajectory extraction. Furthermore, matching the two data sources is difficult since AIS- and video-based trajectories are, respectively, in the WGS-84 and pixel coordinate systems. Therefore, we extract the AIS-based trajectory projected into the pixel coordinate system for data fusion. To achieve this goal, we first process the AIS data to generate high-quality AIS data. Subsequently, the AIS-based vessel trajectory is obtained by the pinhole model.
%
\subsubsection{AIS Data Processing}
%
Fig. \ref{fig:AIS} displays our framework for processing AIS data. The historically processed AIS data and the AIS data received at the current moment are combined as the input. The input data is successively processed by data cleaning, data prediction, and data re-cleaning to obtain high-quality AIS data as output. The data cleaning process deletes the AIS data outside the supervision region as well as abnormal data, i.e., records with missing or abnormal latitude, longitude, heading, speed, or MMSI. The data prediction module estimates the current position of vessels whose AIS information has not yet been received. Let $v_{t-1}$ be the speed of the vessel at time $T_{t-1}$; the moving distance $D_{\Delta t}$ over the time interval $\Delta t = T_{t}-T_{t-1}$ can be expressed as $D_{\Delta t} = v_{t-1}\cdot\Delta t$. According to the longitude $\lambda_{t-1}$, latitude $\phi_{t-1}$, and course $\theta_{t-1}$ at time $T_{t-1}$, and the moving distance $D_{\Delta t}$ over the time interval, the longitude $\lambda_{t}$ and latitude $\phi_{t}$ at time $T_{t}$ can be generated by forward geodetic computations\footnote{We exploit the pyproj.Geod.fwd function to implement the forward geodetic computations.}.
%
%
\begin{figure*}[ht] \centering \includegraphics[width=0.98\linewidth]{images/Figure03_Video} \caption{The flowchart of the anti-occlusion tracking method for video-based vessel trajectory extraction. Note that $\mathcal{G}$ is the wide residual network-based appearance feature extractor. The extraction of AIS-based vessel trajectories $\mathcal{T}_{\mathrm{ais}}$ has been introduced in Section \ref{se:AIS}. The generation of Boxes, OAR, $\mathcal{T}^{\mathrm{last}}_{\mathrm{vis}}$, and $\mathcal{F}_{\mathrm{id}}$ will be mentioned in Section \ref{se:video}. The generation of AIS/video association results $\mathcal{B}^{\mathrm{last}}$ will be described in Section \ref{se:fusion}.} \label{fig:video} \end{figure*}
%
\subsubsection{Vessel Positioning via Coordinate Transformation}
%
To fuse the high-quality AIS information and the visual objects, it is necessary to unify the different source data into the same coordinate system. In this work, we project the AIS information from the world coordinate system (WCS) to the pixel coordinate system (PCS).
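%
As an aside, the forward geodetic computation used in the data prediction step above admits a very small Python illustration based on the pyproj.Geod.fwd function mentioned in the footnote. This is only a sketch: the helper name and the assumption that the AIS speed has already been converted to metres per second are ours and are not taken from the released implementation.
\begin{verbatim}
from pyproj import Geod

_GEOD = Geod(ellps="WGS84")  # forward geodetic computations on the WGS-84 ellipsoid

def predict_position(lon_prev, lat_prev, course_deg, speed_mps, dt_s):
    """Dead-reckon the vessel position at time T_t from the AIS record at T_{t-1}.

    lon_prev, lat_prev : longitude/latitude at T_{t-1} (degrees)
    course_deg         : course over ground at T_{t-1} (degrees, clockwise from north)
    speed_mps          : speed v_{t-1}, assumed already converted to metres per second
    dt_s               : time interval Delta t = T_t - T_{t-1} in seconds
    """
    distance_m = speed_mps * dt_s  # D_{Delta t} = v_{t-1} * Delta t
    lon_t, lat_t, _back_azimuth = _GEOD.fwd(lon_prev, lat_prev, course_deg, distance_m)
    return lon_t, lat_t
\end{verbatim}
%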
Before the coordinate transformation, we first perform a Mercator projection on the original position of the AIS information. Let $(U, V, W)$ be the real vessel position in the 3D WCS, its 2D projection coordinate $(x, y)$ in PCS can be obtained by % \begin{equation}\label{eq:noisymodeadditive} \begin{bmatrix} x\\ y\\ 1 \end{bmatrix} = \frac{1}{Z} \mathcal{K}_{\mathrm{in}}\mathcal{K}_{\mathrm{ex}}\begin{bmatrix} U\\ V\\ W\\ 1 \end{bmatrix}, \end{equation} % with $Z$ being the scale factor. Here, $\mathcal{K}_{\mathrm{in}}$ and $\mathcal{K}_{\mathrm{ex}}$ are the internal and external parameter matrices of the camera, respectively. In this work, since the camera is fixed, we set the extrinsic parameter matrix $\mathcal{K}_{\mathrm{ex}}$ as an identity matrix. In particular, we directly use the pinhole model to estimate $\mathcal{K}_{\mathrm{in}}$. Please refer to Ref. \cite{palmieri2013harbour} for more details on the internal parameter estimation. Finally, we sequentially save the AIS data with the same MMSI into the same list in time series to build a set of all AIS-based vessel trajectories $\mathcal{T}_{\mathrm{ais}} = \{X_{a_1},..., X_{a_i},..., X_{a_I}\}$ with $X_{a_i}$ and $I$ being the $i$-th AIS trajectory and the number of AIS-based vessel trajectories. % \subsection{Video-Based Vessel Trajectory Extraction} \label{se:video} % Although many methods have been proposed to achieve vessel detection and tracking \cite{feng2022rapid, prasad2019object, bovcon2021mods}, it is still intractable to extract high-quality video-based vessel trajectories for data fusion. In the actual application of video-based maritime surveillance, the inevitable occlusion between vessels occurs in the cross encounter, confrontation, and chasing situations. Generally, it becomes difficult to accurately and robustly detect these vessels under the occlusion condition. Meanwhile, the corresponding appearance will be seriously affected by other vessels. To improve the quality of extracted trajectories, we propose a prior knowledge-driven anti-occlusion tracking method, as shown in Fig. \ref{fig:video}. % % Specifically, we first adopt the YOLOX network to detect the visual vessel object and get a set of bounding boxes, i.e., % \begin{equation}\label{eq:Boxes} \mathrm{Boxes} = \{box_1,...,box_l,...,box_L\}, \end{equation} % where $box_l$ is the location of the $l$-th bounding box, $L$ denotes the number of bounding boxes. % % \begin{algorithm}[t] \footnotesize \caption{Anti-Occlusion Vessel Tracking} \label{al:anti-occlusion} \LinesNumbered \KwIn {OAR: A set of all occlusion areas; Boxes: A set of all bounding boxes detected by YOLOX network; $\mathcal{T}_{\mathrm{ais}}$: A set of all AIS-based vessel trajectories at the current moment; $\mathcal{T}^{\mathrm{last}}_{\mathrm{vis}}$: A set of all video-based vessel trajectories at the previous moment; $\mathcal{B}^{\mathrm{last}}$: A set of all AIS/video association results at the previous moment; $\mathcal{F}_{\mathrm{id}}$: A set of all occluded vessel appearance features before the occlusion;} \KwOut {$\mathcal{T}_{\mathrm{vis}}$: A set of all video-based vessel trajectories at the current moment;} \textbf{Initialization:} Boxes$^{\mathrm{pre}}$: A empty set to save all predict bounding boxes of the occluded vessels; $\mathcal{G}$: A wide residual network-based appearance feature extractor; $\mathcal{I}$: A empty set to save all DeepSORT input data\; \tcp{Step 1. 
Bounding box removal in the occlusion areas.} \For{$box_l$ in $\mathrm{Boxes}$}{ \For{$\mathrm{AR}$ in $\mathrm{OAR}$}{ \uIf{$box_l$ locates in $\mathrm{AR}$}{ Remove the $box_l$ from $\mathrm{Boxes}$\; \textbf{break}\;} }} \tcp{Step 2. Occluded bounding box prediction.} \For{$Y^{'}_{v_j}$ in $\mathcal{T}^{\mathrm{last}}_{\mathrm{vis}}$}{ \For{$\mathrm{AR}$ in $\mathrm{OAR}$}{ \uIf{The center point of the bounding box at the previous moment in $Y^{'}_{v_j}$ locates in $\mathrm{AR}$}{Search the matched $[:,v_{j}]$ from $\mathcal{B}^{\mathrm{last}}$\; \uIf{exist $[a_{i},v_{j}]$}{ Predict the bounding box $box^{\mathrm{pre}}_{v_{j}}$ by AIS-based vessel trajectory $X_{a_i}$ in $\mathcal{T}_{\mathrm{ais}}$\;} \Else{Predict the bounding box $box^{\mathrm{pre}}_{v_{j}}$ by video-based vessel trajectory$Y^{'}_{v_j}$\; } Add the $box^{\mathrm{pre}}_{v_{j}}$ to $\mathrm{Boxes}^{\mathrm{pre}}$\; \textbf{break}\;} } } Update $\mathcal{F}_{\mathrm{id}}$ and $\mathrm{OAR}$\; \tcp{Step 3. Anti-occlusion DeepSORT.} \For{$box_l$ in $\mathrm{Boxes}$}{ Add $[box_l, \mathcal{G}(box_l)]$ to $\mathcal{I}$\; } \For{$box^{\mathrm{pre}}_{v_{j}}$ in $\mathrm{Boxes}^{\mathrm{pre}}$}{ \For{$f_{v_b}$ in $\mathcal{F}_{\mathrm{id}}$}{ \uIf{$v_j = v_b$}{ Add $[box^{\mathrm{pre}}_{v_{j}}, f_{v_b}]$ to $\mathcal{I}$\; \textbf{break}\;} }} Run DeepSORT with $\mathcal{I}$\; Add the results of DeepSORT to $\mathcal{T}^{\mathrm{last}}_{\mathrm{vis}}$ for generating the video-based vessel trajectory at the current moment $\mathcal{T}_{\mathrm{vis}}$\; \end{algorithm} % % Before the tracking, the results of the previous moment are input as the prior knowledge. Firstly, a set of the occlusion areas OAR will be used, which depends on the ratio of the occlusion area to the bounding boxes. The judgment metric of the occlusion area can be expressed as follows % \begin{equation}\label{eq:OAR} \frac{S_o}{\min(S_1, ..., S_r,... ,S_R)} > \omega, \end{equation} % where $S_o$ is the area of the occluded part, $S_r$ is the area of the $r$-th occluded bounding box, $R$ is the number of occluded bounding boxes, $\omega$ represents the anti-occlusion threshold. When the ratio exceeds $\omega$, we will store the location of the smallest rectangle box AR which can contain all occluded bounding boxes into the OAR. Meanwhile, the AIS-based vessel trajectories at the current moment $\mathcal{T}_{\mathrm{ais}}$ and the video-based vessel trajectories at the previous moment $\mathcal{T}^{\mathrm{last}}_{\mathrm{vis}}$ are also used as prior information, which can be given by % \begin{equation}\label{eq:aisvideo} \begin{cases} &\mathcal{T}_{\mathrm{ais}} = \{X_{a_1},...,X_{a_i},...,X_{a_I}\},\\ &\mathcal{T}^{\mathrm{last}}_{\mathrm{vis}} = \{Y^{'}_{v_1},...,Y^{'}_{v_j},...,Y^{'}_{v_J}\},\\ \end{cases} \end{equation} % where $X_{a_i}$ and $Y^{'}_{v_j}$ represent the trajectory series of the $i$-th AIS target and the $j$-th visual target, respectively, $I$ and $J$ are the numbers of AIS- and video-based vessel trajectories, respectively. 
In addition, we consider the AIS/video association results $\mathcal{B}^{\mathrm{last}}$ at the previous moment and the vessel appearance embedding $\mathcal{F}_{\mathrm{id}}$ before the occlusion, which can be given by
%
\begin{equation}\label{eq:MF} \begin{cases} &\mathcal{B}^{\mathrm{last}} = \{...,[a_i,v_j],...\},\\ &\mathcal{F}_{\mathrm{id}} = \{...,f_{v_b},...\},\\ \end{cases} \end{equation}
%
where $[a_i,v_j]$ means that the $i$-th AIS target $a_i$ and the $j$-th visual target $v_j$ are successfully associated, and $f_{v_b}$ is the appearance embedding of the $b$-th visual target before occlusion\footnote{The vessel appearance embedding is extracted by the wide residual network \cite{zagoruyko2016wide} in the DeepSORT.}. The generation of the AIS/video association results $\mathcal{B}^{\mathrm{last}}$ will be described in detail in Section \ref{se:fusion}.
%
%
For the anti-occlusion tracking, the detection results located in the occlusion areas OAR are removed to avoid mis-detections caused by vessel overlapping. Based on the fusion results at the previous moment, the corresponding AIS information of an occluded visual vessel is available. Therefore, the location of the occluded vessel's bounding box at the current moment $box^{\mathrm{pre}}$ can be estimated by
%
\begin{equation}\label{eq:position_pre} box^{\mathrm{pre}} = \begin{bmatrix} x\mathrm{_{tl}^{pre}}\\ y\mathrm{_{tl}^{pre}}\\ x\mathrm{_{br}^{pre}} \\ y\mathrm{_{br}^{pre}}\end{bmatrix} = \begin{bmatrix} \Delta x_{\mathrm{ais}}\\ \Delta y_{\mathrm{ais}}\\ \Delta x_{\mathrm{ais}}\\ \Delta y_{\mathrm{ais}} \end{bmatrix} + \begin{bmatrix} x\mathrm{_{tl}^{last}}\\ y\mathrm{_{tl}^{last}}\\ x\mathrm{_{br}^{last}}\\ y\mathrm{_{br}^{last}}\end{bmatrix}, \end{equation}
%
where $(x\mathrm{_{tl}^{last}}, y\mathrm{_{tl}^{last}})$ and $(x\mathrm{_{br}^{last}}, y\mathrm{_{br}^{last}})$ are the pixel indexes of the top-left and bottom-right points of the previous bounding box, respectively, and $\Delta x_{\mathrm{ais}}$ and $\Delta y_{\mathrm{ais}}$ are the horizontal and vertical motion speeds, which are equal to the displacement of the AIS information between the current and previous moments.
%
%
For the occluded vessels without corresponding AIS information, the bounding box is predicted via the visual motion features. The prediction result is given by a variant of Eq. (\ref{eq:position_pre}): the horizontal and vertical motion speeds ($\Delta x_{\mathrm{ais}}$, $\Delta y_{\mathrm{ais}}$) are replaced with the visual trajectory-based horizontal and vertical motion speeds ($\Delta x_{\mathrm{vis}}$, $\Delta y_{\mathrm{vis}}$), which can be calculated by
%
\begin{equation}\label{eq:visual_kin} \begin{cases} &\Delta x_{\mathrm{vis}} = \dfrac{x_{t-1} - x_{t-\delta}}{\delta},\\ &\Delta y_{\mathrm{vis}} = \dfrac{y_{t-1} - y_{t-\delta}}{\delta},\\ \end{cases} \end{equation}
%
where $(x_{t-1}, y_{t-1})$ and $(x_{t-\delta}, y_{t-\delta})$ denote the points of the video-based vessel trajectory at the previous moment and $\delta$ moments earlier, respectively.
%
%
After prediction, we update the OAR and $\mathcal{F}_{\mathrm{id}}$. Based on the predicted detection box positions, the occlusion area list OAR is updated via Eq. (\ref{eq:OAR}) for the anti-occlusion tracking at the next moment. For the update of $\mathcal{F}_{\mathrm{id}}$, we first denote the occluded visual target as $v_j$.
If $f_{v_j}$ exists in the original $\mathcal{F}_{\mathrm{id}}$, we directly store $f_{v_j}$ into the new $\mathcal{F}_{\mathrm{id}}$; otherwise, the appearance embedding of $v_j$ at the previous moment will be stored in the new $\mathcal{F}_{\mathrm{id}}$. Then, we employ a wide residual network $\mathcal{G}$ to extract the vessel appearance embedding in normal bounding boxes, and assign the vessel appearance embedding before occlusion in the $\mathcal{F}_{\mathrm{id}}$ to occluded bounding boxes. Finally, the bounding boxes and the corresponding vessel appearance embedding are jointly input into the DeepSORT for generating the video-based vessel trajectories at the current moment $\mathcal{T}_{\mathrm{vis}}$. % % It is worth mentioning that two metrics in DeepSORT can solve the ID assignment issue. Firstly, the Mahalanobis distances between the predicted Kalman states and the newly arrived locations are calculated as the location similarity metrics. Moreover, the cosine distances between the appearance embedding are calculated as the appearance similarity metrics. In our method, the appearance features of the occluded vessels are kept consistent with the latest extractions before the occlusion. Therefore, as long as the predicted bounding box is close to the prediction of Kalman filters, the ID of occluded vessels will not be assigned incorrectly. The pseudo code of the proposed anti-occlusion tracking method is shown in Algorithm \ref{al:anti-occlusion}. % \subsection{Asynchronous Vessel Trajectory Matching} \label{se:fusion} % In this section, we propose a simple yet effective trajectory matching method to fuse the AIS- and video-based asynchronous vessel trajectories. Firstly, we adopt an enhanced fast dynamic time warping (E-FastDTW) algorithm considering the direction to calculate the similarity of AIS- and video-based vessel trajectories. Based on the similarity measure result, the Hungarian algorithm is employed to generate the optimal matching result. To improve the stability and robustness of data fusion and reduce the computational cost, we employ an AIS/video association mechanism. When the number of successful pairings of two trajectories exceeds a pre-determined threshold, the AIS- and video-based vessel trajectories will be associated directly without similarity evaluation. % \subsubsection{Trajectory Similarity Measure via E-FastDTW} % For trajectory-based data fusion, it is an important prerequisite to determine the similarities between the AIS- and video-based vessel trajectories. The Euclidean distance is a simple but effective similarity calculation method. However, it requires that the two trajectories to be matched have the same length. Meanwhile, the Euclidean distance considers that two similar trajectories with only a slight shift in the time axis are significantly different. Therefore, dynamic time warping (DTW) has been proposed for ignoring this shift \cite{muller2007dynamic}. 
Suppose we have two trajectories $X$ and $Y$ of length $P$ and $Q$ respectively, represented as % \begin{equation}\label{eq:series} \begin{cases} &X = m_1, m_2,...,m_p,...,m_P,\\ &Y = n_1, n_2,...,n_q,...,n_Q.\\ \end{cases} \end{equation} % % \begin{algorithm}[t] \footnotesize \caption{Asynchronous Trajectory Matching} \label{al:matching} \LinesNumbered \KwIn {$\mathcal{T}_{\mathrm{ais}}$: A set of all AIS-based vessel trajectories; $\mathcal{T}_{\mathrm{vis}}$: A set of all video-based vessel trajectories; $\mathcal{M}^{\mathrm{last}}$: A set of all AIS/video numbers of matches at the previous moment; $\mathcal{B}^{\mathrm{last}}$: A set of all AIS/video association result at the previous moment;} \KwOut {$\mathcal{M}$: A set of all AIS/video numbers of matches at the current moment; $\mathcal{B}$: A set of all AIS/video association result at the current moment;} \textbf{Initialization:} $d(i, j)$: The Euclidean distance between the last trajectory points of $X_{a_i}$ and $Y_{v_j}$; $M_s$: An empty trajectory similarity matrix; $\mathcal{O}_{\mathrm{res}}$: An empty set to save the matching results; $D_{\mathrm{max}}$: The maximum matching distance; $Mat_{\mathrm{min}}$: The minimum number of matches; $T_{\mathrm{max}}$: The maximum time threshold; $S$: The E-FastDTW trajectory similarity measurement operator\; \tcp{Step 1. Trajectory similarity measure.} \For{$X_{a_i}$ in $\mathcal{T}_{\mathrm{ais}}$}{ \For{$Y_{v_j}$ in $\mathcal{T}_{\mathrm{vis}}$}{ \uIf{$d(i, j)>D_{\mathrm{max}}$}{ $M_{s}(i, j) = + \infty$ \;} \uElseIf{$[a_i,:]$ or $[:,v_j]$ in $\mathcal{B}^{\mathrm{last}}$}{ \uIf{$[a_i, v_j]$ in $\mathcal{B}^{\mathrm{last}}$}{ $M_{s}(i, j) = - \infty$ \;} \Else{ $M_{s}(i, j) = + \infty$ \;}} \Else{ $M_{s}(i, j) = S(X_{a_i}, Y_{v_j})$ by Eq. (\ref{eq:dtw})\;} }} \tcp{Step 2. Matching result generation.} Using the Hungarian algorithm to calculate $M_{s}$ for obtaining the matching result $\mathcal{O}_{\mathrm{res}}=\{...,[a_{i}, v_{j}],...\}$\; \For{$[a_i, v_j]$ in $\mathcal{O}_{\mathrm{res}}$}{ \uIf{$z_{a_i, v_j}$ in $\mathcal{M}^{\mathrm{last}}$}{ Add $z_{a_i, v_j} = z_{a_i, v_j}++$ to $\mathcal{M}$\;} \Else{ Add $z_{a_i, v_j}=1$ to $\mathcal{M}$ \;} } \For{$z_{a_i, v_j}$ in $\mathcal{M}^\mathrm{last}$}{ \uIf{$[a_i,v_j]$ not in $\mathcal{O}_{\mathrm{res}}$ and the time interval between the last matching moment of $[a_i, v_j]$ and the current moment $<T_{\mathrm{max}}$}{ Add $z_{a_i, v_j}$ to $\mathcal{M}$\;} } \tcp{Step 3. Association result generation.} \For{$z_{a_i, v_j}$ in $\mathcal{M}$}{ \uIf{$z_{a_i, v_j}>Mat_{\mathrm{min}}$}{ Add $[a_i, v_j]$ to $\mathcal{B}$\; }} \end{algorithm} % % Based on the two trajectories, the DTW constructs a $P \times Q$ alignment matrix $d$ where $d(p, q)$ is the Euclidean distance between the points $m_p$ and $n_q$. Then, a warp path $W$ is defined to construct the mapping between $X$ and $Y$, which can be written by % \begin{equation}\label{eq:warppath} W = w_1,w_2,...,w_c,...,w_C, \end{equation} % with $C$ being the length of $W$, and $\max\{P, Q\} \leq C < P+Q$. In particular, the warp path $W$ has three restrictions. For the sake of better understanding, we define the $(c-1)$-th and the $c$-th elements of $W$ as $w_{c-1}=(p^{'}, q^{'})$ and $w_c=(p, q)$. These three constraints for warp path can be defined as follows: % \begin{itemize} \item \emph{Restriction 1:} The $1$-st and the $C$-th elements of $W$ are $w_1=(1, 1)$ and $w_{C}=(P,Q)$, respectively. 
\item \emph{Restriction 2:} The adjacent elements of the warp path $W$ can only contain the adjacent coordinate points, including the diagonal adjacent. Therefore, the $w_{c-1}$ can only be one of $\left\{ (p-1, q), ~(p, q-1), ~(p-1, q-1) \right\}$. \item \emph{Restriction 3:} The elements of the warp path $W$ are monotonically increasing in time, i.e., $p^{'} \leq p$ and $q^{'} \leq q$. \end{itemize} % % Under the premise of satisfying the above three constraints, DTW only focuses on the path with the minimum cumulative distance of alignment matrix elements corresponding to all points \cite{muller2007dynamic}. Meanwhile, the included angle $\varphi$ between the starting and ending points of $X$ and $Y$ is also considered. Finally, the similarity value $S(X, Y)$ between $X$ and $Y$ calculated by our proposed E-FastDTW can be written as follows % \begin{equation}\label{eq:dtw} S(X, Y) = Dis(W) \cdot e^{\varphi} = \min\{\sum_{c=1}^{C}d(w_{cp},w_{cq})\} \cdot e^{\varphi}, \end{equation} % where $d(w_{cp},w_{cq})$ is the Euclidean distance between two data points corresponding to the $c$-th element in the warp path $W$, $Dis(W)$ denotes the sum of all $d(w_{cp},w_{cq})$ in the warp path $W$. To find the desired unique warp path, the DTW adopts the dynamic programming strategy. The cumulative distance $\mathcal{D}(p, q)$ between $m_p$ and $n_q$ is the sum of the minimum cumulative distance of three previous possible warp path elements and the Euclidean distance $d(p, q)$ between the points $m_p$ and $n_q$, which can be mathematically written as % \begin{equation}\label{eq:dynamic} \begin{aligned} \mathcal{D}(p, q) = d(p, q)& + \min\{\mathcal{D}(p-1, q), \\ &\mathcal{D}(p, q-1), \mathcal{D}(p-1, q-1)\}. \end{aligned} \end{equation} % % Furthermore, we also adopt the multi-level approach used in the FastDTW to speed up the time series similarity search and reduce the computational complexity. Please refer to Ref. \cite{salvador2007toward} for more details on the multi-level approach. % \subsubsection{Trajectory Matching} % In this work, we propose a novel matching method with higher precision and less computation. In particular, we will match and associate the AIS-based vessel trajectories $\mathcal{T}_{\mathrm{ais}}$ mentioned in Section \ref{se:AIS}, and the video-based vessel trajectories $\mathcal{T}_{\mathrm{vis}}$ mentioned in Section \ref{se:video}, which can be defined as follows % \begin{equation}\label{eq:trajectory} \begin{cases} &\mathcal{T}_{\mathrm{ais}} = \{X_{a_1},..., X_{a_i},..., X_{a_I}\},\\ &\mathcal{T}_{\mathrm{vis}} = \{Y_{v_1},..., Y_{v_j},..., Y_{v_J}\},\\ \end{cases} \end{equation} % where $X_{a_i}$ and $Y_{v_j}$ represent the trajectories of the $i$-th AIS target $a_i$ and the $j$-th visual target $v_j$, respectively, $I$ and $J$ are the numbers of AIS- and video-based vessel trajectories, respectively. Furthermore, the numbers of AIS/video matches $\mathcal{M}^{\mathrm{last}}$ and association results $\mathcal{B}^{\mathrm{last}}$ at the previous moment are also considered as input, i.e., % \begin{equation}\label{eq:fusion} \begin{cases} &\mathcal{M}^{\mathrm{last}} = \{..., z_{a_i, v_j},...\},\\ &\mathcal{B}^{\mathrm{last}} = \{..., [a_i, v_j],...\},\\ \end{cases} \end{equation} % where $z_{a_i, v_j}$ is the number of successful matches of $X_{a_i}$ and $Y_{v_j}$, $[a_i,v_j]$ means that $a_i$ and $v_j$ have been associated together. 
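%
To make the E-FastDTW similarity of Eq. (\ref{eq:dtw}) more concrete before describing the matching procedure, the following Python sketch implements the plain dynamic-programming recurrence of Eq. (\ref{eq:dynamic}) together with the direction term $e^{\varphi}$. It is only an illustration: it omits the multi-level FastDTW acceleration \cite{salvador2007toward}, and it assumes that $\varphi$ is the included angle between the start-to-end displacement vectors of the two trajectories.
\begin{verbatim}
import numpy as np

def e_dtw_similarity(X, Y):
    """DTW cost between two 2-D pixel trajectories, weighted by exp(phi).

    X, Y : arrays of shape (P, 2) and (Q, 2) holding trajectory points.
    Smaller returned values indicate more similar trajectories.
    """
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    P, Q = len(X), len(Y)
    # d(p, q): Euclidean distance between trajectory points m_p and n_q
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # D(p, q): cumulative distance filled by dynamic programming
    D = np.full((P + 1, Q + 1), np.inf)
    D[0, 0] = 0.0
    for p in range(1, P + 1):
        for q in range(1, Q + 1):
            D[p, q] = d[p - 1, q - 1] + min(D[p - 1, q], D[p, q - 1], D[p - 1, q - 1])
    dis_w = D[P, Q]  # minimum cumulative distance Dis(W) over all warp paths
    # Included angle phi between the start-to-end displacement vectors of X and Y
    u, v = X[-1] - X[0], Y[-1] - Y[0]
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    phi = 0.0 if denom == 0 else np.arccos(np.clip(np.dot(u, v) / denom, -1.0, 1.0))
    return dis_w * np.exp(phi)
\end{verbatim}
%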
In the similarity measure, it is obviously time-consuming and intractable to adopt the E-FastDTW for calculating the similarity between all trajectories at each moment. Inspired by the DeepSORT algorithm, we propose a trajectory association mechanism to solve these issues. In particular, if two trajectories have been recorded in the $\mathcal{B}^{last}$, the two trajectories are directly matched by default without similarity measurement with other trajectories. Subsequently, we perform the similarity measure between all trajectories and construct a similarity matrix $M_s$ of size $I \times J$, where $M_s(i,j)$ represents the similarity value of $X_{a_i}$ and $Y_{v_j}$. In particular, when the Euclidean distance between the last trajectory points of $X_{a_i}$ and $Y_{v_j}$ exceeds the maximum matching distance $D_{\mathrm{max}}$, we consider the two trajectories to be completely different and set $M_s(i, j) = + \infty$. When the binding trajectory pair $[a_i,v_j]$ exists in the $\mathcal{B}^{\mathrm{last}}$, we set $M_s(i, j)= - \infty$ and set the values of other horizontal and vertical positions to positive infinity. For other ordinary trajectory pairs that do not satisfy the above conditions, we employ Eq. (\ref{eq:dtw}) (i.e., E-FastDTW) to calculate the trajectory similarity. After obtaining the similarity matrix $M_s$, we adopt the Hungarian optimization algorithm to find the optimal matching result $\mathcal{O}_{\mathrm{res}}$, which contains the matching trajectory pair information, i.e., % \begin{equation}\label{eq:result} \mathcal{O}_{\mathrm{res}}=\{...,[a_{i}, v_{j}],...\}, \end{equation} % where $[a_{i}, v_{j}]$ means that $a_i$ and $v_j$ are matched together. Then, we will generate the AIS/video matching results $\mathcal{M}$ and association results $\mathcal{B}$ at the current moment. More specifically, we iterate through all matching trajectory pairs in the $\mathcal{O}_{\mathrm{res}}$. If the number of matching times $z_{a_i, v_j}$ of trajectory pair $[a_i, v_j]$ in the $\mathcal{O}_{\mathrm{res}}$ exists in the $\mathcal{M}^\mathrm{last}$, we will store $(z_{a_i, v_j}+1)$ to $\mathcal{M}$; otherwise, 1 will be stored to $\mathcal{M}$. In addition, we save the number of matching times $z_{a_i,v_j}$ for some trajectory pairs directly from $\mathcal{M}^\mathrm{last}$ into $\mathcal{M}$. These $z_{a_i,v_j}$ need to satisfy two conditions, which can be defined as follows: % \begin{itemize} \item $z_{a_i,v_j}$ must exist in $\mathcal{M}^\mathrm{last}$ but $[a_i,v_j]$ is not in $\mathcal{O}_{\mathrm{res}}$. \item The time interval between the last matching moment and the current moment is less than $T_{\mathrm{max}}$. \end{itemize} % % For the generation of the AIS/video association result, we set a minimum number of matches $Mat_{\mathrm{min}}$ as a threshold to ensure that the association information is accurate. When $z_{a_i, v_j}$ in the $\mathcal{M}$ is greater than $Mat_{\mathrm{min}}$, we will store $[a_i, v_j]$ into $\mathcal{B}$. The pseudo code of the proposed trajectory matching method is shown in Algorithm \ref{al:matching}. % % \setlength{\tabcolsep}{6pt} \begin{table}[t] \scriptsize \centering \caption{Details of the FVessel dataset. 
The ``TOO'', ``NOV'', and ``NOA'' are the times of occlusions, the total number of vessels, and the number of vessels with AIS, respectively.} \begin{tabular}{c|cccccc} \hline Video & Video Length & Type & Weather & TOO & NOV & NOA \\ \hline \hline video-01 & 10m07s & Bridge & Low-light & 2 & 5 & 4 \\ video-02 & 19m52s & Bridge & Sunny & 6 & 7 & 6 \\ video-03 & 19m14s & Riverside & Sunny & 6 & 5 & 5 \\ video-04 & 06m10s & Riverside & Sunny & 0 & 1 & 1 \\ video-05 & 15m01s & Riverside & Sunny & 2 & 5 & 5 \\ video-06 & 12m49s & Riverside & Sunny & 2 & 4 & 4 \\ video-07 & 03m38s & Riverside & Sunny & 1 & 2 & 2 \\ video-08 & 16m05s & Riverside & Sunny & 3 & 6 & 5 \\ video-09 & 05m25s & Riverside & Sunny & 0 & 1 & 1 \\ video-10 & 11m17s & Bridge & Cloudy & 2 & 3 & 1 \\ video-11 & 05m18s & Riverside & Sunny & 1 & 3 & 3 \\ video-12 & 07m19s & Riverside & Sunny & 1 & 4 & 4 \\ video-13 & 12m58s & Riverside & Sunny & 5 & 6 & 6 \\ video-14 & 03m58s & Riverside & Sunny & 3 & 4 & 4 \\ video-15 & 10m46s & Riverside & Sunny & 0 & 4 & 4 \\ video-16 & 05m05s & Riverside & Sunny & 0 & 1 & 1 \\ video-17 & 08m08s & Riverside & Sunny & 1 & 2 & 2 \\ video-18 & 23m57s & Riverside & Sunny & 10 & 10 & 6 \\ video-19 & 11m28s & Riverside & Low-light & 0 & 2 & 2 \\ video-20 & 14m10s & Riverside & Low-light & 0 & 3 & 3 \\ video-21 & 24m01s & Riverside & Low-light & 4 & 7 & 6 \\ video-22 & 02m40s & Riverside & Low-light & 0 & 2 & 1 \\ video-23 & 19m24s & Riverside & Sunny & 2 & 4 & 4 \\ video-24 & 08m39s & Riverside & Sunny & 2 & 3 & 3 \\ video-25 & 24m05s & Riverside & Sunny & 4 & 8 & 8 \\ video-26 & 07m26s & Riverside & Sunny & 0 & 5 & 5 \\ \hline \end{tabular}\label{table:FVessel} \end{table} % \subsection{Implementation Details} % This section mainly introduces the detailed settings of the proposed data fusion method. In particular, our method is implemented on the python 3.7 platform. All experiments and tests are conducted on a PC with Intel Core i5-10600KF CPU @ 4.10GHz and Nvidia RTX A4000 GPU. To meet the requirement of real-time processing while ensuring the accurate fusion, our method only executes one processing per second. For the AIS-based vessel trajectory extraction, we delete the data more than two nautical miles from the camera and set the maximum storage time to two minutes. For the vessel detection task, we collect 20k images containing vessel objects as the training dataset. In training, we set the epoch to 100 and employ the Adam algorithm as the optimizer. The initial learning rates for the first 50 and last 50 epochs are $10^{-3}$ and $10^{-4}$, respectively. For the video-based vessel trajectory extraction, we set the occlusion area threshold $\omega = 0$ and the time span of visual motion feature extraction $\delta = 5s$. For the AIS and video data fusion, we set the maximum matching distance $D_{\mathrm{max}}$ as the half of the horizontal size of the image, the minimum number of matching times $Mat_{\mathrm{min}} = 15$, and the maximum time threshold $T_{\mathrm{max}}=15s$. % % \section{Experimental Results and Discussion} \label{sec:exp} % In this section, we conduct massive experiments on vessel detection, vessel tracking, and data fusion to quantitatively evaluate the performance of our proposed method. The running time analysis is also carried out to verify its practicality. 
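%
For ease of reference, the parameter settings listed in the implementation details above, which are used throughout the following experiments, can be collected into a single configuration sketch; the variable names below are purely illustrative and are not taken from the released code.
\begin{verbatim}
# Illustrative summary of the settings listed in the implementation details.
DEEPSORVF_CONFIG = {
    "processing_interval_s": 1.0,        # one processing step per second
    "ais_max_range_nmi": 2.0,            # drop AIS data farther than two nautical miles
    "ais_max_storage_min": 2.0,          # maximum AIS storage time (minutes)
    "occlusion_threshold_omega": 0.0,    # ratio threshold in the occlusion criterion
    "visual_motion_span_delta_s": 5.0,   # time span for visual motion feature extraction
    "max_matching_distance_Dmax": "0.5 * image_width",  # pixels
    "min_num_matches_Mat_min": 15,
    "max_time_threshold_Tmax_s": 15.0,
    "detector": {"network": "YOLOX", "train_images": 20000, "epochs": 100,
                 "optimizer": "Adam", "lr_first_50": 1e-3, "lr_last_50": 1e-4},
}
\end{verbatim}
%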
%
%
\subsection{Benchmark Dataset}
%
%
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{images/Figure04_FVessel} \caption{Some samples of the FVessel dataset, which contains a large number of images and videos captured in the bridge region and at the riverside under sunny, cloudy, and low-light conditions.} \label{fig:FVessel} \end{figure}
%
%
\setlength{\tabcolsep}{6pt} \begin{table}[t] \scriptsize \centering \caption{Details of the dataset used in the vessel detection, vessel tracking, and data fusion experiments. The ``TOO'', ``NOV'', and ``NOA'' are the times of occlusions, the total number of vessels, and the number of vessels with AIS, respectively.} \begin{tabular}{c|cccccc} \hline Video & Video Length & Type & Weather & TOO & NOV & NOA \\ \hline \hline clip-01 & 01m51s & Bridge & Cloudy & 1 & 2 & 2 \\ clip-02 & 01m36s & Riverside & Sunny & 1 & 2 & 2 \\ clip-03 & 03m42s & Riverside & Low-light & 1 & 2 & 2 \\ clip-04 & 02m42s & Riverside & Low-light & 2 & 3 & 3 \\ clip-05 & 03m05s & Riverside & Sunny & 2 & 3 & 3 \\ clip-06 & 02m38s & Riverside & Sunny & 2 & 3 & 3 \\ clip-07 & 11m10s & Riverside & Sunny & 2 & 4 & 4 \\ clip-08 & 05m07s & Riverside & Sunny & 1 & 2 & 2 \\ clip-09 & 08m39s & Riverside & Sunny & 2 & 3 & 3 \\ clip-10 & 02m46s & Riverside & Sunny & 2 & 4 & 4 \\ \hline \end{tabular}\label{table:dataset} \end{table}
%
In this section, we construct a benchmark dataset\footnote{We have released some matched pairs of AIS and video data, available at \url{https://github.com/gy65896/FVessel}. The complete dataset and original source code will be released if this work is finally accepted for publication.} for vessel detection, tracking, and data fusion (named FVessel), containing 26 videos and the corresponding AIS data captured by a HIKVISION DS-2DC4423IW-D dome camera and a Saiyang AIS9000-08 Class-B AIS receiver on the Wuhan Segment of the Yangtze River\footnote{Each vessel is uniquely identified by a particular MMSI in the raw AIS data. To protect privacy, the MMSI of each vessel has been replaced with a random number in our dataset.}. As shown in Fig. \ref{fig:FVessel}, these videos were captured at various locations (e.g., the bridge region and the riverside) and under various weather conditions (e.g., sunny, cloudy, and low-light). Table \ref{table:FVessel} displays more details about the FVessel dataset, including the video length, collection location, weather condition, times of occlusions, total number of vessels, and number of vessels with AIS. To verify the superiority of the proposed module, we extract ten clips containing vessel occlusions from the FVessel dataset for comparison experiments on vessel detection, tracking, and data fusion. More detailed information on the test dataset can be found in Table \ref{table:dataset}.
%
\setlength{\tabcolsep}{6pt} \begin{table}[t] \scriptsize \centering \caption{MOFA, IDP, IDR, and IDF$_1$ Results of Data Fusion for the Ten Clips from Table \ref{table:dataset}.
(Unit: \%)} \begin{tabular}{c|l|cccc} \hline Video & Methods & MOFA $\uparrow$ & IDP $\uparrow$ & IDR $\uparrow$ & IDF$_1 \uparrow$ \\ \hline \hline \multirow{5}{*}{clip-01} & EDDF & 69.27 & 88.67 & 82.57 & 84.31 \\ & MSDF \cite{liu2022intelligent} & 68.35 & 88.61 & 82.11 & 83.84 \\ & MMDST \cite{huang2021identity} & 94.95 & \textbf{100.00} & 95.41 & 97.42 \\ & DeepSORVF (w/o) & 95.41 & \textbf{100.00} & 95.41 & 97.65 \\ & DeepSORVF & \textbf{99.54} & \textbf{100.00} & \textbf{99.54} & \textbf{99.77} \\ \hline \multirow{5}{*}{clip-02} & EDDF & 69.68 & 89.41 & 80.85 & 84.68 \\ & MSDF \cite{liu2022intelligent} & 69.68 & 89.41 & 80.85 & 84.68 \\ & MMDST \cite{huang2021identity} & 84.04 & \textbf{100.00} & 87.77 & 92.18 \\ & DeepSORVF (w/o) & 87.77 & \textbf{100.00} & 87.77 & 93.48 \\ & DeepSORVF & \textbf{99.47} & \textbf{100.00} & \textbf{99.47} & \textbf{99.73} \\ \hline \multirow{5}{*}{clip-03} & EDDF & 79.29 & 96.09 & 87.28 & 89.39 \\ & MSDF \cite{liu2022intelligent} & 81.66 & 96.09 & 87.28 & 90.49 \\ & MMDST \cite{huang2021identity} & 92.60 & \textbf{100.00} & 92.60 & 96.16 \\ & DeepSORVF (w/o) & 90.53 & \textbf{100.00} & 90.53 & 95.03 \\ & DeepSORVF & \textbf{94.67} & \textbf{100.00} & \textbf{94.67} & \textbf{97.26} \\ \hline \multirow{5}{*}{clip-04} & EDDF & 54.82 & 90.93 & 70.39 & 75.89 \\ & MSDF \cite{liu2022intelligent} & 53.73 & 90.66 & 72.37 & 75.95 \\ & MMDST \cite{huang2021identity} & 82.24 & 98.99 & 86.18 & 90.66 \\ & DeepSORVF (w/o) & 85.53 & 99.00 & 86.40 & 92.27 \\ & DeepSORVF & \textbf{97.59} & \textbf{99.33} & \textbf{98.25} & \textbf{98.79} \\ \hline \multirow{5}{*}{clip-05} & EDDF & 60.34 & 91.71 & 80.17 & 81.39 \\ & MSDF \cite{liu2022intelligent} & 57.78 & 91.13 & 78.89 & 80.09 \\ & MMDST \cite{huang2021identity} & 79.74 & \textbf{99.53} & 89.55 & 91.01 \\ & DeepSORVF (w/o) & 86.78 & 98.34 & 88.27 & 93.03 \\ & DeepSORVF & \textbf{92.96} & 98.44 & \textbf{94.46} & \textbf{96.41} \\ \hline \multirow{5}{*}{clip-06} & EDDF & 69.19 & 98.67 & 82.91 & 85.06 \\ & MSDF \cite{liu2022intelligent} & 71.15 & 98.66 & 82.63 & 85.88 \\ & MMDST \cite{huang2021identity} & 82.35 & \textbf{99.37} & 87.96 & 91.41 \\ & DeepSORVF (w/o) & 83.47 & 98.69 & 84.59 & 91.10 \\ & DeepSORVF & \textbf{92.16} & 98.81 & \textbf{93.28} & \textbf{95.97} \\ \hline \multirow{5}{*}{clip-07} & EDDF & 76.92 & 95.44 & 88.51 & 89.08 \\ & MSDF \cite{liu2022intelligent} & 75.74 & 95.21 & 87.92 & 88.48 \\ & MMDST \cite{huang2021identity} & 85.85 & \textbf{98.74} & 92.73 & 93.37 \\ & DeepSORVF (w/o) & 90.18 & 98.42 & 91.65 & 94.91 \\ & DeepSORVF & \textbf{93.03} & 98.46 & \textbf{94.50} & \textbf{96.44} \\ \hline \multirow{5}{*}{clip-08} & EDDF & 86.63 & 95.94 & 93.41 & 94.21 \\ & MSDF \cite{liu2022intelligent} & 87.38 & 95.57 & 93.41 & 94.21 \\ & MMDST \cite{huang2021identity} & 96.61 & \textbf{100.00} & 97.55 & 98.48 \\ & DeepSORVF (w/o) & 97.18 & \textbf{100.00} & 97.18 & 98.57 \\ & DeepSORVF & \textbf{98.49} & \textbf{100.00} & \textbf{98.49} & \textbf{99.24} \\ \hline \multirow{5}{*}{clip-09} & EDDF & 79.08 & 96.42 & 87.91 & 89.61 \\ & MSDF \cite{liu2022intelligent} & 81.80 & 96.42 & 87.77 & 89.47 \\ & MMDST \cite{huang2021identity} & 92.66 & \textbf{99.15} & 94.57 & 96.53 \\ & DeepSORVF (w/o) & 92.93 & 98.86 & 94.02 & 96.38 \\ & DeepSORVF & \textbf{94.70} & 98.34 & \textbf{96.33} & \textbf{97.32} \\ \hline \multirow{5}{*}{clip-10} & EDDF & 87.56 & 98.05 & 89.33 & 93.49 \\ & MSDF \cite{liu2022intelligent} & 88.44 & 98.54 & 89.78 & 93.95 \\ & MMDST \cite{huang2021identity} & 88.44 & \textbf{100.00} & 89.56 & 
93.94 \\ & DeepSORVF (w/o) & 89.56 & \textbf{100.00} & 89.56 & 94.49 \\ & DeepSORVF & \textbf{97.78} & \textbf{100.00} & \textbf{97.78} & \textbf{98.88} \\ \hline \multirow{5}{*}{Average} & EDDF & 73.28 & 94.13 & 84.33 & 86.71 \\ & MSDF \cite{liu2022intelligent} & 73.57 & 94.03 & 84.30 & 86.70 \\ & MMDST \cite{huang2021identity} & 87.95 & \textbf{99.60} & 91.39 & 94.12 \\ & DeepSORVF (w/o) & 89.93 & 99.33 & 90.54 & 94.69 \\ & DeepSORVF & \textbf{96.04} & 99.34 & \textbf{96.68} & \textbf{97.98} \\ \hline \end{tabular}\label{table:fusion} \end{table} % % \begin{figure*}[h] \centering \includegraphics[width=0.97\linewidth]{images/Figure05_Bridge} \caption{Visual comparisons of fusion results on the dataset captured by the bridge region camera from Table \ref{table:dataset}. From top to bottom: visual fusion results generated by (a) MSDF \cite{liu2022intelligent}, (b) MMDST \cite{huang2021identity}, (c) DeepSORVF without the anti-occlusion strategy, and (d) DeepSORVF, respectively.} \label{fig:Bridge} \end{figure*} % % \begin{figure*}[ht] \centering \includegraphics[width=0.97\linewidth]{images/Figure06_Riverside1} \caption{Visual comparisons of fusion results on the dataset captured by the riverside camera from Table \ref{table:dataset}. From top to bottom: visual fusion results generated by (a) MSDF \cite{liu2022intelligent}, (b) MMDST \cite{huang2021identity}, (c) DeepSORVF without the anti-occlusion strategy, and (d) DeepSORVF, respectively.} \label{fig:Riverside1} \end{figure*} % % \subsection{Experiments on Data Fusion} % In this section, we implement the data fusion experiment to compare various methods, i.e., Euclidean distance-based data fusion (EDDF), multi-source data fusion (MSDF) \cite{liu2022intelligent}, multi-modal data-based ship tracking (MMDST) \cite{huang2021identity}, DeepSORVF (w/o) without the anti-occlusion strategy, and DeepSORVF. In particular, the EDDF calculates the Euclidean distance between the pixel position points at the current moment for similarity measurement and employs the near-matching mechanism. For the point matching-based MSDF and MMDST, we only replace our asynchronous vessel trajectory matching part with its corresponding matching method to compare the fusion effect under the premise of consistent detectors. Furthermore, all methods only process data once per second. % \subsubsection{Evaluation Metric} % To evaluate the performance of data fusion, we first use a variant of multi-object tracking accuracy ($\mathrm{MOTA}$) \cite{bernardin2008evaluating} as the evaluation metric and name it multi-object fusion accuracy ($\mathrm{MOFA}$), i.e., % % \begin{equation}\label{eq:MOFA} \mathrm{MOFA} = 1-\frac{FN_{\mathrm{mmsi}}+FP_{\mathrm{mmsi}}}{GT_{\mathrm{mmsi}}}, \end{equation} % where $\mathrm{mmsi}$ represents the identity of the vessel of interest (MMSI), $FP_\mathrm{mmsi}$, $FN_\mathrm{mmsi}$, and $GT_\mathrm{mmsi}$ are the number of the MMSI false positive, MMSI false negative, and MMSI ground truth, respectively. Furthermore, the identification precision ($\mathrm{IDP}$), identification recall ($\mathrm{IDR}$), and identification F1 ($\mathrm{IDF}_1$) are also employed as evaluation metrics. 
The $\mathrm{IDP}$, $\mathrm{IDR}$, and $\mathrm{IDF}_1$ can be given by % \begin{equation}\label{eq:IDP} \mathrm{IDP} = \frac{TP_{\mathrm{id}}}{TP_{\mathrm{id}}+FP_{\mathrm{id}}}, \end{equation} % % \begin{equation}\label{eq:IDR} \mathrm{IDR} = \frac{TP_{\mathrm{id}}}{TP_{\mathrm{id}}+FN_{\mathrm{id}}}, \end{equation} % % \begin{equation}\label{eq:IDF1} \mathrm{IDF}_1 = \frac{2TP_{\mathrm{id}}}{2TP_{\mathrm{id}}+FP_{\mathrm{id}}+FN_{\mathrm{id}}}, \end{equation} % where $TP_{\mathrm{id}}$, $FP_{\mathrm{id}}$, and $FN_{\mathrm{id}}$ are the numbers of the ID true positive, ID false positive, and ID false negative, respectively. In particular, the $\mathrm{id}$ is replaced with the identity of the vessel of interest (MMSI) in the data fusion evaluation. Please refer to Refs. \cite{bernardin2008evaluating, ristani2016performance} for more details on the $\mathrm{MOTA}$, $\mathrm{IDP}$, $\mathrm{IDR}$, and $\mathrm{IDF}_1$. Generally, higher $\mathrm{MOFA}$, $\mathrm{IDP}$, $\mathrm{IDR}$, and $\mathrm{IDF}_1$ mean better fusion performance. % % \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{images/Figure08_Fusion} \caption{Visual fusion results of our DeepSORVF on the FVessel dataset from Table \ref{table:FVessel}.} \label{fig:fusion} \end{figure*} % \subsubsection{Fusion Results on Ten Clips} % Table \ref{table:fusion} displays the evaluation results on all clips. It can be found that EDDF and MSDF perform poorly. Especially for clip-04, the MOFA is only 54.82\% for EDDF and 53.73\% for MSDF. The poor fusion effect stems from the fact that these methods only consider the current information without associating the historical feature. By considering the displacement direction of AIS- and video-based vessel trajectories, the MMDST greatly improves the fusion effect. However, the two DeepSORVFs based on the vessel motion trajectory matching have better performance by comparison. Particularly after implementing the anti-occlusion strategy, the performance of our DeepSORVF has improved considerably across all metrics. % \setlength{\tabcolsep}{12pt} \begin{table}[t] \scriptsize \centering \caption{MOFA, IDP, IDR, and IDF$_1$ Results of Data Fusion for the FVessel from Table \ref{table:FVessel}. 
(Unit: \%)} \begin{tabular}{c|cccc} \hline Video & MOFA $\uparrow$ & IDP $\uparrow$ & IDR $\uparrow$ & IDF$_1 \uparrow$ \\ \hline \hline video-01 & 79.94 & 89.35 & 90.76 & 90.05 \\ video-02 & 73.19 & 83.27 & 91.60 & 87.23 \\ video-03 & 96.45 & 99.23 & 97.20 & 98.20 \\ video-04 & 98.08 & 99.45 & 98.63 & 99.03 \\ video-05 & 89.19 & 93.46 & 95.91 & 94.67 \\ video-06 & 91.17 & 96.04 & 95.08 & 95.56 \\ video-07 & 96.81 & 99.59 & 97.21 & 98.39 \\ video-08 & 82.28 & 99.64 & 82.58 & 90.31 \\ video-09 & 98.45 & 100.00 & 98.45 & 99.22 \\ video-10 & 88.74 & 90.42 & 99.26 & 94.63 \\ video-11 & 97.66 & 99.29 & 98.36 & 98.83 \\ video-12 & 95.45 & 99.06 & 96.36 & 97.69 \\ video-13 & 84.82 & 94.82 & 89.72 & 92.20 \\ video-14 & 93.10 & 97.82 & 95.22 & 96.50 \\ video-15 & 95.88 & 97.19 & 98.74 & 97.96 \\ video-16 & 98.68 & 100.00 & 98.68 & 99.33 \\ video-17 & 90.02 & 93.80 & 96.39 & 95.08 \\ video-18 & 74.49 & 83.57 & 92.72 & 87.91 \\ video-19 & 96.62 & 98.31 & 98.31 & 98.31 \\ video-20 & 96.74 & 98.66 & 98.07 & 98.36 \\ video-21 & 76.43 & 87.03 & 89.82 & 88.40 \\ video-22 & 96.82 & 99.35 & 97.45 & 98.39 \\ video-23 & 94.71 & 98.91 & 95.77 & 97.31 \\ video-24 & 94.70 & 98.34 & 96.33 & 97.32 \\ video-25 & 91.49 & 97.66 & 93.73 & 95.66 \\ video-26 & 97.44 & 99.11 & 98.32 & 98.72 \\ \hline Average & 91.13 & 95.90 & 95.41 & 95.59 \\ \hline \end{tabular}\label{table:fusion_FVessel} \end{table} % % To provide a more understandable explanation, we display two examples of data fusion obtained by MSDF, MMDST, DeepSORVF (w/o), and DeepSORVF shown in Figs. \ref{fig:Bridge} and \ref{fig:Riverside1}. Specifically, Fig. \ref{fig:Bridge} displays the visualized data fusion result captured by the bridge region camera. Since the MSDF only considers the vessel characteristic at the current moment, the vessel information is more likely to be matched incorrectly. In the 80-th second, the MSDF, MMDST, and DeepSORVF (w/o) are unable to match the vessel identification information since the detector fails to identify the partially occluded target. For the data collected by the riverside, the visual vessels are often more severely occluded, resulting in the complete disappearance of target features. By analyzing Fig. \ref{fig:Riverside1}, the MSDF, MMDST, and DeepSORVF (w/o) will produce more missing detection and false matching. It is worth mentioning that the vessel occlusion will also affect the trajectory feature extraction and cause the feature matching failure. In contrast, the proposed DeepSORVF with the anti-occlusion strategy has a more stable data fusion effect and is suitable for a variety of scenarios. % \subsubsection{Fusion Results on FVessel Dataset} % Our DeepSORVF is also used to process more data in the FVessel dataset and calculate the $\mathrm{MOFA}$, $\mathrm{IDP}$, $\mathrm{IDR}$, and $\mathrm{IDF}_1$. Table \ref{table:fusion_FVessel} and Fig. \ref{fig:fusion} display the metric calculation results and the visualized fusion results, respectively. It can be found that the proposed method has stable fusion performance. The fusion accuracy (MOFA) of our method is between 73.19\% and 98.68\% and the average is 91.13\%. In the evaluation of the other three metrics, our method also has a good performance. Through the comparison in Fig. \ref{fig:fusion}, the results generated by our DeepSORVF are accurate and stable. 
The superiority of the proposed method benefits from the accurate prediction of the vessel bounding box by the anti-occlusion tracking method under the occlusion condition and the accurate matching based on the trajectory series. % % \setlength{\tabcolsep}{12pt} \begin{table}[t] \scriptsize \centering \caption{Precision and Recall Results of Vessel Detection for the Ten Clips from Table \ref{table:dataset}. (Unit: \%)} \begin{tabular}{l|cccccc} \hline Detector & Data Fusion & Precision $\uparrow$ & Recall $\uparrow$ \\ \hline \hline \multirow{2}{*}{Faster-RCNN \cite{ren2016faster}} & \XSolidBrush & 89.04 & 90.74 \\ & \CheckmarkBold & \textbf{90.03} & \textbf{96.48} \\ \hline \multirow{2}{*}{SSD \cite{liu2016ssd}} & \XSolidBrush & 94.98 & 88.14 \\ & \CheckmarkBold & \textbf{95.02} & \textbf{95.62} \\ \hline \multirow{2}{*}{YOLOv4 \cite{bochkovskiy2020yolov4}} & \XSolidBrush & 96.43 & 76.97 \\ & \CheckmarkBold & \textbf{96.74} & \textbf{84.85} \\ \hline \multirow{2}{*}{YOLOv5 \cite{jocher2021ultralytics}} & \XSolidBrush & 98.78 & 83.70 \\ & \CheckmarkBold & \textbf{99.47} & \textbf{91.29} \\ \hline \multirow{2}{*}{YOLOX \cite{ge2021yolox}} & \XSolidBrush & 98.91 & 94.13 \\ & \CheckmarkBold & \textbf{99.20} & \textbf{99.40} \\ \hline \end{tabular}\label{table:detection} \end{table} % \subsection{Influence of Data Fusion on Vessel Detection and Tracking} % In our proposed method, the result of trajectory matching is fed as prior knowledge to the vessel detection and tracking tasks at the next moment for promoting the video-based vessel trajectory extraction. To verify that our proposed data fusion method can improve vessel detection and tracking performance, we conduct massive experiments on ten clips. In particular, we select five different deep neural networks as detectors, i.e., Faster-RCNN \cite{ren2016faster}, SSD \cite{liu2016ssd}, YOLOv4 \cite{bochkovskiy2020yolov4}, YOLOv5 \cite{jocher2021ultralytics}, and YOLOX \cite{ge2021yolox}. Each detector has two versions, i.e., ``Detection'' and ``Detection + Data Fusion''. Furthermore, all methods only process data once per second. \subsubsection{Evaluation Metric} % To evaluate the performance of vessel detection, we select the $\mathrm{Precision}$ and $\mathrm{Recall}$ as evaluation metrics. Let $TP$, $FP$, and $FN$ denote the number of the true positive, false positive, and false negative, the $\mathrm{Precision}$ and $\mathrm{Recall}$ can be given by % \begin{equation}\label{eq:Precision} \mathrm{Precision} = \frac{TP}{TP+FP}, \end{equation} % % \begin{equation}\label{eq:Recall} \mathrm{Recall} = \frac{TP}{TP+FN}. \end{equation} % For vessel tracking, we tend to use MOTA as an evaluation metric, which can be defined as % \begin{equation}\label{eq:MOTA} \mathrm{MOTA} = 1-\frac{FP+FN+ID_s}{GT}, \end{equation} % where $FP$, $FN$, $ID_s$, and $GT$ represent the numbers of the false positive, false negative, $ID$ switch, and ground truth, respectively. Furthermore, we also adopt the $\mathrm{IDP}$, $\mathrm{IDR}$, and $\mathrm{IDF}_1$ metrics. Theoretically, better detection results have higher $\mathrm{Precision}$ and $\mathrm{Recall}$, and better tracking results have higher $\mathrm{MOTA}$, $\mathrm{IDP}$, $\mathrm{IDR}$, and $\mathrm{IDF}_1$. % \subsubsection{Vessel Detection and Tracking on Ten Clips} % Table \ref{table:detection} compares the detection $\mathrm{Precision}$ and $\mathrm{Recall}$ of various detectors on ten clips. 
Due to the mutual occlusion between the targets, some vessel characteristics are easily hidden by another vessel. Therefore, detectors often suffer from missing detection, resulting in higher $FN$ and poorer $\mathrm{Recall}$. In most cases, detectors are prone to produce false detection boxes in vessel encounter regions due to the overlapping of multiple vessel features. These false detection boxes will produce higher $FP$ and poorer $\mathrm{Precision}$. In contrast, the proposed anti-occlusion method based on data fusion results can improve the performance of various detectors. % % \setlength{\tabcolsep}{6pt} \begin{table}[t] \scriptsize \centering \caption{MOTA, IDP, IDR, and IDF$_1$ Results of Vessel Tracking for the Ten Clips from Table \ref{table:dataset}. (Unit: \%)} \begin{tabular}{l|ccccc} \hline Detector & Data Fusion & MOTA $\uparrow$ & IDP $\uparrow$ & IDR $\uparrow$ & IDF$_1 \uparrow$ \\ \hline \hline \multirow{2}{*}{Faster-RCNN \cite{ren2016faster}} & \XSolidBrush & 75.19 & 68.03 & 69.00 & 68.43 \\ & \CheckmarkBold & \textbf{84.43} & \textbf{81.81} & \textbf{87.95} & \textbf{84.58} \\ \hline \multirow{2}{*}{SSD \cite{liu2016ssd}} & \XSolidBrush & 81.36 & 84.49 & 78.77 & 81.14 \\ & \CheckmarkBold & \textbf{89.69} & \textbf{90.03} & \textbf{91.03} & \textbf{90.13} \\ \hline \multirow{2}{*}{YOLOv4 \cite{bochkovskiy2020yolov4}} & \XSolidBrush & 70.92 & 71.61 & 56.66 & 63.08 \\ & \CheckmarkBold & \textbf{80.50} & \textbf{80.44} & \textbf{70.18} & \textbf{74.77} \\ \hline \multirow{2}{*}{YOLOv5 \cite{jocher2021ultralytics}} & \XSolidBrush & 81.72 & 88.68 & 75.08 & 81.12 \\ & \CheckmarkBold & \textbf{90.40} & \textbf{94.48} & \textbf{86.50} & \textbf{90.05} \\ \hline \multirow{2}{*}{YOLOX \cite{ge2021yolox}} & \XSolidBrush & 92.61 & 89.80 & 85.36 & 87.50 \\ & \CheckmarkBold & \textbf{98.61} & \textbf{97.72} & \textbf{97.92} & \textbf{97.82} \\ \hline \end{tabular}\label{table:tracking} \end{table} % % To compare the tracking performance, Table \ref{table:tracking} further shows the $\mathrm{MOTA}$, $\mathrm{IDP}$, $\mathrm{IDR}$, and $\mathrm{IDF}_1$ results of various detectors on ten clips. In contrast, the proposed data fusion method can significantly improve the tracking performance of all five detectors and reduce the number of missing and false detection. The performance improvement benefits from the proposed anti-occlusion method based on data fusion results. The proposed method can achieve more stable vessel tracking during the occlusion. % % % \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{images/Figure09_Time} \caption{Processing time of one-second data on the ten clips from Table \ref{table:dataset}.} \label{fig:time} \end{figure} % % \subsection{Running Time Analysis} % % \setlength{\tabcolsep}{14pt} \begin{table}[t] \scriptsize \centering \caption{Processing Time of One-Second Data (Mean $\pm$ Std) on the Ten Clips from Table \ref{table:dataset}. 
(Unit: Sec.)} \begin{tabular}{c|cc} \hline Video & Video Length & Processing Time \\ \hline \hline clip-01 & 01m51s & 0.2721$\pm$0.0230 \\ clip-02 & 01m36s & 0.2526$\pm$0.0323 \\ clip-03 & 03m42s & 0.2998$\pm$0.0413 \\ clip-04 & 02m42s & 0.2462$\pm$0.0289 \\ clip-05 & 03m05s & 0.2444$\pm$0.0321 \\ clip-06 & 02m38s & 0.2642$\pm$0.0344 \\ clip-07 & 11m10s & 0.2279$\pm$0.0391 \\ clip-08 & 05m07s & 0.2404$\pm$0.0364 \\ clip-09 & 08m39s & 0.2569$\pm$0.0256 \\ clip-10 & 02m46s & 0.2579$\pm$0.0304 \\ \hline Average & -- & 0.2562 \\ \hline \end{tabular}\label{table:time} \end{table} % The time complexity of the proposed method is a critical metric, which directly determines whether it can be used in actual engineering. In this work, we only process the data once per second to ensure practicability. Therefore, we are unable to use the frame per second (FPS) as an evaluation metric. Meanwhile, since the proposed method considers trajectory features, the time complexity is also related to the number and length of AIS- and video-based vessel trajectories. Consequently, it is also inaccurate to calculate the running time of a single image. Finally, we compute the processing time of one-second data for ten clips in Table \ref{table:dataset}. The processing time of our method for each clip is shown in Fig. \ref{fig:time} and Table \ref{table:time}. It can be seen that our DeepSORVF has low time complexity and high practicability. It can process one second of data in 0.175-0.500 seconds and 0.2562 seconds on average. % \subsection{Discussion} % Although our proposed method adopts the prior knowledge-driven anti-occlusion tracking method and trajectory matching method to effectively improve the accuracy of data fusion, our method still has some limitations. In this section, we use the multiple object fusion accuracy (MOFA) and multiple object fusion precision (MOFP) as evaluation metrics, where the MOFP is a variant of the Multiple Object Tracking Precision (MOTP) \cite{bernardin2008evaluating} in the data fusion task. The MOFP can be given by % \begin{equation}\label{eq:MOTP} \mathrm{MOFP} = \frac{\varSigma_{t,i}\mathcal{D}^{t,i}_{\mathrm{mmsi}}}{\varSigma_{t}N^{t}_{\mathrm{mmsi}}}, \end{equation} where $\mathcal{D}^{t,i}_{\mathrm{mmsi}}$ denotes the distance of the $i$-th MMSI matching pair in the $t$-th second, $N^{t}_{\mathrm{mmsi}}$ is the number of matches in the $t$-th second. Theoretically, a better fusion effect has higher MOFA and lower MOFP. % \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{images/Figure10_Limitation} \caption{Visual comparisons of fusion results on the dataset from Table \ref{table:dataset}. DeepSORVF (w/o) represents our DeepSORVF without the anti-occlusion strategy.} \label{fig:Limitation} \end{figure} % % \setlength{\tabcolsep}{12pt} \begin{table}[t] \scriptsize \centering \caption{MOFA (\%) and MOFP Results of Data Fusion for the clip-02 and clip-03 from Table \ref{table:dataset}. 
DeepSORVF (w/o) Represents Our DeepSORVF Without the Anti-Occlusion Strategy.} \begin{tabular}{c|l|cc} \hline Video & Methods & MOFA $\uparrow$ & MOFP $\downarrow$ \\ \hline \hline \multirow{2}{*}{clip-02} & DeepSORVF (w/o) & 87.77 & \textbf{0.110} \\ & DeepSORVF & \textbf{99.47} & 0.140 \\ \hline \multirow{2}{*}{clip-03} & DeepSORVF (w/o) & 90.53 & \textbf{0.241} \\ & DeepSORVF & \textbf{94.67} & 0.252 \\ \hline \multirow{2}{*}{Average} & DeepSORVF (w/o) & 88.92 & \textbf{0.175} \\ & DeepSORVF & \textbf{97.02} & 0.196 \\ \hline \end{tabular}\label{table:limitation} \end{table} % % Using clip-02 and clip-03 as examples, we compute their MOFA and MOFP. As shown in Table \ref{table:limitation}, it can be found that the proposed anti-occlusion tracking method can significantly improve the accuracy of data fusion by comparing the MOFA. However, the DeepSORVF is slightly inferior to the DeepSORVF (w/o) without the anti-occlusion strategy in the bounding box localization precision evaluated by the MOFP. The more intuitive comparisons before and after the use of the anti-occlusion strategy are illustrated in Fig. \ref{fig:Limitation}. Our DeepSORVF with the anti-occlusion strategy can predict the vessel position and accurately match the occluded vessel information. However, the predicted bounding boxes still have some degree of bias in complex occlusion conditions. This deviation is mostly attributable to the inaccurate estimation of AIS and visual motion characteristics. When a vessel travels away from the camera, for instance, its visual movement speed generally slows and the object gets smaller. To further improve the vessel anti-occlusion performance, our future work will take into account the changing features of the moving vessels in the visual data. % % \section{Conclusion} \label{con} % In this paper, we proposed a deep learning-based simple online and real-time vessel data fusion method (named DeepSORVF). The DeepSORVF could pair the vessel features of AIS with visual targets. Due to the fact that reciprocal occlusion between vessel targets may readily interfere with video-based trajectory extraction, we suggested a prior knowledge-driven anti-occlusion tracking method. Meanwhile, a novel asynchronous trajectory matching method was designed for robust data fusion. Comprehensive experiments on vessel detection, vessel tracking, data fusion, and running time analysis have demonstrated the superior performance of our DeepSORVF on the newly-developed FVessel dataset. % \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.11411", "language": "en", "timestamp": "2023-02-23T02:15:55", "url": "https://arxiv.org/abs/2302.11411", "yymm": "2302" }
\section{Introduction} We study the existence and, especially, the \emph{non-existence} of physical measures in a large family of interval maps \( \widehat{\mathfrak{F}} \) which was introduced in \cite{CoaLuzMub22}. We start with a heuristic and conceptual overview of the results. Then in Section~\ref{sec:results} we give the formal definition of the family \( \widehat{\mathfrak{F}} \) and the precise technical statements of our results. In Section \ref{sec:recall} we recall the main construction and key estimates from \cite{CoaLuzMub22} which will be required, and in Sections \ref{sec:physical}-\ref{sec:density} we prove our results. \subsection{Overview of Results}\label{sec:overview} The family \( \widehat{\mathfrak{F}} \) consists of full branch maps with two orientation preserving branches, which are all in the same topological conjugacy class of uniformly expanding maps such as \( f(x)=2x\)~mod~1; in particular, they are all topologically conjugate to each other. Depending on a number of parameters, they may however exhibit quite different features: the two fixed points are always topologically repelling but may be either hyperbolic or neutral, and the branches may have critical points and/or singularities with infinite derivative. \( \widehat{\mathfrak{F}} \) contains uniformly expanding maps and well-known intermittent maps as well as many other maps which, as far as we know, have not been studied before; see \cite{CoaLuzMub22} for an extensive discussion and review of the literature. Figure \ref{fig:fig} shows some possible graphs of maps in \( \widehat{\mathfrak{F}} \); see Section \ref{sec:defn} for the formal definition. \begin{figure}[h] \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{./images/a} \end{subfigure}% \quad \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{./images/b} \end{subfigure} \quad \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{./images/c} \end{subfigure} \caption{Graph of $g$ for various possible values of the parameters.}\label{fig:fig} \end{figure} The first main result of this paper is a complete classification of the maps in \( \widehat{\mathfrak{F}} \) from the point of view of the kind of physical measure they admit (or not). In particular we will prove the following. \begin{maintheorem}\label{mainthmA} The family of maps \( \widehat{\mathfrak{F}} \) is the union of three non-empty pairwise disjoint subfamilies \begin{equation}\label{eq:subfam} \widehat{\mathfrak{F}} = \mathfrak{F} \cup \mathfrak{F}_\pm \cup \mathfrak{F}_* \end{equation} satisfying the following properties: \medskip 1) all \( g \in \mathfrak{F} \) have a physical measure equivalent to Lebesgue; 2) all \( g \in \mathfrak{F}_{\pm} \) have a physical measure supported on a repelling fixed point; 3) all \( g \in \mathfrak{F}_{*} \) are non-statistical and in particular have no physical measure. \end{maintheorem} The definitions of physical measures and non-statistical maps are given in Section \ref{sec:introphsy}, and the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) are defined explicitly in Section \ref{sec:stat} in terms of various parameters of the maps in \( \widehat{\mathfrak{F}} \). Item 1) in Theorem \ref{mainthmA}, i.e.
the existence of physical measures equivalent to Lebesgue for maps in \( \mathfrak{F} \), was proved in \cite{CoaLuzMub22}, where it was also shown that such physical measures satisfy a number of statistical properties such as decay of correlations and various limit theorems. Our focus in this paper is therefore on the complementary families \( \mathfrak{F}_{\pm} \) and \( \mathfrak{F}_{*} \). We note that the three families \( \mathfrak{F}, \mathfrak{F}_{\pm}, \mathfrak{F}_{*} \) form a partition of \( \widehat{\mathfrak{F}} \): \emph{there are no cases in which we are not able to obtain a conclusion} (although, as we shall see in the proofs, there are some boundary cases in which more sophisticated arguments are required). A natural question concerns the ``\emph{size}'' and ``\emph{structure}'' of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) inside \( \widehat{\mathfrak{F}} \), and perhaps one of the most surprising and unexpected results of this paper is that they are \emph{intermingled} and \emph{dense} in some natural subsets of \( \widehat{\mathfrak{F}} \) in some appropriate topologies. Due to the presence of the discontinuity we cannot use the standard \( C^{r}\), or even \( C^{0}\), metric on \( \widehat{\mathfrak{F}} \), since the maps are not even continuous. However there are pairs of maps \( f, g\in \widehat{\mathfrak{F}} \) whose \emph{difference} \( f-g\in C^{r}\), and which may therefore be considered as \( C^{r} \) \emph{perturbations} of one another. This observation motivates the definition of a natural extended metric on \( \widehat{\mathfrak{F}} \) defined as follows: for any \( f,g \in \widehat{ \mathfrak{F} } \) and \( r \geq 0 \) we let \begin{equation*}\label{eq:approx} d_{r}(f,g)\coloneqq \begin{cases} \|f - g \|_{C^r} &\text{ if } f-g\in C^{r} \\ \infty &\text{ otherwise}. \end{cases} \end{equation*} For simplicity we will just refer to it as the \(C^{r}\) \emph{metric} on \( \widehat{\mathfrak{F}} \)\footnote{Notice that we can ``normalize'' this extended metric to give a standard bounded metric on \( \widehat{\mathfrak{F}} \) by defining \( \tilde d_r ( f, g ) \coloneqq { d_{r} ( f ,g ) }/{(1 + d_{r} (f , g ))} \) when \( d_{r} ( f ,g )< \infty \) and \( \tilde d_{r} ( f ,g ) \coloneqq 1 \) otherwise. The metrics \( d_{r}\) and \( \tilde d_{r}\) lead to equivalent topologies and so for our purposes it does not really matter which one we use.}. There are of course many maps which are at infinite distance from each other but, as we shall see, there is nevertheless a very rich structure in every neighbourhood of every map \( f \in \widehat{ \mathfrak{F} } \), as the following surprising and remarkable result shows. \begin{maintheorem}\label{thm:densityC0} Each of the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is \(C^0\)-dense in \(\widehat{ \mathfrak{F}}\). \end{maintheorem} Theorem \ref{thm:densityC0} says, more precisely, that any \( f \in \widehat{ \mathfrak{F} } \), belonging to any of the classes \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\), can be approximated arbitrarily closely in the \( C^{0}\) metric by maps in the other two classes. For example, maps with a physical measure equivalent to Lebesgue, including uniformly expanding maps, can be \(C^0\) approximated both by maps with physical measures given by Dirac-delta measures on the fixed points and also by maps without physical measures.
Similarly, maps without physical measures can be \(C^0\) approximated both by maps with a physical measure equivalent to Lebesgue and by maps with Dirac-delta physical measures on the fixed points. Theorem \ref{thm:densityC0} will be proved as a special case of a more technical but much more general result, see Theorem~\ref{thm:density-main} below, which also implies that maps in the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) can be approximated by maps in the other families in arbitrarily regular metrics, depending on the maps. In particular, for the case \( r = 1 \) we define the set \[ \widetilde{ \mathfrak{F}}\coloneqq \{ f \in \widehat{ \mathfrak{F}}: \text{both fixed points are neutral fixed points}\} \] and we have the following perhaps even more surprising and remarkable result. \begin{maintheorem}\label{thm:densityC1} Each of the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is \(C^1\)-dense in \(\widetilde{ \mathfrak{F}}\). \end{maintheorem} Theorem \ref{thm:densityC1} says that every \( C^{1}\) neighbourhood of every map \( f \in \widetilde{ \mathfrak{F}} \) contains maps belonging to \emph{all three families} \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\). In particular every map with a physical measure equivalent to Lebesgue can be \( C^{1}\) approximated by maps without physical measures, and vice-versa. Finally, we address the question of how \emph{persistent} the dynamics corresponding to the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is, i.e. how \emph{``large''} each family is and how \emph{``robust''} it is with respect to perturbations. Given Theorems \ref{thm:densityC0} and \ref{thm:densityC1} above, none of these families can be \emph{open} in either the \( C^{0}\) or the \( C^{1} \) metrics as defined above; however, we will see that there are several ways in which we can argue that each family is \emph{large} in some sense. In particular, letting \( \operatorname{supp} f \coloneqq \{ x : f(x) \neq 0 \} \) denote the support of a function, we will prove the following result. \begin{maintheorem}\label{mainthm:open} Let \( f \in \mathfrak{F}_{*} \). If \( g \in \widetilde{\mathfrak{F}} \) is such that \( \overline{ \operatorname{ supp } ( f-g ) } \subset (-1,0)\cup (0,1), \) then \( g \in \mathfrak{F}_* \). \\ The same statement is true if we replace \( \mathfrak{ F }_* \) with either of the other two subclasses \( \mathfrak{ F }, \mathfrak{ F }_{\pm} \). \end{maintheorem} Theorem \ref{mainthm:open} shows that the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) are open under a particular class of perturbations which is slightly more restrictive than those allowed under the general \( C^{r}\) metric defined above but still quite substantial. This is of course particularly remarkable when applied to maps in \( \mathfrak{F}_{\pm}\), whose physical measures are Dirac-delta measures on fixed points, and to maps in \( \mathfrak{F}_* \), which have no physical measures. As far as we know, there are no previously known examples of systems with non-statistical behaviour which is as robust as this. There are also other ways in which the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) are persistent, but these need to be formulated in terms of the parameters which define the maps, and we therefore postpone the statements to Section \ref{sec:open} below.
\subsection{Physical Measures}\label{sec:introphsy} In this section we give the precise definitions of physical measures and non-statistical dynamics and give a brief discussion of relevant previously existing results. \subsubsection{Topological and Statistical Limits} For completeness we start with some general background notions which help to motivate the definitions and the results. Given a set \( X \), a map \( f : X \to X \) determines a Dynamical System by defining the \(n\)'th iterate \( f^{n}=f\circ \cdots \circ f\) as the \(n\)-fold composition of \( f\) with itself, and for any \( x_{0}\in X \) we define the \(n\)'th iterate of \( x_{0} \) under \( f \) as the image \( x_{n}=f^{n}(x_{0})\). We can think of \( X \) as a \emph{state space}, or the collection of all possible configurations of some system, the map \( f \) as a \emph{force} or \emph{mechanism} which acts on the system, and \( x_{0}\) as an \emph{initial condition}. Then the sequence \[ \mathcal O(x_{0}) :=\{x_{n}\}_{n=0}^{\infty}, \] which we call the (forward) \emph{orbit} of \( x_{0}\), denotes the \emph{evolution in time} of the system starting from the initial condition \( x_{0}\). The main objective of the Theory of Dynamical Systems is essentially that of describing the structure of orbits and how they depend on the initial condition \( x_{0}\) and on the map \( f \). If \( X \) is a \emph{topological space} we can define the \emph{omega-limit} set \( \omega(x):= \{y\in X: y \text{ is an accumulation point of the sequence } \mathcal O(x)\}, \) which gives information about the asymptotic nature of the orbit from a \emph{topological} point of view. If \( X \) is a \emph{measure space} we can define the Dirac-delta point mass measures \( \delta_{x_{k}}\) at each point of the orbit and use this to describe the orbit by equidistributing the mass on the first \( n \) terms of the orbit, giving a sequence \[ \mu_{n} (x_{0}) \coloneqq \frac{1}{n} \sum_{ k = 0 }^{ n - 1 } \delta_{x_{k}} \] of probability measures associated to the initial condition \( x_{0}\). If this sequence \emph{converges}, for example in the weak-star topology, i.e. if there exists a probability measure \( \mu \) such that \begin{equation}\label{eq:conv} \mu_{n}(x_{0}) \to \mu, \end{equation} then \( \mu_{n}\) approximates \( \mu \) but, most importantly, for all sufficiently large \( n \geq 0 \), \( \mu\) \emph{approximates} \( \mu_{n}\), and therefore \( \mu \) gives an asymptotic description of the orbit from a \emph{statistical} point of view. Notice that if \( X \) is a metric space we can describe each orbit \emph{from both a topological and a statistical point of view}. In many cases these two descriptions are intuitively consistent with one another: for example, if \( X \) is a complete metric space, \( f \) is a contraction, and \( p \in X \) is the unique fixed point of \( f\), then it is easy to check that for any initial condition \(x_{0}\in X \) the points of the orbit of \( x_{0}\) converge to \( p \) and therefore \( \omega(x_{0})=\{p\}\) and \( \mu_{n}(x_{0}) \to \delta_{p}\). Similarly, for irrational circle rotations it is relatively easy to check that the orbit of every point \( x_{0}\in \mathbb S^{1}\) is dense in \( \mathbb S^{1} \), and therefore \( \omega(x_{0})=\mathbb S^{1}\), and it is a classical (but non-trivial) result that every orbit is uniformly distributed and therefore \( \mu_{n}(x_{0}) \to \) \emph{Lebesgue} on \( \mathbb S^{1}\).
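As a purely illustrative aside (this is not part of the formal development, and all numerical choices below are arbitrary), the convergence in \eqref{eq:conv} can easily be observed numerically. The following short Python sketch computes the mass that the empirical measures \( \mu_{n}(x_{0}) \) of an irrational circle rotation assign to a fixed test interval and compares it with the Lebesgue measure of that interval.
\begin{verbatim}
# Empirical measures mu_n(x0) = (1/n) * sum_{k<n} delta_{x_k} for the
# circle rotation f(x) = x + alpha (mod 1), with alpha irrational.
# We record the mass mu_n(x0) assigns to a test interval [a, b) and
# compare it with Leb([a, b)) = b - a.

def rotation(x, alpha):
    return (x + alpha) % 1.0

def empirical_mass(x0, n, a, b, alpha):
    """Mass that mu_n(x0) assigns to the interval [a, b)."""
    x, hits = x0, 0
    for _ in range(n):
        if a <= x < b:
            hits += 1
        x = rotation(x, alpha)
    return hits / n

alpha = 2 ** 0.5 - 1   # an irrational rotation number (illustrative choice)
x0 = 0.0               # any initial condition works for rotations
for n in (10**3, 10**4, 10**5):
    print(n, empirical_mass(x0, n, 0.25, 0.5, alpha))   # approaches 0.25
\end{verbatim}
In this example the observed frequencies stabilize, in agreement with the fact that \( \mu_{n}(x_{0}) \to \) Lebesgue for every initial condition.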
However, this is not always the case: sometimes, as in several examples which will be discussed below and in some of the cases mentioned in our results, the topological and statistical descriptions depend on the initial condition and yield quite different pictures of what we consider as the ``typical'' dynamics of the system. \subsubsection{Definition of Physical Measures} Given a probability measure \( \mu \) we define its \emph{basin} \[ \mathscr{B}_{\mu} \coloneqq \left\{ x : \mu_{n}(x) \to \mu \right\}. \] The set \( \mathscr{B}_{\mu} \) may very well be empty but, if \( \mathscr{B}_{\mu} \neq \emptyset \), then a natural question is to study its size. Suppose \( X \) is a measure space with a normalized reference (Lebesgue) measure denoted by \( Leb\). \begin{defn} A probability measure \( \mu \) is a \emph{physical measure} (with \emph{full measure basin}) for \( f \) if \[ Leb(\mathscr{B}_{\mu})=1. \] \end{defn} More generally, we say that \( \mu \) is a physical measure if \( Leb(\mathscr{B}_{\mu}) > 0 \), but the examples we consider in this paper will always have full measure basin, so for simplicity, unless otherwise specified, we will always implicitly assume that physical measures have full measure basins. \subsubsection{Physical measures on attractors} There are plenty of examples of dynamical systems with physical measures, for example: the Dirac-delta \( \delta_{p}\) at the fixed point of a contraction (in which case every point actually belongs to the basin); Lebesgue measure for irrational rotations (in which case, again, every point belongs to the basin) and also for piecewise affine expanding circle maps such as \( f(x) = 2x \) mod 1 (in which case the basin has full Lebesgue measure but its complement is non-empty and indeed consists of an uncountable set of points). It follows by a classical theorem of Birkhoff \cite{Bir31} that if a probability measure \( \mu \) is invariant (\( \mu(f^{-1}(A))=\mu(A)\) for every measurable set \( A \)), ergodic (\( f^{-1}(A)=A \Rightarrow \mu(A)=0\) or \( \mu(A) =1 \)), and \emph{absolutely continuous with respect to Lebesgue} (\(Leb(A)=0 \Rightarrow \mu(A)=0 \)), then \( \mu \) is a physical measure, and if \( \mu \) is \emph{equivalent to Lebesgue} (\(Leb(A)=0 \Leftrightarrow \mu(A)=0\)), then \( \mu \) is a physical measure with full measure basin. Such ergodic invariant measures equivalent to, or absolutely continuous with respect to, Lebesgue have been proved to exist in many classes of \emph{uniformly and non-uniformly expanding maps}. There is a huge literature, so we just mention a few significant papers, such as \cites{AlvLuzPin05,AraLuzVia09,Buz00,DiaHolLuz06,LasYor73,Pin06,Tsu00c,Tsu01a,Tsu05}, but highlight in particular those related to one-dimensional maps with neutral fixed points \cites{Dol04,BahGalNis18,BahSau16,BalTod16,BruLep13,BruTerTod19,CamIso95,CoaHolTer19,CriHayMar10,Cui21,Dua12,FisLop01,FreFreTod13,FreFreTod16,FroMurSta11,Ino00,Kor16,LivSauVai99,Mel08,MelTer12,NicTorVai16,Pia80,PolSha09,PolWei99,PomMan80,Ruz15,Ruz18,Sar01,SheStr13,Ter13,Ter15,Tha00,Tha05,Tha80,Tha83,Tha95,Tha95a,You99,Zwe00,Zwe03,Zwe98}.
In higher dimensions, where we may have a non-trivial attractor \( \Lambda\) of zero Lebesgue measure, if \( \Lambda\) satisfies some hyperbolicity conditions then pioneering work of Anosov, Sinai, Ruelle, and Bowen in the 1960s and 1970s showed that the absolute continuity can be replaced by a weaker but more technical condition of \emph{absolute continuity of conditional measures} and \emph{absolute continuity of the stable foliation}, and measures satisfying such properties are often referred to as \emph{Sinai-Ruelle-Bowen}, or \emph{SRB}, measures \cites{AnoSin67,Bow75, Rue76, Sin72}. Over the following decades there has been a tremendous amount of research extending their results to increasingly general classes of systems; we mention here just some of the more recent papers \cites{AlvDiaLuz17,Bur21,BuzCroSar22,CliLuzPes17, CliLuzPes23,Vec22} and refer the reader to those for a more comprehensive list of references, and for the formulation of a far-reaching conjecture of Palis to the effect that ``typical'' dynamical systems have (a finite number of) physical measures \cites{Pal08, Pal15}. \subsubsection{Physical measures on repellors} A rarer, but in many ways more interesting and intriguing, class of examples of physical measures consists of systems in which the physical measure is supported on an invariant set \( \Lambda\) which has zero Lebesgue measure and is topologically \emph{repelling}, in the sense that points close to \( \Lambda\) are mapped \emph{away} from \( \Lambda\) rather than \emph{towards} \( \Lambda\). The simplest example of this phenomenon, which is also very relevant for the class of maps which we consider in this paper, is given by the well-known Manneville-Pomeau \emph{intermittency map} \( f(x)=x+x^2\) mod 1 \cite{PomMan80}. It is easy to see that \( f(0)=0 \), and so the origin is a fixed point, and that \( f'(x) = 1+ 2x\), so that \( f'(0)=1\) and \(f'(x)>1\) for all \( x> 0 \). In particular every point in any small neighbourhood of the origin, except the origin itself, eventually leaves such a neighbourhood, and in this sense the origin is topologically repelling. However, it can be shown that, asymptotically, typical orbits spend most of their time in arbitrarily small neighbourhoods of the origin, and in fact \emph{the empirical measures \( \mu_n(x)\) converge to the Dirac-delta measure \( \delta_0\) at the origin}, which is therefore a physical measure with full basin; see \cite{Alv20} and some of the papers mentioned above. Notice that the Manneville-Pomeau intermittency map is included in our family \( \widehat{\mathfrak F}\) (see the discussion in \cite{CoaLuzMub22}), and the result just mentioned is a special case of our Theorem \ref{thm:phys-measures} below. In the 1990s, Hofbauer and Keller proved the even more surprising result that there are many examples in smooth quadratic unimodal maps for which a similar phenomenon occurs \cites{HofKel90, HofKel95}. \subsubsection{Non-Statistical Maps}\label{sec:nonstat} Equally, if not even more, interesting are dynamical systems which \emph{do not admit any physical measure.} The simplest, and somewhat trivial, way in which this can occur is when the basin of every probability measure \( \mu \) has zero Lebesgue measure, such as for the identity map, for which \( \mu_{n}(x) \to \delta_{x}\) for every \( x \), or for rational circle rotations, for which all orbits are periodic.
A much more sophisticated and interesting way in which a map can fail to have physical measures is when there exists a full measure set of points for which the sequence of measures \( \mu_{n}(x)\) \emph{does not converge}; in this case we say that the orbit of \( x \) is \emph{non-statistical}. More formally, letting \[ \mathscr N:=\{x\in X: \mu_n(x) \text{ does not converge} \} \] we can make the following definition. \begin{defn} A map \( f: X \to X \) is \emph{non-statistical} if \[ Leb(\mathscr{N})=1. \] \end{defn} Notice that non-convergence of \( \mu_{n}(x)\) means that there must exist at least two measures, \( \hat\mu, \tilde\mu\), and two subsequences \( n_{i}\to \infty, n_{j}\to \infty\), such that \( \mu_{n_{i}}(x) \to \hat \mu\) and \( \mu_{n_{j}}(x) \to \tilde \mu\). This means that there is an infinite sequence of times for which the statistics of the orbit is extremely well described by the measure \( \hat \mu\) and another sequence of times for which the statistics of the orbit is extremely well described by the measure \( \tilde \mu\). We can think of these sequences as defining a series of \emph{timescales} at which we see completely different statistical behaviour, and therefore the observed frequency of visits to any particular region does not stabilize as \( n \to \infty\). There is quite a large bibliography of research exploring the notion of the non-existence of physical measures from different points of view and giving a number, albeit quite limited, of examples \cites{AarThaZwe05,AraPin21,Bac99,BarKirNak20,BerBie22,ColVar01,CroYanZha20,Her18,HofKel90,HofKel95,HuYou95,Ino00,JarTol04,KanKirLi16,KarAsh11,Kel04,KirLiNak22,KirLiSom10,KirNakSom19,KirNakSom21,KirNakSom22,KirSom17,Kle06,LabRod17,Pal15,Tak08,Tak94,Tal20,Tal22,Zwe02}. We give here only a short and non-comprehensive review of some of these and refer the reader to the original papers for additional information. Arguably the first example of a non-statistical system is the \emph{Bowen eye}, attributed by Takens~\cite{Tak94} to Bowen in the early 70s. The Bowen eye is a two-dimensional vector field with an eye-like region whose boundary is formed by two saddle connections between two fixed points; under carefully chosen conditions, orbits tend to oscillate between the two fixed points in a non-statistical way. This example is somewhat ``mild'' because the dynamics is very simple, but Hofbauer and Keller \cites{HofKel90, HofKel95, Kel04} later showed that there are (uncountably) many parameters in the logistic family \( f_\lambda (x) \coloneqq \lambda x ( 1 - x ) \) for which \( f_{\lambda} \) is \emph{topologically mixing} but \emph{non-statistical}. Very recently, Talebi \cite{Tal22} generalized and extended this result to the setting of complex rational maps. Another approach to the construction of non-statistical examples, or at least of examples with a positive Lebesgue measure set of non-statistical points, is by constructing \emph{wandering domains} with non-statistical dynamics \cites{ColVar01, KirSom17,BerBie22}. A further example of non-statistical behaviour appears in \cite{CroYanZha20}, where the authors construct a skew product \( F : \mathbb{T}^2 \times \mathbb{R} \to \mathbb{T}^2 \times \mathbb{R} \) exhibiting non-statistical behaviour, using the fact that skew translations over Anosov diffeomorphisms share properties with Brownian motion.
Some results related to ours, for interval maps with two neutral fixed points, were also obtained in \cites{Zwe00, AarThaZwe05} in a somewhat more abstract setting and with a particular focus on the existence and properties of a sigma-finite invariant measure. As far as we know, none of the existing results considers a class of maps anywhere near as large as the family \( \widehat{\mathfrak{F}} \) considered here, nor gives such a complete and systematic characterization of the various kinds of physical measures as given in Theorem \ref{mainthmA}. Most importantly, none of the existing results comes anywhere close to constructing examples of topologically mixing maps without physical measures which are as \emph{prevalent} and \emph{persistent} as those described in Theorems \ref{thm:densityC0}, \ref{thm:densityC1}, \ref{mainthm:open}. \section{Statement of Results}\label{sec:results} We now give the precise definition of the family of maps \(\widehat{ \mathfrak{F}}\) and the subfamilies \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\), as well as some more general technical theorems which imply the main theorems in Section \ref{sec:overview} above. \subsection{Doubly Intermittent Full Branch Maps}\label{sec:defn} We consider the class of maps introduced in~\cite{CoaLuzMub22}. For completeness we recall the precise definitions. Let \( I, I_-, I_+\) be compact intervals, let \( \mathring I, \mathring I_-, \mathring I_+\) denote their interiors, and suppose that \(I = I_{-}\cup I_{+} \) and \( \mathring I_-\cap \mathring I_+=\emptyset\). \begin{description} \item{\namedlabel{itm:A0}{\textbf{(A0)}}} \( g: I \to I \) is \emph{full branch}: the restrictions \(g_{-}: \mathring I_{-}\to \mathring I\) and \(g_{+}: \mathring I_{+}\to \mathring I\) are orientation preserving \( C^{2} \) diffeomorphisms, and the only fixed points are the endpoints of \( I \). \end{description} To simplify the notation we assume that \( I=[-1, 1], I_{-}=[-1, 0], I_{+}=[0,1]\), but our results will be easily seen to hold in the general setting. For \( \iota > 0 \), we let \( U_{0-}:=(-\iota, 0], U_{0+}:=[0, \iota), U_{-1}:=g(U_{0+}), U_{+1}:=g(U_{0-}) \) be one-sided neighbourhoods of the endpoints of the intervals \( I_{-}, I_{+} \). \begin{description} \item{\namedlabel{itm:A1}{\textbf{(A1)}}} There exist constants \( \ell_1,\ell_2 \geq 0 \) and \( \iota, k_1,k_2 , a_1,a_2,b_1,b_2 > 0 \) such that, if \( \ell_1,\ell_2> 0 \) and \( k_1,k_2 \neq 1\), \begin{equation}\label{eqn_1} g(x) = \begin{cases} x+b_1{(1+x)}^{1+\ell_1} & \text{in } U_{-1}, \\ 1-a_1{|x|}^{k_1} & \text{in } U_{0-}, \\ -1+a_2{x}^{k_2} & \text{in } U_{0+}, \\ x-b_2{(1-x)}^{1+\ell_2} & \text{in } U_{+1}. \end{cases} \end{equation} If \( \ell_{1}=0 \) and/or \( \ell_2=0\) we replace the corresponding lines in~\eqref{eqn_1} with \( g|_{U_{\pm 1}}(x) \coloneqq \pm 1 + (1 + b_1) ( x + 1) \mp \eta (x), \) where \( \eta \) is \( C^2\), \(\eta(\pm 1)= 0, \eta'(\pm 1)=0\), and \( \eta''(x)>0\) on \( U_{-1}\) and \( \eta''(x)<0\) on \( U_{+1} \). If \( k_1 = 1 \) and/or \( k_2 = 1 \), then we replace the corresponding lines in~\eqref{eqn_1} with the assumption that \( g'(0_-) = a_1>1\) and/or \( g'(0_+) = a_2>1 \) respectively, and that \( g \) is monotone in the corresponding neighbourhood, which makes the definition much less restrictive.
\end{description} It is easy to see that the definition in \eqref{eqn_1} yields maps with dramatically different derivative behaviour depending on the values of \( \ell_1, \ell_2, k_1, k_2\), including having neutral or expanding fixed points and points with zero or infinite derivative. Our final assumption can be \emph{intuitively thought of as saying that \( g \) is uniformly expanding outside the neighbourhoods \( U_{0\pm}\) and \( U_{\pm 1}\)}. This is, however, much stronger than what is needed, and therefore we formulate a weaker and more general assumption, for which we need to describe some aspects of the topological structure of maps satisfying condition \ref{itm:A0}. First of all we define \begin{equation}\label{eq:Delta0} \Delta^-_0:= g^{-1}(0,1)\cap I_- \quad\text{ and } \quad \Delta^+_0:= g^{-1}(-1,0)\cap I_+. \end{equation} Then we define iteratively, for every \( n \geq 1 \), the sets \begin{equation}\label{eq:Delta} \Delta_n^{-}:= g^{-1}(\Delta_{n-1}^{-})\cap I_{-} \quad\text{ and } \quad \Delta_n^{+}:= g^{-1}(\Delta_{n-1}^{+})\cap I_{+} \end{equation} as the \( n\)'th preimages of \( \Delta_0^-, \Delta_0^+\) inside the intervals \(I_{-}, I_{+} \). It follows from \ref{itm:A0} that \( \{ \Delta_n^{-}\}_{n\geq 0} \) and \( \{ \Delta_n^{+}\}_{n\geq 0} \) are $\bmod\;0$ partitions of \(I_{-}\) and \(I_{+}\) respectively, and that the partition elements depend \emph{monotonically} on the index in the sense that \( n > m \) implies that \( \Delta_n^{\pm}\) is closer to \( \pm 1\) than \( \Delta_m^{\pm}\); in particular, the only accumulation points of these partitions are \( -1\) and \( 1 \) respectively. Then, for every \( n \geq 1 \), we let \begin{equation}\label{eq:delta} \delta_{n}^{-}:= g^{-1}(\Delta_{n-1}^{+}) \cap \Delta_0^{-} \quad\text{ and } \quad \delta_{n}^{+}:= g^{-1}(\Delta_{n-1}^{-}) \cap \Delta_0^{+}. \end{equation} Notice that \( \{ \delta_n^{-}\}_{n\geq 1} \) and \( \{ \delta_n^{+}\}_{n\geq 1} \) are $\bmod\; 0$ partitions of \( \Delta_0^-\) and \( \Delta_0^+\) respectively, and also in these cases the partition elements depend monotonically on the index, in the sense that \( n > m \) implies that \( \delta_n^{\pm}\) is closer to \( 0 \) than \( \delta_m^{\pm}\) (and in particular the only accumulation point of these partitions is 0). Notice, moreover, that \( g^{n}(\delta_{n}^{-})= \Delta_{0}^{+} \) and \( g^{n}(\delta_{n}^{+})= \Delta_{0}^{-}. \) We now define two non-negative integers \( n_{\pm}\) which depend on the positions of the partition elements \( \delta_{n}^{\pm}\) and on the sizes of the neighbourhoods \( U_{0\pm}\) on which the map \( g \) is explicitly defined. If \( \Delta_0^{-} \subseteq U_{0-}\) and/or \( \Delta_0^{+} \subseteq U_{0+}\), we define \( n_{-}= 0 \) and/or \( n_{+}=0 \) respectively; otherwise we let \begin{equation}\label{eq:n+-} n_{+} := \min \{n :\delta_{n}^{+} \subset U_{0+} \} \quad\text{ and } \quad n_{-} := \min \{n :\delta_{n}^{-} \subset U_{0-} \}. \end{equation} We can now formulate our final assumption as follows. \begin{description} \item[\namedlabel{itm:A2}{\textbf{(A2)}}] There exists a \( \lambda > 1 \) such that for all \( 1\leq n\leq n_{\pm}\) and for all \( x \in \delta_n^{\pm} \) we have \( (g^n)'(x) > \lambda\).
\end{description} Following \cite{CoaLuzMub22}, we let \[ \widehat{\mathfrak{F}} \coloneqq \{ g : I \to I \text{ which satisfy \ref{itm:A0}-\ref{itm:A2}}\}. \] The class \( \widehat{\mathfrak{F}} \) contains many maps which have been studied in the literature, including uniformly expanding maps and various well-known intermittency maps with a single neutral fixed point; we refer the reader to \cite{CoaLuzMub22} for a detailed literature review. \subsection{Physical measures on repelling fixed points and non-statistical dynamics}\label{sec:stat} It is proved in \cite{CoaLuzMub22} that every \( g \in \widehat{\mathfrak{F}} \) admits a unique (up to scaling) \( \sigma\)-\emph{finite ergodic invariant measure} \( \hat \mu\) \emph{equivalent to Lebesgue} and that many properties depend on the constants \[ \beta^- := \ell_1 k_2, \quad \beta^+ := \ell_2 k_1, \quad\text{ and } \quad \beta := \max \{ \beta^+, \beta^- \}. \] Notice that \( \beta^-, \beta^+\in [0, \infty) \) and can take any value in the allowed range, depending on the values of \( \ell_1, \ell_2, k_1, k_2\). They determine the level of \emph{``stickiness''} of the fixed points \( -1\) and \(+1\) respectively, given by the combination of the constants \( \ell_1, \ell_2\), which determine the order of tangency of the graph of \( g \) with the diagonal, and the constants \( k_1, k_2\), which give the order of the singular or critical points. The larger the values of \( \beta^-, \beta^+ \), the \emph{``stickier''} the corresponding fixed points are. We can now define explicitly the subfamilies in \eqref{eq:subfam} by letting \begin{equation} \label{eq:def-subclasses} \mathfrak{F} \coloneqq \{\beta \in [0,1)\} , \qquad \mathfrak{F}_\pm \coloneqq \{\beta\geq 1, \ \beta^- \neq \beta^+ \}, \qquad \mathfrak{F}_* \coloneqq \{\beta\geq 1, \ \beta^- = \beta^+ \}. \end{equation} It is clear that these families are pairwise disjoint and that their union is exactly \( \widehat{\mathfrak{F}} \). Notice that \( \beta \in [0,1) \) implies that \emph{both} \( \beta^{-}, \beta^{+} \in [0,1) \), whereas \( \beta\geq 1 \) only implies that \emph{at least one} of \( \beta^-, \beta^+ \geq 1 \). It is proved in \cite{CoaLuzMub22} that the \( \sigma\)-finite invariant measure \( \hat\mu\) is finite, and can therefore be rescaled to a probability measure \( \mu\), \emph{if and only if} \( \beta\in [0,1)\). \begin{thm}[\cite{CoaLuzMub22}]\label{thm:CoaLuzMub22} If \( g \in \mathfrak{F} \) then \( g \) admits a physical measure equivalent to Lebesgue. \end{thm} As mentioned above, this proves 1) in Theorem \ref{mainthmA}. We are therefore interested in the families \( \mathfrak{F}_\pm \) and \( \mathfrak{F}_* \), neither of which can contain any map with a physical measure equivalent to Lebesgue. The maps in \( \mathfrak{F}_\pm \) are those for which one fixed point is \emph{stickier} than the other, whereas the maps in \( \mathfrak{F}_*\) are those for which the stickiness is the same, at least as far as it can be measured by the constants \( \beta^-, \beta^+\). It turns out that the typical statistical behaviour is completely different in these two cases. \begin{thm}\label{thm:phys-measures} If \( g\in \mathfrak{F}_\pm \) then \( g \) admits a physical measure with full basin. Moreover: \\ 1) if \( \beta^{-}> \beta^{+}\), the physical measure is the Dirac-delta measure \( \delta_{-1}\) on the fixed point \(-1\); \\ 2) if \( \beta^{+}> \beta^{-}\), the physical measure is the Dirac-delta measure \( \delta_{1}\) on the fixed point \(+1\).
\end{thm} \begin{thm}\label{thm:no-phys-measures} If \( g\in \mathfrak{F}_* \), then for Lebesgue almost every \( x\in I \) the sequence \( \mu_n(x) \) does not converge. In particular, \( g \) is non-statistical and admits no physical measures. \end{thm} Theorems \ref{thm:CoaLuzMub22}, \ref{thm:phys-measures}, and \ref{thm:no-phys-measures}, clearly imply Theorem \ref{mainthmA}. \begin{rem} It is interesting to note that, contrary to what might be expected, there are no cases in which the physical measures are given by a convex combination \( t\delta_{-1}+ (1-t) \delta_1\) of the Dirac-delta measures on the two fixed points. One may expect that this may be achieved at least for some carefully chosen values of the multiplicative paramaters \( a_1, a_2, b_1. b_2\) in \eqref{eqn_1} but in fact our results show that these play no significant role, at least at this level of the description of the dynamics. \end{rem} \subsection{Density of \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) in \( \widehat{\mathfrak{F}} \)} \label{sec:dense} We now address the issue of the density of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\), as stated in Theorems \ref{thm:densityC0} and~\ref{thm:densityC1}. We will actually state a much more general result which says that each map \( f\in \widehat{\mathfrak{F}} \) can be approximated arbitrarily closely in the \( C^{r}\) topology by maps in \emph{any} of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\), for some \( r \) which depends both on \( f \) and on the family in which we want to approximate \( f \). We recall first of all the \emph{ceiling function} \( \lceil x \rceil :=min\{\kappa\in \mathbb N: x \leq \kappa\}. \) Then, for every \( f \in \widehat{\mathfrak{F}} \), we define \[ r_{\pm} \coloneqq r_{\pm}(f) \coloneqq \max \{ \lceil \ell_1 \rceil, \lceil \ell_2 \rceil \}, \qquad \tilde r \coloneqq \tilde r(f) \coloneqq \begin{cases} \lceil 1 / k_{2}\rceil & \text{if } 0 \leq \beta^+ < 1 \leq \beta^- \\ \lceil 1 / k_{1}\rceil & \text{if } 0 \leq \beta^- < 1 \leq \beta^+ \\ \min\{ \lceil 1 / k_{2}\rceil \lceil 1 / k_{1}\rceil \} &\text{otherwise, } \end{cases} \] and \[r_* \coloneqq r_*(f) \coloneqq \begin{cases} \min \{ \lceil \ell_1 \rceil, \lceil \ell_2 \rceil \} & \text{if } \beta \in [0,1) \\ \lceil \ell_2 \rceil &\text{if } \beta^+ < 1 \leq \beta^- \\ \lceil \ell_1 \rceil &\text{if } \beta^- < 1 \leq \beta^+ \\ \min \left\{ \lceil \ell_{2} k_{1} / k_2 \rceil, \lceil \ell_1 \rceil \right\} & \text{if } \beta^+, \beta^- \geq 1 \text{ and } k_2 \geq k_1 \\ \min \left\{ \lceil \ell_{1}k_{2} / k_1 \rceil, \lceil \ell_2 \rceil \right\} & \text{if } \beta^+, \beta^- > 1 \text{ and } k_1 \geq k_2. \end{cases} \] \medskip\noindent Notice that \( r_{\pm}, r_{*}, r \) are all well defined non-negative \emph{integers} because of the way there are defined using the ceiling function. Moreover, \( r_{*}=0\) if and only if at least one of the fixed points is hyperbolic, and \( r_{\pm} = 0 \) if and only if both fixed points are hyperbolic (e.g. if \( f \) is uniformly expanding). If both fixed points are neutral then we necessarily have \( r_{\pm}, r_{*}, r >0 \) and therefore \( r_{\pm}, r_{*}, r \geq 1 \), since they are all integers and defined in terms of ceiling functions. 
\begin{thm} \label{thm:density-main} For every \( f \in \widehat{\mathfrak{F}} \) and every \( \varepsilon > 0 \) there exist \( \tilde{f} \in \mathfrak{F} \), \( f_{\pm} \in \mathfrak{F}_{\pm} \) and \( f_{*} \in \mathfrak{F}_{*} \) such that \begin{equation*} \label{eq:f-g-c-r-close} d_{\tilde r} (f , \tilde f) < \varepsilon, \qquad d_{ r_{\pm} } ( f, f_{\pm} ) < \varepsilon, \qquad d_{ r_* } ( f, f_* ) < \varepsilon. \end{equation*} \end{thm} \medskip Theorem \ref{thm:density-main} immediately implies Theorems \ref{thm:densityC0} and~\ref{thm:densityC1} since we always have \( r_{\pm}, r_{*}, \tilde r \geq 0 \) and, as mentioned in the previous paragraph, if \( f \in \widetilde{\mathfrak{F}} \) (i.e. if both fixed points are neutral) then we always have \( r_{\pm}, r_{*}, \tilde r \geq 1 \). However, it also shows that in some cases we can have approximations in much higher topologies. For example, consider a map \( f\in \mathfrak{F}_{*} \), which therefore has no physical measure. By definition we have \( \ell_{1}k_{2}= \ell_{2}k_{1} = \beta\), where \( \beta \geq 1 \) is arbitrary. For definiteness let us suppose that \( \beta =1 \); then, given any \emph{arbitrarily large} positive integer \( R \), there exists a map \( f\in \mathfrak{F}_{*} \) such that \( \ell_{1} = \ell_{2} = R \) and \( k_{1} = k_{2} = 1/R\). This implies \( \tilde r=r_{\pm}=R\) and therefore, from Theorem \ref{thm:density-main}, we get that the map \( f \), which does not have any physical measure, can be approximated arbitrarily closely in the \( C^{R}\) topology by maps in \( \mathfrak{F} \), which have a physical measure equivalent to Lebesgue, \emph{and} by maps in \( \mathfrak{F}_{\pm} \), which have physical measures which are Dirac-delta measures on a fixed point. Notice that we do not need to consider \( r_{*}\) since, taking \( f_{*} = f \), the last approximation is trivial. \subsection{``Openness'' of \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) in \( \widehat{\mathfrak{F}} \)}\label{sec:open} Finally, it just remains to discuss the ``openness'' of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) as described in Theorem \ref{mainthm:open}. Now that we have the formal definitions of the maps in \( \widehat{\mathfrak{F}} \), the statement of Theorem \ref{mainthm:open} is actually almost immediate, and therefore we just give the proof. \begin{proof}[Proof of Theorem \ref{mainthm:open}] By assumption the map \( g \) is in the class \( \widehat{ \mathfrak{ F } } \) and, since \( \overline{ \operatorname{ supp } ( f - g ) } \subset ( -1 , 0 ) \cup ( 0, 1 ) \), we necessarily have that \( g \) satisfies \ref{itm:A1} with the same parameters as \( f \). Thus \( \beta^+(g) = \beta^+(f) \), \( \beta^-(g) = \beta^- (f) \), and so \( g \) must lie in the same subclass \( \mathfrak{F}, \mathfrak{F}_{\pm}, \mathfrak{F}_* \) as \( f \). \end{proof} We also mention, without giving formal statements, a couple of other natural ways in which maps in \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) can be perturbed without falling outside of their original family, essentially by perturbing some of the parameters through which they are defined. Notice first of all that the conditions which determine whether \( g \) belongs to \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), or \( \mathfrak{F}_*\), do not depend on the constants \( a_{1}, a_{2}, b_{1}, b_{2}\), and therefore we can choose these arbitrarily without changing the values of \( \beta^{-}, \beta^{+}\).
Sufficiently small perturbations of these parameters do not invalidate condition \ref{itm:A2}, and therefore each subfamily \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is also ``\emph{open}'' in the sense that there exists an open neighbourhood of the parameters \( a_{1}, a_{2}, b_{1}, b_{2}\) for which the corresponding maps still belong to the same subfamily. In addition to the perturbations mentioned above, which do not change the values of the parameters \( \beta^-, \beta^+\), we can also perturb the parameters \( \ell_1, \ell_2, k_1, k_2\) which make up \( \beta^-, \beta^+\). This may of course affect which subfamily the perturbed map belongs to, as the subfamilies are precisely defined in terms of the values of \( \beta^-, \beta^+\), and indeed for these kinds of perturbations the situation is slightly different depending on which subfamily we consider. The maps in \( \mathfrak{F} \) are characterized by the property that \( \beta^-, \beta^+ \in [0,1)\), and therefore there is an open set of sufficiently small perturbations of \( \ell_1, \ell_2, k_1, k_2\) such that this still holds, as well as condition \ref{itm:A2}, thus guaranteeing that the perturbed map is still in \( \mathfrak{F} \). Similarly, maps in \( \mathfrak{F}_{\pm}\) are characterized by the property that at least one of \( \beta^-, \beta^+\) is \( \geq 1 \) and that \( \beta^- \neq \beta^+ \), and so again there is a large set of perturbations of \( \ell_1, \ell_2, k_1, k_2\) which preserve these conditions. Notice, however, that this may not always contain an open set of parameters, for example if \( \beta^-<1\) and \( \beta^+=1\), in which case we can only perturb \( \ell_2\) and \( k_1\) in such a way that \( \beta^+ \) does not decrease. Finally, maps in \( \mathfrak{F}_*\) are defined in the most restrictive way since they require \( \beta^-= \beta^+ \), and this condition is not preserved for an open set of choices of the parameters \( \ell_1, \ell_2, k_1, k_2\). Nevertheless, it is still a relatively large and persistent family since we can define a \begin{quote} \begin{center} \emph{three-parameter family completely contained in \( \mathfrak{F}_* \)}: \end{center} \end{quote} for \emph{any} \( \beta\geq 1, s,t> 0 \) there is a map \( g \) with \( \ell_1=s, \ell_2=t, k_1=\beta/t, k_2=\beta/s \), which implies that \( \beta^-= \ell_1 k_2= \ell_2 k_1 = \beta^+\), and thus \( g\in \mathfrak{F}_* \). \section{The Induced Map} \label{sec:recall} In this section we recall some details of the construction of the induced map carried out in \cite{CoaLuzMub22}, as well as a key estimate from that paper (see Proposition \ref{prop:tail-of-tau} below) which will play a crucial role in our proofs. \subsection{Topological construction} We recall first of all from \cite{CoaLuzMub22} the topological structure of the \emph{first return maps} on the intervals \( \Delta_0^-, \Delta_0^+ \) defined in \eqref{eq:Delta0}.
From the definitions of the sets \( \Delta_{n}^{\pm}\) and \( \delta_{n}^{\pm}\) in \eqref{eq:Delta} and \eqref{eq:delta}, and from the fact that each branch of \( g \) is a \( C^{2}\) diffeomorphism, it follows that for every \( n \geq 1 \), the maps \( g:\delta_{n}^{-} \to \Delta_{n-1}^{+} \) and \( g:\delta_{n}^{+} \to \Delta_{n-1}^{-} \) are \( C^{2}\) diffeomorphisms, and, for \( n \geq 2 \), the same is true for the maps \( g^{n-1}: \Delta_{n-1}^{-}\to \Delta_0^{-} \) and \( g^{n-1}: \Delta_{n-1}^{+}\to \Delta_0^{+} \); therefore, for every \( n \geq 1\), the maps \( g^{n}: \delta_n^{-} \to \Delta_0^+ \) and \( g^{n}: \delta_n^{+} \to \Delta_0^- \) are \( C^{2} \) diffeomorphisms. We can therefore define two \emph{full branch} maps \( \widetilde G^-:\Delta_{0}^{-} \to \Delta_{0}^{+} \) and \( \widetilde G^+:\Delta_{0}^{+} \to \Delta_{0}^{-} \) by \( \widetilde G^\pm|_{\delta_{n}^{\pm} } := g^{n}. \) Then for every \(i, j \geq 1\) we let \begin{equation} \label{eq:def-delta-i-j} \delta_{i,j}^{-} := g^{-i}(\delta^+_j) \cap \delta^-_i \quad\text{ and } \quad \delta^{+}_{i,j} := g^{-i}(\delta^-_j) \cap \delta^{+}_i. \end{equation} Then, for \( i \geq 1\), the sets \( \{ \delta_{i,j}^{-}\}_{j\geq 1} \) and \( \{ \delta_{i,j}^{+}\}_{j\geq 1} \) are partitions of \( \delta_i^-\) and \( \delta_i^+\) respectively, and so \( \mathscr P^- := \{ \delta_{i,j}^{-}\}_{i,j\geq 1} \) and \( \mathscr P^+ := \{ \delta_{i,j}^{+}\}_{i,j\geq 1} \) are partitions of \( \Delta_0^-, \Delta_0^+\) respectively, with the property that for every \( i,j \geq 1\), the maps \( g^{i+j}: \delta_{i,j}^{-} \to \Delta_{0}^{-} \) and \( g^{i+j}: \delta_{i,j}^{+} \to \Delta_{0}^{+} \) are \( C^2\) diffeomorphisms. Notice that \( i+ j \) is the \emph{first return time} of points in \( \delta_{i,j}^{-} \) and \( \delta_{i,j}^{+} \) to \( \Delta_{0}^{-} \) and \( \Delta_{0}^{+} \) respectively, and we have thus constructed \emph{two full branch first return induced maps} \( G^-:=\widetilde G^+ \circ \widetilde G^- :\Delta_{0}^{-} \to \Delta_{0}^{-} \) and \( G^+:=\widetilde G^- \circ \widetilde G^+ :\Delta_{0}^{+} \to \Delta_{0}^{+} \), for which we have \( G^-|_{\delta_{i,j}^{-} }= g^{i+j} \) and \( G^+|_{\delta_{i,j}^{+} }= g^{i+j}. \) We now focus on one of these two full branch first return maps, for definiteness \( G^{-}\) (but we could just as well take \( G^{+}\)), and for simplicity we omit the superscript from the notation and write \begin{equation}\label{eq:noindex} \Delta_{0} := \Delta_{0}^{-}, \quad \delta_{i,j} := \delta_{i,j}^{-}, \quad G := G^{-}. \end{equation} It is proved in \cite{CoaLuzMub22} that \( G: \Delta_{0}\to \Delta_{0} \) is a full branch \emph{Gibbs-Markov} map with respect to the partition \( \{ \delta_{i,j} \} \), and therefore admits an \emph{invariant ergodic probability measure} \( \hat \mu \) which is \emph{equivalent to Lebesgue} and has Lipschitz continuous density. Then, by standard results, the \emph{induced measure} \begin{equation} \label{eq:mu} \mu \coloneqq \sum_{n=0}^{\infty} g^n_*(\hat\mu|\{\tau > n\}), \end{equation} where \( \tau \) denotes the first return time to \( \Delta_{0} \) (defined precisely in \eqref{eq:def-tau-pm} below), is a \emph{sigma-finite, ergodic, \( g\)-invariant} measure which, since \( \bigcup_{n\geq 0} g^{n}(\{\tau > n \}) = I \ (\text{mod} \ 0)\) by construction, is \emph{equivalent to Lebesgue}.
It is easy to check that \( \mu(I) < \infty\) if and only if \( \tau \in L^{1}(\hat\mu)\), and it follows from Proposition 2.6 of \cite{CoaLuzMub22} (which we recall in Proposition \ref{prop:tail-of-tau} below) that \( \tau \in L^{1}(\hat\mu)\) if and only if \( \beta\in [0,1)\), i.e. if and only if \( g\in \mathfrak F \), as already mentioned above. Moreover, since \( G \) is a \emph{first return} induced map, the measure \( \mu \) does not add any mass to the inducing domain, and therefore \( \mu|\Delta_{0} = \hat \mu. \) \subsection{Inducing Times} Our arguments revolve around the \emph{distribution} of various \emph{observables} on \( \Delta_{0}\) with respect to the probability \( \hat \mu \). First we define \( \tau^\pm, \tau: \Delta_0 \to \mathbb N \) by \begin{equation}\label{eq:def-tau-pm} \tau^+(x):= \#\{1\leq i \leq \tau: g^i(x)\in I_+ \}, \quad \tau^-(x):= \#\{1\leq j \leq \tau: g^j(x)\in I_- \}, \quad \tau \coloneqq \tau^+ + \tau^-, \end{equation} where \( \tau^\pm \) \emph{count the number of iterates of \( x \) in \( I_+, I_-\) respectively before returning to~\( \Delta_0\)}. Notice that \( \tau^+|_{\delta_{i,j}} \equiv i \) and \( \tau^-|_{\delta_{i,j}} \equiv j \), and that \( \tau|_{\delta_{i,j}} \equiv i+j \) is exactly the \emph{first return time} to \(\Delta_0\). The following key technical result from \cite{CoaLuzMub22} gives the distribution of these functions. We say that \( f \sim g \) if \( f(t) / g(t) \to 1 \) as \( t \to \infty \). \begin{prop}[{\cite{CoaLuzMub22}*{Proposition 2.6}}] \label{prop:tail-of-tau} There exist constants \( C_+, C_- > 0 \) such that \begin{equation} \label{eq:dist-of-tau-p} \hat \mu( \tau^+ > t ) \sim C_+ t^{-1/\beta^+}, \qquad \hat \mu( \tau^- > t ) \sim C_- t^{-1/\beta^-}, \qquad \hat \mu( \tau > t ) \sim (C_+ + C_- )t^{-1/\beta}. \end{equation} \end{prop} We will also be interested in the associated Birkhoff sums \( \tau_{k}^{\pm}, \tau_{k}: \Delta_{0} \to \mathbb{N} \) under the \emph{induced map} \( G \), defined by \begin{equation}\label{eq:deftaupm} \tau_{k}^{-} \coloneqq \sum_{ \ell = 0 }^{ k - 1 } \tau^{-} \circ G^{ \ell}, \qquad \tau_{k}^{+} \coloneqq \sum_{ \ell = 0 }^{ k - 1 } \tau^{+} \circ G^{ \ell}, \qquad \tau_k \coloneqq \tau_k^+ + \tau_k^-. \end{equation} These give the total time which points spend in the left and right intervals respectively after \( k \) iterations of the induced map. For future reference, note that \( \tau_{k}^{-}, \tau_{k}^{+}, \tau_k\) are Birkhoff sums of \( \tau^{-}, \tau^{+}, \tau \) respectively, along the orbit of a point under the induced map \( G: \Delta_0\to\Delta_0 \), and therefore, by the ergodicity and invariance of the probability measure \( \hat\mu\) under \( G \), and since they are all non-negative observables, \begin{equation}\label{eq:int} \frac{\tau^{-}_{k}}{k} \to \int \tau^{-}d\hat\mu, \qquad\qquad \frac{\tau^{+}_{k}}{k} \to \int \tau^{+}d\hat\mu, \qquad\qquad \frac{\tau_{k}}{k} \to \int \tau d\hat\mu, \end{equation} as \( k \to \infty\), irrespective of whether the integrals are finite or not, for \( \hat\mu\)-almost every \( x\in \Delta_0\), and therefore, since \( \hat\mu\) is equivalent to Lebesgue, also for Lebesgue almost every \( x\in \Delta_0\).
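The tail estimates in Proposition \ref{prop:tail-of-tau} already suggest the dichotomy which will be exploited below: when \( \beta^{-}>\beta^{+} \) the excursions near the left fixed point are typically much longer than those near the right one. As a rough numerical illustration (and only as such: the following Python sketch samples the return times as if they were i.i.d.\ with tails of the form \eqref{eq:dist-of-tau-p}, which ignores both the constants and the dependence along orbits of \( G \)), one can compare the growth of the Birkhoff sums \( \tau_{k}^{-} \) and \( \tau_{k}^{+} \) for different values of \( \beta^{\pm} \).
\begin{verbatim}
import random, math

def sample_tau(beta):
    """Integer-valued 'return time' with tail P(tau > t) ~ t^(-1/beta)."""
    u = 1.0 - random.random()          # u is uniform in (0, 1]
    return math.ceil(u ** (-beta))

def birkhoff_sums(beta_minus, beta_plus, k):
    """Caricature of tau_k^- and tau_k^+ with independent returns."""
    tk_minus = sum(sample_tau(beta_minus) for _ in range(k))
    tk_plus = sum(sample_tau(beta_plus) for _ in range(k))
    return tk_minus, tk_plus

random.seed(0)
for k in (10**3, 10**4, 10**5):
    tm, tp = birkhoff_sums(1.5, 1.1, k)   # beta^- > beta^+ >= 1
    print(k, tm / (tm + tp))              # proportion of time "near -1"
\end{verbatim}
In this caricature the proportion \( \tau_{k}^{-}/\tau_{k} \) drifts towards \( 1 \) as \( k \) grows, which is exactly the behaviour established rigorously, for the actual orbits and for Lebesgue almost every point, in the next section.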
Proposition \ref{prop:tail-of-tau} implies that \( \tau^-, \tau^+, \tau \in L^1(\hat\mu)\) if and only if \(\beta^-, \beta^+, \beta \in [0,1)\) respectively, and therefore, from \eqref{eq:int}, we have that if \( \beta^-, \beta^+, \beta \in [0,1)\), then we have respectively that \begin{equation}\label{eq:divbeta<1} \frac{\tau^{-}_{k}}{k} \to \smallint \tau^{-}d\hat\mu < \infty, \qquad \qquad \frac{\tau^{+}_{k}}{k}\to \smallint \tau^{+}d\hat\mu < \infty, \qquad \qquad \frac{\tau_{k}}{k}\to \smallint \tau d\hat\mu < \infty, \end{equation} and if \( \beta^-, \beta^+, \beta \geq 1\), then we have respectively that \begin{equation}\label{eq:div} \frac{\tau^{-}_{k}}{k} \to \infty, \qquad \qquad \frac{\tau^{+}_{k}}{k}\to \infty, \qquad \qquad \frac{\tau_{k}}{k}\to \infty. \end{equation} \section{Proof of Theorem \ref{thm:phys-measures}} \label{sec:physical} We split the proof of Theorem \ref{thm:phys-measures} into three parts. First we show that Lebesgue almost every point spends asymptotically all its time in either \(I_{-}\) or \( I_{+}\). Then we show that in fact such orbits spend most of the time in arbitrarily small neighbourhoods of the corresponding fixed points \( -1\) and \( +1\) when measured along the subsequence \( \tau_{k}\). Finally we show that this implies that the same holds for the full sequence of iterates, thus proving the theorem. \subsection{Statistics of orbits in \(I_{-}\) and \( I_{+}\)} We now go into a bit more detail on the behaviour of the induced observables \( \tau^-_k, \tau^+_k, \tau_k\). Recall that by definition \( \tau_k= \tau^+_k+ \tau^-_k\) and therefore \begin{equation}\label{eq:tauk1} \frac{\tau^-_k}{\tau_k} + \frac{\tau^+_k}{\tau_k} = \frac{\tau_{k}^{-}+ \tau_{k}^+ }{ \tau_k} = \frac{\tau_k}{\tau_k} =1, \end{equation} where \( {\tau^-_k}/{\tau_k} \) and \( {\tau^+_k}/{\tau_k} \) are simply the proportions of time that the orbit of \( x \) spends in the left and right intervals respectively in its first \( \tau_k\) iterates, corresponding to \( k \) iterations of the induced map. The main result of this section shows that when \( \beta\geq 1 \), the largest of \( \beta^-\) and \( \beta^+\) ``gets everything''. \begin{prop} \label{prop:existence-of-physical-meaures} Suppose \( \beta \geq 1\). Then \begin{equation}\label{eq:existence-of-physical-meaures} \beta^-> \beta^+ \implies \frac{\tau^-_k}{\tau_k} \to 1 \quad \quad\text{ and } \quad \quad \beta^+ > \beta^- \implies \frac{\tau^+_k}{\tau_k} \to 1 \end{equation} for Lebesgue almost every point \( x\in \Delta_0\). \end{prop} It then follows of course from \eqref{eq:tauk1} and \eqref{eq:existence-of-physical-meaures} that when \( \beta \geq 1\) we also have \( \beta^-> \beta^+ \implies {\tau^+_k}/{\tau_k} \to 0 \) and \( \beta^+ > \beta^- \implies {\tau^-_k}/{\tau_k} \to 0,\) so Proposition \ref{prop:existence-of-physical-meaures} implies that whenever at least one of \( \beta^-, \beta^+\) is \( \geq 1 \) and \( \beta^-\neq \beta^+\), Lebesgue almost every point spends asymptotically all its time either in the left or in the right interval. \begin{proof}[Proof of Proposition \ref{prop:existence-of-physical-meaures}] We will prove the result when \( \beta^{-}> \beta^{+} \); the case \( \beta^{+}> \beta^{-} \) follows by exactly the same arguments. Notice first of all that from \eqref{eq:tauk1} we have \[ 1= \frac{\tau_{k}^{-}+ \tau_{k}^+ }{ \tau_k} = \frac{ \tau_{k}^- }{ \tau_k } + \frac{ \tau_k^+}{ \tau_k } = \frac{ \tau_{k}^- }{ \tau_k } \left( 1 + \frac{ \tau_k^+ }{ \tau_k^- } \right).
\] and it is therefore sufficient to show that \begin{equation}\label{eq:equiv} \frac{\tau^{+}_k}{ \tau_k^-} \to 0, \end{equation} as this implies \( { \tau_{k}^- }/{ \tau_k } \to 1 \) as required. By assumption \( \beta\geq 1 \) and therefore \( \beta^{-}\geq 1 \), and it is therefore sufficient to consider the following three subcases. \begin{description} \item[\textbf{1)}] \( \beta^{-}\geq 1 > \beta^+ \geq 0\). \end{description} In this case we can write \[ \frac{\tau^{+}_k}{ \tau_k^-} = \frac{\tau^{+}_k}{ k} \frac{k}{ \tau_k^-}. \] By \eqref{eq:divbeta<1} we have that \( {\tau^{+}_k}/{ k} \) is bounded and by \eqref{eq:div} we have that \( {k}/{ \tau_k^-} \to 0 \), implying that \( {\tau^{+}_k}/{ \tau_k^-} \to 0 \) and thus giving \eqref{eq:equiv}. \begin{description} \item[\textbf{2)}] \( \beta^{-}> \beta^+ > 1\). \end{description} In this case, the statements in \eqref{eq:divbeta<1} and \eqref{eq:div} do not allow us to immediately draw any definite conclusions, and we need to refer to a non-trivial result of \cite{GalHolPer21} which applies precisely to our case. Indeed, since the map \( G \) is Gibbs-Markov, the functions \( \tau^{\pm} \) are constant on each partition element \( \delta_{i,j} \), and the distribution of \( \tau^{\pm} \) is given by \eqref{eq:dist-of-tau-p}, Proposition 2.8 of \cite{GalHolPer21} applies to our setting and gives the following result. \begin{lem}[{\cite{GalHolPer21}*{Proposition 2.8}}] \label{lem:holland} For all \( \epsilon > 0 \), \[ \beta^- > 1 \implies k^{\beta^{-}-\epsilon} \lesssim \tau^{-}_{k}\lesssim k^{\beta^{-}+\epsilon} \quad\text{ and } \quad \beta^+ > 1 \implies k^{\beta^{+}-\epsilon} \lesssim \tau^{+}_{k}\lesssim k^{\beta^{+}+\epsilon}. \] \end{lem} This implies \( { \tau^{+}_k }/{ \tau_k^-} \lesssim {k^{\beta^{+}+\epsilon} }/{k^{\beta^{-}-\epsilon}} \) almost surely, which gives \eqref{eq:equiv} if \( \epsilon \) is sufficiently small (so that \( \beta^{+}+\epsilon < \beta^{-}-\epsilon \)). \begin{description} \item[\textbf{3)}] \( \beta^{-}> \beta^+ = 1\). \end{description} Lemma \ref{lem:holland} requires \( \beta^{-}\) and \( \beta^{+}\) to be \emph{strictly} greater than 1, and therefore we cannot apply the argument above completely to this setting; however, the first estimate in Lemma \ref{lem:holland} still applies to \( \tau^{-}_{k} \), and so we can conclude that \begin{equation}\label{eq:taukbound} \frac{ \tau^{+}_k }{ \tau_k^-} \lesssim \frac{\tau_k^{+}}{k^{\beta^{-}-\epsilon}}. \end{equation} Letting \( q:= {1}/{(\beta^{-}-\epsilon)} \) and using the definition of \( \tau_k^{+}\) in \eqref{eq:deftaupm}, it will be convenient to write this as \begin{equation}\label{eq:taukbound2} \frac{ \tau^{+}_k }{ \tau_k^-} \lesssim \frac{\tau_k^{+}}{k^{1/q}} = \frac{\tau^{+}+(\tau^{+} \circ G) + \cdots + (\tau^{+} \circ G^{k-1} )}{k^{1/q}}. \end{equation} Since \( \hat \mu \) is \( G \)-invariant, the summands \( \tau^+ \circ G^\ell \) are identically distributed, and Proposition \ref{prop:tail-of-tau} gives \[ \hat\mu ( (\tau^{+})^{q} > t ) = \hat\mu (\tau^{+}> t^{1/q} ) \sim C_{+} t^{-1/(q\beta^+)} = C_{+} t^{-(\beta^{-}-\epsilon)/\beta^+}. \] Since \( \beta^{-} > \beta^{+} = 1 \), if \( \epsilon \) is small enough we have \( (\beta^{-}-\epsilon)/\beta^+> 1 \), and therefore \( \tau^+ \in L^q ( \hat \mu ) \); note also that for such \( \epsilon \) we have \( q \in (0,1) \). Then we can apply the following classical and remarkable result. \begin{lem}[{\cite{Saw66}*{Corollary to Lemma 3}}] \label{lem:saw} Let \( (X, \mu) \) be a probability space and suppose \( \varphi_{n} \) are identically distributed random variables with \( \varphi_{n} \in L^{q} \) for some \( q \in (0,1)\). Then, \( \mu \)-almost surely, \[ \frac{\varphi_{1}+ \cdots + \varphi_{n}}{n^{1/q}} \to 0.
\] \end{lem} Applying Lemma \ref{lem:saw} to \eqref{eq:taukbound2} implies \eqref{eq:equiv} and thus completes the proof. \end{proof} \subsection{Statistics of orbits near the fixed points for the subsequence \( \tau_{k}\) } Proposition \ref{prop:existence-of-physical-meaures} tells us that depending on the relative values of \( \beta^{-}, \beta^{+}\) orbits spend asymptotically all their time inside either the left or right subintervals \( I^{-}, I^{+}\). We are however especially interested in how much time orbits spend close to the two fixed points, and in this section we will show that actually most of the time spent in these intervals is spent in arbitrarily small neighbourhoods of the corresponding fixed points. To formalize this statement, for \( \varepsilon > 0 \) we define the intervals \begin{equation} U^{+}_{\varepsilon} \coloneqq ( 1 - \varepsilon, 1 ), \quad\text{ and } \quad U^{-}_{\varepsilon} \coloneqq ( -1 , -1 + \varepsilon ), \end{equation} and then define the functions \( S_{n,\varepsilon}^{\pm} : [-1,1] \to \mathbb{N} \) and \( S_{n,\varepsilon} : [-1,1] \to \mathbb{N} \) by \begin{equation} \label{eq:def-S-n-eps-pm} S_{n, \varepsilon}^{-} \coloneqq \sum_{k = 0}^{ n - 1 } \mathbb{1}_{U_{\varepsilon}^{-}} \circ g^{k}, \qquad S_{n, \varepsilon}^{+} \coloneqq \sum_{k = 0}^{ n - 1 } {\mathbb{1}}_{U_{\varepsilon}^{+}} \circ g^{k}, \quad\text{ and } \quad S_{n, \varepsilon}= S_{n,\varepsilon}^{-} + S_{n,\varepsilon}^{+}. \end{equation} The functions \( S_{n,\varepsilon}^{-}\) and \( S_{n,\varepsilon}^{+}\) simply count the number of iterates of a point which belong to the neighbourhoods \( U^{-}_{\varepsilon} \) or \( U^{+}_{\varepsilon}\) respectively, in the first \( n \) iterates. \begin{prop}\label{lem:S-vs-tau-k} For every \( \varepsilon > 0 \) and Lebesgue almost-every \( x \in \Delta_0 \), \begin{equation} \label{eq:S-tau-k-pm-vs-tau-k-pm} \beta^{-} \geq 1 \implies \frac{S_{\tau_k, \varepsilon}^{-}}{\tau_k^-}\to 1, \qquad \beta^{+}\geq 1 \implies \frac{S_{\tau_k, \varepsilon}^{+}}{\tau_k^+}\to 1, \qquad \beta \geq 1 \implies \frac{S_{\tau_k, \varepsilon}}{\tau_k} \to 1. \end{equation} \end{prop} \begin{proof} Recall the definition of the partitions \( \{ \Delta_{n}^{\pm} \} \) in \eqref{eq:Delta} and, for \( \varepsilon > 0 \), let \[ N_{\varepsilon}^{\pm} \coloneqq \max \left\{ N : U_{\varepsilon}^{\pm} \subseteq \bigcup_{k = N}^{\infty} \Delta_{k}^\pm \right\}. \] Then it is easy to see from the definition of the partition \( \{ \delta_{i,j} \} \) in \eqref{eq:def-delta-i-j} and \eqref{eq:noindex} and the properties of the induced map that all points in \( \delta_{i,j}\) with \( i > N_{\varepsilon}^{+}\) and \( j > N_{\varepsilon}^{-}\) will spend all but at most \( N_{\varepsilon}^{+}+1 \) of their first \( i \) iterates inside \(U_{\varepsilon}^{+}\) and all but at most \( N_{\varepsilon}^{-}+1 \) of the following \( j \) iterates inside \(U_{\varepsilon}^{-}\) before they return at time \( \tau = i+j\). More formally, if \( x \in \delta_{i,j} \) and \( i > N^{+}_{\varepsilon} \) then \( g^{k} (x) \in U^+_{\varepsilon} \) for \( k < i - N^{+}_{\varepsilon} \) and \( g^{k}(x) \not\in U^{+}_{\varepsilon} \) for \( i - N^{+}_{\varepsilon} < k \leq i \); similarly, if \( x \in \delta_{i,j} \) and \( j > N^{-}_{\varepsilon} \) then \( g^{i + k} (x) \in U^-_{\varepsilon} \) for \( 1 \leq k < j - N^{-}_{\varepsilon} \) and \( g^{i + k}(x) \not\in U^{-}_{\varepsilon} \) for \( j - N^{-}_{\varepsilon} < k \leq j \).
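For instance, with the purely illustrative values \( i=10 \), \( j=8 \), \( N^{+}_{\varepsilon}=2 \) and \( N^{-}_{\varepsilon}=3 \), a point \( x\in\delta_{10,8} \) has at least \( i-N^{+}_{\varepsilon}-1=7 \) of its first \( i \) iterates in \( U^{+}_{\varepsilon} \) and at least \( j-N^{-}_{\varepsilon}-1=4 \) of the following \( j \) iterates in \( U^{-}_{\varepsilon} \), so that at most \( 7 \) of the \( \tau(x)=i+j=18 \) iterates before the return fall outside \( U^{+}_{\varepsilon}\cup U^{-}_{\varepsilon} \); this is precisely the content of the inequalities \eqref{eq:ineq-tau-S_n} below.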
Therefore, from the definitions in \eqref{eq:def-tau-pm}, \eqref{eq:def-S-n-eps-pm} it follows that for every \( x \in \Delta_0 \) we have \begin{equation} \label{eq:ineq-tau-S_n} \tau^+ (x) - N_{\varepsilon}^{+} - 1 \leq S^{+}_{\tau (x),\varepsilon} (x) \leq \tau^{+} (x),\quad\text{ and } \quad \tau^- (x) - N_{\varepsilon}^{-} - 1 \leq S^{-}_{\tau (x),\varepsilon} (x) \leq \tau^{-} (x). \end{equation} Notice that this holds even if \( \tau^{\pm}(x) \leq N_{\varepsilon}^{\pm}\) since then the left hand side of the corresponding inequality is negative. From the definition of \( \tau_{k} \) we can write \[ S_{\tau_k(x),\varepsilon}^\pm (x) = S_{\tau(x), \varepsilon}^\pm (x) + S_{\tau ( G(x) ), \varepsilon }^{\pm} ( G( x ) ) + \cdots + S_{\tau (G^{k-1} (x) ), \varepsilon}^{\pm} ( G^{k-1} (x) ). \] Applying the inequalities in \eqref{eq:ineq-tau-S_n} to each term in the sum above we get \begin{equation} \label{eq:S-tau-k-vs-tau-k} \tau^\pm_k (x) - k ( N_{\varepsilon}^\pm + 1 ) = \sum_{ m = 0 }^{ k - 1 } \tau^\pm \circ G^m (x) - k ( N_{\varepsilon}^\pm + 1 ) \leq S_{\tau_k (x), \varepsilon }^{ \pm } (x) \leq \sum_{ m = 0 }^{ k - 1 } \tau^\pm \circ G^m (x) = \tau^\pm_k (x) \end{equation} and so, dividing through by \( \tau^\pm_k (x) \), gives \[ 1 - \frac{ k }{ \tau^\pm_k (x) }( N^\pm_\varepsilon + 1 ) \leq \frac{ S_{ \tau_k (x), \varepsilon }^\pm (x) }{ \tau_k^{\pm} (x) } \leq 1. \] From \eqref{eq:div} we have \( k/{\tau_k^\pm (x) } \to 0\) for \( \hat \mu \) almost every \( x \in \Delta_0 \), yielding \eqref{eq:S-tau-k-pm-vs-tau-k-pm}. From \eqref{eq:S-tau-k-vs-tau-k} we also get \[ \tau_k(x) - k ( N_\varepsilon^+ + N_{\varepsilon}^- + 2 ) \leq S_{\tau_k(x), \varepsilon}^{+} (x) + S_{\tau_k(x), \varepsilon}^-(x) \leq \tau_k(x). \] Dividing through by \( \tau_k(x) \) and applying \eqref{eq:div} as above, we get \eqref{eq:S-tau-k-pm-vs-tau-k-pm} and complete the~proof. \end{proof} \subsection{Statistics of orbits near the fixed points} We are now ready to complete the proof of Theorem \ref{thm:phys-measures}. \begin{proof}[Proof of Theorem \ref{thm:phys-measures}] We will show that for every \( \varepsilon > 0 \) and Lebesgue almost every point \( x\in \Delta_0\), \begin{equation}\label{eq:seq} \beta^{-} > \beta^+ \implies \frac{S_{n, \varepsilon}^{-}}{n}\to 1, \qquad \beta^{+} > \beta^- \implies \frac{S_{n, \varepsilon}^{+}}{n}\to 1. \end{equation} This clearly implies the statement of Theorem \ref{thm:phys-measures}. Notice first of all that from Propositions \ref{prop:existence-of-physical-meaures} and~\ref{lem:S-vs-tau-k} we immediately get that for every \( \varepsilon > 0 \) and Lebesgue almost every point \( x\in \Delta_0\), \begin{equation}\label{eq:existence-of-physical-measures} \beta^-> \beta^+ \implies \frac{S_{\tau_k, \varepsilon}^{-}}{\tau_k} \to 1 \quad \quad\text{ and } \quad \quad \beta^+ > \beta^- \implies \frac{S_{\tau_k, \varepsilon}^{+}}{\tau_k} \to 1. \end{equation} We therefore just need to replace the convergence along the subsequence \( \tau_{k}\) with convergence along the full sequence. Suppose first that \( \beta^{+} > \beta^- \). Let \( x \in \Delta_{0}^{-}\) and consider the sequence of iterates \( g^{i}(x)\) for \( 1\leq i \leq \tau(x)\). Recall from the construction of the induced map that the iterates for which \( g^{i}(x) \in U^{-}_{\varepsilon} \) all lie at the ``beginning'' of the sequence, i.e. once the orbit leaves the neighbourhood \( U^{-}_{\varepsilon} \) it cannot return to it before the next return to \( \Delta^{-}_{0}\). 
More formally, either \( g^{i}(x)\notin U^{-}_{\varepsilon}\) for all \( 0\leq i \leq \tau(x)\) (i.e. the finite piece of orbit never enters \( U^{-}_{\varepsilon} \)), or there exists an integer \( 1\leq m_{\varepsilon} < \tau(x)\) such that \( g^{i}(x)\in U^{-}_{\varepsilon}\) for all \( 0 < i \leq m_{\varepsilon}\) and \( g^{i}(x)\notin U^{-}_{\varepsilon}\) for all \( m_{\varepsilon}< i \leq \tau(x)\). This means that the ratio \( {S_{n, \varepsilon}^{+}}/{n} \) is always larger \emph{in-between} returns than at the \emph{following} return or, more formally, letting \( k \geq 1\) be the smallest integer such that \( \tau_{k}\geq n\), we have \[ 1\geq \frac{S_{n, \varepsilon}^{+}}{n} \geq \frac{S_{\tau_{k}, \varepsilon}^{+}}{\tau_{k}}. \] By \eqref{eq:existence-of-physical-measures} this implies \eqref{eq:seq} in the case \( \beta^{+} > \beta^- \). The case \( \beta^{-}> \beta^{+}\) follows by replacing \( - \) by \( + \) in \eqref{eq:noindex} and carrying out exactly the same argument. \end{proof} \section{Proof of Theorem \ref{thm:no-phys-measures}} Throughout this section we suppose that \( g \in \mathfrak F_{*}\) and therefore \( \beta^{-}=\beta^{+}=\beta\geq 1\). Recall the definition of the functions \( \tau^{-}, \tau^{+}, \tau:= \tau^{-}+\tau^{+}\) in \eqref{eq:def-tau-pm} and their tail distributions in \eqref{eq:dist-of-tau-p}. The key to the proof of Theorem \ref{thm:no-phys-measures} is the related function \begin{equation}\label{eq:def-tau-tilde} \tilde \tau \coloneqq \tau^{+}-\tau^{-} \end{equation} and its corresponding distribution given in \cite{CoaLuzMub22}. \begin{prop}[\cite{CoaLuzMub22}*{Propositions 2.6 and 4.1}]\label{prop:tail-of-tautilde} There exist constants \( C_+, C_- > 0 \) such that \begin{equation} \label{eq:dist-of-tautilde} \hat \mu( \tilde \tau > t ) \sim C_+ t^{-1/\beta^+} \quad\text{ and } \quad \hat \mu( \tilde\tau< -t ) \sim C_- t^{-1/\beta^-}. \end{equation} \end{prop} Notice that \( \tilde \tau|_{\delta_{i,j}} \equiv i-j \) and that \( \tilde\tau \) is the only one of the functions introduced above which takes both positive and negative values and thus also has a negative tail. As in \eqref{eq:deftaupm} we are also interested in the Birkhoff sums \( \tilde \tau_k : \Delta_0 \to \mathbb{Z} \) of \( \tilde \tau \) under the induced map \( G \), given by \begin{equation} \label{eq:def-tau-tilde-k} \tilde{\tau}_k \coloneqq \tau_k^+ - \tau_k^-, \end{equation} and we will be interested in the asymptotic behaviour of the averages \( \tilde\tau_{k}/k\) as \( k \to \infty\). From \eqref{eq:divbeta<1} and~\eqref{eq:div} we can easily see that the limit exists (albeit possibly equal to \( + \infty\)) as long as \( \beta^{-}<1\) and/or \( \beta^{+}<1\). However, if \( \beta^{-}\geq 1\) and \( \beta^{+}\geq 1\) then \eqref{eq:div} shows that both \( \tau^{-}_{k}/k\to \infty\) and \( \tau^{+}_{k}/k\to \infty\) and therefore it is not possible to draw any definite conclusions about the limit of \( \tilde\tau_{k}/k\). The proof of Theorem \ref{thm:no-phys-measures} indeed rests essentially on the following result.
\begin{prop} \label{lem:tilde-tau-limsup-liminf>1} For Lebesgue almost every \( x \in \Delta_0^- \), if \( \beta^- = \beta^+ = \beta > 1 \) then \begin{equation} \label{eq:tilde-tau-limsup-liminf>1} \limsup_{k\to\infty} \frac{\tilde{\tau}_k}{k} = +\infty \quad\text{ and } \quad \liminf_{k \to\infty} \frac{\tilde{\tau}_k}{k} = - \infty; \end{equation} if \( \beta^- = \beta^+ = \beta = 1 \) then \begin{equation} \label{eq:tilde-tau-limsup-liminf} \limsup_{k\to\infty} \frac{\tilde{\tau}_k}{\log k} = +\infty \quad\text{ and } \quad \liminf_{k \to\infty} \frac{\tilde{\tau}_k}{\log k} = - \infty. \end{equation} \end{prop} We first show how Proposition \ref{lem:tilde-tau-limsup-liminf>1} implies Theorem \ref{thm:no-phys-measures}. \begin{proof}[Proof of Theorem \ref{thm:no-phys-measures}] We will show that \( {\tau^+_k}/{\tau_k} \) and \( {\tau^-_k}/{\tau_k} \) almost surely do not converge, which clearly implies the Theorem. If \( \tau^+_k / \tau_k \) converged, then \( \tau^-_k / \tau_k \) would also converge, and therefore so would \( { \tilde{\tau}_k }/{\tau_k} \). It is therefore sufficient to prove that \( { \tilde{\tau}_k }/{\tau_k} \) does not converge. Suppose first that \( \beta > 1 \). Writing \begin{equation}\label{eq:product} \frac{ \tilde{\tau}_k }{\tau_k} = \frac{ \tilde \tau_k }{k} \frac{ k}{ \tau_k } \end{equation} note that by \eqref{eq:div} we know that \( { k}/{ \tau_k } \to 0 \) and from Proposition \ref{lem:tilde-tau-limsup-liminf>1} we know that \( \tilde \tau_k / k \) is both negative and positive infinitely often, and so the only possible limit of \eqref{eq:product} is \( 0 \). However, \( \tilde \tau_k/ \tau_k \leq \tilde \tau_k/ k \) and so, again by Proposition~\ref{lem:tilde-tau-limsup-liminf>1}, cannot converge to \( 0 \) almost surely. Now suppose that \( \beta = 1 \). Writing \begin{equation}\label{eq:productb1} \frac{ \tilde{\tau}_k }{\tau_k} = \frac{ \tilde \tau_k }{ \log k} \frac{ \log k }{ \tau_k } \end{equation} note that by \eqref{eq:div} we know that \( { \log k }/{ \tau_k } \to 0 \) and from Proposition \ref{lem:tilde-tau-limsup-liminf>1} we know that \( \tilde \tau_k / \log k \) is both negative and positive infinitely often, and so the only possible limit of \eqref{eq:productb1} is \( 0 \). However, \( \tilde \tau_k/ \tau_k \leq \tilde \tau_k/ \log k \) and so, again by Proposition~\ref{lem:tilde-tau-limsup-liminf>1}, cannot converge to \( 0 \) almost surely. \end{proof} It therefore just remains to prove Proposition \ref{lem:tilde-tau-limsup-liminf>1}. We will consider separately the case \( \beta > 1 \) and the case \( \beta=1\) in Sections \ref{sec:nonexistence} and \ref{sec:nonexistence1} respectively. The arguments are similar in both cases but in the first case the constants and calculations involved are much more explicit and therefore give a better insight into the proof. Both cases rely on showing that the sequence \( \tilde\tau_{k}\) satisfies some \emph{stable law} and we therefore begin with some relatively standard definitions and notation. First of all recall that a sequence of random variables \( X_1, X_2, \ldots \) converges in distribution to \( Y \), which we write as \( X_n \dto{n} Y \), if \( \mathbb{P} ( X_n \leq x ) \underset{n \to \infty}{\to} \mathbb{P} ( Y \leq x ), \) for every continuity point \( x \) of \( x \mapsto \mathbb{P} ( Y \leq x ) \).
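For example, if \( \varphi_{1}, \varphi_{2}, \ldots \) are iid random variables with zero mean and finite variance, the classical central limit theorem states in this notation that \( ( \varphi_{1}+\cdots+\varphi_{n} )/\sqrt{n} \dto{n} Y \) with \( Y \) a centred Gaussian; in the terminology introduced below, the Gaussian distributions are precisely the stable laws with \( \alpha = 2 \), whereas the normalized sums of \( \tilde\tau \) considered in this section will satisfy analogous limit theorems with \( \alpha = 1/\beta \leq 1 \).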
\begin{defn} We say that a random variable \( Y \) is a \emph{stable random variable with parameters} \[ \alpha \in (0,2], \quad \eta \in [-1,1], \quad \gamma \geq 0, \quad \delta \in \mathbb{R}, \quad \text{ and write } \quad Y \sim S ( \alpha, \eta, \gamma, \delta ),\] if the characteristic function of \( Y \) is given by \[ \mathbb{E} ( e^{ it Y } ) = \begin{cases} \exp \left( -\gamma^\alpha |t|^{\alpha} [ 1 - i \eta \tan ( \frac{ \pi \alpha }{ 2 } ) \operatorname{sign} t ] + i \delta t \right), & \text{ if } \alpha \neq 1 \\ \exp \left( -\gamma |t| [ 1 + i \eta \frac{2}{\pi} \operatorname{sign} t \log | \gamma t | ] + i \delta t \right), & \text{ if } \alpha = 1. \end{cases} \] \end{defn} Note that there are many ways of parametrizing stable random variables; our choice above corresponds to the \( S( \alpha, \eta, \gamma, \delta; 1 ) \) parametrization adopted in \cite{Nol20}. A key observation is that the parameter \( \eta \) determines the support of \( Y \sim S( \alpha, \eta, \gamma, \delta ) \), in particular (cf. \cite{Nol20}*{Lemma 1.1}), \begin{equation}\label{eq:support-of-stable} \operatorname{support} Y = \mathbb R \quad \text{\emph{unless} \( \alpha<1\) \emph{and} \( \eta=\pm 1\). } \end{equation} \subsection{Non-existence of physical measures (\(\beta>1\))} \label{sec:nonexistence} In this section we prove \eqref{eq:tilde-tau-limsup-liminf>1} of Proposition~\ref{lem:tilde-tau-limsup-liminf>1}. Let \( C_+, C_- \) be as in Proposition \ref{prop:tail-of-tautilde} and set \begin{equation}\label{eq:etagamma} \eta \coloneqq \frac{ C_+ - C_- }{ C_+ + C_-},\quad \gamma \coloneqq \left( \frac{ 2 \Gamma ( 1/ \beta ) \sin \frac{\pi \beta }{2} }{ \pi (C_+ + C_-)} \right)^{ \beta }. \end{equation} Notice that \( C_{-}, C_{+}>0\) and therefore both \( \eta\) and \( \gamma \) are well-defined and \( \eta \neq \pm 1\). \begin{prop}\label{prop:stable-laws-for-tilde-tau>1} Suppose that \( \beta > 1 \). Then, \[ \frac{\tilde{\tau}_k}{ k^{\beta} }\dto{k} Y \sim S( 1/\beta, \eta, \gamma, 0 ). \] In particular, since \( \eta \neq \pm 1 \), \eqref{eq:support-of-stable} gives \( \operatorname{support} Y = \mathbb R\). \end{prop} We defer the proof of Proposition \ref{prop:stable-laws-for-tilde-tau>1} to Section \ref{sec:prop:stable-laws-for-tilde-tau} and now proceed to show how the proposition above implies \eqref{eq:tilde-tau-limsup-liminf>1}. \begin{proof}[Proof of \eqref{eq:tilde-tau-limsup-liminf>1}] We will show that the sets \begin{equation} \label{eq:Acalpm-beta>1} \mathscr{A}^{+} \coloneqq \left\{ x : \limsup_{ k \to \infty } \frac{\tilde{\tau}_k}{k} = +\infty \right\} \quad\text{ and } \quad \mathscr{A}^{-} \coloneqq \left\{ x : \liminf_{ k \to \infty } \frac{\tilde{\tau}_k}{k} = - \infty \right\}, \end{equation} are of full measure and the result will then follow. Notice that these sets are \( G \) invariant and so, by ergodicity, it suffices to show that they are of positive measure. Consider the sets \begin{equation} \label{eq:Apm-beta>1} A^{\pm}_k \coloneqq \{ \pm \tilde \tau_k > k^{\beta} \}, \quad\text{ and } \quad A^{\pm} \coloneqq \{ x : x \in A_k^{\pm} \text{ for infinitely many } k \} = \bigcap_{ \ell = 1 }^{ \infty } \bigcup_{ k = \ell }^{ \infty } A_k^{\pm} \end{equation} and notice that if \( x \in A^{+} \) then \( \tilde\tau_k(x) / k > k^{\beta - 1} \) for infinitely many \( k \) and so, since \( \beta > 1 \), we know that \( \limsup \tilde\tau_k(x)/ k = + \infty \). Similarly, if \( x \in A^{-} \) then \( \liminf \tilde\tau_k / k = - \infty \), which yields \( A^{\pm} \subset \mathscr{A}^\pm \).
We will now use Proposition \ref{prop:stable-laws-for-tilde-tau>1} to show that \( A^{\pm} \), and thus \( \mathscr{A}^{\pm} \), are of positive measure. Proposition \ref{prop:stable-laws-for-tilde-tau>1} ensures that \( \mu ( A_{k}^{\pm} ) \to \mathbb{P} ( \pm Y > 1 ) = p^{\pm} \), and the fact that the support of \( Y \) is the entire real line ensures that \( p^{\pm} > 0 \). Thus, \[ \mu ( A^{\pm} ) = \mu \left( \bigcap_{ \ell = 1 }^{\infty} \bigcup_{ k \geq \ell } A_k^{\pm} \right) = \inf_{\ell} \mu \left( \bigcup_{ k \geq \ell } A_k^{\pm} \right) \geq p^{\pm} > 0, \] as the sets \( \left( \bigcup_{ k \geq \ell } A_k^{\pm} \right)_{\ell \geq 1} \) are nested. \end{proof} \subsection{Non-existence of physical measures (\( \beta=1\))} \label{sec:nonexistence1} The argument in this case also relies on a stable law limit such as that in Proposition \ref{prop:stable-laws-for-tilde-tau>1} above but with some different parameters. We let \( \eta, \gamma\) be as in \eqref{eq:etagamma} and let \begin{equation} \label{eq:delta-stable} \delta \coloneqq - \eta \frac{2}{\pi} \gamma \log \gamma. \end{equation} We also fix \( \phi (t) \coloneqq \int \exp\{ it \tilde{\tau} \} d \hat\mu \) to be the characteristic function of \( \tilde \tau\) and define the sequences \begin{equation} \label{eq:def-b-ks} b_k \coloneqq \operatorname{Im} \log \phi(k^{-1}), \quad\text{ and } \quad \tilde{b}_k \coloneqq \frac{ k + b_k }{ \log k}. \end{equation} \begin{prop}\label{prop:stable-laws-for-tilde-tau} Suppose that \( \beta =1 \). Then, \[ \frac{\tilde{\tau}_k - b_k }{ k }\dto{k} Y \sim S( 1, \eta, \gamma, \delta ). \] \end{prop} We will argue in a very similar way to Section \ref{sec:nonexistence} and show that the stable law in Proposition \ref{prop:stable-laws-for-tilde-tau} implies \eqref{eq:tilde-tau-limsup-liminf}. \begin{proof}[Proof of \eqref{eq:tilde-tau-limsup-liminf}] We will show that the sets \[ \mathscr{A}^{+} \coloneqq \left\{ x : \limsup_{ k \to \infty } \frac{\tilde{\tau}_k}{ \log k} = +\infty \right\} \quad\text{ and } \quad \mathscr{A}^{-} \coloneqq \left\{ x : \liminf_{ k \to \infty } \frac{\tilde{\tau}_k}{ \log k} = - \infty \right\}, \] are of full measure. First, note that these sets are \( G \) invariant and so, by ergodicity, it suffices to show that they are of positive measure. We define sets \( A^{\pm}_k, A^{\pm} \) in an analogous way to \eqref{eq:Apm-beta>1}, this time setting \[ A^{\pm}_k \coloneqq \{ \pm \tilde\tau_k - b_k > k \}, \quad\text{ and } \quad A^{\pm} \coloneqq \{ x : x \in A_k^{\pm} \text{ for infinitely many } k \} = \bigcap_{ \ell = 1 }^{ \infty } \bigcup_{ k = \ell }^{ \infty } A_k^{\pm}. \] From \eqref{eq:def-b-ks} we see that \( A^{\pm}_k = \{ \pm \tilde\tau_k / \log k > \tilde{b}_k \} \) and so, as in the proof of \eqref{eq:tilde-tau-limsup-liminf>1}, since \( \tilde b_k \to \infty \) we get \( A^{\pm} \subset \mathscr{A}^{\pm} \). The conclusion is exactly the same as in the previous section. Proposition \ref{prop:stable-laws-for-tilde-tau} ensures that \( \mu ( A_{k}^{\pm} ) \to \mathbb{P} ( \pm Y > 1 ) = p^{\pm} \), and the fact that the support of \( Y \) is the entire real line ensures that \( p^{\pm} > 0 \). Thus, \[ \mu ( A^{\pm} ) = \mu \left( \bigcap_{ \ell = 1 }^{\infty} \bigcup_{ k \geq \ell } A_k^{\pm} \right) = \inf_{\ell} \mu \left( \bigcup_{ k \geq \ell } A_k^{\pm} \right) \geq p^{\pm} > 0.
\] \end{proof} \subsection{Proof of Propositions \ref{prop:stable-laws-for-tilde-tau>1} and \ref{prop:stable-laws-for-tilde-tau}} \label{sec:prop:stable-laws-for-tilde-tau} \begin{proof}[Proof of Propositions \ref{prop:stable-laws-for-tilde-tau>1} and \ref{prop:stable-laws-for-tilde-tau}] Let \(X_1, X_2, \ldots \) be a sequence of iid random variables that are equal in distribution to \( \tilde{\tau} \). Proposition \ref{prop:tail-of-tautilde} ensures that \begin{equation} \label{eq:dist-Xk} t^{1/\beta} \hat \mu ( X_k > t ) \to C_+, \quad\text{ and } \quad t^{1/\beta} \hat \mu ( X_k < -t ) \to C_-. \end{equation} The classical probability literature (see for example \cite{Nol20}*{Theorem~3.12}) tells us that \eqref{eq:dist-Xk} implies that scaled and centred sums of the \( X_k \) will converge in distribution to a stable random variable. The scaling and centring sequence, and the parameters of the limiting stable random variable, depend on the value of \( \beta \) and \( C_+, C_- \) in the way we describe below. Setting \( \eta, \gamma \) as in \eqref{eq:etagamma}, \cite{Nol20}*{Theorem~3.12} yields the following limit theorems: 1) If \( \beta > 1 \) then \[ \frac{ \sum_{ \ell = 1 }^{ k } X_{\ell} }{ k^{\beta } } \dto{k} Y \sim S ( 1/\beta, \eta, \gamma, 0 ). \] 2) If \( \beta = 1 \) then, defining \( \delta \) as in \eqref{eq:delta-stable} and \( ( b_k ) \) as in \eqref{eq:def-b-ks}, \[ \frac{ \sum_{ \ell = 1 }^{ k } X_{\ell} - b_k }{ k } \dto{k} Y \sim S ( 1, \eta, \gamma, \delta ). \] Now that we have established the limit theorem for the iid sequence \( (X_k) \), \cite{Gou10a}*{Theorem 1.5} tells us that since \( G \) is a topologically mixing Gibbs-Markov map, and since \( \tilde{\tau} \) is constant on the partition elements \( \delta_{i,j} \), the sequence \( \tilde{\tau}_k \) will satisfy the same distributional convergence as \( X_1 + \cdots + X_k \). In particular, if \( \beta > 1 \) then \[ \frac{ \tilde \tau_k }{ k^{\beta } } \dto{k} Y \sim S ( 1/\beta, \eta, \gamma, 0 ), \] while if \( \beta = 1 \) then \[ \frac{ \tilde{\tau}_k - b_k }{ k } \dto{k} Y \sim S ( 1, \eta, \gamma, \delta ). \] \end{proof} \section{Proof of Theorem \ref{thm:density-main}} \label{sec:density} Throughout this section we fix \( f \in \widehat{ \mathfrak{F} } \) and let \( \ell_1,\ell_2, k_1,k_2 , a_1,a_2,b_1,b_2, \iota \) be the corresponding parameters as in~\eqref{eqn_1}. Then, given \emph{arbitrary constants} \( \tilde\ell_{1}, \tilde\ell_{2}\geq 0\), in Section \ref{sec:construction} we will give a quite explicit construction of a map \( g \) which, in Section \ref{sec:fhat}, we will show belongs to our class \( \widehat{\mathfrak{F}} \), with parameters \( \tilde{\ell}_1, \tilde{\ell}_2, k_{1}, k_2, a_{1}, a_2, \tilde b_1, \tilde b_2 , \tilde\iota \), for appropriately chosen constants \(\tilde b_1, \tilde b_2, \tilde \iota \). In Section \ref{sec:crclose} we estimate the distance between \( f \) and \( g \) in an appropriate topology and finally, in Section \ref{sec:conc}, we apply these estimates to the various cases required by Theorem \ref{thm:density-main} and thus complete the proof. \subsection{Construction of \( g \)} \label{sec:construction} In Section \ref{sec:const-1} we describe the general construction of \( g \) and introduce the other parameters and functions on which \( g \) will depend. In Section \ref{sec:const-2} we then make some specific choices of the various parameters and functions involved in the construction.
\subsubsection{General strategy for constructing the perturbation} \label{sec:const-1} Let \( f \in \widehat{\mathfrak{F}} \) and let the corresponding parameters as in~\eqref{eqn_1} be \( \ell_1,\ell_2 \geq 0 \), \(k_1,k_2 , a_1,a_2,b_1,b_2 > 0 \). For any two constants \(\tilde\ell_{1} > 0 \) and \(\tilde\ell_{2} > 0\), and any two compact intervals \[ [\tilde x_{1}, x_{1}]\subset U_{-1} \quad\text{ and } \quad [x_{2}, \tilde x_{2}]\subset U_{1}, \] we define functions \( h_1 : U_{-1} \to [-1,1] \) and \( h_2 : U_{1} \to [-1,1] \) as follows. If \( \tilde\ell_{1}=\ell_{1}\) or \( \tilde\ell_{2}=\ell_{2}\) then we let \( h_{1}=f|_{U_{-1}}\) and \( h_{2}=f|_{U_{1}}\) respectively. Otherwise, we let \begin{equation} \label{eq:def-h1} h_1(x) \coloneqq x + \tilde b_1 ( 1 + x )^{1 + \tilde{ \ell }_1} \quad\text{ and } \quad h_2(x) \coloneqq x - \tilde b_2 ( 1 - x )^{1 + \tilde{ \ell }_2}, \end{equation} where \( \tilde b_1, \tilde b_2 > 0 \) are any constants such that \begin{equation} \label{eq:cond-monotonicty} h_{1}(x) \leq f(x) \quad \text{ on } \quad [\tilde x_1, x_1], \quad\text{ and } \quad h_{2}(x) \geq f(x) \quad \text{ on } \quad [ x_2, \tilde x_2]. \end{equation} Note that if \( \tilde \ell_{1,2} \geq \ell_{1,2} \) then we can take \( \tilde b_{1,2} = b_{1,2} \) and the corresponding line of \eqref{eq:cond-monotonicty} will hold. Moreover, regardless of the relative values of \( \ell_1,\ell_2 \) and \( \tilde \ell_1, \tilde \ell_2 \), \eqref{eq:cond-monotonicty} will always hold for all \( \tilde b_1, \tilde b_2 > 0 \) sufficiently small, as we are only asking for the inequalities in \eqref{eq:cond-monotonicty} to be satisfied for \( x \) in compact intervals away from \( -1 \) and \( 1 \). We let \[ \xi_{1}: [\tilde x_{1}, x_{1}] \to [0,1] \quad\text{ and } \quad \xi_{2}: [ x_{2}, \tilde x_{2}]\to [0,1] \] be \( C^{\infty}\) \emph{monotone increasing bijections} and define \(g\) on the intervals \( [-1, 0) \) and \( [0, 1 ] \) by \begin{equation} \label{eq:def-g-1} g|_{[-1, 0]} (x) \coloneqq \begin{cases} h_1(x) & \text{if } x \in [-1, \tilde x_1] \\ h_1(x) + \xi_1 (x) ( f(x) - h_1(x) ) & \text{if } x \in (\tilde x_1, x_1) \\ f(x) & \text{if } x \in [x_1, 0 ) \end{cases} \end{equation} and \begin{equation} \label{eq:def-g-2} g|_{[0, 1 ]} (x) \coloneqq \begin{cases} f(x) & \text{if } x \in [0, x_{2} ] \\ f(x) + \xi_2 (x) (h_2 (x) - f(x) ) & \text{if } x \in (x_{2}, \tilde x_{2}) \\ h_2(x) & \text{if } x \in [\tilde x_{2}, 1 ]. \end{cases} \end{equation} Notice that \( g \) is equal to \( f \) outside \( U_{-1}\) and \( U_{1}\) and, apart from \( \tilde\ell_{1}, \tilde\ell_{2}\), depends on the two intervals \( [\tilde x_{1}, x_{1}], [ x_{2}, \tilde x_{2}]\), the constants \( \tilde b_1,\tilde b_2\), and the functions \( \xi_{1}, \xi_{2}\), which we explain how to choose below. \subsubsection{Choosing \( x_1,x_2,\tilde x_1, \tilde x_2, \tilde b_1,\tilde b_2, \xi_{1}, \xi_{2}\)} \label{sec:const-2} We explain how to make a specific choice of the constants \( \tilde x_1, \tilde x_2, \tilde b_1,\tilde b_2\) and the functions \( \xi_{1}, \xi_{2}\) depending on arbitrary \( x_1 \in U_{-1} \) and \( x_2 \in U_{1} \).
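Before making these choices we note, for orientation, that by \eqref{eq:def-g-1} and \eqref{eq:def-g-2} we can write \( g-f=(1-\xi_{1})(h_{1}-f) \) on \( (\tilde x_{1},x_{1}) \) and \( g-f=\xi_{2}(h_{2}-f) \) on \( (x_{2},\tilde x_{2}) \); since \( 0\leq \xi_{1},\xi_{2}\leq 1 \), whatever choices we make below the perturbation is pointwise no larger than \( |h_{1}-f| \) and \( |h_{2}-f| \) on these intervals respectively.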
First of all we define the \emph{affine orientation preserving bijections} \( \eta_{1}: [\tilde x_{1}, x_{1}] \to [0,1] \) and \( \eta_{2}: [ x_{2}, \tilde x_{2}]\to [0,1] \) by \[ \eta_1(x) \coloneqq \frac{x-\tilde x_1}{x_1-\tilde x_1} \quad\text{ and } \quad \eta_2(x)\coloneqq \frac{x-x_2}{\tilde x_2- x_2}. \] Then we define a \( C^\infty\) map \( \xi:[0,1]\to [0,1]\) by \begin{equation} \label{eq:def-of-xi} \xi ( x ) \coloneqq \begin{cases} 0 &\text{if } x = 0 \\ \exp \left\{ 1 - \frac{ 1 }{ 1 - ( x - 1 )^2 } \right\} &\text{if } x \in (0, 1] \end{cases} \end{equation} and let \[ \xi_1(x):= \xi\circ \eta_1(x) \quad\text{ and } \quad \xi_2(x):= \xi\circ \eta_2(x). \] Notice that \( \xi\) is monotone increasing, \( \xi ( 0 ) = 0 \), \( \xi ( 1 ) = 1 \), and \( D^k \xi ( 0 ) = D^k \xi ( 1 ) = 0 \) for every \( k \geq 1 \) and therefore \( \xi_1, \xi_2\) are also in particular \( C^\infty\) and flat at the endpoints. We now fix \( \tilde x_1\) to be the mid-point between \( -1\) and \( x_1\), and \( \tilde x_2\) to be the midpoint between \( x_2\) and \( 1 \), i.e. \[ \tilde x_1 \coloneqq \frac{1}{2} x_1 - \frac{1}{2}, \quad\text{ and } \quad \tilde x_2 \coloneqq \frac{1}{2} x_2 + \frac{1}{2}. \] \begin{rem} The fundamental reason for these choices is that, by using the explicit forms of \( \eta_1, \eta_2\) and \( \xi\), and the definition of \( \xi_1, \xi_2\), we can verify by repeated use of the chain rule that there exists a \( C > 0 \) such that for every \( x_1 \in U_{-1} \), every \( x \in [\tilde x_1, x_1] \) and for every \( k \leq \lceil \ell_1 \rceil \) we have \begin{equation} \label{eq:derv-xi-1} D^k \xi_1 ( x ) = D^{k-1} \left( \frac{2}{ 1 + x_1 } \xi'( \eta_1 (x) ) \right) = \left( \frac{2}{1 + x_1} \right)^{ k } D^k \xi ( \eta_1 (x) ) \leq C (1 + x_{1})^{-k}, \end{equation} and, similarly, for every \( x_2 \in U_{1} \), every \( x \in [x_2, \tilde x_2] \) and every \( k \leq \lceil \ell_2 \rceil \), we have \begin{equation*} D^k \xi_2 ( x ) \leq C (1 - x_{2})^{-k}. \end{equation*} \end{rem} Next, we will fix \( \tilde b_1, \tilde b_2 > 0 \). These constants play no role in the statistical properties of the map but will need to be chosen carefully to ensure that \( g \) indeed satisfies \ref{itm:A0}-\ref{itm:A2}. We have already required that \( \tilde{b}_1, \tilde{b}_2 \) are small enough so that \eqref{eq:cond-monotonicty} holds, and now we also require the following conditions: \begin{equation} \label{eq:cond-on-b1} \begin{aligned} &\text{if } \ell_1 > 0, \text{ then } &b_1 \ell_1 (1 + x)^{\ell_1} - \tilde b_1 \tilde \ell_1 ( 1 + x )^{\tilde \ell_1} &&\geq 0 &&\text{ for all } x \in [\tilde x_1, x_1],\\ &\text{if } \ell_1 = 0, \text{ then } &1 - \tilde b_1 \tilde \ell_1 (1 + x)^{\tilde \ell_1 } &&\geq 0 &&\text{ for all } x \in [\tilde x_1, x_1], \end{aligned} \end{equation} and \begin{equation} \label{eq:cond-on-b2} \begin{aligned} &\text{if } \ell_2 > 0, \text{ then } &b_2 \ell_2 (1 - x)^{\ell_2} - \tilde b_2 \tilde \ell_2 ( 1 - x )^{\tilde \ell_2} &&\geq 0 &&\text{ for all } x \in [x_2, \tilde x_2 ],\\ &\text{if } \ell_2 = 0, \text{ then } &1 - \tilde b_2 \tilde \ell_2 (1 - x)^{\tilde \ell_2 } &&\geq 0 &&\text{ for all } x \in [x_2, \tilde x_2 ]. \end{aligned} \end{equation} Note that it is always possible to find \( \tilde b_1, \tilde b_2 > 0 \) small enough so that the above hold. This concludes our definition of the map \( g \) which depends only on \( f \) and the parameters \( \tilde \ell_1, \tilde \ell_2 > 0 \) and \( x_1, x_2 \).
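For concreteness, if we take, say, \( \tilde\ell_{1}=1 \) and \( \tilde b_{1}=1/4 \) in \eqref{eq:def-h1}, then \( h_{1}(x)=x+\tfrac{1}{4}(1+x)^{2} \), so that \( h_{1}(-1)=-1 \) and \( h_{1}'(x)=1+\tfrac{1}{2}(1+x) \), which equals \( 1 \) at \( x=-1 \) and is \( >1 \) for \( x>-1 \); the exponent \( \tilde\ell_{1} \) thus governs the behaviour of \( g \) near the neutral fixed point \( -1 \), while \( \tilde b_{1} \) only needs to be taken small enough for \eqref{eq:cond-monotonicty} and \eqref{eq:cond-on-b1} to hold.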
Notice that the choice of \( \tilde \ell_1, \tilde \ell_2 > 0 \) is completely arbitrary and the only restriction on the choice of \( x_1, x_2 \) is that they have to lie in the neighbourhoods \( U_{-1}, U_{1} \); in particular, \( x_1,x_2 \) can be chosen arbitrarily close to the fixed points \( -1,1 \). \subsection{The map \( g \) is in the class \( \widehat{ \mathfrak{F} } \)} \label{sec:fhat} \begin{lem} \label{lem:fhat} \( g \) belongs to the class \( \widehat{ \mathfrak{F} } \) with parameters \( \tilde{\ell}_1, \tilde{\ell}_2, k_{1}, k_2, a_{1}, a_2, \tilde b_1, \tilde b_2 \). \end{lem} \begin{rem} Note that all of the parameters of \( g \) are fixed by the map \( f \) \emph{except} for \( \tilde \ell_1, \tilde \ell_2 \) which are \emph{arbitrary} positive constants, which means that we are free to choose \( \tilde \ell_1 , \tilde \ell_2 \) so that \( g \) is in any one of the three distinct subclasses \( \mathfrak{F}, \mathfrak{F}_{\pm}, \mathfrak{F}_{*} \) defined in \eqref{eq:def-subclasses}. \end{rem} \begin{proof} It follows immediately from the construction that the map is full branch and \( C^{2} \) and that \ref{itm:A0} is therefore satisfied. The neighbourhoods \( U_{\pm 1}\) and \( U_{0\pm}\) no longer have the required form as in \ref{itm:A1} but we can shrink them and define the intervals \begin{equation} \label{eq:def-tilde-U} \begin{split} \widetilde{U}_{-1} \coloneqq [-1, \tilde x_{1} ), \quad \widetilde U_{1} \coloneqq ( \tilde x_{2}, 1 ], \quad \widetilde{U}_{0-} \coloneqq g^{-1} (\widetilde U_{1}), \quad \widetilde{U}_{0+} \coloneqq g^{-1} ( \widetilde{U}_{-1}). \end{split} \end{equation} It then follows that \( g \) satisfies \ref{itm:A1} with respect to these neighbourhoods for the required parameters \( \tilde{\ell}_1, \tilde{\ell}_2, k_{1}, k_2, a_{1}, a_2, \tilde b_1, \tilde b_2 \). Since we have shrunk the neighbourhoods and modified the map in the regions \( U_{-1}\setminus \widetilde{U}_{-1} \) and \( U_{1}\setminus \widetilde{U}_{1} \) it is no longer immediate that the expansivity condition \ref{itm:A2} continues to hold outside these new neighbourhoods and we therefore just need to check \ref{itm:A2}. Let \( \tilde{\delta}_{n}^{\pm}, \tilde{\Delta}_n^{\pm} \) and \( \delta_n^{\pm}, \Delta_n^{\pm} \) denote the partitions corresponding to \( g \) and \( f \) respectively. We know from the construction of \( g \) that \( g^{n} \equiv f^{n} \) on \( \delta_n^{\pm} \), and \( \tilde \delta_n^{\pm} = \delta_n^{\pm} \) for every \( n \leq n^{\pm} \coloneqq \min \{ n : \delta_n^{\pm} \subset U_{ 0 \pm } (f) \}\). As \( f \) satisfies \ref{itm:A2} we know that \begin{equation} \label{eq:A2-for-f} (g|_{\delta_n^{\pm}}^n)'(x) = (f|_{\delta_n^{\pm}}^n)'(x) > \lambda > 1 \text{ for every } n \leq n^{\pm}. \end{equation} So, in order to verify that \( g \) satisfies \ref{itm:A2} it remains to check that \begin{equation} \label{eq:g-A2-main-claim} (g^n)' (x) > \lambda > 1, \text{ for every } x \in \tilde \delta_n^{\pm} \text{ and for every } n^{\pm} + 1 \leq n \leq \tilde{n}^{\pm}, \end{equation} for some (possibly different) \( \lambda > 1 \), where \( \tilde{n}^{\pm} \coloneqq \min \{ n : \tilde\delta_n^{\pm} \subset \widetilde U_{ 0 \pm } \}\). We will follow the argument given in \cite{CoaLuzMub22}*{Section 3.3} and show that \( (g^{n+1})' (x) \geq (g^{n})'(x) \) for every \( n \geq n^{+} \) and every \( x \in \tilde \delta_n^{+} \). The argument for \( n \geq n^{-} \) and \( x \in \tilde{ \delta }_{n}^- \) then follows in the same way.
Note that if \( k_1 \in ( 0, 1 ] \) then there is nothing to prove, so we will assume throughout this proof that \( k_1 > 1 \). As in \cite{CoaLuzMub22}*{Section 3.3} we define \begin{equation}\label{eq:def-phi} \phi \coloneqq ( g|_{ U_{0+} } )^{-1} \circ g|_{ U_{-1} } \circ g |_{ U_{0+}} \end{equation} and claim that the conclusion of \cite{CoaLuzMub22}*{Lemma 3.7} holds for \( g \), namely we claim that \begin{equation}\label{eq:phiexpansion} \frac{(g^2)'(x)}{ g' ( \phi (x) )} = \frac{ g'(x) }{ g'(\phi(x))} g'(g(x)) > 1, \text{ for every } x \in \tilde \delta_{n+1}^{+} \text{ and every } n \geq n^{+}. \end{equation} Notice that if \( g (x) \in [ x_1, 0) \), or if \( g(x) \in [-1, \tilde x_1] \), then we can apply \cite{CoaLuzMub22}*{Lemma 3.7} to obtain \( (g^2)'(x) / g'( \phi (x)) > 1 \). If instead \( g(x) \in ( \tilde x_1 , x_1 ) \), then \cite{CoaLuzMub22}*{Lemma 3.7} cannot be applied directly, but its proof can be adapted to our setting as shown below. So, let us assume that \( x \in \delta_n^+ \) for some \( n \) and that \( g(x) \in ( \tilde x_1 , x_1) \). Since we are working only with \( x \in \delta_n^+ \) we will drop the subscripts on the parameters to ease notation; specifically, we will let \( \ell = \ell_1 \), \( \tilde \ell = \tilde \ell_1 \), \(b = b_1 \), \( \tilde b = \tilde b_1\), \(k = k_2\) and \( a = a_2 \). By the definition of \( g \) in \( U_{0+}\) given in \ref{itm:A1} we have \begin{equation} \label{eq:g-p-over-g-phi} \frac{g'(x)}{g'(\phi(x))} = \left(\frac{ x }{ \phi (x) }\right)^{k - 1} = \left(\frac{ \phi (x) }{ x }\right)^{1-k}. \end{equation} Recall that \( k > 1\) and \( x< \phi(x)\) and so the ratio above is strictly less than \( 1\). Let us fix \begin{equation} \label{eq:def-y} y \coloneqq g ( x ) = -1 + a x^{k}. \end{equation} We will compute \( ( \phi(x) / x )^{ k } \) and \( g' ( g (x) ) \) in the case \( \ell = 0 \) and the case \( \ell > 0 \) separately. First we note that, regardless of the value of \( \ell \), inserting the definition of \( g \) into \eqref{eq:def-phi} yields \begin{equation} \label{eq:phi-x} \phi (x) = \left( \frac{1}{a} \right)^{1/k} ( 1 + g(y) )^{ 1/k } = \left( \frac{1}{a} \right)^{1/k} [ 1 + h_1 (y) + \xi (y) ( f(y) - h_1 (y) ) ]^{ 1/k }, \end{equation} and, using \eqref{eq:cond-monotonicty}, we have that \begin{equation} \label{eq:g-p-g} g'( g(x) ) = h_1'(y) + \xi( y) ( f'(y) - h_1'(y) ) + \xi'(y) ( f (y) - h_1(y) ) \geq h_1'(y) + \xi( y) ( f'(y) - h_1'(y) ). \end{equation} 1) Suppose that \( \ell > 0 \). Inserting \eqref{eq:def-y} into the definitions of \( f \) and \( h_1 \) we find \( f(y) - h_1(y) = a x^k (b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } ) \), and we note for later use that \eqref{eq:cond-monotonicty} ensures that \begin{equation} \label{eq:d-positive} b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } \geq 0.
\end{equation} Thus, from \eqref{eq:phi-x}, \begin{align*} \left( \frac{ \phi (x) }{ x } \right)^{k} &=\left( \frac{1}{ a x^k } \right) \left[ ax^k + \tilde b a^{\tilde \ell + 1} x^{k ( \tilde \ell + 1 )} + \xi (y) a x^k \left( b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } \right) \right] \\ &= 1 + \tilde b a^{\tilde \ell} x^{k\tilde \ell} + \xi (y)\left( b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } \right). \end{align*} Next, using \eqref{eq:cond-on-b1}, \eqref{eq:g-p-g} and \eqref{eq:d-positive}, we obtain \begin{align} \nonumber g'( g(x) ) &> 1 + \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + \xi( y)( f'(y) - h_1'(y) ) \\ \nonumber &= 1 + \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + \xi( y )( (1 + \ell ) b a^{\ell} x^{k \ell} - (1 + \tilde \ell )\tilde b a^{\tilde \ell} x^{k \tilde \ell}) \\ &\geq 1 + \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + \xi( y ) \left( b a^{\ell} x^{k \ell} - \tilde b a^{\tilde \ell} x^{k \tilde \ell} \right) = \left( \frac{ \phi (x) }{ x } \right)^{k}. \label{eq:g-prime-g} \end{align} So, from \eqref{eq:g-p-over-g-phi} and \eqref{eq:g-prime-g} we get \[ \frac{ g'(x) }{ g'( \phi (x) ) } g' ( g (x ) ) > \left( \frac{ \phi (x) }{ x } \right)^{1 - k} \left( \frac{ \phi (x) }{ x } \right)^{k} = \frac{ \phi (x) }{ x } > 1, \] which proves our claim \eqref{eq:phiexpansion} in the case \( \ell > 0 \). 2) If \( \ell = 0 \), then we proceed as before and insert the definitions of \( f \) and \( h_1 \) into \eqref{eq:def-phi} to get \begin{align*} \phi(x) = \left(\frac{1}{a} \right)^{1/k} \left[ ax^k + \tilde b a^{\tilde \ell + 1 } x^{k ( \tilde \ell + 1 )} + \xi(y)\left( - 1 + \eta (y) + b a x^{k} - \tilde b a^{1 + \tilde \ell} x^{ k ( 1 + \tilde \ell )} \right) \right]^{1/k}, \end{align*} and so \[ \left( \frac{ \phi (x) }{ x } \right)^{k} = 1 + \tilde b a^{ \tilde \ell} x^{k \tilde \ell } + \xi(y)\left( \frac{- 1 + \eta (y)}{ ax^{k}} + b - \tilde b a^{\tilde \ell} x^{ k \tilde \ell } \right). \] We recall from \cite{CoaLuzMub22}*{Equation (13)} that \( \eta' (y) \geq \eta (y) / ( 1 +y ) > ( \eta (y) - 1 )/ ( 1 + y ) \) for every \( y \in U_{-1} \). So, inserting the expressions for \( f' \) and \( h_1' \) into \eqref{eq:g-p-g} we get \begin{align*} g'(g(x)) &\geq 1 + \tilde b ( 1 + \tilde \ell ) (1 + y )^{\tilde \ell} + \xi (y) \left( \eta'(y) + 1 + b - \tilde b (1 + \tilde \ell ) (1 + y )^{ \tilde \ell } \right) \\ &> 1 + \tilde b a^{\tilde \ell} x^{ k\tilde \ell} + \xi(y) \left( \frac{-1 + \eta (y) }{ ax^{k} } + b - \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + 1 - \tilde b \tilde\ell a ^{\tilde \ell} x^{ k \tilde \ell} \right) \geq \left( \frac{ \phi (x) }{ x } \right)^{k}. \end{align*} This concludes \eqref{eq:phiexpansion} in the case that \( \ell = 0 \). We can then proceed as in \cite{CoaLuzMub22}*{Corollary 3.9} to get \[ (g^{n+1}) ' (x) = g'(x) g'(g(x)) \cdots g'(g^{n} (x)) = \frac{ g'(x) g'(g (x))}{ g'( \phi(x)) } (g^{n})'(\phi (x)) > (g^{n})'(\phi (x)). \] This, together with \eqref{eq:A2-for-f}, implies \eqref{eq:g-A2-main-claim} and allows us to conclude that \( g \) satisfies \ref{itm:A2}. \end{proof} \subsection{\( g \) is \( C^r \) close to \( f \)} \label{sec:crclose} Let \( f \in \widehat{ \mathfrak{F} } \). In Sections \ref{sec:construction} and \ref{sec:fhat} we constructed a map \( g \in \widehat{ \mathfrak{F} } \) ultimately depending only on the choice of two arbitrary constants \( \tilde \ell_1, \tilde \ell_2 > 0 \) and two points \( x_1 \in U_{-1} \), \( x_{2} \in U_{1} \).
We now show that \( g \) can be chosen arbitrarily close to \( f \), by choosing the points \( x_{1}, x_{2}\) sufficiently close to the fixed points \( -1, 1\) respectively, in a topology determined by the constants \begin{equation}\label{eq:approxreg} r_{1} \coloneqq \min \{ \ell_1, \tilde \ell_1 \}, \quad\text{ and } \quad r_{2} \coloneqq \min \{ \ell_2, \tilde \ell_2 \}. \end{equation} More precisely, we have the following result. \begin{lem} \label{lem:c-r-close} There exists a \( C > 0 \) such that \[ \| f - g \|_{ C^{ \lceil r_1 \rceil }([-1,0]) } \leq C ( 1 + x_1 )^{ 1 + r_1 - \lceil r_1 \rceil }, \quad\text{ and } \quad \| f - g \|_{ C^{ \lceil r_2 \rceil }([0,1]) } \leq C ( 1 - x_2 )^{ 1 + r_2 - \lceil r_2 \rceil }. \] \end{lem} \begin{proof} We will only give an explicit proof of the bound for \( \| f - g \|_{C^{\lceil r_1 \rceil }} \) as the argument for \( \| f - g \|_{C^{\lceil r_2 \rceil }} \) is the same. If \( \ell_1 = \tilde\ell_1 \) then \( f - g \equiv 0 \) on \( [-1,0] \) and the bound is trivial. So, let us assume \( \ell_1 \neq \tilde\ell_1 \) and consider the subcases \( \ell_1 = 0 \), \( \ell_1 > 0 \) separately. If \( \ell_1 = 0 \), then \( r_1 = 0 \) and using the definition \eqref{eq:def-h1} of \( h_1 \) near \( -1 \), the fact that \( \xi \) is bounded, and the definition of \( g \) one finds that \[ \| g - f \|_{C^0} \leq 2 \| h_1 - f \|_{C^0} \leq C (1 + x_1). \] This proves the result in the case that \( \ell_1 = 0 \). Let us now suppose throughout the remainder of the proof that \( \ell_1 > 0 \). We begin by establishing the following sublemma. \begin{sublem} \label{sublem:derv-bounds} There exists a \( C > 0 \) so that for every \( 1 \leq k \leq \lceil r_1 \rceil \) \begin{equation} \label{eq:derv-bounds} | D^k f(x) - D^k h_1 (x) | \leq C( 1 + x_{1})^{1 + r_1 - k }\quad \forall x \in [-1, x_1]. \end{equation} \end{sublem} \begin{proof}[Proof of Sublemma \ref{sublem:derv-bounds}] Suppose that \( 0 < r_1 = \ell_1 < \tilde \ell_1 \), let \( 1 \leq k \leq \lceil r_1 \rceil \) and note that for some constants \( c_k, \tilde c_k \) \begin{align*} | D^k f (x) - D^k h_1 (x) | &= | c_{k} ( 1 + x )^{ 1 + \ell_1 - k } + \tilde c_{k} ( 1 + x )^{ 1 + \tilde \ell_1 - k } | = ( 1 + x )^{ 1 + \ell_1 - k } | c_k - \tilde c_{k}( 1 + x ) ^{\tilde \ell_1 - \ell_1} |. \end{align*} As \( ( 1 + x ) ^{\tilde \ell_1 - \ell_1} \to 0 \) as \( x \to -1 \), and as we are considering only finitely many \( k \), we see that there exists some constant \( C > 0 \), independent of \( k \), such that \eqref{eq:derv-bounds} holds. Suppose now that \( 0 < \tilde \ell_1 < \ell_1 \). Repeating the calculation above with \( \ell_1 \) and \( \tilde \ell_1 \) exchanged we conclude the proof. \end{proof} We now continue the proof of Lemma \ref{lem:c-r-close}. The bound \eqref{eq:derv-bounds} immediately implies \[ \| f - g \|_{C^{\lceil r_1 \rceil}([-1, \tilde x_1 ])} = \| f - h_1 \|_{C^{\lceil r_1 \rceil}([-1, \tilde x_1 ])} \leq C (1 + x_1)^{1 + r_1 - \lceil r_1 \rceil} \] for some \( C \) which depends on \( \lceil r_1 \rceil \), but not on \( x_1 \). For \( x \in [\tilde{x}_1, x_1] \) we find by repeated applications of the product rule that \begin{equation}\label{eq:derv-of-g} D^k g(x) = D^k h_1 (x) + \sum_{ j = 0 }^{ k } \binom{k}{j} D^j \xi_1(x) \cdot ( D^{k -j} f (x) - D^{ k - j} h_1 (x) ).
\end{equation} Using \eqref{eq:derv-xi-1} and \eqref{eq:derv-bounds}, we obtain \begin{align} \nonumber \left|\sum_{ j = 0 }^{ k } \binom{k}{j} D^j \xi_1(x) \cdot ( D^{k -j} f (x) - D^{ k - j} h_1 (x) ) \right| &\leq C \sum_{ j = 0 }^{ k } \binom{k}{j} ( 1 + x_1)^{-j} ( 1 + x_1)^{ 1 + r_1 - k + j} \\ \label{eq:sum-of-derv} &= C ( 1 + x_1)^{1 + r_1 -k}. \end{align} Finally, combining \eqref{eq:derv-bounds}, \eqref{eq:derv-of-g} and \eqref{eq:sum-of-derv} we find that for any \( k = 0 , \ldots, \lceil r_1 \rceil \) we have \( |D^k f (x) - D^k g (x) | \leq C( 1 + x_1)^{1 + r_1 - k } \) for every \( x \in [-1, x_1] \). So, \begin{equation}\label{eq:Cr-norm-g-1} \| f - g \|_{ C^{\lceil r_1 \rceil}([ -1, 0 ]) } \leq C (1 + x_1)^{1 + r_1 - \lceil r_1 \rceil }, \end{equation} for some \( C \) which does not depend on \( x_1 \). \end{proof} \subsection{Concluding the proof of Theorem \ref{thm:density-main}} \label{sec:conc} \begin{proof} Let \( f \in \widehat{\mathfrak{F}} \) and let \( \varepsilon > 0 \). For each of the classes \( \mathfrak{F}, \mathfrak{F}_{\pm},\mathfrak{F}_* \) we will choose \( \tilde \ell_1, \tilde \ell_2 > 0 \) so that the corresponding map \( g \) constructed in Section \ref{sec:construction} belongs to the chosen class, and then, by Lemma~\ref{lem:c-r-close}, we can choose \( x_1 \in U_{-1}, x_2 \in U_{1} \) so that \( g \) is \( \varepsilon \)-close to \( f\) in the appropriate topology. We illustrate this process in detail in a couple of cases and then give some tables to show the choices in all the remaining cases. Suppose first that \( f \in \mathfrak{F} \) (so that \( \beta\in [0,1)\) and \( f \) has a physical measure equivalent to Lebesgue) and let us approximate \( f \) by some map \( g\in \mathfrak F_{*}\) (so that \( \beta\geq 1\) and \( g \) has no physical measure). Set \( \tilde \ell_{1}=1/k_{2}, \tilde \ell_{2}=1/k_{1}\) so that \( \tilde\ell_{1}k_{2}= \tilde\ell_{2}k_{1}=1\), which ensures that \( g\in \mathfrak F_{*}\). Notice that we could choose \( \tilde \ell_{1}=t/k_{2}, \tilde \ell_{2}=t/k_{1}\) for any \( t \geq 1 \) and that for any such choice we have \( \tilde\ell_{1} > \ell_{1}\) and \( \tilde\ell_{2}> \ell_{2}\) because by assumption we have \( \ell_{1}k_{2}, \ell_{2}k_{1}\in [0,1)\). Once \( \tilde\ell_{1}, \tilde\ell_{2}\) have been chosen we immediately get the regularity of the approximation from \eqref{eq:approxreg}, which in this case is given by \( r= \lceil \min\{\ell_{1}, \tilde\ell_{1}, \ell_{2}, \tilde\ell_{2}\}\rceil = \lceil\min\{\ell_{1}, \ell_{2}\}\rceil= r_{*}(f) \), thus proving the Theorem in this case. For a second example, suppose that \( f \in \mathfrak{F}_{\pm} \) with \( 0 < \beta^+ < 1 \leq \beta^- \) (so that \( f \) has a physical measure supported on the fixed point \( -1 \)), and let us construct a \( g \in \mathfrak{F} \) (so that \( g \) has a physical measure equivalent to Lebesgue) that is close to \( f \). Recall that \( g \in \mathfrak{F} \) if and only if \( \beta^+, \beta^- \in [0,1) \) and so we can leave unchanged the value of \( \beta^{+}\), and therefore let \( \tilde\ell_{2}=\ell_{2}\), but we need to lower the value of \( \beta^{-}\) to something less than 1. We can do this by letting \( \tilde\ell_{1} = 1/ k_2 - \gamma \) for any \( 0 < \gamma < 1/ k_2 \), which gives \( \beta^{-}(g)=\tilde\ell_{1}k_{2}= ( 1/ k_2 - \gamma) k_{2} = 1- k_{2}\gamma <1\), and therefore \( g \in \mathfrak{F} \) as required.
To estimate the distance between \( f \) and \( g \), and to choose the appropriate metric for this distance, notice that \( g|_{[0,1]}=f|_{[0,1]}\) and therefore we only need to worry about the distance between \( f\) and \( g \) on \( [-1,0]\). Therefore, by \eqref{eq:approxreg} we get \( r = \lceil \min\{\ell_{1}, \tilde \ell_{1}\} \rceil =\lceil \tilde\ell_{1} \rceil =\lceil 1 / k_2 - \gamma \rceil \) and by Lemma \ref{lem:c-r-close} we can construct \( g \) arbitrarily close to \( f \) in the \( d_{r}\) metric with \( r = \lceil 1 / k_2 - \gamma \rceil \) for any \( \gamma>0\) arbitrarily small. Notice however that if \( \gamma\) is sufficiently small then \( \lceil 1 / k_2 - \gamma \rceil = \lceil 1 / k_2 \rceil \) and therefore \( r = \tilde r(f) \) as claimed in the Theorem. All the cases can be obtained by simple reasoning similar to that illustrated in the two examples above, from which we deduce the following choices for \( \tilde\ell_{1}, \tilde\ell_{2}\) and the corresponding regularity of the approximation. \renewcommand{\arraystretch}{1.4} \begin{table}[h] \centering \subfloat[Choice of \( \tilde{\ell}_1, \tilde{\ell}_2 \) for constructing \( g \in \mathfrak{F} \)]{ \begin{tabular}{@{}lcc@{}} \toprule Parameters of \( f \) & \( \tilde{ \ell }_1 \) & \( \tilde{ \ell }_2 \) \\ \midrule \( \beta^+ < 1 \leq \beta^- \) & \( 1/k_2 - \gamma \) & \( \ell_2 \) \\ \( \beta^- < 1 \leq \beta^+ \) & \( \ell_1 \) & \( 1/k_1 - \gamma \) \\ \( \beta^+, \beta^- \geq 1 \) & \(1/k_2 - \gamma \) & \( 1/k_1 - \gamma \) \\ \bottomrule \end{tabular}} \quad \subfloat[Choice of \( \tilde{\ell}_1, \tilde{\ell}_2 \) for constructing \( g \in \mathfrak{F}_{\pm} \)]{ \begin{tabular}{@{}lcc@{}} \toprule Parameters of \( f \) & \( \tilde{ \ell }_1 \) & \( \tilde{ \ell }_2 \) \\ \midrule \( \ell_1 \geq \ell_2 \) and \( \beta^- < 1 \) & \( 1/k_2 \) & \( \ell_2 \) \\ \( \ell_1 \geq \ell_2 \) and \( \beta^- \geq 1 \) & \( \ell_1 + \gamma \) & \( \ell_2 \) \\ \( \ell_1 < \ell_2 \) and \( \beta^+ < 1 \) & \( \ell_1 \) & \( 1 / k_1 \) \\ \( \ell_1 < \ell_2 \) and \( \beta^+ \geq 1 \) & \( \ell_1 \) & \( \ell_2 + \gamma \) \\ \bottomrule \end{tabular} } \\ \vspace{1em} \subfloat[Choice of \( \tilde{\ell}_1, \tilde{\ell}_2 \) for constructing \( g \in \mathfrak{F}_{*} \)]{ \begin{tabular}{@{}lcc@{}} \toprule Parameters of \( f \) & \( \tilde{ \ell }_1 \) & \( \tilde{ \ell }_2 \) \\ \midrule \( \beta \in [0,1) \) & \( 1/k_2 \) & \( 1/k_1 \) \\ \( \beta^+ < 1 \leq \beta^- \) & \( \ell_1 \) & \( \beta^- / k_1 \) \\ \( \beta^- < 1 \leq \beta^+ \) & \( \beta^+ / k_2 \) & \( \ell_2 \) \\ \( \beta^+, \beta^- \geq 1 \) and \( k_1 \geq k_2 \) & \( \beta^+ / k_2 \) & \( \ell_2 \) \\ \( \beta^+, \beta^- \geq 1 \) and \( k_1 < k_2 \) & \( \ell_1 \) & \( \beta^-/ k_1 \) \\ \bottomrule \end{tabular} } \caption{Choosing \(\tilde{\ell}_1, \tilde{\ell}_2 \) to complete the proof of Theorem \ref{thm:density-main}.} \end{table} This completes the proof. \end{proof} \newpage \begin{bibdiv} \begin{biblist} \bib{AarThaZwe05}{article}{ author={Aaronson, Jon}, author={Thaler, Maximilian}, author={Zweim{\"u}ller, Roland}, title={Occupation times of sets of infinite measure for ergodic transformations}, date={2005}, journal={Ergodic Theory and Dynamical Systems}, } \bib{Alv20}{book}{ author={Alves, Jos{\'e}~F.}, title={Nonuniformly hyperbolic attractors.
geometric and probabilistic aspects}, series={Springer Monographs in Mathematics}, publisher={Springer International Publishing}, date={2020}, } \bib{AlvDiaLuz17}{article}{ author={Alves, Jos{\'e}~F.}, author={Dias, Carla~L.}, author={Luzzatto, Stefano}, author={Pinheiro, Vilton}, title={S{RB} measures for partially hyperbolic systems whose central direction is weakly expanding}, date={2017}, journal={J. Eur. Math. Soc. (JEMS)}, volume={19}, number={10}, pages={2911\ndash 2946}, } \bib{AlvLuzPin05}{article}{ author={Alves, Jos{\'e}~F.}, author={Luzzatto, Stefano}, author={Pinheiro, Vilton}, title={Markov structures and decay of correlations for non-uniformly expanding dynamical systems}, date={2005}, journal={Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire}, volume={22}, number={6}, pages={817\ndash 839}, } \bib{AnoSin67}{article}{ author={Anosov, D~V}, author={Sinai, Yakov~G.}, title={{Some Smooth Ergodic Systems}}, date={1967}, journal={Russian Mathematical Surveys}, volume={103}, } \bib{AraLuzVia09}{article}{ author={Ara{\'u}jo, V{\'{\i}}tor}, author={Luzzatto, Stefano}, author={Viana, Marcelo}, title={Invariant measures for interval maps with critical points and singularities}, date={2009}, journal={Adv. Math.}, volume={221}, number={5}, pages={1428\ndash 1444}, } \bib{AraPin21}{article}{ author={Ara{\'{u}}jo, V{\'{i}}tor}, author={Pinheiro, Vilton}, title={{Abundance of wild historic behavior}}, date={2021}, journal={Bulletin of the Brazilian Mathematical Society. New Series.}, volume={52}, } \bib{Bac99}{article}{ author={Bachurin, P~S}, title={The connection between time averages and minimal attractors}, date={1999}, journal={Russian Mathematical Surveys}, volume={54}, number={6}, pages={1233\ndash 1235}, } \bib{BahGalNis18}{article}{ author={Bahsoun, Wael}, author={Galatolo, Stefano}, author={Nisoli, Isaia}, author={Niu, Xiaolong}, title={A rigorous computational approach to linear response}, date={2018}, journal={Nonlinearity}, volume={31}, } \bib{BahSau16}{article}{ author={Bahsoun, Wael}, author={Saussol, Beno{\^\i}t}, title={Linear response in the intermittent family: differentiation in a weighted $c^0$-norm}, date={2016}, journal={Discrete and Continuous Dynamical Systems}, } \bib{BalTod16}{article}{ author={Baladi, V.}, author={Todd, M.}, title={Linear response for intermittent maps}, date={2016}, journal={Communications in Mathematical Physics}, volume={347}, pages={857\ndash 874}, } \bib{BarKirNak20}{article}{ author={Barrientos, Pablo~G.}, author={Kiriki, Shin}, author={Nakano, Yushi}, author={Raibekas, Artem}, author={Soma, Teruhiko}, title={Historic behavior in non-hyperbolic homoclinic classes}, date={2020}, journal={Proceedings of the American Mathematical Society}, volume={148}, pages={1195\ndash 1206}, } \bib{BerBie22}{article}{ author={Berger, Pierre}, author={Biebler, Sebastien}, title={Emergence of wandering stable components}, date={2022}, journal={Journal of the American Mathematical Society}, volume={36}, } \bib{Bir31}{article}{ author={Birkhoff, George~David}, title={{Proof of the Ergodic Theorem.}}, date={1931}, journal={Proceedings of the National Academy of Sciences of the United States of America}, volume={17}, number={12}, pages={656\ndash 660}, } \bib{Bow75}{book}{ author={Bowen, Rufus}, title={Equilibrium states and the ergodic theory of {A}nosov diffeomorphisms}, series={Lecture Notes in Mathematics, Vol. 
470}, publisher={Springer-Verlag}, address={Berlin}, date={1975}, } \bib{BruLep13}{article}{ author={Bruin, Henk}, author={Leplaideur, Renaud}, title={Renormalization, thermodynamic formalism and quasi-crystals in subshifts}, date={2013feb}, journal={Communications in Mathematical Physics}, volume={321}, number={1}, pages={209\ndash 247}, } \bib{BruTerTod19}{article}{ author={Bruin, Henk}, author={Terhesiu, Dalia}, author={Todd, Mike}, title={The pressure function for infinite equilibrium measures}, date={2019jun}, journal={Israel Journal of Mathematics}, volume={232}, number={2}, pages={775\ndash 826}, } \bib{Bur21}{article}{ author={Burguet, David}, title={{SRB} measures for ${C}^\infty$ surface diffeomorphisms}, date={2021}, journal={Preprint}, } \bib{Buz00}{article}{ author={Buzzi, J{\'e}r{\^o}me}, title={Absolutely continuous invariant probability measures for arbitrary expanding piecewise r-analytic mappings of the plane}, date={2000}, journal={Ergodic Theory Dynam. Systems}, volume={20}, number={3}, pages={697\ndash 708}, } \bib{BuzCroSar22}{article}{ author={Buzzi, J{\'e}r{\^o}me}, author={Crovisier, Sylvain}, author={Sarig, Omri}, title={Another proof of burguet's existence theorem for srb measures of $c^\infty$ surface diffeomorphisms}, date={2022}, journal={Preprint}, } \bib{CamIso95}{article}{ author={Campanino, Massimo}, author={Isola, Stefano}, title={Statistical properties of long return times in type i intermittency}, date={1995}, journal={Forum Mathematicum}, volume={7}, number={7}, } \bib{CliLuzPes17}{article}{ author={Climenhaga, Vaughn}, author={Luzzatto, Stefano}, author={Pesin, Yakov}, title={The geometric approach for constructing {S}inai-{R}uelle-{B}owen measures}, date={2017}, journal={Journal of Statistical Physics}, volume={166}, } \bib{CliLuzPes23}{article}{ author={Climenhaga, Vaughn}, author={Luzzatto, Stefano}, author={Pesin, Yakov}, title={Srb measures and young towers for surface diffeomorphisms}, date={2023}, journal={Annales Henri Poincar{\'{e}}}, volume={23}, } \bib{CoaHolTer19}{article}{ author={Coates, Douglas}, author={Holland, Mark}, author={Terhesiu, Dalia}, title={Limit theorems for wobbly interval intermittent maps}, date={201910}, } \bib{CoaLuzMub22}{article}{ author={Coates, Douglas}, author={Luzzatto, Stefano}, author={Mubarak, MUhammad}, title={Doubly intermittent full branch maps with critical points and singularities}, date={2022}, journal={Preprint}, } \bib{ColVar01}{article}{ author={Colli, Eduardo}, author={Vargas, Edson}, title={Non-trivial wandering domains and homoclinic bifurcations}, date={2001}, journal={Ergodic Theory and Dynamical Systems}, } \bib{CriHayMar10}{article}{ author={Cristadoro, Giampaolo}, author={Haydn, Nicolai}, author={Marie, Philippe}, author={Vaienti, Sandro}, title={Statistical properties of intermittent maps with unbounded derivative}, date={2010}, journal={Nonlinearity}, } \bib{CroYanZha20}{article}{ author={Crovisier, Sylvain}, author={Yang, Dawei}, author={Zhang, Jinhua}, title={Empirical measures of partially hyperbolic attractors}, date={2020}, journal={Communications in Mathematical Physics}, } \bib{Cui21}{article}{ author={Cui, Hongfei}, title={Invariant densities for intermittent maps with critical points}, date={2021}, journal={Journal of Difference Equations and Applications}, volume={27}, number={3}, pages={404\ndash 421}, } \bib{DiaHolLuz06}{article}{ author={Diaz-Ordaz, Karla}, author={Holland, Mark}, author={Luzzatto, Stefano}, title={Statistical properties of one-dimensional maps with critical 
points and singularities}, date={2006}, journal={Stochastics and Dynamics}, volume={6}, number={4}, } \bib{Dol04}{incollection}{ author = {Dolgopyat, Dmitry}, title = {Prelude to a kiss}, booktitle = {Modern dynamical systems and applications}, pages = {313--324}, year = {2004}, } \bib{Dua12}{article}{ author={Duan, Y.}, title={A.c.i.m for random intermittent maps: existence, uniqueness and stochastic stability}, date={2012}, journal={Dynamical Systems. An International Journal}, } \bib{FisLop01}{article}{ author={Fisher, Albert~M}, author={Lopes, Artur}, title={Exact bounds for the polynomial decay of correlation, 1/fnoise and the {CLT} for the equilibrium state of a non-h{\"o}lder potential}, date={2001jul}, journal={Nonlinearity}, volume={14}, number={5}, pages={1071\ndash 1104}, } \bib{FreFreTod13}{article}{ author={Freitas, Ana Cristina~Moreira}, author={Freitas, Jorge~Milhazes}, author={Todd, Mike}, title={The compound poisson limit ruling periodic extreme behaviour of non-uniformly hyperbolic dynamics}, date={2013mar}, journal={Communications in Mathematical Physics}, volume={321}, number={2}, pages={483\ndash 527}, } \bib{FreFreTod16}{article}{ author={Freitas, Ana Cristina~Moreira}, author={Freitas, Jorge~Milhazes}, author={Todd, Mike}, author={Vaienti, Sandro}, title={Rare events for the manneville-pomeau map}, date={2016nov}, journal={Stochastic Processes and their Applications}, volume={126}, number={11}, pages={3463\ndash 3479}, } \bib{FroMurSta11}{article}{ author={Froyland, Gary}, author={Murray, Rua}, author={Stancevic, Ognjen}, title={Spectral degeneracy and escape dynamics for intermittent maps with a hole}, date={2011jul}, journal={Nonlinearity}, volume={24}, number={9}, pages={2435\ndash 2463}, } \bib{GalHolPer21}{article}{ author={Galatolo, Stefano}, author={Holland, Mark}, author={Persson, Tomas}, author={Zhang, Yiwei}, title={Anomalous time-scaling of extreme events in infinite systems and birkhoff sums of infinite observables}, date={202110}, journal={Discrete and Continuous Dynamical Systems}, } \bib{Gou10a}{article}{ author={Gou{\"{e}}zel, S{\'{e}}bastien}, title={{Almost sure invariance principle for dynamical systems by spectral methods}}, date={2010jul}, journal={The Annals of Probability}, volume={38}, number={4}, pages={1639\ndash 1671}, } \bib{Her18}{incollection}{ author={Herman, Michel}, title={An example of non-convergence of birkhoff sums}, date={2018}, booktitle={Notes inachev{\'e}es de michael r. herman s{\'e}lectionn{\'e}es par jean-christophe yoccoz}, publisher={Soci{\'e}t{\'e} Math{\'e}matique de France}, } \bib{HofKel90}{article}{ author={Hofbauer, Franz}, author={Keller, Gerhard}, title={Quadratic maps without asymptotic measure}, date={1990}, journal={Comm. Math. 
Phys.}, volume={127}, number={2}, pages={319\ndash 337}, } \bib{HofKel95}{incollection}{ author={Hofbauer, Franz}, author={Keller, Gerhard}, title={{Quadratic maps with maximal oscillation}}, date={1995}, booktitle={Algorithms, fractals, and dynamics}, editor={Takahashi, Y.}, publisher={Springer, Boston, MA}, pages={89\ndash 94}, } \bib{HuYou95}{article}{ author={Hu, Huyi}, author={Young, Lai-Sang}, title={Nonexistence of sbr measures for some diffeomorphisms that are `almost anosov'}, date={1995}, journal={Ergodic Theory and Dynamical Systems}, volume={15}, } \bib{Ino00}{article}{ author={Inoue, Tomoki}, title={Sojourn times in small neighborhoods of indifferent fixed points of one-dimensional dynamical systems}, date={2000}, journal={Ergodic Theory and Dynamical Systems}, volume={20}, } \bib{JarTol04}{book}{ author={J{\"a}rvenp{\"a}{\"a}, Esa}, author={Tolonen, Tapani}, title={Natural ergodic measures are not always observable}, publisher={University of Jyv{\"a}skyl{\"a}}, date={2005}, } \bib{KanKirLi16}{article}{ author={Kanagawa, Hiratuka}, author={Kiriki, Shin}, author={Li, Ming-Chia}, title={Geometric {L}orenz flows with historic behaviour}, date={2016}, journal={Discrete and Continuous Dynamical Systems}, volume={36}, number={12}, pages={7021\ndash 7028}, } \bib{Kel04}{article}{ author={Keller, Gerhard}, title={{Completely mixing maps without limit measure}}, date={2004}, journal={Colloquium Mathematicum}, volume={100}, number={1}, pages={73\ndash 76}, } \bib{KirLiSom10}{article}{ author={Kiriki, Shin}, author={Li, Ming-Chia}, author={Soma, Teruhiko}, title={{Coexistence of invariant sets with and without SRB measures in Henon family}}, date={2010}, journal={Nonlinearity}, volume={23}, number={9}, pages={2253\ndash 2269}, } \bib{KirLiNak22}{article}{ author={Kiriki, Shin}, author={Li, Xiaolong}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Abundance of observable lyapunov irregular sets}, date={2022}, journal={Communications in Mathematical}, volume={Physics}, pages={1\ndash 29}, } \bib{KirNakSom19}{article}{ author={Kiriki, Shin}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Historic behaviour for nonautonomous contraction mappings}, date={2019}, journal={Nonlinearity}, volume={32}, pages={1111\ndash 1124}, } \bib{KirNakSom21}{article}{ author={Kiriki, Shin}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Historic and physical wandering domains for wild blender-horseshoes}, date={2021}, journal={Preprint}, } \bib{KirNakSom22}{article}{ author={Kiriki, Shin}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Emergence via non-existence of averages}, date={2022}, journal={Advances in Mathematics}, volume={400}, pages={1\ndash 30}, } \bib{KirSom17}{article}{ author={Kiriki, Shin}, author={Soma, Teruhiko}, title={Takens' last problem and existence of non-trivial wandering domains}, date={2017}, journal={Advances in Mathematics,}, volume={306}, pages={pp.}, } \bib{Kle06}{article}{ author={Kleptsyn, V~A}, title={An example of non-coincidence of minimal and statistical attractors}, date={2006}, journal={Ergodic Theory and Dynamical Systems}, volume={26}, } \bib{Kor16}{article}{ author={Korepanov, Alexey}, title={Linear response for intermittent maps with summable and nonsummable decay of correlations}, date={2016}, journal={Nonlinearity}, } \bib{LabRod17}{article}{ author={Labouriau, Isabel~S.}, author={Rodrigues, Alexandre A.~P.}, title={On takens' last problem: tangencies and time averages near heteroclinic networks}, date={2017}, 
journal={Nonlinearity}, volume={30}, pages={1876\ndash 1910}, } \bib{LasYor73}{article}{ author={Lasota, A.}, author={Yorke, James~A.}, title={On the existence of invariant measures for piecewise monotonic transformations}, date={1973}, journal={Trans. Amer. Math. Soc.}, volume={186}, pages={481\ndash 488}, } \bib{LivSauVai99}{article}{ author={Liverani, Carlangelo}, author={Saussol, Beno{\^{\i}}t}, author={Vaienti, Sandro}, title={A probabilistic approach to intermittency}, date={1999}, journal={Ergodic Theory Dynam. Systems}, volume={19}, number={3}, pages={671\ndash 685}, } \bib{Mel08}{article}{ author={Melbourne, Ian}, title={Large and moderate deviations for slowly mixing dynamical systems}, date={2008nov}, journal={Proceedings of the American Mathematical Society}, volume={137}, number={5}, pages={1735\ndash 1741}, } \bib{MelTer12}{article}{ author={Melbourne, Ian}, author={Terhesiu, Dalia}, title={First and higher order uniform dual ergodic theorems for dynamical systems with infinite measure}, date={2012nov}, journal={Israel Journal of Mathematics}, volume={194}, number={2}, pages={793\ndash 830}, } \bib{NicTorVai16}{article}{ author={Nicol, Matthew}, author={T{\=o}r{\=o}k, Andrew}, author={Vaienti, Sandro}, title={Central limit theorems for sequential and random intermittent dynamical systems}, date={2016}, journal={Ergodic Theory and Dynamical Systems}, volume={38}, number={3}, pages={1127\ndash 1153}, } \bib{Nol20}{book}{ author={Nolan, John~P}, title={Univeriate stable distributions}, publisher={Springer}, date={2020}, } \bib{KarAsh11}{article}{ author={{\=o}zkan Karabacak}, author={Ashwin, Peter}, title={On statistical attractors and the convergence of time averages}, date={2011}, journal={Mathematical Proceedings of the Cambridge Philosophical Society}, volume={150}, } \bib{Pal08}{article}{ author={Palis, J}, title={Open questions leading to a global perspective in dynamics}, date={2008}, journal={Nonlinearity}, volume={21}, number={4}, } \bib{Pal15}{article}{ author={Palis, J}, title={Open questions leading to a global perspective in dynamics (corrigendum)}, date={2015}, journal={Nonlinearity}, volume={28}, number={3}, } \bib{Pia80}{article}{ author={Pianigiani, Giulio}, title={First return map and invariant measures}, date={1980}, journal={Israel J. Math.}, volume={35}, number={1-2}, pages={32\ndash 48}, } \bib{Pin06}{article}{ author={Pinheiro, Vilton}, title={Sinai-{R}uelle-{B}owen measures for weakly expanding maps}, date={2006}, journal={Nonlinearity}, volume={19}, number={5}, pages={1185\ndash 1200}, } \bib{PolSha09}{article}{ author={Pollicott, Mark}, author={Sharp, Richard}, title={Large deviations for intermittent maps}, date={2009jul}, journal={Nonlinearity}, volume={22}, number={9}, pages={2079\ndash 2092}, } \bib{PolWei99}{article}{ author={Pollicott, Mark}, author={Weiss, Howard}, title={Multifractal analysis of lyapunov exponent for continued fraction and manneville-pomeau transformations and applications to diophantine approximation}, date={1999nov}, journal={Communications in Mathematical Physics}, volume={207}, number={1}, pages={145\ndash 171}, } \bib{PomMan80}{article}{ author={Pomeau, Yves}, author={Manneville, Paul}, title={Intermittent transition to turbulence in dissipative dynamical systems}, date={1980}, journal={Communications in Mathematical Physics}, volume={74}, pages={189\ndash 197}, } \bib{Rue76}{article}{ author={Ruelle, David}, title={A measure associated with axiom-{A} attractors}, date={1976}, journal={Amer. J. 
Math.}, volume={98}, number={3}, pages={619\ndash 654}, } \bib{Ruz15}{article}{ author={Ruziboev, Marks}, title={Decay of correlations for invertible systems with non-h{\"o}lder observables}, date={2015}, journal={Dynamical Systems. An International Journal}, } \bib{Ruz18}{incollection}{ author={Ruziboev, Marks}, title={Almost sure rates of mixing for random intermittent maps}, date={201801}, booktitle={Differential equations and dynamical systems}, publisher={Springer}, } \bib{Sar01}{article}{ author={Sarig, Omri}, title={{Phase transitions for countable Markov shifts}}, date={2001}, journal={Communications in Mathematical Physics}, volume={217}, number={3}, pages={555\ndash 577}, } \bib{Saw66}{article}{ author={Sawyer, S.}, title={Maximal inequalities of weak type}, date={1966}, journal={Annals of Mathematics}, volume={84}, } \bib{SheStr13}{article}{ author={Shen, Weixiao}, author={van Strien, Sebastian}, title={On stochastic stability of expanding circle maps with neutral fixed points}, date={2013sep}, journal={Dynamical Systems}, volume={28}, number={3}, pages={423\ndash 452}, } \bib{Sin72}{article}{ author={Sinai, Ja.~G.}, title={Gibbs measures in ergodic theory}, date={1972}, journal={Uspehi Mat. Nauk}, volume={27}, number={4}, pages={21\ndash 64}, } \bib{Tak94}{article}{ author={Takens, Floris}, title={Heteroclinic attractors: time averages and moduli of topological conjugacy.}, date={1994}, journal={Bullettin of the Brazilian Mathematical Society}, volume={25}, } \bib{Tak08}{article}{ author={Takens, Floris}, title={Orbits with historic behaviour, or nonexistence of averages}, date={2008}, journal={Nonlinearity}, volume={21}, } \bib{Tal20}{article}{ author={Talebi, Amin}, title={Statistical (in)stability and non-statistical dynamics}, date={2020}, journal={Preprint}, } \bib{Tal22}{article}{ author={Talebi, Amin}, title={Non-statistical rational maps}, date={2022}, journal={Mathematische Zeitschrift}, } \bib{Ter13}{article}{ author={Terhesiu, Dalia}, title={Improved mixing rates for infinite measure-preserving systems}, date={2013aug}, journal={Ergodic Theory and Dynamical Systems}, volume={35}, number={2}, pages={585\ndash 614}, } \bib{Ter15}{article}{ author={Terhesiu, Dalia}, title={Mixing rates for intermittent maps of high exponent}, date={2015}, journal={Probability Theory and Related Fields}, volume={166}, number={3-4}, pages={1025\ndash 1060}, } \bib{Tha80}{article}{ author={Thaler, Maximilian}, title={Estimates of the invariant densities of endomorphisms with indifferent fixed points}, date={1980}, journal={Israel Journal of Mathematics}, volume={37}, number={4}, pages={303\ndash 314}, } \bib{Tha83}{article}{ author={Thaler, Maximilian}, title={Transformations on [0, 1] with infinite invariant measures}, date={1983}, journal={Israel Journal of Mathematics}, volume={46}, number={1-2}, pages={67\ndash 96}, } \bib{Tha95}{article}{ author={Thaler, Maximilian}, title={The invariant densities for maps modeling intermittency}, date={1995}, journal={Journal of Statistical Physics}, volume={79}, number={3-4}, pages={739\ndash 741}, } \bib{Tha95a}{article}{ author={Thaler, Maximilian}, title={A limit theorem for the perron-frobenius operator of transformations on [0,1] with indifferent fixed points}, date={1995}, journal={Israel Journal of Mathematics}, volume={91}, number={1-3}, pages={111\ndash 127}, } \bib{Tha00}{article}{ author={Thaler, Maximilian}, title={The asymptotics of the perron-frobenius operator of a class of interval maps preserving infinite measures}, date={2000}, 
journal={Studia Mathematica}, volume={143}, number={2}, pages={103\ndash 119}, } \bib{Tha05}{article}{ author={Thaler, Maximilian}, title={Asymptotic distributions and large deviations for iterated maps with an indifferent fixed point}, date={2005}, journal={Stochastics and Dynamics}, volume={05}, number={03}, pages={425\ndash 440}, } \bib{Tsu00c}{article}{ author={Tsujii, Masato}, title={Absolutely continuous invariant measures for piecewise real-analytic expanding maps on the plane}, date={2000}, journal={Comm. Math. Phys.}, volume={208}, number={3}, pages={605\ndash 622}, } \bib{Tsu01a}{article}{ author={Tsujii, Masato}, title={Absolutely continuous invariant measures for expanding piecewise linear maps}, date={2001}, journal={Invent. Math.}, volume={143}, number={2}, pages={349\ndash 373}, } \bib{Tsu05}{article}{ author={Tsujii, Masato}, title={Physical measures for partially hyperbolic surface endomorphisms}, date={2005}, journal={Acta Math.}, volume={194}, number={1}, pages={37\ndash 132}, } \bib{Vec22}{article}{ author={Veconi, Dominic}, title={SRB measures of singular hyperbolic attractors}, date={2022}, journal={Discrete and Continuous Dynamical Systems}, volume={42}, } \bib{You99}{article}{ author={Young, Lai-Sang}, title={Recurrence times and rates of mixing}, date={1999}, journal={Israel J. Math.}, volume={110}, pages={153\ndash 188}, } \bib{Zwe98}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Ergodic structure and invariant densities of non-Markovian interval maps with indifferent fixed points}}, date={1998}, journal={Nonlinearity}, volume={1263}, } \bib{Zwe00}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Ergodic properties of infinite measure-preserving interval maps with indifferent fixed points}}, date={2000}, journal={Ergodic Theory and Dynamical Systems}, volume={20}, number={5}, pages={1519\ndash 1549}, } \bib{Zwe02}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Exact $C^\infty$ covering maps of the circle without (weak) limit measure}}, date={2002}, journal={Colloquium Mathematicum}, volume={93}, number={2}, pages={295\ndash 302}, } \bib{Zwe03}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Stable limits for probability preserving maps with indifferent fixed points}}, date={2003}, journal={Stochastics and Dynamics}, volume={3}, number={1}, pages={83\ndash 99}, } \end{biblist} \end{bibdiv} \end{document} \section{Introduction} We study the existence and, especially, the \emph{non-existence} of physical measures in a large family of interval maps \( \widehat{\mathfrak{F}} \) which was introduced in \cite{CoaLuzMub22}. We start with heuristic and conceptual overview of results. Then in Section~\ref{sec:results} we give the formal definition of the family \( \widehat{\mathfrak{F}} \) and the precise technical statements of our results. In Section \ref{sec:recall} we recall the main construction and key estimates from \cite{CoaLuzMub22} which will be required, and in Sections \ref{sec:physical}-\ref{sec:density} we prove our results. \subsection{Overview of Results}\label{sec:overview} The family \( \widehat{\mathfrak{F}} \) consists of full branch maps with two orientation preserving branches, which are all in the same topological conjugacy class of uniformly expanding maps such as \( f(x)=2x\)~mod~1, in particular they are all topologically conjugate to each other. 
Depending on a number of parameters, they may however exhibit quite different features: the two fixed points are always topologically repelling but may be either hyperbolic or neutral, and the branches may have critical points and/or singularities with infinite derivative. \( \widehat{\mathfrak{F}} \) contains uniformly expanding maps and well-known intermittent maps, as well as many other maps which, as far as we know, have not been studied before; see \cite{CoaLuzMub22} for an extensive discussion and review of the literature. Figure \ref{fig:fig} shows some possible graphs of the maps in \( \widehat{\mathfrak{F}} \), and see Section \ref{sec:defn} for the formal definition. \begin{figure}[h] \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{./images/a} \end{subfigure}% \quad \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{./images/b} \end{subfigure} \quad \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{./images/c} \end{subfigure} \caption{Graphs of $g$ for various possible values of the parameters.}\label{fig:fig} \end{figure} The first main result of this paper is a complete classification of maps in \( \widehat{\mathfrak{F}} \) from the point of view of the kind of physical measure they admit (or not). In particular we will prove the following. \begin{maintheorem}\label{mainthmA} The family of maps \( \widehat{\mathfrak{F}} \) is the union of three non-empty pairwise disjoint subfamilies \begin{equation}\label{eq:subfam} \widehat{\mathfrak{F}} = \mathfrak{F} \cup \mathfrak{F}_\pm \cup \mathfrak{F}_* \end{equation} satisfying the following properties: \medskip 1) all \( g \in \mathfrak{F} \) have a physical measure equivalent to Lebesgue; 2) all \( g \in \mathfrak{F}_{\pm} \) have a physical measure supported on a repelling fixed point; 3) all \( g \in \mathfrak{F}_{*} \) are non-statistical and in particular have no physical measure. \end{maintheorem} The definitions of physical measures and non-statistical maps are given in Section \ref{sec:introphsy}, and the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) are defined explicitly in Section \ref{sec:stat} in terms of various parameters of the maps in \( \widehat{\mathfrak{F}} \). Item 1) in Theorem \ref{mainthmA}, i.e. the existence of physical measures equivalent to Lebesgue for maps in \( \mathfrak{F} \), was proved in \cite{CoaLuzMub22}, where it was also shown that such physical measures satisfy a number of statistical properties such as decay of correlations and various limit theorems. Our focus in this paper is therefore on the complementary families \( \mathfrak{F}_{\pm} \) and \( \mathfrak{F}_{*} \). We note that the three families \( \mathfrak{F}, \mathfrak{F}_{\pm}, \mathfrak{F}_{*} \) form a partition of \( \widehat{\mathfrak{F}} \): \emph{there are no cases in which we are not able to obtain a conclusion} (although, as we shall see in the proofs, there are some boundary cases in which more sophisticated arguments are required). A natural question concerns the ``\emph{size}'' and ``\emph{structure}'' of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) inside \( \widehat{\mathfrak{F}} \), and perhaps one of the most surprising and unexpected results of this paper is that they are \emph{intermingled} and \emph{dense} in some natural subsets of \( \widehat{\mathfrak{F}} \) in some appropriate topologies.
Due to the presence of the discontinuity we cannot use the standard \( C^{r}\), or even \( C^{0}\), metric on \( \widehat{\mathfrak{F}} \), since the maps are not even continuous. However there are pairs of maps \( f, g\in \widehat{\mathfrak{F}} \) whose \emph{difference} \( f-g\in C^{r}\), and which may therefore be considered as \( C^{r} \) \emph{perturbations} of one another. This observation motivates the definition of a natural extended metric on \( \widehat{\mathfrak{F}} \) defined as follows: for any \( f,g \in \widehat{ \mathfrak{F} } \) and \( r \geq 0 \) we let \begin{equation*}\label{eq:approx} d_{r}(f,g)\coloneqq \begin{cases} \|f - g \|_{C^r} &\text{ if } f-g\in C^{r} \\ \infty &\text{ otherwise}. \end{cases} \end{equation*} For simplicity we will just refer to it as the \(C^{r}\) \emph{metric} on \( \widehat{\mathfrak{F}} \)\footnote{Notice that we can ``normalize'' this extended metric to give a standard bounded metric on \( \widehat{\mathfrak{F}} \) by defining \( \tilde d_r ( f, g ) \coloneqq { d_{r} ( f ,g ) }/{(1 + d_{r} (f , g ))} \) when \( d_{r} ( f ,g )< \infty \) and \( \tilde d_{r} ( f ,g ) \coloneqq 1 \) otherwise. The metrics \( d_{r}\) and \( \tilde d_{r}\) lead to equivalent topologies and so for our purposes it does not really matter which one we use.}. There are of course many maps which are at infinite distance from each other but, as we shall see, there is nevertheless a very rich structure in every neighbourhood of any map \( f \in \widehat{ \mathfrak{F} } \), as we have the following surprising and remarkable result. \begin{maintheorem}\label{thm:densityC0} Each of the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is \(C^0\)-dense in \(\widehat{ \mathfrak{F}}\). \end{maintheorem} Theorem \ref{thm:densityC0} says, more precisely, that any \( f \in \widehat{ \mathfrak{F} } \), belonging to any of the classes \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\), can be approximated arbitrarily closely in the \( C^{0}\) metric by maps in the other two classes. For example, maps with a physical measure equivalent to Lebesgue, including uniformly expanding maps, can be \(C^0\) approximated both by maps with physical measures given by Dirac-delta measures on the fixed points and also by maps without physical measures. Similarly, maps without physical measures can be \(C^0\) approximated both by maps with a physical measure equivalent to Lebesgue and by maps with Dirac-delta physical measures on the fixed points. Theorem \ref{thm:densityC0} will be proved as a special case of a more technical but much more general result, Theorem~\ref{thm:density-main} below, which also implies that maps in the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) can be approximated by maps in the other families in arbitrarily regular metrics, depending on the maps. In particular, for the case \( r = 1 \) we define the set \[ \widetilde{ \mathfrak{F}}\coloneqq \{ f \in \widehat{ \mathfrak{F}}: \text{both fixed points are neutral fixed points}\} \] and we have the following perhaps even more surprising and remarkable result. \begin{maintheorem}\label{thm:densityC1} Each of the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is \(C^1\)-dense in \(\widetilde{ \mathfrak{F}}\).
\end{maintheorem} Theorem \ref{thm:densityC1} says that every \( C^{1}\) neighbourhood of every map \( f \in \widetilde{ \mathfrak{F}} \) contains maps belonging to \emph{all three families} \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\). In particular every map with a physical measure equivalent to Lebesgue can be \( C^{1}\) approximated by maps without physical measures, and vice-versa. Finally, we address the question of how \emph{persistent} the dynamics corresponding to the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is, i.e. how \emph{``large''} each family is and how \emph{``robust''} it is with respect to perturbations. Given Theorems \ref{thm:densityC0} and \ref{thm:densityC1} above, none of these families can be \emph{open} in either the \( C^{0}\) or the \( C^{1} \) metrics as defined above; however, we will see that there are several ways in which we can argue that each family is \emph{large} in some sense. In particular, letting \( \operatorname{supp} f \coloneqq \{ x : f(x) \neq 0 \} \) denote the support of a function, we will prove the following result. \begin{maintheorem}\label{mainthm:open} Let \( f \in \mathfrak{F}_{*} \). If \( g \in \widetilde{\mathfrak{F}} \) is such that \( \overline{ \operatorname{ supp } ( f-g ) } \subset (-1,0)\cup (0,1), \) then \( g \in \mathfrak{F}_* \). \\ The same statement is true if we replace \( \mathfrak{ F }_* \) with either of the other two subclasses \( \mathfrak{ F }, \mathfrak{ F }_{\pm} \). \end{maintheorem} Theorem \ref{mainthm:open} shows that the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) are open under a particular class of perturbations which is slightly more restrictive than those allowed under the general \( C^{r}\) metric defined above but still quite substantial. This is of course particularly remarkable when applied to maps in \( \mathfrak{F}_{\pm}\), whose physical measures are Dirac-delta measures on fixed points, and to maps in \( \mathfrak{F}_* \), without physical measures. As far as we know, there are no previously known examples of systems with non-statistical behaviour which is as robust as this. There are also other ways in which the families \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) are persistent, but these need to be formulated in terms of the parameters which define the maps and we therefore postpone the statements to Section \ref{sec:open} below. \subsection{Physical Measures}\label{sec:introphsy} In this section we give the precise definitions of physical measures and non-statistical dynamics and give a brief discussion of relevant previously existing results. \subsubsection{Topological and Statistical Limits} For completeness we start with some general background notions which help to motivate the definitions and the results. Given a set \( X \), a map \( f : X \to X \) determines a Dynamical System by defining the \(n\)'th iterate \( f^{n}=f\circ \cdots \circ f\) as the \(n\)-fold composition of \( f\) with itself, and for any \( x_{0}\in X \) we define the \(n\)'th iterate of \( x_{0} \) under \( f \) as the image \( x_{n}=f^{n}(x_{0})\). We can think of \( X \) as a \emph{state space}, or the collection of all possible configurations of some system, the map \( f \) as a \emph{force} or \emph{mechanism} which acts on the system, and \( x_{0}\) as an \emph{initial condition}.
Then the sequence \[ \mathcal O(x_{0}) :=\{x_{n}\}_{n=0}^{\infty}, \] which we call the (forward) \emph{orbit} of \( x_{0}\), denotes the \emph{evolution in time} of the system starting from the initial condition \( x_{0}\). The main objective of the Theory of Dynamical Systems is essentially that of describing the structure of orbits and how they depend on the initial condition \( x_{0}\) and on the map \( f \). If \( X \) is a \emph{topological space} we can define the \emph{omega-limit} set \( \omega(x_{0}):= \{y\in X: y \text{ is an accumulation point of the sequence } \mathcal O(x_{0})\}, \) which gives information about the asymptotic nature of the orbit from a \emph{topological} point of view. If \( X \) is a \emph{measure space} we can define the Dirac-delta point mass measures \( \delta_{x_{k}}\) at each point of the orbit and use this to describe the orbit by equidistributing the mass on the first \( n \) terms of the orbit, giving a sequence \[ \mu_{n} (x_{0}) \coloneqq \frac{1}{n} \sum_{ k = 0 }^{ n - 1 } \delta_{x_{k}} \] of probability measures associated to the initial condition \( x_{0}\). If this sequence \emph{converges}, for example in the weak-star topology, i.e. if there exists a probability measure \( \mu \) such that \begin{equation}\label{eq:conv} \mu_{n}(x_{0}) \to \mu, \end{equation} then \( \mu_{n}(x_{0})\) approximates \( \mu \) but, most importantly, for all sufficiently large \( n \), \( \mu\) \emph{approximates} \( \mu_{n}(x_{0})\), and therefore \( \mu \) gives an asymptotic description of the orbit from a \emph{statistical} point of view. Notice that if \( X \) is a metric space we can describe each orbit \emph{from both a topological and a statistical point of view}. In many cases these two descriptions are intuitively consistent with one another: for example, if \( X \) is a complete metric space, \( f \) is a contraction, and \( p \in X \) is the unique fixed point of \( f\), then it is easy to check that for any initial condition \(x_{0}\in X \) the points of the orbit of \( x_{0}\) converge to \( p \), and therefore \( \omega(x_{0})=\{p\}\) and \( \mu_{n}(x_{0}) \to \delta_{p}\). Similarly, for irrational circle rotations it is relatively easy to check that the orbit of every point \( x_{0}\in \mathbb S^{1}\) is dense in \( \mathbb S^{1} \), and therefore \( \omega(x_{0})=\mathbb S^{1}\), and it is a classical (but non-trivial) result that every orbit is uniformly distributed and therefore \( \mu_{n}(x_{0}) \to \) \emph{Lebesgue} on \( \mathbb S^{1}\). However this is not always the case and sometimes, such as in several examples which will be discussed below, and in some of the cases mentioned in our results, the topological and statistical descriptions depend on the initial condition and yield quite different pictures of what we consider as the ``typical'' dynamics of the system. \subsubsection{Definition of Physical Measures} Given a probability measure \( \mu \) we define its \emph{basin} \[ \mathscr{B}_{\mu} \coloneqq \left\{ x : \mu_{n}(x) \to \mu \right\}. \] The set \( \mathscr{B}_{\mu} \) may very well be empty but, if \( \mathscr{B}_{\mu} \neq \emptyset \), then a natural question is to study its size. Suppose \( X \) is a measure space with a normalized reference (Lebesgue) measure denoted by \( Leb\).
\begin{defn} A probability measure \( \mu \) is a \emph{physical measure} (with \emph{full measure basin}) for \( f \) if \[ Leb(\mathscr{B}_{\mu})=1. \] \end{defn} More generally, we say that \( \mu \) is a physical measure if \( Leb(\mathscr{B}_{\mu}) > 0 \), but the examples we consider in this paper will always have full measure basin, so for simplicity, unless otherwise specified, we will always implicitly assume that physical measures have full measure basins. \subsubsection{Physical measures on attractors} There are plenty of examples of dynamical systems with physical measures, for example: the Dirac-delta \( \delta_{p}\) at the fixed point of a contraction (in which case every point actually belongs to the basin); Lebesgue measure for irrational rotations (in which case, again, every point belongs to the basin) and also for piecewise affine expanding circle maps such as \( f(x) = 2x \) mod 1 (in which case the basin has full Lebesgue measure but its complement is non-empty and indeed consists of an uncountable set of points). It follows by a classical theorem of Birkhoff \cite{Bir31} that if a probability measure \( \mu \) is invariant (\( \mu(f^{-1}(A))=\mu(A)\) for every measurable set \( A \)), ergodic (\( f^{-1}(A)=A \Rightarrow \mu(A)=0\) or \( \mu(A) =1 \)), and \emph{absolutely continuous with respect to Lebesgue} (\(Leb(A)=0 \Rightarrow \mu(A)=0 \)) then \( \mu \) is a physical measure, and if \( \mu \) is \emph{equivalent to Lebesgue} (\(Leb(A)=0 \Leftrightarrow \mu(A)=0\)), then \( \mu \) is a physical measure with full measure basin. Such ergodic invariant measures equivalent to, or absolutely continuous with respect to, Lebesgue have been proved to exist in many classes of \emph{uniformly and non-uniformly expanding maps}. There is a huge literature so we just mention a few significant papers such as \cites{AlvLuzPin05,AraLuzVia09,Buz00,DiaHolLuz06,LasYor73,Pin06,Tsu00c,Tsu01a,Tsu05} but highlight in particular those related to one-dimensional maps with neutral fixed points \cites{Dol04,BahGalNis18,BahSau16,BalTod16,BruLep13,BruTerTod19,CamIso95,CoaHolTer19,CriHayMar10,Cui21,Dua12,FisLop01,FreFreTod13,FreFreTod16,FroMurSta11,Ino00,Kor16,LivSauVai99,Mel08,MelTer12,NicTorVai16,Pia80,PolSha09,PolWei99,PomMan80,Ruz15,Ruz18,Sar01,SheStr13,Ter13,Ter15,Tha00,Tha05,Tha80,Tha83,Tha95,Tha95a,You99,Zwe00,Zwe03,Zwe98}. In higher dimensions, where we may have a non-trivial attractor \( \Lambda\) of zero Lebesgue measure, if \( \Lambda\) satisfies some hyperbolicity conditions then pioneering work of Anosov, Sinai, Ruelle, and Bowen in the 1960s and 1970s showed that the absolute continuity can be replaced by a weaker but more technical condition of \emph{absolute continuity of conditional measures} and \emph{absolute continuity of the stable foliation}, and measures satisfying such properties are often referred to as \emph{Sinai-Ruelle-Bowen}, or \emph{SRB}, measures \cites{AnoSin67,Bow75, Rue76, Sin72}. Over the following decades there has been a tremendous amount of research which has extended their results to increasingly general classes of systems; we mention here just some of the more recent papers \cites{AlvDiaLuz17,Bur21,BuzCroSar22,CliLuzPes17, CliLuzPes23,Vec22} and refer the reader to those for a more comprehensive list of references, as well as for the formulation of a far-reaching conjecture of Palis to the effect that ``typical'' dynamical systems have (a finite number of) physical measures \cites{Pal08, Pal15}.
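For orientation we recall the standard argument behind the fact, stated above, that an ergodic invariant probability measure \( \mu \) equivalent to Lebesgue is a physical measure with full measure basin (here we assume, as in all the examples just mentioned, that \( X \) is a compact metric space): applying Birkhoff's theorem to a countable dense set of continuous observables, and then using density, we get that for \( \mu\)-almost every \( x \), and hence for Lebesgue-almost every \( x \), \[ \frac{1}{n}\sum_{k=0}^{n-1}\varphi(f^{k}(x)) \longrightarrow \int \varphi \, d\mu \qquad \text{for every continuous } \varphi: X \to \mathbb R, \] which is precisely the convergence \( \mu_{n}(x) \to \mu \) in \eqref{eq:conv}, and therefore \( Leb(\mathscr{B}_{\mu}) = 1 \).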
\subsubsection{Physical measures on repellors} A rarer, but in many ways more interesting and intriguing, class of examples of physical measures consists of systems in which the physical measure is supported on an invariant set \( \Lambda\) which has zero Lebesgue measure and is topologically \emph{repelling}, in the sense that points close to \( \Lambda\) are mapped \emph{away} from \( \Lambda\) rather than \emph{towards} \( \Lambda\). The simplest example of this phenomenon, which is also very relevant for the class of maps which we consider in this paper, is given by the well-known Manneville-Pomeau \emph{intermittency map} \( f(x)=x+x^2\) mod 1 \cite{PomMan80}. It is easy to see that \( f(0)=0 \), and so the origin is a fixed point, and that \( f'(x) = 1+ 2x\), so that \( f'(0)=1\) and \(f'(x)>1\) for all \( x> 0 \). In particular every point in any small neighbourhood of the origin, except the origin itself, eventually leaves such a neighbourhood, and in this sense the origin is topologically repelling. However it can be shown that, asymptotically, the orbit of Lebesgue-almost every point spends most of its time in arbitrarily small neighbourhoods of the origin, and in fact \emph{the empirical measures \( \mu_n(x)\) converge to the Dirac-delta measure \( \delta_0\) at the origin}, which is therefore a physical measure with full basin; see \cite{Alv20} and some of the papers mentioned above. Notice that the Manneville-Pomeau intermittency map is included in our family \( \widehat{\mathfrak F}\), see the discussion in \cite{CoaLuzMub22}, and the result just mentioned is a special case of our Theorem \ref{thm:phys-measures} below. In the 1990s, Hofbauer and Keller proved the even more surprising result that there are many smooth quadratic unimodal maps for which a similar phenomenon occurs \cites{HofKel90, HofKel95}. \subsubsection{Non-Statistical Maps}\label{sec:nonstat} Equally, if not even more, interesting are dynamical systems which \emph{do not admit any physical measure.} The simplest, and somewhat trivial, way in which this can occur is when the basin of every probability measure \( \mu \) has zero Lebesgue measure, such as in the identity map, for which \( \mu_{n}(x) \to \delta_{x}\) for every \( x \), or in rational circle rotations, for which all orbits are periodic. A much more sophisticated and interesting way in which a map can fail to have physical measures is when there exists a full measure set of points for which the sequence of measures \( \mu_{n}(x)\) \emph{does not converge}; in this case we say that the orbit of \( x \) is \emph{non-statistical}. More formally, letting \[ \mathscr N:=\{x\in X: \mu_n(x) \text{ does not converge} \} \] we can make the following definition. \begin{defn} A map \( f: X \to X \) is \emph{non-statistical} if \[ Leb(\mathscr{N})=1. \] \end{defn} Notice that non-convergence of \( \mu_{n}(x)\) means that there must exist at least two measures, \( \hat\mu, \tilde\mu\), and two subsequences \( n_{i}\to \infty, n_{j}\to \infty\) such that \( \mu_{n_{i}}(x) \to \hat \mu\) and \( \mu_{n_{j}}(x) \to \tilde \mu\). This means that there is an infinite sequence of times for which the statistics of the orbit is extremely well described by the measure \( \hat \mu\) and another sequence of times for which the statistics of the orbit is extremely well described by the measure \( \tilde \mu\).
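In terms of time averages this says (by the standard characterization of weak-star convergence, recorded here only for emphasis) that for every continuous observable \( \varphi \) \[ \frac{1}{n_{i}}\sum_{k=0}^{n_{i}-1}\varphi(f^{k}(x)) \longrightarrow \int \varphi \, d\hat\mu \qquad\text{ and }\qquad \frac{1}{n_{j}}\sum_{k=0}^{n_{j}-1}\varphi(f^{k}(x)) \longrightarrow \int \varphi \, d\tilde\mu, \] so that, whenever \( \int \varphi \, d\hat\mu \neq \int \varphi \, d\tilde\mu \), the time averages of \( \varphi \) along the orbit of \( x \) oscillate indefinitely and never settle down.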
We can think of these sequences as defining a series of \emph{timescales} at which we see completely different statistical behaviour, and therefore the observed frequency of visits to any particular region does not stabilize as \( n \to \infty\). There is quite a large bibliography of research exploring the notion of the non-existence of physical measures from different points of view and giving a number, albeit quite limited, of examples \cites{AarThaZwe05,AraPin21,Bac99,BarKirNak20,BerBie22,ColVar01,CroYanZha20,Her18,HofKel90,HofKel95,HuYou95,Ino00,JarTol04,KanKirLi16,KarAsh11,Kel04,KirLiNak22,KirLiSom10,KirNakSom19,KirNakSom21,KirNakSom22,KirSom17,Kle06,LabRod17,Pal15,Tak08,Tak94,Tal20,Tal22,Zwe02}. We give here only a short and non-comprehensive review of some of these and refer the reader to the original papers for additional information. Arguably the first example of a non-statistical system is the \emph{Bowen eye}, attributed by Takens~\cite{Tak94} to Bowen in the early 1970s. The Bowen eye is a two-dimensional vector field with an eye-like region whose boundary is formed by two saddle connections between two fixed points; under carefully chosen conditions orbits tend to oscillate between the two fixed points in a non-statistical way. This example is somewhat ``mild'' because the dynamics is very simple but, around the same time, Hofbauer and Keller \cites{HofKel90, HofKel95, Kel04} showed that there are (uncountably) many parameters in the logistic family \( f_\lambda (x) \coloneqq \lambda x ( 1 - x ) \) for which \( f_{\lambda} \) is \emph{topologically mixing} but \emph{non-statistical}. Very recently Talebi \cite{Tal22} generalized and extended this result to the setting of complex rational maps. Another approach to the construction of non-statistical examples, or at least examples with a positive Lebesgue measure set of non-statistical points, is by constructing \emph{wandering domains} with non-statistical dynamics \cites{ColVar01, KirSom17,BerBie22}; a further example of non-statistical behaviour appears in \cite{CroYanZha20}, where the authors construct a skew product \( F : \mathbb{T}^2 \times \mathbb{R} \to \mathbb{T}^2 \times \mathbb{R} \) which gives rise to non-statistical behaviour using the fact that skew translations over Anosov diffeomorphisms share properties with Brownian motion. Some results related to ours, for interval maps with two neutral fixed points, were also obtained in \cites{Zwe00, AarThaZwe05} in a somewhat more abstract setting and with a particular focus on the existence and properties of a sigma-finite invariant measure. As far as we know, none of the existing results considers a class of maps anywhere near as large as the family \( \widehat{\mathfrak{F}} \) considered here, nor gives such a complete and systematic characterization of the various kinds of physical measures as given in Theorem \ref{mainthmA}. Most importantly, none of the existing results comes anywhere close to constructing examples of topologically mixing maps without physical measures which are so \emph{prevalent} and \emph{persistent}, as described in Theorems \ref{thm:densityC0}, \ref{thm:densityC1}, \ref{mainthm:open}. \section{Statement of Results}\label{sec:results} We now give the precise definition of the family of maps \(\widehat{ \mathfrak{F}}\) and the subfamilies \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\), as well as some more general technical theorems which imply the main theorems in Section \ref{sec:overview} above.
\subsection{Doubly Intermittent Full Branch Maps}\label{sec:defn} We consider the class of maps introduced in~\cite{CoaLuzMub22}. For completeness we recall the precise definitions. Let \( I, I_-, I_+\) be compact intervals, let \( \mathring I, \mathring I_-, \mathring I_+\) denote their interiors, and suppose that \(I = I_{-}\cup I_{+} \) and \( \mathring I_-\cap \mathring I_+=\emptyset\). \begin{description} \item{\namedlabel{itm:A0}{\textbf{(A0)}}} \( g: I \to I \) is \emph{full branch}: the restrictions \(g_{-}: \mathring I_{-}\to \mathring I\) and \(g_{+}: \mathring I_{+}\to \mathring I\) are orientation preserving \( C^{2} \) diffeomorphisms and the only fixed points are the endpoints of \( I \). \end{description} To simplify the notation we assume that \( I=[-1, 1], I_{-}=[-1, 0], I_{+}=[0,1]\), but our results will be easily seen to hold in the general setting. For \( \iota > 0 \), we let \( U_{0-}:=(-\iota, 0], U_{0+}:=[0, \iota), U_{-1}:=g(U_{0+}), U_{+1}:=g(U_{0-}) \) be one-sided neighbourhoods of the endpoints of the intervals \( I_{-}, I_{+} \). \begin{description} \item{\namedlabel{itm:A1}{\textbf{(A1)}}} There exist constants \( \ell_1,\ell_2 \geq 0 \), \( \iota, k_1,k_2 , a_1,a_2,b_1,b_2 > 0 \) such that, if \( \ell_1,\ell_2> 0 \) and \( k_1,k_2 \neq 1\), \begin{equation}\label{eqn_1} g(x) = \begin{cases} x+b_1{(1+x)}^{1+\ell_1} & \text{in } U_{-1}, \\ 1-a_1{|x|}^{k_1} & \text{in } U_{0-}, \\ -1+a_2{x}^{k_2} & \text{in } U_{0+}, \\ x-b_2{(1-x)}^{1+\ell_2} & \text{in } U_{+1}. \end{cases} \end{equation} If \( \ell_{1}=0 \) and/or \( \ell_2=0\) we replace the corresponding lines in~\eqref{eqn_1} with \( g|_{U_{\pm 1}}(x) \coloneqq \pm 1 + (1 + b_1) ( x + 1) \mp \eta (x), \) where \( \eta \) is \( C^2\), \(\eta(\pm 1)= 0, \eta'(\pm 1)=0\), and \( \eta''(x)>0\) on \( U_{-1}\) and \( \eta''(x)<0\) on \( U_{+1} \). If \( k_1 = 1 \) and/or \( k_2 = 1 \), then we replace the corresponding lines in~\eqref{eqn_1} with the assumption that \( g'(0_-) = a_1>1\) and/or \( g'(0_+) = a_2>1 \) respectively, and that \( g \) is monotone in the corresponding neighbourhood, which makes the definition much less restrictive. \end{description} It is easy to see that the definition in \eqref{eqn_1} yields maps with dramatically different derivative behaviour depending on the values of \( \ell_1, \ell_2, k_1, k_2\), including having neutral or expanding fixed points and points with zero or infinite derivative. Our final assumption can be \emph{intuitively thought of as saying that \( g \) is uniformly expanding outside the neighbourhoods \( U_{0\pm}\) and \( U_{\pm 1}\)}. This is, however, much stronger than what is needed and therefore we formulate a weaker and more general assumption, for which we need to describe some aspects of the topological structure of maps satisfying condition \ref{itm:A0}. First of all we define \begin{equation}\label{eq:Delta0} \Delta^-_0:= g^{-1}(0,1)\cap I_- \quad\text{ and } \quad \Delta^+_0:= g^{-1}(-1,0)\cap I_+. \end{equation} Then we define iteratively, for every \( n \geq 1 \), the sets \begin{equation}\label{eq:Delta} \Delta_n^{-}:= g^{-1}(\Delta_{n-1}^{-})\cap I_{-} \quad\text{ and } \quad \Delta_n^{+}:= g^{-1}(\Delta_{n-1}^{+})\cap I_{+} \end{equation} as the \( n\)'th preimages of \( \Delta_0^-, \Delta_0^+\) inside the intervals \(I_{-}, I_{+} \).
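To fix ideas, we record a purely illustrative computation (our own, not taken from \cite{CoaLuzMub22}, and not needed in what follows): for the piecewise affine full branch map given by \( g(x) = 2x+1 \) on \( I_{-} \) and \( g(x) = 2x-1 \) on \( I_{+} \), which satisfies \ref{itm:A0}, the definitions \eqref{eq:Delta0} and \eqref{eq:Delta} give \( \Delta_0^{-} = (-1/2, 0) \), \( \Delta_0^{+} = (0, 1/2) \) and, inductively, \[ \Delta_n^{-} = \left( -1 + 2^{-(n+1)},\, -1 + 2^{-n} \right) \quad\text{ and } \quad \Delta_n^{+} = \left( 1 - 2^{-n},\, 1 - 2^{-(n+1)} \right), \] so the sets \( \Delta_n^{-} \) accumulate only at \( -1 \) and the sets \( \Delta_n^{+} \) accumulate only at \( +1 \).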
It follows from \ref{itm:A0} that \( \{ \Delta_n^{-}\}_{n\geq 0} \) and \( \{ \Delta_n^{+}\}_{n\geq 0} \) are $\bmod\;0$ partitions of \(I_{-}\) and \(I_{+}\) respectively, and that the partition elements depend \emph{monotonically} on the index in the sense that \( n > m \) implies that \( \Delta_n^{\pm}\) is closer to \( \pm 1\) than \( \Delta_m^{\pm}\); in particular the only accumulation points of these partitions are \( -1\) and \( 1 \) respectively. Then, for every \( n \geq 1 \), we let \begin{equation}\label{eq:delta} \delta_{n}^{-}:= g^{-1}(\Delta_{n-1}^{+}) \cap \Delta_0^{-} \quad\text{ and } \quad \delta_{n}^{+}:= g^{-1}(\Delta_{n-1}^{-}) \cap \Delta_0^{+}. \end{equation} Notice that \( \{ \delta_n^{-}\}_{n\geq 1} \) and \( \{ \delta_n^{+}\}_{n\geq 1} \) are $\bmod\; 0$ partitions of \( \Delta_0^-\) and \( \Delta_0^+\) respectively, and also in these cases the partition elements depend monotonically on the index, in the sense that \( n > m \) implies that \( \delta_n^{\pm}\) is closer to \( 0 \) than \( \delta_m^{\pm}\) (and in particular the only accumulation point of these partitions is 0). Notice, moreover, that \( g^{n}(\delta_{n}^{-})= \Delta_{0}^{+} \) and \( g^{n}(\delta_{n}^{+})= \Delta_{0}^{-}. \) We now define two non-negative integers \( n_{\pm}\) which depend on the positions of the partition elements \( \delta_{n}^{\pm}\) and on the sizes of the neighbourhoods \( U_{0\pm}\) on which the map \( g \) is explicitly defined. If \( \Delta_0^{-} \subseteq U_{0-}\) and/or \( \Delta_0^{+} \subseteq U_{0+}\), we define \( n_{-}= 0 \) and/or \( n_{+}=0 \) respectively, otherwise we let \begin{equation}\label{eq:n+-} n_{+} := \min \{n :\delta_{n}^{+} \subset U_{0+} \} \quad\text{ and } \quad n_{-} := \min \{n :\delta_{n}^{-} \subset U_{0-} \}. \end{equation} We can now formulate our final assumption as follows. \begin{description} \item[\namedlabel{itm:A2}{\textbf{(A2)}}] There exists a \( \lambda > 1 \) such that for all \( 1\leq n\leq n_{\pm}\) and for all \( x \in \delta_n^{\pm} \) we have \( (g^n)'(x) > \lambda\). \end{description} Following \cite{CoaLuzMub22}, we let \[ \widehat{\mathfrak{F}} \coloneqq \{ g : I \to I \text{ which satisfy \ref{itm:A0}-\ref{itm:A2}}\}. \] The class \( \widehat{\mathfrak{F}} \) contains many maps which have been studied in the literature, including uniformly expanding maps and various well-known intermittency maps with a single neutral fixed point; we refer the reader to \cite{CoaLuzMub22} for a detailed literature review. \subsection{Physical measures on repelling fixed points and non-statistical dynamics}\label{sec:stat} It is proved in \cite{CoaLuzMub22} that every \( g \in \widehat{\mathfrak{F}} \) admits a unique (up to scaling) \( \sigma\)-\emph{finite ergodic invariant measure} \( \hat \mu\) \emph{equivalent to Lebesgue} and that many properties depend on the constants \[ \beta^- := \ell_1 k_2, \quad \beta^+ := \ell_2 k_1, \quad\text{ and } \quad \beta := \max \{ \beta^+, \beta^- \}. \] Notice that \( \beta^-, \beta^+\in [0, \infty) \) and can take any value in the allowed range, depending on the values of \( \ell_1, \ell_2, k_1, k_2\). They determine the level of \emph{``stickiness''} of the fixed points \( -1\) and \(+1\) respectively, given by the combination of the constants \( \ell_1, \ell_2\), which determine the order of tangency of the graph of \( g \) with the diagonal, and the constants \( k_1, k_2\), which give the order of the singular or critical points.
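For instance (an illustrative computation using only the definitions above): if \( \ell_1 = \ell_2 = 1/2 \) and \( k_1 = k_2 = 2 \) then \[ \beta^{-} = \ell_1 k_2 = 1 = \ell_2 k_1 = \beta^{+}, \] whereas if \( \ell_1 = 1 \), \( \ell_2 = 0 \) (so that the fixed point \( +1 \) is hyperbolic) and \( k_1 = k_2 = 1 \) then \( \beta^{-} = 1 \) and \( \beta^{+} = 0 \).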
The larger the values of \( \beta^-, \beta^+ \), the \emph{``stickier''} the corresponding fixed points are. We can now define explicitly the subfamilies in \eqref{eq:subfam} by letting \begin{equation} \label{eq:def-subclasses} \mathfrak{F} \coloneqq \{\beta \in [0,1)\} , \qquad \mathfrak{F}_\pm \coloneqq \{\beta\geq 1, \ \beta^- \neq \beta^+ \}, \qquad \mathfrak{F}_* \coloneqq \{\beta\geq 1, \ \beta^- = \beta^+ \}. \end{equation} It is clear that these families are pairwise disjoint and that their union is exactly \( \widehat{\mathfrak{F}} \). Notice that \( \beta \in [0,1) \) implies that \emph{both} \( \beta^{-}, \beta^{+} \in [0,1) \), whereas \( \beta\geq 1 \) only implies that \emph{at least one} of \( \beta^-, \beta^+ \geq 1 \). It is proved in \cite{CoaLuzMub22} that the \( \sigma\)-finite invariant measure \( \hat\mu\) is finite, and can therefore be rescaled to a probability measure \( \mu\), \emph{if and only if} \( \beta\in [0,1)\). \begin{thm}[\cite{CoaLuzMub22}]\label{thm:CoaLuzMub22} If \( g \in \mathfrak{F} \) then \( g \) admits a physical measure equivalent to Lebesgue. \end{thm} As mentioned above, this proves 1) in Theorem \ref{mainthmA}. We are therefore interested in the families \( \mathfrak{F}_\pm \) and \( \mathfrak{F}_* \), neither of which can contain any map with a physical measure equivalent to Lebesgue. The maps in \( \mathfrak{F}_\pm \) are those where one fixed point is \emph{stickier} than the other, whereas the maps in \( \mathfrak{F}_*\) are those for which the stickiness is the same, at least as far as it can be measured by the constants \( \beta^-, \beta^+\). It turns out that the typical statistical behaviour is completely different in these two cases. \begin{thm}\label{thm:phys-measures} If \( g\in \mathfrak{F}_\pm \) then \( g \) admits a physical measure with full basin. Moreover: \\ 1) if \( \beta^{-}> \beta^{+}\), the physical measure is the Dirac-delta measure \( \delta_{-1}\) on the fixed point \( -1 \); \\ 2) if \( \beta^{+}> \beta^{-}\), the physical measure is the Dirac-delta measure \( \delta_{1}\) on the fixed point \( +1 \). \end{thm} \begin{thm}\label{thm:no-phys-measures} If \( g\in \mathfrak{F}_* \), then for Lebesgue almost every \( x\in I \) the sequence \( \mu_n(x) \) does not converge. In particular, \( g \) is non-statistical and admits no physical measures. \end{thm} Theorems \ref{thm:CoaLuzMub22}, \ref{thm:phys-measures}, and \ref{thm:no-phys-measures} clearly imply Theorem \ref{mainthmA}. \begin{rem} It is interesting to note that, contrary to what might be expected, there are no cases in which the physical measures are given by a convex combination \( t\delta_{-1}+ (1-t) \delta_1\) of the Dirac-delta measures on the two fixed points. One may expect that this could be achieved at least for some carefully chosen values of the multiplicative parameters \( a_1, a_2, b_1, b_2\) in \eqref{eqn_1}, but in fact our results show that these play no significant role, at least at this level of the description of the dynamics. \end{rem} \subsection{Density of \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) in \( \widehat{\mathfrak{F}} \)} \label{sec:dense} We now address the issue of the density of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\), as stated in Theorems \ref{thm:densityC0} and~\ref{thm:densityC1}.
We will actually state a much more general result which says that each map \( f\in \widehat{\mathfrak{F}} \) can be approximated arbitrarily closely in the \( C^{r}\) topology by maps in \emph{any} of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\), for some \( r \) which depends both on \( f \) and on the family in which we want to approximate \( f \). We recall first of all the \emph{ceiling function} \( \lceil x \rceil :=\min\{\kappa\in \mathbb N: x \leq \kappa\}. \) Then, for every \( f \in \widehat{\mathfrak{F}} \), we define \[ r_{\pm} \coloneqq r_{\pm}(f) \coloneqq \max \{ \lceil \ell_1 \rceil, \lceil \ell_2 \rceil \}, \qquad \tilde r \coloneqq \tilde r(f) \coloneqq \begin{cases} \lceil 1 / k_{2}\rceil & \text{if } 0 \leq \beta^+ < 1 \leq \beta^- \\ \lceil 1 / k_{1}\rceil & \text{if } 0 \leq \beta^- < 1 \leq \beta^+ \\ \min\{ \lceil 1 / k_{2}\rceil, \lceil 1 / k_{1}\rceil \} &\text{otherwise, } \end{cases} \] and \[r_* \coloneqq r_*(f) \coloneqq \begin{cases} \min \{ \lceil \ell_1 \rceil, \lceil \ell_2 \rceil \} & \text{if } \beta \in [0,1) \\ \lceil \ell_2 \rceil &\text{if } \beta^+ < 1 \leq \beta^- \\ \lceil \ell_1 \rceil &\text{if } \beta^- < 1 \leq \beta^+ \\ \min \left\{ \lceil \ell_{2} k_{1} / k_2 \rceil, \lceil \ell_1 \rceil \right\} & \text{if } \beta^+, \beta^- \geq 1 \text{ and } k_2 \geq k_1 \\ \min \left\{ \lceil \ell_{1}k_{2} / k_1 \rceil, \lceil \ell_2 \rceil \right\} & \text{if } \beta^+, \beta^- \geq 1 \text{ and } k_1 \geq k_2. \end{cases} \] \medskip\noindent Notice that \( r_{\pm}, r_{*}, \tilde r \) are all well-defined non-negative \emph{integers} because of the way they are defined using the ceiling function. Moreover, \( r_{*}=0\) if and only if at least one of the fixed points is hyperbolic, and \( r_{\pm} = 0 \) if and only if both fixed points are hyperbolic (e.g. if \( f \) is uniformly expanding). If both fixed points are neutral then we necessarily have \( r_{\pm}, r_{*}, \tilde r >0 \) and therefore \( r_{\pm}, r_{*}, \tilde r \geq 1 \), since they are all integers and defined in terms of ceiling functions. \begin{thm} \label{thm:density-main} For every \( f \in \widehat{\mathfrak{F}} \) and every \( \varepsilon > 0 \) there exist \( \tilde{f} \in \mathfrak{F} \), \( f_{\pm} \in \mathfrak{F}_{\pm} \) and \( f_{*} \in \mathfrak{F}_{*} \) such that \begin{equation*} \label{eq:f-g-c-r-close} d_{\tilde r} (f , \tilde f) < \varepsilon, \qquad d_{ r_{\pm} } ( f, f_{\pm} ) < \varepsilon, \qquad d_{ r_* } ( f, f_* ) < \varepsilon. \end{equation*} \end{thm} \medskip Theorem \ref{thm:density-main} immediately implies Theorems \ref{thm:densityC0} and~\ref{thm:densityC1} since we always have \( r_{\pm}, r_{*}, \tilde r \geq 0 \) and, as mentioned in the previous paragraph, if \( f \in \widetilde{\mathfrak{F}} \) (i.e. if both fixed points are neutral) then we always have \( r_{\pm}, r_{*}, \tilde r \geq 1 \). However, it also shows that in some cases we can have approximations in much higher topologies. For example, consider a map \( f\in \mathfrak{F}_{*} \), which therefore has no physical measure. By definition we have \( \ell_{1}k_{2}= \ell_{2}k_{1} = \beta\), where \( \beta \geq 1 \) is arbitrary. For definiteness let us suppose that \( \beta =1 \); then, given any \emph{arbitrarily large} positive integer \( R \), there exists a map \( f\in \mathfrak{F}_{*} \) such that \( \ell_{1} = \ell_{2} = R \) and \( k_{1} = k_{2} = 1/R\).
This implies \( \tilde r=r_{\pm}=R\) and therefore, from Theorem \ref{thm:density-main}, we get that the map \( f \), which does not have any physical measure, can be approximated arbitrarily closely in the \( C^{R}\) topology by maps in \( \mathfrak{F} \), which have a physical measure equivalent to Lebesgue, \emph{and} by maps in \( \mathfrak{F}_{\pm} \), which have physical measures which are Dirac-delta measures on a fixed point. Notice that we do not need to consider \( r_{*}\) since, taking \( f_* = f \), the last approximation is trivial. \subsection{``Openness'' of \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) in \( \widehat{\mathfrak{F}} \)}\label{sec:open} Finally, it just remains to discuss the ``openness'' of the families \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) as described in Theorem \ref{mainthm:open}. Now that we have the formal definitions of the maps in \( \widehat{\mathfrak{F}} \), the statement in Theorem \ref{mainthm:open} is actually almost immediate and therefore we just give the proof. \begin{proof}[Proof of Theorem \ref{mainthm:open}] By assumption the map \( g \) is in the class \( \widehat{ \mathfrak{ F } } \) and, since \( \overline{ \operatorname{ supp } ( f - g ) } \subset ( -1 , 0 ) \cup ( 0, 1 ) \), we necessarily have that \( g \) satisfies \ref{itm:A1} with the same parameters as \( f \). Thus \( \beta^+(g) = \beta^+(f) \), \( \beta^-(g) = \beta^- (f) \) and so \( g \) must lie in the same subclass \( \mathfrak{F}, \mathfrak{F}_{\pm}, \mathfrak{F}_* \) as \( f \). \end{proof} We also mention, without giving formal statements, a couple of other natural ways in which maps in \( \mathfrak{F}, \mathfrak{F}_\pm, \mathfrak{F}_*\) can be perturbed without falling outside of their original family, essentially by perturbing some of the parameters through which they are defined. Notice first of all that the conditions which determine whether \( g \) belongs to \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), or \( \mathfrak{F}_*\), do not depend on the constants \( a_{1}, a_{2}, b_{1}, b_{2}\) and therefore we can choose these arbitrarily without changing the values of \( \beta^{-}, \beta^{+}\). Sufficiently small perturbations of these parameters do not invalidate condition (A2) and therefore each subfamily \( \mathfrak{F} \), \( \mathfrak{F}_{\pm}\), \( \mathfrak{F}_*\) is also ``\emph{open}'' in the sense that there exists an open neighbourhood of the parameters \( a_{1}, a_{2}, b_{1}, b_{2}\) for which the corresponding maps still belong to the same subfamily. In addition to the perturbations mentioned above, which do not change the values of the parameters \( \beta^-, \beta^+\), we can also perturb the parameters \( \ell_1, \ell_2, k_1, k_2\) which make up \( \beta^-, \beta^+\). This may of course affect which subfamily the perturbed map belongs to, as the subfamilies are precisely defined in terms of the values of \( \beta^-, \beta^+\), and indeed for these kinds of perturbations the situation is slightly different depending on which subfamily we consider. The maps in \( \mathfrak{F} \) are characterized by the property that \( \beta^-, \beta^+ \in [0,1)\) and therefore there is an open set of sufficiently small perturbations of \( \ell_1, \ell_2, k_1, k_2\) such that this still holds, as well as condition (A2), thus guaranteeing that the perturbed map is still in \( \mathfrak{F} \).
Similarly, maps in \( \mathfrak{F}_{\pm}\) are characterized by the property that at least one of \( \beta^-, \beta^+\) is \( \geq 1 \) and so again there is a large set of perturbations of \( \ell_1, \ell_2, k_1, k_2\) which preserve that condition. Notice however that this may not always contain an open set of parameters, for example if \( \beta^-<1\) and \( \beta^+=1\), in which case we can only perturb \( \ell_2\) and \( k_1\) in such a way that \( \beta^+ \) does not decrease. Finally, maps in \( \mathfrak{F}_*\) are defined in the most restrictive way since they require \( \beta^-= \beta^+ \) and this condition is not preserved for an open set of choices of parameters \( \ell_1, \ell_2, k_1, k_2\). Nevertheless it is still a relatively large and persistent family since we can define a \begin{quote} \begin{center} \emph{three-parameter family completely contained in \( \mathfrak{F}_* \)}: \end{center} \end{quote} for \emph{any} \( \beta\geq 1, s,t> 0 \) there is a map \( g \) with \( \ell_1=s, \ell_2=t, k_1=\beta/t, k_2=\beta/s, \) which implies that \( \beta^-= \ell_1 k_2= \ell_2 k_1 = \beta^+\), and thus \( g\in \mathfrak{F}_* \). \section{The Induced Map} \label{sec:recall} In this section we recall some details of the construction of the induced map carried out in \cite{CoaLuzMub22} and a key estimate from \cite{CoaLuzMub22}, see Proposition \ref{prop:tail-of-tau} below, which will play a crucial role in our proofs. \subsection{Topological construction} We recall first of all from \cite{CoaLuzMub22} the topological structure of the \emph{first return maps} on the intervals \( \Delta_0^-, \Delta_0^+ \) defined in \eqref{eq:Delta0}. From the definitions of the sets \( \Delta_{n}^{\pm}\) and \( \delta_{n}^{\pm}\) in \eqref{eq:Delta} and \eqref{eq:delta}, and from the fact that each branch of \( g \) is a \( C^{2}\) diffeomorphism, it follows that for every \( n \geq 1 \), the maps \( g:\delta_{n}^{-} \to \Delta_{n-1}^{+} \) and \( g:\delta_{n}^{+} \to \Delta_{n-1}^{-} \) are \( C^{2}\) diffeomorphisms, and, for \( n \geq 2 \), the same is true for the maps \( g^{n-1}: \Delta_{n-1}^{-}\to \Delta_0^{-}, \) and \( g^{n-1}: \Delta_{n-1}^{+}\to \Delta_0^{+}, \) and therefore for every \( n \geq 1\), the maps \( g^{n}: \delta_n^{-} \to \Delta_0^+ \) and \( g^{n}: \delta_n^{+} \to \Delta_0^- \) are \( C^{2} \) diffeomorphisms. We can therefore define two \emph{full branch} maps \( \widetilde G^-:\Delta_{0}^{-} \to \Delta_{0}^{+} \) and \( \widetilde G^+:\Delta_{0}^{+} \to \Delta_{0}^{-} \) by \( \widetilde G^\pm|_{\delta_{n}^{\pm} } := g^{n}. \) Then for every \(i, j \geq 1\) we let \begin{equation} \label{eq:def-delta-i-j} \delta_{i,j}^{-} := g^{-i}(\delta^+_j) \cap \delta^-_i \quad\text{ and } \quad \delta^{+}_{i,j} := g^{-i}(\delta^-_j) \cap \delta^{+}_i. \end{equation} Then, for \( i \geq 1\), the sets \( \{ \delta_{i,j}^{-}\}_{j\geq 1} \) and \( \{ \delta_{i,j}^{+}\}_{j\geq 1} \) are partitions of \( \delta_i^-\) and \( \delta_i^+\) respectively and so \( \mathscr P^- := \{ \delta_{i,j}^{-}\}_{i,j\geq 1} \) and \( \mathscr P^+ := \{ \delta_{i,j}^{+}\}_{i,j\geq 1} \) are partitions of \( \Delta_0^-, \Delta_0^+\) respectively, with the property that for every \( i,j \geq 1\), the maps \( g^{i+j}: \delta_{i,j}^{-} \to \Delta_{0}^{-} \) and \( g^{i+j}: \delta_{i,j}^{+} \to \Delta_{0}^{+} \) are \( C^2\) diffeomorphisms.
Notice that \( i+ j \) is the \emph{first return time} of points in \( \delta_{i,j}^{-} \) and \( \delta_{i,j}^{+} \) to \( \Delta_{0}^{-} \) and \( \Delta_{0}^{+} \) respectively, and we have thus constructed \emph{two full branch first return induced maps} \( G^-:=\widetilde G^+ \circ \widetilde G^- :\Delta_{0}^{-} \to \Delta_{0}^{-} \) and \( G^+:=\widetilde G^- \circ \widetilde G^+ :\Delta_{0}^{+} \to \Delta_{0}^{+} \) for which we have \( G^-|_{\delta_{i,j}^{-} }= g^{i+j} \) and \( G^+|_{\delta_{i,j}^{+} }= g^{i+j}. \) We now focus on one of these two full branch first return maps, for definiteness let's say \( G^{-}\) (but we could just as well take \( G^{+}\)) and for simplicity omit the superscript from the notation and write \begin{equation}\label{eq:noindex} \Delta_{0} := \Delta_{0}^{-}, \quad \delta_{i,j} := \delta_{i,j}^{-}, \quad G := G^{-}. \end{equation} It is proved in \cite{CoaLuzMub22} that \( G: \Delta_{0}\to \Delta_{0} \) is a full branch \emph{Gibbs-Markov} map with respect to the partition \( \{ \delta_{i,j} \} \), and therefore admits an \emph{invariant ergodic probability measure} \( \hat \mu \) which is \emph{equivalent to Lebesgue} and has Lipschitz continuous density. Then, by standard results, the \emph{induced measure} \begin{equation} \label{eq:mu} \mu \coloneqq \sum_{n=0}^{\infty} g^n_*(\hat\mu|\{\tau \geq n\}) \end{equation} is a \emph{sigma-finite, ergodic, \( g\)-invariant} measure which, since \( \bigcup_{n\geq 0} g^{n}(\{\tau \geq n \}) = I \ (\text{mod} \ 0)\) by construction, is \emph{equivalent to Lebesgue}. It is easy to check that \( \mu(I) < \infty\) if and only if \( \tau \in L^{1}(\hat\mu)\), and it follows from Proposition 2.6 of \cite{CoaLuzMub22} (which we recall in Proposition \ref{prop:tail-of-tau} below) that \( \tau \in L^{1}(\hat\mu)\) if and only if \( \beta\in [0,1)\), i.e. if and only if \( g\in \mathfrak F \), as already mentioned above. Moreover, since \( G \) is a \emph{first return} induced map, the measure \( \mu \) does not add any measure to the inducing domain and therefore \( \mu|\Delta_{0} = \hat \mu. \) \subsection{Inducing Times} Our arguments revolve around the \emph{distribution} of various \emph{observables} on \( \Delta_{0}\) with respect to the probability \( \hat \mu \). First we define \( \tau^{\pm}, \tau: \Delta_0 \to \mathbb N \) by \begin{equation}\label{eq:def-tau-pm} \tau^+(x):= \#\{1\leq i \leq \tau: g^i(x)\in I_+ \}, \quad \tau^-(x):= \#\{1\leq j \leq \tau: g^j(x)\in I_- \}, \quad \tau \coloneqq \tau^+ + \tau^- \end{equation} where \( \tau^\pm \) \emph{count the number of iterates of \( x \) in \( I_+, I_-\) respectively before returning to~\( \Delta_0\)}. Notice that \( \tau^+|_{\delta_{i,j}} \equiv i \) and \( \tau^-|_{\delta_{i,j}} \equiv j \), and that \( \tau|_{\delta_{i,j}} \equiv i+j \) is exactly the \emph{first return time} to \(\Delta_0\). The following key technical result from \cite{CoaLuzMub22} gives the distribution of these functions. We say that \( f \sim g \) if \( f(t) / g(t) \to 1 \) as \( t \to \infty \). \begin{prop}[{\cite{CoaLuzMub22}*{Proposition 2.6}}] \label{prop:tail-of-tau} There exist constants \( C_+, C_- > 0 \) such that \begin{equation} \label{eq:dist-of-tau-p} \hat \mu( \tau^+ > t ) \sim C_+ t^{-1/\beta^+}, \qquad \hat \mu( \tau^- > t ) \sim C_- t^{-1/\beta^-}, \qquad \hat \mu( \tau > t ) \sim (C_+ + C_- )t^{-1/\beta}.
\end{equation} \end{prop} We will also be interested in the associated Birkhoff sums under the \emph{induced map} \( G \), namely \( \tau_{k}^{\pm}, \tau_{k}: \Delta_{0} \to \mathbb{N} \) defined by \begin{equation}\label{eq:deftaupm} \tau_{k}^{-} \coloneqq \sum_{ \ell = 0 }^{ k - 1 } \tau^{-} \circ G^{ \ell}, \qquad \tau_{k}^{+} \coloneqq \sum_{ \ell = 0 }^{ k - 1 } \tau^{+} \circ G^{ \ell}, \qquad \tau_k \coloneqq \tau_k^+ + \tau_k^-. \end{equation} These give us the total time which points spend in the left and right intervals after \( k \) iterations of the induced map. For future reference, note that \( \tau_{k}^{-}, \tau_{k}^{+}, \tau_k\) are Birkhoff sums of \( \tau^{-}, \tau^{+}, \tau \) respectively, along the orbit of a point under the induced map \( G: \Delta_0\to\Delta_0 \) and therefore, by ergodicity and invariance of the probability measure \( \hat\mu\) under \( G \), since they are all non-negative observables, \begin{equation}\label{eq:int} \frac{\tau^{-}_{k}}{k} \to \int \tau^{-}d\hat\mu, \qquad\qquad \frac{\tau^{+}_{k}}{k} \to \int \tau^{+}d\hat\mu, \qquad\qquad \frac{\tau_{k}}{k} \to \int \tau d\hat\mu, \end{equation} as \( k \to \infty\), irrespective of whether the integrals are finite or not, for \( \hat\mu\) almost every \( x\in \Delta_0\) and therefore, since \( \hat\mu\) is equivalent to Lebesgue, for Lebesgue almost every \( x\in \Delta_0\). Proposition \ref{prop:tail-of-tau} implies that \( \tau^-, \tau^+, \tau \in L^1(\hat\mu)\) if and only if \(\beta^-, \beta^+, \beta \in [0,1)\) respectively (indeed, for \( \beta^{\pm}>0\), \( \int \tau^{\pm} d\hat\mu = \sum_{t \geq 0} \hat\mu(\tau^{\pm} > t) \), which by \eqref{eq:dist-of-tau-p} is finite precisely when \( \beta^{\pm} < 1 \)), and therefore, from \eqref{eq:int} we have that if \( \beta^-, \beta^+, \beta \in [0,1)\), then we have respectively that \begin{equation}\label{eq:divbeta<1} \frac{\tau^{-}_{k}}{k} \to \smallint \tau^{-}d\hat\mu < \infty, \qquad \qquad \frac{\tau^{+}_{k}}{k}\to \smallint \tau^{+}d\hat\mu < \infty, \qquad \qquad \frac{\tau_{k}}{k}\to \smallint \tau d\hat\mu < \infty, \end{equation} and if \( \beta^-, \beta^+, \beta \geq 1\), then we have respectively that \begin{equation}\label{eq:div} \frac{\tau^{-}_{k}}{k} \to \infty, \qquad \qquad \frac{\tau^{+}_{k}}{k}\to \infty, \qquad \qquad \frac{\tau_{k}}{k}\to \infty. \end{equation} \section{Proof of Theorem \ref{thm:phys-measures}} \label{sec:physical} We split the proof of Theorem \ref{thm:phys-measures} into three parts. First we show that Lebesgue almost every point spends asymptotically all its time in either \(I^{-}\) or \( I^{+}\). Then we show that in fact such orbits spend most of the time in arbitrarily small neighbourhoods of the corresponding fixed points \( -1\) and \( + 1\) when measured along the subsequence \( \tau_{k}\). Finally we show that this implies that the same holds for the full sequence of iterates, thus proving the Theorem. \subsection{Statistics of orbits in \(I^{-}\) and \( I^{+}\)} We now go into a bit more detail on the behaviour of the induced observables \( \tau^-_k, \tau^+_k, \tau_k\). Recall that by definition \( \tau_k= \tau^+_k+ \tau^-_k\) and therefore \begin{equation}\label{eq:tauk1} \frac{\tau^-_k}{\tau_k} + \frac{\tau^+_k}{\tau_k} = \frac{\tau_{k}^{-}+ \tau_{k}^+ }{ \tau_k} = \frac{\tau_k}{\tau_k} =1 \end{equation} where \( {\tau^-_k}/{\tau_k} \) and \( {\tau^+_k}/{\tau_k} \) are simply the proportions of time that the orbit of \( x \) spends on the left and right intervals respectively in its first \( \tau_k\) iterates corresponding to \( k \) iterations of the induced map. The main result of this section shows that when \( \beta\geq 1 \), the largest of \( \beta^-\) and \( \beta^+\) ``gets everything''.
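\begin{rem} Before stating the result let us describe the heuristic behind it, under the standing assumption \( \beta \geq 1 \). By \eqref{eq:divbeta<1}, a Birkhoff sum whose exponent is \( < 1 \) grows linearly in \( k \), whereas by \eqref{eq:div} (and, more precisely, by Lemma \ref{lem:holland} below) a Birkhoff sum whose exponent is \( \geq 1 \) grows superlinearly, roughly like \( k^{\beta^{\pm}} \) when \( \beta^{\pm} > 1 \). Thus, whichever of \( \tau^{-}_{k}, \tau^{+}_{k} \) corresponds to the larger exponent eventually dominates the sum \( \tau_{k}= \tau^{+}_{k}+\tau^{-}_{k}\), and this is exactly what the following proposition states. \end{rem}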
\begin{prop} \label{prop:existence-of-physical-meaures} Suppose \( \beta \geq 1\). Then \begin{equation}\label{eq:existence-of-physical-meaures} \beta^-> \beta^+ \implies \frac{\tau^-_k}{\tau_k} \to 1 \quad \quad\text{ and } \quad \quad \beta^+ > \beta^- \implies \frac{\tau^+_k}{\tau_k} \to 1 \end{equation} for Lebesgue almost every point \( x\in \Delta_0\). \end{prop} It then follows of course from \eqref{eq:tauk1} and \eqref{eq:existence-of-physical-meaures} that when \( \beta \geq 1\) we have also \( \beta^-> \beta^+ \implies {\tau^+_k}/{\tau_k} \to 0 \) and \( \beta^+ > \beta^- \implies {\tau^-_k}/{\tau_k} \to 0\), so Proposition \ref{prop:existence-of-physical-meaures} implies that whenever at least one of \( \beta^-, \beta^+\) is \( \geq 1 \) and \( \beta^-\neq \beta^+\), Lebesgue almost every point spends asymptotically all its time either in the left or right interval. \begin{proof}[Proof of Proposition \ref{prop:existence-of-physical-meaures}] We will prove the result when \( \beta^{-}> \beta^{+} \), the case \( \beta^{+}> \beta^{-} \) follows by exactly the same arguments. Notice first of all that from \eqref{eq:tauk1} we have \[ 1= \frac{\tau_{k}^{-}+ \tau_{k}^+ }{ \tau_k} = \frac{ \tau_{k}^- }{ \tau_k } + \frac{ \tau_k^+}{ \tau_k } = \frac{ \tau_{k}^- }{ \tau_k } \left( 1 + \frac{ \tau_k^+ }{ \tau_k^- } \right) \] and it is therefore sufficient to show that \begin{equation}\label{eq:equiv} \frac{\tau^{+}_k}{ \tau_k^-} \to 0 \end{equation} as this implies \( { \tau_{k}^- }/{ \tau_k } \to 1 \) as required. By assumption \( \beta\geq 1 \) and therefore \( \beta^{-}\geq 1 \), and it is therefore sufficient to consider the following three subcases. \begin{description} \item[\textbf{1)}] \( \beta^{-}\geq 1 > \beta^+ > 0\). \end{description} In this case we can write \[ \frac{\tau^{+}_k}{ \tau_k^-} = \frac{\tau^{+}_k}{ k} \frac{k}{ \tau_k^-} \] By \eqref{eq:divbeta<1} we have that \( {\tau^{+}_k}/{ k} \) is bounded and by \eqref{eq:div} we have that \( {k}/{ \tau_k^-} \to 0 \), implying that \( {\tau^{+}_k}/{ \tau_k^-} \to 0 \) and thus giving \eqref{eq:equiv}. \begin{description} \item[\textbf{2)}] \( \beta^{-}> \beta^+ > 1\). \end{description} In this case, the statements in \eqref{eq:divbeta<1} and \eqref{eq:div} do not allow us to immediately draw any definite conclusions and we need to refer to a non-trivial result of \cite{GalHolPer21} which applies precisely to our case. Indeed, since the map \( G \) is Gibbs-Markov, the functions \( \tau^{\pm} \) are constant on each partition element \( \delta_{i,j} \), and the distribution of \( \tau^{\pm} \) is given by \eqref{eq:dist-of-tau-p}, Proposition 2.8 of \cite{GalHolPer21} applied to our setting gives the following result. \begin{lem}[{\cite{GalHolPer21}*{Proposition 2.8}}] \label{lem:holland} For all \( \epsilon > 0 \), \[ \beta^- > 1 \implies k^{\beta^{-}-\epsilon} \lesssim \tau^{-}_{k}\lesssim k^{\beta^{-}+\epsilon} \quad\text{ and } \quad \beta^+ > 1 \implies k^{\beta^{+}-\epsilon} \lesssim \tau^{+}_{k}\lesssim k^{\beta^{+}+\epsilon}. \] \end{lem} This implies \( { \tau^{+}_k }/{ \tau_k^-} \lesssim {k^{\beta^{+}+\epsilon} }/{k^{\beta^{-}-\epsilon}} \) almost surely, and thus implies \eqref{eq:equiv} if \( \epsilon \) is sufficiently small.
\begin{description} \item[\textbf{3)}] \( \beta^{-}> \beta^+ = 1\). \end{description} Lemma \ref{lem:holland} requires \( \beta^{-}\) and \( \beta^{+}\) to be \emph{strictly} greater than 1 and therefore we cannot apply the argument above completely to this setting but we can conclude that \begin{equation}\label{eq:taukbound} \frac{ \tau^{+}_k }{ \tau_k^-} \lesssim \frac{\tau_k^{+}}{k^{\beta^{-}-\epsilon}} \end{equation} Letting \( q:= {1}/{(\beta^{-}-\epsilon)} \) and using the definition of \( \tau_k^{+}\) in \eqref{eq:deftaupm} it will be convenient to write this as \begin{equation}\label{eq:taukbound2} \frac{ \tau^{+}_k }{ \tau_k^-} \lesssim \frac{\tau_k^{+}}{k^{1/q}} = \frac{\tau^{+}+(\tau^{+} \circ G) + \cdots + (\tau^{+} \circ G^{k-1} )}{k^{1/q}} \end{equation} Since \( \hat \mu \) is \( G \) invariant, the summands \( \tau^+ \circ G^\ell \) are identically distributed and Proposition \ref{prop:tail-of-tau} gives \[ \hat \mu ( (\tau^{+})^{q} > t ) = \hat \mu (\tau^{+}> t^{1/q} ) \sim C_{+} t^{-1/(q\beta^+)} = C_{+} t^{-(\beta^{-}-\epsilon)/\beta^+} \] and, since \( (\beta^{-}-\epsilon)/\beta^+> 1 \) if \( \epsilon \) is small enough, this implies that \( \tau^+ \in L^q ( \hat \mu ) \). Then we can apply the following classical and remarkable result. \begin{lem}[{\cite{Saw66}*{Corollary to Lemma 3}}] \label{lem:saw} Let \( (X, \mu) \) be a probability space and suppose \( \varphi_{n} \) are identically distributed random variables with \( \varphi_{n} \in L^{q} \) for some \( q \in (0,1)\). Then, \( \mu \)-almost surely, \[ \frac{\varphi_{1}+ \cdots + \varphi_{n}}{n^{1/q}} \to 0. \] \end{lem} Applying Lemma \ref{lem:saw} to \eqref{eq:taukbound2} implies \eqref{eq:equiv} and thus completes the proof. \end{proof} \subsection{Statistics of orbits near the fixed points for the subsequence \( \tau_{k}\) } Proposition \ref{prop:existence-of-physical-meaures} tells us that depending on the relative values of \( \beta^{-}, \beta^{+}\) orbits spend asymptotically all their time inside either the left or right subintervals \( I^{-}, I^{+}\). We are however especially interested in how much time orbits spend close to the two fixed points and in this section we will show that actually most of the time spent in these intervals is spent in arbitrarily small neighbourhoods of the corresponding fixed points. To formalize this statement, for \( \varepsilon > 0 \) we define the intervals \begin{equation} U^{+}_{\varepsilon} \coloneqq ( 1 - \varepsilon, 1 ), \quad\text{ and } \quad U^{-}_{\varepsilon} \coloneqq ( -1 , -1 + \varepsilon ), \end{equation} and then define the functions \( S_{n,\varepsilon}^{\pm} : [-1,1] \to \mathbb{N} \) and \( S_{n,\varepsilon} : [-1,1] \to \mathbb{N} \) by \begin{equation} \label{eq:def-S-n-eps-pm} S_{n, \varepsilon}^{-} \coloneqq \sum_{k = 0}^{ n - 1 } \mathbb{1}_{U_{\varepsilon}^{-}} \circ g^{k}, \qquad S_{n, \varepsilon}^{+} \coloneqq \sum_{k = 0}^{ n - 1 } {\mathbb{1}}_{U_{\varepsilon}^{+}} \circ g^{k}, \quad\text{ and } \quad S_{n, \varepsilon}= S_{n,\varepsilon}^{-} + S_{n,\varepsilon}^{+}. \end{equation} The functions \( S_{n,\varepsilon}^{-}\) and \( S_{n,\varepsilon}^{+}\) simply count the number of iterates of a point which belong to the neighbourhoods \( U^{-}_{\varepsilon} \) or \( U^{+}_{\varepsilon}\) respectively, in the first \( n \) iterates.
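\begin{rem} Note that \( S_{n,\varepsilon}^{\pm}(x)/n \) is precisely the mass which the empirical measure \( \frac{1}{n}\sum_{k=0}^{n-1}\delta_{g^{k}(x)} \) assigns to the neighbourhood \( U_{\varepsilon}^{\pm} \). The statements below, which say that for Lebesgue typical points these ratios converge to \( 1 \) or \( 0 \) according to the relative sizes of \( \beta^{-}, \beta^{+}\), are therefore exactly what is needed to conclude that the empirical measures converge to the Dirac-delta measure at the corresponding fixed point, as asserted in Theorem \ref{thm:phys-measures}. \end{rem}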
\begin{prop}\label{lem:S-vs-tau-k} For every \( \varepsilon > 0 \) and Lebesgue almost-every \( x \in \Delta_0 \), \begin{equation} \label{eq:S-tau-k-pm-vs-tau-k-pm} \beta^{-} \geq 1 \implies \frac{S_{\tau_k, \varepsilon}^{-}}{\tau_k^-}\to 1, \qquad \beta^{+}\geq 1 \implies \frac{S_{\tau_k, \varepsilon}^{+}}{\tau_k^+}\to 1, \qquad \beta \geq 1 \implies \frac{S_{\tau_k, \varepsilon}}{\tau_k} \to 1. \end{equation} \end{prop} \begin{proof} Recall the definition of the partitions \( \{ \Delta_{n}^{\pm} \} \) in \eqref{eq:Delta} and, for \( \varepsilon > 0 \), let \[ N_{\varepsilon}^{\pm} \coloneqq \max \left\{ N : U_{\varepsilon}^{\pm} \subseteq \bigcup_{k = N}^{\infty} \Delta_{k}^\pm \right\}. \] Then it is easy to see from the definition of the partition \( \{ \delta_{i,j} \} \) in \eqref{eq:def-delta-i-j} and \eqref{eq:noindex} and the properties of the induced map that all points in \( \delta_{i,j}\) with \( i \geq N_{\varepsilon}^{+}\) and \( j \geq N_{\varepsilon}^{-}\) will spend all but \( N_{\varepsilon}^{+} \) and \( N_{\varepsilon}^{-} \) iterates inside \(U_{\varepsilon}^{+}\) and \(U_{\varepsilon}^{-}\) respectively before they return at time \( \tau = i+j\). More formally, if \( x \in \delta_{i,j} \) and \( i > N_{\varepsilon}^{+} \) then \( g^{k} (x) \in U^+_{\varepsilon} \) for \( k < i - N_{\varepsilon}^{+} \) and \( g^{k}(x) \not\in U^{+}_{\varepsilon} \) for \( i - N_{\varepsilon}^{+} < k \leq i \), and similarly if \( x \in \delta_{i,j} \) and \( j > N_{\varepsilon}^{-} \) then \( g^{i + k} (x) \in U^-_{\varepsilon} \) for \( 1 \leq k < j - N_{\varepsilon}^{-} \) and \( g^{i + k}(x) \not\in U^{-}_{\varepsilon} \) for \( j - N_{\varepsilon}^{-} < k \leq j \). Therefore, from the definitions in \eqref{eq:def-tau-pm}, \eqref{eq:def-S-n-eps-pm} it follows that for every \( x \in \Delta_0 \) we have \begin{equation} \label{eq:ineq-tau-S_n} \tau^+ (x) - N_{\varepsilon}^{+} - 1 \leq S^{+}_{\tau (x),\varepsilon} (x) \leq \tau^{+} (x),\quad\text{ and } \quad \tau^- (x) - N_{\varepsilon}^{-} - 1 \leq S^{-}_{\tau (x),\varepsilon} (x) \leq \tau^{-} (x). \end{equation} Notice that this holds even if \( \tau^{\pm}(x) \leq N_{\varepsilon}^{\pm}\) since then the left hand side of the corresponding inequality is non-positive. From the definition of \( \tau_{k} \) we can write \[ S_{\tau_k(x),\varepsilon}^\pm (x) = S_{\tau(x), \varepsilon}^\pm (x) + S_{\tau ( G(x) ), \varepsilon }^{\pm} ( G( x ) ) + \cdots + S_{\tau (G^{k-1} (x) ), \varepsilon}^{\pm} ( G^{k-1} (x) ). \] Applying the inequalities in \eqref{eq:ineq-tau-S_n} to each term in the sum above we get \begin{equation} \label{eq:S-tau-k-vs-tau-k} \tau^\pm_k (x) - k ( N_{\varepsilon}^\pm + 1 ) = \sum_{ m = 0 }^{ k - 1 } \tau^\pm \circ G^m (x) - k ( N_{\varepsilon}^\pm + 1 ) \leq S_{\tau_k (x), \varepsilon }^{ \pm } (x) \leq \sum_{ m = 0 }^{ k - 1 } \tau^\pm \circ G^m (x) = \tau^\pm_k (x) \end{equation} and so, dividing through by \( \tau^\pm_k (x) \), we get \[ 1 - \frac{ k }{ \tau^\pm_k (x) }( N^\pm_\varepsilon + 1 ) \leq \frac{ S_{ \tau_k (x), \varepsilon }^\pm (x) }{ \tau_k^{\pm} (x) } \leq 1. \] From \eqref{eq:div} we have \( k/{\tau_k^\pm (x) } \to 0\) for \( \hat \mu \) almost every \( x \in \Delta_0 \), yielding the first two implications in \eqref{eq:S-tau-k-pm-vs-tau-k-pm}. From \eqref{eq:S-tau-k-vs-tau-k} we also get \[ \tau_k(x) - k ( N_\varepsilon^+ + N_{\varepsilon}^- + 2 ) \leq S_{\tau_k(x), \varepsilon}^{+} (x) + S_{\tau_k(x), \varepsilon}^-(x) \leq \tau_k(x). \] Dividing through by \( \tau_k(x) \) and applying \eqref{eq:div} as above, we get the third implication in \eqref{eq:S-tau-k-pm-vs-tau-k-pm} and complete the~proof.
\end{proof} \subsection{Statistics of orbits near the fixed points} We are now ready to complete the proof of Theorem \ref{thm:phys-measures}. \begin{proof}[Proof of Theorem \ref{thm:phys-measures}] We will show that for every \( \varepsilon > 0 \) and Lebesgue almost every point \( x\in \Delta_0\), \begin{equation}\label{eq:seq} \beta^{-} > \beta^+ \implies \frac{S_{n, \varepsilon}^{-}}{n}\to 1, \qquad \beta^{+} > \beta^- \implies \frac{S_{n, \varepsilon}^{+}}{n}\to 1. \end{equation} This clearly implies the statement of Theorem \ref{thm:phys-measures}. Notice first of all that from Propositions \ref{prop:existence-of-physical-meaures} and~\ref{lem:S-vs-tau-k} we immediately get that for every \( \varepsilon > 0 \) and Lebesgue almost every point \( x\in \Delta_0\), \begin{equation}\label{eq:existence-of-physical-measures} \beta^-> \beta^+ \implies \frac{S_{\tau_k, \varepsilon}^{-}}{\tau_k} \to 1 \quad \quad\text{ and } \quad \quad \beta^+ > \beta^- \implies \frac{S_{\tau_k, \varepsilon}^{+}}{\tau_k} \to 1. \end{equation} We therefore just need to replace the convergence along the subsequence \( \tau_{k}\) with convergence along the full sequence. Suppose first that \( \beta^{+} > \beta^- \). Let \( x \in \Delta_{0}^{-}\) and consider the sequence of iterates \( g^{i}(x)\) for \( 1\leq i \leq \tau(x)\). Recall from the construction of the induced map that the iterates for which \( g^{i}(x) \in U^{+}_{\varepsilon} \) all lie at the ``beginning'' of the sequence, i.e. once the orbit leaves the neighbourhood \( U^{+}_{\varepsilon} \) it cannot return to it before the next return to \( \Delta^{-}_{0}\). More formally, either \( g^{i}(x)\notin U^{+}_{\varepsilon}\) for all \( 0\leq i \leq \tau(x)\) (i.e. the finite piece of orbit never enters \( U^{+}_{\varepsilon} \)), or there exists an integer \( 1\leq m_{\varepsilon} < \tau(x)\) such that \( g^{i}(x)\in U^{+}_{\varepsilon}\) for all \( 0 < i \leq m_{\varepsilon}\) and \( g^{i}(x)\notin U^{+}_{\varepsilon}\) for all \( m_{\varepsilon} < i \leq \tau(x)\). This means that the ratio \( {S_{n, \varepsilon}^{+}}/{n} \) is always larger \emph{in-between} returns than at the \emph{following} return or, more formally, letting \( k \geq 1\) be the smallest integer such that \( \tau_{k}\geq n\), we have \[ 1\geq \frac{S_{n, \varepsilon}^{+}}{n} \geq \frac{S_{\tau_{k}, \varepsilon}^{+}}{\tau_{k}}. \] By \eqref{eq:existence-of-physical-measures} this implies \eqref{eq:seq} in the case \( \beta^{+} > \beta^- \). The case \( \beta^{-}> \beta^{+}\) follows by replacing \( - \) by \( + \) in \eqref{eq:noindex} and carrying out exactly the same argument. \end{proof} \section{Proof of Theorem \ref{thm:no-phys-measures}} Throughout this section we suppose that \( g \in \mathfrak F_{*}\) and therefore \( \beta^{-}=\beta^{+}=\beta\geq 1\). Recall the definition of the functions \( \tau^{-}, \tau^{+}, \tau:= \tau^{-}+\tau^{+}\) in \eqref{eq:def-tau-pm} and their tail distributions in \eqref{eq:dist-of-tau-p}. The key to the proof of Theorem \ref{thm:no-phys-measures} is the related function \begin{equation}\label{eq:def-tau-tilde} \tilde \tau \coloneqq \tau^{+}-\tau^{-} \end{equation} and its corresponding distribution given in \cite{CoaLuzMub22}. \begin{prop}[{\cite{CoaLuzMub22}*{Propositions 2.6 and 4.1}}]\label{prop:tail-of-tautilde} There exist constants \( C_+, C_- > 0 \) such that \begin{equation} \label{eq:dist-of-tautilde} \hat \mu( \tilde \tau > t ) \sim C_+ t^{-1/\beta^+} \quad\text{ and } \quad \hat \mu( \tilde\tau< -t ) \sim C_- t^{-1/\beta^-}.
\end{equation} \end{prop} Notice that \( \tilde \tau|_{\delta_{i,j}} \equiv i-j \) and that, unlike the functions defined in \eqref{eq:def-tau-pm}, it takes both positive and negative values and thus also has a negative tail. As in \eqref{eq:deftaupm} we are also interested in the Birkhoff sums \( \tilde \tau_k : \Delta_0 \to \mathbb{Z} \) of \( \tilde \tau \) under the induced map \( G \) \begin{equation} \label{eq:def-tau-tilde-k} \tilde{\tau}_k \coloneqq \tau_k^+ - \tau_k^-, \end{equation} and will be interested in the asymptotic behaviour of the averages \( \tilde\tau_{k}/k\) as \( k \to \infty\). From \eqref{eq:divbeta<1} and~\eqref{eq:div} we can easily see that the limit exists (albeit possibly equal to \( \pm \infty\)) as long as \( \beta^{-}<1\) and/or \( \beta^{+}<1\). However, if \( \beta^{-}\geq 1\) and \( \beta^{+}\geq 1\) then \eqref{eq:div} shows that both \( \tau^{-}_{k}/k\to \infty\) and \( \tau^{+}_{k}/k\to \infty\) and therefore it is not possible to draw any definite conclusions about the limit of \( \tilde\tau_{k}/k\). The proof of Theorem \ref{thm:no-phys-measures} indeed rests essentially on the following result. \begin{prop} \label{lem:tilde-tau-limsup-liminf>1} For Lebesgue almost every \( x \in \Delta_0^- \), if \( \beta^- = \beta^+ = \beta > 1 \) then \begin{equation} \label{eq:tilde-tau-limsup-liminf>1} \limsup_{k\to\infty} \frac{\tilde{\tau}_k}{k} = +\infty \quad\text{ and } \quad \liminf_{k \to\infty} \frac{\tilde{\tau}_k}{k} = - \infty; \end{equation} if \( \beta^- = \beta^+ = \beta = 1 \) then \begin{equation} \label{eq:tilde-tau-limsup-liminf} \limsup_{k\to\infty} \frac{\tilde{\tau}_k}{\log k} = +\infty \quad\text{ and } \quad \liminf_{k \to\infty} \frac{\tilde{\tau}_k}{\log k} = - \infty. \end{equation} \end{prop} We show how Proposition \ref{lem:tilde-tau-limsup-liminf>1} implies Theorem \ref{thm:no-phys-measures}. \begin{proof}[Proof of Theorem \ref{thm:no-phys-measures}] We will show that \( {\tau^+_k}/{\tau_k} \) and \( {\tau^-_k}/{\tau_k} \) almost surely do not converge, which clearly implies the Theorem. Assuming by contradiction that \( \tau^+_k / \tau_k \) converges, and therefore also \( \tau^-_k / \tau_k \) converges, this would imply that \( { \tilde{\tau}_k }/{\tau_k} \) also converges. It is therefore sufficient to prove that \( { \tilde{\tau}_k }/{\tau_k} \) does not converge. Suppose first that \( \beta > 1 \). Writing \begin{equation}\label{eq:product} \frac{ \tilde{\tau}_k }{\tau_k} = \frac{ \tilde \tau_k }{k} \frac{ k}{ \tau_k } \end{equation} note that by \eqref{eq:div} we know that \( { k}/{ \tau_k } \to 0 \) and from Proposition \ref{lem:tilde-tau-limsup-liminf>1} we know that \( \tilde \tau_k / k \) is both negative and positive infinitely often, and so the only possible limit of \eqref{eq:product} is \( 0 \). However, \( \tilde \tau_k/ \tau_k \leq \tilde \tau_k/ k \) and so, again by Proposition~\ref{lem:tilde-tau-limsup-liminf>1}, cannot converge to \( 0 \) almost surely. Now suppose that \( \beta = 1 \). Writing \begin{equation}\label{eq:productb1} \frac{ \tilde{\tau}_k }{\tau_k} = \frac{ \tilde \tau_k }{ \log k} \frac{ \log k }{ \tau_k } \end{equation} note that by \eqref{eq:div} we know that \( { \log k }/{ \tau_k } \to 0 \) and from Proposition \ref{lem:tilde-tau-limsup-liminf>1} we know that \( \tilde \tau_k / \log k \) is both negative and positive infinitely often, and so the only possible limit of \eqref{eq:productb1} is \( 0 \).
However, \( \tilde \tau_k/ \tau_k \leq \tilde \tau_k/ \log k \) and so, again by Proposition~\ref{lem:tilde-tau-limsup-liminf>1}, cannot converge to \( 0 \) almost surely. \end{proof} It therefore just remains to prove Proposition \ref{lem:tilde-tau-limsup-liminf>1}. We will consider separately the cases \( \beta > 1 \) and \( \beta=1\) in Sections \ref{sec:nonexistence} and \ref{sec:nonexistence1} respectively. The arguments are similar in both cases but in the first case the constants and calculations involved are much more explicit and therefore give a better insight into the proof. Both cases rely on showing that the sequence \( \tilde\tau_{k}\) satisfies some \emph{stable law} and we therefore begin with some relatively standard definitions and notation. First of all recall that a sequence of random variables \( X_1, X_2, \ldots \) converges in distribution to \( Y \), which we write as \( X_n \dto{n} Y \), if \( \mathbb{P} ( X_n \leq x ) \underset{n \to \infty}{\to} \mathbb{P} ( Y \leq x ), \) for every continuity point \( x \) of \( x \mapsto \mathbb{P} ( Y \leq x ) \). \begin{defn} We say that a random variable \( Y \) is a \emph{stable random variable with parameters} \[ \alpha \in (0,2], \quad \eta \in [-1,1], \quad \gamma \geq 0, \quad \delta \in \mathbb{R}, \quad \text{ and write } \quad Y \sim S ( \alpha, \eta, \gamma, \delta ),\] if the characteristic function of \( Y \) is given by \[ \mathbb{E} ( e^{ it Y } ) = \begin{cases} \exp \left( -\gamma^\alpha |t|^{\alpha} [ 1 - i \eta \tan ( \frac{ \pi \alpha }{ 2 } ) \operatorname{sign} t ] + i \delta t \right), & \text{ if } \alpha \neq 1 \\ \exp \left( -\gamma |t| [ 1 + i \eta \frac{2}{\pi} \operatorname{sign} t \log | \gamma t | ] + i \delta t \right), & \text{ if } \alpha = 1. \end{cases} \] \end{defn} Note that there are many ways of parametrizing stable random variables, our choice above corresponds to the \( S( \alpha, \eta, \gamma, \delta; 1 ) \) parametrization adopted in \cite{Nol20}. A key observation is that the parameter \( \eta \) determines the support of \( X \sim S( \alpha, \eta, \gamma, \delta ) \), in particular (cf. \cite{Nol20}*{Lemma 1.1}), \begin{equation}\label{eq:support-of-stable} \operatorname{support} X = \mathbb R \quad \text{\emph{unless} \( \alpha<1\) \emph{and} \( \eta=\pm 1\). } \end{equation} \subsection{Non-existence of physical measures (\(\beta>1\))} \label{sec:nonexistence} In this section we prove \eqref{eq:tilde-tau-limsup-liminf>1} of Proposition~\ref{lem:tilde-tau-limsup-liminf>1}. Let \( C_+, C_- \) be as in Proposition \ref{prop:tail-of-tautilde} and set \begin{equation}\label{eq:etagamma} \eta \coloneqq \frac{ C_+ - C_- }{ C_+ + C_-},\quad \gamma \coloneqq \left( \frac{ 2 \Gamma ( 1/ \beta ) \sin \frac{\pi \beta }{2} }{ \pi (C_+ + C_-)} \right)^{ \beta }. \end{equation} Notice that \( C_{-}, C_{+}>0\) and therefore both \( \eta\) and \( \gamma \) are well-defined and \( \eta \neq \pm 1\). \begin{prop}\label{prop:stable-laws-for-tilde-tau>1} Suppose that \( \beta > 1 \). Then, \[ \frac{\tilde{\tau}_k}{ k^{\beta} }\dto{k} Y \sim S( 1/\beta, \eta, \gamma, 0 ). \] In particular, \( \operatorname{support} Y = \mathbb R\). \end{prop} We defer the proof of Proposition \ref{prop:stable-laws-for-tilde-tau>1} to Section \ref{sec:prop:stable-laws-for-tilde-tau} and now proceed to show how the proposition above implies \eqref{eq:tilde-tau-limsup-liminf>1}.
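\begin{rem} Let us point out the only feature of Proposition \ref{prop:stable-laws-for-tilde-tau>1} which will actually be used. Since \( C_{+}, C_{-}>0\) we have \( \eta \neq \pm 1 \) (for instance \( \eta = 0 \) in the symmetric case \( C_{+}=C_{-}\)), and so by \eqref{eq:support-of-stable} the limiting random variable \( Y \) has full support; in particular \( \mathbb{P}(Y>1)>0 \) and \( \mathbb{P}(Y<-1)>0 \). This means that the rescaled sums \( \tilde\tau_{k}/k^{\beta}\) exceed \( 1 \), and drop below \( -1 \), with probabilities bounded away from zero, and it is this persistence of both signs which produces the oscillations required in \eqref{eq:tilde-tau-limsup-liminf>1}. \end{rem}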
\begin{proof}[Proof of \eqref{eq:tilde-tau-limsup-liminf>1}] We will show that the sets \begin{equation} \label{eq:Acalpm-beta>1} \mathscr{A}^{+} \coloneqq \left\{ x : \limsup_{ k \to \infty } \frac{\tilde{\tau}_k}{k} = +\infty \right\} \quad\text{ and } \quad \mathscr{A}^{-} \coloneqq \left\{ x : \liminf_{ k \to \infty } \frac{\tilde{\tau}_k}{k} = - \infty \right\}, \end{equation} are of full measure and the result will then follow. Notice that these sets are \( G \) invariant and so, by ergodicity, it suffices to show that they are of positive measure. Consider the sets \begin{equation} \label{eq:Apm-beta>1} A^{\pm}_k \coloneqq \{ \pm \tilde \tau_k > k^{\beta} \}, \quad\text{ and } \quad A^{\pm} \coloneqq \{ x : x \in A_k^{\pm} \text{ for infinitely many k }\} = \bigcap_{ \ell = 1 }^{ \infty } \bigcup_{ k = \ell }^{ \infty } A_k^{\pm} \end{equation} and notice that if \( x \in A^{+} \) then \( \tilde \tau_k(x) / k > k^{\beta - 1} \) for infinitely many \( k \) and so, since \( \beta > 1 \), we know that \( \limsup \tilde \tau_k(x)/ k = + \infty \). Similarly, if \( x \in A^{-} \) then \( \liminf \tilde \tau_k / k = - \infty \) which yields \( A^{\pm} \subset \mathscr{A}^\pm \). We will now use Proposition \ref{prop:stable-laws-for-tilde-tau>1} to show that \( A^{\pm} \), and thus \( \mathscr{A}^{\pm} \), is of positive measure. Proposition \ref{prop:stable-laws-for-tilde-tau>1} ensures that \( \mu ( A_{k}^{\pm} ) \to \mathbb{P} ( \pm Y > 1 ) = p^{\pm} \), and the fact that the support of \( Y \) is the entire real line ensures that \( p^{\pm} > 0 \). Thus, \[ \mu ( A^{\pm} ) = \mu \left( \bigcap_{ \ell = 1 }^{\infty} \bigcup_{ k \geq \ell } A_k^{\pm} \right) = \inf_{\ell} \mu \left( \bigcup_{ k \geq \ell } A_k^{\pm} \right) \geq p^{\pm} > 0 \] as the sets \( \left( \bigcup_{ k \geq \ell } A_k^{\pm} \right)_{\ell \geq 1} \) are nested. \end{proof} \subsection{Non-existence of physical measures (\( \beta=1\))} \label{sec:nonexistence1} The argument in this case also relies on a stable law limit such as that in Proposition \ref{prop:stable-laws-for-tilde-tau>1} above but with some different parameters. We let \( \eta, \gamma\) be as in \eqref{eq:etagamma} and let \begin{equation} \label{eq:delta-stable} \delta \coloneqq - \eta \frac{2}{\pi} \gamma \log \gamma. \end{equation} We also fix \( \phi (t) \coloneqq \int \exp\{ it \tilde{\tau} \} d m \) to be the characteristic function of \( \tilde \tau\) and define the sequences \begin{equation} \label{eq:def-b-ks} b_k \coloneqq \operatorname{Im} \log \phi(k^{-1}), \quad\text{ and } \quad \tilde{b}_k \coloneqq \frac{ k + b_k }{ \log k}. \end{equation} \begin{prop}\label{prop:stable-laws-for-tilde-tau} Suppose that \( \beta =1 \). Then, \[ \frac{\tilde{\tau}_k - b_k }{ k }\dto{k} Y \sim S( 1, \eta, \gamma, \delta ). \] \end{prop} We will argue in a very similar way to Section \ref{sec:nonexistence} and show that the stable law in Proposition \ref{prop:stable-laws-for-tilde-tau} implies \eqref{eq:tilde-tau-limsup-liminf}. \begin{proof}[Proof of \eqref{eq:tilde-tau-limsup-liminf}] We will show that the sets \[ \mathscr{A}^{+} \coloneqq \left\{ x : \limsup_{ k \to \infty } \frac{\tilde{\tau}_k}{ \log k} = +\infty \right\} \quad\text{ and } \quad \mathscr{A}^{-} \coloneqq \left\{ x : \liminf_{ k \to \infty } \frac{\tilde{\tau}_k}{ \log k} = - \infty \right\}, \] are of full measure. First, note that these sets are \( G \) invariant and so, by ergodicity, it suffices to show that they are of positive measure.
We define sets \( A^{\pm}_k, A^{\pm} \) in an analogous way to \eqref{eq:Apm-beta>1}, this time setting \[ A^{\pm}_k \coloneqq \{ \pm \tilde\tau_k - b_k > k \}, \quad\text{ and } \quad A^{\pm} \coloneqq \{ x : x \in A_k^{\pm} \text{ for infinitely many k }\} = \bigcap_{ \ell = 1 }^{ \infty } \bigcup_{ k = \ell }^{ \infty } A_k^{\pm}. \] From \eqref{eq:def-b-ks} we see that \( A^{\pm}_k = \{ \pm \tilde\tau_k / \log k > \tilde{b}_k \} \) and so, as in the proof of \eqref{eq:tilde-tau-limsup-liminf>1}, we see that \( A^{\pm} \subset \mathscr{A}^{\pm} \) as \( \tilde b_k \to \infty \). The conclusion is exactly the same as in the previous section. Proposition \ref{prop:stable-laws-for-tilde-tau} ensures that \( \mu ( A_{k}^{\pm} ) \to \mathbb{P} ( \pm Y > 1 ) = p^{\pm} \), and the fact that the support of \( Y \) is the entire real line ensures that \( p^{\pm} > 0 \). Thus, \[ \mu ( A^{\pm} ) = \mu \left( \bigcap_{ \ell = 1 }^{\infty} \bigcup_{ k \geq \ell } A_k^{\pm} \right) = \inf_{\ell} \mu \left( \bigcup_{ k \geq \ell } A_k^{\pm} \right) \geq p^{\pm} > 0. \] \end{proof} \subsection{Proof of Propositions \ref{prop:stable-laws-for-tilde-tau>1} and \ref{prop:stable-laws-for-tilde-tau}} \label{sec:prop:stable-laws-for-tilde-tau} \begin{proof}[Proof of Propositions \ref{prop:stable-laws-for-tilde-tau>1} and \ref{prop:stable-laws-for-tilde-tau}] Let \(X_1, X_2, \ldots \) be a sequence of iid random variables that are equal in distribution to \( \tilde{\tau} \). Proposition \ref{prop:tail-of-tautilde} ensures that \begin{equation} \label{eq:dist-Xk} t^{1/\beta} \hat \mu ( X_k > t ) \to C_+, \quad\text{ and } \quad t^{1/\beta} \hat \mu ( X_k < -t ) \to C_-. \end{equation} The classical probability literature (see for example \cite{Nol20}*{Theorem~3.12}) tells us that \eqref{eq:dist-Xk} implies that scaled and centred sums of the \( X_k \) will converge in distribution to a stable random variable. The scaling and centring sequence, and the parameters of the limiting stable random variable, depend on the value of \( \beta \) and \( C_+, C_- \) in the way we describe below. Setting \( \eta, \gamma \) as in \eqref{eq:etagamma}, \cite{Nol20}*{Theorem~3.12} yields the following limit theorems: 1) If \( \beta > 1 \) then \[ \frac{ \sum_{ \ell = 0 }^{ k - 1 } X_{\ell} }{ k^{\beta } } \dto{k} Y \sim S ( 1/\beta, \eta, \gamma, 0 ). \] 2) If \( \beta = 1 \) then, defining \( \delta \) as in \eqref{eq:delta-stable} and \( ( b_k ) \) as in \eqref{eq:def-b-ks}, \[ \frac{ \sum_{ \ell = 0 }^{ k - 1 } X_{\ell} - b_k }{ k } \dto{k} Y \sim S ( 1, \eta, \gamma, \delta ). \] Now that we have established the limit theorem for the iid sequence \( (X_k) \), \cite{Gou10a}*{Theorem 1.5} tells us that since \( G \) is a topologically mixing Gibbs-Markov map, and since \( \tilde{\tau} \) is constant on the partition elements \( \delta_{i,j} \), the sequence \( \tilde{\tau}_k \) will satisfy the same distributional convergence as \( X_1 + \cdots + X_k \). In particular, if \( \beta > 1 \) \[ \frac{ \tilde \tau_k }{ k^{\beta } } \dto{k} Y \sim S ( 1/\beta, \eta, \gamma, 0 ), \] otherwise if \( \beta = 1 \) \[ \frac{ \tilde{\tau}_k - b_k }{ k } \dto{k} Y \sim S ( 1, \eta, \gamma, \delta ). \] \end{proof} \section{Proof of Theorem \ref{thm:density-main}} \label{sec:density} Throughout this section we fix \( f \in \widehat{ \mathfrak{F} } \) and let \( \ell_1,\ell_2, k_1,k_2 , a_1,a_2,b_1,b_2, \iota \) be the corresponding parameters as in~\eqref{eqn_1}. Then, given \emph{arbitrary constants} \( \tilde\ell_{1},
\tilde\ell_{2}\geq 0\), in Section \ref{sec:construction} we will give a quite explicit construction of a map \( g \) which, in Section \ref{sec:fhat}, we will show belongs to our class \( \widehat{\mathfrak{F}} \), with parameters \( \tilde{\ell}_1, \tilde{\ell}_2, k_{1}, k_2, a_{1}, a_2, \tilde b_1, \tilde b_2 , \tilde\iota \), for appropriately chosen constants \(\tilde b_1, \tilde b_2, \tilde \iota \). In Section \ref{sec:crclose} we estimate the distance between \( f \) and \( g \) in an appropriate topology and finally, in Section \ref{sec:conc}, we apply these estimates to the various cases required by Theorem \ref{thm:density-main} and thus complete the proof. \subsection{Construction of \( g \)} \label{sec:construction} In Section \ref{sec:const-1} we describe the general construction of \( g \) and introduce the other parameters and functions on which \( g \) will depend. In Section \ref{sec:const-2} we then make some specific choices of the various parameters and functions involved in the construction. \subsubsection{General strategy for constructing the perturbation} \label{sec:const-1} Let \( f \in \widehat{\mathfrak{F}} \) and let the corresponding parameters as in~\eqref{eqn_1} be \( \ell_1,\ell_2 \geq 0 \), \(k_1,k_2 , a_1,a_2,b_1,b_2 > 0 \). For any two constants \(\tilde\ell_{1} > 0 \) and \(\tilde\ell_{2} > 0\), and any two compact intervals \[ [\tilde x_{1}, x_{1}]\subset U_{-1} \quad\text{ and } \quad [x_{2}, \tilde x_{2}]\subset U_{1}, \] we define functions \( h_1 : U_{-1} \to [-1,1] \) and \( h_2 : U_{1} \to [-1,1] \) as follows. If \( \tilde\ell_{1}=\ell_{1}\) or \( \tilde\ell_{2}=\ell_{2}\) then we let \( h_{1}=f|_{U_{-1}}\) and \( h_{2}=f|_{U_{1}}\) respectively. Otherwise, we let \begin{equation} \label{eq:def-h1} h_1(x) \coloneqq x + \tilde b_1 ( 1 + x )^{1 + \tilde{ \ell }_1} \quad\text{ and } \quad h_2(x) \coloneqq x - \tilde b_2 ( 1 - x )^{1 + \tilde{ \ell }_2}, \end{equation} where \( \tilde b_1, \tilde b_2 > 0 \) are any constants such that \begin{equation} \label{eq:cond-monotonicty} h_{1}(x) \leq f(x) \quad \text{ on } \quad [\tilde x_1, x_1], \quad\text{ and } \quad h_{2}(x) \geq f(x) \quad \text{ on } \quad [ x_2, \tilde x_2], \end{equation} for every integer \( k\leq \lceil \ell_{1} \rceil \) and \( k\leq \lceil \ell_{2} \rceil \) respectively. Note that if \( \tilde \ell_{1,2} \geq \ell_{1,2} \) then we can take \( \tilde b_{1,2} = b_{1,2} \) and the corresponding line of \eqref{eq:cond-monotonicty} will hold. Moreover, regardless of the relative values of \( \ell_1,\ell_2 \) and \( \tilde \ell_1, \tilde \ell_2 \), \eqref{eq:cond-monotonicty} will always hold for all \( \tilde b_1, \tilde b_2 > 0 \) sufficiently small as we are only asking for the inequalities in \eqref{eq:cond-monotonicty} to be satisfied for \( x \) in compact intervals away from \( -1 \) and \( 1 \). We let \[ \xi_{1}: [\tilde x_{1}, x_{1}] \to [0,1] \quad\text{ and } \quad \xi_{2}: [ x_{2}, \tilde x_{2}]\to [0,1] \] be \( C^{\infty}\) \emph{monotone increasing bijections} and define \(g\) on the intervals \( [-1, 0) \) and \( [0, 1 ] \) by \begin{equation} \label{eq:def-g-1} g|_{[-1, 0]} (x) \coloneqq \begin{cases} h_1(x) & \text{if } x \in [-1, \tilde x_1] \\ h_1(x) + \xi_1 (x) ( f(x) - h_1(x) ) & \text{if } x \in (\tilde x_1, x_1) \\ f(x) & \text{if } x \in [x_1, 0 ).
\end{cases} \end{equation} and \begin{equation} \label{eq:def-g-2} g|_{[0, 1 ]} (x) \coloneqq \begin{cases} f(x) & \text{if } x \in [0, x_{2} ] \\ f(x) + \xi_2 (x) (h_2 (x) - f(x) ) & \text{if } x \in (x_{2}, \tilde x_{2}) \\ h_2(x) & \text{if } x \in [\tilde x_{2}, 1 ]. \end{cases} \end{equation} Notice that \( g \) is equal to \( f \) outside \( U_{-1}\) and \( U_{1}\) and, apart from \( \tilde\ell_{1}, \tilde\ell_{2}\), depends on the two intervals \( [\tilde x_{1}, x_{1}], [ x_{2}, \tilde x_{2}]\), the constants \( \tilde b_1,\tilde b_2\), and the functions \( \xi_{1}, \xi_{2}\), which we explain below how to choose. \subsubsection{Choosing \( x_1,x_2,\tilde x_1, \tilde x_2, \tilde b_1,\tilde b_2, \xi_{1}, \xi_{2}\)} \label{sec:const-2} We explain how to make a specific choice of the constants \( \tilde x_1, \tilde x_2, \tilde b_1,\tilde b_2\) and the functions \( \xi_{1}, \xi_{2}\) depending on arbitrary \( x_1 \in U_{-1} \) and \( x_2 \in U_{1} \). First of all we define the \emph{affine orientation preserving bijections} \( \eta_{1}: [\tilde x_{1}, x_{1}] \to [0,1] \) and \( \eta_{2}: [ x_{2}, \tilde x_{2}]\to [0,1] \) by \[ \eta_1(x) \coloneqq \frac{x-\tilde x_1}{x_1-\tilde x_1} \quad\text{ and } \quad \eta_2(x)\coloneqq \frac{x-x_2}{\tilde x_2- x_2}. \] Then we define a \( C^\infty\) map \( \xi:[0.1]\to [0,1]\) by \begin{equation} \label{eq:def-of-xi} \xi ( x ) \coloneqq \begin{cases} 0 &\text{if } x = 0 \\ \exp \left\{ 1 - \frac{ 1 }{ 1 - ( x - 1 )^2 } \right\} &\text{if } x \in (0, 1]. \end{cases} \end{equation} and let \[ \xi_1(x):= \xi\circ \eta_1(x) \quad\text{ and } \quad \xi_2(x):= \xi\circ \eta_2(x) \] Notice that \( \xi\) is monotone increasing, \( \xi ( 0 ) = 0 \), \( \xi ( 1 ) = 1 \), and \( D^k \xi ( 0 ) = D^k \xi ( 1 ) = 0 \) for every \( k \geq 1 \) and therefore \( \xi_1, \xi_2\) are also in particular \( C^\infty\) and flat at the endpoints. We now fix \( \tilde x_1\) to be the mid-point between \( -1\) and \( x_1\), and \( \tilde x_2\) to be the midpoint between \( x_2\) and \( 1 \), i.e. \[ \tilde x_1 \coloneqq \frac{1}{2} x_1 - \frac{1}{2}, \quad\text{ and } \quad \tilde x_2 \coloneqq \frac{1}{2} x_2 + \frac{1}{2}. \] \begin{rem} The fundamental reason for these choices is that, by using the explicit forms of \( \eta_1, \eta_2\) and \( \xi\), and the definition of \( \xi_1, \xi_2\), we can verify by repeated use of the chain rule that there exists a \( C > 0 \) such that for every \( x_1 \in U_{-1} \), every \( x \in [x_1, \tilde x_1] \) and for every \( k \leq \lceil \ell_1 \rceil \) we have \begin{equation} \label{eq:derv-xi-1} D^k \xi_1 ( x ) = D^{k-1} \left( \frac{2}{ 1 + x_1 } \xi'( \eta_1 (x) ) \right) = \left( \frac{2}{1 + x_1} \right)^{ k } D^k \xi ( \eta_1 (x) ) \leq C (1 + x_{1})^{-k}. \end{equation} and, similarly, for every \( x_2 \in U_{1} \), every \( x \in [x_2, 1] \) and every \( k \leq \lceil \ell_2 \rceil \), we have \begin{equation*} D^k \xi_2 ( x ) \leq C (1 - x_{2})^{-k}. \end{equation*} \end{rem} Next, we will fix \( \tilde b_1, \tilde b_2 > 0 \). These constants play no role in the statistical properties of the map but will need to be chosen carefully to ensure that \( g \) indeed satisfies \ref{itm:A0}-\ref{itm:A2}. 
We have already required that \( \tilde{b}_1, \tilde{b}_2 \) be small enough so that \eqref{eq:cond-monotonicty} holds, and now we also require the following conditions: \begin{equation} \label{eq:cond-on-b1} \begin{aligned} &\text{if } \ell_1 > 0, \text{then } &b_1 \ell_1 (1 + x)^{\ell_1} - \tilde b_1 \tilde \ell_1 ( 1 + x )^{\tilde \ell_1} &&\geq 0 &&\text{ for all } x \in [\tilde x_1, x_1],\\ &\text{if } \ell_1 = 0, \text{then } &1 - \tilde b_1 \tilde \ell_1 (1 + x)^{\tilde \ell_1 } &&\geq 0 &&\text{ for all } x \in [\tilde x_1, x_1], \end{aligned} \end{equation} and \begin{equation} \label{eq:cond-on-b2} \begin{aligned} &\text{if } \ell_2 > 0, \text{then } &b_2 \ell_2 (1 - x)^{\ell_2} - \tilde b_2 \tilde \ell_2 ( 1 - x )^{\tilde \ell_2} &&\geq 0 &&\text{ for all } x \in [x_2, \tilde x_2 ],\\ &\text{if } \ell_2 = 0, \text{then } &1 - \tilde b_2 \tilde \ell_2 (1 - x)^{\tilde \ell_2 } &&\geq 0 &&\text{ for all } x \in [x_2, \tilde x_2 ]. \end{aligned} \end{equation} Note that it is always possible to find \( \tilde b_1, \tilde b_2 > 0 \) small enough so that the above hold. This concludes our definition of the map \( g \) which depends only on \( f \) and the parameters \( \tilde \ell_1, \tilde \ell_2 > 0 \) and \( x_1, x_2 \). Notice that the choice of \( \tilde \ell_1, \tilde \ell_2 > 0 \) is completely arbitrary and the only restriction on the choice of \( x_1, x_2 \) is that they have to lie in the neighbourhoods \( U_{-1}, U_{1} \), in particular notice that \( x_1,x_2 \) can be chosen arbitrarily close to the fixed points \( -1,1 \). \subsection{The map \( g \) is in the class \( \widehat{ \mathfrak{F} } \)} \label{sec:fhat} \begin{lem} \label{lem:fhat} \( g \) belongs to the class \( \widehat{ \mathfrak{F} } \) with parameters \( \tilde{\ell}_1, \tilde{\ell}_2, k_{1}, k_2, a_{1}, a_2, \tilde b_1, \tilde b_2 \). \end{lem} \begin{rem} Note that all of the parameters of \( g \) are fixed by the map \( f \) \emph{except} for \( \tilde \ell_1, \tilde \ell_2 \) which are \emph{arbitrary} positive constants, which means that we are free to choose \( \tilde \ell_1 , \tilde \ell_2 \) so that \( g \) is in any one of the three distinct subclasses \( \mathfrak{F}, \mathfrak{F}_{\pm}, \mathfrak{F}_{*} \) defined in \eqref{eq:def-subclasses}. \end{rem} \begin{proof} It follows immediately from the construction that the map is full branch and \( C^{2} \) and that \ref{itm:A0} is therefore satisfied. The neighbourhoods \( U_{\pm 1}\) and \( U_{0\pm}\) no longer have the required form as in \ref{itm:A1} but we can shrink them and define the intervals \begin{equation} \label{eq:def-tilde-U} \begin{split} \widetilde{U}_{-1} \coloneqq [-1, \tilde x_{1} ), \quad \widetilde U_{1} \coloneqq ( \tilde x_{2}, 1 ], \quad \widetilde{U}_{0-} \coloneqq g^{-1} (\widetilde U_{1}), \quad \widetilde{U}_{0+} \coloneqq g^{-1} ( \widetilde{U}_{-1}). \end{split} \end{equation} It then follows that \( g \) satisfies \ref{itm:A1} with respect to these neighbourhoods for the required parameters \( \tilde{\ell}_1, \tilde{\ell}_2, k_{1}, k_2, a_{1}, a_2, \tilde b_1, \tilde b_2 \). Since we have shrunk the neighbourhoods and modified the map in the regions \( U_{-1}\setminus \widetilde{U}_{-1} \) and \( U_{1}\setminus \widetilde{U}_{1} \) it is no longer immediate that the expansivity condition \ref{itm:A2} continues to hold outside these new neighbourhoods and we therefore just need to check \ref{itm:A2}.
Let \( \tilde{\delta}_{n}^{\pm}, \tilde{\Delta}_n^{\pm} \) and \( \delta_n^{\pm}, \Delta_n^{\pm} \) denote the partitions corresponding to \( g \) and \( f \) respectively. We know from the construction of \( g \) that \( g^{n} \equiv f^{n} \) on \( \delta_n^{\pm} \), and \( \tilde \delta_n^{\pm} = \delta_n^{\pm} \) for every \( n \leq n^{\pm} \coloneqq \min \{ n : \delta_n \subset U_{ 0 \pm } (f) \}\). As \( f \) satisfies \ref{itm:A2} we know that \begin{equation} \label{eq:A2-for-f} (g|_{\delta_n^{\pm}}^n)'(x) = (f|_{\delta_n^{\pm}}^n)'(x) > \lambda > 1 \text{ for every } n \leq n^{\pm}. \end{equation} So, in order to verify that \( g \) satisfies \ref{itm:A2} it remains to check that \begin{equation} \label{eq:g-A2-main-claim} (g^n)' (x) > \lambda > 1, \text{ for every } x \in \tilde \delta_n^{\pm} \text{ and for every } n^{\pm} + 1 \leq n \leq \tilde{n}^{\pm}, \end{equation} for some (possibly different) \( \lambda > 1 \), where \( \tilde{n}^{\pm} \coloneqq \min \{ n : \delta_n \subset \tilde U_{ 0 \pm } (f) \}\). We will follow the argument given in \cite{CoaLuzMub22}*{Section 3.3} and show that \( (g^{n+1})' (x) \geq (g^{n})'(x) \) for every \( n \geq n^{+} \) and every \( x \in \tilde \delta_n^{+} \). The argument for \( n \geq n^{-} \) and \( x \in \tilde{ \delta }_{n}^- \) then follows in the same way. Note that if \( k_2 \in ( 0, 1 ] \) then there is nothing to prove, so we will assume throughout this proof that \( k_2 > 1 \). As in \cite{CoaLuzMub22}*{Section 3.3} we define \begin{equation}\label{eq:def-phi} \phi \coloneqq ( g|_{ U_{0+} } )^{-1} \circ g|_{ U_{-1} } \circ g |_{ U_{0+}} \end{equation} and claim that the conclusion of \cite{CoaLuzMub22}*{Lemma 3.7} holds for \( g \), namely we claim that \begin{equation}\label{eq:phiexpansion} \frac{(g^2)'(x)}{ g' ( \phi (x) )} = \frac{ g'(x) }{ g'(\phi(x))} g'(g(x)) > 1, \text{ for every } x \in \tilde \delta_{n+1}^{+} \text{ and every } n \geq n^{+} . \end{equation} Notice that if \( g (x) \in [ x_1, 0) \), or if \( g(x) \in [-1, \tilde x_1] \) then we can apply \cite{CoaLuzMub22}*{Lemma 3.7} to obtain \( (g^2)'(x) / g'( \phi (x)) > 1 \). If instead \( g(x) \in ( \tilde x_1 , x_1 ) \), then \cite{CoaLuzMub22}*{Lemma 3.7} cannot be applied directly, but its proof can be adapted to our setting as shown below. So, let us assume that \( x \in \delta_n^+ \) for some \( n \) and that \( g(x) \in ( \tilde x_1 , x_1) \). Since we are working only with \( x \in \delta_n^+ \) we will drop the subscripts on the parameters to ease notation, specifically we will let \( \ell = \ell_1 \), \( \tilde \ell = \tilde \ell_1 \), \(b = b_1 \), \( \tilde b = \tilde b_1\), \(k = k_2\) and \( a = a_2 \). By the definition of \( g \) in \( U_{0+}\) given in \ref{itm:A1} we have \begin{equation} \label{eq:g-p-over-g-phi} \frac{g'(x)}{g'(\phi(x))} = \left(\frac{ x }{ \phi (x) }\right)^{k - 1} = \left(\frac{ \phi (x) }{ x }\right)^{1-k} \end{equation} Recall that \( k > 1\) and \( x< \phi(x)\) and so the ratio above is strictly less than \( 1\). Let us fix \begin{equation} \label{eq:def-y} y \coloneqq g ( x ) = -1 + a x^{k}. \end{equation} We will compute \( ( \phi(x) / x )^{ k } \) and \( g' ( g (x) ) \) in the case that \( \ell = 0 \) and the case that \( \ell > 0 \) separately.
First we note that regardless of the value of \( \ell \), inserting the definition of \( g \) into \eqref{eq:def-phi} yields \begin{equation} \label{eq:phi-x} \phi (x) = \left( \frac{1}{a} \right)^{1/k} ( 1 + g(y) )^{ 1/k } = \left( \frac{1}{a} \right)^{1/k} [ 1 + h_1 (y) + \xi (y) ( f(y) - h_1 (y) ) ]^{ 1/k } \end{equation} and, using \eqref{eq:cond-monotonicty}, we have that \begin{equation} \label{eq:g-p-g} g'( g(x) ) = h_1'(y) + \xi( y) ( f'(y) - h_1'(y) ) + \xi'(y) ( f (y) - h_1(y) ) \geq h_1'(y) + \xi( y) ( f'(y) - h_1'(y) ). \end{equation} 1) Suppose that \( \ell > 0 \). Inserting \eqref{eq:def-y} into the definitions of \( f \) and \( h_1 \) we find \( f(y) - h_1(y) = a x^k (b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } ) \), and we note for later use that \eqref{eq:cond-monotonicty} ensures that \begin{equation} \label{eq:d-positive} b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } \geq 0. \end{equation} Thus, from \eqref{eq:phi-x}, \begin{align*} \left( \frac{ \phi (x) }{ x } \right)^{k} &=\left( \frac{1}{ a x^k } \right) \left[ ax^k + \tilde b a^{\tilde \ell + 1} x^{k ( \tilde \ell + 1 )} + \xi (y) a x^k \left( b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } \right) \right] \\ &= 1 + \tilde b a^{\tilde \ell} x^{k\tilde \ell} + \xi (y)\left( b a^{\ell} x^{ k\ell } - \tilde b a^{\tilde \ell} x^{ k\tilde \ell } \right). \end{align*} Next, using \eqref{eq:cond-on-b1}, \eqref{eq:g-p-g} and \eqref{eq:d-positive}, we obtain \begin{align} \nonumber g'( g(x) ) \nonumber &> 1 + \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + \xi( y)( f'(y) - h_1'(y) ) \\ \nonumber &= 1 + \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + \xi( y )( (1 + \ell ) b a^{\ell} x^{k \ell} - (1 + \tilde \ell )\tilde b a^{\tilde \ell} x^{k \tilde \ell}) \\ &\geq 1 + \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + \xi( y ) \left( b a^{\ell} x^{k \ell} - \tilde b a^{\tilde \ell} x^{k \tilde \ell} \right) = \left( \frac{ \phi (x) }{ x } \right)^{k}. \label{eq:g-prime-g} \end{align} So, from \eqref{eq:g-p-over-g-phi} and \eqref{eq:g-prime-g} we get \[ \frac{ g'(x) }{ g'( \phi (x) ) } g' ( g (x ) ) > \left( \frac{ \phi (x) }{ x } \right)^{1 - k} \left( \frac{ \phi (x) }{ x } \right)^{k} = \frac{ \phi (x) }{ x } > 1, \] which proves our claim \eqref{eq:phiexpansion} in the case that \( \ell > 0 \). 2) If \( \ell = 0 \), then we proceed as before and insert the definitions of \( f \) and \( h_1 \) into \eqref{eq:def-phi} to get \begin{align*} \phi(x) = \left(\frac{1}{a} \right)^{1/k} \left[ ax^k + \tilde b a^{\tilde \ell + 1 } x^{k ( \tilde \ell + 1 )} + \xi(y)\left( - 1 + \eta (y) + b a x^{k} - \tilde b a^{1 + \tilde \ell} x^{ k ( 1 + \tilde \ell )} \right) \right], \end{align*} and so \[ \left( \frac{ \phi (x) }{ x } \right)^{k} = 1 + \tilde b a^{ \tilde \ell} x^{k \tilde \ell } + \xi(y)\left( \frac{- 1 + \eta (y)}{ ax^{k}} + b - \tilde b a^{\tilde \ell} x^{ k \tilde \ell } \right). \] We recall from \cite{CoaLuzMub22}*{Equation (13)} that \( \eta' (y) \geq \eta (y) / ( 1 +y ) > ( \eta (y) - 1 )/ ( 1 + y ) \) for every \( y \in U_{-1} \).
So, inserting the expressions for \( f' \) and \( h_1' \) into \eqref{eq:g-p-g} we get \begin{align*} g'(y) &\geq 1 + \tilde b ( 1 + \tilde \ell ) (1 + y )^{\tilde \ell} + \xi (y) \left( \eta'(y) + 1 + b - \tilde b (1 + \tilde \ell ) (1 + y )^{ \tilde \ell } \right) \\ &> 1 + \tilde b a^{\tilde \ell} x^{ k\tilde \ell} + \xi(y) \left( \frac{-1 + \eta (y) }{ ax^{k} } + b - \tilde b a^{\tilde \ell} x^{ k \tilde \ell } + 1 - \tilde b \tilde\ell a ^{\tilde \ell} x^{ k \tilde \ell} \right) \geq \left( \frac{ \phi (x) }{ x } \right)^{k}. \end{align*} This concludes the proof of \eqref{eq:phiexpansion} in the case that \( \ell = 0 \). We can then proceed as in \cite{CoaLuzMub22}*{Corollary 3.9} to get \[ (g^{n+1}) ' (x) = g'(x) g'(g(x)) \cdots g'(g^{n} (x)) = \frac{ g'(x) g'(g (x))}{ g'( \phi(x)) } (g^{n})'(\phi (x)) > (g^{n})'(\phi (x)). \] This, together with \eqref{eq:A2-for-f}, implies \eqref{eq:g-A2-main-claim} and allows us to conclude that \( g \) satisfies \ref{itm:A2}. \end{proof} \subsection{\( g \) is \( C^r \) close to \( f \)} \label{sec:crclose} Let \( f \in \widehat{ \mathfrak{F} } \). In Sections \ref{sec:construction} and \ref{sec:fhat} we constructed a map \( g \in \widehat{ \mathfrak{F} } \) ultimately depending only on the choice of two arbitrary constants \( \tilde \ell_1, \tilde \ell_2 > 0 \) and two points \( x_1 \in U_{-1} \), \( x_{2} \in U_{1} \). We now show that \( g \) can be chosen arbitrarily close to \( f \), by choosing the points \( x_{1}, x_{2}\) sufficiently close to the fixed points \( -1, 1\) respectively, in a topology determined by the constants \begin{equation}\label{eq:approxreg} r_{1} \coloneqq \min \{ \ell_1, \tilde \ell_1 \}, \quad\text{ and } \quad r_{2} \coloneqq \min \{ \ell_2, \tilde \ell_2 \}. \end{equation} More precisely, we have the following result. \begin{lem} \label{lem:c-r-close} There exists a \( C > 0 \) such that \[ \| f - g \|_{ C^{ \lceil r_1 \rceil }([-1,0]) } \leq C ( 1 + x_1 )^{ 1 + r_1 - \lceil r_1 \rceil }, \quad\text{ and } \quad \| f - g \|_{ C^{ \lceil r_2 \rceil }([0,1]) } \leq C ( 1 - x_2 )^{ 1 + r_2 - \lceil r_2 \rceil }. \] \end{lem} \begin{proof} We will only give an explicit proof of the bound for \( \| f - g \|_{C^{\lceil r_1 \rceil }} \) as the argument for \( \| f - g \|_{C^{\lceil r_2 \rceil }} \) is the same. If \( \ell_1 = \tilde{\ell}_1 \) we obtain trivially that \( f - g \equiv 0 \) on \( [-1, 0] \). So, let us assume \( \ell_1 \neq \tilde{\ell}_1 \) and consider the subcases \( \ell_1 = 0 \), \( \ell_1 > 0 \) separately. If \( \ell_1 = 0 \), then \( r_1 = 0 \) and using the definition \eqref{eq:def-h1} of \( h_1 \) near \( -1 \), the fact that \( \xi \) is bounded, and the definition of \( g \) one finds that \[ \| g - f \|_{C^0([-1, x_1])} \leq 2 \| h_1 - f \|_{C^0([-1, x_1])} \leq C (1 + x_1). \] This proves the result in the case that \( \ell_1 = 0 \). Let us now suppose throughout the remainder of the proof that \( \ell_1 > 0 \). We begin by establishing the following sublemma.
\begin{sublem} \label{sublem:derv-bounds} There exists a \( C > 0 \) so that for every \( 0 \leq k \leq \lceil r_1 \rceil \) \begin{equation} \label{eq:derv-bounds} | D^k f(x) - D^k h_1 (x) | \leq C( 1 + x_{1})^{1 + r_1 - k }\quad \forall x \in [-1, x_1]. \end{equation} \end{sublem} \begin{proof}[Proof of Sublemma \ref{sublem:derv-bounds}] Suppose that \( 0 < r_1 = \ell_1 < \tilde \ell_1 \), let \( 0 \leq k \leq \lceil r_1 \rceil \) and note that for some constants \( c_k, \tilde c_k \) \begin{align*} | D^k f (x) - D^k h_1 (x) | &= | c_{k} ( 1 + x )^{ 1 + \ell_1 - k } + \tilde c_{k} ( 1 + x )^{ 1 + \tilde \ell_1 - k } | = ( 1 + x )^{ 1 + \ell_1 - k } | c_k - \tilde c_{k}( 1 + x ) ^{\tilde \ell_1 - \ell_1} |. \end{align*} As \( ( 1 + x ) ^{\tilde \ell_1 - \ell_1} \to 0 \) as \( x \to -1 \), and as we are considering only finitely many \( k \), we see that there exists some constant \( C > 0 \), independent of \( k \), such that \eqref{eq:derv-bounds} holds. Suppose now that \( 0 < \tilde \ell_1 < \ell_1 \). Repeating the calculation above with \( \ell_1 \) and \( \tilde \ell_1 \) exchanged we conclude the proof. \end{proof} We now continue the proof of Lemma \ref{lem:c-r-close}. The bound \eqref{eq:derv-bounds} immediately implies \[ \| f - g \|_{C^{\lceil r_1 \rceil} [-1, \tilde x_1 ]} = \| f - h_1 \|_{C^{\lceil r_1 \rceil} [-1, \tilde x_1 ]} \leq C (1 + x_1)^{1 + r_1 - \lceil r_1 \rceil} \] for some \( C \) which depends on \( \lceil r_1 \rceil \), but not on \( x_1 \). For \( x \in [\tilde{x}_1, x_1] \) we find by repeated applications of the product rule that \begin{equation}\label{eq:derv-of-g} D^k g(x) = D^k h_1 (x) + \sum_{ j = 0 }^{ k } \binom{k}{j} D^j \xi_1(x) \cdot ( D^{k -j} f (x) - D^{ k - j} h_1 (x) ). \end{equation} Using \eqref{eq:derv-xi-1} and \eqref{eq:derv-bounds}, we obtain \begin{align} \nonumber \left|\sum_{ j = 0 }^{ k } \binom{k}{j} D^j \xi_1 (x) \cdot ( D^{k -j} f (x) - D^{ k - j} h_1 (x) ) \right| &\leq C \sum_{ j = 0 }^{ k } \binom{k}{j} ( 1 + x_1)^{-j} ( 1 + x_1)^{ 1 + r_1 - k + j} \\ \label{eq:sum-of-derv} &= C ( 1 + x_1)^{1 + r_1 -k}. \end{align} Finally, combining \eqref{eq:derv-bounds}, \eqref{eq:derv-of-g} and \eqref{eq:sum-of-derv}, we find that \( |D^k f (x) - D^k g (x) | \leq C( 1 + x_1)^{1 + r_1 - k } \) for every \( k = 0 , \ldots, \lceil r_1 \rceil \) and every \( x \in [-1, x_1] \). So, \begin{equation}\label{eq:Cr-norm-g-1} \| f - g \|_{ C^{\lceil r_1 \rceil} [ -1, 0 ] } \leq C (1 + x_1)^{1 + r_1 - \lceil r_1 \rceil }, \end{equation} for some \( C \) which does not depend on \( x_1 \). \end{proof} \subsection{Concluding the proof of Theorem \ref{thm:density-main}} \label{sec:conc} \begin{proof} Let \( f \in \widehat{\mathfrak{F}} \) and let \( \varepsilon > 0 \). For each of the classes \( \mathfrak{F}, \mathfrak{F}_{\pm},\mathfrak{F}_* \) we will choose \( \tilde \ell_1, \tilde \ell_2 > 0 \) so that the corresponding map \( g \) constructed in Section \ref{sec:construction} belongs to the chosen class, and then, by Lemma~\ref{lem:c-r-close}, we can choose \( x_1 \in U_{-1}, x_2 \in U_{1} \) so that \( g \) is \( \varepsilon \)-close to \( f\) in the appropriate topology. We illustrate this process in detail in a couple of cases and then give some tables to show the choices in all cases.
Suppose first that \( f \in \mathfrak{F} \) (so that \( \beta\in [0,1)\) and \( f \) has a physical measure equivalent to Lebesgue) and let us approximate \( f \) by some map \( g\in \mathfrak F_{*}\) (so that \( \beta\geq 1\) and \( g \) has no physical measure). Set \( \tilde \ell_{1}=1/k_{2}\), \( \tilde \ell_{2}=1/k_{1}\) so that \( \tilde\ell_{1}k_{2}= \tilde\ell_{2}k_{1}=1\), which ensures that \( g\in \mathfrak F_{*}\). Notice that we could choose \( \tilde \ell_{1}=t/k_{2}, \tilde \ell_{2}=t/k_{1}\) for any \( t \geq 1 \) and that for any such choice we have \( \tilde\ell_{1} > \ell_{1}\) and \( \tilde\ell_{2}> \ell_{2}\) because by assumption we have \( \ell_{1}k_{2}, \ell_{2}k_{1}\in [0,1)\). Once \( \tilde\ell_{1}, \tilde\ell_{2}\) have been chosen we immediately get the regularity of the approximation from \eqref{eq:approxreg}, which in this case is given by \( r= \lceil \min\{\ell_{1}, \tilde\ell_{1}, \ell_{2}, \tilde\ell_{2}\}\rceil = \lceil\min\{\ell_{1}, \ell_{2}\}\rceil= r_{*}(f) \), thus proving the Theorem in this case. For a second example, suppose that \( f \in \mathfrak{F}_{\pm} \) with \( 0 < \beta^+ < 1 \leq \beta^- \) (so that \( f \) has a physical measure supported on the fixed point \( -1 \)), and let us construct a map \( g \in \mathfrak{F} \) (so that \( g \) has a physical measure equivalent to Lebesgue) that is close to \( f \). Recall that \( g \in \mathfrak{F} \) if and only if \( \beta^+, \beta^- \in [0,1) \), and so we can leave unchanged the value of \( \beta^{+}\), and therefore let \( \tilde\ell_{2}=\ell_{2}\), but we need to lower the value of \( \beta^{-}\) to something less than 1. We can do this by letting \( \tilde\ell_{1} = 1/ k_2 - \gamma \) for any \( 0 < \gamma < 1/ k_2 \), which gives \( \beta^{-}(g)=\tilde\ell_{1}k_{2}= ( 1/ k_2 - \gamma) k_{2} = 1- k_{2}\gamma <1\), and therefore \( g \in \mathfrak{F} \) as required. To estimate the distance between \( f \) and \( g \), and to choose the appropriate metric for this distance, notice that \( g|_{[0,1]}=f|_{[0,1]}\) and therefore we only need to worry about the distance between \( f\) and \( g \) on \( [-1,0]\). Thus, by \eqref{eq:approxreg} we get \( r = \lceil \min\{\ell_{1}, \tilde \ell_{1}\} \rceil =\lceil \tilde \ell_{1} \rceil =\lceil 1 / k_2 - \gamma \rceil \) and by Lemma \ref{lem:c-r-close} we can construct \( g \) arbitrarily close to \( f \) in the \( d_{r}\) metric with \( r = \lceil 1 / k_2 - \gamma \rceil \) for any \( \gamma>0\) arbitrarily small. Notice however that if \( \gamma\) is sufficiently small then \( \lceil 1 / k_2 - \gamma \rceil = \lceil 1 / k_2 \rceil \) and therefore \( r = \tilde r(f) \) as claimed in the Theorem. All the cases can be obtained by a similar reasoning as illustrated in the two examples above, from which we deduce the following choices for \( \tilde\ell_{1}, \tilde\ell_{2}\) and the corresponding regularity of the approximation.
\renewcommand{\arraystretch}{1.4} \begin{table}[h] \centering \subfloat[Choice of \( \tilde{\ell}_1, \tilde{\ell}_2 \) for constructing \( g \in \mathfrak{F} \)]{ \begin{tabular}{@{}lcc@{}} \toprule Parameters of \( f \) & \( \tilde{ \ell }_1 \) & \( \tilde{ \ell }_2 \) \\ \midrule \( \beta^+ < 1 \leq \beta^- \) & \( 1/k_2 - \gamma \) & \( \ell_2 \) \\ \( \beta^- < 1 \leq \beta^+ \) & \( \ell_1 \) & \( 1/k_1 - \gamma \) \\ \( \beta^+, \beta^- \geq 1 \) & \(1/k_2 - \gamma \) & \( 1/k_1 - \gamma \) \\ \bottomrule \end{tabular}} \quad \subfloat[Choice of \( \tilde{\ell}_1, \tilde{\ell}_2 \) for constructing \( g \in \mathfrak{F}_{\pm} \)]{ \begin{tabular}{@{}lcc@{}} \toprule Parameters of \( f \) & \( \tilde{ \ell }_1 \) & \( \tilde{ \ell }_2 \) \\ \midrule \( \ell_1 \geq \ell_2 \) and \( \beta^- < 1 \) & \( 1/k_2 \) & \( \ell_2 \) \\ \( \ell_1 \geq \ell_2 \) and \( \beta^- \geq 1 \) & \( \ell_1 + \gamma \) & \( \ell_2 \) \\ \( \ell_1 < \ell_2 \) and \( \beta^+ < 1 \) & \( \ell_1 \) & \( 1 / k_1 \) \\ \( \ell_1 < \ell_2 \) and \( \beta^+ \geq 1 \) & \( \ell_1 \) & \( \ell_2 + \gamma \) \\ \bottomrule \end{tabular} } \\ \vspace{1em} \subfloat[Choice of \( \tilde{\ell}_1, \tilde{\ell}_2 \) for constructing \( g \in \mathfrak{F}_* \)]{ \begin{tabular}{@{}lcc@{}} \toprule Parameters of \( f \) & \( \tilde{ \ell }_1 \) & \( \tilde{ \ell }_2 \) \\ \midrule \( \beta \in [0,1) \) & \( 1/k_2 \) & \( 1/k_1 \) \\ \( \beta^+ < 1 \leq \beta^- \) & \( \ell_1 \) & \( \beta^- / k_1 \) \\ \( \beta^- < 1 \leq \beta^+ \) & \( \beta^+ / k_2 \) & \( \ell_2 \) \\ \( \beta^+, \beta^- \geq 1 \) and \( k_1 \geq k_2 \) & \( \beta^+ / k_2 \) & \( \ell_2 \) \\ \( \beta^+, \beta^- \geq 1 \) and \( k_1 < k_2 \) & \( \ell_1 \) & \( \beta^-/ k_1 \) \\ \bottomrule \end{tabular} } \caption{Choosing \(\tilde{\ell}_1, \tilde{\ell}_2 \) to complete the proof of Theorem \ref{thm:density-main}.} \end{table} This completes the proof. \end{proof} \newpage \begin{bibdiv} \begin{biblist} \bib{AarThaZwe05}{article}{ author={Aaronson, Jon}, author={Thaler, Maximilian}, author={Zweim{\"u}ller, Roland}, title={Occupation times of sets of infinite measure for ergodic transformations}, date={2005}, journal={Ergodic Theory and Dynamical Systems}, } \bib{Alv20}{book}{ author={Alves, Jos{\'e}~F.}, title={Nonuniformly hyperbolic attractors. geometric and probabilistic aspects}, series={Springer Monographs in Mathematics}, publisher={Springer International Publishing}, date={2020}, } \bib{AlvDiaLuz17}{article}{ author={Alves, Jos{\'e}~F.}, author={Dias, Carla~L.}, author={Luzzatto, Stefano}, author={Pinheiro, Vilton}, title={S{RB} measures for partially hyperbolic systems whose central direction is weakly expanding}, date={2017}, journal={J. Eur. Math. Soc. (JEMS)}, volume={19}, number={10}, pages={2911\ndash 2946}, } \bib{AlvLuzPin05}{article}{ author={Alves, Jos{\'e}~F.}, author={Luzzatto, Stefano}, author={Pinheiro, Vilton}, title={Markov structures and decay of correlations for non-uniformly expanding dynamical systems}, date={2005}, journal={Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire}, volume={22}, number={6}, pages={817\ndash 839}, } \bib{AnoSin67}{article}{ author={Anosov, D~V}, author={Sinai, Yakov~G.}, title={{Some Smooth Ergodic Systems}}, date={1967}, journal={Russian Mathematical Surveys}, volume={103}, } \bib{AraLuzVia09}{article}{ author={Ara{\'u}jo, V{\'{\i}}tor}, author={Luzzatto, Stefano}, author={Viana, Marcelo}, title={Invariant measures for interval maps with critical points and singularities}, date={2009}, journal={Adv.
Math.}, volume={221}, number={5}, pages={1428\ndash 1444}, } \bib{AraPin21}{article}{ author={Ara{\'{u}}jo, V{\'{i}}tor}, author={Pinheiro, Vilton}, title={{Abundance of wild historic behavior}}, date={2021}, journal={Bulletin of the Brazilian Mathematical Society. New Series.}, volume={52}, } \bib{Bac99}{article}{ author={Bachurin, P~S}, title={The connection between time averages and minimal attractors}, date={1999}, journal={Russian Mathematical Surveys}, volume={54}, number={6}, pages={1233\ndash 1235}, } \bib{BahGalNis18}{article}{ author={Bahsoun, Wael}, author={Galatolo, Stefano}, author={Nisoli, Isaia}, author={Niu, Xiaolong}, title={A rigorous computational approach to linear response}, date={2018}, journal={Nonlinearity}, volume={31}, } \bib{BahSau16}{article}{ author={Bahsoun, Wael}, author={Saussol, Beno{\^\i}t}, title={Linear response in the intermittent family: differentiation in a weighted $c^0$-norm}, date={2016}, journal={Discrete and Continuous Dynamical Systems}, } \bib{BalTod16}{article}{ author={Baladi, V.}, author={Todd, M.}, title={Linear response for intermittent maps}, date={2016}, journal={Communications in Mathematical Physics}, volume={347}, pages={857\ndash 874}, } \bib{BarKirNak20}{article}{ author={Barrientos, Pablo~G.}, author={Kiriki, Shin}, author={Nakano, Yushi}, author={Raibekas, Artem}, author={Soma, Teruhiko}, title={Historic behavior in non-hyperbolic homoclinic classes}, date={2020}, journal={Proceedings of the American Mathematical Society}, volume={148}, pages={1195\ndash 1206}, } \bib{BerBie22}{article}{ author={Berger, Pierre}, author={Biebler, Sebastien}, title={Emergence of wandering stable components}, date={2022}, journal={Journal of the American Mathematical Society}, volume={36}, } \bib{Bir31}{article}{ author={Birkhoff, George~David}, title={{Proof of the Ergodic Theorem.}}, date={1931}, journal={Proceedings of the National Academy of Sciences of the United States of America}, volume={17}, number={12}, pages={656\ndash 660}, } \bib{Bow75}{book}{ author={Bowen, Rufus}, title={Equilibrium states and the ergodic theory of {A}nosov diffeomorphisms}, series={Lecture Notes in Mathematics, Vol. 470}, publisher={Springer-Verlag}, address={Berlin}, date={1975}, } \bib{BruLep13}{article}{ author={Bruin, Henk}, author={Leplaideur, Renaud}, title={Renormalization, thermodynamic formalism and quasi-crystals in subshifts}, date={2013feb}, journal={Communications in Mathematical Physics}, volume={321}, number={1}, pages={209\ndash 247}, } \bib{BruTerTod19}{article}{ author={Bruin, Henk}, author={Terhesiu, Dalia}, author={Todd, Mike}, title={The pressure function for infinite equilibrium measures}, date={2019jun}, journal={Israel Journal of Mathematics}, volume={232}, number={2}, pages={775\ndash 826}, } \bib{Bur21}{article}{ author={Burguet, David}, title={{SRB} measures for ${C}^\infty$ surface diffeomorphisms}, date={2021}, journal={Preprint}, } \bib{Buz00}{article}{ author={Buzzi, J{\'e}r{\^o}me}, title={Absolutely continuous invariant probability measures for arbitrary expanding piecewise r-analytic mappings of the plane}, date={2000}, journal={Ergodic Theory Dynam. 
Systems}, volume={20}, number={3}, pages={697\ndash 708}, } \bib{BuzCroSar22}{article}{ author={Buzzi, J{\'e}r{\^o}me}, author={Crovisier, Sylvain}, author={Sarig, Omri}, title={Another proof of burguet's existence theorem for srb measures of $c^\infty$ surface diffeomorphisms}, date={2022}, journal={Preprint}, } \bib{CamIso95}{article}{ author={Campanino, Massimo}, author={Isola, Stefano}, title={Statistical properties of long return times in type i intermittency}, date={1995}, journal={Forum Mathematicum}, volume={7}, number={7}, } \bib{CliLuzPes17}{article}{ author={Climenhaga, Vaughn}, author={Luzzatto, Stefano}, author={Pesin, Yakov}, title={The geometric approach for constructing {S}inai-{R}uelle-{B}owen measures}, date={2017}, journal={Journal of Statistical Physics}, volume={166}, } \bib{CliLuzPes23}{article}{ author={Climenhaga, Vaughn}, author={Luzzatto, Stefano}, author={Pesin, Yakov}, title={Srb measures and young towers for surface diffeomorphisms}, date={2023}, journal={Annales Henri Poincar{\'{e}}}, volume={23}, } \bib{CoaHolTer19}{article}{ author={Coates, Douglas}, author={Holland, Mark}, author={Terhesiu, Dalia}, title={Limit theorems for wobbly interval intermittent maps}, date={201910}, } \bib{CoaLuzMub22}{article}{ author={Coates, Douglas}, author={Luzzatto, Stefano}, author={Mubarak, MUhammad}, title={Doubly intermittent full branch maps with critical points and singularities}, date={2022}, journal={Preprint}, } \bib{ColVar01}{article}{ author={Colli, Eduardo}, author={Vargas, Edson}, title={Non-trivial wandering domains and homoclinic bifurcations}, date={2001}, journal={Ergodic Theory and Dynamical Systems}, } \bib{CriHayMar10}{article}{ author={Cristadoro, Giampaolo}, author={Haydn, Nicolai}, author={Marie, Philippe}, author={Vaienti, Sandro}, title={Statistical properties of intermittent maps with unbounded derivative}, date={2010}, journal={Nonlinearity}, } \bib{CroYanZha20}{article}{ author={Crovisier, Sylvain}, author={Yang, Dawei}, author={Zhang, Jinhua}, title={Empirical measures of partially hyperbolic attractors}, date={2020}, journal={Communications in Mathematical Physics}, } \bib{Cui21}{article}{ author={Cui, Hongfei}, title={Invariant densities for intermittent maps with critical points}, date={2021}, journal={Journal of Difference Equations and Applications}, volume={27}, number={3}, pages={404\ndash 421}, } \bib{DiaHolLuz06}{article}{ author={Diaz-Ordaz, Karla}, author={Holland, Mark}, author={Luzzatto, Stefano}, title={Statistical properties of one-dimensional maps with critical points and singularities}, date={2006}, journal={Stochastics and Dynamics}, volume={6}, number={4}, } \bib{Dol04}{incollection}{ author = {Dolgopyat, Dmitry}, title = {Prelude to a kiss}, booktitle = {Modern dynamical systems and applications}, pages = {313--324}, year = {2004}, } \bib{Dua12}{article}{ author={Duan, Y.}, title={A.c.i.m for random intermittent maps: existence, uniqueness and stochastic stability}, date={2012}, journal={Dynamical Systems. 
An International Journal}, } \bib{FisLop01}{article}{ author={Fisher, Albert~M}, author={Lopes, Artur}, title={Exact bounds for the polynomial decay of correlation, 1/fnoise and the {CLT} for the equilibrium state of a non-h{\"o}lder potential}, date={2001jul}, journal={Nonlinearity}, volume={14}, number={5}, pages={1071\ndash 1104}, } \bib{FreFreTod13}{article}{ author={Freitas, Ana Cristina~Moreira}, author={Freitas, Jorge~Milhazes}, author={Todd, Mike}, title={The compound poisson limit ruling periodic extreme behaviour of non-uniformly hyperbolic dynamics}, date={2013mar}, journal={Communications in Mathematical Physics}, volume={321}, number={2}, pages={483\ndash 527}, } \bib{FreFreTod16}{article}{ author={Freitas, Ana Cristina~Moreira}, author={Freitas, Jorge~Milhazes}, author={Todd, Mike}, author={Vaienti, Sandro}, title={Rare events for the manneville-pomeau map}, date={2016nov}, journal={Stochastic Processes and their Applications}, volume={126}, number={11}, pages={3463\ndash 3479}, } \bib{FroMurSta11}{article}{ author={Froyland, Gary}, author={Murray, Rua}, author={Stancevic, Ognjen}, title={Spectral degeneracy and escape dynamics for intermittent maps with a hole}, date={2011jul}, journal={Nonlinearity}, volume={24}, number={9}, pages={2435\ndash 2463}, } \bib{GalHolPer21}{article}{ author={Galatolo, Stefano}, author={Holland, Mark}, author={Persson, Tomas}, author={Zhang, Yiwei}, title={Anomalous time-scaling of extreme events in infinite systems and birkhoff sums of infinite observables}, date={202110}, journal={Discrete and Continuous Dynamical Systems}, } \bib{Gou10a}{article}{ author={Gou{\"{e}}zel, S{\'{e}}bastien}, title={{Almost sure invariance principle for dynamical systems by spectral methods}}, date={2010jul}, journal={The Annals of Probability}, volume={38}, number={4}, pages={1639\ndash 1671}, } \bib{Her18}{incollection}{ author={Herman, Michel}, title={An example of non-convergence of birkhoff sums}, date={2018}, booktitle={Notes inachev{\'e}es de michael r. herman s{\'e}lectionn{\'e}es par jean-christophe yoccoz}, publisher={Soci{\'e}t{\'e} Math{\'e}matique de France}, } \bib{HofKel90}{article}{ author={Hofbauer, Franz}, author={Keller, Gerhard}, title={Quadratic maps without asymptotic measure}, date={1990}, journal={Comm. Math. 
Phys.}, volume={127}, number={2}, pages={319\ndash 337}, } \bib{HofKel95}{incollection}{ author={Hofbauer, Franz}, author={Keller, Gerhard}, title={{Quadratic maps with maximal oscillation}}, date={1995}, booktitle={Algorithms, fractals, and dynamics}, editor={Takahashi, Y.}, publisher={Springer, Boston, MA}, pages={89\ndash 94}, } \bib{HuYou95}{article}{ author={Hu, Huyi}, author={Young, Lai-Sang}, title={Nonexistence of sbr measures for some diffeomorphisms that are `almost anosov'}, date={1995}, journal={Ergodic Theory and Dynamical Systems}, volume={15}, } \bib{Ino00}{article}{ author={Inoue, Tomoki}, title={Sojourn times in small neighborhoods of indifferent fixed points of one-dimensional dynamical systems}, date={2000}, journal={Ergodic Theory and Dynamical Systems}, volume={20}, } \bib{JarTol04}{book}{ author={J{\"a}rvenp{\"a}{\"a}, Esa}, author={Tolonen, Tapani}, title={Natural ergodic measures are not always observable}, publisher={University of Jyv{\"a}skyl{\"a}}, date={2005}, } \bib{KanKirLi16}{article}{ author={Kanagawa, Hiratuka}, author={Kiriki, Shin}, author={Li, Ming-Chia}, title={Geometric {L}orenz flows with historic behaviour}, date={2016}, journal={Discrete and Continuous Dynamical Systems}, volume={36}, number={12}, pages={7021\ndash 7028}, } \bib{Kel04}{article}{ author={Keller, Gerhard}, title={{Completely mixing maps without limit measure}}, date={2004}, journal={Colloquium Mathematicum}, volume={100}, number={1}, pages={73\ndash 76}, } \bib{KirLiSom10}{article}{ author={Kiriki, Shin}, author={Li, Ming-Chia}, author={Soma, Teruhiko}, title={{Coexistence of invariant sets with and without SRB measures in Henon family}}, date={2010}, journal={Nonlinearity}, volume={23}, number={9}, pages={2253\ndash 2269}, } \bib{KirLiNak22}{article}{ author={Kiriki, Shin}, author={Li, Xiaolong}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Abundance of observable lyapunov irregular sets}, date={2022}, journal={Communications in Mathematical}, volume={Physics}, pages={1\ndash 29}, } \bib{KirNakSom19}{article}{ author={Kiriki, Shin}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Historic behaviour for nonautonomous contraction mappings}, date={2019}, journal={Nonlinearity}, volume={32}, pages={1111\ndash 1124}, } \bib{KirNakSom21}{article}{ author={Kiriki, Shin}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Historic and physical wandering domains for wild blender-horseshoes}, date={2021}, journal={Preprint}, } \bib{KirNakSom22}{article}{ author={Kiriki, Shin}, author={Nakano, Yushi}, author={Soma, Teruhiko}, title={Emergence via non-existence of averages}, date={2022}, journal={Advances in Mathematics}, volume={400}, pages={1\ndash 30}, } \bib{KirSom17}{article}{ author={Kiriki, Shin}, author={Soma, Teruhiko}, title={Takens' last problem and existence of non-trivial wandering domains}, date={2017}, journal={Advances in Mathematics,}, volume={306}, pages={pp.}, } \bib{Kle06}{article}{ author={Kleptsyn, V~A}, title={An example of non-coincidence of minimal and statistical attractors}, date={2006}, journal={Ergodic Theory and Dynamical Systems}, volume={26}, } \bib{Kor16}{article}{ author={Korepanov, Alexey}, title={Linear response for intermittent maps with summable and nonsummable decay of correlations}, date={2016}, journal={Nonlinearity}, } \bib{LabRod17}{article}{ author={Labouriau, Isabel~S.}, author={Rodrigues, Alexandre A.~P.}, title={On takens' last problem: tangencies and time averages near heteroclinic networks}, date={2017}, 
journal={Nonlinearity}, volume={30}, pages={1876\ndash 1910}, } \bib{LasYor73}{article}{ author={Lasota, A.}, author={Yorke, James~A.}, title={On the existence of invariant measures for piecewise monotonic transformations}, date={1973}, journal={Trans. Amer. Math. Soc.}, volume={186}, pages={481\ndash 488}, } \bib{LivSauVai99}{article}{ author={Liverani, Carlangelo}, author={Saussol, Beno{\^{\i}}t}, author={Vaienti, Sandro}, title={A probabilistic approach to intermittency}, date={1999}, journal={Ergodic Theory Dynam. Systems}, volume={19}, number={3}, pages={671\ndash 685}, } \bib{Mel08}{article}{ author={Melbourne, Ian}, title={Large and moderate deviations for slowly mixing dynamical systems}, date={2008nov}, journal={Proceedings of the American Mathematical Society}, volume={137}, number={5}, pages={1735\ndash 1741}, } \bib{MelTer12}{article}{ author={Melbourne, Ian}, author={Terhesiu, Dalia}, title={First and higher order uniform dual ergodic theorems for dynamical systems with infinite measure}, date={2012nov}, journal={Israel Journal of Mathematics}, volume={194}, number={2}, pages={793\ndash 830}, } \bib{NicTorVai16}{article}{ author={Nicol, Matthew}, author={T{\=o}r{\=o}k, Andrew}, author={Vaienti, Sandro}, title={Central limit theorems for sequential and random intermittent dynamical systems}, date={2016}, journal={Ergodic Theory and Dynamical Systems}, volume={38}, number={3}, pages={1127\ndash 1153}, } \bib{Nol20}{book}{ author={Nolan, John~P}, title={Univeriate stable distributions}, publisher={Springer}, date={2020}, } \bib{KarAsh11}{article}{ author={{\=o}zkan Karabacak}, author={Ashwin, Peter}, title={On statistical attractors and the convergence of time averages}, date={2011}, journal={Mathematical Proceedings of the Cambridge Philosophical Society}, volume={150}, } \bib{Pal08}{article}{ author={Palis, J}, title={Open questions leading to a global perspective in dynamics}, date={2008}, journal={Nonlinearity}, volume={21}, number={4}, } \bib{Pal15}{article}{ author={Palis, J}, title={Open questions leading to a global perspective in dynamics (corrigendum)}, date={2015}, journal={Nonlinearity}, volume={28}, number={3}, } \bib{Pia80}{article}{ author={Pianigiani, Giulio}, title={First return map and invariant measures}, date={1980}, journal={Israel J. Math.}, volume={35}, number={1-2}, pages={32\ndash 48}, } \bib{Pin06}{article}{ author={Pinheiro, Vilton}, title={Sinai-{R}uelle-{B}owen measures for weakly expanding maps}, date={2006}, journal={Nonlinearity}, volume={19}, number={5}, pages={1185\ndash 1200}, } \bib{PolSha09}{article}{ author={Pollicott, Mark}, author={Sharp, Richard}, title={Large deviations for intermittent maps}, date={2009jul}, journal={Nonlinearity}, volume={22}, number={9}, pages={2079\ndash 2092}, } \bib{PolWei99}{article}{ author={Pollicott, Mark}, author={Weiss, Howard}, title={Multifractal analysis of lyapunov exponent for continued fraction and manneville-pomeau transformations and applications to diophantine approximation}, date={1999nov}, journal={Communications in Mathematical Physics}, volume={207}, number={1}, pages={145\ndash 171}, } \bib{PomMan80}{article}{ author={Pomeau, Yves}, author={Manneville, Paul}, title={Intermittent transition to turbulence in dissipative dynamical systems}, date={1980}, journal={Communications in Mathematical Physics}, volume={74}, pages={189\ndash 197}, } \bib{Rue76}{article}{ author={Ruelle, David}, title={A measure associated with axiom-{A} attractors}, date={1976}, journal={Amer. J. 
Math.}, volume={98}, number={3}, pages={619\ndash 654}, } \bib{Ruz15}{article}{ author={Ruziboev, Marks}, title={Decay of correlations for invertible systems with non-h{\"o}lder observables}, date={2015}, journal={Dynamical Systems. An International Journal}, } \bib{Ruz18}{incollection}{ author={Ruziboev, Marks}, title={Almost sure rates of mixing for random intermittent maps}, date={201801}, booktitle={Differential equations and dynamical systems}, publisher={Springer}, } \bib{Sar01}{article}{ author={Sarig, Omri}, title={{Phase transitions for countable Markov shifts}}, date={2001}, journal={Communications in Mathematical Physics}, volume={217}, number={3}, pages={555\ndash 577}, } \bib{Saw66}{article}{ author={Sawyer, S.}, title={Maximal inequalities of weak type}, date={1966}, journal={Annals of Mathematics}, volume={84}, } \bib{SheStr13}{article}{ author={Shen, Weixiao}, author={van Strien, Sebastian}, title={On stochastic stability of expanding circle maps with neutral fixed points}, date={2013sep}, journal={Dynamical Systems}, volume={28}, number={3}, pages={423\ndash 452}, } \bib{Sin72}{article}{ author={Sinai, Ja.~G.}, title={Gibbs measures in ergodic theory}, date={1972}, journal={Uspehi Mat. Nauk}, volume={27}, number={4}, pages={21\ndash 64}, } \bib{Tak94}{article}{ author={Takens, Floris}, title={Heteroclinic attractors: time averages and moduli of topological conjugacy.}, date={1994}, journal={Bullettin of the Brazilian Mathematical Society}, volume={25}, } \bib{Tak08}{article}{ author={Takens, Floris}, title={Orbits with historic behaviour, or nonexistence of averages}, date={2008}, journal={Nonlinearity}, volume={21}, } \bib{Tal20}{article}{ author={Talebi, Amin}, title={Statistical (in)stability and non-statistical dynamics}, date={2020}, journal={Preprint}, } \bib{Tal22}{article}{ author={Talebi, Amin}, title={Non-statistical rational maps}, date={2022}, journal={Mathematische Zeitschrift}, } \bib{Ter13}{article}{ author={Terhesiu, Dalia}, title={Improved mixing rates for infinite measure-preserving systems}, date={2013aug}, journal={Ergodic Theory and Dynamical Systems}, volume={35}, number={2}, pages={585\ndash 614}, } \bib{Ter15}{article}{ author={Terhesiu, Dalia}, title={Mixing rates for intermittent maps of high exponent}, date={2015}, journal={Probability Theory and Related Fields}, volume={166}, number={3-4}, pages={1025\ndash 1060}, } \bib{Tha80}{article}{ author={Thaler, Maximilian}, title={Estimates of the invariant densities of endomorphisms with indifferent fixed points}, date={1980}, journal={Israel Journal of Mathematics}, volume={37}, number={4}, pages={303\ndash 314}, } \bib{Tha83}{article}{ author={Thaler, Maximilian}, title={Transformations on [0, 1] with infinite invariant measures}, date={1983}, journal={Israel Journal of Mathematics}, volume={46}, number={1-2}, pages={67\ndash 96}, } \bib{Tha95}{article}{ author={Thaler, Maximilian}, title={The invariant densities for maps modeling intermittency}, date={1995}, journal={Journal of Statistical Physics}, volume={79}, number={3-4}, pages={739\ndash 741}, } \bib{Tha95a}{article}{ author={Thaler, Maximilian}, title={A limit theorem for the perron-frobenius operator of transformations on [0,1] with indifferent fixed points}, date={1995}, journal={Israel Journal of Mathematics}, volume={91}, number={1-3}, pages={111\ndash 127}, } \bib{Tha00}{article}{ author={Thaler, Maximilian}, title={The asymptotics of the perron-frobenius operator of a class of interval maps preserving infinite measures}, date={2000}, 
journal={Studia Mathematica}, volume={143}, number={2}, pages={103\ndash 119}, } \bib{Tha05}{article}{ author={Thaler, Maximilian}, title={Asymptotic distributions and large deviations for iterated maps with an indifferent fixed point}, date={2005}, journal={Stochastics and Dynamics}, volume={05}, number={03}, pages={425\ndash 440}, } \bib{Tsu00c}{article}{ author={Tsujii, Masato}, title={Absolutely continuous invariant measures for piecewise real-analytic expanding maps on the plane}, date={2000}, journal={Comm. Math. Phys.}, volume={208}, number={3}, pages={605\ndash 622}, } \bib{Tsu01a}{article}{ author={Tsujii, Masato}, title={Absolutely continuous invariant measures for expanding piecewise linear maps}, date={2001}, journal={Invent. Math.}, volume={143}, number={2}, pages={349\ndash 373}, } \bib{Tsu05}{article}{ author={Tsujii, Masato}, title={Physical measures for partially hyperbolic surface endomorphisms}, date={2005}, journal={Acta Math.}, volume={194}, number={1}, pages={37\ndash 132}, } \bib{Vec22}{article}{ author={Veconi, Dominic}, title={SRB measures of singular hyperbolic attractors}, date={2022}, journal={Discrete and Continuous Dynamical Systems}, volume={42}, } \bib{You99}{article}{ author={Young, Lai-Sang}, title={Recurrence times and rates of mixing}, date={1999}, journal={Israel J. Math.}, volume={110}, pages={153\ndash 188}, } \bib{Zwe98}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Ergodic structure and invariant densities of non-Markovian interval maps with indifferent fixed points}}, date={1998}, journal={Nonlinearity}, volume={1263}, } \bib{Zwe00}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Ergodic properties of infinite measure-preserving interval maps with indifferent fixed points}}, date={2000}, journal={Ergodic Theory and Dynamical Systems}, volume={20}, number={5}, pages={1519\ndash 1549}, } \bib{Zwe02}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Exact $C^\infty$ covering maps of the circle without (weak) limit measure}}, date={2002}, journal={Colloquium Mathematicum}, volume={93}, number={2}, pages={295\ndash 302}, } \bib{Zwe03}{article}{ author={Zweim{\"{u}}ller, Roland}, title={{Stable limits for probability preserving maps with indifferent fixed points}}, date={2003}, journal={Stochastics and Dynamics}, volume={3}, number={1}, pages={83\ndash 99}, } \end{biblist} \end{bibdiv} \end{document}
{ "arxiv_id": "2302.11311", "language": "en", "timestamp": "2023-02-23T02:13:22", "url": "https://arxiv.org/abs/2302.11311", "yymm": "2302" }
\section{Introduction}\label{sec:1} Soft robotic systems possess many of the features required in minimally invasive surgery (MIS), including low weight and compliance similar to that of biological systems \cite{Runciman2019}. In addition, soft robots allow for affordable designs by replacing expensive actuators with low-cost solutions that can be produced locally in a low-resource setting \cite{Franco2021a}. Pneumatics and hydraulics are two common actuation strategies for soft robotic systems, due to their high power-to-weight ratio and affordability. In particular, pneumatic actuation yields fast responses and is well suited for force control, while hydraulic actuation enables the exertion of higher forces. Both approaches have been used extensively, with pneumatics being the most common of the two \cite{Runciman2020}. Increasing attention has been focused on the design of soft hydraulic actuators, such as the one described in \cite{Runciman2021} for MIS applications, which combine high forces with a low-profile form factor, and the possibility to provide shape-sensing ability based on the electrical impedance of the fluid \cite{Avery2019}. Unlike pneumatic soft actuators \cite{Niiyama2015}, employing an incompressible fluid allows control of the length of the actuator in open-loop with good repeatability. Nevertheless, model based control becomes necessary if the application demands high position accuracy in the presence of unknown external forces. Model-based control of soft robots is a notoriously complex topic, since these systems often possess more degrees-of-freedom (DOFs) than actuators \cite{Wang2022}. As a result, model-based control methods should account for the dynamics of the unactuated DOFs to ensure stability \cite{Franco2021b,Borja2022}. In addition, the presence of disturbances, which are ubiquitous in unstructured environments, such as those commonly found in surgery, can degrade performance. To address this point, recent controllers for soft robots have included either nonlinear observers \cite{Franco2021b,Trumic2020a,Trumic2020b} or integral actions \cite{Franco2021c}. Another challenge specific to soft robotic systems with pneumatic or hydraulic actuation is due to the pressure dynamics of the fluid, which decouples the control input from the dynamics of the payload \cite{Stolzle2021}. In our recent work \cite{Franco2022,Franco2022b,Franco2022c}, we have proposed an energy-based control approach for soft continuum manipulators that relies on the port-Hamiltonian formulation and extends the energy shaping methodology \cite{Ortega2002} by accounting for the internal energy of the fluid. Nevertheless, to the best of our knowledge, the case of multiple soft bellow actuators arranged in an antagonistic pair and subject to disturbances has not yet been considered. In this paper we investigate the model based control of soft hydraulic bellow actuators \cite{Runciman2021} arranged in an antagonistic pair (see Figure \ref{fig:fig1}) by employing a port-Hamiltonian formulation and an energy shaping control paradigm. The main contributions of this work include the following points. \begin{itemize} \item A dynamical model of the soft hydraulic bellow actuator, which includes the pressure dynamics of the fluid, is presented. Differently from \cite{Runciman2021}, the relationship between the contraction of the actuator and its volume is expressed analytically in closed form and is employed for control purposes using an energy shaping procedure. 
\item A nonlinear observer is designed to compensate for the effect of unknown external forces in real-time. Stability conditions are discussed with a Lyapunov approach in relation to the tuning parameters. \item The performance of the proposed controller is assessed with numerical simulations and extensive experiments. \end{itemize} The rest of the paper is organized as follows. Section \ref{sec:2} presents the system model. Section \ref{sec:3} details the controller design. Section \ref{sec:4} presents the results of simulations and experiments. Section \ref{sec:6} contains concluding remarks. \begin{figure} [tb] \begin{center} \subfloat[\label{fig:1a}]{ \begin{tabular}{c} \includegraphics[scale = 0.7]{Figures/Figure1a.pdf} \end{tabular} }\\ \subfloat[\label{fig:1b}]{ \begin{tabular}{c} \includegraphics[scale = 0.8]{Figures/Figure1b.pdf} \end{tabular} } \caption{Schematic of soft hydraulic actuator (a); antagonistic pair (b).} \label{fig:fig1} \vspace{-0.5cm} \end{center} \end{figure} \section{System model}\label{sec:2} We consider a system consisting of two soft hydraulic bellow actuators \cite{Runciman2021}, denoted with the subscripts 1 and 2, supplied by pressurized water and arranged in an antagonistic pair to move a payload of mass \(m\) in the horizontal direction \(x\). The actuators are made of inextensible thermoplastic material (e.g. nylon) and, differently from pneumatic muscle actuators, they contract when the internal volume of the fluid increases. Without loss of generality, we assume that the position \(x\) of the payload, which is connected to each actuator, increases when actuator 2 contracts and actuator 1 expands. The actuators are supplied by identical syringe pumps, which are not modeled in detail in this work. The length of the bellow actuators varies with \(\theta\) as \begin{equation} \label{eq:1} \begin{split} L(\theta) = L_0 \frac{\sin{\theta}}{\theta}, \end{split} \end{equation} where \(L_0\) is the length of the empty actuator, and \(\theta\) is half the central angle of the actuator's section when it is contracted (see Figure \ref{fig:1a}). The volume of one bellow actuator varies in a nonlinear fashion with \(\theta\), that is \begin{equation} \label{eq:2} \begin{split} V(\theta) = k_0 \frac{L_0^2}{n_L}\left(\frac{d_c}{3}+\frac{D_s}{2}\right) \frac{\theta- \cos{\theta}\sin{\theta}}{\theta^2}, \end{split} \end{equation} where \(n_L\) is the number of pouches in the actuator, \(d_c\) and \(D_s\) define the actuator's geometry, and \(k_0\) is a scaling factor \cite{Runciman2021}. A closed-form analytical expression that approximates the volumes \(V_1\) and \(V_2\) of the antagonistic pair (see Figure \ref{fig:1b}) is obtained by substituting Taylor series in (\ref{eq:1}) and (\ref{eq:2}), that is \(\sin{\theta}\approx \theta - \frac{\theta^3}{6}\) and \(\cos{\theta}\approx 1 - \frac{\theta^2}{2}\), which yields \(L(\theta) = L_0 (1-\frac{\theta^2}{6})\). Defining the contraction of actuator 2 as \(x=L_0-L(\theta)\), the volumes \(V_1\) and \(V_2\) become \begin{equation} \label{eq:3} \begin{split} V_1 = K_0 \left(\frac{2}{3}-\frac{x_M-x-x_0}{2L_0}\right)\sqrt{\frac{6(x_M-x-x_0)}{L_0}}+V_0,\\ V_2 = K_0 \left(\frac{2}{3}-\frac{x+x_0}{2L_0}\right)\sqrt{\frac{6(x+x_0)}{L_0}}+V_0, \end{split} \end{equation} where \(K_0\) is a scaling factor that accounts for the parameters in (\ref{eq:2}), \(x_0\) is the initial position, \(x_M\) is the maximum contraction of the actuators, and \(V_0\) is the dead volume of fluid assumed to be identical for both actuators.
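For illustration, the volumes in (\ref{eq:3}) and the areas \(A_1=\partial V_1/\partial x\), \(A_2=\partial V_2/\partial x\) used later by the controller can be evaluated numerically as sketched below. This is only a sketch (in Python, not the authors' code): the numerical values are those of the simulation study in Section \ref{sec:4}, and the central-difference step \texttt{h} is an arbitrary choice; analytic derivatives could be used instead.
\begin{verbatim}
# Sketch: volumes V1, V2 of Eq. (3) and areas A1 = dV1/dx, A2 = dV2/dx.
# Parameter values as in Section 4.1 (SI units).
import numpy as np

L0, n_L = 30e-3, 3            # empty length, number of pouches
d_c, D_s = 9e-3, 12e-3        # actuator geometry
K0 = 1.05 * L0**2 / n_L * (d_c / 3 + D_s / 2)   # scaling factor
V0 = 1e-7                     # dead volume
x0, xM = L0 / 8, L0 / 4       # initial and maximum contraction

def V1(x):
    s = xM - x - x0
    return K0 * (2.0/3.0 - s/(2.0*L0)) * np.sqrt(6.0*s/L0) + V0

def V2(x):
    s = x + x0
    return K0 * (2.0/3.0 - s/(2.0*L0)) * np.sqrt(6.0*s/L0) + V0

def A1(x, h=1e-7):            # central difference for dV1/dx
    return (V1(x + h) - V1(x - h)) / (2.0*h)

def A2(x, h=1e-7):            # central difference for dV2/dx
    return (V2(x + h) - V2(x - h)) / (2.0*h)
\end{verbatim}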
The mechanical energy \(H\) of the system includes the kinetic energy of the payload and of the fluid, and the internal energy of the pressurized fluid \(\Phi\) in each actuator. The potential elastic energy is instead negligible, since the actuator material does not stretch longitudinally and the system lies on the horizontal plane. In summary, \(H = \frac{1}{2}M\dot{x}^2 + \Phi_1 + \Phi_2\), where the internal energy of the pressurized fluid in each bellow actuator is \cite{Gao2019} \begin{equation} \label{eq:4} \begin{split} \Phi_1 = \left(- P_1 + \Gamma_{0}(e^{P_1/\Gamma_{0}}-1)\right)V_1,\\ \Phi_2 = \left( -P_2 + \Gamma_{0}(e^{P_2/\Gamma_{0}}-1)\right)V_2, \end{split} \end{equation} and the pressures \(P_1\) and \(P_2\) are relative to atmosphere, while the total mass of the moving parts for a fluid of constant density \(\rho\) is \begin{equation} \label{eq:5} \begin{split} M = \left( m + V_1 \rho + V_2 \rho\right). \end{split} \end{equation} Denoting the isothermal bulk modulus of the fluid with \(\Gamma_0\), the pressure dynamics are given by \begin{equation} \label{eq:6} \begin{split} \dot{P}_1 = \Gamma_{0} \frac{U_1 - A_1 \dot{x}}{ V_{1}}, ~ \dot{P}_2 = \Gamma_{0} \frac{U_2 - A_2 \dot{x}}{ V_{2}}, \end{split} \end{equation} where the volumetric flow rates \(U_1\) and \(U_2\) provided by the syringe pumps correspond to the control input \cite{Acuna-Bravo2009}, while \(A_1=\frac{\partial V_1}{\partial x}\) and \(A_2=\frac{\partial V_2}{\partial x}\). The system dynamics in port-Hamiltonian form, without the internal dynamics of the syringe pumps, is thus \begin{equation} \label{eq:7} \begin{split} \begin{bmatrix} \dot{x} \\ \dot{p} \\ \dot{P_1} \\ \dot{P_2} \\ \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0\\ - 1 & - R & \Gamma_{01} & \Gamma_{02}\\ 0 & -\Gamma_{01} & 0 & 0 \\ 0 & -\Gamma_{02} & 0 & 0 \\ \end{bmatrix}\begin{bmatrix} \partial_x H \\ \partial_p H \\ \partial_{P_1} H \\ \partial_{P_2} H \\ \end{bmatrix} + \begin{bmatrix} 0 \\ -F \\ \frac{\Gamma_{0}U_1}{V_1}\\ \frac{\Gamma_{0}U_2}{V_2}\ \end{bmatrix}, \end{split} \end{equation} where \(\Gamma_{01} = \frac{\Gamma_{0}A_1}{V_1} \) and \(\Gamma_{02} = \frac{\Gamma_{0}A_2}{V_2} \), \(R\) is the physical damping related to the transmission, and \(F\) is the external force due to the payload. The system states are the position \(x\) of the payload, the momenta \(p = M\dot{x}\), and the pressures \(P_1\) and \(P_2\). The notation \(\partial_x H=\frac{\partial H}{\partial x}, \partial_p H=\frac{\partial H}{\partial p}, \partial_{P_1} H=\frac{\partial H}{\partial P_1}, \partial_{P_2} H=\frac{\partial H}{\partial P_2}\) is employed for brevity. The following assumptions are introduced for controller design purposes. \emph{Assumption 1}. The fluid is isothermal, isentropic, and inviscid. The pressures \(P_1\) and \(P_2\), the density \(\rho\) (assumed constant), and the speed of the fluid (which is approximated with \(\dot{x}\)) are uniform throughout the volumes \(V_1\) and \(V_2\). \emph{Assumption 2}. All model parameters are accurately known. The bulk modulus of the fluid is \(\Gamma_0\). The friction of the transmission (i.e. the lead-screw of the syringe pump, and the cable attached to the payload) is defined by the parameter \(R\). The system lies in the horizontal plane. \emph{Assumption 3}. The position \(x\) and the velocity \(\dot{x}\) of the payload, and the pressures \(P_1\) and \(P_2\) of the fluid are measurable and bounded, that is \(P_1,P_2 \ll \Gamma_0 \) . \emph{Assumption 4}. 
The effect of the external forces is accounted for with \(F\), which is unknown but constant and can be either positive or negative. The effect of viscosity on the pressure dynamics is negligible at low speed \cite{Franco2022}, while pressure, speed and density are near-uniform in the case of laminar flow. Constant external forces can include the weight of an additional payload, while the case of time-varying forces is discussed in Section \ref{sec:3.3}. \section{Controller design}\label{sec:3} The control goal corresponds to regulating the position of the payload to \(x=x^*\) in the presence of an unknown external force \(F\). \subsection{Nonlinear observer}\label{sec:3.1} The external force is estimated with a nonlinear observer constructed according to the \emph{Immersion and Invariance} methodology \cite{AstolfiA.KaragiannisD.2007}. To this end, the estimation error \(\zeta\) is defined as \begin{equation} \label{eq:8} \begin{split} \zeta = {\widehat{F}} + \beta - F, \end{split} \end{equation} where the force estimate is \(\Tilde{F} =\widehat{F} + \beta \). The function \(\beta\), which is the state-dependent part of the force estimate, and the observer state \(\widehat{F}\) are computed with \begin{equation} \label{eq:9} \begin{split} {\dot{\widehat{F}}} = \alpha \left(-\partial_x H -R \partial_p H + \frac{\Gamma_{0}A_1}{V_1} \partial_{P_1} H \right) \\ +\alpha \left( \frac{\Gamma_{0}A_2}{V_2} \partial_{P_2} H - {\widehat{F}} - \beta \right) ,\\ \beta = -\alpha p, \end{split} \end{equation} with \(\alpha>0\) a constant tuning parameter. \emph{Proposition 1}: Consider system (\ref{eq:7}) with \emph{Assumptions 1} to \emph{4} and with the observer (\ref{eq:9}). Then \(\zeta\) converges to zero exponentially for all \(\alpha >0\). \emph{Proof}: Computing the time derivative of (\ref{eq:8}) while substituting \(\dot{p}, \dot{P_1}\) and \(\dot{P_2}\) from (\ref{eq:7}) gives \begin{equation} \label{eq:10} \begin{split} \dot{\zeta}={\dot{\widehat{F}}} + \frac{\partial \beta}{\partial x} \frac{p}{M} + \frac{\partial \beta}{\partial p} \left(-\partial_x H - {\widehat{F}} - \beta + \zeta\right)\\ + \frac{\partial \beta}{\partial p} \left( \frac{\Gamma_{0}A_1}{V_1} \partial_{P_1} H + \frac{\Gamma_{0}A_2}{V_2} \partial_{P_2} H -R \partial_p H \right). \end{split} \end{equation} Substituting (\ref{eq:9}) into (\ref{eq:10}) yields \begin{equation} \label{eq:11} \begin{split} \dot{\zeta} = -\alpha \zeta.\ \end{split} \end{equation} Defining the Lyapunov function candidate \(\Upsilon = \frac{1}{2}\zeta^{2}\), computing its time derivative, and substituting (\ref{eq:11}) yields \begin{equation} \label{eq:12} \begin{split} \dot{\Upsilon} = - \alpha \zeta^{2} = -2 \alpha \Upsilon <0. \end{split} \end{equation} It follows from (\ref{eq:12}) that \(\zeta\) is bounded and converges to zero exponentially for all \( \alpha > 0\) concluding the proof \(\square\) \subsection{Energy shaping control}\label{sec:3.2} The control law is designed following a similar procedure to \cite{Franco2022}, which is extended to account for the presence of redundant actuators in the antagonistic pair and for the nonlinear observer (\ref{eq:9}). 
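Before detailing the control law, we note that the observer (\ref{eq:9}) can be implemented directly in discrete time. A minimal sketch is reported below (in Python); it is an illustration rather than the experimental code: the helper functions \texttt{V1}, \texttt{V2}, \texttt{A1}, \texttt{A2} are those sketched after (\ref{eq:3}), the partial derivatives of \(H\) are written out here from the definitions (\ref{eq:4})--(\ref{eq:5}), and the explicit Euler step \texttt{dt} is an assumption made for illustration. The function returns the observer state \(\widehat{F}\) and the force estimate \(\Tilde{F}=\widehat{F}+\beta\).
\begin{verbatim}
# Sketch of one explicit-Euler step of the disturbance observer (9).
import numpy as np

def observer_step(Fhat, x, v, P1, P2, dt,
                  alpha=10.0, R=5.0, m=0.25, rho=1e3, Gamma0=2e9):
    V1x, V2x, A1x, A2x = V1(x), V2(x), A1(x), A2(x)
    M = m + rho * (V1x + V2x)                  # Eq. (5)
    p = M * v
    # partial derivatives of H(x, p, P1, P2), from Eqs. (4)-(5)
    dHdp  = v
    dHdP1 = np.expm1(P1 / Gamma0) * V1x
    dHdP2 = np.expm1(P2 / Gamma0) * V2x
    dHdx  = (-0.5 * v**2 * rho * (A1x + A2x)
             + (-P1 + Gamma0 * np.expm1(P1 / Gamma0)) * A1x
             + (-P2 + Gamma0 * np.expm1(P2 / Gamma0)) * A2x)
    beta = -alpha * p
    Fhat_dot = alpha * (-dHdx - R * dHdp
                        + Gamma0 * A1x / V1x * dHdP1
                        + Gamma0 * A2x / V2x * dHdP2
                        - Fhat - beta)
    Fhat_new = Fhat + dt * Fhat_dot
    return Fhat_new, Fhat_new + beta           # observer state, force estimate
\end{verbatim}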
The closed-loop dynamics in port-Hamiltonian form is thus \begin{equation} \label{eq:13} \begin{bmatrix} \dot{x} \\ \dot{p} \\ \dot{P_1} \\ \dot{P_2} \\ \end{bmatrix} = \begin{bmatrix} 0 & S_{12} & S_{13} & S_{14}\\ - S_{12} & - S_{22} & S_{23} & S_{24}\\ - S_{13} & - S_{23} & - S_{33} & 0\\ - S_{14} & - S_{24} & 0 & -S_{44}\\ \end{bmatrix}\begin{bmatrix} \partial_x H_d \\ \partial_p H_d \\ \partial_{P_1} H_d \\ \partial_{P_2} H_d \\ \end{bmatrix} - \begin{bmatrix} 0 \\ \zeta \\ 0 \\ 0 \\ \end{bmatrix}, \end{equation} where \(H_{d} = \frac{1}{2} p^{2}M_{d}^{- 1} + \Omega_{d} + \varsigma^{2}/2\) is a positive definite storage function. The potential energy \(\Omega_{d} = \frac{1}{2}k_p \left(x^* -x \right)^2\) has a strict minimizer at \(x=x^*\) corresponding to the regulation goal, \(M_{d} = k_{m}M\), and \(\varsigma\) is given by \begin{equation} \label{eq:14} \begin{split} \varsigma =P_1 A_1 + P_2 A_2 - {\widehat{F}} + k_p k_m\left(x-x^* \right), \end{split} \end{equation} with \(k_{p}>0\) and \(k_m>0\) constant tuning parameters, \(A_1=\frac{\partial V_1}{\partial x}\) and \(A_2=\frac{\partial V_2}{\partial x}\), and \({\widehat{F}}\) is computed by time-integration of (\ref{eq:9}). The terms \(S_{ij}\) are defined so that the open-loop dynamics (\ref{eq:7}) matches the closed-loop dynamics (\ref{eq:13}) accounting for the estimation error (\ref{eq:8}), that is \begin{equation} \label{eq:15} \begin{split} S_{12} = k_{m}, ~S_{13}=S_{14}=0, ~S_{22} = k_{m}R - \alpha k_m M, \\ S_{23} = \frac{ 1 + k_{m}\partial_{x}\varsigma }{2\partial_{P_1}\varsigma}, ~ S_{24} = \frac{ 1 + k_{m}\partial_{x}\varsigma }{2\partial_{P_2}\varsigma}, \\ S_{33} = \frac{k_{i}}{\left(\partial_{P_1}\varsigma\right)^2}>0, ~ S_{44} = \frac{k_{i}}{\left(\partial_{P_2}\varsigma\right)^2} > 0. \end{split} \end{equation} The control inputs \(U_1\) and \(U_2\) are thus \begin{equation} \label{eq:16} \begin{split} U_1 = \frac{A_1 p}{M} - \frac{V_{1} }{\Gamma_{0}} \left( \frac{ 1 + k_{m}\partial_{x}\varsigma }{2 A_1} \frac{p}{M} + \frac{k_{i}}{A_1}\varsigma \right), \\ U_2 = \frac{A_2 p}{M} - \frac{ V_{2}}{\Gamma_{0}} \left( \frac{ 1 + k_{m}\partial_{x}\varsigma }{2 A_2} \frac{p}{M} + \frac{k_{i}}{A_2} \varsigma \right), \end{split} \end{equation} where the tuning parameters are \(k_m\), \(k_p\), \(k_i\), and \(\alpha\) from (\ref{eq:9}). \emph{Lemma 1}: The system (\ref{eq:7}) in closed-loop with the control laws (\ref{eq:16}) yields (\ref{eq:13}) with the parameters (\ref{eq:14}) and (\ref{eq:15}).
\emph{Proof}: Equating the corresponding rows of (\ref{eq:7}) and of (\ref{eq:13}) yields the matching equations \begin{equation} \label{eq:17} \begin{split} M^{- 1}p = S_{12}M_{d}^{- 1}p + S_{13} \varsigma \partial_{P_1} \varsigma + S_{14} \varsigma \partial_{P_2} \varsigma, \\ \frac{\Gamma_{0}A_1\partial_{P_1} H}{V_1} + \frac{\Gamma_{0}A_2\partial_{P_2} H}{V_2} -\partial_x H -R \partial_p H -F = \\ - {S}_{12}\left( \partial_{x}\Omega_{d} + \frac{1}{2}\partial_{x}( p^{T}M_{d}^{- 1}p) + \varsigma \partial_x\varsigma\right) \\ - S_{22}M_{d}^{- 1}p + S_{23} \varsigma \partial_{P_1}\varsigma+ S_{24}\varsigma \partial_{P_2} \varsigma - \zeta - \alpha p, \\ \frac{ \Gamma_{0} }{V_{1}} \left(U_1-A_1 \frac{p}{M}\right) = -S_{13} \varsigma \partial_{x} \varsigma - S_{23} \frac{p}{M_d} - S_{33}\varsigma \partial_{P_1}\varsigma, \\ \frac{ \Gamma_{0} }{ V_{2}} \left(U_2-A_2 \frac{p}{M}\right)= -S_{14} \varsigma \partial_{x} \varsigma - S_{24} \frac{p}{M_d} - S_{44}\varsigma \partial_{P_2}\varsigma, \end{split} \end{equation} which are verified by the parameters (\ref{eq:15}). In particular, the first equation is verified by \(S_{12} = k_{m}\) and \(M_{d} = k_{m}M\) with \(S_{13} = S_{14} = 0\) since \(\partial_p\varsigma=0\). Substituting \(S_{12}\), \(S_{22}\), \(S_{23}\) and \(S_{24}\) verifies the second equation with \(\varsigma\) in (\ref{eq:14}). Finally, substituting \(U_1\) and \(U_2\) from (\ref{eq:16}) with \(S_{13}=S_{14}=0\) and \(\partial_{P_1}\varsigma=A_1, \partial_{P_2}\varsigma=A_2\) verifies the last two equations \(\square\) \emph{Remark 1}. Differently from our previous work \cite{Franco2021b,Franco2022}, the system (\ref{eq:7}) is fully actuated. Thus, the controller design does not require solving partial differential equations, which is a major challenge in energy shaping control \cite{Ortega2002}. However, the payload dynamics are not input-affine due to the pressure dynamics of the fluid, which is similar to our work \cite{Franco2022}. In this regard, the first key difference from \cite{Franco2022} is due to the presence of redundant bellow actuators in the antagonistic pair that are characterized by nonlinear expressions of the volumes \(V_1\) and \(V_2\), which yields nonlinear control laws. The second key difference is due to the nonlinear observer (\ref{eq:9}) which results in the closed-loop damping \(S_{22}\) in (\ref{eq:15}) including a negative term proportional to \(\alpha\). This so-called negative damping assignment greatly simplifies the controller design. For comparison purposes, redefining (\ref{eq:14}) as \begin{equation} \notag \begin{split} \varsigma =P_1 A_1 + P_2 A_2 - {\widehat{F}} + \alpha p + k_p k_m\left(x-x^* \right), \end{split} \end{equation} would cancel the term \(\alpha p\) from the second equation in (\ref{eq:17}) yielding \(S_{22}=k_m R\) as in \cite{Franco2022}. This would lead to a more complex control law, since \(\partial_p \varsigma \neq 0\) thus requiring \(S_{13}\neq 0\) to verify the first matching equation in (\ref{eq:17}), that is \begin{equation} \notag \begin{split} M^{- 1}p = S_{12}M_{d}^{- 1}p + S_{12} \varsigma \partial_p \varsigma + S_{13} \varsigma \partial_{P_1} \varsigma + S_{14} \varsigma \partial_{P_2} \varsigma. \end{split} \end{equation} \emph{Remark 2}. The dynamics of the syringe pumps supplying the flow rates \(U_1\) and \(U_2\) is not modeled for simplicity. However, in case the syringe pumps are actuated by stepper motors, the control laws (\ref{eq:16}) can be employed to design a reference trajectory \(x_s (t)\). 
For instance, employing a minimum-jerk trajectory with duration \(T_f\) yields \begin{equation} \notag \begin{split} x_s (t) = x_{s0} + (x_s^* - x_{s0})\left(\frac{10 t^3}{T_f^3}-\frac{15 t^4}{T_f^4}+\frac{6 t^5}{T_f^5}\right), \end{split} \end{equation} where \(x_{s0}\) and \(x_s^*\) are the initial position and the final position of the stepper motor. Computing the time derivative of \(x_s (t)\), while noting that the flow rate of a syringe pump with area \(S\) is \(U_1=\dot{x}_s S\) and corresponds to the control input (\ref{eq:16}), yields the target position for the first stepper motor at any instant \(0<t<T_f\) \begin{equation} \notag \begin{split} x_s^* = x_{s0} + \frac{U_1 T_f^5 }{30 t^2 S (T_f - t)^2}. \end{split} \end{equation} In a digital implementation with sampling interval \(\Delta t\), the former expression is modified by substituting \(t=\Delta t\) (i.e. \(x_s^*\) is computed at each instant only for the subsequent sampling interval). If in addition \(T_f = 2\Delta t\) we have \begin{equation} \notag \begin{split} x_s^* = x_{s0} + \frac{32 U_1 \Delta t }{30 S}, \end{split} \end{equation} where \(U_1\) is given by (\ref{eq:16}). \subsection{Stability analysis}\label{sec:3.3} \emph{Proposition 2}: Consider system (\ref{eq:7}) with \emph{Assumptions 1} to \emph{4} in closed-loop with the control laws (\ref{eq:16}), where the adaptive estimate of the force \({\Tilde{F}}=\widehat{F} - \alpha p\) is computed with (\ref{eq:9}). Define the parameters \(k_{i},k_{m},\alpha\) such that the matrix \begin{equation} \label{eq:18} \begin{split} \Theta = \begin{bmatrix} \frac{R - \alpha M}{k_m M^2} & \frac{1}{2 k_m M} & 0 \\ \frac{1}{2 k_m M} & \alpha & 0 \\ 0 & 0 & 2k_{i}\\ \end{bmatrix},\ \end{split} \end{equation} is positive definite, that is \(k_i>0, (R - \alpha M)\alpha k_m > \frac{1}{4} \). Then the equilibrium point \((x,\dot{x},P_1,P_2) = \left(x^{*},0,P_1^*,P_2^*\right)\) is globally asymptotically stable provided that \(A_1 \neq -A_2\). \emph{Proof}: Defining the Lyapunov function \(\Psi = {H}_{d} + \Upsilon\) and computing its time derivative along the trajectories of the closed-loop system (\ref{eq:13}) while substituting (\ref{eq:12}) yields \begin{equation} \label{eq:19} \begin{split} \dot{\Psi} = - S_{22}\left(\partial_{p}H_{d}\right)^2 -\partial_{p}H_{d}\zeta - \alpha \zeta^{2} - 2 k_i\varsigma^{2}. \end{split} \end{equation} Rearranging common terms in (\ref{eq:19}) yields \begin{equation} \label{eq:20} \begin{split} \dot{\Psi} = - \overline{x}^{T}\Theta \overline{x},\ \end{split} \end{equation} where \(\overline{x}^{T} = \begin{bmatrix} p & \zeta & \varsigma \end{bmatrix}\) and \(\Theta\) is given in (\ref{eq:18}). Thus \(\dot{\Psi} \leq 0\) for all \(k_i>0, (R - \alpha M) \alpha k_m > \frac{1}{4} \) and the equilibrium is stable. It follows from (\ref{eq:20}) that \(\overline{x} \in \mathcal{L}^{2} \cap \mathcal{L}^{\infty}\), while computing \(\dot{p}\) from (\ref{eq:13}) yields \(\dot{p} \in \mathcal{L}^{\infty} \). Similarly, it follows from (\ref{eq:14}) that \(\dot{\varsigma} \in \mathcal{L}^{\infty}\), and \(\dot{\zeta} \in \mathcal{L}^{\infty} \) from (\ref{eq:11}). Consequently, \(\overline{x}^{T}\) converges to zero asymptotically \cite{Tao1997}. Computing \(\dot{p}\) from (\ref{eq:13}) at \(\overline{x} = 0\) yields \(\partial_{x}\Omega_{d} = 0\), that is \( k_p k_m (x^*-x)=0\). In addition, \(\partial_{x}^2\Omega_{d} = k_p k_m>0\) which confirms that the equilibrium is a strict minimizer of \(\Omega_{d}\).
Thus \(x=x^*\) is the largest invariant set in \(\overline{x}=0\) and it is asymptotically stable (see Corollary 3.1 in \cite{Khalil2002}). Finally, computing (\ref{eq:14}) at \(\overline{x} = 0\) and \(x=x^*\) yields the values of \(P_1^*,P_2^*\) at the equilibrium, that is \(P_1^* A_1 + P_2^* A_2 = {\widehat{F}}\). To prove the global claim it is necessary to show that \(\Psi\) is radially unbounded (see Corollary 3.2 in \cite{Khalil2002}). To this end note that \(x,p,\varsigma,\zeta \rightarrow \infty \implies \Psi \rightarrow \infty\). In addition, it follows from (\ref{eq:14}) that \(P_1,P_2 \rightarrow \infty \implies \varsigma \rightarrow \infty\) provided that \(A_1 \neq -A_2\), while \(\varsigma=0 \implies A_1 P_1 + A_2 P_2 = k_p(x^* -x)+ \widehat{F}\). Consequently, provided that \(A_1 \neq -A_2\), the condition \(\varsigma=0 \cap P_1, P_2 \rightarrow \infty\) requires either \(x \rightarrow \infty\) or \(\widehat{F} \rightarrow \infty\), which yield \(\Psi \rightarrow \infty\) and concludes the proof \(\square\) \emph{Remark 3}. The negative damping assignment in \(S_{22}\) imposes an upper bound on \(\alpha\), that is \(0<\alpha<R/M\). If the force \(F\) is time-varying with time derivative \(\dot{F} = c \dot{x}\), where \(0 \leq c \leq \epsilon\), equation (\ref{eq:19}) yields \begin{equation} \notag \begin{split} \dot{\Psi} \leq - S_{22}\left(\partial_{p}H_{d}\right)^2 -\partial_{p}H_{d}\zeta - \alpha \zeta^{2} - \zeta \epsilon \frac{p}{M} - 2 k_i\varsigma^{2}, \end{split} \end{equation} which can be written as (\ref{eq:20}) with the new matrix \begin{equation} \label{eq:21} \begin{split} \Theta^{'} = \begin{bmatrix} \frac{R - \alpha M}{k_m M^2} & \frac{1}{2 k_m M} + \frac{\epsilon}{2 M} & 0 \\ \frac{1}{2 k_m M} + \frac{\epsilon}{2 M} & \alpha & 0 \\ 0 & 0 & 2k_{i}\\ \end{bmatrix}.\ \end{split} \end{equation} Global asymptotic stability of the equilibrium is concluded if \(k_i>0\) and \( (R- \alpha M) \alpha k_m > \frac{1}{4} \left(1+ \epsilon k_m\right)^2\). Rearranging the former inequality yields \(\epsilon^2 k_m^2 - 2 k_m (-\epsilon + 2 (R - \alpha M)\alpha) +1 < 0 \), which admits real solutions provided that \((R - \alpha M)\alpha > \epsilon/2 \): this indicates that the presence of time varying external forces requires either a larger physical damping \(R\) or a less aggressive tuning of the parameter \(\alpha\). 
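To summarize the design, the sketch below evaluates the control inputs (\ref{eq:16}), with \(\varsigma\) from (\ref{eq:14}), and checks the tuning condition of Proposition 2 through the positive definiteness of \(\Theta\) in (\ref{eq:18}). As before this is only an illustrative sketch (in Python): \texttt{V1}, \texttt{V2}, \texttt{A1}, \texttt{A2} are the helper functions sketched after (\ref{eq:3}), \texttt{Fhat} is the observer state from (\ref{eq:9}), \(\partial_{x}\varsigma\) is obtained here by a finite difference, and the default numerical values are those of the simulations rather than part of the design. For instance, \texttt{gains\_ok(k\_m=2, k\_i=10, alpha=10)} returns \texttt{True}, consistently with the condition \((R-\alpha M)\alpha k_m > 1/4\) discussed in Section \ref{sec:4}.
\begin{verbatim}
# Sketch of the control law (14)-(16) and of the gain check of Prop. 2.
import numpy as np

def control_inputs(x, v, P1, P2, Fhat, x_star,
                   k_p=1.0, k_m=2.0, k_i=10.0,
                   m=0.25, rho=1e3, Gamma0=2e9, h=1e-7):
    V1x, V2x, A1x, A2x = V1(x), V2(x), A1(x), A2(x)
    M = m + rho * (V1x + V2x)
    p = M * v
    def sigma(xx):                       # Eq. (14) as a function of x
        return P1*A1(xx) + P2*A2(xx) - Fhat + k_p*k_m*(xx - x_star)
    s = sigma(x)
    dsx = (sigma(x + h) - sigma(x - h)) / (2.0*h)
    U1 = A1x*p/M - V1x/Gamma0 * ((1.0 + k_m*dsx)/(2.0*A1x)*p/M + k_i/A1x*s)
    U2 = A2x*p/M - V2x/Gamma0 * ((1.0 + k_m*dsx)/(2.0*A2x)*p/M + k_i/A2x*s)
    return U1, U2                        # Eq. (16)

def gains_ok(k_m, k_i, alpha, R=5.0, M=0.25):
    # Positive definiteness of Theta in Eq. (18); M approximated by the
    # payload mass only (the fluid mass is negligible here).
    Theta = np.array([[(R - alpha*M)/(k_m*M**2), 1.0/(2*k_m*M), 0.0],
                      [1.0/(2*k_m*M),            alpha,         0.0],
                      [0.0,                      0.0,           2*k_i]])
    return bool(np.all(np.linalg.eigvalsh(Theta) > 0))
\end{verbatim}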
\begin{figure} [ht] \begin{center} \subfloat[\label{fig:2a}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/2a-eps-converted-to.pdf} \end{tabular} } \hspace*{-2.7em} \subfloat[\label{fig:2b}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/2b-eps-converted-to.pdf} \end{tabular} }\\[-0.3ex] \subfloat[\label{fig:2c}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/2c-eps-converted-to.pdf} \end{tabular} } \hspace*{-2.7em} \subfloat[\label{fig:2d}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/2d-eps-converted-to.pdf} \end{tabular} }\\[-0.3ex] \subfloat[\label{fig:2e}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/2e-eps-converted-to.pdf} \end{tabular} } \hspace*{-2.7em} \subfloat[\label{fig:2f}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/2f-eps-converted-to.pdf} \end{tabular} } ~ \caption{Simulation results for system (\ref{eq:7}) and controller (\ref{eq:16}) considering different external forces, \(F_1=5 \tanh{(\dot{x})}\), \(F_2=10 x\), \(F_3=-10 x\) : (a) position \(x\); (b) disturbance estimate \(\Tilde{F}\); (c) pressure \(P_1\); (d) pressure \(P_2\); (e) control input \(U_1\); (f) control input \(U_2\).} \label{fig:2} \vspace{-0.5cm} \end{center} \end{figure} \section{Results}\label{sec:4} \subsection{Simulations}\label{sec:4.1} Simulations have been conducted in MATLAB using an ODE23 solver and the initial conditions \((x,\dot{x},P_1,P_2)=(0,0,0,0)\) with atmospheric pressure \(P_{\text{atm}}=10^5\). The model parameters in SI units are \(\Gamma_0=2 \times 10^9, ~\rho=10^3, ~R=5, ~m=0.25, ~L_0=30 \times 10^{-3}, ~V_0=1\times 10^{-7}, ~n_L=3, ~D_s=12\times 10^{-3}, ~d_c=9\times 10^{-3}, ~x_0=L_0/8\), \(x_M=L_0/4\), and \(K_0=1.05 \frac{L_0^2}{n_L} (\frac{d_c}{3}+\frac{D_s}{2})=2.8 \times 10^{-6}\). The tuning parameters for the control laws (\ref{eq:16}) have been set as \( k_p=1, k_m=2, k_i=10, \alpha=10\) for illustrative purposes. Note that the former values verify the stability conditions of \emph{Proposition 2}, that is \(k_i=10>0, (R -\alpha M)\alpha k_m = 80 > \frac{1}{4}\). Three different external forces have been considered: \(F=5 \tanh{(\dot{x})}\), which is akin to Coulomb friction and vanishes at equilibrium, is indicated with \(F_1\); \(F=10 x\), which represents a compression spring, is indicated with \(F_2\); \(F=-10 x\), which represents a tension spring, is indicated with \(F_3\). Figure \ref{fig:2} shows that the regulation goal \(x=x^*\) is correctly achieved with the control laws (\ref{eq:16}) in spite of the different external forces. In particular, the forces \(F_1\) and \(F_2\) oppose motion, while \(F_3\) favors motion, thus resulting in a slightly faster transient. The disturbance observer (\ref{eq:9}) converges to a constant value, which corresponds to the forces \(F_1\), \(F_2\), and \(F_3\) at equilibrium, that is \(0\) N, \(0.01\) N, \(-0.01\) N. The control inputs and the corresponding pressures remain smooth for all operating conditions. \subsection{Experiments}\label{sec:4.2} The controller (\ref{eq:16}) was tested experimentally on a prototype consisting of two identical soft hydraulic actuators arranged in an antagonistic pair (see Figure \ref{fig:3}). The actuator dimensions are the same as those specified in Section \ref{sec:4.1}. The actuators are supplied by two identical syringe pumps (ID 27 mm) driven by stepper motors and lead-screw transmission. 
The position \(x\) has been measured with an optical tracking system (OptiTrack, NaturalPoint, Inc., USA), and the pressures have been measured with two sensors (MS5803-14BA, TE Connectivity, Switzerland). A Python script was employed to collect data from the sensors via serial link with baud rate 115200 and to communicate with the stepper drivers (DRV8825, Pololu, USA) with a sampling frequency of 20.84 Hz. The position command issued to stepper \(j=1,2\) has been computed from (\ref{eq:16}) as in \emph{Remark 2}, that is \(x_{sj}^*=x_{sj0} + U_{j} k_U\), where \(k_U\) is a constant depending on the size of the syringe pumps. The tuning parameters have been set to \( k_p=4, k_m=4, k_i=10, \alpha=10\), which are similar to those used in the simulations, while \(k_U=6\) has been chosen empirically. The starting position corresponding to \(x=0\) has been set empirically by filling both actuators by equal amounts so that \(x_0\approx L_0/8\), after which the controller has been activated. To assess the effect of external forces, different masses (i.e., 50 g, 75 g, 100 g) have been attached to the gantry plate with a pulley, thus resulting in a constant force in the negative $x$ direction. \begin{figure}[tb] \def\svgwidth{\columnwidth} \centering {\footnotesize\input{Figures/Setup.pdf_tex}} \caption{Experimental setup, showing antagonistic arrangement of soft actuators, the gantry plate, optical trackers, and cable used to transmit external force $F_{ext}$ to gantry.}\vspace{-0.5cm} \label{fig:3} \end{figure} Figure \ref{fig:4} shows that the controller achieves the regulation goal \(x=x^*\) with a consistent transient, which is comparable to the simulations in Figure \ref{fig:2}, regardless of the external forces. Nevertheless, the settling time is larger than in the simulations since the syringe pumps and the stepper motors have not been accounted for in the controller design. The force estimate \(\Tilde{F}\) computed from (\ref{eq:9}) shows an initial spike, corresponding to the instant when the additional mass starts pulling on the actuators, and then settles around a constant value. In contrast to Figure \ref{fig:2}, the disturbance estimate and the pressures are affected by high-frequency noise, which is due to: i) measurement noise on the position and the velocity (i.e., computed by discrete differentiation), which could be reduced with a low-pass filter; ii) quantization effects due to the stepper resolution, which could be improved by increasing the degree of microstepping used. As shown in Figure \ref{fig:5a}, the test with the 50 g mass was repeated five times, demonstrating good repeatability, with a maximum standard deviation of the error about the mean of 0.022 mm across the five repetitions. The maximum error after settling (corresponding to t = 135 s in Figure \ref{fig:5a}) was 0.043 mm, and the mean error was 0.006 mm with a standard deviation of 0.033 mm. Figure \ref{fig:5b} shows the system response with our open-loop controller \cite{Runciman2021} (i.e., the control input is related to the prescribed position \(x^*\) with a lookup table) in the presence of different payloads. In this case the external forces result in noticeable position errors, thus confirming that a feedback controller is necessary for high accuracy.
Conversely, the open-loop approach \cite{Runciman2021} yields a faster response: the reference position \(x_{sj}^*\) issued at the start of the experiments corresponds to the setpoint \(x^*\), thus the responsiveness of the system depends only on the maximum speed of the stepper motor. In addition, open-loop control is not affected by measurement noise from the tracking system. Figure \ref{fig:6} shows an additional set of results, where the same tuning parameters have been employed, but the setpoint \(x^*\) varies in time on both sides of the initial position. The results indicate that the proposed controller is effective in different operating conditions and yields a similar transient to Figure \(\ref{fig:4}\), suggesting that tuning does not need to be altered depending on \(x^*\), which is an important advantage in engineering practice. A video of the experiments has been included as a supplementary file. \begin{figure} [tb] \begin{center} \subfloat[\label{fig:4a}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/4a-eps-converted-to.pdf} \end{tabular} } \hspace*{-2.7em} \subfloat[\label{fig:4b}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/4b-eps-converted-to.pdf} \end{tabular} }\\[-0.3ex] \subfloat[\label{fig:4c}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/4c-eps-converted-to.pdf} \end{tabular} } \hspace*{-2.7em} \subfloat[\label{fig:4d}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/4d-eps-converted-to.pdf} \end{tabular} }\\[-0.3ex] \subfloat[\label{fig:4e}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/4e-eps-converted-to.pdf} \end{tabular} } \hspace*{-2.7em} \subfloat[\label{fig:4f}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/4f-eps-converted-to.pdf} \end{tabular} } \caption{Experimental results for system (\ref{eq:7}) with controller (\ref{eq:16}) under various loads: (a) Position \(x\). (b) Force estimate \(\Tilde{F}\). (c) Pressure $P_{1}$. (d) Pressure $P_{2}$. (e) Stepper position $x_{s1}$. (f) Stepper position $x_{s2}$.} \label{fig:4} \vspace{-0.5cm} \end{center} \end{figure} \begin{figure} [t!] \begin{center} \subfloat[\label{fig:5a}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/Reps-eps-converted-to.pdf} \end{tabular} } \hspace*{-2.7em} \subfloat[\label{fig:5b}]{ \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/OpenLoop-eps-converted-to.pdf} \end{tabular} } \caption{Positioning results of (a) five repetitions of energy shaping controller (\ref{eq:16}) with 50 g mass; (b) open-loop control method \cite{Runciman2021} under various loads.} \label{fig:5} \vspace{-0.5cm} \end{center} \end{figure} \begin{figure} [t!] \begin{center} \begin{tabular}{c} \includegraphics[scale = 0.6]{Figures/Multi_xd_wide-eps-converted-to.pdf} \end{tabular} \caption{Experimental results of energy shaping controller (\ref{eq:16}) for multiple setpoints under various payloads.} \label{fig:6} \vspace{-0.5cm} \end{center} \end{figure} \section{Conclusion}\label{sec:6} In this work we have investigated the model based position control of a system consisting of two soft hydraulic bellow actuators arranged in an antagonistic pair. A dynamical model of the system, which includes the pressure dynamics of the fluid, has been defined in port-Hamiltonian form. A nonlinear observer has been designed to compensate the effect of external forces. A nonlinear control algorithm has then been constructed with an energy shaping approach. 
Although the control laws are specific to the antagonistic pair, the proposed approach can be readily extended to systems of multiple actuators arranged in different configurations. Therefore, there is potential to use this energy shaping control method to deliver force estimation and high accuracy positioning capabilities to rapidly manufactured, low-cost soft robotic systems, enabling many exciting applications. The simulation results indicate that the controller achieves the prescribed regulation goal in the presence of different external forces. The experimental results on a prototype setup confirm that the controller can compensate for the effects of model uncertainties and external forces, which are of a magnitude similar to that encountered in some MIS tasks, and that it yields higher accuracy compared to our previous open-loop approach. In addition, the system response remains consistent across a range of operating conditions without needing to vary the tuning parameters, which is an advantage in engineering practice. Conversely, the open-loop controller resulted in a faster response, which depends only on the maximum speed of the stepper, and proved to be immune to sensor noise. However, these advantages come at the cost of having to individually characterize each actuator, which is time consuming and difficult to scale up. As such, future work will investigate a hybrid control approach with the aim of combining the advantages of both methods.
% In addition, we shall investigate regulation and tracking control for more complex actuator arrangements.
\addtolength{\textheight}{-11cm}
% \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.11385", "language": "en", "timestamp": "2023-02-23T02:14:57", "url": "https://arxiv.org/abs/2302.11385", "yymm": "2302" }
\section{Overview of MIMO Systems}\label{Sec:Intro} \IEEEPARstart{I}{n} 5G communication systems and beyond, multiple-input multiple-output (MIMO) technology plays an increasingly important role. Expanding the array aperture and integrating denser antenna elements within limited physical space are two possible approaches for improving the throughput of MIMO systems. In the first approach, massive MIMO (mMIMO) and extra-large scale MIMO (XL-MIMO) emerge as the key technologies for exploiting the spatial domain resources \cite{XL-MIMO}. In the second approach, holographic MIMO (HMIMO) has attracted increasing interest thanks to its ability to provide unprecedented flexibility in manipulating the electromagnetic (EM) field \cite{Holograhic}. Although numerous studies have been conducted, mMIMO and XL-MIMO still face hardware implementation challenges. Existing MIMO architectures can be mainly categorized into fully-digital and analog/digital hybrid ones. Due to the prohibitive cost of equipping each antenna element with a dedicated radio frequency (RF) chain in a fully-digital array (FDA), various hybrid architectures have been proposed to trade off between hardware cost and system performance \cite{HBF_survey}. By introducing a phase-shift network, a fully-connected array (FCA) divides the original high-dimensional digital domain processing into a high-dimensional analog domain operation and a low-dimensional digital domain processing, so that the number of RF chains can be significantly reduced. To further reduce the number of phase shifters, the sub-connected array (SCA) has been proposed. From FDA to FCA to SCA, the hardware design becomes easier to implement at the cost of reduced degrees of freedom (DoFs) for the associated signal processing. Although various sophisticated precoding algorithms have been proposed to compensate for this performance degradation, how to achieve a satisfactory trade-off between hardware cost and achievable performance remains an open problem. On the other hand, as a theoretical concept, HMIMO aspires to provide maximum DoFs for the manipulation of the EM field. In contrast to conventional MIMO systems employing half-wavelength critical antenna spacing, HMIMO systems are assumed to have a continuous aperture surface with infinitesimal antenna spacing, and each point on the surface has theoretically independent adjustability of the corresponding excitation current. As a result, HMIMO is expected to have the capability to flexibly generate any current density distribution on this aperture surface. Therefore, it should be able to customize any desired EM properties (e.g., polarization, radiation pattern, etc.). However, it could be practically challenging to engineer such idealized HMIMO systems since the mutual coupling effect becomes more severe as the antenna spacing decreases. Moreover, it is also difficult to realize EM-level manipulations in existing MIMO systems, since the EM properties of their antennas are fixed once the antenna has been designed and fabricated. To break the performance limits of aperture-restricted MIMO, this article proposes a feasible hardware architecture called \textit{reconfigurable mMIMO} (R-mMIMO). In contrast to other reconfigurable antenna systems, which directly control the amplitude/phase response of an incident signal, R-mMIMO can actively change the EM properties of the radiating antennas, and thereby indirectly influence the transmission channel.
In the following sections, we first provide a simple example to illustrate the theoretical performance gain realized with R-mMIMO by exploiting the EM radiation (EMR) domain. Then, to showcase the practical feasibility of R-mMIMO, a detailed architecture of the proposed R-mMIMO is presented, and differences compared to traditional mMIMO (T-mMIMO) architectures are highlighted. Subsequently, the EMR domain precoding problem for R-mMIMO systems is investigated, and the spectral efficiency (SE) and energy efficiency (EE) of a viable R-mMIMO structure are compared to those of T-mMIMO designs. Finally, some open research directions and challenges for R-mMIMO are discussed. \section{EM Radiation Gain of R-mMIMO Systems}\label{S2} R-mMIMO \cite{MRA_CE} is a promising solution to enhance the MIMO capacity for a given antenna aperture. An R-mMIMO system can be obtained from a T-mMIMO system by replacing the conventional antennas with reconfigurable antennas. The basic idea of a reconfigurable antenna is to alter the physical structure of the antenna with the help of RF switches, so that the surface current density distribution becomes tunable. As an implementable solution based on state-of-the-art hardware fabrication methodologies, the EM properties of each R-mMIMO antenna can be made configurable within a limited set of operation modes. By contrast, although HMIMO has the theoretical capability to manipulate the whole continuous antenna aperture, it is physically unrealizable in practice. \begin{figure}[!h] \vspace*{-3mm} \begin{center} \includegraphics[width = 0.5\textwidth]{Fig1.pdf} \end{center} \vspace*{-5mm} \caption{Comparison of received E-field intensities for different MIMO systems. A fully-digital linear array with an aperture of 4$\lambda$ (operated at $3$ GHz with $\lambda$ denoting the wavelength) is placed along the y-axis with its geometric center at the origin. The target receiver is randomly located in the area $\left\{\left(x,y\right)|x\in \left[5,50\right]\textrm{m}, y\in \left[50,100\right] \textrm{m}\right\}$, which is in the far-field region of the array. The results shown are averaged over 3,000 randomly generated target positions.} \label{holo} \end{figure} To provide an illustrative comparison of different types of MIMO systems, Fig. 1 considers a free-space propagation scenario, where a linear array is placed along the y-axis with its geometric center at the origin, and the received E-field intensity is simulated for the target receiver randomly located in a given far-field area. Optimal singular value decomposition-based precoding is performed at the transmitter. Without loss of generality, we use a Hertz dipole antenna to model each element of the T-mMIMO system for its tractable EMR expression, while the same antenna model with the additional capability to rotate the radiation pattern is assumed for R-mMIMO and HMIMO. As we assume a fixed array aperture, we reduce the antenna spacing when increasing the number of array elements. As can be observed, there is a performance gap between T-mMIMO and HMIMO even for very large numbers of antennas. This is because the radiation pattern shape of a Hertz dipole antenna is not omnidirectional, but a sinusoidal function in elevation direction. Observe that for large numbers of antennas, R-mMIMO can largely fill the performance gap between T-mMIMO and HMIMO by adjusting the main-lobe direction of each element's radiation pattern. 
For comparatively small numbers of antennas, which is of practical relevance, R-mMIMO can reduce the performance gap between T-mMIMO and HMIMO. With the capability of providing directional beam patterns to regions outside the normal direction, R-mMIMO can radiate more energy to the target position compared to T-mMIMO. In fact, the performance of ultra-dense R-mMIMO converges to that of HMIMO as the number of antenna elements goes to infinity. In this article, we are interested in the practical case of a limited number of antenna elements, where R-mMIMO can provide significant performance gains over T-mMIMO at an acceptable hardware cost. The reconfigurability of the EM properties provides R-mMIMO with extra DoFs. In particular, in this article, we consider a reconfigurable pixel antenna (RPA) structure with radiation pattern reconfigurability. In T-mMIMO systems, the radiation pattern of each patch element is fixed, which means that the spatial coverage area of a pattern is always limited. By contrast, an RPA-based system can reconfigure the surface geometry of its parasitic layer for each antenna independently, so that the E-field can be manipulated to generate different radiation patterns. To integrate RPAs into real-world wireless systems, this article proposes to deploy arrays of RPAs, which leads to a new architecture for R-mMIMO. The detailed architecture of this proposed R-mMIMO is presented in the following section. This novel R-mMIMO is a candidate base station (BS) architecture providing enhanced precoding capabilities for future wireless communication systems. \begin{figure*}[!t] \centering \includegraphics[width = 0.9\textwidth]{Fig2.pdf} \vspace*{-1mm} \caption{Schematic diagram of R-mMIMO systems: (a)~multi-user downlink transmission and the corresponding SCA-based R-mMIMO architecture, (b)~structure of a single RPA, (c)~examples of 3D radiation patterns produced by an RPA, and (d)~2D horizontal and vertical radiation pattern cuts of~(c).} \label{Array} \vspace*{-3mm} \end{figure*} \section{System Model of R-mMIMO}\label{S3} \subsection{Architecture of R-mMIMO}\label{different} Fig.~\ref{Array}\,(a) illustrates the schematic diagram and the hardware design of an R-mMIMO system. In addition to the RF network and antenna array of T-mMIMO, R-mMIMO includes an extra parasitic layer on top of its patch layer. This additional parasitic layer allows each antenna to shape its own radiation pattern. Without loss of generality, we take SCA-based R-mMIMO as an example to illustrate the system modeling and evaluate performance, since it is easy to implement at low hardware cost for practical deployment and can be easily extended to the cases of FDA- and FCA-based R-mMIMO. Fig.~\ref{Array}\,(b) depicts a single RPA, which consists of a patch layer and a parasitic layer. The patch layer carries the patch antenna, which couples the energy from the RF chain into space. The parasitic layer is composed of multiple inter-connected metallic pixels. The operating principle of such an RPA system can be described by the theory of reactively controlled directive arrays developed by Harrington \cite{Harrington}. This theory shows that the radiation pattern of the original antenna can be reconfigured through proper reactive loading of the parasitic elements. In the RPA system, the loading elements, i.e., the metallic pixels, are typically interconnected by electronically controllable switches, such as PIN diodes or RF micro-electro-mechanical-systems (RF-MEMS) \cite{Hardware2}.
By setting the on-off status of the switches in the parasitic layer, the radiation pattern of each antenna can be flexibly reconfigured. The design of the parasitic layer is the key to the reconfigurability of the radiation pattern. An RPA offers limited adjustability of the current distribution on the surface of the parasitic layer, and the available patterns of each RPA are also constrained by the designed parasitic layer. In the example shown in Fig.~\ref{Array}\,(b), there are 12 PIN switches for the parasitic layer above each antenna, which can potentially produce $2^{12} = 4096$ EMR patterns. However, not all the patterns are suitable for information transfer. In \cite{GA}, the authors proposed an offline genetic algorithm to choose suitable PIN connections, so that the desired steering angles can be customized. This helps to remove the unsuitable patterns to only keep the desired patterns that can be used. The actual hardware design is beyond the scope of this article, and interested readers may refer to \cite{Hardware2} for more details. Fig.~\ref{Array}\,(c) shows an example of the radiation patterns produced by a single RPA, and its 2D horizontal and vertical cuts are depicted in Fig.~\ref{Array}\,(d). Denoting the azimuth angle with respect to the local coordinate of the BS by $\phi$, we consider four types of patterns with different radiation directions: \begin{itemize} \item \textbf{Type 0}: Normal pattern with peak gain at $\phi=0^{\circ}$; \item \textbf{Type 1}: Tilt pattern with peak gain at $\phi=30^{\circ}$; \item \textbf{Type 2}: Split pattern with peak gains at $\phi=\pm 56^{\circ}$; \item \textbf{Type 3}: Tilt pattern with peak gain at $\phi=-30^{\circ}$. \end{itemize} Intuitively, the EMR pattern of a T-mMIMO antenna has a fixed radiation direction like \textbf{Type 0}, which yields a lower coverage gain compared to R-mMIMO with multiple pattern choices. We note that in contrast to the simple Hertz dipole antenna considered in Fig. 1, practical RPAs can generate diverse radiation pattern shapes, such as the \textbf{Type 2} split pattern. Therefore, RPAs can also provide more diverse patterns than what is possible by rotating conventional antenna patterns. In general, the extra parasitic layer does not incur much hardware cost and power consumption, since metallic pixels and electronically controllable switches are low-cost and consume low energy. Thus, due to the reconfigurability of its patterns, the considered SCA-based R-mMIMO is expected to achieve a comparable performance as FCA/FDA-based T-mMIMO but at a much lower hardware cost. We illustrate the performance of R-mMIMO for a typical urban macro (UMa) cell transmission scenario in the following subsection. \subsection{Downlink Precoding with R-mMIMO}\label{S3.2} We consider a downlink transmission system, where a BS with $N_t/2$ pairs of dual-polarized antennas and $M_t \le N_t$ RF chains simultaneously transmits $U\le M_t$ data streams to $U$ single-antenna user equipments (UEs). Without loss of generality, we assume that each UE employs an ideal omnidirectional antenna, and each BS array element is an RPA, whose radiation pattern can be configured within a pattern set of cardinality $P$. For example, the radiation patterns of Type 0~-~Type 3 in Fig.~\ref{Array}\,(c) constitute a reconfigurable pattern set with a cardinality of four. For serving the $U$ scheduled UEs, the baseband information signal vector is first precoded with an $M_t\times U$-dimensional digital precoder matrix $\bm{F}_{\mathrm{BB}}$. 
Next, the precoded vector is up-converted to the RF before passing through an analog precoder $\bm{F}_{\mathrm{RF}}$ of dimension $N_t\times M_t$. Finally, the RF signal is radiated into the channel which is affected by the tuning of the parasitic layer. To account for the radiation patterns of the antennas, let $\bm{h}_{u}\left(\bm{\mu}\right)$ denote the $N_t$-dimensional channel vector between the BS and the $u$-th UE, where the $N_t$-dimensional vector $\bm{\mu}$ indexes the adopted radiation patterns of the transmit RPAs. In other words, the effects of the EMR pattern are included in the channel vector, see also \cite{MRA_CE,QuaDRiGa}\footnote{A detailed description of the impact of the RPA on the channel cannot be provided here due to the limited space. Interested readers may refer to our online supplementary material for more details, see \url{https://github.com/kekeyingBIT/R-mMIMO/blob/main/supplement.pdf}}. Then, according to Shannon's formula, the overall SE of the system is given by \begin{equation}\label{eq2} R\! = \sum\limits_{u=1}^{U}\! \log_{2}\! \left(\!\! 1\! +\! \frac{|\bm{h}_{u}^{ H}\left(\bm{\mu}\right)\bm{F}_{\mathrm{RF}}\bm{f}_{u}|^2}{\sum\limits_{j\neq u}^{U}\! |\bm{h}_{u}^{H}\left(\bm{\mu}\right)\bm{F}_{\mathrm{RF}}\bm{f}_{j}|^2\! +\! \sigma_n^2}\! \right)\!\! , \! \end{equation} where $\bm{f}_{u}$ is the digital precoder for the $u$-th UE, $\bm{F}_{\mathrm{BB}} = \left[\bm{f}_{1} ~ \bm{f}_{2} \cdots \bm{f}_{U}\right]$, and $\sigma_{n}^2$ is the power of the complex additive white Gaussian noise (AWGN) at the receiver. In SCA-based T-mMIMO, hybrid precoding aims to optimize the analog and digital precoders for maximization of the SE. Typically, the non-zero entries of the analog precoder are constrained by the phase-shift network connections, i.e., they have constant modulus and finite phase resolution. Moreover, the Frobenius norm of the product of the analog precoder and the digital precoder is constrained by the transmit power. Numerous studies have been conducted to solve the resulting optimization problem so that the performance gap between SCA-based T-mMIMO and FDA-based T-mMIMO is minimized \cite{HBF_survey, Dynamic}. However, the SE optimization for R-mMIMO systems is complicated by the additional index vector $\bm{\mu}$, which we also refer to as EMR precoder in this article. The EMR precoder performs EMR precoding by selecting the radiation pattern for each RPA. Thus, a \textit{three-level precoding} involving the digital, analog, and EMR precoders is required. We note that SCA-based T-mMIMO can be considered as a special case of SCA-based R-mMIMO, where all the RPAs at the BS use the same legacy radiation pattern, for example, Type 0. \section{Three-Level Precoding for R-mMIMO}\label{S4} \subsection{Overview of Existing Schemes}\label{S4.1} To optimize SE, the joint design of the digital, analog, and EMR precoders is needed. However, an efficient solution to this joint design problem is not yet available in the literature. 
The additional discrete-valued EMR precoder complicates the problem, rendering the joint design a highly complex mixed continuous-discrete optimization problem, elaborated as follows: 1) the search space of the EMR precoder expands exponentially with the number of transmit antennas, and it is impossible to traverse all $P^{N_t}$ options due to the prohibitive complexity; 2) for each EMR precoder in the search space, the SE is a highly non-convex and non-differentiable function, and it is very difficult to find an approximate objective function to optimize at a reasonable computational cost while ensuring good performance. Rather than tackling the intractable joint design, existing schemes usually focus on the optimization of the EMR precoder only, which we refer to as EMR domain precoding in this article. In \cite{MRA_BF}, the authors considered FDA-based R-mMIMO with multi-user transmission. By adopting zero-forcing fully-digital precoding, the received signal-to-noise ratio (SNR) at the UE can be equivalently used as an optimization metric to maximize the SE, and the authors of \cite{MRA_BF} proposed an iterative mode selection method to design the EMR domain precoding by maximizing this SNR metric. The work in \cite{TS} considered small-scale MIMO system with a single UE, where only EMR domain precoding was studied, and Thompson sampling and upper confidence bound algorithms were applied for its design. Obviously, when extending to hybrid arrays, mMIMO, and multi-user scenarios, these EMR domain precoding methods cannot be directly applied. \begin{figure*}[!t] \vspace*{-4mm} \centering \subfigure[]{ \includegraphics[width=0.333\textwidth, height= 0.2\textheight]{Fig3a.pdf} } \quad \hspace{-10mm} \subfigure[]{ \includegraphics[width=0.333\textwidth,height= 0.2\textheight]{Fig3b.pdf} } \quad \hspace{-10mm} \subfigure[]{ \includegraphics[width=0.333\textwidth, height= 0.2\textheight]{Fig3c.pdf} } \vspace*{-2mm} \caption{(a)~SE gains achieved by R-mMIMO for different geographic regions, (b)~CDF curves of the overall SE under different BS architectures, and (c)~EE of different BS architectures.} \label{SEEE} \vspace*{-1mm} \end{figure*} \subsection{Three-Level SE Optimization}\label{S4.2} To support multi-user and general wideband hybrid precoding in R-mMIMO systems, we further extend the iterative mode selection method of \cite{MRA_BF} to a generalized solution. Specifically, given the full channel state information (CSI) for all UEs, the proposed three-level SE optimization algorithm can be divided into two stages. In the first stage, an EMR domain precoding algorithm is employed to optimize the EMR precoder. Given the optimal EMR precoder obtained in the first stage, we jointly optimize the analog and digital precoders based on the resulting equivalent channel, as traditional hybrid precoding algorithms do. Next, we provide a greedy search algorithm for the first stage. The proposed greedy search algorithm for EMR domain precoding is summarized as follows. The search starts by initializing the pattern index of all transmit antennas to the legacy pattern, i.e., setting all the RPAs at the BS to Type 0. Then, in each iteration, the EMR patterns for each RPA are selected sequentially. To be more specific, in the $i$-th iteration, in order to optimize the $n_t$-th antenna's pattern, we keep all the other antennas' patterns fixed, and apply a standard hybrid precoding algorithm for all $P$ possible EMR patterns for the current antenna and evaluate the corresponding $P$ SE values. 
The pattern yielding the highest SE is selected for the $n_t$-th antenna. After all $N_t$ antennas have been configured one by one, the algorithm enters the next iteration. Typically, this greedy search method converges in $T_{iter} = 3$ to $5$ iterations. Observe that with this EMR domain precoding algorithm, the number of searches for the EMR precoder is reduced to $N_t P T_{iter}$ compared to $P^{N_t}$ for the optimal exhaustive search. Therefore, a substantial amount of computational resources and time can be saved. However, in general, the resulting greedy-search based EMR domain precoding scheme only finds a good feasible solution for the EMR precoder rather than the optimal solution. We note that more sophisticated search strategies, such as evolutionary algorithms or swarm intelligence algorithms, can be applied to the constrained derivative-free optimization problem for the EMR precoder. These strategies can explore and exploit the search space more comprehensively. Nevertheless, our simulation results in Section~\ref{Sec.Sim} show that the proposed greedy algorithm already yields considerable performance gains over T-mMIMO in typical application scenarios. \section{Performance Comparison}\label{Sec.Sim} In this section, we evaluate the performance of the proposed SCA-based R-mMIMO architecture and compare it with that of FDA- and SCA-based T-mMIMO architectures. We adopt the QuaDRiGa software package for channel generation \cite{QuaDRiGa}. As illustrated in Fig.~\ref{Array}\,(a), we consider downlink transmission in the UMa cell scenario. The BS employs an SCA with $N_t/2 = 16$ pairs of dual-polarized antennas and $M_t = 8$ RF chains. For each cell, a total of $15$ UEs are randomly distributed, and each UE is equipped with a single omnidirectional antenna. Penetration loss is taken into account for indoor UEs. For each transmission time interval (TTI), round-robin scheduling is employed to select $U \le 8$ UEs for performing downlink precoding. Furthermore, the overall transmit power for each cell is $42$\,dBm and the noise power spectral density is $-174$\,dBm/Hz. Throughout the simulation, inter-cell interference is neglected in order to keep our numerical comparison simple. For the baseline precoding algorithm, we adopt the eigen-zero-forcing method for FDA and the extended algorithm of \cite{Dynamic} for SCA, respectively. Moreover, a reconfigurable pattern set containing Type 0 to Type 3 from Fig.~\ref{Array}\,(d) is considered for SCA-based R-mMIMO ($P=4$), and the fixed pattern Type 0 is utilized for FDA/SCA-based T-mMIMO\footnote{Practical results \cite{MRA_BF} showed that RPAs achieve an additional antenna gain compared to conventional antennas. However, considering the introduction of an extra parasitic layer, we assume this additional antenna gain is fully compensated by the insertion loss. Therefore, we assume all radiation patterns have the same maximum radiation gain, as shown in Fig.~\ref{Array}~(d).}. The absolute SE gains achieved by SCA-based R-mMIMO over SCA-based T-mMIMO for UEs in different geographic regions are shown in Fig.~\ref{SEEE}\,(a). Note that $U = 1$ UE is served in each TTI for each cell. Here, we divide the geographic region into different parts according to the horizontal distance between the UE and the BS, and the average SE results for the near ($35$--$100$ m), middle ($100$--$200$ m), and far ($200$--$289$ m) regions as well as the entire region ($35$--$289$ m) are presented as bar charts.
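Before turning to the results, the following Python sketch summarises the greedy EMR-domain pattern search of Section~\ref{S4.2}; the channel generator and hybrid precoding routine are placeholders (assumptions) standing in for the QuaDRiGa-based channels and the baseline precoders used in our simulations, while the SE evaluation follows (\ref{eq2}).
\begin{verbatim}
# Sketch of the greedy EMR-domain pattern search; get_channel() and
# hybrid_precoding() are placeholders (assumptions), not our actual simulator.
import numpy as np

def compute_se(H, F_RF, F_BB, noise_power):
    """Sum spectral efficiency of eq. (1); rows of H are h_u^H (U x N_t)."""
    F = F_RF @ F_BB                      # effective N_t x U precoder
    G = np.abs(H @ F) ** 2               # |h_u^H F_RF f_j|^2, U x U
    sig = np.diag(G)
    interf = G.sum(axis=1) - sig
    return float(np.sum(np.log2(1.0 + sig / (interf + noise_power))))

def greedy_emr_search(get_channel, hybrid_precoding, N_t, P, noise_power, T_iter=3):
    mu = np.zeros(N_t, dtype=int)        # initialise all RPAs to the legacy pattern (Type 0)
    for _ in range(T_iter):
        for n in range(N_t):             # optimise one antenna's pattern at a time
            best_se, best_p = -np.inf, mu[n]
            for p in range(P):           # try all P patterns for the current antenna
                mu[n] = p
                H = get_channel(mu)      # equivalent channel under this pattern selection
                F_RF, F_BB = hybrid_precoding(H)
                se = compute_se(H, F_RF, F_BB, noise_power)
                if se > best_se:
                    best_se, best_p = se, p
            mu[n] = best_p               # keep the pattern with the highest SE
    return mu
\end{verbatim}
In line with the complexity discussion above, this sketch requires $N_t P T_{iter}$ SE evaluations instead of the $P^{N_t}$ needed by an exhaustive search.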
As can be seen in Fig.~\ref{SEEE}\,(a), the proposed R-mMIMO architecture achieves a higher SE than T-mMIMO for UEs at any distance. Intuitively, this can be explained by the fact that the EMR pattern used in this example is reconfigurable in the horizontal plane, and thus, a more flexible horizontal beam can be customized for a given channel environment. Fig.~\ref{SEEE}\,(b) compares the cumulative distribution functions (CDFs) of the SE for the different array architectures considered, i.e., SCA-based T-mMIMO, FDA-based T-mMIMO, and SCA-based R-mMIMO. When the number of scheduled UEs is one, two, four, and six, the average SE gain of SCA-based R-mMIMO over SCA-based T-mMIMO is 15.1\%, 20.0\%, 24.9\%, and 33.3\%, respectively. Intriguingly, in this application scenario, the proposed scheme even achieves a better SE than FDA-based T-mMIMO. For any system architecture, the numbers of RF chains and antennas are fixed once the system design is determined. Therefore, the average SE performance for each UE decreases as the number of scheduled UEs increases. The proposed R-mMIMO can provide extra precoding DoFs to mitigate this performance degradation to some extent. Consequently, the relative SE gain provided by R-mMIMO increases with the number of scheduled UEs, which indicates that the reconfigurability in the EMR domain can provide considerable SE gains. Next, we compare the EE performance of the three considered array architectures. The EE is defined as the SE divided by the total consumed power. The total consumed power includes the precoder power consumption and the transmit power consumption. For FDA-based T-mMIMO, the precoder power consumption is mainly caused by the $N_t$ RF chains, with each RF chain consisting of two digital-to-analog converters (DACs), two low-pass filters (LPFs), two mixers (MXs), and a local oscillator (LO), which is shared by all the chains. The power consumption values of these components are $P_{\mathrm{DAC}} = 200\,\mathrm{mW}$, $P_\mathrm{{LPF}} = 14\,\mathrm{mW}$, $P_\mathrm{{MX}} = 19\,\mathrm{mW}$, and $P_\mathrm{{LO}} = 5\,\mathrm{mW}$, respectively \cite{EE_ref}. For SCA-based T-mMIMO, the precoder power consumption is mainly due to its $M_t$ RF chains and $N_t$ phase shifters. The power consumption of a single phase shifter (PS) is given by $P_\mathrm{{PS}} = 30\,\mathrm{mW}$ \cite{EE_ref}. In our proposed SCA-based R-mMIMO architecture, extra power is consumed by the parasitic layer. In this example, each RPA contains $12$ electrically-controlled switches, with each switch (SW) consuming $P_\mathrm{SW} = 5\,\mathrm{mW}$ \cite{EE_ref}. Therefore, an additional power consumption of $60\,\mathrm{mW}$ is assumed for each RPA. The transmit power is set to $P_t = 14.4\,\mathrm{W}$, which is a typical value for the UMa cell scenario. It can be seen from Fig.~\ref{SEEE}\,(c) that the proposed SCA-based R-mMIMO architecture offers significant EE benefits over FDA-based T-mMIMO and SCA-based T-mMIMO. Also, as expected, FDA-based T-mMIMO has the lowest EE. \section{Challenges and Future Directions}\label{benefits} The proposed R-mMIMO architecture has the potential to revolutionize MIMO systems in future 6G networks. However, there exist some key challenges that must be addressed before large-scale deployment becomes feasible. \subsection{Theoretical Capacity}\label{S6.1} R-mMIMO outperforms T-mMIMO owing to the additional DoFs in the EMR domain.
Intuitively, these DoFs enable the transmitter to actively adjust the energy distribution between multiple paths to obtain an improved channel capacity. However, there is no available research that analyzes the fundamental capacity limits of R-mMIMO systems, and how to design optimal EMR patterns efficiently to approach this capacity is still unknown. As a trade-off between T-mMIMO and HMIMO, the design of R-mMIMO systems involves both signal processing and EMR pattern design, which calls for a fundamental analysis using tools from EM information theory \cite{EIT} for transmission modeling, DoF analysis, and performance evaluation. Pattern space analysis is expected to be one potential approach. In \cite{MRA_CE}, the authors adopted a Gram-Schmidt technique to decompose the radiation pattern into a set of orthogonal basis patterns, which allows the antenna radiation pattern to be decoupled from the rest of the channel environment. Based on this, the transmission model can be reduced to the multiplication of the equivalent channel and the signal in the pattern space. This facilitates tractable transmission modeling and fundamental theoretical analysis of generic R-mMIMO systems. \subsection{Multifunctional Reconfigurable Antennas}\label{S6.2} The RPA discussed in this article only considers the reconfigurability of the radiation pattern shape in the E-field. The authors in \cite{Hardware2} presented experimental results and design techniques for multifunctional reconfigurable antennas (MRAs), which offer a potential approach to design antenna hardware with the capability of providing other desirable EM properties. An MRA can integrate the reconfigurability of multiple domains (e.g., the frequency, pattern, and polarization domains) into a compact structure at a low cost. The EM behavior of MRAs can be changed by altering their physical or geometrical properties using some common tuning mechanisms, such as electronic devices like diodes and varactors, artificial metamaterials like microfluidics, liquid crystals, graphene, and so on. Through flexibly configuring their EM properties, MRAs can be used at both the BS and the UE to provide multi-domain diversity, co-channel interference mitigation, and reliable links with enhanced data rates in wireless communication systems. \subsection{CSI Acquisition}\label{S6.3} In R-mMIMO systems, CSI acquisition, which involves a completely new dimension owing to the reconfigurability of the EMR pattern, is a challenging task that must be resolved. It is infeasible to estimate the full CSI at once because the training overhead increases linearly with the number of available EMR patterns. In \cite{MRA_CE}, the authors propose a combined channel estimation and prediction scheme, where only a subset of the EMR patterns are trained for estimation, and the untrained patterns are predicted exploiting the correlations between the different patterns. To further extend this idea, one can utilize more sophisticated approaches, such as compressed sensing or deep learning, to acquire the estimates of the channel parameters (such as angles, gains, and delays) from the legacy pattern. Then, the CSI for other reconfigurable patterns can be constructed using these estimated parameters. Another challenging issue is that a long training process may cause outdated CSI, which can affect the accuracy of data detection and the efficiency of precoding. A potential solution is to take statistical CSI into account for more accurate channel estimation.
Furthermore, in a time-varying environment, multi-user scheduling, CSI feedback, and channel tracking also need to be optimized accordingly, which deserves further study. \subsection{Low Complexity Precoding}\label{S6.4} This article considers a two-stage precoding algorithm for multi-user wideband transmission, which introduces additional complexity in the EMR domain precoding stage. Existing heuristic algorithms, such as the greedy search adopted in this article, may not meet practical complexity constraints since the SE objective function is computationally complicated and hard to approximate. Therefore, it is critical to investigate low-complexity EMR domain precoding algorithms without sacrificing too much of the SE performance. In addition to intelligent search algorithms, like evolutionary algorithms and swarm intelligence algorithms, one possible method is to model the EMR domain precoding as a decision process and use deep reinforcement learning methods. By interacting with the channel environment and learning from the data, the BS can learn a good radiation pattern selection strategy through exploration and exploitation. Moreover, neural networks can also be exploited to reduce the computational complexity of the objective function, which can help to simplify the algorithm design. Another possible solution is to model the explicit relationship between the radiation pattern and the channel. After optimizing the radiation pattern in a continuous space through derivative-based methods, the EMR precoder can then be acquired by quantizing the result to the nearest discrete grid value. Furthermore, interference from neighboring cells should not be overlooked when deploying R-mMIMO in wireless networks. Since the analog radiation patterns may cause interference to nearby cells when enhancing the coverage of the local cell, cooperative optimization of multi-cell precoding is required. \subsection{Integration with Other Technologies}\label{S6.5} The extra DoFs of R-mMIMO allow us to transmit additional information in the EMR domain or to customize more favorable channel conditions. Specifically, reconfigurable antennas have been exploited for mode shift keying transmission \cite{MSK}, which employed radiation patterns with low correlation, thereby achieving better detection performance than traditional spatial modulation. Some recent works proposed to use polarization domain DoFs for carrying information \cite{Polar}, where a polarization modulation scheme was developed to boost the system throughput. It is reasonable to expect that the joint exploitation of all DoFs offered by radiation patterns and polarization will be possible in the future given the rapid development of reconfigurable antennas. Moreover, R-mMIMO can also be applied in sensing and communication systems. For example, by customizing channels with lower cross correlations among potential targets, more accurate sensing can be achieved. Meanwhile, R-mMIMO can reshape the relative energy distribution among multipath components, which might be helpful in unfavorable propagation environments. Finally, R-mMIMO could be exploited to introduce randomness into the channel, which might be beneficial for covert communication based on channel randomization or radiation pattern hopping. \section{Conclusions}\label{S7} An innovative R-mMIMO system based on RPA was proposed in this article.
A typical example of UMa downlink transmission was studied to demonstrate that SCA-based R-mMIMO with three-level precoding is capable of achieving SEs and EEs that are noticeably superior to those of existing SCA- and FDA-based T-mMIMO. In particular, our simulation results have confirmed that R-mMIMO can provide higher SE and EE gains with the increased number of scheduled UEs. Moreover, we have discussed critical challenges pertaining to R-mMIMO and presented new research directions towards making R-mMIMO a practical technology for 6G communication systems.
{ "arxiv_id": "2302.11325", "language": "en", "timestamp": "2023-02-23T02:13:45", "url": "https://arxiv.org/abs/2302.11325", "yymm": "2302" }
\section{Introduction and Related Work} Dysphagia, or swallowing difficulty, is a common complication found in 30--50\% of people following stroke~\cite{Smithard2007LongtermOA}. The prevalence of dysphagia in older people with dementia can be as high as 84\%. People with dysphagia face risks such as malnutrition, aspiration, and the development of pneumonia. Severe dysphagia is strongly associated with mortality~\cite{Smithard2016DysphagiaAG, Smithard1997TheNH}. Hence, early detection and treatment of dysphagia are crucial. A Videofluoroscopic Swallow Study (VFSS) is accepted as the gold standard assessment for dysphagia. During VFSS, patients are asked to swallow texture-modified foods and liquids that contain barium. It provides visual data on the trajectory of the bolus, muscle, and hyoid bone movement and the connection between anatomy and aspiration~\cite{Ramsey2003EarlyAO}. However, the clinical assessment requires an extensively experienced speech-language therapist to analyse the visual data on a per-frame basis. The visual data sometimes have low spatial and temporal quality due to device modalities and radiation noise. Moreover, there can be ambiguity and inconsistency in the judgments of different clinical experts. Early attempts at automated processing of the data have used traditional methods, such as Hough transforms~\cite{Zheng2004AutomatedSO}, Sobel edge detection~\cite{Kellen2009ComputerAssistedAO} and Haar classifiers~\cite{Noorwali2013SemiautomaticTO}, to track lumbar vertebrae, hyoid bone and epiglottis, which are important anatomical structures in the pharyngeal swallowing reflex. With the more recent impact of deep learning in medical image analysis, others have shown advances in pharyngeal phase detection \cite{Lee2018DetectionOT, Lee2019AutomaticDO, Lee2021AutomaticPP, Lee2020MachineLA} and hyoid bone detection \cite{Kim2021HyoidBT, Lee2020OnlineLF, Feng2021AutomaticHB, Iyer2019DeepLA}. Bolus trajectory is one of the main indicators in a VFS study, but there are few studies on the automation of bolus detection or segmentation~\cite{Caliskan2020AutomatedBD, Zhang2021DeepLearningBased, Zeng2022VideoTransUNetTB}. CNN-based works, such as~\cite{Ronneberger2015UNetCN}, demonstrate significant superiority in feature extraction, though they are limited in modelling long-range relations due to their inherently local operations. Vision transformers~\cite{Dosovitskiy2020AnII, Liu2021SwinTH}, on the other hand, have shown clear strengths in modelling global contextual correlations using attention mechanisms. Recent works that leverage vision transformers~\cite{Chen2021TransUNetTM, Cao2021SwinUnetUP} have shown remarkable performance in medical image segmentation. While others, such as Cao et al.~\cite{Cao2019GCNetNN} and Yang et al.~\cite{Yang2019GreatAD}, have dealt with video dynamics for detection or segmentation in videos, only a few have explicitly addressed the use of temporal information in assisting the detection or segmentation of sequential medical data~\cite{9506463}. The dynamics of the bolus suggest that an implicit temporal relationship between frames at the feature level can be exploited when learning detection or segmentation models. In this paper, we present a deep-learning pipeline that takes multi-rater annotations into account and fuses them into a more consistent and reliable ground truth.
Subsequently, an architecture is proposed (see Fig.~\ref{fig:Video_swin}) comprising a ResNet-50 feature extractor, a Temporal Context Module (TCM) feature blender, a non-local attention encoder (Swin Transformer) and a cascaded CNN decoder for detailed segmentation map prediction. Our main contributions are summarised as follows: i) we provide the VFSS2022 dataset Part 2, in different modalities from Part 1, annotated with reliable labels for the laryngeal bolus and pharynx; ii) we propose a new architecture enhancing the performance of previous work~\cite{Zeng2022VideoTransUNetTB} by extending the vision transformer encoder to a stronger and more generalised Swin Transformer; and iii) we perform a detailed ablation study to reveal the importance of temporal feature blending. We also explore the cross-dataset transferability and generalizability of our deep neural networks on data across different modalities. \section{Methodology} \subsection{Architecture Overview} Inspired by UNet \cite{Ronneberger2015UNetCN}, we follow the encoder-decoder structure to build our video instance segmentation network, as shown in Fig.~\ref{fig:Video_swin}. It takes as input a video snippet consisting of a sequence of frames $\mathbf{x} \in \mathbb{R}^{t \times H \times W}$, where $H \times W$ represents the spatial resolution of the input and $t$ is the temporal range of the input sequence. The input frames are successively fed into a ResNet-50 backbone for feature extraction (see Fig.~\ref{fig:Video_swin}(a)). Then, the extracted features are simultaneously passed into a novel Temporal Context Module (TCM) (see Fig.~\ref{fig:Video_swin}(b))~\cite{Yang2019GreatAD}, which blends the past and future frame features into the target central frame feature. Thereafter, the output feature, which integrates high-level spatial and temporal representations, is tokenised into image patches by a Swin Transformer encoder (see Fig.~\ref{fig:Video_swin}(c)) for global context construction. Finally, an up-sampling decoder (see Fig.~\ref{fig:Video_swin}(d)) reconstructs the segmentation map to the original image size of $H \times W$ with cascaded CNNs and binary segmentation heads (see Fig.~\ref{fig:Video_swin}(e)). \subsection{Temporal Context Module} The proposed architecture contains a key component, the Temporal Context Module (TCM), following its success in video detection~\cite{Yang2019GreatAD}. The design of the TCM follows the principled blending framework of~\cite{Cao2019GCNetNN}, in which a trainable self-attention module is applied to a range of frame features from the previous CNN block. The input features $x_t \in\{x_1, \ldots, x_i\}$ are separately linearly embedded into a feature space by function $e(\cdot)$ and weights $w_t$ in a concurrent manner. After that, a global $\mathbf{Softmax}$ operation is applied so that the temporal correlation across all the frames in the feature space can be aggregated. The aggregated features are then dispersed to several stems for further linear embeddings. The normalisation of each stem is necessary to prevent vanishing/exploding gradients and can be done easily by $\hat{x}_{t, i}=\frac{1}{H W} \mathcal{C}\left(x_{t, i} ; w_t\right) \sum_{j=1}^{H W} \mathcal{C}\left(x_{t, j} ; w_t\right)$. Identity mapping operations by $\mathbf{multiplication} \otimes$ and $\mathbf{addition} \oplus$ are applied in each stem. In the end, the stabilised features are added back to the central frame feature as a single, high-level description of the short-term snippet.
In summary, the TCM operation can be formulated as: \begin{equation} z_{t, i}^{T C M}=x_{t, i}+\sum_{n \in T} w_{n}^{**}\left(x_{n, i} \oplus w_{n}^{*} \sum_{j=1}^{HW} \hat{x}_{n, j} \otimes x_{n, j}\right)~, \end{equation} where $x_n$ are the linearly embedded features to be combined, and $w_n^{*}$ and $w_n^{**}$ are trainable parameters for the identity additions and blending operations. \begin{figure*} \begin{center} \begin{tabular}{c} \vspace{-10pt} \includegraphics[height=3.7cm]{figures/qe_output.pdf} \end{tabular} \end{center} \caption[example] {\label{fig:qualitative} \textbf{Qualitative Results.} Model segmentation results on 3 consecutive frames selected from the VFSS2022 Part 2 test set. All results are in pairs of bolus and pharynx predictions side by side. The red and blue outlines indicate the output segmentation and ground truth, respectively. (Best viewed zoomed.)} \vspace{-10pt} \end{figure*} \subsection{Swin Transformer} Following~\cite{Dosovitskiy2020AnII}, we tokenise the temporally blended features into feature patches $x_p$ and map them into a latent $D$-dimensional embedding space via learnable linear projection. \cite{Liu2021SwinTH, Cao2021SwinUnetUP} suggest that the position embedding $\mathbf{E_{pos}}$ is unnecessary in the Swin Transformer; hence, we omit it in our work for simplicity. The projected feature can be expressed as $z_{0}=\left[x_{p}^{1} \mathbf{E_{pat}} ; x_{p}^{2} \mathbf{E_{pat}} ; \cdots ; x_{p}^{N} \mathbf{E_{pat}}\right]$, where $\mathbf{E_{pat}}$ is the patch embedding linear projection. The conventional vision transformer computes global attention across the vectorized patches. As a result, the computational complexity increases quadratically with the input resolution. To alleviate the computation overhead in Multihead Self-Attention (MSA), a Window-based Multihead Self-Attention ($\mathbf{W\mbox{-}MSA}$) method is proposed in~\cite{Liu2021SwinTH}. The window moves along the feature map or image without overlapping and conducts self-attention within the local window, which makes the computation more efficient for computer vision tasks. The $\mathbf{W\mbox{-}MSA}$ also includes a relative position bias and can be expressed as: \begin{equation} \operatorname{Attention}(Q, K, V)=\operatorname{SoftMax}\left(Q K^T / \sqrt{d}+B\right)V~, \end{equation} where $Q, K, V \in \mathbf{R}^{M^2 \times d}$ denote the query, key, and value matrices, respectively; $d$ is the query/key dimension, and $M^2$ is the number of patches in a window. Values in $B$ are taken from the bias matrix $\hat{B}$. To model the relationship between windows, a Shifted-Window MSA ($\mathbf{SW\mbox{-}MSA}$) is proposed in~\cite{Liu2021SwinTH}: the regular and shifted window partitions alternate between two consecutive Swin Transformer blocks, which contain a $\mathbf{W\mbox{-}MSA}$ and a $\mathbf{SW\mbox{-}MSA}$, respectively, each accompanied by a 2-layer $\mathbf{MLP}$ with a $\mathbf{GELU}$ activation function. $\mathbf{LayerNorm}$ (LN) and skip connections are added before the $\mathbf{MLP}$, as illustrated in Fig.~\ref{fig:Video_swin}. \begin{table*}[ht] \caption{\textbf{Quantitative Results.} Segmentation accuracy on 5 metrics of VFSS2022 Part1/Part2 is shown, as well as the number of parameters and FLOPs of each model.
Part1/Part2 datasets are trained separately.} \vspace{-20pt} \label{tab:comparison method} \begin{center} \begin{footnotesize} \begin{tabular}{l|l|l|l|l|l|l|l} \toprule \rule[-1ex]{0pt}{3.5ex} \textbf{Model} & \textbf{DSC} & \textbf{HD95} & \textbf{ASD} & \textbf{Sensitivity} & \textbf{Specificity} & \textbf{FLOPs} & \textbf{\#Params}\\ \midrule \midrule \rule[-1ex]{0pt}{3.5ex} (1) UNet\cite{Ronneberger2015UNetCN} & $0.8422/0.7894$ & $14.7530/20.7516$ & $2.1675/4.6458$ & $0.8289/0.7414$ & $0.9988/0.9793$ & 50.1G & 34.5M\\ \rule[-1ex]{0pt}{3.5ex} (2) NestedUNet\cite{Stoyanov2018DeepLI} & $0.8335/0.7537$ & $13.7601/6.4952$ & $2.2275/5.1220$ & $0.8305/0.7188$ & $0.9987/0.9682$ & 105.7G &36.6M\\ \rule[-1ex]{0pt}{3.5ex} (3) ResUNet\cite{Zhang2017RoadEB} & $0.8465/0.7846$ & $11.982/6.4187$ & $2.0487/2.4218$ & $0.8183/0.7218$ & $\mathbf{0.9991}/0.9994$ & 43.1G & 31.5M \\ \rule[-1ex]{0pt}{3.5ex} (4) AttUNet\cite{Oktay2018AttentionUL} & $0.8501/0.7917$ & $12.9356/16.9552$ & $2.1832/4.2174$ & $0.8328/0.7721$ & $0.9988/0.9985$ & 51.0G & 34.8M\\ \rule[-1ex]{0pt}{3.5ex} (5) TransUNet\cite{Chen2021TransUNetTM} & $0.8586/0.8046$ & $7.4510/4.6291$ & $1.1050/1.9322$ & $0.8486/0.7579$ & $0.9989/0.9929$ & 29.3G &105.3M\\ \rule[-1ex]{0pt}{3.5ex} (6) Video-TransUNet\cite{Zeng2022VideoTransUNetTB} & $0.8796/0.8041$ & $6.9155/4.7775$ & $\mathbf{1.0379}/1.5270$ & $0.8851/0.7423$ & $0.9986/\mathbf{0.9996}$ & 40.4G & 110.5M \\ \rule[-1ex]{0pt}{3.5ex} (7) SwinUNet\cite{Cao2021SwinUnetUP} & $ 0.8477/0.8001$ & $10.2897/5.9846$ & $2.0817/2.1342$ & $0.8459/0.7336$ & $0.9985/0.9935$ & 6.1G &27.1M \\ \rule[-1ex]{0pt}{3.5ex} (8) Ours & $\mathbf{0.8986}/\mathbf{0.8186}$ & $\mathbf{6.2365}/\mathbf{4.5268}$ & $1.3081/\mathbf{1.2052}$ & $\mathbf{0.9011}/\mathbf{0.7756}$ & $0.9986/0.9995$ & 25.8G& 48.9M\\ \midrule \end{tabular} \end{footnotesize} \end{center} \vspace{-20pt} \end{table*} \section{EXPERIMENTS AND RESULTS} \subsection{Datasets and Implementation details} \textbf{Datasets.} The VFSS2022 datasets were collected in two major hospitals, and the use of the anonymised data was ethically reviewed and approved by the hospitals and our internal institutional Ethics Board. During the VFS studies, the patients carried out modified barium swallow tests under the practitioner's supervision. VFSS2022 Part 1 comprises 3.5 minutes of swallowing videos, which yield 440 sampled frames with a spatial resolution of $512\times 512$ pixels. Each frame is annotated by 3 experts and reviewed by 2 speech and language therapists, and comprises labels for the bolus and pharynx. The final ground truth is fused from the 3 annotations using the common label fusion strategy STAPLE~\cite{Warfield2004SimultaneousTA}. VFSS2022 Part 2 consists of 154 frames with corresponding labels annotated by one trained expert; it exhibits more modality noise and poorer temporal quality, and is used for the model generalisation test. \textbf{Implementation details.} The bolus and pharynx masks are concatenated as 2-channel tensors so that the end-to-end model co-learns from both. The channels for frames with no visible bolus are replaced with full-size zeroed tensors. To study the effect of the input snippet length in our system, the input number of frames ranges over $t = 3, 5, 7, 9, 11$, and $13$ for both training and testing. All experiments use online data augmentation such as random limited rotation and flipping.
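A minimal sketch of such snippet-level augmentation, applied consistently to the frames and their masks, is shown below; the rotation range and flip probability are illustrative assumptions rather than values reported here.
\begin{verbatim}
import random
import torchvision.transforms.functional as TF

def augment_snippet(frames, masks, max_angle=15.0, p_flip=0.5):
    """Apply the same random limited rotation and horizontal flip to every
    frame of a snippet and to its segmentation masks (illustrative values)."""
    angle = random.uniform(-max_angle, max_angle)
    flip = random.random() < p_flip
    out_frames, out_masks = [], []
    for f, m in zip(frames, masks):          # tensors of shape (C, H, W)
        f, m = TF.rotate(f, angle), TF.rotate(m, angle)
        if flip:
            f, m = TF.hflip(f), TF.hflip(m)
        out_frames.append(f)
        out_masks.append(m)
    return out_frames, out_masks
\end{verbatim}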
We initialise the weights of the ResNet-50 backbone and the Swin Transformer from the pre-trained models of~\cite{Chen2021TransUNetTM, Cao2021SwinUnetUP}. During training, our system uses a batch size of 2 and an Adam optimizer with an initial learning rate of 1e-3. For transfer learning, the learning rate is lowered to 1e-4 at the beginning. A learning rate scheduler drops the learning rate to 80\% of its value after 20 epochs of validation loss saturation. The architecture is implemented in Python 3.8.5 and PyTorch 1.9 and trained on an NVIDIA Tesla P100 16GB GPU. The final training objective is the combination of a binary cross-entropy loss and a Dice loss (a minimal sketch of this objective follows the ablation study). \subsection{Comparison with the state of the art} We compare our proposed architecture with major medical image segmentation models, including UNet~\cite{Ronneberger2015UNetCN}, NestedUNet~\cite{Stoyanov2018DeepLI}, ResUNet~\cite{Zhang2017RoadEB}, AttUNet~\cite{Oktay2018AttentionUL}, TransUNet~\cite{Chen2021TransUNetTM}, Video-TransUNet~\cite{Zeng2022VideoTransUNetTB} and SwinUNet~\cite{Cao2021SwinUnetUP}, over 5 common evaluation metrics: the Dice Coefficient~(DSC), the 95th percentile of the Hausdorff Distance~(HD95), the Average Surface Distance~(ASD), Sensitivity and Specificity; see Tab.~\ref{tab:comparison method}. Additionally, we include the total number of parameters of each model and the total floating-point operations (FLOPs) to compare model size and computational cost. As shown in Tab.~\ref{tab:comparison method}, our method improves segmentation accuracy to $89.86\%$/$81.86\%$ (DSC) and $6.2365$/$4.5268$ pixels (HD95) on VFSS2022 Part1/Part2; these results surpass the previous SOTA~\cite{Zeng2022VideoTransUNetTB} and the other methods by a significant margin. The overall quality is greatly improved and fewer output artefacts are produced, as demonstrated in the qualitative results; see Fig.~\ref{fig:qualitative}. More importantly, the proposed method achieves a remarkable speed-accuracy trade-off. Although the model size is roughly doubled compared with SwinUNet~\cite{Cao2021SwinUnetUP}, the segmentation accuracy improves notably by 5.09\%. Moreover, our model reduces the number of parameters to less than half of the previous SOTA without sacrificing computational efficiency, thanks to the hierarchical shifted-window design. Fig.~\ref{fig:grad_cam} shows the Grad-CAM output one layer before the TCM in Video-TransUNet and Video-SwinUNet. Compared to TransUNet and SwinUNet, which do not include a TCM, the attention maps of our method concentrate strongly on task-relevant features. This supports the efficacy of the TCM and the importance of constructing temporal relations. \subsection{Ablation study} \begin{figure} \begin{center} \begin{tabular}{c} \vspace{-12pt} \includegraphics[height=4.3cm]{figures/gradcam.pdf} \end{tabular} \end{center} \caption[example] {\label{fig:grad_cam} \textbf{Grad-CAM Visualisation.} Comparing the two closest competing architectures, Grad-CAM maps show where the model pays attention.
Note the cleaner focus of our proposed approach. (Best viewed zoomed in.)} \vspace{-10pt} \end{figure} \begin{table}[b] \vspace{-15pt} \caption{Ablation study on the impact of different encoder--decoder combinations on performance.} \vspace{-15pt} \label{tab:ablation_parts} \begin{center} \begin{tabular}{l l|l|l|l} \toprule \rule[-1ex]{0pt}{3.5ex} \textbf{Encoder} & \textbf{Decoder} & \textbf{DSC} & \textbf{HD95} & \textbf{\#Params} \\ \midrule \midrule \rule[-1ex]{0pt}{3.5ex} (1)Swin & Swin & $0.8477$ & $10.2897$ & 27.1M\\ \rule[-1ex]{0pt}{3.5ex} (2)CNN+Swin & Swin & $0.8483$ & $9.5757$ & 39.1M\\ \rule[-1ex]{0pt}{3.5ex} (3)CNN+Swin & Swin+CUP & $0.8562$ & $8.0544$ & 43.8M\\ \rule[-1ex]{0pt}{3.5ex} (4)CNN+TCM+Swin & Swin+CUP& $0.8634$ & $6.8941$ & 49.1M \\ \rule[-1ex]{0pt}{3.5ex} (5)CNN+Swin & CUP & $0.8592$ & $8.2744$ & 43.2M\\ \rule[-1ex]{0pt}{3.5ex} (6)CNN+TCM+Swin & CUP & $0.8899$ & $\mathbf{5.1234}$ & 48.4M\\ \rule[-1ex]{0pt}{3.5ex} (7)S/A+TSC & S/A & $\mathbf{0.8986}$ & $6.2365$ & 48.9M\\ \midrule \end{tabular} \end{center} \vspace{-20pt} \end{table} We conducted ablation experiments to reveal the efficacy of the proposed temporal blending framework via the novel TCM component. We vary 4 main components in our experiments: the CNN extractor (CNN), the Swin Transformer block (Swin), the Temporal Context Module (TCM) and the CNN up-sampler (CUP). Comparing Tab.~\ref{tab:ablation_parts}(4) to (3) and (6) to (5), we can see that the TCM increases performance by a clear margin without incurring substantial extra computational cost. The gains from the CNN feature extractor and CUP indicate the effectiveness of convolutional operations due to their intrinsic locality. The use of skip connections is well studied in~\cite{Ronneberger2015UNetCN, Chen2021TransUNetTM}; we attach an additional Temporal feature Skip Connection (TSC) to the decoder path. Comparing Tab.~\ref{tab:ablation_parts}(7) to (6) suggests that the TSC is beneficial in constructing the segmentation map, which further supports the significance of temporal features in the network. A grid search over snippet sizes $t = 3, 5, 7, 9, 11$, and $13$ revealed the optimal, application-specific size $t = 5$ for both training and testing.
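As noted in the implementation details, the training objective combines a binary cross-entropy term and a Dice term over the two-channel (bolus, pharynx) masks. The following is a minimal sketch of such a combined objective; the equal weighting of the two terms and the smoothing constant are assumptions for illustration, not values reported here.
\begin{verbatim}
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, targets, eps=1.0):
    """Combined BCE + Dice objective over (N, 2, H, W) predictions and masks.
    Equal weighting and the smoothing constant eps are illustrative choices."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    dims = (2, 3)                                    # sum over spatial dimensions
    inter = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims)
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return bce + dice
\end{verbatim}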
\subsection{Transfer learning} \begin{table}[b] \vspace{-15pt} \caption{Transferability test on each part of the model.} \vspace{-15pt} \label{tab:transfer_learning} \begin{center} \begin{tabular}{l|l|l|l|l|l} \toprule \rule[-1ex]{0pt}{3.5ex}\textbf{Pretrained} &\textbf{Training}&\textbf{Frozen}&\textbf{Fine-tuning}&\textbf{DSC} &\textbf{HD95}\\ \textbf{dataset} & \textbf{dataset} & \textbf{weights} & \textbf{weights} & \\ \midrule \midrule \rule[-1ex]{0pt}{3.5ex} (1)Part1 & Part2 & N/A & All & 0.7618 & 15.1496 \\ \rule[-1ex]{0pt}{3.5ex} (2)Part1 & Part2 & a & b+c+d+e & 0.7979 & 4.9123\\ \rule[-1ex]{0pt}{3.5ex} (3)Part1 & Part2 & a* & b+c+d+e & 0.8437 & 4.6512\\ \rule[-1ex]{0pt}{3.5ex} (4)Part1 & Part2 & a + b & c+d+e & 0.7295 & 16.4245\\ \rule[-1ex]{0pt}{3.5ex} (5)Part1 & Part2 & a+b+c & d+e & 0.7030 & 18.1302\\ \rule[-1ex]{0pt}{3.5ex} (6)Part1 & Part2 & a*+b+c* & d+e & 0.8171 & 5.2039\\ \rule[-1ex]{0pt}{3.5ex} (7)Part2 & Part1 & a* & b+c+d+e & 0.8920 & 4.1984\\ \rule[-1ex]{0pt}{3.5ex} (8)Part2 & Part1 & a* & b+c*+d+e & 0.8850 & 6.0604\\ \rule[-1ex]{0pt}{3.5ex} (9)Part1+2 & Part1 & a* & b+c*+d+e & 0.8994 & 3.8415\\ \rule[-1ex]{0pt}{3.5ex} (10)Part1+2 & Part2 & a* & b+c*+d+e & 0.8094 & 4.6473\\ \midrule \end{tabular} \end{center} \vspace{-20pt} \end{table} We also explore the transferability of each component in Fig.~\ref{fig:Video_swin}: CNN (a), TCM (b), Swin Transformer (c), decoder (d), and segmentation head (e), as shown in Tab.~\ref{tab:transfer_learning}. Here, a* indicates that the weights are pre-trained on ImageNet following~\cite{Chen2021TransUNetTM}; otherwise the weights are trained from scratch. We adopt a standard transfer learning approach, fine-tuning, to investigate the generalisation ability of each part under domain shift from VFSS2022 Part 1 to Part 2 and vice versa. The results suggest that fine-tuning the later parts after feature extraction is beneficial for domain adaptation in both directions; see rows (3) and (7). The model also generalises well to Part 1/Part 2 when pre-trained on the entire dataset, and can even gain a performance boost (DSC 89.94\%) on Part 1; see rows (9) and (10). \section{Conclusion} We presented an end-to-end framework that exploits multi-frame inputs to segment VFSS2022 data, leading to performance gains and a reduction in model size. Our proposed network combines local and global spatial context and leverages temporal features. Each of the modules can be fine-tuned or exchanged. The final framework achieves superior performance over other designs and provides a new, alternative pipeline for medical video segmentation tasks. \ \\ \begin{footnotesize} ACKNOWLEDGEMENTS. Data usage and publication are granted by UoB Ethics Approval REF: 11277. We thank project investigators Ian Swaine, Salma Ayis, Aoife Stone-Ghariani, Dharinee Hansjee, Stefan T Kulnik, Peter Kyberd, Elizabeth Lloyd-Dehler, William Oliff, Lydia Morgan and Russel Walker and thank Yuri Lewyckyj and Victor Perez for their annotations. Project: CTAR-SwiFt; Funder: NIHR; Grant: PB-PG-1217-20005. \end{footnotesize} \bibliographystyle{IEEEtran}
\section{INTRODUCTION} Recently, deep learning techniques have been extensively adopted for classifying electroencephalography (EEG) data.~\cite{schirrmeister2017deep} Despite this, a significant obstacle persists in the limited availability of training data.~\cite{ju2020federated} To circumvent this constraint, researchers have turned to generative modeling, a rapidly evolving field in machine learning, to generate synthetic EEG time series through a process known as data augmentation.~\cite{hartmann2018eeg} This technique involves the creation of plausible samples that were not present in the original dataset, thereby expanding the training data with ``unseen'' examples. To tackle the non-Euclidean characteristics inherent in EEG signals, researchers have been exploring the use of geometric deep learning methods to classify EEG signals in brain-computer interfaces (BCIs).~\cite{ju2022tensor,ju2022deep,ju2022graph,kobler2022spd,pan2022matt,wilson2022deep} These techniques involve the application of second-order neural networks to matrices known as spatial covariance matrices (SCMs), which are derived from EEG signals. It is noteworthy that these SCMs possess a wealth of discriminative information, including the variance of signals recorded by individual channels and the coherence between signals recorded by neighboring channels.~\cite{ju2022graph} Consequently, the generation of SCMs that possess neuropsychological significance will provide direct benefits to research in BCIs. In this study, we generate SCMs utilizing a cutting-edge generative modeling technique known as score-based generative modeling~\cite{song2019generative,song2020improved}. Score-based generative models generate samples from noise by gradually adding noise to the data and then undoing this perturbation through estimation of the score function, i.e., the gradient of the log-density function with respect to the data. This noise perturbation can be described as a forward diffusion process modeled by a stochastic differential equation (SDE).~\cite{song2020score} This approach has been demonstrated to be successful in generating images, audio, and point clouds. In contrast to three-channel RGB images, whose pixel intensities range from 0 to 255, SCMs are generally preprocessed as multi-channel square matrices that are symmetric and positive semidefinite with real-valued entries. We evaluate our approach using the Korea University (KU) dataset, also known as the OpenBMI dataset.~\footnote{\, The KU dataset refers to the \url{http://gigadb.org/dataset/100542} and its corresponding description article \url{https://academic.oup.com/gigascience/article/8/5/giz002/5304369}.} \section{METHODOLOGY} In this section, we propose a three-step process for generating SCMs utilizing score-based generative modeling. \begin{itemize} \item \textbf{1$^{\text{st}}$ STEP}: In this step, the raw EEG signals are filtered and segmented in the frequency and time domains using methods described in~\cite{ju2022tensor,ju2022graph}. Specifically, a bank of bandpass filters (i.e., Chebyshev Type II filters) is applied to decompose the EEGs into multiple frequency passbands. Then, a segmentation plan is implemented in the time domain to divide the EEGs into smaller segments, with or without overlapping.
For a segment of duration $T$, $X \in \mathbb{R}^{n_C \times n_T}$, its spatial covariance matrix is $S:=X \cdot X^{\top} \in \mathbb{R}^{n_C \times n_C}$, where $n_C$ and $n_T$ are the number of channels and timestamps, respectively. In the final process of this step, the SCMs are scaled by dividing them by their $\ell_2$-norm, i.e., $\bar{S}:= S/||S||_{\ell_2}$. \item \textbf{2$^{\text{nd}}$ STEP}: In this step, score-based generative modeling~\footnote{\, We briefly summarize score-based generative modeling in Appendix~\ref{A1} and~\ref{A2}.} is used to estimate the unknown prior distribution $p_{data}(S)$ through score matching, and to generate samples within each specific frequency band and time interval of the EEGs, using either Langevin dynamics or time-reversal SDEs. We fit the model for all bands simultaneously. The generated samples are $n_C \times n_C$ matrices, but they are typically neither symmetric nor positive definite. \item \textbf{3$^{\text{rd}}$ STEP}: In this step, we project the generated matrices so that they are symmetric and positive definite by utilizing Lemma 3 of~\cite{ju2022graph}. Specifically, given a generated matrix $X \in \mathbb{R}^{n_C \times n_C}$, the projected SCM is $S^{\dagger} := \sum_{i=1}^{n_C} \max\{\lambda_i, \epsilon\} u_i u_i^{\top}$, where $\epsilon>0$ is a preset threshold, and the eigenvalues $\{\lambda_i\}_{i=1}^{n_C}$ and corresponding orthonormal eigenvectors $\{u_i\}_{i=1}^{n_C}$ are obtained from the symmetric matrix $\frac{1}{2}(X + X^{\top})$. \end{itemize} \begin{figure*}[!h] \center \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{all_data_dist.jpg} \caption{Raw/generated distributions for all nine frequency bands}~\label{fig:a} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{Mu_Beta_dist.jpg} \caption{Raw/generated distributions for the Mu and Beta frequency bands.}~\label{fig:b} \end{subfigure} \medskip \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{Ori_data_Frechet.png} \caption{Fréchet mean of the raw dataset. (Triangle sign in Subfigure~\ref{fig:a})}~\label{fig:c} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{Gen_data_Frechet.png} \caption{Fréchet mean of the generated dataset. (Cross sign in Subfigure~\ref{fig:a})}~\label{fig:d} \end{subfigure} \caption{\textbf{Subfigures~\ref{fig:a} and~\ref{fig:b}}: Illustration of the raw and generated distributions of the 2-dimensional projection of covariance matrices: each 2-dimensional point in the figure is projected from its associated $20\times 20$ covariance matrix (i.e., a 400-dimensional tensor) using t-SNE. There are 151,200 points (i.e., 9 frequency bands $\times$ 8400 trials $\times$ raw/generated options) in Subfigure~\ref{fig:a} and 84,000 points (i.e., 5 frequency bands $\times$ 8400 trials $\times$ raw/generated options) in Subfigure~\ref{fig:b}. \\ \textbf{Subfigures~\ref{fig:c} and~\ref{fig:d}}: Illustration of Fréchet means of covariance matrices for the nine frequency bands, 4 $\sim$ 8 Hz, 8 $\sim$ 12 Hz, 12 $\sim$ 16 Hz, 16 $\sim$ 20 Hz, 20 $\sim$ 24 Hz, 24 $\sim$ 28 Hz, 28 $\sim$ 32 Hz, 32 $\sim$ 36 Hz, and 36 $\sim$ 40 Hz.
~\label{fig:1}} \end{figure*} \begin{figure*}[!h] \center \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{Ori_data_c1_Frechet.png} \caption{Fréchet mean of the \textbf{left}-hand-class trials in the raw dataset.}~\label{fig:e} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{Ori_data_c0_Frechet.png} \caption{Fréchet mean of the \textbf{right}-hand-class trials in the raw dataset.}~\label{fig:f} \end{subfigure} \medskip \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{Gen_data_c1_Frechet.png} \caption{Fréchet mean of the \textbf{left}-hand-class trials in the generated dataset.}~\label{fig:g} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{Gen_data_c0_Frechet.png} \caption{Fréchet mean of the \textbf{right}-hand-class trials in the generated dataset.}~\label{fig:h} \end{subfigure} \caption{Illustration of Fréchet means of covariance matrices within the nine frequency bands for the \textbf{left} and \textbf{right}-hand classes. The highlighted entries of the SCMs in subfigures~\ref{fig:e} and~\ref{fig:g} (Mu and Beta bands) are located in the regions of FC4, C4, and CP4 over the scalp, while those in subfigures~\ref{fig:f} and~\ref{fig:h} fall in the regions of FC3, C3, and CP3. ~\label{fig:2} } \end{figure*} \begin{figure*}[!ht] \center \begin{subfigure}{0.33\textwidth} \includegraphics[width=\linewidth]{Real_c1_No_27.png} \caption{Real SCMs, \textbf{left}-hand (Trial No.27 of Subject No.1)}~\label{fig:i} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.33\textwidth} \includegraphics[width=\linewidth]{Gen_c1_No_5.png} \caption{Generated SCMs, \textbf{left}-hand (Sample 1)}~\label{fig:j} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.33\textwidth} \includegraphics[width=\linewidth]{Gen_c1_No_15.png} \caption{Generated SCMs, \textbf{left}-hand (Sample 2)}~\label{fig:k} \end{subfigure} \medskip \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Real_c0_No_8.png} \caption{Real SCMs, \textbf{right}-hand (Trial No.8 of Subject No.1)}~\label{fig:l} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Gen_c0_No_0.png} \caption{Generated SCMs, \textbf{right}-hand (Sample 1)}~\label{fig:m} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Gen_c0_No_6.png} \caption{Generated SCMs, \textbf{right}-hand (Sample 2)}~\label{fig:n} \end{subfigure} \caption{Conditional SCM generation: in each row, we plot an SCM derived from an actual EEG segment of one class alongside two generated samples of the same class. ~\label{fig:3} } \end{figure*} \section{EXPERIMENTS} \subsection{The Korea University Dataset} The Korea University (KU) Dataset, also known as the OpenBMI dataset, was collected from 54 subjects performing a binary-class EEG-MI task. The EEG signals were captured at a rate of 1,000 Hz using 62 Ag/AgCl electrodes. The continuous EEG data were then segmented from 1,000 to 3,500 ms with reference to the stimulus onset. For evaluation, 20 electrodes in the motor cortex region were selected (i.e., FC-5/3/1/2/4/6, C-5/3/1/z/2/4/5, and CP-5/3/1/z/2/4/6).
The study comprised two sessions, designated $S_1$ and $S_2$, each of which was divided into a training phase and a test phase, each with 100 trials balanced between right- and left-hand imagery tasks, resulting in a total of 21,600 trials (i.e., 54 subjects $\times$ 2 sessions $\times$ 200 trials) available for evaluation. In accordance with Table 8 in the study~\cite{ju2022graph}, the subjects were divided into two groups: a \emph{training subject group} and a \emph{test subject group}. The criterion for inclusion in the \emph{training subject group} was that the accuracies of 10-fold cross-validation on both $S_1$ and $S_2$ must be higher than 70\% (criterion level). A total of 21 subjects met this criterion and were included in the training subject group: Subjects No. 2, 3, 5, 6, 8, 12, 17, 18, 21, 22, 28, 29, 32, 33, 35, 36, 37, 39, 43, 44, and 45. The remaining 33 subjects were included in the \emph{test subject group}: Subjects No. 1, 4, 7, 9, 10, 11, 13, 14, 15, 16, 19, 20, 23, 24, 25, 26, 27, 30, 31, 34, 38, 40, 41, 42, 46, 47, 48, 49, 50, 51, 52, 53, and 54. \subsection{Parameters in Training Models} For the score-based generative modeling, the Variance Exploding (VE) SDE approach, incorporating the NCSN++ model architecture~\footnote{\, The PyTorch implementation for score-based generative modeling refers to the GitHub repository (\url{https://github.com/yang-song/score_sde_pytorch}).}, was selected for evaluation. We independently train two generative models to produce the left- and right-hand samples using 4,200 trials in the \emph{training subject group}, comprising 21 subjects over 2 sessions with 100 trials per class each. The signal from each trial was initially transformed into a covariance matrix and scaled. The noise parameters were set to $\sigma_{max} = 10$ and $\sigma_{min} = 0.01$. The training process was performed over 100,000 iterations, utilizing a batch size of 128. It is noteworthy that the CNN filter size within NCSN++ was set to $20 \times 20$ to address the non-Euclidean issue discussed in the discussion section. \subsection{Results} To alleviate the discrepancy between the raw and generated distributions resulting from the generative methods, we normalize each covariance matrix by zero-centering the means and scaling the variances to unity before visualization and quantitative analysis. \subsubsection{Visualization} In Figure~\ref{fig:a}, we present 2-dimensional projections of the 8,400 raw SCMs from the \emph{training subject group} and the 8,400 generated covariance matrices (both left- and right-hand classes) using t-SNE~\cite{van2008visualizing}. The Fréchet means~\footnote{\, The Fréchet mean is introduced in Appendix~\ref{A3}.} of the two distributions are marked with triangle and cross signs, respectively. The two distributions are nearly coincident, and the Riemannian distance between their Fréchet means is relatively small (approximately 4.323), indicating that the center of the learned distribution closely matches that of the raw distribution. Figure~\ref{fig:b} provides a more detailed view of the projections within the Mu and Beta frequency bands. Furthermore, the covariance matrices corresponding to the two labeled Fréchet means in Figure~\ref{fig:a} have been plotted. The Fréchet mean was computed independently for each frequency band. The Fréchet means of the frequency bands $8\sim 12$ Hz, $12 \sim 16$ Hz, $16 \sim 20$ Hz, and $20 \sim 24$ Hz exhibit a similar profile, while the others differ.
It is worth noting that these frequency bands are associated with event-related desynchronization and synchronization during cognitive and motor processing. Figure~\ref{fig:2} illustrates the Fréchet means of covariance matrices for the left- and right-hand classes within the nine frequency bands. Subfigures~\ref{fig:e} and~\ref{fig:f} display the SCMs with key regions highlighted, corresponding to neurophysiological findings. Specifically, the highlighted entries in subfigures~\ref{fig:e} and~\ref{fig:g} (Mu and Beta bands) are situated in the regions of FC4, C4, and CP4 across the scalp, while those in subfigures~\ref{fig:f} and~\ref{fig:h} are located in the regions of FC3, C3, and CP3 across the scalp. To provide a more comprehensive visual representation of the texture of the generated samples, Figure~\ref{fig:3} offers a closer examination of the SCMs derived from actual EEGs and those generated from the proposed methodology for the two categories. \subsubsection{Classification} To assess the performance of the generated samples, we classify them using the Tensor-CSPNet model~\footnote{\, The Python implementation of Tensor-CSPNet refers to the following GitHub repository: (\url{https://github.com/GeometricBCI/Tensor-CSPNet-and-Graph-CSPNet}).} pre-trained on all subjects (two sessions) in the training subject group, which comprise a total of 8400 trials. The model architecture adopts a simplified 2500 ms time window and incorporates two-level BiMap layers, transforming the input dimension of 20 to 30 and back to an output dimension of 20. There are 8400 balanced generated samples, with each class containing 4200. The pre-trained classifier achieves an accuracy of 84.30\% over all samples, and the confusion matrix is as follows: \begin{table}[!ht] \caption{Confusion matrix: predicted labels for a total of 8400 samples. } \centering \begin{tabular}{l | ll} \toprule True $\backslash$ Predicted & Right-hand & Left-hand \\ \midrule Right-hand & 3730 (44.4\%) & 470 (5.6\%) \\ Left-hand & 849 (10.1\%) & 3351 (39.9\%) \\ \bottomrule \end{tabular} \end{table} \begin{table*}[!ht] \caption{Cross-session classification with the data augmentation approach: Each column depicts the number of samples incorporated into the training session. The samples are divided equally between the two classes, left-hand and right-hand. The selected cross-session scenario originates from the training and evaluation sessions in the KU dataset. The initial session of 200 trials plus the added samples serves as the training data, while the first half of the second session, comprising 100 trials, is utilized for validation and the latter half, also consisting of 100 trials, for testing. The results (\%) presented encompass the mean over \textbf{10} runs for each scenario and the best performance. ~\label{tab:cross_session} } \centering \begin{tabular}{l l | llll llllll} \toprule {Augmentation} & &&&& &&&&&\\ {Samples} & None & 20 & 40 & 60 & 80 & 100 & 120 & 140 & 160 & 180 & 200\\ \midrule Subject No.30 & &&&& &&&& \\ Avg.(Std.) & 55.2(3.9) & 52.1(3.9) & 55.5(6.8)& 56.2(2.9)& 54.7(5.1) & 53.8(4.4) & 56.7(4.8)& 57.2(4.3)& 56.8(6.4)& \textbf{58.3}(5.3)& 57.6(4.5)\\ Best & 61.0 & 59.0 & \textbf{71.0} & 63.0 & 64.0 & 62.0 & 64.0 & 66.0& 66.0& 66.0 & 67.0 \\ \midrule Subject No.42 & &&&& &&&& \\ Avg.(Std.)
& 59.2(3.5) & 59.1(6.2) & 63.2(4.6)& 62.5(3.4)& 65.6(4.6) & 65.1(4.1) & 64.8(3.6) & 65.8(2.8)& 65.4(3.6)& 61.8(4.1) & \textbf{67.9}(2.5) \\ Best & 63.0 & 66.0 & 69.0& 67.0& 72.0 & \textbf{73.0} & 71.0 & 70.0& 72.0& 69.0 & 72.0\\ \bottomrule \end{tabular} \end{table*} In this study, we conducted an additional experiment in a cross-session setting where one session of trials was utilized for training, the first half of another for validation, and the second half for testing, which is also known as the holdout scenario. This task presents a significant challenge due to the signal variability across sessions, and many state-of-the-art algorithms, including geometric methods~\cite{ju2022tensor, ju2022graph}, perform poorly, yielding accuracy rates below 70\%. The proposed generative method was applied to generate SCMs using all subjects (two sessions) in the training subject group. The classifier, Tensor-CSPNet, was trained using the first session together with the generated samples, validated on the first half of the second session, and evaluated on the second half for testing. Table~\ref{tab:cross_session} shows the cross-session classification accuracies, where the columns "None", "20", "40", "60", "80", "100", "120", "140", "160", "180", and "200" indicate the number of generated samples added to the training set. The results in the "None" column are the typical cross-session outcomes, but applied to normalized SCMs and without segmenting the time interval, and thus differ slightly from those in~\cite{ju2022graph}. We selected Subjects 30 and 42 as representatives from the \emph{test subject group}. In the case of Subject 30, the average accuracy, calculated over ten runs, increased by 3.1\% after the addition of 180 generated samples. Subject 42 saw a substantial improvement of 8.7\% in average accuracy after incorporating 200 generated samples into the training set. \subsection{Discussions} This study explores a new method for generating SCMs for BCI applications using score-based generative modeling with the SDE approach. The generated samples are analyzed through both visual and quantitative evaluations. Visually, the samples produced by the proposed method have a comparable appearance to the SCMs obtained from actual EEG recordings. Furthermore, the center (Fréchet mean) of the generated samples aligns with neurophysiological findings that event-related desynchronization and synchronization occur on electrodes C3 and C4 within the Mu and Beta frequency bands during motor imagery processing. From a quantitative standpoint, 84.3\% of the samples are correctly predicted by a pre-trained Tensor-CSPNet, and holdout experiments on two subjects (Subjects No.30 and No.42) show an improvement of up to 8.7\% in the average accuracy over 10 runs. It is worth noting that not all subjects exhibit a discernible improvement after incorporating the generated samples. At present, there is no established criterion to determine which subjects may benefit from the approach. Despite these positive results, there are still several areas for improvement in the current approach, which are discussed in the following sections. \subsubsection{Issue of Non-Euclidean Nature} In the experiments, the SCM channels are ordered from start to end as FC-5/3/1/2/4/6, C-5/3/1/z/2/4/5, and CP-5/3/1/z/2/4/6. The score-based generative model employs a CNN-structured architecture to capture local information from adjacent channels in this sequence.
However, this ordering fails to reflect the correlations between EEG channels with respect to their spatial locations, a phenomenon referred to as the non-Euclidean nature of the data, which results in limited performance. To tackle this problem, we adopt a heuristic approach that sets the filter size to $20 \times 20$, which corresponds to the total size of the SCMs. This may not be readily applicable to complex scenarios, as it can be challenging to capture the signal granularity with such a large filter size. \subsubsection{Issue of Randomness} It is possible that some of the generated samples contain valuable discriminatory information for classification, while others do not. The randomness introduced by the sampling process in score-based generative modeling may compromise the performance of the classifier. We expect subsequent research to investigate specific heuristic techniques to address this issue. \subsubsection{Issue of Cross-frequency Coupling} A potential explanation for the limited performance could be attributed to the diversity of the generative model. Since each SCM over a specific frequency band is independently generated from random noise, the composite SCM assembled from these independent SCMs may lack neurophysiological significance and may never have been previously observed. In simpler terms, real SCMs are derived from EEG signals in which changes in brain activity occur during cognitive and motor processing, resulting in event-related desynchronization and synchronization. A generated SCM may not have this same origin, even though it may appear similar. For instance, the SCM within the frequency range of 32 to 36 Hz depicted in Subfigure~\ref{fig:m} exhibits high-intensity activity that typically occurs within the Mu and Beta bands, i.e., a pattern not observed in the real data. \subsubsection{Issue of Distribution Shift} Even though the generated samples may contain ample discriminatory information, the limited performance observed may still stem from the disparity between the prior and learned distributions. This incongruence can result in variations in the numerical ranges of the entries within the SCMs. To mitigate this challenge, we utilize a simple heuristic normalization technique for the covariance matrices by zero-centering the means and scaling the variances to unity. This approach results in well-overlapping raw and generated distributions, but it may not always be a reliable method in complex scenarios. \section{APPENDICES} In the appendices, we briefly introduce score-based generative modeling. For a formal convention, we suppose we have samples of spatial covariance matrices $\{S_i \in \mathbb{R}^{n_C \times n_C}\}_{i=1}^N$ from an (unknown) distribution $p_{data}(S)$. \subsection{Score-based Generative Modeling}~\label{A1} In score-based generative modeling, the score network $s_{\theta}: \mathbb{R}^{n_C \times n_C} \longmapsto \mathbb{R}^{n_C \times n_C}$ is a deep neural network parametrized by $\theta$ and used to learn the score of a probability density, $\nabla_S \log p(S)$.
To train the score network $s_{\theta}$, a technique called \emph{denoising score matching}~\cite{vincent2011connection} is used: it first replaces $p_{data}(S)$ with a Gaussian noise $\sigma$-perturbed version $p_{data}^{\sigma}(\tilde{S})$, where $p_{data}^{\sigma}(\tilde{S}) = \int p_{\mathcal{N}}^{\sigma}(\tilde{S}|S) \cdot p_{data} (S)~dS$, and the denoising objective $\mathcal{J}_D (\theta)$ with noise level $\sigma$ is then given as follows, \[ \mathcal{J}_D^{\sigma} (\theta) := \mathbb{E}_{p_{\mathcal{N}}^{\sigma}(\tilde{S}|S) \cdot p_{data} (S)} ||s_{\theta}(\tilde{S}) - \nabla_{\tilde{S}} \log p_{\mathcal{N}}^{\sigma}(\tilde{S}|S)||. \] Note that the noise model term $\nabla_{\tilde{S}} \log p_{\mathcal{N}}^{\sigma}(\tilde{S}|S)$ has a simple analytic form, written $\nabla_{\tilde{S}} \log p_{\mathcal{N}}^{\sigma}(\tilde{S}|S) = -\frac{1}{\sigma^2} (\tilde{S} - S)$. In the sampling phase, Langevin dynamics are applied to recursively generate samples using the score function $s_{\theta}$ as follows, \[ \tilde{S}_t = \tilde{S}_{t-1} + \frac{\epsilon}{2}\cdot s_{\theta} (\tilde{S}_{t-1}) + \sqrt{\epsilon}\cdot Z_t, \] where the initial $\tilde{S}_0 \sim \pi(x)$ (prior distribution) and a fixed step size $\epsilon > 0$ are given, and $Z_t \sim \mathcal{N} (0, I)$. \subsection{Diffusing Samples with an SDE}~\label{A2} For a continuum of distributions evolving over time $t$, score-based generative modeling has been further established within a unified framework of stochastic differential equations (SDEs) with diffusion probabilistic modeling.~\cite{song2020score} Technically, the SDEs are of the following form, \[ dS = f(S, t)~dt + g(t)~dW, \] where $f(\cdot, t): \mathbb{R}^{n_C \times n_C} \longmapsto \mathbb{R}^{n_C \times n_C}$ and $g(t) \in \mathbb{R}$ are the drift and diffusion coefficient respectively, and $W\in \mathbb{R}^{n_C \times n_C}$ is a standard Wiener process. The solution of the above SDE is a diffusion process $\{S(t)\}_{t \in [0, T]}$ over a finite time horizon $[0, T]$, and $p_t(S)$ is the marginal distribution of $S(t)$. For variance exploding (VE) SDEs, $f(S_t, t) := \alpha_t \cdot s_\theta (S_t)$ and $g(S_t, t) := \sqrt{2\alpha_t}$. Score-based generative modeling relies on the following time-reversal diffusion process for generating samples, \[ d S = \big( f(S, t) - g(t)^2 \cdot \nabla_S \log p_t(S) \big)~dt + g(t)~d \bar{W}, \] where $\bar{W}\in \mathbb{R}^{n_C \times n_C}$ is a standard Wiener process in the reverse-time direction. A time-dependent neural network $s_{\theta} (S, t)$ is used to estimate $\nabla_S \log p_t(S)$ by minimizing the following loss $\mathcal{J}_D \big(\theta; \lambda\big)$, \[ \int_0^T \mathbb{E}_{p_{0t}(\tilde{S}|S) \cdot p_{data} (S)} \lambda(t) \cdot ||s_{\theta}(\tilde{S}, t) - \nabla_{\tilde{S}} \log p_{0t} (\tilde{S}|S)||~dt, \] where $p_{0t} (\tilde{S}|S)$ is the transition distribution from $S(0)$ to $S(t)$, and $\lambda:[0, T] \rightarrow \mathbb{R}_{>0}$ is a positive weighting function. \subsection{Fréchet Mean}~\label{A3} The generated sample in the current approach is in the form of an SPD matrix, which means that traditional measures for generative modeling in computer vision, such as the Inception score and Fréchet Inception Distance, cannot be used. The Riemannian distance on SPD manifolds is used as an alternative method to evaluate the distance between the prior and generated distributions.
From a mathematical perspective, a conventional treatment is to view the space of spatial covariance matrices as the symmetric positive definite (SPD) manifold equipped with a Riemannian metric, i.e., the affine invariant Riemannian metric (AIRM)~\cite{pennec2006riemannian}, written as $(\mathcal{S}^{++}, AIRM)$. The Riemannian distance between two spatial covariance matrices is $d_{AIRM} (S_1, S_2) := ||\log{(S_1^{-1} \cdot S_2)}||_{\mathcal{F}}$, where $||\cdot||_{\mathcal{F}}$ is the Frobenius norm and $\log$ is the matrix logarithm. Given a set of SPD matrices $\{S^1, \cdots, S^N\}$, the Fréchet mean $\mu$ of that set is given as follows, \[ \mu := \arg\min_{\mu \in \mathcal{S}^{++}} \frac{1}{N} \cdot \sum_{i=1}^N~d_{AIRM}^2 (S^i, \mu). \] \section{ACKNOWLEDGMENT} This work was supported under the RIE2020 Industry Alignment Fund–Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from industry partner(s); this work was supported by the RIE2020 AME Programmatic Fund, Singapore (No. A20G8b0102); this work was also supported by Innovative Science and Technology Initiative for Security Grant Number JPJ004596, ATLA, Japan. {\footnotesize \bibliographystyle{IEEEtran}
\section{Further Results} \label{app:more_results} In the following, we provide further insights and experimental results in order to assess the performance of \textsc{SBalign} in comparison with different baselines and across tasks of various nature. \subsection{Cell Differentiation} \begin{figure}[H] \centering \includegraphics[width=.7\textwidth]{figures/fig_cell_marginals.pdf} \caption{Distribution of the cell population (i.e., marginals) at time $t=t_0$ and $t=t_1$ for (\textbf{a}) the ground truth, and (\textbf{b}) \textsc{SBalign}, after projection along their first two principal components.} \label{fig:results_cell_marginals} \end{figure} \subsection{Protein Docking} Protein complex formation is a central step in many biological processes, such as signal transduction, DNA replication, and repair. The complex formation step is guided by appropriate energetics, with a dynamic alteration in structure to form a stable complex. Understanding the mechanistic principles governing protein complex formation is thus a central problem in biology, with the long-standing goal of engineering protein interactions to achieve a desired functional or therapeutic response. Computationally, the goal is to predict the structure of the bound states of participating proteins, given their corresponding unbound structures. Beyond the results in Table~\ref{tab:results_docking}, we display the ground truth and docked complexes in Figs.~\ref{fig:results_docked_1QA9},~\ref{fig:results_docked_1NW9}, and~\ref{fig:results_docked_1JIW}. \begin{figure}[H] \centering \includegraphics[width=.9\textwidth]{figures/fig_pred_1NW9.pdf} \caption{Ground truth and predicted bound structures for the complex with PDB ID: 1NW9. \textsc{SBalign} is able to identify the true binding pocket.} \label{fig:results_docked_1NW9} \end{figure} \begin{figure}[H] \centering \includegraphics[width=.9\textwidth]{figures/fig_pred_1JIW.pdf} \caption{Ground truth and predicted bound structures for the complex with PDB ID: 1JIW. \textsc{SBalign} is able to find the true binding interface compared to \textsc{EquiDock}.} \label{fig:results_docked_1JIW} \end{figure} \section{Datasets} \label{app:datasets} \subsection{Synthetic Datasets} Several datasets from different domains were used throughout this work. In the following, we provide an overview of dataset preparation, processing, and featurization. \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{figures/fig_synthetic_datasets.pdf} \caption{Initial (\textit{blue}) and final (\textit{red}) marginals for the two toy datasets \textbf{(a)} moon and \textbf{(b)} T, together with arrows indicating a few alignments.} \label{fig:synthetic_datasets} \end{figure} \para{Moon dataset} The \texttt{moon} toy dataset (Fig.~\ref{fig:synthetic_datasets}a) is generated by first sampling $\distend$ and then applying a clockwise rotation of $233^\circ$ around the origin to obtain $\distinit$. The points on the two semi-circumferences supporting $\distend$ are initially placed equally spaced along each semi-circumference and then perturbed by applying additive Gaussian noise to both coordinates. While classic generative models will choose the shortest path and connect the ends of the two moons that are closest in Euclidean distance, only methods equipped with additional knowledge or insight on the intended alignment will be able to solve this task.
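A minimal sketch of this construction is given below; the number of samples and the noise level are illustrative assumptions, and only the $233^\circ$ clockwise rotation and the alignment by index follow the description above.
\begin{verbatim}
import numpy as np

def make_moons_aligned(n_per_moon=200, noise=0.05, angle_deg=233.0, seed=0):
    """Build the aligned 'moon' toy dataset: nu consists of two noisy
    semi-circles, mu is nu rotated clockwise by angle_deg around the origin.
    The i-th row of mu is aligned with the i-th row of nu."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, np.pi, n_per_moon)
    upper = np.stack([np.cos(t), np.sin(t)], axis=1)              # upper semi-circle
    lower = np.stack([1.0 - np.cos(t), 0.5 - np.sin(t)], axis=1)  # lower semi-circle
    nu = np.concatenate([upper, lower])
    nu = nu + noise * rng.standard_normal(nu.shape)               # additive Gaussian noise
    a = -np.deg2rad(angle_deg)                                    # clockwise rotation
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    mu = nu @ R.T
    return mu, nu   # aligned pairs (x_0^i, x_1^i)

mu, nu = make_moons_aligned()
\end{verbatim}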
\para{T dataset} This toy dataset (Fig.~\ref{fig:synthetic_datasets}b) is generated by placing an equal amount of samples at each of the four extremes of a T-shaped area whose ratio between the \textit{x} and \textit{y} dimensions equals 51/55. If run with a Brownian prior, classical \acp{SB} also fail on this dataset because they produce swapped pairings: i.e., they match the left (\textit{resp.} top) point cloud with the bottom (\textit{resp.} right) one. At the same time, though, this dataset prevents reference drifts with simple analytical forms (such as spatially-symmetric or time-constant functions) from fixing classical \ac{SB} runs. It therefore illustrates the need for general, plug-and-play methods capable of generating approximate reference drifts to use in the computation of classical \acp{SB}. \subsection{Cell Differentiation Datasets} \begin{figure}[H] \centering \includegraphics[width=0.65\linewidth]{figures/fig_overview_cells.pdf} \caption{Overview of \textsc{SBalign} in the setting of cell differentiation with the goal of learning the evolutionary process that morphs a population from its state at $t$ to $t+1$. Through genetic tagging (i.e., barcodes) we are able to trace progenitor cells at time point $t$ into their descendants at $t+1$. This provides us with an alignment between populations at consecutive time steps. Our goal is then to recover a stochastic trajectory from $\mathbf{x}_0$ to $\mathbf{x}_1$. To achieve this, we connect the characterization of an SDE conditioned on $\mathbf{x}_0$ and $\mathbf{x}_1$ (utilizing Doob's \emph{$h$-transform}) with that of a Brownian bridge between $\mathbf{x}_0$ and $\mathbf{x}_1$ (classical Schr\"odinger bridge theory), leading to a simpler training procedure with lower variance and strong empirical results.} \label{fig:overview_cells} \end{figure} \para{Dataset description} We obtain the datapoints used in our cell differentiation task from the dataset generated by \cite{weinreb2020lineage}, which contains 130861 observations/cells. We follow the preprocessing steps in \citet{bunne2021learning} and use the Python package \texttt{scanpy} \citep{wolf2018scanpy}. After processing, each observation records the level of expression of 1622 different highly-variable genes as well as the following metainformation per cell: \begin{itemize} \item a \texttt{timestamp}, expressed in days and taking values in \{2, 4, 6\}; \item a \texttt{barcode}, which is a short DNA sequence that allows tracing the identity of cells and their lineage by means of single-cell sequencing readouts; \item an additional \texttt{annotation}, which describes the current differentiation fate of the cell. \end{itemize} \para{Dataset preparation} We only retain cells with barcodes that appear both on days 2 and 4, taking care to exclude cells that are already differentiated on day 2. We construct matchings by pairing cells measured at the two different times but sharing the same barcode. Additionally, we filter cells to make sure that no cell appears in more than one pair. To reduce the very high dimensionality of these datapoints, we perform a PCA projection down to 50 components. We end up with a total of 4702 pairs of cells, which we partition into train, validation, and test sets according to an 80\%/10\%/10\% split. \subsection{Protein Docking Datasets} \label{sec:app_protein_docking_datasets} \para{Dataset description} For the rigid-protein docking task, we utilize the DB5.5 dataset.
The dataset was downloaded from \href{https://zlab.umassmed.edu/benchmark/}{https://zlab.umassmed.edu/benchmark/}. The dataset consists of both unbound and bound structures, but the structures are largely rigid, with an average complex RMSD of 0.96 between the bound and unbound structures. \para{Dataset preparation} Following the same convention as \textsc{EquiDock} \citep{ganea2022independent}, we treat the receptor as fixed. For each ligand, the final 3D structure corresponds to its bound version, and the unbound version is generated by applying a random rotation $R$ and translation $b$ to the bound version. However, applying a different rotation and translation to each ligand would result in a different Brownian bridge, thus providing limited learning signal for the drift $b_t^{\theta}$. To avoid this, we create a rotation matrix $R$ during training by sampling a random angle between $30$ and $45^{\circ}$ along each axis, and a translation $b$ with a maximum magnitude between $5.0$ and $10.0$. The same $R$ and $b$ are also applied to the validation and test sets. We leave it to future work to extend the algorithm to work for arbitrary rotations and translations. \para{Featurization} Following standard practice, and for memory and computational efficiency, we only use the C$\alpha$ coordinates of the residues to represent our protein structures instead of the full-atom structures. For each amino acid residue, we compute the following features: a one-hot encoding of the amino acid identity $f_e$ of size $23$, hydrophobicity $f_h \in [-4.5, 4.5]$, volume $f_v \in [60.1, 227.8]$, charge $f_c \in \{-1, 0, 1\}$, polarity $f_p \in \{0, 1\}$, and whether the amino acid residue is a hydrogen bond donor $f_d \in \{0, 1\}$ or acceptor $f_a \in \{0, 1\}$. The hydropathy and volume features are expanded into a radial basis with interval sizes $0.1$ and $10$, respectively. To equip the model with a notion of time, we use a sinusoidal embedding of time $\phi(t)$ with embedding dimensionality $32$. These are concatenated to the amino acid features to form our input features for the amino acid residues. \para{Position at t} For any time $t$, we sample the positions of the C$\alpha$ atoms using the Brownian bridge: given the coordinates $\mathbf{x}_0$ at $t=0$ and $\mathbf{x}_1$ at $t=1$, the Brownian bridge between $\mathbf{x}_0$ and $\mathbf{x}_1$ yields $\mathbf{x}_t \sim \mathcal{N}\left(\mathbf{x}_t; (1-t)\mathbf{x}_0 + t\mathbf{x}_1, t(1-t)\right)$. \section{Experimental Details} In the following, we provide further experimental details on the chosen evaluation metrics, network architectures, and hyperparameters. \subsection{Evaluation Metrics} \label{app:metrics} \subsubsection{Cell Differentiation} For fairness of comparison between our method and the baseline (\textsc{fbSB}) ---which only works at the level of the distribution of cells--- we also consider three evaluation metrics (i.e., $W_{\varepsilon}$, MMD and $\ell_2$) that capture the similarity between the end marginal $\distend$ and our prediction $\pi^\star_1$, irrespective of matchings. In what follows, we denote with $\hat{\nu}$ the predicted end marginal $\pi^\star_1$ ---i.e., the predicted status of cells at day 4--- and with $\nu$ the distribution of observed transcriptomes.
\para{Wasserstein-2 distance} We measure the accuracy of the predicted target population $\hat{\nu}$ with respect to the observed target population $\nu$ using the entropy-regularized Wasserstein distance \citep{cuturi2013sinkhorn} provided in the \texttt{OTT} library \citep{jax2018github,cuturi2022optimal}, defined as \begin{equation}\label{eq:reg-ot} W_{\varepsilon}(\hat{\nu}, \nu) \vcentcolon= \min_{\mathbf{P} \in U(\hat{\nu}, \nu)} \sum_{ij} \mathbf{C}_{ij}\,\mathbf{P}_{ij} - \varepsilon H(\mathbf{P}), \end{equation} where $\mathbf{C}$ is the matrix of pairwise squared Euclidean distances between the particles of $\hat{\nu}$ and $\nu$, $H(\mathbf{P}) \vcentcolon= -\sum_{ij} \mathbf{P}_{ij} (\log \mathbf{P}_{ij} - 1)$ and the polytope $U(\hat{\nu},\nu)$ is the set of $n\times m$ matrices $\{\mathbf{P}\in\mathbb{R}^{n \times m}_+, \mathbf{P}\mathbf{1}_m = \hat{\nu}, \mathbf{P}^\top\mathbf{1}_n=\nu\}$. \para{Maximum mean discrepancy} Kernel maximum mean discrepancy~\citep{gretton2012kernel} is another metric to measure distances between distributions, in our case between the predicted population $\hat{\nu}$ and the observed one $\nu$. Given two random variables $x$ and $y$ with distributions $\hat{\nu}$ and $\nu$, and a kernel function $\omega$, \citet{gretton2012kernel} define the squared MMD as: \begin{equation*} \text{MMD}(\hat{\nu},\nu; \omega) = \mathbb{E}_{x,x^\prime}[\omega(x, x^\prime)] + \mathbb{E}_{y,y^\prime}[\omega(y, y^\prime)] - 2\mathbb{E}_{x,y}[\omega(x, y)]. \end{equation*} We report an unbiased estimate of $\text{MMD}(\hat{\nu},\nu)$, in which the expectations are evaluated by averages over the population particles in each set. We utilize the RBF kernel and, as is usually done, report the MMD as an average over the length scales: $2, 1, 0.5, 0.1, 0.01, 0.005$. \para{Perturbation signature $\ell_2$} A common method to quantify the effect of a perturbation on a population is to compute its perturbation signature \citep[(PS)]{stathias2018drug}, computed via the difference in means between the distribution of perturbed states and control states of each feature, e.g., here individual genes. $\ell_2$(PS) then refers to the $\ell_2$-distance between the perturbation signatures computed on the observed and predicted distributions, $\nu$ and $\hat{\nu}$. The PS is defined as \begin{equation*} \text{PS}(\nu, \mu) = \frac{1}{m}\sum_{y_i \in \nu}{y_i} - \frac{1}{n}\sum_{x_i \in \mu}{x_i}, \end{equation*} where $n$ is the size of the unperturbed and $m$ of the perturbed population. We report the $\ell_2$ distance between the observed signature $\text{PS}(\nu, \mu)$ and the predicted signature $\text{PS}(\hat{\nu}, \mu)$, which is equivalent to simply computing the difference in the means of the observed and predicted distributions. \para{RMSD} To measure the quality of the matchings $(\hat{x}^i_0, \hat{x}^i_1)$ sampled from \textsc{SBalign} ---compared to the observed ones $(x^i_0, x^i_1)$--- we compute: \begin{equation} \text{RMSD}(\{x^i_1\}^n,\{\hat{x}^i_1\}^n) = \sqrt{\frac{1}{n}\sum^n_{i=1} \lVert x^i_1 - \hat{x}^i_1\rVert^2}, \end{equation} which, when squared, represents the mean of the squared norm of the differences between the predicted and observed statuses of the cells on day 4. \para{Cell type classification accuracy} We assess the quality of \textsc{SBalign} trajectories by trying to predict the differentiation fate of cells, starting from (our compressed representation of) their transcriptome. For this, we train a simple MLP-based classifier on observed cells and use it on the last time-frame of trajectories sampled from \textsc{SBalign} to infer the differentiation of cells on day 4.
We use the classifier \texttt{MLPClassifier} offered by the library \texttt{scikit-learn} with the following parameters: \begin{itemize} \item 2 hidden layers, each with a hidden dimension of 50, \item the \textit{logistic} function as non-linearity, \item $\ell_2$ regularization with coefficient $0.1$. \end{itemize} We report the subset accuracy of the predictions on the \textit{test} set, measured as the number of labels (i.e., cell types) coinciding with the ground truth. \subsubsection{Protein Docking} \label{subsec:app_prot_dock_eval} To evaluate our model, we utilize two metrics: Complex Root Mean Squared Deviation (Complex RMSD) and Interface Root Mean Squared Deviation (Interface RMSD). Given a ligand with $m$ residues and a receptor with $n$ residues, we denote the predicted bound structure with $\mathbf{Z}' \in \R^{(n + m) \times 3}$ and the ground truth bound structure with $\mathbf{Z}^{*} \in \R^{(n + m) \times 3}$. We first superimpose the predicted and ground truth bound structures using the Kabsch algorithm and then compute the Complex RMSD as $\text{C}_{\text{rmsd}} = \sqrt{\frac{1}{n+m}\norm*{\mathbf{Z}' - \mathbf{Z}^{*}}_F^2}$. The Interface RMSD $\text{I}_{\text{rmsd}}$ is computed similarly, but only using the residues of the two proteins that are within 8\r{A} of each other. \subsection{Network Architectures} \subsubsection{Cell Differentiation and Synthetic Datasets} We parameterize both $b^\theta(t, X_t)$ and $m^\phi(t, b_t, X_t)$ using a model composed of: \begin{enumerate} \item \textbf{\texttt{x\_enc}}: a 3-layer MLP expanding the spatial coordinates (or drift) into hidden states (of dimension 64 to 256); \item \textbf{\texttt{t\_enc}}: a sinusoidal embedding of time (of 64 to 256 dimensions), followed by a two-layer MLP; \item \textbf{\texttt{mlp}}: a 3-layer MLP which maps the concatenation of the embedded spatial and temporal information (the outputs of modules 1 and 2 above) to drift magnitude values along each dimension. \end{enumerate} After every linear layer (except the last one), we apply a non-linearity and dropout (level 0.1). In all the experiments, we set the diffusivity function $g(t)$ in \eqref{eq:SB-SDE} to a constant $g$, which is optimized (see \S~\ref{subsec:hyperparams}). \subsubsection{Protein Docking} \label{subsec:app_prot_dock_nn} For the scope of this paper, we use an MLP for both $b_t^{\theta}$ and $m^{\phi}$. As inputs, both $b_t^{\theta}$ and $m^{\phi}$ receive the input node features and the C$_\alpha$ coordinates at time $t$, as described in Section~\ref{sec:app_protein_docking_datasets}, with $m^{\phi}$ receiving the prediction $b_t^{\theta}$ as an additional input. Both models have $3$ hidden layers, each with a dimension of $64$, and an output dimension of $3$, with around 50K parameters in total. Our current architectures are not equivariant to global rotations and translations, which is a desirable property in protein docking, as the structures of the proteins themselves are invariant to the choice of reference coordinate frames. We leave a thorough exploration of other architectures, such as equivariant GNN architectures similar to those adopted in \citep{ganea2022independent}, to future work. \subsection{Hyperparameters} \label{subsec:hyperparams} In the following, we provide an overview of the selected hyperparameters as well as the chosen training procedures.
\subsubsection{Synthetic Tasks} We perform hyper-parameter optimization using the Python package \texttt{ray.tune} \citep{liaw2018tune} on: \begin{itemize} \item \textbf{activation}, chosen among \texttt{leaky\_relu}, \texttt{relu}, \texttt{selu} and \texttt{silu} as implemented in the Python library \texttt{PyTorch} \citep{pytorch2019paszke}. We find \texttt{selu} to achieve marginally better performance on toy datasets. \item \textbf{g}, the value of the diffusivity constant, chosen among $\{1, 2, 5, 10\}$. We find $g=1$ to yield optimal results. \end{itemize} \subsubsection{Cell Differentiation} We perform hyper-parameter optimization using the Python package \texttt{ray.tune} \citep{liaw2018tune} on: \begin{itemize} \item \textbf{activation}, chosen among \texttt{leaky\_relu}, \texttt{relu}, \texttt{selu}, and \texttt{silu} as implemented in the Python library \texttt{PyTorch} \citep{pytorch2019paszke}. We observe that \texttt{silu} brings noticeable performance improvements on the cell differentiation dataset. \item \textbf{g}, the value of the diffusivity constant, chosen among $\{0.01, 0.1, 0.8, 1, 1.2, 2, 5\}$. We find $g=1$ to yield optimal results. \end{itemize} \subsubsection{Protein Docking} \label{subsec:app_prot_dock_hparam} We use \textsc{Adam} as our optimizer with a learning rate of $0.001$ and a training batch size of $2$. For each ligand, we sample $5$ timepoints during every training epoch so that the model is exposed to different timepoints from the corresponding Brownian bridge for each ligand. This number was chosen as a tradeoff between CUDA memory and coverage of timepoints between $0$ and $1$. We use a regularization strength of $1.0$ for $m^{\phi}$ for all $t$. Inference on the validation set during training is carried out using an exponential moving average of the parameters, which is updated after every optimization step with a decay rate of $0.999$. Model training is set to a maximum of $1000$ epochs, but is typically stopped after $100$ epochs, beyond which no improvements in the validation metrics are observed. \section{Reproducibility} Code will be made available upon publication of this work. \subsection{Problem Formulation and Schr\"odinger Bridges} \para{Problem formulation} Suppose that we are given access to i.i.d. \emph{aligned} data $(\mathbf{x}_0^i,\mathbf{x}_1^i)_{i=1}^N$, where the marginal distribution of the $\mathbf{x}^i_0$'s is $\distinit$ and that of the $\mathbf{x}_1^i$'s is $\distend$. Typically, we view $\distinit$ as the empirical marginal distribution of a stochastic process observed at time $t= 0$, and likewise $\distend$ as the empirical marginal observed at $t=\horizon$. The goal is to reconstruct the stochastic process $\Pmargin$ based on $(\mathbf{x}_0^i,\mathbf{x}_1^i)_{i=1}^N$, \ie to \emph{interpolate} between $\distinit$ and $\distend$. Such a task is ubiquitous in biological applications. For instance, understanding how proteins dock to other biomolecules is of significant interest in biology and has become a topic of intense study in recent years \citep{ganea2022independent, tsaban2022harnessing, corso2022diffdock}. In the protein docking task, $\mathbf{x}_0^i$ represents the 3D structures of the unbound proteins, while $\mathbf{x}_1^i$ represents the 3D structure of the bound complex. Reconstructing a stochastic process that diffuses the $\mathbf{x}_0^i$'s to the $\mathbf{x}_1^i$'s is tantamount to recovering the energy landscape governing the docking process.
Similarly, in molecular dynamics simulations, we have access to trajectories $\left(\mathbf{x}_t^i\right)_{t \in [0, 1]}$, where $\mathbf{x}_0^i$ and $\mathbf{x}_1^i$ represent the initial and final positions of the $i$-th molecule respectively. Any learning algorithm using these simulations should be able to respect the provided alignment. \para{Diffusion Schr\"odinger bridges} To solve the interpolation problem, in \cref{sec:Methods}, we will invoke the framework of \acp{DSB}, which are designed to solve interpolation problems with \emph{unaligned} data. More specifically, given two marginals $\distinit$ and $\distend$, the \ac{DSB} framework proceeds by first choosing a reference process $\refpro$ using prior knowledge, for instance a simple Brownian motion, and then solve the entropy-minimization problem over all stochastic processes $\Pmargin$: \begin{equation} \label{eq:SB} \tag{SB} \min_{ \substack{ \Pinit = \distinit, \; \Pend = \distend} } \KL{\Pmargin}{\refpro}. \end{equation} Despite the fact that many methods exist for solving \eqref{eq:SB} \citep{de2021diffusion,chen2021likelihood,vargas2021solving,bunne2022recovering}, none of these approaches are capable of incorporating \emph{alignment} of the data. This can be seen by inspecting the objective \eqref{eq:SB}, in which the coupling information $(\mathbf{x}_0^i,\mathbf{x}_1^i)$ is completely lost as only its individual marginals $\distinit,\distend$ play a role therein. Unfortunately, it is well-known that tackling the marginals separately necessitates a forward-backward learning process known as the \acli{IPF} (IPF) procedure \citep{fortet1940resolution,kullback1968probability}, which constitutes the primary reason of high variance training, thereby confronting \acp{DSB} with numerical and scalability issues. Our major contribution, detailed in the next section, is therefore to devise the first algorithmic framework that solves the interpolation problem with aligned data \emph{without} resorting to IPF. \subsection{Learning aligned diffusion Schr\"odinger bridges} \para{Static SB and aligned data} Our starting point is the simple and classical observation that \eqref{eq:SB} is the continuous-time analogue of the \emph{entropic optimal transport}, also known as the \emph{static} \acl{SB} problem \citep{leonard2013survey,chen2021stochastic,Peyre2019computational}: \begin{equation} \label{eq:static-SB} \pi^\star \vcentcolon= \argmin_{ \substack{ \Pinit = \distinit, \; \Pend = \distend} } \KL{\mathbb{P}_{0,1}}{\refprobase_{0,1}} \end{equation}where the minimization is over all \emph{couplings} of $\distinit$ and $\distend$, and $\refprobase_{0,1}$ is simply the joint distribution of $\refpro$ at $t=0,\horizon$. In other words, if we denote by $\Psol$ the stochastic process that minimizes \eqref{eq:SB}, then the joint distribution $\Psol[0,\horizon]$ necessarily coincides with the $\pi^\star$ in \eqref{eq:static-SB}. Moreover, since in \acp{DSB}, the data is always assumed to arise from $\Psol$, we see that: \begin{quote} The \emph{aligned} data $(\mathbf{x}_0^i,\mathbf{x}_1^i)_{i=1}^N$ constitutes samples of $\pi^\star$. \end{quote} This simple but crucial observation lies at the heart of all derivations to come. Our central idea is to represent $\Psol$ via two different, but equivalent, characterizations, both of which involve $\pi^\star$: That of a \emph{mixture} of reference processes with pinned end points, and that of conditional \acdefp{SDE}. 
\para{$\Psol$ from $\pi^\star$: $\refpro$ with pinned end points} For illustration purposes, from now on, we will assume that the reference process $\refpro$ is a Brownian motion with diffusion coefficient $\volat$:\footnote{\looseness -1 Extension to more involved reference processes is conceptually straightforward but notationally clumsy. Furthermore, reference processes of the form \eqref{eq:gtWt} are dominant in practical applications \citep{song2020score, bunne2022recovering}, so we omit the general case. } \begin{equation} \label{eq:gtWt} \dd \refpro = \volat \dWiener. \end{equation} In this case, it is well-known that $\refpro$ \emph{conditioned} to start at $\mathbf{x}_0$ and end at $\mathbf{x}_1$ can be written in another \ac{SDE} \citep{mansuy2008aspects, liu2023learning}: \begin{equation} \label{eq:BB} \dd X_t = \volatsq[\ctime] \frac{\mathbf{x}_1-X_t}{\cvolat[\horizon]-\cvolat[\ctime]} \dt + \volat\dWiener \end{equation} where $X_0 = \mathbf{x}_0$ and \begin{equation} \cvolat\vcentcolon= \int_0^\ctime \volatsq \dd s. \end{equation}We call the processes in \eqref{eq:BB} the \emph{scaled Brownian bridges} as they generalize the classical Brownian bridge, which corresponds to the case of $\volat \equiv 1$. The first characterization of $\Psol$ is then an immediate consequence the following classical result in \acl{SB} theory: Draw a sample $(\mathbf{x}_0, \mathbf{x}_1) \sim \pi^\star$ and connect them via \eqref{eq:BB}. The resulting path is a sample from $\Psol$ \citep{leonard2013survey, chen2021stochastic}. In other words, $\Psol$ is a \emph{mixture} of scaled Brownian bridges, with the mixing weight given by $\pi^\star$. \para{$\Psol$ from $\pi^\star$: \ac{SDE} representation} Another characterization of $\Psol$ is that it is itself given by an \ac{SDE} of the form \citep{leonard2013survey, chen2021stochastic} \begin{equation} \label{eq:SB-SDE} \dd X_t = \volatsq[\ctime]b_t(X_t) \dt + \volat\dWiener. \end{equation} Here, $b_t: \R^d \to \R^d$ is a time-dependent drift function that we wish to learn. Now, by Doob's h-transform, we know that the \ac{SDE} \eqref{eq:SB-SDE} \emph{conditioned} to start at $\mathbf{x}_0$ and end at $\mathbf{x}_1$ is given by another \ac{SDE} \citep{doob1984classical,rogers2000diffusions}: \begin{equation} \label{eq:SB-SD-conditioned} \dd X_t = \volatsq[\ctime]\bracks*{b_t(X_t) + \nabla \log h_t(X_t) }\dt +\volat \dWiener \end{equation} where $h_t(\mathbf{x}) \vcentcolon= \prob(X_1 = \mathbf{x}_1\vert X_t = \mathbf{x})$ is the \emph{Doob's $h$ function}. Notice that we have suppressed the dependence of $h_t$ on $\mathbf{x}_0$ and $\mathbf{x}_1$ for notational simplicity \para{Loss function} Since both \eqref{eq:BB} and \eqref{eq:SB-SD-conditioned} represent $\Psol$, the solution of the \acp{DSB}, the two \acp{SDE} must coincide. ~In other words, suppose we parametrize $b_t$ as $b_t^\theta$, then, by matching terms in \eqref{eq:BB} and \eqref{eq:SB-SD-conditioned}, we can learn the optimal parameter $\theta^\star$ via optimization of the loss function \begin{equation} \label{eq:loss} L(\theta) \vcentcolon= \exof*{\int_0^1 \norm*{ \frac{\mathbf{x}_1-X_t}{\cvolat[\horizon]-\cvolat[\ctime]}-\nabla \log h_t^\theta(X_t)}^2 \dt } \end{equation}where $h_t^\theta$ is determined by $b_t^\theta$ as well as the drawn samples $(\mathbf{x}_0,\mathbf{x}_1)$. In short, assuming that, for each $\theta$, we can compute $h_t^\theta$ \emph{based only on} $b_t^\theta$, we can then backprop through \eqref{eq:loss} and optimize it using any off-the-shelf algorithm. 
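In practice, evaluating \eqref{eq:loss} only requires samples $(t, X_t)$ from the scaled Brownian bridge between each aligned pair. For the constant-diffusivity case $\volat\equiv g$ used in our experiments, the marginal of \eqref{eq:BB} is Gaussian and can be sampled in closed form; the snippet below is an illustrative sketch, not the released implementation.
\begin{verbatim}
import numpy as np

def sample_bridge(x0, x1, t, g=1.0, rng=np.random.default_rng()):
    """Draw X_t from the Brownian bridge pinned at x0 (t=0) and x1 (t=1).

    For constant diffusivity g, the marginal of the bridge SDE is Gaussian with
    mean x0 + t*(x1 - x0) and variance g**2 * t * (1 - t) per coordinate.
    """
    mean = x0 + t * (x1 - x0)
    std = g * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(x0.shape)

# Usage: one training point (t, X_t) for an aligned pair (x0, x1).
x0, x1 = np.zeros(2), np.ones(2)
t = float(np.random.uniform())
xt = sample_bridge(x0, x1, t)
\end{verbatim}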
\para{A slightly modified \eqref{eq:loss}} Even with infinite data and a neural network with sufficient capacity, the loss function defined in \eqref{eq:loss} does not converge to $0$. For the purpose of numerical stability, we instead propose to modify \eqref{eq:loss} to: \begin{equation} \label{eq:loss_modified} L(\theta) \vcentcolon= \exof*{\int_0^1 \norm*{\frac{\mathbf{x}_1-X_t}{\cvolat[\horizon]-\cvolat[\ctime]}- \left(b_t^\theta + \nabla \log h_t^\theta(X_t)\right)}^2 \dt } \end{equation}which is clearly equivalent to \eqref{eq:loss} at the true solution of $b_t$. Notice that \eqref{eq:loss_modified} bears a form similar to the popular score-matching objective employed in previous works \citep{song2019generative,song2020score}: \begin{equation} \label{eq:score_matching} L(\theta) \vcentcolon= \exof*{\int_0^1 \norm*{\nabla \log p(\mathbf{x}_t | \mathbf{x}_0)- s^\theta(X_t, t)}^2 \dt }, \end{equation} where the term $\frac{\mathbf{x}_1-X_t}{\cvolat[\horizon]-\cvolat[\ctime]}$ is akin to $\nabla \log p(\mathbf{x}_t | \mathbf{x}_0)$, while $\left(b_t^\theta + \nabla \log h_t^\theta(X_t)\right)$ corresponds to $s^\theta(X_t, t)$. \begin{algorithm}[t] \caption{\textsc{SBalign}} \label{alg:SBalign} \begin{algorithmic} \STATE {\bfseries Input:} Aligned data $(\mathbf{x}^i_0,\mathbf{x}^i_1)_{i=1}^N$, learning rates $\lrf,\lrb$, number of iterations $K$ \smallskip \STATE Initialize $\paramf \subs \paramf_0$, $\paramb \subs \paramb_0$. \FOR{$k=1$ {\bfseries to} $K$} \STATE Draw a mini-batch of samples from $(\mathbf{x}^i_0,\mathbf{x}^i_1)_{i=1}^N$ \STATE Compute empirical average of \eqref{eq:loss_final} with mini-batch. \STATE Update $\paramb \subs \paramb - \lrb\nabla L(\theta,\phi)$ \STATE Update $\paramf \subs \paramf - \lrf\nabla L(\theta,\phi)$ \ENDFOR \end{algorithmic} \end{algorithm} \para{Computing $h_t^\theta$} Inspecting $h_t$ in \eqref{eq:SB-SD-conditioned}, we see that, given $(\mathbf{x}_0,\mathbf{x}_1)$, it can be written as the conditional expectation of an indicator function: \begin{equation} \label{eq:h-semigroup} h_t(\mathbf{x}) = \prob(X_1 = \mathbf{x}_1\vert X_t = \mathbf{x}) = \exof*{\one_{\{\mathbf{x}_1\}}(X_\horizon)\vert X_t = \mathbf{x}} \end{equation}where the expectation is over \eqref{eq:SB-SDE}. Functions of the form \eqref{eq:h-semigroup} lend themselves well to computation, since evaluating them only requires simulating the \emph{unconditioned} paths. Furthermore, in order to avoid overfitting on the given samples, it is customary to replace the ``hard'' constraint $\one_{\{\mathbf{x}_1\}}$ by its \emph{smoothed} version \citep{zhang2021path, holdijk2022path}: \begin{equation} \label{eq:softdoob} h_{t,\tau}(\mathbf{x}) \vcentcolon= \exof*{ \exp\parens*{-\frac{1}{2\tau}\norm{X_\horizon-\mathbf{x}_1}^2 } \vert X_t = \mathbf{x}}. \end{equation}Here, $\tau$ is a regularization parameter that controls how much we ``soften'' the constraint, and we have $\lim_{\tau\to 0} h_{t,\tau} = h_t$. Although the computation of \eqref{eq:softdoob} can be done via a standard application of the Feynman–Kac formula \citep{rogers2000diffusions}, an altogether easier approach is to parametrize $\nabla\log h_{t,\tau}$ by a second neural network $m^{\phi}$ and perform alternating minimization steps on $b_t^\theta$ and $m^{\phi}$. This way, we avoid simulating even the unconditioned paths of \eqref{eq:SB-SDE}, thereby further reducing the variance in training.
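To connect Algorithm~\ref{alg:SBalign} to code, the sketch below implements one mini-batch step on the regularized objective \eqref{eq:loss_final} introduced next, assuming a constant diffusivity $g$, networks \texttt{b\_theta(t, x)} and \texttt{m\_phi(t, x)}, and a single optimizer over both parameter groups (the algorithm itself uses separate learning rates); all names and shapes are illustrative assumptions, not the released implementation.
\begin{verbatim}
import torch

def sbalign_step(b_theta, m_phi, opt, x0, x1, g=1.0, lam=1.0):
    """One mini-batch update for SBalign (sketch)."""
    # Sample a time t and a bridge point X_t for every aligned pair (x0, x1).
    t = torch.rand(x0.shape[0], 1)
    xt = x0 + t * (x1 - x0) + g * torch.sqrt(t * (1 - t)) * torch.randn_like(x0)

    # Bridge drift (x1 - X_t) / (sigma(1) - sigma(t)), with sigma(t) = g^2 t.
    target = (x1 - xt) / (g ** 2 * (1.0 - t))
    m_val = m_phi(t, xt)                        # stands in for grad log h_t
    residual = target - (b_theta(t, xt) + m_val)

    loss = (residual ** 2).sum(-1).mean() + lam * (m_val ** 2).sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()                                  # updates both theta and phi
    return loss.item()
\end{verbatim}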
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/fig_results_synthetic.pdf} \caption{Experimental results on the Moon dataset (\textbf{a-c}) and T-dataset (\textbf{d-f}). The top row shows the trajectory sampled using the learned drift, and the bottom row shows the matching based on the learnt drift. Compared to other baselines, \textsc{SBalign} is able to learn an appropriate drift respecting the true alignment. (\textbf{f}) further showcases the utility of \textsc{SBalign}'s learnt drift as a suitable reference process to improve other training methods.} \label{fig:results_spiral} \end{figure*} \para{Regularization} Since it is well-known that $\nabla \log h_t$ typically explodes when $\ctime\to 1$ \citep{liu2023learning}, it is important to regularize the behavior of $m^{\phi}$ for numerical stability. Moreover, in practice, it is desirable to learn a drift $b_t^\theta$ that respects the data alignment \emph{in expectation}: If $(\mathbf{x}_0,\mathbf{x}_1)$ is an input pair, then multiple runs of the \ac{SDE} \eqref{eq:SB-SDE} starting from $\mathbf{x}_0$ should, on average, produce samples that are in the proximity of $\mathbf{x}_1$. This observation implies that we should search for drifts whose corresponding $h$-transforms are diminishing. A simple way to simultaneously achieve the above two requirements is to add an $\ell^2$-regularization term, resulting in the loss function: \begin{align} \label{eq:loss_final} L(\theta,\phi) &\vcentcolon= \mathbb{E} \Bigg[\int_0^1 \norm*{\frac{\mathbf{x}_1-X_t}{\cvolat[\horizon]-\cvolat[\ctime]}- \left(b_t^\theta + m^{\phi}(X_t)\right)}^2 \\ &\hspace{35mm}+ \lambda_t \norm{m^{\phi}(X_t)}^2 \dt \Bigg] \nonumber \end{align}where $\lambda_t$ can either be constant or vary with time. The overall algorithm is depicted in \cref{alg:SBalign}. \subsection{Paired Schr\"odinger bridges as prior processes} \label{subsec:prior_drift} Classical SBs are unsuitable in cases where the alignments are known, because they only consider samples from $\distinit$ and $\distend$ and disregard those drawn from the (optimal) coupling $\pi^\star$. However, while our method's reliance on this crucial knowledge is precisely what allows it to avoid IPF-like iterates, it may become a limitation when insufficient information on alignments is available. In such a situation, while it is unrealistic to hope for an accurate solution to the aligned SB problem, the interpolation between $\distinit$ and $\distend$ learned by \textsc{SBalign} (\ref{eq:SB-SDE}) can potentially still be leveraged to obtain a better reference process when solving a classical SB on the same marginals ---i.e. the term $b_t(X_t)$ learned via \textsc{SBalign} can, in fact, be used \textit{as is} to construct a data-informed alternative $\Tilde{\refpro}$ to the standard Brownian motion (\ref{eq:gtWt}). Improved reference processes, either pre-trained or data-informed, have been previously considered in the literature. For instance, both \citet{de2021diffusion} and \citet{chen2021likelihood} use a pre-trained reference process for challenging image interpolation tasks. This approach, however, relies on DSBs trained using the classical score-based generative modeling objective between a Gaussian and the data distribution. It therefore pre-trains the reference process on a related ---but different--- process, i.e., the one mapping Gaussian noise to data rather than $\distinit$ to $\distend$.
An alternative, proposed by \citet{bunne2022recovering}, draws on the closed-form solution of SBs between two Gaussian distributions, which are chosen to approximate $\distinit$ and $\distend$, respectively. Unlike our method, these alternatives construct better prior drifts by falling back to simpler and related tasks, or approximations of the original problem. We instead propose to shape a coarse-grained description of the drift based on alignments sampled directly from $\mathbb{P}_{0,1}$. \subsection{Synthetic Experiments} \label{sec:synthetic} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/fig_cell_trajectories_matchings.pdf} \caption{Cell differentiation trajectories based on (\textbf{a}) the ground truth and (\textbf{b-d}) learned drifts. \textsc{SBalign} is able to learn an appropriate drift underlying the true differentiation process while respecting the alignment. (\textbf{d}) Using the learned drift from \textsc{SBalign} as a reference process helps improve the drift learned by other training methods.} \label{fig:results_cell_traj} \end{figure*} We run our algorithm on two synthetic datasets (figures in \S~\ref{app:datasets}), and compare the results with classic Schrödinger bridge models, i.e., the forward-backward SB formulation proposed by \cite{chen2021likelihood}, herein referred to as \textsc{fbSB}. We equip the baseline with prior knowledge, as elaborated below, to further challenge \textsc{SBalign}. \para{Moon dataset} The first synthetic dataset (Fig.~\ref{fig:results_spiral}a-c) consists of two distributions, each supported on two semi-circles ($\distinit$ drawn in \textit{blue} and $\distend$ in \textit{red}). $\distend$ was obtained from $\distinit$ by applying a clockwise rotation around the center, i.e., by making points in the upper blue arm correspond to those in the right red one. This transformation is clearly not the most likely one under the assumption of Brownian motion of particles and should therefore not be found as the solution of a classical SB problem. This is confirmed by \textsc{fbSB} trajectories (Fig.~\ref{fig:results_spiral}a), which tend to map points to their closest neighbor in $\distend$ (e.g., some points in the upper arm of $\distinit$ are brought towards the left rather than towards the right). While being a minimizer of \eqref{eq:SB}, such a solution completely disregards our prior knowledge on the alignment of particles, which is instead reliably reproduced by the dynamics learned by \textsc{SBalign} (Fig.~\ref{fig:results_spiral}b). One way of encoding this additional information on the nature of the process is to modify $\refpro$ by introducing a clockwise radial drift, which describes the prior tangential velocity of particles moving circularly around the center. Solving the classical SB with this updated reference process indeed generates trajectories that respect most alignments (Fig.~\ref{fig:results_spiral}b), but requires a hand-crafted expression of the drift that is only possible in very simple cases. \para{T dataset} In most real-world applications, it is very difficult to define an appropriate reference process $\refpro$, which respects the known alignment without excessively distorting the trajectories from a solution to \eqref{eq:SB}. This is already visible in simple examples like (Fig.~\ref{fig:results_spiral}d-f), in which the value of good candidate prior drifts at a specific location needs to vary wildly in time. 
In this dataset, $\distinit$ and $\distend$ are both bi-modal distributions, each supported on two of the four extremes of an imaginary T-shaped area. We target alignments that connect the two arms of the T as well as the top cloud with the bottom one. We succeed in learning them with \textsc{SBalign} (Fig.~\ref{fig:results_spiral}e) but unsurprisingly fail when using the baseline \textsc{fbSB} (Fig.~\ref{fig:results_spiral}d) with a Brownian motion prior. \looseness -1 In this case, however, attempts at designing a better reference drift for \textsc{fbSB} must take into account the additional constraint that the horizontal and vertical particle trajectories intersect (see Fig.~\ref{fig:results_spiral}e), i.e., they cross the same area at times $t_h$ and $t_v$ (with $t_h > t_v$). This implies that the drift $b_t$, which initially points downwards (when $t < t_v$), should swiftly turn rightwards (for $t > t_h$). Setting imprecise values for one of $t_h$ and $t_v$ when defining custom reference drifts for classical SBs would hence not lead to the desired result and, worse, would actively disturb the flow of the other particle group. \looseness -1 As described in \S~\ref{subsec:prior_drift}, in presence of hard-to-capture requirements on the reference drift, the use of \textsc{SBalign} offers a remarkably easy and efficient way of learning a parameterization of it. For instance, when using the drift obtained by \textsc{SBalign} as reference drift for the computation of the SB baseline (\textsc{fbSB}), we find the desired alignments (Fig.~\ref{fig:results_spiral}f). \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/fig_cell_pred_types.pdf} \caption{Cell type prediction on the differentiation dataset. All distributions are plotted on the first two principal components. \textbf{a-b:} Ground truth cell types on day 2 and day 4 respectively. \textbf{c-d:} \textsc{fbSB} and \textsc{SBalign} cell type predictions on day 4. \textsc{SBalign} is able to better model the underlying differentiation processes and capture the diversity in cell types.} \label{fig:results_cell_class} \end{figure*} \vspace{-5pt} \subsection{Cell Differentiation} \label{sec:cell} \vspace{-5pt} \looseness -1 Biological processes are determined through heterogeneous responses of single cells to external stimuli, i.e., developmental factors or drugs. Understanding and predicting the dynamics of single cells subject to a stimulus is thus crucial to enhance our understanding of health and disease and the focus of this task. Most single-cell high-throughput technologies are destructive assays ---i.e., they destroy cells upon measurement--- allowing us to only measure \textit{unaligned} snapshots of the evolving cell population. Recent methods address this limitation by proposing (lower-throughput) technologies that keep cells alive after transcriptome profiling \citep{chen2022live} or that genetically tag cells to obtain a clonal trace upon cell division \citep{weinreb2020lineage}. \para{Dataset} To showcase \textsc{SBalign}'s ability to make use of such (partial) alignments when inferring cell differentiation processes, we take advantage of the genetic barcoding system developed by \citet{weinreb2020lineage}. With a focus on fate determination in hematopoiesis, \citet{weinreb2020lineage} use expressed DNA barcodes to clonally trace single-cell transcriptomes over time. 
The dataset consists of two snapshots: the first, recorded on day 2, when most cells are still undifferentiated (see Fig.~\ref{fig:results_cell_class}a), and the second, recorded on day 4, comprising many different mature cell types (see Fig.~\ref{fig:results_cell_class}b). Using \textsc{SBalign} as well as the baseline \textsc{fbSB}, we attempt to reconstruct cell evolution between day 2 and day 4, all while capturing the heterogeneity of emerging cell types. For details on the dataset, see \S~\ref{app:datasets}. \begin{table} \caption{\textbf{Cell differentiation prediction results.} Shown are distributional metrics (MMD, $\text{W}_{\epsilon}$), alignment-based metrics ($\ell_2$, RMSD), and cell type classification accuracy for different methods on the cell differentiation dataset. \vspace{-5pt}} \label{tab:results_cells} \centering \adjustbox{max width=\linewidth}{% \begin{tabular}{lccccc} \toprule & \multicolumn{5}{c}{\textbf{Cell Differentiation}} \\ \cmidrule(lr){2-6} \textbf{Methods} & MMD $\downarrow$ & $\text{W}_\varepsilon \downarrow$ & $\ell_2(\text{PS}) \downarrow$ & RMSD $\downarrow$ & Class. Acc. $\uparrow$ \\ \midrule \textsc{fbSB}& 1.58e-2 & 12.6 & 4.07 & 9.63e-1 & 58.0\% \\ \textsc{fbSB} with \textsc{SBalign} & 5.15e-3 & 10.6 & 0.95 & 9.88e-1 & 49.0\% \\ \textsc{\bf{\textsc{SBalign}}}& 9.77e-3 & 11.2 & 1.24 & 9.28e-1 & 56.0\% \\ \bottomrule \vspace{-15pt} \end{tabular} } \end{table} \para{Baselines} \looseness -1 We benchmark \textsc{SBalign} against previous \acp{DSB} such as \citep[\textsc{fbSB}]{chen2021likelihood}. Beyond this, we compare \textsc{SBalign} in the setting of learning a prior reference process. Naturally, cell division processes, and subsequently the propagation of the barcodes, are very noisy. While this genetic annotation provides some form of assignment, it does not capture the full developmental process. We thus test \textsc{SBalign} in a setting where it learns a prior from such partial alignments and, plugged into \textsc{fbSB}, is fine-tuned on the full dataset. \para{Evaluation metrics} To assess the performance of \textsc{SBalign} and the baselines, we monitor several metrics, which include distributional distances, i.e., MMD~\citep{gretton2012kernel} and $\text{W}_{\epsilon}$~\citep{cuturi2013sinkhorn}, as well as average scores, i.e., $\ell_2(\text{PS})$ \citep{bunne2021learning} and RMSD. Moreover, we also train a simple neural network-based classifier to annotate the cell type on day 4 and we report the accuracy of the predicted vs. true cell type for all the models. See \S~\ref{app:metrics} for further details. \para{Results} \textsc{SBalign} accurately predicts cellular differentiation processes in hematopoiesis from day 2 to day 4, as visible from the (2D projections of the) learned trajectories and alignments (Fig.~\ref{fig:results_cell_traj}c) and the quantitative evaluation in Table~\ref{tab:results_cells}. \textsc{SBalign} outperforms \textsc{fbSB} in all but the cell-type accuracy metric: remarkably, our method exceeds the performance of the baseline also on distributional metrics, not only on alignment-based ones. Further, we evaluate how well \textsc{SBalign} recovers the heterogeneity of emerging cell types throughout the developmental process on day 4. The results are displayed in Fig.~\ref{fig:results_cell_class}d and show that, while capturing the overall differentiation trend, \textsc{SBalign} (as well as \textsc{fbSB}) struggles to isolate rare cell types.
Lastly, we employ \textsc{SBalign} to learn a prior process from noisy alignments based on genetic barcode annotations. When using this reference process within \textsc{fbSB}, we learn an SB which compensates for inaccuracies stemming from the stochastic nature of cell division and barcode redistribution and which achieves better scores on distributional metrics (see Tab.~\ref{tab:results_cells}). Further results can be found in \S~\ref{app:more_results}. \begin{figure*}[t] \centering \includegraphics[width=.9\textwidth]{figures/fig_pred_1QA9.pdf} \caption{Ground truth and predicted bound structures for the complex with PDB ID: 1QA9. \textsc{SBalign} is able to find the true binding interface compared to \textsc{EquiDock}. \vspace{-15pt}} \label{fig:results_docked_1QA9} \end{figure*} \vspace{-5pt} \subsection{Protein Docking} \vspace{-5pt} In ({\em computational}) protein docking, the goal is to predict the 3D structure of the bound (docked) state of a protein pair, given the unbound states of the corresponding proteins. These proteins are denoted (arbitrarily) as the ligand and receptor respectively. For the scope of this paper, and following previous work, we focus on the rigid docking setup. However, our algorithm can also be applied to flexible protein docking, and we leave a full treatment of this problem to future work. \para{Experimental setup} Our setup follows a similar convention as \textsc{EquiDock} \citep{ganea2022independent}. To summarize, the unbound structure of the ligand is derived by applying a random rotation and translation to the corresponding bound structure, while the receptor is held fixed w.l.o.g. Applying a different rotation and translation to each ligand can however result in a different Brownian bridge for each complex, resulting in limited meaningful signal for learning $b_t^\theta$. To avoid this, we sample a rotation and translation at the start of training and apply the same rotation and translation to all complexes across training, validation, and testing. Additional details regarding this setup can be found in \S~\ref{app:datasets}. \para{Dataset} We use the DB5.5 dataset \citep{vreven2015updates} for our empirical evaluation. The DB5.5 dataset is a standard dataset used in protein-protein docking, however, it only has 253 complexes. We utilize the same splits as EquiDock \citep{ganea2022independent}, with 203 complexes in the training set, 25 complexes in the validation set, and 25 complexes in the test set. For the evaluation in Table~\ref{tab:results_docking}, we use the full DB5.5 test set. For ligands in the test set, we generate the corresponding unbound versions by applying the rotation and translation sampled during training. \para{Baselines} We compare our method to the GNN-based model \textsc{EquiDock} as well as traditional docking software including \textsc{Attract}~\citep{attract2017,de2015web}, \textsc{HDock}~\citep{yan2020hdock}, \textsc{ClusPro}~\citep{desta2020performance,kozakov2017cluspro}, and \textsc{PatchDock}~\citep{mashiach2010integrated,schneidman2005patchdock}. As mentioned in the paragraph above, for ligands in the test set, we generate the corresponding unbound versions by applying the rotation and translation sampled during training. We evaluate the trained model from \textsc{EquiDock} and \textsc{SBalign} on these unbound structures and report corresponding evaluation metrics. For the remaining baselines, we include the numbers from \citep{ganea2022independent}. 
These baselines typically sample several candidate complexes by considering small increments of rotation angles. We expect this to make them somewhat invariant to the arbitrary initialization, so that the corresponding docking scores are not severely impacted. \para{Evaluation metrics} We report two metrics, Complex Root Mean Square Deviation (Complex RMSD) and Interface Root Mean Square Deviation (Interface RMSD). Following \citep{ganea2022independent}, the ground truth and predicted complex structures are first superimposed using the Kabsch algorithm \citep{kabsch1976solution}, and the Complex RMSD is then computed between the superimposed versions. A similar procedure is used for computing the Interface RMSD, but only using the residues from the two proteins that are within $8\,$\r{A} of each other. More details can be found in \S~\ref{app:metrics}. \para{Results} The model performance is summarized in Table~\ref{tab:results_docking}. Our method \textsc{SBalign} considerably outperforms \textsc{EquiDock} across all metrics. \textsc{SBalign} also achieves comparable or better performance than traditional docking software without relying on extensive candidate sampling and re-ranking or learning surface templates from parts of the current test set. An example of docked structures, in direct comparison with \textsc{EquiDock}, is displayed in Fig.~\ref{fig:results_docked_1QA9}. Further visualizations and results can be found in \S~\ref{app:more_results}. \label{sec:protein} \begin{table} \caption{\textbf{Rigid docking results.} Complex and interface RMSD between predicted and true bound structures (after Kabsch alignment). $^*$ denotes methods for which we use values directly from \citep{ganea2022independent}. All other results show the performance on our test set. \vspace{-5pt} } \label{tab:results_docking} \centering \adjustbox{max width=\linewidth}{% \begin{tabular}{lcccccc} \toprule & \multicolumn{6}{c}{\textbf{DB5.5 Test Set}} \\ & \multicolumn{3}{c}{Complex RMSD} & \multicolumn{3}{c}{Interface RMSD} \\ \cmidrule(lr){2-7} \textbf{Methods} & Median & Mean & Std & Median & Mean & Std\\ \midrule \textsc{Attract}$^*$ & 9.55 & 10.09 & 9.88 & 7.48 & 10.69 & 10.90 \\ \textsc{HDock}$^*$ & 0.30 & 5.34 & 12.04 & 0.24 & 4.76 & 10.83 \\ \textsc{ClusPro}$^*$& 3.38 & 8.25 & 7.92 & 2.31 & 8.71 & 9.89 \\ \textsc{PatchDock}$^*$ & 18.26 & 18.00 & 10.12 & 18.88 & 18.75 & 10.06 \\ \textsc{\textsc{EquiDock}}$^*$& 14.13 & 14.72 & 5.31 & 11.97 & 13.23 & 4.93 \\ \cmidrule(lr){1-7} \textsc{\textsc{EquiDock}}& 14.12 & 14.73 & 5.31 & 11.97 & 13.23 & 4.93 \\ \textsc{\bf{\textsc{SBalign}}}& 6.59 & 6.69 & 2.04 & 7.69 & 8.11 & 2.39 \\ \bottomrule \vspace{-15pt} \end{tabular} } \end{table}
\title{Aligned Diffusion Schr\"odinger Bridges} \author[1,2]{Vignesh Ram Somnath$^*$} \author[1,3]{Matteo Pariset$^*$} \author[1]{Ya-Ping Hsieh} \author[2]{Maria Rodriguez Martinez} \author[1]{Andreas Krause} \author[1]{Charlotte Bunne} \affil[1]{Department of Computer Science, ETH Z\"urich} \affil[2]{IBM Research Z\"urich} \affil[3]{Department of Computer Science, EPFL} \footnotetext[1]{Equal contribution.}
\section*{Acknowledgements} This publication was supported by the NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation, as well as the European Union's Horizon 2020 research and innovation programme (grant 826121). We thank Caroline Uhler for introducing us to the dataset by \citet{weinreb2020lineage}, which was instrumental in this research.
{ "arxiv_id": "2302.11287", "language": "en", "timestamp": "2023-02-23T02:12:49", "url": "https://arxiv.org/abs/2302.11287", "yymm": "2302" }
\section{Introduction}\label{sec:1} Many real-world problems can be formulated from an equation of the form $Tu=u$, where $u$ belongs to some convex set $C$ and $T$ is in general a nonlinear operator. This mathematical equation is generally named as {\it fixed-point equation}, $C$ is the {\it solution set} of the problem and $u$ is called a {\it fixed-point} of $T$. When $C$ is inserted into the framework of a metric space and the operator $T$ has some metric-topological property, we then say that $Tu=u$ addresses a {\it metric fixed-point problem}. An interesting topic in metric fixed point theory concerns the so-called fixed point property (FPP). Given a Banach space $X$, a closed bounded convex subset $C$ of $X$ is said to have the FPP when every nonexpansive (i.e., $1$-Lipschitz continuous) mapping $T\colon C\to C$ has a fixed point. If every such $C$ has the FPP then we say that $X$ has the FPP. The study on the FPP dates from the 1960s, and has since been a highly active area of research, highlighting not only its relevance per se, but also its deep connections with Banach space theory. In a non-exhaustive list, we mention here the works of F. Browder \cite{Brd1,Brd2} and W. Kirk \cite{Ki} who in 1965 obtained the first positive results (i.e., showed FPP) in case when $X$ is either a Hilbert space, uniformly convex or, more generally, a reflexive space with normal structure. In 1981 B. Maurey \cite{M} proved that every reflexive subspace of $L_1[0,1]$ has FPP. In contrast, D. Alspach \cite{A} exhibited a weakly compact convex subset of $L_1[0,1]$ that fails to have it. Noteworthily, P. Dowling and C. Lennard \cite{DLT2} proved in 1997 that every nonreflexive subspace of $L_1[0,1]$ fails FPP. One can go through \cite{BP} for recent studies regarding the FPP in generalized Lebesgue spaces. Often, most of the technical issues that arise in the study of the FPP are in fact intrinsic manifestations of the geometric nature of norms. It is then interesting from a theoretical viewpoint to know which renormings of $X$ have the FPP. Recall that $\mathrm{c}_0$ and $\ell_1$ are among the standard examples of Banach spaces for which FPP fails. Additionally, $\ell_1(\Gamma)$ ($\Gamma$ uncountable) and $\ell_\infty$ are examples of spaces that cannot be equivalently renormed to have the FPP (cf. e.g., \cite{GK} and \cite{DLT1}). However, in 2011, Hern\'andez-Linares \cite{H-L} showed that $L_1[0,1]$ can be renormed to have FPP for affine nonexpansive mappings. Thus, understanding the essence of certain structural aspects is, sometimes, a necessary and important step for building nonexpansive mappings without fixed points. Even in some special, but not unimportant, circumstances this may play a crucial role in solving hard problems of the theory. There are plenty of instances of such situations, see \cite{C-SDFJLST} and references therein. As a sample, no space containing asymptotically isometric copies of $\ell_1$ and $\mathrm{c}_0$ as well, may have the FPP (see \cite{DJLT, DLT2, DLT3}). It should also be pointed out that P. Dowling, W. B. Johnson, C. Lennard and B. Turett showed (cf. \cite[Theorem 1]{DJLT}) that the family $\{ \| \cdot\|_\gamma\}_{\gamma}$ of renormings of $\ell_1$ given by \[ \| x\|_\gamma=\sup_{n\in \mathbb{N}}\gamma_n \sum_{k=n}^\infty | \xi_k|\quad \text{for all } x=(\xi_n)_n \in \ell_1, \] where $\gamma=\{ \gamma_n\}_{n=1}^\infty$ is a sequence in $(0,1)$ that strictly increases to $1$, fail to contain asymptotically isometric copies of $\ell_1$. Surprisingly, in 2008 P. K. 
Lin \cite{Lin2} proved for \[ \gamma_\star=\Big\{ \frac{8^n}{1 + 8^n}\Big\}_{n=1}^\infty \] that $(\ell_1, \| \cdot\|_{\gamma_\star})$ has the FPP, thus exhibiting the first example of a nonreflexive Banach space that enjoys the fixed point property. In addition to Lin's result, it is worth pointing out a 2009 result of Domingu\'ez Benavides \cite{D-B} asserting that every reflexive space can be renormed to have the fixed point property. This is in fact a meaningful improvement of a result of van Dulst \cite[Theorem 1]{vD} which encompasses the class of all separable reflexive spaces. It should be stressed, however, that the question of whether reflexive spaces have the FPP remains open (even for super-reflexive spaces). \section{Goal}\label{sec:2} The use of unconditionality in metric fixed point theory has been of great importance in the field. One of the best-known results in this context, Lin's theorem \cite{Lin1}, states that every Banach space with an unconditional basis whose unconditional constant is less than $\frac{1}{2} (\sqrt{33} - 3)$ has the weak fixed point property (weak-FPP). In precise terms: \begin{theorem} Let $X$ be a Banach space with an unconditional basis whose unconditional constant satisfies $\mathcal{D}< \frac{1}{2}(\sqrt{33} -3)$. Then for every weakly compact convex subset $C$ of $X$, every nonexpansive self-map of $C$ has a fixed point. \end{theorem} In light of this result, a natural question that arises is whether or not the condition $\mathcal{D}<\frac{1}{2} (\sqrt{33} - 3)$ can be relaxed. More precisely, the following issues have been in the spotlight of much current research: \begin{itemize} \item[($\mathcal{Q}1$)] Does every $X$ with an unconditional basis have the weak-FPP?\vskip .15cm \item[($\mathcal{Q}2$)] Does every $X$ with a $1$-suppression unconditional basis have the weak-FPP?\vskip .15cm \item[($\mathcal{Q}3$)] Can $\mathrm{c}_0$ be equivalently renormed to have the FPP? \end{itemize} \smallskip These are, in principle, very sensitive issues. For example, a technical difficulty behind questions ($\mathcal{Q}1$) and ($\mathcal{Q}2$) is that unconditional subsequences of weakly null sequences in such spaces can have large unconditional constants. In turn, $1$-suppression unconditional bases have close connections to the FPP. As shown in \cite{Lin1}, e.g., superreflexive spaces with such bases enjoy the FPP. Very roughly, a commonly used strategy to tackle these problems is to quantify the stability of the FPP using Banach-space distance devices. In this respect, let us recall that given isomorphic Banach spaces $X$ and $Y$ the {\it Banach-Mazur distance} between $X$ and $Y$ is defined by \[ \mathrm{dist}_{BM}(X,Y)=\inf\big\{ \| T\| \big\|T^{-1}\big\| \colon T \text{ is an isomorphism from } X \text{ to } Y\,\big\}. \] In \cite{Lin3} Lin proved that $\mathrm{dist}_{BM}(X, \ell_2)< \sqrt{ \frac{1}{2}\big( 5 + \sqrt{ 13}\big)}$ implies that $X$ has the FPP. A recent result due to Piasecki \cite{P} shows, however, that the weak-FPP is not generally stable under Banach-Mazur distance $1$ (cf. \cite[Example 6.1]{MJ}). We kindly refer the reader to \cite{BGK, HL-MJ, KS, Lin3, Lin4}, where further results on this and related subjects are examined. As for ($\mathcal{Q}3$), it seems that no partial solution is available at the moment, and this has in fact been a difficult open problem in the theory.
In line with these matters, recently the author \cite{Bar} took a small step towards ($\mathcal{Q}1$) by investigating the influence of isometric embeddings of Banach spaces with an unconditional basis into Banach spaces with much better projection basis constants. As a consequence, the following result has been proven. \begin{theorem}[\cite{Bar}] Let $X$ be a Banach space with a shrinking $\mathcal{D}$-unconditional basis such that $\mathcal{D}<\sqrt{6}-1$. Then $X$ has the weak-FPP. \end{theorem} This, in particular, extends and improves Lin's result in the quoted class of Banach spaces. The goal of the present paper, which can be seen as a continuation of the work started in \cite{Bar}, is to take one further step towards issues ($\mathcal{Q}1$), ($\mathcal{Q}2)$ and ($\mathcal{Q}3$), drawing attention to some facts that are implicitly contained in the literature and their links with the theory. Here we are mainly concerned with the FPP for the class of affine nonexpansive mappings. Let $C$ be a nonempty convex subset of a Banach space $X$. Recall that a mapping $T\colon C\to C$ is called {\it affine nonexpansive} if it is nonexpansive and satisfies \[ T(\lambda x+ (1- \lambda) y)= \lambda T(x) + (1-\lambda)T(y) \text{ for all } \lambda\in [0,1] \text{ and } x, y\in C. \] In Section \ref{sec:3} we set up the notation and definitions and give some preliminary propositions. Our results are stated and proved in Section \ref{sec:4}. We finish the paper in Section \ref{sec:11} with some examples and further implications of our results. \section{Preliminaries}\label{sec:3} Our notation concerning Banach space theory is standard and follows the terminology from \cite{FHHMZ} for the most part. We will recall some notions for completeness' sake. For an infinite set $M\subset \mathbb{N}$, let $[M]^\omega$ (resp. $[M]^{<\omega}$) denote the family of all infinite (resp. finite) increasing subsets of $M$. For $E, F\in[\mathbb{N}]^{<\omega}$, $E<F$ means $\max E< \min F$. For each $E\in [\mathbb{N}]^{<\omega}$ and every $x\in \mathrm{c}_{00}$ let $Ex$ denote the projection of the sequence $x=(a_i)$ onto $E$. $\mathrm{c}_{00}$ denotes the vector space of all real finitely supported sequences $(a_i)$. $\mathrm{c}_0$ is the Banach space of real sequences tending to zero, under the supremum norm, and $\ell_1$ is the Banach space of absolutely summable real sequences endowed with its usual norm given by the sum of the absolute values of the coordinates. Given a Banach space $X$, recall that a sequence $(x_n)$ in $X$ is called {\it semi-normalized} if $0< \liminf\| x_n\| \leq \limsup\|x_n\|< \infty$, and it is called a basic sequence if it is a Schauder basis for its closed linear span $[x_n]$. Given two basic sequences $(x_n)$ and $(y_n)$ in Banach spaces $X$ and $Y$, respectively, we say that $(x_n)$ $D$-dominates $(y_n)$ for $D\geq 1$, denoted by $(y_n) \lesssim_D (x_n)$, if the linear mapping $\mathcal{L}\colon [x_n]\to [y_n]$ given by $\mathcal{L}(x_n)= y_n$ for all $n$ is bounded with $\|\mathcal{L}\|\leq D$. $(x_n)$ is said to be $(A,B)$-equivalent to $(y_n)$, for $A, B>0$, if $(y_n) \lesssim_A (x_n)\lesssim_B (y_n)$. Accordingly, two basic sequences $(x_n)$ and $(y_n)$ are called equivalent if each of them dominates the other; in this case $[x_n]$ and $[y_n]$ are linearly isomorphic. Clearly every basic sequence equivalent to the unit basis of $\mathrm{c}_0$ is semi-normalized and weakly null. We say that a basic sequence $(x_n)$ is {\it spreading} (resp. {\it $1$-spreading}) if it is equivalent (resp. $1$-equivalent) to all of its subsequences.
For a constant $C\geq 1$, we say that $(x_n)$ $C$-dominates $(y_n)$ (or that $(y_n)$ is $C$-dominated by $(x_n)$) if $\| \sum_{i=1}^n a_i y_i\| \leq C\| \sum_{i=1}^n a_i x_i\|$, for all scalars $a_1, a_2, \dots, a_n$. A Schauder basis $(e_i)$ is called {\it $\mathcal{D}$-unconditional} if for any $n\in \mathbb{N}$ and scalars $c_1, \dots, c_n$ and $\epsilon_1,\dots, \epsilon_n$ with $| \epsilon_i|=1$ for all $i$, $\big\| \sum_{i=1}^n \epsilon_i c_i e_i\big\| \leq \mathcal{D}\big\| \sum_{i=1}^n c_i e_i\big\|$. Further, $(e_i)$ is called {\it $\beta$-suppression unconditional} if for all $n$, scalars $c_1, \dots, c_n$, and $F\subset \{ 1, \dots, n\}$, $\big\| \sum_{i\in F} c_i x_i\big\| \leq \beta \big\| \sum_{i=1}^n c_i e_i\big\|$. It is easily seen that if $(e_i)$ is $\mathcal{D}$-unconditional, then it is $\frac{1}{2}(\mathcal{D}+1)$-suppression unconditional. Also, if $(e_i)$ is $\beta$-suppression unconditional, it is $2\beta$-unconditional. A {\it conditional basis} is a Schauder basis that is not unconditional. Recall that a Schauder basis $(e_i)$ of $X$ is called {\it boundedly complete} provided that if whenever $(a_i)_{i=1}^\infty$ is a sequence of scalars such that $\sup_{N\in \mathbb{N}}\big\| \sum_{i=1}^N a_i e_i\big\| < \infty$, then $\sum_{i=1}^\infty a_i e_i$ converges in norm. $(e_i)$ is called {\it shrinking} if $(e^*_i)$ is a Schauder basis for $X^*$. \begin{remark}\label{rmk:1} At the level of examples, let us recall that the unit basis of $\mathrm{c}_0$ is shrinking and $1$-unconditional, but not boundedly complete, while the unit basis of $\ell_1$ is $1$-unconditional, not shrinking and boundedly complete. It is well known (see e.g. \cite[Proposition 11.3.6]{AK}) that every semi-normalized weakly null $1$-spreading basic sequence is $1$-suppression unconditional. \end{remark} Let $(e_i)$ be a Schauder basis of $X$ and $P_k\colon X\to X$, $k\in \mathbb{N}$, the canonical basis projections given by $P_n(x)= P_n\big( \sum_{j=1}^\infty e^*_j(x) e_j\big)=\sum_{j=1}^n e^*_j(x) e_j$. In the sequel we use the notation $R_k(x)= x - P_k(x)$. The basis $(e_i)$ is called premonotone ($\beta$-premonotone) if $\| R_{n}(x)\|\leq \| x\|$ ($\| R_{n}(x)\|\leq \beta \|x\|$) for every $x\in [e_i]$ and for every $n\in \mathbb{N}$ (cf. \cite{BP}). $(e_i)$ is called monotone (bimonotone, strongly bimonotone) if $\mathcal{K}_m:=\sup_n\| P_n\|=1$ ($\mathcal{K}_b:=\sup_{m<n}\| P_{[m,n]}\|=1$, $\mathcal{K}_{sb}:=\sup_{A\in[\mathbb{N}]^{<\omega}}\big( \| P_A\| \vee \| I- P_A\|\big)=1$). Note that if $(e_i)$ is a spreading basis of $X$, then $\vertiii{x}= \sup_n \| R_n(x)\|$ defines an equivalent norm on $X$ for which $(e_i)$ is premonotone and spreading. If $(e_i)$ is conditional and spreading, then the summing functional defined by $\mathfrak{s}\big( \sum_i a_i e_i\big)=\sum_i a_i$ is well defined and bounded (cf. \cite[p. 7]{AMS}). A sequence of non-zero vectors $(u_n)$ is called a block basic sequence of $(e_i)$, if there exists a sequence $(F_n)\subset [\mathbb{N}]^{<\omega}$ with $F_n< F_{n+1}$ for all $n$, and scalars $(\lambda_n)$ with $\lambda_i\neq 0$ for all $i\in F_n$ and $n\in \mathbb{N}$ such that $u_n=\sum_{i\in F_n} \lambda_i e_i$ for all $n\in \mathbb{N}$. We then call $F_n$ as the {\it support} of $u_n$. We close this section with two easy propositions, the proof of the first one is straightforward and is thus omitted. \begin{prop}\label{prop:1sec3} Let $X$ be a Banach space with a $\beta$-premonotone Schauder basis $(e_i)$. 
Then every block basic sequence $(u_i)$ of $(e_i)$ is $\beta$-premonotone. \end{prop} The following result is contained in the proof of \cite[Lemma 2.5]{F} \begin{prop}\label{prop:2sec3} Let $(x_n)$ be a semi-normalized basic sequence in the unit ball of a Banach space $X$. Assume that $(x_n)$ dominates all of its subsequences. Then there exist a constant $C\geq 1$ and a subsequence $(x'_n)$ of $(x_n)$ such that $(x_n)$ $C$-dominates each subsequence of $(x'_n)$. \end{prop} \smallskip \section{Main results}\label{sec:4} Our main results are displayed below. \begin{theorem}\label{thm:A} Let $X$ be a Banach space. Assume that $X$ contains a pre-monotone basic sequence equivalent to the unit basis of $\mathrm{c}_0$. Then $X$ fails the fixed point property for affine nonexpansive mappings. \end{theorem} \begin{proof} The proof is based on an approach developed by \'Alvaro, Cembranos and Mendoza in \cite{ACM}. Let $(x_n)$ be a pre-monotone basic sequence in $X$ equivalent to the unit basis of $\mathrm{c}_0$. Thus there exist constants $A, B>0$ such that $(x_n)$ is $(A,B)$-equivalent to the unit basis of $\mathrm{c}_0$. That is, \[ A\sup_{i\geq 1}|a_i|\leq \Bigg\| \sum_{i=1}^\infty a_i x_i \Bigg\| \leq B \sup_{i\geq 1}| a_i|, \] for all $(a_i)_{i=1}^\infty \in \mathrm{c}_0$. Define \[ K= \Bigg\{ \sum_{i=1}^\infty t_i x_i\in [x_n] \colon 0\leq t_i \leq 1\,\,\text{ for all }i\in \mathbb{N}\Bigg\}. \] Clearly $K$ is nonempty, closed convex and bounded. Fix a decreasing null sequence $(\beta_i)$ in $(0,1)$. For $i\in\mathbb{N}$, set $\alpha_i= 1- \beta_i$. Then $(\alpha_i)$ is a non-decreasing sequence in $(0,1)$ with $\alpha_i \nearrow 1$. Let us consider the affine mapping $T\colon [x_i] \to [x_i]$ given by \[ T(x) = \sum_{i=1}^\infty \alpha_i t_i x_i + \sum_{i=1}^\infty \beta_i x_i,\quad x= \sum_{i=1}^\infty t_i x_i. \] We now notice that $T(K)\subset K$ and $T$ does not have any fixed points in $K$. In line with \cite{ACM}, the main point here is to show that $(x_i)$ has the {\it norm one property}, which as a result will imply nonexpansiveness. This in turn follows directly from the pre-monotonicity of the sequence $(x_i)$. Indeed, fix $N\in \mathbb{N}$ and take any scalars $(a_i)_{i=1}^N$. Note that \[ \sum_{i=1}^N \alpha_i a_i x_i= \alpha_1 \sum_{i=1}^N a_i x_i + \sum_{i=1}^{N-1}\big( \alpha_{i+1} - \alpha_i\big) R_i \Bigg( \sum_{i=1}^N a_i x_i\Bigg), \] where $R_i=I - P_i$ and $P_i$ is the $i^{\textrm{th}}$ canonical projection associated with $(x_i)$. Since $(x_i)$ is pre-monotone, $\| R_i\|=1$ for all $i\in \mathbb{N}$. Thus using this together with triangle inequality, we deduce \[ \Bigg\|\sum_{i=1}^N a_i \alpha_i x_i\Bigg\| \leq \alpha_N \Bigg\| \sum_{i=1}^N a_i x_i\Bigg\|. \] Letting $N$ tend to infinity, this certainly implies \[ \Bigg\|\sum_{i=1}^\infty a_i \alpha_i x_i\Bigg\| \leq \Bigg\| \sum_{i=1}^\infty a_i x_i\Bigg\|\quad \text{ for all } \sum_{i=1}^\infty a_i x_i\in [x_i]. \] Observe that the above argument combined with the pre-monotonicity property also shows that $T$ is well-defined. Consequently, $\| T(x) - T(y)\| \leq \| x - y\|$ for all $x, y\in K$ and the proof is complete. \end{proof} \begin{theorem}\label{thm:B} Let $X$ be a Banach space having a pre-monotone Schauder basis that is not boundedly complete. Then $X$ contains a pre-monotone basic sequence equivalent to the unit basis of $\mathrm{c}_0$ provided that one of the following conditions is verified: \begin{itemize} \item[(i)] The basis is unconditional. 
\item[(ii)] The basis is spreading and $X$ does not embed into a Banach space with an unconditional basis. \item[(iii)] The basis is shrinking and is such that each of its strongly bounded block basic sequences dominate its subsequences. \end{itemize} \end{theorem} \begin{proof} Let $(e_i)$ denote the basis of $X$. \vskip .05cm \noindent{\bf Proof of (i).} It is a classical result in Banach space theory (see e.g. \cite[proof of Theorem 1.c.10 (iii) $\Rightarrow$ (i)]{LT}) that a Schauder basis is boundedly complete if and only if whenever $(x_n)$ is block basic sequence of the basis, bounded away from zero, one has $\sup_{N\ge 1}\big\| \sum_{i=1}^N x_i\big\|=\infty$. By assumption $(e_i)$ is not boundedly complete, so there is a semi-normalized block basic sequence $(x_n)$ such that \begin{equation}\label{eqn:1sec4} \sup_{N\ge 1}\Bigg\| \sum_{i=1}^N x_i\Bigg\|<\infty. \end{equation} Then two consequences follow from this. First, $(x_i)$ is pre-monotone (cf. Proposition \ref{prop:1sec3}) and second $(x_i)$ is equivalent to the unit basis of $\mathrm{c}_0$ which, in turn, follows directly from (\ref{eqn:1sec4}) and the assumption that $(e_i)$ is unconditional (cf. \cite{LT}). This proves (i). \hfill $\square$ \vskip .05cm \noindent{\bf Proof of (ii).} Since $(e_i)$ is not boundedly complete, there exists $x^{**}\in X^{**}$ such that the series $\sum_{i=1}^\infty x^{**}(e^*_i)e_i$ does not converge in norm. On the other hand note that since $X$ does not embed into a space with an unconditional basis, this clearly implies that $(e_i)$ is conditional. In addition, it is also strongly summing (cf. \cite[Proposition 6.5]{AMS}). It follows in particular that $\sum_{i=1}^\infty x^{**}(e^*_i)$ is convergent. Hence we may choose a strictly increasing sequence $(n_i)_i\in [\mathbb{N}]^\omega$, so that if $x_i= \sum_{j=n_{i-1}+1}^{n_i} x^{**}(e^*_j) e_j$, then $(x_i)$ is semi-normalized and $\sum_{i=1}^\infty | \mathfrak{s}(x_i)|<\infty$, where $\mathfrak{s}$ is the summing functional. By the proof of \cite[Lemma 6.3]{AMS}, $(x_i)$ is equivalent to the unit vector basis of $\mathrm{c}_0$. Since $(e_i)$ is premonotone, so is $(x_i)$ and the result follows. \hfill $\square$ \vskip .05cm \noindent{\bf Proof of (iii).} Since $(e_i)$ is shrinking and not boundedly complete, it has a block basic sequence $(x_n)$ which is semi-normalized, weakly null and strongly bounded, that is, such that $\sup_N\big\| \sum_{i=1}^N x_i\big\|< \infty$. According to an unpublished result of Johnson (cf. \cite[Theorem 4.1]{O}), if every subsequence of a normalized weakly null sequence $(u_n)$ admits a further subsequence which is strongly bounded, then there exists a subsequence of $(u_n)$ equivalent to the unit basis of $\mathrm{c}_0$. Let us stress that the same proof given in \cite{O} shows that Johnson's result remains valid for semi-normalized weakly null sequences. Let $C\geq 1$ and $(x'_n)$ be as in Proposition \ref{prop:2sec3}. Then $(x'_n)$ is semi-normalized and weakly null. In addition, every subsequence of any subsequence of $(x'_n)$ is $C$-dominated by $(x_n)$. Since $(x_n)$ is strongly bounded, it follows from Johnson's result that $(x'_n)$ has a subsequence $(x''_n)$ equivalent to the unit basis of $\mathrm{c}_0$. Since $(e_i)$ is pre-monotone, so is $(x''_n)$ as desired. \end{proof} Note that every {\it bimonotone} basic sequence is pre-monotone, and every {\it $1$-suppression unconditional} basic sequence is bimonotone. 
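Both claims are quick to verify from the definitions: if $(e_i)$ is $1$-suppression unconditional, then for any $m<n$, any $N\geq n$, and scalars $(c_i)_{i=1}^N$, taking $F=[m,n]\cap\{1,\dots,N\}$ in the definition gives \[ \Bigg\| P_{[m,n]}\Big(\sum_{i=1}^{N} c_i e_i\Big)\Bigg\| = \Bigg\| \sum_{i\in F} c_i e_i\Bigg\| \leq \Bigg\| \sum_{i=1}^{N} c_i e_i\Bigg\|, \] so that $\mathcal{K}_b\leq 1$ (and hence $\mathcal{K}_b=1$); applying the same estimate with $F=\{m+1,\dots,N\}$ and passing to the closed linear span yields $\| R_{m}(x)\|\leq \| x\|$ for every $x\in [e_i]$, i.e., pre-monotonicity.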
In \cite{C} Cembranos proved that if $K$ is an infinite compact Hausdorff space and $E$ is an infinite dimensional Banach space, then the space $C(K, E)$ of all continuous $E$-valued functions defined on $K$, endowed with the supremum norm, contains a complemented copy of $\mathrm{c}_0$. In fact, his proof shows slightly more. \begin{theorem}\label{thm:C} Let $K$ be an infinite compact Hausdorff space. Assume that $E$ is an infinite dimensional Banach space. Then $C(K, E)$ contains a complemented $1$-suppression unconditional basic sequence equivalent to the unit basis of $\mathrm{c}_0$. \end{theorem} \begin{proof} The proof is contained in \cite{C}, but we include it here for completeness. Let $(x^*_n)$ be a normalized weak$^*$ null sequence in $E^*$ (Josefson-Nissenzweig theorem). For each $n\in \mathbb{N}$, choose $x_n \in 2B_E$ so that $x^*_n(x_n)=1$. Next, select a sequence $(G_n)$ of open nonempty pairwise disjoint subsets of $K$. Now for $n\in \mathbb{N}$ fix $t_n\in G_n$ and, thanks to Urysohn's lemma, take $f_n\in C(K)$ such that $f_n(K)\subset [0,1]$, $f_n(t_n)=1$, $f_n\big( K\setminus G_n\big)=\{ 0\}$ and $\| f_n\| =1$. By the proof in \cite{C}, it follows that $(f_n(\cdot) x_n\big)_{n=1}^\infty$ is equivalent to the unit basis of $\mathrm{c}_0$ and $[f_n(\cdot) x_n]$ is complemented in $C(K, E)$. Fortunately Cembranos's argument also shows that $(f_n(\cdot) x_n\big)_{n=1}^\infty$ is $1$-suppression unconditional. Indeed, fix $n\in \mathbb{N}$ and take any set $\sigma \subset\{ 1, \dots, n\}$. Then for all scalars $(a_i)_{i=1}^n\subset \mathbb{R}$ we have \[ \begin{split} \Bigg\| \sum_{i\in \sigma} a_i f_i(\cdot) x_i\Bigg\| &= \sup_{t\in K} \Bigg\| \sum_{i\in \sigma} a_i f_i(t) x_i\Bigg\| =\sup_{t\in \bigcup_{i\in \sigma} G_i}\| a_i f_i(t)x_i\|\\[1.3mm] &\leq \sup_{t\in \bigcup_{i\in \{1,\dots, n\}} G_i}\| a_i f_i(t)x_i\|\\[1.2mm] &=\sup_{t\in K} \Bigg\| \sum_{i=1}^n a_i f_i(t) x_i\Bigg\|= \Bigg\| \sum_{i=1}^n a_i f_i(\cdot) x_i\Bigg\|. \end{split} \] \end{proof} Recall that a closed, bounded and convex subset $C$ of a Banach space $X$ has the {\it point of continuity property} (PCP) (resp. {\it convex point of continuity property} (CPCP)) if for every non-empty norm-closed (convex) subset $K$ of $C$, the identity mapping \textrm{Id} on $K$ is weak-to-norm continuous at some point of $K$. The space $X$ has PCP (resp. CPCP) if its closed unit ball $B_X$ has the PCP (resp. CPCP). Historically PCP seems to have been studied for the first time by Namioka \cite{N}. CPCP was introduced by Bourgain \cite{B}. Clearly PCP implies CPCP. It is well-known that Banach spaces with the Radon-Nikod\'ym property (RNP), including separable dual spaces, have the CPCP. It is known that PCP is separably determined \cite{Bou}. Rosenthal \cite{Ros} showed that every semi-normalized basic sequence in a Banach space with PCP has a boundedly complete subsequence. Hence no Banach space with PCP can contain a subspace isomorphic to $\mathrm{c}_0$. To some extent our next result links Theorem \ref{thm:A} to the failure of PCP. \begin{theorem}\label{thm:D} Let $X$ be a Banach space with a $1$-suppression unconditional basis. Assume that $X$ has the FPP for nonexpansive maps. Then $X$ has PCP. \end{theorem} \begin{proof} Let $\|\cdot\|$ and $(e_i)$ denote the norm and the basis of $X$, respectively. For $n\in\mathbb{N}$, let $R_n= I- P_n$, where $P_n$ is the $n^{\textrm{th}}$ basis projection. Let $(e^*_i)$ be the biorthogonal functionals of $(e_i)$. Assume that the conclusion does not hold.
Then we can find, as $B_X$ is assumed to fail PCP, a non-empty closed subset $K$ of $B_X$ with no point of weak-to-norm continuity. In addition, taking a non-empty subset of $K$ (cf. \cite[Lemma 2.1]{PA}), we may assume that there exists $\delta>0$ so that every non-empty relatively weakly open subset of $K$ has diameter at least $\delta$. We now proceed by induction on $n$ to build a semi-normalized block basis of $(e_i)$ dominated by the unit basis of $\mathrm{c}_0$. We will mimic the proof of a result of Argyros, Odell and Rosenthal \cite[Proposition 2.3]{AORos}. Note that $\ensuremath{\mathrm{diam}}(K)\geq \delta$, and this yields a point $y_1\in K$ so that $\| y_1\| \geq \delta/2$. Pick $m_1\in\mathbb{N}$ so that $\| R_{m_1}y_1\|< \delta/2$. Define $z_1 = P_{m_1}y_1$. Let \[ V_1=\Big\{ x\in K\colon \sum_{j=1}^{m_1}| e^*_j( x -y_1 )| < \delta/(2^{m_1+2}M_1)\Big\}, \] where $M_1=\sup_{1\leq i\leq m_1}\| e_i\|$. Then there exists $y_2\in V_1$ so that $\| y_2 - y_1\|\geq \delta/2$. Set $x_1= y_1$ and $x_2= y_2 - y_1$. Now choose $m_2> m_1$ so that $\|R_{m_2} x_2\| < \delta/2^{3}$ and define $z_2 = P_{(m_1, m_2]} x_2$. It follows that $z_2\neq 0$ and $\| x_2 -z_2\|< \delta/2^2$. Now, suppose that $n\geq 1$, $m_{n+1}> m_n$, vectors $x_1, \dots, x_{n+1}$ and non-null block vectors $z_1, \dots, z_n$ have been already built such that $\sum_{i=1}^{n+1} x_i\in K$, $\| x_k\|\geq \delta/2$ and $\| x_k - z_k\|< \delta/2^k$ for all $k=1,\dots, n$. Put $M_{n+1}=\max_{1\leq j\leq m_{n+1}}\|e_j\|$ and $y_{n+1}= \sum_{i=1}^{n+1} x_i$. Set \[ V_{n+1}=\Big\{ x\in K \colon \sum_{j=1}^{m_{n+1}} |e^*_j(x - y_{n+1} )| < \delta/(2^{m_{n+1}+2}M_{n+1})\Big\}. \] Then $V_{n+1}$ is a non-empty relatively weakly open subset of $K$, so $\ensuremath{\mathrm{diam}}(V_{n+1})\geq \delta$. Consequently, there is $y_{n+2}\in V_{n+1}$ so that $\| y_{n+2} - y_{n+1}\|\geq \delta/2$. Set $x_{n+2}=y_{n+2} - y_{n+1}$. Now, pick $m_{n+2}> m_{n+1}$ so that $\| R_{m_{n+2}} x_{n+2}\| < \delta/ 2^{n+3}$ and define $z_{n+2}= P_{(m_{n+1}, m_{n+2}]} x_{n+2}$. It follows that $\| x_{n+2} - z_{n+2}\|\leq \delta/2^{n+2}$. This concludes the induction process. It follows from this construction that $(z_n)$ is a semi-normalized block basic sequence of $(e_i)$. We now proceed to show that $(z_n)$ is dominated by the unit basis of $\mathrm{c}_0$. To do this, we shall use that $(z_n)$ is $1$-suppression unconditional. Fix any scalars $(a_i)_{i=1}^m$. Pick a Hahn-Banach functional $\varphi\in S_{X^*}$ so that $\| \sum_{i=1}^m a_i z_i\|= \varphi( \sum_{i=1}^m a_i z_i)$. Then \[ \begin{split} \Bigg\| \sum_{i=1}^m a_i z_i\Bigg\|&\leq \sum_{i=1}^m |a_i|| \varphi( z_i)| \leq \sup_{1\leq i\leq m}| a_i| \Bigg(\sum_{\varphi(z_i)\geq 0} \varphi(z_i)- \sum_{\varphi(z_i)< 0} \varphi(z_i)\Bigg)\leq 2\Bigg\| \sum_{i=1}^m z_i\Bigg\| \sup_{1\leq i\leq m}|a_i|\\[1.2mm] &\leq 2\Bigg( \sum_{i=1}^m \|z_i - x_i\| + \Bigg\| \sum_{i=1}^m x_i\Bigg\|\Bigg) \sup_{1\leq i\leq m}|a_i|\leq 2\big(\delta + \sup_{x\in K}\| x\|\big) \sup_{1\leq i\leq m}|a_i|, \end{split} \] where the third inequality follows by applying the $1$-suppression unconditionality of $(z_i)$ to the sets $\{ i\colon \varphi(z_i)\geq 0\}$ and $\{ i\colon \varphi(z_i)< 0\}$. Thus $(z_n)$ is a pre-monotone $\mathrm{c}_0$-basic sequence. By Theorem \ref{thm:A}, $X$ fails the FPP, proving the result. \end{proof} It is well known that $B_{\mathrm{c}_0}$ fails the FPP. Our next result generalizes this fact to the framework of spaces containing $\mathrm{c}_0$ in an asymptotically isometric manner. \begin{theorem}\label{thm:E} Let $X$ be a Banach space that does not contain a subspace isomorphic to $\ell_1$. Assume that $X$ contains an asymptotically isometric copy of $\mathrm{c}_0$. Then $B_X$ fails the FPP for nonexpansive maps.
\end{theorem} \begin{proof} By employing the proof of \cite[Theorem 8]{DLT3} with the inequality provided in \cite[Corollary 6]{DLT3}, we find a decreasing null sequence $(\varepsilon_n)$ in $(0,1)$ and a sequence $(y_n)$ in $S_X$ such that $\sum_{n=1}^\infty \varepsilon_n<\infty$ and \[ \sup_{n\geq k}|a_n|\leq \Bigg\| \sum_{n=k}^\infty a_n y_n\Bigg\| \leq \sup_{n\geq k}(1+\varepsilon_n)|a_n|, \] for all $k\in \mathbb{N}$ and for all scalars $(a_n)_{n=1}^\infty \in \mathrm{c}_0$. Now, a careful examination of the proof in \cite[Theorem 4.3]{GP} (see also \cite[Theorem 2.2]{DF}) shows that there exist a sequence $(\alpha_n)$ in $(1, \infty)$ and basic sequences $(x_n)$ in $B_X$ and $(x^*_n)_{n=1}^\infty$ in $2B_{X^*}$ such that \begin{itemize} \item[(a)] $x^*_m(x_n)= \delta_{mn}$ for all $m, n\in\mathbb{N}$, \item[(b)] $\|x^*_n\|\leq \alpha_n\leq 1 + \varepsilon_n$ for all $n\in\mathbb{N}$, \item[(c)] $(x^*_n)_{n=1}^\infty$ is weak$^*$ null, and \end{itemize} \begin{equation}\label{eqn:4} \sup_{n\geq k}\frac{|a_n|}{\alpha_n}\leq \Bigg\| \sum_{n=k}^\infty a_n x_n\Bigg\| \leq \sup_{n\geq k}\frac{(1 + \varepsilon_n)|a_n|}{\alpha_n}, \end{equation} for all $k\in\mathbb{N}$ and scalars $(a_n)_{n=1}^\infty \in \mathrm{c}_0$. Let $K$ be the set considered in the proof of Theorem \ref{thm:A} and define a mapping $T\colon B_X \to K$ by \[ T(x) = x_1 +\sum_{n=1}^\infty (1-\varepsilon_n)|x^*_n(x)|x_{n+1},\quad x\in B_X. \] By (b) and (c), $T$ is well-defined and leaves $K$ invariant. Also, $\sum_{n=1}^\infty \varepsilon_n<\infty$ and (a) imply that the fixed point set $\digamma(T)$ is empty. Moreover, by (b) and the right-hand side of (\ref{eqn:4}), $T$ is nonexpansive. This completes the proof. \end{proof} Regarding the weak-FPP, we obtain the following results. \begin{theorem}\label{thm:F} Let $X$ be a Banach space having a $\mathcal{D}$-unconditional basis with $\mathcal{D}<2$. Assume that $Z$ is a Banach space with a strongly bimonotone basis and $\mathrm{dist}_{BM}(X,Y)=1$ for some subspace $Y$ of $Z$. Then $X$ has the weak fixed point property. \end{theorem} \begin{proof} Let $(e_i)$ denote the basis of $X$. Assume for a contradiction that $X$ fails to have the weak-FPP. Let $K$ be a weakly compact convex subset of $X$ which is minimal-invariant for a fixed point free nonexpansive mapping $T\colon K\to K$. Take an approximate fixed point sequence $(x_n)$ of $T$. By passing to a subsequence and taking suitable scalings, we may assume that $\ensuremath{\mathrm{diam}}(K)=1$, $0\in K$ and $(x_n)$ is semi-normalized and weakly null. Fix $\varepsilon>0$, to be chosen later. Since $(e_i)$ is $\mathcal{D}$-unconditional, after passing to a further subsequence, we may also assume that $(x_n)$ is $(\mathcal{D} +\varepsilon)$-unconditional. By assumption there is a Banach space $Z$ having a strongly bimonotone basis such that $\mathrm{dist}_{BM}(X,Y)=1$ for some subspace $Y$ of $Z$. Let $J\colon X\to Y$ be an isomorphism satisfying $\|J^{-1}\|\leq 1$ and $\|J\|\leq 1 +\varepsilon$. Following \cite{Bar} we now define a new norm on $Z$ by \[ |z|=\inf\big\{ \|x\|_X + \|z- Jx\|_Z\colon x\in X\big\}. \] It follows that $J$ is an isometry from $X$ into $(Z, |\cdot|)$. Moreover, one has \begin{equation}\label{eqn:1sec9} \frac{1}{1 +\varepsilon}\|z\|_Z\leq |z|\leq \|z\|_Z\quad\text{for all }z\in Z. \end{equation} Let $(z_i)$ be the basis of $Z$ and let $(P_i)$ denote its canonical basis projections.
Inequality (\ref{eqn:1sec9}) readily shows that $\mathcal{K}^{|\cdot|}_{sb}$, the strong bimonotonicity projection constant of $(z_i)$ with respect to the norm $|\cdot|$, is not larger than $1+\varepsilon$. For $n\in \mathbb{N}$ let $y_n=J(x_n)$. Since $J$ is an isometry, $(y_n)$ is weakly null and $(\mathcal{D}+\varepsilon)$-unconditional in $(Z,|\cdot|)$. On the other hand, applying the gliding hump method, we obtain a subsequence $(y_{n_i})$ of $(y_n)$ and an increasing sequence of consecutive intervals $(F_i)$ of integers in $\mathbb{N}$ such that \begin{equation}\label{eqn:2sec9} \lim_{i\to\infty}| y_{n_i} - P_{F_i} y_{n_i}|=0. \end{equation} Let now $[X]$ and $[Z]$ denote the ultrapowers of $X$ and $Z$ respectively. Define $K_J:=J(K)$ and $T_J\colon K_J\to K_J$ by $T_J(Jx)= J(Tx)$, for all $x\in K$. Next set \[ [\mathcal{M}]=\left\{ [v_i] \in[K_J] \colon\, \begin{matrix}&\exists\, x\in K\text{ such that }\, \big| [v_i] - Jx\big|\leq(\mathcal{D}+\varepsilon)/2, \text{ and }\\ &| [v_i] -[y_{n_i}]|\vee \big| [v_i] -[y_{n_{i+1}}]\big|\leq 1/2. \end{matrix}\right\}. \] Clearly $[\mathcal{M}]$ is a nonempty, closed convex subset of $[K_J]$ which is $[T_J]$-invariant. By Lin's lemma \cite{Lin1}, $\sup\{ \big| [v_i]\big| \colon [v_i] \in [\mathcal{M}]\}=1$. Recall that $\ensuremath{\mathrm{diam}}(K)=1$ and $J$ is an isometry. As it turns out however, for $\varepsilon$ small enough, one has \[ \begin{split} \big| [v_i]\big|&\leq \frac{1}{2}\Big( \big|[P_{F_i}] + [P_{F_{i+1}}]\big| \big| [v_i] - Jx\big| + \big|[I] - [P_{F_i}]\big| \big| [v_i] - [ y_{n_i}]\big| \\[1.2mm] &\hskip 6cm + \big| [ I] - [P_{F_{i+1}}]\big| \big| [v_i] - [y_{n_{i+1}}]\big|\Big)\\[1.2mm] &\leq \frac{1}{2}\Big( (1 +\varepsilon)\frac{\mathcal{D} +\varepsilon}{2} + \frac{1+\varepsilon}{2}+ \frac{1+\varepsilon}{2}\Big)\\[1.2mm] &= \frac{1+\varepsilon}{2}\Big(\frac{\mathcal{D} +\varepsilon}{2} +1\Big)<1. \end{split} \] This contradiction proves the theorem. \end{proof} \begin{theorem}\label{thm:G} Let $X$ be a Banach space with a Schauder basis $(e_i)$. Assume that every block basic sequence has a subsequence equivalent to the basis. Then $X$ has the weak fixed point property in each of the following two situations: \begin{itemize} \item[(i)] $(e_i)$ is a $1$-suppression unconditional basis. \item[(ii)] $(e_i)$ is a $1$-spreading basis. \end{itemize} \end{theorem} \begin{proof} It is not hard to see (after applying an infinitary analog of Ramsey's theorem) that the block basis assumption implies that $(e_i)$ is spreading, that is, it is equivalent to any of its subsequences (cf. \cite[p.397]{FPR} for the details). \vskip .1cm \noindent{\bf Proof of (i).} By James's theorem either $X$ is reflexive or contains a subspace isomorphic to $\mathrm{c}_0$ or $\ell_1$. If $X$ contains a subspace isomorphic to $\ell_1$, we may assume that the unit vector basis of $\ell_1$ is equivalent to a block basic sequence of $(e_i)$, so, from the block basis assumption, $(e_i)$ is itself equivalent to the unit vector basis of $\ell_1$. Since every Schur space has the weak-FPP, the result follows. Suppose then that no subspace of $X$ is isomorphic to $\ell_1$. Towards a contradiction, assume that $X$ fails the weak-FPP. Since $(e_i)$ is strongly bimonotone, from the proof of Garc\'ia-Falset's result \cite{GFal} we deduce that $X$ contains a semi-normalized weakly null basic sequence $(x_n)$ which generates $\ell_1$ as a spreading model. Since $X$ does not contain isomorphic copies of $\ell_1$, either $X$ contains an isomorphic copy of $\mathrm{c}_0$ or $X$ is reflexive.
It turns out that $X$ cannot contain a subspace isomorphic to $\mathrm{c}_0$. Indeed, if so then, as in the $\ell_1$ case, we can assume that the unit vector basis of $\mathrm{c}_0$ is equivalent to a block basic sequence of $(e_i)$. It follows then from the block basis assumption on $(e_i)$ that every weakly null sequence in $X$ generates $\mathrm{c}_0$ as a spreading model, a contradiction. Then $X$ is reflexive and thus any spreading model of $X$ is generated by a block basic sequence. So, combining the block basis assumption with the subsymmetric property of the basis, we can conclude that any spreading model of $X$ is equivalent to $(e_i)$ (cf. details in \cite[Lemma 2]{FPR}). But this implies $X$ is isomorphic to $\ell_1$ since $(x_n)$ generates $\ell_1$ as a spreading model. This contradiction concludes the proof of part (i) of the theorem. \hfill $\square$ \vskip .1cm \noindent{\bf Proof of (ii).} By Rosenthal's $\ell_1$-theorem, either $(e_i)$ is equivalent to the unit basis of $\ell_1$ or $(e_i)$ is weak Cauchy. So, we may without loss of generality assume that $(e_i)$ is weak Cauchy. Now, we analyse two cases: \vskip .05cm \noindent{\bf Case 1.} $(e_i)$ is weakly null. Then $(e_i)$ is $1$-suppression unconditional. By the previous result it follows that $X$ has the weak fixed point property. \vskip .05cm \noindent{\bf Case 2.} $(e_i)$ is not weakly null. Then $(e_i)$ is a conditional $1$-spreading basis. By the proof of Theorem 2.3-(b) in \cite{FOSZ} the skipped sequence $(d_i)$ given by $d_1= e_1$, $d_{i+1}=e_{i+1} - e_i$ for $i\in \mathbb{N}$, defines a $1$-suppression unconditional basis for $X$ (cf. also \cite[Proposition 2.1]{AMS}). Hence, by considering $X$ equipped with the basis $(d_i)$, we can repeat Garc\'ia-Falset's argument \cite{GFal} to deduce that if $X$ fails the weak-FPP then it contains a semi-normalized weakly null basic sequence $(x_n)$ which generates an $\ell_1$-spreading model. As before, the block basis assumption implies $X$ is reflexive, so any spreading model of $X$ is equivalent to $(e_i)$. But once again, this yields a contradiction. \end{proof} \smallskip \section{Consequences and final considerations}\label{sec:11} In this final section we discuss some consequences of our main results. The crucial ingredient used in the proof of Theorem \ref{thm:A} concerns the existence of pre-monotone basic sequences equivalent to the unit vector basis of $\mathrm{c}_0$. Theorem \ref{thm:B} provides sufficient conditions for the existence of such sequences, thus implying the failure of the FPP for affine nonexpansive mappings. Theorem \ref{thm:C} says that $C(K, E)$ always fails the FPP. To some extent, Theorem \ref{thm:D} links PCP to the failure of the FPP. Theorem \ref{thm:E} yields a partial negative answer to question ($\mathcal{Q}3$) in the ball of any Banach space containing an asymptotically isometric copy of $\mathrm{c}_0$. All these results are relevant as they offer new information related to the FPP as well as new perspectives regarding the analysis of ($\mathcal{Q}3$). Theorem \ref{thm:F} shows that under certain natural conditions the weak-FPP is invariant under Banach-Mazur distance $1$. Theorem \ref{thm:G} provides new partial solutions for ($\mathcal{Q}2$). We conclude this work with the following additional results and remarks. \begin{prop}\label{prop:1sec10} Let $X$ be a Banach space having a pre-monotone basis. Assume that $X$ contains an isomorphic copy of $\mathrm{c}_0$.
Then $X$ contains a pre-monotone basic sequence equivalent to the unit basis of $\mathrm{c}_0$ and, hence, it fails the FPP for affine nonexpansive mappings. \end{prop} \begin{proof} Let $(y_i)$ be a semi-normalized weakly null sequence in $X$ equivalent to the unit basis of $\mathrm{c}_0$. By the gliding hump method, $(y_i)$ admits a subsequence which is equivalent to a block basic sequence $(x_i)$ of the basis of $X$. Therefore $(x_i)$ is a pre-monotone basic sequence equivalent to the unit basis of $\mathrm{c}_0$. By Theorem \ref{thm:A}, the result follows. \end{proof} \begin{remark} In light of the above result it is natural to ask if every Banach space containing a copy of $\mathrm{c}_0$ must contain a pre-monotone $\mathrm{c}_0$-sequence. Dowling, Lennard and Turett \cite{DLT3} proved that if a Banach space $X$ contains an isomorphic copy of $\mathrm{c}_0$ then it contains an asymptotically pre-monotone $\mathrm{c}_0$-sequence, that is, a sequence $(x_n)$ which is equivalent to the unit basis of $\mathrm{c}_0$ and such that, for some decreasing null sequence $(\delta_n)$ in $(0,1)$, one has $\|R_n\| \leq 1 + \delta_n$ for all $n\in \mathbb{N}$. On the other hand, the proposition mentioned in Remark \ref{rmk:1} yields an affirmative answer for the $1$-spreading case. \end{remark} The next result is a variant of Proposition \ref{prop:1sec10}. \begin{prop}\label{prop:2sec10} Let $X$ be a Banach space with a Schauder basis in which every block basic sequence has a block basic sequence which is not boundedly complete. Assume that $X$ contains a pre-monotone basic sequence. Then $X$ contains a pre-monotone basic sequence which is not boundedly complete. \end{prop} \begin{proof} Let $(e_i)$ denote the basis of $X$ and let $(y_n)$ be a pre-monotone basic sequence in $X$. By a result due to Dean, Singer and Sternbach (see \cite[Proposition 3]{DSS}), $(y_n)$ has a block basic sequence $(x_n)$ which is equivalent to a block basic sequence with respect to $(e_i)$. By assumption, $(x_n)$ is not boundedly complete and this concludes the proof. \end{proof} \begin{remark} Examples of Banach spaces having a basis with the property described in the statement of Proposition \ref{prop:2sec10} include those that are $\mathrm{c}_0$-saturated. For $p\geq 1$, the $p^{\textrm{th}}$ James-Schreier space $V_p$ defined in \cite[Theorem 5.2]{BL} is an important example of a $\mathrm{c}_0$-saturated space. In fact, the standard basis of $V_p$ is monotone and shrinking, and $V_p$ does not embed into a Banach space with an unconditional basis. Recall that for $X$ and $Y$ infinite-dimensional Banach spaces, one says that $X$ is $Y$-{\it saturated} if each closed, infinite-dimensional subspace of $X$ contains a subspace which is isomorphic to $Y$. We have not been able to verify whether or not $V_p$ fails the FPP. \end{remark} In what follows we shall consider some consequences of Theorem \ref{thm:B}. \begin{corollary}\label{cor:A} Let $X$ be a Banach space with a pre-monotone unconditional basis. Assume that no subspace of $X$ is isomorphic to $\ell_1$. Then the following are equivalent: \begin{itemize} \item[(i)] $X$ is reflexive. \item[(ii)] $X$ has the fixed point property for affine nonexpansive mappings. \item[(iii)] Every closed convex subset of $B_X$ has the FPP for affine nonexpansive mappings. \end{itemize} \end{corollary} \begin{proof} The implication (i) $\Rightarrow$ (ii) is a direct consequence of the classical Schauder-Tychonoff fixed point theorem. It is clear that (ii) implies (iii).
Let us prove that (iii) $\Rightarrow$ (i). Assume that $X$ is not reflexive. Taking into account that no subspace of $X$ is isomorphic to $\ell_1$, it follows from \cite[Theorem 1 and Lemma 2]{J1} that $(e_i)$ cannot be boundedly complete. By the proof of Theorem \ref{thm:B} (i), the result follows. \end{proof} \begin{corollary}\label{cor:B} Let $X$ be a Banach space with a pre-monotone unconditional basis. Assume that no subspace of $X$ is isomorphic to $\ell_1$. Then $X$ fails the FPP for affine nonexpansive mappings if and only if $X$ contains a basic sequence which is not boundedly complete. \end{corollary} \begin{proof} The proof of this corollary follows by using a sequential characterization of reflexivity due to Singer \cite[Theorem 2 and Corollary 1]{S} (see also \cite[Proposition]{Z}) in concert with Corollary \ref{cor:A}. \end{proof} \begin{corollary}\label{cor:C} Let $X$ be a Banach space with a $1$-unconditional basis $(e_i)$. Assume that $X$ does not contain $\ell_1$ isomorphically. Then the following assertions are equivalent: \begin{itemize} \item[(i)] $X$ is reflexive. \item[(ii)] $X$ has the fixed point property for nonexpansive mappings. \item[(iii)] Every closed convex subset of $B_X$ has the FPP for nonexpansive mappings. \end{itemize} \end{corollary} \begin{proof} The implication (i) $\Rightarrow$ (ii) follows directly from Lin's theorem \cite{Lin1}, since every closed, bounded convex subset of a reflexive space is weakly compact. Clearly (ii) implies (iii). Finally, from Corollary \ref{cor:A} we see that (iii) implies (i). \end{proof} \begin{remark} It is noteworthy that Lin's $\ell_1$-renorming \cite{Lin2} shows that the assumption on the absence of isomorphic copies of $\ell_1$ cannot be removed from the above statements. Likewise, the assumption of not being boundedly complete is necessary in our main theorem. Our results are directly related to those in \cite{B-MJ2}. In particular, the above corollaries should be compared with \cite[Corollary 5.4]{B-MJ2}. \end{remark} \begin{example}[Schreier's space] The classical Schreier space $X_{\mathcal{S}}$ is the completion of $\mathrm{c}_{00}$ with respect to the following norm \[ \| x \|_\mathcal{S}= \sup_{E\in \mathcal{S}}\sum_{i\in E}| a_i|,\quad x=(a_i)_{i=1}^\infty\in \mathrm{c}_{00}, \] where $\mathcal{S}=\big\{ E\in [\mathbb{N}]^{<\infty} \colon |E|\leq \min E\big\}$ is the family of so-called {\it admissible} sets. It is known that the unit basis of $X_{\mathcal{S}}$ is $1$-unconditional and not boundedly complete. In fact, one can prove that $X_{\mathcal{S}}$ contains an isometric copy of $\mathrm{c}_0$ (cf. \cite[Theorem 0.5, Proposition 0.7 and Corollary 0.8]{CS}). \end{example} \begin{remark} Notice that assertion (i) of Theorem \ref{thm:B} shows in particular that any renorming of $\mathrm{c}_0$ enjoying the fixed point property must be such that its unit basis is not pre-monotone. This fact is slightly more general than Proposition 8 in \cite{ACM}. \end{remark} \begin{example}[A non-$1$-unconditional $\mathrm{c}_0$-sequence that is $1$-suppression unconditional] Let us consider the following norm on $\mathrm{c}_0$: \[ \| (a_i)\|= \sup_{i, j\in \mathbb{N}}|a_i - a_j|,\quad (a_i)_{i=1}^\infty\in \mathrm{c}_0. \] As was pointed out in \cite[Example 10]{ACM}, the unit basis of $\mathrm{c}_0$ is not $1$-unconditional with respect to $\|\cdot\|$. However, one easily shows that it is $1$-suppression unconditional and hence pre-monotone.
\end{example} \begin{remark} Recently, Nezir and Mustafa \cite{NM} discussed question ($\mathcal{Q}3$) of whether there exists a renorming of $\mathrm{c}_0$ enjoying the FPP. In particular, they introduce a class of equivalent norms $\vertiii{\cdot}$ on $\mathrm{c}_0$ and show that $(\mathrm{c}_0, \vertiii{\cdot})$ has the FPP for the class of affine $\vertiii{\cdot}$-nonexpansive mappings. Let us recall the definition of $\vertiii{\cdot }$ given in \cite{NM}: \[ \vertiii{ (\xi_i) }=\lim_{p\to \infty} \sup_{k\in \mathbb{N}}\gamma_k\Bigg( \sum_{j=k}^\infty \frac{| \xi_j|^p}{2^j}\Bigg)^{1/p}, \] where $(\xi_i)\in \mathrm{c}_0$ and $(\gamma_k)$ is a strictly increasing sequence with $\gamma_k>2$ and $\gamma_k\nearrow 3$. Let $(e_i)$ be the unit basis of $\mathrm{c}_0$. It is clear from this definition that $\vertiii{\cdot}$ is an equivalent norm on $\mathrm{c}_0$ for which $(e_i)$ is a $1$-unconditional Schauder basis. Since bounded completeness is invariant under isomorphisms, it follows that $(e_i)$ is not boundedly complete w.r.t. $\vertiii{\cdot}$. Unfortunately, \'Alvaro-Cembranos-Mendoza's method embodied in Theorem \ref{thm:A} shows that $(\mathrm{c}_0, \vertiii{\cdot})$ cannot have the FPP for affine nonexpansive mappings. {\it After completing this work, we became aware that Professors V. Nezir and N. Mustafa had sent an erratum to the Filomat journal in October 2020, explaining that there was a gap in one of the proofs in \cite{NM}.} \end{remark} \begin{remark} A Banach space $X$ as in Theorem \ref{thm:B} (iii) may fail to contain $\mathrm{c}_0$-sequences without the extra assumption that strongly bounded block basic sequences of the basis must dominate their subsequences. In fact, recall that the classical James space $J_2=\big\{ (a_i)_{i=1}^\infty \in \mathrm{c}_0 \,\big|\, \| (a_i)_{i=1}^\infty\|_{J_2}<\infty\big\}$, where $\| \cdot\|_{J_2}$ is defined by \[ \| (a_i)_{i=1}^\infty\|_{J_2}= \sup\Bigg\{ \Big( \sum_{k=1}^l | a_{p_{k+1}} - a_{p_k}|^2\Big)^{1/2}\,\colon \, 2\leq l,\, 1\leq p_1< \dots < p_l\Bigg\}, \] is not reflexive, does not contain any subspace isomorphic to $\mathrm{c}_0$ or $\ell_1$, and its unit basis is non-unconditional, shrinking and monotone. It is worth mentioning that $J_2$ has the weak-FPP (cf. \cite{K}). \end{remark} \begin{remark} It is known that for infinite compact Hausdorff spaces $K$, $C(K)$ does not have a $1$-suppression unconditional basis (see e.g. \cite[Remark 2.13]{ALMT}). On the other hand, it was pointed out in \cite{B-MJ1} that if $C(K)$ is isomorphic to $\mathrm{c}_0$ with $K$ metrizable, then it has the weak-FPP. However, it seems to be an open question whether or not $C(K)$ has this property if $K$ is a scattered compact space such that $K^{(\omega)}\neq \emptyset$. \end{remark} The following (well-known) result shows that $C(K)$ fails to have the FPP for affine nonexpansive mappings. Furthermore, in the case where $C(K)^*$ is separable, no Schauder basis for $C(K)$ can be boundedly complete. \begin{prop} If $K$ is an infinite compact Hausdorff space, then $C(K)$ contains a basic sequence isometrically equivalent to the unit basis of $\mathrm{c}_0$. \end{prop} \begin{proof} The proof is implicitly contained in Cembranos's argument highlighted in the proof of Theorem \ref{thm:C}, so we omit it. \end{proof} \begin{remark} As is naturally the case with $C(K)$ spaces, it is well-known that the spaces $C(K, E)$ always contain isometric copies of $\mathrm{c}_0$.
However, the main advantage of Cembranos's approach reproduced in Theorem \ref{thm:C} lies in the complementation property of isomorphic copies of $\mathrm{c}_0$. This kind of property has been useful in metric fixed point theory (cf. \cite{B-MJ2}). \end{remark} \begin{remark} It is known that if $X$ is a Banach space that has an unconditional basis and fails the CPCP, then $X$ contains a basic sequence equivalent to the unit vector basis of $\mathrm{c}_0$ (cf. \cite[Corollary 2.11]{Sch}). Therefore the proof of Theorem \ref{thm:D} provides a more general result. The Banach space $B_\infty$ constructed in \cite{GM} satisfies the CPCP, fails PCP, has a seminormalized supershrinking Schauder basis and does not contain subspaces isomorphic to $\mathrm{c}_0$. This shows that in some sense Theorem \ref{thm:E} is sharp. \end{remark} \begin{remark} The arguments used in Theorem \ref{thm:G} can also be applied to show that if $X$ has the weak Banach-Saks property and $\mathrm{dist}_{BM}(X, Y)=1$, where $Y$ is a subspace of a Banach space $Z$ having a strongly bimonotone basis, then $X$ has the weak fixed point property. This yields a slightly generalized version of a known result of Garc\'ia-Falset \cite{GFal}. \end{remark} The following notion was introduced by C. Lennard and Nezir in \cite{LN}, where important characterizations of reflexivity in terms of the FPP were established. \begin{definition}[Cascading Nonexpansive Mappings] Let $X$ be a Banach space and $C$ be a closed convex subset of $X$. Let $T\colon C\to C$ be a mapping and define $C_0=C$, $C_n=\overline{\mathrm{conv}}\hskip .05cm T(C_{n-1})$ for every $n\in \mathbb{N}$. The mapping $T$ is said to be cascading nonexpansive if there exists a sequence $(\lambda_n)_{n\geq 0}\subset [1,\infty)$ with $\lim_n \lambda_n=1$ and such that $\| Tx - Ty\| \leq \lambda_n\| x - y\|$ for all $x, y\in C_n$ and for all $n\geq 0$. \end{definition} We conclude the paper by referring the reader to Benavides and Jap\'on Pineda \cite{B-MJ2} for further fixed-point characterizations of weak compactness in spaces with an unconditional basis for the class of cascading nonexpansive mappings. As far as we can tell, the methods used in these last works do not allow us to obtain our results by considering the class of nonexpansive mappings (even in the special case where the basis is $1$-unconditional). \medskip \noindent{\bf Conflicts of Interest.} The author declares that there are no competing interests. \smallskip \section*{Acknowledgments} This work was started when the author visited the Mathematics Department of the Federal University of Amazonas (UFAM), from December 16, 2020 to January 7, 2021. For this reason, the author also thanks Professors Fl\'avia Morgana and Jeremias Le\~ao for the kind invitation and support. The author also wishes to thank Professor Bruno M. Braga for his careful reading of the proof of part (i) of Theorem \ref{thm:B}. The research was partially supported by FUNCAP/CNPq/PRONEX Grant 00068.01.00/15.
{ "arxiv_id": "2302.11389", "language": "en", "timestamp": "2023-02-23T02:15:02", "url": "https://arxiv.org/abs/2302.11389", "yymm": "2302" }
\section{Introduction} \subsection{The main results.} A celebrated theorem of Deligne and Illusie provides an analog of Hodge decomposition on de Rham cohomology in degrees $<p$ of varieties over a perfect field $k$ of positive characteristic $p$ that lift to $W_2(k)$. In \cite{deligne-illusie} they proved that for a smooth variety $X_0$ over $k$ that admits a lift to a smooth scheme $X_1$ over the ring $W_2(k)$ of length $2$ Witt vectors, the canonical truncation of the de Rham complex $\mathrm{dR}_{X_0/k}:=(F_{X_0/k*}\Omega^{\bullet}_{X_0/k},d)$ in degrees $\leq p-1$ decomposes in the derived category $D(X_0^{(1)})$ of ${\mathcal O}_{X^{(1)}_0}$-modules: \begin{equation}\label{intro: de rham complex decomposition} \tau^{\leq p-1}\mathrm{dR}_{X_0/k}\simeq\bigoplus\limits_{i=0}^{p-1} \Omega^i_{X^{(1)}_0/k}[-i] \end{equation} Passing to cohomology, this gives decompositions for $n<p$: \begin{equation}\label{intro: cohomology decomposition}H^n_{\mathrm{dR}}(X_0/k)\simeq \bigoplus\limits_{i+j=n}H^j(X_0,\Omega^i_{X_0/k})^{(1)}\end{equation} In this paper we investigate whether the de Rham complex of a variety liftable to $W_2(k)$ decomposes in further degrees. We prove that in general it does not decompose, and decomposition (\ref{intro: cohomology decomposition}) might not exist, answering a question of Deligne and Illusie \cite[Remarque 2.6(iii)]{deligne-illusie}: \begin{thm}[Corollary \ref{nondeg example: main corollary}]\label{intro: nondeg example} There exists a smooth projective variety $X_0$ over $k$ of dimension $p+1$ that lifts to $W(k)$ such that $\dim_k H^p_{\mathrm{dR}}(X_0/k)<\displaystyle\sum\limits_{i+j=p}\dim_k H^j(X_0,\Omega^i_{X_0/k})$. In particular, the Hodge-to-de Rham spectral sequence of $X_0$ does not degenerate at the first page, and $\mathrm{dR}_{X_0/k}$ is not quasi-isomorphic to $\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]$. \end{thm} Moreover, we compute the obstruction to decomposing the truncation $\tau^{\leq p}\mathrm{dR}_{X_0/k}$ of the de Rham complex in degrees $\leq p$ in terms of other invariants of the variety $X_0$ and its lift $X_1$. Given that $\tau^{\leq p-1}\mathrm{dR}_{X_0/k}$ decomposes, the next truncation fits into an exact triangle in the derived category $D(X^{(1)}_0)$: \begin{equation}\label{intro: deg p triangle} \bigoplus\limits_{i=0}^{p-1}\Omega^i_{X^{(1)}_0/k}[-i]\to\tau^{\leq p}\mathrm{dR}_{X_0/k}\to\Omega^p_{X^{(1)}_0/k}[-p] \end{equation} and hence gives an extension class $$e_{X_1,p}\in H^{p+1}(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0/k})\oplus H^p(X^{(1)}_0,\Lambda^pT_{X^{(1)}_0/k}\otimes\Omega^1_{X^{(1)}_0/k})\oplus\ldots$$ that vanishes if and only if (\ref{intro: deg p triangle}) splits in $D(X^{(1)}_0)$.
We compute this class: \begin{thm}[{Theorem \ref{cosimp applications: the best part de Rham} if $X_1$ lifts to $W(k)$, Corollary \ref{semiperf: smooth cor} in general}]\label{intro: main extension class} All components of $e_{X_1,p}$ except for the one in $H^{p+1}(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0/k})$ vanish, and that component is equal to \begin{equation}\label{intro: main extension class formula} \mathrm{Bock}_{X^{(1)}_1}(\ob_{F,X_1}\cdot\hspace{2pt} \alpha(\Omega^1_{X^{(1)}_0}))\in H^{p+1}(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0/k}) \end{equation} where \begin{itemize} \item $\mathrm{Bock}_{X^{(1)}_1}:H^p(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0/k})\to H^{p+1}(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0/k})$ is the Bockstein homomorphism, that is the connecting morphism arising from the short exact sequence of sheaves on $X_1$: $0\to \Lambda^p T_{X^{(1)}_0/k}\to\Lambda^p T_{X^{(1)}_1/W_2(k)}\to \Lambda^p T_{X^{(1)}_0/k}\to 0$. \item $\ob_{F,X_1}\in H^1(X^{(1)}_0,F_{X_0^{(1)}}^*T_{X^{(1)}_0})$ is the obstruction to lifting the Frobenius morphism $F_{X_0}:X_0\to X_0$ to an endomorphism of $X_1$, \item $\alpha(\Omega^1_{X^{(1)}_0})\in H^{p-1}(X^{(1)}_0,F_{X^{(1)}_0}^*\Omega^1_{X^{(1)}_0}\otimes(\Lambda^p\Omega^1_{X^{(1)}_0})^{\vee})$ is a certain `characteristic class' of the cotangent bundle, defined in Definition \ref{free cosimplicial: alpha definition}.\end{itemize} \end{thm} The class $\alpha(E)\in H^{p-1}(X_0,F_{X_0}^*E\otimes (\Lambda^p E)^{\vee})$ is defined for any vector bundle $E$ on $X_0$, and it can be made especially explicit when $p=2$, in which case it is the class of the extension \begin{equation} 0\to F_{X_0}^*E\to S^2E\xrightarrow{e_1\cdot e_2\mapsto e_1\wedge e_2}\Lambda^2E\to 0. \end{equation} The fact that $e_{X_1,p}$ has only one potentially non-zero component is a shadow of an additional ${\mathbb Z}/p$-grading on the de Rham complex $\mathrm{dR}_{X_0/k}$ in the presence of a lift $X_1$, discovered by Drinfeld through his work on the prismatization \cite{drinfeld}. More generally, Bhatt-Lurie \cite{apc} and Li-Mondal \cite{li-mondal} showed that a lift of $X_0$ to a smooth $W_2(k)$-scheme $X_1$ induces an endomorphism $\Theta_{X_1}$ of $\mathrm{dR}_{X_0/k}$ in the derived category $D(X^{(1)}_0)$, called the {\it Sen operator}. The action of $\Theta_{X_1}$ on the $i$th cohomology sheaf $H^i(\mathrm{dR}_{X_0/k})\simeq\Omega^i_{X^{(1)}_0/k}$ is given by multiplication by $(-i)$. Therefore generalized eigenspaces for this operator form a decomposition \begin{equation}\label{intro: drinfeld decomposition} \mathrm{dR}_{X_0/k}\simeq\bigoplus\limits_{i=0}^{p-1}\mathrm{dR}_{X_0/k,i} \end{equation} such that the object $\mathrm{dR}_{X_0/k,i}$ has non-zero cohomology only in degrees congruent to $i$ modulo $p$. In particular, $\tau^{\leq p}\mathrm{dR}_{X_0/k,0}$ is an extension of $\Omega^p_{X^{(1)}_0/k}[-p]$ by ${\mathcal O}_{X^{(1)}_0}$ that splits off as a direct summand from the extension (\ref{intro: deg p triangle}), corroborating the fact that the components of $e_{X_1,p}$ in $H^{p+1-i}(X_0,\Lambda^pT_{X_0}\otimes \Omega^i_{X_0})$ vanish for $i>0$. It was observed earlier by Achinger and Suh \cite{achinger-suh} that these components vanish for $i>1$, for a purely homological-algebraic reason which is related to our method.
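Returning for a moment to the description of $\alpha(E)$ for $p=2$, let us record a standard local verification of the shape of the extension above (not needed in the sequel). If $E$ is free with basis $e_1,\dots,e_n$, the kernel of the surjection $S^2E\to\Lambda^2E$, $e_ie_j\mapsto e_i\wedge e_j$, is spanned by the squares $e_i^2$; since in characteristic $2$ one has $(e+f)^2=e^2+f^2$ and $(ae)^2=a^2e^2$, the assignment \begin{equation*} F_{X_0}^*E\to S^2E,\qquad a\otimes e\mapsto a\,e^2, \end{equation*} is a well-defined ${\mathcal O}_{X_0}$-linear map identifying $F_{X_0}^*E$ with this kernel, and globalizing gives the exact sequence displayed above.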
The Sen operator $\Theta_{X_1}$ not only induces the decomposition (\ref{intro: drinfeld decomposition}) but also equips each direct summand $\mathrm{dR}_{X_0/k,i}$ with the additional structure of the nilpotent endomorphism $\Theta_{X_1}+i$. Our methods allow us to describe the action of $\Theta_{X_1}$ on $\tau^{\leq p}\mathrm{dR}_{X_0,0}$. Since $\tau^{\leq p}\mathrm{dR}_{X_0/k,0}$ has only two non-zero cohomology sheaves and $\Theta_{X_1}$ acts on them by zero, it naturally defines a map $\Omega^p_{X^{(1)}_0}[-p]\to{\mathcal O}_{X^{(1)}_0}$ in the derived category $D(X^{(1)}_0)$ whose cohomology class we denote by $c_{X^{(1)}_1,p}\in H^p(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0})$. This class can be described as: \begin{thm}[{Theorem \ref{semiperf: main sen operator}}]\label{intro: sen class} For a smooth $W_2(k)$-scheme $X_1$ with the special fiber $X_0=X_1\times_{W_2(k)}k$ we have \begin{equation}\label{intro: sen class equation}c_{X_1,p}=\ob_{F,X_1}\cdot\alpha(\Omega^1_{X_0})\end{equation}where the right hand side is the product of classes $\ob_{F,X_1}\in H^1(X_0,F_{X_0}^{*}T_{X_0/k})$ and $\alpha(\Omega^1_{X_0/k})\in H^{p-1}(X_0,F_{X_0}^*\Omega^1_{X_0/k}\otimes \Lambda^p T_{X_0/k})$. \end{thm} It is not a coincidence that $e_{X_1,p}$ is the result of applying the Bockstein homomorphism to $c_{X_1,p}$ -- this follows from the basic properties of the action of the Sen operator on the diffracted Hodge cohomology of $X_1$, as we prove in Lemma \ref{sen operator: classes over zp2}. In particular, if $\tau^{\leq p}\mathrm{dR}_{X_0/k}$ is not decomposable then the Sen operator on $\tau^{\leq p}\mathrm{dR}_{X_0/k}$ must be non-semi-simple. Thus Theorem \ref{intro: nondeg example} also provides an example of a liftable variety with a non-semisimple Sen operator on its de Rham cohomology, answering a question of Bhatt. In fact, the non-vanishing of $c_{X_1,p}$ is a more frequent phenomenon than that of $e_{X_1,p}$ as we demonstrate by the following: \begin{cor}[Proposition \ref{nonsemisimp: p dim example}] There exists a smooth projective variety $X_0$ of dimension $p$ over $k$ equipped with a lift $X_1$ over $W_2(k)$ such that the Sen operator $\Theta_{X_1}$ on $\mathrm{dR}_{X_0/k}$ is not semisimple. \end{cor} The examples we construct admit a smooth proper morphism to a curve, and for such varieties the class $\alpha(\Omega^1_{X_0})$ can be expressed in terms of the Kodaira-Spencer class of this fibration, giving a more tangible reformulation (Theorem \ref{nonsemisimp: sen class fibration}) of Theorem \ref{intro: sen class}. We give two different, but similar in spirit, proofs of Theorem \ref{intro: main extension class}, both of which crucially use the multiplicative structure on de Rham and Hodge-Tate complexes. \subsection{Proof of Theorem \ref{intro: main extension class}.} The first proof, for which we additionally need to assume that $X_0$ lifts to a smooth (formal) scheme $X$ over $W(k)$, takes place in homotopical algebra rather than algebraic geometry. To prove Theorem \ref{intro: main extension class} in full generality, in a situation when only a lift over $W_2(k)$ exists, we appeal to Theorem \ref{intro: sen class}, which implies it by Corollary \ref{semiperf: smooth cor}. To describe the idea, recall the structure of the proof of \cite{deligne-illusie}. Given the lift $X_1$ over $W_2(k)$, they produce a map $s:\Omega^1_{X^{(1)}_0}[-1]\to \mathrm{dR}_{X_0/k}$ in the derived category $D(X^{(1)}_0)$ inducing an isomorphism on cohomology sheaves in degree $1$.
Using that $\mathrm{dR}_{X_0/k}$ can be represented by a cosimplicial commutative algebra in quasi-coherent sheaves on $X_0$ via the \v{C}ech resolution, $s$ gives rise to maps \begin{equation}\label{intro: deg i splitting map}s_i:S^i(\Omega^1_{X^{(1)}_0/k}[-1])\to \mathrm{dR}_{X_0/k}\end{equation} for all $i\geq 0$, where $S^i$ denotes the derived symmetric power functor in the sense of \cite{illusie-cotangent1}. When $i<p$, the derived symmetric power $S^i(\Omega^1_{X^{(1)}_0/k}[-1])$ is identified with $\Omega^i_{X^{(1)}_0/k}[-i]$, and the maps $s_i$ produce a quasi-isomorphism $\bigoplus s_i:\bigoplus\limits_{i=0}^{p-1}\Omega^i_{X^{(1)}_0/k}[-i]\simeq\tau^{\leq p-1}\mathrm{dR}_{X_0/k}$. This method of decomposing $\mathrm{dR}_{X_0/k}$ stops working for $i=p$, because $S^p(\Omega^1_{X^{(1)}_0/k}[-1])$ is now different from $\Omega^p_{X^{(1)}_0/k}[-p]$. It has non-zero cohomology sheaves in several degrees, and we analyze this object in detail in Section \ref{free cosimplicial: section}. Specifically, there is a natural map $N_p:S^p(\Omega^1_{X^{(1)}_0/k}[-1])\to\Omega^p_{X^{(1)}_0/k}[-p]$ but the map $s_p$ need not factor through it. In Lemma \ref{free cosimplicial: algebra Bockstein} we compute the map $s_p$ on the fiber of $N_p$, and thus relate the obstruction to splitting $\tau^{\leq p}\mathrm{dR}_{X_0/k}$ to the obstruction to the existence of a section of the map $N_p$, the latter being precisely the class $\alpha(\Omega^1_{X_0})$. This argument is not specific to the de Rham complex and applies more generally to derived commutative algebras whose cohomology algebra is freely generated by $H^1$. In the body of the paper we work with derived commutative algebras in the sense of Mathew, but here we state the main algebraic result for the more classical notion of cosimplicial commutative algebras: \begin{thm}[Theorem \ref{cosimp: main theorem}]\label{intro: general cosimplicial} Let $A$ be a cosimplicial commutative algebra in quasi-coherent sheaves on a flat ${\mathbb Z}_{(p)}$-scheme $X$. Suppose that $H^0(A)={\mathcal O}_X$, the sheaf $H^1(A)$ is a locally free sheaf of ${\mathcal O}_X$-modules, and the multiplication on the cohomology of $A$ induces an isomorphism $\Lambda^{\bullet}H^1(A)\simeq H^{\bullet}(A)$. If there exists a map $s:H^1(A)[-1]\to A$ in $D(X)$ inducing an isomorphism on cohomology in degree $1$, then \begin{enumerate} \item $\tau^{\leq p-1}A$ is quasi-isomorphic to $\bigoplus\limits_{i=0}^{p-1}H^i(A)[-i].$ \item The connecting map $H^p(A)\to (\tau^{\leq p-1}A)[p+1]$ corresponding to the fiber sequence $\tau^{\leq p-1}A\to\tau^{\leq p}A\to H^p(A)[-p]$ can be described as the composition \begin{multline}\label{intro: general algebra formula} H^p(A)=\Lambda^p H^1(A)\to \Lambda^pH^1(A)/p\xrightarrow{\alpha(H^1(A)/p)}F_{X_0}^*(H^1(A)/p)[p-1]\xrightarrow{F_{X_0}^*s[p]}(\tau^{\leq 1}F^*_{X_0}(A/p))[p]\\ \xrightarrow{\varphi_{A/p}}(\tau^{\leq 1}A/p)[p]\xrightarrow{\tau^{\leq 1}\mathrm{Bock}_A}(\tau^{\leq 1}A)[p+1]\xrightarrow{\oplus}(\tau^{\leq p-1}A)[p+1] \end{multline} Here $\varphi_{A/p}$ is the Frobenius morphism of the cosimplicial commutative algebra $A/p$, and $\mathrm{Bock}_A:A/p\to A[1]$ is the morphism corresponding to the fiber sequence $A\xrightarrow{p}A\to A/p$ on $X$.
\end{enumerate} \end{thm} The fact that the map $s_p:S^p(\Omega^1_{X^{(1)}_0}[-1])\to\mathrm{dR}_{X_0/k}$ need not factor through $\Omega^p_{X^{(1)}_0}[-p]$ is an incarnation of the classical phenomenon responsible for the existence of non-trivial Steenrod operations on mod $p$ cohomology of topological spaces. If the map $s:\Omega^1_{X^{(1)}_0/k}[-1]\to \mathrm{dR}_{X_0/k}$ in the derived category $D(X^{(1)}_0)$ were represented by a genuine map between these complexes, all the maps $s_i$ would have to factor through $\Omega^i_{X^{(1)}_0/k}[-i]$ because of the commutative DG algebra structure on the de Rham complex. However, $s$ can generally only be represented by a chain-level map into a model of $\mathrm{dR}_{X_0/k}$ where multiplication is commutative up to a set of coherent homotopies, rather than commutative on the nose. Theorem \ref{intro: general cosimplicial} draws inspiration from Steenrod's construction \cite{steenrod-symmetric-homology}, \cite{may-steenrod} of cohomology operations from homology classes of symmetric groups, and our method of proof implies the following statement about operations on cohomology of cosimplicial commutative ${\mathbb F}_p$-algebras, defined by Priddy \cite{priddy}. The appearance of Frobenius and Bockstein homomorphisms in the description of the degree $1$ Steenrod operation is directly related (Lemma \ref{free cosimplicial: algebra Bockstein} being the common reason for both) to their appearance in (\ref{intro: general algebra formula}). \begin{pr}[{Proposition \ref{free cosimplicial: steenrod operations description prop}}] For a cosimplicial commutative ${\mathbb F}_p$-algebra $B$, \begin{enumerate} \item the degree zero Steenrod operation $P^0:H^i(B)\to H^i(B)$ is equal to the Frobenius endomorphism $\varphi_B$ of $B$, \item the degree $1$ Steenrod operation $P^1:H^i(B)\to H^{i+1}(B)$ is equal to Serre's Witt vector Bockstein homomorphism induced by the exact sequence $B\xrightarrow{V}W_2(B)\to B$. In particular, for any cosimplicial commutative flat ${\mathbb Z}/p^2$-algebra $\widetilde{B}$ lifting $B$, $P^1$ can be described as the composition $H^i(B)\xrightarrow{\varphi_B}H^i(B)\xrightarrow{\mathrm{Bock}_{\widetilde{B}}}H^{i+1}(B)$. \end{enumerate} \end{pr} We apply Theorem \ref{intro: general cosimplicial} to the diffracted Hodge complex $A:=\Omega^{\DHod}_X$ of \cite{apc} (it coincides with the Hodge-Tate cohomology of \cite{prisms} over an appropriately chosen prism), and deduce Theorem \ref{intro: main extension class} by reducing modulo $p$, using that $\mathrm{dR}_{X_0/k}$ is the reduction of $\Omega^{\DHod}_{X^{(1)}}$. We thus obtain along the way, in Theorem \ref{cosimp applications: HT extension}, a similar description of the extension class corresponding to $\tau^{\leq p}\Omega^{\DHod}_X$ as well. Theorem \ref{intro: general cosimplicial} also applies to a number of cohomology theories evaluated on abelian varieties: in Section \ref{AV coherent: section} we use it to study extensions in the canonical filtration on coherent (Proposition \ref{AV coherent: coherent main}), de Rham (Proposition \ref{AV coherent: AV de rham}), and \'etale cohomology (Proposition \ref{AV coherent: etale}) of an abelian scheme being acted on by a group. The results on coherent and de Rham equivariant cohomology of abelian schemes play the key role in our example of a liftable variety with a non-degenerate conjugate spectral sequence.
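Let us also recall, for the reader's convenience, the standard cochain-level description of the Bockstein morphisms appearing above; it is stated for the associated cochain complexes and is not specific to the present situation. If $A$ is represented by a complex of $p$-torsion-free sheaves, then $\mathrm{Bock}_A$ sends the class of a cocycle $\bar{a}\in (A/p)^i$ to the class of $\tfrac{1}{p}da\in A^{i+1}$, where $a\in A^i$ is any lift of $\bar{a}$ (the element $da$ is divisible by $p$ because $d\bar{a}=0$). Similarly, for a flat ${\mathbb Z}/p^2$-lift $\widetilde{B}$ of $B$, the morphism $\mathrm{Bock}_{\widetilde{B}}$ sends the class of a cocycle $b\in B^i$ to the class of the unique $c\in B^{i+1}$ whose image under the identification $B\simeq p\widetilde{B}$ equals $d\tilde{b}$, for any lift $\tilde{b}\in \widetilde{B}^i$ of $b$.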
A related approach to decomposability of the de Rham complex has been previously used by Beuchot \cite{beuchot}, who used a description of cohomology sheaves of $((\Omega^1_{X^{(1)}_0}[-1])^{\otimes p})_{hS_p}$ to prove decomposability of the truncation $\tau^{\leq p}\mathrm{dR}_{X_0/k}$ for a liftable variety $X_0$ under the assumption that certain cohomology groups of $X_0$ vanish. The consequence of Theorem \ref{intro: general cosimplicial} for the de Rham complex was also obtained independently by Robert Burklund, by a similar method. At the moment it is unclear how to generalize Theorem \ref{intro: general cosimplicial}, and therefore Theorem \ref{intro: main extension class}, to describe extensions in degrees $>p$. We make some preliminary observations on this in Section \ref{higher ext: section}. Computations of the map $S^p(\Omega^1_{X^{(1)}_0}[-1])\to\mathrm{dR}_{X_0/k}$ that we perform for the proof of Theorem \ref{intro: main extension class} also allow us to determine when the de Rham complex can be decomposed compatibly with the algebra structure. The following result is to be contrasted with the formality result of \cite{deligne-griffiths-morgan-sullivan} for smooth projective varieties in characteristic zero: \begin{pr}[{Proposition \ref{cosimp applications: de rham formality}}] For a smooth variety $X_0$ over $k$ the de Rham complex $\mathrm{dR}_{X_0/k}$ is quasi-isomorphic to $\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]$ as an $E_{\infty}$-algebra in $D(X^{(1)}_0)$ if and only if $X_0$ admits a lift over $W_2(k)$ together with its Frobenius endomorphism. \end{pr} The `if' direction was proven in \cite{deligne-illusie} and is the starting point of their proof of the decomposition (\ref{intro: de rham complex decomposition}). \subsection{Liftable variety with a non-degenerate conjugate spectral sequence} It is rather non-obvious that the formula (\ref{intro: main extension class formula}) ever takes non-zero values. Our example for Theorem \ref{intro: nondeg example} is an approximation (in the sense of \cite{antieau-bhatt-mathew}) of the classifying stack of a finite flat non-commutative group scheme $G$ over $W(k)$ defined as follows. Let $E$ be an elliptic curve over $W(k)$ whose reduction is supersingular. The group scheme $G$ is \begin{equation} G:=SL_p({\mathbb F}_{p^2})\ltimes (E[p]\otimes_{{\mathbb F}_p}{\mathbb F}_{p^2}^{\oplus p}). \end{equation} Here $E[p]\otimes_{{\mathbb F}_p}{\mathbb F}_{p^2}^{\oplus p}$ is the product of $2p$ copies of the $p$-torsion group scheme $E[p]$, and the discrete group $SL_p({\mathbb F}_{p^2})$ acts on it via the tautological representation on ${\mathbb F}_{p^2}^{\oplus p}$. We do not apply formula (\ref{intro: main extension class formula}) to the stack $BG$ directly, but rather study the conjugate spectral sequence of an auxiliary quotient stack by applying Theorem \ref{intro: general cosimplicial} to equivariant Hodge and de Rham cohomology of the abelian scheme $E^{\times d}$ for a certain $d$, with respect to a certain discrete group. First, we show that the class $\alpha(-)$ does not vanish in the universal example: \begin{pr}[{Proposition \ref{rational group cohomology: main non-vanishing}}] Denote by $V$ the tautological $p$-dimensional representation of the algebraic group $GL_{p,k}$. The class $\alpha(V)\in \Ext^{p-1}_{GL_{p,k}}(\Lambda^p V,V^{(1)})$ is non-zero.
\end{pr} For the proof, we find an explicit representation for $\alpha(V)$ as a Yoneda extension and apply Kempf's vanishing theorem to compare spectral sequences associated to b\^{e}te and canonical filtrations on this explicit complex. We also prove enhancements of this non-vanishing: the class $\alpha(V)$ remains non-zero after being restricted to the discrete group $GL_p({\mathbb F}_{p^r})$ whenever $r>1$ (Proposition \ref{group cohomology: from algebraic to discrete}), and the Bockstein homomorphism applied to $\alpha(V)$ gives a non-zero class in the cohomology of the special linear group $SL_p({\mathcal O}_F)$ of the ring of integers in an appropriately chosen number field $F$ (Proposition \ref{group cohomology: ring of integers main}). These enhancements use the technique of Cline-Parshall-Scott-van der Kallen \cite{cpsk} for comparing the cohomology of a reductive group with that of its group of ${\mathbb F}_q$-points. With these non-vanishing results in hand, we apply Theorem \ref{intro: general cosimplicial} to de Rham and coherent cohomology of an abelian scheme being acted on by a group. Our example in Theorem \ref{intro: nondeg example} stems from the following potential discrepancy between de Rham and coherent cohomology: \begin{pr}[{Proposition \ref{nondeg example: AV quotient}}]\label{intro: equivariant de rham vs coherent} There exists an abelian scheme $A$ over $W(k)$ with an action of a discrete group $\Gamma$ such that the truncation $\tau^{\leq p}\mathrm{R}\Gamma(A_0,{\mathcal O})$ of the cohomology of the structure sheaf of the special fiber $A_0=A\times_{W(k)}k$ is $\Gamma$-equivariantly decomposable, while its de Rham cohomology $\tau^{\leq p}\mathrm{R}\Gamma_{\mathrm{dR}}(A_0/k)$ is not. \end{pr} The ultimate reason for different behavior of de Rham and coherent cohomology in Proposition \ref{intro: equivariant de rham vs coherent} is that the Frobenius morphism appearing in (\ref{intro: general algebra formula}) is zero on $H^1(A_0,{\mathcal O})$, but is non-zero on $H^1_{\mathrm{dR}}(A_0/k)$. We moreover arrange that the stack $[A_0/\Gamma]$ has a non-degenerate conjugate spectral sequence. We then construct a map $[A_0/\Gamma]\to BG$ and deduce that the conjugate spectral sequence of $BG$ does not degenerate either, as desired. The dependence of the extension class (\ref{intro: general algebra formula}) on the Frobenius action yields the following criterion for supersingularity of an elliptic curve: \begin{pr}[{Corollary \ref{AV coherent: supersingularity criterion}}] \label{intro: oridnary vs supseringular} Suppose that $n\geq 2p$. For any elliptic curve $E_0$ over $k$, the discrete group $GL_n({\mathbb Z})$ naturally acts on $E_0^{\times n}$ and therefore on $\mathrm{R}\Gamma(E_0^{\times n},{\mathcal O})$. The complex $\tau^{\leq p}\mathrm{R}\Gamma(E_0^{\times n},{\mathcal O})$ is $GL_n({\mathbb Z})$-equivariantly quasi-isomorphic to the direct sum $\bigoplus\limits_{i=0}^pH^i(E_0^{\times n},{\mathcal O})[-i]$ of its cohomology if and only if $E_0$ is supersingular. \end{pr} \subsection{Computation of the Sen operator.} Finally, let us describe the idea of our proof of Theorem \ref{intro: sen class}. It might be possible to upgrade Theorem \ref{intro: general cosimplicial} to work in the category of sheaves on $X$ equipped with an endomorphism, but we instead evaluate the Sen operator on $\tau^{\leq p}\mathrm{dR}_{X_0/k}$ using the technique of descent to quasiregular semiperfectoid rings developed in \cite{bms}.
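For orientation, let us recall the basic supply of examples to which this descent technique applies; this is standard material from \cite{bms} and is only meant as a reminder. Perfectoid rings, as well as $p$-complete quotients of perfectoid rings by Koszul-regular sequences, are quasiregular semiperfectoid; typical examples are ${\mathcal O}_{{\mathbb C}_p}/p$ and, in characteristic $p$, quotients such as $k[x^{1/p^{\infty}}]/(x)$ of perfect rings by nonzerodivisors.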
We set up the foundation for our computation in Section \ref{sen operator: section}, defining in particular a version of diffracted Hodge cohomology equipped with a Sen operator relative to the base ${\mathbb Z}/p^n$ for $n\geq 2$ (Theorem \ref{sen operator: zpn main}). We then prove formula (\ref{intro: sen class equation}) in Theorem \ref{semiperf: main sen operator}. For a quasiregular semiperfectoid $W(k)$-algebra $S$ with mod $p$ reduction $S_0=S/p$, the derived de Rham complex $\mathrm{dR}_{S_0/k}$ is concentrated in degree zero, so the Sen operator in question is an endomorphism of a classical module, rather than an object of the derived world. To compute it, we consider the natural map \begin{equation}\label{intro: semiperf sym to dr map}\bigoplus\limits_{i\geq 0} S^i(L\Omega^1_S[-1])\to \Omega^{\DHod}_S\end{equation} from the symmetric algebra on the shifted cotangent complex of $S$, comprised of the analogs of the maps (\ref{intro: deg i splitting map}). Since $S$ is quasiregular semiperfectoid, the map (\ref{intro: semiperf sym to dr map}) is an injection of flat $W(k)$-modules with torsion cokernel. As $\Theta_S$ acts on the source of this map via multiplication by $-i$ on the summand $S^i(L\Omega^1_S[-1])$, this allows us to pin down exactly how $\Theta_S$ acts on $\Omega^{\DHod}_S$ (indeed, since the cokernel is torsion and both modules are flat, an endomorphism of $\Omega^{\DHod}_S$ is determined by its restriction to the image of (\ref{intro: semiperf sym to dr map})). A slight variation of this argument works for a semiperfectoid $W_2(k)$-algebra $S_1$ lifting $S_0$, even if there is no lift over $W(k)$, as we prove in Lemma \ref{semiperf: sen for qrsprfd}. In this case, however, we only compute the Sen operator on $\mathrm{dR}_{S_0}$ rather than on $\Omega^{\DHod}_{S_1/{\mathbb Z}/p^2}$. Note that, as in the proof of Theorem \ref{intro: main extension class}, we do not directly use here the construction of the Sen operator via the Cartier-Witt stack, but rather rely crucially on its basic properties: that it is a derivation with respect to the multiplication on $\Omega^{\DHod}_X$ and that it acts by scalar multiplication on graded quotients of the conjugate filtration. \subsection*{Notation.} We use the machinery of $\infty$-categories in the sense of \cite{lurie-htt}. When referring to an ordinary category we always view it as an $\infty$-category via the simplicial nerve construction \cite[1.1.2]{lurie-htt}. For an object ${\mathcal F}\in D(X)$ of the derived category of quasi-coherent sheaves on a prestack $X$ we denote by $H^i({\mathcal F})\in \QCoh(X)$ the cohomology sheaf of ${\mathcal F}$ in degree $i$, and by $H^i(X,{\mathcal F})$ the abelian group of the degree $i$ cohomology of the derived global sections complex $\mathrm{R}\Gamma(X,{\mathcal F})$. By $\tau^{\leq i}{\mathcal F}$ we denote the canonical truncation: it is an object with a map $\tau^{\leq i}{\mathcal F}\to {\mathcal F}$ such that $H^j(\tau^{\leq i}{\mathcal F})\simeq H^j({\mathcal F})$ for $j\leq i$, and $H^j(\tau^{\leq i}{\mathcal F})=0$ for $j>i$. When working in a stable $\infty$-category ${\mathcal C}$ linear over a commutative ring $R$, for objects $M,N\in {\mathcal C}$ we denote by $\mathrm{RHom}_{{\mathcal C}}(M,N)\in D(R)$ the corresponding object of morphisms. The mapping space $\Map_{{\mathcal C}}(M,N)$ is equivalent to the image of the truncation $\tau^{\leq 0}\mathrm{RHom}_{{\mathcal C}}(M,N)$ under the forgetful functor $D(R)\to \Sp$ to the $\infty$-category of spaces.
For an algebraic stack $X$ over a ring $R$ we denote by $L\Omega^i_{X/R}$ the $i$th exterior power of its cotangent complex, viewed as an object of the derived category $D(X)$ of quasicoherent sheaves on $X$. If $pR=0$ we denote by $\mathrm{dR}_{X/R}$ the (derived) de Rham complex of $X$ relative to $R$, viewed as an object of $D(X\times_{R,F_R} R)$, cf. \cite[\S 3]{bhatt-derived}. We sometimes drop the base $R$ from the notation if it is clear from the context, especially if $R$ is perfect. \subsection*{Acknowledgements.}I am very grateful to Vadim Vologodsky for many conversations on the surrounding topics throughout several years; I learned most of the techniques used in this paper from him. I am thankful to Luc Illusie for numerous stimulating conversations, encouragement, and many very helpful comments and corrections on the previous versions of this text. I am also grateful to Bhargav Bhatt, Lukas Brantner, Robert Burklund, Sanath Devalapurkar, Dmitry Kubrak, Shizhang Li, Shubhodip Mondal, Arthur Ogus, Alexei Skorobogatov, and Bogdan Zavyalov for helpful conversations, suggestions, and explanations. Special thanks to Dmitry and Shizhang for pointing out errors in the previous versions of some of the arguments. This work was completed during my stay at the Max Planck Institute for Mathematics in Bonn; I am grateful to MPIM for its hospitality. This research was partially conducted during the period the author served as a Clay Research Fellow. \section{Preliminaries on homotopical algebra} The key to our approach to the study of the de Rham and Hodge-Tate cohomology is the fact that the de Rham complex and the diffracted Hodge complex admit a structure of a commutative algebra, in the appropriate derived sense. In this expository section we summarize the necessary facts about non-abelian derived functors, derived commutative algebras in the sense of Mathew, and cosimplicial commutative algebras. The specific piece of structure that will be used in Section \ref{cosimp: section} is a map $S^p A\tilde{o} A$ where $S^p$ is the derived symmetric power functor in the sense of \cite{illusie-cotangent1}. Such a map is naturally induced both by the structure of a derived commutative algebra and by a cosimplicial commutative algebra structure on $A$. We choose to work with the former, but comment in Subsection \ref{dalg: cosimp subsection} on how to translate the results into the language of cosimplicial commutative algebras. Our exposition here does not introduce any original mathematics but, on the contrary, aims to convince the reader that the part of this machinery relevant for us is powered by a fairly classical piece of homolog(top)ical algebra. \subsection{Non-abelian derived functors.} Let $X$ be an arbitrary prestack, that is, a functor $X:\Ring\tilde{o} \Sp$ from the category of ordinary commutative rings to the $\infty$-category of spaces. We denote by $\PreStk$ the $\infty$-category of prestacks. For a prestack $X$, we denote by $D(X)$ the derived $\infty$-category of quasi-coherent sheaves on $X$ defined as the limit $\lim\limits_{\Spec R\tilde{o} X}D(R)$ of derived categories of $R$-modules, cf. \cite[Chapter I.3]{gaitsgory-rozenblum}. We work in this large generality simply because all actual proofs in this section take place over affine schemes and the language of prestacks provides a convenient framework for generalizing the results to more general geometric objects via descent. By an algebraic stack we will always mean a 1-Artin stack.
In all of our applications, $X$ will either be a scheme, a $p$-adic formal scheme, or a global quotient of a scheme by a group scheme. If $X$ is a quasi-compact scheme with affine diagonal then $D(X)$ is equivalent to the derived category (in the sense of \cite[Definition 1.3.5.8]{lurie-ha}) of the usual abelian category $\QCoh(X)$ of quasi-coherent sheaves on $X$. If such an $X$ is being acted on by an affine flat group scheme $G$ then $D([X/G])$ is equivalent to the left completion of the derived category of the abelian category $\QCoh_G(X)$ of $G$-equivariant quasicoherent sheaves on $X$. For each $n\geq 0$ we have the derived functors $S^n,\Gamma^n,\Lambda^n:D(X)\tilde{o} D(X)$ of symmetric, divided, and exterior powers, respectively. These are the $\infty$-categorical enhancements of the non-abelian derived functors defined by Illusie \cite{illusie-cotangent1}. We refer to \cite[Section 3]{brantner-mathew} and \cite[Section 2.3]{barwick-glasman-mathew-nikolaus} for a treatment in the case $X=\Spec R$ is an affine scheme, and \cite[Section A.2]{kubrak-prikhodko-p-adic} for the general case. We will now briefly recall the construction of these functors and their basic properties. Denote by $\Mod_R$ the ordinary abelian category of $R$-modules. These functors are uniquely characterized by the following properties: \begin{enumerate} \item If $X=\Spec R$ is affine then for a flat module $M\in \Mod_R\subset D(R)$, the values \begin{multline*}S^n(M)=(M^{\otimes n})_{S_n}, \Gamma^n(M)=(M^{\otimes n})^{S_n},\\ \Lambda^n(M)=(M^{\otimes n})/\langle m_1\otimes\ldots\otimes m_n|m_i=m_j\text{ for some }i\neq j\rangle\end{multline*} are the usual symmetric, divided, and exterior powers. \item If $X$ is an affine scheme, the functors $S^n,\Lambda^n,\Gamma^n$ preserve sifted colimits, and are polynomial functors of degree $\leq n$ in the sense of \cite[Definition 2.11]{barwick-glasman-mathew-nikolaus}. \item These functors are natural in morphisms of the underlying prestacks. That is, the functors $S^n,\Gamma^n,\Lambda^n$ can be enhanced to endomorphisms of the functor $D(-):\PreStk^{\op}\tilde{o} \Cat_{\infty}$ that sends $X$ to $D(X)$ and a morphism $f:X\tilde{o} Y$ to the pullback functor $f^*:D(Y)\tilde{o} D(X)$. \end{enumerate} These derived functors are first constructed on affine schemes using the following: \begin{lm}[{\hspace{1sp}\cite{barwick-glasman-mathew-nikolaus}}]\label{dalg: derived functors for rings} For a commutative ring $R$, the restriction functor \begin{equation}\label{dalg: derived functors for rings formula}\Func_{\leq n}(D(R),D(R))\tilde{o} \Func_{\leq n}(\Proj_R^{\mathrm{f.g.}},D(R))\end{equation} is an equivalence. Here the source of the functor is the category of polynomial endofunctors of degree $\leq n$ of the stable $\infty$-category $D(R)$ that preserve sifted colimits. The target category is the category of polynomial functors of degree $\leq n$ from the abelian category of finitely generated projective $R$-modules to the $\infty$-category $D(R)$. \end{lm} \begin{proof} The statement is a combination of \cite[Theorem 2.19]{barwick-glasman-mathew-nikolaus} applied to ${\mathcal A}=\Proj_R^{\mathrm{f.g.}}$ with the fact that $D(R)$ is the ind-completion of $\mathrm{Stab}({\mathcal A})=\Perf(R)$.
\end{proof} Denote by $\Mod:\Ring\tilde{o} \Cat$ the functor from the ordinary category of commutative rings to the $2$-category of ordinary categories that sends a commutative ring $R$ to the ordinary category $\Mod_R$ of $R$-modules, and a morphism of rings $f:R\tilde{o} R'$ to the functor $R'\otimes_R -:\Mod_R\tilde{o} \Mod_{R'}$. Similarly, denote by $\Proj^{\mathrm{f.g.}}$ the subfunctor of $\Mod$ that sends a ring $R$ to the category $\Proj^{\mathrm{f.g.}}_R$ of finitely generated projective $R$-modules. We denote by $\Mor_{\leq n}(\Proj^{\mathrm{f.g.}},\Mod)\subset \Mor(\Proj^{\mathrm{f.g.}},\Mod)$ the full subcategory of the ordinary $1$-category of natural transformations $\Proj^{\mathrm{f.g.}}\tilde{o}\Mod$ spanned by transformations that are given by polynomial functors $\Proj^{\mathrm{f.g.}}_R\tilde{o}\Mod_R$ of degree $\leq n$, for every ring $R$. We have the following construction principle for polynomial functors on arbitrary prestacks: \begin{lm}\label{dalg: derived functors for stacks} For every prestack $X$ there is a functor \begin{equation}\Sigma_X:\Mor_{\leq n}(\Proj^{\mathrm{f.g.}},\Mod)\tilde{o} \Func(D(X),D(X))\end{equation} satisfying the following property: For any morphism $f:\Spec R\tilde{o} X$ from an affine scheme, and a polynomial functor $T\in \Mor_{\leq n}(\Proj^{\mathrm{f.g.}},\Mod)$ there is an equivalence of functors $f^*\circ \Sigma_X(T)\simeq T^{\mathrm{derived}}_R\circ f^*$ from $D(X)$ to $D(R)$, where $T^{\mathrm{derived}}_R$ is the result of applying the inverse of the equivalence (\ref{dalg: derived functors for rings formula}) to $T_R:\Proj_{R}^{\mathrm{f.g.}}\tilde{o}\Mod_R$. Moreover, if $X$ is an algebraic stack flat over a ring $A$, then $\Sigma_X$ factors through the category $\Mor_{\leq n}(\Proj^{\mathrm{f.g.}}|_{A\text{-flat}},\Mod|_{A\text{-flat}})$ of morphisms between these functors restricted to the category of flat $A$-algebras. \end{lm} \begin{proof} For a general $X$ the category $D(X)$ is, by definition, equivalent to $\lim\limits_{\Spec R\tilde{o} X}D(R)$, where the limit is taken over all affine schemes mapping to $X$. Therefore $\Func(D(X),D(X))\simeq \lim\limits_{\Spec R\tilde{o} X}\Func(D(X),D(R))$ and we define $\Sigma_X(T)$ as the object of this limit given by $T_R^{\mathrm{derived}}\circ f^*\in \Func(D(X),D(R))$ for every map $f:\Spec R\tilde{o} X$. The last assertion follows from the fact that if $X$ is a flat algebraic stack over $A$, then in the above limit we may restrict to morphisms $\Spec R\tilde{o} X$ for which $R$ is flat over $A$, by \cite[Proposition 3.1.4.2]{gaitsgory-rozenblum}. \end{proof} \begin{definition} For a prestack $X$, we define functors $S^n,\Gamma^n,\Lambda^n:D(X)\tilde{o} D(X)$ as images under $\Sigma_X$ of the corresponding polynomial functors on projective modules over rings. We will denote these functors by $S^n_X,\Gamma^n_X,\Lambda^n_X$ if the base is not clear from the context. \end{definition} \begin{rem}We can give an explicit recipe for computing the derived functors $S^n,\Gamma^n,\Lambda^n$, which we state in a special case for simplicity. Suppose that $X$ is a quasi-compact separated scheme and $M\in D^{\geq 0}(X)$ is an object that can be represented by a complex $M^0\tilde{o} M^1\tilde{o}\ldots$ of flat quasicoherent sheaves concentrated in degrees $\geq 0$.
Then $S^n(M)$ (and likewise for the other functors $\Lambda^n,\Gamma^n$) is the totalization of the cosimplicial diagram \begin{equation} \begin{tikzcd} S^n(\DK(M)^0) \arrow[r, shift left=0.65ex] \arrow[r, shift right=0.65ex] & S^n(\DK(M)^1) \arrow[r, shift left=1.3ex] \arrow[r, shift right=1.3ex] \arrow[r] &\ldots \end{tikzcd} \end{equation} where $\DK(M)$ is the cosimplicial sheaf associated to the complex $M$ by the Dold-Kan equivalence. The miracle is that the resulting object $S^n(M)$ does not depend on the choice of a resolution, up to a quasi-isomorphism. \end{rem} We will use Lemma \ref{dalg: derived functors for stacks} frequently for homotopy-coherent computations with polynomial functors. Note that specifying an object of $\Mor_{\leq n}(\Proj^{\mathrm{f.g.}},\Mod)$ amounts to a manageable collection of data: we need to give a functor $T_R:\Proj^{\mathrm{f.g.}}_R\tilde{o} \Mod_R$ for every ring $R$, construct equivalences $R'\otimes_R T_R(-)\simeq T_{R'}(R'\otimes_R -)$ for all maps of rings $R\tilde{o} R'$, and then check (rather than provide any additional constructions) that these equivalences are associative with respect to the composition of maps of rings. Let us illustrate this technique by constructing certain maps between the symmetric and divided power functors. For a projective module $M$ over a ring $R$ there are natural maps between $S^nM=(M^{\otimes n})_{S_n}$ and $\Gamma^nM=(M^{\otimes n})^{S_n}$: \begin{equation} r_n:\Gamma^n M\hookrightarrow M^{\otimes n}\twoheadrightarrow S^nM \qquad N_n=\sum\limits_{\sigma\in S_n}\sigma: S^nM\tilde{o} \Gamma^nM \end{equation} The compositions $N_n\circ r_n$ and $r_n\circ N_n$ are equal to $n!\cdot \mathrm{Id}_{\Gamma^n M}$ and $n!\cdot \mathrm{Id}_{S^n M}$, respectively. Since the maps $r_n$ and $N_n$ are compatible with base change along arbitrary maps of rings $R\tilde{o} R'$, they define morphisms between the functors $S^n$ and $\Gamma^n$ in the category $\Mor_{\leq n}(\Proj^{\mathrm{f.g.}},\Mod)$. Therefore Lemma \ref{dalg: derived functors for stacks} provides us with maps \begin{equation}\label{dalg: norm prestacks} r_n:\Gamma^n\tilde{o} S^n\qquad N_n:S^n\tilde{o} \Gamma^n \end{equation} of endofunctors of the category $D(X)$ for every prestack $X$. \begin{lm}\label{dalg: polynomial frobenius lemma} For an ${\mathbb F}_p$-algebra $R_0$ and a projective $R_0$-module $M$ the map $r_p:\Gamma^pM\tilde{o} S^pM$ naturally factors as \begin{equation} \Gamma^p M\xrightarrow{\psi_M} F^*_{R_0}M\xrightarrow{\Delta_M} S^p M \end{equation} where the first arrow is a surjection and the second one is an injection. \end{lm} \begin{proof} We start by constructing the map $\Delta_M:M\otimes_{R_0,F_{R_0}}R_0=F^*_{R_0}M\tilde{o} S^p M$. Define $\Delta_M(m\otimes r)=r\cdot m^p$; this is a well-defined additive $R_0$-linear map. To check that $r_p$ factors through $\Delta_M$ we may work locally on $\Spec R_0$ and thus assume that $M$ is a free $R_0$-module. Denote a basis for $M$ by $e_1,\ldots,e_n$. Then a basis for the module $\Gamma^p M\subset M^{\otimes p}$ is given by the elements $e_I:=\sum\limits_{\sigma\in S_p/\Stab_I}\sigma\cdot(e_{i_1}\otimes \ldots\otimes e_{i_p})$ where $I=(i_1,\ldots,i_p)$ runs through $S_p$-orbits on $\{1,\ldots,n\}^{\times p}$. The map $r_p$ sends $e_I$ to $[S_p:\Stab_I]\cdot e_{i_1}\cdot\ldots\cdot e_{i_p}$. If the stabilizer of $I$ inside $S_p$ has index coprime to $p$, then $\Stab_I$ contains a long cycle of length $p$, which forces all $i_1,\ldots,i_p$ to be equal.
Hence only basis elements of the form $e_i^{\otimes p}\in \Gamma^p M$ are not killed by $r_p$, which gives the desired factorization. \end{proof} By Lemma \ref{dalg: derived functors for stacks}, having constructed maps $\psi_M, \Delta_M$ for projective modules in a functorial fashion, we get maps \begin{equation}\label{dalg: polynomial frobenius} \Delta_M:F_{X_0}^*M\tilde{o} S^p M\qquad \psi_M:\Gamma^p M\tilde{o} F_{X_0}^*M \end{equation} for all objects $M\in D(X_0)$ for an arbitrary ${\mathbb F}_p$-prestack $X_0$. The composition $\Delta_M\circ \psi_M$ is naturally homotopic to $r_p:\Gamma^p M\tilde{o} S^p M$, by construction. A useful computation tool for us will be the following `d\'ecalage' isomorphisms: \begin{lm}[\hspace{1sp}{\cite[Proposition 4.3.2.1]{illusie-cotangent1},\cite[Proposition A.2.49]{kubrak-prikhodko-p-adic}}] There are natural equivalences for $M\in D(X)$ \begin{equation}\label{dalg: decalage} S^n(M[1])\simeq (\Lambda^n M)[n] \quad \Gamma^n(M[-1])\simeq (\Lambda^n M)[-n] \end{equation} \end{lm} \subsection{Derived commutative algebras.} The precise version of the notion of a commutative algebra that we will need is derived commutative algebras, in the sense of Mathew. In this section we collect the necessary material about this notion, largely following \cite[Section 4]{raksit}. Let us stress again that this section is purely expository and contains no original results. Recall the following point of view on ordinary commutative algebras over a field $k$, afforded by the Barr-Beck theorem. The endofunctor $S^{\bullet}=\bigoplus\limits_{n\geq 0}S^n$ of the category of $k$-vector spaces has a structure of a monad, and the category of $k$-algebras is equivalent to the category of modules over the monad $S^{\bullet}$ in the category of $k$-vector spaces. The idea of derived commutative algebras is to replicate this definition in the derived world by using the appropriate derived version of the symmetric algebra monad. \begin{construction}[{\hspace{1sp}\cite[Construction 4.2.19]{raksit}}] For a ring $R$ consider the endofunctor $S^{\bullet}:=\bigoplus\limits_{n\geq 0}S^n:\Proj_R\tilde{o} \Proj_R$ of the ordinary category of projective $R$-modules. It has a monad structure induced by the maps $S^i(S^jM)\twoheadrightarrow S^{i\cdot j}M$ for every projective module $M$. It naturally extends to a monad, in the sense of \cite[Definition 4.7.0.1]{lurie-ha}, on the category $D(X)$ for every prestack $X$ which we denote as \begin{equation} S^{\bullet}:=\bigoplus\limits_{n\geq 0} S^n:D(X)\tilde{o} D(X) \end{equation} and call it the derived symmetric algebra monad. \end{construction} We can now define derived commutative algebras on $X$: \begin{definition} The category $\DAlg(X)$ of derived commutative algebras on $X$ is the category of modules over the derived symmetric algebra monad $S^{\bullet}$ in $D(X)$. \end{definition} By construction, there is a functor $S^{\bullet}:D(X)\tilde{o}\DAlg(X)$ left adjoint to the forgetful functor $\DAlg(X)\tilde{o} D(X)$. For a derived commutative algebra $A\in \DAlg(X)$ we will usually denote the underlying object in $D(X)$ by the same symbol $A$. The structure of a derived commutative algebra, in particular, gives a map \begin{equation} m:A\otimes_{{\mathcal O}_X}A\tilde{o} S^2A\tilde{o} A \end{equation} which induces a graded commutative product operation $H^i(A)\otimes_{{\mathcal O}_X}H^j(A)\tilde{o} H^{i+j}(A)$ on cohomology sheaves. 
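To orient the reader, let us spell out this structure in the most classical case; the following example is included purely for illustration and is not used later.
\begin{example} Let $X=\Spec R$ be an affine scheme and let $M\simeq {\mathcal O}_X^{\oplus n}$ be a finite free module placed in degree $0$. Since $M$ is flat, each derived symmetric power $S^i(M)$ agrees with the usual one, so the underlying object of $S^{\bullet}(M)\in \DAlg(X)$ is the polynomial algebra $R[x_1,\ldots,x_n]$ on a basis of $M$, and the map $m$ above is the usual multiplication of polynomials. By contrast, for shifted objects such as $M[-1]$ the underlying complex of $S^{\bullet}(M[-1])$ has cohomology in several degrees; this phenomenon is studied in detail in Section \ref{free cosimplicial: section}. \end{example}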
\begin{definition} For an object $M\in D(X)$, we call $S^{\bullet}(M)\in \DAlg(X)$ the free derived commutative algebra on $M$. \end{definition} \begin{rem} For every $M\in D(X)$ there are natural maps $(M^{\otimes n})_{hS_n}\tilde{o} S^n(M)$ which give rise to a map from the monad $M\mapsto \bigoplus\limits_{n\geq 0} (M^{\otimes n})_{hS_n}$ to $S^{\bullet}$. It is in general far from being an equivalence; we make a precise comparison between their values in a special case in Lemma \ref{free cosimplicial: cosimp vs einf}. Since modules over the former monad are the $E_{\infty}$-algebras on $X$, we get a functor $\DAlg(X)\tilde{o} \Alg_{E_{\infty}}D(X)$. \end{rem} Derived commutative algebras in characteristic $p$ have natural Frobenius endomorphisms: \begin{lm}\label{dalg: frobenius} Suppose that $X$ is a prestack over ${\mathbb F}_p$. For all $A\in \DAlg(X)$ there is a natural morphism $\varphi_A:F_X^*A\tilde{o} A$ in $D(X)$ that is equal to the linearization of the usual Frobenius endomorphism when $A$ is a flat ordinary commutative algebra on an affine scheme $X$. \end{lm} \begin{rem} It should be possible to enhance $\varphi_A$ to a morphism of derived commutative algebras, but we do not pursue this here. \end{rem} \begin{proof} We define $\varphi_A$ as the composition $F_X^*A\xrightarrow{\Delta_A} S^pA\xrightarrow{m_A}A$ where $m_A$ is a part of the $S^{\bullet}$-module structure on $A$, and $\Delta_A$ is the morphism defined in (\ref{dalg: polynomial frobenius}). Compatibility with the usual Frobenius follows from the defining formula of $\Delta_A$ given in the proof of Lemma \ref{dalg: polynomial frobenius lemma}. \end{proof} One of the main motivations for introducing the notion of derived commutative algebras is that many cohomological invariants arising in geometry are naturally equipped with the structure of a derived commutative algebra. \begin{pr}\label{dalg: pushforward of rings} If $f:X\tilde{o} Y$ is a morphism of prestacks then the functor $Rf_*:D(X)\tilde{o} D(Y)$ can be naturally enhanced to a functor $Rf^{\mathrm{alg}}_*:\DAlg(X)\tilde{o} \DAlg(Y)$.
\end{pr} \begin{proof} Recall that $Rf_*:D(X)\tilde{o} D(Y)$ is defined as the right adjoint functor to the pullback functor $f^*:D(Y)\tilde{o} D(X)$. The equivalences $f^*\circ S^n_Y\simeq S^n_X\circ f^*$ induce a colimit-preserving functor $f^*_{\mathrm{alg}}:\DAlg(Y)\tilde{o} \DAlg(X)$ given by $f^*$ on the underlying objects of $D(-)$. Since $\DAlg(Y)$ is presentable (e.g. by \cite[Proposition 4.1.10]{raksit}), this functor admits a right adjoint $Rf_*^{\mathrm{alg}}:\DAlg(X)\tilde{o} \DAlg(Y)$, by the adjoint functor theorem \cite[Corollary 5.5.2.9(1)]{lurie-htt}. It remains to check that $Rf_*^{\mathrm{alg}}$ defined this way induces $Rf_*$ on the underlying sheaves. By construction, the compositions $D(Y)\xrightarrow{f^*}D(X)\xrightarrow{S^{\bullet}_X} \DAlg(X)$ and $D(Y)\xrightarrow{S^{\bullet}_Y}\DAlg(Y)\xrightarrow{f^*_{\mathrm{alg}}}\DAlg(X)$ are equivalent. Their right adjoints are $\DAlg(X)\tilde{o} D(X)\xrightarrow{Rf_*}D(Y)$ and $\DAlg(X)\xrightarrow{Rf_*^{\mathrm{alg}}}\DAlg(Y)\xrightarrow{}D(Y)$, and their equivalence is precisely the desired compatibility between $Rf_*$ and $Rf_*^{\mathrm{alg}}$. \end{proof} In particular, $Rf_*{\mathcal O}_X\in D(Y)$ is naturally equipped with a derived commutative algebra structure. The functor $Rf^{\mathrm{alg}}_*$ is compatible with Frobenius endomorphisms: \begin{lm}\label{dalg: geometric frobenius} Suppose that $Y$ is a prestack over ${\mathbb F}_p$ and $f:X\tilde{o} Y$ is a morphism of prestacks. For $A\in \DAlg(X)$ there is a natural equivalence between the composition $F_Y^*Rf_*A\tilde{o} Rf_*F_X^*A\xrightarrow{Rf_*\varphi_A}Rf_*A$ and the map $F_Y^*Rf_*A\xrightarrow{\varphi_{Rf_*^{\mathrm{alg}}A}}Rf_*A$ in $D(Y)$. Here $F_Y^*Rf_*A\tilde{o} Rf_*F_X^*A$ is the base change map corresponding to the equality $F_Y\circ f=f\circ F_X$. \end{lm} \begin{proof} By adjunction between $f^*$ and $Rf_*$, and the definition of $\varphi_A$ and $\varphi_{Rf_*^{\mathrm{alg}}A}$, our task is equivalent to identifying the composition \begin{equation}\label{dalg: geometric frobenius eq1} f^*F_Y^*Rf_*A\tilde{o} f^*Rf_*F_X^*A\xrightarrow{f^*Rf_*\Delta_A}f^*Rf_*S^p_XA\xrightarrow{f^*Rf_*m_A}f^*Rf_*A\tilde{o} A \end{equation} with the composition \begin{equation}\label{dalg: geometric frobenius eq2} f^*F_Y^*Rf_*A\xrightarrow{f^*\Delta_{Rf_*A}}f^*S^p_YRf_*A\xrightarrow{f^*m_{Rf_*^{\mathrm{alg}}A}} f^*Rf_*A\tilde{o} A \end{equation} The map $S^p_YRf_*A\xrightarrow{m_{Rf_*^{\mathrm{alg}}A}} Rf_*A$ is adjoint to the map $f^*S^p_YRf_*A\simeq S^p_Xf^*Rf_*A\tilde{o} S^p_XA\xrightarrow{m_A} A$, which allows us to rewrite (\ref{dalg: geometric frobenius eq2}) as \begin{equation}\label{dalg: geometric frobenius eq3} f^*F_Y^*Rf_*A\xrightarrow{}f^*S^p_YRf_*A\simeq S^p_Xf^*Rf_*A\tilde{o} S^p_XA\xrightarrow{m_A}A \end{equation} For any object $M\in D(Y)$ the map $f^*F_Y^*M\xrightarrow{f^*\Delta_M}f^*S^p_YM\simeq S^p_Xf^*M$ can be identified with the composition $f^*F_Y^*M\simeq F_X^*f^*M\xrightarrow{\Delta_{f^*M}}S^p_Xf^*M$ where the first equivalence arises from the fact that $f$ intertwines the Frobenius endomorphisms of $X$ and $Y$. Applying this to $M=Rf_*A$ allows us to identify (\ref{dalg: geometric frobenius eq1}) with (\ref{dalg: geometric frobenius eq3}), as desired. 
\end{proof} We can use Proposition \ref{dalg: pushforward of rings} to construct examples of derived commutative algebras: \begin{definition}\label{dalg: free divided power algebra def} For a finite locally free sheaf $M$ (concentrated in degree $0$) on a scheme $X$ we define the {\it free divided power algebra} on $M[-1]$, denoted by $\Gamma^{\bullet}(M[-1])\in \DAlg(X)$, as $R\pi_*^{\mathrm{alg}}{\mathcal O}_{B_XM^{\vee \#}}$ where $\pi: B_XM^{\vee \#}\tilde{o} X$ is the relative classifying stack of the divided power group scheme $M^{\vee\#}$ on $X$ associated to $M^{\vee}$. \end{definition} This terminology is justified by the fact that the underlying $E_{\infty}$-algebra of $\Gamma^{\bullet}(M[-1])$ is identified with $\bigoplus\limits_{i\geq 0}\Lambda^i M[-i]$, e.g. by \cite[Lemma 7.8]{bhatt-lurie-prismatization}. If $X$ is an ${\mathbb F}_p$-scheme, by Lemma \ref{dalg: geometric frobenius} the Frobenius map $\varphi_{\Gamma^{\bullet}(M[-1])}:F_{X}^*\Gamma^{\bullet}(M[-1])\tilde{o} \Gamma^{\bullet}(M[-1])$ factors as $F_X^*\Gamma^{\bullet}(M[-1])\tilde{o} {\mathcal O}_X\tilde{o} \Gamma^{\bullet}(M[-1])$ because the Frobenius endomorphism of $M^{\vee \#}$ factors through the identity section. Cohomology of a sheaf of rings on a site can be equipped with a derived commutative algebra structure: \begin{lm}\label{dalg: site cohomology} Let ${\mathcal C}$ be a site and ${\mathcal F}$ be a sheaf of (ordinary) commutative algebras over a ring $R$ on ${\mathcal C}$. Then for any object $X\in {\mathcal C}$ the complex $\mathrm{R}\Gamma(X,{\mathcal F})\in D(R)$ is naturally endowed with a structure of a derived commutative $R$-algebra. \end{lm} \begin{proof} By \cite[01GZ]{stacks-project} we can compute $\mathrm{R}\Gamma(X,{\mathcal F})$ as a filtered colimit over all hypercovers $U_{\bullet}\tilde{o} X$ of \v{C}ech cohomology with respect to $U_{\bullet}$: \begin{equation}\label{dalg: hypercover formula} \mathrm{R}\Gamma(X,{\mathcal F})\simeq \colim\limits_{U_{\bullet}\tilde{o} X}\lim\limits_n {\mathcal F}(U_n) \end{equation} Each ${\mathcal F}(U_n)$ is a commutative $R$-algebra, which we view as an object of $\DAlg(R)$. Since the forgetful functor $\DAlg(R)\tilde{o} D(R)$ commutes with limits and filtered colimits, this endows $\mathrm{R}\Gamma(X,{\mathcal F})$ with the structure of a derived commutative $R$-algebra. \end{proof} Applying this construction to \'etale cohomology with coefficients in ${\mathbb F}_p$ produces derived commutative algebras with the special property that the Frobenius endomorphism is homotopic to the identity: \begin{lm}\label{dalg: frobenius on etale} If $X$ is a scheme then the Frobenius endomorphism $\varphi_{\mathrm{R}\Gamma_{\text{et}}(X,{\mathbb F}_p)}$ of the derived commutative ${\mathbb F}_p$-algebra $\mathrm{R}\Gamma_{\text{et}}(X,{\mathbb F}_p)$ is naturally homotopic to the identity morphism. \end{lm} \begin{proof} In the formula (\ref{dalg: hypercover formula}) each ${\mathbb F}_p(U_n)=\Func(\pi_0(U_n),{\mathbb F}_p)$ is the algebra of ${\mathbb F}_p$-valued functions on a set, and its Frobenius endomorphism is the identity, so the lemma follows. \end{proof} When studying the Sen operator, we will use that it is compatible with the derived commutative algebra structure on the diffracted Hodge cohomology (to be defined in Section \ref{applications: section}).
Specifically, we need the following notion: \begin{definition}\label{dalg: derivation} For a prestack $X$ and a derived commutative algebra $A\in \DAlg(X)$, a {\it derivation} $f:A\tilde{o} A$ is a map in $D(X)$ such that the map $\mathrm{Id}_A+\varepsilon\cdot f:A\otimes {\mathbb Z}[\varepsilon]/\varepsilon^2\tilde{o} A\otimes {\mathbb Z}[\varepsilon]/\varepsilon^2$ in $D(X\times \Spec{\mathbb Z}[\varepsilon]/\varepsilon^2)$ is equipped with the structure of a map in $\DAlg(X\times \Spec{\mathbb Z}[\varepsilon]/\varepsilon^2)$. \end{definition} \subsection{Cosimplicial commutative algebras.}\label{dalg: cosimp subsection} For the duration of this subsection assume that $X$ is a scheme. In all of our main applications we will in fact be presented with a cosimplicial commutative algebra in the ordinary category $\QCoh(X)$. We denote by $\CAlg^{\Delta}_X$ the ordinary category of cosimplicial commutative algebras in the abelian symmetric monoidal category $\QCoh(X)$. In this subsection we make some remarks on the relation between $\CAlg^{\Delta}_X$ and $\DAlg(X)$. These facts are not used in any of our main results, but the reader who feels more comfortable with $\CAlg^{\Delta}_X$ than with $\DAlg(X)$ is encouraged to specialize the results in Section \ref{cosimp: section} to a situation where the algebra $A$ is a cosimplicial commutative algebra, using this subsection as a dictionary. In general, a cosimplicial commutative algebra gives rise to a derived commutative algebra: \begin{lm}\label{dalg: cosimp to dalg} Let $X$ be a scheme. There is a natural functor ${\mathcal R}_X:\CAlg^{\Delta}_X\tilde{o} \DAlg(X)$ from the ordinary category of cosimplicial commutative algebras in quasi-coherent sheaves on $X$ to the category of derived commutative algebras on $X$. This functor is compatible with the cosimplicial totalization functor on the level of underlying complexes. \end{lm} \begin{rem} For all of our main results we will work with derived commutative algebras concentrated in degrees $\geq 0$. In a forthcoming work, Mathew and Mondal will show that for any commutative base ring $R$ the $\infty$-category of derived commutative $R$-algebras concentrated in degrees $\geq 0$ is equivalent to the $\infty$-category of cosimplicial commutative $R$-algebras, cf. \cite[Remark 2.1.8]{mondal-reinecke}. \end{rem} \begin{proof}[Proof of Lemma \ref{dalg: cosimp to dalg}] First of all, there is a functor from ordinary commutative algebras in $\QCoh(X)$ to $\DAlg(X)$. Indeed, for an ordinary algebra $A$ there is a natural map $S^nA\tilde{o} (A^{\otimes_{{\mathcal O}_X} n})_{S_n}$ where the tensor product and coinvariants are taken in the non-derived sense. Hence the commutative multiplication on $A\in \QCoh(X)$ endows it with the structure of a module over the monad $S^{\bullet}$ in $D(X)$. Now, if $A=\left[{\begin{tikzcd}A^0 \arrow[r, shift left=0.65ex] \arrow[r, shift right=0.65ex] & A^1 \ldots \end{tikzcd}}\right]$ is a cosimplicial commutative algebra in $\QCoh(X)$, we define ${\mathcal R}_X(A)$ as $\lim\limits_{[n]\in\Delta}{\mathcal R}_X(A^n)$, where each ${\mathcal R}_X(A^n)$ was defined in the previous paragraph. This indeed induces the totalization on underlying sheaves because the forgetful functor $\DAlg(X)\tilde{o} D(X)$ commutes with limits.
\end{proof} \begin{example}If $f:X\tilde{o} \Spec R$ is a morphism from a separated scheme to the spectrum of a ring $R$, the object $\mathrm{R}\Gamma(X,{\mathcal O}_X)=Rf_*({\mathcal O}_X)\in D(R)$ is endowed with a structure of a cosimplicial commutative algebra using the \v{C}ech construction associated to an open cover $X=\bigcup\limits_{i} U_i$ by affines: \begin{equation} \mathrm{R}\Gamma(X,{\mathcal O}_X)\simeq \left[{\begin{tikzcd}\prod\limits_{i} {\mathcal O}(U_i) \arrow[r, shift left=0.65ex] \arrow[r, shift right=0.65ex] & \prod\limits_{i<j} {\mathcal O}(U_i\cap U_j) \arrow[r, shift left=1.3ex] \arrow[r, shift right=1.3ex] \arrow[r] &\ldots \end{tikzcd}}\right] \end{equation} The result of applying ${\mathcal R}_R$ to this cosimplicial algebra is naturally equivalent to $Rf^{\mathrm{alg}}_*{\mathcal O}_X$ from Proposition \ref{dalg: pushforward of rings}.\end{example} If $X$ is a scheme over ${\mathbb F}_p$, for every cosimplicial commutative algebra $A\in \CAlg^{\Delta}_X$ there is a natural map $\varphi^{\mathrm{cosimp}}_A:F_X^*A\tilde{o} A$ of algebras induced by the term-wise Frobenius endomorphism. It follows from the construction of the functor ${\mathcal R}_X$ that $\varphi_{A}^{\mathrm{cosimp}}$ coincides with the Frobenius map arising from the derived commutative algebra structure: \begin{lm} For an ${\mathbb F}_p$-scheme $X$, and a term-wise flat cosimplicial commutative algebra $A\in \CAlg^{\Delta}_X$ there is a natural homotopy between ${\mathcal R}_X(\varphi_A^{\mathrm{cosimp}})$ and $\varphi_{{\mathcal R}_X(A)}$ in $D(X)$. \end{lm} \begin{proof} If $A$ is an ordinary commutative algebra, this was established along with defining $\varphi_A$ in Lemma \ref{dalg: frobenius}. The case of an arbitrary $A$ is obtained by passing to the limit over the cosimplicial category $\Delta$. \end{proof} For a locally free sheaf $M$ on a scheme $X$ we can represent the free divided power algebra $\Gamma^{\bullet}(M[-1])\in \DAlg(X)$ of Definition \ref{dalg: free divided power algebra def} by a cosimplicial commutative algebra. Denote by $\DK(M[-1])$ the cosimplicial object in the category of locally free sheaves on $X$, obtained by applying the Dold-Kan correspondence to the complex $M[-1]$. \begin{lm}\label{dalg: free divided power as cosimp} The derived commutative algebra $\Gamma^{\bullet}(M[-1])$ is equivalent to ${\mathcal R}_X(\Gamma^{\bullet}_{\mathrm{naive}}(\DK(M[-1])))$ where $\Gamma^{\bullet}_{\mathrm{naive}}(\DK(M[-1]))$ is the result of applying term-wise the free divided power algebra functor to the cosimplicial sheaf $\DK(M[-1])$. \end{lm} \begin{proof} The derived pushforward of the structure sheaf along the map $B_XM^{\vee \#}\tilde{o} X$ is equivalent, as a derived commutative algebra, to the totalization of the cosimplicial diagram \begin{equation}\label{dalg: free divided power as cosimp diagram} \begin{tikzcd}{\mathcal O}_X \arrow[r, shift left=0.65ex] \arrow[r, shift right=0.65ex] & \pi_*{\mathcal O}_{M^{\vee\#}} \arrow[r, shift left=1.3ex] \arrow[r, shift right=1.3ex] \arrow[r] &\pi_*{\mathcal O}_{M^{\vee\#}\times_X M^{\vee\#}}\ldots \end{tikzcd} \end{equation} obtained by applying the functor $\pi_*{\mathcal O}_{(-)}$ to the bar-resolution associated to the group scheme $\pi:M^{\vee\#}\tilde{o} X$ over $X$.
The $n$th term of the diagram (\ref{dalg: free divided power as cosimp diagram}) is the commutative algebra $\pi_*{\mathcal O}_{(M^{\vee\#})^{\times_X n}}\simeq\Gamma^{\bullet}_X(M^{\oplus n})$ concentrated in degree $0$, so the cosimplicial commutative algebra defined by (\ref{dalg: free divided power as cosimp diagram}) is indeed equivalent to the result of applying $\Gamma^{\bullet}_{\mathrm{naive}}$ to $\DK(M[-1])$. \end{proof} \subsection{Bockstein morphisms.} For a stable ${\mathbb Z}$-linear $\infty$-category ${\mathcal C}$, any object $M\in {\mathcal C}$ gives rise to a natural fiber sequence \begin{equation} M\xrightarrow{p}M\tilde{o} M/p \end{equation} We will denote the corresponding connecting map by $\mathrm{Bock}_M:M/p\tilde{o} M[1]$ and refer to it as the {\it Bockstein morphism} corresponding to $M$. Note that $\mathrm{Bock}_{M[1]}$ is naturally homotopic to $(-\mathrm{Bock}_{M}[1])$. Similarly, for a ${\mathbb Z}/p^n$-linear stable $\infty$-category ${\mathcal C}$ and any object $M\in {\mathcal C}$ we have a fiber sequence \begin{equation} M\otimes_{{\mathbb Z}/p^n}{\mathbb Z}/p^{n-1}\tilde{o} M\tilde{o} M\otimes_{{\mathbb Z}/p^n}{\mathbb Z}/p \end{equation} inducing the connecting map $\mathrm{Bock}_M:M\otimes{\mathbb Z}/p\tilde{o} M\otimes{\mathbb Z}/p^{n-1}[1]$. These constructions over ${\mathbb Z}$ and ${\mathbb Z}/p^n$ are compatible in the sense that $\mathrm{Bock}_{M/p^n}$ is the composition $M/p\xrightarrow{\mathrm{Bock}_M}M[1]\tilde{o} M/p^{n-1}[1]$. \section{Symmetric power \texorpdfstring{$S^p$}{}, class \texorpdfstring{$\alpha$}{}, and Steenrod operations} \label{free cosimplicial: section} In this section we study in detail the derived functor $S^p$ of the $p$th symmetric power by comparing it with the divided power functor $\Gamma^p$. For complexes of vector spaces over a field $k$ the values of $S^n$ can be described non-canonically using the computations of $S^n(k[-i])$ done by Priddy \cite{priddy}, but we crucially need to understand $S^pM$ as a complex of sheaves, rather than its separate cohomology sheaves. \subsection{Symmetric powers vs. divided powers.} Let $X$ be an arbitrary prestack. Recall that in the previous section for an object $M\in D(X)$ we defined natural morphisms \begin{equation} N_n:S^nM\tilde{o} \Gamma^nM\quad r_n:\Gamma^nM\tilde{o} S^nM \end{equation} Denote by $T_n(M)$ the cofiber $\cofib(N_n:S^nM\tilde{o} \Gamma^nM)$ of the norm map. The functor $T_n$, especially for $n=p$, has been extensively studied in the literature. References close to our point of view are the works of Friedlander-Suslin \cite[Section 4]{friedlander-suslin} and Kaledin \cite[6.3]{kaledin-coperiodic}. We have the following classical results about $T_n$: \begin{lm}[{\hspace{1sp}\cite[Lemma 4.12]{friedlander-suslin},\cite[Lemma 6.9]{kaledin-coperiodic}}]\label{free cosimplicial: tate p} \begin{enumerate} \item If $X$ is a prestack over ${\mathbb Z}[\frac{1}{n!}]$ then $T_n(M)\simeq 0$ for every $M\in D(X)$.
\item If $R$ is a flat ${\mathbb Z}_p$-algebra and $M$ is a flat $R$-module, then $N_p:S^pM\tilde{o} \Gamma^pM$ is an injection and its cokernel $T_p(M)$ is naturally isomorphic to $F^*_{R/p}(M/p)$ as an $R$-module. \item If $X$ is a flat algebraic stack over ${\mathbb Z}_p$ then there is a natural equivalence $T_p(M)\simeq i_*F_{X_0}^*i^*M$, where $i:X_0=X\times_{{\mathbb Z}_p}{\mathbb F}_p\tilde{o} X$ is the closed immersion of the special fiber, and $F_{X_0}:X_0\tilde{o} X_0$ is the absolute Frobenius morphism. \item If $X_0$ is a prestack over ${\mathbb F}_p$ then $T_p(M_0)$ fits into a natural fiber sequence \begin{equation}\label{free cosimplicial: tate mod p extension} F_{X_0}^*M_0[1]\tilde{o} T_p(M_0)\tilde{o} F_{X_0}^*M_0. \end{equation} \end{enumerate} \end{lm} \begin{rem}\label{free cosimp: remark Tp} \begin{enumerate} \item In the setting of (3), if $M_0\simeq i^*M\in D(X_0)$ is the reduction of an object $M\in D(X)$ then $T_p(M_0)$ can be naturally (in $M$) described as \begin{equation} T_p(M_0)\simeq i^*T_p(M)\simeq i^*i_*F_{X_0}^*M_0 \end{equation} Under this identification, the fiber sequence in (\ref{free cosimplicial: tate mod p extension}) is identified with the sequence induced by the fiber sequence of functors $\mathrm{Id}[1]\tilde{o} i^*i_*\tilde{o} \mathrm{Id}$ from $D(X_0)$ to $D(X_0)$. \item This will not be used in any of the proofs but let us remark that one can describe the extension (\ref{free cosimplicial: tate mod p extension}) completely, even in the absence of a lift of $X_0$ to ${\mathbb Z}_p$ together with a lift of the object $M_0$. It is proven in \cite[Theorem 6]{petrov-vaintrob-vologodsky} that in the setting of (4), at least if $X_0$ is a scheme, $T_p(M_0)$ can be upgraded to an object of the derived category $D(\Cris(X_0/{\mathbb F}_p))$ of quasi-coherent crystals of ${\mathcal O}$-modules on the scheme $X_0$ such that (\ref{free cosimplicial: tate mod p extension}) is an extension of crystals where $F^*_{X_0}M_0$ is endowed with a crystal structure using the canonical connection. Moreover, homotopy classes of splittings of (\ref{free cosimplicial: tate mod p extension}) in $D(\Cris(X_0/{\mathbb F}_p))$ are in bijection with lifts of $F_{X_0}^*M_0$ to an object in $D(\Cris(X_0/({\mathbb Z}/p^2)))$. If $X_0$ is equipped with a lift $X_1$ over ${\mathbb Z}/p^2$ then splittings of (\ref{free cosimplicial: tate mod p extension}) in $D(X_0)$ are in bijection with lifts of $F_{X_0}^*M_0$ to an object of $D(X_1)$. \end{enumerate} \end{rem} \begin{proof} 1) We have $r_n\circ N_n=n!\cdot \mathrm{Id}_{S^nM}$ and $N_n\circ r_n=n!\cdot \mathrm{Id}_{\Gamma^nM}$. Therefore $N_n$ is an isomorphism if $n!$ is invertible, so $T_n$ vanishes on prestacks over ${\mathbb Z}[\frac{1}{n!}]$. 2) By the previous part, the map $N_p:S^pM\tilde{o} \Gamma^p M$ becomes an isomorphism after inverting $p$. Since the module $S^pM$ is $p$-torsion free, the map $N_p$ is injective. We will now construct a natural isomorphism $\alpha$ between the module $F^*_{R/p}(M/p)=M/p\otimes_{R/p,F_{R/p}}R/p$ and the cokernel of $N_p$. To an element $m\otimes r\in M/p\otimes_{R/p,F_{R/p}}R/p$ assign the element $\alpha(m\otimes r):=\widetilde{r}\cdot \widetilde{m}^{\otimes p}\in \coker N_p$ where $\widetilde{r}\in R$ and $\widetilde{m}\in M$ are arbitrary lifts of $r$ and $m$, respectively.
To see that $\alpha$ gives a well-defined map $F_{R/p}^*(M/p)\tilde{o} \coker N_p$ we need to check that $\widetilde{r}\cdot\widetilde{m}^{\otimes p}\in \coker N_p$ does not depend on the choices of the lifts $\widetilde{r},\widetilde{m}$ and that $(\widetilde{m}_1+\widetilde{m}_2)^{\otimes p}=\widetilde{m}_1^{\otimes p}+\widetilde{m}_2^{\otimes p}\in \coker N_p$. The first claim follows from the fact that $p\cdot\Gamma_R^p(M)\subset \im N_p$, and the additivity is demonstrated by the formula \begin{equation} (\widetilde{m}_1+\widetilde{m}_2)^{\otimes p}-\widetilde{m}_1^{\otimes p}-\widetilde{m}_2^{\otimes p}=N_p\left(\sum\limits_{i=1}^{p-1} \frac{1}{i!(p-i)!}\widetilde{m}_1^{i}\widetilde{m}_2^{p-i}\right) \end{equation} Finally, to show that $\alpha:F_{R/p}^*(M/p)\tilde{o} \coker(N_p)$ is an isomorphism, we may assume that $M$ is a free $R$-module, because both functors $M\mapsto \coker (N_p:S^pM\tilde{o}\Gamma^p M)$ and $M\mapsto F_{R/p}^*(M/p)$ commute with filtered colimits and every flat module can be represented as a filtered colimit of free modules. Let $\{e_i\}_{i\in I}$ be a basis of $M$ over $R$. Then the elements $N_p(e_{i_1}\ldots e_{i_p})$, where $(i_1,\ldots,i_p)$ runs through $S_p$-orbits on $I^p$ with not all entries equal, together with the elements $e_i^{\otimes p}$ for $i\in I$, form an $R$-basis for $\Gamma_R^p(M)$, so the images of the elements $e_i^{\otimes p}$ form an $R/p$-basis for $\coker N_p$, as desired. 3) Part (2) produced a natural short exact sequence $0\tilde{o} S^pM\tilde{o}\Gamma^p M\tilde{o} F^*_{R/p}(M/p)\tilde{o} 0$ of $R$-modules for every projective module $M$ over a flat ${\mathbb Z}_p$-algebra $R$. The functor $M\mapsto F^*_{R/p}(M/p)$ from $\Proj^{\mathrm{f.g.}}_R$ to $\Mod_R$ is polynomial (in fact, linear), and the formation of this short exact sequence is compatible with base change along arbitrary maps $R\tilde{o} R'$ of flat ${\mathbb Z}_p$-algebras, so Lemma \ref{dalg: derived functors for stacks} produces a fiber sequence $S^p M\tilde{o} \Gamma^p M\tilde{o} i_*F^*_{X_0}i^*M$ for every $M\in D(X)$, which gives the desired identification. 4) We will establish such a fiber sequence when $M_0$ is a flat sheaf on an affine scheme $X_0=\Spec R_0$, and the general case will follow formally as in (3). If $M_0$ is a flat $R_0$-module where $R_0$ is an ${\mathbb F}_p$-algebra, then $T_p(M_0)$ is represented by the complex $S^pM_0\xrightarrow{N_p}\Gamma^pM_0$ concentrated in degrees $[-1,0]$. In Lemma \ref{dalg: polynomial frobenius lemma} we defined maps $\psi_{M_0}:\Gamma^p M_0\tilde{o} F_{R_0}^*M_0$ and $\Delta_{M_0}:F_{R_0}^*M_0\tilde{o} S^pM_0$ that give rise to a sequence \begin{equation} 0\tilde{o} F^*_{R_0}M_0\xrightarrow{\Delta_{M_0}} S^p_{R_0}M_0\xrightarrow{N_p}\Gamma^pM_0\xrightarrow{\psi_{M_0}} F^*_{R_0}M_0\tilde{o} 0 \end{equation} that one checks to be exact by a direct calculation with a basis of $M_0$ in case it is free, as in part (2). This exact sequence gives rise to the desired fiber sequence (\ref{free cosimplicial: tate mod p extension}) by Lemma \ref{dalg: derived functors for stacks}. \end{proof} Non-decomposability of the de Rham complex will arise from the fact that the map $N_p:S^p M\tilde{o} \Gamma^p M$ does not have a section in general.
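The following elementary computation, recorded only as an illustration, shows that this failure occurs already for a free module of rank one.
\begin{example} Let $R$ be a flat ${\mathbb Z}_p$-algebra and $M=R\cdot e$ a free module of rank $1$. Then $S^pM$ and $\Gamma^pM$ are free of rank $1$ with bases $e^p$ and $e^{\otimes p}$, and $N_p(e^p)=\sum\limits_{\sigma\in S_p}\sigma\cdot e^{\otimes p}=p!\cdot e^{\otimes p}$. Thus $N_p$ is identified with multiplication by $p!$ on $R$, so it admits no $R$-linear section unless $p$ is invertible in $R$; its cokernel is $R/p!R=R/pR$, since $(p-1)!$ is a unit, in agreement with Lemma \ref{free cosimplicial: tate p}(2). \end{example}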
By definition, for an object $M\in D(X)$ on an arbitrary prestack $X$ there is a natural fiber sequence \begin{equation}\label{free cosimplicial: general fiber sequence} T_p(M)[-1]\xrightarrow{\gamma_M} S^p M\xrightarrow{N_p}\Gamma^pM \end{equation} It will be slightly more natural for us to start with an arbitrary object $E\in D(X)$, and apply (\ref{free cosimplicial: general fiber sequence}) to $M=E[-1]$ to get a fiber sequence \begin{equation}\label{free cosimplicial: alpha' extension} T_p(E[-1])[-1]\tilde{o} S^p(E[-1])\tilde{o} (\Lambda^p E)[-p] \end{equation} where we used the d\'ecalage identification $\Gamma^p(E[-1])\simeq (\Lambda^pE)[-p]$ from (\ref{dalg: decalage}) to rewrite the third term. If $X$ is an algebraic stack flat over ${\mathbb Z}_p$ then, by Lemma \ref{free cosimplicial: tate p}(3), this fiber sequence takes the form \begin{equation}\label{free cosimplicial: alpha flat over zp}F_{X_0}^*(E/p)[-2]\tilde{o} S^p(E[-1])\tilde{o} (\Lambda^p E)[-p]\end{equation} Here $F_{X_0}^*(E/p)$ is an abbreviation for $i_*F_{X_0}^*i^*E$; we will prefer using this notation in what follows. Let us record the description of the cohomology sheaves of $S^p(E[-1])$ in the case where $E$ is a locally free sheaf, for $p>2$; the analogous result for $p=2$ will be established in Corollary \ref{free cosimplicial: symmetric power cohomology sheaves p=2}: \begin{lm}\label{free cosimplicial: symmetric power cohomology sheaves}Suppose that $p>2$. \begin{enumerate} \item If $E$ is a locally free sheaf on a scheme $X$ flat over ${\mathbb Z}_p$, then \begin{equation}H^2(S^p(E[-1]))\simeq F_{X_0}^*(E/p),\quad H^p(S^p(E[-1]))\simeq\Lambda^p E,\end{equation} and all other cohomology sheaves of $S^p(E[-1])$ are zero. \item If $E$ is a locally free sheaf on a scheme $X_0$ over ${\mathbb F}_p$, then \begin{equation}H^1(S^p(E[-1]))\simeq H^2(S^p(E[-1]))\simeq F_{X_0}^*E,\quad H^p(S^p(E[-1]))\simeq\Lambda^p E,\end{equation} and all other cohomology sheaves of $S^p(E[-1])$ are zero. \end{enumerate} \end{lm} \begin{proof} This is immediate from (\ref{free cosimplicial: alpha' extension}) and Lemma \ref{free cosimplicial: tate p}(3),(4). \end{proof} Let now $X_0$ be an arbitrary prestack over ${\mathbb F}_p$. For an object $E\in D(X_0)$ the pushout of the fiber sequence (\ref{free cosimplicial: alpha' extension}) along the map $T_p(E[-1])[-1]\tilde{o} F_{X_0}^*E[-2]$ from (\ref{free cosimplicial: tate mod p extension}) defines a fiber sequence \begin{equation}\label{free cosimplicial: alpha extension} F_{X_0}^*E[-2]\tilde{o} S^p(E[-1])\bigsqcup\limits_{T_p(E)[-2]}F_{X_0}^*E[-2]\tilde{o} (\Lambda^p E)[-p] \end{equation} In the case where $E$ is a locally free sheaf on an ${\mathbb F}_p$-scheme $X_0$, the sequence (\ref{free cosimplicial: alpha extension}) is simply the truncation of (\ref{free cosimplicial: alpha' extension}) in degrees $\geq 2$. \begin{definition}\label{free cosimplicial: alpha definition} We will denote by $\alpha(E):\Lambda^p E\tilde{o} F_{X_0}^*E[p-1]$ the shift by $[p]$ of the connecting morphism corresponding to the fiber sequence (\ref{free cosimplicial: alpha extension}). \end{definition} Suppose now that $X$ is a flat algebraic stack over ${\mathbb Z}_p$. Denote by $i:X_0\simeq X\times_{{\mathbb Z}_p}{\mathbb F}_p\hookrightarrow X$ the closed embedding of the special fiber.
Let us remark that the information contained in the extension (\ref{free cosimplicial: alpha' extension}) for an object $E\in D(X)$ is completely captured by $\alpha(i^*E)$: \begin{lm}\label{free cosimplicial: alpha adjunction} For an object $E\in D(X)$ denote $i^*E$ by $E_0$. The image of the map $\alpha(E_0)$ under the adjunction identification $\mathrm{RHom}_{X_0}(\Lambda^p E_0,F_{X_0}^*E_0[p-1])=\mathrm{RHom}_X(\Lambda^p E,i_*F_{X_0}^*i^*E[p-1])$ is the connecting morphism corresponding to the fiber sequence \begin{equation}\label{free cosimplicial: alpha adjunction formula}T_p(E[-1])[-1]\tilde{o} S^p(E[-1])\tilde{o} (\Lambda^p E)[-p]\end{equation} where we identify $T_p(E[-1])[-1]$ with $i_*F_{X_0}^*i^*E[-2]$ via Lemma \ref{free cosimplicial: tate p}(3). \end{lm} \begin{proof} In general, for two objects $M\in D(X), N\in D(X_0)$ the adjunction identification $\mathrm{RHom}_{D(X)}(M,i_*N)\simeq \mathrm{RHom}_{D(X_0)}(i^*M,N)$ can be described as sending a map $f:M\tilde{o} i_*N$ to the composition $i^*M\xrightarrow{i^*f}i^*i_*N\tilde{o} N$ where the second map is the counit of the adjunction. The fiber sequence $T_p(E_0[-1])[-1]\tilde{o} S^p(E_0[-1])\tilde{o} (\Lambda^p E_0)[-p]$ is the result of applying $i^*$ to the sequence (\ref{free cosimplicial: alpha adjunction formula}), and the map $T_p(E_0[-1])[-1]\tilde{o} F_{X_0}^*E_0[-2]$ used to form the pushout sequence (\ref{free cosimplicial: alpha extension}) is precisely the counit map $i^*i_*\tilde{o}\mathrm{Id}$ evaluated on $F^*_{X_0}E_0[-2]$ by Remark \ref{free cosimp: remark Tp}, so the lemma follows. \end{proof} The extension defining $\alpha(E)$ can be described in more classical terms for $p=2$. For a projective module $M$ over an ${\mathbb F}_2$-algebra $R_0$ we have a natural short exact sequence \begin{equation}\label{free cosimplicial: alpha for p=2 discrete} 0\tilde{o} F_{R_0}^*M\xrightarrow{\Delta_M} S^2M\xrightarrow{j} \Lambda^2 M\tilde{o} 0 \end{equation} Here $\Delta_M$ is the map defined in (\ref{dalg: polynomial frobenius}), it sends an element $m\otimes 1\in M\otimes_{R_0,F_{R_0}}R_0$ to $m\cdot m\in S^2M$, and $j$ sends $m_1\cdot m_2\in S^2M$ to $m_1\wedge m_2\in \Lambda^2 M$. By Lemma \ref{dalg: derived functors for stacks} this gives rise to a fiber sequence \begin{equation}\label{free cosimplicial: alpha for p=2 derived}F^*_{X_0}E\xrightarrow{\Delta_E}S^2E\xrightarrow{j} \Lambda^2 E\end{equation} for every object $E\in D(X_0)$ on any ${\mathbb F}_2$-prestack $X_0$. \begin{lm}\label{free cosimplicial: alpha for p=2} When $p=2$, and $E$ is an object of $D(X_0)$, the fiber sequence (\ref{free cosimplicial: alpha extension}) is naturally equivalent to the shift by $[-2]$ of the sequence (\ref{free cosimplicial: alpha for p=2 derived}). \end{lm} \begin{proof} Consider the shift of the fiber sequence (\ref{free cosimplicial: alpha extension}) by $[2]$: \begin{equation}\label{free cosimplicial: alpha extension p=2 shifted} F_{X_0}^*E\tilde{o} S^2(E[-1])[2]\bigsqcup\limits_{T_2(E)}F_{X_0}^*E\tilde{o}\Lambda^2 E \end{equation} When $X_0=\Spec R_0$ is an affine scheme and $E$ corresponds to a projective $R_0$-module, first and third terms of (\ref{free cosimplicial: alpha extension p=2 shifted}) are concentrated in degree $0$, so this fiber sequence is an exact sequence of $R_0$-modules. We will functorially identify it with the sequence (\ref{free cosimplicial: alpha for p=2 discrete}), which will prove the Lemma in general thanks to Lemma \ref{dalg: derived functors for stacks}. 
First, note that the middle term of (\ref{free cosimplicial: alpha extension p=2 shifted}) is isomorphic to the cohomology module $H^2(S^2(E[-1]))$. On the other hand, the fiber sequence $$F_{X_0}^*E[-1]\xrightarrow{\Delta_{E[-1]}}S^2(E[-1])\xrightarrow{j_{E[-1]}}\Lambda^2(E[-1])$$ identifies $H^2(S^2(E[-1]))$ with $H^2(\Lambda^2(E[-1]))\simeq S^2E$. This already identifies the middle terms of the exact sequences (\ref{free cosimplicial: alpha for p=2 derived}) and (\ref{free cosimplicial: alpha extension p=2 shifted}), so it remains to check that this identification fits into an identification of fiber sequences. To this end, note that for any projective $R_0$-module $M$ the norm map $N_2:S^2M\tilde{o}\Gamma^2 M$ factors as $S^2M\xrightarrow{j}\Lambda^2 M\xrightarrow{j'}\Gamma^2 M$ where $j'$ sends $m_1\wedge m_2$ to $m_1\otimes m_2+m_2\otimes m_1\in \Gamma^2 M\subset M^{\otimes 2}$. Therefore we get such a factorization for any object $M\in D(X_0)$. Applying this to $M=E[-1]$ we learn that the map $S^2(E[-1])\tilde{o} \Gamma^2(E[-1])\simeq (\Lambda^2 E)[-2]$ factors through $j_{E[-1]}$, which proves the lemma. \end{proof} We can deduce the computation of cohomology sheaves of $S^p(E[-1])$ when $p=2$, complementing Lemma \ref{free cosimplicial: symmetric power cohomology sheaves}(2): \begin{cor}\label{free cosimplicial: symmetric power cohomology sheaves p=2} For a locally free sheaf $E$ on an ${\mathbb F}_2$-scheme $X_0$ we have \begin{equation} H^1(S^2(E[-1]))\simeq F^*_{X_0}E\quad H^2(S^2(E[-1]))\simeq S^2E \end{equation} and all other cohomology sheaves are zero. \end{cor} For an arbitrary $p$, we can relate $S^p(E[-1])$ to the homotopy coinvariants $(E[-1]^{\otimes p})_{hS_p}$ of the symmetric group $S_p$ acting on $E[-1]^{\otimes p}$ by permutations. This identification will be used in our proof of non-vanishing of the class $\alpha$ in Section \ref{rational: section}. \begin{lm}\label{free cosimplicial: cosimp vs einf} For a locally free sheaf $E$ on an ${\mathbb F}_p$-scheme $X_0$ there is a natural equivalence $S^p(E[-1])\simeq\tau^{\geq 1}(E[-1]^{\otimes p})_{hS_p}$. \end{lm} \begin{proof} First, let us recall the values of the cohomology sheaves of $(E[-1]^{\otimes p})_{hS_p}$: \begin{lm}\label{free cosimplicial: symmetric group cohomology} If $p>2$ then \begin{equation}\label{free cosimplicial: symmetric group cohomology formula} H^i((E[-1]^{\otimes p})_{hS_p})\simeq\begin{cases}\Lambda^p E, i=p \\ F_{X_0}^*E, i\equiv 1\text{ or }2 \bmod 2(p-1) \text{, and } i\leq 2\\ 0\text{ otherwise}\end{cases} \end{equation} If $p=2$ then \begin{equation}\label{free cosimplicial: symmetric group cohomology formula 2} H^i((E[-1]^{\otimes 2})_{hS_2})\simeq\begin{cases}S^2 E, i=2 \\ F_{X_0}^*E, i<2\\ 0, i>2\end{cases} \end{equation} \end{lm} \begin{proof} We will give the argument in the case $p>2$; the case $p=2$ is proven analogously. Note that the $S_p$-equivariant object $E[-1]^{\otimes p}$ is equivalent to $\mathrm{sgn}\otimes E^{\otimes p}[-p]$ where $S_p$ acts on $E^{\otimes p}$ via permutation of factors, and $\mathrm{sgn}$ is the sign character. In particular, $H^p((E[-1]^{\otimes p})_{hS_p})$ is isomorphic to $\Lambda^p E$ by definition of the exterior power. To identify the other cohomology sheaves, we will compare homology of the symmetric group with that of a cyclic group. Denote by $C_p\subset S_p$ a cyclic subgroup of order $p$. There is a natural map $(E[-1]^{\otimes p})_{hC_p}\tilde{o} (E[-1]^{\otimes p})_{hS_p}$.
The coinvariants $(E[-1]^{\otimes p})_{hC_p}$ for the cyclic group can be represented by the following two-periodic complex \begin{equation} \ldots\xrightarrow{1-\sigma} E^{\otimes p}\xrightarrow{N_{C_p}}E^{\otimes p}\xrightarrow{1-\sigma}E^{\otimes p} \end{equation} where $\sigma$ is the endomorphism of $E^{\otimes p}$ given by cyclic permutation of the factors, and $N_{C_p}=\sum\limits_{i=0}^{p-1}\sigma^i$. For all $i<p$ we get a map $F^*_{X_0}E\tilde{o} H^i((E[-1]^{\otimes p})_{hC_p})$ given by sending a section $e\otimes 1\in E(U)\otimes_{{\mathcal O}_{X_0}(U),F_{X_0}}{\mathcal O}_{X_0}(U)$ on an affine open $U\subset X_0$ to $e^{\otimes p}\in E^{\otimes p}(U)$. One checks on stalks that this map is an isomorphism for all $i<p$. We get \begin{equation} H^i((E[-1]^{\otimes p})_{hC_p})\simeq\begin{cases}(E^{\otimes p})_{C_p}, i=p \\ F_{X_0}^*E, i<p\\ 0\text{ otherwise}\end{cases} \end{equation} Since $C_p\subset S_p$ is a $p$-Sylow subgroup, the map $(E[-1]^{\otimes p})_{hC_p}\tilde{o} (E[-1]^{\otimes p})_{hS_p}$ establishes $(E[-1]^{\otimes p})_{hS_p}$ as a direct summand of $(E[-1]^{\otimes p})_{hC_p}$. Hence to prove the lemma it remains to check that the surjective map $H^i((E[-1]^{\otimes p})_{hC_p})\tilde{o} H^i((E[-1]^{\otimes p})_{hS_p})$ is an isomorphism for $i\equiv 1,2\bmod 2(p-1)$ and is the zero map for all other $i<p$. This can be checked on stalks, and $F_{X_0}^*$ is an additive functor, so we may assume that $X_0=\Spec {\mathbb F}_p$ and $E$ is a $1$-dimensional ${\mathbb F}_p$-vector space. The statement is now a consequence of classical computations of homology of the symmetric group with coefficients in the sign character, cf. \cite[Chapter V, Proposition 7.8]{steenrod-epstein}. \end{proof} There is a natural map $E[-1]^{\otimes p}\tilde{o} S^p(E[-1])$ which is $S_p$-equivariant with respect to the permutation action on the source and the trivial action on the target. Hence it induces a natural map $(E[-1]^{\otimes p})_{hS_p}\tilde{o} S^p(E[-1])$. Moreover, this map factors through $(E[-1]^{\otimes p})_{hS_p}\tilde{o} \tau^{\geq 1}(E[-1]^{\otimes p})_{hS_p}$ because $S^p(E[-1])$ is concentrated in degrees $\geq 1$. We thus obtain a map \begin{equation}\label{free cosimplicial: cosimp vs einf map}\xi_E:\tau^{\geq 1}(E[-1]^{\otimes p})_{hS_p}\tilde{o} S^p(E[-1])\end{equation} between objects of $D(X_0)$ whose cohomology sheaves are isomorphic, by (\ref{free cosimplicial: symmetric group cohomology formula}) and Lemma \ref{free cosimplicial: symmetric power cohomology sheaves}(2). It remains to check that this particular map induces isomorphisms on cohomology sheaves. For cohomology in degree $p$, this is a consequence of the fact that the map $E[-1]^{\otimes p}\tilde{o} S^p(E[-1])$ induces (up to a sign) the surjection $E^{\otimes p}\twoheadrightarrow \Lambda^p E$ on $H^p$ when $p>2$, and the surjection $E^{\otimes 2}\twoheadrightarrow S^2E$ when $p=2$. To show that the map $\xi_E$ induces isomorphisms on $H^1$ and $H^2$ it is enough to consider the case where $X_0=\Spec {\mathbb F}_p$ and $E$ is a $1$-dimensional vector space. This follows from the fact that cohomology classes of $({\mathbb F}_p[-1]^{\otimes p})_{hS_p}$ are responsible for Steenrod operations in the cohomology of $E_{\infty}$-algebras, as proven in Lemma \ref{free cosimplicial: free einf free cosimp deg p} below.
\end{proof} \subsection{$T_p$ and commutative algebras.} We will now compute how the defect between $S^p$ and $\Gamma^p$, given by $T_p$, interacts with the structure of a derived commutative algebra. This computation plays a central role in our proof of Theorem \ref{cosimp: main theorem}. \begin{lm}\label{free cosimplicial: algebra Bockstein} Let $X$ be an algebraic stack or a formal scheme flat over ${\mathbb Z}_p$ with the special fiber $X_0:=X\times_{{\mathbb Z}_p}{\mathbb F}_p$. For a derived commutative algebra $A\in\DAlg(X)$ the composition $F_{X_0}^*(A/p)[-1]\xrightarrow{\gamma_A} S^pA\xrightarrow{m_A}A$ is naturally homotopic in $D(X)$ to \begin{equation}\label{free cosimplicial: algebra Bockstein equation} F_{X_0}^*(A/p)[-1]\xrightarrow{\varphi_{A/p}}A/p[-1]\xrightarrow{\mathrm{Bock}_A[-1]}A \end{equation} \end{lm} \begin{proof} For a projective module $M$ over a flat ${\mathbb Z}_p$-algebra $R$ there is a diagram of $R$-modules \begin{equation}\label{free cosimplicial: algebra Bockstein ordinary diagram} \begin{tikzcd} S^p M\arrow[r, "N_p"]\arrow[d,"\id_{S^p M}"] & \Gamma^p M\arrow[r]\arrow[d, "r_p"] & F^*_{R/p}(M/p)\arrow[d,"\Delta_{M/p}"] \\ S^p M\arrow[r, "p!"] & S^p M\arrow[r] & S^p_{R/p}(M/p) \end{tikzcd} \end{equation} where both rows are short exact sequences of $R$-modules. It evidently gives rise to a diagram of polynomial functors that is moreover natural in arbitrary maps $R\tilde{o} R'$. We can construct from this the following diagram in $D(X)$ where rows are fiber sequences and vertical maps organize into morphisms of fiber sequences: \begin{equation}\label{free cosimplicial: algebra Bockstein derived diagram} \begin{tikzcd} S^pA\arrow[r, "N_p"]\arrow[d, equal] & \Gamma^pA\arrow[d]\arrow[r] & F^*_{X_0}(A/p)\arrow[d, "\Delta_{A/p}"]\\ S^pA\arrow[r, "p!"]\arrow[d, "m_A"] & S^p A\arrow[d, "m"]\arrow[r] & S^p_{X_0}(A/p)\arrow[d, "m_A"]\\ A\arrow[r, "p!"] & A\arrow[r] & A/p \end{tikzcd} \end{equation} The maps between the first two rows are obtained by applying the functor $\Sigma_X$ of Lemma \ref{dalg: derived functors for stacks} to the diagram (\ref{free cosimplicial: algebra Bockstein ordinary diagram}), and evaluating the resulting maps on the object $A\in D(X)$. The maps between the second and third rows are obtained by applying the functor $\cofib(p!)$ to the map $m_A:S^pA\tilde{o} A$. The connecting morphism $F^*_{X_0}(A/p)\tilde{o} (S^pA)[1]$ corresponding to the top row of (\ref{free cosimplicial: algebra Bockstein derived diagram}) is equivalent to $-\gamma_A[1]$, by definition of $\gamma_A$ given in (\ref{free cosimplicial: general fiber sequence}). Commutativity of the diagram implies the lemma because the connecting morphism $A/p\tilde{o} A[1]$ of the bottom row is the negative $(-\mathrm{Bock}_A)$ of the Bockstein morphism by the identity $(p-1)!=-1$ in ${\mathbb F}_p$, and the composition of the right vertical column is the map $\varphi_{A/p}:F_{X_0}^*(A/p)\tilde{o} A/p$, by the definition of Frobenius endomorphisms of derived commutative algebras. 
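As a sanity check on the top row of (\ref{free cosimplicial: algebra Bockstein ordinary diagram}), one can consider the simplest case where $M$ is free of rank one with basis $e$: then $S^pM=R\cdot e^p$, $\Gamma^pM=R\cdot e^{\otimes p}\subset M^{\otimes p}$, and
\begin{equation}
N_p(e^p)=\sum_{\sigma\in S_p}\sigma\cdot e^{\otimes p}=p!\cdot e^{\otimes p}.
\end{equation}
Since $(p-1)!$ is a unit in ${\mathbb Z}_p$, the cokernel of $N_p$ is killed by $p$ and is generated by the class of $e^{\otimes p}$; as $e^{\otimes p}$ depends $p$-linearly on $e$ (replacing $e$ by $\lambda e$ multiplies it by $\lambda^p$), this cokernel is identified with the Frobenius pullback $F^*_{R/p}(M/p)$ rather than with $M/p$ itself.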
\end{proof} \begin{rem} If $A$ happens to be represented by a term-wise flat cosimplicial commutative algebra $A^{\bullet}$ in $\QCoh(X)$, for a flat ${\mathbb Z}_p$-scheme $X$, then the diagram (\ref{free cosimplicial: algebra Bockstein derived diagram}) can be obtained from the strictly commutative diagram in the ordinary category of cosimplicial sheaves on $X$, where the rows are term-wise exact: \begin{equation}\label{free cosimplicial: cosimplicial Bockstein diagram} \begin{tikzcd} S^p_{\mathrm{naive}}A^{\bullet}\arrow[r, "N"]\arrow[d, "m"] & \Gamma_{\mathrm{naive}}^pA^{\bullet}\arrow[d, "m"]\arrow[r] & F^*(A/p)\arrow[d, "\varphi^{\mathrm{cosimp}}_{A/p}"]\\ A\arrow[r, "p!"] & A\arrow[r] & A/p \end{tikzcd} \end{equation} where $S^p_{\mathrm{naive}}$ and $\Gamma^p_{\mathrm{naive}}$ denote the endofunctors of the ordinary category of cosimplicial objects in $\QCoh(X)$ induced by the (non-derived) functors $S^p$ and $\Gamma^p$. \end{rem} \subsection{Steenrod operations on cohomology of cosimplicial algebras.}\label{free cosimplicial: steenrod subsection} This subsection is not used in the rest of the paper and contains classical material, albeit presented somewhat differently, but we include it, as Proposition \ref{free cosimplicial: steenrod operations description prop} was the original motivation for our approach and Theorem \ref{cosimp: main theorem} should be viewed as its generalization. Let us also mention the result of Scavia \cite[Theorem 1.1(iv),(v)]{scavia-steenrod} which is related to and implied by Proposition \ref{free cosimplicial: steenrod operations description prop}. Recall that cohomology of derived symmetric powers is related to natural operations on cohomology of cosimplicial commutative algebras. For a commutative ring $R$ we denote by $\mathrm{CAlg}_R^{\Delta}$ the category of cosimplicial commutative $R$-algebras. \begin{lm}\label{free cosimplicial: cohomology operations} Fix a commutative base ring $R$. The $R$-module of natural transformations between functors $H^i,H^j:\mathrm{CAlg}_R^{\Delta}\to\Mod_R$ can be described as \begin{equation}\label{free cosimplicial: cohomology operations formula} \Hom(H^i,H^j)\simeq H^j(S^{\bullet}(R[-i])) \end{equation} where the isomorphism takes a natural transformation $\alpha:H^i\to H^j$ to the image of the class $1\in R=H^i(S^1(R[-i]))\subset H^i(S^{\bullet}(R[-i]))$ under $\alpha$ evaluated on the free algebra $S^{\bullet}(R[-i])$. \end{lm} \begin{rem} The resulting module of natural transformations can be computed completely, cf. \cite{priddy} for the case $R={\mathbb F}_p$. \end{rem} \begin{proof} We will describe the inverse map. Given a class $c\in H^j(S^{\bullet}(R[-i]))$, for every cosimplicial commutative algebra $A$ we define a morphism $H^i(A)\to H^j(A)$ as follows. Let $x:R[-i]\to A$ be a map representing a cohomology class $[x]\in H^i(A)$. Applying the functor $S^{\bullet}$ to this map of cosimplicial $R$-modules and composing it with the multiplication on $A$ gives a map $S^{\bullet}(R[-i])\xrightarrow{S^{\bullet} x} S^{\bullet}A\xrightarrow{m_A} A$. We declare the image of $[x]$ under the natural transformation to be the image of the class $c$ under $m_A\circ S^{\bullet}x$. This produces a map $H^j(S^{\bullet}(R[-i]))\to \Hom(H^i,H^j)$ inverse to the map (\ref{free cosimplicial: cohomology operations formula}). \end{proof} We will now deduce from Lemma \ref{free cosimplicial: algebra Bockstein} that certain cohomology operations of degree $0$ and $1$ are related to the Frobenius endomorphism and to the Witt vector Bockstein homomorphism introduced by Serre in \cite[\S 3]{serre}.
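Let us also recall the operation appearing in part (3) of Proposition \ref{free cosimplicial: steenrod operations description prop} below: for a commutative ${\mathbb F}_p$-algebra (or a cosimplicial one) $B$ the sequence
\begin{equation}
0\to B\xrightarrow{V}W_2(B)\to B\to 0
\end{equation}
is exact (term-wise, in the cosimplicial case), and the associated connecting homomorphism $H^i(B)\to H^{i+1}(B)$ is the Witt vector Bockstein homomorphism of \cite[\S 3]{serre} referred to above.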
Recall that for any object $M\in D(R)$ of the derived category of modules over a ring $R$ we defined a map $T_p(M)[-1]\xrightarrow{\gamma_M} S^p_R M$ in (\ref{free cosimplicial: general fiber sequence}). In the case $M={\mathbb Z}_p[-i]$ over $R={\mathbb Z}_p$ it takes the form ${\mathbb F}_p[-i-1]\to S^p({\mathbb Z}_p[-i])$, and we denote by $P^1_{{\mathbb Z}_p}\in H^{i+1}(S^p({\mathbb Z}_p[-i]))$ the resulting $p$-torsion class. For $M={\mathbb F}_p[-i]$ over $R={\mathbb F}_p$ the map $\gamma_{{\mathbb F}_p[-i]}$ has the form ${\mathbb F}_p[-i]\oplus{\mathbb F}_p[-i-1]\to S^p_{{\mathbb F}_p}({\mathbb F}_p[-i])$, and we denote by $P^0_{{\mathbb F}_p}\in H^i(S^p({\mathbb F}_p[-i])), P^1_{{\mathbb F}_p}\in H^{i+1}(S^p_{{\mathbb F}_p}({\mathbb F}_p[-i]))$ the resulting classes. Note that $P^1_{{\mathbb F}_p}$ is the image of $P^1_{{\mathbb Z}_p}$ under the reduction map. By Lemma \ref{free cosimplicial: cohomology operations} the class $P^1_{{\mathbb Z}_p}$ gives rise to a natural homomorphism \begin{equation}P^1_{{\mathbb Z}_p}:H^i(A)\to H^{i+1}(A)\end{equation} for every cosimplicial commutative ${\mathbb Z}_p$-algebra $A$. Similarly, classes $P^0_{{\mathbb F}_p},P^1_{{\mathbb F}_p}$ define operations \begin{equation} P^0_{{\mathbb F}_p}:H^i(A_0)\to H^{i}(A_0)\quad P^1_{{\mathbb F}_p}:H^i(A_0)\to H^{i+1}(A_0) \end{equation}for every cosimplicial commutative ${\mathbb F}_p$-algebra $A_0$. In the following result by $W_2(A_0)$ we mean the cosimplicial commutative ${\mathbb Z}/p^2$-algebra obtained by applying term-wise the length $2$ Witt vectors functor to $A_0$. \begin{pr}\label{free cosimplicial: steenrod operations description prop} \begin{enumerate} \item For a cosimplicial commutative algebra $A$ over ${\mathbb Z}_p$ the operation $P^1_{{\mathbb Z}_p}:H^i(A)\to H^{i+1}(A)$ is equal to the composition $$H^i(A)\to H^i(A/p)\xrightarrow{\varphi_{A/p}}H^{i}(A/p)\xrightarrow{\mathrm{Bock}^i_{A}}H^{i+1}(A)$$ \item For a cosimplicial commutative algebra $A_0$ over ${\mathbb F}_p$ the operation $P^0_{{\mathbb F}_p}:H^i(A_0)\to H^i(A_0)$ is equal to the Frobenius endomorphism of $A_0$. \item For a cosimplicial commutative algebra $A_0$ over ${\mathbb F}_p$ the operation $P^1_{{\mathbb F}_p}:H^i(A_0)\to H^{i+1}(A_0)$ is equal to the connecting homomorphism induced by the exact sequence $A_0\xrightarrow{V} W_2(A_0)\to A_0$. \end{enumerate} \end{pr} \begin{proof} Given a class $[x]\in H^i(A)$ represented by a map $x:{\mathbb Z}_p[-i]\to A$ we get a commutative diagram \begin{equation}\label{free cosimplicial: steenrod diagram} \begin{tikzcd} T_p({\mathbb Z}_p[-i])[-1]\arrow[d, "{T_p(x)[-1]}"]\arrow[r, "\gamma_{{\mathbb Z}_p[-i]}"] & S^p{\mathbb Z}_p[-i]\arrow[d, "S^px"] \\ T_p(A)[-1]\arrow[r, "\gamma_{A}"] & S^pA\arrow[r, "m_A"] & A \end{tikzcd} \end{equation} By definition, the value $P^1_{{\mathbb Z}_p}([x])\in H^{i+1}(A)$ is equal to the image of the class $1\in {\mathbb F}_p\simeq H^{i+1}(T_p({\mathbb Z}_p[-i])[-1])$ under the clockwise composition in the diagram (\ref{free cosimplicial: steenrod diagram}). On the other hand, the counter-clockwise composition is homotopic to ${\mathbb F}_p[-i-1]\xrightarrow{x[-1]\bmod p }A/p[-1]\xrightarrow{\varphi_{A/p}[-1]}A/p[-1]\xrightarrow{\mathrm{Bock}_{A}}A$ by Lemma \ref{free cosimplicial: algebra Bockstein}, and this implies part (1).
For a class $[x]\in H^i(A_0)$ represented by a map $x:{\mathbb F}_p[-i]\to A_0$ the value $P^0_{{\mathbb F}_p}([x])\in H^i(A_0)$ is defined as the image of the unit in ${\mathbb F}_p=H^i({\mathbb F}_p[-i])$ under the composition ${\mathbb F}_p[-i]\xrightarrow{\Delta_{{\mathbb F}_p[-i]}}S^p{\mathbb F}_p[-i]\xrightarrow{S^px}S^pA_0\xrightarrow{m_{A_0}} A_0$. This composition is homotopic to ${\mathbb F}_p[-i]\xrightarrow{x}A_0\xrightarrow{\Delta_{A_0}}S^p A_0\xrightarrow{m_{A_0}} A_0$, which implies statement (2) by the definition of Frobenius on $A_0$. By Lemma \ref{free cosimplicial: cohomology operations}, in (3) it is enough to check the claimed formula for $P^1_{{\mathbb F}_p}$ on the universal class $1\in H^i(S^{\bullet}{\mathbb F}_p[-i])$ for $A_0=S^{\bullet}{\mathbb F}_p[-i]$. This cosimplicial ${\mathbb F}_p$-algebra lifts to $A=S^{\bullet}{\mathbb Z}_p[-i]$ and the universal class lifts along the map $H^i(A)\to H^i(A_0)$. Therefore we can apply part (1) to get that $P^1_{{\mathbb F}_p}(1)$ is equal to the image of $1$ under the composition $H^i(A_0)\xrightarrow{\varphi_{A_0}}H^i(A_0)\xrightarrow{\mathrm{Bock}_{A/p^2}}H^{i+1}(A_0)$. To see that this composition is equal to the Witt vector Bockstein homomorphism, recall that there is a map of cosimplicial rings $\phi:W_2(A_0)\to A/p^2$ term-wise given by $[a_0]+V[a_1]\mapsto \widetilde{a}_0^p+p\widetilde{a}_1$ where $\widetilde{a}_0,\widetilde{a}_1\in A^n/p^2$ are arbitrary lifts of $a_0,a_1\in A^n_0$ in cosimplicial degree $n$. This map fits into a commutative diagram where rows are term-wise exact sequences: \begin{equation} \begin{tikzcd} 0\arrow[r] & A_0\arrow[r,"V"]\arrow[d,equal] & W_2(A_0)\arrow[r]\arrow[d,"\phi"] & A_0\arrow[r]\arrow[d, "\varphi_{A_0}"] & 0 \\ 0\arrow[r] & A_0\arrow[r,"p"] & A/p^2\arrow[r] & A_0\arrow[r] & 0 \end{tikzcd} \end{equation} The induced map between the associated long exact sequences of cohomology proves that the connecting homomorphism induced by the top row is equal to $\mathrm{Bock}_{A/p^2}\circ\varphi_{A_0}$, which finishes the proof of (3). \end{proof} \subsection{Steenrod operations on cohomology of $E_{\infty}$-algebras.} In this expository subsection we recall how power operations on cohomology of an $E_{\infty}$-algebra are defined, and relate them to the operations on the cohomology of cosimplicial commutative algebras. All of this material is contained in \cite{steenrod-epstein}, \cite{may-steenrod}, \cite{priddy}. Let $A$ be an $E_{\infty}$-algebra over an (ordinary) commutative ring $R$. This structure, in particular, gives a multiplication map $A^{\otimes p}\to A$ in $D(R)$ which is $S_p$-equivariant, where $S_p$ acts via permutations of the factors in $A^{\otimes p}$ and acts trivially on $A$. The multiplication map therefore factors through a map \begin{equation} m_A:(A^{\otimes p})_{hS_p}\to A \end{equation} This map is used to define operations on cohomology of $A$. Given a cohomology class $[x]\in H^i(A)$ we can represent it by a map $x:R[-i]\to A$ in $D(R)$ and form the composition \begin{equation}\label{free cosimplicial: coh class symmetric multiplication} (R[-i]^{\otimes p})_{hS_p}\xrightarrow{x^{\otimes p}}(A^{\otimes p})_{hS_p}\xrightarrow{m_A}A \end{equation} We now specialize to the case $R={\mathbb F}_p$.
The cohomology groups of $({\mathbb F}_p[-i]^{\otimes p})_{hS_p}$ are given by \begin{equation} H^j(({\mathbb F}_p[-i]^{\otimes p})_{hS_p})=\begin{cases} {\mathbb F}_p,\text{ for }j=pi-(2k+1)(p-1)\text{ or }pi-(2k+1)(p-1)+1\text{ with }k\geq 0\\ 0, \text{ otherwise} \end{cases} \end{equation} if $i$ is odd, and by \begin{equation} H^j(({\mathbb F}_p[-i]^{\otimes p})_{hS_p})=\begin{cases} {\mathbb F}_p,\text{ for }j=pi-2k(p-1)\text{ or }pi-2k(p-1)+1\text{ with }k\geq 0\\ 0, \text{ otherwise} \end{cases} \end{equation} if $i$ is even, see \cite[Chapter V, Lemmas 6.1, 6.2 + Proposition 7.8]{steenrod-epstein}. Thus for every $m$ of the form $2k(p-1)$ or $2k(p-1)+1$ we can define $P^{m}([x])\in H^{i+m}(A)$ as the image of a fixed generator of $H^{i+m}(({\mathbb F}_p[-i]^{\otimes p})_{hS_p})$ under the map (\ref{free cosimplicial: coh class symmetric multiplication}). Applying this construction to the singular cohomology of a topological space gives rise to natural cohomology operations which satisfy special properties that are false for general $E_{\infty}$-algebras: \begin{thm}\label{free cosimplicial: top space operations} For a topological space $X$ and an integer $i\geq 0$ the operations $P^0:H^i_{\mathrm{sing}}(X,{\mathbb F}_p)\tilde{o} H^i_{\mathrm{sing}}(X,{\mathbb F}_p),P^1:H^i_{\mathrm{sing}}(X,{\mathbb F}_p)\tilde{o} H^{i+1}_{\mathrm{sing}}(X,{\mathbb F}_p)$ arising from the $E_{\infty}$-algebra structure on $C^{\bullet}_{\mathrm{sing}}(X,{\mathbb F}_p)$ are described as \begin{enumerate} \item $P^0=\mathrm{Id}$ \item $P^1$ is the Bockstein homomorphism corresponding to the term-wise exact sequence of complexes $C^{\bullet}_{\mathrm{sing}}(X,{\mathbb F}_p)\tilde{o} C^{\bullet}_{\mathrm{sing}}(X,{\mathbb Z}/p^2)\tilde{o} C^{\bullet}_{\mathrm{sing}}(X,{\mathbb F}_p)$. \end{enumerate} Moreover, all operations $P^m$ with $m<0$ are zero. \end{thm} We can reconcile these operations with the operations on cohomology of cosimplicial commutative algebras, which will justify denoting these by the same symbols $P^i$. Let $A\in\CAlg^{\Delta}_{{\mathbb F}_p}$ be a cosimplicial commutative ${\mathbb F}_p$-algebra. We can view it as an $E_{\infty}$-algebra with the symmetric multiplication factoring through the cosimplicial symmetric multiplication \begin{equation} (A^{\otimes p})_{hS_p}\tilde{o} S^pA\xrightarrow{m_A}A \end{equation} In particular, for a class $x\in H^i(A)$ the map (\ref{free cosimplicial: coh class symmetric multiplication}) factors through the natural map $({\mathbb F}_p[-i]^{\otimes p})_{hS_p}\tilde{o} S^p({\mathbb F}_p[-i])$. \begin{lm}\label{free cosimplicial: free einf free cosimp deg p} For every $i$ the map $({\mathbb F}_p[-i]^{\otimes p})_{hS_p}\tilde{o} S^p({\mathbb F}_p[-i])$ factors through an equivalence $\tau^{\geq i}({\mathbb F}_p[-i]^{\otimes p})_{hS_p}\simeq S^p({\mathbb F}_p[-i])$. \end{lm} \begin{proof} The cohomology groups of the object $S^p({\mathbb F}_p[-i])$ are abstractly isomorphic to that of $\tau^{\geq i}({\mathbb F}_p[-i]^{\otimes p})_{hS_p}$ by \cite[Theorems 4.1.1,4.2.1]{priddy} so it is enough to show that the map $\tau^{\geq i}({\mathbb F}_p[-i]^{\otimes p})_{hS_p}\tilde{o} S^p{\mathbb F}_p[-i]$ induces an injection on cohomology in degrees $\geq i$. For this it suffices to provide, for every $m\geq 0$, a topological space $X$ such that the operation $P^m:H^{i}_{\mathrm{sing}}(X,{\mathbb F}_p)\tilde{o} H^{i+m}_{\mathrm{sing}}(X,{\mathbb F}_p)$ is non-zero. 
The Eilenberg-MacLane space $X=K({\mathbb F}_p,i)$ satisfies this condition, because the operations on cohomology of topological spaces defined from cohomology classes of Eilenberg-MacLane spaces coincide with those defined using the $E_{\infty}$-algebra structure, e.g. by the uniqueness of functorial operations \cite[\S VIII.3]{steenrod-epstein}. \end{proof} This discussion can be applied to the cosimplicial commutative algebra $A=C^{\bullet}_{\mathrm{sing}}(X,{\mathbb F}_p)$ of singular cochains on a topological space $X$ whose underlying $E_{\infty}$-algebra was referenced in Theorem \ref{free cosimplicial: top space operations}. The fact that operations $P^m$ vanish for $m<0$ can thus be explained by the fact that $S^p({\mathbb F}_p[-i])$ is concentrated in degrees $\geq i$. The algebras of the form $C^{\bullet}_{\mathrm{sing}}(X,{\mathbb F}_p)$ are special among general cosimplicial commutative ${\mathbb F}_p$-algebras in that their Frobenius endomorphism is equal to the identity, because $C^{\mathrm{sing}}(X,{\mathbb F}_p)$ is defined as the algebra of ${\mathbb F}_p$-valued functions on the simplicial singular set of $X$, cf. \cite[6.1]{priddy}. In particular, the relations $P^0=\mathrm{Id}$ and $P^1=\mathrm{Bock}$ in Theorem \ref{free cosimplicial: top space operations} can be deduced from our Proposition \ref{free cosimplicial: steenrod operations description prop}. \section{Extensions in complexes underlying derived commutative algebras} \label{cosimp: section} In this section we prove our main algebraic result on extensions in the canonical filtration on the complexes underlying certain derived commutative algebras. In this section $X$ will be an algebraic stack flat over ${\mathbb Z}_p$, or a formal scheme flat over $\Spf{\mathbb Z}_p$. In both regimes we denote by $X_0=X\times_{{\mathbb Z}_p}{\mathbb F}_p$ the special fiber of $X$. \begin{thm}\label{cosimp: main theorem} Let $A\in \DAlg^{\geq 0}(X)$ be a derived commutative algebra on $X$ concentrated in degrees $\geq 0$ such that $H^0(A)={\mathcal O}_X$, $H^1(A)$ a locally free ${\mathcal O}_X$-module, and multiplication on cohomology induces an isomorphism $H^{\bullet}(A)\simeq \Lambda^{\bullet}H^1(A)$. Assume further that there is a morphism $s:H^1(A)[-1]\tilde{o} A$ in $D(X)$ splitting the canonical filtration on $\tau^{\leq 1}A$. \begin{enumerate} \item There exists a natural equivalence $\bigoplus\limits_{i=0}^{p-1} H^i(A)[-i]\simeq \tau^{\leq p-1}A$ in $D(X)$. \item There is a natural homotopy between the map $H^p(A)\simeq \cofib(\tau^{\leq p-1}A\tilde{o}\tau^{\leq p}A)[p]\tilde{o}(\tau^{\leq p-1}A)[p+1]$ corresponding to the extension $\tau^{\leq p-1}A\tilde{o} \tau^{\leq p}A\tilde{o} H^p(A)[-p]$ and the composition \begin{multline}\label{cosimp: main formula} H^p(A)=\Lambda^p H^1(A)\xrightarrow{\alpha(H^1(A)/p)}F_{X_0}^*H^1(A/p)[p-1]\xrightarrow{F_{X_0}^*s[p]}(\tau^{\leq 1}F_{X_0}^*(A/p))[p]\xrightarrow{\varphi_{A/p}}\\ (\tau^{\leq 1}A/p)[p]\xrightarrow{\mathrm{Bock}_A}(\tau^{\leq 1}A)[p+1]\tilde{o} (\tau^{\leq p-1}A)[p+1] \end{multline} \end{enumerate} \end{thm} \begin{proof} For each $i\geq 0$, consider the map \begin{equation} s_i:S^i(H^1(A)[-1])\xrightarrow{S^i s}S^iA\xrightarrow{m_A}A \end{equation} The composition $H^1(A)[-1]^{\otimes i}\tilde{o} S^i(H^1(A)[-1])\xrightarrow{s_i}A$ induces the multiplication map $m:H^1(A)^{\otimes i}\tilde{o} H^i(A)$ on $i$-th cohomology and $0$ on all other cohomology groups. 
By the assumption that $H^{\bullet}(A)$ is freely generated by $H^1(A)$, the map $m:H^1(A)^{\otimes i}\tilde{o} H^i(A)$ identifies $H^i(A)$ with $\Lambda^iH^1(A)$. For $i\leq p-1$ we have $S^i(H^1(A)[-1])\simeq \Gamma^i(H^1(A)[-1])\simeq (\Lambda^i H^1(A))[-i]$, and the map $H^1(A)[-1]^{\otimes i}\tilde{o} S^i(H^1(A)[-1])$ is the shift by $[-i]$ of the natural surjection $H^1(A)^{\otimes i}\tilde{o} \Lambda^i H^1(A)$. Therefore $s_i$ induces an isomorphism on $H^i$, which proves the first part of the theorem. For $i=p$ the natural map $S^p(H^1(A)[-1])\xrightarrow{N_p}\Gamma^p(H^1(A)[-1])\simeq (\Lambda^p H^1(A))[-p]$ is not an equivalence anymore. Using the map $s_p$, we will relate the extension $\tau^{\leq p-1}A\tilde{o}\tau^{\leq p}A\tilde{o} H^p(A)[-p]$ to the extension \begin{equation}F_{X_0}^*(H^1(A)/p)[-2]\tilde{o} S^p(H^1(A)[-1])\tilde{o} \Lambda^p H^1(A)[-p]\end{equation} constructed in (\ref{free cosimplicial: alpha flat over zp}). To begin, let us compute the map induced by $s_p$ on the cohomology in degree $p$. Since $S^p(H^1(A)[-1])$ is concentrated in degrees $\leq p$, the map $s_p$ naturally factors through $\tau^{\leq p}A$. We claim that the composition \begin{equation}\label{cosimp: deg p coh diagram}S^p(H^1(A)[-1])\xrightarrow{s_p}\tau^{\leq p}A\tilde{o} H^p(A)[-p]\end{equation} is naturally homotopic to the norm map $N_p:S^p(H^1(A)[-1])\tilde{o}(\Lambda^pH^1(A))[-p]$. This composition factors uniquely through the norm map, because the mapping space $\Map_{D(X)}(\mathrm{fib} (N_p),H^p(A)[-p])$ is contractible as $H^p(A)[-p]$ is a locally free sheaf placed in degree $p$, and $\mathrm{fib} (N_p)\simeq F^*_{X_0}H^1(A)/p[-2]$ is a $p$-torsion object concentrated in degrees $\leq 2\leq p$. Therefore the composition (\ref{cosimp: deg p coh diagram}) has the form $S^p(H^1(A)[-1])\xrightarrow{N_p}\Lambda^p H^1(A)[-p]\xrightarrow{\psi}H^p(A)[-p]$ for some map $\psi$. To check that $\psi$ is equal to the cup-product map, we may precompose this composition with the map $H^1(A)[-1]^{\otimes p}\tilde{o} S^p H^1(A)[-1]$, and the claim follows from: \begin{lm}\label{cosimp: tensor power vs symmetric power} Let $E$ be a locally free sheaf on $X$. Under the d\'ecalage identification $\Gamma^p (E[-1])\simeq \Lambda^p E[-p]$ the composition $E[-1]^{\otimes p}\tilde{o} S^p(E[-1])\xrightarrow{N_p}\Gamma^p(E[-1])$ is identified with the shift by $[-p]$ of the map $E^{\otimes p}\tilde{o}\Lambda^p E$. \end{lm} \begin{proof} \cite[Proposition I.4.3.2.1]{illusie-cotangent1} shows that d\'ecalage equivalences are compatible with graded algebra structures on symmetric, divided power, and exterior algebras. Our assertion is a special case of this. 
\end{proof} This allows us to fit $s_p$ into the following map of fiber sequences for some map $\beta_p^{\leq p-1}$: \begin{equation} \begin{tikzcd}[row sep=large, column sep = large] F^*_{X_0}(H^1(A)/p)[-2]\arrow[r,"{\gamma_{H^1(A)[-1]}}"]\arrow[d,"\beta_p^{\leq p-1}"] & S^p(H^1(A)[-1])\arrow[r,"N_p"]\arrow[d, "s_p"] & (\Lambda^p H^1(A))[-p]\arrow[d, "\sim"] \\ \tau^{\leq p-1}A\arrow[r] & \tau^{\leq p}A\arrow[r] & H^p(A)[-p] \end{tikzcd} \end{equation} This diagram implies that the extension class $H^p(A)\tilde{o} (\tau^{\leq p-1}A)[p+1]$ corresponding to the bottom row can be described as the composition \begin{equation}\label{cosimp: diagram yet unknown beta} H^p(A)\simeq \Lambda^p H^1(A)\xrightarrow{\alpha(H^1(A)/p)}F_{X_0}^*(H^1(A)/p)[p-1]\xrightarrow{\beta_p^{\leq p-1}[p+1]}(\tau^{\leq p-1}A)[p+1] \end{equation} Here $\alpha(H^1(A)/p)$ is the natural class attached to the vector bundle $H^1(A)/p$ on $X_0$ by Definition \ref{free cosimplicial: alpha definition}, we view it as a map from $\Lambda^p H^1(A)$ to $F^*_{X_0}(H^1(A)/p)[p-1]$ via adjunction as in Lemma \ref{free cosimplicial: alpha adjunction}. To finish the proof of Theorem \ref{cosimp: main theorem}, it remains to compute $\beta_p^{\leq p-1}$. Note that $\beta_p^{\leq p-1}$ can be naturally recovered from the composition $\beta_p:F^*_{X_0}H^1(A)/p[-2]\xrightarrow{\beta^{\leq p-1}_p} \tau^{\leq p-1}A\tilde{o} A$ as the truncation $\tau^{\leq p-1}\beta_p$, because the mapping space $\Map_{D(X)}(F^*_{X_0}H^1(A)/p[-2],\tau^{\geq p}A)$ is contractible. To identify $\beta_p$, consider the following commutative diagram \begin{equation} \begin{tikzcd}[column sep = large] F^*_{X_0}(H^1(A)/p)[-2]\arrow[r,"{\gamma_{H^1(A)[-1]}}"]\arrow[d, "{F_{X_0}^*s[-1]}"]\arrow[drr, bend left=58,"\beta_p"] & S^p (H^1(A)[-1])\arrow[d, "S^ps"]\\ F^*_{X_0}(A/p)[-1]\arrow[r] & S^pA \arrow[r, "m"] & A \end{tikzcd} \end{equation} The composition of the bottom row was proven in Lemma \ref{free cosimplicial: algebra Bockstein} to be homotopic to $F^*_{X_0}(A/p)[-1]\xrightarrow{\varphi_{A/p}}A/p[-1]\xrightarrow{\mathrm{Bock}_A}A$, hence $\beta_p:F_{X_0}^*(H^1(A)/p)[-2]\tilde{o} A$ is homotopic to the composition \begin{equation} F^*_{X_0}(H^1(A)/p)[-2]\xrightarrow{F_{X_0}^*s[-1]}F_{X_0}^*(A/p)[-1]\xrightarrow{\varphi_{A/p}}A/p[-1]\xrightarrow{\mathrm{Bock}_A[-1]}A \end{equation} that can be factored as \begin{equation}\label{cosimp: betaleq1 formula} F^*_{X_0}(H^1(A)/p)[-2]\xrightarrow{F_{X_0}^*s[-1]}(\tau^{\leq 1}F_{X_0}^*(A/p))[-1]\xrightarrow{\varphi_{A/p}}\tau^{\leq 1}(A/p)[-1]\xrightarrow{\mathrm{Bock}_A[-1]}\tau^{\leq 1}A\tilde{o} A \end{equation} Therefore $\beta^{\leq p-1}_p$ is the composition of the first $3$ arrows in (\ref{cosimp: betaleq1 formula}) followed by the map $\tau^{\leq 1}A\tilde{o}\tau^{\leq p-1}A$. Plugging this expression for $\beta_p^{\leq p-1}$ into (\ref{cosimp: diagram yet unknown beta}) finishes the proof of the second part of the theorem. \end{proof} Let us record the special form that Theorem \ref{cosimp: main theorem} takes in the case of augmented algebras. \begin{cor}\label{cosimp: augmented} Let $A\in \DAlg^{\geq 0}(X)$ be a derived commutative algebra such that $H^0(A)\simeq {\mathcal O}_X$ and the multiplication on cohomology induces an isomorphism $H^{\bullet}(A)=\Lambda^{\bullet}H^1(A)$. Assume also that $A$ is equipped with a map $\varepsilon:A\tilde{o} {\mathcal O}_X$ of derived commutative algebras that induces an isomorphism on $H^0$. 
Then \begin{enumerate} \item $\tau^{\leq p-1}A$ naturally decomposes in $D(X)$ as $\bigoplus\limits_{i=0}^{p-1}H^i(A)[-i]$. \item The extension class $H^p(A)\to \tau^{\leq p-1}A[p+1]$ corresponding to $\tau^{\leq p}A$ can be described as the composition \begin{multline}\label{cosimp: augmented formula} H^p(A)\xrightarrow{\alpha(H^1(A)/p)}F^*(H^1(A)/p)[p-1]\xrightarrow{\varphi_{A/p}} (H^1(A)/p)[p-1]\xrightarrow{\mathrm{Bock}_{H^1(A)}[p-1]} \\ H^1(A)[p]\xrightarrow{\oplus} \tau^{\leq p-1}A[p+1] \end{multline} \end{enumerate} \end{cor} \begin{proof} The augmentation map $\varepsilon$ induces a splitting of the fiber sequence $H^0(A)\to \tau^{\leq 1}A\to H^1(A)[-1]$. In particular, there exists a map $s:H^1(A)[-1]\to A$ inducing an isomorphism on $H^1$, so we are in a position to apply Theorem \ref{cosimp: main theorem}. The formula (\ref{cosimp: main formula}) specializes to (\ref{cosimp: augmented formula}) because under the decompositions $\tau^{\leq 1}A\simeq H^0(A)\oplus H^1(A)[-1]$ and $\tau^{\leq 1}(A/p)\simeq H^0(A/p)\oplus H^1(A/p)[-1]$ the Bockstein and Frobenius morphisms are diagonalized. \end{proof} \subsection{Equivariant situation} Suppose that $X$ is a flat ${\mathbb Z}_p$-scheme equipped with an action of a discrete group $G$. Let us explicitly record that Theorem \ref{cosimp: main theorem} applied to the global quotient $[X/G]$ can be rephrased as a statement about $G$-equivariant algebras on $X$, where we take the definition of the category of $G$-equivariant derived commutative algebras on $X$ to be $\DAlg([X/G])$. We state the result in the augmented setting because this is the version that will be used in all the applications. \begin{thm}\label{cosimp: equivariant main theorem} Given a $G$-equivariant augmented derived commutative algebra $A$ on $X$, such that $H^0(A)={\mathcal O}_X$, the sheaf $H^1(A)$ is locally free as an ${\mathcal O}_X$-module, and the multiplication induces isomorphisms $\Lambda^{\bullet} H^1(A)\simeq H^{\bullet}(A)$, we have \begin{enumerate} \item There is an equivalence $\tau^{\leq p-1}A\simeq \bigoplus\limits_{i=0}^{p-1} H^i(A)[-i]$ in $D_G(X)$. \item The map $H^p(A)\to \tau^{\leq p-1}A[p+1]$ corresponding to the fiber sequence $\tau^{\leq p-1}A\to \tau^{\leq p}A\to H^p(A)[-p]$ in $D_G(X)$ can be described as \begin{equation}\label{cosimp: equivariant main formula} H^p(A)\xrightarrow{\alpha(H^1(A)/p)}F^*(H^1(A)/p)[p-1]\xrightarrow{\varphi_{A/p}} (H^1(A)/p)[p-1]\xrightarrow{\mathrm{Bock}_{H^1(A)}[p-1]}H^1(A)[p]\xrightarrow{\oplus}\tau^{\leq p-1}A[p+1]. \end{equation} \end{enumerate} \end{thm} \section{Applications to de Rham and Hodge-Tate cohomology}\label{applications: section} In this section we apply Theorem \ref{cosimp: main theorem} to de Rham and diffracted Hodge complexes. Let $k$ be a perfect field of characteristic $p>0$. We now work in a setup where $X$ is a formally smooth formal scheme over $W(k)$, and as before denote by $X_0$ its special fiber $X\times_{W(k)}k$. We denote by $F_{X_0/k}:X_0\to X_0^{(1)}:=X_0\times_{k,\operatorname{Fr}_p}k$ the relative Frobenius morphism, and as before $F_{X_0}:X_0\to X_0$ denotes the absolute Frobenius morphism. We have the following cohomological invariants associated to $X$ and $X_0$, each equipped with a derived commutative algebra structure.
\begin{itemize} \item The diffracted Hodge complex $\Omega^{\DHod}_{X}\in D(X)$ defined in \cite[Notation 4.7.12]{apc} whose cohomology algebra $H^{\bullet}(\Omega^{\DHod}_X)$ is isomorphic to the algebra $\Omega^{\bullet}_X$ of differential forms on $X$. By \cite[Theorem 7.20(2)]{bhatt-lurie-prismatization} it can be identified with the derived pushforward $R\pi^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_*{\mathcal O}_{X^{\DHod}}$ of the structure sheaf along the map $\pi^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}:X^{\DHod}\tilde{o} X$, hence Lemma \ref{dalg: pushforward of rings} equips $\Omega^{\DHod}_X$ with a structure of a derived commutative algebra in $D(X)$. The Sen operator on $\Omega^{\DHod}_{X}$ induces a decomposition $\tau^{\leq p-1}\Omega^{\DHod}_{X}\simeq\bigoplus\limits_{i=0}^{p-1}\Omega^i_X[-i]$ in $D(X)$, and, in particular, gives rise to a map $s:\Omega^1_X[-1]\tilde{o}\Omega^{\DHod}_X$ that induces an isomorphism on $H^1$. \item The de Rham complex $\mathrm{dR}_{X_0/k}=F_{X_0/k*}{\mathcal O}_{X_0}\xrightarrow{d}F_{X_0/k*}\Omega^1_{X_0/k}\xrightarrow{d}\dots$ viewed as an object of $D(X^{(1)}_0)$. The Cartier isomorphism provides an identification $H^{\bullet}(\mathrm{dR}_{X_0/k})\simeq \Omega^{\bullet}_{X^{(1)}_0/k}$ of graded algebras. By de Rham comparison, $\mathrm{dR}_{X_0/k}$ is naturally identified with $\Omega^{\DHod}_{X^{(1)}}/p$, where $X^{(1)}:=X\times_{W(k),\operatorname{Fr}_p}W(k)$ is the Frobenius-twist of the formal $W(k)$-scheme $X$. We give $\mathrm{dR}_{X_0/k}$ the structure of an object of $\DAlg(X^{(1)}_0)$ by identifying it with the derived pushforward of the structure sheaf along the map $\pi^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}:(X^{(1)}_0)^{\DHod}\tilde{o} X_0^{(1)}$ using Lemma \ref{dalg: pushforward of rings}. \end{itemize} \begin{rem} The diffracted Hodge complex can be identified with the Hodge-Tate cohomology of $X$ relative to an appropriate prism, by \cite[Example 4.7.8]{apc}. Theorem \ref{cosimp: main theorem} also applies to Hodge-Tate cohomology of smooth formal schemes over arbitrary prisms, as well as to the decompleted version of the diffracted Hodge cohomology \cite[Construction 4.9.1]{apc}. \end{rem} We will apply Theorem \ref{cosimp: main theorem} to $\Omega^{\DHod}_{X^{(1)}}$ and will then compute the extension in the canonical filtration on $\mathrm{dR}_{X_0/k}$ by reducing modulo $p$. To begin, we will relate the Frobenius endomorphism of the de Rham complex to the obstruction to lifting Frobenius onto $X\times_{W(k)}W_2(k)$. \subsection{Obstruction to lifting Frobenius over $W_2(k)$.} We prove the results in this subsection without the smoothness assumption, and only assuming the existence of a flat lift over $W_2(k)$, for a future application in Section \ref{semiperf: section}. For a scheme $Y_0$ over $k$ equipped with a lift $Y_1$ over $W_2(k)$ we denote by $\ob_{F,Y_1}:F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}\tilde{o} {\mathcal O}_{Y_0}[1]$ the obstruction to lifting $F_{Y_0/k}:Y_0\tilde{o} Y_0^{(1)}$ to a morphism $Y_1\tilde{o} Y_1^{(1)}$, as defined by Illusie \cite{illusie-cotangent1}. We also denote by $\mathrm{dR}_{Y_0/k}$ the derived de Rham complex of $Y_0$ relative to $k$, viewed as an object of $D(Y_0^{(1)})$, cf. \cite[\S 3]{bhatt-derived}. It is equipped with a filtration $\mathrm{Fil}_{\mathrm{conj}}^{\bullet}$ whose graded pieces are equivalent to the shifted exterior powers of the cotangent complex: $\gr_{\mathrm{conj}}^i\mathrm{dR}_{Y_0/k}\simeq L\Omega^i_{Y_0^{(1)}/k}[-i]$. 
Note also that the natural map induces an equivalence $\mathrm{dR}_{Y_0/{\mathbb F}_p}\simeq \mathrm{dR}_{Y_0/k}$. Using the relation between the cotangent complex of $Y_0$ over $W(k)$ and the de Rham complex of $Y_0$, due to \cite{prisms} and \cite{illusie-conjugate} we will prove: \begin{pr}\label{applications: frobenius lift obstruction prop} Let $Y_0$ be a quasisyntomic scheme over $k$ equipped with a flat lift $Y_1$ over $W_2(k)$. Denote by $s:L\Omega^1_{Y^{(1)}_0/k}[-1]\tilde{o} \mathrm{dR}_{Y_0/k}$ the splitting of the conjugate filtration in degree $1$ arising from $Y_1$. The composition \begin{equation}F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}[-1]\xrightarrow{F^*_{Y_0/k}s}F_{Y_0/k}^*\mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y_0/k}\xrightarrow{dF_{Y^{(-1)}_0/k}} \mathrm{Fil}^{\mathrm{conj}}_1\mathrm{dR}_{Y^{(-1)}_0/k}\end{equation} is homotopic to the composition $F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0}[-1]\xrightarrow{\ob_{F,Y_1}}{\mathcal O}_{Y_0}\tilde{o}\mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y^{(-1)}_0}$. Here $Y_0^{(-1)}$ is the twist $Y_0\times_{k,\operatorname{Fr}_p^{-1}}k$ by the inverse of Frobenius and $dF_{Y^{(-1)}_0/k}$ is the map induced by the functoriality of the de Rham complex. \end{pr} \begin{rem} For $Y_0$ smooth over $k$ this result also follows from the explicit model for the map $s:\Omega^1_{Y_0^{(1)}/k}\tilde{o} \mathrm{dR}_{Y_0/k}$ using local Frobenius lifts, provided by \cite{deligne-illusie}, cf. \cite{srinivas}. \end{rem} \begin{rem}\label{applications: absolute vs relative obstruction} Using the isomorphism $Y_0^{(1)}\simeq Y_0$ of ${\mathbb F}_p$-schemes we may view $\ob_{F,Y_1}$ as a map $F^*_{Y_0}L\Omega^1_{Y_0}[-1]\tilde{o}{\mathcal O}_{Y_0}[1]$ that coincides with the obstruction to lifting the absolute Frobenius morphism $F_{Y_0}:Y_0\tilde{o} Y_0$ to an endomorphism of $Y_1$, we will denote this map by the same symbol $\ob_{F,Y_1}$. \end{rem} \begin{convention}\label{applications: sign flip}We take $\ob_{F,Y_1}$ to mean the {\it negative} of the obstruction class defined in \cite{illusie-cotangent1}, to avoid a trailing sign in all of the subsequent expressions. \end{convention} We start by recalling from \cite{illusie-conjugate} how a lift of $Y_0$ over $W_2(k)$ provides a decomposition of $L\Omega^1_{Y_0/W(k)}$. In general, if $Y_0$ is a quasisyntomic scheme over $k$ we have the fundamental triangle corresponding to the morphisms $Y_0\tilde{o} \Spec k\tilde{o} \Spec W(k)$ \begin{equation}\label{applications: fundamental triangle formula} {\mathcal O}_{Y_0}[1]\simeq {\mathcal O}_{Y_0}\otimes_k L\Omega^1_{k/W(k)}\tilde{o} L\Omega^1_{Y_0/W(k)}\tilde{o} L\Omega^1_{Y_0/k} \end{equation} The natural map $\mathrm{fib}(L\Omega^1_{Y_0/W(k)}\tilde{o} L\Omega^1_{Y_0/k})\tilde{o} \mathrm{fib}(L\Omega^1_{Y_0/W_2(k)}\tilde{o} L\Omega^1_{Y_0/k})$ establishes the source as a direct summand of the target, hence a flat scheme $Y_1$ over $W_2(k)$ lifting $Y_0$ induces a splitting of this fiber sequence via the map $di:L\Omega^1_{Y_0/k}\simeq i^*L\Omega^1_{Y_1/W_2(k)}\tilde{o} L\Omega^1_{Y_0/W_2(k)}$, where $i:Y_0\hookrightarrow Y_1$ is the inclusion of the special fiber, cf. \cite[\S 4]{illusie-conjugate}. We denote by $s'_{Y_1}:L\Omega^1_{Y_0/k}\tilde{o} L\Omega^1_{Y_0/W(k)}$ the resulting section of (\ref{applications: fundamental triangle formula}). 
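Here we are using that, since $k=W(k)/p$ and $p$ is a nonzerodivisor in $W(k)$, the cotangent complex $L\Omega^1_{k/W(k)}$ is equivalent to
\begin{equation}
(p)/(p^2)[1]\simeq k[1],
\end{equation}
with generator given by the class of $p$; this is the identification implicit in the first term of (\ref{applications: fundamental triangle formula}).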
\begin{lm}\label{applications: frobenius obstruction definition} For a flat scheme $Y_1$ over $W_2(k)$ the obstruction $\ob_{F,Y_1}:F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}\tilde{o} {\mathcal O}_{Y_0}[1]$ to lifting $F_{Y_0/k}:Y_0\tilde{o} Y^{(1)}_0$ to a morphism from $Y_1$ to $Y_1^{(1)}$ is homotopic to the composition \begin{equation} F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}\xrightarrow{F^*_{Y_0/k}di^{(1)}} F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/W_2(k)}\xrightarrow{dF_{Y_0/k}}L\Omega^1_{Y_0/W_2(k)}\tilde{o} {\mathcal O}_{Y_0}[1] \end{equation} \end{lm} \begin{proof} Let us start by recalling the definition of this obstruction given in \cite[Proposition III.2.2.4]{illusie-cotangent1}. The scheme $Y_1$ viewed as a lift of $Y_0$ produces a map $L\Omega^1_{Y_0/W_2(k)}\tilde{o} {\mathcal O}_{Y_0}[1]$, and composing it with $dF_{Y_0/k}$ we get a map \begin{equation}\gamma_{Y_0}:F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/W_2(k)}\xrightarrow{dF_{Y_0/k}} L\Omega^1_{Y_0/W_2(k)}\tilde{o} {\mathcal O}_{Y_0}[1]\end{equation} On the other hand, the lift $Y^{(1)}_1$ of $Y^{(1)}_0$ gives rise to a map $L\Omega^1_{Y^{(1)}_0/W_2(k)}\tilde{o} {\mathcal O}_{Y^{(1)}_0}[1]$, and pulling it back along $F_{Y_0/k}$ we obtain a map \begin{equation} \gamma_{Y^{(1)}_0}:F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/W_2(k)}\tilde{o} F^*_{Y_0/k}{\mathcal O}_{Y^{(1)}_0}[1]\simeq {\mathcal O}_{Y_0}[1] \end{equation} By definition, $\ob_{F,Y_1}:F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}\tilde{o} {\mathcal O}_{Y_0}[1]$ is the unique (up to a contractible space of choices) map such that the composition $F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/W_2(k)}\tilde{o} F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}\xrightarrow{\ob_{F,Y_1}}{\mathcal O}_{Y_0}[1]$ is homotopic to the difference $\gamma_{Y_0}-\gamma_{Y_0^{(1)}}$ (we have incorporated Convention \ref{applications: sign flip} at this point). Equivalently, $\ob_{F,Y_1}$ is the composition \begin{equation}F^*_{Y_0/k}L\Omega^1_{Y^{(1)}_0/k}\xrightarrow{F_{Y_0/k}^*di^{(1)}}F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/W_2(k)}\xrightarrow{\gamma_{Y_0}-\gamma_{Y^{(1)}_0}}{\mathcal O}_{Y_0}[1]\end{equation} However, the composition $F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}\xrightarrow{F^*_{Y_0/k}di^{(1)}} F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/W_2(k)}\xrightarrow{\gamma_{Y^{(1)}_0}}{\mathcal O}_{Y_0}[1]$ is zero by construction, so the lemma follows from the definition of $\gamma_{Y_0}$. \end{proof} \begin{proof}[Proof of Proposition \ref{applications: frobenius lift obstruction prop}] \cite[Proposition 4.15]{prisms} in the smooth case, and \cite[Corollary 3.3]{illusie-conjugate} in general identifies $\mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y_0/k}$ with the shifted cotangent complex $L\Omega^1_{Y^{(1)}_0/W(k)}[-1]$. Moreover, the fiber sequence ${\mathcal O}_{Y^{(1)}_0}\tilde{o} \mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y_0/k}\tilde{o} L\Omega^1_{Y^{(1)}_0/k}[-1]$ induced by the conjugate filtration is identified with the shift of the fundamental triangle \begin{equation}\label{applications: fundamental w2 triangle}L\Omega^1_{k/W(k)}\otimes_k{\mathcal O}_{Y^{(1)}_0}\tilde{o} L\Omega^1_{Y^{(1)}_0/W(k)}\tilde{o} L\Omega^1_{Y^{(1)}_0/k}\end{equation} corresponding to the sequence of morphisms $Y^{(1)}_0\tilde{o}\Spec k\tilde{o} \Spec W(k)$. Denote by $i:Y^{(1)}_0\hookrightarrow Y^{(1)}_1$ the usual inclusion. We have a map $s'_{Y_0^{(1)}}:L\Omega^1_{Y^{(1)}_0/k}\tilde{o} L\Omega^1_{Y^{(1)}_0/W_2(k)}$ that splits this fundamental triangle, hence defining a map $s'_{Y^{(1)}_1}:L\Omega^1_{Y^{(1)}_0/k}[-1]\tilde{o}\mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y_0/k}$. 
By the uniqueness of functorial decompositions of the de Rham complex \cite[Theorem 5.10]{li-mondal}, $s'_{Y^{(1)}_1}$ is naturally equivalent to the section $s_{Y^{(1)}_1}$ constructed using the Sen operator. Since the identification $\mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y_0/k}\simeq L\Omega^1_{Y_0/W(k)}[-1]$ is functorial in $Y_0$, the map $F_{Y_0/k}^*\mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y_0/k}\tilde{o} \mathrm{Fil}_1^{\mathrm{conj}}\mathrm{dR}_{Y_0^{(-1)}/k}$ induced by $F_{Y_0/k}$ is identified with the shift of $dF_{Y_0/k}:F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/W(k)}\xrightarrow{} L\Omega^1_{Y_0/W(k)}$. The latter map factors as \begin{equation} F^*_{Y_0/k}L\Omega^1_{Y^{(1)}_0/W(k)}\tilde{o} {\mathcal O}_{Y_0}\tilde{o} L\Omega^1_{Y_0/W(k)} \end{equation} because $dF_{Y_0/k}:F_{Y_0/k}^*L\Omega^1_{Y^{(1)}_0/k}\tilde{o} L\Omega^1_{Y_0/k}$ is zero, and precomposing it with $F^*_{Y_0/k}\Omega^1_{Y^{(1)}_0}\xrightarrow{F^*_{Y_0/k}s'_{Y_1^{(1)}}} F^*_{Y_0/k}L\Omega^1_{Y^{(1)}_0/W(k)}$ we complete the proof of Proposition \ref{applications: frobenius lift obstruction prop} by the formula for the obstruction to lifting Frobenius given by Lemma \ref{applications: frobenius obstruction definition}. \end{proof} The Frobenius arising from the derived commutative structure on the de Rham complex coincides with the map induced by the geometric Frobenius morphism: \begin{lm}\label{applications: cosimplicial frob is frob} For a smooth scheme $X_0$ over $k$ the Frobenius map $\varphi_{\mathrm{dR}_{X_0/k}}:F^*_{X^{(1)}_0}\mathrm{dR}_{X_0/k}\tilde{o} \mathrm{dR}_{X_0/k}$ of the derived commutative algebra $\mathrm{dR}_{X_0/k}\in \DAlg(X_0^{(1)})$ is naturally identified with the map $dF_{X^{(1)}_0}:F_{X^{(1)}_0}^*\mathrm{dR}_{X_0/k}\tilde{o} \mathrm{dR}_{X_0/k}$ induced by the functoriality of the de Rham complex under the absolute Frobenius endomorphism. Explicitly, this morphism is the composition \begin{equation}F_{X_0^{(1)}}^*\mathrm{dR}_{X_0/k}\tilde{o} F^*_{X^{(1)}_0}F_{X_0/k*}{\mathcal O}_{X_0}\xrightarrow{\varphi_{{X^{(1)}_0}}} {\mathcal O}_{X^{(1)}_0}\tilde{o}\mathrm{dR}_{X_0/k}\end{equation} where the first map is induced by the map from the de Rham complex to its $0$th term. \end{lm} \begin{proof} We endowed $\mathrm{dR}_{X_0/k}$ with the structure of a derived commutative algebra by identifying it with the pushforward $R\pi_{*}^{\DHod}{\mathcal O}_{(X^{(1)}_0)^{\DHod}}$ of the structure sheaf along the morphism of stacks $\pi^{\DHod}:(X^{(1)}_0)^{\DHod}\tilde{o} X^{(1)}_0$. By Lemma \ref{dalg: geometric frobenius}, the map $\varphi_{\mathrm{dR}_{X_0/k}}:F_{X_0^{(1)}}^*\mathrm{dR}_{X_0/k}\tilde{o} \mathrm{dR}_{X_0/k}$ is the composition $F^*_{X^{(1)}_0}\mathrm{dR}_{X_0/k}=F_{X^{(1)}_0}^*R\pi^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{*}{\mathcal O}_{(X^{(1)}_0)^{\DHod}}\tilde{o} R\pi^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_*F_{(X^{(1)}_0)^{\DHod}}^*{\mathcal O}_{(X^{(1)}_0)^{\DHod}}\simeq R\pi^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_*{\mathcal O}_{(X^{(1)}_0)^{\DHod}}=\mathrm{dR}_{X_0/k}$. The Frobenius endomorphism of the stack $X_0^{\DHod}$ coincides with the endomorphism induced from $F_{X^{(1)}_0}$ by functoriality, cf. \cite[Remark 3.6]{bhatt-lurie-prismatization}. Hence $\varphi_{\mathrm{dR}_{X_0/k}}$ is equivalent to the morphism induced by $F_{X_0}$ by functoriality of the de Rham complex. The last assertion follows from the fact that the maps $dF_{X^{(1)}_0}:F_{X^{(1)}_0}^*F_{X_0/k*}\Omega^i_{X_0}\tilde{o}\Omega^i_{X^{(1)}_0}$ are zero for all $i\geq 1$. 
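Indeed, the vanishing of these maps can be checked locally: the map induced by the Frobenius sends (the pullback of) a local $1$-form $g\,dh$ to
\begin{equation}
g^p\,d(h^p)=p\,g^ph^{p-1}\,dh=0
\end{equation}
in characteristic $p$, and similarly every $i$-form with $i\geq 1$ is sent to zero.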
\end{proof} Let us record the compatibility between two Frobenius morphisms on the level of cohomology arising from Lemma \ref{applications: cosimplicial frob is frob}: \begin{lm}\label{applications: coh cosimplicial frob is frob} For a smooth $k$-scheme $X_0$, the Frobenius morphism $\varphi_{\mathrm{R}\Gamma_{\mathrm{dR}}(X_0/k)}$ of the derived commutative $k$-algebra $\mathrm{R}\Gamma_{\mathrm{dR}}(X_0/k)$ is naturally homotopic to the morphism induced by the relative Frobenius morphism $F_{X_0/k}:X_0\tilde{o} X^{(1)}_0$. \end{lm} \subsection{Proof of Theorem \ref{cosimp applications: HT extension}.} We can now prove the main result of this section: \begin{thm}\label{cosimp applications: HT extension} For a smooth formal scheme $X$ over $\Spf W(k)$ there is a natural decomposition $\tau^{\leq p-1}\Omega^{\DHod}_X\simeq \bigoplus\limits_{i=0}^{p-1}\Omega^i_X[-i]$ in $D(X)$. The class of the extension $\tau^{\leq p-1}\Omega^{\DHod}_X\tilde{o}\tau^{\leq p}\Omega^{\DHod}_X\tilde{o} \Omega^p_X[-p]$ in $D(X)$ is naturally equivalent to the composition \begin{equation}\label{cosimp applications: main formula} \Omega^p_X\xrightarrow{\alpha(\Omega^1_{X_0})}F_{X_0}^*\Omega^1_{X_0}[p-1]\xrightarrow{\ob_{F,X\times_{W(k)}{W_2(k)}}}{\mathcal O}_{X_0}[p]\xrightarrow{\mathrm{Bock}_{{\mathcal O}_X}}{\mathcal O}_X[p+1]\tilde{o} \Omega^{\DHod}_X[p+1] \end{equation} \end{thm} \begin{proof} Applying Theorem \ref{cosimp: main theorem} to the derived commutative algebra $\Omega^{\DHod}_X$ on $X$ equipped with the splitting $s:\Omega^1_X[-1]\tilde{o} \Omega^{\DHod}_X$ gives the first assertion (which by now we knew anyway thanks to the existence of the Sen operator), as well as the following formula for the extension class of $\tau^{\leq p}\Omega^{\DHod}_X$: \begin{multline} \Omega^p_X\xrightarrow{\alpha(\Omega^1_{X_0})}F_{X_0}^*\Omega^1_{X_0}[p-1]\xrightarrow{F_{X_0}^*s[p]}(\tau^{\leq 1}F^*_{X_0}\mathrm{dR}_{X^{(-1)}_0/k})[p]\xrightarrow{\varphi_{\mathrm{dR}_{X^{(-1)}_0/k}}}(\tau^{\leq 1}\mathrm{dR}_{X_0^{(-1)}/k})[p]\\ \xrightarrow{\mathrm{Bock}_{\Omega^{\DHod}_X}}(\tau^{\leq 1}\Omega^{\DHod}_X)[p+1]\tilde{o} (\tau^{\leq p-1})\Omega^{\DHod}_X[p+1] \end{multline} Using Proposition \ref{applications: frobenius lift obstruction prop} together with Remark \ref{applications: absolute vs relative obstruction}, and Lemma \ref{applications: cosimplicial frob is frob} we can rewrite this composition as \begin{multline} \Omega^p_X\xrightarrow{\alpha(\Omega^1_{X_0})}F_{X_0}^*\Omega^1_{X_0}[p-1]\xrightarrow{\ob_{F,X\times_{W(k)}W_2(k)}}{\mathcal O}_{X_0}[p]\tilde{o}(\tau^{\leq 1}\mathrm{dR}_{X_0^{(-1)}/k})[p]\\ \xrightarrow{\mathrm{Bock}_{\Omega^{\DHod}_X}}(\tau^{\leq 1}\Omega^{\DHod}_X)[p+1]\tilde{o} (\tau^{\leq p-1}\Omega^{\DHod}_X)[p+1] \end{multline} Converting this into (\ref{cosimp applications: main formula}) amounts to the observation that the Bockstein maps for $\Omega^{\DHod}_X$ and ${\mathcal O}_X$ are related by the commutative square \begin{equation} \begin{tikzcd} {\mathcal O}_{X_0}\arrow[r]\arrow[d, "\mathrm{Bock}_{{\mathcal O}_{X}}"] & \mathrm{dR}_{X^{(-1)}_0/k}\arrow[d, "\mathrm{Bock}_{\Omega^{\DHod}_{X}}"] \\ {\mathcal O}_X[1]\arrow[r] & \Omega^{\DHod}_X[1] \end{tikzcd} \end{equation} \end{proof} \subsection{Cohomological consequences.} We can rewrite the answer provided by (\ref{cosimp applications: main formula}) as a formula for the extension class in $H^{p+1}(X,\Lambda^p T_X)$. 
Getting to this statement from Theorem \ref{cosimp applications: HT extension} amounts to a general piece of bookkeeping concerning the Bockstein homomorphisms, given by Lemma \ref{cosimp applications: Bockstein} below. \begin{thm}\label{cosimp applications: the best part} The class of the extension $\bigoplus\limits_{i=0}^{p-1}\Omega^i_{X}[-i]\tilde{o}\tau^{\leq p}\Omega^{\DHod}_X\tilde{o}\Omega^p_{X}[-p]$ in $H^{p+1}(X,\Lambda^pT_{X})$ is equal to \begin{equation}\mathrm{Bock}_X(\ob_{F,X\times_{W(k)}W_2(k)}\cup \alpha(\Omega^1_{X_0})),\end{equation} that is the result of applying the Bockstein homomorphism $\mathrm{Bock}_{X}:H^p(X_0,\Lambda^pT_{X_0})\tilde{o} H^{p+1}(X,\Lambda^p T_{X})$ to the product of classes $\alpha(\Omega^1_{X_0})\in\Ext^{p-1}_{X_0}(\Omega^p_{X_0},F_{X_0}^*\Omega^1_{X_0})=H^{p-1}(X_0,\Lambda^p T_{X_0}\otimes F_{X_0}^*\Omega^1_{X_0})$ and $\ob_{F,X\times_{W(k)}W_2(k)}\in H^1(X_0,F^*T_{X_0})$. \end{thm} \begin{lm}\label{cosimp applications: Bockstein} For two objects $M,N\in D(X)$ denote by $\underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,N)\in D(X)$ their internal $\Hom$ object. The Bockstein morphism \begin{equation}\mathrm{RHom}_{X_0}(i^*M,i^*N)=\mathrm{R}\Gamma(X_0,i^*\underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,N))\tilde{o} \mathrm{R}\Gamma(X,\underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,N)[1])=\mathrm{RHom}_X(M,N[1])\end{equation} can be described as sending $f:i^*M\tilde{o} i^*N$ to the composition \begin{equation} M\tilde{o} i_*i^*M\xrightarrow{i_*f}i_*i^*N\tilde{o} N[1] \end{equation} \end{lm} \begin{proof} For an object $K\in D(X)$ there is a natural fiber sequence $K\tilde{o} i_*i^*K\tilde{o} K[1]$ coming from the identification $i_*i^*K\simeq\cofib(K\xrightarrow{p} K)$. Applying $\mathrm{R}\Gamma(X,-)$ to the second map $i^*i_*K\tilde{o} K[1]$ induces the Bockstein homomorphoism $\mathrm{R}\Gamma(X_0,i^*K)\tilde{o} \mathrm{R}\Gamma(X,K[1])$ on cohomology. The lemma aims to compute this map in the case $K=\underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,N)$. By adjunction, we have $i_*i^*\underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,N)\simeq i_*\underline{\mathrm{RHom}}_{{\mathcal O}_{X_0}}(i^*M,i^*N)\simeq \underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,i_*i^*N)$, and the Bockstein map $i_*i^*\underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,N)\tilde{o} \underline{\mathrm{RHom}}_{{\mathcal O}_X}(M,N)[1]$ is induced by composing with the map $i_*i^*N\tilde{o} N[1]$. This proves the lemma by the observation that the adjunction equivalence $\mathrm{RHom}_{{X_0}}(i^*M,i^*N)\simeq \mathrm{RHom}_{X}(M,i_*i^*N)$ takes a map $f:i^*M\tilde{o} i^*N$ to the composition $M\tilde{o} i_*i^*M\xrightarrow{i_*f}i_*i^*N$. 
\end{proof} Reducing modulo $p$ we also get a description of the extension class of $\tau^{\leq p}\mathrm{dR}_{X_0/k}$: \begin{thm}\label{cosimp applications: the best part de Rham} If $X_0$ is a smooth scheme over $k$ equipped with a smooth lift $X$ over $W(k)$ then the class of the extension $\bigoplus\limits_{i=0}^{p-1}\Omega^i_{X^{(1)}_0}[-i]\tilde{o}\tau^{\leq p}\mathrm{dR}_{X_0/k}\tilde{o}\Omega^p_{X^{(1)}_0}[-p]$ in $H^{p+1}(X^{(1)}_0,\Lambda^pT_{X^{(1)}_0})$ is equal to \begin{equation}\mathrm{Bock}_{X^{(1)}_{W_2(k)}}(\ob_{F,X^{(1)}_{W_2(k)}}\cup \alpha(\Omega^1_{X^{(1)}_0})),\end{equation} the result of applying the Bockstein homomorphism $\mathrm{Bock}_{X^{(1)}_{W_2(k)}}:H^p(X^{(1)}_0,\Lambda^pT_{X^{(1)}_0})\tilde{o} H^{p+1}(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0})$ to the product of classes $\alpha(\Omega^1_{X^{(1)}_0})\in\Ext^{p-1}_{X^{(1)}_0}(\Omega^p_{X^{(1)}_0},F_{X_0^{(1)}}^*\Omega^1_{X^{(1)}_0})=H^{p-1}(X^{(1)}_0,\Lambda^p T_{X^{(1)}_0}\otimes F_{X^{(1)}_0}^*\Omega^1_{X^{(1)}_0})$ and $\ob_{F,X^{(1)}_{W_2(k)}}\in H^1(X^{(1)}_0,F_{X^{(1)}_0}^*T_{X^{(1)}_0})$. \end{thm} For the convenience of applications, let us state explicitly what Theorem \ref{cosimp applications: the best part} tells us about the differentials in the Hodge-Tate spectral sequence \begin{cor} For a smooth $W(k)$-scheme $X$ the Hodge-Tate spectral sequence $E_2^{ij}=H^i(X,\Omega^j_{X/W(k)})\Rightarrow H^{i+j}_{\DHod}(X)$ has no non-zero differentials on pages $E_2,\ldots,E_p$ and for every $i\geq 0$ the differential $d_{p+1}^{i,p}:H^i(X,\Omega^p_{X/W(k)})\tilde{o} H^{i+p+1}(X,{\mathcal O}_X)$ on page $E_{p+1}$ can be described as the composition \begin{multline} H^{i}(X,\Omega^p_{X/W(k)})\tilde{o} H^i(X_0,\Omega^p_{X_0/k})\xrightarrow{\alpha(\Omega^1_{X_0})}H^{i+p-1}(X_0,F_{X_0}^*\Omega^1_{X_0/k})\xrightarrow{\ob_{F,X}}\\ H^{i+p}(X_0,{\mathcal O}_{X_0})\xrightarrow{\mathrm{Bock}_{X}}H^{i+p+1}(X,{\mathcal O}_X). \end{multline} \end{cor} \subsection{Decomposing de Rham complex compatibly with the algebra structure} Another application of the results of Section \ref{free cosimplicial: section} is an obstruction to formality of the de Rham complex as a commutative algebra. In what follows, by the derived commutative algebra $\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]$ we mean the free divided power algebra on the object $\Omega^1_{X^{(1)}_0}[-1]\in D(X^{(1)}_0)$, see Definition \ref{dalg: free divided power algebra def}. \begin{pr}\label{cosimp applications: de rham formality} Let $X_0$ be an arbitrary smooth scheme over $k$. The following are equivalent \begin{enumerate} \item $\mathrm{dR}_{X_0/k}$ is equivalent to $\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]$ as a derived commutative algebra in $D(X^{(1)}_0)$ \item $\mathrm{dR}_{X_0/k}$ is equivalent to $\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]$ as an $E_{\infty}$-algebra in $D(X^{(1)}_0)$ \item there exists a map $\mathrm{dR}_{X_0/k}\tilde{o}{\mathcal O}_{X_0^{(1)}}$ of $E_{\infty}$-algebras in $D(X_0^{(1)})$ that induces an isomorphism $H^0(\mathrm{dR}_{X_0/k})\simeq{\mathcal O}_{X_0^{(1)}}$ \item $X_0$ together with its Frobenius endomorphism admits a lift over $W_2(k)$. \end{enumerate} \end{pr} \begin{proof} Formally, (1) implies (2), and (2) implies (3). Let us first show that (4) implies (1). It is well-known that (4) implies the formality of $\mathrm{dR}_{X_0/k}$ as a commutative DG algebra \cite[Remarque 2.2(ii)]{deligne-illusie}, or as an $E_{\infty}$-algebra \cite[Proposition 3.17]{bhatt-derived}. 
We have not discussed the relation between commutative DG algebras and derived commutative algebras, so let us give an independent argument for the equivalence $\mathrm{dR}_{X_0/k}\simeq\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}$ in $\DAlg(X_0)$, in the presence of a Frobenius lift over $W_2(k)$. For this equivalence, we will represent $\mathrm{dR}_{X_0/k}$ by the \v{C}ech complex associated to the flat cover $X_0^{\mathrm{perf}}\tilde{o} X_0$ where $X_0^{\mathrm{perf}}:=\lim\limits_{\leftarrow}X^{(-n)}_0$ is the perfection of $X_0$. By fpqc descent for derived de Rham cohomology we have that $\mathrm{dR}_{X_0/k}$ is equivalent to the cosimplicial totalization of the following diagram of derived commutative algebras in $D(X_0^{(1)})$, cf. \cite[Remark 8.15]{bms2}: \begin{equation}\label{cosimp applications: de rham cech diagram} \begin{tikzcd}\mathrm{dR}_{X_0^{\mathrm{perf}}/k} \arrow[r, shift left=0.65ex] \arrow[r, shift right=0.65ex] & \mathrm{dR}_{X_0^{\mathrm{perf}}\times_{X_0}X_0^{\mathrm{perf}}/k} \arrow[r, shift left=1.3ex] \arrow[r, shift right=1.3ex] \arrow[r] &\ldots \end{tikzcd} \end{equation} Since each $(X_0^{\mathrm{perf}})^{\times_{X_0}n}$ is a quasiregular semiperfect scheme, all terms of this diagram are classical commutative algebras in $\QCoh(X_0^{(1)})$ placed in degree $0$. The lift $X_1$ of $X_0$ together with a Frobenius endomorphism induces lifts of the scheme $X_0^{\mathrm{perf}}$, the morphism $X_0^{\mathrm{perf}}\tilde{o} X_0$, and the Frobenius endomorphism of $X_0^{\mathrm{perf}}$. Therefore each algebra $\mathrm{dR}_{(X_0^{\mathrm{perf}})^{\times_{X_0}n}/k}$ decomposes as the Frobenius-twist of $\Gamma^{\bullet}(L\Omega^1_{(X_0^{\mathrm{perf}})^{\times_{X_0}n}/k}[-1])$, by \cite[Proposition 3.17]{bhatt-derived}. Moreover all the maps in (\ref{cosimp applications: de rham cech diagram}) are compatible with these decompositions, so the cosimplicial commutative algebra defined by this diagram is quasi-isomorphic to $\Gamma^{\bullet}_{\mathrm{naive}}(\DK(\Omega^1_{X_0^{(1)}/k}[-1]))$. This cosimplicial commutative algebra is a model for the derived commutative algebra $\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]$ by Lemma \ref{dalg: free divided power as cosimp}, so the implication (4)$\Rightarrow$(1) is proven. This is not logically necessary, but we will first give the proof of the implication (1)$\Rightarrow$(4) to illustrate the idea of the implication (3)$\Rightarrow$(4) in a simpler case. Suppose that such an equivalence $\alpha:\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]\simeq \mathrm{dR}_{X_0/k} $ of derived commutative algebras exists. In particular, $\tau^{\leq 1}\mathrm{dR}_{X_0/k}$ is decomposed as ${\mathcal O}_{X^{(1)}_0}\oplus\Omega^1_{X^{(1)}_0/k}[-1]$. By \cite[Th\'eor\`eme 3.5]{deligne-illusie} any such decomposition is induced, up to a homotopy, by a lift of $X_0$ over $W_2(k)$. Denote the lift corresponding to $\tau^{\leq 1}\alpha$ by $X_1$. The Frobenius endomorphism $\varphi_{\bigoplus\limits \Omega^i_{X^{(1)}_0/k}[-i]}:F_{X^{(1)}_0}^*\bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]\tilde{o} \bigoplus\limits_{i\geq 0}\Omega^i_{X^{(1)}_0/k}[-i]$ is zero on all components with $i\geq 1$ and is the usual adjunction map $F_{X^{(1)}_0}^*{\mathcal O}_{X^{(1)}_0}\xrightarrow{\sim}{\mathcal O}_{X^{(1)}_0}$ on the structure sheaf. 
Hence the composition of the pullback under $F_{X^{(1)}_0}$ of the section $\Omega^1_{X^{(1)}_0}[-1]\to \mathrm{dR}_{X_0/k}$ induced by $\alpha$ with the Frobenius map $\varphi_{\mathrm{dR}_{X_0/k}}:F_{X^{(1)}_0}^*\mathrm{dR}_{X_0/k}\to\mathrm{dR}_{X_0/k}$ is homotopic to zero. By Proposition \ref{applications: frobenius lift obstruction prop} and Lemma \ref{applications: cosimplicial frob is frob} this implies that the composition $F^*_{X_0^{(1)}}\Omega^1_{X^{(1)}_0}[-1]\xrightarrow{\ob_{F,X_1}}{\mathcal O}_{X^{(1)}_0}\xrightarrow{}\mathrm{dR}_{X_0/k}$ is homotopic to zero. Since the second map admits a section, it follows that $\ob_{F,X_1}=0$ and $X_1$ is a lift of $X_0$ that admits a lift of Frobenius. Finally, we prove that (3) implies (4). The augmentation map $\varepsilon:\mathrm{dR}_{X_0/k}\to{\mathcal O}_{X_0^{(1)}}$ induces, in particular, a map $s:\Omega^1_{X_0^{(1)}}[-1]\to\mathrm{dR}_{X_0/k}$ in $D(X_0^{(1)})$ such that the composition $\varepsilon\circ s$ is zero. Denote by $X_1$ the lift of $X_0$ corresponding to this section $s$. The map $s:\Omega^1_{X^{(1)}_0}[-1]\to \mathrm{dR}_{X_0/k}$ also induces a map $s_p:S^p(\Omega^1_{X^{(1)}_0}[-1])\to \mathrm{dR}_{X_0/k}$. The assumption that $\varepsilon$ is a map of $E_{\infty}$-algebras implies that the composition \begin{equation}(\Omega^1_{X^{(1)}_0}[-1]^{\otimes p})_{hS_p}\to S^p(\Omega^1_{X^{(1)}_0}[-1])\xrightarrow{s_p} \mathrm{dR}_{X_0/k}\xrightarrow{\varepsilon}{\mathcal O}_{X_0^{(1)}}\end{equation} is homotopic to zero. We will deduce from this that the composition $T_p(\Omega^1_{X^{(1)}_0/k}[-1])[-1]\to S^p\Omega^1_{X^{(1)}_0}[-1]\to \mathrm{dR}_{X_0/k}\xrightarrow{\varepsilon}{\mathcal O}_{X_0^{(1)}}$ is homotopic to zero. Denote by $K$ the fiber of the map $(\Omega^1_{X^{(1)}_0}[-1]^{\otimes p})_{hS_p}\to \Omega^p_{X^{(1)}_0}[-p]$; it is equipped with a map $K\to T_p(\Omega^1_{X^{(1)}_0}[-1])[-1]$, and the object $\mathrm{fib}(K\to T_p(\Omega^1_{X^{(1)}_0}[-1])[-1])\simeq \mathrm{fib}((\Omega^1_{X^{(1)}_0}[-1]^{\otimes p})_{hS_p}\to S^p\Omega^1_{X^{(1)}_0}[-1])$ is concentrated in degrees $\leq 0$ by Lemma \ref{free cosimplicial: cosimp vs einf}. Our assumption implies that the map $T_p(\Omega^1_{X^{(1)}_0}[-1])[-1]\to \mathrm{dR}_{X_0/k}\xrightarrow{\varepsilon}{\mathcal O}_{X_0^{(1)}}$ factors through the map $T_p(\Omega^1_{X^{(1)}_0}[-1])[-1]\to \mathrm{fib}(K[1]\to T_p(\Omega^1_{X^{(1)}_0}[-1]))$, which forces it to be homotopic to zero because this fiber object, being the shift by $[1]$ of $\mathrm{fib}(K\to T_p(\Omega^1_{X^{(1)}_0}[-1])[-1])$, is concentrated in degrees $\leq -1$. In Lemma \ref{free cosimplicial: tate p}(4) we constructed a map $F_{X_0}^*M\to T_p(M)[-1]$ for any $M\in D(X_0)$ such that the composition $F_{X_0}^*M\to T_p(M)[-1]\xrightarrow{\gamma_M}S^p M$ is the map $\Delta_M: F_{X_0}^*M\to S^p M$.
By definition, Frobenius map is the composition $F_{X^{(1)}_0}^*\mathrm{dR}_{X_0/k}\xrightarrow{\Delta_{\mathrm{dR}_{X_0/k}}}S^p\mathrm{dR}_{X_0/k}\xrightarrow{m}\mathrm{dR}_{X_0/k}$ which allows us to conclude that the composition \begin{equation}F_{X_0^{(1)}}^*\Omega^1_{X^{(1)}_0}[-1]\tilde{o} T_p(\Omega^1_{X^{(1)}_0}[-1])[-1]\tilde{o} S^p\Omega^1_{X^{(1)}_0}[-1]\xrightarrow{s_p} \mathrm{dR}_{X_0/k}\end{equation} is homotopic to the composition \begin{equation}\label{applications: augmentation formula}F_{X^{(1)}_0}^*\Omega^1_{X_0^{(1)}}[-1]\xrightarrow{F^*_{X_0^{(1)}}s}F_{X_0^{(1)}}^*\mathrm{dR}_{X_0/k}\xrightarrow{\varphi_{\mathrm{dR}_{X_0/k}}}\mathrm{dR}_{X_0/k}\end{equation} We are given that the composition of (\ref{applications: augmentation formula}) with $\varepsilon:\mathrm{dR}_{X_0/k}\tilde{o}{\mathcal O}_{X_0^{(1)}}$ is homotopic to zero, which is equivalent to the vanishing of the obstruction to lifting $F_{X_0}$ on $X_1$, by Proposition \ref{applications: frobenius lift obstruction prop}. \end{proof} \section{Preliminaries on the Sen operator}\label{sen operator: section} In this section we collect preliminary material on the Sen operator, and more generally on the derived category of sheaves equipped with an endomorphism. We first discuss the Sen operator on diffracted Hodge cohomology of a scheme over ${\mathbb Z}_p$ and relate its non-semisimplicity to non-decomposability of the de Rham complex, and then give a parallel discussion over ${\mathbb Z}/p^n,n\geq 2$ for which we, in particular, describe the Hodge-Tate locus of the Cartier-Witt stack of ${\mathbb Z}/p^n$, generalizing \cite[Example 5.15]{bhatt-lurie-prismatization}. For the purposes of the application to the de Rham complex in Section \ref{semiperf: section}, the case of ${\mathbb Z}/p^n$ subsumes that of ${\mathbb Z}_p$, but we invite the reader to consider the case of schemes over ${\mathbb Z}_p$ first. \subsection{Category of objects with an endomorphism.} Let $X$ be a flat formal scheme over $W(k)$ and as before $D(X)$ is the $\infty$-category of quasicoherent sheaves on $X$. In this section we will work in the $\infty$-category $D_{{\mathbb N}}(X):=\Func(B{\mathbb N},D(X))$ of objects of $D(X)$ equipped with an additional endomorphism. Objects of this category are pairs $(M,f_M)$ where $M$ is an object of $D(X)$ and $f_M:M\tilde{o} M$ is an endomorphism of $M$ in $D(X)$. The morphisms between objects $(M,f_M)$ and $(N, f_N)$ are given by \begin{equation}\label{sen operator: morphisms in DN formula} \mathrm{RHom}_{D_{{\mathbb N}}(X)}((M,f_M),(N,f_N))=\mathrm{fib}(\mathrm{RHom}_{D(X)}(M,N)\xrightarrow{f\mapsto f\circ f_M-f_N\circ f}\mathrm{RHom}_{D(X)}(M,N)) \end{equation} For a scalar $\lambda\in {\mathbb Z}_p$ we have the functor $D(X)\tilde{o} D_{{\mathbb N}}(X)$ sending an object $M\in D(X)$ to itself equipped with the endomorphism $\lambda\cdot \mathrm{Id}_M$. 
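As a simple illustration of the formula (\ref{sen operator: morphisms in DN formula}) (recorded only for orientation), suppose that both endomorphisms are scalars $\lambda,\mu\in{\mathbb Z}_p$. Then the map $f\mapsto f\circ f_M-f_N\circ f$ is multiplication by $\lambda-\mu$, so that
\begin{equation}
\mathrm{RHom}_{D_{{\mathbb N}}(X)}((M,\lambda\cdot\mathrm{Id}_M),(N,\mu\cdot\mathrm{Id}_N))\simeq \mathrm{fib}(\mathrm{RHom}_{D(X)}(M,N)\xrightarrow{\lambda-\mu}\mathrm{RHom}_{D(X)}(M,N))
\end{equation}
In particular, this complex vanishes when $\lambda-\mu\in{\mathbb Z}_p^{\times}$, and is equivalent to $\mathrm{RHom}_{D(X)}(M,N)\oplus\mathrm{RHom}_{D(X)}(M,N)[-1]$ when $\lambda=\mu$. 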
We will often use the following special cases of the formula (\ref{sen operator: morphisms in DN formula}): \begin{lm}\label{sen operator: morphisms in DN} For an arbitrary object $(M, f_M)\in D_{{\mathbb N}}(X)$ the morphisms between it and an object of the form $(N, \lambda\cdot \mathrm{Id}_N)$ can be described as \begin{equation}\label{sen operator: morphism to scalar} \mathrm{RHom}_{D_{{\mathbb N}}(X)}((M, f_M),(N,\lambda\cdot\mathrm{Id}_N))= \mathrm{RHom}_{D(X)}(\cofib(f_M-\lambda\cdot \mathrm{Id}_M:M\tilde{o} M),N) \end{equation} \begin{equation}\label{sen operator: morphism from scalar} \mathrm{RHom}_{D_{{\mathbb N}}(X)}((N,\lambda\cdot\mathrm{Id}_N),(M,f_M))=\mathrm{RHom}_{D(X)}(N,M^{f_M=\lambda}) \end{equation} \end{lm} \subsection{Diffracted Hodge cohomology and Sen operator over ${\mathbb Z}_p$.} The works of Drinfeld \cite{drinfeld} and Bhatt-Lurie imply that the diffracted Hodge cohomology is equipped with a natural endomorphism, referred to by \cite{apc} as `Sen operator'. We work with arbitrary (not necessarily smooth over $W(k)$) flat formal schemes because our proof will proceed through a computation with diffracted Hodge cohomology of quasiregular semiperfectoid rings. \begin{thm}[\hspace{1sp}{\cite[Construction 4.7.1]{apc}}]\label{sen operator: general theorem} For a bounded $p$-adic formal scheme $X$ there is a natural object $(\Omega^{\DHod}_X,\Theta_X)\in D_{{\mathbb N}}(X)$ whose underlying object is the diffracted Hodge complex, equipped with a filtration $\mathrm{Fil}^{\mathrm{conj}}_{\bullet}$ such that the graded quotients $\gr_i^{\mathrm{conj}}$ of this filtration are equivalent to $(L\Omega^i_X[-i],-i)$. \end{thm} From now on we assume that $X$ is a flat formal ${\mathbb Z}_p$-scheme. When $X$ is formally smooth over $W(k)$ for some perfect field $k$, the conjugate filtration on $\Omega^{\DHod}_X$ coincides with the canonical filtration, and $\Theta_X$ acts on $H^i(\Omega^{\DHod}_X)\simeq\Omega^i_X$ by $(-i)\cdot\mathrm{Id}_{\Omega^i_X}$. The interesting information contained in this new cohomological invariant of $X$ is the data of extensions between the graded quotients of the conjugate filtration. One of our main goals, achieved in Theorem \ref{semiperf: main sen operator}, is to explicate what this information is for the smallest potentially non-split step $(\mathrm{Fil}_{\mathrm{conj}}^p\Omega^{\DHod}_X,\Theta_X)$ of the conjugate filtration. Given that $\Theta_X$ acts via multiplication by $-i$ on $\gr_i^{\mathrm{conj}}\Omega^{\DHod}_X$, the product $\Theta_X(\Theta_X+1)\ldots (\Theta_X+i)$ is naturally homotopic to zero as an endomorphism of $\mathrm{Fil}_{i}^{\mathrm{conj}}\Omega^{\DHod}_X$. Therefore for each $i$ the object $\mathrm{Fil}_i^{\mathrm{conj}}\Omega^{\DHod}_X\in D_{{\mathbb N}}(X)$ is naturally equipped with the structure of a ${\mathbb Z}_p[t]/t(t+1)\ldots(t+i)$-module, and the decomposition of the spectrum of this ring into a union of connected components induces a decomposition \begin{equation} (\Omega^{\DHod}_X,\Theta_X)\simeq \bigoplus\limits_{i=0}^{p-1}(\Omega^{\DHod}_{X,i},\Theta_X) \end{equation} such that for every $n\in {\mathbb Z}_p$ the endomorphism $\Theta_X+n$ of $\mathrm{Fil}_i\Omega^{\DHod}_{X,n\bmod p}$ is topologically nilpotent for every $i$, cf. \cite[Remark 4.7.20]{apc}. 
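For example (this is only an illustration of the eigenspace decomposition), if $X$ is a smooth formal ${\mathbb Z}_p$-scheme of relative dimension $d<p$, then the conjugate filtration is exhausted at the $d$th step and the weights $0,-1,\ldots,-d$ are pairwise distinct modulo $p$, so each summand $\Omega^{\DHod}_{X,i}$ contains at most one graded quotient of the conjugate filtration. The decomposition above then splits the conjugate filtration completely:
\begin{equation}
(\Omega^{\DHod}_X,\Theta_X)\simeq\bigoplus\limits_{i=0}^{d}(\Omega^i_X[-i],-i)
\end{equation}
In particular, for smooth formal ${\mathbb Z}_p$-schemes the Sen operator can fail to be semisimple only in relative dimension $\geq p$. 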
Restricting to the special fiber $X_0:=X\times_{{\mathbb Z}_p}{\mathbb F}_p\xhookrightarrow{i} X$ we similarly get an endomorphism $\Theta_X$ of $\mathrm{dR}_{X_0}$, and a decomposition \begin{equation} (\mathrm{dR}_{X_0},\Theta_X)\simeq \bigoplus\limits_{i=0}^{p-1}(\mathrm{dR}_{X_0,i},\Theta_X) \end{equation} We will now set up a way to package the information about the extensions in the conjugate filtration on $(\mathrm{Fil}_p^{\mathrm{conj}}\Omega^{\DHod}_X,\Theta_X)$. There is a decomposition $(\mathrm{Fil}^{\mathrm{conj}}_{p-1}\Omega^{\DHod}_X,\Theta_X)\simeq \bigoplus\limits_{i=0}^{p-1}(L\Omega^i_X[-i],-i)$ and the fiber sequence $(\mathrm{Fil}^{\mathrm{conj}}_{p-1}\Omega^{\DHod}_X,\Theta_X)\tilde{o} (\mathrm{Fil}_p^{\mathrm{conj}}\Omega^{\DHod}_X,\Theta_X)\tilde{o} (L\Omega^p_X[-p],-p)$ gives rise to the map in $D_{{\mathbb N}}(X)$: \begin{equation}\label{sen operator: extension map}(L\Omega^p_X[-p],-p)\tilde{o} \bigoplus\limits_{i=0}^{p-1}(L\Omega^i_X[-i],-i)[1]\end{equation} In the remainder of this section by $L\Omega^i_{X_0}$ we will mean the $i$th exterior power of the cotangent complex of $X_0$ relative to ${\mathbb F}_p$, so that $L\Omega^i_{X_0}\simeq i^*L\Omega^i_X$. \begin{lm}\label{sen operator: ext group description} There is a natural equivalence \begin{equation}\label{sen operator: ext group description formula} \Map_{D_{{\mathbb N}}(X)}((L\Omega^p_X[-p],-p), \bigoplus\limits_{i=0}^{p-1}(L\Omega^i_X[-i],-i)[1])\simeq \Map_{D(X_0)}(L\Omega^p_{X_0}, {\mathcal O}_{X_0}[p]) \end{equation} Under this identification the map from the LHS to $\Map_{D(X)}(L\Omega^p_X[-p],\bigoplus\limits_{i=0}^{p-1}L\Omega^i_X[-i])$ induced by the forgetful functor $D_{{\mathbb N}}(X)\tilde{o} D(X)$ sends $f\in\Map_{D(X_0)}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p])$ to the composition $L\Omega^p_X\tilde{o} L\Omega^p_{X_0}\xrightarrow{f}{\mathcal O}_{X_0}[p]\xrightarrow{\mathrm{Bock}_{{\mathcal O}_X}[p]}{\mathcal O}_X[p+1]$. \end{lm} \begin{proof} By Lemma \ref{sen operator: morphisms in DN} the left hand side of (\ref{sen operator: ext group description formula}) is equivalent to $\Map_{D(X)}(L\Omega^p_X,{\mathcal O}_X^{p=0}[p+1])$, because the summands $(L\Omega^i_X[-i],-i)$ with $0<i\leq p-1$ do not contribute to this mapping space. The fiber of multiplication by $p$ on ${\mathcal O}_X$ is identified with ${\mathcal O}_X/p[-1]=i_*i^*{\mathcal O}_X[-1]$ hence this mapping space can be further described as $\Map_{D(X)}(L\Omega^p_X,i_*i^*{\mathcal O}_X[p])\simeq \Map_{D(X_0)}(L\Omega^p_{X_0}, {\mathcal O}_{X_0}[p])$. For the second assertion note that, firstly, under the identification $\Map_{D_{{\mathbb N}}(X)}((L\Omega^p_X,-p),({\mathcal O}_X[p],0))\simeq \Map_{D(X)}(L\Omega^p_X,{\mathcal O}_X^{p=0}[p+1])$ the map induced by the forgetful functor $D_{{\mathbb N}}(X)\tilde{o} D(X)$ is given by composing with the natural map ${\mathcal O}_X^{p=0}\tilde{o}{\mathcal O}_X$ which is nothing but the Bockstein map $\mathrm{Bock}_{{\mathcal O}_X}[-1]:i_*{\mathcal O}_{X_0}[-1]\tilde{o}{\mathcal O}_X$. Secondly, the adjunction identification $\Map_{D(X_0)}(L\Omega^p_{X_0}, {\mathcal O}_{X_0}[p])\simeq \Map_{D(X)}(L\Omega^p_X,i_*i^*{\mathcal O}_X[p])$ sends $f\in \Map_{D(X_0)}(L\Omega^p_{X_0}, {\mathcal O}_{X_0}[p])$ to $L\Omega^p_X\tilde{o} i_*L\Omega^p_{X_0/{\mathbb F}_p}\xrightarrow{i_*f}i_*{\mathcal O}_{X_0}[p]$, which implies the claim. 
\end{proof} \begin{notation}\label{sen operator: zp classes notation} We denote by $c_{X,p}\in \Map_{D(X_0)}(L\Omega^p_{X_0}, {\mathcal O}_{X_0}[p])$ the result of transporting the map (\ref{sen operator: extension map}) along the equivalence (\ref{sen operator: ext group description formula}), and by $e_{X,p}\in\Map_{D(X)}(L\Omega^p_X,{\mathcal O}_X[p+1])$ the extension class in the conjugate filtration on $\mathrm{Fil}_p^{\mathrm{conj}}\Omega^{\DHod}_{X,0}$. \end{notation} Let us explicitly record how $e_{X,p}$ can be recovered from $c_{X,p}$, which is immediate from Lemma \ref{sen operator: ext group description}: \begin{lm}\label{sen operator: classes bockstein relation w} The element $e_{X,p}\in\Map_{D(X)}(L\Omega^p_X,{\mathcal O}_X[p+1])$ corresponding to the extension ${\mathcal O}_X\tilde{o}\mathrm{Fil}_p\Omega^{\DHod}_{X,0}\tilde{o} L\Omega^p_X[-p]$ is naturally homotopic to the composition \begin{equation} L\Omega^p_X\tilde{o} L\Omega^p_{X_0}\xrightarrow{c_{X,p}}{\mathcal O}_{X_0}[p]\xrightarrow{\mathrm{Bock}_{{\mathcal O}_X}}{\mathcal O}_X[p+1] \end{equation} \end{lm} Lemma \ref{cosimp applications: Bockstein} can be used to restate Lemma \ref{sen operator: classes bockstein relation w} as follows: \begin{cor}\label{sen operator: smooth classes w lift} The map $e_{X,p}$ is naturally homotopic to the image of $c_{X,p}$ under the Bockstein morphism \begin{equation} \mathrm{Bock}: \mathrm{RHom}_{X_0}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p])\tilde{o} \mathrm{RHom}_{X}(L\Omega^p_{X},{\mathcal O}_X[p+1]) \end{equation} arising from the object $\mathrm{RHom}_{X}(L\Omega^p_{X},{\mathcal O}_X[p+1])\in D({\mathbb Z}_p)$. \end{cor} The map $c_{X,p}$ can also be read off from the action of the Sen operator on $\mathrm{dR}_{X_0}$, as we will now show. In general, if $(M_1,0)\tilde{o} (M,f_M)\tilde{o} (M_2,0)$ is a fiber sequence in $D_{{\mathbb N}}(X_0)$ then $f_M:M\tilde{o} M$ naturally factors as $M\tilde{o} M_2\xrightarrow{h}M_1\tilde{o} M$ for a morphism $h\in\Map_{D(X_0)}(M_2,M_1)$. The element $\delta\in \Map_{D_{{\mathbb N}}(X_0)}((M_2,0), (M_1,0)[1])=\Map_{D(X_0)}(M_2,M_1[1])\oplus \Map_{D(X_0)}(M_2,M_1)$ corresponding to this fiber sequence is then given by $\underline{\delta}\oplus h$ where $\underline{\delta}$ is the image of $\delta$ under the forgetful map $D_{{\mathbb N}}(X_0)\tilde{o} D(X_0)$. Applying this discussion to $\mathrm{Fil}_p^{\mathrm{conj}}\mathrm{dR}_{X_0}$ we obtain: \begin{lm}\label{sen operator: cXp as nilpotent operator} The endomorphism $\Theta_X$ of $\mathrm{Fil}_p\mathrm{dR}_{X_0,0}$ naturally factors as \begin{equation} \mathrm{Fil}_p\mathrm{dR}_{X_0,0}\tilde{o} L\Omega^p_{X_0}[-p]\xrightarrow{c_{X,p}[-p]}{\mathcal O}_{X_0}=\mathrm{Fil}_0^{\mathrm{conj}}\mathrm{dR}_{X_0,0}\tilde{o}\mathrm{Fil}_p^{\mathrm{conj}}\mathrm{dR}_{X_0,0} \end{equation} \end{lm} \begin{proof} Given the paragraph preceding the statement of the lemma, this amounts to observing that the map $\mathrm{RHom}_{D_{{\mathbb N}}(X)}((L\Omega^p_X[-p],-p),({\mathcal O}_X,0))\tilde{o} \mathrm{RHom}_{D_{{\mathbb N}}(X_0)}((L\Omega^p_{X_0}[-p],-p),({\mathcal O}_{X_0},0))$ induced by the restriction to the special fiber $X_0$ can be described as $\mathrm{RHom}_{D(X_0)}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p])\xrightarrow{(\mathrm{Bock},\id)}\mathrm{RHom}_{D(X_0)}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p+1])\oplus \mathrm{RHom}_{D(X_0)}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p])$ under the identification (\ref{sen operator: ext group description formula}). 
\end{proof} For our computation of the Sen operator in Theorem \ref{semiperf: main sen operator} it will be convenient to directly relate $c_{X,p}$ to the action of $\Theta_X$ on all of the object $\mathrm{Fil}_p\mathrm{dR}_{X_0}$, rather than its weight zero part: \begin{lm}\label{sen operator: cXp as nilpotent operator on dR} The endomorphism $\Theta_X-\Theta_X^p$ of $\mathrm{Fil}_p\mathrm{dR}_{X_0}$ naturally factors as \begin{equation} \mathrm{Fil}_p\mathrm{dR}_{X_0}\tilde{o} L\Omega^p_{X_0}[-p]\xrightarrow{c_{X,p}[-p]}{\mathcal O}_{X_0}=\mathrm{Fil}_0\mathrm{dR}_{X_0}\tilde{o} \mathrm{Fil}_p\mathrm{dR}_{X_0} \end{equation} \end{lm} \begin{proof} The endomorphism $\mathrm{Id}-\Theta_X^{p-1}: \mathrm{Fil}_p\mathrm{dR}_{X_0}\tilde{o} \mathrm{Fil}_p\mathrm{dR}_{X_0}$ is an idempotent corresponding to the direct summand $\mathrm{Fil}_p\mathrm{dR}_{X_0,0}$ of $\mathrm{Fil}_p\mathrm{dR}_{X_0}$. Therefore $\Theta_X-\Theta_X^p=\Theta_X\cdot (\mathrm{Id}-\Theta_X^{p-1})$ can be factored as $\mathrm{Fil}_p\mathrm{dR}_{X_0}\tilde{o}\mathrm{Fil}_p\mathrm{dR}_{X_0,0}\xrightarrow{\Theta_X}\mathrm{Fil}_p\mathrm{dR}_{X_0,0}\tilde{o} \mathrm{Fil}_p\mathrm{dR}_{X_0}$ where the first and last maps establish $\mathrm{Fil}_p\mathrm{dR}_{X_0,0}$ as a direct summand of $\mathrm{Fil}_p\mathrm{dR}_{X_0}$. Hence the claim follows from Lemma \ref{sen operator: cXp as nilpotent operator}. \end{proof} Since $\tau^{\leq p}\mathrm{dR}_{X_0}$ decomposes as $\bigoplus\limits_{i=0}^{p-1}\tau^{\leq p}\mathrm{dR}_{X_0,i}\simeq \tau^{\leq p}\mathrm{dR}_{X_0,0}\oplus\bigoplus\limits_{i=0}^{p-1}L\Omega^i_X[-i]$ compatibly with the Sen operator, $\Theta_X$ on $\tau^{\leq p}\mathrm{dR}_{X_0}$ is semi-simple if and only if $c_{X,p}\sim 0$. In particular, if $c_{X,p}$ vanishes then the conjugate filtration on the diffracted Hodge complex splits in degrees $\leq p$: \begin{cor}\label{sen operator: semisimplicity implies decomposability} If the Sen operator on $\mathrm{Fil}_{\mathrm{conj}}^p\mathrm{dR}_{X_0}$ is semi-simple then there exists a decomposition \begin{equation} (\mathrm{Fil}_{\mathrm{conj}}^p\Omega^{\DHod}_X,\Theta_X)\simeq\bigoplus\limits_{i=0}^p(L\Omega^i_X[-i],-i) \end{equation} \end{cor} \subsection{Diffracted Hodge cohomology and Sen operator over \texorpdfstring{${\mathbb Z}/p^n,n\geq 2$}{}.} In this subsection we give a discussion parallel to the above for a flat scheme $X_{n-1}$ over ${\mathbb Z}/p^n$ for $n\geq 2$. We will define the diffracted Hodge complex $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$ of $X_{n-1}$ relative to ${\mathbb Z}/p^n$, equipped with a Sen operator with properties analogous to Theorem \ref{sen operator: general theorem}. \begin{thm}\label{sen operator: zpn main} For a scheme $X_{n-1}$ quasisyntomic over ${\mathbb Z}/p^n$ there is a natural object $(\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n},\Theta_{X_{n-1}})\in D_{{\mathbb N}}(X_{n-1})$ equipped with a filtration $\mathrm{Fil}^{\mathrm{conj}}_{\bullet}$ with graded quotients equivalent to $(L\Omega^i_{X_{n-1}/{\mathbb Z}/p^n}[-i],-i)$. The object $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}\in D(X_{n-1})$ has a natural structure of a filtered derived commutative algebra such that $\Theta_{X_{n-1}}$ is a derivation. The base change $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}\otimes_{{\mathcal O}_{X_{n-1}}}{\mathcal O}_{X_0}$ is identified with $\mathrm{dR}_{X_0}$, and the base change of the conjugate filtration matches the conjugate filtration on $\mathrm{dR}_{X_0}$. 
If $X$ is a quasisyntomic formal scheme over ${\mathbb Z}_p$ then for $X_{n-1}:=X\times_{{\mathbb Z}_p}{\mathbb Z}_p/p^n$ we have $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}\simeq \Omega^{\DHod}_{X}\otimes_{{\mathcal O}_X}{\mathcal O}_{X_{n-1}}$ compatibly with the conjugate filtrations and the Sen operators. \end{thm} Given a quasisyntomic ${\mathbb Z}/p^2$-scheme $X_1$ with the special fiber $X_0=X_1\times_{{\mathbb Z}/p^2}{\mathbb F}_p\xhookrightarrow{i}X_1$, the de Rham complex $\mathrm{dR}_{X_0}\simeq i^*\Omega^{\DHod}_{X_1/{\mathbb Z}/p^2}$ therefore gets equipped with a Sen operator $\Theta_{X_1}$. Generalized eigenspaces for $\Theta_{X_1}$ give a decomposition \begin{equation} \mathrm{dR}_{X_0}\simeq\bigoplus\limits_{i=0}^{p-1}\mathrm{dR}_{X_0,i} \end{equation} The object $\mathrm{Fil}^p_{\mathrm{conj}}\mathrm{dR}_{X_0,0}$ fits into a fiber sequence \begin{equation}\label{sen operator: filp0 mod p2 formula}{\mathcal O}_{X_0}\tilde{o} \mathrm{Fil}^p_{\mathrm{conj}}\mathrm{dR}_{X_0,0}\tilde{o} L\Omega^p_{X_0}[-p]\end{equation} on which $\Theta_{X_1}$ naturally acts, inducing zero on the first and third terms. \begin{notation}\label{sen operator: zp2 classes notation} We denote by $e_{X_1,p}:L\Omega^p_{X_0}\tilde{o}{\mathcal O}_{X_0}[p+1]$ the connecting map corresponding to the fiber sequence (\ref{sen operator: filp0 mod p2 formula}), and by $c_{X_1,p}:L\Omega^p_{X_0}\tilde{o} {\mathcal O}_{X_0}[p]$ the map induced by the nilpotent operator $\Theta_{X_1}$ on $\mathrm{Fil}_p^{\mathrm{conj}}\mathrm{dR}_{X_0,0}$. \end{notation} This is consistent with Notation \ref{sen operator: zp classes notation} in the sense that for a quasisyntomic formal ${\mathbb Z}_p$-scheme $X$ with the mod $p^2$ reduction $X_1=X\times_{{\mathbb Z}_p}{\mathbb Z}/p^2$ the map $c_{X_1,p}$ is naturally homotopic to $c_{X,p}$, and $e_{X_1,p}$ is the mod $p$ reduction of $e_{X,p}$. We can also consider the endomorphism $\Theta_{X_1}-\Theta_{X_1}^p$ of $\mathrm{Fil}_p^{\mathrm{conj}}\mathrm{dR}_{X_0}$ which acts by zero on $\mathrm{Fil}_{p-1}^{\mathrm{conj}}$ and hence naturally induces a map $\Theta_{X_1}-\Theta_{X_1}^p:L\Omega^p_{X_0}[-p]\tilde{o}\mathrm{Fil}_{p-1}^{\mathrm{conj}}\mathrm{dR}_{X_0}$. As in the previous discussion in the presence of a lift of $X_0$ over $W(k)$, this map, $c_{X_1,p}$, and $e_{X_1,p}$ are related as follows: \begin{lm}\label{sen operator: classes over zp2} There is a natural homotopy \begin{equation}e_{X_1,p}\sim\mathrm{Bock}_{\mathrm{RHom}_{D(X_1)}(L\Omega^p_{X_1},{\mathcal O}_{X_1}[p])}(c_{X_1,p})\end{equation} where \begin{equation}\mathrm{Bock}_{\mathrm{RHom}_{D(X_1)}(L\Omega^p_{X_1},{\mathcal O}_{X_1}[p])}:\mathrm{RHom}_{D(X_0)}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p])\tilde{o} \mathrm{RHom}_{D(X_0)}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p+1])\end{equation} is the Bockstein homomorphism induced by $\mathrm{RHom}_{D(X_1)}(L\Omega^p_{X_1},{\mathcal O}_{X_1}[p])\in D(W_2(k))$. The map $\Theta_{X_1}-\Theta_{X_1}^p:L\Omega^p_{X_0}[-p]\tilde{o} \mathrm{Fil}_{p-1}^{\mathrm{conj}}\mathrm{dR}_{X_0}$ is naturally homotopic to the composition $L\Omega^p_{X_0}[-p]\xrightarrow{c_{X_1,p}}{\mathcal O}_{X_0}\tilde{o}\mathrm{Fil}_{p-1}^{\mathrm{conj}}\mathrm{dR}_{X_0}$ \end{lm} We will now define diffracted Hodge cohomology relative to ${\mathbb Z}/p^n$, proving Theorem \ref{sen operator: zpn main}, and will then prove Lemma \ref{sen operator: classes over zp2}. 
As in \cite{apc}, the construction of diffracted Hodge cohomology over ${\mathbb Z}/p^n$ together with its Sen operator arises from the computation of the Cartier-Witt stack of the base ring ${\mathbb Z}/p^n$. I learned the following fact from Sanath Devalapurkar: \begin{lm}\label{sen operator: wcartht zpn} There is an isomorphism of stacks $\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^n}\simeq {\mathbb G}_{a,{\mathbb Z}/p^n}^{\#}/{\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}$ where the quotient is taken with respect to the scaling action. Moreover, the natural map $\WCart_{{\mathbb Z}/p^n}^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}\tilde{o}\WCart_{{\mathbb Z}_p}^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}$ is identified with ${\mathbb G}_{a,{\mathbb Z}/p^n}^{\#}/{\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}\tilde{o} B{\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}\tilde{o} B{\mathbb G}^{\#}_{m,{\mathbb Z}_p}$. \end{lm} \begin{proof} We denote by ${\mathbb G}_a^{\#}$ the divided power envelope of $0$ in ${\mathbb G}_a$, and by ${\mathbb G}_m^{\#}$ the divided power envelope of $1$ in ${\mathbb G}_m$, both viewed as group schemes over ${\mathbb Z}_p$. In this proof we will repeatedly use the identifications of group schemes ${\mathbb G}_m^{\#}\simeq W^{\times}[F]:=W^{F=1}$ and ${\mathbb G}_a^{\#}\simeq W[F]:=W^{F=0}$ proven in \cite[Lemma 3.4.11, Variant 3.4.12]{apc} and \cite[Lemma 3.2.6]{drinfeld}, together with the fact that the multiplication action of $W^{\times }[F]$ on $W[F]$ corresponds to the usual scaling action of ${\mathbb G}_m^{\#}$ on ${\mathbb G}_a^{\#}$ under these identifications. By \cite[Construction 3.8]{bhatt-lurie-prismatization} the stack $\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^n}$ is the quotient of $(\Spec {\mathbb Z}/p^n)^{\DHod}$ by ${\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}$, where for a test algebra $S$ in which $p$ is nilpotent we have $(\Spec{\mathbb Z}/p^n)^{\DHod}(S)=\Map({\mathbb Z}/p^n,W(S)/V(1))$, with the mapping space taken in the category of animated commutative rings. As in \cite[Example 5.15]{bhatt-lurie-prismatization} we can rewrite this more explicitly as $(\Spec{\mathbb Z}/p^n)^{\DHod}(S)\simeq \{x\in W(S)| V(1)x=p^n\}$ with the ${\mathbb G}_m^{\#}(S)=W(S)^{F=1}$-action given by multiplication on $x$. The key computation that will allow us to identify this set with ${\mathbb G}_a^{\#}(S)$ in a ${\mathbb G}_m^{\#}(S)$-equivariant fashion is the following: \begin{lm} We have $p^n=V(p^{n-1})$ in $W({\mathbb Z}/p^n)$. \end{lm} \begin{proof} Recall that for any ring $R$ there is the ghost map $W(R)\tilde{o} R^{{\mathbb N}}$ sending a Witt vector $[x_0]+V[x_1]+V^2[x_2]+\ldots$ to $(x_0,x_0^p+px_1,x_0^{p^2}+px_1^p+p^2x_2,\ldots)$. This is a map of rings and it is injective if $R$ is $p$-torsion-free. For an element $a\in W(R)$ with ghost coordinates $(a_0,a_1,\ldots)$ the ghost coordinates of $F(a)$ are given by $(a_1,a_2,\ldots)$, and the ghost coordinates of $V(a)$ are $(0, pa_0,pa_1,\ldots)$. Therefore the ghost coordinates of $V(p^{n-1})\in W({\mathbb Z}_p)$ are $(0,p^n,p^n,\ldots)$ and the ghost coordinates of $p^n-V(p^{n-1})$ are equal to $(p^n,0,0,\ldots)$. The isomorphism ${\mathbb G}_a^{\#}\simeq W[F]$ composed with the ghost map sends $r\in {\mathbb G}_a^{\#}$ to $(r,0,0,\ldots)$, hence $p^n-V(p^{n-1})\in W({\mathbb Z}_p)$ is the image of $p^n\in {\mathbb G}_a^{\#}({\mathbb Z}_p)$ under the natural map ${\mathbb G}_a^{\#}\simeq W[F]\subset W$. 
Since $p^n$ is annihilated by the map ${\mathbb G}_a^{\#}({\mathbb Z}_p)\tilde{o} {\mathbb G}_a^{\#}({\mathbb Z}/p^n)$ induced by ${\mathbb Z}_p\twoheadrightarrow {\mathbb Z}/p^n$, the element $p^n-V(p^{n-1})$ is zero in $W({\mathbb Z}/p^n)$, as desired. \end{proof} We now define an isomorphism ${\mathbb G}_a^{\#}\simeq (\Spec {\mathbb Z}/p^n)^{\DHod}$ by sending $y\in {\mathbb G}_a^{\#}(S)\simeq W(S)^{F=0}$ to $y+V(p^{n-2})\in (\Spec {\mathbb Z}/p^n)^{\DHod}(S)$. The element $y+V(p^{n-2})$ indeed satisfies the equation $V(1)(y+V(p^{n-2}))=p^n$ because \begin{equation}V(1)\cdot (y+V(p^{n-2}))=VF(y+V(p^{n-2}))=VFV(p^{n-2})=V(p^{n-1})=p^n\end{equation} This isomorphism intertwines the ${\mathbb G}_m^{\#}$-action on $(\Spec {\mathbb Z}/p^n)^{\DHod}$ with the usual scaling action on ${\mathbb G}_a^{\#}$ because for $a\in {\mathbb G}_m^{\#}(S)=W(S)^{F=1}$ we have $a\cdot(y+V(p^{n-2}))=a\cdot y+V(F(a)p^{n-2})=a\cdot y+V(p^{n-2})$. \end{proof} We can now define diffracted Hodge cohomology relative to ${\mathbb Z}/p^n$. Given a flat scheme $X_{n-1}$ over ${\mathbb Z}/p^n$, the stack $\WCart_{X_{n-1}}^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}$ lives over $\WCart_{{\mathbb Z}/p^n}^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}$ and we define $(X_{n-1}/{\mathbb Z}/p^n)^{\DHod}$ as the fiber product \begin{equation} \begin{tikzcd} (X_{n-1}/{\mathbb Z}/p^n)^{\DHod}\arrow[r]\arrow[d] & \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{X_{n-1}} \arrow[d] \\ \Spec({\mathbb Z}/p^n)\arrow[r,"\eta"] & \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^n} \end{tikzcd} \end{equation} where $\eta:\Spec({\mathbb Z}/p^n)\tilde{o} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^n}\simeq {\mathbb G}_{a,{\mathbb Z}/p^n}^{\#}/{\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}$ is the composition $\Spec({\mathbb Z}/p^n)\xrightarrow{0}{\mathbb G}_{a,{\mathbb Z}/p^n}^{\#}\tilde{o} {\mathbb G}_{a,{\mathbb Z}/p^n}^{\#}/{\mathbb G}^{\#}_{m,{\mathbb Z}/p^n}$. The stack $(X_{n-1}/{\mathbb Z}/p^n)^{\DHod}$ is equipped with a map $\pi^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}:(X_{n-1}/{\mathbb Z}/p^n)^{\DHod}\tilde{o} X_{n-1}$ obtained as the composition $(X_{n-1}/{\mathbb Z}/p^n)^{\DHod}\tilde{o} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{X_{n-1}}\tilde{o} X_{n-1}$. Define the diffracted Hodge cohomology of $X_{n-1}$ relative to ${\mathbb Z}/p^n$ as the pushforward of the structure sheaf along the map $\pi^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$: \begin{equation}\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}:=R\pi^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n *}{\mathcal O}_{(X_{n-1}/{\mathbb Z}/p^n)^{\DHod}}\in D(X_{n-1})\end{equation} We will now prove that this object shares the basic properties of diffracted Hodge cohomology of ${\mathbb Z}_p$-schemes: \begin{proof}[Proof of Theorem \ref{sen operator: zpn main}] Since, by construction, the stack $(X_{n-1}/{\mathbb Z}/p^n)^{\DHod}$ is equipped with an action of ${\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}$ such that the map $\pi^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}:(X_{n-1}/{\mathbb Z}/p^n)^{\DHod}\tilde{o} X_{n-1}$ is ${\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}$-equivariant for the trivial action of ${\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}$ on the target, the object $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}\in D(X_{n-1})$ is naturally equipped with a ${\mathbb G}_{m,{\mathbb Z}/p^n}^{\#}$-action. As in \cite[Theorem 3.5.8]{apc}, this gives rise to an endomorphism $\Theta_{X_{n-1}}$ of $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$. 
We endow $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$ with the structure of a derived commutative algebra via Lemma \ref{dalg: pushforward of rings}, and $\Theta_{X_{n-1}}$ is seen to be a derivation (Definition \ref{dalg: derivation}) by following \cite[Construction 3.5.4]{apc}. We will now construct the conjugate filtration on $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$ and identify its graded quotients. To do this we will compare $\Omega_{X_{n-1}/{\mathbb Z}/p^n}^{\DHod}$ with the `absolute' diffracted Hodge cohomology $\Omega_{X_{n-1}}^{\DHod}$. By \cite[Construction 4.7.1]{apc} the object $\Omega^{\DHod}_{X_{n-1}}$ is equipped with a filtration $\mathrm{Fil}_i^{\mathrm{conj}}\Omega^{\DHod}_{X_{n-1}}$ with graded quotients $\gr_i^{\mathrm{conj}}\simeq L\Omega^i_{X_{n-1}/{\mathbb Z}_p}[-i]$. In the case $X_{n-1}=\Spec({\mathbb Z}/p^n)$ we have an equivalence $\Omega^{\DHod}_{{\mathbb Z}/p^n}\simeq{\mathbb Z}/p^n[{\mathbb G}_{a,{\mathbb Z}/p^n}^{\#}]$ of derived commutative ${\mathbb Z}/p^n$-algebras by Lemma \ref{sen operator: wcartht zpn}. By base change for the diagram \begin{equation} \begin{tikzcd} (X_{n-1}/{\mathbb Z}/p^n)^{\DHod}\arrow[r]\arrow[d] & (X_{n-1})^{\DHod} \arrow[d] \\ X_{n-1}\arrow[r,"\eta"] & (\Spec {\mathbb Z}/p^n)^{\DHod}\times_{{\mathbb Z}/p^n}X_{n-1} \end{tikzcd} \end{equation} we have \begin{equation}\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}\simeq {\mathbb Z}/p^n\otimes_{{\mathbb Z}/p^n[{\mathbb G}_a^{\#}]}\Omega^{\DHod}_{X_{n-1}}.\end{equation} The ordinary commutative algebra ${\mathbb Z}/p^n[{\mathbb G}_a^{\#}]$ is endowed with a conjugate filtration via the identification ${\mathbb Z}/p^n[{\mathbb G}_a^{\#}]\simeq \Omega^{\DHod}_{{\mathbb Z}/p^n}$, and the associated graded algebra is the free divided power algebra on the ${\mathbb Z}/p^n$-module $L\Omega^1_{{\mathbb Z}/p^n/{\mathbb Z}_p}[-1]\simeq {\mathbb Z}/p^n$. Moreover, the conjugate filtration on $\Omega^{\DHod}_{X_{n-1}}$ makes it into an object of the derived category of filtered ${\mathbb Z}/p^n[{\mathbb G}_a^{\#}]$-modules in $D(X_{n-1})$. Equipping ${\mathbb Z}/p^n$ with the trivial filtration $\mathrm{Fil}_0^{\mathrm{conj}}{\mathbb Z}/p^n={\mathbb Z}/p^n,\mathrm{Fil}_{-1}^{\mathrm{conj}}{\mathbb Z}/p^n=0$, we get an induced tensor product filtration on ${\mathbb Z}/p^n\otimes_{{\mathbb Z}/p^n[{\mathbb G}_a^{\#}]}\Omega^{\DHod}_{X_{n-1}}$, and hence on the complex $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$. We will first check that the filtered complex $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}\otimes_{{\mathcal O}_{X_{n-1}}}{\mathcal O}_{X_0}$ is equivalent to $\mathrm{dR}_{X_0}$, and that there is an equivalence of filtered complexes $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}\simeq \Omega^{\DHod}_{X}\otimes_{{\mathcal O}_X}{\mathcal O}_{X_{n-1}}$ compatible with the Sen operators if $X$ is a flat formal ${\mathbb Z}_p$-scheme lifting $X_{n-1}$. Since we assume that $X_{n-1}$ is quasisyntomic over ${\mathbb Z}/p^n$, the formal scheme $X$ is quasisyntomic over ${\mathbb Z}_p$, and $\Omega^{\DHod}_X$ coincides with the derived pushforward of the structure sheaf along the map of stacks $X^{\DHod}\tilde{o} X$, by specializing to the Hodge-Tate locus the isomorphism of \cite[Theorem 7.20(2)]{bhatt-lurie-prismatization}. 
We have the following relations between the relevant Cartier-Witt stacks: \begin{multline} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{X_{n-1}}\times_{\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^n}} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb F}_p}\simeq \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{X_0} \\ \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{X}\times_{\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}_p}}\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^{n}}\simeq \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{X_{n-1}} \end{multline} These follow directly from the definition of $\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{(-)}$, as in \cite[Remark 3.5]{bhatt-lurie-prismatization}. The identification $\mathrm{dR}_{X_0}\simeq \Omega^{\DHod}_{X_{n-1}}\otimes_{{\mathbb Z}/p^n}{\mathbb F}_p$ follows from the fact that $\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb F}_p}\simeq \Spec{\mathbb F}_p$ and the map $\WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb F}_p}\tilde{o} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^n}$ factors through the map $\eta$. The identification $\Omega^{\DHod}_{X}/p^n\simeq \Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$ is implied by the fact that the composition $\Spec{\mathbb Z}/p^n\xrightarrow{\eta} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}/p^n}\tilde{o} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}_p}$ is equal to the composition $\Spec{\mathbb Z}/p^n\tilde{o} \Spf{\mathbb Z}_p\tilde{o} \WCart^{\overline{{\mathlarger{\mathbbl{\Delta}}}}}_{{\mathbb Z}_p}$. Finally, we will compute the quotients of the conjugate filtration on $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$. The associated graded object $\gr_{\mathrm{conj}}^{\bullet}(\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n})\simeq \gr^{\bullet}_{\mathrm{conj}}({\mathbb Z}/p^n\otimes_{{\mathbb Z}/p^n[{\mathbb G}_a^{\#}]}\Omega^{\DHod}_{X_{n-1}})$ is equivalent to ${\mathbb Z}/p^n\otimes_{{\mathbb Z}/p^n\langle t\rangle}\bigoplus\limits_{i\geq 0}L\Omega^{i}_{X_{n-1}/{\mathbb Z}_p}[-i]$ where we identified $\gr^{\bullet}_{\mathrm{conj}}{\mathbb Z}/p^n[{\mathbb G}_a^{\#}]$ with the divided power algebra ${\mathbb Z}/p^n\langle t\rangle$ in one variable, such that $\Theta(t)=-t$. We have a natural map $\gr_{\mathrm{conj}}^{\bullet}(\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n})\tilde{o} \bigoplus\limits_{i\geq 0}(L\Omega^i_{X_{n-1}/{\mathbb Z}/p^n}[-i],-i)$ of graded objects of $D_{{\mathbb N}}(X_{n-1})$. To check that it is an equivalence we may assume that $X_{n-1}$ is isomorphic to the spectrum of a polynomial ring over ${\mathbb Z}/p^n$, and the result follows because such $X_{n-1}$ lifts over ${\mathbb Z}_p$. \end{proof} We will now establish analogs of Corollary \ref{sen operator: smooth classes w lift} and Lemmas \ref{sen operator: cXp as nilpotent operator}, \ref{sen operator: cXp as nilpotent operator on dR} over ${\mathbb Z}/p^2$, thus proving Lemma \ref{sen operator: classes over zp2}. Given a flat scheme $X_1$ over ${\mathbb Z}/p^2$ and two objects $M,N\in D(X_1)$, consider the complex of morphisms $\mathrm{RHom}_{D_{{\mathbb N}}(X_1)}((M,p),(N,0))$, viewed as an object of $D({\mathbb Z}/p^2)$. 
The restriction along $i:X_0\hookrightarrow X_1$ induces the map \begin{multline}\mathrm{RHom}_{D_{{\mathbb N}}(X_1)}((M,p),(N,0))\tilde{o} \\ \mathrm{RHom}_{D_{{\mathbb N}}(X_0)}((i^*M,0),(i^*N,0))\simeq \mathrm{RHom}_{X_0}(i^*M,i^*N)\oplus \mathrm{RHom}_{X_0}(i^*M,i^*N[-1])\end{multline} We denote by $r:\mathrm{RHom}_{D_{{\mathbb N}}(X_1)}((M,p),(N,0))\tilde{o} \mathrm{RHom}_{X_0}(i^*M,i^*N[-1])$ the second component of this map. Explicitly, the data of a 1-morphism $f:(M,p)\tilde{o} (N,0)$ amounts to a map $f:M\tilde{o} N$ in $D(X_1)$ and a homotopy between $p\cdot f$ and $0$. Restricting to $X_0$, this gives a homotopy from the zero morphism $i^*M\xrightarrow{0}i^*N$ to itself, which is equivalent to the data of a 1-morphism $i^*M\tilde{o} i^*N[-1]$. By Lemma \ref{sen operator: morphisms in DN} this map is nothing but $r(f)$. \begin{lm} Let $X_1$ be a flat scheme over ${\mathbb Z}/p^2$. Denote by $X_0=X_1\times_{{\mathbb Z}/p^2}{\mathbb F}_p\xhookrightarrow{i}X_1$ its special fiber. For any two objects $M,N\in D(X_1)$ the composition \begin{equation} \mathrm{RHom}_{D_{{\mathbb N}}(X_1)}((M,p),(N,0))\tilde{o} \mathrm{RHom}_{D(X_1)}(M,N)\tilde{o} \mathrm{RHom}_{D(X_0)}(i^*M,i^*N) \end{equation} where the first map is forgetting the endomorphisms and the second map is induced by $i$, can be identified with the composition \begin{equation} \mathrm{RHom}_{D_{{\mathbb N}}(X_1)}((M,p),(N,0))\xrightarrow{r} \mathrm{RHom}_{D(X_0)}(i^*M,i^*N[-1])\xrightarrow{\mathrm{Bock}} \mathrm{RHom}_{D(X_0)}(i^*M,i^*N) \end{equation} where the second map is the Bockstein morphism associated to the object $\mathrm{RHom}_{D(X_1)}(M,N[-1])$. \end{lm} \begin{proof} By Lemma \ref{sen operator: morphisms in DN} we can identify $\mathrm{RHom}_{D_{{\mathbb N}}(X_1)}((M,p),(N,0))$ with $\mathrm{RHom}_{D(X_1)}(M,N^{p=0})$, and the forgetful map to $\mathrm{RHom}_{D(X_1)}(M,N)$ is given by composing with the natural map $N^{p=0}\tilde{o} N$. Moreover, we can identify $\mathrm{RHom}_{D(X_1)}(M,N^{p=0})$ with $\mathrm{RHom}_{D(X_1)}(M,N)^{p=0}$ so that the desired identification becomes a consequence of Lemma \ref{sen operator: p fiber reduction} below, applied to $A=\mathrm{RHom}_{D(X_1)}(M,N)$. \end{proof} \begin{lm}\label{sen operator: p fiber reduction} For any object $A\in D({\mathbb Z}/p^2)$ (with $i$ now denoting the closed immersion $\Spec{\mathbb F}_p\hookrightarrow\Spec{\mathbb Z}/p^2$) the composition $A^{p=0}\tilde{o} A\tilde{o} i_*i^*A$ is naturally identified with the composition \begin{equation}A^{p=0}\xrightarrow{r_A} i_*i^*A[-1]\xrightarrow{\mathrm{Bock}_{A[-1]}}i_*i^*A\end{equation} where the map $r_A$ is induced by the identification $i_*i^*(A^{p=0})=i_*(i^*A)^{p=0}\simeq i_*i^*A\oplus i_*i^*A[-1]$. \end{lm} \begin{proof} Recall that $i_*i^*A\simeq A\otimes_{{\mathbb Z}/p^2}{\mathbb F}_p$ and there is a natural fiber sequence $i_*i^*A\xrightarrow{a_1}A\xrightarrow{a_2}i_*i^*A$ obtained by taking the tensor product of the sequence ${\mathbb F}_p\tilde{o} {\mathbb Z}/p^2\tilde{o} {\mathbb F}_p$ with $A$. 
Consider the commutative diagram \begin{equation} \begin{tikzcd} A \arrow[r, "p"]\arrow[d,"a_2"] & A\arrow[d,equal]\\ i_*i^*A\arrow[r,"a_1"] & A \end{tikzcd} \end{equation} Taking the fibers of the horizontal arrows induces a commutative diagram \begin{equation}\label{sen operator: p fiber reduction diagram} \begin{tikzcd} \mathrm{fib}(A\xrightarrow{p}A)\arrow[d]\arrow[r] & A\arrow[d]\\ \mathrm{fib}(i_*i^*A\xrightarrow{a_1} A)\arrow[r] & i_*i^*A \\ \end{tikzcd} \end{equation} The induced map $A^{p=0}=\mathrm{fib}(A\xrightarrow{p}A)\tilde{o} \mathrm{fib}(i_*i^*A\xrightarrow{a_1} A)\simeq i_*i^*A[-1]$ is the map $r_A$, and the map $i_*i^*A[-1]\simeq \mathrm{fib}(i_*i^*A\xrightarrow{a_1}A)\tilde{o} i_*i^*A$ is $\mathrm{Bock}_{A[-1]}$, by definition. Hence the statement of the lemma amounts to commutativity of the diagram (\ref{sen operator: p fiber reduction diagram}). \end{proof} The upshot is that, given a quasisyntomic ${\mathbb Z}/p^n$-scheme $X_{n-1}$ for $n\geq 2$, the Sen operator on $\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$ induces an operator on $\mathrm{dR}_{X_0}$, and all of the information about the restriction of the latter operator to $\mathrm{Fil}_p\mathrm{dR}_{X_0}$ is captured by the class $c_{X_{n-1},p}$. If $X_{n-1}$ lifts to a flat formal ${\mathbb Z}_p$-scheme $X$, then this $c_{X_{n-1},p}$ also remembers the data of the Sen operator on $\mathrm{Fil}_p\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$, by Lemma \ref{sen operator: ext group description}. In the absence of a lift to $W(k)$ it is at the moment unclear to me whether the Sen operator on $\mathrm{Fil}_p\Omega^{\DHod}_{X_{n-1}/{\mathbb Z}/p^n}$ contains more information than its mod $p$ reduction. \section{Sen operator via descent from semiperfectoid rings}\label{semiperf: section} In this section we prove Theorem \ref{semiperf: main sen operator} via descent from quasiregular semiperfectoid rings, this approach was suggested by Bhargav Bhatt. As a consequence, we extend Theorem \ref{cosimp applications: the best part} to the situation where $X_0$ is only liftable to ${\mathbb Z}/p^2$ rather than ${\mathbb Z}_p$. \begin{thm}\label{semiperf: main sen operator} Let $X_1$ be a quasisyntomic scheme over ${\mathbb Z}/p^2$ with the special fiber $X_0=X_1\times_{{\mathbb Z}/p^2}{\mathbb F}_p$. The map $c_{X_1,p}:L\Omega^p_{X_0}\tilde{o}{\mathcal O}_{X_0}[p]$ arising from the Sen operator on the de Rham complex of $X_0$ is naturally homotopic to the composition \begin{equation} L\Omega^p_{X_0}\xrightarrow{\alpha(L\Omega^1_{X_0})}F_{X_0}^*L\Omega^1_{X_0}[p-1]\xrightarrow{\ob_{F,X_1}}{\mathcal O}_{X_0}[p]. \end{equation} \end{thm} The particular interpretation of $c_{X_1,p}$ that will be used in the proof is that the map $\Theta_{X_1}-\Theta_{X_1}^p:L\Omega^p_{X_0}[-p]\tilde{o}\mathrm{Fil}_{p-1}^{\mathrm{conj}}\mathrm{dR}_{X_0}\simeq \bigoplus\limits_{i=0}^{p-1}L\Omega^i_{X_0}[-p]$ factors as $L\Omega^p_{X_0}[-p]\xrightarrow{c_{X_1,p}}{\mathcal O}_{X_0}\xrightarrow{\oplus}\mathrm{Fil}_{p-1}^{\mathrm{conj}}\mathrm{dR}_{X_0}$, as we established in Lemma \ref{sen operator: classes over zp2}. We start by recalling the description of diffracted Hodge cohomology of quasiregular semiperfectoid algebras. Let $S$ be a quasiregular semiperfectoid flat ${\mathbb Z}/p^n{\mathbb Z}$-algebra, as defined in \cite[Definition 4.20]{bms2}. The cotangent complex $L\Omega^1_{S/{\mathbb Z}/p^n}$ is concentrated in degree $(-1)$ and $H^{-1}(L\Omega^1_{S/{\mathbb Z}/p^n})$ is a flat $S$-module. 
For brevity, we denote $H^{-1}(L\Omega^1_{S/{\mathbb Z}/p^n})$ by $M_S$. The objects $L\Omega^i_{S/{\mathbb Z}/p^n}[-i]=\Lambda^i(H^{-1}(L\Omega^1_{S/{\mathbb Z}/p^n})[1])[-i]\simeq \Gamma^iM_S$ are also flat $S$-modules placed in degree zero. Since $\Omega^{\DHod}_{S/{\mathbb Z}/p^n}$ admits an exhaustive filtration with graded pieces given by $L\Omega^i_{S/{\mathbb Z}/p^n}[-i]$, the diffracted Hodge cohomology complex $\Omega^{\DHod}_{S/{\mathbb Z}/p^n}$ is a commutative flat $S$-algebra concentrated in degree $0$. We denote by $S_0:=S/p$ the mod $p$ reduction of $S$, and by $\Omega^{\DHod}_{S_0}\simeq \Omega^{\DHod}_{S/{\mathbb Z}/p^n}\otimes_S S_0$ the diffracted Hodge cohomology of $S_0$, which coincides with the derived de Rham cohomology $\mathrm{dR}_{S_0/{\mathbb F}_p}$. Thanks to the fact that $\Omega^{\DHod}_{S/{\mathbb Z}/p^n}$ is concentrated in degree $0$, the Sen operator $\Theta_S:\Omega^{\DHod}_{S/{\mathbb Z}/p^n}\tilde{o} \Omega^{\DHod}_{S/{\mathbb Z}/p^n}$ is a genuine endomorphism of a discrete $S$-algebra, and we may use tools from ordinary algebra rather than higher algebra to study it. The reader is encouraged to view the next result as an analog of Theorem \ref{cosimp: main theorem}. \begin{pr}\label{semiperf: sen for qrsprfd} Let $S$ be a quasiregular semiperfectoid flat ${\mathbb Z}/p^2$-algebra. Suppose that $f:\Omega^{\DHod}_{S/{\mathbb Z}/p^2}\tilde{o}\Omega^{\DHod}_{S/{\mathbb Z}/p^2}$ is a derivation of $S$-algebras that preserves the conjugate filtration, acting on $\gr_i^{\mathrm{conj}}\Omega^{\DHod}_{S/{\mathbb Z}/p^2}\simeq \Gamma^i M_S$ by $-i$. Denote by $s:M_S\tilde{o} \mathrm{Fil}_1^{\mathrm{conj}}\Omega^{\DHod}_{S/{\mathbb Z}/p^2}$ the splitting of the conjugate filtration given by $M_S\simeq \ker(f+1:\mathrm{Fil}^{\mathrm{conj}}_1\Omega^{\DHod}_{S/{\mathbb Z}/p^2}\tilde{o} \mathrm{Fil}^{\mathrm{conj}}_1\Omega^{\DHod}_{S/{\mathbb Z}/p^2})\subset \mathrm{Fil}^{\mathrm{conj}}_1\Omega^{\DHod}_{S/{\mathbb Z}/p^2}$. Then the map $f-f^p:\mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0}\tilde{o} \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0}$ on the mod $p$ reduction of $\mathrm{Fil}_p^{\mathrm{conj}}\Omega^{\DHod}_{S/{\mathbb Z}/p^2}$ factors as \begin{equation}\label{semiperf: sen for qrsprfd equation} \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0}\twoheadrightarrow{} \Gamma^p_{S_0}(M_S/p)\twoheadrightarrow{}F_{S_0}^*(M_S/p)\xrightarrow{F_{S_0}^*s}F_{S_0}^*\mathrm{Fil}^{\mathrm{conj}}_1\Omega^{\DHod}_{S_0}\xrightarrow{\varphi_{S_0}}\mathrm{Fil}^{\mathrm{conj}}_1\Omega^{\DHod}_{S_0}\hookrightarrow \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0} \end{equation} where the surjection $\Gamma^p_{S_0}(M_S/p)\twoheadrightarrow{}F_{S_0}^*(M_S/p)$ is the map $\psi_{M_S/p}$ defined in (\ref{dalg: polynomial frobenius}). \end{pr} \begin{proof} To prove the proposition it is enough to check that $f-f^p$ coincides with the composition (\ref{semiperf: sen for qrsprfd equation}) on elements $y\in \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0}$ whose image in $\gr^{\mathrm{conj}}_p\simeq \Gamma_{S_0}^p(M_S/p)$ has the form $x^{[p]}$ for some $x\in M_S/p$. 
Indeed, the $S_0$-module $\mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0}$ is spanned by elements of this form and the submodule $\mathrm{Fil}^{\mathrm{conj}}_{p-1}\Omega^{\DHod}_{S_0}$, and both $f-f^p$ and the composition (\ref{semiperf: sen for qrsprfd equation}) are identically zero on $\mathrm{Fil}^{\mathrm{conj}}_{p-1}\Omega^{\DHod}_{S_0}$. The composition (\ref{semiperf: sen for qrsprfd equation}) takes the element $y$ to $s(x)^p$, and we will prove that $f-f^p$ does the same. Consider the endomorphism of $\mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S/\bZ/p^2}$ given by $F:=(-1)^p\prod\limits_{i=0}^{p-1}(f+i)$. It reduces modulo $p$ to the map $f-f^p:\mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0}\tilde{o} \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S_0}$. Note also that $F$ annihilates $\mathrm{Fil}^{\mathrm{conj}}_{p-1}\Omega^{\DHod}_{S}$. Let $\widetilde{y}\in \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S/{\mathbb Z}/p^2}$ be a lift of $y$ such that the image of $\widetilde{y}$ in $\gr^{\mathrm{conj}}_p\Omega^{\DHod}_{S/\bZ/p^2}\simeq\Gamma^p_S(M_S)$ has the form $\widetilde{x}^{[p]}$ for some $\widetilde{x}\in M_S$ lifting $x\in M_S/p$. Then the image of $p!\cdot \widetilde{y}$ in $\gr^{\mathrm{conj}}_p\Omega^{\DHod}_{S/\bZ/p^2}$ is equal to $\widetilde{x}^p$, hence $p!\cdot\widetilde{y}-s(\widetilde{x})^p$ lies in the submodule $\mathrm{Fil}^{\mathrm{conj}}_{p-1}\Omega^{\DHod}_{S/\bZ/p^2}\subset \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S/\bZ/p^2}$. Therefore $F(p!\cdot \widetilde{y}-s(\widetilde{x})^p)=0$. By definition of the section $s$, we have $f(s(\widetilde{x}))=-s(\widetilde{x})$. Since $f$ is a derivation, $f(s(\widetilde{x})^p)=-p\cdot s(\widetilde{x})^p$, and $F(s(\widetilde{x})^p)=(-1)^p\prod\limits_{i=0}^{p-1}(-p+i)\cdot s(\widetilde{x})^p$. Hence we get the following equation on the element $F(\widetilde{y})$: \begin{equation}\label{semiperf: main prop equation1} p!F(\widetilde{y})=\prod\limits_{i=0}^{p-1}(p-i)\cdot s(\widetilde{x})^p \end{equation} We now use crucially that $\Omega^{\DHod}_{S/\bZ/p^2}$ is flat over ${\mathbb Z}/p^2$: we can cancel out $p!$ on both sides of (\ref{semiperf: main prop equation1}) to get $F(\widetilde{y})=s(\widetilde{x})^p+p\cdot a$ for some $a\in \mathrm{Fil}^{\mathrm{conj}}_p\Omega^{\DHod}_{S/\bZ/p^2}$. Reducing modulo $p$ gives the desired equality \begin{equation}\label{semiperf: main prop equation2} (f-f^p)(y)=s(x)^p.\end{equation} \end{proof} We will now deduce a computation of the Sen operator for an arbitrary quasisyntomic ${\mathbb Z}/p^2$-scheme using descent. \begin{proof}[Proof of Theorem \ref{semiperf: main sen operator}] We will prove that $c_{X_1,p}$ is equivalent to the composition \begin{equation}\label{semiperf: desired sen formula} L\Omega^p_{X_0}\xrightarrow{\alpha(L\Omega^1_{X_0})}F_{X_0}^*L\Omega^1_{X_0}[p-1]\xrightarrow{F_{X_0}^*s[p-1]}F_{X_0}^*\mathrm{dR}_{X_0}[p]\tilde{o} F_{X_0}^*F_{X_0*}{\mathcal O}_{X_0}[p]\tilde{o} {\mathcal O}_{X_0}[p] \end{equation} This is equivalent to the desired statement because the composition $F_{X_0}^*L\Omega^1_{X_0}\xrightarrow{F_{X_0}^*s[1]}F_{X_0}^*\mathrm{dR}_{X_0}[1]\tilde{o} F_{X_0}^*F_{X_0*}{\mathcal O}_{X_0}[1]\tilde{o} {\mathcal O}_{X_0}[1] $ is homotopic to the obstruction class $\ob_{F,X_1}:F_{X_0}^*L\Omega^1_{X_0}\tilde{o}{\mathcal O}_{X_0}[1]$ as was established in Proposition \ref{applications: frobenius lift obstruction prop}. Denote by $d_{X_1,p}$ the composition of maps in (\ref{semiperf: desired sen formula}). 
We denote by $\QSyn_{{\mathbb Z}/p^2}$ the category of schemes quasisyntomic over ${\mathbb Z}/p^2$, by $\AffQSyn_{{\mathbb Z}/p^2}\subset \QSyn_{{\mathbb Z}/p^2}$ the full subcategory of affine quasisyntomic schemes, and by $\mathrm{QRSPrfd}_{{\mathbb Z}/p^2}\subset\AffQSyn_{{\mathbb Z}/p^2}$ the opposite of the category of quasiregular semiperfectoid ${\mathbb Z}/p^2$-algebras. Proposition \ref{semiperf: sen for qrsprfd} established that $c_{X_1,p}=d_{X_1,p}$ when $X_1=\Spec S$ is the spectrum of a quasiregular semiperfectoid ${\mathbb Z}/p^2$-algebra, because the map $\Gamma^p_{S_0}(M_S/p)\tilde{o} F_{S_0}^*(M_S/p)$ is the shift by $[-p]$ of the map $L\Omega^p_{S_0}\xrightarrow{\alpha(L\Omega^1_{S_0})}F^*_{S_0}L\Omega^1_{S_0}[p-1]$. The case of an arbitrary $X_1\in \Sch_{{\mathbb Z}/p^2}$ will follow by a descent argument in the spirit of \cite{bms2}, cf. \cite[Proposition 10.3.1]{blm} and \cite[Proposition 4.4]{li-mondal}. Let $\QC_{\mathrm{obj}}$ be the $\infty$-category of $\infty$-categories equipped with a distinguished object, as defined in \cite[Tag 020S]{kerodon}. Its objects are pairs $(C,{\mathcal C})$ where ${\mathcal C}$ is a $\infty$-category and $C$ is an object of ${\mathcal C}$, and $1$-morphisms from $(C,{\mathcal C})$ to $(D,{\mathcal D})$ are pairs $(F:{\mathcal C}\tilde{o} {\mathcal D},\alpha: F(C)\tilde{o} D)$ where $F$ is a functor and $\alpha$ is a $1$-morphism in ${\mathcal D}$. Assigning to a scheme $X_1\in \QSyn_{{\mathbb Z}/p^2}$ the pair $(L\Omega^i_{X_0/{\mathbb F}_p},D(X_0))$ defines a functor $L\Omega^i:\Sch_{{\mathbb Z}/p^2}^{\op}\tilde{o} \QC_{\mathrm{obj}}$, where a morphism $f:X_1\tilde{o} Y_1$ is sent to $f_0^*:D(Y_0)\tilde{o} D(X_0)$ and $\Lambda^i df_{0}:f_0^*L\Omega^i_{Y_0}\tilde{o} L\Omega^i_{X_0}$ (this functor factors through $\QSyn_{{\mathbb F}_p}$). For $i=0$ we denote this functor simply by ${\mathcal O}$. The formation of classes $c_{X_1,p}$ and $d_{X_1,p}$ defines morphisms from $L\Omega^p[-p]$ to ${\mathcal O}$. Denote by $\Func^0(\QSyn^{\op}_{{\mathbb Z}/p^2},\QC_{\mathrm{obj}})$ the category of functors for which the composition with the forgetful functor $\QC_{\mathrm{obj}}\tilde{o} \Cat_{\infty}$ is $X_1\mapsto D(X_0)$. \begin{lm} For any $i,j$ the restriction induces an equivalence \begin{equation}\label{semiperf: map equation} \Map_{\Func^0(\QSyn^{\op}_{{\mathbb Z}/p^2},\QC_{\mathrm{obj}})}(L\Omega^i[-i],L\Omega^j[-j])\tilde{o} \Map_{\Func^0(\mathrm{QRSPrfd}^{\op}_{{\mathbb Z}/p^2},\QC_{\mathrm{obj}})}(L\Omega^i[-i],L\Omega^j[-j]).\end{equation} \end{lm} \begin{proof} Affine schemes corresponding to quasiregular semiperfectoid rings form a basis in the flat topology on the category of all quasisyntomic ${\mathbb Z}/p^2$-schemes by \cite[Lemma 4.28]{bms2} which implies the result by flat descent for the exterior powers of the cotangent complex. \end{proof} In general, specifying a natural transformation between functors $F_1,F_2$ into an $\infty$-category requires (among further higher homotopies) specifying maps $a_{X_1}:F_1(X_1)\tilde{o} F_2(X_1)$ for all objects $X_1$ together with the additional data of homotopies between $F_2(f)\circ a_{X_1}$ and $a_{Y_1}\circ F_1(f)$ for every map $f:X_1\tilde{o} Y_1$. Crucially, if $X=\Spec S$ is the spectrum of a quasiregular semiperfectoid ${\mathbb Z}/p^2$-algebra, then the mapping space $\Map_{D(X_0)}(L\Omega^p_{X_0}[-p],{\mathcal O}_{X_0})$ is discrete because both $L\Omega^p_{X_0}[-p]$ and ${\mathcal O}_{X_0}$ are concentrated in degree zero. 
Therefore the target of the map (\ref{semiperf: map equation}) is a discrete space and an element of it is completely determined by its values on the objects: functoriality with respect to morphisms in $\mathrm{QRSPrfd}_{{\mathbb Z}/p^2}$ amounts to checking a condition rather than specifying additional structure. Hence it is enough to compare the values of $c_{X_1,p}$ and $d_{X_1,p}$ on objects of $\mathrm{QRSPrfd}_{{\mathbb Z}/p^2}$, and the theorem is proven. \end{proof} We can now generalize the results of Section \ref{applications: section} on extensions in the conjugate filtration on the de Rham complex to schemes over $k$ that only admit a lift to $W_2(k)$ and are not necessarily smooth. \begin{cor} For a quasisyntomic scheme $X_0$ over ${\mathbb F}_p$ equipped with a flat lift $X_1$ over ${\mathbb Z}/p^2$ the class $e_{X_1,p}\in\mathrm{RHom}_{D(X_0)}(L\Omega^p_{X_0},{\mathcal O}_{X_0}[p+1])$ of the extension $\bigoplus\limits_{i=0}^{p-1} L\Omega^i_{X_0}[-i]\tilde{o}\mathrm{Fil}_p^{\mathrm{conj}}\mathrm{dR}_{X_0}\tilde{o} L\Omega^p_{X_0}[-p]$ is naturally homotopic to $\mathrm{Bock}_{X_1}(\ob_{F,X_1}\circ \alpha(L\Omega^1_{X_0}))$. \end{cor} \begin{proof} This is Lemma \ref{sen operator: classes over zp2} combined with Theorem \ref{semiperf: main sen operator}. \end{proof} For convenience of applications, let us also explicitly state this result on the level of cohomology classes in the case of smooth varieties: \begin{cor}\label{semiperf: smooth cor} For a smooth scheme $X_0$ over $k$ equipped with a smooth lift $X_1$ over $W_2(k)$ the class $c_{X_1,p}\in H^p(X_0,\Lambda^p T_{X_0/k})$ is equal to $\ob_{F,X_1}\cup \alpha(\Omega^1_{X_0/k})$, and $e_{X_1,p}\in H^{p+1}(X_0,\Lambda^pT_{X_0/k})$ is equal to $\mathrm{Bock}_{X_1}(\ob_{F,X_1}\cup \alpha(\Omega^1_{X_0}))$. \end{cor} Hence the first potentially non-trivial differentials in the conjugate spectral sequence of a liftable scheme are described as follows: \begin{cor} For a smooth scheme $X_0$ over $k$ equipped with a lift $X_1$ over $W_2(k)$ the conjugate spectral sequence $E_2^{i,j}=H^i(X^{(1)}_0,\Omega^j_{X^{(1)}_0})\Rightarrow H^{i+j}_{\mathrm{dR}}(X_0/k)$ has no non-zero differentials on pages $E_2,\ldots, E_{p}$. The differentials $d^{i,p}_{p+1}:H^i(X^{(1)}_0,\Omega^p_{X^{(1)}_0})\tilde{o} H^{i+p+1}(X^{(1)}_0,{\mathcal O}_{X^{(1)}_0})$ on page $E_{p+1}$ can be described as \begin{equation} \mathrm{Bock}_{{\mathcal O}_{X_1^{(1)}}}\circ (\ob_{F,X_1}\cup\alpha(\Omega^1_{X^{(1)}_0}))-(\ob_{F,X_1}\cup\alpha(\Omega^1_{X^{(1)}_0}))\circ\mathrm{Bock}_{\Omega^p_{X_1^{(1)}}} \end{equation} where $\ob_{F,X_1}\cup\alpha(\Omega^1_{X^{(1)}_0})$ denotes the map $H^j(X_0^{(1)},\Omega^p_{X_0^{(1)}})\tilde{o} H^{j+p}(X_0^{(1)},{\mathcal O}_{X_0^{(1)}})$ induced by the product with the class $c_{X^{(1)}_1,p}=\ob_{F,X_1}\cup\alpha(\Omega^1_{X^{(1)}_0})\in H^p(X_0^{(1)},\Lambda^pT_{X^{(1)}_0})$, for $j=i,i+1$. \end{cor} \begin{proof} As usual, denote the inclusion of the special fiber by $i:X_0\hookrightarrow X_1$. The differential $d_{p+1}^{i,p}$ is induced by the map $i^*e_{X_1^{(1)},p}:\Omega^p_{X^{(1)}_0}\tilde{o} {\mathcal O}_{X^{(1)}_0}[p+1]$, which is the image of $\ob_{F,X_1}\cup\alpha(\Omega^1_{X^{(1)}_0})\in \mathrm{RHom}_{X^{(1)}_0}(\Omega^p_{X^{(1)}_0},{\mathcal O}_{X^{(1)}_0}[p])$ under the Bockstein homomorphism associated with the $W_2(k)$-module $\mathrm{RHom}_{X_1^{(1)}}(\Omega^p_{X_1^{(1)}},{\mathcal O}_{X_1^{(1)}}[p])$. 
For the purposes of computing the effect of $i^*e_{X_1^{(1)},p}$ on the cohomology groups it is enough to compute $i_*i^*e_{X_1^{(1)},p}:i_*\Omega^p_{X^{(1)}_0}\tilde{o} i_*{\mathcal O}_{X^{(1)}_0}[p+1]$. This is achieved by Lemma \ref{cosimp applications: hom bockstein} below. \end{proof} \begin{lm}\label{cosimp applications: hom bockstein} Let $X_1$ be a flat scheme over $W_2(k)$ with the special fiber $X_0:=X_1\times_{W_2(k)}k\xhookrightarrow{i}X_1$. For any two objects $M,N\in D(X_1)$ the composition \begin{equation} \mathrm{RHom}_{D(X_0)}(i^*M,i^*N)\xrightarrow{\mathrm{Bock}_{\mathrm{RHom}_{X_1}(M,N)}} \mathrm{RHom}_{D(X_0)}(i^*M,i^*N)[1]\xrightarrow{i_*} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)[1] \end{equation} is given by sending a map $f:i^*M\tilde{o} i^*N$ to $\mathrm{Bock}_N\circ i_*f-i_*f[1]\circ \mathrm{Bock}_M$. \end{lm} \begin{proof} For any object $K\in D(X_1)$ there is a natural fiber sequence $i_*i^*K\tilde{o} K\tilde{o} i_*i^*K$ that we will view as a two-step filtration on $K$, defining therefore a functor $B:D(X_1)\tilde{o} D_{\mathrm{Fil}}(X_1)$ to the category of filtered objects of $D(X_1)$. The complex of filtered morphisms between $M$ and $N$ fits into the fiber sequence \begin{equation}\label{cosimp applications: hom bockstein eq1} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)\tilde{o} \mathrm{RHom}_{D_{\mathrm{Fil}}(X_1)}(M,N)\tilde{o} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)^{\oplus 2} \end{equation} Here the first term is identified with the complex of morphisms from $M$ to $N$ that shift the filtration down by $1$, and the second map associates to a filtered morphism its effect on the graded pieces of the filtrations. This fiber sequence is the Baer sum of the following fiber sequences \begin{multline} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)\tilde{o} \mathrm{RHom}_{D(X_1)}(M,i_*i^*N)\tilde{o} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N) \\ \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)\tilde{o} \mathrm{RHom}_{D(X_1)}(i_*i^*M,N)\tilde{o} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N) \end{multline} induced by applying the functor $\mathrm{RHom}_{D(X_1)}(-,i_*i^*N)$ to $i_*i^*M\tilde{o} M\tilde{o} i_*i^*M$, and the functor $\mathrm{RHom}_{D(X_1)}(i_*i^*M,-)$ to $i_*i^*N\tilde{o} N\tilde{o} i_*i^*N$, respectively. Therefore the connecting map $\mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)^{\oplus 2}\tilde{o} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)[1]$ induced by (\ref{cosimp applications: hom bockstein eq1}) is given by $(f_0,f_1)\mapsto \mathrm{Bock}_N\circ f_0-f_1[1]\circ \mathrm{Bock}_M$. The map $\mathrm{RHom}_{D(X_1)}(M,N)\tilde{o} \mathrm{RHom}^{\mathrm{Fil}}_{D(X_1)}(M,N)$ induced by the functor $B$ then induces a map of fiber sequences \begin{equation}\label{cosimp applications: hom bockstein eq2} \begin{tikzcd} \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)\arrow[r] & \mathrm{RHom}^{\mathrm{Fil}}_{D(X_1)}(M,N)\arrow[r] & \mathrm{RHom}_{D(X_1)}(i_*i^*M,i_*i^*N)^{\oplus 2}\\ \mathrm{RHom}_{D(X_0)}(i^*M,i^*N)\arrow[r]\arrow[u, "{i_*}"] & \mathrm{RHom}_{D(X_1)}(M,N)\arrow[r]\arrow[u] & \mathrm{RHom}_{D(X_0)}(i^*M,i^*N)\arrow[u,"{(i_*,i_*)}"] \end{tikzcd} \end{equation} and this proves the lemma because $\mathrm{Bock}_{\mathrm{RHom}_{X_1}(M,N)}$ is precisely the connecting morphism induced by the bottom row of (\ref{cosimp applications: hom bockstein eq2}). 
\end{proof} \section{Sen operator of a fibration in terms of Kodaira-Spencer class}\label{nonsemisimp: section} In this section we specialize the formula for the class $c_{X_1,p}$ from Corollary \ref{semiperf: smooth cor} to $p$-dimensional $W_2(k)$-schemes that are fibered over a curve. The fact that the cotangent bundle of such a scheme admits a line subbundle allows us (Theorem \ref{nonsemisimp: sen class fibration}) to relate the class $\alpha(\Omega^1_{X_0})$ to the Kodaira-Spencer class of the fibration. We then use this relation in Proposition \ref{nonsemisimp: p dim example} to give examples showing that the Sen operator $\Theta_X$ on $\mathrm{dR}_{X_0}$ might be non-semisimple. By Corollary \ref{sen operator: semisimplicity implies decomposability}, this is the case whenever the conjugate spectral sequence is non-degenerate, so Corollary \ref{nondeg example: main corollary} also provides such examples. But we would like to demonstrate that non-semisimplicity of $\Theta_X$ is a much more frequent phenomenon than non-degeneration of the Hodge-to-de Rham spectral sequence, appearing even for familiar classes of varieties. We start by introducing the notation needed to state Theorem \ref{nonsemisimp: sen class fibration}. Let $f:X_1\to Y_1$ be a smooth morphism of smooth $W_2(k)$-schemes with $\dim Y_1=1$, $\mathrm{rel.dim}(f)=p-1$. We will describe the class $c_{X_1,p}\in H^p(X_0,\Lambda^p T_{X_0})$ in terms of the Kodaira-Spencer class of $f$ and the obstruction to lifting the Frobenius morphism of $Y_0$ to $Y_1$. Recall that the Kodaira-Spencer class $\mathrm{ks}_{f_0}:T_{Y_0}\to R^1f_{0*}T_{X_0/Y_0}$ is obtained from the fundamental triangle \begin{equation}\label{nonsemisimp: fundamental triangle} T_{X_0/Y_0}\to T_{X_0}\to f_0^*T_{Y_0} \end{equation} by applying the functor of $0$th cohomology to the morphism $T_{Y_0}\to Rf_{0*}T_{X_0/Y_0}[1]$ corresponding to the class of the extension (\ref{nonsemisimp: fundamental triangle}) by adjunction. Denote by $\mathrm{ks}_{f_0}^{p-1}$ the composition $T_{Y_0}^{\otimes p-1}\xrightarrow{\mathrm{ks}_{f_0}^{\otimes p-1}} (R^1f_{0*}T_{X_0/Y_0})^{\otimes p-1}\to R^{p-1}f_{0*}\Lambda^{p-1}T_{X_0/Y_0}$ where the second map is the cup product on cohomology. By our assumption on the dimensions, the Leray spectral sequence for $f_0$ identifies $H^p(X_0,\Lambda^p T_{X_0})$ with $H^1(Y_0,R^{p-1}f_{0*}\Lambda^pT_{X_0})=H^{1}(Y_0, T_{Y_0}\otimes R^{p-1}f_{0*}\Lambda^{p-1}T_{X_0/Y_0})$. \begin{thm}\label{nonsemisimp: sen class fibration} The class $c_{X_1,p}\in H^p(X_0,\Lambda^p T_{X_0})$ is equal, up to multiplying by an element of ${\mathbb F}_p^{\times}$, to the product of the class $\ob_{F,Y_1}\in H^1(Y_0,F_{Y_0}^*T_{Y_0})=H^1(Y_0,T_{Y_0}^{\otimes p})$ with $\mathrm{ks}^{p-1}_{f_0}\in H^0(Y_0,T_{Y_0}^{\otimes 1-p}\otimes R^{p-1}f_{0*}\Lambda^{p-1}T_{X_0/Y_0})$. \end{thm} We have the following description of $\alpha(E)$ for a rank $p$ vector bundle admitting a line subbundle. It will be proven, for $p>2$, as a consequence of our computations with group cohomology in Section \ref{group cohomology: section}, under Lemma \ref{group cohomology: reducible formula}. For the proof in the case $p=2$ see Remark \ref{nonsemisimp: reducible formula p=2} below.
\begin{lm}\label{nonsemisimp: reducible formula} Let $E$ be a vector bundle on $X_0$ of rank $p$ that fits into an extension \begin{equation}\label{nonsemisimp: reducible formula extension} 0\tilde{o} L\tilde{o} E\tilde{o} E'\tilde{o} 0 \end{equation} where $L$ is a line bundle, and $E'$ is a vector bundle of rank $p-1$. The class of this extension defines an element $v(E)\in \Ext^1_{X_0}(E',L)=H^1(X_0,L\otimes (E')^{\vee})$. Denote by $v(E)^{p-1}\in H^{p-1}(X_0,L^{\otimes p-1}\otimes (\det E')^{\vee})$ the image of $v(E)^{\otimes p-1}\in H^{p-1}(X_0, (L\otimes (E')^{\vee})^{\otimes p-1})$ under the map induced by $(L\otimes (E')^{\vee})^{\otimes p-1}\tilde{o} \Lambda^{p-1}(L\otimes (E')^{\vee})=L^{\otimes p-1}\otimes (\det E')^{\vee}$. The class $\alpha(E)\in \Ext^{p-1}_{X_0}(\Lambda^p E, F_{X_0}^*E)=H^{p-1}(X_0,F_{X_0}^*E\otimes L^{\vee}\otimes (\det E')^{\vee})$ is equal, up to multiplying by a scalar from ${\mathbb F}_p^{\times}$, to the image of $v(E)^{p-1}$ under the map induced by $L^{\otimes p-1}\otimes(\det E')^{\vee}=F_{X_0}^*L\otimes L^{\vee}\otimes(\det E')^{\vee}\hookrightarrow F_{X_0}^*E\otimes L^{\vee}\otimes(\det E')^{\vee}$. \end{lm} \begin{rem}\label{nonsemisimp: reducible formula p=2} The lemma is straightforward when $p=2$. It follows from the existence of the following map of extensions: \begin{equation} \begin{tikzcd} F_{X_0}^*E\arrow[r] & S^2E\arrow[r] & \Lambda^2 E \\ L^{\otimes 2}\arrow[r]\arrow[u, hook] & L\otimes E\arrow[r]\arrow[u, hook] & L\otimes E'\arrow[u, equal] \end{tikzcd} \end{equation} Here the left vertical map is the pullback along $F_{X_0}$ of the inclusion $L\hookrightarrow E$, the top row is the extension representing $\alpha(E)$, and the bottom row is the tensor product of (\ref{nonsemisimp: reducible formula extension}) with $L$. Our proof of this lemma for $p>2$ is rather ad hoc: it relies on the fact that the class $\alpha(V)$ for the tautological representation $V$ of $GL_p$ is non-zero, as proven in Proposition \ref{rational group cohomology: main non-vanishing}. Given that both $\alpha(E)$ and $v(E)^{p-1}$ admit explicit representatives as Yoneda extensions ((\ref{group cohomology: de rham p-1}) for the former and a Koszul complex for the latter), it would be nicer to have a direct proof of the identity claimed in Lemma \ref{nonsemisimp: reducible formula}, in the spirit of the proof for $p=2$. \end{rem} \comment{\begin{lm}\label{nonsemisimp: split reducible formula} Let $E$ be a vector bundle on $X_0$ of rank $p$ that fits into an extension \begin{equation}0\tilde{o} L\tilde{o} E\tilde{o} \bigoplus\limits_{i=1}^{p-1}L_i\tilde{o} 0\end{equation} where $L,L_1,\ldots,L_{p-1}$ are line bundles. The corresponding extension class is an element $v_1\oplus\ldots\oplus v_{p-1}\in \bigoplus\limits_{i=1}^{p-1} H^1(X_0, L\otimes L_i^{\otimes -1})$, denote by $v_1\cup \ldots\cup v_{p-1}\in H^{p-1}(X_0,L^{\otimes p-1}\otimes \bigotimes L_i^{-1})$ the cup-product of the components of the extension class. Then the class $\alpha(E)\in \Ext^{p-1}_{X_0}(\Lambda^p E,F^*E)\simeq H^{p-1}(X_0,F^*E\otimes L^{-1}\otimes \bigotimes\limits_{i=1}^{p-1}L_i^{-1})$ is equal, up to multiplication by an element of ${\mathbb F}_p^{\times}$, to the image of $v_1\cup\ldots\cup v_{p-1}$ under the map induced by $L^{\otimes p-1}\otimes \bigotimes L_i^{-1}\hookrightarrow F^*E\otimes L^{-1}\otimes \bigotimes\limits_{i=1}^{p-1}L_i^{-1}$. 
\end{lm} } \begin{proof}[Proof of Theorem \ref{nonsemisimp: sen class fibration}] By Corollary \ref{semiperf: smooth cor} the class $c_{X_1,p}$ is the product of $\alpha(\Omega^1_{X_0})\in H^{p-1}(X_0,\Lambda^p T_{X_0}\otimes (F_{X_0}^*T_{X_0})^{\vee})$ with $\ob_{F,X_1}\in H^1(X_0,F^*_{X_0}T_{X_0})$. The vector bundle $\Omega^1_{X_0}$ fits into the exact sequence dual to (\ref{nonsemisimp: fundamental triangle}): \begin{equation} 0\tilde{o} f^*\Omega^1_{Y_0}\tilde{o} \Omega^{1}_{X_0}\tilde{o} \Omega^1_{X_0/Y_0}\tilde{o} 0 \end{equation} and we can apply Lemma \ref{nonsemisimp: reducible formula} to $E=\Omega^1_{X_0}$. It gives that the class $\alpha(\Omega^1_{X_0})\in H^{p-1}(X_0, F_{X_0}^*\Omega^1_{X_0}\otimes \Lambda^p T_{X_0})$ is the image of the class $v(\Omega^1_{X_0})^{p-1}\in H^{p-1}(X_0,F_{X_0}^*f_0^*\Omega^1_{Y_0}\otimes \Lambda^p T_{X_0})$. Therefore the product $\alpha(\Omega^1_{X_0})\cdot \ob_{F,X_1}\in H^p(X_0,\Lambda^p T_{X_0})$ is equal to the product of $v(\Omega^1_{X_0})^{p-1}$ with the image of $\ob_{F,X_1}$ under the map $H^1(X_0,F_{X_0}^*T_{X_0})\tilde{o} H^1(X_0, F_{X_0}^*f_0^*T_{Y_0})$. By functoriality of obstructions, this image is equal to the image of $\ob_{F,Y_1}$ under the pullback map $H^1(Y_0,F_{Y_0}^*T_{Y_0})\tilde{o} H^1(X_0, F_{X_0}^*f_0^*T_{Y_0})$. By the Leray spectral sequence the receptacle of the class $v(\Omega^1_{X_0})^{p-1}$ fits into the exact sequence \begin{multline} 0\tilde{o} H^1(Y_0, R^{p-2}f_{0*}(F_{X_0}^*f_0^*\Omega^1_{Y_0}\otimes \Lambda^p T_{X_0}))\tilde{o} H^{p-1}(X_0, F_{X_0}^*f_0^*\Omega^1_{Y_0}\otimes \Lambda^p T_{X_0})\xrightarrow{\rho} \\ H^0(Y_0, R^{p-1}f_{0*}(F_{X_0}^*f_0^*\Omega^1_{Y_0}\otimes \Lambda^p T_{X_0}))\tilde{o} 0 \end{multline} Since $H^2(Y_0,{\mathcal F})=0$ for any quasicoherent sheaf ${\mathcal F}$ on the curve $Y_0$, our product only depends on the image of $v(\Omega^1_{X_0})^{p-1}$ in $H^0(Y_0, R^{p-1}f_{0*}(F_{X_0}^*f_0^*\Omega^1_{Y_0}\otimes \Lambda^p T_{X_0}))$ and is equal to the product of this image $\rho(v(\Omega^1_{X_0})^{p-1})$ with $\ob_{F,Y_1}$ under the identification $H^p(X_0,\Lambda^p T_{X_0})\simeq H^1(Y_0,T_{Y_0}\otimes R^{p-1}f_{0*}\Lambda^{p-1}T_{X_0/Y_0})$. It remains to observe that the image of the class $v(\Omega^1_{X_0})\in H^1(X_0, f^*\Omega^1_{Y_0}\otimes T_{X_0/Y_0})$ in $H^0(Y_0,R^1(f_0^*\Omega^1_{Y_0}\otimes T_{X_0/Y_0}))=H^0(Y_0, \Omega^1_{Y_0}\otimes R^1f_{0*}T_{X_0/Y_0})$ is the Kodaira-Spencer map $\mathrm{ks}_{f_0}$. Therefore $\rho(\alpha(\Omega^1_{X_0})^{p-1})\in H^0(Y_0, R^{p-1}f_{0*}(F_{X_0}^*f_0^*\Omega^1_{Y_0}\otimes \Lambda^p T_{X_0}))=H^0(Y_0, F_{Y_0}^*\Omega^1_{Y_0}\otimes T_{Y_0}\otimes R^{p-1}f_{0*}\Lambda^{p-1}T_{X_0/Y_0})$ is equal to $\mathrm{ks}_{f_0}^{p-1}$ and the desired formula for $c_{X_1,p}$ is proven. \end{proof} We will now demonstrate that there does exist a fibration as in Theorem \ref{nonsemisimp: sen class fibration} for which $c_{X_1,p}$ is non-zero. From now until the end of this section assume that $k=\overline{{\mathbb F}}_p$. \begin{pr}\label{nonsemisimp: p dim example} For every $p$ there exists a smooth projective scheme $X$ over $W(k)$ with $\dim_{W(k)}(X)=p$ such that the class $c_{X,p}\in H^p(X_0,\Lambda^p T_{X_0})$ is non-zero, and therefore the Sen operator on $\mathrm{dR}_{X_0}$ is not semi-simple. \end{pr} \begin{rem} Since $X$ has relative dimension $p$, the de Rham complex $\mathrm{dR}_{X_0}$ of $X_0$ is necessarily decomposable, by \cite[Corollaire 2.3]{deligne-illusie}. 
\end{rem} \begin{proof} We will construct $X$ as the $(p-1)$th relative Cartesian power $S^{\times_Y(p-1)}$ of an appropriate smooth projective morphism $h: S\tilde{o} Y$ of relative dimension $1$, where $Y$ is a geometrically connected smooth projective relative curve over $W(k)$. Assume that \begin{enumerate} \item The Kodaira-Spencer map $\mathrm{ks}_{h_0}:T_{Y_0}\tilde{o} R^1h_{0*}T_{S_0/Y_0}$ is an injection of vector bundles \item $\coker \mathrm{ks}_{h_0}$ is a direct sum $\bigoplus L_j$ of line bundles such that $\deg L_j< \frac{1}{p-1}\deg \Omega^1_{Y_0}$ for all $j$. \end{enumerate} Injectivity of the Kodaira-Spencer map implies that the curve fibration $h$ is not isotrivial, which forces the genus of $Y$ and all of the fibers of $h$ to be larger than or equal to $2$ by \cite[Th\'eorem\`e 4]{szpiro}. In particular, the Frobenius endomorphism of $Y_0$ does not lift to $Y\times_{W(k)}{W_2(k)}$ so the class $\ob_{Y_1,F}\in H^1(Y_0,F_{Y_0}^*T_{Y_0})$ is non-zero, cf. \cite[Lemma I.5.4]{raynaud-froblifts}. We will now prove that under these conditions on $S$ the class $c_{X,p}\in H^p(X_0,\Lambda^p T_{X_0})$ for the $p$-dimensional $W(k)$-scheme $X$ is non-zero, and then will check that a family of curves $h$ satisfying (1) and (2) does indeed exist. Denote by $\pi_1,\ldots,\pi_{p-1}:X=S^{\times_Y (p-1)}\tilde{o} S$ the projection maps, and by $f:X\tilde{o} Y$ the map down to $Y$ (so that $f=h\circ \pi_i$, for all $i$). We have $T_{X/Y}\simeq \bigoplus\limits_{i=1}^{p-1}\pi_i^*T_{S/Y}$. For each $i$ the pushforward $R^1f_*(\pi^{*}_i T_{S/Y})=H^1(Rh_*\circ R\pi_{i*}(\pi^*_i T_{S/Y}))=H^1(Rh_*(T_{S/Y}\otimes R\pi_{i*}{\mathcal O}_{X}))$ is equal to $R^1h_*T_{S/Y}$, because $h_*T_{S/Y}=0$ by our assumption on the genus of the fibers of $h$. The Kodaira-Spencer class $\mathrm{ks}_f: T_Y\tilde{o} R^1f_*T_{X/Y}=\bigoplus\limits_{i=1}^{p-1} R^1h_*T_{S/Y}$ is then equal to the diagonal map $\bigoplus\limits_{i=1}^{p-1} \mathrm{ks}_h$, because the extension $T_{X/Y}\tilde{o} T_X\tilde{o} f^*T_Y$ is the Baer sum of the pullbacks along $\pi_i$ of the extension $T_{S/Y}\tilde{o} T_S\tilde{o} h^*T_Y$. The pushforward $R^{p-1}f_*\Lambda^{p-1}T_{X/Y}=R^{p-1}f_*(\bigotimes\limits_{i=1}^{p-1} \pi^*_iT_{S/Y})$ is likewise identified with $(R^1h_*T_{S/Y})^{\otimes p-1}$, and the $(p-1)$th power $\mathrm{ks}_f^{p-1}$ of the Kodaira-Spencer map is equal to $\mathrm{ks}_h^{\otimes p-1}:T_Y^{\otimes p-1}\tilde{o} (R^1h_*T_{S/Y})^{\otimes p-1}$. By Theorem \ref{nonsemisimp: sen class fibration}, the class $c_{X,p}\in H^p(X_0,\Lambda^p T_{X_0})=H^1(Y_0,T_{Y_0}\otimes R^{p-1}\Lambda^{p-1}T_{X_0/Y_0})$ is (up to a scalar from ${\mathbb F}_p^{\times}$) the image of the obstruction to lifting Frobenius $\ob_{F,Y_1}\in H^1(Y_0,F^*T_{Y_0})=H^1(Y_0,T_{Y_0}^{\otimes p})$ under the map $\id_{T_{Y_0}}\otimes \mathrm{ks}_{h_0}^{\otimes p-1}:T_{Y_0}^{\otimes p}\tilde{o} T_{Y_0}\otimes (R^1h_{0*}T_{S_0/Y_0})^{\otimes p-1}$. The proof of non-vanishing of $c_{X,p}$ will be completed if we can show that the induced map $H^1(Y_0, T_{Y_0}^{\otimes p})\tilde{o} H^1(Y_0,T_{Y_0}\otimes (R^1h_{0*}T_{S_0/Y_0})^{\otimes p-1})$ is injective. For brevity, denote the vector bundle $\coker \mathrm{ks}_{h_0}$ by $Q$. The map $\id_{T_{Y_0}}\otimes \mathrm{ks}_{h_0}^{\otimes p-1}$ is an injection of vector bundles whose cokernel admits a filtration with graded pieces isomorphic to $T_{Y_0}^{\otimes i}\otimes Q^{\otimes p-i}$ for $i=1,\ldots,p-1$. 
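Let us spell out where this filtration comes from; write $B:=R^1h_{0*}T_{S_0/Y_0}$ for brevity. The short exact sequence $0\to T_{Y_0}\xrightarrow{\mathrm{ks}_{h_0}}B\to Q\to 0$ induces a filtration on $B^{\otimes p-1}$ with associated graded
\begin{equation}
\gr\left(B^{\otimes p-1}\right)\simeq \bigoplus\limits_{j=0}^{p-1}\left(T_{Y_0}^{\otimes j}\otimes Q^{\otimes p-1-j}\right)^{\oplus\binom{p-1}{j}},
\end{equation}
in which the piece with $j=p-1$ is exactly the image of $T_{Y_0}^{\otimes p-1}$. Tensoring with $T_{Y_0}$ and passing to the cokernel of $\id_{T_{Y_0}}\otimes\mathrm{ks}_{h_0}^{\otimes p-1}$ therefore leaves the pieces $T_{Y_0}^{\otimes i}\otimes Q^{\otimes p-i}$ with $1\leq i\leq p-1$, each appearing with some multiplicity.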
By our assumption (2), each $T_{Y_0}^{\otimes i}\otimes Q^{\otimes p-i}$ is a direct sum of line bundles of degree $< \frac{1}{p-1}\deg\Omega^1_{Y_0}\cdot (p-i)-\deg\Omega^1_{Y_0}\cdot i<0$ because $\deg \Omega^1_{Y_0}=2g(Y_0)-2$ is positive. Therefore $H^0(Y_0,\coker(\id_{T_{Y_0}}\otimes \mathrm{ks}_{h_0}^{\otimes p-1}))=0$ which implies that the map induced by $\id_{T_{Y_0}}\otimes \mathrm{ks}_{h_0}^{\otimes p-1}$ on cohomology in degree $1$ is injective. We will now prove that there exists a family of curves $h:S\tilde{o} Y$ satisfying properties (1), (2). We will construct $Y$ as a complete intersection of ample divisors in an appropriate compactification ${\mathcal M}_g^*$ of the moduli space of curves of genus $g$, following the idea of Mumford (\hspace{1sp}\cite[\S 1]{oort-complete}) for constructing proper subvarieties of moduli spaces of curves. We only need to make sure that this construction goes through over $W(k)$, and check that the condition (2) is fulfilled. Denote by $M_{g,W(k)}$ the coarse moduli scheme of the stack ${\mathcal M}_{g,W(k)}$, and by $\overline{M}_{g,W(k)}$ the coarse moduli scheme of its Deligne-Mumford compactification. By \cite[Theorem V.2.5]{faltings-chai} there exists a projective scheme $A_{g,W(k)}^*$ over $W(k)$ that contains the coarse moduli scheme $A_{g,W(k)}$ of principally polarized abelian varieties of dimension $g$ as a dense open subscheme such that $\dim_{W(k)}(A_{g,W(k)}^*\setminus A_{g,W(k)})=g$. Moreover, the Torelli map extends to a morphism $j:\overline{M}_{g,W(k)}\tilde{o} A^*_{g,W(k)}$ that induces a locally closed immersion of the locus $U\subset \overline{M}_{g,W(k)}$ of smooth curves without nontrivial automorphisms into $A^*_{g,W(k)}$. Denote also by $\pi:{\mathcal C}\tilde{o} U$ the universal curve over $U$. Assume from now on that $g\geq \max(4, \frac{p}{3}+1)$. The inequality $g\geq 4$ ensures that the locus of curves without nontrivial automorphisms has complement of codimension $\geq 2$ in ${\mathcal M}_g$, and the condition $g\geq \frac{p}{3}+1$ will be used to ensure property (2). Denote by $M_{g,W(k)}^*$ the closure of $j(M_{g,W(k)})$ inside $A_{g,W(k)}^*$. This is a flat projective scheme over $W(k)$ that contains $U$ as a dense open subscheme, such that fibers of $M_{g,W(k)}^*\setminus U$ over both points of $\Spec W(k)$ have codimension $\geq 2$ (though we do not claim that this complement is flat over $W(k)$). Denote by $\omega$ the line bundle $\det (\pi_*\Omega^1_{{\mathcal C}/U})$ on $U$. By the property \cite[Theorem V.2.5(1)]{faltings-chai} of the Satake compactification, some positive power $\omega^{\otimes m}$ extends to a very ample $L$ line bundle on $M_{g,W(k)}^*$. Take $Y\subset U\subset M_{g,W(k)}^*$ to be a smooth proper complete intersection of zero loci of sections of $L$ that has $\dim(Y/W(k))=1$. It is possible to find such a complete intersection entirely contained in $U$ because the codimension of the complement of $U$ is at least $2$. Let $h:S\tilde{o} Y$ be the restriction of the universal curve ${\mathcal C}$ to $Y$. We have the exact sequence corresponding to the closed embedding $\iota:Y\hookrightarrow U$: \begin{equation} 0\tilde{o} T_Y\xrightarrow{d\iota} T_{M_{g,W(k)}}|_Y\tilde{o} L|_Y^{\oplus 3g-4}\tilde{o} 0 \end{equation} where we identified the normal bundle to $Y$ with $L|_Y^{\oplus 3g-4}$. 
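Here we used that the coarse moduli space has relative dimension $3g-3$ over $W(k)$, so that the relative curve $Y$ is cut out inside $M_{g,W(k)}^*$ by $3g-4$ sections of $L$, together with the fact that the normal bundle of the zero locus of a regular section of a line bundle is the restriction of that line bundle; this gives
\begin{equation}
N_{Y/M_{g,W(k)}^*}\simeq L|_Y^{\oplus 3g-4}.
\end{equation}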
Since $U$ is the locus where the map ${\mathcal M}_{g,W(k)}\to M_{g,W(k)}$ from the moduli stack of curves to the coarse moduli space is an isomorphism, the sheaf $T_{M_{g,W(k)}}|_Y$ is identified with $R^1h_*T_{S/Y}$ in a way that identifies $d\iota$ with the Kodaira-Spencer map $\mathrm{ks}_h$. By \cite[Theorem 2]{harris-mumford}, the determinant line bundle $\det(T_{M_{g,W(k)}}|_Y)$ is isomorphic to $\omega^{\otimes -13}|_Y$ and, in particular, has negative degree. Therefore $\deg(T_Y)+(3g-4)\cdot \deg L|_Y\leq 0$, giving that $\deg L|_Y\leq -\deg(T_Y)\cdot \frac{1}{3g-4}< -\deg(T_Y)\cdot \frac{1}{p-1}$ by our assumption on $g$. Therefore the family $h:S\to Y$ satisfies all of the desired assumptions. \end{proof} \begin{rem} At the moment it is unclear to me whether the class $c_{X,p}$ is non-zero for other natural classes of varieties. For instance, when $p=2$, is $c_{X,2}\in H^2(X_0,\Lambda^2 T_{X_0})\simeq k$ non-zero for a general K3 surface $X$ over $W_2(k)$? \end{rem} \section{Cohomology of abelian varieties}\label{AV coherent: section} In this section, we apply Theorem \ref{cosimp: main theorem} to the de Rham, coherent, and \'etale cohomology of abelian varieties equipped with a group action. These results will play the key role in our example of a liftable variety with a non-degenerate conjugate spectral sequence, which ultimately relies on the existence of a supersingular abelian variety with an action of a group, whose Hodge cohomology is equivariantly decomposable, but de Rham cohomology is not. In this section, for a commutative ring $R$ and a discrete group $G$ we denote by $D_G(R)$ the derived $\infty$-category of complexes of $R$-modules equipped with a $G$-action. \subsection{Coherent cohomology.} Recall that for a group $G$ acting on a finite-dimensional $k$-vector space $V$ we have a class $\alpha(V)\in \Ext^{p-1}_G(\Lambda^p V, V^{(1)})=H^{p-1}(G,V^{(1)}\otimes(\Lambda^p V)^{\vee})$ defined in Definition \ref{free cosimplicial: alpha definition}, corresponding to the extension $V^{(1)}[-2]\to\tau^{\geq 2}S^p(V[-1])\to\Lambda^p V[-p]$ in $D_G(k)$. If there exists a representation of $G$ on a free $W_2(k)$-module $\widetilde{V}$ such that $\widetilde{V}/p\simeq V$ then the module $\widetilde{V}^{(1)}\otimes(\Lambda^p\widetilde{V})^{\vee}$ defines the Bockstein homomorphisms $\mathrm{Bock}^{i}:H^i(G,V^{(1)}\otimes(\Lambda^p V)^{\vee})\to H^{i+1}(G,V^{(1)}\otimes(\Lambda^p V)^{\vee})$. \begin{pr}\label{AV coherent: coherent main} Let $A$ be an abelian scheme over $W(k)$ equipped with an action of a discrete group $G$. \begin{enumerate}[leftmargin=*] \setcounter{enumi}{-1} \item There exists a $G$-equivariant equivalence $\tau^{\leq p-1}\mathrm{R}\Gamma(A,{\mathcal O})\simeq \bigoplus\limits_{i=0}^{p-1}H^i(A,{\mathcal O})[-i]$. \item If $F^*_{A_0}:H^1(A_0,{\mathcal O})\to H^1(A_0,{\mathcal O})$ is zero, then there exists a $G$-equivariant equivalence \begin{equation} \tau^{\leq p}\mathrm{R}\Gamma(A,{\mathcal O})\simeq\bigoplus\limits_{i=0}^{p}H^i(A,{\mathcal O})[-i] \end{equation} \item Suppose that $A$ admits an endomorphism $\widetilde{F}_A:A\to A$ that lifts the absolute Frobenius endomorphism $F_{A_0}$ (in particular, $A_0$ is ordinary) and commutes with the action of $G$.
Then the extension class $H^p(A,{\mathcal O})\tilde{o} \bigoplus\limits_{i=0}^{p-1}H^i(A,{\mathcal O})[p-i+1]$ in $D_G(W(k))$ corresponding to $\tau^{\leq p}\mathrm{R}\Gamma(A,{\mathcal O})$ lands in the direct summand $H^1(A,{\mathcal O})[p]$ and the resulting class in $\Ext^p_{G,W(k)}(H^p(A,{\mathcal O}),H^1(A,{\mathcal O}))=H^p(G,H^1(A,{\mathcal O})\otimes\Lambda^pH^1(A,{\mathcal O})^{\vee})$ is equal to \begin{equation}\label{AV coherent: ext class formula} \widetilde{F}^*_A(\mathrm{Bock}^{p-1}(\alpha(H^1(A_0,{\mathcal O})))) \end{equation} where $\alpha(H^1(A_0,{\mathcal O}))\in H^{p-1}(G,H^1(A_0,{\mathcal O})^{(1)}\otimes\Lambda^pH^1(A_0,{\mathcal O})^{\vee})$ is the class corresponding to the representation of $G$ on the $k$-vector space $H^1(A_0,{\mathcal O})$, and $\mathrm{Bock}^{p-1}$ is the Bockstein homomorphism induced by the $W(k)$-module $H^1(A,{\mathcal O})^{(1)}\otimes\Lambda^pH^1(A,{\mathcal O})^{\vee}$. \end{enumerate} \end{pr} \begin{proof} The identity section $e:\Spec W(k)\tilde{o} A$ induces the augmentation $e^*:\mathrm{R}\Gamma(A,{\mathcal O})\tilde{o} W(k)$, and the cohomology algebra $H^*(A,{\mathcal O})$ is the exterior algebra on $H^1(A,{\mathcal O})$ so we can apply Theorem \ref{cosimp: equivariant main theorem} to the $G$-equivariant derived commutative algebra $\mathrm{R}\Gamma(A,{\mathcal O})$. Part (0) then follows. To prove statements (1) and (2), we will apply (\ref{cosimp: equivariant main formula}). The formula for the extension $H^p(A,{\mathcal O})\tilde{o} \tau^{\leq p-1}\mathrm{R}\Gamma(A,{\mathcal O})[p+1]$ in $D_G(W(k))$ reads: \begin{equation}\label{AV coherent: main formula} \Lambda^p H^1(A,{\mathcal O})\xrightarrow{\alpha(H^1(A_0,{\mathcal O}))} H^1(A_0,{\mathcal O})^{(1)}[p-1]\xrightarrow{F^*_{A_0}} H^1(A_0,{\mathcal O})[p-1]\xrightarrow{\mathrm{Bock}_{H^1(A,{\mathcal O})}} H^1(A,{\mathcal O})[p] \end{equation} Here we identified the Frobenius endomorphism of the cosimplicial algebra $\mathrm{R}\Gamma(A_0,{\mathcal O})$ with $F^*_{A_0}$ by Lemma \ref{applications: coh cosimplicial frob is frob}. In particular, if $F^*_{A_0}$ on $H^1(A_0,{\mathcal O})$ is zero, then the composition (\ref{AV coherent: main formula}) vanishes, which proves part (1). Regarding part (2), note that the lift of Frobenius $\widetilde{F}_{A}^*$ induces the following commutative diagram in $D_G(W(k))$: \begin{equation} \begin{tikzcd} H^1(A_0,{\mathcal O})^{(1)}\arrow[r, "F_{A_0}^*"]\arrow[d, "\mathrm{Bock}_A^{(1)}"] & H^1(A_0,{\mathcal O})\arrow[d, "\mathrm{Bock}_A"] \\ H^1(A,{\mathcal O})^{(1)}[1]\arrow[r, "\widetilde{F}_A^*"] & H^1(A, {\mathcal O})[1] \end{tikzcd} \end{equation} This allows us to rewrite (\ref{AV coherent: main formula}) as \begin{equation}\label{AV coherent: rewritten formula} \Lambda^p H^1(A,{\mathcal O})\xrightarrow{\alpha(H^1(A_0,{\mathcal O}))} H^1(A_0,{\mathcal O})^{(1)}[p-1]\xrightarrow{\mathrm{Bock}_{H^1(A,{\mathcal O})^{(1)}}[p]} H^1(A,{\mathcal O})^{(1)}[p]\xrightarrow{\widetilde{F}^*_A} H^1(A,{\mathcal O})[p] \end{equation} The desired formula (\ref{AV coherent: ext class formula}) now follows from Lemma \ref{cosimp applications: Bockstein}. \end{proof} \begin{cor}\label{AV coherent: supersingularity criterion} Let $n\geq 2p$ be an integer and $E_0$ be an elliptic curve over $k$. The group $GL_n({\mathbb Z})$ acts on the abelian variety $E_0^{n}$ and therefore acts on the complex $\mathrm{R}\Gamma(E_0^n,{\mathcal O})$. 
The truncation $\tau^{\leq p}\mathrm{R}\Gamma(E_0^n,{\mathcal O})$ decomposes in $D_{GL_n({\mathbb Z})}(k)$ as a direct sum of its cohomology modules if and only if $E_0$ is supersingular. \end{cor} \begin{proof} We can choose a lift $E$ of $E_0$ to an elliptic curve over $W(k)$ and apply Proposition \ref{AV coherent: coherent main} to the abelian scheme $E^n$ that is being acted on by the group $G=GL_n({\mathbb Z})$. If $E_0$ is supersingular then Proposition \ref{AV coherent: coherent main}(1) gives that $\tau^{\leq p}\mathrm{R}\Gamma(E_0^n,{\mathcal O})$ is decomposable, for all $n$. If $E_0$ is ordinary, assume moreover that our chosen lift $E$ is the canonical one, so that $F_{E_0}$ lifts to a map $\widetilde{F}_E:E\to E$. Following Proposition \ref{AV coherent: coherent main}(2), we need to check that the mod $p$ reduction of the class $\widetilde{F}^*_{E^n}(\mathrm{Bock}^{p-1}(\alpha(H^1(E_0^n,{\mathcal O}))))\in H^p(GL_n({\mathbb Z}),H^1(E^n,{\mathcal O})\otimes\Lambda^pH^1(E^n,{\mathcal O})^{\vee})$ is non-zero. Note that the map $\widetilde{F}^*_{E^n}$ is a bijection, so we only need to check that its argument is non-zero modulo $p$. Let $F/{\mathbb Q}$ be the quadratic extension provided by Proposition \ref{group cohomology: ring of integers main}. Identifying the abelian group ${\mathcal O}_F$ with ${\mathbb Z}^{\oplus 2}$ we get an embedding $SL_p({\mathcal O}_F)\subset GL_{2p}({\mathbb Z})\subset GL_n({\mathbb Z})$. The induced action of $SL_p({\mathcal O}_F)$ on $H^1(E^n,{\mathcal O})=H^1(E,{\mathcal O})\otimes_{W(k)}W(k)^{\oplus n}\simeq W(k)^{\oplus n}$ contains the tautological representation $W(k)^{\oplus p}$ as a direct summand, hence the mod $p$ reduction of the class $\mathrm{Bock}^{p-1}(\alpha(H^1(E_0^n,{\mathcal O})))$ does not vanish in $H^p(SL_p({\mathcal O}_F), H^1(E_0^n,{\mathcal O})^{(1)}\otimes \Lambda^p H^1(E_0^n,{\mathcal O})^{\vee})$. Therefore the analogous class in $H^p(GL_n({\mathbb Z}), H^1(E_0^n,{\mathcal O})^{(1)}\otimes \Lambda^p H^1(E_0^n,{\mathcal O})^{\vee})$ does not vanish either, and the non-decomposability of $\tau^{\leq p}\mathrm{R}\Gamma(E_0^n,{\mathcal O})$ in $D_{GL_n({\mathbb Z})}(k)$ follows. \end{proof} \subsection{De Rham cohomology.} We can also apply Theorem \ref{cosimp: main theorem} to the de Rham cohomology of an abelian variety equipped with a group action. \begin{pr}\label{AV coherent: AV de rham} Let $A_0$ be an abelian variety over $k$ equipped with an action of a discrete group $G$. \begin{enumerate} \item $\tau^{\leq p-1}\mathrm{R}\Gamma_{\mathrm{dR}}(A_0/k)$ is $G$-equivariantly equivalent to $\bigoplus\limits_{i=0}^{p-1}H^i_{\mathrm{dR}}(A_0/k)[-i]$. \item The extension class $H^p_{\mathrm{dR}}(A_0/k)\to\bigoplus\limits_{i=0}^{p-1}H^i_{\mathrm{dR}}(A_0/k)[p+1-i]$ corresponding to $\tau^{\leq p}\mathrm{R}\Gamma_{\mathrm{dR}}(A_0/k)$ lands in the direct summand $H^1_{\mathrm{dR}}(A_0/k)[p]$ and the resulting class in $\Ext^p_{G,k}(H^p_{\mathrm{dR}}(A_0/k),H^1_{\mathrm{dR}}(A_0/k))=H^p(G,H^1_{\mathrm{dR}}(A_0/k)\otimes \Lambda^p H^1_{\mathrm{dR}}(A_0/k)^{\vee})$ is equal to \begin{equation}F_{A_0}^*\circ\mathrm{Bock}^{p-1}(\alpha(H^1_{\mathrm{dR}}(A_0/k)))\end{equation} where we use the crystalline cohomology $H^1_{\mathrm{cris}}(A_0/W_2(k))$ as a lift of the $G$-representation $H^1_{\mathrm{dR}}(A_0/k)$ to define the Bockstein homomorphism.
\end{enumerate} \end{pr} \begin{proof} The derived commutative algebra $\mathrm{R}\Gamma_{\mathrm{dR}}(A_0/k)$ in the category of $G$-representations on $k$-vector spaces admits a lift to $W(k)$ given by the crystalline cohomology $\mathrm{R}\Gamma_{\mathrm{cris}}(A_0/W(k))$. This algebra has an augmentation $e^*:\mathrm{R}\Gamma_{\mathrm{cris}}(A_0/W(k))\tilde{o} \mathrm{R}\Gamma_{\mathrm{cris}}(\Spec k/W(k))=W(k)$ so we can apply Theorem \ref{cosimp: equivariant main theorem} to it. This gives decomposability of $\tau^{\leq p-1}\mathrm{R}\Gamma_{\mathrm{dR}}(A_0/k)$ and describes the extension class $H^p_{\mathrm{cris}}(A_0/W(k))\tilde{o} H^1_{\mathrm{cris}}(A_0/W(k))[p]$ as the composition \begin{multline}\label{AV coherent: AV de rham formula} H^p_{\mathrm{cris}}(A_0/W(k))=\Lambda^p H^1_{\mathrm{cris}}(A_0/W(k))\tilde{o} \Lambda^p H^1_{\mathrm{dR}}(A_0/k)\xrightarrow{\alpha(H^1_{\mathrm{dR}}(A_0/k))} \\ H^1_{\mathrm{dR}}(A_0/k)^{(1)}[p-1]\xrightarrow{\varphi_{\mathrm{R}\Gamma_{\mathrm{dR}}(A_0/k)}}H^1_{\mathrm{dR}}(A_0/k)[p-1]\xrightarrow{\mathrm{Bock}_{H^1_{\mathrm{cris}}(A_0/W(k))}} H^1_{\mathrm{cris}}(A_0/W(k))[p] \end{multline} Since $\varphi_{\mathrm{R}\Gamma_{\mathrm{dR}}(A_0/k)}$ lifts to an endomorphism $\varphi_{A_0,\mathrm{cris}}^*$ of $\mathrm{R}\Gamma_{\mathrm{cris}}(A_0/W(k))$, the composition of the last two arrows of (\ref{AV coherent: AV de rham formula}) is equivalent to \begin{equation} H^1_{\mathrm{dR}}(A_0/k)^{(1)}[p-1]\xrightarrow{\mathrm{Bock}_{H^1_{\mathrm{cris}}(A_0/W(k))^{(1)}}}H^1_{\mathrm{cris}}(A_0/W(k))^{(1)}[p]\xrightarrow{\varphi_{A_0,\mathrm{cris}}^*}H^1_{\mathrm{cris}}(A_0/W(k))[p] \end{equation} As in the deduction of Theorem \ref{cosimp applications: the best part} from Theorem \ref{cosimp applications: HT extension}, we now apply Lemma \ref{cosimp applications: Bockstein} to conclude that the class defined by the composition (\ref{AV coherent: AV de rham formula}) equals to $\varphi_{A_0}^*\circ\mathrm{Bock}_{p-1}(\alpha(H^1_{\mathrm{dR}}(A_0/k)))\in \Ext^{p}_G(H^p_{\mathrm{cris}}(A_0/W(k)),H^1_{\mathrm{cris}}(A_0/W(k)))$. Reducing this class modulo $p$ gives the desired result. \end{proof} \subsection{Galois action on \'etale cohomology of abelian varieties.} This application was suggested by Alexei Skorobogatov who had independently conjectured the validity of Proposition \ref{AV coherent: etale} for $p=2$. Let $A$ be an abelian variety over an arbitrary field $F$ of characteristic not equal to $p$. Denote by $\overline{F}$ a separable closure of $F$ and by $G_F:=\Gal(\overline{F}/F)$ the absolute Galois group of $F$. We view $\mathrm{R}\Gamma_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)$ as an object of the derived category $\widehat{D}_{G_F}({\mathbb Z}_p)$ of $p$-complete ${\mathbb Z}_p$-modules equipped with a continuous action of $G_F$. More precisely, it is the inverse limit of categories $D_{G_F}({\mathbb Z}/p^n)$ of discrete ${\mathbb Z}/p^n$-modules over the profinite group $G_F$. \begin{pr}\label{AV coherent: etale} \begin{enumerate} \item The truncation $\tau^{\leq p-1}\mathrm{R}\Gamma_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)$ is equivalent to $\bigoplus\limits_{i=0}^{p-1} H^i_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)[-i]$ in $\widehat{D}_{G_F}({\mathbb Z}_p)$. 
\item The extension class $H^p_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)\tilde{o} \bigoplus\limits_{i=0}^{p-1} H^i_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)[p+1-i]$ corresponding to the fiber sequence $\tau^{\leq p-1}\mathrm{R}\Gamma_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)\tilde{o} \tau^{\leq p}\mathrm{R}\Gamma_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)\tilde{o} H^p_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)[-p]$ can be described as \begin{multline}\label{AV coherent: etale formula} H^p_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)\xrightarrow{\alpha(H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))}H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p)[p-1]\xrightarrow{\mathrm{Bock}_{H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)}}H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)[p]\\ \xrightarrow{\oplus}\bigoplus\limits_{i=0}^{p-1} H^i_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)[p+1-i] \end{multline} \end{enumerate} \end{pr} \begin{proof} The \'etale cohomology $\mathrm{R}\Gamma_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)$ has a structure of a derived commutative algebra in $\widehat{D}_{G_F}({\mathbb Z}_p)$ by Lemma \ref{dalg: site cohomology}. The pullback along the identity section $e: \Spec F\tilde{o} A$ induces an augmentation, and multiplication on cohomology induces isomorphisms $H^i_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)\simeq\Lambda^i H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)$. Hence we may apply Theorem \ref{cosimp: equivariant main theorem} to $\mathrm{R}\Gamma_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)$ which readily implies part (1). Part (2) follows from the formula (\ref{cosimp: equivariant main formula}) by the fact that the Frobenius map $\varphi_{\mathrm{R}\Gamma_{\text{et}}(A_{\overline{F}},{\mathbb F}_p)}:H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p)\tilde{o} H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p)$ is the identity map, by Lemma \ref{dalg: frobenius on etale}. \end{proof} We can apply this to describe some of the differentials in the Hochschild-Serre spectral sequence of $A$ with coefficients in ${\mathbb Z}_p$ and ${\mathbb F}_p$. Here $H^i(G_F,-)$ denotes continuous cohomology of the Galois group $G_F$. 
\begin{cor} \begin{enumerate} \item In the spectral sequence $E_{2}^{i,j}=H^i(G_F,H^j_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p))\Rightarrow H_{\text{et}}^{i+j}(A,{\mathbb Z}_p)$ there are no non-zero differentials on pages $E_2,\ldots, E_{p-1}$, and the differentials $d_{p}^{i,p}:H^i(G_F,H^p_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p))\tilde{o} H^{i+p}(G_F,H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p))$ can be described as \begin{equation} H^i(G_F,\Lambda^p H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p))\xrightarrow{\alpha(H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))}H^{i+p-1}(G_F,H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))\xrightarrow{\mathrm{Bock}^{i+p-1}_{H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)}}H^{i+p}(G_F,H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}_p)) \end{equation} \item Likewise, the the spectral sequence $E_{2}^{i,j}=H^i(G_F,H^j_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))\Rightarrow H^{i+j}_{\text{et}}(A,{\mathbb F}_p)$ has no non-zero differentials on pages $E_2,\ldots, E_{p-1}$, and the differentials $d_{p}^{i,p}:H^i(G_F,H^p_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))\tilde{o} H^{i+p}(G_F,H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))$ are equal to \begin{equation} \mathrm{Bock}^{i+p-1}_{H^1_{\text{et}}(A_{\overline{F}},{\mathbb Z}/p^2)}\circ\alpha(H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))-\alpha(H^1_{\text{et}}(A_{\overline{F}},{\mathbb F}_p))\circ \mathrm{Bock}_{H^p_{\text{et}}(A_{\overline{F}},{\mathbb Z}/p^2)}^{i} \end{equation} \end{enumerate} \end{cor} \begin{proof} Part (1) follows immediately from the formula (\ref{AV coherent: etale formula}) because $d_p^{i,p}$ is the result of applying the functor $H^i(G_F,-)$ to that map. Part (2) follows by applying Lemma \ref{cosimp applications: hom bockstein} to describe the effect of the mod $p$ reduction of (\ref{AV coherent: etale formula}) on cohomology. \end{proof} \section{Liftable variety with non-degenerate conjugate spectral sequence} In this section we construct an example of a smooth projective variety over $k$ that lifts to a smooth projective scheme over $W(k)$ but whose conjugate spectral sequence has a non-zero differential. Let us start by describing our example. Choose an elliptic curve $E$ over $W(k)$ such that the special fiber $E_0=E\times_{W(k)}k$ is supersingular. Denote $p^2$ by $q$, and consider the finite flat group scheme $H:=E[p]\otimes_{{\mathbb F}_p}{\mathbb F}_q^{\oplus p}$. It is isomorphic to a product of $2p$ copies of $E[p]$, and we write it this way to define an action of the group $GL_{p}({\mathbb F}_q)$ on it via the tautological representation on ${\mathbb F}_q^{\oplus p}$. Define the non-commutative finite flat group scheme \begin{equation}\label{nondeg example: G formula} G:=SL_{p}({\mathbb F}_q)\ltimes (E[p]\otimes_{{\mathbb F}_p}{\mathbb F}_q^{\oplus p}) \end{equation} The main result of this section is that the classifying stack $BG_0$ of the special fiber of this finite group scheme has a non-degenerate conjugate spectral sequence. In Corollary \ref{nondeg example: main corollary} we then approximate the stack $BG$ by a smooth projective scheme whose special fiber also has a non-zero differential in its conjugate spectral sequence. \begin{thm}\label{nondeg example: maint thm stack} The differential $d^{0,p}_{p+1}:H^0(BG_0,L\Omega^p_{BG_0/k})\tilde{o} H^{p+1}(BG_0,{\mathcal O})$ on the $(p+1)$st page of the conjugate spectral sequence of the classifying stack $BG_0$ is non-zero. 
\end{thm} In general, if $X_0$ is an algebraic stack over $k$, the spectral sequence associated with the conjugate filtration on its de Rham complex starts with the second page of the form $E_2^{i,j}=H^i(X_0,L\Omega^j_{X_0/k})$. If $X_0$ admits a lift over $W_2(k)$ then there are no non-zero differentials on pages $E_2,\ldots,E_p$, by the main result of \cite{kubrak-prikhodko-di}. In this case we denote by $d^{i,j}_{p+1}:H^i(X_0,L\Omega^j_{X_0/k})\tilde{o} H^{i+p+1}(X_0,L\Omega^{j-p}_{X_0/k})$ the differentials on page $E_{p+1}$. Although we already have a formula for the differential $d^{0,p}_{p+1}$, provided by Theorem \ref{cosimp applications: the best part}, we take a somewhat roundabout approach to proving Theorem \ref{nondeg example: maint thm stack} by proving as an intermediary that for a certain $n$ the quotient stack of the abelian variety $E_0^{\times n}$ by a well-chosen infinite group has a non-degenerate conjugate spectral sequence. This non-degeneracy will arise from the results of Section \ref{AV coherent: section} by contrasting the extensions in the canonical filtrations on de Rham and Hodge cohomology of $E_0^{\times n}$, viewed as complexes with a group action. Let $F$ be a number field provided by Proposition \ref{group cohomology: ring of integers main}. It is a quadratic extension of ${\mathbb Q}$ such that ${\mathcal O}_F/p\simeq{\mathbb F}_{p^2}={\mathbb F}_q$. Consider the abelian scheme $A:=E\otimes_{{\mathbb Z}}{\mathcal O}_F^{\oplus p}$ equipped with the natural action of the group $GL_p({\mathcal O}_F)$. The `$\otimes$' symbol refers here to Serre's tensor product \cite[1.7.4]{conrad-cm}. Explicitly, choosing a ${\mathbb Z}$-basis in ${\mathcal O}_F$ we get an identification $A\simeq E^{\times 2p}$ and the group $GL_p({\mathcal O}_F)$ acts through the embedding $GL_p({\mathcal O}_F)\hookrightarrow GL_{2p}({\mathbb Z})$. We can view the multiplication-by-$p$ map $A\xrightarrow{[p]}A$ as an $A[p]$-torsor on $A$ and therefore get a $GL_p({\mathcal O}_F)$-equivariant classifying map $A\tilde{o} BA[p]$. The action of $GL_p({\mathcal O}_F)$ on the $p$-torsion group scheme $A[p]$ factors through $GL_p({\mathcal O}_F/p)=GL_p({\mathbb F}_q)$ and $A[p]$ is $GL_p({\mathbb F}_q)$-equivariantly isomorphic to $H$. Hence the classifying map can be viewed as a $GL_p({\mathcal O}_F)$-equivariant morphism $A\tilde{o} BH$ and therefore induces a morphism \begin{equation}f:[A/SL_p({\mathcal O}_F)]\tilde{o} [(BH)/SL_p({\mathbb F}_q)]\simeq B(SL_p({\mathbb F}_q)\ltimes H)=BG\end{equation} of quotient stacks. Pullback along $f$ induces a morphism between conjugate spectral sequences of the special fibers of these stacks. In particular, there is a commutative square \begin{equation} \begin{tikzcd} H^0(BG^{(1)}_0,L\Omega^p_{BG^{(1)}_0})\arrow[r,"f_0^*"]\arrow[d, "d^{0,p}_{p+1}"] & H^0([A^{(1)}_0/SL_p({\mathcal O}_F)],L\Omega^p_{[A^{(1)}_0/SL_p({\mathcal O}_F)]})\arrow[d, "d^{0,p}_{p+1}"] \\ H^{p+1}(BG^{(1)}_0,{\mathcal O})\arrow[r,"f_0^*"] & H^{p+1}([A^{(1)}_0/SL_p({\mathcal O}_F)],{\mathcal O}) \end{tikzcd} \end{equation} Therefore the following Lemma \ref{nondeg example: classifying map on hodge coh} and Proposition \ref{nondeg example: AV quotient} imply Theorem \ref{nondeg example: maint thm stack}. \begin{lm}\label{nondeg example: classifying map on hodge coh} The map \begin{equation}f_0^*:H^0(BG_0,L\Omega^p_{BG_0/k})\tilde{o} H^0([A_0/SL_p({\mathcal O}_F)],L\Omega^p_{[A_0/SL_p({\mathcal O}_F)]})\end{equation} is an isomorphism. 
\end{lm} \begin{pr}\label{nondeg example: AV quotient} The differential $d_{p+1}^{0,p}:H^0(Y^{(1)}_0,L\Omega^p_{Y^{(1)}_0/k})\to H^{p+1}(Y^{(1)}_0,{\mathcal O})$ in the conjugate spectral sequence for $Y_0=[A_0/SL_p({\mathcal O}_F)]$ is non-zero. \end{pr} \begin{rem} A trivialization of the cotangent bundle of the abelian variety $A_0$ provides a decomposition of the de Rham complex of $A_0$ \cite[Remarque 2.6 (iv)]{deligne-illusie}. Proposition \ref{nondeg example: AV quotient} demonstrates that this decomposition in general cannot be chosen to be compatible with the action of the group of automorphisms of the lift $A$. \end{rem} \begin{proof}[Proof of Lemma \ref{nondeg example: classifying map on hodge coh}] For a finite flat group scheme $\Gamma$ over $k$ denote by $\pi:\Spec k\to B\Gamma$ the natural quotient morphism, and by $e:\Spec k \to \Gamma$ the identity section. Recall that the cotangent complex $L\Omega^1_{B\Gamma/k}$ is described as the shift $e^*L\Omega^1_{\Gamma/k}[-1]$ of the co-Lie complex of $\Gamma$, where we view $L\Omega^1_{B\Gamma/k}\in D(B\Gamma)$ as an object of the category of $\Gamma$-equivariant objects in $D(k)$. In particular, $L\Omega^{1}_{BG_0/k}$ is concentrated in degrees $\geq 0$, its $p$th exterior power $L\Omega^p_{BG_0/k}$ is concentrated in non-negative degrees as well, so $H^0(BG_0,L\Omega^p_{BG_0/k})$ is simply the invariant subspace $(H^0(\pi^*L\Omega^p_{BG_0/k}))^{G_0}=(H^0(\pi^*L\Omega^p_{BG_0/k}))^{SL_p({\mathbb F}_q)}$. In the last equality we used that $H_0$ is commutative, hence the adjoint action of $G_0$ on the cotangent complex $\pi^*L\Omega^p_{BG_0}\simeq \pi^*L\Omega^p_{BH_0}$ factors through $SL_p({\mathbb F}_q)=G_0/H_0$. Similarly, $H^0([A_0/SL_p({\mathcal O}_F)],L\Omega^p_{[A_0/SL_p({\mathcal O}_F)]})=H^0(A_0,\Omega^p_{A_0})^{SL_p({\mathcal O}_F)}=H^0(A_0,\Omega^p_{A_0})^{SL_p({\mathbb F}_q)}$. By Lemma \ref{nondeg example: p torsion cotangent complex} below, $f_0$ induces an isomorphism $H^0(f_0^*L\Omega^p_{BG_0})\simeq \Omega^p_{A_0}$. Therefore it induces an isomorphism on the subspaces of $SL_p({\mathbb F}_q)$-invariants, which finishes the proof of Lemma \ref{nondeg example: classifying map on hodge coh}. \end{proof} \begin{lm}\label{nondeg example: p torsion cotangent complex} Let $T$ be an abelian scheme over a base $S$ such that $p{\mathcal O}_S=0$. We denote the dual Lie algebra $e^*\Omega^1_{T/S}$ by $\omega_{T}$, where $e:S\to T$ is the identity section. Denote by $q:BT[p]\to S$ the structure morphism. The cotangent complex of $BT[p]$ can be described as $L\Omega^1_{BT[p]/S}\simeq q^*\omega_T\oplus q^*\omega_T[-1]$. The classifying map $f:T\to BT[p]$ corresponding to the torsor $T\xrightarrow{[p]}T$ induces the map $df:f^*L\Omega^1_{BT[p]/S}\to \Omega^1_{T/S}$ that gives an isomorphism $H^0(f^*L\Omega^1_{BT[p]/S})\simeq \Omega^1_{T/S}$. \end{lm} \begin{proof} By definition, the classifying map fits into the Cartesian square \begin{equation}\label{nondeg example: p torsion cotangent complex diagram} \begin{tikzcd} T\arrow[r,"q_T"]\arrow[d, "{[}p{]}"] & S \arrow[d, "\pi"] \\ T\arrow[r, "f"] & BT[p] \end{tikzcd} \end{equation} that induces an equivalence $L\Omega^1_{T\xrightarrow{[p]}T}\simeq q_T^*L\Omega^1_{S/BT[p]}\simeq q_T^*e^*L\Omega^1_{T[p]/S}$.
Transitivity triangles for the two vertical morphisms in (\ref{nondeg example: p torsion cotangent complex diagram}) fit into the following commutative diagram of sheaves on $T$ \begin{equation}\label{nondeg example: p torsion cotangent complex diagram2} \begin{tikzcd} q_T^*\pi^*L\Omega^1_{BT[p]/S}\arrow[r]\arrow[d] & 0\arrow[r]\arrow[d] & q_T^*e^*L\Omega^1_{T[p]/S}\arrow[d,"\sim"] \\ \left[p\right]^*\Omega^1_{T/S}\arrow[r, "{d[p]}"] & \Omega^1_{T/S}\arrow[r] & q_T^*e^*L\Omega^1_{T[p]/S} \end{tikzcd} \end{equation} We have $d[p]=0$, and the bottom triangle gives that $L\Omega^1_{T[p]/S}\simeq \omega_T\oplus\omega_T[1]$ which implies that $L\Omega^1_{BT[p]/S}\simeq q^*\omega_T\oplus q^*\omega_T[-1]$. The left vertical map in (\ref{nondeg example: p torsion cotangent complex diagram2}) is the pullback of the map $df:f^*L\Omega^1_{BT[p]/S}\tilde{o} \Omega^1_{T/S}$ along $[p]:T\tilde{o} T$. Since $[p]$ is a faithfully flat map, to prove that $df$ induces an isomorphism on $H^0$ it is enough to prove that the left vertical map in (\ref{nondeg example: p torsion cotangent complex diagram2}) induces an isomorphism on $H^0$. But this follows from the fact that the right vertical map in this diagram is an equivalence. \end{proof} To prove Proposition \ref{nondeg example: AV quotient} we observe that the conjugate spectral sequence of a quotient stack of a scheme $X_0$ by an action of a discrete group $\Gamma$ has a non-zero differential in a certain situation where the Hodge cohomology of $X_0$ is $\Gamma$-equivariantly decomposable while the de Rham cohomology is not. The precise statement is: \begin{lm}\label{nondeg example: quotient differential} Let $X_0$ be a smooth scheme over $k$ equipped with an action of a discrete group $\Gamma$ such that $X_0$ admits a lift $X_1$ over $W_2(k)$ to which the action of $\Gamma$ lifts. Suppose that for all $i\leq p$ the complex $\tau^{\leq p-i}\mathrm{R}\Gamma(X_0,\Omega^i_{X_0/k})$ is $\Gamma$-equivariantly equivalent to $\bigoplus\limits^{p-i}_{j=0}H^j(X,\Omega^i_{X/k})[-j]$ but the map $H^p_{\mathrm{dR}}([X_0/\Gamma]/k)\tilde{o} H^p_{\mathrm{dR}}(X_0/k)^{\Gamma}$ is not surjective. Then the conjugate spectral sequence for the stack $Y_0=[X_0/\Gamma]$ has a non-zero differential $d^{0,p}_{p+1}:H^0(Y^{(1)}_0,L\Omega^p_{Y^{(1)}_0/k})\tilde{o} H^{p+1}(Y^{(1)}_0,{\mathcal O}_{Y^{(1)}_0})$ on the $(p+1)$th page. \end{lm} \begin{proof} For an algebraic stack $Z$ over $k$ that satisfies $L\Omega^i_{Z/k}\in D^{\geq 0}(Z)$ for all $i$, we have $H^p_{\mathrm{dR}}(Z/k)=\mathrm{Fil}_p^{\mathrm{conj}}H^p_{\mathrm{dR}}(Z/k)$, and we denote by $a_{Z,p}:H^p_{\mathrm{dR}}(Z/k)\tilde{o} H^0(Z^{(1)},L\Omega^p_{Z^{(1)}/k})$ the map on cohomology induced by the morphism $\mathrm{Fil}_p^{\mathrm{conj}}\mathrm{dR}_Z\tilde{o}\gr_p^{\mathrm{conj}}\mathrm{dR}_Z\simeq L\Omega^p_{Z^{(1)}/k}[-p]$. Note that both $X_0$ and $[X_0/\Gamma]$ satisfy this condition. The conclusion of the lemma is equivalent to saying that the map $a_{[X_0/\Gamma],p}:H^p_{\mathrm{dR}}([X_0/\Gamma]/k)\tilde{o} H^0([X^{(1)}_0/\Gamma],L\Omega^p_{[X^{(1)}_0/\Gamma]/k})=H^0(X^{(1)}_0,\Omega^p_{X^{(1)}_0/k})^{\Gamma}$ is not surjective. 
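Recall that for the quotient of a scheme by a discrete group, quasi-coherent cohomology of the quotient stack is computed by group cohomology: for any ${\mathcal F}\in D([X_0/\Gamma])$ we have
\begin{equation}
\mathrm{R}\Gamma([X_0/\Gamma],{\mathcal F})\simeq \mathrm{R}\Gamma(\Gamma,\mathrm{R}\Gamma(X_0,{\mathcal F}|_{X_0})).
\end{equation}
In particular, since $L\Omega^p_{[X_0^{(1)}/\Gamma]/k}$ restricts to $\Omega^p_{X_0^{(1)}/k}$ along the \'etale cover $X_0^{(1)}\to [X_0^{(1)}/\Gamma]$ and $\mathrm{R}\Gamma(X_0^{(1)},\Omega^p_{X_0^{(1)}/k})$ is concentrated in non-negative degrees, the $H^0$ of the right-hand side is simply $H^0(X_0^{(1)},\Omega^p_{X_0^{(1)}/k})^{\Gamma}$, which is the identification used above.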
Consider the commutative diagram induced by the pullback along the map $\pi:X_0\tilde{o} [X_0/\Gamma]$: \begin{equation} \begin{tikzcd} 0\tilde{o} \mathrm{Fil}_{p-1}^{\mathrm{conj}}H^p_{\mathrm{dR}}([X_0/\Gamma])\arrow[r]\arrow[d] & H^p_{\mathrm{dR}}([X_0/\Gamma]) \arrow[r, "a_{[X_0/\Gamma],p}"]\arrow[d] & H^0([X^{(1)}_0/\Gamma],L\Omega^p_{[X_0^{(1)}/\Gamma]})\arrow[d,equal] \\ 0\tilde{o} (\mathrm{Fil}_{p-1}^{\mathrm{conj}}H^p_{\mathrm{dR}}(X_0))^{\Gamma}\arrow[r] & H^p_{\mathrm{dR}}(X_0)^{\Gamma}\arrow[r,"a_{X_0,p}"]& H^0(X^{(1)}_0,\Omega^p_{X^{(1)}_0})^{\Gamma} \end{tikzcd} \end{equation} Both rows are exact on the left and in the middle, and our goal is to show that the top row is not exact on the right. For the sake of a contradiction, assume that $a_{[X_0/\Gamma],p}$ is surjective. Then the second map in the bottom row is of course surjective as well, so both rows are exact sequences. However, we will now check that the left vertical map is surjective which will yield the contradiction with our assumption that the middle vertical map is not surjective. By the assumption that $X_0$ lifts to $W_2(k)$ together with the group action, we have splittings $\mathrm{Fil}^{\mathrm{conj}}_{p-1}H^p_{\mathrm{dR}}([X_0/\Gamma])\simeq \bigoplus\limits_{i=0}^{p-1}H^{p-i}([X^{(1)}_0/\Gamma],L\Omega^i_{[X^{(1)}_0/\Gamma]})$ and $\mathrm{Fil}_{p-1}^{\mathrm{conj}}H^p_{\mathrm{dR}}(X_0)\simeq\bigoplus\limits_{i=0}^{p-1} H^{p-i}(X^{(1)}_0,\Omega^i_{X^{(1)}_0})$ (the second splitting is moreover $\Gamma$-equivariant), and the map $\pi^*$ is compatible with these decompositions. Thus to prove that the map $\mathrm{Fil}^{\mathrm{conj}}_{p-1}H^p_{\mathrm{dR}}([X_0/\Gamma])\tilde{o} (\mathrm{Fil}^{\mathrm{conj}}_{p-1}H^p_{\mathrm{dR}}(X_0))^{\Gamma}$ is surjective it is enough to show that $H^{p-i}([X^{(1)}_0/\Gamma],L\Omega^i_{[X_0^{(1)}/\Gamma]})\tilde{o} H^{p-i}(X_0,\Omega^i_{X_0^{(1)}})^{\Gamma}$ is surjective for all $0\leq i\leq p-1$. The group $H^{p-i}([X^{(1)}_0/\Gamma],L\Omega^i_{[X^{(1)}_0/\Gamma]})$ is identified with the group cohomology $H^{p-i}(\Gamma, \mathrm{R}\Gamma(X^{(1)}_0,\Omega^i_{X^{(1)}_0}))$ so this surjectivity is implied by the fact that each of the complexes $\tau^{\leq p-i}\mathrm{R}\Gamma(X_0,\Omega^i_{X^{(1)}_0})$ is $\Gamma$-equivariantly decomposable. \end{proof} \begin{proof}[Proof of Proposition \ref{nondeg example: AV quotient}.] The results of Section \ref{AV coherent: section} put us in a position to apply Lemma \ref{nondeg example: quotient differential} to the action of $SL_p({\mathcal O}_F)$ on $A_0\simeq E_0\otimes_{{\mathbb Z}}{\mathcal O}_F^{\oplus p}$. On the one hand, since $A_0$ is a product of supersingular elliptic curves, Proposition \ref{AV coherent: coherent main} asserts that $\tau^{\leq p}\mathrm{R}\Gamma(A,{\mathcal O})$ is $SL_p({\mathcal O}_F)$-equivariantly decomposable, and hence so is the mod $p$ reduction $\tau^{\leq p}\mathrm{R}\Gamma(A_0,{\mathcal O})$ of this complex. This implies that the cohomology complexes $\tau^{\leq p}\mathrm{R}\Gamma(A_0,\Omega^i_{A_0})$ are decomposable as well because $\Omega^i_{A_0}$ is trivial as a $G$-equivariant bundle, so $\mathrm{R}\Gamma(A_0,\Omega^i_{A_0})$ is $G$-equivariantly quasi-isomorphic to $\mathrm{R}\Gamma(A_0,{\mathcal O})\otimes_{k}\Lambda^i(\Lie A_0)^{\vee}$. Hence the assumptions on decomposability of the Hodge cohomology in Lemma \ref{nondeg example: quotient differential} are satisfied. 
On the other hand, using Proposition \ref{AV coherent: AV de rham}, we will see that the map $H^p_{\mathrm{dR}}([A_0/SL_p({\mathcal O}_F)])\tilde{o} H^p_{\mathrm{dR}}(A_0)^{SL_p({\mathcal O}_F)}$ is not surjective. This map fits into a long exact sequence \begin{equation} \ldots\tilde{o} H^p_{\mathrm{dR}}([A_0/SL_p({\mathcal O}_F)])\tilde{o} H^p_{\mathrm{dR}}(A_0)^{SL_p({\mathcal O}_F)}\xrightarrow{\delta} H^{p+1}(SL_p({\mathcal O}_F),\bigoplus\limits_{i=0}^{p-1} H^i_{\mathrm{dR}}(A_0)[-i])\tilde{o}\ldots \end{equation} Here $\delta$ lands in the direct summand $H^p(SL_p({\mathcal O}_F),H^1_{\mathrm{dR}}(A_0))$ and it is given by the composition \begin{equation} (\Lambda^p H^1_{\mathrm{dR}}(A_0))^{SL_p({\mathcal O}_F)}\xrightarrow{\mathrm{Bock}^{p-1}(\alpha(H^1_{\mathrm{dR}}(A_0/k)))}H^p(SL_p({\mathcal O}_F),H^1_{\mathrm{dR}}(A_0)^{(1)})\xrightarrow{F^*_{A_0}} H^p(SL_p({\mathcal O}_F),H^1_{\mathrm{dR}}(A_0)) \end{equation} in the notation of Proposition \ref{AV coherent: AV de rham}. We will prove that $\delta$ is non-zero, thus checking that $H^p_{\mathrm{dR}}([A_0/SL_p({\mathcal O}_F)])\tilde{o} H^p_{\mathrm{dR}}(A_0)^{SL_p({\mathcal O}_F)}$ is not surjective. Non-vanishing of $\delta$ can be checked after replacing the base field $k$ by a finite extension, hence we may assume that $k$ contains ${\mathbb F}_{q}={\mathbb F}_{p^2}$. The $k$-vector space $H^1_{\mathrm{dR}}(A_0)$ can be $SL_p({\mathcal O}_F)$-equivariantly identified with \begin{equation}H^1_{\mathrm{dR}}(E_0)\otimes_{{\mathbb Z}}{\mathcal O}_F^{\oplus p}=H^1_{\mathrm{dR}}(E_0)\otimes_k (k\otimes_{{\mathbb F}_p}{\mathcal O}_F^{\oplus p}/p)\end{equation} The group $SL_p({\mathcal O}_F)$ acts on the RHS via its tautological action on ${\mathcal O}_F^{\oplus p}/p={\mathbb F}_q^{\oplus p}$. The $k$-vector space $k\otimes_{{\mathbb F}_p}{\mathcal O}_F^{\oplus p}/p$ is $SL_p({\mathcal O}_F)$-equivariantly isomorphic to $\bigoplus\limits_{\tau\in \Gal(F/{\mathbb Q})} V^{\tau}$ where $V^{\tau}$ is isomorphic to $V:={\mathbb F}_q^{\oplus p}\otimes_{{\mathbb F}_q}k$ as a $k$-vector space but the $SL_p({\mathcal O}_F)$-action is modified by precomposing with the automorphism $\tau\in \Gal(F/{\mathbb Q})\simeq {\mathbb Z}/2$. We get the following $SL_p({\mathcal O}_F)$-equivariant description of 1st de Rham cohomology of $A_0$: \begin{equation} H^1_{\mathrm{dR}}(A_0)\simeq H^1_{\mathrm{dR}}(E_0)\otimes_k \bigoplus\limits_{\tau\in \Gal(F/{\mathbb Q})} V^{\tau} \end{equation} Choose any element $\xi\in H^1_{\mathrm{dR}}(E_0/k)$ such that $F^*_{E_0}(\xi)\neq 0$, and choose an arbitrary lift $\widetilde{\xi}\in H^1_{\mathrm{cris}}(E_0/W_2(k))$ of $\xi$. This defines a split $SL_p({\mathcal O}_F)$-equivariant embedding $\iota_{\xi}:V\xrightarrow{\xi\otimes \id_V}H^1_{\mathrm{dR}}(E_0)\otimes_k V\subset H^1_{\mathrm{dR}}(A_0)$ that lifts to an embedding $\widetilde{V}\xrightarrow{}H^1_{\mathrm{cris}}(A_0/W_2(k))$. 
We have a commutative diagram \begin{equation}\label{nondeg example: one element extension diagram} \begin{tikzcd} k=(\Lambda^p V)^{SL_p({\mathcal O}_F)}\arrow[rr,"\mathrm{Bock}^{p-1}_{\widetilde{V}}(\alpha(V))"]\arrow[d, "\Lambda^p\iota_{\xi}"] & & H^{p}(SL_p({\mathcal O}_F),V^{(1)})\arrow[r, "\iota_{\xi}^{(1)}"] & H^{p}(SL_p({\mathcal O}_F),H^1_{\mathrm{dR}}(A_0)^{(1)})\arrow[d, "\varphi_{A_0}^*"] \\ H^p_{\mathrm{dR}}(A_0)^{SL_p({\mathcal O}_F)}\arrow[rrr, "\delta"] & & & H^p(SL_p({\mathcal O}_F),H^1_{\mathrm{dR}}(A_0)) \end{tikzcd} \end{equation} By Proposition \ref{group cohomology: ring of integers main} the class $\mathrm{Bock}_{\widetilde{V}}^{p-1}(\alpha(V))$ induces a non-zero map $k=(\Lambda^p V)^{SL_p({\mathcal O}_F)}\tilde{o} H^p(SL_p({\mathcal O}_F),V^{(1)})$. By our choice of $\xi$ the composition $\varphi_{A_0}^*\circ\iota_{\xi}^{(1)}:V^{(1)}\tilde{o} H^1_{\mathrm{dR}}(A_0)$ is a split injection. Therefore the composition of the top row and right vertical arrow in (\ref{nondeg example: one element extension diagram}) is injective, so $\delta$ is non-zero. We have thus checked that all the conditions of Lemma \ref{nondeg example: quotient differential} are satisfied, hence the differential $d^{0,p}_{p+1}:H^0(Y_0,L\Omega^p_{Y_0/k})\tilde{o} H^{p+1}(Y_0,{\mathcal O})$ in the conjugate spectral sequence for the stack $Y_0=[A_0/SL_p({\mathcal O}_F)]$ is non-zero, as desired. \end{proof} Theorem \ref{nondeg example: maint thm stack} is therefore proven, by combining Proposition \ref{nondeg example: AV quotient} with Lemma \ref{nondeg example: classifying map on hodge coh}. We can now produce an example of a liftable smooth projective variety with a non-degenerate conjugate spectral sequence: \begin{cor}\label{nondeg example: main corollary} Let $k$ be any perfect field of characteristic $p$. There exists a smooth projective scheme $X$ over $W(k)$ of relative dimension $p+1$, such that the differential $H^0(X_0,\Omega^p_{X_0})\tilde{o} H^{p+1}(X_0,{\mathcal O})$ on the $(p+1)$st page of the conjugate spectral sequence of $X_0$ is non-zero. \end{cor} \begin{proof} We will use the classical technique of approximating the classifying stack of a finite flat group scheme by a projective variety, originated by Serre \cite{serre}, cf. \cite[Theorem 1.2]{antieau-bhatt-mathew}. We choose, using Lemma \ref{nondeg example: approximation finite fields} below, a complete intersection $Z\subset{\mathbb P}^N_{W(k)}$ of relative dimension $p+1$ over $W(k)$, equipped with a free action of the group scheme $G$, such that $Z/G$ is smooth over $W(k)$. Our example is $X:=Z/G$, it is a smooth projective scheme over $W(k)$ equipped with the classifying map $f:X\tilde{o} BG$. The induced map $f_0^*:H^{p+1}(BG_0,{\mathcal O})\tilde{o} H^{p+1}(X_0,{\mathcal O})$ is seen to be injective by considering the Hochschild-Serre spectral sequence for the morphism $Z_0\tilde{o} Z_0/G_0$ using the fact that $H^i(Z_0,{\mathcal O})=0$ for $i\leq p$. Therefore the differential $H^0(X_0,\Omega^p_{X_0})\tilde{o} H^{p+1}(X_0,{\mathcal O})$ is non-zero. \end{proof} \begin{lm}\label{nondeg example: approximation finite fields} Let $k$ be any perfect field of characteristic $p$. For any finite flat group scheme $\Gamma$ over $W(k)$ and an integer $d\geq 0$ there exists a complete intersection $Z\subset{\mathbb P}^N_{W(k)}$ of relative dimension $d$ equipped with a free action of $\Gamma$ such that the quotient $Z/\Gamma$ is a smooth scheme. \end{lm} \begin{proof} If $k$ is infinite, this is proven in \cite[2.7-2.9]{bms}. 
For finite $k$ we will check that the same construction goes through if we appeal to a Bertini theorem over finite fields due to Gabber and Poonen. \cite[Lemma 2.7]{bms} provides us with an action of $\Gamma$ on a projective space ${\mathbb P}^N_{W(k)}$ such that there is an open subscheme $U\subset{\mathbb P}^N_{W(k)}$ preserved by $\Gamma$ on which $\Gamma$ acts freely, the quotient $U/\Gamma$ is smooth over $W(k)$, and the dimension of every component of both fibers of ${\mathbb P}^N_{W(k)}\setminus U$ is at most $N-d-1$. Consider the quotient scheme $Q:={\mathbb P}^N_{W(k)}/\Gamma$ and denote by $M\subset Q$ the image of ${\mathbb P}^N_{W(k)}\setminus U$ under the quotient map. By construction, $Q\setminus M$ is smooth over $W(k)$, and $M$ has fiber-wise codimension $\geq d+1$ inside $Q$. Following the argument below \cite[Lemma 2.9]{bms}, it is enough for us to find very ample line bundles ${\mathcal L}_1,\ldots,{\mathcal L}_{N-d}$ on $Q$ and sections $s_i\in H^0(Q, {\mathcal L}_i)$ such that their common vanishing locus is smooth and does not intersect $M$. By induction on the number $N-d$, it is enough to produce a very ample line bundle ${\mathcal L}$ on $Q$ and a section $s\in H^0(Q,{\mathcal L})$ such that $V(s)\cap M\subset V(s)$ is of fiber-wise codimension $\geq d+1$, and $V(s)\setminus V(s)\cap M$ is smooth. Note that both conditions can be checked just on the special fiber. Let ${\mathcal L}$ be some very ample line bundle on $Q$. We choose a closed point on every irreducible component of $M_0$, and denote $B$ the set comprised of these points. Let $n_0$ be an integer such that $H^1(Q, {\mathcal L}^{\otimes n})=0$ for all $n\geq n_0$. Applying \cite[Corollary 1.6]{gabber}, for some $n>n_0$ we can find a section $s\in H^0(Q_0,{\mathcal L}^{\otimes n})$ of the line bundle ${\mathcal L}^{\otimes n}$ on the special fiber $ Q_0:=Q\times_{W(k)}k$, such that $V(s)\setminus M_0\cap V(s)$ is smooth of dimension $N-1$, and $V(s)$ does not contain any of the points from $B$. The latter property implies that the codimension of $M_0\cap V(s)$ inside $V(s)$ is at least $d+1$. By our assumption on the vanishing of $H^1(Q,{\mathcal L}^{\otimes n})$, we can lift $s$ to a section of ${\mathcal L}^{\otimes n}$ on $Q$ whose vanishing locus satisfies the desired properties. Having constructed the sections $s_1,\ldots, s_{N-d}$, we define $Z\subset {\mathbb P}^N_{W(k)}$ to be the preimage of $V(s_1,\ldots,s_{N-d})\subset Q\setminus M\subset Q={\mathbb P}^N_{W(k)}/\Gamma$. The action of $\Gamma$ on $Z$ is free and the quotient $Z/\Gamma=V(s_1,\ldots,s_{N-d})$ is smooth, by the choice of sections $s_1,\ldots,s_{N-d}$. \end{proof} \begin{rem} \begin{enumerate}[leftmargin=*] \item If in the construction of the abelian scheme $A$ we used an elliptic curve $E$ with {\it ordinary} reduction, the statement of Proposition \ref{nondeg example: AV quotient} would be false. Indeed, if $E_0$ is ordinary then the stack $[A_0/SL_p({\mathcal O}_F)]$ admits a lift to $W_2(k)$ together with its Frobenius endomorphism, hence the conjugate filtration is split in all degrees by \cite[Remarque 2.2(ii)]{deligne-illusie}. 
The part of the proof showing that the map $H^p_{\mathrm{dR}}([A_0/SL_p({\mathcal O}_F)])\tilde{o} H^p_{\mathrm{dR}}(A_0)^{SL_p({\mathcal O}_F)}$ is not surjective goes through just as well because it only used that $\varphi_{E_0}^*$ is non-zero on $H^1_{\mathrm{dR}}(E_0)$, but it is no longer true that $\tau^{\leq p}\mathrm{R}\Gamma(A_0,{\mathcal O})$ is $SL_p({\mathcal O}_F)$-equivariantly decomposable, by Proposition \ref{AV coherent: coherent main}. So both Hodge and de Rham cohomology of $A_0$ are not $SL_p({\mathcal O}_F)$-equivariantly decomposable, but the conjugate filtration on $\mathrm{R}\Gamma_{\mathrm{dR}}(A_0)$ is $SL_p({\mathcal O}_F)$-equivariantly split. \item As in the proof of \cite[Corollarie 2.4]{deligne-illusie}, the variety $X_0$ in Corollary \ref{nondeg example: main corollary} necessarily has a non-zero differential in its Hodge-to-de Rham spectral sequence. \item There are several examples in the literature of smooth projective varieties over $k$ with a non-degenerate Hodge-to-de Rham spectral sequence that lift to some ramified extension of $W(k)$, but to the best of my understanding none of them lift to $W_2(k)$. \end{enumerate} \end{rem} \comment{ \section{} Let $E$ be a supersingular elliptic curve over $k$. \begin{lm} For all $n$ there exists a $GL_n({\mathbb Z})$-equivariant equivalence \begin{equation} \tau^{<p^2}\mathrm{R}\Gamma(E^n,{\mathcal O})\simeq \bigoplus\limits_{i=0}^{p^2-1} H^i(E^n,{\mathcal O})[-i] \end{equation} \end{lm} \begin{proof} For each integer $a\in {\mathbb Z}$ the multiplication by $a$ morphism $[a]:E\tilde{o} E$ induces multiplication by $a^i$ on $H^i(E^n,{\mathcal O})$ which gives the decomposition $\tau^{<p}\mathrm{R}\Gamma(E^n,{\mathcal O})\simeq \bigoplus\limits H^i(E^n,{\mathcal O})[-i]$ because $1,a,a^2,\ldots,a^{p-1}$ are pairwise different elements of $k$ for a suitable choice of $a$. We will extend this argument to degrees $<p^2$ using other endomorphisms of the elliptic curve $E$. Recall that $\End(E)$ is a maximal order in a quaternion algebra over ${\mathbb Q}$ that is non-split at $p$. In particular, $\End(E)$ contains the ring of integers ${\mathcal O}_F$ of a quadratic extension $F\supset {\mathbb Q}$ in which $p$ is inert. The action of ${\mathcal O}_F$ on the $1$-dimensional space $H^1(E,{\mathcal O})$ induces a morphism ${\mathcal O}_F\tilde{o} \End_k H^1(E,{\mathcal O})\simeq k$ whose image must be isomorphic to ${\mathbb F}_p^2$. Therefore for $\lambda\in {\mathcal O}_F$ whose image under ${\mathcal O}_F\tilde{o}{\mathcal O}_F/p\simeq{\mathbb F}_{p^2}$ generates the multiplicative group of ${\mathbb F}_p^2$ the induced endomorphism of $E^n$ acts on $H^i(E^n,{\mathcal O})$ for $i=0,1,\ldots,p^2-1$ via pair-wise different elements of $k$. This endomorphism of $E^n$ commutes with $GL_n({\mathbb Z})$ and therefore induces the desired decomposition. \end{proof} The differential in the conjugate \begin{equation} H^0(A/G,\Omega^p)\xrightarrow{\alpha(\Omega^1_{A_0/G})}H^{p-1}(A_0/G,F^*\Omega^1)\xrightarrow{\ob_{F,A_{W_2(k)}/G}}H^p(A_0/G,{\mathcal O})\xrightarrow{\mathrm{Bock}}H^{p+1}(A/G,{\mathcal O}) \end{equation} \begin{lm} \begin{enumerate} \item The class $\alpha(\Omega^1_{A_0/G})\in H^{p-1}(A_0/G,F^*\Omega^1\otimes\Lambda^p T)$ is equal to the image of $\alpha(H^0(A_0,\Omega^1_{A_0}))\in H^{p-1}(G,H^0(\Omega^1_{A_0})^{(1)}\otimes\Lambda^p H^0(\Omega^1_{A_0})^{\vee})$ \item The image of $\ob_{F,A/G}\in H^1(A_0/G,F^*T)$ in $H^1(A_0,F^*T)^G$ is the class $\ob_{F,A}$. 
Moreover, under the identification $H^1(A_0,F^*T)^G=\Hom_G(H^0(A_0,\Omega^1_{A_0})^{(1)},H^1(A_0,{\mathcal O}))$ the class $\ob_{F,A}$ is mapped to an isomorphism. \end{enumerate} \end{lm} The Hochschild-Serre spectral sequence induces a filtration } \section{Extensions in higher degrees}\label{higher ext: section} It would be interesting to generalize the methods of Section \ref{cosimp: section} to treat extensions in the canonical filtration on the de Rham complex in degrees $>p$. In this section, which is independent of the rest of the paper, we make some preliminary remarks, roughly amounting to extending part (1) of Theorem \ref{cosimp: main theorem} to higher degrees. Suppose that $X$ is an arbitrary (not necessarily flat) scheme over ${\mathbb Z}_{(p)}$ and let $A\in \DAlg(X)$ be a derived commutative algebra such that $H^0(A)={\mathcal O}_X$, $H^1(A)$ is a locally free sheaf, and multiplication induces isomorphisms $\Lambda^i H^1(A)\simeq H^i(A)$ for all $i$. \begin{lm}\label{higher ext: mult2splittings} Given maps $s_i:\Lambda^i H^1(A)[-i]\tilde{o} A$, $s_j:\Lambda^j H^1(A)[-j]\tilde{o} A$ in $D(X)$ that induce isomorphisms on $i$th and $j$th cohomology respectively, there exists a map $s_{i+j}:\Lambda^{i+j}H^1(A)[-i-j]\tilde{o} A$ that induces an isomorphism on $H^{i+j}$ if the integer $\binom{i+j}{i}$ is not divisible by $p$. \end{lm} \begin{proof} For any object $M\in D(X)$, there is a natural map $m_{i,j}:S^iM\otimes S^jM\tilde{o} S^{i+j}M$ that, in case $M$ is a projective module over a ring, is given by the surjection $(M^{\otimes (i+j)})_{S_i\times S_j}\tilde{o} (M^{\otimes (i+j)})_{S_{i+j}}$ where we view $S_{i}\times S_j$ as a subgroup of $S_{i+j}$. Choosing a set of representatives $\Sigma\subset S_{i+j}$ for left cosets of this subgroup, we can define a map (that is independent of the choice of $\Sigma$) $N_{S_{i+j}/S_i\times S_j}:S^{i+j}M\tilde{o} S^iM\otimes S^jM$ given by $\sum\limits_{\sigma\in \Sigma}\sigma$. The composition $S^{i+j}M\xrightarrow{N_{S_{i+j}/S_{i}\times S_j}}S^iM\otimes S^jM\xrightarrow{m_{i,j}}S^{i+j}M$ is equal to multiplication by $\binom{i+j}{i}=[S_{i+j}:S_i\times S_j]$. Applying this to $M=H^1(A)[1]$ and using the decalage equivalences $S^i(H^1(A)[1])\simeq (\Lambda^i H^1(A))[i]$, we obtain a map $N_{S_{i+j}/S_i\times S_j}:\Lambda^{i+j}H^1(A)\tilde{o} \Lambda^i H^1(A)\otimes\Lambda^j H^1(A)$ whose composition with the wedge product map is the multiplication by $\binom{i+j}{i}$ map on $\Lambda^{i+j}H^1(A)$. Consider the product of sections $s_i$ and $s_j$ given by \begin{equation} s'_{i+j}:\Lambda^i H^1(A)[-i]\otimes\Lambda^j H^1(A)[-j]\xrightarrow{s_{i}\otimes s_j}A\otimes A\xrightarrow{m}A. \end{equation} It induces exterior multiplication $\Lambda^i H^1(A)\otimes \Lambda^j H^1(A)\tilde{o} \Lambda^{i+j}H^1(A)$ when evaluated on $H^{i+j}$, so precomposing $s'_{i+j}$ with the map $N_{S_{i+j}/S_i\times S_j}$ gives rise to a map $s_{i+j}:\Lambda^{i+j}H^1(A)[-i-j]\tilde{o} A$ that induces multiplication by $\binom{i+j}{i}$ on $H^{i+j}$. Under the assumption that $p\nmid\binom{i+j}{i}$ the map $s_{i+j}$ thus gives the desired splitting in degree $i+j$. \end{proof} \begin{cor}\label{higher ext: extendingppower} Given an integer $n\geq 0$, suppose that there exist morphisms $s_{p^i}:H^{p^i}(A)[-{p^i}]\tilde{o} A$ inducing an isomorphism on $H^{p^i}$, for all $i=0,1,\dots,n$. Then there exists an equivalence $\tau^{\leq p^{n+1}-1}A\simeq\bigoplus\limits_{i=0}^{p^{n+1}-1}H^i(A)[-i]$.
\end{cor} \begin{proof} We need to construct sections $s_m:H^m(A)[-m]\tilde{o} A$ for all $0\leq m\leq p^{n+1}-1$. We will construct these by induction on $m$, the base case being $m=0$, where $s_0:H^0(A)\tilde{o} A$ is simply the natural map that exists for every complex concentrated in non-negative degrees. Suppose that the sections $s_{m'}$ have been constructed for all $m'<m$. Let $m=a_rp^r+\dots+a_1p+a_0$ be the base $p$ expansion of $m$. The splitting $s_{p^r}$ is given to us and $s_{m-p^r}$ has already been constructed, so $s_m$ is provided by Lemma \ref{higher ext: mult2splittings} applied to $(i,j)=(p^r,m-p^r)$; the relevant binomial coefficient is prime to $p$ because, by Lucas's theorem, $\binom{m}{p^r}\equiv a_r\pmod{p}$ and the leading digit $a_r$ is non-zero. \end{proof} \begin{example} If $X_0$ is a smooth scheme over $k$ that lifts to a scheme $X_1$ over $W_2(k)$, such that the class $e_{X_1,p}$ vanishes, the conditions of Corollary \ref{higher ext: extendingppower} are satisfied for $A=\mathrm{dR}_{X_0/k}$ on $X_0$ with $n=1$, hence the truncation $\tau^{\leq p^2-1}\mathrm{dR}_{X_0/k}$ decomposes as a direct sum of its cohomology sheaves. \end{example} \section{Non-vanishing in rational group cohomology} \label{rational: section} In this section we prove non-vanishing results related to the class $\alpha$ of Definition \ref{free cosimplicial: alpha definition}. Let $V$ be a finite-dimensional $k$-vector space equipped with the tautological action of the algebraic group $GL(V)$, viewed as a group scheme over $k$. We denote by $V^{(1)}$ the vector space $V\otimes_{k,\operatorname{Fr}_p}k$ on which $GL(V)$ acts through the relative Frobenius $F_{GL(V)/k}:GL(V)\tilde{o} GL(V)^{(1)}=GL(V^{(1)})$. Under the equivalence between representations of $GL(V)$ and quasicoherent sheaves on the classifying stack $BGL(V)$ the representation $V^{(1)}$ corresponds to the pullback $F_{BGL(V)}^*V$ under absolute Frobenius, because $V$ and $GL(V)$ can be descended to ${\mathbb F}_p\subset k$. The construction of Definition \ref{free cosimplicial: alpha definition} applied to the stack $BGL(V)$ with its tautological vector bundle associates to $V$ a class $\alpha(V)\in \Ext^{p-1}_{GL(V)}(\Lambda^p V,V^{(1)})$ where $V^{(1)}$ is the Frobenius twist of the representation $V$. Recall that $\alpha(V)$ is the extension class of \begin{equation} V^{(1)}[-2]\tilde{o} \tau^{\geq 2}S^p(V[-1])\tilde{o}\Lambda^p V[-p] \end{equation} We will start by proving that this extension is generally non-split when viewed as an extension of algebraic representations: \begin{pr}\label{rational group cohomology: main non-vanishing} If $\dim V=p$ then the class $\alpha(V)|_{SL(V)}\in \Ext^{p-1}_{SL(V)}(\Lambda^p V, V^{(1)})=H^{p-1}_{\mathrm{alg}}(SL(V),V^{(1)})$ is non-zero. \end{pr} \begin{rem} In this section, we will work in the derived category $D(\Rep_{SL(V)})$ of the abelian category of $k[SL(V)]$-comodules. Its full subcategory $D^+(\Rep_{SL(V)})$ of bounded below complexes is equivalent to $D^+(BSL(V))$, so we may view $\alpha(V)$ as a morphism in $D^+(\Rep_{SL(V)})$. The unbounded derived categories $D(BSL(V))$ and $D(\Rep_{SL(V)})$ are not equivalent though, and we will need to consider complexes unbounded from below in the intermediate steps of the proof. \end{rem} We will witness the non-vanishing of $\alpha(V)$ by representing $S^p(V[-1])$ as a direct summand of an explicit complex of $SL(V)$-modules and then comparing the result of applying $\mathrm{R}\Gamma_{\mathrm{alg}}(SL(V),-)$ to the terms of this complex with the result of applying it to its cohomology modules.
\begin{rem}\label{group cohomology: p=2 remark} When $p=2$, an explicit model for $\tau^{\geq 2}S^p(V[-1])$ is at our disposal by Lemma \ref{free cosimplicial: alpha for p=2}. In this case Proposition \ref{rational group cohomology: main non-vanishing} is asserting that the extension $$0\tilde{o} V^{(1)}\tilde{o} S^2V\tilde{o} \Lambda^2V\tilde{o} 0$$ does not admit an $SL(V)$-equivariant section, for a $2$-dimensional space $V$, and this is elementary to show. Indeed, if $e_1,e_2$ is a basis in $V$ then a matrix $\left(\begin{matrix}a & b\\ c & d\end{matrix}\right)$ satisfying $ad-bc=1$ sends an element $\lambda_{11}e_1^2+\lambda_{12}e_1e_2+\lambda_{22}e_2^2\in S^2V$ to $(\lambda_{11}a^2+\lambda_{12}\cdot ab+\lambda_{22}b^2)e_1^2+\lambda_{12}e_1e_2+(\lambda_{11}c^2+\lambda_{12}\cdot cd+\lambda_{22}d^2)e_2^2$, so the invariants $(S^2V)^{SL_2}$ vanish. But the action of $SL_2$ on $\Lambda^2 V$ is trivial, so the surjection $S^2V\tilde{o} \Lambda^2V$ does not admit a section. \end{rem} This remark proves Proposition \ref{rational group cohomology: main non-vanishing} for $p=2$. For the remainder of this section we assume that $p>2$. We start by relating $S^p(V[-1])$ to the homology of the symmetric group $S_p$ acting on $V[-1]^{\otimes p}$ via permutation of the factors. For an object $M$ of the derived category of modules over a discrete group $G$ we denote by $M_{hG}$ the derived coinvariants of $M$. \comment{\begin{proof} Note that there is a $S_p$-equivariant equivalence $V[-1]^{\otimes p}=V^{\otimes p}[-p]$ where on the right hand side $V^{\otimes p}$ is equipped with the permutation action of $S_p$ twisted by the sign character. Let $\{e_i\}_{i\in I}$ be a basis for $V$. Then \end{proof}} \begin{lm}\label{rational: cosimplicial vs einf} There is a $GL(V)$-equivariant equivalence $\tau^{\geq 1}((V[-1]^{\otimes p})_{hS_p})\simeq S^pV[-1]$. \end{lm} \begin{proof} This is Lemma \ref{free cosimplicial: cosimp vs einf} applied to the classifying stack $X_0=BGL(V)$ and $E$ being the tautological vector bundle corresponding to the representation $V$. \end{proof} The advantage of working with $(V[-1]^{\otimes p})_{hS_p}$ to understand the extension in the canonical filtration on $S^p(V[-1])$ is that we can represent $(V[-1]^{\otimes p})_{hS_p}$ as a direct summand of an explicit complex. Denote by $C_p\subset S_p$ a cyclic subgroup of order $p$ generated by the long cycle $\sigma=(12\ldots p)$. We have a natural map $(V[-1]^{\otimes p})_{hC_p}\tilde{o} (V[-1]^{\otimes p})_{hS_p}$ that admits a section $N_{S_p/C_p}:(V[-1]^{\otimes p})_{hS_p}\tilde{o} (V[-1]^{\otimes p})_{hC_p}$ because the index of $C_p\subset S_p$ is coprime to $p$, so $(V[-1]^{\otimes p})_{hS_p}$ is a direct summand of $(V[-1]^{\otimes p})_{hC_p}$. Denote by $N:=\sum\limits_{i=0}^{p-1}\sigma^i\in k[C_p]$ the norm element in the group algebra of $C_p$. 
Using the $2$-periodic resolution \begin{equation}\ldots\xrightarrow{1-\sigma}k[C_p]\xrightarrow{N}k[C_p]\xrightarrow{1-\sigma}k[C_p]\end{equation} for the trivial $C_p$-module we have the following complex representing $(V[-1]^{\otimes p})_{hC_p}$: \begin{equation}\label{rational: symp resolution} \ldots\xrightarrow{1-\sigma}V^{\otimes p}\xrightarrow{N} V^{\otimes p}\xrightarrow{1-\sigma}\overset{p}{V^{\otimes p}} \end{equation} With this explicit complex in hand, we will deduce Proposition \ref{rational group cohomology: main non-vanishing} from the following collection of facts about the cohomology of $SL(V)$: \begin{lm}\label{rational: terms coh vanishing} \begin{enumerate} \item We have $H^i(SL(V), V^{\otimes n})=0$ for all $i>0$ and $n\geq 0$. \item The embedding $\varepsilon:\Lambda^p V\tilde{o} V^{\otimes p}$ given by $v_1\wedge\ldots\wedge v_p\mapsto\sum\limits_{g\in S_p}\mathrm{sgn}(g)v_{g(1)}\otimes\ldots\otimes v_{g(p)}$ induces an isomorphism $\Lambda^p V\simeq H^0(SL(V),V^{\otimes p})$. \item $H^i(SL(V),V^{(1)})=0$ for all $i\neq p-1$ and $H^{p-1}(SL(V),V^{(1)})=k$. \end{enumerate} \end{lm} \begin{rem} The first assertion of (3) for $p=2$ is elementary to prove analogously to Remark \ref{group cohomology: p=2 remark}, and for $p=3$ it is a result of Stewart \cite[Theorem 1]{stewart}. It appears to be a new result for $p>3$. \end{rem} We will now explain how Proposition \ref{rational group cohomology: main non-vanishing} follows from Lemma \ref{rational: terms coh vanishing}; the rest of this section is devoted to the proof of the latter. Proving that the extension $V^{(1)}[-2]\tilde{o} \tau^{\geq 2}S^p(V[-1])\tilde{o}\Lambda^p V[-p]$ is non-split is equivalent to showing that the map $$H^p(SL(V), \tau^{\geq 2}S^p(V[-1]))\tilde{o} H^p(SL(V),\Lambda^p V[-p])=H^0(SL(V),k)=k$$ is zero. By Lemma \ref{rational: cosimplicial vs einf} the object $\tau^{\geq 2}S^p(V[-1])$ is identified with $\tau^{\geq 2}(V[-1]^{\otimes p})_{hS_p}$. First, by Lemma \ref{rational: terms coh vanishing} (3) and the fact that all non-zero cohomology modules of the complex $\tau^{\leq 1}(V[-1]^{\otimes p})_{hS_p}$ are isomorphic to $V^{(1)}$ (by Lemma \ref{free cosimplicial: symmetric group cohomology}), the complex of derived invariants $\mathrm{R}\Gamma(SL(V),\tau^{\leq 1}(V[-1]^{\otimes p})_{hS_p})$ is concentrated in degrees $\leq p$. Therefore the map $H^p(SL(V),(V[-1])^{\otimes p}_{hS_p})\tilde{o} H^p(SL(V),\tau^{\geq 2}(V[-1])^{\otimes p}_{hS_p})$ is surjective, and it is enough to prove that the map $H^p(SL(V),(V[-1])^{\otimes p}_{hS_p})\tilde{o} H^p(SL(V),\Lambda^pV[-p])$ is zero. Moreover, since $(V[-1]^{\otimes p})_{hS_p}$ is a direct summand of $(V[-1]^{\otimes p})_{hC_p}$, it is enough to prove that the composition $H^p(SL(V),(V[-1])^{\otimes p}_{hC_p})\tilde{o} H^p(SL(V),(V[-1])^{\otimes p}_{hS_p})\tilde{o} H^p(SL(V),\Lambda^pV[-p])$ is zero. By Lemma \ref{rational: terms coh vanishing} (1) and (2), the map $\varepsilon:k=\Lambda^p V\tilde{o} V^{\otimes p}$ induces an isomorphism $k=\mathrm{R}\Gamma(SL(V),\Lambda^p V)\tilde{o} \mathrm{R}\Gamma(SL(V),V^{\otimes p})$. Lemma \ref{rational: commute invariants with cp coinvariants} below implies that this map induces an equivalence $k[-p]_{hC_p}=\mathrm{R}\Gamma(SL(V),k[-p]_{hC_p})\simeq \mathrm{R}\Gamma(SL(V),(V[-1]^{\otimes p})_{hC_p})$.
This allows us to compute the map \begin{equation}H^p(SL(V),(V[-1]^{\otimes p})_{hC_p})\tilde{o} H^p(SL(V),(\Lambda^p V)[-p])\end{equation} as the result of passing to derived $SL(V)$-invariants in the composition $\Lambda^p V\xrightarrow{\varepsilon}V^{\otimes p}\tilde{o}\Lambda^p V$. But this composition is zero, since it sends $v_1\wedge\ldots\wedge v_p$ to $\sum\limits_{g\in S_p}\mathrm{sgn}(g)^2\,v_1\wedge\ldots\wedge v_p=p!\cdot v_1\wedge\ldots\wedge v_p=0$ in characteristic $p$; this finishes the proof of Proposition \ref{rational group cohomology: main non-vanishing} modulo Lemma \ref{rational: terms coh vanishing}. \begin{lm}\label{rational: commute invariants with cp coinvariants} Let $M$ be a finite-dimensional representation of a reductive group $G$ over $k$, equipped with an action of $C_p$ that commutes with $G$. Then there is an equivalence $\mathrm{R}\Gamma(G,M_{hC_p})\simeq \mathrm{R}\Gamma(G,M)_{hC_p}$. \end{lm} \begin{proof} This is not a priori obvious as it entails commuting a limit with a colimit. We will first show that the natural map $M_{hC_p}\tilde{o} \Rlim\tau^{\geq -n}(M_{hC_p})$ induces an equivalence $\mathrm{R}\Gamma(G,M_{hC_p})\simeq \mathrm{R}\Gamma(G,\Rlim\tau^{\geq -n}(M_{hC_p}))$. The object $M_{hC_p}\in D_G(k)$ is represented by a $2$-periodic complex \begin{equation} \ldots\xrightarrow{1-\sigma}M\xrightarrow{N} M\xrightarrow{1-\sigma}{M} \end{equation} In particular, as $i$ varies $H^i(M_{hC_p})$ runs through at most $3$ different finite-dimensional representations of $G$. Since for a reductive $G$ and any finite-dimensional representation $W$ the complex $\mathrm{R}\Gamma(G,W)$ is bounded \cite[Theorem 2.4(b)]{cpsk}, there exists a constant $c\in{\mathbb N}$ such that $\mathrm{R}\Gamma(G, H^i(M_{hC_p}))\in D^{\leq c}(k)$ for all $i$, hence $\mathrm{R}\Gamma(G,\tau^{< -n}(M_{hC_p}))\in D^{< c-n}(k)$. The fiber of the natural map $\mathrm{R}\Gamma(G,M_{hC_p})\tilde{o} \mathrm{R}\Gamma(G,\Rlim \tau^{\geq -n}(M_{hC_p}))$ is given by $\Rlim \mathrm{R}\Gamma(G,\tau^{<-n}(M_{hC_p}))$, and this limit is forced to vanish. Next, let $c'\in {\mathbb N}$ be such that $\mathrm{R}\Gamma(G, M/\ker(1-\sigma))$, $\mathrm{R}\Gamma(G,M/\ker N)$, and $\mathrm{R}\Gamma(G,M)$ belong to $D^{\leq c'}(k)$. Then for every $n$ the natural map $\mathrm{R}\Gamma(G,M)_{hC_p}\tilde{o} \mathrm{R}\Gamma(G, \tau^{\geq -n}(M_{hC_p}))$ induces an isomorphism on cohomology in degrees $\geq c' -n$ because $\tau^{\geq -n}(M_{hC_p})$ can be represented by the complex \begin{equation} 0\tilde{o} \overset{-n-1}{M/\ker f}\tilde{o}\ldots \xrightarrow{1-\sigma}M\xrightarrow{N} M\xrightarrow{1-\sigma}{M} \end{equation} where $f$ is $N$ or $1-\sigma$, depending on the parity of $n$. This implies that the map $\mathrm{R}\Gamma(G,M)_{hC_p}\tilde{o} \Rlim\mathrm{R}\Gamma(G,\tau^{\geq -n}(M_{hC_p}))$ is an equivalence, which finishes the proof of the lemma. \end{proof}
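As a sanity check of Lemma \ref{rational: commute invariants with cp coinvariants} in a degenerate case (not needed in the sequel), take $M=k$ with the trivial actions of $G$ and $C_p$. Both differentials in the $2$-periodic complex then vanish, since $1-\sigma$ acts by $0$ and the norm $N$ acts by multiplication by $p=0$, so
\begin{equation*}
H^i(k_{hC_p})\simeq H_{-i}(C_p,k)=k\quad\text{for all }i\leq 0,
\end{equation*}
and, since $\mathrm{R}\Gamma(G,k)=k$ for a reductive group $G$ (cf. the proof of Lemma \ref{rational: terms coh vanishing}(1) below), both sides of the lemma are identified with $k_{hC_p}$.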
The global sections $H^0(\lambda):=H^0(G/B,{\mathcal O}(\lambda))$ is a finite-dimensional representation of $G$ which is non-zero if and only if $\lambda$ is a dominant weight. A filtration $\ldots\subset W_i\subset W_{i-1}\subset \ldots$ on a finite-dimensional representation $W$ is called {\it good} if every quotient $W_{i-1}/W_i$ is isomorphic to the representation $H^0(G/B, {\mathcal O}(\lambda_i))$ for some dominant weight $\lambda$. If a representation $W$ admits a good filtration then $H^i(G, W)=0$ for all $i>0$, by Kempf vanishing \cite[Proposition II.4.13]{jantzen}. For $n=0$, part (1) is asserting that $H^{>0}(SL(V),k)=0$ which is the case because $k=H^0(\lambda)$ for the trivial character $\lambda$. The tautological representation $V$ of $SL(V)$ has the form $H^0(\lambda)$ for $\lambda$ the highest weight of $V$, so it tautologically has a good filtration. Tensor product of two representations with good filtrations admits a good filtration as well \cite[Proposition II.4.21]{jantzen}, hence $V^{\otimes n}$ admits a good filtration which implies that $H^{>0}(SL(V),V^{\otimes n})=0$ for $n>0$. Part (2) is proven by a direct computation. Note that $\varepsilon$ is an $SL(V)$-invariant map so all we need is to prove that $(V^{\otimes p})^{SL(V)}$ has dimension at most $1$. Let $e_1,\ldots,e_p$ be a basis for $V$ and consider first the action of the maximal torus $T=\{\diag(t_1,\ldots,t_{p-1},(t_1\cdot\ldots\cdot t_{p-1})^{-1})|t_i\in{\mathbb G}_m\}$ on $V^{\otimes p}$. An element $e_{i_1}\otimes\ldots\otimes e_{i_p}$ is being acted on by $T$ via the character $t_1^{d_1-d_p}\ldots t_{p-1}^{d_{p-1}-d_p}$ where $d_j$ is the number of indices $r=1,\ldots,p$ such that $i_r=j$. Therefore the subset of $T$-invariants $(V^{\otimes p})^T\subset V^{\otimes p}$ is spanned by tensors $e_{i_1}\otimes\ldots\otimes e_{i_p}$ for which $(i_1,\ldots,i_p)$ is a permutation of the sequence $(1,2\ldots ,p)$. We can embed the group $S_p$ into $SL(V)$ by sending $\tau\in S_p$ to the operator $e_i\mapsto \mathrm{sgn}(\tau)e_{\tau(i)}$. The subspace $(V^{\otimes p})^T$ is preserved by $S_p$ and is isomorphic to the regular representation $k[S_p]$ of $S_p$. Hence $S_p\ltimes T$-invariants in $V^{\otimes p}$ are $1$-dimensional and part (2) is proven. To prove part (3) we will use an auxiliary complex of $SL(V)$-modules obtained from the de Rham complex. The idea of using the de Rham complex of an affine space to study cohomology of representations of $GL(V)$ appears already in \cite{franjou-lannes-schwartz} and \cite{friedlander-suslin}, and the idea of our computation is specifically quite similar to \cite[Theorem 4.5]{friedlander-suslin}. Consider the affine space ${\mathbb A}(V):=\Spec S^{\bullet}(V)$ corresponding to the vector space $V^{\vee}$. It is equipped with an action of $GL(V)$ and the de Rham complex $\Omega^{\bullet}_{{\mathbb A}(V)/k}$ can be viewed as a complex of representations of $GL(V)$ on $k$-vector spaces. 
Explicitly, $\Omega^{\bullet}_{{\mathbb A}(V)/k}$ has the form \begin{equation} S^{*}(V)\xrightarrow{d}S^{*}(V)\otimes_k V\xrightarrow{d}S^{*}(V)\otimes_k\Lambda^2V\xrightarrow{d}\ldots \end{equation} where the de Rham differential $d:S^{*}(V)\otimes_{k}\Lambda^i V\tilde{o} S^{*}(V)\otimes_k\Lambda^{i+1}V$ is given by \begin{equation}d(v_1\cdot\ldots\cdot v_n\otimes \omega)=\sum\limits_{i=1}^{n}v_1\cdot\ldots\cdot \widehat{v}_i\cdot\ldots\cdot v_n\otimes v_i\wedge \omega \end{equation} The action of the center ${\mathbb G}_m\subset GL(V)$ introduces a grading on the de Rham complex with the $n$-th graded piece given by \begin{equation} \Omega^{\bullet}_n:=S^nV\xrightarrow{d} S^{n-1}V\otimes V\xrightarrow{d}\ldots\xrightarrow{d}\Lambda^n V \end{equation} The Cartier isomorphism describes the cohomology modules of this complex as follows: \begin{lm}[{cf. \cite[Theorem 4.1]{friedlander-suslin}}] When $p\nmid n$ the complex $\Omega^{\bullet}_n$ is acyclic. If $n=pk$ for an integer $k$ then there are $GL(V)$-equivariant isomorphisms $H^i(\Omega^{\bullet}_{pk})\simeq \Lambda^i(V^{(1)})\otimes S^{k-i}V^{(1)}$ for all $i$. \end{lm} In particular, for $n=p$ the only non-zero cohomology groups of the complex $\Omega^{\bullet}_p$ are in degrees $0$ and $1$, both isomorphic to $V^{(1)}$. For completeness, we note the following remarkable coincidence, which will not be used in the proof of Proposition \ref{rational group cohomology: main non-vanishing}: \begin{lm}[{\hspace{1sp}\cite[Lemma 4.12]{friedlander-suslin}}]\label{rational: deligne illusie} There is an equivalence $T_p(V[-1])\simeq \tau^{\leq 1}\Omega^{\bullet}_p$ of representations of $GL(V)$. \end{lm} Suppose that $\dim V=p$. Consider the complex $\Omega^{\leq p-1}_p$ obtained from $\Omega^{\bullet}_p$ by removing the last term: \begin{equation}\label{group cohomology: de rham p-1} \Omega^{\leq p-1}_p:=S^p V\tilde{o} S^{p-1}V\otimes V\tilde{o}\ldots\tilde{o} V\otimes\Lambda^{p-1}V \end{equation} This is a complex with $H^0\simeq H^1\simeq V^{(1)}$, $H^{p-1}=\Lambda^p V$, and all other cohomology groups equal to zero. We will use $\Omega^{\leq p-1}_p$ to compute the cohomology of the module $V^{(1)}$. We will compute $\mathrm{R}\Gamma(SL(V),-)$ applied to the complex (\ref{group cohomology: de rham p-1}) in two ways. First, Lemma \ref{group cohomology: schur cohomology vanishing} below implies that the terms of the complex have no cohomology in positive degrees and the only term having non-zero $H^0$ is $V\otimes \Lambda^{p-1}V$. Hence $\mathrm{R}\Gamma(SL(V),\Omega^{\leq p-1}_p)=k[1-p]$, and the map $\Lambda^{p-1}V\otimes V[1-p]\tilde{o} \Omega^{\leq p-1}_p$ given by the embedding of the top term of $\Omega^{\leq p-1}_p$ induces a quasi-isomorphism on $\mathrm{R}\Gamma(SL(V),-)$. Therefore the map $\Omega^{\leq p-1}_{p}\tilde{o} H^{p-1}(\Omega^{\leq p-1}_p)[1-p]=(\Lambda^p V)[1-p]$ induces the zero map on $\mathrm{R}\Gamma(SL(V),-)$ because the de Rham differential $V\otimes \Lambda^{p-1}V\tilde{o}\Lambda^p V$ induces the zero map on $SL(V)$-invariants.
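As a quick numerical consistency check in the case $p=3$ (so that $\dim V=3$): the complex $\Omega^{\leq 2}_3$ is $S^3V\tilde{o} S^2V\otimes V\tilde{o} V\otimes\Lambda^2V$, with terms of dimensions $10$, $18$ and $9$, while its cohomology modules $V^{(1)}$, $V^{(1)}$, $\Lambda^3V$ have dimensions $3$, $3$ and $1$; the Euler characteristics indeed agree:
\begin{equation*}
10-18+9=1=3-3+1.
\end{equation*}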
On the other hand, the $E_2$ page of the spectral sequence associated with the canonical filtration looks as follows: \begin{equation*} \begin{tikzpicture} \matrix (m) [matrix of math nodes, nodes in empty cells,nodes={minimum width=5ex, minimum height=5ex,outer sep=-5pt}, column sep=1ex,row sep=1ex]{ & & & & & \\ p-1 & & H^0(SL(V),\Lambda^p V) & 0 & 0 & \dots \\ \vdots & & 0 & 0 & 0 & \\ 1 & & H^0(SL(V), V^{(1)}) & H^1(SL(V),V^{(1)}) & \dots & H^{p-1}(SL(V), V^{(1)}) \\ 0 & & H^0(SL(V), V^{(1)}) & H^1(SL(V),V^{(1)}) & \dots & H^{p-1}(SL(V), V^{(1)}) \\ \quad\strut & & & & & \strut \\ \quad\strut & & 0 & 1 & \dots & p-1 & \strut \\}; \draw[-stealth] (m-2-3.south east) -- (m-4-6.north west); \draw[thick] (m-1-2.north) -- (m-6-2.north) ; \draw[thick] (m-6-2.north) -- (m-6-6.north) ; \end{tikzpicture} \end{equation*} We used here that $H^{>0}(SL(V),\Lambda^p V)=H^{>0}(SL(V),k)=0$, as we proved in part (1). Since we computed that $H^{p-1}(SL(V),\Omega^{\leq p-1}_p)=k$, and $H^{i}(SL(V),\Omega^{\leq p-1}_p)=0$ for $i\neq p-1$, the $E_{\infty}$ page of the spectral sequence has no non-zero terms away from the diagonal $i+j=p-1$. Moreover, the induced map $H^{p-1}(SL(V),\Omega^{\leq p-1}_p)\tilde{o} H^0(SL(V),\Lambda^p V)$ is zero, so there has to be at least one non-zero differential coming out of the entry $E_{2}^{0,p-1}=H^0(SL(V),\Lambda^p V)$. Therefore $H^i(SL(V),V^{(1)})\neq 0$ for at least one $i$. Let $i_{-}$ be the minimal such $i$ and $i_{+}$ be the maximal one. The latter is well-defined because cohomology of a finite-dimensional module over a reductive group is always concentrated in finitely many degrees by \cite[Theorem 2.4(b)]{cpsk}. If $i_+>p-1$ then the entry $E_2^{i_+,1}=H^{i_+}(SL(V),V^{(1)})$ will necessarily survive to the infinite page, giving that $H^{i_{+}+1}(SL(V),\Omega^{\leq p-1}_p)\neq 0$, which is not the case. Similarly, if $i_-<p-1$ then the entry $E_2^{i_{-},0}$ will survive to the infinite page, contradicting that $H^{i_-}(SL(V),\Omega^{\leq p-1}_p)=0$. Therefore $H^i(SL(V),V^{(1)})=0$ for all $i\neq p-1$ and $d_{p-1}:H^0(SL(V),\Lambda^p V)\tilde{o} H^{p-1}(SL(V),V^{(1)})$ is an isomorphism. \end{proof} \begin{lm}\label{group cohomology: schur cohomology vanishing} \begin{enumerate} \item For all $a,b$ the cohomology $H^i(SL(V),S^aV\otimes\Lambda^b V)$ vanishes in all positive degrees $i$. \item $H^0(SL(V),S^aV\otimes\Lambda^b V)=0$ for $a\geq 2$ and any $b$. \item $(V\otimes \Lambda^{p-1}V)^{SL(V)}$ is a $1$-dimensional space, but the multiplication map $V\otimes\Lambda^{p-1}V\tilde{o}\Lambda^pV$ induces the zero map on subspaces of invariants. \end{enumerate} \end{lm} \begin{proof} (1), (2), and the first assertion of (3) are special cases of \cite[Proposition II.4.13]{jantzen}. For the second assertion of (3), the multiplication map $V\otimes\Lambda^{p-1}V\tilde{o}\Lambda^p V$ is equal, up to a unit, to the composition $V\otimes \Lambda^{p-1}V\hookrightarrow V^{\otimes p}\tilde{o}\Lambda^p V$ where the first map is induced by the antisymmetrization map $\Lambda^{p-1}V\tilde{o} V^{\otimes p-1}$. In Lemma \ref{rational: terms coh vanishing}(2) we established that the map $V^{\otimes p}\tilde{o}\Lambda^p V$ induces the zero map on $SL(V)$-invariants, so part (3) is proven. \end{proof} Having done this computation, we can conclude that $\Omega^{\leq p-1}_p$ in fact coincides with $S^p(V[-1])$, up to a shift: \begin{cor} The complex $\Omega^{\leq p-1}_p[-1]$ is equivalent to $S^p(V[-1])$ in $D_G(k)$.
\end{cor} \begin{rem} It would be much nicer to directly identify $\Omega^{\leq p-1}_p[-1]$ with $\tau^{\geq 1}(V[-1]^{\otimes p})_{hS_p}$ thus making the proof of Proposition \ref{rational group cohomology: main non-vanishing} more straightforward. \end{rem} \begin{proof} By Proposition \ref{rational: deligne illusie} the object $\Omega^{\leq p-1}_p[-1]$ fits into a fiber sequence \begin{equation} T_p(V[-2])\tilde{o}\Omega^{\leq p-1}_p[-1]\tilde{o}\Lambda^p V[-p] \end{equation} By the proof of Proposition \ref{rational: terms coh vanishing}(3) this fiber sequence is not split. Our object $S^p(V[-1])$ also fits into a non-split fiber sequence with the same first and third terms. Equivalence classes of such fiber sequences are parametrized by $\Ext^{p-1}_{SL(V)}(\Lambda^p V, T_p(V))=H^{p-1}(SL(V), T_p(V))$. Since $T_p(V)$ fits into a fiber sequence $V^{(1)}[1]\tilde{o} T_p(V)\tilde{o} V^{(1)}$, Lemma \ref{rational: terms coh vanishing}(3) implies that this Ext space is $1$-dimensional. Therefore any two non-split extensions are isomorphic, as desired. \end{proof} \section{From algebraic cohomology to cohomology of the group of \texorpdfstring{${\mathbb F}_q$}{}-points}\label{group cohomology: section} \renewcommand{\mathrm{alg}}{\mathrm{alg}} Let now $k={\mathbb F}_q$ be a finite field of characteristic $p$. In this section we show that $\alpha(V)$ remains non-zero when restricted to the discrete group $SL_p({\mathbb F}_q)$, provided that $q>p$: \begin{pr}\label{group cohomology: from algebraic to discrete} If $\dim V=p$ and $q>p$ then the restriction map $H^{p-1}_{\mathrm{alg}}(SL(V),V^{(1)})\tilde{o} H^{p-1}(SL_p({\mathbb F}_q),V^{(1)})$ is injective. \end{pr} In this and next sections we use the notation $H^*_{\mathrm{alg}}$ for the cohomology of representations of algebraic groups, and $H^*$ is reserved for cohomology of discrete groups. In general, it follows from results of Cline, Parshall, Scott, and van der Kallen that the restriction from cohomology of a split reductive group $G$ to that of its ${\mathbb F}_q$-points is injective for a large enough $q$: \begin{thm}[{\hspace{1sp}\cite[Theorem 6.6]{cpsk}+\hspace{1sp}\cite[Theorem 2.1]{cps-detecting}}] Let $G$ be a split reductive group over ${\mathbb F}_p$, and $W$ be a finite-dimensional representation of it. For large enough $q=p^r$ the restriction map \begin{equation} H^n_{\mathrm{alg}}(G,W)\tilde{o} H^{n}(G({\mathbb F}_q),W) \end{equation} is injective for all $n$. \end{thm} Our Proposition \ref{group cohomology: from algebraic to discrete} is only marginally stronger than this result in that we show that for any $q>p$ injectivity holds in the particular case $G=SL_p, W=V^{(1)}$. We give here an argument that follows closely the proof of \cite[Theorem 6.6]{cpsk} in order to introduce the techniques that will be used in Section \ref{borel: section}. Let us briefly describe the method of \cite{cpsk}. Let $B\subset G$ be a Borel subgroup of a split reductive group $G$ over $k$. 
We have a commutative diagram \begin{equation} \begin{tikzcd} H^n_{\mathrm{alg}}(G,W)\arrow[r]\arrow[d,"\sim"] & H^n(G({\mathbb F}_q),W)\arrow[d] \\ H^n_{\mathrm{alg}}(B,W)\arrow[r] & H^n(B({\mathbb F}_q),W) \\ \end{tikzcd} \end{equation} The fact that the left vertical map is an isomorphism is a consequence of the vanishing $H^{>0}(G/B,{\mathcal O})=0$ (\hspace{1sp}\cite[\S 6 Theorem 1(a)]{kempf}) of the cohomology of the structure sheaf on the flag variety: \begin{thm}[\hspace{1sp}{\cite[Theorem 2.1]{cpsk}}]\label{group cohomology: restriction to Borel iso} For every algebraic $G$-module $W$ restriction induces an isomorphism $H^i_{\mathrm{alg}}(G,W)\simeq H^i_{\mathrm{alg}}(B,W)$ for all $i$. \end{thm} Therefore it is enough to prove that the bottom horizontal arrow is injective. This is now a more tangible question as the Borel subgroup is isomorphic to the semi-direct product $T\ltimes U$ of the maximal torus $T$ and the unipotent radical $U$ of $B$. Since the algebraic cohomology of a torus, as well as the cohomology of the finite group $T({\mathbb F}_q)$ with $p$-torsion coefficients vanish in positive degrees, we have $H^i_{\mathrm{alg}}(B, W)=H^i_{\mathrm{alg}}(U,W)^T$ and $H^i(B({\mathbb F}_q), W)=H^i(U({\mathbb F}_q), W)^{T({\mathbb F}_q)}$. This reduces the problem to the study of the action of $T$ on $H^i_{\mathrm{alg}}(U, W)$ which, for the purposes of proving the desired eventual injectivity, can be reduced to the study of the action of ${\mathbb G}_m$ on $H^{\bullet}_{\mathrm{alg}}({\mathbb G}_a,k)$ and ${\mathbb F}_q^{\times}$ on $H^{\bullet}({\mathbb F}_q,k)$ which is performed in Proposition \ref{group cohomology: additive group cohomology}. Choose a basis $e_1,\ldots,e_p$ of $V$, and let $B_p\subset SL(V)$ be the subgroup of matrices preserving each of the subspaces $\langle e_1,\ldots,e_i\rangle\subset V$. Denote also by $T_p\subset B_p$ the maximal torus of the diagonal matrices, and by $U_p\subset B_p$ the subgroup of strictly upper-triangular matrices. \subsection{Proof of Proposition \ref{group cohomology: from algebraic to discrete} when \texorpdfstring{$p=2$}{}.} We will now prove that when $p=2$ and $q>p$ the restriction map $H^1_{\mathrm{alg}}(B_2,V^{(1)})\tilde{o} H^1(B_2({\mathbb F}_q),V^{(1)})$ is injective. We treat the case $p=2$ separately both because there are slight notational differences from the case $p>2$, arising from the fact that $H^{\bullet}_{\mathrm{alg}}({\mathbb G}_a,k)$ has a different-looking ring structure, and because it explains the idea of the general proof in a setting less burdened by the combinatorial difficulties. Denote by $\chi_1:T_2\tilde{o}{\mathbb G}_m$ the character $\diag(a,a^{-1})\mapsto a$ identifying the maximal torus $T_2$ with ${\mathbb G}_m$. The conjugation action of $T_2$ on $U_2\simeq {\mathbb G}_a$ is then given by $\chi_1^2$. When restricted to $B_2$, the representation $V^{(1)}$ fits into an exact sequence $0\tilde{o}\chi_1^2\tilde{o} V^{(1)}\tilde{o} \chi_1^{-2}\tilde{o} 0$. 
It induces the long exact sequences in cohomology of $B_2$ and $B_2({\mathbb F}_q)$: \begin{equation} \begin{tikzcd} & H^1_{\mathrm{alg}}(U_2,\chi_1^2)^{T_2}\arrow[r]\arrow[d] & H^1_{\mathrm{alg}}(U_2,V^{(1)})^{T_2}\arrow[r]\arrow[d] & H^1_{\mathrm{alg}}(U_2,\chi_1^{-2})^{T_2}\arrow[d] \\ H^0(U({\mathbb F}_q),\chi_1^{-2})^{T_2({\mathbb F}_q)}\arrow[r] & H^1(U_2({\mathbb F}_q),\chi_1^2)^{T_2({\mathbb F}_q)}\arrow[r] & H^1(U_2({\mathbb F}_q),V^{(1)})^{T_2({\mathbb F}_q)}\arrow[r] & H^1(U_2({\mathbb F}_q),\chi_1^{-2})^{T_2({\mathbb F}_q)} \end{tikzcd} \end{equation} We have $H^1_{\mathrm{alg}}(U_2,\chi^2_1)\simeq\Hom_{\mathrm{grp}}({\mathbb G}_a,{\mathbb G}_a)$ with the action of $T_2$ described as follows: any $k$-algebra $R$, an element $a\in T_2(R)=R^{\times}$ acts by sending a homomorphism $f:{\mathbb G}_a\tilde{o}{\mathbb G}_a$ to $a^{2}\cdot f(a^{-2}\cdot -)$. Similarly, $H^1_{\mathrm{alg}}(U_2,\chi^{-2}_1)\simeq\Hom_{\mathrm{grp}}({\mathbb G}_a,{\mathbb G}_a)$ but the torus action sends $f$ to $a^{-2}\cdot f(a^{-2}\cdot -)$. All homomorphisms $f:{\mathbb G}_a\tilde{o} {\mathbb G}_a$ have the form $f(x)=\sum\limits_{i=0}^N a_i x^{p^i}$ for some $N\geq 0$ and $a_i\in k$, where $x$ is a coordinate on ${\mathbb G}_a$. It follows that $H^1_{\mathrm{alg}}(U_2,\chi_1^{-2})^{T_2}=0$ and $H^1_{\mathrm{alg}}(U_2,\chi_1^{2})^{T_2}\subset \Hom_{\mathrm{grp}}({\mathbb G}_a,{\mathbb G}_a)$ is the one-dimensional space of homomorphisms of the form $x\mapsto a_0\cdot x$. Therefore the map $H^1_{\mathrm{alg}}(U_2,\chi_1^2)^{T_2}\tilde{o} H^1_{\mathrm{alg}}(U_2,V^{(1)})^{T_2}$ is surjective, and the restriction map $H^1_{\mathrm{alg}}(U_2,\chi_1^2)^{T_2}\tilde{o} H^1(U_2({\mathbb F}_q),\chi_1^2)^{T({\mathbb F}_q)}$ is injective for any $q$. It remains to observe that $H^0(U_2({\mathbb F}_q),\chi_1^{-2})$ is a one-dimensional vector space on which $T_2({\mathbb F}_q)={\mathbb F}_q^{\times}$ acts via the character $\chi_1^{-2}$. Therefore the invariant subspace of this $0$th cohomology group is trivial as soon as $q>2$. This implies that the map $H^1(U_2({\mathbb F}_q),\chi_1^2)^{T_2({\mathbb F}_q)}\tilde{o} H^1(U_2({\mathbb F}_q),V^{(1)})^{T_2({\mathbb F}_q)}$ is injective, and hence the restriction $H^1_{\mathrm{alg}}(U_2,V^{(1)})^{T_2}\tilde{o} H^1(U_2({\mathbb F}_q),V^{(1)})^{T_2({\mathbb F}_q)}$ is injective. \subsection{Proof of Proposition \ref{group cohomology: from algebraic to discrete} when \texorpdfstring{$p>2$}{}.} As in the special case $p=2$ that we dealt with above, the key to the proof is to analyze the action of the maximal torus on the cohomology of the unipotent radical of a Borel subgroup of $SL_p$. For a split torus $T$ over $k$ we denote by $X^*(T)=\Hom_{\mathrm{grp}}(T,{\mathbb G}_m)$ its lattice of characters. We use additive notation for the group operation in $X^*(T)$. If $U$ is a unipotent algebraic group equipped with an action of $T$, we denote by $\Delta_U\subset X^*(T)$ the set of characters appearing in the $T$-representation $\Lie U$. Here is the main computation in the case $U\simeq {\mathbb G}_a^n$ using which we will study the case of an arbitrary unipotent $U$ by a d\'evissage argument. \begin{pr}\label{group cohomology: additive group cohomology} Assume that $p>2$. Let $A$ be a group scheme over ${\mathbb Z}$ isomorphic to ${\mathbb G}_{a,{\mathbb Z}}^{n}$, equipped with an action of a split torus $T$. Denote by ${\mathfrak a}:=(\Lie A)^{\vee}$ the dual Lie algebra of $A$. 
We denote by ${\mathfrak a}_k$ and ${\mathfrak a}_{W_2(k)}$ the modules ${\mathfrak a}\otimes_{{\mathbb Z}}k$ and ${\mathfrak a}\otimes_{{\mathbb Z}}W_2(k)$. \begin{enumerate} \item There is a $T$-equivariant identification \begin{equation}H^{\bullet}_{\mathrm{alg}}(A, k)=\Lambda^*(\bigoplus\limits_{i=0}^{\infty} {\mathfrak a}_k^{(i)}\cdot x_i)\otimes S^*(\bigoplus\limits_{i=1}^{\infty} {\mathfrak a}_k^{(i)}\cdot\beta(x_i))\end{equation} where $x_i$ and $\beta(x_i)$ are formal symbols, invariant under $T$. The cohomological degrees of $x_i$ and $\beta(x_i)$ are $1$ and $2$, respectively. \item Let ${\mathbb F}_q\subset k$ be a finite subfield of $k$. There is a $T({\mathbb F}_q)$-equivariant identification \begin{equation}H^{\bullet}(A({\mathbb F}_{p^r}),k)=\Lambda^*(\bigoplus\limits_{i=0}^{r-1} {\mathfrak a}_k^{(i)}\cdot x_i)\otimes S^*(\bigoplus\limits_{i=1}^{r} {\mathfrak a}_k^{(i)}\cdot \beta(x_i))\end{equation} and the map $H^{\bullet}_{\mathrm{alg}}(A,k)\tilde{o} H^{\bullet}(A({\mathbb F}_{p^r}),k)$ sends $x_i$ and $\beta(x_i)$ to $x_{i\mod r}$ and $\beta(x_{((i-1)\mod r)+1})$, respectively. In particular, this map is surjective in every degree. \comment{\item For $n>1$ we have $H^{\bullet}(A(W_n({\mathbb F}_{p^r})),k)=\Lambda(\bigoplus\limits_{i=0}^{r-1} {\mathfrak a}^{(i)})\otimes S(\bigoplus\limits_{i=1}^{r} {\mathfrak a}^{(i)})$ as well but the map $H^{\bullet}(A(W_{n-1}({\mathbb F}_{p^r})),k)\tilde{o} H^{\bullet}(A(W_{n}({\mathbb F}_{p^r})),k)$ is given by identity on the exterior algebra and by zero on the symmetric algebra. } \end{enumerate} Suppose that $F\supset {\mathbb Q}$ is a finite Galois extension, and ${\mathfrak p}\subset{\mathcal O}_F$ is an unramified prime ideal such that the residue field ${\mathcal O}_F/{\mathfrak p}$ is identified with ${\mathbb F}_q$. We have the following $T({\mathcal O}_F)$-equivariant identifications: \begin{enumerate} \setcounter{enumi}{2} \item $H^{\bullet}(A({\mathcal O}_F),k)=\Lambda^{\bullet}(\bigoplus\limits_{\tau\in \Gal(F/{\mathbb Q})} {\mathfrak a}_k^{\tau}\cdot x_{\tau})$. Here $T({\mathcal O}_F)$ acts on ${\mathfrak a}_k$ through the chosen map $T({\mathcal O}_F)\tilde{o} T({\mathcal O}_F/{\mathfrak p})=T({\mathbb F}_q)$, and ${\mathfrak a}_k^{\tau}$ denotes the composition of this action with the automorphism $T({\mathcal O}_F)\xrightarrow{\tau}T({\mathcal O}_F)$. The map on cohomology induced by $A({\mathcal O}_F)\tilde{o} A({\mathbb F}_q)$ annihilates $\beta(x_i)$ and sends $x_i$ to $x_{\tau_i}$ where $\tau_i\in \Gal(F/{\mathbb Q})$ is the element of the decomposition group of ${\mathfrak p}$ that induces the automorphism $\operatorname{Fr}_p^i$ on ${\mathbb F}_q$. \item $H^{\bullet}(A({\mathcal O}_F),W_2(k))=\Lambda^{\bullet}(\bigoplus\limits_{\tau\in \Gal(F/{\mathbb Q})} {\mathfrak a}_{W_2(k)}^{\tau}\cdot x_{\tau})$ such that the mod $p$ reduction of this isomorphism is the isomorphism in (3). \end{enumerate} \end{pr} \begin{example}\label{group cohomology: ga cohomology example} For cohomology in degree $1$ and $A\simeq{\mathbb G}_{a,{\mathbb Z}}$ statement (1) amounts to the fact that $H^1({\mathbb G}_{a,k},k)=\Hom_{\mathrm{grp}}({\mathbb G}_{a,k},{\mathbb G}_{a,k})$ is spanned by homomorphisms of the forms $t\mapsto t^{p^i}$ where $t$ is a coordinate on ${\mathbb G}_{a,k}$. 
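In the same spirit as Example \ref{group cohomology: ga cohomology example} below, which illustrates parts (1) and (2), part (3) in degree $1$ for $A\simeq{\mathbb G}_{a,{\mathbb Z}}$ amounts to the following concrete statement: every homomorphism of abelian groups ${\mathcal O}_F\tilde{o} k$ can be written uniquely as
\begin{equation*}
a\mapsto \sum\limits_{\tau\in \Gal(F/{\mathbb Q})}c_{\tau}\cdot\overline{\tau(a)},\qquad c_{\tau}\in k,
\end{equation*}
where $\overline{(\,\cdot\,)}:{\mathcal O}_F\tilde{o} {\mathcal O}_F/{\mathfrak p}={\mathbb F}_q\subset k$ denotes the reduction map.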
Statement (2) in this case is saying that $H^1({\mathbb G}_a({\mathbb F}_{p^r}),k)=\Hom({\mathbb F}_{p^r},k)$, and every homomorphism ${\mathbb F}_{p^r}\tilde{o} k$ of additive groups can be represented as $t\mapsto\sum\limits_{i=0}^{r-1}a_it^{p^i}$ for some $a_i\in k$, in a unique way. \end{example} \begin{proof} By K\"unneth formula, it is enough to consider the case $A\simeq{\mathbb G}_a$. Parts (1) and (2) are \cite[Theorem 4.1]{cpsk}, see also \cite[Proposition I.4.27]{jantzen}. To fix ideas, let us explicitly say that in part (1) the map ${\mathfrak a}_k^{(i)}\cdot x_i\tilde{o} H^1(A,k)=\Hom_{\mathrm{grp}}(A_k,{\mathbb G}_{a,k})$ sends an element $\alpha\cdot x_i$ with $0\neq \alpha\in {\mathfrak a}_k$ to the group scheme homomorphism $A_k\xrightarrow{L_{\alpha}}{\mathbb G}_{a,k}\xrightarrow{t\mapsto t^{p^i}}{\mathbb G}_{a,k}$ where $L_{\alpha}$ is the unique group scheme homomorphism that induces the functional $\alpha$ on Lie algebras. We now turn to proving (4). Part (3) can either be proven by the same argument or deduced formally using that cohomology groups $H^{\bullet}(A({\mathcal O}_F),W_2(k))$ are flat $W_2(k)$-modules. Additionally to assuming that $A={\mathbb G}_a$ we may and will assume that $T={\mathbb G}_m$, acting on $A$ through some power of the standard character. Since $A({\mathcal O}_F)$ is isomorphic to ${\mathbb Z}^{[F:{\mathbb Q}]}$, the cohomology ring $H^{\bullet}(A({\mathcal O}_F),W_2(k))$ is $T({\mathcal O}_F)$-equivariantly isomorphic to $\Lambda^{\bullet}(H^1(A({\mathcal O}_F),W_2(k)))$ via the multiplication on cohomology. The $1$st cohomology module $H^1(A({\mathcal O}_F),W_2(k))$ is naturally identified with $\Hom(A({\mathcal O}_F),W_2(k))=\Hom_{W_2(k)}({\mathcal O}_F\otimes_{{\mathbb Z}} W_2(k),W_2(k))$ where $T({\mathcal O}_F)={\mathcal O}_F^{\times }$ acts by multiplication on the source of the maps. The algebra ${\mathcal O}_F\otimes_{{\mathbb Z}}W_2(k)$ is isomorphic to $\bigoplus\limits_{\tau\in \Gal(F/{\mathbb Q})}W_2(k)$ via the isomorphism sending $a\otimes b\in {\mathcal O}_F\otimes_{{\mathbb Z}}W_2(k)$ to $\oplus \tau(\kappa(a)b)$ where $\kappa:{\mathcal O}_F\tilde{o}{{\mathcal O}_F/{\mathfrak p}^2}\simeq W_2({\mathbb F}_q)$ is the unique lift of the chosen identification ${\mathcal O}_F/{\mathfrak p}\simeq{\mathbb F}_q$. Therefore, $A({\mathcal O}_F)\otimes_{{\mathbb Z}}W_2(k)$ is isomorphic to $\bigoplus\limits_{\tau\in\Gal(F/{\mathbb Q})} (\Lie A_{W_2(k)})^{\tau}$ as a ${\mathcal O}_F^{\times}$-module which implies the claim by dualizing. \end{proof} Since we assume that $k$ contains ${\mathbb F}_q$, every representation of the finite abelian group $T({\mathbb F}_q)$ on a finite-dimensional $k$-vector space decomposes as a direct sum of characters. We will sometimes refer to the characters of $T({\mathbb F}_q)$ appearing as direct summands of a representation as $T({\mathbb F}_q)$-{\it weights} of this representation. The image of the restriction map $X^*(T)=\Hom_{\mathrm{grp}}(T,{\mathbb G}_m)\tilde{o} \Hom(T({\mathbb F}_q),k^{\times})$ is identified with $X^*(T)/(q-1)$. It follows from Proposition \ref{group cohomology: additive group cohomology} that \begin{cor}\label{group cohomology: additive group weights}\begin{enumerate} \item If a character $\chi\in X^*(T)$ is a weight of $H^n_{\mathrm{alg}}(A,k)$ then $\chi$ can be expressed as a sum of $\leq n$ elements of $-p^{{\mathbb N}}\cdot \Delta_A\subset X^*(T)$. 
\item If a character $\chi$ of $T({\mathbb F}_q)$ is a weight of $H^n(A({\mathbb F}_q),k)$ then $\chi$ extends to an algebraic character $\widetilde{\chi}$ of $T$ that is congruent modulo $q-1$ to an element of $X^*(T)$ expressible as a sum of $\leq n$ elements of $-p^{{\mathbb N}}\cdot \Delta_A$. \end{enumerate} \end{cor} These observations imply the following injectivity criterion \begin{cor}[{\hspace{1sp}\cite[5.4]{cpsk}}]\label{group cohomology: abelian restriction criteria} Let $\chi$ be a character of $T$. \begin{enumerate} \item If every equality of the form $\chi=p^{r_1}\lambda_1+\ldots+p^{r_l}\lambda_l$ with $\lambda_1,\ldots,\lambda_l\in -\Delta_A$, $l\leq n$, and some $r_1,\ldots,r_l\geq 0$ satisfies $r_1,\ldots,r_l\leq r-1$, then the restriction map $\Hom_T(\chi, H^n_{\mathrm{alg}}(A,k))\tilde{o} \Hom_{T({\mathbb F}_q)}(\chi, H^n(A({\mathbb F}_q),k))$ is injective. \item If $\chi$ is not congruent modulo $(q-1)\cdot X^*(T)$ to a sum of the form $r_1+\ldots+r_l$ with $r_1,\ldots,r_l\in -p^{{\mathbb N}}\cdot \Delta_A$ and $l\leq n$ then $\Hom_{T({\mathbb F}_q)}(\chi, H^n(A({\mathbb F}_q),k))=0$. \end{enumerate} \end{cor} \begin{proof} 1) Proposition \ref{group cohomology: additive group cohomology} (1), (2) implies that there exists a $T({\mathbb F}_q)$-equivariant map $s_n:H^n(A({\mathbb F}_q),k)\tilde{o} H^n_{\mathrm{alg}}(A,k)$ such that its composition with the restriction $H^n_{\mathrm{alg}}(A,k)\tilde{o} H^n(A({\mathbb F}_q),k)$ is the identity map: we define $s_n$ by sending $x_i$ to $x_i$, and $\beta(x_i)$ to $\beta(x_i)$. The assumption on character $\chi$ implies that every appearance of $\chi$ as a $T$-equivariant direct summand of $H^n_{\mathrm{alg}}(A,k)$ is in the image of $s_n$, which implies the injectivity. 2) Immediate from Corollary \ref{group cohomology: additive group weights}(2). \end{proof} We can deduce from Proposition \ref{group cohomology: additive group cohomology} the following results about the action of $T$ on the cohomology of an arbitrary unipotent group. \begin{lm}\label{group cohomology: algebraic action on unipotent cohomology} Let $U$ be a unipotent algebraic group over $k$ equipped with an action of a split torus $T$. As before, $\Delta_U\subset X^*(T)$ is the set of characters of $T$ that appear as weights of the action of $T$ on the Lie algebra $\Lie U$. \begin{enumerate} \item Every weight of $T$ on $H^i_{\mathrm{alg}}(U,k)$ can be expressed as a sum of $\leq i$ elements of $-p^{{\mathbb N}}\cdot \Delta_{U}$ \item Let $U'\subset U$ be a normal $T$-stable subgroup, and assume the subsets $p^{{\mathbb N}}\Delta_{U'},p^{{\mathbb N}}\Delta_{U/U'}\subset X^*(T)$ are disjoint. Fix an integer $i$. Suppose that for every expression $\chi=r_1+\dots+r_l$ with $l\leq i$ and $r_1,\ldots,r_l\in -p^{{\mathbb N}}\cdot \Delta_U$, all $r_1,\ldots,r_l$ are contained in $-p^{{\mathbb N}}\cdot\Delta_{U'}$. Then the map $\Hom_T(\chi,H^i_{\mathrm{alg}}(U,k))\tilde{o} \Hom_T(\chi,H^i_{\mathrm{alg}}(U',k))$ is injective. \end{enumerate} \end{lm} \begin{proof} Since $U$ is unipotent, there exists a $T$-equivariant filtration $U_0=U\supset U_1\supset\dots\supset U_{n}=1$ by normal subgroups, such that all graded quotients $U_{m}/U_{m+1}$ are isomorphic to ${\mathbb G}_a$. 
We have a Hochschild-Serre spectral sequence \begin{equation}\label{group cohomology: algebraic action spectral sequence} E_2^{r,s}=H^r_{\mathrm{alg}}(U/U_{n-1},H^s_{\mathrm{alg}}(U_{n-1},k))\Rightarrow H^{r+s}_{\mathrm{alg}}(U,k) \end{equation} Since $U_{n-1}$ is a central subgroup of $U$, the term $E_2^{r,s}$ is $T$-equivariantly isomorphic to $H^r_{\mathrm{alg}}(U/U_{n-1},k)\otimes H^s_{\mathrm{alg}}(U_{n-1},k)$. We can use this spectral sequence to prove (1) by induction on $n=\dim U$, with the base case given by Proposition \ref{group cohomology: additive group cohomology}(1). By the inductive assumption, the $T$-weights of $H^r_{\mathrm{alg}}(U/U_{n-1},k)$ are sums of $\leq r$ elements of $-p^{{\mathbb N}}\cdot\Delta_{U/U_{n-1}}$, and the weights of $H^s_{\mathrm{alg}}(U_{n-1},k)$ are sums of $\leq s$ elements of $-p^{{\mathbb N}}\cdot\Delta_{U_{n-1}}$ by Proposition \ref{group cohomology: additive group cohomology}(1). From the spectral sequence (\ref{group cohomology: algebraic action spectral sequence}) we have that $H^i_{\mathrm{alg}}(U,k)$ is a $T$-equivariant subquotient of $\bigoplus\limits_{r+s=i}H^r_{\mathrm{alg}}(U/U_{n-1},k)\otimes H^s_{\mathrm{alg}}(U_{n-1},k)$ which proves the inductive step, completing the proof of (1). To prove (2), consider the spectral sequence \begin{equation} \widetilde{E}_2^{r,s}=H^r_{\mathrm{alg}}(U/U',H^s_{\mathrm{alg}}(U',k))\Rightarrow H^{r+s}_{\mathrm{alg}}(U,k) \end{equation} To show injectivity of the restriction map \begin{equation}\Hom_T(\chi,H^i_{\mathrm{alg}}(U,k))\tilde{o} \Hom_T(\chi,E_2^{0,i})\subset \Hom_T(\chi,H^i_{\mathrm{alg}}(U',k))\end{equation} it is enough to prove that the spaces $\Hom_T(\chi,H^{i-s}_{\mathrm{alg}}(U/U',H^s_{\mathrm{alg}}(U',k)))$ vanish for $s<i$. Since every representation of the solvable group scheme $T\ltimes(U/U')$ is a successive extension of characters, $H^s_{\mathrm{alg}}(U',k)$ has a filtration by $U/U'$-submodules with $1$-dimensional graded pieces which is also respected by $T$. Therefore $T$-weights of $H^{i-s}_{\mathrm{alg}}(U/U',H^s_{\mathrm{alg}}(U',k))$ form a subset of the set of $T$-weights of $H^{i-s}_{\mathrm{alg}}(U/U',k)\otimes H^s_{\mathrm{alg}}(U',k)$. For $s<i$ the character $\chi$ does not appear in the latter set because our assumption implies that $\chi$ cannot be written as $\chi'+\chi''$ where $\chi'$ is a sum of $\leq s$ elements of $-p^{{\mathbb N}}\cdot \Delta_{U'}$, and $\chi''$ is a sum of $\leq i-s$ elements of $-p^{{\mathbb N}}\cdot\Delta_{U/U'}$ . \end{proof} We will now compute $\Delta_{U_p}$ for the unipotent radical $U_p\subset B_p$ of a Borel subgroup of $SL_p$. Let $e_1,\ldots,e_p$ be a basis of $V$ so that $B_p$ is the subgroup preserving each of the subspaces $\langle e_1,\dots,e_{i}\rangle$, and $U_p\subset B_p$ is the subgroup of matrices that moreover act trivially on the quotients $\langle e_1,\dots,e_{i}\rangle/\langle e_1,\dots,e_{i-1}\rangle$. Let $\chi_i\in X^*(T_p)$ be the character of the torus $T_p=B_p/U_p$ through which it acts on $e_i$. Note that $\chi_p=-\chi_1-\ldots-\chi_{p-1}$, and $\chi_1,\ldots,\chi_{p-1}$ form a basis of $X^*(T_p)$. The set of positive roots $\Delta_{U_p}$ is equal to \begin{equation}\{\chi_i-\chi_j|1\leq i<j\leq p-1\}\cup \{\chi_i+(\chi_1+\ldots+\chi_{p-1})|1\leq i\leq p-1\}\end{equation} On the other hand, the weights of $T_p$ on the representation $V^{(1)}$ are given by $p\chi_1,\ldots,p\chi_{p-1},p\chi_p=-p(\chi_1+\ldots+\chi_{p-1})$. 
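To orient the reader in the combinatorics that follows, consider the case $p=3$: there
\begin{equation*}
\Delta_{U_3}=\{\chi_1-\chi_2,\ 2\chi_1+\chi_2,\ \chi_1+2\chi_2\},
\end{equation*}
the weights of $V^{(1)}$ are $3\chi_1$, $3\chi_2$ and $-3(\chi_1+\chi_2)$, and the only way (up to reordering) to write $3\chi_1$ as a sum of at most $p-1=2$ elements of $p^{{\mathbb N}}\cdot\Delta_{U_3}$ is $(\chi_1-\chi_2)+(2\chi_1+\chi_2)$, illustrating Lemma \ref{group cohomology: weights of frobenius twist}(2) below.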
Our goal is to prove injectivity of the map $H^{p-1}_{\mathrm{alg}}(U_p,V^{(1)})^T\tilde{o} H^{p-1}(U_p({\mathbb F}_q),V^{(1)})$ for $q>p$. To do so we will analyze the $T$-action on $H^i_{\mathrm{alg}}(U_p,k)$ for $i\leq p-1$ and determine for which $i,j$ the cohomology group $H^i_{\mathrm{alg}}(U_p,\chi_j^p)$ might have non-zero $T$-invariants. This will be achieved through the following combinatorial computation: \begin{lm}\label{group cohomology: weights of frobenius twist} \begin{enumerate} \item For $2\leq j\leq p$ the character $p\chi_{j}$ does not belong to the submonoid of $X^*(T_p)$ spanned by $\Delta_{U_p}$. \item The only (up to permutation) way to express $p\chi_1$ as a sum of $\leq p-1$ elements of $p^{{\mathbb N}}\cdot\Delta_{U_p}$ is $(\chi_1-\chi_2)+\ldots+(\chi_1-\chi_{p-1})+(\chi_1+(\chi_1+\chi_2+\ldots+\chi_{p-1}))$. \item For $q>p$, and any $2\leq j\leq p$, the character $p\chi_j$ is not congruent modulo $q-1$ to a sum of $\leq p-1$ elements of $p^{{\mathbb N}}\cdot \Delta_{U_p}$. \item For $q>p$, any congruence modulo $q-1$ between $p\chi_1$ and a sum of $\leq p-1$ elements of $p^{\leq r-1}\cdot \Delta_{U_p}$ is an equality. In particular, by (2) there are no such congruences with strictly less than $p-1$ summands. \end{enumerate} \end{lm} \begin{proof} The first statement for $j=p$ is clear because the image of every element of $\Delta_{U_p}$ under the map $\sigma:X^*(T_p)\xrightarrow{b_1\chi_1+\ldots+b_{p-1}\chi_{p-1}\mapsto b_1+\ldots+b_{p-1}}{\mathbb Z}$ is non-negative. Similarly, a linear combination of elements of $\Delta_{U_p}$ that belongs to $\langle \chi_2,\ldots,\chi_{p-1}\rangle$ must be a combination of elements $\chi_i-\chi_j,2\leq i<j\leq p-1$ and is therefore killed by $\sigma$. This shows that $p\chi_j$ for $j=2,\ldots,p-1$ are not in the monoid generated by $\Delta_{U_p}$ either. For the second statement consider an arbitrary expression $p\chi_1=p^{r_1}\lambda_1+\ldots+p^{r_l}\lambda_l$ with $l\leq p-1$, $\lambda_1,\ldots,\lambda_l\in\Delta_{U_p}$. Since $\sigma(p\chi_1)=p$, there is exactly one $m$ such that $\lambda_m$ is from the set $\{\chi_1+(\chi_1+\ldots+\chi_{p-1}),\ldots,\chi_{p-1}+(\chi_1+\ldots+\chi_{p-1})\}$ and $r_m=0$ for this $m$. There are exactly $p-2$ other elements of $\Delta_{U_p}$ in which $\chi_1$ appears with a non-zero coefficient: $\chi_1-\chi_2,\ldots,\chi_1-\chi_{p-1}$. Therefore $l=p-1$, all $r_1,\ldots,r_{p-1}$ are equal to $0$, and all $\lambda_1,\ldots,\lambda_{p-1}$ are elements of the set $\{\chi_1-\chi_2,\ldots,\chi_1-\chi_{p-1},2\chi_1+\chi_2+\ldots+\chi_{p-1}\}$. Hence there must be no repetitions among $\lambda_1,\ldots,\lambda_{p-1}$ for them to sum up to $p\chi_1$, which proves assertion (2). Suppose that, contrary to the assertion (3), there is a congruence \begin{equation}\label{group cohomology: weights congruence formula}p\chi_j\equiv p^{r_1}\lambda_1+\ldots+p^{r_l}\lambda_l\bmod q-1\end{equation} with $l\leq p-1$ and all $\lambda_m\in \Delta_{U_p}$. The coefficient of $\chi_1$ in $p^{r_1}\lambda_1+\ldots+p^{r_l}\lambda_l$ is a sum of $\leq 2l$ powers of $p$. Hence it is a number whose sum of digits in base $p$ is at most $2l\leq 2(p-1)$. Moreover, its sum of digits is equal to $2(p-1)$ only if $l=p-1$, and all $\lambda_1,\ldots,\lambda_{p-1}$ are equal to $2\chi_1+\chi_2\ldots+\chi_{p-1}$. This would violate (\ref{group cohomology: weights congruence formula}), because the right hand side would have the shape $(p^{\lambda_1}+\ldots +p^{\lambda_{p-1}})\cdot (2\chi_1+\chi_2\ldots+\chi_{p-1})$. 
Therefore the sum of digits in base $p$ of the coefficient of $\chi_1$ in $p^{r_1}\lambda_1+\ldots+p^{r_l}\lambda_l$ is at most $2(p-1)-1$. Note that a number with base $p$ expansion $\overline{a_n\ldots a_ra_{r-1}\ldots a_0}$ is congruent to $\overline{a_n\ldots a_r}+\overline{a_{r-1}\ldots a_0}$ modulo $p^r-1$. Applying this observation repeatedly, we see that for any non-zero number $a$ there exists a number $0<a'<p^r$ congruent to $a$ modulo $p^r-1$ and with sum of digits less or equal to that of $a$. For $2\leq j\leq p-1$ the coefficient of $\chi_1$ in $p\chi_j$ is zero, so by this discussion there would have to be an integer $0<a'<p^r$ divisible by $p^r-1$ with the sum of digits $\leq 2(p-1)-1$, but there is no such number. For $j=p$ the coefficient of $\chi_1$ in $p\chi_j$ is $-p$, but the only number $0<a'<p^r-1$ congruent to $-p$ modulo $p^r-1$ is $p^r-p-1=\overline{(p-1)(p-1)\ldots (p-1)(p-2)(p-1)}$ and its sum of digits is $r(p-1)-1$. This finishes the proof of (3) if $r>2$, but for $r=2$ we still need to rule out the possibility that the sum of digits in base $p$ of the coefficient of $\chi_1$ in $p^{r_1}\lambda_1+\ldots+p^{r_l}\lambda_l$ is $2(p-1)-1$. If this was the case, up to reordering the summands, this sum would have the form \begin{equation}(p^{r_1}+\ldots+p^{r_{p-2}})(2\chi_1+\chi_2+\ldots+\chi_{p-1})+p^{r_{p-1}}(\chi_i+(\chi_1+\ldots+\chi_{p-1}))\end{equation} or \begin{equation}(p^{r_1}+\ldots+p^{r_{p-2}})(2\chi_1+\chi_2+\ldots+\chi_{p-1})+p^{r_{p-1}}(\chi_1-\chi_i),\end{equation} for some $i=2,\ldots ,p-1$. Neither of these expressions can be congruent to $p\chi_p=-p(\chi_1+\ldots+\chi_{p-1})$ modulo $p^2-1$: if $p>3$ this is clear because for all $j\neq 1,i$ (which exists since $p>3$), the difference between the coefficients of $\chi_1$ and $\chi_j$ in these sums is a positive integer with the sum of digits $\leq p-1$, and in particular it cannot be zero modulo $p^2-1$. If $p=3$ then we have a congruence of the form $-3(\chi_1+\chi_2)\equiv 3^{r_1}(2\chi_1+\chi_2)+3^{r_2}(\chi_1+2\chi_2)$ or $-3(\chi_1+\chi_2)\equiv 3^{r_1}(2\chi_1+\chi_2)+3^{r_2}(\chi_1-\chi_2)$ modulo $8$. In the first case we would have $3^{r_1}\equiv 3^{r_2}$ and $3^{r_1+1}\equiv -3$ which is impossible because $(-1)$ is not a power of $3$ modulo $8$. In the second case comparing the coefficients of $\chi_2$ we arrive at the contradiction as well, finishing the proof of (3). For the assertion (4), suppose that $p\chi_1\equiv p^{r_1}\lambda_1+\ldots+p^{r_l}\lambda_{l}$ is such a congruence. The value of $\sigma$ on its right hand side is a number less than or equal to $p^{r}(p-1)$ whose sum of digits is at most $p-1$, and which is congruent to $p$ modulo $p^r-1$. The only such number is $p$ itself (we use here that $r>1$), which implies that for exactly one value of $m$ we have $r_m=0$ and $\lambda_m=\chi_i+(\chi_1+\ldots+\chi_{p-1})$, while for all $m'\neq m$ the character $\lambda_{m'}$ is of the form $\chi_i-\chi_j$. Next, we consider the coefficient $c$ of $\chi_1$ in the right hand side of our congruence. Summands $p^{r_{m'}}\lambda_{m'}$ for $m'\neq m$ contribute at most $p^{r-1}$ to this coefficients, and $p^{r_m}\lambda_m=\lambda_m$ contributes $1$ or $2$. Hence $c$ is less than or equal to $p^{r-1}(p-2)+2$. As $c$ also has to be congruent to $p$ modulo $p^r-1$, it is forced to be equal to $p$. Given what we already know about the right hand side, this can only happen if $l=p-1$, and all $r_1,\ldots,r_l$ are equal to zero, hence the congruence is forced to be an equality. 
\end{proof} Let ${\mathbb G}_a^{p-1}\simeq A_p\subset U_p$ be the subgroup of matrices that act trivially on the quotient $V/\langle e_1\rangle$. Note that $A_p$ is preserved by the action of $T_p$ and \begin{equation}\Delta_{A_p}=\{\chi_1-\chi_i|2\leq i\leq p-1\}\cup \{2\chi_1+\chi_2+\ldots+\chi_{p-1}\}\subset \Delta_{U_p}\end{equation} Lemma \ref{group cohomology: weights of frobenius twist} indicates that restriction to the subgroup $A_p\subset U_p$ should detect all cohomology classes of $V^{(1)}$ in degrees $\leq p-1$. We make this precise in Lemma \ref{group cohomology: A restriction injective} below. The deduction of Lemma \ref{group cohomology: A restriction injective} from Lemma \ref{group cohomology: weights of frobenius twist} is analogous to the discussion of injectivity conditions in \cite[\S 5]{cpsk}. \begin{lm}\label{group cohomology: A restriction injective} The following restriction maps are injective: \begin{enumerate} \item $H^{p-1}_{\mathrm{alg}}(B_p,V^{(1)})=H^{p-1}_{\mathrm{alg}}(U_p,V^{(1)})^{T_p}\tilde{o} H^{p-1}_{\mathrm{alg}}(A_p,V^{(1)})^{T_p}$. \item $H^{p-1}_{\mathrm{alg}}(A_p,V^{(1)})^{T_p}\tilde{o} H^{p-1}(A_p({\mathbb F}_q),V^{(1)})^{T_p({\mathbb F}_q)}$, if $q$ is strictly larger than $p$. \end{enumerate} \end{lm} \begin{proof} As a representation of $B_p$, $V^{(1)}$ admits a filtration with graded pieces given by the characters $\chi_i^p$, for $i=1,\ldots,p$. Given Lemma \ref{group cohomology: weights of frobenius twist}(1) and (2), Lemma \ref{group cohomology: algebraic action on unipotent cohomology}(1) implies that $H^i_{\mathrm{alg}}(B_p,\chi_j^p)=H^i_{\mathrm{alg}}(U_p,\chi_j^p)^{T_p}=0$ for $2\leq j\leq p$ and all $i$, and for $j=1$ with $i<p-1$. Lemma \ref{group cohomology: algebraic action on unipotent cohomology}(2) shows that the restriction $H^j_{\mathrm{alg}}(U_p,\chi_1^p)^{T_p}\tilde{o} H^j_{\mathrm{alg}}(A_p,\chi_1^p)^{T_p}$ is an isomorphism (both groups are in fact zero) for $j<p-1$ and is an injection for $j=p-1$. Therefore the restriction $H^{p-1}_{\mathrm{alg}}(B_p,V^{(1)})\tilde{o} H^{p-1}_{\mathrm{alg}}(A_p,V^{(1)})^{T_p}$ is injective. For the second statement, $H^{i}(A_p({\mathbb F}_q),\chi_j^p)^{T({\mathbb F}_q)}=0$ for $i<p-1$ and all $j$, by the combination of Corollary \ref{group cohomology: additive group weights}(2) and Lemma \ref{group cohomology: weights of frobenius twist}(3), (4). The restriction maps $H^{p-1}_{\mathrm{alg}}(A_p,\chi_j^p)^{T_p}\tilde{o} H^{p-1}(A_p({\mathbb F}_q), \chi_j^p)^{T_p({\mathbb F}_q)}$ are injective by Lemma \ref{group cohomology: weights of frobenius twist}(3),(4) and Corollary \ref{group cohomology: abelian restriction criteria} (the source group is in fact zero for $j\neq 1$). This implies that the restriction $H^{p-1}_{\mathrm{alg}}(A_p,V^{(1)})^{T_p}\tilde{o} H^{p-1}(A_p({\mathbb F}_q),V^{(1)})^{T_p({\mathbb F}_q)}$ is injective. \end{proof} \begin{proof}[Proof of Proposition \ref{group cohomology: from algebraic to discrete}] Lemma \ref{group cohomology: A restriction injective} completes the proof of Proposition \ref{group cohomology: from algebraic to discrete}, because combined with Theorem \ref{group cohomology: restriction to Borel iso} it even shows that the composition $H^{p-1}_{\mathrm{alg}}(SL_p,V^{(1)})\simeq H^{p-1}_{\mathrm{alg}}(B_p,V^{(1)})\tilde{o} H^{p-1}(A_p({\mathbb F}_q),V^{(1)})$ is injective. \end{proof} Moreover, the $1$-dimensional subrepresentation $\chi_1^p\subset V^{(1)}$ is responsible for all of cohomology of $V^{(1)}$ in degree $p-1$. 
Precisely, we have the following results that will be used in the next section, and in the proof of Lemma \ref{group cohomology: reducible formula}. \begin{lm}\label{group cohomology: everything from chi1 algebraic} \begin{enumerate} \item The map $H^{p-1}_{\mathrm{alg}}(A_p,\chi_1^p)^{T_p}\tilde{o} H^{p-1}_{\mathrm{alg}}(A_p,V^{(1)})^{T_p}$ is an isomorphism of $1$-dimensional vector spaces. \item The map $H^{p-1}(A_p({\mathbb F}_q),\chi_1^p)^{T_p({\mathbb F}_q)}\tilde{o} H^{p-1}(A_p({\mathbb F}_q),V^{(1)})^{T_p({\mathbb F}_q)}$ is an isomorphism when $q>p$. \end{enumerate} \end{lm} \begin{proof} 1) The kernel and cokernel of this map are, respectively, a quotient and a subgroup of the groups $H^{p-2}_{\mathrm{alg}}(A_p,\chi_2^p\oplus\ldots\oplus \chi_p^p)^{T_p}$ and $H^{p-1}_{\mathrm{alg}}(A_p,\chi_2^p\oplus\ldots \oplus \chi_p^p)^{T_p}$ and we saw in the proof of Lemma \ref{group cohomology: A restriction injective} that they both vanish. The fact that the $T_p$-invariant subspace of $H^{p-1}_{\mathrm{alg}}(A_p,\chi_1^p)$ is $1$-dimensional follows from Lemma \ref{group cohomology: weights of frobenius twist}(2). 2) Just like in the case of algebraic cohomology, the kernel and cokernel of this map are subquotients of the groups $H^{p-2}(A_p({\mathbb F}_q),\chi_2^p\oplus\ldots\oplus \chi_p^p)^{T_p({\mathbb F}_q)}$ and $H^{p-1}(A_p({\mathbb F}_q),\chi_2^p\oplus\ldots \oplus \chi_p^p)^{T_p({\mathbb F}_q)}$ and they vanish by the proof of Lemma \ref{group cohomology: A restriction injective}. \end{proof} We can deduce from the results obtained so far the following expression for the class $\alpha(E)$ when $E$ is a vector bundle of rank $p$ that admits a line sub-bundle, which was used in Section \ref{nonsemisimp: section} (as Lemma \ref{nonsemisimp: reducible formula}) to relate the class $\alpha(\Omega^1_{X_0})$ to the Kodaira-Spencer map of a fibration. \begin{lm}\label{group cohomology: reducible formula} Let $X_0$ be arbitrary algebraic stack over ${\mathbb F}_p$. Suppose that a vector bundle $E$ of rank $p$ on $X_0$ fits into an extension \begin{equation} 0\tilde{o} L\tilde{o} E\tilde{o} E'\tilde{o} 0 \end{equation} where $L$ is a line bundle, and $E'$ is a vector bundle of rank $p-1$. The class of this extension defines an element $v(E)\in \Ext^1_{X_0}(E',L)=H^1(X_0,L\otimes (E')^{\vee})$. Denote by $v(E)^{p-1}\in H^{p-1}(X_0,L^{\otimes p-1}\otimes (\det E')^{\vee})$ the image of $v(E)^{\otimes p-1}\in H^{p-1}(X_0, (L\otimes (E')^{\vee})^{\otimes p-1})$ under the map induced by $(L\otimes (E')^{\vee})^{\otimes p-1}\tilde{o} \Lambda^{p-1}(L\otimes (E')^{\vee})=L^{\otimes p-1}\otimes (\det E')^{\vee}$. The class $\alpha(E)\in \Ext^{p-1}_{X_0}(\Lambda^p E, F^*E)=H^{p-1}(X_0,F^*E\otimes L^{\vee}\otimes (\det E')^{\vee})$ is equal, up to multiplying by a scalar from ${\mathbb F}_p^{\times}$, to the image of $v(E)^{p-1}$ under the map induced by $L^{\otimes p-1}\otimes(\det E')^{\vee}=F^*L\otimes L^{\vee}\otimes(\det E')^{\vee}\hookrightarrow F^*E\otimes L^{\vee}\otimes(\det E')^{\vee}$. \end{lm} \begin{proof} The $GL_{p}$-torsor $\underline{\Isom}(E,{\mathcal O}^{\oplus p})$ associated to the bundle $E$ naturally reduces to a maximal parabolic subgroup $P:=\left(\begin{matrix}* & * & \ldots & *\\ 0 & * & \ldots & * \\ 0 & * & \ddots & \vdots \\ 0 & * & \ldots & * \end{matrix}\right)\subset GL_{p}$. Therefore $E$ arises as the pullback of the tautological rank $p$ vector bundle $V$ along the classifying map $X_0\tilde{o} BP$ to the classifying stack of the group $P$ over ${\mathbb F}_p$. 
Therefore it is enough to prove the corresponding expression for the class $\alpha(V)\in \Ext^{p-1}_{P,\mathrm{alg}}(\Lambda^p V,V^{(1)})=H^{p-1}_{\mathrm{alg}}(P,V^{(1)}\otimes (\Lambda^pV)^*)$. Denote by $B_p\subset P\cap SL_{p}$ the subgroup of upper triangular matrices. By \cite[Corollary II.4.7(c)]{jantzen} the restriction map $H^i(P\cap SL_p, W)\tilde{o} H^i(B_p, W|_{B_p})$ is an isomorphism for any $P\cap SL_p$-module $W$. Since restriction along the inclusion $P\cap SL_p\subset P$ is an injection on cohomology, it is enough to prove the desired expression for $\alpha(V)$ in $\Ext^{p-1}_{B_p,\mathrm{alg}}(\Lambda^p V,V^{(1)})=H^{p-1}_{\mathrm{alg}}(B_p,V^{(1)})$. Recall that we denote by $T_p\subset B_p$ the maximal torus of diagonal matrices and by $\chi_i:T_p\tilde{o} {\mathbb G}_m$, for $i=1,\ldots,p$, the character sending $\diag(a_1,\ldots,a_p)$ to $a_i$. We denote by the same symbol the composite character $B_p\tilde{o} T_p\xrightarrow{\chi_i}{\mathbb G}_m$. Since we are working inside $SL_p$ the character $\chi_p$ can be expressed as $(\chi_1\cdot\ldots\cdot \chi_{p-1})^{-1}$. Next, there is a subgroup ${\mathbb G}_a^{p-1}\simeq A_p=\left(\begin{matrix}1 & * & \ldots & *\\ 0 & 1 & \ldots & 0 \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{matrix}\right)\subset B_p$. The Frobenius twist $V^{(1)}$ viewed as a representation of $B_p$ admits a filtration with quotients $\chi_1^p,\ldots,\chi_p^p$. By Lemma \ref{group cohomology: A restriction injective}(1), we may further restrict to the subgroup $T_p\ltimes A_p\subset B_p$ to prove the desired equality of cohomology classes. The class $v(V|_{T_p\ltimes A_p})\in H^1_{\mathrm{alg}}(T_p\ltimes A_p, \chi_1\otimes (\chi_2\oplus\ldots\oplus \chi_p)^{\vee})=\Hom_{\mathrm{grp}}(A_p,\chi_1\otimes (\chi_2\oplus\ldots\oplus \chi_p)^{\vee})^{T_p}$ is an isomorphism between $A_p$ and the underlying vector space of the representation $\chi_1\otimes (\chi_2\oplus\ldots\oplus \chi_p)^{\vee}$, and its power $v(V|_{T_p\ltimes A_p})^{p-1}\in H^{p-1}_{\mathrm{alg}}(A_p,\chi_1^{p-1}\otimes \chi_2^{-1}\otimes\ldots\otimes \chi_p^{-1})^{T_p}=H^{p-1}(A_p,\chi_1^p)^{T_p}$ is therefore non-zero. By Lemma \ref{group cohomology: everything from chi1 algebraic}(1) the class $\alpha(V|_{T_p\ltimes A_p})\in H^{p-1}(A_p,V^{(1)})^{T_p}$ is the image of some class in $H^{p-1}(A_p,\chi_1^p)^{T_p}$. Since $\alpha(V|_{T_p\ltimes A_p})$ is non-zero, and $H^{p-1}(A_p,\chi_1^p)^{T_p}$ is a one-dimensional vector space, the result follows. \end{proof} \begin{rem} We do not expect Lemma \ref{group cohomology: A restriction injective}(2) to remain true for $q=p$. E.g. when $p=2$ in Remark \ref{group cohomology: p=2 remark} we computed explicitly a cocycle representing the class $\alpha(V)\in H^1_{\mathrm{alg}}(SL_2,V^{(1)})$, and its restriction to $U_2\simeq{\mathbb G}_a$ is the image of $\mathrm{Id}\in \Hom_{\mathrm{grp}}(U_2,{\mathbb G}_a)=H^1_{\mathrm{alg}}(U_2,k)$ under the map in the long exact sequence \begin{equation}\dots\tilde{o} H^0_{\mathrm{alg}}(U_2,k)\xrightarrow{\delta} H^1_{\mathrm{alg}}(U_2,k)\tilde{o} H^1_{\mathrm{alg}}(U_2,V^{(1)})\tilde{o} H^1_{\mathrm{alg}}(U_2,k)\tilde{o}\ldots\end{equation} The connecting homomorphism $\delta$ sends $1\in k=H^0_{\mathrm{alg}}(U_2,k)$ to the Frobenius map $\operatorname{Fr}_2\in \Hom(U_2,{\mathbb G}_a)=H^1_{\mathrm{alg}}(U_2,k)$. 
Since $\mathrm{Id}$ coincides with $\operatorname{Fr}_2$ when restricted to $U_2({\mathbb F}_2)$, the image of $\alpha(V)\in H^1_{\mathrm{alg}}(SL_2,V^{(1)})$ in $H^1(U_2({\mathbb F}_2),V^{(1)})$ (and consequently in $H^1(SL_2({\mathbb F}_2),V^{(1)})$) is zero. \end{rem} \section{Cohomology of \texorpdfstring{$SL_p$}{} over rings of integers} \label{borel: section} We will enhance the results of the previous section by showing that for an appropriately chosen discrete group acting on the vector space $V$, the Bockstein homomorphism (associated to a lift of $V$ over $W_2({\mathbb F}_q)$) applied to the class $\alpha(V)$ is non-zero. We keep the notation of the previous section: we work over a finite field $k={\mathbb F}_q$, and $V$ is a $p$-dimensional ${\mathbb F}_q$-vector space. Let $\widetilde{V}$ be a free $W_2({\mathbb F}_q)$-module such that $\widetilde{V}/p\simeq V$, it is equipped with the tautological action of the discrete group $GL_p(W_2({\mathbb F}_q))$. Denote by $\widetilde{V}^{(1)}:=\widetilde{V}\otimes_{W_2({\mathbb F}_q),W_2(\operatorname{Fr}_p)}W_2({\mathbb F}_q)$ the twist of $\widetilde{V}$ by the Frobenius automorphism of $W_2({\mathbb F}_q)$ induced by $\operatorname{Fr}_p:t\mapsto t^p$ on ${\mathbb F}_q$. The module $\widetilde{V}^{(1)}$ can be $GL_p(W_2({\mathbb F}_q))$-equivariantly identified with $\widetilde{V}$ where the action on the latter is modified by precomposing with the Frobenius automorphism $GL_p(W_2({\mathbb F}_q))\xrightarrow{W_2(\operatorname{Fr}_p)}GL_p(W_2({\mathbb F}_q))$. We have a short exact sequence \begin{equation}0\tilde{o} V^{(1)}\xrightarrow{\psi_1}\widetilde{V}^{(1)}\xrightarrow{\psi_2} V^{(1)}\tilde{o} 0\end{equation} of $GL_p(W_2({\mathbb F}_q))$-modules which gives rise to the connecting homomorphisms $\mathrm{Bock}^i: H^i(G, V^{(1)})\tilde{o} H^{i+1}(G, V^{(1)})$ for any group $G$ mapping to $GL_p(W_2({\mathbb F}_q))$. Let now $F$ be an arbitrary number field in which $p$ is unramified and such that there exists a prime ideal ${\mathfrak p}\subset {\mathcal O}_F$ with the residue field ${\mathbb F}_q$. Prime ideal ${\mathfrak p}$ gives rise to a surjection $\kappa:{\mathcal O}_F\twoheadrightarrow{\mathbb F}_q$, and there is a unique homomorphism of rings ${\mathcal O}_F\twoheadrightarrow W_2({\mathbb F}_q)$ lifting $\kappa$. This homomorphism gives rise to a map $SL_p({\mathcal O}_F)\tilde{o} SL_p(W_2({\mathbb F}_q))$ which defines an action of $SL_p({\mathcal O}_F)$ on $V, V^{(1)}$, and $\widetilde{V}^{(1)}$, and hence defines Bockstein homomorphisms $\mathrm{Bock}^i:H^i(SL_p({\mathcal O}_F),V^{(1)})\xrightarrow{} H^{i+1}(SL_p({\mathcal O}_F), V^{(1)})$. \begin{pr}\label{group cohomology: ring of integers main} For every prime $p$ there exists a quadratic extension $F/{\mathbb Q}$ in which $p$ is not split, such that $\mathrm{Bock}^{p-1}(\alpha(V))\in H^p(SL_p({\mathcal O}_F),V^{(1)})$ is non-zero. \end{pr} We will prove this non-vanishing by a method similar to the one employed in previous section, using the technique of \cite{cpsk}. For the method to work we need the image of the reduction map ${\mathcal O}^{\times}_F\tilde{o}{\mathbb F}_{p^2}^{\times}$ on groups of units to be large enough: the field $F$ will be chosen appropriately in Lemma \ref{group cohomology: sorry not sorry}. 
As in the previous section, we denote by $B_p\subset SL_p$ the subgroup of upper triangular matrices with respect to a given basis, $T_p\subset B_p$ is the diagonal torus, and ${\mathbb G}_a^{p-1}\simeq A_p\subset B_p$ is the subgroup of matrices that send the basis vector $e_i$ to a vector of the form $a_ie_1+e_i$, for all $i\geq 2$. By Lemmas \ref{group cohomology: A restriction injective} and \ref{group cohomology: everything from chi1 algebraic}(2), the image of the class $\alpha(V)\in H^{p-1}_{\mathrm{alg}}(SL(V),V^{(1)})$ in $H^{p-1}(A_p({\mathbb F}_q),V^{(1)})^{T_p({\mathbb F}_q)}$ is non-zero and moreover lies in the image of the homomorphism $H^{p-1}(A_p({\mathbb F}_q),\chi_1^p)^{T_p({\mathbb F}_q)}\to H^{p-1}(A_p({\mathbb F}_q),V^{(1)})^{T_p({\mathbb F}_q)}$. Therefore to prove that $\mathrm{Bock}(\alpha(V))$ is non-zero we may work with the cohomology of the group of ${\mathcal O}_F$-points of the subgroup $T_p\ltimes A_p\subset SL_p$ which is a fairly explicit object thanks to Proposition \ref{group cohomology: additive group cohomology}. When restricted to $B_p({\mathcal O}_F)$, the representation $\widetilde{V}$ admits a filtration with graded quotients $\widetilde{\chi}_1,\ldots,\widetilde{\chi}_p$ that are characters factoring through $B_p({\mathcal O}_F)\to T_p({\mathcal O}_F)$, and lifting the characters $\chi_1,\ldots,\chi_p$. We denote by $\widetilde{\chi}_i^{(1)}$ the character $B_p({\mathcal O}_F)\to T_p({\mathcal O}_F)\to W_2({\mathbb F}_q)^{\times}$ obtained by composing $\widetilde{\chi}_i$ with the Frobenius automorphism $\operatorname{Fr}_p:W_2({\mathbb F}_q)^{\times}\to W_2({\mathbb F}_q)^{\times}$. The representation $\widetilde{V}^{(1)}$ of $B_p({\mathcal O}_F)$ is likewise filtered with graded quotients isomorphic to $\widetilde{\chi}_i^{(1)}$. Note that $\widetilde{\chi}_i^{(1)}/p\simeq \chi_i^{(1)}\simeq \chi_i^p$, but $\widetilde{\chi}_i^{(1)}$ is generally {\it not} isomorphic to $\widetilde{\chi}_i^p$. The discrepancy between these two characters will be key for proving that the class $\alpha(V)\in H^{p-1}(SL_p({\mathcal O}_F),V^{(1)})$ does not lift to a class in $H^{p-1}(SL_p({\mathcal O}_F),\widetilde{V}^{(1)})$. We treat separately the cases $p=2$ and $p>2$ because the proof for $p>2$ relies on the results of the previous section that were only proven away from the case $p=2$. We also hope that reading the proof for $p=2$ first makes the general argument clearer. In all of the cases, we choose an extension $F/{\mathbb Q}$ as directed by Lemma \ref{group cohomology: sorry not sorry}.
\begin{lm}\label{group cohomology: sorry not sorry} For every prime number $p$ there exists an integer $N>0$ such that $p$ is not split in the real quadratic field $F={\mathbb Q}(\sqrt{N})$, and the two conditions are satisfied: \begin{enumerate} \item The group of units ${\mathcal O}_F^{\times}$ surjects onto $\{x\in{\mathbb F}_{p^2}^{\times}|N_{{\mathbb F}_{p^2}/{\mathbb F}_p}(x)=\pm 1\}$ under the reduction map ${\mathcal O}_F\tilde{o} {\mathcal O}_F/p\simeq{\mathbb F}_{p^2}$ \item There exists a unit $u\in{\mathcal O}_F^{\times}$ whose reduction $\overline{u}$ in $W_2({\mathbb F}_{p^2})^{\times}$ satisfies $\operatorname{Fr}_p(\overline{u})\neq \overline{u}^p$. \end{enumerate} \end{lm} \begin{proof} We first treat the case $p>2$. Let $u_0\in{\mathbb F}_{p^2}^{\times}$ be a generator of the cyclic group $\{x\in{\mathbb F}_{p^2}^{\times}|N_{{\mathbb F}_{p^2}/{\mathbb F}_p}(x)=\pm 1\}$. It satisfies the equation $u_0^2-2d_0u_0-1=0$ where $d_0=\frac{1}{2}\mathrm{Tr}_{{\mathbb F}_{p^2}/{\mathbb F}_p}(u_0)\in{\mathbb F}_p$, hence we can write $u_0$ as $d_0+\sqrt{d_0^2+1}$. Let $d\in{\mathbb Z}$ be an arbitrary integer reducing to $d_0$ modulo $p$, and take $F$ to be the field ${\mathbb Q}(\sqrt{d^2+1})$. By construction, $d^2+1$ is not a square modulo $p$, hence $d^2+1$ is not a square in ${\mathbb Q}$, and $p$ is non-split in ${\mathcal O}_F$. The element $u=d+\sqrt{d^2+1}\in{\mathcal O}_F$ is invertible and it (or its conjugate) reduces to $u_0$ in ${\mathbb F}_{p^2}$, hence condition (1) is satisfied. Next, let us check that we can choose the lift $d$ of $d_0$ to ensure that condition (2) is satisfied for $u=d+\sqrt{d^2+1}$. The element $\operatorname{Fr}_p(\overline{u})\in W_2({\mathbb F}_{p^2})$ is the mod $p^2$ reduction of $d-\sqrt{d^2+1}$, hence to prove that $\operatorname{Fr}_p(\overline{u})\neq \overline{u}^p$ it is enough to ensure that the integers $\mathrm{Tr}_{F/{\mathbb Q}}(d-\sqrt{d^2+1})$ and $\mathrm{Tr}_{F/{\mathbb Q}}((d+\sqrt{d^2+1})^p)$ are not congruent modulo $p^2$. The first one is equal to $2d$, and we can expand their difference as \begin{multline}\label{borel: wieferich formula} \mathrm{Tr}_{F/{\mathbb Q}}((d+\sqrt{d^2+1})^p)-\mathrm{Tr}_{F/{\mathbb Q}}(d-\sqrt{d^2+1})=\\ 2(d^p+\binom{p}{2}d^{p-2}(d^2+1)+\ldots+\binom{p}{p-1}d(d^2+1)^{(p-1)/2})-2d \end{multline} This is a polynomial of degree $p$ in $d$ that reduces to $2(d^p-d)$ modulo $p$. In particular, by Hensel's lemma, this polynomial has exactly one root in ${\mathbb Z}/p^2$ reducing to $d_0$, so we can choose the lift $d$ of $d_0$ such that the integer (\ref{borel: wieferich formula}) is not zero modulo $p^2$. Finally, for $p=2$ take $F={\mathbb Q}(\sqrt{5})$. The unit $a=\frac{1+\sqrt{5}}{2}\in{\mathcal O}_F={\mathbb Z}[\frac{1+\sqrt{5}}{2}]$ has minimal polynomial $a^2-a-1=0$, hence $2$ is not split in ${\mathcal O}_F$, and $a$ reduces to an element of ${\mathbb F}_4\setminus {\mathbb F}_2$ that necessarily generates ${\mathbb F}_4^{\times}$. Condition (2) is fulfilled simply by $u=-1$. \end{proof} \begin{proof}[Proof of Proposition \ref{group cohomology: ring of integers main} for $p=2$.] First, we will prove that the class $\alpha(V)\in H^1(SL_2({\mathbb F}_q),V^{(1)})$ survives under the map to $H^1(SL_2({\mathcal O}_F),V^{(1)})$. When restricted to the Borel subgroup $B_2$, the module $V^{(1)}$ fits into the extension $0\tilde{o}\chi_1^2\tilde{o} V^{(1)}\tilde{o} \chi_1^{-2}\tilde{o} 0$. 
The extension induced from $0\tilde{o} V^{(1)}\tilde{o} S^2V\tilde{o}\Lambda^2 V\tilde{o} 0$ via the map $V^{(1)}\tilde{o} \chi_1^{-2}$ is split by the map $S^2V\tilde{o} S^2(\chi_1^{-1})$, hence the class $\alpha(V)|_{B_2}\in H^1_{\mathrm{alg}}(B_2,V^{(1)})$ is in the image of the map $H^1(B_2,\chi_1^2)\tilde{o} H^1(B_2,V^{(1)})$. In particular, the restriction $\alpha(V)|_{B_2({\mathbb F}_q)}$ is in the image of the map $H^1(B_2({\mathbb F}_q),\chi_1^2)\tilde{o} H^1(B_2({\mathbb F}_q),V^{(1)})$. Hence it is enough to show the following two facts: \begin{enumerate} \item $H^1(B_2({\mathbb F}_q),\chi_1^2)\tilde{o} H^1(B_2({\mathcal O}_F),\chi_1^2)$ is injective \item $H^1(B_2({\mathcal O}_F),\chi_1^2)\tilde{o} H^1(B_2({\mathcal O}_F),V^{(1)})$ is injective \end{enumerate} For (1), $H^1(B_2({\mathbb F}_q),\chi_1^2)=H^1(A_2({\mathbb F}_q),\chi_1^2)^{T_2({\mathbb F}_q)}$ obviously injects into $H^1(A_2({\mathbb F}_q),\chi_1^2)$, and the map $H^1(A_2({\mathbb F}_q),\chi_1^2)\tilde{o} H^1(A_2({\mathcal O}_F),\chi_1^2)$ is an isomorphism because $A_2$ acts trivially on $\chi_1^2$ here, and the natural map $A_2({\mathcal O}_F)\tilde{o} A_2({\mathbb F}_q)$ induces an isomorphism $A_2({\mathcal O}_F)/2\simeq A_2({\mathbb F}_q)$. This implies (1). For (2), it is enough to show that $H^0(B_2({\mathcal O}_F),\chi_1^{-2})=0$. The group $T_2({\mathcal O}_F)={\mathcal O}_F^{\times}$ maps surjectively onto ${\mathbb F}_q^{\times}={\mathbb F}_4^{\times}$, hence $\chi_1^{-2}$ is a non-trivial character of $T_2({\mathcal O}_F)\subset B_2({\mathcal O}_F)$, and the space of invariants $H^0(B_2({\mathcal O}_F),\chi_1^{-2})=(\chi_1^{-2})^{B_2({\mathcal O}_F)}$ vanishes. Therefore the image of $\alpha(V)$ in $H^1(SL_2({\mathcal O}_F),V^{(1)})$ is indeed non-zero. Next, we will check that the Bockstein map $\mathrm{Bock}^1:H^1(SL_2({\mathcal O}_F),V^{(1)})\tilde{o} H^2(SL_2({\mathcal O}_F),V^{(1)})$ induced by the $W_2(k)$-module $\widetilde{V}^{(1)}$ is injective. The key input for this is that $H^1(SL_2({\mathcal O}_F),\widetilde{V}^{(1)})$ is annihilated by multiplication by $2$. Indeed, the central element $\diag(-1,-1)\in SL_2({\mathcal O}_F)$ acts in the representation $\widetilde{V}^{(1)}$ via multiplication by $(-1)$ so multiplication by $(-1)$ on $H^i(SL_2({\mathcal O}_F),\widetilde{V}^{(1)})$, for all $i$, is equal to identity, hence these cohomology groups are $2$-torsion. Injectivity of $\mathrm{Bock}^1$ now follows by considering the long exact sequence \begin{equation} \ldots\tilde{o} H^0(V^{(1)})\xrightarrow{\mathrm{Bock}^0} H^1(V^{(1)})\xrightarrow{\psi_1} H^1(\widetilde{V}^{(1)})\xrightarrow{\psi_2} H^1(V^{(1)})\xrightarrow{\mathrm{Bock}^1} H^2(V^{(1)}) \end{equation} where $H^i$ everywhere refers to the cohomology of $SL_2({\mathcal O}_F)$. The first visible term $H^0(V^{(1)})=(V^{(1)})^{SL_2({\mathcal O}_F)}$ is zero, because $SL_2({\mathcal O}_F)\tilde{o} SL_2({\mathbb F}_q)$ is a surjection, hence $\psi_1$ is injective. The composition $\psi_1\circ\psi_2$ is the multiplication by $2$ map on $H^1(\widetilde{V}^{(1)})$ which we know to be zero, so $\psi_2$ has to be zero, which is equivalent to injectivity of $\mathrm{Bock}^1$. This finishes the proof of Proposition \ref{group cohomology: ring of integers main} for $p=2$. \end{proof} \begin{proof}[Proof of Proposition \ref{group cohomology: ring of integers main} for $p>2$.] We will show the following vanishing results, and Proposition \ref{group cohomology: ring of integers main} will be deduced as a formal consequence of these. 
\begin{lm}\label{group cohomology: integral restriction facts} Assume that $q=p^2$. There exists a real quadratic extension $F/{\mathbb Q}$ with ${\mathcal O}_F/p\simeq{\mathbb F}_q={\mathbb F}_{p^2}$, such that \begin{enumerate} \item The map $H^{p-1}((T_p\ltimes A_p)({\mathbb F}_q),\chi_1^p)\to H^{p-1}((T_p\ltimes A_p)({\mathcal O}_F),\chi_1^p)$ is injective. \item The group $H^{p-1}((T_p\ltimes A_p)({\mathcal O}_F),\chi_2^p\oplus\ldots\oplus \chi_p^p)$ vanishes. \item The group $H^{p-2}((T_p\ltimes A_p)({\mathcal O}_F),\chi_1^p)$ vanishes. \item The $W_2(k)$-module $H^{p-1}((T_p\ltimes A_p)({\mathcal O}_F),\widetilde{\chi}_1^{(1)})$ is annihilated by $p$. \end{enumerate} \end{lm} \begin{proof} We choose $F$ satisfying the properties listed in Lemma \ref{group cohomology: sorry not sorry}. The reduction map $A_p({\mathcal O}_F)\to A_p({\mathbb F}_q)$ induces an isomorphism $A_p({\mathcal O}_F)/p\simeq A_p({\mathbb F}_q)$. Therefore the induced map $H^1(A_p({\mathbb F}_q),k)\to H^1(A_p({\mathcal O}_F),k)$ on cohomology in degree $1$ is an isomorphism, and induces a $T_p({\mathcal O}_F)$-equivariant surjection $H^i(A_p({\mathbb F}_q),k)\to H^i(A_p({\mathcal O}_F),k)\simeq \Lambda^i ({\mathfrak a}_k\cdot x_{e}\oplus {\mathfrak a}^{(1)}_k\cdot x_{\tau})$ for $i\geq 1$, where we use the notation of Proposition \ref{group cohomology: additive group cohomology}, and $e,\tau$ are the elements of the Galois group $\Gal(F/{\mathbb Q})\simeq {\mathbb Z}/2$. The action of $T_p({\mathcal O}_F)$ on $H^1(A_p({\mathcal O}_F),k)\simeq {\mathfrak a}_k\cdot x_{e}\oplus {\mathfrak a}^{(1)}_k\cdot x_{\tau}$ factors through $T_p({\mathcal O}_F)\to T_p({\mathbb F}_q)$, and this module is explicitly given as the direct sum of inverses of the characters $\chi_1-\chi_2,\ldots,\chi_1-\chi_p,p(\chi_1-\chi_2),\ldots,p(\chi_1-\chi_p)$. The reduction map $({\mathcal O}_F^{\times})^{p-1}\simeq T_p({\mathcal O}_F)\to T_p({\mathbb F}_q)\simeq ({\mathbb F}^{\times}_{p^2})^{p-1}$ is not surjective (as soon as $p>3$), but its image is equal to $(\{x|N_{{\mathbb F}_{p^2}/{\mathbb F}_p}(x)=\pm 1\})^{p-1}$, by our choice of the field $F$. We need the following partial refinement of Lemma \ref{group cohomology: weights of frobenius twist} to nevertheless be able to bound the invariant subspaces $H^j(A_p({\mathcal O}_F),\chi_i^p)^{T_p({\mathcal O}_F)}$. \begin{lm}\label{borel: combinatorics}Denote by $S$ the set $\{\chi_1-\chi_2,\ldots,\chi_1-\chi_p,p(\chi_1-\chi_2),\ldots, p(\chi_1-\chi_p)\}\subset X^*(T_p)$. \begin{enumerate} \item For $2\leq i\leq p$ the character $p\chi_i$ is not congruent modulo $p+1$ to a sum of $\leq p-1$ elements of $S$. \item The character $p\chi_1$ is not congruent modulo $p+1$ to a sum of $\leq p-2$ elements of $S$. \item The only, up to permutation, congruence modulo $p+1$ between $p\chi_1$ and a sum of $\leq p-1$ elements of $S$ is the equality $p\chi_1=(\chi_1-\chi_2)+\ldots+(\chi_1-\chi_p)$. \end{enumerate} \end{lm} \begin{proof}[Proof of Lemma \ref{borel: combinatorics}] We use $\chi_1,\ldots,\chi_{p-1}$ as a basis for $X^*(T_p)$; the character $\chi_p$ is expressed as $-(\chi_1+\ldots+\chi_{p-1})$. Denote by $\sigma:X^*(T_p)\to{\mathbb Z}$ the map sending $a_1\chi_1+\ldots+a_{p-1}\chi_{p-1}$ to $a_1+\ldots+a_{p-1}$. We have $\sigma(\chi_1-\chi_i)=0$ for $i\leq p-1$ and $\sigma(\chi_1-\chi_p)=p$. In the rest of the proof, the symbol $\equiv$ always refers to congruence modulo $p+1$.
1) The only elements of $S$ with a non-zero value of $\sigma$ are $\chi_1-\chi_p$ and $p(\chi_1-\chi_p)$, with the values $p\equiv -1$ and $p^2\equiv 1$, respectively. Suppose that we have a congruence $p\chi_i\equiv r_1+\ldots+r_{l}\bmod p+1$ with $l\leq p-1$, and all $r_1,\ldots,r_{l}$ from $S$. Suppose first that $i\neq p$. If $\chi_1-\chi_p$ appears in this sum $a$ times, and $p(\chi_1-\chi_p)\equiv \chi_p-\chi_1$ appears $b$ times, then $a-b\equiv 1\bmod p+1$ because $\sigma(p\chi_i)\equiv -1$. This forces $b$ to be equal to $a-1$. Therefore the difference $p\chi_i-(a(\chi_1-\chi_p)+(a-1)(\chi_p-\chi_1))\equiv -\chi_i-\chi_1+\chi_p=-2\chi_1-\chi_2-\ldots-2\chi_i-\ldots-\chi_{p-1}$ is congruent to a sum of $\leq p-2$ elements of the form $\pm(\chi_1-\chi_j)$ for $j=2,\ldots,p-1$. But such a congruence would have to use, for each $2\leq j\leq p-1,j\neq i$, an element of the form $\pm(\chi_1-\chi_j)$ at least once, and an element of the form $\pm(\chi_1-\chi_i)$ at least twice (because $p+1\geq 4$), so at least $p-1$ elements of $S$ would be needed. Next, let us rule out the possibility of a congruence $p\chi_p\equiv r_1+\ldots+r_{l}$. We have $\sigma(p\chi_p)=-p(p-1)\equiv -2$. Hence if $\chi_1-\chi_p$ appears $a$ times in this congruence, then $p(\chi_1-\chi_p)$ appears $a-2$ times, so $p\chi_p-2(\chi_1-\chi_p)\equiv -3\chi_1-\chi_2-\ldots-\chi_{p-1}$ is congruent to a sum of $\leq p-3$ elements of the form $\pm(\chi_1-\chi_j),j\leq p-1$. But similarly to the previous case, such a sum would have to use at least $p-2$ such elements, and the original congruence cannot exist. 2) We have $\sigma(p\chi_1)=p\equiv -1$, hence a congruence $p\chi_1\equiv r_1+\ldots +r_{l}$ would have to use $a$ instances of $\chi_1-\chi_p$ and $a-1$ instances of $p(\chi_p-\chi_1)$, for some $a$. But this leaves us with $p\chi_1-(\chi_1-\chi_p)=(p-2)\chi_1-\chi_2-\ldots-\chi_{p-1}$ being congruent to a sum of $\leq p-3$ elements of the form $\pm(\chi_1-\chi_j),j\leq p-1$, which is impossible. 3) As in part (2), such a congruence would induce a congruence between $(p-2)\chi_1-\chi_2-\ldots-\chi_{p-1}$ and a sum of $\leq p-2$ elements of the form $\pm(\chi_1-\chi_j),j\leq p-1$. For each $j=2,\ldots,p-1$, we have to use an element of the form $\pm(\chi_1-\chi_j)$ at least once, hence exactly once, and this element must be $\chi_1-\chi_j$, forcing $a=1$ and implying the desired uniqueness. \end{proof} We can now proceed with the proof of Lemma \ref{group cohomology: integral restriction facts}. In (1), we will prove that even the composition of this map with further restriction $H^{p-1}((T_p\ltimes A_p)({\mathcal O}_F),\chi_1^p)\tilde{o} H^{p-1}(A_p({\mathcal O}_F),\chi_1^p)$ is injective. By Lemma \ref{group cohomology: weights of frobenius twist}(4), in the notation of Proposition \ref{group cohomology: additive group cohomology}, the invariant subspace $H^{p-1}(A_p({\mathbb F}_q),\chi_1^p)^{T({\mathbb F}_q)}\subset H^{p-1}(A_p({\mathbb F}_q),\chi_1^p)$ is $1$-dimensional and is equal to $\Lambda^{p-1}({\mathfrak a}_k\cdot x_0)$. This shows the injectivity asserted in part (1), because ${\mathfrak a}_k\cdot x_0\subset H^1(A_p({\mathbb F}_q),k)$ maps isomorphically onto ${\mathfrak a}_k\cdot x_e\subset H^1(A_p({\mathcal O}_F),k)$. By Lemma \ref{group cohomology: nontrivial character cohomology} below, to prove part (2) it is enough to show that every character of $T_p({\mathcal O}_F)$ appearing as a subquotient of $H^i(A_p({\mathcal O}_F),\chi_j^p)$ for $j\geq 2, i\leq p-1$ is non-trivial. 
The module $H^{i}(A_p({\mathcal O}_F),k)$ is isomorphic to a direct sum of characters that are products of $i$ elements of $$\{-(\chi_1-\chi_2),\ldots,-(\chi_1-\chi_p),-p(\chi_1-\chi_2),\ldots,-p(\chi_1-\chi_p)\}\subset X^*(T_p).$$ Since the kernel of the restriction $X^*(T_p)\to \Hom(T_p({\mathcal O}_F),k^{\times})$ is contained in $(p+1)\cdot X^*(T_p)$ by our choice of the field $F$, the assertion follows from Lemma \ref{borel: combinatorics}(1). Analogously, part (3) follows from Lemma \ref{borel: combinatorics}(2). We now turn to proving part (4). By part (3), we have that $H^i(A_p({\mathcal O}_F),\chi_1^p)$ decomposes as a direct sum of non-trivial characters of $T_p({\mathcal O}_F)$ for $i<p-1$, therefore $\mathrm{R}\Gamma(T_p({\mathcal O}_F),H^i(A_p({\mathcal O}_F),\chi_1^p))$ and $\mathrm{R}\Gamma(T_p({\mathcal O}_F),H^i(A_p({\mathcal O}_F),\widetilde{\chi}_1^{(1)}))$ are quasi-isomorphic to $0$. Hence $H^{p-1}((T_p\ltimes A_p)({\mathcal O}_F),\widetilde{\chi}_1^{(1)})$ injects into $H^{p-1}(A_p({\mathcal O}_F),\widetilde{\chi}_1^{(1)})^{T_p({\mathcal O}_F)}$, and it is enough to prove that the latter $W_2(k)$-module is annihilated by $p$. We have a $T_p({\mathcal O}_F)$-equivariant identification $H^{p-1}(A_p({\mathcal O}_F),\widetilde{\chi}_1^{(1)})\simeq \widetilde{\chi}_1^{(1)}\otimes\Lambda^{p-1}({\mathfrak a}_{W_2(k)}\oplus{\mathfrak a}_{W_2(k)}^{(1)})$, and this module decomposes as a direct sum of characters of the form $\widetilde{\chi}_1^{(1)}\otimes \eta$ where $\eta$ is a product of $p-1$ characters of the form $-(\widetilde{\chi}_1-\widetilde{\chi}_j)$ or $-(\widetilde{\chi}^{(1)}_1-\widetilde{\chi}_j^{(1)})$ for $j=2,\ldots,p$. By Lemma \ref{borel: combinatorics}, even the mod $p$ reduction of $\widetilde{\chi}_1^{(1)}\otimes \eta$ is a non-trivial character of $T_p({\mathcal O}_F)$ unless $\eta=-(\widetilde{\chi}_1-\widetilde{\chi}_2)-\ldots-(\widetilde{\chi}_1-\widetilde{\chi}_{p-1})-(\widetilde{\chi}_1-\widetilde{\chi}_p)=-p\widetilde{\chi}_1$. Therefore $H^{p-1}(A_p({\mathcal O}_F),\widetilde{\chi}_1^{(1)})^{T_p({\mathcal O}_F)}=(\widetilde{\chi}_1^{(1)}\otimes\widetilde{\chi}_1^{-p})^{T_p({\mathcal O}_F)}$. Let now $u\in{\mathcal O}_F^{\times}$ be a unit such that its reduction $\overline{u}$ in $W_2({\mathbb F}_{p^2})^{\times}$ satisfies $\operatorname{Fr}_p(\overline{u})\neq\overline{u}^p$, as provided by Lemma \ref{group cohomology: sorry not sorry}(2).
Then the element $\diag(u,u^{-1},1,\ldots,1)\in T_p({\mathcal O}_F)$ acts in the character $\widetilde{\chi}_1^{(1)}\otimes\widetilde{\chi}_1^{-p}$ via multiplication by $\operatorname{Fr}_p(\overline{u})\overline{u}^{-p}$, therefore the $W_2(k)$-module of invariants $(\widetilde{\chi}_1^{(1)}\otimes\widetilde{\chi}_1^{-p})^{T_p({\mathcal O}_F)}$ is isomorphic to $k$, which proves part (4). \end{proof} \begin{lm}\label{group cohomology: nontrivial character cohomology} Suppose that $F$ is a number field such that the group of units of ${\mathcal O}_F$ is infinite. For any split torus $T$ over ${\mathcal O}_F$, if $\chi:T({\mathcal O}_F)\to k^{\times}$ is a non-trivial character then $\mathrm{R}\Gamma(T({\mathcal O}_F),\chi)=0$. \end{lm} \begin{proof} Choose an isomorphism $T({\mathcal O}_F)\simeq{\mathbb Z}^{\oplus N}\times T({\mathcal O}_F)^{\mathrm{tors}}$ such that the restriction of $\chi$ to ${\mathbb Z}^{\oplus N}$ is still nontrivial. We have $\mathrm{R}\Gamma(T({\mathcal O}_F),\chi)=\mathrm{R}\Gamma(T({\mathcal O}_F)^{\mathrm{tors}},\mathrm{R}\Gamma({\mathbb Z}^{\oplus N},\chi))$ so it is enough to show that $\mathrm{R}\Gamma({\mathbb Z}^{\oplus N},\chi)=0$. By definition, $\mathrm{R}\Gamma({\mathbb Z}^{\oplus N},\chi)=\mathrm{RHom}_{k[x_1^{\pm 1},\dots,x_N^{\pm 1}]}(k,\chi)$ where we denote by $\chi$ the module over the group algebra $k[x_1^{\pm 1},\dots,x_N^{\pm 1}]$ corresponding to the character $\chi|_{{\mathbb Z}^{\oplus N}}$, and $k$ is the module on which all $x_i$ act by $1$. By the assumption that $\chi|_{{\mathbb Z}^{\oplus N}}$ is non-trivial, $k$ and $\chi$ have disjoint supports in $\Spec k[x_1^{\pm 1},\dots,x^{\pm 1}_N]$ which implies the vanishing. \end{proof} Having proven Lemma \ref{group cohomology: integral restriction facts}, we will now deduce Proposition \ref{group cohomology: ring of integers main}. Consider the long exact sequence induced by $0\to\chi_1^p\xrightarrow{\psi_1} \widetilde{\chi}_1^{(1)}\xrightarrow{\psi_2} \chi_1^p\to 0$: \begin{equation} \ldots\to H^{p-2}(\chi_1^p)\xrightarrow{\mathrm{Bock}^{p-2}}H^{p-1}(\chi_1^p)\xrightarrow{\psi_1} H^{p-1}(\widetilde{\chi}_1^{(1)})\xrightarrow{\psi_2} H^{p-1}(\chi_1^p)\xrightarrow{\mathrm{Bock}^{p-1}} H^p(\chi_1^p)\to \ldots \end{equation} where $H^i(M)$ is the abbreviation for $H^i((T_p\ltimes A_p)({\mathcal O}_F),M)$. The multiplication by $p$ map on $H^{p-1}(\widetilde{\chi}_1^{(1)})$ factors as the composition $H^{p-1}(\widetilde{\chi}_1^{(1)})\xrightarrow{\psi_2} H^{p-1}(\chi_1^p)\xrightarrow{\psi_1} H^{p-1}(\widetilde{\chi}_1^{(1)})$. By Lemma \ref{group cohomology: integral restriction facts}(3) the map $\psi_1$ is injective, but Lemma \ref{group cohomology: integral restriction facts}(4) says that the composition $\psi_1\circ\psi_2$ is zero, hence $\psi_2$ is zero itself and the Bockstein homomorphism $H^{p-1}((T_p\ltimes A_p)({\mathcal O}_F),\chi_1^p)\to H^p((T_p\ltimes A_p)({\mathcal O}_F),\chi_1^p)$ induced by the character $\widetilde{\chi}_1^{(1)}$ is injective.
Consider now the long exact sequence of cohomology associated with the sequence $0\to \chi_1^p\to V^{(1)}\to \chi_2^p\oplus\ldots\oplus \chi_p^p\to 0$. By Lemma \ref{group cohomology: integral restriction facts}(2) we get that the map $H^{p}((T_p\ltimes A_p)({\mathcal O}_F),\chi_1^p)\to H^{p}((T_p\ltimes A_p)({\mathcal O}_F),V^{(1)})$ is injective. Combined with Lemma \ref{group cohomology: integral restriction facts}(1) this implies that the composition $$H^{p-1}(A_p({\mathbb F}_q),V^{(1)})^{T_p({\mathbb F}_q)}\to H^{p-1}((T_p\ltimes A_p)({\mathcal O}_F),V^{(1)})\xrightarrow{\mathrm{Bock}^{p-1}_{\widetilde{V}^{(1)}}}H^{p}((T_p\ltimes A_p)({\mathcal O}_F),V^{(1)})$$ is injective when restricted to the image of the map $H^{p-1}(A_p({\mathbb F}_q),\chi_1^p)^{T_p({\mathbb F}_q)}\to H^{p-1}(A_p({\mathbb F}_q),V^{(1)})^{T_p({\mathbb F}_q)}$. But that map is an isomorphism by Lemma \ref{group cohomology: everything from chi1 algebraic}(2), so Proposition \ref{group cohomology: ring of integers main} is proven. \end{proof} \bibliographystyle{alpha}
{ "arxiv_id": "2302.11401", "language": "en", "timestamp": "2023-02-23T02:15:41", "url": "https://arxiv.org/abs/2302.11401", "yymm": "2302" }
\subsubsection*{\bibname}} \bibliographystyle{apalike} \RequirePackage[OT1]{fontenc} \RequirePackage{amsthm,amsmath,amsfonts,amssymb,mathtools,thmtools} \RequirePackage[colorlinks,citecolor=blue,urlcolor=blue]{hyperref} \RequirePackage{enumitem,graphicx,MnSymbol,mathrsfs} \usepackage{bm} \usepackage{graphicx,psfrag,epsf} \usepackage{subcaption} \usepackage{tikz-cd} \usepackage[title]{appendix} \renewcommand \subsubsection*{\bibname}} \bibliographystyle{apalike} \RequirePackage[OT1]{fontenc} \RequirePackage{amsthm,amsmath,amsfonts,amssymb,mathtools,thmtools} \RequirePackage[colorlinks,citecolor=blue,urlcolor=blue]{hyperref} \RequirePackage{enumitem,graphicx,MnSymbol,mathrsfs} \usepackage{bm} \usepackage{graphicx,psfrag,epsf} \usepackage{subcaption} \usepackage{tikz-cd} \usepackage[title]{appendix} \renewcommand
{ "arxiv_id": "2302.11428", "language": "en", "timestamp": "2023-02-23T02:16:25", "url": "https://arxiv.org/abs/2302.11428", "yymm": "2302" }
\section{Introduction} \begin{tikzpicture}[remember picture, overlay] \node at ($(current page.north) + (0,-0.5in)$)[text width=19.5cm] {\Large This work has been submitted to the IEEE for possible publication.\\ Copyright may be transferred without notice, after which this version may no longer be accessible.}; \end{tikzpicture} Manipulating flexible wire, cable, rope, or other DLOs has a wide range of applications, such as catheter inserting \cite{jayender_robot-assisted_2006}, surgical suturing \cite{schulman_case_2013}, automotive \cite{jiang_robotized_2010}, aerospace \cite{shah_planning_2018}, electromechanical industries \cite{franke_robot_2009}, and so forth. The deformable property results in high-dimensional state space for modeling DLO, which makes manipulating DLO challenging. Existing research for handling DLOs is focused on robot motion planning for such basic tasks as tying/untying knots \cite{suzuki_-air_2021, yamakawa_motion_2010, wakamatsu_knottingunknotting_2006}, forming a given shape \cite{yan_self-supervised_2020}, contact-based cable routing \cite{zhu_robotic_2020}, inserting string and rope in a hole \cite{wang_online_2015}, and winding \cite{gobert_3dwoodwind_2022, koichiro_ito_winding_2017}. While most of the research about DLO considers quasistatic manipulation, some also address dynamic manipulation \cite{yamakawa_motion_2010, koichiro_ito_winding_2017, zhang_robot_2021}. Solving a DLO manipulation task usually contains three key steps: perception, modeling, and motion planning. Computer vision is often used to perceive a DLO's state. Some researchers attach AR tags along a wire harness as sampling points to detect deformation \cite{jiang_robotized_2010}. More generally, classic algorithms, such as uniform thresholding and Canny edge detector, are applied to extract cables in a controlled environment \cite{zhu_robotic_2020}. With the development of neural networks, deep learning is used to extract features of a DLO \cite{yan_self-supervised_2020,caporali_ariadne_2022}. Tactile servoing is another approach for DLO manipulation. She {\it et al.} design a gripper with GelSight for cable following and cable insertion tasks \cite{yu_she_cable_2021}. Common DLO modeling is done topologically or geometrically. Topology is helpful to describe the spatial relation of a DLO fragment in a knotting task, such as presented in Wakamatsu's work \cite{wakamatsu_knottingunknotting_2006}. Different geometric methods are broadly used for other types of tasks, for example, thin plate splines \cite{schulman_case_2013}, parameterized curve with minimum energy-based scheme \cite{shah_planning_2018}, multi-link system \cite{yamakawa_motion_2010}. A bi-directional long short-term memory (LSTM) is also used to model the structure of a chain-like mass-spring system \cite{yan_self-supervised_2020}. Alternatively, it is possible to bypass the modeling step with deep learning to create low-level joint control directly from input sensor data, as suggested by Suzuki {\it et al.} \cite{suzuki_-air_2021}, who used a convolutional auto-encoder (CAE) and LSTM structure. The system used RGB images and proximity sensor information as input to generate robot joint angles directly. 
Once the model is ready, a task-oriented motion planning method can be introduced to solve the problem, such as learning from demonstration \cite{schulman_case_2013}, model predictive path integral (MPPI) control \cite{yan_self-supervised_2020}, planning based-on angular contact mobility index (ACMI) \cite{zhu_robotic_2020}. There are several papers investigating different aspects of DLO wrapping tasks. Lee {\it et al.} focused on testing the performance of simulating a high volume of possible contacts \cite{lee_parallelized_2021}. Göbert {\it et al.} \cite{gobert_3dwoodwind_2022} designed a customized end-effector to attach a winding filament to a cyclical mold with a rotation axis and planned the path of the mold. Ito {\it et al.} studied using one whipping motion to wind a whip onto a target object with dynamic manipulation, rather than a quasistatic manner \cite{koichiro_ito_winding_2017}. However, there is a lack of study to enable an off-the-shelf, general-purpose robot manipulator to perform general wrapping manipulation of a DLO around another object. In this paper, we address the open problem of enabling a general-purpose robot manipulator with a simple parallel gripper to autonomously wrap a DLO around another object based on synergizing real-time perception and robot motion planning and control without requiring prior physical and geometrical information of both the rope and the rod. We are interested in providing a general robotic wrapping capability that can be applied to DLOs of varied materials, including different kinds of ropes, flexible cables, fibers, and so on. Specifically, the paper presents a novel approach for general-purpose robot wrapping operations with the following characteristics: \begin{itemize} \item It uses real-time perception of the objects and wrapping state to determine and adapt the robot motion for accomplishing and improving wrapping operations to achieve high-quality results. \item It re-uses and adjusts the canonical motion of making a single wrap of the DLO around the other object to be flexible to the length of the DLO (thus the length of the coil as the wrapping result) and to enable constant check and improvement of wrapping quality with feedback control. \item Hence, it is able to achieve high-quality wrapping without requiring prior knowledge of the physical and geometrical properties of the DLO and the other object. \end{itemize} This paper is organized as follows. \Cref{section:task_definition} defines the problem and setup. \Cref{section:approach} introduces our approach in detail. \Cref{section:exp_n_result} describes our experiments and presents the results. \Cref{section:conclusions} concludes the paper. \section{Task description}\label{section:task_definition} We define the task we study as wrapping a DLO, called the "rope", around a given rigid object, called the "rod". A coordinate system is set up at the base of the manipulator. The positive directions along the $x$, $y$, and $z$ axes are defined as "front", "left", and "up" respectively from the manipulator's perspective. The rod is set in front of the manipulator. The manipulator has a gripper installed as the end-effector. As the initial state, the rope rides over the rod. We divide the rope into three sections: the fixed section, the curving section, and the active section. The fixed section is attached to a fixture or held by the manipulator without moving during the wrapping. The curving section has already been wrapped around the rod. 
The active section has sufficient length to create a few more wraps. The system could only obtain information through an RGB-D camera set to face the rod and the manipulator. The side view of the setup is depicted in Fig.~\ref{fig:task}. The goal of the task is for the manipulator to wrap the rope around the rod to create a helix that is tight in both radial and axial directions. Neither the dimensions nor the materials of the rod and the rope are known to the robot system. \begin{figure}[t] \centerline{\includegraphics[width=0.4\textwidth]{figures/task.png}} \caption{The side view of the task setup.} \label{fig:task} \end{figure} \section{Approach}\label{section:approach} We set up a real environment according to the task, as shown in Fig.~\ref{fig:setup}. A dual-arm manipulator (YuMi IRB 14000, ABB) is placed on a tabletop for the task. The fingers of the manipulator have been modified to have a $2.5$mm wide $30$mm long slot between two fingers when the gripper is fully closed. This allows the rope to slide in between while keeping the tension of the rope. A support structure is mounted in front of the manipulator to install the rod. An RGB-D camera (RealSense Depth Camera D415, Intel) is placed to face the manipulator. The manipulator and the camera are connected to a desktop computer as the controller. Our approach is to enable the manipulator to learn how to make a high-quality helix of the rope around the rod by repeating and improving each single wrap along the helix. Our system first conducts {\bf rod estimation}: Using collected RGB images and point cloud data to estimate the position, orientation, and dimensions of the rod with respect to the manipulator automatically. This step is detailed in \Cref{section:rod_estimation}. Next, our system processes {\bf rope estimation} by using RGB images and the rod estimation result to obtain the rope's color and diameter information. This step is described in \Cref{section:rope_estimation}. Subsequently, our system conducts a single wrap of the rope around the rod. It selects a point on the rope for grasping, moves the end-effector to grasp the point, executes the motion path to create one wrap, and releases the grip, which involves the following major procedures: \begin{enumerate} \item {\bf Grasp point selection: }Using RGB images to search for a grasp point along the rope. \item {\bf Motion adjustment for wrapping: }Generating robot end-effector's wrapping motion path based on adjustable parameters. \item {\bf Auxiliary motion generation to facilitate wrapping: }Generating picking and releasing motions for the manipulator to perform before and after the wrapping motion respectively to complete the whole process. \item {\bf Motion outcome estimation and feedback control: }Using RGB images to estimate the outcome of the motion and adjust the parameters of the motion path generator. \end{enumerate} \noindent Those procedures are described in \Cref{section:gp_selection,section:wrapping,section:aux,section:fb_control} in detail. Our system repeats the single-wrap process above to generate a helix while improving the wrapping quality until the remaining rope is not sufficient for more wraps. We enable the manipulator to continue practicing wrapping by unraveling the rope manually and letting the manipulator repeat the whole process until it can produce a tight helix wrap satisfactorily. 
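For concreteness, the overall loop described above can be summarized by the following Python sketch (added for illustration only; it is not our actual implementation, and all helper callables for rod/rope estimation, grasp selection, wrap planning, execution, and outcome evaluation are hypothetical placeholders for the procedures detailed in the remainder of this section):
\begin{verbatim}
# Illustrative top-level wrapping loop; the callables passed in stand for the
# perception, planning, and feedback procedures described in this section.
def wrap_rope(estimate_rod, estimate_rope, remaining_length,
              select_grasp, plan_single_wrap, execute, evaluate_wrap,
              min_active_length, max_wraps=20):
    rod = estimate_rod()                     # rod pose, radius, and length
    rope = estimate_rope(rod)                # rope hue range and diameter d
    params = {"epsilon": 0.0, "a": rope["diameter"]}   # adjustable wrap parameters
    for _ in range(max_wraps):
        if remaining_length(rope) < min_active_length:
            break                            # not enough rope for one more wrap
        grasp = select_grasp(rod, rope)      # grasp point on the active section
        path = plan_single_wrap(rod, rope, grasp, params)  # pick + spiral + release
        execute(path)
        outcome = evaluate_wrap(rod, rope)   # radial/axial tightness of the new wrap
        params.update(outcome.get("corrections", {}))      # feedback adjustment
    return params
\end{verbatim}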
\begin{figure}[t] \centerline{\includegraphics[width=0.4\textwidth]{figures/setup.png}} \caption{The experiment setup.} \label{fig:setup} \end{figure} \subsection{The rod estimation}\label{section:rod_estimation} Rod estimation is done as the first step of the whole process. This process starts with establishing the transformation between the camera and the robot by using the RGB image from the camera and a fiducial marker \cite{ar_track_alvar} attached to the robot. The RGB\nobreakdash-D camera provides a colorized depth map of the workspace (Fig.~\ref{fig:raw_pcd}). Let $P$ denote the set of all the 3D points in the map, and let $Q$ be the set of all 2D pixels with color information in the map. Each data point within the frame can be represented as $(p_x, p_y, p_z, q_x, q_y, c)$, where $(p_x, p_y, p_z)\in P$ is the position in the camera coordinate system, $(q_x, q_y, c)\in Q$ is the pixel location on the image plane, and $c$ is the color. Next, from the point cloud data captured by the camera, our system extracts the points between the robot and the camera, and above the tabletop. The system downsamples the extracted point cloud and applies DBSCAN \cite{ester_density-based_1996} to create clusters according to the distance from the camera. An example result is shown as Fig.~\ref{fig:dbscan}. As there is no other object between the camera and the rod, the cluster with the shortest distance is selected. A 3D bounding box is created around this cluster (Fig.~\ref{fig:rod_n_support_pcd}). All points within the 3D bounding box are collected. White pixels in Fig.~\ref{fig:rod_n_support_mask} indicate the selected data points in the 2D image plane. Those points are further classified into 2 clusters according to their hue with K-mean \cite{macqueen_methods_1967}. The cluster with the most data points is kept. A maximum inscribed rectangle is used to fit the kept points, as shown in Fig.~\ref{fig:inscribed_rectangle}. Finally, the points within the rectangle are treated as points on the rod, with ${P'\subset P}$ being the 3D points and ${Q'\subset Q}$ denoting the pixels. $P'$ are used to estimate the radius $r_{rod}$ and the length $l_{rod}$ of the rod. Our system creates a half-cylinder surface template based on the estimation and employs ICP \cite{besl_method_1992} to match it to $P'$ (Fig.~\ref{fig:icp_result}). At this point, the estimates of the rod's center position $(x_{rod},y_{rod},z_{rod})$, the rod's orientation, and $r_{rod}$ are obtained. \begin{figure} \centering \subfloat[The workspace is captured by the RGB-D camera to generate colorized point cloud.\label{fig:raw_pcd}] {\includegraphics[width=0.45\linewidth]{figures/find_rod/0_full_cloud.jpg} } \hfill \subfloat[The robot and the background are subtracted. The data points are clustered by distance.\label{fig:dbscan}] {\includegraphics[width=0.45\linewidth]{figures/find_rod/2_dbscan.jpg} } \vspace{-1\baselineskip} \subfloat[The cluster with the closest distance toward the camera is selected.\label{fig:rod_n_support_pcd}] {\includegraphics[width=0.45\linewidth]{figures/find_rod/3_select_pcd.jpg} } \hfill \subfloat[An image mask is generated from the selected cluster. 
\label{fig:rod_n_support_mask}] {\includegraphics[width=0.45\linewidth]{figures/find_rod/4_apply_pc_mask.jpg} } \vspace{-1\baselineskip} \subfloat[The maximum inscribed rectangle (red) is added to estimate the rod on the 2D image plane.\label{fig:inscribed_rectangle}] {\includegraphics[width=0.45\linewidth]{figures/find_rod/5_ncp.jpg} } \hfill \subfloat[The half-cylinder template (yellow) is matched to the point cloud by applying ICP. \label{fig:icp_result}] {\includegraphics[width=0.45\linewidth]{figures/find_rod/6_icp_result.jpg} } \caption{Key steps to estimate the rod's dimension and pose.} \label{fig:find_rod} \end{figure} \subsection{The rope estimation}\label{section:rope_estimation} Once the rod's information is determined, our system estimates the color (hue range) and the diameter $d$ of the rope using $Q'$ (highlighted by the red rectangle in Fig.~\ref{fig:img_w_box}). It extracts the hue channel of $Q'$ to create a histogram. Otsu's method \cite{otsu_threshold_1979} is applied to the histogram to find a threshold that can separate $Q'$ into the rope and the rod. Then a Gaussian function $N(\mu,\sigma)$ is used to approximate the normalized rope's hue histogram. The hue range of the rope is chosen as $[\mu-3\sigma,\mu+3\sigma]$. The hue threshold found above is also applied to $Q'$ to create a binary mask of the rope. The result is shown as in Fig.~\ref{fig:rope_mask}. A minimum area rectangle is generated to enclose the selected area, as in Fig.~\ref{fig:rope_contour}. The short edge of the rectangle for the rope segment is taken as $d$. \begin{figure} \begin{minipage}{.25\textwidth} \subfloat[Pixels on the rod are selected from the image (red rectangle). \label{fig:img_w_box}] {\includegraphics[width=.95\linewidth]{figures/find_rope/raw_img_w_box.jpg} } \end{minipage} \hspace{.005\textwidth} \begin{minipage}{.22\textwidth} \subfloat[The rope on the rod is obtained via thresholding with its hue feature. \label{fig:rope_mask}] {\includegraphics[width=.95\linewidth]{figures/find_rope/rope_mask.jpg} } \subfloat[A contour is created to represent the piece of the rope. \label{fig:rope_contour}] {\includegraphics[width=.95\linewidth]{figures/find_rope/rope_contour.jpg} } \end{minipage} \caption{Estimate the rope's width and color.} \label{fig:rope_width} \end{figure} \subsection{Grasp point selection}\label{section:gp_selection} Wrapping a rope typically requires grasping both the fixed section and the active section and moving the active one around the rod. Finding the grasp points on the two sections is performed with an unwrapped rope. For each additional wrapping motion, only the grasp point on the active section needs to be updated. The camera's limited 3D resolution makes it difficult to detect thin features, such as the rope, with the point cloud data. Therefore, this process is done based on the 2D RGB image. Grasp point selection starts with extracting the two sections of the rope from the image. A sub-image of Fig.~\ref{fig:img_w_box} is created as shown in Fig.~\ref{fig:rope_region}, by extending the bounding box of the rod at both sides and downward. The system employs Ariadne+'s \cite{caporali_ariadne_2022} pre-trained DeepLabV3+ \cite{chen_encoder-decoder_2018} to create a binary mask $M_1$ from the sub-image. Compared to traditional computer vision methods, this deep learning approach suggests possible rope areas with less noise. However, we observed that extraction defects may happen. 
For example, Fig.~\ref{fig:ariadne} shows an incomplete detection on the fixed section that is near the rod. Therefore, our system applies the rope's hue range as the threshold to the sub-image to create another binary mask $M_2$, as shown in Fig.~\ref{fig:hue_mask}. $M_2$ is used to connect detected rope segments in $M_1$ as much as possible. For a white pixel in $M_1$, if $M_2$ has a vertical white line segment across the pixel at the same position, all the pixels on this line are added to $M_1$. The updated $M_1$ is then skeletonized \cite{zhang_fast_1984} to reduce detected objects to 1-pixel width, as shown in Fig.~\ref{fig:rope_skeleton}. Finally, our system searches for all lines from the bottom of the skeletonized mask, extracts the two longest lines, and calculates their center. The one near the right gripper (based on the robot's view) is taken as the fixed section (highlighted with green), and the one near the left gripper is taken as the active section (highlighted with blue), see Fig.~\ref{fig:found_ropes}. After the two sections of the rope are detected, our system determines one point along each of the sections for grasping by providing a value $l_{gp}$ (in millimeters) measured from the lower edge of the rod. We assume: 1) both sections of the rope (within the image) are always parallel to the $YZ$ plane (Fig.~\ref{fig:task}) and tangent to the rod, and 2) the distance between any two pixels and their distance in 3D are uniformly scaled. Our system converts $l_{gp}$ to the measurement in pixels and searches the corresponding point along designated rope section. In Fig.~\ref{fig:found_ropes}, the resulting point in the image plane is represented by a red dot. The position information of the pixel is converted back to the world coordinate system and used to move the robot's end-effector, as shown in Fig.~\ref{fig:grasping_result}. \begin{figure} \centering \subfloat[Selected region that contains the rope.\label{fig:rope_region}] {\includegraphics[width=0.45\linewidth]{figures/grasping_point/0_rope_region.jpg} } \hfill \subfloat[Mask $M_1$ generated by using Ariadne+. \label{fig:ariadne}] {\includegraphics[width=0.45\linewidth]{figures/grasping_point/1_ariadne.jpg} } \subfloat[Mask $M_2$ generated by using the rope's hue feature. \label{fig:hue_mask}] {\includegraphics[width=0.45\linewidth]{figures/grasping_point/2_hue_masked.jpg} } \hfill \subfloat[Masks are combined and skeletonized. \label{fig:rope_skeleton}] {\includegraphics[width=0.45\linewidth]{figures/grasping_point/3_rope_skeleton.jpg} } \subfloat[The fixed section (green), the active section (blue), and the grasp point (red) on the fixed section are found on the 2D image plane. \label{fig:found_ropes}] {\includegraphics[width=0.45\linewidth]{figures/grasping_point/4_found_ropes.jpg} } \hfill \subfloat[The 3D position of the grasp point is located for the follow-up robot motion planning. \label{fig:grasping_result}] {\includegraphics[width=0.45\linewidth]{figures/grasping_point/5_grasping_result.jpg} } \caption{Example of grasp point selection for the fixed section.} \label{fig:grasping_point_selection} \end{figure} \subsection{Motion adjustment for wrapping}\label{section:wrapping} \begin{figure} \subfloat[The side view of the section of the rod and the unwrapped rope. \label{fig:rope_n_rod_side}] {\includegraphics[width=.45\linewidth]{figures/spiral/spiral_1.png} } \hfill \subfloat[The side view of the end-effector position path (magenta dashed line). 
\label{fig:spiral_side}] {\includegraphics[width=.45\linewidth]{figures/spiral/spiral_3.png} } \vspace{-1\baselineskip} \subfloat[The front view of the rope's diameter and the wraps' advance. The translucent orange section indicates the active section before the wrapping. The opaque one to its right is the active section after wrapping. \label{fig:spiral_front}] {\makebox[0.95\linewidth]{\includegraphics[width=.45\linewidth]{figures/spiral/spiral_2.png} }} \caption{Spiral curve for the robot end-effector.} \label{fig:spiral} \end{figure}
The wrapping motion is the most crucial step as it determines the quality of the generated wrapping helix. It is performed in the presence of estimation errors and uncertainty in the rod's pose and dimensions, in the grasp points along the rope, and in the unknown physical properties of the rope. Our approach is to create a spiral curve with just a few parameters for the robot end-effector to follow and then adjust the parameter values based on perception feedback. The goal is a wrapping motion that overcomes the estimation errors and achieves a high-quality wrap in spite of the unknown physical properties of the rope. To design the canonical spiral curve for the robot's end-effector, we assume a rope hangs over the rod naturally due to gravity, creating contact with the rod on the upper half of the cylinder and leaving the surface of the rod tangentially at points $A$ and $B$, as shown in Fig.~\ref{fig:rope_n_rod_side}. Let $O$ be the center of the rod's cross-section where the rope lies. For the convenience of deriving the spiral function, we define a new 2D coordinate system with the origin at $O$ and the $x$ and $y$ axes as indicated in Fig.~\ref{fig:rope_n_rod_side} on the cross-section. The 3D coordinates of $O$ can be obtained from its relation to the rod's center. The fixed section starts from point $A$, extends downward, and is held by the robot's right gripper. The active section extends from point $B$ to the grasp point $(x_0,y_0)$ and has length $L$. We define $R=r_{rod} + \epsilon$, where $\epsilon$ is a variable for correcting the estimation error in $r_{rod}$. We define $L=2\pi R+L'$, where $L'$ is a safety distance that accounts for the end-effector's size. Once the left gripper starts to make a wrap, the tangential contact point $B$ moves to $B'$. Note that the wrapping angle $\theta\in[0,2\pi]$ is the angle $\angle B'OB$. As an additional $\wideparen{BB'}=\theta R$ of the rod is covered by the rope, the distance from the gripper to the tangential point $B'$ is reduced to $L-\theta R$, as shown in Fig.~\ref{fig:spiral_side}. Now we consider the following spiral curve with respect to the rod coordinate system:
\begin{equation}\label{eq:spiral} \left\{\begin{array}{l} x = R\cos\theta-(2\pi R+L'-\theta R)\sin\theta \\ y = R\sin\theta+(2\pi R+L'-\theta R)\cos\theta\\ z = a\theta/2\pi \end{array}\right. \end{equation}
\noindent where $a$ is the displacement of the end-effector along the rod's axial direction for one wrap (Fig.~\ref{fig:spiral_front}), which is determined through feedback (see \Cref{section:fb_control}). The 2D projection of this spiral path for the end-effector is shown as the magenta dashed curve in Fig.~\ref{fig:spiral_side}. Our system searches for the safety distance $L'\in[L'_{\min},L'_{\max}]$ so that the wrapping path has a feasible inverse kinematics (IK) solution.
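To make the construction concrete, the following minimal Python sketch samples end-effector waypoints along the spiral curve of Eq.~(\ref{eq:spiral}); the function name, the number of samples, and the use of NumPy are illustrative assumptions rather than a description of our implementation.

\begin{verbatim}
import numpy as np

def spiral_waypoints(R, L_prime, a, n_samples=32):
    """Sample the wrapping spiral defined above, in the rod coordinate system.

    R        -- corrected rod radius R = r_rod + epsilon, in mm
    L_prime  -- safety distance accounting for the end-effector size, in mm
    a        -- axial advance per wrap, in mm
    Returns an (n_samples, 4) array [x, y, z, theta]; theta is also the
    angle by which the end-effector is rotated about the rod's axis.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples)
    slack = 2.0 * np.pi * R + L_prime - theta * R  # free rope length to B'
    x = R * np.cos(theta) - slack * np.sin(theta)
    y = R * np.sin(theta) + slack * np.cos(theta)
    z = a * theta / (2.0 * np.pi)
    return np.column_stack([x, y, z, theta])

# Example: drop the first and last samples (theta = 0 and 2*pi), as done
# in practice, before converting the waypoints to end-effector poses.
waypoints = spiral_waypoints(R=21.0, L_prime=60.0, a=20.0)[1:-1]
\end{verbatim}

The values $R=21.0$mm, $L'=60.0$mm, and $a=20.0$mm in the example correspond to the first trial on Rod1 reported in \Cref{section:exp_n_result}.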
Note that the goal of the wrapping is to create wraps that are tight along both the radial and axial directions of the rod: each wrap should lie as close to the rod's surface as possible along the radial direction, and the center-to-center distance between two adjacent wraps along the axial direction should equal the diameter of the rope.
\begin{figure}[t] \centering \subfloat[The gripper wraps with a fixed end-effector orientation, causing the rope to entangle the fingers. \label{fig:rope_entangled}] {\includegraphics[width=0.45\linewidth]{figures/real_robot/2_rope_entangled.jpg} } \hfill \subfloat[The gripper wraps with the end-effector orientation adaptive to the spiral curve.\\ \label{fig:gripper_orientation}] {\includegraphics[width=0.45\linewidth]{figures/real_robot/3_gripper_orientation.jpg} } \vspace{-1\baselineskip} \subfloat[The gripper finishes the wrapping motion and leaves a part of the active section hanging over the rod. \label{fig:rope_flip}] {\includegraphics[width=0.45\linewidth]{figures/real_robot/4_rope_flip.jpg} } \hfill \subfloat[The gripper follows a parameterized spiral curve to create a wrap and a straight-line path to straighten the rope. \label{fig:rviz}] {\includegraphics[width=0.45\linewidth]{figures/real_robot/1_rviz.png} } \caption{The wrapping motion.} \label{fig:wrapping_motion} \end{figure}
During the wrapping process, when the wrapping angle changes to $\theta$, the end-effector simultaneously rotates by the same angle $\theta$ about the rod's axis, as indicated by its orientation in Fig.~\ref{fig:spiral_side}. Without this rotation, the rope tends to entangle the fingers and hinder the gripper's opening motion at the end of the wrap, as shown in Fig.~\ref{fig:rope_entangled}. The results of wrapping with the change of the end-effector orientation are shown in Fig.~\ref{fig:gripper_orientation} ($\theta=120^\circ$) and Fig.~\ref{fig:rope_flip} ($\theta=330^\circ$). At this stage, the position and orientation of the robot gripper along the spiral path have been determined with respect to the rod coordinate system, which can be converted to the world coordinate system. In practice, our system takes samples along the spiral curve and discards the first and the last sampled points ($\theta=0$ and $2\pi$) to specify the end-effector wrapping motion. The wrapping motion is connected with auxiliary motions to prepare the rope for wrapping and to release it after wrapping, as detailed in \Cref{section:aux}. Fig.~\ref{fig:rviz} illustrates the connection of the wrapping motion to the releasing motion.
\subsection{Generation of auxiliary motions to facilitate wrapping}\label{section:aux} Rope picking and releasing motions are defined based on the spiral wrapping motion. Picking is a point-to-point motion that contains three key poses: 1) the entry pose, where the end-effector moves to a point at a distance from the rope with the opening of the gripper facing the rope; 2) the grasp pose, where the rope is squeezed between the fingers of the gripper; and 3) the connection pose, where the gripper moves to the starting point of the spiral path. The grasp pose is derived as in \Cref{section:gp_selection} with $L$ being the input value for $l_{gp}$. The entry pose is obtained by offsetting the grasp pose to move the end-effector away from the rope. The release motion is designed to straighten the rope and to move the gripper away from the rope to avoid occlusion during image processing after the wrapping motion is done.
Straightening the rope happens after the gripper arrives at the last sampled point of the spiral (see Fig.~\ref{fig:rope_flip} and \ref{fig:rviz}). The gripper moves downward along a straight-line path, sliding along the rope to flip the active section to the front. Then the gripper releases the rope and withdraws from it. At this point, if the remaining length of the active section is longer than the rod's height, part of the section may land on the table, preventing the rope from hanging vertically from the rod. An additional rope alignment step is therefore added. The left gripper rotates $90^{\circ}$ to increase the contact area with the rope. It then moves to the front of the active section, as shown in Fig.~\ref{fig:rope_front}, and pushes the rope towards the manipulator to ensure that the rope is roughly vertical beneath the rod (Fig.~\ref{fig:push_back}). Finally, the gripper moves away from the rope to leave space for the next grasp point selection step.
\begin{figure}[h] \centering \subfloat[The gripper rotates $90^\circ$ and moves to the front of the rope.\\ \label{fig:rope_front}] {\includegraphics[width=0.45\linewidth]{figures/real_robot/5_rope_front.jpg} } \hfill \subfloat[The gripper pushes the active section to align it vertically beneath the rod. \label{fig:push_back}] {\includegraphics[width=0.45\linewidth]{figures/real_robot/6_push_back.jpg} } \caption{The auxiliary motions.} \label{fig:auxiliary_motion} \end{figure}
\subsection{Motion outcome estimation and feedback control}\label{section:fb_control} Following the completion of one wrap, our system takes an image of the result and checks the tightness along the radial direction (referred to as the height) and the axial direction (referred to as the advance) of the wrap, which is then used for feedback control to improve the next wrap. It starts by checking the height of the last wrapped part of the rope. At this step, a rectangle that contains the rod and some area below it is extracted from the image to include pieces of rope that could possibly hang under the rod, as shown in Fig.~\ref{fig:len_rgb}. The rope is extracted from the image by using the rope's hue range as the threshold. Assuming that the rope segments in this binary image share the same width $d$ as the detected rope's diameter, our system searches the image for the rope segment created by the last wrap. The segment is skeletonized to find the valley point along it, as shown in Fig.~\ref{fig:wrap_valley}. The pixel distance from this point to the bottom of the rod is taken as the height feedback of the last wrap, denoted by $h$. This feedback is used to adjust the radius $R$. The feedback controller updates $R$ by:
\begin{equation} R_{n+1}=R_n-K_{PR}\,q_r\text{, where }q_r=h-t_R \end{equation}
\noindent where $K_{PR}$ is the proportional coefficient and $t_R$ is a threshold. The stop condition is $h\leq t_R$. The RGB image is also used to check the advance. Pixels of the rod's area (Fig.~\ref{fig:adv_rgb}) are selected and processed to extract the rope. The rope area created by the last wrap, $S_r$, and the gap between the last two wraps, $S_g$, are measured, as in Fig.~\ref{fig:adv_cluster}. The feedback controller updates $a$ by:
\begin{equation} \begin{aligned} a_{n+1} = a_n-K_{Pa}\,q_a\text{, where }q_a = S_g/(S_g+S_r) \end{aligned} \end{equation}
\noindent where $K_{Pa}$ is the proportional coefficient.
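As a compact summary, the two proportional update laws can be sketched in Python as follows; the function names are illustrative assumptions, and the default gains are the values used in our experiments ($K_{PR}=0.001$, $K_{Pa}=0.04$). The stop conditions for the advance update follow below.

\begin{verbatim}
def update_radius(R_n, h, t_R, K_PR=0.001):
    """One proportional update of the spiral radius R from the height
    feedback h of the last wrap; also report the stop condition h <= t_R."""
    q_r = h - t_R
    return R_n - K_PR * q_r, h <= t_R

def update_advance(a_n, S_g, S_r, K_Pa=0.04):
    """One proportional update of the axial advance a from the gap area S_g
    and the rope area S_r of the last wrap; also report q_a."""
    q_a = S_g / (S_g + S_r)
    return a_n - K_Pa * q_a, q_a
\end{verbatim}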
There are two stop conditions for learning the advance: 1) $q_a=0$; 2) for two consecutive wraps $n$ and $n+1$, $|q_{an}-q_{a(n+1)}|<t_a$, where $t_a$ is a threshold. When the wrapping output meets either condition, the system stops updating the advance.
\begin{figure} \centering \subfloat[The rod and the area below it are extracted to examine the height of the last wrap. \label{fig:len_rgb}] {\includegraphics[width=0.45\linewidth]{figures/feedback/len_rgb.jpg} } \hfill \subfloat[The last wrap is extracted and skeletonized to find the valley point (green). The bottom edge of the rod is indicated by the red line. \label{fig:wrap_valley}] {\includegraphics[width=0.45\linewidth]{figures/feedback/wrap_valley.jpg} } \subfloat[The rod area is extracted to examine the advance of the last wrap. \label{fig:adv_rgb}] {\includegraphics[width=0.45\linewidth]{figures/feedback/adv_rgb.jpg} } \hfill \subfloat[The rope area created by the last wrap (blue) and the gap area (red) are extracted and measured. The rest of the rope area is indicated in white. \label{fig:adv_cluster}] {\includegraphics[width=0.45\linewidth]{figures/feedback/adv_cluster.jpg} } \caption{Image processing to examine the motion outcome of the last wrap.} \end{figure}
\section{Experiments and Results}\label{section:exp_n_result} In this section, we describe our experiments to implement and test the approach described in \Cref{section:approach} and present the results. We then discuss the performance and potential improvements of the wrapping algorithm.
\subsection{Experiment setup} \setcounter{table}{3}
\begin{table*}[!b] \caption{Wrapping outcome of each case. The numbers above the wraps indicate the trial number. $a_i$ is the advance (in mm) used to achieve that wrap.} \label{tab:exp_result} \setlength{\tabcolsep}{3pt} \begin{center} \begin{tabular}{ | c | c c c | c c c | c c c | c |} \hline & \multicolumn{3}{c|}{Rope1} & \multicolumn{3}{c|}{Rope2} & \multicolumn{3}{c|}{Rope3} & Human\\ & \multicolumn{3}{c|}{ } & \multicolumn{3}{c|}{ } & \multicolumn{3}{c|}{ } & wrapping\\ \hline Rod1 & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope1_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope1_2.jpg} & & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope2_1.jpg} & & & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope3_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope3_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope3_3.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/human_wrap.jpg}\\ & $a_1=20.0$ & $a_3=-2.7$ & & $a_1=20.0$ & & & $a_1=20.0$ & $a_3=-11.8$ & $a_5=-21.0$ & \multirow{5}{19mm}{Wrapping performed by hand for comparison.}\\ & $a_2=11.2$ & (complete) & & (complete) & & & $a_2=~2.4$ & $a_4=-18.1$ & (complete) & \\ \cline{1-10} Rod2 & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope1_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope1_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope1_3.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope2_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope2_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope2_3.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope3_1.jpg} &
\includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope3_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope3_3.jpg} & \\ & $a_1=20.0$ & $a_3=9.7$ & $a_5=4.7$ & $a_1=20.0$ & $a_2=17.3$ & $a_3=17.0$ & $a_1=20.0$ & $a_3=-7.3$ & $a_5=-11.2$ & \\ & $a_2=10.3$ & $a_4=6.0$ & (complete) & & & (complete) & $a_2=~0.1$ & $a_4=-7.4$ & (complete) & \\ \hline \end{tabular} \end{center} \end{table*} \setcounter{table}{0}
The PC of our system is equipped with an Intel Core i5-7500, 16GB RAM, and an NVIDIA GeForce GTX 1050 Ti, running Ubuntu 20.04 with ROS Noetic. We modified KTH's YuMi package \cite{kth-ros-pkg} to control the manipulator. The YuMi is configured to run in manual mode at 100\% speed. Its Axis 6, where the gripper is mounted, can rotate from $-229^\circ$ to $229^\circ$, which enables the $360^\circ$ continuous rotation mentioned in \Cref{section:wrapping}. To achieve the required gripper motion during the wrapping, Axis 6 of the left arm was set to $216^\circ$ at the beginning of wrapping and was at $-144^\circ$ when finished. Trac-IK \cite{beeson_trac-ik_2015} was used to calculate the joint variables of the remaining 6 axes of the left arm and the 7 axes of the right arm from expected end-effector poses. The solving timeout and the error tolerance of this solver were set to $50$ms and $1$mm, respectively. The coefficients of the feedback control in our system are given in Table~\ref{tab:parameters}. The threshold $t_R$ was set to $1.5d$ and $t_a$ was set to $5\%$. The system's advance feedback started with the initial value $a_0=20$mm. For a rod with an estimated radius $r_{rod}$, the initial $R$ was set to $1.5r_{rod}$. In practice, with a larger $R$, some waypoints of the spiral path may lie outside the manipulator's workspace, which prevents the manipulator from executing the path. If this happens, $R$ is reduced by a small value, which we set to $5$mm in the experiments, to generate a new spiral path. This process repeats until the manipulator is able to follow the path to get height feedback for the first time. Then the system uses feedback control to tune $R$.
\begin{table}[tb] \caption{The coefficients of the feedback control.} \label{tab:parameters} \begin{center} \begin{tabular}{ | c | c | c | c |} \hline $L'_{\min}$ & $L'_{\max}$ & $K_{PR}$ & $K_{Pa}$ \\ \hline $20$mm & $60$mm & $0.001$ & $0.04$\\ \hline \end{tabular} \end{center} \end{table}
To validate the capability of handling objects with unknown attributes, we tested our system on two cylindrical rods and three ropes, namely Rod1, Rod2, Rope1, Rope2, and Rope3. The two rods share the same length of $l=280$mm but have different radii. The ground truth and the estimated radius of each rod can be found in Table~\ref{tab:rod_estimation_result}. For each rod, we ran the estimation 10 times. Rope1 is a skein of yarn (Softee Chunky Solid Yarn, Bernat). Rope2 is also a skein of yarn (Chenille Home Yarn, Loops \& Threads). Rope3 is a skein of paracord (1/8 in. x 50 ft. Assorted Color Paracord, Everbilt). We tested our system on the 6 combinations of the rods and ropes.
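To relate these settings to the path generation of \Cref{section:wrapping}, the initialization and retry rule for $R$ described above can be sketched as follows; \texttt{spiral\_waypoints} refers to the sampling sketch in \Cref{section:wrapping}, \texttt{has\_ik\_solution} is a placeholder for the Trac-IK feasibility check over all waypoints, and the coarse grid over $L'$ is an illustrative assumption, not our exact search procedure.

\begin{verbatim}
import numpy as np

def initial_spiral_parameters(r_rod, a, has_ik_solution,
                              step=5.0, L_min=20.0, L_max=60.0):
    """Start from R = 1.5 * r_rod (mm) and shrink R by `step` until some
    safety distance L' in [L_min, L_max] yields a spiral path with a
    feasible IK solution for every sampled waypoint."""
    R = 1.5 * r_rod
    while R > 0.0:
        for L_prime in np.linspace(L_min, L_max, 5):
            waypoints = spiral_waypoints(R, L_prime, a)[1:-1]
            if has_ik_solution(waypoints):
                return R, L_prime
        R -= step  # retry with a smaller radius, as in our experiments
    raise RuntimeError("no feasible spiral path found")
\end{verbatim}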
\begin{table}[tb] \caption{The ground truth and the estimation of $r_{rod}$ (over 10 runs).} \label{tab:rod_estimation_result} \begin{center} \begin{tabular}{ | l | c | c | c |} \hline & Ground truth& Estimation average & Range \\ & & (Standard deviation) & \\ \hline Rod1 & $21$mm & $18.5(\pm1.8)$mm & $17.4\sim 20.7$mm\\ \hline Rod2 & $17$mm & $14.3(\pm1.1)$mm & $13.6\sim 14.6$mm \\ \hline \end{tabular} \end{center} \end{table}
\begin{table}[tbp] \caption{The parameters that the system chose to achieve radial tightness.} \label{tab:r_tightness} \begin{center} \begin{tabular}{ | c | c | c | c |} \hline & Estimated radius & $R$ & $L'$ \\ & $r_{rod}$ & & \\ \hline Rod1 & $20.7$mm & $21.0$mm & $60.0$mm\\ \hline Rod2 & $14.6$mm & $16.9$mm & $60.0$mm \\ \hline \end{tabular} \end{center} \end{table}
\subsection{Wrapping outcome}\label{section:outcome} We used one estimated radius $r_{rod}$ for each rod to examine the wrapping algorithm. The system uses the method described in \Cref{section:wrapping} to obtain $R$ and $L'$. We find that the system's selection of these two parameter values is independent of the ropes. The parameter values are shown in Table~\ref{tab:r_tightness}. The robot took 5 trials to reach the stop condition for learning the advance in the axial direction for \{Rod1, Rope3\}, \{Rod2, Rope1\}, and \{Rod2, Rope3\}. For \{Rod1, Rope1\} and \{Rod2, Rope2\}, it took 3 trials. It achieved the axial tightness in the first trial for \{Rod1, Rope2\}. The results can be found in Table~\ref{tab:exp_result}. These images were taken by the RGB-D camera and were also used for the feedback control process. Note that because Rope3 is stiffer, making the first wrap tends to push the fixed section and create a larger gap than the following wraps do (see Fig.~\ref{fig:first_wrap_pushing}). This only happens to Rope3 when no previous wrap is present. Therefore, we always kept one pre-wrap on the rod that was not taken into account when testing Rope3 on both rods. For all cases, the tightness along the radial direction was met. For the cases with Rope1 and Rope2, the axial tightness was also met. For the cases with Rope3, the final result was comparable with the result of manual wrapping by a person (see the last column of Table~\ref{tab:exp_result}). The attached video shows the robot performing those wrapping cases.
\begin{figure}[tb] \captionsetup[subfloat]{labelformat=empty} \centering \subfloat[] {\includegraphics[width=0.30\linewidth]{figures/discussion/0_init.jpg}} \hfill \subfloat[] {\includegraphics[width=0.3\linewidth]{figures/discussion/1_lift.jpg} } \hfill \subfloat[] {\includegraphics[width=0.3\linewidth]{figures/discussion/2_place_w_shift.jpg} } \vspace{-1\baselineskip} \caption{The fixed end of Rope3 is shifted while creating the first wrap because of its stiffness. The red arrows are used as references.} \label{fig:first_wrap_pushing} \end{figure}
\subsection{Time efficiency analysis} \label{section:time_consumption} We report the time consumption of our method in two parts: the computation time and the total execution time. The computation time includes all the calculation steps described in \Cref{section:approach}. The total execution time includes the computation time and the time for the manipulator to execute the picking, wrapping, and releasing motions described in \Cref{section:wrapping,section:aux}. For the computation time, we repeat each step 10 times to evaluate the average and standard deviation of the running time.
Our system spends $13.197(\pm 0.402)$s to finish the rod estimation and $1.042(\pm 0.053)$s to finish the rope estimation at the beginning of a wrapping task. These are pre-processing steps and only need to be done once for each combination of rod and rope. The average computation time of the steps to conduct a single wrap can be found in Table~\ref{tab:time_consumption}. The motion planning time in the table is the total time of planning the wrapping and auxiliary motions, which includes solving IK and polynomial interpolation in the joint space. \setcounter{table}{4}
\begin{table}[h] \caption{Average computation time of steps to conduct a single wrap.} \label{tab:time_consumption} \begin{center} \begin{tabular}{ |l| l| } \hline Step & Time (sec) \\ \hline Grasp point selection & $4.447(\pm 0.082)$\\ \hline Motion planning & $2.302(\pm 0.050)$\\ \hline Motion outcome estimation & $0.312(\pm 0.016)$\\ \hline \end{tabular} \end{center} \end{table}
We also measured the total execution time (including computation and movement time) of one wrap 10 times for \{Rod1, Rope1\}. The average is $63.933(\pm 0.611)$s, which is an order of magnitude longer than the computation time for planning the wrap and the auxiliary motions.
\subsection{Discussion} We have observed that the radial feedback meets the stop condition from the first wrap in all test cases. Due to the camera's perspective and the methods mentioned in \Cref{section:rod_estimation}, the estimated $r_{rod}$ is always smaller than the ground truth. With $R$ less than or equal to the ground truth, the gripper slid slightly down the active section of the rope during the wrapping to compensate for the insufficient rope length from the gripper to the rod. This motion maintains the tension of the section and results in radial tightness. This suggests that, to wrap over a solid of revolution, an $R$ that is smaller than the ground truth can be tolerated and is often helpful for radial tightness.
\section{Conclusions}\label{section:conclusions} In this paper, we present a novel and general method for using a general-purpose robot manipulator with a parallel gripper to wrap ropes around a rigid rod. The method is based on a parameterized canonical motion and does not require prior knowledge of the rope or the rod. It uses RGB-D images to estimate the state of the rope and rod, evaluates the wrapping outcome, and generates feedback from the outcome to improve motion planning. We tested our method with 6 combinations of ropes and rods. The results show that our method generalizes well to different ropes and rods. In future work, we will extend the method to other types of rods, for instance solids of revolution other than cylinders, such as cones, and incorporate tactile sensing.
\addtolength{\textheight}{-12cm} \bibliographystyle{IEEEtran}
Existing research for handling DLOs is focused on robot motion planning for such basic tasks as tying/untying knots \cite{suzuki_-air_2021, yamakawa_motion_2010, wakamatsu_knottingunknotting_2006}, forming a given shape \cite{yan_self-supervised_2020}, contact-based cable routing \cite{zhu_robotic_2020}, inserting string and rope in a hole \cite{wang_online_2015}, and winding \cite{gobert_3dwoodwind_2022, koichiro_ito_winding_2017}. While most of the research about DLO considers quasistatic manipulation, some also address dynamic manipulation \cite{yamakawa_motion_2010, zhang_robot_2021}. Solving a DLO manipulation task usually contains three key steps: perception, modeling, and motion planning. Computer vision is often used to perceive a DLO's state. Some researchers attach AR tags along a wire harness as sampling points to detect deformation \cite{jiang_robotized_2010}. More generally, classic algorithms, such as uniform thresholding and Canny edge detector, are applied to extract cables in a controlled environment \cite{zhu_robotic_2020}. With the development of neural networks, deep learning is used to extract features of a DLO \cite{yan_self-supervised_2020,caporali_ariadne_2022}. Tactile servoing is another approach for DLO manipulation. She {\it et al.} design a gripper with GelSight for cable following and cable insertion tasks \cite{yu_she_cable_2021}. Common DLO modeling is done topologically or geometrically. Topology is helpful to describe the spatial relation of a DLO fragment in a knotting task, such as presented in Wakamatsu's work \cite{wakamatsu_knottingunknotting_2006}. Different geometric methods are broadly used for other types of tasks, for example, thin plate splines \cite{schulman_case_2013}, parameterized curve with minimum energy-based scheme \cite{shah_planning_2018}, multi-link system \cite{yamakawa_motion_2010}. A bi-directional long short-term memory (LSTM) is also used to model the structure of a chain-like mass-spring system \cite{yan_self-supervised_2020}. Alternatively, it is possible to bypass the modeling step with deep learning to create low-level joint control directly from input sensor data, as suggested by Suzuki {\it et al.} \cite{suzuki_-air_2021}, who used a convolutional auto-encoder (CAE) and LSTM structure. The system used RGB images and proximity sensor information as input to generate robot joint angles directly. Once the model is ready, a task-oriented motion planning method can be introduced to solve the problem, such as learning from demonstration \cite{schulman_case_2013}, model predictive path integral (MPPI) control \cite{yan_self-supervised_2020}, planning based-on angular contact mobility index (ACMI) \cite{zhu_robotic_2020}. There are several papers investigating different aspects of DLO wrapping tasks. Lee {\it et al.} focused on testing the performance of simulating a high volume of possible contacts \cite{lee_parallelized_2021}. Göbert {\it et al.} \cite{gobert_3dwoodwind_2022} designed a customized end-effector to attach a winding filament to a cyclical mold with a rotation axis and planned the path of the mold. Ito {\it et al.} studied using one whipping motion to wind a whip onto a target object with dynamic manipulation, rather than a quasistatic manner \cite{koichiro_ito_winding_2017}. However, there is a lack of study to enable an off-the-shelf, general-purpose robot manipulator to perform general wrapping manipulation of a DLO around another object. 
In this paper, we address the open problem of enabling a general-purpose robot manipulator with a simple parallel gripper to autonomously wrap a DLO around another object based on synergizing real-time perception and robot motion planning and control without requiring prior physical and geometrical information of both the rope and the rod. We are interested in providing a general robotic wrapping capability that can be applied to DLOs of varied materials, including different kinds of ropes, flexible cables, fibers, and so on. Specifically, the paper presents a novel approach for general-purpose robot wrapping operations with the following characteristics: \begin{itemize} \item It uses real-time perception of the objects and wrapping state to determine and adapt the robot motion for accomplishing and improving wrapping operations to achieve high-quality results. \item It re-uses and adjusts the canonical motion of making a single wrap of the DLO around the other object to be flexible to the length of the DLO (thus the length of the coil as the wrapping result) and to enable constant check and improvement of wrapping quality with feedback control. \item Hence, it is able to achieve high-quality wrapping without requiring prior knowledge of the physical and geometrical properties of the DLO and the other object. \end{itemize} This paper is organized as follows. \Cref{section:task_definition} defines the problem and setup. \Cref{section:approach} introduces our approach in detail. \Cref{section:exp_n_result} describes our experiments and presents the results. \Cref{section:conclusions} concludes the paper. \section{Task description}\label{section:task_definition} We define the task we study as wrapping a DLO, called the "rope", around a given rigid object, called the "rod". A coordinate system is set up at the base of the manipulator. The positive directions along the $x$, $y$, and $z$ axes are defined as "front", "left", and "up" respectively from the manipulator's perspective. The rod is set in front of the manipulator. The manipulator has a gripper installed as the end-effector. As the initial state, the rope rides over the rod. We divide the rope into three sections: the fixed section, the curving section, and the active section. The fixed section is attached to a fixture or held by the manipulator without moving during the wrapping. The curving section has already been wrapped around the rod. The active section has sufficient length to create a few more wraps. The system could only obtain information through an RGB-D camera set to face the rod and the manipulator. The side view of the setup is depicted in Fig.~\ref{fig:task}. The goal of the task is for the manipulator to wrap the rope around the rod to create a helix that is tight in both radial and axial directions. Neither the dimensions nor the materials of the rod and the rope are known to the robot system. \begin{figure}[t] \centerline{\includegraphics[width=0.4\textwidth]{figures/task.png}} \caption{The side view of the task setup.} \label{fig:task} \end{figure} \section{Approach}\label{section:approach} We set up a real environment according to the task, as shown in Fig.~\ref{fig:setup}. A dual-arm manipulator (YuMi IRB 14000, ABB) is placed on a tabletop for the task. The fingers of the manipulator have been modified to have a $2.5$mm wide $30$mm long slot between two fingers when the gripper is fully closed. This allows the rope to slide in between while keeping the tension of the rope. 
A support structure is mounted in front of the manipulator to install the rod. An RGB-D camera (RealSense Depth Camera D415, Intel) is placed to face the manipulator. The manipulator and the camera are connected to a desktop computer as the controller. Our approach is to enable the manipulator to learn how to make a high-quality helix of the rope around the rod by repeating and improving each single wrap along the helix. Our system first conducts {\bf rod estimation}: Using collected RGB images and point cloud data to estimate the position, orientation, and dimensions of the rod with respect to the manipulator automatically. This step is detailed in \Cref{section:rod_estimation}. Next, our system processes {\bf rope estimation} by using RGB images and the rod estimation result to obtain the rope's color and diameter information. This step is described in \Cref{section:rope_estimation}. Subsequently, our system conducts a single wrap of the rope around the rod. It selects a point on the rope for grasping, moves the end-effector to grasp the point, executes the motion path to create one wrap, and releases the grip, which involves the following major procedures: \begin{enumerate} \item {\bf Grasp point selection: }Using RGB images to search for a grasp point along the rope. \item {\bf Motion adjustment for wrapping: }Generating robot end-effector's wrapping motion path based on adjustable parameters. \item {\bf Auxiliary motion generation to facilitate wrapping: }Generating picking and releasing motions for the manipulator to perform before and after the wrapping motion respectively to complete the whole process. \item {\bf Motion outcome estimation and feedback control: }Using RGB images to estimate the outcome of the motion and adjust the parameters of the motion path generator. \end{enumerate} \noindent Those procedures are described in \Cref{section:gp_selection,section:wrapping,section:aux,section:fb_control} in detail. Our system repeats the single-wrap process above to generate a helix while improving the wrapping quality until the remaining rope is not sufficient for more wraps. We enable the manipulator to continue practicing wrapping by unraveling the rope manually and letting the manipulator repeat the whole process until it can produce a tight helix wrap satisfactorily. \begin{figure}[t] \centerline{\includegraphics[width=0.4\textwidth]{figures/setup.png}} \caption{The experiment setup.} \label{fig:setup} \end{figure} \subsection{The rod estimation}\label{section:rod_estimation} Rod estimation is done as the first step of the whole process. This process starts with establishing the transformation between the camera and the robot by using the RGB image from the camera and a fiducial marker \cite{ar_track_alvar} attached to the robot. The RGB\nobreakdash-D camera provides a colorized depth map of the workspace (Fig.~\ref{fig:raw_pcd}). Let $P$ denote the set of all the 3D points in the map, and let $Q$ be the set of all 2D pixels with color information in the map. Each data point within the frame can be represented as $(p_x, p_y, p_z, q_x, q_y, c)$, where $(p_x, p_y, p_z)\in P$ is the position in the camera coordinate system, $(q_x, q_y, c)\in Q$ is the pixel location on the image plane, and $c$ is the color. Next, from the point cloud data captured by the camera, our system extracts the points between the robot and the camera, and above the tabletop. 
The system downsamples the extracted point cloud and applies DBSCAN \cite{ester_density-based_1996} to create clusters according to the distance from the camera. An example result is shown as Fig.~\ref{fig:dbscan}. As there is no other object between the camera and the rod, the cluster with the shortest distance is selected. A 3D bounding box is created around this cluster (Fig.~\ref{fig:rod_n_support_pcd}). All points within the 3D bounding box are collected. White pixels in Fig.~\ref{fig:rod_n_support_mask} indicate the selected data points in the 2D image plane. Those points are further classified into 2 clusters according to their hue with K-mean \cite{macqueen_methods_1967}. The cluster with the most data points is kept. A maximum inscribed rectangle is used to fit the kept points, as shown in Fig.~\ref{fig:inscribed_rectangle}. Finally, the points within the rectangle are treated as points on the rod, with ${P'\subset P}$ being the 3D points and ${Q'\subset Q}$ denoting the pixels. $P'$ are used to estimate the radius $r_{rod}$ and the length $l_{rod}$ of the rod. Our system creates a half-cylinder surface template based on the estimation and employs ICP \cite{besl_method_1992} to match it to $P'$ (Fig.~\ref{fig:icp_result}). At this point, the estimates of the rod's center position $(x_{rod},y_{rod},z_{rod})$, the rod's orientation, and $r_{rod}$ are obtained. \begin{figure} \centering \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/find_rod/0_full_cloud.jpg} \caption{The workspace is captured by the RGB-D camera to generate colorized point cloud.} \label{fig:raw_pcd} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/find_rod/2_dbscan.jpg} \caption{The robot and the background are subtracted. The data points are clustered by distance.} \label{fig:dbscan} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/find_rod/3_select_pcd.jpg} \caption{The cluster with the closest distance toward the camera is selected.} \label{fig:rod_n_support_pcd} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/find_rod/4_apply_pc_mask.jpg} \caption{An image mask is generated from the selected cluster.\\ } \label{fig:rod_n_support_mask} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/find_rod/5_ncp.jpg} \caption{The maximum inscribed rectangle (red) is added to estimate the rod on the 2D image plane.} \label{fig:inscribed_rectangle} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/find_rod/6_icp_result.jpg} \caption{The half-cylinder template (yellow) is matched to the point cloud by applying ICP.} \label{fig:icp_result} \end{subfigure} \caption{Key steps to estimate the rod's dimension and pose.} \label{fig:find_rod} \end{figure} \subsection{The rope estimation}\label{section:rope_estimation} Once the rod's information is determined, our system estimates the color (hue range) and the diameter $d$ of the rope using $Q'$ (highlighted by the red rectangle in Fig.~\ref{fig:img_w_box}). It extracts the hue channel of $Q'$ to create a histogram. Otsu's method \cite{otsu_threshold_1979} is applied to the histogram to find a threshold that can separate $Q'$ into the rope and the rod. 
Then a Gaussian function $N(\mu,\sigma)$ is used to approximate the normalized rope's hue histogram. The hue range of the rope is chosen as $[\mu-3\sigma,\mu+3\sigma]$. The hue threshold found above is also applied to $Q'$ to create a binary mask of the rope. The result is shown as in Fig.~\ref{fig:rope_mask}. A minimum area rectangle is generated to enclose the selected area, as in Fig.~\ref{fig:rope_contour}. The short edge of the rectangle for the rope segment is taken as $d$. \begin{figure} \begin{minipage}{.25\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.95\linewidth]{figures/find_rope/raw_img_w_box.jpg} \caption{Pixels on the rod are selected from the image (red rectangle).} \label{fig:img_w_box} \end{subfigure} \end{minipage} \hspace{.005\textwidth} \begin{minipage}{.22\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.95\linewidth]{figures/find_rope/rope_mask.jpg} \caption{The rope on the rod is obtained via thresholding with its hue feature.} \label{fig:rope_mask} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.95\linewidth]{figures/find_rope/rope_contour.jpg} \caption{A contour is created to represent the piece of the rope.} \label{fig:rope_contour} \end{subfigure} \end{minipage} \caption{Estimate the rope's width and color.} \label{fig:rope_width} \end{figure} \subsection{Grasp point selection}\label{section:gp_selection} Wrapping a rope typically requires grasping both the fixed section and the active section and moving the active one around the rod. Finding the grasp points on the two sections is performed with an unwrapped rope. For each additional wrapping motion, only the grasp point on the active section needs to be updated. The camera's limited 3D resolution makes it difficult to detect thin features, such as the rope, with the point cloud data. Therefore, this process is done based on the 2D RGB image. Grasp point selection starts with extracting the two sections of the rope from the image. A sub-image of Fig.~\ref{fig:img_w_box} is created as shown in Fig.~\ref{fig:rope_region}, by extending the bounding box of the rod at both sides and downward. The system employs Ariadne+'s \cite{caporali_ariadne_2022} pre-trained DeepLabV3+ \cite{chen_encoder-decoder_2018} to create a binary mask $M_1$ from the sub-image. Compared to traditional computer vision methods, this deep learning approach suggests possible rope areas with less noise. However, we observed that extraction defects may happen. For example, Fig.~\ref{fig:ariadne} shows an incomplete detection on the fixed section that is near the rod. Therefore, our system applies the rope's hue range as the threshold to the sub-image to create another binary mask $M_2$, as shown in Fig.~\ref{fig:hue_mask}. $M_2$ is used to connect detected rope segments in $M_1$ as much as possible. For a white pixel in $M_1$, if $M_2$ has a vertical white line segment across the pixel at the same position, all the pixels on this line are added to $M_1$. The updated $M_1$ is then skeletonized \cite{zhang_fast_1984} to reduce detected objects to 1-pixel width, as shown in Fig.~\ref{fig:rope_skeleton}. Finally, our system searches for all lines from the bottom of the skeletonized mask, extracts the two longest lines, and calculates their center. 
The one near the right gripper (based on the robot's view) is taken as the fixed section (highlighted with green), and the one near the left gripper is taken as the active section (highlighted with blue), see Fig.~\ref{fig:found_ropes}. After the two sections of the rope are detected, our system determines one point along each of the sections for grasping by providing a value $l_{gp}$ (in millimeters) measured from the lower edge of the rod. We assume: 1) both sections of the rope (within the image) are always parallel to the $YZ$ plane (Fig.~\ref{fig:task}) and tangent to the rod, and 2) the distance between any two pixels and their distance in 3D are uniformly scaled. Our system converts $l_{gp}$ to the measurement in pixels and searches the corresponding point along designated rope section. In Fig.~\ref{fig:found_ropes}, the resulting point in the image plane is represented by a red dot. The position information of the pixel is converted back to the world coordinate system and used to move the robot's end-effector, as shown in Fig.~\ref{fig:grasping_result}. \begin{figure} \centering \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/grasping_point/0_rope_region.jpg} \caption{Selected region that contains the rope.} \label{fig:rope_region} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/grasping_point/1_ariadne.jpg} \caption{Mask $M_1$ generated by using Ariadne+.} \label{fig:ariadne} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/grasping_point/2_hue_masked.jpg} \caption{Mask $M_2$ generated by using the rope's hue feature.} \label{fig:hue_mask} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/grasping_point/3_rope_skeleton.jpg} \caption{Masks are combined and skeletonized.} \label{fig:rope_skeleton} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/grasping_point/4_found_ropes.jpg} \caption{The fixed section (green), the active section (blue), and the grasp point (red) on the fixed section are found on the 2D image plane.} \label{fig:found_ropes} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/grasping_point/5_grasping_result.jpg} \caption{The 3D position of the grasp point is located for the follow-up robot motion planning.\\ \\} \label{fig:grasping_result} \end{subfigure} \caption{Example of grasp point selection for the fixed section.} \label{fig:grasping_point_selection} \end{figure} \subsection{Motion adjustment for wrapping}\label{section:wrapping} \begin{figure} \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=.95\linewidth]{figures/spiral/spiral_1.png} \caption{The side view of the section of the rod and the unwrapped rope.} \label{fig:rope_n_rod_side} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=.95\linewidth]{figures/spiral/spiral_3.png} \caption{The side view of the end-effector position path (magenta dashed line).} \label{fig:spiral_side} \end{subfigure} \hfill \begin{subfigure}{\linewidth} \centering \includegraphics[width=.4\linewidth]{figures/spiral/spiral_2.png} \caption{The front view of the rope's diameter and the wraps' advance. The translucent orange section indicates the active section before the wrapping. 
The opaque one on its' right is after wrapping.} \label{fig:spiral_front} \end{subfigure} \caption{Spiral curve for the robot end-effector.} \label{fig:spiral} \end{figure} The wrapping motion is the most crucial as it determines the quality of the generated wrapping helix. It is performed in the presence of the estimation error or uncertainty of the rod's pose, dimension, the grasp points along the rope, and the unknown physical properties of the rope. Our approach is to create a spiral curve with just a few parameters for the robot end-effector to follow and then adjust the parameter values based on perception feedback. The goal is to achieve a resulting wrapping motion that can overcome the estimation errors and achieve a high-quality wrap in spite of the unknown physical properties of the rope. To design the canonical spiral curve for the robot's end-effector, we assume a rope hangs over the rod naturally due to gravity, creating contact with the rod on the upper half of the cylinder and leaving the surface of the rod tangentially at points $A$ and $B$, as shown in Fig.~\ref{fig:rope_n_rod_side}. Let $O$ be the center of the rod's cross-section where the rope lies. For the convenience of deriving the spiral function, we define a new 2D coordinate system with the origin at $O$ and the $x$ and $y$ axes as indicated in Fig.~\ref{fig:rope_n_rod_side} on the cross-section. The 3D coordinates of $O$ can be obtained from their relation to the rod's center. The fixed section starts from point $A$, extends downwardly, and is held by the robot's right gripper. The active section is extended from point $B$ to the grasp point $(x_0,y_0)$, with the length $L$. We define $R=r_{rod} + \epsilon$, where $\epsilon$ is a variable for correcting the estimation error in $r_{rod}$. We define $L=2\pi R+L'$, where $L'$ is a safe distance that considers the end-effector's size. Once the left gripper starts to make a wrap, the tangential contact point $B$ moves to $B'$. Note that the wrapping angle $\theta\in[0,2\pi]$ is the angle $\angle B'OB$. As an additional $\wideparen{BB'}=\theta R$ of the rod is covered by the rope, the distance between the gripper to the tangential point $B'$ is reduced to $L-\theta R$, as shown in Fig.~\ref{fig:spiral_side}. Now we consider the following spiral curve with respect to the rod coordinate system: \begin{equation}\label{eq:spiral} \left\{\begin{array}{l} x = R\cos\theta-(2\pi R+L'-\theta R)\sin\theta \\ y = R\sin\theta+(2\pi R+L'-\theta R)\cos\theta\\ z = a\theta/2\pi \end{array}\right. \end{equation} \noindent where $a$ is the displacement of the end-effector along the rod's axial direction for one wrap (Fig.~\ref{fig:spiral_front}), which can be decided through feedback (see \Cref{section:fb_control}). The 2D projection of this spiral path for the end-effector is shown as the magenta dashed curve in Fig.~\ref{fig:spiral_side}. Our system searches for the safety distance $L'\in[L_{\min},L_{\max}]$ so that the wrapping path has a feasible inverse kinematics (IK) solution. Note that the goal of the wrapping is to create wraps that are tight along the radial and axial directions of the rod, such that the wrap along the radial direction is as close to the radius of the rod as possible, and the distance along the axial direction from the center of two adjacent wraps equal to the diameter of the rope. 
\begin{figure}[t] \centering \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/real_robot/2_rope_entangled.jpg} \caption{The gripper wraps with a fixed end-effector orientation, causing the rope to tangle the fingers.} \label{fig:rope_entangled} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/real_robot/3_gripper_orientation.jpg} \caption{The gripper wraps with the end-effector orientation adaptive to the spiral curve.\\} \label{fig:gripper_orientation} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/real_robot/4_rope_flip.jpg} \caption{The gripper finishes the wrapping motion and leaves a part of the active section hanging over the rod.} \label{fig:rope_flip} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/real_robot/1_rviz.png} \caption{The gripper follows a parameterized spiral curve to create a wrap and straight-line path to straighten the rope.} \label{fig:rviz} \end{subfigure} \caption{The wrapping motion.} \label{fig:wrapping_motion} \end{figure} During the wrapping process, when the wrapping angle changes to $\theta$, the end-effector rotates to the same angle $\theta$ about the rod's axis simultaneously, as indicated by its orientation in Fig.~\ref{fig:spiral_side}. Without this rotation, the rope tends to entangle the fingers and hinder the gripper's opening motion at the end of the wrap, as shown in Fig. \ref{fig:rope_entangled}. The results of wrapping with the change of the end-effector orientation are shown in Fig.~\ref{fig:gripper_orientation} ($\theta=120^\circ$) and Fig.~\ref{fig:rope_flip} ($\theta=330^\circ$). At this stage, the position and orientation of the robot gripper along the spiral path have been determined with respect to the rod coordinate system, which can be converted to the robot or world coordinate system. In practice, our system takes samples along the spiral curve and discards the first and the last sampled points ($\theta=0$ and $2\pi$) to specify the end-effector wrapping motion. The wrapping motion is connected with auxiliary motions to prepare the rope for wrapping and to release it after wrapping, as detailed in \Cref{section:aux}. Fig.~\ref{fig:rviz} illustrates the connection of the wrapping motion to the releasing motion. \subsection{Generation of auxiliary motions to facilitate wrapping}\label{section:aux} Rope picking and releasing motions are defined based on the spiral wrapping motion. Picking is a point-to-point motion that contains three key poses: 1) the entry pose, where the end-effector moves to the point with the opening of the gripper facing toward the rope at a distance. 2) the grasp pose, where the rope is squeezed in between the fingers of the gripper, and 3) the connection pose, where the gripper moves to the starting point of the spiral path. The grasp pose is derived as in \Cref{section:gp_selection} with $L$ being the input value for $l_{gp}$. The entry pose is obtained by offsetting the grasp pose to move the end-effector away from the rope. The release motion is designed to straighten the rope and to move the gripper away from the rope to avoid occlusion for image processing after the wrapping motion is done. Straightening the rope happens after the gripper arrives at the last sampled point of the spiral (see Fig.~\ref{fig:rope_flip} and \ref{fig:rviz}). 
The gripper moves downward to follow a straight-line path, sliding along the rope to flip the active section to the front. Then the gripper releases the rope and withdraws to be away from the rope. At this point, if the remaining length of the active section is longer than the rod's height, part of the section may land on the table, preventing the rope drops vertically from the rod. An additional rope alignment step is added. The left gripper rotates $90^{\circ}$ to increase the contact area with the rope. Then it moves to the front of the active section, as shown in Fig. \ref{fig:rope_front} and pushes the rope towards the manipulator to ensure that the rope is roughly vertical beneath the rod (Fig. \ref{fig:push_back}). Finally, the gripper moves away from the rope to leave space for the next grasp point selection step. \begin{figure}[h] \centering \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/real_robot/5_rope_front.jpg} \caption{The gripper rotates $90^\circ$ and moves to the front of the rope.\\} \label{fig:rope_front} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/real_robot/6_push_back.jpg} \caption{The gripper pushes the active section to align it vertically beneath the rod.} \label{fig:push_back} \end{subfigure} \caption{The auxiliary motions.} \label{fig:auxiliary_motion} \end{figure} \subsection{Motion outcome estimation and feedback control}\label{section:fb_control} Following the completion of one wrap, our system takes an image of the result and checks for the tightness along the radial direction (refer to as the height) and the axial direction (refer to as the advance) of the wrap, which is then used for feedback control to improve the next wrap. It starts by checking the height of the last wrapped part of the rope. At this step, a rectangle that contains the rod and some areas below is extracted from the image to include pieces of rope that could possibly hang under the rod, as shown in Fig. \ref{fig:len_rgb}. The rope is extracted from the image by using the rope's hue range as the threshold. Assuming that the rope segments in this binary image share the same width $d$ as the detected rope's diameter, our system searches the image for the rope segment created by the last wrap. The segment is skeletonized to find the valley point along it, as shown in Fig. \ref{fig:wrap_valley}. The pixel distance from this point to the bottom of the rod is taken as the height feedback of the last wrap, noted as $d_h$. This feedback is used to estimate the radius $R$. The feedback controller updates the $R$ by: \begin{equation} R_{n+1}=R_n-K_{PR}(q_r)\text{, where }q_r=d_h-t_R \end{equation} \noindent where $K_{PR}$ is the proportional coefficient, $t_R$ is a threshold. The stop condition is when $d_h\leq t_R$. The RGB image is also used to check the advance. Pixels of the rod's area (Fig. \ref{fig:adv_rgb}) are selected and processed to extract the rope. The rope area created by the last wrap $S_r$ and the gap between the last two wraps $S_g$ are measured, as in Fig \ref{fig:adv_cluster}. The feedback controller updates the $a$ by: \begin{equation} \begin{aligned} a_{n+1} = a_n-K_{Pa}(q_a)\text{, where }q_a = S_g/(S_g+S_r) \end{aligned} \end{equation} \noindent where $K_{Pa}$ is the proportional coefficient. There are two stop conditions for learning the advance: 1) $q_a=0$, 2) for two consecutive wraps $n$ and $n+1$, $|q_{an}-q_{a(n+1)}|<t_a$, where $t_a$ is a threshold. 
When the wrapping output meets either condition, the system stops to update the advance. \begin{figure} \centering \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/feedback/len_rgb.jpg} \caption{The rod and the below area are extracted to examine the height of the last wrap.\\ \\} \label{fig:len_rgb} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/feedback/wrap_valley.jpg} \caption{The last wrap is extracted and skeletonized to find the valley point (green). The bottom edge of the rod is indicated by the red line.} \label{fig:wrap_valley} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/feedback/adv_rgb.jpg} \caption{The rod area is extracted to examine the advance of the last wrap.\\ \\} \label{fig:adv_rgb} \end{subfigure} \hfill \begin{subfigure}[b]{0.235\textwidth} \centering \includegraphics[width=\textwidth]{figures/feedback/adv_cluster.jpg} \caption{The rope area created by the last wrap (blue) and the gap area (red) are extracted and measured. The rest of the rope area is indicated in white.} \label{fig:adv_cluster} \end{subfigure} \caption{Image processing to examine the motion outcome of the last wrap.} \end{figure} \section{Experiments and Results}\label{section:exp_n_result} In this section, we describe our experiments to implement and test our approach described in \Cref{section:approach} and present the results. We then discuss the performance and potential improvement of the wrapping algorithm. \subsection{Expriment setup} \setcounter{table}{3} \begin{table*}[!b] \caption{Warpping outcome of each case. The numbers above the wraps indicate the trial number. $a_i$ is the corresponding advance that is used to achieve that wrap.} \label{tab:exp_result} \setlength{\tabcolsep}{3pt} \begin{center} \begin{tabular}{ | c | c c c | c c c | c c c | c |} \hline & \multicolumn{3}{c|}{Rope1} & \multicolumn{3}{c|}{Rope2} & \multicolumn{3}{c|}{Rope3} & Human\\ & \multicolumn{3}{c|}{ } & \multicolumn{3}{c|}{ } & \multicolumn{3}{c|}{ } & wrapping\\ \hline Rod1 & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope1_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope1_2.jpg} & & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope2_1.jpg} & & & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope3_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope3_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod1_rope3_3.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/human_wrap.jpg}\\ & $a_1=20.0$ & $a_3=-2.7$ & & $a_1=20.0$ & & & $a_1=20.0$ & $a_3=-11.8$ & $a_5=-21.0$ & \multirow{5}{19mm}{Wrapping performed by hands as a comparison.}\\ & $a_2=11.2$ & (complete) & & (complete) & & & $a_2=~2.4$ & $a_4=-18.1$ & (complete) & \\ \cline{1-10} Rod2 & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope1_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope1_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope1_3.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope2_1.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope2_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope2_3.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope3_1.jpg} & 
\includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope3_2.jpg} & \includegraphics[width=0.08\textwidth]{figures/exp_progress/rod2_rope3_3.jpg} & \\ & $a_1=20.0$ & $a_3=9.7$ & $a_5=4.7$ & $a_1=20.0$ & $a_2=17.3$ & $a_3=17.0$ & $a_1=20.0$ & $a_3=-7.3$ & $a_5=-11.2$ & \\ & $a_2=10.3$ & $a_4=6.0$ & (complete) & & & (complete) & $a_2=~0.1$ & $a_3=-7.4$ & (complete) & \\ \hline \end{tabular} \end{center} \end{table*} \setcounter{table}{0}
The PC of our system is equipped with an Intel Core i5-7500, 16GB RAM, and an NVIDIA GeForce GTX 1050 Ti, running Ubuntu 20.04 with ROS Noetic. We modified KTH's YuMi package \cite{kth-ros-pkg} in order to control the manipulator. YuMi is configured to run in manual mode, at 100\% speed. Its Axis 6, where the gripper is mounted, is able to rotate from $-229^\circ$ to $229^\circ$, which enables the $360^\circ$ continuous rotation mentioned in \Cref{section:wrapping}. To achieve the required gripper motion during the wrapping, Axis 6 of the left arm was set to $216^\circ$ at the beginning of wrapping and was at $-144^\circ$ when finished. Trac-IK \cite{beeson_trac-ik_2015} was used to calculate the joint variables of the remaining 6 axes of the left arm and the 7 axes of the right arm from the expected end-effector poses. The solving timeout and the error tolerance of this solver were set to $50$ms and $1$mm, respectively.
The coefficients of the feedback control in our system are given in Table \ref{tab:parameters}. The threshold $t_R$ was set to $1.5d$ and $t_a$ was set to $5\%$. The system's advance feedback started with the initial value $a_0=20$mm. For a rod with an estimated radius $r_{rod}$, the initial $R$ was set to $1.5r_{rod}$. In practice, with a larger $R$, some waypoints of the spiral path may lie outside the manipulator's workspace, which prevents the manipulator from executing the path. If this happens, $R$ is reduced by a small value, which we set to $5$mm in the experiments, to generate a new spiral path. This process repeats until the manipulator is able to follow the path and obtain height feedback for the first time. Then the system uses feedback control to tune $R$.
\begin{table}[h] \caption{The coefficients of the feedback control} \label{tab:parameters} \begin{center} \begin{tabular}{ | c | c | c | c |} \hline $L'_{\min}$ & $L'_{\max}$ & $K_{PR}$ & $K_{Pa}$ \\ \hline $20$mm & $60$mm & $0.001$ & $0.04$\\ \hline \end{tabular} \end{center} \end{table}
To validate the capability of handling objects with unknown attributes, we tested our system on two cylindrical rods and three ropes, namely Rod1, Rod2, Rope1, Rope2, and Rope3. The two rods share the same length of $l=280$mm but have different radii. The ground truth and the estimated radius of each rod can be found in Table~\ref{tab:rod_estimation_result}. For each rod, we ran the estimation 10 times. Rope1 is a skein of yarn (Softee Chunky Solid Yarn, Bernat). Rope2 is also a skein of yarn (Chenille Home Yarn, Loops \& Threads). Rope3 is a skein of paracord (1/8 in. x 50 ft. Assorted Color Paracord, Everbilt). We tested our system on the 6 combinations of the rods and ropes.
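The selection of the initial radius can be sketched as follows. This is only an illustrative sketch, not the system's code; \texttt{spiral\_path} and \texttt{path\_in\_workspace} are hypothetical helpers standing in for the spiral path generation and the IK feasibility check.
\begin{verbatim}
def initial_radius(r_rod, advance, spiral_path, path_in_workspace, step=5.0):
    # Start from 1.5 * r_rod and shrink by 5 mm until the spiral path
    # fits inside the manipulator's workspace; feedback control then tunes R.
    R = 1.5 * r_rod
    while not path_in_workspace(spiral_path(R, advance)):
        R -= step
    return R
\end{verbatim}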
\begin{table}[h] \caption{The ground truth and the estimation of $r_{rod}$ (over 10 runs).} \label{tab:rod_estimation_result} \begin{center} \begin{tabular}{ | l | c | c | c |} \hline & Ground truth& Estimation average & Range \\ & & (Standard deviation) & \\ \hline Rod1 & $21$mm & $18.5(\pm1.8)$mm & $17.4\sim 20.7$mm\\ \hline Rod2 & $17$mm & $14.3(\pm1.1)$mm & $13.6\sim 14.6$mm \\ \hline \end{tabular} \end{center} \end{table}
\subsection{Wrapping outcome}\label{section:outcome}
We used one estimated radius $r_{rod}$ for each rod to examine the wrapping algorithm. The system uses the method mentioned in \Cref{section:wrapping} to obtain $R$ and $L'$. We find that the system's selection of the values of these two parameters is independent of the ropes. These parameter values are shown in Table~\ref{tab:r_tightness}.
\begin{table}[h] \caption{The parameters that the system chose to achieve radial tightness.} \label{tab:r_tightness} \begin{center} \begin{tabular}{ | c | c | c | c |} \hline & Estimated radius & $R$ & $L'$ \\ & $r_{rod}$ & & \\ \hline Rod1 & $20.7$mm & $21.0$mm & $60.0$mm\\ \hline Rod2 & $14.6$mm & $16.9$mm & $60.0$mm \\ \hline \end{tabular} \end{center} \end{table}
The robot took 5 trials to reach the stop condition for learning the advance in the axial direction for \{Rod1, Rope3\}, \{Rod2, Rope1\}, and \{Rod2, Rope3\}. For \{Rod1, Rope1\} and \{Rod2, Rope2\}, it took 3 trials. It achieved the axial tightness in the first trial for \{Rod1, Rope2\}. The results can be found in Table~\ref{tab:exp_result}. These images were taken by the RGB-D camera and were also used in the feedback control process. Note that since Rope3 has a higher stiffness, making the first wrap tends to push the fixed section, creating a larger gap compared to the following wraps (see Fig.~\ref{fig:first_wrap_pushing}). This only happens to Rope3 when no previous wrap has been made. Therefore, we always kept one pre-wrap on the rod, which was not taken into account, when testing Rope3 on both rods. For all cases, the tightness along the radial direction was met. For cases with Rope1 and Rope2, the axial tightness was also met. For cases with Rope3, the final result was comparable to a wrap performed manually by a person (see the last column of Table~\ref{tab:exp_result}). The attached video shows the robot performing those wrapping cases.
\begin{figure}[htb] \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/discussion/0_init.jpg} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/discussion/1_lift.jpg} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/discussion/2_place_w_shift.jpg} \end{subfigure} \caption{The fixed end of Rope3 is shifted while creating the first wrap because of its stiffness. The red arrows are used as references.} \label{fig:first_wrap_pushing} \end{figure}
\subsection{Time efficiency analysis} \label{section:time_consumption}
We report the time consumption of our method in two parts: the computation time and the execution time. The computation time includes all the steps described in \Cref{section:approach}. The execution time covers the manipulator executing the picking, wrapping, and releasing motions described in \Cref{section:wrapping,section:aux}. For the computation time, we repeat each step 10 times to evaluate the average and standard deviation of the running time.
Our system spends $13.197(\pm 0.402)$s to finish the rod estimation and $1.042(\pm 0.053)$s to finish the rope estimation at the beginning of a wrapping task. These are the pre-processing steps before wrapping and only need to be done once for each combination. The average time consumption of the steps to conduct a single wrap can be found in Table \ref{tab:time_consumption}. The motion planning time in the table is the total time of planning for the wrapping and auxiliary motions, which includes solving IK and polynomial interpolation in the joint space.
\setcounter{table}{4} \begin{table}[h] \caption{Average computation time of steps to conduct a single wrap.} \label{tab:time_consumption} \begin{center} \begin{tabular}{ |l| l| } \hline Step & Time (sec) \\ \hline Grasp point selection & $4.447(\pm 0.082)$\\ \hline Motion planning & $2.302(\pm 0.050)$\\ \hline Motion outcome estimation & $0.312(\pm 0.016)$\\ \hline \end{tabular} \end{center} \end{table}
We also measured the total execution time of one wrap 10 times for \{Rod1, Rope1\}. The average and standard deviation are $56.777(\pm 0.591)$s, which is substantially higher than the computation time for pre-processing.
\subsection{Discussion}
We have observed that the radial feedback meets the stop condition from the first wrap for all test cases. Due to the camera's perspective and the methods mentioned in \Cref{section:rod_estimation}, the estimated $r_{rod}$ is always smaller than the ground truth. With $R$ being less than or equal to the ground truth, the gripper slid slightly down the active section of the rope during the wrapping to compensate for the insufficient rope length from the gripper to the rod. This motion keeps the tension of the section and results in radial tightness. This suggests that, to wrap over a solid of revolution, an $R$ that is smaller than the ground truth can be tolerated and is often helpful for radial tightness.
\section{Conclusions}\label{section:conclusions}
In this paper, we present a novel and general method for using a general-purpose robot manipulator with a parallel gripper to wrap ropes around a rigid rod. This method is based on a parameterized canonical motion and does not require prior knowledge of the rope or the rod. It uses RGB-D images to estimate the state of the rope and rod, evaluates the wrapping outcome, and generates feedback from the outcome to improve motion planning. We tested our method with 6 combinations of ropes and rods. The results show that our general method handles different ropes and rods well. In the next step, we will extend the method to other types of rods, for instance solids of revolution other than cylinders such as cones, and incorporate tactile sensing.
{ "arxiv_id": "2302.11296", "language": "en", "timestamp": "2023-02-23T02:13:06", "url": "https://arxiv.org/abs/2302.11296", "yymm": "2302" }
\section{Conclusion} \label{Conclusion}
Spectral clustering is a clustering paradigm with heavy computational demands. These demands have been closely studied in the literature, and many solutions have been proposed. However, many of these solutions compromise either the consistency of clustering or the local statistics that are crucial for detecting sparse clusters and noise. We proposed a series of refinement stages for the well-known $k$-nearest neighbor graph. The obtained graph can detect sparse clusters and noisy points. Our method also delivers consistent clustering over multiple runs, since it is based on the $k$-nearest neighbor graph with no random operations involved. Compared to approximate spectral clustering (ASC), the proposed method detected sparse clusters and achieved accurate clustering under substantial noise.
The future directions of this work lie in two areas: 1) improving memory efficiency, and 2) automatically finding the number of neighbors included in the baseline distribution. Regarding memory efficiency, our method still requires more memory than ASC methods to store the graph; ASC benefits from reducing the number of nodes through vector quantization, and reducing nodes in a deterministic way would improve the memory efficiency of this work. Additionally, the number of neighbors included in the baseline distribution was tuned manually; the baseline distribution is used as the reference for deciding whether or not further edges are eliminated.
\section{Experiments and discussions} \label{Experiments}
Experiments were conducted using synthetic data, real data, and datasets with increasing noise (10\% to 50\% of $N$). The proposed method was compared against approximate spectral clustering (ASC) methods. The most common vector quantization methods for ASC are $k$-means and the self-organizing map (SOM); both were used for approximation. Similarities between prototypes obtained through $k$-means and SOM were computed using local $\sigma$ \cite{RN237}, CONN \cite{RN275}, and CONNHybrid \cite{RN276}. Other similarity metrics were proposed in \cite{RN276}, but they are built on top of CONN and their performance is highly correlated with CONN. All experiments were coded in MATLAB 2018b and run on a Windows 10 machine (3.40 GHz CPU and 8 GB of memory). The code is available at \url{https://github.com/mashaan14/Spectral-Clustering}.
\subsection{Evaluation metrics} \label{Experiments-1}
The performance of competing methods was evaluated by comparing the labels obtained by the clustering method with the true labels provided in the ground truth. Two metrics were used for the evaluation: clustering accuracy (ACC) and adjusted Rand index (ARI). Clustering accuracy checks one-to-one assignments and computes the percentage of hits. Let $T_i$ and $L_i$ be the ground truth labels and the labels obtained through clustering, respectively. Then, accuracy is defined as \cite{RN238}:
\begin{equation} ACC(T,L)=\frac{\sum_{i=1}^{N}{\ \delta(T_i,map(L_i))}}{N}, \label{Eq-008} \end{equation}
where $N$ is the number of points and the function $\delta(x,y)$ equals one when $x=y$ and zero otherwise. The function $map(L_i)$ is the permutation mapping that maps the obtained cluster labels to their equivalents in the ground truth labels.
The adjusted Rand index (ARI) \cite{RN365} is one of the ``pair counting'' evaluation measures. For two groupings $T$ and $L$, ARI counts how many pairs of points $T$ and $L$ agree or disagree on. It has better bounds than the original Rand index (RI) \cite{RN364}: the upper bound is 1, indicating identical groupings, and the lower bound 0 indicates random groupings. Let $N$ be the number of elements in the contingency table with $T$ rows and $L$ columns. All possible pairs in $\binom{N}{2}$ can be classified into four types: $n_{11}$: pairs in the same cluster in both $T$ and $L$; $n_{00}$: pairs in different clusters in both $T$ and $L$; $n_{01}$: pairs in the same cluster in $T$ but in different clusters in $L$; $n_{10}$: pairs in different clusters in $T$ but in the same cluster in $L$. Then, ARI is defined as:
\begin{equation} ARI(T,L)=\frac{2(n_{00}n_{11}-n_{01}n_{10})}{(n_{00}+n_{01})(n_{01}+n_{11})+(n_{00}+n_{10})(n_{10}+n_{11})}\ . \label{Eq-009} \end{equation}
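As a concrete illustration, a minimal Python sketch of these two metrics is given below (assuming NumPy, SciPy, and scikit-learn are available). The permutation mapping $map(L_i)$ of equation \ref{Eq-008} is realized here with the Hungarian algorithm, which is one common choice, and ARI is taken from scikit-learn rather than re-implementing equation \ref{Eq-009}; this is only an illustrative sketch, not our MATLAB implementation.
\begin{verbatim}
# Minimal sketch of Eq. (8); map(L_i) is realized with the Hungarian
# algorithm on the contingency matrix. ARI (Eq. (9)) is available in
# scikit-learn as adjusted_rand_score.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score

def clustering_accuracy(T, L):
    T, L = np.asarray(T), np.asarray(L)
    true_classes, pred_classes = np.unique(T), np.unique(L)
    # contingency[i, j] = points with true label i and predicted label j
    contingency = np.array([[np.sum((T == t) & (L == p)) for p in pred_classes]
                            for t in true_classes])
    rows, cols = linear_sum_assignment(-contingency)  # maximize matched counts
    return contingency[rows, cols].sum() / T.size

# ari = adjusted_rand_score(T, L)
\end{verbatim}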
The computational efficiency of competing methods was measured by the percentage of edges used compared to all edges in a fully connected graph. This is a more suitable measure than simply measuring the running time, which is sensitive to the experimental setup (e.g., computation power, machine used, etc.). The metric is computed as follows:
\begin{equation} E\%=\frac{E(G)}{E(G_{full})}\ , \label{Eq-009-01} \end{equation}
where $E(G)$ is the number of edges in the filtered graph and $E(G_{full})$ is the number of edges in the fully connected graph.
\subsection{Synthetic datasets} \label{Experiments-2}
\begin{figure}[t] \centering \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-06.pdf} \caption{Synthetic datasets used in the experiments.} \label{Fig:Fig-06} \end{figure}
\begin{table} \centering \caption{Properties of tested datasets. $N =$ number of samples, $d =$ number of dimensions (i.e., features), and $C =$ number of clusters.} \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Table-01.pdf} \label{Table:Table-01} \end{table}
For approximate spectral clustering (ASC), the number of prototypes was selected as the elbow point in the quantization error curve of $k$-means. It was set to 32, 32, 64, and 32 for \texttt{sparse303}, \texttt{ring}, \texttt{aggregation}, and \texttt{sparse622} respectively. As a general observation, prototypes placed by the self-organizing map (SOM) performed better than the ones placed by $k$-means (especially in \texttt{sparse303} and \texttt{sparse622}). This behaviour could be explained by the training of SOM, in which prototypes move as a group towards a data point rather than independently of each other as in $k$-means. This enables SOM to place more neurons into dense regions than $k$-means (please refer to Fig.\ \ref{Fig:Fig-01} for an illustration). In terms of the similarity metric, CONN and CONNHybrid have a slight advantage over local $\sigma$, especially in the \texttt{aggregation} dataset, in which all clusters are dense. In the \texttt{ring} dataset, CONN and CONNHybrid achieved similar performances, both higher than local $\sigma$.
\begin{table}[t] \centering \caption{Evaluating competing methods on synthetic data for 100 runs. Three rows for each dataset: \texttt{ACC}: mean accuracy $\pm$ standard deviation, \texttt{ARI}: mean adjusted Rand index $\pm$ standard deviation, and \texttt{E\%}: the percentage of edges compared to edges in a fully connected graph. Bold values are the best scores.} \includegraphics[width=0.94\textwidth,height=20cm,keepaspectratio]{figs-03/Table-02.pdf} \label{Table:Table-02} \end{table}
There were performance drops when $C$ was unknown compared to when it was given. The most significant drop occurred in \texttt{sparse622}, where it reached 50\% compared to when $C$ was given.
In \texttt{sparse303}, the drop was around 25\%. For the \texttt{ring} and \texttt{aggregation} datasets, the drop was smaller than for the sparse datasets, with the exception of local $\sigma$ in the \texttt{aggregation} dataset. Usually in sparse datasets the graph has a single connected component, and by giving $C$ we are forcing the method to break it into the desired number of connected components. But when $C$ is unknown, it is very difficult for the eigengap detection to uncover $C$ from a single connected component, which causes the sharp drops in performance. For the \texttt{ring} and \texttt{aggregation} datasets, ASC passes a graph with multiple connected components, making the task easier for the eigengap detector.
The major drawback of ASC that persisted across all synthetic datasets is the large standard deviation of the performance scores. In different experiments it reached 17\% in ACC and 0.2 in ARI, which highlights the inconsistency of these methods. Our proposed method achieved a score close to a full mark across all synthetic datasets. Another advantage of our method is its consistent performance: no matter how many times the method is repeated, there is a high chance of obtaining a score close to the full mark. The third evaluation metric was the percentage of used edges compared to a fully connected undirected graph. All ASC methods used fewer edges than our method because they construct a graph over $m$ prototypes, whereas ours uses all $N$ data points ($m \ll N$). However, the highest percentage of edges for our method was 4.41\% in the \texttt{ring} dataset. This means that 95.59\% of the edges were removed from the fully connected graph, a considerable reduction in computation and memory footprint.
\subsection{Real datasets} \label{Experiments-3}
For the real datasets, the number of prototypes in ASC methods was set by monitoring the quantization error to 32, 32, 40, 100, 500, and 1000 for the datasets \texttt{iris}, \texttt{wine}, \texttt{ImageSeg}, \texttt{statlog}, \texttt{Pen digits}, and \texttt{mGamma}. For the \texttt{iris} dataset, when $C$ was given, our method achieved the highest performance with a low standard deviation across all runs. However, when $C$ was unknown, our method dropped the most compared to ASC methods since an entire cluster was lost. SOM-based approximation scored above $k$-means approximation across all similarity measures when $C$ was given, due to the higher quality graphs provided by SOM. In terms of memory footprint, our method used 6.76\% of the edges of a full graph over all points in \texttt{iris}.
\begin{table} \centering \caption{Testing competing methods on real datasets for 100 runs. Three rows for each dataset: \texttt{ACC}: mean accuracy $\pm$ standard deviation, \texttt{ARI}: mean adjusted Rand index $\pm$ standard deviation, and \texttt{E\%}: the percentage of edges compared to edges in a fully connected graph. Bold values are the best scores.} \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Table-03.pdf} \label{Table:Table-03} \end{table}
In the \texttt{wine} dataset, our method lagged behind ASC methods, with a small performance drop when $C$ was unknown. We investigated this by projecting the points in the \texttt{wine} dataset onto their first three principal components. The three clusters were of a convex shape and similar density. In such cases, we do not expect our method to outperform ASC methods that are capable of capturing convex clusters.
The inconsistency of ASC methods persisted, with the standard deviation across runs reaching 14\% in ACC and 0.2 in ARI. Our method achieved the highest score for the \texttt{ImageSeg} dataset when $C$ was given. Projecting the points onto the first three principal components reveals non-convex clusters with varying densities, which explains the high score delivered by our method. For the \texttt{statlog} dataset, the performance of our method was in line with ASC methods, with a slight advantage for methods based on the CONN similarity measure. For the last two datasets, \texttt{Pen digits} and \texttt{mGamma}, the baseline distribution of 7 neighbors did not provide good results. Therefore, it was set manually to 50 neighbors after testing a range of values. This change made our method the best performer on the \texttt{Pen digits} dataset when $C$ was given. For the \texttt{mGamma} dataset, the proposed method was close to the best performer when $C$ was given. But when $C$ was unknown it produced 13 clusters compared to 2 clusters in the ground truth, causing a sharp decline in ACC.
\begin{figure} \centering \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-07.pdf} \caption{Datasets used for noise robustness test contaminated by 10\% to 50\% noisy points.} \label{Fig:Fig-07} \end{figure}
\begin{figure} \centering \includegraphics[width=0.95\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-08.pdf} \caption{Results of noise robustness test; (a) results on lines datasets; (b) results on rings dataset (best viewed in color).} \label{Fig:Fig-08} \end{figure}
\subsection{Noise robustness test} \label{Experiments-4}
The last experiment tested the competing methods' robustness against increasing noise, where $C$ was given. The datasets in Fig.\ \ref{Fig:Fig-07} were retrieved from \cite{RN237} and the evaluation in Fig.\ \ref{Fig:Fig-08} was based on ARI. In general, CONN-based similarities performed better when the data was clean. With increasing noise, ASC methods started to drop in performance. On the other hand, our method continued to score better than ASC methods even with noise approaching 30\% of the data. Even with 40\% noise, our method delivered the highest score on the lines dataset. The core difference between our method and ASC methods is that our method cuts off noisy points, since they do not match the density of the lines/rings, whereas ASC tries to accommodate noisy points due to its vector quantization component.
\subsection{Experiments for Integration with SpectralNet} \label{Experiments-5}
For integration with spectral clustering using deep neural networks (SpectralNet), we used three synthetic datasets, three methods, and four evaluation metrics. The three datasets are \texttt{cc}, \texttt{aggregation}, and \texttt{compound}, shown in Fig.\ \ref{Fig:Fig-09}. Two of the three methods were designed as described by \citet{RN360}, where each data point is paired with $k$ of its nearest neighbors to form positive pairs. In our experiments we set $k=2$ and $k=4$. Once positive pairs are constructed, an equal number of randomly selected farther neighbors is set as negative pairs. In the third method, we let the proposed method detailed in Algorithm \ref{Alg:Alg-01} decide the number of positive pairs; an equal number of farther neighbors was then set as negative pairs. For evaluation metrics, we used clustering accuracy and ARI, described in equations \ref{Eq-008} and \ref{Eq-009} respectively.
In addition to ACC and ARI, we used normalized mutual information (NMI) as an evaluation metric because it was reported in the original SpectralNet paper \cite{RN360}. NMI is defined as:
\begin{equation} NMI(T,L)=\frac{I(T;L)}{max\{H(T),H(L)\}}\ , \label{Eq-010} \end{equation}
where $T$ and $L$ are the ground truth labels and the labels obtained through clustering, respectively. $I(T;L)$ denotes the mutual information between $T$ and $L$, and $H(\cdot)$ denotes their entropy. We also used the total number of pairs as an indicator of computational efficiency.
\begin{figure} \centering \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-09.pdf} \caption{Datasets used for SpectralNet experiments.} \label{Fig:Fig-09} \end{figure}
Table \ref{Table:Table-04} displays the results of our SpectralNet experiments. In the \texttt{cc} dataset, our proposed method outperformed the methods with fixed $k$. Our method achieved an ARI score of 0.800, an increase of 30\% over its closest competitor. This performance was largely due to the higher number of pairs passed to the Siamese net. Our method passed 21,478 pairs on average, compared to 6,000 and 12,000 pairs passed by $k=2$ and $k=4$ respectively. In the \texttt{aggregation} dataset, $k=2$ was the best performer with an ARI score of 0.808. Our method scored close to 10\% lower ARI than $k=2$. However, the performance of $k=4$ was about 40\% worse than ours when compared to the ground truth. In the \texttt{compound} dataset, our method was the best performer with an ARI score of 0.705, a margin of 15\% over the second best. This experiment reveals that although Siamese nets are superior in building an affinity matrix, they still need an informative selection of positive and negative pairs. Fixing the number of positive pairs for all points limits the influence of points in dense regions. Points in dense regions should be able to pair with more neighbors to strengthen intra-cluster connections.
\begin{table} \centering \caption{Results of experiments for integration with SpectralNet for 10 runs. Four rows for each dataset: \texttt{ACC}: mean accuracy $\pm$ standard deviation, \texttt{ARI}: mean adjusted Rand index $\pm$ standard deviation, \texttt{NMI}: mean normalized mutual information $\pm$ standard deviation, and \texttt{Total pairs}: the total number of positive and negative pairs passed to the Siamese net. Bold values are the best scores.} \includegraphics[width=0.7\textwidth,height=20cm,keepaspectratio]{figs-03/Table-04.pdf} \label{Table:Table-04} \end{table}
\section{Introduction} \label{Introduction}
Spectral clustering has gained popularity due to its ability to uncover clusters with non-convex shapes. It uses the spectrum of pairwise similarity matrices to map data points to a space where they can be easily separated \cite{RN261,RN233,RN234,RN295}. Spectral clustering has been used in image segmentation \cite{RN228,RN296}, remote sensing image analysis \cite{RN297,RN276}, and detecting clusters in networks \cite{RN288,RN292,RN298,RN240}. Despite its elegance in uncovering clusters, spectral clustering comes with a heavy computational price. Decomposing the pairwise similarity matrix requires $\mathcal{O}(N^3)$ operations for $N$ data points, making spectral clustering infeasible for applications with large $N$. For the graph $G(V,E)$ represented by its affinity matrix $A$, reducing the size of $A$ means removing some vertices in $V$, whereas making $A$ sparser means removing edges.
This is the motivation of approximate spectral clustering (ASC) \cite{RN296,RN297,RN276,RN232,RN275,RN285}, which adds two steps to the original algorithm of spectral clustering. First, it places $m$ prototypes in the data space where $m \ll N$. Then, spectral clustering is carried out on the $m$ prototypes and the initial assignments are used to generalize the outcome. The $m$ prototypes are usually placed via vector quantization methods (e.g., $k$-means and self-organizing maps). Despite being a popular choice for approximate spectral clustering, vector quantization can converge badly, resulting in a poor representation of the data points due to randomness in initialization and/or training.
\begin{figure}[t] \centering \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-01.pdf} \caption{Running approximate spectral clustering (ASC) via vector quantization on synthetic data (best viewed in color).} \label{Fig:Fig-01} \end{figure}
Fig.\ \ref{Fig:Fig-01} illustrates two graphs by Zelnik-Manor and Perona \cite{RN237}, where approximate spectral clustering graphs were confused by the variation in local statistics. Generally, one can identify three deficiencies of ASC via vector quantization: 1) these methods have a random selection step, either in initialization or in training, which affects the consistency of clustering; 2) the obtained $m$ prototypes provide a global overview of the data, leaving out local information that could be crucial for clustering; and 3) vector quantization methods have to accommodate noisy data points as part of their training.
Considering the aforementioned deficiencies, we propose a graph for spectral clustering $G=(V,E^\ast)$ in which we keep the same set of vertices $V$ and find the most important subset of edges $E^\ast \subset E$. Our goal is to create a graph with fewer edges without compromising the clustering accuracy; using a graph with fewer edges than $E^\ast$ would negatively impact the clustering accuracy. To obtain $E^\ast$, we use a set of computationally inexpensive refinement stages. The method starts by linking data points to their nearest neighbors progressively, monitoring local statistics to stop linking when links fall outside the local neighborhood of a point. Then, we use the mutual $k$-nearest neighbor graph \cite{RN234} to filter out edges that lack mutual agreement between the pair of points. The experiments show the proposed method outperforming ASC via vector quantization.
This paper is outlined as follows: in the next section we introduce spectral clustering and review different approaches in the literature to construct a graph. In Section 3, we present our approach for refining the $k$-nn graph and automatically selecting the number of clusters $C$. Section 4 discusses the experiments.
\section{Spectral clustering (SC)} \label{Spectralclustering}
The graph $G(V,E)$ connecting the data points, and its numerical representation the affinity matrix $A$, are the core components of spectral clustering. This clustering scheme is a relaxation of the normalized cut problem (Ncut) introduced by Shi and Malik \cite{RN261}. Their contribution made progress on the minimum cut (Mincut), defined as:
\begin{equation} cut\left(B,\bar{B}\right)=\sum_{i\in B,j\in\bar{B}}\ A_{ij}\ , \label{Eq-001} \end{equation}
where $B\subset V$, $\bar{B}$ is the complement of $B$, and $A_{ij}$ is the similarity score between the nodes $i$ and $j$.
However, Mincut tends to cut isolated sets rather than significant partitions since it increases with the number of edges \cite{RN261,RN234}. Consequently, Ncut was introduced as:
\begin{equation} Ncut\left(B,\bar{B}\right)=\frac{cut\left(B,\bar{B}\right)}{assoc\left(B,V\right)}+\frac{cut\left(B,\bar{B}\right)}{assoc\left(\bar{B},V\right)}\ . \label{Eq-002} \end{equation}
It penalizes the cut cost by the total connections from the nodes in $B$ to all nodes $V$ in the graph. Let $y$ be the exact solution of $Ncut\left(B,\bar{B}\right)$, with $y_i= 1$ if $i\in B$, and $-1$ otherwise. Then $Ncut\left(B,\bar{B}\right)$ can be optimized as:
\begin{equation} \min_x {Ncut\left(x\right)}=\min_y {\frac{y^T\left(D-A\right)y}{y^TDy}}\ , \label{Eq-003} \end{equation}
where $D$ and $A$ are the degree and affinity matrices respectively. To exactly solve Ncut, we have to look for two subsets with strong intra-connections and relatively weak weights between them, which was shown to be an NP-complete problem by Shi and Malik \cite{RN261}. However, by relaxing $y$ to take real values, it was shown by Shi and Malik \cite{RN362} that equation \ref{Eq-003} can be minimized by solving the generalized eigenvalue system:
\begin{equation} \left(D-A\right)y=\lambda Dy\ . \label{Eq-004} \end{equation}
The second smallest eigenvalue $\lambda^L$ of the graph Laplacian $L=D-A$ and its corresponding eigenvector $v^L$ provide an approximation for solving Ncut \cite{RN234,RN279}. When there is a partitioning of $B$ and $\bar{B}$ such that
\begin{equation} v_i^L = \begin{cases} \alpha, & i\in B,\\ \beta, & i\in \bar{B}, \end{cases} \label{Eq-005} \end{equation}
then $B$ and $\bar{B}$ become the optimal Ncut with a value of $Ncut\left(B,\bar{B}\right)=\lambda^L$ \cite{RN279}. $v^L$ is used to bipartition the graph; the following eigenvectors are then used to partition the graph further.
\subsection{SC grouping algorithm} \label{SCGroupingAlgorithm}
Clustering through graph Laplacian eigenvectors can be done iteratively (i.e., ordered by eigenvalues) or by constructing an embedding space using the top eigenvectors. The latter approach is more convenient, and a well-known method for embedding space clustering was introduced by Ng, Jordan and Weiss \cite{RN233}. They proposed a symmetric graph Laplacian $L_{sym}=D^{-1/2}AD^{-1/2}$ where $D$ and $A$ are the degree and similarity matrices respectively. The top eigenvectors of $L_{sym}$ constitute an embedding space in which strongly connected points fall close to each other, making clusters detectable by $k$-means.
\subsection{Graph construction} \label{GraphConstruction}
When it comes to spectral clustering, it is all about quantifying similarities. Ideally, points in the same cluster are linked by large weights so they fall close to each other in the embedding space. A naive approach of assigning weights would be through Euclidean distance. However, this is not a practical choice, since it only considers first-order relationships, in which edges are drawn based on information from the pair of points only. A more practical approach would be to consider second-order relationships, where edges are drawn based on information from the neighbors. In the following subsections, we go through some of the popular methods to construct a graph whose similarity matrix is fed into spectral clustering.
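Given such a similarity matrix $A$, the grouping step described above can be sketched in a few lines. The sketch below (assuming a dense affinity matrix, a known number of clusters $C$, and NumPy/SciPy/scikit-learn) is only an illustration of the algorithm of Ng, Jordan and Weiss, not our implementation.
\begin{verbatim}
# Minimal sketch of the SC grouping step given an affinity matrix A and C.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def sc_grouping(A, C):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = D_inv_sqrt @ A @ D_inv_sqrt       # L_sym = D^{-1/2} A D^{-1/2}
    _, vecs = eigh(L_sym)                     # eigenvalues in ascending order
    V = vecs[:, -C:]                          # top C eigenvectors
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    V = V / np.maximum(norms, 1e-12)          # row normalization (original algorithm)
    return KMeans(n_clusters=C, n_init=10).fit_predict(V)
\end{verbatim}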
\subsubsection{Conventional graphs} \label{ConventionalGraphs}
Conventional choices for constructing a graph include the $k$-nearest neighbor graph and the $\epsilon$-neighborhood graph. These graphs use first-order relationships. In the $k$-nearest neighbor graph, each point is linked to its $k$ nearest neighbors, while in the $\epsilon$-neighborhood graph, each point becomes the center of a sphere of radius $\epsilon$ and links with all points inside that sphere. These are straightforward approaches for constructing a graph, but their reliance on first-order relationships and hyperparameters limits their usability. Some restrictions can be applied to boost their performance; for example, a point connects to its $k$ nearest neighbors only if they are closer than a threshold distance. The interested reader is referred to Section 2.2 in \cite{RN234} and Appendix D in \cite{RN274}.
\subsubsection{Approximate graphs} \label{ApproximateGraphs}
Approximate graphs use vector quantization methods to construct a graph from a reduced set of prototypes. These methods can be classified into two categories: 1) methods that only place prototypes in the feature space, such as $k$-means \cite{RN240,RN309}, and 2) methods that are capable of placing prototypes and connecting them by edges (the self-organizing map \cite{RN298,RN42} and neural gas \cite{RN259,RN257}). $k$-means attempts to minimize the sum of squared distances between points and their closest prototypes. The self-organizing map (SOM) uses a predefined lattice that connects prototypes. During SOM training, a winning prototype pulls its lattice neighbors towards the selected data point. Neural gas (NG) was an improvement over SOM since it links prototypes based on their location in the feature space rather than on the lattice. During NG training, the winning prototype links to its closest neighbor, and that edge is allowed to age and ``die'' if it is not updated again. Both SOM and NG can produce a graph with fewer edges and vertices, making spectral clustering computationally efficient.
Once the vector quantization training finishes, pairwise similarities can be set as: 1) a prototype to prototype similarity (approximate graph \cite{RN276,RN275}) or 2) a prototype to data point similarity (anchor graph \cite{RN359,RN358}). The former was used as a benchmark in the experiments due to its larger presence in the literature. The connectivity matrix (CONN) \cite{RN275} defines the similarity for a pair of prototypes according to the induced Delaunay triangulation \cite{RN257}, which links the pair if there exists a data point that selects them as its first and second best matching units (BMUs). When such a point does not exist, the pair of prototypes is not linked, which makes CONN capable of producing sparse graphs. Growing neural gas (GNG) was used in \cite{RN296} as an approximation graph. GNG applies the same training as NG, but in an incremental manner, where prototypes are introduced to bridge gaps during training. A comparison study \cite{RN285} discussed different approximate graphs and how to assign their weights using local scaling \cite{RN237} or CONN \cite{RN276,RN275}.
\subsubsection{Proximity graphs} \label{ProximityGraphs}
In proximity graphs, a pair of points is linked if they satisfy a predefined condition. This makes them use second-order relationships, since the linking decision is based on the neighbors. In \cite{RN254}, the condition was that the pair should be linked if the neighborhood between the pair of points is empty of any other point.
This type of graph is known as an empty region graph (ERG). ERGs rely on a $\beta$ parameter to identify the neighborhood that should be empty in order to link the pair with an edge. Edges in the graph were locally scaled using the similarity metric in \cite{RN237} to achieve accurate clustering. A more sophisticated condition for the empty region can be parameter free. Inkaya et al. introduced the neighborhood construction (NC) algorithm \cite{RN303}. It starts by assigning direct neighbors as core neighbors and indirectly connects a point to other points through its core neighbors. Then it tracks the density between each pair; a pair with zero density represents core neighbors. Once each point has a list of neighbors, the method tests the mutuality between the neighbors' lists and drops the points lacking mutual agreement. Drawbacks of NC include isolated vertices, subgraphs, and an asymmetric similarity matrix. This was rectified in \cite{RN253}, where additional steps were proposed to achieve a symmetric similarity matrix. An undirected graph was constructed using NC, and if its number of connected components is less than the desired number of clusters $C$, edges were introduced between nearest points to satisfy the condition. Proximity graphs are capable of capturing the underlying shape of the data. However, they come with a heavy computational price or the need for a hyperparameter. The density calculation alone requires $\mathcal{O}(n^3)$ \cite{RN303}.
\subsubsection{Constrained graphs} \label{ConstrainedGraphs}
Constrained graphs require special types of constraints prior to data linkage. These constraints are must-link (ML) and cannot-link (CL), which force data points to be linked or unlinked regardless of their location in the feature space. The authors of \cite{RN300} found that constraints are usually limited to a small number of points, which is not very useful for clustering. They introduced an affinity propagation method where points are linked not only based on their affinity but also by evaluating nearby constraints. This increases the impact of constraints on improving the clustering. Another approach was introduced by Li, Liu and Tang \cite{RN302}: instead of applying the constraints to the similarity matrix, they were applied to the eigenvectors constituting the embedding space. In that space, must-link points should be close to each other and cannot-link points should be far apart. The authors created a measure of ``good representation'' that holds the minimum cost when optimized. This method has a computational advantage over applying constraints to the similarity matrix. A separation of constrained graphs was proposed in \cite{RN301}: a must-link graph and a cannot-link graph were created, and a bi-objective graph optimization was employed instead of eigen-decomposition. It is clear that constraining the graph created from data points would yield better clustering results. However, this comes at the price of fundamentally changing the problem from unsupervised to semi-supervised, since constraints are usually created from the ground truth.
\section{Refined $k$-nearest neighbor graph for Spectral clustering} \label{ProposedApproach}
From the previous overview, it can be noticed that there is no graph construction that works for all sets of data.
Every method has to compromise at some stage, but we believe that local statistics between data points represent clustering information and should not be compromised. The ultimate goal of spectral clustering is to detect non-convex clusters, and local statistics are crucially important to achieve that goal. They have been avoided in approximate spectral clustering for computational efficiency. Also, most irregularly shaped clusters can be detected by approximation as long as they are dense, but this is not always the case.
\begin{figure}[t] \centering \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-02.pdf} \caption{A summary of the proposed approach. (a) $k$-nearest neighbor graph with $k$ set according to local statistics; (b) mutual $k$-nearest neighbor graph to filter edges lacking mutual agreement; (c) an optional step to locally monitor the change in eigenvalues to detect the number of clusters $C$; (d) clustering outcome (best viewed in color).} \label{Fig:Fig-02} \end{figure}
Our method attempts to balance the tradeoff between locally scaled graphs and computational efficiency. It starts by growing the $k$-nearest neighbor list of each point and stops when local statistics are violated. Then, a mutuality check is run to ensure agreement among data points. We also introduce an eigengap detection method to uncover the number of clusters $C$. An overview of the method is shown in Fig.\ \ref{Fig:Fig-02}.
\subsection{Setting $k$ in the $k$-nearest neighbor graph} \label{ProposedApproach-1}
Conventional $k$-nearest neighbor graphs have the problem of treating all data points equally. Depending on their location in the feature space, data points need different numbers of edges, which could be larger or smaller than $k$. Ignoring these needs, and forcing each point to have $k$ edges, might result in linking two clusters or breaking a single cluster. Therefore, the parameter $k$ should be adaptively computed to accommodate the needs of each data point.
\begin{figure}[t] \centering \includegraphics[width=0.79\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-03.pdf} \caption{Distribution of distances from the point marked as $\times$: dashed line, $k=7$; solid line, $k$ set by the proposed method; dotted line, $k$ set by the proposed method plus 20 neighbors.} \label{Fig:Fig-03} \end{figure}
Setting $k$ manually for each data point is not practical. Therefore, we monitor how the distribution of distances changes as we add more neighboring distances to it. The intuition is that, at some point while adding neighbors, we leave the current cluster for another cluster, and this movement between clusters should be reflected in the distribution of distances. First, we need a baseline distribution of distances indicating how distances are distributed in the current cluster. Also, we need a threshold to notify us that we are leaving the baseline distribution towards something different. Our baseline distribution is the normal distribution of distances to the first seven neighbors; this value was also used for locally scaled pairwise distances in \cite{RN237,RN274}. By setting the baseline distribution to seven neighbors, we imply that the first seven neighbors belong to the same cluster. This is a strong assumption, and it fails in large datasets (as we will see in Table \ref{Table:Table-03}) where we had to change it to fifty, because seven was too narrow. That is why we are currently working on determining this parameter automatically.
Since we are moving in one direction away from the mean $\mu$ of the baseline distribution, we set the threshold for adding more neighbors to $\mu + \sigma$. Once the new mean exceeds $\mu + \sigma$, we set $k$ for the current point at its current neighbor. In Fig.\ \ref{Fig:Fig-03}, we compare the distributions of distances for two points sitting in different locations of the feature space. In the plots on the right, the dashed line is the baseline distribution of distances ($k=7$), the solid line is the selected distribution that does not violate the threshold $\mu + \sigma$, and the dotted line is the distribution if we add 20 more neighbors beyond $\mu + \sigma$. The selected distribution keeps the shape of the baseline distribution while adding more edges that are useful for clustering. However, if we keep adding neighbors beyond $\mu + \sigma$, the distribution becomes flatter and loses the shape of the baseline. Interestingly, the distribution of distances indicates the sparsity of a cluster. In Fig.\ \ref{Fig:Fig-03} (a), the point marked as $\times$ sits in a sparse cluster, making its distribution spread over a large interval (0.1, 0.4) due to the varying distances around it. On the other hand, the point in the bottom row sits in a dense cluster and has a sharp distribution covering a narrow interval (0.01, 0.04), given the similar distances around it.
\begin{algorithm}[t] \DontPrintSemicolon \KwInput{$N$ data points, maximum number of neighbors $k_{max}$} \KwOutput{Refined $k$-nearest neighbor graph} \tcc{The following step has computations in order of $\mathcal{O}(dN\log N)$} Construct $k$-nn graph where $k=k_{max}$ represented by its distance matrix $D(N,k_{max})$ \tcc{The following loop has computations in order of $\mathcal{O}(Nk_{max})$} \For{$i = 1 \text{ to } N$} { \For{$j = 1 \text{ to } k_{max}$} { $DM_{i,j}$ = mean($D_{i,\ 1\ to\ j}$) + standard deviation($D_{i,\ 1\ to\ j}$) } } let $DM7(N,k_{max})$ be an empty matrix let all columns in $DM7$ equal the 7$^\text{th}$ column in $DM$ $D^\ast(N,k_{max})=DM7(N,k_{max})-DM(N,k_{max})$ \tcc{The following loop has computations in order of $\mathcal{O}(Nk_{max})$} \For{all elements in $D^\ast(N,k_{max})$} { \If{$D_{i,j}^\ast<0$} { $D_{i,j}^\ast=0$ } } \tcc{The following loop has computations in order of $\mathcal{O}(Nk_{max})$} \For{all elements in $D^\ast(N,k_{max})$} { \If{$D_{i,j}^\ast==0$ or $D_{j,i}^\ast==0$} { $D_{i,j}^\ast=0$ $D_{j,i}^\ast=0$ } } Construct the refined $k$-nn graph using the distance matrix $D^\ast(N,k_{max})$ \caption{Constructing a refined $k$-nearest neighbor graph} \label{Alg:Alg-01} \end{algorithm}
The computational bottleneck is obtaining the initial $k$-nearest neighbor graph. These computations can be reduced by using an efficient data structure like $kd$-trees, which can be constructed in $\mathcal{O}(dN\log N)$ [34]. The parameter $k$ was set to $k_{max}$, chosen such that each data point requires fewer than $k_{max}$ edges. Then, obtaining the refined $k$-nearest neighbor graph requires only $\mathcal{O}(Nk_{max})$ computations. Once we have the distance matrix of size $N\times k_{max}$, we compute the mean and standard deviation $\mu_0$ and $\sigma_0$ over the first seven columns. Then we add more neighbors and compute the new mean $\mu_i$. Once $\mu_i>\mu_0 + \sigma_0$, the comparison stops and all remaining elements up to $k_{max}$ are nullified. This process is illustrated by steps 1--5 in Algorithm \ref{Alg:Alg-01}.
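A minimal NumPy/scikit-learn sketch of this procedure is given below; it is not the authors' MATLAB implementation. It assumes Euclidean distances and points without exact duplicates, follows the rule described in the text (stop adding neighbors once the running mean of distances exceeds $\mu_0 + \sigma_0$ of the first \texttt{k0} neighbors), and applies the mutuality check of steps 6--7.
\begin{verbatim}
# Minimal sketch of Algorithm 1 (refined k-nn graph); not the authors' code.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def refined_knn_graph(X, k_max=30, k0=7):
    N = X.shape[0]
    D, I = NearestNeighbors(n_neighbors=k_max + 1).fit(X).kneighbors(X)
    D, I = D[:, 1:], I[:, 1:]             # drop the point itself (column 0)
    # running mean/std of the first j neighbor distances (steps 1-3)
    j = np.arange(1, k_max + 1)
    mean = np.cumsum(D, axis=1) / j
    std = np.sqrt(np.maximum(np.cumsum(D ** 2, axis=1) / j - mean ** 2, 0.0))
    mu0 = mean[:, k0 - 1][:, None]        # baseline mean of the first k0 distances
    sigma0 = std[:, k0 - 1][:, None]      # baseline standard deviation
    keep = mean <= mu0 + sigma0           # distances are sorted, so the first k0 are kept
    # build the directed refined k-nn graph (steps 4-5)
    W = np.zeros((N, N))
    rows = np.repeat(np.arange(N), k_max)[keep.ravel()]
    W[rows, I.ravel()[keep.ravel()]] = D.ravel()[keep.ravel()]
    # mutuality check (steps 6-7): keep an edge only if it exists in both directions
    mutual = (W > 0) & (W.T > 0)
    return np.where(mutual, W, 0.0)       # symmetric, distance-weighted adjacency
\end{verbatim}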
\subsection{Checking mutual agreement} \label{ProposedApproach-2} The graph obtained in the previous step is a directed graph. Each edge indicates the existence of the destination point in the source point's refined $k$-nn list. A pair of points have a mutual agreement if they have each other in their refined $k$-nn lists. Fig.\ \ref{Fig:Fig-04} shows how crucial this step is. If we convert the refined $k$-nn graph in Fig.\ \ref{Fig:Fig-04} (a) into the undirected graph in Fig.\ \ref{Fig:Fig-04} (b) and proceed with spectral clustering, we should not expect great results since all clusters are connected. However, if we drop the edges between neighbors lacking mutual agreement, we end up with a graph highlighting groups of points separated by different local statistics (see steps 6--7 in Algorithm \ref{Alg:Alg-01}). \begin{figure}[t] \centering \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-04.pdf} \caption{Illustration of the importance of the mutuality test. (a) refined $k$-nn graph with directed edges, (b) refined $k$-nn graph with undirected edges, (c) refined $k$-nn graph after removing edges lacking mutual agreement (best viewed in color).} \label{Fig:Fig-04} \end{figure} \subsection{Looking for the eigengap} \label{ProposedApproach-3} Once the graph is ready, we can construct the pairwise similarity matrix and assign weights to the edges. We used the similarity measure introduced by Zelnik-Manor and Perona \cite{RN237}, since it is well suited to highlighting clusters with different local statistics. The affinity matrix $A$ is defined as follows: \begin{equation} A_{ij}=\exp{\left(\frac{-d^2\left(i,j\right)}{\sigma_i\ \sigma_j}\right)}. \label{Eq-006} \end{equation} The local scale $\sigma_i$ is set as the distance of a point to its $k^\text{th}$ neighbor; here we set it to the 7$^\text{th}$ neighbor, $d(i,i_{k=7})$. The eigen-decomposition is carried out afterwards to get the top $C$ eigenvectors ($C$ is the number of clusters). Setting $C$ manually is not practical. Our recent work in \cite{RN296} introduced a framework to uncover $C$ by evaluating eigenvectors independently and calculating the Davies-Bouldin index (DBI) for each eigenvector. However, that framework was built around approximate spectral clustering, where the number of prototypes $m$ is small compared to the $N$ points in our case ($m \ll N$). Applying the same framework here could be computationally expensive. Therefore, we preferred to track the change in eigenvalues. Liu et al. proposed predicting $C$ from eigenvalues, but their method requires a parameter $\tau$ \cite{RN168}. To keep it simple, we start with the 2$^\text{nd}$ and 3$^\text{rd}$ eigenvalues and compute their mean and standard deviation. As we add more eigenvalues, we check whether the new mean is less than the old mean plus one standard deviation (see steps 1--3 in Algorithm \ref{Alg:Alg-02}). $C$ is then set as: \begin{equation} C=i,\ \ \ \ \ \ \ \text{where}\ \mu_{i+1}>\mu_i+\sigma_i. \label{Eq-007} \end{equation}
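A small sketch of this stopping rule (ours; it assumes the eigenvalues of $L_{sym}$ are already available and ordered as $\lambda_1, \lambda_2, \ldots$, matching the notation of Algorithm \ref{Alg:Alg-02}) is given below.
\begin{verbatim}
import numpy as np

def detect_num_clusters(lam, lambda_max=20):
    """Eigengap rule: return C = i once lambda_{i+1} exceeds
    mean(lambda_2..lambda_i) + std(lambda_2..lambda_i)."""
    lam = np.asarray(lam, dtype=float)
    upper = min(lambda_max, len(lam) - 1)
    for i in range(3, upper + 1):            # i is 1-based, as in Algorithm 2
        window = lam[1:i]                    # lambda_2, ..., lambda_i
        if lam[i] > window.mean() + window.std():
            return i
    return upper                             # fallback: largest index examined
\end{verbatim}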
\subsection{Clustering in the embedding space} \label{ProposedApproach-4} In spectral clustering, it is not enough to specify only the number of clusters $C$; the number of dimensions of the embedding space is also needed. The original algorithm \cite{RN233} states that for $C$ clusters, $k$-means should operate in the space of the top $C$ eigenvectors. In practice, the $C$ clusters could be detected in a space with fewer than $C$ eigenvectors. For example, in Fig.\ \ref{Fig:Fig-05} the original data points form three clusters, and by plotting them using the top two eigenvectors, it is clear that the clusters are separated and detectable via $k$-means. Therefore, it is worth checking how $k$-means would perform in an embedding space with fewer than $C$ eigenvectors. \begin{figure}[t] \centering \includegraphics[width=\textwidth,height=20cm,keepaspectratio]{figs-03/Fig-05.pdf} \caption{An example where three clusters remain separable when plotted using the top two eigenvectors, i.e., in an embedding space with fewer eigenvectors than clusters (best viewed in color).} \label{Fig:Fig-05} \end{figure} \begin{algorithm}[!h] \DontPrintSemicolon \KwInput{A refined $k$-nn graph $G(V,E)$ with $N$ vertices, \newline number of required eigenvalues $\lambda_{max}$} \KwOutput{$N$ data points grouped into $C$ clusters} Compute graph Laplacian $L_{sym}=D^{-1/2}AD^{-1/2}$ \tcc{The following step has computations in order of $\mathcal{O}(N^3)$} Compute eigenvalues $\lambda$ and eigenvectors $v$ of graph Laplacian $L_{sym}$ \tcc{The following loop has computations in order of $\mathcal{O}(\lambda_{max})$} \For{$i = 3$ to $\lambda_{max}$} { \If{$\lambda_{i+1} > \text{mean}(\lambda_{2 \text{ to } i}) + \text{standard deviation}(\lambda_{2 \text{ to } i})$} { set $C = i$ } } \tcc{The following loop has computations in order of $\mathcal{O}(CdtN)$} \For{$i = 2$ to $C$} { run $k$-means with $k=C$ on $N$ data points using eigenvectors $v_2$ to $v_i$ of $v$. set $l_i$ as the label vector returned by $k$-means. } \tcc{The following loop has computations in order of $\mathcal{O}(CE)$} \For{$i = 2$ to $C$} { let $a_i$ be the variable holding the sum of weights connecting unmatched labels in $l_i$. label all vertices in $V$ using $l_i$. \For{each edge $E(p,q)$ in $E$ where $p,q\in V$} { \If{labels are different $l_i(p)\neq l_i(q)$} { $a_i=a_i+E(p,q)$ } } } Return the lowest $a_i$ and its associated $l_i$ \caption{Clustering in the embedding space with unknown $C$} \label{Alg:Alg-02} \end{algorithm} Starting with the 2$^\text{nd}$ and 3$^\text{rd}$ eigenvectors, we constructed embedding spaces by adding one more eigenvector at a time up to the $C^\text{th}$ eigenvector. On each embedding space, $k$-means operates to detect $C$ clusters. We end up with $C$ membership vectors, each of which represents a cluster assignment of the data points. To choose the right cluster membership, we applied each membership vector to the refined $k$-nn graph and summed the weights of inter-cluster edges. Inter-cluster edges link points with different cluster memberships. The right membership vector is the one that yields the lowest sum of inter-cluster weights. Ideally, this sum would be zero, indicating that no edges link different clusters (see steps 4--6 in Algorithm \ref{Alg:Alg-02}).
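Steps 4--6 of Algorithm \ref{Alg:Alg-02} can be sketched in a few lines of Python (ours, using scikit-learn's KMeans; \texttt{V} holds the eigenvectors of $L_{sym}$ as columns ordered by eigenvalue, and \texttt{W} is the weighted refined $k$-nn graph).
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def cluster_in_embedding_space(V, W, C):
    """Run k-means with k = C in embedding spaces spanned by eigenvectors
    v_2..v_i (i = 2..C) and keep the labeling whose inter-cluster edges in
    the refined graph W carry the least total weight."""
    best_labels, best_cut = None, np.inf
    for i in range(2, C + 1):
        emb = V[:, 1:i]                              # columns for v_2 .. v_i
        labels = KMeans(n_clusters=C, n_init=10).fit_predict(emb)
        inter = labels[:, None] != labels[None, :]   # inter-cluster indicator
        cut = W[inter].sum() / 2.0                   # each undirected edge counted once
        if cut < best_cut:
            best_labels, best_cut = labels, cut
    return best_labels
\end{verbatim}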
\subsection{Integration with SpectralNet} \label{ProposedApproach-5} Spectral clustering using deep neural networks (SpectralNet) was introduced by \citet{RN360}. They highlighted two shortcomings of spectral clustering: 1) the scalability issue with large datasets, where direct computation of eigenvectors could be infeasible, and 2) the generalization issue, i.e., extending the spectral embedding to unseen data, a task commonly known as out-of-sample extension \cite{Bengio2004OOSE,RN227,Coifman2006Geometric}. Our proposed method can be integrated with SpectralNet. SpectralNet consists of three main stages: 1) unsupervised learning of an affinity matrix given a distance measure, via a Siamese network, 2) unsupervised learning of an embedding space by optimizing the spectral clustering objective, and 3) learning cluster assignments by running $k$-means in the embedding space. Our method for filtering the graph can be executed before running the Siamese net. Siamese nets \cite{Hadsell2006Dimensionality,Shaham2018Learning} are trained to learn complex affinity relations that cannot be captured by the Euclidean distance. \citet{RN360} empirically found that using a Siamese net to determine the affinity often improves the quality of clustering. Siamese nets are usually trained on similar (positive) and dissimilar (negative) pairs of data points. For labeled data, positive and negative pairs can be decided from the labels. For example, a pair with the same label is set as a positive pair, while a pair with different labels is set as a negative pair. This is not the case with unlabeled data, where a nearest neighbor graph can be used to determine positive and negative pairs. \citet{RN360} constructed positive pairs for the Siamese net by pairing each point with two of its nearest neighbors. An equal number of negative pairs was randomly chosen from farther neighbors. We used a different approach in our experiments. We let our method detailed in Algorithm \ref{Alg:Alg-01} decide how many neighbors a data point should have as positive pairs. Then, an equal number of farther neighbors is set as negative pairs. Our approach to passing positive and negative pairs to the Siamese net has two advantages. First, the number of positive pairs is not fixed for all data points. This allows points in dense regions to have more positive and negative pairs. Second, we did not use random selection for negative pairs; instead, we assigned farther neighbors as negative pairs. This contributes to the consistency of the method over independent executions.
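For concreteness, this pairing strategy can be sketched as follows (ours; \texttt{order} and \texttt{k\_per\_point} denote the sorted neighbor indices and per-point neighborhood sizes produced by the refinement step sketched earlier, and the choice of the immediately following neighbors as negatives is one deterministic option).
\begin{verbatim}
def siamese_pairs(order, k_per_point):
    """Positive pairs: each point with its k_i refined nearest neighbors.
    Negative pairs: the next k_i farther neighbors, chosen deterministically
    rather than at random."""
    positives, negatives = [], []
    for i, k_i in enumerate(k_per_point):
        for j in order[i, :k_i]:
            positives.append((i, int(j)))
        for j in order[i, k_i:2 * k_i]:   # assumes order holds at least 2 * k_i neighbors
            negatives.append((i, int(j)))
    return positives, negatives
\end{verbatim}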
{ "arxiv_id": "2302.11292", "language": "en", "timestamp": "2023-02-23T02:12:58", "url": "https://arxiv.org/abs/2302.11292", "yymm": "2302" }
\section{Introduction} \label{sec:introduction} Cache systems are vital to reduce communication overhead on the Internet. However, it is not trivial to provide cache systems over encrypted communications because a cache server (CS) must verify whether it has a copy of a particular encrypted content, although information about the content is not revealed due to encryption. Thus, due to the increasing use of encrypted communication, such as Transport Layer Security (TLS), encrypted cache systems are a promising approach for providing communication efficiency and privacy. Leguay et al.~\cite{[LeguayPQS17]} proposed an encrypted cache system called CryptoCache. Although the contents are encrypted, CryptoCache allows users requesting the same content to be linked. Thus, Leguay et al. proposed an extension that prevents this linkability by employing a public key encryption (PKE) scheme. Emura et al.~\cite{EmuraMNY20,EmuraMNY22} further extended CryptoCache by proposing an encrypted cache system called Cache-22. The Cache-22 system not only provides unlinkability without employing PKE, but also presents a formal security definition in a cryptographic manner. The Cache-22 system is briefly explained as follows and illustrated in Figure~\ref{cache-22} in Section~\ref{subsec_cache-22}. It is assumed that all communications are protected by TLS. A tag is assigned to each content, and it is assumed that no information about the content is revealed by the tag (e.g., it can be generated using hash-based message authentication code (HMAC), because it is a pseudorandom function~\cite{Bellare15}). The service provider (SP) encrypts content and stores the ciphertext and corresponding tag on a CS. When a user requests the content, the user sends a request to the SP. Then, the SP sends the corresponding tag back to the user. The user then sends the tag to the CS. If the tag is stored on the CS, the CS sends the corresponding ciphertext to the user and the user information to the SP. Finally, the SP sends the corresponding decryption key to the user. Because the size of the tag is much smaller than the size of the content (ciphertext), the Cache-22 system makes it possible to significantly reduce communications between a CS and the SP. Because the Cache-22 system can employ any cipher suite, seven cipher suites, including National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) candidates~\cite{BIKE,BosDKLLSSSS18,NTRU,SABER} are employed. \medskip\noindent\textbf{Adding Access Control to Cache-22}: In the final procedure of the Cache-22 system, the SP sends the corresponding decryption key to the user. Emura et al.~\cite{EmuraMNY20,EmuraMNY22} claimed that this procedure allows the SP to control which users can access the contents. For example, if a user has downloaded ciphertexts of several episodes of a show, the SP can allow some of the contents (e.g., the first episode) to be available for free while requiring a fee for the remaining contents. However, the authors did not provide a concrete access control method. A naive solution is to add an authentication protocol, such as classical ID/password authentication, before the SP sends the corresponding decryption key to the user. This method is effective; however, it is not scalable. That is, the SP must send the decryption key individually for $N$ users, which leads to a communication cost of $O(N)$. \subsection{Our Contribution} In this paper, we add a scalable access control protocol to the Cache-22 system. 
Specifically, we propose time-dependent access control, which requires a communication cost of $O(\log T_{\sf max})$ using the Naor--Naor--Lotspiech (NNL) framework~\cite{NaorNL01} where $T_{\sf max}$ is the maximum time period. In the original NNL framework, each user is assigned to a leaf node of a binary tree which provides broadcast encryption in which the encryptor specifies who can decrypt the ciphertext. In our proposed protocol, each time period is assigned to a leaf node (multiple users are assigned to the same node if they have the same access rights). Briefly, let ${\sf TI}=[1,T_{\sf max}]$ be a time interval where $T_{\sf max}\in\mathbb{N}$ and assume that $T_{\sf max}=2^m$ for some $m\in\mathbb{N}$. Then, each time $t\in{\sf TI}$ is assigned to a leaf node of a binary tree that has $2^m$ leaves. This time period indicates how long the content is available. For example, $t$ can represent a day, a week, a month, and so on. The SP encrypts each content according to the time it is available. This NNL-based time-dependent control technique has been employed in other cryptographic primitives, such as attribute-based encryption for range attributes~\cite{AttrapadungHOOW16} and group signatures with time-bound keys~\cite{EmuraHI20}. However, to the best of our knowledge, no encrypted cache system with this technique has been proposed so far. \medskip\noindent\textbf{Toy Example}. Figure~\ref{tree1} presents an example when $T_{\sf max}=8$. A key is assigned to each node (from $k_1$ to $k_{15}$), and each user is also assigned to a leaf according to the corresponding access rights. In this example, there are four users, $u_1$, $u_2$, $u_3$, and $u_4$. Here, $u_2$ and $u_3$ are assigned to the same leaf indicating that they have the same access rights. Each user has keys on the path, namely, from their own leaf node to the root ($u_2$ and $u_3 $ have keys $k_1$, $k_2$, $k_4$, and $k_9$). At time $t_1$, the SP encrypts the content \lq\lq Episode 1" using $k_1$. Now, all users can obtain the content because they have $k_1$. At time $t_2$, the SP encrypts the content \lq\lq Episode 2" using $k_3$, $k_5$, and $k_9$ (see Figure~\ref{tree2}); that is, there are three ciphertexts. In this case, $u_1$ has no rights to obtain the content whereas the other users still have at least one decryption key. Similarly, $k_3$ and $k_5$ are used for encryption at time $t_3$, and $k_3$ and $k_{11}$ are used for encryption at time $t_4$. In this method, the number of ciphertexts is $O(\log T_{\sf max})$, and each user manages $O(\log T_{\sf max})$-size decryption keys. One shortcoming of this construction based on the NNL framework compared to the original Cache-22 is that each user must manage the decryption keys; that is, the protocol is stateful. Nevertheless, this construction provides time-dependent access control with scalability at the expense of key management. \begin{figure*}[t] \centering \includegraphics[keepaspectratio, scale=0.5]{./cache-22.eps} \caption{Cache-22 System~\cite{EmuraMNY20,EmuraMNY22}}\label{cache-22} \end{figure*} \section{Preliminaries} \subsection{Cache-22 System} \label{subsec_cache-22} In this section, we introduce the Cache-22 system. A tag is assigned to each content, and it is assumed that no information about the content is revealed by the tag. The SP encrypts the content and stores a tag and ciphertext pair on the CS. 
In the implementation proposed by~\cite{EmuraMNY20,EmuraMNY22}, there are multiple CSs due to the color-based cooperative cache system~\cite{NakajimaYWY17}. For the sake of simplicity, we consider the case of a single CS. We assume that all communications between a user, the CS, and the SP are encrypted with TLS. Let $({\sf Enc},{\sf Dec})$ be an IND-CPA secure symmetric key encryption (SKE) scheme, where for a key $k\in\mathcal{K}$ and a message $M\in\mathcal{M}$, ${\sf Dec}_k(C)=M$ holds, where $C\leftarrow{\sf Enc}_{k}(M)$, $\mathcal{K}$ is the key space, and $\mathcal{M}$ is the message space. Here, IND-CPA stands for indistinguishability under chosen-plaintext attack. The upper-order 128 bits of the tag are used as the initialization vector ($IV$) for {\sf AES-GCM}~\cite{IwataS17}. Then, the $IV$ is not reused for other encryptions since the tag is pseudorandom. Let ${\sf CacheTbl}$ be the cache table managed by the CS, which has the structure ${\sf CacheTbl}=\{({\sf tag}_i,C_i)\}$ and is initialized as $\emptyset$. Although we simply denote ${\sf CacheTbl}=\{({\sf tag}_i,C_i)\}$ here, we can employ any cache system. We also assume that a user knows the content name ${\sf c\_name}$, and that the SP can decide the corresponding content ${\sf content}_i\in\mathcal{M}$ from ${\sf c\_name}$. The flow of the Cache-22 system is illustrated in Figure~\ref{cache-22}, and the formal description of the system is provided as follows. The Cache-22 system consists of $({\sf GenTable},\allowbreak {\sf ContentRequest},\allowbreak {\sf SendContent},\allowbreak {\sf CacheRequest},\allowbreak {\sf SendKey},\allowbreak {\sf ObtainContent})$. It should be noted that the SP sends the corresponding decryption key to a user via the ${\sf SendKey}$ algorithm. Because the SP needs to know the destination, each user sends their own identity $ID$ to the CS in the ${\sf SendContent}$ protocol. \begin{itemize} \item ${\sf GenTable}(1^\kappa,1^\lambda,{\sf SetOfContents})$: The table generation algorithm (run by the SP) takes as input security parameters $\kappa,\lambda\in\mathbb{N}$ and a set of contents ${\sf SetOfContents}\allowbreak=\{{\sf content}_i\}_{i=1}^n$. Randomly choose $k_{c,i}\leftarrow \mathcal{K}$ and compute ${\sf tag}_i\leftarrow {\sf HMAC}_{k_{\sf hmac}}({\sf content}_i)$ for each ${\sf content}_i\in\mathcal{M}$. Retrieve $IV$ from ${\sf tag}_i$, and encrypt ${\sf content}_i$ such that $C_i\leftarrow {\sf Enc}_{k_{c,i}}(IV,{\sf content}_i)$. Output a table ${\sf ConTbl}=\{({\sf content}_i,\allowbreak{\sf tag}_i,C_i,k_{c,i})\}$. \smallskip \item ${\sf ContentRequest}({\sf User}({\sf c\_name},ID),{\sf SP}({\sf ConTbl}))$: The ${\sf ContentRequest}$ protocol between a user and the SP takes as input a content name ${\sf c\_name}$ and the user identity $ID$ from the user, and takes as input ${\sf ConTbl}$ from the SP. \begin{enumerate} \item The user sends $({\sf c\_name},\allowbreak ID)$ to the SP via a secure channel. \item The SP decides ${\sf content}_i$ from ${\sf c\_name}$, and retrieves the corresponding $({\sf content}_i,{\sf tag}_i,C_i,k_{c,i})$ from ${\sf ConTbl}$. \item The SP sends ${\sf tag}_i$ to the user via the secure channel. \end{enumerate} \smallskip \item ${\sf SendContent}({\sf User}({\sf tag}_i,ID),{\sf CS}({\sf CacheTbl}))$: The content sending protocol between a user and the CS takes as input $({\sf tag}_i,ID)$ from the user, and takes as input ${\sf CacheTbl}$ from the CS. \begin{enumerate} \item The user sends a request $({\sf tag}_i,ID)$ to the CS via a secure channel.
\item The CS checks whether ${\sf tag}_i$ is stored in ${\sf CacheTbl}$. \begin{itemize} \item If yes, the CS retrieves $({\sf tag}_i,C_i)$ from ${\sf CacheTbl}$ by using ${\sf tag}_i$, sends $C_i$ to the user via the secure channel, and sends $({\sf tag}_i,ID)$ to the SP via the secure channel. \item If no, the CS runs the ${\sf CacheRequest}$ protocol with the SP (which is defined later), obtains $C_i$, stores $({\sf tag}_i,C_i)$ in ${\sf CacheTbl}$, and sends $C_i$ to the user via the secure channel. \end{itemize} \end{enumerate} \smallskip \item ${\sf CacheRequest}({\sf CS}({\sf tag}_i,ID),{\sf SP}({\sf ConTbl}))$: The cache request protocol between the CS and the SP takes as input $({\sf tag}_i,ID)$ from the CS, and takes as input ${\sf ConTbl}$ from the SP. \begin{enumerate} \item The CS sends $({\sf tag}_i,ID)$ to the SP via the secure channel. \item The SP retrieves $({\sf content}_i,{\sf tag}_i,C_i,k_{c,i})$ from ${\sf ConTbl}$ by using ${\sf tag}_i$, and sends $C_i$ to the CS via the secure channel. \end{enumerate} \smallskip \item ${\sf SendKey}(ID,k_{c,i})$: The key sending algorithm run by the SP takes as input $(ID,k_{c,i})$. Send $k_{c,i}$ to the user whose identity is $ID$ via the secure channel. \smallskip \item ${\sf ObtainContent}({\sf tag}_i,C_i,k_{c,i})$: The content obtaining algorithm run by a user takes as input $({\sf tag}_i, C_i,k_{c,i})$. Retrieve $IV$ from ${\sf tag}_i$. Output ${\sf content}_i\leftarrow {\sf Dec}_{k_{c,i}}(IV,\allowbreak C_i)$. \end{itemize} As mentioned in the introduction, there is room for adding an access control system before running the ${\sf SendKey}$ algorithm. \begin{figure*}[t] \centering \includegraphics[keepaspectratio, scale=0.5]{./cache-22-TDAC.eps} \caption{Cache-22 System with Time-Dependent Access Control}\label{cache-22-TDAC} \end{figure*} \subsection{NNL Framework} In this section, we introduce the NNL framework, which is known as the complete subtree method. Let ${\sf BT}$ be a binary tree with $N$ leaves. For a leaf node $i$, let ${\sf Path}(i)$ be the set of nodes from the leaf to the root. Let ${\sf RSet}$ be the set of revoked leaves. For a non-leaf node $x$, let $x_{\sf left}$ be the left child of $x$ and $x_{\sf right}$ be the right child of $x$. \begin{enumerate} \item Initialize $X,Y\leftarrow\emptyset$. \item For all $i\in {\sf RSet}$, add ${\sf Path}(i)$ to $X$. \item For all $x\in X$, if $x_{\sf left}\not\in X$ then add $x_{\sf left}$ to $Y$. If $x_{\sf right}\not\in X$ then add $x_{\sf right}$ to $Y$. \item If $|{\sf RSet}|=0$ then add the root node to $Y$. \item Output $Y$. \end{enumerate} \noindent We denote $Y\leftarrow {\sf CompSubTree}({\sf BT},{\sf RSet})$. In the proposed time-dependent access control, a time period is assigned to a leaf, whereas each user is assigned to a leaf node in the original complete subtree method. Moreover, each leaf is sequentially revoked from the leftmost node. Then, the size of $Y$ is estimated as $|Y|=O(\log N)$, where $N:=T_{\sf max}$ in our protocol, which is scalable regardless of the number of revoked users in the system.
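Although the prototype implementation described later is written in Go, the cover computation itself is compact. The following Python sketch of ${\sf CompSubTree}$ is ours and only illustrative: it numbers the $2^{m+1}-1$ nodes of the binary tree in heap order (the root is node $1$, the children of node $x$ are $2x$ and $2x+1$, and leaf $t$ is node $2^m+t-1$), an indexing convention we introduce here.
\begin{verbatim}
def comp_sub_tree(m, revoked_leaves):
    """Complete subtree cover over a binary tree with 2**m leaves.
    Returns the set Y of nodes covering exactly the non-revoked leaves."""
    num_nodes = 2 ** (m + 1) - 1
    if not revoked_leaves:
        return {1}                      # nobody is revoked: the root covers everyone
    X = set()                           # union of Path(i) over revoked leaves i
    for t in revoked_leaves:
        node = 2 ** m + t - 1
        while node >= 1:
            X.add(node)
            node //= 2
    Y = set()
    for x in X:
        left, right = 2 * x, 2 * x + 1
        if left <= num_nodes:           # x is a non-leaf node
            if left not in X:
                Y.add(left)
            if right not in X:
                Y.add(right)
    return Y

# Toy example from the introduction (T_max = 8, i.e., m = 3):
# at t_curr = 2 the leaf for t = 1 is revoked, and the cover is {3, 5, 9},
# matching the encryption of "Episode 2" under k_3, k_5, and k_9.
print(comp_sub_tree(3, [1]))
\end{verbatim}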
\section{Cache System with Time-Dependent Access Control} \label{sec:tdac} In this section, we present our proposed protocol with time-dependent access control. Each content is encrypted with a time period $t$, and if a user is assigned to a time period $t^\prime$, then that user is allowed to obtain contents encrypted with $t$, where $t\leq t^\prime$. For the sake of simplicity, we assume that the access rights of all users are determined in advance. As a remark, we could assume that all contents are encrypted and that the SP stores all ciphertexts on the CS regardless of whether they are requested by a user or not. Then, a request sent by a user would always be successful (a cache hit). However, this situation is unrealistic because the storage size of the CS would drastically increase. Thus, the SP adds new contents after receiving a user request. Let $T_{\sf max}$ be the maximum time period where $T_{\sf max}\in\mathbb{N}$ and assume that $T_{\sf max}=2^m$ for some $m\in\mathbb{N}$. Each time period $t\in {\sf TI}=[1,T_{\sf max}]$ is assigned to a leaf node. If a user is assigned to a time period $t$, ${\sf Path}(t)$ denotes the set of nodes from the leaf node (which is assigned to $t$) to the root node. Let ${\sf CacheTbl}$ be initialized as $\emptyset$. In the original Cache-22 system, each tag is generated from the corresponding content such that ${\sf tag}_i\leftarrow {\sf HMAC}_{k_{\sf hmac}}({\sf content}_i)$. In our proposed system, each content is encrypted multiple times due to the NNL framework. To clarify which ciphertext should be sent to a user, each tag is generated from both the corresponding content and the corresponding index (determined by the NNL framework) such that ${\sf tag}_{i,j}\leftarrow {\sf HMAC}_{k_{\sf hmac}}({\sf content}_i||j)$. The proposed Cache-22 system with time-dependent access control consists of $({\sf KeyGen},\allowbreak {\sf SendKey},{\sf GenTable},\allowbreak {\sf ContentRequest},\allowbreak {\sf CacheRequest}, \allowbreak {\sf SendContent},\allowbreak {\sf ObtainContent})$ as illustrated in Figure~\ref{cache-22-TDAC}. Unlike in the original Cache-22 system, in the proposed system all keys are generated in advance, i.e., they are independent of the contents. Thus, we add the ${\sf KeyGen}$ algorithm. Moreover, for a user with identity $ID$, the SP sends keys in accordance with the user's access rights. Thus, we run the ${\sf SendKey}$ algorithm before the ${\sf GenTable}$ algorithm. \begin{itemize} \item ${\sf KeyGen}(1^m)$: The key generation algorithm takes as input a security parameter $m\in\mathbb{N}$. For $j=1,2,\ldots,2^{m+1}-1$, randomly choose $k_{c,j}\leftarrow \mathcal{K}$ and output $\{k_{c,j}\}_{j=1}^{2^{m+1}-1}$. \item ${\sf SendKey}(ID,t,\allowbreak \{k_{c,j}\}_{j=1}^{2^{m+1}-1})$: The key sending algorithm run by the SP takes as input $(ID,t,\allowbreak \{k_{c,j}\}_{j=1}^{2^{m+1}-1})$. For all $j\in {\sf Path}(t)$, send $k_{c,j}$ to the user with identity $ID$ via a secure channel. \item ${\sf GenTable}(1^\kappa,1^\lambda,\{k_{c,j}\}_{j=1}^{2^{m+1}-1},{\sf SetOfContents})$: The table generation algorithm (run by the SP) takes as input security parameters $\kappa,\lambda\in\mathbb{N}$, a set of keys $\{k_{c,j}\}_{j=1}^{2^{m+1}-1}$, and a set of contents ${\sf SetOfContents}\allowbreak=\{{\sf content}_i\}_{i=1}^n$. For $i=1,2,\ldots,n$, let $t_i\in [1,T_{\sf max}]$ be the time period of ${\sf content}_i$. For all $j\in {\sf Path}(t_i)$, compute ${\sf tag}_{i,j}\leftarrow {\sf HMAC}_{k_{\sf hmac}}({\sf content}_i||j)$, retrieve $IV_j$ from ${\sf tag}_{i,j}$, and encrypt ${\sf content}_i$ such that $C_{i,j}\leftarrow {\sf Enc}_{k_{c,j}}(IV_j,{\sf content}_i)$. Output a table ${\sf ConTbl}=\{({\sf content}_i,\allowbreak\{({\sf tag}_{i,j},C_{i,j},k_{c,j})\}_{j\in {\sf Path}(t_i)})\}$.
\item ${\sf ContentRequest}({\sf User}({\sf c\_name},t,t_{\sf curr}),{\sf SP}({\sf ConTbl}))$: The ${\sf ContentRequest}$ protocol between a user and the SP takes as input a content name ${\sf c\_name}$, the time period of the user $t$, and the current time period $t_{\sf curr}$ from the user, and takes as input ${\sf ConTbl}$ from the SP. \begin{table*}[t] \caption{Libraries included in the modules.} \label{tab:Library} \centering \begin{tabular}{lll}\\\hline\hline & Version & Description \\\hline\hline Go & go1.18.6-devel-cf & Custom Go language~\cite{GowithCIRCL}\\\hline CIRCL & v1.2.0 & Collection of PQC primitives\\\hline labstack/echo & v4.9.0 & WebAPI Framework \\\hline syndtr/goleveldb & v1.0.0 & Non-volatile key-value store to configure LRU cache\\\hline math/rand & Standard & Zipf function to generate content requests by user \\\hline\hline \end{tabular} \end{table*} \begin{table*}[t] \caption{Host configuration} \label{tab:HostConfiguration} \centering \centering \begin{tabular}{lrl}\hline\hline & Specifications & Description \\\hline\hline Instance type & c5.4xlarge & up to 0.856 [USD/hour] \\\hline vCPU [Core] & 16 & Intel Xeon Platinum 8275CL @ 3.00GHz \\\hline Memory [GiB] & 32 & \\\hline Network [Gbps] & up to 10 & \\\hline Operating system & Amazon Linux 2 & Kernel 5.10.135-122.509 \\\hline Number of hosts & 3 & for CS, SP, and User \\\hline\hline \end{tabular} \end{table*} \begin{enumerate} \item The user runs $Y\leftarrow {\sf CompSubTree}({\sf BT},[1,t_{\sf curr}-1])$ where ${\sf BT}$ is a binary tree with $2^m$ leaves. If $Y \cap {\sf Path}(t)=\emptyset$, then abort. \item The user chooses $j\in Y \cap {\sf Path}(t)$. \item The user sends $({\sf c\_name},j)$ to the SP via a secure channel. \item The SP decides ${\sf content}_i$ from ${\sf c\_name}$ and retrieves the corresponding $({\sf tag}_{i,j},C_{i,j})$ from ${\sf ConTbl}$ where ${\sf tag}_{i,j}\leftarrow {\sf HMAC}_{k_{\sf hmac}}({\sf content}_i||j)$. \item The SP sends ${\sf tag}_{i,j}$ to the user via the secure channel. If there is no such entry, then return error. \end{enumerate} \smallskip \item ${\sf SendContent}({\sf User}({\sf tag}_{i,j}),{\sf CS}({\sf CacheTbl}))$: The content sending protocol between a user and the CS takes as input ${\sf tag}_{i,j}$ from the user, and takes as input ${\sf CacheTbl}$ from the CS. \begin{enumerate} \item The user sends a request ${\sf tag}_{i,j}$ to the CS via a secure channel. \item The CS checks whether ${\sf tag}_{i,j}$ is stored on ${\sf CacheTbl}$. \begin{itemize} \item If yes, the CS retrieves $({\sf tag}_{i,j},C_{i,j})$ from ${\sf CacheTbl}$ by using ${\sf tag}_{i,j}$, sends $C_{i,j}$ to the user via the secure channel. \item If no, the CS runs the ${\sf CacheRequest}$ protocol with the SP (which is defined later), obtains $C_{i,j}$, stores $({\sf tag}_{i,j},C_{i,j})$ to ${\sf CacheTbl}$, and sends $C_{i,j}$ to the user via the secure channel. \end{itemize} \end{enumerate} \smallskip \item ${\sf CacheRequest}({\sf CS}({\sf tag}_{i,j}),{\sf SP}({\sf ConTbl}))$: The cache request protocol between the CS and the SP takes as input ${\sf tag}_{i,j}$ from the CS, and takes as input ${\sf ConTbl}$ from the SP. \begin{enumerate} \item The CS sends ${\sf tag}_{i,j}$ to the SP via the secure channel. \item The SP retrieves the corresponding $({\sf tag}_{i,j},C_{i,j})$ from ${\sf ConTbl}$ by using ${\sf tag}_{i,j}$, and sends $C_{i,j}$ to the CS via the secure channel. 
\end{enumerate} \smallskip \item ${\sf ObtainContent}({\sf tag}_{i,j},C_{i,j},k_{c,j})$: The content obtaining algorithm run by a user takes as input $({\sf tag}_{i,j},C_{i,j},k_{c,j})$. Retrieve $IV_j$ from ${\sf tag}_{i,j}$. Output ${\sf content}_i\leftarrow {\sf Dec}_{k_{c,j}}(IV_j,\allowbreak C_{i,j})$. \end{itemize} As a side effect, users do not need to send their identity to the CS in the proposed system. In contrast, in the original Cache-22 system, users must send their identity to the CS because the SP must send the corresponding decryption key to the user, and the CS thus needs to forward the identity to the SP to provide the destination. The proposed system can thus help hide the user's identity from the CS and preserve privacy. \section{Implementation and Results} \subsection{Cipher Suite} First, we fix the underlying cipher suite as \begin{itemize} \item \small{\texttt{TLS\_Kyber\_ECDSA\_WITH\_AES\_256\_GCM\_SHA256}} \end{itemize} \noindent We employed Kyber (Crystals-Kyber)~\cite{BosDKLLSSSS18}, which was selected for NIST PQC standardization in July 2022. Kyber is a lattice-based scheme and is secure under the module learning with errors (MLWE) assumption. In our implementation, we employed Kyber512 to provide 128-bit security. Specifically, we installed the X25519Kyber512Draft00 key agreement in our experiment. As in the original Cache-22 system, the proposed system can employ other PQC schemes such as BIKE~\cite{BIKE}, NTRU~\cite{NTRU}, and SABER~\cite{SABER}. To make the underlying SKE scheme and hash function secure against Grover's algorithm~\cite{Grover98}, we doubled the key length and employed AES256 (specifically, AES-GCM-256) and SHA256. As a remark, as in the original Cache-22 implementation, we did not consider post-quantum authentication.%
\footnote{We refer to the comment by Alkim et al.~\cite{AlkimDPS16}: \lq\lq \emph{the protection of stored transcripts against future decryption using quantum computers is much more urgent than post-quantum authentication. Authenticity will most likely be achievable in the foreseeable future using proven pre-quantum signatures and attacks on the signature will not compromise previous communication".}} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{exp_condition.drawio.eps} \caption{Experimental Flow}\label{fig:experimental} \end{figure*} \begin{table*}[t] \caption{Experimental Setup} \label{tab:Settingup} \centering \begin{tabular}{lc}\hline\hline Number of SPs & 1 \\\hline Number of CSs & 1 \\\hline Number of users & \begin{tabular}{c} 2,048\\ (We uniformly assigned users to\\ each effective leaf node defined by\\{\sf CompSubTree}) \end{tabular}\\\hline Number of requests in each $t$ & $2^{17} = 131,072$ (each user requests 64 contents)\\\hline Number of contents & 65,535 \\\hline Cache capacity in CS & 4,096, 8,192 and 16,384 \\ (Maximum number of stored contents) & \\\hline Size of each content [MB] & 1 \\\hline Popularity of content & \begin{tabular}{c} Zipf function in Go standard library \textbf{math/rand}\\ with arguments $s=3,v=3,000$.\\ The arguments are determined so that\\ the cache hit ratio becomes $75\%$\\ when the cache capacity is $4,096$.
\end{tabular}\\\hline $T_{\sf max}$ & 16 (depth of the binary tree is 5)\\\hline\hline \end{tabular} \end{table*} \subsection{Implementing Components} To evaluate the cache system with the mechanism described in Section~\ref{sec:tdac}, we experimentally implemented a cache system that provides time-dependent access control. The cache system is an extended version of the Cache-22 system that enables the encryption and decryption of contents with multiple keys. Three types of program code sets were implemented, namely, SP, CS, and User, which correspond to the components in Figure~\ref{cache-22-TDAC}. All modules in these components were written in the Go language using several libraries, as described in Table \ref{tab:Library}. We employed a custom Go language~\cite{GowithCIRCL} that used CIRCL~\cite{circl} patched by Cloudflare to introduce PQC primitives in addition to conventional TLS algorithms such as ECDSA and RSA. We implemented the SP as a web server that received requests from users to obtain ${\sf tag}_{i,j}$ via $({\sf c\_name}, j)$, as illustrated in Figure~\ref{cache-22-TDAC}. We also implemented the CS as a web server that forwarded user requests to the SP or returned cached encrypted contents to users according to ${\sf tag}_{i,j}$. User was a simulation program that emulated many users obtaining encrypted contents from the CS and decrypting them when they had the corresponding decryption keys. Although users send requests for various contents, the popularity follows a characteristic trend, such as Zipf's law or a gamma distribution, especially in the case of video-on-demand services~\cite{Cheng2013}. Although all components were parameterized to adapt to various situations, we set up the experimental conditions as presented in Table~\ref{tab:Settingup} for a reasonable discussion. As the underlying cache system, we employed the Least Recently Used (LRU) cache policy. That is, ciphertexts generated in the past were unavailable at the current time and were erased from the cache table ${\sf CacheTbl}$. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{cachehitratio.eps} \caption{Time Series of Cache Hit Ratio for Three Cache Capacities in CS} \label{fig:cachehitratio} \end{figure*} We set up several virtual machines on Amazon Elastic Compute Cloud (EC2) with a uniform configuration, as displayed in Table~\ref{tab:HostConfiguration}. Each host ran SP and CS processes. Figure~\ref{fig:experimental} indicates which nodes were activated at each time period. Many user processes were also run on EC2 with the same configuration to emulate multiple users sending requests to obtain contents from the CS according to the sequence illustrated in Figure~\ref{fig:experimental}. Each green node indicates that the key associated with that node was used for encryption. We set $T_{\sf max}=16$. First, we check that no revoked user can obtain a content when users communicate directly with the SP (i.e., no cache system is introduced). We emphasize that a user being revoked means that the time period corresponding to the user's access rights has passed. Next, we check that, by introducing the CS, we can reduce communications between the CS and the SP without compromising time-dependent access control. \subsection{Change in Network Traffic by Introducing Time-Dependent Access Control} A cache system helps reduce traffic in the more upstream network, such as that between the CS and the SP.
There were two evaluation perspectives: (i) the reduction in network traffic due to the cache system and (ii) the increase in network traffic due to the time-dependent access control protocol. Figure~\ref{fig:cachehitratio} presents the time series of the cache hit ratio for each cache capacity. The three lines demonstrate that the cache capacity explicitly contributed to the reduction in network traffic. The condition of the popularity distribution in the experiment is presented in Table~\ref{tab:Settingup}. At $t_{1}$, all users had $k_1$ (which was assigned to the root node) and could obtain all contents encrypted by $k_1$. This signifies that a user could always decrypt a ciphertext that was stored due to a previous request by another user. The cache hit ratio in this situation was the same as that in a cache system without time-dependent access control. The cache hit ratio was greater than $70\%$ in all cases, which demonstrates that the network traffic was reduced due to the cache system. The reduction in network traffic was approximately $50\%$ when the cache capacity was $4,096$ MB (since the size of each content is 1 MB in our experiment), which contained $6.25\%$ of all contents. It could be increased to over $70\%$ when the cache capacity was increased, such as to $8,192$ MB and $16,384$ MB, which contained $12.5\%$ and $25\%$ of all contents, respectively. This indicates that the network traffic can be further reduced even when time-dependent access control is employed. Next, we discuss how the cache capacity affects the hit ratio when employing time-dependent access control. Due to time-dependent access control, multiple ciphertexts are generated for every content with different encryption keys. The number of keys assigned to each content increases the number of duplicated contents. This situation may reduce the cache hit ratio because a user may not be able to decrypt a ciphertext that was stored due to a previous request by another user. The cache hit ratio increases when the probability that the corresponding ciphertext is stored on the CS increases. Thus, when a relatively large number of keys are used for encryption, a low cache capacity of the CS may cause an increase in the cache miss rate, which increases the amount of traffic. The cache capacity thus determines how effective the cache remains when time-dependent access control is employed. For example, one key is used at $t_1$, $t_9$, $t_{13}$, $t_{15}$, and $t_{16}$ (i.e., there is one green node in Figure~\ref{fig:experimental} at these time periods). Then, the differences in the hit ratio between the three cache capacities are relatively small. In contrast, four keys are used at $t_2$, and then the differences are relatively large. This prompts us to select $T_{\sf max}$ carefully because it determines the depth of the binary tree and thus the number of keys used for encryption, although a larger $T_{\sf max}$ provides more fine-grained access control. \section{Conclusion} In this paper, we add a time-dependent access control protocol to the Cache-22 system and provide experimental results. Due to the proposed time-dependent access control, the number of duplicated contents is higher than that in the original Cache-22 system. That is, the proposed protocol is not only effective for controlling access rights, but it also affects the relationship between the cache capacity and network traffic.
The prototype implementation of the original Cache-22 system considered multiple CSs and employed the color-based cooperative cache system~\cite{NakajimaYWY17}, which associates servers and caches through a color tag. In the Cache-22 system with time-dependent access control, a key associated with a higher node (i.e., a node closer to the root) is assigned to more users than a key associated with a lower node (i.e., a node closer to a leaf). That is, it should be effective to introduce multiple CSs that store ciphertexts encrypted by keys associated with a higher node. Confirming the effectiveness of introducing multiple CSs is left for future work. \medskip \noindent\textbf{Acknowledgment}: We thank the reviewers of ICISSP 2023 for their invaluable comments and suggestions. This work was supported by JSPS KAKENHI Grant Number JP21K11897.
{ "arxiv_id": "2302.11294", "language": "en", "timestamp": "2023-02-23T02:12:58", "url": "https://arxiv.org/abs/2302.11294", "yymm": "2302" }
\section{Introduction} \label{sec:1} Variational Autoencoder (VAE) \cite{Kingma2013AutoEncodingVB} and Generative Adversarial Networks (GAN) \cite{Goodfellow2014GenerativeAN} are generative models that are used to estimate the underlying distribution of a given dataset. To avoid the curse of dimensionality, VAE and GAN commonly introduce a low-dimensional latent space on which a conditional generative model is defined. By minimizing an information divergence between the original data and its generated data, the generative models are learned to produce synthetic data similar to the original one. Accordingly, VAE and GAN have been applied in various applications, such as generating realistic images, texts, and synthetic tabular data for privacy preservation purposes \cite{Karras2018ASG, wang2019topic, Xu2019ModelingTD, Zhao2021CTABGANET}. However, the difference in the strength of the assumption about the generative distribution leads to significant contrasts between the generation performances of the VAE and the GAN. In the GAN framework, the adversarial loss enables direct minimization of the Jensen-Shannon divergence between the ground-truth density function and the generative distribution under no distributional assumption. Roughly speaking, the GAN employs a nonparametric model as its conditional generative model defined on the latent space. On the contrary, in the VAE framework, the Gaussianity assumption has been favored \cite{lucas2019don}. This is because Gaussianity gives us three advantages: 1) the reconstruction loss can be interpreted as the $L_2$ loss, which is one of the most popular losses in optimization theory, 2) generating a new sample is computationally straightforward, and 3) the KL-divergence is computed in a simple closed form. However, these benefits come at the price of the distributional capacity of the generative model, in that the generative model of the VAE is constrained to the form of a marginalization of the product of two Gaussian distributions. Here, the distributional capacity means the expressive power of the distributional family. This restricted distributional capacity has been a critical limitation \cite{Burda2015ImportanceWA, Kingma2016ImprovedVI} and leads to a heavy parameterization of the decoder mean-vector to approximate complex underlying distributions. To increase the distributional capacity in the VAE framework, \cite{Xu2019ModelingTD, Zhao2021CTABGANET} introduce multi-modality into the distributional assumption of the decoder, which is known as the mode-specific normalization technique. Although the mixture-Gaussian decoder modeling of \cite{Xu2019ModelingTD} allows handling more complex distributions of the observed dataset while preserving all of the advantages of Gaussianity, we numerically find that the Gaussian mixture is not sufficient to capture the underlying distribution. Our main contribution is that, beyond Gaussianity, we propose a novel VAE learning method that directly estimates the conditional cumulative distribution function (CDF). This implies that we impose a nonparametric distributional assumption on the VAE model. We call this approach distributional learning of the VAE, which is enabled by estimating an infinite number of conditional quantiles. By adopting the continuous ranked probability score (CRPS), a loss used to estimate the CDF, the objective of distributional learning becomes computationally tractable \cite{Gneiting2007StrictlyPS, Matheson1976ScoringRF}.
Therefore, in our proposed distributional learning framework, 1) the reconstruction loss can be interpreted as the CRPS loss, which is a well-known \textit{proper scoring rule} \cite{Gneiting2007StrictlyPS}, 2) generating a new sample is still computationally straightforward due to inverse transform sampling, and 3) the KL-divergence is still computed in a simple closed form. To show the effectiveness of our proposed model in capturing the underlying distribution of the dataset, we evaluate our model for synthetic data generation with real tabular datasets. \section{Related Work} \label{sec:2} \textbf{Various decoder modeling.} Decoder modeling has primarily focused on increasing distributional capacity while maintaining the simple calculation of the KL-divergence. \cite{Takahashi2018StudenttVA, Akrami2020AddressingVS} model their decoder distributions as Student-$t$ and asymmetric Laplace distributions, respectively. These assumptions mitigate the \textit{zero-variance problem}, in which model training becomes unstable if the estimated variance of the decoder shrinks to zero in the Gaussian VAE. In the image domain, the Gaussian assumption hinders the reconstruction of images and fails to capture the properties of human perception \cite{Larsen2015AutoencodingBP}. Therefore, \cite{Larsen2015AutoencodingBP, Rosca2017VariationalAF, Munjal2019ImplicitDI} replace the reconstruction loss with an adversarial loss, and the decoder is trained without a parametric distributional assumption. However, adopting an adversarial loss generally induces unstable model training. \textbf{Synthetic data generation.} The GAN framework is widely adopted in the synthetic data generation task since it enables handling columns of tabular datasets that are usually non-Gaussian \cite{Choi2017GeneratingMD, Park2018DataSB, Xu2019ModelingTD, Zhao2021CTABGANET}. In particular, \cite{Xu2019ModelingTD, Zhao2021CTABGANET} model their decoders as Gaussian mixture distributions and preprocess the continuous variables using the Variational Gaussian mixture model \cite{Blei2016VariationalIA}, which is known as the mode-specific normalization technique. However, this preprocessing requires additional computational resources and hyperparameter tuning of the number of modes. \section{Proposal} \label{sec:3} Let ${\bf x} \in {\cal X} \subset \mathbb{R}^p$ be an observation, where ${\bf x}_1, \cdots, {\bf x}_{p_1}$ are continuous random variables, and ${\bf x}_{p_1 + 1}, \cdots, {\bf x}_p$ are discrete random variables. Denote $q({\bf x})$ as the true underlying distribution defined over ${\bf x} \in {\cal X}$. Let ${\bf z}$ be a latent variable, where ${\bf z} \in \mathbb{R}^d$ and $d < p$. The prior and posterior distributions of ${\bf z}$ are assumed to be $p({\bf z}) = \mathcal{N}({\bf z}|0, I)$ and $q({\bf z}|{\bf x};\phi) = \mathcal{N}\big({\bf z} | \mu({\bf x};\phi), diag(\sigma^2({\bf x};\phi))\big)$, respectively, where $I$ is the $d \times d$ identity matrix, $\phi$ is the neural network parameter, and $diag(a), a \in \mathbb{R}^d$ denotes a diagonal matrix with diagonal elements $a$. Assume that there exists $p({\bf x}|{\bf z})$ such that $q({\bf x}) = \int p({\bf z}) p({\bf x}|{\bf z}) d{\bf z}$. In addition, let $\mbox{\boldmath{$\alpha$}}$ be a discrete random variable whose set of possible values is $\{1/K, 2/K, \cdots, (K-1)/K, 1 \}$. $\mbox{\boldmath{$\alpha$}}$ follows a discrete uniform distribution, and we denote $\alpha_k \coloneqq k/K$ for $k=1,\cdots,K$.
\subsection{Model Assumptions} \label{sec:3.1} First, we assume that ${\bf x}_1, \cdots, {\bf x}_p$ are conditionally mutually independent given ${\bf z}$. The distribution of ${\bf x}_j$ given ${\bf z}$ and $\alpha_k$ is assumed to be an asymmetric Laplace distribution, for $j = 1, 2, \cdots, p_1$. For $l = p_1+1, p_1+2, \cdots, p$, the distribution of ${\bf x}_l$ given ${\bf z}$ is a categorical distribution, and the number of categories of ${\bf x}_l$ is denoted as $T_{l}$. Then, for $k=1,\cdots,K$, the decoder is written as \begin{eqnarray} \label{eq:decoder} && p({\bf x}|{\bf z},\alpha_k;\theta,\beta) \nonumber\\ &=& \prod_{j=1}^{p_1} p({\bf x}_j|{\bf z},\alpha_k;\theta_j,\beta) \cdot \prod_{l=p_1+1}^{p} p({\bf x}_l|{\bf z};\theta_l,\beta) \nonumber\\ &=& \prod_{j=1}^{p_1} \frac{\alpha_k (1 - \alpha_k)}{\beta} \exp \left( -\rho_{\alpha_k}\left( \frac{{\bf x}_j - D_j(\alpha_k|{\bf z}, \theta_j)}{\beta} \right) \right) \nonumber\\ && \cdot \prod_{l=p_1+1}^{p} \prod_{t=1}^{T_l} \pi({\bf z}; \theta_l)_t^{I({\bf x}_l = t)}, \end{eqnarray} where $\theta = (\theta_1, \cdots, \theta_p)$, $\beta$ is a non-trainable hyperparameter, $\rho_{v}({\bf u}) = {\bf u}(v - I({\bf u} < 0))$ is the check function, and $I(\cdot)$ is the indicator function. $D_j(\cdot|\cdot, \theta_j): [0, 1] \times \mathbb{R}^d \mapsto \mathbb{R}$ is the location parameter of the conditional distribution of the continuous variable ${\bf x}_j$. Note that all continuous variables share the same scale parameter $\beta$. For discrete variables, $\pi(\cdot; \theta_l): \mathbb{R}^d \mapsto [0, 1]^{T_l}$ and $\sum_{t=1}^{T_l} \pi({\bf z}; \theta_l)_t = 1$, for all ${\bf z} \in \mathbb{R}^d$. With the proposed distributions, the negative ELBO is written as \begin{eqnarray} \label{eq:negELBO} && \sum_{j=1}^{p_1} \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \left[ \frac{1}{2 \cdot K} \sum_{k=1}^{K} 2 \cdot \rho_{\alpha_k}\Big( {\bf x}_j - D_j(\alpha_k|{\bf z}, \theta_j) \Big) \right] \nonumber\\ &-& \beta p_1 \frac{1}{K} \sum_{k=1}^{K} \log \alpha_k (1 - \alpha_k) + \beta {p_1} \log \beta \nonumber\\ &-& \beta \cdot \sum_{l=p_1+1}^p \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \left[\sum_{t=1}^{T_l} I({\bf x}_l = t) \cdot \log \pi({\bf z};\theta_l)_t \right] \nonumber\\ &+& \beta \cdot \mathcal{KL}(q({\bf z}|{\bf x};\phi) \| p({\bf z})) \end{eqnarray} (see Appendix \ref{app:1} for a detailed derivation). Note that the reconstruction loss of \eqref{eq:negELBO} can be seen as estimating $K$ conditional quantiles in a Bayesian framework \cite{Yu2001BayesianQR, Moon2021LearningMQ}, where $\alpha_k$ plays the role of the quantile level. \subsection{Distributional Learning} \label{sec:3.2} For distributional learning of the VAE, we need to estimate conditional quantiles for an infinite number of quantile levels, i.e., $K \rightarrow \infty$. The following Proposition \ref{prop:convergence} shows that the negative ELBO \eqref{eq:negELBO} converges to the continuous ranked probability score (CRPS) loss, which measures how accurately the proposed CDF approximates the true CDF of the dataset. \begin{prop}[Convergence to CRPS] \label{prop:convergence} Suppose that $\int_0^1 \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \Big\vert 2 \cdot \rho_{\mbox{\boldmath{$\alpha$}}}\Big( {\bf x}_j - D_j(\mbox{\boldmath{$\alpha$}}|{\bf z}, \theta_j) \Big) \Big\vert d\mbox{\boldmath{$\alpha$}} < \infty$.
Then, for $j=1, \cdots, p_1$, \begin{eqnarray*} && \lim_{K \rightarrow \infty} \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \left[ \frac{1}{K} \sum_{k=1}^{K} 2 \cdot \rho_{\alpha_k}\Big( {\bf x}_j - D_j(\alpha_k|{\bf z}, \theta_j) \Big) \right] \\ &=& \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \left[ \int_0^1 2 \cdot \rho_{\mbox{\boldmath{$\alpha$}}}\Big( {\bf x}_j - D_j(\mbox{\boldmath{$\alpha$}}|{\bf z}, \theta_j) \Big) d\mbox{\boldmath{$\alpha$}} \right] \\ &=& \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \left[ \mbox{CRPS}(D_j(\cdot|{\bf z};\theta_j), {\bf x}) \right], \end{eqnarray*} where $\mbox{CRPS}(\cdot, \cdot)$ is the continuous ranked probability score (CRPS), and \begin{eqnarray*} \lim_{K \rightarrow \infty} \frac{1}{K} \sum_{k=1}^{K} \log \alpha_k (1 - \alpha_k) = \int_0^1 \log \mbox{\boldmath{$\alpha$}} (1-\mbox{\boldmath{$\alpha$}}) d\mbox{\boldmath{$\alpha$}} = -2. \end{eqnarray*} \end{prop} Therefore, when $K \rightarrow \infty$, our final objective is minimizing \begin{eqnarray} \label{eq:final_obj} && \sum_{j=1}^{p_1} \mathbb{E}_{q({\bf x})} \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \left[ \frac{1}{2} \cdot \mbox{CRPS}(D_j(\cdot|{\bf z};\theta_j), {\bf x}) \right] \nonumber\\ &-& \sum_{l=p_1+1}^p \mathbb{E}_{q({\bf x})} \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \left[\sum_{t=1}^{T_l} I({\bf x}_l = t) \cdot \log \pi({\bf z};\theta_l)_t \right] \nonumber\\ &+& \beta \cdot \mathbb{E}_{q({\bf x})} [\mathcal{KL}(q({\bf z}|{\bf x};\phi) \| p({\bf z}))] \end{eqnarray} with respect to $(\theta, \phi)$, where constant terms are omitted. To balance the learning of the two reconstruction losses in \eqref{eq:final_obj}, we remove the coefficient $\beta$ from the second term, which is the reconstruction loss of the discrete variables. We call our model DistVAE. In addition, see Appendix \ref{app:2} for the interpretation of distributional learning in terms of the model misspecification error in MLE. \cite{higgins2016beta} shows that the KL-divergence coefficient $\beta$ (the scale parameter of the asymmetric Laplace distribution) controls the reconstruction precision. Since our reconstruction loss consists of the CRPS loss, a larger $\beta$ induces an inaccurate estimation of the true CDF, leading to a lower quality of synthetic data. Consequently, the privacy level will be lower if $\beta$ is small. Therefore, $\beta$ creates a trade-off between the synthetic data quality and the risk of privacy leakage, which means that the privacy level is controllable via $\beta$ \cite{Park2018DataSB} (see Section \ref{sec:4}). \subsubsection{Proper Scoring Rule} In this section, we show that the reconstruction loss of \eqref{eq:final_obj} is a \textit{proper scoring rule} \cite{Gneiting2007StrictlyPS} relative to the true conditional quantile function. Let $F({\bf x}|{\bf z})$ be the CDF of $p({\bf x}|{\bf z})$, and denote $F_j({\bf x}_j|{\bf z})$ as the marginal conditional CDF of ${\bf x}_j$, for $j=1,\cdots, p_1$. Denote $D_j^*(\alpha|{\bf z})$ as the true conditional $\alpha$-quantile with respect to $F_j({\bf x}_j|{\bf z})$ for $\alpha \in (0, 1)$ and ${\bf z} \in \mathbb{R}^d$. This implies that $F_j\big( D_j^*(\alpha|{\bf z}) | {\bf z} \big) = \alpha$.
We define a risk functional of $D_j \in \mathcal{D}_j$ by \begin{eqnarray*} \mathcal{S}_\alpha(D_j, D_j^*) &=& \mathbb{E}_{q({\bf x})} \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \Big[\rho_\alpha({\bf x}_j - D_j(\alpha|{\bf z})) \Big] \\ \mathcal{S}(D_j, D_j^*) &=& \mathbb{E}_{q({\bf x})} \mathbb{E}_{q({\bf z}|{\bf x};\phi)}\left[ \int_0^1 \rho_\alpha({\bf x}_j - D_j(\alpha|{\bf z})) d\alpha \right], \end{eqnarray*} where $\alpha \in (0, 1)$, and $\mathcal{D}_j$ is a set of isotonic functions $D_j$ such that $D_j(\cdot|\cdot):[0, 1] \times \mathbb{R}^d \mapsto \mathbb{R}$. Note that $\mathcal{S}(D_j, D_j^*)$ is equivalent to the reconstruction loss of \eqref{eq:final_obj}. \begin{assumption} \label{assump:proper} The distributional family of $q({\bf z}|{\bf x};\phi)$ is sufficiently large and we have $q({\bf z}|{\bf x};\phi)$ such that \begin{eqnarray*} q({\bf x}) q({\bf z}|{\bf x};\phi) = p({\bf z}) p({\bf x}|{\bf z}). \end{eqnarray*} \end{assumption} \begin{prop}[Proper scoring rule] \label{prop:proper} Suppose that $\mathbb{E}_{q({\bf x})} \mathbb{E}_{q({\bf z}|{\bf x};\phi)}\left[ \int_0^1 \big\vert \rho_\alpha({\bf x}_j - D_j(\alpha|{\bf z})) \big\vert d\alpha \right] < \infty$ for all $D_j \in \mathcal{D}_j$, $j=1,\cdots,p_1$. Under Assumption \ref{assump:proper}, for all $\alpha \in (0, 1)$, \begin{eqnarray*} \mathbb{E}_{q({\bf x})} \mathbb{E}_{q({\bf z}|{\bf x};\phi)} \Big[\rho_\alpha({\bf x}_j - D_j^*(\alpha|{\bf z})) \Big] = \min_{D_j \in \mathcal{D}_j} \mathcal{S}_\alpha(D_j, D_j^*), \end{eqnarray*} and $\mathcal{S}(D_j, D_j^*) \geq \mathcal{S}(D_j^*, D_j^*)$. \end{prop} Furthermore, based on the conditional CDF $D_j^{-1}$, the following Proposition \ref{prop:cdf} shows that the marginal CDF is equivalent to the marginalization of the conditional CDF with respect to the prior distribution. \begin{prop}[Marginal CDF] \label{prop:cdf} For $j = 1, 2, \cdots, p_1$, the marginal CDF $F_j({\bf x}_j)$ of ${\bf x}_j$ is written as \begin{eqnarray*} F_j(x_j) = \int_{\mathbb{R}^d} D_j^{-1}(x_j|{\bf z}) p({\bf z}) d{\bf z}, \end{eqnarray*} where $x_j \in \mathbb{R}$. \end{prop} In practice, the marginal CDF $F_j(x_j)$ is approximated by $\frac{1}{B}\sum_{b=1}^B D_j^{-1}(x_j|z_b),$ where $z_b \sim p({\bf z}), b=1,\cdots,B$. \subsubsection{Closed Form Loss} To compute the CRPS loss in the closed form for computational efficiency \cite{gasthaus2019probabilistic}, we parameterize the function $D_j$ by a linear isotonic regression spline as follows \begin{eqnarray} \label{eq:isotonic} D_j(\alpha|{\bf z};\theta_j) &=& \gamma^{(j)}({\bf z}) + \sum_{m=0}^M b_m^{(j)} ({\bf z}) (\alpha - d_m)_+ \nonumber\\ \mbox{subject to} && \sum_{m=0}^k b_m^{(j)}({\bf z}) \geq 0, k=1,\cdots,M \end{eqnarray} where $\alpha \in [0, 1]$, $\gamma^{(j)}({\bf z}) \in \mathbb{R}$, $b^{(j)}({\bf z}) = (b_0^{(j)}({\bf z}), \cdots, b_M^{(j)}({\bf z})) \in \mathbb{R}^{M+1}$, $d = (d_0, \cdots, d_M) \in [0, 1]^{M+1}$, $0 = d_0 < \cdots < d_M = 1$, and $({\bf u})_+ := \max(0, {\bf u})$. $\theta_j$ is a neural network parameterized mapping such that $\theta_j: \mathbb{R}^d \mapsto \mathbb{R} \times \mathbb{R}^{M+1}$, which outputs $\gamma^{(j)}({\bf z})$ and $b^{(j)}({\bf z})$. 
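As an informal illustration of this parameterization, the short Python sketch below (ours, not the implementation used in the experiments) evaluates $D_j(\alpha|{\bf z};\theta_j)$ from the network outputs; the values of $\gamma^{(j)}({\bf z})$ and $b^{(j)}({\bf z})$ are toy placeholders chosen to satisfy the isotonicity constraint, and the last two lines anticipate the inverse transform sampling of Section \ref{sec:3.3}.
\begin{verbatim}
import numpy as np

def spline_quantile(alpha, gamma, b, d):
    """Evaluate D(alpha|z) = gamma + sum_m b_m * (alpha - d_m)_+ for the
    linear isotonic regression spline above. gamma is a scalar; b and d are
    length-(M+1) arrays with d_0 = 0 < ... < d_M = 1, and the isotonicity
    constraint on the partial sums of b is assumed to hold."""
    return gamma + np.sum(b * np.clip(alpha - d, 0.0, None))

rng = np.random.default_rng(0)
d = np.linspace(0.0, 1.0, 11)                    # knots d_0, ..., d_M with M = 10
gamma, b = 0.5, rng.uniform(0.0, 1.0, size=11)   # toy decoder outputs (all b_m >= 0)
u = rng.uniform()                                # u ~ U(0, 1)
x_synthetic = spline_quantile(u, gamma, b, d)    # inverse transform sampling of x_j
\end{verbatim}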
Consequently, the CRPS loss is computed in closed form as \begin{eqnarray*} && \mbox{CRPS}(D_j(\cdot|{\bf z};\theta_j), {\bf x}_j) \\ &=& (2 \Tilde{\alpha}_j - 1){\bf x}_j + (1 - 2\Tilde{\alpha}_j) \gamma^{(j)}({\bf z}) \\ &+& \sum_{m=1}^{M} b_m^{(j)}({\bf z}) \Bigg( \frac{1 - d_m^3}{3} - d_m \\ && - \max(\Tilde{\alpha}_j, d_m) + 2 \max(\Tilde{\alpha}_j, d_m)d_m \Bigg), \end{eqnarray*} where \begin{eqnarray*} D_j(\Tilde{\alpha}_j|{\bf z};\theta_j) &=& {\bf x}_j \\ \Tilde{\alpha}_j &=& \frac{{\bf x}_j - \gamma^{(j)}({\bf z}) + \sum_{m=1}^{m_0} b_m^{(j)}({\bf z}) d_m}{\sum_{m=1}^{m_0} b_m^{(j)}({\bf z})} \\ D_j(d_{m_0}|{\bf z};\theta_j) &\leq& {\bf x}_j \leq D_j(d_{m_0+1}|{\bf z};\theta_j) \\ j &=& 1, 2, \cdots, p_1. \end{eqnarray*} This indicates that our objective function \eqref{eq:final_obj} remains computationally tractable even though we consider an infinite number of quantile levels. \subsection{Sampling Mechanism} \label{sec:3.3} To generate a synthetic sample, we first sample a latent variable $z$ from the prior distribution (the $d$-dimensional standard Gaussian distribution), and all continuous and discrete variables share this sampled latent variable. Denote by $\hat{x}_j$ a synthetic sample of ${\bf x}_j$, for $j=1,\cdots,p$. For the continuous variables $j=1,\cdots,p_1$, we generate a synthetic sample by inverse transform sampling. Specifically, $\hat{x}_j = D_j(u_j|z;\theta_j)$, where $u_j \sim U(0, 1)$ and $U(0, 1)$ is the uniform distribution on the unit interval. For the discrete variables $l=p_1+1,\cdots,p$, we use the Gumbel-Max trick \cite{gumbel1954statistical} to generate a synthetic sample: $\hat{x}_l = \arg\max_{t=1,\cdots,T_l} \{\log \pi(z;\theta_l)_t + G_t\}$, where $G_t \sim^{i.i.d.} \mathrm{Gumbel}(0, 1), t=1,\cdots,T_l$, and $\mathrm{Gumbel}(0, 1)$ denotes the standard Gumbel distribution. We find that even when the labels of a discrete variable are highly imbalanced, sampling based on the Gumbel-Max trick preserves the label imbalance ratio. \subsection{Calibration of Estimated CDF} \label{sec:3.4} To ensure that the estimated CDF is properly discretized according to the support of the variable, especially for count variables, a post-hoc calibration including discretization \cite{Salimans2017PixelCNNIT} can be applied. Here, we denote the Monte Carlo approximation of the estimated CDF by $\hat{F}(x;\theta) \coloneqq \frac{1}{B}\sum_{b=1}^B D^{-1}(x|z_b;\theta)$ for $x \in \mathbb{R}$, where the subscript $j$ is omitted for brevity. Let the set of observed values of the variable to be discretized be $\{x^{(1)}, x^{(2)}, \cdots, x^{(m)}\}$. The calibration algorithm for the estimated CDF is shown in Algorithm \ref{alg:cal}, and an example of its result is shown in Figure \ref{fig:calibration}. \begin{algorithm}[ht] \caption{Calibration of Estimated CDF} \label{alg:cal} \begin{algorithmic} \State \textbf{Input} $\{x^{(1)}, x^{(2)}, \cdots, x^{(m)}\}$, $\hat{F}(\cdot;\theta)$ \State \textbf{Output} Calibrated estimated CDF $\hat{F}^*(\cdot;\theta)$ \State (1) Compute $\hat{F}(x^{(i)}-0.5;\theta)$ and $\hat{F}(x^{(i)}+0.5;\theta)$ for $i=1,\cdots,m$. \State (2) Discretization: For $i=1,\cdots,m$, \begin{eqnarray*} \hat{F}^*(x^{(i)};\theta) &\coloneqq& \hat{F}^*(x^{(i-1)};\theta) \\ &+& \hat{F}(x^{(i)}+0.5;\theta) - \hat{F}(x^{(i)}-0.5;\theta), \end{eqnarray*} where $\hat{F}^*(x^{(0)};\theta) \coloneqq 0$. \State (3) Ensure monotonicity: For $i=1,\cdots,m-1$, if $\hat{F}^*(x^{(i)};\theta) > \hat{F}^*(x^{(i+1)};\theta)$, \begin{eqnarray*} \hat{F}^*(x^{(i+1)};\theta) \coloneqq \hat{F}^*(x^{(i)};\theta).
\end{eqnarray*} \end{algorithmic} \end{algorithm} \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig/adult_CDF_calibration.png} \caption{Calibrated estimated CDF for the \texttt{educational-num} covariate of the \texttt{adult} dataset. `estimate' indicates $\hat{F}(\cdot;\theta)$, `calibration' indicates $\hat{F}^*(\cdot;\theta)$, and `empirical' indicates the empirical CDF of the observed dataset.} \label{fig:calibration} \end{figure} \section{Experiments} \label{sec:4} In this section, to illustrate that our proposed method can capture the underlying distribution of a given dataset, we numerically show that DistVAE generates synthetic data that can be used as a good proxy for the original data. \subsection{Overview} \textbf{Datasets.} For evaluation, we consider the following real tabular datasets: \texttt{covertype}, \texttt{credit}, \texttt{loan}, \texttt{adult}, \texttt{cabs}, and \texttt{kings} (see Appendix \ref{app:6} for detailed data descriptions). \textbf{Compared models.} We compare against the state-of-the-art synthesizers CTGAN \cite{Xu2019ModelingTD}, TVAE \cite{Xu2019ModelingTD}, and CTAB-GAN \cite{Zhao2021CTABGANET}. Notably, all models use the same latent dimension. \subsection{Evaluation Metrics} To evaluate the synthetic data quality, we investigate three types of metrics: machine learning utility, statistical similarity, and privacy preservability. Note that we compute all these metrics after standardization because the continuous variables have different units. \textbf{Machine learning utility.} To evaluate the machine learning utility (MLu), we use the synthetic data as training data for three widely used machine learning algorithms: linear (logistic) regression, Random Forest, and Gradient Boosting. We average the following metrics: Mean Absolute Relative Error (MARE) for regression problems and $F_1$ for classification problems. We choose $F_1$ since some discrete target variables have imbalanced labels. Note that the synthetic and real training data have the same size. To measure the MLu, the coefficient of determination $R^2$ has been widely used \cite{Xu2019ModelingTD, Zhao2021CTABGANET, Wen2021CausalTGANGT, Kamthe2021CopulaFF}. However, \cite{Li2017AssessingTA} shows that $R^2$ should not be used to assess predictive performance because it is biased, insufficient, and misleading. Because we need to aggregate predictive performance across several different datasets, we use MARE, which is scale-independent and bounded between zero and one \cite{Botchkarev2019ANT}. \textbf{Statistical similarity.} Next, to measure the statistical similarity between real and synthetic data, we use two statistical distances: the Kolmogorov statistic and the 1-Wasserstein distance, which measure the distance between the empirical CDFs of the real training data and the synthetic data. The Kolmogorov statistic tests whether samples are drawn from a specific reference distribution (goodness of fit) \cite{Lehmann1998ElementsOL}. Note that we average the statistical distances across all variables. \textbf{Privacy preservability.} Lastly, to check whether privacy is preserved in synthetic data generation, we use three metrics: \textit{Distance to Closest Record} (DCR) \cite{Park2018DataSB, Zhao2021CTABGANET}, \textit{membership inference attack} \cite{Shokri2016MembershipIA, Choi2017GeneratingMD, Park2018DataSB}, and \textit{attribute disclosure} \cite{Choi2017GeneratingMD, Matwin2015ARO}.
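As a concrete illustration of how the statistical-similarity distances are computed per variable, the following minimal sketch (our illustration, not the authors' evaluation code) uses standard SciPy routines, assuming the standardized real and synthetic continuous columns are given as NumPy arrays:

\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

# Per-column statistical similarity between standardized real and
# synthetic data, averaged over the continuous variables.
def statistical_similarity(real, synthetic):
    # real, synthetic: arrays of shape (n_samples, n_continuous_vars)
    ks_list, wd_list = [], []
    for j in range(real.shape[1]):
        ks_list.append(ks_2samp(real[:, j], synthetic[:, j]).statistic)
        wd_list.append(wasserstein_distance(real[:, j], synthetic[:, j]))
    return np.mean(ks_list), np.mean(wd_list)
\end{verbatim}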
As in \cite{Zhao2021CTABGANET}, we define the DCR as the $5^{th}$ percentile of the $L_2$ distances between all real and synthetic samples (or between synthetic samples). Since the DCR is an $L_2$ distance-based metric, we compute it only for continuous variables. A higher DCR between the real and synthetic datasets indicates that privacy is well preserved, since it implies that no records overlap between the real and synthetic datasets. However, if the DCR score is too large, the quality of the generated synthetic dataset is very poor. The membership inference attack is evaluated according to the steps outlined in Appendix \ref{app:7}. Since we customize the membership inference attack procedure to attack a VAE-based synthesizer, only DistVAE and TVAE are assessed. Since we convert the problem of identifying the complex relationship between real and synthetic dataset members into a binary classification problem, better binary classification scores indicate that the target synthesizer is more vulnerable to the membership inference attack. Attribute disclosure occurs when attackers can reveal additional covariates of a record based on a subset of covariates that attackers already have and similar records from the synthetic dataset. Classification metrics are utilized to check the degree to which attackers accurately identify the additional variables. Therefore, higher attribute disclosure metrics indicate that attackers can reveal unknown variables precisely, and the target synthesizer has an increased risk of privacy leakage. Since attackers are assumed to have only a subset of covariates of a record, attribute disclosure can be considered a more serious privacy leakage issue. \subsection{Results Analysis} \begin{table}[ht] \caption{Averaged machine learning utilities for synthetic datasets. Mean and standard deviation values are obtained from 10 repeated experiments. $\uparrow$ denotes higher is better and $\downarrow$ denotes lower is better.} \centering \begin{tabular}{lrr} \toprule Model & MARE $\downarrow$ & $F_1$ $\uparrow$\\ \midrule CTGAN & $0.321_{\pm 0.271}$ & $0.672_{\pm 0.234}$\\ TVAE & $\textbf{0.225}_{\pm 0.215}$ & $0.594_{\pm 0.295}$\\ CTAB-GAN & $0.403_{\pm 0.392}$ & $0.702_{\pm 0.162}$\\ \midrule DistVAE($\beta=0.5$) & $0.349_{\pm 0.328}$ & $\textbf{0.769}_{\pm 0.128}$\\ Baseline & $0.150_{\pm 0.200}$ & $0.814_{\pm 0.101}$\\ \bottomrule \end{tabular} \label{tab:mlu} \end{table} \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig/MLu.png} \caption{Machine learning utilities for compared models and real tabular datasets.} \label{fig:mlu} \end{figure} \textbf{Machine learning utility.} Table \ref{tab:mlu} shows the averaged MLu for all tabular datasets; a better synthesizer is expected to generate synthetic data whose predictive performance is comparable to that of the real training dataset (denoted `Baseline'). DistVAE shows a competitive MARE score and outperforms the other methods in $F_1$. For a detailed comparison, we plot the paired (MARE, $F_1$) scores for all tabular datasets and compared models in Figure \ref{fig:mlu}, where a better score (i.e., a synthesizer with better MLu) corresponds to a dot closer to the upper left corner. Figure \ref{fig:mlu} demonstrates that DistVAE consistently shows the best or at least competitive MLu across all tabular datasets.
Note that TVAE shows quite a low $F_1$ score on the \texttt{credit} dataset because it fails to handle the highly imbalanced categorical target variable. See Appendix \ref{app:9} for detailed MLu scores for all tabular datasets. \begin{table}[ht] \caption{Averaged statistical similarity of synthetic datasets. K-S represents the Kolmogorov–Smirnov statistic, and 1-WD represents the 1-Wasserstein distance. Mean and standard deviation values are obtained from 10 repeated experiments. Lower is better.} \centering \subtable[Continuous]{\begin{tabular}{lrr} \toprule Model & K-S & 1-WD \\ \midrule CTGAN & $0.133_{\pm 0.106}$ & $0.087_{\pm 0.025}$\\ TVAE & $0.196_{\pm 0.135}$ & $0.220_{\pm 0.099}$\\ CTAB-GAN & $0.157_{\pm 0.089}$ & $0.130_{\pm 0.037}$\\ \midrule DistVAE($\beta=0.5$) & $\textbf{0.090}_{\pm 0.065}$ & $\textbf{0.075}_{\pm 0.026}$\\ \bottomrule \end{tabular}} \subtable[Discrete]{\begin{tabular}{lrr} \toprule Model & K-S & 1-WD \\ \midrule CTGAN & $0.168_{\pm 0.195}$ & $0.521_{\pm 0.532}$\\ TVAE & $0.385_{\pm 0.144}$ & $1.681_{\pm 1.668}$\\ CTAB-GAN & $0.106_{\pm 0.083}$ & $0.412_{\pm 0.378}$\\ \midrule DistVAE($\beta=0.5$) & $\textbf{0.030}_{\pm 0.017}$ & $\textbf{0.118}_{\pm 0.100}$\\ \bottomrule \end{tabular}} \label{tab:stat} \end{table} \textbf{Statistical similarity.} The averaged statistical similarity is reported in Table \ref{tab:stat}. DistVAE achieves the best statistical similarity between real and synthetic datasets for continuous variables in terms of both the Kolmogorov–Smirnov statistic and the 1-Wasserstein distance. This implies that the proposed distributional learning method with the CRPS loss can precisely capture the underlying distribution of the observed dataset. For discrete variables, Table \ref{tab:stat} indicates that DistVAE also outperforms the other methods in statistical similarity. Note that we do not rely on additional training techniques such as \textit{training-by-sampling} \cite{Xu2019ModelingTD}, which incur an extra computational burden. See Appendix \ref{app:9} for detailed statistical similarity scores for all tabular datasets. \begin{table}[ht] \caption{Privacy preservability: Averaged distance to closest record (DCR) between real and synthetic datasets (R\&S) and between the same synthetic datasets (S). Mean and standard deviation values are obtained from 10 repeated experiments. Higher is better.} \centering \begin{tabular}{lrr} \toprule Model & R\&S & S \\ \midrule CTGAN & $0.426_{\pm 0.229}$ & $0.356_{\pm 0.202}$\\ TVAE & $0.470_{\pm 0.181}$ & $0.278_{\pm 0.195}$\\ CTAB-GAN & $0.508_{\pm 0.259}$ & $0.039_{\pm 0.073}$\\ \midrule DistVAE($\beta=0.5$) & $0.444_{\pm 0.250}$ & $0.463_{\pm 0.288}$\\ DistVAE($\beta=1$) & $0.463_{\pm 0.282}$ & $0.479_{\pm 0.310}$\\ DistVAE($\beta=5$) & $\textbf{0.517}_{\pm 0.272}$ & $\textbf{0.511}_{\pm 0.335}$\\ \bottomrule \end{tabular} \label{tab:dcr} \end{table} \textbf{Privacy preservability.} The privacy preservability of each model based on DCR scores is shown in Table \ref{tab:dcr}, and we evaluate the DCR scores of DistVAE for various $\beta$ values. As $\beta$ increases, the DCR between the real and synthetic datasets (R\&S) of DistVAE increases, which implies that $\beta$ can control the risk of privacy leakage. Also, DistVAE consistently shows the largest DCR for the synthetic dataset (S) for all $\beta$ values, which means that DistVAE can generate more diverse synthetic samples than the other methods. We find duplicated records in the synthetic dataset generated by CTAB-GAN, which results in a relatively low DCR score for its synthetic dataset (S).
See Appendix \ref{app:9} for detailed DCR scores for all tabular datasets. \begin{table}[ht] \caption{Privacy preservability: Averaged membership inference attack performance. Mean and standard deviation values are obtained from 10 repeated experiments. Lower is better.} \centering \begin{tabular}{lrr} \toprule Model & Accuracy & AUC \\ \midrule TVAE & $0.495_{\pm 0.019}$ & $0.495_{\pm 0.019}$\\ DistVAE($\beta=0.5$) & $0.500_{\pm 0.003}$ & $0.500_{\pm 0.003}$\\ \bottomrule \end{tabular} \label{tab:memattack} \end{table} We prepare the attack models (one per class) to evaluate the membership inference attack, following the steps outlined in Appendix \ref{app:7}. Also, the attack testing records consist of equal numbers of real training and test records, which are labeled $in$ and $out$, respectively. Note that the real test records are not used to build the attack models. We use gradient-boosting classifiers as attack models. Due to computational cost, the number of attack models is one (i.e., $C=1$). Since the target $in/out$ labels are balanced and the membership inference attack is a binary classification problem, we consider accuracy and AUC (Area Under the Curve) as binary classification metrics. Table \ref{tab:memattack} shows that DistVAE and TVAE attain an AUC score of 0.5, meaning that the attack models cannot distinguish between members of the real training and test datasets, and the membership inference attack is unsuccessful. Therefore, DistVAE can generate synthetic datasets while preserving privacy with respect to the membership inference attack. See Appendix \ref{app:9} for detailed membership inference attack performances for all tabular datasets. \begin{table}[ht] \caption{Privacy preservability: Averaged attribute disclosure performance with $F_1$, where the number of known variables is set to 5 for all tabular datasets. Mean and standard deviation values are obtained from 10 repeated experiments. Lower is better. The number in parentheses represents the value of $\beta$.} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{lrrr} \toprule & \multicolumn{3}{c}{Number of neighbors ($k$)} \\ \cmidrule(){2-4} Model & 1 & 10 & 100 \\ \midrule CTGAN & $0.262_{\pm 0.091}$ & $0.282_{\pm 0.087}$ & $0.275_{\pm 0.087}$\\ TVAE & $0.437_{\pm 0.162}$ & $0.438_{\pm 0.160}$ & $0.432_{\pm 0.162}$\\ CTAB-GAN & $\textbf{0.257}_{\pm 0.123}$ & $0.258_{\pm 0.114}$ & $0.261_{\pm 0.111}$\\ \midrule DistVAE($0.5$) & $0.328_{\pm 0.088}$ & $0.328_{\pm 0.076}$ & $0.310_{\pm 0.072}$\\ DistVAE($1$) & $0.307_{\pm 0.073}$ & $0.313_{\pm 0.068}$ & $0.297_{\pm 0.066}$\\ DistVAE($5$) & $0.265_{\pm 0.105}$ & $\textbf{0.253}_{\pm 0.103}$ & $\textbf{0.232}_{\pm 0.101}$\\ \bottomrule \end{tabular}} \label{tab:attrdis} \end{table} We evaluate attribute disclosure based on the experimental setup of \cite{Choi2017GeneratingMD} while varying the number of nearest neighbors in the synthetic dataset. In detail, we assume that only continuous variables are known to attackers and set the number of covariates known to the attacker to 5. Unknown discrete variables are estimated by the majority vote of the $k$-nearest neighbors, which are constructed based on the $L_2$ distance. The attribute disclosure performance is measured by $F_1$ because imbalanced discrete variables exist across all tabular datasets. The attribute disclosure results are presented in Table \ref{tab:attrdis}.
For all numbers of neighbors ($k$), the $F_1$ score of DistVAE decreases as $\beta$ increases, and DistVAE achieves the smallest $F_1$ scores for $k=10$ and $k=100$. These results indicate that DistVAE can generate synthetic datasets with a low risk of attribute disclosure and that the privacy level is controlled by $\beta$. See Appendix \ref{app:9} for detailed attribute disclosure performances for all tabular datasets. \subsubsection{Additional Study} To check the accuracy of the estimated quantile function from DistVAE, we evaluate DistVAE with the $\alpha$-Rate \cite{Chen2011ForecastingVU} criterion. The $\alpha$-Rate is defined as \begin{eqnarray} \alpha\mbox{-Rate} = \frac{1}{|I_{test}|} \sum_{i \in I_{test}} I(x_i < \hat{F}^{-1}(\alpha)), \end{eqnarray} where $\hat{F}^{-1}(\cdot)$ is the estimated quantile function, $\alpha \in [0, 1]$, and $I_{test}$ is the index set of the test dataset. The $\alpha$-Rate is simply the proportion of test samples that fall below the estimated $\alpha$-quantile, and ideally it should be close to $\alpha$. \begin{table}[ht] \caption{Averaged $\alpha$-Rate and $|\alpha - \alpha\mbox{-Rate}|$ for all test datasets.} \centering \begin{tabular}{lrrrrr} \toprule $\alpha$ & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 \\ \midrule $\alpha$-Rate & 0.204 & 0.373 & 0.533 & 0.725 & 0.908 \\ $|\alpha - \alpha\mbox{-Rate}|$ & 0.104 & 0.083 & 0.04 & 0.032 & 0.008 \\ \bottomrule \end{tabular} \label{tab:alpharate} \end{table} We estimate quantiles for five levels (0.1, 0.3, 0.5, 0.7, 0.9) based on the estimated marginal CDFs of Proposition \ref{prop:cdf}, and Table \ref{tab:alpharate} shows the averaged $\alpha$-Rate results of DistVAE across all tabular datasets. As $\alpha$ increases from 0.1 to 0.9, the discrepancy $|\alpha - \alpha\mbox{-Rate}|$ decreases. This implies that the quantiles estimated by DistVAE can be inaccurate at lower quantile levels. We conjecture that extremely skewed continuous variables, such as \texttt{capital-gain} and \texttt{capital-loss} from the \texttt{adult} dataset, make the quantile estimation unstable. \section{Conclusion and Limitations} \label{sec:5} This paper proposes a novel distributional learning method for the VAE to capture the underlying distribution of the observed dataset. In this paper, distributional learning is defined as estimating the conditional CDF; hence, there is no assumption on the generative model of the VAE. Distributional learning is enabled by estimating an infinite number of conditional quantiles, which becomes computationally tractable by adopting the CRPS loss, and we show that our objective function is a proper scoring rule relative to the true conditional quantile function. Since each conditional CDF depends on a common latent variable (confounded structure), the latent variable simultaneously affects the generation process of all covariates, which makes the covariates correlated in the synthetic data generation process. However, the correlation structure of the covariates can only be `partially' explained by the confounded design of the latent variable because it cannot account for the direct correlation structure. Our future work is therefore to extend the decoder modeling to include the direct correlation structure between covariates. \bibliographystyle{OptimLabTwoColumn}
{ "arxiv_id": "2302.11307", "language": "en", "timestamp": "2023-02-23T02:13:18", "url": "https://arxiv.org/abs/2302.11307", "yymm": "2302" }
\section{Introduction} Gaussian basis sets with periodic treatments have been made available for extended systems to compute Hartree-Fock, density functional theory (DFT), and post-mean-field methods in various program packages\cite{Sun2018,Sun2020,WCMS-CP2K,Kuehne2020,Erba2022}. To characterize periodicity, the Gaussian basis employed in crystalline calculations requires an infinite number of primitive Gaussian functions recurrently placed in the repeated image cells. Unlike integral evaluation for molecules, evaluating a single crystalline integral involves the computation of a massive number of primitive Gaussian integrals. Thanks to the exponential decay of Gaussian functions, given a specific requirement on numerical precision, many primitive integrals can be neglected without breaking the translational symmetry of the crystalline integrals. Proper integral screening schemes for extended systems need to be developed to filter out the negligible primitive integrals. For Coulomb-type interactions, integral screening relies on an accurate estimation of electron repulsion integrals (ERI). Integral estimation has a large impact on the accuracy and the computational cost of crystalline calculations. Underestimation may cause a loss of accuracy, while overestimation may lead to a waste of computational effort. The Schwarz inequality is the simplest yet quite useful integral estimator, although it always overestimates the integral value. By including a factor that accounts for the distance between charge densities, improved Schwarz-inequality estimators were proposed in the past\cite{Gill1994,Lambrecht2005,Maurer2012,Maurer2013,Hollman2015,Valeev2020}. They can be used to screen four-center ERIs and the integrals of density fitting methods for the bare Coulomb operator and the complementary error function attenuated Coulomb operator\cite{Izmaylov2006,Thompson2019}. Integral screening for crystalline integrals is more complex than for molecular integrals due to the presence of periodicity. Crystalline integrals can be evaluated with different integral algorithms\cite{Lippert1999,Carsky2012,Ben2013,Burow2009,Izmaylov2006,Kudin2000,Maschio2008,Pisani2008,Usvyat2007,Varga2006,James2008,Guidon2009,Sun2017,Ye2021,Ye2021a,Ye2022,Sun2020a,Bintrim2022,Sharma2022}. Various parameters, such as the range of the truncated Coulomb operator, the multipole expansion order, the energy cutoff, the real-space grids, etc., have to be used to compute integrals efficiently. It often requires preliminary numerical experiments or certain experience to tune these parameters to achieve the desired accuracy without sacrificing performance. Among the crystalline integral evaluation algorithms, Ye developed the range-separated Gaussian density fitting (RSDF) algorithm\cite{Ye2021} and explored integral estimators and an integral screening scheme for the short-range part of the integrals required by RSDF. His integral estimator helps the RSDF algorithm gain an order-of-magnitude speed-up compared to the earlier GDF implementation\cite{Ye2021,Sun2017}. In this work, we document integral estimators and integral screening parameters for the crystalline integral algorithms developed in the PySCF package, including the range-separated density fitting\cite{Ye2021}, the compensated-charge density fitting (CCDF)\cite{Sun2017}, the range-separated exact exchange algorithm (RSJK)\cite{Sun2020a}, and the Fourier transform integral algorithm. In Section~\ref{sec:algorithms}, we briefly review these crystalline integral algorithms.
In Section~\ref{sec:cutoffs}, we discuss integral estimation and the various cutoffs and screening parameters for different types of integrals. The effectiveness of the integral screening schemes is assessed in Section~\ref{sec:tests}. \section{Integral algorithms in PySCF} \label{sec:algorithms} The periodicity-adapted crystalline Gaussian basis is composed of primitive Gaussian functions recurrently placed in $N$ image cells characterized by translational shifts $\mathbf{T}$ \begin{equation} \phi_\mu^\mathbf{k}(\mathbf{r}) = \frac{1}{\sqrt{N}} \sum_\mathbf{T} e^{i\mathbf{k}\cdot\mathbf{T}} \chi_\mu(\mathbf{r}-\mathbf{T}), \end{equation} where $\chi_\mu(\mathbf{r})$ is a primitive Gaussian function centered at $\mathbf{R}_\mu$ with normalization factor $N_\mu$ \begin{equation*} \chi_\mu(\mathbf{r}) = N_\mu (x-R_{\mu x})^{m_x} (y-R_{\mu y})^{m_y} (z-R_{\mu z})^{m_z} e^{-\alpha_\mu (\mathbf{r} - \mathbf{R}_\mu)^2}. \end{equation*} In an {\it ab initio} calculation of a crystalline system, one essentially needs to build the overlap integrals and the integrals of the kinetic operator, the nuclear attraction operator, and the two-electron Coulomb repulsion operator in terms of the crystalline basis. The translational-symmetry-allowed overlap integrals between two crystalline basis functions are \begin{gather} S_{\mu\nu}^{\mathbf{k}} = \sum_{\mathbf{T}}e^{i\mathbf{k}\cdot\mathbf{T}} S_{\mu\nu^\mathbf{T}}, \label{eq:overlap} \\ S_{\mu\nu^\mathbf{T}} = \int \chi_\mu(\mathbf{r}) \chi_\nu(\mathbf{r} - \mathbf{T}) d^3\mathbf{r}. \end{gather} The lattice-sum over vector $\mathbf{T}$ can be truncated according to the overlap between the two primitive functions $S_{\mu\nu^\mathbf{T}}$. We can compute the kinetic integrals in a similar manner. Nuclear attraction integrals can be computed using the algorithm for three-center two-electron integrals, which we will discuss later. In this treatment, we use a very sharp s-type function to mimic the nuclear charge distribution \begin{equation} \chi_A(\mathbf{r}) = Z_A \lim_{\zeta\rightarrow\infty} \Big(\frac{\zeta}{2\pi}\Big)^{3/2} e^{-\zeta |\mathbf{r}-\mathbf{R}_A|^2} \end{equation} and rewrite the nuclear attraction integral as \begin{align} V_{N,\mu\nu}^{\mathbf{k}} = \sum_{A\in \text{cell 0}} \sum_{\mathbf{MN}}e^{i\mathbf{k}\cdot(\mathbf{N-M})} \int \frac{\chi_\mu(\mathbf{r}_1-\mathbf{M}) \chi_\nu(\mathbf{r}_1 - \mathbf{N}) \chi_A(\mathbf{r}_2)}{r_{12}} d^3\mathbf{r}_1 d^3\mathbf{r}_2. \end{align} It is relatively straightforward to compute two-electron Coulomb repulsion integrals with the assistance of plane-waves \begin{equation} \frac{e^{i\mathbf{G}\cdot \mathbf{r}}}{\sqrt{2\pi}}. \end{equation} In reciprocal space, the four-center two-electron integrals can be evaluated as \begin{equation} g_{\mu\nu,\kappa\lambda}^{\mathbf{k}_\mu\mathbf{k}_\nu\mathbf{k}_\kappa\mathbf{k}_\lambda} = \frac{1}{\Omega} \sum_{\mathbf{G}} \frac{4\pi\rho_{\mu\nu}^{\mathbf{k}_\mu\mathbf{k}_\nu}(\mathbf{G}+\mathbf{k}_{\mu\nu}) \rho_{\kappa\lambda}^{\mathbf{k}_\kappa\mathbf{k}_\lambda}(-\mathbf{G}+\mathbf{k}_{\kappa\lambda}) }{|\mathbf{G}+\mathbf{k}_{\mu\nu}|^2}, \label{eq:aft:eri} \end{equation} \begin{equation} \mathbf{k}_{\mu\nu} = -\mathbf{k}_\mu + \mathbf{k}_\nu. \end{equation} $\Omega$ is the volume of the unit cell. The plane-wave vectors $\mathbf{G}$ are chosen to be integer multiples of the reciprocal lattice vectors.
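As an illustration of Eq.~\eqref{eq:aft:eri}, the reciprocal-space assembly can be sketched in a few lines of NumPy (a schematic sketch only, not the PySCF implementation; the array layout and the handling of the divergent $\mathbf{G}+\mathbf{k}_{\mu\nu}=0$ term are our assumptions):

\begin{verbatim}
import numpy as np

# Schematic sketch of the plane-wave ERI expression above: contract the
# Fourier-transformed orbital-pair densities with the Coulomb kernel
# 4*pi/|G + k_munu|^2 and sum over G.
def assemble_eri(rho_bra, rho_ket, Gv, k_munu, volume):
    # rho_bra: (nG, nao, nao) array, rho_{mu nu}(G + k_munu)
    # rho_ket: (nG, nao, nao) array, rho_{kappa lambda}(-G + k_kappalambda)
    # Gv:      (nG, 3) plane-wave vectors G
    # k_munu:  (3,) momentum transfer -k_mu + k_nu
    # volume:  unit-cell volume Omega
    Gk = Gv + k_munu
    G2 = np.einsum('gx,gx->g', Gk, Gk)
    coulG = np.where(G2 > 1e-12, 4 * np.pi / np.maximum(G2, 1e-12), 0.0)
    # The divergent G + k_munu = 0 contribution is zeroed here and is
    # assumed to be handled separately.
    return np.einsum('g,gij,gkl->ijkl', coulG, rho_bra, rho_ket) / volume
\end{verbatim}

In practice, the summation over $\mathbf{G}$ is truncated with the energy cutoff discussed in Section~\ref{sec:cutoffs}.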
The Fourier-transformed density (i.e., the Fourier transform of the product of basis functions) $\rho(\mathbf{G})$ can be obtained with either the analytical Fourier transformation \begin{equation} \rho_{\mu\nu}^{\mathbf{k}_\mu\mathbf{k}_\nu}(\mathbf{G}) = \sum_{\mathbf{T}} e^{i\mathbf{k}_\nu\cdot\mathbf{T}} \int e^{-i(\mathbf{G}-\mathbf{k}_{\mu\nu})\cdot\mathbf{r}} \chi_\mu(\mathbf{r}) \chi_\nu(\mathbf{r} - \mathbf{T}) d^3\mathbf{r} \label{eq:aft:rhoij} \end{equation} or the discrete Fourier transformation \begin{equation} \rho_{\mu\nu}^{\mathbf{k}_\mu\mathbf{k}_\nu}(\mathbf{G}) = \frac{1}{\Omega}\sum_\mathbf{r} e^{-i(\mathbf{G}-\mathbf{k}_{\mu\nu})\cdot\mathbf{r}} \phi_\mu^{\mathbf{k}_\mu}(\mathbf{r}) \phi_\nu^{\mathbf{k}_\nu}(\mathbf{r}). \end{equation} This algorithm can be viewed as a density fitting method using plane-waves as the auxiliary basis to expand the electron density. Implementations of this algorithm are available in PySCF under the names FFTDF (fast Fourier transform density fitting) and AFTDF (analytical Fourier transform density fitting). Computing the two-electron integrals with the Fourier transform is expensive in many scenarios. Thanks to the locality of Gaussian functions in either real space or reciprocal space, recipes that mix real-space and reciprocal-space integral evaluation were developed: the range-separated integral algorithms RSDF and RSJK, and the charge-compensated integral algorithm CCDF. \subsection{Range-separated integral algorithms} Using the error function and its complementary function to split the Coulomb operator \begin{equation} \frac{1}{r_{12}} = \frac{\mathrm{erfc}(\omega r_{12})}{r_{12}} + \frac{\mathrm{erf}(\omega r_{12})}{r_{12}}, \label{eq:coul:split} \end{equation} we get a short-range (SR) component associated with the complementary error function and a long-range (LR) component associated with the error function. We refer to the complementary-error-function part as SR because it decays exponentially in real space. In contrast, the LR component has a compact distribution in reciprocal space \begin{equation} \frac{4\pi}{G^2} e^{-\frac{G^2}{4\omega^2}}. \label{eq:coul:lr} \end{equation} We also split the electron density into a compact part $\rho_c(\mathbf{r})$ and a diffuse part $\rho_d(\mathbf{r})$ based on their compactness in real space. Typically, the diffuse part of the density is constructed from smooth Gaussian functions and is expected to be compact in reciprocal space. When computing the two-electron repulsion integrals \begin{equation} \int \frac{\rho(\mathbf{r}_1) \rho(\mathbf{r}_2)}{r_{12}} d^3 \mathbf{r}_1 d^3 \mathbf{r}_2, \end{equation} with the RSJK or RSDF algorithm, locality is utilized and the integrals are computed in two steps. In the first step, we compute the SR Coulomb interaction of the compact density analytically in real space \begin{equation} \int \frac{\rho_{c}(\mathbf{r}_1)\mathrm{erfc}(\omega r_{12}) \rho_{c}(\mathbf{r}_2)}{r_{12}} d^3 \mathbf{r}_1 d^3 \mathbf{r}_2.
\label{eq:int2e:sr} \end{equation} For RSJK, computing the four-index analytical integrals requires three nested lattice-sums \begin{gather} g_{\mu\nu,\kappa\lambda}^{\mathbf{k}_\mu\mathbf{k}_\nu\mathbf{k}_\kappa\mathbf{k}_\lambda} = \sum_{\mathbf{MNT}} e^{i\mathbf{k}_\nu\cdot\mathbf{N}-i\mathbf{k}_\mu\cdot\mathbf{M}-i\mathbf{k}_\kappa\cdot\mathbf{T}} g_{\mu^\mathbf{M}\nu^\mathbf{N},\kappa^\mathbf{T}\lambda} \label{eq:j4c:triple:lattice:sum} \\ g_{\mu^\mathbf{M}\nu^\mathbf{N},\kappa^\mathbf{T}\lambda} = \int \frac{\chi_\mu(\mathbf{r}_1-\mathbf{M}) \chi_\nu(\mathbf{r}_1 - \mathbf{N}) \mathrm{erfc}(\omega r_{12})\chi_\kappa(\mathbf{r}_2-\mathbf{T}) \chi_\lambda(\mathbf{r}_2)}{r_{12}} d^3\mathbf{r}_1 d^3\mathbf{r}_2. \label{eq:eri4c2e:rs} \end{gather} For RSDF, a single lattice-sum is required for the two-center SR integrals and a double lattice-sum is required for the three-center SR integrals \begin{gather} g_{\mu,\nu}^{\mathbf{k}} = \sum_{\mathbf{N}} e^{i\mathbf{k}_\nu\cdot\mathbf{N}} g_{\mu,\nu^\mathbf{N}}, \\ g_{\mu,\nu^\mathbf{N}} = \int \frac{\chi_\mu(\mathbf{r}_1)\mathrm{erfc}(\omega r_{12}) \chi_\nu(\mathbf{r}_2 - \mathbf{N})}{r_{12}} d^3\mathbf{r}_1 d^3\mathbf{r}_2, \\ g_{\mu\nu,\kappa}^{\mathbf{k}_\mu\mathbf{k}_\nu} = \sum_{\mathbf{MN}} e^{i\mathbf{k}_\nu\cdot\mathbf{N}-i\mathbf{k}_\mu\cdot\mathbf{M}} g_{\mu^\mathbf{M}\nu^\mathbf{N},\kappa}, \label{eq:j3c:double:lattice:sum} \\ g_{\mu^\mathbf{M}\nu^\mathbf{N},\kappa} = \int \frac{\chi_\mu(\mathbf{r}_1-\mathbf{M}) \chi_\nu(\mathbf{r}_1 - \mathbf{N}) \mathrm{erfc}(\omega r_{12})\chi_\kappa(\mathbf{r}_2)}{r_{12}} d^3\mathbf{r}_1 d^3\mathbf{r}_2. \label{eq:j2c:lattice:sum} \end{gather} In the second step, the remaining terms, including those with the LR Coulomb operator and those involving the diffuse part of the electron density, are collected and evaluated numerically in reciprocal space, which can be compactly written as \begin{equation} \frac{1}{\Omega}\sum_{\mathbf{G}} \frac{4\pi}{G^2} \Big(\rho(\mathbf{G})\rho(-\mathbf{G}) -(1-e^{-\frac{G^2}{4\omega^2}})\rho_c(\mathbf{G})\rho_c(-\mathbf{G})\Big). \label{eq:int2e:lr} \end{equation} All terms in this step exhibit a compact distribution in reciprocal space, which allows for rapid truncation of the summation over the plane-wave functions in the formula above. More details of the four-center ERI algorithm can be found in reference \onlinecite{Sun2020a}. \subsection{Charge-compensated algorithms} In CCDF, we partition the auxiliary basis function into two components, the zero-multipole component and the plane-wave component \begin{gather} \varphi_\mu^\mathbf{k}(\mathbf{r}) = \frac{1}{\sqrt{N}} \sum_\mathbf{T} e^{i\mathbf{k}\cdot\mathbf{T}} [\chi_\mu(\mathbf{r}-\mathbf{T}) - \xi_\mu(\mathbf{r}-\mathbf{T})] + \frac{1}{(2\pi)^3}\sum_{\mathbf{G}} e^{i(\mathbf{G}+\mathbf{k})\cdot\mathbf{r}} \rho_{\xi_\mu}(\mathbf{G}+\mathbf{k}). \label{eq:cc:auxbas} \end{gather} The zero-multipole function $\varphi$ is a regular Gaussian function compensated by a smooth Gaussian function $\xi$ that carries the same charge (or multipoles). The effect of $\xi$ is eliminated by the plane-wave component \begin{gather} \xi_\mu(\mathbf{r}) = \frac{N_\mu}{N_\eta} (x-R_{\mu x})^{m_x} (y-R_{\mu y})^{m_y} (z-R_{\mu z})^{m_z} e^{-\eta (\mathbf{r} - \mathbf{R}_\mu)^2} , \quad \eta < \alpha_\mu, \\ \rho_{\xi_\mu}(\mathbf{G}) = \int e^{-i\mathbf{G}\cdot \mathbf{r}} \xi_\mu(\mathbf{r}) d^3\mathbf{r}.
\end{gather} For the two-center and three-center integrals involving $\varphi$, we carry out the analytical integral scheme in real space \begin{gather} g_{\mu,\nu}^{\mathbf{k}} = \sum_{\mathbf{N}} e^{i\mathbf{k}_\nu\cdot\mathbf{N}} ( J_{\mu,\nu^\mathbf{N}} - J_{\xi_\mu,\nu^\mathbf{N}} - J_{\mu,\xi_\nu^\mathbf{N}} + J_{\xi_\mu,\xi_\nu^\mathbf{N}}), \\ J_{\mu,\nu^\mathbf{N}} = \int \frac{\chi_\mu(\mathbf{r}_1) \chi_\nu(\mathbf{r}_2 - \mathbf{N})}{r_{12}} d^3\mathbf{r}_1 d^3\mathbf{r}_2, \\ g_{\mu\nu,\kappa}^{\mathbf{k}_\mu\mathbf{k}_\nu} = \sum_{\mathbf{MN}} e^{i\mathbf{k}_\nu\cdot\mathbf{N}-i\mathbf{k}_\mu\cdot\mathbf{M}} (J_{\mu^\mathbf{M}\nu^\mathbf{N},\kappa} - J_{\mu^\mathbf{M}\nu^\mathbf{N},\xi_\kappa}), \label{eq:j3c:ccdf} \\ J_{\mu^\mathbf{M}\nu^\mathbf{N},\kappa} = \int \frac{\chi_\mu(\mathbf{r}_1-\mathbf{M}) \chi_\nu(\mathbf{r}_1 - \mathbf{N}) \chi_\kappa(\mathbf{r}_2)}{r_{12}} d^3\mathbf{r}_1 d^3\mathbf{r}_2. \end{gather} The integrals associated with the plane-wave component are computed in reciprocal space. The formula for the two-center two-electron integrals is \begin{equation} \frac{4\pi}{\Omega}\sum_{\mathbf{G}} \frac{\rho_\mu^{\mathbf{k}}(\mathbf{G}+\mathbf{k}) \rho_{\xi_\nu}^{\mathbf{k}}(-\mathbf{G}-\mathbf{k}) + \rho_{\xi_\mu}^{\mathbf{k}}(\mathbf{G}+\mathbf{k}) \rho_\nu^{\mathbf{k}}(-\mathbf{G}-\mathbf{k}) - \rho_{\xi_\mu}^{\mathbf{k}}(\mathbf{G}+\mathbf{k}) \rho_{\xi_\nu}^{\mathbf{k}}(-\mathbf{G}-\mathbf{k}) }{|\mathbf{G}+\mathbf{k}|^2} \end{equation} and the formula for the three-center two-electron integrals is \begin{equation} \frac{1}{\Omega} \sum_{\mathbf{G}} \frac{4\pi\rho_{\mu\nu}^{\mathbf{k}_\mu\mathbf{k}_\nu}(\mathbf{G}+\mathbf{k}_{\mu\nu}) \rho_\xi(-\mathbf{G}+\mathbf{k}_{\kappa\lambda}) }{|\mathbf{G}+\mathbf{k}_{\mu\nu}|^2}. \end{equation} \section{Cutoff estimation} \label{sec:cutoffs} Based on the positions of the primitive Gaussian functions, we use a distance cutoff $R_\text{cut}$ to determine which image cells are included in the lattice-sum. For plane-wave functions, we use an energy cutoff $E_\text{cut}$ to truncate the summation over plane-waves. We first show the conversion between $R_\text{cut}$ and the range of the lattice-sum. Let $\Delta\mathbf{R}$ be the displacement between two atoms in the unit cell. For lattice vectors $\mathbf{a} = (\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3)$, $|\mathbf{a}\mathbf{T} + \Delta\mathbf{R}|$ gives the distance between one atom in the reference cell (cell 0) and another atom in the image cell indicated by the vector $\mathbf{T}$ (elements of $\mathbf{T}$ are all integers). The lattice-sum should include all $\mathbf{T}$ that satisfy \begin{equation} |\mathbf{a}\mathbf{T} + \Delta\mathbf{R}| < R_\text{cut} \label{eq:rcut:condition} \end{equation} Applying QR decomposition to the 3 $\times$ 4 matrix \begin{equation*} \begin{pmatrix} \mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3, \Delta\mathbf{R} \end{pmatrix} = \mathbf{q}\mathbf{c}, \end{equation*} the inequality \eqref{eq:rcut:condition} can be transformed into \begin{equation*} \begin{pmatrix} T_x, T_y, T_z, 1 \end{pmatrix} \cdot \mathbf{c}^T \mathbf{c} \cdot \begin{pmatrix} T_x \\ T_y \\ T_z \\ 1 \end{pmatrix} < R_\text{cut}^{2}.
\end{equation*} This inequality determines the lower and upper bounds of $T_z$ \begin{gather} T_z^\text{upper} = \mathrm{ceil}\big(\frac{R_\text{cut} - c_{34}}{c_{33}}\big) \\ T_z^\text{lower} = \mathrm{floor}\big(\frac{-R_\text{cut} - c_{34}}{c_{33}}\big) \label{eq:rcut2latticesum} \end{gather} The lattice-sum along the $z$-direction needs to include all integers in the closed interval $[T_z^\text{lower}, T_z^\text{upper}]$. Similar procedures can be carried out to determine the bounds for $T_x$ and $T_y$ as well as the lattice-sum ranges along the $x$- and $y$-directions. Given an energy cutoff $E_\text{cut}$, we can determine the minimal number of plane waves $\mathbf{N}$ in each direction with the inequality \begin{gather} |\mathbf{N} \mathbf{b}|^2 > 2 E_\text{cut}. \end{gather} We can find the requirement for $N_z$ with a QR decomposition of $\mathbf{b}$ \begin{gather} \begin{pmatrix} \mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3 \end{pmatrix} = \mathbf{q}\mathbf{c}, \\ N_z \geq \mathrm{ceil}\big(\frac{\sqrt{2E_\text{cut}}}{c_{33}}\big). \label{eq:kecut2nz} \end{gather} $N_x$ and $N_y$ can be determined in a similar manner. \subsection{Distance cutoff for Gaussian basis functions on real-space grids} For any grid point inside the reference cell, the value of a remote primitive Gaussian function centered at $\mathbf{R}$ has the upper bound \begin{equation} \chi_\mu(\mathbf{r}-\mathbf{R}) \leq N_\mu |\mathbf{R}|^{l_\mu} e^{-\alpha_\mu |\mathbf{R}|^2} \end{equation} We can estimate the overall error $\varepsilon$ of a single lattice-sum if we neglect all primitive functions placed farther away than $R_\text{cut}$ \begin{equation} \varepsilon = \sum_{|\mathbf{T}| > R_\text{cut}} \chi_\mu(\mathbf{r}-\mathbf{T}) \approx \int_{R > R_\text{cut}} \chi_\mu(\mathbf{R}) d^3\mathbf{R} < 4 \pi N_\mu \int_{R > R_\text{cut}} R^{l_\mu+2} e^{-\alpha_\mu R^2} dR. \end{equation} To ensure that the truncation error is smaller than the required precision $\tau$, the value of $R_\text{cut}$ can be determined by solving the inequality \begin{equation} \frac{2 \pi N_\mu R_\text{cut}^{l_\mu+1} e^{-\alpha_\mu R_\text{cut}^2}}{\Omega\alpha_\mu} < \tau \end{equation} To simplify the estimation for basis functions in the same shell, which share the same angular momentum, we approximate the angular part of the normalization factor \begin{gather} N_\mu \approx N_R(\alpha_\mu,l_\mu) \sqrt{\frac{2l_\mu+1}{4\pi}}, \end{gather} where $N_R$ is the radial normalization factor \begin{equation} N_R(\alpha_\mu,l_\mu) = \sqrt{\frac{2(2\alpha_\mu)^{l_\mu+3/2}}{\Gamma(l_\mu+\frac{3}{2})}}. \end{equation} \subsection{Distance cutoff for overlap} Assuming that $\chi_\mu$ is centered at the coordinate $(0,0,0)$, the overlap between two primitive functions $\chi_\mu$ and $\chi_\nu$ has an upper limit \begin{align} \langle \chi_\mu|\chi_\nu\rangle &\leq N_{\mu} N_{\nu} \int x^{l_\mu} (x-|\mathbf{R}_\nu|)^{l_\nu}e^{-\alpha_\mu x^2} e^{-\alpha_\nu(x-|\mathbf{R}_\nu|)^2} e^{-\alpha_{\mu\nu}y^2} e^{-\alpha_{\mu\nu}z^2} dx dy dz \nonumber\\ &= N_{\mu} N_{\nu} \frac{\pi}{\alpha_{\mu\nu}} \int x^{l_\mu} (x-R_\nu)^{l_\nu}e^{-\alpha_\mu x^2} e^{-\alpha_\nu(x-R_\nu)^2} dx. \end{align} To simplify the equations, we adopt the shorthand notations \begin{gather} \alpha_{\mu\nu} = \alpha_\mu + \alpha_\nu, \\ l_{\mu\nu} = l_\mu + l_\nu, \\ \theta_{\mu\nu} = (\alpha_\mu^{-1} + \alpha_\nu^{-1})^{-1}.
\end{gather} We then employ the Gaussian product theorem (GPT) \begin{equation} L_{\mu\nu}^{k}(R) = \sum_{l=0}^{k} \begin{pmatrix} l_\mu \\ l \end{pmatrix} \begin{pmatrix} l_\nu \\ k-l \end{pmatrix} \big(\frac{\alpha_\nu R}{\alpha_{\mu\nu}}\big)^{l_\mu-l} \big(\frac{-\alpha_\mu R}{\alpha_{\mu\nu}}\big)^{l_\nu+l-k}, \label{eq:gpt:coeff} \end{equation} and derive the primitive overlap integral \begin{align} I_{\mu\nu} &= \pi N_{\mu} N_{\nu} e^{-\theta_{\mu\nu}R_\nu^2} \sum_{k}^{l_{\mu\nu}} L_{\mu\nu}^{k}(R_\nu) \frac{\Gamma(\frac{k+1}{2})}{\alpha_{\mu\nu}^{(k+3)/2}}. \end{align} Here $\Gamma(s, t)$ is the upper incomplete gamma function \begin{equation} \Gamma(s, t) = \int_t^\infty x^{s-1} e^{-x} dx. \end{equation} and $\Gamma(s) = \Gamma(s, 0)$. By approximating $\Gamma(\frac{k+1}{2}) \approx \sqrt{\pi}$ \begin{equation} I_{\mu\nu} \lesssim \pi N_{\mu} N_{\nu} e^{-\theta_{\mu\nu}R_\nu^2} \sum_{k}^{l_{\mu\nu}}|L_{\mu\nu}^{k}(R_\nu)| \frac{\sqrt{\pi}}{\alpha_{\mu\nu}^{(k+3)/2}}, \end{equation} we can factorize the GPT term and obtain the overlap integral upper bound \begin{equation} I_{\mu\nu} \lesssim N_\mu N_\nu e^{-\theta_{\mu\nu}R^2} \Big(\frac{\alpha_\nu R}{\alpha_{\mu\nu}} + \frac{1}{\sqrt{\alpha_{\mu\nu}}}\Big)^{l_\mu} \Big(\frac{\alpha_\mu R}{\alpha_{\mu\nu}} + \frac{1}{\sqrt{\alpha_{\mu\nu}}}\Big)^{l_\nu} \Big(\frac{\pi}{\alpha_{\mu\nu}}\Big)^{3/2}. \label{eq:factorize:ovlp} \end{equation} After considering the effect of the single lattice-sum in the overlap integral \eqref{eq:overlap}, we derive an approximate value of the truncation error $\varepsilon$ \begin{align} \varepsilon &< \sum_{|\mathbf{R}|>R_\text{cut}} I_{\mu\nu} \approx \frac{4\pi}{\Omega} \int_{R > R_\text{cut}} R^2 I_{\mu\nu} dR \approx \frac{2\pi R_\text{cut}}{\Omega \theta_{\mu\nu}} I_{\mu\nu} < \tau. \end{align} Solving this inequality for a specific precision requirement $\tau$, we can obtain $R_\text{cut}$ for the overlap integrals of the crystalline basis. \subsection{Distance cutoff for Fourier transform} The analytical Fourier transform of the Gaussian function product $\chi_\mu \chi_\nu$ is \begin{equation} \int e^{-i \mathbf{G}\cdot \mathbf{r}} \chi_\mu(\mathbf{r}) \chi_\nu(\mathbf{r}) d^3\mathbf{r} = \pi N_{\mu} N_{\nu} e^{-\frac{G^2}{4\alpha_{\mu\nu}}} e^{-\theta_{\mu\nu}R^2} \sum_{k}^{l_{\mu\nu}} L_{\mu\nu}^{k}(R) \sum_m \begin{pmatrix} k \\ m \end{pmatrix} \frac{(\frac{-iG_x}{2\alpha_{\mu\nu}})^{k-m} \Gamma(\frac{m+1}{2})}{\alpha_{\mu\nu}^{(m+3)/2}}. \label{eq:aft:rhoij1} \end{equation} Compared to the overlap integrals, the Fourier transform introduces a factor \begin{equation} e^{-\frac{G^2}{4\alpha_{\mu\nu}}} (\frac{G}{2\alpha_{\mu\nu}})^{n} \end{equation} whose maximum value is \begin{equation} \max(e^{-\frac{G^2}{4\alpha_{\mu\nu}}} (\frac{G}{2\alpha_{\mu\nu}})^{n}) = \Big(\frac{n}{2e\alpha_{\mu\nu}}\Big)^{\frac{n}{2}}. \end{equation} This value can hardly be larger than 1 for regular Gaussian basis sets in routine calculations. Therefore, it is sufficient to employ the overlap $R_\text{cut}$ estimator for the analytical Fourier transform. \subsection{Distance cutoff in RSDF} \label{sec:rcut:rsdf} Density fitting methods require three-center integrals and two-center integrals.
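All of the $R_\text{cut}$ conditions in this section, including the overlap and Fourier-transform estimators above and the SR estimators below, reduce to inequalities of the generic form $A\,R_\text{cut}^{\,n}\,e^{-\theta R_\text{cut}^2} < \tau$, which can be solved for $R_\text{cut}$ by a few fixed-point iterations. A minimal sketch (our illustration, not PySCF's exact routine):

\begin{verbatim}
import numpy as np

# Solve A * R**n * exp(-theta * R**2) < tau for R_cut by iterating
# R = sqrt((log(A) + n*log(R) - log(tau)) / theta).
def estimate_rcut(A, n, theta, tau, r0=10.0, n_iter=10):
    R = r0
    for _ in range(n_iter):
        R = np.sqrt(max(np.log(A) + n * np.log(R) - np.log(tau), 0.0) / theta)
    return R

# Example: overlap-type bound for two s-type primitives (l_mu = l_nu = 0).
alpha_mu, alpha_nu = 0.8, 0.3
theta = 1.0 / (1.0 / alpha_mu + 1.0 / alpha_nu)
print(estimate_rcut(A=1.0, n=1, theta=theta, tau=1e-8))
\end{verbatim}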
We first consider the SR-ERI for three primitive functions based on the multipole expansion estimator developed in Ye's work \onlinecite{Ye2021} \begin{align} g_{\mu\nu\kappa} &= \int \frac{\chi_\mu(\mathbf{r}_1)\chi_\nu(\mathbf{r}_1)\mathrm{erfc}(\omega r_{12})\chi_\kappa(\mathbf{r}_2)}{r_{12}} d\mathbf{r}^3_1 d^3\mathbf{r}_2 \nonumber\\ &\lesssim N_\mu N_\nu N_\kappa e^{-\theta_{\mu\nu}d_{\mu\nu}^2} \sum_{l}^{l_{\mu\nu}}|L_{\mu\nu}^{l}(d_{\mu\nu})| \frac{\pi^3 \nu_{l+l_\kappa}(\theta_{\mu\nu\kappa\omega}, R)}{\alpha_{\mu\nu}^{l+3/2}\alpha_\kappa^{l_\kappa+3/2}}, \label{eq:srj3c:me} \end{align} where \begin{gather} \theta_{\mu\nu} = (\alpha_\mu^{-1} + \alpha_\nu^{-1})^{-1}, \\ \theta_{\mu\nu\kappa\omega} = (\alpha_{\mu\nu}^{-1} + \alpha_\kappa^{-1} + \omega^{-2})^{-1}, \label{eq:def:theta} \\ d_{\mu\nu} = |\mathbf{R}_\mu - \mathbf{R}_\nu|, \\ R = |\mathbf{P}_{\mu\nu} - \mathbf{R}_\kappa|, \\ \mathbf{P}_{\mu\nu} = \frac{\alpha_\mu \mathbf{R}_\mu + \alpha_\nu \mathbf{R}_\nu}{\alpha_{\mu\nu}}. \label{eq:weighted:center} \end{gather} The notation $d_{\mu\nu}$ is the bra separation (between the centers of $\chi_\mu$ and $\chi_\nu$) and $R$ is the bra-ket separation. The effective potential $\nu_l(\theta,R)$ has an upper bound \begin{gather} \nu_l(\theta,R) = \frac{\Gamma(l+\frac{1}{2}, \theta R^2)}{\sqrt{\pi} R^{l+1}} \lesssim \frac{(\theta R)^l e^{-\theta R^2}}{\sqrt{\pi\theta} R^2} f_l(\theta R^2), \\ f_l(x) = \sum_{k=0}^{l-1} \frac{(2l-1)!!}{(2l-2k-1)!! (2x)^k}. \end{gather} Typically, $1 \leq f_{l}\lesssim 2$ when the bra-ket separation $R$ is reasonably large. Therefore, we can treat $f_l$ as a constant. By applying a factorization similar to that of the overlap integral \eqref{eq:factorize:ovlp}, we obtain the upper bound of the integral \eqref{eq:srj3c:me} \begin{align} g_{\mu\nu\kappa} &\lesssim N_\mu N_\nu N_\kappa e^{-\theta_{\mu\nu}d_{\mu\nu}^2} \Big(\frac{\pi^2}{\alpha_{\mu\nu}\alpha_{\kappa}}\Big)^{3/2} \sum_{k}^{l_{\mu\nu}} |L_{l_\mu,l_\nu}^{k}(d_{\mu\nu})| \frac{f_{k+l_\kappa} (\theta_{\mu\nu\kappa\omega} R)^{k+l_\kappa} e^{-\theta_{\mu\nu\kappa\omega} R^2}} {\sqrt{\pi\theta_{\mu\nu\kappa\omega}} R^2 \alpha_{\mu\nu}^{k}\alpha_{\kappa}^{l_\kappa}} \nonumber\\ &\leq \frac{Q_{\mu\nu}(R) N_\kappa f_l e^{-\theta_{\mu\nu\kappa\omega}R^2}} {\sqrt{\pi\theta_{\mu\nu\kappa\omega}} R^2} \Big(\frac{\pi}{\alpha_{\kappa}}\Big)^{3/2} \Big(\frac{\theta_{\mu\nu\kappa\omega}R}{\alpha_{\kappa}}\Big)^{l_{\kappa}} \nonumber\\ &\leq \frac{Q_{\mu\nu}(R) N_\kappa f_l e^{-\theta_{\mu\nu\kappa\omega}R^2}} {\sqrt{\pi\theta_{\mu\nu\kappa\omega}} R^2} \Big(\frac{\pi}{\alpha_{\kappa}}\Big)^{3/2} \Big(\frac{\omega^2 R}{\alpha_{\kappa}+\omega^2}\Big)^{l_\kappa}, \end{align} where \begin{equation} Q_{\mu\nu}(R) =N_\mu N_\nu e^{-\theta_{\mu\nu} d_{\mu\nu}^2} \Big(\frac{\pi}{\alpha_{\mu\nu}}\Big)^{3/2} \Big(\frac{\alpha_\nu d_{\mu\nu}}{\alpha_{\mu\nu}} + {\frac{\theta_{\mu\nu\kappa\omega}R}{\alpha_{\mu\nu}}}\Big)^{l_\mu} \Big(\frac{\alpha_\mu d_{\mu\nu}}{\alpha_{\mu\nu}} + {\frac{\theta_{\mu\nu\kappa\omega}R}{\alpha_{\mu\nu}}}\Big)^{l_\nu}. \label{eq:Qij} \end{equation} It is worth noting that $\theta_{\mu\nu\kappa\omega}$ is bounded from above, \begin{equation} \theta_{\mu\nu\kappa\omega} < (\alpha_{\mu\nu}^{-1} + \omega^{-2})^{-1}, \end{equation} and that \begin{equation} R < 2R_\text{cut} \end{equation} because the distance cutoff ensures that all primitive Gaussian functions and their products must lie inside a sphere of diameter $2R_\text{cut}$.
By considering these bounds, we obtain the upper bound of $Q_{\mu\nu}(R)$ \begin{equation} Q_{\mu\nu}^\text{u} = N_\mu N_\nu e^{-\theta_{\mu\nu} d_{\mu\nu}^2} \Big(\frac{\pi}{\alpha_{\mu\nu}}\Big)^{3/2} \Big(\frac{\alpha_\nu d_{\mu\nu}}{\alpha_{\mu\nu}} + \frac{2\omega^2 R_\text{cut}}{\alpha_{\mu\nu}+\omega^2}\Big)^{l_\mu} \Big(\frac{\alpha_\mu d_{\mu\nu}}{\alpha_{\mu\nu}} + \frac{2\omega^2 R_\text{cut}}{\alpha_{\mu\nu}+\omega^2}\Big)^{l_\nu} \end{equation} as well as the upper bound of $g_{\mu\nu\kappa}$ \begin{equation} g_{\mu\nu\kappa} \lesssim \frac{e^{-\theta_{\mu\nu\kappa\omega}R^2}}{R^2} \frac{Q_{\mu\nu}^\text{u} N_\kappa f_l}{\sqrt{\pi}\omega} \Big(\frac{\pi}{\alpha_{\kappa}}\Big)^{3/2} \Big(\frac{2\omega^2 R_\text{cut}}{\alpha_{\kappa}+\omega^2}\Big)^{l_\kappa}. \label{eq:eri3c:upperbound} \end{equation} For SR ERIs, the inequality above offers a more accurate upper bound estimation than the Schwarz inequality. In the crystalline integral program, we combine the two estimators: the Schwarz inequality is tested first for each $g_{\mu\nu\kappa}$ because it is simple and fast to compute. To reduce the cost of the inequality test \eqref{eq:eri3c:upperbound}, we precompute $Q_{\mu\nu}^\text{u}$ and the intermediate center $\mathbf{P}_{\mu\nu}$ and then adjust the integral screening threshold for each basis product $\chi_\mu \chi_\nu$. Additionally, $\theta_{\mu\nu\kappa\omega}$ can be precomputed and cached as well because many basis functions share the same exponent and the number of unique $\theta_{\mu\nu\kappa\omega}$ values is limited. When computing $g_{\mu\nu\kappa}$, we only require the coordinates of $\mathbf{P}_{\mu\nu}$ and of the auxiliary function $\chi_\kappa$ to compute $R^2$. Then we test whether \begin{equation} \frac{e^{-\theta_{\mu\nu\kappa\omega}R^2}}{R^2} \end{equation} exceeds the adjusted integral screening threshold (a schematic sketch of this two-level test is shown below). Next we consider the effects of the double lattice-sum in the integral \eqref{eq:j3c:double:lattice:sum}. Without loss of generality, we can assume that the center of $\chi_\kappa$ is at $\mathbf{0}$. In terms of the exponential part of the primitive three-center integral \eqref{eq:srj3c:me}, \begin{equation} g_{\mu\nu\kappa} \sim e^{-s}, \quad s = \theta_{\mu\nu}d_{\mu\nu}^2 + \theta_{\mu\nu\kappa\omega} R^2, \label{eq:j3c:asymptotic} \end{equation} we can find the asymptotic behavior of the integral \eqref{eq:j3c:double:lattice:sum} \begin{equation} \sum_{\mathbf{M}\mathbf{N}} g_{\mu^\mathbf{M}\nu^\mathbf{N}\kappa} \sim \sum_{\mathbf{N}} \sum_{\mathbf{M}-\mathbf{N}} e^{-\theta_{\mu\nu}|\mathbf{R}_{\mu^\mathbf{M}} - \mathbf{R}_{\nu^\mathbf{N}}|^2} e^{-\theta_{\mu\nu\kappa\omega} |\mathbf{P}_{\mu^\mathbf{M}\nu^\mathbf{N}}|^2}. \end{equation} This suggests that the contribution from the lattice-sum over $\mathbf{M}$ decays rapidly, and we can focus on the leading contribution from the lattice-sum over $\mathbf{N}$ \begin{equation} \sum_{\mathbf{M}\mathbf{N}} g_{\mu^\mathbf{M}\nu^\mathbf{N}\kappa} \sim \sum_{\mathbf{N}} \max_{\mathbf{R}_\mu}( e^{-\theta_{\mu\nu}|\mathbf{R}_\mu - \mathbf{R}_{\nu^\mathbf{N}}|^2} e^{-\theta_{\mu\nu\kappa\omega} |\mathbf{P}_{\mu\nu^\mathbf{N}}|^2}). \end{equation} The double lattice-sum in \eqref{eq:j3c:double:lattice:sum} is thus reduced to a single lattice-sum.
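The two-level screening test described above can be summarized in the following minimal sketch (pseudocode of the screening logic only; the actual PySCF routines are implemented in C, and the names and the lumped prefactor here are our assumptions):

\begin{verbatim}
import numpy as np

# Two-level screening of a primitive shell triplet (mu nu | kappa).
def keep_shell_triplet(q_bra, q_aux, fac, P_munu, R_kappa, theta, tau):
    # q_bra, q_aux: Schwarz factors of the bra product and the auxiliary shell
    # fac   : precomputed prefactor, Q^u_{mu nu} times the auxiliary-shell factors
    # P_munu: exponent-weighted center of the bra product
    # R_kappa: center of the auxiliary function
    # theta : precomputed theta_{mu nu kappa omega}
    # tau   : target integral precision
    if q_bra * q_aux < tau:                 # plain Schwarz inequality first
        return False
    R2 = np.sum((P_munu - R_kappa)**2)      # only R^2 is computed per triplet
    return fac * np.exp(-theta * R2) / R2 >= tau
\end{verbatim}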
Assuming that the remotest primitive function $\chi_\nu$ is centered at $\mathbf{R}_\text{cut}$, the maximum value of $g_{\mu\nu\kappa}$ is approximately found when the center of the primitive function $\chi_{\mu}$ is chosen at \begin{equation} \mathbf{R}_\mu = \frac{\alpha_{\mu\nu}\alpha_\nu - \alpha_\nu\theta_{\mu\nu\kappa\omega}} {\alpha_{\mu\nu}\alpha_\nu + \alpha_\mu\theta_{\mu\nu\kappa\omega}} \mathbf{R}_\text{cut} \label{eq:Rmu:opt} \end{equation} which minimizes the value of $s$ in \eqref{eq:j3c:asymptotic} \begin{equation} s^* = \theta_{\nu\kappa\omega} R_\text{cut}^2, \quad \theta_{\nu\kappa\omega} = (\alpha_\nu^{-1} + \alpha_\kappa^{-1} + \omega^{-2})^{-1} \end{equation} with $d_{\mu\nu}$ and $R$ given by \begin{gather} d_{\mu\nu} = \alpha_\nu^{-1} \theta_{\nu\kappa\omega} R_\text{cut}, \\ R = \theta_{\mu\nu\kappa\omega}^{-1} \theta_{\nu\kappa\omega} R_\text{cut}. \end{gather} At this configuration, we derive the upper bound of the primitive integral $g_{\mu\nu\kappa}$ \begin{equation} g_{\mu\nu\kappa} \lesssim \frac{2^{l_\mu} \pi^{5/2} N_\mu N_\nu N_\kappa e^{-s^*} \theta_{\mu\nu\kappa\omega}^{3/2}(\theta_{\nu\kappa\omega}R_\text{cut})^{l_{\mu\nu\kappa}-2}} {\alpha_{\mu\nu}^{l_{\mu}+3/2}\alpha_\kappa^{l_\kappa+3/2}\alpha_\nu^{l_\nu}} f_{l_{\mu\nu\kappa}}(\theta_{\mu\nu\kappa\omega}^{-1}\theta_{\nu\kappa\omega}^2 R_\text{cut}^2), \end{equation} which suggests the distance cutoff estimator for the three-center integral \eqref{eq:j3c:double:lattice:sum} \begin{align} \varepsilon &< \frac{1}{\Omega}\int_{R>R_\text{cut}} g_{\mu\nu\kappa}d^3\mathbf{R} \lesssim \frac{2\pi R_\text{cut}}{\Omega\theta_{\nu\kappa\omega}} g_{\mu\nu\kappa} < \tau. \label{eq:j3csr:estimator} \end{align} By carrying out a similar analysis for the two-center primitive SR-ERI, we can approximate its upper bound \begin{equation} g_{\mu\nu} \lesssim N_\mu N_\nu \frac{\pi^3 \nu_{l_{\mu\nu}}(\theta_{\mu\nu\omega}, R)}{\alpha_{\mu}^{l_\mu+3/2}\alpha_\nu^{l_\nu+3/2}}. \end{equation} After considering the lattice summation effect, we obtain the distance cutoff estimator for the two-center integral \eqref{eq:j2c:lattice:sum} \begin{equation} \varepsilon \lesssim \frac{2\pi^4 N_\mu N_\nu e^{-\theta_{\mu\nu\omega} R_\text{cut}^2} (\theta_{\mu\nu\omega} R_\text{cut})^{l_{\mu\nu}-1}} {\Omega\sqrt{\pi\theta_{\mu\nu\omega}} \alpha_{\mu}^{l_\mu+3/2}\alpha_\nu^{l_\nu+3/2}} < \tau. \end{equation} \subsection{Distance cutoff for RSJK} The four-center primitive SR-ERI has the approximate upper bound \begin{align} g_{\mu\nu\kappa\lambda} &= \int \frac{\chi_\mu(\mathbf{r}_1)\chi_\nu(\mathbf{r}_1) \mathrm{erfc}(\omega r_{12})\chi_\kappa(\mathbf{r}_2)\chi_\lambda(\mathbf{r}_2)}{r_{12}} d\mathbf{r}^3_1 d^3\mathbf{r}_2 \nonumber\\ &\lesssim N_\mu N_\nu N_\kappa N_\lambda e^{-\theta_{\mu\nu}d_{\mu\nu}^2} e^{-\theta_{\kappa\lambda}d_{\kappa\lambda}^2} \sum_{k}^{l_{\mu\nu}} \sum_{l}^{l_{\kappa\lambda}} |L_{l_\mu,l_\nu}^{k}(d_{\mu\nu}) L_{l_\kappa,l_\lambda}^{l}(d_{\kappa\lambda})| \frac{\pi^3 \nu_{k+l}(\theta_{\mu\nu\kappa\lambda\omega}, R)} {\alpha_{\mu\nu}^{k+3/2}\alpha_{\kappa\lambda}^{l+3/2}}, \end{align} where \begin{gather} \theta_{\mu\nu\kappa\lambda\omega} = (\alpha_{\mu\nu}^{-1} + \alpha_{\kappa\lambda}^{-1} + \omega^{-2})^{-1}, \\ R = |\mathbf{P}_{\mu\nu} - \mathbf{P}_{\kappa\lambda}|.
\end{gather} We then derive an approximate upper bound for the 4-center SR ERIs \begin{align} g_{\mu\nu\kappa\lambda} &\lesssim \frac{Q_{\mu\nu}(R) Q_{\kappa\lambda}(R) f_l e^{-\theta_{\mu\nu\kappa\lambda\omega}R^2}} {\sqrt{\pi\theta_{\mu\nu\kappa\lambda\omega}} R^2}. \end{align} This estimator can be combined with the Schwarz inequality to screen integrals in a manner similar to the 3-center integral screening scheme discussed in Section~\ref{sec:rcut:rsdf}. The triple lattice-sum in the integral \eqref{eq:j4c:triple:lattice:sum} can also be reduced to a single lattice-sum for reasons similar to those analyzed in Section~\ref{sec:rcut:rsdf}: the lattice-sums over $\mathbf{M}$ and $\mathbf{T}$ in \eqref{eq:j4c:triple:lattice:sum} decay exponentially, and only the lattice-sum over $\mathbf{N}$ needs to be analyzed. Asymptotically, \begin{equation} g_{\mu\nu\kappa\lambda} \sim e^{-s}, \quad s = \theta_{\mu\nu}d_{\mu\nu}^2+\theta_{\kappa\lambda}d_{\kappa\lambda}^2+\theta_{\mu\nu\kappa\lambda\omega} R^2. \label{eq:sr4c:asymptotic} \end{equation} Assuming that $\mathbf{R}_\lambda = \mathbf{0}$, the minimal value of $s$ can be found at \begin{gather} s^* = \theta_{\nu\lambda\omega} R_\nu^2, \quad \theta_{\nu\lambda\omega} = (\alpha_\nu^{-1} + \alpha_\lambda^{-1} + \omega^{-2})^{-1}, \end{gather} when the positions of the functions $\chi_{\mu}$ and $\chi_\kappa$ are chosen at \begin{gather} \mathbf{R}_\mu = \frac{\alpha_{\mu\nu}\alpha_\nu\alpha_\kappa\theta_{\mu\nu\kappa\lambda\omega} -\alpha_{\kappa\lambda}\alpha_\nu\alpha_\lambda\theta_{\mu\nu\kappa\lambda\omega} +\alpha_{\mu\nu}\alpha_\nu\alpha_{\kappa\lambda}\alpha_\lambda} {\alpha_{\mu\nu}\alpha_\nu\alpha_\kappa\theta_{\mu\nu\kappa\lambda\omega} +\alpha_{\kappa\lambda}\alpha_\mu\alpha_\lambda\theta_{\mu\nu\kappa\lambda\omega} +\alpha_{\mu\nu}\alpha_\nu\alpha_{\kappa\lambda}\alpha_\lambda} \mathbf{R}_\nu, \label{eq:Rmu:opt1} \\ \mathbf{R}_\kappa = \frac{\alpha_{\mu\nu}\alpha_\nu\alpha_{\kappa\lambda}\theta_{\mu\nu\kappa\lambda\omega}} {\alpha_{\mu\nu}\alpha_\nu\alpha_\kappa\theta_{\mu\nu\kappa\lambda\omega} +\alpha_{\kappa\lambda}\alpha_\mu\alpha_\lambda\theta_{\mu\nu\kappa\lambda\omega} +\alpha_{\mu\nu}\alpha_\nu\alpha_{\kappa\lambda}\alpha_\lambda} \mathbf{R}_\nu. \label{eq:Rkappa:opt} \end{gather} This configuration approximately corresponds to the maximum value of $g_{\mu\nu\kappa\lambda}$, \begin{equation} g_{\mu\nu\kappa\lambda} \lesssim \frac{2^{l_{\mu\kappa}} \pi^{5/2} N_\mu N_\nu N_\kappa N_\lambda e^{-s^*} \theta_{\mu\nu\kappa\lambda\omega}^{3/2} (\theta_{\nu\lambda\omega}R_\text{cut})^{l_{\mu\nu}+l_{\kappa\lambda}-2}} {\alpha_{\mu\nu}^{l_{\mu}+3/2}\alpha_{\kappa\lambda}^{l_{\kappa}+3/2} \alpha_\nu^{l_\nu}\alpha_\lambda^{l_\lambda}} f_{l_{\mu\nu\kappa\lambda}}(\theta_{\mu\nu\kappa\lambda\omega}^{-1} \theta_{\nu\lambda\omega}^2 R_\text{cut}^2). \end{equation} We then obtain the requirement on $R_\text{cut}$ for the 4-center SR ERIs of the RSJK algorithm \begin{align} \frac{2\pi R_\text{cut}}{\Omega\theta_{\nu\lambda\omega}} \frac{2^{l_{\mu\kappa}} \pi^{5/2} f_{l_{\mu\nu\kappa\lambda}} N_\mu N_\nu N_\kappa N_\lambda e^{-s^*} \theta_{\mu\nu\kappa\lambda\omega}^{3/2} (\theta_{\nu\lambda\omega}R_\text{cut})^{l_{\mu\nu}+l_{\kappa\lambda}-2}} {\alpha_{\mu\nu}^{l_{\mu}+3/2}\alpha_{\kappa\lambda}^{l_{\kappa}+3/2} \alpha_\nu^{l_\nu}\alpha_\lambda^{l_\lambda}} < \tau.
\label{eq:srj4c:me} \end{align} \subsection{Distance cutoff for CCDF} A regular three-center ERI is approximately\cite{Ye2021,Hollman2015,Valeev2020} \begin{equation} J_{\mu\nu,\kappa} \approx N_\mu N_\nu N_\kappa e^{-\theta_{\mu\nu}d_{\mu\nu}^2} \sum_{l}^{l_{\mu\nu}} L_{l_\mu,l_\nu}^{l}(d_{\mu\nu}) \frac{\pi^3(\Gamma(l+l_\kappa+\frac{1}{2}) - \Gamma(l+l_\kappa+\frac{1}{2}, \theta_{\mu\nu\kappa}R^2))} {\alpha_{\mu\nu}^{l+3/2}\alpha_\kappa^{l_\kappa+3/2}\sqrt{\pi} R^{l+l_\kappa+1}} . \end{equation} As shown in Eq. \eqref{eq:j3c:ccdf}, the compensated function $\chi_\xi$ and the auxiliary function $\chi_\kappa$ are combined when evaluating the analytical three-center integrals \begin{equation} J_{\mu\nu,\kappa} - J_{\mu\nu,\xi} \sim \frac{\Gamma(l+l_\kappa+\frac{1}{2}, \theta_{\mu\nu\eta}R^2) - \Gamma(l+l_\kappa+\frac{1}{2}, \theta_{\mu\nu\kappa}R^2)}{R^{l+l_\kappa+1}}, \label{eq:ccdf:j3c:asymptotic} \end{equation} where \begin{gather} \theta_{\mu\nu\kappa} = (\alpha_{\mu\nu}^{-1} + \alpha_\kappa^{-1})^{-1}, \\ \theta_{\mu\nu\eta} = (\alpha_{\mu\nu}^{-1} + \eta^{-1})^{-1}. \end{gather} In the CCDF algorithm, we always have $\theta_{\mu\nu\eta} < \theta_{\mu\nu\kappa}$ because the function $\chi_\xi$ is chosen to be the smoothest function. For sufficiently large $R$, the second $\Gamma$ function in Eq. \eqref{eq:ccdf:j3c:asymptotic} is negligible, giving \begin{equation} J_{\mu\nu,\kappa} - J_{\mu\nu,\xi} \lesssim N_\mu N_\nu N_\kappa e^{-\theta_{\mu\nu}d_{\mu\nu}^2} \sum_{l}^{l_{\mu\nu}} L_{l_\mu,l_\nu}^{l}(d_{\mu\nu}) \frac{\pi^3\Gamma(l+l_\kappa+\frac{1}{2}, \theta_{\mu\nu\eta}R^2)} {\alpha_{\mu\nu}^{l+3/2}\eta^{l_\kappa+3/2}\sqrt{\pi} R^{l+l_\kappa+1}}. \end{equation} Analysis similar to Section~\ref{sec:rcut:rsdf} can be carried out, which suggests the $R_\text{cut}$ estimator of the three-center integrals \eqref{eq:j3c:ccdf} for CCDF \begin{gather} \frac{2^{l_\mu+1} \pi^{7/2} N_\mu N_\nu N_\kappa e^{-s^*} \theta_{\mu\nu\eta}^{3/2}(\theta_{\nu\eta}R_\text{cut})^{l_{\mu\nu\kappa}-2}R_\text{cut}} {\Omega\alpha_{\mu\nu}^{l_{\mu}+3/2}\alpha_\kappa^{l_\kappa+3/2}\alpha_\nu^{l_\nu}\theta_{\nu\eta}} f_{l_{\mu\nu\kappa}}(\theta_{\mu\nu\eta}^{-1}\theta_{\nu\eta}^2 R_\text{cut}^2) < \tau \label{eq:ccdf:j3c:me} \end{gather} where \begin{gather} s^* = \theta_{\nu\eta} R_\text{cut}^2, \\ \theta_{\nu\eta} = (\alpha_{\nu}^{-1} + \eta^{-1})^{-1}. \end{gather} \subsection{Energy cutoff for four-center Coulomb integrals} The error of a two-electron ERI due to the energy cutoff $E_\text{cut}$ can be estimated as \begin{equation} \varepsilon(E_\text{cut}) = \frac{1}{\Omega} \sum_{|\mathbf{G}|^2>2E_\text{cut}} \frac{4\pi}{G^2} \rho_{\mu\nu}(\mathbf{G}) \rho_{\kappa\lambda}(-\mathbf{G}) < 16 \pi^2 \int_{\sqrt{2E_\text{cut}}}^\infty \rho_{\mu\nu}(G) \rho_{\kappa\lambda}(G) d G. \label{eq:ecut:error4c2e} \end{equation} Based on Eq.
\eqref{eq:aft:rhoij1}, the Fourier transform for orbital products, we obtain the leading term of $\rho_{\mu\nu}(G)$ \begin{equation} \rho_{\mu\nu}(G) = |\rho_{\mu\nu}(\mathbf{G})| \approx N_{\mu} N_{\nu} e^{-\frac{G^2}{4\alpha_{\mu\nu}}} e^{-\theta_{\mu\nu}d_{\mu\nu}^2} (\frac{G}{2\alpha_{\mu\nu}})^{l_{\mu\nu}} (\frac{\pi}{\alpha_{\mu\nu}})^{3/2}. \end{equation} Given the energy cutoff $E_\text{cut}$, the largest error for a density distribution $\rho_{\mu\nu}$ comes from the interaction between $\rho_{\mu\nu}$ and the most compact density $\rho_{\kappa\kappa}$ \begin{align} \varepsilon(E_\text{cut}) &< 16\pi^2 \int_{\sqrt{2E_\text{cut}}}^\infty \rho_{\mu\nu}(G) \rho_{\kappa\kappa}(G) d G \nonumber\\ &\approx \frac{16\pi^2 N_\mu N_\nu \theta_{\mu\nu\kappa\kappa} e^{-\theta_{\mu\nu}d_{\mu\nu}^2}} {(2l_\kappa-1)!!(2\alpha_{\mu\nu})^{l_{\mu\nu}}(4\alpha_\kappa)^{2l_\kappa}} \big(\frac{\pi^2}{2\alpha_{\mu\nu}\alpha_\kappa}\big)^{3/2} (2E_\text{cut})^{(l_{\mu\nu}+2l_\kappa-1)/2} e^{-\frac{E_\text{cut}}{2\theta_{\mu\nu\kappa\kappa}}}. \end{align} For the entire system, the energy cutoff error can be derived in terms of the interactions between $\rho_{\kappa\kappa}$ and itself \begin{equation} \varepsilon(E_\text{cut}) < 16\pi^2 \int_{\sqrt{2E_\text{cut}}}^\infty \rho_{\kappa\kappa}^2(G) d G. \end{equation} This error estimation then leads to an inequality of $E_\text{cut}$ with respect to the required precision $\tau$ \begin{equation} 8\pi^2 N_\kappa^4(\frac{\pi}{2\alpha_\kappa})^3 (\frac{E_\text{cut}}{8\alpha_\kappa^2})^{2l_\kappa-1/2} e^{-\frac{E_\text{cut}}{2\alpha_\kappa}} < \tau. \label{eq:kecut:4c} \end{equation} It should be noted that the $E_\text{cut}$ error estimation above is derived with the assumption that the Fourier transform of $\rho_{\mu\nu}(\mathbf{G})$ is analytically computed. If $\rho_{\mu\nu}(\mathbf{G})$ is computed with the fast Fourier transform (FFT) algorithm on $N$ discrete real-space grids \begin{equation} \rho_{\mu\nu}(\mathbf{G}) \sim \frac{\Omega}{N} \sum_{n}^N e^{-i \mathbf{G} \cdot \mathbf{r}_n} \phi_\mu^*(\mathbf{r}_n) \phi_\nu(\mathbf{r}_n), \end{equation} the $E_\text{cut}$ estimation \eqref{eq:kecut:4c} is not sufficient because the error of the FFT is not accounted for. When working with FFT-based two-electron integrals, one also needs to ensure that the Fourier transform for the orbital product is converged tightly to an error smaller than the required precision. The FFT electron density is \begin{equation} \mathrm{FFT}[\rho(\mathbf{r})] = \frac{\Omega}{N}\sum_{n=0}^N e^{-i\mathbf{G}\cdot\mathbf{r}_n}\rho(\mathbf{r}_n), \quad |\mathbf{G}| \leq \sqrt{2E_\text{cut}}. \end{equation} We can transform and split the electron density $\rho(\mathbf{r})$ according to the momenta of the plane waves \begin{align} \rho(\mathbf{r}) &=\frac{1}{\Omega}\sum_{|\mathbf{G}|=0}^{\infty} e^{i\mathbf{G}\cdot\mathbf{r}_n}\rho(\mathbf{G}) \\ &= \frac{1}{N}\sum_{G\leq\sqrt{2E_\text{cut}}} e^{i\mathbf{G}\cdot\mathbf{r}}\rho(\mathbf{G}) + \frac{1}{N}\sum_{G>\sqrt{2E_\text{cut}}} e^{i\mathbf{G}\cdot\mathbf{r}}\rho(\mathbf{G}). \end{align} The error of the FFT electron density is thereby approximately \begin{align} \mathrm{FFT}[\rho(\mathbf{r})] - \rho(\mathbf{G}) &=\Big(\frac{1}{N}\sum_{G'}^\infty\sum_{n=0}^N e^{-i\mathbf{G}\cdot\mathbf{r}_n} e^{i\mathbf{G}'\cdot\mathbf{r}_n}\rho(\mathbf{G}')\Big) -\rho(\mathbf{G}) \\ &=\sum_{G'>\sqrt{2E_\text{cut}}}\frac{1}{N}\sum_{n=0}^N e^{-i\mathbf{G}\cdot\mathbf{r}_n} e^{i\mathbf{G}'\cdot\mathbf{r}_n}\rho(\mathbf{G}') \\ &\lesssim\sum_{G'>\sqrt{2E_\text{cut}}} \rho(\mathbf{G}').
\end{align} The error for FFT two-electron integrals can be approximated as \begin{equation} \varepsilon \approx\frac{1}{\Omega}\sum_{|\mathbf{G}|=0}^{\sqrt{2E_\text{cut}}} V(\mathbf{G})[\mathrm{FFT}[\rho(\mathbf{r})] - \rho(\mathbf{G})] \lesssim v \sum_{G'>\sqrt{2E_\text{cut}}} \rho(\mathbf{G}'), \end{equation} where \begin{equation} v = \frac{1}{\Omega}\sum_{|\mathbf{G}|=0}^{\sqrt{2E_\text{cut}}} V(\mathbf{G}). \end{equation} In practice, we find that the energy cutoff for nuclear attraction integrals \eqref{eq:ecut:nuc} is sufficient to converge the FFT two-electron integrals. \subsection{Energy cutoff for three-center Coulomb integrals} The Fourier transform of a single Gaussian function is \begin{equation} \rho_\kappa(\mathbf{G}) = \int e^{-i \mathbf{G}\cdot \mathbf{r}} \chi_\kappa(\mathbf{r}) d^3 \mathbf{r} = \pi N_\kappa e^{-\frac{G^2}{4\alpha_\kappa}} \sum_k \begin{pmatrix} l_\kappa \\ k \end{pmatrix} \frac{(\frac{-iG_x}{2\alpha_\kappa})^{l_\kappa-k}\Gamma(\frac{k+1}{2})}{\alpha_\kappa^{(k+3)/2}}. \end{equation} For Coulomb interactions between $\rho_{\mu\nu}(G)$ and $\rho_\kappa(G)$, we can derive the $E_\text{cut}$ error, \begin{align} \varepsilon(E_\text{cut}) &< 16\pi^2 \int_{\sqrt{2E_\text{cut}}}^{\infty} \rho_{\mu\nu}(G) \rho_{\kappa}(G) dG \nonumber\\ &\approx\frac{32\pi^2N_\mu N_\nu N_\kappa e^{-\theta_{\mu\nu}d_{\mu\nu}^2}} {(2\alpha_{\mu\nu})^{l_{\mu\nu}-1}(2\alpha_\kappa)^{l_\kappa}} \Big(\frac{\pi^2}{\alpha_{\mu\nu}\alpha_\kappa}\Big)^{\frac{3}{2}} (2E_\text{cut})^{\frac{l_{\mu\nu}+l_\kappa-1}{2}} e^{-\frac{E_\text{cut}}{2\theta_{\mu\nu\kappa}}} < \tau, \label{eq:ecut:error3c2e} \end{align} where \begin{equation*} \theta_{\mu\nu\kappa} = (\alpha_{\mu\nu}^{-1} + \alpha_\kappa^{-1})^{-1}. \end{equation*} \subsection{Energy cutoff for nuclear attraction integrals} \label{sec:nuc} When calculating nuclear attraction integrals, we can use steep s-type Gaussian functions to mimic the charge distribution of point nuclear charges \begin{equation} \chi_\kappa = \sum_A \lim_{\zeta\rightarrow\infty}\Big(\frac{\pi}{\zeta}\Big)^{3/2} Z_A e^{-\zeta |r-R_A|^2}. \end{equation} The three-center energy cutoff analysis \eqref{eq:ecut:error3c2e} can readily be used to estimate $E_\text{cut}$ for nuclear attraction integrals by setting $l_\kappa=0$ and $\alpha_\kappa\rightarrow\infty$ \begin{equation} \varepsilon(E_\text{cut}) \lesssim\frac{32\pi^2N_\mu N_\nu e^{-\theta_{\mu\nu}d_{\mu\nu}^2}} {(2\alpha_{\mu\nu})^{l_{\mu\nu}-1}} \Big(\frac{\pi}{\alpha_{\mu\nu}}\Big)^{\frac{3}{2}} (2E_\text{cut})^{\frac{l_{\mu\nu}-1}{2}} e^{-\frac{E_\text{cut}}{2\alpha_{\mu\nu}}} < \tau. \label{eq:ecut:nuc} \end{equation} \subsection{Energy cutoff for LR integrals} For LR integrals, the energy cutoff is primarily determined by the Gaussian factor in the LR Coulomb kernel \eqref{eq:coul:lr}. Similar to the case of the full Coulomb kernel, we only need to consider the most compact orbital products in the system to estimate $E_\text{cut}$. By carrying out the analysis discussed in the previous sections, we obtain the truncation error as well as the $E_\text{cut}$ inequality for the four-center LR integrals \begin{equation} \varepsilon(E_\text{cut}) < \frac{32\pi^2 \theta_{\mu\omega} (2E_\text{cut})^{2l_\mu-1}}{((4l_\mu-1)!!)^2} e^{-\frac{E_\text{cut}}{2\theta_{\mu\omega}}} < \tau \label{eq:ecut:lr4c2e} \end{equation} where \begin{equation} \theta_{\mu\omega} = (\alpha_\mu^{-1} + \omega^{-2})^{-1}. \end{equation} To ensure that $(4l_\mu-1)!!$ has a meaningful value for all angular momenta $l_{\mu}$, the convention $(-1)!!
= 1$ is assumed. For the three-center LR integrals, the $E_\text{cut}$ estimator can be derived in a similar fashion \begin{equation} \varepsilon(E_\text{cut}) < \frac{32\pi^2 \theta_{\mu\mu\kappa\omega} 2^{l_\kappa+3/4} (2E_\text{cut})^{l_\mu+(l_\kappa-1)/2}} {(4l_\mu-1)!!\sqrt{(4l_\kappa-1)!!}} \Big(\frac{\pi}{\alpha_\kappa}\Big)^{\frac{3}{4}} e^{-\frac{E_\text{cut}}{2\theta_{\mu\mu\kappa\omega}}} < \tau. \label{eq:ecut:lr3c2e} \end{equation} \iffalse \subsection{Coulomb attenuation parameter in RSDF and RSJK algorithms} The Coulomb attenuation parameter can be widely found in the $R_\text{cut}$ estimator and $E_\text{cut}$ estimator. In the SR-ERI estimators \eqref{eq:srj3c:me} and \eqref{eq:srj4c:me}, the estimated error mostly depends on the parameter $\theta_{\mu\nu\kappa\lambda\omega}$ \begin{equation} \varepsilon \sim e^{-\theta_{\mu\nu\kappa\lambda\omega}R_\text{cut}^2}. \end{equation} A large value of $\theta_{\mu\nu\kappa\lambda\omega}$ is favored because that helps SR-ERIS decay against $R_\text{cut}$. The value of $\theta_{\mu\nu\kappa\lambda\omega}$ is slightly smaller than the smallest one among $\alpha_{\mu\nu}$, $\alpha_{\kappa\lambda}$ (or $\alpha_\kappa$ for the case of three-center SR-ERIs), and $\omega^2$. $\omega^2$ is likely the smallest one among the three terms thus a rough simplification for $\varepsilon$ is \begin{equation} \varepsilon \sim e^{-\omega^2 R_\text{cut}^2}. \end{equation} Given $R_\text{cut}$ in a reasonable range the lower bound of $\omega$ can be estimated. The upper bound of $\omega$ is related to the LR ERI estimators \eqref{eq:ecut:lr4c2e} and \eqref{eq:ecut:lr3c2e} \begin{equation} \varepsilon \sim e^{-\frac{E_\text{cut}}{2\theta_{\mu\omega}}}. \end{equation} By construction, basis $\chi_\mu$ in energy cutoff estimation is the most compact function in the system. We can safely assume $\omega^2 \ll \alpha_\mu$ and obtain the upper bound of $\omega$ \begin{equation} \varepsilon \sim e^{-\frac{E_\text{cut}}{2\omega^2}}. \end{equation} Using the upper bound and lower bound estimators above, we can find a rough range for the Coulomb attenuation parameter with precision $\tau$, and cutoffs $R_\text{cut}$ and $E_\text{cut}$ \begin{equation} \frac{\sqrt{-\ln{\tau}}}{R_\text{cut}} < \omega < \sqrt{\frac{E_\text{cut}}{-2\ln{\tau}}}. \end{equation} The number of integrals due to a single lattice-sum is roughly proportional to $R_\text{cut}^3$. There are three nested lattice-sums in equation \eqref{eq:eri4c2e:rs}. The cost scaling for SR-ERIs is proportional to the $R_\text{cut}^9$ \begin{equation} R_\text{cut}^9 \sim \frac{(-\ln{\tau})^{9/2}}{\omega^9} \end{equation} The number of plane waves is proportional to $E_\text{cut}^{3/2}$. The cost of Fourier transform \eqref{eq:aft:rhoij} is proportional to $E_\text{cut}^{3/2}R_\text{cut}^3$ because it involves one lattice-sum for each plane-wave \begin{equation} E_\text{cut}^{3/2}R_\text{cut}^3 \sim (\omega^2\ln{\tau})^{3/2} \frac{(-\ln{\tau})^{3/2}}{\omega^3} \sim (-\ln{\tau})^3 \end{equation} The cost for SR-ERIs is sensitive to the value of $\omega$ (proportional to $\omega^{-9}$ for 4-center integrals or $\omega^{-6}$ for 3-center integrals) while $\omega$ has less impact on AFT. We thus tend to set the value of $\omega$ close to its upper bound in practice. 
Using the upper bound for $\omega$, to find a balance between the cost of SR ERIs and LR AFT, we have the relation based on $E_\text{cut}$ for 4-center ERIs \begin{equation} \frac{(-\ln{\tau})^9}{E_\text{cut}^{9/2}} \sim N_\mathbf{k}(-\ln{\tau})^3 \end{equation} and for SR 3-center ERIs \begin{equation} \frac{(-\ln{\tau})^6}{E_\text{cut}^3} \sim N_\mathbf{k}(-\ln{\tau})^3 \end{equation} The energy cutoff should be chosen \begin{equation} E_\text{cut} \propto N_\mathbf{k}^{-2/9}(-\ln{\tau})^{4/3} \end{equation} for 4-center ERIs and \begin{equation} E_\text{cut} \propto N_\mathbf{k}^{-1/3} \end{equation} for 3-center ERIs. \fi \iffalse \subsection{Compensated charge function in CCDF} Based on the estimator \eqref{eq:ccdf:j3c:me} and \eqref{eq:ecut:error3c2e} we can find the upper and lower bounds for the exponent of charge-compensated function \begin{equation} \frac{-\ln{\tau}}{R_\text{cut}^2} < \eta < \frac{E_\text{cut}}{-2\ln{\tau}}. \end{equation} \fi \section{Numerical tests and discussion} \label{sec:tests} \subsection{Distance cutoff for overlap integrals} \begin{table} \centering \caption{Relative error for overlap integral distance cutoff estimation} \label{tab:ovlp:estimation} \begin{tabular}{llllll} \hline $l_\mu$ & $l_\nu$ & $\alpha_\mu = \alpha_\nu$ & $\alpha_\mu = 2\alpha_\nu$ & $\alpha_\mu = 5\alpha_\nu$ & $\alpha_\mu = 100\alpha_\nu$ \\ \hline 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0.003 & 0.005 & 0.007 & 0.028 \\ 0 & 2 & 0.006 & 0.009 & 0.015 & 0.058 \\ 0 & 3 & 0.009 & 0.013 & 0.022 & 0.093 \\ 0 & 4 & 0.012 & 0.017 & 0.029 & 0.134 \\ 1 & 0 & 0.003 & 0.002 & 0.001 & 0 \\ 1 & 1 & 0.007 & 0.007 & 0.009 & 0.026 \\ 1 & 2 & 0.010 & 0.011 & 0.015 & 0.053 \\ 1 & 3 & 0.012 & 0.015 & 0.022 & 0.084 \\ 1 & 4 & 0.015 & 0.019 & 0.028 & 0.120 \\ 2 & 0 & 0.006 & 0.005 & 0.003 & 0 \\ 2 & 1 & 0.010 & 0.009 & 0.010 & 0.025 \\ 2 & 2 & 0.012 & 0.013 & 0.016 & 0.050 \\ 3 & 0 & 0.009 & 0.007 & 0.004 & 0.001 \\ 3 & 1 & 0.012 & 0.011 & 0.011 & 0.024 \\ 3 & 2 & 0.015 & 0.015 & 0.017 & 0.048 \\ 4 & 4 & 0.022 & 0.024 & 0.030 & 0.102 \\ \hline \end{tabular} \end{table} During the derivations of $R_\text{cut}$ and $E_\text{cut}$, the factorization approximation \eqref{eq:factorize:ovlp} is widely applied in almost every integral. To measure the effectiveness of this approximation, we compared $R_\text{cut}$ estimated by the overlap estimator \eqref{eq:factorize:ovlp} to the precise $R_\text{cut}$ obtained by a bisection search on the exact overlap integrals. For given angular momenta of bra and ket, we noticed that the relative error for $R_\text{cut}$ only depends on the ratio between the Gaussian exponents of bra and ket. Table \ref{tab:ovlp:estimation} summarizes the relative errors for various types of Gaussian basis functions. When the two basis functions have similar shapes (exponent ratio $<$ 5), the $R_\text{cut}$ errors are small (typically less than 3\%). However, high angular momentum can slightly increase the error. When bra and ket have very different shapes, the errors can increase to around 10\%. Nevertheless, the factorization approximation provides a good estimation of $R_\text{cut}$ for overlap integrals. \subsection{Errors for ERIs} In an SCF calculation, the errors of the distance cutoff and energy cutoff estimations may be influenced by several factors, such as the basis set, the size of the unit cell, the k-point mesh, and the Coulomb attenuation parameter.
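Before examining these factors one by one, we note how a cutoff is extracted from an estimator in practice. Since the estimators derived above decrease monotonically with $R_\text{cut}$ (or $E_\text{cut}$) in the relevant range, the smallest cutoff satisfying the precision requirement $\tau$ can be located by a simple bisection search, which is also how the reference $R_\text{cut}$ values behind Table~\ref{tab:ovlp:estimation} were obtained from the exact overlap integrals. The sketch below is illustrative only: the function \texttt{estimator} is a placeholder using the bare Gaussian decay $e^{-\theta R_\text{cut}^2}$ rather than any of the full bounds, and the function names are not part of any actual implementation.
\begin{verbatim}
# Minimal sketch: find the smallest R_cut with estimator(R_cut) < tau,
# assuming the estimator decreases monotonically with R_cut.
import math

def estimator(rcut, theta=0.3):
    # Placeholder bound ~ exp(-theta * R_cut^2); a real implementation
    # would evaluate one of the distance-cutoff estimators derived above.
    return math.exp(-theta * rcut**2)

def solve_rcut(tau, lo=0.0, hi=200.0, iters=60):
    # Bisection: keep lo where the bound is violated, hi where it holds.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if estimator(mid) > tau:
            lo = mid
        else:
            hi = mid
    return hi

print(solve_rcut(1e-8))   # ~ sqrt(-ln(1e-8) / 0.3)
\end{verbatim}
The same bisection applies to the $E_\text{cut}$ inequalities, with the estimator evaluated as a function of the energy cutoff instead.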
To evaluate the impact of these factors on the cutoff estimations, we computed ERIs with the range-separated algorithms and compared them to the benchmark data generated with the reciprocal-space formula \eqref{eq:aft:eri} with very tight accuracy requirements ($\tau=10^{-16}$). Unless otherwise specified in each individual test, the test system has one $s$-type primitive Gaussian function with exponent $\alpha=1.0$ inside a cubic cell with an edge length of $a=1.5$. The Coulomb attenuation parameter for the range-separated algorithms is set to $\omega=0.5$. The Gamma point is adopted for the integral computation. In the range-separated algorithm setups, we solve $R_\text{cut}$ and $E_\text{cut}$ for various precision requirements ($\tau=10^{-5}$ to $\tau=10^{-12}$) and then transform $R_\text{cut}$ and $E_\text{cut}$ to the lattice-sum range and the plane-wave summation range using the transformation equations \eqref{eq:rcut2latticesum} and \eqref{eq:kecut2nz}. These are used in the triple lattice-sum for the short-range part, Eq. \eqref{eq:j4c:triple:lattice:sum}, and in the summation over plane waves for the long-range part, Eq. \eqref{eq:int2e:lr}. We found that the accuracy is well-controlled in most tests, in the sense that errors are reduced to a value near or slightly under the desired accuracy as we increase the precision requirements. This indicates that computational effort is being properly utilized without being wasted on unintended extra accuracy. Error underestimation is only observed in a few difficult configurations. \begin{itemize} \item The impact of lattice parameters is exhibited in Figure \ref{fig:eri:a}. In this test, we changed the cell edge length from $a=1.0$~\AA~ to $a=2.5$~\AA. Generally, small cells lead to larger errors than big cells. Good accuracy can be achieved with moderate precision settings up to $10^{-10}$. When the required precision is tighter than $10^{-11}$, one may only achieve $10^{-10}$ accuracy for the cell with edge length $a=1.0$~\AA. One possible reason for the error is the numerical uncertainties in the underlying integral library\cite{Sun2015}. To confirm that the error is not caused by approximations in the cutoff estimators, we manually increased the values of $R_\text{cut}$ and $E_\text{cut}$ and found that the accuracy was not improved with larger values of $R_\text{cut}$ or $E_\text{cut}$. For the system with $a=1.0$ at $\tau=10^{-11}$, the triple lattice-sum in Eq. \eqref{eq:j4c:triple:lattice:sum} involves about $1600^3$ primitive SR-ERIs. Errors in individual primitive integrals, even round-off errors, can easily accumulate to a magnitude of around $10^{-10}$. \item The k-point factor. Integrals $(\phi_\mu^{\mathbf{k}_1}\phi_\mu^{\mathbf{k}_2}|\phi_\mu^{\mathbf{k}_2}\phi_\mu^{\mathbf{k}_1})$ are computed for k-point grids $N_k=1^3 \dots 4^3$ (Figure \ref{fig:eri:kpts}). We find similar accuracy performance in all k-point test cases. All calculations with the various accuracy specifications reach the required accuracy. \item Basis effects. We first tested the effect of basis function compactness by varying the Gaussian function exponent from $\alpha=0.2$ to $\alpha=5.0$ (Figure \ref{fig:eri:exp}). For relatively compact basis functions, errors decrease as expected as we tighten the precision requirements. For diffuse basis functions, this trend only holds up to a precision of around $10^{-9}$. Errors may also be attributed to the numerical uncertainties in primitive integrals.
For the basis function with $\alpha=0.2$ at $\tau=10^{-10}$, $1500^3$ primitive SR-ERIs have to be included in the triple lattice-sum. In the test of basis angular momentum effects, basis functions with angular momenta $l=0\dots3$ are examined (Figure \ref{fig:eri:l}). The results show that the accuracy is manageable in all test cases, indicating that the angular momentum of a basis function is not a significant factor in the $R_\text{cut}$ and $E_\text{cut}$ estimation. \item In Figure \ref{fig:eri:omega}, we show the errors for various Coulomb attenuation parameters ($\omega=0.2$ to $\omega=2.0$). Similar to the trends found in the lattice parameter and basis function compactness tests, errors for small values of $\omega$ are larger than those for large values of $\omega$. For small $\omega$, the accuracy is limited to around $10^{-10}$ because of numerical uncertainties. Cutoffs are slightly overestimated for large $\omega$. \end{itemize} \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{test_a} \caption{Accuracy tests for lattice parameters} \label{fig:eri:a} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{test_kpts} \caption{Accuracy tests for k-points} \label{fig:eri:kpts} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{test_exp} \caption{Accuracy tests for exponents of Gaussian basis} \label{fig:eri:exp} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{test_l} \caption{Accuracy tests for angular momentum of Gaussian basis} \label{fig:eri:l} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{test_omega} \caption{Accuracy tests for Coulomb attenuation parameters} \label{fig:eri:omega} \end{figure} \section{Conclusions} In this work, we provide a comprehensive analysis of the integral upper bounds and cutoff estimators for the integral algorithms implemented in PySCF. The distance and energy cutoff estimators derived from the upper bound estimation are shown to be accurate enough to reach the required precision for the range-separated integral algorithms while ensuring that computational resources are efficiently utilized. Our numerical tests show that the estimators are stable and reliable across the various factors in routine crystalline calculations, such as k-point meshes, basis sets, unit cell sizes, and Coulomb attenuation parameters. Uncertainties around $10^{-9}$ may be encountered in certain cases involving diffuse basis functions, small unit cells, or small Coulomb attenuation parameters. Based on the integral estimation derived in this work, we expect that more aggressive optimization of crystalline integral programs can be carried out. Comprehensive algorithm and integral screening schemes will need to be designed. Additional technical details will be considered in future work. \clearpage
{ "arxiv_id": "2302.11337", "language": "en", "timestamp": "2023-02-23T02:13:59", "url": "https://arxiv.org/abs/2302.11337", "yymm": "2302" }
\chapter*{\centering \begin{normalsize}Preface\end{normalsize}} In 1954, Alston S. Householder published \textit{Principles of Numerical Analysis}, one of the first modern treatments of matrix decomposition, favoring the (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks and to its ability to reduce the dimensionality of the data and represent it in a form that is easier for machine learning algorithms to process. Bayesian matrix decomposition is a relatively new field within the broader area of matrix decomposition and machine learning. The concept of Bayesian matrix decomposition is rooted in Bayesian statistics: it combines the principles of Bayesian statistics with matrix decomposition methods to perform matrix factorization. The use of Bayesian methods in matrix decomposition was first introduced in the early 2000s, with the aim of addressing the limitations of traditional matrix factorization techniques, e.g., limited explanatory and predictive performance. The sole aim of this book is to give a self-contained introduction to the concepts and mathematical tools of Bayesian matrix decomposition in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning Bayesian matrix decomposition within the scope of this discussion, e.g., a separate analysis of variational inference for conducting the optimization. We refer the reader to the literature in the field of Bayesian analysis for a more detailed introduction to the related fields. This book is primarily a summary of the purpose and significance of important Bayesian matrix decomposition methods, e.g., real-valued decomposition, nonnegative matrix factorization, and Bayesian interpolative decomposition, and of the origin and complexity of these methods, which sheds light on their applications. The mathematical prerequisite is a first course in statistics and linear algebra. Other than this modest background, the development is self-contained, with rigorous proofs provided throughout. \chapter*{\centering \begin{normalsize}Keywords\end{normalsize}} Bayesian inference, Gibbs sampling, Conjugate model, Alternating least squares (ALS), Multiplicative update, Real-valued matrix decomposition, Nonnegative matrix decomposition, Interpolative decomposition, Ordinal matrix decomposition, Poisson matrix decomposition. \vspace{5em} \noindent \textit{Acknowledgment: } We would like to express gratitude towards Ulrich Paquet for providing information on Bayesian ordinal matrix decomposition and towards Gilbert Strang for discussing the proof of the CUR decomposition. Additionally, we would like to thank Federico Poloni for his comment on the proof of alternating least squares. The author also wishes to acknowledge the cooperation of Joerg Osterrieder, Christine P. Chai, and Xuanyu Ye towards the Bayesian approach for nonnegative matrix factorization and interpolative decomposition. Their cooperation has helped shape the form of many discussions in the book.
\newpage \begingroup \hypersetup{linkcolor=winestain} \dominitoc \pdfbookmark{\contentsname}{toc} \tableofcontents \listoffigures \endgroup \input{notation} \mainmatter \input{chapter-intro.tex} \input{chapter-mcmc} \input{chapter-univariate} \input{chapter-multivariate} \input{chapter-mf-als} \input{chapter-mf-nmf} \input{chapter-bmf-real} \input{chapter-bmf-nmf} \input{chapter-poisson} \input{chapter-ordinal} \input{chapter-bmf-bid} \input{chapter-appendix} \newpage \vskip 0.2in \part{Appendix} \chapter{Bayesian Interpolative Decomposition} \begingroup \hypersetup{linkcolor=winestain} \minitoc \newpage \endgroup \section{Interpolative Decomposition (ID)} Low-rank real-valued or nonnegative matrix factorization is essential in modern data science. Low-rank matrix approximation with respect to the Frobenius norm, i.e., minimizing the sum of squared differences to the target matrix, can be easily solved with the singular value decomposition (SVD) or with the Bayesian real-valued/nonnegative matrix decomposition methods. For many applications, however, it is sometimes advantageous to work with a basis that consists of a subset of the columns of the observed matrix itself \citep{halko2011finding, martinsson2011randomized}. The interpolative decomposition (ID) provides one such approximation. The distinguishing feature of the ID is that we can reuse columns from the original matrix. This enables it to preserve matrix properties such as sparsity and nonnegativity, which also helps reduce memory usage. ID is widely used as a feature selection tool that extracts the essence of the data and makes it possible to deal with big data that may originally be too large to fit into RAM. In addition, via these methods we can remove the irrelevant parts of the data, which consist of errors and redundant information \citep{liberty2007randomized, halko2011finding, martinsson2011randomized, ari2012probabilistic, lu2022bayesian, lu2022feature}. Locating the indices associated with the spanning columns is frequently valuable for the purpose of data interpretation and analysis. It can be very useful to identify a subset of the columns that distills the information in the matrix. When the columns of the observed matrix have some specific interpretation, e.g., they are transactions in a transaction data set, the columns of the factored matrix in ID retain the same meaning as well. The column ID\footnote{The \textit{column ID} will simply be referred to as \textit{ID} without further clarification.} factors a matrix into the product of two matrices, one of which consists of selected columns of the original matrix, and the other of which contains an identity submatrix and has all of its entries no greater than 1 in absolute value. We first state and demonstrate the existence of the \textit{exact ID} in the following theorem, and we will later describe the \textit{low-rank ID} through Bayesian approaches. \begin{theorem}[Column Interpolative Decomposition]\label{theorem:interpolative-decomposition} Any rank-$R$ matrix $\bm{A} \in \mathbb{R}^{M \times N}$ can be factored as $$ \underset{M \times N}{\bm{A}} = \underset{M\times R}{\bm{C}} \gap \underset{R\times N}{\bm{W}}, $$ where $\bm{C}\in \mathbb{R}^{M\times R}$ is some $R$ linearly independent columns of $\bm{A}$, and $\bm{W}\in \mathbb{R}^{R\times N}$ is the matrix used to reconstruct $\bm{A}$, which contains an $R\times R$ identity submatrix (under a mild column permutation).
Specifically, entries in $\bm{W}$ have values no larger than 1 in magnitude: $$ \max \abs{w_{ij}}\leq 1, \,\, \forall \,\, i\in [1,R], j\in [1,N]. $$ The storage required for the decomposition is then reduced (or potentially increased) from $MN$ floating-point numbers to $MR$ and $(N-R)R$ floats for storing $\bm{C}$ and $\bm{W}$, respectively, plus an extra $R$ integers to remember the position of each column of $\bm{C}$ within $\bm{A}$. \end{theorem} \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{imgs/id-column.pdf} \caption{Demonstration of the column ID of a matrix where the \textcolor{mydarkyellow}{yellow} vectors denote the linearly independent columns of $\bm{A}$, white entries denote zero, and \textcolor{mydarkpurple}{purple} entries denote one.} \label{fig:column-id} \end{figure} While we claim that entries in $\bm{W}$ have magnitudes no greater than 1, a weaker construction assumes that no entry of $\bm{W}$ has an absolute value exceeding 2. The illustration of the column ID is shown in Figure~\ref{fig:column-id}, where the \textcolor{mydarkyellow}{yellow} vectors denote the linearly independent columns of $\bm{A}$ and the \textcolor{mydarkpurple}{purple} vectors in $\bm{W}$ form an $R\times R$ identity submatrix. The positions of the \textcolor{mydarkpurple}{purple} vectors inside $\bm{W}$ are identical to the placements of the corresponding \textcolor{mydarkyellow}{yellow} vectors within $\bm{A}$. The column ID is very similar to the \textit{CR decomposition}: both select $R$ linearly independent columns into the first factor, and the second factor comprises an $R\times R$ identity submatrix \citep{strang2021every, stranglu, lu2021numerical}. The difference between the two is that the CR decomposition precisely selects the first $R$ linearly independent columns into the first factor and the identity submatrix appears in the pivot positions. More importantly, the second factor in the CR decomposition comes from the \textit{reduced row echelon form (RREF)}. Therefore, the column ID can also be utilized in the same applications as the CR decomposition, e.g., proving the rank-equals-trace property of idempotent matrices \citep{lu2021numerical}, and demonstrating the elementary theorem in linear algebra that the column rank of a matrix equals its row rank \citep{lu2021column}. Moreover, the column ID is also a special case of the \textit{rank decomposition} and is not unique \citep{lu2021numerical}. \paragraph{Notations that will be extensively used in the sequel.} Following again the Matlab-style notation, if $J$ is an index vector of size $R$ that contains the indices of the columns selected from $\bm{A}$ into $\bm{C}$, then $\bm{C}$ can be denoted as $\bm{C}=\bm{A}[:,J]$ (Definition~\ref{definition:matlabnotation}, p.~\pageref{definition:matlabnotation}). The matrix $\bm{C}$ contains ``skeleton'' columns of $\bm{A}$. From the ``skeleton'' index vector $J$, the $R\times R$ identity matrix inside $\bm{W}$ can be recovered by $$ \bm{W}[:,J] = \bm{I}_R \in \mathbb{R}^{R\times R}. $$ Suppose further we put the remaining indices of $\bm{A}$ into an index vector $I$ where $$ J\cap I=\varnothing \qquad \text{and}\qquad J\cup I = \{1,2,\ldots, N\}.
$$ The remaining $N-R$ columns in $\bm{W}$ constitute an $R\times (N-R)$ \textit{expansion matrix} since the matrix contains \textit{expansion coefficients} to reconstruct the columns of $\bm{A}$ from $\bm{C}$: $$ \bm{E} = \bm{W}[:,I] \in \mathbb{R}^{R\times (N-R)}, $$ where the entries of $\bm{E}$ are known as the \textit{expansion coefficients}. Moreover, let $\bm{P}\in \mathbb{R}^{N\times N}$ be a (column) permutation matrix (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}) defined by $\bm{P}=\bm{I}_N[:,(J, I)]$ such that $$ \bm{A}\bm{P} = \bm{A}[:,(J, I)] = \left[\bm{C}, \bm{A}[:,I]\right], $$ and \begin{equation}\label{equation:interpolatibve-w-ep} \bm{W}\bm{P} = \bm{W}[:,(J, I)] =\left[\bm{I}_R, \bm{E} \right] \qquad\underrightarrow{ \text{leads to} }\qquad \bm{W} = \left[\bm{I}_R, \bm{E} \right] \bm{P}^\top. \end{equation} \section{Existence of the Column Interpolative Decomposition}\label{section:proof-column-id} \paragraph{Cramer's rule.} The proof of the existence of the column ID relies on Cramer's rule, which will be briefly covered in the following discussion. Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, and it is valid whenever the system has a unique solution, i.e., whenever the underlying matrix is nonsingular. Consider a system of $n$ linear equations for $n$ unknowns, represented in the following matrix multiplication form: \index{Cramer's rule} $$ \bm{M} \bm{x} = \bm{l}, $$ where $\bm{M}\in \mathbb{R}^{n\times n}$ is nonsingular and $\bm{x},\bm{l} \in \mathbb{R}^n$. Then the theorem states that, in this case, the system has a unique solution, whose individual values for the unknowns are given by: $$ x_i = \frac{\mathrm{det}(\bm{M}_i)}{\mathrm{det}(\bm{M})}, \qquad \text{for all}\gap i\in \{1,2,\ldots, n\}, $$ where $\bm{M}_i$ is the matrix formed by replacing the $i$-th column of $\bm{M}$ with the column vector $\bm{l}$. In full generality, Cramer's rule considers the matrix equation $$ \bm{M}\bm{X} = \bm{L}, $$ where $\bm{M}\in \mathbb{R}^{n\times n}$ is nonsingular and $\bm{X},\bm{L}\in \mathbb{R}^{n\times m}$. Let $I_c=[i_1, i_2, \ldots, i_k]$ and $J_c=[j_1,j_2,\ldots, j_k]$ be two index vectors, where $1\leq i_1\leq i_2\leq \ldots\leq i_k\leq n$ and $1\leq j_1\leq j_2\leq \ldots\leq j_k\leq m$. Then $\bm{X}[I_c,J_c]$ is a $k\times k$ submatrix of $\bm{X}$. Let further $\bm{M}_{\bm{L}}(I_c,J_c)$ be the $n\times n$ matrix formed by replacing the $(i_s)$-th column of $\bm{M}$ with the $(j_s)$-th column of $\bm{L}$ for all $s\in \{1,2,\ldots, k\}$. Then $$ \mathrm{det}(\bm{X}[I_c,J_c]) = \frac{\mathrm{det}\left(\bm{M}_{\bm{L}}(I_c,J_c)\right)}{\mathrm{det}(\bm{M})}. $$ When $I_c$ and $J_c$ are of size 1, it follows that \begin{equation}\label{equation:cramer-rule-general} x_{ij} = \frac{\mathrm{det}\left(\bm{M}_{\bm{L}}(i,j)\right)}{\mathrm{det}(\bm{M})}. \end{equation} Now we are ready to prove the existence of the column ID. \begin{proof}[of Theorem~\ref{theorem:interpolative-decomposition}] We have mentioned above that the proof relies on Cramer's rule. If we can show that the entries of $\bm{W}$ can be expressed by the Cramer's rule identity in Equation~\eqref{equation:cramer-rule-general} with a numerator no larger than the denominator in magnitude, then the proof is complete. However, we notice that the matrix in the denominator of Equation~\eqref{equation:cramer-rule-general} is a square matrix. Here comes the trick.
\paragraph{Step 1: column ID for full row rank matrix.} For a start, we first consider the full row rank matrix $\bm{A}$ (which implies $R=M$, $M\leq N$, and $\bm{A}\in \mathbb{R}^{R\times N}$ such that the matrix $\bm{C}\in \mathbb{R}^{R\times R}$ is a square matrix in the column ID $\bm{A}=\bm{C}\bm{W}$ that we want). Determine the ``skeleton" index vector $J$ by \begin{equation}\label{equation:interpolative-choose-js} \boxed{ J = \mathop{\arg\max}_{J_t} \left\{\abs{\mathrm{det}(\bm{A}[:,J_t])}: \text{$J_t$ is a subset of $\{1,2,\ldots, N\}$ with size $R=M$} \right\},} \end{equation} i.e., $J$ is the index vector that is determined by maximizing the magnitude of the determinant of $\bm{A}[:,J_t]$. As we have discussed in the last section, there exists a (column) permutation matrix such that $$ \bm{A}\bm{P} = \begin{bmatrix} \bm{A}[:,J]&\bm{A}[:,I] \end{bmatrix}. $$ Since $\bm{C}=\bm{A}[:,J]$ has full column rank $R=M$, it is then nonsingular. The above equation can be rewritten as $$ \begin{aligned} \bm{A} &=\begin{bmatrix} \bm{A}[:,J]&\bm{A}[:,I] \end{bmatrix}\bm{P}^\top\\ &= \bm{A}[:,J] \bigg[ \bm{I}_R \gap \bm{A}[:,J]^{-1}\bm{A}[:,I] \bigg] \bm{P}^\top\\ &= \bm{C} \underbrace{\begin{bmatrix} \bm{I}_R & \bm{C}^{-1}\bm{A}[:,I] \end{bmatrix} \bm{P}^\top}_{\bm{W}} \end{aligned}, $$ where the matrix $\bm{W}$ is given by $ \begin{bmatrix} \bm{I}_R & \bm{C}^{-1}\bm{A}[:,I] \end{bmatrix}\bm{P}^\top = \begin{bmatrix} \bm{I}_R & \bm{E} \end{bmatrix}\bm{P}^\top $ by Equation~\eqref{equation:interpolatibve-w-ep}. To prove the claim that the magnitude of $\bm{W}$ is no larger than 1 is equivalent to proving that entries in $\bm{E}=\bm{C}^{-1}\bm{A}[:,I]\in \mathbb{R}^{R\times (N-R)}$ are no greater than 1 in absolute value. Define the index vector $[j_1,j_2,\ldots, j_N]$ as a permutation of $[1,2,\ldots, N]$ such that $$ [j_1,j_2,\ldots, j_N] = [1,2,\ldots, N] \bm{P} = [J, I].\footnote{Note here $[j_1,j_2,\ldots, j_N] $, $[1,2,\ldots, N]$, $J$, and $I$ are row vectors.} $$ Thus, it follows from $\bm{C}\bm{E}=\bm{A}[:,I]$ that $$ \begin{aligned} \underbrace{ [\bm{a}_{j_1}, \bm{a}_{j_2}, \ldots, \bm{a}_{j_R}]}_{=\bm{C}=\bm{A}[:,J]} \bm{E} &= \underbrace{[\bm{a}_{j_{R+1}}, \bm{a}_{j_{R+2}}, \ldots, \bm{a}_{j_N}]}_{=\bm{A}[:,I]:=\bm{B}}, \end{aligned} $$ where $\bm{a}_i$ is the $i$-th column of $\bm{A}$ and we let $\bm{B}=\bm{A}[:,I]$. Therefore, by Cramer's rule in Equation~\eqref{equation:cramer-rule-general}, we have \begin{equation}\label{equation:column-id-expansionmatrix} e_{kl} = \frac{\mathrm{det}\left(\bm{C}_{\bm{B}}(k,l)\right)} {\mathrm{det}\left(\bm{C}\right)}, \end{equation} where $e_{kl}$ is the entry ($k,l$) of $\bm{E}$ and $\bm{C}_{\bm{B}}(k,l)$ is the $R\times R$ matrix formed by replacing the $k$-th column of $\bm{C}$ with the $l$-th column of $\bm{B}$. 
For example, $$ \begin{aligned} e_{11} &= \frac{\mathrm{det}\left([\textcolor{blue}{\bm{a}_{j_{R+1}}}, \bm{a}_{j_2}, \ldots, \bm{a}_{j_R}]\right)} {\mathrm{det}\left([\bm{a}_{j_1}, \bm{a}_{j_2}, \ldots, \bm{a}_{j_R}]\right)}, \qquad &e_{12} &= \frac{\mathrm{det}\left([\textcolor{blue}{\bm{a}_{j_{R+2}}}, \bm{a}_{j_2},\ldots, \bm{a}_{j_R}]\right)} {\mathrm{det}\left([\bm{a}_{j_1}, \bm{a}_{j_2}, \ldots, \bm{a}_{j_R}]\right)},\\ e_{21} &= \frac{\mathrm{det}\left([\bm{a}_{j_1},\textcolor{blue}{\bm{a}_{j_{R+1}}}, \ldots, \bm{a}_{j_R}]\right)} {\mathrm{det}\left([\bm{a}_{j_1}, \bm{a}_{j_2}, \ldots, \bm{a}_{j_R}]\right)}, \qquad &e_{22} &= \frac{\mathrm{det}\left([\bm{a}_{j_1},\textcolor{blue}{\bm{a}_{j_{R+2}}}, \ldots, \bm{a}_{j_R}]\right)} {\mathrm{det}\left([\bm{a}_{j_1}, \bm{a}_{j_2}, \ldots, \bm{a}_{j_R}]\right)}. \end{aligned} $$ Since $J$ is chosen to maximize the magnitude of $\mathrm{det}(\bm{C})$ in Equation~\eqref{equation:interpolative-choose-js}, it follows that $$ \abs{e_{kl}}\leq 1, \qquad \text{for all}\gap k\in \{1,2,\ldots, R\}, \,\,\, l\in \{1,2,\ldots, N-R\}. $$ \paragraph{Step 2: apply to general matrices.} To summarize what we have proved above (and to abuse the notation slightly): for any matrix $\bm{F}\in \mathbb{R}^{R\times N}$ with \textbf{full} rank $R\leq N$, the column ID exists such that $\bm{F}=\bm{C}_0\bm{W}$, where the values in $\bm{W}$ are no greater than 1 in absolute value. Applying this finding to a general matrix $\bm{A}\in \mathbb{R}^{M\times N}$ with rank $R\leq \min\{M,N\}$, it is trivial that the matrix $\bm{A}$ admits a \textit{rank decomposition}: $$ \underset{M\times N}{\bm{A}} = \underset{M\times R}{\bm{D}}\gap \underset{R\times N}{\bm{F}}, $$ where $\bm{D}$ and $\bm{F}$ have full column rank $R$ and full row rank $R$, respectively \citep{lu2021numerical}. Consider the column ID of $\bm{F}=\bm{C}_0\bm{W}$, where $\bm{C}_0=\bm{F}[:,J]$ contains $R$ linearly independent columns of $\bm{F}$. From $\bm{A}=\bm{D}\bm{F}$, we notice that $$ \bm{A}[:,J]=\bm{D}\bm{F}[:,J], $$ i.e., the columns of $(\bm{D}\bm{F})$ indexed by $J$ can be obtained by $\bm{D}\bm{F}[:,J]$, which in turn are the columns of $\bm{A}$ indexed by $J$. This makes $$ \underbrace{\bm{A}[:,J]}_{\bm{C}}= \underbrace{\bm{D}\bm{F}[:,J]}_{\bm{D}\bm{C}_0} $$ and $$ \bm{A} = \bm{D}\bm{F} =\bm{D}\bm{C}_0\bm{W} = \underbrace{\bm{D}\bm{F}[:,J]}_{\bm{C}}\bm{W}=\bm{C}\bm{W}. $$ This completes the proof. \end{proof} The above proof reveals an intuitive way of computing the optimal column ID of a matrix $\bm{A}$, as shown in Algorithm~\ref{alg:column-id-intuitive}. Nevertheless, any algorithm that is guaranteed to find such an optimally-conditioned factorization must have combinatorial complexity \citep{martinsson2019randomized}. In the next sections, we will consider alternative ways to find a relatively well-conditioned factorization. \begin{algorithm}[h] \caption{An \textcolor{blue}{Intuitive} Method to Compute the Column ID} \label{alg:column-id-intuitive} \begin{algorithmic}[1] \Require Rank-$R$ matrix $\bm{A}$ with size $M\times N $; \State Compute the rank decomposition $\underset{M\times N}{\bm{A}} = \underset{M\times R}{\bm{D}}\gap \underset{R\times N}{\bm{F}}$ such as from a UTV decomposition \citep{lu2021numerical}; \State Compute column ID of $\bm{F}$: $\bm{F}=\bm{F}[:,J]\bm{W} = \widetilde{\bm{C}}\bm{W}$: $$ \begin{aligned} 2.1.
\,\,\,&\left\{ \begin{aligned} J &= \mathop{\arg\max}_{J} \left\{\abs{\mathrm{det}(\bm{F}[:,J])}: \text{$J$ is a subset of $\{1,2,\ldots, N\}$ with size $R$} \right\};&\\ I &= \{1,2,\ldots, N\} \backslash J;&\\ \end{aligned} \right.\\ 2.2.\,\,\,&\left\{ \begin{aligned} \widetilde{\bm{C}} &= \bm{F}[:,J]; \\ \bm{M} &= \bm{F}[:,I]; \end{aligned} \right.\\ 2.3.\,\,\, &\bm{F}\bm{P} = \bm{F}[:,(J,I)] \text{ to obtain permutation matrix $\bm{P}$};\\ 2.4.\,\,\, &e_{kl} = \frac{\mathrm{det}\left(\widetilde{\bm{C}}_{\bm{M}}(k,l)\right)} {\mathrm{det}\left(\widetilde{\bm{C}}\right)}, \qquad \text{for all}\gap k\in [1, R], l\in [1,N-R] \text{ ~(Equation~\eqref{equation:column-id-expansionmatrix})};\\ 2.5.\,\,\, &\bm{W}= [\bm{I}_R, \bm{E}]\bm{P}^\top \text{ ~(Equation~\eqref{equation:interpolatibve-w-ep})}. \end{aligned} $$ \State $\bm{C}=\bm{A}[:,J]$; \State Output the column ID $\bm{A}=\bm{C}\bm{W}$; \end{algorithmic} \end{algorithm} \begin{example}[Compute the Column ID]\label{example:column-id-a} Given a matrix $$ \bm{A}= \begin{bmatrix} 56 & 41 & 30\\ 32 & 23 & 18\\ 80 & 59 & 42 \end{bmatrix} $$ with rank 2, the trivial process for computing the column ID of $\bm{A}$ is shown as follows. We first find a rank decomposition $$ \bm{A} = \bm{D}\bm{F}= \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 2 &-1 \end{bmatrix} \begin{bmatrix} 56 & 41 & 30 \\ 32 & 23 & 18 \end{bmatrix}. $$ Since rank $R=2$, $J$ is one of $[1,2], [0,2], [0,1]$ where the absolute determinant of $\bm{F}[:,J]$ are $48, 48, 24$ respectively. We proceed by choosing $J=[0,2]$: $$ \begin{aligned} \widetilde{\bm{C}} &= \bm{F}[:,J]= \begin{bmatrix} 56 & 30 \\ 32 & 18 \end{bmatrix},\qquad \bm{M} &= \bm{F}[:,I]=\begin{bmatrix} 41 \\ 23 \end{bmatrix}. \end{aligned} $$ And $$ \bm{F}\bm{P} = \bm{F}[:(J,I)] = \bm{F}[:,(0,2,1)] \qquad\underrightarrow{ \text{leads to} }\qquad \bm{P} = \begin{bmatrix} 1 & & \\ & &1\\ & 1 & \end{bmatrix}. $$ In this example, $\bm{E}\in \mathbb{R}^{2\times 1}$: $$ \begin{aligned} e_{11} &= \mathrm{det}\left( \begin{bmatrix} 41 & 30 \\ 23 & 18 \end{bmatrix}\right)\bigg/ \mathrm{det}\left( \begin{bmatrix} 56 & 30 \\ 32 & 18 \end{bmatrix}\right)=1;\\ e_{21} &= \mathrm{det}\left( \begin{bmatrix} 56 & 41 \\ 32 & 23 \end{bmatrix}\right)\bigg/ \mathrm{det}\left( \begin{bmatrix} 56 & 30 \\ 32 & 18 \end{bmatrix}\right)=-\frac{1}{2}. \end{aligned} $$ This makes $$ \bm{E} = \begin{bmatrix} 1\\-\frac{1}{2} \end{bmatrix} \qquad\underrightarrow{ \text{leads to} }\qquad \bm{W} = [\bm{I}_2, \bm{E}]\bm{P}^\top = \begin{bmatrix} 1 & 1 & 0\\ 0 & -\frac{1}{2} & 1 \end{bmatrix}. $$ The final selected columns are $$ \bm{C} = \bm{A}[:,J] = \begin{bmatrix} 56 & 30\\ 32 & 18\\ 80 & 42 \end{bmatrix}. $$ The net result is given by $$ \bm{A}=\bm{C}\bm{W} = \begin{bmatrix} 56 & 30\\ 32 & 18\\ 80 & 42 \end{bmatrix} \begin{bmatrix} 1 & 1 & 0\\ 0 & -\frac{1}{2} & 1 \end{bmatrix}, $$ where entries of $\bm{W}$ are no greater than 1 in absolute value as we want. \hfill $\square$\par \end{example} To end up this section, we discuss the source of the non-uniqueness in the column ID. \begin{remark}[Non-uniqueness of the Column ID] In the above specific example~\ref{example:column-id-a}, we notice the determinant for $\bm{F}[:,(1,2)]$ and $\bm{F}[:,(0,2)]$ both get the maximal absolute determinant. Therefore, both of them can result in a column ID of $\bm{A}$. Whilst, we only select $J$ from $[1,2], [0,2], [0,1]$. When the $J$ is fixed from the maximal absolute determinant search, any permutation of it can also be selected, e.g., $J=[0,2]$ or $J=[2,0]$ are both good. 
The two choices on the selection of the column index search yield the non-uniqueness of the column ID. \end{remark} \section{Skeleton/CUR Decomposition} To delve deeper into the topic of ID, we first present the rigorous form of a related decomposition known as the \textit{CUR} or \textit{skeleton} decomposition. \index{Decomposition: Skeleton} \index{Decomposition: CUR} \begin{theorem}[Skeleton Decomposition]\label{theorem:skeleton-decomposition} Any rank-$R$ matrix $\bm{A} \in \mathbb{R}^{M \times N}$ can be factored as $$ \underset{M\times N}{\bm{A} }= \underset{M\times R}{\bm{C}} \gap \underset{R\times R}{\bm{U}^{-1} }\gap \underset{R\times N}{\bm{R}}, $$ where $\bm{C}$ contains some $R$ linearly independent columns of $\bm{A}$, $\bm{R}$ contains some $R$ linearly independent rows of $\bm{A}$, and $\bm{U}$ is the nonsingular submatrix on the intersection of $\bm{C}$ and $\bm{R}$. \begin{itemize} \item The storage for the decomposition is then reduced or potentially increased from $MN$ floats to $R(M+N)+R^2$ floats. \item Or further, if we only record the position of the indices, it requires $MR$, $NR$ floats for storing $\bm{C}, \bm{R}$ respectively and extra $2R$ integers to remember the position of each column of $\bm{C}$ in that of $\bm{A}$ and each row of $\bm{R}$ in that of $\bm{A}$ (i.e., construct $\bm{U}$ from $\bm{C},\bm{R}$). \end{itemize} \end{theorem} \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{imgs/skeleton.pdf} \caption{Demonstration of the skeleton decomposition of a matrix where the \textcolor{mydarkyellow}{yellow} vectors denote the linearly independent columns of $\bm{A}$, and \textcolor{mydarkgreen}{green} vectors denote the linearly independent rows of $\bm{A}$.} \label{fig:skeleton} \end{figure} The skeleton decomposition is also referred to as the \text{CUR decomposition} following from the notation in the decomposition. Compared to singular value decomposition (SVD), CUR is better in terms of reification issues since it uses the actual columns (rows) of the matrix whereas SVD uses some artificial singular vectors that may not represent physical reality \citep{mahoney2009cur}. Moreover, CUR maintains sparsity if the data is sparse. On the other hand, similar to SVD, CUR decomposition can be used as a tool for data compression, feature extraction, or data analysis in many application areas \citep{mahoney2009cur, an2012large, lee2008cur+}. The illustration of the skeleton decomposition is shown in Figure~\ref{fig:skeleton} where the \textcolor{mydarkyellow}{yellow} vectors denote the linearly independent columns of $\bm{A}$ and \textcolor{mydarkgreen}{green} vectors denote the linearly independent rows of $\bm{A}$. Specifically, given index vectors $I,J$ both with size $R$ that contain the indices of rows and columns selected from $\bm{A}$ into $\bm{R}$ and $\bm{C}$ respectively, $\bm{U}$ can be denoted as $\bm{U}=\bm{A}[I,J]$ (see Definition~\ref{definition:matlabnotation}, p.~\pageref{definition:matlabnotation}). \paragraph{Existence of the skeleton decomposition.} In linear algebra, the row rank and the column rank of a matrix are equal. In another word, we can also claim that the dimension of the column space and the dimension of the row space are equal \citep{lu2021column}. This property is essential for proving the existence of the skeleton decomposition. The proof is rather elementary. 
\begin{proof}[of Theorem~\ref{theorem:skeleton-decomposition}] The proof relies on the existence of such a nonsingular matrix $\bm{U}$, which is central to this decomposition method. \paragraph{Existence of such a nonsingular matrix $\bm{U}$.} Since the matrix $\bm{A}$ is of rank $R$, we can pick $R$ columns from $\bm{A}$ such that they are linearly independent. Suppose we put the specific $R$ linearly independent columns $\bm{a}_{i1}, \bm{a}_{i2}, \ldots, \bm{a}_{iR}$ into the columns of an $M\times R$ matrix $\bm{N}=[\bm{a}_{i1}, \bm{a}_{i2}, \ldots, \bm{a}_{iR}] \in \mathbb{R}^{M\times R}$. The dimension of the column space of $\bm{N}$ is $R$, so the dimension of the row space of $\bm{N}$ is also $R$. Again, we can pick $R$ linearly independent rows $\bm{n}_{j1}^\top,\bm{n}_{j2}^\top, \ldots, \bm{n}_{jR}^\top $ from $\bm{N}$ and put the specific $R$ rows into the rows of an $R\times R$ matrix $\bm{U} = [\bm{n}_{j1}^\top; \bm{n}_{j2}^\top; \ldots; \bm{n}_{jR}^\top]\in \mathbb{R}^{R\times R}$. The dimension of the column space of $\bm{U}$ is again $R$, which means the $R$ columns of $\bm{U}$ are linearly independent. So $\bm{U}$ is such a nonsingular matrix of size $R\times R$. \paragraph{Main proof.} As long as we find the nonsingular $R\times R$ matrix $\bm{U}$ inside $\bm{A}$, we can establish the existence of the skeleton decomposition as follows. Suppose $\bm{U}=\bm{A}[I,J]$, where $I,J$ are index vectors of size $R$. Since $\bm{U}$ is a nonsingular matrix, the columns of $\bm{U}$ are linearly independent. Thus the columns of the matrix $\bm{C}$ built upon the columns of $\bm{U}$ are also linearly independent (i.e., select the $R$ columns of $\bm{A}$ containing the entries of the matrix $\bm{U}$; here $\bm{C}$ is equal to the $\bm{N}$ we constructed above, and $\bm{C}=\bm{A}[:,J]$). As the rank of the matrix $\bm{A}$ is $R$, if we take any other column $\bm{a}_i$ of $\bm{A}$, $\bm{a}_i$ can be represented as a linear combination of the columns of $\bm{C}$, i.e., there exists a vector $\bm{x}$ (depending on $i$) such that $\bm{a}_i = \bm{C} \bm{x}$, for all $ i\in \{1, 2, \ldots, N\}$. Let the $R$ rows (entries) of $\bm{a}_i\in\mathbb{R}^M$ corresponding to the row entries of $\bm{U}$ be $\bm{r}_i \in \mathbb{R}^R$ for all $i\in \{1, 2, \ldots, N\}$ (i.e., $\bm{r}_i$ contains $R$ entries of $\bm{a}_i$). That is, select the $R$ entries of the $\bm{a}_i$'s corresponding to the entries of $\bm{U}$ as follows: $$ \bm{A} = [\bm{a}_1,\bm{a}_2, \ldots, \bm{a}_N]\in \mathbb{R}^{M\times N} \qquad \longrightarrow \qquad \bm{A}[I,:]=[\bm{r}_1, \bm{r}_2, \ldots, \bm{r}_N] \in \mathbb{R}^{R\times N}. $$ Since $\bm{a}_i = \bm{C}\bm{x}$, $\bm{U}$ is a submatrix inside $\bm{C}$, and $\bm{r}_i$ is a subvector inside $\bm{a}_i$, we have $\bm{r}_i = \bm{U} \bm{x}$, which implies that $\bm{x} = \bm{U}^{-1} \bm{r}_i$. Thus for every $i\in\{1,2,\ldots,N\}$, we have $\bm{a}_i = \bm{C} \bm{U}^{-1} \bm{r}_i$. Combining the $N$ columns of such $\bm{r}_i$ into $\bm{R}=[\bm{r}_1, \bm{r}_2, \ldots, \bm{r}_N]$, we obtain $$ \bm{A} = [\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_N] = \bm{C} \bm{U}^{-1} \bm{R}, $$ from which the result follows. In short, we first find $R$ linearly independent columns of $\bm{A}$ and collect them into $\bm{C}\in \mathbb{R}^{M\times R}$. From $\bm{C}$, we find an $R\times R$ nonsingular submatrix $\bm{U}$. The $R$ rows of $\bm{A}$ corresponding to the entries of $\bm{U}$ can then be used to reconstruct the columns of $\bm{A}$.
\end{proof} We note in the case where $\bm{A}$ is square and invertible, we have the skeleton decomposition $\bm{A}=\bm{C}\bm{U}^{-1} \bm{R}$ where $\bm{C}=\bm{R}=\bm{U}=\bm{A}$ such that the decomposition reduces to $\bm{A} = \bm{A}\bA^{-1}\bm{A}$. \section{Row ID and Two-Sided ID} We term the interpolative decomposition above as column ID. This is no coincidence since it has its siblings. \begin{theorem}[The Whole Interpolative Decomposition]\label{theorem:interpolative-decomposition-row} Any rank-$R$ matrix $\bm{A} \in \mathbb{R}^{M \times N}$ can be factored as $$ \begin{aligned} \text{Column ID: }&\gap \underset{M \times N}{\bm{A}} &=& \boxed{\underset{M\times R}{\bm{C}}} \gap \underset{R\times N}{\bm{W}} ; \\ \text{Row ID: } &\gap &=&\underset{M\times R}{\bm{Z}} \gap \boxed{\underset{R\times N}{\bm{R}}}; \\ \text{Two-Sided ID: } &\gap &=&\underset{M\times R}{\bm{Z}} \gap \boxed{\underset{R\times R}{\bm{U}}} \gap \underset{R\times N}{\bm{W}}, \\ \end{aligned} $$ where \begin{itemize} \item $\bm{C}=\bm{A}[:,J]\in \mathbb{R}^{M\times R}$ is some $R$ linearly independent columns of $\bm{A}$, $\bm{W}\in \mathbb{R}^{R\times N}$ is the matrix to reconstruct $\bm{A}$ which contains an $R\times R$ identity submatrix (under a mild column permutation): $\bm{W}[:,J]=\bm{I}_R$; \item $\bm{R}=\bm{A}[S,:]\in \mathbb{R}^{R\times N}$ is some $R$ linearly independent rows of $\bm{A}$, $\bm{Z}\in \mathbb{R}^{M\times R}$ is the matrix to reconstruct $\bm{A}$ which contains an $R\times R$ identity submatrix (under a mild row permutation): $\bm{Z}[S,:]=\bm{I}_R$; \item Entries in $\bm{W}, \bm{Z}$ have values no larger than 1 in magnitude: $\max \abs{w_{ij}}\leq 1$ and $\max \abs{z_{ij}}\leq 1$; \item $\bm{U}=\bm{A}[S,J] \in \mathbb{R}^{R\times R}$ is the nonsingular submatrix on the intersection of $\bm{C}$ and $\bm{R}$; \item \textbf{Skeleton decomposition:} the three matrices $\bm{C},\bm{R},\bm{U}$ in the $\boxed{\text{boxed}}$ texts share same notation as the skeleton decomposition (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}) where they even have same meanings such that the three matrices make the skeleton decomposition of $\bm{A}$: $\bm{A}=\bm{C}\bm{U}^{-1}\bm{R}$. \end{itemize} \end{theorem} The proof of the row ID is just similar to that of the column ID. Suppose the column ID of $\bm{A}^\top$ is given by $\bm{A}^\top=\bm{C}_0\bm{W}_0$ where $\bm{C}_0$ contains $R$ linearly independent columns of $\bm{A}^\top$ (i.e., $R$ linearly independent rows of $\bm{A}$). Let $\bm{R}=\bm{C}_0^\top, \bm{Z}=\bm{W}_0^\top$, the row ID is obtained by $\bm{A}=\bm{Z}\bm{R}$. For the two-sided ID, recall from the skeleton decomposition (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}). When $\bm{U}$ is the intersection of $\bm{C}$ and $\bm{R}$, it follows that $\bm{A}=\bm{C}\bm{U}^{-1}\bm{R}$. Thus $\bm{C}\bm{U}^{-1}=\bm{Z}$ by the row ID. And this implies $\bm{C}=\bm{Z}\bm{U}$. By column ID, it follows that $\bm{A}=\bm{C}\bm{W}=\bm{Z}\bm{U}\bm{W}$ which proves the existence of the two-sided ID. 
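To make the relations above concrete, the following NumPy sketch builds the column ID, row ID, skeleton decomposition, and two-sided ID of a small rank-$R$ matrix from fixed index sets and verifies that all four factorizations reproduce $\bm{A}$. The index sets below are chosen arbitrarily (generic random data makes them valid with probability one); in particular, this is not the determinant-maximizing selection of Equation~\eqref{equation:interpolative-choose-js}, so the magnitude bound on $\bm{W}$ and $\bm{Z}$ is not guaranteed here, and the sketch only illustrates the algebraic identities.
\begin{verbatim}
# Illustrative sketch: column ID, row ID, skeleton (CUR), and two-sided ID
# of a small rank-R matrix, with fixed index sets J (columns) and S (rows).
# NOT the determinant-maximizing construction of the main text.
import numpy as np

rng = np.random.default_rng(0)
M, N, R = 6, 5, 2
A = rng.standard_normal((M, R)) @ rng.standard_normal((R, N))  # rank-R target

J, S = [0, 2], [1, 4]               # selected column / row indices
C = A[:, J]                         # M x R basis columns
Rm = A[S, :]                        # R x N basis rows
U = A[np.ix_(S, J)]                 # R x R intersection of C and Rm

W = np.linalg.lstsq(C, A, rcond=None)[0]           # column ID: A = C W
Z = np.linalg.lstsq(Rm.T, A.T, rcond=None)[0].T    # row ID:    A = Z Rm

print(np.allclose(W[:, J], np.eye(R)))             # identity submatrix in W
print(np.allclose(A, C @ W))                       # column ID
print(np.allclose(A, Z @ Rm))                      # row ID
print(np.allclose(A, C @ np.linalg.solve(U, Rm)))  # skeleton: A = C U^{-1} Rm
print(np.allclose(A, Z @ U @ W))                   # two-sided ID
\end{verbatim}
Replacing the fixed index sets with the determinant-maximizing search of Equation~\eqref{equation:interpolative-choose-js} would additionally enforce $\max\abs{w_{ij}}\leq 1$.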
\paragraph{Data storage.} The data storage of each ID is summarized as follows: \begin{itemize} \item \textit{Column ID.} It requires $MR$ and $(N-R)R$ floats to store $\bm{C}$ and $\bm{W}$ respectively, and $R$ integers to store the indices of the selected columns in $\bm{A}$; \item \textit{Row ID.} It requires $NR$ and $(M-R)R$ floats to store $\bm{R}$ and $\bm{Z}$ respectively, and $R$ integers to store the indices of the selected rows in $\bm{A}$; \item \textit{Two-Sided ID.} It requires $(M-R)R$, $(N-R)R$, and $R^2$ floats to store $\bm{Z},\bm{W}$, and $\bm{U}$ respectively. In addition, $2R$ integers are required to store the indices of the selected rows and columns in $\bm{A}$. \end{itemize} \paragraph{Further reduction of the storage of the two-sided ID for a sparse matrix $\bm{A}$.} Suppose the column ID is $\bm{A}=\bm{C}\bm{W}$, where $\bm{C}=\bm{A}[:,J]$, and a good spanning row index set $S$ of $\bm{C}$ can be found: $$ \bm{A}[S,:] = \bm{C}[S,:]\bm{W}. $$ We observe that $\bm{C}[S,:] = \bm{A}[S,J]\in \mathbb{R}^{R\times R}$, which is nonsingular (since it has full rank $R$ in the sense of both row rank and column rank). It follows that $$ \bm{W} = (\bm{A}[S,J])^{-1} \bm{A}[S,:]. $$ Therefore, there is no need to store the matrix $\bm{W}$ explicitly. We only need to store $\bm{A}[S,:]$ and $(\bm{A}[S,J])^{-1}$. Alternatively, when we are able to compute the inverse of $\bm{A}[S,J]$ on the fly, it only requires $R$ integers to store $J$ and recover $\bm{A}[S,J]$ from $\bm{A}[S,:]$. The storage of $\bm{A}[S,:]$ is inexpensive if $\bm{A}$ is sparse. \section{Bayesian Low-Rank Interpolative Decomposition} The low-rank ID problem of the observed matrix $\bm{A}$ can be stated as $\bm{A}=\bm{C}\bm{W}+\bm{E}$, where $\bm{A}= [\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_N]\in \mathbb{R}^{M\times N}$ is approximately factorized into an $M\times K$ matrix $\bm{C}\in \mathbb{R}^{M\times K}$ containing $K$ basis columns of $\bm{A}$ and a $K\times N$ matrix $\bm{W}\in \mathbb{R}^{K\times N}$ with entries no larger than 1 in magnitude; the noise is captured by the matrix $\bm{E}\in \mathbb{R}^{M\times N}$. The value of $K$ is smaller than the rank $R$, hence the term low-rank ID. There are many methods to compute the low-rank ID of a matrix. The most popular algorithm to compute the low-rank ID approximation is the randomized ID (RID) algorithm \citep{liberty2007randomized}. At a high level, the algorithm randomly samples $S > K$ columns from $\bm{A}$, uses column-pivoted QR (CPQR) to select $K$ of those $S$ columns for the basis matrix $\bm{C}$, and then computes $\bm{W}$ via least squares \citep{lu2021numerical}. $S$ is usually set to $S = 1.2K$ to oversample the columns so as to capture a large portion of the range of $\bm{A}$. The drawback of the randomized algorithm is that the maximal magnitude of the factored component $\bm{W}$ may exceed 1. Though this is not a big issue in many applications, it can be problematic for applications with high requirements on numerical stability. \citet{advani2021efficient} even report that the randomized ID may produce entries with magnitude larger than 167, making it less numerically stable. Probabilistic models, in contrast, can easily accommodate constraints on the specific range of the factored matrix. In this light, we focus on the Bayesian ID (BID) of the underlying matrices. The Bayesian ID algorithms are proposed in \citet{lu2022bayesian, lu2022comparative} and are further adapted in a feature selection context by \citet{lu2022feature}.
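For contrast with the Bayesian treatment developed in the remainder of this section, a minimal sketch of the randomized ID procedure described above is given below. It follows the high-level recipe (sample $S>K$ candidate columns, keep $K$ of them with column-pivoted QR, fit $\bm{W}$ by least squares) but is illustrative only; it is not the exact algorithm of \citet{liberty2007randomized}, and the helper name \texttt{randomized\_id} is ours.
\begin{verbatim}
# Illustrative sketch of a randomized column ID: sample S > K candidate
# columns, keep K of them via column-pivoted QR, fit W by least squares.
import numpy as np
from scipy.linalg import qr

def randomized_id(A, K, oversample=1.2, seed=0):
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    S = min(N, int(np.ceil(oversample * K)))
    cand = rng.choice(N, size=S, replace=False)           # sampled columns
    _, _, piv = qr(A[:, cand], mode='economic', pivoting=True)
    J = cand[piv[:K]]                                     # K basis columns
    C = A[:, J]
    W = np.linalg.lstsq(C, A, rcond=None)[0]              # K x N coefficients
    return C, W, J

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))  # ~rank 8
C, W, J = randomized_id(A, K=8)
print(np.linalg.norm(A - C @ W) / np.linalg.norm(A))      # reconstruction error
print(np.abs(W).max())   # may exceed 1: the issue the Bayesian ID addresses
\end{verbatim}
The Bayesian ID models introduced next replace this hard, magnitude-unconstrained column selection with a probabilistic treatment in which the constraint on $\bm{W}$ is encoded in the prior.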
Training such models amounts to finding the best rank-$K$ approximation to the observed $M\times N$ target matrix $\bm{A}$ under the given loss function. Let $\bm{r}\in \{0,1\}^N$ be the \textit{state vector} with each element indicating the type of the corresponding column, i.e., basis column or interpolated (remaining) column: if $r_n=1$, then the $n$-th column $\bm{a}_n$ is a basis column; if $r_n=0$, then $\bm{a}_n$ is interpolated using the basis columns plus some error term. Suppose further that $J$ is the set of indices of the selected basis columns (with size $K$ now) and $I$ is the set of indices of the interpolated columns (with size $N-K$) such that $$ \begin{aligned} J\cap I = \varnothing, &\qquad J \cup I =\{1,2,\ldots, N\}; \\ J=J(\bm{r})=\{n\mid r_n=1\}_{n=1}^N, &\qquad I=I(\bm{r})=\{n\mid r_n=0\}_{n=1}^N. \end{aligned} $$ Then $\bm{C}$ can be described as $\bm{C}=\bm{A}[:,J]$, where the colon operator selects all indices. The approximation $\bm{A}\approx \bm{C}\bm{W}$ can be equivalently stated as $$ \underset{M \times N}{\bm{A}} \approx \underset{M \times K}{\bm{C}} \,\,\,\underset{K \times N}{\bm{W}} = \underset{M \times N}{\bm{X}} \,\,\, \underset{N \times N}{\bm{Y}} $$ where $\bm{X}\in \mathbb{R}^{M\times N}$ and $\bm{Y}\in \mathbb{R}^{N\times N}$ with $$ \begin{aligned} \bm{X}[:,J]&=\bm{C}\in\mathbb{R}^{M\times K}, \qquad &\bm{X}[:,I] &= \mathbf{0}\in \mathbb{R}^{M\times (N-K)}; \\ \bm{Y}[J,:] &= \bm{W}\in\mathbb{R}^{K\times N}, \qquad &\bm{Y}[I, :]&=\text{random matrix }\in\mathbb{R}^{(N-K)\times N}. \end{aligned} $$ We also notice that there exists an identity matrix $\bm{I}_K\in \mathbb{R}^{K\times K}$ in $\bm{W}$ and $\bm{Y}$: \begin{equation}\label{equation:submatrix_bid_identity} \bm{I}_K = \bm{W}[:,J] = \bm{Y}[J,J]. \end{equation} Finding the low-rank ID $\bm{A}\approx\bm{C}\bm{W}$ can then be transformed into the problem of finding the approximation $\bm{A}\approx\bm{X}\bm{Y}$, with the state vector $\bm{r}$ recovering the submatrix $\bm{C}$ (see Figure~\ref{fig:id-column}). To evaluate the approximation, the \textit{reconstruction error}, measured by the mean squared error (MSE, equivalently the Frobenius norm), is minimized: \begin{equation}\label{equation:idbid-per-example-loss} \mathop{\min}_{\bm{X},\bm{Y}} \,\, \frac{1}{MN}\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{x}_m^\top\bm{y}_n\right)^2, \end{equation} where $\bm{x}_m$ and $\bm{y}_n$ are the $m$-th \textbf{row} and $n$-th \textbf{column} of $\bm{X}$ and $\bm{Y}$, respectively. We approach the magnitude constraint on $\bm{W}$ and $\bm{Y}$ by treating the Bayesian ID model as a latent factor model; we describe a fully specified graphical model for the problem and employ Bayesian learning methods to infer the latent factors. In this sense, explicit magnitude constraints are not required on the latent factors, since this is naturally taken care of by the appropriate choice of prior distribution; here we use a general-truncated-normal (GTN) prior (Definition~\ref{definition:general_truncated_normal}, p.~\pageref{definition:general_truncated_normal}). \begin{figure*}[h] \centering \subfigtopskip=2pt \subfigbottomskip=9pt \subfigcapskip=-5pt \includegraphics[width=0.95\textwidth]{imgs/id-column_bid.pdf} \caption{Demonstration of the interpolative decomposition of a matrix, where the \textcolor{mydarkyellow}{yellow} columns denote the basis columns of matrix $\bm{A}$, white entries denote zero, \textcolor{mydarkpurple}{purple} entries denote one, and \textcolor{mydarkblue}{blue} and black entries denote elements that are not necessarily zero.
The Bayesian ID models find the approximation $\bm{A}\approx\bm{X}\bm{Y}$ and the post-processing procedure calculates the approximation $\bm{A}\approx\bm{C}\bm{W}$.} \label{fig:id-column} \end{figure*} \index{Decomposition: GBT} \index{Decomposition: GBTN} \paragraph{Bayesian GBT and GBTN models for ID.} We then introduce the Bayesian ID model called the \textit{GBT} model. To further promote flexibility and insensitivity in the hyperparameter choices, we also introduce the hierarchical model known as the \textit{GBTN} algorithm, which has simple conditional density forms with little extra computation. We further provide an example to show that the method can be successfully applied to the large, sparse, and very imbalanced Movie-User data set, containing 100,000 user/movie ratings. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=6pt \subfigcapskip=-2pt \subfigure[GBT.]{\includegraphics[width=0.4\textwidth]{imgs/bmf_bid_GBT.pdf} \label{fig:bmf_bid_GBT}} \subfigure[GBTN.]{\includegraphics[width=0.4\textwidth]{imgs/bmf_bid_GBTN.pdf} \label{fig:bmf_bid_GBTN}} \caption{Graphical representation of GBT and GBTN models. Orange circles represent observed and latent variables, green circles denote prior variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or", and the comma ``," in the variable represents ``and". Parameters $a$ and $b$ are fixed with $a=-1$ and $b=1$ in our case; while a weaker construction can set them to $a=-2,b=2$.} \label{fig:bmf_bids} \end{figure} \paragraph{Likelihood.} We view the data $\bm{A}$ as being produced according to the probabilistic generative process shown in Figure~\ref{fig:bmf_bids}. The observed $(m,n)$-th entry $a_{mn}$ of matrix $\bm{A}$ is modeled using a Gaussian likelihood function with variance $\sigma^2$ and mean given by the latent decomposition $\bm{x}_m^\top\bm{y}_n$ (Equation~\eqref{equation:idbid-per-example-loss}), \begin{equation}\label{equation:gbt_data_entry_likelihood} \begin{aligned} p(a_{mn} \mid \bm{x}_m^\top\bm{y}_n, \sigma^2) &= \mathcal{N}(a_{mn}\mid \bm{x}_m^\top\bm{y}_n, \sigma^2);\\ p(\bm{A}\mid \boldsymbol\theta) = \prod_{m,n=1}^{M,N}\mathcal{N} \left(a_{mn}\mid (\bm{X}\bm{Y})_{mn}, \sigma^2 \right) & = \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid (\bm{X}\bm{Y})_{mn}, \tau^{-1} \right), \end{aligned} \end{equation} where $\boldsymbol\theta=\{\bm{X},\bm{Y},\sigma^2\}$ denotes all parameters in the model, $\mathcal{N}(\cdot\mid \cdot)$ is a Gaussian distribution, $\sigma^2$ is the variance, and $\tau^{-1}=\sigma^2$ is the precision. \paragraph{Prior.} We choose a conjugate prior over the data variance, an inverse-Gamma distribution with shape $\alpha_\sigma$ and scale $\beta_\sigma$, \begin{equation}\label{equation:prior_gbt_gamma_on_variance} p(\sigma^2 \mid \alpha_\sigma, \beta_\sigma) = \mathrm{IG}(\sigma^2 \mid \alpha_\sigma, \beta_\sigma). \end{equation} While it can also be equivalently given a conjugate Gamma prior $\mathrm{Ga}(\tau \mid \alpha_\tau, \beta_\tau)$ over the precision $\tau=\frac{1}{\sigma^2}$ and we shall not repeat the details (see Equation~\eqref{equation:ggg_gamma_prior}, p.~\pageref{equation:ggg_gamma_prior} in GGG model). We treat the latent variables $y_{kl}$'s as random variables (with $k,l\in \{1,2,\ldots,N\}$, see Figure~\ref{fig:bmf_bids}). And we need prior densities over these latent variables to express beliefs for their values, e.g., constraint with magnitudes smaller than 1 in this context. 
Here we further assume that the latent variables $y_{kl}$'s are independently drawn from a general-truncated-normal (GTN) prior (Definition~\ref{definition:general_truncated_normal}, p.~\pageref{definition:general_truncated_normal}), \begin{equation}\label{equation:rn_prior_bidd} \begin{aligned} p(y_{kl} \mid \cdot ) &= \mathcal{GTN}(y_{kl} \mid \mu_{kl}, (\tau_{kl})^{-1}, a=-1, b=1)\\ &= \frac{\sqrt{\frac{\tau_{kl}}{2\pi}} \exp \{-\frac{\tau_{kl}}{2}(y_{kl}-\mu_{kl})^2 \} } {\Phi\left((b-\mu_{kl})\cdot \sqrt{\tau_{kl}}\right)-\Phi\left((a-\mu_{kl})\cdot \sqrt{\tau_{kl}}\right)} \cdot \mathds{1}(a\leq y_{kl}\leq b), \end{aligned} \end{equation} where $\mathds{1}(a\leq x\leq b)$ is a step function that has a value of 1 when $a\leq x\leq b$, and 0 when $x<a$ or $x>b$. This prior enforces the constraint that no entry of $\bm{Y}$ has an absolute value greater than 1, and it is conjugate to the Gaussian likelihood (Equation~\eqref{equation:conjugate_truncated_constrained_mean}, p.~\pageref{equation:conjugate_truncated_constrained_mean}). The posterior density is also a GTN distribution. In a weaker construction of the interpolative decomposition, the magnitude constraint can be relaxed to 2; the GTN prior is flexible in that its parameters can accommodate this change by setting $a=-2, b=2$ accordingly. \paragraph{Hierarchical prior.} To further favor flexibility, we choose a convenient joint hyperprior density over the parameters $\{\mu_{kl}, \tau_{kl}\}$ of the GTN prior in Equation~\eqref{equation:rn_prior_bidd}, namely, the GTN-scaled-normal-Gamma (GTNSNG) prior, \begin{equation} \begin{aligned} &\gap p(\mu_{kl}, \tau_{kl} \mid \cdot) =\mathcal{GTNSNG}(\mu_{kl}, \tau_{kl}\mid \mu_\mu, \frac{1}{\tau_\mu},\alpha_t, \beta_t)\\ &= \left\{\Phi((b-\mu_\mu)\cdot \sqrt{\tau_\mu})-\Phi((a-\mu_\mu)\cdot \sqrt{\tau_\mu})\right\} \cdot \mathcal{N}(\mu_{kl}\mid \mu_\mu, (\tau_\mu)^{-1}) \cdot \mathrm{Ga}(\tau_{kl} \mid \alpha_t, \beta_t). \end{aligned} \end{equation} See Figure~\ref{fig:bmf_bid_GBTN}. This prior decouples the parameters $\mu_{kl}$ and $\tau_{kl}$, and their posterior conditional densities are normal and Gamma, respectively, owing to this convenient form. \paragraph{Terminology.} Following again the Bayesian matrix factorization terminology in Section~\ref{section:bmf_real_intro} (p.~\pageref{section:bmf_real_intro}), the Bayesian ID models are referred to as the \textit{GBT} and \textit{GBTN} models, where the letter ``$B$" intrinsically stands for the Beta-Bernoulli density. \section{Gibbs Sampler}\label{section:gbt_gbtn_derivation} In this section, we provide the derivation of the Gibbs sampler for the discussed Bayesian ID models. \paragraph{Update of latent variables.} The conditional posterior density of $y_{kl}$ is a GTN distribution.
Denote all elements of $\bm{Y}$ except $y_{kl}$ as $\bm{Y}_{-kl}$, and following the graphical representation of the GBT (or the GBTN) model shown in Figure~\ref{fig:bmf_bids}, the conditional posterior density of $y_{kl}$ can be obtained by \begin{equation}\label{equation:posterior_gbt_ykl} \begin{aligned} &\gap p(y_{kl} \mid \bm{A}, \bm{X}, \bm{Y}_{-kl}, \mu_{kl}, \tau_{kl}, \sigma^2) \propto p(\bm{A}\mid \bm{X},\bm{Y}, \sigma^2) \cdot p(y_{kl}\mid \mu_{kl}, \tau_{kl} )\\ &=\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}\mid \bm{x}_i^\top\bm{y}_j, \sigma^2 \right)\times \mathcal{GTN}(y_{kl} \mid \mu_{kl}, (\tau_{kl})^{-1},a=-1,b=1) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N} (a_{ij} - \bm{x}_i^\top \bm{y}_j)^2 \right\} \exp \left\{-\frac{\tau_{kl}}{2}(y_{kl}-\mu_{kl})^2 \right\} u(y_{kl} \mid a,b)\\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i}^{M} (a_{il} - \bm{x}_i^\top \bm{y}_l)^2 \right\} \exp \left\{-\frac{\tau_{kl}}{2}(y_{kl}-\mu_{kl})^2 \right\} u(y_{kl} \mid a,b)\\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i}^{M} \bigg( x_{ik} ^2y_{kl }^2 + 2x_{ik} y_{kl } \big(\sum_{j\neq k}^{N}x_{ij} y_{jl}-a_{il}\big)\bigg) \right\} \exp \{-\frac{\tau_{kl}}{2}(y_{kl}-\mu_{kl})^2 \} u(y_{kl} \mid a,b)\\ &\propto \exp\left\{ -y_{kl }^2 \underbrace{\left(\frac{\sum_{i}^{M} x_{ik} ^2}{2\sigma^2}+\textcolor{black}{\frac{\tau_{kl}}{2}} \right)}_ {\textcolor{blue}{\frac{1}{2} \widetilde{\tau}}} +y_{kl } \underbrace{\bigg(\frac{1}{\sigma^2} \sum_{i}^{M} x_{ik} \big(a_{il}-\sum_{j\neq k}^{N}x_{ij} y_{jl}\big) +\textcolor{black}{\tau_{kl}\mu_{kl}} \bigg)}_{\textcolor{blue}{\widetilde{\tau} \cdot \widetilde{\mu}}} \right\} u(y_{kl} \mid a,b)\\ &\propto \mathcal{N}(y_{kl}\mid \widetilde{\mu},( \widetilde{\tau})^{-1})u(y_{kl} \mid a,b) \propto \mathcal{GTN}(y_{kl}\mid \widetilde{\mu},( \widetilde{\tau})^{-1}, a=-1,b=1), \end{aligned} \end{equation} where again, for simplicity, we assume the \textbf{rows} of $\bm{X}$ are denoted by $\bm{x}_i$'s and \textbf{columns} of $\bm{Y}$ are denoted by $\bm{y}_j$'s, $\widetilde{\tau} =\frac{\sum_{i}^{M} x_{ik} ^2}{\sigma^2} +\tau_{kl}$ is the posterior ``parent" precision of the GTN density, and the posterior ``parent" mean of the GTN density is $$ \widetilde{\mu} = \bigg(\frac{1}{\sigma^2} \sum_{i}^{M} x_{ik} \big(a_{il}-\sum_{j\neq k}^{N}x_{ij} y_{jl}\big) +\textcolor{black}{\tau_{kl}\mu_{kl}} \bigg) \bigg/ \widetilde{\tau}. $$ \paragraph{Update of variance parameter.} The conditional density of $\sigma^2$ is an inverse-Gamma distribution by conjugacy, \begin{equation}\label{equation:posterior_gnt_sigma2} \begin{aligned} &\gap p(\sigma^2 \mid \bm{X}, \bm{Y}, \bm{A}) = \mathrm{IG}(\sigma^2 \mid \widetilde{\alpha_\sigma}, \widetilde{\beta_\sigma}), \end{aligned} \end{equation} where $\widetilde{\alpha_\sigma} = \frac{MN}{2}+\alpha_\sigma$, $\widetilde{\beta_\sigma}=\frac{1}{2} \sum_{i,j=1}^{M,N}(a_{ij}-\bm{x}_i^\top\bm{y}_j)^2+\beta_\sigma$. \paragraph{Update of state vector for GBT and GBTN without ARD.} Suppose further that $\bm{r}\in\{0,1\}^N$ is the state vector with each element indicating the type of the corresponding column. If $r_n=1$, then $\bm{a}_n$ is a basis column; otherwise, $\bm{a}_n$ is interpolated using the basis columns plus some error term. Given the state vector $\bm{r}=[r_1,r_2, \ldots, r_N]^\top\in \mathbb{R}^N$, the relation between $\bm{r}$ and the index sets $J$ is simple; $J = J(\bm{r}) = \{n\mid r_n = 1\}_{n=1}^N$ and $I = I(\bm{r}) = \{n\mid r_n = 0\}_{n=1}^N$. 
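Before turning to how $\bm{r}$ itself is resampled, the two updates derived above are easy to implement. The following is a small illustrative sketch (assuming NumPy; the function names are ours and not part of any package) of constructing $\bm{X}$ from a given state vector and drawing $\sigma^2$ from its inverse-Gamma conditional.
\begin{verbatim}
import numpy as np

def build_X(A, r):
    """X[:, J] = A[:, J] for the basis columns (r_n = 1); X[:, I] = 0 otherwise."""
    X = np.zeros_like(A, dtype=float)
    J = np.flatnonzero(r)            # J(r) = {n : r_n = 1}
    X[:, J] = A[:, J]
    return X

def sample_sigma2(A, X, Y, alpha_sigma=0.1, beta_sigma=1.0, rng=None):
    """Draw sigma^2 from IG(alpha_tilde, beta_tilde) as derived above."""
    rng = np.random.default_rng(rng)
    M, N = A.shape
    alpha_post = M * N / 2.0 + alpha_sigma
    beta_post = 0.5 * np.sum((A - X @ Y) ** 2) + beta_sigma
    # If G ~ Gamma(alpha, scale=1), then beta/G ~ InverseGamma(alpha, beta).
    return beta_post / rng.gamma(alpha_post)
\end{verbatim}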
A new value of state vector $\bm{r}$ is to select one index $j$ from index set $J$ and another index $i$ from index set $I$ (we note that $r_j=1$ and $r_i=0$ for the old state vector $\bm{r}$) such that \begin{equation}\label{equation:postrerior_gbt_rvector} \begin{aligned} j&\in J, \gap i\in I;\\ o_j &= \frac{p(r_j=0, r_i=1\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-ji})} {p(r_j=1, r_i=0\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-ji})}\\ &= \frac{p(r_j=0, r_i=1)}{p(r_j=1, r_i=0)}\times \frac{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-ji}, r_j=0, r_i=1)}{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-ji}, r_j=1, r_i=0)}, \end{aligned} \end{equation} where $\bm{r}_{-ji}$ denotes all elements of $\bm{r}$ except $j$-th and $i$-th entries. Under a weak construction, we can set $p(r_j=0, r_i=1)=p(r_j=1, r_i=0)$. Then the full conditional probability of $p(r_j=0, r_i=1\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-ji})$ can be calculated by \begin{equation}\label{equation:postrerior_gbt_rvec_withoutard} p(r_j=0, r_i=1\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-ji}) = \frac{o_j}{1+o_j}. \end{equation} \paragraph{Extra update for GBTN model.} Following the conceptual representation of the GBTN model in Figure~\ref{fig:bmf_bids}, the conditional density of $\mu_{kl}$ can be obtained by \begin{equation}\label{equation:posterior_gbt_mukl} \begin{aligned} &\gap p(\mu_{kl} \mid \tau_{kl}, \mu_\mu, \tau_\mu, \alpha_t, \beta_t, y_{kl})\\ &\propto \mathcal{GTN}(y_{kl} \mid \mu_{kl}, (\tau_{kl})^{-1}, a=-1, b=1) \cdot \mathcal{GTNSNG}(\mu_{kl}, \tau_{kl}\mid \mu_\mu, (\tau_\mu)^{-1},\alpha_t, \beta_t)\\ &\propto\mathcal{GTN}(y_{kl} \mid \mu_{kl}, (\tau_{kl})^{-1}, a=-1, b=1) \cdot \big\{\Phi((b-\mu_\mu)\cdot \sqrt{\tau_\mu})-\Phi((a-\mu_\mu)\cdot \sqrt{\tau_\mu})\big\} \\ &\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap \cdot{\mathcal{N}(\mu_{kl}\mid \mu_\mu, (\tau_\mu)^{-1})} \cdot \cancel{\mathrm{Ga}(\tau_{kl} \mid \alpha_t, \beta_t)}\\ &\propto \sqrt{\tau_{kl}}\cdot \exp\left\{ -\frac{\tau_{kl}}{2} (y_{kl}-\mu_{kl})^2\right\} \cdot \exp\left\{ -\frac{\tau_\mu}{2}(\mu_\mu - \mu_{kl})^2 \right\}\\ &\propto \exp\left\{ - \mu_{kl}^2 \underbrace{\frac{\tau_{kl}+\tau_\mu}{2}}_{\textcolor{blue}{\frac{1}{2}\widetilde{t}}} + \mu_{kl} \underbrace{(\tau_{kl}y_{kl}+\tau_\mu\mu_\mu)}_{\textcolor{blue}{\widetilde{m}\cdot \widetilde{t}}} \right\}\propto \mathcal{N}(\mu_{kl}\mid \widetilde{m},(\,\widetilde{t}\,)^{-1}), \end{aligned} \end{equation} where $\widetilde{t} = \tau_{kl}+\tau_\mu$, and $\widetilde{m} =(\tau_{kl}y_{kl}+\tau_\mu\mu_\mu)/\widetilde{t}$ are the posterior precision and mean of the normal density. 
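A corresponding sketch of the latent-variable draws (again assuming NumPy and SciPy, with illustrative function names) is given below: $y_{kl}$ is drawn from its GTN conditional via a reparameterized truncated normal, and, for the GBTN model, $\mu_{kl}$ is drawn from its normal conditional.
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def sample_gtn(mu, tau, a=-1.0, b=1.0, rng=None):
    """Draw from GTN(mu, tau^{-1}, a, b): a normal with mean mu and precision
    tau truncated to [a, b] (scipy expects standardized bounds)."""
    sd = 1.0 / np.sqrt(tau)
    return truncnorm.rvs((a - mu) / sd, (b - mu) / sd, loc=mu, scale=sd,
                         random_state=rng)

def sample_y_kl(A, X, Y, k, l, mu_kl, tau_kl, sigma2, rng=None):
    """One Gibbs draw of y_{kl} from its GTN conditional derived above."""
    xk = X[:, k]                                      # x_{ik} over i = 1..M
    resid = A[:, l] - X @ Y[:, l] + xk * Y[k, l]      # a_{il} - sum_{j!=k} x_{ij} y_{jl}
    tau_tilde = xk @ xk / sigma2 + tau_kl             # posterior "parent" precision
    mu_tilde = (xk @ resid / sigma2 + tau_kl * mu_kl) / tau_tilde
    return sample_gtn(mu_tilde, tau_tilde, rng=rng)

def sample_mu_kl(y_kl, tau_kl, mu_mu, tau_mu, rng=None):
    """(GBTN only) One Gibbs draw of mu_{kl} from its normal conditional."""
    rng = np.random.default_rng(rng)
    t_tilde = tau_kl + tau_mu
    m_tilde = (tau_kl * y_kl + tau_mu * mu_mu) / t_tilde
    return rng.normal(m_tilde, 1.0 / np.sqrt(t_tilde))
\end{verbatim}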
Similarly, the conditional density of $\tau_{kl}$ is, \begin{equation}\label{equation:posterior_gbt_taukl} \begin{aligned} &\gap p(\tau_{kl} \mid \mu_{kl}, \mu_\mu, \tau_\mu, \alpha_t, \beta_t, y_{kl})\\ &\propto \mathcal{GTN}(y_{kl} \mid \mu_{kl}, (\tau_{kl})^{-1}, a=-1, b=1) \cdot \mathcal{GTNSNG}(\mu_{kl}, \tau_{kl}\mid \mu_\mu, (\tau_\mu)^{-1},\alpha_t, \beta_t)\\ &\propto\mathcal{GTN}(y_{kl} \mid \mu_{kl}, (\tau_{kl})^{-1}, a=-1, b=1) \cdot \big\{\Phi((b-\mu_\mu)\cdot \sqrt{\tau_\mu})-\Phi((a-\mu_\mu)\cdot \sqrt{\tau_\mu})\big\}\\ &\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap\gap \cdot \cancel{\mathcal{N}(\mu_{kl}\mid \mu_\mu, (\tau_\mu)^{-1})} \cdot {\mathrm{Ga}(\tau_{kl} \mid \alpha_t, \beta_t)}\\ &\propto \exp\left\{ -\tau_{kl} \frac{(y_{kl}- \mu_{kl})^2}{2} \right\} \tau_{kl}^{1/2} \tau_{kl}^{\alpha_t-1} \exp\left\{ -\beta_t \tau_{kl} \right\}\\ &\propto \exp\left\{ -\tau_{kl}\left[ \beta_t + \frac{(y_{kl}- \mu_{kl})^2}{2} \right] \right\} \cdot \tau_{kl}^{(\alpha_t+1/2)-1} \propto \mathrm{Ga}(\tau_{kl} \mid \widetilde{a}, \widetilde{b}), \end{aligned} \end{equation} where $\widetilde{a} = \alpha_t+1/2$ and $\widetilde{b}=\beta_t + \frac{(y_{kl}- \mu_{kl})^2}{2}$ are the posterior parameters of the Gamma density. The full procedure is then formulated in Algorithm~\ref{alg:gbtn_gibbs_sampler} for GBT and GBTN models. \begin{algorithm}[htb] \caption{Gibbs sampler for GBT and GBTN ID models. The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative priors are $a=-1, b=1,\alpha_\sigma=0.1, \beta_\sigma=1$, ($\{\mu_{kl}\}=0, \{\tau_{kl}\}=1$) for GBT, ($\mu_\mu =0$, $\tau_\mu=0.1, \alpha_t=\beta_t=1$) for GBTN.} \label{alg:gbtn_gibbs_sampler} \begin{algorithmic}[1] \For{$t=1$ to $T$}\Comment{$T$ iterations} \State \algoalign{Sample state vector $\bm{r}$ from Equation~\eqref{equation:postrerior_gbt_rvec_withoutard};} \State \algoalign{Update matrix $\bm{X}$ by $\bm{A}[:,J]$ where index vector $J$ is the index of $\bm{r}$ with value 1 and set $\bm{X}[:,I]=\mathbf{0}$ where index vector $I$ is the index of $\bm{r}$ with value 0;} \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{X},\bm{Y}, \bm{A})$ in Equation~\eqref{equation:posterior_gnt_sigma2}; \For{$k=1$ to $N$} \For{$l=1$ to $N$} \State Sample $y_{kl}$ from Equation~\eqref{equation:posterior_gbt_ykl}; \State (GBTN only) Sample $\mu_{kl}$ from Equation~\eqref{equation:posterior_gbt_mukl}; \State (GBTN only) Sample $\tau_{kl}$ from Equation~\eqref{equation:posterior_gbt_taukl}; \EndFor \EndFor \State Report loss in Equation~\eqref{equation:idbid-per-example-loss}, stop if it converges. \EndFor \State Report mean loss in Equation~\eqref{equation:idbid-per-example-loss} after burn-in iterations. \end{algorithmic} \end{algorithm} \section{Aggressive Update} In Algorithm~\ref{alg:gbtn_gibbs_sampler}, we notice that we set $\bm{X}[:,I]=\mathbf{0}$ when the new state vector $\bm{r}$ is sampled. However, in the next iteration step, the index set $I$ may be updated in which case one entry $i$ of $I$ may be altered to have a value of 1: $$ r_i=0 \rightarrow r_i=1. $$ This may cause problems in the update of $y_{kl}$ in Equation~\eqref{equation:posterior_gbt_ykl}, a zero $i$-th column in $\bm{X}$ cannot update $\bm{Y}$ accordingly. One solution is to record a proposal state vector $\bm{r}_2$ and a proposal factor matrix $\bm{X}_2$ from the $\bm{r}_2$ vector. 
If the update in the next iteration selects the old state vector $\bm{r}$, the factor matrix $\bm{X}$ is adopted to finish the updates; if the algorithm instead chooses the proposal state vector $\bm{r}_2$, the proposal factor matrix $\bm{X}_2$ is applied to do the updates. We call this procedure the \textit{aggressive} update. The \textit{aggressive} sampler for the GBT model is formulated in Algorithm~\ref{alg:gbtn_gibbs_sampler_aggressive}. For the sake of simplicity, we do not include the sampler for the GBTN model, since it can be derived in a similar way. \begin{algorithm}[h] \caption{\textit{Aggressive} Gibbs sampler for GBT ID model. The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative priors are $a=-1, b=1,\alpha_\sigma=0.1, \beta_\sigma=1$, ($\{\mu_{kl}\}=0, \{\tau_{kl}\}=1$) for GBT. } \label{alg:gbtn_gibbs_sampler_aggressive} \begin{algorithmic}[1] \For{$t=1$ to $T$}\Comment{$T$ iterations} \State \algoalign{Sample state vector $\bm{r}$ from $\{\bm{r}_1, \bm{r}_2\}$ by Equation~\eqref{equation:postrerior_gbt_rvec_withoutard};} \State Decide $\bm{Y}$: $\bm{Y}=\bm{Y}_1$ if $\bm{r}$ is $\bm{r}_1$; $\bm{Y}=\bm{Y}_2$ if $\bm{r}$ is $\bm{r}_2$; \State Update state vector $\bm{r}_1=\bm{r}$; \State Sample proposal state vector $\bm{r}_2$ based on $\bm{r}$; \State Update matrix $\bm{X}$ by $\bm{r}=\bm{r}_1$; \State Update proposal $\bm{X}_2$ by $\bm{r}_2$; \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{X},\bm{Y}, \bm{A})$ in Equation~\eqref{equation:posterior_gnt_sigma2}; \State Sample $\bm{Y}_1 = \{y_{kl}\}$ using $\bm{X}$; \State Sample $\bm{Y}_2 = \{y_{kl}\}$ using $\bm{X}_2$; \State Report loss in Equation~\eqref{equation:idbid-per-example-loss}, stop if it converges. \EndFor \State Report mean loss in Equation~\eqref{equation:idbid-per-example-loss} after burn-in iterations. \end{algorithmic} \end{algorithm} \section{Post-Processing} The Gibbs sampling algorithm finds the approximation $\bm{A}\approx \bm{X}\bm{Y}$, where $\bm{X}\in \mathbb{R}^{M\times N}$ and $\bm{Y}\in \mathbb{R}^{N\times N}$. As stated above, the redundant columns in $\bm{X}$ and redundant rows in $\bm{Y}$ can be removed by the index vector $J$: $$ \begin{aligned} \bm{C}&=\bm{X}[:,J]=\bm{A}[:,J],\\ \bm{W}&= \bm{Y}[J,:]. \end{aligned} $$ Since the submatrix $\bm{Y}[J,J]=\bm{W}[:,J]$ (Equation~\eqref{equation:submatrix_bid_identity}) obtained from the Gibbs sampling procedure is not enforced to be an identity matrix (as required by the interpolative decomposition), we need to set it to the identity matrix manually. This further reduces the reconstruction error. The post-processing procedure is shown in Figure~\ref{fig:id-column}. \index{Decomposition: GBT with ARD} \section{Bayesian ID with Automatic Relevance Determination} We further extend the Bayesian models with automatic relevance determination (ARD) to eliminate the need for model selection. Consider the state vector $\bm{r}=[r_1,r_2, \ldots, r_N]^\top\in \{0,1\}^N$ with index sets $J = J(\bm{r}) = \{n\mid r_n = 1\}_{n=1}^N$ and $I = I(\bm{r}) = \{n\mid r_n = 0\}_{n=1}^N$.
A new value of the state vector $\bm{r}$ is obtained by selecting one index $j$ from either the index set $J$ or the index set $I$ such that \begin{equation}\label{equation:postrerior_gbt_rvector_ard} \begin{aligned} j&\in J\cup I;\\ o_j = \frac{p(r_j=0\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-j})} {p(r_j=1\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-j})} &= \frac{p(r_j=0)}{p(r_j=1)} \times \frac{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-j}, r_j=0)}{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-j}, r_j=1)}, \end{aligned} \end{equation} where $\bm{r}_{-j}$ denotes all elements of $\bm{r}$ except the $j$-th element. Comparing Equation~\eqref{equation:postrerior_gbt_rvector_ard} with Equation~\eqref{equation:postrerior_gbt_rvector}, we find that in the former the number of selected columns is no longer fixed. Therefore, we let the inference procedure decide the number of columns in the basis matrix $\bm{C}$ of the interpolative decomposition. Again, we can set $p(r_j=0)=p(r_j=1)=0.5$. Then the full conditional probability $p(r_j=0\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-j})$ can be calculated by \begin{equation}\label{equation:postrerior_gbt_rvector222_ard} p(r_j=0\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-j}) = \frac{o_j}{1+o_j}. \end{equation} The full algorithm for GBT and GBTN with ARD is then described in Algorithm~\ref{alg:gbtn_gibbs_sampler_withard}; the difference is that we need to iterate over all elements of the state vector rather than just one or two of them. We are aware that many elements of the state vector can flip their values, making the update of the matrix $\bm{Y}$ unstable. Therefore, we also define a number of \textit{critical steps} $\nu$: after sampling the whole state vector $\bm{r}$, we update the matrix $\bm{Y}$ and its related parameters several times (here, we repeat $\nu$ times); the difference is highlighted in blue in Algorithm~\ref{alg:gbtn_gibbs_sampler_withard}. \begin{algorithm}[ht] \caption{Gibbs sampler for GBT and GBTN ID with \textit{ARD} models. The procedure presented here may not be efficient but is explanatory; a more efficient one can be implemented in a vectorized manner. By default, weak priors are $a=-1, b=1,\alpha_\sigma=0.1, \beta_\sigma=1$, ($\{\mu_{kl}\}=0, \{\tau_{kl}\}=1$) for GBT, ($\mu_\mu =0$, $\tau_\mu=0.1, \alpha_t=\beta_t=1$) for GBTN. \textcolor{blue}{Number of critical steps: $\nu$}.} \label{alg:gbtn_gibbs_sampler_withard} \begin{algorithmic}[1] \For{$t=1$ to $T$} \Comment{$T$ iterations} \For{\textcolor{blue}{$j=1$ to $N$}} \State \textcolor{blue}{Sample state vector element $r_j$ from Equation~\eqref{equation:postrerior_gbt_rvector222_ard}}; \EndFor \State \algoalign{Update matrix $\bm{X}$ by $\bm{A}[:,J]$ where index vector $J$ is the index of $\bm{r}$ with value 1 and set $\bm{X}[:,I]=\mathbf{0}$ where index vector $I$ is the index of $\bm{r}$ with value 0;} \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{X},\bm{Y}, \bm{A})$ in Equation~\eqref{equation:posterior_gnt_sigma2}; \For{\textcolor{blue}{$n=1$ to $\nu$}} \For{$k=1$ to $N$} \For{$l=1$ to $N$} \State Sample $y_{kl}$ from Equation~\eqref{equation:posterior_gbt_ykl}; \State (GBTN only) Sample $\mu_{kl}$ from Equation~\eqref{equation:posterior_gbt_mukl}; \State (GBTN only) Sample $\tau_{kl}$ from Equation~\eqref{equation:posterior_gbt_taukl}; \EndFor \EndFor \EndFor \State Output loss in Equation~\eqref{equation:idbid-per-example-loss}, stop if it converges.
\EndFor \State Output averaged loss in Equation~\eqref{equation:idbid-per-example-loss} for evaluation after burn-in iterations. \end{algorithmic} \end{algorithm} \begin{table}[h] \centering \setlength{\tabcolsep}{4.4pt} \begin{tabular}{lllll} \hline Data set & Num. Rows & Num. Columns & Fraction observed & Matrix rank\\ \hline CCLE $EC50$ & 502 & 48 & 0.632 & 24\\ CCLE $IC50$ & 504 & 48 & 0.965 & 24\\ Gene Body Methylation & 160 & 254 & 1.000 & 160\\ Promoter Methylation & 160 & 254 & 1.000 & 160\\ \hline \end{tabular} \caption{Overview of the CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets, giving the number of rows, columns, the fraction of entries that are observed, and the matrix rank.} \label{table:datadescription_ard} \end{table} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure{\includegraphics[width=0.231\textwidth]{imgs/plot_ccle_ec.pdf} \label{fig:plot_ccle_ec}} \subfigure{\includegraphics[width=0.231\textwidth]{imgs/plot_ccle_ic.pdf} \label{fig:plot_ccle_ic}} \subfigure{\includegraphics[width=0.231\textwidth]{imgs/plot_methylation_pm.pdf} \label{fig:plot_ctrp}} \subfigure{\includegraphics[width=0.231\textwidth]{imgs/plot_methylation_gm.pdf} \label{fig:plot_movielens100}} \caption{Data distribution of CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets.} \label{fig:datasets_bids_ard} \end{figure} \section{Examples for Bayesian ID}\label{section:bid_ard_experiments} To evaluate the strategy and demonstrate the main advantages of the Bayesian ID method, we conduct experiments on several analysis tasks and data sets, including the Cancer Cell Line Encyclopedia (CCLE $EC50$ and CCLE $IC50$ data sets \citep{barretina2012cancer}), cancer driver genes (Gene Body Methylation \citep{koboldt2012comprehensive}), and the promoter region (Promoter Methylation \citep{koboldt2012comprehensive}) from bioinformatics. Following \citet{brouwer2017prior}, we preprocess these data sets by capping high values to 100 and undoing the natural log transform for the former three data sets. Then we standardize all data sets to have zero mean and unit variance, and fill missing entries with 0. Finally, we copy every column twice (for the CCLE $EC50$ and CCLE $IC50$ data sets) in order to increase redundancy in the matrix; for the latter two (Gene Body Methylation and Promoter Methylation data sets), the number of columns is already larger than the matrix rank, so we do not add any redundancy (a rough code sketch of these preprocessing steps is given below). A summary of the four data sets is reported in Table~\ref{table:datadescription_ard} and their distributions are presented in Figure~\ref{fig:datasets_bids_ard}. In all scenarios, the same parameter initialization is adopted when conducting different tasks. Experimental evidence reveals that the post-processing procedure can increase performance to a minor extent, and that the outcomes of the GBT and GBTN models are relatively similar \citep{lu2022bayesian}. For clarity, we only provide the findings of the GBT model after post-processing. We compare the results of the ARD versions of GBT and GBTN with the vanilla GBT and GBTN algorithms. In a wide range of experiments across various data sets, the GBT and GBTN models with ARD reduce the reconstruction error and lead to performance that is as good as or better than that of the vanilla GBT and GBTN methods in low-rank ID approximation.
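The preprocessing described at the beginning of this section might be sketched as follows (a rough illustration assuming NumPy; the exact order of capping and undoing the log transform, and whether the standardization is per matrix or per column, are our assumptions rather than prescriptions from the original references).
\begin{verbatim}
import numpy as np

def preprocess(A, cap=100.0, undo_log=True, duplicate_columns=False):
    """Cap high values, undo the natural-log transform, standardize to zero
    mean and unit variance, fill missing entries with 0, and optionally copy
    the columns to increase redundancy (illustrative sketch only)."""
    A = np.asarray(A, dtype=float)
    if undo_log:
        A = np.exp(A)                          # undo the natural log transform
    A = np.minimum(A, cap)                     # cap high values to `cap`
    A = (A - np.nanmean(A)) / np.nanstd(A)     # standardize
    A = np.nan_to_num(A, nan=0.0)              # fill missing entries with 0
    if duplicate_columns:
        A = np.hstack([A, A])                  # copy every column
    return A
\end{verbatim}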
In order to measure overall decomposition performance, we use the mean squared error (MSE, Equation~\eqref{equation:idbid-per-example-loss}), which measures the similarity between the observed and reconstructed matrices; the smaller the value, the better the performance. \begin{figure*}[h] \centering \vspace{-0.15cm} \subfigtopskip=2pt \subfigbottomskip=0pt \subfigcapskip=-2pt \subfigure[Convergence of the models on the CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets, measured by the data fit (MSE). The algorithm almost converges in less than 50 iterations.]{\includegraphics[width=1\textwidth]{imgs/convergences_BIDs_ard.pdf} \label{fig:convergences_BIDs_ard}} \subfigure[Averaged autocorrelation coefficients of samples of $y_{kl}$ computed using Gibbs sampling on the CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets.]{\includegraphics[width=1\textwidth]{imgs/convergences_BIDs_autocorr_ard.pdf} \label{fig:convergences_BIDs_autocorr_ard}} \subfigure[Convergence of the number of selected columns on the CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets. The algorithm almost converges in less than 100 iterations.]{\includegraphics[width=1\textwidth]{imgs/convergences_BIDs_rvectorlen_ard.pdf} \label{fig:convergences_BIDs_numrvector_ard}} \caption{Convergence results (upper), sampling mixing analysis (middle), and convergence of the number of selected columns (lower) on the CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets for various latent dimensions. } \label{fig:allresults_bids_ard} \end{figure*} \subsection{Hyperparameters} In these experiments, we use $a=-1, b=1,\alpha_\sigma=0.1, \beta_\sigma=1$, ($\{\mu_{kl}\}=0, \{\tau_{kl}\}=1$) for GBT, ($\mu_\mu =0$, $\tau_\mu=0.1, \alpha_t=\beta_t=1$) for GBTN, and critical steps $\nu=5$ for GBT and GBTN with ARD. The adopted parameters are uninformative and weak prior choices, and the models are insensitive to them. The latent variables are initialized from random draws once these hyperparameters are fixed, since this initialization method provides a better initial guess of the correct patterns in the matrices. In all scenarios, we run the Gibbs sampler for 1,000 iterations with a burn-in of 100 iterations and a thinning of 5 iterations, as the convergence analysis shows that the algorithm can converge in fewer than 100 iterations. \begin{table}[] \centering \begin{tabular}{lllllll} \hline & $K_1$ & $K_2$ & $K_3$ & $K_4$ & GBT (ARD) & GBTN (ARD) \\ \hline CCLE $EC50$ & 0.354 & 0.218 & 0.131 & 0.046 & \textbf{ 0.034 } & \textbf{ 0.031 } \\ CCLE $IC50$ & 0.301 & 0.231 & 0.161 & 0.103 & \textbf{ 0.035 } & \textbf{ 0.031 } \\ Gene Body Methylation & 0.433 & 0.443 & 0.466 & 0.492 & \textbf{ 0.363 } & \textbf{ 0.372 } \\ Promoter Methylation & 0.323 & 0.319 & 0.350 & 0.337 & \textbf{ 0.252 } & \textbf{ 0.263 } \\ \hline \end{tabular} \caption{Mean squared error with various latent dimensions $K$ for the CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets. In all cases, $K_4$ is the full rank of each matrix, $K_1=5, K_2=10, K_3=15$ for the former two data sets, and $K_1=100, K_2=120, K_3=140$ for the latter two data sets.
Results of GBT and GBTN with ARD surpass those of the GBT and GBTN with full rank $K_4$.} \label{table:covnergence_mse_reporte_ard} \end{table} \subsection{Convergence and Comparative Analysis} We first show the rate of convergence over iterations on the CCLE $EC50$, CCLE $IC50$, Gene Body Methylation, and Promoter Methylation data sets. We run the GBT model with $K=5, 10, 15, 24$ for the CCLE $EC50$ and CCLE $IC50$ data sets, where $K=24$ is the full rank of the matrices\footnote{The results of GBT and GBTN without ARD are close, so here we only provide the results of GBT for clarity.}, and $K=100, 120, 140, 160$ for the Gene Body Methylation and Promoter Methylation data sets, where $K=160$ is the full rank of the matrices; the error is measured by MSE. Figure~\ref{fig:convergences_BIDs_ard} shows the rate of convergence over iterations. Figure~\ref{fig:convergences_BIDs_autocorr_ard} shows the autocorrelation coefficients of samples computed using Gibbs sampling. We observe that the mixings of the GBT and GBTN with ARD are close to those without ARD. The coefficients are less than 0.1 when the lags are larger than 10, showing that the mixing of the Gibbs sampler is good. In all experiments, the algorithm converges in less than 50 iterations. The results on the CCLE $EC50$, Gene Body Methylation, and Promoter Methylation data sets show less noise in the sampling; on the CCLE $IC50$ data set, the sampling of GBT without ARD appears noisier than that of GBT and GBTN with ARD. Comparative results for the GBT and GBTN with ARD and those without ARD on the four data sets are again shown in Figure~\ref{fig:convergences_BIDs_ard} and Table~\ref{table:covnergence_mse_reporte_ard}. In all experiments, the GBT and GBTN with ARD achieve the smallest MSE, even compared to the non-ARD versions with the latent dimension $K$ set to the full matrix rank ($K=24$ for the CCLE $EC50$ and CCLE $IC50$ data sets, and $K=160$ for the Gene Body Methylation and Promoter Methylation data sets). Figure~\ref{fig:convergences_BIDs_numrvector_ard} shows the convergence of the number of selected columns for the GBT and GBTN with ARD models on each data set. We observe that the sampled numbers of columns hover around 27 for the CCLE $EC50$ and CCLE $IC50$ data sets, and around 130 for the Gene Body Methylation and Promoter Methylation data sets; these values are close to the true rank of each matrix. The GBT and GBTN models with ARD can thus determine the number of columns in the factored component $\bm{X}$ automatically. \index{Decomposition: IID} \section{Bayesian Intervened Interpolative Decomposition (IID)}\label{section:iid_main} Going further from the GBT model, we introduce the intervened interpolative decomposition (IID) algorithm \citep{lu2022feature}. The IID algorithm has exactly the same generative process as shown in Equation~\eqref{equation:gbt_data_entry_likelihood}: it applies an inverse-Gamma prior over the variance parameter $\sigma^2$ as in Equation~\eqref{equation:prior_gbt_gamma_on_variance}, and a GTN prior over the latent variables $y_{kl}$'s as in Equation~\eqref{equation:rn_prior_bidd}. However, we consider further that some columns of the observed matrix $\bm{A}$ are more significant and important, and that they \textit{should} be given a higher priority than other columns. Suppose the importance of each column of the observed matrix $\bm{A}$ is captured by a \textit{raw importance vector} $\widehat{\bm{p}}\in \mathbb{R}^N$, where $\widehat{p}_i \in (-\infty, \infty)$ for all $i\in \{1,2,\ldots, N\}$.
The raw importance vector can then be transformed into the range $(0,1)$ by $$ \bm{p} = \text{Sigmoid}(\widehat{\bm{p}}), $$ where \textit{Sigmoid($\cdot$)} is the logistic function $f(x) = \frac{1}{1+\exp\{-x\}}$, which returns a value in the range $(0,1)$. The Sigmoid function acts as a squashing function: its domain is the set of all real numbers, and its range is $(0, 1)$. We then take the vector $\bm{p}$ as the final \textit{importance vector} to indicate the importance of each column in the matrix $\bm{A}$. Going further from Equation~\eqref{equation:postrerior_gbt_rvector}, the intermediate variable $o_j$ is calculated instead by \begin{equation}\label{equation:posterior_IID} \begin{aligned} o_j &= \frac{p(r_j=0, r_i=1)}{p(r_j=1, r_i=0)} \times \frac{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-ji}, r_j=0, r_i=1)}{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-ji}, r_j=1, r_i=0)}\\ &= \textcolor{blue}{ \frac{1-p_j }{p_j} \frac{p_i }{1-p_i} } \times \frac{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-ji}, r_j=0, r_i=1)}{p(\bm{A}\mid \sigma^2, \bm{Y}, \bm{r}_{-ji}, r_j=1, r_i=0)}. \end{aligned} \end{equation} Again, the conditional probability $p(r_j=0, r_i=1\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-ji})$ can be obtained by \begin{equation}\label{equation:postrerior_gbt_rvector222_IID} p(r_j=0, r_i=1\mid \bm{A},\sigma^2, \bm{Y}, \bm{r}_{-ji}) = \frac{o_j}{1+o_j}. \end{equation} Since we intervene in the Gibbs sampling procedure via Equation~\eqref{equation:posterior_IID}, the method is named the \textit{intervened interpolative decomposition (IID)}. \subsection{Quantitative Problem Statement}\label{section:iid_quantaprob} Having presented the algorithm for the intervened interpolative decomposition, one may wonder about its significance and potential practical applications. It is well known that large quantitative hedge funds and asset managers have been recruiting a significant number of data miners and financial engineers in order to construct effective alphas. The number of alpha components might reach into the millions or perhaps billions \citep{tulchinsky2019finding}. As a result, creating a meta-alpha from all of the alphas or a large fraction of the alpha pool might be troublesome for the following reasons: \begin{enumerate} \item If we use the same alphas as others, some illiquid alphas with low volume will be traded heavily. This will make the strategy meaningless due to capacity constraints; \item Using too many alphas may result in overfitting, leading to poor out-of-sample (OS) performance; \item Many alphas might be mutually dependent, and certain machine learning algorithms, such as neural networks and XGBoost models, might hit their limits due to multicollinearity issues while attempting to determine the meta-strategy from the entire set of alphas; \item Finding trading signals from the full alpha pool can be time-consuming because of limited computing resources; \item To minimize market risks, we constantly aim to discover a distinct subset of alphas to test alternative methods with low correlation. \end{enumerate} For the five reasons stated above, there is an urgent need to design algorithms that choose a small subset of alphas from a large pool of them in order to prevent overfitting, make the final approach more scalable, and obtain the findings in a reasonable amount of time. It is trivial to select an appropriate subset by the \textit{RankIC} metric (see the definition below), i.e., we select the alphas having the highest RankIC values.
However, the problems remain that the selected subset may not represent the whole pool of alphas, and that the selected alphas may be mutually dependent. Our objective is to identify as many representative alpha factors as possible with optimal performance. The selected subset of alphas is representative in the sense that this small subset can be used to reconstruct the other alphas with a small replication error. The traditional ID algorithm, whether using the \textit{randomized algorithm} \citep{liberty2007randomized} or the Bayesian approach we have discussed above, can only help find the representative ones; however, the resulting selection may contain alphas with low performance. The discussed IID method, on the other hand, can help find alphas that are both representative (they can reconstruct other alphas with small error) and desirable (they have high RankIC scores) at the same time. \subsubsection{Formulaic Alphas} WorldQuant, a quantitative investment management firm, previously disclosed 101 formulaic short-term alpha determinants in 2016 \citep{kakushadze2016101}. Since then, the 191 alpha factors from Guotai Junan Securities \citep{guotaijunan2017} have also been welcomed by many investors and institutions. These formulaic alpha components are derived from several stock data elements, including, among others, volumes, prices, volatilities, and volume-weighted average prices (vwap). As the name implies, a formulaic alpha is a type of alpha that can be expressed as a formula or a mathematical expression. For example, a \textit{mean-reversion} alpha can be expressed as the following mathematical expression: $$ \text{Alpha = }- \left( \text{close(today) $-$close(5$\_$days$\_$ago ) } \right)/ \text{close(5$\_$days$\_$ago)}. $$ In this sense, we take the opposite action as indicated by the closing price: we go short if the price has risen during the previous five days, and we go long otherwise. At a high level, the alpha value indicates the trend of the price in the days to come; the higher the alpha value for each stock, the more likely it is that the stock's price will rise in the next few days. \subsubsection{Evaluation Metrics} Let $r_{t}$ denote the return of stock $s$ on the $t$-th day. Suppose further that $p_t$ is the closing price of the asset at time $t$, where $t\in \{1,2,\ldots, T\}$; the return of the asset at time $t$ can then be obtained by the following equation: \begin{equation} r_t = \frac{p_t - p_{t-1}}{p_{t-1}}. \end{equation} We use the \textit{Rank information coefficient (RankIC)} to evaluate the effectiveness of an alpha: \begin{equation} \text{RankIC}(\bm{a}, \bm{r}^h)=\text{Spearman}(\bm{a}, \bm{r}^h), \end{equation} where $\text{Spearman}(\cdot)$ denotes the Spearman correlation, $\bm{a}$ is the sequence of an alpha, and $\bm{r}^h$ is the sequence of return values with holding period $h$, such that the $i$-th element of $\bm{r}^h$ represents the return realized $h$ days later. The RankIC can then be used as an indicator of the importance of each alpha factor and plugged into Equation~\eqref{equation:posterior_IID} directly. \subsection{Examples for Bayesian IID}\label{section:iid_experiments} For each stock $s$ (i.e., $s\in \{1,2,\ldots, S\}$, where $S$ is the total number of stocks), we have a matrix $\bm{A}_s$ with shape $\bm{A}_s\in \mathbb{R}^{N\times D}$, where $N$ is the number of alphas and $D$ is the number of dates, so that each row of $\bm{A}_s$ is regarded as an alpha series.
We want to select a subset of the alphas (here we assume that $M$ out of the $N$ alphas are selected). The RankIC between each alpha series and the delayed return series with horizon $h=1$ is then taken directly as the \textit{importance value}; a higher RankIC indicates a higher priority. \begin{table}[h] \centering \small \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1.2} \begin{tabular}{lllll} \hline Ticker & Type & Sector & Company & Avg. Amount \\ \hline SH601988 & Share & Bank & Bank of China Limited & 427,647,786 \\ SH601601 & Share & Public Utility & China Pacific Insurance (Group) & 819,382,926 \\ SH600028 & Share & Public Utility & China Petroleum \& Chemical Corporation & 748,927,952 \\ SH600016 & Share & Bank & China Minsheng Banking Corporation & 285,852,414 \\ SH601186 & Share & Public Utility & China Railway Construction Corporation & 594,970,588 \\ SH601328 & Share & Bank & Bank of Communications Corporation & 484,445,915 \\ SH601628 & Share & Public Utility & China Life Insurance Company Limited & 368,179,861 \\ SH601939 & Share & Bank & China Construction Bank Corporation & 527,876,669 \\ \hline SH510300 & ETF & CSI 300 & Huatai-PineBridge CSI 300 ETF & 1,960,687,059 \\ SH510050 & ETF & CSI 50 & ChinaAMC China CSI 50 ETF & 2,020,385,879 \\ \hline \end{tabular} \vspace{-0.25cm} \caption{Summary of the underlying portfolios in the China market, ten assets in total. The average amount (in RMB) is calculated over the period of the test set.} \label{table:iid_cn_data_summary} \end{table} \paragraph{Data set.} To assess the introduced algorithm and highlight the primary benefits of the IID technique, we perform experiments with several analytical tasks and use data for ten assets from the China market across diverse industrial areas, including Bank, Public Utility, and ETF. We obtain publicly available data from tushare\footnote{\url{https://tushare.pro/}.}. The data covers a three-year period, i.e., 2018-07-18 to 2021-07-05 (720 trading days), where the data between 2018-07-18 and 2020-07-09 is considered the training set (480 trading days), while the data between 2020-07-10 and 2021-07-05 is taken as the test set (240 trading days). The underlying portfolios are summarized in Table~\ref{table:iid_cn_data_summary}, and Figure~\ref{fig:bid_iid_datasets_ashare} shows the series of the different assets, where we initialize each portfolio with a unitary value for clarity. The assets are chosen to have high trading amounts (ten assets selected at random among the fifty assets with the highest average amounts in the China market during the selected period) so that there are fewer trading restrictions. We obtain 78 alphas from the 101 formulaic alphas \citep{kakushadze2016101}, 94 alphas from the 191 formulaic alphas \citep{guotaijunan2017}, and 19 proprietary alphas. The alphas are chosen to have values that are neither too large nor too small. In this sense, the alpha matrix $\bm{A}_s$ is of shape $214\times 720$ for each asset. In all scenarios, the same parameter initialization is adopted when conducting different tasks. Experimental evidence demonstrates that post-processing can marginally improve performance. For clarity, we only provide the findings of the GBT (without ARD) and IID models after post-processing.
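As a concrete illustration of the importance vector used by IID, the sketch below (assuming NumPy and SciPy; the function names and the exact alignment convention between alphas and returns are our own choices) computes the RankIC of each alpha series against the forward return with horizon $h$ and squashes it through the sigmoid.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def forward_return(close, h=1):
    """Return realized over the next h days: (p_{t+h} - p_t) / p_t."""
    close = np.asarray(close, dtype=float)
    return close[h:] / close[:-h] - 1.0

def rank_ic(alpha, close, h=1):
    """RankIC: Spearman correlation between an alpha series and the forward return."""
    fwd = forward_return(close, h)
    rho, _ = spearmanr(alpha[: len(fwd)], fwd)
    return rho

def importance_vector(A_s, close, h=1):
    """Raw importance p_hat = RankIC of each alpha (row of A_s); p = Sigmoid(p_hat)."""
    p_hat = np.array([rank_ic(row, close, h) for row in A_s])
    return 1.0 / (1.0 + np.exp(-p_hat))
\end{verbatim}
The resulting vector $\bm{p}$ is then plugged into Equation~\eqref{equation:posterior_IID} as described above.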
The IID model can select the important features (alphas) with a higher priority while keeping the reconstructive error as small as possible, resulting in performance that is as good as or better than the vanilla GBT method in low-rank ID approximations across a wide range of experiments on different data sets. We again use the mean squared error (MSE, Equation~\eqref{equation:idbid-per-example-loss}), which measures the similarity between the observed and reconstructive matrices, to evaluate the overall decomposition performance; the smaller the value, the better the performance. \begin{figure*}[h] \centering \vspace{-0.2cm} \subfigtopskip=2pt \subfigbottomskip=0pt \subfigcapskip=-2pt \subfigure[Convergence of the models on the SH510050, SH510300, SH601939, SH601628, and SH601328 data sets, as measured by MSE. The algorithm almost converges in less than 100 iterations.]{\includegraphics[width=1\textwidth]{imgs/iid_alpha_convergence.pdf} \label{fig:iid_alpha_convergence}} \subfigure[Averaged autocorrelation coefficients of samples of $y_{kl}$ computed using Gibbs sampling on the SH510050, SH510300, SH601939, SH601628, and SH601328 data sets.]{\includegraphics[width=1\textwidth]{imgs/iid_alpha_autocorrelation.pdf} \label{fig:iid_alpha_autocorrelation}} \vspace{-0.3cm} \caption{Convergence results (upper), and sampling mixing analysis (lower) on the SH510050, SH510300, SH601939, SH601628, and SH601328 data sets for a latent dimension of $K=10$. } \label{fig:allresults_bids_ard_IID} \end{figure*} \paragraph{Hyperparameters.} In those experiments, we use $a=-1, b=1,\alpha_\sigma=0.1, \beta_\sigma=1$, ($\{\mu_{kl}\}=0, \{\tau_{kl}\}=1$) for both GBT and IID models. The adopted parameters are uninformative and weak prior choices and the models are insensitive to them. The observed or unobserved variables are initialized from random draws as long as those hyperparameters are fixed since this initialization method provides a better initial guess of the correct patterns in the matrices. In all cases, we execute 1,000 iterations of Gibbs sampling with a burn-in of 100 iterations and a thinning of 5 iterations, since the convergence analysis indicates the algorithm can converge in fewer than 100 iterations. \begin{table}[] \centering \vspace{-0.35cm} \scriptsize \setlength{\tabcolsep}{1.1pt} \renewcommand{1.5}{1.1} \begin{tabular}{lllllllllll} \hline & SH601988 & SH601601 & SH600028 & SH600016 & SH601186 & SH601328 & SH601628 & SH601939 & SH510300 & SH510050\\ \hline GBT Min. & 5.235 & 5.814 & 5.235 & \textbf{6.381}& 5.819 & 5.700 & 5.734 & 5.785 & 5.462 & 6.297 \\ IID Min. & \textbf{4.567} &\textbf{5.700}& \textbf{4.843} & 6.490 & \textbf{5.104} & \textbf{5.658} & \textbf{5.445} & \textbf{5.435} & \textbf{4.876} & \textbf{5.767} \\ GBT Mean & 6.476 & \textbf{7.367} & 6.764 & 8.053 & 7.066 & 7.250 & 7.206 & 7.242 & 6.769 & 7.776 \\ IID Mean & \textbf{6.239} & 7.449 & \textbf{6.664} & \textbf{7.831} & \textbf{6.558} & \textbf{7.081} & \textbf{7.002} & \textbf{7.031} & \textbf{6.450} & \textbf{7.492} \\ \hline \end{tabular} \vspace{-0.3cm} \caption{Minimal and mean MSE measures after burn-in across different iterations for GBT and IID models on the 10 alpha matrices from 10 assets. In all cases, $K=10$ is set as the latent dimension. 
In most cases, the results of IID converge to a smaller value than the GBT model.} \label{table:comparis_gbt_iid_mse} \vspace{-0.1cm} \end{table} \begin{figure*}[h] \centering \vspace{-0.3cm} \subfigtopskip=2pt \subfigbottomskip=9pt \subfigcapskip=-5pt \subfigure[Ten different portfolios where we initialize each portfolio with a unitary value for clarity.]{\includegraphics[width=0.485\textwidth]{imgs/bid_iid_datasets_ashare.pdf} \label{fig:bid_iid_datasets_ashare}} \subfigure[Portfolio values with the same strategy by using different alphas via comparative selection models.]{\includegraphics[width=0.485\textwidth]{imgs/bid_iid_portfolio_ashare.pdf} \label{fig:bid_iid_portfolio_ashare}} \vspace{-0.7cm} \caption{Portfolio values of the ten assets (left), and the portfolio values (right) of different methods where we split by in-sample and out-of-sample periods, and initialize with a unitary value for each period. The IID performs better in the out-of-sample period (see also Table~\ref{table:iid_selected_mean_ic}).} \label{fig:bid_iid_portfolio_ashare_full} \end{figure*} \subsubsection{Convergence and Comparative Analysis} We first show the rate of convergence over iterations on different assets. Due to space constraints, we omit convergence results for the first five assets and only present those for portfolios SH510050, SH510300, SH601939, SH601628, and SH601328. Results for the other assets are quantitatively similar. We run GBT and IID models with $K=10$ for the five data sets where $214$ is the full rank of the matrices, and the error is measured by MSE. Figure~\ref{fig:iid_alpha_convergence} shows the rate of convergence over iterations. Figure~\ref{fig:iid_alpha_autocorrelation} shows the autocorrelation coefficients of samples computed using Gibbs sampling. We observe that the mixings of the IID are close to those of GBT. When the lags are greater than ten, the coefficients are less than 0.1, indicating that the Gibbs sampler mixes well. In all experiments, the algorithm converges in less than 100 iterations. We also observe that the IID model does not converge to a larger error than the vanilla GBT model, though we put more emphasis on selecting the columns with high RankIC. Table~\ref{table:comparis_gbt_iid_mse} presents the minimal MSE and mean MSE after burn-in across different iterations for GBT and IID models on the ten alpha matrices from ten assets. In most cases, the IID can even converge to a smaller MSE value. \begin{algorithm}[!htb] \caption{Alpha selection for portfolio allocation. Select holding period $h$, number of alphas to select is $M$. $D_{in}$ is the in-sample number of days, $D$ is the total number of days, $N$ is the total number of alphas. We then select $M$ alphas out of the $N$ alphas. 
} \label{alg:iid_alpha_selection} \begin{algorithmic}[1] \State Split the alpha matrix for in-sample (IS) and out-of-sample (OS) evaluations: $$ \bm{A}_{\text{in}} = \bm{A}_s[:,0:D_{\text{in}}] \in \mathbb{R}^{N\times D_{\text{in}}}, \gap \bm{A}_{\text{out}} = \bm{A}_s[:,D_{\text{in}}+1:D]\in \mathbb{R}^{N\times (D-D_{\text{in}})}; $$ \State Using (column) ID to decide the alphas to be selected on matrix $\bm{A}_{\text{in}}^\top$, with the selected indices $\bm{m}$: $$ \widehat{\bm{A}}_{\text{in}} = \bm{A}_s[\bm{m},0:D_{\text{in}}] \in \mathbb{R}^{\textcolor{blue}{M}\times D_{\text{in}}}, \gap \widehat{\bm{A}}_{\text{out}} = \bm{A}_s[\bm{m},D_{\text{in}}+1:D]\in \mathbb{R}^{\textcolor{blue}{M}\times (D-D_{\text{in}})}; $$ \For{$m=1$ to $M$} \State \algoalign{Using the $m$-th IS alpha vector $\bm{a}_m=\widehat{\bm{A}}_{\text{in}}[m,:]\in \mathbb{R}^{D_{\text{in}}}$ to decide the weight $\bm{w}_m$ and intercept $b_m$ via ordinary least squares (OLS) so that the MSE between the prediction $\bm{a}_m^\top\bm{w}_m +b_m$ and the shifted return vector $\bm{r}^h$ is minimized, i.e., minimizing $\text{MSE}(\bm{a}_m^\top\bm{w}_m +b_m, \bm{r}^h)$. The weight and intercept are then used in the OS evaluation.} \EndFor \For{$d=1$ to $D-D_{\text{in}}$} \State \algoalign{On each day in the OS period, we use the mean of the predictions from the $M$ alphas to decide whether to go long or not, i.e., to go long if $\sum_{m=1}^M (\bm{a}_m^\top\bm{w}_m +b_m) >0$, and to do nothing otherwise, since we restrict the analysis to long-only portfolios.} \State \algoalign{Though we employ a long-only portfolio, we can favor a \textit{market-neutral strategy}: we open long positions only when we anticipate that at least half of the stocks will rise over the following $h$ days, and we weight each stock equally.} \EndFor \end{algorithmic} \end{algorithm} \begin{table}[] \centering \setlength{\tabcolsep}{7.3pt} \renewcommand{\arraystretch}{1.2} \small \begin{tabular}{lllll} \hline Methods & Highest RankIC & Randomized ID & BID with GBT & BID with IID \\ \hline Mean RankIC & \textbf{0.1035} & 0.0651 & 0.0553 & \textbf{0.0752} \\ Mean Correlation & 0.2276$\downarrow$ & 0.5741$\downarrow$ & \textbf{0.1132} & \textbf{0.1497} \\ \hline Sharpe Ratio (OS) & 1.0276 & 1.0544 & 0.5045 & \textbf{1.5721} \\ Sharpe Ratio (IS) & \textbf{2.6511} & 1.3019 & 1.4965 & 2.3231 \\ \hline Annual Return (OS) & 0.1043 & 0.0932 & 0.0484 & \textbf{0.1633} \\ Annual Return (IS) & \textbf{0.4390} & 0.2281 & 0.2425 & 0.3805 \\ \hline Max Drawdown (OS) & 0.0632 & \textbf{0.0373} & \textbf{0.0484} & \textbf{0.0552} \\ Max Drawdown (IS) & \textbf{0.0892} & 0.1548 & 0.1232 & \textbf{0.0975} \\ \hline \end{tabular} \vspace{-0.3cm} \caption{Mean RankIC and correlation of the selected alphas across various assets for different methods. A higher mean RankIC and a lower mean correlation are better. The IID method finds a trade-off between the mean RankIC and the mean correlation. In all cases, \textit{IS} means in-sample measurements, and \textit{OS} means out-of-sample measurements. The symbol ``$\downarrow$" means the performance is extremely poor. } \label{table:iid_selected_mean_ic} \end{table} \subsubsection{Quantitative Strategy} After executing the GBT and IID algorithms to compute the interpolative decomposition of each asset's alpha matrix, the state vector $\bm{r}$ for each asset is saved, and the ten alphas with the largest mean selection frequency during the 1,000 iterations are chosen (with a burn-in of 100 iterations and a thinning of 5 iterations).
Then we follow the quantitative strategy in Algorithm~\ref{alg:iid_alpha_selection} (in which case $h=1$, $N=214$ alphas, $M=10$ alphas, $D=720$ trading days, and $D_{\text{in}}=480$ trading days). The procedure shown in Algorithm~\ref{alg:iid_alpha_selection} is a very simple quantitative strategy; however, it shows precisely how the IID method can work in practice. The strategy using the alphas selected by the IID method is only slightly worse than the one selecting the \textit{highest RankIC} alphas in terms of the in-sample (IS) Sharpe ratio, annual return, and maximum drawdown; however, the IID performs better in the out-of-sample (OS) scenario, which is what we actually care about (see Table~\ref{table:iid_selected_mean_ic} and Figure~\ref{fig:bid_iid_portfolio_ashare}). For comparison, we also adopt the randomized algorithm \citep{liberty2007randomized} to compute the ID, termed \textit{Randomized ID}. The Randomized ID performs even worse than the BID with GBT (see Table~\ref{table:iid_selected_mean_ic}). Though the IID does not select the alphas with the highest RankIC values, this does not mean that the alpha selection procedure is meaningless, for the following reasons: \begin{enumerate} \item \textit{Pool size}: We use a small alpha pool that contains only 214 alpha factors. When the number of alphas approaches millions or even billions, the alpha selection procedure is expected to work better. \item \textit{Correlation}: The mean correlation of the selected alphas across the ten assets is smaller for the IID method than for the highest-RankIC method. In this sense, the alphas of the latter method have high correlations and low diversity. If the correlated alphas have low liquidity or perform poorly during a given period, the strategy's risk might increase. \item \textit{Machine learning models}: In our test, we only use OLS to find the weight of each alpha. For more complex models, e.g., neural networks or XGBoost models, correlated alphas can cause multicollinearity problems, hampering both performance and interpretability. \item \textit{Diversification}: Even if selecting the alphas with the highest RankIC can work well in practice, we also want to diversify the strategies so that we are not exposed to specific risks. The IID method can help find different strategies. \end{enumerate} \chapter{Bayesian Nonnegative Matrix Factorization}\label{chapter:bnmf} \begingroup \hypersetup{linkcolor=winestain} \minitoc \newpage \endgroup \section{Introduction} The nonnegative matrix factorization (NMF) method analyzes data matrices with nonnegative elements, which are common in data sets derived from texts and images \citep{berry2007algorithms}. In cases where the data and the factor matrices are constrained to be nonnegative, NMF algorithms have frequently improved performance. Thus, the scope of NMF research has grown rapidly in recent years, particularly in the field of machine learning \citep{lee1999learning, lee2000algorithms}. Early work on nonnegative matrix factorizations was performed by a Finnish group of researchers in the 1990s under the name \textit{positive matrix factorization} \citep{paatero1991matrix, paatero1994positive, anttila1995source}. This work is rarely cited by later researchers, partly due to the unfortunate term positive matrix factorization, which is misleading since \citet{paatero1994positive} actually construct a nonnegative matrix factorization.
Since its introduction by \citet{lee1999learning, lee2000algorithms}, the NMF problem has received a significant amount of attention, both in published and unpublished work, in various fields such as science, engineering, and medicine. Different authors have also proposed alternative formulations for the NMF problem \citep{schmidt2009bayesian, tan2013automatic, brouwer2017prior, lu2022flexible}. The NMF problem can be stated as $\bm{A}=\bm{W}\bm{Z}+\bm{E}$, where $\bm{A}\in \mathbb{R}^{M\times N}$ is approximately factorized into an $M\times K$ matrix $\bm{W}\in \mathbb{R}_+^{M\times K}$ and a $K\times N$ matrix $\bm{Z}\in \mathbb{R}_+^{K\times N}$. The data set $\bm{A}$ need not be complete; the indices of observed entries can be represented by the mask matrix $\bm{M}\in \mathbb{R}^{M\times N}$, where an entry of one indicates that the element is observed and an entry of zero indicates that it is unobserved. This nonnegativity makes the resulting matrices easier to inspect and more intuitive to interpret, for example in an image analysis context. To simplify the problem, let us first assume that there are no missing entries. Treating missing entries in the Bayesian NMF context is the same as in the Bayesian RMF case (Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}). We project the data vectors $\bm{a}_n$ into a lower dimension $\bm{z}_n \in \mathbb{R}^K$ with $K<M$, such that the \textit{reconstruction error}, measured by the Frobenius norm, is minimized (assuming $K$ is known): \begin{equation}\label{equation:als-per-example-loss_bnmf} \mathop{\min}_{\bm{W},\bm{Z}} L(\bm{W},\bm{Z}) = \mathop{\min}_{\bm{W},\bm{Z}}\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2, \end{equation} where $\bm{W}=[\bm{w}_1^\top; \bm{w}_2^\top; \ldots; \bm{w}_M^\top]\in \mathbb{R}^{M\times K}$ and $\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N] \in \mathbb{R}^{K\times N}$ contain the $\bm{w}_m$'s and $\bm{z}_n$'s as \textbf{rows and columns}, respectively. \paragraph{Terminology.} Again, following the same terminology as in Bayesian RMF, we name the Bayesian NMF models by their density functions in the order of likelihood, priors, and hyperpriors (Section~\ref{section:bmf_real_intro}, p.~\pageref{section:bmf_real_intro}). Table~\ref{table:summ_real_nmf} summarizes the Bayesian models for nonnegative matrix factorization in this chapter.
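To make the objective in Equation~\eqref{equation:als-per-example-loss_bnmf} and its masked variant concrete, the following short Python snippet evaluates the (optionally masked) squared reconstruction error; the helper name and argument layout are illustrative assumptions, not part of any referenced implementation.
\begin{verbatim}
import numpy as np

def reconstruction_error(A, W, Z, mask=None):
    """Squared reconstruction error sum_{m,n} M_mn (a_mn - w_m^T z_n)^2.

    A    : (M, N) data matrix.
    W    : (M, K) nonnegative factor matrix (rows w_m).
    Z    : (K, N) nonnegative factor matrix (columns z_n).
    mask : (M, N) binary matrix of observed entries; None means fully observed.
    """
    residual = A - W @ Z
    if mask is not None:
        residual = residual * mask     # only observed entries contribute
    return float(np.sum(residual ** 2))
\end{verbatim}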
\begin{table}[htp] \centering \small \setlength{\tabcolsep}{1.0pt} \renewcommand{1.5}{1.25} \begin{tabular}{l|llll} \hline Name & Likelihood &Prior $\bm{W}$ & Prior $\bm{Z}$ & Hierarchical prior \\ \hline \hline GEE & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{E}(w_{mk}|\lambda_{mk}^W)$ & $\mathcal{E}(z_{kn}|\lambda_{kn}^Z)$ & \gap\gap\slash \\ \hline GEEA & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{E}(w_{mk}|\lambda_k)$ & $\mathcal{E}(z_{kn}|\lambda_k)$ & $\mathrm{Ga}(\lambda_k|\alpha_\lambda, \beta_\lambda)$ \\ \hline GTT& $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{TN}(w_{mk}|\mu_{mk}^W, \frac{1}{\tau_{mk}^W})$ & $\mathcal{TN}(z_{kn}|\mu_{kn}^Z, \frac{1}{\tau_{kn}^Z})$ & \gap\gap\slash \\ \hline GTTN& $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{TN}(w_{mk}|\mu_{mk}^W, \frac{1}{\tau_{mk}^W})$ & $\mathcal{TN}(z_{kn}|\mu_{kn}^Z, \frac{1}{\tau_{kn}^Z})$ & \begin{tabular}{@{}c@{}}$\{\mu_{mk}^W, \tau_{mk}^W\}$, $\{\mu_{kn}^Z, \tau_{kn}^Z\}\sim$ \\ $\mathcal{TNSNG}(\mu_\mu, \tau_\mu, a, b)$\end{tabular}\\ \hline GRR & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{RN}(\cdot|\mu_{mk}^W, \frac{1}{\tau_{mk}^W}, \lambda_{mk}^W)$ & $\mathcal{RN}(\cdot|\mu_{kn}^Z, \frac{1}{\tau_{kn}^Z}, \lambda_{kn}^Z)$ & \gap\gap\slash \\ \hline GRRN & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{RN}(\cdot|\mu_{mk}^W, \frac{1}{\tau_{mk}^W}, \lambda_{mk}^W)$ & $\mathcal{RN}(\cdot|\mu_{kn}^Z, \frac{1}{\tau_{kn}^Z}, \lambda_{kn}^Z)$ & \begin{tabular}{@{}c@{}}$\{\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W\}$, \\ $\{\mu_{kn}^Z, \tau_{kn}^Z, \lambda_{kn}^Z\}\sim$ \\ $\mathcal{RNSNG}(\mu_\mu, \tau_\mu, a, b, \alpha_\lambda, \beta_\lambda)$\end{tabular} \\\hline GL$_1^2$& $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & \begin{tabular}{@{}c@{}} $\bm{W}\sim\exp$ \\ $\{ \frac{-\lambda_k^W}{2}\sum_{m}(\sum_{k}w_{mk})^2 \}$ \end{tabular} & \begin{tabular}{@{}c@{}} $\bm{Z}\sim\exp$ \\ $\{ \frac{-\lambda_k^Z}{2}\sum_{n}(\sum_{k}z_{kn})^2 \}$ \end{tabular} & \gap\gap\slash \\ \hline GL$_2^2$& $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & \begin{tabular}{@{}c@{}} $\bm{W}\sim\exp$ \\ $\{ \frac{-\lambda_k^W}{2}\sum_{m}(\sum_{k}w_{mk}^2) \}$ \end{tabular} & \begin{tabular}{@{}c@{}} $\bm{Z}\sim\exp$ \\ $\{ \frac{-\lambda_k^Z}{2}\sum_{n}(\sum_{k}z_{kn}^2) \}$ \end{tabular} & \gap\gap\slash \\ \hline GL$_\infty$& $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & \begin{tabular}{@{}c@{}} $\bm{W}\sim\exp\{ -$ \\ $\lambda_k^W\sum_{m}(\max_k\abs{w_{mk}}) \}$ \end{tabular} & \begin{tabular}{@{}c@{}} $\bm{Z}\sim\exp\{ -$ \\ $\lambda_k^Z\sum_{n}(\max_k\abs{z_{kn}}) \}$ \end{tabular} & \gap\gap\slash \\ \hline\hline GEG & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{E}(w_{mk}|\lambda_{mk}^W)$ & $\mathcal{N}(z_{kn}|0, (\lambda_{kn}^Z)^{-1})$ & \gap\gap\slash \\ \hline GnVG & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & \begin{tabular}{@{}c@{}}$\bm{W}\sim$ \\ $\exp\{-\gamma \bm{W}^\top\bm{W}\}u(\bm{W})$ \end{tabular} & $\mathcal{N}(z_{kn}|0, (\lambda_{kn}^Z)^{-1})$ & \gap\gap\slash \\ \hline \end{tabular} \caption{Overview of Bayesian nonnegative and semi-nonnegative matrix factorization models.} \label{table:summ_real_nmf} \end{table} \begin{figure}[htp] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[GEE.]{\label{fig:bmf_gee} \includegraphics[width=0.421\linewidth]{./imgs/bmf_gee.pdf}} 
\subfigure[GEEA.]{\label{fig:bmf_geea} \includegraphics[width=0.421\linewidth]{./imgs/bmf_geea.pdf}} \caption{Graphical model representation of GEE and GEEA models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or".} \label{fig:bmf_gee_geea} \end{figure} \index{Decomposition: GEE} \section{Gaussian Likelihood with Exponential Priors (GEE)}\label{section:gee_model} The Gaussian likelihood with exponential priors (GEE) model is perhaps the most simple one for Bayesian NMF where Gaussian likelihood is applied and exponential priors are used over factor matrices \citep{schmidt2009bayesian}. \paragraph{Likelihood.} Again, we view the data $\bm{A}$ as being produced according to the probabilistic generative process shown in Figure~\ref{fig:bmf_gee}. We assume the residuals, $e_{mn}$, are i.i.d. drawn from a zero mean Gaussian distribution with variance $\sigma^2$. This is equivalent to assuming the observed $(m,n)$-th data entry $a_{mn}$ of matrix $\bm{A}$ is modeled using a Gaussian likelihood with variance $\sigma^2$ and mean given by the latent decomposition $\bm{w}_m^\top\bm{z}_n$ (Equation~\eqref{equation:als-per-example-loss_bnmf}), which gives rise to the following likelihood function, \begin{equation}\label{equation:gee_likelihood} \begin{aligned} p(\bm{A}\mid \boldsymbol\theta) &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid (\bm{W}\bm{Z})_{mn}, \sigma^2 \right)\\ &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid (\bm{W}\bm{Z})_{mn}, \tau^{-1} \right) \end{aligned} \end{equation} where $\boldsymbol\theta=\{\bm{W},\bm{Z},\sigma^2\}$ denotes all parameters in the model, $\sigma^2$ is the variance, $\tau^{-1}=\sigma^2$ is the precision, and $\mathcal{N}(x\mid \mu,\sigma^2)$ is the normal density function. \paragraph{Prior.} We treat the latent variables $w_{mk}$'s (and $z_{kn}$'s) as random variables. And we need prior densities over these latent variables to express beliefs about their values, e.g., nonnegativity in this context though there are many other possible constraints (semi-nonnegativity in \citet{ding2008convex}, discreteness in \citet{gopalan2014bayesian, gopalan2015scalable}). Here we assume $w_{mk}$'s and $z_{kn}$'s are independently drawn from exponential priors with rate parameters $\lambda_{mk}^W$ and $\lambda_{kn}^Z$ respectively (Definition~\ref{definition:exponential_distribution}, p.~\pageref{definition:exponential_distribution}), \begin{equation}\label{equation:gee_prior_density_exponential} \begin{aligned} w_{mk} &\sim \mathcal{E}(w_{mk}\mid \lambda_{mk}^W), \gap &z_{kn}\sim& \mathcal{E}(z_{kn}\mid \lambda_{kn}^Z);\\ p(\bm{W}) &=\prod_{m,k=1}^{M,K} \mathcal{E}(w_{mk}\mid \lambda_{mk}^W), \gap &p(\bm{Z}) =&\prod_{k,n=1}^{K,N} \mathcal{E}(z_{kn}\mid \lambda_{kn}^Z), \end{aligned} \end{equation} where $\mathcal{E}(x\mid \lambda)=\lambda\exp(-\lambda x)u(x)$ is the exponential density with $u(x)$ being the unit step function. This prior serves to enforce the nonnegativity constraint on the components $\bm{W}, \bm{Z}$. 
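As a concrete illustration of this generative process, the following Python sketch simulates a data matrix from the GEE model under the stated likelihood and priors; the dimensions, rate $\lambda$, and noise variance $\sigma^2$ below are illustrative values only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 20, 30, 5        # data dimensions and latent dimension (illustrative)
lam, sigma2 = 0.1, 1.0     # exponential rate and Gaussian noise variance

# Exponential priors enforce nonnegativity of the factor matrices.
W = rng.exponential(scale=1.0 / lam, size=(M, K))   # w_mk ~ E(lambda)
Z = rng.exponential(scale=1.0 / lam, size=(K, N))   # z_kn ~ E(lambda)

# Gaussian likelihood: a_mn ~ N(w_m^T z_n, sigma^2).
A = W @ Z + rng.normal(scale=np.sqrt(sigma2), size=(M, N))
\end{verbatim}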
As in the GGG model, the prior for the noise variance $\sigma^2$ is chosen as an inverse-Gamma density with shape ${\alpha_\sigma}$ and scale ${\beta_\sigma}$ (Definition~\ref{definition:inverse_gamma_distribution}, p.~\pageref{definition:inverse_gamma_distribution}), \begin{equation}\label{equation:geg_sigma_prior} p(\sigma^2)= \mathrm{IG}(\sigma^2\mid \alpha_\sigma, \beta_\sigma) = \frac{{\beta_\sigma}^{\alpha_\sigma}}{\Gamma({\alpha_\sigma})} (\sigma^2)^{-\alpha_\sigma-1} \exp\left( -\frac{{\beta_\sigma}}{\sigma^2} \right). \end{equation} Again, by Bayes' rule (Equation~\eqref{equation:posterior_abstract_for_mcmc}, p.~\pageref{equation:posterior_abstract_for_mcmc}), the posterior is proportional to the product of the likelihood and the prior; it can be maximized to yield an estimate of $\bm{W}$ and $\bm{Z}$. \paragraph{Posterior.} For NMF, following Bayes' rule and MCMC, this means we need to be able to draw from the following conditional distributions (by the Markov blanket, Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}): $$ \begin{aligned} &p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2,\boldsymbol\lambda^W), \\ & p(z_{kn}\mid \bm{A}, \bm{W}, \bm{Z}_{-kn}, \sigma^2,\boldsymbol\lambda^Z ), \\ & p(\sigma^2 \mid \bm{A}, \bm{W}, \bm{Z}, \boldsymbol\lambda^W, \boldsymbol\lambda^Z ), \\ \end{aligned} $$ where $\boldsymbol\lambda^W$ is an $M\times K$ matrix containing all $\{\lambda_{mk}^W\}$ entries, $\boldsymbol\lambda^Z$ is a $K\times N$ matrix including all $\{\lambda_{kn}^Z\}$ values, and $\bm{W}_{-{mk}}$ denotes all elements of $\bm{W}$ except $w_{mk}$. Using Bayes' theorem, the conditional density of $w_{mk}$ depends on its parents ($\lambda_{mk}^W$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}_{-mk}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_gee} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}.
And it can be obtained by \begin{equation}\label{equation:gee_poster_wmk1} \begin{aligned} &\gap p(w_{mk} \mid \bm{A} , \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{mk}^W ) \\ &\propto p(\bm{A}\mid \bm{W}, \bm{Z}, \sigma^2) \times p(w_{mk}\mid \lambda_{mk}^W) =\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}\mid \bm{w}_i^\top\bm{z}_j, \sigma^2 \right)\times \mathcal{E}(w_{mk}\mid \lambda_{mk}^W) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N}(a_{ij} - \bm{w}_i^\top\bm{z}_j )^2\right\} \times \cancel{\lambda_{mk}^W }\exp(-\lambda_{mk}^W \cdot w_{mk})u(w_{mk})\\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N}(a_{mj} - \bm{w}_m^\top\bm{z}_j )^2\right\} \cdot \exp(-\lambda_{mk}^W\cdot w_{mk})u(w_{mk})\\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N} \left( w_{mk}^2z_{kj}^2 + 2w_{mk} z_{kj}\bigg(\sum_{i\neq k}^{K}w_{mi}z_{ij} - a_{mj}\bigg) \right) \right\} \cdot \exp(-\lambda_{mk}^W\cdot w_{mk})u(w_{mk})\\ &\propto \exp\left\{ -\underbrace{\left(\frac{\sum_{j=1}^{N} z_{kj}^2 }{2\sigma^2} \right) }_{\textcolor{blue}{1/(2\widetilde{\sigma_{mk}^{2}})}} w_{mk}^2 + w_{mk}\underbrace{\left( -\lambda_{mk}^W+ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right)}_{\textcolor{blue}{\widetilde{\sigma_{mk}^{2}}^{-1} \widetilde{\mu_{mk}}}} \right\} \cdot u(w_{mk})\\ &\propto \mathcal{N}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})\cdot u(w_{mk}) = \mathcal{TN}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}), \end{aligned} \end{equation} where $u(x)$ is the unit function with value 1 if $x\geq 0$ and value 0 if $x<0$, \begin{equation}\label{equation:gee_posterior_variance} \widetilde{\sigma_{mk}^{2}}= \frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2} \end{equation} is the posterior ``parent" variance of the normal distribution with ``parent" mean $\widetilde{\mu_{mk}}$, \begin{equation}\label{equation:gee_posterior_mean} \widetilde{\mu_{mk}} = \left( -\lambda_{mk}^W+ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right)\cdot \widetilde{\sigma_{mk}^{2}} \end{equation} and $\mathcal{TN}(x \mid \mu, \sigma^2)$ is the \textit{truncated-normal (TN) density} with ``parent" mean $\mu$ and ``parent" variance $\sigma^2$ (Definition~\ref{definition:truncated_normal}, p.~\pageref{definition:truncated_normal}). \begin{mdframed}[hidealllines=true,backgroundcolor=\mdframecolorNote,frametitle={Interpretation of the Posterior: Sparsity Constraint}] We can see that the exponential prior is equivalent to impose a $L_1$ norm such that the GEE model favors a sparsity constraint. The sparsity comes from the negative term $-\lambda_{mk}^W$ in Equation~\eqref{equation:gee_posterior_mean}, when $\lambda_{mk}^W$ becomes larger, the posterior ``parent" mean becomes smaller, and the TN distribution will have a larger probability for smaller values since the draws of $\mathcal{TN}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})$ will be around zero thus imposing sparsity (see Figure~\ref{fig:dists_truncatednorml_mean}, p.~\pageref{fig:dists_truncatednorml_mean}). 
\end{mdframed} Or after rearrangement, the posterior density of $w_{mk}$ can be equivalently described by $$ \small \begin{aligned} &\gap p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \boldsymbol\lambda^W )\\ &\propto \exp\left\{ (\frac{-1 }{2\sigma^2} \sum_{j=1}^{N} z_{kj}^2) w_{mk}^2 + w_{mk}\underbrace{\left( \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right)}_{\textcolor{blue}{\widehat{\sigma_{mk}^{2}}^{-1} \widehat{\mu_{mk}}}} \right\} \exp(-\lambda_{mk}^W w_{mk}) u(w_{mk})\\ &\propto \mathcal{N}(w_{mk} \mid \widehat{\mu_{mk}}, \widehat{\sigma_{mk}^{2}})\cdot \mathcal{E}(w_{mk}\mid\lambda_{mk}^W) = \mathcal{RN}(w_{mk} \mid \widehat{\mu_{mk}}, \widehat{\sigma_{mk}^{2}}, \lambda_{mk}^W ), \end{aligned} $$ where $\widehat{\sigma^2_{mk}}=\widetilde{\sigma_{mk}^{2}}= \frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2}$ is the posterior ``parent" variance of the normal distribution with ``parent" mean $\widehat{\mu_{mk}}$, $$ \widehat{\mu_{mk}} = \frac{1}{\sum_{j=1}^{N} z_{kj}^2} \cdot \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) $$ and $\mathcal{RN}(x\mid \mu, \sigma^2, \lambda)\propto \mathcal{N}(x\mid \mu, \sigma^2) \mathcal{E}(x\mid \lambda)$ is the \textit{rectified-normal (RN) density} (Definition~\ref{definition:reftified_normal_distribution}, p.~\pageref{definition:reftified_normal_distribution}). And due to symmetry, a similar expression for $z_{kn}$ can be easily derived. The conditional density of $\sigma^2$ depends on its parents ($\alpha_\sigma$, $\beta_\sigma$), children ($\bm{A}$), and coparents ($\bm{W}$, $\bm{Z}$). And it is an inverse-Gamma distribution (by conjugacy in Equation~\eqref{equation:inverse_gamma_conjugacy_general}, p.~\pageref{equation:inverse_gamma_conjugacy_general}), \begin{equation}\label{equation:gee_posterior_sigma2} \begin{aligned} & p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z}, \alpha_\sigma, \beta_\sigma) = \mathrm{IG} (\sigma^2\mid \widetilde{\alpha_{\sigma}}, \widetilde{\beta_{\sigma}}), \qquad \\ & \widetilde{\alpha_{\sigma}} = \frac{MN}{2} +{\alpha_\sigma}, \qquad \widetilde{\beta_{\sigma}} = \frac{1}{2} \sum_{m,n=1}^{M,N} (\bm{A}-\bm{W}\bm{Z})_{mn}^2 + {\beta_\sigma}. \end{aligned} \end{equation} \begin{algorithm}[h] \caption{Gibbs sampler for GEE model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_{mk}^W\}= \{\lambda_{kn}^Z\}=0.1$.} \label{alg:gee_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \lambda_{mk}^W, \lambda_{kn}^Z$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A} , \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{mk}^W)$; \Comment{Equation~\eqref{equation:gee_poster_wmk1}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A} , \bm{W}, \bm{Z}_{-kn},\sigma^2 \lambda_{kn}^Z )$; \Comment{Symmetry of Eq.~\eqref{equation:gee_poster_wmk1}} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_bnmf}, stop if it converges. 
\end{algorithmic} \end{algorithm} \paragraph{Gibbs sampling.} By this Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler} (p.~\pageref{section:gibbs-sampler}), we can construct a Gibbs sampler for the GEE model as formulated in Algorithm~\ref{alg:gee_gibbs_sampler}. And also in practice, all the parameters of the exponential distribution are set to be the same value $\lambda=\{\lambda_{mk}^W\}'s = \{\lambda_{nk}^Z\}'s$ for all $m,k,n$. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_{mk}^W\}= \{\lambda_{kn}^Z\}=0.1$. \index{Decomposition: GEEA} \section{Gaussian Likelihood with Exponential Priors and ARD Hierarchical Prior (GEEA)}\label{section:geea_nmf_model} The Gaussian likelihood with exponential priors and hierarchical prior (GEEA) model is proposed by \citet{tan2013automatic} based on the GEE model where the difference lies in that the GEEA model applies a hyperprior on the exponential prior. Moreover, GEEA favors an \textit{automatic relevance determination} (ARD) that helps perform \textit{automatic model selection}. This ARD works by replacing the individual scale parameter of exponential prior for the factored components $\bm{W},\bm{Z}$ by one that is shared by all entries in the same column of $\bm{W}$ and the same row of $\bm{Z}$. In other words, the parameters are shared for each factor. \paragraph{Hyperprior.} For each prior density in Equation~\eqref{equation:gee_prior_density_exponential}, we assume a Gamma distribution on the parameters of exponential distributions in Equation~\eqref{equation:gee_prior_density_exponential}, $$ w_{mk}\sim \mathcal{E}(w_{mk}\mid \lambda_{k}), \gap z_{kn}\sim \mathcal{E}(z_{kn}\mid \lambda_{k}), \gap \lambda_{k} \sim \mathrm{Ga}(\lambda_{k} \mid \alpha_\lambda, \beta_\lambda), $$ where $\lambda_k$ is shared by all entries in the same column of $\bm{W}$ and the same row of $\bm{Z}$. The entire factor $k$ is then either activated if $\lambda_k$ has a low value or ``turned off" if $\lambda_k$ has a high value (see Figure~\ref{fig:dists_exponential}, p.~\pageref{fig:dists_exponential}). Therefore we can give an upper bound on the number of hidden factors $K$ instead of choosing the correct value of $K$. The graphical model is shown in Figure~\ref{fig:bmf_geea}. \paragraph{Posterior.} For NMF, following the Bayes' rule and MCMC, this means we need to be able to draw from distributions (again by Markov blanket, Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}): $$ \begin{aligned} &p(\sigma^2 \mid \bm{A}, \bm{W}, \bm{Z}, \boldsymbol\lambda ), &\gap& p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z},\sigma^2, \boldsymbol\lambda ), \\ &p(\lambda_k \mid \bm{W}, \bm{Z}, \boldsymbol\lambda_{-k}, \alpha_\lambda, \beta_\lambda), &\gap& p(z_{kn}\mid\bm{A} , \bm{W},\bm{Z}_{-kn}, \sigma^2, \boldsymbol\lambda ), \\ \end{aligned} $$ where $\boldsymbol\lambda\in \mathbb{R}_+^K$ is a vector including all $\lambda_k$ values and $\boldsymbol\lambda_{-k}$ denotes all elements of $\boldsymbol\lambda$ except $\lambda_k$. The posteriors for $w_{mk}$'s and $z_{kn}$'s are the same as those in the GEE model, except now we replace $\lambda_{mk}^W$ and $\lambda_{kn}^Z$ by $\lambda_k$. The posteriors for $\lambda_k$ can be obtained using Bayes' theorem. 
The conditional density of $\lambda_k$ depends on its parents ($\alpha_\lambda, \beta_\lambda$), children (the $k$-th column $\widehat{\bm{w}}_k$ of $\bm{W}$ and the $k$-th row $\widehat{\bm{z}}_k$ of $\bm{Z}$; note that we defined $\bm{w}_m$ as the $m$-th row of $\bm{W}$ and $\bm{z}_n$ as the $n$-th column of $\bm{Z}$ in Equation~\eqref{equation:als-per-example-loss_bnmf}), and coparents (none) \footnote{See Figure~\ref{fig:bmf_geea} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}. Then it follows that, \begin{equation}\label{equation:posterior-geea_lambdak} \begin{aligned} &\gap p(\lambda_k \mid \bm{W}, \bm{Z}, \alpha_\lambda, \beta_\lambda)\\ &\propto p(\widehat{\bm{w}}_k, \widehat{\bm{z}}_k \mid \lambda_k) \times p(\lambda_k) = \prod_{i=1}^{M} \mathcal{E}(w_{ik}\mid \lambda_{k}) \cdot \prod_{j=1}^{N} \mathcal{E}(z_{kj}\mid \lambda_{k}) \times \mathrm{Ga}(\lambda_k \mid \alpha_{\lambda}, \beta_\lambda)\\ &= \prod_{i=1}^{M} \lambda_k \exp(-\lambda_k w_{ik}) \cdot \prod_{j=1}^{N} \lambda_k \exp(-\lambda_k z_{kj}) \times \frac{\beta_{\lambda}^{\alpha_\lambda}}{\Gamma(\alpha_\lambda)} \lambda_k^{\alpha_\lambda-1} \exp(- \lambda_k \beta_\lambda)\\ &\propto\lambda_k^{M+N +\alpha_\lambda -1} \exp\left\{-\lambda_k \cdot \left(\sum_{i=1}^{M}w_{ik}+ \sum_{j=1}^{N}z_{kj} +\beta_\lambda\right)\right\}\\ &\propto \mathrm{Ga}(\lambda_k \mid \widetilde{\alpha_\lambda}, \widetilde{\beta_\lambda}), \end{aligned} \end{equation} where $$ \widetilde{\alpha_\lambda} = M+N+\alpha_\lambda, \qquad \widetilde{\beta_\lambda}=\sum_{i=1}^{M}w_{ik}+ \sum_{j=1}^{N}z_{kj} +\beta_\lambda. $$ From this posterior form, the prior parameter $\alpha_\lambda$ can be interpreted as the number of prior observations, and $\beta_\lambda$ as the sum of the prior observations. Therefore, weak prior parameters can be chosen as $\alpha_\lambda=\beta_\lambda=1$. \paragraph{Gibbs sampling.} Again we can construct a Gibbs sampler for the GEEA model as formulated in Algorithm~\ref{alg:geea_gibbs_sampler}. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\alpha_\lambda=\beta_\lambda=1$.
\begin{algorithm}[h] \caption{Gibbs sampler for GEEA model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\alpha_\lambda=\beta_\lambda=1$.} \label{alg:geea_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \alpha_\lambda, \beta_\lambda$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{k} )$; \Comment{Equation~\eqref{equation:gee_poster_wmk1}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A}, \bm{W}, \bm{Z}_{-kn},\sigma^2, \lambda_{k} )$; \Comment{Symmetry of Eq.~\eqref{equation:gee_poster_wmk1}} \EndFor \State Sample $\lambda_k$ from $p(\lambda_k \mid \bm{W}, \bm{Z}, \alpha_\lambda, \beta_\lambda)$; \Comment{Equation~\eqref{equation:posterior-geea_lambdak}} \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_bnmf}, stop if it converges.
\end{algorithmic} \end{algorithm} \index{Decomposition: GTT} \section{Gaussian Likelihood with Truncated-Normal Priors (GTT)} The Gaussian likelihood with truncated-normal priors (GTT) model is discussed in \citet{brouwer2017prior} where truncated-normal (TN) priors are used over factored matrices (Figure~\ref{fig:bmf_gtt}). The truncated-normal distribution is a variant of the normal distribution where the values smaller than zero are excluded (Definition~\ref{definition:truncated_normal}, p.~\pageref{definition:truncated_normal}) and thus it can impose nonnegativity in Bayesian models. The likelihood is chosen to be the same as that in the GEE model (Equation~\eqref{equation:gee_likelihood}). \paragraph{Prior.} We assume $\bm{W}$ and $\bm{Z}$ are independently truncated-normal distributed with mean and precision $\{\boldsymbol\mu^W ,\boldsymbol\tau^W\}$, $\{\boldsymbol\mu^Z,\boldsymbol\tau^Z\}$, \begin{equation}\label{equation:gtt_prior_wmk} w_{mk} \sim \mathcal{TN}(w_{mk} \mid \mu_{mk}^{W}, (\tau_{mk}^W)^{-1} ), \gap z_{kn} \sim \mathcal{TN}(z_{kn} \mid \mu_{kn}^Z, (\tau_{kn}^Z)^{-1} ), \end{equation} where $\boldsymbol\mu^W$ is an $M\times K$ matrix containing all $\{\mu_{mk}^W\}$ entries, $\boldsymbol\mu^Z$ is a $K\times N$ matrix including all $\{\mu_{kn}^Z\}$ values, $\boldsymbol\tau^W$ is an $M\times K$ matrix containing all $\{\tau_{mk}^W\}$ entries, and $\boldsymbol\tau^Z$ is a $K\times N$ matrix including all $\{\tau_{kn}^Z\}$ values. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[GTT.]{\label{fig:bmf_gtt} \includegraphics[width=0.421\linewidth]{./imgs/bmf_gtt.pdf}} \subfigure[GTTN.]{\label{fig:bmf_gttn} \includegraphics[width=0.421\linewidth]{./imgs/bmf_gttn.pdf}} \caption{Graphical model representation of GTT and GTTN models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or", and the comma ``," in the variable represents ``and".} \label{fig:bmf_gtt_gttn} \end{figure} \paragraph{Posterior.} Again, following the Bayes' rule and MCMC, this means we need to be able to draw from distributions (by Markov blanket, Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}): $$ \begin{aligned} &p(w_{mk}\mid\bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \mu_{mk}^W,\tau_{mk}^W), \\ & p(z_{kn}\mid\bm{A}, \bm{W}, \bm{Z}_{-kn}, \sigma^2, \mu_{kn}^Z, \tau_{kn}^Z), \\ & p(\sigma^2 \mid \bm{A}, \bm{W}, \bm{Z}, \alpha_\sigma, \beta_\sigma ), \\ \end{aligned} $$ where $\bm{W}_{-{mk}}$ denotes all elements of $\bm{W}$ except $w_{mk}$ and $\bm{Z}_{-kn}$ denotes all elements of $\bm{Z}$ except $z_{kn}$. Using Bayes' theorem, the conditional density of $w_{mk}$ depends on its parents ($\mu_{mk}^W$, $\tau_{mk}^W$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}_{-mk}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_gee} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}. 
And it can be obtained by (similar to computing the conditional density of $w_{mk}$ in the GEE model, Equation~\eqref{equation:gee_poster_wmk1}) \begin{equation}\label{equation:gtt_posterior_wmk1} \small \begin{aligned} &\gap p(w_{mk} \mid \sigma^2, \bm{W}_{-mk}, \bm{Z}, \mu_{mk}^W, \tau_{mk}^W, \bm{A}) \propto p(\bm{A}\mid \bm{W}, \bm{Z}, \sigma^2) \cdot p(w_{mk}\mid \mu_{mk}^W, (\tau_{mk}^W)^{-1})\\ &=\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}\mid \bm{w}_i^\top\bm{z}_j, \sigma^2 \right)\times \mathcal{TN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1} ) \\ &\propto \exp\bigg\{ - (\frac{ \sum_{j=1}^{N} z_{kj}^2 }{2\sigma^2} +\textcolor{black}{\frac{\tau_{mk}^W}{2}}) w_{mk}^2 + w_{mk} \big\{ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}( a_{mj}- \sum_{i\neq k}^{K}w_{mi}z_{ij}) + \textcolor{black}{\tau_{mk}^W \mu_{mk}^W} \big\} \bigg\} u(w_{mk})\\ &\propto \mathcal{N}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})\cdot u(w_{mk}) = \mathcal{TN}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}), \end{aligned} \end{equation} where $\widetilde{\sigma_{mk}^{2}}= \frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 + \tau_{mk}^W\cdot \sigma^2}$ is the posterior ``parent" variance of the normal distribution with ``parent" mean $\widetilde{\mu_{mk}}$, $$ \widetilde{\mu_{mk}} = \left\{ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij} \bigg) + \textcolor{black}{\tau_{mk}^W \mu_{mk}^W}\right\}\cdot \widetilde{\sigma_{mk}^{2}}. $$ Again, due to symmetry, a similar expression for $z_{kn}$ can be easily derived. Finally, the conditional density of $\sigma^2$ is the same as that in the GEE model (Equation~\eqref{equation:gee_posterior_sigma2}). \begin{algorithm}[h] \caption{Gibbs sampler for GTT model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\mu_{mk}^W\}=\{\mu_{kn}^Z\}=0$, $\{\tau_{mk}^W\}=\{\tau_{kn}^Z\}=0.1$.} \label{alg:gtt_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \mu_{mk}^W, \tau_{mk}^W,\mu_{kn}^Z, \tau_{kn}^Z$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A}, \bm{W}_{-mk},\bm{Z},\sigma^2,\mu_{mk}^W,\tau_{mk}^W)$; \Comment{Equation~\eqref{equation:gtt_posterior_wmk1}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A}, \bm{W},\bm{Z}_{-kn},\sigma^2,\mu_{kn}^Z,\tau_{kn}^Z)$; \Comment{Symmetry of Eq.~\eqref{equation:gtt_posterior_wmk1}} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_bnmf}, stop if it converges. \end{algorithmic} \end{algorithm} \paragraph{Gibbs sampling.} We can again construct a Gibbs sampler for the GTT model as formulated in Algorithm~\ref{alg:gtt_gibbs_sampler}. And also in practice, all the parameters of the truncated-normal priors are set to be the same value $\mu^W=\{\mu_{mk}^W\}'s, \mu^Z=\{\mu_{nk}^Z\}'s$, $\tau^W=\{\tau_{mk}^W\}'s, \tau^Z=\{\tau_{nk}^Z\}'s$ for all $m,k,n$. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\mu_{mk}^W\}=\{\mu_{kn}^Z\}=0$, $\{\tau_{mk}^W\}=\{\tau_{kn}^Z\}=0.1$. 
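To make the conditional draw in Equation~\eqref{equation:gtt_posterior_wmk1} concrete, the following Python sketch samples a single entry $w_{mk}$ from its truncated-normal conditional using SciPy; the function name and argument layout are illustrative assumptions. Setting $\tau_{mk}^W=0$ and replacing the term $\tau_{mk}^W\mu_{mk}^W$ with $-\lambda_{mk}^W$ recovers the GEE update in Equation~\eqref{equation:gee_poster_wmk1}.
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def sample_w_mk_gtt(A, W, Z, m, k, sigma2, mu_w, tau_w, rng=None):
    """Draw w_mk from its truncated-normal conditional under the GTT model."""
    # Residual of row m with the k-th component removed: a_mj - sum_{i != k} w_mi z_ij.
    resid = A[m, :] - W[m, :] @ Z + W[m, k] * Z[k, :]
    post_var = sigma2 / (np.sum(Z[k, :] ** 2) + tau_w * sigma2)
    post_mean = (resid @ Z[k, :] / sigma2 + tau_w * mu_w) * post_var
    sd = np.sqrt(post_var)
    lower = (0.0 - post_mean) / sd          # truncate below at zero
    return truncnorm.rvs(lower, np.inf, loc=post_mean, scale=sd, random_state=rng)
\end{verbatim}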
\index{Decomposition: GTTN} \section{Gaussian Likelihood with Truncated-Normal and Hierarchical Priors (GTTN)} This hierarchical prior was originally proposed in \citet{schmidt2009probabilistic} over a rectified-normal distribution, and further discussed in \citet{brouwer2017prior} based on the GTT model; the difference is that the GTTN model puts a hyperprior on the two parameters of the truncated-normal distribution (Figure~\ref{fig:bmf_gttn}). \paragraph{Hyperprior.} We have shown in Equation~\eqref{equation:conjugate_truncated_nonnegative_mean} (p.~\pageref{equation:conjugate_truncated_nonnegative_mean}) that the truncated-normal density is a conjugate prior over the nonnegative mean parameter of a Gaussian distribution, which is favored in the GTT model. Moreover, if the prior over the $\{w_{mk}\}$'s and $\{z_{kn}\}$'s had been a Gaussian, appropriate conjugate priors for the mean and variance would be a normal-inverse-Gamma or a normal-inverse-Chi-square distribution (Equation~\eqref{equation:conjugate_nigamma_general}, p.~\pageref{equation:conjugate_nigamma_general}; Equation~\eqref{equation:nix-posterior}, p.~\pageref{equation:nix-posterior}). However, these priors are not conjugate to the truncated-normal density. Instead, a convenient prior called the \textit{TN-scaled-normal-Gamma (TNSNG)} distribution is used (or a \textit{TN-scaled-normal-inverse-Gamma} prior for the ``parent" mean and ``parent" variance parameters) \footnote{The original hyperprior in \citet{schmidt2009probabilistic} is a rectified-normal (RN) scaled one. Here we scale it in the context of the truncated-normal density.}: $$ \begin{aligned} \mu_{mk}^W, \tau_{mk}^W \mid \mu_\mu, \tau_\mu, a, b &\sim \mathcal{TNSNG}(\mu_{mk}^W, \tau_{mk}^W \mid \mu_\mu, \tau_\mu, a, b)\\ &\propto \frac{1}{\sqrt{\tau_{mk}^W}} \left(1 - \Phi\big(-\mu_{mk}^W\sqrt{\tau_{mk}^W} \big) \right) \cdot \mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1})\cdot \mathrm{Ga}(\tau_{mk}^W \mid a, b);\\ \mu_{kn}^Z, \tau_{kn}^Z \mid \mu_\mu, \tau_\mu, a, b &\sim \mathcal{TNSNG}(\mu_{kn}^Z, \tau_{kn}^Z \mid \mu_\mu, \tau_\mu, a, b). \end{aligned} $$ Here we use the same hyperparameters $\{ \mu_\mu, \tau_\mu, a, b\}$ over the different entries $\{\mu_{mk}^W, \tau_{mk}^W \}$ and $\{\mu_{kn}^Z, \tau_{kn}^Z\}$. However, in rare cases, one may favor different behaviors over $\bm{W}$ and $\bm{Z}$, e.g., small values in $\bm{W}$ and large values in $\bm{Z}$; in such cases, the hyperparameters can be chosen to have different values (see the comparison in Figure~\ref{fig:bmf_gtt_gttn2}). Note that the scaled-normal-Gamma distribution is not simply a product of a normal and a Gamma distribution. It is not easy to sample from this distribution; however, we will see that the posteriors have simple forms due to this prior, which is the reason why we add the scaling terms to the prior density. The prior decouples the parameters $\mu_{mk}^W$ and $\tau_{mk}^W$, and their posterior conditional densities are normal and Gamma, respectively, due to this convenient scaling.
\begin{figure}[tp] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[GTTN with same hyperparameters. Same as Figure~\ref{fig:bmf_gttn}.]{\label{fig:bmf_gttnsss} \includegraphics[width=0.421\linewidth]{./imgs/bmf_gttn.pdf}} \subfigure[GTTN with different hyperparameters.]{\label{fig:bmf_gttnddd} \includegraphics[width=0.421\linewidth]{./imgs/bmf_gttnd.pdf}} \caption{Graphical model representation of GTTN with same and different hyperparameters.
Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or", and the comma ``," in the variable represents ``and".} \label{fig:bmf_gtt_gttn2} \end{figure} \paragraph{Posterior.} The posteriors for the $\{w_{mk}\}$'s, $\{z_{kn}\}$'s, and $\sigma^2$ are the same as those in the GTT model. The posteriors for $\{\mu_{mk}^W, \tau_{mk}^W\}$ can be obtained using Bayes' rule, where the conditional densities of $\{\mu_{mk}^W, \tau_{mk}^W\}$ depend on their parents ($\mu_\mu, \tau_\mu, a, b$), children ($w_{mk}$), and coparents (none). Then it follows from the likelihood in Equation~\eqref{equation:gtt_prior_wmk} that the conditional density of $\mu_{mk}^W$ is \begin{equation}\label{equation:posterior_gttn_mu_tau1} \begin{aligned} &\gap p(\mu_{mk}^W \mid\textcolor{black}{ \tau_{mk}^W}, w_{mk}, \mu_\mu, \tau_\mu, a, b)\\ &\propto \mathcal{TN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1} ) \cdot \frac{1}{\sqrt{\tau_{mk}^W}} \left(1 - \Phi\big(-\mu_{mk}^W\sqrt{\tau_{mk}^W} \big) \right) \mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1}) \mathrm{Ga}(\tau_{mk}^W \mid a, b)\\ &\propto \exp\left\{ - \underbrace{\frac{\tau_{mk}^W + \tau_\mu}{2}}_{\textcolor{blue}{\widetilde{t}/2}} (\mu_{mk}^W)^2 + \mu_{mk}^W \underbrace{(\tau_{mk}^W w_{mk} +\tau_{\mu}\mu_\mu)}_{\textcolor{blue}{\widetilde{m}\cdot \widetilde{t}}} \right\} \propto \mathcal{N}(\mu_{mk}^W \mid \widetilde{m}, \widetilde{t}^{-1}), \end{aligned} \end{equation} where $ \widetilde{t}=\tau_{mk}^W + \tau_\mu, \widetilde{m}=(\tau_{mk}^W w_{mk} +\tau_{\mu}\mu_\mu)/\widetilde{t}. $ And the conditional density of $\tau_{mk}^W$ is \begin{equation}\label{equation:posterior_gttn_mu_tau2} \begin{aligned} &\gap p(\tau_{mk}^W \mid\textcolor{black}{\mu_{mk}^W }, w_{mk}, \mu_\mu, \tau_\mu, a, b)\\ &\propto \mathcal{TN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1} ) \cdot \frac{1}{\sqrt{\tau_{mk}^W}} \left(1 - \Phi\big(-\mu_{mk}^W\sqrt{\tau_{mk}^W} \big) \right) \mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1}) \mathrm{Ga}(\tau_{mk}^W \mid a, b)\\ &\propto (\tau_{mk}^W)^{a-1} \exp\left\{-\left( b+ \frac{(w_{mk}-\mu_{mk}^W)^2}{2} \right) \tau_{mk}^W \right\} \propto \mathrm{Ga}(\tau_{mk}^W \mid \widetilde{a}, \widetilde{b}), \end{aligned} \end{equation} where $\widetilde{a} = a, \widetilde{b}= b+ \frac{(w_{mk}-\mu_{mk}^W)^2}{2}$. And again due to symmetry, the expressions for $\mu_{kn}^Z$ and $\tau_{kn}^Z$ can be easily derived similarly. The Gibbs sampler for the GTTN model is then formulated in Algorithm~\ref{alg:gttn_gibbs_sampler}. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\mu_\mu=0, \tau_\mu=0.1$, $a=b=1$.
\begin{algorithm}[h] \caption{Gibbs sampler for GTTN model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner.
By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\mu_\mu=0, \tau_\mu=0.1$, $a=b=1$.} \label{alg:gttn_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \mu_\mu, \tau_\mu, a, b$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid\bm{A} , \bm{W}_{-mk}, \bm{Z}, \sigma^2, \mu_{mk}^W, \tau_{mk}^W)$; \Comment{Equation~\eqref{equation:gtt_posterior_wmk1}} \State Sample $\mu_{mk}^W$ from $p(\mu_{mk}^W \mid \tau_{mk}^W, w_{mk}, \mu_\mu, \tau_\mu, a, b)$; \Comment{Equation~\eqref{equation:posterior_gttn_mu_tau1}} \State Sample $\tau_{mk}^W$ from $p(\tau_{mk}^W \mid\mu_{mk}^W, w_{mk} , \mu_\mu, \tau_\mu, a, b)$; \Comment{Equation~\eqref{equation:posterior_gttn_mu_tau2}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A}, \bm{W}, \bm{Z}_{-kn}, \sigma^2,\mu_{kn}^Z, \tau_{kn}^Z )$; \Comment{Symmetry of Eq.~\eqref{equation:gtt_posterior_wmk1}} \State Sample $\mu_{kn}^Z$ from $p(\mu_{kn}^Z \mid \tau_{kn}^Z, z_{kn}, \mu_\mu, \tau_\mu, a, b)$; \Comment{Symmetry of Eq.~\eqref{equation:posterior_gttn_mu_tau1}} \State Sample $\tau_{kn}^Z$ from $p(\tau_{kn}^Z \mid\mu_{kn}^Z, z_{kn} , \mu_\mu, \tau_\mu, a, b)$; \Comment{Symmetry of Eq.~\eqref{equation:posterior_gttn_mu_tau2}} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_bnmf}, stop if it converges. \end{algorithmic} \end{algorithm} \index{Decomposition: GRR} \index{Decomposition: GRRN} \section{Gaussian Likelihood with Rectified-Normal Priors (GRR) and Hierarchical Prior (GRRN)} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[GRR.]{\label{fig:bmf_grr} \includegraphics[width=0.421\linewidth]{./imgs/bmf_grr.pdf}} \subfigure[GRRN.]{\label{fig:bmf_grrn} \includegraphics[width=0.421\linewidth]{./imgs/bmf_grrn.pdf}} \caption{Graphical representation of GRR and GRRN models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or", and the comma ``," in the variable represents ``and".} \label{fig:bmf_grr_grrn} \end{figure} The Gaussian likelihood with rectified-normal and hierarchical priors (GRR and GRRN) models are proposed in \citet{lu2022flexible} to further favor flexibility based on GTT and GTTN models. Again, we view the data $\bm{A}$ as being produced according to the probabilistic generative process shown in Figure~\ref{fig:bmf_grrn}. The observed $(m,n)$-th data entry $a_{mn}$ of matrix $\bm{A}$ is modeled using a Gaussian likelihood function with variance $\sigma^2$ and mean given by the latent decomposition $\bm{w}_m^\top\bm{z}_n$ (Equation~\eqref{equation:als-per-example-loss_bnmf}). The likelihood is again chosen to be the same as that in the GEE model (Equation~\eqref{equation:gee_likelihood}). \paragraph{Prior.} We treat the latent variables $w_{mk}$'s (and $z_{kn}$'s) as random variables. And we need prior densities over these latent variables to express beliefs for their values, e.g., nonnegativity in this context. 
Here we assume further that the latent variables $w_{mk}$'s and $z_{kn}$'s are independently drawn from a \textit{rectified-normal (RN)} priors (a.k.a., an \textit{exponentially rectified-normal} distribution, Definition~\ref{definition:reftified_normal_distribution}, p.~\pageref{definition:reftified_normal_distribution}), \begin{equation}\label{equation:rn_prior_grrn} \begin{aligned} p(w_{mk} \mid \cdot ) &= \mathcal{RN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W);\\ p(z_{kn} \mid \cdot ) &= \mathcal{RN}(z_{kn} \mid \mu_{kn}^Z, (\tau_{kn}^Z)^{-1}, \lambda_{kn}^Z). \end{aligned} \end{equation} This prior serves to enforce the nonnegativity constraint on the components $\bm{W}, \bm{Z}$, and is conjugate to the Gaussian likelihood (Equation~\eqref{equation:conjugate_rectified_nonnegative_mean}, p.~\pageref{equation:conjugate_rectified_nonnegative_mean}). In some scenarios, the two sets of latent variables can be drawn from two different rectified-normal priors, e.g., enforcing sparsity in $\bm{W}$ while non-sparsity in $\bm{Z}$. And we shall not consider this case for our later examples as it is not the main interest of this book. The posterior density is a truncated-normal distribution that is a special rectified-normal distribution. The model is then called a Gaussian likelihood with rectified-normal priors (GRR) model. Since the RN distribution is a special TN distribution, the GRR model is the same as the GTT model with a careful choice of prior parameters. What makes the RN prior important is from the hierarchical model that provides flexibility and guidance on the prior parameter choices. \paragraph{Hierarchical prior.} To further favor flexibility, we choose a convenient joint hyperprior density over the parameters $\{\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W\}$ of RN prior in Equation~\eqref{equation:rn_prior_grrn}, namely, the \textit{RN-scaled-normal-Gamma (RNSNG)} prior, \begin{equation} \begin{aligned} &\gap p(\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W \mid\cdot) = \mathcal{RNSNG}(\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W\mid \mu_\mu, \tau_\mu, a, b, \alpha_\lambda, \beta_\lambda)\\ &= C(\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W) \cdot \mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1}) \cdot \mathrm{Ga}(\tau_{mk}^W \mid a, b) \cdot \mathrm{Ga}(\lambda_{mk}^W \mid \alpha_\lambda, \beta_\lambda), \end{aligned} \end{equation} where $C(\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W)$ is a constant in terms of $\{\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W\}$. This prior can decouple parameters $\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W$, and the posterior conditional densities of them are Gaussian, Gamma, and Gamma respectively due to this convenient scale. A similar RNSNG prior is given over $\{\mu_{kn}^Z, \tau_{kn}^Z, \lambda_{kn}^Z \}$. \paragraph{Posterior.} Again, following the Bayes' rule and MCMC, this means we need to be able to draw from distributions (by Markov blanket, Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}): $$ \begin{aligned} &p(w_{mk}\mid\bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \mu_{mk}^W,\tau_{mk}^W, \lambda_{mk}^W), \\ & p(z_{kn}\mid\bm{A}, \bm{W}, \bm{Z}_{-kn}, \sigma^2, \mu_{kn}^Z, \tau_{kn}^Z, \lambda_{kn}^Z), \\ & p(\sigma^2 \mid \bm{A}, \bm{W}, \bm{Z}, \alpha_\sigma, \beta_\sigma ), \\ \end{aligned} $$ where $\bm{W}_{-{mk}}$ denotes all elements of $\bm{W}$ except $w_{mk}$ and $\bm{Z}_{-kn}$ denotes all elements of $\bm{Z}$ except $z_{kn}$. 
Using Bayes' theorem, the conditional density of $w_{mk}$ depends on its parents ($\mu_{mk}^W$, $\tau_{mk}^W$, $\lambda_{mk}^W$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}_{-mk}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_gee} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}. The conditional density of $w_{mk}$ is a truncated-normal density. And it can be obtained by (similar to computing the conditional density of $w_{mk}$ in the GEE model, Equation~\eqref{equation:gee_poster_wmk1}), \begin{equation}\label{equation:posterior_grrn_wmk122_app} \small \begin{aligned} &\gap p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z},\sigma^2, \mu_{mk}^W,\tau_{mk}^W, \lambda_{mk}^W) \propto p(\bm{A}\mid\bm{W}, \bm{Z}, \sigma^2) \times p(w_{mk} \mid \mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W)\\ &\propto \prod_{i,j=1}^{M,N} \mathcal{N}(a_{ij}\mid \bm{w}_i^\top\bm{z}_j, \sigma^2) \times \mathcal{RN}(w_{mk}\mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W)\\ &\stackrel{\star}{\propto} \prod_{i,j=1}^{M,N} \mathcal{N}(a_{ij}\mid \bm{w}_i^\top\bm{z}_j, \sigma^2) \times \mathcal{TN}\left( w_{mk} \mid \underbrace{\frac{\tau_{mk}^W\mu_{mk}^W - \lambda_{mk}^W}{\tau_{mk}^W}}_{\textcolor{blue}{:=\mu^\prime}}, (\tau_{mk}^W)^{-1} \right)\\ &\propto \exp\left\{ -\left(\frac{\sum_{j=1}^{N}z_{kj}^2 }{2\sigma^2} + \frac{\tau_{mk}^W}{2}\right)w_{mk}^2 + w_{mk} \left( \frac{1}{\sigma^2} \sum_{j=1}^{N}z_{kj}(a_{mj}- \sum_{i\neq k}^{K}w_{mi}z_{ij}) + \tau_{mk}^W \mu^\prime \right) \right\} u(w_{mk})\\ & \propto \mathcal{N}(w_{mk}\mid \widetilde{\mu_{mk}} , \widetilde{\sigma_{mk}^2})u(w_{mk}) = \mathcal{TN}(w_{mk}\mid \widetilde{\mu_{mk}} , \widetilde{\sigma_{mk}^2}), \end{aligned} \end{equation} where the step $(\star)$ follows from the equivalence between the RN and TN distributions (Definition~\ref{definition:reftified_normal_distribution}, p.~\pageref{definition:reftified_normal_distribution}), $\widetilde{\sigma_{mk}^2} = \frac{\sigma^2}{ \sum_{j=1}^{N} z_{kj}^2 + \tau_{mk}^W \cdot \sigma^2 }$ is the posterior ``parent" variance of the normal distribution with posterior ``parent" mean $$ \widetilde{\mu_{mk}} = \left( \frac{1}{\sigma^2} \sum_{j=1}^{N}z_{kj}(a_{mj}- \sum_{i\neq k}^{K}w_{mi}z_{ij}) + \tau_{mk}^W \mu^\prime \right)\cdot \widetilde{\sigma_{mk}^2}, $$ with $\mu^\prime = \frac{\tau_{mk}^W\mu_{mk}^W - \lambda_{mk}^W}{\tau_{mk}^W}$ being the ``parent" mean of the truncated-normal density. Due to symmetry, the conditional posterior for $z_{kn}$ can be easily derived similarly. \paragraph{Extra update for GRRN.} Following the graphical representation of the GRRN model in Figure~\ref{fig:bmf_grrn}, we need to draw samples iteratively from $$ \begin{aligned} &p(\mu_{mk}^W \mid\textcolor{black}{ \tau_{mk}^W, \lambda_{mk}^W}, \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk}),\\ &p(\tau_{mk}^W \mid\textcolor{black}{ \mu_{mk}^W, \lambda_{mk}^W}, \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk}),\\ &p(\lambda_{mk}^W \mid\textcolor{black}{ \mu_{mk}^W,\tau_{mk}^W} , \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk}).
\end{aligned} $$ The conditional density for $\mu_{mk}^W$ is a truncated-normal (a special rectified-normal), \begin{equation}\label{equation:posterior_grrn_mu_tau1222_app} \begin{aligned} &\gap p(\mu_{mk}^W \mid\textcolor{black}{ \tau_{mk}^W, \lambda_{mk}^W}, \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk})\\ &\propto \mathcal{RN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W) \cdot \mathcal{RNSNG}(\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W\mid \mu_\mu, \tau_\mu, a, b, \alpha_\lambda, \beta_\lambda)\\ &\propto \mathcal{RN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W) \cdot \mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1}) \cdot \mathrm{Ga}(\tau_{mk}^W \mid a, b) \cdot \mathrm{Ga}(\lambda_{mk}^W \mid \alpha_\lambda, \beta_\lambda)\\ &= \mathcal{N}(w_{mk}| \mu_{mk}^W, (\tau_{mk}^W)^{-1})\cdot \cancel{\mathcal{E}(w_{mk}| \lambda_{mk}^W)} \cdot \mathcal{N}(\mu_{mk}^W| \mu_\mu, (\tau_\mu)^{-1}) \cdot \cancel{\mathrm{Ga}(\tau_{mk}^W | a, b)} \cdot \cancel{\mathrm{Ga}(\lambda_{mk}^W | \alpha_\lambda, \beta_\lambda)}\\ &\propto \mathcal{N}(w_{mk}\mid \mu_{mk}^W, (\tau_{mk}^W)^{-1})\mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1}) \propto \mathcal{N}(\mu_{mk}^W \mid \widetilde{m}, \widetilde{t}^{-1}), \end{aligned} \end{equation} where $ \widetilde{t}=\tau_{mk}^W + \tau_\mu, \widetilde{m}=(\tau_{mk}^W w_{mk} +\tau_{\mu}\mu_\mu)/\widetilde{t} $ are the posterior mean and precision respectively. The samples $w_{mk}$'s are nonnegative due to the rectification in the distribution (by exponential distribution inside the density). However, this ``parent" mean parameter $\mu_{mk}^W$ is not limited to be nonnegative. The conditional density for $\tau_{mk}^W$ is a Gamma distribution, \begin{equation}\label{equation:posterior_grrn_tau_tau1222_app} \begin{aligned} &\gap p(\tau_{mk}^W \mid\textcolor{black}{ \mu_{mk}^W, \lambda_{mk}^W}, \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk})\\ &\propto \mathcal{RN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W) \cdot \mathcal{RNSNG}(\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W\mid \mu_\mu, \tau_\mu, a, b, \alpha_\lambda, \beta_\lambda) \\ &\propto \mathcal{RN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W) \cdot \mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1}) \cdot \mathrm{Ga}(\tau_{mk}^W \mid a, b) \cdot \mathrm{Ga}(\lambda_{mk}^W \mid \alpha_\lambda, \beta_\lambda)\\ &= \mathcal{N}(w_{mk}| \mu_{mk}^W, (\tau_{mk}^W)^{-1})\cdot \cancel{\mathcal{E}(w_{mk}| \lambda_{mk}^W)} \cdot \cancel{\mathcal{N}(\mu_{mk}^W| \mu_\mu, (\tau_\mu)^{-1})} \cdot {\mathrm{Ga}(\tau_{mk}^W | a, b)} \cdot \cancel{\mathrm{Ga}(\lambda_{mk}^W | \alpha_\lambda, \beta_\lambda)}\\ &\propto \mathcal{N}(w_{mk}\mid \mu_{mk}^W, (\tau_{mk}^W)^{-1})\mathrm{Ga}(\tau_{mk}^W \mid a, b)\\ &\propto (\tau_{mk}^W)^{a+\frac{1}{2}-1} \exp\left\{-\left( b+ \frac{(w_{mk}-\mu_{mk}^W)^2}{2} \right) \tau_{mk}^W \right\} \propto \mathrm{Ga}(\tau_{mk}^W \mid \widetilde{a}, \widetilde{b}), \end{aligned} \end{equation} where $\widetilde{a} = a+\frac{1}{2}, \widetilde{b}= b+ \frac{(w_{mk}-\mu_{mk}^W)^2}{2}$ are the posterior shape and rate parameters. 
Furthermore, the conditional density for $\lambda_{mk}^W$ is also a Gamma distribution, \begin{equation}\label{equation:posterior_grrn_lambda122_app} \begin{aligned} &\gap p(\lambda_{mk}^W \mid\textcolor{black}{ \mu_{mk}^W,\tau_{mk}^W} , \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk})\\ &\propto \mathcal{RN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W)\cdot \mathcal{RNSNG}(\mu_{mk}^W, \tau_{mk}^W, \lambda_{mk}^W\mid \mu_\mu, \tau_\mu, a, b, \alpha_\lambda, \beta_\lambda) \\ &\propto \mathcal{RN}(w_{mk} \mid \mu_{mk}^W, (\tau_{mk}^W)^{-1}, \lambda_{mk}^W) \cdot \mathcal{N}(\mu_{mk}^W\mid \mu_\mu, (\tau_\mu)^{-1}) \cdot \mathrm{Ga}(\tau_{mk}^W \mid a, b) \cdot \mathrm{Ga}(\lambda_{mk}^W \mid \alpha_\lambda, \beta_\lambda)\\ &= \cancel{\mathcal{N}(w_{mk}| \mu_{mk}^W, (\tau_{mk}^W)^{-1})}\cdot {\mathcal{E}(w_{mk}| \lambda_{mk}^W)} \cdot \cancel{\mathcal{N}(\mu_{mk}^W| \mu_\mu, (\tau_\mu)^{-1})} \cdot \cancel{\mathrm{Ga}(\tau_{mk}^W | a, b)} \cdot {\mathrm{Ga}(\lambda_{mk}^W | \alpha_\lambda, \beta_\lambda)}\\ &\propto \mathcal{E}(w_{mk}\mid \lambda_{mk}^W)\mathrm{Ga}(\lambda_{mk}^W \mid \alpha_\lambda, \beta_\lambda) \propto \mathrm{Ga}(\lambda_{mk}^W \mid \widetilde{\alpha_\lambda}, \widetilde{\beta_\lambda}), \end{aligned} \end{equation} where $ \widetilde{\alpha_\lambda}= \alpha_\lambda+1, \widetilde{\beta_\lambda}= \beta_\lambda + w_{mk}. $ The \textbf{importance} of this hierarchical prior is revealed by this conditional density: \textbf{the prior parameter $\alpha_\lambda$ can be interpreted as the number of prior observations, and $\beta_\lambda$ as the prior knowledge of $w_{mk}$.} On the one hand, an uninformative choice for $\alpha_\lambda$ is $\alpha_\lambda=1$. On the other hand, if one prefers a sparse decomposition with larger regularization on the model, $\beta_\lambda$ can be chosen as a small value, e.g., $\beta_\lambda=0.01$. Alternatively, a large value, e.g., $\beta_\lambda=100$, can be applied since we are in the NMF context; large values in $\bm{W}$ will force the counterpart entries in $\bm{Z}$ to have small values. An \textit{uninformative choice} for $\beta_\lambda$ is as follows. Suppose the mean value of all entries of matrix $\bm{A}$ is $m_0$; then $\beta_\lambda$ can be set as $\beta_\lambda=\sqrt{\frac{m_0}{K}}$, where $K$ is the latent dimension, so that each factor entry has prior magnitude $\sqrt{m_0/K}$ and each reconstructed entry $a_{mn}=\bm{w}_m^\top\bm{z}_n$ is approximately equal to $m_0$ a priori. After developing this hierarchical prior, we realize its similarity with the GTTN model (first introduced in a tensor decomposition context \citep{schmidt2009probabilistic}, and further discussed in \citet{brouwer2017prior}). However, the parameters in the conditional densities of the GTTN model lack this interpretation and flexibility, so there are no guidelines for parameter tuning when the performance is poor. The GRRN model, on the other hand, generally works well when we select the uninformative prior $\beta_\lambda=\sqrt{\frac{m_0}{K}}$; moreover, one can even set $\beta_\lambda=20 \cdot \sqrt{\frac{m_0}{K}}$ or $0.1 \cdot \sqrt{\frac{m_0}{K}}$ if one prefers a larger regularization, as mentioned above. Due to symmetry, the conditional expressions for $\mu_{kn}^Z$, $\tau_{kn}^Z$, and $\lambda_{kn}^Z$ can be derived similarly; we shall not go into the details. \paragraph{Gibbs sampling.} The full procedure is formulated in Algorithm~\ref{alg:grrn_gibbs_sampler}.
By default, uninformative priors are $\alpha_\sigma=\beta_\sigma=1$, $\mu_\mu =0$, $\tau_\mu=0.1, a=b=1$, $\alpha_\lambda=1, \beta_\lambda = \sqrt{\frac{m_0}{K}}$. \begin{algorithm}[tb] \caption{Gibbs sampler for GRRN in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative priors are $\alpha_\sigma=\beta_\sigma=1$, $\mu_\mu =0$, $\tau_\mu=0.1, a=b=1$, $\alpha_\lambda=1, \beta_\lambda = \sqrt{\frac{m_0}{K}}$. One can even set $\beta_\lambda=20 \cdot \sqrt{\frac{m_0}{K}}$ or $0.1 \cdot \sqrt{\frac{m_0}{K}}$ if one prefers a larger regularization.} \label{alg:grrn_gibbs_sampler} \begin{algorithmic}[1] \State {\bfseries Input:} Choose parameters $\alpha_\sigma, \beta_\sigma, \mu_\mu, \tau_\mu, a, b, \alpha_\lambda, \beta_\lambda$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \mu_{mk}^W, \tau_{mk}^W,\lambda_{mk}^W )$; \Comment{Equation~\eqref{equation:posterior_grrn_wmk122_app}} \State Sample $\mu_{mk}^W$ from $p(\mu_{mk}^W \mid \tau_{mk}^W,\lambda_{mk}^W, \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk})$; \Comment{Equation~\eqref{equation:posterior_grrn_mu_tau1222_app}} \State Sample $\tau_{mk}^W$ from $p(\tau_{mk}^W \mid\mu_{mk}^W ,\lambda_{mk}^W, \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk})$; \Comment{Equation~\eqref{equation:posterior_grrn_tau_tau1222_app}} \State Sample $\lambda_{mk}^W$ from $p(\lambda_{mk}^W \mid{ \mu_{mk}^W,\tau_{mk}^W} , \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, w_{mk})$; \Comment{Equation~\eqref{equation:posterior_grrn_lambda122_app}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A}, \bm{W}, \bm{Z}_{-kn},\sigma^2, \mu_{kn}^Z, \tau_{kn}^Z,\lambda_{kn}^Z )$; \Comment{Sytry. of Eq.~\eqref{equation:posterior_grrn_wmk122_app}} \State Sample $\mu_{kn}^Z$ from $p(\mu_{kn}^Z \mid \tau_{kn}^Z,\lambda_{kn}^Z, \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, z_{kn})$; \Comment{Sytry. of Eq.~\eqref{equation:posterior_grrn_mu_tau1222_app}} \State Sample $\tau_{kn}^Z$ from $p(\tau_{kn}^Z \mid\mu_{kn}^Z,\lambda_{kn}^Z , \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, z_{kn})$; \Comment{Sytry. of Eq.~\eqref{equation:posterior_grrn_tau_tau1222_app}} \State Sample $\lambda_{kn}^Z$ from $p(\lambda_{kn}^Z \mid{ \mu_{kn}^Z,\tau_{kn}^Z} , \mu_\mu, \tau_\mu, a, b,\alpha_\lambda, \beta_\lambda, z_{kn})$; \Comment{Sytry. of Eq.~\eqref{equation:posterior_grrn_lambda122_app}} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_bnmf}, stop if it converges. \end{algorithmic} \end{algorithm} \paragraph{Computational complexity.} The adopted Gibbs sampling method for the GRRN model has complexity $\mathcal{O}(MNK^2)$ where the most costs come from the update on the conditional density of $w_{mk}$ and $z_{kn}$. In the meantime, all the methods we have introduced in the above sections (GEE, GTT, GTTN) have complexity $\mathcal{O}(MNK^2)$. Compared to the GTTN model, the GRRN model only has an extra cost on the update of $\lambda_{mk}^W$ which does not amount to the bottleneck of the algorithm. 
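As the algorithm captions note, the explanatory per-entry loops can be implemented in a vectorized manner: for a fixed factor index $k$, the entries $w_{1k},\ldots,w_{Mk}$ are conditionally independent given $\bm{Z}$, $\sigma^2$, and the remaining columns of $\bm{W}$, so the whole column can be drawn at once. The following Python sketch illustrates this for the simpler GEE conditional in Equation~\eqref{equation:gee_poster_wmk1} (the GTT/GRRN updates differ only in the posterior mean and variance); the function name and layout are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def sample_W_column_gee(A, W, Z, k, sigma2, lam, rng=None):
    """Vectorized draw of the whole k-th column of W under the GEE model."""
    # Residual with component k removed, for every row at once: shape (M, N).
    resid = A - W @ Z + np.outer(W[:, k], Z[k, :])
    post_var = sigma2 / np.sum(Z[k, :] ** 2)                    # scalar
    post_mean = (-lam + resid @ Z[k, :] / sigma2) * post_var    # shape (M,)
    sd = np.sqrt(post_var)
    lower = (0.0 - post_mean) / sd                              # truncate below at zero
    return truncnorm.rvs(lower, np.inf, loc=post_mean, scale=sd, random_state=rng)
\end{verbatim}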
\noindent\makebox[\textwidth][c]{% \begin{minipage}{\textwidth} \begin{minipage}[b]{0.41\textwidth} \centering \begin{figure}[H] \centering \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure{\includegraphics[width=0.451\textwidth]{imgs/plot_movielens_100k_grrn.pdf} \label{fig:data_movielen100k}} \subfigure{\includegraphics[width=0.451\textwidth]{imgs/plot_movielens_1m_grrn.pdf} \label{fig:data_movielen1m}} \caption{Data distribution of the MovieLens 100K and MovieLens 1M data sets. The MovieLens 1M data set has a larger fraction of users who give a rating of 5 and a smaller fraction who give a rating of 3.} \label{fig:datasets_nmf} \end{figure} \end{minipage} \hfill \begin{minipage}[b]{0.57\textwidth} \centering \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{1.34} \begin{tabular}{llll} \hline Data set & Rows & Columns & Fraction obs. \\ \hline MovieLens 100K & 943 & 1473 & 0.072 \\ MovieLens 1M &6040 &3503& 0.047 \\ \hline \end{tabular} \captionof{table}{Data set description: 99,723 and 999,917 observed entries for the MovieLens 100K and MovieLens 1M data sets, respectively (user or movie vectors with fewer than 3 observed entries are removed). MovieLens 100K is a relatively small data set and MovieLens 1M is relatively large; both are sparse.} \label{table:datadescription} \end{minipage} \end{minipage} } \subsection{Examples} To demonstrate the main advantages of the introduced GRRN method, we conduct experiments on different analysis tasks and data sets, including the MovieLens 100K and MovieLens 1M movie-rating data sets \citep{harper2015movielens}. The ratings range from one to five stars, with around 100,000 and 1,000,000 ratings respectively, and the goal is to predict the missing entries so that we can recommend to users the movies they are likely to enjoy (user or movie vectors with fewer than 3 observed entries are removed). A summary of the two data sets can be seen in Table~\ref{table:datadescription} and their distributions are shown in Figure~\ref{fig:datasets_nmf}. The MovieLens 1M data set has a larger fraction of users who give a rating of 5 and a smaller fraction who give a rating of 3. The MovieLens 100K data set is relatively small and the MovieLens 1M is relatively large; both are sparse. Moreover, the MovieLens 1M data set not only has a larger number of users but also an increased dimension (the number of movies), making it a harder task to evaluate. The same parameter initialization is adopted in all scenarios. We compare the results in terms of convergence speed and generalization. In a wide range of scenarios across various models, GRRN improves convergence rates and leads to out-of-sample performance that is as good as or better than that of the other Bayesian NMF models. \paragraph{Hyperparameters.} We follow the default hyperparameter setups in \citet{brouwer2017prior}. We use $\{\lambda_{mk}^W\}=\{\lambda_{kn}^Z\}=0.1$ (GEE); $\{\mu_{mk}^W\}=\{\mu_{kn}^Z\}=0, \{\tau_{mk}^W\}=\{\tau_{kn}^Z\}=0.1$ (GTT); uninformative $\alpha_\sigma=\beta_\sigma=1$ (Gaussian likelihood in GEE, GTT, GTTN, GRRN); $\mu_\mu =0$, $\tau_\mu=0.1, a=b=1$ (hyperprior in GTTN, GRRN); $\alpha_\lambda=1, \beta_\lambda = \sqrt{\frac{m_0}{K}}$ (hyperprior in GRRN). These are very weak prior choices and the models are not sensitive to them \citep{brouwer2017prior}.
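These defaults can be collected in a small configuration sketch (illustrative Python assuming NumPy; the dictionary keys are our own naming, not from any released code).
\begin{verbatim}
import numpy as np

def default_hyperparameters(A_obs, K):
    """Weak default priors as described above; m0 is the mean observed entry."""
    m0 = np.mean(A_obs)
    return {
        "lambda_W": 0.1, "lambda_Z": 0.1,                      # GEE
        "mu_W": 0.0, "mu_Z": 0.0, "tau_W": 0.1, "tau_Z": 0.1,  # GTT
        "alpha_sigma": 1.0, "beta_sigma": 1.0,                 # noise variance prior
        "mu_mu": 0.0, "tau_mu": 0.1, "a": 1.0, "b": 1.0,       # GTTN/GRRN hyperprior
        "alpha_lambda": 1.0, "beta_lambda": np.sqrt(m0 / K),   # GRRN hyperprior
    }
\end{verbatim}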
As long as the hyperparameters are set, the observed or unobserved variables are initialized from random draws as this initialization procedure provides a better initial guess of the right patterns in the matrices. In all experiments, we run the Gibbs sampler 500 iterations with a burn-in of 400 iterations as the convergence analysis shows the algorithm can converge in less than 200 iterations. \begin{figure*}[h] \centering \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=2pt \subfigure[Convergence on the \textbf{MovieLens 100K} data set with increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/convergences_movielens100k.pdf} \label{fig:convergences_gdsc_20}} \subfigure[Convergence on the \textbf{MovieLens 1M} data set with increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/convergences_movielens1M.pdf} \label{fig:convergences_movielens100k_20}} \caption{Convergence of the models on the MovieLens 100K (upper) and the MovieLens 1M (lower) data sets, measuring the training data fit (mean squared error). When increasing latent dimension $K$, the GRRN continues to increase the performance; while other models start to decrease on the MovieLens 100K data set or stop increasing on the MovieLens 1M data set.} \label{fig:convergences_gdsc_movielens100k} \end{figure*} \paragraph{Convergence analysis.} Firstly we compare the convergence in terms of iterations on the MovieLens 100K and MovieLens 1M data sets. We run each model with $K=10, 20, 30, 40, 50$, and the loss is measured by mean squared error (MSE). Figure~\ref{fig:convergences_gdsc_movielens100k} shows the average convergence results of ten repeats. On the MovieLens 1M data set, all the methods converge to better performance with smaller MSE when increasing latent dimension $K$; while the performance of GRRN is better than the other models. Moreover, we observe that the convergence results of GTT and GTTN models are rather close since they share similar hidden structures though GTTN is a hierarchical model. On the other hand, when conducting on the MovieLen 100K data set and increasing the feature dimension $K$, the GRRN model continues to converge to better performance with MSE continuing to decrease. However, GEE, GTT, and GTTN models first converge to a better performance and then start to diverge with a larger MSE observed or stop improving at all when increasing latent dimension $K$. From this perspective, GRRN is a better choice for data reduction compared to other Bayesian NMF models. \paragraph{Noise sensitivity.} We further measure the noise sensitivity of different models with predictive performance when the data sets are noisy. To see this, we add different levels of Gaussian noise to the data. We add levels of $\{0\%, 10\%,$ $20\%,$ $50\%, 100\%, 200\%, 500\%, 1000\%\}$ noise-to-signal ratio noise (which is the ratio of the variance of the added Gaussian noise to the variance of the data). The results for the MovieLens 100K with $K=50$ are shown in Figure~\ref{fig:noise_graph_movielens100k}. We observe that the GRRN model performs similarly to other Bayesian NMF models. Similar results can be found on the MovieLens 1M data set and other $K$ values and we shall not repeat the details. 
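The noise-injection step can be sketched as follows (a minimal illustration assuming the ratings are held in a NumPy array; the helper name is ours).
\begin{verbatim}
import numpy as np

def add_noise(A, noise_to_signal, rng=None):
    """Add Gaussian noise whose variance equals `noise_to_signal`
    times the variance of the data (the noise-to-signal ratio)."""
    if rng is None:
        rng = np.random.default_rng()
    noise_std = np.sqrt(noise_to_signal * np.var(A))
    return A + rng.normal(0.0, noise_std, size=A.shape)

# e.g., 50% noise-to-signal ratio: A_noisy = add_noise(A, 0.5)
\end{verbatim}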
\noindent\makebox[\textwidth][c]{% \begin{minipage}{\textwidth} \begin{minipage}[b]{0.415\textwidth} \centering \begin{figure}[H] \centering \subfigtopskip=2pt \subfigbottomskip=9pt \subfigcapskip=-5pt \includegraphics[width=0.9\textwidth]{imgs/noise_graph_movielens100k.pdf} \caption{Ratio of the variance of data to the MSE of the predictions. The higher the better. GRRN model performs similarly to other Bayesian NMF models. Similar results can be found on the MovieLens 1M data set and other $K$ values and we shall not repeat the details.} \label{fig:noise_graph_movielens100k} \end{figure} \end{minipage} \hfill \begin{minipage}[b]{0.57\textwidth} \centering \renewcommand{1.5}{1.14} \small \begin{tabular}{lllll} \hline $K$\textbackslash{}Models & GEE & GTT & GTTN & GRRN \\ \hline $K$=20 & 1.18 & 1.06 & 1.07 & \textbf{ 1.02 } \\ $K$=30 & 1.43 & 1.18 & 1.20 & \textbf{ 1.00 } \\ $K$=40 & 1.86 & 1.42 & 1.45 & \textbf{ 0.98 } \\ $K$=50 & 2.63 & 1.84 & 1.89 & \textbf{ 0.97 } \\ \hline \hline $K$=20 & 3.47 & 1.46 & 1.57 & \textbf{ 1.10 } \\ $K$=30 & 6.86 & 2.27 & 2.52 & \textbf{ 1.05 } \\ $K$=40 & 17056.27 & 4.07 & 4.79 & \textbf{ 1.04 } \\ $K$=50 & 236750.39 & 2650.21 & 5452.18 & \textbf{ 1.05 } \\ \hline \end{tabular} \captionof{table}{Mean squared error measure when 97\% (upper table) and 98\% (lower table) of data is unobserved for MovieLens 100K data set. The performance of the GRRN model is only a little worse when increasing the fraction of unobserved from 97\% to 98\%. Similar situations can be observed in the MovieLens 1M experiment.} \label{table:movielens100k_special_sparsity_case} \end{minipage} \end{minipage} } \begin{figure*}[h] \centering \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-2pt \subfigure[Predictive results on the \textbf{MovieLens 100K} data set with increasing fraction of unobserved data and increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/sparsity_movielens_100k_variousK.pdf} \label{fig:sparsity_movielens_100k_variousK}} \subfigure[Predictive results on the \textbf{MovieLens 1M} data set with increasing fraction of unobserved data and increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/sparsity_movielens_1M_variousK.pdf} \label{fig:sparsity_movielens_1M_variousK}} \caption{Predictive results on the MovieLens 100K (upper) and MovieLens 1M (lower) data sets with the least fractions of unobserved data being 0.928 and 0.953 respectively (see Table~\ref{table:datadescription} for the data description). We measure the predictive performance (mean squared error) on a held-out data set for different fractions of unobserved data. The blue and red arrows compare the MSEs of GTTN and GRRN models when the fractions of unobserved data are 0.96 and 0.98 respectively. } \label{fig:sparsity_movielen100_1M} \end{figure*} \paragraph{Predictive analysis.} The training performance of the GRRN model steadily improves as the model complexity grows. Inspired by this result, we measure the predictive performance when the sparsity of the data increases to see whether the models overfit or not. For different fractions of unobserved data, we randomly split the data based on that fraction, train the model on the observed data, and measure the performance on the held-out test data. Again, we increase $K$ from $K=20$ to $K=30, 40, 50$ for all models. The average MSE of ten repeats is given in Figure~\ref{fig:sparsity_movielen100_1M}. 
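The held-out evaluation described above can be sketched as follows (a minimal illustration assuming NumPy and a boolean mask of observed entries; \texttt{heldout\_mse} is applied to the reconstruction $\bm{W}\bm{Z}$, and the helper names are ours).
\begin{verbatim}
import numpy as np

def split_observed(mask, fraction_heldout, rng=None):
    """Randomly hide a fraction of the observed entries;
    returns boolean train/test masks."""
    if rng is None:
        rng = np.random.default_rng()
    idx = np.flatnonzero(mask)
    test_idx = rng.choice(idx, size=int(round(fraction_heldout * idx.size)),
                          replace=False)
    test_mask = np.zeros_like(mask, dtype=bool)
    test_mask.flat[test_idx] = True
    return mask & ~test_mask, test_mask

def heldout_mse(A, A_hat, test_mask):
    """Mean squared error on the held-out entries, with A_hat = W @ Z."""
    return np.mean((A[test_mask] - A_hat[test_mask]) ** 2)
\end{verbatim}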
We observe that when $K=20$ and the fraction of unobserved data is relatively small (e.g., fraction unobserved = 0.93 in Figure~\ref{fig:sparsity_movielens_100k_variousK} and fraction unobserved = 0.96 in Figure~\ref{fig:sparsity_movielens_1M_variousK}), all the models perform similarly (GRRN is only slightly better). However, when the fraction of unobserved data increases or the latent dimension $K$ increases, GRRN performs much better than the other models. Table~\ref{table:movielens100k_special_sparsity_case} shows MSE predictions of different models when the fraction of unobserved data is $97\%$ and $98\%$. We observe that the performance of the GRRN model is only a little worse when increasing the fraction of unobserved data from 97\% to 98\%, showing that the GRRN model is more robust and less prone to overfitting, whereas the performance of the other competing models deteriorates dramatically in this scenario. From Figure~\ref{fig:convergences_gdsc_movielens100k}, we see that GEE can generally converge to a better in-sample performance; this leads to a worse out-of-sample performance, as shown in Figure~\ref{fig:sparsity_movielen100_1M} (compared to GTT and GTTN). The GRRN, however, has both better in-sample and out-of-sample performance in this experiment, making it a more robust choice for predicting missing entries. Similar situations can be observed in the MovieLens 1M case. We also add a popular non-probabilistic NMF (NP-NMF) model to compare the predictive results \citep{lee2000algorithms}. Empirical results (grey lines in Figure~\ref{fig:sparsity_movielen100_1M}) show that the NP-NMF overfits easily compared to the Bayesian NMF approaches, even when the fraction of unobserved data and the latent dimension $K$ are relatively small, though the issue is less severe on the MovieLens 1M data set. \section{Priors as Regularization} Denote the prior parameters by $\boldsymbol\theta$. Following Bayes' rule, the posterior is proportional to the product of the likelihood and the prior density: $$ p(\boldsymbol\theta \mid \bm{A}) \propto p(\bm{A} \mid \boldsymbol\theta) \cdot p(\boldsymbol\theta) , $$ such that the log-posterior follows $$ \begin{aligned} \log p(\boldsymbol\theta \mid \bm{A}) &= \log p(\bm{A} \mid \boldsymbol\theta) + \log p(\boldsymbol\theta) + C_1 \\ &=\log \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid \bm{w}_m^\top\bm{z}_n, \sigma^2 \right) + \log p(\bm{W},\bm{Z}) + C_2\\ &=-\frac{1}{2\sigma^2} \sum_{m,n=1}^{M,N}\left(a_{mn} - \bm{w}_m^\top\bm{z}_n \right)^2 + \log p(\bm{W},\bm{Z}) + C_3, \end{aligned} $$ where $C_1,C_2,C_3$ are some constants. The last equation is the sum of the negative squared loss of the training fit and a regularization term over the factored components $\bm{W},\bm{Z}$. The prior distributions of $\bm{W},\bm{Z}$ then act as a regularization that can prevent the model from overfitting the data and increase the predictive performance. To be more concrete, the regularizers on $\bm{W}$ can be categorized as follows: \begin{equation}\label{equation:4norms-in-vanilla-bmf} \begin{aligned} L_1 &= \sum_{m=1}^{M} \sum_{k=1}^{K} w_{mk}, \gap\gap &L_2^{1/2} &= \sum_{m=1}^{M} \sqrt{\sum_{k=1}^{K}w_{mk}},\\ L_1^2 &= \sum_{m=1}^{M} \left(\sum_{k=1}^{K} w_{mk}\right)^2, \gap\gap &L_2^2 &= \sum_{m=1}^{M}{\sum_{k=1}^{K}w_{mk}^2}. \end{aligned} \end{equation} We note that the $L_2^2$ norm \footnote{To abuse the terminology, we call it a norm though it does not meet the criteria of a norm.
A norm should satisfy nonnegativity ($\norm{\bm{A}}\geq 0$), positive homogeneity ($\norm{\lambda \bm{A}}=\abs{\lambda}\cdot \norm{\bm{A}}$), and the triangle inequality ($\norm{\bm{A}+\bm{B}}\leq \norm{\bm{A}}+\norm{\bm{B}}$) for matrices $\bm{A},\bm{B}$ and a scalar $\lambda$; see \citet{lu2021numerical}. } is equivalent to an independent Gaussian prior (GGG model); the $L_1$ norm is equivalent to a Laplace prior in the real-valued decomposition and to an exponential prior (GEE model) in nonnegative matrix factorization. In the next sections, we will discuss several Bayesian nonnegative matrix factorization models that arise from different norms; the differences among the conditional posteriors for the latent variable $w_{mk}$ are summarized in Table~\ref{table:bnmf_regularizer_posterior}. The conditional densities of the $z_{kn}$'s are similar due to their symmetry with the $w_{mk}$'s. \begin{table*}[t] \setlength{\tabcolsep}{2.4pt} \renewcommand{\arraystretch}{1.5} \footnotesize \begin{tabular}{l|l|l|l} \hline & Conditional $w_{mk}$& $\widetilde{\mu_{mk}}$ (mean) & $\widetilde{\sigma_{mk}^{2}}$ (variance) \\ \hline\hline GEE & $\mathcal{TN}(w_{mk} | \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})$ & $\left( -\lambda_{mk}^W\gap \gap\gap \,\,\,\,\,+ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\big( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\big) \right) \widetilde{\sigma_{mk}^{2}}$ & $ \frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2}$ \\ \hline GL$_1^2$ & $\mathcal{TN}(w_{mk} | \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})$ & $\left( -\lambda_k^W\textcolor{red}{\sum_{j\neq k}^{K}w_{mj}}+\frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\big( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\big) \right) \widetilde{\sigma_{mk}^{2}}$ & $\frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 +\textcolor{red}{\sigma^2\lambda_k^W}}$ \\ \hline GL$_2^2$ & $\mathcal{TN}(w_{mk} | \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})$ & $\left(\gap \gap\gap\gap\gap\,\,\,\, \gap\frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\big( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\big) \right) \widetilde{\sigma_{mk}^{2}}$ & $\frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 +\textcolor{red}{\sigma^2\lambda_k^W}}$ \\ \hline GL$_\infty$ & $\mathcal{TN}(w_{mk} | \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})$ & $\left(-\textcolor{red}{\lambda_k^W\cdot \mathds{1}(w_{mk})} \,\,\,\,\,\,\, +\frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\big( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\big) \right) \widetilde{\sigma_{mk}^{2}}$ & $\frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 }$ \\ \hline GL$_{2,\infty}^2$ & $\mathcal{TN}(w_{mk} | \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})$ & $\left(-\textcolor{red}{\lambda_k^W\cdot \mathds{1}(w_{mk})} \,\,\,\,\,\,\, +\frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\big( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\big) \right) \widetilde{\sigma_{mk}^{2}}$ & $\frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 + \textcolor{red}{\sigma^2\lambda_k^W} }$ \\ \hline \end{tabular} \caption{Posterior conditional densities of the $w_{mk}$'s for the GEE, GL$_1^2$, GL$_2^2$, GL$_\infty$, and GL$_{2,\infty}^2$ models. The differences are highlighted in \textcolor{red}{red}. The conditional densities of the $z_{kn}$'s are similar due to their symmetry with the $w_{mk}$'s. $\mathcal{TN}(x|\mu,\tau^{-1}) =\frac{\sqrt{\frac{\tau}{2\pi}} \exp\{-\frac{\tau}{2} (x-\mu)^2 \} } {1-\Phi(-\mu\sqrt{\tau})} u(x)$ is a truncated-normal (TN) density with zero density below $x=0$ and renormalized to integrate to one. $\mu$ and $\tau$ are known as the ``parent" mean and ``parent" precision.
$\Phi(\cdot)$ is the cumulative distribution function of standard normal density $\mathcal{N}(0,1)$. } \label{table:bnmf_regularizer_posterior} \end{table*} \index{Decomposition: GL$_1^2$} \section{Gaussian $L_1^2$ Norm (GL$_1^2$) Model} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \includegraphics[width=0.421\linewidth]{./imgs/bmf_gl12.pdf} \caption{Graphical model representation of GL$_1^2$, GL$_2^2$, GL$_\infty$, and GL$_{2,\infty}^2$ models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or".} \label{fig:bmf_gl12} \end{figure} The Gaussian $L_1^2$ Norm Model (GL$_1^2$) model is proposed by \citet{brouwer2017prior} based on the $L_1^2$ norm in Equation~\eqref{equation:4norms-in-vanilla-bmf} for both $\bm{W}, \bm{Z}$. We again view the data $\bm{A}$ as being produced according to the probabilistic generative process shown in Figure~\ref{fig:bmf_gl12}. The $(m,n)$-th entry $a_{mn}$ follows a Gaussian likelihood with variance $\sigma^2$ and mean given by the latent decomposition $\bm{w}_m^\top\bm{z}_n$ (Equation~\eqref{equation:als-per-example-loss_bnmf}). \paragraph{Prior.} The $L_1^2$ prior follows immediately by replacing the $L_1$ norm with the $L_1^2$ norm in the exponential prior. We assume $\bm{W}$ and $\bm{Z}$ are independently distributed with parameter $\lambda_{k}^W$ and $\lambda_{k}^Z$ proportional to an exponential function: \begin{equation}\label{equation:gl12_prior_density} \begin{aligned} &p(\bm{W}\mid\lambda_{k}^W) &\propto & \left\{ \begin{aligned} &\exp \left[ -\frac{\lambda_{k}^W}{2} \sum_{m=1}^{M} \left(\sum_{k=1}^{K} w_{mk}\right)^2 \right] , &\gap &\text{if $w_{mk}\geq 0$ for all $m,k$ }; \\ &0, &\gap &\text{if otherwise}; \end{aligned} \right.\\ \gap &p(\bm{Z}\mid\lambda_{k}^Z) &\propto & \left\{ \begin{aligned} &\exp \left[ -\frac{\lambda_{k}^Z}{2} \sum_{n=1}^{N} \left(\sum_{k=1}^{K} z_{kn}\right)^2 \right] , &\gap &\text{if $z_{kn}\geq 0$ for all $n,k$ }; \\ &0, &\gap &\text{if otherwise}. \end{aligned} \right.\\ \end{aligned} \end{equation} Again, the prior for the noise variance $\sigma^2=\frac{1}{\tau}$ is an inverse-Gamma density with shape ${\alpha_\sigma}$ and scale ${\beta_\sigma}$. \paragraph{Posterior.} Again, following the Bayes' rule and MCMC, this means we need to be able to draw from distributions (by Markov blanket, Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}): $$ \begin{aligned} &p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z},\sigma^2, \lambda_k^W, \lambda_k^Z ), \\ & p(z_{kn}\mid \bm{A}, \bm{W},\bm{Z}_{-kn}, \sigma^2, \lambda_k^W, \lambda_k^Z), \\ & p(\sigma^2 \mid \bm{A}, \bm{W}, \bm{Z},\alpha_\sigma, \beta_\sigma ), \\ \end{aligned} $$ where $\bm{W}_{-{mk}}$ denotes all elements of $\bm{W}$ except $w_{mk}$ and $\bm{Z}_{-kn}$ denotes all elements of $\bm{Z}$ except $z_{kn}$. Using Bayes' theorem, the conditional density of $w_{mk}$ depends on its parents ($\lambda_k^W$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}_{-mk}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_gl12} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}. 
Then, the conditional density of $w_{mk}$ can be obtained by \begin{equation}\label{equation:gl12_poster_wmk1} \small \begin{aligned} &\gap p(w_{mk} | \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{k}^W) \propto p(\bm{A}| \bm{W}, \bm{Z}, \sigma^2) \cdot p(\bm{W}| \lambda_{k}^W) =\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}| \bm{w}_i^\top\bm{z}_j, \sigma^2 \right) \cdot p(\bm{W}| \lambda_{k}^W)\\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N}(a_{ij} - \bm{w}_i^\top\bm{z}_j )^2\right\} \times \exp \left\{ -\frac{\lambda_{k}^W}{2} \sum_{i=1}^{M} \left(\sum_{j=1}^{K} w_{ij}\right)^2 \right\} \cdot u(w_{mk}) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N}(a_{mj} - \bm{w}_m^\top\bm{z}_j )^2\right\} \times \exp\left\{ -\frac{\lambda_{k}^W}{2} \left( w_{mk} + \sum_{j\neq k }^{K}w_{mj}\right)^2 \right\}\cdot u(w_{mk}) \\ &\propto \exp\Bigg\{ - \underbrace{ \bigg( \frac{\sum_{j=1}^{N} z_{kj}^2 + \textcolor{red}{\sigma^2\lambda_{k}^W} }{2\sigma^2} \bigg) }_{\textcolor{blue}{ 1/(2\widetilde{\sigma^2_{mk} }) }} w_{mk}^2 + w_{mk}\underbrace{ \bigg( -\lambda_{k}^W \textcolor{red}{\sum_{j\neq k}^{K}w_{mj}}+ \sum_{j=1}^{N} \frac{z_{kj}}{\sigma^2}\big( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\big) \bigg) }_{\textcolor{blue}{\widetilde{\sigma_{mk}^{2}}^{-1} \widetilde{\mu_{mk}}}} \Bigg\} u(w_{mk})\\ &\propto \mathcal{N}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})\cdot u(w_{mk}) = \mathcal{TN}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}), \end{aligned} \end{equation} where $u(x)$ is the unit function with value 1 if $x\geq 0$ and value 0 if $x<0$, $\widetilde{\sigma_{mk}^{2}}= \frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 +\textcolor{red}{\sigma^2\lambda_{k}^W}}$ is the ``parent" posterior variance of the normal distribution, $$ \widetilde{\mu_{mk}} = \left\{ -\lambda_{k}^W \cdot\textcolor{red}{\sum_{j\neq k}^{K}w_{mj}} + \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right\} \cdot \widetilde{\sigma_{mk}^{2}} $$ is the ``parent" posterior mean of the normal distribution, and $\mathcal{TN}(x \mid \mu, \sigma^2)$ is the \textit{truncated-normal density} with ``parent" mean $\mu$ and ``parent" variance $\sigma^2$ (Definition~\ref{definition:truncated_normal}, p.~\pageref{definition:truncated_normal}). Note the posterior density of $w_{mk}$ in Equation~\eqref{equation:gl12_poster_wmk1} is very similar to that of the GEE model in Equation~\eqref{equation:gee_poster_wmk1} where we highlight the difference in \textcolor{red}{red} text. See also the comparison of conditional posteriors for $w_{mk}$ in Table~\ref{table:bnmf_regularizer_posterior}. \begin{mdframed}[hidealllines=true,backgroundcolor=\mdframecolorNote,frametitle={Connection between GEE and GL$_1^2$ models}] We observe that there is an extra term in the denominator of the ``parent" variance value such that when all else are held equal, the GL$_1^2$ has a smaller variance and the distribution is more clustered in a smaller range. This is actually a stronger constraint/regularizer than the GEE model. Moreover, when $\{\lambda_{mk}^W\}$ in the GEE model and $\{\lambda_k^W\}$ in the GL$_1^2$ model are equal (see Table~\ref{table:bnmf_regularizer_posterior}), the extra term $\sum_{j\neq k}^{K}w_{mj}$ in the GL$_1^2$ model plays an important role in controlling the sparsity of factored components in the NMF context. 
To be more concrete, when the distribution of elements in matrix $\bm{A}$ has a large portion of big values, the extra term $\sum_{j\neq k}^{K}w_{mj}$ will be larger than 1 and thus enforce the posterior ``parent" mean $\widetilde{\mu_{mk}}$ of the truncated-normal density to be a small positive or even a negative value. This in turn constraints the draws of $\mathcal{TN}(w_{mk}|\cdot)$ to be around zero thus favoring sparsity (see the expectation of the variable of a truncated-normal distribution for different ``parent" mean values in Figure~\ref{fig:dists_truncatednorml_mean}, p.~\pageref{fig:dists_truncatednorml_mean}, the smaller the ``parent" mean value $\widetilde{\mu_{mk}}$, the smaller the expectation of the truncated-normal distributed variable $w_{mk}$; see also the example in Section~\ref{section:l22_linfty_models} for the experiment on GDSC $IC_{50}$ data set). On the contrary, when the entries in matrix $\bm{A}$ are small, this extra term will be smaller than 1, the parameter $\lambda_k^W$ has a little impact on the posterior ``parent" mean $\widetilde{\mu_{mk}}$ which will possibly be a large value, and the factored component $\bm{W}$ or $\bm{Z}$ will be dense instead (also see Section~\ref{section:l22_linfty_models} for the experiment on Gene Body Methylation data set). In this sense, the drawback of the GL$_1^2$ model is revealed that it is not consistent and not robust for different types of the matrix $\bm{A}$. In contrast, the GL$_2^2$ and GL$_{2,\infty}^2$ models in the next section are consistent and robust for different matrix types and impose a larger regularization compared with the GEE model such that its predictive performance is better (when the data matrix $\bm{A}$ has large values). \end{mdframed} Or after rearrangement, the posterior density of $w_{mk}$ can be equivalently described by a rectified-normal density (Definition~\ref{definition:reftified_normal_distribution}, p.~\pageref{definition:reftified_normal_distribution}), and we shall not repeat the details. And again, due to symmetry, a similar expression for $z_{kn}$ can be easily derived. The conditional density of $\sigma^2$ in GL$_1^2$ is the same as that in Equation~\eqref{equation:gee_posterior_sigma2} \begin{algorithm}[h] \caption{Gibbs sampler for GL$_1^2$ model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative priors are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_k^W\} =\{\lambda_k^Z\}=0.1$.} \label{alg:gl12_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \{\lambda_k^W\}, \{\lambda_k^Z\}$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_k^W)$; \Comment{Equation~\eqref{equation:gl12_poster_wmk1}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A}, \bm{W}, \bm{Z}_{-kn},\sigma^2, \lambda_k^Z )$; \Comment{Symmetry of Equation~\eqref{equation:gl12_poster_wmk1}} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_bnmf}, stop if it converges. 
\end{algorithmic} \end{algorithm} \paragraph{Gibbs sampling.} By this Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler} (p.~\pageref{section:gibbs-sampler}), we can construct a Gibbs sampler for the GL$_1^2$ model as formulated in Algorithm~\ref{alg:gl12_gibbs_sampler}. And also in practice, all the parameters of the prior distribution are set to be the same value $\lambda=\{\lambda_k^W\} =\{\lambda_k^Z\}$. By default, uninformative priors are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_k^W\} =\{\lambda_k^Z\}=0.1$. \index{Decomposition: GL$_2^2$} \index{Decomposition: GL$_\infty$} \section{Gaussian $L_2^2$ Norm (GL$_2^2$) and Gaussian $L_\infty$ Norm (GL$_\infty$) Models}\label{section:l22_linfty_models} After the development of the GL$_1^2$ model, more exploration of the behaviors for different ``norms" are done in \citet{lu2022robust}. The $L_p$ prior relies highly on the implicit regularization in the GL$_1^2$ model. For any vector $\bm{x}\in \mathbb{R}^n$, the $L_p$ norm is given by $ L_p(\bm{x}) = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p} $ whose unit balls in 2-dimensional space and 3-dimensional space are shown in Figure~\ref{fig:p-norm-2d} (p.~\pageref{fig:p-norm-2d}) and Figure~\ref{fig:p-norm-comparison-3d} (p.~\pageref{fig:p-norm-comparison-3d}) respectively. The norms of a vector are quite useful in machine learning. In Chapter~\ref{section:als} (p.~\pageref{section:als}), we mentioned the least squares problem is to minimize the squared distance between observation $\bm{b}$ and expected observation $\bm{A}\bm{x}$: $\norm{\bm{A}\bm{x}-\bm{b}}_2^2$, i.e., the $L_2$ norm of $\bm{A}\bm{x}-\bm{b}$. On the other hand, minimizing the $L_1$ norm between the observation and the expected observation can result in a robust estimator of $\bm{x}$ \citep{zoubir2012robust}. While the $L_p$ norm over the matrix $\bm{W}\in\mathbb{R}^{M\times K}$ can be defined as \begin{equation} L_p = \sum_{m=1}^{M} \left(\sum_{k=1}^{K} \abs{w_{mk}}^p\right)^{1/p}. \end{equation} In the context of NMF, the $L_1$ norm (for the GEE model) in Equation~\eqref{equation:4norms-in-vanilla-bmf} can be regarded as an $L_p$ norm with $p=1$ since $\{w_{mk}\}$'s are nonnegative. The $L_1$ norm is known to have a sparse constraint (see discussion in Section~\ref{section:gee_model}). We can further extend the Bayesian models with $L_2^2$ and $L_\infty$ norms. Again, we view the data $\bm{A}$ as being produced according to the probabilistic generative process shown in Figure~\ref{fig:bmf_gl12}, the same graphical model as the GL$_1^2$ model. The $(m,n)$-th entry $a_{mn}$ follows a Gaussian likelihood with variance $\sigma^2$ and mean given by the latent decomposition $\bm{w}_m^\top\bm{z}_n$ (Equation~\eqref{equation:als-per-example-loss_bnmf}). Therefore, the posterior density of the Gaussian variance parameter $\sigma^2$, given an inverse-Gamma prior with shape $\alpha_\sigma$ and scale $\beta_\sigma$ parameters, is the same as the GEE model in Equation~\eqref{equation:gee_posterior_sigma2}. 
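For concreteness, the regularizers discussed in this and the previous section can be evaluated directly from a nonnegative factor matrix $\bm{W}$; the following is a small Python sketch (assuming NumPy), not tied to any particular sampler.
\begin{verbatim}
import numpy as np

def nmf_regularizers(W):
    """Row-wise regularizers of a nonnegative factor W (M x K)."""
    return {
        "L1":   W.sum(),                      # sum_m sum_k w_mk
        "L1^2": (W.sum(axis=1) ** 2).sum(),   # sum_m (sum_k w_mk)^2
        "L2^2": (W ** 2).sum(),               # sum_m sum_k w_mk^2
        "Linf": W.max(axis=1).sum(),          # sum_m max_k w_mk
    }
\end{verbatim}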
\paragraph{Prior for the GL$_2^2$ model.} Based on the $L_2$ norm, we assume $\bm{W}$ and $\bm{Z}$ are independently distributed with parameters $\lambda_{k}^W$ and $\lambda_{k}^Z$ proportional to an exponential function: \begin{equation}\label{equation:gp22_prior_density} \begin{aligned} &p(\bm{W}\mid \lambda_k^W) &\propto & \left\{ \begin{aligned} &\exp \left[ -\frac{\lambda_k^W}{2} \sum_{m=1}^{M} \left(\sum_{k=1}^{K} w_{mk}^2\right) \right] , &\gap &\text{if $w_{mk}\geq 0$ for all $m,k$ }; \\ &0, &\gap &\text{if otherwise}; \end{aligned} \right.\\ \gap &p(\bm{Z}\mid\lambda_k^Z) &\propto & \left\{ \begin{aligned} &\exp \left[ -\frac{\lambda_k^Z}{2} \sum_{n=1}^{N} \left(\sum_{k=1}^{K} z_{kn}^2\right) \right] , &\gap &\text{if $z_{kn}\geq 0$ for all $n,k$ }; \\ &0, &\gap &\text{if otherwise}. \end{aligned} \right.\\ \end{aligned} \end{equation} \paragraph{Posterior for the GL$_2^2$ model.} By Bayes rule (Equation~\eqref{equation:posterior_abstract_for_mcmc}, p.~\pageref{equation:posterior_abstract_for_mcmc}), the posterior is proportional to the product of likelihood and prior, it can be maximized to yield an estimate of $\bm{W}$ and $\bm{Z}$. Using Bayes' theorem, the conditional density of $w_{mk}$ depends on its parents ($\lambda^W$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}_{-mk}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_gl12} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}. And it can be obtained by \begin{equation}\label{equation:gp22_poster_wmk1} \small \begin{aligned} &\gap p(w_{mk} \mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2,\lambda_k^W) \\ &\propto p(\bm{A}\mid \bm{W}, \bm{Z}, \sigma^2) \times p(\bm{W}\mid \lambda_k^W) =\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}\mid \bm{w}_i^\top\bm{z}_j, \sigma^2 \right)\times p(\bm{W}\mid \lambda_k^W) \cdot u(w_{mk}) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N}(a_{ij} - \bm{w}_i^\top\bm{z}_j )^2\right\} \times \exp \left\{ -\frac{\lambda_k^W}{2} \sum_{i=1}^{M} \left(\sum_{j=1}^{K} w_{ij}^2\right) \right\} \cdot u(w_{mk}) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N}(a_{mj} - \bm{w}_m^\top\bm{z}_j )^2\right\} \times \exp\left\{ -\frac{\lambda_k^W}{2}w_{mk}^2 \right\} \cdot u(w_{mk}) \\ &\propto \exp\Bigg\{ - \underbrace{\left(\frac{\sum_{j=1}^{N} z_{kj}^2 + \textcolor{red}{\sigma^2\lambda_k^W} }{2\sigma^2} \right) }_{\textcolor{blue}{ 1/(2\widetilde{\sigma^2_{mk} }) }} w_{mk}^2 + w_{mk}\underbrace{\left( \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right)}_{\textcolor{blue}{\widetilde{\sigma_{mk}^{2}}^{-1} \widetilde{\mu_{mk}}}} \Bigg\} \cdot u(w_{mk})\\ &\propto \mathcal{N}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})\cdot u(w_{mk}) = \mathcal{TN}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}), \end{aligned} \end{equation} where $\widetilde{\sigma_{mk}^{2}}= \frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 +\textcolor{red}{\sigma^2\lambda_k^W}}$ is the posterior variance of the normal distribution, and $$ \widetilde{\mu_{mk}} = \left\{ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right\} \cdot \widetilde{\sigma_{mk}^{2}} $$ is the posterior mean of the normal distribution, and $\mathcal{TN}(x \mid \mu, \sigma^2)$ is the \textit{truncated-normal density} with ``parent" mean $\mu$ and ``parent" variance $\sigma^2$ (Definition~\ref{definition:truncated_normal}, p.~\pageref{definition:truncated_normal}). 
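Sampling from such a truncated-normal conditional is straightforward; one possible sketch uses \texttt{scipy.stats.truncnorm} (the ``parent" mean and variance are those derived above; the helper name is ours).
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def sample_tn(mu, sigma2, rng=None):
    """Draw from TN(mu, sigma2) restricted to [0, inf).
    scipy's truncnorm takes bounds in standardized units: (0 - mu) / sigma."""
    sigma = np.sqrt(sigma2)
    a = (0.0 - mu) / sigma
    return truncnorm.rvs(a, np.inf, loc=mu, scale=sigma, random_state=rng)
\end{verbatim}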
Note again the posterior density of $w_{mk}$ in Equation~\eqref{equation:gp22_poster_wmk1} is very similar to that of the GEE model in Equation~\eqref{equation:gee_poster_wmk1} where we highlight the difference in \textcolor{red}{red} text. See also the comparison of conditional posteriors for $w_{mk}$ in Table~\ref{table:bnmf_regularizer_posterior}. \begin{mdframed}[hidealllines=true,backgroundcolor=\mdframecolorNote,frametitle={Connection between GEE, GL$_1^2$, and GL$_2^2$ models}] We observe that the posterior ``parent" mean $\widetilde{\mu_{mk}}$ in the GL$_2^2$ model is larger than that in the GEE model since it does not contain the negative term $-\lambda_{mk}^W$ (see Table~\ref{table:bnmf_regularizer_posterior}). While the posterior ``parent" variance is smaller than that in the GEE model, such that the conditional density of GL$_2^2$ model is more clustered and it imposes a larger regularization in the sense of data/entry distribution (see Figure~\ref{fig:dists_truncatednorml_mean}, p.~\pageref{fig:dists_truncatednorml_mean}, the smaller the ``parent" variance of the truncated-normal distribution, the larger the ``parent" precision, and the smaller the expectation of the truncated-normal variable). This can induce sparsity in the context of nonnegative matrix factorization. Moreover, the GL$_2^2$ does not have the extra term $\sum_{j\neq k}^{K}w_{mj}$ in the GL$_1^2$ model which causes the inconsistency for different types of matrix $\bm{A}$ such that the introduced GL$_2^2$ model is more robust. \end{mdframed} \paragraph{Prior for the GL$_\infty$ model.} When $p\rightarrow \infty$, the $L_p$ norm defined over $\bm{W}$ is \begin{equation} L_\infty = \sum_{m=1}^{M} \left(\sum_{k=1}^{K} \abs{w_{mk}}^\infty\right)^{1/\infty} = \sum_{m=1}^{M} \mathop{\max}_{k} \abs{w_{mk}}. \end{equation} Based on the $L_p$ norm. we assume $\bm{W}$ and $\bm{Z}$ are independently exponentially distributed with scales $\lambda_{mk}$ and $\lambda_{kn}$ (Definition~\ref{definition:exponential_distribution}, p.~\pageref{definition:exponential_distribution}), \begin{equation}\label{equation:gpinfty_prior_density} \begin{aligned} &p(\bm{W}\mid \lambda_k^W) &\propto & \left\{ \begin{aligned} &\exp \left[ -{\lambda_k^W} \sum_{m=1}^{M} \mathop{\max}_{k} \abs{w_{mk}} \right] , &\gap &\text{if $w_{mk}\geq 0$ for all $m,k$ }; \\ &0, &\gap &\text{if otherwise}; \end{aligned} \right.\\ \gap &p(\bm{Z}\mid \lambda_k^Z) &\propto & \left\{ \begin{aligned} &\exp \left[ -\lambda_k^Z \sum_{n=1}^{N} \mathop{\max}_{k} \abs{z_{kn}} \right] , &\gap &\text{if $z_{kn}\geq 0$ for all $n,k$ }; \\ &0, &\gap &\text{if otherwise}. \end{aligned} \right.\\ \end{aligned} \end{equation} Note we remove the 2 in the denominator of $\lambda_k^W$ for consistency issues which we will see shortly in the form of the conditional density in Equation~\eqref{equation:gpinfty_poster_wmk1}. \paragraph{Posterior for GL$_\infty$ model.} By Bayes' rule (Equation~\eqref{equation:posterior_abstract_for_mcmc}, p.~\pageref{equation:posterior_abstract_for_mcmc}), the posterior is proportional to the product of likelihood and prior, it can be maximized to yield an estimate of $\bm{W}$ and $\bm{Z}$. Using Bayes' theorem, the conditional density of $w_{mk}$ depends on its parents ($\lambda_k^W$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}_{-mk}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_gl12} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}. 
Denote $\mathds{1}(w_{mk})$ as the indicator whether $w_{mk}$ is the largest one for $k=1,2,\ldots, K$, the conditional density can be obtained by \begin{equation}\label{equation:gpinfty_poster_wmk1} \small \begin{aligned} &\gap p(w_{mk} \mid \bm{A}, \bm{W}_{-mk}, \bm{Z},\sigma^2, \lambda_k^W) \\ &\propto p(\bm{A}\mid \bm{W}, \bm{Z}, \sigma^2) \times p(\bm{W}\mid \lambda_k^W) =\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}\mid \bm{w}_i^\top\bm{z}_j, \sigma^2 \right)\times p(\bm{W}\mid \lambda_k^W) \cdot u(w_{mk}) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N}(a_{ij} - \bm{w}_i^\top\bm{z}_j )^2\right\} \times \exp \left\{ -{\lambda_k^W} \cdot\sum_{i=1}^{M} \mathop{\max}_{k} \abs{w_{ij}} \right\} \cdot u(w_{mk}) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N}(a_{mj} - \bm{w}_m^\top\bm{z}_j )^2\right\} \times \exp\left\{ -{\lambda_k^W}\cdot w_{mk} \right\} \cdot u(w_{mk}) \cdot \textcolor{red}{\mathds{1}(w_{mk})}\\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N} \bigg[ w_{mk}^2z_{kj}^2 + 2w_{mk} z_{kj}\bigg(\sum_{i\neq k}^{K}w_{mi}z_{ij} - a_{mj}\bigg) \bigg] \right\} \cdot \exp\left\{ - w_{mk} \textcolor{red}{\lambda_k^W\mathds{1}( w_{mk})}\right\} \cdot u(w_{mk})\\ &\propto \exp\Bigg\{ - \underbrace{\left(\frac{\sum_{j=1}^{N} z_{kj}^2 }{2\sigma^2} \right) }_{\textcolor{blue}{ 1/(2\widetilde{\sigma^2_{mk} }) }} w_{mk}^2 + w_{mk}\underbrace{\bigg[ -\textcolor{red}{\lambda_k^W \mathds{1}(w_{mk})}+ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \bigg]}_{\textcolor{blue}{\widetilde{\sigma_{mk}^{2}}^{-1} \widetilde{\mu_{mk}}}} \Bigg\} \cdot u(w_{mk})\\ &\propto \mathcal{N}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})\cdot u(w_{mk}) = \mathcal{TN}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}), \end{aligned} \end{equation} where $u(x)$ is the unit function with value 1 if $x\geq 0$ and value 0 if $x<0$, $\widetilde{\sigma_{mk}^{2}}= \frac{\sigma^2}{\sum_{j=1}^{N} z_{kj}^2 }$ is the posterior ``parent" variance of the normal distribution, and $$ \widetilde{\mu_{mk}} = \left\{ -\textcolor{red}{\lambda_k^W\cdot \mathds{1}(w_{mk})}+ \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right\} \cdot \widetilde{\sigma_{mk}^{2}} $$ is the posterior ``parent" mean of the normal distribution, and $\mathcal{TN}(x \mid \mu, \sigma^2)$ is the \textit{truncated normal density} with ``parent" mean $\mu$ and ``parent" variance $\sigma^2$ (Definition~\ref{definition:truncated_normal}, p.~\pageref{definition:truncated_normal}). \begin{mdframed}[hidealllines=true,backgroundcolor=\mdframecolorNote,frametitle={Connection between GEE and GL$_\infty$ models}] The posterior ``parent" variance $\widetilde{\sigma^2_{mk}}$ in the GL$_\infty$ model is exactly the same as that in the GEE model (see Table~\ref{table:bnmf_regularizer_posterior}). Denote $\mathds{1}(w_{mk})$ as the indicator whether $w_{mk}$ is the largest one among $k=1,2,\ldots, K$. Suppose further the condition $\mathds{1}(w_{mk})$ is satisfied, parameters $\{\lambda_{mk}^W\}$ in the GEE model and $\{\lambda_k^W\}$ in GL$_\infty$ model are equal, the ``parent" mean $\widetilde{\mu_{mk}}$ is the same as that in the GEE model as well. However, when $w_{mk}$ is not the maximum value among $\{w_{m1}, w_{m2}, \ldots, w_{mK}\}$, the ``parent" mean $\widetilde{\mu_{mk}}$ is larger than that in the GEE model since the GL$_\infty$ model excludes this negative term. 
The GL$_\infty$ model then has the interpretation that it has a \textit{sparsity constraint} when $w_{mk}$ is the maximum value; and it has a \textit{relatively loose constraint} when $w_{mk}$ is not the maximum value. Overall, the GL$_\infty$ favors a loose regularization compared with the GEE model. \end{mdframed} \paragraph{Further extension: GL$_{2,\infty}^2$ model.} The GL$_{2,\infty}^2$ model takes the advantages of both GL$_2^2$ and GL$_\infty$, and the posterior parameters of GL$_{2,\infty}^2$ are shown in Table~\ref{table:bnmf_regularizer_posterior}. The implicit prior of the GL$_{2,\infty}^2$ model can be obtained by \begin{equation}\label{equation:gp22infty_prior_density} \begin{aligned} &\gap p(\bm{W}\mid \lambda_k^W) \propto \exp \left\{ \frac{-\lambda_k^W}{2} \sum_{m=1}^{M} \bigg(\sum_{k=1}^{K} w_{mk}^2+2\mathop{\max}_{k} |w_{mk}| \bigg) \right\} u(\bm{W}). \end{aligned} \end{equation} \paragraph{Computational complexity and Gibbs sampler.} The adopted Gibbs sampling methods for GEE, GL$_1^2$, GL$_2^2$, GL$_\infty$, and GL$_{2,\infty}^2$ models have complexity $\mathcal{O}(MNK^2)$, where the most expensive operation is the update on the conditional density of $w_{mk}$'s and $z_{kn}$'s. The Gibbs sampler for the above models is formulated in Algorithm~\ref{alg:nmf_regular_12_infty_gibbs_sampler}. By default, uninformative priors are $\{\lambda_{k}^W\}=\{\lambda_{k}^Z\}=0.1$ (GL$_1^2$, GL$_2^2$, GL$_\infty$, GL$_{2,\infty}^2$); $\alpha_\sigma=\beta_\sigma=1$ (inverse-Gamma prior in GL$_1^2$, GL$_2^2$, GL$_\infty$, GL$_{2,\infty}^2$). \begin{algorithm}[h] \caption{Gibbs sampler for GL$_1^2$, GL$_2^2$, and GL$_\infty$ models (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here is for explanatory purposes, and vectorization can expedite the procedure. By default, uninformative priors are $\{\lambda_{k}^W\}=\{\lambda_{k}^Z\}=0.1$ (GL$_1^2$, GL$_2^2$, GL$_\infty$, GL$_{2,\infty}^2$); $\alpha_\sigma=\beta_\sigma=1$ (inverse-Gamma prior in GL$_1^2$, GL$_2^2$, GL$_\infty$, GL$_{2,\infty}^2$). } \label{alg:nmf_regular_12_infty_gibbs_sampler} \begin{algorithmic}[1] \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} | \cdot )=\mathcal{TN}(w_{mk} | \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}})$ from Table~\ref{table:bnmf_regularizer_posterior}; \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} |\cdot )=\mathcal{TN}(w_{mk} | \widetilde{\mu_{kn}}, \widetilde{\sigma_{kn}^{2}})$; \Comment{symmetry of $w_{mk}$} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_bnmf}, stop if it converges. \end{algorithmic} \end{algorithm} \subsection{Examples} We conduct experiments with various analysis tasks to demonstrate the main advantages of the introduced GL$_2^2$ and GL$_{2,\infty}^2$ methods. We use two data sets from bioinformatics: The first one is the Genomics of Drug Sensitivity in Cancer data set\footnote{\url{https://www.cancerrxgene.org/}} (GDSC $IC_{50}$) \citep{yang2012genomics}, which contains a wide range of drugs and their treatment outcomes on different cancer and tissue types (cell lines). Following \citet{brouwer2017prior}, we preprocess the GDSC $IC_{50}$ data set by capping high values to 100, undoing the natural log transform, and casting them as integers. 
The second one is the Gene Body Methylation data set \citep{koboldt2012comprehensive}, which gives the amount of methylation measured in the body region of 160 breast cancer driver genes. We multiply the values in the Gene Body Methylation data set by 20 and cast them as integers as well. A summary of the two data sets can be seen in Table~\ref{table:datadescription_nmf_regularizer} and their distributions are shown in Figure~\ref{fig:datasets_nmf_regularizer}. The GDSC $IC_{50}$ data set has a larger range and its values are unbalanced (either as small as 0 or as large as 100), while the Gene Body Methylation data set has a smaller range with more balanced values. The GDSC $IC_{50}$ data set is relatively large with matrix rank $139$, while the Gene Body Methylation data set is relatively small with matrix rank 160. \noindent\makebox[\textwidth][c]{% \begin{minipage}{\textwidth} \begin{minipage}[b]{0.41\textwidth} \centering \begin{figure}[H] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure{\includegraphics[width=0.451\textwidth]{imgs/plot_gdsc_reg.pdf} \label{fig:plot_gdsc}} \subfigure{\includegraphics[width=0.451\textwidth]{imgs/plot_methylation_gm_reg.pdf} \label{fig:plot_methylation_gm}} \caption{Data distribution of GDSC $IC_{50}$ and Gene Body Methylation data sets.} \label{fig:datasets_nmf_regularizer} \end{figure} \end{minipage} \hfill \begin{minipage}[b]{0.575\textwidth} \centering \renewcommand{\arraystretch}{1.25} \setlength{\tabcolsep}{7pt} \small \begin{tabular}{l|lll} \hline Data set & Rows & Columns & Fraction obs. \\ \hline GDSC $IC_{50}$ & 707 & 139 & 0.806 \\ Gene Body Meth. & 160 & 254 & 1.000 \\ \hline \end{tabular} \captionof{table}{Data set description: the number of rows, columns, and the fraction of entries that are observed. The Gene Body Methylation data set is relatively small and the GDSC $IC_{50}$ data set is relatively large.} \label{table:datadescription_nmf_regularizer} \end{minipage} \end{minipage} } The same parameter initialization is adopted in each scenario. We compare the results in terms of convergence speed and generalization. In a wide range of scenarios across various models, GL$_2^2$ and GL$_{2,\infty}^2$ improve convergence rates and lead to out-of-sample performance that is as good as or better than that of other Bayesian NMF models, while retaining an interpretable implicit regularization. \paragraph{Hyperparameters.} We follow the default hyperparameter setups in \citet{brouwer2017prior}. We use $\{\lambda_{mk}^W\}=\{\lambda_{kn}^Z\}=0.1$ (GEE); $\{\lambda_{k}^W\}=\{\lambda_{k}^Z\}=0.1$ (GL$_1^2$, GL$_2^2$, GL$_{2,\infty}^2$); uninformative $\alpha_\sigma=\beta_\sigma=1$ (inverse-Gamma prior in GEE, GL$_1^2$, GL$_2^2$, GL$_{2,\infty}^2$). These are very weak prior choices and the models are insensitive to them \citep{brouwer2017prior}. Once the hyperparameters are set, the observed and unobserved variables are initialized from random draws, as this initialization procedure provides a better initial guess of the right patterns in the matrices. In all experiments, we run the Gibbs sampler for 500 iterations with a burn-in of 300 iterations, as the convergence analysis shows the algorithm converges in fewer than 200 iterations.
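For completeness, the preprocessing of the two data sets described above can be sketched as follows (illustrative Python assuming NumPy arrays; the exact order of the capping and the inverse log transform for GDSC $IC_{50}$ is our assumption).
\begin{verbatim}
import numpy as np

def preprocess_gdsc_ic50(A_log):
    """Undo the natural log transform, cap high values at 100, cast to integers.
    Assumes the raw values are stored natural-log-transformed."""
    A = np.exp(A_log)
    return np.minimum(A, 100.0).astype(int)

def preprocess_gene_body_methylation(A):
    """Multiply by 20 and cast to integers."""
    return (A * 20.0).astype(int)
\end{verbatim}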
\begin{figure*}[htp] \centering \vspace{-0.25cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-1pt \subfigure[Convergence on the \textbf{GDSC $\boldsymbol{IC_{50}}$} data set with increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/nmf_regularizers_convergences_gdsc.pdf} \label{fig:nmf_regularizers_convergences_gdsc}}\vspace{-0.6em} \subfigure[Data distribution of factored component $\bm{W}$ in the last 20 iterations for \textbf{GDSC $\boldsymbol{IC_{50}}$}. ]{\includegraphics[width=1\textwidth]{imgs/nmf_regularizers_distributions_gdsc.pdf} \label{fig:nmf_regularizers_distributions_gdsc}}\vspace{-0.1em} \caption{Convergence of the models on the GDSC $IC_{50}$ (upper) and the distribution of factored $\bm{W}$ (lower), measuring the training data fit (mean squared error). When we increase the latent dimension $K$, the GEE and the introduced GL$_2^2$ and GL$_{2,\infty}^2$ algorithms continue to increase the performance; while GL$_1^2$ starts to decrease. The results of GL$_\infty$ and GL$_{2,\infty}^2$ models are similar so we only present the results of the GL$_{2,\infty}^2$ model for brevity.} \label{fig:convergences_gdsc_nmf_regularizer} \end{figure*} \begin{figure*}[h] \centering \vspace{-0.25cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=2pt \subfigure[Convergence on the \textbf{Gene Body Methylation} data set with increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/nmf_regularizers_convergences_gene_body.pdf} \label{fig:nmf_regularizers_convergences_gene_body}}\vspace{-0.6em} \subfigure[Data distribution of factored component $\bm{W}$ in the last 20 iterations for \textbf{Gene Body Methylation}.]{\includegraphics[width=1\textwidth]{imgs/nmf_regularizers_distributions_gene_body.pdf} \label{fig:nmf_regularizers_distributions_gene_body}}\vspace{-0.1em} \caption{Convergence of the models on the Gene Body Methylation data set (upper) and the distribution of factored $\bm{W}$ (lower), measuring the training data fit (mean squared error). When we increase the latent dimension $K$, all the models continue to increase the performance.} \label{fig:convergences_ctrp_nmf_regularizer} \end{figure*} \paragraph{Convergence analysis for GDSC $IC_{50}$ with relatively large entries.} Firstly we compare the convergence in terms of iterations on the GDSC $IC_{50}$ and Gene Body Methylation data sets. We run each model with $K=\{10, 20, 30, 40, 50\}$, and the loss is measured by mean squared error (MSE). Figure~\ref{fig:nmf_regularizers_convergences_gdsc} shows the average convergence results of ten repeats and Figure~\ref{fig:nmf_regularizers_distributions_gdsc} shows the distribution of entries of the factored $\bm{W}$ for the last 20 iterations on the GDSC $IC_{50}$ data set. The result is consistent with our analysis (see the connection between different models). Since the values of the data matrix for GDSC $IC_{50}$ data set is large, the posterior ``parent" mean $\widetilde{\mu_{mk}}$ in the GL$_1^2$ model is approaching zero or even negative, thus it has a larger regularization than GEE model. This makes the GL$_1^2$ model converge to a worse performance. GL$_2^2$ and GL$_{2,\infty}^2$ models, on the contrary, impose a looser regularization than the GL$_1^2$ model, and the convergence performances are close to that of the GEE model. 
\paragraph{Convergence analysis for Gene Body Methylation with relatively small entries.} Figure~\ref{fig:nmf_regularizers_convergences_gene_body} further shows the average convergence results of ten repeats, and Figure~\ref{fig:nmf_regularizers_distributions_gene_body} shows the distribution of the entries of the factored $\bm{W}$ for the last 20 iterations on the Gene Body Methylation data set. The situation is different for the GL$_1^2$ model since the range of the entries of the Gene Body Methylation data set is smaller than that of the GDSC $IC_{50}$ data set (see Figure~\ref{fig:datasets_nmf_regularizer}). This makes the $-\lambda_k^W\cdot\textcolor{black}{\sum_{j\neq k}^{K}w_{mj}}$ term of posterior ``parent" mean $\widetilde{\mu_{mk}}$ in the GL$_1^2$ model approach zero (see Table~\ref{table:bnmf_regularizer_posterior}), and the model then favors a looser regularization than the GEE model. The situation can be further presented by the distribution of the factored component $\bm{W}$ on the GDSC $IC_{50}$ (Figure~\ref{fig:nmf_regularizers_distributions_gdsc}) and the Gene Body Methylation (Figure~\ref{fig:nmf_regularizers_distributions_gene_body}). The GEE model has larger values of $\bm{W}$ on the former data set and smaller values on the latter; while GL$_1^2$ has smaller values of $\bm{W}$ on the former data set and larger values on the latter. In other words, the regularization of the GEE and GL$_1^2$ is \textbf{inconsistent} on the two different data matrices. In comparison, the introduced GL$_2^2$ and GL$_{2,\infty}^2$ are consistent on different data sets, making them more robust algorithms to compute the nonnegative matrix factorization of the observed data. Table~\ref{table:nmfreg_distribtuionsvalues} shows the mean values of the factored component $\bm{W}$ in the last 20 iterations for GDSC $IC_{50}$ (upper table) and Gene Body Methylation (lower table) where the value in the parentheses is the sparsity evaluated by taking the percentage of values smaller than 0.1. The inconsistency of GEE for different matrices can be observed (either large sparsity or small sparsity), while the results for the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models are more consistent. \begin{figure}[ht] \centering \noindent \makebox[\textwidth][c]{% \begin{minipage}{\textwidth} \begin{minipage}[b]{0.4\textwidth} \centering \setlength{\tabcolsep}{3.4pt} \renewcommand{1.5}{1.35} \footnotesize \begin{tabular}{l|llll} \hline $K$ & GEE & GL$_1^2$ & GL$_2^2$ & GL$_{2,\infty}^2$ \\ \hline 10 & 8.1 (1.9) & 1.3 (10.3) & 2.4 (3.8) & 2.4 (4.5) \\ 20 & 8.6 (1.5) & 0.8 (14.7) & 2.3 (4.1) & 2.2 (4.4) \\ 30 & 8.7 (1.4) & 0.7 (17.3) & 2.2 (4.3) & 2.2 (4.4) \\ 40 & 8.3 (1.5) & 0.6 (19.4) & 2.2 (4.4) & 2.2 (4.4) \\ 50 & 8.0 (1.6) & 0.5 (21.2) & 2.2 (4.1) & 2.2 (4.2) \\ \hline \hline 10 & 0.1 (80.4) & 0.7 (11.4) & 0.7 (11.5) & 0.7 (12.7) \\ 20 & 0.1 (87.8) & 0.6 (16.2) & 0.5 (21.3) & 0.5 (21.0) \\ 30 & 0.0 (90.2) & 0.6 (18.2) & 0.3 (37.1) & 0.3 (36.4) \\ 40 & 0.0 (92.2) & 0.6 (20.8) & 0.3 (48.9) & 0.3 (49.1) \\ 50 & 0.0 (93.0) & 0.5 (22.8) & 0.2 (58.4) & 0.2 (58.4) \\ \hline \end{tabular} \captionof{table}{Mean values of the factored component $\bm{W}$ in the last 20 iterations, where the value in the (parentheses) is the sparsity evaluated by taking the percentage of values smaller than 0.1, for GDSC $IC_{50}$ (upper table) and Gene Body Methylation (lower table). The inconsistency of GEE and GL$_1^2$ for different matrices can be observed. 
} \label{table:nmfreg_distribtuionsvalues} \end{minipage} \vspace{+0.35cm} \hfill\hfill \begin{minipage}[b]{0.57\textwidth} \centering \renewcommand{1.5}{1.1} \footnotesize \setlength{\tabcolsep}{3.8pt} \begin{tabular}{l|l|llll} \hline Unobs.& $K$ & GEE & GL$_1^2$ & GL$_2^2$ & GL$_{2,\infty}^2$ \\ \hline \parbox[t]{5.0mm}{\multirow{4}{*}{\rotatebox[origin=c]{0}{60\%} }} &20 & 787.60 & 880.36 & \textbf{ 769.24 } & \textbf{ 768.27 } \\ &30 & 810.39 & 888.47 & \textbf{ 774.53 } & \textbf{ 773.27 } \\ &40 & 802.39 & 892.01 & \textbf{ 783.26 } & \textbf{ 784.30 } \\ &50 & \textbf{795.72} & 895.05 & \textbf{ 806.14 } & \textbf{ 807.44 } \\ \hline \hline \parbox[t]{5.0mm}{\multirow{4}{*}{\rotatebox[origin=c]{0}{70\%} }} &20 & 841.74 & 895.77 & \textbf{ 798.44 } & \textbf{ 796.15 } \\ &30 & 830.45 & 902.48 & \textbf{ 807.37 } & \textbf{ 806.61 } \\ &40 & \textbf{842.70} & 907.65 & \textbf{ 832.67 } & \textbf{ 835.89 } \\ &50 & \textbf{846.83} & {1018.97} $\uparrow$ & \textbf{ 864.58 } & \textbf{ 869.15 } \\ \hline \hline \parbox[t]{5.0mm}{\multirow{4}{*}{\rotatebox[origin=c]{0}{80\%} }} &20 & 904.39 & 926.72 & \textbf{ 842.24 } & \textbf{ 841.84 } \\ &30 & \textbf{887.63} & 938.92 & \textbf{ 879.30 } & \textbf{ 883.57 } \\ &40 & \textbf{942.44} & 2634.69 & \textbf{ 935.09 } & \textbf{ 939.77 } \\ &50 & \textbf{952.45} & {2730.30} $\uparrow$ & \textbf{ 974.01 } & \textbf{ 973.75 } \\ \hline \end{tabular} \captionof{table}{Mean squared error measure when the percentage of unobserved data is 60\% (upper table), 70\% (middle table), or 80\% (lower table) for the GDSC $IC_{50}$ data set. The performance of the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models is only slightly worse when we increase the fraction of unobserved from 60\% to 80\%; while the performance of GL$_1^2$ becomes extremely poor. Similar observations occur in the Gene Body Methylation experiment. The symbol $\uparrow$ means the performance becomes extremely worse.} \label{table:nmfregu_special_sparsity_case} \end{minipage} \end{minipage} } \end{figure} \begin{figure*}[h] \centering \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-2pt \subfigure[Predictive results on the \textbf{GDSC $\boldsymbol{IC_{50}}$} data set with increasing fraction of unobserved data and increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/nmf_regularizer_sparsity_movielens_gdsc.pdf} \label{fig:nmf_regularizer_sparsity_movielens_gdsc}} \subfigure[Predictive results on \textbf{Gene Body Methylation} data set with increasing fraction of unobserved data and increasing latent dimension $K$.]{\includegraphics[width=1\textwidth]{imgs/nmf_regularizer_sparsity_movielens_genebody_meth.pdf} \label{fig:nmf_regularizer_sparsity_movielens_genebody_meth}} \caption{Predictive results on the \textbf{GDSC $\boldsymbol{IC_{50}}$} (upper) and \textbf{Gene Body Methylation} (lower) data sets. We measure the predictive performance (mean squared error) on a held-out data set for different fractions of unobserved data. } \label{fig:sparsity_nmf_regulariza} \end{figure*} \paragraph{Predictive analysis.} The training performances of the GEE, GL$_2^2$, and GL$_{2,\infty}^2$ models steadily improve as the model complexity grows. Inspired by this result, we measure the predictive performance when the sparsity of the data increases to see whether the models overfit or not. For different fractions of unobserved data, we randomly split the data based on that fraction, train the model on the observed data, and measure the performance on the held-out test data. 
Again, we increase $K$ from $K=20$ to $K=30, 40, 50$ for all models. The average MSE over ten repeats is given in Figure~\ref{fig:sparsity_nmf_regulariza}. We still observe the inconsistency issue in the GL$_1^2$ model: its predictive performance is as good as that of the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models on the Gene Body Methylation data set, while its predictive results are extremely poor on the GDSC $IC_{50}$ data set. For the GDSC $IC_{50}$ data set, the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models perform best when the latent dimensions are $K=20, 30, 40$; when $K=50$ and the fraction of unobserved data increases, the GEE model is slightly better. As mentioned above, the GL$_1^2$ model performs the worst on this data set, and when the fraction of unobserved data or $K$ increases, its predictive results deteriorate quickly. For the Gene Body Methylation data set, the predictive performances of the GL$_1^2$, GL$_2^2$, and GL$_{2,\infty}^2$ models are close (GL$_1^2$ has a slightly larger error). The GEE model performs the worst on this data set. The comparison of the results on the two data sets shows that the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models have both better in-sample and out-of-sample performance, making them a more robust choice for predicting missing entries.
Table~\ref{table:nmfregu_special_sparsity_case} shows the MSE predictions of different models when the fractions of unobserved data are $60\%$, $70\%$, and $80\%$, respectively. We observe that the performances of the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models are only slightly worse when we increase the fraction of unobserved data from 60\% to 80\%. This indicates that the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models are more robust with less overfitting. For the GL$_1^2$ model, by contrast, the performance becomes extremely poor in this scenario.
\paragraph{Noise sensitivity.} Finally, we measure the noise sensitivity of different models via their predictive performance on noisy data sets. To see this, we add different levels of Gaussian noise to the data, at noise-to-signal ratios of $\{0\%, 10\%, 20\%, 50\%, 100\%\}$ (the ratio of the variance of the added Gaussian noise to the variance of the data). The results for the GDSC $IC_{50}$ data set with $K=10$ are shown in Figure~\ref{fig:noise_graph_gdsc}; they are the average performance over 10 repeats. We observe that the introduced GL$_2^2$ and GL$_{2,\infty}^2$ models perform notably better than the other Bayesian NMF models when the noise-to-signal ratio is smaller than 10\%, and slightly better when the ratio is larger than 20\%. Similar results can be found on the Gene Body Methylation data set and for other $K$ values, and we shall not repeat the details.
\begin{figure}[h] \centering \subfigtopskip=2pt \subfigbottomskip=9pt \subfigcapskip=-5pt \includegraphics[width=0.421\textwidth]{imgs/noise_graph_gdsc_nmf_reg.pdf} \caption{Ratio of the variance of the data to the MSE of the predictions; the higher, the better.} \label{fig:noise_graph_gdsc} \end{figure}
\section{Semi-Nonnegative Matrix Factorization} Instead of forcing nonnegativity on both factored matrices, we can place this constraint on only one of them \citep{ding2008convex, fei2008semi}. By the Bayesian approach, we can place a real-valued prior over one component, and a nonnegative prior over the other.
As discussed above, NMF works for nonnegative data sets, e.g., data sets derived from images and texts. The main advantage of semi-nonnegative matrix factorization, in contrast, is that it allows us to handle real-valued data sets while still enforcing nonnegativity constraints on one component. \index{Decomposition: GEG} \subsection{Gaussian Likelihood with Exponential and Gaussian Priors (GEG)} The Gaussian likelihood with exponential and Gaussian priors (GEG) model places an exponential prior over the component $\bm{W}$ and a Gaussian prior over the component $\bm{Z}$, as is done in the GEE and GGG models, respectively (Section~\ref{section:gee_model} and Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}). The likelihood is chosen to be the same as that in the GEE model (Equation~\eqref{equation:gee_likelihood}). The graphical representation of the GEG model is shown in Figure~\ref{fig:bmf_geg}.
\paragraph{Prior.} We assume $\bm{W}$ is independently exponentially distributed with scales $\lambda_{mk}^W$, and $\bm{Z}$ is Gaussian distributed with precisions $\lambda_{kn}^Z$,
\begin{equation}\label{equation:geg_prior_density_exponential_gaussian} \begin{aligned} w_{mk} &\sim \mathcal{E}(w_{mk}\mid \lambda_{mk}^W), \gap &z_{kn}\sim& \mathcal{N}(z_{kn}\mid 0, ( \lambda_{kn}^Z)^{-1});\\ p(\bm{W}) &=\prod_{m,k=1}^{M,K} \mathcal{E}(w_{mk}\mid\lambda_{mk}^W), \gap &p(\bm{Z}) =&\prod_{k,n=1}^{K,N} \mathcal{N}(z_{kn}\mid 0, (\lambda_{kn}^Z)^{-1}), \end{aligned} \end{equation}
where $\mathcal{E}(x\mid \lambda)=\lambda\exp(-\lambda x)u(x)$ is the exponential density with $u(x)$ being the unit step function. The prior for the noise variance $\sigma^2$ is again chosen as an inverse-Gamma density with shape ${\alpha_\sigma}$ and scale ${\beta_\sigma}$ (Equation~\eqref{equation:geg_sigma_prior}). The conditional posterior densities for the $w_{mk}$'s and $z_{kn}$'s are already provided in the GEE and GGG models, respectively.
\begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[GEG.]{\label{fig:bmf_geg} \includegraphics[width=0.421\linewidth]{./imgs/bmf_geg.pdf}} \subfigure[GnVG.]{\label{fig:bmf_gnvg} \includegraphics[width=0.421\linewidth]{./imgs/bmf_gnvg.pdf}} \caption{Graphical model representation of GEG and GnVG models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or".} \label{fig:bmf_geg_gnvg} \end{figure}
\index{Decomposition: GnVG} \subsection{Gaussian Likelihood with Nonnegative Volume and Gaussian Priors (GnVG)} The volume prior discussed in the GVG model (Section~\ref{section:gvg_model}, p.~\pageref{section:gvg_model}) can be formulated to be nonnegative so as to enforce nonnegativity (Figure~\ref{fig:bmf_gnvg}).
\paragraph{Prior.} As in the GGG and GEG models, we place a Gaussian density with precision $\lambda_{kn}^Z$ over $\bm{Z}$:
\begin{equation}\label{equation:gnvg_prior_density_gaussian} \begin{aligned} z_{kn}\sim& \mathcal{N}(z_{kn}\mid 0, ( \lambda_{kn}^Z)^{-1});\\ p(\bm{Z}) =&\prod_{k,n=1}^{K,N} \mathcal{N}(z_{kn}\mid 0, (\lambda_{kn}^Z)^{-1}). \end{aligned} \end{equation}
The nonnegative volume prior is constructed as follows,
\begin{equation}\label{equation:gnvg_nonneg_volume_prior} \bm{W}\sim \left\{ \begin{aligned} &\exp\{-\gamma \mathrm{det}(\bm{W}^\top\bm{W})\} ,& \mathrm{\,\,if\,\,} w_{mk} \geq 0\text{\,\, for all }m,k; \\ &0 , &\text{\,\,if\,\,any } w_{mk}<0.
\end{aligned} \right. \end{equation}
The posterior density for $z_{kn}$ is the same as that in the GEG and GGG models. The posterior density for $\bm{W}$ is similar to that of the GVG model; in this case, we draw the samples from a truncated-normal rather than a normal density.
\index{Decomposition: NMTF} \section{Nonnegative Matrix Tri-Factorization (NMTF)}
\begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[GEEE.]{\label{fig:bmf_geee} \includegraphics[width=0.421\linewidth]{./imgs/bmf_geee.pdf}} \subfigure[GEEEA.]{\label{fig:bmf_geeea} \includegraphics[width=0.421\linewidth]{./imgs/bmf_geeea.pdf}} \caption{Graphical model representation of GEEE and GEEEA models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or", and the comma ``," in the variable represents ``and".} \label{fig:bmf_geee_geeea} \end{figure}
Similar to the \textit{bilinear} nonnegative matrix factorization, where the observed matrix is reduced to a product of two factor matrices, the nonnegative matrix tri-factorization (NMTF) extends the decomposition to three components, i.e., we factor the data matrix $\bm{A}$ into three matrices, $\bm{A} = \bm{W}\bm{F}\bm{Z}+\bm{E}$, where $\bm{W}\in\mathbb{R}_+^{M\times K}, \bm{F}\in\mathbb{R}_+^{K\times L}, \bm{Z}\in\mathbb{R}_+^{L\times N}$.
\paragraph{Likelihood.} We again assume the residuals, $e_{mn}$, are i.i.d., zero mean normal with variance $\sigma^2$, which gives rise to the likelihood,
\begin{equation}\label{equation:geee_likelihood} \begin{aligned} p(\bm{A}\mid \boldsymbol\theta) &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid \bm{w}_m^\top\bm{F}\bm{z}_n, \sigma^2 \right)\\ &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid \bm{w}_m^\top\bm{F}\bm{z}_n, \tau^{-1} \right) \end{aligned} \end{equation}
where $\boldsymbol\theta=\{\bm{W},\bm{F},\bm{Z},\sigma^2\}$ denotes all parameters in the model, $\sigma^2$ is the variance, and $\tau^{-1}=\sigma^2$ is the precision. And now we want the reconstruction error measured by the Frobenius norm to be minimized:
\begin{equation}\label{equation:als-per-example-loss_tnmf} \mathop{\min}_{\bm{W},\bm{F},\bm{Z}} L(\bm{W},\bm{F},\bm{Z}) = \mathop{\min}_{\bm{W},\bm{F},\bm{Z}}\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{F}\bm{z}_n\right)^2. \end{equation}
\paragraph{Prior.} We assume $\bm{W}$, $\bm{F}$, and $\bm{Z}$ are independently exponentially distributed with scales $\lambda_{mk}^W$, $\lambda_{kl}^F$, and $\lambda_{ln}^Z$ (Definition~\ref{definition:exponential_distribution}, p.~\pageref{definition:exponential_distribution}),
\begin{equation}\label{equation:geee_prior_density_exponential} \begin{aligned} w_{mk} &\sim \mathcal{E}(w_{mk}\mid \lambda_{mk}^W), \gap &f_{kl}\sim& \mathcal{E}(f_{kl}\mid \lambda_{kl}^F), &z_{ln}\sim& \mathcal{E}(z_{ln}\mid \lambda_{ln}^Z);\\ p(\bm{W}) &=\prod_{m,k=1}^{M,K} \mathcal{E}(w_{mk}\mid \lambda_{mk}^W), &p(\bm{F}) =&\prod_{k,l=1}^{K,L} \mathcal{E}(f_{kl}\mid \lambda_{kl}^F), &p(\bm{Z}) =&\prod_{l,n=1}^{L,N} \mathcal{E}(z_{ln}\mid \lambda_{ln}^Z), \end{aligned} \end{equation}
where $\mathcal{E}(x\mid \lambda)=\lambda\exp(-\lambda x)u(x)$ is the exponential density with $u(x)$ being the unit step function.
The prior for the noise variance $\sigma^2$ is chosen as an inverse-Gamma density with shape ${\alpha_\sigma}$ and scale ${\beta_\sigma}$ (Definition~\ref{definition:inverse_gamma_distribution}, p.~\pageref{definition:inverse_gamma_distribution}),
\begin{equation}\label{equation:geeg_sigma_prior} p(\sigma^2)= \mathrm{IG}(\sigma^2\mid \alpha_\sigma, \beta_\sigma) = \frac{{\beta_\sigma}^{\alpha_\sigma}}{\Gamma({\alpha_\sigma})} (\sigma^2)^{-\alpha_\sigma-1} \exp\left( -\frac{{\beta_\sigma}}{\sigma^2} \right). \end{equation}
Therefore, the posterior density of $\sigma^2$ still follows from Equation~\eqref{equation:gee_posterior_sigma2}.
\paragraph{Posterior.} For NMTF, following Bayes' rule and MCMC, this means we need to be able to draw from the following distributions (by Markov blanket, Section~\ref{section:markov-blanket}, p.~\pageref{section:markov-blanket}):
$$ \begin{aligned} &p(w_{mk}\mid \bm{A}, \bm{W}_{-mk},\bm{F}, \bm{Z},\sigma^2, \boldsymbol\lambda^W,\boldsymbol\lambda^F, \boldsymbol\lambda^Z), \\ &p(f_{kl}\mid \bm{A}, \bm{W},\bm{F}_{-kl}, \bm{Z}, \sigma^2,\boldsymbol\lambda^W,\boldsymbol\lambda^F, \boldsymbol\lambda^Z), \\ & p(z_{ln}\mid \bm{A}, \bm{W}, \bm{F},\bm{Z}_{-ln},\sigma^2, \boldsymbol\lambda^W,\boldsymbol\lambda^F, \boldsymbol\lambda^Z), \\ & p(\sigma^2 \mid \bm{A}, \bm{W}, \bm{F}, \bm{Z},\alpha_\sigma, \beta_\sigma), \\ \end{aligned} $$
where $\boldsymbol\lambda^W$ is an $M\times K$ matrix containing all $\{\lambda_{mk}^W\}$ entries, $\boldsymbol\lambda^F$ is a $K\times L$ matrix containing all $\{\lambda_{kl}^F\}$ entries, $\boldsymbol\lambda^Z$ is an $L\times N$ matrix including all $\{\lambda_{ln}^Z\}$ values, and $\bm{W}_{-{mk}}$ denotes all elements of $\bm{W}$ except $w_{mk}$. The conditional density of $w_{mk}$ is similar to that in the GEE model in Equation~\eqref{equation:gee_poster_wmk1}. For simplicity, we denote the $k$-th row of $\bm{F}$ by $\bm{r}_k$, and the $l$-th column of $\bm{F}$ by $\bm{c}_l$. The conditional density of $w_{mk}$ is then the same as that in Equation~\eqref{equation:gee_poster_wmk1}, except now we replace $z_{kj}$ with $\bm{r}_k^\top\bm{z}_j$ in the variance parameter of Equation~\eqref{equation:gee_posterior_variance}, and replace $z_{kj}$ with $\bm{r}_k^\top\bm{z}_j$ and $z_{ij}$ with $\bm{r}_i^\top\bm{z}_j$ in Equation~\eqref{equation:gee_posterior_mean}. The reason is obvious: when considering the conditional density of $w_{mk}$, we can treat $\bm{F}\bm{Z}$ as a single matrix, and the problem becomes a bilinear decomposition. Similarly, the conditional posterior density of $z_{ln}$ can be derived by symmetry with $w_{mk}$. Using Bayes' theorem, the conditional density of $f_{kl}$ depends on its parents ($\lambda_{kl}^F$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}, \bm{F}_{-kl}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_geee} and Section~\ref{section:markov-blanket} (p.~\pageref{section:markov-blanket}).}.
And it can be obtained by
\begin{equation}\label{equation:geee_poster_fkl1} \begin{aligned} &\gap p(f_{kl}\mid \bm{A} , \bm{W},\bm{F}_{-kl}, \bm{Z}, \sigma^2, \cancel{\boldsymbol\lambda^W},\cancel{\boldsymbol\lambda^Z}, \boldsymbol\lambda^F)=p(f_{kl} \mid \bm{A}, \bm{W}, \bm{F}_{-kl}, \bm{Z}, \sigma^2, \lambda_{kl}^F) \\ &\propto p(\bm{A}\mid \bm{W}, \bm{F},\bm{Z}, \sigma^2) \times p(f_{kl}\mid \lambda_{kl}^F) =\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}\mid \bm{w}_i^\top\bm{F}\bm{z}_j, \sigma^2 \right)\times \mathcal{E}(f_{kl}\mid \lambda_{kl}^F) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N}(a_{ij} - \bm{w}_i^\top\bm{F}\bm{z}_j )^2\right\} \times \cancel{\lambda_{kl}^F }\exp(-\lambda_{kl}^F \cdot f_{kl})u(f_{kl})\\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N} \left(-2a_{ij}(\bm{w}_i^\top\bm{F}\bm{z}_j) + (\bm{w}_i^\top\bm{F}\bm{z}_j)^2 \right)\right\} \cdot \exp(-\lambda_{kl}^F\cdot f_{kl})u(f_{kl}).\\ \end{aligned} \end{equation}
To express the conditional density of $\{f_{kl}\mid \bm{A}, \bm{W},\bm{F}_{-kl},\bm{Z}, \sigma^2,\lambda_{kl}^F \}$ in terms of $f_{kl}$, we write out $\bm{w}_i^\top \bm{F}\bm{z}_j$ in the above equation as
$$ \begin{aligned} \bm{w}_i^\top \bm{F}\bm{z}_j = \sum_{s,t=1}^{K,L}\, w_{is} \, f_{st} z_{tj} = f_{kl}\, (w_{ik} \, z_{lj}) +C \end{aligned} $$
where
$$ C=\sum_{(s,t)\neq (k,l)}^{K,L}\, w_{is} \, f_{st} z_{tj} $$
is a constant when considering $f_{kl}$. Therefore, by excluding terms not relevant to $f_{kl}$, Equation~\eqref{equation:geee_poster_fkl1} can be expressed as
\begin{equation}\label{equation:geee_poster_fkl2} \begin{aligned} &\gap p(f_{kl} \mid \bm{A}, \bm{W}, \bm{F}_{-kl}, \bm{Z}, \sigma^2, \lambda_{kl}^F )\\ &\propto \exp\left\{ - \underbrace{\frac{\sum_{i,j=1}^{M,N} (w_{ik}z_{lj})^2}{2\sigma^2}}_{\textcolor{blue}{1/(2\widetilde{\sigma_{kl}^{2}})}} f_{kl}^2 + f_{kl} \underbrace{\left[ -\lambda_{kl}^F + \sum_{i,j=1}^{M,N}(w_{ik}z_{lj}) \left( \frac{a_{ij}-C}{\sigma^2} \right) \right] } _{\textcolor{blue}{\widetilde{\sigma_{kl}^{2}}^{-1} \widetilde{\mu_{kl}}}} \right\} \cdot u(f_{kl})\\ &\propto \mathcal{N}(f_{kl} \mid \widetilde{\mu_{kl}}, \widetilde{\sigma_{kl}^{2}})\cdot u(f_{kl}) = \mathcal{TN}(f_{kl} \mid \widetilde{\mu_{kl}}, \widetilde{\sigma_{kl}^{2}}), \end{aligned} \end{equation}
where $u(x)$ is the unit function with value 1 if $x\geq 0$ and value 0 if $x<0$,
\begin{equation}\label{equation:geee_posterior_variance} \widetilde{\sigma_{kl}^{2}}= \frac{\sigma^2}{\sum_{i,j=1}^{M,N} (w_{ik}z_{lj})^2} \end{equation}
is the ``parent" posterior variance of the normal distribution with posterior ``parent" mean $\widetilde{\mu_{kl}}$,
\begin{equation}\label{equation:geee_posterior_mean} \widetilde{\mu_{kl}} = \left[ -\lambda_{kl}^F + \sum_{i,j=1}^{M,N}(w_{ik}z_{lj}) \left( \frac{a_{ij}-C}{\sigma^2} \right) \right] \cdot \widetilde{\sigma_{kl}^{2}} \end{equation}
and $\mathcal{TN}(x \mid \mu, \sigma^2)$ is the \text{truncated-normal density} with ``parent" mean $\mu$ and ``parent" variance $\sigma^2$ (Definition~\ref{definition:truncated_normal}, p.~\pageref{definition:truncated_normal}).
\paragraph{Sparsity.} Similar to the GEE model, where sparsity is imposed on the factored component $w_{mk}$, the posterior parameters impose a similar sparsity constraint on the component $f_{kl}$. The sparsity comes from the negative term $-\lambda_{kl}^F$ in Equation~\eqref{equation:geee_posterior_mean}.
When $\lambda_{kl}^F$ becomes larger, the posterior ``parent" mean becomes smaller, and the TN distribution will have a larger probability for smaller values (or even values approaching zero), since the draws of $\mathcal{TN}(f_{kl} \mid \widetilde{\mu_{kl}}, \widetilde{\sigma_{kl}^{2}})$ will concentrate around zero, thus imposing sparsity (see Figure~\ref{fig:dists_truncatednorml_mean}, p.~\pageref{fig:dists_truncatednorml_mean}).
\begin{algorithm}[h] \caption{Gibbs sampler for GEEE model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_{mk}^W\}=\{\lambda_{kl}^F\}= \{\lambda_{ln}^Z\}=0.1$.} \label{alg:geee_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \lambda_{mk}^W,\lambda_{kl}^F, \lambda_{ln}^Z$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A} , \bm{W}_{-mk}, \bm{F}, \bm{Z},\sigma^2, \lambda_{mk}^W)$; \Comment{Equation~\eqref{equation:gee_poster_wmk1}} \EndFor \For{$l=1$ to $L$} \State Sample $f_{kl}$ from $p(f_{kl} \mid \bm{A} , \bm{W}, \bm{F}_{-kl}, \bm{Z}, \sigma^2,\lambda_{kl}^F)$; \Comment{Equation~\eqref{equation:geee_poster_fkl2}} \EndFor \EndFor \For{$l=1$ to $L$} \For{$n=1$ to $N$} \State Sample $z_{ln}$ from $p(z_{ln} \mid \bm{A} , \bm{W}, \bm{F}, \bm{Z}_{-ln},\sigma^2, \lambda_{ln}^Z)$; \Comment{Symmetry of Equation~\eqref{equation:gee_poster_wmk1}} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2 \mid \bm{A}, \bm{W},\bm{F},\bm{Z},\alpha_\sigma,\beta_\sigma)$; \Comment{Equation~\eqref{equation:gee_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:als-per-example-loss_tnmf}, stop if it converges. \end{algorithmic} \end{algorithm}
\paragraph{Gibbs sampling.} Using the Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler} (p.~\pageref{section:gibbs-sampler}), we can construct a Gibbs sampler for the GEEE model as formulated in Algorithm~\ref{alg:geee_gibbs_sampler}. In practice, all the scale parameters of the exponential priors are set to the same value, $\lambda=\lambda_{mk}^W =\lambda_{kl}^F= \lambda_{ln}^Z$ for all $m,k,l,n$. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_{mk}^W\}=\{\lambda_{kl}^F\}= \{\lambda_{ln}^Z\}=0.1$.
\paragraph{Automatic relevance determination.} Similar to the GEEA model (Section~\ref{section:geea_nmf_model}), we can use ARD to share the scale parameter of the exponential priors within each column of $\bm{W}$ and each row of $\bm{Z}$ so as to perform automatic model selection. The graphical representation is shown in Figure~\ref{fig:bmf_geeea},
$$ \begin{aligned} w_{mk}&\sim \mathcal{E}(w_{mk}\mid \lambda_{k}^W), \qquad &\gap& z_{ln}\sim \mathcal{E}(z_{ln}\mid \lambda_{l}^Z), \\ \lambda_{k}^W &\sim \mathrm{Ga}(\lambda_{k}^W \mid \alpha_\lambda, \beta_\lambda), \qquad &\gap&\lambda_{l}^Z \sim \mathrm{Ga}(\lambda_{l}^Z \mid \alpha_\lambda, \beta_\lambda). \end{aligned} $$
Note that in this case, we do not share any parameters for the entries of $\bm{F}$. For brevity, we do not go into the details of this model here.
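To make the conditional update in Equation~\eqref{equation:geee_poster_fkl2} concrete, the following is a minimal NumPy/SciPy sketch of a single draw of $f_{kl}$ from its truncated-normal conditional. It assumes a fully observed matrix $\bm{A}$; the function name and argument layout are illustrative only and not part of any established implementation.
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def sample_fkl(A, W, F, Z, k, l, sigma2, lam_F):
    """Draw f_{kl} from its truncated-normal conditional (GEEE model)."""
    # Reconstruction with the contribution of f_{kl} removed:
    # C_{ij} = (W F Z)_{ij} - w_{ik} f_{kl} z_{lj}
    outer = np.outer(W[:, k], Z[l, :])      # entries w_{ik} z_{lj}
    C = W @ F @ Z - F[k, l] * outer
    # Posterior "parent" variance and mean
    var = sigma2 / np.sum(outer ** 2)
    mu = var * (-lam_F + np.sum(outer * (A - C)) / sigma2)
    # Sample from the normal density truncated to [0, infinity)
    lower = (0.0 - mu) / np.sqrt(var)        # standardized lower bound
    return truncnorm.rvs(lower, np.inf, loc=mu, scale=np.sqrt(var))
\end{verbatim}
In a full Gibbs sweep (Algorithm~\ref{alg:geee_gibbs_sampler}), this draw would be repeated for every $(k,l)$, interleaved with the analogous truncated-normal draws for $w_{mk}$ and $z_{ln}$ and the inverse-Gamma draw for $\sigma^2$.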
\part{Bayesian Matrix Decomposition} \chapter{Bayesian Real Matrix Factorization} \begingroup \hypersetup{linkcolor=winestain} \minitoc \newpage \endgroup
\section{Introduction}\label{section:bmf_real_intro}
The explosion of data from advancements in sensor technology and computer hardware has presented new challenges for data analysis. The large volume of data often contains noise and other distortions, requiring pre-processing for deductive science to be applied. For example, signals received by antenna arrays are often contaminated by noise and other degradations. To effectively analyze the data, it is necessary to reconstruct or represent it in a way that reduces inaccuracies while maintaining certain feasibility conditions. Additionally, in many cases, the data collected from complex systems is the result of multiple interrelated variables acting in unison. When these variables are not well defined, the information contained in the original data can be overlapping and unclear. By creating a reduced system model, we can achieve a level of accuracy that is close to the original system. The common approach to removing noise, reducing the model, and reconstructing feasibility is to replace the original data with a lower-dimensional representation obtained through subspace approximation. Therefore, low-rank approximations or low-rank matrix decompositions play a crucial role in a wide range of applications.
Low-rank matrix decomposition is a powerful technique used in machine learning and data mining to represent a given matrix as the product of two or more matrices with lower dimensions. It is used to capture the essential structure of a matrix while ignoring noise and redundancies. The most common methods for low-rank matrix decomposition include singular value decomposition (SVD), principal component analysis (PCA), and multiplicative update nonnegative matrix factorization (NMF). Bayesian low-rank decomposition is a variant of low-rank matrix decomposition that incorporates Bayesian modeling. It models the observed data as a low-rank matrix, where the low-rank approximation is assumed to be generated from a prior distribution. This allows for the inclusion of prior knowledge and uncertainty about the low-rank matrix into the decomposition. The use of priors can also lead to more interpretable results by providing a probabilistic representation of the uncertainty associated with the factor matrices. In addition, Bayesian methods allow for the quantification of uncertainty in the results, providing a measure of the confidence in the estimated factor matrices. Thus it can help to mitigate overfitting and produce more robust results, making it a powerful method for modeling both predictive and explanatory data.
Consider an observed data set, represented as an $M\times N$ matrix $\bm{A}$, whose rows represent the observations and whose columns represent the variables of interest. Following Chapter~\ref{section:als} (p.~\pageref{section:als}), the \textit{real matrix factorization (RMF)} problem, a common bilinear decomposition problem, can be stated as $\bm{A}=\bm{W}\bm{Z}+\bm{E}$, where $\bm{A}=[\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_N]\in \mathbb{R}^{M\times N}$ is approximately factorized into an $M\times K$ matrix $\bm{W}\in \mathbb{R}^{M\times K}$ and a $K\times N$ matrix $\bm{Z}\in \mathbb{R}^{K\times N}$.
The data set $\bm{A}$ need not be complete; the indices of observed entries can be represented by a mask matrix $\bm{M}\in \mathbb{R}^{M\times N}$, where a value of 1 indicates that the entry is observed and a value of 0 indicates that the entry is missing. Matrices $\bm{W}$ and $\bm{Z}$ represent the values of explanatory variables which, when multiplied, give a predictor of the values in $\bm{A}$. If entries in $\bm{A}$ are missing, then $\bm{W}$ and $\bm{Z}$ can be used to give predictions of their values. When one of $\bm{W}$ and $\bm{Z}$ is observed, the factorization becomes a regression problem. The factorization of the original data matrix $\bm{A}$ is achieved by finding two such real matrices, one representing the \textit{basis or dictionary} components and the other representing the \textit{activations or coefficients}. Let $\bm{z}_n$ denote the $n$-th column of $\bm{Z}$. Then the matrix multiplication $\bm{W}\bm{Z}$ can be implemented as computing the column vectors of $\bm{A}$ as linear combinations of the columns of $\bm{W}$, using coefficients supplied by the columns of $\bm{Z}$:
$$ \bm{a}_n = \bm{W}\bm{z}_n. $$
In the Netflix context, the value of $a_{mn}$, the $(m,n)$-th element of $\bm{A}$, denotes the rating of the $n$-th user for the $m$-th movie (the larger the value, the more the user likes the movie). Then $\bm{w}_m$ can represent the hidden features of the $m$-th movie and $\bm{z}_n$ contains the features of user $n$ (Section~\ref{section:als-vector-product}, p.~\pageref{section:als-vector-product}).
To simplify the problem, let us first assume that there are no missing entries. We project the data vectors $\bm{a}_n$ onto a smaller dimension $\bm{z}_n \in \mathbb{R}^K$ with $K<M$, such that the \textit{reconstruction error} measured by the Frobenius norm is minimized (assuming $K$ is known):
\begin{equation}\label{equation:als-per-example-loss_real} \mathop{\min}_{\bm{W},\bm{Z}} \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2, \end{equation}
where $\bm{W}=[\bm{w}_1^\top; \bm{w}_2^\top; \ldots; \bm{w}_M^\top]\in \mathbb{R}^{M\times K}$ and $\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N] \in \mathbb{R}^{K\times N}$ contain the $\bm{w}_m$'s and $\bm{z}_n$'s as \textbf{rows and columns}, respectively \footnote{Note again in some contexts, $\bm{Z}$ represents an $N\times K$ matrix such that $\bm{A}$ is decomposed into $\bm{A}=\bm{W}\bm{Z}\textcolor{blue}{^\top} +\bm{E}$.}. The loss form in Equation~\eqref{equation:als-per-example-loss_real} is known as the \textit{per-example loss}. It can be equivalently written as
\begin{equation}\label{equation:frob_loss_brmf} L(\bm{W},\bm{Z}) = \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2 = \norm{\bm{W}\bm{Z}-\bm{A}}^2 =\mathrm{tr}\left\{(\bm{W}\bm{Z}-\bm{A})^\top (\bm{W}\bm{Z}-\bm{A})\right\}, \end{equation}
where $\mathrm{tr}(\cdot)$ represents the trace of the quantity in the brackets. This matrix factorization problem is similar to a standard ``inverse" problem except that in the ``inverse" problem one of the factored components is known, and thus ordinary least squares or similar methods can be applied to find the other component which minimizes the residuals between the reconstruction and the data. When neither $\bm{W}$ nor $\bm{Z}$ is known, the factorization problem is difficult even if the latent dimension $K$ is only 2 or 3.
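As a quick numerical sanity check of Equation~\eqref{equation:frob_loss_brmf}, the following minimal NumPy sketch verifies that the per-example sum, the squared Frobenius norm, and the trace form all coincide; the dimensions and random data are arbitrary and purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 6, 5, 3
A = rng.standard_normal((M, N))
W = rng.standard_normal((M, K))
Z = rng.standard_normal((K, N))

R = W @ Z - A                                   # residual matrix
loss_sum   = np.sum((A - W @ Z) ** 2)           # per-example form
loss_frob  = np.linalg.norm(R, ord="fro") ** 2  # squared Frobenius norm
loss_trace = np.trace(R.T @ R)                  # trace form

assert np.allclose(loss_sum, loss_frob) and np.allclose(loss_frob, loss_trace)
\end{verbatim}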
Because there is a vast number of possible solutions and no analytical approach to identify them, we sample the space of potential solutions using the Markov chain Monte Carlo (MCMC) procedure to determine its properties. We have discussed the Bayesian approach in Section~\ref{section:bayes_approach} (p.~\pageref{section:bayes_approach}). In the Bayesian matrix factorization context, the model leads to the specific form of Bayes' equation,
\begin{equation}\label{equation:bmf_bayes} p(\bm{W},\bm{Z} \mid \bm{A}) \propto p(\bm{A} \mid \bm{W}, \bm{Z}) \times p(\bm{W}, \bm{Z}), \end{equation}
where $p(\bm{W}, \bm{Z})$ captures the prior beliefs encoding the knowledge of the solution independent of the data and $p(\bm{A}\mid \bm{W}, \bm{Z})$ denotes the likelihood comparing the model to the data.
\paragraph{Terminology.} There are three choices that determine the specific type of matrix decomposition model we use, namely, the likelihood function, the priors we place over the factored matrices $\bm{W}$ and $\bm{Z}$, and whether we use any further hierarchical priors. We name each model by its density functions, in the order of the likelihood and the priors. For example, if the likelihood function is chosen to be a Gaussian density, and the two prior density functions are selected to be an exponential density and a Gaussian density, respectively, then the model is denoted as the \textit{Gaussian Exponential-Gaussian (GEG)} model. Sometimes, we will put a hyperprior over the parameters of the prior density functions; e.g., if we put a Gamma prior over the scale of the exponential density, the model is further termed a \textit{Gaussian Exponential-Gaussian Gamma (GEGA)} model (``A" is short for the Gamma density to avoid confusion with the Gaussian density). Table~\ref{table:summ_real_mf} summarizes the Bayesian models for real matrix factorization in this chapter.
\begin{table}[tp] \centering \setlength{\tabcolsep}{2.7pt} \renewcommand{\arraystretch}{1.25} \begin{tabular}{l|llll} \hline Name & Likelihood &Prior $\bm{W}$ & Prior $\bm{Z}$ & Hierarchical prior \\ \hline \hline GGG & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{N}(w_{mk}|0, (\lambda_{mk}^W)^{-1})$ & $\mathcal{N}(z_{kn}|0, (\lambda_{kn}^Z)^{-1})$ & \gap\gap\slash \\ \hline GGGM & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{N}(\bm{w}_m|\mathbf{0}, \lambda^{-1}\bm{I})$ & $\mathcal{N}(\bm{z}_{n}|\mathbf{0}, \lambda^{-1}\bm{I})$ & \gap\gap\slash \\ \hline GGGA& $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{N}(w_{mk}|0, (\lambda_k)^{-1})$ & $\mathcal{N}(z_{kn}|0, (\lambda_k)^{-1})$ & $\mathrm{Ga}(\lambda_k|\alpha_\lambda, \beta_\lambda)$ \\ \hline GGGW& $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & $\mathcal{N}(\bm{w}_m|\boldsymbol\mu_w, \boldsymbol\Sigma_w)$& $\mathcal{N}(\bm{z}_n|\boldsymbol\mu_z, \boldsymbol\Sigma_z)$ & \begin{tabular}{@{}c@{}}$\{\boldsymbol\mu_w, \boldsymbol\Sigma_w\}$, $\{\boldsymbol\mu_z, \boldsymbol\Sigma_z\}\sim$ \\ $\mathcal{NIW}(\bm{m}_0, \kappa_0, \nu_0, \bm{S}_0)$\end{tabular}\\ \hline GVG & $\mathcal{N}(a_{mn}|\bm{w}_m^\top\bm{z}_n, \sigma^2)$ & \begin{tabular}{@{}c@{}}$\bm{W}\sim$ \\ $\exp\{-\gamma \mathrm{det}(\bm{W}^\top\bm{W})\}$ \end{tabular} & $\mathcal{N}(z_{kn}|0, (\lambda_{kn}^Z)^{-1})$ & \gap\gap\slash \\ \hline \end{tabular} \caption{Overview of Bayesian real matrix factorization models.} \label{table:summ_real_mf} \end{table}
\index{Decomposition: GGG} \section{All Gaussian (GGG) Model and Markov Blanket}\label{section:markov-blanket}
\begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \includegraphics[width=0.421\linewidth]{./imgs/bmf_ggg.pdf} \caption{Graphical model representation of GGG model. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or".} \label{fig:bmf_ggg} \end{figure}
The all Gaussian (GGG) model is perhaps the simplest one for Bayesian RMF, where Gaussian priors are applied over the factored matrices \citep{salakhutdinov2008bayesian, gonen2012predicting, virtanen2011bayesian, virtanen2012bayesian}. The model uses a Gaussian likelihood and Gaussian priors.
\paragraph{Likelihood.} We view the data $\bm{A}$ as being produced according to the probabilistic generative process shown in Figure~\ref{fig:bmf_ggg}. The observed $(m,n)$-th data entry $a_{mn}$ of matrix $\bm{A}$ is modeled using a Gaussian likelihood function with variance $\sigma^2$ and mean given by the latent decomposition $\bm{w}_m^\top\bm{z}_n$ (Equation~\eqref{equation:als-per-example-loss_real}),
\begin{equation}\label{equation:ggg_data_entry_likelihood} p(a_{mn} \mid \bm{w}_m^\top\bm{z}_n, \sigma^2) = \mathcal{N}(a_{mn}\mid\bm{w}_m^\top\bm{z}_n, \sigma^2). \end{equation}
This is equivalent to assuming the residuals, $e_{mn}$, are i.i.d.
drawn from a zero mean normal with variance $\sigma^2$, which gives rise to the following likelihood function,
\begin{equation}\label{equation:ggg_likelihood} \begin{aligned} p(\bm{A}\mid \boldsymbol\theta) &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid (\bm{W}\bm{Z})_{mn}, \sigma^2 \right)\\ &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(a_{mn}\mid (\bm{W}\bm{Z})_{mn}, \tau^{-1} \right) \end{aligned} \end{equation}
where $\boldsymbol\theta=\{\bm{W},\bm{Z},\sigma^2\}$ denotes all parameters in the model, $\sigma^2$ is the variance, $\tau^{-1}=\sigma^2$ is the precision, and
$$ \mathcal{N}(x\mid \mu,\sigma^2)=\frac{1}{(2\pi\sigma^2)^{1/2}} \exp \left\{ -\frac{1}{2\sigma^2 } (x-\mu)^2\right\} =\sqrt{\frac{\tau}{2\pi}}\exp \left\{ -\frac{\tau}{2}(x-\mu)^2 \right\} $$
is the normal density (Definition~\ref{definition:gaussian_distribution}, p.~\pageref{definition:gaussian_distribution}).
\paragraph{Prior.} We assume $\bm{W}$ and $\bm{Z}$ are independently Gaussian distributed with precisions $\lambda_{mk}^W$ and $\lambda_{kn}^Z$, respectively,
\begin{equation}\label{equation:ggg_prior_density_gaussian} \begin{aligned} w_{mk} &\sim \mathcal{N}(w_{mk}\mid 0, (\lambda_{mk}^W)^{-1}), \gap &z_{kn}\sim& \mathcal{N}(z_{kn}\mid 0, (\lambda_{kn}^Z)^{-1});\\ p(\bm{W}) &=\prod_{m,k=1}^{M,K} \mathcal{N}(w_{mk}\mid 0, (\lambda_{mk}^W)^{-1}), \gap &p(\bm{Z}) =&\prod_{k,n=1}^{K,N} \mathcal{N}(z_{kn}\mid 0,( \lambda_{kn}^Z)^{-1}). \end{aligned} \end{equation}
The prior for the noise variance $\sigma^2$ is chosen as an inverse-Gamma density with shape ${\alpha_\sigma}$ and scale ${\beta_\sigma}$ (Definition~\ref{definition:inverse_gamma_distribution}, p.~\pageref{definition:inverse_gamma_distribution}),
$$ p(\sigma^2)= \mathrm{IG}(\sigma^2\mid \alpha_\sigma, \beta_\sigma) = \frac{{\beta_\sigma}^{\alpha_\sigma}}{\Gamma({\alpha_\sigma})} (\sigma^2)^{-\alpha_\sigma-1} \exp\left( -\frac{{\beta_\sigma}}{\sigma^2} \right). $$
By Bayes' rule (Equation~\eqref{equation:posterior_abstract_for_mcmc}, p.~\pageref{equation:posterior_abstract_for_mcmc}), the posterior is proportional to the product of the likelihood and the prior; it can be maximized to yield an estimate of $\bm{W}$ and $\bm{Z}$.
\paragraph{Markov blanket.} The most widely used posterior inference methods in Bayesian inference models are Markov chain Monte Carlo (MCMC) methods as described in Section~\ref{sec:monte_carlo_methods} (p.~\pageref{sec:monte_carlo_methods}). The concept behind MCMC methods is to define a Markov chain on the hidden variables that has the posterior as its equilibrium distribution \citep{andrieu2003introduction}. By drawing samples from this Markov chain, one eventually obtains samples from the posterior. A simple form of MCMC sampling is Gibbs sampling, where the Markov chain is constructed by sampling the conditional distribution of each hidden variable given the values of other hidden variables and the observations. Gibbs sampling is widely used when these conditional distributions can be sampled from easily.
\begin{figure}[h!] \centering \includegraphics[width=0.35\textwidth]{imgs/markov_blanket.pdf} \caption{The Markov blanket of a directed acyclic graphical (DAG) model. In a Bayesian network, the Markov blanket of node $A$ includes its \textbf{parents, children, and the other parents of all of its children}. That is, the nodes in the circle are in the Markov blanket of node $A$.
The figure is adapted from the Wikipedia page on the Markov blanket.} \label{fig:markov_blanket} \end{figure}
To do Gibbs sampling, we need to derive the conditional posterior distributions for each parameter conditioned on all the other parameters $p(\theta_i \mid \boldsymbol\theta_{-i}, \mathcal{X})$, where $\mathcal{X}$ is again the set of data points (here, the observed matrix $\bm{A}$) and the $\theta_i$'s are the variables for which we want to sample the distributions. But for a graphical model, this conditional distribution is a function only of the nodes in the \textit{Markov blanket}. For the GGG model shown in Figure~\ref{fig:bmf_ggg}, which is a directed acyclic graphical (DAG) model, the Markov blanket of a node includes \textbf{the parents, the children, and the coparents} \citep{jordan2004introduction}, as shown in Figure~\ref{fig:markov_blanket}. The Markov blanket of node $A$ consists of all nodes in the circle.
\paragraph{An example on the Markov blanket.} The idea of the Markov blanket might be mysterious at first glance. Suppose we want to sample the $(m,k)$-th element $w_{mk}$ of $\bm{W}$ from its conditional distribution. From Figure~\ref{fig:bmf_ggg}, we find its parents, children, and coparents are $\{\lambda_{mk}^W\}$, $\{a_{mn}\}$, and $\{\sigma^2, \bm{Z}, \bm{W}_{-mk}\}$, respectively. Therefore, the conditional distribution of $w_{mk}$ only depends on these three groups of parameters:
$$ p(w_{mk} \mid -) = p(w_{mk}\mid \bm{A},\bm{W}_{-mk},\bm{Z}, \sigma^2, \lambda_{mk}^W). $$
More specifically, from this graphical representation, we can find the Markov blanket for each parameter in the GGG model, and then figure out their conditional posterior distributions to be derived:
\begin{align} p(w_{mk} \mid -) &= p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{mk}^W), \label{equation:ggg_1} \\ p(z_{kn} \mid -) &= p(z_{kn}\mid \bm{A}, \bm{W}, \bm{Z}_{-kn}, \sigma^2,\lambda_{kn}^Z), \label{equation:ggg_2}\\ p(\sigma^2 \mid -) &= p(\sigma^2\mid \bm{A},\bm{W}, \bm{Z}, \alpha_\sigma, \beta_\sigma) \label{equation:ggg_3}. \end{align}
We sequentially draw samples from the posterior of each parameter, conditioned on all other parameters. It can be shown that the sequence of samples computed constitutes a Markov chain for which the stationary distribution is the posterior in which we are interested. In other words, the Gibbs sampler moves the chain forward by one step as follows:
\begin{itemize} \item Sample the first component $w_{mk}$ for each observation from Equation~\eqref{equation:ggg_1}, which is known as the \textbf{conditional distribution of} $w_{mk}$; \item Sample the second component $z_{kn}$ for each observation from Equation~\eqref{equation:ggg_2}, which is known as the \textbf{conditional distribution of} $z_{kn}$; \item Sample the parameter $\sigma^2$ from Equation~\eqref{equation:ggg_3}, which is known as the \textbf{conditional distribution of} the data variance. \end{itemize}
\paragraph{Posterior.} Given the observed matrix $\bm{A}$, we want to estimate the conditional distribution of the latent structure $p(\bm{W},\bm{Z} \mid \bm{A})$, termed the posterior, which is the key to matrix decomposition-based applications. For example, in the Netflix context, we estimate the posterior expectation of each user's hidden features/preferences and each movie's hidden attributes to perform predictions of which unconsumed movies each user will like. In this book, we apply Gibbs sampling to perform the posterior inference since it tends to be very accurate at finding the true posterior.
The advantage of MCMC or Gibbs sampling methods is that they produce exact results asymptotically. Apart from these methods, variational Bayesian inference can be an alternative, but we shall not go into the details. For RMF, following Bayes' rule and MCMC, this means we need to be able to draw from the following distributions (by Markov blanket):
$$ \begin{aligned} p(w_{mk} &\mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{mk}^W), \\ p(z_{kn}&\mid \bm{A}, \bm{W}, \bm{Z}_{-kn},\sigma^2, \lambda_{kn}^Z ), \\ p(\sigma^2 &\mid \bm{A},\bm{W}, \bm{Z}, \alpha_\sigma, \beta_\sigma), \\ \end{aligned} $$
where $\bm{W}_{-{mk}}$ denotes all elements of $\bm{W}$ except $w_{mk}$ and $\bm{Z}_{-kn}$ denotes all elements of $\bm{Z}$ except $z_{kn}$. Using Bayes' theorem, the conditional density of $w_{mk}$ depends on its parents ($\lambda_{mk}^W$), children ($a_{mn}$), and coparents ($\tau$ or $\sigma^2$, $\bm{W}_{-mk}, \bm{Z}$) \footnote{See Figure~\ref{fig:bmf_ggg} and Section~\ref{section:markov-blanket}.}. And it can be obtained by
\begin{equation}\label{equation:ggg_poster_wmk1} \begin{aligned} &\gap p(w_{mk} \mid \bm{A} , \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{mk}^W ) \propto p(\bm{A}\mid \bm{W}, \bm{Z}, \sigma^2) \times p(w_{mk}\mid \lambda_{mk}^W)\\ &=\prod_{i,j=1}^{M,N} \mathcal{N} \left(a_{ij}\mid \bm{w}_i^\top\bm{z}_j, \sigma^2 \right)\times \mathcal{N}(w_{mk}\mid 0, (\lambda_{mk}^W)^{-1}) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N}(a_{ij} - \bm{w}_i^\top\bm{z}_j )^2\right\} \times \exp\left\{ -\frac{w_{mk}^2}{2} \lambda_{mk}^W \right\} \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N}(a_{mj} - \bm{w}_m^\top\bm{z}_j )^2\right\} \times \exp\left\{ -\frac{w_{mk}^2}{2}\lambda_{mk}^W \right\} \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N} \left[ w_{mk}^2z_{kj}^2 + 2w_{mk} z_{kj}\bigg(\sum_{i\neq k}^{K}w_{mi}z_{ij} - a_{mj}\bigg) \right] \right\} \cdot \exp\left\{ -\frac{w_{mk}^2}{2}\lambda_{mk}^W \right\}\\ &\stackrel{\star}{\propto} \exp\left\{ -\underbrace{ \left(\frac{\sum_{j=1}^{N} z_{kj}^2 }{2\sigma^2} +\frac{\lambda_{mk}^W}{2} \right) }_{\textcolor{blue}{1/(2\widetilde{\sigma_{mk}^{2}})}} w_{mk}^2 + w_{mk}\underbrace{\left( \frac{1}{\sigma^2} \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) \right)}_{\textcolor{blue}{\widetilde{\sigma_{mk}^{2}}^{-1} \widetilde{\mu_{mk}}}} \right\} \\ &\propto \mathcal{N}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}), \end{aligned} \end{equation}
where the step ($\star$) conforms to Equation~\eqref{equation:gaussian_form_conform} (p.~\pageref{equation:gaussian_form_conform}) such that it follows from the normal distribution with variance $\widetilde{\sigma_{mk}^{2}}$,
\begin{equation}\label{equation:ggg_posterior_variance} \widetilde{\sigma_{mk}^{2}} = 1\bigg/ \left( \frac{1 }{\sigma^2} \sum_{j=1}^{N} z_{kj}^2 +\lambda_{mk}^W\right) \end{equation}
and mean $\widetilde{\mu_{mk}}$,
\begin{equation}\label{equation:ggg_posterior_mean} \widetilde{\mu_{mk}} = \frac{\widetilde{\sigma_{mk}^{2}}}{\sigma^2} \cdot \sum_{j=1}^{N} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) . \end{equation}
In this case, the posterior precision $1/ \widetilde{\sigma_{mk}^{2}}$ is the sum of the prior precision $\lambda_{mk}^W$ and the ``data precision" $\frac{1 }{\sigma^2} \sum_{j=1}^{N} z_{kj}^2$.
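For concreteness, the elementwise update in Equations~\eqref{equation:ggg_posterior_variance} and \eqref{equation:ggg_posterior_mean} can be written as a minimal NumPy sketch; this assumes a fully observed $\bm{A}$, and the function name and argument layout are illustrative rather than part of any library.
\begin{verbatim}
import numpy as np

def sample_wmk(A, W, Z, m, k, sigma2, lam_W, rng):
    """Draw w_{mk} from its Gaussian conditional in the GGG model."""
    # Residual of row m with the contribution of w_{mk} removed:
    # a_{mj} - sum_{i != k} w_{mi} z_{ij}
    resid = A[m, :] - W[m, :] @ Z + W[m, k] * Z[k, :]
    var = 1.0 / (np.sum(Z[k, :] ** 2) / sigma2 + lam_W)   # posterior variance
    mu = (var / sigma2) * np.sum(Z[k, :] * resid)          # posterior mean
    return rng.normal(mu, np.sqrt(var))
\end{verbatim}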
The derivation above also shows, in the Netflix context, that the conditional distribution over the movie feature vector $\bm{w}_m$, conditioned on the user features, the observed rating matrix $\bm{A}$, and the values of the hyperparameters, is a Gaussian density. And due to symmetry, a completely analogous derivation for $z_{kn}$ is used for the $\bm{Z}$ factor. The conditional density of $\sigma^2$ depends on its parents ($\alpha_\sigma$, $\beta_\sigma$), children ($\bm{A}$), and coparents ($\bm{W}$, $\bm{Z}$). And it is an inverse-Gamma distribution (by conjugacy in Equation~\eqref{equation:inverse_gamma_conjugacy_general}, p.~\pageref{equation:inverse_gamma_conjugacy_general}),
\begin{equation}\label{equation:ggg_posterior_sigma2} \begin{aligned} &p(\sigma^2 \mid \bm{A}, \bm{W},\bm{Z}, \alpha_\sigma, \beta_\sigma ) = \mathrm{IG} (\sigma^2\mid \widetilde{\alpha_{\sigma}}, \widetilde{\beta_{\sigma}}), \\ & \widetilde{\alpha_{\sigma}} = \frac{MN}{2} +{\alpha_\sigma}, \gap \widetilde{\beta_{\sigma}} = \frac{1}{2} \sum_{m,n=1}^{M,N} (\bm{A}-\bm{W}\bm{Z})_{mn}^2 + {\beta_\sigma}. \end{aligned} \end{equation}
\paragraph{Missing entries.} In many cases, e.g., the Netflix context, some entries of $\bm{A}$ are missing. Let $\Omega$ denote the set containing all observed entries in data $\bm{A}$. Denote further $\Omega_m = \{n \mid (m,n)\in \Omega\}$, i.e., the observed entries in the $m$-th row; $\Omega_n = \{m \mid (m,n)\in \Omega\}$, i.e., the observed entries in the $n$-th column. The posterior density of $w_{mk}$ can be obtained by
\begin{equation} w_{mk}\sim \mathcal{N}(w_{mk} \mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}), \end{equation}
where
\begin{equation} \begin{aligned} \widetilde{\sigma_{mk}^{2}} &= 1\bigg/ \left( \frac{1}{\sigma^2}\sum_{\textcolor{blue}{j\in\Omega_m}} z_{kj}^2 +\lambda_{mk}^W\right),\\ \widetilde{\mu_{mk}} &= \frac{\widetilde{\sigma_{mk}^{2}}}{\sigma^2} \cdot \sum_{\textcolor{blue}{j\in\Omega_m}} z_{kj}\bigg( a_{mj} - \sum_{i\neq k}^{K}w_{mi}z_{ij}\bigg) . \end{aligned} \end{equation}
In the following discussion, for simplicity, we only consider the data matrix $\bm{A}$ with full observations. The results for missing entries can be derived similarly.
\index{Decomposition: GGGM} \paragraph{GGG with shared prior (GGGM).} When we set $\lambda=\lambda_{mk}^W=\lambda_{kn}^Z$ for all $m\in\{1,2,\ldots, M\}$, $k\in\{1,2,\ldots,K\}$, and $n\in\{1,2,\ldots,N\}$, the conditional posterior distributions we obtain in the Gibbs sampling algorithm can be written as a multivariate Gaussian density (Definition~\ref{definition:multivariate_gaussian}, p.~\pageref{definition:multivariate_gaussian}). In this case, we place a multivariate Gaussian prior over each row $\bm{w}_m$ of $\bm{W}$ and each column $\bm{z}_n$ of $\bm{Z}$:
$$ \bm{w}_m \sim \mathcal{N}(\bm{w}_m \mid \mathbf{0}, \lambda^{-1}\bm{I}), \gap \bm{z}_n \sim \mathcal{N}(\bm{z}_n \mid \mathbf{0}, \lambda^{-1}\bm{I}), $$
i.e., each of the $M$ row factors of $\bm{W}$ and the $N$ column factors of $\bm{Z}$ follows a multivariate normal distribution.
Again by Bayes' rule, letting $\bm{W}_{-m}$ denote all elements of $\bm{W}$ except the $m$-th row, the conditional posterior distribution we obtain in the Gibbs sampling algorithm is given by
\begin{equation}\label{equation:gggm_wm_post} \begin{aligned} & \gap p(\bm{w}_m \mid \sigma^2, \bm{W}_{-m}, \bm{Z}, \lambda, \bm{A}) \propto p(\bm{A} \mid \bm{W}, \bm{Z}, \sigma^2) \times \mathcal{N}(\bm{w}_m \mid \mathbf{0} ,\lambda^{-1}\bm{I}) \\ &\propto \mathcal{N}(\bm{A} \mid \bm{W}\bm{Z}, \sigma^2\bm{I}) \times \mathcal{N}(\bm{w}_m \mid \mathbf{0} ,\lambda^{-1}\bm{I}) \\ &\propto \exp\left\{ -\frac{1}{2\sigma^2} \sum_{j=1}^{N}(a_{mj} - \bm{w}_m^\top\bm{z}_j )^2\right\} \times \exp\left\{ -\frac{\lambda}{2} \bm{w}_m^\top\bm{w}_m \right\} \\ &\stackrel{\star}{\propto} \exp\left\{-\frac{1}{2} \bm{w}_m^\top \underbrace{\left[\lambda\bm{I} + \frac{1}{\sigma^2}\sum_{j=1}^{N}\bm{z}_j\bm{z}_j^\top \right]}_{\textcolor{blue}{\widetilde{\boldsymbol\Sigma}^{-1}}} \bm{w}_m + \bm{w}_m^\top \underbrace{\frac{1}{\sigma^2} \sum_{j=1}^{N} a_{mj}\bm{z}_j }_{\textcolor{blue}{\widetilde{\boldsymbol\Sigma}^{-1} \widetilde{\boldsymbol\mu} }} \right\} \propto \mathcal{N}(\bm{w}_m \mid \widetilde{\boldsymbol\mu}, \widetilde{\boldsymbol\Sigma}), \end{aligned} \end{equation}
where the step ($\star$) conforms to Equation~\eqref{equation:multi_gaussian_form_conform} (p.~\pageref{equation:multi_gaussian_form_conform}) such that it follows from the multivariate Gaussian distribution with covariance $\widetilde{\boldsymbol\Sigma}$,
$$ \widetilde{\boldsymbol\Sigma}= \left[\lambda\bm{I} + \frac{1}{\sigma^2}\sum_{j=1}^{N}\bm{z}_j\bm{z}_j^\top \right]^{-1} $$
and mean vector $\widetilde{\boldsymbol\mu}$,
$$ \widetilde{\boldsymbol\mu} = \frac{1}{\sigma^2} \widetilde{\boldsymbol\Sigma}\cdot \sum_{j=1}^{N} a_{mj}\bm{z}_j. $$
In this case, the \textbf{posterior precision matrix $\widetilde{\boldsymbol\Sigma}^{-1}$ is the sum of the prior precision matrix $\lambda\bm{I}$ and ``data precision matrix" $\frac{1}{\sigma^2}\sum_{j=1}^{N}\bm{z}_j\bm{z}_j^\top$}. And due to symmetry, a similar expression for $\bm{z}_n$ can be derived.
\begin{algorithm}[h] \caption{Gibbs sampler for GGG model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_{mk}^W\} = \{\lambda_{kn}^Z\} = 0.1$.} \label{alg:ggg_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \lambda_{mk}^W, \lambda_{kn}^Z$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk}\mid \bm{A} , \bm{W}_{-mk}, \bm{Z}, \sigma^2, \lambda_{mk}^W )$; \Comment{Equation~\eqref{equation:ggg_poster_wmk1}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn}\mid \bm{A} , \bm{W}, \bm{Z}_{-kn}, \sigma^2, \lambda_{kn}^Z)$; \Comment{Symmetry of Eq.~\eqref{equation:ggg_poster_wmk1}} \EndFor \EndFor \State Sample $\sigma^2$ from $p(\sigma^2\mid \bm{A}, \bm{W},\bm{Z}, \alpha_\sigma, \beta_\sigma)$; \Comment{Equation~\eqref{equation:ggg_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:frob_loss_brmf}, stop if it converges.
\end{algorithmic} \end{algorithm}
\paragraph{Gibbs sampling.} Because conjugate priors for the parameters and hyperparameters are used in the Bayesian matrix factorization model, it is easy to sample from the conditional distributions derived from the posterior distribution. Using the Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler} (p.~\pageref{section:gibbs-sampler}), we can construct a Gibbs sampler for the GGG model as formulated in Algorithm~\ref{alg:ggg_gibbs_sampler}. Due to our choice of priors, we can sample from all conditional distributions directly using standard methods, which obviates slower sampling procedures such as rejection sampling. In practice, all the precision parameters of the Gaussian priors are also set to the same value, $\lambda=\lambda_{mk}^W = \lambda_{kn}^Z$ for all $m,k,n$. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\{\lambda_{mk}^W\} = \{\lambda_{kn}^Z\} = 0.1$.
\paragraph{Prior by Gamma distribution.} We also notice that putting an inverse-Gamma prior on the variance is equivalent to putting a Gamma prior on the precision parameter (the inverse of the variance). For the precision $\tau=\sigma^{-2}$, we use a Gamma distribution with shape $\alpha_\tau>0$ and rate $\beta_\tau>0$ (Definition~\ref{definition:gamma-distribution}, p.~\pageref{definition:gamma-distribution}),
\begin{equation}\label{equation:ggg_gamma_prior} p(\tau)= \mathrm{Ga} (\tau\mid \alpha_\tau, \beta_\tau) = \frac{\beta_{\tau}^{\alpha_{\tau}}}{\Gamma(\alpha_{\tau})} \tau^{\alpha_{\tau}-1} \exp({-\beta_{\tau}\cdot \tau}). \end{equation}
The posterior is obtained similarly (Equation~\eqref{equation:gamma_conjugacy_general}, p.~\pageref{equation:gamma_conjugacy_general}),
$$ \begin{aligned} &\gap p(\tau \mid \bm{W},\bm{Z}, \bm{A}) = \mathrm{Ga} (\tau\mid \widetilde{\alpha_\tau}, \widetilde{\beta_\tau}), \\ &\gap \widetilde{\alpha_\tau} = \frac{MN}{2} +{\alpha_\tau}, \gap \widetilde{\beta_\tau} = \frac{1}{2} \sum_{m,n=1}^{M,N} (\bm{A}-\bm{W}\bm{Z})_{mn}^2 + {\beta_\tau}. \end{aligned} $$
In practice, the prior parameters $\alpha_\tau$, $\beta_\tau$ are chosen to be equal to $\alpha_\sigma$, $\beta_\sigma$, respectively.
\begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[GGGA.]{\label{fig:bmf_ggga} \includegraphics[width=0.421\linewidth]{./imgs/bmf_ggga.pdf}} \subfigure[GGGW.]{\label{fig:bmf_gggw} \includegraphics[width=0.421\linewidth]{./imgs/bmf_gggw.pdf}} \caption{Graphical model representation of GGGA and GGGW models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or", and the comma ``," in the variable represents ``and".} \label{fig:bmf_ggga_gggw} \end{figure}
\index{Decomposition: GGGA} \section{All Gaussian Model with ARD Hierarchical Prior (GGGA)} The all Gaussian with hierarchical Gamma prior (GGGA) model is proposed by \citet{virtanen2011bayesian, virtanen2012bayesian} based on the GGG model; the difference is that the GGGA model puts a hyperprior over the Gaussian prior. Moreover, the GGGA model employs \textit{automatic relevance determination} (ARD), which helps perform \textit{automatic model selection} (Figure~\ref{fig:bmf_ggga}).
\paragraph{Hyperprior.} Going further from the GGG model, we consider the ARD prior, where we place a further Gamma prior (i.e., a hierarchical prior)
\begin{equation}\label{equation:hyper_eq_ggga} \begin{aligned} w_{mk} &\sim \mathcal{N}( 0, (\lambda_{k})^{-1}), \gap &z_{kn}\sim& \mathcal{N}( 0, (\lambda_{k})^{-1}), \gap \lambda_{k} \sim \mathrm{Ga}( \alpha_{\lambda}, \beta_{\lambda}),\\ \end{aligned} \end{equation}
where $k\in\{1,2,\ldots, K\}$, and $\lambda_k$ is shared by all entries in the same column of $\bm{W}$ and the same row of $\bm{Z}$. The entire factor $k$ is then either activated if $\lambda_k$ has a low value or ``turned off" if $\lambda_k$ has a high value (see Figure~\ref{fig:dists_gaussian}, p.~\pageref{fig:dists_gaussian}).
\paragraph{Posterior.} For RMF, following Bayes' rule and MCMC, this means we need to be able to draw from the following distributions (again by Markov blanket, Section~\ref{section:markov-blanket}):
$$ \begin{aligned} &p(\sigma^2\mid \bm{A}, \bm{W}, \bm{Z}, \alpha_\sigma, \beta_\sigma ), &\gap& p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \sigma^2, \boldsymbol\lambda ), \\ &p(\lambda_k \mid \bm{W}, \bm{Z}, \boldsymbol\lambda_{-k}, \sigma^2, \alpha_\lambda, \beta_\lambda), &\gap& p(z_{kn}\mid \bm{A}, \bm{W}, \bm{Z}_{-kn}, \sigma^2, \boldsymbol\lambda ), \\ \end{aligned} $$
where $\boldsymbol\lambda\in \mathbb{R}_+^K$ is a vector including all $\lambda_k$ values and $\boldsymbol\lambda_{-k}$ denotes all elements of $\boldsymbol\lambda$ except $\lambda_k$. The conditional posteriors for $\sigma^2$, the $w_{mk}$'s, and the $z_{kn}$'s remain unchanged, except now we replace $\lambda_{mk}^W$ and $\lambda_{kn}^Z$ by $\lambda_k$. The conditional posterior density of $\lambda_{k}$ depends on its parents ($\alpha_\lambda$, $\beta_\lambda$), children (the entries $w_{ik}$ in the $k$-th column of $\bm{W}$ and $z_{kj}$ in the $k$-th row of $\bm{Z}$; see the definition of $\bm{W}$ and $\bm{Z}$ in Equation~\eqref{equation:als-per-example-loss_real}), and coparents ($\boldsymbol\lambda_{-k}$) \footnote{See Figure~\ref{fig:bmf_ggga} and Section~\ref{section:markov-blanket}.}. Then it follows that,
\begin{equation}\label{equation:posterior-ggga_lambdak} \begin{aligned} &\gap p(\lambda_k\mid \bm{W}, \bm{Z}, \boldsymbol\lambda_{-k}, \sigma^2, \alpha_\lambda, \beta_\lambda)\\ &\propto \prod_{i=1}^{M} \mathcal{N}(w_{ik}\mid 0, (\lambda_{k})^{-1}) \cdot \prod_{j=1}^{N} \mathcal{N}(z_{kj}\mid 0,(\lambda_{k})^{-1}) \cdot \mathrm{Ga}(\lambda_k \mid \alpha_{\lambda}, \beta_\lambda)\\ &= \prod_{i=1}^{M} \lambda_k^{1/2} \exp\left\{-\frac{\lambda_k w_{ik}^2}{2}\right\} \cdot \prod_{j=1}^{N} \lambda_k^{1/2} \exp\left\{-\frac{\lambda_k z_{kj}^2}{2}\right\} \cdot \frac{\beta_{\lambda}^{\alpha_\lambda}}{\Gamma(\alpha_\lambda)} \lambda_k^{\alpha_\lambda-1} \exp(- \lambda_k \beta_\lambda)\\ &\propto\lambda_k^{\frac{M+N}{2} +\alpha_\lambda -1} \exp\left\{-\lambda_k \cdot \left(\frac{1}{2}\sum_{i=1}^{M}w_{ik}^2 + \frac{1}{2}\sum_{j=1}^{N}z_{kj}^2 +\beta_\lambda\right)\right\}\\ &\propto \mathrm{Ga}(\lambda_k\mid \widetilde{\alpha_\lambda}, \widetilde{\beta_\lambda}), \end{aligned} \end{equation}
where
$$ \widetilde{\alpha_\lambda} = \frac{M+N}{2}+\alpha_\lambda, \qquad \widetilde{\beta_\lambda}=\frac{1}{2}\sum_{i=1}^{M}w_{ik}^2 + \frac{1}{2}\sum_{j=1}^{N}z_{kj}^2 +\beta_\lambda.
$$ By the definition of the Gamma distribution (Definition~\ref{definition:gamma-distribution}, p.~\pageref{definition:gamma-distribution}), we have the moments of the posterior density for $\lambda_k$: $$ \mathrm{E}[\lambda_k] = \frac{\widetilde{\alpha_\lambda}}{\widetilde{\beta_\lambda}}, \qquad \mathrm{Var}[\lambda_k] = \frac{\widetilde{\alpha_\lambda}}{\widetilde{\beta_\lambda}^2}. $$ Therefore, when the shape of the raw matrix $\bm{A}$ is larger (i.e., $M+N$ is larger), we favor a larger value of $\lambda_k$ (during sampling). From Equation~\eqref{equation:hyper_eq_ggga}, we thus impose a larger and sparser regularization over the model. Moreover, if the elements of the factored components $\bm{W}$ and $\bm{Z}$ are larger, the $\widetilde{\beta_\lambda}$ tends to be larger as well; hence a smaller value of $\lambda_k$ will be drawn. This is reasonable in the sense that we want to explore in a larger space if the factored components have larger elements from previous Gibbs iterations. \paragraph{Gibbs sampling.} Again we can construct a Gibbs sampler for the GGGA model as formulated in Algorithm~\ref{alg:ggga_gibbs_sampler}. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\alpha_\lambda=\beta_\lambda=1$. \begin{algorithm}[h] \caption{ Gibbs sampler for GGGA model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). The procedure presented here may not be efficient but is explanatory. A more efficient one can be implemented in a vectorized manner. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\alpha_\lambda=\beta_\lambda=1$. } \label{alg:ggga_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\sigma, \beta_\sigma, \alpha_\lambda, \beta_\lambda$; \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z},\sigma^2, \lambda_k )$; \Comment{Equation~\eqref{equation:ggg_poster_wmk1}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn}\mid \bm{A} , \bm{W}, \bm{Z}_{-kn}, \sigma^2, \lambda_k)$; \Comment{Symmetry of Eq.~\eqref{equation:ggg_poster_wmk1}} \EndFor \State Sample $\lambda_k$ from $p(\lambda_k\mid \bm{W}, \bm{Z}, \sigma^2, \alpha_\lambda, \beta_\lambda)$; \Comment{Equation~\eqref{equation:posterior-ggga_lambdak}} \EndFor \State Sample $\sigma^2$ from $p(\sigma^2\mid \bm{A}, \bm{W},\bm{Z}, \alpha_\sigma, \beta_\sigma)$; \Comment{Equation~\eqref{equation:ggg_posterior_sigma2}} \State Report loss in Equation~\eqref{equation:frob_loss_brmf}, stop if it converges. \end{algorithmic} \end{algorithm} \index{Decomposition: GGGW} \section{All Gaussian Model with Wishart Hierarchical Prior (GGGW)}\label{section:gggw_model} The hierarchical prior with Wishart density is proposed in \citet{salakhutdinov2008bayesian} to increase flexibility and calibration based on the GGG model. Instead of assuming independence of each entry in the factored components $\bm{W}, \bm{Z}$, we now assume each row $\bm{w}_m$ of $\bm{W}$ and each column $\bm{z}_n$ of $\bm{Z}$ comes from a multivariate Gaussian density (Definition~\ref{definition:multivariate_gaussian}, p.~\pageref{definition:multivariate_gaussian}) whose parameters are placed over a further normal-inverse-Wishart (NIW) prior (Definition~\ref{definition:normal_inverse_wishart}, p.~\pageref{definition:normal_inverse_wishart}). Figure~\ref{fig:bmf_gggw} shows the graphical representation of the GGGW model. 
\paragraph{Prior and hyperprior.} As in the GGG model, we consider the Gaussian likelihood over the data matrix $\bm{A}$, and the variance parameter $\sigma^2$ is placed over an inverse-Gamma prior with shape $\alpha_\sigma$ and scale $\beta_\sigma$. Given the $m$-th row $\bm{w}_m$ of $\bm{W}$ and the $n$-th column $\bm{z}_n$ of $\bm{Z}$, we consider the multivariate Gaussian density and the normal-inverse-Wishart prior as follows:
\begin{align}
\bm{w}_m&\sim \mathcal{N}(\bm{w}_m\mid \boldsymbol\mu_w, \boldsymbol\Sigma_w), \gap\gap& \boldsymbol\mu_w, \boldsymbol\Sigma_w &\sim \mathcal{NIW}(\boldsymbol\mu_w, \boldsymbol\Sigma_w\mid \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0);\label{equa:gggw_prior_hyper1}\\
\bm{z}_n&\sim \mathcal{N}(\bm{z}_n\mid \boldsymbol\mu_z, \boldsymbol\Sigma_z), \gap\gap& \boldsymbol\mu_z, \boldsymbol\Sigma_z &\sim \mathcal{NIW}(\boldsymbol\mu_z, \boldsymbol\Sigma_z\mid \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0),\label{equa:gggw_prior_hyper2}
\end{align}
where $\mathcal{NIW} (\boldsymbol\mu, \boldsymbol\Sigma\mid \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0) = \mathcal{N}(\boldsymbol\mu\mid \bm{m}_0, \frac{1}{\kappa_0}\boldsymbol\Sigma) \cdot \mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}_0, \nu_0)$ is the density of a normal-inverse-Wishart distribution and $ \mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}_0, \nu_0)$ is the inverse-Wishart distribution (Definition~\ref{definition:multi_inverse_wishart}, p.~\pageref{definition:multi_inverse_wishart}). We can also place normal and inverse-Wishart priors over the mean and covariance parameters separately, i.e., a semi-conjugate prior; we do not repeat the details here, see Sections~\ref{section:sep_mu_niw} and \ref{section:sep_sigma_niw} (p.~\pageref{section:sep_mu_niw}, p.~\pageref{section:sep_sigma_niw}). Following from the discussion in Section~\ref{sec:niw_posterior_conjugacy} (p.~\pageref{sec:niw_posterior_conjugacy}), the posterior density of $\{\boldsymbol\mu_w, \boldsymbol\Sigma_w\}$ also follows a NIW distribution with updated parameters:
\begin{equation}\label{equation:gggw_post_niw}
\boldsymbol\mu_w, \boldsymbol\Sigma_w\sim \mathcal{NIW}(\boldsymbol\mu_w, \boldsymbol\Sigma_w\mid \bm{m}_M, \kappa_M, \nu_M, \bm{S}_M)
\end{equation}
where
\begin{align}
\bm{m}_M &= \frac{\kappa_0\bm{m}_0 + M\overline{\bm{w}}}{\kappa_M} = \frac{\kappa_0 }{\kappa_M}\bm{m}_0+\frac{M}{\kappa_M}\overline{\bm{w}} , \label{equation:niw_posterior_equation_2_gggw}\\
\kappa_M &= \kappa_0 + M, \label{equation:niw_posterior_equation_3_gggw}\\
\nu_M &=\nu_0 + M, \label{equation:niw_posterior_equation_4_gggw}\\
\bm{S}_M &=\bm{S}_0 + \bm{S}_{\overline{w}} + \frac{\kappa_0 M}{\kappa_0 + M}(\overline{\bm{w}} - \bm{m}_0)(\overline{\bm{w}} - \bm{m}_0)^\top \label{equation:niw_posterior_equation_5_gggw}\\
&=\bm{S}_0 + \sum_{m=1}^M \bm{w}_m \bm{w}_m^\top + \kappa_0 \bm{m}_0 \bm{m}_0^\top - \kappa_M \bm{m}_M \bm{m}_M^\top, \label{equation:niw_posterior_equation_6_gggw}\\
\overline{\bm{w}} &= \frac{1}{M}\sum_{m=1}^{M} \bm{w}_m, \label{equation:niw_posterior_equation_7_gggw}\\
\bm{S}_{\overline{w}} &= \sum_{m=1}^{M} (\bm{w}_m - \overline{\bm{w}})(\bm{w}_m - \overline{\bm{w}})^\top. \label{equation:niw_posterior_equation_8_gggw}
\end{align}
An intuitive interpretation for the parameters in the NIW prior can be obtained from the updated parameters above. The parameter $\nu_0$ can be interpreted as the prior number of samples used to estimate the covariance matrix, and $\nu_M =\nu_0 + M$ is the posterior number of samples.
The posterior mean $\bm{m}_M$ of the model mean $\boldsymbol\mu_w$ is a weighted average of the prior mean $\bm{m}_0$ and the sample mean $\overline{\bm{w}}$. The posterior scale matrix $\bm{S}_M$ is the sum of the prior scale matrix $\bm{S}_0$, the empirical covariance matrix $\bm{S}_{\overline{w}}$, and an extra term due to the uncertainty in the mean. Due to symmetry, a similar form for $\{\boldsymbol\mu_z, \boldsymbol\Sigma_z\}$ can be derived.
\paragraph{Gibbs sampling.} Again we can construct a Gibbs sampler for the GGGW model, as formulated in Algorithm~\ref{alg:gggw_gibbs_sampler}. By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\bm{m}_0=\mathbf{0}, \kappa_0=1, \nu_0=K+1, \bm{S}_0=\bm{I}$.
\begin{algorithm}[h]
\caption{ Gibbs sampler for GGGW model in one iteration (prior on variance $\sigma^2$ here, similarly for the precision $\tau$). By default, uninformative hyperparameters are $\alpha_\sigma=\beta_\sigma=1$, $\bm{m}_0=\mathbf{0}, \kappa_0=1, \nu_0=K+1, \bm{S}_0=\bm{I}$. }
\label{alg:gggw_gibbs_sampler}
\begin{algorithmic}[1]
\Require Choose initial $\alpha_\sigma, \beta_\sigma, \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0$;
\For{$m=1$ to $M$}
\State Sample $\bm{w}_m$ from $p(\bm{w}_m\mid \boldsymbol\mu_w, \boldsymbol\Sigma_w )$; \Comment{Equation~\eqref{equa:gggw_prior_hyper1}}
\EndFor
\For{$n=1$ to $N$}
\State Sample $\bm{z}_n$ from $p(\bm{z}_n\mid\boldsymbol\mu_z, \boldsymbol\Sigma_z)$; \Comment{Symmetry of Eq.~\eqref{equa:gggw_prior_hyper2}}
\EndFor
\State Sample $\boldsymbol\mu_w, \boldsymbol\Sigma_w$ from $p(\boldsymbol\mu_w, \boldsymbol\Sigma_w\mid \bm{m}_M, \kappa_M, \nu_M, \bm{S}_M )$ \Comment{Equation~\eqref{equation:gggw_post_niw}}
\State Sample $\boldsymbol\mu_z, \boldsymbol\Sigma_z$ from $p(\boldsymbol\mu_z, \boldsymbol\Sigma_z\mid \bm{m}_N, \kappa_N, \nu_N, \bm{S}_N )$ \Comment{Symmetry of Eq.~\eqref{equation:gggw_post_niw}}
\State Sample $\sigma^2$ from $p(\sigma^2\mid \bm{A}, \bm{W},\bm{Z}, \alpha_\sigma, \beta_\sigma)$; \Comment{Equation~\eqref{equation:ggg_posterior_sigma2}}
\State Report loss in Equation~\eqref{equation:frob_loss_brmf}, stop if it converges.
\end{algorithmic}
\end{algorithm}
\index{Decomposition: GVG}
\section{Gaussian Likelihood with Volume and Gaussian Priors (GVG)}\label{section:gvg_model}
The Gaussian likelihood with volume and Gaussian priors (GVG) model was introduced by \citet{arngren2011unmixing}. The original paper applies the volume prior to unmix a set of pixels into \textit{pure spectral signatures (endmembers)} and corresponding \textit{fractional abundances} in hyperspectral image analysis, in which case the factored components are nonnegative; however, it can also be applied in the real-valued setting considered here. The prior over $\bm{Z}$ is still a Gaussian density, as in the GGG model. Instead of assuming a Gaussian prior over $\bm{W}$, \citet{arngren2011unmixing} place a volume prior on the factored component $\bm{W}$ with density $p(\bm{W})\propto \exp\{-\gamma \mathrm{det}(\bm{W}^\top\bm{W})\}$ (Figure~\ref{fig:bmf_gvg}). The prior has a single parameter $\gamma$ that is determined by hand.
\begin{figure}[h]
\centering
\subfigtopskip=2pt
\subfigbottomskip=6pt
\subfigcapskip=-15pt
\includegraphics[width=0.421\textwidth]{imgs/bmf_gvg.pdf}
\caption{Graphical representation of the GVG model. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables.
The slash ``/" in the variable represents ``or".}
\label{fig:bmf_gvg}
\end{figure}
\paragraph{Posterior.} Denote vector $\bm{w}_{m,-k}\in \mathbb{R}^{K-1}$ as the $m$-th row of $\bm{W}$ excluding column $k$; vector $\bm{w}_{-m, k}\in\mathbb{R}^{M-1}$ as the $k$-th column of $\bm{W}$ excluding row $m$; vector $\bm{z}_{j,-k}\in \mathbb{R}^{K-1}$ as the $j$-th column of $\bm{Z}$ excluding row $k$; matrix $\bm{W}_{-m,-k}\in \mathbb{R}^{(M-1)\times (K-1)}$ as $\bm{W}$ excluding row $m$ and column $k$; matrix $\bm{W}_{:,-k}\in \mathbb{R}^{M\times (K-1)}$ as $\bm{W}$ excluding column $k$; scalar value $D_{-k,-k}=\mathrm{det}\big( \bm{W}_{:,-k}^\top \bm{W}_{:,-k} \big)$; and the \textit{matrix adjugate} of $\big( \bm{W}_{:,-k}^\top \bm{W}_{:,-k} \big)$ as $\bm{A}_{-k,-k}=\mathrm{det}\big( \bm{W}_{:,-k}^\top \bm{W}_{:,-k} \big)\big( \bm{W}_{:,-k}^\top \bm{W}_{:,-k} \big)^{-1}\in\mathbb{R}^{(K-1)\times(K-1)}$. Then the posterior density of $w_{mk}$ can be obtained by
\begin{equation}\label{equation:gvg_post_w}
w_{mk} \sim \mathcal{N}(w_{mk}\mid \widetilde{\mu_{mk}}, \widetilde{\sigma_{mk}^{2}}),
\end{equation}
where
$$
\begin{aligned}
\widetilde{\mu_{mk}}&= \widetilde{\sigma_{mk}^{2}} \left\{ \gamma \bm{w}_{m,-k}^\top \bm{A}_{-k,-k}(\bm{W}_{-m,-k}^\top)\bm{w}_{-m,k} + \frac{1}{\sigma^2} \sum_{j=1}^{N} \big(a_{mj} - \bm{w}_{m,-k}^\top \bm{z}_{j, -k}\big)z_{kj} \right\},
\end{aligned}
$$
and
$$
\begin{aligned}
\widetilde{\sigma_{mk}^{2}}&= 1\bigg/ \left( \frac{1 }{\sigma^2} \sum_{j=1}^{N} z_{kj}^2 + \gamma\left(D_{-k,-k} - \bm{w}_{m,-k}^\top \bm{A}_{-k,-k}\bm{w}_{m,-k}\right) \right).
\end{aligned}
$$
The above result can be obtained by extracting $w_{mk}$ from $\exp\{-\gamma \mathrm{det}(\bm{W}^\top\bm{W})\}$ and from the fact that $\mathrm{det}(\bm{M}) = \mathrm{det}(\bm{D})\mathrm{det}(\bm{A}-\bm{B}\bm{D}^{-1}\bm{C}) = \mathrm{det}(\bm{A})\mathrm{det}(\bm{D}-\bm{C}\bm{A}^{-1}\bm{B})$ if the matrix $\bm{M}$ has the block formulation $\bm{M}=
\begin{bmatrix}
\bm{A} & \bm{B}\\
\bm{C} & \bm{D}
\end{bmatrix}$.
\part{Backgrounds}
\newpage
\section*{Introduction and Background}\label{chapter_introduction}
\addcontentsline{toc}{section}{Introduction and Background}
Matrix decomposition has become a core technology in statistics \citep{banerjee2014linear, gentle1998numerical}, optimization \citep{gill2021numerical}, clustering and classification \citep{li2009non, wang2013non, lu2021survey}, computer vision \citep{goel2020survey}, and recommender systems \citep{symeonidis2016matrix}, largely due to the development of its applications in machine learning \citep{goodfellow2016deep, bishop2006pattern}. Machine learning algorithms are designed to learn hidden patterns and relationships in data, but the data can often be high-dimensional and complex. Matrix decomposition techniques provide a way of reducing the dimensionality of the data and representing it in a way that is easier for the machine learning algorithms to process. Matrix decomposition algorithms such as QR decomposition, singular value decomposition (SVD), and nonnegative matrix factorization (NMF) provide a way of breaking down a matrix into a smaller set of constituent matrices that represent the underlying structure of the data. These decomposed matrices can be used as features for the machine learning algorithms, which can then learn the patterns and relationships in the data more effectively. This process can also help reduce the noise and redundancy in the data, making it easier for the machine learning algorithms to identify the important patterns and relationships.
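To make the feature-extraction viewpoint above concrete, the following minimal sketch (illustrative only: the data matrix is synthetic and the truncation level $k$ is an arbitrary choice) compresses a data matrix with a truncated SVD and uses the leading factors as low-dimensional features.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 100 samples, 500 raw features, with (approximately) rank-20 structure
A = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 500))

# Truncated SVD: keep the top-k singular triplets as a low-dimensional representation
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10
features = U[:, :k] * s[:k]      # k-dimensional features, one row per sample
A_k = features @ Vt[:k, :]       # rank-k reconstruction of the data

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(features.shape, round(float(rel_err), 3))
\end{verbatim}
The downstream learning algorithm can then operate on the $100\times 10$ feature matrix instead of the original $100\times 500$ data matrix.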
On the other hand, matrix decomposition can be applied to a variety of matrix-based problems, such as collaborative filtering and link prediction \citep{marlin2003modeling, lim2007variational, mnih2007probabilistic, raiko2007principal, chen2009collaborative}:
\begin{itemize}
\item In the context of collaborative filtering, matrix decomposition can be used to uncover hidden patterns in user-item matrices. For example, given a matrix of user ratings for different items, matrix decomposition algorithms can be used to infer the latent user preferences and item attributes. These latent variables can then be used to predict the ratings for unseen items so as to suggest items to users (e.g., articles, movies, or music).
\item In link prediction problems, matrix decomposition algorithms can be used to uncover hidden patterns in networks. For example, given a network of users and their connections, matrix decomposition methods can be used to infer latent user preferences, which can then be used to predict new links in the network.
\end{itemize}
A (bilinear) matrix decomposition reduces a complex matrix into simpler constituent parts, representing the original matrix as a product of two (or more) \textit{factor matrices}. The underlying principle of the decompositional approach to matrix computation is not to solve each particular problem from scratch, but to simplify complex matrix operations by performing them on the decomposed parts rather than on the original matrix itself. For example, although a matrix decomposition is usually expensive to compute, it can be reused to solve new problems involving the original matrix: once a factorization of $\bm{A}$ is obtained, it can be reused to solve the set of linear systems $\{\bm{b}_1=\bm{A}\bm{x}_1, \bm{b}_2=\bm{A}\bm{x}_2, \ldots, \bm{b}_k=\bm{A}\bm{x}_k\}$. There are two main approaches that have been applied to inference in the (low-rank) matrix factorization task. The first is to define a loss function and optimize over the factored components using alternating updates \citep{comon2009tensor, lee1999learning}; a small sketch of this route is given below. The second is to build a probabilistic model representing the matrix factorization and then to perform statistical inference to compute any desired components \citep{salakhutdinov2008bayesian, ari2012probabilistic}. Another core application of matrix decomposition in machine learning is that it provides a way of incorporating prior knowledge into the machine learning algorithms. For example, in \textit{Bayesian matrix decomposition (BMD, or Bayesian matrix factorization, BMF)}, prior information about sparsity or nonnegativity can be encoded into the model, allowing the algorithms to make more informed decisions and produce better results. Bayesian matrix decomposition is a probabilistic model that decomposes a matrix into its constituent parts. It was initially discussed in a factor analysis context \citep{canny2004gap, dunson2005bayesian} and in a matrix completion context \citep{zhou2010nonparametric}. BMD is a generative graphical model that can be used to infer the low-dimensional latent factors from a matrix, such as user preferences and item attributes. It has been successfully applied to a variety of problems such as matrix completion, inpainting, denoising, and super-resolution.
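Before turning to the Bayesian treatment, the following minimal sketch illustrates the first, optimization-based route mentioned above: alternating ridge-regularized least squares updates of the two factors. It is illustrative only; the synthetic data, the rank \texttt{K}, and the penalty \texttt{lam} are arbitrary choices, not prescriptions from the cited works.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, K, lam = 60, 80, 5, 0.1
A = rng.standard_normal((M, K)) @ rng.standard_normal((K, N))  # synthetic low-rank data

W = rng.standard_normal((M, K))
Z = rng.standard_normal((K, N))
for it in range(50):
    # Each update is the closed-form ridge least-squares solution given the other factor
    W = A @ Z.T @ np.linalg.inv(Z @ Z.T + lam * np.eye(K))
    Z = np.linalg.inv(W.T @ W + lam * np.eye(K)) @ W.T @ A

print(round(float(np.linalg.norm(A - W @ Z)), 4))  # reconstruction error after ALS
\end{verbatim}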
The model is based on a Bayesian framework, and inference is mainly performed using Markov chain Monte Carlo (MCMC) methods. Given a data matrix $\bm{A}$, the bilinear matrix decomposition considers factoring it as $\bm{A}=\bm{W}\bm{Z}$. From a probabilistic perspective, the minimization problem of finding the factored matrices $\bm{W},\bm{Z}$ is recast as inferring the distributions over the latent variables $\bm{W},\bm{Z}$ after observing the data matrix $\bm{A}$. Bayesian approaches extend this by placing prior distributions over the latent variables $\bm{W},\bm{Z}$, in which case we can either try to infer a \textit{point estimate} from a \textit{maximum likelihood estimator} by maximizing the likelihood $ \mathop{\max}_{\bm{W},\bm{Z}} p(\bm{A}\mid\bm{W},\bm{Z})$; or from a \textit{maximum a posteriori estimator} by $ \mathop{\max}_{\bm{W},\bm{Z}} p(\bm{W},\bm{Z} \mid \bm{A})$; or find the full posterior distribution $p(\bm{W},\bm{Z} \mid\bm{A})$. Bayesian matrix decomposition approaches use a likelihood distribution to capture noise in the data, e.g., a Poisson likelihood for count data, or a Gaussian likelihood for real-valued or nonnegative data. Based on the constraints, e.g., nonnegativity, count, sparsity, or ordinal values, different priors are placed over the entries in $\bm{W}, \bm{Z}$. In non-probabilistic matrix decomposition language, the likelihood can be regarded as a cost function (e.g., to minimize the mean squared error), and the priors can be treated as a penalization term over the factored components (e.g., to favor sparsity in $\bm{W},\bm{Z}$). Situated within this book, our goal is to expand the repertoire of Bayesian matrix factorization algorithms. In numerical matrix decomposition methods \citep{lu2021numerical}, a matrix decomposition task on matrix $\bm{A}$ can be cast as follows:
\begin{itemize}
\item $\bm{A}=\bm{Q}\bm{U}$: where $\bm{Q}$ is an orthogonal matrix whose columns span the same column space as $\bm{A}$, and $\bm{U}$ is a relatively simple and sparse matrix that reconstructs $\bm{A}$.
\item $\bm{A}=\bm{Q}\bm{T}\bm{Q}^\top$: where $\bm{Q}$ is orthogonal such that $\bm{A}$ and $\bm{T}$ are \textit{similar matrices} that share the same eigenvalues, while $\bm{T}$ has a simpler (e.g., sparser) structure. Moreover, working on $\bm{T}$ is an easier task compared to working on $\bm{A}$.
\item $\bm{A}=\bm{U}\bm{T}\bm{V}$: where $\bm{U}, \bm{V}$ are orthogonal matrices such that the columns of $\bm{U}$ and the rows of $\bm{V}$ constitute an orthonormal basis of the column space and row space of $\bm{A}$, respectively.
\item $\underset{m\times n}{\bm{A}}=\underset{m\times r}{\bm{B}}\gap \underset{r\times n}{\bm{C}}$: where $\bm{B},\bm{C}$ are full-rank matrices that can reduce the memory storage of $\bm{A}$. In practice, a low-rank approximation $\underset{m\times n}{\bm{A}}\approx \underset{m\times k}{\bm{D}}\gap \underset{k\times n}{\bm{F}}$ can be employed, where $k<r$ is called the \textit{numerical rank} of the matrix, such that the matrix can be stored much more inexpensively and can be multiplied rapidly with vectors or other matrices. An approximation of the form $\bm{A}\approx\bm{D}\bm{F}$ is useful for storing the matrix $\bm{A}$ more frugally (we can store $\bm{D}$ and $\bm{F}$ using $k(m+n)$ floats, as opposed to $mn$ numbers for storing $\bm{A}$), for efficiently computing a matrix-vector product $\bm{b} = \bm{A}\bm{x}$ (via $\bm{c} = \bm{F}\bm{x}$ and $\bm{b} = \bm{D}\bm{c}$), for data interpretation, and much more.
\end{itemize}
In contrast, in Bayesian matrix decomposition, we consider simpler factorization forms, e.g., the (low-rank) real-valued factorization, nonnegative factorization, the factorization for count and ordinal data sets, and the Bayesian interpolative decomposition (ID). The sole aim of this book is to give a self-contained introduction to concepts and mathematical tools in Bayesian inference and matrix analysis in order to seamlessly introduce matrix decomposition (or factorization) techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning Bayesian matrix decomposition, given the paucity of scope of this discussion; for example, we omit the separate analysis of high-order Bayesian decomposition, nonparametric matrix factorization, and variational inference for Bayesian matrix decomposition. We refer the reader to the literature in the field of Bayesian analysis for a more detailed introduction to these topics. Some excellent examples include \citet{rai2015leveraging, qian2016bayesian, lu2021numerical, takayama2022bayesian}.
\paragraph{Notation and preliminaries.} In the rest of this section, we will introduce and recap some basic knowledge about linear algebra related to matrix factorization. For the remaining important concepts, we define and discuss them as needed for clarity. Readers with enough background in matrix analysis can skip this section. In the text, we simplify matters by considering only matrices that are real. Unless stated otherwise, we assume throughout that $\norm{\cdot}=\norm{\cdot}_2$, i.e., the norm is an $L_2$ norm or a Frobenius norm for vectors or matrices, respectively. In all cases, scalars will be denoted in a non-bold font possibly with subscripts (e.g., $a$, $\alpha$, $\alpha_i$). We will use \textbf{boldface} lower case letters possibly with subscripts to denote vectors (e.g., $\boldsymbol\mu$, $\bm{x}$, $\bm{x}_n$, $\bm{z}$) and \textbf{boldface} upper case letters possibly with subscripts to denote matrices (e.g., $\bm{A}$, $\bm{L}_j$). The $i$-th element of a vector $\bm{z}$ will be denoted by $z_i$ in the non-bold font. Meanwhile, the \textit{normal fonts} of scalars denote \textbf{random variables} (e.g., $\textnormal{a}$ and $\textnormal{b}_1$ are random variables, while italics $a$ and $b_1$ are scalars); the normal fonts of \textbf{boldface} lower case letters possibly with subscripts denote \textbf{random vectors} (e.g., ${\mathbf{a}}$ and ${\mathbf{b}}_1$ are random vectors, while italics $\bm{a}$ and $\bm{b}_1$ are vectors); and the normal fonts of \textbf{boldface} upper case letters possibly with subscripts denote \textbf{random matrices} (e.g., ${\mathbf{A}}$ and ${\mathbf{B}}_1$ are random matrices, while italics $\bm{A}$ and $\bm{B}_1$ are matrices). The $n$-th element in a sequence is denoted by a superscript (in parentheses), e.g., $\bm{A}^{n}$ or $\bm{A}^{(n)}$ denotes the $n$-th matrix in a sequence, and $\bm{a}^{k}$ or $\bm{a}^{(k)}$ denotes the $k$-th vector in a sequence. Subarrays are formed when a subset of the indices is fixed. The $i$-th row and $j$-th column value of matrix $\bm{A}$ (entry ($i,j$) of $\bm{A}$) will be denoted by $a_{ij}$. Furthermore, it will be helpful to utilize the \textbf{Matlab-style notation}: the $i$-th row to the $j$-th row and the $k$-th column to the $m$-th column submatrix of the matrix $\bm{A}$ will be denoted by $\bm{A}_{i:j,k:m}$.
A colon is used to indicate all elements of a dimension, e.g., $\bm{A}_{:,k:m}$ denotes the $k$-th column to the $m$-th column of the matrix $\bm{A}$, and $\bm{A}_{:,k}$ denotes the $k$-th column of $\bm{A}$. Alternatively, the $k$-th column of $\bm{A}$ may be denoted more compactly by $\bm{a}_k$. When the index is not continuous, given ordered subindex sets $I$ and $J$, $\bm{A}[I, J]$ denotes the submatrix of $\bm{A}$ obtained by extracting the rows and columns of $\bm{A}$ indexed by $I$ and $J$, respectively; and $\bm{A}[:, J]$ denotes the submatrix of $\bm{A}$ obtained by extracting the columns of $\bm{A}$ indexed by $J$, where the $[:, J]$ syntax in this expression selects all rows from $\bm{A}$ and only the columns specified by the indices in $J$.
\begin{definition}[Matlab Notation]\label{definition:matlabnotation}
Suppose $\bm{A}\in \mathbb{R}^{m\times n}$, and $I=[i_1, i_2, \ldots, i_k]$ and $J=[j_1, j_2, \ldots, j_l]$ are two index vectors; then $\bm{A}[I,J]$ denotes the $k\times l$ submatrix
$$
\bm{A}[I,J]=
\begin{bmatrix}
a_{i_1,j_1} & a_{i_1,j_2} &\ldots & a_{i_1,j_l}\\
a_{i_2,j_1} & a_{i_2,j_2} &\ldots & a_{i_2,j_l}\\
\vdots & \vdots&\ddots & \vdots\\
a_{i_k,j_1} & a_{i_k,j_2} &\ldots & a_{i_k,j_l}\\
\end{bmatrix}.
$$
Analogously, $\bm{A}[I,:]$ denotes the $k\times n$ submatrix, and $\bm{A}[:,J]$ denotes the $m\times l$ submatrix. We note that it does not matter whether the index vectors $I,J$ are row vectors or column vectors; what matters is which axis they index (rows of $\bm{A}$ or columns of $\bm{A}$). We should also notice the range of the indices:
$$
\left\{
\begin{aligned}
1&\leq \min(I) \leq \max(I)\leq m;\\
1&\leq \min(J) \leq \max(J)\leq n.
\end{aligned}
\right.
$$
\end{definition}
In all cases, vectors are formulated as columns rather than rows. A row vector will be denoted by the transpose of a column vector, such as $\bm{a}^\top$. A specific column vector with values has its entries separated by the semicolon symbol $``;"$, e.g., $\bm{x}=[1;2;3]$ is a column vector in $\mathbb{R}^3$. Similarly, a specific row vector with values has its entries separated by the comma symbol $``,"$, e.g., $\bm{y}=[1,2,3]$ is a row vector with 3 values. Further, a column vector can be denoted by the transpose of a row vector, e.g., $\bm{y}=[1,2,3]^\top$ is a column vector. The transpose of a matrix $\bm{A}$ will be denoted by $\bm{A}^\top$, and its inverse will be denoted by $\bm{A}^{-1}$. We will denote the $p \times p$ identity matrix by $\bm{I}_p$ (or simply $\bm{I}$ when the size can be determined from context). A vector or matrix of all zeros will be denoted by a \textbf{boldface} zero $\mathbf{0}$, whose size should be clear from context, or we denote $\mathbf{0}_p$ to be the vector of all zeros with $p$ entries. In this context, we will make heavy use of the idea of linear independence of a set of vectors. Two equivalent definitions are given as follows.
\begin{definition}[Linearly Independent]
A set of vectors $\{\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_m\}$ is called linearly independent if the only linear combination satisfying $x_1\bm{a}_1+x_2\bm{a}_2+\ldots+x_m\bm{a}_m=\mathbf{0}$ is the one in which all the $x_i$'s are zero. An equivalent definition is that $\bm{a}_1\neq \mathbf{0}$, and for every $k>1$, the vector $\bm{a}_k$ does not belong to the span of $\{\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_{k-1}\}$.
\end{definition}
In the study of linear algebra, every vector space has a basis and every vector is a linear combination of members of the basis. We then define the span and dimension of a subspace via the basis.
\begin{definition}[Span]
If every vector $\bm{v}$ in subspace $\mathcal{V}$ can be expressed as a linear combination of $\{\bm{a}_1, \bm{a}_2, \ldots,$ $\bm{a}_m\}$, then $\{\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_m\}$ is said to span $\mathcal{V}$.
\end{definition}
\begin{definition}[Subspace]
A nonempty subset $\mathcal{V}$ of $\mathbb{R}^n$ is called a subspace if $x\bm{a}+y\bm{b}\in \mathcal{V}$ for every $\bm{a},\bm{b}\in \mathcal{V}$ and every $x,y\in \mathbb{R}$.
\end{definition}
\begin{definition}[Basis and Dimension]
A set of vectors $\{\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_m\}$ is called a basis of $\mathcal{V}$ if they are linearly independent and span $\mathcal{V}$. Every basis of a given subspace has the same number of vectors, and the number of vectors in any basis is called the dimension of the subspace $\mathcal{V}$. By convention, the subspace $\{\mathbf{0}\}$ is said to have dimension zero. Furthermore, every subspace of nonzero dimension has a basis that is orthogonal, i.e., the basis of a subspace can be chosen orthogonal.
\end{definition}
\begin{definition}[Column Space (Range)]
If $\bm{A}$ is an $m \times n$ real matrix, we define the column space (or range) of $\bm{A}$ to be the set spanned by its columns:
\begin{equation*}
\mathcal{C} (\bm{A}) = \{ \bm{y}\in \mathbb{R}^m: \exists \bm{x} \in \mathbb{R}^n, \, \bm{y} = \bm{A} \bm{x} \}.
\end{equation*}
And the row space of $\bm{A}$ is the set spanned by its rows, which is equal to the column space of $\bm{A}^\top$:
\begin{equation*}
\mathcal{C} (\bm{A}^\top) = \{ \bm{x}\in \mathbb{R}^n: \exists \bm{y} \in \mathbb{R}^m, \, \bm{x} = \bm{A}^\top \bm{y} \}.
\end{equation*}
\end{definition}
\begin{definition}[Null Space (Nullspace, Kernel)]\label{definition:null_space}
If $\bm{A}$ is an $m \times n$ real matrix, we define the null space (or kernel, or nullspace) of $\bm{A}$ to be the set:
\begin{equation*}
\mathcal{N} (\bm{A}) = \{\bm{y} \in \mathbb{R}^n: \, \bm{A} \bm{y} = \mathbf{0} \}.
\end{equation*}
And the null space of $\bm{A}^\top$ is defined as
\begin{equation*}
\mathcal{N} (\bm{A}^\top) = \{\bm{x} \in \mathbb{R}^m: \, \bm{A}^\top \bm{x} = \mathbf{0} \}.
\end{equation*}
\end{definition}
Both the column space of $\bm{A}$ and the null space of $\bm{A}^\top$ are subspaces of $\mathbb{R}^m$. In fact, every vector in $\mathcal{N}(\bm{A}^\top)$ is perpendicular to $\mathcal{C}(\bm{A})$ and vice versa.\footnote{Every vector in $\mathcal{N}(\bm{A})$ is also perpendicular to $\mathcal{C}(\bm{A}^\top)$ and vice versa.}
\begin{definition}[Rank]
The \textit{rank} of a matrix $\bm{A}\in \mathbb{R}^{m\times n}$ is the dimension of the column space of $\bm{A}$. That is, the rank of $\bm{A}$ is equal to the maximal number of linearly independent columns of $\bm{A}$, and is also the maximal number of linearly independent rows of $\bm{A}$. The matrix $\bm{A}$ and its transpose $\bm{A}^\top$ have the same rank. We say that $\bm{A}$ has full rank if its rank is equal to $\min\{m,n\}$. In other words, this is true if and only if either all the columns of $\bm{A}$ are linearly independent, or all the rows of $\bm{A}$ are linearly independent. Specifically, given a nonzero vector $\bm{u} \in \mathbb{R}^m$ and a nonzero vector $\bm{v} \in \mathbb{R}^n$, the $m\times n$ matrix $\bm{u}\bm{v}^\top$ obtained by the outer product of the two vectors is of rank 1.
In short, the rank of a matrix is equal to:
\begin{itemize}
\item the number of linearly independent columns;
\item the number of linearly independent rows;
\item and, remarkably, these two numbers are always the same (see \citet{lu2021numerical}).
\end{itemize}
\end{definition}
\begin{definition}[Orthogonal Complement in General]
The orthogonal complement $\mathcal{V}^\perp$ of a subspace $\mathcal{V}$ contains every vector that is perpendicular to $\mathcal{V}$. That is,
$$
\mathcal{V}^\perp = \{\bm{v} : \bm{v}^\top\bm{u}=0, \,\,\, \forall \bm{u}\in \mathcal{V} \}.
$$
The two subspaces intersect only at the zero vector, and together they span the entire space. The dimensions of $\mathcal{V}$ and $\mathcal{V}^\perp$ add up to the dimension of the whole space. Furthermore, $(\mathcal{V}^\perp)^\perp=\mathcal{V}$.
\end{definition}
\begin{definition}[Orthogonal Complement of Column Space]
If $\bm{A}$ is an $m \times n$ real matrix, the orthogonal complement $\mathcal{C}^{\bot}(\bm{A})$ of $\mathcal{C}(\bm{A})$ is the subspace defined as:
\begin{equation*}
\begin{aligned}
\mathcal{C}^{\bot}(\bm{A}) &= \{\bm{y}\in \mathbb{R}^m: \, \bm{y}^\top \bm{A} \bm{x}=0, \, \forall \bm{x} \in \mathbb{R}^n \} \\
&=\{\bm{y}\in \mathbb{R}^m: \, \bm{y}^\top \bm{v} = 0, \, \forall \bm{v} \in \mathcal{C}(\bm{A}) \}.
\end{aligned}
\end{equation*}
\end{definition}
Then we have the four fundamental spaces for any matrix $\bm{A}\in \mathbb{R}^{m\times n}$ with rank $r$:
\begin{description}
\item $\bullet$ $\mathcal{C}(\bm{A})$: Column space of $\bm{A}$, i.e., linear combinations of columns, with dimension $r$;
\item $\bullet$ $\mathcal{N}(\bm{A})$: Null space of $\bm{A}$, i.e., all $\bm{x}$ with $\bm{A}\bm{x}=\mathbf{0}$, with dimension $n-r$;
\item $\bullet$ $\mathcal{C}(\bm{A}^\top)$: Row space of $\bm{A}$, i.e., linear combinations of rows, with dimension $r$;
\item $\bullet$ $\mathcal{N}(\bm{A}^\top)$: Left null space of $\bm{A}$, i.e., all $\bm{y}$ with $\bm{A}^\top \bm{y}=\mathbf{0}$, with dimension $m-r$,
\end{description}
where $r$ is the rank of the matrix. Furthermore, $\mathcal{N}(\bm{A})$ is the orthogonal complement to $\mathcal{C}(\bm{A}^\top)$, and $\mathcal{C}(\bm{A})$ is the orthogonal complement to $\mathcal{N}(\bm{A}^\top)$ \citep{lu2021numerical}.
\index{Vector norm}
\index{Matrix norm}
\begin{definition}[Vector 2-Norm]
For a vector $\bm{x}\in\mathbb{R}^n$, the $L_2$ vector norm is defined as $\norm{\bm{x}}_2 = \sqrt{x_1^2+x_2^2+\ldots+x_n^2}$.
\end{definition}
For a matrix $\bm{A}\in\mathbb{R}^{m\times n}$, we define the (matrix) Frobenius norm as follows.
\begin{definition}[Frobenius Norm]\label{definition:frobenius}
The Frobenius norm of a matrix $\bm{A}\in \mathbb{R}^{m\times n}$ is defined as
$$
\norm{\bm{A}}_F = \sqrt{\sum_{i=1,j=1}^{m,n} (a_{ij})^2}=\sqrt{\mathrm{tr}(\bm{A}\bA^\top)}=\sqrt{\mathrm{tr}(\bm{A}^\top\bm{A})} = \sqrt{\sigma_1^2+\sigma_2^2+\ldots+\sigma_r^2},
$$
i.e., the square root of the sum of the squares of the elements of $\bm{A}$, where the $\sigma_i$'s are the singular values of $\bm{A}$, and $r$ is the rank of $\bm{A}$.
\end{definition}
The equivalence of $\sqrt{\sum_{i=1,j=1}^{m,n} (a_{ij})^2}$, $\sqrt{\mathrm{tr}(\bm{A}\bA^\top)}$, and $\sqrt{\mathrm{tr}(\bm{A}^\top\bm{A})}$ is trivial. Meanwhile, we can use the singular value decomposition (SVD) to show the equivalence between $\sqrt{\mathrm{tr}(\bm{A}\bA^\top)}$ and $\sqrt{\sigma_1^2+\sigma_2^2+\ldots+\sigma_r^2}$.
Suppose $\bm{A}$ admits the SVD $\bm{A} = \bm{U}\boldsymbol\Sigma\bm{V}^\top$; then it follows that
$$
\sqrt{\mathrm{tr}(\bm{A}\bA^\top)} = \sqrt{\mathrm{tr}(\bm{U}\boldsymbol\Sigma\bm{V}^\top \bm{V}\boldsymbol\Sigma\bm{U}^\top)} = \sqrt{\mathrm{tr}(\boldsymbol\Sigma^2)}=\sqrt{\sigma_1^2+\sigma_2^2+\ldots+\sigma_r^2},
$$
where the $\sigma_i$'s are the singular values of $\bm{A}$. Equivalently, the Frobenius norm can also be defined via the vector 2-norm such that $\norm{\bm{A}}_F = \sqrt{\sum_{i=1}^{n} \norm{\bm{a}_i}^2}$, where $\bm{a}_i$ for all $i \in \{1,2,\ldots, n\}$ are the columns of $\bm{A}$.
\begin{definition}[Permutation Matrix]\label{definition:permutation-matrix}
A permutation matrix $\bm{P}$ is a square binary matrix that has exactly one entry of 1 in each row and each column, and 0's elsewhere.
\paragraph{Row Point.} That is, the permutation matrix $\bm{P}$ has the rows of the identity $\bm{I}$ in any order, and this order decides the sequence of the row permutation. If we want to permute the rows of matrix $\bm{A}$, we multiply $\bm{A}$ on the left by $\bm{P}$ to obtain $\bm{P}\bm{A}$.
\paragraph{Column Point.} Or, equivalently, the permutation matrix $\bm{P}$ has the columns of the identity $\bm{I}$ in any order, and this order decides the sequence of the column permutation. The column permutation of $\bm{A}$ is then obtained by multiplying $\bm{A}$ on the right by $\bm{P}$, i.e., $\bm{A}\bm{P}$.
\end{definition}
The permutation matrix $\bm{P}$ can be more efficiently represented via a vector $J \in \mathbb{Z}_+^n$ of indices such that $\bm{P} = \bm{I}[:, J]$, where $\bm{I}$ is the $n\times n$ identity matrix; notably, the elements in vector $J$ sum to $1+2+\ldots+n= \frac{n^2+n}{2}$.
\begin{example}[Permutation]
Suppose
$$\bm{A}=\begin{bmatrix}
1 & 2&3\\
4&5&6\\
7&8&9
\end{bmatrix}
,\qquad \text{and} \qquad
\bm{P}=\begin{bmatrix}
&1&\\
&&1\\
1&&
\end{bmatrix}.
$$
The row permutation is given by
$$
\bm{P}\bm{A} =
\begin{bmatrix}
4&5&6\\
7&8&9\\
1 & 2&3\\
\end{bmatrix},
$$
where the order of the rows of $\bm{A}$ appearing in $\bm{P}\bm{A}$ matches the order of the rows of $\bm{I}$ in $\bm{P}$. And the column permutation is given by
$$
\bm{A}\bm{P} =
\begin{bmatrix}
3 & 1 & 2 \\
6 & 4 & 5\\
9 & 7 & 8
\end{bmatrix},
$$
where the order of the columns of $\bm{A}$ appearing in $\bm{A}\bm{P}$ matches the order of the columns of $\bm{I}$ in $\bm{P}$. \hfill $\square$\par
\end{example}
\begin{definition}[Selection Matrix]\label{definition:selection-matrix}
A selection matrix $\bm{S}$ is a square diagonal matrix whose diagonal entries are either 1 or 0. The positions of the 1 entries correspond to the rows or columns that will be selected.
\paragraph{Row Point.} That is, the selection matrix $\bm{S}$ keeps the rows of the identity $\bm{I}$ corresponding to the rows we want to select, and masks the other rows of the identity $\bm{I}$ by zero. If we want to select rows of matrix $\bm{A}$, we multiply $\bm{A}$ on the left by $\bm{S}$ to obtain $\bm{S}\bm{A}$.
\paragraph{Column Point.} Or, equivalently, the selection matrix $\bm{S}$ keeps the columns of the identity $\bm{I}$ corresponding to the columns we want to select, and masks the other columns of the identity $\bm{I}$ by zero. The column selection of $\bm{A}$ is then obtained by multiplying $\bm{A}$ on the right by $\bm{S}$, i.e., $\bm{A}\bm{S}$.
\end{definition}
\begin{example}[Selection and Permutation]
Suppose
$$\bm{A}=\begin{bmatrix}
1 & 2&3\\
4&5&6\\
7&8&9
\end{bmatrix}
,\qquad \text{and} \qquad
\bm{S}=\begin{bmatrix}
1&&\\
& 0&\\
&&1
\end{bmatrix}.
$$
The row selection is given by
$$
\bm{S}\bm{A} =
\begin{bmatrix}
1&2&3\\
0&0&0\\
7 & 8&9\\
\end{bmatrix},
$$
where the rows of $\bm{A}$ appearing in $\bm{S}\bm{A}$ match the row entries of $\bm{S}$. And the column selection is given by
$$
\bm{A}\bm{S} =
\begin{bmatrix}
1& 0 & 3 \\
4 & 0 & 6\\
7 & 0 & 9
\end{bmatrix},
$$
where the columns of $\bm{A}$ appearing in $\bm{A}\bm{S}$ match the column entries of $\bm{S}$. If we now want to move the selected rows or columns to the upper-left part of the final matrix, we can construct a permutation matrix as follows:
$$
\bm{P}=\begin{bmatrix}
1&&\\
& & 1\\
&1&
\end{bmatrix},
$$
such that
$$
\bm{P}\bm{S}\bm{A} =
\begin{bmatrix}
1&2&3\\
7 & 8&9\\
0&0&0\\
\end{bmatrix},
$$
and
$$
\bm{A}\bm{S}\bm{P} =
\begin{bmatrix}
1 & 3& 0 \\
4 & 6& 0\\
7 & 9& 0
\end{bmatrix}.
$$
This trick is essential to some mathematical proofs. \hfill $\square$\par
\end{example}
In conclusion, regarding the equivalent characterizations of nonsingular matrices, we have the following remark from an introductory course on linear algebra.
\begin{remark}[List of Equivalence of Nonsingularity for a Matrix]
For a square matrix $\bm{A}\in \mathbb{R}^{n\times n}$, the following claims are equivalent:
\begin{itemize}
\item $\bm{A}$ is nonsingular;
\item $\bm{A}$ is invertible, i.e., $\bm{A}^{-1}$ exists;
\item $\bm{A}\bm{x}=\bm{b}$ has a unique solution $\bm{x} = \bm{A}^{-1}\bm{b}$;
\item $\bm{A}\bm{x} = \mathbf{0}$ has a unique, trivial solution: $\bm{x}=\mathbf{0}$;
\item Columns of $\bm{A}$ are linearly independent;
\item Rows of $\bm{A}$ are linearly independent;
\item $\mathrm{det}(\bm{A}) \neq 0$;
\item $\dim(\mathcal{N}(\bm{A}))=0$;
\item $\mathcal{N}(\bm{A}) = \{\mathbf{0}\}$, i.e., the null space is trivial;
\item $\mathcal{C}(\bm{A})=\mathcal{C}(\bm{A}^\top) = \mathbb{R}^n$, i.e., the column space and row space span the whole $\mathbb{R}^n$;
\item $\bm{A}$ has full rank $r=n$;
\item The reduced row echelon form is $\bm{R}=\bm{I}$;
\item $\bm{A}^\top\bm{A}$ is symmetric positive definite;
\item $\bm{A}$ has $n$ nonzero (positive) singular values;
\item All eigenvalues are nonzero.
\end{itemize}
\end{remark}
Keeping the above equivalences in mind is important to avoid confusion. On the other hand, the following remark also shows the equivalent claims for singular matrices.
\begin{remark}[List of Equivalence of Singularity for a Matrix]
For a square matrix $\bm{A}\in \mathbb{R}^{n\times n}$ with eigenpair $(\lambda, \bm{u})$, the following claims are equivalent:
\begin{itemize}
\item $(\bm{A}-\lambda\bm{I})$ is singular;
\item $(\bm{A}-\lambda\bm{I})$ is not invertible;
\item $(\bm{A}-\lambda\bm{I})\bm{x} = \mathbf{0}$ has nonzero solutions $\bm{x}\neq \mathbf{0}$, and $\bm{x}=\bm{u}$ is one such solution;
\item $(\bm{A}-\lambda\bm{I})$ has linearly dependent columns;
\item $\mathrm{det}(\bm{A}-\lambda\bm{I}) = 0$;
\item $\dim(\mathcal{N}(\bm{A}-\lambda\bm{I}))>0$;
\item Null space of $(\bm{A}-\lambda\bm{I})$ is nontrivial;
\item Columns of $(\bm{A}-\lambda \bm{I})$ are linearly dependent;
\item Rows of $(\bm{A}-\lambda \bm{I})$ are linearly dependent;
\item $(\bm{A}-\lambda \bm{I})$ has rank $r<n$;
\item Dimension of column space = dimension of row space = $r<n$;
\item $(\bm{A}-\lambda \bm{I})^\top(\bm{A}-\lambda \bm{I})$ is symmetric positive semidefinite but not positive definite;
\item $(\bm{A}-\lambda \bm{I})$ has $r<n$ nonzero (positive) singular values;
\item Zero is an eigenvalue of $(\bm{A}-\lambda \bm{I})$.
\end{itemize}
\end{remark}
\chapter{Monte Carlo Methods}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc
\newpage
\endgroup
This book focuses on Markov chain Monte Carlo (MCMC) methods for probabilistic inference, which draws conclusions from a probabilistic model. This chapter surveys the mathematical details of probabilistic inference, focusing on those aspects that will provide the foundation for the rest of this book.
\section{The Bayesian Approach}\label{section:bayes_approach}
In the past decades, the Bayesian approach has been used in a wide variety of problems in data analysis, e.g., economic forecasting, medical imaging, and population studies \citep{besag1986statistical, hill1994bayesian, marseille1996bayesian}. In modern statistics, Bayesian approaches have become increasingly important and widely used. \textit{Thomas Bayes} came up with the idea but died before publishing it. Fortunately, his friend \textit{Richard Price} carried on his work and published it in 1764. It was later independently developed by \textit{Laplace} at the end of the 18th century. In this section, we describe the basic ideas of the Bayesian approach and use the Beta-Bernoulli model and the Bayesian linear model as an appetizer illustrating the benefits and the role of prior information in Bayesian models. Bayesian modeling and statistics are fundamentally driven by Bayes' theorem. Formally, we have the following theorem.
\begin{theorem}[Bayes' Theorem]
Let ${\mathbb{S}}$ be a sample space and let $B_1, B_2, \ldots, B_K$ be a partition of ${\mathbb{S}}$ such that (1) $\cup_k B_k={\mathbb{S}}$; and (2) $B_i \cap B_j=\varnothing$ for all $i\neq j$. Let further $A$ be any event. Then it follows that
$$
P(B_k \mid A) = \frac{P(A \mid B_k)P(B_k)}{P(A)} = \frac{P(A\mid B_k)P(B_k)}{\sum_{i=1}^{K}P(A\mid B_i)P(B_i)}.
$$
\end{theorem}
In Bayesian modeling and statistics, Bayes' theorem offers a straightforward method for updating probabilities when new information, such as observed data, arises. This allows us to adjust our prior beliefs about parameters of interest. To be more specific, let $\mathcal{X} = \bm{x}_{1:N} = \{\bm{x}_1, \bm{x}_2, \ldots, \bm{x}_N\}$ be the observations of $N$ data points, and suppose they are independent and identically distributed (i.i.d.), with the probability (density) parameterized by $\boldsymbol\theta$. Note that the parameters $\boldsymbol\theta$ might include hidden variables, for example, the latent variables in a mixture model that indicate which cluster each data point belongs to. The idea of the Bayesian approach is to assume a \textit{prior} probability distribution for $\boldsymbol\theta$ with hyperparameters $\boldsymbol\alpha$ (i.e., $p(\boldsymbol\theta\mid \boldsymbol\alpha)$, also known as the probability of the model); that is, a distribution representing the plausibility of each possible value of $\boldsymbol\theta$ before the data is observed, capturing our prior uncertainty regarding $\boldsymbol\theta$. The joint distribution of $\boldsymbol\theta$ and $\mathcal{X}$ is given by
$$
p(\boldsymbol\theta, \mathcal{X}) = p(\boldsymbol\theta \mid \boldsymbol\alpha) p(\mathcal{X} \mid \boldsymbol\theta).
$$
We can integrate out $\boldsymbol\theta$ to get the marginal distribution of $\mathcal{X}$:
$$
p(\mathcal{X})= \int_{\boldsymbol\theta} p(\boldsymbol\theta \mid \boldsymbol\alpha)p(\mathcal{X} \mid \boldsymbol\theta) d\boldsymbol\theta.
$$
Then, to make inferences about $\boldsymbol\theta$, one simply considers the conditional distribution of $\boldsymbol\theta$ given the observed data. This is referred to as the \textit{posterior} distribution, since it represents the plausibility of each possible value of $\boldsymbol\theta$ after seeing the data. The posterior distribution characterizes the solution space of a given problem, since it measures the plausibility of each candidate model in light of the data. Mathematically, this is expressed via Bayes' theorem,
\begin{equation}\label{equation:posterior_abstract_for_mcmc}
\begin{aligned}
p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)
&= \frac{p(\mathcal{X} \mid \boldsymbol\theta ) p(\boldsymbol\theta \mid \boldsymbol\alpha)}{p(\mathcal{X} \mid \boldsymbol\alpha)} \\
&= \frac{p(\mathcal{X} \mid \boldsymbol\theta ) p(\boldsymbol\theta \mid \boldsymbol\alpha)}{\int_{\boldsymbol\theta} p(\mathcal{X}, \boldsymbol\theta \mid \boldsymbol\alpha) d\boldsymbol\theta} = \frac{p(\mathcal{X} \mid \boldsymbol\theta ) p(\boldsymbol\theta \mid \boldsymbol\alpha)}{\int_{\boldsymbol\theta} p(\mathcal{X} \mid \boldsymbol\theta ) p(\boldsymbol\theta \mid \boldsymbol\alpha) d\boldsymbol\theta} \propto p(\mathcal{X} \mid \boldsymbol\theta ) p(\boldsymbol\theta \mid \boldsymbol\alpha),
\end{aligned}
\end{equation}
where $\mathcal{X}$ is the observed data set and $p(\mathcal{X} \mid \boldsymbol\alpha )$ can be ignored in this case since it acts as a normalizing constant (and, as we shall see, the MCMC algorithm only needs relative probabilities). In other words, we say the posterior is proportional to the likelihood times the prior. This means that the relative probability at a point in the solution space is determined completely by the likelihood, which is easily determined by comparing the model to the data, and the prior, which is the probability of the model independent of the data. The prior encodes any knowledge of the solution independent of the data. For example, a prior for a system reducing over-clustering might give a higher probability to a larger cluster than to a small cluster \citep{lu2021survey}. More generally, the Bayesian approach, in a nutshell, is to assume a prior distribution on any unknowns ($\boldsymbol\theta$ in our case), and then just follow the rules of probability to answer any questions of interest. For example, when we select the parameter based on the maximum posterior probability of $\boldsymbol\theta$, we obtain the \textit{maximum a posteriori (MAP)} estimator.
\paragraph{Frequentist vs. Bayesian.} The \textit{frequentist approach} to statistics, developed by Neyman, evaluates statistical procedures based on a probability distribution over all possible data sets. To be more specific, frequentists consider the parameter vector $\boldsymbol\theta$ to be fixed (albeit unknown), while introducing uncertainty over possible data sets $\mathcal{X}$. In contrast, the Bayesian approach treats the data set $\mathcal{X}$ as given, while introducing uncertainty over $\boldsymbol\theta$. However, statisticians nowadays tend to move comfortably between these approaches, and popular statistical procedures often combine both of them. For instance, empirical Bayesian methods have a Bayesian spirit but are not strictly Bayesian, and their analysis is frequently frequentist \citep{haugh2021tutorial}.
\section{Approximate Inference}
In this book, we focus on approximate probabilistic inference methods. In certain cases, it is computationally feasible to compute the posterior exactly.
For example, exponential families with conjugate priors often enable analytical solutions. Although exact inference methods exist and are precise and useful for certain classes of problems, exact inference in complicated models is usually intractable, because these methods typically depend on integrals, summations, or intermediate representations that grow with the size of the state space, making the computation inefficient. For example, we may use conjugate priors in a Gaussian mixture model; however, the model is hierarchical and too complicated for the exact posterior to be computed. In these cases, approximate probabilistic inference methods are rather useful and necessary. Generally, \textit{variational methods} and \textit{Monte Carlo methods} are the two main classes of approximate inference. We here give a brief comparison of the two methods \citep{bonawitz2008composable}. In variational inference methods, we first approximate the full model with a simpler model in which the inference questions are tractable. Then, the parameters of this simplified model are calculated by some methods (e.g., by optimization methods) to minimize a measure of the dissimilarity between the original model and the simplified version; this calculation is usually performed deterministically because of the optimization methods used. Finally, inference queries can be calculated and executed in the simplified model. In other words, the main idea behind variational methods is to pick a family of distributions over the parameters with its own \textit{variational parameters}, $q(\boldsymbol\theta \mid \boldsymbol\nu)$, where $\boldsymbol\nu$ denotes the variational parameters, and then to find the setting of these parameters that makes $q$ close to the posterior of interest. As a detailed example, we can refer to \citet{ma2014bayesian}. The main advantage of variational methods is that they are deterministic; however, the corresponding results are in the form of a lower bound of the desired quantity, and the tightness of this bound depends on the degree to which the simplified distribution can model the original posterior distribution. Variational inference is an important tool for Bayesian deep learning \citep{jordan1999introduction, graves2011practical, hoffman2013stochastic, ranganath2014black, mandt2014smoothed}. In contrast, in Monte Carlo methods we first draw a sequence of samples from the true target posterior distribution. Certain inference questions are then answered by using this set of samples as an approximation of the target distribution itself. Monte Carlo methods are guaranteed to converge: if you want a more accurate answer, you just need to run the inference for longer; in the limit of running the Monte Carlo algorithm forever, the approximation results from the samples converge to the target distribution (see Section~\ref{sec:monte_carlo_methods}).
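As a toy illustration of this convergence property (a sketch only; the ``posterior'' here is simply a standard Gaussian, for which the exact answer is known), a Monte Carlo estimate of $\mathbb{E}[\theta^2]=1$ becomes more accurate as more samples are drawn:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 1_000, 100_000):
    samples = rng.standard_normal(n)          # draws from the "posterior" N(0, 1)
    estimate = float(np.mean(samples ** 2))   # Monte Carlo estimate of E[theta^2] = 1
    print(n, round(estimate, 3))
\end{verbatim}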
\section{Monte Carlo (MC) Methods}\label{sec:monte_carlo_methods}
In Monte Carlo methods, we first draw $N$ samples $\boldsymbol\theta_1, \boldsymbol\theta_2, \ldots, \boldsymbol\theta_N$ from the posterior distribution $p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$ in Equation~\eqref{equation:posterior_abstract_for_mcmc}, and then approximate the distribution of interest by
\begin{equation}
p(\boldsymbol\theta \mid -) \approx \overset{\sim}{p}(\boldsymbol\theta \mid -) = \frac{1}{N} \sum_{n=1}^N \delta_{\boldsymbol\theta_n}(\boldsymbol\theta),
\end{equation}
where $\delta_{\boldsymbol\theta_n}(\boldsymbol\theta)$ is the Dirac delta function\footnote{The Dirac delta function $\delta_{\bm{x}_0} (\bm{x})$ is zero everywhere except at $\bm{x} = \bm{x}_0$, where it places a unit point mass.}. As the number of samples increases, the approximation (almost surely) converges to the true target distribution, i.e., $\overset{\sim}{p}(\boldsymbol\theta) \overset{\overset{a.s.}{N\rightarrow \infty} }{\longrightarrow} p(\boldsymbol\theta)$. These kinds of sampling-based methods are extensively used in modern statistics, due to their ease of use and the generality with which they can be applied. The fundamental problem solved by these methods is the approximation of expectations such as
\begin{equation}
\mathbb{E} [h(\boldsymbol\Theta)] = \int_{\boldsymbol\theta} h(\boldsymbol\theta) p(\boldsymbol\theta) d \boldsymbol\theta,
\end{equation}
in the case of a continuous random variable $\boldsymbol\Theta$ with probability density function (p.d.f.) $p$, or
\begin{equation}
\mathbb{E} [h(\boldsymbol\Theta)] = \sum_{\boldsymbol\theta} h(\boldsymbol\theta) p(\boldsymbol\theta),
\end{equation}
in the case of a discrete random variable $\boldsymbol\Theta$ with probability mass function (p.m.f.) $p$. The general principle at work is that such expectations can be approximated by
\begin{equation}
\mathbb{E} [h(\boldsymbol\Theta)] \approx \frac{1}{N}\sum_{n=1}^N h(\boldsymbol\theta_n).
\end{equation}
If it were generally easy to draw samples directly from $p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$, the Monte Carlo story would end here. Unfortunately, this is usually intractable. We can consider the posterior form $p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha) = \frac{p(\mathcal{X} \mid \boldsymbol\theta ) p(\boldsymbol\theta \mid \boldsymbol\alpha)}{p(\mathcal{X} \mid \boldsymbol\alpha)}$, where in many problems $p(\mathcal{X} \mid \boldsymbol\theta ) p(\boldsymbol\theta \mid \boldsymbol\alpha)$ can be computed easily, but $p(\mathcal{X} \mid \boldsymbol\alpha)$ cannot, due to intractable integrals, summations, etc. In this case, Markov chain Monte Carlo is especially useful.
\subsection{Markov Chain Monte Carlo (MCMC)}
Markov chain Monte Carlo (MCMC) algorithms, also called samplers, are numerical approximation algorithms. Since MCMC algorithms directly sample the solution space, uncertainty estimates are determined simultaneously with a ``best" solution. Further, provided that the data support them, multiple solutions are possible. Intuitively, MCMC is a stochastic hill-climbing approach to inference, operating over the complete data set. This inference method is designed to spend most of the computational effort sampling points from the high-probability regions of the true target posterior distribution $p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$ \citep{andrieu2003introduction, bonawitz2008composable, hoff2009first, geyer2011introduction}.
In this sampler, a Markov chain stochastic walk is taken through the state space $\boldsymbol\Theta$ such that the probability of being in a particular state $\boldsymbol\theta_t$ at any point in the walk is $p(\boldsymbol\theta_t \mid \mathcal{X}, \boldsymbol\alpha)$. Therefore, samples from the true posterior distribution $p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$ can be approximated by recording the states visited by the stochastic walk, possibly together with some post-processing such as thinning. The stochastic walk is a Markov chain, i.e., the choice of state at time $t + 1$ depends only on the previous state, namely the state at time $t$. Formally, if $\boldsymbol\theta_t$ is the state of the chain at time $t$, then $p(\boldsymbol\theta_{t+1}\mid \boldsymbol\theta_1, \boldsymbol\theta_2,\ldots, \boldsymbol\theta_t) = p(\boldsymbol\theta_{t+1}\mid \boldsymbol\theta_t)$. That is, Markov chains are history-free, and we obtain two main advantages from this history-free property:
\begin{itemize}
\item From this history-free property, Markov chain Monte Carlo methods can be run for an unlimited number of iterations without consuming additional memory space;
\item The history-free property also indicates that the MCMC stochastic walk can be completely characterized by $p(\boldsymbol\theta_{t+1}\mid \boldsymbol\theta_t)$, known as the \textit{transition kernel}.
\end{itemize}
We then focus on the discussion of the transition kernel. The transition kernel $\bm{K}$ can also be formulated as a linear transform: if $p_t = p_t (\boldsymbol\theta)$ is a row vector that encodes the probability of the walk being in state $\boldsymbol\theta$ at time $t$, then $p_{t+1} = p_t \bm{K}$. If the stochastic walk starts from state $\boldsymbol\theta_0$, then the distribution for this initial state is the delta distribution $p_0 = \delta_{\boldsymbol\theta_0} (\boldsymbol\theta)$, and the state distribution for the chain after step $t$ is $p_t = p_0\bm{K}^t$. It is then easy to see that the key to Markov chain Monte Carlo is to choose the kernel $\bm{K}$ such that $\underset{t\rightarrow \infty}{\mathrm{lim}} p_t = p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$, independent of the choice of $\boldsymbol\theta_0$. Kernels with this property are said to converge to an \textbf{equilibrium distribution} $p_{eq} = p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$. Convergence is guaranteed if both of the following criteria are met (see \citet{bonawitz2008composable}):
\begin{itemize}
\item $p_{eq}$ is an invariant (or stationary) distribution for $\bm{K}$. A distribution $p_{inv}$ is an invariant distribution for $\bm{K}$ if $p_{inv} = p_{inv} \bm{K}$;
\item $\bm{K}$ is \textit{ergodic}. A kernel is ergodic if it is \textit{irreducible} (any state can be reached from any other state) and \textit{aperiodic} (the stochastic walk never gets stuck in cycles).
\end{itemize}
There are a large number of MCMC algorithms, too many to review here. Popular families include Gibbs sampling, Metropolis-Hastings (MH), slice sampling, Hamiltonian Monte Carlo, adaptive rejection sampling, and many others. Though the name is misleading, Metropolis-within-Gibbs (MWG) was developed first by \cite{metropolis1953equation}, and MH was a generalization of MWG \citep{hastings1970monte}. All MCMC algorithms are known as special cases of the MH algorithm.
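Returning to the kernel view above, the following toy sketch (a hypothetical three-state chain with arbitrary but irreducible and aperiodic transition probabilities; it is an illustration only, not part of any model in this book) iterates $p_{t+1} = p_t\bm{K}$ from two different initial states, and both runs converge to the same equilibrium distribution, illustrating the independence from $\boldsymbol\theta_0$.
\begin{verbatim}
import numpy as np

# A small ergodic transition kernel: row s gives p(next state | current state s)
K = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

for start in (0, 2):              # two different initial states theta_0
    p = np.zeros(3)
    p[start] = 1.0                # delta distribution at the initial state
    for _ in range(100):
        p = p @ K                 # p_{t+1} = p_t K
    print(start, np.round(p, 4))  # both runs reach the same equilibrium
\end{verbatim}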
Regardless of the algorithm, the goal of Bayesian inference is to maximize the unnormalized joint posterior distribution and to collect samples from the target distributions, which are marginal posterior distributions, later to be used for inference queries. The most generalizable MCMC algorithm is the Metropolis-Hastings (MH) generalization \citep{metropolis1953equation, hastings1970monte} of the MWG algorithm. The MH algorithm extends MWG to allow asymmetric proposal distributions. This method converts an arbitrary proposal kernel $q(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_t )$ into a transition kernel with the desired invariant distribution $p_{eq}(\boldsymbol\theta)$. To generate a sample from an MH transition kernel, we first draw a proposal $\boldsymbol\theta_{\star} \sim q( \boldsymbol\theta_{\star} \mid \boldsymbol\theta_t )$ and then evaluate the MH acceptance probability
\begin{equation}
P[A(\boldsymbol\theta_{\star}\mid \boldsymbol\theta_{t} )] = \min \left(1, \frac{p(\boldsymbol\theta_{\star} \mid \boldsymbol\alpha)q(\boldsymbol\theta_{t} \mid \boldsymbol\theta_{\star} )}{p(\boldsymbol\theta_{t} \mid \boldsymbol\alpha)q(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_{t} )} \right).
\end{equation}
With probability $P[A(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_{t} )]$ the proposal is accepted and we set $\boldsymbol\theta_{t+1} = \boldsymbol\theta_{\star}$; otherwise, the proposal is rejected and we set $\boldsymbol\theta_{t+1} = \boldsymbol\theta_{t}$. That is,
\begin{equation}
\boldsymbol\theta_{t+1} =\left\{
\begin{array}{ll}
\boldsymbol\theta_\star, \text{ with probability } P[A(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_{t} )]; \\
\boldsymbol\theta_{t}, \text{ with probability } 1 - P[A( \boldsymbol\theta_{\star} \mid \boldsymbol\theta_{t} )] .
\end{array}
\right.
\end{equation}
Intuitively, the $\frac{p(\boldsymbol\theta_{\star} \mid \boldsymbol\alpha)}{p(\boldsymbol\theta_{t} \mid \boldsymbol\alpha)}$ term tends to accept moves that lead to higher-probability parts of the state space, while the $\frac{q( \boldsymbol\theta_{t} \mid \boldsymbol\theta_{\star} )}{q(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_{t} )}$ term tends to accept moves that are easy to undo. Because in MH we only evaluate $p(\boldsymbol\theta)$ as part of the ratio $\frac{p(\boldsymbol\theta_{\star} \mid \boldsymbol\alpha)}{p(\boldsymbol\theta_{t} \mid \boldsymbol\alpha)}$, we do not need to compute $p(\mathcal{X} \mid \boldsymbol\alpha)$, as mentioned in Section~\ref{sec:monte_carlo_methods}. The key ingredient in MH is the proposal kernel $q(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_t )$. However, the transition kernel is not $q(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_t )$ itself. Informally, the kernel $K(\boldsymbol\theta_{t+1} \mid \boldsymbol\theta_t )$ in MH is
$$
p (\boldsymbol\theta_{t+1} \mid \mathrm{accept}) P[\mathrm{accept}] + p (\boldsymbol\theta_{t+1}\mid \mathrm{reject}) P[\mathrm{reject}].
\citet{tierney1998note} showed that the precise transition kernel is
\begin{equation}
\begin{aligned}
K(\boldsymbol\theta_t \rightarrow \boldsymbol\theta_{t+1}) &= p(\boldsymbol\theta_{t+1} \mid \boldsymbol\theta_t) \\
&= q(\boldsymbol\theta_{t+1} \mid \boldsymbol\theta_t ) A(\boldsymbol\theta_{t+1} \mid \boldsymbol\theta_t ) + \delta_{\boldsymbol\theta_t}(\boldsymbol\theta_{t+1}) \int_{\boldsymbol\theta_\star} q(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_t ) (1 - A(\boldsymbol\theta_{\star} \mid \boldsymbol\theta_{t} )) .
\end{aligned}
\end{equation}
\subsection{MC vs. MCMC}
As shown in previous sections, the purpose of Monte Carlo or Markov chain Monte Carlo approximation is to obtain a sequence of parameter values $\{\boldsymbol\theta^{(1)}, \ldots, \boldsymbol\theta^{(N)}\}$ such that
\begin{equation}
\frac{1}{N} \sum_{n=1}^N h(\boldsymbol\theta^{(n)}) \approx \int_{\boldsymbol\theta} h(\boldsymbol\theta) p(\boldsymbol\theta) d\boldsymbol\theta,
\end{equation}
for any function $h$ of interest in the case of continuous random variables. In other words, we want the empirical average of $\{h(\boldsymbol\theta^{(1)}), \ldots,$ $h(\boldsymbol\theta^{(N)})\}$ to approximate the expected value of $h(\boldsymbol\theta)$ under a target probability distribution $p(\boldsymbol\theta)$. In order for this to be a good approximation for a wide range of functions $h$, we require the empirical distribution of the simulated sequence $\{\boldsymbol\theta^{(1)}, \ldots, \boldsymbol\theta^{(N)}\}$ to look like the target distribution $p(\boldsymbol\theta)$. MC and MCMC are two ways of generating such a sequence. MC simulation, in which we generate independent samples from the target distribution, is in some sense the ``true situation". Independent MC samples automatically create a sequence that is representative of $p(\boldsymbol\theta)$: the probability that $\boldsymbol\theta^{(n)} \in A$ for any set $A$ is
\begin{equation}
\int_{A} p(\boldsymbol\theta) d\boldsymbol\theta,
\end{equation}
for every $n \in \{1, \ldots, N\}$. However, this is not true for MCMC samples, in which case all we are sure of is that
\begin{equation}
\lim_{n \rightarrow \infty} Pr(\boldsymbol\theta^{(n)} \in A) = \int_A p(\boldsymbol\theta) d\boldsymbol\theta.
\end{equation}
\subsection{Gibbs Sampler}\label{section:gibbs-sampler}
Gibbs sampling was introduced by Turchin \citep{turchin1971computation}, and later by brothers Geman and Geman \citep{geman1984stochastic} in the context of image restoration. The Geman brothers named the algorithm after the physicist J. W. Gibbs, some eight decades after his death, in reference to an analogy between the sampling algorithm and statistical physics. Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and easy to sample from. A Gibbs sampler generates a draw from the distribution of each parameter or variable in turn, conditional on the current values of the other parameters or variables. Therefore, a Gibbs sampler is a componentwise algorithm. In our example, suppose we are given some data $\mathcal{X}$ and a probability distribution $p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$ parameterized by $\boldsymbol\theta = \{\theta_1, \theta_2, \ldots, \theta_p\}$.
We can successively draw samples from the distribution by sampling
\begin{equation}\label{equation:gibbs_thetai_t}
\theta_i^{(t)} \sim p(\theta_i \mid \boldsymbol\theta_{-i}^{(t-1)}, \mathcal{X}, \boldsymbol\alpha),
\end{equation}
where $\boldsymbol\theta_{-i}^{(t-1)}$ denotes all current values of $\boldsymbol\theta$ in the $(t-1)$-th iteration except for $\theta_i$. If we sample long enough, these $\theta_i$ values will be random samples from the distribution $p$. If we sample new values in turn for each parameter $\theta_i$ from Equation~\eqref{equation:gibbs_thetai_t}, we will eventually converge to draws from the posterior $p(\boldsymbol\theta \mid \mathcal{X}, \boldsymbol\alpha)$. When running the Gibbs sampler, we also have to discard the first $k$ draws since it takes a while for the chain to converge (the \textit{burn-in}), and because consecutive draws are correlated we may only keep every $j$-th sample (\textit{thinning}). In deriving a Gibbs sampler, it is often helpful to observe that
\begin{equation}
p(\theta_i \mid \boldsymbol\theta_{- i}, \mathcal{X}) = \frac{ p(\theta_1, \theta_2, \ldots,\theta_p, \mathcal{X}) }{ p(\boldsymbol\theta_{- i}, \mathcal{X}) } \propto p(\theta_1, \theta_2, \ldots,\theta_p, \mathcal{X}).
\end{equation}
That is, the conditional distribution is proportional to the joint distribution. This simple observation is very useful, since it allows us to drop terms from the joint distribution that are constant with respect to the parameter being sampled. In short, as a simplified example, given a joint probability distribution $p(\theta_1,\theta_2\mid \mathcal{X})$, a Gibbs sampler would draw from $p(\theta_1\mid \theta_2,\mathcal{X})$ and then from $p(\theta_2\mid \theta_1,\mathcal{X})$ iteratively. The procedure defines a sequence of realizations of the random variables $\theta_1$ and $\theta_2$,
\begin{equation}
(\theta_1^0, \theta_2^0),\,\,\, (\theta_1^1, \theta_2^1), \,\,\,(\theta_1^2, \theta_2^2),\,\,\, \cdots \nonumber
\end{equation}
which converges to the joint distribution $p(\theta_1, \theta_2)$. More details about Gibbs sampling can be found in \citet{turchin1971computation, geman1984stochastic, hoff2009first, gelman2013bayesian}.
\subsection{Adaptive Rejection Sampling (ARS)}
The purpose of the adaptive rejection sampling algorithm is to provide a relatively efficient way to sample from distributions in the large class of log-concave densities \citep{gilks1992adaptive, wild1993algorithm}. We only give an overview of the algorithm here; more details can be found in \citet{gilks1992adaptive} and \citet{wild1993algorithm}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{imgs/rejsamp.png}
\caption{Rejection sampling. Figure from Michael I. Jordan's lecture notes.}
\label{fig:rejection_sampling}
\end{figure}
\subsubsection{Rejection Sampling}
In rejection sampling, we want to sample from a target probability density function $p(x)$, given that we can easily sample from a probability density function $q(x)$. The target density $p(x)$ can be evaluated but is difficult to sample from directly. The idea is that if $M \times q(x)$ forms an envelope over $p(x)$ for some $M > 1$, as shown in Figure~\ref{fig:rejection_sampling}, i.e.,
\begin{equation}
\frac{p(x)}{q(x)} < M, \text{ for all $x$,}
\end{equation}
then we can sample some $x_i$ from $q(x)$ and accept $x_i$ if $y_i=u \times M\times q(x_i)$ lies below $p(x_i)$ (i.e., falls within the region under the curve $p(x)$) for $u \sim \mathrm{Uniform}(0,1)$; otherwise, we reject $x_i$.
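As a concrete illustration of the accept/reject rule, the following minimal sketch (in Python, assuming NumPy is available; the target density and proposal below are chosen only for illustration) samples from $p(x)=6x(1-x)$ on $[0,1]$, i.e., a $\mathrm{Beta}(2,2)$ density, using the uniform proposal $q(x)=1$ and an envelope constant $M=2$, since $p(x)/q(x)\leq 1.5 < M$ for all $x$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def p(x):                 # target density: Beta(2, 2), i.e., 6x(1-x) on [0, 1]
    return 6.0 * x * (1.0 - x)

M = 2.0                   # envelope constant: p(x)/q(x) <= 1.5 < M for q(x) = 1

samples = []
while len(samples) < 10000:
    x = rng.uniform(0.0, 1.0)     # draw a proposal x_i ~ q(x)
    u = rng.uniform(0.0, 1.0)
    if u * M * 1.0 <= p(x):       # accept if y_i = u*M*q(x_i) falls below p(x_i)
        samples.append(x)

print(np.mean(samples))           # close to 0.5, the mean of Beta(2, 2)
\end{verbatim}
On average, only a fraction $1/M$ of the proposals is accepted, which is why a tight envelope matters and why the adaptive scheme described below keeps refining it.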
Informally, what the method does is to sample $x_i$ from a proposal distribution and then decide whether to accept or reject it.
\subsubsection{Adaptive Rejection Sampling}\label{section:ars-sampling}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{imgs/ars.png}
\caption{Adaptive rejection sampling. Figure from Michael I. Jordan's lecture notes.}
\label{fig:adaptive_rejection_sampling}
\end{figure}
The adaptive rejection sampling method is based on rejection sampling and works only for log-concave densities. The basic idea is to form an upper envelope (an upper bound on $p(x)$) adaptively and use it to replace $M\times q(x)$ in rejection sampling. As shown in Figure~\ref{fig:adaptive_rejection_sampling}, the log density $\log p(x)$ is considered. We sample $x_i$ from the upper envelope, and it is either accepted or rejected as in rejection sampling. If it is rejected, a tangent is drawn passing through the point $\left(x_i, \log p(x_i)\right)$, and this tangent is used to lower the upper envelope and decrease the number of rejected samples. The intersections of these tangent lines enable the envelope to be formed adaptively. To sample from the upper envelope, we transform back from log space by exponentiating and using properties of the exponential distribution.
\section{Bayesian Appetizers}
This section covers semi-conjugate priors with a Gibbs sampler and fully conjugate priors without approximate inference to provide a deeper understanding of Bayesian approaches. Readers who already have a basic understanding of Bayesian inference may choose to skip this section.
\subsection{Beta-Bernoulli Model}\label{sec:beta-bernoulli}
We formally introduce a \textit{Beta-Bernoulli} model to show how the Bayesian approach works. The Bernoulli distribution models binary outcomes, i.e., it outputs one of two possible values. The likelihood under this model is just the probability mass function of the Bernoulli distribution with parameter $\theta$:
\begin{equation}\label{equation:bernoulli_distribution}
\bernoulli(x\mid \theta) = p(x\mid \theta) = \theta^x (1-\theta)^{1-x} \mathds{1}(x\in \{0,1\}). \nonumber
\end{equation}
That is,
$$
\bernoulli(x\mid \theta)=p(x\mid \theta)=\left\{
\begin{aligned}
&1-\theta ,& \mathrm{\,\,if\,\,} x = 0; \\
&\theta , &\mathrm{\,\,if\,\,} x =1,
\end{aligned}
\right.
$$
where $\theta$ is the probability of outputting 1 and $1-\theta$ is the probability of outputting 0. The mean of the Bernoulli distribution is $\theta$. Suppose $\mathcal{X}=\{x_1, x_2, \ldots , x_n\}$ are drawn i.i.d. from the Bernoulli distribution $\bernoulli( \theta)$. Then the likelihood under the Bernoulli distribution with parameter $\theta$ is given by
$$
\begin{aligned}
\text{likelihood} = p(\mathcal{X} \mid \theta) &= \theta^{\sum x_i} (1-\theta)^{n-\sum x_i},
\end{aligned}
$$
which is a distribution on $\mathcal{X}$ and is called the \textit{likelihood function} on $\mathcal{X}$. We will see that the prior under this model is the probability density function of a \textit{Beta distribution}:
\begin{equation}
\mathrm{prior} = \mathrm{Beta}(\theta\mid a, b)= p(\theta\mid a, b) =\frac{1}{B(a,b)} \theta^{a-1}(1-\theta)^{b-1} \mathds{1}(0\leq \theta\leq 1), \nonumber
\end{equation}
where $B(a,b)$ is \textit{Euler's beta function}, which can simply be regarded as a normalization constant, and $\mathds{1}(a\leq x\leq b)$ is a step function that has a value of 1 when $a\leq x\leq b$, and 0 when $x<a$ or $x>b$.
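Before deriving the posterior in closed form, a small numerical sketch (in Python, assuming NumPy is available; the observations below are made up) evaluates the unnormalized product $p(\mathcal{X}\mid\theta)\, p(\theta\mid a, b)$ on a grid of $\theta$ values; as the next paragraphs show analytically, the resulting curve is again a Beta density.
\begin{verbatim}
import numpy as np

a, b = 1.0, 1.0                          # Beta prior hyperparameters
x = np.array([1, 1, 0, 1, 1, 1, 0, 1])   # made-up Bernoulli observations
n, s = len(x), x.sum()

theta = np.linspace(1e-6, 1 - 1e-6, 1001)
likelihood = theta**s * (1 - theta)**(n - s)
prior = theta**(a - 1) * (1 - theta)**(b - 1)   # unnormalized Beta(a, b)

posterior = likelihood * prior
posterior /= np.trapz(posterior, theta)         # normalize on the grid

# The grid posterior peaks at (a + s - 1)/(a + b + n - 2), the mode of
# a Beta(a + s, b + n - s) density -- the closed form derived below.
print(theta[np.argmax(posterior)])              # approximately 0.75 here
\end{verbatim}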
Figure~\ref{fig:dists_beta} compares the Beta distribution under different parameter settings. When $a=b=1$, the Beta distribution reduces to a \textit{uniform distribution} on the support $[0,1]$.
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/dists_beta.pdf}
\caption{Beta probability density functions for different values of the parameters $a$ and $b$. When $a=b=1$, the Beta distribution reduces to a \textit{uniform distribution} on the support $[0,1]$. The mean, variance, and mode of the Beta distribution are $\mathrm{E}[{\textnormal{x}}]=\frac{a}{a+b}$, $\mathrm{Var}[{\textnormal{x}}]=\frac{ab}{(a+b+1)(a+b)^2}$, and $\mathrm{mode}[{\textnormal{x}}]=\frac{a-1}{(a-1)+(b-1)}$ if $a>1, b>1$, respectively. }
\label{fig:dists_beta}
\end{SCfigure}
We put a Beta prior on the parameter $\theta$ of the Bernoulli distribution. The posterior is proportional to the product of the likelihood and prior densities, and it can be obtained by
\begin{equation}
\begin{aligned}
\mathrm{posterior} = p(\theta\mid \mathcal{X}) &\propto p(\mathcal{X} \mid \theta) p(\theta\mid a,b) \\
&=\theta^{\sum x_i} (1-\theta)^{n-\sum x_i} \times \frac{1}{B(a,b)} \theta^{a-1}(1-\theta)^{b-1}\cdot \mathds{1}(0\leq\theta\leq1) \\
&\propto \theta^{a+\sum x_i-1}(1-\theta)^{b+n-\sum x_i-1}\cdot \mathds{1}(0\leq\theta\leq 1) \\
&\propto \mathrm{Beta}\left(\theta \,\bigg|\, a+\sum_{i=1}^n x_i, b+n-\sum_{i=1}^n x_i\right). \nonumber
\end{aligned}
\end{equation}
We find that the posterior distribution has the same form as the prior distribution. When this happens, we call the prior a \textit{conjugate prior}. A conjugate prior has a convenient form: it makes it easy to compute the posterior probability density function and its derivatives, and to sample from the posterior.
\begin{remark}[Prior Information in Beta-Bernoulli Model]
A comparison of the prior and posterior formulations reveals that the hyperparameter $a$ can be interpreted as the prior number of $1$'s in the output and $b$ as the prior number of 0's. And $a+b$ represents the prior information about the sample size. Uninformative prior parameters are then $a=b=1$, i.e., a uniform distribution.
\end{remark}
\begin{remark}[Bayesian Estimator]
From this Beta-Bernoulli example, we see that, like the maximum likelihood estimator and the method of moments (MoM, i.e., using moment information to calculate the model parameters), the Bayesian approach can also provide point estimates. However, a Bayesian model outputs a full distribution over the parameter of interest, $p(\theta \mid \mathcal{X})$ in this example. When we want to predict new, unseen data, we do not form the prediction through a direct model $p(x_{n+1} \mid \theta)$, but rather through an integration:
\begin{equation}
p(x_{n+1} \mid \mathcal{X}) = \int p(x_{n+1} \mid \theta) p(\theta \mid \mathcal{X}) d\theta.\nonumber
\end{equation}
In other words, $x_{n+1}$ is dependent on $\mathcal{X}$. Observed data $\mathcal{X}$ provide information about $\theta$, which in turn provides information about $x_{n+1}$ (i.e., $\mathcal{X} \rightarrow \theta \rightarrow x_{n+1}$).
\end{remark}
\begin{examp2}[Amount of Data Matters]\label{example:amountofdata}
Suppose we have three sets of observations of successes from the Bernoulli distribution:
\begin{enumerate}
\item 10 out of 10 are observed to be successes (1's);
\item 48 out of 50 are observed to be successes (1's);
\item 186 out of 200 are observed to be successes (1's).
\end{enumerate}
So, what is the probability of success in the Bernoulli model?
The naive answers for cases 1, 2, and 3 are 100\%, 96\%, and 93\%, respectively. However, an observation of only 10 data points is a rather small sample, and noise can make it less convincing. Suppose we put a $\mathrm{Beta}(1,1)$ prior over the Bernoulli distribution. The posterior probabilities of success for the three cases would then be $\frac{11}{12}=91.6\%$, $\frac{49}{52}=94.2\%$, and $\frac{187}{202}=92.6\%$, respectively. Now we find that case 1 has a lower probability of success than case 2. A Bayesian view of the problem naturally incorporates the amount of data as well as its average. The special case shown here is also called Laplace's rate of succession \citep{ollivier2015laplace}. Laplace's ``add-one" rule of succession (i.e., the $\mathrm{Beta}(1,1)$ prior) modifies the observed frequencies in a sequence of successes and failures by adding one to the observed counts. This improves prediction by avoiding zero probabilities and corresponds to a uniform Bayesian prior on the parameter. Suppose further that the prior parameters are $a=b=2$; Figure~\ref{fig:dists_beta_posterior} compares the prior distribution and the posterior distributions for the three cases.
\hfill $\square$\par
\end{examp2}
\begin{SCfigure}
\centering
\includegraphics[width=0.55\textwidth]{imgs/dists_beta_posterior.pdf}
\caption{Prior distribution is $\mathrm{Beta}(x\mid 2,2)$. The posterior distributions for the three cases in Example~\ref{example:amountofdata} are $\mathrm{Beta}(x\mid 12,2)$, $\mathrm{Beta}(x\mid 50,4)$, and $\mathrm{Beta}(x\mid 188,16)$, respectively.}
\label{fig:dists_beta_posterior}
\end{SCfigure}
\begin{mdframed}[hidealllines=true,backgroundcolor=\mdframecolorNote,frametitle={Why Bayes?}]
The example above also shows that Bayesian models incorporate prior information on the parameters of the model, making them particularly useful for regularizing regression problems where data information is limited. This is one reason why the Bayesian approach has attracted worldwide attention for decades. The prior information $p(\theta)$ and likelihood function $p(x\mid \theta)$ represent a rational person's belief, and Bayes' rule is then an optimal method of updating this person's beliefs about $\theta$ given new information from the data \citep{fahrmeir2007regression, hoff2009first}. The prior information given by $p(\theta)$ might be wrong if it does not accurately represent our prior beliefs. However, this does not mean that the posterior $p(\theta \mid x)$ is not useful. A famous quote is ``all models are wrong, but some are useful" \citep{box1987empirical}. If the prior $p(\theta)$ approximates our beliefs, then the posterior $p(\theta \mid x)$ is also a good approximation to posterior beliefs.
\end{mdframed}
\subsection{Bayesian Linear Model with Zero-Mean Prior}\label{sec:bayesian-zero-mean}
In the linear model, we are given the input data matrix $\bm{X}\in\mathbb{R}^{n\times p}$ and the observation vector $\bm{y}\in\mathbb{R}^n$. The linear model considers the overdetermined system $\bm{y}=\bm{X}\boldsymbol\beta$, where the vector $\boldsymbol\beta\in\mathbb{R}^p$ is a vector of weights of the linear model. It often happens that $\bm{y}=\bm{X}\boldsymbol\beta$ has no solution since there are too many equations, i.e., the matrix $\bm{X}$ has more rows than columns ($n>p$). Define the column space of $\bm{X}$ by $\{\bm{X}\boldsymbol\gamma: \forall \boldsymbol\gamma \in\mathbb{R}^p \}$, denoted by $\mathcal{C}(\bm{X})$. Thus, the statement that the linear system $\bm{y}=\bm{X}\boldsymbol\beta$ has no solution means that $\bm{y}$ is outside the column space of $\bm{X}$.
This problem can be solved by minimizing the mean squared error (MSE). Instead of minimizing the MSE directly, we can further assume a Gaussian noise vector $\boldsymbol\epsilon\in\mathbb{R}^n$ such that $\boldsymbol{y} = \boldsymbol{X}\boldsymbol\beta + \boldsymbol\epsilon$, where $\boldsymbol\epsilon \sim \mathcal{N}(\mathbf{0}, \sigma^2 \boldsymbol{I})$ and $\sigma^2$ is \textbf{fixed}; here $\mathcal{N}(\bm{a}, \bm{B})$ denotes a multivariate Gaussian distribution \footnote{We delay the definition to Chapter~\ref{chapter:conjugate_models_bmf}, p.~\pageref{chapter:conjugate_models_bmf}, where we discuss regular conjugate models; see Definition~\ref{definition:multivariate_gaussian}, p.~\pageref{definition:multivariate_gaussian}.} with mean $\bm{a}$ and covariance $\bm{B}$, and a detailed analysis of this model can be found in \citet{rasmussen2003gaussian, hoff2009first, lu2021rigorous}. This additive Gaussian noise assumption gives rise to the likelihood. Let $\mathcal{X} (\bm{x}_{1:n})= \{\bm{x}_1, \bm{x}_2, \ldots, \bm{x}_n\}$ be the observations of the $n$ data points; the likelihood function under this Gaussian additive noise model is
\begin{equation}\label{equation:linear_gaussian_addi_likelihood}
\mathrm{likelihood} = \bm{y} \mid \bm{X}, \boldsymbol\beta, \sigma^2 \sim \mathcal{N}(\bm{X}\boldsymbol\beta, \sigma^2\bm{I}).
\end{equation}
Suppose we specify a multivariate Gaussian prior with zero mean and covariance matrix $\boldsymbol\Sigma_0$ over the weight parameter $\boldsymbol\beta$,
\begin{equation}
\mathrm{prior} = \boldsymbol\beta \sim \mathcal{N}(\mathbf{0}, \boldsymbol\Sigma_0). \nonumber
\end{equation}
By Bayes' theorem, ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", we obtain the posterior
\begin{equation}
\begin{aligned}
&\gap \mathrm{posterior} = p(\boldsymbol\beta\mid \bm{y},\bm{X}, \sigma^2) \\
&\propto p(\bm{y}\mid \bm{X}, \boldsymbol\beta, \sigma^2) \cdot p(\boldsymbol\beta \mid \boldsymbol\Sigma_0) \\
&= \frac{1}{(2\pi \sigma^2)^{n/2}} \exp\left\{-\frac{1}{2\sigma^2} (\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)\right\} \times \frac{1}{(2\pi)^{p/2}\abs{\boldsymbol\Sigma_0}^{1/2}}\exp\left(-\frac{1}{2} \boldsymbol\beta^\top\boldsymbol\Sigma_0^{-1}\boldsymbol\beta\right) \\
&\propto \exp\left\{-\frac{1}{2} (\boldsymbol\beta - \boldsymbol\beta_1)^\top \boldsymbol\Sigma_1^{-1} (\boldsymbol\beta - \boldsymbol\beta_1)\right\} \propto \mathcal{N}(\boldsymbol\beta_1, \boldsymbol\Sigma_1) , \nonumber
\end{aligned}
\end{equation}
where
$$
\boldsymbol\Sigma_1 = \left(\frac{1}{\sigma^2} \bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1},
\qquad
\boldsymbol\beta_1 = \left(\frac{1}{\sigma^2}\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1} \left(\frac{1}{\sigma^2}\bm{X}^\top\bm{y}\right).
$$
Therefore, the posterior distribution is also a multivariate Gaussian distribution (the same form as the prior distribution, i.e., a conjugate prior):
\begin{equation}
\mathrm{posterior} = \boldsymbol\beta\mid \bm{y},\bm{X}, \sigma^2 \sim \mathcal{N}(\boldsymbol\beta_1, \boldsymbol\Sigma_1). \nonumber
\end{equation}
\paragraph{A word on the notation.} Note that we use $\{\boldsymbol\beta_1,\boldsymbol\Sigma_1\}$ to denote the posterior mean vector and posterior covariance matrix in the \textit{zero-mean prior model}.
Similarly, the posterior mean vector and posterior covariance matrix in the \textit{semi-conjugate prior} and \textit{fully conjugate prior} models will be denoted by $\{\boldsymbol\beta_2,\boldsymbol\Sigma_2\}$ and $\{\boldsymbol\beta_3,\boldsymbol\Sigma_3\}$, respectively, for clarity (see the sections below).
\paragraph{Connection to ordinary least squares (OLS).} In the Bayesian linear model, we do not need to assume that $\boldsymbol{X}$ has full rank in general. Note further that if we assume $\bm{X}$ has full rank (i.e., $\bm{X}^\top\bm{X}$ is invertible when $n>p$), then in the limit $\boldsymbol\Sigma_0^{-1} \rightarrow \mathbf{0}$, we have $\boldsymbol\beta_1 \rightarrow \hat{\boldsymbol\beta} = (\bm{X}^\top\bm{X})^{-1}\bm{X}^\top\bm{y}$, in which case the \textit{maximum a posteriori (MAP) estimator} from the Bayesian model recovers the ordinary least squares (OLS) estimator. The posterior is then $\boldsymbol\beta\mid \bm{y},\bm{X}, \sigma^2 \sim \mathcal{N} (\hat{\boldsymbol\beta}, \sigma^2(\boldsymbol{X}^\top \boldsymbol{X})^{-1})$, which shares a similar form with the distribution of the OLS estimator $\hat{\boldsymbol\beta} \sim \mathcal{N}(\boldsymbol\beta, \sigma^2(\boldsymbol{X}^\top \boldsymbol{X})^{-1})$ under Gaussian disturbance (see \citet{lu2021rigorous}).
\begin{remark}[Ridge Regression]
In the least squares approximation problem, we use $\bm{X}\boldsymbol\beta$ to approximate $\bm{y}$. Two issues arise: the model can potentially overfit, and $\bm{X}$ may not have full rank. In a ridge regression model, we penalize large values of $\boldsymbol\beta$ and thus favor simpler models. Instead of minimizing $\norm{\bm{y}-\bm{X}\boldsymbol\beta}^2$, we minimize $\norm{\bm{y}-\bm{X}\boldsymbol\beta}^2+\lambda\norm{\boldsymbol\beta}^2$, where $\lambda$ is a hyperparameter that can be tuned accordingly, e.g., via cross-validation (CV):
\begin{equation}
\mathop{\arg\min}_{\boldsymbol\beta}{(\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta) + \lambda \boldsymbol\beta^\top\boldsymbol\beta}. \nonumber
\end{equation}
By differentiating and setting the derivative to zero, we get
\begin{equation}
\hat{\boldsymbol\beta}_{ridge} = \left(\bm{X}^\top\bm{X} + \lambda \bm{I}\right)^{-1} \bm{X}^\top\bm{y}, \nonumber
\end{equation}
in which case $(\bm{X}^\top\bm{X} + \lambda \bm{I})$ is invertible even when $\bm{X}$ does not have full rank. Further details on ridge regression are left to the readers.
\end{remark}
\paragraph{Connection to ridge regression.} If we set $\boldsymbol\Sigma_0 = \bm{I}$, we obtain $\boldsymbol\beta_1 = \left(\bm{X}^\top\bm{X} + \sigma^2 \bm{I}\right)^{-1} \bm{X}^\top\bm{y}$ and $\boldsymbol\Sigma_1 = \left(\frac{1}{\sigma^2}\bm{X}^\top\bm{X}+ \bm{I}\right)^{-1}$. Since the posterior is $\boldsymbol\beta\mid \bm{y},\bm{X}, \sigma^2 \sim \mathcal{N}(\boldsymbol\beta_1, \boldsymbol\Sigma_1)$, the MAP estimator of $\boldsymbol\beta$ is $\boldsymbol\beta_1 = (\bm{X}^\top\bm{X} + \sigma^2 \bm{I})^{-1} \bm{X}^\top\bm{y}$, which shares the same form as the ridge regression estimator with $\lambda = \sigma^2$. Thus, the ridge regression estimator is a special case of the Bayesian linear model with a zero-mean prior, and ridge regression has a nice interpretation from the Bayesian approach: finding the mode of the posterior.
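This correspondence can be checked numerically. The following minimal sketch (in Python, assuming NumPy is available; the data are randomly generated only for illustration) computes the posterior mean $\boldsymbol\beta_1$ with $\boldsymbol\Sigma_0=\bm{I}$ and compares it with the ridge estimator with $\lambda=\sigma^2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma2 = 100, 3, 0.25

X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Posterior of the zero-mean prior model with Sigma_0 = I:
Sigma1 = np.linalg.inv(X.T @ X / sigma2 + np.eye(p))
beta1 = Sigma1 @ (X.T @ y / sigma2)

# Ridge estimator with lambda = sigma^2:
beta_ridge = np.linalg.solve(X.T @ X + sigma2 * np.eye(p), X.T @ y)

print(np.allclose(beta1, beta_ridge))   # True: posterior mean (MAP) equals ridge
\end{verbatim}
The same sketch also makes the regularization effect visible: shrinking $\sigma^2$ moves $\boldsymbol\beta_1$ towards the OLS estimator, while growing $\sigma^2$ shrinks it towards zero.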
An example of this Bayesian linear model is given in \citet{rasmussen2003gaussian}, where the ``well determined" slope of $\boldsymbol\beta$ (i.e., the distribution around the slope is more compact) is almost unchanged by the posterior process, while the intercept, which is more dispersed, is shrunk towards zero. This is effectively a regularization effect on the parameters, as in ridge regression.
\subsection{Bayesian Linear Model with Semi-Conjugate Prior}\label{sec:semiconjugate}
We will use the \textit{Gamma distribution} as the prior density for the inverse variance (precision) parameter of a Gaussian distribution. The rigorous definition of the Gamma distribution can be found in Chapter~\ref{chapter:conjugate_models_bmf} (Definition~\ref{definition:gamma-distribution}, p.~\pageref{definition:gamma-distribution}), where we discuss conjugate models. As for the reason for using the Gamma distribution as the prior for the precision, we quote the description from \citet{kruschke2014doing}:
\begin{mdframed}[hidealllines=true,backgroundcolor=\mdframecolorNote]
Because of its role in conjugate priors for Gaussian likelihood functions, the Gamma distribution is routinely used as a prior for precision (i.e., inverse variance). But there is no logical necessity to do so, and modern MCMC methods permit more flexible specification of priors. Indeed, because precision is less intuitive than standard deviation, it can be more useful to give standard deviation a uniform prior that spans a wide range.
\end{mdframed}
The setting is the same as in Section~\ref{sec:bayesian-zero-mean}, but we now assume that the variance $\sigma^2$ of the Gaussian likelihood is \textbf{not fixed}. Again, we have the likelihood function
\begin{equation}
\mathrm{likelihood} = \bm{y} \mid \bm{X}, \boldsymbol\beta, \sigma^2 \sim \mathcal{N}(\bm{X}\boldsymbol\beta, \sigma^2\bm{I}). \nonumber
\end{equation}
We specify a \textbf{non-zero-mean} Gaussian prior over the weight parameter $\boldsymbol\beta$ and a Gamma prior over the precision $\gamma$,
\begin{equation}
\begin{aligned}
{\color{blue}\mathrm{prior:\,}} &\boldsymbol\beta \sim \mathcal{N}({\color{blue}\boldsymbol\beta_0}, \boldsymbol\Sigma_0) \\
&{\color{blue}\gamma = 1/\sigma^2 \sim \mathrm{Ga}(a_0, b_0)}, \nonumber
\end{aligned}
\end{equation}
where we highlight the differences from the previous model in blue, $\mathrm{Ga}(a, b)=\frac{b^a}{\Gamma(a)} x^{a-1}\exp(-bx)$ denotes a Gamma distribution with parameters $a$ and $b$, and $\Gamma(a)=\int_{0}^{\infty} t^{a-1} \exp(-t)dt$ is the Gamma function.
\paragraph{Step 1, conditioned on $\sigma^2$.} Given $\sigma^2$, by Bayes' theorem, ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", we get the conditional posterior density of $\boldsymbol\beta$,
\begin{equation}
\begin{aligned}
\mathrm{posterior}&= p(\boldsymbol\beta\mid \bm{y},\bm{X}, \sigma^2) \propto p(\bm{y}\mid \bm{X}, \boldsymbol\beta, \sigma^2) \cdot p(\boldsymbol\beta \mid \boldsymbol\beta_0, \boldsymbol\Sigma_0) \\
&= \frac{1}{(2\pi \sigma^2)^{n/2}} \exp\left\{-\frac{1}{2\sigma^2} (\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)\right\} \\
&\gap \times \frac{1}{(2\pi)^{p/2}\abs{\boldsymbol\Sigma_0}^{1/2}}\exp\left\{-\frac{1}{2} (\boldsymbol\beta-\boldsymbol\beta_0)^\top\boldsymbol\Sigma_0^{-1}(\boldsymbol\beta-\boldsymbol\beta_0)\right\} \\
&\propto \exp\left\{-\frac{1}{2} (\boldsymbol\beta - \boldsymbol\beta_2)^\top \boldsymbol\Sigma_2^{-1} (\boldsymbol\beta - \boldsymbol\beta_2)\right\} \propto \mathcal{N}(\boldsymbol\beta_2, \boldsymbol\Sigma_2) , \nonumber
\end{aligned}
\end{equation}
where the parameters are
$$
\begin{aligned}
\boldsymbol\Sigma_2 &= \left(\frac{1}{\sigma^2} \bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1},\\
\boldsymbol\beta_2 &= \boldsymbol\Sigma_2 \left(\boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0+\frac{1}{\sigma^2}\bm{X}^\top\bm{y}\right) = \left(\frac{1}{\sigma^2}\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1} \left(\textcolor{blue}{\boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0}+\frac{1}{\sigma^2}\bm{X}^\top\bm{y}\right).
\end{aligned}
$$
Therefore, the conditional posterior follows a Gaussian distribution:
\begin{equation}
\mathrm{posterior} = \boldsymbol\beta\mid \bm{y},\bm{X}, \sigma^2 \sim \mathcal{N}(\boldsymbol\beta_2, \boldsymbol\Sigma_2). \nonumber
\end{equation}
\paragraph{Connection to the zero-mean prior model.} We highlight the connection between the zero-mean prior model and the semi-conjugate prior model as follows:
\begin{enumerate}
\item We note that $\boldsymbol\beta_1$ in Section~\ref{sec:bayesian-zero-mean} is a special case of $\boldsymbol\beta_2$ with $\boldsymbol\beta_0=\mathbf{0}$.
\item If we further assume that $\bm{X}$ has full rank, then when $\boldsymbol\Sigma_0^{-1} \rightarrow \mathbf{0}$, $\boldsymbol\beta_2 \rightarrow \hat{\boldsymbol\beta} = (\bm{X}^\top\bm{X})^{-1}\bm{X}^\top\bm{y}$, which reduces to the OLS estimator.
\item When $\sigma^2 \rightarrow \infty$, $\boldsymbol\beta_2$ approaches $\boldsymbol\beta_0$, the prior expectation of the parameter. In contrast, in the zero-mean prior model, $\sigma^2 \rightarrow \infty$ makes $\boldsymbol\beta_1$ approach $\mathbf{0}$.
\item \textbf{Weighted average}: we reformulate $\boldsymbol\beta_2$ by
\begin{equation}
\begin{aligned}
\boldsymbol\beta_2 &= \left(\frac{1}{\sigma^2}\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1} \left(\boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0+\frac{1}{\sigma^2}\bm{X}^\top\bm{y}\right) \\
&= \left(\frac{1}{\sigma^2}\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1} \boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0 + \left(\frac{1}{\sigma^2}\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1} \frac{\bm{X}^\top\bm{X}}{\sigma^2} (\bm{X}^\top\bm{X})^{-1}\bm{X}^\top\bm{y} \\
&=(\bm{I}-\bm{A})\boldsymbol\beta_0 + \bm{A} \hat{\boldsymbol\beta}, \nonumber
\end{aligned}
\end{equation}
where $\hat{\boldsymbol\beta}=(\bm{X}^\top\bm{X})^{-1}\bm{X}^\top\bm{y}$ is the OLS estimator of $\boldsymbol\beta$ and $\bm{A}=(\frac{1}{\sigma^2}\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1})^{-1} \frac{\bm{X}^\top\bm{X}}{\sigma^2}$. We see that the posterior mean of $\boldsymbol\beta$ is a weighted average of the prior mean and the OLS estimator of $\boldsymbol\beta$. Thus, if we set the prior parameter $\boldsymbol\beta_0 = \hat{\boldsymbol\beta}$, the posterior mean of $\boldsymbol\beta$ will be exactly $\hat{\boldsymbol\beta}$.
\end{enumerate}
\paragraph{Step 2, conditioned on $\boldsymbol\beta$.} Given $\boldsymbol\beta$, again by Bayes' theorem, we obtain the conditional posterior density of $\gamma = \frac{1}{\sigma^2}$,
\begin{equation}
\begin{aligned}
\mathrm{posterior}&= p(\gamma=\frac{1}{\sigma^2}\mid \bm{y},\bm{X}, \boldsymbol\beta) \propto p(\bm{y}\mid \bm{X}, \boldsymbol\beta, \gamma) \cdot p(\gamma \mid a_0, b_0) \\
&= \frac{\gamma^{n/2}}{(2\pi )^{n/2}} \exp\left\{-\frac{\gamma}{2} (\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)\right\} \\
&\gap \times \frac{{b_0}^{a_0}}{\Gamma(a_0)} \gamma^{a_0-1} \exp(-b_0 \gamma) \\
&\propto \gamma^{(a_0+\frac{n}{2}-1)} \exp\left\{-\gamma\left[b_0+\frac{1}{2}(\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)\right]\right\}, \nonumber
\end{aligned}
\end{equation}
and the conditional posterior follows a Gamma distribution:
\begin{equation}
\mathrm{posterior\,\, of\,\,} \gamma \mathrm{\,\,given\,\,} \boldsymbol\beta = \gamma\mid \bm{y},\bm{X}, \boldsymbol\beta \sim \mathrm{Ga}\left(a_0+\frac{n}{2}, \left[b_0+\frac{1}{2}(\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)\right]\right). \nonumber
\end{equation}
\paragraph{Prior information on the noise/precision.} We can find an intuitive interpretation of the prior as follows:
\begin{enumerate}
\item We notice that the prior mean and posterior mean of $\gamma$ are $\mathrm{E}[\gamma]=\frac{a_0}{b_0}$ and $\mathrm{E}[\gamma \mid \boldsymbol\beta]=\frac{a_0 + \frac{n}{2}}{b_0 +\frac{1}{2}(\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)}$, respectively. So the latent meaning of $2 a_0$ is the prior sample size for the noise $\sigma^2 = \frac{1}{\gamma}$.
\item As we assume $\bm{y}=\bm{X}\boldsymbol\beta +\boldsymbol\epsilon$, where $\boldsymbol\epsilon \sim \mathcal{N}(\mathbf{0}, \sigma^2\bm{I})$, we have $\frac{(\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)}{\sigma^2} \sim \chi^2(n)$ and $\mathrm{E}\left[\frac{1}{2}(\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)\right] = \frac{n}{2}\sigma^2$ \footnote{$\chi^2(n)$ is a Chi-squared distribution with $n$ degrees of freedom. See Definition~\ref{definition:chisquare_distribution} (p.~\pageref{definition:chisquare_distribution}).}.
So the latent meaning of $\frac{b_0}{a_0}$ is the prior variance of the noise.
\item Some textbooks write $\gamma \sim \mathrm{Ga}(n_0/2, n_0\sigma_0^2/2)$ to make this explicit (in which case $n_0$ is the prior sample size and $\sigma_0^2$ is the prior variance). But a prior in this form seems to come from nowhere at first glance.
\end{enumerate}
\paragraph{Gibbs sampler.} By the Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler}, we can construct a Gibbs sampler for the Bayesian linear model with semi-conjugate prior:

0. Set initial values for $\boldsymbol\beta$ and $\gamma = \frac{1}{\sigma^2}$;

1. Update $\boldsymbol\beta$: $\mathrm{posterior} = \boldsymbol\beta\mid \bm{y},\bm{X}, \gamma \sim \mathcal{N}(\boldsymbol\beta_2, \boldsymbol\Sigma_2)$;

2. Update $\gamma$: $\mathrm{posterior} = \gamma\mid \bm{y},\bm{X}, \boldsymbol\beta \sim \mathrm{Ga}\left(a_0+\frac{n}{2}, [b_0+\frac{1}{2}(\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)]\right)$.

\subsection{Bayesian Linear Model with Full Conjugate Prior}\label{section:blm-fullconjugate}
Putting a Gamma prior over the inverse variance is equivalent to putting an inverse-Gamma prior \footnote{We again delay the definition to Definition~\ref{definition:inverse_gamma_distribution} (p.~\pageref{definition:inverse_gamma_distribution}), where we discuss regular conjugate models.} on the variance. The setting is the same as for the semi-conjugate prior in Section~\ref{sec:semiconjugate}. We have the likelihood function:
\begin{equation}
\mathrm{likelihood} = \bm{y} \mid \bm{X}, \boldsymbol\beta, \sigma^2 \sim \mathcal{N}(\bm{X}\boldsymbol\beta, \sigma^2\bm{I}). \nonumber
\end{equation}
But now we specify a joint Gaussian and inverse-Gamma prior over the weight and variance parameters by
\begin{equation}
\begin{aligned}
{\color{blue}\mathrm{prior:\,}} &\boldsymbol\beta\mid \sigma^2 \sim \mathcal{N}(\boldsymbol\beta_0, {\color{blue}\sigma^2} \boldsymbol\Sigma_0) \\
&{\color{blue}\sigma^2 \sim \mathrm{IG}(a_0, b_0)}, \nonumber
\end{aligned}
\end{equation}
where again we highlight the differences from the previous models in blue. Equivalently, we can formulate the prior as a joint one, which is called the \textit{normal-inverse-Gamma (NIG)} distribution:
\begin{equation}
\begin{aligned}
\mathrm{prior:\,} &\boldsymbol\beta,\sigma^2 \sim \mathcal{NIG}(\boldsymbol\beta_0, \boldsymbol\Sigma_0, a_0, b_0) = \mathcal{N}(\boldsymbol\beta_0, \sigma^2 \boldsymbol\Sigma_0)\cdot \mathrm{IG}(a_0, b_0) . \nonumber
\end{aligned}
\end{equation}
Again, by Bayes' theorem, ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", we obtain the posterior
\begin{equation}
\begin{aligned}
\mathrm{posterior}&= p(\boldsymbol\beta,\sigma^2\mid \bm{y},\bm{X}) \propto p(\bm{y}\mid \bm{X}, \boldsymbol\beta, \sigma^2)\cdot p(\boldsymbol\beta, \sigma^2 \mid \boldsymbol\beta_0, \boldsymbol\Sigma_0, a_0, b_0) \\
&= \frac{1}{(2\pi \sigma^2)^{n/2}} \exp\left\{-\frac{1}{2\sigma^2} (\bm{y}-\bm{X}\boldsymbol\beta)^\top(\bm{y}-\bm{X}\boldsymbol\beta)\right\} \\
&\gap \times \frac{1}{(2\pi \sigma^2)^{p/2} \abs{\boldsymbol\Sigma_0}^{1/2}} \exp\left\{-\frac{1}{2\sigma^2} (\boldsymbol\beta - \boldsymbol\beta_0)^\top\boldsymbol\Sigma_0^{-1} (\boldsymbol\beta - \boldsymbol\beta_0)\right\} \\
&\gap \times \frac{{b_0}^{a_0}}{\Gamma(a_0)} \frac{1}{(\sigma^2)^{a_0+1}} \exp(-\frac{b_0}{\sigma^2}) \\
&\propto \frac{1}{(2\pi \sigma^2)^{p/2} } \exp\left\{ -\frac{1}{2\sigma^2} (\boldsymbol\beta -\boldsymbol\beta_3)^\top\boldsymbol\Sigma_3^{-1}(\boldsymbol\beta -\boldsymbol\beta_3) \right\} \\
&\gap \times \frac{1}{(\sigma^2)^{a_0 +\frac{n}{2}+1}} \exp\left\{-\frac{1}{\sigma^2} \left[b_0+\frac{1}{2} (\bm{y}^\top\bm{y} +\boldsymbol\beta_0^\top\boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0 -\boldsymbol\beta_3^\top\boldsymbol\Sigma_3^{-1}\boldsymbol\beta_3) \right]\right\}, \nonumber
\end{aligned}
\end{equation}
where the parameters are
$$
\begin{aligned}
\boldsymbol\Sigma_3 &= \left( \bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1}, \\
\boldsymbol\beta_3 &= \boldsymbol\Sigma_3(\bm{X}^\top\bm{y} + \boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0) = \left( \bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1}(\boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0 + \bm{X}^\top\bm{y}).
\end{aligned}
$$
Let $a_n = a_0 +\frac{n}{2}$ and $b_n=b_0+\frac{1}{2} (\bm{y}^\top\bm{y} +\boldsymbol\beta_0^\top\boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0 -\boldsymbol\beta_3^\top\boldsymbol\Sigma_3^{-1}\boldsymbol\beta_3) $, so that the exponent of $\sigma^2$ above is $-(a_n+1)$. The posterior is thus an NIG distribution:
\begin{equation}
\begin{aligned}
\mathrm{posterior}&= \boldsymbol\beta, \sigma^2 \mid \bm{y}, \bm{X} \sim \mathcal{NIG}(\boldsymbol\beta_3, \boldsymbol\Sigma_3, a_n, b_n). \nonumber
\end{aligned}
\end{equation}
\paragraph{Connection to zero-mean prior and semi-conjugate prior models.} We highlight the connection of the fully conjugate model to the zero-mean prior and semi-conjugate prior models as follows:
\begin{enumerate}
\item If we further assume that $\bm{X}$ has full rank, then when $\boldsymbol\Sigma_0^{-1} \rightarrow \mathbf{0}$, $\boldsymbol\beta_3 \rightarrow \hat{\boldsymbol\beta} = (\bm{X}^\top\bm{X})^{-1}\bm{X}^\top\bm{y}$, which reduces to the OLS estimator.
\item When $b_0 \rightarrow \infty$, then $\sigma^2 \rightarrow \infty$ and $\boldsymbol\beta_3$ approximately approaches $\boldsymbol\beta_0$, the prior expectation of the parameter. Compared to $\boldsymbol\beta_2$ in Section~\ref{sec:semiconjugate}, $\sigma^2 \rightarrow \infty$ makes $\boldsymbol\beta_2$ approach $\boldsymbol\beta_0$, where $\sigma^2$ is a fixed hyperparameter.
\item \textbf{Weighted average}: we reformulate
\begin{equation}
\begin{aligned}
\boldsymbol\beta_3 &= \left(\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1}(\boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0+\bm{X}^\top\bm{y}) \\
&= \left(\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1} \boldsymbol\Sigma_0^{-1}\boldsymbol\beta_0 + \left(\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}\right)^{-1} (\bm{X}^\top\bm{X}) (\bm{X}^\top\bm{X})^{-1}\bm{X}^\top\bm{y} \\
&=(\bm{I}-\bm{C})\boldsymbol\beta_0 + \bm{C} \hat{\boldsymbol\beta}, \nonumber
\end{aligned}
\end{equation}
where $\hat{\boldsymbol\beta}=(\bm{X}^\top\bm{X})^{-1}\bm{X}^\top\bm{y}$ is the OLS estimator of $\boldsymbol\beta$ and $\bm{C}=(\bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1})^{-1} (\bm{X}^\top\bm{X})$. We see that the posterior mean of $\boldsymbol\beta$ is a weighted average of the prior mean and the OLS estimator of $\boldsymbol\beta$. Thus, if we set $\boldsymbol\beta_0 = \hat{\boldsymbol\beta}$, the posterior mean of $\boldsymbol\beta$ will be exactly $\hat{\boldsymbol\beta}$.
\item From $a_n = a_0 +\frac{n}{2}$, we know that $2a_0$ is the prior sample size for $\sigma^2$.
\item $\boldsymbol\Sigma_3^{-1} = \bm{X}^\top\bm{X} + \boldsymbol\Sigma_0^{-1}$: the posterior precision matrix (inverse covariance matrix) is equal to the data precision $\bm{X}^\top\bm{X}$ plus the prior precision.
\end{enumerate}
\part{Non-Bayesian Matrix Decomposition}
\newpage
\chapter{Alternating Least Squares}\label{section:als}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\index{Least squares}
\section{Preliminary: Least Squares Approximations}
The linear model is the main technique for regression problems, and its primary tool is the least squares approximation, which minimizes a sum of squared errors. This is a natural choice when we are interested in finding the regression function that minimizes the corresponding expected squared error. Over recent decades, linear models have been used in a wide range of applications, e.g., decision-making \citep{dawes1974linear}, time series \citep{christensen1991linear, lu2017machine}, quantitative finance \citep{menchero2011barra}, and in many fields of study, e.g., production science, social science, and soil science \citep{fox1997applied, lane2002generalized, schaeffer2004application, mrode2014linear}. To be more concrete, we consider the overdetermined system $\bm{b} = \bm{A}\bm{x} $, where $\bm{A}\in \mathbb{R}^{m\times n}$ is the input data matrix, $\bm{b}\in \mathbb{R}^m$ is the observation vector (target vector), and the number of samples $m$ is larger than the number of dimensions $n$; $\bm{x}$ is a vector of weights of the linear model. Normally, $\bm{A}$ will have full column rank since real-world data are often uncorrelated (or become uncorrelated after post-processing). In practice, a bias term is added via the first column of $\bm{A}$, so that the least squares problem is to find the solution of
\begin{equation}\label{equation:ls-bias}
\widetilde{\bm{A}} \widetilde{\bm{x}} = [\bm{1} ,\bm{A} ]
\begin{bmatrix}
x_0\\
\bm{x}
\end{bmatrix}
= \bm{b} .
\end{equation}
On the other hand, it often happens that $\bm{b} = \bm{A}\bm{x}$ has no solution. The usual reason is that there are too many equations, i.e., the matrix has more rows than columns. Define the column space of $\bm{A}$ by $\{\bm{A}\boldsymbol\gamma: \,\, \forall \boldsymbol\gamma \in \mathbb{R}^n\}$, denoted by $\mathcal{C}(\bm{A})$.
Thus, saying that $\bm{b} = \bm{A}\bm{x}$ has no solution means that $\bm{b}$ is outside the column space of $\bm{A}$. In other words, the error $\bm{e} = \bm{b} -\bm{A}\bm{x}$ cannot be reduced to zero. When the error $\bm{e}$ is as small as possible in the sense of the mean squared error (MSE), $\bm{x}_{LS}$ is a least squares solution, i.e., $\norm{\bm{b}-\bm{A}\bm{x}_{LS}}^2$ is minimal. The method of least squares is one of the most effective tools of the mathematical sciences; there are books devoted solely to it. Readers are also advised to consult \citet{trefethen1997numerical, strang2019linear, strang2021every, lu2021rigorous}.
\paragraph{Least squares by calculus.} When $\norm{\bm{b}-\bm{A}\bm{x}}^2$ is differentiable and the parameter space of $\bm{x}$ is an open set (the least achievable value is attained inside the parameter space), the least squares estimator must be a root of the derivative of $\norm{\bm{b}-\bm{A}\bm{x}}^2$. We thus arrive at the following lemma.
\begin{lemma}[Least Squares by Calculus]\label{lemma:ols}
Assume $\bm{A} \in \mathbb{R}^{m\times n}$ is fixed and has full rank (i.e., the columns of $\bm{A}$ are linearly independent) with $m\geq n$. Consider the overdetermined system $\bm{b} = \bm{A}\bm{x}$; the least squares solution, obtained by calculus via setting the derivative of $\norm{\bm{b}-\bm{A}\bm{x}}^2$ in every direction to zero, is $\bm{x}_{LS} = (\bm{A}^\top\bm{A})^{-1}\bm{A}^\top\bm{b}$. The value $\bm{x}_{LS} = (\bm{A}^\top\bm{A})^{-1}\bm{A}^\top\bm{b}$ is known as the \textit{ordinary least squares (OLS)} estimator or simply the \textit{least squares (LS)} estimator of $\bm{x}$.
\end{lemma}
To prove the lemma above, we must show that $\bm{A}^\top\bm{A}$ is invertible. Since we assume $\bm{A}$ has full rank and $m\geq n$, the matrix $\bm{A}^\top\bm{A} \in \mathbb{R}^{n\times n}$ is invertible if it has rank $n$, which is the same as the rank of $\bm{A}$. This is proved in Lemma~\ref{lemma:rank-of-ata}.
\begin{lemma}[Rank of $\bm{A}^\top \bm{A}$]\label{lemma:rank-of-ata}
Given any matrix $\bm{A}$, $\bm{A}^\top \bm{A}$ and $\bm{A}$ have the same rank.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-of-ata}]
Let $\bm{x}\in \mathcal{N}(\bm{A})$, i.e., let $\bm{x}$ be in the null space of $\bm{A}$ (Definition~\ref{definition:null_space}, p.~\pageref{definition:null_space}). We have
$$
\bm{A}\bm{x} = \mathbf{0} \qquad\underrightarrow{ \text{leads to} }\qquad \bm{A}^\top\bm{A} \bm{x} =\mathbf{0},
$$
i.e., $\bm{x}\in \mathcal{N}(\bm{A}) \,\,\underrightarrow{ \text{leads to} }\,\, \bm{x} \in \mathcal{N}(\bm{A}^\top \bm{A})$; therefore $\mathcal{N}(\bm{A}) \subseteq \mathcal{N}(\bm{A}^\top\bm{A})$. Further, let $\bm{x} \in \mathcal{N}(\bm{A}^\top\bm{A})$. We have
$$
\bm{A}^\top \bm{A}\bm{x} = \mathbf{0}\,\,\underrightarrow{ \text{leads to} }\,\, \bm{x}^\top \bm{A}^\top \bm{A}\bm{x} = 0\,\,\underrightarrow{ \text{leads to} }\,\, \norm{\bm{A}\bm{x}}^2 = 0 \,\,\underrightarrow{ \text{leads to} }\,\, \bm{A}\bm{x}=\mathbf{0},
$$
i.e., $\bm{x}\in \mathcal{N}(\bm{A}^\top \bm{A}) \,\,\underrightarrow{ \text{leads to} }\,\, \bm{x}\in \mathcal{N}(\bm{A})$; therefore $\mathcal{N}(\bm{A}^\top\bm{A}) \subseteq\mathcal{N}(\bm{A}) $. As a result, by ``sandwiching", it follows that
$$\mathcal{N}(\bm{A}) = \mathcal{N}(\bm{A}^\top\bm{A}) \qquad \text{and} \qquad \dim(\mathcal{N}(\bm{A})) = \dim(\mathcal{N}(\bm{A}^\top\bm{A})). $$
By the fundamental theorem of linear algebra, $\bm{A}^\top \bm{A}$ and $\bm{A}$ have the same rank.
\end{proof}
Applying the same observation to $\bm{A}^\top$, we can also prove that $\bm{A}\bA^\top$ and $\bm{A}$ have the same rank. This result brings about the ordinary least squares estimator as follows.
\begin{proof}[of Lemma \ref{lemma:ols}]
Recall from calculus that a minimum of a function $f(\bm{x})$ occurs at a value $\bm{x}_{LS}$ such that the derivative $\frac{\partial}{\partial \bm{x}} f(\bm{x})=\mathbf{0}$. The differential of $\norm{\bm{b}-\bm{A}\bm{x}}^2$ is $2\bm{A}^\top\bm{A}\bm{x} -2\bm{A}^\top\bm{b}$, and $\bm{A}^\top\bm{A}$ is invertible since we assume $\boldsymbol{A}$ is fixed and has full rank with $m\geq n$. So the OLS solution of $\bm{x}$ is $\bm{x}_{LS} = (\bm{A}^\top\bm{A})^{-1}\bm{A}^\top\bm{b}$, which completes the proof.
\end{proof}
\begin{definition}[Normal Equation]\label{definition:normal-equation-als}
Setting the derivative of $\norm{\bm{b}-\bm{A}\bm{x}}^2$ to zero, we can write the condition as $\bm{A}^\top\bm{A} \bm{x}_{LS} = \bm{A}^\top\bm{b}$. This equation is also known as the \textit{normal equation}. Under the assumption that $\bm{A}$ has full rank with $m\geq n$, $\bm{A}^\top\bm{A}$ is invertible, which implies $\bm{x}_{LS} = (\bm{A}^\top\bm{A})^{-1}\bm{A}^\top\bm{b}$.
\end{definition}
\begin{figure}[h!]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[A convex function.]{\label{fig:convex-1}
\includegraphics[width=0.31\linewidth]{./imgs/convex.pdf}}
\subfigure[A concave function.]{\label{fig:convex-2}
\includegraphics[width=0.31\linewidth]{./imgs/concave.pdf}}
\subfigure[A random function.]{\label{fig:convex-3}
\includegraphics[width=0.31\linewidth]{./imgs/convex-none.pdf}}
\caption{Three functions.}
\label{fig:convex-concave-none}
\end{figure}
However, we do not yet know whether the least squares estimator obtained in Lemma~\ref{lemma:ols} attains the smallest or the largest achievable value (or neither). An example is shown in Figure~\ref{fig:convex-concave-none}. All we know so far is that there exists only one root of the derivative of $\norm{\bm{b}-\bm{A}\bm{x}}^2$. The following remark addresses this concern.
\begin{remark}[Verification of Least Squares Solution]
Why does the zero derivative imply the least mean squared error? The usual reason comes from convex analysis, as we shall see shortly. But here we verify directly that the OLS solution attains the least squares. For any $\bm{x} \neq \bm{x}_{LS}$, we have
\begin{equation}
\begin{aligned}
\norm{\bm{b} - \bm{A}\bm{x}}^2 &= \norm{\bm{b} - \bm{A}\bm{x}_{LS} + \bm{A}\bm{x}_{LS} - \bm{A}\bm{x}}^2 = \norm{\bm{b}-\bm{A}\bm{x}_{LS} + \bm{A} (\bm{x}_{LS} - \bm{x})}^2 \\
&=\norm{\bm{b}-\bm{A}\bm{x}_{LS}}^2 + \norm{\bm{A}(\bm{x}_{LS} - \bm{x})}^2 + 2\left(\bm{A}(\bm{x}_{LS} - \bm{x})\right)^\top(\bm{b}-\bm{A}\bm{x}_{LS}) \\
&=\norm{\bm{b}-\bm{A}\bm{x}_{LS}}^2 + \norm{\bm{A}(\bm{x}_{LS} - \bm{x})}^2 + 2(\bm{x}_{LS} - \bm{x})^\top(\bm{A}^\top\bm{b} - \bm{A}^\top\bm{A}\bm{x}_{LS}), \nonumber
\end{aligned}
\end{equation}
where the third term is zero by the normal equation, and $\norm{\bm{A}(\bm{x}_{LS} - \bm{x})}^2 \geq 0$. Therefore,
\begin{equation}
\norm{\bm{b} - \bm{A}\bm{x}}^2 \geq \norm{\bm{b}-\bm{A}\bm{x}_{LS}}^2. \nonumber
\end{equation}
Thus we show, via the calculus approach, that the OLS estimator indeed gives the minimum, not the maximum or a saddle point.
\end{remark}
A further question can be posed: why does this normal equation magically produce solutions for $\bm{x}$? A simple example gives the answer. The equation $x^2=-1$ has no real solution.
But $x\cdot x^2 = x\cdot (-1)$ has a real solution $\hat{x} = 0$, in which case $\hat{x}$ makes $x^2$ and $-1$ as close as possible.
\begin{example}[Multiplying From the Left Can Change the Solution Set]
Consider the matrix and target vector
$$
\bm{A}=\left[
\begin{matrix}
-3 & -4 \\
4 & 6 \\
1 & 1
\end{matrix}
\right]
\gap\mathrm{and} \gap
\bm{b}=\left[
\begin{matrix}
1 \\
-1 \\
0
\end{matrix}
\right].
$$
It can easily be verified that $\bm{A}\bm{x} = \bm{b}$ has no solution for $\bm{x}$. However, if we multiply on the left by
$$
\bm{B}=\left[
\begin{matrix}
0 & -1 & 6\\
0 & 1 & -4
\end{matrix}
\right],
$$
then we have $\hat{\bm{x}} = [1/2, -1/2]^\top$ as the solution of $\bm{B}\bm{A}\bm{x}= \bm{B}\bm{b}$. This specific example shows why the normal equation can give rise to a least squares solution: multiplying a linear system on the left can change the solution set.\hfill $\square$\par
\end{example}
\paragraph{Rank deficiency.} Note that we assume $\bm{A}\in \mathbb{R}^{m\times n}$ has full rank with $m\geq n$ to make $\bm{A}^\top\bm{A}$ invertible. When two or more columns of $\bm{A}$ are perfectly correlated, however, the matrix $\bm{A}$ is \textit{rank-deficient} and $\bm{A}^\top\bm{A}$ is singular. Choosing, among all $\bm{x}$ that satisfy the normal equation, the one that minimizes $\bm{x}^\top\bm{x}$ helps to resolve the problem; i.e., we choose the least squares solution with the shortest magnitude. But this is not the main interest of this text, and we will leave this topic to the readers. In \citet{lu2021numerical}, the UTV decomposition and the singular value decomposition (SVD) are applied to tackle the rank-deficient least squares problem.
\index{Decomposition: ALS}
\index{Netflix}
\section{Netflix Recommender and Matrix Factorization}\label{section:als-netflix}
In the Netflix prize \citep{bennett2007netflix}, the goal is to predict the ratings of users for different movies, given the existing ratings of those users for other movies. We index the $M$ movies by $m= 1, 2,\ldots,M$ and the $N$ users by $n = 1, 2,\ldots,N$. We denote the rating of the $n$-th user for the $m$-th movie by $a_{mn}$. Define $\bm{A}$ to be an $M \times N$ rating matrix with columns $\bm{a}_n \in \mathbb{R}^M$ containing the ratings of the $n$-th user. Note that many ratings $\{a_{mn}\}$ are missing, and our goal is to predict those missing ratings accurately. We formally consider algorithms for solving the following problem: the matrix $\bm{A}$ is approximately factorized into an $M\times K$ matrix $\bm{W}$ and a $K \times N$ matrix $\bm{Z}$. Usually $K$ is chosen to be smaller than $M$ or $N$, so that $\bm{W}$ and $\bm{Z}$ are smaller than the original matrix $\bm{A}$. This results in a compressed version of the original data matrix. An appropriate decision on the value of $K$ is critical in practice, but the choice of $K$ is very often problem dependent. The factorization is significant in the following sense: suppose $\bm{A}=[\bm{a}_1, \bm{a}_2, \ldots, \bm{a}_N]$ and $\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N]$ are the column partitions of $\bm{A}$ and $\bm{Z}$, respectively; then $\bm{a}_n \approx \bm{W}\bm{z}_n$, i.e., each column $\bm{a}_n$ is approximated by a linear combination of the columns of $\bm{W}$ weighted by the components in $\bm{z}_n$. Therefore, the columns of $\bm{W}$ can be thought of as containing the column basis of $\bm{A}$. To find the approximation $\bm{A}\approx\bm{W}\bm{Z}$, we need to define a loss function such that the distance between $\bm{A}$ and $\bm{W}\bm{Z}$ can be measured.
The loss function is selected to be the \textit{Frobenius norm} (or mean squared error, MSE) between two matrices, which vanishes if $\bm{A}=\bm{W}\bm{Z}$; the advantage of this choice will be seen shortly. To simplify the problem, let us first assume that there are no missing ratings. We project the data vectors $\bm{a}_n$ onto a smaller dimension $\bm{z}_n \in \mathbb{R}^K$ with $K<\min\{M, N\}$, such that the \textit{reconstruction error} measured by the Frobenius norm is minimized (assuming $K$ is known):
\begin{equation}\label{equation:als-per-example-loss_ori}
\mathop{\min}_{\bm{W},\bm{Z}} \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2,
\end{equation}
where $\bm{W}=[\bm{w}_1^\top; \bm{w}_2^\top; \ldots; \bm{w}_M^\top]\in \mathbb{R}^{M\times K}$ and $\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N] \in \mathbb{R}^{K\times N}$ contain the $\bm{w}_m$'s and $\bm{z}_n$'s as \textbf{rows and columns}, respectively \footnote{Note that in some contexts, $\bm{Z}$ represents an $N\times K$ matrix such that $\bm{A}$ is decomposed into $\bm{A}\approx \bm{W}\bm{Z}\textcolor{blue}{^\top}$.}. The loss form in Equation~\eqref{equation:als-per-example-loss_ori} is known as the \textit{per-example loss}. It can be equivalently written as
$$
L(\bm{W},\bm{Z}) = \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2 = \norm{\bm{W}\bm{Z}-\bm{A}}^2.
$$
Moreover, the loss $L(\bm{W},\bm{Z})=\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2$ is convex with respect to $\bm{Z}$ given $\bm{W}$ and vice versa. Therefore, we can first minimize with respect to $\bm{Z}$ given $\bm{W}$ and then minimize with respect to $\bm{W}$ given $\bm{Z}$:
$$
\left\{
\begin{aligned}
\bm{Z} &\leftarrow \mathop{\arg \min}_{\bm{Z}} L(\bm{W},\bm{Z}); \qquad \text{(ALS1)} \\
\bm{W} &\leftarrow \mathop{\arg \min}_{\bm{W}} L(\bm{W},\bm{Z}). \qquad \text{(ALS2)}
\end{aligned}
\right.
$$
This is known as the \textit{coordinate descent algorithm}, in which we apply least squares alternately; hence the method is also called \textit{alternating least squares (ALS)} \citep{comon2009tensor, takacs2012alternating, giampouras2018alternating}. Convergence is guaranteed if the loss function $L(\bm{W},\bm{Z})$ decreases at each iteration, and we shall discuss this further in the sequel.\index{ALS}
\begin{remark}[Convexity and Global Minimum]
Although the loss function defined by the Frobenius norm $\norm{\bm{W}\bm{Z}-\bm{A}}^2$ is convex in $\bm{W}$ given $\bm{Z}$, and vice versa, it is not convex in both variables together. Therefore we are not able to guarantee finding the global minimum. However, convergence to a local minimum is assured.
\end{remark}
\paragraph{Given $\bm{W}$, optimizing $\bm{Z}$.} Now, let us examine the subproblem $\bm{Z} \leftarrow \mathop{\arg \min}_{\bm{Z}} L(\bm{W},\bm{Z})$. When there exists a unique minimum of the loss function $L(\bm{W},\bm{Z})$ with respect to $\bm{Z}$, we speak of the \textit{least squares} minimizer of $\mathop{\arg \min}_{\bm{Z}} L(\bm{W},\bm{Z})$. With $\bm{W}$ fixed, $L(\bm{W},\bm{Z})$ can be written as $L(\bm{Z}\mid \bm{W})$ (or, more compactly, as $L(\bm{Z})$) to emphasize the variable $\bm{Z}$:
$$
\begin{aligned}
L(\bm{Z}\mid \bm{W}) &= \norm{\bm{W}\bm{Z}-\bm{A}}^2= \left\Vert\bm{W}[\bm{z}_1,\bm{z}_2,\ldots, \bm{z}_N]-[\bm{a}_1,\bm{a}_2,\ldots,\bm{a}_N]\right\Vert^2=\left\Vert
\begin{bmatrix}
\bm{W}\bm{z}_1 - \bm{a}_1 \\
\bm{W}\bm{z}_2 - \bm{a}_2\\
\vdots \\
\bm{W}\bm{z}_N - \bm{a}_N
\end{bmatrix}
\right\Vert^2.
\end{aligned}\footnote{The matrix norm used here is the Frobenius norm such that $\norm{\bm{A}}= \sqrt{\sum_{i=1,j=1}^{m,n} (a_{ij})^2}$ if $\bm{A}\in \mathbb{R}^{m\times n}$. And the vector norm used here is the $L_2$ norm such that $\norm{\bm{x}}_2 = \sqrt{\sum_{i=1}^{n}x_i^2}$ if $\bm{x}\in \mathbb{R}^n$.} $$ Now, if we define $$ \widetilde{\bm{W}} = \begin{bmatrix} \bm{W} & \mathbf{0} & \ldots & \mathbf{0}\\ \mathbf{0} & \bm{W} & \ldots & \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \ldots & \bm{W} \end{bmatrix} \in \mathbb{R}^{MN\times KN}, \gap \widetilde{\bm{z}}= \begin{bmatrix} \bm{z}_1 \\ \bm{z}_2 \\ \vdots \\ \bm{z}_N \end{bmatrix} \in \mathbb{R}^{KN}, \gap \widetilde{\bm{a}}= \begin{bmatrix} \bm{a}_1 \\ \bm{a}_2 \\ \vdots \\ \bm{a}_N \end{bmatrix} \in \mathbb{R}^{MN}, $$ then the (ALS1) problem can be reduced to the normal least squares problem for minimizing $\norm{\widetilde{\bm{W}} \widetilde{\bm{z}} - \widetilde{\bm{a}}}^2$ with respect to $\widetilde{\bm{z}}$. And the solution is given by $$ \widetilde{\bm{z}} = (\widetilde{\bm{W}}^\top\widetilde{\bm{W}})^{-1} \widetilde{\bm{W}}^\top\widetilde{\bm{a}}. $$ However, it is not wise to obtain the result via this approach. It takes $2(KN)^3$ flops to get the inverse of $\widetilde{\bm{W}}^\top\widetilde{\bm{W}}$ \citep{lu2021numerical}. Alternatively, a direct way to solve (ALS1) is to find the differential of $L(\bm{Z}\mid \bm{W})$ with respect to $\bm{Z}$: \begin{equation}\label{equation:givenw-update-z-allgd} \begin{aligned} \frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}} &= \frac{\partial \,\,\mathrm{tr}\left((\bm{W}\bm{Z}-\bm{A})(\bm{W}\bm{Z}-\bm{A})^\top\right)}{\partial \bm{Z}}\\ &=\frac{\partial \,\,\mathrm{tr}\left((\bm{W}\bm{Z}-\bm{A})(\bm{W}\bm{Z}-\bm{A})^\top\right)}{\partial (\bm{W}\bm{Z}-\bm{A})} \frac{\partial (\bm{W}\bm{Z}-\bm{A})}{\partial \bm{Z}}\\ &\stackrel{\star}{=}2 \bm{W}^\top(\bm{W}\bm{Z}-\bm{A}) \in \mathbb{R}^{K\times N}, \end{aligned} \end{equation} where the first equality is from the definition of Frobenius norm (Definition~\ref{definition:frobenius}, p.~\pageref{definition:frobenius}) such that $\norm{\bm{A}} = \sqrt{\sum_{i=1,j=1}^{m,n} (a_{ij})^2}=\sqrt{\mathrm{tr}(\bm{A}\bA^\top)}$, and equality ($\star$) comes from the fact that $\frac{\partial \mathrm{tr}(\bm{A}\bA^\top)}{\partial \bm{A}} = 2\bm{A}$. When the loss function is a differentiable function of $\bm{Z}$, we may determine the least squares solution by differential calculus, and a minimum of the function $L(\bm{Z}\mid \bm{W})$ must be a root of the equation: $$ \frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}} = \mathbf{0}. $$ By finding the root of the above equation, we have the ``candidate" update on $\bm{Z}$ that find the minimizer of $L(\bm{Z}\mid \bm{W})$ \begin{equation}\label{equation:als-z-update} \boxed{\bm{Z} = (\bm{W}^\top\bm{W})^{-1} \bm{W}^\top \bm{A} \leftarrow \mathop{\arg \min}_{\bm{Z}} L(\bm{Z}\mid \bm{W}).} \end{equation} This takes $2K^3$ flops to compute the inverse of $\bm{W}^\top\bm{W}$ as compared to $2(KN)^3$ flops to get the inverse of $\widetilde{\bm{W}}^\top\widetilde{\bm{W}}$. 
Before we can declare that a root of the above equation is actually a minimizer rather than a maximizer (which is why we called the update a ``candidate" update above), we need to verify that the function is convex. If the function is twice differentiable, this can be equivalently done by verifying
$$
\frac{\partial^2 L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}^2} > 0,
$$
i.e., that the Hessian matrix is positive definite. To see this, we write out the second differential
\begin{equation}\label{equation:als-z-update_hessian}
\frac{\partial^2 L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}^2}= 2\bm{W}^\top\bm{W} \in \mathbb{R}^{K\times K},
\end{equation}
which has full rank if $\bm{W}\in \mathbb{R}^{M\times K}$ has full rank (Lemma~\ref{lemma:rank-of-ata}) and $K<M$.
\begin{remark}[Positive Definite Hessian if $\bm{W}$ Has Full Rank]
We claim that if $\bm{W}\in\mathbb{R}^{M\times K}$ has full rank $K$ with $K<M$, then $\frac{\partial^2 L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}^2}$ is positive definite. This can be shown by noting that when $\bm{W}$ has full rank, $\bm{W}\bm{x}=\mathbf{0}$ only when $\bm{x}=\mathbf{0}$, since the null space of $\bm{W}$ has dimension 0. Therefore,
$$
\bm{x}^\top (2\bm{W}^\top\bm{W})\bm{x} >0, \qquad \text{for any nonzero vector $\bm{x}\in \mathbb{R}^K$}.
$$
\end{remark}
Hence, we need to check \textcolor{blue}{whether $\bm{W}$ has full rank so that the Hessian of $L(\bm{Z}\mid \bm{W})$ is positive definite}; otherwise, we cannot claim that the update of $\bm{Z}$ in Equation~\eqref{equation:als-z-update} decreases the loss (due to convexity), i.e., that the matrix decomposition moves in the right direction to better approximate the original matrix $\bm{A}$ by $\bm{W}\bm{Z}$ in each iteration. We will come back to the positive definiteness of the Hessian matrix shortly, and it relies on the following lemma.
\begin{lemma}[Rank of $\bm{Z}$ after Updating]\label{lemma:als-update-z-rank}
Suppose $\bm{A}\in \mathbb{R}^{M\times N}$ has full rank with $M\leq N$ and $\bm{W}\in \mathbb{R}^{M\times K}$ has full rank with $K<M$. Then the least squares update $\bm{Z}=(\bm{W}^\top\bm{W})^{-1} \bm{W}^\top \bm{A} \in \mathbb{R}^{K\times N}$ in Equation~\eqref{equation:als-z-update} has full rank.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:als-update-z-rank}]
Since $\bm{W}$ has full rank, $\bm{W}^\top\bm{W}\in \mathbb{R}^{K\times K}$ has full rank (Lemma~\ref{lemma:rank-of-ata}), so that $(\bm{W}^\top\bm{W})^{-1}$ exists and has full rank. Suppose $\bm{W}^\top\bm{x}=\mathbf{0}$; this implies $(\bm{W}^\top\bm{W})^{-1} \bm{W}^\top\bm{x}=\mathbf{0}$. Thus, the following two null spaces satisfy
$$
\mathcal{N}(\bm{W}^\top) \subseteq \mathcal{N}\left((\bm{W}^\top\bm{W})^{-1} \bm{W}^\top\right).
$$
Moreover, suppose $(\bm{W}^\top\bm{W})^{-1} \bm{W}^\top\bm{x}=\mathbf{0}$; since $(\bm{W}^\top\bm{W})^{-1}$ is invertible, this implies $\bm{W}^\top\bm{x}=(\bm{W}^\top\bm{W})\mathbf{0}=\mathbf{0}$, and
$$
\mathcal{N}\left((\bm{W}^\top\bm{W})^{-1} \bm{W}^\top\right)\subseteq \mathcal{N}(\bm{W}^\top).
$$
As a result, by ``sandwiching", it follows that
\begin{equation}\label{equation:als-z-sandiwch1}
\mathcal{N}(\bm{W}^\top) = \mathcal{N}\left((\bm{W}^\top\bm{W})^{-1} \bm{W}^\top\right).
\end{equation}
Therefore, $(\bm{W}^\top\bm{W})^{-1} \bm{W}^\top$ has full rank $K$. Let $\bm{T}=(\bm{W}^\top\bm{W})^{-1} \bm{W}^\top\in \mathbb{R}^{K\times M}$, and suppose $\bm{T}^\top\bm{x}=\mathbf{0}$.
This implies $\bm{A}^\top\bm{T}^\top\bm{x}=\mathbf{0}$, and $$ \mathcal{N}(\bm{T}^\top) \subseteq \mathcal{N}(\bm{A}^\top\bm{T}^\top). $$ Similarly, suppose $\bm{A}^\top(\bm{T}^\top\bm{x})=\mathbf{0}$. Since $\bm{A}$ has full rank with the dimension of the null space being 0: $\dim\left(\mathcal{N}(\bm{A}^\top)\right)=0$, $(\bm{T}^\top\bm{x})$ must be zero. The claim follows since $\bm{A}$ has full rank $M$ with the row space of $\bm{A}^\top$ being equal to the column space of $\bm{A}$ where $\dim\left(\mathcal{C}(\bm{A})\right)=M$ and the $\dim\left(\mathcal{N}(\bm{A}^\top)\right) = M-\dim\left(\mathcal{C}(\bm{A})\right)=0$. Therefore, $\bm{x}$ is in the null space of $\bm{T}^\top$ if $\bm{x}$ is in the null space of $\bm{A}^\top\bm{T}^\top$: $$ \mathcal{N}(\bm{A}^\top\bm{T}^\top)\subseteq \mathcal{N}(\bm{T}^\top). $$ By ``sandwiching" again, \begin{equation}\label{equation:als-z-sandiwch2} \mathcal{N}(\bm{T}^\top) = \mathcal{N}(\bm{A}^\top\bm{T}^\top). \end{equation} Since $\bm{T}^\top$ has full rank $K<M\leq N$, $\dim\left(\mathcal{N}(\bm{T}^\top) \right) = \dim\left(\mathcal{N}(\bm{A}^\top\bm{T}^\top)\right)=0$. Therefore, $\bm{Z}^\top=\bm{A}^\top\bm{T}^\top$ has full rank $K$. We complete the proof. \end{proof} \paragraph{Given $\bm{Z}$, optimizing $\bm{W}$.} Given $\bm{Z}$, $L(\bm{W},\bm{Z})$ can be written as $L(\bm{W}\mid \bm{Z})$ to emphasize the variable of $\bm{W}$: $$ \begin{aligned} L(\bm{W}\mid \bm{Z}) &= \norm{\bm{W}\bm{Z}-\bm{A}}^2. \end{aligned} $$ A direct way to solve (ALS2) is to find the differential of $L(\bm{W}\mid \bm{Z})$ with respect to $\bm{W}$: $$ \begin{aligned} \frac{\partial L(\bm{W}\mid \bm{Z})}{\partial \bm{W}} &= \frac{\partial \,\,\mathrm{tr}\left((\bm{W}\bm{Z}-\bm{A})(\bm{W}\bm{Z}-\bm{A})^\top\right)}{\partial \bm{W}}\\ &=\frac{\partial \,\,\mathrm{tr}\left((\bm{W}\bm{Z}-\bm{A})(\bm{W}\bm{Z}-\bm{A})^\top\right)}{\partial (\bm{W}\bm{Z}-\bm{A})} \frac{\partial (\bm{W}\bm{Z}-\bm{A})}{\partial \bm{W}}\\ &= 2(\bm{W}\bm{Z}-\bm{A})\bm{Z}^\top \in \mathbb{R}^{M\times K}. \end{aligned} $$ The ``candidate" update on $\bm{W}$ is similar to finding the root of the differential $\frac{\partial L(\bm{W}\mid \bm{Z})}{\partial \bm{W}}$: \begin{equation}\label{equation:als-w-update} \boxed{\bm{W}^\top = (\bm{Z}\bZ^\top)^{-1}\bm{Z}\bm{A}^\top \leftarrow \mathop{\arg\min}_{\bm{W}} L(\bm{W}\mid \bm{Z}).} \end{equation} Again, we emphasize that the update is only a ``candidate" update. We need to further check whether the Hessian is positive definite or not. The Hessian matrix is given by \begin{equation}\label{equation:als-w-update_hessian} \begin{aligned} \frac{\partial^2 L(\bm{W}\mid \bm{Z})}{\partial \bm{W}^2} =2\bm{Z}\bZ^\top \in \mathbb{R}^{K\times K}. \end{aligned} \end{equation} Therefore, by analogous analysis, if $\bm{Z}$ has full rank with $K<N$, the Hessian matrix is positive definite. \begin{lemma}[Rank of $\bm{W}$ after Updating]\label{lemma:als-update-w-rank} Suppose $\bm{A}\in \mathbb{R}^{M\times N}$ has full rank with $M\leq N$ and $\bm{Z}\in \mathbb{R}^{K\times N}$ has full rank with $K<N$, then the update of $\bm{W}^\top = (\bm{Z}\bZ^\top)^{-1}\bm{Z}\bm{A}^\top$ in Equation~\eqref{equation:als-w-update} has full rank. \end{lemma} The proof of Lemma~\ref{lemma:als-update-w-rank} is similar to that of Lemma~\ref{lemma:als-update-z-rank}, and we shall not repeat the details. 
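Putting the two ``candidate" updates together gives the alternating procedure that Algorithm~\ref{alg:als} formalizes below. As a rough illustration, here is a minimal NumPy sketch of that alternation (an added example under the full-rank assumptions above, not the book's reference implementation):
\begin{verbatim}
import numpy as np

def als(A, K, max_iter=100, tol=1e-6, seed=0):
    """Plain ALS for A ~ W Z, with W of shape (M, K) and Z of shape (K, N).
    Assumes the full-rank conditions discussed above hold, so that the two
    K x K systems remain invertible."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))
    Z = rng.standard_normal((K, N))
    for _ in range(max_iter):
        Z = np.linalg.solve(W.T @ W, W.T @ A)       # (ALS1): Z given W
        W = np.linalg.solve(Z @ Z.T, Z @ A.T).T     # (ALS2): W given Z
        if np.linalg.norm(A - W @ Z) < tol:
            break
    return W, Z

A = np.random.default_rng(1).standard_normal((6, 10))
W, Z = als(A, K=3)
print(np.linalg.norm(A - W @ Z))   # Frobenius reconstruction error
\end{verbatim}
In practice one would also monitor that the loss indeed decreases from one iteration to the next.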
\paragraph{Key observation.}
Combining the observations in Lemma~\ref{lemma:als-update-z-rank} and Lemma~\ref{lemma:als-update-w-rank}, as long as we \textcolor{blue}{initialize $\bm{Z}, \bm{W}$ to have full rank}, the updates in Equation~\eqref{equation:als-z-update} and Equation~\eqref{equation:als-w-update} are reasonable \textbf{since the Hessians in Equation~\eqref{equation:als-z-update_hessian} and \eqref{equation:als-w-update_hessian} are positive definite}. \textbf{The requirement that $M\leq N$ is reasonable in that there are typically more users than movies}. We conclude the process in Algorithm~\ref{alg:als}.
\begin{algorithm}[H]
\caption{Alternating Least Squares}
\label{alg:als}
\begin{algorithmic}[1]
\Require Matrix $\bm{A}\in \mathbb{R}^{M\times N}$ \textcolor{blue}{with $M\leq N$};
\State Initialize $\bm{W}\in \mathbb{R}^{M\times K}$, $\bm{Z}\in \mathbb{R}^{K\times N}$ \textcolor{blue}{with full rank and $K<M\leq N$};
\State Choose a stop criterion on the approximation error $\delta$;
\State Choose the maximal number of iterations $C$;
\State $iter=0$; \Comment{Count for the number of iterations}
\While{$\norm{\bm{A}-\bm{W}\bm{Z}}>\delta $ and $iter<C$}
\State $iter=iter+1$;
\State $\bm{Z} = (\bm{W}^\top\bm{W})^{-1} \bm{W}^\top \bm{A} \leftarrow \mathop{\arg \min}_{\bm{Z}} L(\bm{Z}\mid \bm{W})$;
\State $\bm{W}^\top = (\bm{Z}\bZ^\top)^{-1}\bm{Z}\bm{A}^\top \leftarrow \mathop{\arg\min}_{\bm{W}} L(\bm{W}\mid \bm{Z})$;
\EndWhile
\State Output $\bm{W},\bm{Z}$;
\end{algorithmic}
\end{algorithm}
\section{Regularization: Extension to General Matrices}\label{section:regularization-extention-general}
\textit{Regularization} is a machine learning technique used to prevent overfitting and improve model generalization. Overfitting occurs when a model is overly complex and fits the training data too closely, resulting in poor performance on new, unseen data. Regularization adds a constraint or penalty term to the loss function used in model optimization, discouraging models that are overly complex. This results in a trade-off between having a simple, generalizable model and fitting the training data well. $L_1$ regularization, $L_2$ regularization, and elastic net regularization (the combination of $L_1$ and $L_2$ regularization) are common types of regularization, and they are widely used in machine learning algorithms such as linear regression, logistic regression, and neural networks. In this context, we can add an $L_2$ regularization term and minimize the following loss:
\begin{equation}\label{equation:als-regularion-full-matrix}
L(\bm{W},\bm{Z}) =\norm{\bm{W}\bm{Z}-\bm{A}}^2 +\lambda_w \norm{\bm{W}}^2 + \lambda_z \norm{\bm{Z}}^2, \qquad \lambda_w>0, \lambda_z>0,
\end{equation}
where the differentials with respect to $\bm{Z}$ and $\bm{W}$ are given respectively by
\begin{equation}\label{equation:als-regulari-gradien}
\left\{
\begin{aligned}
\frac{\partial L(\bm{W},\bm{Z}) }{\partial \bm{Z}} &= 2\bm{W}^\top(\bm{W}\bm{Z}-\bm{A}) + 2\lambda_z\bm{Z} \in \mathbb{R}^{K\times N};\\
\frac{\partial L(\bm{W},\bm{Z}) }{\partial \bm{W}} &= 2(\bm{W}\bm{Z}-\bm{A})\bm{Z}^\top + 2\lambda_w\bm{W} \in \mathbb{R}^{M\times K}.
\end{aligned}
\right.
\end{equation}
The Hessian matrices are given respectively by
$$
\left\{
\begin{aligned}
\frac{\partial^2 L(\bm{W},\bm{Z}) }{\partial \bm{Z}^2} &= 2\bm{W}^\top\bm{W}+ 2\lambda_z\bm{I} \in \mathbb{R}^{K\times K};\\
\frac{\partial^2 L(\bm{W},\bm{Z}) }{\partial \bm{W}^2} &= 2\bm{Z}\bZ^\top + 2\lambda_w\bm{I} \in \mathbb{R}^{K\times K}, \\
\end{aligned}
\right.
$$
which are positive definite due to the perturbation by the regularization. To see this, we have
$$
\left\{
\begin{aligned}
\bm{x}^\top (2\bm{W}^\top\bm{W} +2\lambda_z\bm{I})\bm{x} &= \underbrace{2\bm{x}^\top\bm{W}^\top\bm{W}\bm{x}}_{\geq 0} + 2\lambda_z \norm{\bm{x}}^2>0, \gap \text{for nonzero $\bm{x}$};\\
\bm{x}^\top (2\bm{Z}\bZ^\top +2\lambda_w\bm{I})\bm{x} &= \underbrace{2\bm{x}^\top\bm{Z}\bZ^\top\bm{x}}_{\geq 0} + 2\lambda_w \norm{\bm{x}}^2>0,\gap \text{for nonzero $\bm{x}$}.
\end{aligned}
\right.
$$
\textbf{The regularization makes the Hessian matrices positive definite even if $\bm{W}, \bm{Z}$ are rank-deficient}. And now the matrix decomposition can be extended to any matrix, even when $M>N$. In rare cases, $K$ can be chosen as $K>\max\{M, N\}$ such that a high-rank approximation of $\bm{A}$ is obtained. However, in most scenarios, we want to find the low-rank approximation of $\bm{A}$ such that $K<\min\{M, N\}$. For example, the ALS can be utilized to find low-rank neural networks, reducing the memory of the neural networks whilst increasing the performance \citep{lu2021numerical}. Therefore, the minimizers are given by finding the roots of the differentials:
\begin{equation}\label{equation:als-regular-final-all}
\left\{
\begin{aligned}
\bm{Z} &= (\bm{W}^\top\bm{W}+ \lambda_z\bm{I})^{-1} \bm{W}^\top \bm{A} ;\\
\bm{W}^\top &= (\bm{Z}\bZ^\top+\lambda_w\bm{I})^{-1}\bm{Z}\bm{A}^\top .
\end{aligned}
\right.
\end{equation}
The regularization parameters $\lambda_z, \lambda_w\in \mathbb{R}$ are used to balance the trade-off between the accuracy of the approximation and the smoothness of the computed solution. The selection of the parameters is typically problem dependent and can be obtained by \textit{cross-validation}. Again, we conclude the process in Algorithm~\ref{alg:als-regularizer}.
\begin{algorithm}[H]
\caption{Alternating Least Squares with Regularization}
\label{alg:als-regularizer}
\begin{algorithmic}[1]
\Require Matrix $\bm{A}\in \mathbb{R}^{M\times N}$;
\State Initialize $\bm{W}\in \mathbb{R}^{M\times K}$, $\bm{Z}\in \mathbb{R}^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State Choose a stop criterion on the approximation error $\delta$;
\State Choose regularization parameters $\lambda_w, \lambda_z$;
\State Choose the maximal number of iterations $C$;
\State $iter=0$; \Comment{Count for the number of iterations}
\While{$\norm{\bm{A}-\bm{W}\bm{Z}}>\delta $ and $iter<C$}
\State $iter=iter+1$;
\State $\bm{Z} = (\bm{W}^\top\bm{W}+ \lambda_z\bm{I})^{-1} \bm{W}^\top \bm{A} \leftarrow \mathop{\arg \min}_{\bm{Z}} L(\bm{Z}\mid \bm{W})$;
\State $\bm{W}^\top = (\bm{Z}\bZ^\top+\lambda_w\bm{I})^{-1}\bm{Z}\bm{A}^\top \leftarrow \mathop{\arg\min}_{\bm{W}} L(\bm{W}\mid \bm{Z})$;
\EndWhile
\State Output $\bm{W},\bm{Z}$;
\end{algorithmic}
\end{algorithm}
\section{Missing Entries}\label{section:alt-columb-by-column}
The matrix decomposition via the ALS is extensively used for the Netflix recommender data, where many entries are missing because many users have not watched some of the movies or would not rate them for various reasons.
We can employ an additional mask matrix $\bm{M}\in \mathbb{R}^{M\times N}$ where $m_{mn}\in \{0,1\}$ means if the user $n$ has rated the movie $m$ or not. Therefore, the loss function can be defined as $$ L(\bm{W},\bm{Z}) = \norm{\bm{M}\circledast \bm{A}- \bm{M}\circledast (\bm{W}\bm{Z})}^2, $$ where $\circledast$ is the \textit{Hadamard product} between matrices. For example, the Hadamard product for a $3 \times 3$ matrix $\bm{A}$ with a $3\times 3$ matrix $\bm{B}$ is $$ \bm{A}\circledast \bm{B} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \circledast \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & a_{12}b_{12} & a_{13}b_{13} \\ a_{21}b_{21} & a_{22}b_{22} & a_{23}b_{23} \\ a_{31}b_{31} & a_{32}b_{32} & a_{33}b_{33} \end{bmatrix}. $$ To find the solution of the problem, we decompose the updates in Equation~\eqref{equation:als-regular-final-all} into: \begin{equation}\label{equation:als-ori-all-wz} \left\{ \begin{aligned} \bm{z}_n &= (\bm{W}^\top\bm{W}+ \lambda_z\bm{I})^{-1} \bm{W}^\top \bm{a}_n, &\gap& \text{for $n\in \{1,2,\ldots, N\}$} ;\\ \bm{w}_m &= (\bm{Z}\bZ^\top+\lambda_w\bm{I})^{-1}\bm{Z}\bm{b}_m, &\gap& \text{for $m\in \{1,2,\ldots, M\}$} , \end{aligned} \right. \end{equation} where $\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N], \bm{A}=[\bm{a}_1,\bm{a}_2, \ldots, \bm{a}_N]$ are the column partitions of $\bm{Z}, \bm{A}$ respectively. And $\bm{W}^\top=[\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_M], \bm{A}^\top=[\bm{b}_1,\bm{b}_2, \ldots, \bm{b}_M]$ are the column partitions of $\bm{W}^\top, \bm{A}^\top$ respectively. The factorization of the updates indicates the update can be done in a column-by-column fashion. \paragraph{Given $\bm{W}$.} Let $\bm{o}_n\in \mathbb{R}^M$ denote the movies rated by user $n$ where $o_{nm}=1$ if user $n$ has rated movie $m$, and $o_{nm}=0$ otherwise. Then the $n$-th column of $\bm{A}$ without missing entries can be denoted as the Matlab style notation $\bm{a}_n[\bm{o}_n]$. And we want to approximate the existing $n$-th column by $\bm{a}_n[\bm{o}_n] \approx \bm{W}[\bm{o}_n, :]\bm{z}_n$ which is actually a rank-one least squares problem: \begin{equation}\label{equation:als-ori-all-wz-modif-z} \begin{aligned} \bm{z}_n &= \left(\bm{W}[\bm{o}_n, :]^\top\bm{W}[\bm{o}_n, :]+ \lambda_z\bm{I}\right)^{-1} \bm{W}[\bm{o}_n, :]^\top \bm{a}_n[\bm{o}_n], &\gap& \text{for $n\in \{1,2,\ldots, N\}$} . \end{aligned} \end{equation} Moreover, the loss function with respect to $\bm{z}_n$ can be described by $$ L(\bm{z}_n\mid \bm{W}) =\sum_{m\in \bm{o}_n} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2 $$ and if we are concerned about the loss for all users: $$ L(\bm{Z}\mid \bm{W}) =\sum_{n=1}^N\ \sum_{m\in \bm{o}_n} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2. $$ \paragraph{Given $\bm{Z}$.} Similarly, if $\bm{p}_m \in \mathbb{R}^{N}$ denotes the users that have rated the movie $m$ with $p_{mn}=1$ if the movie $m$ has been rated by user $n$. Then the $m$-th row of $\bm{A}$ without missing entries can be denoted as the Matlab style notation $\bm{b}_m[\bm{p}_m]$. 
And we want to approximate the existing $m$-th row by $\bm{b}_m[\bm{p}_m] \approx \bm{Z}[:, \bm{p}_m]^\top\bm{w}_m$,\footnote{Note that $\bm{Z}[:, \bm{p}_m]^\top$ is the transpose of $\bm{Z}[:, \bm{p}_m]$, which is equal to $\bm{Z}^\top[\bm{p}_m,:]$, i.e., transposing first and then selecting.} which again is a rank-one least squares problem:
\begin{equation}\label{equation:als-ori-all-wz-modif-w}
\begin{aligned}
\bm{w}_m &= (\bm{Z}[:, \bm{p}_m]\bm{Z}[:, \bm{p}_m]^\top+\lambda_w\bm{I})^{-1}\bm{Z}[:, \bm{p}_m]\bm{b}_m[\bm{p}_m], &\gap& \text{for $m\in \{1,2,\ldots, M\}$} .
\end{aligned}
\end{equation}
Moreover, the loss function with respect to $\bm{w}_m$ can be described by
$$
L(\bm{w}_m\mid \bm{Z}) =\sum_{n\in \bm{p}_m} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2
$$
and if we are concerned about the loss for all movies:
$$
L(\bm{W}\mid \bm{Z}) =\sum_{m=1}^M \sum_{n\in \bm{p}_m} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2 .
$$
The procedure is again formulated in Algorithm~\ref{alg:als-regularizer-missing-entries}.
\begin{algorithm}[h]
\caption{Alternating Least Squares with Missing Entries and Regularization}
\label{alg:als-regularizer-missing-entries}
\begin{algorithmic}[1]
\Require Matrix $\bm{A}\in \mathbb{R}^{M\times N}$;
\State Initialize $\bm{W}\in \mathbb{R}^{M\times K}$, $\bm{Z}\in \mathbb{R}^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State Choose a stop criterion on the approximation error $\delta$;
\State Choose regularization parameters $\lambda_w, \lambda_z$;
\State Compute the mask matrix $\bm{M}$ from $\bm{A}$;
\State Choose the maximal number of iterations $C$;
\State $iter=0$; \Comment{Count for the number of iterations}
\While{\textcolor{blue}{$\norm{\bm{M}\circledast \bm{A}- \bm{M}\circledast (\bm{W}\bm{Z})}^2>\delta $} and $iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State $\bm{z}_n = \left(\bm{W}[\bm{o}_n, :]^\top\bm{W}[\bm{o}_n, :]+ \lambda_z\bm{I}\right)^{-1} \bm{W}[\bm{o}_n, :]^\top \bm{a}_n[\bm{o}_n]$; \Comment{$n$-th column of $\bm{Z}$}
\EndFor
\For{$m=1,2,\ldots, M$}
\State $\bm{w}_m = (\bm{Z}[:, \bm{p}_m]\bm{Z}[:, \bm{p}_m]^\top+\lambda_w\bm{I})^{-1}\bm{Z}[:, \bm{p}_m]\bm{b}_m[\bm{p}_m]$;\Comment{$m$-th column of $\bm{W}^\top$}
\EndFor
\EndWhile
\State Output $\bm{W}^\top=[\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_M],\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N]$;
\end{algorithmic}
\end{algorithm}
\section{Vector Inner Product}\label{section:als-vector-product}
We have seen that the ALS finds matrices $\bm{W}, \bm{Z}$ such that $\bm{W}\bm{Z}$ approximates $\bm{A}$, i.e., $\bm{A}\approx \bm{W}\bm{Z}$, in terms of the least squares loss:
$$
\mathop{\min}_{\bm{W},\bm{Z}} \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2,
$$
that is, each entry $a_{mn}$ in $\bm{A}$ is approximated by the inner product $\bm{w}_m^\top\bm{z}_n$ between the two vectors. The geometric definition of the vector inner product is given by
$$
\bm{w}_m^\top\bm{z}_n = \norm{\bm{w}_m}\cdot \norm{\bm{z}_n} \cos \theta,
$$
where $\theta$ is the angle between $\bm{w}_m$ and $\bm{z}_n$. So if the vector norms of $\bm{w}_m, \bm{z}_n$ are fixed, the smaller the angle, the larger the inner product. Coming back to the Netflix data, where the ratings range from 0 to 5 and a larger value means ``better" (the user likes the movie more): if $\bm{w}_m$ and $\bm{z}_n$ fall ``close" enough, then $\bm{w}_m^\top\bm{z}_n$ will have a larger value.
This reveals the meaning behind the ALS, where $\bm{w}_m$ represents the features of movie $m$, whilst $\bm{z}_n$ contains the features of user $n$. In other words, the ALS associates each user with a \textit{latent vector of preference}, and each movie with a \textit{latent vector of attributes}. And each element in $\bm{w}_m$ and $\bm{z}_n$ corresponds to the same feature. For example, it could be that the second feature $w_{m2}$\footnote{$w_{m2}$ is the second element of the vector $\bm{w}_{m}$.} represents whether the movie is an action movie or not, and $z_{n2}$ denotes whether user $n$ likes action movies or not. If this happens to be the case, then $\bm{w}_m^\top\bm{z}_n$ will be large and approximates $a_{mn}$ well.
Note that, in the decomposition $\bm{A}\approx \bm{W}\bm{Z}$, we know the rows of $\bm{W}$ contain the hidden features of the movies, and the columns of $\bm{Z}$ contain the hidden features of the users. However, we cannot identify explicitly what the meanings of the rows of $\bm{W}$ or the columns of $\bm{Z}$ are. We know they could be something like categories or genres of the movies that provide some underlying connections between the users and the movies, but we cannot be sure what exactly they are. This is where the terminology ``hidden" comes from.
\index{Gradient descent}
\section{Gradient Descent (GD)}\label{section:als-gradie-descent}
In Algorithms~\ref{alg:als}, \ref{alg:als-regularizer}, and \ref{alg:als-regularizer-missing-entries}, we reduce the loss via the inverse of matrices. The reality, however, is frequently far from straightforward, particularly in today's big data era. As data volumes explode, the cost of the matrix inversion grows at a pace proportional to the cube of the matrix dimension (e.g., the matrix inversion algorithm by LU decomposition in \citet{lu2021numerical}), which poses a great challenge to storage and computational resources. This has led to the ongoing development of gradient-based optimization techniques. The \textit{gradient descent (GD)} method and its variant, the \textit{stochastic gradient descent (SGD)} method, are among the simplest, fastest, and most efficient of these methods \citep{lu2022gradient}. Convex loss function optimization problems are frequently solved using this type of approach. We now go into more detail about its principle.
In Equation~\eqref{equation:als-ori-all-wz}, we obtained the column-by-column update directly from the full matrix form in Equation~\eqref{equation:als-regular-final-all} (with regularization considered). Now let's see what's behind the idea.
Following Equation~\eqref{equation:als-regularion-full-matrix}, the loss under the regularization is
\begin{equation}
L(\bm{W},\bm{Z}) =\norm{\bm{W}\bm{Z}-\bm{A}}^2 +\lambda_w \norm{\bm{W}}^2 + \lambda_z \norm{\bm{Z}}^2, \qquad \lambda_w>0, \lambda_z>0.
\end{equation}
Since we are now considering the minimization of the above loss with respect to $\bm{z}_n$, we can decompose the loss into
\begin{equation}\label{als:gradient-regularization-zn}
\begin{aligned}
L(\bm{z}_n) &=\norm{\bm{W}\bm{Z}-\bm{A}}^2 +\lambda_w \norm{\bm{W}}^2 + \lambda_z \norm{\bm{Z}}^2\\
&= \norm{\bm{W}\bm{z}_n-\bm{a}_n}^2 + \lambda_z \norm{\bm{z}_n}^2 + \underbrace{\sum_{i\neq n} \norm{\bm{W}\bm{z}_i-\bm{a}_i}^2 + \lambda_z \sum_{i\neq n}\norm{\bm{z}_i}^2 + \lambda_w \norm{\bm{W}}^2 }_{C_{z_n}},
\end{aligned}
\end{equation}
where $C_{z_n}$ is a constant with respect to $\bm{z}_n$, and $\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N], \bm{A}=[\bm{a}_1,\bm{a}_2, \ldots, \bm{a}_N]$ are the column partitions of $\bm{Z}, \bm{A}$ respectively. Taking the differential
$$
\frac{\partial L(\bm{z}_n)}{\partial \bm{z}_n} = 2\bm{W}^\top\bm{W}\bm{z}_n - 2\bm{W}^\top\bm{a}_n + 2\lambda_z\bm{z}_n,
$$
and setting it to zero, we recover exactly the first column-wise update in Equation~\eqref{equation:als-ori-all-wz}:
$$
\bm{z}_n = (\bm{W}^\top\bm{W}+ \lambda_z\bm{I})^{-1} \bm{W}^\top \bm{a}_n, \gap \text{for $n\in \{1,2,\ldots, N\}$}.
$$
Similarly, we can decompose the loss with respect to $\bm{w}_m$,
\begin{equation}\label{als:gradient-regularization-wd}
\begin{aligned}
L(\bm{w}_m ) &=\norm{\bm{W}\bm{Z}-\bm{A}}^2 +\lambda_w \norm{\bm{W}}^2 + \lambda_z \norm{\bm{Z}}^2\\
&=\norm{\bm{Z}^\top\bm{W}^\top-\bm{A}^\top}^2 +\lambda_w \norm{\bm{W}^\top}^2 + \lambda_z \norm{\bm{Z}}^2\\
&=\norm{\bm{Z}^\top\bm{w}_m-\bm{b}_m}^2 + \lambda_w \norm{\bm{w}_m}^2 + \underbrace{\sum_{i\neq m} \norm{\bm{Z}^\top\bm{w}_i-\bm{b}_i}^2 + \lambda_w \sum_{i\neq m}\norm{\bm{w}_i}^2 + \lambda_z \norm{\bm{Z}}^2 }_{C_{w_m}},
\end{aligned}
\end{equation}
where $C_{w_m}$ is a constant with respect to $\bm{w}_m$, and $\bm{W}^\top=[\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_M], \bm{A}^\top=[\bm{b}_1,\bm{b}_2, \ldots,$ $\bm{b}_M]$ are the column partitions of $\bm{W}^\top, \bm{A}^\top$ respectively. Analogously, taking the differential with respect to $\bm{w}_m$,
$$
\frac{\partial L(\bm{w}_m)}{\partial \bm{w}_m} = 2\bm{Z}\bZ^\top\bm{w}_m - 2\bm{Z}\bm{b}_m + 2\lambda_w\bm{w}_m,
$$
and setting it to zero, we recover exactly the second column-wise update in Equation~\eqref{equation:als-ori-all-wz}:
$$
\bm{w}_m = (\bm{Z}\bZ^\top+\lambda_w\bm{I})^{-1}\bm{Z}\bm{b}_m, \gap \text{for $m\in \{1,2,\ldots, M\}$} .
$$
Now suppose we write out the iteration number ($k=1,2,\ldots$) as a superscript and want to find the updates $\{\bm{z}^{(k+1)}_n, \bm{w}^{(k+1)}_m\}$ in the $(k+1)$-th iteration based on $\{\bm{Z}^{(k)}, \bm{W}^{(k)}\}$ in the $k$-th iteration:
$$
\left\{
\begin{aligned}
\bm{z}^{(k+1)}_n &\leftarrow \mathop{\arg \min}_{\bm{z}_n^{(k)}} L(\bm{z}_n^{(k)});\\
\bm{w}_m^{(k+1)} &\leftarrow \mathop{\arg\min}_{\bm{w}_m^{(k)}} L(\bm{w}_m^{(k)}).
\end{aligned}
\right.
$$
For simplicity, we will be looking at $\bm{z}^{(k+1)}_n \leftarrow \mathop{\arg \min}_{\bm{z}_n^{(k)}} L(\bm{z}_n^{(k)})$, and the derivation for the update on $\bm{w}_m^{(k+1)}$ will be the same.
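Before turning to the linear (gradient-based) update, the closed-form column-wise updates just recovered can be sketched in a few lines of NumPy. This is an illustrative sketch only; the regularization values, iteration count, and toy data below are arbitrary choices, not taken from the text.
\begin{verbatim}
import numpy as np

def als_columnwise(A, K, lam_w=0.1, lam_z=0.1, n_iter=50, seed=0):
    """Regularized ALS with the column-wise closed-form updates above."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))   # rows w_m^T
    Z = rng.standard_normal((K, N))   # columns z_n
    I = np.eye(K)
    for _ in range(n_iter):
        for n in range(N):            # z_n = (W^T W + lam_z I)^{-1} W^T a_n
            Z[:, n] = np.linalg.solve(W.T @ W + lam_z * I, W.T @ A[:, n])
        for m in range(M):            # w_m = (Z Z^T + lam_w I)^{-1} Z b_m
            W[m, :] = np.linalg.solve(Z @ Z.T + lam_w * I, Z @ A[m, :])
    return W, Z

A = np.random.default_rng(2).standard_normal((8, 12))
W, Z = als_columnwise(A, K=4)
print(np.linalg.norm(A - W @ Z))
\end{verbatim}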
\paragraph{Approximation by linear update.} Suppose we want to approximate $\bm{z}^{(k+1)}_n$ by a \textit{linear update} on $\bm{z}^{(k)}_n$: $$ \text{Linear Update: }\gap \boxed{\bm{z}^{(k+1)}_n = \bm{z}^{(k)}_n + \eta \bm{v}.} $$ The problem now turns to the solution of $\bm{v}$ such that $$ \bm{v}=\mathop{\arg \min}_{\bm{v}} L(\bm{z}^{(k)}_n + \eta \bm{v}) . $$ By Taylor's formula (Appendix~\ref{appendix:taylor-expansion}, p.~\pageref{appendix:taylor-expansion}), $L(\bm{z}^{(k)}_n + \eta \bm{v})$ can be approximated by \index{Taylor's formula} $$ L(\bm{z}^{(k)}_n + \eta \bm{v}) \approx L(\bm{z}^{(k)}_n ) + \eta \bm{v}^\top \nabla L(\bm{z}^{(k)}_n ), $$ where $\eta$ is small enough and $\nabla L(\bm{z}^{(k)}_n )$ is the gradient of $L(\bm{z})$ at $\bm{z}^{(k)}_n$. Then a search under the condition $\norm{\bm{v}}=1$ given positive $\eta$ is shown as follows: $$ \bm{v}=\mathop{\arg \min}_{\norm{\bm{v}}=1} L(\bm{z}^{(k)}_n + \eta \bm{v}) \approx\mathop{\arg \min}_{\norm{\bm{v}}=1} \left\{L(\bm{z}^{(k)}_n ) + \eta \bm{v}^\top \nabla L(\bm{z}^{(k)}_n )\right\}. $$ This is known as the \textit{greedy search}. The optimal $\bm{v}$ can be obtained by $$ \bm{v} = -\frac{\nabla L(\bm{z}^{(k)}_n )}{\norm{\nabla L(\bm{z}^{(k)}_n )}}, $$ i.e., $\bm{v}$ is in the opposite direction of $\nabla L(\bm{z}^{(k)}_n )$. Therefore, the update of $\bm{z}_n^{(k+1)}$ is reasonable to be taken as $$ \bm{z}^{(k+1)}_n =\bm{z}^{(k)}_n + \eta \bm{v} = \bm{z}^{(k)}_n - \eta \frac{\nabla L(\bm{z}^{(k)}_n )}{\norm{\nabla L(\bm{z}^{(k)}_n )}}, $$ which is usually called the \textit{gradient descent} (GD). Similarly, the gradient descent of $\bm{w}_m^{(k+1)}$ is given by $$ \bm{w}^{(k+1)}_m =\bm{w}^{(k)}_m + \eta \bm{v} = \bm{w}^{(k)}_m - \eta \frac{\nabla L(\bm{w}^{(k)}_m )}{\norm{\nabla L(\bm{w}^{(k)}_m )}}. $$ The updated procedure of Algorithm~\ref{alg:als-regularizer} via a gradient descent way is then formulated in Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient}. 
\begin{algorithm}[h] \caption{Alternating Least Squares with Full Entries and Gradient Descent} \label{alg:als-regularizer-missing-stochas-gradient} \begin{algorithmic}[1] \Require Matrix $\bm{A}\in \mathbb{R}^{M\times N}$; \State Initialize $\bm{W}\in \mathbb{R}^{M\times K}$, $\bm{Z}\in \mathbb{R}^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$}; \State Choose a stop criterion on the approximation error $\delta$; \State Choose regularization parameters $\lambda_w, \lambda_z$, and step size $\eta_w, \eta_z$; \State Choose the maximal number of iterations $C$; \State $iter=0$; \Comment{Count for the number of iterations} \While{$\norm{\bm{A}- (\bm{W}\bm{Z})}^2>\delta $ and $iter<C$} \State $iter=iter+1$; \For{$n=1,2,\ldots, N$} \State $\bm{z}^{(k+1)}_n =\bm{z}^{(k)}_n - \eta_z \frac{\nabla L(\bm{z}^{(k)}_n )}{\norm{\nabla L(\bm{z}^{(k)}_n )}}$; \Comment{$n$-th column of $\bm{Z}$} \EndFor \For{$m=1,2,\ldots, M$} \State $\bm{w}^{(k+1)}_m = \bm{w}^{(k)}_m - \eta_w \frac{\nabla L(\bm{w}^{(k)}_m )}{\norm{\nabla L(\bm{w}^{(k)}_m )}}$;\Comment{$m$-th column of $\bm{W}^\top$} \EndFor \EndWhile \State Output $\bm{W}^\top=[\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_M],\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N]$; \end{algorithmic} \end{algorithm} \paragraph{Geometrical interpretation of gradient descent.} \begin{lemma}[Direction of Gradients]\label{lemm:direction-gradients} An important fact is that the gradients of variables given a loss function are orthogonal to level curves (a.k.a., level surface). \end{lemma} \begin{proof}[of Lemma~\ref{lemm:direction-gradients}] This is equivalent to proving that the gradient is orthogonal to the tangent of the level curve. For simplicity, let's first look at the 2-dimensional case. Suppose the level curve has the form $f(x,y)=c$. This implicitly gives a relation between $x$ and $y$ such that $y=y(x)$ where $y$ can be thought of as a function of $x$. Therefore, the level curve can be written as $$ f(x, y(x)) = c. $$ The chain rule indicates $$ \frac{\partial f}{\partial x} \underbrace{\frac{dx}{dx}}_{=1} + \frac{\partial f}{\partial y} \frac{dy}{dx}=0. $$ Therefore, the gradient is perpendicular to the tangent: $$ \left\langle \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right\rangle \cdot \left\langle \frac{dx}{dx}, \frac{dy}{dx}\right\rangle=0. $$ In full generality, suppose the level curve of a vector $\bm{x}\in \mathbb{R}^n$: $f(\bm{x}) = f(x_1, x_2, \ldots, x_n)=c$. Each variable $x_i$ can be regarded as a function of a variable $t$ on the level curve $f(\bm{x})=c$: $f(x_1(t), x_2(t), \ldots, x_n(t))=c$. Differentiate the equation with respect to $t$ by chain rule: $$ \frac{\partial f}{\partial x_1} \frac{dx_1}{dt} + \frac{\partial f}{\partial x_2} \frac{dx_2}{dt} +\ldots + \frac{\partial f}{\partial x_n} \frac{dx_n}{dt} =0. $$ Therefore, the gradient is perpendicular to the tangent in $n$-dimensional case: $$ \left\langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n}\right\rangle \cdot \left\langle \frac{dx_1}{dt}, \frac{dx_2}{dt}, \ldots \frac{dx_n}{dt}\right\rangle=0. $$ This completes the proof. \end{proof} The lemma above reveals the geometrical interpretation of gradient descent. For finding a solution to minimize a convex function $L(\bm{z})$, gradient descent goes to the negative gradient direction that can decrease the loss. 
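The orthogonality stated in Lemma~\ref{lemm:direction-gradients} can also be checked numerically. The following is a small self-contained sketch on a toy convex function; the specific function $f(x,y)=x^2+2y^2$ and its parameterization are illustrative choices added here, not taken from the text.
\begin{verbatim}
import numpy as np

# Points on the level curve f(x, y) = x^2 + 2 y^2 = 4 can be parameterized
# by t; the tangent direction at such a point is orthogonal to the gradient.
def grad_f(x, y):
    return np.array([2.0 * x, 4.0 * y])

t = 0.7
x, y = 2.0 * np.cos(t), np.sqrt(2.0) * np.sin(t)      # x^2 + 2 y^2 = 4
tangent = np.array([-2.0 * np.sin(t), np.sqrt(2.0) * np.cos(t)])
print(np.dot(grad_f(x, y), tangent))                  # ~ 0 up to rounding
\end{verbatim}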
Figure~\ref{fig:alsgd-geometrical} depicts a $2$-dimensional case, where $-\nabla L(\bm{z})$ pushes the loss to decrease for the convex function $L(\bm{z})$. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[A 2-dimensional convex function $L(\bm{z})$]{\label{fig:alsgd1} \includegraphics[width=0.47\linewidth]{./imgs/alsgd1.pdf}} \subfigure[$L(\bm{z})=c$ is a constant]{\label{fig:alsgd2} \includegraphics[width=0.44\linewidth]{./imgs/alsgd2.pdf}} \caption{Figure~\ref{fig:alsgd1} shows a function ``density" and a contour plot (\textcolor{bluepigment}{blue}=low, \textcolor{mydarkyellow}{yellow}=high) where the upper graph is the ``density", and the lower one is the projection of it (i.e., contour). Figure~\ref{fig:alsgd2}: $-\nabla L(\bm{z})$ pushes the loss to decrease for the convex function $L(\bm{z})$.} \label{fig:alsgd-geometrical} \end{figure} \section{Regularization: A Geometrical Interpretation} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{./imgs/alsgd3.pdf} \caption{Constrained gradient descent with $\bm{z}^\top\bm{z}\leq C$. The \textcolor{green}{green} vector $\bm{w}$ is the projection of $\bm{v}_1$ into $\bm{z}^\top\bm{z}\leq C$ where $\bm{v}_1$ is the component of $-\nabla l(\bm{z})$ perpendicular to $\bm{z}_1$. The right picture is the next step after the update in the left picture. $\bm{z}^\star$ denotes the optimal solution of \{$\min l(\bm{z})$\}.} \label{fig:alsgd3} \end{figure} We have seen in Section~\ref{section:regularization-extention-general} that regularization can extend the ALS to general matrices. The gradient descent can reveal the geometrical meaning of the regularization. To avoid confusion, we denote the loss function without regularization by $l(\bm{z})$ and the loss with regularization by $L(\bm{z}) = \l(\bm{z})+\lambda_z \norm{\bm{z}}^2$ where $l(\bm{z}): \mathbb{R}^n \rightarrow \mathbb{R}$. When minimizing $l(\bm{z})$, descent method will search in $\mathbb{R}^n$ for a solution. However, in machine learning, searching in the whole space $\mathbb{R}^n$ can cause overfitting. A partial solution is to search in a subset of the vector space, e.g., searching in $\bm{z}^\top\bm{z} < C$ for some constant $C$. That is $$ \mathop{\arg\min}_{\bm{z}} \gap l(\bm{z}), \qquad s.t., \gap \bm{z}^\top\bm{z}\leq C. $$ As shown above, a trivial gradient descent method will go further in the direction of $-\nabla l(\bm{z})$, i.e., update $\bm{z}$ by $\bm{z}\leftarrow \bm{z}-\eta \nabla l(\bm{z})$ for a small step size $\eta$. When the level curve is $l(\bm{z})=c_1$ and the current position of $\bm{z}=\bm{z}_1$ where $\bm{z}_1$ is the intersection of $\bm{z}^\top\bm{z}=C$ and $l(\bm{z})=c_1$, the descent direction $-\nabla l(\bm{z}_1)$ will be perpendicular to the level curve of $l(\bm{z}_1)=c_1$ as shown in the left picture of Figure~\ref{fig:alsgd3} (by Lemma~\ref{lemm:direction-gradients}). However, if we further restrict that the optimal value can only be in $\bm{z}^\top\bm{z}\leq C$, the trivial descent direction $-\nabla l(\bm{z}_1)$ will lead $\bm{z}_2=\bm{z}_1-\eta \nabla l(\bm{z}_1)$ outside of $\bm{z}^\top\bm{z}\leq C$. A solution is to decompose the step $-\nabla l(\bm{z}_1)$ into $$ -\nabla l(\bm{z}_1) = a\bm{z}_1 + \bm{v}_1, $$ where $a\bm{z}_1$ is the component perpendicular to the curve of $\bm{z}^\top\bm{z}=C$, and $\bm{v}_1$ is the component parallel to the curve of $\bm{z}^\top\bm{z}=C$. 
Keep only the step $\bm{v}_1$, then the update $$ \bm{z}_2 = \text{project}(\bm{z}_1+\eta \bm{v}_1) = \text{project}\left(\bm{z}_1 + \eta \underbrace{(-\nabla l(\bm{z}_1) -a\bm{z}_1)}_{\bm{v}_1}\right)\footnote{where the project($\bm{x}$) will project the vector $\bm{x}$ to the closest point inside $\bm{z}^\top\bm{z}\leq C$. Notice here the direct update $\bm{z}_2 = \bm{z}_1+\eta \bm{v}_1$ can still make $\bm{z}_2$ outside the curve of $\bm{z}^\top\bm{z}\leq C$.} $$ will lead to a smaller loss from $l(\bm{z}_1)$ to $l(\bm{z}_2)$ and it still matches the prerequisite of $\bm{z}^\top\bm{z}\leq C$. This is known as the \textit{projection gradient descent}. It is not hard to see that the update $\bm{z}_2 = \text{project}(\bm{z}_1+\eta \bm{v}_1)$ is equivalent to finding a vector $\bm{w}$ (shown by the \textcolor{green}{green} vector in the left picture of Figure~\ref{fig:alsgd3}) such that $\bm{z}_2=\bm{z}_1+\bm{w}$ is inside the curve of $\bm{z}^\top\bm{z}\leq C$. Mathematically, the $\bm{w}$ can be obtained by $-\nabla l(\bm{z}_1) -2\lambda \bm{z}_1$ for some $\lambda$ as shown in the middle picture of Figure~\ref{fig:alsgd3}. This is exactly the negative gradient of $L(\bm{z})=l(\bm{z})+\lambda\norm{\bm{z}}^2$ such that $$ -\nabla L(\bm{z}) = -\nabla l(\bm{z}) - 2\lambda \bm{z}, $$ and $$ \begin{aligned} \bm{w} &= -\nabla L(\bm{z}) \qquad\underrightarrow{ \text{leads to} }\qquad \bm{z}_2 &= \bm{z}_1+ \bm{w} =\bm{z}_1 - \nabla L(\bm{z}). \end{aligned} $$ And in practice, a small step size $\eta$ can avoid going outside the curve of $\bm{z}^\top\bm{z}\leq C$: $$ \bm{z}_2 =\bm{z}_1 - \eta\nabla L(\bm{z}), $$ which is exactly what we have discussed in Section~\ref{section:regularization-extention-general}, the regularization term. \begin{figure}[h!] \centering \includegraphics[width=0.6\textwidth]{./imgs/p-norm-2d_2.pdf} \caption{Unit ball of $L_p$ norm in 2-dimensional space. The $L_p$ norm over a vector $\bm{x}$ is defined as $L_p(\bm{x}) = (\sum_{i} \abs{x_i}^p)^{1/p}$. When $p<1$, the metric is not a norm since it does not meet the triangle inequality property of a norm definition.} \label{fig:p-norm-2d} \end{figure} \begin{figure}[H] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[$p=\infty$]{\label{fig:p-norm-3d1} \includegraphics[width=0.18\linewidth]{./imgs/p-norm-3d-p-1111.pdf}} \subfigure[$p=2$]{\label{fig:p-norm-3d2} \includegraphics[width=0.18\linewidth]{./imgs/p-norm-3d-p-2.pdf}} \subfigure[$p=1$]{\label{fig:p-norm-3d3} \includegraphics[width=0.18\linewidth]{./imgs/p-norm-3d-p-1.pdf}} \subfigure[$p=0.5$]{\label{fig:p-norm-3d4} \includegraphics[width=0.18\linewidth]{./imgs/p-norm-3d-p-05.pdf}} \subfigure[$p=0$]{\label{fig:p-norm-3d5} \includegraphics[width=0.18\linewidth]{./imgs/p-norm-3d-p-0.pdf}} \caption{Unit ball of $L_p$ norm in 3-dimensional space. When $p<1$, the metric is not a norm since it does not meet the triangle inequality property of a norm definition.} \label{fig:p-norm-comparison-3d} \end{figure} \paragraph{Sparsity.} In rare cases, we want to find a sparse solution $\bm{z}$ such that $l(\bm{z})$ is minimized. Regularization to be constrained in $\norm{\bm{z}}_1 \leq C$ exists to this purpose where $\norm{\cdot}_1$ is the $L_1$ norm of a vector or a matrix. The illustration of the $L_1$ in 2-dimensional and 3-dimensional space is shown in Figure~\ref{fig:p-norm-2d} and \ref{fig:p-norm-comparison-3d}. 
Similar to the previous case, the $L_1$ constrained optimization pushes the gradient descent towards the boundary $\norm{\bm{z}}_1=C$. The situation in the 2-dimensional case is shown in Figure~\ref{fig:alsgd4}. In a high-dimensional case, many elements in $\bm{z}$ will be pushed onto the breakpoints of $\norm{\bm{z}}_1=C$, as shown in the right picture of Figure~\ref{fig:alsgd4}.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{./imgs/alsgd4.pdf}
\caption{Constrained gradient descent with $\norm{\bm{z}}_1\leq C$, where the \textcolor{red}{red} dot denotes the breakpoint in the $L_1$ norm. The right picture is the next step after the update in the left picture. $\bm{z}^\star$ denotes the optimal solution of \{$\min l(\bm{z})$\}.}
\label{fig:alsgd4}
\end{figure}
\section{Stochastic Gradient Descent (SGD)}
The gradient descent method is a good optimization algorithm, but it has some defects in real applications. In order to understand the problem of the gradient descent method, we take the mean squared error (MSE) from Equation~\eqref{equation:als-per-example-loss_ori}:
\begin{equation}\label{equation:als-per-example-loss_mse}
\mathop{\min}_{\bm{W},\bm{Z}} \frac{1}{MN}\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2.
\end{equation}
The MSE needs to calculate the squared residual $e_{mn} = (a_{mn} - \bm{w}_m^\top\bm{z}_n)^2$ between the predicted value and the real value of each observed entry $a_{mn}$, and finally obtain the total sum of squared residuals $e = \sum_{m=1}^{M}\sum_{n=1}^{N}e_{mn}$. When there is a large number of training entries (i.e., $MN$ is large), the whole training process becomes very slow. In addition, the gradients from different input samples may cancel out, resulting in small changes to the parameters as a whole. Based on the above problems, researchers have improved the gradient descent method into the \textit{stochastic gradient descent (SGD)} method. The core idea of the stochastic gradient descent method is to randomly select a single sample from all training samples each time. We consider again the per-example loss:
$$
L(\bm{W},\bm{Z})= \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2 + \lambda_w \sum_{m=1}^{M}\norm{\bm{w}_m}^2 +\lambda_z \sum_{n=1}^{N}\norm{\bm{z}_n}^2.
$$
And when we iteratively decrease the per-example loss term $l(\bm{w}_m, \bm{z}_n)=\left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2$ for all $m\in \{1,2,\ldots, M\}, n\in\{1,2,\ldots,N\}$, the full loss $L(\bm{W},\bm{Z})$ can also be decreased. This is also known as the \textit{stochastic coordinate descent}. The differentials with respect to $\bm{z}_n, \bm{w}_m$ (including the corresponding regularization terms), and their roots are given by
$$
\left\{
\begin{aligned}
\nabla l(\bm{z}_n)=\frac{\partial l(\bm{w}_m,\bm{z}_n)}{\partial \bm{z}_n} &= 2\bm{w}_m\bm{w}_m^\top \bm{z}_n + 2\lambda_z\bm{z}_n -2a_{mn} \bm{w}_m \\
&\qquad \,\,\underrightarrow{ \text{leads to} }\,\, \bm{z}_n= a_{mn}(\bm{w}_m\bm{w}_m^\top+\lambda_z\bm{I})^{-1}\bm{w}_m;\\
\nabla l(\bm{w}_m)=\frac{\partial l(\bm{w}_m, \bm{z}_n)}{\partial \bm{w}_m} &= 2\bm{z}_n\bm{z}_n^\top\bm{w}_m +2\lambda_w\bm{w}_m - 2a_{mn}\bm{z}_n\\
&\qquad \,\,\underrightarrow{ \text{leads to} }\,\, \bm{w}_m= a_{mn}(\bm{z}_n\bm{z}_n^\top+\lambda_w\bm{I})^{-1}\bm{z}_n.
\end{aligned}
\right.
$$ or analogously, the update can be done by gradient descent, and since we update by per-example loss, it is also known as the \textit{stochastic gradient descent (SGD)} $$ \left\{ \begin{aligned} \bm{z}_n&= \bm{z}_n - \eta_z \frac{\nabla l(\bm{z}_n)}{\norm{\nabla l(\bm{z}_n)}}; \\ \bm{w}_m&= \bm{w}_m - \eta_w \frac{\nabla l(\bm{w}_m)}{\norm{\nabla l(\bm{w}_m)}}. \end{aligned} \right. $$ The stochastic gradient descent update for ALS is formulated in Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient-realstoch}. And in practice, the $m,n$ in the algorithm can be randomly produced, that's where the name \textit{stochastic} comes from. \begin{algorithm}[h] \caption{Alternating Least Squares with Full Entries and Stochastic Gradient Descent} \label{alg:als-regularizer-missing-stochas-gradient-realstoch} \begin{algorithmic}[1] \Require Matrix $\bm{A}\in \mathbb{R}^{M\times N}$; \State Initialize $\bm{W}\in \mathbb{R}^{M\times K}$, $\bm{Z}\in \mathbb{R}^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$}; \State Choose a stop criterion on the approximation error $\delta$; \State Choose regularization parameters $\lambda_w, \lambda_z$, and step size $\eta_w, \eta_z$; \State Choose the maximal number of iterations $C$; \State $iter=0$; \Comment{Count for the number of iterations} \While{$\norm{ \bm{A}- (\bm{W}\bm{Z})}^2>\delta $ and $iter<C$} \State $iter=iter+1$; \For{$n=1,2,\ldots, N$} \For{$m=1,2,\ldots, M$} \Comment{in practice, $m,n$ can be randomly produced} \State $\bm{z}_n= \bm{z}_n - \eta_z \frac{\nabla l(\bm{z}_n)}{\norm{\nabla l(\bm{z}_n)}}$;\Comment{$n$-th column of $\bm{Z}$} \State $\bm{w}_m= \bm{w}_m - \eta_w \frac{\nabla l(\bm{w}_m)}{\norm{\nabla l(\bm{w}_m)}}$;\Comment{$m$-th column of $\bm{W}^\top$} \EndFor \EndFor \EndWhile \State Output $\bm{W}^\top=[\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_M],\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N]$; \end{algorithmic} \end{algorithm} \section{Bias Term} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{./imgs/als-bias.pdf} \caption{Bias terms in alternating least squares where the \textcolor{mydarkyellow}{yellow} entries denote ones (which are fixed) and \textcolor{cyan}{cyan} entries denote the added features to fit the bias terms. The dotted boxes give an example of how the bias terms work.} \label{fig:als-bias} \end{figure} In ordinary least squares, a bias term is added to the raw matrix as shown in Equation~\eqref{equation:ls-bias}. A similar idea can be applied to the ALS problem. We can add a fixed column with all 1's to the \textbf{last column} of $\bm{W}$, thus an extra row should be added to the last row of $\bm{Z}$ to fit the features introduced by the bias term in $\bm{W}$. Analogously, a fixed row with all 1's can be added to the \textbf{first row} of $\bm{Z}$, and an extra column in the first column of $\bm{W}$ to fit the features. The situation is shown in Figure~\ref{fig:als-bias}. 
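As a concrete toy illustration of the augmentation in Figure~\ref{fig:als-bias}, the following NumPy sketch builds $\widetilde{\bm{W}}$ and $\widetilde{\bm{Z}}$ from $\bm{W}$ and $\bm{Z}$. The shapes used and the reading of the extra fitted column/row as a per-movie and a per-user offset are illustrative assumptions added here, not claims from the original text.
\begin{verbatim}
import numpy as np

# Toy shapes; the bias interpretation below is illustrative only.
M, N, K = 5, 7, 3
rng = np.random.default_rng(0)
W = rng.standard_normal((M, K))
Z = rng.standard_normal((K, N))

W_tilde = np.hstack([rng.standard_normal((M, 1)),   # fitted first column of W
                     W,
                     np.ones((M, 1))])              # fixed ones (last column)
Z_tilde = np.vstack([np.ones((1, N)),               # fixed ones (first row)
                     Z,
                     rng.standard_normal((1, N))])  # fitted last row of Z
# W_tilde @ Z_tilde = w_0 1^T + W Z + 1 z_last^T: a per-movie offset,
# the original factorization, and a per-user offset.
print(W_tilde.shape, Z_tilde.shape, (W_tilde @ Z_tilde).shape)
\end{verbatim}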
Following the loss with respect to the columns of $\bm{Z}$ in Equation~\eqref{als:gradient-regularization-zn}, suppose $\widetilde{\bm{z}}_n = \begin{bmatrix} 1\\ \bm{z}_n \end{bmatrix} $ is the $n$-th column of $\widetilde{\bm{Z}}$, we have \begin{equation} \begin{aligned} L(\bm{z}_n) &=\norm{\widetilde{\bm{W}}\widetilde{\bm{Z}}-\bm{A}}^2 +\lambda_w \norm{\widetilde{\bm{W}}}^2 + \lambda_z \norm{\widetilde{\bm{Z}}}^2\\\ &= \left\Vert \widetilde{\bm{W}} \begin{bmatrix} 1 \\ \bm{z}_n \end{bmatrix}-\bm{a}_n \right\Vert^2 + \underbrace{\lambda_z \norm{\widetilde{\bm{z}}_n}^2}_{=\lambda_z \norm{\bm{z}_n}^2+\lambda_z} + \sum_{i\neq n} \norm{\widetilde{\bm{W}}\widetilde{\bm{z}}_i-\bm{a}_i}^2 + \lambda_z \sum_{i\neq n}\norm{\widetilde{\bm{z}}_i}^2 + \lambda_w \norm{\widetilde{\bm{W}}}^2 \\ &= \left\Vert \begin{bmatrix} \overline{\bm{w}}_0 & \overline{\bm{W}} \end{bmatrix} \begin{bmatrix} 1 \\ \bm{z}_n \end{bmatrix}-\bm{a}_n \right\Vert^2 + \lambda_z \norm{\bm{z}_n}^2 + C_{z_n} = \left\Vert \overline{\bm{W}} \bm{z}_n - \underbrace{(\bm{a}_n-\overline{\bm{w}}_0)}_{\overline{\bm{a}}_n} \right\Vert^2 + \lambda_z \norm{\bm{z}_n}^2 + C_{z_n}, \end{aligned} \end{equation} where $\overline{\bm{w}}_0$ is the first column of $\widetilde{\bm{W}}$, $\overline{\bm{W}}$ is the last $K$ columns of $\widetilde{\bm{W}}$, and $C_{z_n}$ is a constant with respect to $\bm{z}_n$. Let $\overline{\bm{a}}_n = \bm{a}_n-\overline{\bm{w}}_0$, the update of $\bm{z}_n$ is just similar to the one in Equation~\eqref{als:gradient-regularization-zn} where the differential is given by: $$ \frac{\partial L(\bm{z}_n)}{\partial \bm{z}_n} = 2\overline{\bm{W}}^\top\overline{\bm{W}}\bm{z}_n - 2\overline{\bm{W}}^\top\overline{\bm{a}}_n + 2\lambda_z\bm{z}_n. $$ Therefore, the update on $\bm{z}_n$ is given by the root of the above differential: $$ \text{update on $\widetilde{\bm{z}}_n$ is } \left\{ \begin{aligned} \bm{z}_n &= (\overline{\bm{W}}^\top\overline{\bm{W}}+ \lambda_z\bm{I})^{-1} \overline{\bm{W}}^\top \overline{\bm{a}}_n, \gap \text{for $n\in \{1,2,\ldots, N\}$};\\ \widetilde{\bm{z}}_n &= \begin{bmatrix} 1\\\bm{z}_n \end{bmatrix}. \end{aligned} \right. 
$$
Similarly, following the loss with respect to each row of $\bm{W}$ in Equation~\eqref{als:gradient-regularization-wd}, suppose $\widetilde{\bm{w}}_m = \begin{bmatrix} \bm{w}_m \\ 1 \end{bmatrix}$ is the $m$-th row of $\widetilde{\bm{W}}$ (or the $m$-th column of $\widetilde{\bm{W}}^\top$); we have
\begin{equation}
\begin{aligned}
L(\bm{w}_m ) &=\norm{\widetilde{\bm{Z}}^\top\widetilde{\bm{W}}^\top-\bm{A}^\top}^2 +\lambda_w \norm{\widetilde{\bm{W}}^\top}^2 + \lambda_z \norm{\widetilde{\bm{Z}}}^2\\
&= \norm{\widetilde{\bm{Z}}^\top\widetilde{\bm{w}}_m-\bm{b}_m}^2 + \underbrace{\lambda_w \norm{\widetilde{\bm{w}}_m}^2}_{=\lambda_w \norm{\bm{w}_m}^2+\lambda_w} + \sum_{i\neq m} \norm{\widetilde{\bm{Z}}^\top\widetilde{\bm{w}}_i-\bm{b}_i}^2 + \lambda_w \sum_{i\neq m}\norm{\widetilde{\bm{w}}_i}^2 + \lambda_z \norm{\widetilde{\bm{Z}}}^2 \\
&= \left\Vert \begin{bmatrix} \overline{\bm{Z}}^\top& \overline{\bm{z}}_0 \end{bmatrix} \begin{bmatrix} \bm{w}_m \\ 1 \end{bmatrix} -\bm{b}_m\right\Vert^2 + \lambda_w \norm{\bm{w}_m}^2 + C_{w_m}\\
&= \left\Vert \overline{\bm{Z}}^\top\bm{w}_m -(\bm{b}_m-\overline{\bm{z}}_0) \right\Vert^2+ \lambda_w \norm{\bm{w}_m}^2 + C_{w_m},
\end{aligned}
\end{equation}
where $\overline{\bm{z}}_0$ is the last column of $\widetilde{\bm{Z}}^\top$ and $\overline{\bm{Z}}^\top$ contains the remaining columns of it, $C_{w_m}$ is a constant with respect to $\bm{w}_m$, and $\bm{W}^\top=[\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_M], \bm{A}^\top=[\bm{b}_1,\bm{b}_2, \ldots, \bm{b}_M]$ are the column partitions of $\bm{W}^\top, \bm{A}^\top$ respectively. Let $\overline{\bm{b}}_m = \bm{b}_m-\overline{\bm{z}}_0$; the update of $\bm{w}_m$ is again just similar to the one in Equation~\eqref{als:gradient-regularization-wd}, where the differential is given by:
$$
\frac{\partial L(\bm{w}_m)}{\partial \bm{w}_m} = 2\overline{\bm{Z}}\cdot \overline{\bm{Z}}^\top\bm{w}_m - 2\overline{\bm{Z}}\cdot \overline{\bm{b}}_m + 2\lambda_w\bm{w}_m.
$$
Therefore, the update on $\bm{w}_m$ is given by the root of the above differential:
$$
\text{update on $\widetilde{\bm{w}}_m$ is }
\left\{
\begin{aligned}
\bm{w}_m&=(\overline{\bm{Z}}\cdot \overline{\bm{Z}}^\top+\lambda_w\bm{I})^{-1}\overline{\bm{Z}}\cdot \overline{\bm{b}}_m, \gap \text{for $m\in \{1,2,\ldots, M\}$} ;\\
\widetilde{\bm{w}}_m &= \begin{bmatrix} \bm{w}_m \\ 1 \end{bmatrix}.
\end{aligned}
\right.
$$
Similar updates by gradient descent under the bias terms, or with the treatment of missing entries, can be deduced, and we shall not repeat the details (see Sections~\ref{section:als-gradie-descent} and \ref{section:alt-columb-by-column} for a reference).
\section{Applications}
\begin{SCfigure}
\centering
\includegraphics[width=0.6\textwidth]{./imgs/eng300.png}
\caption{A gray flag image to be compressed. The size of the image is $600\times 1200$ with a rank of 402.
} \label{fig:eng300} \end{SCfigure} \noindent \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[$\sigma_1\bm{u}_1\bm{v}_1^\top$\protect\newline$F_1=60217$]{\label{fig:svd12} \includegraphics[width=0.15\linewidth]{./imgs/svd_pic1.png}} \subfigure[$\sigma_2\bm{u}_2\bm{v}_2^\top$\protect\newline$F_2=120150$]{\label{fig:svd22} \includegraphics[width=0.15\linewidth]{./imgs/svd_pic2.png}} \subfigure[$\sigma_3\bm{u}_3\bm{v}_3^\top$\protect\newline$F_3=124141$]{\label{fig:svd32} \includegraphics[width=0.15\linewidth]{./imgs/svd_pic3.png}} \subfigure[$\sigma_4\bm{u}_4\bm{v}_4^\top$\protect\newline$F_4=125937$]{\label{fig:svd42} \includegraphics[width=0.15\linewidth]{./imgs/svd_pic4.png}} \subfigure[$\sigma_5\bm{u}_5\bm{v}_5^\top$\protect\newline$F_5=126127$]{\label{fig:svd52} \includegraphics[width=0.15\linewidth]{./imgs/svd_pic5.png}} \subfigure[All 5 singular values: $\sum_{i=1}^{5}\sigma_i\bm{u}_i\bm{v}_i^\top$,\protect\newline$F=$\textbf{44379}]{\label{fig:svd62} \includegraphics[width=0.15\linewidth]{./imgs/svd_pic6_all.png}} \quad \subfigure[$\bm{c}_1\bm{r}_1^\top$\protect\newline$G_1=60464$]{\label{fig:skeleton51} \includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_1.png}} \subfigure[$\bm{c}_2\bm{r}_2^\top$\protect\newline$G_2=122142$]{\label{fig:skeleton52} \includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_2.png}} \subfigure[$\bm{c}_3\bm{r}_3^\top$\protect\newline$G_3=123450$]{\label{fig:skeleton53} \includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_3.png}} \subfigure[$\bm{c}_4\bm{r}_4^\top$\protect\newline$G_4=125975$]{\label{fig:skeleton54} \includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_5.png}} \subfigure[$\bm{c}_5\bm{r}_5^\top$\protect\newline$G_5=124794$]{\label{fig:skeleton55} \includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_4.png}} \subfigure[Pseudoskeleton Rank 5 $\sum_{i=1}^{5}\bm{c}_i\bm{r}_i^\top$,\protect\newline$G=45905$.]{\label{fig:skeleton5_all} \includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_all.png}} \quad \subfigure[$\bm{w}_1\bm{z}_1^\top$\protect\newline$S_1=82727$]{\label{fig:als51} \includegraphics[width=0.15\linewidth]{./imgs/als_rank5_3.png}} \subfigure[$\bm{w}_2\bm{z}_2^\top$\protect\newline$S_2=107355$]{\label{fig:als52} \includegraphics[width=0.15\linewidth]{./imgs/als_rank5_2.png}} \subfigure[$\bm{w}_3\bm{z}_3^\top$\protect\newline$S_3=119138$]{\label{fig:als53} \includegraphics[width=0.15\linewidth]{./imgs/als_rank5_5.png}} \subfigure[$\bm{w}_4\bm{z}_4^\top$\protect\newline$S_4=120022$]{\label{fig:als54} \includegraphics[width=0.15\linewidth]{./imgs/als_rank5_1.png}} \subfigure[$\bm{w}_5\bm{z}_5^\top$\protect\newline$S_5=120280$]{\label{fig:als55} \includegraphics[width=0.15\linewidth]{./imgs/als_rank5_4.png}} \subfigure[ALS Rank 5 $\sum_{i=1}^{5}\bm{w}_i\bm{z}_i^\top$,\protect\newline$S=52157$.]{\label{fig:als5_all} \includegraphics[width=0.15\linewidth]{./imgs/als_rank3.png}} \caption{Image compression for gray flag image into a rank-5 matrix via the SVD, and decompose into 5 parts where $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{5}$, i.e., $F_1\leq F_2\leq \ldots \leq F_5$ with $F_i = \norm{\sigma_i\bm{u}_i\bm{v}^\top - \bm{A}}_F$ for $i\in \{1,2,\ldots, 5\}$. And reconstruct images by single singular value and its corresponding left and right singular vectors, $\bm{c}_i\bm{r}_i^\top$, $\bm{w}_i\bm{z}_i^\top$ respectively. 
\textbf{Upper:} SVD; \textbf{Middle:} Pseudoskeleton; \textbf{Lower:} ALS.} \label{fig:svdd-by-parts-als} \end{figure} \subsection{Low-Rank Approximation}\label{section:als-low-flag} Figure~\ref{fig:eng300} shows an example of a gray image to be compressed. The size of the image is $600\times 1200$ with a rank of 402. Low-rank approximation via the ALS decomposition, singular value decomposition (SVD), and Pseudoskeleton decomposition \citep{lu2021numerical} can be applied to find the compression. In Figure~\ref{fig:svd12} to \ref{fig:svd62}, we approximate the image into a rank-5 matrix by \textit{truncated SVD}: $\bm{A}\approx \sum_{i=1}^{5}\sigma_i\bm{u}_i\bm{v}_i^\top$ \citep{lu2021numerical}. It is known that the singular values contain the spectrum information with higher singular values containing lower-frequency information. And low-frequency contains more useful information \citep{leondes1995multidimensional}. We find that the image, $\sigma_1\bm{u}_1\bm{v}_1^\top$, reconstructed by the first singular value $\sigma_1$, first left singular vector $\bm{u}_1$, and first right singular vector $\bm{v}_1$ is very close to the original flag image; and second to the fifth images reconstructed by the corresponding singular values and singular vectors containing more details of the flag to reconstruct the raw image. Similar results can be observed for the low-rank approximation via the pseudoskeleton decomposition (See \citet{lu2021numerical} for more details). At a high level, the pseudoskeleton decomposition finds the low-rank approximation by $\bm{A}\approx \bm{C}\bm{R}$ where $\bm{C}\in \mathbb{R}^{m\times \gamma}, \bm{R}\in \mathbb{R}^{\gamma\times n}$ if $\bm{A}\in \mathbb{R}^{m\times n}$ such that $\bm{C}$ and $\bm{R}$ are rank-$\gamma$ matrices. Suppose $\gamma=5$, and $$ \bm{C}=[\bm{c}_1, \bm{c}_2, \ldots, \bm{c}_5], \qquad \text{and} \qquad \bm{R} = \begin{bmatrix} \bm{r}_1^\top \\ \bm{r}_2^\top \\ \vdots \\ \bm{r}_5^\top \end{bmatrix}, $$ are the column and row partitions of $\bm{C}$ and $\bm{R}$ respectively. Then $\bm{A}$ can be approximated by $\sum_{i=1}^{5}\bm{c}_i\bm{r}_i^\top$. The partitions are ordered such that $$ \underbrace{\norm{\bm{c}_1\bm{r}_1^\top-\bm{A}}_F}_{G_1} \leq \underbrace{\norm{\bm{c}_2\bm{r}_2^\top-\bm{A}}_F}_{G_2} \leq \ldots \leq \underbrace{\norm{\bm{c}_5\bm{r}_5^\top-\bm{A}}_F}_{G_5}. $$ We observe (in Figure~\ref{fig:skeleton51} to \ref{fig:skeleton5_all}) that $\bm{c}_1\bm{r}_1^\top$ works similarly to that of $\sigma_1\bm{u}_1\bm{v}^\top$ where the reconstruction errors measured by the Frobenius norm are very close (60,464 in the pseudoskeleton case compared to that of 60,217 in the SVD case). This is partly because the pseudoskeleton decomposition relies on the SVD such that $\bm{c}_1\bm{r}_1^\top$ internally has the largest ``singular value" meaning in this sense \citep{lu2021numerical}. Similarly, the ALS approximation is given by $\bm{A}\approx\bm{W}\bm{Z}$ where $\bm{W}\in \mathbb{R}^{m\times \gamma}, \bm{Z}\in \mathbb{R}^{\gamma\times n}$ if $\bm{A}\in \mathbb{R}^{m\times n}$ such that $\bm{W}$ and $\bm{Z}$ are rank-$\gamma$ matrices. 
Suppose $$ \bm{W}=[\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_5], \qquad \text{and} \qquad \bm{Z} = \begin{bmatrix} \bm{z}_1^\top \\ \bm{z}_2^\top \\ \vdots \\ \bm{z}_5^\top \end{bmatrix}, $$ are the column and row partitions of $\bm{W}, \bm{Z}$ respectively \footnote{For simplicity, note that this definition is different from what we have defined in Section~\ref{section:als-netflix} where we define $\bm{w}_i$ as the rows of $\bm{W}$. }. Then $\bm{A}$ can be approximated by $\sum_{i=1}^{5}\bm{w}_i\bm{z}_i^\top$. The partitions are ordered such that $$ \underbrace{\norm{\bm{w}_1\bm{z}_1^\top-\bm{A}}_F}_{S_1} \leq \underbrace{\norm{\bm{w}_2\bm{z}_2^\top-\bm{A}}_F}_{S_2} \leq \ldots \leq \underbrace{\norm{\bm{w}_5\bm{z}_5^\top-\bm{A}}_F}_{S_5}. $$ We observe (in Figure~\ref{fig:als51} to \ref{fig:als5_all}) that $\bm{w}_1\bm{z}_1^\top$ works slightly \textbf{different} to that of $\sigma_1\bm{u}_1\bm{v}^\top$ where the reconstruction errors measured by Frobenius norm are not close as well (82,727 in the ALS case compared to that of 60,217 in the SVD case). As we mentioned previously, $\bm{c}_1\bm{r}_1^\top$ works similarly to that of $\sigma_1\bm{u}_1\bm{v}^\top$ since the pseudoskeleton relies on the SVD. However, in ALS, the reconstruction is all from least squares optimization. The key difference between ALS and SVD is in the fact that, in SVD, the importance of each vector in the basis is relative to the value of the singular value associated with that vector. This usually means that the first vector of the basis dominates and is the most used vector to reconstruct data; then the second vector and so on. So the bases in SVD have an implicit hierarchy and that doesn't happen in ALS where we find the second component $\bm{w}_2\bm{z}_2^\top$ via ALS in Figure~\ref{fig:als52} plays an important role in the reconstruction of the original figure; whereas the second component $\sigma_2\bm{u}_2\bm{v}_2^\top$ via SVD in Figure~\ref{fig:svd22} plays a small role in the reconstruction. \begin{SCfigure} \caption{Comparison of reconstruction errors measured by Frobenius norm among the SVD, pseudoskeleton, and ALS where the approximated rank ranges from $3$ to 100. ALS with well-selected parameters works similarly to SVD.} \includegraphics[width=0.5\textwidth]{./imgs/svd_skeleton_als_fnorm.pdf} \label{fig:svd_skeleton_als_fnorm} \end{SCfigure} We finally compare low-rank approximation among the SVD, pseudoskeleton, and ALS with different ranks (3 to 100). Figure~\ref{fig:svdd-pseudoskeleton-als} shows the difference of each compression with rank 90, 60, 30, 10. We observe that the SVD reconstructs well with rank 90, 60, 30. The pseudoskeleton approximation compresses well in the black horizontal and vertical lines in the image. But it performs poorly in the details of the flag. ALS works similarly to the SVD in terms of visual expression and reconstruction errors measured by the Frobenius norm. Figure~\ref{fig:svd_skeleton_als_fnorm} shows the comparison of the reconstruction errors among the SVD, the pseudoskeleton, and the ALS approximations measured by the Frobenius norm ranging from rank $3$ to $100$ where we find in all cases, the truncated SVD does best in terms of Frobenius norm. Similar results can be observed when applied to the spectral norm. The ALS works better than the pseudoskeleton decomposition when $\lambda_w=\lambda_z=0.15$. An interesting cutoff happens when $\lambda_w=\lambda_z=\{0.03, 0.08, 0.15\}$. 
That is, when the value of rank increases, the ALS will be very close to the SVD in the sense of low-rank approximation. \index{Truncated} \index{Truncated SVD} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[SVD with rank 90\protect\newline Frobenius norm=\textbf{6,498}]{\label{fig:svd902} \includegraphics[width=0.3\linewidth]{./imgs/svd90.png}} \quad \subfigure[Pseudoskeleton with rank 90\protect\newline Frobenius norm=13,751]{\label{fig:skeleton902} \includegraphics[width=0.3\linewidth]{./imgs/skeleton90.png}} \subfigure[ALS with rank 90\protect\newline Frobenius norm=6,622]{\label{fig:als_90} \includegraphics[width=0.3\linewidth]{./imgs/als_rank90.png}}\\ \subfigure[SVD with rank 60\protect\newline Frobenius norm=\textbf{8,956}]{\label{fig:svd602} \includegraphics[width=0.3\linewidth]{./imgs/svd60.png}} \quad \subfigure[Pseudoskeleton with rank 60\protect\newline Frobenius norm=14,217]{\label{fig:skeleton602} \includegraphics[width=0.3\linewidth]{./imgs/skeleton60.png}} \subfigure[ALS with rank 60\protect\newline Frobenius norm=9,028]{\label{fig:als_60} \includegraphics[width=0.3\linewidth]{./imgs/als_rank60.png}}\\ \subfigure[SVD with rank 30\protect\newline Frobenius norm=\textbf{14,586}]{\label{fig:svd302} \includegraphics[width=0.3\linewidth]{./imgs/svd30.png}} \quad \subfigure[Pseudoskeleton with rank 30\protect\newline Frobenius norm=17,853]{\label{fig:skeleton302} \includegraphics[width=0.3\linewidth]{./imgs/skeleton30.png}} \subfigure[ALS with rank 30\protect\newline Frobenius norm=18,624]{\label{fig:als_30} \includegraphics[width=0.3\linewidth]{./imgs/als_rank30.png}} \subfigure[SVD with rank 10\protect\newline Frobenius norm=\textbf{31,402}]{\label{fig:svd102} \includegraphics[width=0.3\linewidth]{./imgs/svd10.png}} \quad \subfigure[Pseudoskeleton with rank 10\protect\newline Frobenius norm=33,797]{\label{fig:skeleton102} \includegraphics[width=0.3\linewidth]{./imgs/skeleton10.png}} \subfigure[ALS with rank 10\protect\newline Frobenius norm=33,449]{\label{fig:als_10} \includegraphics[width=0.3\linewidth]{./imgs/als_rank10.png}} \caption{Image compression for gray flag image with different ranks.} \label{fig:svdd-pseudoskeleton-als} \end{figure} \subsection{Movie Recommender}\label{section:als_movie_rec} The ALS is extensively developed for the movie recommender system. To see this, we obtain the ``MovieLens 100K" data set from MovieLens \citep{harper2015movielens}\footnote{http://grouplens.org}. It consists of 100,000 ratings from 943 users on 1,682 movies. The rating values go from 0 to 5. The data was collected through the MovieLens website during the seven-month period from September 19th, 1997 through April 22nd, 1998. This data has been cleaned up - users who had less than 20 ratings or did not have complete demographic information were removed from this data set such that simple demographic info for the users (age, gender, occupation, zip) can be obtained. However, we will only work on the trivial rating matrix \footnote{In the next chapters, the data set is further cleaned by removing movies with less than 3 users such that 1,473 movies are kept accordingly.}. The data set is split into training and validation data, around 95,015 and 4,985 ratings respectively. The error is measured by \textit{root mean squared error (RMSE)}. The RMSE is frequently used as a measure of the difference between values. 
For a set of values $\{x_1, x_2, \ldots, x_n\}$ and its predictions $\{\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n\}$, the RMSE can be described as $$ \text{RMSE}(\bm{x}, \hat{\bm{x}}) = \sqrt{\frac{1}{n} \sum_{i=1}^{n}(x_i-\hat{x}_i)^2}. $$ The minimal RMSE for validation is obtained when $K=185$ and $\lambda_w=\lambda_z=0.15$, and it is equal to $0.806$ as shown in Figure~\ref{fig:movie100k}. Therefore, when the ratings range from 0 to 5, the ALS can at least predict whether the user would like to watch the movie (e.g., ratings in the range 4 to 5) or not (e.g., ratings in the range 0 to 2). \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Training]{\label{fig:movie100k1} \includegraphics[width=0.44\linewidth]{./imgs/movielen100k.pdf}} \quad \subfigure[Validation]{\label{fig:movie100k2} \includegraphics[width=0.44\linewidth]{./imgs/movielen100k_val.pdf}} \caption{Comparison of training and validation error for the ``MovieLens 100K'' data set with different reduction dimensions and regularization parameters.} \label{fig:movie100k} \end{figure} \paragraph{Recommender 1.} A recommender system can work simply by suggesting movie $m$ when $a_{mn}\geq4$ if user $n$ has not rated movie $m$. \paragraph{Recommender 2.} In rare cases, it may happen that user $n$ has already rated all the movies he likes (say, with ratings $\geq4$). Then a partial solution is to recommend movies that are similar to the high-rated ones. Suppose user $n$ likes movie $m$ very much and has rated it with a $5$: $a_{mn}=5$. Under the ALS approximation $\bm{A}\approx\bm{W}\bm{Z}$, each row of $\bm{W}$ represents the hidden features of a movie (see Section~\ref{section:als-vector-product} on the vector inner product). The solution is then to find the movies most similar to movie $m$ that user $n$ has not rated (or watched). In mathematical language, $$ \mathop{\arg \max}_{\bm{w}_i} \gap \text{similarity}(\bm{w}_i, \bm{w}_m), \qquad \text{for all} \gap i \notin \bm{o}_n, $$ where the $\bm{w}_i$'s are the rows of $\bm{W}$ representing the hidden features of movie $i$, and $\bm{o}_n$ contains the indices of the movies that user $n$ has rated. The method above relies on a similarity function between two vectors. The \textit{cosine similarity} is the most commonly used measure. It is defined as the cosine of the angle between the two vectors: $$ \cos(\bm{x}, \bm{y}) = \frac{\bm{x}^\top\bm{y}}{\norm{\bm{x}}\cdot \norm{\bm{y}}}, $$ where the value ranges from $-1$ to $1$, with $-1$ indicating perfect dissimilarity and $1$ perfect similarity. From the above definition, it follows that the cosine similarity depends only on the angle between the two non-zero vectors, but not on their magnitudes, since it can be regarded as the inner product between the normalized versions of these vectors. A second measure for calculating similarity is known as the \textit{Pearson similarity}: $$ \text{Pearson}(\bm{x},\bm{y}) =\frac{\mathrm{Cov}(\bm{x},\bm{y})}{\sigma_x \cdot \sigma_y} = \frac{\sum_{i=1}^{n} (x_i - \bar{x} ) (y_i -\bar{y})}{ \sqrt{\sum_{i=1}^{n} (x_i-\bar{x})^2}\sqrt{ \sum_{i=1}^{n} (y_i-\bar{y})^2 }}, $$ whose value also ranges between $-1$ and $1$, where $-1$ indicates perfect dissimilarity, $1$ perfect similarity, and $0$ no linear relationship. The Pearson similarity is usually used to measure the linear correlation between two sets of data. It is the ratio between the covariance of two variables and the product of their standard deviations.
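To make the two measures concrete, the following minimal NumPy sketch (illustrative only; the factor matrix $\bm{W}$, the reference movie index, and the set of already-rated movies are placeholders) computes both similarities and ranks the unrated movies by their similarity to a high-rated movie $m$.

\begin{verbatim}
import numpy as np

def cosine_similarity(x, y):
    # cos(x, y) = x^T y / (||x|| ||y||)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def pearson_similarity(x, y):
    # Pearson(x, y) = Cov(x, y) / (sigma_x sigma_y):
    # the cosine similarity of the mean-centered vectors.
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

def most_similar_movies(W, m, rated, top=10, sim=cosine_similarity):
    """Rank the movies most similar to movie m among those not rated yet.

    W     : (num_movies, K) matrix whose rows are the movie hidden vectors.
    m     : index of the reference (high-rated) movie.
    rated : collection of movie indices the user has already rated.
    """
    scores = {i: sim(W[i], W[m])
              for i in range(W.shape[0]) if i != m and i not in rated}
    return sorted(scores, key=scores.get, reverse=True)[:top]
\end{verbatim}

For instance, \texttt{most\_similar\_movies(W, m=10, rated=\{3, 10, 42\})} returns the indices of the ten unrated movies whose hidden vectors are closest to that of movie 10 under the cosine measure.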
Both the Pearson correlation and the cosine similarity are used in various fields, including machine learning and data analysis. The Pearson correlation is commonly used in regression analysis, while the cosine similarity is commonly used in recommender systems and information retrieval; as we will see in the precision-recall analysis below, the cosine similarity also works better in our setting. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Cosine Bin Plot]{\label{fig:als-cosine}% \includegraphics[width=0.32\linewidth]{./imgs/als-bin-cosine.pdf}}% \subfigure[Pearson Bin Plot]{\label{fig:als-pearson}% \includegraphics[width=0.32\linewidth]{./imgs/als-bin-pearson.pdf}}% \subfigure[PR Curve]{\label{fig:als-prcurve}% \includegraphics[width=0.35\linewidth]{./imgs/als-prcurve.pdf}}% \caption{Distributions of the insample and outsample similarities under the cosine and Pearson measures, and the corresponding precision-recall curves.} \label{fig:als-prcurive-bin} \end{figure} Following the example above on the MovieLens 100K data set, we choose $\lambda_w=\lambda_z=0.15$ for the regularization and a rank of $62$ to minimize the RMSE. We want to examine the similarity between the hidden vectors of different movies; the goal is to see whether the matrix factorization can help differentiate high-rated from low-rated movies, so that the system can recommend movies correlated with the existing high-rated movies of each user. Define further the ``\textit{insample}'' as the similarity between the movies rated $5$ by each user, and the ``\textit{outsample}'' as the similarity between the movies rated $5$ and those rated $1$ by each user. Figures~\ref{fig:als-cosine} and \ref{fig:als-pearson} depict the bin plots of the insample and outsample distributions under the cosine and Pearson similarities, respectively. In both scenarios, a clear distinction is observed between the distributions of the ``insample'' and ``outsample'' data, showing that the ALS decomposition can indeed recover the hidden features of different movies for each user. Figure~\ref{fig:als-prcurve} shows the corresponding \textit{precision-recall (PR) curves}, where we find that the cosine similarity works better: it recovers more than $73\%$ of the potential high-rated movies at a precision of $90\%$, whereas the Pearson similarity recovers only about $64\%$ of the high-rated movies at the same precision. In practice, other measures can also be explored, such as the \textit{negative Euclidean distance}: the Euclidean distance measures the ``dissimilarity'' between two vectors, so its negative represents their similarity. \chapter{Nonnegative Matrix Factorization (NMF)}\index{NMF}\label{section:nmf} \begingroup \hypersetup{linkcolor=winestain} \minitoc \newpage \endgroup \index{Decomposition: NMF} \section{Nonnegative Matrix Factorization} Following the matrix factorization via the ALS, we now consider algorithms for solving the nonnegative matrix factorization (NMF) problem: \begin{itemize} \item Given a nonnegative matrix $\bm{A}\in \mathbb{R}_+^{M\times N}$ with rank $r$, find nonnegative matrix factors $\bm{W}\in \mathbb{R}_+^{M\times K}$ and $\bm{Z}\in \mathbb{R}_+^{K\times N}$ such that: $$ \bm{A}\approx\bm{W}\bm{Z}. $$ \end{itemize} That is, NMF is a machine learning technique used to factorize a nonnegative matrix into two (or more) nonnegative matrices.
Similar to the ALS approximation, the NMF can also uncover hidden patterns and structures in the data represented by the nonnegative matrix. NMF has a wide range of applications in areas such as text mining, image processing, and recommender systems. The accuracy of the approximation is determined by the loss, which is calculated as the squared Frobenius norm of the difference between the original matrix and the approximation: $$ L(\bm{W},\bm{Z}) = \norm{\bm{W}\bm{Z}-\bm{A}}^2. $$ Early consideration of the NMF problem is due to \citet{paatero1991matrix, paatero1994positive, anttila1995source, cohen1993nonnegative}, who called it \textit{positive matrix factorization}; \citet{lee2000algorithms} later popularized the problem with the \textit{multiplicative update}, in which the factorization is carried out by an iterative optimization algorithm. See also the survey paper of \citet{berry2007algorithms} for applications of the NMF. When we want to find two nonnegative matrices $\bm{W}\in\mathbb{R}^{M\times r}_+$ and $\bm{Z}\in\mathbb{R}_+^{r\times N}$ such that $\bm{A}=\bm{W}\bm{Z}$, the problem is known as the \textit{Exact NMF} of $\bm{A}$ of size $r$. Exact NMF is NP-hard \citep{gillis2020nonnegative}. Thus we only consider approximate NMF here. \section{NMF via Multiplicative Update (MU)} We consider the NMF via an alternating update. The hidden features in $\bm{W},\bm{Z}$ are modeled as nonnegative vectors in a low-dimensional space. These latent vectors are randomly initialized and modified via an alternating multiplicative update rule to minimize the discrepancy between the observed and modeled matrices (here measured by the squared Frobenius norm; the Kullback--Leibler divergence is another common choice). Following Section~\ref{section:als-netflix} (p.~\pageref{section:als-netflix}), given $\bm{W}\in \mathbb{R}_+^{M\times K}$, we want to update $\bm{Z}\in \mathbb{R}_+^{K\times N}$; the gradient with respect to $\bm{Z}$ is given by Equation~\eqref{equation:givenw-update-z-allgd} (p.~\pageref{equation:givenw-update-z-allgd}): $$ \begin{aligned} \frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}} =2 \bm{W}^\top(\bm{W}\bm{Z}-\bm{A}) \in \mathbb{R}^{K\times N}. \end{aligned} $$ Applying the gradient descent idea in Section~\ref{section:als-gradie-descent} (p.~\pageref{section:als-gradie-descent}), the trivial update on $\bm{Z}$ can be done by $$ (\text{GD on $\bm{Z}$})\gap \bm{Z} \leftarrow \bm{Z} - \eta \left(\frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}}\right)=\bm{Z} - \eta \left(2 \bm{W}^\top\bm{W}\bm{Z}-2\bm{W}^\top\bm{A}\right), $$ where $\eta$ is a small positive step size. Now if we impose a different step size for each entry of $\bm{Z}$ and incorporate the constant 2 into the step size, the update can be obtained by $$ (\text{GD$^\prime$ on $\bm{Z}$})\gap \begin{aligned} z_{kn} &\leftarrow z_{kn} - \frac{\eta_{kn}}{2} \left(\frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}}\right)_{kn}\\ &=z_{kn} - \eta_{kn}(\bm{W}^\top\bm{W}\bm{Z}-\bm{W}^\top\bm{A})_{kn}, \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\}, \end{aligned} $$ where $z_{kn}$ is the $(k,n)$-th entry of $\bm{Z}$. We further rescale the step size: $$ \eta_{kn} = \frac{z_{kn}}{(\bm{W}^\top\bm{W}\bm{Z})_{kn}}, $$ then we obtain the update rule: $$ (\text{MU on $\bm{Z}$})\gap z_{kn} \leftarrow z_{kn}\frac{(\bm{W}^\top\bm{A})_{kn}}{(\bm{W}^\top\bm{W}\bm{Z})_{kn}}, \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\}, $$ which is known as the \textit{multiplicative update (MU)}; it was first developed in \citet{lee2000algorithms} and further discussed in \citet{pauca2006nonnegative}.
Analogously, the multiplicative update on $\bm{W}$ can be obtained by \begin{equation}\label{equation:multi-update-w} (\text{MU on $\bm{W}$})\,\, w_{mk} \leftarrow w_{mk} \frac{(\bm{A}\bm{Z}^\top)_{mk}}{(\bm{W}\bm{Z}\bZ^\top)_{mk}}, \,\, m\in\{1,\ldots,M\}, k\in\{1,\ldots,K\}. \end{equation} \begin{theorem}[Convergence of Multiplicative Update] The loss $L(\bm{W},\bm{Z})=\norm{\bm{W}\bm{Z}-\bm{A}}^2$ is non-increasing under the multiplicative update rules: $$ \left\{ \begin{aligned} z_{kn} &\leftarrow z_{kn}\frac{(\bm{W}^\top\bm{A})_{kn}}{(\bm{W}^\top\bm{W}\bm{Z})_{kn}}, \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\};\\ w_{mk} &\leftarrow w_{mk} \frac{(\bm{A}\bm{Z}^\top)_{mk}}{(\bm{W}\bm{Z}\bZ^\top)_{mk}}, \gap m\in\{1,\ldots,M\}, k\in\{1,\ldots,K\}. \end{aligned} \right. $$ \end{theorem} We refer the proof of the above theorem to \citet{lee2000algorithms}. Clearly the approximations $\bm{W}$ and $\bm{Z}$ remain nonnegative during the updates. It is generally best to update $\bm{W}$ and $\bm{Z}$ ``simultaneously”, instead of updating each matrix completely before the other. In this case, after updating a row of $\bm{Z}$, we update the corresponding column of $\bm{W}$. In the implementation, a small positive quantity, say the square root of the machine precision, should be added to the denominators in the approximations of $\bm{W}$ and $\bm{Z}$ at each iteration step. And a trivial $\epsilon=10^{-9}$ will suffice. The full procedure is shown in Algorithm~\ref{alg:nmf-multiplicative}. \index{Machine precision} \begin{algorithm}[h] \caption{NMF via Multiplicative Updates} \label{alg:nmf-multiplicative} \begin{algorithmic}[1] \Require Matrix $\bm{A}\in \mathbb{R}^{M\times N}$; \State Initialize $\bm{W}\in \mathbb{R}^{M\times K}$, $\bm{Z}\in \mathbb{R}^{K\times N}$ randomly with nonnegative entries. \State Choose a stop criterion on the approximation error $\delta$; \State Choose maximal number of iterations $C$; \State $iter=0$; \Comment{Count for the number of iterations} \While{$\norm{\bm{A}- (\bm{W}\bm{Z})}^2>\delta $ and $iter<C$} \State $iter=iter+1$; \For{$k=1$ to $K$} \For{$n=1$ to $N$} \Comment{update $k$-th row of $\bm{Z}$} \State $z_{kn} \leftarrow z_{kn}\frac{(\bm{W}^\top\bm{A})_{kn}}{(\bm{W}^\top\bm{W}\bm{Z})_{kn}+\textcolor{blue}{\epsilon}}$; \EndFor \For{$m=1$ to $M$} \Comment{update $k$-th column of $\bm{W}$} \State $w_{mk} \leftarrow w_{mk} \frac{(\bm{A}\bm{Z}^\top)_{mk}}{(\bm{W}\bm{Z}\bZ^\top)_{mk}+\textcolor{blue}{\epsilon}}$; \EndFor \EndFor \EndWhile \State Output $\bm{W},\bm{Z}$; \end{algorithmic} \end{algorithm} \section{Regularization} \begin{algorithm}[h] \caption{NMF via Regularized Multiplicative Updates} \label{alg:nmf-multiplicative-regularization} \begin{algorithmic}[1] \Require Matrix $\bm{A}\in \mathbb{R}^{M\times N}$; \State Initialize $\bm{W}\in \mathbb{R}^{M\times K}$, $\bm{Z}\in \mathbb{R}^{K\times N}$ randomly with nonnegative entries. 
\State Choose a stop criterion on the approximation error $\delta$; \State Choose maximal number of iterations $C$; \State Choose regularization parameter $\lambda_z, \lambda_w$; \State $iter=0$;\Comment{Count for the number of iterations} \While{$\norm{\bm{A}- (\bm{W}\bm{Z})}^2>\delta $ and $iter<C$} \State $iter=iter+1$; \For{$k=1$ to $K$} \For{$n=1$ to $N$} \Comment{update $k$-th row of $\bm{Z}$} \State $z_{kn} \leftarrow z_{kn}\frac{(\bm{W}^\top\bm{A})_{kn}- \textcolor{blue}{\lambda_z z_{kn}}}{(\bm{W}^\top\bm{W}\bm{Z})_{kn}+\textcolor{blue}{\epsilon}}$; \EndFor \For{$m=1$ to $M$} \Comment{update $k$-th column of $\bm{W}$} \State $w_{mk} \leftarrow w_{mk} \frac{(\bm{A}\bm{Z}^\top)_{mk} -\textcolor{blue}{\lambda_w w_{mk}}}{(\bm{W}\bm{Z}\bZ^\top)_{mk}+\textcolor{blue}{\epsilon}}$; \EndFor \EndFor \EndWhile \State Output $\bm{W},\bm{Z}$; \end{algorithmic} \end{algorithm} Similar to the ALS with regularization discussed in Section~\ref{section:regularization-extention-general} (p.~\pageref{section:regularization-extention-general}), recall that the regularization can help extend the applicability of ALS to general matrices. Additionally, a regularization term can be incorporated into the NMF framework to enhance its performance: $$ L(\bm{W},\bm{Z}) =\norm{\bm{W}\bm{Z}-\bm{A}}^2 +\lambda_w \norm{\bm{W}}^2 + \lambda_z \norm{\bm{Z}}^2, \qquad \lambda_w>0, \lambda_z>0, $$ where the induced matrix norm is still the Frobenius norm. The gradient with respect to $\bm{Z}$ given $\bm{W}$ is the same as that in Equation~\eqref{equation:als-regulari-gradien} (p.~\pageref{equation:als-regulari-gradien}): $$ \begin{aligned} \frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}} =2\bm{W}^\top(\bm{W}\bm{Z}-\bm{A}) + \textcolor{blue}{2\lambda_z\bm{Z}} \in \mathbb{R}^{K\times N}. \end{aligned} $$ The trivial gradient descent update can be obtained by $$ (\text{GD on }\bm{Z}) \gap \bm{Z} \leftarrow \bm{Z} - \eta \left(\frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}}\right)=\bm{Z} - \eta \left(2 \bm{W}^\top\bm{W}\bm{Z}-2\bm{W}^\top\bm{A}+\textcolor{blue}{2\lambda_z\bm{Z}}\right). $$ Analogously, if we impose a different step size for each entry of $\bm{Z}$ and incorporate the constant 2 into the step size, the update can be obtained by $$ (\text{GD$^\prime$ on $\bm{Z}$})\gap \begin{aligned} z_{kn} &\leftarrow z_{kn} - \frac{\eta_{kn}}{2} \left(\frac{\partial L(\bm{Z}\mid \bm{W})}{\partial \bm{Z}}\right)_{kn}\\ &=z_{kn} - \eta_{kn}(\bm{W}^\top\bm{W}\bm{Z}-\bm{W}^\top\bm{A}+\textcolor{blue}{\lambda_z\bm{Z}})_{kn}, \,\, k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\}. \end{aligned} $$ Now if we rescale the step size: $$ \eta_{kn} = \frac{z_{kn}}{(\bm{W}^\top\bm{W}\bm{Z})_{kn}}, $$ then we obtain the update rule: $$ \begin{aligned} (\text{MU on $\bm{Z}$})\gap z_{kn} &\leftarrow z_{kn}\frac{(\bm{W}^\top\bm{A})_{kn}- \textcolor{blue}{\lambda_z z_{kn}} }{(\bm{W}^\top\bm{W}\bm{Z})_{kn}}, \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\}. \end{aligned} $$ Similarly, the multiplicative update on $\bm{W}$ can be obtained by $$ \begin{aligned} (\text{MU on $\bm{W}$})\gap w_{mk} &\leftarrow w_{mk} \frac{(\bm{A}\bm{Z}^\top)_{mk} -\textcolor{blue}{\lambda_w w_{mk}} }{(\bm{W}\bm{Z}\bZ^\top)_{mk}}, \gap m\in\{1,\ldots,M\}, k\in\{1,\ldots,K\}. \end{aligned} $$ The procedure is then formulated in Algorithm~\ref{alg:nmf-multiplicative-regularization}.
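For concreteness, a minimal NumPy sketch of the multiplicative updates is given below; it covers Algorithm~\ref{alg:nmf-multiplicative} when $\lambda_w=\lambda_z=0$ and the regularized variant of Algorithm~\ref{alg:nmf-multiplicative-regularization} otherwise. For simplicity, the sketch updates the full matrices $\bm{Z}$ and $\bm{W}$ in turn rather than row/column pairs, and it clips the regularized numerators at zero as an extra safeguard (not part of the pseudocode above) to keep the factors nonnegative; the rank $K$ and the regularization strengths are placeholders.

\begin{verbatim}
import numpy as np

def nmf_mu(A, K, max_iter=500, tol=1e-6, lam_w=0.0, lam_z=0.0,
           eps=1e-9, seed=0):
    """Approximate A (M x N, nonnegative) by W (M x K) @ Z (K x N)
    using (regularized) multiplicative updates."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    W = rng.random((M, K))
    Z = rng.random((K, N))
    for _ in range(max_iter):
        # MU on Z: z_kn <- z_kn ((W^T A)_kn - lam_z z_kn) / ((W^T W Z)_kn + eps)
        Z *= np.maximum(W.T @ A - lam_z * Z, 0.0) / (W.T @ W @ Z + eps)
        # MU on W: w_mk <- w_mk ((A Z^T)_mk - lam_w w_mk) / ((W Z Z^T)_mk + eps)
        W *= np.maximum(A @ Z.T - lam_w * W, 0.0) / (W @ Z @ Z.T + eps)
        if np.linalg.norm(A - W @ Z) ** 2 <= tol:   # stop criterion on the loss
            break
    return W, Z
\end{verbatim}

A call such as \texttt{W, Z = nmf\_mu(A, K=5, lam\_w=0.1, lam\_z=0.1)} then returns the two nonnegative factors.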
A nonnegative matrix factorization $\bm{A}\approx \bm{W}\bm{Z}$ can also be used for clustering: the data vector $\bm{a}_j$ is assigned to cluster $i$ if $z_{ij}$ is the largest element in column $j$ of $\bm{Z}$ \citep{brunet2004metagenes, gao2005improving}. In the collaborative filtering context, it is acknowledged that the NMF via the multiplicative update may lead to overfitting even though the convergence results are good. The overfitting can be partially mitigated through regularization, but the out-of-sample performance may still remain low. Bayesian optimization through the use of generative models, on the other hand, can effectively prevent overfitting (see \citet{brouwer2017comparative, lu2022flexible} or Chapter~\ref{chapter:bnmf}, p.~\pageref{chapter:bnmf}). For other issues in the NMF, readers are advised to consult the survey of \citet{berry2007algorithms}. \index{Bayesian inference} \index{Bayesian optimization} \index{Bayesian matrix decomposition} \section{Initialization} A significant challenge in the NMF is that convergence to a global minimum is not guaranteed. It often happens that convergence is slow and a suboptimal approximation is reached. In the above discussion, we initialize $\bm{W}$ and $\bm{Z}$ randomly. To mitigate this issue, there are also alternative strategies designed to obtain better initial estimates, in the hope of converging more rapidly to a good solution \citep{boutsidis2008svd, gillis2014and}. We sketch these methods below for reference: \begin{itemize} \item \textit{Clustering techniques.} Apply some clustering method to the columns of $\bm{A}$, use the cluster means of the top $K$ clusters as the columns of $\bm{W}$, and initialize $\bm{Z}$ as a proper scaling of the cluster indicator matrix (that is, $z_{kn}\neq 0$ indicates that $\bm{a}_n$ belongs to the $k$-th cluster); \item \textit{Subset selection.} Pick $K$ columns of $\bm{A}$ and set those as the initial columns for $\bm{W}$; analogously, $K$ rows of $\bm{A}$ are selected to form the rows of $\bm{Z}$; \item \textit{SVD-based.} Suppose the singular value decomposition (SVD) of $\bm{A}$ is $\bm{A}=\sum_{i=1}^{r}\sigma_i\bm{u}_i\bm{v}_i^\top$, where each factor $\sigma_i\bm{u}_i\bm{v}_i^\top$ is a rank-one matrix with possibly negative values in $\bm{u}_i, \bm{v}_i$, and nonnegative $\sigma_i$. Denoting $[x]_+=\max(x, 0)$, we notice $$ \bm{u}_i\bm{v}_i^\top = [\bm{u}_i]_+[\bm{v}_i]_+^\top+[-\bm{u}_i]_+[-\bm{v}_i]_+^\top-[-\bm{u}_i]_+[\bm{v}_i]_+^\top-[\bm{u}_i]_+[-\bm{v}_i]_+^\top. $$ Either $[\bm{u}_i]_+[\bm{v}_i]_+^\top$ or $[-\bm{u}_i]_+[-\bm{v}_i]_+^\top$ can be selected as a column and a row in $\bm{W},\bm{Z}$. \footnote{Note that here we consider a general matrix $\bm{A}$. If $\bm{A}$ is nonnegative, the leading singular vectors $\bm{u}_1$ and $\bm{v}_1$ can be chosen to be nonnegative as well, although the remaining singular vectors may contain negative entries.} \end{itemize} However, these techniques are not theoretically guaranteed to perform better. We refer readers to the aforementioned papers for more information. \section{Movie Recommender Context} Both the NMF and the ALS methods approximate the matrix and reconstruct its entries with a set of basis vectors. The basis in the NMF is composed of vectors with nonnegative elements, while the basis in the ALS can have positive or negative values. The difference then is that the NMF reconstructs each vector as a nonnegative combination of the basis vectors, with a relatively small component in the direction of each basis vector.
In the ALS approximation, in contrast, the data is modeled as a linear combination of the basis vectors, so that we can add or subtract vectors as needed, and the components in the direction of each basis vector can be large positive or negative values. Therefore, depending on the application, one or the other factorization can be utilized to describe the data with different interpretations. In the context of a movie recommender system, the rows of $\bm{W}$ represent the hidden features of the movies and the columns of $\bm{Z}$ represent the hidden features of the users. In the NMF method we can say that a movie is 0.5 comedy, 0.002 action, and 0.09 romantic. However, in the ALS approach, we can get combinations such as 4 comedy, $-0.05$ action, and $-3$ drama, i.e., a positive or negative component on each feature. The ALS and the NMF are similar in the sense that the importance of each basis vector is not ranked in a hierarchical manner. In contrast, the key difference between the ALS (or the NMF) and the SVD is that, in the SVD, the importance of each basis vector is governed by the magnitude of the singular value associated with that vector. For the SVD of $\bm{A}=\sum_{i=1}^{r}\sigma_i\bm{u}_i\bm{v}_i^\top$, this usually means that the reconstruction $\sigma_1\bm{u}_1\bm{v}_1^\top$ via the first set of basis vectors dominates and is the most used set to reconstruct the data, then the second set, and so on. The basis in the SVD therefore has an implicit hierarchy, which does not happen in the ALS or the NMF approaches. Recall the low-rank approximation of the flag image in Section~\ref{section:als-low-flag} (p.~\pageref{section:als-low-flag}), where we find that the second component $\bm{w}_2\bm{z}_2^\top$ via the ALS in Figure~\ref{fig:als52} (p.~\pageref{fig:als52}) plays an important role in the reconstruction of the original figure, whereas the second component $\sigma_2\bm{u}_2\bm{v}_2^\top$ via the SVD in Figure~\ref{fig:svd22} (p.~\pageref{fig:svd22}) plays a small role in the reconstruction. \chapter{Multivariate Probability Models and Conjugacy} \section{Conjugate Prior for the Multinomial Distribution}\label{sec:dirichlet_prior_on_multinomial} This section and the next serve as a reference for the rest of the document. \subsection{Multinomial Distribution} The multinomial distribution is widely used in Bayesian mixture models to introduce latent variables, and the use of conjugate priors allows all the results to be derived in closed form. The multinomial distribution is parametrized by an integer $N$ and a p.m.f. $\bm{\pi} = \{\pi_1, \pi_2, \ldots , \pi_K\}$, and can be thought of as follows: if we have $N$ independent events, and for each event the probability of outcome $k$ is $\pi_k$, then the multinomial distribution specifies the probability that outcome $k$ occurs $N_k$ times, for $k = 1, 2, \ldots , K$. For example, the multinomial distribution can model the probability of an $N$-sample empirical histogram, if each sample is drawn i.i.d. from $\bm{\pi}$. Formally, we have the following definition of the multinomial distribution. \begin{definition}[Multinomial Distribution] A random vector $\bm{N}=[N_1, N_2, \ldots, N_K]\in \{0, 1, 2, \ldots, N\}^K$ where $\sum_{k=1}^{K} N_k=N$ is said to follow the multinomial distribution with parameters $N\in \mathbb{N}$ and $\bm{\pi} =[\pi_1, \pi_2, \ldots, \pi_K]\in [0,1]^K$ such that $\sum_{k=1}^{K} \pi_k=1$, denoted by $\bm{N} \sim \mathrm{Multi}_K(N, \bm{\pi})$.
Then its probability mass function is given by \begin{equation*} p(N_1, N_2, \ldots , N_K | N, \bm{\pi} = (\pi_1, \pi_2, \ldots , \pi_K)) = \frac{N!}{N_1! N_2! \ldots N_K!} \prod^K_{k=1}\pi_k^{N_k} \cdot \mathds{1}\left\{\sum_{k=1}^{K}N_k = N\right\}, \end{equation*} where $\{0, 1, 2, \ldots, N\}$ is a set of $N+1$ elements and $[0,1]$ is a closed set with values between 0 and 1. The mean, variance, and covariance are $$ \mathrm{E}[N_k] = N\pi_k, \qquad \mathrm{Var}[N_k] = N\pi_k(1-\pi_k), \qquad \mathrm{Cov}[N_k, N_m] = -N\pi_k\pi_m. $$ When $K=2$, the multinomial distribution reduces to the binomial distribution. \end{definition} \subsection{Dirichlet Distribution}\label{section:dirichlet-dist} \begin{figure}[htp] \centering \subfigure[ $ \boldsymbol\alpha=\begin{bmatrix} 10,10,10 \end{bmatrix} $, z-axis is pdf. ]{\includegraphics[width=0.33 \textwidth]{img_visual/dirichlet-pdf.png} \label{fig:dirichlet_pdf}} ~ \subfigure[$\boldsymbol\alpha=\begin{bmatrix} 10,10,10 \end{bmatrix}$, z-axis is $\pi_3$.]{\includegraphics[width=0.33 \textwidth]{img_visual/dirichlet-surface.png} \label{fig:dirichlet_surface}} \centering \subfigure[ $ \boldsymbol\alpha=\begin{bmatrix} 1,1,1 \end{bmatrix} $ ]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_1-1-1.pdf} \label{fig:dirichlet_sample_111}} ~ \subfigure[$\boldsymbol\alpha=\begin{bmatrix} 0.9,0.9,0.9 \end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_09-09-09.pdf} \label{fig:dirichlet_sample_090909}} \centering \subfigure[$\boldsymbol\alpha=\begin{bmatrix} 10,10,10 \end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_10-10-10.pdf} \label{fig:dirichlet_sample_101010}} ~ \subfigure[$\boldsymbol\alpha=\begin{bmatrix} 15,5,2 \end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_15-5-2.pdf} \label{fig:dirichlet_sample_1552}} \centering \caption{Density plots (blue=low, red=high) for the Dirichlet distribution over the probability simplex in $\mathbb{R}^3$ for various values of the concentration parameter $\boldsymbol\alpha$. When $\boldsymbol\alpha=[c, c, c]$, the distribution is called a \textbf{symmetric Dirichlet distribution} and the density is symmetric about the uniform probability mass function (i.e., occurs in the middle of the simplex). When $0<c<1$, there are sharp peaks of density almost at the vertices of the simplex. When $c>1$, the density becomes monomodal and concentrated in the center of the simplex. When $c=1$, the density is uniform over the simplex. Finally, if $\boldsymbol\alpha$ is not a constant vector, the density is not symmetric.}\centering \label{fig:dirichlet_samples} \end{figure} The Dirichlet distribution serves as a conjugate prior for the probability parameter $\bm{\pi}$ of the multinomial distribution. \begin{definition}[Dirichlet Distribution] A random vector $\bm{X}=[x_1, x_2, \ldots, x_K]\in [0,1]^K$ is said to follow the Dirichlet distribution if \begin{equation} \mathrm{Dirichlet}(\bm{X} | \boldsymbol\alpha) \triangleq \frac{1}{D(\boldsymbol\alpha)} \prod_{k=1}^K x_k ^ {\alpha_k - 1}, \label{equation:dirichlet_distribution2} \end{equation} such that $\sum_{k=1}^K x_k = 1$, $x_k \in [0, 1]$, and \begin{equation} D(\boldsymbol\alpha) = \frac{\prod_{k=1}^K \Gamma(\alpha_k)}{\Gamma(\alpha_+)}, \label{equation:dirichlet_distribution3} \end{equation} where $\boldsymbol\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_K]$ is a vector of reals with $\alpha_k>0$ for all $k$, and $\alpha_+ = \sum_{k=1}^K \alpha_k$.
The $\boldsymbol\alpha$ is also known as the \textbf{concentration parameter} of the Dirichlet distribution. $\Gamma(\cdot)$ is the Gamma function, which is a generalization of the factorial function: for $m>0$, $\Gamma(m+1) = m\Gamma(m)$, which implies $\Gamma(n)=(n-1)!$ for positive integers $n$ since $\Gamma(1)=1$. The mean, variance, and covariance are $$ \mathrm{E}[x_k] = \frac{\alpha_k}{\alpha_+}, \qquad \mathrm{Var}[x_k] = \frac{\alpha_k(\alpha_+-\alpha_k)}{\alpha_+^2(\alpha_++1)}, \qquad \mathrm{Cov}[x_k, x_m]= \frac{-\alpha_k\alpha_m}{\alpha_+^2(\alpha_++1)}. $$ When $K=2$, the Dirichlet distribution reduces to the Beta distribution. The Beta distribution $\mathrm{Beta}(\alpha, \beta)$ is defined on $[0,1]$ with the probability density function given by $$ \mathrm{Beta}(x| \alpha, \beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha-1}(1-x)^{\beta-1}. $$ That is, if $X\sim \mathrm{Beta}(\alpha, \beta)$, then $\bm{X}=[X, 1-X] \sim \mathrm{Dirichlet}(\boldsymbol\alpha) $, where $\boldsymbol\alpha=[\alpha, \beta]$. \end{definition} Interested readers can refer to Appendix~\ref{appendix:drive-dirichlet} for a derivation of the Dirichlet distribution. The sample space of the Dirichlet distribution lies on the $(K-1)$-dimensional probability simplex, which is a surface in $\mathbb{R}^K$ denoted by $\triangle_K$, that is, the set of vectors in $\mathbb{R}^K$ whose components are nonnegative and sum to 1: $$ \triangle_K = \{ \bm{\pi}: 0 \leq \pi_k\leq 1, \sum_{k=1}^{K}\pi_k=1 \}. $$ Notice that $\triangle_K$ lies in a $(K-1)$-dimensional space since the components sum to 1. Figure~\ref{fig:dirichlet_samples} shows various plots of the density of the Dirichlet distribution over the two-dimensional simplex in $\mathbb{R}^3$ for a handful of values of the parameter vector $\boldsymbol\alpha$, and Figure~\ref{fig:dirichlet_points} shows a draw of 5,000 points for each setting. Specifically, the density of a Dirichlet distribution in $\mathbb{R}^3$ is a surface in 4$d$-space. Figure~\ref{fig:dirichlet_pdf} is a projection of this surface into 3$d$-space where the z-axis is the probability density, and Figure~\ref{fig:dirichlet_surface} is a projection into 3$d$-space where the z-axis is $\pi_3$. Figure~\ref{fig:dirichlet_sample_111} to Figure~\ref{fig:dirichlet_sample_1552} are projections into 2$d$-space. When the concentration parameter $\boldsymbol\alpha=[1,1,1]$, the Dirichlet distribution reduces to the uniform distribution over the simplex. This can be easily verified: $\mathrm{Dirichlet}(\bm{X} | \boldsymbol\alpha=[1,1,1]) = \frac{\Gamma(3)}{(\Gamma(1))^3}= 2 $, which is a constant that does not depend on the specific value of $\bm{X}$. When $\boldsymbol\alpha=[c,c,c]$ with $c>1$, the density becomes monomodal and concentrated in the center of the simplex. This can be seen from $\mathrm{Dirichlet}(\bm{X} | \boldsymbol\alpha=[c,c,c]) = \frac{\Gamma(3c)}{(\Gamma(c))^3}\prod_{k=1}^3 x_k ^ {c - 1} $, where a small value of any $x_k$ makes the probability density approach zero. On the contrary, when $\boldsymbol\alpha=[c,c,c]$ with $c<1$, the density has sharp peaks almost at the vertices of the simplex. More properties of the Dirichlet distribution are provided in Table~\ref{table:dirichlet-property}, and the proofs can be found in Appendix~\ref{appendix:drive-dirichlet}.
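Dirichlet samples can be generated by normalizing independent Gamma draws (this construction is also noted below via the derivation in Appendix~\ref{appendix:drive-dirichlet}); as a quick empirical check of the moment formulas above, a minimal NumPy sketch using $\boldsymbol\alpha=[15,5,2]$, as in Figure~\ref{fig:dirichlet_sample_1552}, is the following.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([15.0, 5.0, 2.0])      # concentration parameter
alpha_plus = alpha.sum()

# A Dirichlet draw is obtained by normalizing independent Gamma(alpha_k, 1) draws.
g = rng.gamma(shape=alpha, size=(100_000, alpha.size))
samples = g / g.sum(axis=1, keepdims=True)

print(samples.mean(axis=0))             # empirical means
print(alpha / alpha_plus)               # alpha_k / alpha_+
print(samples.var(axis=0))              # empirical variances
print(alpha * (alpha_plus - alpha) / (alpha_plus**2 * (alpha_plus + 1)))
\end{verbatim}

The printed empirical means and variances agree with $\mathrm{E}[x_k]=\alpha_k/\alpha_+$ and $\mathrm{Var}[x_k]=\alpha_k(\alpha_+-\alpha_k)/\big(\alpha_+^2(\alpha_++1)\big)$ up to Monte Carlo error.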
And the derivation on the Dirichlet distribution in Appendix~\ref{appendix:drive-dirichlet} can also be utilized to generate samples from the Dirichlet distribution by a set of samples from a set of Gamma distributions. \begin{table}[] \begin{tabular}{l|l} \hline \begin{tabular}[c]{@{}l@{}}Marginal \\ Distribution\end{tabular} & $X_i \sim \mathrm{Beta}(\alpha_i, \alpha_+-\alpha_i)$. \\ \hline \begin{tabular}[c]{@{}l@{}}Conditional \\ Distribution\end{tabular} & \begin{tabular}[c]{@{}l@{}}$\bm{X}_{-i} | X_i \sim (1-X_i)\mathrm{Dirichlet}(\alpha_{-i})$, \\ where $\bm{X}_{-i}$ is a random vector excluding $X_i$. \end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}Aggregation\\ Property\end{tabular} & \begin{tabular}[c]{@{}l@{}}If $M=X_i+X_j$, then $[X_1, \ldots X_{i-1}, X_{i+1}, \ldots, X_{j-1}, X_{j+1}, \ldots, X_K, M] \sim $\\ $\mathrm{Dirichlet}([\alpha_1, \ldots, \alpha_{i-1}, \alpha_{i+1}, \ldots, \alpha_{j-1}, \alpha_{j+1}, \ldots, \alpha_K, \alpha_i+\alpha_j])$.\\ In general, If $\{A_1, A_2, \ldots, A_r\}$ is a partition of $\{1, 2, \ldots, K\}$, then \\ $\left[\sum_{i\in A_1} X_i, \sum_{i\in A_2} X_i, \ldots, \sum_{i\in A_r} X_i\right] \sim$\\ $\mathrm{Dirichlet}\left(\left[\sum_{i\in A_1} \alpha_i, \sum_{i\in A_2} \alpha_i, \ldots, \sum_{i\in A_r} \alpha_i\right]\right)$.\end{tabular} \\ \hline \end{tabular} \caption{Properties of Dirichlet distribution.} \label{table:dirichlet-property} \end{table} \begin{figure}[!h] \center \subfigure[ $ \boldsymbol\alpha=\begin{bmatrix}1,1,1\end{bmatrix}$ ]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_points_1-1-1.pdf} \label{fig:dirichlet_points_111}} ~ \subfigure[$\boldsymbol\alpha=\begin{bmatrix}0.9,0.9,0.9\end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_points_09-09-09.pdf} \label{fig:dirichlet_points_090909}} \center \subfigure[$\boldsymbol\alpha=\begin{bmatrix}10,10,10\end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_points_10-10-10.pdf} \label{fig:dirichlet_points_101010}} ~ \subfigure[$\boldsymbol\alpha=\begin{bmatrix}15,5,2\end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{img_visual/dirichlet_points_15-5-2.pdf} \label{fig:dirichlet_points_1552}} \center \caption{Draw of 5, 000 points from Dirichlet distribution over the probability simplex in $\mathbb{R}^3$ for various values of the concentration parameter $\boldsymbol\alpha$.} \label{fig:dirichlet_points} \end{figure} \subsection{Posterior Distribution for Multinomial Distribution}\label{section:dirichlet-dist-post} For the conjugacy, that is, if $(\bm{N} | \bm{\pi}) \sim $ $\mathrm{Multi}_K(N, \bm{\pi})$ and $\bm{\pi} \sim$ $\mathrm{Dirichlet}(\boldsymbol\alpha)$, then $(\bm{\pi} |\bm{N}) \sim$ $\mathrm{Dirichlet}(\boldsymbol\alpha+\bm{N})$ = $\mathrm{Dirichlet}(\alpha_1+N_1, \ldots, \alpha_K+N_K)$. \begin{proof}[of conjugate prior of multinomial distribution] By the Bayes' theorem ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", we get the posterior $$ \begin{aligned} \mathrm{posterior}&= p(\bm{\pi}|\boldsymbol\alpha, \bm{N}) \\ &\propto \mathrm{Multi}_K(\bm{N}|N, \bm{\pi}) \cdot \mathrm{Dirichlet}(\bm{\pi}|\boldsymbol\alpha) \\ &= \left(\frac{N!}{N_1! N_2! \ldots N_K!} \prod^K_{k=1}\pi_k^{N_k}\right) \cdot \left(\frac{1}{D(\boldsymbol\alpha)} \prod_{k=1}^K \pi_k ^ {\alpha_k - 1}\right)\\ &\propto \prod_{k=1}^K \pi_k ^ {\alpha_k +N_k - 1} \propto \mathrm{Dirichlet}(\bm{\pi}|\boldsymbol\alpha+\bm{N}). 
\end{aligned} $$ Therefore, $(\bm{\pi} | \bm{N}) \sim$ $\mathrm{Dirichlet}(\boldsymbol\alpha+\bm{N})$ = $\mathrm{Dirichlet}(\alpha_1+N_1, \ldots, \alpha_K+N_K)$. \end{proof} A comparison between the prior and posterior distribution reveals that the relative sizes of the Dirichlet parameters $\alpha_k$ describe the mean of the prior distribution of $\bm{\pi}$, and the sum of the $\alpha_k$'s is a measure of the strength of the prior distribution. The prior distribution is mathematically equivalent to a likelihood resulting from $\sum_{k=1}^K(\alpha_k - 1)$ observations, with $\alpha_k - 1$ observations of the $k$-th group. Note that the Dirichlet distribution is a multivariate generalization of the Beta distribution, which is the conjugate prior for the binomial distribution \citep{hoff2009first, frigyik2010introduction}. To conclude, here are some important points about the Dirichlet distribution: \begin{itemize} \item A sample from a Dirichlet distribution is a probability vector (positive entries that sum to 1). In other words, a Dirichlet distribution is a probability distribution over all possible multinomial distributions with $K$ dimensions. \item The Dirichlet distribution is a conjugate prior of the multinomial distribution, as mentioned at the beginning of this section. \end{itemize} \section{Multivariate Gaussian Distribution and Conjugacy}\label{sec:multi_gaussian_conjugate_prior} We have shown the conjugate prior for the mean parameter of a univariate Gaussian distribution when the variance (or precision) parameter is fixed, as well as the joint conjugate prior for the mean and variance (or precision) parameters of a univariate Gaussian distribution. In this section, we further provide the conjugate analysis of the multivariate Gaussian distribution. See also the discussion in \citet{murphy2007conjugate, murphy2012machine, teh2007exponential, kamper2013gibbs, das2014dpgmm}. \subsection{Multivariate Gaussian Distribution} A multivariate Gaussian distribution (also known as a multivariate normal distribution) is a continuous probability distribution of jointly normally distributed variables. It is fully described by its mean vector (of size equal to the number of variables) and covariance matrix (a square matrix of size equal to the number of variables). The covariance matrix encodes the pairwise relationships between variables in terms of the covariance between them. The multivariate Gaussian can be used to model complex data distributions in various fields such as machine learning, statistics, and signal processing. We first give the rigorous definition of the multivariate Gaussian distribution as follows. \begin{definition}[Multivariate Gaussian Distribution]\label{definition:multivariate_gaussian} A random vector ${\mathbf{x}} \in \mathbb{R}^D$ is said to follow the multivariate Gaussian distribution with parameters $\boldsymbol\mu\in\mathbb{R}^D$ and $\boldsymbol\Sigma\in\mathbb{R}^{D\times D}$, denoted by ${\mathbf{x}}\sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$, if $$ \begin{aligned} f(\bm{x}; \boldsymbol\mu, \boldsymbol\Sigma)&= (2\pi)^{-D/2} |\boldsymbol\Sigma|^{-1/2}\exp\left\{-\frac{1}{2}(\bm{x} - \boldsymbol\mu)^\top \boldsymbol\Sigma^{-1}(\bm{x} - \boldsymbol\mu)\right\}, \end{aligned} $$ where $\boldsymbol\mu \in \mathbb{R}^D$ is called the \textbf{mean vector}, and $\boldsymbol\Sigma\in \mathbb{R}^{D\times D}$ is positive definite and is called the \textbf{covariance matrix}.
The mean, mode, and covariance of the multivariate Gaussian distribution are given by \begin{equation*} \begin{aligned} \mathrm{E} [{\mathbf{x}}] &= \boldsymbol\mu, \qquad \\ \mathrm{Mode}[{\mathbf{x}}] &= \boldsymbol\mu, \\ \mathrm{Cov} [{\mathbf{x}}] &= \boldsymbol\Sigma. \end{aligned} \end{equation*} Figure~\ref{fig:multi_gaussian_density} compares Gaussian density plots for different kinds of covariance matrices. \end{definition} \begin{figure}[h] \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}. $ ]{\includegraphics[width=0.31 \textwidth]{imgs/dists_multiGauss_sigma1.pdf} \label{fig:dists_multiGauss_sigma1}} \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 1&0\\ 0&3 \end{bmatrix}.$]{\includegraphics[width=0.31 \textwidth]{imgs/dists_multiGauss_sigma2.pdf} \label{fig:dists_multiGauss_sigma2}} \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 1&\textendash0.5\\ \textendash0.5&1.5 \end{bmatrix}.$]{\includegraphics[width=0.31 \textwidth]{imgs/dists_multiGauss_sigma3.pdf} \label{fig:dists_multiGauss_sigma3}} \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 2&0\\ 0&2 \end{bmatrix}. $ ]{\includegraphics[width=0.31 \textwidth]{imgs/dists_multiGauss_sigma4.pdf} \label{fig:dists_multiGauss_sigma4}} \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 3&0\\ 0&1 \end{bmatrix}.$]{\includegraphics[width=0.31 \textwidth]{imgs/dists_multiGauss_sigma5.pdf} \label{fig:dists_multiGauss_sigma5}} \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 3&\textendash0.5\\ \textendash0.5&1.5 \end{bmatrix}.$]{\includegraphics[width=0.31 \textwidth]{imgs/dists_multiGauss_sigma6.pdf} \label{fig:dists_multiGauss_sigma6}} \centering \caption{Density and contour plots (\textcolor{darkblue}{blue}=low, \textcolor{mydarkyellow}{yellow}=high) for the multivariate Gaussian distribution over the $\mathbb{R}^2$ space for various values of the covariance/scale matrix with zero mean vector. 
Fig~\ref{fig:dists_multiGauss_sigma1} and \ref{fig:dists_multiGauss_sigma4}: A spherical covariance matrix has a circular shape; Fig~\ref{fig:dists_multiGauss_sigma2} and \ref{fig:dists_multiGauss_sigma5}: A diagonal covariance matrix is an \textbf{axis aligned} ellipse; Fig~\ref{fig:dists_multiGauss_sigma3} and \ref{fig:dists_multiGauss_sigma6}: A full covariance matrix has an elliptical shape.} \centering \label{fig:multi_gaussian_density} \end{figure} Similar to the likelihood under univariate Gaussian distribution (Equation~\eqref{equation:uni_gaussian_likelihood}) for deriving the conjugate Bayesian result, the likelihood of $N$ random observations $\mathcal{X} = \{\bm{x}_1, \bm{x}_2, \ldots , \bm{x}_N \}$ generated by a multivariate Gaussian with mean vector $\boldsymbol\mu$ and covariance matrix $\boldsymbol\Sigma$ is given by \begin{equation}\label{equation:multi_gaussian_likelihood} \begin{aligned} &\gap p(\mathcal{X} \mid \boldsymbol\mu, \boldsymbol\Sigma) =\prod^N_{n=1} \mathcal{N} (\bm{x}_n\mid \boldsymbol\mu, \boldsymbol\Sigma) \\ &\overset{(a)}{=} (2\pi)^{-ND/2} |\boldsymbol\Sigma|^{-N/2}\exp\left\{-\frac{1}{2} \sum^N_{n=1}(\bm{x}_n - \boldsymbol\mu)^\top \boldsymbol\Sigma^{-1}(\bm{x}_n - \boldsymbol\mu)\right\} \\ &\overset{(b)}{=} (2\pi)^{-ND/2} |\boldsymbol\Sigma|^{-N/2}\exp\left\{-\frac{1}{2} \mathrm{tr}( \boldsymbol\Sigma^{-1}\bm{S}_{\boldsymbol\mu} ) \right\}\\ &\overset{(c)}{=} (2\pi)^{-ND/2} |\boldsymbol\Sigma|^{-N/2}\exp\left\{-\frac{N}{2}(\boldsymbol\mu - \overline{\bm{x}})^\top \boldsymbol\Sigma^{-1}(\boldsymbol\mu - \overline{\bm{x}})\right\} \exp\left\{-\frac{1}{2}\mathrm{tr}( \boldsymbol\Sigma^{-1}\bm{S}_{\overline{x}} )\right\}, \end{aligned} \end{equation} where \begin{equation}\label{equation:mvu-sample-covariance} \begin{aligned} \bm{S}_{\boldsymbol\mu} &= \sum^N_{n=1}(\bm{x}_n - \boldsymbol\mu)(\bm{x}_n - \boldsymbol\mu)^\top,\\ \bm{S}_{\overline{x}} &= \sum^N_{n=1}(\bm{x}_n - \overline{\bm{x}})(\bm{x}_n - \overline{\bm{x}})^\top, \\ \overline{\bm{x}} &=\frac{1}{N}\sum^N_{n=1}\bm{x}_n. \end{aligned} \end{equation} The $\bm{S}_{\overline{x}}$ is the \textit{matrix of sum of squares} and is also known as the \textit{scatter matrix}. The equivalence of equation (a) and equation (c) above follows from the identity (similar reason for the equivalence of equation (a) and equation (b)): \begin{align} \sum^N_{n=1}(\bm{x}_n - \boldsymbol\mu)^\top\boldsymbol\Sigma^{-1}(\bm{x}_n - \boldsymbol\mu) = \mathrm{tr}(\boldsymbol\Sigma^{-1}\bm{S}_{\overline{x}}) + N \cdot (\overline{\bm{x}} - \boldsymbol\mu)^\top\boldsymbol\Sigma^{-1}(\overline{\bm{x}} - \boldsymbol\mu). \label{equation:multi_gaussian_identity} \end{align} where the trace of a square matrix $\bm{A}$ is defined to be the sum of the diagonal elements $a_{ii}$ of $\bm{A}$: \begin{equation} \mathrm{tr}(\bm{A}) = \sum_i a_{ii}. \end{equation} The formulation in equation (b) is useful for the separated view of the conjugate prior for $\boldsymbol\Sigma$, and equation (c) is useful for the unified view of the joint conjugate prior for $\boldsymbol\mu, \boldsymbol\Sigma$ in the sequel. 
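Before turning to the proof, the identity in Equation~\eqref{equation:multi_gaussian_identity} can be verified numerically; the following short NumPy sketch (with synthetic data and an arbitrary positive definite $\boldsymbol\Sigma$; all variable names are placeholders) checks that both sides agree.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, D = 500, 3
X = rng.normal(size=(N, D))                 # rows are the observations x_n
mu = rng.normal(size=D)
L = rng.normal(size=(D, D))
Sigma = L @ L.T + D * np.eye(D)             # a random positive definite matrix
P = np.linalg.inv(Sigma)                    # Sigma^{-1}

xbar = X.mean(axis=0)
S_xbar = (X - xbar).T @ (X - xbar)          # scatter matrix S_xbar

lhs = sum((x - mu) @ P @ (x - mu) for x in X)
rhs = np.trace(P @ S_xbar) + N * (xbar - mu) @ P @ (xbar - mu)
print(np.isclose(lhs, rhs))                 # True
\end{verbatim}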
\begin{proof}[Proof of Identity~\ref{equation:multi_gaussian_identity}] There is a ``trick'' involving the trace that makes such calculations easy (see also Chapter 3 of \citet{gentle2007matrix}): \begin{equation} \bm{x}^\top \bm{A} \bm{x} = \mathrm{tr}(\bm{x}^\top \bm{A} \bm{x}) = \mathrm{tr}(\bm{x} \bm{x}^\top \bm{A}) = \mathrm{tr}(\bm{A} \bm{x} \bm{x}^\top ), \end{equation} where the first equality follows from the fact that $\bm{x}^\top \bm{A} \bm{x}$ is a scalar, and the trace of a product is invariant under cyclical permutations of the factors.\footnote{Trace is invariant under cyclical permutations: $\mathrm{tr}(\bm{A}\bm{B}\bm{C}) = \mathrm{tr}(\bm{B}\bm{C}\bm{A}) = \mathrm{tr}(\bm{C}\bm{A}\bm{B})$ if all $\bm{A}\bm{B}\bm{C}$, $\bm{B}\bm{C}\bm{A}$, and $\bm{C}\bm{A}\bm{B}$ exist.} We can then rewrite $\sum^N_{n=1}(\bm{x}_n - \boldsymbol\mu)^\top\boldsymbol\Sigma^{-1}(\bm{x}_n - \boldsymbol\mu)$ as \begin{equation} \begin{aligned} &\, \sum^N_{n=1}(\bm{x}_n - \overline{\bm{x}})^\top\boldsymbol\Sigma^{-1}(\bm{x}_n - \overline{\bm{x}}) + \sum^N_{n=1}(\overline{\bm{x}} - \boldsymbol\mu)^\top\boldsymbol\Sigma^{-1}(\overline{\bm{x}} - \boldsymbol\mu) \\ &= \mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_{\overline{x}} ) + N \cdot (\overline{\bm{x}} - \boldsymbol\mu)^\top\boldsymbol\Sigma^{-1}(\overline{\bm{x}} - \boldsymbol\mu), \end{aligned} \end{equation} where the cross terms vanish since $\sum^N_{n=1}(\bm{x}_n - \overline{\bm{x}})=\mathbf{0}$, and the first sum is converted to a trace using the trick above. This concludes the proof. \end{proof} The equivalence in Identity~\eqref{equation:multi_gaussian_identity} does not reduce the computational complexity, but it is useful for showing the conjugacy in Section~\ref{sec:niw_posterior_conjugacy} below. Similar to the univariate Gaussian likelihood in Equation~\eqref{equation:gaussian_form_conform}, given fixed mean $\boldsymbol\mu$ and covariance $\boldsymbol\Sigma$ parameters, we have \begin{equation}\label{equation:multi_gaussian_form_conform} \begin{aligned} p(\bm{x}\mid \boldsymbol\mu, \boldsymbol\Sigma) &=\mathcal{N}(\bm{x} \mid \boldsymbol\mu, \boldsymbol\Sigma) \propto \exp\left\{ -\frac{1}{2} \bm{x}^\top\boldsymbol\Sigma^{-1}\bm{x} + \bm{x}^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu \right\}. \end{aligned} \end{equation} Therefore, if we find a density conforming to the above form, we can conclude that the random variable ${\mathbf{x}}$ follows the Gaussian distribution ${\mathbf{x}}\sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$. See the example of a Bayesian \textit{GGGM} matrix decomposition model in Equation~\eqref{equation:gggm_wm_post} (p.~\pageref{equation:gggm_wm_post}). \subsection{Multivariate Student's $t$ Distribution} The multivariate Student's $t$-distribution is a continuous probability distribution over multiple variables that generalizes the Gaussian distribution to allow for heavier tails, i.e., the probability of extreme values is higher than that in a Gaussian distribution. The multivariate Student's $t$ distribution will often be used in the posterior predictive distribution of multivariate Gaussian parameters. We rigorously define the distribution as follows.
\index{Multivariate Student's $t$ distribution} \begin{definition}[Multivariate Student's $t$ Distribution]\label{definition:multivariate-stu-t} A random vector ${\mathbf{x}}\in\mathbb{R}^D$ is said to follow the multivariate Student's $t$ distribution with parameters $\boldsymbol\mu\in\mathbb{R}^D$, $\boldsymbol\Sigma\in\mathbb{R}^{D\times D}$, and $\nu$, denoted by ${\mathbf{x}} \sim \tau( \boldsymbol\mu, \boldsymbol\Sigma, \nu)$, if $$ \begin{aligned} f(\bm{x}; \boldsymbol\mu, \boldsymbol\Sigma, \nu)&= \frac{\Gamma(\nu/2 + D/2)}{\Gamma(\nu/2)} \frac{|\boldsymbol\Sigma|^{-1/2}}{\nu^{D/2} \pi^{D/2}} \times \left[ 1+ \frac{1}{\nu} (\bm{x}-\boldsymbol\mu)^\top \boldsymbol\Sigma^{-1} (\bm{x}-\boldsymbol\mu) \right]^{-(\frac{\nu+D}{2})}\\ &= \frac{\Gamma(\nu/2 + D/2)}{\Gamma(\nu/2)} |\pi\bm{V}|^{-1/2} \times \left[ 1+ (\bm{x}-\boldsymbol\mu)^\top \bm{V}^{-1} (\bm{x}-\boldsymbol\mu) \right]^{-(\frac{\nu+D}{2})}, \end{aligned} $$ where $\boldsymbol\Sigma$ is called the \textbf{scale matrix} with $\bm{V}=\nu\boldsymbol\Sigma$, and $\nu$ is the \textbf{degrees of freedom}. This distribution has fatter tails than a Gaussian one. The smaller the $\nu$ is, the fatter the tails. As $\nu \rightarrow \infty$, the distribution converges towards a Gaussian. The mean, mode, and covariance of the multivariate Student's $t$ distribution are given by \begin{equation*} \begin{aligned} \mathrm{E} [\bm{x}] &= \boldsymbol\mu,\\ \mathrm{Mode}[\bm{x}] &= \boldsymbol\mu, \\ \mathrm{Cov} [\bm{x}] &= \frac{\nu}{\nu-2}\boldsymbol\Sigma. \end{aligned} \end{equation*} Note that $\boldsymbol\Sigma$ is called the scale matrix since it is not exactly the covariance matrix, in contrast to the multivariate Gaussian distribution. Specifically, when $D=1$, it follows that (see Definition~\ref{equation:student_t_dist}) \begin{equation}\label{equation:uni-stu-nonzero} \begin{aligned} \tau(x\mid \mu, \sigma^2, \nu)&= \frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})} \frac{1}{\sigma\sqrt{\nu\pi}} \times \left[ 1+ \frac{(x-\mu)^2}{\nu \sigma^2} \right]^{-(\frac{\nu+1}{2})}. \end{aligned} \end{equation} When $D=1, \boldsymbol\mu=0, \boldsymbol\Sigma=1$, the p.d.f. defines the \textbf{univariate $t$ distribution}: \begin{equation*} \begin{aligned} \tau(x\mid \nu)&= \frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})} \frac{1}{\sqrt{\nu\pi}} \times \left[ 1+ \frac{x^2}{\nu } \right]^{-(\frac{\nu+1}{2})}. \end{aligned} \end{equation*} \end{definition} Figure~\ref{fig:studentt_densitys-1} compares the Gaussian and the Student's $t$ distribution for various parameter values; as $\nu\rightarrow \infty$, the difference between the densities approaches zero. Given the same parameters in the densities, the Student's $t$ in general has longer ``tails'' than a Gaussian, which can be seen from the comparison between Figure~\ref{fig:gauss-diagonal} and Figure~\ref{fig:student-1}. This gives the Student's $t$ distribution an important property known as \textbf{robustness}: it is much less sensitive than the Gaussian to the presence of a few outlying data points \citep{bishop2006pattern, murphy2012machine}. \begin{figure}[h] \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}.
$ ]{\includegraphics[width=0.31 \textwidth]{imgs/gauss-dist-diagonal.pdf} \label{fig:gauss-diagonal}} \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 1&0\\ 0&3 \end{bmatrix}.$]{\includegraphics[width=0.31 \textwidth]{imgs/gauss-dist-spherical.pdf} \label{fig:gauss-spherical}} \subfigure[Gaussian, $\boldsymbol\Sigma =\begin{bmatrix} 1&\textendash0.5\\ \textendash0.5&1.5 \end{bmatrix}.$]{\includegraphics[width=0.31 \textwidth]{imgs/gauss-dist-full.pdf} \label{fig:gauss-full}} \subfigure[Student $t$, $\boldsymbol\Sigma =\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}, \nu=1. $ ]{\includegraphics[width=0.31 \textwidth]{imgs/stu-diagonal-1.pdf} \label{fig:student-1}} \subfigure[Student $t$, $\boldsymbol\Sigma =\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}, \nu=3. $ ]{\includegraphics[width=0.31 \textwidth]{imgs/stu-diagonal-3.pdf} \label{fig:student-3}} \subfigure[Stu $t$, $\boldsymbol\Sigma =\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}, \nu=200. $ ]{\includegraphics[width=0.31 \textwidth]{imgs/stu-diagonal-200.pdf} \label{fig:student200}} \subfigure[Diff between (a) and (d)]{\includegraphics[width=0.31 \textwidth]{imgs/gauss-stu-diff1.pdf} \label{fig:gauss-stu-diff1}} \subfigure[Diff between (a) and (e)]{\includegraphics[width=0.31 \textwidth]{imgs/gauss-stu-diff3.pdf} \label{fig:gauss-stu-diff3}} \subfigure[Diff between (a) and (f)]{\includegraphics[width=0.31 \textwidth]{imgs/gauss-stu-diff200.pdf} \label{fig:gauss-stu-diff200}} \centering \caption{Density and contour plots (\textcolor{darkblue}{blue}=low, \textcolor{mydarkyellow}{yellow}=high) for the multivariate Gaussian distribution and multivariate Student's $t$ distribution over the $\mathbb{R}^2$ space for various values of the covariance/scale matrix with zero mean vector. Fig~\ref{fig:gauss-diagonal}: A spherical covariance matrix has a circular shape; Fig~\ref{fig:gauss-spherical}: A diagonal covariance matrix is an \textbf{axis aligned} ellipse; Fig~\ref{fig:gauss-full}: A full covariance matrix has a elliptical shape; \\ Fig~\ref{fig:student-1} to Fig~\ref{fig:student200} for Student's $t$ distribution with the same scale matrix and increasing $\nu$ such that the difference between (a) and (f) in Fig~\ref{fig:gauss-stu-diff200} is approaching zero.}\centering \label{fig:studentt_densitys-1} \end{figure} A Student's $t$ distribution can be written as a \textbf{Gaussian scale mixture} \begin{equation}\label{equation:gauss-scale-mixture} \tau(\bm{x}\mid \boldsymbol\mu, \boldsymbol\Sigma, \nu) = \int_0^{\infty} \mathcal{N}(\bm{x} \mid \boldsymbol\mu, \boldsymbol\Sigma/z)\cdot \mathrm{Ga}\big(z\mid \frac{\nu}{2}, \frac{\nu}{2}\big) dz. \end{equation} This can be thought of as an ``infinite'' mixture of Gaussians, each with a slightly different covariance matrix. In other words, a Student's $t$ distribution is obtained by adding up an infinite number of Gaussian distributions having the same mean vector but different covariance matrices. From this Gaussian scale mixture view, when $\nu \rightarrow \infty$, the Gamma distribution becomes a degenerate random variable with all the non-zero mass at the point unity such that the multivariate Student's $t$ distribution converges to a multivariate Gaussian distribution. \subsection{Prior on Parameters of Multivariate Gaussian Distribution} In Equation~\eqref{equation:inverse_gamma_conjugacy_general}, we have shown that the inverse-Gamma distribution is a conjugate prior to the variance parameter of a Gaussian distribution. 
A generalization to this is the \textit{inverse-Wishart} distribution which is a conjugate prior to the full covariance matrix of a multivariate Gaussian distribution. That is, the inverse-Wishart distribution is a probability distribution of random positive definite matrices that can be used to model random covariance matrices. Before delving into the topic of the inverse-Wishart distribution, it's important to note that it originates from the Wishart distribution. As stated by \citet{anderson1962introduction} in 1962, ``The Wishart distribution ranks next to the (multivariate) normal distribution in order of importance and usefulness in multivariate statistics". \index{Wishart distribution} \begin{definition}[Wishart Distribution]\label{definition:wishart_dist} A random symmetric positive definite matrix $\boldsymbol\Lambda\in \mathbb{R}^{D\times D}$ is said to follow the Wishart distribution with parameter $\bm{M}\in\mathbb{R}^{D\times D}$ and $\nu$, denoted by $\boldsymbol\Lambda \sim \mathrm{Wi}(\bm{M}, \nu)$, if $$ \begin{aligned} &\gap f(\boldsymbol\Lambda;\textcolor{black}{\bm{M}}, \nu)\\ &= |\boldsymbol\Lambda|^{\textcolor{black}{\frac{\nu-D-1}{2}}} \exp\left\{-\frac{1}{2}\mathrm{tr}(\textcolor{black}{\boldsymbol\Lambda} \textcolor{black}{\bm{M}^{-1}})\right\} \left[2^{\frac{\nu D}{2}} \pi^{D(D-1)/4} \textcolor{black}{|\bm{M}|^{\nu/2} } \prod_{d=1}^D\Gamma\big(\frac{\nu+1-d}{2}\big) \right]^{-1}, \end{aligned} $$ where $\nu > D$ and $\bm{M}$ is a $D\times D$ symmetric positive definite matrix, and $|\boldsymbol\Lambda| = \mathrm{det}(\boldsymbol\Lambda)$ is the determinant of matrix $\boldsymbol\Lambda$. The $\nu$ is called the \textbf{number of degrees of freedom}, and $\bm{M}$ is called the \textbf{scale matrix}. The mean and variance of the Wishart distribution are given by $$ \begin{aligned} \mathbb{E} [\boldsymbol\Lambda] &= \nu \bm{M},\\ \mathrm{Var}[\boldsymbol\Lambda_{i,j}] &= \nu (m_{ij}^2 + m_{ii}m_{jj}), \label{equation:wishart_expectation} \end{aligned} $$ where $m_{ij}$ is the ($i,j$)-th element of $\bm{M}$. When $D=1$ and $\bm{M}=1$, the Wishart distribution reduces to the Chi-squared distribution (Definition~\ref{definition:chisquare_distribution}) such that: $$ \mathrm{Wi}(x\mid 1, \nu) = \chi^2(x\mid \nu). $$ \end{definition} An interpretation of the Wishart distribution is as follows. Suppose we sample i.i.d. $\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_{\nu}\in\mathbb{R}^D$ from $\mathcal{N}(\mathbf{0}, \bm{M})$. The sum of squares matrix of the collection of multivariate vectors is given by $$ \sum_{i=1}^{\nu} \bm{z}_i\bm{z}_i^\top = \bm{Z}^\top\bm{Z}, $$ where $\bm{Z}$ is the $\nu \times D$ matrix whose $i$-th row is $\bm{z}_i^\top$. It is trivial that $\bm{Z}^\top\bm{Z}$ is positive semidefinite (PSD) and symmetric. If $\nu >D$ and the $\bm{z}_i$'s are linearly independent, then $\bm{Z}^\top \bm{Z}$ will be positive definite (PD) and symmetric. That is $\bm{Z}\bm{x}=\mathbf{0}$ only happens when $\bm{x}=\mathbf{0}$. We can repeat over and over again, generating matrices $\bm{Z}_1^\top\bm{Z}_1, \bm{Z}_2^\top\bm{Z}_2, \ldots, \bm{Z}_l^\top\bm{Z}_l$. The population distribution of these matrices follows a Wishart distribution with parameters $(\bm{M}, \nu)$. By definition, $$ \begin{aligned} \boldsymbol\Lambda&=\bm{Z}^\top\bm{Z} = \sum_{i=1}^{\nu} \bm{z}_i\bm{z}_i^\top; \\ \mathrm{E}[\boldsymbol\Lambda]&=\mathrm{E}[\bm{Z}^\top\bm{Z}] = \mathrm{E}\left[\sum_{i=1}^{\nu} \bm{z}_i\bm{z}_i^\top\right] = \nu \mathrm{E}[\bm{z}_i\bm{z}_i^\top] = \nu\bm{M}. 
\\ \end{aligned} $$ When $D=1$, this reduces to the case that if $z$ is drawn from a zero-mean univariate normal random variable, then $z^2$ is drawn from a Gamma random variable. To be specific, $$ \mathrm{suppose } \qquad z \sim \mathcal{N}(0, a) , \qquad \mathrm{then } \qquad z^2\sim \mathrm{Ga}\big(\tfrac{1}{2}, \tfrac{1}{2a}\big), $$ where the Gamma distribution is in its shape-rate form; this is exactly the Wishart distribution with $D=1$, $\bm{M}=a$, and $\nu=1$. Just as the Gamma and inverse-Gamma distributions are related (if $x \sim \mathrm{Ga}(r, \lambda)$, then $y=\frac{1}{x} \sim \mathrm{IG}(r, \lambda)$), there is a similar connection between the inverse-Wishart distribution and the Wishart distribution. Since we often use the inverse-Wishart (IW) distribution as a prior distribution for a covariance matrix, it is often useful to replace $\bm{M}$ in the Wishart distribution with $\bm{S}=\bm{M}^{-1}$. As a result, a random $D\times D$ symmetric positive definite matrix $\boldsymbol\Sigma$ follows an inverse-Wishart $\mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}, \nu)$ distribution if $\boldsymbol\Sigma^{-1}=\boldsymbol\Lambda$ follows a Wishart $\mathrm{Wi}(\boldsymbol\Lambda\mid \bm{M}, \nu)$ distribution. \begin{definition}[Inverse-Wishart Distribution]\label{definition:multi_inverse_wishart} A random symmetric positive definite matrix $\boldsymbol\Sigma\in \mathbb{R}^{D\times D}$ is said to follow the inverse-Wishart distribution with parameters $\bm{S}\in\mathbb{R}^{D\times D}$ and $\nu$, denoted by $\boldsymbol\Sigma\sim \mathrm{IW}(\bm{S}, \nu)$, if $$ \begin{aligned} &\gap f(\boldsymbol\Sigma; \textcolor{red}{\bm{S}}, \nu)\\ &= |\boldsymbol\Sigma|^{\textcolor{blue}{-\frac{\nu+D+1}{2}}} \exp\left\{-\frac{1}{2}\mathrm{tr}(\textcolor{blue}{\boldsymbol\Sigma^{-1}} \textcolor{red}{\bm{S}})\right\} \times \left[2^{\frac{\nu D}{2}} \pi^{D(D-1)/4} \textcolor{red}{|\bm{S}|^{-\nu/2}} \prod_{d=1}^D\Gamma\big(\frac{\nu+1-d}{2}\big) \right]^{-1}, \end{aligned} $$ where $\nu > D$ and $\bm{S}$ is a $D\times D$ symmetric positive definite matrix, and $|\boldsymbol\Sigma| = \mathrm{det}(\boldsymbol\Sigma)$. The $\nu$ is called the \textbf{number of degrees of freedom}, and $\bm{S}$ is called the \textbf{scale matrix}. The mean and mode of the inverse-Wishart distribution are given by \begin{equation} \begin{aligned} \mathbb{E} [\boldsymbol\Sigma ^{-1}] &= \nu \bm{S}^{-1}=\nu \bm{M},\\ \mathbb{E} [\boldsymbol\Sigma] &= \frac{1}{\nu - D - 1} \bm{S}, \\ \mathrm{Mode}[\boldsymbol\Sigma] &= \frac{1}{\nu + D + 1} \bm{S}. \label{equation:iw_expectation} \end{aligned} \end{equation} Note that, sometimes, we replace $\bm{S}$ by $\bm{M}=\bm{S}^{-1}$ such that $\mathbb{E} [\boldsymbol\Sigma ^{-1}] = \nu \bm{M}$, which does not involve the inverse of the matrix. When $D=1$, the inverse-Wishart distribution reduces to the inverse-Gamma such that $\frac{\nu}{2} = r$ and $\frac{S}{2}=\lambda$ (see Definition~\ref{definition:inverse_gamma_distribution}): $$ \mathrm{IW}(y\mid S, \nu) = \mathrm{IG}(y\mid r, \lambda). $$ \end{definition} Note that the Wishart density is not simply the inverse-Wishart density with $\boldsymbol\Sigma$ replaced by $\boldsymbol\Lambda = \boldsymbol\Sigma^{-1}$. There is an additional factor of $|\boldsymbol\Sigma|^{-(D+1)}$. See Theorem 7.7.1 in \citet{anderson1962introduction}: the Jacobian of the transformation $\boldsymbol\Lambda = \boldsymbol\Sigma^{-1}$ is $|\boldsymbol\Sigma|^{-(D+1)}$.
Substituting $\boldsymbol\Sigma^{-1}$ into the definition of the Wishart distribution and multiplying by $|\boldsymbol\Sigma|^{-(D+1)}$ yields the inverse-Wishart distribution. \footnote{The factor comes from the Jacobian in the change-of-variables formula. A short proof: let $\boldsymbol\Lambda = g(\boldsymbol\Sigma)=\boldsymbol\Sigma^{-1}$, where $\boldsymbol\Sigma\sim \mathrm{IW}(\bm{S}, \nu)$ and $\boldsymbol\Lambda\sim \mathrm{Wi}(\bm{M}, \nu)$ with $\bm{M}=\bm{S}^{-1}$. Then $f(\boldsymbol\Sigma) = f(\boldsymbol\Lambda) |J_g|$, where $J_g$ is the Jacobian of the transformation, so that $f(\boldsymbol\Sigma) = f(\boldsymbol\Lambda) |J_g| = f(\boldsymbol\Lambda)|\boldsymbol\Sigma|^{-(D+1)} $. } The multivariate analog of the normal-inverse-Chi-squared distribution (Definition~\ref{definition:normal_inverse_chi_square}) is the \textit{normal-inverse-Wishart (NIW) distribution} \citep{murphy2007conjugate}. We will see that a sample drawn from a normal-inverse-Wishart distribution, a joint conjugate prior, gives a mean vector and a covariance matrix that can define a multivariate Gaussian distribution. Alternatively, in the \textit{semi-conjugate} formulation, we can sample a covariance matrix $\boldsymbol\Sigma$ from an inverse-Wishart distribution parameterized by \{$\bm{S}_0, \nu_0$\} and, independently, sample a mean vector $\boldsymbol\mu$ from a Gaussian distribution parameterized by \{$\bm{m}_0, \bm{V}_0$\}. \footnote{Here we use a subscript value of $0$ to indicate that the parameters are used for the prior density. However, in the Bayesian matrix decomposition analysis, things become more complex and the prior parameters could have other subscript values.} \index{Normal-inverse-Wishart distribution} \begin{definition}[Normal-Inverse-Wishart (NIW) Distribution]\label{definition:normal_inverse_wishart} Analogous to the (univariate) normal-inverse-Chi-squared distribution, its multivariate counterpart, the normal-inverse-Wishart (NIW) distribution, is defined as \begin{equation}\label{equation:normal_inverse_wishart} \begin{aligned} &\gap \mathcal{NIW} (\boldsymbol\mu, \boldsymbol\Sigma\mid \bm{m}, \kappa, \nu, \bm{S}) = \mathcal{N}(\boldsymbol\mu\mid \bm{m}, \frac{1}{\kappa}\boldsymbol\Sigma) \cdot \mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}, \nu) \\ &=\frac{1}{Z_{\mathcal{NIW}}(D, \kappa, \nu, \bm{S})} |\boldsymbol\Sigma|^{-1/2}\exp\left\{-\frac{\kappa}{2}(\boldsymbol\mu - \bm{m})^\top\boldsymbol\Sigma^{-1}(\boldsymbol\mu - \bm{m})\right\} \\ &\gap\gap\gap\gap \times |\boldsymbol\Sigma|^{-\frac{\nu+D+1}{2}} \exp\left\{-\frac{1}{2}\mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S})\right\} \\ &= \frac{1}{Z_{\mathcal{NIW}}(D, \kappa, \nu, \bm{S})} |\boldsymbol\Sigma|^{-\frac{\nu+D+2}{2}}\\ &\gap\gap\gap\gap \times \exp\left\{-\frac{\kappa}{2}(\boldsymbol\mu - \bm{m})^\top\boldsymbol\Sigma^{-1}(\boldsymbol\mu - \bm{m}) -\frac{1}{2}\mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S})\right\}, \end{aligned} \end{equation} where the random vector $\boldsymbol\mu\in\mathbb{R}^D$ and the random positive definite matrix $\boldsymbol\Sigma\in\mathbb{R}^{D\times D}$ are said to follow the NIW distribution, denoted by $\boldsymbol\mu, \boldsymbol\Sigma\sim \mathcal{NIW}(\bm{m}, \kappa, \nu, \bm{S})$, and $Z_{\mathcal{NIW}}(D, \kappa, \nu, \bm{S})$ is a normalizing constant: \begin{equation} Z_{\mathcal{NIW}}(D, \kappa, \nu, \bm{S}) = 2^{\frac{(\nu+1)D}{2}} \pi^{D(D+1)/4} \kappa^{-D/2} | \bm{S}|^{-\nu/2}\prod_{d=1}^D\Gamma\big(\frac{\nu+1-d}{2}\big).
\label{equation:multi_gaussian_giw_constant} \end{equation} \end{definition} We then proceed to discuss the posterior density of the Gaussian model under the NIW or IW prior from two perspectives: a separated view and a unified view. \subsection{Posterior Distribution of $\boldsymbol\mu$: Separated View}\label{section:sep_mu_niw} Consider again $N$ random observations $\mathcal{X} = \{\bm{x}_1, \bm{x}_2, \ldots , \bm{x}_N \}$ generated by a multivariate Gaussian with mean vector $\boldsymbol\mu$ and covariance matrix $\boldsymbol\Sigma$. Suppose the covariance matrix $\boldsymbol\Sigma$ of a multivariate Gaussian distribution is known in Equation~\eqref{equation:multi_gaussian_likelihood}, the likelihood function is (equality (a) in Equation~\eqref{equation:multi_gaussian_likelihood}) $$ \begin{aligned} \mathrm{\textbf{likelihood}} &=p(\mathcal{X} \mid \boldsymbol\mu) =\mathcal{N}(\mathcal{X}\mid \boldsymbol\mu, \boldsymbol\Sigma)=\prod^N_{n=1} \mathcal{N} (\bm{x}_n\mid \boldsymbol\mu, \boldsymbol\Sigma)\\ &= (2\pi)^{-ND/2} |\boldsymbol\Sigma|^{-N/2}\exp\left\{-\frac{1}{2} \sum^N_{n=1}(\bm{x}_n - \boldsymbol\mu)^\top \boldsymbol\Sigma^{-1}(\bm{x}_n - \boldsymbol\mu)\right\}\\ &\propto \exp\left( - \frac{1}{2}N \boldsymbol\mu^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu + N\overline{\bm{x}}^\top \boldsymbol\Sigma^{-1} \boldsymbol\mu \right), \end{aligned} $$ where $\overline{\bm{x}} = \frac{1}{N}\sum_{n=1}^{N}\bm{x}_n$. The conjugate prior of the mean vector is also a Gaussian $p(\boldsymbol\mu)= \mathcal{N}(\boldsymbol\mu \mid \bm{m}_0, \bm{V}_0)$, $$ \begin{aligned} \mathrm{\textbf{prior}}&=p(\boldsymbol\mu)= \mathcal{N}(\boldsymbol\mu \mid \bm{m}_0, \bm{V}_0)\\ &= (2\pi)^{-D/2} |\bm{V}_0|^{-1/2}\exp\left\{-\frac{1}{2} (\boldsymbol\mu - \bm{m}_0)^\top \bm{V}_0^{-1}(\boldsymbol\mu - \bm{m}_0)\right\} \\ &= (2\pi)^{-D/2} |\bm{V}_0|^{-1/2}\exp \left( -\frac{1}{2}\boldsymbol\mu^\top\bm{V}_0^{-1}\boldsymbol\mu + \boldsymbol\mu^\top \bm{V}_0^{-1}\bm{m}_0 - \frac{1}{2} \bm{m}_0^\top\bm{V}_0^{-1} \bm{m}_0 \right)\\ &\propto \exp \left( -\frac{1}{2}\boldsymbol\mu^\top\bm{V}_0^{-1}\boldsymbol\mu + \boldsymbol\mu^\top \bm{V}_0^{-1}\bm{m}_0\right). \end{aligned} $$ By the Bayes' theorem ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", we can obtain a Gaussian posterior for $\boldsymbol\mu$: $$ \begin{aligned} \mathrm{\textbf{posterior}}&=p(\boldsymbol\mu \mid \mathcal{X}, \boldsymbol\Sigma) \propto p(\mathcal{X} \mid \boldsymbol\mu, \boldsymbol\Sigma ) \times p(\boldsymbol\mu)\\ &=\exp\left( N\overline{\bm{x}}^\top \boldsymbol\Sigma^{-1} \boldsymbol\mu - \frac{1}{2}N \boldsymbol\mu^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu \right) \times \exp \left( -\frac{1}{2}\boldsymbol\mu^\top\bm{V}_0^{-1}\boldsymbol\mu + \boldsymbol\mu^\top \bm{V}_0^{-1}\bm{m}_0\right)\\ &= \exp\left\{ -\frac{1}{2}\boldsymbol\mu^\top(\bm{V}_0^{-1} + N\boldsymbol\Sigma^{-1})\boldsymbol\mu + \boldsymbol\mu^\top( \bm{V}_0^{-1}\bm{m}_0 + N\boldsymbol\Sigma^{-1}\overline{\bm{x}} ) \right\} \\ &\propto \mathcal{N}(\boldsymbol\mu \mid \bm{m}_N, \bm{V}_N) \end{aligned} $$ where $\bm{V}_N^{-1} = \bm{V}_0^{-1} + N\boldsymbol\Sigma^{-1}$, and $\bm{m}_N = \bm{V}_N ( \bm{V}_0^{-1}\bm{m}_0 + N\boldsymbol\Sigma^{-1}\overline{\bm{x}} )$. In this case, \textbf{the posterior precision matrix is the sum of the prior precision matrix $\bm{V}_0^{-1}$ and data precision matrix $N\boldsymbol\Sigma^{-1}$}. 
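As a quick numerical illustration of this update, the following minimal sketch (assuming NumPy is available; all variable names are illustrative and not part of the notation above) computes $\bm{V}_N$ and $\bm{m}_N$ for simulated data.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, N = 2, 50
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])     # known covariance
X = rng.multivariate_normal([1.0, -1.0], Sigma, size=N)

m0 = np.zeros(D)                               # prior mean m_0
V0 = 10.0 * np.eye(D)                          # prior covariance V_0 (weak)

# posterior precision = prior precision + data precision
VN = np.linalg.inv(np.linalg.inv(V0) + N * np.linalg.inv(Sigma))
mN = VN @ (np.linalg.inv(V0) @ m0 + N * np.linalg.inv(Sigma) @ X.mean(axis=0))
# with a weak prior, mN is close to the sample mean of X
\end{verbatim}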
By letting $\bm{V}_0 \rightarrow \infty \bm{I}$, we can model \textit{an uninformative prior} such that the posterior distribution of the mean is $p(\boldsymbol\mu \mid \mathcal{X}, \boldsymbol\Sigma) =\mathcal{N}(\boldsymbol\mu \mid \overline{\bm{x}}, \frac{1}{N}\boldsymbol\Sigma)$. \subsection{Posterior Distribution of $\boldsymbol\Sigma$: Separated View}\label{section:sep_sigma_niw} Suppose now the mean vector $\boldsymbol\mu$ of a multivariate Gaussian distribution is known in Equation~\eqref{equation:multi_gaussian_likelihood}, the likelihood is (equality (b) in Equation~\eqref{equation:multi_gaussian_likelihood}) \begin{equation*} \begin{aligned} \mathrm{\textbf{likelihood}} = p(\mathcal{X}\mid \boldsymbol\mu, \boldsymbol\Sigma) =\prod^N_{n=1} \mathcal{N} (\bm{x}_n\mid \boldsymbol\mu, \boldsymbol\Sigma) = (2\pi)^{-ND/2} |\boldsymbol\Sigma|^{-N/2}\exp\left\{-\frac{1}{2} \mathrm{tr}(\boldsymbol\Sigma^{-1}\bm{S}_{\boldsymbol\mu} ) \right\}. \end{aligned} \end{equation*} The corresponding conjugate prior is the inverse-Wishart distribution: $$ \begin{aligned} \mathrm{\textbf{prior}}= \mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}_0, \nu_0)&= |\boldsymbol\Sigma|^{-\frac{\nu_0+D+1}{2}} \exp\left\{-\frac{1}{2}\mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_0)\right\}\\ &\gap \times \left[2^{\frac{\nu_0 D}{2}} \pi^{D(D-1)/4} |\bm{S}_0|^{-\nu_0/2} \prod_{d=1}^D\Gamma(\frac{\nu_0+1-d}{2}) \right]^{-1}. \end{aligned} $$ By Bayes' theorem, ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", we obtain an inverse-Wishart posterior for $\boldsymbol\Sigma$: $$ \begin{aligned} \mathrm{\textbf{posterior}} &=p(\boldsymbol\Sigma \mid \mathcal{X}, \boldsymbol\mu) \propto p(\mathcal{X} \mid \boldsymbol\mu, \boldsymbol\Sigma ) \times p(\boldsymbol\Sigma)\\ &\propto |\boldsymbol\Sigma|^{-N/2}\exp\left\{-\frac{1}{2} \mathrm{tr}(\boldsymbol\Sigma^{-1}\bm{S}_{\boldsymbol\mu} ) \right\} \times |\boldsymbol\Sigma|^{-\frac{\nu_0+D+1}{2}} \exp\left\{-\frac{1}{2}\mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_0)\right\} \\ &= |\boldsymbol\Sigma|^{-\frac{\nu_0+N+D+1}{2}} \exp\left\{-\frac{1}{2}\mathrm{tr}\left(\boldsymbol\Sigma^{-1} [\bm{S}_0+\bm{S}_{\boldsymbol\mu}]\right)\right\}\\ &\propto \mathrm{IW}(\boldsymbol\Sigma \mid\bm{S}_0+\bm{S}_{\boldsymbol\mu}, \nu_0+N). \end{aligned} $$ The posterior degree of freedom is the sum of the prior degree of freedom $\nu_0$ and the number of observations $N$. And the posterior scale matrix is the sum of the prior scale matrix $\bm{S}_0$ and the data scale matrix $\bm{S}_{\boldsymbol\mu}$. The mean of the posterior $\boldsymbol\Sigma$ is given by $$ \begin{aligned} \mathrm{E}[\boldsymbol\Sigma| \mathcal{X}, \boldsymbol\mu] &= \frac{1}{\nu_0 +N- D - 1} (\bm{S}_0+\bm{S}_{\boldsymbol\mu}) \\ &= \frac{\nu_0 -D-1}{\nu_0 +N- D - 1} \cdot (\frac{1}{\nu_0 -D-1} \bm{S}_0) + \frac{N}{\nu_0 +N- D - 1} \cdot (\frac{1}{N}\bm{S}_{\boldsymbol\mu})\\ &= \lambda \cdot \big(\frac{1}{\nu_0 -D-1} \bm{S}_0 \big) + (1-\lambda) \cdot \big(\frac{1}{N}\bm{S}_{\boldsymbol\mu}\big), \end{aligned} $$ where $\lambda=\frac{\nu_0 -D-1}{\nu_0 +N- D - 1}$, $(\frac{1}{\nu_0 -D-1} \bm{S}_0)$ is the prior mean of $\boldsymbol\Sigma$, and $(\frac{1}{N}\bm{S}_{\boldsymbol\mu})$ is an unbiased estimator of the covariance such that $(\frac{1}{N}\bm{S}_{\boldsymbol\mu})$ converges to the true population covariance matrix when $N\rightarrow \infty$. Thus, \textbf{the posterior mean of the covariance matrix can be seen as the weighted average of the prior expectation and the unbiased estimator}. 
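The corresponding update for $\boldsymbol\Sigma$ with a known mean can be sketched in the same spirit (again a minimal example assuming NumPy; names are illustrative), and it also checks the weighted-average form numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
D, N = 2, 200
mu = np.zeros(D)                               # known mean
X = rng.multivariate_normal(mu, [[1.0, 0.5], [0.5, 1.5]], size=N)

nu0, S0 = D + 3, np.eye(D)                     # prior degrees of freedom and scale
S_mu = (X - mu).T @ (X - mu)                   # scatter about the known mean

post_mean = (S0 + S_mu) / (nu0 + N - D - 1)    # E[Sigma | X, mu]

lam = (nu0 - D - 1) / (nu0 + N - D - 1)
weighted = lam * S0 / (nu0 - D - 1) + (1 - lam) * S_mu / N
# post_mean and weighted coincide, illustrating the weighted-average form
\end{verbatim}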
The unbiased estimator can be also shown to be equal to the maximum likelihood estimator (MLE) of $\boldsymbol\Sigma$. As $N\rightarrow \infty$, it can be shown that the posterior expectation of $\boldsymbol\Sigma$ is a consistent \footnote{An estimator $\hat{\theta}_N$ of $\theta$ constructed on the basis of a sample of size $N$ is said to be consistent if $\hat{\theta}_N \stackrel{p}{\longrightarrow} \theta$ as $N\rightarrow \infty$. See also \citet{lu2021rigorous}.} estimator of the population covariance. When we set $\nu_0=D+1$, $\lambda=0$ and we recover the MLE. Similarly, the mode of the posterior $\boldsymbol\Sigma$ is given by \begin{equation}\label{equation:map-covariance-multigauss} \begin{aligned} \mathrm{Mode}[\boldsymbol\Sigma] &= \frac{1}{\nu_0 +N+ D + 1} (\bm{S}_0+\bm{S}_{\boldsymbol\mu})\\ &= \frac{\nu_0+D+1}{\nu_0 +N+ D + 1} (\frac{1}{\nu_0+D+1} \bm{S}_0)+ \frac{N}{\nu_0 +N+ D + 1} (\frac{1}{N} \bm{S}_{\boldsymbol\mu})\\ &=\beta (\frac{1}{\nu_0+D+1} \bm{S}_0)+ (1-\beta)(\frac{1}{N} \bm{S}_{\boldsymbol\mu}), \end{aligned} \end{equation} where $\beta=\frac{\nu_0+D+1}{\nu_0 +N+ D + 1}$, and $(\frac{1}{\nu_0+D+1} \bm{S}_0)$ is the prior mode of $\boldsymbol\Sigma$. \textbf{The posterior mode is a weighted average of the prior mode and the unbiased estimator}. Again, the maximum a posterior (MAP) estimator in Equation~\eqref{equation:map-covariance-multigauss} is a consistent estimator. \subsection{Gibbs Sampling of the Mean and Covariance: Separated View} The separated view here is known as a \textbf{semi-conjugate prior} on the mean and covariance of a multivariate Gaussian distribution since both conditionals, $p(\boldsymbol\mu\mid\mathcal{X},\boldsymbol\Sigma)$ and $p(\boldsymbol\Sigma\mid\mathcal{X},\boldsymbol\mu)$, are individually conjugate. In the last two sections, we have shown $$ \begin{aligned} \boldsymbol\mu \mid \mathcal{X}, \boldsymbol\Sigma &\sim \mathcal{N}(\bm{m}_N, \bm{V}_N),\\ \boldsymbol\Sigma \mid \mathcal{X}, \boldsymbol\mu &\sim \mathrm{IW}(\bm{S}_0+\bm{S}_{\boldsymbol\mu}, \nu_0+N). \end{aligned} $$ The two full conditional distributions can be used to construct a Gibbs sampler. The Gibbs sampler generates the mean and covariance $\{\boldsymbol\mu^{t+1}, \boldsymbol\Sigma^{t+1}\}$ for $(t+1)$-th step from $\{\boldsymbol\mu^{t}, \boldsymbol\Sigma^{t}\}$ in $t$-th step via the following two steps: \begin{enumerate} \item Sample $\boldsymbol\mu^{t+1}$ from its full conditional distribution: $\boldsymbol\mu^{t+1} \sim \mathcal{N}(\bm{m}_N, \bm{V}_N)$, where $\{\bm{m}_N, \bm{V}_N\}$ depend on $\boldsymbol\Sigma^{t}$. \item Sample $\boldsymbol\Sigma^{t+1}$ from its full conditional distribution: $\boldsymbol\Sigma^{t+1} \sim \mathrm{IW}(\bm{S}_0+\bm{S}_{\boldsymbol\mu}, \nu_0+N)$, where $\{\bm{S}_0+\bm{S}_{\boldsymbol\mu}, \nu_0+N\}$ depend on $\boldsymbol\mu^{t+1}$. \end{enumerate} \subsection{Posterior Distribution of $\boldsymbol\mu$ and $\boldsymbol\Sigma$ Under NIW: Unified View}\label{sec:niw_posterior_conjugacy} The NIW, on the other hand, serves as a fully conjugate prior for the mean vector and covariance matrix of a multivariate Gaussian model. 
\paragraph{Likelihood.} The likelihood of $N$ random observations $\mathcal{X} = \{\bm{x}_1, \bm{x}_2, \ldots , \bm{x}_N \}$ generated by a multivariate Gaussian with mean vector $\boldsymbol\mu$ and covariance matrix $\boldsymbol\Sigma$ is (equality (c) in Equation~\eqref{equation:multi_gaussian_likelihood}) $$ \begin{aligned} p(\mathcal{X} \mid \boldsymbol\mu, \boldsymbol\Sigma)= \frac{1}{(2\pi)^{ND/2}} |\boldsymbol\Sigma|^{-N/2}\exp\left\{-\frac{N}{2}(\boldsymbol\mu - \overline{\bm{x}})^\top \boldsymbol\Sigma^{-1}(\boldsymbol\mu - \overline{\bm{x}})-\frac{1}{2}\mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_{\overline{x}})\right\}. \end{aligned} $$ \paragraph{Prior.} A trivial prior is to combine the conjugate priors for $\boldsymbol\mu$ and $\boldsymbol\Sigma$ respectively in the above sections: $$ p(\boldsymbol\mu, \boldsymbol\Sigma) = \mathcal{N}(\boldsymbol\mu \mid \bm{m}_0, \bm{V}_0)\cdot \mathrm{IW}(\boldsymbol\Sigma \mid \bm{S}_0, \nu_0). $$ However, this is not a conjugate prior of the likelihood with parameters $\{\boldsymbol\mu, \boldsymbol\Sigma\}$ since $\boldsymbol\mu$ and $\boldsymbol\Sigma$ appear together in a non-factorized way in the likelihood. For the full parameters of a multivariate Gaussian distribution (i.e., mean vector $\boldsymbol\mu$ and covariance matrix $\boldsymbol\Sigma$), the normal-inverse-Wishart (NIW) prior is fully conjugate: \begin{equation}\label{equation:multi_gaussian_prior} \begin{aligned} &\gap \mathcal{NIW} (\boldsymbol\mu, \boldsymbol\Sigma\mid \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0) = \mathcal{N}(\boldsymbol\mu\mid \bm{m}_0, \frac{1}{\kappa_0}\boldsymbol\Sigma) \cdot \mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}_0, \nu_0) \\ &= \frac{|\boldsymbol\Sigma|^{-\frac{\nu_0+D+2}{2}}}{Z_{\mathcal{NIW}}(D, \kappa_0, \nu_0, \bm{S}_0)} \cdot \exp\left\{-\frac{\kappa_0}{2}(\boldsymbol\mu - \bm{m}_0)^\top\boldsymbol\Sigma^{-1}(\boldsymbol\mu - \bm{m}_0) -\frac{1}{2}\mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_0)\right\}, \end{aligned} \end{equation} where \begin{equation}\label{equation:multi_gaussian_giw_constant_2} Z_{\mathcal{NIW}}(D, \kappa_0, \nu_0, \bm{S}_0) = 2^{\frac{(\nu_0+1)D}{2}} \pi^{D(D+1)/4} \kappa_0^{-D/2} | \bm{S}_0|^{-\nu_0/2}\prod_{d=1}^D\Gamma\big(\frac{\nu_0+1-d}{2}\big). \end{equation} The specific form of the normalization term $Z_{\mathcal{NIW}}(D, \kappa_0, \nu_0, \bm{S}_0)$ will be useful to show the posterior marginal likelihood of the data in Section~\ref{section:posterior-marginal-of-data}. \paragraph{A ``prior" interpretation for the NIW prior.} The inverse-Wishart distribution will ensure that the resulting covariance matrix is positive definite when $\nu_0 > D$. And if we are confident that the true covariance matrix is near some covariance matrix $\boldsymbol\Sigma_0$, then we might choose a large value of $\nu_0$ and set $\bm{S}_0 = (\nu_0 - D - 1) \boldsymbol\Sigma_0$, making the distribution of the covariance matrix $\boldsymbol\Sigma$ concentrated around $\boldsymbol\Sigma_0$. On the other hand, choosing $\nu_0 = D+2$ and $\bm{S}_0 = \boldsymbol\Sigma_0$ will make $\boldsymbol\Sigma$ loosely concentrated around $\boldsymbol\Sigma_0$. More details can be referred to \citet{chipman2001practical, fraley2007bayesian, hoff2009first, murphy2012machine}. 
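To make the generative reading of this prior concrete, the following minimal sketch draws $(\boldsymbol\mu, \boldsymbol\Sigma)$ from the NIW prior (assuming NumPy and SciPy's \texttt{invwishart} are available; the hyperparameter values are illustrative).
\begin{verbatim}
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
D = 3
m0, kappa0 = np.zeros(D), 0.01
nu0, S0 = D + 2, np.eye(D)

# draw the covariance first, then the mean given that covariance
Sigma = invwishart(df=nu0, scale=S0).rvs()
mu = rng.multivariate_normal(m0, Sigma / kappa0)
# with nu0 = D + 2 we have E[Sigma] = S0, but single draws are widely spread
\end{verbatim}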
An intuitive interpretation of the hyperparameters \citep{murphy2012machine, hoff2009first}: $\bm{m}_0$ is our prior mean for $\boldsymbol\mu$, $\kappa_0$ is how strongly we believe this prior for $\boldsymbol\mu$ (the larger $\kappa_0$ is, the more strongly we believe this prior mean), $\bm{S}_0$ is proportional to our prior mean for $\boldsymbol\Sigma$, and $\nu_0$ controls how strongly we believe this prior for $\boldsymbol\Sigma$. Because the Gamma function is not defined for negative integers and zero, from Equation~\eqref{equation:multi_gaussian_giw_constant_2} we require $\nu_0 > D - 1$ (which can also be seen from the expectation of the covariance matrix in Equation~\eqref{equation:iw_expectation}). In addition, $\bm{S}_0$ needs to be a positive definite matrix; an intuitive reason can be seen from Equation~\eqref{equation:iw_expectation}, and a more detailed discussion can be found in \citet{hoff2009first}. \paragraph{Posterior.} By Bayes' theorem, ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", the posterior of the $\boldsymbol\mu$ and $\boldsymbol\Sigma$ parameters under the NIW prior is \begin{equation} p(\boldsymbol\mu, \boldsymbol\Sigma\mid \mathcal{X}, \boldsymbol\beta ) \propto p(\mathcal{X} \mid \boldsymbol\mu, \boldsymbol\Sigma) p(\boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta) = p(\mathcal{X}, \boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta), \label{equation:niw_full_posterior} \end{equation} where $\boldsymbol\beta=\{\bm{m}_0, \kappa_0, \nu_0, \bm{S}_0\}$ are the hyperparameters and the right hand side of Equation~\eqref{equation:niw_full_posterior} is also known as the full joint distribution $p(\mathcal{X}, \boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta)$, and is given by \begin{equation} \begin{aligned} p(\mathcal{X}, \boldsymbol\mu, \boldsymbol\Sigma\mid \boldsymbol\beta)&=p(\mathcal{X} \mid \boldsymbol\mu, \boldsymbol\Sigma) \cdot p(\boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta) \\ &= C\times |\boldsymbol\Sigma|^{- \frac{\nu_0+N+D+2}{2}} \times\\ &\gap \exp \Bigg\{ -\frac{N}{2}(\boldsymbol\mu - \overline{\bm{x}})^\top \boldsymbol\Sigma^{-1} (\boldsymbol\mu - \overline{\bm{x}}) - \frac{\kappa_0}{2} (\boldsymbol\mu - \bm{m}_0)^\top \boldsymbol\Sigma^{-1}(\boldsymbol\mu - \bm{m}_0) \\ &\gap -\frac{1}{2} \mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_{\overline{x}}) - \frac{1}{2} \mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_0) \Bigg\},\\ \end{aligned} \label{equation:niw_full_joint} \end{equation} where $C =\frac{(2\pi)^{-ND/2}}{Z_{\mathcal{NIW}}(D, \kappa_0, \nu_0, \bm{S}_0)}$ is a constant normalization term.
This can be reduced to \begin{equation} \begin{aligned} &\gap p(\mathcal{X}, \boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta)\\ &=C|\boldsymbol\Sigma|^{- \frac{\nu_0+N+D+2}{2}} \times \\ &\gap \exp \Bigg\{-\frac{\kappa_0+N}{2} \left(\boldsymbol\mu - \frac{\kappa_0 \bm{m}_0+N \overline{\bm{x}}}{\kappa_N} \right)^\top \boldsymbol\Sigma^{-1} \left(\boldsymbol\mu - \frac{\kappa_0 \bm{m}_0+N \overline{\bm{x}}}{\kappa_N} \right) \\ &\gap - \frac{1}{2} \mathrm{tr} \left[\boldsymbol\Sigma^{-1} \left( \bm{S}_0 + \bm{S}_{\overline{x}} + \frac{\kappa_0 N}{\kappa_0 + N} (\overline{\bm{x}} - \bm{m}_0)(\overline{\bm{x}}-\bm{m}_0)^\top \right) \right] \Bigg\}, \end{aligned} \label{equation:niw_full_joint2} \end{equation} which is reformulated to compare with the NIW form in Equation~\eqref{equation:multi_gaussian_prior}, and we can see the reason why we rewrite the multivariate Gaussian distribution into Equation~\eqref{equation:multi_gaussian_identity} by the trace trick. It follows that the posterior is also a NIW density with updated parameters and gives the view of conjugacy for a multivariate Gaussian distribution: \begin{equation} p(\boldsymbol\mu, \boldsymbol\Sigma\mid \mathcal{X} , \boldsymbol\beta) = \mathcal{NIW} (\boldsymbol\mu, \boldsymbol\Sigma \mid \bm{m}_N, \kappa_N, \nu_N, \bm{S}_N), \label{equation:niw_posterior_equation_1} \end{equation} where \begin{align} \bm{m}_N &= \frac{\kappa_0\bm{m}_0 + N\overline{\bm{x}}}{\kappa_N} = \frac{\kappa_0 }{\kappa_N}\bm{m}_0+\frac{N}{\kappa_N}\overline{\bm{x}} , \label{equation:niw_posterior_equation_2}\\ \kappa_N &= \kappa_0 + N, \label{equation:niw_posterior_equation_3}\\ \nu_N &=\nu_0 + N, \label{equation:niw_posterior_equation_4}\\ \bm{S}_N &=\bm{S}_0 + \bm{S}_{\overline{x}} + \frac{\kappa_0N}{\kappa_0 + N}(\overline{\bm{x}} - \bm{m}_0)(\overline{\bm{x}} - \bm{m}_0)^\top \label{equation:niw_posterior_equation_5}\\ &=\bm{S}_0 + \sum_{n=1}^N \bm{x}_n \bm{x}_n^\top + \kappa_0 \bm{m}_0 \bm{m}_0^\top - \kappa_N \bm{m}_N \bm{m}_N^\top . \label{equation:niw_posterior_equation_6} \end{align} \paragraph{A ``posterior" interpretation for the NIW prior.} An intuitive interpretation of the parameters in NIW can be obtained from the updated parameters above. The parameter $\nu_0$ is the prior number of samples to observe the covariance matrix, and $\nu_N =\nu_0 + N$ is the posterior number of samples. The posterior mean $\bm{m}_N$ of the model mean $\boldsymbol\mu$ is a weighted average of the prior mean and the sample mean. The posterior scale matrix $\bm{S}_N$ is the sum of the prior scale matrix, empirical covariance matrix $\bm{S}_{\overline{x}}$, and an extra term due to the uncertainty in the mean. \subsubsection{Parameter Choice} In practice, it is often better to use a weakly informative data-dependent prior. A common choice is to set $\bm{S}_0 = \mathrm{diag}(\bm{S}_{\overline{x}})/N$, and $\nu_0 =D+2$, to ensure $\mathbb{E}[\boldsymbol\Sigma]=\bm{S}_0$, and to set $\bm{m}_0 =\overline{\bm{x}}$ and $\kappa_0$ to some small number, such as 0.01, where $\bm{S}_{\overline{x}}$ is the sample covariance matrix and $\overline{\bm{x}}$ is the sample mean vector as shown in Equation~\eqref{equation:mvu-sample-covariance} \citep{chipman2001practical, fraley2007bayesian, hoff2009first, murphy2012machine}. 
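The posterior updates in Equations~\eqref{equation:niw_posterior_equation_2} to \eqref{equation:niw_posterior_equation_5}, combined with the data-dependent default just described, can be sketched as follows (a minimal example assuming NumPy; the function name is illustrative).
\begin{verbatim}
import numpy as np

def niw_posterior(X, m0, kappa0, nu0, S0):
    """Return (m_N, kappa_N, nu_N, S_N) of the NIW posterior."""
    N, D = X.shape
    xbar = X.mean(axis=0)
    S_xbar = (X - xbar).T @ (X - xbar)          # scatter about the sample mean
    kappa_N, nu_N = kappa0 + N, nu0 + N
    m_N = (kappa0 * m0 + N * xbar) / kappa_N
    d = (xbar - m0).reshape(-1, 1)
    S_N = S0 + S_xbar + (kappa0 * N / kappa_N) * (d @ d.T)
    return m_N, kappa_N, nu_N, S_N

X = np.random.default_rng(3).normal(size=(100, 2))
xbar = X.mean(axis=0)
S_xbar = (X - xbar).T @ (X - xbar)
S0 = np.diag(np.diag(S_xbar)) / X.shape[0]      # S_0 = diag(S_xbar)/N
m_N, kappa_N, nu_N, S_N = niw_posterior(X, m0=xbar, kappa0=0.01,
                                        nu0=X.shape[1] + 2, S0=S0)
\end{verbatim}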
Equivalently, we can also standardize the observation matrix $\mathcal{X}$ first to have zero mean and unit variance for every feature, and then let $\bm{S}_0 = \bm{I}_D$ and $\nu_0 =D+2$, to ensure $\mathbb{E}[\boldsymbol\Sigma]=\bm{I}_D$, and set $\bm{m}_0 =\bm{0}$ and $\kappa_0$ to some small number, such as 0.01. \subsubsection{Reducing Sampling Time by Maintaining the Squared Sum of Customers}\label{section:reduce-sampling-sum-square} In this section, we introduce some tricks to implement the NIW updates more efficiently. The trick is widely used in Gaussian mixture models \citep{das2014dpgmm, lu2021survey}. In our Bayesian matrix factorization context, this trick is useful when we use cross-validation (CV) to down-sample matrix elements in each iteration (Section~\ref{section:gggw_model}, p.~\pageref{section:gggw_model}). The readers will understand the Chinese restaurant process terminology used in this section better after reading \citet{lu2021survey}; feel free to skip this section. We have seen the equivalence between Equation~\eqref{equation:niw_posterior_equation_5} and Equation~\eqref{equation:niw_posterior_equation_6}. The reason we go one step further from Equation~\eqref{equation:niw_posterior_equation_5} to Equation~\eqref{equation:niw_posterior_equation_6} is to reduce sampling time. Suppose now that the data are not fixed and some data points can be removed from or added to $\mathcal{X}$. If we stick to the form in Equation~\eqref{equation:niw_posterior_equation_5}, we need to recalculate $\bm{S}_{\overline{x}}$ and $\overline{\bm{x}}$ over and over again whenever the data points are updated. In Chinese restaurant process/clustering terminology, if we use Equation~\eqref{equation:niw_posterior_equation_5} instead of Equation~\eqref{equation:niw_posterior_equation_6}, whenever a customer is removed from (or added to) a table, we have to recompute the matrix $\bm{S}_{\overline{x}}$, which requires going over each point in this cluster (or each customer at this table, in Chinese restaurant process terms). Computing this term every time a customer is removed or added can be computationally expensive. We realize that the data terms in Equation~\eqref{equation:niw_posterior_equation_6} only involve a sum of outer products, which does not contain any cross terms (e.g., $\bm{x}_i\bm{x}_j^\top$ for $i \neq j$). By reformulating into Equation~\eqref{equation:niw_posterior_equation_6}, whenever a customer (e.g., a customer represented by $\bm{x}_n$) is removed or added, we just have to subtract or add $\bm{x}_n \bm{x}_n^\top$. Thus, for each table, we only have to maintain the squared sum of customer vectors $\sum_{n=1}^N \bm{x}_n \bm{x}_n^\top$ for $\bm{S}_N$. Similarly, for $\bm{m}_N$, we need to maintain the sum of customer vectors $\sum_{n=1}^N\bm{x}_n$ for the same reason, from Equation~\eqref{equation:niw_posterior_equation_2}; a small sketch of this bookkeeping is given below.
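A minimal sketch of this bookkeeping (assuming NumPy; the class and method names are illustrative) maintains the two running sums so that adding or removing a single vector costs $O(D^2)$, and recovers the posterior parameters through the outer-product form of Equation~\eqref{equation:niw_posterior_equation_6}.
\begin{verbatim}
import numpy as np

class NIWSuffStats:
    """Running sums for the outer-product form of the NIW posterior."""
    def __init__(self, D, m0, kappa0, nu0, S0):
        self.m0, self.kappa0, self.nu0, self.S0 = m0, kappa0, nu0, S0
        self.N = 0
        self.sum_x = np.zeros(D)                # sum of customer vectors
        self.sum_xxT = np.zeros((D, D))         # squared sum of customer vectors

    def add(self, x):                           # O(D^2) per update
        self.N += 1
        self.sum_x += x
        self.sum_xxT += np.outer(x, x)

    def remove(self, x):                        # O(D^2) per update
        self.N -= 1
        self.sum_x -= x
        self.sum_xxT -= np.outer(x, x)

    def posterior(self):
        kappa_N, nu_N = self.kappa0 + self.N, self.nu0 + self.N
        m_N = (self.kappa0 * self.m0 + self.sum_x) / kappa_N
        S_N = (self.S0 + self.sum_xxT
               + self.kappa0 * np.outer(self.m0, self.m0)
               - kappa_N * np.outer(m_N, m_N))
        return m_N, kappa_N, nu_N, S_N
\end{verbatim}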
\subsection{Posterior Marginal Likelihood of Parameters} The posterior marginal for $\boldsymbol\Sigma$ is given by \begin{equation*} \begin{aligned} p(\boldsymbol\Sigma \mid\mathcal{X},\boldsymbol\beta) &= \int_{\boldsymbol\mu} p(\boldsymbol\mu, \boldsymbol\Sigma \mid\mathcal{X},\boldsymbol\beta) d\boldsymbol\mu\\ &=\mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}_N, \nu_N), \end{aligned} \end{equation*} where the mean and mode can be obtained by Equation~\eqref{equation:iw_expectation}, $$ \begin{aligned} \mathrm{E}[\boldsymbol\Sigma \mid \mathcal{X}, \boldsymbol\beta] &= \frac{\bm{S}_N}{\nu_N - D-1}, \\ \mathrm{Mode}[\boldsymbol\Sigma \mid \mathcal{X}, \boldsymbol\beta]&=\frac{\bm{S}_N}{\nu_N + D+1}. \end{aligned} $$ The posterior marginal for $\boldsymbol\mu$ follows from a multivariate Student's $t$ distribution (Definition~\ref{definition:multivariate-stu-t}). We can show the posterior marginal for $\boldsymbol\mu$ is given by \begin{equation*} \begin{aligned} p(\boldsymbol\mu \mid \mathcal{X},\boldsymbol\beta) &= \int_{\boldsymbol\Sigma} p(\boldsymbol\mu, \boldsymbol\Sigma\mid \mathcal{X},\boldsymbol\beta) d\boldsymbol\Sigma\\ &= \int_{\boldsymbol\Sigma} \mathcal{NIW} (\boldsymbol\mu, \boldsymbol\Sigma \mid \bm{m}_N, \kappa_N, \nu_N, \bm{S}_N) d\boldsymbol\Sigma \\ &=\tau\big(\boldsymbol\mu \mid \bm{m}_N, \frac{1}{\kappa_N(\nu_N-D+1)}\bm{S}_N, \nu_N-D+1\big), \end{aligned} \end{equation*} which is from the Gaussian scale mixture property of the Student's $t$ distribution, see Equation~\eqref{equation:gauss-scale-mixture} and further discussion in \citet{murphy2012machine}. \subsection{Posterior Marginal Likelihood of Data}\label{section:posterior-marginal-of-data} By integrating the full joint distribution in Equation~\eqref{equation:niw_full_joint2}, we can get the marginal likelihood of data under hyperparameter $\boldsymbol\beta=\{\bm{m}_0, \kappa_0, \nu_0, \bm{S}_0\}$: \begin{equation} \begin{aligned} p(\mathcal{X}\mid\boldsymbol\beta) &= \int_{\boldsymbol\mu} \int_{\boldsymbol\Sigma} p(\mathcal{X}, \boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta) d\boldsymbol\mu d\boldsymbol\Sigma \\ &= \int_{\boldsymbol\mu} \int_{\boldsymbol\Sigma} \mathcal{N}(\mathcal{X}\mid\boldsymbol\mu, \boldsymbol\Sigma) \cdot \mathcal{NIW}(\boldsymbol\mu, \boldsymbol\Sigma\mid\boldsymbol\beta) d\boldsymbol\mu d\boldsymbol\Sigma \\ &= \frac{(2\pi)^{-ND/2}}{Z_{\mathcal{NIW}}(D, \kappa_0, \nu_0, \bm{S}_0)} \int_{\boldsymbol\mu}\int_{\boldsymbol\Sigma}|\boldsymbol\Sigma|^{-\frac{\nu_0+N+D+2}{2}} \\ &\gap \times \exp \left(-\frac{\kappa_N}{2} (\boldsymbol\mu - \bm{m}_N) \boldsymbol\Sigma^{-1} (\boldsymbol\mu - \bm{m}_N ) - \frac{1}{2} \mathrm{tr}(\boldsymbol\Sigma^{-1} \bm{S}_N ) \right)d \boldsymbol\mu d \boldsymbol\Sigma \\ &\overset{(*)}{=} (2\pi)^{-ND/2} \frac{Z_{\mathcal{NIW}}(D, \kappa_N, \nu_N, \bm{S}_N)}{Z_{\mathcal{NIW}}(D, \kappa_0, \nu_0, \bm{S}_0)} \\ &= \pi^{-\frac{ND}{2}} \cdot\frac{\kappa_0^{D/2}\cdot |\bm{S}_0|^{\nu_0/2}}{\kappa_N^{D/2}\cdot |\bm{S}_N|^{\nu_N/2}} \prod_{d=1}^D \frac{\Gamma(\frac{\nu_N+1-d}{2})}{\Gamma(\frac{\nu_0+1-d}{2})}, \end{aligned} \label{equation:niw_marginal_data} \end{equation} where the identity (*) above is from the fact that the integral reduces to the normalizing constant of the NIW density given in Equation~\eqref{equation:niw_posterior_equation_1}. \subsection{Posterior Predictive for Data without Observations} Similarly, suppose now we observe a data vector $\bm{x}^{\star}$ without observing any old data. 
Then the predictive for the data vector can be obtained by \begin{equation} \begin{aligned} p(\bm{x}^{\star} \mid \boldsymbol\beta) &= \int_{\boldsymbol\mu} \int_{\boldsymbol\Sigma} p(\bm{x}^{\star}, \boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta) d\boldsymbol\mu d\boldsymbol\Sigma \\ &= \int_{\boldsymbol\mu} \int_{\boldsymbol\Sigma} \mathcal{N}(\bm{x}^{\star} \mid \boldsymbol\mu, \boldsymbol\Sigma) \cdot \mathcal{NIW}(\boldsymbol\mu, \boldsymbol\Sigma \mid \boldsymbol\beta) d\boldsymbol\mu d\boldsymbol\Sigma\\ &= \pi^{-D/2} \frac{\kappa_0^{D/2} |\bm{S}_0|^{\nu_0/2} }{(\kappa_0 + 1) ^{D/2} |\bm{S}_1|^{\nu_1/2}} \prod_{d=1}^D \frac{\Gamma(\frac{\nu_1+ 1-d}{2})}{\Gamma(\frac{\nu_0 + 1-d}{2})}\\ &= \pi^{-D/2} \frac{\kappa_0^{D/2} |\bm{S}_0|^{\nu_0/2} }{(\kappa_0 + 1) ^{D/2} |\bm{S}_1|^{\nu_1/2}} \frac{\Gamma(\frac{\nu_0+ 1}{2})}{\Gamma(\frac{\nu_0 +1-D}{2})}, \end{aligned} \label{equation:niw_prior_predictive_abstract} \end{equation} where $\nu_1 = \nu_0+1$, $\bm{S}_1 = \bm{S}_0+ \frac{\kappa_0 }{\kappa_0+1} (\bm{x}^{\star}-\bm{m}_0)(\bm{x}^{\star}-\bm{m}_0)^\top$, and the last equality follows because the product of Gamma ratios telescopes when $\nu_1=\nu_0+1$. An alternative form of Equation~\eqref{equation:niw_prior_predictive_abstract} is in terms of a multivariate Student's $t$ distribution \begin{equation} p(\bm{x}^{\star} | \boldsymbol\beta) = \tau\big(\bm{x}^{\star} \mid \bm{m}_0, \frac{\kappa_0 + 1}{\kappa_0(\nu_0 - D + 1)}\bm{S}_0, \nu_0 - D + 1\big). \end{equation} \subsection{Posterior Predictive for New Data with Observations} Similar to the posterior predictive for data without observations, suppose now we observe a new data vector $\bm{x}^{\star}$ given old observations $\mathcal{X}$. Then the posterior predictive for this vector is \begin{equation} p(\bm{x}^{\star} \mid \mathcal{X}, \boldsymbol\beta) = \frac{p(\bm{x}^{\star}, \mathcal{X}\mid \boldsymbol\beta) }{p(\mathcal{X} \mid \boldsymbol\beta)}. \label{equation:niw_posterior_predictive_abstract} \end{equation} The denominator of Equation~\eqref{equation:niw_posterior_predictive_abstract} can be obtained directly from Equation~\eqref{equation:niw_marginal_data}. The numerator can be obtained in a similar way from Equation~\eqref{equation:niw_marginal_data} by considering the marginal likelihood of the augmented set $\{\mathcal{X}, \bm{x}^{\star}\}$. We just need to replace $N$ by $N^{\star}=N+1$ in Equation~\eqref{equation:niw_posterior_equation_2}, Equation~\eqref{equation:niw_posterior_equation_3}, and Equation~\eqref{equation:niw_posterior_equation_4}, and replace $\bm{S}_N$ by $\bm{S}_{N^{\star}}$ in Equation~\eqref{equation:niw_posterior_equation_5}. Therefore, we obtain \begin{equation} \begin{aligned} p(\bm{x}^{\star} \mid \mathcal{X}, \boldsymbol\beta) &= (2\pi)^{-D/2} \frac{Z_{\mathcal{NIW}}(D, \kappa_{N^{\star}}, \nu_{N^{\star}}, \bm{S}_{N^{\star}})}{Z_{\mathcal{NIW}}(D, \kappa_{N}, \nu_{N}, \bm{S}_{N})} \\ &= \pi^{-D/2} \frac{(\kappa_{N^{\star}})^{-D/2}|\bm{S}_{N} |^{(\nu_N)/2}}{(\kappa_{N})^{-D/2} |\bm{S}_{N^{\star}} |^{(\nu_{N^{\star}})/2} } \prod_{d=1}^D\frac{ \Gamma(\frac{\nu_{N^{\star}} + 1-d}{2})}{ \Gamma(\frac{\nu_{N} + 1-d}{2})} \\ &= \pi^{-D/2} \frac{(\kappa_{N^{\star}})^{-D/2}|\bm{S}_{N} |^{(\nu_N)/2}}{(\kappa_{N})^{-D/2} |\bm{S}_{N^{\star}} |^{(\nu_{N^{\star}})/2} } \frac{ \Gamma(\frac{\nu_{0} + N+1}{2})}{ \Gamma(\frac{\nu_{0} + N+1-D}{2})} .
\end{aligned} \label{equation:niw_posterior_predictive_equation} \end{equation} Again an alternative form of Equation~\eqref{equation:niw_posterior_predictive_equation} is to rewrite by a multivariate Student's $t$ distribution: \begin{equation*} p(\bm{x}^{\star} \mid \mathcal{X}, \boldsymbol\beta) = \tau \big(\bm{x}^{\star} \mid \bm{m}_N, \frac{\kappa_N + 1}{\kappa_N (\nu_N - D + 1)} \bm{S}_N, \nu_N - D + 1\big). \end{equation*} Thus, the mean and covariance of $\bm{x}^\star$ are given by \begin{equation*} \begin{aligned} \mathrm{E} [\bm{x}^\star \mid\mathcal{X}, \boldsymbol\beta] &= \bm{m}_N = \frac{\kappa_0 }{\kappa_0+N}\bm{m}_0+\frac{N}{\kappa_0+N}\overline{\bm{x}} ,\\ \mathrm{Cov} [\bm{x}^\star \mid\mathcal{X}, \boldsymbol\beta] &= \frac{\kappa_N + 1}{\kappa_N (\nu_N - D - 1)} \bm{S}_N = \frac{\kappa_0+N + 1}{(\kappa_0+N) (\nu_0+N - D - 1)} \bm{S}_N, \end{aligned} \end{equation*} where we can find, on average, the new coming data has expectation $\bm{m}_N$. We mentioned previously, $\kappa_0$ controls how strongly we believe this prior for $\boldsymbol\mu$. When $\kappa_0$ is large enough, $\mathrm{E} [\bm{x}^\star \mid\mathcal{X}, \boldsymbol\beta]$ converges to $\bm{m}_0$, the prior mean, and $\mathrm{Cov} [\bm{x}^\star \mid\mathcal{X}, \boldsymbol\beta]$ converges to $\frac{\bm{S}_N}{(\kappa_0+N) (\nu_0+N - D - 1)} $. In the meantime, if we set $\nu_0$ large enough, the covariance matrix $\boldsymbol\Sigma$ is concentrated around $\boldsymbol\Sigma_0$, and $$ \bm{S}_N \rightarrow \frac{\bm{S}_{\overline{x}}}{\nu_0} + \frac{\kappa_0N}{\nu_0(\kappa_0 + N)}(\overline{\bm{x}} - \bm{m}_0)(\overline{\bm{x}} - \bm{m}_0)^\top , $$ which is largely controlled by data sample and data magnitude (rather than the prior hyperparameters), so as the posterior variance $\mathrm{Cov} [\bm{x}^\star \mid \mathcal{X}, \boldsymbol\beta]$. \subsection{Further Optimization via the Cholesky Decomposition} \subsubsection{Definition} The Cholesky decomposition of a symmetric positive definite matrix $\bm{S}$ is its decomposition into the product of a lower triangular matrix $\bm{L}$ and its transpose: \begin{equation} \bm{S} = \bm{L} \bm{L}^\top, \end{equation} where $\bm{L}$ is called the \textit{Cholesky factor} of $\bm{S}$. We realize that an alternative form of the Cholesky decomposition is using its upper triangular $\bm{U}=\bm{L}^\top$, i.e., $\bm{S} = \bm{U}^\top \bm{U}$. A triangular matrix is a special kind of square matrices. Specifically, a square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero. If the matrix has dimensionality $D$, the complexity of Cholesky decomposition is $O(D^3)$. In specific, it requires $\sim \frac{1}{3}D^3$ floating points operations (flops) to compute a Cholesky decomposition of a $D\times D$ positive definite matrix \citep{lu2021numerical}, where the symbol ``$\sim$" has the usual asymptotic meaning \begin{equation*} \lim_{D \to +\infty} \frac{\mathrm{number\, of\, flops}}{(1/3)D^3} = 1. \end{equation*} \subsubsection{Rank One Update} A rank 1 update of matrix $\bm{S}$ by vector $\bm{x}$ is of the form \citep{seeger2004low, lu2021numerical} \begin{equation*} \bm{S}^\prime = \bm{S} + \bm{x} \bm{x}^\top. \end{equation*} If we have already calculated the Cholesky factor $\bm{L}$ of $\bm{S}$, then the Cholesky factor $\bm{L}^\prime$ of $\bm{S}^\prime$ can be calculated efficiently. 
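A minimal sketch of such a rank one update, written as the standard textbook algorithm in NumPy-style Python (the function name is illustrative and not taken from any particular library), is given below; the following paragraphs explain why it applies to the NIW quantities.
\begin{verbatim}
import numpy as np

def chol_rank1_update(L, x):
    """Given lower-triangular L with S = L L^T, return L' with L' L'^T = S + x x^T."""
    L, x = L.copy(), x.copy()
    D = x.shape[0]
    for k in range(D):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < D:
            L[k + 1:, k] = (L[k + 1:, k] + s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
    return L

S = np.array([[4.0, 1.0], [1.0, 3.0]])
x = np.array([0.5, -1.0])
L_new = chol_rank1_update(np.linalg.cholesky(S), x)
# L_new @ L_new.T equals S + np.outer(x, x), up to floating-point error
\end{verbatim}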
Note that $\bm{S}^\prime$ differs from $\bm{S}$ only in the rank one matrix $\bm{x}\bm{x}^\top$. Hence we can compute $\bm{L}^\prime$ from $\bm{L}$ using the rank one Cholesky update, which takes $O(D^2)$ operations rather than the $O(D^3)$ needed to recompute the factorization from scratch, provided that we already know $\bm{L}$, the Cholesky factor of $\bm{S}$. \subsubsection{Speedup for Determinant} The determinant of a positive definite matrix $\bm{S}$ can be computed from its Cholesky factor $\bm{L}$: \begin{equation*} |\bm{S}| = \prod_{d=1}^{D} l_{dd}^2,\qquad \log(|\bm{S}|) = 2\log(|\bm{L}|)= 2 \times \sum_{d=1}^D \log(l_{dd}), \end{equation*} where $l_{dd}$ is the ($d,d$)-th entry of matrix $\bm{L}$. This is an $O(D)$ operation, i.e., given the Cholesky decomposition, the determinant is just the product of the squared diagonal terms. \subsubsection{Update in NIW} Now we consider computing the marginal likelihood of data in Equation~\eqref{equation:niw_marginal_data} and the posterior predictive for new data in Equation~\eqref{equation:niw_posterior_predictive_abstract}; the two cases are similar. Taking the latter as an example, to compute the posterior predictive $p(\bm{x}^{\star} \mid \mathcal{X}, \boldsymbol\beta)$ in Equation~\eqref{equation:niw_posterior_predictive_abstract}, we just need to evaluate $ \frac{p(\bm{x}^{\star}, \mathcal{X} \mid \boldsymbol\beta) }{p(\mathcal{X} \mid \boldsymbol\beta)}$, in which we must calculate $|\bm{S}_N|$ and $|\bm{S}_{N^{\star}}|$ efficiently, where $N^{\star} = N+1$. We deal with computing the determinants $|\bm{S}_N|$ and $|\bm{S}_{N^{\star}}|$ by representing $\bm{S}_N$ and $\bm{S}_{N^{\star}}$ using their Cholesky decomposition forms. In particular, updates to $\bm{S}_N$ and $\bm{S}_{N^{\star}}$ will be carried out by directly updating their Cholesky decompositions; given the Cholesky decomposition, the determinant is just the product of the squared diagonal terms. Writing out $\bm{S}_{N^{\star}}$ in terms of $\bm{S}_N$: \begin{align} \bm{m}_N &= \frac{\kappa_{N^{\star}}\bm{m}_{N^{\star}} - \bm{x}^\star}{\kappa_N}=\frac{(\kappa_0 + N + 1)\bm{m}_{N^{\star}} - \bm{x}^\star}{\kappa_0 + N} , \\ \bm{m}_{N^{\star}} &= \frac{\kappa_{N} \bm{m}_N +\bm{x}^{\star}}{\kappa_{N^{\star}}} = \frac{(\kappa_{0}+N) \bm{m}_N +\bm{x}^{\star}}{\kappa_{0}+N+1},\\ \bm{S}_{N^{\star}} &= \bm{S}_N + \bm{x}^{\star} \bm{x}^{\star T} - \kappa_{N^{\star}} \bm{m}_{N^{\star}} \bm{m}_{N^{\star}}^\top + \kappa_N \bm{m}_N \bm{m}_N^\top \\ &= \bm{S}_N + \frac{\kappa_0 + N + 1}{\kappa_0 + N}(\bm{m}_{N^{\star}} - \bm{x}^\star)(\bm{m}_{N^{\star}} - \bm{x}^\star)^\top, \label{equation:cholesky_rank_1_form} \end{align} where Equation~\eqref{equation:cholesky_rank_1_form} implies that the Cholesky decomposition of $\bm{S}_{N^\star}$ can be obtained from that of $\bm{S}_N$ by a rank 1 update. Therefore, if we know the Cholesky decomposition of $\bm{S}_N$, the Cholesky decomposition of $\bm{S}_{N^\star}$ can be obtained in $O(D^2)$ time. \chapter{Bayesian Ordinal Matrix Factorization} \begingroup \hypersetup{linkcolor=winestain} \minitoc \newpage \endgroup \index{Decomposition: OGGW} \section{Ordinal Likelihood with Gaussian Prior and Wishart Hierarchical Prior (OGGW)} The properties of the Poisson factorization (PF) models (e.g., the PAA and PAAA models) show that their aim is to make recommendations by predicting future interactions between users and items.
Therefore, PF is often applied to implicit consumer data, where $\bm{A}\in\{0,1\}^{M\times N}$, i.e., the data contain only the information of whether or not a user has interacted with an item. In many applications, the data matrix $\bm{A}$ will be further constrained. The ordinal matrix factorization (OMF) considers ordinal data \citep{stevens1946theory}, where entries in $\bm{A}$ are restricted to a finite ordered (ranked) set of values that express a judgment of preference. For example, in collaborative filtering we seek to predict a consumer's rating of a novel item on an ordinal scale such as \textit{good} $>$ \textit{average} $>$ \textit{bad}; the temperature of a day is \textit{hot} $>$ \textit{warm} $>$ \textit{cold}; a teacher rates students by giving grades on their overall performance with the ordering $A>B>C>D>F$ \citep{paquet2005bayesian, chu2005gaussian, gouvert2020ordinal}. Although the real-valued or nonnegative matrix factorization techniques introduced in previous chapters can be used to find the decomposition, a specific approach that explicitly models this ordinal data can be more efficient. Instead of factoring $\bm{A}=\bm{W}\bm{Z}+\bm{E}$ directly, as is done in real-valued and nonnegative matrix factorization, the Bayesian ordinal matrix factorization introduces an additional \textit{hidden} matrix $\bm{H}=\bm{W}\bm{Z}+\bm{E}\in\mathbb{R}^{M\times N}$, which is then used as an unobserved input to an ordinal regression model to obtain the data matrix $\bm{A}$ (see Figure~\ref{fig:bmf_oggw}). Therefore, the ordinal matrix factorization is also called a hierarchical Bayesian model. \subsection{Ordinal Regression Likelihood} We now consider the data matrix $\bm{A}\in{\mathbb{A}}^{M\times N}$, where ${\mathbb{A}}$ is a finite set of $A$ ordered categories. Without loss of generality, these categories can be denoted as consecutive integers ${\mathbb{A}}=\{1,2,\ldots, A\}$ that conserve the known ordering information. The real number line is divided into a set of contiguous intervals with boundaries $\{b_a\}$, $$ -\infty =b_1 <b_2 <\ldots <b_{A+1}=\infty, $$ such that the interval $[b_a, b_{a+1})$ corresponds to the discrete category $a\in {\mathbb{A}}$. To model the hidden variable $h$, we introduce an additional hidden variable $f$ (see Figure~\ref{fig:bmf_oggw_ordireg}). The value of the variable $f$ implies the rank $a$ if $f$ falls into rank $a$'s interval: \begin{equation} p(a\mid f)= \left\{ \begin{aligned} &1, \, &\text{if } b_a\leq f < b_{a+1} \\ &0, \, &\text{otherwise} \end{aligned} \right. =u(f-b_a) - u(f-b_{a+1}), \end{equation} where $u(y)$ is the step function with value 1 if $y\geq 0$ and value 0 if $y<0$. Given the hidden value $h$, uncertainty about the exact location of $f$ can be modeled by a unit variance Gaussian, \begin{equation} p(f\mid h) = \mathcal{N}(f\mid h,1). \end{equation} Averaging over $f$ in $p(a, f\mid h)=p(a\mid f)p(f\mid h)$, we have \begin{equation}\label{equation:oggw_averaging_f} p(a\mid h) = \int p(a, f\mid h)df = \Phi(h-b_a) - \Phi(h-b_{a+1}), \end{equation} where $\Phi(y) = \int_{-\infty}^{y} \mathcal{N}(u\mid 0,1)du= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{y} \exp(-\frac{u^2}{2}) du $ is the cumulative distribution function of $\mathcal{N}(0,1)$. In Equation~\eqref{equation:oggw_averaging_f}, we use the fact that $$ \Phi(h-b) = \int \mathcal{N}(f\mid h,1)\,u(f-b) df $$ (see \citet{albert1993bayesian}).
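A small numerical sketch of the ordinal probabilities in Equation~\eqref{equation:oggw_averaging_f} (assuming NumPy and SciPy; the cut points below are illustrative) is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# cut points b_1 = -inf < b_2 < ... < b_{A+1} = +inf for A = 5 categories
b = np.array([-np.inf, -2.0, -0.5, 0.5, 2.0, np.inf])

def ordinal_probs(h):
    """p(a | h) = Phi(h - b_a) - Phi(h - b_{a+1}) for a = 1, ..., A."""
    return norm.cdf(h - b[:-1]) - norm.cdf(h - b[1:])

probs = ordinal_probs(0.3)
# probs sums to one, with most mass on the interval containing h = 0.3
\end{verbatim}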
Figure~\ref{fig:dists_oggw_ordireg} shows the probability functions $p(a\mid h)$ as $h$ varies. We observe that when $h$ falls outside the interval $[b_a, b_{a+1})$, the probability is not exactly zero. And when the width $b_{a+1}-b_a$ is small, the probability tends to be small (in a sense, a small width indicates a large number of categories, so the probability of falling into each interval tends to be small). \begin{figure}[htp] \centering \subfigtopskip=0pt \subfigbottomskip=6pt \subfigcapskip=-15pt \includegraphics[width=0.421\textwidth]{imgs/bmf_oggw_ordireg.pdf} \caption{Graphical representation of the ordinal regression model. } \label{fig:bmf_oggw_ordireg} \end{figure} \begin{SCfigure} \centering \subfigtopskip=2pt \subfigbottomskip=6pt \subfigcapskip=-15pt \includegraphics[width=0.621\textwidth]{imgs/dists_oggw_ordireg.pdf} \caption{Ordinal probability of the ordinal regression model in Equation~\eqref{equation:oggw_averaging_f}. } \label{fig:dists_oggw_ordireg} \end{SCfigure} \paragraph{Full likelihood.} The ordinal regression model then maps continuous latent variables $h_{mn}$ in $\bm{H}$ to probabilities $p(a_{mn} \mid h_{mn})$, \begin{equation} p(a_{mn} \mid h_{mn}) = \prod_{a=1}^{A} \left[ \Phi(h_{mn} - b_a) -\Phi(h_{mn} - b_{a+1})\right]^{\mathds{1}(a_{mn}=a)}. \end{equation} Suppose the set of observed entries (the training set) is denoted by $\mathcal{X} = \{a_{mn} \mid (m,n)\in \textit{training set}\}$. The likelihood of the observed entries under the ordinal regression model is \begin{equation} p(\mathcal{X} \mid \bm{H}) = \prod_{(m,n)} p(a_{mn} \mid h_{mn}), \end{equation} where the product is over all observed (training) entries $(m,n)$. \subsection{Matrix Factorization Modeling on Latent Variables} The Bayesian modeling is just the same as that of the GGGW model (Section~\ref{section:gggw_model}, p.~\pageref{section:gggw_model}), except that we now consider Gaussian likelihoods over the hidden variables $h_{mn}$ rather than the observed ratings $a_{mn}$. The full graphical representation of the model is shown in Figure~\ref{fig:bmf_oggw}; we call this model the ordinal likelihood with Gaussian and hierarchical normal-inverse-Wishart priors (OGGW) model. \paragraph{Likelihood.} We assume the residuals, $e_{mn}$, are i.i.d., zero-mean normal with precision $\tau = \frac{1}{\sigma^2}$, which gives rise to the following likelihood function, \begin{equation}\label{equation:oggw_likelihood} \begin{aligned} p(\bm{H}\mid \bm{W},\bm{Z},\tau) &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(h_{mn}\mid (\bm{W}\bm{Z})_{mn}, \sigma^2 \right)\\ &= \prod_{m,n=1}^{M,N} \mathcal{N} \left(h_{mn}\mid (\bm{W}\bm{Z})_{mn}, \tau^{-1} \right), \end{aligned} \end{equation} where $\sigma^2$ is the variance and $\tau=\sigma^{-2}$ is the precision.
\paragraph{Prior.} Given the $m$-th row $\bm{w}_m$ of $\bm{W}$ and the $n$-th column $\bm{z}_n$ of $\bm{Z}$, we consider the multivariate Gaussian density and the normal-inverse-Wishart prior as follows: \begin{equation}\label{equation:oggw_prior_wm_zn} \begin{aligned} \bm{w}_m\sim \mathcal{N}(\bm{w}_m\mid \boldsymbol\mu_w, \boldsymbol\Sigma_w), &\gap\gap \boldsymbol\mu_w, \boldsymbol\Sigma_w \sim \mathcal{NIW}(\boldsymbol\mu_w, \boldsymbol\Sigma_w\mid \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0);\\ \bm{z}_n\sim \mathcal{N}(\bm{z}_n\mid \boldsymbol\mu_z, \boldsymbol\Sigma_z), &\gap\gap \boldsymbol\mu_z, \boldsymbol\Sigma_z \sim \mathcal{NIW}(\boldsymbol\mu_z, \boldsymbol\Sigma_z\mid \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0),\\ \end{aligned} \end{equation} where $\mathcal{NIW} (\boldsymbol\mu, \boldsymbol\Sigma\mid \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0) = \mathcal{N}(\boldsymbol\mu\mid \bm{m}_0, \frac{1}{\kappa_0}\boldsymbol\Sigma) \cdot \mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}_0, \nu_0)$ is the density of a normal-inverse-Wishart distribution and $ \mathrm{IW}(\boldsymbol\Sigma\mid \bm{S}_0, \nu_0)$ is the inverse-Wishart distribution (Equation~\eqref{equation:multi_gaussian_prior}, p.~\pageref{equation:multi_gaussian_prior}). \begin{figure}[h] \centering \subfigtopskip=2pt \subfigbottomskip=6pt \subfigcapskip=-15pt \includegraphics[width=0.421\textwidth]{imgs/bmf_oggw.pdf} \caption{Graphical representation of OGGW model. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables. The slash ``/" in the variable represents ``or", and the comma ``," in the variable represents ``and".} \label{fig:bmf_oggw} \end{figure} The prior for the noise variance $\sigma^2$ is chosen as a conjugate inverse-Gamma density with shape ${\alpha_\sigma}$ and scale ${\beta_\sigma}$ (Definition~\ref{definition:inverse_gamma_distribution}, p.~\pageref{definition:inverse_gamma_distribution}), $$ p(\sigma^2)= \mathrm{IG}(\sigma^2\mid \alpha_\sigma, \beta_\sigma) = \frac{{\beta_\sigma}^{\alpha_\sigma}}{\Gamma({\alpha_\sigma})} (\sigma^2)^{-\alpha_\sigma-1} \exp\left( -\frac{{\beta_\sigma}}{\sigma^2} \right). $$ Or placing an inverse-Gamma prior on the variance is equivalent to applying a Gamma prior on the precision parameter. For the precision $\tau=\sigma^{-2}$, we use a Gamma distribution with shape $\alpha_\tau>0$ and rate $\beta_\tau>0$ (Definition~\ref{definition:gamma-distribution}, p.~\pageref{definition:gamma-distribution}), $$ p(\tau)\sim \mathrm{Ga} (\tau\mid \alpha_\tau, \beta_\tau) = \frac{\beta_{\tau}^{\alpha_{\tau}}}{\Gamma(\alpha_{\tau})} \tau^{\alpha_{\tau}-1} \exp({-\beta_{\tau}\cdot \tau}), $$ \subsection{Gibbs Sampler} To construct the Gibbs sampler, we need to obtain the conditional posterior for each variable. \paragraph{Latent variables.} The conditional density for latent variables $h_{mn}$ is \begin{equation} \begin{aligned} p(h_{mn}\mid a_{mn}, \bm{w}_m, \bm{z}_n, \tau) \propto p(a_{mn}\mid h_{mn})\, p(h_{mn}\mid \bm{w}_m,\bm{z}_n, \tau). \end{aligned} \end{equation} To sample from this conditional density, we introduce back the hidden variable $f_{mn}$. For brevity, we omit the subscript $m,n$. The density $f, h\mid a, \bm{w},\bm{z},\tau$ then can be sampled from in two steps, $f\mid a, \bm{w}, \bm{z}, \tau$ and $h\mid f, \bm{w}, \bm{z}, \tau$. 
The joint distribution of $a$, $f$, and $h$, given $m=\bm{w}^\top\bm{z}$ and $\tau$, is \begin{equation} p(a\mid f)\,p(f\mid h)\,p(h\mid m, \tau) = \left[ u(f-b_a) - u(f-b_{a+1})\right]\, \mathcal{N}(f\mid h,1)\, \mathcal{N}(h\mid m, \tau^{-1}). \end{equation} The conditional density $p(f\mid a, m, \tau )$ is then $$ p(f\mid a, m, \tau )=\mathcal{GTN}(f\mid m, 1+\tau^{-1}, b_a, b_{a+1}), $$ a general-truncated-normal density (Definition~\ref{definition:general_truncated_normal}, p.~\pageref{definition:general_truncated_normal}). Therefore, the sample $h$ can be obtained by \begin{equation}\label{equation:oggw_pos_hmn} \begin{aligned} p(h\mid f, m, \tau) &\propto p(f\mid h)\, p(h\mid m,\tau^{-1}) = \mathcal{N}(f\mid h,1) \, \mathcal{N}(h\mid m,\tau^{-1})\\ &\propto \mathcal{N}\left(h \,\bigg|\, \frac{f+m\tau}{1+\tau}, (1+\tau)^{-1}\right). \end{aligned} \end{equation} \paragraph{Multivariate Gaussian parameters.} As in the GGGW model, from the discussion in Section~\ref{sec:niw_posterior_conjugacy} (p.~\pageref{sec:niw_posterior_conjugacy}), the posterior density of $\{\boldsymbol\mu_w, \boldsymbol\Sigma_w\}$ also follows a NIW distribution with updated parameters: \begin{equation}\label{equation:oggw_mu_cov} \boldsymbol\mu_w, \boldsymbol\Sigma_w\sim \mathcal{NIW}(\boldsymbol\mu_w, \boldsymbol\Sigma_w\mid \bm{m}_M, \kappa_M, \nu_M, \bm{S}_M), \end{equation} where \begin{align} \bm{m}_M &= \frac{\kappa_0\bm{m}_0 + M\overline{\bm{w}}}{\kappa_M} = \frac{\kappa_0 }{\kappa_M}\bm{m}_0+\frac{M}{\kappa_M}\overline{\bm{w}} , \label{equation:niw_posterior_equation_2_oggw}\\ \kappa_M &= \kappa_0 + M, \label{equation:niw_posterior_equation_3_oggw}\\ \nu_M &=\nu_0 + M, \label{equation:niw_posterior_equation_4_oggw}\\ \bm{S}_M &=\bm{S}_0 + \bm{S}_{\overline{w}} + \frac{\kappa_0 M}{\kappa_0 + M}(\overline{\bm{w}} - \bm{m}_0)(\overline{\bm{w}} - \bm{m}_0)^\top \label{equation:niw_posterior_equation_5_oggw}\\ &=\bm{S}_0 + \sum_{m=1}^M \bm{w}_m \bm{w}_m^\top + \kappa_0 \bm{m}_0 \bm{m}_0^\top - \kappa_M \bm{m}_M \bm{m}_M^\top, \label{equation:niw_posterior_equation_6_oggw}\\ \overline{\bm{w}} &= \frac{1}{M}\sum_{m=1}^{M} \bm{w}_m, \label{equation:niw_posterior_equation_7_oggw}\\ \bm{S}_{\overline{w}} &= \sum_{m=1}^{M} (\bm{w}_m - \overline{\bm{w}})(\bm{w}_m - \overline{\bm{w}})^\top. \label{equation:niw_posterior_equation_8_oggw} \end{align} \paragraph{Gaussian variance parameter.} The conditional density of $\sigma^2$ depends on its parents ($\alpha_\sigma$, $\beta_\sigma$), children ($\bm{H}$), and coparents ($\bm{W}$, $\bm{Z}$). It is an inverse-Gamma distribution (by conjugacy in Equation~\eqref{equation:inverse_gamma_conjugacy_general}, p.~\pageref{equation:inverse_gamma_conjugacy_general}), \begin{equation}\label{equation:oggw_posterior_sigma2} \begin{aligned} &p(\sigma^2 \mid \bm{W}, \bm{Z}, \bm{H}) = \mathrm{IG} (\sigma^2\mid \widetilde{\alpha_{\sigma}}, \widetilde{\beta_{\sigma}}), \gap\gap\qquad \\ & \widetilde{\alpha_{\sigma}} = \frac{MN}{2} +{\alpha_\sigma}, \qquad \widetilde{\beta_{\sigma}} = \frac{1}{2} \sum_{m,n=1}^{M,N} (\bm{H}-\bm{W}\bm{Z})_{mn}^2 + {\beta_\sigma}.
\end{aligned} \end{equation} \paragraph{Gaussian precision parameter.} Alternatively, the conditional posterior density of $\tau=\frac{1}{\sigma^2}$ is obtained similarly (Equation~\eqref{equation:gamma_conjugacy_general}, p.~\pageref{equation:gamma_conjugacy_general}), \begin{equation}\label{equation:oggw_posterior_tau2} \begin{aligned} &p(\tau \mid \bm{W}, \bm{Z}, \bm{H}) = \mathrm{Ga} (\tau\mid \widetilde{\alpha_\tau}, \widetilde{\beta_\tau}), \gap\gap\qquad \\ &\widetilde{\alpha_\tau} = \frac{MN}{2} +{\alpha_\tau}, \qquad \widetilde{\beta_\tau} = \frac{1}{2} \sum_{m,n=1}^{M,N} (\bm{H}-\bm{W}\bm{Z})_{mn}^2 + {\beta_\tau}. \end{aligned} \end{equation} In practice, the prior parameters $\alpha_\tau$, $\beta_\tau$ are chosen to be equal to $\alpha_\sigma$, $\beta_\sigma$, respectively. \paragraph{Gibbs sampling.} By the Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler} (p.~\pageref{section:gibbs-sampler}), we can construct a Gibbs sampler for the OGGW model as formulated in Algorithm~\ref{alg:oggw_gibbs_sampler}. The specific choice of hyperparameters for the normal-inverse-Wishart prior on the mean and covariance is not critical in practice, since there is a large amount of data for learning that will override any reasonable weak prior. The weak prior can be chosen as $\bm{m}_0=\mathbf{0}, \kappa_0=1, \nu_0=K+1, \bm{S}_0=\bm{I}$. The choice of $\alpha_\tau, \beta_\tau$, in contrast, depends on the data set; a weak prior choice is $\alpha_\tau=\beta_\tau=1$. \begin{algorithm}[h] \caption{Gibbs sampler for OGGW model in one iteration (prior on $\tau=\frac{1}{\sigma^2}$). By default, uninformative hyperparameters are $\bm{m}_0=\mathbf{0}, \kappa_0=1, \nu_0=K+1, \bm{S}_0=\bm{I}$, $\alpha_\tau=\beta_\tau=1$.} \label{alg:oggw_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha_\tau, \beta_\tau, \bm{m}_0, \kappa_0, \nu_0, \bm{S}_0$; \For{$m=1$ to $M$} \State Sample $\bm{w}_{m}$ from $p(\bm{w}_m \mid \boldsymbol\mu_w, \boldsymbol\Sigma_w)$; \Comment{Equation~\eqref{equation:oggw_prior_wm_zn}} \State Sample $h_{mn}$ from $p(h_{mn}\mid a_{mn}, \bm{w}_m,\bm{z}_n,\tau)$ for each $n$; \Comment{Equation~\eqref{equation:oggw_pos_hmn}} \EndFor \For{$n=1$ to $N$} \State Sample $\bm{z}_{n}$ from $p(\bm{z}_n \mid \boldsymbol\mu_z, \boldsymbol\Sigma_z)$; \Comment{Equation~\eqref{equation:oggw_prior_wm_zn}} \State Sample $h_{mn}$ from $p(h_{mn}\mid a_{mn}, \bm{w}_m,\bm{z}_n,\tau)$ for each $m$; \Comment{Equation~\eqref{equation:oggw_pos_hmn}} \EndFor \State Sample $\tau$ from $p(\tau \mid \bm{W},\bm{Z}, \bm{H})$; \Comment{Equation~\eqref{equation:oggw_posterior_tau2}} \State Sample $\boldsymbol\mu_w, \boldsymbol\Sigma_w$ from $p(\boldsymbol\mu_w, \boldsymbol\Sigma_w\mid \bm{W}, M)$; \Comment{Equation~\eqref{equation:oggw_mu_cov}} \State Sample $\boldsymbol\mu_z, \boldsymbol\Sigma_z$ from $p(\boldsymbol\mu_z, \boldsymbol\Sigma_z\mid \bm{Z}, N)$; \Comment{Symmetry of Eq.~\eqref{equation:oggw_mu_cov}} \end{algorithmic} \end{algorithm} \section{Properties of OGGW} A significant feature of the OGGW model is that it is not limited to providing only the expected value of missing entries in $\bm{A}$; it can also provide a probability distribution over the feasible discrete values. Though this extra information cannot improve root mean squared error (RMSE) performance, other measures, such as the mean absolute error (MAE), can profit from it.
Given the hidden variables $\{h_{mn}\}$ and using the likelihood in Equation~\eqref{equation:oggw_averaging_f}, the expected category value for the $(m,n)$-th entry is $$ \begin{aligned} \sum_{a=1}^{A} a \cdot p(a\mid h_{mn}) &= \sum_{a=1}^{A} a \cdot \big(\Phi(h_{mn}-b_a)-\Phi(h_{mn}-b_{a+1})\big)\\ &= \sum_{a=1}^{A} \Phi(h_{mn} - b_a) - A\Phi(h_{mn}-b_{A+1}). \end{aligned} $$ Following the likelihood in Equation~\eqref{equation:oggw_likelihood} and integrating out $h_{mn}$, we have \begin{equation}\label{equation:oggw_rec_score1} {\textnormal{y}}_{mn} := \sum_{a=1}^{A} a\cdot p(a \mid \bm{w}_m,\bm{z}_n, \tau) = \sum_{a=1}^{A} \Phi\left( \frac{\bm{w}_m^\top\bm{z}_n - b_a}{\sqrt{1+\tau^{-1}}}\right). \end{equation} Therefore, instead of using the score in Equation~\eqref{equation:recom_poisson1} (p.~\pageref{equation:recom_poisson1}), the score $\mathrm{E}[{\textnormal{y}}_{mn} \mid \bm{A}]$ can be obtained by averaging the values of Equation~\eqref{equation:oggw_rec_score1} during the Gibbs sampling process. Similar to the third recommendation system introduced in Section~\ref{section:recom_poisson} (p.~\pageref{section:recom_poisson}), the OGGW model can also provide uncertainty about each entry in $\bm{A}$. Adopting again the idea of the Sharpe ratio, we can suggest an unconsumed movie $m$ (in the Netflix context, one for which $a_{mn}$ is unobserved) to user $n$ by the uncertainty-adjusted score of the posterior expected rating, $$ \text{score}_{mn} = \frac{\mathrm{E}[{\textnormal{y}}_{mn} \mid \bm{A}]}{\sqrt{\mathrm{Var}[{\textnormal{y}}_{mn} \mid \bm{A}]}}. $$ \chapter{Bayesian Poisson Matrix Factorization} \begingroup \hypersetup{linkcolor=winestain} \minitoc \newpage \endgroup \index{Decomposition: PAA} \section{Poisson Likelihood with Gamma Priors (PAA)} The Poisson likelihood with Gamma priors (PAA) model is proposed by \citet{gopalan2013scalable, gopalan2015scalable} in a recommendation system context due to the prevalence and popularity of movie recommendation data sets like the Netflix Challenge. The PAA model extends Poisson factorization \citep{canny2004gap, dunson2005bayesian, cemgil2009bayesian} and is further discussed in \citet{gopalan2014bayesian, hu2015zero}. The model works on nonnegative count data, $\bm{A}\in\mathbb{N}^{M\times N}$, e.g., a data matrix about users and items \footnote{The items can be any of movies, songs, articles, or products. We also refer to movies in the Netflix context.} where each user has consumed and possibly rated a set of items. The observation $a_{mn}$ is the rating that user $n$ gave to item $m$, or zero if no rating was given. We again assume the matrix is factored as the product of $\bm{W}\in\mathbb{R}_+^{M\times K}$ and $\bm{Z}\in\mathbb{R}_+^{K\times N}$. To be more specific, the PAA model considers minimizing the following loss: \begin{equation}\label{equation:poisson_per_example} \mathop{\min}_{\bm{W},\bm{Z}} L(\bm{W},\bm{Z}) = \mathop{\min}_{\bm{W},\bm{Z}}\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bm{w}_m^\top\bm{z}_n\right)^2, \end{equation} where $\bm{W}=[\bm{w}_1^\top; \bm{w}_2^\top; \ldots; \bm{w}_M^\top]\in \mathbb{R}^{M\times K}$ and $\bm{Z}=[\bm{z}_1, \bm{z}_2, \ldots, \bm{z}_N] \in \mathbb{R}^{K\times N}$ contain the $\bm{w}_m$'s and $\bm{z}_n$'s as \textbf{rows and columns}, respectively. Therefore, each item $m$ is represented by a vector of $K$ \textit{latent attributes} $\bm{w}_m$ and each user $n$ by a vector of $K$ \textit{latent preferences} $\bm{z}_n$.
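As a toy illustration of this row/column convention (the dimensions, seed, and data below are arbitrary), the following Python sketch forms the predicted matrix $\bm{W}\bm{Z}$ and evaluates the loss $L(\bm{W},\bm{Z})$ above.
\begin{verbatim}
import numpy as np

M, N, K = 5, 7, 3                     # items, users, latent dimension (arbitrary)
rng = np.random.default_rng(0)

W = rng.random((M, K))                # row m is the item-attribute vector w_m
Z = rng.random((K, N))                # column n is the user-preference vector z_n
A = rng.integers(0, 6, size=(M, N)).astype(float)   # toy rating matrix

A_hat = W @ Z                         # entry (m, n) equals w_m^T z_n
loss = np.sum((A - A_hat) ** 2)       # the loss L(W, Z) above
print(loss)
\end{verbatim}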
In the Netflix context, the PAA model is specifically designed to accommodate the \textit{heterogeneous interests of users} (some users tend to consume more than others), the different types of items (some items/movies are more popular than others), and the realistic distribution of the limited resources that users have to consume these items; the recommendation system literature suggests that an effective model should account for the \textit{heterogeneity} among both users and items \citep{koren2009matrix}. \paragraph{Likelihood.} Given a data matrix $\bm{A}$ about users and items, where each user has consumed and possibly rated a set of items, we assume each element $a_{mn}$ is Poisson distributed with mean given by the factored component $\bm{w}_m^\top\bm{z}_n$ (see Figure~\ref{fig:bmf_paa}), $$ a_{mn}\sim \mathcal{P}( \bm{w}_m^\top\bm{z}_n), $$ where $\mathcal{P}(\cdot)$ is a Poisson distribution whose parameter is a linear combination of the corresponding user preferences and item attributes, $\bm{w}_m^\top\bm{z}_n$. This is just like the Gaussian likelihood, where the expectation of $a_{mn}$ is also given by $\bm{w}_m^\top\bm{z}_n$ (Definition~\ref{definition:poisson_distribution}, p.~\pageref{definition:poisson_distribution}). Suppose further we decompose $a_{mn}$ into $K$ components, $$ a_{mn} = \sum_{k=1}^{K}o_{mnk}. $$ Each component is then assumed to be Poisson distributed, $$ o_{mnk}\sim \mathcal{P}( w_{mk}z_{kn}). $$ Since, by Theorem~\ref{theorem:sum_iid_poisson} (p.~\pageref{theorem:sum_iid_poisson}), the sum of independent Poisson random variables is again a Poisson random variable, we obtain the assumed Poisson likelihood as follows, \begin{equation}\label{equation:amn_poisson} a_{mn}=\sum_{k=1}^{K}o_{mnk}\sim \mathcal{P}( \bm{w}_m^\top\bm{z}_n). \end{equation} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[PAA.]{\label{fig:bmf_paa} \includegraphics[width=0.421\linewidth]{./imgs/bmf_paa.pdf}} \subfigure[PAAA.]{\label{fig:bmf_paaa} \includegraphics[width=0.421\linewidth]{./imgs/bmf_paaa.pdf}} \caption{Graphical model representations of PAA and PAAA models. Green circles denote prior variables, orange circles represent observed and latent variables, and plates represent repeated variables.} \label{fig:bmf_paa_paaa} \end{figure} \paragraph{Prior.} We assume $\bm{W}$ and $\bm{Z}$ are independently Gamma distributed with shape and rate parameters $\alpha$ and $\beta$, respectively (Definition~\ref{definition:gamma-distribution}, p.~\pageref{definition:gamma-distribution}), \begin{equation} w_{mk}\sim\mathrm{Ga}(w_{mk}\mid \alpha, \beta), \qquad z_{kn}\sim \mathrm{Ga}(z_{kn} \mid \alpha, \beta). \end{equation} This Gamma prior on the latent attributes and the latent preferences can drive the model towards a \textbf{sparse representation} of users and items (see Figure~\ref{fig:dists_gamma}, p.~\pageref{fig:dists_gamma} for examples of the Gamma distribution), which is more representative of real-world behavior.
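To see these generative assumptions in action, the following Python sketch (with arbitrary toy dimensions and hyperparameters) draws $\bm{W}$ and $\bm{Z}$ from the Gamma priors and then draws the count matrix $\bm{A}$ from the Poisson likelihood. Note that NumPy parameterizes the Gamma distribution by shape and \textit{scale}, so the rate $\beta$ enters as $1/\beta$.
\begin{verbatim}
import numpy as np

M, N, K = 100, 80, 5          # items, users, latent dimension (arbitrary)
alpha, beta = 0.3, 0.3        # Gamma shape and rate (toy values)
rng = np.random.default_rng(1)

# Gamma priors on latent attributes and preferences (scale = 1/rate).
W = rng.gamma(shape=alpha, scale=1.0 / beta, size=(M, K))
Z = rng.gamma(shape=alpha, scale=1.0 / beta, size=(K, N))

# Poisson likelihood with rate w_m^T z_n for each entry a_mn.
A = rng.poisson(W @ Z)
print(A.shape, A.mean(), (A == 0).mean())   # size, average count, fraction of zeros
\end{verbatim}
With a shape $\alpha<1$, most sampled weights lie close to zero while the prior mean of each weight stays at $\alpha/\beta$, which is exactly the sparsity effect described above.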
\paragraph{Posterior.} Let $\bm{o}^{mn}=[o_{mn1}, o_{mn2}, \ldots, o_{mnK}]^\top \in \mathbb{R}^K$. By Theorem~\ref{theorem:multinomial_poisson} (p.~\pageref{theorem:multinomial_poisson}), the conditional distribution of $\bm{o}^{mn}$ given $a_{mn} = \sum_{k=1}^{K}o_{mnk}$ is \begin{equation}\label{equation:paa_bomn} \mathrm{Multi}_K(\bm{o}^{mn}\mid a_{mn}, \bm{p}), \end{equation} where $\bm{p}=\frac{1}{\bm{w}_m^\top\bm{z}_n}[w_{m1}z_{1n}, w_{m2}z_{2n}, \ldots, w_{mK}z_{Kn}]^\top\in [0,1]^K$ such that $\mathbf{1}^\top\bm{p}=1$ and each element $p_i$ of $\bm{p}$ lies in the range $[0,1]$. The conditional posterior of $w_{mk}$ again follows from Bayes' rule, \begin{equation}\label{equation:paa_wmk} \begin{aligned} &\gap p(w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \alpha, \beta) \propto \prod_{j=1}^{N} \mathcal{P}(o_{mjk}\mid w_{mk}z_{kj}) \cdot \mathrm{Ga}(w_{mk}\mid \alpha, \beta)\\ &\propto w_{mk}^{(\sum_{j=1}^{N}o_{mjk})} \exp\left\{ -w_{mk}\left( \sum_{j=1}^{N}z_{kj} \right) \right\} \cdot w_{mk}^{\alpha-1} \exp\left( -\beta w_{mk}\right)\\ &\propto \mathrm{Ga}(w_{mk}\mid \widetilde{\alpha}, \widetilde{\beta}), \end{aligned} \end{equation} where $$ \begin{aligned} \widetilde{\alpha} = \alpha+\sum_{j=1}^{N}o_{mjk}, \qquad \widetilde{\beta} = \beta + \sum_{j=1}^{N}z_{kj} . \end{aligned} $$ By the definition of the Gamma distribution (Definition~\ref{definition:gamma-distribution}, p.~\pageref{definition:gamma-distribution}), the posterior mean of $w_{mk}$ is given by $$ \mathrm{E}[w_{mk}\mid \bm{A}, \bm{W}_{-mk}, \bm{Z}, \alpha, \beta] =\frac{\alpha+\sum_{j=1}^{N}o_{mjk}}{\beta + \sum_{j=1}^{N}z_{kj}}. $$ This is reasonable in the sense that, when $\sum_{j=1}^{N}o_{mjk}$ is large, we are dealing with large values of $a_{mn}$ and thus favor a large value of $w_{mk}$; while when the value of $\sum_{j=1}^{N}z_{kj}$ is large from the last iteration of the Gibbs sampling procedure, we expect a small value of $w_{mk}$ to compensate for it. By symmetry, a similar conditional density for $z_{kn}$ can be derived. \begin{algorithm}[h] \caption{Gibbs sampler for PAA model in one iteration. By default, uninformative hyperparameters are $\alpha=\beta=1$.} \label{alg:paa_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha, \beta$; \For{$m=1$ to $M$} \For{$n=1$ to $N$} \State Sample $\bm{o}^{mn}$ from $p(\bm{o}^{mn}\mid a_{mn}, \bm{p})$; \Comment{Equation~\eqref{equation:paa_bomn}} \EndFor \EndFor \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A},\bm{W}_{-mk}, \bm{Z}, \alpha, \beta)$; \Comment{Equation~\eqref{equation:paa_wmk}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A},\bm{W}, \bm{Z}_{-kn}, \alpha, \beta)$; \Comment{Symmetry of Eq.~\eqref{equation:paa_wmk}} \EndFor \EndFor \end{algorithmic} \end{algorithm} \paragraph{Gibbs sampling.} Using the Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler} (p.~\pageref{section:gibbs-sampler}), we can construct a Gibbs sampler for the PAA model as formulated in Algorithm~\ref{alg:paa_gibbs_sampler}. In practice, the initial hyperparameters can be set to a weak prior with $\alpha=\beta=1$. \index{Decomposition: PAAA} \section{Poisson Likelihood with Gamma Priors and Hierarchical Gamma Priors (PAAA)} The Poisson likelihood with Gamma priors and hierarchical Gamma priors (PAAA) model is introduced in \citet{gopalan2015scalable}.
\paragraph{Prior.} Going further from the PAA model, we put a hierarchical Gamma prior over the Gamma rate parameters: \begin{equation} \begin{aligned} a_{mn}\sim \mathcal{P}(a_{mn}\mid \bm{w}_m^\top\bm{z}_n), \gap &w_{mk}\sim\mathrm{Ga}(w_{mk}\mid \alpha, \lambda_m^W), &\gap& z_{kn}\sim \mathrm{Ga}(z_{kn} \mid \alpha, \lambda_n^Z),\\ &\lambda_m^W\sim \mathrm{Ga}(a, \frac{a}{b}), &\gap&\lambda_n^Z\sim \mathrm{Ga}(a, \frac{a}{b}). \end{aligned} \end{equation} The hierarchical structure allows us to capture the \textit{diversity of users}, i.e., some users tend to consume more than others, and the \textit{diversity of items}, i.e., some items are more popular than others. \paragraph{Posterior.} The conditional posteriors for $w_{mk}$, $z_{kn}$, and $\bm{o}^{mn}$ are identical to those in the PAA model, except that $\beta$ is replaced by $\lambda_m^W$ (for $w_{mk}$) or $\lambda_n^Z$ (for $z_{kn}$) in the expression for $\widetilde{\beta}$. For the conditional posterior of $\lambda_m^W$, the hierarchical part, we again follow Bayes' rule, \begin{equation}\label{equation:paaa_pos_lambda_mw} \begin{aligned} p(\lambda_m^W\mid \bm{W},\alpha,a,b )&\propto \prod_{k=1}^{K} \mathrm{Ga}(w_{mk}\mid \alpha, \lambda_m^W) \cdot \mathrm{Ga}(\lambda_m^W\mid a, \frac{a}{b})\\ &\propto \prod_{k=1}^{K} \frac{(\lambda_m^W)^\alpha}{\Gamma(\alpha)} w_{mk}^{\alpha-1} \exp(-\lambda_m^W w_{mk}) \cdot \frac{(\frac{a}{b})^a}{\Gamma(a)} (\lambda_m^W)^{a-1}\exp(-\frac{a}{b} \lambda_m^W)\\ &\propto (\lambda_m^W)^{K\alpha+a-1} \exp \left\{ -\lambda_m^W \left( \frac{a}{b} +\sum_{k=1}^{K} w_{mk} \right) \right\}\\ &\propto \mathrm{Ga}(\lambda_m^W \mid \widetilde{a}_m, \widetilde{b}_m), \end{aligned} \end{equation} where $$ \widetilde{a}_m=K\alpha+a, \qquad \widetilde{b}_m=\frac{a}{b} +\sum_{k=1}^{K} w_{mk} . $$ \paragraph{Gibbs sampling.} Using the Gibbs sampling method introduced in Section~\ref{section:gibbs-sampler} (p.~\pageref{section:gibbs-sampler}), we can construct a Gibbs sampler for the PAAA model as formulated in Algorithm~\ref{alg:paaa_gibbs_sampler}. In practice, the initial parameters can be set to a weak prior with $\alpha=a=b=1$. \begin{algorithm}[h] \caption{Gibbs sampler for PAAA model in one iteration. By default, uninformative hyperparameters are $\alpha=a=b=1$.} \label{alg:paaa_gibbs_sampler} \begin{algorithmic}[1] \Require Choose initial $\alpha,a, b$; \For{$m=1$ to $M$} \For{$n=1$ to $N$} \State Sample $\bm{o}^{mn}$ from $p(\bm{o}^{mn}\mid a_{mn}, \bm{p})$; \Comment{Equation~\eqref{equation:paa_bomn}} \EndFor \EndFor \For{$k=1$ to $K$} \For{$m=1$ to $M$} \State Sample $w_{mk}$ from $p(w_{mk} \mid \bm{A},\bm{W}_{-mk}, \bm{Z}, \alpha, \lambda_m^W)$; \Comment{Eq.~\eqref{equation:paa_wmk}, replace $\beta$ by $\lambda_m^W$} \State Sample $\lambda_{m}^W$ from $p(\lambda_m^W\mid \bm{W},\alpha,a,b )$; \Comment{Equation~\eqref{equation:paaa_pos_lambda_mw}} \EndFor \For{$n=1$ to $N$} \State Sample $z_{kn}$ from $p(z_{kn} \mid \bm{A},\bm{W}, \bm{Z}_{-kn}, \alpha, \lambda_n^Z)$; \Comment{Eq.~\eqref{equation:paa_wmk}, replace $\beta$ by $\lambda_n^Z$} \State Sample $\lambda_{n}^Z$ from $p(\lambda_{n}^Z\mid \bm{Z},\alpha,a,b )$; \Comment{Symmetry of Eq.~\eqref{equation:paaa_pos_lambda_mw}} \EndFor \EndFor \end{algorithmic} \end{algorithm} \begin{SCfigure} \centering \subfigtopskip=2pt \subfigbottomskip=6pt \subfigcapskip=-15pt \includegraphics[width=0.621\textwidth]{imgs/dists_gamma_paa.pdf} \caption{Gamma probability density functions $\mathrm{Ga}(\alpha, \beta)$ by reducing the shape parameter $\alpha$.
} \label{fig:dists_gamma_paa} \end{SCfigure} \section{Properties of PAA or PAAA} After introducing the modeling details, we outline some statistical features of the PAA or PAAA approaches. These characteristics offer benefits over the Gaussian likelihood matrix factorization methods we have discussed in previous chapters when considering the Netflix context. \paragraph{PAA or PAAA captures sparse factors.} As mentioned previously, the Gamma priors on the factored components, i.e., on the user preferences and item attributes, can encourage sparse representations of users and items. A small shape parameter in the Gamma prior results in most weights being close to zero, leaving only a few large ones (see Figure~\ref{fig:dists_gamma_paa} for examples of the Gamma distribution $\mathrm{Ga}(\alpha, \beta)$ where we reduce the shape parameter $\alpha$ from 3 to 1, given $\beta=1$. The density drives to zero.). This leads to a simpler and more easily interpretable model. \begin{figure}[h] \centering \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-3pt \subfigure[User activity.]{\label{fig:movielen_1m_user_activity} \includegraphics[width=0.481\linewidth]{./imgs/movielen_1m_user_activity.pdf}} \subfigure[Item popularity.]{\label{fig:movielen_1m_item_activity} \includegraphics[width=0.481\linewidth]{./imgs/movielen_1m_item_activity.pdf}} \caption{User activity and item popularity for the MovieLens 1M data set (see data description in Table~\ref{table:datadescription}, p.~\pageref{table:datadescription}).} \label{fig:movielen_1m_item_user} \end{figure} \paragraph{PAA or PAAA models the long-tail of users and items.} In the \textit{``implicit" consumer data} that we consider here $a_{mn}$ equals one if user $n$ consumed item $m$ and zero otherwise \footnote{In contrast, the ``explicit" consumer data contains a matrix of integer ratings.}, the distribution of \textit{user activity} (i.e., how many items a user consumed) and \textit{item popularity} (i.e., how many users consumed an item) in real-world user behavior data is characterized by a long-tail distribution, where the majority of users consume only a few items while a small number of ``tail users" consume a large amount. To see this, we consider the MovieLens 1M data set (see Table~\ref{table:datadescription}, p.~\pageref{table:datadescription}) which contains movie ratings for 6,040 movies from 3,503 users. Figure~\ref{fig:movielen_1m_user_activity} shows only a small portion of users have consumed more than 1,500 movies; and Figure~\ref{fig:movielen_1m_item_activity} shows only a small portion of items have been consumed by more than 500 users, indicating the long-tail behavior. The PAA or PAAA model can capture this property easily via a two-stage process. From Equation~\eqref{equation:amn_poisson}, for each user $n$, again by Theorem~\ref{theorem:sum_iid_poisson} (p.~\pageref{theorem:sum_iid_poisson}) and Theorem~\ref{theorem:multinomial_poisson} (p.~\pageref{theorem:multinomial_poisson}), we have $$ \begin{aligned} u_n = \sum_{i=1}^{M}a_{in} &\sim \mathcal{P}\left(\sum_{i=1}^{M} \bm{w}_i^\top\bm{z}_n\right),\\ [a_{1n}, a_{2n }, \ldots, a_{Mn}]^\top &\sim \mathrm{Multi}_M (u_n, \bm{q}), \end{aligned} $$ where $\bm{q}=\frac{1}{\sum_{i=1}^{M} \bm{w}_i^\top\bm{z}_n}[ \bm{w}_1^\top\bm{z}_n, \bm{w}_2^\top\bm{z}_n, \ldots, \bm{w}_M^\top\bm{z}_n]^\top \in [0,1]^M$ such that $\mathbf{1}^\top\bm{q}=1$. 
Therefore, the PAA or PAAA model first learns a \textit{budget} $u_n$ for each user $n$ and then learns how to distribute the budget across items. Learning this budget value is important for modeling the long-tail behavior of user activity. Similarly, for each item $m$, we have $$ \begin{aligned} v_m = \sum_{i=1}^{N}a_{mi} &\sim \mathcal{P}\left(\sum_{i=1}^{N} \bm{w}_m^\top\bm{z}_i\right),\\ [a_{m1}, a_{m2}, \ldots, a_{mN}]^\top &\sim \mathrm{Multi}_N (v_m, \bm{s}), \end{aligned} $$ where $\bm{s}=\frac{1}{\sum_{i=1}^{N} \bm{w}_m^\top\bm{z}_i}[ \bm{w}_m^\top\bm{z}_1, \bm{w}_m^\top\bm{z}_2, \ldots, \bm{w}_m^\top\bm{z}_N]^\top \in [0,1]^N$ such that $\mathbf{1}^\top\bm{s}=1$. The PAA or PAAA model finds the \textit{popularity} of item $m$ by $v_m$ and then learns how the popularity is distributed across users. \section{Recommendation Systems}\label{section:recom_poisson} In Section~\ref{section:als_movie_rec} (p.~\pageref{section:als_movie_rec}), we introduce two recommendation systems based on matrix factorization. We briefly revisit them in the following two paragraphs and then consider a new recommender based on this Bayesian matrix factorization model. \paragraph{Recommender 1.} A recommender system can work simply by suggesting an unconsumed movie $m$ (one for which $a_{mn}$ is unobserved) to user $n$ according to the posterior expected Poisson parameter, \begin{equation}\label{equation:recom_poisson1} \text{score}_{mn} = \mathrm{E}[\bm{w}_m^\top\bm{z}_n \mid \bm{A}]. \end{equation} The score $\mathrm{E}[\bm{w}_m^\top\bm{z}_n \mid \bm{A}]$ can be obtained by averaging the values during the Gibbs sampling iterations. \paragraph{Recommender 2.} After obtaining the item attributes $\{\bm{w}_1, \bm{w}_2, \ldots, \bm{w}_M\}$, we compute the similarity matrix of movies (under different measures, e.g., Pearson similarity or cosine similarity) and, for each user $n$, suggest movies with high similarity to the items the user has consumed. A precision-recall curve can help determine a threshold for the final recommendation. \paragraph{Recommender 3.} In the movie recommendation case, the uncertainty about each entry in $\bm{A}$ can be measured by its predictive standard deviation, which the Bayesian approach provides directly; a practical system can exploit this to suggest only items with high confidence. To do so, we absorb the idea of the \textit{Sharpe ratio} from quantitative finance. The Sharpe ratio is a measure of the risk-adjusted return of an investment, calculated as the ratio of its average return to its standard deviation. It measures the excess return per unit of risk and is widely used in finance to evaluate the performance of an investment relative to its volatility. The higher the Sharpe ratio, the better the risk-adjusted return of the investment is considered to be. Embracing this concept, we can suggest an unconsumed item $m$ (one for which $a_{mn}$ is unobserved) to user $n$ by the \textit{uncertainty-adjusted score} of the posterior expected Poisson parameters, $$ \text{score}_{mn} = \frac{\mathrm{E}[\bm{w}_m^\top\bm{z}_n \mid \bm{A}]}{\sqrt{\mathrm{Var}[\bm{w}_m^\top\bm{z}_n \mid \bm{A}]}}. $$ \chapter{Regular Probability Models and Conjugacy}\label{chapter:conjugate_models_bmf} \begingroup \hypersetup{linkcolor=winestain} \minitoc \newpage \endgroup \section{Conjugate Priors} In Section~\ref{sec:beta-bernoulli} (p.~\pageref{sec:beta-bernoulli}), we briefly discussed conjugate priors. We now present the formal definition as follows.
\begin{definition}[Conjugate Prior]\index{Conjugate prior} Given a family $\{p(\mathcal{X} \mid \boldsymbol\theta): \boldsymbol\theta \in \boldsymbol\Theta\}$ of generating distributions, a collection of priors $p_\omega (\boldsymbol\theta)$ indexed by $\boldsymbol\omega \in \boldsymbol\Omega$ is called a conjugate prior family if for any $\boldsymbol\omega$ and any data, the resulting posterior equals to $p_{\boldsymbol\omega^\prime} (\boldsymbol\theta \mid \mathcal{X})$ for some $\boldsymbol\omega^\prime \in \boldsymbol\Omega$. \end{definition} A toy example of the Beta-Bernoulli model can give us a better sense of the meaning behind the conjugate priors. \begin{example}[Beta-Bernoulli] Suppose $\mathcal{X}=\{x_1, x_2, ..., x_N\}$ are drawn independently and identically distributed (i.i.d.) from a Bernoulli distribution with parameter $\theta$, i.e., $Bernoulli(x\mid \theta)$. $Beta(\theta \mid a,b)$ distribution, with $a, b >0$, is conjugate to $Bernoulli(x\mid\theta)$, since the posterior density is $p(\theta \mid \mathcal{X}) = Beta(\theta \mid a+\sum x_i, b+N-\sum x_i)$. \hfill $\square$\par \end{example} Conjugate priors make it possible to do Bayesian reasoning in a computationally efficient manner, as well as having the philosophically satisfying interpretation of representing real or imaginary prior data. Generally, there are basically two reasons why models with conjugate priors are popular \citep{robert2007bayesian, bernardo2009bayesian, hoff2009first, gelman2013bayesian}: \begin{itemize} \item they usually allow us to derive a closed-form expression for the posterior distribution; \item they are easy to interpret, as we can easily see how the parameters of the prior change after the Bayesian update. \end{itemize} \section{Regular Univariate Models and Conjugacy} \begin{table}[H] \centering \setlength{\tabcolsep}{5.7pt} \begin{tabular}{l|l|l} \hline \hline \hyperref[definition:gaussian_distribution]{Gaussian, p.~\pageref{definition:gaussian_distribution}} & \hyperref[definition:gamma-distribution]{Gamma, p.~\pageref{definition:gamma-distribution}} & \hyperref[definition:gaussian_distribution]{Student's $t$, p.~\pageref{equation:student_t_dist}} \\ \hline \hyperref[{definition:inverse_gamma_distribution}]{Inverse-Gamma, p.~\pageref{definition:inverse_gamma_distribution}} & \hyperref[definition:truncated_normal]{Truncated-Normal, p.~\pageref{definition:truncated_normal}}& \hyperref[definition:gaussian_distribution]{Inverse-Gaussian, p.~\pageref{definition:inverse_gaussian_distribution}} \\ \hline \hyperref[definition:chisquare_distribution]{Chi-Square, p.~\pageref{definition:chisquare_distribution}} & \hyperref[definition:gaussian_distribution]{Normal-Inv-Gamma, p.~\pageref{definition:normal_inverse_gamma}}& \hyperref[definition:gaussian_distribution]{Inverse-Chi-Squared, p.~\pageref{definition:inverse-chi-square}} \\ \hline \hyperref[definition:gaussian_distribution]{Nor-Inv-Chi-Squared, p.~\pageref{definition:normal_inverse_chi_square}} & \hyperref[definition:general_truncated_normal]{General-Truncated-Nor, p.~\pageref{definition:general_truncated_normal}} & \hyperref[definition:gaussian_distribution]{Half-Normal, p.~\pageref{definition:half_normal}} \\ \hline \hyperref[definition:gaussian_distribution]{Laplace, p.~\pageref{definition:laplace_distribution}} & \hyperref[definition:gaussian_distribution]{Skew-Laplace, p.~\pageref{definition:skew_laplace_distribution}} & \hyperref[definition:gaussian_distribution]{Rectified-Normal, 
p.~\pageref{definition:reftified_normal_distribution}} \\ \hline \hyperref[definition:multinomial_dist]{Multinomial, p.~\pageref{definition:multinomial_dist}} & \hyperref[definition:dirichlet_dist]{Dirichlet, p.~\pageref{definition:dirichlet_dist}} & \hyperref[definition:poisson_distribution]{Poisson, p.~\pageref{definition:poisson_distribution}} \\ \hline \hyperref[definition:exponential_distribution]{Exponential, p.~\pageref{definition:exponential_distribution}} & \hyperref[definition:multivariate_gaussian]{Multi Gaussian, p.~\pageref{definition:multivariate_gaussian}} & \hyperref[definition:multivariate-stu-t]{Multi Student's $t$, p.~\pageref{definition:multivariate-stu-t}} \\ \hline \hyperref[definition:wishart_dist]{Wishart, p.~\pageref{definition:wishart_dist}} & \hyperref[definition:multi_inverse_wishart]{Inverse-Wishart, p.~\pageref{definition:multi_inverse_wishart}} & \hyperref[definition:normal_inverse_wishart]{Normal-Inv-Wishart, p.~\pageref{definition:normal_inverse_wishart}} \\ \hline \hline \end{tabular} \caption{Links for common distributions.} \label{table:common_distributions} \end{table} In most of our Bayesian matrix decomposition developments, we express the models with univariate distributions. While in the Gaussian model, we also apply multivariate distributions. In this section, we provide rigorous definitions for common univariate distributions and their conjugate priors. Table~\ref{table:common_distributions} provides an overview of what we will cover in this section. With special considerations, we also discuss the multivariate Gaussian distribution and its conjugacy in the next section. \index{Gaussian distribution} \begin{definition}[Gaussian or Normal Distribution]\label{definition:gaussian_distribution} A random variable ${\textnormal{x}}$ is said to follow the Gaussian distribution (a.k.a., a normal distribution) with mean and variance parameters $\mu$ and $\sigma^2>0$, denoted by ${\textnormal{x}} \sim \mathcal{N}(\mu,\sigma^2)$ \footnote{Note if two random variables ${\textnormal{a}}$ and ${\textnormal{b}}$ have the same distribution, then we write ${\textnormal{a}} \sim {\textnormal{b}}$.}, if $$ f(x; \mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^2}} \exp \left\{-\frac{1}{2\sigma^2 }(x-\mu)^2 \right\} =\sqrt{\frac{\tau}{2\pi}}\exp \left\{ -\frac{\tau}{2}(x-\mu)^2 \right\} . $$ The mean and variance of ${\textnormal{x}} \sim \mathcal{N}( \mu,\sigma^2)$ are given by $$ \mathrm{E}[{\textnormal{x}}] = \mu, \qquad \mathrm{Var}[{\textnormal{x}}] =\sigma^2=\tau^{-1}, $$ where $\tau$ is also known as the \textit{precision} of the Gaussian distribution. Figure~\ref{fig:dists_gaussian} compares different parameters $\mu, \sigma^2$ for the Gaussian distribution. \end{definition} \begin{SCfigure \centering \includegraphics[width=0.5\textwidth]{imgs/dists_gaussian.pdf} \caption{Gaussian probability density functions for different values of the mean and variance parameters $\mu$ and $\sigma^2$.} \label{fig:dists_gaussian} \end{SCfigure} Suppose $\mathcal{X}=\{x_1, x_2, ..., x_N\}$ are drawn i.i.d. from a Gaussian distribution of $\mathcal{N}(x\mid \mu, \sigma^2)$. 
For conjugate Bayesian analysis, we may rewrite the Gaussian probability density function as follows, \begin{equation}\label{equation:uni_gaussian_likelihood} \begin{aligned} p(\mathcal{X} \mid \mu, \sigma^2) &= \prod^N_{i=1} \mathcal{N} (x_i\mid\mu, \sigma^2) \\ &= (2\pi)^{-N/2} (\sigma^2)^{-N/2} \exp\left\{-\frac{1}{2 \sigma^2} \left[ N(\overline{x} - \mu)^2 + \sum_{n=1}^N(x_n - \overline{x})^2 \right] \right\} \\ &= (2\pi)^{-N/2} (\sigma^2)^{-N/2} \exp\left\{-\frac{1}{2 \sigma^2} \left[ N(\overline{x} - \mu)^2 + S_{\overline{x}} \right] \right\}, \end{aligned} \end{equation} where $S_{\overline{x}}=\sum_{n=1}^N(x_n - \overline{x})^2$ and $\overline{x} = \frac{1}{N} \sum_{i=1}^{N}x_i$. This form helps in finding the conditional posterior of the Gaussian likelihood under the \textit{normal-inverse-Gamma} prior (Equation~\eqref{equation:conjugate_nigamma_general}). Given fixed mean $\mu$ and variance $\sigma^2$ parameters, we have \begin{equation}\label{equation:gaussian_form_conform} \begin{aligned} p(x\mid \mu, \sigma^2) &=\mathcal{N}(x \mid \mu, \sigma^2) \propto \exp\left\{ -\frac{1}{2\sigma^2} x^2 + \frac{\mu}{\sigma^2} x \right\}, \end{aligned} \end{equation} where ``$\propto$" means ``proportional to". Therefore, if we find a form conforming to the above equation, we can say the random variable ${\textnormal{x}}$ follows the Gaussian distribution ${\textnormal{x}}\sim \mathcal{N}(\mu, \sigma^2)$. See the example of a Bayesian \textit{GGG} matrix decomposition model in Equation~\eqref{equation:ggg_poster_wmk1} (p.~\pageref{equation:ggg_poster_wmk1}). While the product of two Gaussian variables is in general not Gaussian, the sum of Gaussian variables again follows a Gaussian distribution. \begin{remark}[Sum of Gaussians] Let ${\textnormal{x}}$ and ${\textnormal{y}}$ be two (jointly) Gaussian distributed variables with means $\mu_x, \mu_y$ and variances $\sigma_x^2, \sigma_y^2$, respectively. \begin{itemize} \item When there is no correlation between the two variables, then it follows that $$ {\textnormal{x}}+{\textnormal{y}} \sim \mathcal{N}(\mu_x+\mu_y, \sigma_x^2+\sigma_y^2). $$ \item When there exists a correlation of $\rho$ between the two variables, then it follows that $$ {\textnormal{x}}+{\textnormal{y}} \sim \mathcal{N}(\mu_x+\mu_y, \sigma_x^2+\sigma_y^2+2\rho\sigma_x\sigma_y). $$ \end{itemize} \end{remark} \paragraph{Conjugate prior for mean of a Gaussian distribution and Normal-Normal model.} The Gaussian distribution is a conjugate prior for the mean parameter of a Gaussian distribution when the variance is fixed. To see this, suppose $\mathcal{X}=\{x_1, x_2, \ldots, x_N\}$ are i.i.d. normal with mean $\theta$ and precision $\lambda$, i.e., the likelihood is $\mathcal{N}(x_i \mid \theta, \lambda^{-1})$ where the variance $\sigma^2=\lambda^{-1}$ is fixed, and $\theta$ is given a $\mathcal{N}(\mu_0, \lambda^{-1}_0)$ prior: $\theta \sim \mathcal{N}( \mu_0, \lambda_0^{-1})$.
Using Bayes' theorem, ``posterior $\propto$ likelihood $\times$ prior", the posterior density is $$ p(\theta \mid \mathcal{X} ) \propto \prod_{i=1}^{N} \mathcal{N}(x_i \mid \theta, \lambda^{-1}) \times \mathcal{N}(\theta \mid \mu_0, \lambda_0^{-1})\propto \mathcal{N}(\theta \mid \widetilde{\mu}, \widetilde{\lambda}^{-1}), $$ where, given $\overline{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$, \begin{equation}\label{equation:posterior-param-normal-normal} \begin{aligned} \widetilde{\mu} &= \frac{\lambda_0 \mu_0 + \lambda \sum_{i=1}^{N} x_i}{ \lambda_0 + N\lambda} = \frac{\lambda_0}{\lambda_0 + N\lambda} \mu_0 + \frac{N\lambda}{\lambda_0 + N\lambda} \overline{x}, \\ \widetilde{\lambda} &= \lambda_0 + N \lambda. \end{aligned} \end{equation} Thus, the posterior mean is a weighted mean of the prior mean $\mu_0$ and the sample average $\overline{x}$; the posterior precision is the sum of the prior precision and sample precision $N\lambda$. We show that the Gaussian distribution is, itself, a conjugate prior for the mean parameter of a Gaussian distribution when fixing variance. This is often referred to as the \textit{Normal-Normal model} and this model can be used to provide evidence for bimodal property of human heights \citep{schilling2002human}. \index{Student's $t$ distribution} \begin{definition}[Student's $t$ Distribution]\label{equation:student_t_dist} A random variable ${\textnormal{x}}$ is said to follow the Student's $t$ distribution with parameters $\mu$, $\sigma^2>0$, and $\nu$, denoted by ${\textnormal{x}} \sim \tau(\mu,\sigma^2, \nu)$, if \begin{equation}\label{equation:stut_defini} \begin{aligned} f(x; \mu, \sigma^2, \nu)&= \frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})} \frac{1}{\sigma\sqrt{\nu\pi}} \times \left[ 1+ \frac{(x-\mu)^2}{\nu \sigma^2} \right]^{-(\frac{\nu+1}{2})}, \end{aligned} \end{equation} where $\sigma^2$ is called the \textbf{scale parameter}, and $\nu$ is the \textbf{degree of freedom}. The distribution has fatter tails than a Gaussian distribution. The smaller $\nu$ is, the fatter the tail. As $\nu\rightarrow \infty$, the distribution converges towards a Gaussian. A particular case of the Student's $t$ distribution is the \textbf{Cauchy distribution}, ${\textnormal{x}}\sim \mathcal{C}(\mu, \sigma^2)$ if ${\textnormal{x}}\sim \tau(\mu, \sigma^2, 1)$, i.e., $\nu=1$. The mean and variance of ${\textnormal{x}} \sim \tau(\mu,\sigma^2, \nu)$ are given by $$ \mathrm{E}[{\textnormal{x}}]=\left\{ \begin{aligned} &\mu , \, &\mathrm{if\,} \nu >1; \\ &\text{undefined}, \, &\mathrm{if\,} \nu \leq 1. \end{aligned} \right.\qquad \mathrm{Var}[{\textnormal{x}}]=\left\{ \begin{aligned} &\frac{\nu}{\nu-2}\sigma^2, \, &\mathrm{if\,} \nu >2; \\ &\infty, \, &\mathrm{if\,} 1<\nu\leq 2. \end{aligned} \right. $$ Figure~\ref{fig:dist_students_all} compares different parameters $\mu,\sigma^2, \nu$ for the Student's $t$ distribution. \end{definition} In Figure~\ref{fig:dists_studentt_varNU}, we vary $\nu$ parameter for the Student's $t$ distribution. As $\nu$ decreases, the distribution becomes more spread out, leading to fatter tails compared to a Gaussian distribution. This allows for more flexibility in modeling data with greater uncertainty or \textit{outliers} since the Student's $t$ distribution has a greater probability of observing extreme values. In Bayesian modeling, the Student's $t$ distribution is often used as a prior for the mean parameter of a Gaussian likelihood, allowing for estimation of both the mean and precision of the data. 
This results in a \textit{Student's $t$-Normal} model. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Student's $t$ distribution by varying parameter $\nu$. When $\nu=100$, the distribution is very close to a Gaussian distribution.]{\label{fig:dists_studentt_varNU} \includegraphics[width=0.481\linewidth]{./imgs/dists_studentt_varNU.pdf}} \subfigure[Student's $t$ distribution by varying parameter $\sigma^2$.]{\label{fig:dists_studentt_varVar} \includegraphics[width=0.481\linewidth]{./imgs/dists_studentt_varVar.pdf}} \caption{Student's $t$ distribution for different values of the parameters $\nu$ and $\sigma^2$.} \label{fig:dist_students_all} \end{figure} \index{Gamma distribution} \begin{definition}[Gamma Distribution]\label{definition:gamma-distribution} A random variable ${\textnormal{x}}$ is said to follow the Gamma distribution with shape parameter $r>0$ and rate parameter $\lambda>0$ \footnote{Note the inverse rate parameter $1/\lambda$ is called the scale parameter. In probability theory and statistics, the \textbf{location} parameter shifts the entire distribution left or right, e.g., the mean parameter of a Gaussian distribution; the \textbf{shape} parameter compresses or stretches the entire distribution; the \textbf{scale} parameter changes the shape of the distribution in some way. }, denoted by ${\textnormal{x}} \sim \mathrm{Ga}(r, \lambda)$, if $$ f(x; r, \lambda)=\left\{ \begin{aligned} &\frac{\lambda^r}{\Gamma(r)} x^{r-1} \exp(-\lambda x) ,& \mathrm{\,\,if\,\,} x \geq 0; \\ &0 , &\mathrm{\,\,if\,\,} x <0, \end{aligned} \right. $$ where $\Gamma(x)=\int_{0}^{\infty} x^{t-1}\exp(-x)dt$ is the Gamma function and we can just take it as a function to normalize the distribution into sum to 1. In special cases when $y$ is a positive integer, $\Gamma(y) = (y-1)!$. The mean and variance of ${\textnormal{x}} \sim \mathrm{Ga}(r, \lambda)$ are given by \begin{equation} \mathrm{E}[{\textnormal{x}}] = \frac{r}{\lambda}, \qquad \mathrm{Var}[{\textnormal{x}}] = \frac{r}{\lambda^2}. \nonumber \end{equation} Specially, let ${\textnormal{x}}_1, {\textnormal{x}}_2, \ldots, {\textnormal{x}}_n$ be i.i.d., random variables drawn from $\mathrm{Ga}(r_i, \lambda)$ for each $i \in \{1, 2, \ldots, n\}$. Then ${\textnormal{y}} = \sum_{i=1}^{n} {\textnormal{x}}_i$ is a random variable following from $\mathrm{Ga}(\sum_{i=1}^{n}r_i, \lambda)$. Figure~\ref{fig:dists_gamma} compares different parameters $r, \lambda$ for the Gamma distribution. \end{definition} It's crucial to keep in mind that the definition of the Gamma distribution does not restrict $r$ to be a natural number and it allows for $r$ to be any positive number. However, when $r$ is a positive integer, the Gamma distribution can be interpreted as a sum of $r$ exponentials of rate $\lambda$ (see Definition~\ref{definition:exponential_distribution}). The summation property holds true more generally for Gamma variables with the same rate parameter. If ${\textnormal{x}}_1$ and ${\textnormal{x}}_2$ are random variables from $\mathrm{Ga}(r_1, \lambda)$ and $\mathrm{Ga}(r_2, \lambda)$ respectively, then their sum ${\textnormal{x}}_1+{\textnormal{x}}_2$ is a Gamma random variable from $\mathrm{Ga}(r_1+r_2, \lambda)$. 
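As a quick numerical check of this summation property (and of the shape--rate convention, which differs from the shape--\textit{scale} convention used by libraries such as NumPy), the following Python sketch with arbitrary parameters compares Monte Carlo moments of ${\textnormal{x}}_1+{\textnormal{x}}_2$ against the moments of $\mathrm{Ga}(r_1+r_2, \lambda)$.
\begin{verbatim}
import numpy as np

r1, r2, lam = 2.0, 3.5, 1.5          # shapes and a common rate (arbitrary)
rng = np.random.default_rng(2)

# NumPy's gamma sampler takes shape and scale, where scale = 1/rate.
x1 = rng.gamma(shape=r1, scale=1.0 / lam, size=1_000_000)
x2 = rng.gamma(shape=r2, scale=1.0 / lam, size=1_000_000)
s = x1 + x2

# Ga(r1 + r2, lam) has mean (r1 + r2)/lam and variance (r1 + r2)/lam^2.
print(s.mean(), (r1 + r2) / lam)     # both approximately 3.67
print(s.var(), (r1 + r2) / lam**2)   # both approximately 2.44
\end{verbatim}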
\begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Gamma distribution.]{\label{fig:dists_gamma} \includegraphics[width=0.481\linewidth]{./imgs/dists_gamma.pdf}} \subfigure[Inverse-Gamma distribution.]{\label{fig:dists_inversegamma} \includegraphics[width=0.481\linewidth]{./imgs/dists_inversegamma.pdf}} \caption{Gamma and inverse-Gamma probability density functions for different values of the parameters $r$ and $\lambda$.} \label{fig:dists_gamma_inversegamma} \end{figure} \paragraph{Conjugate prior for rate of a Gamma distribution and Gamma-Gamma model.} The Gamma distribution is a conjugate prior of the \textit{rate} parameter of another Gamma distribution. Suppose data $\mathcal{X} =\{x_1,x_2,\ldots,x_N\}$ follows i.i.d. from a Gamma distribution ${\textnormal{x}}_i\sim \mathrm{Ga}(r, \lambda)$. Suppose further the rate parameter is given a Gamma prior $\lambda \sim \mathrm{Ga}(a, \frac{a}{b})$. Using Bayes' theorem, the posterior density is $$ \begin{aligned} p(\lambda \mid \mathcal{X}) &\propto \prod_{i=1}^{N}\mathrm{Ga}(x_i \mid r, \lambda) \times \mathrm{Ga}(\lambda \mid a, \frac{a}{b})\\ &\propto \prod_{i=1}^{N} \frac{\lambda^r}{\Gamma(r)} x_i^{r-1} \exp(-\lambda x_i) \times \frac{(\frac{a}{b})^a}{\Gamma(a)} \lambda^{a-1} \exp(-\frac{a}{b} \lambda) \\ &\propto \lambda^{Nr+a-1} \exp\left\{-\bigg(\sum_{i=1}^{N}x_i +\frac{a}{b}\bigg) \lambda\right\} \propto \mathrm{Ga}(\lambda \mid\widetilde{\alpha}, \widetilde{\beta}), \end{aligned} $$ where $$ \widetilde{\alpha} =Nr+a, \qquad \widetilde{\beta}=\sum_{i=1}^{N}x_i +\frac{a}{b}. $$ That is, the posterior density of rate $\lambda$ follows from a Gamma distribution. We show that the Gamma distribution is, itself, a conjugate prior for the rate parameter of a Gamma distribution when fixing the shape parameter. This is often referred to as the \textit{Gamma-Gamma model}. \paragraph{Conjugate prior for precision of a Gaussian distribution.} The Gamma distribution is a conjugate prior to the \textit{precision} parameter of a Gaussian distribution. To see this, suppose each entry $a_{mn}$ of matrix $\bm{A}$ is i.i.d. normal model with mean $b_{mn}$ and precision $\tau$, i.e., the likelihood is $p(\bm{A} \mid \bm{B}, \tau^{-1})=\mathcal{N}(\bm{A}\mid\bm{B}, \tau^{-1})$, the prior of $\tau$ is $p(\tau)=\mathrm{Ga}(\tau \mid \alpha, \beta)$ where $\bm{A}, \bm{B}\in \mathbb{R}^{M\times N}$ are two matrices containing $a_{mn}$ and $b_{mn}$ respectively (the result can be applied to vector or scalar cases). Using Bayes' theorem, it can be shown that \begin{equation}\label{equation:gamma_conjugacy_general} \begin{aligned} p(\tau \mid \bm{A}, \bm{B}, \alpha, \beta) &\propto \mathcal{N}(\bm{A}\mid\bm{B}, \tau^{-1})\times \mathrm{Ga}(\tau\mid \alpha, \beta)\\ &=\prod_{i,j=1}^{M,N} \mathcal{N}(a_{ij}\mid b_{ij}, (\tau)^{-1}) \times \frac{\beta^\alpha}{\Gamma(\alpha)} \tau^{\alpha-1} \exp(-\beta \tau)\\ &\propto \tau^{\frac{MN}{2}}\exp\left\{ -\frac{\tau}{2} \sum_{i,j=1}^{M,N}(a_{ij} - b_{ij} )^2\right\} \cdot \tau^{\alpha-1}\exp(-\beta \tau)\\ &=\tau^{\frac{MN}{2}+\alpha-1} \exp\left\{ -\tau \left(\sum_{i,j=1}^{M,N}\frac{1}{2}(a_{ij} - b_{ij} )^2 +\beta\right)\right\}\\ &\propto \mathrm{Ga}(\tau \mid \widetilde{\alpha}, \widetilde{\beta}),\\ \end{aligned} \end{equation} where \begin{equation}\label{equation:gamma_conju_posterior} \widetilde{\alpha}=\frac{MN}{2}+\alpha, \qquad \widetilde{\beta}= \sum_{i,j=1}^{M,N}\frac{1}{2}(a_{ij} - b_{ij} )^2 +\beta. 
\end{equation} That is, the posterior density of the precision $\tau$ follows a Gamma distribution. \paragraph{Joint conjugate prior for Gaussian mean and precision.} Going further, suppose the precision parameter of the Gaussian distribution is not fixed either, with $x_1, x_2, \ldots, x_N$ drawn i.i.d. from a normal distribution with mean $\theta$ and precision $\lambda$. The \textit{normal-Gamma} distribution $\mathcal{NG}(\alpha, \beta, \mu, c)$, with $\mu\in\mathbb{R}$ and $\alpha, \beta, c\in \mathbb{R}_+$, is a joint distribution on $(\theta, \lambda)$ obtained by letting $$ \begin{aligned} \lambda &\sim \mathrm{Ga}(\alpha, \beta); \\ \theta \mid \lambda &\sim \mathcal{N}(\mu, (c\lambda)^{-1}). \end{aligned} $$ That is, the joint p.d.f. is $$ p(\theta, \lambda ) = \mathcal{N}(\theta \mid \mu, (c\lambda)^{-1})\cdot \mathrm{Ga}(\lambda \mid \alpha, \beta) =\mathcal{NG}(\theta, \lambda\mid \alpha, \beta, \mu, c). $$ It turns out the posterior density is again a normal-Gamma distribution with $$ p(\theta, \lambda \mid \mathcal{X}) \propto \prod_{i=1}^{N} \mathcal{N}(x_i \mid \theta, \lambda^{-1}) \cdot \mathcal{NG}(\theta, \lambda\mid \alpha, \beta, \mu, c) \propto \mathcal{NG}(\theta, \lambda \mid \widetilde{\alpha},\widetilde{\beta}, \widetilde{\mu}, \widetilde{c}), $$ where $$ \begin{aligned} \widetilde{\mu} &= \frac{c\mu +\sum_{i=1}^{N}x_i}{c+N} , \gap &\widetilde{c} &= c+N, \\ \widetilde{\alpha}&= \alpha+ \frac{N}{2}, \gap &\widetilde{\beta} &= \beta + \frac{1}{2} \bigg(c\mu^2 -\widetilde{c}\widetilde{\mu}^2 + \sum_{i=1}^{N}x_i^2\bigg). \end{aligned} $$ In contrast to the Normal-Normal, this model is often referred to as the \textit{NormalGamma-Normal model}. The posterior mean for $\theta$ is a weighted average of the prior mean and the sample mean, $$ \widetilde{\mu} = \frac{c\mu +\sum_{i=1}^{N}x_i}{c+N}= \frac{c}{c+N} \mu + \frac{N}{c+N} \overline{x}, $$ where $\overline{x} = \frac{1}{N}\sum_{i=1}^{N}x_i$. From the posterior form of $\widetilde{c}$, the prior parameter $c$ can be interpreted as a prior sample size for estimating the mean parameter $\theta$. The posterior shape parameter $\widetilde{\alpha}$ grows linearly with the sample size. The posterior rate parameter $\widetilde{\beta}$ can be written as $$ \widetilde{\beta} = \beta + \frac{1}{2} \bigg(c\mu^2 -\widetilde{c}\widetilde{\mu}^2 + \sum_{i=1}^{N}x_i^2\bigg) = \beta + \frac{1}{2}\sum_{i=1}^{N}(x_i - \overline{x})^2 + \frac{1}{2} \frac{cN}{c+N} (\overline{x}-\mu)^2. $$ In other words, it is decomposed into a prior variation, an observed variation (sample variance), and the variation between the prior mean and the sample mean: $$ \widetilde{\beta}=\text{(prior variation)} + \frac{1}{2}N \text{(observed variation)} + \frac{1}{2}\frac{cN}{c+N} \text{(variation between means)}. $$ \index{Inverse-Gamma distribution} Putting a Gamma prior over the inverse variance of a Gaussian distribution is equivalent to putting an inverse-Gamma prior on the \textit{variance}. We now give the formal definition of the inverse-Gamma distribution. \begin{definition}[Inverse-Gamma Distribution]\label{definition:inverse_gamma_distribution} A random variable ${\textnormal{x}}$ is said to follow the inverse-Gamma distribution with shape parameter $r>0$ and scale parameter $\lambda>0$, denoted by ${\textnormal{x}}\sim \mathrm{IG}(r, \lambda)$, if $$ f(x; r, \lambda)=\left\{ \begin{aligned} &\frac{\lambda^r}{\Gamma(r)} x^{-r-1} \exp(- \frac{\lambda}{x} ) ,& \mathrm{\,\,if\,\,} x > 0; \\ &0 , &\mathrm{\,\,if\,\,} x \leq 0. \end{aligned} \right.
$$ And it is denoted by ${\textnormal{x}} \sim \mathrm{IG}(r, \lambda)$. The mean and variance of inverse-gamma distribution are given by $$ \mathrm{E}[{\textnormal{x}}]=\left\{ \begin{aligned} &\frac{\lambda}{r-1}, \, &\mathrm{if\,} r\geq 1; \\ &\infty, \, &\mathrm{if\,} 0<r<1. \end{aligned} \right.\qquad \mathrm{Var}[{\textnormal{x}}]=\left\{ \begin{aligned} &\frac{\lambda^2}{(r-1)^2(r-2)}, \, &\mathrm{if\,} r> 2; \\ &\infty, \, &\mathrm{if\,} 0<r\leq 2. \end{aligned} \right. $$ Figure~\ref{fig:dists_inversegamma} compares different parameters $r, \lambda$ for the inverse-Gamma distribution. \end{definition} If ${\textnormal{x}}$ is Gamma distributed, then ${\textnormal{y}}=1/{\textnormal{x}}$ is inverse-Gamma distributed. Note that the inverse-Gamma density is not simply the Gamma density with $x$ replaced by $\frac{1}{y}$. There is an additional factor of $y^{-2}$. \footnote{Which is from the \textit{Jacobian in the change-of-variables formula}. A short proof is provided here. Let $y=\frac{1}{x}$ where $y\sim \mathrm{IG}(r, \lambda)$ and $x\sim \mathrm{Ga}(r, \lambda)$. Then, $f(y) |dy| = f(x) |dx|$ which results in $f(y) = f(x) \abs{\frac{dx}{dy}} = f(x)x^2 \xlongequal{ \mathrm{y}=\frac{1}{x}} \frac{\lambda^r}{\Gamma(r)} y^{-r-1} \exp(- \frac{\lambda}{y})$ for $y>0$. } The inverse-Gamma distribution is useful as a prior for positive parameters. It imparts a quite heavy tail and keeps probability further from zero than the Gamma distribution (see examples in Figure~\ref{fig:dists_inversegamma}). \paragraph{Conjugate prior for variance of a Gaussian distribution.} The inverse-Gamma distribution is a conjugate prior of the variance parameter of a Gaussian distribution with fixed mean parameter. To see this, let the likelihood be $p(\bm{A} \mid \bm{B}, \sigma^2)=\mathcal{N}(\bm{A}\mid\bm{B}, \sigma^2)$ where $\bm{A},\bm{B}\in\mathbb{R}^{M\times N}$ are two matrices containing elements of $a_{mn}, b_{mn}$ respectively (again the result can be applied to vector or scalar cases), the prior of $\sigma^2$ be $p(\sigma^2)=\mathrm{IG}(\sigma^2 \mid \alpha, \beta)$. Using Bayes' theorem, it can be shown that \begin{equation}\label{equation:inverse_gamma_conjugacy_general} \begin{aligned} p(\sigma^2 \mid \bm{A}, \bm{B}, \alpha, \beta) &\propto \mathcal{N}(\bm{A}\mid \bm{B}, \sigma^2)\times \mathrm{IG}(\sigma^2 \mid \alpha, \beta)\\ &=\prod_{i,j=1}^{M,N} \mathcal{N}(a_{ij}\mid b_{ij}, \sigma^2) \times \frac{\beta^\alpha}{\Gamma(\alpha)} (\sigma^2)^{-\alpha-1} \exp(-\frac{\beta}{\sigma^2})\\ &\propto \frac{1}{\sigma^{MN}}\exp\left\{ -\frac{1}{2\sigma^2} \sum_{i,j=1}^{M,N}(a_{ij} - b_{ij} )^2\right\} \cdot (\sigma^2)^{-\alpha-1}\exp(-\frac{\beta}{\sigma^2})\\ &=(\sigma^2)^{-\frac{MN}{2}-\alpha-1} \exp\left\{ -\frac{1}{\sigma^2} \left(\sum_{i,j=1}^{M,N}\frac{1}{2}(a_{ij} - b_{ij} )^2 +\beta\right)\right\}\\ &\propto \mathrm{IG}(\sigma^2 \mid \widetilde{\alpha}, \widetilde{\beta}),\\ \end{aligned} \end{equation} where \begin{equation}\label{equation:inversegamma_conjugate_posterior} \widetilde{\alpha}=\frac{MN}{2}+\alpha, \qquad \widetilde{\beta}= \sum_{i,j=1}^{M,N}\frac{1}{2}(a_{ij} - b_{ij} )^2 +\beta. \end{equation} That is, the posterior density of variance $\sigma^2$ is also an inverse-Gamma distribution. And as claimed, we find the posterior parameters in Equation~\eqref{equation:inversegamma_conjugate_posterior} are exactly the same as that in Equation~\eqref{equation:gamma_conju_posterior} from a Gamma prior. 
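Since the updates in Equations~\eqref{equation:gamma_conju_posterior} and~\eqref{equation:inversegamma_conjugate_posterior} coincide, they are simple to evaluate directly; the following Python sketch (with an arbitrary toy data matrix and the weak prior $\alpha=\beta=1$) computes the posterior parameters and the posterior mean $\widetilde{\beta}/(\widetilde{\alpha}-1)$ of $\sigma^2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
M, N = 20, 30
B = rng.normal(size=(M, N))                      # fixed "mean" matrix (toy)
A = B + rng.normal(scale=0.5, size=(M, N))       # observations, true sigma^2 = 0.25

alpha0, beta0 = 1.0, 1.0                         # weak inverse-Gamma prior
alpha_post = M * N / 2 + alpha0                  # posterior shape
beta_post = 0.5 * np.sum((A - B) ** 2) + beta0   # posterior scale

print(beta_post / (alpha_post - 1))              # E[sigma^2 | A, B], close to 0.25
\end{verbatim}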
\index{Normal-inverse-Gamma distribution} As we have seen that the normal-Gamma density is a joint conjugate prior for the mean and precision parameters of a Gaussian distribution. The \textit{normal-inverse-Gamma (NIG)} distribution defined as follows is a joint conjugate prior for the mean and variance parameters of a Gaussian distribution. \begin{definition}[Normal-Inverse-Gamma (NIG) Distribution]\label{definition:normal_inverse_gamma} The joint density of normal-inverse-Gamma distribution is a density defined as \begin{equation} \begin{aligned} &\gap \mathcal{NIG} (\mu, \sigma^2 \mid m, \kappa, r, \lambda) = \mathcal{N} (\mu\mid m, \frac{\sigma^2}{\kappa}) \cdot \mathrm{IG} (\sigma^2 \mid r, \lambda) \\ &=\frac{1}{Z_{\mathcal{NIG}}(\kappa, r, \lambda)} (\sigma^2)^{-\frac{2r +3}{2}} \exp\left\{-\frac{1}{2 \sigma^2}\left[\kappa(m-\mu)^2 + 2\lambda \right] \right\}, \\ \end{aligned} \label{equation:uni_gaussian_prior-nig} \end{equation} where $\sigma^2, r, \lambda>0$, and $Z_{\mathcal{NIG}}(\kappa, r, \lambda)$ is a normalizing constant: \begin{equation}\label{equation:uni_gaussian_giw_constant-nig} Z_{\mathcal{NIG}}(\kappa, r, \lambda) = \frac{\Gamma(r)}{\lambda^{r}} \sqrt{\frac{2\pi}{\kappa}}. \end{equation} Figure~\ref{fig:dists_normalinversegamma_s} shows some normal-inverse-Gamma probability density functions by varying different parameters. \end{definition} \paragraph{Joint conjugate prior for the Gaussian mean and variance (under NIG).} The normal-inverse-Gamma defines an equivalent prior over the mean and variance parameter of a Gaussian distribution as the normal-Gamma prior, but is sometimes more convenient than the latter one. Similar to the normal-Gamma prior, when the variance and mean parameters of the Gaussian distribution are not fixed with $N$ data points $\mathcal{X}=\{x_1, x_2, \ldots, x_N\}$ drawn i.i.d. from a normal distribution with mean $\mu$ and variance $\sigma^2$. The normal-inverse-Gamma $\mathcal{NIG}(m_0, \kappa_0, r_0, \lambda_0)$ with $m_0\in\mathbb{R}$ and $r_0, \lambda_0, \kappa_0\in\mathbb{R}_+$ is a joint distribution on $\mu, \sigma^2$ by letting $$ \begin{aligned} \sigma^2 &\sim \mathrm{IG}(r_0, \lambda_0);\\ \mu \mid \sigma^2 &\sim \mathcal{N}(m_0, \frac{\sigma^2}{\kappa_0}). \end{aligned} $$ With this prior, $\mu$ and $\sigma^2$ decouple, and the posterior conditional densities of $\mu$ and $\sigma^2$ are Gaussian and inverse-Gamma respectively. The joint p.d.f of NIG prior can be written as $$ p(\mu, \sigma^2) = \mathcal{N}(m_0, \frac{\sigma^2}{\kappa_0}) \cdot \mathrm{IG}(r_0, \lambda_0) = \mathcal{NIG}(\mu, \sigma^2 \mid m_0, \kappa_0, r_0, \lambda_0). 
$$ Again, by Bayes' theorem ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", the posterior of the $\mu$ and $\sigma^2$ parameters under the NIG prior is \begin{equation}\label{equation:conjugate_nigamma_general} \begin{aligned} &\gap p(\mu, \sigma^2\mid \mathcal{X}, \boldsymbol\beta ) \\ &\propto \mathcal{N}(\mathcal{X} \mid \mu, \sigma^2) \cdot \mathcal{NIG}(\mu, \sigma^2 \mid \boldsymbol\beta)\\ &\propto \prod_{i=1}^{N}\mathcal{N}(x_i\mid \mu, \sigma^2) \cdot \mathcal{NIG}(\mu, \sigma^2 \mid m_0, \kappa_0, r_0, \lambda_0)\\ &\stackrel{\star}{=} \frac{C}{(\sigma^2)^{\frac{2r_0 + 3+N}{2}}} \exp\left\{-\frac{1}{2 \sigma^2} \left[ N(\overline{x} - \mu)^2 + S_{\overline{x}} \right] \right\} \exp\left\{ -\frac{1}{2 \sigma^2} \left[2\lambda_0 + \kappa_0(m_0-\mu)^2\right] \right\}\\ &\propto (\sigma^2)^{-\frac{2r_N + 3}{2}}\exp\left\{ -\frac{1}{2 \sigma^2} \left[ 2\lambda_N + \kappa_N(m_N-\mu)^2\right] \right\}\\ &\propto\mathcal{NIG}(\mu, \sigma^2 \mid m_N, \kappa_{N}, r_N, \lambda_N). \end{aligned} \end{equation} where $\boldsymbol\beta=\{m_0, \kappa_0, r_0, \lambda_0\}$, $C=\frac{(2\pi)^{-N/2}}{Z_{\mathcal{NIG}}(\kappa_0, r_0, \lambda_0)}$, equality $(\star)$ is from Equation~\eqref{equation:uni_gaussian_likelihood}, and $$ \begin{aligned} m_N &= \frac{\kappa_0 m_0 + N\overline{x}}{\kappa_{N}} = \frac{\kappa_0 }{\kappa_{N}}m_0 + \frac{N}{\kappa_{N}}\overline{x},\\ \kappa_{N}&= \kappa_{0} +N,\\ r_N &= r_0 +\frac{N}{2},\\ \lambda_N &=\lambda_0 +\frac{1}{2}(S_{\overline{x}} + N\overline{x}^2 + \kappa_{0} m_0^2 -\kappa_{N}m_N^2)\\ &= \lambda_0+\frac{1}{2}\left(S_{\overline{x}} + \frac{\kappa_0 N }{\kappa_{0}+N} (\overline{x} - m_0)^2\right). \end{aligned} $$ where $S_{\overline{x}}=\sum_{n=1}^N(x_n - \overline{x})^2$ and $\overline{x} = \frac{1}{N} \sum_{i=1}^{N}x_i$. Note that in the above derivation, we use the form of the Gaussian likelihood in Equation~\eqref{equation:uni_gaussian_likelihood}. We will discuss the posterior marginal likelihood in the \textit{normal-inverse-Chi-squared (NIX)} case. Further discussion on the posterior marginal likelihood for the NIG prior can be found in \citet{murphy2007conjugate}. We leave this to the readers as it is rather similar to that under the NIX prior. \begin{figure}[htp] \centering \vspace{-0.55cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Contour plot of normal-inverse-Gamma density by varying parameter $r$ (\textcolor{brightlavender}{purple}=low, \textcolor{mydarkyellow}{yellow}=high).
]{\label{fig:dists_normalinversegamma_varyingR} \includegraphics[width=0.955\linewidth]{./imgs/dists_normalinversegamma_varyingR.pdf}} \subfigure[Contour plot of normal-inverse-Gamma density by varying parameter $\lambda$ (\textcolor{brightlavender}{purple}=low, \textcolor{mydarkyellow}{yellow}=high).]{\label{fig:dists_normalinversegamma_varyingLmabda} \includegraphics[width=0.955\linewidth]{./imgs/dists_normalinversegamma_varyingLmabda.pdf}} \subfigure[Contour plot of normal-inverse-Gamma density by varying parameter $\kappa$ (\textcolor{brightlavender}{purple}=low, \textcolor{mydarkyellow}{yellow}=high).]{\label{fig:dists_normalinversegamma_varyingKappa} \includegraphics[width=0.955\linewidth]{./imgs/dists_normalinversegamma_varyingKappa.pdf}} \subfigure[Contour plot of normal-inverse-Gamma density by varying parameter $m$ (\textcolor{brightlavender}{purple}=low, \textcolor{mydarkyellow}{yellow}=high).]{\label{fig:dists_normalinversegamma_varyingaM} \includegraphics[width=0.955\linewidth]{./imgs/dists_normalinversegamma_varyingaM.pdf}} \caption{Normal-inverse-Gamma probability density functions by varying different parameters.} \label{fig:dists_normalinversegamma_s} \end{figure} \index{Chi-squared distribution} Another distribution that is closely related to the Gamma distribution is called the \textit{Chi-squared} distribution which is extensively used in the distribution theory of linear models \citep{lu2021rigorous}. The rigorous definition is given as follows. \begin{definition}[Chi-Squared Distribution]\label{definition:chisquare_distribution} Let $\bm{A} \sim \mathcal{N}(\mathbf{0}, \bm{I}_{p})$ where $\bm{I}_p$ is a $p\times p$ identity matrix. Then ${\textnormal{x}}=\sum_i^p a_{ii}$ follows the Chi-squared distribution with $p$ \textbf{degrees of freedom}. We write ${\textnormal{x}} \sim \chi^2(p)$, and we can see this is equivalent to ${\textnormal{x}}\sim \mathrm{Ga}(p/2, 1/2)$: $$ f(x; p)=\left\{ \begin{aligned} &\frac{1}{2^{p/2}\Gamma(\frac{p}{2})} x^{\frac{p}{2}-1} \exp(-\frac{x}{2}) ,& \mathrm{\,\,if\,\,} x \geq 0; \\ &0 , &\mathrm{\,\,if\,\,} x <0. \end{aligned} \right. $$ The mean, variance of ${\textnormal{x}}\sim \chi^2(p)$ are given by $$ \mathrm{E}[{\textnormal{x}}]=p, \qquad \mathrm{Var}[{\textnormal{x}}]=2p. $$ Figure~\ref{fig:dists_chisquared} compares different \text{degrees of freedom} $p$ for the Chi-squared distribution. \end{definition} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Chi-squared distribution.]{\label{fig:dists_chisquared} \includegraphics[width=0.481\linewidth]{./imgs/dists_chisquared.pdf}} \subfigure[Inverse-Chi-squared distribution.]{\label{fig:dist_inversechisquared} \includegraphics[width=0.481\linewidth]{./imgs/dists_inversechisquared.pdf}} \caption{Chi-squared and inverse-Chi-squared probability density functions for different values of parameters.} \label{fig:dists_chi_inversechi} \end{figure} The analog of the inverse-Gamma distribution is known as the \textit{inverse-Chi-squared} distribution. Following the definition of the inverse-Gamma distribution in Definition~\ref{definition:inverse_gamma_distribution}, we provide the rigorous definition of the inverse-Chi-squared distribution as follows. 
\begin{definition}[Inverse-Chi-Squared Distribution]\label{definition:inverse-chi-square} A random variable ${\textnormal{x}}$ is said to follow the inverse-Chi-squared distribution with parameter $\nu>0$ and $s^2>0$, denoted by ${\textnormal{x}}\sim \mathrm{IG}(\frac{\nu}{2}, \frac{\nu s^2}{2})$ if $$ f(x; \nu, s^2)=\left\{ \begin{aligned} &\frac{{(\frac{\nu s^2}{2})}^{\frac{\nu}{2}}}{\Gamma(\frac{\nu}{2})} x^{-\frac{\nu}{2}-1} \exp(- \frac{\nu s^2}{2x} ) ,& \mathrm{\,\,if\,\,} x > 0; \\ &0 , &\mathrm{\,\,if\,\,} x \leq 0. \end{aligned} \right. $$ And it is denoted by ${\textnormal{x}} \sim \mathrm{\chi^{-2}}(\nu, s^2)$. The parameter $\nu >0$ is called the \textbf{degrees of freedom}, and $s^2 > 0$ is the \textbf{scale parameter}. And it is also known as the \textbf{scaled} inverse-Chi-squared distribution. The mean and variance of the inverse-Chi-squared distribution are given by $$ \mathrm{E}[{\textnormal{x}}]=\left\{ \begin{aligned} &\frac{\nu s^2}{\nu-2}, \, &\mathrm{if\,\,} \nu\geq 2; \\ &\infty, \, &\mathrm{if\,\,} 0<\nu<2. \end{aligned} \right.\qquad \mathrm{Var}[{\textnormal{x}}]=\left\{ \begin{aligned} &\frac{2\nu^2 s^4}{(\nu-2)^2(\nu-4)}, \, &\mathrm{if\,\,} \nu\geq 4; \\ &\infty, \, &\mathrm{if\,\,} 0<\nu<4. \end{aligned} \right. $$ To make a connection to the inverse-Gamma distribution, we can set $S=\nu s^2$. Then the inverse-Chi-squared distribution can also be denoted by ${\textnormal{x}}\sim \mathrm{IG}(\frac{\nu}{2}, \frac{S}{2})$ if ${\textnormal{x}} \sim \mathrm{\chi^{-2}}(\nu, s^2)$ the form of which conforms to the univariate case of the inverse-Wishart distribution (Definition~\ref{definition:multi_inverse_wishart}, p.~\pageref{definition:multi_inverse_wishart}). And we will see the similarity in the posterior parameters too. Figure~\ref{fig:dist_inversechisquared} compares different parameters $\nu, s^2$ for the inverse-Chi-squared distribution. \end{definition} \begin{exercise}[Conjugate prior for variance of a Gaussian distribution] Show that the inverse-Chi-squared distribution is a conjugate prior for the Gaussian variance parameter when the mean parameter is fixed. Hint: the derivation is just the same as that in the inverse-Gamma case. \end{exercise} \index{Normal-inverse-Chi-squared distribution} As we have seen the normal-inverse-Gamma distribution is a joint conjugate prior for Gaussian mean and variance parameters. The \textit{normal-inverse-Chi-squared (NIX)} distribution defined as follows is an alternative joint conjugate prior. 
\begin{definition}[Normal-Inverse-Chi-Squared (NIX) Distribution]\label{definition:normal_inverse_chi_square}
Similar to the normal-inverse-Gamma distribution, the normal-inverse-Chi-squared (NIX) distribution is defined as (where again we set $S=\nu s^2$ as in the inverse-Chi-squared distribution to make a connection to the normal-inverse-Gamma density)
\begin{equation}
\begin{aligned}
&\gap \mathcal{NIX} (\mu, \sigma^2 \mid m, \kappa, \nu, S)
= \mathcal{N} (\mu\mid m, \frac{\sigma^2}{\kappa}) \cdot \mathrm{\chi^{-2}} (\sigma^2 \mid \nu, s^2) \\
&=\frac{1}{Z_{\mathcal{NIX}}(\kappa, \nu, s^2)} (\sigma^2)^{-(\nu/2 + 3/2)} \exp\left\{ -\frac{1}{2 \sigma^2} \left[\nu s^2 + \kappa(m-\mu)^2\right] \right\} \\
&\xlongequal{S = \nu s^2} \frac{1}{Z_{\mathcal{NIX}}(\kappa, \nu, s^2)} (\sigma^2)^{-(\nu/2 + 3/2)} \exp\left\{ -\frac{1}{2 \sigma^2} \left[S + \kappa(m-\mu)^2\right] \right\}
\end{aligned}
\label{equation:uni_gaussian_prior}
\end{equation}
where $\sigma^2, \nu, s^2>0$, and $Z_{\mathcal{NIX}}(\kappa, \nu, s^2)$ is a normalizing constant:
\begin{equation}\label{equation:uni_gaussian_giw_constant}
Z_{\mathcal{NIX}}(\kappa, \nu, s^2) = \Gamma\big(\frac{\nu}{2}\big) \big(\frac{2}{\nu s^2}\big)^{\nu/2} \sqrt{\frac{2\pi}{\kappa}} = \Gamma\big(\frac{\nu}{2}\big) \big(\frac{2}{S}\big)^{\nu/2} \sqrt{\frac{2\pi}{\kappa}}.
\end{equation}
The normal-inverse-Chi-squared distribution can also be denoted by ${\textnormal{x}}\sim \mathcal{NIG}(m, \kappa, \frac{\nu}{2}, \frac{S}{2})$ if ${\textnormal{x}} \sim \mathcal{NIX}(m, \kappa, \nu, s^2)$, the form of which conforms to the univariate case of the normal-inverse-Wishart distribution (Equation~\eqref{equation:multi_gaussian_prior}, p.~\pageref{equation:multi_gaussian_prior}). And we will see the similarity in the posterior parameters as well.
\end{definition}
\paragraph{Joint conjugate prior for the Gaussian mean and variance (under NIX).}
Similar to the normal-inverse-Gamma prior, suppose now that both the mean and variance parameters of the Gaussian distribution are unknown, and we observe $N$ data points $\mathcal{X}=\{x_1, x_2, \ldots, x_N\}$ drawn i.i.d. from a normal distribution with mean $\mu$ and variance $\sigma^2$. The normal-inverse-Chi-squared prior $\mathcal{NIX}(m_0, \kappa_0, \nu_0, S_0=\nu_0\sigma_0^2)$, with $m_0\in\mathbb{R}$ and $\kappa_0, \nu_0, S_0\in\mathbb{R}_+$, places a joint distribution on $(\mu, \sigma^2)$ by letting
$$
\begin{aligned}
\sigma^2 &\sim \mathrm{\chi^{-2}}(\nu_0, \sigma_0^2);\\
\mu \mid \sigma^2 &\sim \mathcal{N}(m_0, \frac{\sigma^2}{\kappa_0}).
\end{aligned}
$$
Again, by Bayes' theorem ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", the conditional posterior of the $\mu$ and $\sigma^2$ parameters under the NIX prior is
\begin{equation}\label{equation:nix-posterior}
\begin{aligned}
&\gap p(\mu, \sigma^2\mid \mathcal{X}, \boldsymbol\beta ) \\
&\propto p(\mathcal{X} \mid \mu, \sigma^2) p(\mu, \sigma^2 \mid \boldsymbol\beta) = p(\mathcal{X}, \mu, \sigma^2 \mid \boldsymbol\beta)\\
&= \frac{C}{(\sigma^2)^{\frac{\nu_0 + 3+N}{2}}} \exp\left\{ -\frac{1}{2 \sigma^2} \left[ N(\overline{x} - \mu)^2 + N S_{\overline{x}} \right] \right\} \exp\left\{ -\frac{1}{2 \sigma^2} \left[S_0 + \kappa_0(m_0-\mu)^2\right] \right\}\\
&= C\times (\sigma^2)^{-\frac{\nu_N + 3}{2}}\exp\left\{ -\frac{1}{2 \sigma^2} \left[ S_N + \kappa_N(m_N-\mu)^2\right] \right\}\\
&\propto\mathcal{NIX}(\mu, \sigma^2\mid m_N, \kappa_{N}, \nu_N, \textcolor{blue}{S_N}) = \mathcal{N} (\mu\mid m_N, \frac{\sigma^2}{\kappa_N}) \cdot \mathrm{\chi^{-2}} (\sigma^2 \mid \nu_N, \textcolor{blue}{\sigma^2_N}),
\end{aligned}
\end{equation}
where $\boldsymbol\beta=\{m_0, \kappa_0, \nu_0, S_0=\nu_0\sigma_0^2\}$, $C=\frac{(2\pi)^{-N/2}}{Z_{\mathcal{NIX}}(\kappa_0, \nu_0, \sigma^2_0)}$, and
$$
\begin{aligned}
m_N &= \frac{\kappa_0 m_0 + N\overline{x}}{\kappa_{N}} = \frac{\kappa_0 }{\kappa_{N}}m_0 + \frac{N}{\kappa_{N}}\overline{x},\\
\kappa_{N}&= \kappa_{0} +N,\\
\nu_N &= \nu_0 +N,\\
S_N &= S_0 +NS_{\overline{x}} + N\overline{x}^2 + \kappa_{0} m_0^2 -\kappa_{N}m_N^2\\
&=S_0 +NS_{\overline{x}} + \frac{\kappa_0 N }{\kappa_{0}+N} (\overline{x} - m_0)^2,\\
\nu_N \sigma_N^2 &= S_N \qquad\underrightarrow{ \text{leads to} }\qquad \sigma_N^2 = \frac{S_N}{\nu_N} .
\end{aligned}
$$
Then the posterior density is a normal-inverse-Chi-squared density.
\footnote{This posterior shares the same form as that in the multivariate case from Equation~\eqref{equation:niw_posterior_equation_1} except the $N$ in $NS_{\overline{x}}$, which results from the difference between the multivariate Gaussian distribution and the univariate Gaussian distribution. Similarly, in the inverse-Chi-squared language, we can show that $\nu_N \sigma_N^2 = S_N$.}
Suppose $\nu_0> 2$ or $N\geq 2$ such that $\nu_N> 2$; then the posterior expectations are given by
$$
\mathrm{E}[\mu \mid \mathcal{X}, \boldsymbol\beta] = m_N, \qquad \mathrm{E}[\sigma^2 \mid \mathcal{X}, \boldsymbol\beta] = \frac{S_N}{\nu_N-2}.
$$
\paragraph{Marginal posterior of $\sigma^2$.}
Integrating out $\mu$ in the posterior, we have
$$
\begin{aligned}
p(\sigma^2 \mid \mathcal{X}, \boldsymbol\beta) &= \int_{\mu} p(\mu, \sigma^2 \mid \mathcal{X}, \boldsymbol\beta) d \mu \\
&= \int_{\mu } \mathcal{N} (\mu\mid m_N, \frac{\sigma^2}{\kappa_N}) \cdot \mathrm{\chi^{-2}} (\sigma^2 \mid \nu_N, \sigma^2_N)d\mu \\
&= \mathrm{\chi^{-2}} (\sigma^2\mid \nu_N, \sigma^2_N),
\end{aligned}
$$
which is just an integral over a Gaussian distribution.
\paragraph{Marginal posterior of $\mu$.}
Integrating out $\sigma^2$ in the posterior, we have
$$
\begin{aligned}
p(\mu \mid \mathcal{X}, \boldsymbol\beta) &= \int_{\sigma^2} p(\mu, \sigma^2 \mid \mathcal{X}, \boldsymbol\beta) d \sigma^2 \\
&= \int_{\sigma^2 } \mathcal{N} (\mu\mid m_N, \frac{\sigma^2}{\kappa_N}) \cdot \mathrm{\chi^{-2}} (\sigma^2 \mid \nu_N, \sigma^2_N)d\sigma^2 \\
&= \int_{\sigma^2}C(\sigma^2)^{-\frac{\nu_N + 3}{2}} \exp\left \{-\frac{1}{2 \sigma^2} \left[ S_N + \kappa_N(m_N-\mu)^2\right] \right\}d\sigma^2.
\end{aligned}
$$
Let $\phi = \sigma^2$ and $\alpha = (\nu_N+1)/2$, $A = S_N + \kappa_N(m_N-\mu)^2$, and $x = \frac{A}{2\phi}$; then we have
$$
\frac{d \phi}{d x} = -\frac{A}{2}x^{-2},
$$
where $A$ can be easily verified to be positive and $\phi=\sigma^2>0$. It follows that
\begin{equation*}
\begin{aligned}
p(\mu \mid \mathcal{X}, \boldsymbol\beta)
&=\int_{0}^{\infty} C(\phi)^{-\alpha-1} \exp\left(-\frac{A}{2 \phi} \right)d\phi\\
&=\int_{\textcolor{black}{\infty}}^{\textcolor{black}{0}} C(\frac{A}{2x})^{-\alpha-1} \exp\left( -x\right) ( \textcolor{black}{-}\frac{A}{2}x^{-2}) dx \qquad &\text{(since $x=\frac{A}{2\phi}$)}\\
&=\int_{\textcolor{black}{0}}^{\textcolor{black}{\infty}} C(\frac{A}{2x})^{-\alpha-1} \exp\left( -x\right) ( \frac{A}{2}x^{-2}) dx\\
&= (\frac{A}{2})^{-\alpha} \int_{x} Cx^{\alpha-1} \exp\left( -x\right) dx \\
&= (\frac{A}{2})^{-\alpha} (C\cdot \Gamma(\alpha)) \int_{x}\mathrm{Ga}(x\mid \alpha, 1) dx\qquad &\text{(see Definition~\ref{definition:gamma-distribution})}\\
&\propto C\left[ \nu_N\sigma_N^2 +\kappa_{N}(m_N-\mu)^2 \right]^{-\frac{\nu_N+1}{2}}\\
&\overset{(a)}{=} C (\nu_N\sigma_N^2)^{-\frac{\nu_N+1}{2}} \left[ 1 +\frac{\kappa_{N}}{\nu_N\sigma_N^2}(m_N-\mu)^2 \right]^{-\frac{\nu_N+1}{2}}
\end{aligned}
\end{equation*}
We also notice from Equation~\eqref{equation:uni_gaussian_giw_constant} that the normalizing constant of the posterior density $\mathcal{NIX}(\mu, \sigma^2\mid m_N, \kappa_{N}, \nu_N, S_N)$ in Equation~\eqref{equation:nix-posterior} satisfies
$$
\frac{1}{Z_{\mathcal{NIX}}(\kappa_N, \nu_N, \sigma^2_N)} = \frac{1}{\frac{\sqrt{(2\pi)}}{\sqrt{\kappa_N}} \Gamma(\frac{\nu_N}{2}) (\frac{2}{\nu_N \sigma^2_N})^{\nu_N/2}} \overset{(b)}{\propto} \sqrt{\kappa_N}\,(\nu_N \sigma^2_N)^{\nu_N/2}.
$$
Combining (a) and (b) above (the remaining factors do not depend on $\mu$), we obtain
$$
p(\mu\mid \mathcal{X}, \boldsymbol\beta) \propto \frac{1}{\sigma_N/\sqrt{\kappa_N}} \left[ 1 +\frac{\kappa_{N}}{\nu_N\sigma_N^2}(\mu-m_N)^2 \right]^{-\frac{\nu_N+1}{2}} \propto \tau(\mu\mid m_N, \sigma_N^2/\kappa_N, \nu_N),
$$
which is a univariate Student's $t$ distribution (Definition~\ref{equation:student_t_dist}, p.~\pageref{equation:student_t_dist}).
\paragraph{Marginal likelihood of data.}
By Equation~\eqref{equation:nix-posterior}, we can obtain the marginal likelihood of the data under the hyperparameters $\boldsymbol\beta=\{ m_0, \kappa_0, \nu_0, S_0=\nu_0\sigma_0^2\}$:
$$
\begin{aligned}
p(\mathcal{X} \mid \boldsymbol\beta)
&= \int_{\mu} \int_{\sigma^2} p(\mathcal{X}, \mu, \sigma^2 \mid\boldsymbol\beta) d\mu d\sigma^2\\
&=\frac{(2\pi)^{-N/2}}{Z_{\mathcal{NIX}}(\kappa_0, \nu_0, \sigma^2_0)} \int_{\mu} \int_{\sigma^2} (\sigma^2)^{-\frac{\nu_N + 3}{2}}\exp\left\{ -\frac{1}{2 \sigma^2} \left[ S_N + \kappa_N(m_N-\mu)^2\right] \right\} d\mu d\sigma^2\\
&= (2\pi)^{-N/2}\frac{Z_{\mathcal{NIX}}(\kappa_N, \nu_N, \sigma^2_N)}{Z_{\mathcal{NIX}}(\kappa_0, \nu_0, \sigma^2_0)} \\
&= (\pi)^{-N/2} \frac{\Gamma(\nu_N/2)}{\Gamma(\nu_0/2)} \sqrt{\frac{\kappa_0}{\kappa_N}} \frac{(\nu_0\sigma^2_0)^{\nu_0/2}}{(\nu_N\sigma^2_N)^{\nu_N/2}}.
\end{aligned}
$$
\paragraph{Posterior predictive for new data with observations.}
Let the number of samples in the data set $\{x^{\star}, \mathcal{X}\}$ be ${N^{\star}} = N+1$. Then we have
\begin{equation}\label{equation:nix-posterior-new-withobser}
\begin{aligned}
&\gap p(x^{\star} \mid\mathcal{X}, \boldsymbol\beta) = \frac{p(x^{\star}, \mathcal{X} \mid \boldsymbol\beta)}{p(\mathcal{X}\mid \boldsymbol\beta)}\\
&=\left\{(2\pi)^{-{N^{\star}}/2}\frac{Z_{\mathcal{NIX}}(\kappa_{N^{\star}}, \nu_{N^{\star}}, \sigma^2_{N^{\star}})}{Z_{\mathcal{NIX}}(\kappa_0, \nu_0, \sigma^2_0)}\right\} \bigg/\left\{(2\pi)^{-N/2}\frac{Z_{\mathcal{NIX}}(\kappa_N, \nu_N, \sigma^2_N)}{Z_{\mathcal{NIX}}(\kappa_0, \nu_0, \sigma^2_0)}\right\}\\
&=(2\pi)^{-1/2} \frac{Z_{\mathcal{NIX}}(\kappa_{N^{\star}}, \nu_{N^{\star}}, \sigma^2_{N^{\star}})}{Z_{\mathcal{NIX}}(\kappa_N, \nu_N, \sigma^2_N)}\\
&=(\pi)^{-1/2} \sqrt{\frac{\kappa_N}{\kappa_{N^{\star}}}} \frac{\Gamma(\frac{\nu_{N^{\star}}}{2})}{\Gamma(\frac{\nu_{N}}{2})} \frac{(\nu_N \sigma_N^2)^{\frac{\nu_N}{2}}}{(\nu_{{N^{\star}}}\sigma_{{N^{\star}}}^2)^{\frac{\nu_{{N^{\star}}}}{2}}}\\
&=\frac{\Gamma(\frac{\nu_{N}+1}{2})}{\Gamma(\frac{\nu_{N}}{2})} \sqrt{\frac{\kappa_N}{(\kappa_{N}+1)} \frac{1}{(\pi\nu_{N}\sigma_{N}^2)}} \left[\frac{(\nu_{{N^{\star}}}\sigma_{{N^{\star}}}^2)}{(\nu_N \sigma_N^2)}\right]^{-\frac{\nu_{N}+1}{2}}.
\end{aligned}
\end{equation}
We realize that
$$
\begin{aligned}
m_N &= \frac{\kappa_{N^{\star}}m_{N^{\star}} - x^\star}{\kappa_N}=\frac{(\kappa_0 + N + 1)m_{N^{\star}} - x^\star}{\kappa_0 + N} , \\
m_{N^{\star}} &= \frac{\kappa_{N} m_N +x^{\star}}{\kappa_{N^{\star}}} = \frac{(\kappa_{0}+N) m_N +x^{\star}}{\kappa_{0}+N+1},\\
S_{N^{\star}} &= S_N + (x^{\star})^2 - \kappa_{N^{\star}} m_{N^{\star}}^2 + \kappa_N m_N^2 \\
&= S_N + \frac{\kappa_N + 1}{\kappa_N}(m_{N^{\star}} - x^\star)^2\\
&=S_N + \frac{\kappa_N }{\kappa_N+ 1}(m_{N} - x^\star)^2.
\end{aligned}
$$
Thus, we have
\begin{equation}\label{equation:nix-substitute-posterior-withobser}
\begin{aligned}
\left[\frac{(\nu_{{N^{\star}}}\sigma_{{N^{\star}}}^2)}{(\nu_N \sigma_N^2)}\right]^{-\frac{\nu_{N}+1}{2}}&= \left(\frac{S_{N^{\star}}}{S_N}\right)^{-\frac{\nu_{N}+1}{2}} =\left[1 + \frac{\kappa_N(m_{N} - x^\star)^2 }{(\kappa_N+ 1)\nu_N\sigma_N^2}\right]^{-\frac{\nu_{N}+1}{2}}.
\end{aligned}
\end{equation}
Substituting Equation~\eqref{equation:nix-substitute-posterior-withobser} into Equation~\eqref{equation:nix-posterior-new-withobser}, it follows that
$$
\begin{aligned}
p(x^{\star} \mid\mathcal{X}, \boldsymbol\beta)
&=\frac{\Gamma(\frac{\nu_{N}+1}{2})}{\Gamma(\frac{\nu_{N}}{2})} \sqrt{\frac{\kappa_N}{(\kappa_{N}+1)} \frac{1}{(\pi\nu_{N}\sigma_{N}^2)}} \left[1 + \frac{\kappa_N(m_{N} - x^\star)^2 }{(\kappa_N+ 1)\nu_N\sigma_N^2}\right]^{-\frac{\nu_{N}+1}{2}}\\
&= \tau(x^\star \mid m_N, \frac{\kappa_{N}+1}{\kappa_{N}}\sigma^2_N, \nu_N ).
\end{aligned} $$ \paragraph{Posterior predictive for new data without observations.} Similarly, we have $$ \begin{aligned} p(x^{\star} \mid \boldsymbol\beta) &= \int_{\mu} \int_{\sigma^2} p(x^{\star}, \mu, \sigma^2 \mid \boldsymbol\beta) d\mu d\sigma^2 \\ &=(2\pi)^{-1/2}\frac{Z_{\mathcal{NIX}}(\kappa_1, \nu_1, \sigma^2_1)}{Z_{\mathcal{NIX}}(\kappa_0, \nu_0, \sigma^2_0)}\\ &=(\pi)^{-1/2} \sqrt{\frac{\kappa_0}{\kappa_{1}}} \frac{\Gamma(\frac{\nu_{1}}{2})}{\Gamma(\frac{\nu_{0}}{2})} \frac{(\nu_0 \sigma_0^2)^{\frac{\nu_0}{2}}}{(\nu_{1}\sigma_{1}^2)^{\frac{\nu_{1}}{2}}}\\ &=\frac{\Gamma(\frac{\nu_{0}+1}{2})}{\Gamma(\frac{\nu_{0}}{2})} \sqrt{\frac{\kappa_0}{(\kappa_{0}+1)} \frac{1}{(\pi\nu_{0}\sigma_{0}^2)}} \left[\frac{(\nu_{1}\sigma_{1}^2)}{(\nu_0 \sigma_0^2)}\right]^{-\frac{\nu_{0}+1}{2}}\\ &=\tau\big(x^\star \mid m_0, \frac{\kappa_{0}+1}{\kappa_{0}}\sigma^2_0, \nu_0 \big). \end{aligned} $$ \section{Exponential and Conjugacy} \index{Exponential distribution} The \textit{exponential} distribution is a probability distribution commonly used in modeling events that happen randomly over time, such as the time elapsed until the occurrence of a certain event, or the time between two consecutive events. It is a special Gamma distribution with support on nonnegative real values. \begin{definition}[Exponential Distribution]\label{definition:exponential_distribution} A random variable ${\textnormal{x}}$ is said to follow the exponential distribution with rate parameter $\lambda>0$, denoted by ${\textnormal{x}} \sim \mathcal{E}(\lambda)$, if $$ f(x; \lambda)=\left\{ \begin{aligned} & \lambda \exp(-\lambda x) ,& \mathrm{\,\,if\,\,} x \geq 0; \\ &0 , &\mathrm{\,\,if\,\,} x <0. \end{aligned} \right. $$ We can see this is equivalent to ${\textnormal{x}}\sim \mathrm{Ga}(1, \lambda)$. The mean and variance of ${\textnormal{x}} \sim \mathcal{E}(\lambda)$ are given by \begin{equation} \mathrm{E}[{\textnormal{x}}] = \lambda^{-1}, \qquad \mathrm{Var}[{\textnormal{x}}] =\lambda^{-2}. \nonumber \end{equation} The support of an exponential distribution is on $(0,\infty)$. Figure~\ref{fig:dists_exponential} compares different parameters $\lambda$ for the exponential distribution. \end{definition} Note that the average $\lambda^{-1}$ is the average time until the occurrence of the event of interest so that $\lambda$ is interpreted as a rate parameter. An important property of the exponential distribution is that it is ``memoryless", meaning that the probability of waiting for an additional amount of time $x$ depends only on $x$, not on the past waiting time. \begin{remark}[Property of Exponential Distribution] Let ${\textnormal{x}}\sim \mathcal{E}(\lambda)$. Then we have $p({\textnormal{x}}\geq x + s \mid {\textnormal{x}} \geq s) = p({\textnormal{x}}\geq x)$. \end{remark} \begin{SCfigure \centering \includegraphics[width=0.5\textwidth]{imgs/dists_exponential.pdf} \caption{Exponential probability density functions for different values of the rate parameter $\lambda$.} \label{fig:dists_exponential} \end{SCfigure} \paragraph{Conjugate prior for the exponential rate parameter.} The Gamma distribution is a conjugate prior of the rate parameter of an exponential distribution. To see this, suppose $\mathcal{X}=\{{\textnormal{x}}_1, {\textnormal{x}}_2, \ldots, {\textnormal{x}}_N\}$ are drawn i.i.d. from an exponential distribution with rate $\lambda$, i.e., the likelihood is $\mathcal{E}(x \mid \lambda)$, and $\lambda$ is given a $\mathrm{Ga}(\alpha_0, \beta_0)$ prior: $\lambda \sim \mathrm{Ga}(\alpha_0, \beta_0)$. 
Using Bayes' theorem, the posterior is
$$
p(\lambda \mid \mathcal{X}) \propto \prod_{i=1}^{N} \mathcal{E}(x_i \mid \lambda) \times \mathrm{Ga}(\lambda\mid \alpha_0, \beta_0) \propto \mathrm{Ga}(\lambda \mid \widetilde{\alpha}, \widetilde{\beta}),
$$
where
\begin{equation}\label{equation:posterior-param-exponenral-gamma}
\widetilde{\alpha}= \alpha_0+N, \qquad \widetilde{\beta} =\beta_0 + \sum_{i=1}^{N} x_i.
\end{equation}
From this posterior form, the prior parameter $\alpha_0$ can be interpreted as the number of prior observations, and $\beta_0$ as the sum of the prior observations. The posterior mean is
$$
\frac{\widetilde{\alpha}}{\widetilde{\beta}} = \frac{\alpha_0+N}{\beta_0 + \sum_{i=1}^{N} x_i}.
$$
\section{Univariate Gaussian-Related Models}
The \textit{truncated-normal (TN)} distribution is a variant of the normal distribution, where the values smaller than zero are excluded. In other words, it is a normal distribution that is ``cut off" at zero. The support of the distribution is the nonnegative reals, so that it can be applied in a nonnegative matrix factorization context.
\index{Truncated-normal distribution}
\begin{definition}[Truncated-Normal (TN) Distribution]\label{definition:truncated_normal}
A random variable ${\textnormal{x}}$ is said to follow the truncated-normal distribution with ``parent" mean $\mu$ and ``parent" precision $\tau>0$, denoted by ${\textnormal{x}} \sim \mathcal{TN}(\mu, \tau^{-1})$, if
$$
f(x; \mu, \tau^{-1})=\left\{
\begin{aligned}
&\frac{\sqrt{\frac{\tau}{2\pi}} \exp \{-\frac{\tau}{2}(x-\mu)^2 \} }{1-\Phi(-\mu\sqrt{\tau})} ,& \mathrm{\,\,if\,\,} x \geq 0; \\
&0 , &\mathrm{\,\,if\,\,} x <0,
\end{aligned}
\right.
$$
where $\Phi(y) = \int_{-\infty}^{y} \mathcal{N}(u\mid 0,1)du= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{y} \exp(-\frac{u^2}{2}) du $ is the cumulative distribution function (c.d.f.) of $\mathcal{N}(0,1)$, the standard normal distribution. Generally, the cumulative distribution function of ${\textnormal{y}}\sim \mathcal{N}(\mu, \sigma^2)$ can be written as
$$
F(y) = p({\textnormal{y}}\leq y) = \Phi\left(\frac{y-\mu}{\sigma}\right) = \Phi\left((y-\mu)\cdot \sqrt{\tau}\right).
\footnote{ Or equivalently, for general Gaussian distribution ${\textnormal{y}}\sim\mathcal{N}(\mu,\sigma^2)$, the c.d.f. is $F(y)=\frac{1}{2}\left\{ 1+\text{erf}\left(\frac{y-\mu}{\sigma\sqrt{2}}\right) \right\}$, where the error function is $\text{erf}(t)=\frac{2}{\sqrt{\pi}} \int_{0}^{t}\exp(-y^2) dy$.}
$$
The mean and variance of ${\textnormal{x}} \sim \mathcal{TN}(\mu, \tau^{-1})$ are given by
$$
\begin{aligned}
\mathrm{E}[{\textnormal{x}}] &= \mu - \frac{1}{\sqrt{\tau}}\cdot \frac{ - \phi(\alpha)}{1 - \Phi(\alpha)}, \\
\mathrm{Var}[{\textnormal{x}}] &= \frac{1}{\tau} \left( 1 + \frac{ \alpha\phi(\alpha)}{1 - \Phi(\alpha)}- \left(\frac{ \phi(\alpha)}{1 - \Phi(\alpha)}\right)^2 \right),
\end{aligned}
$$
where $\phi(y)=\frac{1}{\sqrt{2\pi} } \exp(-\frac{y^2}{2})$ is the p.d.f. of the standard normal distribution, and $\alpha = -\mu\cdot \sqrt{\tau}$ \citep{burkardt2014truncated}. Figure~\ref{fig:dists_truncatednorml} compares different parameters $\mu, \tau$ for the TN distribution. Figure~\ref{fig:dists_truncatednorml_mean} shows the mean value of the TN distribution by varying $\mu$ given fixed $\tau$; we can see that as $\mu\rightarrow -\infty$, the mean approaches zero.
\end{definition}
\paragraph{Conjugate prior for the nonnegative mean parameter of a Gaussian.}
We have previously shown that a Gaussian distribution is a conjugate prior for the mean parameter of another Gaussian distribution when the variance is fixed. The truncated-normal distribution is also a conjugate prior for the \textbf{nonnegative} mean parameter of a Gaussian distribution when the variance is fixed. To see this, suppose $\mathcal{X}=\{x_1, x_2, \ldots, x_N\}$ are drawn i.i.d. from a normal distribution with mean $\theta$ and precision $\tau$, i.e., the likelihood is $\mathcal{N}(x \mid \theta, \tau^{-1})$ where the variance $\sigma^2=\tau^{-1}$ is fixed, and $\theta$ is given a $\mathcal{TN}(\mu_0, \tau^{-1}_0)$ prior: $\theta \sim \mathcal{TN}(\mu_0, \tau_0^{-1})$. Using Bayes' theorem, the posterior is
\begin{equation}\label{equation:conjugate_truncated_nonnegative_mean}
\begin{aligned}
&\gap p(\theta \mid \mathcal{X}) \propto \prod_{i=1}^{N} \mathcal{N}(x_i \mid \theta, \tau^{-1}) \times \mathcal{TN}(\theta \mid \mu_0, \tau_0^{-1})\\
&\propto \exp\left\{ -\frac{\tau_0+ N\tau}{2} \theta^2+ \big(\tau \sum_{i=1}^{N}x_i + \tau_0 \mu_0\big)\theta \right\}\cdot u(\theta)\\
&\propto \mathcal{TN}(\theta \mid \widetilde{\mu}, \widetilde{\tau}^{-1}),
\end{aligned}
\end{equation}
where $u(y)$ is the step function with value 1 if $y\geq 0$ and value 0 if $y<0$, and
$$
\widetilde{\mu}= \frac{\tau_0 \mu_0 + \tau \sum_{i=1}^{N} x_i}{ \tau_0 + N\tau}, \qquad \widetilde{\tau} = \tau_0 + N \tau.
$$
The posterior parameters are exactly the same as those in the Normal-Normal model (Equation~\eqref{equation:posterior-param-normal-normal}), and the posterior ``parent" mean can also be written as a weighted mean of $\mu_0$ and $\overline{x}$.
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Truncated-normal.]{\label{fig:dists_truncatednorml}
\includegraphics[width=0.481\linewidth]{./imgs/dists_truncatednorml.pdf}}
\subfigure[General-truncated-normal.]{\label{fig:dists_generaltruncatednorml}
\includegraphics[width=0.481\linewidth]{./imgs/dists_generaltruncatednorml.pdf}}
\caption{Truncated-normal and general-truncated-normal probability density functions for different values of the parameters $\mu$ and $\tau$.}
\label{fig:dists_truncatednorml_and_general}
\end{figure}
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Truncated-normal.]{\label{fig:dists_truncatednorml_mean}
\includegraphics[width=0.481\linewidth]{./imgs/dists_truncatednorml_mean.pdf}}
\subfigure[General-truncated-normal.]{\label{fig:dists_generaltruncatednorml_mean}
\includegraphics[width=0.481\linewidth]{./imgs/dists_generaltruncatednorml_mean.pdf}}
\caption{Mean of truncated-normal and general-truncated-normal distribution by varying $\mu, \tau, a$, and $b$ parameters.}
\label{fig:mean_truncatednorml_and_general}
\end{figure}
Going further from the truncated-normal distribution, the \textit{general-truncated-normal (GTN)} distribution is also a variant of the normal distribution, where the values outside a certain range are excluded. In other words, it is a normal distribution that is ``cut off" at some specified lower and/or upper bound. The range between the lower and upper bound is called the support of the distribution.
\footnote{In some contexts, the general-truncated-normal distribution is named the truncated-normal directly.
We here differentiate the two as the definitions describe.}
\index{General-truncated-normal distribution}
\begin{definition}[General-Truncated-Normal (GTN) Distribution]\label{definition:general_truncated_normal}
A random variable ${\textnormal{x}}$ is said to follow the general-truncated-normal distribution with ``parent" mean $\mu$ and ``parent" precision $\tau>0$, denoted by ${\textnormal{x}} \sim \mathcal{GTN}(\mu, \tau^{-1}, \textcolor{blue}{a, b})$, if
$$
f(x; \mu, \tau^{-1}, a, b)=\left\{
\begin{aligned}
&0, & \mathrm{\,\,if\,\,} x < a; \\
&\frac{\sqrt{\frac{\tau}{2\pi}} \exp \{-\frac{\tau}{2}(x-\mu)^2 \} }{\Phi((b-\mu)\cdot \sqrt{\tau})-\Phi((a-\mu)\cdot \sqrt{\tau})} ,& \mathrm{\,\,if\,\,} a\leq x \leq b; \\
&0 , &\mathrm{\,\,if\,\,} x >b,
\end{aligned}
\right.
$$
where $\Phi(\cdot)$ is the c.d.f. of $\mathcal{N}(0,1)$, the standard normal distribution. The mean and variance of ${\textnormal{x}} \sim \mathcal{GTN}(\mu, \tau^{-1}, a, b)$ are given by
$$
\begin{aligned}
\mathrm{E}[{\textnormal{x}}] &= \mu - \frac{1}{\sqrt{\tau}}\cdot \frac{\phi(\beta) - \phi(\alpha)}{\Phi(\beta) - \Phi(\alpha)}, \\
\mathrm{Var}[{\textnormal{x}}] &= \frac{1}{\tau} \left( 1 - \frac{\beta \phi(\beta) - \alpha\phi(\alpha)}{\Phi(\beta) - \Phi(\alpha)}- \left(\frac{\phi(\beta) - \phi(\alpha)}{\Phi(\beta) - \Phi(\alpha)}\right)^2 \right),
\end{aligned}
$$
where $\phi(\cdot)$ is the p.d.f. of the standard normal distribution, and
$$
\alpha = (a-\mu)\cdot \sqrt{\tau}, \qquad \beta = (b-\mu)\cdot \sqrt{\tau}.
$$
Note that the truncated-normal distribution is a special general-truncated-normal with $a=0$ and $b=\infty$ \citep{burkardt2014truncated}. Figure~\ref{fig:dists_generaltruncatednorml} compares different parameters $\mu, \tau$ for the GTN distribution. Figure~\ref{fig:dists_generaltruncatednorml_mean} shows the mean value of the GTN distribution by varying $\mu$ given fixed $\tau, a, b$; we again see that as $\mu\rightarrow -\infty$, the mean approaches zero.
\end{definition}
\paragraph{Conjugate prior for the constrained mean parameter of a Gaussian.}
We have previously shown that a Gaussian distribution is a conjugate prior for the mean parameter of another Gaussian distribution when the variance is fixed. Analogously, the general-truncated-normal distribution is a conjugate prior for the mean parameter of a Gaussian distribution \textbf{constrained} to the interval $[a,b]$ when the variance is fixed. To see this, suppose $\mathcal{X}=\{x_1, x_2, \ldots, x_N\}$ are drawn i.i.d. from a normal distribution with mean $\theta$ and precision $\tau$, i.e., the likelihood is $\mathcal{N}(x \mid \theta, \tau^{-1})$ where the variance $\sigma^2=\tau^{-1}$ is fixed, and $\theta$ is given a $\mathcal{GTN}(\mu_0, \tau^{-1}_0, a, b)$ prior: $\theta \sim \mathcal{GTN}( \mu_0, \tau_0^{-1}, a, b)$.
Using Bayes' theorem, the posterior is
\begin{equation}\label{equation:conjugate_truncated_constrained_mean}
\begin{aligned}
&\gap p(\theta \mid \mathcal{X}) \propto \prod_{i=1}^{N} \mathcal{N}(x_i \mid \theta, \tau^{-1}) \times \mathcal{GTN}(\theta \mid \mu_0, \tau_0^{-1}, a, b)\\
&\propto \exp\left\{ -\frac{\tau_0+ N\tau}{2} \theta^2 + \big(\tau \sum_{i=1}^{N}x_i + \tau_0 \mu_0\big)\theta \right\} \cdot \mathds{1}(a\leq \theta\leq b)\\
&\propto \mathcal{GTN}(\theta \mid \widetilde{\mu}, \widetilde{\tau}^{-1}, a, b),
\end{aligned}
\end{equation}
where $\mathds{1}(a\leq y\leq b)$ is the indicator function with value 1 if $ a\leq y\leq b$ and value 0 otherwise, and
$$
\widetilde{\mu}= \frac{\tau_0 \mu_0 + \tau \sum_{i=1}^{N} x_i}{ \tau_0 + N\tau}, \qquad \widetilde{\tau} = \tau_0 + N \tau.
$$
The posterior parameters are again exactly the same as those in the Normal-Normal model (Equation~\eqref{equation:posterior-param-normal-normal}).
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Half-normal.]{\label{fig:dists_halfnorml}
\includegraphics[width=0.481\linewidth]{./imgs/dists_halfnorml.pdf}}
\subfigure[Normal, $\tau$ is the precision parameter.]{\label{fig:dists_halfnorml_gausscompare}
\includegraphics[width=0.481\linewidth]{./imgs/dists_halfnorml_gausscompare.pdf}}
\caption{Half-normal and normal probability density functions for different values of the parameters $\mu$ and $\tau$. The probability density of any value $x\geq \mu$ in the half-normal distribution is twice that of the normal distribution with the same parameters $\mu, \tau$. }
\label{fig:dists_half_and_general}
\end{figure}
The half-normal distribution is obtained from a normal distribution by restricting the support to values no smaller than the parameter $\mu$ of the normal distribution (known as the ``parent" mean parameter), i.e., by folding the parent density over at $\mu$. This distribution is often used in modeling the scale or standard deviation of a process where the values cannot be smaller than $\mu$.
\index{Half-normal distribution}
\begin{definition}[Half-Normal Distribution]\label{definition:half_normal}
A random variable ${\textnormal{x}}$ is said to follow the half-normal distribution with ``parent" mean $\mu$ and ``parent" precision $\tau>0$, denoted by ${\textnormal{x}} \sim \mathcal{HN}(\mu, \tau^{-1})$, if
$$
f(x; \mu, \tau^{-1})=\left\{
\begin{aligned}
&\sqrt{\frac{2\tau}{\pi}} \exp\left\{- \frac{\tau}{2}(x-\mu)^2 \right\} ,& \mathrm{\,\,if\,\,} x \geq \mu; \\
&0 , &\mathrm{\,\,if\,\,} x <\mu.
\end{aligned}
\right.
$$
The mean and variance of ${\textnormal{x}} \sim \mathcal{HN}(\mu, \tau^{-1})$ are given by
$$
\mathrm{E}[{\textnormal{x}}] = \mu + \sqrt{\frac{2}{\pi \tau}} , \qquad \mathrm{Var}[{\textnormal{x}}] = \frac{1}{\tau}\big(1-\frac{2}{\pi} \big).
$$
Figure~\ref{fig:dists_halfnorml} compares different parameters $\mu, \tau$ for the half-normal distribution.
\end{definition}
\index{Rectified-normal distribution}
In this text, the \textit{rectified-normal (RN)} distribution is defined as proportional to the product of a Gaussian density and an exponential density; it can also be called the \textit{exponentially rectified-normal} distribution.
\begin{definition}[Rectified-Normal (RN) Distribution]\label{definition:reftified_normal_distribution}
A random variable ${\textnormal{x}}$ is said to follow the rectified-normal distribution (or \textbf{exponentially rectified-normal} distribution) with ``parent" mean $\mu$, ``parent" precision $\tau>0$, and ``parent" rate $\lambda >0$, denoted by ${\textnormal{x}} \sim \mathcal{RN}(\mu, \tau^{-1}, \lambda)$, if
$$
\begin{aligned}
f(x; \mu, \tau^{-1}, \lambda)
&=\frac{1}{C}\cdot \mathcal{N}(x\mid \mu, \tau^{-1}) \cdot \mathcal{E}(x\mid \lambda)\\
&\propto \exp\left\{ -\frac{\tau}{2} \left( x - \frac{\tau\mu-\lambda}{\tau} \right)^2 \right\} \cdot u(x)\\
&\propto \mathcal{TN}(x\mid\textcolor{blue}{\frac{\tau\mu-\lambda}{\tau}}, \tau^{-1}),
\end{aligned}
$$
where $\mathcal{TN}(\cdot)$ is the density function of a truncated-normal distribution, and $C$ is a constant value,
\begin{equation}\label{equation:rf_constant}
C =C^{RN}(\mu, \tau, \lambda) = \lambda \left\{1 - \Phi\bigg(-\frac{\tau\mu-\lambda}{\sqrt{\tau}} \bigg)\right\} \cdot \exp\left( -\mu\lambda + \frac{\lambda^2}{2\tau}\right).
\end{equation}
That is, the rectified-normal distribution is a special truncated-normal distribution with more flexibility. The mean and variance of ${\textnormal{x}} \sim \mathcal{RN}(\mu, \tau^{-1}, \lambda)$ are given by
$$
\begin{aligned}
\mathrm{E}[{\textnormal{x}}] &= \frac{\tau\mu-\lambda}{\tau} - \frac{1}{\sqrt{\tau}}\cdot \frac{ - \phi(\alpha)}{1 - \Phi(\alpha)}, \\
\mathrm{Var}[{\textnormal{x}}] &= \frac{1}{\tau} \left\{ 1 + \frac{ \alpha\phi(\alpha)}{1 - \Phi(\alpha)}- \left(\frac{ \phi(\alpha)}{1 - \Phi(\alpha)}\right)^2 \right\},
\end{aligned}
$$
where $ \alpha = -\frac{\tau\mu-\lambda}{\tau}\cdot \sqrt{\tau}. $ Figure~\ref{fig:dists_rectifiednorml} compares different parameters $\mu, \tau, \lambda$ for the RN distribution.
\end{definition}
The comparison between the truncated-normal and rectified-normal distributions is presented in Figure~\ref{fig:dists_truncatednorml_and_rectified}.
\paragraph{Conjugate prior for the nonnegative mean parameter of a Gaussian by RN.}
Like the TN distribution, the RN distribution also serves to enforce a nonnegativity constraint, and is conjugate to the Gaussian likelihood (with respect to the mean parameter). However, due to the extra parameter $\lambda$, the RN distribution is more flexible in this sense, and the derivation follows from Equation~\eqref{equation:conjugate_truncated_nonnegative_mean}. To see this, suppose $\mathcal{X}=\{x_1, x_2, \ldots, x_N\}$ are drawn i.i.d. from a normal distribution with mean $\theta$ and precision $\tau$, i.e., the likelihood is $\mathcal{N}(x \mid \theta, \tau^{-1})$ where the variance $\sigma^2=\tau^{-1}$ is fixed, and $\theta$ is given a $\mathcal{RN}(\mu_0, \tau^{-1}_0, \lambda_0)$ prior: $\theta \sim \mathcal{RN}( \mu_0, \tau_0^{-1}, \lambda_0)$. Using Bayes' theorem, the posterior is
\begin{equation}\label{equation:conjugate_rectified_nonnegative_mean}
\begin{aligned}
&\gap p(\theta \mid \mathcal{X}) \propto \prod_{i=1}^{N} \mathcal{N}(x_i \mid \theta, \tau^{-1}) \times \mathcal{RN}(\theta\mid \mu_0, \tau_0^{-1}, \lambda_0) \\
&= \prod_{i=1}^{N} \mathcal{N}(x_i \mid \theta, \tau^{-1}) \times \mathcal{TN}(\theta\mid m_0, \tau_0^{-1}) &(m_0=\frac{\tau_0\mu_0-\lambda_0}{\tau_0})\\
&\propto \mathcal{TN}(\theta \mid \widetilde{\mu}, \widetilde{\tau}^{-1}),
\end{aligned}
\end{equation}
where
$$
\widetilde{\mu}= \frac{\tau_0 m_0 + \tau \sum_{i=1}^{N} x_i}{ \tau_0 + N\tau}, \qquad \widetilde{\tau} = \tau_0 + N \tau.
$$
That is, the posterior density is again a TN distribution, i.e., a special RN distribution.
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Truncated-normal. Same as Figure~\ref{fig:dists_truncatednorml}.]{\label{fig:dists_truncatednorml2}
\includegraphics[width=0.481\linewidth]{./imgs/dists_truncatednorml.pdf}}
\subfigure[Rectified-normal.]{\label{fig:dists_rectifiednorml}
\includegraphics[width=0.481\linewidth]{./imgs/dists_rectifiednorml.pdf}}
\caption{Truncated-normal and rectified-normal probability density functions for different values of the parameters $\mu$, $\tau$, and $\lambda$.}
\label{fig:dists_truncatednorml_and_rectified}
\end{figure}
\index{Inverse-Gaussian distribution}
The \textit{inverse-Gaussian} distribution, also known as the \textit{Wald} distribution, is a continuous probability distribution with two parameters, $\mu > 0$ and $\lambda > 0$. It is a versatile distribution that is used in various applications including modeling waiting times, stock prices, and lifetimes of mechanical systems. An important property of the distribution is that it is well-suited for modeling nonnegative, continuous, and positively skewed data with finite mean and variance.
\begin{definition}[Inverse-Gaussian Distribution]\label{definition:inverse_gaussian_distribution}
A random variable ${\textnormal{x}}$ is said to follow the inverse-Gaussian distribution with parameters $\mu>0$ and $\lambda>0$, denoted by ${\textnormal{x}} \sim \mathcal{N}^{-1}(\mu,\lambda)$ \footnote{Note we use $\mathcal{N}^{-1}$ to denote the inverse-Gaussian distribution and use $\mathrm{IG}$ to denote the inverse-Gamma distribution.}, if
$$
f(x; \mu,\lambda)=\left\{
\begin{aligned}
& \sqrt{\frac{\lambda}{2\pi x^3}} \exp \left( -\frac{\lambda(x-\mu)^2}{2\mu^2 x} \right) ,& \mathrm{\,\,if\,\,} x \geq 0; \\
&0 , &\mathrm{\,\,if\,\,} x <0.
\end{aligned}
\right.
$$
The mean and variance of ${\textnormal{x}} \sim \mathcal{N}^{-1}( \mu,\lambda)$ are given by
\begin{equation}
\mathrm{E}[{\textnormal{x}}] = \mu, \qquad \mathrm{Var}[{\textnormal{x}}] = \frac{\mu^3}{\lambda}. \nonumber
\end{equation}
The support of an inverse-Gaussian distribution is on $(0,\infty)$. Figure~\ref{fig:dists_inversegaussian} compares different parameters $\mu, \lambda$ for the inverse-Gaussian distribution.
\end{definition}
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/dists_inversegaussian.pdf}
\caption{Inverse-Gaussian probability density functions for different values of the parameters $\mu, \lambda$.}
\label{fig:dists_inversegaussian}
\end{SCfigure}
The \textit{Laplace} distribution, a.k.a. the \textit{double exponential} distribution, is named after \textit{Pierre-Simon Laplace} (1749-1827), who obtained the distribution in 1774 \citep{kotz2001laplace, hardle2007applied}. The Laplace distribution is useful in modeling heavy-tailed data since it has heavier tails than the normal distribution, and it is used extensively in sparse-favoring models since it expresses a high peak with heavy tails (analogous to the $l_1$ regularization term in non-probabilistic or non-Bayesian optimization methods). When we have a prior belief that the parameter of interest is likely to be close to the mean but may occasionally deviate substantially, the Laplace distribution can be used as a prior distribution in Bayesian modeling.
\index{Laplace distribution}
\begin{definition}[Laplace Distribution]\label{definition:laplace_distribution}
A random variable ${\textnormal{x}}$ is said to follow the Laplace distribution with location and scale parameters $\mu$ and $b>0$, respectively, denoted by ${\textnormal{x}} \sim \mathcal{L}(\mu,b)$, if
$$
f(x; \mu,b)=\frac{1}{2b} \exp \left( -\frac{\abs{x-\mu}}{b } \right).
$$
The mean and variance of ${\textnormal{x}} \sim \mathcal{L}( \mu,b)$ are given by
\begin{equation}
\mathrm{E}[{\textnormal{x}}] = \mu, \qquad \mathrm{Var}[{\textnormal{x}}] =2b^2. \nonumber
\end{equation}
Figure~\ref{fig:dists_laplace} compares different parameters $\mu, b$ for the Laplace distribution.
\end{definition}
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Laplace distribution.]{\label{fig:dists_laplace}
\includegraphics[width=0.481\linewidth]{./imgs/dists_laplace.pdf}}
\subfigure[Skew-Laplace distribution.]{\label{fig:dists_skewlaplace}
\includegraphics[width=0.481\linewidth]{./imgs/dists_skewlaplace.pdf}}
\caption{Laplace and skew-Laplace probability density functions for different values of the parameters.}
\label{fig:dists_lap_and_skewlap}
\end{figure}
\paragraph{Laplace as a mixture of normal distributions.}
Any Laplace random variable can be thought of as a Gaussian random variable with the same mean value and a \textit{stochastic} variance that follows an exponential distribution. More formally, the Laplace distribution can be rewritten as:
\begin{equation}\label{equation:laplace_as_mixture}
\mathcal{L}(x\mid\mu, b) = \int_{0}^{\infty} \mathcal{N}(x\mid \mu, \epsilon) \cdot \mathcal{E}(\epsilon \mid \frac{1}{2b^2})d\epsilon.
\end{equation}
To see this, we have
$$
\begin{aligned}
&\gap \int_{0}^{\infty} \mathcal{N}(x\mid\mu, \epsilon) \cdot \mathcal{E}(\epsilon \mid \frac{1}{2b^2})d\epsilon \\
&= \int_{0}^{\infty} \frac{1}{\sqrt{2\pi \epsilon}} \exp\left\{-\frac{1}{2\epsilon} (x-\mu)^2\right\} \cdot \frac{1}{2b^2} \exp(-\frac{1}{2b^2}\epsilon ) d\epsilon \\
&= \frac{1}{2b^2} \int_{0}^{\infty} \frac{1}{\sqrt{2\pi \epsilon}} \exp\left\{ - \frac{(x-\mu)^2 + \frac{\epsilon^2}{b^2}}{2\epsilon } \right\} d\epsilon \\
&= \frac{1}{2b^2} \int_{0}^{\infty} \frac{\epsilon}{\sqrt{2\pi \epsilon^3}} \exp\left\{ - \frac{(\abs{x-\mu} - \frac{\epsilon}{b})^2 + 2\abs{x-\mu}\frac{\epsilon}{b} }{2\epsilon } \right\} d\epsilon \\
&\xlongequal{z:=\abs{x-\mu}b} \frac{1}{2b^2} \exp\left\{ -\frac{\abs{x-\mu}}{b}\right\} \int_{0}^{\infty} \frac{\epsilon}{\sqrt{2\pi \epsilon^3}} \exp\left\{ - \frac{( z -\epsilon )^2 }{2\epsilon b^2 } \right\}  d\epsilon \\
&\xlongequal{\lambda:=(x-\mu)^2} \frac{1}{\sqrt{\lambda}} \frac{1}{2b^2} \exp\left\{ -\frac{\abs{x-\mu}}{b}\right\} \int_{0}^{\infty} \epsilon\frac{\sqrt{\lambda}}{\sqrt{2\pi \epsilon^3}} \exp\left\{ - \frac{\lambda( z -\epsilon )^2 }{2\epsilon z^2 } \right\} d\epsilon \\
&= \frac{1}{\sqrt{\lambda}} \frac{1}{2b^2} \exp\left\{ -\frac{\abs{x-\mu}}{b}\right\} \cdot z
= \frac{1}{2b} \exp \left\{ -\frac{\abs{x-\mu}}{b } \right\},
\end{aligned}
$$
where the last equality follows since the integral is the mean of the inverse-Gaussian distribution $\mathcal{N}^{-1}(z, \lambda)$ in Definition~\ref{definition:inverse_gaussian_distribution}, which equals $z=\abs{x-\mu}b=\sqrt{\lambda}\, b$.
\paragraph{Conjugate prior for the Laplace scale parameter.}
Similar to the Gaussian case, the inverse-Gamma distribution is a conjugate prior for the scale parameter of a Laplace distribution. To see this, let the likelihood be $p(\bm{A} \mid \bm{B}, b)=\mathcal{L}(\bm{A}\mid\bm{B}, b)$ where $\bm{A},\bm{B}\in\mathbb{R}^{M\times N}$ (i.e., each entry $a_{ij}$ follows $\mathcal{L}(a_{ij}\mid b_{ij}, b)$), and let the prior of $b$ be $p(b)=\mathrm{IG}(b\mid \alpha, \beta)$.
Using Bayes' theorem, it can be shown that
\begin{equation}\label{equation:inverse_gamma_conjugacy_general_laplace}
\begin{aligned}
p(b \mid \bm{A}, \bm{B}, \alpha, \beta)
&\propto \mathcal{L}(\bm{A}\mid\bm{B}, b)\times \mathrm{IG}(b \mid \alpha, \beta)\\
&=\prod_{i,j=1}^{M,N} \mathcal{L}(a_{ij}\mid b_{ij}, b) \times \frac{\beta^\alpha}{\Gamma(\alpha)} (b)^{-\alpha-1} \exp(-\frac{\beta}{b})\\
&\propto \frac{1}{b^{MN}}\exp\left\{ -\frac{1}{b} \sum_{i,j=1}^{M,N}\abs{a_{ij} - b_{ij}}\right\} \cdot b^{-\alpha-1}\exp(-\frac{\beta}{b})\\
&=(b)^{-{MN}-\alpha-1} \exp\left\{ -\frac{1}{b} \left(\sum_{i,j=1}^{M,N} \abs{a_{ij} - b_{ij} } +\beta\right)\right\}\\
&\propto \mathrm{IG}(b\mid \widetilde{\alpha}, \widetilde{\beta}),\\
\end{aligned}
\end{equation}
where
\begin{equation}\label{equation:inversegamma_conjugate_posterior_laplace}
\widetilde{\alpha}={MN}+\alpha, \qquad \widetilde{\beta}= \sum_{i,j=1}^{M,N}\abs{a_{ij} - b_{ij} } +\beta.
\end{equation}
That is, the posterior density of the scale parameter $b$ is also an inverse-Gamma distribution.
\index{Skew-Laplace distribution}
The skew-Laplace distribution is a type of heavy-tailed probability distribution that is similar to the Laplace distribution but allows for skewness (see Figure~\ref{fig:dists_lap_and_skewlap}).
\begin{definition}[Skew-Laplace Distribution]\label{definition:skew_laplace_distribution}
A random variable ${\textnormal{x}}$ is said to follow the skew-Laplace (or the asymmetric Laplace) distribution with location parameter $\mu$ and scale parameters $\alpha, \beta>0$, denoted by ${\textnormal{x}} \sim \mathcal{SL}(\mu, \alpha, \beta)$, if
$$
f(x; \mu,\alpha, \beta)=\left\{
\begin{aligned}
& \frac{\alpha\beta}{\alpha+\beta} \exp \left\{-\alpha(x-\mu)\right\} , & \mathrm{\,\,if\,\,} x \geq \mu; \\
& \frac{\alpha\beta}{\alpha+\beta} \exp \left\{\beta (x-\mu)\right\} , & \mathrm{\,\,if\,\,} x< \mu.
\end{aligned}
\right.
$$
When $\alpha=\beta=\frac{1}{b}$, the skew-Laplace ${\textnormal{x}} \sim \mathcal{SL}(\mu, \alpha, \beta)$ reduces to a Laplace density ${\textnormal{x}} \sim \mathcal{L}(\mu, b)$. The mean and variance of ${\textnormal{x}} \sim \mathcal{SL}(\mu, \alpha, \beta)$ are given by
\begin{equation}
\mathrm{E}[{\textnormal{x}}] = \mu+ \frac{\beta-\alpha}{\alpha\beta}, \qquad \mathrm{Var}[{\textnormal{x}}] =\frac{\alpha^2+\beta^2}{\alpha^2\beta^2}. \nonumber
\end{equation}
Figure~\ref{fig:dists_skewlaplace} compares different parameters $\mu, \alpha, \beta$ for the skew-Laplace distribution. When $\alpha>\beta$, the left tail is heavier and the distribution is skewed to the left.
\end{definition}
\section{Multinomial Distribution and Conjugacy}
The multinomial distribution is widely used in the Bayesian mixture model to introduce latent variables. It is a discrete probability distribution that describes the probabilities of obtaining different outcomes from $N$ independent trials, each with $K$ possible outcomes occurring with specified probabilities. It models the distribution of counts or frequencies of events among $K$ categories. Specifically, the multinomial distribution is parameterized by an integer $N$ and a p.m.f. $\bm{\pi} = \{\pi_1, \pi_2, \ldots , \pi_K\}$, and can be thought of as follows: if we have $N$ independent events, and for each event, the probability of outcome $k$ is $\pi_k$, then the multinomial distribution specifies the probability that outcome $k$ occurs $N_k$ times, for $k = 1, 2, \ldots , K$.
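This counting view can be illustrated with a small simulation. The following is a minimal NumPy sketch (the sample size and outcome probabilities are arbitrary choices for illustration, not values used elsewhere in this text):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

N = 1000                          # number of independent events (trials)
pi = np.array([0.5, 0.3, 0.2])    # probabilities of the K = 3 outcomes

# Simulate N independent events and count how often each outcome occurs;
# the resulting count vector is one draw from Multi_K(N, pi).
events = rng.choice(len(pi), size=N, p=pi)
counts = np.bincount(events, minlength=len(pi))

# Equivalently, draw the count vector directly.
counts_direct = rng.multinomial(N, pi)

print(counts, counts_direct, N * pi)  # empirical counts vs. N * pi_k
\end{verbatim}
Averaged over many such draws, the counts concentrate around $N\pi_k$, matching the mean given in the formal definition that follows.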
Formally, we have the following definition of the multinomial distribution.
\index{Multinomial distribution}
\begin{definition}[Multinomial Distribution]\label{definition:multinomial_dist}
A $K$-dimensional random vector $\bm{N}=[N_1, N_2, \ldots, N_K]\in \{0, 1, 2, \ldots, N\}^K$ where $\sum_{k=1}^{K} N_k=N$ is said to follow the multinomial distribution with parameters $N\in \mathbb{N}$ and $\bm{\pi} =[\pi_1, \pi_2, \ldots, \pi_K]\in [0,1]^K$ such that $\sum_{k=1}^{K} \pi_k=1$, denoted by $\bm{N} \sim \mathrm{Multi}_K(N, \bm{\pi})$. Then its probability mass function is given by
\begin{equation*}
p\big(N_1, N_2, \ldots , N_K | N, \bm{\pi} = (\pi_1, \pi_2, \ldots , \pi_K)\big) = \frac{N!}{N_1! N_2! \ldots N_K!} \prod^K_{k=1}\pi_k^{N_k} \cdot \mathds{1}\left\{\sum_{k=1}^{K}N_k = N\right\},
\end{equation*}
where $\{0, 1, 2, \ldots, N\}$ is a set of $N+1$ elements and $[0,1]$ is a closed set with values between 0 and 1. The mean, variance, and covariance of the multinomial distribution are
$$
\mathrm{E}[N_k] = N\pi_k, \qquad \mathrm{Var}[N_k] = N\pi_k(1-\pi_k), \qquad \mathrm{Cov}[N_k, N_m] = -N\pi_k\pi_m.
$$
When $K=2$, the multinomial distribution reduces to the \textbf{binomial} distribution.
\end{definition}
\index{Binomial distribution}
\begin{remark}[Binomial Distribution]\label{remark:binomial_dist}
When $K=2$, the multinomial distribution is also known as a \textit{binomial} distribution. A random variable ${\textnormal{x}}$ is said to follow the binomial distribution with parameters $\pi\in(0,1)$ and $N\in\mathbb{N}$, denoted by ${\textnormal{x}}\sim\mathrm{Binom}(N, \pi)$, if
$$
p(x\mid N, \pi) ={N\choose x} \pi^x (1-\pi)^{N-x}.
$$
The mean and variance of the binomial distribution are
$$
\mathrm{E}[{\textnormal{x}}] = N\pi, \qquad \mathrm{Var}[{\textnormal{x}}] = N\pi(1-\pi).
$$
Figure~\ref{fig:dists_binom} compares different parameters $N, \pi$ for the binomial distribution.
\end{remark}
A random variable is said to follow the \textit{Bernoulli} distribution with parameter $\pi\in(0,1)$, denoted as ${\textnormal{x}}\sim\mathrm{Bern}(\pi)$, if
$$
f(x; \pi) =\pi\mathds{1}\{x=1\} + (1-\pi)\mathds{1}\{x=0\},
$$
with mean $\mathrm{E}[{\textnormal{x}}]=\pi$ and variance $\mathrm{Var}[{\textnormal{x}}]=\pi(1-\pi)$.
\begin{exercise}[Bernoulli and Binomial]
Show that if ${\textnormal{x}}=\sum_{i=1}^{N} {\textnormal{y}}_i$ where ${\textnormal{y}}_i\stackrel{i.i.d.}{\sim} \mathrm{Bern}(\pi)$, then we have ${\textnormal{x}}\sim \mathrm{Binom}(N, \pi)$.
\end{exercise}
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/dists_binomial.pdf}
\caption{Binomial probability mass functions for different values of the parameters $N, \pi$.}
\label{fig:dists_binom}
\end{SCfigure}
\subsection{Dirichlet Distribution}\label{section:dirichlet-dist}
The Dirichlet distribution is a multi-dimensional probability distribution over the simplex. It is parameterized by a vector of positive real numbers and defines a distribution over vectors of probabilities that sum to 1. The Dirichlet distribution commonly serves as a prior distribution in Bayesian statistics, particularly in the context of discrete and categorical data, and it is a conjugate prior for the probability parameter $\bm{\pi}$ of the multinomial distribution.
\begin{definition}[Dirichlet Distribution]\label{definition:dirichlet_dist}
A random vector ${\mathbf{x}}=[{\textnormal{x}}_1, {\textnormal{x}}_2, \ldots, {\textnormal{x}}_K]\in [0,1]^K$ is said to follow the Dirichlet distribution with parameter $\boldsymbol\alpha$, denoted by ${\mathbf{x}}\sim \mathrm{Dirichlet}( \boldsymbol\alpha)$, if
\begin{equation}
f(\bm{x} ; \boldsymbol\alpha) = \frac{1}{D(\boldsymbol\alpha)} \prod_{k=1}^K x_k ^ {\alpha_k - 1},
\label{equation:dirichlet_distribution2}
\end{equation}
such that $\sum_{k=1}^K x_k = 1$, $x_k \in [0, 1]$, and
\begin{equation}
D(\boldsymbol\alpha) = \frac{\prod_{k=1}^K \Gamma(\alpha_k)}{\Gamma(\alpha_+)},
\label{equation:dirichlet_distribution3}
\end{equation}
where $\boldsymbol\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_K]$ is a vector of reals with $\alpha_k>0, \forall k$, and $\alpha_+ = \sum_{k=1}^K \alpha_k$. The vector $\boldsymbol\alpha$ is also known as the \textbf{concentration parameter} of the Dirichlet distribution. $\Gamma(\cdot)$ is the Gamma function, which is a generalization of the factorial function. The mean, variance, and covariance are
$$
\mathrm{E}[{\textnormal{x}}_k] = \frac{\alpha_k}{\alpha_+}, \qquad \mathrm{Var}[{\textnormal{x}}_k] = \frac{\alpha_k(\alpha_+-\alpha_k)}{\alpha_+^2(\alpha_++1)}, \qquad \mathrm{Cov}[{\textnormal{x}}_k, {\textnormal{x}}_m]= \frac{-\alpha_k\alpha_m}{\alpha_+^2(\alpha_++1)}.
$$
When $K=2$, the Dirichlet distribution reduces to the Beta distribution. The Beta distribution $\mathrm{Beta}(\alpha, \beta)$ is defined on $[0,1]$ with the probability density function given by
$$
\mathrm{Beta}(x\mid \alpha, \beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha-1}(1-x)^{\beta-1}.
$$
That is, if ${\textnormal{x}}\sim \mathrm{Beta}(\alpha, \beta)$, then ${\mathbf{x}}=[{\textnormal{x}}, 1-{\textnormal{x}}] \sim \mathrm{Dirichlet}(\boldsymbol\alpha) $, where $\boldsymbol\alpha=[\alpha, \beta]$.
\end{definition}
Interested readers can refer to Appendix~\ref{appendix:drive-dirichlet} (p.~\pageref{appendix:drive-dirichlet}) for a derivation of the Dirichlet distribution. The sample space of the Dirichlet distribution lies on the $(K-1)$-dimensional probability simplex, which is a surface in $\mathbb{R}^K$ denoted by $\triangle_K$. That is, it is the set of vectors in $\mathbb{R}^K$ whose components are nonnegative and sum to 1:
$$
\triangle_K = \left\{ \bm{\pi}: 0 \leq \pi_k\leq 1, \,\, \sum_{k=1}^{K}\pi_k=1 \right\}.
$$
Notice that $\triangle_K$ lies in a $(K-1)$-dimensional space since the components sum to 1.
\begin{figure}[htp]
\centering
\subfigure[ $ \boldsymbol\alpha=\begin{bmatrix} 10,10,10 \end{bmatrix} $, z-axis is pdf.
]{\includegraphics[width=0.4 \textwidth]{imgs/dirichlet-pdf.png} \label{fig:dirichlet_pdf}}
~
\subfigure[$\boldsymbol\alpha=\begin{bmatrix} 10,10,10 \end{bmatrix}$, z-axis is $\pi_3$.]{\includegraphics[width=0.4 \textwidth]{imgs/dirichlet-surface.png} \label{fig:dirichlet_surface}}
\centering
\subfigure[ $ \boldsymbol\alpha=\begin{bmatrix} 1,1,1 \end{bmatrix} $ ]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_contour_1-1-1.pdf} \label{fig:dirichlet_sample_111}}
~
\subfigure[$\boldsymbol\alpha=\begin{bmatrix} 0.9,0.9,0.9 \end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_contour_09-09-09.pdf} \label{fig:dirichlet_sample_090909}}
\centering
\subfigure[$\boldsymbol\alpha=\begin{bmatrix} 10,10,10 \end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_contour_10-10-10.pdf} \label{fig:dirichlet_sample_101010}}
~
\subfigure[$\boldsymbol\alpha=\begin{bmatrix} 15,5,2 \end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_contour_15-5-2.pdf} \label{fig:dirichlet_sample_1552}}
\centering
\caption{Density plots (\textcolor{blue}{blue}=low, \textcolor{red}{red}=high) for the Dirichlet distribution over the probability simplex in $\mathbb{R}^3$ for various values of the concentration parameter $\boldsymbol\alpha$. When $\boldsymbol\alpha=[c, c, c]$, the distribution is called a \textit{symmetric Dirichlet distribution} and the density is symmetric about the uniform probability mass function (i.e., centered in the middle of the simplex). When $0<c<1$, there are sharp peaks of density almost at the vertices of the simplex. When $c>1$, the density becomes unimodal and concentrated in the center of the simplex. And when $c=1$, it is uniformly distributed over the simplex. Finally, if $\boldsymbol\alpha$ is not a constant vector, the density is not symmetric.}\centering
\label{fig:dirichlet_samples}
\end{figure}
Figure~\ref{fig:dirichlet_samples} shows various plots of the Dirichlet distribution's density over the two-dimensional simplex in $\mathbb{R}^3$ for a handful of values of the parameter vector $\boldsymbol\alpha$, and Figure~\ref{fig:dirichlet_points} shows draws of 5,000 points for each setting. Specifically, the density plot of a Dirichlet distribution in $\mathbb{R}^3$ is a surface plot in 4$d$-space. Figure~\ref{fig:dirichlet_pdf} is a projection of the surface into 3$d$-space where the z-axis is the probability density function, and Figure~\ref{fig:dirichlet_surface} is a projection of the surface into 3$d$-space where the z-axis is $\pi_3$. Figure~\ref{fig:dirichlet_sample_111} to Figure~\ref{fig:dirichlet_sample_1552} are the projections into a 2$d$-space. When the concentration parameter $\boldsymbol\alpha=[1,1,1]$, the Dirichlet distribution reduces to the uniform distribution over the simplex. It can be easily verified that $\mathrm{Dirichlet}(\bm{x}\mid \boldsymbol\alpha=[1,1,1]) = \frac{\Gamma(3)}{(\Gamma(1))^3}= 2$, which is a constant that does not depend on the specific value of $\bm{x}$. When $\boldsymbol\alpha=[c,c,c]$ with $c>1$, the density becomes unimodal and concentrated in the center of the simplex. This can be seen from $\mathrm{Dirichlet}(\bm{x} \mid \boldsymbol\alpha=[c,c,c]) = \frac{\Gamma(3c)}{(\Gamma(c))^3}\prod_{k=1}^3 x_k ^ {c - 1} $, in which a small value of $x_k$ makes the probability density approach zero. On the contrary, when $\boldsymbol\alpha=[c,c,c]$ with $c<1$, the density has sharp peaks almost at the vertices of the simplex.
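This shape behavior can also be reproduced numerically. The following minimal NumPy sketch draws Dirichlet samples by normalizing independent Gamma draws (the construction referenced in the appendix); the concentration parameters below simply mirror those used in the figures, and the function name is only illustrative:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_via_gamma(alpha, n_samples, rng):
    # If g_k ~ Ga(alpha_k, 1) independently, then g / sum(g) ~ Dirichlet(alpha).
    g = rng.gamma(shape=alpha, size=(n_samples, len(alpha)))
    return g / g.sum(axis=1, keepdims=True)

for alpha in ([1, 1, 1], [0.9, 0.9, 0.9], [10, 10, 10], [15, 5, 2]):
    x = dirichlet_via_gamma(np.array(alpha, dtype=float), 5000, rng)
    # Each row lies on the simplex; the sample mean approaches alpha_k / alpha_+,
    # and the spread around it shrinks as the alpha_k grow.
    print(alpha, x.mean(axis=0).round(3))
\end{verbatim}
NumPy also provides \texttt{rng.dirichlet(alpha, size)} directly; the explicit Gamma construction is shown only to make the connection with the Gamma distribution transparent.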
More properties of the Dirichlet distribution are provided in Table~\ref{table:dirichlet-property}, and the proofs can be found in Appendix~\ref{appendix:drive-dirichlet} (p.~\pageref{appendix:drive-dirichlet}). The derivation of the Dirichlet distribution in the same appendix can also be used to generate samples from the Dirichlet distribution from a set of samples of Gamma distributions.
\begin{figure}[htp]
\center
\subfigure[ $ \boldsymbol\alpha=\begin{bmatrix}1,1,1\end{bmatrix}$ ]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_points_1-1-1.pdf} \label{fig:dirichlet_points_111}}
~
\subfigure[$\boldsymbol\alpha=\begin{bmatrix}0.9,0.9,0.9\end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_points_09-09-09.pdf} \label{fig:dirichlet_points_090909}}
\center
\subfigure[$\boldsymbol\alpha=\begin{bmatrix}10,10,10\end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_points_10-10-10.pdf} \label{fig:dirichlet_points_101010}}
~
\subfigure[$\boldsymbol\alpha=\begin{bmatrix}15,5,2\end{bmatrix}$]{\includegraphics[width=0.4 \textwidth]{imgs/dist_dirichlet_points_15-5-2.pdf} \label{fig:dirichlet_points_1552}}
\center
\caption{Draws of 5,000 points from the Dirichlet distribution over the probability simplex in $\mathbb{R}^3$ for various values of the concentration parameter $\boldsymbol\alpha$.}
\label{fig:dirichlet_points}
\end{figure}
\begin{table}[]
\begin{tabular}{l|l}
\hline
\begin{tabular}[c]{@{}l@{}}Marginal \\ Distribution\end{tabular} & ${\textnormal{x}}_i \sim \mathrm{Beta}(\alpha_i, \alpha_+-\alpha_i)$. \\ \hline
\begin{tabular}[c]{@{}l@{}}Conditional \\ Distribution\end{tabular} & \begin{tabular}[c]{@{}l@{}}${\mathbf{x}}_{-i} \mid {\textnormal{x}}_i \sim (1-{\textnormal{x}}_i)\mathrm{Dirichlet}(\alpha_{-i})$, \\ where ${\mathbf{x}}_{-i}$ is a random vector excluding ${\textnormal{x}}_i$. \end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Aggregation\\ Property\end{tabular} & \begin{tabular}[c]{@{}l@{}}If $M={\textnormal{x}}_i+{\textnormal{x}}_j$, then $[{\textnormal{x}}_1, \ldots {\textnormal{x}}_{i-1}, {\textnormal{x}}_{i+1}, \ldots, {\textnormal{x}}_{j-1}, {\textnormal{x}}_{j+1}, \ldots, {\textnormal{x}}_K, M] \sim $\\ \gap\gap$\mathrm{Dirichlet}([\alpha_1, \ldots, \alpha_{i-1}, \alpha_{i+1}, \ldots, \alpha_{j-1}, \alpha_{j+1}, \ldots, \alpha_K, \alpha_i+\alpha_j])$.\\ In general, If $\{A_1, A_2, \ldots, A_r\}$ is a partition of $\{1, 2, \ldots, K\}$, then \\ $\left[\sum_{i\in A_1} {\textnormal{x}}_i, \sum_{i\in A_2} {\textnormal{x}}_i, \ldots, \sum_{i\in A_r} {\textnormal{x}}_i\right] \sim$\\ \gap\gap$\mathrm{Dirichlet}\left(\left[\sum_{i\in A_1} \alpha_i, \sum_{i\in A_2} \alpha_i, \ldots, \sum_{i\in A_r} \alpha_i\right]\right)$.\end{tabular} \\ \hline
\end{tabular}
\caption{Properties of the Dirichlet distribution.}
\label{table:dirichlet-property}
\end{table}
\subsection{Posterior Distribution for Multinomial Distribution}\label{section:dirichlet-dist-post}
For the conjugacy, we claim that if $(\bm{N} \mid \bm{\pi}) \sim \mathrm{Multi}_K(N, \bm{\pi})$ and $\bm{\pi} \sim \mathrm{Dirichlet}(\boldsymbol\alpha)$, then $(\bm{\pi} \mid\bm{N}) \sim \mathrm{Dirichlet}(\boldsymbol\alpha+\bm{N})$ = $\mathrm{Dirichlet}(\alpha_1+N_1, \ldots, \alpha_K+N_K)$.
\begin{proof}[of conjugate prior of multinomial distribution]
By Bayes' theorem ``$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior} $", we obtain the posterior density
$$
\begin{aligned}
\mathrm{posterior} &= p(\bm{\pi}\mid\boldsymbol\alpha, \bm{N}) \propto \mathrm{Multi}_K(\bm{N}\mid N, \bm{\pi}) \cdot \mathrm{Dirichlet}(\bm{\pi}\mid \boldsymbol\alpha) \\
&= \left(\frac{N!}{N_1! N_2! \ldots N_K!} \prod^K_{k=1}\pi_k^{N_k}\right) \cdot \left(\frac{1}{D(\boldsymbol\alpha)} \prod_{k=1}^K \pi_k ^ {\alpha_k - 1}\right)\\
&\propto \prod_{k=1}^K \pi_k ^ {\alpha_k +N_k - 1} \propto \mathrm{Dirichlet}(\bm{\pi}\mid \boldsymbol\alpha+\bm{N}).
\end{aligned}
$$
Therefore, $(\bm{\pi} \mid \bm{N}) \sim$ $\mathrm{Dirichlet}(\boldsymbol\alpha+\bm{N})$ = $\mathrm{Dirichlet}(\alpha_1+N_1, \ldots, \alpha_K+N_K)$.
\end{proof}
A comparison between the prior and posterior distribution reveals that the relative sizes of the Dirichlet parameters $\alpha_k$ describe the mean of the prior distribution of $\bm{\pi}$, and the sum of the $\alpha_k$'s is a measure of the strength of the prior distribution. The prior distribution is mathematically equivalent to a likelihood resulting from $\sum_{k=1}^K(\alpha_k - 1)$ observations with $\alpha_k - 1$ observations of the $k$-th group. Since the Dirichlet distribution is a multivariate generalization of the Beta distribution, the Beta distribution can be taken as a conjugate prior for the binomial distribution \citep{hoff2009first, frigyik2010introduction}.
\section{Poisson and Multinomial}
\index{Poisson distribution}
The \textit{Poisson} distribution is a discrete probability distribution that expresses the number of events in a fixed interval of time or space, given the average number of events in that interval. The Poisson distribution is commonly used to model count data, such as the number of calls received by a call center in an hour, or the number of emails received in a day, provided that the probability of a ``success" for any given instant is ``very small".
\begin{definition}[Poisson Distribution]\label{definition:poisson_distribution}
A random variable ${\textnormal{x}}$ is said to follow the Poisson distribution with rate parameter $\lambda>0$, denoted by ${\textnormal{x}} \sim \mathcal{P}(\lambda)$, if
$$
f(x; \lambda)= \frac{\lambda^x}{x!} \exp(-\lambda).
$$
The mean and variance of ${\textnormal{x}} \sim \mathcal{P}( \lambda)$ are given by
\begin{equation}
\mathrm{E}[{\textnormal{x}}] = \lambda, \qquad \mathrm{Var}[{\textnormal{x}}] =\lambda. \nonumber
\end{equation}
The support of a Poisson distribution is $\{0,1,2,3,\ldots\} = \{0\}\cup \mathbb{N}$. Figure~\ref{fig:dists_poisson} compares probability mass functions of different parameter values $\lambda$ for the Poisson distribution.
\end{definition}
The mean and variance of the Poisson distribution are the same. Roughly speaking, a Poisson distribution is the limit of a binomial distribution when $N\rightarrow \infty$ and $\pi=\lambda/N$, i.e., the number of trials diverges to infinity while the probability of success decreases to zero in inverse proportion to the number of trials. This is also known as the \textit{law of rare events}.
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/dists_poisson.pdf}
\caption{Poisson probability mass functions for different values of the parameter $\lambda$.}
\label{fig:dists_poisson}
\end{SCfigure}
The sum of independent Poisson random variables again follows a Poisson distribution.
\begin{theorem}[Sum of Independently Distributed Poisson]\label{theorem:sum_iid_poisson}
Let ${\textnormal{x}}_i\sim \mathcal{P}(\lambda_i)$ be independent for $i\in\{1,2,\ldots,n\}$. Then $ {\textnormal{y}}=\sum_{i=1}^{n} {\textnormal{x}}_i\sim \mathcal{P}(\sum_{i=1}^{n}\lambda_i)$.
\end{theorem}
For simplicity, we first consider two independent Poisson random variables ${\textnormal{x}}\sim \mathcal{P}(\lambda_1)$ and ${\textnormal{y}}\sim\mathcal{P}(\lambda_2)$. Define $\lambda=\lambda_1+\lambda_2$ and ${\textnormal{z}}={\textnormal{x}}+{\textnormal{y}}$. Then ${\textnormal{z}}$ is a Poisson random variable with parameter $\lambda$. To see this, we have
$$
\begin{aligned}
p(z) &= P({\textnormal{z}}=z) = \sum_{k=0}^{z} P({\textnormal{x}}=k) \cdot P({\textnormal{y}}=z-k)\\
&= \sum_{k=0}^{z} \frac{\lambda_1^k}{k!} \exp(-\lambda_1) \cdot \frac{\lambda_2^{z-k}}{(z-k)!} \exp(-\lambda_2)\\
&= \frac{\exp(-\lambda_1-\lambda_2)}{z!} \sum_{k=0}^{z} {z\choose k} \lambda_1^k\lambda_2^{z-k}\\
&\stackrel{*}{=}\frac{\exp(-\lambda)}{z!}(\lambda_1+\lambda_2)^z = \frac{\lambda^z}{z!} \exp(-\lambda),
\end{aligned}
$$
where the equality (*) follows from the \textit{binomial theorem}. More generally, once we know that the sum of two independent Poisson random variables is Poisson, we can add further independent Poisson variables one at a time to obtain the general statement.
\begin{theorem}[Poisson and Multinomial]\label{theorem:multinomial_poisson}
Let ${\textnormal{x}}_i\sim \mathcal{P}(\lambda_i)$ be independent for $i\in\{1,2,\ldots, K\}$. Then the conditional distribution of ${\mathbf{x}}=[{\textnormal{x}}_1,{\textnormal{x}}_2,\ldots, {\textnormal{x}}_K]^\top$ given $\sum_{i=1}^{K}{\textnormal{x}}_i=N$ is $\mathrm{Multi}_K(N,[ p_1, p_2, \ldots,p_K])$ with
$$
p_i= \frac{\lambda_i}{\lambda_1+\lambda_2+\ldots+\lambda_K}, \gap \text{for all }i\in\{1,2,\ldots,K\}.
$$
\end{theorem}
\chapter*{Notation}\label{notation}
\index{Notation}
This section provides a concise reference describing the notation used throughout this book. If you are unfamiliar with any of the corresponding mathematical concepts, the book describes most of these ideas in Chapter~\ref{chapter_introduction} (p.~\pageref{chapter_introduction}).
\vspace{0.4in}
\begin{minipage}{\textwidth}
\centerline{\bf Numbers and Arrays}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
$\displaystyle a$ & A scalar (integer or real)\\
$\displaystyle \bm{a}$ & A vector\\
$\displaystyle \bm{A}$ & A matrix\\
$\displaystyle \bm{\mathscr{A}}$ & A tensor\\
$\displaystyle \bm{I}_n$ & Identity matrix with $n$ rows and $n$ columns\\
$\displaystyle \bm{I}$ & Identity matrix with dimensionality implied by context\\
$\displaystyle {\bm{e}}_i$ & Standard basis vector $[0,\dots,0,1,0,\dots,0]$ with a 1 at position $i$\\
$\displaystyle \text{diag}({\bm{a}})$ & A square, diagonal matrix with diagonal entries given by ${\bm{a}}$\\
$\displaystyle {\textnormal{a}}$ & A scalar random variable\\
$\displaystyle {\mathbf{a}}$ & A vector-valued random variable\\
$\displaystyle {\mathbf{A}}$ & A matrix-valued random variable\\
\end{tabular}
\egroup
\index{Scalar} \index{Vector} \index{Matrix} \index{Tensor}
\end{minipage}
\index{Sets}
\vspace{0.2in}
\begin{minipage}{\textwidth}
\centerline{\bf Sets}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
$\displaystyle {\mathbb{A}}$ & A set\\
$\displaystyle \varnothing$ & The null set \\
$\displaystyle \mathbb{R}$ & The set of real numbers \\
$\displaystyle \mathbb{N}$ & The set of natural numbers \\
$\displaystyle \mathbb{C}$ & The set of complex numbers \\
$\displaystyle \{0, 1\}$ & The set containing 0 and 1 \\
$\displaystyle \{0, 1, \dots, n \}$ & The set of all integers between $0$ and $n$\\
$\displaystyle [a, b]$ & The real interval including $a$ and $b$\\
$\displaystyle (a, b]$ & The real interval excluding $a$ but including $b$\\
$\displaystyle {\mathbb{A}} \backslash {\mathbb{B}}$ & Set subtraction, i.e., the set containing the elements of ${\mathbb{A}}$ that are not in ${\mathbb{B}}$\\
\end{tabular}
\egroup
\index{Scalar} \index{Vector} \index{Matrix} \index{Tensor} \index{Graph} \index{Set}
\end{minipage}
\index{Matrix indexing}
\vspace{0.2in}
\begin{minipage}{\textwidth}
\centerline{\bf Indexing}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
$\displaystyle {a}_i$ & Element $i$ of vector ${\bm{a}}$, with indexing starting at 1 \\
$\displaystyle {a}_{-i}$ & All elements of vector ${\bm{a}}$ except for element $i$ \\
$\displaystyle {\bm{A}}_{ij}, {A}_{ij}, a_{ij}$ & Element $i, j$ of matrix ${\bm{A}}$ \\
$\displaystyle {\bm{A}}_{i, :}$ & Row $i$ of matrix ${\bm{A}}$ \\
$\displaystyle {\bm{A}}_{:, i}$ & Column $i$ of matrix ${\bm{A}}$ \\
$\displaystyle \bm{\mathscr{A}}_{ijk}, {\mathscr{A}}_{ijk}, a_{ijk}$ & Element $(i, j, k)$ of a 3-D tensor $\bm{\mathscr{A}}$\\
$\displaystyle \bm{\mathscr{A}}_{:, :, i}$ & 2-D slice of a 3-D tensor $\bm{\mathscr{A}}$\\
\end{tabular}
\egroup
\end{minipage}
\vspace{0.2in}
\begin{minipage}{\textwidth}
\centerline{\bf Linear Algebra Operations}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
$\displaystyle \bm{A}^\top$ & Transpose of matrix ${\bm{A}}$ \\
$\displaystyle \bm{A}^+$ & Moore-Penrose pseudoinverse of ${\bm{A}}$\\
$\displaystyle \bm{A} \circledast \bm{B} $ & Element-wise (Hadamard) product of ${\bm{A}}$ and ${\bm{B}}$ \\
$\displaystyle \mathrm{det}(\bm{A})$ & Determinant of $\bm{A}$ \\
$\displaystyle \mathrm{rref}(\bm{A})$ & Reduced row echelon form of $\bm{A}$ \\
$\displaystyle \mathcal{C}(\bm{A})$ & Column space of $\bm{A}$ \\
$\displaystyle \mathcal{N}(\bm{A})$ & Null space of $\bm{A}$ \\
$\displaystyle \mathcal{V}$ & A general subspace \\
$\displaystyle \mathrm{rank}(\bm{A})$ & Rank of $\bm{A}$ \\
$\displaystyle \mathrm{tr}(\bm{A})$ & Trace of $\bm{A}$ \\
\end{tabular}
\egroup
\index{Transpose} \index{Element-wise product|see {Hadamard product}} \index{Hadamard product} \index{Determinant}
\end{minipage}
\vspace{0.4in}
\begin{minipage}{\textwidth}
\centerline{\bf Calculus}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
$\displaystyle\frac{d y} {d x}$ & Derivative of $y$ with respect to $x$\\ [2ex]
$\displaystyle \frac{\partial y} {\partial x} $ & Partial derivative of $y$ with respect to $x$ \\
$\displaystyle \nabla_{\bm{x}} y $ & Gradient of $y$ with respect to $\bm{x}$ \\
$\displaystyle \nabla_{\bm{X}} y $ & Matrix derivatives of $y$ with respect to $\bm{X}$ \\
$\displaystyle \nabla_{\bm{\mathscr{X}}} y $ & Tensor containing derivatives of $y$ with respect to $\bm{\mathscr{X}}$ \\
$\displaystyle \frac{\partial f}{\partial {\bm{x}}} $ & Jacobian matrix ${\bm{J}} \in \mathbb{R}^{m\times n}$ of $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$\\
$\displaystyle \nabla_{\bm{x}}^2 f({\bm{x}})\text{ or }{\bm{H}}( f)({\bm{x}})$ & The Hessian matrix of $f$ at input point ${\bm{x}}$\\
$\displaystyle \int f({\bm{x}}) d{\bm{x}} $ & Definite integral over the entire domain of ${\bm{x}}$ \\
$\displaystyle \int_{\mathbb{S}} f({\bm{x}}) d{\bm{x}}$ & Definite integral with respect to ${\bm{x}}$ over the set ${\mathbb{S}}$ \\
\end{tabular}
\egroup
\index{Derivative} \index{Integral} \index{Jacobian matrix} \index{Hessian matrix}
\end{minipage}
\vspace{0.4in}
\begin{minipage}{\textwidth}
\centerline{\bf Probability and Information Theory}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
$\displaystyle {\textnormal{a}} \bot {\textnormal{b}}$ & The random variables ${\textnormal{a}}$ and ${\textnormal{b}}$ are independent\\
$\displaystyle {\textnormal{a}} \bot {\textnormal{b}} \mid {\textnormal{c}} $ & They are conditionally independent given ${\textnormal{c}}$\\
$\displaystyle P({\textnormal{a}})$ & A probability distribution over a discrete variable\\
$\displaystyle p({\textnormal{a}})$ & A probability distribution over a continuous variable, or over a variable whose type has not been specified\\
$\displaystyle {\textnormal{a}} \sim P$ & Random variable ${\textnormal{a}}$ has distribution $P$\\
$\displaystyle \mathbb{E}_{{\textnormal{x}}\sim P} [ f(x) ]\text{ or } \mathbb{E} [f(x)]$ & Expectation of $f(x)$ with respect to $P({\textnormal{x}})$ \\
$\displaystyle \mathrm{Var}[f(x)] $ & Variance of $f(x)$ under $P({\textnormal{x}})$ \\
$\displaystyle \mathrm{Cov}[f(x),g(x)] $ & Covariance of $f(x)$ and $g(x)$ under $P({\textnormal{x}})$\\
$\displaystyle H({\textnormal{x}}) $ & Shannon entropy of the random variable ${\textnormal{x}}$\\
$\displaystyle D_{\mathrm{KL}} ( P \Vert Q ) $ & Kullback-Leibler divergence of P and Q \\
$\displaystyle \mathcal{N} ( {\bm{x}} | {\bm{\mu}} , {\bm{\Sigma}})$ & Gaussian distribution over ${\bm{x}}$ with mean ${\bm{\mu}}$ and covariance ${\bm{\Sigma}}$ \\
\end{tabular}
\egroup
\index{Independence} \index{Conditional independence} \index{Variance} \index{Covariance} \index{Kullback-Leibler divergence} \index{Shannon entropy}
\end{minipage}
\vspace{0.4in}
\begin{minipage}{\textwidth}
\centerline{\bf Functions}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
$\displaystyle f: {\mathbb{A}} \rightarrow {\mathbb{B}}$ & The function $f$ with domain ${\mathbb{A}}$ and range ${\mathbb{B}}$\\
$\displaystyle f \circ g $ & Composition of the functions $f$ and $g$ \\
$\displaystyle f({\bm{x}} ; {\bm{\theta}}) $ & A function of ${\bm{x}}$ parametrized by
${\bm{\theta}}$. (Sometimes we write $f({\bm{x}})$ and omit the argument ${\bm{\theta}}$ to lighten notation) \\
$\displaystyle \log(x)$ & Natural logarithm of $x$ \\
$\displaystyle \sigma(x)$ & Logistic sigmoid, i.e., $\displaystyle \frac{1} {1 + \exp(-x)}$ \\
$\displaystyle \zeta(x)$ & Softplus, $\log(1 + \exp(x))$ \\
$\displaystyle \norm{\bm{x}}_p $ & $L_p$ norm of ${\bm{x}}$ \\
$\displaystyle \norm{\bm{x}}=\norm{\bm{x}}_2 $ & $L_2$ norm of ${\bm{x}}$ \\
$\displaystyle x^+$ & Positive part of $x$, i.e., $\max(0,x)$\\
$\displaystyle u(x)$ & Step function with value 1 when $x\geq0$ and value 0 otherwise\\
$\displaystyle \mathds{1}\{\mathrm{condition}\}$ & is 1 if the condition is true, 0 otherwise\\
\end{tabular}
\egroup
\index{Sigmoid} \index{Softplus} \index{Norm}
\end{minipage}
Sometimes we use a function $f$ whose argument is a scalar but apply it to a vector, matrix, or tensor: $f({\bm{x}})$, $f({\bm{X}})$, or $f(\bm{\mathscr{X}})$. This denotes the application of $f$ to the array element-wise. For example, if $\bm{\mathscr{C}} = \sigma(\bm{\mathscr{X}})$, then $\mathscr{C}_{i,j,k} = \sigma(\mathscr{X}_{i,j,k})$ for all valid values of $i$, $j$, and $k$.
\vspace{0.4in}
\begin{minipage}{\textwidth}
\centerline{\bf Abbreviations}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{cp{4.25in}}
MCMC & Markov chain Monte Carlo \\
i.i.d. & Independently and identically distributed \\
p.d.f. & Probability density function \\
p.m.f. & Probability mass function \\
OLS & Ordinary least squares\\
NG & Normal-Gamma distribution \\
NIG & Normal-inverse-Gamma distribution \\
NIX & Normal-inverse-Chi-squared distribution\\
TN & Truncated-normal distribution \\
GTN & General-truncated-normal distribution\\
RN & Rectified-normal distribution \\
IW & Inverse-Wishart distribution \\
NIW & Normal-inverse-Wishart distribution \\
ALS & Alternating least squares \\
GD & Gradient descent\\
SGD & Stochastic gradient descent \\
MU & Multiplicative update \\
MSE & Mean squared error\\
NMF & Nonnegative matrix factorization\\
ID & Interpolative decomposition\\
IID & Intervened interpolative decomposition \\
BID & Bayesian interpolative decomposition \\
\end{tabular}
\egroup
\end{minipage}
\clearpage
{ "arxiv_id": "2302.11375", "language": "en", "timestamp": "2023-02-23T02:14:47", "url": "https://arxiv.org/abs/2302.11375", "yymm": "2302" }
\section{Introduction}
Let $\tilde{A}(t)$ be an $N\times N$ matrix-valued function analytic over $t\in \mathcal{I}=[0,1]$ and $I_N$ the $N \times N$ identity matrix. Then, the system of ODEs
\begin{equation}\label{eq:ode:intro:hom}
\frac{d}{dt}U_s(t) = \tilde{A}(t) U_s(t), \quad U_s(s)=I_N, \quad \text{ for } t \geq s, \quad t,s\in \mathcal{I},
\end{equation}
has a unique solution $U_s(t)$. When $\tilde{A}(\tau_1)\tilde{A}(\tau_2)=\tilde{A}(\tau_2)\tilde{A}(\tau_1)$ for every $\tau_1,\tau_2 \in \mathcal{I}$, $U_s(t)$ takes the form
$$U_s(t)=\exp\left(\int_s^{t} \tilde{A}(\tau)\, \text{d}\tau\right).$$
In general, however, $U_s(t)$ has no known simple expression in terms of $\tilde{A}(t)$. Indeed, although systems of non-autonomous linear ODEs are crucial, common problems that appear in a variety of contexts \cite{Autler1955,BenEtAll17,Blanes15,kwaSiv72,Lauder1986,Shirley1965}, their solution is surprisingly difficult to express analytically. When $\tilde{A}(t)$ is a scalar function, \cite{PozVan22proc_scalar} shows that a closed form of the solution exists in the non-commutative ring $\mathcal{S}$ composed of a certain distribution set $\mathcal{D}(\mathcal{I})$ \cite{schwartz1978}, the so-called $\star$-product \cite{ProceedingsPaper2020}, and the usual addition. The $\star$-product is a convolution-like operation that generalizes the \emph{Volterra composition} (e.g., \cite{Volterra1928}). The closed form is given in terms of a $\star$-product inverse in the ring. Moreover, it is easy to define an $\mathcal{S}$-module of matrices with a bilinear product that generalizes the results to the case of a matrix-valued $\tilde{A}(t)$; see \cite{Giscard2015,GiscardPozza2021}. In a few words, in this case, the solution $U_s(t)$ is given by the bilinear product inverse of a matrix in the $\mathcal{S}$-module. This means that, in the framework of the ring $\mathcal{S}$, it is possible to express $U_s(t)$ in closed form for every matrix-valued analytic function $\tilde{A}(t)$. This new expression has led to several new symbolic and numerical approaches to the solution of \eqref{eq:ode:intro:hom} \cite{Giscard2015,BonGis20,ProceedingsPaper2020,GiscardPozza2021,GiscardPozza2022,PozVan22proc_mtx,PozVan22proc_PANM,PozVan22proc_scalar}. In the works mentioned above, the new expression for the solution of \eqref{eq:ode:intro:hom} was not derived in the ring $\mathcal{S}$, but in alternative, equivalent ways. This paper aims to show the potential of working directly in the ring $\mathcal{S}$ and its matrix module. We do that by deriving a new result, namely, the expression for the solution of the non-homogeneous system of linear ODEs
\begin{equation}\label{eq:ode:intro}
\frac{d}{dt}U_s(t) = \tilde{A}(t) U_s(t) + \tilde{B}(t), \quad U_s(s)=I_N, \quad \text{ for } t \geq s, \quad t,s\in \mathcal{I},
\end{equation}
where $\tilde{B}(t)$ is an $N\times N$ matrix-valued analytic function over $\mathcal{I}$. Moreover, we will show that there is a subring of $\mathcal{S}$ that corresponds to a subalgebra of infinite matrices, and we will prove the existence of certain matrix inverses in the subalgebra using the connection with $\mathcal{S}$. In Section \ref{sec:starexpression}, we define the $\star$-product and the related algebraic structures, and we derive the new expression of the solution of \eqref{eq:ode:intro}. Section \ref{sec:starsol} shows the connection between the $\mathcal{S}$ subring and a subalgebra of infinite matrices.
As a consequence, the ODE solution can be obtained by solving a linear system in the subalgebra. Section \ref{sec:conclusion} concludes the presentation.
\section{A $\star$-product solution to non-homogeneous ODEs}\label{sec:starexpression}
Let $\tilde{f}_1(t,s), \tilde{f}_2(t,s)$ be two bivariate functions and assume that they are analytic\footnote{Note that in previously appeared works we have usually assumed the functions to be smooth. Here we restrict the assumption to analytic functions for the sake of simplicity.} in both $t$ and $s$ over $\mathcal{I}=[0, 1]$; we denote such a set of functions by $\mathcal{A}(\mathcal{I})$. The \emph{Volterra composition} of $\tilde{f}_1, \tilde{f}_2$, introduced by Vito Volterra (e.g., \cite{Volterra1928}), is defined as
\begin{equation*}
\big(\tilde{f}_2 \star_v \tilde{f}_1\big)(t,s) := \int_s^{t} \tilde{f}_2(t,\tau) \tilde{f}_1(\tau, s) \, \text{d}\tau, \quad t,s \in \mathcal{I}.
\end{equation*}
Note that, from now on, a function marked with a tilde will stand for a function from $\mathcal{A}(\mathcal{I})$. If we regard the Volterra composition as a product, it lacks important features; for instance, it has no identity element. This is why the Volterra composition has been extended to the so-called $\star$-product \cite{ProceedingsPaper2020}. Let $\Theta(t-s)$ be the Heaviside theta function, i.e.,
\begin{equation*}
\Theta(t-s) = \begin{cases} 1, \quad t \geq s \\ 0, \quad t < s \end{cases}.
\end{equation*}
Moreover, let $\delta(\cdot)=\delta^{(0)}(\cdot)$ be the Dirac delta distribution and $\delta^{(i)}(\cdot)$ its $i$th derivative. We denote by $\mathcal{D}(\mathcal{I})$ the class of the distributions $d$ that can be expressed as
\begin{equation*}
d(t,s)=\widetilde{d}(t,s)\Theta(t-s) + \sum_{i=0}^k \widetilde{d}_i(t,s)\delta^{(i)}(t-s),
\end{equation*}
with $\tilde{d}, \tilde{d}_i \in \mathcal{A}(\mathcal{I})$. The $\star$-product $ \star: \mathcal{D}(\mathcal{I}) \times \mathcal{D}(\mathcal{I}) \rightarrow \mathcal{D}(\mathcal{I}) $ is defined as
\begin{equation}\label{eq:def:star}
\big(f_2 \star f_1\big)(t,s) := \int_\mathcal{I} f_2(t,\tau) f_1(\tau, s) \, \text{d}\tau, \quad f_1, f_2 \in \mathcal{D}(\mathcal{I}).
\end{equation}
Consider the subclass $\mathcal{A}_\Theta(\mathcal{I}) \subset \mathcal{D}(\mathcal{I})$ comprising those distributions of the form
\begin{equation*}
f(t,s)=\widetilde{f}(t,s)\Theta(t-s).
\end{equation*}
Then, the $\star$-product of $f_1,f_2 \in \mathcal{A}_\Theta(\mathcal{I})$ is equivalent to the Volterra composition:
\begin{align*}
\big(f_2 \star f_1\big)(t,s) &= \int_{\mathcal{I}} \widetilde{f}_2(t,\tau) \widetilde{f}_1(\tau, s)\Theta(t-\tau)\Theta(\tau-s) \, \text{d}\tau,\\
&=\Theta(t-s)\int_s^{t} \widetilde{f}_2(t,\tau) \widetilde{f}_1(\tau, s) \, \text{d}\tau = \Theta(t-s)( \tilde{f}_2 \star_v \tilde{f}_1)(t,s).
\end{align*}
The $\star$-product is well-defined and closed in $\mathcal{D}(\mathcal{I})$; we refer the reader to \cite{GiscardPozza2021,ProceedingsPaper2020} for further details. For the goals of this paper, it is enough to recall the following properties. Given $f \in \mathcal{A}_\Theta(\mathcal{I})$, then
\begin{align}
\left(\delta'(t-s) \star f \right) (t,s) &= \left(\partial_t \tilde{f}(t,s)\right)\Theta(t-s) + \tilde{f}(s,s)\delta(t-s); \label{eq:diracder} \\
\left(f\star\delta'(t-s) \right) (t,s) &= -\left(\partial_s \tilde{f}(t,s)\right)\Theta(t-s) + \tilde{f}(t,t)\delta(t-s); \nonumber
\end{align}
see \cite{schwartz1978,ProceedingsPaper2020}.
As a consequence,
\begin{equation}\label{eq:dira1:inverse}
\Theta(t-s) \star \delta'(t-s) = \delta'(t-s) \star \Theta(t-s) = \delta(t-s),
\end{equation}
i.e., $\delta'$ is the $\star$-inverse of $\Theta$. Moreover,
\begin{itemize}
\item $\mathcal{D}(\mathcal{I})$ is closed under $\star$-multiplication;
\item the $\star$-product is associative over $\mathcal{D}(\mathcal{I})$;
\item the Dirac delta distribution $1_\star(t,s):=\delta(t-s)$ is the identity of the $\star$-product.
\end{itemize}
Therefore, $\mathcal{S}(\mathcal{I}):=(\mathcal{D}(\mathcal{I}), \star, +, 0, 1_\star)$ is a non-commutative ring. The $\star$-product can also be extended to matrices and vectors composed of elements from $\mathcal{D}(\mathcal{I})$. This is easily done by replacing the standard multiplication appearing in the integrand of \eqref{eq:def:star} with the usual matrix-matrix multiplication \cite{GiscardPozza2022}. Similarly, we can define the right (and left) scalar-matrix multiplication. As a result, we obtain the module of matrices with elements from $\mathcal{D}(\mathcal{I})$, whose bilinear product is the $\star$-product between matrices and whose scalar product is the $\star$-product between a scalar and a matrix.
\bigskip
The system of ODEs in \eqref{eq:ode:intro} can be rewritten in the form
\begin{equation}\label{eq:ode:star}
\partial_t U(t,s) = \tilde{A}(t) \Theta(t-s) U(t,s) + \tilde{B}(t,s)\Theta(t-s), \quad U(s,s) = I_N, \quad t,s \in \mathcal{I},
\end{equation}
with $U_s(t) = U(t,s)$. Note that the matrices $U(t,s) = \tilde{U}(t,s)\Theta(t-s)$, $A(t,s) := \tilde{A}(t) \Theta(t-s)$, and $B(t,s):= \tilde{B}(t,s) \Theta(t-s)$ are all composed of elements from $\mathcal{A}_\Theta(\mathcal{I})$. Therefore, by exploiting formulas \eqref{eq:diracder} and \eqref{eq:dira1:inverse}, equation~\eqref{eq:ode:star} becomes
\begin{equation}\label{eq:ode:star2}
\delta'(t-s) \star U(t,s) = \tilde{A}(t)U(t,s) + I_\star(t,s) + B(t,s),
\end{equation}
where $I_\star(t,s)$ is the identity matrix $I_N$ multiplied by $\delta(t-s)= 1_\star(t,s)$. Once the problem has been rewritten in the $\star$-framework, we can derive a formula for its solution by working in the $\mathcal{S}(\mathcal{I})$-module. If we $\star$-multiply \eqref{eq:ode:star2} from the left by $\Theta(t-s)$, we obtain
\begin{equation}\label{eq:star:u:iter}
U(t,s) = \Theta(t-s) \star \left( \tilde{A}(t)U(t,s) + I_\star(t,s) + B(t,s) \right).
\end{equation}
Now, by replacing $U(t,s)$ in the right-hand side of \eqref{eq:star:u:iter} with the right-hand side of \eqref{eq:star:u:iter} itself, we get the following iterations (we drop the dependence on $t,s$ for the sake of readability)
\begin{align*}
U &= \Theta \star \left( \tilde{A}U + I_\star + B \right), \\
&= \Theta \star \left( \tilde{A}\left(\Theta \star \left( \tilde{A}U + I_\star + B \right)\right) + I_\star + B \right), \\
&= \Theta \star \left( A \star \left( \tilde{A}U + I_\star + B \right) + I_\star + B \right), \\
&= \Theta \star \left( A \star \tilde{A}U + \left( A + I_\star \right) \star ( I_\star + B ) \right).
\end{align*}
Note that the equality
\begin{equation*}
\tilde{A}\left(\Theta \star \left( \tilde{A}U + I_\star + B \right)\right) = A \star \left( \tilde{A}U + I_\star + B \right)
\end{equation*}
holds since $\tilde{A}(t)$ does not depend on $s$.
Repeating the iterations $k$ times\footnote{In fact, such iterations are Picard iterations, see \cite[Section 2]{PozVan22proc_PANM}.}, we obtain
\begin{align}
U &= \Theta \star \left( A \star A \star \tilde{A}U + \left( A \star A + A + I_\star \right) \star ( I_\star + B ) \right), \nonumber \\
& \, \, \,\vdots \nonumber \\
&= \Theta \star \left( A^{\star k} \star \tilde{A}U + \left( A^{\star k} + \dots + A + I_\star \right) \star ( I_\star + B ) \right), \label{eq:picard:it}
\end{align}
with $A^{\star k}$ the $k$th $\star$-power of $A$. As shown in \cite{GiscardPozza2022},
\begin{equation*}
\left\| \left(A^{\star k}\right)(t,s) \right\| \leq \left(\max_{t,s \in \mathcal{I}}\|A(t,s) \|\right)^k \frac{(t-s)^{k-1}}{(k-1)!}, \quad k \geq 1,
\end{equation*}
for any induced matrix norm. Therefore, \eqref{eq:picard:it} uniformly converges to the expression
\begin{equation}\label{eq:star:sol}
U(t,s) = \Theta(t-s) \star R_\star(A)(t,s) \star \left( I_\star(t,s) + B(t,s) \right),
\end{equation}
where $R_\star(A)$ is the $\star$-resolvent of $A$, i.e.,
\begin{equation*}
R_\star(A) = I_\star + \sum_{k=1}^\infty \left(A^{\star k}\right)(t,s).
\end{equation*}
Noticing that
\begin{equation}\label{eq:res:inv}
R_{\star}(A) \star (I_\star - A) = \left(I_\star + \sum_{k\geq 1} A^{\star k}\right) \star (I_\star - A) = I_\star,
\end{equation}
that is, $R_{\star}(A) = (I_\star - A)^{-\star}$ (the $\star$-inverse of $(I_\star - A)$), we get
\begin{equation}\label{eq:star:sol:closed}
U(t,s) = \Theta(t-s) \star (I_\star - A)^{-\star}(t,s) \star \left( I_\star(t,s) + B(t,s) \right),
\end{equation}
which is a closed-form expression in the $\mathcal{S}$-module. Finally, since the matrix $A$ is composed of elements from the subset
$$\mathcal{A}_\Theta^t(\mathcal{I}) :=\left\{f \in \mathcal{A}_\Theta(\mathcal{I}) : f(t,s) = \tilde{f}(t)\Theta(t-s) \right\} \subset \mathcal{A}_\Theta(\mathcal{I}),$$
it is useful to define the set
$$ \mathcal{D}^t_0(\mathcal{I}) := \left\{f(t,s) = \alpha 1_\star + \sum_{i=1}^n \big(g_{i,1} \star \dots \star g_{i,m_i} \big), \; g_{i,j} \in \mathcal{A}_\Theta^t(\mathcal{I}) , \; \alpha \in \mathbb{C}\right\} $$
and the related subring $(\mathcal{D}^t_0(\mathcal{I}), \star, +, 0, 1_\star)$.
\section{The $\star$-product and the matrix algebra}\label{sec:starsol}
Let $\{p_k\}_k$ be a sequence of orthonormal shifted Legendre polynomials over the bounded interval $\mathcal{I}=[0,1]$, i.e.,
\begin{align*}
\int_{\mathcal{I}} p_k(\tau) p_\ell(\tau) d\tau = \delta_{k,\ell} = \begin{cases} 0,\quad \text{if }k\neq \ell\\ 1,\quad \text{if } k=\ell \end{cases}.
\end{align*}
Although the functions $p_k$ are not in $\mathcal{D}(\mathcal{I})$, with a small abuse of notation we can still define the product
$$ p_k(s) \star p_\ell(t) = \int_\mathcal{I} p_k(\tau) p_\ell(\tau) \; \textrm{d} \tau = \delta_{k,\ell}. $$
Given a function $f(t,s)= \tilde{f}(t,s) \Theta(t-s) \in \mathcal{A}_\Theta(\mathcal{I})$, we can expand it into the series
\begin{equation}\label{eq:f:exp}
f(t,s) = \sum_{k=0}^\infty \sum_{\ell=0}^\infty f_{k,\ell} \, p_k(t) p_\ell(s), \quad t \neq s, \quad t,s \in \mathcal{I},
\end{equation}
with coefficients
\begin{equation*}
f_{k,\ell} = \int_\mathcal{I} \int_\mathcal{I} f(\tau,\rho) p_k(\tau) p_\ell(\rho) \; \textrm{d} \rho \; \textrm{d} \tau;
\end{equation*}
see, e.g., \cite[p.~55]{LebSil72}.
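As a small numerical illustration (Python with NumPy; the function names, grid size, and example $\tilde{f}$ are our own choices and the snippet is only a sketch), the coefficients $f_{k,\ell}$ can be approximated by two-dimensional quadrature against the orthonormal shifted Legendre polynomials $\sqrt{2k+1}\,P^*_k$ on $[0,1]$.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import Legendre

def p(k, t):
    # Orthonormal shifted Legendre polynomial on [0, 1]: sqrt(2k+1) * P*_k(t).
    return np.sqrt(2 * k + 1) * Legendre.basis(k, domain=[0, 1])(t)

def coeff(ftilde, k, l, n=400):
    # Coefficient f_{k,l} of f(t,s) = ftilde(t) * Theta(t - s), approximated
    # with the midpoint rule on an n-by-n grid over [0, 1] x [0, 1].
    t = (np.arange(n) + 0.5) / n
    T, S = np.meshgrid(t, t, indexing="ij")
    F = ftilde(T) * (T >= S)
    w = 1.0 / n
    return np.einsum("i,ij,j->", p(k, t) * w, F, p(l, t) * w)

M = 6
Fmat = np.array([[coeff(np.exp, k, l) for l in range(M)] for k in range(M)])
print(np.round(Fmat, 3))
\end{verbatim}
For $\tilde{f}(t)=e^t$, the printed $6\times 6$ block already shows entries that shrink quickly away from the diagonal, which is the behavior quantified by the bounds below.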
The expansion can be rewritten in the matrix form
\begin{align*}
f(t,s) = \sum_{k=0}^{\infty} \sum_{\ell=0}^{\infty} f_{k,\ell} \, p_k(t) p_\ell(s) = \phi(t)^T F \, \phi(s), \quad t \neq s, \quad t,s \in \mathcal{I},
\end{align*}
where the \emph{coefficient matrix} $F$ and the vector $\phi(\tau)$ are defined as follows:
\begin{equation}\label{eq:coeff:mtx}
F := \begin{bmatrix} f_{0,0} & f_{0,1} & f_{0,2} & \dots \\ f_{1,0} & f_{1,1} & f_{1,2} & \dots \\ f_{2,0} & f_{2,1} & f_{2,2} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \quad \phi(\tau) := \begin{bmatrix} p_0(\tau)\\ p_1(\tau)\\ p_2(\tau) \\ \vdots \end{bmatrix}.
\end{equation}
Note that each element of $F$ can be bounded by
\begin{equation}\label{eq:F:bound:const}
|f_{k,\ell}| \leq \max_{t, s \in [0,1]} |\tilde{f}(t,s)| \sqrt{2k + 1} \sqrt{2\ell + 1}.
\end{equation}
In particular, a univariate function $\tilde{f}(t)$ can be expanded as
\begin{equation*}
\tilde{f}(t) := \sum_{d=0}^\infty \alpha_d p_d(t),\quad \text{with } \alpha_d = \int_{\mathcal{I}} \tilde{f}(t) p_d(t) dt.
\end{equation*}
Let $B^{(k)}$ be the coefficient matrix of $p_k(t) \Theta(t-s)$. Then, the coefficient matrix of $f(t,s) = \tilde{f}(t)\Theta(t-s)$ can also be expanded into the series
$$ F = \sum_{k=0}^\infty \alpha_k B^{(k)}. $$
Note that each $B^{(k)}$ is a banded matrix with bandwidth $k+1$ \cite{PozVan22proc_scalar}. Moreover, the Fourier coefficients $\{\alpha_k\}_{k\geq 0}$ decay geometrically \cite{TreApp13}. Indeed, there exist $0<\rho<1$ and $C>0$ such that
\begin{equation}\label{eq:coeffDecayRate}
\vert \alpha_k\vert \leq C \rho^{k}.
\end{equation}
As a consequence, each element of $F$ can be bounded as follows:
\begin{align}
|f_{k,\ell}| &= \left| \sum_{j=|k-\ell|+2}^\infty \alpha_j B^{(j)}_{k,\ell} \right | \leq \sum_{j=|k-\ell|+2}^\infty |\alpha_j| | B^{(j)}_{k,\ell} | \nonumber \\
&\leq C \max_{t \in [0,1]} |\tilde{f}(t)| \sum_{j=|k-\ell|+2}^\infty \rho^{j} \sqrt{2k + 1} \sqrt{2\ell + 1} \nonumber \\
&\leq K \rho^{|k-\ell|+2}, \label{eq:F:bound}
\end{align}
for some $K>0$. This means that $F$ is characterized by a geometric decay of the element magnitude as we move away from the diagonal. Consider $f(t,s) = \tilde{f}(t)\Theta(t-s)$, $g(t,s) = \tilde{g}(t)\Theta(t-s) \in \mathcal{A}_\Theta^t(\mathcal{I})$ and the related coefficient matrices \eqref{eq:coeff:mtx} $F$ and $G$, respectively. By orthogonality, $\phi(s) \star \phi(t)^T = I$, with $I$ the identity matrix. Therefore, for every $t \neq s$,
\begin{align*}
(f\star g)(t,s) &= \left( \phi(t)^T F \, \phi(s) \right) \star \left( \phi(t)^T G \, \phi(s) \right), \\
&= \phi(t)^T F \left(\phi(s) \star \phi(t)^T \right) G \, \phi(s), \\
&= \phi(t)^T FG \, \phi(s).
\end{align*}
Thus, the coefficient matrix $H$ of the function $h = f \star g \in \mathcal{D}_0^t(\mathcal{I})$ is given by the matrix-matrix product $FG$. It is important to note that the product $FG$ is well-defined. Indeed, the series defining its entries converges since, by \eqref{eq:F:bound}, there exist $K>0$ and $0< \rho < 1$ such that
\begin{align*}
\left|(FG)_{k,\ell} \right| &= \left|\sum_{j=1}^\infty F_{k,j} G_{j,\ell} \right| \leq \sum_{j=1}^\infty \left| F_{k,j} \right| \left| G_{j,\ell} \right| \\
&\leq \sum_{j=1}^\infty K_f K_g \rho_f^{|k-j|+2} \rho_g^{|\ell-j|+2} \leq \sum_{j=1}^\infty K \rho^{|k-j| + |\ell-j| + 4}.
\end{align*}
Moreover, since $\min_{j=1,2\dots} |k-j| + |\ell-j| = |k-\ell|$, there exist $K_{fg}>0$ and $0<\rho_{fg}<1$ such that
\begin{align}\label{eq:FG:bound}
\left|(FG)_{k,\ell} \right| \leq K_{fg} \rho_{fg}^{|k-\ell|}.
\end{align}
This latter bound shows that $FG$ is also characterized by a geometric decay of the element magnitude as we move away from the diagonal. Therefore, given the coefficient matrices $F, G, H$ of $f(t,s) = \tilde{f}(t)\Theta(t-s)$, $g(t,s) = \tilde{g}(t)\Theta(t-s)$, $h(t,s) = \tilde{h}(t)\Theta(t-s)$, the matrix product $FGH$ is also well-defined and characterized by an off-diagonal geometric decay. As a consequence, the set $\mathcal{F}$ of all the coefficient matrices of functions from $\mathcal{D}_0^t(\mathcal{I})$ is a subalgebra (with the usual sum, scalar multiplication, and matrix product), and it corresponds to the subring $(\mathcal{D}_0^t(\mathcal{I}), \star, +, 0, 1_\star)$.
\bigskip
Consider now the $N \times N$ matrix-valued functions $A(t,s) = [f_{i,j}(t,s)]_{i,j=1}^N$ and $B(t,s) = [g_{i,j}(t,s)]_{i,j=1}^N$ composed of elements from $\mathcal{D}_0^t(\mathcal{I})$. The functions $f_{i,j}$ and $g_{i,j}$ are associated with their coefficient matrices $F^{(i,j)}$ and $G^{(i,j)}$, respectively. By extending the arguments presented above, we get the following expression for the (matrix) $\star$-product $C = A \star B = [h_{i,j}]_{i,j=1}^N$:
\begin{equation*}
\left(A \star B \right)_{k,\ell}(t,s) = \sum_{j=1}^N (f_{k,j} \star g_{j,\ell})(t,s) = \sum_{j=1}^N \phi(t)^T F^{(k,j)} G^{(j,\ell)} \, \phi(s), \quad t\neq s.
\end{equation*}
Therefore, the coefficient matrices $H^{(k,\ell)}$ of the functions $h_{k,\ell}(t,s)$ are given by
\begin{equation} \label{eq:block:prod}
H^{(k,\ell)} = \sum_{j=1}^N F^{(k,j)} G^{(j,\ell)}.
\end{equation}
Defining the block matrices $\mathbf{A} = [F^{(i,j)}]_{i,j=1}^N$, $\mathbf{B} = [G^{(i,j)}]_{i,j=1}^N$, $\mathbf{C} = [H^{(i,j)}]_{i,j=1}^N$, we obtain the relation
\begin{equation}\label{eq:block:mtx:prod}
\mathbf{C} = \mathbf{A} \mathbf{B}.
\end{equation}
Note that, despite the blocks having an infinite size, the product in \eqref{eq:block:mtx:prod} is well-defined since the matrix products in \eqref{eq:block:prod} are well-defined. In Section~\ref{sec:starexpression}, we have seen the crucial role played by the $\star$-resolvent $R_\star(A)$. Let $C := \sum_{k\geq 1} A^{\star k} = [h_{i,j}]_{i,j=1}^N$, with $ h_{i,j} \in \mathcal{A}_\Theta$. Therefore, using the notation above,
\begin{align*}
\left(R_{\star}(A)\right)_{k,\ell}(t,s) &= \phi(t)^T \left(I + H^{(k,\ell)} \right) \phi(s).
\end{align*}
Hence, for $t \neq s$, we obtain
\begin{align}\label{eq:reso:mtx:1}
\left(R_{\star}(A)\star (I_\star - A) \right)_{k,\ell} &= \sum_{j=1}^N \phi(t)^T \left(I + H^{(k,j)} \right) \phi(s) \star \phi(t)^T \left(I - F^{(j,\ell)} \right) \phi(s), \\
\label{eq:reso:mtx:2}
&= \phi(t)^T \sum_{j=1}^N \left(I + H^{(k,j)} \right) \left(I - F^{(j,\ell)} \right) \phi(s) .
\end{align}
Then, relation \eqref{eq:res:inv} implies
\begin{align}\label{eq:reso:mtx:3}
\phi(t)^T \sum_{j=1}^N \left(I + H^{(k,j)} \right) \left(I - F^{(j,\ell)} \right) \phi(s) = 1_\star \delta_{k,\ell} = \phi(t)^T I\, \phi(s) \delta_{k,\ell}.
\end{align}
In the following, we prove that $\sum_{j=1}^N \left(I + H^{(k,j)} \right) \left(I - F^{(j,\ell)} \right) = \delta_{k,\ell} I$.
\begin{lemma}\label{lemma:dirac:ident}
Let $D$ be an infinite matrix such that $\phi(t)^T D \phi(s) = \delta(t-s)$. Then $D = I$.
\end{lemma}
\begin{proof}
First of all, since $\delta(t-s)$ is a generalized function, the convergence of the series $\phi(t)^T D \phi(s)$ is intended in a weak sense.
This means that, for every $\tilde{f}(t)$ analytic on $\mathcal{I}$,
\begin{equation*}
\lim_{N\rightarrow \infty}\int_{\mathcal{I}} \sum_{k,\ell=1}^N d_{k,\ell} p_k(\tau) p_\ell(s) \tilde{f}(\tau) d\tau = \tilde{f}(s) = \int_{\mathcal{I}} \delta(\tau - s)\tilde{f}(\tau) d\tau.
\end{equation*}
Setting $\tilde{f}(t) = p_j(t)$ gives
\begin{align*}
p_j(s) &= \lim_{N\rightarrow \infty} \sum_{k,\ell=1}^N d_{k,\ell} \int_{\mathcal{I}} p_k(\tau) p_\ell(s) p_j(\tau) d\tau \\
&= \lim_{N\rightarrow \infty} \sum_{k,\ell=1}^N d_{k,\ell} p_\ell(s) \int_{\mathcal{I}} p_k(\tau) p_j(\tau) d\tau = \sum_{\ell=1}^\infty d_{j,\ell} p_\ell(s).
\end{align*}
As the Legendre expansion of $p_j(s)$ is unique, $d_{j,\ell} = \delta_{j,\ell}$, for $j,\ell = 1, 2, \dots$ .
\end{proof}
\begin{theorem}\label{thm:res:exist}
Let $A(t,s)$ be an $N \times N$ matrix-valued function composed of elements from $\mathcal{D}_0^t(\mathcal{I})$ and let $\mathbf{A}=[F^{(k,\ell)}]_{k,\ell=1}^N$ be the related block matrix, with $F^{(k,\ell)}$ the coefficient matrix of $A_{k,\ell}(t,s)$. Moreover, consider the matrix-valued function $C(t,s) = \sum_{k \geq 1} A^{\star k}(t,s)$, and let $\mathbf{C}=[H^{(k,\ell)}]_{k,\ell=1}^N$ be the related block matrix, with $H^{(k,\ell)}$ the coefficient matrix of $C_{k,\ell}(t,s)$. Then
\begin{equation*}
(I + \mathbf{C})(I- \mathbf{A}) = I,
\end{equation*}
that is, $(I- \mathbf{A})$ is invertible.
\end{theorem}
\begin{proof}
Equations~\eqref{eq:reso:mtx:1}, \eqref{eq:reso:mtx:2} and \eqref{eq:reso:mtx:3} show that
\begin{align*}
\phi(t)^T \sum_{j=1}^N \left(I + H^{(k,j)} \right) \left(I - F^{(j,\ell)} \right) \phi(s) = \phi(t)^T D^{(k,\ell)} \phi(s) = \delta(t-s) \delta_{k,\ell},
\end{align*}
where the products $ (I + H^{(k,j)} ) (I - F^{(j,\ell)} )$ are well-defined since $(I + H^{(k,j)} )_{m,n}$ is bounded by $1+K_h \sqrt{2m+1}\sqrt{2n+1}$ and $(I - F^{(j,\ell)} )_{m,n}$ by $K_f \rho^{|m-n|}$; see \eqref{eq:F:bound:const} and \eqref{eq:FG:bound}. Thus, for $k = \ell$, $D^{(k,k)} = \sum_{j=1}^N \left(I + H^{(k,j)} \right) \left(I - F^{(j,k)} \right) = I$ by Lemma~\ref{lemma:dirac:ident}. For $k \neq \ell$, $D^{(k,\ell)} = 0$.
\end{proof}
Consider the solution $U(t,s)$ of \eqref{eq:ode:star} and define $\mathbf{T}$ as the block diagonal matrix with blocks all equal to $T$, the coefficient matrix of $\Theta(t-s)$. Using the notation of Theorem~\ref{thm:res:exist} and Expression~\eqref{eq:star:sol}, we can transform the ODE \eqref{eq:ode:star} into the matrix problem
\begin{equation}\label{eq:inf:mtx:sol}
\mathbf{U} = \mathbf{T} (I + \mathbf{C}) (I + \mathbf{B}) = \mathbf{T} (I - \mathbf{A})^{-1} (I + \mathbf{B}),
\end{equation}
where the block matrix $\mathbf{B} = [G^{(k,\ell)}]_{k,\ell=1}^N$ is composed of the coefficient matrices $G^{(k,\ell)}$ of the functions $B_{k,\ell}(t,s)$. Hence, the solution of \eqref{eq:ode:star} can be expressed as
\begin{equation*}
U_{k,\ell}(t,s) = \phi(t)^T Y^{(k, \ell)} \, \phi(s), \quad k,\ell =1,\dots, N,
\end{equation*}
with $\mathbf{U} = [Y^{(k,\ell)}]_{k,\ell=1}^N$. To conclude the presentation, we need to discuss the convergence of the expansion \eqref{eq:f:exp}. Indeed, since $f$ is discontinuous across $t=s$, the expansion does not converge to $f(t,t)$ and, moreover, it converges only linearly for $t\neq s$; see, e.g., \cite{LebSil72,TreApp13}.
\begin{lemma}\label{eq:conv:disc}
Consider $f(t,s) \in \mathcal{A}_\Theta$ and the related expansion in orthonormal shifted Legendre polynomials \eqref{eq:f:exp}.
Then,
\begin{equation*}
\lim_{N \rightarrow \infty} \sum_{k=0}^N \sum_{\ell=0}^N f_{k,\ell} \, p_k(t) p_\ell(s) = \left\{ \begin{array}{lc} f(t,s), & t\neq s \\ f(t,t)/2, & t =s \end{array} \right. , \quad t,s \in (0,1).
\end{equation*}
\end{lemma}
\begin{proof}
The proof is a direct consequence of Theorem~1 and Remark~1 in Section~4.7 of \cite{LebSil72}.
\end{proof}
However, for fixed $s = 0$, the univariate function $f(t,0) = \tilde{f}(t,0)\Theta(t-0) = \tilde{f}(t,0)$ is analytic over $[0,1]$. Therefore, defining $a_k = \sum_{\ell=0}^\infty (f_{k,\ell}\, p_\ell(0))$, we get the Legendre expansion
\begin{equation*}
f(t,0) = \sum_{k=0}^\infty p_k(t) \sum_{\ell=0}^\infty f_{k,\ell} \, p_\ell(0) = \sum_{k=0}^\infty a_k p_k(t), \quad t \in [0,1].
\end{equation*}
Hence, the truncated series $\sum_{k=0}^M a_k p_k(t)$ converges geometrically to $f(t,0)$. As a consequence, in a numerical setting, we can approximate the function $f(t,0)$ by using $F_M$, the $M \times M$ leading principal submatrix of $F$, obtaining the approximation
\begin{equation*}
f(t,0) \approx \phi_M(t)^T F_M \, \phi_M(0),
\end{equation*}
with $\phi_M(t)$ the vector of the first $M$ elements of $\phi(t)$. In this case, we expect to reach sufficient accuracy for a (relatively) small $M$. Note that for $s>0$ this is not possible, as we expect the emergence of the Gibbs phenomenon; see, e.g., \cite{TreApp13}. By considering the leading principal submatrix of each of the blocks in formula \eqref{eq:inf:mtx:sol}, for $s=0$, we get the following approximate solution to \eqref{eq:ode:intro}:
\begin{equation}\label{eq:ode:trunc}
U_0(t) \approx (I_N \otimes \phi_M(t)^T T_M) (I_M- \mathbf{A}_M)^{-1} (I_M + \mathbf{B}_M) (I_N \otimes \phi_M(0));
\end{equation}
see also \cite{PozVan22proc_mtx}. The numerical approach for the solution of a non-autonomous linear ODE system derived from \eqref{eq:ode:trunc} can be found in \cite{PozVan22proc_mtx,PozVan22proc_scalar,PozVan22proc_PANM}, where several numerical examples show its efficacy.
\section{Conclusion}\label{sec:conclusion}
In this paper, we have presented a new expression for the solution of a (non-homogeneous, non-autonomous) system of linear ODEs using the so-called $\star$-product. The $\star$-product, the usual sum, and a specific set of distributions constitute a ring $\mathcal{S}$. We have also shown that a certain subring of $\mathcal{S}$ corresponds to a subalgebra of infinite matrices. Thanks to this correspondence, we have expressed the solution of the linear ODE system in the infinite matrix algebra. Such a solution is obtained by inverting a certain infinite matrix. The connection between the $\star$-product ring and the matrix subalgebra helped us to show that such an inverse always exists. By truncating the infinite matrices, it is possible to derive numerical methods for the solution of ODEs. This paper complements the results we are developing in the truncated case by placing them in the general framework of the infinite matrix algebra.
\section*{Acknowledgements}
This work was supported by Charles University Research programs UNCE/SCI/023 and PRIMUS/21/SCI/009 and by the Magica project ANR-20-CE29-0007 funded by the French National Research Agency.
\bibliographystyle{tfnlm}
{ "arxiv_id": "2302.11339", "language": "en", "timestamp": "2023-02-23T02:14:07", "url": "https://arxiv.org/abs/2302.11339", "yymm": "2302" }
\section{Introduction}
\label{sec:intro}
We investigate the power of uniform sampling in data reduction for \ProblemName{$k$-Median}\xspace, which is a fundamental machine learning problem with wide applications. Given a metric space $(\mathcal{X},\dist)$, \ProblemName{$k$-Median}\xspace takes an $n$-point dataset $X\subseteq \mathcal{X}$ and an integer parameter $k\ge 1$ as inputs, and the goal is to find a $k$-point center set $C\subseteq \mathcal{X}$ that minimizes the objective
\[ \cost(X, C) := \sum_{x\in X}\dist(x, C), \]
where $\dist(x,C):=\min_{c\in C}\dist(x,c)$ is the distance to the closest center. Data reduction is a powerful way of dealing with clustering problems, and a popular method called coreset~\cite{DBLP:conf/stoc/Har-PeledM04} has been extensively studied during the last decades. Roughly, an $\epsilon$-coreset aims to find a tiny but accurate proxy of the data, so that a $(1+\epsilon)$-approximate center set $C$ can be found by running existing algorithms on top of it. Specifically, coresets for \ProblemName{$k$-Median}\xspace in Euclidean $\mathbb{R}^d$ have been studied in a series of works~\cite{DBLP:conf/stoc/Har-PeledM04,DBLP:journals/dcg/Har-PeledK07,DBLP:journals/siamcomp/Chen09,DBLP:conf/stoc/FeldmanL11, DBLP:journals/siamcomp/FeldmanSS20, DBLP:conf/focs/SohlerW18, DBLP:conf/stoc/HuangV20, DBLP:conf/soda/BravermanJKW21, DBLP:conf/stoc/Cohen-AddadSS21, DBLP:conf/focs/BravermanCJKST022}, and coresets of size $\poly(k\epsilon^{-1})$ were obtained, which is independent of the dimension $d$ and the data size. In addition to speeding up existing algorithms, coresets can also be used to derive algorithms in sublinear models such as streaming~\cite{DBLP:conf/stoc/Har-PeledM04}, distributed~\cite{DBLP:conf/nips/BalcanEL13} and dynamic algorithms~\cite{DBLP:conf/esa/HenzingerK20}. Despite the progress on the size bounds, an outstanding issue of coresets is that known coreset \emph{constructions} are usually not query-efficient, i.e., they need to access $\Omega(n)$ data points (even in sublinear models such as streaming). In fact, it is not hard to see that this limitation cannot be avoided: in the worst case, any algorithm must make $\Omega(n)$ queries to the identity of data points in order to construct a coreset (see \Cref{thm:intro_lb}, which we discuss later). Technically, existing coreset constructions are often based on non-uniform sampling, which is not data-oblivious, and this inherently requires reading the entire dataset. This also makes them more difficult to implement efficiently in practice, due to the sophisticated sampling procedure. Hence, in order to achieve a sublinear query complexity, one must consider methods other than coresets. To this end, we consider uniform sampling as a natural alternative data reduction approach for clustering. Clearly, uniform sampling is data-oblivious and hence has great potential to achieve sublinear query complexity. Moreover, it often yields near-optimal solutions with only a few samples in practice, as was demonstrated by various experiments on real datasets in recent works on coresets for variants of clustering where uniform sampling is considered as a baseline (e.g.,~\cite{DBLP:conf/nips/MaromF19,DBLP:conf/icml/JubranTMF20,DBLP:conf/icml/BakerBHJK020,DBLP:conf/nips/BravermanJKW21, HJLW23}), even though it is known that uniform sampling cannot yield a coreset in the worst case.
Thus, the focus of this paper is to understand the sample complexity of uniform sampling for \ProblemName{$k$-Median}\xspace and to justify its performance in practice.
\subsection{Our Results}
We first give a hardness result (\Cref{thm:intro_lb}) showing that, even for $k = 2$ and a one-dimensional line, any algorithm, not only uniform sampling, must in the worst case query the identity of $\Omega(n)$ data points (where in 1D the identity of a point is its coordinate) in order to be $O(1)$-approximate for \ProblemName{$k$-Median}\xspace. In addition, \Cref{thm:intro_lb} further states that the number of queries must depend on a parameter $\beta \in (0, 1]$, called the \emph{balancedness} (\Cref{def:balance}), which is a property of the dataset. Intuitively, $\beta$ measures how balanced an optimal solution is; precisely, it requires that the size of the smallest cluster in an optimal solution be at least $\beta$ times the average cluster size $\frac{|X|}{k}$.
\begin{definition}[Balancedness]
\label{def:balance}
Given a dataset $X \subseteq \mathcal{X}$, the balancedness $\beta \in (0, 1]$ of $X$ is the largest number such that there is an optimal solution of \ProblemName{$k$-Median}\xspace on $X$ satisfying that every cluster\footnote{For a center set $C = \{c_i\}_{i=1}^k$, each $c_i$ defines a cluster $X_i \subseteq X$ that consists of the points whose nearest neighbor in $C$ is $c_i$.} has at least $\frac{\beta|X|}{k}$ points.
\end{definition}
The same notion of balancedness was considered in~\cite{DBLP:journals/ml/MeyersonOP04}, which also studied the complexity of uniform sampling for \ProblemName{$k$-Median}\xspace but achieved a weaker bound (which we discuss later). Balancedness has also been enforced as a constraint in clustering problems~\cite{bradley2000constrained}, and more generally in capacitated clustering~\cite{DBLP:journals/jcss/CharikarGTS02} and fair clustering~\cite{DBLP:conf/nips/Chierichetti0LV17}.
\begin{theorem}
\label{thm:intro_lb}
Any $O(1)$-approximate algorithm for \ProblemName{$2$-Median}\xspace with success probability at least $3/4$ must make $\Omega(1/\beta)$ queries to the identity of data points in $X$, where $\beta$ is the balancedness of the dataset $X$, even if the algorithm has free access to the distance function on the queried points.
\end{theorem}
Intuitively, the hard instance in \Cref{thm:intro_lb} consists of two clusters: one contains only a single point, the other contains many points within a small diameter, and the two clusters are far apart. Uniform sampling then fails to pick any point from the singleton cluster with high probability, which incurs a large error. The detailed proof can be found in \Cref{sec:proof_intro_lb}. \Cref{thm:intro_lb} suggests that the balancedness $\beta$ of the dataset may be a fundamental parameter that determines the necessary size of a uniform sample. In our main result, stated in \Cref{thm:intro_ub}, we give a nearly-matching upper bound (with respect to $\beta$) for \ProblemName{$k$-Median}\xspace in Euclidean $\mathbb{R}^d$, which helps to justify that this parameter is indeed fundamental. This bound breaks the $\Omega(n)$ query complexity barrier of coresets, and it readily yields sublinear-time algorithms for \ProblemName{$k$-Median}\xspace. The theorem may also be interpreted as a beyond-worst-case analysis of uniform sampling, where the parameter $\beta$ provides a refined description of the structure of the dataset.
\begin{theorem}[Informal version of \Cref{thm:dim_ind}]
\label{thm:intro_ub}
Given a dataset $X\subset \R^d$ with balancedness $\beta \in (0, 1]$, $\epsilon \in (0, 0.5)$ and an integer $k\ge 1$, let $S$ be a set of $\tilde O(\frac{k^2}{\beta\epsilon^3})$ uniform samples\footnote{Throughout, $\tilde O(f) = O(f \poly \log f)$.} from $X$. Then, with probability $0.9$, one can find a $(1+\epsilon)$-approximation for \ProblemName{$k$-Median}\xspace on $X$ using only $S$.
\end{theorem}
As mentioned, the dependence on $\beta$ in \Cref{thm:intro_ub} is nearly optimal. Furthermore, the dependence on $k$ and $\epsilon^{-1}$ is only a low-degree polynomial, and it also matches the known coreset size bounds. Another feature of our bound is that it does not depend on the Euclidean dimension $d$, which makes it very useful for dealing with high-dimensional and/or sparse datasets. In addition to the Euclidean case, we show a more general version (\Cref{thm:dim_ind}) that relates the sampling complexity to a notion of covering number (\Cref{def:covering}), which measures the complexity of the underlying metric space $(\mathcal{X}, \dist)$. By bounding this covering number (see \Cref{sec:application}), we also obtain similar bounds of $\poly(k \epsilon^{-1} \beta^{-1})$ for various other metric spaces such as doubling metrics and shortest-path metrics of graphs with bounded treewidth. For general finite metric spaces, we obtain a bound of $\poly(k \epsilon^{-1}\beta^{-1} \log |\mathcal{X}|)$. Compared with the notion of a coreset~\cite{DBLP:conf/stoc/Har-PeledM04}, the uniform sample $S$ may be interpreted as a coreset in a \emph{weaker} sense. Specifically, coresets usually require that the clustering cost be preserved for \emph{all} center sets $C \subseteq \mathcal{X}$, while in \Cref{thm:intro_ub} we only guarantee the cost on near-optimal solutions, which still suffices for finding a good approximation efficiently by running existing algorithms on $S$. Another important difference is that coreset sizes need not depend on $\beta$, whereas our bound does; however, as mentioned earlier, such coreset size bounds cannot be realized by query-efficient algorithms, whereas uniform sampling can. Indeed, similar notions of a ``weak coreset'' were previously studied in the literature, but the focus was mostly on the Euclidean case, and our bounds for doubling metrics and graph metrics are new. Even for Euclidean spaces, only the special case $k = 1$ was studied, and previous bounds either depend on $d$~\cite{DBLP:journals/ki/MunteanuS18} or have a worse $\epsilon^{-4}$ dependence than our $\epsilon^{-3}$~\cite{DBLP:conf/nips/Cohen-AddadSS21,danos21} (noting that $\beta = 1$ if $k = 1$). For general metrics, \cite{DBLP:journals/ml/MeyersonOP04} gave a size bound very similar to ours (which also depends on $\beta^{-1}$), but it only achieves an $O(1)$ error instead of our $\epsilon$. Somewhat less related, \cite{DBLP:conf/soda/MishraOP01,DBLP:journals/ml/Ben-David07,DBLP:journals/rsa/CzumajS07} gave uniform sampling bounds for the additive error guarantee, which is incomparable to ours. Finally, even though \Cref{thm:intro_ub} implies that a small uniform sample $S$ suffices for a sublinear-time algorithm for \ProblemName{$k$-Median}\xspace, actually finding a near-optimal solution on the sample $S$ can be tricky. A natural idea is to use an optimal \ProblemName{$k$-Median}\xspace solution on $S$ as the approximate solution for $X$, but we show that this does not work, even when the dataset is balanced.
In particular, if one uses an optimal \ProblemName{$k$-Median}\xspace solution on $S$ as the approximate solution, then $\Omega(n)$ samples are still required, even for a balanced dataset (see \Cref{lemma:kbmedian_lb}). This motivates us to consider a variant of \ProblemName{$k$-Median}\xspace, called \ProblemName{$(k,\beta)$-Median}\xspace, which aims to find the optimal center set $C$ subject to the constraint that $C$ is $\beta$-balanced (i.e., every cluster induced by $C$ has size at least $\beta \frac{|X|}{k}$; see \Cref{def:balanced_center}). The balancedness constraint in \ProblemName{$(k,\beta)$-Median}\xspace is intuitive: if the dataset is already balanced, then by definition there must be a balanced optimal solution, which \ProblemName{$(k,\beta)$-Median}\xspace can find. We show in \Cref{thm:dim_ind} (i.e., the full statement of \Cref{thm:intro_ub}) that an $\alpha$-approximate \ProblemName{$(k,\beta)$-Median}\xspace solution on $S$ is an $O(\alpha(1 + \epsilon))$-approximate \ProblemName{$k$-Median}\xspace solution on $X$ with constant probability.
\paragraph{Experiments}
Our experiments focus on validating the performance of running an algorithm for \ProblemName{$k$-Median}\xspace (instead of \ProblemName{$(k,\beta)$-Median}\xspace) on top of the uniform sample $S$, since this is arguably the most natural approach and is likely to be used in practice. Our experiments were conducted on various real datasets from different types of metric spaces, including Euclidean $\mathbb{R}^d$ and shortest-path metrics. We find that the solution returned by the \ProblemName{$k$-Median}\xspace algorithm is fairly balanced, which effectively enforces the balancedness constraint of \ProblemName{$(k,\beta)$-Median}\xspace. Moreover, we find that these datasets are mostly balanced, especially when the number of clusters $k$ is small. Even when $k$ is relatively large and the balancedness becomes worse, the factor of $1 / \beta$ in our upper bound is still reasonably bounded (\textasciitilde $100$), and we also observe that the practical performance is not significantly worse than that of the small-$k$ case, if at all. All these findings, together with our \Cref{thm:intro_ub}, justify the effectiveness of uniform sampling on real datasets.
\subsection{Technical Overview}
Our proof of \Cref{thm:intro_ub} builds on a structural lemma (\Cref{lem:badsol}), which states that if a center set $C$ is ``bad'', i.e., its cost is larger than $(1 + \epsilon) \OPT$, then this badness carries over to the sample. Specifically, let $C^\star$ be the optimal center set; if a center set $C$ satisfies $\cost(X, C) \geq (1 + \epsilon) \cost(X, C^\star)$, then we have $\cost(S, C) - \cost(S, C^\star) \geq \frac{O(\epsilon) |S|}{n} \cost(X, C^\star)$ with probability $1 - \exp(-O(|S|))$ (where the big $O$ hides dependence on other parameters such as $k$ and $\epsilon$). Intuitively, conditioned on this event, one can conclude that any ``good'' center set found on $S$ is also good on $X$, which implies the main theorem. The special case $k = 1$ of the structural lemma was proved in~\cite{DBLP:journals/ki/MunteanuS18}, but the analysis does not seem to generalize to our case $k \geq 2$. Specifically, to bound $\cost(S, C) - \cost(S, C^\star)$, it suffices to bound $\dist(x, C) - \dist(x, C^\star)$ for $x \in S$; in~\cite{DBLP:journals/ki/MunteanuS18}, one can use the triangle inequality $\dist(x, C) - \dist(x, C^\star) \leq \dist(C, C^\star)$ since $|C| = |C^\star| = 1$, and the remaining analysis is on $\dist(C, C^\star)$, which does not depend on the variable $x$.
However, when $k \geq 2$, such a triangle inequality no longer holds, and this forces us to use an alternative bound $\dist(x, C) - \dist(x, C^\star) \leq \max_{c\in C}\dist(c,C^\star)+\max_{c^\star\in C^\star}\dist(c^\star,C)$. To analyze this, we crucially use a new observation about balancedness: if $C$ is a balanced center set, then $C$ and $C^\star$ must be ``close'', where the notion of closeness depends on $1 / \beta$ (see \Cref{def:good} and \Cref{lem:balance_is_good}). This implies that both $\max_{c\in C} \dist(c, C^\star)$ and $\max_{c^\star \in C^\star} \dist(c^\star, C)$ are small enough, and this bound eventually allows us to apply a concentration inequality to bound $\cost(S, C) - \cost(S, C^\star)$. Indeed, the use of balancedness is a fundamental difference from~\cite{DBLP:journals/ki/MunteanuS18}. Once the structural lemma is established, a natural next step is to apply it with a union bound over all center sets that are close to $C^\star$. While there can still be infinitely many such center sets, one can apply standard discretization techniques, such as $\rho$-nets in $\mathbb{R}^d$, to reduce the number of events in the union bound.
\paragraph{Removing Dependence on $d$}
However, for Euclidean $\mathbb{R}^d$, a naive application of the net argument only leads to a size bound that depends on $d$. To remove the dependence on $d$ for the Euclidean case, we need a better union bound. To this end, we use an alternative interpretation of the structural lemma. In particular, we change the ``variable'' in the lemma from the center set $C$ to a vector $v^C := (\dist(x, C) - \dist(x, C^\star))_{x \in X} \in \mathbb{R}^X$, which represents all the relevant costs that $C$ induces. Then the structural lemma is equivalently stated as: if some $v \in \mathbb{R}^X$ satisfies $\|v\|_1 \geq \epsilon \cdot \cost(X, C^\star)$, then $\|v|_{S}\|_1 \geq \frac{O(\epsilon) |S|}{n} \cost(X, C^\star)$ with high probability, where $v|_{S}$ means restricting $v$ to the uniform sample $S$. Now, we try to apply the union bound over a discretization of the cost vectors $\{ v^C \}_{C}$, instead of over the space of all center sets $C$. Specifically, for a fixed sample $S$, we need to find a discretized set $U$ such that for every $v$, $U$ contains a vector $v'$ with $\|v|_S\|_1 \approx \|v'|_S\|_1$ (recalling that we do not need to preserve $\|v\|_1$). Note that this $U$ need not be a subset of $\{v^C\}_C$. In other words, a vector $u \in U$ may not correspond to a center set $C \subset \mathcal{X}$; it can be any real vector in $\mathbb{R}^X$. This turns out to offer great freedom compared with a discretization of center sets. Indeed, for Euclidean $\mathbb{R}^d$, to preserve $\|v|_S\|_1$, we can map $S \cup C^\star$ into a low-dimensional space of dimension $d' := O(\log(|S \cup C^\star|) / \epsilon^2)$ using a terminal version of the Johnson-Lindenstrauss transform~\cite{DBLP:conf/stoc/NarayananN19}, and discretize only in this lower-dimensional space. This removes the dependence on $d$ and replaces it with $d'$. In addition, we use a chaining argument~\cite{talagrand1996majorizing}, which was also used in several recent works on coresets~\cite{DBLP:conf/nips/Cohen-AddadSS21,DBLP:conf/stoc/Cohen-AddadLSS22, DBLP:journals/corr/abs-2211-11923}, to further save an $\epsilon^{-1}$ factor and obtain an $\epsilon^{-3}$ dependence in the final bound.
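The dimension-reduction step can be pictured with the following rough sketch (Python with NumPy). It uses an ordinary Gaussian Johnson-Lindenstrauss map rather than the terminal variant required by the actual proof, and all names and constants are ours; it only illustrates that pairwise distances among a small point set survive a projection to roughly $O(\log |S \cup C^\star| / \epsilon^2)$ dimensions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, d, eps = 200, 1000, 0.4       # points standing in for S together with C*
P = rng.normal(size=(n, d))

d_prime = int(np.ceil(8 * np.log(n) / eps ** 2))    # reduced dimension
G = rng.normal(size=(d, d_prime)) / np.sqrt(d_prime)
Q = P @ G                                           # projected points

def pdists(A):
    # Pairwise Euclidean distances via the Gram-matrix identity.
    sq = (A ** 2).sum(axis=1)
    g = sq[:, None] + sq[None, :] - 2.0 * (A @ A.T)
    return np.sqrt(np.maximum(g, 0.0))

D, Dp = pdists(P), pdists(Q)
mask = ~np.eye(n, dtype=bool)
# Maximum relative distortion of pairwise distances; with these parameters
# it is typically on the order of eps or below.
print(np.abs(Dp[mask] / D[mask] - 1.0).max())
\end{verbatim}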
Compared with the closely related work~\cite{DBLP:conf/nips/Cohen-AddadSS21}, our chaining argument is applied to the entire $X$, while theirs is applied to $O(\epsilon^{-2})$ pieces of $X$ separately, which results in an additional $\epsilon^{-1}$ factor compared with ours. Finally, we note that this may cause randomness issues, since we assumed a fixed sample $S$ before finding $U$. Luckily, we manage to fix this by relating it to a Gaussian process.
\paragraph{Beyond Euclidean Spaces}
In fact, the above argument is useful not only for removing the dependence on $d$ for Euclidean spaces, but also for obtaining bounds for other metric spaces. We show that it suffices to bound the size of $U$ in order to obtain a bound for uniform sampling. We formulate this minimum size of $U$ as the covering number (see \Cref{def:covering}), and we derive covering number bounds for several types of metric spaces. Previously, a similar notion of covering number, as well as its use in bounding the coreset size, was considered in the coreset literature~\cite{DBLP:conf/stoc/Cohen-AddadLSS22, DBLP:journals/corr/abs-2211-11923}. We give a more detailed comparison in \Cref{sec:covering}.
\subsection{Related Work}
Stemming from~\cite{DBLP:conf/stoc/Har-PeledM04}, the study of size bounds for coresets has been very fruitful. For \ProblemName{$k$-Median}\xspace in Euclidean $\mathbb{R}^d$, a series of works improved the size from $O(\poly(k) \epsilon^{-d} \log n)$ all the way to $\poly(k\epsilon^{-1})$~\cite{DBLP:conf/stoc/Har-PeledM04,DBLP:journals/dcg/Har-PeledK07,DBLP:journals/siamcomp/Chen09,DBLP:conf/stoc/FeldmanL11,DBLP:journals/siamcomp/FeldmanSS20,DBLP:conf/focs/SohlerW18,DBLP:conf/nips/Cohen-AddadSS21,DBLP:conf/stoc/Cohen-AddadLSS22}. Recent works focus on deriving tight bounds in terms of $k$ and $\epsilon^{-1}$. The state-of-the-art coresets for \ProblemName{$k$-Median}\xspace in $\R^d$ achieve a size of $\tilde O(\min\{k \varepsilon^{-3},k^{\frac{4}{3}}\epsilon^{-2}\})$~\cite{DBLP:conf/stoc/Cohen-AddadLSS22,DBLP:journals/corr/abs-2211-11923}, which nearly matches a lower bound of $\Omega(k\epsilon^{-2})$~\cite{DBLP:conf/stoc/HuangV20,DBLP:conf/stoc/Cohen-AddadLSS22}. Beyond Euclidean spaces, coresets for \ProblemName{$k$-Median}\xspace were obtained in doubling metrics~\cite{DBLP:conf/focs/HuangJLW18}, shortest-path metrics of graphs with bounded treewidth~\cite{DBLP:conf/icml/BakerBHJK020} and graphs that exclude a fixed minor~\cite{DBLP:conf/soda/BravermanJKW21}. In addition, coresets for variants of clustering have also been studied, notably capacitated clustering and the highly related fair clustering~\cite{DBLP:conf/waoa/0001SS19, DBLP:conf/nips/HuangJV19, DBLP:conf/icalp/BandyapadhyayFS21, DBLP:conf/focs/BravermanCJKST022}, and robust clustering~\cite{DBLP:conf/soda/FeldmanS12,HJLW23}.
\section{Preliminaries}
\label{sec:pre}
We define the \ProblemName{$(k,\beta)$-Median}\xspace problem in \Cref{def:kBMedian}, which depends on the notion of balanced center sets (\Cref{def:balanced_center}). The notion of a weak coreset (\Cref{def:weak_coreset}) captures the main guarantee of \Cref{thm:dim_ind}.
\begin{definition}[Balanced Center Set]
\label{def:balanced_center}
Given a dataset $X\subseteq \mathcal{X}$, an integer $k\geq 1$ and $\beta\in (0,1]$, we say a center set $C\subseteq \mathcal{X}$ is $\beta$-balanced if every cluster $X_i$ ($i\in [k]$) induced by $C$ contains at least $\beta |X|/k$ points. Let $\C_\beta(X)$ denote the collection of all $\beta$-balanced center sets on $X$.
\end{definition} Recall that if the balancedness of $X$ is $\beta$, there exists an optimal solution of \ProblemName{$k$-Median}\xspace on $X$ that is $\beta$-balanced. Then we naturally define the following \ProblemName{$(k,\beta)$-Median}\xspace problem that aims to find optimal solutions within $C_\beta(X)$. \begin{definition}[\ProblemName{$(k,\beta)$-Median}\xspace] \label{def:kBMedian} Given a dataset $X\subseteq \mathcal{X}$, an integer $k\geq 1$ and $\beta\in (0,1]$, the goal of the \ProblemName{$(k,\beta)$-Median}\xspace problem is to find a $k$-point set $C\in\C_\beta(X)$ that minimizes $\cost(X,C)$. Let $\OPT_{\beta}(X):=\cost(X,C^\star)$ where $C^\star$ is an optimal solution for \ProblemName{$(k,\beta)$-Median}\xspace on $X$. \end{definition} Again, if the balancedness of $X$ is $\beta$, solving \ProblemName{$(k,\beta)$-Median}\xspace also solves \ProblemName{$k$-Median}\xspace and $\OPT_{\beta}(X)$ equals the optimal \ProblemName{$k$-Median}\xspace cost on $X$. A similar notion of \ProblemName{$(k,\beta)$-Median}\xspace also appears in the literature, e.g.,~\cite{bradley2000constrained, DBLP:conf/sspr/MalinenF14, DBLP:journals/isci/CostaAM17, DBLP:conf/ijcai/LinHX19, DBLP:journals/tcs/Ding20}. The main difference is that they allow points being assigned to a non-closest center for achieving a balanced dataset partition, and hence, consider all possible $k$ points sets instead of $\C_{\beta}(X)$. \begin{definition}[Weak Coreset for \ProblemName{$(k,\beta)$-Median}\xspace] \label{def:weak_coreset} Given a dataset $X\subseteq\mathcal{X}$, an integer $k\geq 1$ and $\beta,\varepsilon \in (0,1]$, an $\varepsilon$-weak coreset for \ProblemName{$(k,\beta)$-Median}\xspace on $X$ is a subset $S\subseteq X$ such that for every $k$-point set $C\in\C_{\beta/2}(S)$ with $ \sum_{x\in S}\dist(x, C)\le (1+\epsilon)\OPT_{\beta/2}(S), $ it holds that $ \sum_{x\in X}\dist(x,C)\le (1+O(\epsilon))\OPT_{\beta}(X). $ \end{definition} Intuitively, the above definition requires that any near-optimal solution for \ProblemName{$(k,\beta/2)$-Median}\xspace on a weak coreset $S$ is a near-optimal solution for \ProblemName{$(k,\beta)$-Median}\xspace on $X$. Consequently, solving \ProblemName{$(k,\beta/2)$-Median}\xspace on $S$ leads to a near-optimal solution for \ProblemName{$k$-Median}\xspace on $X$ if the balancedness of $X$ is $\beta$. The points are unweighted in our weak coreset, as opposed to the weighted points considered in (strong) coresets~\cite{DBLP:conf/stoc/Har-PeledM04}. This is natural since we consider uniform sampling. Note that an optimal solution $C^\star$ for \ProblemName{$(k,\beta)$-Median}\xspace on $X$ may not be $\beta$-balanced on $S$ due to the small size of $S$, and hence, we consider a relaxed balancedness $\beta/2$ instead of $\beta$ for $S$ such that the considered collection $\C_{\beta/2}(S)$ is likely to include $C^\star$. \section{Uniform Sampling Yields Weak Coreset} \label{sec:algorithm} \begin{theorem}[Main Theorem] \label{thm:dim_ind} Let $(\mathcal{X},\dist)$ be a metric space and $X\subseteq \mathcal{X}$ be a dataset. Given an integer $k\ge 1$ and real numbers $\beta\in(0,1],\epsilon\in (0,0.5)$, let integer $m$ satisfy that \begin{equation} \label{eq:m_X} m\ge O\left(\frac{k}{\beta\varepsilon^2}\left(\sum_{i=1}^{\log\varepsilon^{-1}}\sqrt{2^{-i}\log N^{2^{-i}}_X(m)} \right)^2\right) \end{equation} where $N_X^{\alpha}(m)$ is the covering number defined in \Cref{def:covering}. 
Then, a set $S$ of $m$ uniform samples from $X$ is an $\epsilon$-weak coreset for \ProblemName{$(k,\beta)$-Median}\xspace on $X$ with probability at least $0.9$. \end{theorem} The factor $\sum_{i=1}^{\log\varepsilon^{-1}}\sqrt{2^{-i}\log N^{2^{-i}}_X(m)}$ plays a similar role as the entropy integral (or Dudley integral) that are commonly used in the chaining argument (see e.g., Corollary 5.25 of~\cite{Handel2014ProbabilityIH}). A more concise sufficient condition for \eqref{eq:m_X} is \begin{equation} \label{eq:simple_m_X} m\ge O\left(\frac{k}{\beta\varepsilon^2}\cdot \log N_X^{\varepsilon}(m) \right) \end{equation} which is derived directly by the monotonicity of the covering number, i.e., $N^{2^{-i}}_X(m)\le N^{\varepsilon}_X(m)$ for all $i\le \log\varepsilon^{-1}$. However, we still need to use ~\eqref{eq:m_X} in order to obtain better sample complexity, especially for Euclidean spaces. We give bounds for this term for various metrics in \Cref{sec:application}. \paragraph{Proof Overview} To utilize the balancedness, we consider \emph{good center sets} $\C'(X)$ (\Cref{def:good}) as a collection of center sets $C$ that are ``close'' to $C^\star$. We show that any near-optimal center set for $(k,\beta/2)$-Median on $S$ is likely to be good (\Cref{lem:balance_is_good}). Then it suffices to show that all center sets $C\in \C'(X)$ with $\cost(X,C) \geq (1 + O(\varepsilon))\OPT_\beta(X)$, denoted as $\C^\mathrm{bad}(X)$, are likely to have a large cost on $S$, i.e., $\cost(S,C) > (1+\varepsilon) \OPT_{\beta/2}(S)$. In other words, we need a uniform convergence guarantee on $\C^\mathrm{bad}(X)$. To this end, we need to bound the ``complexity'' of $\C^\mathrm{bad}(X)$, by considering the notion of \emph{covering}, which may be viewed as a set of representatives, and the \emph{covering number} which measures the complexity of the covering (\Cref{def:covering}). In the last steps of the proof (\Cref{sec:mainproof}), we reduce the above requirement for $\C^\mathrm{bad}(X)$ to a Gaussian process and applies a chaining argument based on the covering. \subsection{Good Event and Good Center Sets} \label{sec:candi} We first introduce some useful notations. Let $\lambda>1000$ be a constant throughout this section. For a center set $C\subseteq \mathcal{X}$ and $x\in \mathcal{X}$, denote by $C(x)=\arg\min_{c\in C}\dist(c,x)$ the closest center in $C$ to $x$ (breaking ties arbitrarily). Let $C^\star$ denote an optimal center set of $X$ for \ProblemName{$(k,\beta)$-Median}\xspace. For a subset $A\subseteq X$, and real numbers $\eta \in (0,1),\alpha>0$, denote by $ \C_{\eta}^{(\alpha)}(A):=\{C\in \C_{\eta}(A):\cost(A,C) \le (1+\alpha)\OPT_{\eta}(A)\} $ the set of all $(1+\alpha)$-approximate $k$-point set for \ProblemName{$(k,\eta)$-Median}\xspace on $A$, and let $\overline{\C}_{\eta}^{(\alpha)}(A)=\C_{\eta}(A)\setminus \C_{\eta}^{(\alpha)}(A)$. By definition, we know that $S$ is an $\varepsilon$-coreset weak coreset for \ProblemName{$(k,\beta)$-Median}\xspace on $X$ if and only if \begin{equation} \label{eqn:iff} \C_{\beta/2}^{(\varepsilon)}(S)\cap \overline{\C}_{\beta}^{(O(\varepsilon))}(X) = \emptyset. \end{equation} Let $\mathcal{P}^\star=\{X_1^\star,...,X_k^\star\}$ be the partition of $X$ induced by $C^\star$. 
We denote by $\xi_S$ the event that \begin{equation} \label{eq:xi_S} \begin{aligned} &\frac{1}{m}\cost(S,C^\star)\le \lambda\cdot\frac{1}{n}\OPT_\beta(X)\quad \wedge\quad\forall i\in[k], \frac{|S\cap X_i^\star|}{|S|}\in (1\pm\frac{1}{2})\frac{|X_i^\star|}{|X|}, \end{aligned} \end{equation} where the first condition requires that the average \ProblemName{$k$-Median}\xspace cost of $S$ to $C^\star$ is not too large compared to that of $X$, and the second condition requires that the ratio of sampled points in every cluster $X_i^\star$ is close to the underlying one. The following lemma claims that $\xi_S$ happens with high probability, and hence, we can condition on $\xi_S$ in the analysis. \begin{lemma} \label{lem:Pr_xi_S} $\xi_S$ happens with probability at least 0.99. \end{lemma} \begin{proof} Since $C^\star$ is a $\beta$-balanced center set on $X$, we have $|X_i^\star|\ge \frac{\beta n}{k}$ for every $i\in[k]$. Recall that $S$ is a set of uniform samples, therefore by Chernoff bound, we have \begin{equation*} \Pr\left[\frac{|S\cap X_i^\star|}{|S|}\not \in \left(1\pm \frac{1}{2}\right)\frac{|X_i^\star|}{|X|}\right] \le 2\exp\left(-\frac{\beta m}{12k} \right) \le 0.001/k. \end{equation*} By the union bound, we have \begin{equation*} \Pr\left[\forall i\in [k], \frac{|S\cap X_i^\star|}{|S|}\in \left(1\pm \frac{1}{2}\right)\frac{|X_i^\star|}{|X|}\right]\ge 0.999. \end{equation*} Also note that $\E_S[\cost(S,C^\star)]=\frac{m}{n}\OPT_\beta(X)$. Then by the Markov inequality, with probability at least $0.999$, we have $\cost(S,C^\star)\le \lambda\cdot\frac{m}{n}\OPT_\beta(X)$ since $\lambda>1000$. This completes the proof. \end{proof} \begin{definition}[Good Center Sets] \label{def:good} We say a $k$-point set $C\subseteq \mathcal{X}$ is \emph{good} if we have \begin{equation} \label{eq:b1} \frac{1}{n}\sum_{x\in X}\dist(C^\star(x),C)\le \frac{6\lambda}{n}\OPT_\beta(X), \end{equation} and for every $x\in X$, \begin{equation} \label{eq:b2} \begin{aligned} \left|\dist(x,C)-\dist(x,C^\star) \right| \le \frac{6\lambda k}{\beta n} \OPT_\beta(X). \end{aligned} \end{equation} Let $ \mathcal{C}'(X):=\{C\subseteq \mathcal{X}:|C|=k, C\text{ satisfies \eqref{eq:b1},\eqref{eq:b2}} \} $ denote the collection of all good center sets on $X$. \end{definition} Intuitively, we say a center set $C$ is good if it is ``close'' to $C^\star$. \eqref{eq:b1} means that the average distance from every $C^\star(x)$ to $C$ is not too large, and~\eqref{eq:b2} states that all distance differences $|\dist(x, C)-\dist(x, C^\star)|$ are small. We note that the definition of good center sets is independent of $S$, which is useful for the probability analysis on $S$. The following lemma states that all $(1+\epsilon)$-approximate center sets for \ProblemName{$(k,\beta/2)$-Median}\xspace on $S$ are good conditioning on $\xi_S$. Recall that we want to prove \Cref{eqn:iff}, hence by \Cref{lem:balance_is_good}, it remains to prove $\C_{\beta/2}^{(\varepsilon)}(S)\cap \left(\C'(X)\cap \overline{\C}_{\beta}^{(O(\varepsilon))}(X)\right) = \emptyset$, which is easier to handle due to good properties of $\C'(X)$. \begin{lemma} \label{lem:balance_is_good} $\mathcal{C}_{\beta/2}^{(\epsilon)}(S)\subseteq \C'(X)$ holds conditioning on $\xi_S$. 
\end{lemma} \begin{proof} Conditioning on $\xi_S$, we have $C^\star\in \C_{\beta/2}(S)$ and for every $C\in\C_{\beta/2}^{(\epsilon)}(S)$, it holds that \begin{equation} \label{eq:sumCstar} \sum_{x\in S}\dist(C^\star(x),C)\le \sum_{x\in S}\left(\dist(x,C^\star)+\dist(x,C)\right) \le (2+\epsilon)\sum_{x\in S}\dist(x,C^\star) \le \frac{3\lambda m}{n}\OPT_\beta(X), \end{equation} where the first derivation is due to the triangle inequality, the second derivation is because $\sum_{x\in S}\dist(x,C)\le (1+\varepsilon)\OPT_{\beta/2}(S)\le (1+\varepsilon)\sum_{x\in S}\dist(x,C^\star)$, and the last derivation is due to \eqref{eq:xi_S}. We can rewrite $\sum_{x\in S}\dist(C^\star(x),C)$ as $\sum_{c^\star_i\in C^\star}\dist(c^\star_i,C)\cdot |S\cap X_i^\star|$, where $c^\star_i$ denotes the center of cluster $X_i^\star$. Therefore, we have by \eqref{eq:xi_S}, \begin{equation*} \sum_{x\in S}\dist(C^\star(X),C)\ge \frac{m}{2n}\sum_{c^\star_i\in C^\star}\dist(c^\star_i,C)\cdot |X_i^\star| = \frac{m}{2n}\sum_{x\in X}\dist(C^\star(x),C), \end{equation*} which combining with \eqref{eq:sumCstar} completes the proof of \eqref{eq:b1}. To prove \eqref{eq:b2}, we observe that for every $x\in X$, it holds that \begin{equation*} \left|\dist(x,C)-\dist(x,C^\star) \right| \le \max\{\dist(C(x),C^\star),\dist(C^\star(x),C)\}. \end{equation*} We only upper bound $\dist(c,C^\star)$ for every $c\in C$, and bounding $\dist(c^\star,C)$ for every $c^\star\in C^\star$ is almost the same. Since $C\in\C_{\beta/2}(S)$, for every $c\in C$, we have \begin{eqnarray*} \dist(c,C^\star)&\le& \sum_{c\in C}\dist(c,C^\star)\\ &\le&\frac{2k}{\beta}\cdot\frac{1}{m}\sum_{c\in C}\frac{\beta m}{2k}\dist(c,C^\star)\\ &\le&\frac{2k}{\beta}\cdot \frac{1}{m}\sum_{x\in S}\dist(C(x),C^\star)\\ &\le&\frac{2k}{\beta}\cdot \frac{1}{m}\sum_{x\in S}\left(\dist(x,C)+\dist(x,C^\star) \right)\\ &\le&\frac{2k}{\beta}\cdot \frac{2+\epsilon}{m}\sum_{x\in S}\dist(x,C^\star)\\ &\le&\frac{6\lambda k}{\beta n}\OPT_\beta(X), \end{eqnarray*} where the third derivation is because $\sum_{x\in S}\dist(C(x), C^\star)=\sum_{c_i\in C}\dist(c_i,C^\star)\cdot |X_i|$ for $\{X_1,\dots,X_k\}$ being the partition of $X$ induced by $C$, and $|X_i|\ge \frac{\beta m}{2k}$ since $C\in \C_{\beta/2}(S)$. \end{proof} \subsection{Covering and Covering Number} \label{sec:covering} The notion of covering and covering number, defined in \Cref{def:covering}, plays a crucial role in our analysis. We start with giving the definition, and then discuss how the several relevant parameters are chosen for our application. \begin{definition}[Covering and Covering Number] \label{def:covering} Given a dataset $X\subseteq \mathcal{X}$ and a subset $S\subseteq X$, a set of vectors $V\subset \R^X$, an error function $\mathrm{err}:X\times V\to \R$ and real numbers $\alpha,\gamma>0$, we say $U\subset \R^X$ is a $\gamma$-bounded $\alpha$-covering of $V$ w.r.t. $(S,\mathrm{err})$ if the following holds: \begin{compactenum} \item (Bounded Covering Error) for every $v\in V$, there exists a vector $u\in U$ such that \begin{equation*} \forall x\in S,\quad \left|v_x-u_x \right|\le \alpha\cdot\mathrm{err}(x,v) \end{equation*} \item (Bounded $L_\infty$ Norm) for every $u\in U$, $\|u\|_\infty\le \gamma$. \end{compactenum} Define $N^{\alpha,\gamma}(S,V,\mathrm{err})$ to be the minimum cardinality $|U|$ of any $\gamma$-bounded $\alpha$-covering $U$ of $V$ w.r.t. $(S,\mathrm{err})$. 
Moreover, let $\mathcal{S}\subseteq 2^X$ be a collection of subsets and define the $\gamma$-bounded $\alpha$-covering number of $V$ w.r.t. $(\mathcal{S},\mathrm{err})$ to be \begin{equation*} N_X^{\alpha,\gamma}(\mathcal{S},V,\mathrm{err}):=\max_{S\in \mathcal{S}} N^{\alpha,\gamma}(S,V,\mathrm{err}) \end{equation*} \end{definition} \paragraph{Explanation of \Cref{def:covering}} The idea of $\epsilon$-covering has also been used in the coreset literature, e.g.,~\cite{DBLP:conf/stoc/Cohen-AddadLSS22,DBLP:journals/corr/abs-2211-08184,DBLP:journals/corr/abs-2211-11923}. Intuitively, the covering may be viewed as a discretization/representative of $V$, and the covering number measures its complexity. The parameter $\alpha$ together with the error function $\mathrm{err}$ controls the granularity of the discretization of $V$, and the covering number $N^{\alpha,\gamma}(S,V,\mathrm{err})$ increases as $\alpha$ becomes larger. The relative errors $\mathrm{err}(x,v)$ should often be tailored to the application (e.g.,~\cite{DBLP:conf/stoc/Cohen-AddadLSS22,DBLP:journals/corr/abs-2211-08184,DBLP:journals/corr/abs-2211-11923}) and we need to use a specific definition of it. Compared with a standard definition of covering, we additionally require $\|u\|_\infty$ for all $u\in U$ bounded (by parameter $\gamma$). This requirement is useful for bounding the variance of a Gaussian process in our analysis, which plays a similar role as excluding huge subsets as in the definition of covering in \cite{DBLP:journals/corr/abs-2211-11923} (their Definition 3.2). A natural choice of $\mathcal{S}$ is the collection of all $S\subseteq X$ with a fixed cardinality but in our case we need additional constraints on $\mathcal{S}$ to bound the overall covering errors. \paragraph{Specifying $V$, $\gamma$, $\mathrm{err}$ and $\mathcal{S}$} For a center set $C\subseteq \mathcal{X}$, define $v^C\in \R^X$ to be a cost vector such that \[ \forall x\in X,\quad v^C_x=\dist(x,C)-\dist(x,C^\star), \] and this is motivated by~\eqref{eq:b2} which considers the difference of the distances. Since our goal is to prove $\C'(X)\subseteq \C_{\beta}^{(O(\varepsilon))}(X)$, we consider the following $V$ on good center sets: \[ V=\{v^C:C\in\C'(X)\}. \] As~\eqref{eq:b2} implies $\|v^C\|_{\infty}\leq \frac{6\lambda k}{\beta n}\OPT_{\beta}(X)$, we select \[ \gamma = \frac{12\lambda k}{\beta n}\OPT_{\beta}(X). \] Now we define function $\mathrm{err}: X\times V\rightarrow \R$. For every $x\in X$ and $v^C\in V$, \begin{align*} &\quad\mathrm{err}(x,v^C)\\ =&\quad v^C_x + 2\dist(x,C^\star) + \frac{1}{n}\OPT_\beta(X)\\ =&\quad \dist(x,C)+\dist(x,C^\star) + \frac{1}{n}\OPT_\beta(X). \end{align*} \noindent The term $\frac{1}{n}\OPT_\beta(X)$ is consistent with the selection of $\gamma$. The term $\dist(x,C)+\dist(x,C^\star)$ is mainly designed for obtaining a dimension-independent covering number in Euclidean spaces; see \Cref{lem:covering_euclidean} for details. Finally, we specify $\mathcal{S}$ with an additional restriction $\xi_S$. \[ \mathcal{S}(m) = \{S\subseteq X: |S|\le m, \xi_S \}. \] We shorten the notation of the covering number by \[ N_X^{\alpha}(m):=N_X^{\alpha,\gamma}\left(\mathcal{S}(m),V,\mathrm{err}\right), \] and call it the $\alpha$-covering number. We have the following lemma showing that $\xi_S$ leads to a bound of the total covering error, which is helpful for bounding the variance of our Gaussian process. 
\begin{lemma}[$\xi_S$ Implies Bounded Covering Error] \label{lem:bounderror} For $S$ such that $\xi_S$ happens, we have for every $C\in \C'(X)$, \begin{equation*} \sum_{x\in S}\mathrm{err}(x, v^C)\le \frac{15\lambda m}{n}\OPT_\beta(X). \end{equation*} \end{lemma} \begin{proof} $\xi_S$ implies that $\cost(S,C^\star)\le \lambda\cdot\frac{m}{n}\OPT_\beta(X)$ and \begin{align*} \sum_{x\in S}\dist(C^\star(x),C)&=\sum_{c^\star_i\in C^\star} \dist(c^\star_i,C)\cdot|S\cap X_i^\star|\\ &\le \sum_{c^\star_i\in C^\star}\dist(c^\star_i,C)\cdot 2|X_i^\star|\cdot\frac{m}{n}\\ &\le \frac{2m}{n}\sum_{x\in X}\dist(C^\star(x),C)\\ &\le \frac{12\lambda m}{n}\OPT_\beta(X). \end{align*} where the second derivation is due to~\eqref{eq:xi_S}, and the forth derivation is due to~\eqref{eq:b1}. Therefore for every $C\in \C'(X)$, it holds that \begin{align*} \sum_{x\in S}\mathrm{err}(x,v^C) &=\sum_{x\in S}\left(\dist(x,C)+\dist(x,C^\star) + \frac{1}{n}\OPT_\beta(X) \right)\\ &\le \frac{m}{n}\OPT_\beta(X)+\sum_{x\in S}\left(2\dist(x,C^\star)+\dist(C^\star(x),C) \right)\\ &\le \frac{15\lambda m}{n}\OPT_\beta(X) \end{align*} \end{proof} \subsection{Proof of Main Theorem: \Cref{thm:dim_ind}} \label{sec:mainproof} Conditioning on $\xi_S$, for every $C\in \C_{\beta/2}^{(\epsilon)}(S)$, it holds that \begin{equation} \label{eq:goodsol} \sum_{x\in S}\dist(x,C)\le (1+\epsilon)\sum_{x\in S}\dist(x,C^\star) \le \sum_{x\in S}\dist(x,C^\star)+\frac{\lambda\epsilon m}{n}\OPT_\beta(X), \end{equation} where the second inequality is due to \eqref{eq:xi_S}. Let \[ \C^\mathrm{bad}(X):=\C'(X)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X) \] denote the collection of all good solutions $C$ which are bad on $X$, i.e., $\cost(X,C)\ge (1+10\lambda^2\varepsilon)\OPT_\beta(X)$. To show $S$ is an $\epsilon$-weak coreset, let $\phi_S$ denotes the event that for every $C\in\C^\mathrm{bad}(X)$,~\eqref{eq:goodsol} is far from being satisfied, that is, $\sum_{x\in S}\dist(x,C)\ge \sum_{x\in S}\dist(x,C^\star)+\frac{\lambda^2\epsilon m}{n}\OPT_\beta(X)$. Then it suffices to bound the following probability \begin{align*} \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset ] &= \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S ]\\ &\quad+\Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \neg\phi_S]\\ &\le \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S ]+ \Pr_S[\neg\phi_S]. \end{align*} For the first term, we have \begin{align*} \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S ] &= \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S \mid \xi_S ]\Pr[\xi_S]\\ &\quad+\Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S \mid \neg\xi_S ]\Pr_S[\neg \xi_S]\\ &\le \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S \mid \xi_S ]+\Pr_S[\neg \xi_S] \end{align*} By the discussion above, conditioning on $\xi_S$, it holds that $\C_{\beta/2}^{(\epsilon)}(S)\subseteq \C'(X)$. 
So $\C^{(\epsilon)}_{\beta/2}(S)\cap \bar{\C}_0^{(10\lambda^2\epsilon)}(X)\neq \emptyset$ is equivalent to $\C^{(\epsilon)}_{\beta/2}(S)\cap \C^\mathrm{bad}(X)\neq \emptyset$, which implies that there exists $C\in \C^\mathrm{bad}(X)$ such that~\eqref{eq:goodsol} holds. We know this contradicts $\phi_S$. Therefore, we have \[ \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S \mid \xi_S ]=0, \] and thus \[ \Pr_S[\C^{(\epsilon)}_{\beta/2}(S)\cap \overline{\C}_\beta^{(10\lambda^2\epsilon)}(X)\neq \emptyset\wedge \phi_S ]\le \Pr_S[\neg \xi_S]\le 0.01, \] where the second inequality follows from \Cref{lem:Pr_xi_S}. It remains to bound $\Pr_S[\neg\phi_S]\le 0.09$. Recall that for any $C\in \C^\mathrm{bad}(X)$, $\|v^C\|_1=\sum_{x\in X}(\dist(x,C)-\dist(x,C^\star))\ge 10\lambda^2\varepsilon\OPT_\beta(X)$. $\Pr[\neg\phi_S]$ can be regarded as a uniformly convergence guarantee for all $v^C\in V$ and, let $\mu:=\frac{\varepsilon m}{n}\OPT_\beta(X)$, it is equivalent to bound \begin{equation} \label{eq:uniform_convergence} \Pr_{S}\left[\inf_{C\in\C^\mathrm{bad}(X)} \sum_{x\in S}v^C_x\le \lambda^2 \mu\right]\le 0.09 \end{equation} To bound~\eqref{eq:uniform_convergence}, our plan is to set up a Gaussian process and apply the chaining argument. The following lemma provides a convergence guarantee for a single $C$. \begin{lemma} \label{lem:badsol} For any $C\in\C^\mathrm{bad}(X)$, the following holds: \begin{equation*} \Pr_S\left[\sum_{x\in S}v_x^C\le 5\lambda^2\mu \right]\le 2\exp\left(-\Theta\left(\frac{\epsilon^2\beta m}{k} \right) \right). \end{equation*} \end{lemma} \begin{proof} Since $\C^\mathrm{bad}(X)\subseteq \C'(X)$, due to~\eqref{eq:b2}, for every $C\in\C^\mathrm{bad}(X)$, it holds that \begin{equation*} \|v^C\|_\infty\le \gamma=\frac{12\lambda k}{\beta n}\OPT_\beta(X). \end{equation*} Since $S$ is a set of $m$ uniform samples, and $\E\sum_{x\in S}v_x^C=\frac{m}{n}\|v^C\|_1\ge \frac{10\lambda^2\epsilon m}{n}\OPT_\beta(X)$, we have \begin{align*} \Pr_S\left[\sum_{x\in S}v^C_x\le \frac{5\lambda^2\epsilon m}{n}\OPT_\beta(X) \right]\le \Pr_S\left[\left|\sum_{x\in S}v^C_x-\E\sum_{x\in S}v^C_x \right|\ge \frac{5\lambda^2\epsilon m}{n}\OPT_\beta(X) \right] \end{align*} Here we apply Bernstein inequality to finish the proof. To this end, we should also bound the variance of $\sum_{x\in S}v^C_x$, which is \begin{align*} &\le m\cdot\frac{1}{n}\sum_{x\in X}(v^C_x)^2\\ &\le \gamma \cdot \frac{m}{n}\sum_{x\in X} |v^C_x|\\ &\le \gamma \cdot \frac{m}{n}\sum_{x\in X}\left(\dist(x,C)+\dist(x,C^\star) \right)\\ &\le \gamma \cdot \frac{m}{n}\sum_{x\in X}\left(2\dist(x,C^\star)+\dist(C^\star(x),C) \right)\\ &\le O\left(\frac{km}{\beta n^2}(\OPT_\beta(X))^2 \right). \end{align*} where the last derivation is due to~\eqref{eq:b1}. Therefore, by Bernstein inequality, we have \begin{align*} \Pr_S\left[\left|\sum_{x\in S}v^C_x-\E\sum_{x\in S}v^C_x \right|\ge \frac{5\lambda^2\epsilon m}{n}\OPT_\beta(X) \right]\le 2\exp\left(-\Theta\left(\frac{\varepsilon^2\beta m}{k} \right) \right) \end{align*} which completes the proof. \end{proof} By~\Cref{lem:badsol}, it remains to bound the ``complexity'' of $\C^\mathrm{bad}(X)$. To this end, we reduce to a Gaussian process where we only need to consider the complexity of coverings with respect to the sample set $S$. This idea is formalized in the following lemma. 
\begin{lemma}[Reduction to Gaussian Process] \label{lem:reduce_to_Guassian} ~\eqref{eq:uniform_convergence} holds if the following holds: \begin{equation} \label{eq:Guassian} \E_S\left[\E_{g_i}\left[\sup_{C\in \C^\mathrm{bad}(X)} \frac{1}{\mu}\left|\sum_{i=1}^mg_iu^C_{s_i} \right| \right]\mid \xi_S\right] \le \lambda. \end{equation} Here, $g_1,\dots,g_m$ are the independent standard Gaussian random variables, $U_{\varepsilon}$ is an $\varepsilon$-covering of $V$ w.r.t. a random set $S=\{s_1,\dots,s_m\}\subseteq X$, and $u^C\in U_{\varepsilon}$ denotes the $\varepsilon$-covering of $v^C$ for $C\in\C^\mathrm{bad}(X)$, \end{lemma} \begin{proof} Let $S'=\{s_1',...,s_m'\}$ be another set of independent uniform samples from $X$ that is independent of $S$. We first use the symmetrization trick to show that \begin{align*} \quad\Pr_S\left[\inf_{C\in\C^\mathrm{bad}(X)} \sum_{i=1}^m v^C_{s_i}\le \lambda^2\mu\right] \nonumber\le 2\Pr_{S,S'}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m(v_{s_i}^C-v_{s_i'}^C) \right|\ge 4\lambda^2\mu \right]. \end{align*} To see this, we assume for some $S$, the event $\phi_S$, i.e., $\inf_{C\in\C^\mathrm{bad}(X)} \sum_{i=1}^m v^C_{s_i}\le \lambda^2\mu$ happens, then take an arbitrarily $C_S\in \C^\mathrm{bad}(X)$ such that $\sum_{i=1}^m v^{C_S}_{s_i}\le \lambda^2\mu$ (if $\xi_S$ does not happen, we let $C_S$ be an arbitrarily center set). If the event $\sum_{i=1}^m v^{C_S}_{s_i'}\ge 5\lambda^2\mu$, denoted by $\varphi_{C_S,S'}$, happens, then it holds that \begin{equation*} \left|\sum_{i=1}^m(v^{C_S}_{s_i}-v^{C_S}_{s_i'})\right|\ge 4\lambda^2\mu. \end{equation*} Note that $\varphi_{C_S,S'}$ is a convergence guarantee for $C_S\in\C^\mathrm{bad}(X)$, and~\Cref{lem:badsol} gives a lower bound of $1/2$ to $\Pr_{S'}[\varphi_{C_S,S'}]$. Therefore, the following holds. \begin{align*} \Pr_{S,S'}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m(v_{s_i}^C-v_{s_i'}^C) \right|\ge 4\lambda^2\mu \right] &\ge \Pr_{S,S'}\left[\phi_S\wedge \varphi_{C_S,S'}\right]\\ &=\Pr_{S}[\phi_S]\Pr_{S,S'}[\varphi_{C_S,S'}\mid \phi_S]\\ &=\Pr_S[\phi_S]\E_S\left[\Pr_{S'}[\varphi_{C_S,S'}]\mid\phi_S\right]\\ &\ge \frac{1}{2}\Pr_S[\phi_S], \end{align*} Let $r_1,...,r_m$ be independent Rademacher random variables\footnote{A Rademacher random variable $r$ takes value $-1$ with probability $1/2$ and takes value $1$ with probability $1/2$.}. We have \begin{align} &\quad\Pr_{S,S'}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m(v_{s_i}^C-v_{s_i'}^C) \right|\ge 4\lambda^2\mu \right] \nonumber\\ &= \Pr_{S,S',r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m r_i(v_{s_i}^C-v_{s_i'}^C) \right|\ge 4\lambda^2\mu \right] \nonumber\\ &\le \Pr_{S,S',r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left(\left|\sum_{i=1}^m r_iv_{s_i}^C \right|+\left|\sum_{i=1}^m r_i v_{s_i'}^C \right|\right)\ge 4\lambda^2\mu \right] \nonumber \end{align} where the first derivation is because $v_{s_i}^C-v_{s_i'}^C$ is symmetric and thus is distributed identically to $r_i(v_{s_i}^C-v_{s_i'}^C)$, the second derivation is due to the triangle inequality. If $\sup_{C\in\C^\mathrm{bad}(X)} \left(\left|\sum_{i=1}^m r_iv_{s_i}^C \right|+\left|\sum_{i=1}^m r_i v_{s_i'}^C \right|\right)\ge 4\lambda^2\mu $ holds, then either $\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m r_iv_{s_i}^C \right|\ge 2\lambda^2\mu$ or $\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m r_iv_{s_i'}^C \right|\ge 2\lambda^2\mu$ holds. 
Since $S$ and $S'$ are distributed identically, by union bound, we have \begin{align} &\quad\Pr_{S,S',r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left(\left|\sum_{i=1}^m r_iv_{s_i}^C \right|+\left|\sum_{i=1}^m r_i v_{s_i'}^C \right|\right)\ge 4\lambda^2\mu \right] \nonumber\\ &\le 2\Pr_{S,r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m r_iv_{s_i}^C\right|\ge 2\lambda^2\mu \right] \nonumber\\ &= 2\Pr_{S,r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m r_iv_{s_i}^C\right|\ge 2\lambda^2\mu\mid \xi_S \right]\Pr[\xi_S]\nonumber\\ &\quad+2\Pr_{S,r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m r_iv_{s_i}^C\right|\ge 2\lambda^2\mu\mid \neg\xi_S \right]\Pr[\neg \xi_S]\nonumber\\ &\le 2\Pr_{S,r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^m r_iv_{s_i}^C\right|\ge 2\lambda^2\mu\mid \xi_S \right] + 0.02, \label{eq:pr} \end{align} It suffices to prove that \begin{equation}\label{eq:ES} \E_{S}\left[\E_{r_i}\sup_{C\in\C^\mathrm{bad}(X)}\left|\sum_{i=1}^m r_iv_{s_i}^C \right|\mid \xi_S \right]\le 20\lambda\mu. \end{equation} We can thus apply Markov inequality to bound the probability in~\eqref{eq:pr} by $0.01$, which leads to $\Pr[\neg \phi_S]\le 0.08$ and completes the proof of~\eqref{eq:uniform_convergence}. It remains to show how to derive~\eqref{eq:ES}. Recall that $U_{\varepsilon}$ denotes an $\varepsilon$-covering of $V$ w.r.t. random subset $S$. We next replace the cost vector $v^C$ with its $\varepsilon$-covering $u^C\in U_{\varepsilon}$, which satisfies that $|u^C_x-v^C_x|\le \varepsilon\cdot \mathrm{err}(x,v^C)$ for every $x\in S$. \begin{equation} \label{eq:applycover} \begin{aligned} \sup_{C\in\C^\mathrm{bad}(X)}\left|\sum_{i=1}^mr_iv_{s_i}^C \right| &\le \sup_{C\in\C^\mathrm{bad}(X)} \left|\sum_{i=1}^mr_i u_{s_i}^C +r_i(v_{s_i}^C- u_{s_i}^C) \right|\\ &\le \sup_{C\in\C^\mathrm{bad}(X)}\left(\left|\sum_{i=1}^mr_i u^C_{s_i} \right| + \varepsilon \sum_{i=1}^m\mathrm{err}(s_i,v^C)\right)\\ &\le \sup_{C\in\C^\mathrm{bad}(X)}\left|\sum_{i=1}^mr_iu^C_{s_i} \right| + \varepsilon\cdot\frac{15\lambda m}{n}\OPT_\beta(X)\\ &\le \sup_{C\in\C^\mathrm{bad}(X)}\left|\sum_{i=1}^mr_iu^C_{s_i} \right|+15\lambda\mu \end{aligned} \end{equation} where the second derivation is due to the triangle inequality, and the third derivation is due to \Cref{lem:bounderror} and $|r_i|=1$ for every $i\in[m]$. Then it suffices to prove \begin{equation*} \E_{S}\left[\E_{r_i}\sup_{C\in\C^\mathrm{bad}(X)}\left|\sum_{i=1}^m r_iu_{s_i}^C \right|\mid \xi_S \right]\le 5\lambda\mu \end{equation*} which is equivalent to \begin{equation} \label{eq:Eg} \E_{S}\left[\E_{r_i}\sup_{C\in\C^\mathrm{bad}(X)}\frac{1}{\mu} \left|\sum_{i=1}^m r_iu_{s_i}^C \right|\mid \xi_S \right]\le 5\lambda \end{equation} Finally, we replace the Rademacher random variables with standard Gaussian random variables. 
\begin{lemma}[Lemma 7.4 of~\cite{Handel2014ProbabilityIH}] \label{lem:r2g} For $r_1,\dots,r_m$ are Rademacher random variables, let $g_1,\dots,g_m$ be the independent standard Gaussian random variables, it holds that \begin{equation*} \E_{r_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \frac{1}{\mu}\left|\sum_{i=1}^mr_iu^C_{s_i} \right| \right]\le \sqrt{\frac{\pi}{2}}\E_{g_i}\left[\sup_{C\in\C^\mathrm{bad}(X)} \frac{1}{\mu}\left|\sum_{i=1}^mg_iu^C_{s_i} \right| \right] \end{equation*} \end{lemma} It suffices to prove \begin{equation*} \E_{S}\left[\E_{g_i}\sup_{C\in\C^\mathrm{bad}(X)}\frac{1}{\mu} \left|\sum_{i=1}^m g_iu_{s_i}^C \right|\mid \xi_S \right]\le \lambda \end{equation*} which leads to \eqref{eq:Eg} by \Cref{lem:r2g}, and completes the proof. \end{proof} Now it suffices to prove~\eqref{eq:Guassian}. The main idea is to apply a chaining argument. For every $C\in \C^\mathrm{bad}(X)$, let $v^{C,h}\in U_{2^{-h}}$ denote the $2^{-h}$-covering of $v^C$, then we can rewrite $u^C$ as a telescoping sum $ u^C=\sum_{h=1}^{\log\varepsilon^{-1}}(v^{C,h}-v^{C,h-1}). $ Hence, it suffices to prove the following lemma which implies~\eqref{eq:Guassian}, and this completes the proof of~\Cref{thm:dim_ind}. \begin{lemma}[Bounding Error in the Chaining Argument] \label{lem:chaining_argument} Conditioning on $\xi_S$, the following holds: \begin{equation*} \sum_{h=1}^{\log\varepsilon^{-1}}\E_{g_i}\left[\sup_{C\in\C^\mathrm{bad}(X)}\frac{1}{\mu}\left|\sum_{i=1}^mg_i(v_{s_i}^{C,h}-v_{s_i}^{C,h-1}) \right| \right] \le \lambda. \end{equation*} \end{lemma} The proof of \Cref{lem:chaining_argument} heavily relies on our definition of covering. In particular, it allows us to bound the difference $|v_x^{C,h}-v_x^{C,h-1}|$ either by $2^{-h+2}\cdot\mathrm{err}(x,v^C)$, or by an absolute value $2\gamma$. This eventually guarantees that each Gaussian variable $\sum_{i=1}^mg_i(v_{s_i}^{C,h}-v_{s_i}^{C,h-1}) $ has a well bounded variance. \begin{proof}[Proof of~\Cref{lem:chaining_argument}] For every $h\in [\log\varepsilon^{-1}]$, let \begin{equation*} E_h:=\E_{g_i}\left[\sup_{C\in\C^\mathrm{bad}(X)}\frac{1}{\mu}\left|\sum_{i=1}^mg_i(v_{s_i}^{C,h}-v_{s_i}^{C,h-1}) \right| \right]. \end{equation*} For every $C\in \C^\mathrm{bad}(X)$, $\sum_{i=1}^m\frac{g_i}{\mu}(v_{s_i}^{C,h}-v_{s_i}^{C,h-1})$ is Gaussian with zero mean and variance \begin{align*} \sum_{i=1}^m\left(\frac{1}{\mu}(v_{s_i}^{C,h}-v_{s_i}^{C,h-1}) \right)^2 &\le \frac{\|v^{C,h}\|_\infty+\|v^{C,h-1}\|_\infty }{\mu}\sum_{i=1}^m\frac{|v_{s_i}^{C,h}-v_{s_i}^{C,h-1}|}{\mu}\\ &\le \frac{24\lambda k}{\beta\epsilon m}\sum_{i=1}^m\frac{|v_{s_i}^{C,h}-v^{C}_{s_i}+v^{C}_{s_i}-v_{s_i}^{C,h-1} |}{\mu}\\ &\le \frac{24\lambda k}{\beta\epsilon m}\sum_{x\in S}\frac{2^{-h+2}\mathrm{err}(x,v^{C})}{\mu}\\ &\le O\left(\frac{2^{-h+6} k}{\beta\epsilon^2 m}\right)\\ \end{align*} where the second derivation is due to $\|v^{C,h}\|_\infty\le \gamma$ for all $h\in [\log\varepsilon^{-1}]$ and the last derivation is due to~\Cref{lem:bounderror}. The following lemma demonstrates that an upper bound of the variance of each $\sum_{i=1}^m\frac{g_i}{\mu}(v_{s_i}^{C,h}-v_{s_i}^{C,h-1})$ leads to an upper bound of $E_h$. \begin{lemma}[Lemma 2.3 of~\cite{massart2007concentration}] Let $g_i\sim N(0,\sigma_i^2)$ for $i\in [m]$ be Gaussian random variables (not need to be independent ) and let $\sigma=\max_{i\in m}\sigma_i$, then it holds that \begin{equation*} \E\left[\max_{i\in [m]}|g_i| \right]\le 2\sigma\cdot\sqrt{2\ln n}. 
\end{equation*} \end{lemma} The number of distinct difference vector $v^{C,h}-v^{C,h-1}$ is at most $|U_{2^{-h}}\times U_{2^{-h+1}}|\le N_X^{2^{-h}}(m)\cdot N_X^{2^{-h+1}}(m)$. Therefore, we have \begin{align*} E_h&\le \sqrt{8\log (|U_{2^{-h}}|\cdot|U_{2^{-h+1}}|)\cdot O\left(\frac{2^{-h+6} k}{\beta\epsilon^2 m}\right)}\\ &\le \sqrt{O(1)\cdot \log N_X^{2^{-h}}(m)\cdot \frac{2^{-h}k}{\beta\varepsilon^2m}} \end{align*} Plug in $\sum_{h=1}^{\log\varepsilon^{-1}}E_h$, we have \begin{equation} \label{eq:final} \sum_{h=1}^{\log \varepsilon^{-1}}E_h\le O(1)\cdot \sqrt{\frac{k}{\beta\varepsilon^2m}}\sum_{h=1}^{\log\varepsilon^{-1}} \sqrt{\log N_X^{2^{-h}}(m)}. \end{equation} Since $m\ge t\cdot \frac{k}{\beta\varepsilon^2}\left(\sum_{i=1}^{\log\varepsilon^{-1}}\sqrt{2^{-i}\log N^{2^{-i}}_X(m)} \right)^2$ for sufficiently large constant $t$, we can bound $\sum_{h=1}^{\log\varepsilon^{-1}}E_h$ by $\lambda$. which completes the proof. \end{proof} \section{Weak Coresets in Various Metric Spaces} \label{sec:application} We apply \Cref{thm:dim_ind} to various metric spaces and obtain the following theorem, by analyzing their covering number. \begin{theorem} \label{thm:various_metric} For a metric space $M=(\mathcal{X},\dist)$ and a dataset $X\subseteq \mathcal{X}$, an integer $k\ge 1$ and real numbers $\beta,\epsilon\in(0,1)$, let $S$ be a set of uniform samples with size \begin{itemize} \item $O\left(\frac{k^2}{\beta\varepsilon^3}\cdot\log^2\frac{k}{\beta\varepsilon}\cdot\log^2\frac{1}{\varepsilon} \right)$ if $M$ is Euclidean $\mathbb{R}^d$; \item $O\left(\frac{k^2}{\beta\varepsilon^2}\cdot \ddim\cdot \log\frac{k}{\beta\varepsilon}\right)$ if $M$ has doubling dimension $\ddim$; \item $O\left(\frac{k^2}{\beta\varepsilon^2}\cdot \log |\mathcal{X}|\cdot \log\frac{k}{\beta\varepsilon}\right)$ if $M$ is a finite metric; \item $O\left(\frac{k^2}{\beta\varepsilon^2}\cdot \tw\cdot \log\frac{k}{\beta\varepsilon}\right)$ if $M$ is the shortest-path metric of a graph with treewidth $\tw$. \end{itemize} Then $S$ is an $\epsilon$-weak coreset for \ProblemName{$(k,\beta)$-Median}\xspace on $X$ with probability $0.9$. \end{theorem} \subsection{Euclidean Space} \begin{lemma}[Covering Number in Euclidean Space] \label{lem:covering_euclidean} In Euclidean space $(\R^d,\dist)$, for integer $m>0$ and real number $0<\alpha<1/2$, it holds that \begin{equation*} \log |N_X^{\alpha}(m)|\le O\left(k \alpha^{-2}\log(m+k)\log\frac{k}{\beta\alpha}\right) \end{equation*} \end{lemma} \begin{proof} By \Cref{def:covering} of the covering number, it suffices to construct an $\alpha$-covering of $V$ w.r.t $(S,\mathrm{err})$ for every $S\in \mathcal{S}(m)$. We need \emph{terminal embedding} as stated in the following theorem: \begin{theorem}[Terminal Johnson-Lindenstrauss Lemma \cite{DBLP:conf/stoc/NarayananN19}] \label{thm:terminal} For every $\varepsilon\in(0,1/2)$ and finite set $Y\subseteq \R^d$, there exists an embedding $g:\R^d\to\R^t$ for $t=O(\varepsilon^{-2}\log |Y|)$ such that \begin{eqnarray*} \forall x\in Y,\forall c\in \R^d,\quad \dist(x,c)\le \dist(g(x),g(c))\le (1+\varepsilon)\dist(x,c). \end{eqnarray*} We call $g$ is an $\varepsilon$-terminal embedding for $Y$. \end{theorem} For every $S\in \mathcal{S}(m)$, let $Y:=S\cup C^\star$, by \Cref{thm:terminal}, there exists an $\alpha$-terminal embedding $g$ for $Y$ with target dimension $t=O(\alpha^{-2}\log |Y|)=O(\alpha^{-2}\log (m+k))$. For a set $A\subset \R^d$, we denote by $g(A):=\{g(x):x\in A\}$ the set of images of all $x\in A$. 
By the definition of $\C'(X)$, for every $C\in \C'(X)$, $c\in C$, \begin{align*} \dist(g(c),g(C^\star))&\le \dist(g(c),g(C^\star(c)))\\ &\le 2\dist(c,C^\star)\\ &\le \frac{12\lambda k}{\beta n}\OPT(X) \end{align*} We denote by $B_A(c,r):=\{x\in A:\dist(x,c)\le r\}$ the ball centered at $c$ of radius $r$ for $A\subseteq \mathcal{X}$. Thus we have for every $C\in \C'(X)$, \[ g(C)\subseteq \bigcup_{c^\star\in C^\star}B_{\R^t}\left(g(c^\star), \frac{12\lambda k}{\beta n}\OPT_\beta(X)\right) \] To construct a covering for the set of cost vectors, we discretize each ball via a classical \emph{covering} of a point set. \begin{definition}[Covering of a Point Set] For a metric space $(\mathcal{X},\dist)$, a point set $A\subseteq\mathcal{X}$ and real number $0\le\alpha<1$, we say $T\subseteq\mathcal{X}$ is an $\alpha$-covering of $A$ if for every $x\in A$, there exists $y\in T$ such that $\dist(x,y)\le \alpha$. \end{definition} Note that the above definition of covering is different from \Cref{def:covering}, since they are for different objects. The following lemma bounds the cardinality of $\alpha$-covering of an Euclidean ball. \begin{lemma}[Covering of a Euclidean Ball] \label{lem:ball_cover} For $\alpha>0$ and an Euclidean ball $B\subset \R^t$ of radius $\Delta>0$, there exists an $\alpha$-covering $T\subseteq B$ of size at most $\exp\left({O(t\log (\Delta/\alpha))}\right)$. \end{lemma} For every $c^\star\in C^\star$, let $T_{c^\star}$ be such an $\frac{\alpha\OPT_\beta(X)}{n}$-covering of $B_{\R^t}\left(g(c^\star), \frac{12\lambda k}{\beta} \cdot\frac{1}{n}\OPT_\beta(X)\right)$, and let $T_{C^\star}:=\bigcup_{c^\star\in C^\star}T_{c^\star}$. By \Cref{lem:ball_cover}, we have \begin{eqnarray*} |T_{C^\star}|\le k\cdot \exp\left({O\left(t\log{\frac{k}{\alpha\beta}}\right)}\right) \end{eqnarray*} Let $g':\R^d\to T_{C^\star}$ be a function satisfying that \begin{eqnarray*} g'(x)=\arg \min_{y\in T_{C^\star}}\dist(g(x),y), \end{eqnarray*} Construct $U$ is $\{\tilde v^{C}\in \R^{X}:C\in\C'(X)\}$, where $\tilde v^{C}$ is a cost function defined as: for every $x\in X$, $\tilde v^{C}_x:=\dist(g(x),g'(C))-\dist(g(x),g(C^\star))$. Observe that \begin{equation*} |U|\le |T_{C^\star}|^k\le \exp\left({O\left(kt\log\frac{k}{\alpha\beta}\right)}\right), \end{equation*} which implies that $\log|U|\le O(kt\log\frac{k}{\beta\alpha})=O(k\alpha^{-2}\log(m+k)\log\frac{k}{\beta\alpha})$, it remains to show that $U$ is the desired $\alpha$-covering of $V$. \paragraph{Bounded Covering Error} By the definition of $g'$, we have that for every $C\in \C'(X)$ and $c\in C$, \begin{equation*} \dist(g(c),g'(C)) \le \dist(g(c),g'(c))\le \frac{\alpha}{n}\OPT_\beta(X), \end{equation*} and \begin{equation*} \dist(g'(c),g(C))\le \dist(g'(c),g(c))\le \frac{\alpha}{n}\OPT_\beta(X), \end{equation*} and thus for every $x\in S$, let $c_1$ denote the closest center in $g(C)$ to $g(x)$, and $c_2$ denote the closest center in $g'(C)$ to $g(x)$, then it holds that \begin{equation} \label{eq:error_ball} |\dist(g(x),g(C))-\dist(g(x),g'(C))|\le \max\left\{\dist(c_1,g'(C)),\dist(c_2,g(C))\right\}\le \frac{\alpha}{n}\OPT_\beta(X). 
\end{equation} The error incurred by the covering is \begin{align*} |v^C_x-\tilde v^C_x|&=|\dist(x,C)-\dist(x,C^\star)-\dist(g(x),g'(C))+\dist(g(x),g(C^\star)) |\\ &\le |\dist(x,C)-\dist(g(x),g'(C)) | + |\dist(x,C^\star)-\dist(g(x),g(C^\star)) |\\ &\le |\dist(x,C)-\dist(g(x),g(C)) | + |\dist(x,C^\star)-\dist(g(x),g(C^\star)) | +\frac{\alpha}{n}\OPT_\beta(X) \\ &\le \alpha\dist(x,C)+\alpha\dist(x,C^\star)+\frac{\alpha}{n}\OPT_\beta(X)\\ &=\alpha\cdot \mathrm{err}(x,v^C) \end{align*} where the second derivation is due to triangle inequality, the third derivation is due to~\eqref{eq:error_ball}, and the forth derivation is due to the terminal embedding guarantee. \paragraph{Bounded $L_\infty$ Norm} For every $\tilde v^C\in U$, \begin{align*} \|\tilde v^C\|_\infty&=\max_{x\in X}|\dist(g(x),g'(C))-\dist(g(x),g(C^\star))|\\ &\le\max_{x\in X} |\dist(g(x),g(C))-\dist(g(x),g(C^\star))| + \frac{\alpha}{n}\OPT_\beta(X)\\ &\le \max\left\{\max_{c\in C}\dist(g(c),g(C^\star)),\max_{c^\star\in C^\star}\dist(g(c^\star),g(C)) \right\}+ \frac{\alpha}{n}\OPT_\beta(X)\\ &\le (1+\alpha)\max\left\{\max_{c\in C}\dist(c,C^\star),\max_{c^\star\in C^\star}\dist(c^\star,C) \right\}+ \frac{\alpha}{n}\OPT_\beta(X)\\ &\le \frac{12\lambda k}{\beta n}\OPT_\beta(X), \end{align*} where the second derivation is due to~\eqref{eq:error_ball}, the third derivation is due to the triangle inequality, and the last derivation follows from a similar argument of \Cref{lem:balance_is_good}. Thus we can construct $\alpha$-covering of $V$ with respect to any $S\in \mathcal{S}(m)$, which concludes the proof. \end{proof} \begin{proof}[Proof of \Cref{thm:various_metric} in Euclidean space] We only need to bound the summation in \eqref{eq:m_X}. \begin{align*} &\quad\sum_{i=1}^{\log\varepsilon^{-1}}\sqrt{2^{-i}\log N^{2^{-i}}_X(m)}\\ &\le \sum_{i=1}^{\log\varepsilon^{-1}} O(1)\cdot \sqrt{2^{-i}\cdot k\cdot 2^{2i}\log m\log\frac{k}{\beta 2^{-i}}}\\ &\le O(1)\cdot \log\varepsilon^{-1}\cdot \sqrt{\varepsilon^{-1}\cdot k \log m\log \frac{k}{\beta\varepsilon}}. \end{align*} Therefore, it suffices to set $m=O\left(\frac{k^2}{\beta\varepsilon^3}\cdot\log^2\frac{k}{\beta\varepsilon}\cdot\log^2\frac{1}{\varepsilon} \right)$. \end{proof} \subsection{Doubling Metric Space} \begin{definition}[Doubling Dimension~\cite{assouad1983plongements,DBLP:conf/focs/GuptaKL03}] The doubling dimension of a metric space $M = (\mathcal{X}, \dist)$, denoted as $\ddim(M)$, is the smallest integer $d$ such that any ball can be covered by at most $2^d$ balls of half the radius. \end{definition} \begin{lemma}[Covering Number in Doubling Metric Space] \label{lem:covering_doubling_metric} In a metric space $(\mathcal{X},\dist)$ with doubling dimension $\ddim$, for integer $m>0$ and real number $0<\alpha<1/2$, it holds that \begin{equation*} \log |N_X^{\alpha}(m)|\le O(k\cdot \ddim \cdot \log (k/\alpha\beta)). \end{equation*} \end{lemma} \begin{proof} It suffices to construct $\alpha$-covering with respect to $S$ for every $S\in \mathcal{S}(m)$. Different from the Euclidean case (\Cref{lem:covering_euclidean}), we directly apply the covering of point set without using terminal embedding. The following lemma bounds the cardinality of a covering of point set in doubling metrics. \begin{lemma}[\cite{DBLP:conf/focs/GuptaKL03}] \label{lem:cover_dm} For $\alpha>0$ and a metric space $(\mathcal{X},\dist)$ with doubling dimension $\ddim$ and diameter $\Delta$, there exists an $\alpha$-covering $T$ of $\mathcal{X}$ with $|T|\le \exp\left(O(\ddim\cdot\log (\Delta/\alpha))\right)$. 
\end{lemma} By definition of $\C'(X)$, for every $C\in\C'(X)$, $c\in C$, it holds that $\dist(c,C^\star)\le \frac{6\lambda k}{\beta}\cdot \frac{1}{n}\OPT(X)$. Hence, consider an $\frac{\alpha\OPT(X)}{n}$-covering $T_{c^\star}$ of $B_{\mathcal{X}}\left(c^\star, \frac{6\lambda k}{\beta} \cdot\frac{1}{n}\OPT(X)\right)$ for every $c^\star\in C^\star$ and let $T_{C^\star}:=\bigcup_{c^\star\in C^\star}T_{c^\star}$. By \Cref{lem:cover_dm}, we have \begin{eqnarray*} |T_{C^\star}|\le k\cdot \exp\left(O\left(\ddim\cdot \log{\frac{k}{\alpha\beta}}\right)\right) \end{eqnarray*} We define $g:V\to V$ such that for every $c\in V$, \begin{equation*} g(c)=\arg\min_{c'\in T_{C^\star}} \dist(c,c'). \end{equation*} Construct $U$ is $\{\tilde v^{C}\in \R^{X}:C\in\C'(X)\}$, where $\tilde v^{C}$ is a cost function defined as: for every $x\in X$, $\tilde v^{C}_x:=\dist(x,g(C))-\dist(x,C^\star)$. Then a similar analysis for bounding the covering error and $L_\infty$ norm as in Euclidean case(\Cref{lem:covering_euclidean}) certifies that $U$ is an $\alpha$-covering of $V$ with respect to any $S\in \mathcal{S}(m)$, and $\log |U|\le O(k\cdot\ddim\cdot\log\frac{k}{\alpha\beta})$. \end{proof} \begin{proof}[Proof of \Cref{thm:various_metric} in doubling metric space and general discrete metric space] Directly plugging the covering number in \Cref{lem:covering_doubling_metric} into \eqref{eq:simple_m_X} concludes the proof in doubling metric space. For general discrete metric space $M=(\mathcal{X},\dist)$, we know that the doubling dimension is $O(\log |\mathcal{X}|)$, and thus directly applying the result of doubling metric space completes the proof. \end{proof} \subsection{Shortest-path Metric of a Graph with Bounded Treewidth} \begin{definition}[Tree Decomposition and Treewidth] A tree decomposition of a graph $G=(V, E)$ is a tree $\mathcal{T} = (\mathcal{V}, \mathcal{E})$, where each node in $\mathcal{V}$, called a bag, is a subset of vertices in $V$, such that the following holds. \begin{itemize} \item $\bigcup_{S \in \mathcal{V}} S = V$. \item $\forall u \in V$, the nodes in $\mathcal{V}$ that contain $u$ form a connected component in $\mathcal{T}$. \item $\forall (u, v) \in E$, there exists $S \in \mathcal{V}$ such that $\{u, v\} \subseteq S$. \end{itemize} The treewidth of $G$, denoted as $\tw(G)$, is the smallest integer $t$ such that there is a tree decomposition of $G$ with maximum bag size $t + 1$. \end{definition} \begin{lemma}[Covering Number in Shortest-path Metric of a Graph] \label{lem:covering_graph_metric} In shortest-path metric $M=(\mathcal{X},\dist)$ of a graph with bounded treewidth $\tw$, for integer $m>0$ and real number $0<\alpha<1/2$, it holds that \begin{equation*} \log |N_X^{\alpha}(m)|\le O(k\cdot \tw \cdot \log (k/\alpha\beta + m/\alpha)). \end{equation*} \end{lemma} \begin{proof} To bound the covering number in graph metrics, it suffices to construct an $\alpha$-covering of $V$ with respect $S$ for every $S\in \mathcal{S}(m)$. The proof of \Cref{lem:covering_graph_metric} relies on the following structural lemma, proposed by~\cite{DBLP:conf/icml/BakerBHJK020}. \begin{lemma}[Structural Lemma, \cite{DBLP:conf/icml/BakerBHJK020}] \label{lem:structral_lem} Given graph $G=(\mathcal{X},E)$ with treewidth $\tw$, and $A\subseteq \mathcal{X}$, there exists a collection $\mathcal{T}_A$ of subsets of $\mathcal{X}$, such that the following holds. \begin{itemize} \item[1.] $\bigcup_{T\in \mathcal{T}_A} T=\mathcal{X}$. \item[2.] $|\mathcal{T}_A|\le \poly(|A|)$. \item[3.] 
For every $T\in\mathcal{T}_A$, either $|T|\le O(\tw)$, or i) $|T\cap A|\le O(\tw)$ and ii) there exists $P_T\subseteq V$ with $|P_T|\le O(\tw)$ such that there is no edge in $E$ between $T$ and $\mathcal{X}\setminus (T\cup P_T)$. \end{itemize} \end{lemma} Recall that any $S\in \mathcal{S}(m)$ makes $\xi_S$ happens, i.e., $S$ satisfies \eqref{eq:xi_S}. Thus for every $C\in \C'(X)$ and every $x\in S$, \begin{eqnarray*} \dist(x,C^\star)\le \sum_{x\in S}\dist(x,C^\star)\le \frac{\lambda m}{n}\OPT(X), \end{eqnarray*} and thus \begin{equation*} \dist(x,C)\le \dist(C(x),C^\star)+\dist(x,C^\star)\le\left(\frac{6k}{\beta}+m \right)\cdot\frac{\lambda}{n}\OPT(X). \end{equation*} Let $\mathcal{T}_S$ be a collection of subsets asserted by \Cref{lem:structral_lem}. We can construct the covering as follows: For every $T\in\mathcal{T}$, and $c\in T$, let $P_{T}\subseteq \mathcal{X}$ be a set asserted in condition 3 in \Cref{lem:structral_lem} (we assume $|T|>O(\tw)$, otherwise let $P_{T}=T$). By \Cref{lem:structral_lem}, we have for every $x\in \mathcal{X}\setminus T$, it holds that \begin{equation*} \dist(x,c)=\min_{y\in P_{T}}\left\{\dist(x,y) + \dist(y,c)\right\}, \end{equation*} since the shortest path between $x$ and $c$ must pass by some vertices in $P_T$. Let $I_T:=(S\cap T)\cup P_T$ denote the \emph{important vertices} of $T$, we define a \emph{rounded distance function} $\dist_T':T\times X\to \R_+$ w.r.t. $T$ as follows: \begin{itemize} \item $\forall c\in T, x\in I_T$, $\dist_T'(c,x)$ is the closest multiple of $\frac{\alpha}{n}\OPT(X)$ to $\dist(x,c)$ no greater than $\left(\frac{6k}{\beta}+m \right)\cdot\frac{\lambda}{n}\OPT(X)$. \item $\forall c\in T, x\in X\setminus I_T$, $\dist'_T(c,x)=\min_{y\in P_T}\left\{\dist'_T(c,y)+\dist(y,x) \right\}$. \end{itemize} Note that for $x\in T\setminus S$ or for $x\in \mathcal{X}$ that satisfies $\dist(c,x)>\left(\frac{6k}{\beta}+m \right)\cdot\frac{\lambda}{n}\OPT(X)$, the rounded distance $\dist'_T(c,x)$ may be distorted badly. However, as discussed above, we only case about $x\in S$ which is ensured that the distance to $c$ is not that far. Notice that for a fixed $c\in T$, the rounded distance function $\dist(c,\cdot)$ is determined by the values of $\dist_T'(c,x)$ for all $x\in I_T$, and each of them is among $\frac{6\lambda k}{\alpha\beta} + \frac{\lambda m}{\alpha}$ possible values. Therefore the number of distinct rounded distance functions $|\{\dist_T'(c,\cdot):c\in T\}|$ w.r.t. $T$ is \begin{equation*} \le \left(\frac{2\lambda k}{\alpha\beta}+\frac{\lambda m}{\alpha} \right)^{|I_T|}\le \left(\frac{k}{\alpha\beta}+\frac{m}{\alpha}\right)^{O(\tw)} \end{equation*} We define $\dist':\mathcal{X}\times X\to \R_+$ as: for every $c\in \mathcal{X}$, let $T_c\in \mathcal{T}$ that contains $c$, it holds that $\dist'(c,x)=\dist'_{T_c}(c,x)$. Then we have \begin{equation*} |\{\dist'(c,\cdot):c\in \mathcal{X}\}|\le |\mathcal{T}_S|\cdot \left(\frac{k}{\alpha\beta}+\frac{m}{\alpha}\right)^{O(\tw)}\le \poly(|S|)\cdot \left(\frac{k}{\alpha\beta}+\frac{m}{\alpha}\right)^{O(\tw)}. \end{equation*} For every $C\in \C'(X)$, we denote by $\tilde v^C\in \R^X$ the \emph{rounded coset vector} defined as follows: for every $x\in X$, $\tilde v^C_x:=\min_{c\in C} \dist'(c,x)-\dist(x,C^\star)$. Let $\tilde{V}$ denote the set $\{\tilde v^C:C\in C'(X)\}$, we construct $U$ as follows: for every $\tilde v\in \tilde{V}$, $U$ contains one $v^C$ for some $C\in\C'(X)$ such that $\tilde v^C=\tilde v$. We next show that $U$ is the desired $\alpha$-covering. 
\paragraph{Cardinality of~$U$} Observe that the cardinality of $U$ is upper bounded by the number of distinct rounded distance functions $|\{\min_{c\in C}\dist'(c,\cdot): C\in \mathcal{X}^k \}|$, which is at most \begin{equation*} \left(\poly(|S|)\cdot \left(\frac{k}{\alpha\beta}+\frac{m}{\alpha}\right)\right)^{O(\tw)\cdot k} \le \left(\frac{k}{\alpha\beta}+\frac{m}{\alpha}\right)^{O(k\cdot \tw)} \end{equation*} Thus it holds that $\log |U|\le O(k\cdot \tw\cdot \log(\frac{k}{\alpha\beta}+\frac{m}{\alpha}) )$. \paragraph{Bouneded Covering Error} For every $C\in \C'(X)$, and $x\in S$, we have $\dist(x,C)\le \left(\frac{6k}{\beta}+m \right)\cdot\frac{\lambda}{n}\OPT(X)$. Hence, the error incurred by rounding, i.e., $|\dist'(c,x)-\dist(c,x)|$ is at most $\frac{\alpha}{n}\OPT(X)$. Let $\tilde v^C$ be the rounded cost vector with respect to $v^C$, by definition, there exists $C'\in \C'(X)$ such that $v^{C'}\in U$ and $\tilde v^{C'}=\tilde v^C$. The covering error is \begin{align*} \left|v^C_x-v^{C'}_x\right|&\le \left|v^C_x-\tilde v^C+\tilde v^{C'}-v^{C'}_x\right|\\ &\le \left|v^C_x-\tilde v^C \right| + \left|\tilde v^{C'}-v^{C'}_x \right|\\ &\le \frac{2\alpha}{n}\OPT(X)\\ &\le 2\alpha\cdot \mathrm{err}(x,v^C). \end{align*} It suffices to rescale $\alpha$. \paragraph{Bounded $L_\infty$ Norm} For every $v^C\in U$, we have $v^C\in V$ also, thus the $L_\infty$ norm is bounded because the $L_\infty$ norm of cost vectors in $V$ is bounded. \end{proof} \begin{proof}[Proof of \Cref{thm:various_metric} in graph metric space] Similar to doubling metric case, we plug the covering number in~\Cref{lem:covering_graph_metric} into \eqref{eq:simple_m_X} to concludes the proof. \end{proof} \section{Lower Bounds} \subsection{An $\Omega(1/\beta)$ Query Complexity Lower Bound for Any Algorithm} \label{sec:proof_intro_lb} \begin{theorem}[Restatement of \Cref{thm:intro_lb}] \label{thm:lb} There exists a family of datasets $X \subset \mathbb{R}$ with balancedness $\beta$ such that any (randomized) $O(1)$-approximate algorithm for \ProblemName{$2$-Median}\xspace with success probability at least $3/4$ must query the identify of data points in $X$ for $\Omega(1/\beta)$ times (provided that queried points have free access to distance function). \end{theorem} To prove our lower bound, we first apply the Yao's principle~\cite{yao1983lower} and derive the following lemma that reduce to proving lower bounds for deterministic algorithms with respect to some input distribution. \begin{lemma} \label{lem:yao} For real number $\alpha>1$, let $D$ be a distribution over a family of datasets $X\subset \R$, if any deterministic algorithm must query $\Omega(1/\beta)$ times to computes an $\alpha$-approximate center set for \ProblemName{$2$-Median}\xspace on $X$ sampled from $D$ with success probability at least $3/4$, then any randomized $\alpha$-approximate algorithm for \ProblemName{$2$-Median}\xspace with success probability at least $3/4$ must query $\Omega(1/\beta)$ times. \end{lemma} \begin{proof} For an algorithm $\mathcal{A}$, we denote by $\mathcal{A}(X)$ the output of $A$ when running on $X$. For any randomized algorithm $\mathcal{R}$ which makes at most $o(1/\beta)$ queries, it can be seen as a distribution over some deterministic algorithms $\mathcal{R}_1,\dots,\mathcal{R}_s$. By assumption, for every $i\in [s]$, it holds that \begin{equation*} \Pr_{X\sim D}\left[\cost(X,\mathcal{R}_i(X)) > \alpha \OPT(X)\right]> 1/4. 
\end{equation*} Therefore, averaging over all deterministic algorithms, we have \begin{equation*} \Pr_{X\sim D}\left[\cost(X,\mathcal{R}(X)) > \alpha \OPT(X)\right]> 1/4. \end{equation*} where the randomness is from the choice of $X$ and the randomness of $\mathcal{R}$. Hence, there exists an instance $X\in\mathrm{supp}(D)$ such that \begin{equation*} \Pr\left[\cost(X,\mathcal{R}(X)) > \alpha \OPT(X)\right]> 1/4. \end{equation*} which completes the proof. \end{proof} \begin{proof}[Proof of \Cref{thm:lb}] For an integer $n\ge 10/\beta$, let $m=\beta n/2$, we construct a family of $n$-point datasets as follows. For a set $\Pi\in [n]$ and a real number $t$, let $X^{\Pi,t}:=\{x_1^{\Pi,t},\dots,x_n^{\Pi,t}\}$ denotes an ordered $n$-point set in $1$-dimensional space such that, \begin{equation*} \forall i\in [n], \quad x_i^{\Pi,t}=\begin{cases} t& i\in \Pi\\ 0& i\not\in\Pi \end{cases}. \end{equation*} Let $\mathcal{X}:=\{X^{\Pi,t};|\Pi|=m, t\in \{1,2\}\}$ be a family of $n$-point datasets. Clearly, every $X\in\mathcal{X}$ is of balancedness $\beta$ for \ProblemName{$2$-Median}\xspace, and the optimal objective value of $X$ for \ProblemName{$2$-Median}\xspace is $0$. Let $D$ be a uniform distribution over $\mathcal{X}$. By \Cref{lem:yao}, it suffices to prove that for any $\alpha > 1$ and any deterministic algorithm $\mathcal{A}$, which makes at most $\frac{1}{2\beta}$ queries, will compute a $2$-point center set $C$ such that $\cost(X,C)>\alpha\cdot \OPT(X)$ for $X$ sampled from $D$ with probability at least $1/4$. We assume $\mathcal{A}$ queries with index $i_1,\dots, i_l$ with $l\le \frac{1}{2\beta}$, and receives $x_{i_1},\dots,x_{i_l}$ from the oracle. We observe that $\mathcal{A}$ finds a $2$-center set $C$ such that $\cost(X,C)>\alpha\cdot \OPT(X)$ if and only if $\mathcal{A}$ finds the exact optimal solution (otherwise, we say $\mathcal{A}$ fails). Therefore, we have \begin{equation} \label{eq:lb_goal} \begin{aligned} &\quad\Pr_{X\sim D}\left[\cost(X,\mathcal{A}(X))>\alpha\cdot\OPT(X) \right]\\ &=\Pr_{X\sim D}[\mathcal{A}\text{ fails}]\\ &=\Pr_{X\sim D}\left[\mathcal{A}\text{ fails} \mid \forall j\in[l], X_{i_j}=0 \right]\Pr_{X\sim D}\left[\forall j\in[l], x_{i_j}=0 \right]\\ &\quad +\Pr_{X\sim D}\left[\mathcal{A}\text{ fails} \mid \exists j\in[l], X_{i_j}\neq 0 \right]\Pr_{X\sim D}\left[\exists j\in[l], x_{i_j}\neq 0 \right]\\ &\geq \Pr_{X\sim D}\left[\mathcal{A}\text{ fails} \mid \forall j\in[l], X_{i_j}=0 \right]\Pr_{X\sim D}\left[\forall j\in[l], x_{i_j}=0 \right] \end{aligned} \end{equation} Recall that $\mathcal{A}$ makes queries with index $i_{1},\dots,i_{l}$, and thus the output $\mathcal{A}(X)$ depends only on $x_{i_j}$ for all $j\in[l]$. For $X$ sampled uniformly from $\mathcal{X}$, its optimal center set is $\{0,1\}$ with probability $0.5$ and $\{0,2\}$ with probability $0.5$. Therefore, condition on $\forall j\in[l], X_{i_j}=0$, $\mathcal{A}$ will fail with probability at least $0.5$, i.e., \begin{equation} \label{eq:lb_1} \Pr[\mathcal{A}\text{ fails}\mid \forall j\in[l], X_{i_j}=0]\ge \frac{1}{2} \end{equation} since it can not determine the center other than $0$. It remains to bound $\Pr_{X\sim D}[\forall j\in[l], X_{i_j}=0]$. 
\begin{equation}
\label{eq:lb_2}
\begin{aligned}
\Pr_{X\sim D}[\forall j\in[l], x_{i_j}=0]&={n-l\choose m}\cdot {n\choose m}^{-1}\\
&=\frac{(n-l)!(n-m)!}{(n-l-m)!n!}\\
&=\frac{(n-l-m+1)(n-l-m+2)\cdots(n-l)}{(n-m+1)(n-m+2)\cdots n}\\
&=\left(1-\frac{l}{n-m+1} \right)\left(1-\frac{l}{n-m+2} \right)\cdots \left(1-\frac{l}{n}\right)\\
&\ge \left(1-\frac{l}{n-m+1} \right)^{m}\\
&\ge 1-\frac{lm}{n-m+1}\\
&\ge 1-\frac{1}{2\beta}\cdot \frac{\beta n}{2}\cdot \left(\left(1-\frac{\beta}{2}\right)n + 1 \right)^{-1}\\
&\ge \frac{1}{2},
\end{aligned}
\end{equation}
where the sixth step follows from Bernoulli's inequality. Therefore, plugging \eqref{eq:lb_1} and \eqref{eq:lb_2} into \eqref{eq:lb_goal}, we have
\begin{equation*}
\Pr_{X\sim D}\left[\cost(X,\mathcal{A}(X))>\alpha\cdot\OPT(X) \right]\ge \frac{1}{4},
\end{equation*}
which completes the proof.
\end{proof}

\subsection{Vanilla \ProblemName{$k$-Median}\xspace on a Uniform Sample Can Incur Large Error}
\label{sec:proof_kmedian_lb}
\begin{lemma}
\label{lemma:kbmedian_lb}
There exists a family of $0.5$-balanced datasets $X_n\subset \R$ with $|X_n|=n$ for every integer $n\ge 1$, such that, letting $S_n$ be a set of $o(n)$ uniform samples from $X_n$, with probability at least $1/4$ it holds that $\cost(X_n,C^\star_n)\ge 1.01\cdot\OPT(X_n)$, where $C^\star_n$ is an optimal center set for \ProblemName{$3$-Median} on $S_n$.
\end{lemma}
\begin{proof}
Our proof is constructive. For $n\ge 10$, assume $S_n$ consists of $f(n)$ points sampled uniformly and independently from $X_n$, where $3\le f(n) < n/2$. To simplify the proof, we construct $X_n$ with size $O(n)$ (instead of exactly $n$) as follows. We place $n$ points at $0$ and $n$ points at $1$. In addition, we place $n$ points at $w$ and $\frac{n}{1.01\cdot f(n)}$ points at $w + f(n)$, and let $w\to \infty$. Clearly, the optimal $3$-point center set of $X_n$ is $\{0,1,w\}$, and the optimal objective value is $n/1.01$.

Recall that $S_n$ is a set of $f(n)$ uniform samples from $X_n$. The probability that $w+f(n) \in S_n$ is
\begin{align*}
\Pr[w+f(n)\in S_n] &= 1- \left(1 - \frac{n/(1.01\cdot f(n))}{3n + n/(1.01\cdot f(n))} \right)^{f(n)}\\
&\ge 1 - \exp\left(-\frac{n/1.01}{3n + n/(1.01\cdot f(n))} \right) \\
&\ge 1 - \exp\left(-\frac{1}{3.03 + 1/3} \right)\\
&\ge 0.25,
\end{align*}
where the third step uses $f(n)\ge 3$. Conditioned on $w+f(n)\in S_n$, we observe that $S_n$ contains at most $f(n)$ points at $0$, at most $f(n)$ points at $1$, at most $f(n)$ points at $w$, and at least $1$ point at $w+f(n)$. In this case, the optimal $3$-point center set $C^\star_n$ on $S_n$ must contain $w+f(n)$. On $X_n$, the best center sets that contain $w+f(n)$ are $\{0,w,w+f(n)\}$ and $\{1,w,w+f(n)\}$, both of which have objective value $n\ge 1.01\cdot\OPT(X_n)$. Hence, with probability at least $0.25$, the center set computed on $S_n$ incurs an error of factor at least $1.01$ on $X_n$; since this holds for every $3\le f(n)<n/2$, and in particular for any $f(n)=o(n)$ once $n$ is sufficiently large, the proof is complete.
\end{proof}

\section{Experiments}

\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{code/figure/twitter_size_error.pdf}
\caption*{Twitter dataset}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{code/figure/census_size_error.pdf}
\caption*{Census1990 dataset}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{code/figure/graph_size_error.pdf}
\caption*{NY dataset}
\end{subfigure}
\caption{The size-error tradeoff of uniform sampling.}
\label{fig:size_vs_error}
\end{figure*}

\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{code/figure/twitter_vs_coreset.pdf}
\caption*{Twitter dataset}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{code/figure/census_vs_coreset.pdf}
\caption*{Census1990 dataset}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{code/figure/graph_vs_coresets.pdf}
\caption*{NY dataset}
\end{subfigure}
\caption{The size-objective tradeoff, compared with that of coresets.}
\label{fig:vs_coresets}
\end{figure*}

\paragraph{Experiment Setup} Our experiments are conducted on $3$ real datasets: Twitter, Census1990, and NY. For Twitter~\cite{twitter_data} and Census1990~\cite{census1990_data}, we select numerical features and normalize them into Euclidean vectors in $\mathbb{R}^d$, with $n=21040936$ data points and $d=2$ for Twitter, and $n=2458285, d=68$ for Census1990. The NY dataset is a weighted graph with $|V| = 2276360$ vertices and $|E|=2974516$ edges, representing the largest connected component of the road network of New York State, extracted from OpenStreetMap~\cite{OpenStreetMap}. All experiments are conducted on a PC with an Intel Core i7 CPU and 16 GB of memory, and the algorithms are implemented in C++11.

\paragraph{Error Measure} To measure the error of a center set $C$ (which is a solution to \ProblemName{$k$-Median}\xspace), ideally one should compare $\cost(X, C)$ with $\OPT$. However, since computing the exact value of $\OPT$ is NP-hard, we instead compare it with an approximate solution $C^{\mathrm{apx}}$. Hence, our measure of error for a center set $C$ is the following $\hat{\epsilon}$, a signed relative error with respect to $\cost(X, C^{\mathrm{apx}})$: $\hat\epsilon(C) := \frac{\cost(X,C)-\cost(X,C^{\mathrm{apx}})}{\cost(X,C^{\mathrm{apx}})}$. The reason to keep the sign is that $C^{\mathrm{apx}}$ is not the optimal solution; hence it is possible that $\cost(X, C)$ is better than $\cost(X, C^{\mathrm{apx}})$, which leads to a negative value. To compute $C^{\mathrm{apx}}$, we first construct a coreset (e.g., using~\cite{DBLP:conf/stoc/FeldmanL11}) to reduce the size of the dataset, and then apply a standard approximation algorithm for \ProblemName{$k$-Median}\xspace~\cite{DBLP:conf/stoc/AryaGKMP01} on top of it. The coreset is used to ensure that the local search can run for a sufficiently large number of rounds and converge in a reasonable time.

\paragraph{Experiment: Size-error Tradeoff and Balancedness} Given $k \geq 1$, we take $m$ uniform samples $S$ from the dataset with varying $m$, and run a local search algorithm~\cite{DBLP:conf/stoc/AryaGKMP01} on $S$ to find a center set $C_S$ for \ProblemName{$k$-Median}\xspace.
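For concreteness, the following Python sketch illustrates this pipeline: draw a uniform sample, run a simple swap-based local search on it as a stand-in for the algorithm of~\cite{DBLP:conf/stoc/AryaGKMP01}, and report the signed relative error $\hat\epsilon$ defined above. The sketch is illustrative only (our experiments use a separate C++ implementation); in particular, the synthetic dataset, the routine names, and the surrogate baseline \texttt{C\_apx} computed on a plain subsample (standing in for the coreset-based $C^{\mathrm{apx}}$) are placeholders.
\begin{verbatim}
# Illustrative sketch only: uniform sampling + a simple swap-based local
# search (a stand-in for the k-Median local search used in the experiments),
# evaluated by the signed relative error \hat{\epsilon}.
import numpy as np

def cost(X, centers):
    # k-Median objective: sum of Euclidean distances to the nearest center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

def local_search_kmedian(S, k, rng, n_rounds=20, n_cand=30):
    # Swap-based local search with centers restricted to points of S.
    idx = list(rng.choice(len(S), size=k, replace=False))
    best = cost(S, S[idx])
    for _ in range(n_rounds):
        improved = False
        for j in range(k):
            for cand in rng.choice(len(S), size=min(n_cand, len(S)),
                                    replace=False):
                trial = idx.copy()
                trial[j] = cand
                c = cost(S, S[trial])
                if c < best:
                    idx, best, improved = trial, c, True
        if not improved:
            break
    return S[idx]

def signed_relative_error(X, C, C_apx):
    # \hat{\epsilon}(C) = (cost(X,C) - cost(X,C_apx)) / cost(X,C_apx)
    return (cost(X, C) - cost(X, C_apx)) / cost(X, C_apx)

rng = np.random.default_rng(0)
X = rng.normal(size=(100000, 2))             # placeholder dataset
k, m = 10, 500
# Surrogate for the coreset-based baseline C^apx (illustration only).
C_apx = local_search_kmedian(X[rng.choice(len(X), 5000, replace=False)], k, rng)
S = X[rng.choice(len(X), m, replace=False)]  # uniform sample
C_S = local_search_kmedian(S, k, rng)
print(signed_relative_error(X, C_S, C_apx))
\end{verbatim}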
We evaluate the error $\hat\epsilon(C_S)$ and plot the size-error curves in \Cref{fig:size_vs_error} for various choices of $k$. In fact, it turns out that the choice of $k$ also affects the balancedness parameter $\beta$. Hence, for each dataset and value of $k$, we evaluate the balancedness $\beta$ of the abovementioned $C^{\mathrm{apx}}$ on the original dataset, and we also take $m = 500$ uniform samples $S$ and evaluate the balancedness $\beta'$ of $C_S$ on $S$. The resulting $\beta$ and $\beta'$ are reported in \Cref{tab:balancedness}. To make the measurement stable, we repeat the sampling and local search $20$ times independently and report the average statistics.

As can be seen from \Cref{fig:size_vs_error}, uniform sampling shows outstanding performance and exhibits a similar convergence of the error curve regardless of the choice of $k$ and the resulting balancedness of the datasets. In the NY dataset, it even achieves negative error when $k = 20$, which means it performs even better than the baseline. We also observe in \Cref{tab:balancedness} that the datasets are mostly balanced even when $k$ is relatively large, and thus the factor of $1/\beta$ is reasonably bounded (i.e., no greater than $100$); even in the less balanced scenarios, the error is still very small. Moreover, the center set computed on $S$, although computed using a vanilla local search for \ProblemName{$k$-Median}\xspace, actually satisfies the balancedness constraint of the \ProblemName{$(k,\beta)$-Median}\xspace problem, that is, $\beta'\ge \beta/2$ for every choice of $k$ and every dataset. This suggests that it is not necessary to run the more sophisticated \ProblemName{$(k,\beta)$-Median}\xspace at all in practice. These findings help to justify why it is often seen in practice that a few uniform samples are enough to compute a good approximation.

\begin{table}[t]
\caption{Balancedness under different values of $k$.}
\label{tab:balancedness}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{$k$} & \multicolumn{2}{c}{Twitter} & \multicolumn{2}{c}{Census1990} & \multicolumn{2}{c}{NY} \\
& $\beta$ & $\beta'$ & $\beta$ & $\beta'$ & $\beta$ & $\beta'$ \\
\midrule
5 & 0.549 & 0.523 & 0.302 & 0.292 & 0.036 & 0.127 \\
10 & 0.107 & 0.273 & 0.359 & 0.326 & 0.053 & 0.093\\
15 & 0.138 & 0.192 & 0.120 & 0.249 & 0.073 & 0.121 \\
20 & 0.202 & 0.184 & 0.045 & 0.216 & 0.072 & 0.120\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

\paragraph{Experiment: Comparison to Coreset} Since uniform sampling is usually very efficient, we expect to see an advantage in running time compared with coreset constructions. Our second experiment aims to demonstrate this advantage, in particular in comparison with the coresets constructed by importance sampling~\cite{DBLP:conf/stoc/FeldmanL11}. We set $k=20$, and vary the sample size $m$ for both uniform sampling and importance sampling. Let $S$ and $S'$ be the subsets constructed by uniform sampling and importance sampling, respectively. We run a local search algorithm on each of $S$ and $S'$, and compute the objective value of the output center set on the original dataset. In \Cref{fig:vs_coresets}, we plot the objective-size curves and observe that uniform sampling is comparable to the coreset on all these datasets. However, for the running time, it takes $391$s/$114$s/$298$s to compute the coreset on the Twitter/Census1990/NY datasets, while the runtime of the local search step is $16$s/$42$s/$203$s.
Hence, the coreset construction is even more costly than the local search, and the coreset actually becomes the bottleneck of the total running time. On the other hand, uniform sampling takes merely $10^{-3}$s to sample $500$ points from the dataset, which is significantly more efficient. This also justifies the practical benefit of using uniform sampling over methods like importance sampling.
\bibliographystyle{alpha}
{ "arxiv_id": "2302.11320", "language": "en", "timestamp": "2023-02-23T02:13:35", "url": "https://arxiv.org/abs/2302.11320", "yymm": "2302" }
\section{The effect of post-selection \label{subsec: post-selection} }
In this section, we discuss the effect of post-selection in mitigating errors in QSCI. To be specific, we consider the post-selection technique introduced in Sec.~\ref{subsec:ground-state}, which exploits the conservation of particle number and spin, targeting bit-flip noise. In the following, we consider measuring an $N$-qubit computational basis state. We assume that the state is prepared without the effects of noise but that each bit of the measurement result is flipped with error probability $p$. This is equivalent to the situation where bit-flip noise acts on each qubit independently after the input state is generated. The probability that the $N$-bit string describing the state is measured correctly is $(1-p)^N$.
\subsection{Jordan-Wigner mapping}
Let us now assume that the electronic Hamiltonian is converted to a qubit Hamiltonian by the Jordan-Wigner mapping. In this case, the number of 1's in the $N$-bit string, which we denote by $n_1$, corresponds to the number of electrons in the system and is sometimes known prior to the calculation for a ground state or an excited state. One can thus post-select the measurement outcomes, discarding bit strings whose number of 1's is not equal to $n_1$. Although one may still obtain incorrect results, the probability is reduced. More concretely, the probability of obtaining a result with the correct $n_1$ is
\begin{equation}
(1-p)^N+n_0n_1p^2(1-p)^{N-2}+O(p^4),
\end{equation}
where we define $n_0:=N-n_1$. After the post-selection, the probability of obtaining the correct result is thus
\begin{align}
&\dfrac{(1-p)^N}{(1-p)^N+n_0n_1p^2(1-p)^{N-2}+O(p^4)}\\
&=\dfrac{1}{1+n_0n_1p^2/(1-p)^2+O(p^4)}\\
&=\dfrac{1}{1+n_0n_1p^2+O(p^3)}\\
&=1-n_0n_1p^2+O(p^3).
\end{align}
The error rate of obtaining an incorrect result is thus reduced from $1-(1-p)^N\sim pN$ to $n_0n_1p^2$, with the ratio being
\begin{equation}
\dfrac{n_0n_1p^2}{pN}=\left(\dfrac{n_0}{N}\right)\left(\dfrac{n_1}{N}\right)Np \sim O(pN),
\end{equation}
which is less than one in the sensible situation $Np \ll 1$, where the original success probability $(1-p)^N$ is not too small. Although we have considered a computational basis state as the input state here, we expect that the post-selection works similarly for a general input state that is a superposition of computational basis states with some fixed $n_1$. Note that, if one also knows the total spin $S_z$ of the electrons prior to the calculation, one can count the number of 1's separately for up- and down-spin electrons and make the post-selection more efficient.
\subsection{Other mappings}
For most other fermion-qubit mappings, the reduction of the error probability from $O(p)$ to $O(p^2)$ is not expected. For example, in the parity mapping~\cite{bravyi2002fermionic,seeley2012bravyi} and the Bravyi-Kitaev mapping~\cite{bravyi2002fermionic}, the states $\ket{01}$ and $\ket{11}$ are connected by just one bit flip, but both of them are one-electron states. This bit flip, which cannot be detected by the post-selection, occurs with probability $O(p)$, and thus the error rate after the post-selection is still $O(p)$. The same is true for a Hamiltonian with a reduced number of qubits obtained by using symmetries, where there is always a bit flip that does not change the total number of electrons.
\section{Details of the algorithms}
In this section, we present several detailed discussions of the QSCI algorithms.
\subsection{Choice of $\beta_i$ parameters and variational inequalities in sequential diagonalization scheme} \label{subsec:details-sequential} Here we discuss the sequential diagonalization scheme, introduced in Sec.~\ref{sssec:method-sequantial}, on how to choose the $\beta_i$ parameters and a potential violation of the variational inequality, following the discussion in Ref.~\cite{higgott2019variational}. Suppose $k$ low-lying eigenstates of $\hat{H}$, $\ket{E_0}, \cdots, \ket{E_{k-1}}$, are known exactly. Then, the effective Hamiltonian to find the $k$-th eigenstate can be exactly constructed as \begin{align} \hat{H}^{(k)\prime} =\hat{H}+ \sum_{i=0}^{k-1}\beta_i \ket{E_i}\bra{E_i}. \end{align} This can be formally expressed as \begin{align} \hat{H}^{(k)\prime} = \sum_{i=0}^{k-1}(E_i + \beta_i) \ket{E_i}\bra{E_i} + \sum_{i\geq k} E_i \ket{E_i}\bra{E_i}, \end{align} where $E_i$ represents the $i$-th eigenvalue of $\hat{H}$ in this appendix. For $\beta_i > E_k - E_i$ ($i=0,\cdots, k-1$), the following inequality holds for an arbitrary $\ket{\psi}$ with $\bra{\psi}\ket{\psi}=1$: \begin{align} \bra{\psi} \hat{H}^{(k)\prime} \ket{\psi} \geq E_k, \end{align} where the equality holds if and only if $\ket{\psi}=\ket{E_k}$ up to a phase factor. In the language of the eigenvalue problem of Eq.~\eqref{eq:eigenvalue-eq}, this implies $E_{R_k}^{(k)\prime}\geq E_k$, where $E_{R_k}^{(k)\prime}$ is the smallest eigenvalue of $\bm{H}_{R_k}^{(k)\prime}$, the subspace matrix for $\hat{H}^{(k)\prime}$, defined in the same way as Eq.~\eqref{eq:sequential-matrix}. In practice, the condition $\beta_i > E_k - E_i$ can be utilized if one has prior knowledge on the energy spectrum, e.g., based on variational quantum algorithms. But even without such knowledge, one may still rely on the stronger condition of $\beta_i > 2\sum_j \abs{c_j}$~\cite{higgott2019variational}, which is written in terms of the coefficients $c_j$ of the qubit Hamiltonian $\hat{H}=\sum_j c_j P_j$, expressed by the Pauli strings $P_j$. In reality, the effective Hamiltonian cannot be exactly constructed as the $k$ low-lying eigenstates would be obtained only approximately and, hence, the inequality $E_{R_k}^{(k)}\geq E_k$ is not guaranteed. For instance, in the problem to find the first excited state, the effective Hamiltonian $\hat{H}^{(1)}$ is constructed with $|\psi_{\rm out}^{(0)}\rangle$, the output state for the ground state obtained by the preceding step in sequential diagonalization. Unless the output state perfectly overlaps with the true ground state $\ket{E_0}$, or $|\langle\psi_{\rm out}^{(0)}| E_0\rangle|=1$, there is no guarantee that $\ev{\hat{H}^{(1)}}{\psi}$ is bounded by the exact eigenvalue $E_1$. Instead, $\min_\psi \ev{\hat{H}^{(1)}}{\psi}$ is only bounded as~\cite{higgott2019variational}: \begin{align} E_1 - O((E_1-E_0)\epsilon_0) \leq \min_\psi \ev{\hat{H}^{(1)}}{\psi} \leq E_1 +\beta_0\epsilon_0, \end{align} where $\epsilon_0 = 1 - |\langle\psi_{\rm out}^{(0)}| E_0\rangle|^2$ and $\bra{\psi}\ket{\psi}=1$. A concrete example for breaching the variational inequality is given as follows. Consider a system with the unique ground state, i.e., $E_0 < E_1$. Suppose that one has a poor output state $|\psi_{\rm out}^{(0)}\rangle$ that is orthogonal to the true ground state $\ket{E_0}$, i.e., $\epsilon_0=1$. Then, $\bra{\psi}\hat{H}^{(1)} \ket{\psi} \geq E_0$ for any positive $\beta_0$, where the equality holds if and only if $\ket{\psi}=\ket{E_0}$ up to a phase factor. 
This means that the variational inequality $E_{R_1}^{(1)}\geq E_1 (> E_0)$ is violated at least in the limit where the subspace $\mc{S}_{R_1}^{(1)}$ is enlarged to cover the part of the Fock space necessary to express $\ket{E_0}$. Such a subspace can be constructed, e.g., if the input state is chosen to be $|\psi_{\rm in}^{(1)}\rangle = \ket{E_0}$ with a sufficiently large $R_1$. \subsection{An optimal shot allocation for evaluating expectation values of multiple observables in conventional method} \label{subsec:appendix-scaling-multiple-operators} In this subsection, we describe the details of the numerical estimation of computational cost in Sec.~\ref{subsec:scaling} for evaluating multiple operators. In the numerical simulation, we considered a situation where we want to calculate the expectation values of the nuclear gradient $\left\{\pdv{\hat{H}}{x_i} \mid i=1,\dots,3N_{\text{atom}} \right\}$ and the nuclear Hessian $\left\{\pdv{\hat{H}}{x_i}{x_j} \mid i,j=1,\dots,3N_{\text{atom}} \right\}$ along with the Hamiltonian $\hat{H}(\left\{x_i\right\})$, where $x_i$ are the nuclear coordinates and $N_\mr{atom}$ is the number of atoms in the molecule. The most naive way of doing it would be to calculate each expectation value completely separately. This is, though, too naive to be considered as the optimal strategy; all the observables are linear combinations of operators $a^\dagger_i a_j$ and $a^\dagger_i a_j a^\dagger_k a_l$ in the fermionic basis, and the expectation values of these operators can be reused among the operators. Before discussing the optimal strategy for evaluating expectation values of multiple observables, let us review the one for a single observable, following the discussion in Ref.~\cite{rubin2018application}. Consider a quantum state $\ket{\psi}$ and the expectation value of an operator $\hat{O}$ which can be written as a sum of operators $\hat{O}_l$: \begin{equation} \hat{O}=\sum_{l=1}^{L}\hat{O}_l. \end{equation} Each term $\hat{O}_l$ can be either a Pauli string or a sum of Pauli strings that commute with each other, which admits the projective measurement on eigenvalues of each $\hat{O}_l$. We denote the variance of each term $\hat{O}_l$ per one shot by $\sigma_l^2:=\text{Var}(\hat{O}_l) := \ev{\hat{O}_l^2}{\psi}-\ev{\hat{O}_l}{\psi}^2$. By measuring each term $\hat{O}_l$ with a number of shots $M_l$, the observed expectation value has the variance $\sum_l \sigma_l^2/M_l$. Employing the method of Lagrange multiplier with the Lagrangian \begin{equation} \mathcal{L}=\sum_l M_l +\lambda \left( \sum_l \dfrac{\sigma_l^2}{M_l} -\epsilon^2\right), \end{equation} one can get the optimal allocation of the number of shots with the total variance of the expectation value fixed to $\epsilon^2$, which is \begin{equation} M_l\propto \sigma_l. \end{equation} In general, $\sigma_l$ is not exactly known a priori, so one may use $\sigma_l$ for Haar random states to get a reasonable strategy. One may also try to improve the strategy by dividing the shot budget for one evaluation of an expectation value into several iterations: one can simply evaluate the expectation value with a mildly optimized strategy in the first iteration, and then, in the rest of the iterations, one can adjust the strategy by calculating $\sigma_l$ by using the expectation values obtained in the previous iterations. Generalizing the above discussion, let us consider a situation where one calculates the expectation values of a set of operators $\left\{\hat{O}^{(i)} \mid i=1,\dots,n\right\}$. 
We assume that $\hat{O}^{(i)}$ is decomposed as \begin{equation} \hat{O}^{(i)}=\sum_{l=1}^{L} \hat{O}^{(i)}_l, \end{equation} where all of $\left\{\hat{O}^{(i)}_l\mid i=1,\dots,n\right\}$ are simultaneously measurable for each $l$, i.e., $[\hat{O}^{(i)}_l, \hat{O}^{(j)}_l] = 0$ for any $i,j$. In our numerical simulation, the grouping was done by firstly taking the sum of all the observables $\hat{O}^{(i)}$ with each Pauli string with negative coefficient multiplied by $-1$ to make it positive. Then the greedy qubit-wise grouping of Refs.~\cite{mcclean2016theory, crawford2021efficient} was used. Our aim here is to find a good strategy to estimate the expectation values of all the operators $O^{(i)}$ with statistical error less than $\epsilon$. Note that one can always rescale the observables so that the required precision is the same for all observables even when one requires different precision for different operators. To get an analytical solution, we choose the following Lagrangian with slightly modified constraint, \begin{equation} \mathcal{L}=\sum_l M_l +\lambda \left(\sum_l \sum_i \left(\dfrac{{\sigma_l ^{(i)}}^2}{M_l}\right)-\epsilon_{\text{tot}}\right), \end{equation} where $\epsilon_{\text{tot}}$ can be $N\times \epsilon$ but it turns out that the choice of $\epsilon_{\text{tot}}$ does not affect the final result. By solving the extremal condition of this Lagrangian, one can get the best shot allocation that minimizes the total number of shots, while keeping the sum of the variances of all the operators less than $\epsilon_{\text{tot}}$. The result implies that \begin{equation} M_l\propto \sqrt{\sum_i {\sigma_l^{(i)}}^2}. \end{equation} By estimating the variance of each operator $\hat{O}^{(i)}$ with this shot allocation, and by adjusting the total number of shots so that the statistical error of each operator is $\epsilon$ at worst, one can obtain the total number of shots $\sum_l M_l$ with desired precision for all the operators. This may not be the optimal shot allocation to achieve the statistical error $\epsilon$ for each operator as we are minimizing the total variance rather than the maximum value of the variances, but this will give a reasonable strategy that is analytically available. There is one comment to make on the evaluation of nuclear Hessians. In the following, the expectation value is always taken by an ansatz state $\ket{\psi(\bm{\theta}(\left\{x_i\right\}))}$ parametrized by the ansatz parameters $\bm{\theta}(\left\{x_i\right\})$; we assume that $\bm{\theta}(\left\{x_i\right\})$ is optimized so that $\ket{\psi(\bm{\theta}(\left\{x_i\right\}))}$ has the minimum energy within the ansatz for each $\left\{x_i\right\}$. We denote the energy expectation value of the state by $E(\left\{x_i\right\})$. In the case of the nuclear gradient, \begin{equation} \pdv{E(\left\{x_i\right\})}{x_i}=\expval{\pdv{\hat{H}(\left\{x_i\right\})}{x_i}} \end{equation} holds thanks to the Hellmann-Feynman theorem, and it suffices to compute the right-hand side to obtain the nuclear gradient of the energy. In the case of the nuclear Hessian of the energy~\cite{mitarai2020theory}, on the other hand, it is in general necessary to evaluate the contribution of derivatives acting on the state as well as on the Hamiltonian operator, \begin{equation} \pdv{E(\left\{x_i\right\})}{x_i}{x_j}=\expval{\pdv{\hat{H}(\left\{x_i\right\})}{x_i}{x_j}}+\dots \end{equation} In our numerical simulation, we ignored the contribution of the derivatives acting on the state for simplicity. 
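Returning briefly to the allocation rule derived above, the following short sketch summarizes it numerically; the variances, the number of measurement groups, and the target precision are placeholder values rather than quantities from our simulations. The total budget is rescaled so that the worst observable reaches statistical error $\epsilon$, as described in the text.
\begin{verbatim}
# Sketch of the shot-allocation rule described above (placeholder values).
import numpy as np

rng = np.random.default_rng(1)
L, n = 8, 5                                   # measurement groups, observables
sigma = rng.uniform(0.1, 1.0, size=(L, n))    # sigma[l, i] = sigma_l^{(i)}
eps = 1e-2                                    # target error per observable

# Relative allocation M_l proportional to sqrt(sum_i sigma_{l,i}^2).
w = np.sqrt((sigma**2).sum(axis=1))
M = w / w.sum()

# Variance of observable i per unit total budget: sum_l sigma_{l,i}^2 / M_l.
var_i = (sigma**2 / M[:, None]).sum(axis=0)

# Scale the total budget so that the worst observable reaches error eps.
total_shots = int(np.ceil(var_i.max() / eps**2))
shots_per_group = np.ceil(M * total_shots).astype(int)
print(total_shots, shots_per_group)
\end{verbatim}
In practice the variances would be estimated, e.g., from Haar-random-state values or from earlier iterations, as discussed above.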
Returning to the contribution of the derivatives acting on the state: in the case of QWC, evaluating this contribution requires additional quantum resources. In the case of QSCI, on the other hand, one can generate and diagonalize the Hamiltonians at small finite displacements $x_i\to x_i\pm \delta$ to obtain the derivatives of the state within the same selected subspace of the Fock space with no additional quantum resources. If we take this contribution into account properly, the advantage of QSCI will increase. It should also be noted that, although we evaluate $O(N_{\text{atom}}^2)$ observables in the numerical simulations for the hydrogen chain in the main text, many of the observables are zero as operators due to the rich geometrical symmetry of the molecules. It is likely that, for more generic molecules, the crossing point of QSCI and QWC comes at a larger number of qubits.
\section{Details of numerical simulations and experiments}
\label{sec:appendix-details-of-sim-and-exp}
In this section, we explain the details of the numerical simulations and of the experiment on quantum hardware in the main text. For all the molecules examined in this study, the second-quantized electronic Hamiltonian under the Born-Oppenheimer approximation is generated by OpenFermion~\cite{mcclean2020openfermion} interfaced with PySCF~\cite{sun2018pyscf} using the Hartree-Fock orbitals with the STO-3G minimal basis set, unless otherwise stated. The electronic Hamiltonians are mapped to qubit ones by the Jordan-Wigner transformation. The molecular geometries used in our study are shown in Table~\ref{tab: geometries}. Stable geometries for diatomic molecules are taken from the CCCBDB database~\cite{johnson2022nist} and Ref.~\cite{wang2016relativistic}, while those for the other molecules are taken from PubChem~\cite{kim2023pubchem}, except for the hydrogen chains, which are not in their stable geometries. We list the details specific to each of the simulations and the experiment in the following.
\begin{table*}[]
\caption{Geometries of molecules. ``$(\mr{X}, (x,y,z))$" denotes the three-dimensional coordinates $x,y,z$ of an atom X in units of \AA.
\label{tab: geometries} } \begin{tabular}{c|p{12cm}} \hline \hline Molecule & Geometry \\ \hline \ce{H2O} & (O, (0, 0, 0)), (H, (0.2774, 0.8929, 0.2544)), (H, (0.6068, -0.2383, -0.7169)) \\ \ce{H}$_n$ ($n=4,6,8,10,12)$ & (H, (0, 0, 0)), (H, (0, 0, 1.0), \dots, (H, (0, 0, $n \times 1.0$)) \\ \ce{LiH} & (Li, (0, 0, 0)), (H, (0, 0, 1.595))\\ \ce{N2} & (N, (0, 0, 0)), (N, (0, 0, 1.1))\\ \ce{O2} & (O, (0, 0, 0)), (O, (0, 0, 1.2))\\ \ce{F2} & (F, (0, 0, 0)), (F, (0, 0, 1.4))\\ \ce{Cl2} & (Cl, (0, 0, 0)), (Cl, (0, 0, 2.0))\\ \ce{HCl} & (H, (0, 0, 0)), (Cl, (0, 0, 1.3))\\ \ce{CO} & (C, (0, 0, 0)), (O, (0, 0, 1.1))\\ \ce{Cr2} & (Cr, (0, 0, 0)), (Cr, (0, 0, 1.6))\\ \ce{Benzene} &(C, (-1.2131, -0.6884, 0)), (C, (-1.2028, 0.7064, 0.0001)), (C, (-0.0103, -1.3948, 0)), (C, (0.0104, 1.3948, -0.0001)), (C, (1.2028, -0.7063, 0)), (C, (1.2131, 0.6884, 0)), (H, (-2.1577, -1.2244, 0)), (H, (-2.1393, 1.2564, 0.0001)), (H, (-0.0184, -2.4809, -0.0001)), (H, (0.0184, 2.4808, 0)), (H, (2.1394, -1.2563, 0.0001)), (H, (2.1577, 1.2245, 0))\\ \ce{Naphthalene} &(C, (0, -0.7076, 0)), (C, (0, 0.7076, 0.0001)), (C, (1.225, -1.3944, 0.0001)), (C, (1.225, 1.3944, 0)), (C, (-1.225, -1.3943, 0)), (C, (-1.225, 1.3943, 0)), (C, (2.4327, -0.6958, 0)), (C, (2.4327, 0.6959, -0.0001)), (C, (-2.4327, -0.6958, -0.0001)), (C, (-2.4327, 0.6958, 0)), (H, (1.2489, -2.4822, 0.0001)), (H, (1.2489, 2.4821, -0.0001)), (H, (-1.2489, -2.4822, -0.0001)), (H, (-1.249, 2.4821, 0.0001)), (H, (3.3733, -1.239, -0.0001)), (H, (3.3732, 1.2391, -0.0001)), (H, (-3.3733, -1.239, -0.0001)), (H, (-3.3732, 1.239, 0))\\ \ce{Anthracene} &(C, (-1.225, 0.706, 0.0001)), (C, (-1.2251, -0.7061, 0.0001)), (C, (1.2251, 0.7061, 0.0002)), (C, (1.2251, -0.7061, 0.0001)), (C, (0, 1.3937, 0.0001)), (C, (0, -1.3938, 0)), (C, (-2.4504, 1.393, -0.0001)), (C, (-2.4505, -1.393, 0)), (C, (2.4505, 1.3929, 0)), (C, (2.4505, -1.3929, 0)), (C, (-3.6587, 0.6956, -0.0001)), (C, (-3.6588, -0.6955, -0.0001)), (C, (3.6587, 0.6956, -0.0002)), (C, (3.6587, -0.6956, -0.0002)), (H, (0, 2.4838, 0)), (H, (0, -2.4839, -0.0001)), (H, (-2.4742, 2.4808, -0.0001)), (H, (-2.4744, -2.4809, 0)), (H, (2.4742, 2.4808, 0)), (H, (2.4743, -2.4808, 0)), (H, (-4.5989, 1.2394, -0.0003)), (H, (-4.5991, -1.2391, -0.0002)), (H, (4.5989, 1.2393, -0.0003)), (H, (4.5989, -1.2393, -0.0004))\\ \ce{Tetracene} & (C, (0, 0.7045, -0.0002)), (C, (0, -0.7046, -0.0001)), (C, (-2.451, 0.7058, 0)), (C, (-2.4511, -0.7058, 0.0002)), (C, (2.4511, 0.7057, 0.0001)), (C, (2.4511, -0.7058, -0.0001)), (C, (1.2254, 1.3923, -0.0001)), (C, (1.2254, -1.3924, -0.0003)), (C, (-1.2254, 1.3923, -0.0002)), (C, (-1.2255, -1.3923, 0.0002)), (C, (-3.6764, 1.3928, -0.0001)), (C, (-3.6764, -1.3929, 0.0002)), (C, (3.6764, 1.3929, 0.0003)), (C, (3.6765, -1.3929, -0.0001)), (C, (-4.8846, 0.6957, -0.0001)), (C, (-4.8847, -0.6955, 0.0001)), (C, (4.8846, 0.6957, 0.0004)), (C, (4.8847, -0.6956, -0.0001)), (H, (1.2253, 2.4825, -0.0001)), (H, (1.2254, -2.4825, -0.0003)), (H, (-1.2254, 2.4824, -0.0003)), (H, (-1.2255, -2.4824, 0.0003)), (H, (-3.6999, 2.4807, -0.0002)), (H, (-3.7001, -2.4808, 0.0003)), (H, (3.6999, 2.4807, 0.0004)), (H, (3.7001, -2.4807, -0.0003)), (H, (-5.8248, 1.2393, -0.0002)), (H, (-5.8249, -1.2392, 0.0002)), (H, (5.8248, 1.2394, 0.0005)), (H, (5.8249, -1.2392, -0.0002))\\ \hline \hline \end{tabular} \end{table*} \subsection{Noiseless simulation for ground state} \label{subsec:setup-noiseless-vqe} \begin{figure}[h!] 
\includegraphics[width=.45\textwidth]{figures/rsp-ansatz.pdf}
\caption{Real-valued symmetry-preserving ansatz with $n$ qubits and depth $d$.}
\label{fig:rsp-ansatz}
\end{figure}
In Sec.~\ref{subsec:ground-state-simulation-with-noiseless-vqe}, the \ce{H2O} molecule, with six active electrons and five active orbitals, is chosen to find the ground state by QSCI. In the VQE calculation for preparing the input states, the BFGS optimizer is employed through the scientific library SciPy~\cite{virtanen2020scipy}, and the real-valued symmetry-preserving ansatz~\cite{ibe2022calculating} is used to construct parametric quantum circuits with depth 10 (Fig.~\ref{fig:rsp-ansatz}). The initial state of the ansatz circuits is set to be the Hartree-Fock state, and the initial parameters in the optimization are chosen randomly.
\subsection{Noiseless simulations for excited states}
\label{subsec:setup-noiseless-vqd}
In Sec.~\ref{subsec:simulation-excited-h2o}, QSCI is demonstrated for the same \ce{H2O} molecule, but now to find excited states. To prepare the input states, the VQD calculations are performed in the same setup as the previous VQE calculation, but with penalty terms~\cite{mcclean2016theory,ryabinkin2018constrained,kuroiwa2021penalty} added to the Hamiltonian to constrain the resulting states to have $S_z=0$ and $N_e=6$; specifically, the following operator (in atomic units) is added to the Hamiltonian:
\begin{equation}
3.0 (\hat{S}_z)^2 + 3.0 (\hat{N}_e-6)^2,
\end{equation}
where $\hat{S}_z$ is the operator for the total electron spin in the $z$-direction, and $\hat{N}_e$ is the particle number operator of the electrons in the active space. Furthermore, the overlap terms that constrain the state to be orthogonal to lower-energy eigenstates~\cite{higgott2019variational} are added with coefficients of unity (in Hartree). For the sequential diagonalization scheme of QSCI, the coefficients $\beta_i$ for ensuring orthogonality are also set to unity.
\begin{figure}[h!]
\includegraphics[width=.25\textwidth]{figures/ryansatz.pdf}
\caption{Ry ansatz with 8 qubits. All the rotational gates have independent parameters. The depth is set to 8 in our experiment.}
\label{fig:ryansatz}
\end{figure}
\subsection{Noisy simulation and experiment}
\label{ssec:setup-experiment}
For the noisy simulation and the experiment in Sec.~\ref{sec:noisy-simulation-experiment}, the input states are prepared by noiseless VQE simulations. The VQE calculations are performed with the BFGS optimizer and the Ry ansatz (Fig.~\ref{fig:ryansatz}) with depth 8. Other details are described in the main text.
\section{Supplemental numerical results}
In this section, we provide additional numerical results to supplement the contents of Sec.~\ref{sec:numerical}.
\subsection{Scaling of computational costs with various molecules}
\label{ssec:more-results-scaling}
Figure~\ref{fig:all-scaling} shows the scaling of the classical and quantum computational costs, discussed in Sec.~\ref{subsec:scaling}, for different types of molecules. Here, we test three kinds of molecules: hydrogen chains, diatomic molecules, and aromatic molecules. The data for the hydrogen chains are exactly the same as in the main text. The diatomic molecules are \ce{N2}, \ce{O2}, \ce{F2}, \ce{Cl2}, \ce{HCl}, \ce{CO}, and \ce{Cr2} with the cc-pVQZ basis, and we tested them with various active spaces, just as in the main text for \ce{Cr2}. To test larger molecules, four aromatic molecules are chosen: benzene, naphthalene, anthracene, and tetracene.
The Hamiltonian is generated by using the Hartree-Fock orbitals with STO-3G basis. The active space of $n$ orbitals and $n$ electrons with varying $n$ was employed for the diatomic and aromatic molecules. The geometries of these tested molecules are summarized in Table~\ref{tab: geometries}. As can be seen in Fig.~\ref{fig:all-scaling}, hydrogen chains with various numbers of atoms show the worst scalings, while \ce{Cr2} is one of the least expensive systems among others. \begin{figure*} \begin{minipage}{0.9\textwidth} \subfloat[][$R$ required for energy error $\epsilon=\SI{0.001}{Hartree}$.]{ \includegraphics[width=.45\textwidth]{figures/scaling/qubit-to-R-all-0.001.pdf} } \subfloat[][$1/\abs{c_R}^2$ for energy error $\epsilon=\SI{0.001}{Hartree}$.]{ \centering \includegraphics[width=.45\textwidth]{figures/scaling/qubit-to-cR-all-0.001.pdf} } \subfloat[][$R$ required for energy error $\epsilon=\SI{0.01}{Hartree}$.]{ \centering \includegraphics[width=.45\textwidth]{figures/scaling/qubit-to-R-all-0.01.pdf} } \subfloat[][$1/\abs{c_R}^2$ for energy error $\epsilon=\SI{0.01}{Hartree}$.]{ \includegraphics[width=.45\textwidth]{figures/scaling/qubit-to-cR-all-0.01.pdf} } \subfloat[][$R$ required for energy error $\epsilon=\SI{0.1}{Hartree}$.]{ \includegraphics[width=.45\textwidth]{figures/scaling/qubit-to-R-all-0.1.pdf} } \subfloat[][$1/\abs{c_R}^2$ for energy error $\epsilon=\SI{0.1}{Hartree}$.]{ \includegraphics[width=.45\textwidth]{figures/scaling/qubit-to-cR-all-0.1.pdf} } \end{minipage} \caption{Estimated $R$ and $1/\abs{c_R}^2$ for various molecules with the same setup as Figs. \ref{fig:scaling-a} and \ref{fig:scaling-b}. The number of qubits is varied by changing the number of atoms for hydrogen chains, and by changing the active space for the other molecules.} \label{fig:all-scaling} \end{figure*} \subsection{Sampling simulations with various molecules} \label{ssec:appendix-sampling} In this subsection, we present similar results as Fig.~\ref{fig:conventional} but with various other molecules. The results, shown in Fig.~\ref{fig:all-sampling}, show the same features as \ce{H6} in the main text, such as the small standard deviation for QSCI and $1/\abs{c_R}^2$ giving an accurate estimation of the number of shots for given accuracy $\epsilon$. One can also see that the standard deviation is almost constant for hydrogen chains with various numbers of atoms, while the absolute error is highly dependent on the number of atoms. Comparing the three 12-qubit systems, it can be seen that the difference between the standard deviation and the absolute error depends on the system. \begin{figure*} \begin{minipage}{0.9\textwidth} \subfloat[][\ce{H4} (8 qubits)]{ \includegraphics[width=.45\textwidth]{figures/sampling/H4_diff.pdf} } \subfloat[][\ce{H6} (12 qubits)]{ \includegraphics[width=.45\textwidth]{figures/sampling/H6_diff.pdf} } \subfloat[][\ce{H8} (16 qubits)]{ \includegraphics[width=.45\textwidth]{figures/sampling/H8_diff.pdf} } \subfloat[][\ce{H10} (20 qubits)]{ \includegraphics[width=.45\textwidth]{figures/sampling/H10_diff.pdf} } \subfloat[][\ce{H2O} (12 qubits)]{ \includegraphics[width=.45\textwidth]{figures/sampling/H2O_diff.pdf} } \subfloat[][\ce{LiH} (12 qubits)]{ \includegraphics[width=.45\textwidth]{figures/sampling/LiH_diff.pdf} } \end{minipage} \caption{Sampling simulation with various molecules. 
See Fig.~\ref{fig:conventional} and the main text for details.} \label{fig:all-sampling} \end{figure*} \subsection{Accuracy of expectation values of observables other than the Hamiltonian in QSCI} \label{ssec:appendix-multiple-observable-accuracy} \begin{figure}[h!] \begin{minipage}{0.45\textwidth} \subfloat[][$\epsilon=0.001$]{ \includegraphics[width=\textwidth]{figures/multiple_obs/multiple_obs_qsci_0.001.pdf}} \subfloat[][$\epsilon=0.01$]{ \includegraphics[width=\textwidth]{figures/multiple_obs/multiple_obs_qsci_0.01.pdf}} \subfloat[][$\epsilon=0.1$]{ \includegraphics[width=\textwidth]{figures/multiple_obs/multiple_obs_qsci_0.1.pdf}} \end{minipage} \caption{Histograms of absolute errors of nuclear gradients and Hessians for \ce{H4}, \ce{H6}, and \ce{H8} molecules in QSCI, compared to the exact, CASCI value. Each plot corresponds to QSCI calculations with an energy error $\epsilon$. Absolute errors are in units of Hartree, Hartree/\AA, or $\text{Hartree}/\text{\AA}^2$, depending on the observables. } \label{fig:appendix-multiple-observables} \end{figure} Here, we examine the accuracy of the expectation values of observables other than the Hamiltonian, estimated for the output state obtained by QSCI calculation. Figure~\ref{fig:appendix-multiple-observables} shows the histograms for absolute errors of the expectation values for the gradient and Hessian, where the absolute error for an observable $\hat{O}$ is defined by \begin{equation} \abs{\ev{\hat{O}}{{\psi_{\rm out}}}-\ev{\hat{O}}{\psi_\text{exact}}}. \end{equation} Here, $\ket{{\psi_{\rm out}}}$ is the output state of QSCI calculation with the idealized sampling from the exact ground state with $R$ given in Fig.~\ref{fig:scaling-a} for each error tolerance $\epsilon$ for energy, and $\ket{\psi_\text{exact}}$ is the exact ground state. The observables $\hat{O}$ are set to be the nuclear gradient $\pdv{\hat{H}}{x_i}$ $(i=1,\dots,3N_\text{atom})$ and the Hessian $\pdv{\hat{H}}{x_i}{x_j}$ $(i,j=1,\dots,3N_\text{atom})$, where $N_\text{atom}$ is the number of atoms in the molecule and $x_i$ are coordinates of the nuclei. The absolute error is shown in the unit of Hartree, Hartree/\AA, or $\text{Hartree}/\text{\AA}^2$, depending on the observables. Although there are some observables (i.e., components of the gradient or Hessian) whose expectation values exhibit larger absolute errors than that of the energy, the expectation values of the majority of the observables have similar accuracy as the energy. \subsection{Bond length dependence} \label{ssec:appendix-bond-length} \begin{figure}[h!] \includegraphics[width=.45\textwidth]{figures/bond-to-R-with-PEC.pdf} \caption{Estimated $R$ and $1/\abs{c_R}^2$ for \ce{H2O} molecule (14 qubits) with various bond lengths. Potential energy curves are also shown for reference. The estimation method of $R$ and $1/\abs{c_R}^2$ are the same as in Figs. \ref{fig:scaling-a} and \ref{fig:scaling-b}.} \label{fig:pec} \end{figure} The Hartree-Fock calculation is known to perform better for a stable geometry of a molecule than for the dissociation limit, so it is worth studying if QSCI also performs worse in the dissociation limit. Figure~\ref{fig:pec} shows the result of the same numerical analysis as Fig.~\ref{fig:scaling-a}, but for various bond lengths of \ce{H2O} molecules. The Hamiltonian is generated by the Hartree-Fock orbitals using STO-3G basis without specifying the active space, and is of 14-qubit after the Jordan-Wigner mapping. 
The bond lengths of two H-O bonds are taken to be equal, and the H-O-H angle is fixed to \ang{104.45}. The result implies that, although there is some dependency on the bond length for larger $\epsilon$, the dependency disappears for smaller $\epsilon$. It can be expected from the result that the potential energy surface calculated by QSCI has a relatively constant accuracy, at least compared to the Hartree-Fock result, when the error tolerance is not very large. \subsection{Comparison to ASCI} \label{ssec:comparison-to-asci} Here, we investigate if there is a possibility that QSCI outperforms the state-of-the-art selected CI methods by taking ASCI for illustration. ASCI is a selected CI method solely based on classical computation, which adaptively searches for the optimal subspace of the Fock space for the diagonalization. In Fig.~\ref{fig:asci}, we compare QSCI with ASCI, for which we follow the description in Ref.~\cite{tubman2020modern}. Here, we use the QSCI method with the idealized sampling from the ground state obtained by the exact diagonalization (full-CI) calculation. The target molecule is the linear hydrogen chain \ce{H10} with the equal separation of 1.0 \AA. The basis set is STO-3G and the Hamiltonian with the Hartree-Fock orbitals is mapped to the 20-qubit one by the Jordan-Wigner mapping. For ASCI, in addition to the parameter $R$ (called as $N_{tdets}$ in Ref.~\cite{tubman2020modern}), there are two additional parameters: they are denoted by $\epsilon$ and $N_{cdets}$ in that paper, and are denoted by $\delta$ and $R_{\text{core}}$, respectively, in the following. The parameters $\delta$ and $R_{\text{core}}$ determine the size of the search space for the iterative search for the new determinants, while the cost for the generation and diagonalization of the Hamiltonian, which is common for both ASCI and QSCI, are determined solely by $R$. We fixed $\delta=\SI{0.05}{Hartree}$ and $r:=R/R_{\text{core}}=10$ or $20$ for ASCI, and run QSCI and ASCI calculations with various $R$. While in the case of $r=10$ the two methods perform similarly, QSCI performs better for $r=20$, where less computational cost is required for searching for a better set of configurations in ASCI. The result shows that, depending on the hyperparameters for ASCI, there is a possibility that QSCI performs better, at least in the case of the idealized sampling from the exact ground state. \begin{figure}[h!] \includegraphics[width=.45\textwidth]{figures/ASCI_H10_d1.0_epsH0.05.pdf} \caption{Comparison of ASCI and QSCI for \ce{H10} molecule (20 qubits). The ASCI and QSCI energies as the difference to the exact CASCI energy are plotted against the size $R$ of the selected subspace. The parameter $r$ determines the size of the search space for ASCI.} \label{fig:asci} \end{figure} \section{Conclusion} \label{sec:conclusion} In this work, we proposed QSCI, a class of hybrid quantum-classical algorithms, to find low-lying eigenvalues and eigenstates of a many-electron Hamiltonian. Taking rough approximations of such eigenstates as input, QSCI selects important electron configurations to represent the eigenstates by sampling the input states on quantum computers, and then classically diagonalizes the Hamiltonian in the subspace(s) spanned by the selected configurations to yield better approximations for the eigenstates and their energies. QSCI is robust against noise and statistical fluctuation, as quantum computation is used only to define the subspaces. 
A quantum speed-up potentially arises in that sampling a quantum state is, in general, classically intractable. We verified the algorithms for ground and excited states of small molecules by numerical simulations and experiment, where the latter was conducted on the quantum device with the 8-qubit quantum circuits. We discussed potential utility of QSCI in various aspects: for instance, taking a state obtained by VQE as the input state, QSCI can be used to refine the VQE result, which may not be accurate enough due to statistical fluctuation, physical noise, and poor optimization; QSCI can be used as a technique for eigenstate tomography, which enables estimation of a variety of observables with no additional quantum computational cost. We also argued that QSCI is potentially feasible to tackle challenging molecules such as the chromium dimer by exploiting quantum devices with several tens of qubits, assisted by a high-performance classical computing resource for diagonalization. \begin{acknowledgements} The authors thank Amazon Web Services for supporting this work through their Amazon Braket service. A part of this work is supported by JST PRESTO JPMJPR2019 and JPMJPR191A, MEXT Quantum Leap Flagship Program (MEXTQLEAP) Grant No. JPMXS0118067394 and JPMXS0120319794, and JST COI-NEXT program Grant No. JPMJPF2014. \end{acknowledgements} \section{Discussion} \label{sec:discussion} In this section, we discuss various aspects of QSCI. We start from its classical and quantum computational costs in Sec.~\ref{ssec:discussion-computational-cost}, and then discuss its benefits for refining VQE results in Sec.~\ref{ssec:discussion-qsci-refine-vqe}. In Sec.~\ref{subsec:state-preparation}, several ideas for preparing input states are introduced. The aspect of QSCI as a selected CI is discussed in Sec.~\ref{ssec:discussion-qsci-as-selected-ci}, and ideas for future directions are finally introduced. \subsection{Computational costs} \label{ssec:discussion-computational-cost} Here classical and quantum computational costs are examined. In QSCI, classical computing is used for generating the truncated Hamiltonian matrix $\bm{H}_R$ and diagonalizing it. Exploiting the Slater-Condon rules, one can generate the sparse matrix $\bm{H}_R$ efficiently in both $R$ and the number of orbitals (see, e.g., Ref.~\cite{tubman2020modern} for details). For diagonalizing $\bm{H}_R$, one can employ algorithms to diagonalize a sparse matrix, such as the Lanczos method or the Davidson method. The generation and diagonalization of the Hamiltonian matrix are common procedures in the selected CI methods, and it is reported~\cite{garniron2019quantum} that $R\simeq \SI{5e7}{}$ of Slater determinants are manageable when a state-of-the-art high-performance computing resource is available, even for the method that repeats the Hamiltonian generation and diagonalization. In our method, such a repetition is not needed, and thus the computational cost should be smaller. As already discussed in Sec.~\ref{subsec:scaling}, Fig.~\ref{fig:scaling-a} suggests that, for some challenging molecules of $\sim$50 qubits, the QSCI calculation is feasible in terms of the classical cost by the current state-of-the-art classical computing, while meeting the accuracy requirement of $\epsilon \lesssim \SI{0.001}{Hartree}$. Note such a system size would be beyond the reach of the exact diagonalization. 
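As a concrete illustration of this classical step, the following sketch selects the $R$ most frequent bit strings from a dictionary of measurement counts and diagonalizes the qubit Hamiltonian restricted to the corresponding subspace. The dense, Pauli-string-based matrix construction, the toy two-qubit Hamiltonian, and the counts are placeholders for illustration only; the actual implementation relies on the sparse, Slater-Condon-based construction and an iterative (Lanczos- or Davidson-type) eigensolver as described above.
\begin{verbatim}
# Sketch of the classical step of QSCI: select the R most frequent bit strings
# from measurement counts and diagonalize the Hamiltonian in that subspace.
# The Hamiltonian is given as a list of (coefficient, Pauli string) terms;
# this dense toy construction stands in for the sparse Slater-Condon one.
from collections import Counter
import numpy as np

def pauli_matrix_element(pauli, x, y):
    """<x|P|y> for a Pauli string such as 'XZIY' and bit tuples x, y."""
    phase = 1.0 + 0.0j
    for p, xb, yb in zip(pauli, x, y):
        if p in ("I", "Z"):
            if xb != yb:
                return 0.0
            if p == "Z":
                phase *= (-1) ** yb
        else:  # "X" or "Y" flip the bit
            if xb == yb:
                return 0.0
            if p == "Y":
                phase *= 1j * (-1) ** yb
    return phase

def qsci_ground_state(counts, hamiltonian, R):
    # counts: {bit string: frequency}; hamiltonian: [(coeff, Pauli string), ...]
    basis = [b for b, _ in Counter(counts).most_common(R)]
    states = [tuple(int(c) for c in b) for b in basis]
    H_R = np.zeros((len(states), len(states)), dtype=complex)
    for a, x in enumerate(states):
        for b, y in enumerate(states):
            H_R[a, b] = sum(c * pauli_matrix_element(p, x, y)
                            for c, p in hamiltonian)
    evals, evecs = np.linalg.eigh(H_R)
    return evals[0], dict(zip(basis, evecs[:, 0]))

# Toy example (2 qubits): H = Z0 Z1 + 0.5 X0 X1, counts from some input state.
counts = {"01": 600, "10": 350, "00": 40, "11": 10}
energy, state = qsci_ground_state(counts, [(1.0, "ZZ"), (0.5, "XX")], R=2)
print(energy, state)
\end{verbatim}
For large $R$, the dense matrix would be replaced by a sparse one and \texttt{numpy.linalg.eigh} by an iterative sparse eigensolver such as \texttt{scipy.sparse.linalg.eigsh}, in line with the Lanczos/Davidson methods mentioned above.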
The quantum computational time is $t_Q=N_{\text{shot}}\times t_{\text{prepare}}$, where $N_{\text{shot}}$ is the number of shots for the sampling, i.e., the repetitions of the input-state preparation and measurement, and $t_{\text{prepare}}$ is the time needed for a single shot. Note that the total computational time can be reduced if multiple quantum computers are available, since the sampling procedures are completely parallelizable. $t_{\text{prepare}}$ highly depends on the type of quantum device to be used and the way to prepare the input state. For example, the Sycamore processor used in the Google's quantum supremacy experiment~\cite{arute2019quantum} can achieve $N_{\text{shot}}=\SI{1e6}{}$ in 200 seconds for a quantum circuit with 53 qubits and 20 repetitions of entangling operations, which corresponds to $N_{\text{shot}}\sim\SI{4e8}{}$ in a day. Hence, Fig.~\ref{fig:scaling-b} implies that the sampling cost is affordable for \ce{Cr2} with several tens of qubits, while it may be challenging at the moment to achieve $\epsilon=\SI{0.001}{Hartree}$ for a hydrogen chain with, say, 50 qubits. We remark that the sampling cost can be significantly reduced if one can prepare a state $\ket{\Delta\psi}$ that is orthogonal to a classically tractable state $\ket{\psi_c}$ such that, for some complex numbers $\alpha$ and $\beta$, $\ket{\psi_{\text{GS}}}=\alpha \ket{\psi_c} + \beta \ket{\Delta\psi}$ approximates the ground state, and can sample from $\ket{\Delta\psi}$ on a quantum computer. The state $\ket{\psi_c}$ can be the Hartree-Fock state or more intricate states such as the CISD state. For example, if $\abs{\alpha}^2=0.9$, then the sampling cost for a given precision can be reduced by a factor of ten. On the other hand, $\ket{\Delta\psi}$ can be prepared, e.g., by the method of Ref.~\cite{radin2021classically}. \subsection{ Use of QSCI to refine VQE results } \label{ssec:discussion-qsci-refine-vqe} QSCI can be viewed as a post-processing technique for VQE and its variants, when they are used to prepare the input states. Our methods have the following advantages: \begin{description} \item[Error reduction] By virtue of the classical diagonalization of a Hamiltonian matrix generated classically, the proposed methods can refine the VQE results, as demonstrated in Sec.~\ref{sec:numerical} and Sec.~\ref{sec:noisy-simulation-experiment}. Although results of noiseless VQE simulations are used to prepare the input states in our numerical and experimental studies, our results suggest that QSCI is also effective to refine \textit{dirty} VQE results subject to the statistical and physical errors. Figure~\ref{fig:vqe-experiment} also shows the effectiveness of the post-selection: the rate of the readout error, which is one of the major sources of physical errors, can be reduced from $O(p)$ to $O(p^2)$ with the Jordan-Wigner mapping, as discussed in Appendix~\ref{subsec: post-selection}. Note that QSCI does not require extra gate operations for the measurement, unlike expectation-value estimations in VQE. As already shown in Fig.~\ref{fig:noiseless-vqe}, our method is also effective to improve the quality of the input state even in the absence of physical and statistical errors. This feature may enable one to use ansatzes with shallower circuits, or to reduce the number of optimization steps in VQE, by employing QSCI to improve the final result. 
\item[Reliability] Our method is free of errors in the sense that the resulting ground-state energy is exact within the subspace spanned by the quantum-selected configurations. This means that the obtained energy is a definite upper bound for the exact ground-state energy, which is not the case in conventional VQE because of physical and statistical errors, as discussed in Sec.~\ref{subsec:ground-state}. This is advantageous for comparing the QSCI result obtained on noisy quantum devices with the results of classical variational methods such as CISD or density matrix renormalization group (DMRG)~\cite{white1992,white1993,Schollwock2005}: the variational nature of these methods guarantees that the method that gives the lowest energy is the most accurate one. Similar variational inequalities hold for excited states in the single diagonalization scheme of QSCI, while there is no such guarantee in the sequential diagonalization scheme. Although the latter appeared to be more accurate in our numerical simulation, the former is of great use if one is interested in giving rigorous upper bounds on excited-state energies. \item[Handiness] As one has the classical representations of the eigenstates as output, one can compute the expectation values of a large class of observables with no additional quantum computation. Our method becomes more valuable when more observables are to be evaluated, as exemplified in Fig.~\ref{fig:scaling-b}. Moreover, one can also analyze the classical vectors themselves, which may be useful to study the significance of each Slater determinant. \end{description} \subsection{ Use of QSCI with more general input states } \label{subsec:state-preparation} As discussed in the previous sections, input states for ground state can be prepared by VQE, and those for excited states by its variants, but the proposed methods are applicable to more general input states. Our method can in principle be applied to any kind of input states that can be prepared and sampled on a quantum computer. We give an incomplete list of possible preparation schemes for input states in the following: the adiabatic state preparation~\cite{farhi2000quantum, aspuru2005simulated}, the imaginary time evolution~\cite{williams2004probabilistic, terashima2005nonunitary, mcardle2019variational,mao2022measurementbased}, classically-boosted VQE~\cite{radin2021classically}, classically-optimized shallow ansatz circuits~\cite{okada2022identification}, unitary coupled-cluster ansatz circuits with classically-optimized parameters~\cite{mcclean2016theory, romero2018strategies, kuroiwa2023clifford+, hirsbrunner2023beyond}, and parametrized states classically optimized by Clifford circuits~\cite{mitarai2022quadratic, ravi2022cafqa}. Note that the performance of QSCI depends on the quality of the input state and also on the form of the exact eigenstate. For example, if the exact eigenstate is the equal superposition of all the computational basis states, then our algorithm will not perform well. The algorithm can also be useful for a Hamiltonian that has an exactly-known ground state. For example, one can calculate an exact ground-state energy of a system that is solvable with the Bethe ansatz, but there are quantities, such as a class of correlation functions, that cannot be computed efficiently~\cite{verstraete2009quantum}. 
Our method provides the classical representation of an approximate eigenstate, which means that one can evaluate various physical quantities without additional quantum resources, as we already discussed for states prepared by VQE. The preparation of the Bethe ansatz states on quantum computers is addressed in Refs.~\cite{van2021preparing, sopena2022algebraic}. Moreover, although we proposed the method as a hybrid quantum-classical algorithm, one can apply the method to input states that can be sampled efficiently on classical computers. This is briefly discussed in Sec.~\ref{ssec:outlook}.
\subsection{ QSCI as selected CI }
\label{ssec:discussion-qsci-as-selected-ci}
Viewed as a selected CI method, the novelty of QSCI lies simply in how the subspace on which the subspace Hamiltonian is constructed is defined. Quantum computers are used to sample important configurations from the input state, and there is a quantum speed-up when the input state is hard to sample classically. In selected CI methods, the subspace of the Fock space for the diagonalization is either fixed by the method, e.g., CISD, or adaptively chosen according to the algorithm. We have shown experimentally that CISD performs worse even when compared to the QSCI result on the current NISQ device (Fig.~\ref{fig:vqe-experiment}). One of the most advanced methods for sampling dynamically important bases is the adaptive sampling configuration interaction (ASCI) algorithm developed by Tubman and co-workers~\cite{tubman2016deterministic,tubman2020modern}. The idea of systematically selecting important bases based on perturbation theory was developed about 50 years ago~\cite{bender1969pr,whitten1969jcp,huron1973jcp,buenker1974tca,buenker1975tca}, and a selection scheme based on Monte Carlo methods was proposed in the 1990s~\cite{greer1995jcp,greer1998jcpss}. However, systematically selected CI was not widely used in quantum chemistry calculations for many years. Recently, it has undergone rapid development and is now becoming applicable to large-scale quantum chemical simulations~\cite{evangelista2014jcp,holmes2016jctc,schriber2016jcp,holmes2016jctc2,tubman2016deterministic,ohtsuka2017jcp,schriber2017jctc,sharma2017semistochastic,chakraborty2018ijqc,coe2018jctc,coe2019jctc,abraham202jctc,tubman2020modern,zhang2020jctc,zhang2021jctc,chilkuri2021jcc,chilkuri2021jctc,goings2021jctc,pineda2021jctc,jeong2021jctc,coe2023jctc,seth2023jctc}. Indeed, Tubman \textit{et al.} showed that ASCI is capable of handling 34 electrons in 152 spatial orbitals~\cite{tubman2020modern}. ASCI has hyperparameters that define the size of the search space used to adaptively select the configurations, and we will see in Appendix~\ref{ssec:comparison-to-asci} that, for some sets of hyperparameters, QSCI can perform better than ASCI.
\subsection{Outlook}
\label{ssec:outlook}
QSCI is applicable to diverse systems and has many directions for generalization.
\begin{itemize}
\item It would be possible to consider a hybrid of the proposed method and another adaptive selected CI method, such as ASCI, by combining the configurations suggested by QSCI with those of the other method. In this way, one could improve the results of state-of-the-art selected CI methods by using quantum computers.
\item QSCI is essentially a selected CI where the configurations are randomly selected according to a probability distribution $p(x) = \abs{\braket{x}{\psi_{\mathrm{in}}}}^2$.
A classical counterpart of this approach is called Monte-Carlo configuration interaction (MCCI)~\cite{greer1995jcp,greer1998jcpss}. MCCI does not seem to have been extensively studied since the first proposal in 1995, and the use of a more sophisticated (classical) probability distribution for MCCI is yet to be explored. It would be an interesting direction for future work to use a classically tractable $p(x)$ for MCCI and compare/combine it with QSCI. For example, some of the tensor network states, such as the matrix product states (MPS) and the multi-scale entanglement renormalization ansatz (MERA) states, can be efficiently sampled on classical computers~\cite{ferris2012perfect}. \item Our method, compared to the conventional VQE, has the advantage that it can evaluate physical observables classically with no additional quantum computational cost. One may leverage this feature by using QSCI for the geometry optimization problem of a molecule, or a molecular dynamics calculation. In those applications, one may skip the sampling for some iterations, and continue to run with the same state subspace defined by the $R$ electron configurations, thereby reducing the quantum computational cost further. \end{itemize} We remark that the performance of QSCI depends highly on the quality of the input state. It would be desirable to have a way to start from an input state of modest quality and then improve it through an iterative use of QSCI. \section{Benchmark of QSCI with noisy simulation and experiment} \label{sec:noisy-simulation-experiment} \begin{figure*} \begin{minipage}{\textwidth} \subfloat[][Noisy simulator w/o post-selection]{ \includegraphics[width=.45\textwidth]{figures/ionq-experiment/manylines_exp-sampling-noisysim-nops.pdf} } \subfloat[][Noisy simulator w/ post-selection]{ \includegraphics[width=.45\textwidth]{figures/ionq-experiment/manylines_exp-sampling-noisysim-ps.pdf} } \subfloat[][IonQ device w/o post-selection]{ \includegraphics[width=.45\textwidth]{figures/ionq-experiment/manylines_exp-sampling-ionq-nops.pdf} } \subfloat[][IonQ device w/ post-selection]{ \includegraphics[width=.45\textwidth]{figures/ionq-experiment/manylines_exp-sampling-ionq-ps.pdf}} \end{minipage} \caption{ QSCI results for the ground state of the linear hydrogen chain $\ce{H4}$ on 8 qubits by the noisy simulator [(a), (b)] and by the IonQ device [(c), (d)], with and without post-selection, compared with the conventional method of quantum expectation-value estimation and CISD, which uses 27 Slater determinants. The resulting energies are plotted in Hartree as deviations from the one obtained by the exact diagonalization (CASCI). Lines specified by ``VQE'' show the exact energy value of the parametrized state at each iteration, and the states at four selected iterations are used as the input states for the QSCI calculations, shown by ``QSCI'' with four different values of $R$. The conventional method uses the QWC grouping in the energy estimation. The markers on the solid lines show the average over the trials, while each line without markers shows the result of a single trial. The number of trials is ten for both QSCI and the conventional method on the noisy simulator, one for the conventional method and five for QSCI on the IonQ device.
} \label{fig:vqe-experiment} \end{figure*} In this section, we describe the result of the experiment for the ground state of the hydrogen chain \ce{H4} (8 qubits), conducted on the IonQ 11-qubit device through Amazon Braket service, along with the result of noisy sampling simulation using Qulacs with the identical setup. We first run a VQE calculation of a linear \ce{H4} molecule with bond lengths \SI{1.0}{\AA} on a noiseless state-vector simulator. We use the STO-3G basis set without freezing any orbitals, and thus the problem Hamiltonian is 8-qubit. The so-called Ry ansatz with depth 8 is employed for the VQE calculation. See Appendix~\ref{ssec:setup-experiment} for details, including the circuit diagram of the ansatz. Then, we perform QSCI calculations on the quantum hardware and the noisy simulator using four sets of parameters at four distinct iterations of the VQE calculation. We use 10,000 shots for each sampling, and the most frequent $R$ configurations are selected to define the subspace, with and without the post-selection. The post-selection of the sampling result is performed using the number of electrons $N_e=4$ and the spin $S_z=0$. For noisy simulation, to simulate the physical noises on the device, single-qubit depolarizing noise is added after each gate and bit-flip noises are added at the end of the circuit to mimic the measurement error. The level of each type of the noise is determined by the single-qubit and two-qubit gate fidelities, and the measurement fidelity of the actual device: 99.61\%, 96.868\%, and 99.824\%, respectively\footnote{More precisely, the error rate of the single-qubit depolarizing noise for each single-qubit gate is set to $p_1$, where $p_1$ is the single-qubit gate infidelity. For the two-qubit gates, single-qubit depolarizing noise is applied to each of the two qubits with probability $1-\sqrt{1-p_2}$ for a two-qubit gate infidelity $p_2$. The bit-flip noise is applied to each qubit with probability $p_{\text{ro}}$, the measurement infidelity.}. For comparison, on the quantum device and the noisy simulator, the calculation of the expectation value of the energy using a conventional method is performed with 10,000 shots. The QWC grouping and the shot allocation optimized for Haar random states are employed. Error mitigation techniques, which may improve the result at the cost of additional quantum resources, are not employed in this study. The results are presented in Fig.~\ref{fig:vqe-experiment}. By comparing the results from the noisy simulator and the quantum device, one can see that they have a reasonable agreement, although the result from the quantum device seems to be more affected by the errors. Moreover, it is clear that the post-selection is powerful in both simulation and experiment. It is particularly worth noting that even on the physical device, some of the QSCI calculations with $R=27$ do outperform the result of CISD, which also diagonalizes the subspace Hamiltonian with 27 Slater determinants, and achieve the chemical accuracy on the 8-qubit system. Some minor comments are in order: firstly, at the earlier iterations, the number of sampled (and post-selected) configurations was sometimes less than the given $R$, because the state is concentrated in some computational basis states. 
In that case, we only used the sampled configurations for the QSCI calculation; secondly, the CASCI result, i.e., the exact diagonalization result, corresponds to\footnote{The number of Slater determinants which have the required particle number and $S_z$ is $\binom{4}{2}\cdot \binom{4}{2}=36$.} $R=36$; this number may seem to be comparable to $R=27$, but it is still a non-trivial task to choose 27 configurations out of 36 possibilities. \section{Introduction} Recent years have seen a rapid development of quantum computers towards their practical use. Although current quantum devices are prone to errors due to physical noise, ways to achieve \textit{quantum advantage} over classical computations have been explored experimentally~\cite{arute2019quantum, zhong2020quantum, madsen2022quantum}, and such noisy intermediate-scale quantum (NISQ) devices are believed to become useful in the near future~\cite{preskill2018quantum}. Quantum chemistry is at the top of the list of such useful applications (see, e.g., Refs.~\cite{cao2019quantum, mcardle2020quantum, cerezo2021variational, bharti2022noisy, tilly2022variational}): for instance, energy eigenvalues of a molecular Hamiltonian can be calculated by quantum algorithms developed for NISQ devices, where the most notable is the variational quantum eigensolver (VQE)~\cite{peruzzo2014variational} to find the ground-state energy. However, VQE faces several challenges to be overcome for practical use. The major obstacle comes from errors caused by statistical fluctuation and physical noise inherent in the noisy devices. Suppressing the statistical error to a practically acceptable level requires a prohibitively large number of samples~\cite{gonthier2020identifying}, and error mitigation techniques~\cite{viola1999dynamical, temme2017error, li2017efficient,endo2018practical,koczor2021exponential,huggins2021virtual,mcardle2019error,bonet2018low,maciejewski2020mitigation,endo2021hybrid} for reducing physical noise require even more samples to compensate for the additional statistical error they introduce~\cite{wang2021can,takagi2022fundamental, tsubouchi2022universal,takagi2022universal}. In particular, the effect of the errors can spoil the \textit{variational} nature of VQE: that is, the energy estimated by quantum devices is not guaranteed to give an upper bound on the exact ground-state energy. This is problematic because lowering the resulting energy of VQE does not necessarily mean approaching the exact ground state. In addition, there are other challenges for VQE, such as the barren plateau problem, which can hinder the optimization~\cite{mcclean2018barren}. In this paper, we propose a class of hybrid quantum-classical methods, which we call quantum-selected configuration interaction (QSCI), to find low-lying eigenvalues and eigenstates of a many-electron Hamiltonian.\footnote{We focus on applications to quantum chemistry in this paper. However, the proposed methods can be applied to a variety of many-body Hamiltonians, including many-electron and spin problems in condensed matter physics.} QSCI is noise resilient and, in principle, free of costly optimization of parametrized quantum circuits. In particular, QSCI sets rigorous upper bounds on the ground-state energy\footnote{QSCI can also set rigorous upper bounds on the excited-state energies, depending on its algorithmic implementation.} even under the effect of physical and statistical errors.
Here we outline a version of QSCI for finding a ground state: suppose that an approximate ground state, which we call an \textit{input state} in this paper, can be prepared on a quantum computer; one then repeats a measurement of the state to identify the computational basis states, or electron configurations, that are important to express the ground state~\cite{kohda2022quantum}; one then diagonalizes, on classical computers, the truncated Hamiltonian matrix in the subspace spanned by the identified configurations to obtain the smallest eigenvalue and eigenvector. The resulting eigenvalue approximates the ground-state energy. The diagonalization is classically tractable unless the number of selected configurations is exponentially large in the system size. The algorithm can be extended to find excited states by enlarging the subspace or by repeating the procedure for each energy eigenstate. Since the matrix elements of the Hamiltonian in the computational basis can be exactly calculated on classical computers, the diagonalization results in an energy that gives a definite upper bound on the exact ground-state energy regardless of the quality of the subspace spanned by the identified configurations; the quality only affects how tight the bound is. The states need to be measured only in the computational basis, and thus no additional gate operation is required for the measurement. In the presence of symmetries with conserved quantities such as the particle number, the post-selection of the computational basis states in the sampling outcome allows one to mitigate the bit-flip errors. We experimentally demonstrate the effectiveness of the post-selection in this paper. The algorithm may take any quantum states as the input states, if they roughly approximate the desired eigenstates and can be prepared on quantum devices. Such input states can be prepared, e.g., by parametrized quantum circuits moderately optimized via VQE and its variants~\cite{tilly2022variational}, and other preparation schemes are discussed in Sec.~\ref{subsec:state-preparation}. Sampling from such quantum states can be hard for classical computers~\cite{arute2019quantum}, thereby providing a potential quantum speed-up in QSCI. QSCI can also be advantageous as a technique for \textit{eigenstate} tomography in that it can (classically) estimate the expectation values of a variety of observables at no additional quantum cost: since one already has the classical representation of the state, one can efficiently compute the expectation values using that representation. Unlike QSCI, other efficient tomography techniques such as classical shadows~\cite{huang2020, zhao2021}, neural network tomography~\cite{Torlai_2018}, and tensor network tomography~\cite{Cramer_2010} do not exploit the fact that the states of our interest are eigenstates of some problem Hamiltonian. As the name suggests, QSCI can be viewed as a configuration interaction (CI), where the many-body basis set is determined by quantum computers via sampling of an input state. There are established techniques~\cite{helgaker2014molecular} that choose fixed basis sets. A common approach in electronic structure theory is to select only one- and two-particle excitations from a reference wavefunction. When the reference wavefunction is chosen to be Hartree-Fock, the resulting method is known as CI with singles and doubles (CISD).
If the reference wavefunction is a correlated wavefunction beyond the mean-field approximation, the method is called multi-reference CISD (MR-CISD). In the context of quantum computing, MR-CISD has sometimes been referred to as quantum subspace expansion (QSE)~\cite{mcclean2017hybrid,takeshita2020increasing,urbanek2020jctc}. Another approach is the adaptive selection of a suitable basis set for a target system. In quantum chemistry, a systematic selection of important bases has a long history~\cite{bender1969pr,whitten1969jcp,huron1973jcp,buenker1974tca,buenker1975tca,nakatsuji1983cluster,cimiraglia1987jcc,harrison1991jcp,greer1995jcp,greer1998jcpss}, and such systematically selected CI approaches have recently been the subject of active study~\cite{evangelista2014jcp,holmes2016jctc,schriber2016jcp,holmes2016jctc2,tubman2016deterministic,ohtsuka2017jcp,schriber2017jctc,sharma2017semistochastic,chakraborty2018ijqc,coe2018jctc,coe2019jctc,abraham202jctc,tubman2020modern,zhang2020jctc,zhang2021jctc,chilkuri2021jcc,chilkuri2021jctc,goings2021jctc,pineda2021jctc,jeong2021jctc,coe2023jctc,seth2023jctc}. Thanks to such developments, systematic selected CIs are now gradually being considered a promising approach for large-scale quantum chemical simulations. The likely reason for this revival is that selected CI is an algorithm that can be adapted to current classical computer architectures with sufficient memory. QSCI may be seen as a new systematic selected CI that utilizes quantum computers. Our methods are capable of selecting electron configurations which are necessary to describe the eigenstates to some accuracy but are missed in the conventional methods with a fixed basis set. Note that our methods call the diagonalization procedure at most once for each eigenstate, while the adaptive methods iteratively repeat the diagonalization to search for a configuration to be added to the basis set; our methods require much less classical computational time compared to those adaptive methods. The classical diagonalization is already utilized in various hybrid quantum-classical algorithms to find energy eigenstates. Most notable is QSE, which spans the subspace by states built upon the reference VQE state, and is widely used for various applications, e.g., excited state calculations~\cite{mcclean2017hybrid}, band structure calculations~\cite{yoshioka2022variational}, and noise reduction~\cite{mcclean2017hybrid,bonet2018low, takeshita2020increasing,mcclean2020decoding,yoshioka2022generalized, epperly2022theory}. More generally, one can span the subspace by various methods~\cite{huggins2020non,motta2020determining, parrish2019quantumfilter, stair2020multireference,parrish2019quantum, seki2021quantum,baek2022say, kirby2022exact}, which are sometimes collectively referred to as quantum subspace diagonalization. In those methods, however, the matrix elements of the subspace Hamiltonian are calculated on quantum computers, and thus are subject to the physical and statistical errors. There is a proposal~\cite{radin2021classically} where some of the matrix elements are classically calculated, but the method still requires some matrix elements which are efficiently computable only by quantum computers for a possible quantum speed-up. In QSCI, on the other hand, all the matrix elements are classically computed, giving up the use of more complex and physically-motivated states as basis states that define the subspace. The rest of the paper is organized as follows.
The proposed methods are introduced in Sec.~\ref{sec:methods}, and numerically tested in Sec.~\ref{sec:numerical}. A demonstration on a quantum device is presented in Sec.~\ref{sec:noisy-simulation-experiment}, along with a noisy simulation as a preparatory study. We discuss aspects of the proposed methods in Sec.~\ref{sec:discussion}, and finally conclude in Sec.~\ref{sec:conclusion}. Details of the algorithms, numerical simulations and experiment, as well as supplemental numerical results are given in the appendices. \section{Methods} \label{sec:methods} In this section, we present the methods of QSCI. Two ways of implementation are introduced: single diagonalization scheme in Sec.~\ref{sssec:method-single} and sequential diagonalization scheme in Sec.~\ref{sssec:method-sequantial}. They are designed for finding multiple energy eigenstates, and reduce to the same simplified method when used for finding the ground state alone. After introducing necessary ingredients, we begin with the algorithm specific to finding the ground state, which is simple and illustrative, and then proceed to the two methods which can also find excited states. \subsection{ Preliminary } We consider electronic structure problems of molecules in the second-quantization formalism with the Born-Oppenheimer approximation. A Hamiltonian and wave functions for electrons, in this setup, can be mapped onto $N_q$ qubits such that the Slater determinants\footnote{Instead, linear combinations of Slater determinants such as configuration state functions may be mapped to the computational basis states. QSCI can work with such a mapping, if the matrix elements of the Hamiltonian in the computational basis can be efficiently computed by classical computation.} for the Hartree-Fock state and its excitations are associated with the computational basis states $\ket{x}$, where $x\in{\{0, 1\}^{N_q}}$ is an $N_q$-bit string (see, e.g., Ref.~\cite{cao2019quantum,mcardle2020quantum}). In the Jordan-Wigner mapping, which we adopt in the numerical study, $N_q$ corresponds to the number of spin orbitals, and ``1'' or ``0'' represents whether each spin orbital is occupied or not. The methods can work with other mapping schemes such as the Bravyi-Kitaev mapping~\cite{bravyi2002fermionic}, although the fermion-qubit correspondence is less intuitive and the error mitigation (discussed later) is less effective. We denote the qubit Hamiltonian by $\hat{H}$. A linear combination of all the computational basis states, \begin{align} \ket{\psi} =\sum_{x\in{\{0, 1\}^{N_q}}} \alpha_x \ket{x}, \label{eq:general_state} \end{align} encompasses the full-CI wave function. Note that for a fixed number of electrons only a subset of the computational basis states is needed. In the full-CI method, sets of the CI coefficients \{$\alpha_x$\} that correspond to energy eigenstates are found by diagonalizing the Hamiltonian in the full Fock space. The method is costly due to the combinatorial growth of the Fock-space dimension as the number of spin-orbitals increases. For reducing the computational cost, there exist various classical approaches which truncate the Fock space and approximate the sum in Eq.~\eqref{eq:general_state} using a fixed or adaptively selected basis set, as mentioned in the previous section. In line with these efforts, but from a different viewpoint, we propose methods which harness quantum computers to identify important computational basis states, or electron configurations, for truncating the Fock space. 
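To make the size of such a symmetry-restricted subset concrete, the following minimal Python sketch (an illustration only, not part of the implementation used in this paper) enumerates the occupation-number bit strings belonging to a fixed $(N_e, S_z)$ sector; the block ordering of spin orbitals, with all spin-up orbitals listed before all spin-down orbitals, is an illustrative choice.
\begin{verbatim}
from itertools import combinations

def sector_basis(n_spatial, n_alpha, n_beta):
    # Occupation-number bit strings with fixed electron number and S_z,
    # assuming the first n_spatial bits are spin-up orbitals and the
    # remaining n_spatial bits are spin-down orbitals (illustrative choice).
    states = []
    for occ_up in combinations(range(n_spatial), n_alpha):
        for occ_down in combinations(range(n_spatial), n_beta):
            bits = ["0"] * (2 * n_spatial)
            for i in occ_up:
                bits[i] = "1"
            for i in occ_down:
                bits[n_spatial + i] = "1"
            states.append("".join(bits))
    return states

# Five active spatial orbitals, six electrons, S_z = 0 (three up, three down):
print(len(sector_basis(5, 3, 3)))  # 100, out of 2**10 = 1024 basis states
\end{verbatim}
For five active spatial orbitals with six electrons and $S_z=0$, this gives $\binom{5}{3}\cdot\binom{5}{3}=100$ determinants out of the $2^{10}=1024$ computational basis states, which is the active-space setting used in Sec.~\ref{subsec:ground-state-simulation-with-noiseless-vqe}.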
\subsection{QSCI for ground state \label{subsec:ground-state} } We now describe the explicit algorithms. We begin with the algorithm for finding the lowest eigenvalue and the corresponding eigenstate (ground state) of an electronic Hamiltonian $\hat{H}$ on $N_q$ qubits. For simplicity, we assume the ground state is unique. When degeneracy exists, the algorithms given in the next subsection, which are aimed at finding multiple eigenstates, can be straightforwardly applied. Indeed, the algorithm introduced in this subsection is a special case of each of the two algorithms in the next subsection. Let $\ket{{\psi_{\rm in}}}$ be an input state, which roughly approximates the ground state, and suppose $\ket{{\psi_{\rm in}}}$ can be prepared by a quantum circuit with $N_q$ qubits. Then, one prepares the input state on a quantum computer and measures the state in the computational basis, which results in an outcome bit string $x\in{\{0, 1\}^{N_q}}$. Repeating such a sampling procedure (or shot) $N_{\rm shot}$ times, one counts how many times each $x$ appears. Based on the total sampling result, the most frequent $R$ computational basis states are selected to define the set \begin{align} \mc{S}_R = \{ \ket{x} | x\in{\{0, 1\}^{N_q}}, R~{\rm most~frequent} \}, \label{eq:set_GS} \end{align} where $R$ is a manually chosen positive integer. This is to truncate the Fock space. One may in principle include all the computational basis states that appeared in the measurements, while choosing an appropriately small $R$ can reduce the computational cost for diagonalization. One then solves the eigenvalue problem in the subspace spanned by $\mathcal{S}_R$: \begin{align} \bm{H}_R\bm{c} = E_R\bm{c}, \end{align} where $\bm{H}_R$ is the $R\times R$ Hermitian matrix defined by \begin{align} (\bm{H}_R)_{xy}= \mel{x}{\hat{H}}{y}~{\rm for}~\ket{x}, \ket{y} \in \mathcal{S}_R, \end{align} and $\bm{c}$ is an eigenvector with eigenvalue $E_R$, satisfying $\bm{c}^\dagger \bm{c}=1$. This step of the algorithm proceeds via classical computations: calculations of the matrix elements $\mel{x}{\hat{H}}{y}$ and the diagonalization of $\bm{H}_R$. The former calculations can be efficiently done by some classical method, e.g., by the Slater-Condon rules in the fermionic basis. The latter diagonalization is performed to obtain the smallest eigenvalue $E_R$ and the eigenvector $\bm{c}$, which are the output of the algorithm. See Sec.~\ref{ssec:discussion-computational-cost} for further discussion on costs of these classical computations. Here, $E_R$ approximates the exact ground-state energy of $\hat{H}$, while $\bm{c}$ approximately gives the (normalized) CI coefficients, i.e., the vector representation of the ground state. The corresponding quantum state, which we call the \textit{output state}, is constructed as \begin{align} \ket{{\psi_{\rm out}}}=\sum_{\ket{x}\in \mathcal{S}_R} c_x \ket{x}, \label{eq:output-state-gs} \end{align} where $c_x$ is an element of the eigenvector $\bm{c}$. The output state $\ket{{\psi_{\rm out}}}$ approximates the true ground state of $\hat{H}$. We remark that one does not need to realize the output state on quantum computers. Retaining the eigenvector $\bm{c}$ as classical data is enough for the application explained below. The output state can be used to estimate the expectation values of observables other than the Hamiltonian for the ground state, solely based on classical computations. Specifically, suppose that an observable in question is represented by a qubit operator $\hat{O}$.
If the matrix elements $\mel{x}{\hat{O}}{y}$ can be efficiently computed on classical computers, so can the expectation value $\ev{\hat{O}}{{\psi_{\rm out}}}$, which is expected to give an approximation to the expectation value for the true ground state. In particular, if $\hat{O}$ can be expressed as a linear combination of $\text{poly}(N_q)$ Pauli strings, which is the case in many physical quantities, its expectation value can be efficiently computed on classical computers. Comments are in order for technical details. We identify the set $\mathcal{S}_R$ to span the subspace by sampling the input state. In this way, we expect that the computational basis states, or Slater determinants, that are important for describing the ground-state wave function can be selected. This is because in the sampling procedure a bit string $x$ occurs with the probability $\abs{\bra{x}\ket{{\psi_{\rm in}}}}^2$, while $\bra{x}\ket{{\psi_{\rm in}}}$ gives the CI coefficient of the corresponding Slater determinant in the input wave function $\ket{{\psi_{\rm in}}}$.\footnote{See, e.g., Eq.~\eqref{eq:general_state}. There, the CI coefficients can be expressed as $\alpha_x = \bra{x}\ket{\psi}$.} Indeed, $\mathcal{S}_R$ gives the $R$ Slater determinants with the largest coefficients $\abs{\bra{x}\ket{{\psi_{\rm in}}}}$ in $\ket{{\psi_{\rm in}}}$, under the ideal situation where physical noise can be ignored and the sampling is performed with an infinite number of shots. In passing, we sometimes adopt a method equivalent to this ideal situation to define $\mc{S}_R$ in the numerical study: that is, we simply pick the $R$ Slater determinants with the largest absolute values of the CI coefficients in the input state, instead of performing actual sampling procedures. We call this method the \textit{idealized sampling} in this paper. Note that we assume the input state roughly approximates the true ground state. This is just to ensure the two states share the important computational basis states, and there is no need for a precise agreement between the CI coefficients of the two states. Such an input state can be prepared, e.g., by a parametrized quantum circuit moderately optimized via VQE. In Sec.~\ref{subsec:state-preparation}, we discuss methods to prepare the input state, including non-VQE based ways. The set $\mathcal{S}_R$ is defined in Eq.~\eqref{eq:set_GS} by specifying $R$, the number of the computational basis states retained in the subspace. But this is not the unique choice. For instance, one may define the set by taking all the computational basis states in the measurement outcome, as already mentioned. Or, one may instead set a threshold on the rate of occurrence $f_x$ for an outcome $x$ in the total sampling result, and then define an alternative set $\mathcal{S}_\epsilon = \{ \ket{x} | f_x \geq \epsilon \}$ with a threshold parameter $\epsilon$. For a proof-of-principle demonstration, we adopt Eq.~\eqref{eq:set_GS} to define the subspace for diagonalizing the Hamiltonian in the rest of the paper. In reality, physical noise and statistical fluctuation, the latter due to a finite number of shots, cannot be ignored, causing some errors in the output. However, the effect is only indirect and the method is robust against those errors: that is, the errors can degrade the quality of the selected subspace by missing important configurations or by picking up irrelevant configurations in the sampling procedures, but the lowest eigenvalue and eigenvectors are exact within the subspace.
The latter point, the exactness within the subspace, results from the use of diagonalization for the matrix $\bm{H}_R$, whose elements are exactly computed. Consequently, the obtained energy $E_R$ sets an upper bound on $E_{\rm exact}$, the true ground-state energy of $\hat{H}$: \begin{align} E_{\rm exact}\leq E_R. \label{eq:variational-inequality} \end{align} Note that this variational inequality holds even under statistical fluctuation and physical noise. The situation is in contrast with VQE, where such an inequality is not guaranteed as the energy is directly measured on quantum computers and hence is susceptible to the errors.\footnote{In VQE, physical noise in the state preparation can hardly lead to the violation of the variational inequality, but it may be possible that an error during the measurement procedure causes it. The use of error mitigation techniques can also lead to the breakdown of the inequality. } It is also worth mentioning that, for a given sampling outcome, increasing $R$, the subspace size, always leads to a better approximation of the ground-state energy: $E_{\rm exact} \leq E_{R_a} \leq E_{R_b}$ for $R_a > R_b$. This can be used to see if the calculation converges. On the other hand, smaller $R$ can reduce the classical computational cost. Such a trade-off between the accuracy and cost is discussed in Sec.~\ref{subsec:scaling}. The algorithm finds the lowest energy state in the subspace $\mc{S}_R$, which gives an approximation to the ground state in the {\it full} Fock space. When there exists symmetry in the Hamiltonian, there are associated conserved quantities, e.g., the total electron number $N_e$ (or the charge of molecule) and the $z$-component of total electron spin $S_z$. Given this, one may wish to find the lowest energy state in a specific symmetry sector. In such a case, the method can be similarly applied but by relying on the subspace with fixed conserved quantities. For $N_e$ and $S_z$, this can be easily achieved as follows since each computational basis state corresponds to a Slater determinant with definite $N_e$ and $S_z$: one prepares an input state with the desired values of $(N_e, S_z)$, for which the sampling results in configurations each with the desired $(N_e, S_z)$; or, if such an input state cannot be prepared, one may post-select the sampling outcome, where one discards an outcome $x\in{\{0, 1\}^{N_q}}$ if it conflicts with the desired $(N_e,S_z)$. It is worth noting that the variational inequality~\eqref{eq:variational-inequality} still holds in each sector of Fock space specified by $(N_e, S_z)$. \begin{figure*} \includegraphics[width=\textwidth]{figures/schematic/qsci-for-ground-state.pdf} \caption{Schematic description of the QSCI algorithm for finding the ground state. When selecting the configurations, one may post-select the configurations by using conserved quantities such as the electron number or spin $S_z$ to mitigate the errors.} \label{fig:algorithm-gs} \end{figure*} Physical noise can cause a contamination of symmetry sectors: for an input state with fixed $(N_e, S_z)$, sampling on a noisy device can result in electron configurations with unwanted values of $(N_e, S_z)$, due to the bit-flip noise\footnote{Note that an error that corresponds to a phase-flip error occurring at the end of a circuit does not affect the probability distribution $\abs{\bra{x}\ket{{\psi_{\rm in}}}}^2$ and hence the sampling outcome.} or readout error. 
Nevertheless, one can mitigate such errors by post-selecting the sampling outcome according to the conserved quantities, as described above. One then diagonalizes the Hamiltonian in the post-selected subspace. We find that the post-selection is particularly effective in mitigating the readout error in the Jordan-Wigner mapping, while it is also applicable to other fermion-qubit mapping schemes (see Appendix~\ref{subsec: post-selection} for discussions). The algorithm is schematically summarized in Fig.~\ref{fig:algorithm-gs}. \subsection{ QSCI for multiple energy eigenstates \label{subsec:excited-states} } \begin{figure*} \begin{minipage}{\textwidth} \subfloat[][Single diagonalization]{ \includegraphics[width=\textwidth]{figures/schematic/qsci-for-excited-state-single.pdf}} \subfloat[][Sequential diagonalization]{ \includegraphics[width=\textwidth]{figures/schematic/qsci-for-excited-state-multi.pdf}} \end{minipage} \caption{Schematic descriptions of the QSCI algorithms for finding the ground state and the first excited state: (a) single diagonalization scheme, and (b) sequential diagonalization scheme. In both panels, $|\psi_{\rm in}^{(0)}\rangle$ ($|\psi_{\rm in}^{(1)}\rangle$) is the input state for the ground (first excited) state. In panel (b), the overlap term is constructed from the previously obtained output state $|\psi_{\rm out}^{(0)}\rangle$ to define the effective Hamiltonian for the first excited state, $\hat{H}^{(1)} =\hat{H}+\beta_0|\psi_{\rm out}^{(0)}\rangle\langle \psi_{\rm out}^{(0)}|$.} \label{fig:algorithm-es} \end{figure*} We now extend the algorithm to find multiple energy eigenstates, including low-lying excited states. For this, we note that the previous algorithm can output multiple eigenvectors, which can be taken to approximate excited states as well as the ground state. Yet, the quality of the approximation would not be satisfactory for the excited states, as the subspace is tailored for the ground state. Hence, we introduce extra input states to construct subspace(s) which can capture the excited states. In the following, we present two distinct algorithms to find multiple energy eigenstates and their energies, schematically shown in Fig.~\ref{fig:algorithm-es}. The first algorithm, which we call the single diagonalization scheme, constructs a common subspace for both ground and excited states of interest, and performs the diagonalization in the subspace to simultaneously obtain all the desired eigenstates and energies. On the other hand, the second algorithm, dubbed the sequential diagonalization scheme, constructs multiple subspaces, each tailored for each energy eigenstate, and sequentially diagonalizes the Hamiltonian in each subspace. Both of the algorithms contain the algorithm specific to the ground state, introduced in the preceding subsection, as a special case. \subsubsection{ Single diagonalization scheme \label{sssec:method-single} } Here we describe the single diagonalization scheme. Suppose one seeks $N_s$ low-lying eigenstates of $\hat{H}$, which consist of the ground state(s) and subsequent excited states. In this case, one prepares multiple input states $|\psi_{\rm in}^{(i)}\rangle$ ($i=0,1,\cdots,N_{\rm in}-1$), which correspond to the low-lying energy eigenstates. Here, we allow $N_{\rm in}\leq N_s$, although the natural choice would be $N_{\rm in}=N_s$. For each of the input states, one repeats the sampling procedure as in the previous subsection.
One then obtains the set of important configurations $\mc{S}_{R_i}^{(i)}$, formed by the $R_i$ most frequent bit strings in the total sampling outcome for the $i$-th input state. Combining all the sets $\mc{S}_{R_i}^{(i)}$, one constructs the common subspace\footnote{Note that the set $\mc{S}_R$ defined here agrees with the definition~\eqref{eq:set_GS} in the preceding subsection when $N_{\rm in}=1$.}: \begin{align} \mc{S}_R = \mc{S}_{R_0}^{(0)} \cup \mc{S}_{R_1}^{(1)} \cup \cdots \cup \mc{S}_{R_{N_{\rm in}-1}}^{(N_{\rm in}-1)}. \label{eq:set_single-step} \end{align} In this case, the parameters $R_i$ may be eigenstate dependent, while $R$ is the number of elements in the common subspace $\mc{S}_R$. $R\geq N_s$ is required to yield at least $N_s$ eigenvectors in the diagonalization procedure explained shortly. One may treat all $R_i$ as free parameters, which determine $R$ in turn. Or, one may first choose a value for $R$ and then determine each $R_i$ following some strategy. There are various ways for the latter strategy, depending on the purpose of using the algorithm. For example, if one prioritizes the ground state in terms of accuracy, a possible choice would be $R_0=R$ and $R_i=0$ for $i\neq 0$, albeit extreme. Or, if one wishes to treat all the input states on equal footing, each $R_i$ can be chosen to be as equal as possible.\footnote{One can make each $R_i$ as equal as possible by the following cycle of procedures, starting from an empty set $\mathcal{S}_R$, for a given $R$: in the first cycle, for each of the $N_\text{in}$ input states, the most frequent bit string is selected from the sampling outcome and then added to $\mathcal{S}_R$; this procedure is executed from the 0-th input state to the $(N_{\rm in}-1)$-th input state, where one skips the state if the selected bit string already exists in $\mathcal{S}_R$; in the second cycle, the second most frequent bit string is added to $\mathcal{S}_R$ for each input state according to the same rule; one repeats such a cycle until $\mathcal{S}_R$ is filled with $R$ distinct bit strings. Suppose such procedures finished after completing $R'$ cycles. Then, one can ensure that at least the $R'$ most frequent bit strings for each input state are included in $\mathcal{S}_R$. This implies that the $R'$ or $R'+1$ most important configurations in each input state are included in the common subspace~\eqref{eq:set_single-step}, in the ideal situation where statistical fluctuation and physical noise can be ignored.} With the common subspace $\mc{S}_R$ constructed, one then diagonalizes the Hamiltonian in $\mc{S}_R$ as in the previous subsection: one constructs the $R\times R$ Hermitian matrix $\bm{H}_R$, solves the eigenvalue equation $\bm{H}_R\bm{c} = E_R\bm{c}$, and then picks the $N_s$ low-lying eigenvectors and eigenvalues, $(\bm{c}^{(0)}, E_R^{(0)}), (\bm{c}^{(1)}, E_R^{(1)}), \cdots, (\bm{c}^{(N_s-1)}, E_R^{(N_s-1)})$, where $\bm{c}^{(i)\dagger} \bm{c}^{(j)}=\delta_{ij}$. Here, $E_R^{(i)}$ ($E_R^{(0)}$) approximates the true energy of the $i$-th excited state (ground state), when the ground state is unique, for instance. The corresponding output states can be constructed as \begin{align} |\psi_{\rm out}^{(i)}\rangle =\sum_{\ket{x}\in \mathcal{S}_R} c_x^{(i)} \ket{x}, \label{eq:output-state_single-step} \end{align} for $i=0,1,\cdots, N_s-1$. Note that the algorithm in the previous subsection is a special case of the single diagonalization scheme with a single input state ($N_{\rm in}=1$). A minimal code sketch of this configuration-selection and subspace-diagonalization step is given below.
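The following Python sketch illustrates, under stated assumptions, the classical part shared by the ground-state algorithm and the single diagonalization scheme: post-selection on the conserved quantities, selection of the most frequent configurations, and exact diagonalization of the subspace Hamiltonian. It is an illustration rather than the implementation used in this paper: the helper \texttt{matrix\_element(x, y)}, returning $\mel{x}{\hat{H}}{y}$ (e.g., via the Slater-Condon rules), is assumed to be provided, and the block spin-orbital ordering of the earlier sketch is assumed for the post-selection.
\begin{verbatim}
from collections import Counter
import numpy as np

def post_select(counts, n_electrons, s_z, n_spatial):
    # Keep only bit strings consistent with the target (N_e, S_z) sector.
    kept = {}
    for x, c in counts.items():
        n_up = x[:n_spatial].count("1")
        n_down = x[n_spatial:].count("1")
        if n_up + n_down == n_electrons and (n_up - n_down) / 2 == s_z:
            kept[x] = c
    return kept

def select_subspace(counts_list, r_list):
    # Union of the R_i most frequent configurations from each input state.
    subspace = []
    for counts, r_i in zip(counts_list, r_list):
        for x, _ in Counter(counts).most_common(r_i):
            if x not in subspace:
                subspace.append(x)
    return subspace

def qsci_diagonalize(subspace, matrix_element, n_states=1):
    # Build H_R exactly on a classical computer and take the lowest eigenpairs.
    r = len(subspace)
    h_r = np.empty((r, r), dtype=complex)
    for i, x in enumerate(subspace):
        for j, y in enumerate(subspace):
            h_r[i, j] = matrix_element(x, y)
    energies, vectors = np.linalg.eigh(h_r)
    return energies[:n_states], vectors[:, :n_states]
\end{verbatim}
With a single input state and \texttt{n\_states}$\,=1$, this reduces to the ground-state algorithm of Sec.~\ref{subsec:ground-state}; the columns of the returned eigenvector array give the CI coefficients $c_x^{(i)}$ of the output states in Eq.~\eqref{eq:output-state_single-step}.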
In this scheme, one can apply the same error mitigation technique by post-selection as described in the previous subsection. The variational inequality now holds for each of the energy eigenstates by Cauchy's interlace theorem~\cite{helgaker2014molecular} (see also Refs.~\cite{hylleraas1930numerical, macdonald1933successive}): \begin{align} E^{(i)}_{\rm exact} \leq E_R^{(i)}, \label{eq:variational-inequality-single} \end{align} for $i=0,1,\cdots, N_s-1$, where $E^{(i)}_{\rm exact}$ is the $i$-th eigenvalue (in ascending order) by the exact diagonalization. We remark that QSE~\cite{mcclean2017hybrid} and multistate-contracted VQE (MCVQE)~\cite{parrish2019quantum}, which also rely on the subspace diagonalization to obtain excited states, need to measure matrix elements, while the current method exactly calculates the matrix elements. Hence, we expect our method to be more noise-robust with the guarantee of the variational inequality. \subsubsection{ Sequential diagonalization scheme } \label{sssec:method-sequantial} We now give another scheme of QSCI to find excited states. The sequential diagonalization finds the ground state(s) and subsequent excited states by sequential diagonalization procedures of the Hamiltonian $\hat{H}$ in distinct subspaces. The algorithm is similar to the variational quantum deflation (VQD)~\cite{higgott2019variational}, a variant of VQE for excited states. Suppose one seeks the $k$-th excited state\footnote{We implicitly assume the ground state is unique for ease of illustration. One can straightforwardly translate the description here to cases of degenerate ground (and possibly excited) states.} given that the preceding $(k-1)$ excited states and the ground state have already been obtained by this method with the output states $|\psi_{\rm out}^{(i)}\rangle$ ($i=0,1,\cdots, k-1$). As in the previous methods, one repeats the preparation and measurement of the input state $|\psi_{\rm in}^{(k)}\rangle$ to obtain the set of important configurations: \begin{align} \mc{S}_{R_k}^{(k)} = \{ \ket{x} | x\in{\{0, 1\}^{N_q}}, R_k~{\rm most~frequent} \}. \label{eq:set_multi-step} \end{align} One then has to find the lowest energy state of $\hat{H}$ in this subspace, under the restriction that this state is orthogonal to the states already found, $|\psi_{\rm out}^{(i)}\rangle$ ($i=0,1,\cdots, k-1$). This can be achieved by diagonalizing the following effective Hamiltonian\footnote{This is not the unique choice of the effective Hamiltonian. For instance, the orthogonality can be imposed without introducing extra parameters, though the implementation would be less suitable for NISQ devices~\cite{lee2018generalized}.} in the subspace spanned by $\mc{S}_{R_k}^{(k)}$: \begin{align} \hat{H}^{(k)} =\hat{H}+ \sum_{i=0}^{k-1}\beta_i |\psi_{\rm out}^{(i)}\rangle \langle \psi_{\rm out}^{(i)} |, \label{eq:sequential-Heff} \end{align} where $\beta_i$ are real parameters, which need to be sufficiently large for ensuring the orthogonality. The additional terms correspond to the overlap terms in VQD. This is equivalent to solving the eigenvalue equation \begin{align} \bm{H}_{R_k}^{(k)}\bm{c}^{(k)} = E_{R_k}^{(k)}\bm{c}^{(k)}, \label{eq:eigenvalue-eq} \end{align} and then picking the smallest eigenvalue $E_{R_k}^{(k)}$ and eigenvector $\bm{c}^{(k)}$, normalized by $\bm{c}^{(k)\dagger} \bm{c}^{(k)}=1$.
Here, $\bm{H}_{R_k}^{(k)}$ is the $R_k \times R_k$ Hermitian matrix defined by \begin{align} (\bm{H}_{R_k}^{(k)})_{xy}= \mel{x}{\hat{H}^{(k)}}{y}~{\rm for}~\ket{x}, \ket{y} \in \mathcal{S}^{(k)}_{R_k}, \label{eq:sequential-matrix} \end{align} whose matrix elements can be efficiently calculated by classical computations based on the expression \begin{align} (\bm{H}_{R_k}^{(k)})_{xy} = \mel{x}{\hat{H}}{y} +\sum_{i=0}^{k-1}\beta_i c_x^{(i)}c_y^{(i)*}. \end{align} One then constructs the output state \begin{align} |\psi_{\rm out}^{(k)}\rangle =\sum_{\ket{x}\in \mathcal{S}_{R_k}^{(k)}} c_x^{(k)} \ket{x}, \label{eq:output-state_multi-step} \end{align} which approximates the $k$-th excited state. Note that the expressions are specific to the $k$-th excited state. In order to find the entire (low-lying) spectrum, one has to repeat the above procedure sequentially, starting from $k=0$, the ground state, which can be found by the ground-state algorithm already explained. This is similar to VQD, but the QSCI method does not require extra circuits to calculate the overlap terms. The coefficients $\beta_i$ can be chosen in the same manner as in VQD. We want the smallest eigenvalue of $\bm{H}_{R_k}^{(k)}$ to approximate $E^{(k)}_{\rm exact}$, the $k$-th eigenvalue of $\hat{H}$. Following the discussion in Ref.~\cite{higgott2019variational}, it suffices to choose $\beta_i > E^{(k)}_{\rm exact}-E^{(i)}_{\rm exact}$ for $i=0,\cdots, k-1$; or, one may apply the looser condition of $\beta_i > 2\sum_j \abs{c_j}$, where $c_j$ are coefficients in the qubit Hamiltonian $\hat{H}=\sum_j c_j P_j$, expressed by the Pauli strings $P_j$ (see Appendix~\ref{subsec:details-sequential} for details). In practice, the condition $\beta_i > E^{(k)}_{\rm exact} - E^{(i)}_{\rm exact}$ can be utilized if one has prior knowledge of the energy spectrum, e.g., based on variational quantum algorithms. Even if such information is not available, one may still rely on the looser condition $\beta_i > 2\sum_j \abs{c_j}$. Note that in the sequential diagonalization scheme, a variational inequality like Eq.~\eqref{eq:variational-inequality-single} is not guaranteed, because the effective Hamiltonian~\eqref{eq:sequential-Heff} would in practice be constructed only from approximate eigenstates (see Appendix~\ref{subsec:details-sequential} for further discussion). \section{Benchmark of QSCI with noiseless simulations \label{sec:numerical} } In this section, we test various aspects of QSCI for small molecules by noiseless numerical simulations, where the effects of physical noise are not included. In Secs.~\ref{subsec:ground-state-simulation-with-noiseless-vqe} and \ref{subsec:simulation-excited-h2o}, QSCI calculations are performed for ground states and excited states, using input states prepared by VQE and VQD~\cite{higgott2019variational}, a variant of VQE for excited states. Then the scalability of QSCI is examined in Sec.~\ref{subsec:scaling}, and finally the effect of the statistical error in QSCI is studied in Sec.~\ref{ssec:sampling-simulation}. For the numerical simulations, a quantum-circuit simulation library Qulacs~\cite{suzuki2021qulacs} is used with the help of QURI Parts~\cite{quri_parts}, a library for developing quantum algorithms.
The simulations in Sec.~\ref{ssec:sampling-simulation} are carried out by the sampling simulator which takes into account the statistical error, while all the other simulations are performed by the state-vector simulator, where the expectation values are exactly calculated without errors. For each simulation and experiment in this paper, the molecular Hamiltonian is first prepared as the second-quantized electronic Hamiltonian using the Born-Oppenheimer approximation with Hartree-Fock orbitals using the STO-3G basis unless otherwise stated, and converted to the qubit one by the Jordan-Wigner mapping. Active spaces are explicitly specified when employed, otherwise the full-space Hamiltonians are used. The electronic Hamiltonians are generated by OpenFermion~\cite{mcclean2020openfermion} interfaced with PySCF~\cite{sun2018pyscf}. The molecular geometries and other details are shown in Appendix~\ref{sec:appendix-details-of-sim-and-exp}. Stable geometries are chosen for all the molecules except for the hydrogen chains, and a potential impact of unstable geometry is briefly analyzed in Appendix~\ref{ssec:appendix-bond-length}. \subsection{QSCI for ground state} \label{subsec:ground-state-simulation-with-noiseless-vqe} \begin{figure} \includegraphics[width=.45\textwidth]{figures/History-H2O-CAS65-log_20221025.pdf} \caption{ The result of QSCI, the proposed method, for the ground state of \ce{H2O} molecule by noiseless simulation, shown with optimization history of VQE, which is used to prepare the input states of QSCI. Each of the resulting energies is shown as the difference to the CASCI result $E_{\rm exact}$ in Hartree. The dash-dotted line shows the result by the state-vector simulation of VQE. The lines specified by the parameter $R$ show the results of QSCI, $E_R - E_{\rm exact}$, for the given value of $R$, using the parametrized state at each iteration of VQE as the input state. The parameter $R$ determines the classical computational cost for QSCI, as explained in the main text.} \label{fig:noiseless-vqe} \end{figure} We first show the result of numerical simulation for ground state with input states prepared by noiseless VQE. We choose \ce{H2O} molecule with five active spatial orbitals and six active electrons as our problem, which leads to a 10-qubit Hamiltonian after the Jordan-Wigner mapping. In the VQE calculation, the parametrized quantum circuit is constructed by the real-valued symmetry-preserving ansatz~\cite{gard2020efficient, ibe2022calculating} with the depth 10, and is optimized by the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimizer in the scientific library SciPy~\cite{virtanen2020scipy}. See Appendix~\ref{subsec:setup-noiseless-vqe} for details. The QSCI calculation is performed for each iteration of the VQE optimization: given the values of ansatz parameters obtained at the iteration, the input state is prepared by the parametrized quantum circuit with those values assigned; then, the QSCI calculation with the idealized sampling introduced in Sec.~\ref{subsec:ground-state} is performed to estimate the ground-state energy $E_R$ for a given $R$, the number of configurations in the subspace $\mc{S}_R$. This calculation is repeated for all the iterations of VQE with different values of $R$. The effect of the uncertainty due to the finiteness of the number of shots is addressed later in Secs.~\ref{ssec:sampling-simulation}, \ref{sec:noisy-simulation-experiment}, and Appendix~\ref{ssec:appendix-sampling}. 
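For concreteness, the idealized-sampling selection used in these calculations can be written in a few lines of Python; this is an illustration only (the actual simulations rely on Qulacs and QURI Parts), and the bit-ordering convention of the returned strings is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

def idealized_sampling(state_vector, r):
    # Select the R basis states with the largest |CI coefficient|,
    # i.e., sampling with an infinite number of shots and no noise.
    n_qubits = int(np.log2(len(state_vector)))
    order = np.argsort(np.abs(state_vector))[::-1][:r]
    return [format(int(idx), f"0{n_qubits}b") for idx in order]
\end{verbatim}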
In Fig.~\ref{fig:noiseless-vqe}, the result is shown along with the optimization history of VQE: $E_R - E_{\rm exact}$ is plotted (in Hartree) for each optimization step of VQE, where $E_{\rm exact}$ is the ground-state energy obtained by the exact diagonalization in the active space, called the complete active space configuration interaction (CASCI). The energies obtained by VQE are shown in the same way. Comparing the results at the last iteration in the plot, one can see that QSCI gives a lower energy than VQE for $R\gtrsim 16$. This shows that the method is able to improve the results of VQE even in the noiseless setting, where the effect of error mitigation is not present. We emphasize that, as discussed in Sec.~\ref{subsec:ground-state}, a lower energy by QSCI means that the energy is closer to the exact ground-state energy, which is manifested in the plot where $E_R - E_{\rm exact}$ is always positive. It is notable that we can already achieve the chemical accuracy\footnote{In this paper, we define the chemical accuracy by $\SI{1}{kcal/mol} \simeq \SI{1.6e-3}{Hartree}$ for the deviation of the calculated energy from the one obtained by the exact diagonalization of the Hamiltonian. } of $\SI{1.6e-3}{Hartree}$ with $R\sim 16$ while the CASCI in this case uses 100 determinants to express the ground state.\footnote{ For the active space restriction of five active orbitals and six active electrons with $S_z=0$, the number of the Slater determinants is $\binom{5}{3}\cdot \binom{5}{3}=100$. If one does not know the number of electrons and $S_z$ of the ground state before the calculation, then one would need to deal with the full Hamiltonian in the Fock space of $2^{10}=1024$ dimensions.} A similar tendency is observed for iterations $\gtrsim 200$. Note that, in this case, the VQE results already achieve the chemical accuracy. On the other hand, for intermediate iterations of 70--200, the VQE results do not reach the chemical accuracy, while QSCI can improve them to meet the chemical accuracy if $R \gtrsim 16$. This suggests that an intermediate result of VQE, whose optimization has not yet converged, is already useful as an input state of QSCI, and that one can reduce the number of optimization steps of VQE by employing QSCI as a post-processing step. We note that the QSCI results do not monotonically decrease, as a QSCI calculation for an input state with a lower energy does not necessarily result in a lower output energy. \subsection{QSCI for excited states} \label{subsec:simulation-excited-h2o} \begin{figure} \begin{minipage}{0.5\textwidth} \subfloat[][$T_1$ state]{ \includegraphics[width=\textwidth]{figures/excited_H2O_state1.pdf}} \subfloat[][$S_1$ state]{ \includegraphics[width=\textwidth]{figures/excited_H2O_state2.pdf}} \end{minipage} \caption[]{ Same as Fig.~\ref{fig:noiseless-vqe} but for the first ($T_1$) and second ($S_1$) excited states of \ce{H2O} with $S_z = 0$, along with optimization histories of VQD, shown by dotted lines, for input-state preparation; the energy differences are plotted as absolute values. For the QSCI calculation of the $T_1$ ($S_1$) state, the input state(s) corresponding to the lower energy states, i.e., $S_0$ ($S_0$ and $T_1$) state(s), are prepared by converged sets of parameters of VQD. QSCI results are shown for the sequential diagonalization and the single diagonalization with two types of configuration selection, as described in the main text.
The QSCI calculations with sequential diagonalization are done with $R_i=16$ for $i=0,1,2$, while the value of $R$ is set to $R=16$ for single diagonalization.} \label{fig:noiseless-vqd} \end{figure} We next show the results of noiseless simulations for excited states of \ce{H2O} using the two distinct implementations of QSCI presented in Sec.~\ref{subsec:excited-states}, namely the single diagonalization and sequential diagonalization schemes, which take the input states for excited states as well as the ground state. The numerical setup is similar to the previous subsection, with some differences explained below. In the input-state preparation, we employ VQE for the ground state and VQD for excited states, each with the same ansatz and optimizer as in the previous subsection; we use the same 10-qubit Hamiltonian, but with the overlap terms and penalty terms~\cite{mcclean2016theory,ryabinkin2018constrained,kuroiwa2021penalty} in VQD, for orthogonality between the eigenstates and for symmetry restrictions (charge neutrality for the molecule and $S_z=0$ for the total electron spin) on the excited states. Under the same symmetry restrictions, the first excited state is a triplet state ($T_1$) and the second excited state is a singlet state ($S_1$), according to the exact diagonalization. The VQD calculation for $T_1$ requires information of the ground state ($S_0$) to generate the overlap term, for which the ansatz state is used with the converged parameters in VQE. A similar procedure is applied for $S_1$, but with the extra overlap term for $T_1$ added. With the input states for $S_0$, $T_1$ and $S_1$, we perform QSCI calculations to find $T_1$ and $S_1$, where the idealized sampling is used. See Appendix~\ref{subsec:setup-noiseless-vqd} for details. The results are shown in Fig.~\ref{fig:noiseless-vqd} along with the optimization history of VQD, in the similar way as Fig.~\ref{fig:noiseless-vqe}, but for $|E_R^{(i)} - E_{\rm exact}|$ ($i=1$ for $T_1$ and $i=2$ for $S_1$). At each iteration, three types of QSCI calculations are performed: sequential, single-ground-state, and single-mixed. Sequential diagonalization uses the $T_1$ ($S_1$) state at the iteration, and one (two) lower energy state(s) at their final iterations as input states. Two single diagonalization methods use different input states: ``single-ground-state'' uses the ground state prepared by the converged VQE calculation, and that is why they are constant in the plot; on the other hand, ``single-mixed'' uses the two (three) states as input states, and selects $R$ configurations so that each of the two (three) states contributes as equally as possible, as explained in Sec.~\ref{sssec:method-single}. Note that $R$ is the dimension of the common subspace $\mc{S}_R$ in Eq.~\eqref{eq:set_single-step}. The coefficient(s) $\beta_0$ (and $\beta_1$ for $S_1$ state) of the overlap term(s) for orthogonality is set to $\beta_0=\beta_1=\SI{1}{Hartree}$, which is sufficiently larger than the energy gaps between the states in question. For sequential diagonalization, the values of $R_i$ are fixed to $R_i=16$ for $i=0,1$ and 2, corresponding to $S_0$, $T_1$, and $S_1$ states, respectively; for single diagonalization, the value of $R$ is set to $R=16$, so that the sizes of the subspace Hamiltonian matrices to be diagonalized are the same among all the setups. 
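As an illustration of how the overlap terms enter the sequential diagonalization used here, the subspace matrix of the effective Hamiltonian in Eq.~\eqref{eq:sequential-Heff} can be assembled classically as in the following minimal sketch (illustrative only; \texttt{matrix\_element} and the coefficient dictionaries of the previously obtained output states are assumed to be available, as in the earlier sketches).
\begin{verbatim}
import numpy as np

def deflated_subspace_matrix(subspace, matrix_element, lower_states, betas):
    # (H^(k))_{xy} = <x|H|y> + sum_i beta_i * c_x^(i) * conj(c_y^(i)),
    # where each lower state is a dict mapping bit strings to coefficients.
    r = len(subspace)
    h_k = np.empty((r, r), dtype=complex)
    for i, x in enumerate(subspace):
        for j, y in enumerate(subspace):
            h_k[i, j] = matrix_element(x, y)
            for beta, coeffs in zip(betas, lower_states):
                h_k[i, j] += beta * coeffs.get(x, 0.0) * np.conj(coeffs.get(y, 0.0))
    return h_k
\end{verbatim}
Its lowest eigenpair, obtained with \texttt{numpy.linalg.eigh}, gives $E_{R_k}^{(k)}$ and $\bm{c}^{(k)}$, provided the $\beta_i$ are chosen sufficiently large, as discussed in Sec.~\ref{sssec:method-sequantial}.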
Comparing the three QSCI results for excited states, the sequential diagonalization performs the best except at the initial iterations, where the quality of the input state is very low. Moreover, the sequential diagonalization outperforms the VQD calculation, even with a moderate value of $R_i=16$. For some larger $R$, the single diagonalization is also expected to improve and eventually outperform the VQD result at the same iteration, as it can achieve the same representability as the sequential one\footnote{To show this explicitly, assume $N_s=2$ for simplicity. The single diagonalization with the subspace $\mc{S}_{R}=\mc{S}_{R_0}^{(0)}\cup\mc{S}_{R_1}^{(1)}$, where the subspaces on the right-hand side denote those of the sequential diagonalization, has at least the same representability as the sequential diagonalization calculation with $\mc{S}_{R_1}^{(1)}$.}. Although the sequential diagonalization seems to be better in terms of performance, it should be noted that there is no guarantee for the variational inequality in the sequential diagonalization. The inequality for excited states holds in the single diagonalization, as explained in Sec.~\ref{subsec:excited-states}. \subsection{Scaling of computational costs} \label{subsec:scaling} \begin{figure} \begin{minipage}{0.45\textwidth} \subfloat[][\ce{Cr2}]{ \includegraphics[width=\textwidth]{figures/scaling/qubit-to-R-Cr2.pdf} } \subfloat[][Hydrogen chain]{ \includegraphics[width=\textwidth]{figures/scaling/qubit-to-R-Hchain.pdf} } \end{minipage} \caption{Estimated $R$ required for a given energy error $\epsilon$. Results with (a) expanding active spaces (\ce{Cr2}) or (b) various numbers of atoms (hydrogen chain) are shown by markers, along with the linear fit of each plot.} \label{fig:scaling-a} \end{figure} We now investigate the scalability of the proposed method by estimating the classical and quantum computational costs to calculate the ground states for molecular Hamiltonians of various sizes. More concretely, we estimate the minimum value for $R$ and the required number of shots $N_{\rm shot}$ to obtain the ground-state energy within an error $\epsilon$ for those Hamiltonians. For this purpose, we employ the chromium dimer \ce{Cr2} with various active spaces and the linear hydrogen chains with different numbers of atoms. Both \ce{Cr2} and hydrogen chains are known to be challenging molecules in quantum chemistry (see, e.g., Refs.~\cite{larsson2022chromium, motta2020ground} and references therein), while the hydrogen chains are also expected to show a clear scaling in the number of atoms. For \ce{Cr2}, the cc-pVQZ basis set is used with $n$ active orbitals and $n$ active electrons with $n=2,4,\dots,12$; the Jordan-Wigner mapping produces $4,8,\cdots,24$-qubit Hamiltonians, respectively. For the linear hydrogen chains, we consider $4,6,\cdots,12$ hydrogen atoms equally separated by a distance of \SI{1.0}{\AA}; we use the STO-3G basis set without specifying the active space, corresponding to full-space Hamiltonians of $8,12,\cdots, 24$ qubits after the Jordan-Wigner mapping, respectively. For each setup, the exact ground state of the Hamiltonian is prepared as the input state, and the QSCI calculation is performed by the idealized sampling introduced in Sec.~\ref{subsec:ground-state}, which picks the $R$ Slater determinants with the largest absolute values of CI coefficients in the input-state wavefunction.
Then, for a given accuracy $\epsilon$, the minimal $R$ that satisfies $\abs{E_R -E_{\rm exact}} \leq \epsilon$ is determined, where $E_R$ is the energy obtained by QSCI with the $R$ configurations and $E_{\rm exact}$ by the exact diagonalization. In Fig.~\ref{fig:scaling-a}, the results are plotted for each molecule by varying the number of qubits, for $\epsilon=0.1, 0.01$ and $0.001$~Hartree; they are extrapolated by fitting (shown by lines) to discuss the feasibility for larger system sizes. As detailed in Sec.~\ref{ssec:discussion-computational-cost}, we infer that the diagonalization with $R\simeq \SI{5e7}{}$ configurations is achievable with current state-of-the-art classical computing, according to the reports~\cite{stampfuss2003improved, garniron2019quantum}. The result for \ce{Cr2} suggests that $R$ is expected to be manageable even when we require $\epsilon=\SI{0.001}{Hartree}$ for a system larger than 50 qubits, where the exact diagonalization in the whole Fock space, i.e., CASCI, is challenging for classical computers. In the case of the hydrogen chains, on the other hand, the exponential growth of $R$ is more clearly observed, and it may become hard to achieve $\epsilon=\SI{0.001}{Hartree}$ for a system much larger than 50 qubits due to the limitation of classical computing. Note that the two scalings have slightly different meanings: the active space is enlarged for \ce{Cr2} while fixing the molecule, i.e., the system size, while the system size itself is enlarged for the hydrogen chains. The results may suggest that our method is more suited to a localized system with many electrons involved, rather than a spatially extended system. For similar studies and results for several diatomic and aromatic molecules, see Appendix~\ref{ssec:more-results-scaling}. \begin{figure} \begin{minipage}{0.5\textwidth} \subfloat[][\ce{Cr2}]{ \includegraphics[width=.9\textwidth]{figures/scaling/qubit-to-cR-Cr2.pdf} } \subfloat[][Hydrogen chain]{ \includegraphics[width=\textwidth]{figures/scaling/qubit-to-cR-H10-include-0.01.pdf} } \end{minipage} \caption{Estimated number of shots for a given energy error $\epsilon$. For QSCI, the number of shots is approximated by $1/\abs{c_R}^2$, where $R$ for each $\epsilon$ is obtained in Fig.~\ref{fig:scaling-a} and $c_R$ is the CI coefficient of the input state with the $R$-th largest absolute value. The reasoning for this approximation is explained in the main text. For comparison, the results of the conventional expectation-value estimation with QWC grouping are plotted for each $\epsilon$, fitted by a logarithmic function in the plot: the required numbers of shots for evaluating the energy are shown by dashed curves; for hydrogen chains, the ones for evaluating the gradients and Hessians in addition to the energy are shown by dash-dotted curves; the target precisions of the gradients and Hessians are set to $\epsilon/\text{\AA}$ and $\epsilon/\text{\AA}^2$, respectively.} \label{fig:scaling-b} \end{figure} We next estimate the number of shots for sampling required to achieve an error of $\epsilon$ by using the value of $1/\abs{c_R}^2$ for each setup (Fig.~\ref{fig:scaling-b}). Here, $c_R$ is the CI coefficient that has the $R$-th largest absolute value in the input state, where $R$ is taken to be the values shown in Fig.~\ref{fig:scaling-a}.
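Before turning to the interpretation of $1/\abs{c_R}^2$ below, the following Python sketch shows how the two plotted quantities can be extracted from an input-state wavefunction; it reuses the hypothetical \texttt{qsci\_energy\_idealized} helper sketched above, and the linear scan over $R$ is only for clarity (the error is non-increasing in $R$ for the idealized selection, so a bisection would also work).
\begin{verbatim}
import numpy as np

def minimal_R_and_shots(hamiltonian, state, e_exact, eps):
    """Smallest R with |E_R - E_exact| <= eps, together with the shot
    estimate 1/|c_R|^2, where c_R is the R-th largest CI coefficient in
    magnitude.  Assumes qsci_energy_idealized from the previous sketch."""
    coeffs_sorted = np.sort(np.abs(state))[::-1]
    for R in range(1, len(state) + 1):
        e_R = qsci_energy_idealized(hamiltonian, state, R)
        if abs(e_R - e_exact) <= eps:
            c_R = coeffs_sorted[R - 1]
            return R, 1.0 / c_R**2
    raise ValueError("target accuracy not reached")
\end{verbatim}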
When the state is sampled $1/\abs{c_R}^2$ times, the probability of obtaining the $R$-th most significant configuration is $O(1)$, and in that sense, $1/\abs{c_R}^2$ gives a rough estimator for the number of shots required to sample the $R$ most significant configurations. We see in the next section, especially in Fig.~\ref{fig:conventional}, that this gives a ballpark estimate of the required number of shots for a given accuracy. For comparison, the total number of shots required in a conventional expectation-value estimation is also estimated. More precisely, we analytically estimate the number of shots for which the standard deviation of the expectation-value estimations equals $\epsilon$ for the exact ground state (see, e.g., Ref.~\cite{kohda2022quantum}). In the conventional methods, the expectation value of the Hamiltonian, which is expressed as a linear combination of Pauli strings, is estimated by directly measuring the quantum state in the basis of the Pauli strings multiple times and taking the average of the measurement outcomes. To reduce the number of measurements, we employ the qubit-wise commuting (QWC) grouping~\cite{mcclean2016theory} with the sorted insertion algorithm~\cite{crawford2021efficient}. The total number of shots is distributed among the groups, with the shot allocation optimized for the exact ground state\footnote{This shot allocation may not be possible in practice without prior knowledge of the exact ground state, but this estimation gives a lower bound on the required total number of shots among possible shot allocation strategies, for a given error tolerance with the given grouping method and the state. }~\cite{wecker2015progress, rubin2018application}. Note that, although there are methods that are capable of reducing the number of measurements more effectively than QWC, they require more gate operations for measurements than QWC does; QWC requires a layer of single-qubit rotations after the state preparation, which is minimal among methods that measure the Pauli strings directly, whereas QSCI requires no additional gate operations. Most of the other grouping methods are thus expected to be more vulnerable to noise, and QWC is chosen for a fair comparison in this study. Figure~\ref{fig:scaling-b} shows the values of $1/\abs{c_R}^2$ in QSCI for various numbers of qubits, along with the estimated numbers of shots in QWC. For the hydrogen chains, the results of QWC are fitted by a function $a(N_q)^b$ with parameters $a$ and $b$, as they are expected to be polynomial in the number of qubits\footnote{More precisely, the fit was performed by a function \begin{equation} \log (N_{\text{shot}}(N_q))= B\log(N_q)+A, \end{equation} where $A$ and $B$ are the free parameters. Similarly, in Fig.~\ref{fig:scaling-a}, a linear function $c N_q+d$ was used to fit the data for $\log(R)$, rather than $D\times 2^{C N_q}$ for $R$. }, while the scaling of QSCI is unclear and fitting is not performed. In the case of \ce{Cr2}, the number of shots for the proposed method seems to be consistently smaller than that of QWC, while the advantage of QSCI, in terms of reducing the effect of statistical fluctuation, is less clear-cut for hydrogen chains with the number of qubits $N_q\gtrsim 30$.
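For reference, a minimal Python sketch of the QWC grouping with sorted insertion used for this comparison is given below: Pauli terms are sorted by decreasing coefficient magnitude, and each term is placed into the first group all of whose members it qubit-wise commutes with (on every qubit, the two Pauli letters agree or at least one is the identity). Representing a Pauli string as a plain character string such as \texttt{"XIZY"} is a simplifying assumption of this sketch.
\begin{verbatim}
def qwc_compatible(p, q):
    """Qubit-wise commutation: on every qubit the Pauli letters agree
    or at least one of them is the identity 'I'."""
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def sorted_insertion_qwc(terms):
    """terms: list of (coefficient, pauli_string) pairs, e.g. (0.5, "XIZY").
    Returns groups of terms that pairwise qubit-wise commute, built by
    the sorted-insertion heuristic (largest |coefficient| first)."""
    groups = []
    for coeff, pauli in sorted(terms, key=lambda t: -abs(t[0])):
        for group in groups:
            if all(qwc_compatible(pauli, p) for _, p in group):
                group.append((coeff, pauli))
                break
        else:  # no compatible group found: open a new one
            groups.append([(coeff, pauli)])
    return groups
\end{verbatim}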
The more operators are evaluated with the same output state, the more advantageous QSCI becomes; as we already noted in the previous section, QSCI does not require any additional quantum computation to evaluate additional observables, because QSCI outputs the classical vector representation of the state, and the expectation values are evaluated classically. On the other hand, in the conventional methods, the quantum computational cost increases, e.g., to measure additional Pauli strings introduced by the extra operators. To exploit this feature, we explore a scenario where the nuclear gradient and Hessian are evaluated along with the energy in the case of the hydrogen chains. For the shot allocation in the QWC grouping, we developed a method optimized for measuring multiple operators at once and used it in the simulation; see Appendix~\ref{subsec:appendix-scaling-multiple-operators} for details. The result, also shown in Fig.~\ref{fig:scaling-b}(b), implies that such a scenario makes QSCI much more advantageous\footnote{It is numerically shown in Appendix~\ref{ssec:appendix-multiple-observable-accuracy} that the accuracies of the gradients and Hessians in QSCI are of the same order as $\epsilon$ when expressed in the units of Hartree$/\mathrm{\AA}$ and Hartree$/\mathrm{\AA}^2$, respectively.}, as the number of shots for QWC significantly increases. QSCI generally outperforms QWC in terms of the sampling cost within the range of system sizes that we studied. Although the scaling of QSCI seems to be worse than that of QWC in hydrogen chains, we should emphasize here that, even if QWC outperforms QSCI at, say, 50 qubits, it does not mean that QSCI is not useful for Hamiltonians with more than 50 qubits: QSCI has various advantages over the conventional methods, such as error mitigation and the explicit representation of the output state, in addition to the reduction of the number of shots. The result should be interpreted as an implication that QSCI can be advantageous in moderately smaller but still classically challenging systems, even when we only consider the effect of the reduction of the number of shots. \subsection{Sampling simulation} \label{ssec:sampling-simulation} To assess the effect of the statistical error during the sampling in QSCI, we perform sampling simulations with different numbers of shots. The result for a linear \ce{H6} molecule (12 qubits) is shown in Fig.~\ref{fig:conventional}, and results for other molecules are in Appendix~\ref{ssec:appendix-sampling}. For this simulation, the exact ground state is used as the input state, and we include all the configurations obtained in the sampling into the basis set $\mathcal{S}_R$, i.e., we do not specify $R$ beforehand. For comparison, we also performed a conventional sampling estimation for the exact ground state with QWC grouping and a shot allocation optimized for Haar random states. For both QSCI and QWC, we perform 10 trials of the sampling simulation for each number of shots, and the average of the absolute differences to the exact ground-state energy is plotted along with the standard deviation of the 10 trials. The absolute differences to the exact value are much smaller in QSCI than in the conventional sampling with QWC grouping. It is worth noting that the standard deviation of the QSCI energy is smaller than its average difference, while for the QWC sampling the two are roughly equal.
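The sampling simulation described above can be sketched in a few lines of Python under the same illustrative dense-matrix assumptions used earlier: computational-basis samples are drawn from the squared amplitudes of the input state, every configuration observed at least once is kept in $\mathcal{S}_R$ (so $R$ is not fixed beforehand), and the Hamiltonian restricted to those configurations is diagonalized. The function name is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def qsci_energy_sampled(hamiltonian, state, n_shots, rng=None):
    """QSCI with finite sampling: draw n_shots computational-basis samples
    from |state|^2, keep every configuration that appears at least once,
    and diagonalize the Hamiltonian restricted to those configurations.
    Returns the subspace ground-state energy and the resulting R."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.abs(state) ** 2
    probs = probs / probs.sum()            # guard against rounding errors
    samples = rng.choice(len(state), size=n_shots, p=probs)
    idx = np.unique(samples)               # the sampled subspace S_R
    h_sub = hamiltonian[np.ix_(idx, idx)]
    return np.linalg.eigvalsh(h_sub)[0], len(idx)
\end{verbatim}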
Energy values obtained by QSCI are biased estimators of the exact expectation values even when the exact ground state is used as the input state. Thus, the absolute difference can roughly be regarded as a sum of the intrinsic bias existing in QSCI and the standard deviation that comes from the statistical fluctuation of the subspace $\mathcal{S}_R$. In QWC, on the other hand, the statistical error is the only source of error. One can say that the QSCI result is much less affected by the statistical error compared to the conventional method. Furthermore, as one can see in Fig.~\ref{fig:conventional}, $1/\abs{c_R}^2$ calculated in the previous simulation gives a relatively accurate estimate of the total number of shots that yields an average error close to $\epsilon$. Thus, the plots in Fig.~\ref{fig:scaling-b} for both QWC and QSCI give reasonable estimates of the number of shots with expected average error $\epsilon$, and the comparison is fair in this sense. \begin{figure} \includegraphics[width=.45\textwidth]{figures/sampling/H6_diff.pdf} \caption{Energy error results for both QSCI and conventional QWC in sampling simulations. For each set of 10 trials for each method, the standard deviation and the average of the absolute errors with respect to the exact value obtained by exact diagonalization are shown. QSCI energy errors obtained with the $1/\abs{c_R}^2$ shots from Fig.~\ref{fig:scaling-b} for $\epsilon=0.1,0.01,0.001$~Ha are also plotted for reference. The horizontal line indicates the chemical accuracy, \SI{1.6}{mHa}.} \label{fig:conventional} \end{figure}
{ "arxiv_id": "2302.11291", "language": "en", "timestamp": "2023-02-23T02:12:52", "url": "https://arxiv.org/abs/2302.11291", "yymm": "2302" }
\section{Introduction} Investigating the complexity of strategic voting problems has been a vibrant research topic in Computational Social Choice over the last three decades. Since the pioneering works of~\citeas{Bartholdi92howhard,BARTHOLDI89,BartholdiJames1991SocialChoiceWelfareSTV}, many manipulation, control, and bribery problems have been proposed for single-winner voting rules. Initiated by~\citeas{DBLP:journals/jair/MeirPRZ08}, these problems have been extended to multiwinner voting in recent years. Enriching this line of research, we propose some natural manipulation\onlyfull{, bribery,} and control problems for approval-based multiwinner voting ({ABMV}) rules and investigate the complexity of these problems for the prevalent rules {{approval voting}} (AV), {{satisfaction approval}} voting (SAV), {{net-satisfaction approval voting}} (NSAV), proportional approval voting (PAV), approval-based Chamberlin-Courant voting (ABCCV), minimax approval voting (MAV), etc. \onlyfull{In general, in these problems, there are either several voters who attempt to coordinate their votes in order to improve the results\onlyfull{ in their favor} ({\it{manipulation}})\onlyfull{, or an external agent who aims to make some distinguished candidates win by either bribing some vulnerable voters ({\it{bribery}})}, or by directly modifying the election ({\it{control}}). The study on strategy voting problems for single-winner voting rules has dominated computational social choice for a decade and recently such study has been extended to multi-winner voting rules~\cite{DBLP:conf/ijcai/BredereckKN17,DBLP:journals/jair/MeirPRZ08,DBLP:conf/atal/Peters18}. Nevertheless, our models differ from the previous studied ones in many aspects.} Manipulation models the scenario where some voters participating in an election, called manipulators, want to improve the election result in their favor by misreporting their preferences. A necessary notion in the study of manipulation is the preference of a voter over all possible outcomes. Unlike ranking-based single-winner voting where every voter has a linear preference over all candidates and the outcome is a single winner (see, e.g.,~\cite{smith2006votingsystems}), in the setting of {ABMV} how to deduce voters' preferences over committees (subsets of candidates) from their dichotomous preferences over candidates is already a question without a clear-cut answer. Some prominent approaches for this purpose have been proposed in the literature (see, e.g.,~\cite{DBLP:conf/ijcai/LacknerS18,DBLP:journals/scw/LaslierS16,DBLP:conf/atal/Peters18}). For example, a voter may prefer a committee to another one if the former contains more of her approved candidates. In a more conservative situation, a voter is more satisfied with the former one only if the former contains not only more of her truly approved candidates but also all of her approved candidates included in the latter one. The two approaches lead to the concepts of {\it{cardinality-strategyproofness}} and {\it{subset-strategyproofness}} respectively when only one manipulator is considered (see, e.g.,~\cite{DBLP:conf/atal/Peters18}). In contrast to the celebrated Gibbard-Satterthwaite theorem for single-winner voting~\cite{Gibbard73,Satterthwaite75}, there exist natural {ABMV} rules (like AV) that are strategyproof with respect to the above two concepts. However, many {ABMV} rules are not strategyproof. 
For example,~\citeas{DBLP:conf/atal/Peters18} showed that any {ABMV} rules that satisfy certain proportional properties are not cardinality- and subset-strategyproof. \citeas{DBLP:conf/atal/AzizGGMMW15} showed that SAV is not cardinality-strategyproof. Motivated by these non-strategyproofness results, many multiwinner manipulation problems have been proposed recently. Particularly,~\citeas{DBLP:conf/atal/AzizGGMMW15} studied the {\prob{Winner Manipulation}} and {\prob{Winner Set Manipulation}} problems for resolute multiwinner voting rules, i.e., rules that return exactly one winning committee (precisely, the resoluteness of the rules studied in their paper is achieved by using a linear order over candidates to break ties). In the {\prob{Winner Manipulation}} problem, given are an election, a distinguished candidate, and an integer~$\ell$, and the question is whether it is possible to add~$\ell$ additional votes so that the distinguished candidate is included in the winning committee. In the {\prob{Winner Set Manipulation}} problem, there are multiple distinguished candidates, and the question is whether we can add~$\ell$ additional votes so that the distinguished candidates are exactly the winners. In all of these problems, it is assumed that the manipulators have one clear target set of candidates whom they want to make the winners. This assumption, however, does not seem to be very realistic, since there may exist exponentially many outcomes that the manipulators prefer to the current outcome. In this paper, we study two new versions of manipulation problems, where manipulators judge the results with respect to the two preference extension approaches discussed above. Concretely, given an election~$E$ and a winning committee of~$E$, in the first version, manipulators aim to obtain winning committees including more of their truthfully approved candidates ({\prob{Cardinality-Based Coalition Manipulation}}), and in the second version manipulators aim to obtain winning committees including all of their approved candidates that are in the current winning $k$-committee and at least one more approved candidate that is not ({\prob{Subset-Based Coalition Manipulation}}). \onlyfull{ Second, we study some multiwinner bribery problems, where an external agent wants to make some distinguished candidates win by bribing some vulnerable voters, i.e., those who are more satisfied with the final winning committee (the one after all bribed voters change their votes according to the briber's request) than with the current winning $k$-committee. One motivation of the problems is that bribing or persuading the vulnerable voters may be cheaper or easier for the external agent to reach her goal. \onlyfull{This is different from most of the bribery problems studied in the literature where every voter can be bribed provided that the external agent pays some money to them, or in some cases the external agent can bribe a limited number of voters. We refer to~\cite{handbookofcomsoc2016Cha7FR} for a survey of these bribery problems.} Our bribery problems are inspired by the {\prob{Frugal Bribery}} problem for single-winner elections, recently proposed by~\citeas{DBLP:journals/tcs/DeyMN17}. } \citeas{DBLP:conf/ijcai/LacknerS18} put forward the notion of stochastic domination strategyproofness (SD-strategyproofness), and called for investigating the algorithmic challenge of finding successful SD-manipulations.
Several of our reductions for the above two manipulation problems also apply to the SD-strategyproofness-based manipulation problems (see Section~\ref{sec-preliminaries} for formal definitions of these notions). Besides, we study some control problems called {\prob{Constructive Control by Adding Voters}} ({\prob{CCAV}}), {\prob{Constructive Control by Adding Candidates}} ({\prob{CCAC}}), {\prob{Constructive Control by Deleting Voters}} ({\prob{CCDV}}), and {\prob{Constructive Control by Deleting Candidates}} ({\prob{CCDC}}) for multiwinner voting rules. These problems model the scenario where a powerful external agent would like to ensure that a given subset of candidates is contained in all winning $k$-committees by adding or deleting a limited number of voters or candidates. They are direct generalizations of the extensively studied control problems for single-winner voting rules~\cite{handbookofcomsoc2016Cha7FR}. For the problems studied, we obtain numerous complexity results including {{\sf{P}}}-results, {{\np}-{\sf{hardness}}} results, {{\sf{coNP}-hardness}} results, and {{\sf{FPT}}}-results with respect to a diversity of natural parameters. Importantly, our study offers a comprehensive understanding of the complexity of election control for the aforementioned {ABMV} rules. We refer to Table~\ref{tab-results-summary} for a summary of our concrete results. \subsection{Related Works} \label{sec-related-works} In this section, we discuss important related works that have not been mentioned or well-elaborated above. \citeas{DBLP:journals/jair/MeirPRZ08} initiated the study of the complexity of control and manipulation problems for multiwinner voting rules, but they mainly considered ranking-based voting rules, and their treatment of manipulation assumes the presence of only a single manipulator. Besides, in their models, strategic agents (the manipulator or the powerful external agent) derive utilities from candidates, and they attempt to achieve a winning~$k$-committee yielding the maximum total utilities. These models nicely bypass the question of extending voters' preferences over candidates to preferences over committees, while still capturing many real-world scenarios. The work of~\citeas{DBLP:journals/jair/MeirPRZ08} on manipulation was later complemented by~\citeas{DBLP:conf/atal/ObraztsovaZE13}, who considered further ranking-based voting rules and investigated how different tie-breaking schemes change the complexity of the problem. \citeas{DBLP:journals/aamas/BredereckKN21} went a step further by being the first to extend the manipulation model to the presence of multiple manipulators. Notably, \citeas{DBLP:journals/aamas/BredereckKN21} adopted three different functions (utilitarian, egalitarian, candidate-wise egalitarian) to merge the utilities of manipulators obtained from different committees. Compared to these works, we focus on {ABMV} rules. On top of that, our model departs from the previously studied utility-based manipulation by assuming that manipulators resort to preference extensions to evaluate committees. We remark that the {{\np}-{\sf{hardness}}} proofs of our problems can be modified to show the {{\np}-{\sf{hardness}}} of the utility-involved variants by assigning very high utilities to certain candidates in the constructed elections and utility zero to all other candidates.
Our study on manipulations is also related to the works of~\citeas{DBLP:conf/ijcai/LacknerS18}, \citeas{DBLP:journals/scw/LaslierS16}, \citeas{DBLP:conf/atal/Peters18}, and \citeas{DBLP:conf/ijcai/YangW18} who studied strategyproofness of {ABMV}. However, they were concerned with mathematical properties of these rules and focused only on one manipulator. Another line of research concerning manipulation in {ABMV} is as follows. In this setting, it is assumed that voters have linear preferences (but they are only allowed to submit dichotomous preferences) over candidates and the question is whether voters have incentive to submit nonsincere votes in order to improve the result. A vote is sincere with respect to a linear order over candidates if the approved candidates are exactly those ranked above some threshold candidate in the linear order. Moreover, voters compare different outcomes based on some preference extension principles such as the Kelly extension principle, G\"{a}rdenfors extension principle, etc. We refer to~\cite{Barber2004,EndrissTD2013} and references therein for further discussions. Our control problems are trivial generalizations of four standard election control problems first proposed in~\citep{Bartholdi92howhard} for single-winner voting. The number of papers covering this theme is huge. We refer to~\cite{BaumeisterR2016,handbookofcomsoc2016Cha7FR} and references therein for important progress by 2016, and refer to~\cite{DBLP:journals/aamas/ErdelyiNRRYZ21,DBLP:journals/ai/NevelingR21,DBLP:conf/atal/Yang17,DBLP:conf/ecai/000120} for some recent results. In addition, our control problems are related to the group control problems in the setting of group identification~\cite{DBLP:journals/aamas/ErdelyiRY20,DBLP:journals/aamas/YangD18}. Group identification models the scenario where voters and candidates (termed as individuals) coincide and the goal is to classify the individuals into two groups: the group of socially qualified individuals and the group of not socially qualified individuals. The group control problems consist in making some given distinguished individuals socially qualified by adding/deleting/partitioning the individuals. Finally, we would like to point out that other types of strategic voting problems in multiwinner voting have been also studied recently. \citeas{DBLP:conf/atal/FaliszewskiST17} studied bribery problems for {ABMV} rules, where the goal is to ensure one distinguished candidate to be included in the winning $k$-committee by applying a limited number of modification operations. Complementing this work, \citeas{DBLP:conf/atal/000120} studied the complexity of the counterparts of these bribery problems. \citeas{DBLP:journals/aamas/ErdelyiRY20} and~\citeas{DBLP:conf/ijcai/BoehmerBKL20} studied the complexity of bribery in group identification. \onlyfull{ The bribery problems studied in this paper are inspired by the {\prob{Frugal Bribery}} problem for ranking-based single-winner voting rules proposed by \citeas{DBLP:journals/tcs/DeyMN17}. In particular, in the {\prob{Frugal Bribery}} problem the briber aims to make a distinguished candidate the winner by bribing only vulnerable voters, i.e., those who prefer the distinguished candidate to the current winner.} \subsection{Organization} The remainder of the paper is organized as follows. In Section~\ref{sec-preliminaries}, we provide important notions and definitions that are used in our study. 
Then, we unfold our detailed results in Sections~\ref{sec-manipulation}--\ref{sec-fpt}, where Section~\ref{sec-manipulation} is devoted to the complexity of manipulation problems, Section~\ref{sec-control} covers our complexity results for control problems, and Section~\ref{sec-fpt} studies a variety of {{\sf{FPT}}}-algorithms for problems studied in Sections~\ref{sec-manipulation} and~\ref{sec-control}. Section~\ref{sec-conclusion} summarizes our main contributions and lays out interesting topics for future research. \begin{table} \caption{A summary of the complexity of manipulation and constructive control problems for {ABMV} rules. Here, ``{{\np}-{\sf{h}}}'' stands for ``{{\np}-{\sf{hard}}}'', and ``{{\sf{P}}}'' for ``polynomial-time solvable''. Additionally,~$t$ denotes the number of manipulators,~$m$ denotes the number of candidates,~$n$ denotes the number of votes, and~$J$ denotes the set of distinguished candidates. Parameters with respect to which {{\sf{FPT}}}-results hold are shown as superscripts. Notice that $k=1$ implies $\abs{J}=1$. Underlined results for manipulation are for {CBCM} and {SBCM}, and other results for manipulation are for {CBCM}, {SBCM}, and {SDCM}. Results labeled by~$\spadesuit$ are implied by or taken from \protect\cite{DBLP:journals/jair/MeirPRZ08}, by~$\heartsuit$ are from~\protect\cite{baumeisterapproval09,DBLP:journals/jcss/HemaspaandraH07}, and by~$\clubsuit$ are from~\protect\cite{baumeisterapproval09,DBLP:journals/ai/HemaspaandraHR07,DBLP:conf/icaart/Lin11}. Several of our hardness results hold with even further restrictions on the input. Related discussions are provided after the corresponding theorems.} \label{tab-results-summary} \renewcommand{\tabcolsep}{0mm} \renewcommand\arraystretch{1.5} \scriptsize{ \begin{center} \begin{tabular}{|l|l|l|l|l|l|}\toprule & manipulation & {\prob{CCAV}} & {\prob{CCDV}} & {\prob{CCAC}} & {\prob{CCDC}} \\ \toprule AV & {{{\np}-{\sf{h}}} (Thm.~\ref{thm-ccm-av-np-hard})} & {{\np}-{\sf{h}}} ($k=1$, $\clubsuit$) & {{\np}-{\sf{h}}} ($k=1$, $\clubsuit$) & {immune} ($\heartsuit$) & {{\sf{P}}} ($\spadesuit$)\\ \cline{2-4} & \underline{{{\sf{P}}} ($t=\bigo{1}$, Thm.~\ref{thm-maniuplation-polynomial-time-solvable-constant-number-manipulators})} & \multicolumn{2}{l|}{{{\sf{FPT}}} (Thm.~\ref{thm-ccav-ccdv-av-sav-nsav-fpt-candidates})} & & \\ \cline{2-2} & {{{\sf{FPT}}}}$^m$ (Thm.~\ref{thm-manipulation-fpt-wrt-candidate},~\ref{thm-new-manipulation-sav-nsav-fpt-wrt-candidate}) & \multicolumn{2}{l|}{} & & \\ \midrule SAV & {{{\np}-{\sf{h}}} (Thm.~\ref{thm-manipulation-np-hard-many-rules})} & {{\np}-{\sf{h}}} ($k=1$, Thm.~\ref{thm-ccav-sav-np-hard}) & {{\np}-{\sf{h}}} ($k=1$, $\clubsuit$) & {{\np}-{\sf{h}}} ($k=1$, Thm.~\ref{thm-ccac-sav-np-hard}) & {{\np}-{\sf{h}}} ($k=1$, Thm.~\ref{thm-ccdc-sav-np-hard})\\ \cline{2-6} & \underline{{{\sf{P}}} ($t=\bigo{1}$, Thm.~\ref{thm-maniuplation-sav-nsav-polynomial-time-solvable-constant-number-manipulators})} & \multicolumn{2}{l|}{{{\sf{FPT}}} (Thm.~\ref{thm-ccav-ccdv-av-sav-nsav-fpt-candidates})} & \multicolumn{2}{l|}{{{\sf{FPT}}}$^{n+\ell}$ (Thm.~\ref{thm-ccac-ccdc-many-rules-fpt-ell-plus-n})}\\ \cline{2-2} & {{{\sf{FPT}}}$^m$ (Thm.~\ref{thm-manipulation-sav-nsav-fpt-wrt-candidate},~\ref{thm-new-manipulation-sav-nsav-fpt-wrt-candidate})} & \multicolumn{2}{l|}{} & \multicolumn{2}{l|}{} \\ \midrule NSAV & {{{\np}-{\sf{h}}} (Thm.~\ref{thm-manipulation-np-hard-many-rules})} & {{\np}-{\sf{h}}} ($k=1$, Thm.~\ref{thm-ccav-sav-np-hard}) & {{\np}-{\sf{h}}} ($k=1$, $\clubsuit$) & {{\np}-{\sf{h}}} ($k=1$,
Thm.~\ref{thm-ccac-sav-np-hard}) & {{\np}-{\sf{h}}} ($k=1$, Thm.~\ref{thm-ccdc-sav-np-hard})\\ \cline{2-6} & \underline{{{\sf{P}}} ($t=\bigo{1}$, Thm.~\ref{thm-maniuplation-sav-nsav-polynomial-time-solvable-constant-number-manipulators})} & \multicolumn{2}{l|}{{{\sf{FPT}}} (Thm.~\ref{thm-ccav-ccdv-av-sav-nsav-fpt-candidates})} & \multicolumn{2}{l|}{{{\sf{FPT}}}$^{n+\ell}$ (Thm.~\ref{thm-ccac-ccdc-many-rules-fpt-ell-plus-n})}\\ \cline{2-2} & {{{\sf{FPT}}}$^m$ (Thm.~\ref{thm-manipulation-sav-nsav-fpt-wrt-candidate},~\ref{thm-new-manipulation-sav-nsav-fpt-wrt-candidate})} & \multicolumn{2}{l|}{} & \multicolumn{2}{l|}{} \\ \midrule ABCCV & & \multicolumn{4}{l|}{{{\sf{coNP}-h}} ($\abs{J}=1 \& \ell=0$, Thm.~\ref{thm-cc-abccv-pav-co-np})}\\ \cline{3-6} & & \multicolumn{2}{l|}{{{\np}-{\sf{h}}} ($k=1$, $\clubsuit$)} & {immune} ($\abs{J}=k$, Thm.~\ref{thm-pav-abbcv-mav-immue-to-ccac-k-equal-distinguished-candidates}) &{{\sf{P}}} ($k=1$, $\spadesuit$)\\ & & \multicolumn{2}{l|}{} & & {{\np}-{\sf{h}}} ($k=2$\&$\abs{J}=1$, Thm.~\ref{thm-ccdc-abccv-pav-nph-k-2})\\ \cline{3-6} & & \multicolumn{2}{l|}{{{\sf{FPT}}} (Thm.~\ref{thm-ccadv-abccv-pav-fpt-m})} & \multicolumn{2}{l|}{{{\sf{FPT}}}$^{n+\ell}$ (Thm.~\ref{thm-ccac-ccdc-many-rules-fpt-ell-plus-n})} \\ \midrule PAV & & \multicolumn{4}{l|}{{{\sf{coNP}-h}} ($\abs{J}=1 \& \ell=0$, Thm.~\ref{thm-cc-abccv-pav-co-np})}\\ \cline{3-6} & & \multicolumn{2}{l|}{{{\np}-{\sf{h}}} ($k=1$, $\clubsuit$)} & {immune} ($\abs{J}=k$, Thm.~\ref{thm-pav-abbcv-mav-immue-to-ccac-k-equal-distinguished-candidates}) &{{\sf{P}}} ($k=1$, $\spadesuit$)\\ & & \multicolumn{2}{l|}{} & & {{\np}-{\sf{h}}} ($k=2$\&$\abs{J}=1$, Thm.~\ref{thm-ccdc-abccv-pav-nph-k-2})\\ \cline{3-6} & & \multicolumn{2}{l|}{{{\sf{FPT}}}$^m$ (Thm.~\ref{thm-ccadv-abccv-pav-fpt-m})} & \multicolumn{2}{l|}{{{\sf{FPT}}}$^{n+\ell}$ (Thm.~\ref{thm-ccac-ccdc-many-rules-fpt-ell-plus-n})} \\ \midrule MAV & {{{\np}-{\sf{h}}} (Thm.~\ref{thm-manipulation-mav-np-hard})} & \multicolumn{4}{l|}{{{\np}-{\sf{h}}} ($\abs{J}=1 \& \ell=0$, Thm.~\ref{thm-cc-mav-nph})}\\ \cline{3-6} & & {{\np}-{\sf{h}}} ($k=1$\&$\abs{V}=1$, Thm.~\ref{thm-ccav-mav-nph-k-1}) & {{\sf{P}}} ($k=\bigo{1}$, Thm.~\ref{thm-ccav-ccdv-mav-polynomial-time-solvable-k-constant}) & {{\np}-{\sf{h}}} ($k=1$\&$\abs{C}=2$, Thm.~\ref{thm-ccac-mav-nph-k-1}) & {{\np}-{\sf{h}}} ($k=1$, Thm.~\ref{thm-ccdc-mav-nph-k-1})\\ \cline{3-6} & & {{\sf{FPT}}}$^m$ (Thm.~\ref{thm-ccav-mav-fpt-m}) & {{\sf{FPT}}}$^m$ (Thm.~\ref{cor-ccdv-mav-fpt-m}) & \multicolumn{2}{l|}{{{\sf{FPT}}}$^{n+\ell}$ (Thm.~\ref{thm-ccac-ccdc-many-rules-fpt-ell-plus-n})} \\ \bottomrule \end{tabular} \end{center} } \end{table} \section{Preliminaries} \label{sec-preliminaries} We assume the reader is familiar with the basics of graph theory and (parameterized) complexity theory~\cite{DBLP:books/sp/CyganFKLMPPS15,DBLP:conf/lata/Downey12,Douglas2000,DBLP:journals/interfaces/Tovey02}. \subsection{Approval-Based Multiwinner Voting} In the approval model, an {\memph{election}} is a tuple~$(C, V)$ where~$C$ is a set of candidates, and~$V$ is a multiset of votes cast by a set of voters. Each vote~$v\in V$ is a subset of~$C$, consisting of the candidates approved by the corresponding voter. For ease of exposition, we interchangeably use the terms vote and voter. So, by saying that a vote~$v$ approves a candidate~$c$, we simply mean that~$c\in v$. By saying that a vote~$v$ approves~$C'\subseteq C$, we mean that~$v$ approves all candidates in~$C'$ (and possibly also approves some other candidates not in~$C'$).
For a subset~$C'\subseteq C$ and a vote~$v\in V$, we use~$v_{C'}$ to denote the vote obtained from~$v$ by removing all candidates not in~$C'$, i.e., $v_{C'}=v\cap C'$. Moreover, we use $V_{C'}=\{v_{C'} \mymid v\in V\}$ to denote the multiset of votes obtained from~$V$ by replacing each $v\in V$ by~$v_{C'}$. By the definition of these notions,~$(C', V_{C'})$ is the election~$(C, V)$ restricted to~$C'$. For ease of exposition, we write $(C', V)$ for $(C', V_{C'})$. For a candidate~$c\in C$, we denote by~$V(c)$ the multiset of votes in~$V$ approving~$c$, i.e.,~$V(c)=\{v\in V \mid c\in v\}$. For~$C'\subseteq C$, let~$V(C')=\bigcup_{c\in C'}V(c)$ be the multiset of votes in~$V$ approving at least one candidate from~$C'$, and let $\vaes{V}{C'}=\{v\in V\mymid v=C'\}$ be the multiset of votes in~$V$ which approve exactly the candidates in~$C'$. Moreover, for a submultiset~$V'\subseteq V$ of votes, let~$C(V')$ denote the set of candidates approved by at least one vote in~$V'$, i.e.,~$C(V')=\bigcup_{v\in V'}v$, and let $C^{{\star}}_V(V')=\{c\in C \mid V(c)=V'\}$ be the set of candidates in~$C$ who are approved by all votes in~$V'$ but by no vote in~$V\setminus V'$. Let $m^{\star}_V(V')=\abs{C_V^{\star}(V')}$. A~$k$-set is a set of cardinality~$k$. A subset of candidates is also called a committee, and a $k$-subset of candidates is called a $k$-committee. For an integer $k$ and a subset $J\subseteq C$, let $\mathcal{C}_{k, C}(J)$ denote the set of all $k$-committees of~$C$ containing~$J$, and let ${\mathcal{C}_{k, C}}(\overline{J})$ be the set of all the other $k$-committees of~$C$. An {\memph{{ABMV} rule}} (a.k.a.\ {\memph{approval-based~committee selection rule}}) maps each election $(C, V)$ and an integer~$k$ to a collection of~$k$-committees of~$C$, which are called {\memph{winning~$k$-committees}} of this rule at~$(C, V)$. In practice, when a rule returns multiple winning $k$-committees, a certain tie-breaking scheme is often used to select exactly one winning $k$-committee. We study some important {ABMV} rules which can be categorized into two groups. Under these rules, each~$k$-committee receives a score based on the votes, and winning $k$-committees are those either maximizing or minimizing the corresponding scoring functions. For the first group of rules, the score of a committee is the sum of the scores of its members. Due to this property, these rules are referred to as {\memph{additive rules}} in the literature~\cite{DBLP:conf/atal/AzizGGMMW15,Kilgour2010,DBLP:conf/ijcai/YangW18}. In the following, let $(C, V)$ be an election. \renewcommand*\descriptionlabel[1]{\hspace\labelsep \normalfont\bfseries {#1}} \begin{description} \item[Approval voting (AV)] The score of a candidate~$c\in C$ in $(C, V)$ is the number of votes in~$V$ approving~$c$, and winning $k$-committees are those with the highest total scores of their members. Formally, a $k$-committee~$w$ is a winning $k$-committee of AV at $(C, V)$ if $\sum_{v\in V} \abs{v\cap w}\geq \sum_{v\in V} \abs{v\cap w'}$ for all $w'\subseteq C$ such that $\abs{w'}=k$. \end{description} Due to its simplicity and intuitiveness, AV has been widely used in practice both as a single-winner voting rule and as a multiwinner voting rule~\citep{BramsF2007approvalvotingsecondedition}. Additionally,~\citeas{DBLP:conf/ijcai/LacknerS18} showed that AV is the only {ABMV} rule that satisfies certain desirable properties.
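As a small illustration of how winning $k$-committees arise for an additive rule, the following Python sketch computes the AV scores of all candidates and enumerates the AV winning $k$-committees of a toy election by exhaustive search; representing votes as Python sets of candidate names is an assumption of this sketch, which is only meant to make the definition concrete.
\begin{verbatim}
from itertools import combinations

def av_scores(candidates, votes):
    """AV score of each candidate: the number of votes approving it."""
    return {c: sum(1 for v in votes if c in v) for c in candidates}

def av_winning_committees(candidates, votes, k):
    """All k-committees maximizing the total AV score of their members."""
    scores = av_scores(candidates, votes)
    best, winners = None, []
    for committee in combinations(sorted(candidates), k):
        total = sum(scores[c] for c in committee)
        if best is None or total > best:
            best, winners = total, [set(committee)]
        elif total == best:
            winners.append(set(committee))
    return winners

# Toy election: candidates a, b, c, d and three votes; k = 2.
print(av_winning_committees({"a", "b", "c", "d"},
                            [{"a", "b"}, {"a", "c"}, {"b"}], 2))
\end{verbatim}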
\begin{description} \item[Satisfaction approval voting (SAV)] Each candidate~$c\in C$ receives~$\frac{1}{\abs{v}}$ points from each vote~$v$ approving~$c$, and the SAV score of~$c$ in the election $(C, V)$ is $\sum_{c\in v\in V}\frac{1}{\abs{v}}$. Similar to AV, winning $k$-committees of SAV at $(C, V)$ are those whose members have the highest total SAV score, i.e., $k$-committees~$\w\subseteq C$ with the maximum possible value of $\sum_{\emptyset\neq v\in V}\frac{\abs{v\cap \w}}{\abs{v}}$. \end{description} Using SAV as a multiwinner voting rule was advocated by~\citeas{Bram2014Kilgour}. Interestingly, the application of SAV scores goes beyond the setting of {ABMV}. For instance, SAV scores are related to the Shapley values of players in a ranking game, which is relevant to the settings of ranking wines and allocating profits among museums~\cite{DehezG2018,DBLP:journals/geb/GinsburghZ03,GinsburghZ2012}. \begin{description} \item[Net-satisfaction approval voting (NSAV)] This rule is a variant of SAV capturing the principle that, just as adding a voter's approved candidates to a committee increases her satisfaction, adding her disapproved candidates decreases it. In particular, each candidate~$c\in C$ receives $\frac{1}{\abs{v}}$ points from every vote $v\in V$ approving~$c$ and loses $\frac{1}{m-\abs{v}}$ points for every vote $v\in V$ not approving~$c$, and the NSAV score of~$c$ in $(C, V)$ is defined as $\sum_{c\in v\in V} \frac{1}{\abs{v}}-\sum_{c\not\in v\in V}\frac{1}{m-\abs{v}}$. The satisfaction a vote~$v$ derives from a committee~$\w\subseteq C$ is measured by the net number of points that the candidates in~$\w$ receive from~$v$, and this rule aims to maximize voters' total satisfaction. Precisely, winning $k$-committees of NSAV at $(C, V)$ are $k$-committees $\w\subseteq C$ maximizing $\sum_{v\in V, v\neq\emptyset} \frac{\abs{\w\cap v}}{\abs{v}}-\sum_{v\in V, v\neq C}\frac{\abs{\w\setminus v}}{m-\abs{v}}$. \end{description} NSAV was proposed by~\citeas{Kilgour2014Marshall}. We call an additive rule polynomially computable if, given a vote and a candidate, the score the candidate receives from the vote can be computed in time polynomial in the size of the vote. To the best of our knowledge, all natural rules studied so far in the literature are polynomially computable. \medskip Now we give the definitions of the second group of rules where the score of each committee is combinatorially determined by its members. \begin{description} \item[Approval-based Chamberlin-Courant voting (ABCCV)] A voter is satisfied with a committee if this committee includes at least one of her approved candidates. The score of a committee is the number of voters who are satisfied with the committee, and winning $k$-committees are those with the maximum score. \end{description} ABCCV was first suggested by~\citeas{Thiele1985} and then independently proposed by~\citeas{ChamberlinC1983APSR10.2307/1957270}. \begin{description} \item [Proportional approval voting (PAV)] The score of a committee~$\w\subseteq C$ is $\sum_{v\in V, v\cap \w\neq \emptyset}\left(\sum_{i=1}^{\abs{v\cap \w}}\frac{1}{i}\right)$. Winning $k$-committees are those with the maximum score. \onlyfull{PAV fulfills several proportional properties~\cite{DBLP:journals/scw/AzizBCEFW17}. } \item[Minimax approval voting (MAV)] The Hamming distance between two subsets~$\w\subseteq C$ and~$v\subseteq C$ is $\abs{\w\setminus v}+\abs{v\setminus \w}$.
The score of a committee~$\w$ is the maximum Hamming distance between~$\w$ and the votes, i.e., $\max_{v\in V} (\abs{\w\setminus v}+\abs{v\setminus \w})$. Winning $k$-committees are those having the smallest score. \end{description} PAV was first studied by~\citeas{Thiele1985}, and MAV was proposed by~\citeas{Bramsminimaxapproval2007}. It should be pointed out that calculating a winning $k$-committee with respect to the second group of rules is computationally hard~\cite{DBLP:conf/atal/AzizGGMMW15,LeGrand2004Tereport,DBLP:journals/scw/ProcacciaRZ08}, standing in contrast to the polynomial-time solvability for many additive rules~\cite{DBLP:conf/atal/AzizGGMMW15,DBLP:conf/atal/YangW19}. Now we introduce the class of Thiele rules, which contains AV, ABCCV, and PAV. \begin{description} \item[$\omega$-Thiele] Each Thiele rule is characterized by a function $\omega: \mathbb{N}\rightarrow \mathbb{R}$ such that $\omega(0)=0$ and $\omega(i)\leq \omega(i+1)$ for all nonnegative integers~$i$. The score of a committee $\w\subseteq C$ is defined as $\sum_{v\in V}\omega(\abs{v\cap \w})$. The rule selects $k$-committees with the maximum score.\footnote{Thiele rules are also studied under some other names including weighted PAV rules, generalized approval procedures, etc.~\citep{DBLP:journals/scw/AzizBCEFW17,Kilgour2014Marshall}.} \end{description} Obviously, AV is the $\omega$-Thiele rule where $\omega(i)=i$, ABCCV is the $\omega$-Thiele rule such that $\omega(i)=1$ for all $i>0$, and PAV is the $\omega$-Thiele rule such that $\omega(i)=\sum_{j=1}^i 1/j$ for all $i>0$. Many of our results apply to the subclass of Thiele rules such that $\omega(2)<2\omega(1)$. In particular, ABCCV and PAV belong to this subclass. In the presentation of our algorithms, the following notions are consistently used. Let~$\varphi$ be an {ABMV} rule defined above except~MAV. For a set of candidates~$C$, a vote~$v$ over~$C$, and a committee~$\w\subseteq C$, we use~$\textsf{sc}_{\varphi}(v, \w)$ to denote the~$\varphi$ score of~$\w$ received from~$v$. For a multiset of votes~$V$ over~$C$, we define~$\textsf{sc}_{\varphi}(V, \w)=\sum_{v\in V}\textsf{sc}_{\varphi}(v, \w)$. If~$\w=\{c\}$ is a singleton, we simply write~$c$ instead of~$\{c\}$ in the above notions. Additionally, we drop the subscript~$\varphi$ whenever it is clear from the context which rule~$\varphi$ is discussed. The {\nemph{$k$-winning-threshold}} of~$\varphi$ at an election $E=(C, V)$ is defined as follows. Let~$\rhd$ be a linear order of~$C$ so that for $c, c'\in C$ it holds that $c\rhd c'$ implies $\textsf{sc}_{\varphi}(V, \{c\})\geq \textsf{sc}_{\varphi}(V, \{c'\})$, i.e., candidates are ordered in $\rhd$ with respect to their~$\varphi$ scores received from~$V$, from those with the highest scores to those with the lowest scores. The $k$-winning-threshold of~$\varphi$ at~$E$ is the~$\varphi$ score of the $k$-th candidate in~$\rhd$. Let~$s$ be the $k$-winning-threshold of~$\varphi$ at~$E$, and let~$x$ be the number of candidates from~$C$ whose~$\varphi$ scores in~$E$ are exactly~$s$. Then,~$W^+_{\varphi,E}$ is defined as the set of candidates from~$C$ whose~$\varphi$ scores in~$E$ are at least~$s$ if $x=1$, and is defined as the set of candidates from~$C$ whose~$\varphi$ scores in~$E$ are strictly larger than~$s$ if $x\geq 2$.
Additionally, we define $L_{\varphi,E}\subseteq C$ as the set of all candidates whose~$\varphi$ scores in~$E$ are strictly smaller than~$s$, and define $W_{\varphi,E}=C\setminus (W^+_{\varphi,E}\cup L_{\varphi,E})$ as the set of all the other candidates in~$C$. Obviously, $W_{\varphi,E}=\emptyset$ if and only if $x=1$ and, moreover, if $W_{\varphi,E}\neq\emptyset$ then everyone in~$W_{\varphi,E}$ has~$\varphi$ score exactly~$s$ in~$E$. If it is clear from the context which rule~$\varphi$ or which election~$E$ is discussed, we drop~$\varphi$, or~$E$, or both from these notions. It is easy to see that, for an additive {ABMV} rule~$\varphi$, a $k$-committee $\w\subseteq C$ is a winning $k$-committee of~$\varphi$ at $(C, V)$ if and only if $W^+_{\varphi,E}\subseteq \w\subseteq (W^+_{\varphi,E}\cup W_{\varphi,E})$. It is obvious that for AV, SAV, and NSAV, $W^+_{\varphi,E}$, $W_{\varphi,E}$, and $L_{\varphi,E}$ can be computed in polynomial time. \subsection{Problem Formulations} Now we formulate the manipulation and control problems studied in this paper. Let~$\varphi$ be a multiwinner voting rule. \subsubsection{The Manipulation Problems} \label{subsec-manipulation} \EP {Cardinality-Based Coalition Manipulation for~$\varphi$}{\probb{CBCM}{$\varphi$}} {A set of candidates~$C$, two multisets~$V$ and~$V_{\text{M}}$ of votes over~$C$, an integer~$k\leq \abs{C}$, and a winning $k$-committee $\w\in \varphi(C, V\cup V_{\text{M}}, k)$.} {Is there a multiset~$U$ of~$\abs{V_{\text{M}}}$ votes over~$C$ such that for all $\w'\in \varphi(C, V\cup U, k)$ and all~$v\in V_{\text{M}}$, it holds that~$\abs{v\cap \w'}>\abs{v\cap \w}$?} If we replace $\abs{v\cap \w'}>\abs{v\cap \w}$ in the above definition by $(v\cap \w)\subsetneq (v\cap \w')$, we obtain {\prob{Subset-Based Coalition Manipulation for~$\varphi$}} ({\probb{SBCM}{$\varphi$}}). In the definitions of \probb{CBCM}{$\varphi$} and {\probb{SBCM}{$\varphi$}}, votes in~$V_{\text{M}}$ are called manipulative votes. In particular, each $v\in V_{\text{M}}$ consists of the candidates whom the corresponding voter (manipulator) truthfully approves. The above manipulation problems are relevant to the setting of iterative voting. In this setting, after voters submit their preferences to a central platform, a winning $k$-committee is announced. After this, voters are allowed to change their preferences at will in several rounds. The above manipulation problems model the situation where, in a particular round of the voting process, some voters (manipulators) form a coalition and want to replace the current winning $k$-committee~$\w$ with another, more favorable one by misreporting their preferences. In cases where the tie-breaking scheme is publicly unknown, or when ties are broken randomly, the manipulators may want to ensure that their coordination leads to an improved outcome without taking any risk. This is why in the question we demand that every manipulator prefers all winning $k$-committees of the new election to~$\w$. The problems assume that, except for the manipulators, voters do not change their preferences, and that the manipulators know the submitted preferences of the other voters. This may not always be realistic. However, the main message of our study is that, even when this is the case, for many rules the manipulators are faced with a computationally hard problem to solve. Now we give the notion of stochastic domination introduced by~\citeas{DBLP:conf/ijcai/LacknerS18}, and the respective manipulation problem.
Let~$C$ be a set of candidates, $S\subseteq C$, and let~$\mathcal{A}$ and~$\mathcal{B}$ be two collections of committees of~$C$. We say that~$\mathcal{A}$ {\memph{stochastically dominates}}~$\mathcal{B}$ subject to~$S$ if and only if for every integer~$i$ it holds that \[\frac{\abs{\{\w\in \mathcal{A} \mymid \abs{\w\cap S}\geq i\}}}{\abs{\mathcal{A}}}\geq \frac{\abs{\{\w\in \mathcal{B} \mymid \abs{\w\cap S}\geq i\}}}{\abs{\mathcal{B}}},\] and, moreover, there exists at least one~$i$ for which the above inequality is strict. Based on the above notion, a natural manipulation problem can be formally defined as follows. \EP {SD-Coalition Manipulation for~$\varphi$}{\probb{SDCM}{$\varphi$}} {A set of candidates~$C$, two multisets~$V$ and~$V_{\text{M}}$ of votes over~$C$, and an integer $k\leq \abs{C}$.} {Is there a multiset~$U$ of~$\abs{V_{\text{M}}}$ votes over~$C$ such that $\varphi(C, V\cup U, k)$ stochastically dominates $\varphi(C, V\cup V_{\text{M}}, k)$ subject to every $v\in V_{\text{M}}$?} Notice that in the above manipulation problem we do not have a winning $k$-committee in the input. \onlyfull{ Now we introduce the bribery problems. For an election~$(C, V)$, the current winning $k$-committee~$\w$ of~$(C, V)$, and a subset~$\w'\subseteq C$, we say a vote~$v\in V$ is radically (resp.\ conservatively)~$\w'$-{\it{vulnerable}} at~$(C, V)$ if~$\abs{v\cap \w'}>\abs{v\cap \w}$ (resp.\ $(v\cap \w)\subsetneq (v\cap \w')$). \EP {Radical/Conservative Frugal Bribery}{RCFB/CCFB} {An election $(C, V)$, the current winning $k$-committee~$\w\in \varphi(C, V, k)\subseteq C$ such that~$\abs{w}=k$, a nonempty subset $J\subseteq C$ of at most~$k$ candidates.} {Is there a subset $\w'\subseteq C$ such that $J\subseteq \w'$ and there exists a change of the votes cast by the radically/conservatively~$\w'$-vulnerable voters so that~$\w'$ becomes the unique winning $k$-committee?} \medskip } \subsubsection{The Election Control Problems} Now we extend the definitions of four standard single-winner election control problems to multiwinner voting. These control problems model the scenario where some external agent (e.g., the election chair) aims to make some distinguished candidates the winners by modifying the election. \EP {Constructive Control by Adding Voters for~$\varphi$}{\probb{CCAV}{$\varphi$}} {A set of candidates~$C$, two multisets~$V$ and~$U$ of votes over~$C$, a positive integer~$k\leq \abs{C}$, a nonempty subset~$J\subseteq C$ of at most~$k$ distinguished candidates, and a nonnegative integer~$\ell$.} {Is there~$U'\subseteq U$ such that~$|U'|\leq \ell$ and~$J$ belongs to all winning $k$-committees of~$\varphi$ at $(C, V\cup U', k)$?} In the above definition, votes in~$V$ are called {\memph{registered votes}} and votes in~$U$ are called {\memph{unregistered votes}}. Generally speaking, the above problem consists in determining whether we can add a limited number of unregistered votes into the multiset of registered votes so that all distinguished candidates are contained in all winning $k$-committees with respect to the final multiset of registered votes.
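To make the decision problem concrete, a brute-force Python sketch for {\probb{CCAV}{AV}} on small instances is given below: it enumerates all choices of at most~$\ell$ unregistered votes and, for each, tests whether~$J$ is contained in every AV winning $k$-committee of the resulting election, reusing the hypothetical \texttt{av\_winning\_committees} helper sketched earlier. This exponential-time check only illustrates the problem statement; it is not one of the algorithms developed later in this paper.
\begin{verbatim}
from itertools import combinations

def ccav_av_bruteforce(candidates, registered, unregistered, k, J, budget):
    """Brute-force CCAV for AV: is there a choice of at most `budget`
    unregistered votes whose addition makes J a subset of every AV
    winning k-committee?  Exponential in |unregistered|; illustration
    only.  Assumes av_winning_committees from the earlier sketch."""
    J = set(J)
    for r in range(budget + 1):
        for added in combinations(range(len(unregistered)), r):
            votes = list(registered) + [unregistered[i] for i in added]
            winners = av_winning_committees(candidates, votes, k)
            if all(J <= w for w in winners):
                return True
    return False
\end{verbatim}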
\EP {Constructive Control by Deleting Voters for~$\varphi$}{\probb{CCDV}{$\varphi$}} {A set of candidates~$C$, a multiset of votes~$V$ over~$C$, a positive integer~$k\leq \abs{C}$, a nonempty subset~$J\subseteq C$ of at most~$k$ distinguished candidates, and a nonnegative integer~$\ell$.} {Is there~$V'\subseteq V$ such that~$\abs{V'}\leq \ell$ and~$J$ belongs to all winning $k$-committees of~$\varphi$ at $(C, V\setminus V', k)$?} Generally speaking, {\prob{CCDV}} aims to determine if we can delete a limited number of votes so that all distinguished candidates are contained in all winning $k$-committees of the resulting election. \EP {Constructive Control by Adding Candidates for~$\varphi$}{\probb{CCAC}{$\varphi$}} {Two disjoint subsets~$C$ and~$D$ of candidates, a multiset~$V$ of votes over $C\cup D$, a positive integer $k\leq \abs{C}$, a nonempty subset $J\subseteq C$ of at most~$k$ distinguished candidates, and a nonnegative integer~$\ell$.} {Is there~$D'\subseteq D$ of at most~$\ell$ candidates such that~$J$ belongs to all winning $k$-committees of~$\varphi$ at $(C\cup D', V, k)$?} In the above definitions, we call candidates in~$C$ {\memph{registered candidates}} and call candidates in~$D$ {\memph{unregistered candidates}}. In general, the above problem determines if we can add a limited number of unregistered candidates into the set of registered candidates so that all distinguished candidates are contained in all winning $k$-committees. \EP {Constructive Control by Deleting Candidates for~$\varphi$}{\probb{CCDC}{$\varphi$}} {A set of candidates~$C$, a multiset of votes~$V$ over~$C$, a positive integer~$k\leq \abs{C}$, a nonempty subset~$J\subseteq C$ of at most~$k$ distinguished candidates, and a nonnegative integer~$\ell$.} {Is there a subset~$C'\subseteq C\setminus J$ of at most~$\ell$ candidates such that~$\abs{C\setminus C'}\geq k$ and~$J$ belongs to all winning $k$-committees of $\varphi(C\setminus C', V, k)$?} In the above control problems, the goal of the external agent is to incorporate the distinguished candidates into all winning $k$-committees. This is natural in many situations. For example, when the external agent is unaware of the tie-breaking scheme, or when a randomized tie-breaking scheme is used, the external agent may want to make sure that her favorite candidates become winners without taking any risk. {The above situation also motivates our requirement on~$\w'$ in the manipulation problem\onlyfull{ and bribery problems}.} \onlyfull{Especially, in the bribery problem, the briber wants to first figure out a committee~$\w'$ to determine the $\w'$-vulnerable voters and, moreover, guarantees these voters that once they follow her guidance~$\w'$ will be the winning $k$-committee regardless of tie-breaking schemes.} Notice that {\prob{CCAV}}, {\prob{CCDV}}, {\prob{CCAC}}, and {\prob{CCDC}} with the restriction~$\abs{J}=k=1$ are exactly the extensively studied constructive control (by adding/deleting voters/candidates) problems for single-winner voting (more precisely, they are the unique-winner models of these constructive control problems)~\cite{Bartholdi92howhard,DBLP:journals/jair/FaliszewskiHHR09,handbookofcomsoc2016Cha7FR,DBLP:conf/atal/Yang17}. A multiwinner voting rule is {\it{immune}} to a constructive control type (adding/deleting voters/candidates) if it is impossible to make any $J\subseteq C$, which is not contained in the current winning $k$-committee, be included in all winning $k$-committees, by performing the corresponding operation. 
A multiwinner voting rule is {\it{susceptible}} to a control type if it is not immune to this control type. \subsubsection{Two Important Special Cases} \label{sec-two-speical-cases} Now we discuss two interesting special cases of the problems defined in the previous section. The reasons that we separately define them are as follows. First, we believe that they are of independent interest. Second, several of our hardness results are established for the special cases. Third, several of our polynomial-time algorithms and {{\sf{FPT}}}-algorithms rely on algorithms solving these special cases. \EPP{{\probb{$J$-CC}{$\varphi$}}} {An election $(C, V)$, an integer~$k\leq \abs{C}$, and a subset~$J$ of at most~$k$ candidates.} {Is~$J$ contained in all winning $k$-committees of~$\varphi$ at $(C, V)$?} We denote by {\probb{$p$-CC}{$\varphi$}} the special case of {\probb{$J$-CC}{$\varphi$}} where~$J$ is a singleton. Obviously, {\probb{$p$-CC}{$\varphi$}} is a special case of {\probb{CCAV}{$\varphi$}}, {\probb{CCDV}{$\varphi$}}, {\probb{CCAC}{$\varphi$}}, and {\probb{CCDC}{$\varphi$}}, where $\ell=0$. For a {Yes-instance} of the problems defined in Sections~\ref{subsec-manipulation}--\ref{sec-two-speical-cases}, a subset that satisfies all conditions given in the corresponding {\textbf{Question}} is referred to as a {{feasible solution}} of the instance. \onlyfull{{\bf{Parameterized complexity.}} A parameterized problem instance consists of a main part~$I$ and a parameter which is often (but not necessarily) an integer~$k$. A parameterized problem is fixed-parameter tractable ({\sf{FPT}}) if every instance~$(I, k)$ of the problem can be solved in~$f(k)\cdot \abs{I}^{O(1)}$ time, where~$f(k)$ is a computable function in~$k$. Fixed-parameter intractable problems are categorized into a hierarchy of classes. The most widely studied class is arguably that of the {{\sf{W[1]-hard}}} problems. Unless the parameterized complexity hierarchy collapses at some level, {{\sf{W[1]-hard}}} problems do not admit {{\sf{FPT}}}-algorithms. We assume the reader is familiar with the basics of parameterized complexity such as fixed-parameter tractability ({{\sf{FPT}}}). We refer\onlyfull{ the reader} to~\cite{DBLP:books/sp/CyganFKLMPPS15} for a detailed discussion of parameterized complexity.} \subsection{Useful Hardness Problems} Our hardness results are based on reductions from the following problems. Let~$G$ be a graph, and let~$\vset$ be the vertex set of~$G$. A {\memph{vertex cover}} of~$G$ is a subset of vertices whose removal results in a graph without any edge. A subset~$S\subseteq \vset$ is an {\memph{independent set}} in~$G$ if $\vset\setminus S$ is a vertex cover of~$G$. A {\memph{clique}} in~$G$ is a subset of pairwise adjacent vertices. \EPP {Vertex Cover} {A graph~$G=(N, A)$ where~$N$ is the set of vertices and~$A$ is the set of edges of~$G$, and an integer~$\kappa$.} {Does~$G$ have a vertex cover of size~$\kappa$?} It is well-known that {\prob{Vertex Cover}} remains {{\np}-{\sf{hard}}} even when restricted to $3$-regular graphs~\cite{DBLP:journals/tcs/GareyJS76,DBLP:journals/jct/Mohar01}. Recall that a {\memph{$3$-regular graph}} is a graph where every vertex has degree~$3$.
\EP {Restricted Exact Cover by Three Sets}{RX3C} {A universe $A=\{a_1,a_2,\dots,a_{3\kappa}\}$ and a multiset $\mathcal{H}=\{H_1,H_2,\dots,H_{3\kappa}\}$ of $3$-subsets of~$A$ such that every~$a\in A$ occurs in exactly three elements of~$\mathcal{H}$.} {Is there~$\mathcal{H}'\subseteq \mathcal{H}$ such that~$\abs{\mathcal{H}'}=\kappa$ and every~$a\in A$ occurs in exactly one element of~$\mathcal{H}'$?} In an instance $(A, \mathcal{H})$ of {\prob{RX3C}}, we say that an $H\in \mathcal{H}$ covers $a\in A$ if $a\in H$, and say that an $\mathcal{H}'\subseteq \mathcal{H}$ covers some $A'\subseteq A$ if every $a\in A'$ is covered by at least one element of~$\mathcal{H}'$. Therefore, the problem determines if~$\mathcal{H}$ contains an~$\mathcal{H}'$ of cardinality~$\kappa$ which covers~$A$. It is known that {\prob{RX3C}} is {{\np}-{\sf{hard}}}~\cite{DBLP:journals/tcs/Gonzalez85}. \EPP {Clique} {A graph $G=(\vset, \eset)$ and an integer~$\kappa$.} {Does~$G$ have a clique of size~$\kappa$?} \EPP {Independent Set} {A graph $G=(\vset, \eset)$ and an integer~$\kappa$.} {Does~$G$ have an independent set of size~$\kappa$?} It is known that {\prob{Clique}} and {\prob{Independent Set}} are {{\np}-{\sf{hard}}} and, moreover, with respect to~$\kappa$, they are {{\sf{W[1]-hard}}}, and this holds even when restricted to regular graphs~\citep{DBLP:journals/cj/Cai08,DBLP:conf/iwpec/Marx04,DBLP:conf/cats/MathiesonS08}. \subsection{Remarks} It should be pointed out that our hardness results also hold for corresponding multiwinner voting rules which always select exactly one winning $k$-committee by utilizing a specific tie-breaking scheme (see, e.g.,~\cite{DBLP:journals/aamas/BredereckKN21} for descriptions of some tie-breaking schemes). In fact, our hardness reductions are established carefully to avoid ties. \onlyfull{For example, the prevalent lexicographic tie-breaking scheme selects the lexicographically smallest tied committees with respect to a fixed order over candidates. Other widely studied tie-breaking schemes include the one against the strategic agent and the one in favor of the strategic agent. When several committees are tied, the former one select one including the minimum number of distinguished candidates and the latter one select.}% Moreover, our {{\sf{P}}}- and {{\sf{FPT}}}-algorithms for the additive rules can be adapted for these variants when ties are broken lexicographically. \section{Manipulation\onlyfull{ and Bribery}} \label{sec-manipulation} In this section, we \onlyfull{study the manipulation and bribery problems formulated in Preliminaries. We first }consider the manipulation problems and show that in general these problems are {{\np}-{\sf{hard}}}. \begin{theorem} \label{thm-ccm-av-np-hard} {\probb{CBCM}{AV}}, {\probb{SBCM}{AV}}, and {\probb{SDCM}{AV}} are {\memph{{{\np}-{\sf{hard}}}}}. Moreover, these hold even if there are four nonmanipulative votes. \end{theorem} \begin{proof} We give a reduction from {\prob{Vertex Cover}} that applies to all of {\probb{CBCM}{AV}}, {\probb{SBCM}{AV}}, and {\probb{SDCM}{AV}}. Let $(G=(\vset, \eset), \kappa)$ be a {\prob{Vertex Cover}} instance where~$G$ is a $3$-regular graph. Let~$n=\abs{\vset}$ and~$m=\abs{\eset}$. Without loss of generality, we assume that~$m>4$ (otherwise the instance can be solved in polynomial time). For each vertex~$u\in \vset$, we create one candidate denoted still by~$u$ for simplicity. Additionally, we create~$\kappa$ candidates~$c_1$,~$c_2$,~$\dots$,~$c_{\kappa}$. 
Let~$C$ be the set of the above~$n+\kappa$ created candidates. We create the following votes. First, we create four nonmanipulative votes in~$V$ each of which approves exactly the~$\kappa$ candidates~$c_1$, $\dots$,~$c_{\kappa}$. In addition, for each edge~$\edge{b}{b'}\in \eset$, we create one manipulative vote~$v(b,b')=\{b,b'\}$ in~$V_{\text{M}}$. Finally, we set~$k=\kappa$. Let $\w=\{c_1,\dots,c_{\kappa}\}$. Then, $(C, V, V_{\text{M}}, \w)$ is the instance of both {\probb{CBCM}{AV}} and {\probb{SBCM}{AV}}, and $(C, V, V_{\text{M}})$ is the instance of {\probb{SDCM}{AV}}. Let $E=(C, V\cup V_{\text{M}})$. As~$G$ is $3$-regular, every candidate in~$\vset$ has AV score~$3$ in~$E$. As every candidate in~$\w$ has AV score~$4$ in~$E$,~$\w$ is the unique winning $k$-committee of AV at~$E$. Now we prove the correctness of the reduction. Notice that~$\w$ does not include any candidate approved by any manipulator. As a consequence, the instance of {\probb{CBCM}{AV}} and {\probb{SBCM}{AV}} (resp.\ {\probb{SDCM}{AV}}) is a {Yes-instance} if and only if the manipulators can coordinate their votes so that every (resp.\ at least one) AV winning $k$-committee of the resulting election contains at least one approved candidate of every manipulative vote. $(\Rightarrow)$ Suppose that~$G$ has a vertex cover~$S\subseteq \vset$ of size~$\kappa$. If all manipulators turn to approve exactly the candidates corresponding to~$S$, the AV score of each candidate in~$S$ increases from~$3$ to~$m>4$, and hence~$S$ forms the unique winning $k$-committee. As~$S$ is a vertex cover of~$G$, by the construction of the votes, for every manipulative vote $v(b,b')=\{b,b'\}$ where $\edge{b}{b'}\in \eset$, at least one of~$b$ and~$b'$ is included in~$S$. $(\Leftarrow)$ If there exists no vertex cover of size~$\kappa$ in~$G$, then no matter which~$\kappa'\leq \kappa$ candidates from~$\vset$ are in a winning $k$-committee after the manipulators change their votes, there is at least one manipulative vote~$v(b, b')$, $\edge{b}{b'}\in \eset$, such that neither~$b$ nor~$b'$ is contained in this winning~$k$-committee. \end{proof} Similar to the reduction in the above proof of Theorem~\ref{thm-ccm-av-np-hard}, we can show the {{\np}-{\sf{hardness}}} of {CBCM}, {SBCM}, and {SDCM} for SAV. Clearly, after all manipulators change to approve candidates corresponding to a vertex cover~$S$ of size~$\kappa$, the SAV score of each candidate in~$S$ increases from~$3/2$ to~$m/\kappa$. Given this, to prove the {{\np}-{\sf{hardness}}} of {CBCM}, {SBCM}, and {SDCM} for SAV, we only need to create~$m-5$ further nonmanipulative votes approving~$c_1$,~$\dots$,~$c_{\kappa}$ so that in the original election each~$c_i$ has SAV score~$(m-1)/\kappa$.\footnote{The correctness of the reduction also relies on the assumption $\kappa<\frac{2}{3}\cdot (m-1)$ which does not change the {{\np}-{\sf{hardness}}} of {\prob{Vertex Cover}} restricted to $3$-regular graphs~\cite{DBLP:journals/tcs/GareyJS76,DBLP:journals/jct/Mohar01}.} \onlyfull{One can check that the reduction also applies to NSAV if we assume that~$\kappa$ is substantially smaller than~$m$, say~$\kappa^5\leq m$, which does not change the complexity of the {\prob{Vertex Cover}} problem.} It is easy to see that the same reduction works for NSAV too. We arrive at the following theorem. \begin{theorem} \label{thm-manipulation-np-hard-many-rules} For $\varphi\in \{\memph{\text{SAV}}, \memph{\text{NSAV}}\}$, {\probb{CBCM}{$\varphi$}}, {\probb{SBCM}{$\varphi$}}, and {\probb{SDCM}{$\varphi$}} are {\memph{{\np}-{\sf{hard}}}}.
\end{theorem} By establishing a different reduction, we can show similar hardness results for MAV. \begin{theorem} \label{thm-manipulation-mav-np-hard} {\probb{CBCM}{MAV}}, {\probb{SBCM}{MAV}}, and {\probb{SDCM}{MAV}} are {\memph{{\np}-{\sf{hard}}}}. Moreover, these hold even if there is only one nonmanipulative vote. \end{theorem} \begin{proof} Our proof is based on a reduction from {\prob{Vertex Cover}} which applies to all of {\probb{CBCM}{MAV}}, {\probb{SBCM}{MAV}}, and {\probb{SDCM}{MAV}}. Let~$(G=(N, A), \kappa)$ be an instance of {\prob{Vertex Cover}} where~$G$ is a~$3$-regular graph. Without loss of generality, we assume that $|A|>\kappa$. The candidates are as follows. First, for each~$a\in N$, we create one candidate denoted still by~$a$ for simplicity. Then, we create a set~$X$ of~$2\kappa+1$ candidates. Moreover, for each edge~$\edge{a}{b}\in A$, we create a set~$Y(\edge{a}{b})$ of~$3\kappa+1$ candidates. Let $C=\bigcup_{\edge{a}{b}\in A}Y(\edge{a}{b})\cup N\cup X$. Now we construct the votes. First, for each edge~$\edge{a}{b}\in A$, we create one vote~$v(\edge{a}{b})=\{a,b\}$. These votes are the manipulative votes, i.e., we let~$V_{\text{M}}=\{v(\edge{a}{b}) \mid \edge{a}{b}\in A\}$. In addition, we create one nonmanipulative vote approving exactly the~$2\kappa+1$ candidates in~$X$. Let~$k=\kappa$, and let~$\w$ be an arbitrary $k$-subset of~$X$. The instances of {\probb{CBCM}{MAV}} and {\probb{SBCM}{MAV}} are both $(C, V, V_{\text{M}}, \w)$, and the instance of {\probb{SDCM}{MAV}} is $(C, V, V_{\text{M}})$. The constructions of the instances clearly can be done in polynomial time. One can check that the winning $k$-committees of MAV at $(C, V\cup V_{\text{M}})$ are exactly all $k$-subsets of~$X$. In fact, any such $k$-committee is of Hamming distance $k+2$ from every manipulative vote, and is of Hamming distance $k+1$ from the nonmanipulative vote. So, the MAV score of such a committee is $k+2$. In addition, every $k$-committee which contains at most $k-1$ candidates from~$X$ is of Hamming distance at least $1+(k+2)=k+3$ from the nonmanipulative vote, and hence has MAV score at least $k+3$. It remains to show the correctness of the reduction. Note that as none of the winning $k$-committees of~$(C, V\cup V_{\text{M}})$ intersects any manipulative vote, the instance of {\probb{CBCM}{MAV}} and {\probb{SBCM}{MAV}} (resp.\ {\probb{SDCM}{MAV}}) is a {Yes-instance} if and only if the manipulators can coordinate their votes so that every (resp.\ at least one) winning $k$-committee of the resulting election intersects every manipulative vote in~$V_{\text{M}}$. $(\Rightarrow)$ Assume that there is a vertex cover~$S\subseteq N$ of~$\kappa$ vertices in~$G$. Then, we change each manipulative vote~$v(\edge{a}{b})$ so that after the change it approves all candidates in~$S\cup Y(\edge{a}{b})$. For the sake of clarity, we use $v'(\edge{a}{b})$ to denote $v(\edge{a}{b})$ after the change. Let~$E$ denote the election after these changes. One can check that~$S$ has MAV score~$3\kappa+1$ in~$E$. We complete the proof by showing that~$S$ is the unique winning $k$-committee of~$E$, i.e., any other~$k$-committee has MAV score at least~$3\kappa+2$ in~$E$. Consider first a $k$-committee which contains some candidate in $\bigcup_{\edge{a}{b}\in A}Y(\edge{a}{b})$. Due to the assumption $|A|>k$, there exists an edge $\edge{a'}{b'}\in A$ such that no candidate of $Y(\edge{a'}{b'})$ is in this $k$-committee.
It is easy to check that the Hamming distance between this $k$-committee and the vote $v'(\edge{a'}{b'})$ is at least $1+1+(3k+1)=3k+3$. Next, consider $k$-committees which do not contain any candidate from $\bigcup_{\edge{a}{b}\in A}Y(\edge{a}{b})$ but contain some candidate in~$X$. The Hamming distance between such a $k$-committee and every manipulative vote is then at least $(3k+1)+1+1=3k+3$. Finally, every $k$-committee $\w'\subseteq N$ with $\w'\neq S$ satisfies $\abs{\w'\setminus S}\geq 1$ and $\abs{S\setminus \w'}\geq 1$, and hence its Hamming distance from every changed manipulative vote is at least $1+1+(3k+1)=3k+3$ as well. It follows that~$S$ is the unique winning $k$-committee of~$E$. As~$S$ intersects each of the truthful manipulative votes, we can conclude that in this case the constructed instance of {CBCM}/{SBCM}/{SDCM} is a {Yes-instance}. $(\Leftarrow)$ Similar to the argument in the proof of Theorem~\ref{thm-ccm-av-np-hard}, if there exists no vertex cover of size~$\kappa$ in the graph~$G$, then for any $k$-committee $\w'\subseteq C$, at least one of the manipulative votes is disjoint from~$\w'$. So, the constructed {CBCM}/{SBCM}/{SDCM} instance is a {No-instance}. \end{proof} In the proofs of Theorems~\ref{thm-ccm-av-np-hard}--\ref{thm-manipulation-mav-np-hard}, manipulators considerably outnumber nonmanipulators. This stands in contrast to the polynomial-time solvability of the canonical coalition manipulation problem (see~\cite{handbookofcomsoc2016Cha7FR} for the definitions) for many ranking-based single-winner voting rules, where, when there are more manipulators than nonmanipulators, the manipulators can always make the distinguished candidate the winner by ranking the distinguished candidate at the top and ranking the other candidates greedily. On the other hand, the canonical coalition manipulation problem for many single-winner voting rules is already {{\np}-{\sf{hard}}} even when there are only one or two manipulators~\cite{BARTHOLDI89,DBLP:journals/ai/DaviesKNWX14}. However, we show that this is not always the case for our problems. In particular, we show that {CBCM} and {SBCM} for AV are polynomial-time solvable when the number of manipulators is a constant. \begin{theorem} \label{thm-maniuplation-polynomial-time-solvable-constant-number-manipulators} {\probb{CBCM}{AV}} and {\probb{SBCM}{AV}} are polynomial-time solvable if there are a constant number of manipulators. \end{theorem} \begin{proof} We consider first {\probb{CBCM}{AV}}. Let $I=(C, V, V_{\text{M}}, \w)$ be an instance of {\probb{CBCM}{AV}}, where $\w\subseteq C$ is a winning $k$-committee of AV at $(C, V)$, and~$V_{\text{M}}$ is the multiset of manipulative votes. Let $m=\abs{C}$ denote the number of candidates, let $k=\abs{\w}$ be the cardinality of~$\w$, and let~$t=\abs{V_{\text{M}}}$ be the number of manipulators, which is a constant. We derive a polynomial-time algorithm by splitting the instance~$I$ into polynomially many subinstances, so that the original instance~$I$ is a {Yes-instance} if and only if at least one of the subinstances is a {Yes-instance}. Before presenting the algorithm, we first study a property of the solution space. \begin{claimm} \label{claim-b} If~$I$ is a {Yes-instance}, then it has a feasible solution so that all manipulators turn to approve the same candidates, which are all from~$C(V_{\memph{\text{M}}})$, the union of the sets of candidates truthfully approved by the manipulators, i.e., there exists a subset $S\subseteq C(V_{\memph{\text{M}}})$ so that every $v\in V_{\memph{\text{M}}}$ prefers every AV winning $k$-committee of $(C, V\cup V')$ to~$\w$, where~$V'$ is the multiset of~$t$ votes each of which approves exactly the candidates in~$S$.
\end{claimm}% \smallskip {\noindent{\textit{Proof of Claim~\ref{claim-b}.}}} Assume that~$I$ is a {Yes-instance}, and let~$V'$ be a multiset of~$t$ votes over~$C$, one-to-one corresponding to the votes in~$V_{\text{M}}$, so that every $v\in V_{\text{M}}$ prefers every AV winning $k$-committee of $E=(C, V\cup V')$ to~$\w$. For every $v\in V_{\memph{\text{M}}}$, let~$v'$ denote the vote in~$V'$ corresponding to~$v$. If all votes in~$V'$ approve the same candidates and they are all from~$C(V_{\text{M}})$, we are done. Otherwise, at least one of the two cases described below occurs. \begin{description} \item[Case~1.] $\exists v'\in V'$ so that~$v'$ approves at least one candidate $c\in C\setminus C(V_{\text{M}})$. Let $\tilde{v}=v'\setminus \{c\}$, and let~$E'$ be the election obtained from~$E$ by replacing~$v'$ with~$\tilde{v}$. Now we prove that every $v\in V_{\text{M}}$ prefers every AV winning $k$-committee of~$E'$ to~$\w$. Let~$\w'$ be a winning $k$-committee of~$E'$. The proof proceeds by distinguishing the following cases. \begin{itemize} \item $c\in W^+_{E}$. In this case, if $c\in W^+_{E'}$, the AV winning $k$-committees of~$E$ and those of~$E'$ are the same; we are done. If $c\in W_{E'}\cup L_{E'}$, then $W^+_{E'}\subseteq W^+_{E}$, $W^+_{E'}\subseteq \w'$, and $\w'\setminus W^+_{E'}\subseteq W_{E'}$. \begin{itemize} \item If $W_{E}=\emptyset$ (equivalently, $\abs{W^+_{E}}=k$), then~$W_{E'}$ contains~$c$ and a subset $B\subseteq L_{E}$. Then, if $c\in \w'$,~$\w'$ is also an AV winning $k$-committee of~$E$, and hence every $v\in V_{\text{M}}$ prefers $\w'$ to $\w$. Otherwise, let~$\{c'\}=\w'\setminus W^+_{E'}$, and let $\w''=(\w'\setminus \{c'\})\cup \{c\}$. Then, $\w''$ is the AV winning $k$-committee of~$E$ and hence is more preferred to~$\w$ by every $v\in V_{\text{M}}$. That is, for every $v\in V_{\text{M}}$ it holds that $\abs{v\cap \w''}>\abs{v\cap \w}$. As $c\not\in C(V_{\text{M}})$, it follows that $\abs{v\cap \w'}\geq \abs{v\cap \w''}>\abs{v\cap \w}$. (The argument applies to subset-strategyproofness too.) \item Otherwise, $\abs{W_E}\geq 2$ and, moreover, $W_{E'}=W_E\cup \{c\}$. If $c\in \w'$, $\w'$ is also an AV winning $k$-committee of~$E$, and we are done. Otherwise, let $c'$ be an arbitrary candidate in $\w'\cap W_E$. Then, by defining $\w''=(\w'\setminus \{c'\})\cup \{c\}$ and utilizing the above analysis, we can show that every $v\in V_{\text{M}}$ prefers~$\w'$ to~$\w$. \end{itemize} \item $c\in W_E$. As $W_E\neq \emptyset$, it holds that $\abs{W_E}\geq 2$ and $\abs{W^+_{E}\cup W_E}>k$. It then follows that $c\in L_{E'}$ and, more importantly, $(W^+_{E'}\cup W_{E'})\subseteq (W^+_{E}\cup W_{E})$, which implies that the AV winning $k$-committees of~$E'$ form a subcollection of the AV winning $k$-committees of~$E$; we are done. \item $c\in L_E$. In this case, $(W^+_{E'}\cup W_{E'})= (W^+_{E}\cup W_{E})$ holds too, and by the above analysis, we know that every $v\in V_{\text{M}}$ prefers every AV winning $k$-committee of~$E'$ to~$\w$. \end{itemize} \item[Case~2.] $v'\subseteq C(V_{\text{M}})$ for all $v'\in V'$ but the votes in~$V'$ do not approve exactly the same candidates. Recall that~$C(V')$ denotes the set of candidates approved by at least one vote in~$V'$. In the following, we compute a subset~$S$ of candidates from $C(V_{\text{M}})$ so that if all manipulators turn to approve exactly the candidates in~$S$, every manipulator prefers every winning $k$-committee of the resulting election to~$\w$. First, let $S=C(V')\setminus L_{E}$. We consider two cases.
\begin{itemize} \item $\abs{S}\leq k-\abs{W^+_{E}}$. In this case, we let~$E'$ be the election obtained from~$E$ by replacing every $v'\in V'$ by a vote approving exactly the candidates in~$S$. Then, we have that $W^+_{E'}\subseteq (W^+_{E}\cup S) \subseteq (W^+_E\cup W_E)$, and $W_{E'}\subseteq W_{E}$. So, the AV winning $k$-committees of~$E'$ form a subcollection of the AV winning $k$-committees of~$E$; we are done. \item $\abs{S}> k-\abs{W^+_{E}}$. In this case, let~$S'$ be an arbitrary subset of~$k-\abs{W^+_{E}}$ candidates in $C(V')\cap W_E$, and reset $S:=(S\cap W^+_{E})\cup S'$. Then, after all manipulators turn to approve exactly the candidates in~$S$, the AV winning $k$-committees of the resulting election form a subcollection of those of~$E$; we are done. \end{itemize} \end{description} This completes the proof of Claim~\ref{claim-b}. \medskip Now we start to describe the algorithm. Recall that for a subset~$U\subseteq V$ of votes,~$C_V^{{\star}}(U)$ is the set of candidates in~$C$ that are approved by exactly the votes in~$U$. Clearly, for distinct $U, U'\subseteq V$, it holds that $C_V^{{\star}}(U)\cap C_V^{{\star}}(U')=\emptyset$. In the remainder of the proof, for $S\subseteq V_{\text{M}}$, we use~$C^{\star}(S)$ to denote~$C^{\star}_{V_{\text{M}}}(S)$, for notational brevity. For each nonempty~$S\subseteq V_{\text{M}}$, we guess a nonnegative integer~$x_{S}\leq \abs{C^{\star}(S)}$ which indicates the number of candidates from~$C^{\star}(S)$ that are approved by all manipulators in the final election. Therefore, we guess in total at most~$2^t$ such integers, and there are at most $\prod_{S\subseteq V_{\text{M}}}(1+\abs{C^{\star}(S)})\leq (m+1)^{2^t}$ different combinations of the guesses. In addition, in light of Claim~\ref{claim-b}, we guess the number~$k'$ of candidates approved by all manipulators in the final election. In effect, these guesses split the original instance~$I$ into polynomially many ($\bigo{m\cdot (m+1)^{2^t}}$) subinstances each of which takes as input~$I$, a positive integer~$k'\leq \abs{C(V_{\text{M}})}$, and a nonnegative integer $x_S\leq \abs{C^{\star}(S)}$ for every nonempty $S\subseteq V_{\text{M}}$, and asks if there is a $k'$-subset $\w'\subseteq C(V_{\text{M}})$ so that the following two conditions hold: \begin{enumerate} \item[(1)] for every nonempty $S\subseteq V_{\text{M}}$,~$\w'$ includes exactly~$x_{S}$ candidates from~$C^{\star}(S)$, and \item[(2)] every manipulator prefers every winning $k$-committee of the election after all manipulators turn to approve exactly the candidates in~$\w'$ to the input winning $k$-committee~$\w$. \end{enumerate} Now we show how to solve a subinstance associated with~$k'$ and $\{x_{S} \mymid S\subseteq V_{\text{M}}\}$ as given above. First, if $\sum_{\emptyset\ne S\subseteq V_{\text{M}}} x_S \neq k'$, by Condition~(1), we conclude that the subinstance is a {No-instance}. Otherwise, we need to determine which~$x_{S}$ candidates in each~$C^{{\star}}(S)$ with $\emptyset\neq S\subseteq V_{\text{M}}$ and~$x_{S}>0$ all manipulators should approve. To this end, we first introduce some notation and establish a claim. For each $S\subseteq V_{\text{M}}$, let~$\rhd_{S}$ be a linear order over~$C^{\star}(S)$ so that $c \rhd_S c'$ if $\textsf{sc}(V, \{c\})\geq \textsf{sc}(V, \{c'\})$. We use $\rhd_S[i]$ to denote the $i$-th candidate in~$C^{\star}(S)$ with respect to~$\rhd_S$. A block in~$\rhd_{S}$ is a set of candidates in~$C^{\star}(S)$ that are consecutive in the order~$\rhd_S$. For two positive integers $i,j$ so that $i\leq j\leq \abs{C^{\star}(S)}$, let $\rhd_S[i,j]=\{\rhd_S[x] \vert i\leq x\leq j\}$.
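For illustration of this notation (the candidates $c_1,\dots,c_6$ below are hypothetical and serve only as an example), suppose that $C^{\star}(S)=\{c_1,c_2,\dots,c_6\}$ and $c_1\rhd_S c_2\rhd_S \cdots \rhd_S c_6$. Then $\rhd_S[2,4]=\{c_2,c_3,c_4\}$ is a block of~$\rhd_S$, a vote approving exactly $\{c_1,c_2,c_5\}$ among the candidates in $C^{\star}(S)$ induces the two blocks $\rhd_S[1,2]$ and $\rhd_S[5,5]$, and a vote approving exactly $\{c_1,c_3,c_5\}$ induces three blocks.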
\medskip The following observation is easy to see. \begin{observation} \label{obs-a} Let $X, Y\subseteq C$ be two $k$-committees of~$C$ so that $\abs{X\cap C^{\star}(S)}\geq \abs{Y\cap C^{\star}(S)}$ holds for every $S\subseteq V_{\text{M}}$. Then, for every $v\in V_{\text{M}}$, if~$v$ prefers~$Y$ to~$\w$, then~$v$ also prefers~$X$ to~$\w$. \end{observation} \begin{claimm} \label{claim-c} If~$I$ is a {Yes-instance}, then it has a feasible solution~$V'$ such that for every $v'\in V'$ and every $S\subseteq V_{\text{M}}$, the vote~$v'$ induces at most two blocks of~$\rhd_S$, i.e., $v'\cap C^{\star}(S)$ is the union of at most two blocks of~$\rhd_S$. \end{claimm} {\noindent{\it{Proof of Claim~\ref{claim-c}}}.} Let~$V'$ be a feasible solution of~$I$. By Claim~\ref{claim-b}, we may assume that all votes in~$V'$ approve exactly the same candidates. If every vote in~$V'$ induces at most two blocks of every~$\rhd_S$, we are done. Otherwise, let~$v'$ be a vote in~$V'$ which induces at least three blocks of some~$\rhd_S$. Let $B_1$, $B_2$, $\dots$, $B_z$ be the blocks induced by~$v'$ so that $B_1\rhd_S B_2 \rhd_S\cdots \rhd_S B_z$, where $z\geq 3$ and $B_i \rhd_S B_j$ means $c\rhd_S c'$ for all $c\in B_i$ and all $c'\in B_j$. Let $B=\bigcup_{i=1}^z B_i$. Without loss of generality, let $B_i=\rhd_S[i_{\text{L}}, i_\text{R}]$ where $1\leq i_{\text{L}}\leq i_{\text{R}}\leq \abs{C^{\star}(S)}$. Let $E'=(C, V\cup V')$. Obviously, it holds that $\textsf{sc}(V\cup V', \{c\})\geq \textsf{sc}(V\cup V', \{c'\})$ for any $c, c'\in B$ so that $c\rhd_S c'$. We may assume that $B\cap L_{E'}=\emptyset$, since otherwise we remove from all $v'\in V'$ the candidates in~$L_{E'}$, and after the removals the AV winning $k$-committees remain the same, and hence every vote in~$V_{\text{M}}$ still prefers every AV winning $k$-committee of the resulting election to~$\w$. Our proof proceeds by distinguishing the following cases. \begin{description} \item[Case~1.] $B\subseteq W_{E'}$ or $B\subseteq W^+_{E'}$. In this case, after replacing the candidates from~$B\setminus B_1$ by the $\abs{B\setminus B_1}$ consecutive candidates immediately after~$\rhd_S[1_{\text{R}}]$ in every vote of~$V'$, for every $S'\subseteq V_{\text{M}}$ the number of candidates in~$C^{\star}(S')$ contained in every~AV winning $k$-committee of the resulting election equals that contained in every AV winning $k$-committee of the election before the replacement. Then, by Observation~\ref{obs-a}, after the replacement~$V'$ remains a feasible solution of~$I$. \item[Case~2.] $B\cap W^+_{E'}\neq \emptyset$ and $B\cap W_{E'}\neq \emptyset$. In this case, let~$c$ be the right-most candidate in~$B$ with respect to~$\rhd_S$ so that $c\in W^+_{E'}$, and let~$c'$ be the left-most candidate in~$B$ with respect to~$\rhd_S$ that is contained in~$W_{E'}$. The following observations are clear: \begin{itemize} \item $c$ has a larger~AV score than~$c'$ in $(C, V)$, and hence $c\rhd_S c'$; and \item there are no other candidates from~$B$ between~$c$ and~$c'$ with respect to~$\rhd_S$. \end{itemize} Let~$x$ be the number of candidates in~$B$ before~$c$ in~$\rhd_S$, and let~$y$ be the number of candidates in~$B$ after~$c'$ in~$\rhd_S$. Then, in every vote in~$V'$ we replace all candidates in~$B$ before~$c$ by the~$x$ consecutive candidates immediately before~$c$, and replace all candidates in~$B$ after~$c'$ by the~$y$ consecutive candidates immediately after~$c'$. By Observation~\ref{obs-a}, the new~$V'$ remains a feasible solution of~$I$.
\end{description} By the analysis in the above two cases, if in a feasible solution of~$I$ there are votes which induce more than two blocks of~$\rhd_S$, we can transform it into another feasible solution of~$I$ where every vote induces at most two blocks of~$\rhd_S$. This completes the proof of Claim~\ref{claim-c}. \medskip Armed with Claim~\ref{claim-c}, for each $S\subseteq V_{\text{M}}$, we guess whether the manipulators' common new vote induces one or two blocks of~$\rhd_S$, and guess the starting and ending points of these blocks. The number of combinations of the guesses is bounded from above by $(m^4)^{2^t}$, which is polynomial in~$m$ because~$t$ is a constant. Each fixed combination of the guesses corresponds to a multiset~$V'$ of~$t$ votes approving the same candidates. Given such a~$V'$ and $E'=(C, V\cup V')$, we compute $W^{+}_{E'}$, $W_{E'}$, and $L_{E'}$. We conclude that the given instance~$I$ is a {Yes-instance} if and only if there exists at least one such~$V'$ so that every $v\in V_{\text{M}}$ prefers every AV winning $k$-committee of~$E'$ to~$\w$, which can be checked by verifying for every $v\in V_{\text{M}}$ whether the following inequality holds: \[\abs{W^{+}_{E'}\cap v}+\max\{0, k+\abs{W_{E'}\cap v}-\abs{W^+_{E'}\cup W_{E'}}\}>\abs{v\cap \w}.\] The algorithm for {SBCM} is analogous. We only outline the differences. First, as all approved candidates of a manipulator which are in the original winning~$k$-committee~$\w$ are demanded to be in the final winning~$k$-committee, we may first let all manipulators approve all candidates in $\w\cap C(V_{\text{M}})$. Second, we calculate which other candidates from~$C(V_{\text{M}})\setminus \w$ should be approved by the manipulators by guessing an integer $k'\leq k-\abs{\w\cap C(V_{\text{M}})}$, in a way similar to the above algorithm. Third, in this case a vote $v\in V_{\text{M}}$ prefers every AV winning $k$-committee of~$E'$ to~$\w$ if and only if either (1) $\abs{W_{E'}}=1$ and $L_{E'}\cap v\cap \w=\emptyset$, or (2) $\abs{W_{E'}}>1$ and $(W_{E'}\cup L_{E'})\cap (v\cap \w)=\emptyset$. \end{proof} For SAV and NSAV, we also have polynomial-time algorithms. \begin{theorem} \label{thm-maniuplation-sav-nsav-polynomial-time-solvable-constant-number-manipulators} For $\varphi\in \{\memph{\text{SAV}}, \memph{\text{NSAV}}\}$, {\probb{CBCM}{$\varphi$}} and {\probb{SBCM}{$\varphi$}} are polynomial-time solvable if there are a constant number of manipulators. \end{theorem} We defer the proof of Theorem~\ref{thm-maniuplation-sav-nsav-polynomial-time-solvable-constant-number-manipulators} to the Appendix. The algorithms for SAV and NSAV are similar in principle to the one for AV but with a much larger number of subinstances to solve. The reason is that, unlike AV, for SAV and NSAV it is not always optimal for the manipulators to approve the same candidates, as shown in Example~\ref{ex-1} below. For this reason, instead of guessing only one common integer~$x_S$ for all manipulators, we need to guess one such integer for each manipulator separately. Nevertheless, as long as the number of manipulators is a constant, we still have only polynomially many combinations of guesses. \begin{example} \label{ex-1} Consider an election~$E$ with nine candidates, seven nonmanipulative votes, and three manipulative votes as shown in Figure~\ref{fig-sav-nsav-manipulation}.
Obviously, the SAV score of each of $x$, $y$, and $z$ is $\frac{7}{4}$, that of~$a$ is $1+\frac{2}{3}=\frac{5}{3}$, that of each of~$b$ and~$c$ is~$\frac{1}{3}+\frac{1}{2}=\frac{5}{6}$, and that of each of the other candidates is strictly smaller than~$1$. Hence, the winning $2$-committees of SAV at~$E$ are exactly the $2$-subsets of $\{x, y, z\}$. To satisfy all manipulators, winning $2$-committees should be among $\{a, b\}$, $\{b, c\}$, $\{a, c\}$, $\{d_1, c\}$, and $\{d_2, b\}$. If all the three manipulators turn to approve the same set of candidates, at least one of $\{b, c, d_1, d_2\}$ has SAV score at most $\frac{3}{2}$, which is strictly smaller than that of each of~$x$, $y$, and~$z$, implying that the result is not more favorable to at least one of the manipulators. However, the manipulators are capable of improving the result by coordinating their votes in some other way. For instance, if one of the manipulators approves~$a$, and the other two approve~$b$, the SAV scores of~$a$ and~$b$ both become~$2$, which is strictly larger than that of every other candidate, making $\{a, b\}$ the unique winning $2$-committee. This shows that for SAV it is not always optimal for the manipulators to approve the same set of candidates in order to improve the result in their favor. By adding a large number of dummy candidates not approved by any vote, we can show a similar result for NSAV by utilizing a lemma from~\cite{DBLP:conf/atal/000120} stated below. \begin{figure}[h!] \centering{ \includegraphics[width=0.65\textwidth]{fig-sav-nsav-manipulation.pdf} \caption{An illustration that for SAV and NSAV it is not always optimal for the manipulators to approve the same candidates.} \label{fig-sav-nsav-manipulation} } \end{figure} \end{example} For an election $(C, V)$ and a candidate~$c$, let $\textsf{SAV}_{(C, V)}(c)$ and $\textsf{NSAV}_{(C, V)}(c)$ be, respectively, the SAV score and the NSAV score of~$c$ in~$(C, V)$. \begin{lemma}[\citet{DBLP:conf/atal/000120}] \label{lem-relation-sav-nsav} Let $(C, V)$ be an election where $m=\abs{C}\geq 2$ and $n=\abs{V}$. Let~$B$ be a set of at least $n\cdot m^2$ candidates disjoint from~$C$. Then, for every two candidates~$c$ and~$c'$ in~$C$, it holds that ${\memph{\textsf{SAV}}}_{(C, V)}(c)>{\memph{\textsf{SAV}}}_{(C, V)}(c')$ if and only if ${\memph{\textsf{NSAV}}}_{(C\cup B, V)}(c)>{\memph{\textsf{NSAV}}}_{(C\cup B, V)}(c')$. \end{lemma} \onlyfull{ Now we study the bribery problems RCFB and CCFB. A major difference between RCFB/CCFB and {CBCM}/{SBCM} is that in {CBCM}/{SBCM} the voters who change their votes (i.e., the manipulators) are given in the input, but in RCFB/CCFB the vulnerable voters depend on the final winning~$k$-committee, which is not given in the input. Nevertheless, in the {{\np}-{\sf{hardness}}} reduction for {CBCM}/{SBCM} in Theorem~\ref{thm-ccm-av-np-hard}, the four voters approving the current winning candidates $c_1,\dots,c_{\kappa}$ clearly cannot be~$\w'$-vulnerable voters for any $\w'\neq \{c_1,\dots,c_{\kappa}\}$ because all of their approved candidates are already in the winning $k$-committee. Hence, only the voters corresponding to the edges can be bribed. Based on this observation we can derive the {{\np}-{\sf{hardness}}} of RCFB/CCFB for AV by modifying the reductions for {CBCM}/{SBCM}. In particular, we add one additional candidate which serves as the distinguished candidate, one more dummy candidate~$c_{\kappa+1}$, and reset~$k=\kappa+1$.
The {{\np}-{\sf{hardness}}} reductions of {CBCM}/{SBCM} for the other rules can also be modified to show the {{\np}-{\sf{hardness}}} of RCFB/CCFB for the same rules. \begin{theorem} \label{thm-bribery-np-hard-many-rules-only-one-distinguished-candidate} RCFB and CCFB for AV, SAV, NSAV, PAV, ABCCV, and MAV are {{\np}-{\sf{hard}}} even if there is only one distinguished candidate. \end{theorem} We can also obtain {{\sf{FPT}}}-results for the bribery problems with respect to the number of candidates. \begin{theorem} \label{thm-bribery-fpt-wrt-candidates} For $\varphi\in \{\memph{\text{AV}}, \memph{\text{SAV}}, \memph{\text{NSAV}}\}$, {\probb{RCFB}{$\varphi$}} and {\probb{CCFB}{$\varphi$}} are {{\sf{FPT}}} with respect to the number of candidates. \end{theorem} } \section{Control} \label{sec-control} In this section, we study the complexity of election control problems. We first study control by modifying the set of voters and then study control by modifying the set of candidates. \subsection{Control of Voters} \onlyfull{This section is devoted to the control problems.} When considering AV as a single-winner voting rule (i.e., when $k=1$), it is known that {\probb{CCAV}{AV}} and {\probb{CCDV}{AV}} are {{\np}-{\sf{hard}}}~\cite{DBLP:journals/ai/HemaspaandraHR07}. Notably, when~$k=1$, PAV, ABCCV, and AV are identical. As a consequence, {\prob{CCAV}} and {\prob{CCDV}} for PAV and ABCCV are also {{\np}-{\sf{hard}}} even when~$k=1$. For {\prob{CCAV}} and {\prob{CCDV}}, it remains to consider SAV, NSAV, and MAV\@. Note that these three rules are not equivalent to AV when $k=1$~\cite{DBLP:journals/corr/abs-2007-01795}. We first show the {{\np}-{\sf{hardness}}} for SAV and NSAV even when restricted to the above-mentioned special case where~$k=1$. Our reductions are from {\prob{RX3C}}. We provide the detailed reductions for SAV, and utilize Lemma~\ref{lem-relation-sav-nsav} to show how to adapt the reductions to make them applicable to NSAV. \begin{theorem} \label{thm-ccav-sav-np-hard} For $\varphi\in \{\memph{\text{SAV}}, \memph{\text{NSAV}}\}$, {\probb{CCAV}{$\varphi$}} and {\probb{CCDV}{$\varphi$}} are {{\np}-{\sf{hard}}} even when~$k=1$. Moreover, the hardness for {\prob{CCDV}} holds even if every vote approves at most three candidates and every candidate is approved by at most three voters. \end{theorem} \begin{proof} We prove the theorem separately for each problem. All reductions are from {\prob{RX3C}}. Let~$(A,\mathcal{H})$ be an instance of {\prob{RX3C}} such that~$\abs{A}=\abs{\mathcal{H}}=3\kappa$. \begin{itemize} \item {\probb{CCAV}{SAV}} \end{itemize} We assume that~$\kappa>2$ and that~$\kappa$ is divisible by~$4$. This assumption does not change the complexity of the problem{\footnote{If~$\kappa$ is not divisible by~$4$, we can modify the instance into an equivalent instance where this condition is satisfied. In particular, if $\kappa\equiv 3 \pmod 4$, we create three further elements in~$A$, add to~$\mathcal{H}$ three $3$-subsets each of which consists of exactly these three newly introduced elements, and increase~$\kappa$ by one. If $\kappa\equiv 2\pmod 4$, we create six further elements $a_1,\dots,a_6$ in~$A$, add three copies of $\{a_1,a_2,a_3\}$ and three copies of $\{a_4,a_5,a_6\}$ into~$\mathcal{H}$, and increase~$\kappa$ by two. If $\kappa\equiv 1\pmod 4$, we create~$9$ further elements $a_1,\dots,a_9$ in~$A$, add three copies of each of $\{a_1,a_2,a_3\}$, $\{a_4, a_5, a_6\}$, and $\{a_7,a_8, a_9\}$ into~$\mathcal{H}$, and increase~$\kappa$ by three.
After the operations, the instance is still an instance of {\prob{RX3C}}, and it is easy to see that the original instance is a {Yes-instance} if and only if the new instance is a {Yes-instance}.}}. We create an instance $((C, V), k, U, J, \ell)$ of {\probb{CCAV}{SAV}} as follows. For each~$a\in A$, we create one candidate denoted still by~$a$ for simplicity. In addition, we create one candidate~$p$ which is the only distinguished candidate. Let~$C=A\cup \{p\}$ and let $J=\{p\}$. We create $\frac{3}{4} \kappa \cdot(\kappa-2)$ registered votes each of which approves all candidates except the distinguished candidate~$p$. Let~$V$ be the multiset of these registered votes. Note that as $\kappa\equiv 0 \pmod{4}$ and $\kappa>2$, $\frac{3}{4} \kappa \cdot(\kappa-2)$ is a positive integer, and hence~$V$ is well-defined. Then, for each $H=\{a_x,a_y,a_z\}\in \mathcal{H}$, we create in~$U$ an unregistered vote~$v(H)$ which approves the four candidates~$p$,~$a_x$,~$a_y$, and~$a_z$. We complete the construction by setting~$k=1$ and~$\ell=\kappa$. The construction clearly can be done in polynomial time. In the following we show that the given {\prob{RX3C}} instance is a {Yes-instance} if and only if the above constructed instance of {\probb{CCAV}{SAV}} is a {Yes-instance}. $(\Rightarrow)$ Assume that~$\mathcal{H}$ contains an exact~$3$-set cover $\mathcal{H}'\subseteq \mathcal{H}$ of~$A$. Let us consider the election after adding into~$V$ all unregistered votes corresponding to the~$\kappa$ elements in~$\mathcal{H}'$, i.e., the election $E=(C, V\cup U')$ where $U'=\{v(H)\in U \mymid H\in \mathcal{H}'\}$. As each unregistered vote approves exactly four candidates including~$p$, the SAV score of~$p$ in~$E$ is~$\frac{\kappa}{4}$. In addition, as~$\mathcal{H}'$ is an exact~$3$-set cover of~$A$, due to the above construction, for each candidate~$a\in A$,~$U'$ contains exactly one unregistered vote approving~$a$, implying that the SAV score of~$a$ increases by~$\frac{1}{4}$ after the addition of the votes in~$U'$ to~$V$. Given that the SAV score of each candidate~$a\in A$ in $(C, V)$ is~$\frac{3}{4}\kappa \cdot (\kappa-2)\cdot \frac{1}{3\kappa}=\frac{\kappa-2}{4}$, we know that the SAV score of~$a$ in~$E$ is~$\frac{\kappa-1}{4}$. Therefore, the distinguished candidate~$p$ has the unique highest SAV score and hence~$p$ uniquely wins~$E$. $(\Leftarrow)$ Assume that there is a multiset~$U'\subseteq U$ of cardinality at most $\ell=\kappa$ so that~$p$ becomes the unique SAV winner of $E=(C, V\cup U')$. Let $\mathcal{H}'=\{H\in \mathcal{H} \mid v(H)\in U'\}$. Since the distinguished candidate~$p$ has the unique least SAV score~$0$ in $(C, V)$, it holds that $\abs{U'}\geq 1$. Then, it is easy to see that~$\abs{U'}=\kappa$, since otherwise at least one candidate in~$A$ has SAV score at least $\frac{\kappa-2}{4}+\frac{1}{4}=\frac{\kappa-1}{4}$ in~$E$, while the distinguished candidate~$p$ has SAV score at most~$\frac{\kappa-1}{4}$ in~$E$, contradicting that~$p$ is the unique winner of~$E$. It follows that the SAV score of~$p$ in~$E$ is~$\frac{\kappa}{4}$, and the SAV score of every~$a\in A$ in~$E$ must be strictly smaller than~$\frac{\kappa}{4}$. Due to the construction of the votes, this means that for every~$a\in A$ there is at most one vote $v(H)\in U'$ which approves~$a$. As the~$\kappa$ votes in~$U'$ approve~$3\kappa$ candidates of~$A$ in total (counted with multiplicity), every~$a\in A$ is approved by exactly one vote $v(H)\in U'$, and hence $a\in H$ for some $H\in \mathcal{H}'$. Given $\abs{\mathcal{H}'}=\abs{U'}=\kappa$, it holds that~$\mathcal{H}'$ is an exact~$3$-set cover of~$A$.
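As a small sanity check of these calculations (the following numbers merely instantiate the construction and are not part of it), consider $\kappa=4$: then $\abs{A}=12$, there are $\frac{3}{4}\cdot 4\cdot 2=6$ registered votes each approving the $12$ candidates in~$A$, so in $(C, V)$ every $a\in A$ has SAV score $6\cdot\frac{1}{12}=\frac{1}{2}=\frac{\kappa-2}{4}$ and~$p$ has SAV score~$0$. After adding the $4$ unregistered votes corresponding to an exact $3$-set cover,~$p$ has SAV score $4\cdot\frac{1}{4}=1=\frac{\kappa}{4}$ and every $a\in A$ has SAV score $\frac{1}{2}+\frac{1}{4}=\frac{3}{4}=\frac{\kappa-1}{4}<1$, so~$p$ indeed uniquely wins.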
\begin{itemize} \item {\probb{CCDV}{SAV}} \end{itemize} We create an instance $((C, V), k, J, \ell)$ of {\probb{CCDV}{SAV}} as follows. For each~$a\in A$, we create one candidate denoted by the same symbol for simplicity. In addition, we create four candidates~$p$,~$d_1$,~$d_2$, and~$d_3$, where~$p$ is the distinguished candidate. Let $C=A\cup \{p, d_1, d_2, d_3\}$ and let~$J=\{p\}$. We create the following votes in~$V$. First, we create two votes~$v_1=\{p,d_1\}$ and $v_2=\{p, d_2, d_3\}$. Then, for each $H=\{a_x,a_y,a_z\}\in \mathcal{H}$, we create one vote $v(H)=\{a_x, a_y, a_z\}$. We complete the construction by setting~$k=1$ and~$\ell=\kappa$. Clearly, the construction can be done in polynomial time. It remains to show the correctness of the reduction. The SAV scores of all candidates are summarized in Table~\ref{tab-ccdv-sav-scores}. \begin{table} \caption{SAV scores of candidates in the instance of {\probb{CCDV}{SAV}} in the proof of Theorem~\ref{thm-ccav-sav-np-hard}.} \begin{center} \begin{tabular}{llllll}\toprule &$p$ & $a\in A$ & $d_1$ & $d_2$ & $d_3$ \\ \midrule SAV scores &$\frac{1}{2}+\frac{1}{3}=\frac{5}{6}$ & $1$ & $\frac{1}{2}$ & $\frac{1}{3}$ & $\frac{1}{3}$ \\ \bottomrule \end{tabular} \end{center} \label{tab-ccdv-sav-scores} \end{table} $(\Rightarrow)$ Let~$\mathcal{H}'\subseteq \mathcal{H}$ be an exact~$3$-set cover of~$A$. Let $V'=\{v(H) \mymid H\in \mathcal{H}'\}$ be the votes corresponding to~$\mathcal{H}'$. We claim that after removing all the~$\kappa$ votes in~$V'$ from~$V$, the distinguished candidate~$p$ becomes the unique winner. Let $E=(C, V\setminus V')$. Clearly, the SAV scores of~$p$,~$d_1$,~$d_2$, and~$d_3$ in~$E$ remain the same as summarized in Table~\ref{tab-ccdv-sav-scores}. As~$\mathcal{H}'$ is an exact~$3$-set cover of~$A$, due to the construction of the votes, for each candidate~$a\in A$, $V'$ contains exactly one vote approving~$a$. Therefore, the SAV score of~$a$ in~$E$ decreases to $1-\frac{1}{3}=\frac{2}{3}$, resulting in~$p$ being the unique SAV winner of~$E$. $(\Leftarrow)$ Assume that there is $V'\subseteq V$ of at most~$\ell=\kappa$ votes whose removal results in~$p$ being the unique SAV winner. We may assume that $\{v_1,v_2\}\cap V'=\emptyset$, since it is easy to see that if~$p$ uniquely wins $(C, V\setminus V')$, then~$p$ also uniquely wins $(C, V\setminus (V'\setminus \{v_1, v_2\}))$. Under this assumption, the SAV score of~$p$ in $(C,V\setminus V')$ remains $\frac{5}{6}$. As the SAV score of every candidate~$a\in A$ in $(C, V)$ is $1>\frac{5}{6}$, the score of every~$a\in A$ must be decreased, and to decrease the score of a candidate~$a\in A$, at least one vote approving~$a$ must be in~$V'$. Given $\ell=\kappa$, we know that the submultiset $\{H\in \mathcal{H} \mymid v(H)\in V'\}$ corresponding to~$V'$ is an exact~$3$-set cover of~$A$. \begin{itemize} \item {\probb{CCAV}{NSAV}} and {\probb{CCDV}{NSAV}} \end{itemize} Our reduction for {\probb{CCAV}{NSAV}} (resp.\ {\probb{CCDV}{NSAV}}) is obtained from the above reduction for {\probb{CCAV}{SAV}} (resp.\ \probb{CCDV}{SAV}) by adding $n\cdot m^2$ additional candidates that are not approved by any vote. Here,~$m$ and~$n$ are respectively the number of candidates and the number of votes created in the instance of {\probb{CCAV}{SAV}} (resp.\ \probb{CCDV}{SAV}). Then, by Lemma~\ref{lem-relation-sav-nsav}, we know that for any $V'\subseteq V\cup U$ (resp.~$V'\subseteq V$), a candidate has the unique highest SAV score in $(C, V')$ if and only if it has the unique highest NSAV score in the election obtained from $(C, V')$ by adding the new candidates.
This implies that the constructed instance of {\probb{CCAV}{SAV}} (resp.\ \probb{CCDV}{SAV}) is a {Yes-instance} if and only if the instance of {\probb{CCAV}{NSAV}} (resp.\ \probb{CCDV}{NSAV}) is a {Yes-instance}. \end{proof} Now we consider {\prob{CCAV}} and {\prob{CCDV}} for MAV. Unlike the above results, we show that, somewhat interestingly, {\probb{CCAV}{MAV}} and {\probb{CCDV}{MAV}} have different complexity. Concretely, {\probb{CCAV}{MAV}} is {{\np}-{\sf{hard}}} when $k=1$, while {\probb{CCDV}{MAV}} turns out to be polynomial-time solvable as long as~$k$ is a constant. The following lemma, which characterizes the space of MAV winning $1$-committees, is useful in establishing the {{\np}-{\sf{hardness}}} of {\probb{CCAV}{MAV}}. \begin{lemma} \label{lem-a} Let~$(C, V)$ be an election, and let~$A\subseteq V$ be the submultiset of votes in~$V$ each approving the maximum number of candidates, i.e., $A=\argmax_{v\in V}\{\abs{v}\}$. Moreover, let~$C'$ be the set of candidates approved by all votes in~$A$, i.e., $C'=\bigcap_{v\in A} v$. Then, if $C'\neq \emptyset$, all candidates in~$C'$ are tied as MAV single winners of $(C, V)$. Otherwise, all candidates in~$C$ are tied as MAV single winners of $(C, V)$. \end{lemma} \begin{proof} Assume that~$(C, V)$, $A$, and~$C'$ are as stipulated in the lemma. Let~$x$ be the number of candidates approved by each vote in~$A$, i.e., for every $v\in A$ it holds that $x=|v|$. The lemma clearly holds if $C'=C$. So, in the following let us assume that $C\setminus C'\neq\emptyset$. If $C'\neq \emptyset$, then for any singleton committee~$\{a\}$ such that~$a\in C'$, the Hamming distance between~$\{a\}$ and every vote in~$A$ is $x-1$, and that between~$\{a\}$ and every vote not in~$A$ is at most~$x$. Hence,~$\{a\}$ has MAV score at most~$x$. Now we analyze the MAV score of a singleton committee~$\{b\}$ where $b\in C\setminus C'$. Due to the definition of~$C'$, there exists at least one vote $v\in A$ such that $b\not\in v$. The Hamming distance between~$\{b\}$ and~$v$ is $x+1$, implying that $\{b\}$ has MAV score at least $x+1$. Therefore, every candidate~$a\in C'$ is a MAV single winner. Now we prove for the case where $C'=\emptyset$. In this case, for any singleton committee~$\{a\}\subseteq C$, there exists at least one vote $v\in A$ such that $a\not\in v$. The Hamming distance between~$\{a\}$ and~$v$ is $x+1$. Clearly, the Hamming distance between~$\{a\}$ and any vote not in~$A$ is at most~$x$. Hence, we know that all singleton committees of~$C$ have the same MAV score $x+1$, implying that all candidates are tied as MAV single winners of~$(C, V)$. \end{proof} We mention in passing that \citeas{DBLP:journals/corr/abs-2007-01795} call rules which are not identical to AV when $k=1$ {\textit{non-standard}} rules, and gave two small examples to show that both MAV and SAV are nonstandard. Lemma~\ref{lem-a} fully characterizes the space of winning $1$-committees of MAV, and from the characterization it is easy to see that MAV does not necessarily select candidates receiving the most approvals when $k=1$. Now we are ready to give the {{\np}-{\sf{hardness}}} of {\probb{CCAV}{MAV}}. \begin{theorem} \label{thm-ccav-mav-nph-k-1} {\probb{CCAV}{MAV}} is {{\np}-{\sf{hard}}} even when $k=1$ and there is only one registered vote. \end{theorem} \begin{proof} We prove the theorem via a reduction from {\prob{RX3C}}. Let $(A,\mathcal{H})$ be an {\prob{RX3C}} instance where $\abs{A}=\abs{\mathcal{H}}=3\kappa>0$. 
We create a {\probb{CCAV}{MAV}} instance $((C, V), k, U, J, \ell)$ as follows. For each $a\in A$, we create one candidate denoted still by~$a$ for simplicity. In addition, for each $H\in \mathcal{H}$, we create three candidates denoted by~$c_1(H)$,~$c_2(H)$, and~$c_3(H)$. Moreover, we create one candidate~$p$ which is the only distinguished candidate. Let~$C$ be the set of all these $12\kappa+1$ created candidates, and let $J=\{p\}$. Concerning the votes, we create only one registered vote in~$V$ which approves all candidates in~$A$ and the distinguished candidate~$p$, and disapproves all the other candidates. Unregistered votes are created according to $\mathcal{H}$. In particular, for each $H\in \mathcal{H}$, we create one unregistered vote~$v(H)$ which approves exactly the four candidates~$p$,~$c_1(H)$,~$c_2(H)$,~$c_3(H)$, and every candidate $a\in A$ such that $a\not\in H$. Let~$U$ be the set of all the created~$3\kappa$ unregistered votes. Note that all created votes approve exactly $3\kappa+1$ candidates. We complete the reduction by setting $k=1$ and $\ell=\kappa$. The construction can be done in polynomial time. It remains to prove the correctness of the reduction. $(\Rightarrow)$ Suppose that there is an exact $3$-set cover $\mathcal{H}'\subsetneq \mathcal{H}$ of~$A$. Let $U'=\{v(H) \mymid H\in \mathcal{H}'\}$ be the set of the~$\kappa$ unregistered votes corresponding to~$\mathcal{H}'$. Let $E=(C, V\cup U')$. Due to the definition of~$\mathcal{H}'$ and the construction of the election,~$p$ is the unique candidate that is approved by all votes in $V\cup U'$. By Lemma~\ref{lem-a},~$p$ is the unique MAV winner of~$E$. $(\Leftarrow)$ Suppose that there exists $U'\subseteq U$ of cardinality at most~$\ell$ so that~$p$ is the unique MAV winner of $E=(C, V\cup U')$. Due to Lemma~\ref{lem-a}, in~$E$, for every candidate $a\in A$, there must be at least one vote in $V\cup U'$ not approving~$a$; as the registered vote approves every candidate in~$A$, this vote must be from~$U'$. Due to the construction of the unregistered votes, this means that~$U'$ contains at least one vote~$v(H)$ such that $a\in H\in \mathcal{H}$. It follows that $\mathcal{H}'=\{H\in \mathcal{H} \mymid v(H)\in U'\}$ is a set cover of~$A$. Moreover, as every $H\in \mathcal{H}$ is of cardinality three and $\abs{U'}\leq \kappa$, it holds that $\abs{U'}=\kappa$ and~$\mathcal{H}'$ is an exact $3$-set cover of~$A$. \end{proof} In contrast to the {{\np}-{\sf{hardness}}} of {\probb{CCAV}{MAV}} even when restricted to the special case as stated in Theorem~\ref{thm-ccav-mav-nph-k-1}, {\probb{CCDV}{MAV}} is polynomial-time solvable as long as~$k$ is a constant. As far as we know, MAV is the first natural voting rule for which the complexity of {\prob{CCAV}} and {\prob{CCDV}} differs. \begin{theorem} \label{thm-ccav-ccdv-mav-polynomial-time-solvable-k-constant} {\probb{CCDV}{MAV}} is polynomial-time solvable when~$k$ is a constant. \end{theorem} \begin{proof} Let $I=((C, V), k, J, \ell)$ be a {\probb{CCDV}{MAV}} instance where $(C, V)$ is an election. Let $m=|C|$ denote the number of candidates. We derive an algorithm as follows. First, we compute~$\mathcal{C}_{k, C}(J)$ and~$\mathcal{C}_{k,C}(\overline{J})$, i.e., the collection of $k$-committees containing~$J$ and the collection of $k$-committees not containing~$J$, respectively. As the number of all~$k$-committees is at most~${m\choose k}$ and~$k$ is a constant, they can be computed in polynomial time.
Then, we split the given instance~$I$ into polynomially many subinstances each of which takes as input~$I$, a nonnegative integer~$x\leq m$, and a $k$-committee $\w\in \mathcal{C}_{k, C}(J)$, and determines whether we can delete at most~$\ell$ votes from~$V$ in $(C,V)$ so that in the remaining election \begin{enumerate} \item[(1)] $\w$~has MAV score at most~$x$, and \item[(2)] all $k$-committees from $\mathcal{C}_{k, C}(\overline{J})$ have MAV scores at least~$x+1$. \end{enumerate} The above two conditions ensure that in the remaining election the winning $k$-committees must be from~$\mathcal{C}_{k, C}(J)$. Obviously,~$I$ is a {Yes-instance} if and only if at least one of the subinstances is a {Yes-instance}. We focus on solving a subinstance $(I, x, \w)$. Our algorithm proceeds as follows. First, by Condition~(1), all votes in~$V$ which are of Hamming distance at least~$x+1$ from~$\w$ need to be deleted; we do so and decrease~$\ell$ by the number of votes deleted. If $\ell<0$ after doing so, we immediately conclude that the subinstance is a {No-instance}. Otherwise, if there exists $\w'\in \mathcal{C}_{k, C}(\overline{J})$ whose MAV score in the remaining election is at most~$x$, we conclude that the subinstance is a {No-instance} too. (Note that in this case the original instance~$I$ might still be a {Yes-instance}. Nevertheless, a feasible solution of~$I$ will be captured by another subinstance associated with the same~$\w$ but with a smaller~$x$.) Otherwise, the above two conditions are satisfied, and we conclude that the subinstance is a {Yes-instance}. \end{proof} The algorithm in the proof of Theorem~\ref{thm-ccav-ccdv-mav-polynomial-time-solvable-k-constant} runs in $\bigos{m^k}$ time. In the language of parameterized complexity theory, this is an {{\sf{XP}}}-algorithm with respect to~$k$. It is interesting to study whether this algorithm can be improved to an {{\sf{FPT}}}-algorithm. We leave it as an open question for future research. \subsection{Control of Candidates} Now we consider control by modifying the candidate set. Notice that for AV it is impossible to change the scores of registered candidates by adding unregistered candidates, as observed already in the context of single-winner voting~\cite{baumeisterapproval09,DBLP:journals/jcss/HemaspaandraH07}. This implies that AV is immune to CCAC. However, this is not the case for SAV and NSAV, since for these two rules adding candidates may increase the number of approved candidates of some votes and hence affect the scores of the registered candidates. We show that {\probb{CCAC}{SAV}} and {\probb{CCAC}{NSAV}} are {{\np}-{\sf{hard}}} even when treating them as single-winner voting rules (i.e., when $k=1$). \begin{theorem} \label{thm-ccac-sav-np-hard} {\probb{CCAC}{SAV}} and {\probb{CCAC}{NSAV}} are {{\np}-{\sf{hard}}}. Moreover, this holds even when~$k=1$, every vote approves at most four candidates, and every candidate is approved by at most three votes. \end{theorem} \begin{proof} We prove the theorem by reductions from {\prob{RX3C}}. Let~$(A,\mathcal{H})$ be an {\prob{RX3C}} instance where $\abs{A}=\abs{\mathcal{H}}=3\kappa>0$. We first provide the reduction for SAV, and then show how to utilize Lemma~\ref{lem-relation-sav-nsav} to adapt the reduction for NSAV. \begin{itemize} \item {\probb{CCAC}{SAV}} \end{itemize} We create an instance $((C\cup D, V), k, J, \ell)$ of {\probb{CCAC}{SAV}} as follows. For each~$H\in \mathcal{H}$, we create one candidate~$c(H)$. For each $a\in A$, we create one candidate~$c(a)$.
In addition, we create a distinguished candidate~$p$ and three dummy candidates~$d_1$,~$d_2$, and~$d_3$. Let \[C=\{c(a) \mid a\in A\}\cup \{p\}\cup \{d_1, d_2, d_3\},\] $D=\{c(H) \mid H\in \mathcal{H}\}$, and~$J=\{p\}$. We create the following votes. First, we create three votes~$v_1=\{p\}$, $v_2=\{p, d_1\}$, and $v_3=\{p, d_2, d_3\}$. Then, for each~$a\in A$, we create two votes~$v(a)$ and~$v'(a)$. In particular,~$v'(a)$ approves exactly~$c(a)$, and~$v(a)$ approves exactly~$c(a)$ and every~$c(H)$ such that $a\in H\in \mathcal{H}$. Hence, the vote~$v(a)$ approves exactly four candidates, one from~$C$ and three from~$D$. We complete the reduction by setting~$k=1$ and~$\ell=\kappa$. The above instance clearly can be constructed in polynomial time. We show the correctness of the reduction as follows. The SAV scores of the candidates in~$(C, V)$ are summarized in Table~\ref{tab-ccac-sav-nph}. \begin{table} \caption{The SAV scores of candidates in the election restricted to registered candidates constructed in the proof of Theorem~\ref{thm-ccac-sav-np-hard}.} \begin{center} \begin{tabular}{ll}\toprule candidates & SAV scores \\ \midrule $p$ & $1+1/2+1/3=11/6$\\ $c(a)$ & $1+1=2$ \\ $d_1$ & $1/2$\\ $d_2$, $d_3$ & $1/3$ \\ \bottomrule \end{tabular} \end{center} \label{tab-ccac-sav-nph} \end{table} $(\Rightarrow)$ Let $\mathcal{H}'\subseteq \mathcal{H}$ be an exact~$3$-set cover of~$A$, and let $D'=\{c(H) \mid H\in \mathcal{H}'\}\subseteq D$ be the set of the~$\kappa$ candidates corresponding to~$\mathcal{H}'$. Consider the election $E=(C\cup D', V)$. Clearly, the SAV scores of~$p$,~$d_1$,~$d_2$, and~$d_3$ in~$E$ remain the same as in~$(C, V)$ (see Table~\ref{tab-ccac-sav-nph}). Now we analyze the SAV scores of all~$c(a)$ where~$a\in A$. As~$\mathcal{H}'$ is an exact $3$-set cover of~$A$, in~$E$ each vote~$v(a)$ approves exactly two candidates---~$c(a)$ and some~$c(H)$ such that~$a\in H\in \mathcal{H}'$. Therefore, after adding the candidates from~$D'$ into~$C$, the SAV score of~$c(a)$ decreases to~$2-1/2=3/2$. Each candidate in~$D'$ has SAV score~$3/2$ in~$E$ too. Therefore,~$p$ is the unique SAV winner of~$E$. $(\Leftarrow)$ Assume that~$p$ becomes the unique SAV winner after adding a set~$D'\subseteq D$ of at most~$\ell=\kappa$ candidates into~$C$. Let $E=(C\cup D', V)$. Similar to the above analysis, we know that the SAV score of~$p$ in~$E$ remains $11/6$. As~$p$ uniquely wins~$E$ under SAV, we know that the SAV score of every candidate~$c(a)$ in~$E$, where~$a\in A$, must be decreased compared to that in $(C, V)$. Due to the construction of the election, this means that for every candidate~$c(a)$ where~$a\in A$ there exists at least one candidate~$c(H)\in D'$ where~$H\in \mathcal{H}$ such that~$a\in H$. After adding such a candidate~$c(H)$ from~$D$ into~$C$, the SAV score of~$c(a)$ with respect to the vote~$v(a)$ decreases from~$1$ to~$1/2$, leading to a final SAV score~$3/2$, smaller than the score of~$p$. As $\ell=\kappa$, it follows that $\{H\in \mathcal{H} \mid c(H)\in D'\}$ is an exact~$3$-set cover of~$A$. \begin{itemize} \item {\probb{CCAC}{NSAV}} \end{itemize} Our reduction for {\probb{CCAC}{NSAV}} is obtained from the above reduction for {\probb{CCAC}{SAV}} by adding~$n\cdot m^2$ new registered candidates in~$C$ who are not approved by any votes. Here,~$m$ and~$n$ are respectively the number of all candidates (registered and unregistered) and the number of votes created in the instance of {\probb{CCAC}{SAV}}. 
The newly created registered candidates are approved by no votes and hence have small enough scores that, no matter which unregistered candidates from~$D$ are added into~$C$, none of these new candidates is winning. Then, by Lemma~\ref{lem-relation-sav-nsav}, we know that for any $D'\subseteq D$, a candidate has the unique highest SAV score in $(C\cup D', V)$ if and only if it has the unique highest NSAV score in the election obtained from $(C\cup D', V)$ by adding the new candidates. This implies that the constructed instance of {\probb{CCAC}{SAV}} is a {Yes-instance} if and only if the instance of {\probb{CCAC}{NSAV}} is a {Yes-instance}. \end{proof} Now we move on to the three nonadditive rules PAV, ABCCV, and MAV. The immunity of single-winner AV to CCAC also implies that PAV and ABCCV are immune to CCAC when~$k=1$. More generally, one can observe that PAV and ABCCV are immune to CCAC when the number of distinguished candidates equals~$k$\onlyfull{, the size of the desired winning $k$-committee}. In this case, the question of {\prob{CCAC}} degenerates to whether we can add at most~$\ell$ unregistered candidates so that a given~$k$-committee~$J$ is the unique winning $k$-committee. To see that ABCCV and PAV are immune to CCAC in this special case, observe that if the given $k$-committee~$J$ is not the unique winning $k$-committee of $(C, V)$, there exists a $k$-committee~$\w\subseteq C$ other than~$J$ whose score is at least that of~$J$. As the scores of committees in $(C, V)$ do not change by adding further candidates into~$C$, the committee~$\w$ prevents~$J$ from being the unique winning $k$-committee no matter which candidates are added. This reasoning in fact applies to all multiwinner voting rules that satisfy an axiomatic property defined below, which, roughly speaking, states that if a committee is not uniquely winning, it cannot become uniquely winning when additional candidates are introduced. \begin{definition}[Negated Revealed Preference (NRP)] A multiwinner voting rule~$\varphi$ satisfies NRP if for every election $(C, V)$ and every $k$-committee $\w\subseteq C$, if~$\w$ is not the unique winning $k$-committee of~$\varphi$ at $(B, V)$ for some $B\subseteq C$ such that $\w\subseteq B$, then for any $B'\subseteq C$ such that $B\subseteq B'$ the committee~$\w$ is not the unique winning $k$-committee of~$\varphi$ at $(B', V)$. \end{definition} It should be noted that the above definition is a variant of the unique version of the weak axiom of revealed preference (unique-WARP) that has been extensively studied for single-winner voting rules. Concretely, a single-winner voting rule satisfies unique-WARP if the unique winner remains the unique winner when the election is restricted to any subset of candidates containing this winner. Compared to unique-WARP, NRP specifies the nonwinning status of some committee without mentioning the identities of winning committees.\footnote{There are several extensions of WARP to choice correspondences in the literature (see, e.g.,~\citep{DBLP:journals/jet/BrandtH11,PetesPtheorydecision2021}), where a choice correspondence is a function that assigns to each subset $S\subseteq C$ a subset $C'\subseteq S$. So, these notions apply to resolute multiwinner voting rules which always select exactly one winning committee.} \begin{theorem} \label{thm-pav-abbcv-mav-immue-to-ccac-k-equal-distinguished-candidates} Every NRP multiwinner voting rule is immune to {\memph{CCAC}} when the number of distinguished candidates is~$k$. \end{theorem} It is easy to see that ABCCV and PAV fulfill NRP. However, this is not the case for MAV, as shown in the example below.
\begin{example} Let $C=\{a,b\}$, $D=\{c, d\}$, and $V=\{\{b\}, \{a,c\}, \{a,d\}\}$. Clearly, both~$\{a\}$ and~$\{b\}$ are winning $1$-committees of~MAV at $(C, V)$. However, $\{a\}$ is the unique winning $1$-committee of~MAV at $(C\cup D, V)$. \end{example} The above example also shows that MAV is not immune to CCAC even when $k=1$. From the complexity point of view, we have the following result. \begin{theorem} \label{thm-ccac-mav-nph-k-1} {\probb{CCAC}{MAV}} is {{\np}-{\sf{hard}}} even when $k=1$ and there are only two registered candidates. \end{theorem} \begin{proof} We prove the theorem via a reduction from {\prob{RX3C}}. Let $(A, \mathcal{H})$ be an instance of {\prob{RX3C}} where $\abs{A}=\abs{\mathcal{H}}=3\kappa$. Without loss of generality, we assume that $\kappa>1$. We create an instance $((C\cup D, V), k, J, \ell)$ of {\probb{CCAC}{MAV}} as follows. First, we create two candidates denoted by~$p$ and~$q$. Then, for every $H\in \mathcal{H}$, we create one candidate~$c(H)$. Let $C=\{p, q\}$ be the set of registered candidates, let $J=\{p\}$, and let $D=\{c(H) \mymid H\in \mathcal{H}\}$ be the set of unregistered candidates. Regarding the votes, we first create one vote which approves only~$q$. Then, for every $a\in A$, we create one vote~$v(a)$ which approves all candidates except~$q$ and the three candidates corresponding to~$H\in \mathcal{H}$ containing~$a$, i.e., $v(a)=\{p\}\cup \{c(H) \mymid a\not\in H, H\in \mathcal{H}\}$. So,~$v(a)$ approves exactly $3\kappa-2$ candidates in $C\cup D$. Let~$V$ denote the multiset of all $3\kappa+1$ votes created above. Finally, we set $k=1$ and $\ell=\kappa$. The instance of {\probb{CCAC}{MAV}} clearly can be constructed in polynomial time. In the following, we show the correctness. $(\Rightarrow)$ Suppose that~$\mathcal{H}$ contains an exact $3$-set cover~$\mathcal{H}'$ of~$A$. Let $D'=\{c(H) \mymid H\in \mathcal{H}'\}$ be the unregistered candidates corresponding to~$\mathcal{H}'$. Let $E=(C\cup D', V)$. We show below that the {\probb{CCAC}{MAV}} instance constructed above is a {Yes-instance} by showing that~$p$ is the unique MAV winning $1$-committee of~$E$. As~$\mathcal{H}'$ is an exact set cover of~$A$, for every vote~$v(a)$ where $a\in A$ there is exactly one~$c(H)\in D'$ such that~$v(a)$ does not approve~$c(H)$. Therefore, every vote~$v(a)$ where $a\in A$ approves exactly~$\kappa$ candidates from $C\cup D'$ and, moreover,~$p$ is the only candidate that is approved by all votes corresponding to~$A$. Then, by Lemma~\ref{lem-a} and the assumption that $\kappa>1$, we know that~$\{p\}$ is the unique MAV winning $1$-committee of~$E$. $(\Leftarrow)$ Suppose that there is a $D'\subseteq D$ such that $\abs{D'}\leq \ell=\kappa$ and~$\{p\}$ becomes the unique MAV winning $1$-committee of $E=(C\cup D', V)$. We prove below that $\mathcal{H}'=\{H\in \mathcal{H} \mymid c(H)\in D'\}$ corresponding to~$D'$ is an exact $3$-set cover of~$A$. For the sake of contradiction, assume that this is not the case. Let $a\in A$ be any arbitrary element in~$A$ that is not covered by~$\mathcal{H}'$, i.e., $a\not\in H$ holds for all $H\in \mathcal{H}'$. Then,~$v(a)$ approves all candidates in $C\cup D'$ except only~$q$. As~$q$ is not approved by any vote corresponding to~$A$, this implies that~$v(a)$ approves the largest number of candidates from $C\cup D'$, and any other vote approving the maximum number of candidates from $C\cup D'$ must approve exactly the same candidates among $C\cup D'$ as~$v(a)$. 
Then, by Lemma~\ref{lem-a}, every candidate in $C\cup D'$ except~$q$ is an MAV single winner of~$E$. This contradicts that~$\{p\}$ is the unique MAV winning $1$-committee of~$E$.
\end{proof}

One may wonder whether the restriction that the number of distinguished candidates equals~$k$ is necessary for ABCCV and PAV to be immune to CCAC. The following example answers the question in the affirmative by illustrating that for every $k\geq 2$, ABCCV and PAV are susceptible to CCAC when there are at most~$k-1$ distinguished candidates.

\begin{example}
Let $C=\{a,b,c\}$, $D=\{d\}$, and $J=\{a\}$. For ABCCV, we have five votes $v_1=\{a\}$, $v_2=v_3=\{b,d\}$, and $v_4=v_5=\{c,d\}$. For PAV, we have eight votes $v_1=v_2=\{a\}$, $v_3=v_4=v_5=\{b,d\}$, and $v_6=v_7=v_8=\{c,d\}$. It is easy to verify that with respect to~$C$ the only ABCCV/PAV winning $2$-committee is $\{b,c\}$. However, if we add the candidate~$d$, $\{a, d\}$ becomes the unique ABCCV/PAV winning $2$-committee. We can show the susceptibility of ABCCV and PAV to CCAC for every $k\geq 3$ by slightly modifying the above elections. We first create $k-2$ copies of~$a$ in~$C$, and let~$J$ be the set consisting of~$a$ and all the copies of~$a$. Then, in addition to the above votes, for ABCCV, we create $k-2$ new votes each of which approves exactly one copy of~$a$, so that each copy of~$a$ is approved by one of these $k-2$ votes; for PAV, we create $2(k-2)$ new votes each of which approves exactly one copy of~$a$, so that each copy of~$a$ is approved by two of these votes.
\end{example}

Concerning the complexity, {\probb{CCAC}{ABCCV}} and {\probb{CCAC}{PAV}} are {{\sf{coNP}}-{\sf{hard}}}. In fact, we can show the {{\sf{coNP}-hardness}} even for a general class of rules and for the special case where there is only one distinguished candidate and no unregistered candidates are allowed to be added, i.e., $\abs{J}=1$ and $\ell=0$. In this case, the question becomes whether the distinguished candidate~$p$ is included in all winning $k$-committees, which is exactly the {\probb{$p$-CC}{$\varphi$}} problem.

\begin{theorem}
\label{thm-cc-abccv-pav-co-np}
Let~$\varphi$ be an $\omega$-Thiele rule such that $\omega(2)<2\omega(1)$. Then {\probb{$p$-CC}{$\varphi$}} is {{\sf{coNP}}-{\sf{hard}}}. Moreover, this holds even when every vote approves at most two candidates, and every candidate is approved by three votes.
\end{theorem}

\begin{proof}
We prove the theorem via a reduction from {\prob{Independent Set}} on regular graphs to {\probb{$p$-CC}{$\varphi$}}, where~$\varphi$ is an $\omega$-Thiele rule such that $\omega(2)<2\omega(1)$. Let $(G, \kappa)$ be an {\prob{Independent Set}} instance where~$G=(\vset,\eset)$ is a regular graph. Let~$t$ be the degree of vertices in~$G$. For each vertex $u\in \vset$, we create one candidate denoted by the same symbol for simplicity. In addition, we create one candidate~$p$. Let $C=\{p\}\cup \vset$, and let $J=\{p\}$. We create the following votes. First, for each edge $\edge{u}{u'}\in \eset$, we create one vote~$v(\edge{u}{u'})$ which approves exactly~$u$ and~$u'$. In addition, we create~$t$ votes each of which approves exactly the distinguished candidate~$p$. Let~$V$ denote the multiset of all votes created above. Finally, we set $k=\kappa$. The instance of {\probb{$p$-CC}{$\varphi$}} is $((C, V), J, k)$, which can be constructed in polynomial time. It remains to prove the correctness of the reduction.
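(Before turning to the correctness argument, we remark, purely as an illustration that is not part of the formal proof, that the construction above is mechanical and can be generated by a short program. The following Python-style sketch mirrors the description; the function name and the edge-list input format are our own choices and are used nowhere else in the paper.)
\begin{verbatim}
# Illustrative sketch only: build the p-CC instance from a t-regular
# graph given as a vertex list and an edge list.
def build_pcc_instance(vertices, edges, kappa):
    t = 2 * len(edges) // len(vertices)      # degree of the regular graph
    candidates = ["p"] + list(vertices)
    votes = [{u, v} for (u, v) in edges]     # one vote per edge
    votes += [{"p"} for _ in range(t)]       # t votes approving only p
    return candidates, votes, {"p"}, kappa   # (C, V, J, k)
\end{verbatim}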
$(\Rightarrow)$ If the instance of {\prob{Independent Set}} is a {Yes-instance}, then it is easy to verify that every $k$-committee corresponding to an independent set of~$\kappa$ vertices is a~$\varphi$ winning $k$-committee with~$\varphi$ score~$\kappa\cdot t\cdot \omega(1)$, implying that the above constructed instance of {\probb{$p$-CC}{$\varphi$}} is a {No-instance}. $(\Leftarrow)$ If~$G$ does not contain any independent set of size~$\kappa$, we claim that the distinguished candidate~$p$ is included in all winning $k$-committees of $(C, V)$ under~$\varphi$. Assume, for the sake of contradiction, that there is a~$\varphi$ winning~$k$-committee~$C'$ such that $p\not\in C'$. Clearly, $C'\subseteq\vset$. As~$C'$ is not an independent set, there exist distinct $u, u'\in C'$ which are both approved in the vote~$v(\edge{u}{u'})$. As~$p$ is approved by~$t$ votes which do not approve any of~$C'$, if we remove one of~$u$ and~$u'$ from~$C'$, and add~$p$ into~$C'$, the~$\varphi$ score of~$C'$ increases by at least $t\cdot \omega(1)-((t-1)\cdot \omega(1)+(\omega(2)-\omega(1)))>0$, which contradicts that~$C'$ is a~$\varphi$ winning $k$-committee. As {\prob{Independent Set}} remains {{\np}-{\sf{hard}}} when restricted to $3$-regular graphs~\citep{DBLP:journals/jct/Mohar01}, the hardness of the problem remains when restricted to the case where every vote approves at most two candidates, and every candidate is approved by three votes. \end{proof} As {\prob{Independent Set}} is {{\sf{W[1]-hard}}} with respect to~$\kappa$, even when restricted to regular graphs~\citep{DBLP:conf/cats/MathiesonS08}, the proof of Theorem~\ref{thm-cc-abccv-pav-co-np} implies the following corollary. \begin{corollary} \label{cor-p-cc-thiele-cowah-k} For each $\omega$-Thiele rule~$\varphi$ such that $\omega(2)<2\omega(1)$, {\probb{$p$-CC}{$\varphi$}} is {{\sf{coW[1]}-hard}} with respect to~$\kappa$ even when every vote approves at most two candidates. \end{corollary} For MAV, we have the following result. \begin{theorem} \label{thm-cc-mav-nph} {\probb{$p$-CC}{MAV}} is {{\np}-{\sf{hard}}}. Moreover, this holds even when every vote approves three candidates and every candidate is approved by at most three votes. \end{theorem} \begin{proof} We prove the theorem by a reduction from {\prob{RX3C}} to {\probb{$p$-CC}{MAV}}. Let $(A, \mathcal{H})$ be an {\prob{RX3C}} instance such that $\abs{A}=\abs{\mathcal{H}}=3\kappa$ for some positive integer~$\kappa$. For each $H\in \mathcal{H}$, we create one candidate~$c(H)$. In addition, we create five candidates~$p$,~$d_1$,~$d_2$,~$d_3$, and~$d_4$. Let $C=\{p, d_1, d_2, d_3, d_4\}\cup \{c(H) \mymid H\in \mathcal{H}\}$, and let $J=\{p\}$. We create the following votes. First, for each $a\in A$, we create one vote~$v(a)$ which approves exactly the three candidates~$c(H)$ such that $a\in H\in \mathcal{H}$. In addition, we create two votes $v_1=\{p, d_1,d_2\}$ and $v_2=\{p, d_3, d_4\}$. Note that every vote approves exactly three candidates. Finally, we set $k=\kappa+1$. The instance of {\probb{$p$-CC}{MAV}} is $((C, V), J, k)$ which can be constructed in polynomial time. It remains to prove the correctness of the reduction. $(\Rightarrow)$ Suppose that there is an exact $3$-set cover $\mathcal{H}'\subsetneq \mathcal{H}$ of~$A$. Let $\w=\{c(H) \mymid H\in \mathcal{H}'\}$ be the subset of candidates corresponding to~$\mathcal{H}'$. It is easy to check that $\w\cup \{p\}$ is a winning $k$-committee with MAV score $\kappa+2$, i.e.,~$\w$ contains at least one approved candidate of every vote. 
For the sake of contradiction, assume that there is another winning $k$-committee~$\w'$ such that $p\not\in \w'$. First,~$w'$ must contain at least~$\kappa$ candidates corresponding to~$\mathcal{H}$, since otherwise there must be at least one vote~$v(a)$, $a\in A$, such that none of its approved candidates~$c(H)$ where $a\in H\in \mathcal{H}$ is included in~$\w'$, implying that the MAV score of~$\w'$ is at least~$\kappa+4$, a contradiction. Second, if~$\w'$ contains $k=\kappa+1$ candidates corresponding to~$\mathcal{H}$, then~$\w'$ is of Hamming distance~$\kappa+4$ from~$v_1$ and~$v_2$, a contradiction too. Therefore,~$\w'$ contains exactly~$\kappa$ candidates corresponding to~$\mathcal{H}$ and contains exactly one candidate from $\{d_1, d_2, d_3, d_4\}$. However, if $w'\cap \{d_1, d_2\}\neq\emptyset$,~$\w'$ is of Hamming distance at least~$\kappa+4$ from~$v_2$, and if $\w'\cap \{d_3, d_4\}\neq \emptyset$,~$w'$ is of Hamming distance at least $\kappa+4$ from~$v_1$, contradicting that~$w'$ is a winning $k$-committee in both cases. So, we can conclude that such a winning $k$-committee~$\w'$ does not exist. $(\Leftarrow)$ Assume that~$\mathcal{H}$ does not contain any exact $3$-set cover of~$A$. Suppose that there is a winning $k$-committee~$\w$ which contains~$p$. Then, there exists at least one vote~$v(a)$ where $a\in A$ such that none of its three approved candidates~$c(H)$ where $a\in H\in \mathcal{H}$ is in~$\w$. Hence, the MAV score of~$\w$ is exactly~$\kappa+4$. In this case, by replacing~$p$ with some candidate approved by~$v(a)$, we obtain another winning $k$-committee, implying that the instance of {\probb{$p$-CC}{MAV}} is a {No-instance}. \end{proof} We mention in passing that~\citeas{DBLP:conf/atal/AzizGGMMW15} studied a problem named {\prob{$R$-TestWS}} which determines if a given $k$-committee is a winning committee of a given election under a multiwinner voting rule. This problem has a flavor of {\prob{$p$}-CC} in the sense that both problems aim to test the winning status of some particular candidates. \citeas{DBLP:conf/atal/AzizGGMMW15} showed {{\sf{coNP}-hardness}} of {\prob{$R$-TestWS}} for PAV by a reduction from {\prob{Independent Set}}, but did not study ABCCV and MAV. We would also like to point out that the {{\np}-{\sf{hardness}}} and {{\sf{coNP}-hardness}} of {\prob{CCAV}} and {\prob{CCDV}} for ABCCV and PAV suggest that when~$k$ is unbounded, {\prob{CCAV}} and {\prob{CCDV}} for ABCCV and for PAV may belong to a much harder class of problems. We leave this as an open question for future research. Let us move on to {\prob{CCDC}}. Unlike the immunity of AV to CCAC, it is easy to see that AV is susceptible to CCDC. Concerning the complexity, it has been shown by~\citeas{DBLP:journals/jair/MeirPRZ08} that {\probb{CCDC}{AV}} is polynomial-time solvable. However, for SAV and NSAV, the complexity of {\prob{CCDC}} is the same as {\prob{CCAC}}. \begin{theorem} \label{thm-ccdc-sav-np-hard} {\probb{CCDC}{SAV}} and {\probb{CCDC}{NSAV}} are {{\np}-{\sf{hard}}} even when~$k=1$. \end{theorem} \begin{proof} We prove the theorem by reductions from {\prob{RX3C}}. Let~$(A, \mathcal{H})$ be an {\prob{RX3C}} instance where $\abs{A}=\abs{\mathcal{H}}=3\kappa$. Without loss of generality, we assume $\kappa\geq 3$. We consider first SAV. \begin{itemize} \item {\probb{CCDC}{SAV}} \end{itemize} We create an instance $((C, V), k, J, \ell)$ of {\probb{CCDC}{SAV}} as follows. We create in total~$6\kappa+1$ candidates. In particular, for each~$a\in A$, we create one candidate~$c(a)$. 
For each~$H\in \mathcal{H}$, we create one candidate~$c(H)$. For a given $A'\subseteq A$ (resp.\ $\mathcal{H}'\subseteq \mathcal{H}$), let $\candc{A'}=\{c(a) \mid a\in A'\}$ (resp.\ $\candc{\mathcal{H}'}=\{c(H) \mid H\in \mathcal{H}'\}$) be the set of candidates corresponding to~$A'$ (resp.\ $\mathcal{H}'$). In addition, we create one candidate~$p$. Let $C=\candc{A} \cup \candc{\mathcal{H}} \cup \{p\}$ and let~$J=\{p\}$. We create the following votes. \begin{itemize} \item First, for each~$H\in \mathcal{H}$, we create six votes~$v(H, 1)$,~$v(H, 2)$, $\dots$, $v(H, 6)$ each of which approves exactly~$p$ and~$c(H)$. \item Second, for each $a\in A$, we create~$12\kappa$ votes $v(a,1),\dots,v(a,12\kappa)$ each of which approves exactly~$c(a)$ and every~$c(H)$ such that $a\in H\in \mathcal{H}$. \item Additionally, for each $a\in A$, we create $8\kappa-2$ votes each of which approves exactly~$c(a)$. \item Finally, we create $6 (3\kappa+1)$ votes each of which approves exactly~$p$ and all the~$3\kappa$ candidates corresponding to~$A$. These votes give to~$p$ and every~$c(a)$ where $a\in A$ six points. \end{itemize} Let~$V$ be the multiset of the above created votes. Obviously, $\abs{V}=60\kappa^2+30\kappa+6$. We complete the construction by setting~$k=1$ and~$\ell=\kappa$. Obviously, we can construct the above instance in polynomial time. The SAV scores of the candidates are summarized in Table~\ref{tab-ccdc-sav-nph}. \begin{table} \centering{ \caption{A summary of the SAV scores of candidates in the election constructed in the proof of Theorem~\ref{thm-ccdc-sav-np-hard}.} \label{tab-ccdc-sav-nph} \begin{tabular}{ll}\\ \toprule candidates & SAV scores \\ \midrule $p$ & $18\kappa \cdot \frac{1}{2}+6=9\kappa+6$ \\[2mm] $c(a)$ & $12\kappa\cdot\frac{1}{4}+8\kappa-2+6=11\kappa+4$ \\[2mm] $c(H)$ & $3+12\kappa\cdot \frac{3}{4}=3+9\kappa$ \\ \bottomrule \end{tabular} } \end{table} We prove the correctness of the reduction as follows. $(\Rightarrow)$ Assume that there exists an exact $3$-set cover $\mathcal{H}'\subseteq \mathcal{H}$ of~$A$. We show that after removing the candidates corresponding to~$\mathcal{H}'$, the distinguished candidate~$p$ becomes the unique winner. Let $E=(C\setminus \candc{\mathcal{H}'}, V)$. First, after removing a candidate~$c(H)$ where $H\in \mathcal{H}$, the SAV score of~$p$ given by~$v(H, i)$, $i\in [6]$, increases from~$1/2$ to~$1$. As~$C_{\mathcal{H}'}$ contains exactly~$\kappa$ candidates, the removal of these candidates leading to~$p$ having an SAV score $9\kappa+6+6\kappa\cdot \frac{1}{2}=12\kappa+6$ in~$E$. In addition, removing one candidate~$c(H)$ where~$H\in \mathcal{H}$ increases the score of each~$c(a)$ such that~$a\in H$ by $12\kappa\cdot (1/3-1/4)=\kappa$, because after removing~$c(H)$ the vote~$v(a)$ approves~$3$ candidates (it approves~$4$ candidates in advance). As~$\mathcal{H}'$ is an exact $3$-set cover, the SAV score of each~$c(a)$ where~$a\in A$ increases to $11\kappa+4+\kappa=12\kappa+4$ in~$E$. Analogously, we can show that the SAV score of every~$c(H)$ where $H\in \mathcal{H}\setminus \mathcal{H}'$ is $3+9\kappa+\frac{1}{12}\cdot 3\cdot 12\kappa=12\kappa+3$ in~$E$. Therefore,~$p$ becomes the SAV unique-winner of~$E$. $(\Leftarrow)$ Assume that there exists~$C'\subseteq C\setminus \{p\}$ of at most $\ell=\kappa$ candidates so that~$p$ becomes the winner of $E=(C\setminus C', V)$. Observe first that~$C'$ contains at least one candidate from~$C(\mathcal{H})$. 
The reason is that if this is not the case, the SAV score of~$p$ in~$E$ can be at most $9\kappa+\frac{6(3\kappa+1)}{2\kappa+1}< 9\kappa+9$, but there exists~$c(a)\in \candc{A}\setminus C'$ of SAV score at least $11\kappa+4$, which is larger than $9\kappa+9$ given $\kappa\geq 3$. This contradicts that~$p$ uniquely wins~$E$. Then, to complete the proof, we claim that~$C'$ does not contain any candidate from~$\candc{A}$ and, moreover,~$C'$ contains exactly~$\kappa$ candidates. The claim holds, since otherwise~$C'$ contains at most $\kappa-1$ candidates from~$\candc{\mathcal{H}}$, which leads to some~$c(a)$ where $a\in A$ having at least the same SAV score as~$p$, and hence contradicts that~$p$ uniquely wins~$E$. To verify this, we distinguish the following cases. Let $C'\cap \candc{\mathcal{H}}=\candc{\mathcal{H}'}$ where $\mathcal{H}'\subseteq \mathcal{H}$ and let $C'\cap \candc{A}=\candc{A'}$ where $A'\subseteq A$. In other words,~$\mathcal{H}'$ (resp.~$A'$) is the subcollection of~$\mathcal{H}$ (resp.\ subset of~$A$) corresponding to the candidates from~$\candc{\mathcal{H}}$ (resp.~$\candc{A}$) contained in~$C'$.
\begin{itemize}
\item Case~1. $\abs{\mathcal{H}'}=\kappa-1$

In this case, the SAV score of~$p$ in~$E$ can be at most $9\kappa+6(\kappa-1)\cdot \frac{1}{2}+\frac{6(3\kappa+1)}{3\kappa}< 12\kappa+4$. As $\kappa\geq 3$,~$C'$ contains at least one candidate $c(H)\in \candc{\mathcal{H}}$ where $H\in \mathcal{H}'$. Then, by the construction of the votes, there exists at least one candidate~$c(a)\in \candc{A}\setminus C'$ such that $a\in H$ whose SAV score in~$E$ is at least $11\kappa+4+12\kappa (\frac{1}{3}-\frac{1}{4})=12\kappa+4$. However, this contradicts that~$p$ uniquely wins~$E$.

\item Case~2. $\abs{\mathcal{H}'}\leq \kappa-2$ and $\abs{\mathcal{H}'}\geq \frac{2\kappa}{3}$

In this case, the SAV score of~$p$ in~$E$ can be at most $9\kappa+6(\kappa-2)\cdot \frac{1}{2}+\frac{6(3\kappa+1)}{2\kappa+2}< 12\kappa+3$. As $\abs{\mathcal{H}'}\geq \frac{2\kappa}{3}$ and $\abs{C'}\leq \kappa$, we know that~$\abs{A'}\leq \frac{\kappa}{3}$. As each $a\in A$ is contained in at most three elements of~$\mathcal{H}'$ and every element of~$\mathcal{H}'$ is a $3$-subset, we know that~$\mathcal{H}'$ covers at least $\frac{2\kappa}{3}$ elements of~$A$. This implies that there exists at least one candidate~$c(a)\in \candc{A}\setminus C'$ such that $a\in A\setminus A'$ and~$a$ is contained in at least one~$H\in \mathcal{H}'$. By the construction of the votes, the deletion of a candidate~$c(H)$ with $a\in H\in \mathcal{H}'$ increases the SAV score of~$c(a)$ by $12\kappa (\frac{1}{3}-\frac{1}{4})=\kappa$. It follows that the SAV score of the candidate~$c(a)$ in~$E$ is at least $11\kappa+4+\kappa=12\kappa+4$. However, this contradicts that~$p$ uniquely wins~$E$.

\item Case~3. $\abs{\mathcal{H}'}\leq \frac{2\kappa}{3}-1$

In this case, the SAV score of~$p$ minus that of any candidate $c(a)\in \candc{A}\setminus C'$ where $a\in A\setminus A'$ in~$E$ is at most $9\kappa+6\cdot (\frac{2\kappa}{3}-1)\cdot \frac{1}{2}-(\frac{12\kappa}{4}+8\kappa-2)=-1$, which contradicts that~$p$ uniquely wins~$E$.
\end{itemize}
By the claim, we know that~$C'$ consists of exactly~$\kappa$ candidates from~$\candc{\mathcal{H}}$, i.e., $\abs{\mathcal{H}'}=\kappa$ and $A'=\emptyset$. By an analysis similar to the one above, we know that the SAV score of~$p$ in~$E$ is $9\kappa+6+3\kappa=12\kappa+6$.
This implies that for every $a\in A$, $\candc{\mathcal{H}'}$ contains at most one candidate~$c(H)$ such that $a\in H\in \mathcal{H}'$, since otherwise the SAV score of~$c(a)$ in~$E$ is at least $11\kappa+4+12\kappa \cdot (\frac{1}{2}-\frac{1}{4})=14\kappa+4$, which contradicts that~$p$ uniquely wins~$E$. It directly follows that~$\mathcal{H}'$ is a set packing, i.e., no two elements of~$\mathcal{H}'$ intersect. Then, from the facts that $\abs{\mathcal{H}'}=\kappa$, every $H\in\mathcal{H}'$ is a $3$-set, and $\abs{A}=3\kappa$, it follows that~$\mathcal{H}'$ covers~$A$. Thus, the instance of {\prob{RX3C}} is a {Yes-instance}.

\begin{itemize}
\item {\probb{CCDC}{NSAV}}
\end{itemize}

Our reduction for {\probb{CCDC}{NSAV}} is obtained from the above reduction for {\probb{CCDC}{SAV}} by adding $n\cdot m^2+\kappa$ new candidates who are not approved by any vote. Here,~$m$ and~$n$ are respectively the number of candidates and the number of votes created in the instance of {\probb{CCDC}{SAV}}. Then, by Lemma~\ref{lem-relation-sav-nsav}, we know that for any $C'\subseteq C$ such that $\abs{C'}\leq \kappa$, a candidate has the unique highest SAV score in $(C\setminus C', V)$ if and only if it has the unique highest NSAV score in $(C\setminus C', V)$. This implies that the instance of {\probb{CCDC}{SAV}} is a {Yes-instance} if and only if the constructed instance of {\probb{CCDC}{NSAV}} is a {Yes-instance}.
\end{proof}

Now we consider the three nonadditive rules ABCCV, PAV, and MAV. Theorem~\ref{thm-cc-abccv-pav-co-np} already implies the {{\sf{coNP}-hardness}} of {\prob{CCDC}} for ABCCV and PAV. When $k=1$, however, this hardness vanishes for ABCCV and PAV. In fact, in this special case, {\probb{CCDC}{ABCCV}} and {\probb{CCDC}{PAV}} are polynomial-time solvable because CCDC for single-winner AV is polynomial-time solvable and ABCCV, PAV, and AV are identical when treated as single-winner voting rules. However, for MAV, we can establish an {{\np}-{\sf{hardness}}} reduction inspired by Lemma~\ref{lem-a}.

\begin{theorem}
\label{thm-ccdc-mav-nph-k-1}
{\probb{CCDC}{MAV}} is {{\np}-{\sf{hard}}} even when $k=1$, every vote approves three candidates, and every candidate is approved by at most three votes.
\end{theorem}

\begin{proof}
We prove the theorem by a reduction from {\prob{RX3C}}. Let $(A, \mathcal{H})$ be an instance of {\prob{RX3C}} where $\abs{A}=\abs{\mathcal{H}}=3\kappa>0$. We create an instance $((C, V), k, J, \ell)$ of {\probb{CCDC}{MAV}} as follows. First, we create five candidates~$p$,~$d_1$,~$d_2$,~$d_3$, and~$d_4$. Then, for every $H\in \mathcal{H}$, we create one candidate~$c(H)$. Let $C=\{p, d_1, d_2, d_3, d_4\}\cup \{c(H) \mymid H\in \mathcal{H}\}$. Let $J=\{p\}$, $k=1$, and $\ell=\kappa$. We create the following votes. First, we create two votes $v_1=\{p, d_1, d_2\}$ and $v_2=\{p, d_3, d_4\}$. Then, for every $a\in A$, we create one vote $v(a)=\{c(H) \mymid a\in H\in \mathcal{H}\}$ which approves exactly the three candidates corresponding to the $3$-subsets in~$\mathcal{H}$ containing~$a$. Let~$V$ be the set of the above $3\kappa+2$ votes. By Lemma~\ref{lem-a}, all candidates are tied as MAV single winners. The above instance of {\probb{CCDC}{MAV}} clearly can be constructed in polynomial time. It remains to show the correctness of the reduction.

$(\Rightarrow)$ Assume that there is an exact $3$-set cover $\mathcal{H}'\subseteq \mathcal{H}$ of~$A$. Let $C'=\{c(H) \mymid H\in \mathcal{H}'\}$ be the set of the~$\kappa$ candidates corresponding to~$\mathcal{H}'$. Let $E=(C\setminus C', V)$.
As~$\mathcal{H}'$ is an exact set cover of~$A$, every vote~$v(a)$ approves exactly two candidates in~$C\setminus C'$. Then,~$v_1$ and~$v_2$ become the only two votes approving the maximum number of candidates in~$E$. As~$p$ is the only candidate approved by both~$v_1$ and~$v_2$, due to Lemma~\ref{lem-a}, $\{p\}$ is the unique MAV winning $1$-committee of~$E$.

$(\Leftarrow)$ Assume that there exists $C'\subseteq C$ of at most $\ell=\kappa$ candidates so that $\{p\}$ is the unique MAV winning $1$-committee of $E=(C\setminus C', V)$. Let $\mathcal{H}'=\{H\in \mathcal{H} \mymid c(H)\in C'\}$. By Lemma~\ref{lem-a}, this means that for every vote $v(a)$ where $a\in A$, at least one of the candidates approved in $v(a)$ is contained in~$C'$. By the definition of~$v(a)$, this means that~$\mathcal{H}'$ contains at least one~$H$ such that $a\in H$. As this holds for all $a\in A$, we conclude that~$\mathcal{H}'$ covers~$A$. As $\abs{\mathcal{H}'}\leq \abs{C'}\leq \kappa$, we know that~$\mathcal{H}'$ is an exact set cover of~$A$.
\end{proof}

However, when~$k$ increases to two, the complexity of {\prob{CCDC}} for ABCCV and PAV radically changes, as implied by the following theorem.

\begin{theorem}
\label{thm-ccdc-abccv-pav-nph-k-2}
Let~$\varphi$ be an $\omega$-Thiele rule such that $\omega(2)<2\omega(1)$. Then, {\probb{CCDC}{$\varphi$}} is {{\np}-{\sf{hard}}} even when $k=2$, there is only one distinguished candidate, and every vote approves at most two candidates.
\end{theorem}

\begin{proof}
We prove the theorem by a reduction from {\prob{Clique}} restricted to regular graphs. Let $(G, \kappa)$ be an instance of {\prob{Clique}}, where $G=(\vset, \eset)$ is a regular graph. Let~$t$ be the degree of vertices in~$G$. We create an instance $((C, V), k, J, \ell)$ of {\probb{CCDC}{$\varphi$}} as follows. First, we create one candidate~$p$. Then, for every vertex $u\in \vset$, we create one candidate denoted still by the same symbol for simplicity. Let $C=\vset\cup \{p\}$ and let $J=\{p\}$. In addition, let $k=2$ and $\ell=\abs{\vset}-\kappa$. We create the following votes. First, we create a multiset~$V_p$ of~$t$ votes each of which approves exactly~$p$. Then, for every edge $\edge{u}{u'}\in \eset$, we create one vote $v(\edge{u}{u'})=\{u, u'\}$. Let~$V$ be the set of the $t+\abs{\eset}$ votes created above. This completes the construction of the instance of {\probb{CCDC}{$\varphi$}}, which can be done in polynomial time. In the following, we prove the correctness of the reduction.

$(\Rightarrow)$ Assume that~$G$ has a clique~$\vset'$ of size~$\kappa$. Let $E=(\{p\}\cup \vset', V)$. By the construction of the votes and candidates, every candidate in~$C$ is approved by exactly~$t$ votes. Given $\omega(2)<2\omega(1)$, the largest possible~$\varphi$ score of any $2$-committee is~$2t\cdot \omega(1)$, and this is achieved by any~$2$-committee containing the distinguished candidate. As~$\vset'$ is a clique, for any $2$-committee $\{u, u'\}$ such that $u, u'\in \vset'$, the vote $v(\edge{u}{u'})$ approves both~$u$ and~$u'$, implying that the~$\varphi$ score of this committee can be at most $2(t-1)\cdot \omega(1)+\omega(2)$, which is strictly smaller than $2t\cdot \omega(1)$ when $\omega(2)<2\omega(1)$. Therefore, we know that all~$\varphi$ winning $2$-committees of~$E$ contain~$p$.

$(\Leftarrow)$ Assume that there exists a subset $C'\subseteq \vset$ of cardinality at most $\ell=\abs{\vset}-\kappa$ so that all~$\varphi$ winning $2$-committees of $E=(C\setminus C', V)$ contain the distinguished candidate~$p$. Let $\vset'=\vset\setminus C'$.
Clearly, $\abs{\vset'}\geq \kappa$. We claim that~$\vset'$ is a clique in~$G$. Assume, for the sake of contradiction, that this is not the case. Then, there exist distinct $u, u'\in \vset'$ so that~$u$ and~$u'$ are not adjacent in~$G$. Then, according to the construction of the votes and candidates, the~$\varphi$ score of $\{u, u'\}$ is~$2t\cdot \omega(1)$ in~$E$, implying that $\{u, u'\}$ is a~$\varphi$ winning $2$-committee of~$E$. However, this contradicts that every~$\varphi$ winning $2$-committee of~$E$ contains~$p$.
\end{proof}

As {\prob{Clique}} restricted to regular graphs is {{\sf{W[1]-hard}}} with respect to~$\kappa$~\citep{DBLP:journals/cj/Cai08,DBLP:conf/iwpec/Marx04,DBLP:conf/cats/MathiesonS08}, the proof of Theorem~\ref{thm-ccdc-abccv-pav-nph-k-2} implies that for any $\omega$-Thiele rule~$\varphi$ such that $\omega(2)<2\omega(1)$, it holds that {\probb{CCDC}{$\varphi$}} is {{\sf{W[1]-hard}}} with respect to the number of candidates not deleted even when $k=2$, there is only one distinguished candidate, and every vote approves at most two candidates. As ABCCV and PAV are both such $\omega$-Thiele rules, these intractability results hold for both rules.

\begin{corollary}
\label{cor-ccdc-pav-abccv-wa-hard-dual}
For $\varphi\in \{{\emph{\text{ABCCV}}}, {\emph{\text{PAV}}}\}$, {\probb{CCDC}{$\varphi$}} is {{\sf{W[1]-hard}}} when parameterized by the number of candidates not deleted. Moreover, this holds even when $k=2$, there is only one distinguished candidate, and every vote approves at most two candidates.
\end{corollary}

\section{Some Fixed-Parameter Algorithms}
\label{sec-fpt}

In the previous sections, we showed that manipulation and control problems are generally computationally hard, with only a few exceptions. In this section, we consider these problems from the parameterized complexity point of view. An important parameter that has been frequently studied in voting problems is the number of candidates (see, e.g.,~\citep{DBLP:journals/tcs/BredereckFNST20,DBLP:journals/jacm/ConitzerSL07,DBLP:journals/jair/FaliszewskiHHR09,DBLP:conf/ecai/Yang14}). In many real-world applications, this parameter is small~\cite{Fishburm05,DBLP:conf/aldt/MatteiW13}. It is easy to see that {\prob{CCAC}} and {\prob{CCDC}} for all rules studied in this paper are {{\sf{FPT}}} with respect to this parameter: we enumerate all possible choices of at most~$\ell$ candidates to add ({\probb{CCAC}{$\varphi$}}) or to delete ({\probb{CCDC}{$\varphi$}}), and check whether at least one of the enumerations leads to a ``{Yes}''-answer. Another natural parameter is the number of votes. It is easy to see that {\probb{CCAV}{$\varphi$}} and {\probb{CCDV}{$\varphi$}} where $\varphi\in \{\text{AV}, \text{SAV}, \text{NSAV}\}$ are {{\sf{FPT}}} with respect to this parameter: we enumerate all possible choices of at most~$\ell$ votes to add ({\probb{CCAV}{$\varphi$}}) or to delete ({\probb{CCDV}{$\varphi$}}), and check whether at least one of the enumerations leads to a ``{Yes}''-answer. For manipulation, we have similar results.

\begin{theorem}
\label{thm-manipulation-fpt-wrt-candidate}
{\probb{CBCM}{AV}} and {\probb{SBCM}{AV}} are {{\sf{FPT}}} with respect to the number of candidates~$m$. More precisely, they can be solved in $\bigos{2^m}$ time.
\end{theorem}

\begin{proof}
Let $I=((C, V\cup V_{\text{M}}), \w)$ be an instance of {\probb{CBCM}{AV}} (resp.~{\probb{SBCM}{AV}}). Let $m=\abs{C}$ be the number of candidates, and let $k=\abs{\w}$ be the size of the winning committee~$\w$.
We enumerate all subsets $C'\subseteq C$ of at most~$k$ candidates. For each enumerated $C'\subseteq C$, we let all manipulators approve only candidates in~$C'$, i.e., we replace every $v\in V_{\text{M}}$ with $v'=C'$. Let $V'=\{v' \mymid v\in V_{\text{M}}\}$ be the multiset of these votes of manipulators. Let~$E$ be the election after the manipulators turn to approve~$C'$. Then, the AV scores of all candidates in~$E$ are determined. Recall that $W^+_{\text{AV}, E}$, $L_{\text{AV}, E}$, and~$W_{\text{AV}, E}$ are respectively the subset of candidates that are contained in all AV winning $k$-committees of~$E$, the subset of candidates none of which is contained in any AV winning $k$-committees of~$E$, and the subset of the remaining candidates. For notational brevity, we drop the subindices ${\text{AV}, E}$ from the three notions. Recall that these three subsets can be computed in polynomial time as follows: order all candidates linearly according to their AV scores in~$E$, from the highest to the lowest with ties being broken arbitrarily, and let~$s$ be the AV score of the $k$-th candidate in the order; then,~$W^+$ consists of all candidates with AV scores strictly larger than~$s$,~$W$ consists of all candidates with AV score exactly~$s$, and~$L$ consists of all candidates with AV scores strictly smaller than~$s$. Clearly, every~$k$-committee contains~$W^+$ and any arbitrary $k-\abs{W^+}$ candidates from~$W$ is an AV winning $k$-committee of~$E$. Therefore, for {\probb{CBCM}{AV}}, if for all $v\in V_{\text{M}}$ it holds that $\abs{W^+\cap v}+(k-\abs{W^+}-\abs{W\setminus v})>\abs{v\cap w}$ we immediately conclude that the given instance~$I$ is a {Yes-instance}; otherwise, we discard the currently enumerated~$C'$. For {\probb{SBCM}{AV}}, we conclude that the given instance is a {Yes-instance} if the following conditions are satisfied simultaneously: \begin{itemize} \item $L\cap v\cap w=\emptyset$ for all $v\in V_{\text{M}}$. If this is not satisfied, then there exists a manipulative vote~$v$ so that at least one of candidates from $v\cap w$ is not contained in any AV winning $k$-committees of~$E$. \item Either $v\cap \w\cap W=\emptyset$ for all $v\in V_{\text{M}}$ or $\abs{W^+\cup W}=k$. In fact, if this condition fails, there exists $v\in V_{\text{M}}$ so that $v\cap \w\cap W\neq \emptyset$ and $\abs{W^+\cup W}>k$. Let~$c$ be any arbitrary candidate in $v\cap \w\cap W$, then any $k$-committee containing~$W^+$ and any arbitrary $k-\abs{W^+}$ candidates from $W\setminus \{c\}$ is an AV winning $k$-committee of~$E$. However, such a winning committee is not more preferred by~$v$. \end{itemize} If at least one of the above two conditions fails, we discard the enumerated~$C'$. For both {\probb{CBCM}{AV}} and {\probb{SBCM}{AV}}, if all enumerations are discarded, we conclude that the given instance~$I$ is a {No-instance}. Regarding time complexity, as we have at most~$2^m$ enumerations~$C'$ to consider, the algorithms run in~$\bigos{2^m}$ time. \onlyfull{ For the bribery problem, we guess all possible~$k$-committee $w\subseteq C$ such that $J\subseteq w$. Then, for every voter which is more satisfied with~$\w$ than with the current winning~$k$-committee, we let the this voter approves exactly the candidates in~$\w$. Then we return ``{Yes}'' if doing so leading to~$\w$ to be the winning $k$-committee. If this is not the case, we discard this guess and proceed to the next. 
If no guess gives us a ``{Yes}'' answer, we conclude that the given instance is a {No-instance}.}
\end{proof}

Now we present a general algorithm for {\probb{CBCM}{$\varphi$}} and {\probb{SBCM}{$\varphi$}} for all polynomial computable additive rules. A difficulty is that it is not necessarily the case that all manipulators need to approve the same candidates in order to improve the result in their favor, as already shown in Example~\ref{ex-1}. However, as the number of candidates is bounded, we can enumerate all possible collections of winning $k$-committees in the desired running time, and exploit ILP to formulate the question of assigning approved candidates to manipulators.

\begin{theorem}
\label{thm-manipulation-sav-nsav-fpt-wrt-candidate}
For all polynomial computable additive rules~$\varphi$, {\probb{CBCM}{$\varphi$}} and {\probb{SBCM}{$\varphi$}} are {{\sf{FPT}}} with respect to the number of candidates.
\end{theorem}

\begin{proof}
Let $I=((C, V), \w, V_{\text{M}})$ be an instance of {\probb{CBCM}{$\varphi$}}/{\probb{SBCM}{$\varphi$}}. Let $m=\abs{C}$ and $k=\abs{\w}$. We first enumerate all collections~$\mathcal{W}$ of $k$-committees of~$C$. Each enumerated~$\mathcal{W}$ corresponds to a guess that~$\mathcal{W}$ is exactly the set of~$\varphi$ winning $k$-committees of the final election. There are at most $2^{m\choose k}$ such different collections to consider. Let~$\mathcal{W}$ be an enumerated collection. If there exists a committee $\w'\in \mathcal{W}$ and a vote $v\in V_{\text{M}}$ so that~$v$ does not prefer~$\w'$ to~$\w$ (recall that for {\probb{CBCM}{$\varphi$}},~$v$ prefers~$\w'$ to $\w$ if and only if $\abs{v\cap \w'}>\abs{v\cap \w}$, and for {\probb{SBCM}{$\varphi$}},~$v$ prefers~$\w'$ to~$\w$ if and only if $v\cap \w\subsetneq v\cap \w'$), we discard~$\mathcal{W}$. Otherwise, we determine if the manipulators can cast their votes so that the committees in~$\mathcal{W}$ are exactly the winning $k$-committees. Clearly, the given instance~$I$ is a {Yes-instance} if and only if there is at least one enumerated~$\mathcal{W}$ so that the answer to the question for~$\mathcal{W}$ is ``Yes''. In the following, we give an ILP formulation of the above question with a limited number of variables.

For each subset $S\subseteq C$, let~$V_{\text{M}}^S$ be the subset of votes from~$V_{\text{M}}$ which approve exactly the candidates in~$S$. That is, $V_{\text{M}}^S=\{v\in V_{\text{M}} \mymid v=S\}$. Let $n^S=\abs{V_{\text{M}}^S}$. For each $S, T\subseteq C$, we create a nonnegative integer variable~$x_{S\hspace{.2mm}\scalebox{.8}{\Arrow[.15cm]}\hspace{.2mm} T}$ which indicates the number of votes in $V_{\text{M}}^S$ that turn to approve exactly the candidates in~$T$. We create the following constraints. Recall that $\textsf{sc}(S, T)$ denotes the~$\varphi$ score of a committee~$T$ received from a vote approving exactly the candidates in~$S$, and $\textsf{sc}(V, T)=\sum_{v\in V}\textsf{sc}(v, T)$ is the~$\varphi$ score of the committee~$T$ received from votes in~$V$. As~$\varphi$ is polynomial computable, $\textsf{sc}(S,T)$ and $\textsf{sc}(V, T)$ can be computed in polynomial time for each fixed~$S\subseteq C$ and~$T\subseteq C$. For $T\subseteq C$, let $X(T)=\sum_{S\subseteq C}x_{S\hspace{.2mm}\scalebox{.8}{\Arrow[.15cm]}\hspace{.2mm} T}$, which indicates the total number of manipulators who are demanded to approve exactly the candidates in~$T$ in the final election.
\begin{itemize} \item First, we have the natural constraints $0\leq x_{S\hspace{.2mm}\scalebox{.8}{\Arrow[.15cm]}\hspace{.2mm} T}\leq n^S$ for every variable~$x_{S\hspace{.2mm}\scalebox{.8}{\Arrow[.15cm]}\hspace{.2mm} T}$. \item Second, for each $S\subseteq C$, we have that $\sum_{T\subseteq C}x_{S\hspace{.2mm}\scalebox{.8}{\Arrow[.15cm]}\hspace{.2mm} T}=n^S$. \item Third, as committees in~$\mathcal{W}$ are supposed to be the winning $k$-committees in the final election, they should have the same~$\varphi$ score. Therefore, for every $T, T'\in \mathcal{W}$, we have that \[\textsf{sc}(V, T)+\sum_{S\subseteq C}\textsf{sc}(S, T)\cdot X(S)=\textsf{sc}(V,T')+\sum_{S\subseteq C}\textsf{sc}(S, T')\cdot X(S).\] \item Let~$T$ be any arbitrary $k$-committee in~$\mathcal{W}$. To ensure that committees in~$\mathcal{W}$ are exactly the~$\varphi$ winning $k$-committees of the final election, for every $k$-committee $T'\subseteq C$ such that $T'\not\in \mathcal{W}$, we have \[\textsf{sc}(V, T)+\sum_{S\subseteq C}\textsf{sc}(S, T)\cdot X(S)>\textsf{sc}(V, T')+\sum_{S\subseteq C}\textsf{sc}(S, T')\cdot X(S).\] \end{itemize} The correctness of the ILP formulation is fairly easy to see. Due to Lenstra's algorithm for ILP~\citep{DBLP:journals/mor/Lenstra83}, the above ILP can be solved in {{\sf{FPT}}}-time in~$m$. \end{proof} For {\probb{SDCM}{$\varphi$}}, we could obtain the same fixed-parameter tractability result by utilizing a similar algorithm. \begin{theorem} \label{thm-new-manipulation-sav-nsav-fpt-wrt-candidate} For all polynomial computable additive rules~$\varphi$, {\probb{SDCM}{$\varphi$}} is {{\sf{FPT}}} with respect to the number of candidates. \end{theorem} \begin{proof} Let $I=(C, V, V_M, k)$ be an instance of {\probb{SDCM}{$\varphi$}}. Let $m=\abs{C}$. We first enumerate all collections~$\mathcal{W}$ of $k$-committees of~$C$. Each enumerated~$\mathcal{W}$ corresponds to a guess that~$\mathcal{W}$ is exactly the set of~$\varphi$ winning $k$-committees of the final election. In addition, let~$\mathcal{W}'$ denote the collection of all winning $k$-committees of~$\varphi$ at $(C, V)$. We first determine if~$\mathcal{W}$ stochastically dominates~$\mathcal{W}'$, which can be done in {{\sf{FPT}}}-time in~$m$ according to the definition of stochastic domination. If this is not the case, we discard this enumerated~$\mathcal{W}$, and proceed to the next one, if there are any. Otherwise, we further determine if it is possible for the manipulators to cast their votes to make~$\mathcal{W}$ exactly the collection of winning $k$-committees, which can be done in {{\sf{FPT}}}-time by solving the ILP described in the proof of Theorem~\ref{thm-manipulation-sav-nsav-fpt-wrt-candidate}. If the ILP has a feasible solution, we conclude that~$I$ is a {Yes-instance}. If all possible collections of~$\mathcal{W}$ are enumerated without providing us with a conclusion on~$I$, we conclude that~$I$ is a {No-instance}. \end{proof} In the following, we study fixed-parameter algorithms for election control by adding/deleting voters with respect to the number of candidates. In particular, we show that a natural generalization of both {\probb{CCAV}{$\varphi$}} and {\probb{CCDV}{$\varphi$}} formally defined below is {{\sf{FPT}}} with respect to this parameter. 
\EP {Constructive Control by Adding and Deleting Voters for~$\varphi$}{\probb{CCADV}{$\varphi$}} {A set~$C$ of candidates, two multisets~$V$ and~$U$ of votes over~$C$, a positive integer~$k\leq \abs{C}$, a nonempty subset~$J\subseteq C$ of at most~$k$ distinguished candidates, and two nonnegative integers~$\ell_{\text{AV}}$ and $\ell_{\text{DV}}$ such that $\ell_{\text{AV}}\leq \abs{U}$ and~$\ell_{\text{DV}}\leq \abs{V}$.} {Are there $V'\subseteq V$ and~$U'\subseteq U$ such that~$|V'|\leq \ell_{\text{DV}}$, $|U'|\leq \ell_{\text{AV}}$, and~$J\subseteq \w$ for all $\w\in \varphi(C, V\setminus V'\cup U', k)$?} Obviously, both {\probb{CCAV}{$\varphi$}} and {\probb{CCDV}{$\varphi$}} are special cases of {\probb{CCADV}{$\varphi$}}. Now we showcase our {{\sf{FPT}}}-results for {\probb{CCADV}{$\varphi$}}. We first consider polynomial computable additive rules. \begin{theorem} \label{thm-ccav-ccdv-av-sav-nsav-fpt-candidates} For~$\varphi$ being a polynomial computable additive rule, {\probb{CCADV}{$\varphi$}} is {{\sf{FPT}}} with respect to the number of candidates. \end{theorem} \begin{proof} Let $\varphi$ be a polynomial computable additive rule, and let $I=(C, V, U, k, J, \ell_{\text{AV}}, \ell_{\text{DV}})$ be an instance of {\probb{CCADV}{$\varphi$}}. Let $m=\abs{C}$. To solve the instance, we first guess a subset~$A\subseteq C\setminus J$ of at least~$m-k$ candidates and a candidate~$b\in J$. The guessed candidate~$b$ is expected to be a candidate in~$J$ with the minimum~$\varphi$ score in the final election among all candidates in~$J$. In addition, the guessed candidates in~$A$ are expected to be the candidates not in~$J$ and have strictly smaller scores than that of~$b$ in the final election. To be more precise, we split the given instance~$I$ into {{\sf{FPT}}}-many subinstances, each of which takes as input~$I$, a subset $A\subseteq C\setminus J$ such that $\abs{A}\geq m-k$, and a candidate~$b\in J$, and asks whether there exist $V'\subseteq V$ and $U'\subseteq U$ such that $\abs{V'}\leq \ell_{\text{DV}}$, $\abs{U'}\leq \ell_{\text{AV}}$, and in the election $(C, V\setminus V'\cup U')$ the candidate~$b$ has the smallest~$\varphi$ score among all candidates in~$J$, and all candidates in~$A$ have strictly smaller~$\varphi$ scores than that of~$b$. It is easy to see that~$I$ is a {Yes-instance} if and only if at least one of the subinstances is a {Yes-instance}. In the following, we show how to solve a subinstance associated with guessed~$b$ and~$A$ in {{\sf{FPT}}}-time by giving an ILP formulation with a bounded number of variables. For each subset~$C'\subseteq C$, we create two integer variable~$x_{C'}$ and~$y_{C'}$. So, there are~$2^{m+1}$ variables in total. Recall that for a candidate~$c\in C$ and a vote~$v\in V$, $\textsf{sc}(v, c)$ is the~$\varphi$ score of~$v$ given to~$c$, which can be computed in polynomial time. The variables~$x_{C'}$ and~$y_{C'}$ indicate respectively the number of deleted votes in~$\vaes{V}{C'}$ and the number of added votes in~$\vaes{U}{C'}$ in a desired solution. Recall that~$\vaes{V}{C'}$ and~$\vaes{U}{C'}$ are respectively the multiset of votes in~$V$ and the multiset of votes in~$U$ that approve exactly the candidates in~$C'$. For every candidate~$c\in C$, let \[\textsf{sc}(c)=\sum_{v\in V}{\textsf{sc}}(v, c)-\sum_{C'\subseteq C}{\textsf{sc}}(C', c)\cdot x_{C'}+\sum_{C'\subseteq C}{\textsf{sc}}(C', c)\cdot y_{C'}.\] The constraints are as follows. 
\begin{enumerate}
\item[(1)] As we delete at most~$\ell_{\text{DV}}$ registered votes and add at most~$\ell_{\text{AV}}$ unregistered votes, we have that $\sum_{C'\subseteq C}x_{C'}\leq \ell_{\text{DV}}$ and $\sum_{C'\subseteq C}y_{C'}\leq \ell_{\text{AV}}$.

\item[(2)] For $C'\subseteq C$, we naturally have~$0\leq x_{C'}\leq \abs{\vaes{V}{C'}}$ and $0\leq y_{C'}\leq \abs{{\vaes{U}{C'}}}$.

\item[(3)] To ensure that~$b$ has the minimum~$\varphi$ score among all candidates in~$J$, for each $b'\in J\setminus \{b\}$, we have that $\textsf{sc}(b')\geq \textsf{sc}(b)$.

\item[(4)] To ensure that all candidates in~$A$ have strictly smaller scores than that of~$b$, for each~$a\in A$, we have that~$\textsf{sc}(a)<\textsf{sc}(b)$.
\end{enumerate}

This ILP can be solved in {{\sf{FPT}}}-time with respect to~$m$~\cite{DBLP:journals/mor/Lenstra83}. As we have at most $m\cdot 2^m$ subinstances to solve, the whole algorithm runs in {{\sf{FPT}}}-time in~$m$.
\end{proof}

For some nonadditive rules considered in the paper, we can obtain similar results.

\begin{theorem}
\label{thm-ccadv-abccv-pav-fpt-m}
For each $\omega$-Thiele rule~$\varphi$, {\probb{CCADV}{$\varphi$}} is {{\sf{FPT}}} with respect to the number of candidates.
\end{theorem}

\begin{proof}
Let $I=(C, V, U, k, J, \ell_{\text{AV}}, \ell_{\text{DV}})$ be an instance of {\probb{CCADV}{$\varphi$}}. Let $m=\abs{C}$. We derive {{\sf{FPT}}}-algorithms for the problems stated in the theorem based on Lenstra's theorem on ILP. First, we compute~$\mathcal{C}_{k, C}(J)$, which can be done in~$\bigo{2^m}$ time. Then, we split the instance~$I$ into {{\sf{FPT}}}-many subinstances, each taking as input~$I$ and a nonempty $\mathcal{W}\subseteq \mathcal{C}_{k, C}(J)$. In particular,~$\mathcal{W}$ is supposed to be exactly the collection of all winning $k$-committees of~$\varphi$ at the final election. The question of the subinstance is whether we can add at most~$\ell_{\text{AV}}$ unregistered votes in~$U$ and delete at most~$\ell_{\text{DV}}$ registered votes in~$V$ so that in the resulting election all $k$-committees in~$\mathcal{W}$ have the same~$\varphi$ score which is higher than that of any $k$-committee not in~$\mathcal{W}$. Clearly,~$I$ is a {Yes-instance} if and only if at least one of the subinstances is a {Yes-instance}. In the following, we give an ILP formulation for the subinstance associated with a nonempty~$\mathcal{W}\subseteq \mathcal{C}_{k, C}(J)$.

Similar to the proof of Theorem~\ref{thm-ccav-ccdv-av-sav-nsav-fpt-candidates}, for each subset $C'\subseteq C$, we create two integer variables~$x_{C'}$ and~$y_{C'}$. Regarding the constraints, we first adopt the constraints described in~(1) and~(2) in the proof of Theorem~\ref{thm-ccav-ccdv-av-sav-nsav-fpt-candidates}. Then, we create the following constraints. For each $k$-committee~$\w$, let ${\textsf{sc}}(\w)$ denote the~$\varphi$ score of~$\w$ with respect to the multiset~$V$ of registered votes. To ensure that all $k$-committees in~$\mathcal{W}$ have the same score in the final election, for every $\w, \w'\in \mathcal{W}$, we require
\[{\sf{sc}}(\w)+\sum_{\substack{\w\cap C'\neq\emptyset\\ C'\subseteq C}} \omega(\abs{\w\cap C'})\cdot (y_{C'} - x_{C'}) = {\sf{sc}}(\w')+\sum_{\substack{\w'\cap C'\neq\emptyset\\ C'\subseteq C}} \omega(\abs{\w'\cap C'})\cdot (y_{C'}-x_{C'}).\]
(If~$\mathcal{W}$ consists of only one $k$-committee, we do not create such constraints.)
Finally, to ensure that the~$\varphi$ score of every $k$-committee in~$\mathcal{W}$ is higher than that of any $k$-committee not in~$\mathcal{W}$, we fix a $k$-committee~$\w$ in~$\mathcal{W}$, and for every $k$-committee~$w'$ not in~${\mathcal{W}}$, we require \[{\sf{sc}}(w)+\sum_{\substack{w\cap C'\neq\emptyset\\ C'\subseteq C}} \omega(\abs{w\cap C'})\cdot (y_{C'}-x_{C'}) > {\sf{sc}}(w')+\sum_{\substack{w'\cap C'\neq\emptyset\\ C'\subseteq C}} \omega(\abs{\w'\cap C'})\cdot (y_{C'}-x_{C'}).\] As we have $2^{m+1}$ variables, by a theorem of Lenstra~\cite{DBLP:journals/mor/Lenstra83}, the above ILP can be solved in {{\sf{FPT}}}-time in~$m$. \end{proof} Now we present the algorithms for MAV. In the proof of Theorem~\ref{thm-ccav-ccdv-mav-polynomial-time-solvable-k-constant}, we presented a polynomial-time algorithm for {\probb{CCDV}{MAV}} for~$k$ being a constant. The algorithm runs in $\bigos{m^k}=\bigos{m^m}$ time, and hence the following corollary holds. \begin{corollary} \label{cor-ccdv-mav-fpt-m} {\probb{CCDV}{MAV}} is {{\sf{FPT}}} with respect to the number of candidates. \end{corollary} For {\probb{CCAV}{MAV}}, we present a natural {{\sf{FPT}}}-algorithm with respect to the same parameter. \begin{theorem} \label{thm-ccav-mav-fpt-m} {\probb{CCAV}{MAV}} is {{\sf{FPT}}} with respect to the number of candidates. \end{theorem} \begin{proof} Let $I=(C, V, U, k, J, \ell)$ be an instance of {\probb{CCAV}{MAV}}. Observe that if two unregistered votes approve exactly the same candidates, we can remove any of them without changing the answer to the instance. In light of this observation, we assume that all unregistered votes are distinct, and thus $\abs{U}\leq 2^m$. We enumerate all subsets $U'\subseteq U$ of cardinality at most~$\ell$, and check if all MAV winning $k$-committees of the election $(C, V\cup U')$ contain~$J$ (this can be done in $\bigos{2^m}$ time by enumerating all $k$-committees). If this is the case for at least one of the enumerations, we conclude that the given instance~$I$ is a {Yes-instance}; otherwise, we conclude that~$I$ is a {No-instance}. \end{proof} Finally, we present color-coding based {{\sf{FPT}}}-algorithms for control by modifying candidate sets when parameterized by the number of voters plus the number of added or deleted candidates. As a matter of fact, our algorithm is for a natural combination of {\probb{CCAC}{$\varphi$}} and {\probb{CCDC}{$\varphi$}}, formally defined below. \EP {Constructive Control by Adding and Deleting Candidates for~$\varphi$}{\probb{CCADC}{$\varphi$}} {Two disjoint sets~$C$ and~$D$ of candidates, a multiset~$V$ of votes over~$C\cup D$, a positive integer $k\leq \abs{C}$, a nonempty subset $J\subseteq C$ of at most~$k$ distinguished candidates, and two nonnegative integers~$\ell_{\text{AC}}$ and~$\ell_{\text{DC}}$ such that $\ell_{\text{AC}}\leq \abs{D}$ and~$\ell_{\text{DC}}\leq \abs{C}$.} {Are there~$C'\subseteq C$ and~$D'\subseteq D$ such that $\abs{C'}\leq \ell_{\text{DC}}$, $\abs{D'}\leq \ell_{\text{AC}}$, and~$J\subseteq \w$ for all $\w\in \varphi(C\setminus C'\cup D', V, k)$?} The color-coding technique was first used to derive an {{\sf{FPT}}}-algorithm for the~$k$-{\prob{Path}} problem~\cite{DBLP:journals/jacm/AlonYZ95}. At a high level, this technique first randomly colors the ``units'' in the solution space with~$k$ different colors, and then utilizes dynamic programming to find a colored solution. 
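As a purely illustrative aside that is not part of our formal algorithm, the random-coloring step, specialized to our setting in which the candidates are the colored units, can be sketched in Python as follows; the function names are our own and are used nowhere else in the paper.
\begin{verbatim}
import random

# Illustrative sketch only: color each candidate with one of kappa colors
# and accept a candidate subset only if it is "colorful", i.e., its
# members receive pairwise distinct colors.
def random_coloring(candidates, kappa):
    return {c: random.randrange(kappa) for c in candidates}

def is_colorful(subset, coloring):
    colors = [coloring[c] for c in subset]
    return len(set(colors)) == len(colors)
\end{verbatim}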
Thanks to the theory of perfect hash functions, such a randomized algorithm can be derandomized without sacrificing fixed-parameter tractability. In our problems, the solutions are subsets of candidates. Hence, we first randomly color the candidates, and then we search for a solution in which no two candidates have the same color. To describe our algorithm formally, we need the following notions. For a universe~$X$ and a positive integer $\kappa\leq \abs{X}$, an $(X, \kappa)$-perfect class of hash functions is a set of functions $f_i: X\rightarrow [\kappa]$, $i\in [t]$, where~$t$ is an integer, such that for every $\kappa$-subset~$A\subseteq X$, there exists at least one~$f_i$, $i\in [t]$, such that $\{f_i(a) \mymid a\in A\}=[\kappa]$. It is known that there always exists an $(X, \kappa)$-perfect class of hash functions of cardinality at most~$g(\kappa)\cdot \log \abs{X}$ for some computable function~$g$ of~$\kappa$ and, moreover, such a class can be constructed in {{\sf{FPT}}}-time in~$\kappa$~\cite{DBLP:journals/jacm/AlonYZ95}. Our algorithms hinge upon algorithms for {\probb{$J$-CC}{$\varphi$}} running in {{\sf{FPT}}}-time in the number of voters.

\begin{theorem}
\label{thm-J-CC-FPT-n}
For $\varphi\in \{\memph{\text{ABCCV}}, \memph{\text{PAV}}, \memph{\text{MAV}}\}$, {\probb{$J$-CC}{$\varphi$}} is {{\sf{FPT}}} with respect to the number of voters.
\end{theorem}

So as not to distract the reader, we defer the proof of Theorem~\ref{thm-J-CC-FPT-n} to the Appendix. At a high level, our algorithms first compute the optimal score~$s$ of winning $k$-committees, which can be done in {{\sf{FPT}}}-time in the number of voters for all concrete rules considered in the paper~\citep{DBLP:journals/jair/BetzlerSU13,DBLP:conf/atal/MisraNS15,DBLP:conf/atal/YangW18}. Having this optimal score~$s$, the question is then whether there exists at least one $k$-committee which does not include all candidates in~$J$ and has score at least (ABCCV and PAV) or at most (MAV)~$s$. Obviously, the {\probb{$J$-CC}{$\varphi$}} instance is a {Yes-instance} if and only if the answer to this question is ``{No}''. We show that this question can be answered in {{\sf{FPT}}}-time by giving ILP formulations, analogous to those for solving the winner determination problems for these rules studied in~\citep{DBLP:journals/jair/BetzlerSU13,DBLP:conf/atal/MisraNS15,DBLP:conf/atal/YangW18}. Armed with Theorem~\ref{thm-J-CC-FPT-n}, we are ready to present our {{\sf{FPT}}}-algorithms for {\probb{CCADC}{$\varphi$}}.

\begin{theorem}
\label{thm-ccac-ccdc-many-rules-fpt-ell-plus-n}
For $\varphi\in \{\memph{\text{SAV}}, \memph{\text{NSAV}}, \memph{\text{ABCCV}}, \memph{\text{PAV}}, \memph{\text{MAV}}\}$, \probb{CCADC}{$\varphi$} is {{\sf{FPT}}} with respect to the combined parameter~$\ell_{\memph{\text{AC}}}+\ell_{\memph{\text{DC}}}+n$, where~$n$ is the number of voters.
\end{theorem}

\begin{proof}
We derive an algorithm for {\probb{CCADC}{$\varphi$}} as follows. Let $I=(C, D, V, k, J, \ell_{\text{AC}}, \ell_{\text{DC}})$ be an instance of {\probb{CCADC}{$\varphi$}}. Let $n=\abs{V}$ be the number of votes. First, we guess two nonnegative integers~$\ell'_{\text{AC}}\leq \ell_{\text{AC}}$ and $\ell'_{\text{DC}}\leq \ell_{\text{DC}}$.
Each guessed pair $\{\ell'_{\text{AC}}, \ell'_{\text{DC}}\}$ corresponds to a subinstance of~$I$ which asks whether there is a subset $C'\subseteq C$ of exactly~$\ell'_{\text{DC}}$ candidates and a subset $D'\subseteq D$ of exactly~$\ell'_{\text{AC}}$ candidates so that in the election restricted to $C\setminus C'\cup D'$ all candidates in~$J$ are contained in all winning $k$-committees. Obviously, there are polynomially many subinstances and, moreover,~$I$ is a {Yes-instance} if and only if at least one of the subinstances is a {Yes-instance}. To complete the proof, it suffices to show that we can solve each subinstance in {{\sf{FPT}}}-time with respect to the combined parameter, which is the focus of the remainder of the proof.

Let~$\{\ell'_{\text{AC}}, \ell'_{\text{DC}}\}$ be a guessed pair of integers. We construct a $(C,\ell'_{\text{DC}})$-perfect class~$\mathcal{F}$ of hash functions whenever $\ell'_{\text{DC}}\geq 1$, and construct a $(D, \ell'_{\text{AC}})$-perfect class~$\mathcal{G}$ of hash functions whenever $\ell'_{\text{AC}}\geq 1$. According to~\cite{DBLP:journals/jacm/AlonYZ95},~$\mathcal{F}$ and~$\mathcal{G}$ can be constructed in {{\sf{FPT}}}-time in~$\ell'_{\text{AC}}+\ell'_{\text{DC}}$. Our algorithm considers all pairs $(f, g)$ one by one where $f\in \mathcal{F}$ and $g\in \mathcal{G}$. If $\mathcal{F}= \emptyset$ (resp.\ $\mathcal{G}= \emptyset$), our algorithm considers only the functions in~$\mathcal{G}$ (resp.~$\mathcal{F}$) one by one. Let $(f, g)$ be a considered pair. For each~$i\in [\ell'_{\text{DC}}]$ and each $j\in [\ell'_{\text{AC}}]$, let~$C_i$ be the subset of candidates of~$C$ assigned the value~$i$ by~$f$, and let~$D_j$ be the subset of candidates of~$D$ assigned the value~$j$ by~$g$, i.e., $C_i=\{c\in C \mid f(c)=i\}$ and $D_j=\{c\in D \mid g(c)=j\}$. We aim to explore a feasible solution of the subinstance corresponding to $\{\ell'_{\text{AC}}, \ell'_{\text{DC}}\}$ which contains exactly one candidate from each~$C_i$, $i\in [\ell'_{\text{DC}}]$, and exactly one candidate from each~$D_j$, $j\in [\ell'_{\text{AC}}]$. With respect to such a solution, if there are two candidates~$c$ and~$c'$ from the same set~$C_i$ or~$D_j$ such that the voters approving them are exactly the same, then the two candidates~$c$ and~$c'$ are indistinguishable. In view of this observation, we partition each~$C_i$ (resp.~$D_j$) into~$2^n$ subsets~$\{C_i^{V'}\}_{V'\subseteq V}$ (resp.\ $\{D_j^{V'}\}_{V'\subseteq V}$), so that~$C_i^{V'}$ (resp.~$D_j^{V'}$) consists of all candidates $c\in C_i$ (resp.~$c\in D_j$) such that $V(c)=V'$. By the above discussion, for each~$C_i$ (resp.~$D_j$), it only matters which element from $\{C_i^{V'}\}_{V'\subseteq V}$ (resp.\ $\{D_j^{V'}\}_{V'\subseteq V}$) intersects the feasible solution. In light of this fact, for each~$C_i$ (resp.~$D_j$), we guess a~$V^i\subseteq V$ (resp.\ $U^j\subseteq V$) such that~$C_i^{V^i}\neq \emptyset$ (resp.\ $D_j^{U^j}\neq \emptyset$), which indicates that the feasible solution contains exactly one candidate in~$C_i$ which is from~$C_i^{V^i}$, and contains exactly one candidate in~$D_j$ which is from~$D_j^{U^j}$. As we have at most~$2^n$ choices for each~$C_i$ and each~$D_j$, and we have at most $\ell'_{\text{DC}}\leq \ell_{\text{DC}}$ many~$C_i$s and at most $\ell'_{\text{AC}}\leq \ell_{\text{AC}}$ many~$D_j$s to consider, there are in total at most $2^{n\cdot (\ell_{\text{AC}}+\ell_{\text{DC}})}$ combinations of guesses.
Each combination $\{\{C_i^{V^i}\}_{i\in [\ell'_{\text{DC}}]}, \{D_j^{U^j}\}_{j\in [\ell'_{\text{AC}}]}\}$ of guesses determines an instance $((C', V), k, J)$ of {\probb{$J$-CC}{$\varphi$}}, where~$C'$ is obtained from~$C$ by deleting any arbitrary candidate in~$C_i^{V^i}$ and including any arbitrary candidate in~$D_j^{U^j}$, for all $i\in [\ell'_{\text{DC}}]$ and all $j\in [\ell'_{\text{AC}}]$. Then, we check if the instance of {\probb{$J$-CC}{$\varphi$}} is a {Yes-instance}, which can be done in {{\sf{FPT}}}-time for ABCCV, MAV, and PAV with respect to~$n$ (see Theorem~\ref{thm-J-CC-FPT-n}), and can be trivially done in polynomial time for SAV and NSAV . If at least one of the no more than $2^{n\cdot (\ell_{\text{AC}}+\ell_{\text{DC}})}$ instances of {\probb{$J$-CC}{$\varphi$}} is a {Yes-instance}, the subinstance corresponding to~$\{\ell'_{\text{AC}}, \ell'_{\text{DC}}\}$ is a {Yes-instance}; otherwise, we consider the next pair $(f', g')$ where $f'\in \mathcal{F}$ and $g'\in \mathcal{G}$, if there are any. If none of the pairs $(f, g)$ where $f\in \mathcal{F}$ and $g\in \mathcal{G}$ results in a conclusion that the subinstance corresponding to~$\{\ell'_{\text{AC}}, \ell'_{\text{DC}}\}$ is a {Yes-instance}, we conclude that the subinstance corresponding to~$\{\ell'_{\text{AC}}, \ell'_{\text{DC}}\}$ is a {No-instance}. \end{proof} \section{Concluding Remarks} \label{sec-conclusion} In this paper, we have studied the complexity of several manipulation\onlyfull{, bribery,} and control problems for numerous sought-after {ABMV} rules, namely AV, SAV, NSAV, ABCCV, PAV, and MAV. We showed that these rules generally resist these strategy problems by giving many intractability results. However, it should be pointed out that our study is purely based on worst-case analysis. Whether these problems are difficult to solve in practice demands further investigations. In addition to the hardness results, we also derived several {{\sf{FPT}}}-algorithms with respect to natural parameters and polynomial-time algorithms for some special cases of these problems. We refer to Table~\ref{tab-results-summary} for a summary of our results. { Our study invites many interesting directions for future research. First, in the control problems studied in this paper, the goal of the external agent is to include the given distinguished candidates into all winning $k$-committees. In real-world applications, a tie-breaking scheme is applied so that only one winning $k$-committee is selected. When the external agent knows which deterministic tie-breaking scheme is used, a more natural goal is to make the distinguished candidates be included in the unique winning $k$-committee with respect to the tie-breaking scheme. It is interesting to see whether the complexity of the problems changes with respect to different tie-breaking schemes. Notably, for single-winner control problems, it has been observed that tie-breaking schemes may significantly change the complexity of the problems (see, e.g.,~\cite{DBLP:conf/aaai/AzizGMNW13,DBLP:conf/ecai/MatteiNW14,DBLP:conf/atal/ObraztsovaEH11,DBLP:conf/atal/FaliszewskiHS08}). Second, one could study faster or combinatorial {{\sf{FPT}}}-algorithms of {{\sf{FPT}}} problems studied in this paper, or consider other multiwinner voting rules such as seqPAV. Third, for the {{\np}-{\sf{hard}}} problems proved in the paper, it is natural to explore their approximation algorithms. In addition, in our manipulation problems, the manipulators are allowed to change their votes in any possible way. 
It is interesting to see if the complexity changes if the manipulators' actions are restricted somehow, e.g., if they are only allowed to drop approved candidates or only to approve some previously disapproved candidates. Moreover, it is interesting to see if the complexity of the problems changes when restricted to specific domains of dichotomous preferences. We refer to~\cite{DBLP:conf/ijcai/ElkindL15,DBLP:journals/corr/abs-2205-09092,DBLP:conf/ijcai/Yang19a} for the notions of several restricted domains. Finally, it has been shown recently that for single-winner voting rules, the problems of control by adding/deleting candidates are already {{\np}-{\sf{hard}}} when there is only a constant number of voters~\cite{DBLP:journals/jair/ChenFNT17}. It is interesting to explore whether similar results hold for {\probb{CCAC}{$\varphi$}} and {\probb{CCDC}{$\varphi$}} for multiwinner voting rules.
{ "arxiv_id": "2302.11342", "language": "en", "timestamp": "2023-02-23T02:14:09", "url": "https://arxiv.org/abs/2302.11342", "yymm": "2302" }
\section{Introduction}\label{sec:Introduction} Electromagnetic waves (in the laser, microwave, and radio-frequency regimes) are used in a variety of contexts. The importance and applicability of laser-plasma interactions are well known and strongly pursued in the context of fusion \cite{kaw2017nonlinear,das2020laser}, particle acceleration \cite{nishida1987high,joshi2006plasma}, etc. Microwaves are also being employed for many studies, such as the generation of plasma sources \cite{tarey2016studies,ganguli2016development,ganguli2019evaluation}. Their absorption leads to plasma heating \cite{litvak1993nonlinear}, which is important in several contexts such as tokamaks \cite{gilgenbach1980heating,mueck2007demonstration}, stellarators \cite{koehn2012schemes,hammond2018overdense}, particle acceleration \cite{alvarez1981application,batanov1986large}, microwave-heated chemical reactions \cite{tiwari2020microwave}, and mirror machines \cite{launois1972contribution, ganguli1997characterization}. While high-power lasers have already been available for a long time, the production of high-power microwave sources is more recent \cite{levush1996high,fan2020direct,wang2020preliminary}. In recent studies \cite{xiao2020efficient,wang2020preliminary}, a pulsed high-power microwave source of up to 4.6 GW at a frequency of 9.96 GHz has been achieved in experiments. These high-power microwave sources have stimulated interest in the study of nonlinear effects in microwave-plasma interaction. Nonlinear phenomena such as self-focusing and self-guiding \cite{ito1996formation,ito2004propagation}, resonant absorption \cite{lee1982hot,rajyaguru2001observation}, soliton excitation \cite{nishida1986excitation,kaw1992nonlinear}, and ponderomotive forcing have been studied \cite{max1974self,max1976strong} by several authors. The studies in laser-plasma interaction have primarily been carried out for the unmagnetized plasma response. The magnetic field required to elicit a magnetized response from the charged species of the plasma at laser frequencies is considerably high and hence has not been achievable. However, the recent technological development of producing high magnetic fields of the order of kilo-Tesla in the laboratory \cite{nakamura2018record}, and proposals to produce even stronger fields of the order of Mega-Tesla \cite{korneev2015gigagauss}, have sparked research interest in the area of lasers interacting with magnetized plasmas, for which a variety of theoretical and simulation studies are now being conducted \cite{vashistha2020new,vashistha2021excitation,vashistha2022localized,mandal2020spontaneous,mandal2021electromagnetic,kumar2019excitation,goswami2021ponderomotive,goswami2022observations,maity2021harmonic,PhysRevE.105.055209}. On the other hand, the available microwave and RF sources typically have low power and produce continuous waves. The magnetized plasma response can be invoked relatively easily at the lower MW frequencies in experiments with reasonably modest magnetic field strengths. MW plasma experiments involve low-power microwave sources and complicated configurations of magnetic fields. Thus conventional laser-plasma and MW/RF research have, in general, explored distinct regimes. The former (laser) has been extensively employed to study an unmagnetized and highly nonlinear response of the plasma to EM waves. On the other hand, MW/RF has typically explored a magnetized but, at best, weakly nonlinear response.
Also, the explorations of MW/RF have primarily been conducted in the context of tokamak devices and have, therefore, employed complex magnetic field geometries. This gap in the regime of research explorations between laser and MW/RF is now getting bridged by technological advancements in the production of strong magnetic fields and pulsed high-power MW sources. One of the nonlinear effects associated with laser interaction with plasmas is the generation of higher harmonics. Excitation of high harmonics has been keenly pursued to produce high-frequency, coherent radiation sources. For the case of unmagnetized plasma, a theoretical analysis of high harmonic generation was first given by \cite{margenau1948theory}. Experimentally, under a dc magnetic field, harmonic generation in microwave-induced gas-discharge plasma was reported by \cite{hill1959harmonic}. Over the decades, many authors have contributed to this pursuit \cite{sodha1965third,sodha1970theory,basov1979second,gibbon1997high}. Theoretical studies of third harmonic generation due to the interaction of a microwave with plasma in a nonsteady state have been described by \cite{tripathi1971non}. High harmonics in the radiation reflected from the interaction of an intense electromagnetic wave with a plasma slab, described by the relativistic oscillating mirror model, were first proposed by \cite{bulanov1994interaction}, and the selection rules for identifying the polarization of reflected harmonics from an overdense plasma have been discussed by \cite{lichters1996short}. An analytical investigation of second harmonic generation in a plasma-filled cylindrical waveguide interacting with a High Power Microwave (HPM) has been carried out by \cite{fu2008harmonic}. In this work, we have explored the question of producing harmonics by an EM wave in a magnetized plasma through PIC simulations using EPOCH. The magnetic field has been chosen to be along the EM wave's propagation direction, termed the R-L mode geometry. Though we have chosen MW parameters for this study, it is equally applicable to possible laser parameters, which we have also listed. The question of harmonic generation in the context of the X and O mode geometries (for which the external magnetic field is applied along the laser magnetic and electric field, respectively) has been studied earlier \cite{maity2021harmonic}. In contrast to the previous X and O mode configurations, we observe the formation of only odd harmonics. The dependence of the efficiency of harmonic generation on the applied magnetic field and other system parameters has also been investigated. The paper is organized as follows: Section \ref{sec:SimulationDetails} describes the simulation geometry; the parameters used in the simulation are also mentioned there. In section \ref{sec:observations} we provide a comprehensive analysis of the various simulations performed for different ranges of parameters. We illustrate the various possible cases by varying the external magnetic field. The theory behind the generation of harmonics is also described in this section. The effects of the external magnetic field, the EM wave polarization, and its intensity on the generation of harmonics have been investigated. The analytical calculation for the process of harmonic generation is provided in Appendix \ref{appA}. Finally, we summarize our studies in section \ref{sec:summary}. \begin{figure} \centering \includegraphics[width=6.0in]{figures/schematic_new.eps} \caption{A schematic of our simulation box. We have carried out a one-dimensional PIC simulation.
Here, the external magnetic field $B_0$ is applied along the direction of propagation of the linearly polarized microwave $(\omega_{mw})$ (the $\hat x$ direction). As the HPM interacts with the plasma surface, we observe that it excites higher harmonics of a different polarisation (L-polarized) while transmitting the fundamental R-wave. All the higher harmonics satisfy the dispersion relation.} \label{fig:schematic} \end{figure} \section{Simulation details}\label{sec:SimulationDetails} The interaction of a high-power microwave pulse with magnetized plasma has been studied with the help of one-dimensional particle-in-cell (PIC) simulations. The simulations have been carried out using the EPOCH 4.17.16 PIC code \cite{arber2015contemporary,bennett2017users}. The schematic of the simulation geometry is shown in figure \ref{fig:schematic}. We have divided our simulation box of 6 meters into 120,000 grid cells, for which $dx=50 \mu m$. The plasma boundary starts from $x = 1 m$ and extends up to $x = 5 m$. We have considered a fully ionized, homogeneous electron-proton plasma in our simulation box, where the ion-to-electron mass ratio is $1837.2$. The number density of electrons and ions is chosen to be constant in the plasma region, $n_{e,i}=2\times10^{18} m^{-3}$. In terms of the skin depth, the grid size thus corresponds to $dx=0.013c/\omega_p$. The number of macro-particles per cell has been chosen to be $20$. We have considered a short-pulse microwave of wavelength $\lambda=44.88 mm$ with a pulse width equal to $10$ wavelengths. The profile of the pulse is Gaussian in time with a peak intensity of $I=5.86\times 10^{11} W \ m^{-2}$. The transverse electric field of the microwave, $\tilde{E}_{mw}$, is along the $\hat y$ direction and the oscillating magnetic field $\tilde{B}_{mw}$ of the wave is along the $\hat z$ direction. The microwave enters the simulation box from the left at $t=0 s$. \begin{table} \caption{Simulation parameters are shown here in normalized as well as in corresponding SI units} \label{table:simulationtable} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Parameters} & \textbf{Normalized values} & \textbf{Microwave System} & \textbf{Laser System} \\[3pt] \hline \multicolumn{4}{|c|}{\textbf{Microwave/Laser Parameters}} \\ \hline Frequency ($\omega_{mw}$) & 0.53$\omega_{pe}$ & $4.2\times 10^{10} rad \ s^{-1}$ & $0.2\times 10^{15} rad \ s^{-1}$ \\ \hline Wavelength ($\lambda_{mw}$) & 12.1$c/\omega_{pe}$ & 44.88 $mm$ & $9.42 \mu m$ \\ \hline Intensity ($I_0$) & $a_0=0.29$ & $5.86\times 10^{11} W \ m^{-2}$ & $1.33\times 10^{19} W \ m^{-2}$\\ \hline \multicolumn{4}{|c|}{\textbf{Plasma Parameters}} \\ \hline Density $(n_{e,i})$ & 1 & $2\times 10^{18} m^{-3}$ & $4.47\times 10^{25} m^{-3}$\\ \hline ($\omega_{pe}$) & 1 & $7.973\times10^{10} rad \ s^{-1}$ & $3.77\times 10^{14} rad \ s^{-1}$ \\ \hline ($c/\omega_{pe}$) & 1 & 3.7 mm & $0.79 \mu m$\\ \hline \end{tabular} \end{center} \end{table} The external magnetic field $B_0$ is chosen to be along the propagation direction $\hat x$ for the R-L mode configuration. In the absence of an external magnetic field, the microwave cannot penetrate the plasma as it is overdense ($\omega_{mw}<\omega_{pe}$). We have chosen two different values of the applied external magnetic field. For the first choice, the incident wave is in the stop band of both the L and R waves and the EM wave does not penetrate the plasma, whereas for the second choice the L mode is in the stop band while the R mode is in the pass band.
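As a quick cross-check of this pass/stop-band classification, the short script below (ours, not part of the EPOCH setup) recomputes the normalized quantities of Table~\ref{table:simulationtable} together with the electron-only cold-plasma R- and L-mode cut-off frequencies, $\omega_{R,L}=[\pm\omega_{ce}+(\omega_{ce}^2+4\omega_{pe}^2)^{1/2}]/2$, for the two magnetic field choices.
\begin{verbatim}
import math

# physical constants (SI)
e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

n_e  = 2e18                    # plasma density (m^-3)
w_mw = 4.2e10                  # microwave frequency (rad/s)
w_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))
print(f"w_pe = {w_pe:.3e} rad/s, skin depth = {1e3 * c / w_pe:.2f} mm")

for B0 in (0.1775, 0.355):     # the two field choices (T)
    w_ce = e * B0 / m_e
    # electron-only cold-plasma cut-offs of the R and L branches
    w_R = 0.5 * ( w_ce + math.sqrt(w_ce**2 + 4 * w_pe**2))
    w_L = 0.5 * (-w_ce + math.sqrt(w_ce**2 + 4 * w_pe**2))
    r_pass = w_mw < w_ce or w_mw > w_R   # whistler branch or upper branch
    l_pass = w_mw > w_L
    print(f"B0 = {B0} T: w_ce = {w_ce / w_pe:.2f} w_pe, "
          f"R mode {'pass band' if r_pass else 'cut off'}, "
          f"L mode {'pass band' if l_pass else 'cut off'}")
\end{verbatim}
This reproduces the entries of Table~\ref{tab:Magneticfieldparameters}: for $B_0=0.1775 T$ the wave is in the cut-off region of both modes, while for $B_0=0.355 T$ only the R mode is in its pass band.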
Boundary conditions are taken as absorbing for the fields as well as the particles. Since the ion-to-electron mass ratio is high ($\approx 1837.2$), the electron cyclotron frequency is much larger than the ion cyclotron frequency ($\omega_{ce}\gg\omega_{ci}$). At the chosen MW frequency, only the lighter electron species will respond. \section{Observations and Discussion}\label{sec:observations} We report in Table \ref{tab:Magneticfieldparameters} the four sets of parameters for which simulations have been carried out. The polarization of the microwave/EM wave is also listed in Table \ref{tab:Magneticfieldparameters}. The microwave/EM frequency in all these cases has been chosen as $0.53\omega_{pe}$, and the intensity is chosen to be $a_0 = 0.29$ for all the studies. We have observed that even when the fundamental frequency of the microwave lies in the cutoff region of both the R $\&$ L-modes, some traveling structures (EM waves) are still present inside the bulk plasma. We have analyzed these EM disturbances using various diagnostics for each case. \begin{table} \caption{List of simulations performed for linearly and circularly polarized waves with the applied external magnetic field.} \label{tab:Magneticfieldparameters} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Case} & \textbf{Polarisation} & \textbf{(MW)} $B_0$ (T) & \textbf{(Laser)} $B_0$ (kT) & $\omega_{ce}$ ($\omega_{pe}$) & \textbf{R mode} & \textbf{L mode} \\[3pt] \hline A & linear & 0.1775 & 0.839 & 0.39 & cut off & cut off \\ \hline B & linear & 0.355 & 1.68 & 0.78 & \color{red}pass band & cut off \\ \hline C & RCP & 0.355 & 1.68 & 0.78 & \color{red}pass band & cut off \\ \hline D & LCP & 0.355 & 1.68 & 0.78 & \color{red}pass band & cut off \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=6.0in]{figures/DISPERSION_PAPER.eps} \caption{Dispersion curves of the L and R modes corresponding to case A ($B_0=0.1775T$) are shown in figures (a) and (b), respectively. Similarly, figures (c) and (d) illustrate the dispersion curves of the L and R modes for case B ($B_0=0.355T$) \cite{chen1984introduction}.} \label{fig:Dispersion curve} \end{figure} \begin{figure} \centering \includegraphics[width=6.0in]{figures/EycaseA.eps} \caption{Time evolution of the interaction of the incident EM wave with the magnetized plasma is shown in figures $(a-c)$ at $t=3, 6, 15 ns$. Figure $(d)$ shows a zoomed view of the $E_y$ component of the transmitted structures. } \label{fig:time_evol_A} \end{figure} \begin{figure} \centering \includegraphics[width=6.0in]{figures/polarisation_case_A.eps} \caption{Time evolution of the electric field components along the y and z directions, plotted here for the fast $(a)$ and slow $(b)$ waves inside the plasma.} \label{fig:polarisation_t} \end{figure} \begin{figure} \centering \includegraphics[width=6.0in]{figures/fftcaseA.eps} \caption{Fast Fourier transforms performed in time $(a)$ and space $(b)$ to measure the corresponding frequencies and propagation vectors of the two travelling structures in the magnetized plasma. } \label{fig:FFTcaseA} \end{figure} \begin{figure} \centering \includegraphics[width=6.0in]{figures/reflectedwave} \caption{Figure $(a)$ shows the time evolution of the electric field components along the y and z directions of the reflected wave in the vacuum region, while figure $(b)$ depicts the time FFT of $|B_z(\omega)|^2$ in the vacuum region.
} \label{fig:polarisation_r} \end{figure} \subsection{Case A} We choose an applied external magnetic field of $0.39 m_ec\omega_{pe}e^{-1}$, which corresponds to $B_0=0.1775T$, for the microwave-plasma interaction studies. The direction of the magnetic field is chosen to be along the propagation vector of the microwave in the plasma. The microwave frequency has been chosen greater than the electron cyclotron frequency, i.e., $\omega_{mw}>\omega_{ce}\gg\omega_{ci}$. Here $\omega_{cs}$ represents the cyclotron frequency of species $s=i,e$ (ions and electrons, respectively). Also, for this magnetic field $\omega_{mw}<\omega_{R,L}$, where $\omega_{R,L}$ represents the right- and left-hand cut-off frequencies. For this case, the dispersion relation governed by the magnetized plasma medium is given by \cite{chen1984introduction} \begin{equation}\label{eq:1} \frac{c^2k^2}{\omega^2}=1-\frac{\omega_{pe}^2/\omega^2}{1\mp\omega_{ce}/\omega} \end{equation} where the '$\mp$' signs correspond to the R-mode and L-mode, respectively. For this case, equation (\ref{eq:1}) tells us that the incident EM wave frequency falls in the cut-off region of both the R and L modes. This can be observed from the dispersion relations shown in figure \ref{fig:Dispersion curve}(a,b). Thus, for this particular case, one expects the incident EM wave to reflect from the plasma boundary; there would be no propagation inside the plasma. The simulations, however, show something different. Snapshots at various times of the incoming EM wave interacting with the magnetized plasma for this case are shown in figure \ref{fig:time_evol_A}. The microwave/EM pulse enters from the left side of the simulation box. Figure \ref{fig:time_evol_A}$(a)$ at $t= 3 ns$ shows that it is still in the vacuum region. Later, it interacts with the plasma boundary at $x=1 m$. At $t= 6 ns$ we can see two wave pulses in the simulation box (figure \ref{fig:time_evol_A}$(b)$): a transmitted and a reflected pulse. While the reflected wave has a large amplitude, one also observes a small-amplitude wave that propagates inside the plasma. Subsequently, at $t=15 ns$, the EM wave in the bulk is also observed to separate into two distinct pulses. A zoomed view of these two structures is shown in figure \ref{fig:time_evol_A}$(d)$. \indent To understand the behavior of the transmitted pulses, we carry out various diagnostics. It is observed that both the $y$ and $z$ components of the electric field are finite for these pulses, unlike for the incident linearly polarised EM pulse. The time evolution of the electric field components $E_y, E_z$ at a fixed location of $x=3 m$ for the two pulses is shown in figure \ref{fig:polarisation_t}. From the phase difference between $E_y$ and $E_z$, one infers that the faster-moving pulse is left circularly polarized (L-wave). On the other hand, the slow-moving wave is found to be right circularly polarized (R-wave). The Fourier transform in time at the same location $x=3 m$ (from $t= 4$ to $20 ns$, during which these pulses cross this particular point) has been obtained and is shown in figure \ref{fig:FFTcaseA}$(b)$. The dominant peak is near the third harmonic, and the other smaller peaks occur at higher odd multiples of the incident wave frequency ($\omega_{mw}$). We also performed the space FFT analysis, shown in figure \ref{fig:FFTcaseA}$(a)$, at $t=11 ns$, when the two pulses extend from $x=1.5$ to $3 m$ inside the box. In this case the dominant peaks occur at $k= 286.4 m^{-1}$ and $343.5 m^{-1}$.
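These measured peaks can be compared directly with equation (\ref{eq:1}); the short script below (our cross-check, not part of the paper's analysis) evaluates the electron-only R- and L-mode refractive indices at the third harmonic for the case A field and converts them to wavenumbers.
\begin{verbatim}
import math

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
w_pe = math.sqrt(2e18 * e**2 / (eps0 * m_e))   # plasma frequency
w_ce = e * 0.1775 / m_e                        # electron cyclotron frequency, case A
w3   = 3 * 4.2e10                              # third harmonic of the pump

for name, sign in (("R", +1), ("L", -1)):
    n2 = 1 - (w_pe**2 / w3**2) / (1 - sign * w_ce / w3)   # Eq. (1)
    k = w3 / c * math.sqrt(n2)
    print(f"{name}-mode: k = {k:.1f} 1/m")
# prints roughly 287 1/m (R) and 346 1/m (L), close to the measured peaks
\end{verbatim}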
The observed values of the dominant peaks in $\omega$ and $k$ satisfy the dispersion curves of the R and L modes shown in figure \ref{fig:Dispersion curve}$(a,b)$ for the applied external magnetic field of $(0.39m_ec\omega_{pe}e^{-1})$. We similarly analyze the properties of the reflected wave. Figure \ref{fig:polarisation_r}$(a)$ shows that the reflected wave also has both $E_y$ and $E_z$ components, but both are in phase. Thus the reflected wave remains linearly polarized, but its plane of polarization gets rotated in the yz-plane. These observations thus show that though the fundamental frequency lies in the stop band (of both the L and R waves) and, therefore, cannot propagate inside the plasma, high harmonics get generated at the vacuum-plasma surface and propagate inside the plasma. The $3^{rd}$ harmonic is the most dominant frequency generated. The time FFT spectrum in figure \ref{fig:polarisation_r}$(b)$ shows that higher odd harmonics are also present in the reflected radiation in the vacuum region. \subsection{Case B} We next choose a case for which the strength of the external magnetic field is doubled, to $0.355T$ ($0.78 m_ec\omega_{pe}e^{-1}$). For this case (B) the microwave frequency satisfies $\omega_{ce}>\omega_{mw}>\omega_{ci}$. Moreover, in this case the microwave frequency falls in the range $\omega_{ce}/2<\omega_{mw}<\omega_{ce}$, where the group velocity decreases while the phase velocity of the incident wave increases. The microwave frequency thus lies in the pass band of the R-mode dispersion curve, figure \ref{fig:Dispersion curve}$(d)$, but in the cut-off region of the L-mode dispersion curve shown in figure \ref{fig:Dispersion curve}$(c)$ (shown by blue dots). In this case, a part of the incident EM wave travels inside the plasma as a right circularly polarized wave at the fundamental frequency. Higher odd harmonics also get generated, which propagate inside the plasma. It should be observed from the dispersion curves for the applied magnetic field of this particular case, shown in figures \ref{fig:Dispersion curve}$(c,d)$, that the $3^{rd}$ harmonic of both R and L polarization lies in the pass band. One would, therefore, expect the $3^{rd}$ harmonic of both polarizations to be generated, as in the previous case, and to propagate inside the plasma. \begin{figure} \centering \includegraphics[width=6.0in]{figures/EycaseBtime_evol.eps} \caption{ Time evolution of the interaction of the incident EM wave with the magnetized plasma is shown in figures (a-c) at t=3, 6, 15 ns for case B. Two travelling structures are detected inside the magnetized plasma.} \label{fig:time_evol_B} \end{figure} \begin{figure} \centering \includegraphics[width=6.0in]{figures/polarisation_case_B.eps} \caption{ Electric field components along the y and z directions plotted as functions of time. Figure (a) corresponds to the slow travelling wave pulse, while figure (b) corresponds to the fast travelling wave pulse. } \label{fig:polarisation_harmonic} \end{figure} \begin{figure} \centering \includegraphics[width=6.0in]{figures/fftfullcaseB_new.eps} \caption{ Figure (a) shows a snapshot of the transverse magnetic field ($B_z^{(mw)}$) of the microwave inside the magnetized plasma at t=11 ns. Figures (b) and (d) depict the space FFTs associated with each of these pulses, and figure (c) shows the time FFT at $x=2 m$ over the time period $t= 4$ to $20 ns$.} \label{fig:FFTcaseB} \end{figure} We now describe the time evolution of the incident microwave for this case.
Figure \ref{fig:time_evol_B}$(a)$ depicts that at $t= 3 ns$ the Gaussian wave pulse of the incoming microwave has reached the vacuum-plasma interface. After $t=6 ns$, the microwave has entered the plasma, and a wave reflected from the plasma boundary can be seen in figure \ref{fig:time_evol_B}$(b)$. Later, after the reflected wave has left the simulation box at $t=15 ns$, two clear, distinct wave pulses are observed in the magnetized plasma. The slow traveling wave pulse in figure \ref{fig:time_evol_B}$(c)$ turns out to be the fundamental R-mode, as indicated by the dispersion curve in figure \ref{fig:Dispersion curve}$(d)$. Besides that, we observe only one fast-moving wave pulse instead of two, as one would expect from the dispersion curves in figures \ref{fig:Dispersion curve}$(c,d)$. The polarization of these waves is depicted in figure \ref{fig:polarisation_harmonic}, which identifies the slow-moving wave as an R-wave and the fast-moving wave as an L-wave. Figure \ref{fig:FFTcaseB} (a) shows the simulation snapshot at $t=11 ns$. The time FFT for the time window of $t=4$ to $20 ns$ has been evaluated, until the whole wave pulse has passed through $x=2 m$. It shows the fundamental peak at $\omega=0.98\omega_{mw}$ and small peaks at $\omega/\omega_{mw}= 2.96, 5.01, 6.93$. Figures \ref{fig:FFTcaseB}$(b,d)$ illustrate the space FFTs for the two wave packets shown in (a). The corresponding $k$ value for the slow wave packet, at $x= 1$ to $2 m$, is found at $k= 383.3 m^{-1}$, while for the fast-moving wave pulse the maximum is at $k=351.9 m^{-1}$, with further small peaks at $k= 612.4, 636.9 m^{-1}$ and $921.5 m^{-1}$. This confirms that odd harmonics have been excited in the plasma along with the fundamental R-wave. Interestingly, the $3^{rd}$ harmonic of only the L-wave is excited in the plasma. \begin{figure} \centering \includegraphics[width=5.0in]{figures/LCPRCP.eps} \caption{Time evolution of RCP and LCP waves encountering the RL-mode magnetized plasma with an external magnetic field $B_0=0.355T$. At $t=2 ns$ the profiles of the RCP and LCP waves are shown. Later, at $t=6 ns$, following the dispersion relation, the RCP wave passes through the plasma while the LCP wave reflects from the surface.} \label{fig:polarisation} \end{figure} \subsection{Case C and D} A circularly polarized microwave (RCP and LCP) is made to fall on the magnetized plasma. Figure \ref{fig:polarisation} shows that when an incident right circularly polarized microwave of frequency $\omega=4.2\times 10^{10} rad/s$ falls on the plasma medium, only the fundamental R-wave travels with a finite group velocity. Similarly, when the left circularly polarized microwave is incident on the magnetized plasma, it follows the L-mode dispersion relation and the entire incident microwave is reflected from the plasma boundary. These two cases reveal that no higher harmonics get excited for circularly polarized light in the magnetized plasma. This indicates that the polarisation of the wave also has an essential role in the generation of high harmonics. The reason behind this is covered in the following section. \section{Symmetric Generation of Harmonics at the plasma boundary} Excitation of only odd harmonics in the RL-mode ($\tilde k||B_0$) configuration can be explained by invoking the symmetry of our chosen simulation geometry.
The incoming EM wave with arbitrary polarisation propagating along the $x$ direction has associated electric ($\tilde E_{mw}$) and magnetic ($\tilde B_{mw}$) fields that lie in the yz-plane, orthogonal to each other. A detailed perturbative expansion has been carried out in the Appendix. In this geometry, in the linear regime, electrons at the surface of the vacuum-plasma interface, in response to the external oscillating electric field ($\tilde E_{mw}$), start oscillating in the yz-plane with quiver velocities $v_y^{(1)}$ and $v_z^{(1)}$ at the EM wave frequency ($\omega_{mw}$), according to equation (\ref{eq:velocity1st}). Consequently, this excites the surface currents $\tilde J_{ey}(\omega_{mw})$ and $\tilde J_{ez}(\omega_{mw})$. The electrons oscillating in the yz-plane further couple with the oscillatory magnetic field ($\tilde B_{mw}$) of the microwave, which is also in the yz-plane, through the Lorentz force ($\sim\tilde v^{(1)} \times \tilde B_{mw} \exp(-2i\omega_{mw}t)$). This excites a second harmonic ($\tilde v_x^{(2)}$) along the $\hat x$ direction. Thus, at the second harmonic, the surface current is excited only along the x-direction, and it can be described by equation (\ref{eq:currrentdensity}): \begin{equation} \tilde{J}_{ex}^{(2)}=-\frac{n_ee^3}{2m^2c}\frac{E_{mw0}^2}{(\omega_{mw}^2-\omega_c^2)}(1+\alpha^2)\exp(-2i\omega_{mw}t) \label{eq:currrentdensity} \end{equation} \indent It is interesting to note that for circular polarization, the $\hat y$ and $\hat z$ components of the wave fields are equal in magnitude and in quadrature ($\alpha =\pm i$), which makes the second-harmonic drive vanish; equation (\ref{eq:currrentdensity}) shows through the factor $(1+\alpha^2)$ that for circular polarisation the current density at the second harmonic cannot be excited. This is why further coupling to generate the $3^{rd}$ or higher harmonics is impossible for circularly polarized waves. This was confirmed by the simulations in cases C and D. The geometry being 1-D, with variations only along $x$, this second-harmonic current along $x$ does not generate any radiation. \begin{figure} \centering \includegraphics[width=6.0in]{figures/currentdensity.eps} \caption{Figures $(a,b,c)$ show the time evolution of the current density along the $\hat x$, $\hat y$ and $\hat z$ directions at the surface of the vacuum-plasma interface for cases A and B. Figures $(d,e,f)$ show the time FFTs corresponding to these current densities at the surface.} \label{fig:surfacecurrent} \end{figure} \indent Furthermore, since the current density at the second harmonic is finite for linearly polarized light ($\alpha=0$ in equations \ref{Elec_field},\ref{Mag_field}), the electrons wiggle at frequency $2\omega_{mw}$ along the x-direction at the plasma surface and couple once again with the oscillatory magnetic field ($\tilde B_{mw}$) through the Lorentz force ($\sim\tilde v^{(2)}\times \tilde B_{mw} \exp(-3i\omega_{mw}t)$). By symmetry, this excites the third-harmonic current in the yz-plane. Thus, the surface current at the third harmonic is along the $\hat y$ and $\hat z$ directions, and one obtains odd-harmonic radiation (electromagnetic modes) from this surface current. In the other geometries of X and O-mode, which were reported earlier by \cite{maity2021harmonic}, even harmonics were also generated. \indent In our simulations we have been able to observe up to the $7^{th}$ harmonic in the FFT spectrum.
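This selection rule can be verified with a short symbolic calculation (our sketch, not the derivation of Appendix~\ref{appA}): solving the linearized, magnetized momentum equation for the quiver velocity and forming the $2\omega_{mw}$ part of the $\tilde v^{(1)}\times\tilde B_{mw}$ drive reproduces the $(1+\alpha^2)$ factor of equation (\ref{eq:currrentdensity}).
\begin{verbatim}
# Symbolic check of the polarization selection rule with sympy.
import sympy as sp

w, wc, e, m, E0, k, alpha = sp.symbols('omega omega_c e m E_0 k alpha')
vy, vz = sp.symbols('v_y v_z')
Ey, Ez = E0, alpha*E0        # transverse field amplitudes, E_z = alpha*E_y

# linearized momentum equation, -i*w*v = -(e/m)*(E + v x B0 xhat), in components
eq1 = sp.Eq(-sp.I*w*vy, -(e/m)*Ey - wc*vz)
eq2 = sp.Eq(-sp.I*w*vz, -(e/m)*Ez + wc*vy)
sol = sp.solve([eq1, eq2], [vy, vz], dict=True)[0]

# 2*omega part of (v1 x B_mw)_x, with B_mw from Faraday's law:
# B_y = -(k/w)*E_z, B_z = (k/w)*E_y, so the drive is (k/w)*(v_y*E_y + v_z*E_z)
drive = sp.simplify((k/w)*(sol[vy]*Ey + sol[vz]*Ez))
print(sp.factor(drive))      # proportional to (1 + alpha**2)/(omega_c**2 - omega**2)
for a in (0, sp.I, -sp.I):   # linear, and the two circular polarizations
    print(a, sp.simplify(drive.subs(alpha, a)))
\end{verbatim}
For $\alpha=\pm i$ the drive vanishes identically, while for $\alpha=0$ it is finite and resonant at $\omega_{mw}=\omega_{ce}$, consistent with the behaviour described above.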
Figure (\ref{fig:surfacecurrent}) depicts the surface current generated at the vacuum-plasma boundary along each direction for cases A and B. Figure (\ref{fig:surfacecurrent})$(a)$ shows the variation of the surface current along the x-direction with time (from $t=3.5$ to $8 ns$), during the interval in which the incoming microwave pulse interacts with the vacuum-plasma interface, for both cases. The time Fast Fourier transform (FFT) of this current is shown in figure \ref{fig:surfacecurrent}$(d)$. The FFTs clearly reveal that current densities corresponding to even harmonics ($\omega=2\omega_{mw},4\omega_{mw},6\omega_{mw}$) have been excited at the surface. Since the even harmonics cannot propagate inside the plasma, they exist only at the surface and further excite odd harmonics, which can propagate through the plasma according to the corresponding dispersion relation. Similarly, figures \ref{fig:surfacecurrent}$(b,c)$ show the surface current densities excited along the $\hat y$ and $\hat z$ directions. Figures \ref{fig:surfacecurrent}$(e,f)$ illustrate the time FFTs in the y and z directions, respectively, which clearly show that current densities at odd harmonic frequencies are excited at the surface. Here, we observed that for the lower magnetic field (case A) the current density is greater along the $\hat y$ than the $\hat z$ direction, but when we doubled the magnetic field (case B), the current density became greater along $\hat z$ than along $\hat y$. This reveals that with increasing magnetic field the plane of the linearly polarized light rotates in the yz-plane. \section{Physics of Propagation of Harmonics in RL-Mode Configuration ($\tilde k||B_0$)} To unravel the physics of the propagation of harmonics in a magnetized plasma in the RL-mode geometry, we have considered a fully ionized cold electron-proton plasma in the present scenario. Since the microwave frequency ($\omega_{mw}$) is much higher than the ion response frequencies ($\omega_{pi}$, $\omega_{ci}$) in the magnetized plasma, the ions do not respond to the interaction at the surface but merely provide a neutralizing background. Figure \ref{fig:dispersion_Rmode}$(a)$ demonstrates three regions in the RL-mode dispersion curve, which exhibit different characteristics of the generated high harmonics when the incident electromagnetic wave is chosen to lie in any of these three regions. We will discuss these three regions briefly to understand the physics of harmonics in the RL-mode geometry. \begin{figure} \centering \includegraphics[width=6.0in]{figures/dispersion_new_vg_vp.eps} \caption{The figure demonstrates the different regions of the pass and cut-off bands of the R-mode, depending on the group and phase velocity of the incoming EM wave in the plasma under the RL-mode configuration. The EM wave frequency range ($\omega_{mw}<\omega_{ce}/2$) bounds region-I, ($\omega_{ce}/2 <\omega_{mw}<\omega_{ce}$) bounds region-II, and ($\omega_{ce}<\omega_{mw}<\omega_{L}$) bounds region-III. } \label{fig:dispersion_Rmode} \end{figure} \subsection{Region-I} \indent When the incoming microwave frequency is less than $\omega_{ce}/2$, it falls in region-I. Equation (\ref{eq:1}) can be solved to derive the group velocity ($v_g$) and phase velocity ($v_p$) of the incoming EM wave in the R-mode.
These are given as \begin{equation} v_p= \frac{\omega}{k} =c\sqrt{\frac{\omega(\omega_{ce}-\omega)}{\omega_{pe}^2 +\omega_{ce}\omega -\omega^2}} \label{eq:v_p} \end{equation} \begin{equation} v_g= \frac{2c^2 k(\omega_{ce}-\omega)}{c^2 k^2+\omega_{pe}^2 -3\omega^2 +2\omega_{ce}\omega} \label{eq:v_g} \end{equation} Figure \ref{fig:dispersion_Rmode}$(b)$ depicts the variation of the group ($v_g$) and phase ($v_p$) velocities with the incident EM wave frequency. It shows that in the region $\omega < \omega_{ce}/2$, with increasing incoming wave frequency, the group velocity decreases whereas the phase velocity of the EM wave increases. Also, when the applied EM wave frequency is less than $\omega_{ce}/2$, the time scale of the incoming EM wave ($\omega_{mw}^{-1}$) is long compared with the cyclotron gyration period of the electrons ($\omega_{ce}^{-1}$). Because of these circumstances, the surface interaction of the EM wave with the magnetized plasma cannot efficiently excite higher harmonics in the plasma. Furthermore, equation (\ref{eq:vel_harmonic}) in Appendix \ref{appA} shows that the denominator of the quantity $v_{x}^{(2)}$ becomes large, which in turn makes the second-harmonic surface current ($J_{ex}^{(2)}$) small, and the further coupling that excites the $3^{rd}$ harmonic becomes rather inefficient, as indicated by the denominator in equation (\ref{currentdens}). Therefore, in this region the harmonic generation efficiency is low. For this reason, harmonic generation was not observed in the earlier work of \cite{goswami2021ponderomotive}. \subsection{Region-II} \indent Let us now understand the dynamics when the incident microwave frequency is chosen in the region $\omega_{ce}/2 < \omega < \omega_{ce}$. Equations (\ref{eq:v_p},\ref{eq:v_g}) illustrate that in this region the group and phase velocities of the transmitted wave in the plasma decrease with increasing incident EM wave frequency. The time scales of the microwave and of the electron gyration also become comparable as we move towards the resonance frequency ($\omega_{ce}$). The terms in equations (\ref{eq:vel_harmonic},\ref{currentdens}) then become significant, so that current densities at higher harmonics are excited. That is why the surface interaction significantly excites higher harmonics in the magnetized plasma. \indent The fundamental frequency of the EM wave falls in the pass band of the R-mode but in the cutoff region of the L-mode. Consequently, when linearly polarized light is incident on the vacuum-plasma interface, it can be regarded as a superposition of RCP and LCP waves. In the RL-mode configuration, RCP and LCP waves follow different dispersion relations and travel with different group velocities. In this case, the RCP wave passes through the plasma while the LCP wave encounters a reflecting boundary. Because of that, the RCP wave does not have enough time to excite the $3^{rd}$ harmonic at the surface. The LCP wave, which is stopped at the interface, has sufficient time to excite the $3^{rd}$ harmonic of the L-wave. Therefore, the efficiency of the $3^{rd}$ harmonic of the L-wave is significantly higher than that of the R-wave in this region. This is why we are unable to detect the $3^{rd}$ harmonic of the R-wave in case B, even though it lies within the pass band of the R-wave. \subsection{Region III} \indent In this region the frequency of the EM wave falls within the cutoff band of both the R and L modes.
Figure (\ref{fig:dispersion_Rmode}) shows that the boundaries of region-III extend between $(\omega_{ce}, \omega_{L})$. When the frequency of the incident linearly polarized EM wave falls within this region, it is reflected from the vacuum-plasma boundary. Linearly polarized waves can be considered a superposition of RCP and LCP waves in vacuum. The time scales of the interaction of both components of the wave are now equivalent, which is why, if the $3^{rd}$ harmonic of both the R and L modes lies in the pass band ($3\omega_{mw}>\omega_{L, R}$), we can observe odd harmonics of the incident wave in this case. This has also been confirmed by our simulation performed in case A. \begin{figure} \centering \includegraphics[width=6.0in]{figures/jyjz.eps} \caption{ Figures (a) and (b) depict the variation of the third-harmonic current density along the y and z directions at the vacuum-plasma boundary as the external magnetic field is varied from $B_0=0$ to $1.2T$. The blue curve plots the amplitude of the analytical expression (\ref{currentdens}) and the orange curve plots the peak of the $3^{rd}$ harmonic of the FFT spectrum.} \label{fig:JyJz} \end{figure} \section{Effect of magnetic field variation on harmonic generation} We have observed that the excitation of harmonics and their efficiency depend on the region of the RL-mode dispersion curves into which the frequency of the incoming EM wave falls. The externally applied magnetic field plays a crucial role in generating these harmonics. We have varied the strength of the magnetic field from $B_0=0$ to $1.2 T$. This range of magnetic field covers all three regions of the dispersion curve in figure (\ref{fig:dispersion_Rmode}) for the incoming microwave frequency $\omega_{mw}$. Thus, to quantify its effect on the harmonics, we have plotted the peak current densities ($J_{ey}(3\omega_{mw})$ and $J_{ez}(3\omega_{mw})$) of the $3^{rd}$ harmonic frequency from the time FFT spectrum, along the $\hat y$ and $\hat{z}$ directions, as functions of the strength of the applied external magnetic field. The analytical values have been obtained from the Appendix equation (\ref{currentdens}) using the same parameters as those chosen in the simulation. It can be observed that the analytical current densities (\ref{currentdens}) follow a trend with the external magnetic field very similar to that shown by the simulation. The locations of the two peaks obtained from the perturbative, approximate analytical expression, at ($\omega_{mw}=\omega_{ce}$) and ($\omega_{mw}=3\omega_{ce}$), are also observed in the simulation. \begin{figure} \centering \includegraphics[width=6.0in]{figures/intensity_variation.eps} \caption{Figures (a,b,c) show snapshots of the transverse electric field ($E_y^{mw}$) of the microwave in the transmitted region for incoming microwave intensities ($I_0$, $5I_0$, $10I_0$), with $I_0=5.857\times 10^{11}W/m^2$, at $t= 15 ns$, while the applied external magnetic field is $B_0=0.355T$. } \label{fig:Intensity_variation} \end{figure} \begin{figure} \centering \includegraphics[width=6.0in]{figures/Intensity_time_fft.eps} \caption{Time Fast Fourier transforms of the transverse magnetic field ($B_z^{mw}$) of the microwave in the transmitted region for incoming microwave intensities ($I_0$, $5I_0$, $10I_0$), with $I_0=5.857\times 10^{11}W/m^2$, while the applied external magnetic field is $B_0=0.355T$.
} \label{fig:Intensity_fft} \end{figure} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|} \hline Intensity & Harmonics ($n\omega_{mw}$) & {$\eta_{ref}$($\%$)} & \multicolumn{2}{|c|}{$\eta_{trans}$($\%$)} \\ & & & L-wave & R-wave \\ \hline $I=I_0$ & $3\omega_{mw}$ & $0.074$ & $0.343$ & $0$ \\ $(a_0=0.29)$ & $5\omega_{mw}$ & $6.997\times 10^{-4}$ & $0.0024$ & $0.027$ \\ & $7\omega_{mw}$ & $0$ &\multicolumn{2}{|c|}{$0.0021$ } \\ \hline $I=5I_0$ & $3\omega_{mw}$ & $0.355$ & $1.27$ & $0$ \\ $(a_0=0.65)$ & $5\omega_{mw}$ & $0.0188$ & $0.0607$ & $0.33$ \\ & $7\omega_{mw}$ & $0$ & \multicolumn{2}{|c|}{$0.17$} \\ \hline $I=10I_0$ & $3\omega_{mw}$ & $1.028$ & $1.837$ & $0$ \\ $(a_0=0.92)$& $5\omega_{mw}$ & $0.0765$ & $0.069$ & $0.464$ \\ & $7\omega_{mw}$ & $0$ & \multicolumn{2}{|c|}{$0.21$} \\ \hline \end{tabular} \caption{Conversion efficiencies of the reflected and transmitted harmonics of L and R polarisation for $B_0=0.355T$.} \label{tab:intensity_transmittance} \end{table} \section{Effect of Intensity of HPM pulse on harmonic generation} We have so far explored the weakly nonlinear regime by choosing a microwave intensity of about $I_0=5.857\times 10^{11}W/m^2$, for which the relativistic factor is $a_0=0.29$. Here, we demonstrate the effect of the microwave intensity on the generation of harmonics by making the intensity relativistic; thus the regime of $a_0\approx 1$ is explored. Three different intensities of the microwave, $I=I_0, 5I_0, 10I_0$, have been chosen, while keeping the other parameters fixed as in case B. For these intensities, figure (\ref{fig:Intensity_variation}) shows the profile of the transverse electric field ($E_y^{mw}$) at $t= 15 ns$ inside the plasma for the corresponding three cases. By comparison, it can be seen that the transmission of the higher harmonics increases relative to the fundamental R-mode inside the magnetized plasma as the intensity becomes relativistic. Time FFTs for these three cases are shown in figure (\ref{fig:Intensity_fft}), evaluated at $x=2 m$ inside the plasma for a time window of $t=4$ to $20 ns$. The spectra clearly indicate that the efficiency of the higher harmonics grows inside the plasma, and even the $7^{th}$ harmonic shows a significantly high intensity. Table (\ref{tab:intensity_transmittance}) shows the reflectivity and transmittance of the higher harmonics in the vacuum and plasma, respectively, for the three different intensities. The reflected EM wave follows the linear dispersion relation ($\omega=kc$) and travels at the speed of light, so all the higher harmonics are spatially superposed in the reflected region. In the transmitted region, in the RL-mode configuration, RCP and LCP waves follow different dispersion relations and travel with different group and phase velocities in the medium; thus the two polarized waves get spatially separated. Since the R- and L-polarized harmonics are spatially separated, we have calculated the efficiency of each polarized harmonic as the ratio of the peak of the corresponding space FFT of each harmonic to the peak of the incident incoming microwave in vacuum. The above analysis, summarized in table (\ref{tab:intensity_transmittance}), reveals that the efficiency of the $3^{rd}$ harmonic of the L-wave is finite, while that of the $3^{rd}$ harmonic of the R-wave is zero. Similarly, the efficiency of the $5^{th}$ harmonic of the R-wave is enhanced in comparison to the L-wave component.
For the $7^{th}$ harmonic, the dispersion relations of the R and L modes become almost equivalent, and because of that it travels with linear polarisation. This illustrates that harmonics of alternating polarization get excited in the plasma medium. Further, in the second case, by increasing the intensity $5$ times, we observe that the efficiency of the $3^{rd}$ harmonic of the L-wave increases by $3.7$ times, while the $5^{th}$ harmonic of the R-wave increases $12.2$ times and the $7^{th}$ harmonic grows by almost $80$ times. This elucidates that, as the intensity is increased, the gain in efficiency grows with the order of the harmonic. This behavior agrees with our theoretical analysis in Appendix (\ref{currentdens}). \section{Summary}\label{sec:summary} In this paper, the interaction of a High Power Microwave with magnetized plasma has been comprehensively studied with one-dimensional PIC simulations. We have described new results on the excitation of higher harmonics in the RL-mode configuration in the case of an overdense plasma ($\omega_{mw}<\omega_{pe}$). It has been demonstrated that when the frequency of the incident linearly polarized EM wave falls in particular regions of the dispersion curves of the R $\&$ L modes $({\tilde k||B_0})$, efficient higher odd harmonics of circularly polarized waves can be excited in the bulk plasma, which travel with different group velocities. By choosing an appropriate external magnetic field in comparison to the incident microwave frequency, harmonics of either the RCP wave, the LCP wave, or both kinds of polarized waves can be excited in the bulk plasma. Further, for a circularly polarized incident EM wave, it has been established that higher harmonics cannot be generated in the bulk plasma or the vacuum. The efficiency of the harmonics can be increased further by choosing an external magnetic field closer to the electron cyclotron resonance ($\omega_{mw}=\omega_{ce}$). By increasing the intensity of the incoming electromagnetic wave, the conversion efficiency of the higher harmonics grows with the order of the harmonic. An earlier study by \cite{maity2021harmonic} revealed that propagating second harmonics can be excited in the X and O mode configurations. However, in the RL-mode configuration, symmetry restricts the generation of even harmonics at the surface. This leads to the electromagnetic energy being coupled directly to the odd higher harmonics. A total maximum conversion efficiency of about 2.865 $\%$ has been detected for third-harmonic radiation in the bulk plasma and vacuum for $a_0=0.92$ and a normalized external magnetic field of $B_0= 0.78$. It is also notable that the efficiency of the alternately polarized harmonics grows in the bulk plasma as the intensity of the incident radiation increases. This could be further improved by optimizing the intensity and external magnetic field parameters for the chosen frequency of the electromagnetic wave. \section*{Acknowledgements} \indent The authors would like to thank the EPOCH consortium for providing open access to the EPOCH 4.17.16 framework. AD would like to acknowledge her J.C. Bose fellowship grant (JCB/2017/000055) as well as grant CRG/2018/000624 of the Department of Science and Technology (DST), Government of India. The authors would like to thank the IIT Delhi HPC facility for computational resources. T.D. also wishes to thank the Council for Scientific and Industrial Research (Grant no. 09/086/(1489)/2021-EMR-I) for funding the research.
\section*{Conflict of Interest} \noindent The authors report no conflict of interest. \section{Introduction} \label{sec:Introduction} In recent years there has been a lot of progress (both fundamental and technological) in the area of laser-plasma interaction studies \cite{das2020laser}. The development of low-frequency pulsed $CO_2$ lasers \cite{tochitsky2016prospects, haberberger2010fifteen, beg1997study} and the possibility of generating strong magnetic fields (fields of the order of kilo-Tesla have already been achieved \cite{nakamura2018record}, and there are proposals to generate Mega-Tesla fields \cite{korneev2015gigagauss}) are two developments which have opened up possibilities of carrying out experiments in a new direction involving lasers interacting with magnetized plasmas. This regime is of importance for fundamental explorations and also has rich implications for frontier applications such as direct heating of ions and neutron production in table-top devices \cite{goswami2021ponderomotive, vashistha2020new, vashistha2021excitation, maity2021harmonic, mandal2021transparency, kumar2019excitation, mandal2020spontaneous, sano2020thermonuclear, sano2019ultrafast}. In this paper we explore the prospect of a parametric scattering process associated with a laser pulse interacting with and propagating inside a magnetized plasma. The applied external magnetic field has been chosen to be along the laser propagation direction. This particular geometry supports the propagation of Right (R) and Left (L) circularly polarised electromagnetic waves inside the plasma for specific ranges of pass-band frequencies in their dispersion curves. Parametric processes have been studied extensively for the past several decades. A detailed discussion of them in the context of plasmas and their relevance to nuclear fusion experiments can be found in a recent review article \cite{kaw2017nonlinear}. The parametric instability occurs when a high-amplitude pump electromagnetic wave couples with an electrostatic disturbance in the plasma and generates scattered electromagnetic radiation. The interaction of the pump and the scattered radiation accentuates the density disturbance and enhances the electrostatic perturbations, thereby creating a feedback loop for the instability to occur. Some early works in this area are as follows. Drake et al. \cite{drake1974parametric} derived a compact dispersion relation for the wave interactions in the context of an unmagnetized plasma. They also discussed the limiting forms of the dispersion relation leading to the various instabilities, such as the Brillouin, Compton, and Raman scattering processes in plasma. In Raman scattering the excited electrostatic mode is an electron plasma wave, whereas for the Brillouin scattering process it is essentially an ion acoustic wave. The analytical expressions for the growth rate and the threshold condition for the Raman and Brillouin scattering instabilities were obtained by Liu et al. \cite{liu1974raman}. The occurrence of these Raman and Brillouin instabilities was directly shown for pulsed electromagnetic solitonic structures propagating in plasmas by Saxena et al. \cite{saxena2007stability} and Sundar et al. \cite{sundar2011relativistic}, respectively, with the help of fluid simulations. The application of the Brillouin scattering process in the context of laser fusion has been highlighted in many works \cite{kruer1973instability, tsytovich1973one, weiland1977coherent, panwar2009stimulated}.
For magnetized plasmas, theoretical studies of the parametric instability have been carried out by many authors. For instance, Stenflo \cite{stenflo1990stimulated, stenflo1995theory} and Shukla \cite{shukla2010stimulated} provided theoretical predictions for ionospheric heating experiments. Jaiman and Tripathi \cite{jaiman1998stimulated} analyzed the generation of Brillouin and Compton scattering in magnetized plasma for oblique propagation of the electromagnetic pump wave with respect to the external magnetic field direction. We provide here evidence of a parametric process occurring for an electromagnetic pulse propagating inside a magnetized plasma, using particle-in-cell (PIC) simulations \cite{dawson1983particle, birdsall1991particle}. The laser is chosen to be incident on an overdense plasma, and it propagates parallel to the applied external magnetic field. This particular geometry supports the propagation of Right (R) and/or Left (L) circularly polarised electromagnetic (EM) waves even in an overdense plasma, provided the EM wave frequency lies in the appropriate pass band of the corresponding dispersion relation. The manuscript is organized as follows. Section \ref{sec:SimulationDetails} provides the simulation details. In section \ref{sec:Observations}, we present the details of the observations made by simulating different cases of polarization of the incident EM pulse. These observations are analyzed in detail in Section \ref{sec:CharacterizationOfModes}, which provides evidence of the occurrence of the parametric Brillouin backscattering process. In a recent work from our group \cite{goswami2021ponderomotive}, we observed the generation of electrostatic fluctuations, in the same geometry, driven by the ponderomotive pressure of a laser pulse with a sharp profile. Here, in contrast, for a smoother laser profile, we observe electrostatic fluctuations being generated by the Brillouin backscattering process. We point out in section \ref{sec:LaserProfiles} that both of these processes in fact take place. The ponderomotive-pressure-driven process is the first to occur and saturates if the profile is relatively smooth. However, the fluctuations generated by this process mimic an effective temperature of the electron fluid, which drives the Brillouin scattering process at a later stage. We summarize and conclude in section \ref{sec:Conclusion}. \section{Simulation details} \label{sec:SimulationDetails} \begin{figure*} \centering \includegraphics[width=6.0in]{Figures/SchematicsBrillounFinal.pdf} \caption{The schematic (not to scale) shows the simulation setup and a summary of the physical process observed in our study. We have performed 1D particle-in-cell (PIC) simulations using OSIRIS. The laser propagates along the X-direction. The laser is incident at time $t_0$ from the left side of the simulation box on the vacuum-plasma interface at $x = 400$. We have chosen the RL-mode geometry, where the external magnetic field $(B_0)$ is applied along the laser propagation direction. Here $t_0$, $t_1$, $t_2$, and $t_3$ represent different simulation times (in ascending order). It is evident from the schematic that there is excitation of an electromagnetic (EM) mode (pump wave) in the plasma at time $t_1$. The pump wave broadens as it propagates inside the plasma, at time $t_2$. With further propagation of the pump wave, scattered waves are excited in the bulk of the plasma at time $t_3$.
The amplitude of these scattered radiations increases with time.} \label{fig:SchematicBrillouin} \end{figure*} We have employed the OSIRIS-4.0 framework \cite{hemker2000particle, fonseca2002osiris, fonseca2008one} for carrying out 1-D particle-in-cell (PIC) simulations to study the interaction of a laser with a magnetized plasma. The schematic of the simulation geometry (not to scale) is shown in Fig.~\ref{fig:SchematicBrillouin}. The external magnetic field is directed along the laser propagation direction $\hat{x}$. A 1-D simulation box with dimension $L_{x} = 4000 c/\omega_{pe}$ has been chosen. Here $c$ is the velocity of light and $\omega_{pe}$ represents the plasma frequency. The number of particles per cell is taken to be $8$. The plasma boundary starts from $x = 400 c/\omega_{pe}$. There is a vacuum region between $x = 0$ and $400 c/\omega_{pe}$, with a sharp plasma-vacuum interface at $x=400 c/\omega_{pe}$. The spatial resolution is taken as $100$ cells per electron skin depth, corresponding to a grid size of $\Delta x = 0.01 c/\omega_{pe}$. The laser is incident on the plasma target from the left side. We consider a short-pulse laser of frequency $\omega_l = 0.3 \omega_{pe}$. The laser profile is Gaussian, with rise and fall times of $200 \omega_{pe}^{-1}$ ($400 \omega_{pe}^{-1}$ for some simulations) and a peak intensity of $I = 3.5 \times 10^{19} Wm^{-2}$ (corresponding to a relativistic factor $a_0 = 0.5$). Boundary conditions are taken as absorbing in the longitudinal direction. The PIC code OSIRIS uses normalised values of the various parameters. We have followed the dynamics of both electrons and ions. To reduce the computational time, we carried out the simulations for a reduced ion mass, which is chosen to be $25$ times (the values $40$ and $50$ are also chosen for some simulations) the electron mass. Thus $m_{i} = 25m_{e}$ ($40m_e$, $50m_e$ for some simulations), where $m_{i}$ and $m_{e}$ represent the rest masses of the ion and electron species, respectively. We have provided the laser and plasma simulation parameters in normalized units alongside their typical possible values in standard units in Table~{\ref{table:simulationtable}}. As per the convention, frequencies have been normalised by the electron plasma frequency ($\omega_{pe}$), lengths by the electron skin depth ($c/\omega_{pe}$), and the electric and magnetic fields by $m_ec\omega_{pe}e^{-1}$. \begin{table} \caption{Values of simulation parameters in normalized and standard units} \label{table:simulationtable} \begin{center} \begin{tabular}{|c|c|c|} \hline \color{red}Parameters & \color{red}Normalized value & \color{red}Values in SI units\\ \hline \hline \multicolumn{3}{|c|}{\color{blue}Plasma Parameters} \\ \hline $n_0$ & $1.0$ & $1.34\times10^{20} cm^{-3}$ \\ \hline $\omega_{pe}$ & $1.0$ & $0.67\times10^{15} rad/s$\\ \hline $\omega_{pi}$ ($M/m = 25$) & $0.2$ & $0.13\times10^{15} rad/s$\\ \hline \multicolumn{3}{|c|}{\color{blue}Laser Parameters} \\ \hline $\omega_{l}$ & $0.30$ & $0.2 \times10^{15} rad/s$\\ \hline $\lambda_{l}$ & $21$ & $9.42 \mu m$\\ \hline Intensity & $a_0 = 0.5$ &$3.5\times 10^{19} W/m^2$\\ \hline \end{tabular} \end{center} \end{table} We have considered different configurations of the incident laser polarization by appropriately choosing the parameter $\alpha_i$ (it is $0$ for linear polarization, and $\pm 1$ for right (RCP) and left (LCP) circular polarization, respectively).
Here the incident and transmitted laser electric fields are denoted as $\vec{E}_i = \Tilde{E}_{i}\left(\hat{y} + i\alpha_i\hat{z}\right)e^{-i\omega t}$ and $\vec{E}_t = \Tilde{E}_{t}\left(\hat{y} + i\alpha_t\hat{z}\right)e^{-i\omega t}$, respectively, where $\alpha_i$ and $\alpha_t$ denote the polarization of the incident and transmitted EM pulses, respectively. The applied external magnetic field $B_0$ in normalized units has been varied from $2.5$ to $10$ for our simulation studies, as tabulated in Table~\ref{table:simulationCases}. The four different cases of study carried out by us correspond to four different choices of the incident laser EM field parameters. For case (A) and case (B), a linearly polarized laser ($\alpha_i = 0$) propagating along the $X$-direction is considered. The two choices of magnetic field ensure that, for the given laser frequency, with $B_0 = 2.5$ in case (A) only the $R$ mode lies in the pass band of the dispersion curve, while for the higher value of $B_0 = 10$ in case (B) both the $R$ and $L$ EM modes are in the pass band. The polarization of the laser is taken to be RCP ($\alpha_i = +1$) and LCP ($\alpha_i = -1$) in cases (C) and (D), respectively. The transmitted wave also has the same polarization as the incident wave in these two cases, as listed in Table~\ref{table:simulationCases}. A fifth and a sixth possibility, of choosing an LCP incident EM wave with an external magnetic field of $B_0 = 2.5$ and an RCP incident wave for $B_0 = 10$, respectively, do not permit any EM wave propagation inside the plasma and hence have not been reported. In Table~\ref{table:simulationCases}, $\omega_{cs} = \frac{q_sB_0}{m_s}$ is the gyro-frequency of the species $s$ of the plasma (where $s = e,i$ corresponds to electron and ion particles) in the external magnetic field $B_0$. \begin{table} \caption{List of simulation cases, incident and transmitted laser polarization, and external magnetic field parameters} \label{table:simulationCases} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \color{red}Cases & \color{red}Incident Laser & \multicolumn{3}{|c|}{\color{red}Magnetic Field Parameters} & \color{red}EM in Plasma\\ \hline & \color{blue}$\alpha_i$ & \color{blue}$B_0$ &\color{blue}$\omega_{ce}$ & \color{blue}$\omega_{ci}$ & \color{blue}$\alpha_t$\\ \hline \hline \color{blue} Case A & Linear $\alpha_i = 0$ & $B_0 = 2.5$ & $2.5$ & $0.1$ & R-wave $\alpha_t = +1$ \\ \hline \color{blue} Case B & Linear $\alpha_i = 0$ & $B_0 = 10$ & $10$ & $0.4$& R and L-waves $\alpha_t = \pm 1$ \\ \hline \color{blue} Case C & RCP $\alpha_i = +1$ & $B_0 = 2.5$ & $2.5$ & $0.1$& R-wave $\alpha_t = +1$ \\ \hline \color{blue} Case D & LCP $\alpha_i = -1$ & $B_0 = 10$ & $10$ & $0.4$& L-wave $\alpha_t = -1$ \\ \hline \end{tabular} \end{center} \end{table} \section{Observations} \label{sec:Observations} In a recent publication, Goswami et al. \cite{goswami2021ponderomotive} employed a similar geometry to study the interaction of a laser pulse with a magnetized plasma. The study had interestingly shown the excitation of electrostatic perturbations at the electron plasma frequency. Since the background plasma is overdense, such a perturbation cannot be understood through a parametric excitation process. Instead, it was shown that the difference in the ponderomotive pressure experienced by electrons and ions led to charge separation, leading to plasma oscillations.
Here, on the other hand, we demonstrate that with a smoother laser pulse profile (consequently weakening the ponderomotive pressure term) one can excite a parametric instability. Since the plasma is overdense, only the Brillouin scattering process is observed.
\begin{figure*} \centering \includegraphics[width=6.1in]{Figures/DispersionCurve.pdf} \caption{Dispersion curves for R-mode (a, b) and L-mode (c, d). Here $B_0$ is the normalized magnitude of the applied external magnetic field. Fig. (a), (c) are for $B_0 = 2.5$ and Fig. (b), (d) are for $B_0 = 10$. Region-1 and Region-3 in each subplot represent the passband of the dispersion relation. The stopband is represented by Region-2. The encircled red dot in each subplot represents the incident laser frequency. Laser frequency $\omega_l = 0.3\omega_{pe}$ lies in the passband of the R-wave for both $B_0 = 2.5$ and $B_0 = 10$. For the L-wave, $\omega_l = 0.3\omega_{pe}$ lies in the stopband for $B_0 = 2.5$ and in the passband for $B_0 = 10$.} \label{fig:DispersionCurve} \end{figure*}
\begin{figure*} \centering \includegraphics[width=6.0in]{Figures/EyPlotsTime.pdf} \caption{Figure shows the time evolution of the $y$ component of the EM wave inside the plasma for different values of the external magnetic field $B_0$ and the polarization of the incident laser pulse for cases (A, B, C, D). At $t=0$, the EM wave is incident at the vacuum-plasma boundary ($x=400$). The pump wave broadens with propagation in the bulk of the plasma ($t = 1200$). Further propagation leads to side-band EM scattering ($t=3000$).} \label{fig:EyPlots} \end{figure*}
We show in Fig.\ref{fig:DispersionCurve} the dispersion curves for both $R$ and $L$ waves for the two values of the magnetic field, $2.5$ and $10$ respectively. The incident laser frequency of $\omega_l = 0.3 \omega_{pe}$ lies in the pass band of only the $R$ wave for the case of $B_0 = 2.5$. For the other case of $B_0 = 10$ the laser frequency lies in the pass band of both $R$ and $L$ waves. The two waves, however, have distinct phase and group speeds. The various subplots (A, B, C, D) of Fig.\ref{fig:EyPlots} correspond to the four cases (A, B, C, D) of study listed in Table-\ref{table:simulationCases}. The $y$ component of the electric field $E_y$ has been shown at three different times in this figure as a function of $x$. While in subplots (A) and (B) the incident pulse has linear polarization, for (C) and (D) the incident pulse has been chosen to have right and left circular polarization respectively. The pulse in red color shows $E_y$ for the incident EM pulse at $t = 0$. It is placed in the vacuum region initially. The plasma starts from $x= 400c/\omega_{pe}$. The blue and green pulses in the figure depict the $E_y$ component of the electric field at $t = 1200 \omega_{pe}^{-1}$ and $t = 3800 \omega_{pe}^{-1}$ respectively. For $B_0 = 2.5$, both the linearly and the right circularly polarised incident lasers, shown in subplots (A) and (C), propagate as $R$ waves inside the plasma. This is so because the laser frequency lies in the stop band of the $L$ wave; the left circularly polarised incident radiation gets totally reflected in this particular case. For case (B) the incident linearly polarised EM pulse generates both $L$ and $R$ waves, as the frequency lies inside the pass bands of both modes. The $L$ and $R$ waves get spatially separated as they propagate inside the plasma due to their different group speeds. The $R$ wave, being faster, moves ahead. For case (D), the incident laser pulse is chosen to have left circular polarization.
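The pass bands and stop bands summarised in Fig.\ref{fig:DispersionCurve} and Table-\ref{table:simulationCases} can be checked against the textbook cold, magnetized two-species dispersion relations for the $R$ and $L$ modes. The sketch below is only a minimal check in the normalized units used here: it assumes the standard form of the refractive index with both electron and ion contributions, which is our assumption rather than the exact dispersion solver used for the figure. For the $L$ wave at $B_0=10$ it also returns a pump wavenumber close to the value $k_0 \simeq 0.41$ extracted from the simulation below.

\begin{verbatim}
import numpy as np

def n2_RL(omega, B0, mass_ratio=25.0):
    """Squared refractive index of the R and L modes in a cold,
    magnetized two-species plasma (normalized units, omega_pe = 1)."""
    wce = B0                      # electron cyclotron frequency
    wci = B0 / mass_ratio         # ion cyclotron frequency
    wpe2, wpi2 = 1.0, 1.0 / mass_ratio
    n2_R = 1 - wpe2/(omega*(omega - wce)) - wpi2/(omega*(omega + wci))
    n2_L = 1 - wpe2/(omega*(omega + wce)) - wpi2/(omega*(omega - wci))
    return n2_R, n2_L

omega_l = 0.3
for B0 in (2.5, 10.0):
    for name, n2 in zip(("R", "L"), n2_RL(omega_l, B0)):
        if n2 > 0:
            k = np.sqrt(n2) * omega_l     # k = n * omega (c = 1)
            print(f"B0={B0:5}: {name}-mode propagates, k ~ {k:.2f}")
        else:
            print(f"B0={B0:5}: {name}-mode is in a stop band (n^2 < 0)")
\end{verbatim}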
In case (D) this left circularly polarized pulse propagates as an $L$ wave inside the plasma. Thus cases (A) and (C) show the propagation of pure $R$ waves inside the plasma and case (D) shows pure $L$ wave propagation. Case (B), on the other hand, depicts the propagation of both $L$ and $R$ waves. For these cases one can therefore study the propagation of the $R$ and $L$ waves separately, without interference from the other wave.
\begin{figure*} \centering \includegraphics[width=6.0in]{Figures/ExPlots.eps} \caption{The figure shows the time evolution of the $x$ component of the electric field inside the plasma for cases (A, B, C, D). Initially the amplitude of $E_x$ is small ($t=1200$). It is evident from the figure that as the Brillouin scattering starts, large-amplitude electrostatic fluctuations are observed in the bulk of the plasma ($t=3000$ and $t=3800$).} \label{fig:ExPlots} \end{figure*}
\begin{figure*} \centering \includegraphics[width=6.0in]{Figures/DensPlots.eps} \caption{The figure shows the electron ($n_e$) and ion ($n_i$) density fluctuations at $t = 3800$ for the cases (A, B, C, D). The zoomed plot alongside indicates that the electron and ion density perturbations have a similar form. This is evidence of the Brillouin scattering phenomenon.} \label{fig:DensPlots} \end{figure*}
From Fig.\ref{fig:EyPlots} it is clear that as the EM pulse propagates inside the plasma, electromagnetic field disturbances get generated behind the pulse. These disturbances have a higher amplitude when the incident laser pulse has circular polarisation. This may be because the power carried by each of the propagating $L$ and $R$ waves in the linear case is low compared to the circularly polarized case. Associated with these electromagnetic disturbances one can observe electrostatic perturbations, which are shown as the plot of $E_x$ in Fig.\ref{fig:ExPlots} at three different times. The corresponding electron and ion charge density perturbations are shown in Fig.\ref{fig:DensPlots} for $t = 3800$. The zoomed plot alongside shows that the electron (solid blue line) and ion (red dashed line) density perturbations have a similar form. In the next section we try to characterize these perturbations and provide evidence that they represent the Brillouin scattering phenomenon.
\section{Characterization of the observed excitation as a Brillouin process} \label{sec:CharacterizationOfModes}
\begin{figure*} \centering \includegraphics[width=6.0in]{Figures/SpaceFFTExEyLCP.pdf} \caption{Figure gives the spatial Fast Fourier Transform (FFT) of (a) the scattered wave, (b) the pump wave, and (c) the electrostatic (ES) modes for case D. In this case, the incident laser pulse is left circularly polarized (LCP). The FFT is performed at time $t = 3000$ in the bulk of the plasma, when the scattered waves and ES modes are being generated.} \label{fig:SpaceTimeFFTLCPParametricInstability} \end{figure*}
We evaluate the spatial Fourier transform (FFT) spectra for the electromagnetic $E_y$ and electrostatic $E_x$ fluctuations. The spectral wavenumber peak for one particular case (D) of the $L$ wave propagation has been shown in Fig.\ref{fig:SpaceTimeFFTLCPParametricInstability}. The FFT of the $E_y$ field using only the pump pulse region shows a peak at $k_0 = 0.41$ (which satisfies the dispersion relation for the $L$ wave). However, when the FFT of $E_y$ is taken in a region just behind the main pulse, the peak of the spectrum occurs slightly shifted, at $k_1=0.46$. This appears to be the scattered EM radiation.
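The peak wavenumbers quoted here (and the electrostatic peak discussed next) are obtained from spatial FFTs restricted to windows that either contain the pump pulse or lie just behind it. A minimal NumPy sketch of such a windowed peak extraction is given below; the array names and window indices are placeholders to be read off from the actual field snapshots.

\begin{verbatim}
import numpy as np

def dominant_k(field, dx, i_min, i_max):
    """Wavenumber of the strongest Fourier component of
    field[i_min:i_max], sampled on a uniform grid of spacing dx."""
    window = field[i_min:i_max]
    window = window - window.mean()            # remove the k = 0 component
    spec = np.abs(np.fft.rfft(window))**2
    k = 2 * np.pi * np.fft.rfftfreq(window.size, d=dx)
    return k[np.argmax(spec)]

# Placeholders: Ey, Ex are 1-D field snapshots at t = 3000, dx = 0.01 c/w_pe
# k0 = dominant_k(Ey, 0.01, i_pump_lo, i_pump_hi)   # pump region
# k1 = dominant_k(Ey, 0.01, i_back_lo, i_back_hi)   # just behind the pulse
# k2 = dominant_k(Ex, 0.01, i_back_lo, i_back_hi)   # electrostatic mode
# Backscattering: magnitudes satisfy k2 ~ k0 + k1, e.g. 0.87 ~ 0.41 + 0.46
\end{verbatim}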
The Fourier spectrum of the electrostatic field $E_x$ peaks at $k_2 = 0.87$. It is interesting to note that the wave-number matching condition for the parametric process is satisfied, with $k_2 = k_0 + k_1$ in magnitude (the scattered wave propagates opposite to the pump). The electrostatic perturbation being at shorter scales than the pump and the scattered EM radiation shows that it is a back-scattering process that is taking place. Furthermore, as shown in Fig.\ref{fig:DensPlots}, the electron and ion density perturbations seem to occur in phase, indicating that it is a Brillouin process. We have also repeated these simulations for the case of static ions (corresponding to $m_i = \infty$). For such a case no scattering is observed. The simulations were carried out for a cold plasma. The question then arises as to what plays the role of an effective temperature for exciting an ion acoustic perturbation in the Brillouin process. We feel that the velocity acquired by the electrons as a result of the ponderomotive pressure from the EM pulse may be responsible for it. In fact, similar observations were noted in the work by Sundar et al. \cite{sundar2011relativistic}, where a Brillouin scattering process was observed for a flat-top solitonic structure in a fluid simulation. We discuss the role played by the ponderomotive pressure term further, by choosing different laser profiles, in section \ref{sec:LaserProfiles}.
\begin{figure*} \centering \includegraphics[width=5.0in]{Figures/GrowthRateRCP.pdf} \caption{Figure shows the time evolution of the electrostatic energy for (i) $m_i = 25m_e$ and (ii) $m_i = 50m_e$ for case C, when the incident laser pulse is right circularly polarized (RCP). The growth is calculated using the slope of the $\log\left(\frac{E_x^2}{2}\right)$ vs. time plot. For each curve, the slope is taken along the black dashed line.} \label{fig:GrowthRateRCP} \end{figure*}
\begin{figure*} \centering \includegraphics[width=5.0in]{Figures/GrowthRateLCP.pdf} \caption{Figure shows the time evolution of the electrostatic energy for (i) $m_i = 40m_e$ and (ii) $m_i = 50m_e$ for case D, when the incident laser pulse is left circularly polarized (LCP). The growth is calculated using the slope of the $\log\left(\frac{E_x^2}{2}\right)$ vs. time plot. For each curve, the slope is taken along the black dashed line.} \label{fig:GrowthRateLCP} \end{figure*}
In Fig.\ref{fig:GrowthRateRCP} we have shown the evolution of the electrostatic field energy for case (C) of $R$ wave propagation for two different ion masses ($25 m_e$ and $50 m_e$). The other parameters for the laser pulse and the plasma medium are the same. It is interesting to observe that the initial slope for the two cases is identical. Subsequently a second phase with a faster growth rate occurs in both cases. The onset of the second phase occurs at an earlier time for ion mass $m_i = 25m_e$ compared to $m_i = 50m_e$. Also, the growth rate in this phase, as discerned from the slope, is higher for the lower ion mass. This can be understood by realising that the initial small rise of the electrostatic energy arises as a result of ponderomotive forcing from the laser pulse, during which the electrons acquire a certain kinetic energy which acts as an effective temperature for facilitating the Brillouin scattering process. The second phase corresponds to the Brillouin scattering instability, during which the two growth rates differ. After the second phase, nonlinear effects seem to set in, which slow down the growth.
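The growth rates referred to above and in Figs.\ref{fig:GrowthRateRCP} and \ref{fig:GrowthRateLCP} follow from the slope of $\log(E_x^2/2)$ versus time in the exponential-growth phase. A short fitting sketch is given below; the time series, the fit window, and the convention that the field amplitude grows as $e^{\gamma t}$ are placeholders and assumptions of ours.

\begin{verbatim}
import numpy as np

def growth_rate(t, es_energy, t_lo, t_hi):
    """Fit log(es_energy) linearly in t on [t_lo, t_hi] and return gamma,
    with the convention es_energy ~ exp(2*gamma*t)."""
    mask = (t >= t_lo) & (t <= t_hi)
    slope, _ = np.polyfit(t[mask], np.log(es_energy[mask]), 1)
    return 0.5 * slope      # field amplitude grows as exp(gamma*t)

# Placeholder usage: t and Ex2 are the diagnostic time axis and sum(Ex^2)/2
# gamma = growth_rate(t, Ex2, t_lo=2000.0, t_hi=3000.0)
\end{verbatim}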
In Fig.\ref{fig:GrowthRateLCP} we have plotted the evolution of the electrostatic energy for the case of $L$ wave propagation, case (D). A similar behaviour is observed in this case as well.
\begin{figure*} \centering \includegraphics[width=6.0in]{Figures/WavenumberPower.pdf} \caption{Figure shows the spatial Fast Fourier Transform (FFT) of the electrostatic field at various spatial blocks with respect to the pump wave at time $t = 3000$. Different subplots indicate (i) the electromagnetic pulse, both pump and scattered wave, $E_y$; (ii) the electrostatic fluctuations $E_x$; the FFT spectra for different spatial blocks ((iii) from $1200$ to $1400$, (iv) from $1400$ to $1800$); and (v) the variation of the power spectrum of the electrostatic component $|E_x(k)|^2$ with the choice of spatial block.} \label{fig:PowerChangeExFFT} \end{figure*}
We now briefly discuss the nonlinear phase of the instability. We choose a specific time of $t = 3000$ for case D, at which the evolution of the electrostatic energy is in its third phase, showing a slow-down in the growth. In Fig.\ref{fig:PowerChangeExFFT} we show the spatial Fourier transform of the electrostatic field at various spatial blocks with respect to the pump pulse, as illustrated in the figure. It can be observed that when the spatial FFT is taken just behind the pulse it shows a clear peak ($k_2 = 0.87$). At longer distances behind the pulse another peak in the spectrum starts appearing (e.g.\ $k= 0.97$). Thereafter the spectrum becomes broad, with multiple scales, indicating the presence of nonlinear effects as expected. We have tracked the power in two different wave numbers by carrying out Fourier transforms in several localised spatial blocks, and observe that while the power in $k = 0.87$, satisfying the Brillouin condition, maximizes in the spatial region overlapping with the pump pulse, the power in the second mode picks up thereafter.
\section{Role of smooth and sharp laser profiles} \label{sec:LaserProfiles}
\begin{figure*}[ht] \centering \includegraphics[width=6.0in]{Figures/GrowthProfilesFinal.pdf} \caption{Figure highlights the effect of the incident laser profile on the growth of the Brillouin scattering process. Different subplots indicate (i) the evolution of the electrostatic energy ((ii) zoomed plot) normalized by the total input energy of the incident laser, (iii) the growth rate of Brillouin scattering for different laser profiles, and (v) the four laser profiles used in the simulation. It is evident from the plot that the initial ponderomotive-pressure-driven electrostatic fluctuations are laser-profile dependent, and hence so is the onset of the Brillouin scattering mechanism. } \label{fig:LaserProfileEffects} \end{figure*}
In the work by Goswami et al. \cite{goswami2021ponderomotive}, published earlier for the same geometry, a very sharp laser profile was chosen. There we observed the ponderomotive-force-driven electrostatic fluctuations. For the present studies we have chosen a smoother laser profile. In this section we describe a study where we carry out simulations with varying laser profiles. The four laser profiles used are shown in Fig.\ref{fig:LaserProfileEffects}, subplot (v). The sharpest profile has been denoted by the violet line with triangular dots. It can be observed from subplot (iii) that there is an initial growth of electrostatic energy, which is highest for this particular case. A zoomed picture of this region, shown in subplot (iv), also confirms the same.
This is the time regime where the initial ponderomotive-pressure-driven electrostatic fluctuations occur. The thermalization of the electron kinetic energy acquired through this process provides the effective temperature for the Brillouin process to occur later. The onset of the Brillouin process occurs earliest for the fourth profile, where this initial energy gain is fastest. Since the total energy associated with the pulse may differ slightly between these profiles, we have also compared the evolution of the electrostatic energy normalised by the total input energy from the laser pulse. It can be observed that the normalised electrostatic energy in the case of profile 4 is higher than for all other profiles once the scattering process takes place.
\section{Conclusion} \label{sec:Conclusion}
With the help of OSIRIS-4.0, we have carried out comprehensive PIC simulation studies to understand the interaction of a laser pulse propagating in an overdense magnetized plasma for which the external magnetic field is directed along the laser propagation direction. It has been shown that the laser energy can get converted to electrostatic energy by two distinct processes. The ponderomotive-force-driven process occurs first and is found to depend on the sharpness of the laser profile. Such an excitation process was also shown in an earlier publication by us \cite{goswami2021ponderomotive}, where a very sharp laser pulse profile was purposely chosen. Here we provide evidence of the Brillouin scattering process taking place. We also observe that there is a synergy between the ponderomotive process leading to charge density fluctuations and the Brillouin back-scattering process. The Brillouin process occurs subsequently, and the timing of its onset is related to the electrostatic energy gained during the first process.
\begin{figure*} \centering \includegraphics[width=6.0in]{Figures/PhaseElectron.pdf} \caption{Momentum phase plot for electrons.} \label{fig:PhaseElectron} \end{figure*}
\begin{figure*} \centering \includegraphics[width=6.0in]{Figures/PhaseIons.pdf} \caption{Momentum phase plot for ions.} \label{fig:PhaseIons} \end{figure*}
\section{Acknowledgment}
The authors would like to acknowledge the OSIRIS Consortium, consisting of UCLA and IST (Lisbon, Portugal), for providing access to the OSIRIS-4.0 framework, work supported by NSF ACI-1339893. This research work has been supported by the Core Research Grant No. CRG/2018/000624 of the Department of Science and Technology (DST), Government of India. We also acknowledge support from the J C Bose Fellowship Grant of A D (JCB-000055/2017) from the Science and Engineering Research Board (SERB), Government of India. The authors thank the IIT Delhi HPC facility for computational resources. Laxman thanks the Council of Scientific and Industrial Research (Grant No. 09/086(1442)/2020-EMR-I) for funding the research.
\section{References} \bibliographystyle{unsrt}
{ "arxiv_id": "2302.11317", "language": "en", "timestamp": "2023-02-23T02:13:32", "url": "https://arxiv.org/abs/2302.11317", "yymm": "2302" }
\section{Introduction} Numerical analytic continuation (AC) solves the following inversion problem, \begin{equation} G(\tau)=\int d\omega K(\tau,\omega) A(\omega). \end{equation} The goal of AC is to extract the real-frequency spectrum $A(\omega)$ from the imaginary-time correlation function $G(\tau)$, which is typically obtained by Monte Carlo simulation. The spectrum $A(\omega)$ is required to be non-negative at every $\omega$-point and subject to a sum rule $\int d\omega A(\omega)=\text{const}.$ $K(\tau, \omega)$ is the inversion kernel, the form of which depends on the specific problem being handled. This study involves two kinds of inversion kernels $K(\tau,\omega)$: $K_F(\tau,\omega)=e^{-\tau \omega}/(1+e^{-\beta \omega})$ and $K_S(\tau,\omega)=e^{-\tau \omega}$. $K_F(\tau,\omega)$ typically appears when calculating single-particle excitation spectra from measured Green's functions\cite{white1989monte,silver1990maximum}. $K_S(\tau,\omega)$ typically arises when extracting dynamic structure factors from spin-spin correlation functions in some spin models\cite{henelius2000monte}. To carry out actual calculations, $\tau$ and $\omega$ are discretized as $\tau_1,\cdots, \tau_M$ and $\omega_1,\cdots, \omega_N$. The target problem can then be reformulated as $G(\tau_i)=\sum_{j=1}^N K(\tau_i, \omega_j) A(\omega_j) \Delta \omega.$ For simplicity, $\Delta \omega$ will be absorbed into $A(\omega_j)$ by $A(\omega_j)\Delta \omega \rightarrow A(\omega_j)$ in further discussions. The sum rule is then discretized to be $\sum_{j=1}^N A(\omega_j)=\text{const}.$ This looks like a simple matrix-inversion problem at first sight, but it turns out to be a notoriously challenging task due to its ill-conditioned nature. In almost all cases, the corresponding condition numbers go far beyond what the machine precision of existing computers can tolerate. Several methods have been proposed to solve this problem, such as the Maximum Entropy method (Maxent)\cite{silver1990maximum} and Stochastic Analytic Continuation (SAC)\cite{sandvik1998stochastic}. Both of them succeed in extracting empirically correct spectra. However, these methods usually demand highly accurate simulated correlation functions $G_\text{sim}(\tau)$. Neural networks (NNs)\cite{gurney2018introduction}, as a machine-learning technique, have experienced great success in a variety of physics-related domains. From the perspective of machine learning, analytic continuation can be viewed as a vector-to-vector prediction task, where $G(\tau_i)$ is mapped to $A(\omega_j)$. To construct a neural network capable of performing analytic continuation, both the network topology and the training set should be built appropriately. The common framework for this task contains several steps: (1) Build a neural network. (2) Synthesize spectra $A_\text{train}$ for training purposes. (3) Calculate $G_\text{train}$ by the forward mapping $A \rightarrow G$; note that the forward mapping is well-conditioned, so $G_\text{train}$ can be determined exactly. (4) Train the network using the dataset pair $(G_\text{train}, A_\text{train})$ so that spectra predicted from $G_\text{train}$ closely match $A_\text{train}$. (5) When developing and testing NNs, synthesize a testing set ($G_\text{test}$, $A_\text{test}$) and evaluate the performance of the trained NN on it.
When using NNAC in actual tasks, apply the trained network to predict spectra $A_\text{pred}$ from simulated correlation functions $G_\text{sim}$ generated by Monte Carlo simulations. To mimic real-world simulated data, noise is usually added to the correlation functions obtained from synthetic spectra, such as $G_\text{train}$ and $G_\text{test}$. In a relatively early study, Hongkee Yoon\cite{yoon2018analytic} and co-authors designed a network mainly based on fully-connected layers (FCLs)\cite{gurney2018introduction}. In their research, both training and testing sets are obtained from synthetic Gaussian-type multi-peak spectra. Gaussian-distributed noise is added to $G_\text{train}$ and $G_\text{test}$. The trained NN performs well on the testing set, as the predicted spectra are very close to the synthetic testing spectra. Several different network structures\cite{arsenault2017projected,xie2021analytic,huang2022learned,zhang2022training} trained on similar Gaussian-type datasets have also been proposed. In addition to synthetic datasets, neural-network-based analytic continuation (NNAC) has also been examined on some exactly solvable models, such as the one-dimensional transverse-field Ising model\cite{yao2022noise} and a harmonic oscillator linearly coupled to an ideal environment\cite{fournier2020artificial}. In these two studies, artificial training sets ($G_\text{train}$,$A_\text{train}$) are generated from exactly solved correlation functions and the corresponding spectra. Different spectra in the training set correspond to different parameter values in the Hamiltonian being studied. Target spectra $A_\text{pred}$ are predicted from correlation functions $G_\text{sim}$ simulated using Monte Carlo techniques. Ref.~\cite{yao2022noise} points out that the neural network's prediction performance can be improved by adding uniform noise to the exactly solved Green's functions at each imaginary time in the training set. In principle, we have no knowledge about the precise form of the spectra to be predicted before the target spectra are actually predicted; in particular, we cannot assume beforehand that they are of Gaussian type. This is actually an intriguing topic dubbed ``data leakage''\cite{kaufman2012leakage} in the field of machine learning. Data leakage occurs when information is used in the training process but is not expected to be available at prediction time. All aforementioned articles about NNAC have the issue of data leakage at some level. In practice, we usually apply numerical analytic continuation to models that are not exactly solvable, where it is not possible to construct training sets from exactly solved spectra. To design the training set, hints from experiments or traditional AC approaches such as Maxent should also be explored. It should be mentioned that NNAC is useful even when spectra are already obtained from Maxent: NNAC performs better at least in highly noisy cases, as described in Ref.~\cite{fournier2020artificial}. This topic will also be elaborated upon in this paper. In general, domain knowledge\cite{jordan2015machine,domingos2012few}, especially possible spectrum peak shapes, should be incorporated when designing the training set as much as feasible, but without data leakage. We then expect the trained NN to generalize\cite{giles1987learning,novak2018sensitivity} well enough to handle unobserved correlation functions like $G_\text{test}$ and $G_\text{sim}$. Intuitively, one expects better predictions of spectra when more information is incorporated.
Monte Carlo simulations can provide more information beyond the measured correlation functions, such as the statistical errors of $G(\tau)$. Specifically, they can provide information regarding two aspects of statistical errors: the measured errors $R(\tau_i)$ of $G(\tau_i)$ at each $\tau_i$, and the covariance of correlation functions at different imaginary times. This work avoids data leakage while synthesizing the training sets and incorporates information about statistical errors to improve the performance of NNAC. With these means, NNAC has the potential to become a usable algorithm in practical applications and a significant component of the Monte Carlo--analytic continuation toolchain. In section 2, NNAC of kernel $K_F(\tau,\omega)$ is examined on synthetic data, where datasets synthesized from spectra with different peak shapes are addressed. In section 3, NNAC of kernel $K_S(\tau,\omega)$ is applied to the one-dimensional Heisenberg chain as a real-world example of an AC problem. Conclusions are presented in the final section.
\section{NNAC on Synthetic Datasets} In this section, we design and test NNs on synthetic datasets. Principles for generating training sets will be developed. We first discuss three types of datasets, the training framework, and the actual training process. Noise level matching between the training and the testing set is then explored, and the resulting spectra are compared with those from Maxent. Finally, the impact of measured noise shapes and time-displaced correlations is investigated.
\subsection{Preparation of Dataset} Multi-peak spectra $A(\omega)$ are produced by summing over single peaks $F(\omega)$. \begin{equation} A(\omega)=\frac{1}{Z} \sum_i F_i(\omega). \end{equation} In the formula above, $Z$ is a scaling constant ensuring that $A(\omega)$ obeys the sum rule. In this section, we assume $\int d \omega A(\omega)=1$ for convenience. This paper involves three distinct peak types: asymmetric exponential power (ASEP), skew Gaussian (Skew), and Lorentz. The ASEP single-peak curve reads: \begin{equation} F^\text{ASEP}(\omega)=\left \{ \begin{aligned} &h \exp \big [-(\frac{m-\omega}{a_1})^{b_1} \big ], \omega<m ; \\ &h \exp \big [-(\frac{\omega-m}{a_2})^{b_2} \big ],\omega\geq m . \end{aligned} \right. \end{equation} In the above formula, $h$, $m$, $a_1$, $a_2$, $b_1$, $b_2$ are all control parameters. In this study, we set $m\in [-5,5]$, $a_1,a_2 \in [0.3,3]$, $b_1,b_2 \in [1,3]$, $h \in [0.2,1]$. The Skew peak takes the form \begin{equation} F^\text{Skew}(\omega)=\left \{ \begin{aligned} &0 , z\leq 0; \\ &\frac{h}{az}\exp(-\frac{y^2}{2}),z>0. \end{aligned} \right. \end{equation} $z(\omega)= 1-k\frac{\omega-m}{a}$ and $y=\frac{1}{k}\ln(z)$. Control parameters are $m\in [-2,2]$, $a \in [0.5,1]$, $k \in [-1,1]$, and $h \in [0.2,1]$. The Lorentz curve takes the relatively simple form \begin{equation} F^\text{Lorentz}(\omega)=h\frac{1}{(\omega^2-a^2)^2+g^2\omega^2}, \end{equation} where $g \in [1,2]$, $a \in [2,4]$ and $h \in [0.2,1]$. In this study, we investigate spectra containing one to four peaks. At least $10^5$ samples are generated for each peak number by randomly selecting control parameters. In other words, a single dataset includes at least $4\times 10^5$ samples. Training and testing sets of the same peak type are independently created. The ASEP-type dataset has the most control parameters among the three types and thus contains a greater diversity of spectra, while not explicitly containing spectra of the Skew-type or Lorentz-type datasets.
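To make the construction of the training pairs $(G_\text{train}, A_\text{train})$ concrete, the sketch below generates one random ASEP-type spectrum, normalizes it to the sum rule, and computes the corresponding correlation function through the discretized Fermi kernel with additive Gaussian noise. This is only a minimal illustration of steps (2)--(3) of the framework; the grids and $\beta$ anticipate the discretization specified in the next subsection, and the noise level $10^{-3}$ is one of the values studied later.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta = 16.0
tau = np.linspace(0.0, beta, 512)           # imaginary-time grid
omega = np.linspace(-15.0, 15.0, 1024)      # frequency grid

def asep_peak(w, m, a1, a2, b1, b2, h):
    """Asymmetric exponential power peak."""
    d = np.abs(w - m)
    left  = h * np.exp(-(d / a1) ** b1)
    right = h * np.exp(-(d / a2) ** b2)
    return np.where(w < m, left, right)

def random_asep_spectrum(n_peaks):
    A = np.zeros_like(omega)
    for _ in range(n_peaks):
        A += asep_peak(omega,
                       m=rng.uniform(-5, 5),
                       a1=rng.uniform(0.3, 3), a2=rng.uniform(0.3, 3),
                       b1=rng.uniform(1, 3),   b2=rng.uniform(1, 3),
                       h=rng.uniform(0.2, 1))
    return A / A.sum()                      # sum rule: sum_j A(w_j) = 1

# Fermi kernel K_F = exp(-tau*w)/(1+exp(-beta*w)), evaluated stably
# as 1/(exp(tau*w) + exp((tau-beta)*w)) via logaddexp.
tw = tau[:, None] * omega[None, :]
K = np.exp(-np.logaddexp(tw, tw - beta * omega[None, :]))

A_train = random_asep_spectrum(n_peaks=rng.integers(1, 5))   # 1 to 4 peaks
G_clean = K @ A_train
G_train = G_clean + rng.normal(0.0, 1e-3, size=G_clean.shape)  # noise level
\end{verbatim}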
We expect the neural network to learn from the ASEP-type dataset and generalize effectively enough to achieve good performance on the other two datasets. It should be noted that, unlike in some previous studies, we will not examine Gaussian-type spectra here, as they are explicitly included in the ASEP-type dataset when $b_1=b_2=2$ and $a_1=a_2$. Such explicit inclusion rarely occurs in real-world AC tasks, and the performance of NNAC would be overestimated on Gaussian-type testing sets. The imaginary time $\tau \in [0,16]$ is discretized uniformly into 512 pieces and the frequency domain $\omega \in [-15,15]$ is discretized into 1024 pieces. $\beta$ is fixed to be $16$ in the Fermion kernel $e^{-\tau \omega}/(1+e^{-\beta \omega})$.
\subsection{Training Framework} Convolutional neural networks (CNNs)\cite{albawi2017understanding} are employed in this work. FCL-based neural networks were also evaluated in the early stage of this study and proved inferior to CNNs. Adding residual modules\cite{he2016deep} or deep layer aggregation\cite{yu2018deep} did not yield significant improvements either. In the case of deep layer aggregation, both iterative deep aggregation and hierarchical deep aggregation were attempted. Based on the aforementioned factors, we employ the neural network shown in Figure \ref{net_cnn}. First, the 512-length $G(\tau_i)$ is transformed into a $p$-length vector via an FCL (labeled ``Dense'') and then reshaped into a $1 \times p$ matrix. This matrix can be regarded as an image that can be naturally processed by convolution layers. Next, this image is passed to a $q$-channel one-dimensional convolution layer ``Conv1d'', followed by the activation layer ``Swish''. Within the ``Conv1d'' layer, convolution kernels of size $1\times3$ are used. Within the activation layer, the activation function named ``Swish''\cite{ramachandran2017searching} is used. This activation function is both non-monotonic and smooth and may improve the overall performance of the neural network compared to the commonly used ReLU\cite{agarap2018deep} activation function, according to Ref.~\cite{ramachandran2017searching}. This ``convolution $\rightarrow$ activation'' process is carried out $n$ times. The $q$-channel image is then compressed by an average-pooling layer\cite{iosifidis2022deep} and flattened into a $pq/2$-long vector. The flattened vector is mapped to a 1024-long vector by another ``Dense'' layer. Finally, the ``SoftMax'' layer outputs the predicted spectra, for which the sum rule $\sum_j A(\omega_j)=1$ is naturally satisfied after the softmax operation. Tricks to reduce overfitting such as dropout\cite{srivastava2014dropout} are not adopted here. Instead, we recommend enlarging the training set when signs of overfitting emerge, since it is rather cheap to acquire data from synthetic spectra. \begin{figure}[htbp] \centering \includegraphics[width=0.7 \linewidth]{net_cnn.jpg} \caption{The convolution-based structure of the neural network used in this work. Hyper-parameters are chosen to be $n=8$, $p=64$ and $q=64$ in the actual training process.} \label{net_cnn} \end{figure} Hyper-parameters are chosen to be $n=8$, $p=64$, and $q=64$. To select appropriate hyper-parameters, we build an additional ASEP-type validation set, on which we evaluate NNs trained on the ASEP-type training set. When selecting hyper-parameters, the trade-off between performance and training time is taken into account.
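A minimal Keras sketch of the architecture of Figure \ref{net_cnn}, with the hyper-parameters $n=8$, $p=64$ and $q=64$ quoted above, is given below. Details that are not specified in the text (such as the convolution padding or the absence of an activation after the first dense layer) are our assumptions.

\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n=8, p=64, q=64, n_tau=512, n_omega=1024):
    inputs = keras.Input(shape=(n_tau,))            # G(tau_i)
    x = layers.Dense(p)(inputs)                     # 512 -> p
    x = layers.Reshape((p, 1))(x)                   # treat as a 1 x p "image"
    for _ in range(n):                              # n blocks of Conv1d + Swish
        x = layers.Conv1D(q, kernel_size=3, padding="same",
                          activation="swish")(x)
    x = layers.AveragePooling1D(pool_size=2)(x)     # p*q -> p*q/2 values
    x = layers.Flatten()(x)
    outputs = layers.Dense(n_omega, activation="softmax")(x)   # sum rule
    return keras.Model(inputs, outputs)

model = build_model()
model.compile(optimizer="adam", loss=keras.losses.KLDivergence())
# model.fit(G_train, A_train, validation_data=(G_val, A_val),
#           callbacks=[keras.callbacks.EarlyStopping(
#               patience=20, restore_best_weights=True)])
\end{verbatim}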
We use the Kullback-Leibler divergence (KLD)\cite{joyce2011kullback} as the loss function, which takes the form \begin{equation} D_\text{KL}(A_\text{true} || A_\text{pred})=\sum_{j} A_\text{true}(\omega_j) \ln\frac{A_\text{true}(\omega_j)}{A_\text{pred}(\omega_j)}. \end{equation} KLD measures the difference (more precisely, the relative entropy) between the true distribution $A_\text{true}$ and the predicted distribution $A_\text{pred}$, which makes it a natural choice for this task. Other commonly used loss functions include the mean absolute error (MAE) and the mean squared error (MSE), shown below. Like MAE and MSE, KLD is non-negative. \begin{align} \text{MAE}(A_\text{true},A_\text{pred})&=\frac{1}{N}\sum_{j=1}^N \big |A_\text{true}(\omega_j)-A_\text{pred}(\omega_j) \big|\\ \text{MSE}(A_\text{true},A_\text{pred})&=\frac{1}{N}\sum_{j=1}^N \big [A_\text{true}(\omega_j)-A_\text{pred}(\omega_j) \big ]^2 \end{align} Empirically, spectra from NNs with MSE loss are often smoother than those from NNs with MAE loss, since MSE punishes large spectrum differences more severely. In this study, we did not observe a discernible difference in performance between MSE-loss and KLD-loss NNs. NNs are implemented using the Keras toolkit\cite{chollet2015keras} with a TensorFlow\cite{tensorflow2015-whitepaper} backend. The Adam\cite{kingma2014adam} optimizer is used for gradient descent. The early-stopping trick is utilized during training. The training process terminates when the KLD measured on the validation set does not drop for 20 epochs, where the validation set is generated in the same manner as the training set. Trained weights are then restored to those of the epoch with the lowest KLD. Each training task is repeated at least 5 times with different random seeds. KLDs shown in this paper are averaged over NNs trained with different seeds. The training process is depicted in Figure \ref{process}, where both the training set and the testing set are of ASEP type. Errors at noise level $10^{-3}$ are introduced to $G_\text{train}$ and $G_\text{test}$ (the concept of noise level will be discussed later). Relative values of three statistics measured on the testing set are tracked throughout the training process in Figure \ref{process} (a). We track $\text{RMSE}=\sqrt{\text{MSE}}$ instead of $\text{MSE}$ itself because RMSE shares the same dimension as MAE and KLD. Relative loss in this figure is defined as ``loss after this epoch''/``loss after the first epoch''. In Figure \ref{process} (b) we show an example from the testing set of how one predicted spectrum becomes closer to the true spectrum at different KLD levels. Selected checkpoints are indicated by red dots in Figure \ref{process} (a). While visualizing the training process, we only use 1000 samples for each epoch, because the statistics converge too quickly for visualization if the entire training set containing $4 \times 10^5$ samples is used. The complete training set is used in actual AC tasks hereafter. In this study, model training on an RTX3060 graphics card takes approximately 20 minutes on average. This is acceptable in the majority of circumstances, especially compared to the amount of Monte Carlo simulation time saved when highly accurate correlation functions are not required. \begin{figure}[htb] \centering \includegraphics[width=0.9 \linewidth]{train_process.jpg} \caption{Tracking the training process. (a) Relative losses, including KLD, MAE, and RMSE, \textit{w.r.t.} the number of trained epochs.
This so-called relative loss is defined as ``loss after this epoch''/``loss after the first epoch''. (b) A typical example of the convergence process of one predicted spectrum to the true spectrum as KLD decreases. Selected checkpoints are labeled by red dots in (a).} \label{process} \end{figure}
\subsection{Noise Level Matching} Correlation functions measured from Monte Carlo simulations inevitably contain statistical errors. To mimic simulated errors, Gaussian noise is added to $G(\tau_i)$ by $G(\tau_i) \rightarrow G(\tau_i)+R(\tau_i)$, where $R(\tau_i) \sim N(0,\sigma^2)$. Four different noise levels are investigated in this work: $\sigma=10^{-4}, 10^{-3}, 3\times 10^{-3}, 10^{-2}$. Here $\sigma$ sets the typical magnitude of the noise. At this stage, we assume $G(\tau_i)$ to be independently measured for each $i$. In real-world NNAC-based tasks, the noise of $G_\text{sim}$ is measured from the Monte Carlo simulation, and the noise of the training set should be carefully arranged accordingly. In addition, the noise level of the testing set should be the same as that of the simulated data to mimic real-world tasks. When designing the training set, a natural question arises: how should we set the noise level $\sigma_\text{train}$ of the training set when the noise level $\sigma_\text{test}$ of the testing set is known? We train NNs on training sets with different $\sigma_\text{train}$ and apply these NNs to testing sets with different $\sigma_\text{test}$. The corresponding results are shown in Table \ref{tab_mutual_noise} and Figure \ref{mutual_noise}. Table \ref{tab_mutual_noise} contains KLDs of spectra predicted from testing sets with different noise levels $\sigma_\text{test}$ by NNs trained on training sets with different $\sigma_\text{train}$. The smallest KLD in each row (marked red) is obtained when the noise levels of the training set and the testing set match ($\sigma_\text{train}=\sigma_\text{test}$). Performance degrades but remains acceptable when $\sigma_\text{train}$ increases beyond $\sigma_\text{test}$, whereas the degradation is severe when $\sigma_\text{train}<\sigma_\text{test}$. For instance, the KLD is relatively small when $(\sigma_\text{train},\sigma_\text{test})=(10^{-2},10^{-4})$ but is large and unsatisfactory when $(\sigma_\text{train},\sigma_\text{test})=(10^{-4},10^{-2})$. This is because the information of ASEP($\sigma=10^{-4}$) is, in a sense, ``contained'' in ASEP($\sigma=10^{-2}$): if the datasets are large enough, for each curve in ASEP($\sigma=10^{-4}$) we may find similar samples with similar noise realizations in ASEP($\sigma=10^{-2}$), since the noise is randomly drawn, whereas the converse is not true. We train NNs with different noise levels and use them to predict one sample of $G(\tau_i)$ from the testing set with $\sigma_\text{test}=3\times 10^{-3}$ and $\sigma_\text{test}=10^{-2}$, presented in Figure \ref{mutual_noise} (a) and (b), respectively. The resulting spectra become closer to the ground truth when $\sigma_\text{train}$ is closer to $\sigma_\text{test}$. In Figure \ref{mutual_noise} (b), incorrect and unstable peaks are predicted by the NNs trained with $\sigma_\text{train}=10^{-4}$ or $10^{-3}$, whose KLDs are correspondingly large, as seen in Table \ref{tab_mutual_noise}. Note that in this part, data leakage is not intentionally avoided: the training set and the testing set are both of ASEP type.
With the same $\sigma_\text{test}$, the KLD differences caused by different $\sigma_\text{train}$ may be relatively small, and using datasets with different line shapes would introduce unnecessary complexity, possibly resulting in unreliable or even incorrect conclusions. From another perspective, we expect NNs to use the knowledge learned from the training set to predict correct spectra in actual tasks. The performance is usually slightly weakened if the line shapes of the testing set and training set differ. Therefore, we expect NNs with a properly chosen $\sigma_\text{train}$ to at least achieve good results on a testing set with the same line shape. The KLD results here do not represent the actual performance of the NNs in practical tasks. \begin{table}[htbp] \centering \begin{tabular}{||l|l|l|l|l||} \hhline{|t:=====:t|} \diagbox{$\sigma_\text{test}$}{$\sigma_\text{train}$} & $10^{-4}$ & $10^{-3}$ & $3\times10^{-3}$ & $10^{-2}$\\ \hhline{|:=====:|} $10^{-4}$ & \textcolor{red}{0.0137(3)} & 0.0151(4) & 0.0181(2) & 0.0280(1) \\ \hhline{|:=====:|} $10^{-3}$ & 0.0172(1) & \textcolor{red}{0.0164(4)} & 0.0185(2) & 0.0280(1) \\ \hhline{|:=====:|} $3\times10^{-3}$ & 0.045(2) & 0.0268(3) & \textcolor{red}{0.0217(1)} & 0.02854(9) \\ \hhline{|:=====:|} $10^{-2}$ & 0.31(2) & 0.148(6) & 0.060(1) & \textcolor{red}{0.0350(1)} \\ \hhline{|b:=====:b|} \end{tabular} \caption{KLDs of spectra predicted from testing sets with different $\sigma_\text{test}$ by NNs trained on training sets with different $\sigma_\text{train}$. In each row, the smallest KLD (marked red) is obtained when $\sigma_\text{train}=\sigma_\text{test}$. To determine the errors of the KLDs in the table, we train NNs with at least 10 distinct random seeds and calculate the statistical uncertainty of the KLDs of the spectra predicted by these NNs.} \label{tab_mutual_noise} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=0.9 \linewidth]{mutual_noise.jpg} \caption{Illustration of noise level matching. Ground truths in both sub-figures are the same curve. (a) Prediction of spectra from the testing set with $\sigma=3\times 10^{-3}$ by NNs trained with different $\sigma_\text{train}$. The best spectrum is obtained when $\sigma_\text{train}=\sigma_\text{test}=3\times 10^{-3}$. (b) Prediction of spectra from the testing set with $\sigma=10^{-2}$ by NNs trained with different $\sigma_\text{train}$. The best spectrum is obtained when $\sigma_\text{train}=\sigma_\text{test}=10^{-2}$. The predicted spectrum contains unstable peaks at wrong locations when $\sigma_\text{train}=10^{-4}$ or $3\times 10^{-3}$.} \label{mutual_noise} \end{figure}
\subsection{Comparison with Maxent} With the knowledge of noise level matching, $G_\text{train}$ is hereafter designed to have the same noise level as $G_\text{test}$, and we are now ready to compare NNAC with traditional AC methods such as Maxent. We train NNs on ASEP training sets and use them to predict ASEP-type, Skew-type and Lorentz-type spectra. The corresponding outcomes are depicted in Figure \ref{compare_maxent}. Figure \ref{compare_maxent} (a), (b) and (c) show KLDs of spectra predicted by these two methods on the ASEP, Skew, and Lorentz datasets respectively. Error bars of the KLDs are omitted in this and subsequent figures, as they are relatively small, to make the graphs more readable. Typical predicted results for the three peak types at noise level $3 \times 10^{-3}$ are shown in Figure \ref{compare_maxent} (d), (e) and (f).
NNAC performs comparably to Maxent at the lowest noise level $10^{-4}$ but outperforms it significantly at relatively high noise levels. The improvement is also evident when the training set and testing set are not of the same spectrum type. In the spectrum examples depicted in Figure \ref{compare_maxent} (d), (e) and (f), peak locations are precisely predicted by NNAC, whereas Maxent does not provide accurate peak locations at this noise level. At some frequencies, Maxent may even signal spurious peaks. Peak heights predicted by NNAC are also more accurate and closer to the ground truths than those from Maxent. \begin{figure}[htbp] \centering \includegraphics[width=0.9 \linewidth]{compare_maxent.jpg} \caption{Comparison with Maxent. NNs are trained on the ASEP dataset and applied to three different testing sets: ASEP, Skew, and Lorentz. (a) to (c): KLDs of predicted results on the ASEP, Skew, and Lorentz datasets respectively at different noise levels. (d) to (f): typical spectra predicted by Maxent and NNAC at noise level $3 \times 10^{-3}$. The ground truth is also shown for comparison. The performance of NNAC is comparable with Maxent when the dataset contains low-level noise but surpasses Maxent at high noise levels, even if the NNs are not trained on a dataset of the same type as the testing set.} \label{compare_maxent} \end{figure} Spectra from Maxent in this section (kernel $K_F(\tau,\omega)$) are calculated mainly with the software ``TRIQS/maxent''\cite{PhysRevB.96.155128} so that the results can be easily checked. Various $\alpha$-choosing algorithms are evaluated, where $\alpha$ is the penalty coefficient of the entropy term in the Maxent objective function\cite{silver1990maximum}. Among the algorithms discussed in Ref.~\cite{PhysRevB.96.155128}, the ``$\chi^2$-curvature'' algorithm, which is analogous to $\Omega$-Maxent\cite{PhysRevE.94.023303}, and the ``Bryan'' algorithm greatly outperform the others in terms of KLD for the tasks of interest. Between these two, ``$\chi^2$-curvature'' is marginally superior to the Bryan algorithm. We therefore use ``$\chi^2$-curvature'' in this work to ensure a level playing field for Maxent.
\subsection{Influence of Noise Dependency on Imaginary Time} In the preceding discussion, the noise $R(\tau_i)$ at each $\tau_i$ is assumed to be sampled from the same Gaussian distribution with the same variance, which is rarely the case in Monte Carlo simulations. We introduce the noise-shape multiplier $\lambda(\tau)$ to investigate the influence of the noise dependency on imaginary time and assume $R(\tau_i) \sim N(0,\sigma(\tau_i)^2)$ with $\sigma(\tau_i)=\lambda(\tau_i) \sigma$. We refer to this dependency as the ``noise shape'' hereafter. These multipliers satisfy $\frac{1}{\beta}\int_0^\beta \lambda(\tau) d\tau=1$ to ensure that datasets with the same $\sigma$ but different noise shapes are at approximately the same noise level. Multipliers $\lambda(\tau)$ of four distinct linear shapes, labeled A, B, C, and D, are displayed in Figure \ref{linear_noise} (a). \begin{figure}[htbp] \centering \includegraphics[width=0.7 \linewidth]{linear_noise.jpg} \caption{Influence of linear noise shapes. (a) Four types of shape multiplier $\lambda(\tau)$. (b) The noise of the testing set is of shape A. Two neural networks are trained on training sets with equal noise ($\lambda(\tau)=1$) and with noise shape A, respectively. KLDs of the trained neural networks are compared on the shape-A testing set at different noise levels.
(c) Typical spectra predicted from $G(\tau)$ with shape-A noise ($\sigma=3\times10^{-3}$) by neural networks trained on equal-noise and shape-A training sets, respectively. (d) The relative difference in PFI between the neural network trained on a training set with linearly shaped noise and the neural network trained on a training set with uniformly shaped ($\lambda(\tau)=1$) noise at various imaginary times.} \label{linear_noise} \end{figure}
To demonstrate the impact of the noise shape and how to appropriately arrange the noise in the training set, we train NNs on ASEP-type training sets with equal noise ($\lambda(\tau) =1$) and with noise shape A, respectively. These trained NNs are applied to Skew-type testing sets with noise shape A. The corresponding measured KLDs are presented in Figure \ref{linear_noise} (b). Spectrum examples at noise level $3\times 10^{-3}$ are shown in Figure \ref{linear_noise} (c). The origin of the different performance for different noise shapes can, to some extent, be explained by permutation feature importance (PFI)\cite{altmann2010permutation}, despite the fact that neural networks are typically seen as black boxes. To calculate the PFI, we randomly permute $G(\tau_i)$ over samples at one particular time slice $\tau_i$ in the testing set, and the PFI at this time slice is defined by how much the resulting KLD increases. The PFI difference between NNs trained on datasets with linear noise shapes and on the equal-noise dataset is defined by $[\text{PFI}^\text{T}(\tau_i)-\text{PFI}^\text{E}(\tau_i)]/[\text{PFI}^\text{T}(\tau_i)+\text{PFI}^\text{E}(\tau_i)]$. $\text{PFI}^\text{E}(\tau_i)$ denotes the PFI from NNs trained on the equal-noise dataset and $\text{PFI}^\text{T}(\tau_i)$ denotes the PFI from NNs trained on a dataset with some other noise shape, where $\text{T} \in \{\text{A},\text{B},\text{C},\text{D}\}$. The resulting relative PFI differences are shown in Figure \ref{linear_noise} (d). A moving average over five adjacent points is applied to make the curves smoother and clearer. The relative PFI curves and the $\lambda(\tau)$ curves increase or decrease in opposite directions, which means that the NNs assign larger feature importance to imaginary-time slices where $G(\tau_i)$ is less noisy. It should be emphasized that measured correlation functions rarely have linear-type noise shapes; instead, they frequently have exponential-like shapes. However, things become more subtle for exponential noise shapes, where it is more difficult to disentangle the effects of different noise levels and noise shapes. In light of these concerns, we only examine linear-type noise shapes here, and we believe the physical picture is similar in other scenarios.
\subsection{Influence of Time-Displaced Correlation} So far we have assumed that $G(\tau_i)$ at different $\tau_i$ are measured independently, which is not always true in practical Monte Carlo simulations. In that case, the covariance, rather than independent errors of $G(\tau_i)$, should be considered. The covariance can be decomposed as $\Sigma_{ij}=U(\tau_i)\,C_{ij}\,U(\tau_j)$, where $U(\tau_i)$ is the independently measured statistical error of $G(\tau_i)$ and $C$ is the correlation matrix, with $C_{ij}$ defined as the Pearson correlation of the measured $G(\tau_i)$ and $G(\tau_j)$. In practical AC tasks, $\Sigma$ of $G_\text{sim}$ should be measured before designing the training set.
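Assembling $\Sigma$ from the measured errors and the correlation matrix, and drawing training noise with exactly this covariance, can be sketched in a few lines of NumPy. The correlation matrix and the value of $\gamma$ below anticipate the toy example introduced in the next paragraph and are placeholders only.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def correlated_noise(U, C, n_samples):
    """Draw noise R ~ N(0, Sigma) with Sigma_ij = U_i * C_ij * U_j."""
    Sigma = np.outer(U, U) * C
    return rng.multivariate_normal(np.zeros(len(U)), Sigma, size=n_samples)

# Toy correlation matrix C_ij = 1/(1 + |i-j|^(1/gamma)); gamma is a
# placeholder value that tunes the condition number of C.
n_tau, gamma = 512, 0.25
idx = np.arange(n_tau)
C = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]) ** (1.0 / gamma))
U = np.full(n_tau, 1e-2)                    # sigma = 1e-2 at every tau_i
R = correlated_noise(U, C, n_samples=1000)  # to be added to the clean G(tau)
\end{verbatim}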
If we require the training set to share the same covariance as the testing set, the noise of the training set should be generated from the corresponding joint Gaussian distribution, that is, $R \sim N(0,\Sigma)$. To illustrate the influence of time-displaced correlations, we create a toy correlation matrix for the testing set, \begin{equation} C_{ij}=\frac{1}{1+|i-j|^{1/\gamma}}. \end{equation} In this work, we investigate correlation matrices with condition numbers $10^3$, $10^6$, $10^9$, and $10^{12}$, obtained by adjusting $\gamma$. $U(\tau)$ is generated at four noise levels $\sigma \in \{10^{-4}, 10^{-3}, 3 \times 10^{-3}, 10^{-2}\}$. NNs are trained on ASEP-type datasets and applied to Skew-type testing sets with various noise levels and condition numbers. Training sets are designed in two ways: they have either zero correlation or the same correlation as the testing set. In Figure \ref{cov} (a), the condition number of the testing set is fixed to $10^{12}$. NNs are trained on datasets with or without time-displaced correlation at each noise level. As illustrated, the influence of the $\tau$-correlation is not significant at low noise levels, but correlation mismatching may lead to incorrect predictions at high noise levels. In Figure \ref{cov} (b), the noise level of the testing set (and of the training set as well) is fixed to $10^{-2}$; KLDs are lower when the condition number is smaller. The reason may be that $R(\tau_i)$ is dominated by only a few singular values of $\Sigma$, whose noise pattern is relatively easy for NNs to learn. Spectrum examples with noise level $10^{-2}$ and condition number $10^{12}$ are shown in Figure \ref{cov} (c), containing spectra predicted by NNs trained with zero correlation or with the same correlation as the testing set, as well as the ground truth. Clearly, the predicted spectra contain spurious peaks at wrong locations when the time-displaced correlation is not matched. \begin{figure}[htbp] \centering \includegraphics[width=0.9 \linewidth]{cov.jpg} \caption{Illustration of the influence of the time-displaced correlation of $G(\tau_i)$. Noise levels of the training set and the testing set are matched. (a) The condition number of the correlation matrix is fixed to be $10^{12}$. NNs trained on datasets without correlation may give wrong predictions if $G(\tau_i)$ in the testing set is correlated, especially when the noise level is high. (b) Noise levels are fixed to be $10^{-2}$. KLDs are shown \textit{w.r.t.} different condition numbers. (c) Spectrum examples with condition number $10^{12}$ and noise level $10^{-2}$.} \label{cov} \end{figure}
\section{NNAC on Heisenberg Chain} In this section, NNAC is carried out to extract dynamic structure factors of the spin-$\frac{1}{2}$ anti-ferromagnetic Heisenberg chain of length $L$, whose Hamiltonian reads \begin{equation} H=J\sum_{i=1}^L \vec{S}_i \cdot \vec{S}_{i+1}. \end{equation} $\vec{S}_i$ represents a spin located on site $i$. Periodic boundary conditions are assumed, \textit{i.e.}, $\vec{S}_{L+1}=\vec{S}_1$. The imaginary-time-displaced spin-spin correlation of the $z$-component is measured by stochastic series expansion\cite{sandvik1999stochastic}. \begin{align} G_{i,j}(\tau)&=\langle e^{\tau H} S_i^z e^{-\tau H} S_j^z \rangle,\\ G_k(\tau)&=\frac{1}{L} \sum_{i,j} G_{i,j}(\tau) e^{-i k(r_i-r_j)}. \end{align} $G_{i,j}(\tau)$ is the time-displaced spin-spin correlation of the $z$-component between spins $i$ and $j$. The target correlation function $G_k(\tau)$ in the wave-vector domain is then calculated via Fourier transformation.
$r_i$ denotes the location of spin $i$, where the lattice constant is set to be 1. $J$ is used as the energy unit. We set the inverse temperature $\beta=1$ in the Monte Carlo simulation. In this work we focus on $k=\pi$, and $G_k(\tau)$ will be denoted $G(\tau)$ for simplicity. The AC task then reads $G(\tau)=\int d\omega e^{-\tau \omega} A(\omega)$, where $A(\omega)$ is the target dynamic structure factor. The corresponding sum rule is obtained by setting $\tau=0$, \textit{i.e.}, $\int d\omega A(\omega) = G(0)$. The same NN structure and hyper-parameters are used as in the previous section. The frequency range is $\omega \in [-10,10]$. The time and frequency domains are discretized into 512 and 1024 pieces respectively, as before. The spectrum of the Heisenberg chain can be regarded as a sum of $\delta$ functions at zero temperature; these $\delta$ functions broaden as the temperature increases. We perform quantum Monte Carlo simulations on a 32-site Heisenberg chain, where the $\delta$ functions are dense enough on the required energy scale $\Delta \omega \sim 0.02$ that a smooth spectrum can be obtained. The stochastic series expansion approach with the loop-update algorithm\cite{sandvik1999stochastic} is used in the simulation. Spin-spin correlations are measured every 100 update steps so that auto-correlations can be ignored. The covariance matrix $\Sigma$ is measured by $\Sigma_{ij}=[\langle G(\tau_i) G(\tau_j) \rangle -\langle G(\tau_i) \rangle \langle G(\tau_j) \rangle]/(N_s-1)$, where $N_s$ is the number of independent samples. Spin-spin correlation functions are measured using different numbers of Monte Carlo samples to create datasets of different noise levels. In this section, noise levels are represented by relative statistical errors of $G(0)$, which range from $3.8 \times 10^{-3}$ to $3.6 \times 10^{-2}$. The simulated $G(\tau)$ are divided by the corresponding $G(0)$ before being fed into the neural networks, so that the sum rule is restored to $\int d\omega A(\omega)=1$. The ``SoftMax'' layer then yields the correct sum rule, and the scale of the extracted spectra is recovered by multiplying with $G(0)$. Correlation functions $G(\tau_i)$ at different imaginary times $\tau_i$ are measured independently to ensure zero time-displaced correlation between the $G(\tau_i)$. The obtained covariance matrix $\Sigma$ is then diagonal, since $\langle G(\tau_i) G(\tau_j) \rangle -\langle G(\tau_i) \rangle \langle G(\tau_j) \rangle=0$ for $i \neq j$. \begin{figure}[htbp] \centering \includegraphics[width=0.9 \linewidth]{Heisenberg.jpg} \caption{Spectra extracted by different methods. (a) Comparison of spectra generated by Maxent and NNAC from highly accurate $G(\tau_i)$. (b) KLDs of spectra generated by Maxent and NNAC. The most accurate spectra (with the lowest noise level) are taken as ground truths when calculating the KLDs. (c) Spectra predicted by Maxent from $G(\tau_i)$ of different noise levels. (d) Spectra predicted by NNAC from $G(\tau_i)$ of different noise levels.} \label{Heisenberg} \end{figure} The extracted spectra are shown in Figure \ref{Heisenberg}, where Maxent and NNAC are compared. In Figure \ref{Heisenberg} (a), spectra extracted by Maxent and NNAC from the spin-spin correlation function with relative error $3.8\times 10^{-3}$ are compared; the two spectra coincide well with each other in this relatively simple single-peak case. These two spectra also agree with those obtained from smaller systems using Lanczos-based methods\cite{okamoto2018accuracy}.
Figure \ref{Heisenberg} (b) compares KLDs of the spectra produced by these two methods at different noise levels. The spectrum corresponding to the lowest noise level of each method is regarded as the ground truth for that method when calculating the KLDs. When the noise level increases, the accuracy of the spectra produced by both Maxent and NNAC decreases, but the accuracy of NNAC decays more slowly than that of Maxent. Here again, the previous conclusion is confirmed: at low noise levels, Maxent and NNAC can produce equally accurate results. At high noise levels, however, NNAC performs better than Maxent. Figures \ref{Heisenberg} (c) and (d) show how spectra extracted by the two methods change when the noise is gradually increased from $3.8 \times 10^{-3}$ to $3.6 \times 10^{-2}$. The spectra get progressively lower and wider in both cases. Spectra generated by Maxent exhibit large shifts in peak position, while those generated by NNAC show little shift in peak position.
\section{Conclusions} Applications of neural-network-based analytic continuation were discussed in this paper. Numerical experiments were carried out on both synthetic datasets and Monte Carlo data. The main conclusion is that an NN can learn from a carefully designed training set, without data leakage, and make good predictions of spectra, which surpass Maxent in highly noisy cases. To ensure that the neural network acquires adequate knowledge to predict the target spectral functions, the training dataset should comprise a sufficient number of diverse spectral functions. Incorporating information about the measured statistical errors leads to better predictions of the spectra. The $G(\tau_i)$ of the training set should match the simulated correlation functions in terms of the noise at each $\tau_i$ and the time-displaced correlation. While acceptable, the time required for NNAC is relatively long compared to Maxent. Improving the efficiency of model training may be a fruitful area for future investigation. It may be possible to apply the idea of transfer learning\cite{pan2010survey} here, so that we do not need to train a model from scratch for each target spectrum but can rather begin with a pre-trained model. A more valuable and ambitious goal is to train a model that generalizes to any spectrum. The input to this model should probably be the simulated correlation functions and the accompanying covariance matrices, which contain most (if not all) of the information needed to perform analytic continuation.
\section*{Acknowledgement} FW acknowledges support from the National Natural Science Foundation of China (No. 12274004 and No. 11888101). Quantum Monte Carlo simulations are performed on TianHe-1A of the National Supercomputer Center in Tianjin. \bibliographystyle{unsrt}
{ "arxiv_id": "2302.11379", "language": "en", "timestamp": "2023-02-23T02:14:50", "url": "https://arxiv.org/abs/2302.11379", "yymm": "2302" }
\section{Introduction} \label{sec:intro} \subsection{Motivation and background} In statistical mechanics, the energy landscapes of many disordered systems have a complex geometry, where the configuration with the lowest energy -- the ground state -- corresponds to the bottom of the deepest valley of the landscape. The phenomenon of `chaos' is characterized by energy landscapes with many valleys and many roughly orthogonal near-ground states, resulting in a system where a slight perturbation of the disorder will lead to a significant change of the ground state. The mathematical study of chaos in disordered systems was initiated by Chatterjee in two preprints~\cite{C08,C09}, which were later combined into a book~\cite{C14}. Chatterjee established a precise relation between fluctuations of the ground state energy and the effect on the ground state of a perturbation of the medium. This relation allowed him to deduce an equivalence between `superconcentration', that is, sub-Gaussian fluctuations, and chaos for certain Gaussian disordered systems. By establishing superconcentration for a Gaussian directed polymer, and for the top eigenvalue of a Gaussian matrix, he obtained the first evidence of chaotic behaviour. More recently, the precise location of the transition from stability to chaos has been established for the top eigenvector of Wigner matrices by Bordenave-Lugosi-Zhivotovskiy~\cite{BLZ20} and in the context of Brownian last-passage percolation by Ganguly-Hammond~\cite{GH20a,GH20b}. The analysis in~\cite{C14} and~\cite{GH20a,GH20b} is specific to the Gaussian context, and based on spectral techniques. While the methods of~\cite{BLZ20} are not specific to the Gaussian setting, they are applied to eigenvalues and the corresponding eigenvectors of random matrices, which is a well-understood topic. In a companion paper~\cite{ADS22}, our goal has been to show that the equivalence between superconcentration and chaos is part of a more general principle, achieved by establishing the connection in the context of first-passage percolation. In the current paper we proceed in this vein, and determine the precise location of the transition from stability to chaos in non-Gaussian and non-integrable models of last-passage percolation. \subsection{Model and results} Last-passage percolation is a model for spatial growth, defined on the integer lattice $\mathbb{Z}^d$. For integers $n\ge1$, consider the square $V = [0,n]^2 \cap \mathbb{Z}^2$ and let $\omega = (\omega_v)_{v \in V}$ be a collection of i.i.d.\ weights drawn from some probability distribution $F$ on $[0, \infty)$. Let $T = T(\omega)$ denote the maximal weight-sum picked up along any directed (up-right) path from $(0,0)$ to $(n,n)$, i.e., \begin{equation} T = T(\omega) = \max_{\gamma \in \Gamma} \sum_{v \in \gamma} \omega_v, \end{equation} where $\Gamma$ is the set of all nearest-neighbour paths from $(0,0)$ to $(n,n)$ whose steps follow either $(1,0)$ or $(0,1)$. We will refer to $T$ as the {\bf passage time} between $(0,0)$ and $(n,n)$ due to its interpretation as the occupancy time in the related corner growth model. We refer to any weight-maximising path, i.e.\ any path $\pi\in\Gamma$ that attains the maximum in $T$, as a {\bf geodesic} between the same points. When $F$ is continuous there is a unique such path. It follows from \cite[Theorem 2.3]{M02} that $\mathbb{E}[T]$ grows at most linearly in $n$ provided that $\int_0^\infty (1-F(x))^{1/d}dx<\infty$.
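For concreteness, the passage time and a geodesic can be computed by a standard dynamic-programming recursion over up-right paths. The following minimal Python sketch is purely illustrative and plays no role in the arguments below; the function name and the array layout are ours.
\begin{verbatim}
import numpy as np

def passage_time(weights):
    # weights: (n+1, n+1) array of i.i.d. vertex weights omega_v >= 0.
    # T[i, j] is the maximal weight-sum over up-right paths from (0, 0)
    # to (i, j); a geodesic can be recovered by backtracking the argmax.
    n1 = weights.shape[0]
    T = np.zeros_like(weights, dtype=float)
    for i in range(n1):
        for j in range(n1):
            best = 0.0
            if i > 0:
                best = T[i - 1, j]
            if j > 0:
                best = max(best, T[i, j - 1])
            T[i, j] = weights[i, j] + best
    return T[-1, -1]
\end{verbatim}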
In our arguments, we will need that the passage time grows at most linearly for a configuration consisting of squared weights. We will hence throughout assume that \begin{equation}\label{Fass} \int_0^\infty (1-F(\sqrt{x}))^{1/d}dx<\infty, \end{equation} which is slightly stronger than having a finite moment of order $2d$. In 1986, the work of Kardar-Parisi-Zhang~\cite{KPZ} gave predictions for the asymptotic behaviour of a large class of planar spatial growth models. Via the analysis of a differential equation, they were led to predict that the fluctuations of $T$ around its mean are of the order $n^{1/3}$ and that the fluctuations of the/any geodesic associated with $T$ are of the order $n^{2/3}$. The remarkable work of~\cite{J00a}, inspired by \cite{BDJ}, verified these predictions in last-passage percolation with exponential or geometric weight distribution, which has come to be referred to as the `integrable' or `exactly solvable' setting. Their work also determines that $T$, when centred and appropriately normalised, converges to the Tracy-Widom distribution (also known to be the asymptotic distribution of the largest eigenvalue of a GUE random matrix). In particular, it follows from~\cite{BDMLMZ01} that when $F$ is the exponential or geometric distribution, then \begin{equation} \mathrm{Var}(T)=\Theta(n^{2/3}). \end{equation} This result will be relevant in combination with our main result below. In this paper we will consider a dynamic version of last-passage percolation, in order to address how it is affected when exposed to perturbations of the weight configuration. Recall that $V = [0,n]^2 \cap\mathbb{Z}^2$, where $n\ge1$ is an integer, and let $\omega = (\omega_v)_{ v \in V}$ and $\omega' =(\omega'_v)_{v \in V}$ be independent weight configurations, that is, collections of independent variables distributed as $F$. Let $U = (U_v)_{v\in V}$ be a collection of independent random variables uniformly distributed on $[0,1]$, and independent of $(\omega, \omega')$. For each $t \in [0,1]$ we obtain a weight configuration $\omega(t)$ according to \begin{equation} \omega_v(t) := \begin{cases} \omega_v, & \text{if } U_v > t, \\ \omega_v', & \text{if } U_v \leq t. \end{cases} \end{equation} We will think of $t\in[0,1]$ as `time' (not to be confused with the passage time $T$) and of $(\omega(t))_{t\in[0,1]}$ as a weight configuration evolving dynamically over time. For $t>0$ the configuration $\omega(t)$ corresponds to a perturbation of $\omega(0)$, and $t$ dictates the magnitude of the perturbation. (The coordinate-wise correlation of the two configurations at time $t$ equals $1-t$.) For the purposes of this paper, an alternative construction would be to update weights according to independent Poisson clocks; our results below can be recast in this language via a re-parametrisation of time. For $t \in [0,1]$, we denote by $T_t$ the passage time with respect to the configuration $\omega(t)$, and by $\pi_t$ the set of vertices contained in some geodesic for $T_t$. Recall that if $F$ is continuous, then $\pi_t$ is the unique maximiser of $T_t$, while if $F$ has atoms there may be multiple geodesics with the same passage time. Our main result addresses the transition from stability to chaos in the context of last-passage percolation. In the integrable setting (when $F$ is exponential or geometric) our main result states that this transition occurs at $t\asymp n^{-1/3}$ in the following sense: \begin{description} \item[Stability:] For $t\ll n^{-1/3}$ we have $\mathrm{Corr}(T_0,T_t)=1-o(1)$.
\item[Chaos:] For $t\gg n^{-1/3}$ we have $\mathbb{E}[|\pi_0\cap\pi_t|]=o(n)$. \end{description} In fact, our methods show that the analogous result holds in a more general context, and not only in the exactly solvable setting. We shall first require that $F$ satisfies \eqref{Fass}. Second, we shall make the assumption that weights conditioned on being `large' have variance uniformly bounded from below, that is, \begin{equation} \label{assumption2} \exists \, c > 0 \text{ such that } \mathrm{Var}(\omega_v \, | \, \omega_v >k) > c \text{ for all } k\ge0 \text{ and } v \in V. \end{equation} We show in the appendix that this assumption is for instance met by weight distributions $F$ with $1-F(x)=Cx^{-\gamma}$ for $\gamma>2$, or with $1-F(x)=C\exp(-\bar{\beta}x^\beta)$ for $\beta\in(0,1]$. Our main result states that, under the above assumptions on the weight distribution, the transition from stability to chaos occurs at $t \asymp \frac{1}{n}\mathrm{Var}(T)$. \begin{theorem}[From stability to chaos] \label{thm:stabilityandchaos} Consider last-passage percolation with a weight distribution satisfying \eqref{Fass}. There exists a constant $C<\infty$ such that for all $n\ge1$ and $0<\alpha<\frac{n}{\mathrm{Var}(T)}$ the following two statements hold. \begin{itemize} \item[(i)] {\bf Stability:} For $t \leq \alpha\frac{1}{n}\mathrm{Var}(T)$, we have \begin{equation} \label{stability} \mathrm{Corr}(T_0, T_t) \geq 1-C\alpha. \end{equation} \item[(ii)] {\bf Chaos:} If, in addition, \eqref{assumption2} holds and $t \geq \alpha\frac{1}{n}\mathrm{Var}(T)$, then \begin{equation} \label{chaos} \mathbb{E}[|\pi_0 \cap \pi_t|] \leq C\frac{n}{\alpha}. \end{equation} \end{itemize} \end{theorem} In the exactly solvable setting, we have that $\mathrm{Var}(T)=\Theta(n^{2/3})$ and the transition from stability to chaos hence occurs at $t=\Theta(n^{-1/3})$. The same asymptotic behaviour is predicted to prevail under mild conditions on the weight distribution. A sub-linear $n/\log n$-upper bound on $\mathrm{Var}(T)$ was obtained in~\cite{C14,G12} for a large class of weight distributions referred to as `nearly Gamma'. This sub-linear upper bound is sufficient to establish that the transition from stability to chaos occurs at $t=o(1)$. However, for many such distributions, condition~\eqref{assumption2} will not hold. It would be interesting to extend Theorem~\ref{thm:stabilityandchaos} so that condition~\eqref{assumption2} is not required. \subsection{Discussion} Let us emphasise that Theorem~\ref{thm:stabilityandchaos} establishes a transition from stability to chaos in a weaker sense than in~\cite{BLZ20,GH20a}, albeit by more flexible methods. Our theorem considers different quantities in the stable and chaotic regimes (the distance function and the distance-maximising path, respectively), whereas the authors in~\cite{BLZ20,GH20a} consider stability and chaos of the same object (the top eigenvector and distance-maximising path, respectively). In~\cite{BLZ20} this is possible due to the detailed understanding of eigenvectors of random matrices, and in~\cite{GH20a} due to the precise understanding of the geometry of near-ground states obtained by the same authors in~\cite{GH20b}. Our proof of Theorem~\ref{thm:stabilityandchaos} relies, on the other hand, on a covariance formula derived in~\cite{ADS22}, and requires no precise model-specific estimates, which we emphasise by working outside of the exactly solvable setting.
The covariance formula alluded to concerns the function $Q_t:=\mathbb{E}[T_0T_t]$ defined for $t\in[0,1]$. Since the configurations at time 0 and at time 1 are independent, we may express the variance of $T$ as \begin{equation}\label{variance} \mathrm{Var}(T) = \mathbb{E}[T_0^2] - \mathbb{E}[T_0 T_1] = Q_0- Q_1 = - \int_0^1 \frac{d}{dt}Q_t \, dt. \end{equation} The core of the argument will be to relate the contribution to the derivative of $Q_t$ that comes from a vertex $v$ to the probability that $v$ belongs to both $\pi_0$ and $\pi_t$. Looking beyond Theorem~\ref{thm:stabilityandchaos}, we expect that, in the stable regime, the expected overlap between the two geodesics also remains significant. Our belief is supported by the analogous behaviour established for Brownian last-passage percolation in~\cite{GH20a}. \begin{conjecture} \label{conjecture2} If $t \ll \frac{1}{n}\mathrm{Var}(T)$, then $\mathbb{E}[|\pi_0 \cap \pi_t|] = \Theta(n)$ as $ n \to \infty$. \end{conjecture} We further expect that, in the chaotic regime, the passage times also decorrelate. While we are not aware of results of this kind for related models, heuristic reasoning involving the KPZ scaling relations suggests this should be the case. \begin{conjecture} \label{conjecture1} If $t \gg \frac{1}{n}\mathrm{Var}(T)$, then $\mathrm{Corr}(T_0, T_t) = o(1)$ as $n \to \infty$. \end{conjecture} The study of decorrelation as an effect of small perturbations was initiated by Benjamini-Kalai-Schramm~\cite{BKS99} in the context of Boolean functions, and referred to as noise sensitivity. A substantial framework for the study of noise sensitivity has since been developed, but has remained largely restricted to the Boolean setting. The notion of chaos is strongly related to the notion of noise sensitivity introduced in \cite{BKS99}, according to which an event is noise sensitive if, with high probability, an arbitrarily small random perturbation of the configuration gives almost no prediction of whether the event occurs. In general, while chaos refers to the energy minimizing object (in our case the geodesics), noise sensitivity refers to the decorrelation of the energy of that object over time (the passage time). In this regard, we conjecture that the location for the transition from stability to chaos is also the right location for the transition from stability to noise sensitivity. In other words, for values of $t$ smaller than the threshold, the overlap between geodesics is still of order $n$, while for values of $t$ larger than the threshold, the passage times become decorrelated, hence they are noise sensitive. We expect both of these conjectures to be challenging to prove, possibly requiring different techniques.\medskip \noindent {\bf Outline of the paper.} The remainder of the paper is organized as follows. In Section~2 we describe a covariance formula from our companion paper \cite{ADS22}. In Section~3 we derive lower and upper bounds on the influence of a vertex in terms of the probability that the vertex belongs to the geodesic both at time 0 and at time $t$. These bounds are then combined with the covariance formula in the proof of Theorem~\ref{thm:stabilityandchaos} in Section~4. \section{The covariance formula} \label{sec:covariance} The proof of Theorem~\ref{thm:stabilityandchaos} relies on a covariance formula derived in~\cite{ADS22}, which we describe next. For $v \in V$ and $x \in [0, \infty)$, let $\sigma_v^{x}: [0, \infty)^{V} \to [0, \infty)^{V}$ be the operator that replaces the weight $\omega_v$ at $v$ by $x$.
Write $T^{v \to x} := T \circ \sigma_v^x$ and let \begin{equation} \label{defDvx} D_v^x T := T^{v \to x} - \int T^{v \to y} \, dF(y). \end{equation} That is, $D_v^xT$ compares the passage time when the weight at $v$ is fixed to $x$ with the passage time averaged over all possible values of that weight. We define the {\bf co-influence} of a vertex $v \in V$ at times $0$ and $t$ as \begin{equation} \label{definfluence} \textrm{Inf}_v(t) := \int \mathbb{E}\big[D_v^x T_0 D_v^x T_t\big]\, dF(x). \end{equation} A standard coupling argument shows that, for any function $f:[0,\infty)^V\to\mathbb{R}$, the outcomes $f(\omega(0))$ and $f(\omega(t))$ are positively correlated, that is, \begin{equation}\label{eq:time_corr} \mathbb{E}\big[f(\omega(0))f(\omega(t))\big]-\mathbb{E}[f]^2\ge0, \end{equation} and the correlation is non-increasing in $t$. As a consequence, the co-influences are non-negative and non-increasing in $t$. For the same reason, the probability $\mathbb{P}(v\in\pi_0\cap\pi_t)$ is non-increasing as a function of $t$. The co-influences were related to $Q_t$ in~\cite{ADS22} through the formula \begin{equation} \label{margulisrusso} -\frac{d}{dt}Q_t = \sum_{v \in V} \textrm{Inf}_v(t). \end{equation} This quantity is thus non-negative, so that $Q_t$ is non-increasing in $t$. Combining~\eqref{variance} and~\eqref{margulisrusso} gives the formula \begin{equation} \label{varianceinfluence} \mathrm{Var}(T) = \int_0^1 \sum_{v \in V} \mathrm{Inf}_v(t) \, dt. \end{equation} Since co-influences are non-increasing, the sum of influences $\sum_{v \in V} \mathrm{Inf}_v(0)$ gives an upper bound on the variance, reminiscent of an Efron-Stein or Poincar\'e inequality. \section{Bounding the influence} \label{sec:boundingtheinfluence} The proof of the main result of the paper will go via the covariance formula~\eqref{varianceinfluence}. The key step in the proof will be to show that the influence of a vertex $v$ is proportional to the probability that it is part of the geodesic. That is the goal of this section, which starts with some additional notation. For $v\in V$ and $t \in [0,1]$ fixed, we let $k_v(t)$ denote the smallest non-negative value that the weight at $v$ can take on for $v$ to be on some geodesic for $T_t$. Since $T_t$ is the maximal weight sum on directed paths from $(0,0)$ to $(n,n)$, it follows that $k_v(t)$ is almost surely finite for all $v$. By definition, the weight $k_v(t)$ is determined by $(\omega_u(t))_{u\neq v}$ and \begin{equation} \label{defk*} \{v \in \pi_t\} = \{\omega_v(t) \geq k_v(t)\}. \end{equation} Moreover, the vertex $v$ is on all geodesics of $T_t$ if $\omega_v(t)>k_v(t)$. Let $\mathcal{F}_v$ be the $\sigma$-algebra generated by the weights at vertices other than $v$, that is, $$ \mathcal{F}_v = \sigma\big(\big\{\omega_u(t): u \in V, u \neq v, t \in [0,1]\big\}\big), $$ and note that both $k_v(0)$ and $k_v(t)$ are determined by $\mathcal{F}_v$. In particular, $\omega_v(t)$ and $k_v(t)$ are independent, so if $\tilde\omega$ denotes a generic random variable with distribution $F$, and independent of everything else, then we have from~\eqref{defk*} that \begin{equation} \label{eqprobevents} \mathbb{P}(v \in \pi_t \, | \, \mathcal{F}_v) = \mathbb{P}(\omega_v(t) \geq k_v(t) \, | \, \mathcal{F}_v) = \mathbb{P}(\tilde\omega \geq k_v(t) \, | \, \mathcal{F}_v).
\end{equation} By similar reasoning, we also have that \begin{equation}\label{eqprobevents2} \mathbb{P}(v \in \pi_0\cap\pi_t \, | \, \mathcal{F}_v) = \mathbb{P}(\omega_v(0)\geq k_v(0),\omega_v(t)\geq k_v(t)\, | \, \mathcal{F}_v)\leq \mathbb{P}(\tilde\omega \geq \max\{k_v(0),k_v(t)\} \, | \, \mathcal{F}_v). \end{equation} Also note that, by the definition of $k_v(t)$ as the smallest value for the weight at $v$ that causes the geodesic to go through $v$, we have for $x\ge0$ that $$ T_t^{v \to x} = T_t^{v \to k_v(t)} + (x-k_v(t))_+, $$ where $(\,\cdot\,)_+$ denotes the positive part of the expression within brackets. Hence \begin{equation} \label{eq1} D_v^x T_t = T_t^{v \to x} - \int T_t^{v\to y}\,dF(y) = (x-k_v(t))_+ - \int (y-k_v(t))_+ \, dF(y). \end{equation} This will be the basis for the following characterization of the co-influence. \begin{lemma} \label{lemma:infcov} Suppose that $F$ satisfies \eqref{Fass}, and let $\tilde\omega$ denote a generic random variable distributed as $F$. Then the co-influence of $v$ at time $t$ can be written as \begin{equation} \label{influencecovariance} \mathrm{Inf}_v(t) = \mathbb{E} \Big[ \mathrm{Cov}\big( (\tilde\omega-k_v(0))_+,(\tilde\omega-k_v(t))_+ \, \big| \, \mathcal{F}_v \big)\Big]. \end{equation} \end{lemma} \begin{proof} Since $\tilde\omega$ is $F$-distributed, we have that $$ \int (y-k_v(t))_+ \, dF(y)=\mathbb{E}\big[(\tilde\omega-k_v(t))_+\big|\mathcal{F}_v\big]. $$ Together with~\eqref{eq1}, this shows that $$ \int D_v^xT_0 D_v^xT_t\,dF(x) = \mathrm{Cov}\big( (\tilde\omega-k_v(0))_+,(\tilde\omega-k_v(t))_+ \, \big| \, \mathcal{F}_v \big). $$ The result follows by taking expectation. \end{proof} Next we derive an upper bound on the influence of $v$ for $t=0$ in terms of the geodesic. \begin{lemma}[Upper bound] \label{lemmaUB} Suppose that $F$ satisfies \eqref{Fass}. Then \begin{equation} \label{UBinf} \mathrm{Inf}_v(0) \leq \mathbb{E} \left[ \omega_v(0)^2 \, \mathbbm{1}_{\{ v \in \pi_0 \}} \right]. \end{equation} \end{lemma} \begin{proof} When $t=0$, from Lemma~\ref{lemma:infcov}, we have that $$ \mathrm{Inf}_v(0) = \mathbb{E} \big[ \mathrm{Var}\big( (\tilde\omega-k_v(0))_+\, \big| \, \mathcal{F}_v \big)\big] \leq \mathbb{E} \big[\mathbb{E} \big[ (\tilde\omega-k_v(0))^2 \mathbbm{1}_{\{ \tilde\omega \geq k_v(0)\}} \, \big| \, \mathcal{F}_v \big] \big]. $$ Consequently, by independence of $\omega_v(0)$ and $k_v(0)$, we obtain from~\eqref{defk*} that $$ \mathrm{Inf}_v(0) \leq \mathbb{E} \big[ (\omega_v(0)-k_v(0))^2 \mathbbm{1}_{\{ \omega_v(0) \geq k_v(0)\}} \big] \leq \mathbb{E} \big[ \omega_v(0)^2 \, \mathbbm{1}_{\{ v \in \pi_0 \}} \big], $$ as required. \end{proof} \begin{lemma}[Lower bound] \label{lemmaLB} Suppose that $F$ satisfies \eqref{Fass} and that~\eqref{assumption2} holds. Then there exists $c>0$ such that for all $v\in V$ and $t\in[0,1]$ we have \begin{equation} \label{LBinf} \mathrm{Inf}_v(t) \geq c \, \mathbb{P}(v \in \pi_0 \cap \pi_t). \end{equation} \end{lemma} \begin{proof} Let $\tilde\omega$ be a generic $F$-distributed random variable independent of everything else. Then by Lemma~\ref{lemma:infcov} we have \begin{equation}\label{eq:re-coinf} \mathrm{Inf}_v(t)=\mathbb{E}\Big[\mathrm{Cov}\big( (\tilde\omega-k_v(0))_+,(\tilde\omega-k_v(t))_+ \,\big| \, \mathcal{F}_v \big)\Big]. \end{equation} Let $A_0 = \{\tilde\omega \geq k_v(0)\}$, $A_t = \{\tilde\omega \geq k_v(t)\}$ and set $A = A_0 \cap A_t = \{ \tilde\omega \geq \max \{k_v(0),k_v(t)\} \}$. 
Then $(\tilde\omega-k_v(t))_+=(\tilde\omega-k_v(t))\mathbbm{1}_{A_t}$, and the conditional covariance in the right-hand side of~\eqref{eq:re-coinf} can be rewritten as \begin{equation}\label{eq:cov_cond} \begin{split} &\, \mathbb{E}\big[ (\tilde\omega-k_v(0))(\tilde\omega-k_v(t)) \mathbbm{1}_{A} \, \big| \, \mathcal{F}_v \big] \\ & - \mathbb{E}\big[ (\tilde\omega-k_v(0)) \mathbbm{1}_{A_0} \, \big| \, \mathcal{F}_v\big] \mathbb{E}\big[ (\tilde\omega-k_v(t))\mathbbm{1}_{A_t} \, \big| \, \mathcal{F}_v\big] \\ = & \, \mathbb{E}\big[ (\tilde\omega-k_v(0))(\tilde\omega-k_v(t)) \, \big| \, A, \mathcal{F}_v \big] \mathbb{P} (A \, | \, \mathcal{F}_v) \\ & - \mathbb{E}\big[ \tilde\omega-k_v(0) \, \big| \, A_0, \mathcal{F}_v\big] \mathbb{P}(A_0 \, | \, \mathcal{F}_v)\, \mathbb{E}\big[ \tilde\omega-k_v(t) \, \big| \, A_t, \mathcal{F}_v\big] \mathbb{P}( A_t \, | \, \mathcal{F}_v). \end{split} \end{equation} We claim that \begin{equation}\label{eq:exp_ineq} \mathbb{E}\big[ \tilde\omega-k_v(t) \, \big| \, A_t, \mathcal{F}_v\big] \le \mathbb{E}\big[ \tilde\omega-k_v(t) \, \big| \, A, \mathcal{F}_v\big]. \end{equation} On the event that $k_v(0)\le k_v(t)$, this is a trivial statement. On the event that $k_v(0)>k_v(t)$, the claim follows immediately from the following two observations: \begin{itemize} \item[(i)] given a random variable $X$, if $x > k$, then $$ \mathbb{P}(X > x \, | \, X > k) = \frac{\mathbb{P}(X >x)}{\mathbb{P}(X>k)} \geq \mathbb{P}(X >x); $$ \item[(ii)] given two probability distributions $F$ and $\widetilde{F}$, if $F(x) \geq \widetilde{F}(x)$ for all $x\in\mathbb{R}$ and $g:\mathbb{R}\to\mathbb{R}$ is an increasing function, then $$ \int g \, dF \leq \int g \, d \widetilde{F}. $$ \end{itemize} Next, we note that if $k_v(0)\le k_v(t)$, then $A_t\subseteq A_0$ which implies that $A=A_0\cap A_t=A_t$. If, on the other hand, $k_v(0)> k_v(t)$, then $A_0\subseteq A_t$ and hence $A=A_0\cap A_t=A_0$. In either case, since $k_v(0)$ and $k_v(t)$ are $\mathcal{F}_v$-measurable, we obtain that \begin{equation}\label{eq:prob_ineq} \mathbb{P}(A_0|\mathcal{F}_v)\mathbb{P}(A_t|\mathcal{F}_v)\le\mathbb{P}(A|\mathcal{F}_v). \end{equation} Combining~\eqref{eq:exp_ineq} and~\eqref{eq:prob_ineq} we obtain the following lower bound on~\eqref{eq:cov_cond}: \begin{equation} \label{covcov} \begin{split} &\mathbb{E}\big[ (\tilde\omega-k_v(0))(\tilde\omega-k_v(t)) \, \big| \, A, \mathcal{F}_v \big] \mathbb{P} (A \, | \, \mathcal{F}_v) \\ &\quad - \mathbb{E}\big[ \tilde\omega-k_v(0) \, \big| \, A, \mathcal{F}_v\big] \mathbb{E}\big[ \tilde\omega-k_v(t) \, \big| \, A, \mathcal{F}_v\big] \mathbb{P}( A \, | \, \mathcal{F}_v), \end{split} \end{equation} which hence gives \begin{equation}\label{eq:inf_lb1} \mathrm{Inf}_v(t)\ge\mathbb{E}\Big[\mathrm{Cov}\big(\tilde\omega-k_v(0),\tilde\omega-k_v(t) \,\big| \, A,\mathcal{F}_v \big)\mathbb{P}(A|\mathcal{F}_v)\Big]. \end{equation} Since $k_v(0)$ and $k_v(t)$ are $\mathcal{F}_v$-measurable, we have \begin{equation}\label{eq:inf_lb2} \mathrm{Cov}\big( \tilde\omega-k_v(0), \tilde\omega-k_v(t) \, \big| \, A, \mathcal{F}_v\big) =\mathrm{Cov}\big( \tilde\omega, \tilde\omega \, \big| \, A, \mathcal{F}_v\big) = \mathrm{Var}( \tilde\omega \, | \, A, \mathcal{F}_v). \end{equation} By assumption~\eqref{assumption2}, the above expression is uniformly bounded from below by a strictly positive constant $c>0$.
Consequently, equations~\eqref{eq:inf_lb1} and~\eqref{eq:inf_lb2} together give that $$ \mathrm{Inf}_v(t) \ge \mathbb{E} \big[\mathrm{Var}\left( \tilde\omega \, | \, A, \mathcal{F}_v\right) \mathbb{P}( A \, | \, \mathcal{F}_v) \big] \ge c \, \mathbb{E} \big[ \mathbb{P}( A \, | \, \mathcal{F}_v) \big]. $$ Finally, from~\eqref{eqprobevents2}, we obtain $\mathrm{Inf}_v(t) \ge c \, \mathbb{P}(v \in \pi_0 \cap \pi_t)$, as required. \end{proof} \section{From stability to chaos} \label{sec:proofs} Equipped with the covariance formula~\eqref{varianceinfluence}, and the connections between co-influences and geodesics derived in Section~\ref{sec:boundingtheinfluence}, we are now in a position to prove Theorem~\ref{thm:stabilityandchaos}. \subsection{Proof of Theorem~\ref{thm:stabilityandchaos}, part~(i)} The $L^2$-distance between $T_0$ and $T_t$ can be re-written into an expression similar to~\eqref{varianceinfluence} \begin{equation} \label{eq5} \mathbb{E}\left[ (T_0 - T_t)^2 \right] = 2 \, \mathbb{E}[T_0^2] - 2\, \mathbb{E}[T_0T_t] = -2 \int_0^t \frac{d}{ds}Q_s \, ds = 2\int_0^t\sum_{v\in V}\mathrm{Inf}_v(s)\,ds. \end{equation} Since the co-influences are non-negative and non-increasing,~\eqref{eq5} and Lemma~\ref{lemmaUB} give that \begin{equation}\label{eq:eq5} \mathbb{E}\left[ (T_0 - T_t)^2 \right] \le 2t\sum_{v\in V}\mathrm{Inf}_v(0)\le 2t \sum_{v \in V} \mathbb{E} \big[ \omega_v(0)^2 \, \mathbbm{1}_{\{ v \in \pi_0 \}} \big] \le 2t\,\mathbb{E}\bigg[\sum_{v\in\pi_0}\omega_v(0)^2\bigg]. \end{equation} The expectation in the right-hand side of~\eqref{eq:eq5} is bounded by the expected passage time for the last-passage problem where the weights have been replaced by the weights squared. As pointed out in Section 1.2, the condition \eqref{Fass} guarantees that the expected passage time in this setting grows linearly in $n$. Hence there exists $C<\infty$ such that \begin{equation} \label{eq6} \mathbb{E}\left[ (T_0 - T_t)^2 \right] \leq 2C t n\quad\mbox{for all }n\geq 1. \end{equation} By expanding the square, we also note that $$ \mathbb{E}\big[(T_0-T_t)^2\big]=\mathbb{E}[T_0^2]+\mathbb{E}[T_t^2]-2\mathbb{E}[T_0T_t]=\mathrm{Var}(T_0)+\mathrm{Var}(T_t)-2\mathrm{Cov}(T_0,T_t). $$ Rearranging the terms of the above expression yields \begin{equation}\label{eq:eq6} \mathrm{Cov}(T_0,T_t)=\mathrm{Var}(T_0)-\frac12\mathbb{E}\big[(T_0-T_t)^2\big]. \end{equation} Combining~\eqref{eq6} and~\eqref{eq:eq6} gives $$ \mathrm{Cov}(T_0,T_t)\ge\mathrm{Var}(T_0)-Ctn. $$ In particular, for $0<\alpha<\frac{n}{\mathrm{Var}(T)}$ and $t\le\alpha\frac{\mathrm{Var}(T)}{n}$, we conclude that $$ \mathrm{Corr}(T_0,T_t)=\frac{\mathrm{Cov}(T_0,T_t)}{\mathrm{Var}(T_0)}\ge1-C\alpha, $$ as required. \subsection{Proof of Theorem~\ref{thm:stabilityandchaos}, part~(ii)} The covariance formula~\eqref{varianceinfluence} together with Lemma~\ref{lemmaLB} gives the existence of a constant $c>0$ such that \begin{equation} \label{eq7} \mathrm{Var}(T) \ge \int_0^t \sum_{v \in V} \mathrm{Inf}_v(s) \, ds \geq c \int_0^t \sum_{v \in V} \mathbb{P}(v \in \pi_0 \cap \pi_s) \, ds = c \int_0^t \mathbb{E}[|\pi_0 \cap \pi_s|] \, ds. \end{equation} Since $\mathbb{P}(v\in\pi_0\cap\pi_s)$ is non-increasing as a function of $s$ (recall the discussion related to~\eqref{eq:time_corr}), the same holds for $\mathbb{E}[|\pi_0\cap\pi_s|]$. Consequently, $\mathrm{Var}(T)\ge ct\,\mathbb{E}[|\pi_0\cap\pi_t|]$. 
Hence, for $0<\alpha<\frac{n}{\mathrm{Var}(T)}$ and $t\ge\alpha\frac{\mathrm{Var}(T)}{n}$, we conclude that $$ \mathbb{E}[|\pi_0 \cap \pi_t|] \leq \frac{1}{c t}\mathrm{Var}(T) \leq \frac{n}{c\alpha}. $$ This completes the proof of Theorem~\ref{thm:stabilityandchaos}.
{ "arxiv_id": "2302.11406", "language": "en", "timestamp": "2023-02-23T02:15:46", "url": "https://arxiv.org/abs/2302.11406", "yymm": "2302" }
\section{Introduction} Thanks to machine learning (ML) and deep learning (DL) methods in artificial intelligence studies, a wide range of applications and publications have been developed in many areas, such as healthcare \cite{ahmed_adaptive_2022}, finance \cite{rundo_machine_2019}, and transportation \cite{bhavsar_machine_2017}. Organizations using ML address their problems by making data-based interpretations and analyses. Although ML methods are easy to implement, the development of successful, consistent, and well-generalizing models is a complex process. There are several alternatives for increasing the learning performance in machine-learning problems: (i) Using more data: increasing the amount of training data allows the model to generalize more effectively. (ii) Data preprocessing: data preprocessing methods such as normalizing data, selecting features, or scaling features can improve the performance of the model. (iii) Model selection: choosing the most appropriate model among different ML algorithms can also increase the learning performance. For example, a researcher might try an artificial neural network (ANN) model instead of a decision tree model. (iv) Ensemble methods: the learning performance can be increased by using multiple ML models or by combining the results of trained models. (v) Hyper parameter optimization: adjusting the hyper parameters to optimal values can increase the learning performance. For example, the learning rate or the initial values of the weights can be changed. This process aims to obtain more accurate models through various experimental studies. One of the challenging issues in ML is hyper parameter optimization (HPO). By integrating ML models with an optimization algorithm, models with acceptable performance that is close to the best can be developed. The main objectives of HPO are as follows: \begin{itemize} \item To increase the generalization ability of models, \item To achieve a higher training or test performance score, \item To capture the best combination of parameters without trying the entire search space, \item To avoid overfitting to the training data as well as underfitting. \end{itemize} According to several studies, swarm intelligence and evolutionary computation can be used to increase the predictive performance and generalizability of ML models. In this study, a comparison is conducted between meta-heuristic algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO), and classical HPO techniques, such as grid search, random search, and Bayesian optimization. \section{Understanding Hyper parameters} \label{sec:2} There are two types of parameters in ML: (i) model parameters and (ii) hyper parameters (HPs). Model parameters generally depend on the variables in the data set and are not configured by the user; the model adjusts them during training. For example, the weights in ANNs are model parameters. The second parameter type is HPs, which are set before training an ML model and cannot be directly learned from the data. These parameters control the learning process, such as the learning rate, the maximum depth of a decision tree, and the number of neurons in an ANN layer. The optimal values of these parameters can significantly affect the performance of the model, including its accuracy, speed, and generalization to new data.
The main objective of HPO is to effectively scan the search space to find the values that result in the best performance on a validation or test set. This process is typically performed through iterative steps that attempt different combinations of HPs and evaluate their performance, thereby allowing the selection of the best set of HPs for a given task. The optimization problem can be written over the set of all HPs $\sigma=(x_1,x_2,\ldots,x_{nhp})$, where $nhp$ denotes the number of hyper parameters and $x_i\in X_i$, with $X_i$ the search space of $x_i$. $X_i$ may be categorical (such as the criterion in a random forest (RF)), integer-valued (such as max\_depth), or continuous (such as the learning rate). In the continuous or integer case, values can be sampled from a probability distribution. The HPs are best when $f\left(\sigma\right)$ is minimal or maximal, depending on the performance metric. If the performance metric is an error value, the objective is $\min f\left(\sigma\right)$; otherwise it is $\max f\left(\sigma\right)$, as shown in Eq. \ref{Eq1} \cite{feurer_hyperparameter_2019}. Another challenge in HPO arises from interdependent hyper parameters. For example, in deep neural networks, the behavior of HPs such as the learning rate changes according to the selected optimizer hyper parameter; a learning rate that works well when the optimizer is Adam may not be appropriate for any other optimizer. All these situations are among the difficulties of HPO studies, and an ML engineer needs to solve many such complex challenges and offer practical solutions. \begin{equation} \label{Eq1} \sigma^\ast={\rm argmin}_{x_i\in X_i}\ f\left(\sigma\right)\quad{\rm or}\quad{\rm argmax}_{x_i\in X_i}\ f\left(\sigma\right) \end{equation} HPO is used in all three types of machine learning: supervised, unsupervised, and reinforcement learning. In supervised learning, the labels of the output variables are known, and performance criteria such as the confusion matrix, precision, ROC-AUC curves, and accuracy are used to assess the quality of the training. In regression analysis, the metrics include the mean square error (MSE), mean absolute error (MAE), and R2. Two sets are selected, a training and a test set, to evaluate performance: training is performed on the training set, and if it provides the desired result, the model is evaluated on the test set. In Fig. \ref{fig:fig1}, a flowchart of machine learning is presented. \begin{figure} \centering \includegraphics[width=10cm]{fig1.pdf} \caption{The training cycle of ML} \label{fig:fig1} \end{figure} Although the flow of ML studies seems simple, it involves routine and challenging processes because of its repetitive operations. Automated machine learning (AutoML) studies aim to produce machine learning models that are more suitable for real data with little manual intervention in the model \cite{bello_neural_2017}. AutoML studies include sub-tasks such as selecting the learning algorithm, determining the set of features, and HPO. At the same time, AutoML studies are conducted to obtain repeatable results. In recent years, we have witnessed DL applications in many areas, such as image recognition, video processing, voice recognition, and self-driving cars, that had not been realized in the past. Tuning the HPs of deep-learning algorithms is a very difficult process for users. Therefore, HPO is indispensable in DL studies.
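Returning to the formalisation in Eq.~\ref{Eq1}, a configuration $\sigma$ over mixed-type search spaces and the objective $f(\sigma)$ can be written down concretely. The following short Python sketch is purely illustrative: the search-space values and the use of a cross-validated accuracy as $f$ are our assumptions, not a prescription of this study.
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Search spaces X_i: categorical and integer-valued examples.
search_space = {
    "criterion": ["gini", "entropy"],       # categorical
    "max_depth": list(range(2, 11)),        # integer
    "n_estimators": [50, 100, 200, 400],    # integer
}

def f(sigma, X, y):
    # Objective f(sigma): mean 5-fold cross-validated accuracy,
    # to be maximised over sigma = (x_1, ..., x_nhp).
    model = RandomForestClassifier(**sigma)
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
\end{verbatim}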
As can be expected, the topic of HPO has gained significant momentum with the increase in DL studies, owing to the more complex structure of DL models. In this context, when the ScienceDirect database is searched with the query ``hyper parameter optimization'', the number of articles shown in Fig. \ref{fig:fig2} is obtained. This is a topic of growing interest. \begin{figure} \centering \includegraphics{fig2.png} \caption{HPO Studies in the literature} \label{fig:fig2} \end{figure} The research questions usually focus on the following: (i) Is it possible for the DL model to achieve a better performance with hyper parameter changes? (ii) What are the best hyper parameter combinations for the DL model? To answer these questions, it is necessary to use several optimization techniques and to correctly define the constraints together with the objective function. To classify the optimization techniques used for HPO, a taxonomy can be determined in general terms, as shown in Fig. \ref{fig:fig3}. According to this classification, there are methods that provide exact optimal results and methods that provide approximate but good results. It is generally infeasible to use exact algorithms in HPO studies. Instead, approximate methods that work well are classified as local and meta-heuristic algorithms. Meta-heuristic algorithms, in turn, are classified as evolutionary algorithms (such as genetic algorithms (GAs), harmony search, and genetic programming), population-based algorithms (such as ant colony optimization (ACO), artificial bee colony (ABC), and PSO), physics-based algorithms (such as electromagnetism-based optimization, simulated annealing, and gravitational search), and human-based algorithms. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{fig3.pdf} \caption{A general taxonomy of optimization techniques for HPO} \label{fig:fig3} \end{figure} \section{Hyper parameter Optimization Techniques} In HPO, most studies have been conducted using trial and error, grid search, and random search methods. The trial-and-error method involves primitive and laborious work, and the researcher’s skill is at the forefront of this technique: the model is trained with different HPs, its performance is tested, and the HPs that provide the best performance are finally used. \textbf{Grid search} is a simple approach to HPO. It exhaustively searches the hyper parameter space by testing all possible combinations of hyper parameters. An ML model is trained and evaluated for each combination, and its performance is recorded. The combination of HPs that results in the best performance on the validation set is selected as the optimal set. Grid search is simple to implement, easy to understand, and can be used as a baseline for comparison with other HPO techniques. However, grid search can be computationally expensive, especially for high-dimensional hyper parameter spaces, and may not always find the optimal set of hyper parameters, particularly when the relationship between HPs and performance is not well understood. Despite its limitations, grid search remains a popular method for hyper parameter optimization, particularly for small data sets and simple models.
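The grid search just described, and the random search discussed next, can be illustrated with scikit-learn. The sketch below is ours and purely illustrative: the synthetic data set and the number of random-search iterations are placeholders, and the grid values simply mirror those used later in Table~\ref{tab:tab3}; it is not the exact code of the experiments reported below.
\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

param_grid = {
    "n_estimators": [50, 100, 150, 200, 250],
    "max_features": [1, 3, 5, 8],
    "max_depth": [1, 2, 4, 8],
    "criterion": ["entropy", "gini"],
}

# Grid search: train and evaluate every combination with 5-fold CV.
grid = GridSearchCV(RandomForestClassifier(), param_grid, cv=5,
                    scoring="accuracy").fit(X, y)
print(grid.best_params_, grid.best_score_)

# Random search: evaluate only a fixed number of random combinations.
rand = RandomizedSearchCV(RandomForestClassifier(), param_grid, n_iter=40,
                          cv=5, scoring="accuracy", random_state=0).fit(X, y)
print(rand.best_params_, rand.best_score_)
\end{verbatim}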
\textbf{Random Search} is another popular approach to HPO in machine learning. Unlike grid search, which exhaustively searches the hyper parameter space, random search generates a set of random combinations of HPs and evaluates their performance. The goal is to randomly sample the hyper parameter space and determine the best set of hyper parameters. Random search is computationally less expensive than grid search and can be more efficient for high-dimensional hyper parameter spaces. Additionally, random search has been shown to perform better than grid search in some cases, particularly when the relationship between HPs and performance is not well understood. However, the performance of random search is highly dependent on the number of trials and the random sampling distribution, and it may not always find the optimal set of hyper parameters. Despite these limitations, random search remains a popular method for hyper parameter optimization, particularly when combined with Bayesian optimization or other meta-heuristic algorithms. \textbf{Bayesian optimization} is a probabilistic model-based approach to HPO in machine learning. It models the relationship between HPs and performance using a Bayesian model, such as a Gaussian process, and this model is used to guide the search for an optimal set of hyper parameters. Bayesian optimization iteratively selects the next set of HPs to be evaluated based on the model’s performance prediction, allowing for an efficient search of the hyper parameter space. Bayesian optimization is highly effective for hyper parameter optimization, particularly in high-dimensional spaces, and it is widely used in machine learning and other fields. However, Bayesian optimization can be more computationally expensive than grid search or random search and may require more computational resources for implementation. Despite these limitations, Bayesian optimization remains a popular method for hyper parameter optimization, particularly when combined with other optimization techniques or meta-heuristics. The other category of algorithms for HPO is meta-heuristic algorithms. One of the key benefits of meta-heuristics in optimization problems is that they do not require gradient information and can be applied to problems where the gradient cannot be obtained. They can also be used to solve optimization problems in which the fitness or objective function is unknown or not differentiable, which is an additional benefit. In addition, compared to other optimization methods, meta-heuristics can deliver near-optimal solutions using fewer computational resources \cite{kouziokas_swarm_2022}. Meta-heuristics can be used in any optimization problem as well as in some special areas, such as financial trading, credit card data, portfolio optimization, routing problems, load balancing problems, facility location, influence maximization in social networks, scheduling, smart buildings, and the reconstruction of ECG signals \cite{jafar_financial_2022} \cite{shahriari_taking_2015}. Some studies in the field of HPO using meta-heuristics are listed in Table \ref{tab:tab1}. \begin{table}[hbt!]
\centering \caption{Reference studies} \label{tab:tab1} \resizebox{\columnwidth}{!}{% \begin{tabular}{|p{0.10\linewidth}| p{0.30\linewidth} | p{0.30\linewidth} | p{0.2\linewidth} | p{0.10\linewidth} |} \hline \textbf{Reference} & \textbf{Optimization Algorithm} & \textbf{ML Algorithm} & \textbf{Application Area} & \textbf{ML/DL} \\ \hline \cite{ahmet_senel_hyperparameter_2023} & Grey Wolf Optimization (GWO) & Deep learning models (CNN) & Galaxy Classification & DL \\ \hline \cite{ahmed_adaptive_2022} & Simulated Annealing (SA) & extreme gradient boosting, categorical boosting & E-Triage Tool & ML \\ \hline \cite{ali_hyperparameter_2023} & Ant Bee Colony Algorithm, GA, Whale Optimization, and PSO & Support Vector Machine (SVM) & Optimizing the Computational Complexity & ML \\ \hline \cite{erden_genetic_2023} & GA & Deep learning models (RNN, LSTM, GRU) & PM2.5 time-series prediction & DL \\ \hline \cite{he_assessment_2023} & GWO, the whale optimization algorithm, swarm algorithm & RF & Predict Blast-Induced Over break & ML \\ \hline \cite{inik_cnn_2023} & PSO & Deep learning models (CNN) & Environmental Sound Classification & DL \\ \hline \cite{kalita_novel_2023} & Moth-Flame Optimization & SVM & Intrusion Detection System & ML \\ \hline \cite{si_metamodel-based_2023} & non dominated sorting GA II, GA, PSO, simulated annealing, and the multi-objective GA & ANN & Energy Optimization & ML \\ \hline \cite{tayebi_performance_2022} & GA, differential evolution, ABC, GWO, PSO, and teaching learning-based optimization & AdaBoost, RF, logistic regression, SVM, k-nearest neighbors, and decision tree & Fraud Transactions Detection & ML \\ \hline \cite{nematzadeh_tuning_2022} & GWO and GA & Averaged Perceptron, Fast Tree, Fast Forest, Light Gradient Boost Machine, Limited memory Broyden Fletcher Goldfarb Shanno algorithm Maximum Entropy, Linear SVM, and DL & Biological, Biomedical Data sets & ML and DL \\ \hline \cite{ma_comprehensive_2022} & multi-verse optimization & SVM & Geohazard Modeling & ML \\ \hline \cite{bacanin_benefits_2023} & GA, PSO, ABC, Firefly Algorithm, Bat Algorithm, Sine Cosine Algorithm & Deep learning models (LSTM) & Energy Load Forecasting & DL \\ \hline \cite{ma_metaheuristic-based_2022} & PSO & SVM & Landslide Displacement Prediction & ML \\ \hline \cite{chou_metaheuristics-optimized_2022} & Jellyfish search optimization & Deep learning models (CNN) & Generation Of Sustainable Energy & DL \\ \hline \cite{suddle_metaheuristics_2022} & GA, PSO, Differential Evolution, Firefly, and Cat Swarm Optimization & Deep learning models (LSTM) & Sentiment Analysis & DL \\ \hline \cite{drewil_air_2022} & GA & Deep learning models (LSTM) & Air Pollution Prediction & DL \\ \hline \cite{al_duhayyim_metaheuristics_2022} & Salp Swarm Algorithm & Deep learning models (GRU) & Image Captioning System & DL \\ \hline \cite{maumela_population_2022} & Ulimisana Optimization Algorithm & ML & ML Unfairness & ML \\ \hline \cite{zhao_lithium-ion_2023} & Hunger Game Search & Gaussian process regression & Lithium-Ion Battery State of Health Estimation & ML \\ \hline \cite{albakri_metaheuristics_2023} & Rock Hyrax Swarm Optimization & Deep learning models & Cybersecurity and Android Malware Detection and Classification & DL \\ \hline \end{tabular} } \end{table} In addition, optimization algorithms can be used as optimizer for ML models. Examples include: \begin{itemize} \item \textbf{Stochastic Gradient Descent (SGD)} is a well-liked optimization technique in DL and ML. 
Finding the set of parameters that minimizes a loss function is the objective of optimization in these domains. SGD updates the parameters in the direction of the negative gradient of the loss with respect to the parameters. Instead of computing the gradient using the whole training set, as in batch gradient descent, the gradient is calculated using a single randomly chosen sample from the training set. The algorithm is stochastic since just one sample is chosen at random. \item \textbf{Levenberg-Marquardt}: For non-linear least squares problems, the Levenberg-Marquardt algorithm is a powerful optimization technique that has found widespread application in many disciplines, including machine learning, computer vision, and control engineering. \item \textbf{Adagrad}: The stochastic gradient descent optimization algorithm Adagrad (Adaptive Gradient Algorithm) is applied to optimization problems in DL and machine learning. Adagrad's primary principle is to scale the learning rate for each parameter individually, so that parameters that receive many updates have a reduced learning rate and parameters that receive fewer updates have a larger learning rate. This makes it possible for the optimization process to continue moving forward for all parameters even when the gradients are sparse. \item \textbf{RMSProp}: The optimization approach RMSProp (Root Mean Square Propagation) is used to train deep neural networks and other machine learning models. This stochastic gradient descent variant attempts to overcome some of the drawbacks of conventional gradient descent, including sluggish convergence and oscillating behavior. \item \textbf{Adadelta}: It is a variant of the popular Adagrad algorithm and aims to address some of its limitations, such as the need to manually set the learning rate and the tendency to converge slowly. \end{itemize} \textbf{GAs}: GAs are meta-heuristic optimization methods inspired by the process of natural selection in biology. A GA works by encoding candidate solutions as binary strings or other representations and using operators such as mutation, crossover, and selection to generate new solutions and evolve the population over iterations. The performance (fitness) of each candidate solution is evaluated, and the best solutions are selected to form the next generation. This process continues until a satisfactory solution is found or a stopping criterion is met (see Fig. \ref{fig:fig4}). \begin{figure} \centering \includegraphics{fig4.pdf} \caption{Genetic algorithm steps for hyper parameter optimization} \label{fig:fig4} \end{figure} \textbf{PSO}: PSO is a meta-heuristic optimization technique that mimics the behavior of swarms such as fish schools and bird flocks. In PSO, a group of particles moves around the hyper parameter space, each particle adjusting its location according to its own and its neighbors' best performance. The particles of the PSO algorithm are simulated in the adopted search space \cite{clerc_particle_2002}\cite{kennedy_particle_2010}. The flow of the PSO algorithm is shown in Fig. \ref{fig:fig5} \cite{noauthor_optunity_2023}, where $v_i$ is the velocity, $x_i$ is the position, $p_i$ is the personal best position, $p_g$ is the global best position, and $\phi_1$ and $\phi_2$ are random values. \begin{figure} \centering \includegraphics{fig5.pdf} \caption{PSO for hyper parameter optimization} \label{fig:fig5} \end{figure}
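The standard position/velocity update underlying Fig.~\ref{fig:fig5} can be sketched in a few lines of NumPy. The following is a didactic illustration only, not the Optunity implementation used in the experiments below; in particular, the inertia weight \texttt{w} and the clipping of positions to the box bounds are our simplifying assumptions.
\begin{verbatim}
import numpy as np

def pso(f, bounds, n_particles=10, n_iters=40, phi1=0.5, phi2=0.5, w=0.7):
    # f: objective to maximise; bounds: list of (low, high) per dimension.
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    x = np.random.uniform(lo, hi, size=(n_particles, dim))  # positions x_i
    v = np.zeros_like(x)                                     # velocities v_i
    p = x.copy()                                             # personal bests p_i
    p_val = np.array([f(xi) for xi in x])
    g = p[p_val.argmax()]                                    # global best p_g
    for _ in range(n_iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        v = w * v + phi1 * r1 * (p - x) + phi2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(xi) for xi in x])
        better = val > p_val
        p[better], p_val[better] = x[better], val[better]
        g = p[p_val.argmax()]
    return g, p_val.max()
\end{verbatim}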
\textbf{Ant Colony Optimization}: ACO is a meta-heuristic optimization method inspired by the foraging behavior of ant colonies. It can also be applied to HPO in machine learning. ACO is computationally expensive and requires numerous evaluations to converge to an optimal solution. Studies of HPO with ACO are still quite few \cite{lohvithee_ant_2021}, and further work in this direction will be needed in the future. \subsection{Available Tools for HPO} To apply these techniques, a flexible tool is required. The most widely used programming language for machine learning applications is Python. Owing to its libraries, the Python programming language allows machine-learning applications to be developed more easily and flexibly. ML applications can also be developed in programs with graphical interfaces, such as MATLAB, RapidMiner, and Weka. A comparison of popular ML tools is provided in Table \ref{tab:tab2}. \begin{table}[hbt!] \centering \caption{Popular ML Tools} \label{tab:tab2} \resizebox{\columnwidth}{!}{% \begin{tabular}{|p{0.1\linewidth}|p{0.2\linewidth}|p{0.20\linewidth}|p{0.1\linewidth}|p{0.40\linewidth}|} \hline & \textbf{Platform} & \textbf{Fee} & \textbf{Programming Language} & \textbf{Usage} \\ \hline Scikit-Learn & Linux, Mac OS, Windows & Free & Python, Cython, C, C++ & Classification, Regression, Clustering, Data Preprocessing, Model Selection, Dimensionality Reduction \\ \hline PyTorch & Linux, Mac OS, Windows & Free & Python, C++, CUDA & Autograd Module, Optimization Module, ANN Module \\ \hline TensorFlow & Linux, Mac OS, Windows & Free & Python, C++, CUDA & Enables data flow programming. \\ \hline Weka & Linux, Mac OS, Windows & Free & Java & Data Preparation, Classification, Regression, Clustering, Visualization, Data Mining, Association Rules \\ \hline KNIME & Linux, Mac OS, Windows & Free & Java & It can work with large data volumes. It supports text mining and image mining through plugins. \\ \hline Colab & Cloud Service & Free & - & Supports PyTorch, Keras, TensorFlow, and OpenCV tools. \\ \hline Apache Mahout & Cross-platform & Free & Java, Scala & Preprocessing, Regression, Clustering, Recommendation Systems, Distributed Linear Algebra, Classification, Hypothesis Testing \& Core Methods, Image, Sound, and Signal Processing \\ \hline Shogun & Windows, Linux, UNIX, Mac OS & Free & C++ & Regression, Classification, Clustering, Support Vector Machines, Dimensionality Reduction, Online Learning \\ \hline Keras.io & Cross-platform & Free & Python & API for ANN \\ \hline Rapid Miner & Cross-platform & Yearly: Small 2500\$, Medium 5000\$, Large 10,000\$ & Java & Data Loading and Conversion, Data Preprocessing and Visualization. \\ \hline \end{tabular}% } \end{table} As can be seen from Table \ref{tab:tab2}, the Python programming language stands out because of the library facilities it offers. One of the most popular libraries in traditional machine learning applications is the Scikit-Learn library \cite{virtanen_scipy_2020}. The Scikit-Learn library includes functions related to HPO, such as grid search and random search. As previously mentioned, random search tries random combinations of parameters, whereas grid search obtains the result by trying all parameter combinations. Many tools are available in the Python programming language with which HPO studies can be performed. Of the open-source tools mentioned here, all except Auto-Weka are mainly developed in the Python programming language.
These tools are: \begin{itemize} \item Hyperopt (Hyperopt, 2011/2021) \item Auto-Weka (Automl/Autoweka, 2015/2021) \item Auto-sklearn (Auto-Sklearn, 2015/2021) \item Optuna (Optuna, 2018/2021) \item Ray-Tune (Ray-Project/Ray, 2016/2021) \item Bayesian Optimization (fernando, 2014/2021) \end{itemize} \section{Applications of Hyper Parameter Optimization} The ELEC2 dataset is popular and can be accessed at https://datahub.io/machine-learning/electricity \cite{hutchison_learning_2004} \cite{harries_splice-2_1999}. For this study, a popular dataset was selected to make comparisons easier to interpret. The dataset contains 45312 instances collected from the Australian New South Wales Electricity Market from May 7, 1996, to December 5, 1998. Electricity prices vary according to supply and demand. Each entry in the dataset covers a period of 30 min. The output variable of the dataset is whether the prices are up or down. In this study, experiments are run on a data set that can be considered balanced. As seen in Fig. \ref{fig:fig6}, the total number of instances with the UP class is 19237 and the number with DOWN is 26075. \begin{figure} \centering \includegraphics[width=12cm]{fig6.pdf} \caption{Number of instances for the output variable} \label{fig:fig6} \end{figure} In this study, 20\% of the dataset was used for testing, and 80\% for training. MinMax normalization was applied to the variables in the dataset. In addition, training was carried out using 5-fold cross-validation. The performance of the algorithms was calculated using the accuracy score, based on the true positive (TP), true negative (TN), false positive (FP), and false negative (FN) counts, as presented in Eq. \ref{Eq2}, where ${\hat{y}}_i$ is the predicted and $y_i$ the true value of the $i$-th sample, and $1\left(x\right)$ is the indicator function. In addition, the precision, recall, and f1-score metrics were used for the performance evaluation of the algorithms, as shown in Eqs. 3-5. Hyper parameter search spaces were determined so as to be fair to all algorithms. In this study, an RF algorithm was used. For the RF, the number of random features considered at each node (max\_features), the number of trees in the forest (n\_estimators), the maximum depth (max\_depth), and the split criterion (criterion) were tuned, and their search spaces are given in Table \ref{tab:tab3}. \begin{equation} \label{Eq2} \mathrm{accuracy}\left(y,\hat{y}\right)=\frac{1}{n_{\mathrm{samples}}}\sum_{i=0}^{n_{\mathrm{samples}}-1}1\left({\hat{y}}_i=y_i\right) \end{equation} \begin{equation} \mathrm{precision} = \frac{TP}{TP+FP} \end{equation} \begin{equation} \mathrm{recall} = \frac{TP}{TP+FN} \end{equation} \begin{equation} \mathrm{f1\ score} = 2\times\frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \end{equation}
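The preprocessing and evaluation protocol just described can be summarised in a short scikit-learn sketch. This is an illustrative outline only: the file name and the target-column name are placeholders rather than the actual ELEC2 download, and the classifier is left at its default settings.
\begin{verbatim}
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.preprocessing import MinMaxScaler

data = pd.read_csv("electricity.csv")          # placeholder file name
X = MinMaxScaler().fit_transform(data.drop(columns=["class"]).values)
y = data["class"].values                       # placeholder target column

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier()
# 5-fold cross-validated accuracy on the training part (Eq. 2)
print(cross_val_score(model, X_tr, y_tr, cv=5, scoring="accuracy").mean())

# Precision, recall and f1-score per class on the held-out test part (Eqs. 3-5)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
\end{verbatim}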
\begin{table}[hbt!] \centering \caption{Hyper Parameter Search Spaces} \label{tab:tab3} \resizebox{\columnwidth}{!}{% \begin{tabular}{|p{0.15\linewidth}|p{0.15\linewidth}|p{0.5\linewidth}|} \hline \textbf{HPs} & \textbf{Grid Search} & \textbf{Random Search}, \textbf{Bayesian Optimization}, \textbf{Genetic Algorithm}, \textbf{PSO} \\ \hline n\_estimators & {[}50, 100, 150, 200, 250{]} & Integer (50, 400) \\ \hline max\_features & {[}1, 3, 5, 8{]} & Integer (4, 10) \\ \hline max\_depth & {[}1, 2, 4, 8{]} & Integer (4, 10) \\ \hline criterion & {[}'entropy', 'gini'{]} & {[}'entropy', 'gini'{]} \\ \hline \end{tabular}% } \end{table} GA parameters: \begin{itemize} \item Population Size: 10 \item Generation Size: 40 \item Crossover Probability: 0.9 \item Mutation Probability: 0.05 \item Algorithm: EaSimple \end{itemize} PSO parameters: \begin{itemize} \item Population size: 10 \item Generation Size: 40 \item $\phi_1$, $\phi_2$: 0.5 \end{itemize} The algorithms were implemented in the Python programming language using the skopt \cite{head_scikit-optimizescikit-optimize_2021}, SciPy \cite{virtanen_scipy_2020}, sklearn (Auto-Sklearn, 2015/2021), sklearn\_genetic \cite{calzolari_manuel-calzolarisklearn-genetic_2022}, and Optunity \cite{noauthor_optunity_2023} packages, in the Jupyter notebook development environment, on a laptop with 12 GB of RAM and an Intel(R) Core(TM) i7-5600U CPU @ 2.60 GHz. The algorithms were run 15 times, and the precision, recall, and f1-score results obtained in the best individual run are presented in Table \ref{tab:tab4}. Accordingly, among the different optimization techniques, the best per-class learning performance was achieved by the PSO algorithm. Looking at the results of all algorithms individually, the ``DOWN'' class generally has a better learning percentage than the ``UP'' class. For the PSO algorithm, the misclassification rates of both classes are smaller than for the other algorithms. \begin{table}[] \centering \caption{Classification performance results of the algorithms} \label{tab:tab4} \resizebox{\columnwidth}{!}{% \begin{tabular}{|p{0.2\textwidth}|p{0.2\textwidth}|p{0.2\textwidth}|p{0.2\textwidth}|p{0.2\textwidth}|} \hline Algorithm & Classes & Precision & Recall & F1-score \\ \hline \multirow{2}{*}{Grid} & DOWN & 0.87 & 0.84 & 0.85 \\ \cline{2-5} & UP & 0.79 & 0.82 & 0.81 \\ \hline \multirow{2}{*}{Random} & DOWN & 0.89 & 0.84 & 0.86 \\ \cline{2-5} & UP & 0.79 & 0.85 & 0.82 \\ \hline \multirow{2}{*}{Bayesian} & DOWN & 0.87 & 0.89 & 0.88 \\ \cline{2-5} & UP & 0.84 & 0.83 & 0.83 \\ \hline \multirow{2}{*}{Genetic} & DOWN & 0.87 & 0.89 & 0.88 \\ \cline{2-5} & UP & 0.84 & 0.82 & 0.83 \\ \hline \multirow{2}{*}{PSO} & DOWN & 0.94 & 0.94 & 0.94 \\ \cline{2-5} & UP & 0.92 & 0.91 & 0.92 \\ \hline \end{tabular}% } \end{table} The PSO algorithm has the best average test and training accuracy, as shown in Table \ref{tab:tab5}; its training accuracy is very close to 1. The GA and Bayesian optimization algorithms presented similar results, while grid search and random search performed worse, as expected. Another criterion observed in this study is the CPU time of the algorithms. In some cases, search algorithms may require long CPU times, or it may not be possible to perform such studies without powerful processing hardware; in the computing environment used in this study, such searches could take days to complete.
When the CPU times in Table \ref{tab:tab5} are compared, the CPU time of the Grid search is low because its search space consists of a small number of predefined values. In addition, the GA was the algorithm that required the most CPU time. PSO emerged as the best-performing algorithm within a reasonable time. \begin{table}[hbt!] \centering \caption{Accuracy performances of optimization algorithms} \label{tab:tab5} \resizebox{\columnwidth}{!}{% \begin{tabular}{|p{0.25\linewidth}|p{0.25\linewidth}|p{0.25\linewidth}|p{0.25\linewidth}|} \hline \textbf{Algorithm} & \textbf{Test Accuracy} & \textbf{Train Accuracy} & \textbf{CPU Time} \\ \hline Bayes & 0.8611 & 0.8825 & 5850.78 \\ \hline GA & 0.8609 & 0.8822 & 6639.02 \\ \hline Grid & 0.8297 & 0.8409 & 714.89 \\ \hline PSO & 0.9262 & 0.9933 & 1286.4 \\ \hline Random & 0.8425 & 0.8586 & 2567.56 \\ \hline \end{tabular}% } \end{table} \section{Conclusion} HPO has become necessary because of the high number of hyper parameters in machine learning applications. Manually adjusting hyper parameters is feasible only for problems with very small data sets or a very small number of parameters and solution spaces. Even in a mid-level machine-learning problem, HPO is complex and consists of many stages. Therefore, there is a need to develop, test and compare optimization algorithms to solve the HPO problem. Optimization algorithms can be classified in several ways; one of these classes is meta-heuristic algorithms. HPO studies powered by meta-heuristic algorithms can achieve higher performance, be more reliable, and provide results without scanning the entire search space. Therefore, meta-heuristic algorithms offer an important application area for HPO. In this study, random search, grid search, Bayesian optimization, genetic algorithms and PSO were compared on a widely used benchmark data set. The results showed that PSO is superior to the other algorithms both in terms of learning performance and speed of operation. Future work will investigate which meta-heuristic algorithms perform best for HPO and with which search spaces. In addition, relatively new meta-heuristic algorithms can be tested in further studies. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:Introduction} In the past few years, the cryptocurrency market has experienced substantial growth, surpassing the three trillion dollar mark by the end of 2021 and still accounting for about a trillion in this present `winter' (\citealp{Statista}). The interest in cryptocurrencies has been driven by various factors, which have created a heterogeneous market with competing forces. The cryptocurrency market has provided a platform for experimenting with new financial models that have the potential to be decentralised and free from state control. However, this market has also seen a trend toward concentration and centralisation, which goes against the original ideals of the cryptocurrency movement. As we reached a turning point in this market, it became evident that opaque financial conditions and poor governance characterise some financial entities operating within it. This development is a cause for concern, especially if the lack of transparency involves unregulated centralised entities with dominant positions in the market. Indeed, this contradicts the principles of transparency, independence, and accountability originally envisioned for the cryptocurrency environment. In May 2022, the Terra-Luna stablecoin system collapsed, provoking a contagion across the cryptocurrency ecosystem with long-run effects. As described by \cite{Briola2023}, Terra-Luna was an algorithmic stablecoin whose underlying protocol relied on a two-coin system that was not backed by traditional collaterals. Its failure was presumably induced by a liquidity pool attack and facilitated by the inappropriate blockchain framework behind the crypto asset itself. This collapse markedly damaged confidence in the crypto market, accelerating the onset of a ``crypto winter". Consequently, users started massively withdrawing their funds from crypto institutions while investors recalled their loans with cryptocurrencies as collateral. The summer of 2022 marked the onset of bankruptcy for many actors with excessive leverage, such as Three Arrows Capital (3AC), a Singapore-based cryptocurrency hedge fund (\citealp{Jha2022}), Voyager Digital, a cryptocurrency brokerage company (\citealp{Andersen2022}), and Celsius, a cryptocurrency lending company (\citealp{Newar2022}). Compared to the entities mentioned above, FTX, the third-largest digital currency exchange with a $\$10$ billion active trading volume and a $\$32$ billion valuation at the time of the events (\citealp{Fu2022, Exchanges_Cap}), was able to hide its financial situation until 02 November 2022. On this day, CoinDesk reported that Alameda Research owned $\$6$ billion FTX Tokens (FTT) in its balance sheet (\citealp{Allison2022}). In other words, the balance sheet of FTX's leading trading firm mainly consisted of the non-collateralised native token created by FTX itself. This in-house token was costless for the issuer since it was not backed by any real asset. Native tokens are common in centralised digital currency exchanges such as Binance (Binance token - BNB), Huobi (Huobi token - HT), and Hxro (Hxro token - HXRO). They serve as utility tokens and offer customers various incentives, including reduced trading fees, among other non-financial perks. However, the case of FTX and its token FTT concealed a deeper underlying truth. Since its Initial Coin Offering (ICO) in 2019, most FTTs ($80\%$ of the total supply) were held by FTX and Alameda Research (\citealp{Khoo2022}).
In this scenario, both entities could have easily controlled the price of the non-collateral native token, FTT, to secure additional financing while increasing the value of their balance sheets. Their financial strategy relied on a leverage mechanism where a native token without inherent value was used as collateral to raise funds. This vicious cycle (see Figure \ref{logicalflow_1}) was fragile and prone to external events affecting the FTT price. As we show in this study, the Terra-Luna collapse represented such a shock. After that, both FTX and Alameda Research suffered from a credit crunch. They were initially able to avoid bankruptcy, given the misappropriation of clients' deposits, the sale of their reserves and the inflated value of FTT in their balance sheets. However, CoinDesk's report on the reliance of Alameda Research and FTX on their proprietary token unfolded the leverage mechanism used by both companies. In response to this new information, on 06 November 2022, Binance announced the liquidation of all the FTTs on its books, giving rise to a Twitter debate between FTX, Alameda Research and Binance, which ended with the bankruptcy of FTX on 11 November 2022. \begin{figure}[h] \centering \caption{Schematic depiction of the leverage mechanism used by FTX and Alameda Research.} \begin{center} \includegraphics[scale=0.32]{./images/logic_flow_1.png} \end{center} \label{logicalflow_1} \end{figure} In this paper, we provide the scientific community with insights on (i) the main events that led to the FTX's failure using on-chain, transaction and hourly data, (ii) the systemic effect of FTX collapse on the cryptocurrency market through instruments of network science, and (iii) the increase in Binance dominance on the crypto ecosystem by analysing the performance of its native token, its stablecoin, and additional products. \section{Data and quantitative nature of the events}\label{data} \subsection{Hourly data analysis} \label{OHLCV_analysis} In this paper, we use hourly USD closing prices for 199 cryptocurrencies from 01 January 2022 to 01 December 2022. The dataset is directly obtained from Binance, the largest digital currency exchange in terms of traded volume (\citealp{Exchanges_Cap}), through the use of the CCXT Python package (\citealp{Ccxt2023}).\footnote{As reported by \cite{Alexander2020} and \cite{VidalTomas2022a}, using traded data from liquid exchanges guarantees the reliability of results.} In order to analyse dependency structures in the cryptocurrency market during the FTX collapse, we compute hourly log-returns. Figure \ref{fig_des0} reports rescaled hourly closing prices for FTT, BNB and Bitcoin (BTC). It is worth noting that, since 01 January 2022, FTT demonstrated superior performance compared to BTC and BNB. As discussed in Section \ref{sec:Introduction}, such behaviour could be the result of a potential price manipulation of the crypto asset. However, after (a) Terra-Luna's collapse, we conjecture that FTX lost control over the FTT price due to its liquidity issues. Dotted lines in Figure \ref{fig_des0} highlight the main events that led to the FTX collapse (see also \citealp{Khoo2022}, \citealp{Coghlan2022}, \citealp{Ramirez2022}, \citealp{Nathan2022}, \citealp{Conlon2022}). On (b) 02 November 2022, 14:44 (GMT), CoinDesk reported that Alameda Research owned $\$6$ billion FTTs in its balance sheet (\citealp{Allison2022}). 
On (c) 06 November 2022, 15:47 (GMT), Binance CEO Changpeng ``CZ" Zhao announced that any remaining FTT on the company's books would be liquidated. \begin{figure}[h!] \centering \caption{Rescaled hourly closing prices for FTT, BNB and BTC from 01 January 2022 to 01 December 2022. Events listed in Table \ref{events} are plotted with dotted lines.} \begin{center} \includegraphics[scale=0.4]{./images/rescaled_closing_price.pdf} \end{center} \label{fig_des0} \end{figure} Immediately after, at 16:03 (GMT), the Alameda Research CEO, Caroline Ellison, reacted by tweeting that FTX would buy FTT tokens from Binance at $\$22$ each. Due to the huge concerns throughout the crypto space regarding the financial viability of FTX and Alameda Research, Bankman-Fried (i.e. FTX's CEO) tweeted on (d) 07 November 2022, 12:38 (GMT), ``\textit{A competitor is trying to go after us with false rumours. FTX is fine. Assets are fine.}". As we show in Section \ref{transaction}, on (e) 08 November 2022, from 02:47 (GMT) to 02:48 (GMT), there was a sudden increase in selling pressure equal to $1.3$ million BUSD, which resulted in the first decrease in the FTT price. In line with \cite{Khoo2022}, we conjecture that creditors recalled Alameda Research loans with FTT as collateral. Since Alameda Research could not repay the loans, Sam Bankman-Fried was forced to ask Binance to step in and acquire the company. Consequently, on (f) 08 November 2022, 16:03 (GMT), Binance announced a non-binding letter of intent to purchase FTX. On (g) 09 November 2022, 15:32 (GMT), CoinDesk anticipated Binance's intention to decline any deal. Binance confirmed the leak at 20:50 (GMT) of the same day as a result of the corporate due diligence. Finally, on (h) 11 November 2022, 15:23 (GMT), FTX and $130$ associated companies announced that they commenced voluntary proceedings under chapter $11$ of the United States bankruptcy code. Table \ref{events} reports a detailed timeline of the events. \begin{table}[H] \caption{Timeline of the events that led to the FTX bankruptcy in November 2022.} \label{events} \resizebox{\columnwidth}{!}{% \begin{tabular}{ccc} \hline \textbf{Date} & \textbf{Reference} & \textbf{Event} \\ \hline 07-05-2022 22:00 & (a) & Terra-Luna collapse. First day that TerraUSD (UST) lost its peg with the USD. \\ \hline 02-11-2022 14:44 & (b) & \begin{tabular}[c]{@{}c@{}}CoinDesk reported that the value of Alameda Research relied on FTX's in-house tokens, FTT. \\ In particular, Alameda Research had \$14.6 billion of assets as of June 30. Approximately \$6 billion were FTT.\end{tabular} \\ \hline 06-11-2022 15:47 & (c) & \begin{tabular}[c]{@{}c@{}}Changpeng ``CZ" Zhao (i.e. Binance CEO) announced that Binance would liquidate any remaining FTT on its books. \\ In response to the FTT liquidation announcement, at 16:03, Caroline Ellison (i.e. Alameda Research CEO) \\ reacted by tweeting that FTX would be happy to buy the FTT tokens from Binance at a value of \$22 each.\end{tabular} \\ \hline 07-11-2022 12:38 & (d) & \begin{tabular}[c]{@{}c@{}}Due to the huge concerns throughout the crypto space regarding the financial viability of FTX and Alameda Research, \\ Bankman-Fried tweeted ``\textit{A competitor is trying to go after us with false rumours. FTX is fine.
Assets are fine}".\end{tabular} \\ \hline 08-11-2022 02:48 & (e) & \begin{tabular}[c]{@{}c@{}}We identify a massive selling pressure on FTT, which could be related to the liquidation of Alameda Research loans.\end{tabular} \\ \hline 08-11-2022 16:03 & (f) & \begin{tabular}[c]{@{}c@{}}Alameda Research could not repay the loans, thus Sam Bankman-Fried was forced to ask Binance to acquire the FTX group. \\ Binance announced the existence of a non-binding letter of intent to purchase FTX.\end{tabular} \\ \hline 09-11-2022 15:32 & (g) & \begin{tabular}[c]{@{}c@{}}CoinDesk anticipated Binance's intention to decline any kind of deal, \\ which was confirmed at 20:50 by Binance as a result of the corporate due diligence.\end{tabular} \\ \hline 11-11-2022 15:23 & (h) & \begin{tabular}[c]{@{}c@{}}FTX and its 130 related companies announced that they commenced voluntary proceedings \\ under chapter 11 of the United States bankruptcy code.\end{tabular} \\ \hline \end{tabular}% } \end{table} \subsection{On-chain data analysis}\label{sec_on-chain} In this Section, we highlight the relevance of Terra-Luna's collapse to the FTX bankruptcy. After the failure of the algorithmic stablecoin, both FTX and Alameda Research suffered from a credit crunch caused by the decrease in the FTT price and the complexity of obtaining additional credit from lenders. Indeed, the bankruptcies in the summer of 2022 (i.e. 3AC, Voyager Digital and Celsius) increased market uncertainty, further reducing lending volume and deepening the generalised down-market. Consequently, since May 2022, FTX and Alameda Research could no longer control the FTT price, and the leverage mechanism described in Figure \ref{logicalflow_1} was interrupted. The Wall Street Journal reported that the CEO of Alameda Research informed her employees that the firm used FTX clients' funds to pay back creditors' loans that were being recalled due to the credit crunch triggered by the Terra-Luna collapse (\citealp{De2022}). The apology letter sent in November 2022 by Sam Bankman-Fried to his employees further confirms the crucial role of Terra-Luna's failure in the FTX bankruptcy: ``\textit{I believe that the events that led to the breakdown this month} [November 2022] \textit{included a crash in markets this spring} [Terra-Luna] \textit{that led to a roughly $50\%$ reduction in the value of collateral}" (\citealp{Rooney2022}). In order to quantitatively validate our events' reconstruction, we report in Figure \ref{fig:on_chain_data} the rescaled total amount of reserves (in BTC and ETH) owned by FTX and Binance from 01 January 2022 to 01 December 2022. It is possible to notice that FTX started to sell its reserves slightly after the Terra-Luna collapse in May 2022 to overcome the credit crunch. In comparison, Binance does not show any decrease in its reserve balance over the same period. Based on these findings, in Figure \ref{logicalflow}, we extend Figure \ref{logicalflow_1} by incorporating a new branch that depicts the circumstances disrupting the vicious cycle involving FTX and the resulting consequences observed since May 2022. \begin{figure}[H] \centering \begin{center} \includegraphics[scale=0.4]{./images/balance_on_exchange.pdf}\\ \end{center} \caption{Rescaled total amount of Bitcoin and Ether held on FTX and Binance addresses from 01 January 2022 to 01 December 2022.
The data are retrieved from \cite{Glassnode2022}.} \label{fig:on_chain_data} \end{figure} \begin{figure}[H] \centering \caption{Schematic depiction of the mechanism that led to the FTX collapse.} \begin{center} \includegraphics[scale=0.45]{./images/logic_flow.pdf} \end{center} \label{logicalflow} \end{figure} \subsection{Transaction data analysis}\label{transaction} In this Section, we analyse FTT's public trades data on the Binance digital currency exchange. Also in this case, the dataset is directly obtained from the Binance digital currency exchange using the CCXT Python package (\citealp{Ccxt2023}). Unlike hourly closing prices, transaction data are expressed in Binance USD (BUSD), as this is the primary exchange pair on Binance. Figure \ref{imbalance} reports minutely imbalances, where positive values (red area in the plot) indicate selling pressure, while negative ones (green area in the plot) indicate buying pressure\footnote{The calculation of the imbalance is based on the methodology described in \cite{Briola2023}. The procedure involves separating data into buy and sell trades, computing the transaction costs by multiplying the volume of each transaction by its execution price, aggregating the transaction costs by minute, and subtracting the total of buy transaction costs from the total of sell transaction costs to obtain minutely imbalances.}. Our analysis reveals that, prior to November 2022, the highest selling pressure in FTT occurred on 12 May 2022 with a value of BUSD $695,690$ as a result of the Terra-Luna failure. This finding reinforces, once more, the crucial role of the Terra-Luna collapse in the FTX bankruptcy, as no other significant event appears to have influenced the market, including the Russia-Ukraine conflict (\citealp{BedowskaSojka2022}) and the implementation of contractionary monetary policies (\citealp{Castrovilli2022}). In Figure \ref{imbalance} we observe that, in November 2022, market dynamics changed due to the Twitter debate between FTX, Alameda Research and Binance. CoinDesk's report alone (b) had a limited impact on the market. In contrast, the first significant shock is observed on (c) 06 November 2022 at 15:47 (GMT), when the Binance CEO announced the intention to liquidate FTT reserves, leading to a rise in selling pressure at 15:49 (GMT), amounting to BUSD $369,420$. In light of the high buying pressure at 16:23 (GMT) (i.e. BUSD $523,730$), we postulate that Binance elicited a counter-reaction from FTX. Despite the efforts by FTX's CEO to restore investor trust (d), FTT experienced a significant selling pressure equal to BUSD $1.3$ million on (e) 08 November 2022, at 02:48 (GMT), when it was traded at BUSD $21.83$. The most significant selling pressure (i.e. BUSD $2.56$ million) is detected at 02:56 (GMT) of the same day when FTT had already dropped to BUSD $19.6$\footnote{Caroline Ellison announced that Alameda Research would purchase all of Binance's FTT holdings for USD $22$ per token, and FTT traded between BUSD $21.83$ (i.e. USD $21.85$) and BUSD $22$ (i.e. USD $22.01$) for a couple of hours, suggesting that USD $22$ was not a psychological barrier for investors.}. In agreement with \cite{Khoo2022}, a plausible explanation could be that Alameda Research had loans that would be liquidated if the price of FTT fell below BUSD $21.8$. Hence, we cannot exclude that this abnormal selling pressure could have been generated by FTX itself trying to repay loans collateralized by FTT.
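As a technical aside, the minute-by-minute imbalance plotted in Figure \ref{imbalance} can be summarised by the short pandas sketch below; it is an illustrative reimplementation of the footnoted procedure, not the analysis code used for the reported figures, and the column names (\texttt{timestamp}, \texttt{side}, \texttt{price}, \texttt{amount}) are assumed to follow the CCXT unified trade format.
\begin{verbatim}
import pandas as pd

def minutely_imbalance(trades: pd.DataFrame) -> pd.Series:
    # trades: one row per public trade with columns
    #   'timestamp' (ms), 'side' ('buy'/'sell'), 'price', 'amount'
    trades = trades.copy()
    trades["cost"] = trades["price"] * trades["amount"]   # transaction cost in BUSD
    trades["minute"] = pd.to_datetime(trades["timestamp"],
                                      unit="ms").dt.floor("min")
    per_minute = trades.pivot_table(index="minute", columns="side",
                                    values="cost", aggfunc="sum").fillna(0.0)
    # positive values -> selling pressure, negative -> buying pressure
    return per_minute.get("sell", 0.0) - per_minute.get("buy", 0.0)
\end{verbatim}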
In this scenario, having already used the majority of clients' funds and most of the reserves to front the credit crunch triggered by the Terra-Luna collapse (see Section \ref{sec_on-chain}), FTX did not have alternative sources of liquidity. Despite its origin, this event led to the technical collapse of FTX, as observed on (f) 08 November 2022, when Sam Bankman-Fried asked Binance to acquire the FTX group, spreading panic among investors and leading to a significant drop in the FTT price (see Figure \ref{fig_des0}). On (g) 09 November 2022, Binance declined to acquire FTX, adding additional selling pressure. On (h) 11 November 2022, the bankruptcy of FTX was announced with minimal impact, as investors had already taken into account the collapse, and the price was BUSD $2.79$. \begin{figure}[h!] \centering \begin{center} \includegraphics[scale=0.4]{./images/extended_imbalance_ftt.pdf} \\ \includegraphics[scale=0.4]{./images/imbalance_focus_ftt.pdf} \end{center} \caption{Minutely imbalance for FTT on the Binance digital currency exchange. Positive values (red color area) and negative values (green color area) denote selling and buying pressure, respectively.} \label{imbalance} \end{figure} \section{Methodology}\label{methodology} \subsection{Network analysis: Triangulated Maximally Filtered Graph (TMFG)}\label{methodology_network} To analyse the systemic effect of FTX's collapse on the market's dynamics, we describe the evolution of dependency structures among cryptocurrencies during the crash. In order to do this, as shown in \cite{Briola2023}, we use instruments provided by network science. Networks have been extensively used to study financial systems (\citealp{Mantegna1999, Aste2010, wang2022sparsification, wang2023dynamic}) modelling asset dependencies through various similarity measures. This study uses the Pearson correlation coefficient to model linear relationships among cryptocurrencies. It is worth noting that during periods of stress in the underlying system, pure correlations may exhibit heightened sensitivity. An exponential time-weighting structure that assigns a larger weight to the latest observations and lower weights to older observations can mitigate this effect. Results show that weighted correlations are smoother and more resilient to market turbulence than unweighted correlations. Additionally, weighted correlations are more effective in distinguishing genuine correlations from spurious ones. Following the definition in \cite{Pozzi2012}, we define the Pearson correlation coefficient weighted with exponential smoothing as follows: \begin{equation} \label{eq:exponentially_smoothed_corr_coef} \rho_{i,j}^w = \frac{\sum_{t=1}^{\Delta t} w_t (y_t^i - \overline{y}_i^w)(y_t^j - \overline{y}_j^w)}{\sqrt{\sum_{t=1}^{\Delta t} w_t (y_t^i - \overline{y}_i^w)^2} \sqrt{\sum_{t=1}^{\Delta t} w_t (y_t^j - \overline{y}_j^w)^2}} . \end{equation} where $w_t = w_0 e^{\frac{t-\Delta t}{\theta}}, \forall t \in \{1, 2, \dots, \Delta t\} \land \theta > 0$ represents an exponentially smoothed weight structure such that $\sum_{t=1}^{\Delta t} w_t = 1$ and $\overline{y}_k^w = \sum_{t=1}^{\Delta t} w_t y_t^k$. $\Delta t$ corresponds to rolling windows made of 24 hours with steps of 1 hour each, and $\theta$ is set to $0.1$. These time-dependent exponentially smoothed correlations are used to build a range of networks representing dependency structures among cryptocurrencies (\citealp{Briola2022}). 
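As a concrete illustration of Eq. (\ref{eq:exponentially_smoothed_corr_coef}), the following minimal numpy sketch computes the exponentially smoothed weighted correlation for a single pair of return series over one window. It is written purely as an illustration for this text, not the analysis code used for the results; the input series are synthetic placeholders, and the scaling of $\theta$ in the example call is an assumption made to keep the decay spread over several hours.
\begin{verbatim}
import numpy as np

def exp_weights(window: int, theta: float) -> np.ndarray:
    # w_t proportional to exp((t - window)/theta), t = 1..window, normalised to sum to 1
    t = np.arange(1, window + 1)
    w = np.exp((t - window) / theta)
    return w / w.sum()

def weighted_corr(y_i: np.ndarray, y_j: np.ndarray, theta: float) -> float:
    # Exponentially smoothed Pearson correlation over a single rolling window
    w = exp_weights(len(y_i), theta)
    m_i, m_j = np.sum(w * y_i), np.sum(w * y_j)       # weighted means
    cov = np.sum(w * (y_i - m_i) * (y_j - m_j))       # weighted covariance
    v_i = np.sum(w * (y_i - m_i) ** 2)
    v_j = np.sum(w * (y_j - m_j) ** 2)
    return cov / np.sqrt(v_i * v_j)

# Synthetic 24-hour window of hourly log-returns (placeholders, not market data)
rng = np.random.default_rng(0)
r_a, r_b = rng.normal(size=24), rng.normal(size=24)
# theta scaled to the window length here (assumption) for a non-degenerate illustration
print(weighted_corr(r_a, r_b, theta=0.1 * 24))
\end{verbatim}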
A smoothing factor of $0.1$ is used\footnote{The results are consistent for different values of $\theta$.}. In this study, the state-of-the-art methodology in network filtering techniques, Triangulated Maximally Filtered Graph (TMFG) (\citealp{Massara2017, briola2023topological}), is employed. This filtering network presents several advantages compared to alternative methods. It can capture meaningful interactions among multiple assets and exhibits topological constraints that facilitate regularisation in probabilistic modelling (\citealp{aste2022topological}). As a measure of network centrality, we compute the eigenvector centrality, which allows us to measure the influence of a crypto asset in the system. A cryptocurrency exhibits a higher eigenvector centrality if connected to other highly influential cryptocurrencies. \subsection{Buy and hold returns}\label{methodology_bhr} We use buy-and-hold returns (BHR) to analyse the financial performance of the cryptocurrencies in our dataset (see \citealp{Momtaz2021, briola2021deep, VidalTomas2022, vidal2023illusion}). In the current work, BHR is defined as the relative difference between prices on 01 December 2022 and 01 January 2022. This simple computation allows us to analyse the investors' trust in Binance compared to the rest of the cryptocurrencies. \section{Results}\label{results} \subsection{FTX's collapse: Correlations and network analysis}\label{results_network} Figure \ref{netfig1} reports exponentially smoothed average correlation coefficients for FTT, BNB, BTC, and the Binance digital currency exchange\footnote{The Binance digital currency exchange represents the average correlation of all the $199$ cryptocurrencies available in the study.}. In line with findings in Section \ref{transaction}, results indicate that the CoinDesk report did not significantly impact market's dynamics also for what concerns the market structure. The first notable event is observed on (c) when the Binance CEO announced the intention to liquidate the FTT reserves held by his company. Indeed, this announcement gave rise to the complete disconnection of FTT from the market (Binance), with an average correlation coefficient close to $0$. This finding is coherent with results provided by \cite{Conlon2022}, who observed the first significant negative FTT response on 06 November 2022. Afterwards, we identify a continuous increase in market correlations, since the whole market reacted to the flow of FTT-related news (i.e. (d), (e) and (f)). The maximum correlation is observed on (f) 08 November 2022, at 19:00 (GMT), shortly after Binance announced a non-binding letter of intent to acquire FTX. \begin{figure}[h!] \centering \begin{center} \includegraphics[scale=0.45]{./images/corr_full.pdf} \end{center} \caption{Exponentially smoothed weighted correlations for FTT, BNB, BTC and Binance digital currency exchange, using 24 hour rolling windows with steps of 1 hour each. Binance (red line) denotes the average correlation of all the 199 cryptocurrencies, while FTT (blue line), BNB (green line) and BTC (orange line) represent their average correlation with the rest of the system.} \label{netfig1} \end{figure} The maximum market's correlation coincides with the highest hourly selling pressure in FTT, with BUSD $6.29$ million in (net) sales (see Figure \ref{fig:hourly_imbalance}), which shows the systemic effect of FTX collapse on the cryptocurrency market. This trend persisted until the official FTX bankruptcy (h) when the market correlation decreased remarkably. 
FTT was then ``excluded" from the crypto system, given its persistent low average correlation. \begin{figure}[h!] \centering \begin{center} \includegraphics[scale=0.4]{./images/imbalance_focus_ftt_hourly.pdf} \end{center} \caption{Hourly imbalance for FTT on the Binance digital currency exchange. Positive values (red color area) and negative values (green color area) denote selling and buying pressure, respectively.} \label{fig:hourly_imbalance} \end{figure} Following the initial exploration of market dynamics through an analysis of cryptocurrency correlations, we conduct a further examination of nodes' centralities within the TMFG to enhance our understanding of the system's collective dynamics. Figure \ref{netfig2} illustrates the temporal evolution of non-overlapping eigenvector centralities for FTT, BNB, and BTC, computed using 24-hour rolling windows. This figure reports two interesting findings. First, it is worth noting the impact of CoinDesk's report (b) on the FTT eigenvector centrality. Despite a lack of significant effects on prices (see Figure \ref{fig_des0}) and market correlations (see Figure \ref{netfig1}), a decline in eigenvector centrality can be observed (see Figure \ref{netfig2}). This suggests that, following the report's publication on 02 November 2022, FTX and Alameda Research may have ceased their speculative operations with FTT to avoid media attention. However, the series of tweets from the CEO of Binance on 06 November 2022 drew public attention to FTX and FTT, leading to the collapse of FTX. Second, Figure \ref{netfig2} also reveals the potential misuse of FTT. As a utility token, it should have been utilized by the digital currency exchange, FTX, to offer various incentives to users, such as reduced trading fees or the ability to pay for goods and services (see \citealp{Binance2023} for an overview of the usage of BNB, a utility token similar to FTT). This token was, therefore, not intended to be mainly used for speculative purposes. In other words, FTT and native tokens should be characterised by a low market correlation and degree centrality, by nature. This phenomenon was first analysed by \cite{Briola2022}, who found that centralized exchange tokens (e.g., BNB, HT, and HXRO) are characterized by lower market correlation (and lower degree centrality) compared to digital currencies (e.g., BTC and Litecoin) or smart contract tokens (e.g., ETH and Tron). In that paper, \cite{Briola2022} also found a suspicious result, apparently without a clear explanation: FTT was characterised by a high degree centrality similar to that of more speculative cryptocurrencies such as BTC and Litecoin. Given that the authors utilized data from the FTX digital currency exchange, they hypothesized that the results could have been explained by ``\textit{an over-estimation of the role played by the exchange-specific token, FTT}". As depicted in Figure \ref{netfig2}, FTT exhibited a high degree of centrality also in the Binance digital currency exchange, with a degree centrality even higher than that of BTC during specific periods of time. As a direct consequence of the events described in Section \ref{sec:Introduction}, we can assert, with a sufficient degree of confidence, that the high degree of centrality observed in FTT prior to its collapse was due to its misuse as a speculative currency. Specifically, users could only use at most $20\%$ of the total supply as a utility token.
In contrast, $80\%$ of the supply was used for speculative purposes by Alameda Research's and FTX's managers to take advantage of the upward market and drive up FTT prices. In other words, given the unequal FTT distribution, FTX's managers could inflate the token's price during up-market periods, as long as credit lines were available. This misuse resulted in a higher correlation and centrality of FTT. In comparison, BNB was more equally distributed during its ICO among various participants, including the foundation team ($40\%$), angel investors ($10\%$), and the general public ($50\%$) (\citealp{Cointelegraph2022}). This distribution guaranteed a fair valuation of BNB and a correct use as a utility token by Binance's users, giving rise to a lower degree of centrality, as observed by \cite{Briola2022}. \begin{figure}[h!] \centering \caption{Non-overlapping eigenvector centrality of FTT, BNB and BTC using a 24h rolling window. Color areas show the distribution of the eigenvector centrality for the rest of the cryptocurrencies considering the $1\%$-$99\%$, $5\%$-$95\%$, and $25\%$-$75\%$ percentiles.} \label{netfig2} \begin{center} \includegraphics[scale=0.45]{./images/non_overlapping_centrality.pdf} \end{center} \end{figure} \subsection{Binance: The rise of the centralised system}\label{results_herding} In this Section, we provide some insights about the increasing dominance of Binance in the crypto space. In Table \ref{tab:best_worst_crypto}, we report the ten cryptocurrencies with the worst and best performance in terms of BHR. Despite the Ukraine-Russia conflict (\citealp{BedowskaSojka2022}), the Terra-Luna collapse (\citealp{Briola2023}) and contractionary monetary policies (\citealp{Castrovilli2022}), BNB shows a comparatively smaller loss, with a BHR of $-43.5\%$. Meanwhile, BTC and ETH experienced a decrease in value of $-64\%$ and $-66\%$, respectively. The median BHR for the sample was $-79\%$, with $25^{th}$ and $75^{th}$ percentiles of $-69\%$ and $-87\%$, respectively. This result demonstrates the investors' confidence in Binance and the consolidation of the cryptocurrency market around the company. Moreover, following the FTX failure, Binance reported a $30\%$ increase in trading activity (\citealp{Pan2022}), further emphasising its growing dominance in the crypto space. This point is also supported by the number of daily active users of the underlying blockchain infrastructures: on 01 December 2022, the BNB chain was the leader with $1,497,102$ daily active users, followed by Ethereum ($313,110$), Polygon ($361,252$), PancakeSwap ($146,097$) and Solana ($107,943$) (\citealp{tokenterminal2023}). \begin{table}[H] \small \caption{Buy and hold (BHR) returns from 01 January 2022 to 01 December 2022.
The first row reports the cryptocurrencies showing the most negative returns, while the second row reports the cryptocurrencies showing the most positive returns.} \label{tab:best_worst_crypto} \resizebox{\columnwidth}{!}{% \begin{tabular}{ccccccccccc} \textbf{Crypto (-)} & SPELL & RAY & FTT & ILV & MOVR & JASMY & MIR & PERP & GALA & HNT \\ \hline \textbf{BHR} & -0.974 & -0.971 & -0.967 & -0.959 & -0.956 & -0.947 & -0.947 & -0.945 & -0.945 & -0.941 \\ \hline & & & & & & & & & & \\ \textbf{Crypto (+)} & CHZ & BNB & ETC & UNFI & DOGE & XMR & QNT & TRX & LAZIO & TWT \\ \hline \textbf{BHR} & -0.442 & -0.435 & -0.428 & -0.426 & -0.408 & -0.382 & -0.323 & -0.287 & 0.068 & 2.123 \\ \hline \end{tabular}% } \end{table} The dominance of Binance in the crypto space can be further assessed by considering the top two crypto asset performers in 2022, LAZIO and TWT. On the one hand, LAZIO appears to have a financial advantage in its niche due to its presence on Binance (a rare exception for tokens traded on \cite{Socios2023}). This advantage is further bolstered by Binance's sponsorship of the S.S. Lazio football club, which prominently displays the Binance brand on the team's jerseys (\citealp{Proch2021}). On the other hand, TWT is the native token of the Trust Wallet, a self-custodial cryptocurrency wallet founded by Viktor Radchenko in November 2017. In July 2018, Binance acquired Trust Wallet to improve its offerings. On 13 November 2022, the Binance CEO tweeted about the advantages of self-custody and the role of Trust Wallet in this regard, leading to a $47\%$ increase in the value of TWT (\citealp{Somraaj2022}). Along the same lines, we can also highlight Binance's relevance to the stablecoin market, with BUSD. As shown in Figure \ref{stablecoins}, on 01 December 2022, BUSD represented approximately $50\%$ of all the stablecoin supply on digital currency exchanges. This fact is supported by the positive evolution of the total market cap in Figure \ref{rescaledmarketcap}, given that BUSD increased its value by $54\%$ since 01 January 2022, while its main centralised competitors USDT and USDC showed poor performance, $-17\%$ and $1\%$, respectively. Interestingly, the decentralised option, DAI, reported the worst performance after the Terra-Luna collapse, with a decrease in market cap equal to $-42\%$. This result could highlight a potential shift in market preference, which would be in line with what is stated in \cite{Duan2023}, where the authors observed that ``\textit{BUSD is found as the most stable stablecoin with the fastest correction speed}", while DAI is the least stable\footnote{BUSD is a Binance-branded stablecoin, issued by the Paxos Trust company. Paxos is a regulated institution supervised by the New York Department of Financial Services (NYDFS). On 13 February 2023, the NYDFS ordered Paxos Trust to stop the issuance of BUSD, since the United States Securities and Exchange Commission alleged that BUSD is an unregistered security (\citealp{Partz2023}). Binance informed that they ``\textit{will make product adjustments accordingly}" (\citealp{Zhao2023}).}. \begin{figure}[h] \centering \begin{center} \includegraphics[scale=0.45]{./images/supply_on_exchanges.pdf} \end{center} \caption{Supply held on exchanges' reserves of the top four stablecoins (i.e. USDT, USDC, BUSD and DAI) from 01 January 2022 to 01 December 2022. The corresponding aggregate value is also reported.
The data are retrieved from \cite{Glassnode2022}.} \label{stablecoins} \end{figure} \begin{figure}[h] \centering \begin{center} \includegraphics[scale=0.45]{./images/rescaled_market_cap.pdf} \end{center} \caption{Rescaled market capitalisation of the top four stablecoins (i.e. USDT, USDC, BUSD and DAI) from 01 January 2022 to 01 December 2022. Data are retrieved from \cite{Defillama2023}.} \label{rescaledmarketcap} \end{figure} \section{Conclusion}\label{im_fr} The absence of regulation and the lack of transparency allowed FTX to build a leverage mechanism characterised by (i) the issuance of non-collateralised native tokens (FTT), (ii) control over the majority of FTTs, and (iii) unlimited loan requests using FTT as collateral despite its lack of inherent value. The decline of FTX was triggered by the Terra-Luna crash, which resulted in a decrease in prices and a sudden reduction of credit availability in the market. FTX attempted to hide its financial situation by selling its digital reserves and misappropriating customer funds to pay loans. The reliance of FTX and Alameda Research on FTT was finally reported by CoinDesk, sparking a Twitter debate between FTX, Alameda Research and Binance. We identify Binance's announcement on 06 November 2022 that it would sell the FTT reserves held by the firm as the catalyst for the accelerated collapse of FTX. Following the announcement, FTX became the subject of media scrutiny. The public exchange of accusations between Binance and FTX exacerbated the market's panic, leading to a simultaneous decrease in the value of FTT and the entire crypto space. The systemic impact of FTX on the market was further evident after Binance announced a non-binding letter of intent to acquire the company. Our findings also reveal that the potential misuse of FTT as a speculative asset could have resulted in a high degree of centrality in the market. The recent collapse of FTX and the consolidation of Binance's leading role in the crypto space highlight the need to protect users and prevent the creation of opaque monopolies. Non-transparent, centralised, unregulated entities constitute real threats to investors. In 2022, Binance's volume market share increased from $48.7\%$, in the first quarter, to $66.7\%$, in the last quarter, among the $11$ leading exchanges (\citealp{Cryptocompare_review}). When Bitcoin was created in 2008, Satoshi Nakamoto (\citealp{Nakamoto2008}) stated that ``\textit{what is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party}". After $15$ years, the cryptocurrency industry appears to be moving towards centralisation, with third-party entities serving as the primary means for exchanging cryptocurrencies. Despite not being created by central banks, cryptocurrencies are now predominantly managed by unregulated private companies acting as traditional financial institutions (e.g. paying interest on deposits, providing lending services and issuing debit cards). These centralised and unregulated entities cannot be considered part of the new digital economy, since they are a transposition of the existing regulated financial institutions inside the crypto space. Decentralised Finance (DeFi) should be the obvious candidate to support the future digital economy, given that it naturally provides users various advantages, such as on-chain transparency, self-custody, governance, and fair access for participants.
Consequently, DeFi could avoid the governance issues represented by FTX, whose managers were able to raise USD $2$ billion from $80$ investors (\citealp{Griffith2022}), misuse users' funds, and create an intricate network of $130$ affiliated companies without any supervision. Unfortunately, as underlined by \cite{Fu2022} and \cite{Aramonte2021}, DeFi is still not an ideal solution considering its security risks and the excessive concentration of decision power in the hands of large coin-holders. \section*{Acknowledgements} The author, D.V-T., acknowledges the financial support from the Margarita Salas contract MGS/2021/13 (UP2021-021) financed by the European Union-NextGenerationEU. The author, T.A, acknowledges the financial support from ESRC (ES/K002309/1), EPSRC (EP/P031730/1) and EC (H2020-ICT-2018-2 825215).
\subsection{Introduction} Magnetic force microscopy (MFM) is a widespread method in fundamental surface studies and nanoscale technological applications, offering a lateral resolution down to tens of nanometers and \SI{}{\pico\newton} force sensitivity\cite{Kazakova2019FrontiersMicroscopy, Hug1998QuantitativeSamples}. The working principle of MFM relies on the force interaction between the tip's magnetic stray field and a sample's spatially varying magnetic textures. By nanoscale utilization of this magnetic force interaction, MFM covers a wide operational range from characterization to manipulation of magnetic objects \cite{Casiraghi2019IndividualGradients, Albisetti2016NanopatterningLithography, Gartside2018RealizationWriting}. Despite its extensive use, conventional MFM reaches its limits mainly in the lateral resolution achievable when imaging materials with weak or time-varying magnetization. For instance, low-coercivity, weakly ferromagnetic (FM) or superparamagnetic (SP) structures generally require an external magnetic field to saturate \cite{Schreiber2008MagneticNanoparticles}, and without this the magnetic force on the tip is weak or may be undetectable within the bandwidth of the MFM. Hence, nonmagnetic interactions such as those of electrostatic origin can mask the magnetic signal\cite{Torre2011MagneticNanoparticles, Angeloni2016RemovalNanoparticles, Krivcov2018UnderstandingResolution}. To obtain the pure magnetic signal of nanoscale weak FM or SP textures, such as isolated islands, an MFM variant called switching magnetization force microscopy (SM-FM) \cite{Cambel2011SwitchingMFM, Cambel2013HighMicroscopy}, or controlled magnetization-MFM (CM-MFM) \cite{Angeloni2016RemovalNanoparticles}, stands out by extracting such signals from the detected force \cite{Cambel2013HighMicroscopy, Wren2017SwitchableStructures, Krivcov2018UnderstandingResolution}. Beyond traditional MFM, SM-FM measures a relative force change due to controlled altering of the magnetic state of the tip or the sample (or both). Only the magnetic field interaction is sensitive to the relative magnetic polarities between the tip and sample and can thus be detected. The need for an SM-FM imaging technique capable of imaging weak FM or SP islands with a resolution beyond \SI{10}{\nano\meter} can be found in the study of epitaxial complex oxide perovskites such as LaMnO$_3$ (LMO$_3$). Wang \textit{et al.} \cite{Wang2015ImagingHeterostructures} have shown that epitaxial LMO$_3$ reveals an abrupt transition from an antiferromagnetic (AF) state to a ferromagnetic (FM) one depending on the thickness of the LMO$_3$ layer. The magnetic transition occurred at a film thickness of \SI{5}{} atomic unit cells (u.c.). Furthermore, Anahory \textit{et al.}\cite{Anahory2016EmergentInterfaces} observed inhomogeneously distributed SP islands alongside the FM domains, with the former only detected following an applied in-plane magnetic field of variable strength. Both groups used a scanning SQUID microscope (SSM), albeit with different lateral resolution, to image the LMO$_3$ sample's stray field distribution. However, the SSM imaging performed by Anahory \textit{et al.} \cite{Anahory2016EmergentInterfaces} could not go beyond a resolution of \SI{100}{\nano\meter}, which meant the SP island size could only be indirectly inferred to lie between \SI{10}{\nano\meter} and \SI{20}{\nano\meter}. To solve the problem of the limited imaging resolution of traditional MFM, we carefully designed a new type of SM-FM sensor.
To this end, we design a magnetic tip with a stray field of several hundred mT, strong enough to saturate the magnetic textures. The tip is realised by forming an oriented single domain state near the tip apex \cite{Corte-Leon2019MagneticWalls}. Traditional needle-like MFM tips generally only generate up to a few tens of mT of stray field \cite{Sakar2021QuantumMicroscopy, Hug1998QuantitativeSamples}. With this approach the LMO$_3$ weak FM domains are simultaneously saturated and profiled for imaging by the same tip. The tip's stray field decays rapidly away from the tip and hence, by changing the height of the tip with respect to the sample surface, the weak magnetic textures can be actively saturated. We demonstrate that our sensor is capable of imaging magnetic textures of LMO$_3$ with a resolution beyond \SI{10}{\nano\meter}. For this, we present a new approach combining planar chip-like probes\cite{Siahaan2015CleavedMicroscopy, Ciftci2019PolymerProbes, LeeuwenhoekFabricationMicroscopy} with a highly sensitive tuning fork force sensor, which we call the switching-magnetization planar probe (SM-PP), illustrated in Figure \ref{fig:TipConcept}a. This method aims to provide an on-chip reorientable tip magnetization with no required external magnetic field, to act as a switchable magnetic force sensor. \begin{figure*} \centering \includegraphics[scale=0.4]{Figures/Main_Principle.jpg} \caption{\textbf{Planar probe with switchable tip magnetization (SM-PP).} (\textbf{a}) Illustration of the SM-PP sensor, with electrical contacts for force sensing and sending current pulses $I_{p}$ to the tip apex. The tip is formed by a tip-on-chip called the planar probe (PP). (\textbf{b}) SEM image of the PP with a sharp tip apex formed by cleaving a Si wafer. Sending a current pulse generates an \O rsted field ($\Vec{H_{p}}$) within the metallic film to orient the tip magnetization into a singular domain state. The current pathway is created by the formation of a FIB milled bridge. (\textbf{c}) The metallic film with two main layers: the current-carrying Pt layer and the ferromagnetic Co layer. The polarity of $I_{p}$ determines the direction of the \O rsted field ($\Vec{H_{p}}$), which alters (reverses) the direction of magnetization of the Co film. (\textbf{d}) A multi-domain state can be poled into an oriented single domain by a controlled current pulse. (\textbf{e}) Schematic side view of the SM-PP showing the mass retuned tuning fork prong and a lateral view of the surrounding tip stray field $\vec{B}$. The planar probe is placed at a \SI{45}{\degree} angle to the prong. (\textbf{f}) Kerr microscopy image showing a poled magnetic tip domain after sending a single $I_{p}$. The dark contrast at the bottom of the tip demonstrates the singular domain with magnetization $\Vec{M_{\text{tip}}}$. The white scale bar equals \SI{5}{\micro\meter}.} \label{fig:TipConcept} \end{figure*} \subsection{Results and Discussion} The working principle of the SM-PP relies on switching from a multi-domain state of the tip to a poled single domain via an internally generated \O rsted field ($H_{p}$) within a planar chip-like probe, as illustrated in Figures \ref{fig:TipConcept}a, b and d. Initially, the magnetic layer on the tip is in a multi-domain state with a closed flux loop, Figure \ref{fig:TipConcept}d. The direction of this flux may be irregular, and hence inappropriate for perturbing weak FM islands.
The planar probe design used in this study has a bi-metallic structure of thin-film components: the current-carrying layer and the ferromagnetic layer, as depicted in Figures \ref{fig:TipConcept}b and c. By sending an electrical pulse ($I_{p}$) through the current-carrying layer along a designated electrical pathway (called the bridge) near the tip apex, we generate an \O rsted field of controlled magnitude and well-defined direction which penetrates the ferromagnetic layer. This action leads to a singular domain state of the tip apex, with a preferred orientation. The planar probe is formed by cleaving a silicon wafer into a small \SI{1}{\square\milli\meter} square piece with a \SI{90}{\degree} tip apex \cite{Ciftci2022EnhancingProbes, Siahaan2015CleavedMicroscopy}. The tip apex, i.e. the cleaved corner, naturally increases the flux density, enhancing the tip stray field compared to that of a needle-like MFM tip. The magnetic field strength and distribution are discussed later on. The single domain state of the tip can be used to probe weak FM domains. To this end, we used a \SI{30}{\nano\meter} Pt film for the current-carrying layer and a \SI{15}{\nano\meter} Co film for the ferromagnetic layer, placed on top of the planar probe. Detailed fabrication procedures of the film and planar probe are given in Supplementary S1. In contrast to traditional passive needle-like MFM tips, we can orient the SM-PP multi-domains into a singular domain with only a single current pulse, as often as needed to combat transient tip demagnetisation, a known issue in MFM. The resulting tip domain after sending a current pulse is illustrated in Figure \ref{fig:TipConcept}d. As a result, we can obtain consistently oriented tip domains, resulting in a predictable stray field in the tip vicinity, as indicated by $\vec{B}$ in Figure \ref{fig:TipConcept}e. Figure \ref{fig:TipConcept}e illustrates the side view of the SM-PP with the tip stray field distribution predominantly out-of-plane from the sample's perspective. In Supplementary S4 we discuss in depth the tip stray field distribution derived from a numerical study. Finally, Figure \ref{fig:TipConcept}f shows a Kerr microscopy image of the SM-PP after having sent a current pulse of sufficient amplitude. A poled tip domain is formed, as observed from the dark contrast near the apex, highlighted within the dashed circle. We attached this functionalized planar probe to a mass retuned \cite{Ciftci2022EnhancingProbes} quartz tuning fork (QTF) force sensor with integrated electrical access to the probe for the current pulse $I_{p}$, as schematically illustrated in Figure \ref{fig:TipConcept}a. QTFs have been successfully used for MFM before \cite{Schneiderbauer2012QPlusResolution} and are easily integrated in a UHV scanning probe microscope. The retuned tuning fork approach significantly improved the load capacity of the QTF sensor. As is known from widespread AFM and MFM applications, once the attached mass exceeds several tens of \SI{}{\micro\gram} \cite{Dagdeviren2017OptimizingAnalysis}, which is far below the mass of a chip-like probe, the oscillation's Q-factor drops from the original 40 thousand to only several hundred. This reduction in Q-factor results in a large loss in force sensing capabilities \cite{Ooe2014ResonanceMicroscopy, Ciftci2022EnhancingProbes}. With such a degraded Q-factor, we would be unable to use the planar probe for imaging the magnetic fields of LMO$_3$.
To this end, the retuned tuning fork approach compensates for the mass unbalancing from planar probe attachment and recovers sensitivity. As a result, we are able to restore the Q-factor to over \SI{2e4}{} at room temperature in ultra-high vacuum (UHV) \cite{Ciftci2019PolymerProbes, Ciftci2022EnhancingProbes}. In Supplementary S3 we discuss further the need for a high $Q$. Without such compensation, the Q-factor drops to only a few hundred \cite{Ciftci2019PolymerProbes}, leading to degraded force sensing capabilities. The same effect can arise from the additional wires carrying the current control signal for pulsing, which is the reason why dedicated electrical contacts to the tip are now integrated on the tuning fork itself \cite{Giessibl2019TheMicroscope}. We solve the mass imbalance by mass retuning \cite{Ooe2014ResonanceMicroscopy} the QTF as described in our previous work \cite{Ciftci2022EnhancingProbes} and utilising readily available electrodes on the tuning fork. For extracting the magnetic signal of weak FM islands, the SM-PP needs to change to a fully oriented $M_{\text{tip}}$ near the tip apex, starting from a multi-domain state. We turned to finite element modeling (FEM), with COMSOL\texttrademark, to simulate the generated \O rsted field within the bridge and assess the required $I_{p}$ magnitude for tip magnetisation control. Furthermore, we can investigate the thermal response of the tip due to Joule heating. \begin{figure*} \centering \includegraphics[scale=0.45]{Figures/Modelling_Experiments.jpg} \caption{\textbf{Numerical and experimental validation of the magnetic switch of the SM-PP tip.} (\textbf{a}) Numerical calculations of the $\vec{H_{\text{p}}}(\Vec{r})$ magnetic field components ($B_x$, $B_y$ and $B_z$) from a \SI{150}{\milli\ampere} pulse. The bridge is \SI{5}{\micro\meter} wide. (\textbf{b}) Numerical switching behaviour of the SM-PP, covering bridge widths $d$ varying from \SI{50}{\nano\meter} to \SI{7}{\micro\meter}, presented in the phase diagram. The colors indicate a switch between two oppositely poled single domain states (green), domain fluctuations (orange) or no switch (red). (\textbf{c}), (\textbf{f}) Simulated single domain formation of the tip for inverting $I_{p}$ polarity. (\textbf{d}), (\textbf{e}), (\textbf{g}), (\textbf{h}) Kerr microscopy results show domain orientation switching after inverting the $I_{p}$ polarity. The vertical component of the altered magnetization (indicated in blue and yellow) is visible near the tip end in \textbf{d} and \textbf{g}. The horizontal component is mostly visible beside the tip end in \textbf{e} and \textbf{h}.} \label{fig:TipSwitch} \end{figure*} Figure \ref{fig:TipSwitch} presents the numerical and experimental validation of the magnetic switch of the SM-PP tip. The simulations cover various bridge widths $d$ in the range from \SI{50}{\nano\meter} to \SI{7}{\micro\meter} and different $I_{p}$ values from \SI{10}{\milli\ampere} to \SI{200}{\milli\ampere}. The pulse duration is \SI{500}{\nano\second}. See Supplementary S4 for details on the simulations. First, Figure \ref{fig:TipSwitch}a shows the calculated spatial field components of $\vec{H_{\text{p}}}(\Vec{r})$ of a \SI{5}{\micro\meter} bridge under application of $I_{p}=$ \SI{150}{\milli\ampere}. The in-plane field components $B_x$ and $B_y$ of $\vec{H_{\text{p}}}(\Vec{r})$ follow the bridge structure. This indicates that both symmetric sides of the tip have opposing magnetic field directions, as is evident from the current flow pathway.
Near the tip apex, $B_x$ and $B_y$ are relatively small since the current density is lowest there (between the white lines of Figure \ref{fig:TipSwitch}a). A strong out-of-plane component $B_z$ (Figure \ref{fig:TipSwitch}a) is only observed at the boundary of the tip and bridge, but is of little importance with respect to the in-plane magnetisation of the Co film. At just \SI{1}{\micro\meter} away from the tip apex, above the upper white line, the in-plane magnetic field is larger than \SI{10}{\milli\tesla}, which implies the nucleation of oriented in-plane Co domains. In Supplementary S1 the magnetisation response of the Co film is given. Subsequently, $\vec{H_{\text{p}}}(\Vec{r})$ is used as an input parameter within Mumax$^3$ to calculate the magnetisation response (switch vs. no switch) of the Co film at the tip apex, as a function of the bridge width $d$ and $I_{\text{p}}$. In Figure \ref{fig:TipSwitch}b the color scale represents three different states of the tip magnetization after applying $I_{p}$. Green means the tip end domain shows a 180\textdegree\hspace{1pt} reversal, so a full switch. Yellow represents an observed modification or a limited rotation by less than 180\textdegree\hspace{1pt} in the tip domain. Red implies that the magnetization remained identical to the pre-pulse orientation. The results show an increase of a few tens of mA in the critical current level for bridge widths from \SI{50}{\nano\meter} up to \SI{1}{\micro\meter}, as given in Figure \ref{fig:TipSwitch}b. For bridge widths greater than \SI{1}{\micro\meter}, the critical current shows a larger increase. Although the nanometer scale of the bridge can be achieved with various techniques and types of lithography, in our experiments we used focused ion beam (FIB) milling. This resulted in bridges on the micrometer scale and, as the simulation results show, we require a current magnitude of the order of \SI{e2}{\milli\ampere}. Supplementary S1 discusses the FIB fabrication in further detail. When we simulate a current pulse with \SI{130}{\milli\ampere} amplitude and \SI{500}{\nano\second} duration for a bridge of \SI{5}{\micro\meter}, the tip magnetization changes fully according to the pulse polarity, as shown in Figures \ref{fig:TipSwitch}c and f. The fact that the tip switches at values of \SI{130}{\milli\ampere} and below is important, especially for micrometer-scale bridges, because it significantly limits the Joule heating, as we discuss later. Based on the simulation results, a Kerr microscopy experiment was performed to validate the magnetic switch. The Kerr microscopy experimental details are given in Supplementary S2. Gray/black tones in Figure \ref{fig:TipSwitch}d, e, g, and h represent Co domains that preserve their initial, pre-pulse orientation upon applying a current pulse. False colored areas represent the Co domains' response to $I_{p}$. Figures \ref{fig:TipSwitch}d and g indicate the orientation in the vertical direction, expressed by the blue-to-yellow color scale. Figures \ref{fig:TipSwitch}e and h show the domain orientation in the horizontal direction, given by the pink-to-green color scale. The domain is mainly confined to the bridge region, since only there is the current density sufficient to induce Co domain reversal. Along the length of the FIB bridge, the domains are inverted (pink and green), which closely follows the numerical simulations of Figures \ref{fig:TipSwitch}c and f, validating the realisation of the SM-PP.
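To give a feeling for the magnitudes involved, a simple back-of-the-envelope estimate (not a substitute for the FEM and Mumax$^3$ calculations above) treats the bridge as a wide, thin current sheet, for which the in-plane field just above the surface is approximately $\mu_0 I_p / (2d)$. The short Python sketch below, written purely as an illustration for this text, evaluates this expression for pulse amplitudes and bridge widths of the order used here.
\begin{verbatim}
import numpy as np

MU_0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def sheet_field_mT(current_A: float, width_m: float) -> float:
    # Infinite-current-sheet estimate of the in-plane Oersted field just above
    # a thin bridge of width `width_m` carrying `current_A`:
    # B ~ mu_0 * I / (2 * d). Order-of-magnitude only; the tapered bridge and
    # finite film thickness are handled by the FEM model, not this estimate.
    return MU_0 * current_A / (2.0 * width_m) * 1e3

for I_p in (0.13, 0.15):          # pulse amplitudes in A
    for d in (1e-6, 5e-6):        # bridge widths in m
        print(f"I_p = {I_p*1e3:.0f} mA, d = {d*1e6:.0f} um -> "
              f"B ~ {sheet_field_mT(I_p, d):.0f} mT")
\end{verbatim}
For a \SI{5}{\micro\meter} bridge and a \SI{150}{\milli\ampere} pulse this gives roughly \SI{20}{\milli\tesla}, consistent with the $>$\SI{10}{\milli\tesla} in-plane fields found in the FEM results near the bridge.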
\begin{figure*} \centering \includegraphics[scale=0.5]{Figures/Thermal.jpg} \caption{\textbf{Numerical study of the thermal response.} (\textbf{a}) On the right, the calculated current density across the bridge for a current pulse of \SI{150}{\milli\ampere}. The current density increases up to \SI{6e11}{\ampere\per\meter\squared} at the smallest section of the bridge. On the left, the corresponding temperature profile. (\textbf{b}) The current pulse $I_{p}$ has the form of an asymmetric double sigmoidal function, plotted as the black curve, with a FWHM of \SI{160}{\nano\second} and a peak value of \SI{150}{\milli\ampere}. The transient temperature response (red curve) shows a rapid decay of the temperature, highlighting the efficient thermal dissipation of the bridge and ensuring mechanical stability.} \label{fig:Numerical} \end{figure*} After applying $I_{p}$, the temperature increase should not be excessive, i.e. it should stay below \SI{100}{\kelvin}, because a larger increase would hamper operation in UHV and degrade the tip's metallic layers. Examples of such degradation, and its prevention by the choice of metallic layer composition, are discussed in Supplementary S1. Hence, we modelled the (transient) temperature response of the tip for $I_{p}=$ \SI{150}{\milli\ampere} as an upper limit of the thermal increase. Figure \ref{fig:Numerical}a compares the simulated spatial current density across the bridge for a \SI{160}{\nano\second} $I_{p}$ pulse with the thermal profile. As expected, the current density is highest near the shortest width of the metallic film and is in the order of \SI{e11}{\ampere\per\square\meter}. Yet, the maximum temperature increase is observed to be only \SI{50}{\kelvin}, which means operation in UHV is possible and Joule heating damage to the metallic films is minimized. We experimentally pulsed several tips tens of times and no degradation was observed. The transient heating response was also simulated, with the results given in Figure \ref{fig:Numerical}b. Here, a \SI{160}{\nano\second} asymmetric double sigmoidal pulse (see Supplementary S4 for pulse details) is simulated. The temperature decreases quickly, within a microsecond, due to the efficient thermal dissipation of the Si substrate. We studied the effects of the substrate capping material, i.e. Si coated with SiO$_2$ or MgO, on this thermal dissipation; the results are also discussed in Supplementary S4. To conclude the first part of this work: the design, fabrication and optimisation of the SM-PP provides a sensor with a high Q-factor. Combined with the current-controlled tip magnetization, it enables the study of the magnetic surface textures of LMO$_3$ \cite{Wang2015ImagingHeterostructures, Anahory2016EmergentInterfaces}. In the second part of this work we apply the SM-PP sensor to saturate and image the weak FM islands of epitaxial LMO$_3$, as well as the AF domains in which they are embedded. The magnetic texture of a 6 u.c. LMO$_3$ on STO$_3$ sample was imaged with our MFM operating both below ($T=$ \SI{100}{\kelvin}) and above ($T=$ \SI{300}{\kelvin}) LMO$_3$'s $T_c=$ \SI{115}{\kelvin} \cite{Wang2015ImagingHeterostructures}. The first aim was to identify the AF and weak-FM texture distribution across the surface. 
Secondly, the SM-PP is able to magnetize magnetic islands through the tip's oriented stray field exceeding \SI{300}{\milli\tesla} (see Supplementary S4), and hence magnetic islands with a lateral scale between \SI{10}{} and \SI{20}{\nano\meter} can be observed \cite{Anahory2016EmergentInterfaces}. The same SM-PP sensor was used for all imaging, with Frequency Modulation (FM) feedback. The scanning parameters are kept constant throughout all the measurements; see Supplementary S5 for methods and experimental details. \begin{figure*} \centering \includegraphics[scale=0.47]{Figures/MFM_V2.jpg} \caption{\textbf{MFM images obtained with the SM-PP sensor on 6 u.c. LMO$_3$/STO.} (\textbf{a}) Topographic image of LMO$_3$. (\textbf{b}) Multi-domain tip state MFM measurement at \SI{100}{\kelvin} showing no magnetic contrast. (\textbf{c}) The SM-PP tip is magnetised into a single domain. MFM imaging reveals spatially inhomogeneous magnetic contrast at \SI{100}{\kelvin}. (\textbf{d}) Typical force-distance (F-z) spectroscopy and damping (voltage) signal taken at the red areas of \textbf{c}. The F-z spectroscopy shows a sudden kink in the attractive regime, as highlighted with the blue arrow. The orange arrow indicates short range vdW forces. The damping signal (red line), taken simultaneously, shows a sudden change in dissipation as indicated with the red arrow. (\textbf{e}) F-z spectroscopy and damping signal taken at a blue spot of \textbf{c}, showing a significant reduction of the sudden dissipation and force changes at the blue and red arrows, compared to \textbf{d}. (\textbf{f}, \textbf{g}) Magnetic features observed with a poled tip. The tip's stray field induces local magnetic domain perturbations (streaks), as indicated with the black circles and arrows. The forward and backward scans are compared. In all images the black scale bar is equal to \SI{30}{\nano\meter}.} \label{fig:MFM} \end{figure*} First, we scanned at a temperature of \SI{100}{\kelvin} and imaged a plateau of the LMO$_3$/STO$_3$ stepped surface. The 90$\times$90 \SI{}{\square\nano\meter} topography image is given in Figure \ref{fig:MFM}a and demonstrates an LMO film RMS roughness $S_q$ of \SI{32}{\pico\meter}. The surface of LMO$_3$ can show roughness variations of up to 1 u.c., inhomogeneously distributed across the stepped surface, which is a known surface feature for manganites \cite{Gambardella2014SurfaceFilms}. The lateral resolution in topography is limited by the relatively large amplitude of \SI{10}{\nano\meter} used for detecting the long range magnetic force. Ideally, one would use a small amplitude for high resolution topography and a large amplitude for lift mode magnetic imaging \cite{Schneiderbauer2012QPlusResolution}. We leave this for future work, as this approach of consecutively switching the amplitude currently introduces large drift in our setup. After obtaining the local topography, we switched to MFM. The MFM signal was first acquired with a multi-domain, closed flux tip, for which a negligible (out-of-plane oriented) stray field should interact with the sample magnetic domains. Indeed, Figure \ref{fig:MFM}b shows that no MFM signal above the noise level of \SI{1.5}{\milli\hertz} could be measured. The lack of topography cross-talk in the lift mode image also demonstrates that the lateral variation of the electrostatic force is negligible. 
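To put the quoted noise floor into perspective, it can be converted into a minimum detectable force gradient with the small-amplitude FM-AFM relation $\delta f/f_0 = (1/2k)\, dF/dz$ discussed in Supplementary S3. The short Python sketch below is only an illustrative order-of-magnitude estimate: it assumes a nominal resonance frequency of \SI{32.768}{\kilo\hertz} for the AB38T tuning fork (the retuned resonance frequency is not stated here), uses the spring constant $k \sim \SI{e4}{\newton\per\meter}$ from Supplementary S3, and ignores the finite \SI{10}{\nano\meter} oscillation amplitude.
\begin{verbatim}
def min_force_gradient(k, f0, df_noise):
    """Minimum detectable force gradient for small-amplitude FM-AFM,
    from df/f0 = (1/(2k)) * dF/dz  ->  dF/dz_min = 2*k*df_noise/f0."""
    return 2.0 * k * df_noise / f0

k_sensor = 1.0e4    # N/m, spring constant (Supplementary S3)
f0 = 32768.0        # Hz, assumed nominal tuning fork resonance
df_noise = 1.5e-3   # Hz, measured frequency noise floor

print(f"Minimum detectable force gradient: "
      f"{min_force_gradient(k_sensor, f0, df_noise) * 1e3:.2f} mN/m")
# ~0.9 mN/m under these assumptions
\end{verbatim}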
Next, the tip was pulsed with a \SI{160}{\milli\ampere} current pulse for \SI{205}{\nano\second}, which aligned the tip domain in the downward direction, as indicated schematically in Figure \ref{fig:MFM}c and confirmed beforehand with Kerr microscopy. For safety, the tip was retracted by \SI{0.5}{\micro\meter} from the surface during the pulse, which can give rise to a lateral drift of around \SI{10}{\nano\meter} in our SPM at these temperatures. With the magnetically oriented tip, we continued MFM imaging at \SI{100}{\kelvin} and observed a complex landscape of magnetic textures across the scanned area, as given in Figure \ref{fig:MFM}c. The smallest magnetic objects are highlighted with red circles in Figure \ref{fig:MFM}c; these also correspond to the strongest attractive magnetic forces. These features have an average diameter of \SI{10}{\nano\meter}. We attribute these areas to stray-field-induced magnetised domains. Given their nominal size, it is very likely that they correspond to the weak FM textures \cite{Anahory2016EmergentInterfaces}, even at \SI{100}{\kelvin}. Performing force-distance (F-z) spectroscopy on the red islands of Figure \ref{fig:MFM}c, shown in Figure \ref{fig:MFM}d, revealed complex behaviour and provides more evidence for weak FM properties. The tip was retracted up to \SI{20}{\nano\meter} above the surface, and then lowered until a notable repulsive force was observed. The frequency shift $df$ was measured during spectroscopy. Evidence of the tip-stray-field-induced magnetic alignment is given in Figure \ref{fig:MFM}d. We observe four distinct regimes. Firstly, we note a long-range attractive force between \SI{20}{\nano\meter} and \SI{8}{\nano\meter}, which can be assigned to long range electrostatic forces. At around \SI{7.5}{\nano\meter}, a sudden negative change in frequency (force) is observed, as indicated with the blue arrow. We attribute this to the significant increase of the magnetic field the sample experiences as the SM-PP tip approaches the weak FM domain and hence saturates it. At \SI{3.4}{\nano\meter}, indicated with the orange arrow, the attractive van der Waals force region is noted. At very small tip-sample distances the frequency shift becomes positive, evidence of repulsive forces. We also measured the damping, a sign of energy loss via local magnetization changes of the weak FM islands \cite{Torre2011MagneticNanoparticles}. In Figure \ref{fig:MFM}d, the red curve shows a sudden rise in the damping, as indicated with the red arrow. Likely, at this distance the weak FM islands are magnetized periodically as the tip oscillates up and down. As a comparison, Figure \ref{fig:MFM}e shows the same F-z spectroscopy experiment performed on the blue areas of Figure \ref{fig:MFM}c. Less perturbation of the attractive force is noted, and no measurable dissipation change is observed, as highlighted with the colored arrows. We conjecture that these blue colored areas are the antiferromagnetic domains. Generally, the weak FM features (coloured red) are embedded in magnetic labyrinth-like domains, colored yellow/green in Figure \ref{fig:MFM}c. These domains are continuous and spread across the surface. Furthermore, they show a smaller attractive force than the weak FM areas. Due to the tip's large stray-field-induced magnetization of LMO$_3$, no repulsive areas could be observed. Areas depicted in blue are observed in two distinct forms. First, we note distributed areas, highlighted with the dashed white line. Secondly, blue circular objects are noted, as highlighted with the blue circle. 
These objects show very little attractive frequency shift. Hence, excluding electrostatic forces, which are constant across the surface, these domains form an antiferromagnetic texture, corroborating the SSM observations of Wang \textit{et al.} \cite{Wang2015ImagingHeterostructures} and Anahory \textit{et al.} \cite{Anahory2016EmergentInterfaces}. Furthermore, we note that the tip stray field can induce local magnetic perturbations in the real-space imaging. By comparing the forward and the backward scan, Figures \ref{fig:MFM}f-g, the areas highlighted by the black circles show a clear distinction between the two images. We conjecture that the field from the tip perturbed the local weak FM domains. This would also be in agreement with the observation of streak-lines, as indicated with the arrows. In conclusion, the results provide strong indications of the imaging capabilities of the magnetically controllable SM-PP tips for weak FM islands, with a resolution better than \SI{10}{\nano\meter}. Firstly, we achieved repeatable control over the magnetization of the SM-PP tip, with a consistently distributed domain state at the tip apex. Subsequently, we demonstrated imaging of the complex magnetic texture of the rare-earth metal oxide perovskite LMO$_3$ with nanometric identification of weak FM islands. For further investigation of LMO$_3$, the SM-PP can be employed for ultra-high resolution imaging of the local u.c. variation in film thickness and its possible correlation with the weak FM islands. Furthermore, the integration of the SM-PP in a LHe cryostat would increase the Q-factor by another order of magnitude, significantly improving the signal-to-noise ratio. Finally, in future applications the SM-PP can be combined with scanning tunneling microscopy functionality, owing to the tip's metallic layers and the electrode accessibility of the tuning fork. This way we can combine ultra-high lateral resolution imaging of conductive metal-oxide perovskites with measurements of the long range MFM forces, without the need to switch between different setups. This possibility opens up an approach to disentangle the relation between the atomic scale structure and the long range magnetic ordering of transition metal oxides for applications in spintronic and catalytic devices. Considering its enhanced sensitivity, the widened scope of the tip-on-chip design can convert the MFM/AFM from a surface analysis tool with passive probes into a more sophisticated instrument with active, more complex probes for characterization, e.g. nitrogen-vacancy center diamond tips as quantum sensors for detecting ultra-small magnetic fields \cite{Casola2018ProbingDiamond, Healey2023QuantumHeterostructures} or currents \cite{Ariyaratne2018NanoscaleDiamond}. \subsection{Materials and Methods} \subsubsection{Planar probe fabrication} The metallic layers were sputtered on a thin \SI{150}{\micro\meter} Si $<100>$ wafer (intrinsic, UniversityWafer). The wafer was cleaved with a diamond scriber into \SI{1}{\square\milli\meter} pieces and inspected with an optical microscope. Next, the tip apices were inspected with a ZEISS-Sigma SEM and selected for a radius below \SI{50}{\nano\meter}. A FEI Nova600i SEM-FIB was used to fabricate the bridge structure by Ga-ion etching. Sequential beam currents of \SI{0.05}{}, \SI{0.46}{} and \SI{2.8}{\nano\ampere} were used; near the bridge the smallest current prevents damage and increases the etching resolution. The acceleration voltage was \SI{30}{\kilo\volt}. 
The planar probe was attached to the QTF (AB38T) prong with a minimal volume (less than \SI{100}{\micro\liter}) of UV-curable resin, applied with a syringe needle. Silver paste was used to connect the electrical leads of the planar probe to those of the QTF. EPO-TEK 4410 was used to connect the wires from the QTF to a custom PEEK sensor holder. The detailed fabrication procedure is further outlined in Supplementary S1. \subsubsection{Kerr Microscopy} A Zeiss Axio Imager.D2m Kerr microscope was used with a $50\times$ magnification lens, assembled by Evico Magnetics with a polariser/analyser pair and a manual slit diaphragm. The setup was combined with a set of water cooled Helmholtz coils for magnetic moment alignment by in-plane magnetic fields with respect to the Co film orientation. Kerrlab software was used for data acquisition. \subsubsection{Numerical calculations} MuMax$^3$ was employed to simulate the domain structure of a \SI{16}{\nano\meter} Co film. An exchange length of \SI{5}{\nano\meter} and a grid unit cell of 4x4 \SI{}{\square\nano\meter} were used. To study the thermal and magnetic properties of the SM-PP, COMSOL Multiphysics was used with the AC/DC module and the Heat Transfer module. Further numerical details are outlined in Supplementary S4. \subsubsection{Imaging in UHV} A Scienta Omicron VT-SPM setup was modified to carry two additional electrical contacts for pulsing the planar probe tip. The contacts are constructed from two gold coated pogo pins placed on a custom PEEK holder on the scanning tube. A square wave generator (Agilent 33120A) was connected to a custom MOSFET circuit to reduce the pulse duration down to several hundred ns. Coax cables were used to connect the function generator to the VT-SPM. Detailed imaging methods are outlined in Supplementary S5. \begin{acknowledgement} The authors thank W. Dijkstra for assistance in both the modification of the UHV-SPM and the fabrication of the custom pulse generator. Special thanks to H. Hilgenkamp of the University of Twente, Netherlands, for providing the 6 u.c. LaMnO$_3$ thin film on SrTiO$_3$ sample. O. Kurnosikov acknowledges support from ANR-15-IDEX-04-LUE CAP-MAT and by the “FEDER-FSE Lorraine et Massif Vosges 2014–2020” programme. Financial support from the Eindhoven University of Technology is acknowledged. \end{acknowledgement} \subsection{Supplementary S1: Fabrication and characterisation of the SM-PP} \label{Supp:fabrication} \begin{figure*} \centering \includegraphics[scale=0.45]{Figures/Supp_Tip_Fabrication.jpg} \caption{\textbf{Fabrication procedure of the switching-magnetisation planar probe.} (\textbf{a}) A \SI{150}{\micro\meter} thin intrinsic (110) silicon wafer is diced into smaller pieces (\textbf{b}). (\textbf{c}) The wafer pieces are sputtered with a \SI{150}{\nano\meter} MgO layer for the electrical insulation needed to reduce the doping effect of Ga-ion implantation by FIB milling. Next, the metallic layers are sequentially deposited by plasma sputtering deposition, as schematically drawn. (\textbf{d}) After film deposition, the wafer pieces are cleaved into small rectangular planar probes with \SI{90}{\degree} angles forming the SPM tip. (\textbf{e}) Finally, the probe is functionalised with a bridge fabricated by FIB milling. (\textbf{f}) SEM images of a FIB fabricated bridge structure. (\textbf{g}) The magnetisation curve of the thin Co film on a planar probe as measured with Kerr microscopy. 
} \label{fig:supp_fabrication} \end{figure*} The fabrication procedure of the SM-PP is schematically given in Figure \ref{fig:supp_fabrication}. The use of a thin \SI{150}{\micro\meter} Si wafer (intrinsic, from UniversityWafer) facilitates easy mechanical cleaving without the need to apply large mechanical force, and simultaneously reduces the planar probe's mass. First the wafer is diced into \SI{20}{}x\SI{20}{\square\milli\meter} pieces, as shown in Figure \ref{fig:supp_fabrication}b. Single crystal silicon (100) is known to cleave along atomically smooth planes \cite{Lei2012DieReview}. By cleaving in two perpendicular directions a nanometer scale tip apex can be achieved. The cleaving results in square pieces of up to \SI{1}{}x\SI{1}{\square\milli\meter}, whose tip apices are evaluated for their radius (sharpness) with SEM; see Figure \ref{fig:supp_fabrication}e for a large scale image. For integration into a sensor, tips with a radius below \SI{50}{\nano\meter} were chosen for further fabrication steps; tips with larger radii were discarded. Because of the square nature of the cleaved planar probes, each piece offers up to four adequate tip apices. This makes the availability of many excellent tips with a small radius very likely, fabricated in a short amount of time. With FIB milling, significant Ga-ion implantation occurred into the Si wafer, which electrically shorted the milled trench and voided the bridge functionality. Hence, we resorted to growing an insulating spacer layer of SiO$_2$ or MgO between the metallic stack and the wafer. The integration of a microscale current pathway requires reducing thermally induced damage from excessive Joule heating. To enhance the thermal management near the tip apex, a high thermal conductivity MgO spacer layer is sputtered on top of the silicon wafer. MgO has a sufficient thermal conductivity of about \SI{40}{\watt\per\meter\per\kelvin} \cite{Slifka1998ThermalMeasurements}. Furthermore, MgO simultaneously provides the electrical insulation needed to reduce electrical leakage currents. In Supplementary S4 we discuss the thermal dissipation behaviour of Si/MgO and Si/SiO$_2$ substrates after sending a current pulse through the bridge. Next, the metallic multilayer was deposited. The planar probe metallic stack consists of the following structure, see Figure \ref{fig:supp_fabrication}c. First, a \SI{4}{\nano\meter} tantalum (Ta) seed layer is grown to induce good mechanical adhesion of the subsequent metal layers to the MgO/Si substrate. The Ta seed layer also smooths the surface roughness of the MgO layer to some extent, which still results in a final RMS roughness of \SI{6}{\nano\meter}. Such roughness can actually be beneficial, as it can be expected that near the cleaved tip apex small nanometer scale bumps form the nano-tip and reduce the van der Waals forces compared to a fully triangular structure, since these forces scale with the tip volume. The measured roughnesses of the SiO$_2$ and MgO layered films are given in Figure \ref{fig:supp_roughness}. Subsequently, a \SI{30}{\nano\meter} current-carrying Pt layer is grown. This relatively thick Pt layer has the lowest film electrical resistance of the metallic stack; the majority of the current will flow through this layer. The ferromagnetic film is made from \SI{15}{\nano\meter} Co. Finally, the stack is capped with \SI{3}{\nano\meter} of Ta and \SI{3}{\nano\meter} of Pt to provide high mechanical rigidity of the tip apex and prevent native oxidation. 
\begin{figure*} \centering \includegraphics[scale=0.8]{Figures/Supp_Roughness.jpg} \caption{\textbf{Surface roughness of planar probe tips measured with AFM.} (\textbf{a}) A topographic AFM image of a SiO$_2$ layered planar probe covered with the multi-layer metallic stack. The roughness is found to be around \SI{400}{\pico\meter}. (\textbf{b}) A planar probe surface, but with a \SI{100}{\nano\meter} MgO layer instead of SiO$_2$. The surface roughness is larger compared to (\textbf{a}), around \SI{6}{\nano\meter}. The black arrows point to the FIB milled trenches.} \label{fig:supp_roughness} \end{figure*} When using (native) SiO$_2$ as a spacer layer between the intrinsic silicon substrate and the metallic stack of the planar probe, the low thermal conductivity of SiO$_2$ of only \SI{1}{\watt\per\meter\per\kelvin} limits the thermal durability of the device. This low thermal conductivity was found to be insufficient to prevent Joule heating damage to the bridge when using pulses above \SI{80}{\milli\ampere}. Limiting the current to below \SI{80}{\milli\ampere}, however, proved insufficient for full domain reversal for many devices with bridge widths in the micrometer range. Figures \ref{fig:supp_damage}a, b and c show SEM images of FIB fabricated tips with either a single straight trench or the crossed configuration. The tips have a nominal bridge width, measured from the end of the trench to the tip end, of (a) \SI{12}{\micro\meter}, (b) \SI{10}{\micro\meter} and (c) \SI{8}{\micro\meter}. Figures \ref{fig:supp_damage}e, f and g show optical microscope images of the observed tip damage, highlighted with orange circles, after sequential \SI{80}{\milli\ampere} pulsing. In these images, the metallic films have clearly been degraded by excessive Joule heating. The Joule heating is most intense where the bridge width is smallest, i.e. near the tip end, corresponding to the highest current density. Figures \ref{fig:supp_damage}d and \ref{fig:supp_damage}h show AFM topographic images of the FIB trench end after 2 pulses. Clearly, the metallic film shows first signs of degradation or "peel-back" before full layer destruction occurs. With the inclusion of MgO, which is directly sputtered on top of the silicon wafer, we observed no Joule heating induced damage. MgO has a much higher thermal conductivity of \SI{40}{\watt\per\meter\per\kelvin} \cite{Slifka1998ThermalMeasurements}. MgO devices pulsed over 25 times showed no degradation or change of the bridge resistance, even for pulses up to \SI{250}{\milli\ampere}. \begin{figure*} \centering \includegraphics[scale=0.46]{Figures/Supp_Damage.jpg} \caption{\textbf{SEM and AFM images of Joule heating-induced bridge damage with a SiO$_2$ layer.} (\textbf{a}) - (\textbf{c}) SEM images of pristine tips with FIB fabricated bridges. (\textbf{e}) - (\textbf{g}) The same tips after consecutive current pulsing at \SI{80}{\milli\ampere}, showing film degradation. (\textbf{d}) and (\textbf{h}) AFM topography shows signs of film degradation and "peel-back" from excessive heating.} \label{fig:supp_damage} \end{figure*} \subsection{Supplementary S2: Kerr microscopy} A Zeiss Axio Imager.D2m Kerr microscope was used with a $50\times$ magnification lens, assembled by Evico Magnetics with a polariser/analyser pair and a manual slit diaphragm. The setup was combined with a set of water cooled Helmholtz coils for magnetic moment alignment by in-plane magnetic fields with respect to the Co film orientation. Kerrlab software was used for data acquisition. 
The slit diaphragm makes it possible to filter the light and select which magnetisation direction (horizontal or vertical moment sensitivity) to visualise. A custom-made sample holder was fabricated with integrated electrical wiring. The holder offered three degrees of positioning freedom, needed for positioning the SM-PP below the Kerr lens. A custom pulsing circuit was used to pulse a \SI{50}{\milli\ampere} to \SI{300}{\milli\ampere} current for \SI{150}{\nano\second} to \SI{500}{\nano\second}. By ramping up the current in consecutive pulses, the domain switching threshold was found. \begin{figure*} \centering \includegraphics[scale=0.42]{Figures/Supp_Kerr_Microscopy_V2.jpg} \caption{\textbf{Kerr microscopy.} (\textbf{a}) Setup to measure the tip magnetisation of a SM-PP device. The SM-PP is placed on a movable holder to align the tip with the lens of the microscope. (\textbf{b}) and (\textbf{c}) show SEM images of FIB fabricated tips, either with a straight trench or with a crossed configuration. The black arrows highlight the pulsed tip Co domain, which is more stable for the crossed structure. The orange arrows point to the bridge sections of smallest diameter. These regions have the highest current density under pulsing. } \label{fig:supp_Kerr} \end{figure*} It was found that fabricating a single straight FIB trench does not always result in a stable magnetic domain reversal, with an example given in Supplementary Figure \ref{fig:supp_Kerr}b. For these straight trench devices, a stable domain was observed only after several consecutive current pulses. Even then, the domain does not extend across the complete bridge region, as indicated with the black arrows. With a single trench structure, only two nucleation sites are formed, near the very tip apex where the current density is highest (orange arrows). Although our numerical model suggests that it is possible to fully switch the domain with a single pulse, we attribute the fact that some devices need more pulses to the FIB bridge not always being placed perfectly along the mirror-symmetry axis of the probe. Hence, the current distribution is not equal along the bridge, which could hamper full domain reversal near the tip apex. In our numerical models we have always assumed fully symmetrical structures. To remedy this problem, a smaller, perpendicular trench was FIB milled near the original trench, close to the tip apex, see Figure \ref{fig:supp_Kerr}c. This design adds an extra nucleation site, indicated with orange arrows, in close proximity to the bridge end. It was observed that this enables the formation of a very stable domain that can be reliably reversed with a single pulse in all fabricated tips (more than 15 tips). Figure \ref{fig:supp_Kerr}c shows gray-scale Kerr microscope images focused on the crossed bridge design. The Kerr sensitivity was selected to correspond to the in-plane orientation of the domain. Thus, the double crossed trench solution overcomes the imperfect mirror-symmetric alignment of the FIB trench and induces multiple nucleation sites across the bridge. \subsection{Supplementary S3: Note on the QTF function} The large mass of the planar probe would decrease the Q-factor $Q$ to only a few hundred, with $Q$ strongly influencing the sensitivity of the force sensor \cite{Giessibl2019TheMicroscope}. Hence, mass retuning of the QTF is performed by offsetting the mass of the prong carrying the planar probe, so as to compensate for the added mass of the oversized tip. 
This effectively restores $Q$ to its pristine value \cite{Ciftci2022EnhancingProbes}. A high $Q$ is vital for satisfying two requirements for the SM-PP to function. First, the frequency noise in frequency-modulation (FM) AFM scales inversely with $Q$ \cite{Giessibl2019TheMicroscope}. Hence, a higher $Q$ results in the larger signal-to-noise ratio needed to measure the smaller forces associated with magnetic stray field gradient sensing \cite{Giessibl2019TheMicroscope}. Second, due to the aforementioned separation of the electrode layout, the actual force-to-voltage conversion signal, generated by the piezo-electric effect of the quartz tuning fork, is measured in the upper prong of the QTF. This prong essentially measures the tip-sample force indirectly and works most efficiently if the two oscillating prongs operate in the anti-phase mode with little dissipation in the connecting node \cite{Ciftci2022EnhancingProbes}. A low $Q$ would lead to insufficient coupling between the two oscillating prongs and hence impede the force sensing. This is fundamentally different from the qPlus sensor, where only one prong oscillates and hence the force sensing is limited to this prong \cite{Giessibl2019TheMicroscope}. Using the retuned tuning fork approach, we explore the possibility of using a force sensor with a high spring constant $k$ ($\sim \SI{e4}{\newton\per\meter}$), which is remarkably higher than that of the qPlus (\SI{1500}{\newton\per\meter}). With a high $k$, the frequency shift produced by a given force gradient $dF_{\text{m}}/dz$ is reduced, down to several tens of mHz, according to the relation $\delta\omega/\omega = 1/(2k) \cdot dF/dz$ \cite{Giessibl2019TheMicroscope}. Hence, the tip needs to scan only a few nanometers above the sample surface for the magnetic stray field gradients to be measurable above the noise level of approximately \SI{1.5}{\milli\hertz}. This small frequency response simultaneously prevents signals from areas beyond the dimensions of the tip from being picked up, and hence the resolution can be increased significantly, as needed for imaging the nanometer scale SP islands of LMO$_3$. Our SM-PP realises a $Q$ above \SI{20000}{} at room temperature, in ultra-high vacuum (UHV), which results in a noise floor of \SI{1.5}{\milli\hertz}. For MFM imaging of nanometer sized SP textures, the high $k$ and large $Q$ are beneficial, as forces coming from areas not directly beneath the tip are not picked up by the SM-PP, because the corresponding $df/dz$ decreases rapidly below the noise floor. \subsection{Supplementary S4: Numerical calculations of thermal and magnetic properties} \subsubsection{Influence of current pulse characteristics on Joule heating} \begin{figure*} \centering \includegraphics[scale=0.42]{Figures/Supp_Heating.jpg} \caption{\textbf{Temperature response of the SM-PP for different pulse characteristics and spacer layer materials.} (\textbf{a}) Increase in temperature for different current densities. Even for very high current densities exceeding \SI{e12}{\ampere\per\meter\squared}, the temperature barely exceeds \SI{400}{\kelvin}. (\textbf{b}) The temperature increase strongly depends on the pulse length, varied between \SI{300}{} and \SI{900}{\nano\second}. (\textbf{c}) Operation in UHV at \SI{77}{\kelvin} should be possible, since only a marginal increase in temperature is observed for a pulse sufficient to change the magnetisation of the SM-PP. (\textbf{d}, \textbf{e}) Simulation of the temperature response of MgO vs. SiO$_2$ spacer layers. 
The pulse shape is varied between an asymmetric double sigmoid (\textbf{d}) and a square pulse (\textbf{e}), highlighting the need to keep the time at maximum current as short as possible to reduce heating.} \label{fig:supp_Heating} \end{figure*} The (transient) temperature response of the SM-PP bridge was numerically studied with COMSOL\texttrademark. First we address the current pulse shape. In our SPM setup, the capacitance of the cables connected to the SM-PP affects the pulse shape, resulting in a more asymmetric shape that deviates from the square current pulse input. We simulate this curve in COMSOL using the asymmetric double sigmoidal function, which has the following functional form: \begin{equation} I\left(t\right)=\frac{A_1}{1+\exp\left(-\frac{t-t_c+w_1/2}{w_2}\right)}\left[1-\frac{1}{1+\exp\left(-\frac{t-t_c-w_1/2}{w_3}\right)}\right] \label{eq:sigmoidal} \end{equation} Here, $w_1$ defines the width of the pulse, $w_2$ and $w_3$ together define the asymmetry of the pulse, $t_c$ is the centre of the pulse and $A_1$ is proportional to the amplitude of the current pulse $I_0$. We fit Eq.~\ref{eq:sigmoidal} to the real pulse, from which we obtain the full-width-at-half-maximum (FWHM) and the pulse amplitude $I_0$. Heat transport in ultrahigh vacuum (UHV) conditions is modelled via COMSOL\texttrademark heat conduction. Thermal dissipation occurs via conduction within the planar probe and via radiative emission. The environment is set at room temperature, as is the case for the Scienta Omicron VT-SPM, which lacks a cryostat. Only the sample is cooled. The radiative thermal dissipation is set by the surface emissivity $\varepsilon$. We used the following values for the materials of the functionalised planar probe: $\varepsilon_{\text{Si}}=0.6$, $\varepsilon_{\text{Pt}}=0.04$, $\varepsilon_{{\rm SiO}_2}=0.8$ and $\varepsilon_{\text{MgO}}=0.5$. The other metals, Ta and Co, are neglected. First, we consider the maximum current that can be sent through the bridge without overheating it. To this end, we varied the current density $J$ in the planar probe, which is related to the total current $I_p$ via $J=\frac{I_{p}}{d_{Pt}d}$, where $d_{Pt}$ is the thickness of the main platinum layer and $d$ is the width of the bridge gap (\SI{5}{\micro\meter}). The situation we consider is the asymmetric pulse from Equation \ref{eq:sigmoidal} with a current peak of \SI{150}{\milli\ampere}, which amounts to a current density of \SI{6e11}{\ampere\per\meter\squared}. We vary the current density from \SI{e11}{\ampere\per\meter\squared} to \SI{e12}{\ampere\per\meter\squared}, in steps of \SI{0.5e11}{\ampere\per\meter\squared}. The results are shown in Figure \ref{fig:supp_Heating}a, in which the temperature increases by barely \SI{20}{\kelvin} for the smallest current densities, and by approximately \SI{115}{\kelvin} for the highest current density. To characterize the temperature increase with respect to the current pulse length, we simulate three current pulses with different durations: \SI{300}{}, \SI{600}{} and \SI{900}{\nano\second} FWHM. The pulses are indicated in Figure \ref{fig:supp_Heating}b as dark solid lines. The resulting temperature evolution at the probe’s tip end is shown in Figure \ref{fig:supp_Heating}b as dotted red lines. Indeed, the temperature increases with the duration of the pulse. 
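For reference, the asymmetric double sigmoidal function of Eq.~\ref{eq:sigmoidal} can be evaluated directly; the short Python sketch below reproduces such a pulse for purely illustrative parameter values ($A_1$, $t_c$, $w_1$, $w_2$ and $w_3$ are chosen here only to give a pulse of roughly \SI{160}{\nano\second} FWHM and are not the values fitted to the measured pulse).
\begin{verbatim}
import numpy as np

def asym_double_sigmoid(t, A1, t_c, w1, w2, w3):
    """Asymmetric double sigmoidal current pulse (functional form above)."""
    rise = 1.0 / (1.0 + np.exp(-(t - t_c + w1 / 2.0) / w2))
    fall = 1.0 / (1.0 + np.exp(-(t - t_c - w1 / 2.0) / w3))
    return A1 * rise * (1.0 - fall)

# Illustrative parameters only (not the values fitted to the real pulse).
t = np.linspace(0.0, 1.0e-6, 2001)                   # 0 to 1 us
I = asym_double_sigmoid(t, A1=0.15, t_c=0.3e-6,
                        w1=150e-9, w2=20e-9, w3=40e-9)

above_half = t[I >= I.max() / 2.0]
fwhm = above_half[-1] - above_half[0]
print(f"Peak current: {I.max()*1e3:.0f} mA, FWHM: {fwhm*1e9:.0f} ns")
\end{verbatim}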
The temperature increases by about \SI{45}{\kelvin} for a short \SI{160}{\nano\second} pulse, and by over \SI{100}{\kelvin} for the \SI{900}{\nano\second} pulse. This large difference illustrates that the current pulse duration has a major impact on the temperature increase, so the current pulses need to be as short as possible in order to prevent overheating the probe. We also investigated the thermal aspects of the planar probe when it is cooled down to liquid nitrogen temperature (\SI{77}{\kelvin}). The values of the surface emissivity are set to the same values as at \SI{293}{\kelvin}. The temperature evolution of the probe’s tip at \SI{77}{\kelvin} is shown in Figure \ref{fig:supp_Heating}c. Evidently, the temperature of the probe increases by only \SI{35}{\kelvin}, which would make operation in a cryostat possible. Finally, we discuss the thermal characteristics of a SiO$_2$ layer. As discussed in Supplementary S1, the inclusion of SiO$_2$ as a spacer layer to reduce electrical shorting from FIB milling turned out to negatively impact the thermal dissipation, eventually leading to bridge damage after several pulses above \SI{80}{\milli\ampere}. Numerical calculations for MgO and SiO$_2$ are shown in Figure \ref{fig:supp_Heating}d. These plots show a larger temperature increase when SiO$_2$ is used, up to a maximum above \SI{400}{\kelvin}. These values were calculated with a pulse of the form of Equation \ref{eq:sigmoidal}, with a FWHM of \SI{160}{\nano\second}. In the case of a \SI{230}{\nano\meter} MgO layer, the temperature increase is less than \SI{40}{\kelvin}, supporting the experimentally observed stability of MgO as a spacer layer. For comparison, we have also used a block pulse of \SI{160}{\nano\second} with a magnitude of \SI{150}{\milli\ampere}, with the thermal response shown in Figure \ref{fig:supp_Heating}e; much higher temperature peaks are observed, exceeding \SI{500}{\kelvin}. Hence, the time at the peak value needs to be minimized to only a few ns to prevent damage to the tip, and a block pulse is not suitable. \subsubsection{MuMax3 calculation of tip domain orientation} MuMax$^3$ \cite{Vansteenkiste2014TheMuMax3} was used to simulate a \SI{16}{\nano\meter} Co film, with an exchange length of \SI{5}{\nano\meter} and a grid unit cell of 4x4 \SI{}{\square\nano\meter}. The following values are taken to simulate the cobalt: saturation magnetisation $M_{sat}$ = \SI{1.4}{\mega\ampere\per\meter}, exchange constant $A_{ex}$ = \SI{16}{\pico\joule\per\meter} and a $1^{st}$ order uniaxial anisotropy constant $Ku_1$ of \SI{0.72}{\mega\joule\per\cubic\meter}. \subsubsection{Note on the magnetic stray field computation} To study the magnetic properties of the SM-PP, specifically the distribution of its tip magnetic field components $B_x$, $B_y$ and $B_z$ and the field magnitude, a 3D COMSOL\texttrademark FEM model was made. The SM-PP is modelled as a \SI{15}{\nano\meter} ferromagnetic Co layer with a magnetization of \(M=\SI{1400}{\kilo\ampere\per\meter}\). In the main text we showed the stray field of the SM-PP in the in-plane direction of the tip as it extends outwards, when placed at a certain height $z$ above a flat sample. In the calculations, the SM-PP and sample are placed in an air medium. Figure \ref{fig:Supp_MagneticFieldDistributions} presents the modeled stray field results. Figures \ref{fig:Supp_MagneticFieldDistributions}a and b show the field distribution at two different cross-sections: in the $xy$-plane, and parallel to the probe surface, respectively. 
It is important to note that the probe length is much larger than the tip-sample distance $z$. However, for computational reasons the probe was made smaller ("cut off"), which gives rise to an additional curvature of the field near the upper side of the probe. Near the tip, where the calculations were performed, the effect of this curvature has been taken into account in the final calculations. Figure \ref{fig:Supp_MagneticFieldDistributions}c presents the calculated $B_x$, $B_y$ and $B_z$ as a function of the tip-sample distance $z$. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/Supp_Comsol_Field.jpg} \caption{\textbf{COMSOL-calculated magnetic field distribution around the tip}. (\textbf{a}), (\textbf{b}) Arrow plots of the magnetic field distribution. The color bar gives the absolute strength in \si{\milli\tesla}. (\textbf{c}) Magnetic field components $B_x$, $B_y$ and $B_z$ as a function of tip-sample distance $z$. } \label{fig:Supp_MagneticFieldDistributions} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/Supp_MFM_Distribution.jpg} \caption{\textbf{Distribution of the SM-PP tip stray field on a sample surface in the xy plane.} (\textbf{a})-(\textbf{c}) Numerically calculated stray field components across the sample surface, placed at z = 0 nm, with the tip apex positioned at \SI{1}{\nano\meter}. (\textbf{d}), (\textbf{e}) Numerically calculated stray field magnitude and spatial distribution at different heights $z$ of the tip.} \label{fig:Supp_MFM_Distribution} \end{figure} We calculated the spatial distribution of the tip's stray field $\vec{B}$ across the sample surface. Figure \ref{fig:Supp_MFM_Distribution} shows the lateral distribution of $B_x$, $B_y$ and $B_z$ at a tip-sample distance of \SI{1}{\nano\meter}. We expect this to be the closest tip-sample distance during oscillation. At the lowest point of the amplitude extension with respect to the sample surface, the stray field perturbation should be strongest. The $B_x$ and $B_y$ components in Figures \ref{fig:Supp_MFM_Distribution}a and \ref{fig:Supp_MFM_Distribution}b show a sign inversion close to the (0,0) point, where the tip apex is positioned. This is to be expected from the symmetry of the planar probe. The magnitude of $\vec{B}$ can become quite large, up to \SI{100}{\milli\tesla}. At (0,0) the field is minimal for $B_y$, but for $B_x$ a sizable field of up to \SI{80}{\milli\tesla} is observed directly below the tip. For a fabricated planar probe, the tip is never fully symmetric, nor are the bridge shape and its position on the tip apex. The in-plane values are hence expected to be larger below the tip during experiments. The out-of-plane component $B_z$ is strongest at (0,0) and is given in Figure \ref{fig:Supp_MFM_Distribution}c. Its magnitude can reach over \SI{300}{\milli\tesla}. For measuring LMO$_3$, having both an in-plane and an out-of-plane field is beneficial, as the SM-PP in-plane field magnetises the SP islands, while $B_z$ is used to read out the stray signal. This is similar to the SSM experiments of Anahory \textit{et al.} \cite{Anahory2016EmergentInterfaces} and supports our work on observing the complex SP islands. We also examined the dependence of the magnetic field magnitude $|B|$ and its spatial distribution on $z$, between \SI{5}{\nano\meter} and \SI{20}{\nano\meter}. The magnitude and spatial distribution of $|B|$ are given in Figures \ref{fig:Supp_MFM_Distribution}d and \ref{fig:Supp_MFM_Distribution}e, respectively. 
Evidently, $|B|$ increases by almost a factor of $3$ as $z$ decreases down to \SI{5}{\nano\meter}. This may explain why we observe a sudden change (kink) in the F-z spectroscopy of Figure \ref{fig:MFM} in the main text: as the SP islands are magnetized, a sudden change in the magnitude of the attractive force occurs. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/Supp_Angle_Distribution.jpg} \caption{\textbf{Distribution of the SM-PP tip stray field on a sample surface in the xy plane, depending on the off-axis orientation of the in-plane magnetisation of the Co layer.}} \label{fig:Supp_Angle_Distribution} \end{figure} Finally, we examined the effect of the direction of the Co magnetisation along the planar probe symmetry axis on the tip stray field magnitude and distribution. Figure \ref{fig:Supp_Angle_Distribution} shows the calculated $|B|$ at a fixed $z =\SI{0.5}{\nano\meter}$. Here, the relative Co magnetisation directions on both sides of the symmetry axis are depicted in blue and red, with the angle between the two directions indicated as $\theta$. Two main conclusions can be derived from these results: firstly, the magnitude of the magnetic field strength is relatively independent of $\theta$; secondly, the shape of the magnetic field distribution on the surface changes only slightly. In conclusion, changing the symmetry of the Co magnetisation along the planar probe's vertical symmetry axis does not alter the final $|B|$ distribution much. Hence it is reasonable to assume that $|B|$ mainly depends on the tip-sample distance $z$. \subsection{Supplementary S5: Methods SM-PP MFM experiment} \subsubsection{LMO$_3$ sample in UHV} The 6 u.c. LMO$_3$/STO$_3$ sample was exposed to ambient air prior to measurement; hence we expect a thin layer of contaminants to be present on the LMO$_3$ surface before it is loaded into UHV. Gently heating to \SI{100}{\celsius} in UHV removed the water adsorbates. Higher temperatures were not used, to prevent oxygen diffusion, which likely alters the stoichiometry and magnetic behaviour \cite{Li2019ControllingEvidence, Xie2015EffectsFilms}. However, we cannot verify whether small stoichiometric changes have occurred (also over time) or differ between sample growths. This variable offers possibilities for future investigation of the relationship between stoichiometry and the magnetic textures. \subsubsection{FM-MFM imaging and data visualisation} To measure the long-range MFM signal, the tip was retracted by \SI{5}{\nano\meter} and followed the previously obtained topography, the so-called lift mode \cite{Kazakova2019FrontiersMicroscopy}. For measuring the long range magnetic force gradient $dF_m/dz$, the oscillation amplitude $A$ was set to \SI{10}{\nano\meter}. A Scienta Omicron VT-SPM was used with a custom tip holder and a low temperature sample holder for enhanced thermal conduction. Imaging was performed by locking the amplitude and tracking the frequency shift with a phase-locked loop (PLL). Care was taken to keep all the scanning parameters constant, with optimized feedback settings for the phase, amplitude and frequency. The PLL bandwidth was set to \SI{384}{\hertz}. After a topography line-scan was obtained, the tip was lifted by the predefined lift height. Both forward and backward scans were compared to verify repeatability and exclude artifacts. We imaged with a speed of \SI{39}{\nano\meter\per\second} at a resolution of 256x256 pixels. Gwyddion software was used for the AFM and MFM data plotting and analysis. 
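The image levelling described next (plane projection followed by line alignment) was performed in Gwyddion; purely as an illustration, equivalent operations can be sketched in a few lines of Python/numpy on synthetic data standing in for a frequency-shift map (the array size and noise level below are assumptions for the example, not our measured data).
\begin{verbatim}
import numpy as np

def level_plane(img):
    """Subtract the least-squares best-fit plane from a 2D scan."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return img - (A @ coeffs).reshape(h, w)

def align_rows(img):
    """Remove per-row (scan line) offsets using the row median."""
    return img - np.median(img, axis=1, keepdims=True)

rng = np.random.default_rng(0)
scan = rng.normal(scale=1.5e-3, size=(256, 256))   # noise-like frame [Hz]
scan += 1e-4 * np.arange(256)[:, None]             # add an artificial tilt
leveled = align_rows(level_plane(scan))
print(f"RMS before/after: {scan.std():.2e} / {leveled.std():.2e} Hz")
\end{verbatim}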
We used plane projection to level the images, and line alignment in the vertical direction. Due to the thermal drift induced by the thermal gradient between the cold sample and the room temperature SM-PP (as dictated by the VT-SPM design), a small eigenmode frequency shift of \SI{50}{\milli\hertz} was observed during the prolonged imaging; hence we renormalized the MFM frequency shift to the first few line-scans accordingly. No influence of this effect on the MFM lateral dimensions was noted. Finally, both the sample and tip are grounded. Future work will transfer the SM-PP to a cryostat to reduce thermal gradients, which would also increase $Q$ significantly further. \subsection{Supplementary S6: Variable temperature MFM} \begin{figure} \centering \includegraphics[scale=0.42] {Figures/Supp_VT_MFM.jpg} \caption{\textbf{MFM images at variable temperature. }(\textbf{a}), (\textbf{b}) MFM images taken with a multi-domain and a single-domain tip state at \SI{100}{\kelvin}, respectively. (\textbf{c}), (\textbf{d}) Multi-domain and single-domain MFM imaging at \SI{300}{\kelvin}, respectively.} \label{fig:Supp_VT_MFM} \end{figure}
{ "arxiv_id": "2302.14340", "language": "en", "timestamp": "2023-03-02T02:14:54", "url": "https://arxiv.org/abs/2302.14340", "yymm": "2302" }
\section{Introduction} \label{sec:intro} Surface reconstruction of a scene from a set of observed multi-view images stands as a long-term challenge in computer vision research. A rich literature \cite{mvg_book,furukawa2015multi,chen20153survey} exists to address the challenge, including different paradigms of methods from stereo matching to volumetric fusion. Among them, the representative methods of multi-view stereo (MVS) \cite{xu2020planar, galliani2015massively,schonberger2016pixelwise,zheng2014patchmatch} first recover the properties (e.g, depth and/or normal) of discrete surface points, by globally optimizing the local, pixel-wise correspondences across the multi-view images, where photometric and geometric consistencies across views are used as the optimization cues, and a continuous fitting method (e.g., Poisson reconstruction \cite{kazhdan2006poisson, kazhdan2013screened}) is then applied to recover a complete surface. MVS methods usually make a reliable recovery only on surface areas with rich textures. More recently, differentiable volume rendering is proposed that connects the observed multi-view images with neural modeling of the implicit surface and radiance field \cite{mildenhall2020nerf, wang2021neus, yariv2021volume}. They show a surprisingly good promise for recovery of object-level surfaces, especially when the object masks are available in the observed images \cite{yariv2020multiview, liu2020dist}; indeed, these methods favor a continuous, closed surface given that a single deep network is used to model the scene space, whose deep prior induces a smoothness bias for surface recovery \cite{wang2021neus, yariv2021volume}. For complex scene surfaces, however, the induced smoothness bias is less capable to regularize the learning and recover the scene surface with fine geometry details \cite{zhang2020nerf++, peng2020convolutional}. To overcome the limitation, we observe that the strategies from the two paradigms of MVS and neural implicit learning are different but potentially complementary to the task. We are thus motivated to make use of the complementary benefits with an integrated solution. In this work, we achieve the goal technically by using the intermediate prediction from one strategy as the guidance to regularize the learning/optimization of the other one, and conducting such \emph{intertwined regularization iteratively} during the process. Considering that the iterative intertwined regularization makes the optimization curve as a shape of double helix, we term our method as \emph{Helix-shaped neural implicit Surface learning or HelixSurf}. Given that MVS predictions are less reliable for textureless surface areas, we regularize the learning on such areas in HelixSurf by leveraging the homogeneity inside individual superpixels of observed images. We also improve the efficiency of differentiable volume rendering in HelixSurf, by maintaining dynamic occupancy grids that can adaptively guide the point sampling along rays; our scheme improves the learning efficiency with orders of magnitude when compared with existing neural implicit surface learning methods, even with the inclusion of MVS inference time. An illustration of the proposed HelixSurf is given in \cref{fig:pipeline}. Experiments on the benchmark datasets of ScanNet\cite{dai2017scannet} and Tanks and Temples\cite{knapitsch2017tanks} show that our method compares favorably with existing methods, and is orders of magnitude faster. 
We note that a few recent methods \cite{wang2022neuris, Yu2022MonoSDF} use geometric cues provided by models pre-trained on auxiliary data to regularize the neural implicit surface learning; compared with them, our method achieves better results as well. Our technical contributions are summarized as follows. \begin{itemize} \item We present a novel method of \emph{HelixSurf} for reconstruction of indoor scene surfaces from multi-view images. HelixSurf enjoys the complementary benefits of the traditional MVS and the recent neural implicit surface learning, by regularizing the learning/optimization of one strategy iteratively using the intermediate prediction from the other; \item MVS methods make less reliable predictions on textureless surface areas. We further devise a scheme that regularizes the learning on such areas by leveraging the region-wise homogeneity organized by superpixels in each observed image; \item To improve the efficiency of differentiable volume rendering in HelixSurf, we adopt a scheme that can adaptively guide the point sampling along rays by maintaining dynamic occupancy grids in the 3D scene space; our scheme improves the efficiency by orders of magnitude when compared with existing neural implicit surface learning methods. \end{itemize} \section{Related Works} \label{sec:relworks} \subsection{PatchMatch based Multi-view Stereo} 3D reconstruction from posed multi-view images is a fundamental but challenging task in computer vision. Among all the techniques in the literature, PatchMatch based Multi-view Stereo (PM-MVS) is traditionally the most explored one \cite{mvg_book, furukawa2015multi}. PM-MVS methods \cite{schonberger2016structure, schonberger2016pixelwise, zheng2014patchmatch, shen2013accurate,galliani2015massively, romanoni2019tapa, xu2020planar} represent the geometry with depth and/or normal maps. They estimate the depth and/or normal of each pixel by exploiting inter-image photometric and geometric consistency and then fuse all the depth maps into a global point cloud with filtering operations, which can be subsequently processed using meshing algorithms \cite{labatut2007efficient, kazhdan2006poisson}, \eg Screened Poisson surface reconstruction \cite{kazhdan2013screened}, to recover a complete surface. These traditional methods have achieved great success on various occasions and can produce plausible geometry of textured surfaces, but there exist artifacts and missing parts in the areas without rich textures. Indeed, their optimization highly relies on the photometric measure to discriminate which random estimate is the best guess. In the case of indoor scenes with textureless areas \cite{xu2020planar,romanoni2019tapa}, the inherent homogeneity renders the photometric measure ineffective and consequently poses difficulties for accurate depth estimation. With the development of deep learning, learning-based MVS methods \cite{yao2018mvsnet, yao2019recurrent, im2018dpsnet, xu2020pvsnet, wang2021patchmatchnet} have demonstrated promising performance in recent years. However, they crucially rely on ground-truth 3D data for supervision, which hinders their practical application. \subsection{Neural Implicit Surface} In contrast to classic explicit representations, recent works \cite{park2019deepsdf,mescheder2019occupancy,chibane2020neural} implicitly represent surfaces via learning neural networks, which model continuous surfaces with Multi-Layer Perceptrons (MLPs) and make it more feasible and efficient to represent complex geometries with arbitrary topologies. 
For the task of multi-view reconstruction, the 3D geometry is represented by a neural network that outputs either a signed/unsigned distance field or an occupancy field. Some works \cite{niemeyer2020differentiable, yariv2020multiview, liu2020dist} utilize surface rendering to enable the reconstruction of 3D shapes from 2D images, but they always rely on extra object masks. Inspired by the success of NeRF \cite{mildenhall2020nerf}, recent works \cite{wang2021neus, oechsle2021unisurf, yariv2021volume} apply differentiable volume rendering techniques to reconstruction, which eliminates the need for masks and achieves impressive reconstructions. Follow-up works \cite{fu2022geo, darmon2022improving, wang2022hfneus} further improve the geometry quality with fine-grained surface details. Although these methods show better accuracy and completeness compared with the traditional MVS methods, they still suffer from the induced smoothness bias of deep networks \cite{peng2020convolutional, zhang2020nerf++}, which hinders them from regularizing the learning and recovering fine details in scene reconstruction. Most recent works \cite{guo2022neural, wang2022neuris, Yu2022MonoSDF} attempt to resolve this dilemma by incorporating geometric cues provided by models pre-trained on auxiliary data. Our HelixSurf integrates traditional PM-MVS and neural implicit surface learning in a complementary manner and achieves better results than these methods. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{images/main/pipeline.pdf} \caption{\textbf{Overview of HelixSurf: Helix-shaped neural implicit Surface learning}. HelixSurf integrates the neural implicit surface learning (\cf Section \ref{subsec:reg_neural_from_MVS}) and PatchMatch based MVS (\cf Section \ref{subsec:reg_MVS_from_neural}) in a robust and efficient manner. We optimize HelixSurf with an iterative intertwined regularization, which uses the intermediate prediction from one strategy as guidance to regularize the learning/optimization of the other one; given that MVS predictions are less reliable for textureless surface areas, we additionally devise a scheme that regularizes the learning on such areas by leveraging the homogeneity per superpixel in observed multi-view images (\cf Section \ref{subsec:handling_textureless_areas}). We also propose a scheme for point sampling along rays (\cf Section \ref{subsec:occupancy_grids}), which significantly improves the efficiency. At the inference stage of HelixSurf, we conduct grid sampling to query the learned SDF values at sampled points and run Marching Cubes to get the reconstruction results. } \label{fig:pipeline} \end{figure*} \section{Preliminary} \label{sec:preliminary} In this section, we give the technical background and math notations that are necessary for presenting our proposed method in subsequent sections. 
\vspace{0.1cm} \noindent{\textbf{Neural Implicit Surface Representation}} Among the choices of neural implicit surface representation \cite{park2019deepsdf,mescheder2019occupancy,chibane2020neural}, we adopt DeepSDF \cite{park2019deepsdf} that learns to encode a continuous surface as the zero-level set of a signed distance field (SDF) $f: \mathbb{R}^3 \rightarrow \mathbb{R}$, which is typically parameterized as an MLP; for any point $\bm{x}\in\mathbb{R}^3$ in the 3D space, $| f(\bm{x}) |$ assigns its distance to the surface $\mathcal{S} = \left\{\bm{x} \in \mathbb{R}^3 | f(\bm{x}) = 0\right\}$; by convention, we have $f(\bm{x}) < 0$ for points inside the surface and $f(\bm{x}) > 0$ for those outside. \vspace{0.1cm} \noindent{\textbf{SDF-induced Volume Rendering}} Differentiable volume rendering is used in NeRF \cite{mildenhall2020nerf} for synthesis of novel views. Denote a ray emanating from a viewing camera as $\bm{r}(t) = \bm{o} + t\bm{v}$, $t\geq 0$, where $\bm{o}\in\mathbb{R}^3$ is the camera center and $\bm{v}\in\mathbb{R}^3, \Vert \bm{v} \Vert = 1$ denotes the unit vector of viewing direction. NeRF models a continuous scene space as a neural radiance field $\bm{F}: \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow \mathbb{R}_{+} \times \mathbb{R}^3$, which for any space point $\bm{x}$ and direction $\bm{v}$, assigns $ \bm{F}(\bm{x}, \bm{v}) = (\sigma, \bm{c})$, where $\sigma \in \mathbb{R}_{+}$ represents the volume density at the location $\bm{x}$, and $\bm{c} \in \mathbb{R}^3$ is the view-dependent color from $\bm{x}$ along the ray $- \bm{r}$ towards $\bm{o}$. Assume $N$ points are sampled along $\bm{r}$; the color accumulated along the ray $\bm{r}$ can be approximated, using the quadrature rule\cite{max1995optical}, as \begin{equation} \label{eq:render_color} \bm{C}(\bm{r}) = \sum^N_{i = 1} T_i \alpha_i \bm{c}(\bm{r}(t_i), \bm{v}),\quad T_i = \prod^{i - 1}_{j = 1}(1 - \alpha_j) , \end{equation} where $\alpha_i = 1 - \exp(-\int^{t_{i + 1}}_{t_i}\sigma(\bm{r}(t)) dt)$ denotes the opacity of a segment. While the volume density $\sigma: \mathbb{R}^3 \rightarrow \mathbb{R}_{+}$ is learned as a direct output of the MLP based radiance field function $\bm{F}$ in \cite{mildenhall2020nerf}, it is shown in VolSDF \cite{yariv2021volume} and NeuS \cite{wang2021neus} that $\sigma$ can be modeled as a transformed function of the implicit SDF function $f$, enabling better recovery of the underlying geometry. In this work, we follow \cite{wang2021neus} to model $\sigma$ as an SDF-induced volume density. With such an SDF-induced, differentiable volume rendering, the geometry $f$ and color $\bm{c}$ can be learned by minimizing the difference between rendering results and multiple views of input images. Note that analogous to (\ref{eq:render_color}), the depth $d$ of the surface from the camera center $\bm{o}$ can be approximated along the ray $\bm{r}$ as well, giving rise to \begin{equation} \label{eq:render_depthnorm} d(\bm{r}) = \sum^N_{i = 1} T_i \alpha_i t_i, \quad \bm{n}(\bm{r}) = \nabla f(\bm{o} + d(\bm{r}) \bm{v}) , \end{equation} where $\bm{n}(\bm{r}) \in \mathbb{R}^3$ denotes the surface normal at the intersection point and $\nabla f(\bm{x})$ is the gradient of SDF at $\bm{x}$. \vspace{0.1cm} \noindent{\textbf{Multi-View Stereo with PatchMatch}} Assume that a reference image $\bm{I}^\text{ref}$ and a set of source images $\mathcal{I}^\text{src} = \{\bm{I}^m | m = 1\ldots M\}$ capture a common scene; we write collectively as $\mathcal{I} = \{\bm{I}^\text{ref}, \mathcal{I}^\text{src} \}$. 
PatchMatch based multi-view stereo (PM-MVS) methods\cite{zheng2014patchmatch, schonberger2016pixelwise, galliani2015massively, xu2020planar} aim to recover the scene geometry by predicting the depth $d_l \in \mathbb{R}^+$ and normal $\bm{n}_l \in \mathbb{R}^3, \Vert \bm{n}_l \Vert = 1$ for each pixel in $\bm{I}^\text{ref}$, which is indexed by $l$ with $l \in \{1, \dots, L\}$. Considering that any $l^{th}$ pixel in $\bm{I}^\text{ref}$ may not be visible in all images in $\mathcal{I}^\text{src}$, the methods then predict an occlusion indicator $\mathcal{Z}^{src} = \{Z^m_l | l = 1, \ldots, L, m = 1, \ldots, M\}$ for $\bm{I}^\text{ref}$. Optimization of $\{d_l\}_{l=1}^L$, $\{\bm{n}_l\}_{l=1}^L$, and $\mathcal{Z}^{src}$ is based on enforcing photometric and geometric consistencies between corresponding patches in $\bm{I}^\text{ref}$ and $\mathcal{I}^\text{src}$; this is mathematically formulated as a probabilistic graphical model and is solved via generalized expectation-maximization (GEM) algorithm \cite{schonberger2016pixelwise, galliani2015massively}, where PatchMatch \cite{barnes2009patchmatch,bleyer2011patchmatch} is used to efficiently establish pixel-wise correspondences across multi-view images. More specifically, let $\mathcal{A}^{src} = \{ \bm{A}^m_l | l = 1, \ldots, L, m = 1, \ldots, M\}$ denote the set of homography-warped patches from source images \cite{shen2013accurate}, the PatchMatch based methods optimize $d_l$ and $\bm{n}_l$ for a pixel in the reference image as \begin{equation} \begin{aligned} \label{eq:mvs_em} \{d_l^{*}, \bm{n}_l^{*}\} &= \mathop{\arg\max}P( d_l, \bm{n}_l | \mathcal{A}^{src}, \mathcal{Z}^{src}) \\ &\propto \mathop{\arg\max}P( \mathcal{A}^{src} | d_l, \bm{n}_l, \mathcal{Z}^{src})P(d_l, \bm{n}_l) \\ &= \mathop{\arg\min} \sum_{m=1}^{M} P_l(m) \xi^m_l(d_l, \bm{n}_l) \\ \textrm{with} \ \ \xi^m_l =& \ 1 - \rho^m_l (d_l, \bm{n}_l) + \eta\min(\varphi^m_l (d_l, \bm{n}_l), \varphi_\text{max}) , \end{aligned} \end{equation} where $\rho^m_l(d_l, \bm{n}_l)$ denotes the color similarity between the reference patch $\bm{A}^\text{ref}_l$ and source patch $\bm{A}^{m}_l$ based on normalized cross-correlation, which is a function of $d_l$ and $\bm{n}_l$, and $\varphi^m_l(d_l, \bm{n}_l)$ is the forward-backward reprojection error to evaluate the geometric consistency incurred by the predicted $d_l$ and $\bm{n}_l$, which is capped by a pre-defined $\varphi_\text{max}$; the probability $P_l(m)$ serves for view selection that assigns different weights to the $M$ source images. Indeed, source images with small values of $P_l(m)$ are less informative; hence Monte-Carlo view sampling is used in \cite{zheng2014patchmatch} to draw samples according to $P_l(m)$. Assume that the selected views form a subset $S \subset \{1\ldots M\}$, the problem (\ref{eq:mvs_em}) can be simplified as \begin{equation} \{d_l^{*}, \bm{n}_l^{*}\} = \mathop{\arg\min} \frac{1}{|S|} \sum_{m\in S} \xi^m_l(d_l, \bm{n}_l) . \end{equation} \section{HelixSurf for Intertwined Regularization of Neural Implicit Surface Learning} \label{sec:methods} Given a set of calibrated RGB images $\{\bm{I}_m\}_{m=1}^M$ of an indoor scene captured from multiple views, the task is to reconstruct the scene geometry with fine details. 
Under the framework of neural differentiable volume rendering, the task translates into learning an MLP based radiance field function $\bm{F}$ that connects the underlying scene geometry with the image observations $\{\bm{I}_m\}_{m=1}^M$; with the use of an SDF-induced volume density $\sigma(f)$, the scene surface can be reconstructed by extracting the zero-level set of the learned SDF $f$. As stated in Section \ref{sec:intro}, although the supervision from $\{\bm{I}_m\}_{m=1}^M$ is conducted in a pixel-wise, independent manner, the MLP based function $f$ has \emph{deep priors} that bias the learning towards encoding continuous, piecewise smooth surfaces \cite{wang2021neus,yariv2021volume}; indeed, assuming a successful learning of a ReLU-based MLP $f$, its zero-level set can be \emph{exactly} recovered as a continuous polygon mesh \cite{lei2020analytic}. Meanwhile, PatchMatch based MVS methods couple the predictions of $\{ d_l, \bm{n}_l \}$ for individual pixels in a probabilistic framework, and conduct the optimization \emph{globally} such that the predicted $\{ d_l, \bm{n}_l \}$ achieve the best overall photometric and geometric consistency across $\{\bm{I}_m\}_{m=1}^M$; after obtaining $\{ d_l, \bm{n}_l \}$, a continuous, watertight surface can be fitted using Poisson reconstruction \cite{kazhdan2006poisson, kazhdan2013screened}. The above two strategies reconstruct the surface using different but potentially complementary mechanisms. We are thus motivated to propose an integrated solution that takes advantage of both. In this work, we achieve the goal technically by using the intermediate prediction from one strategy as the guidance to regularize the learning of the other one, and conducting such \emph{intertwined regularization iteratively} during the learning process. Considering that the iterative intertwined regularization shapes the optimization trajectory like a double helix, we term our method \emph{Helix-shaped neural implicit Surface learning, or HelixSurf}. Details of HelixSurf are presented as follows. An illustration is given in \cref{fig:pipeline}. \subsection{Regularization of Neural Implicit Surface Learning from MVS predictions} \label{subsec:reg_neural_from_MVS} Given the set of multi-view images $\mathcal{I} = \{\bm{I}^\text{ref}, \mathcal{I}^\text{src} \}$, neural implicit surface learning via differentiable volume rendering samples rays in the 3D space; for any sampled ray $\bm{r}(t) = \bm{o} + t\bm{v}$, $t \geq 0$, in a viewing direction $\bm{v}$, assume that it emanates from the camera center $\bm{o}$ and passes through a pixel $\bm{a} \in \mathbb{R}^3$ in an image $\bm{I}$ in $\mathcal{I}$. Let $\bm{F}$ be the SDF-induced neural radiance field that models the scene geometry via the SDF function $f$; we can then write $\bm{F}(\bm{r}(t), \bm{v}; f) = (\sigma(f(\bm{r}(t))), \bm{c}(\bm{r}(t), \bm{v}))$ for any point $t$ along $\bm{r}(t)$. According to the approximate volume rendering in (\ref{eq:render_color}), the color $\bm{C}(\bm{r})$ accumulated along the ray $\bm{r}$ can be computed, given $\{ \sigma(f(\bm{r}(t_i))), \bm{c}(\bm{r}(t_i), \bm{v}) \}_{i=1}^N$ at $N$ sampled points; the following loss defines the color based image supervision from ray $\bm{r}$ for learning $\bm{F}$ (i.e., learning the MLPs $f$ and $\bm{c}$, see Section \ref{sec:preliminary} for the details): \begin{equation} \mathcal{L}_\text{\tiny Neural}(\bm{r}; f, \bm{c}) = \texttt{SmoothL1}(\bm{C}(\bm{r}; f, \bm{c}), \bm{a}(\bm{r})) . 
\end{equation} We can also compute the depth $d(\bm{r}; f)$ and surface normal $\bm{n}(\bm{r}; f)$ according to (\ref{eq:render_depthnorm}). Section \ref{sec:preliminary} suggests that given $\mathcal{I}$, PatchMatch based MVS methods can predict pairs of depth and surface normal for pixels in the observed reference image. Such methods usually produce a sparse set of predictions on texture-rich surface areas \cite{schonberger2016pixelwise,Xu2019ACMM}. Without loss of generality, assume that $\{d_{\bm{a}}^{\text{\tiny MVS}}, \bm{n}_{\bm{a}}^{\text{\tiny MVS}}\}$ are the MVS prediction for the pixel $\bm{a}$ in the image $\bm{I}$. We use $\{d_{\bm{a}}^{\text{\tiny MVS}}, \bm{n}_{\bm{a}}^{\text{\tiny MVS}} \}$ to regularize the learning of $f$ in the current iteration, based on the following loss \begin{equation} \begin{aligned} \mathcal{L}_\text{\tiny MVSRegu}(\bm{r}; f) &= w(\bm{r}) \left(\bigl\lvert d(\bm{r}; f) - d_{\bm{a}}^{\text{\tiny MVS}} \bigr\rvert + \bigl\lvert \bm{n}(\bm{r}; f) - \bm{n}_{\bm{a}}^{\text{\tiny MVS}} \bigr\rvert \right) , \\ \textrm{with} \ \ w(\bm{r}) &= \mathds{1}_{\text{\tiny MVSRegu}}(\bm{r}) \cdot (1 - \bigl\lvert \bm{C}(\bm{r}) - \bm{a}(\bm{r}) \bigr\rvert) \end{aligned} \end{equation} where $\mathds{1}_{\text{\tiny MVSRegu}}(\bm{r})$ is an indicator to cope with the case when $\{d_{\bm{a}}^{\text{\tiny MVS}}, \bm{n}_{\bm{a}}^{\text{\tiny MVS}}\}$ are not predicted by MVS for the pixel $\bm{a}$. \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{images/main/planar_normal_generation.pdf} \caption{Illustration of handling textureless surface areas. (a): the inference results of PM-MVS, (b): watertight surface mesh $\mathcal{M^\text{\tiny MVS}}$ reconstructed by Poisson reconstruction from (a), (c): surface $\mathcal{M}_{-}^\text{\tiny MVS}$ obtained by pruning textureless triangle faces from (b), (d): an input image, (e): superpixels extracted by the graph-based segmentation algorithm \cite{felzenszwalb2004efficient}, (f): textureless areas obtained by ray casting, (g): superpixels covered by textureless areas, (h): surface normal map predicted by HelixSurf, (i): smooth normal map obtained by aggregating the normals in textureless superpixels. } \label{fig:planar_normal} \end{figure} \subsubsection{Handling of Textureless Surface Areas} \label{subsec:handling_textureless_areas} PatchMatch based MVS methods make reliable predictions only on texture-rich surface areas. We resort to other sources to regularize the neural implicit learning for texture-less surface areas. Our motivation is based on the observation that textureless surface areas tend to be both homogeneous in color and geometrically smooth; indeed, when the surface areas are of high curvature or when they have different colors, 2D image projections of such areas would have richer textures. The projected 2D image counterparts of textureless surface areas in fact correspond to those in images that can be organized as superpixels. We thus propose to further regularize the neural implicit surface learning by leveraging the homogeneity of image superpixels. We technically encourage the predicted normals of surface points, whose 2D projections fall in a same superpixel, to be close. For any image $\bm{I}$ in $\mathcal{I}$, we pre-compute its region partitions of superpixels using methods such as \cite{felzenszwalb2004efficient,achanta2012slic}. 
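Before turning to the loss defined on such superpixels, we note that the MVS-guided term $\mathcal{L}_\text{\tiny MVSRegu}$ above is straightforward to implement. The following sketch (NumPy, illustrative names) assumes that the rendered depths/normals, the possibly missing MVS predictions, and the per-ray color residuals of a batch of rays are given; the averaging over rays is a choice made here for illustration.
\begin{verbatim}
import numpy as np

def mvs_regu_loss(d_pred, n_pred, d_mvs, n_mvs, color_err, has_mvs):
    """L_MVSRegu for a batch of rays (illustrative sketch).

    d_pred, d_mvs : (R,)   rendered / MVS depths
    n_pred, n_mvs : (R, 3) rendered / MVS normals
    color_err     : (R,)   |C(r) - a(r)| averaged over RGB, in [0, 1]
    has_mvs       : (R,)   boolean indicator (MVS prediction exists)
    """
    w = has_mvs.astype(float) * (1.0 - color_err)      # confidence weight
    per_ray = (np.abs(d_pred - d_mvs)
               + np.abs(n_pred - n_mvs).sum(axis=1))   # L1 depth + normal
    denom = max(int(has_mvs.sum()), 1)                 # average over valid rays
    return float((w * per_ray).sum() / denom)

# toy usage on 4 rays, two of which have MVS predictions
R = 4
loss = mvs_regu_loss(
    d_pred=np.array([1.0, 2.0, 0.5, 0.7]),
    n_pred=np.tile([0.0, 0.0, 1.0], (R, 1)),
    d_mvs=np.array([1.1, 2.0, 0.0, 0.0]),
    n_mvs=np.tile([0.0, 0.0, 1.0], (R, 1)),
    color_err=np.array([0.1, 0.05, 0.0, 0.0]),
    has_mvs=np.array([True, True, False, False]))
print(loss)
\end{verbatim}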
Let $\tilde{\bm{r}}$ be a ray passing through a pixel $\tilde{\bm{a}}$ that falls in a superpixel of $\bm{I}$; denote the superpixel as $\tilde{\bm{A}}_{\tilde{\bm{a}}}$. We know that the volume rendering in HelixSurf predicts the surface normal $\bm{n}(\tilde{\bm{r}}; f)$ for the ray $\tilde{\bm{r}}$, and denote as $\{ \bm{n}'(\tilde{\bm{r}}'; f) | \tilde{\bm{a}}' \in \tilde{\bm{A}}_{\tilde{\bm{a}}} \}$ the predicted surface normals for all pixels in $\tilde{\bm{A}}_{\tilde{\bm{a}}}$. We first compute $ \bm{n}_{\tilde{\bm{a}}, \bm{I}}^{\text{\tiny Smooth}} = \sum_{i=1}^{|\tilde{\bm{A}}_{\tilde{\bm{a}}}|} \bm{n}'_i / |\tilde{\bm{A}}_{\tilde{\bm{a}}}| $, and then apply the same computation to all those images in $\mathcal{I}$ that capture the same surface point and contain the pixels corresponding to $\tilde{\bm{a}}$ in $\bm{I}$. Assuming there are $M'$ such images in total, we compute $\bm{n}_{\tilde{\bm{a}}}^{\text{\tiny Smooth}} = \sum_{m=1}^{M'+1} \bm{n}_{\tilde{\bm{a}}_m, \bm{I}_m}^{\text{\tiny Smooth}} / (M' + 1)$ and enforce closeness of surface normal predictions for pixels both inside a superpixel and across multi-view images with the following loss \begin{equation} \mathcal{L}_\text{\tiny Smooth}(\tilde{\bm{r}}; f) = \mathds{1}_\text{\tiny Smooth}(\tilde{\bm{r}}) \cdot \bigl\lvert \bm{n}(\tilde{\bm{r}}; f) - \bm{n}_{\tilde{\bm{a}}}^{\text{\tiny Smooth}} \bigr\rvert , \end{equation} where $\mathds{1}_\text{\tiny Smooth}(\tilde{\bm{r}})$ indicates whether the pixel $\tilde{\bm{a}}$ associated with the ray $\tilde{\bm{r}}$ belongs to a textureless area. In practice, we identify the textureless areas for a surface $\mathcal{S} = \left\{\bm{x} \in \mathbb{R}^3 | f(\bm{x}) = 0\right\}$ as follows. We first use MVS methods to produce a sparse set of depth and normal predictions, to which we apply Poisson reconstruction \cite{kazhdan2006poisson, kazhdan2013screened} and obtain a watertight surface mesh $\mathcal{M^\text{\tiny MVS}}$ (\cref{fig:planar_normal}(b)). We prune those triangle faces in $\mathcal{M^\text{\tiny MVS}}$ that contain none of the depths and normals predicted by the MVS methods, resulting in $\mathcal{M}_{-}^\text{\tiny MVS}$. For an image $\bm{I}$, we conduct ray casting and treat the pixels whose associated rays do not hit $\mathcal{M}_{-}^\text{\tiny MVS}$ as those belonging to textureless areas (\cref{fig:planar_normal}(f)). The overall scheme is illustrated in \cref{fig:planar_normal}. Please refer to the supplementary for more details. \subsection{Regularization of Multi-View Stereo from Neural Implicit Surface Learning} \label{subsec:reg_MVS_from_neural} In \cref{eq:mvs_em}, MVS methods optimize the depth and normal predictions by maximizing a posterior probability that involves the prior $P(d, \bm{n})$ (cf. line 2 in \cref{eq:mvs_em}). Without other constraints, $P(d, \bm{n})$ is usually set as a uniform distribution. In HelixSurf, it is natural to use the depth and normal learned in the current iteration of neural implicit learning as the prior. More specifically, given $d_l$, $\bm{n}_l$, $\mathcal{A}^{src}$, and $\mathcal{Z}^{src}$ denoted as in Section \ref{sec:preliminary}, let $d_l^{\text{\tiny Neural}}$ and $\bm{n}_l^{\text{\tiny Neural}}$ be the depth and normal learned in the current iteration of neural implicit learning for the corresponding pixel in an observed image. 
We can improve MVS predictions using \begin{equation} \begin{aligned} \label{eq:mvs_prior} \{d_l^{*}, \bm{n}_l^{*}\} = \arg\max P(d_l, \bm{n}_l | \mathcal{A}^{src}, \mathcal{Z}^{src}, d_l^{\text{\tiny Neural}}, \bm{n}_l^{\text{\tiny Neural}}) \\ \propto \arg\max P( \mathcal{A}^{src} | d_l, \bm{n}_l, \mathcal{Z}^{src}) P(d_l, \bm{n}_l | d_l^{\text{\tiny Neural}}, \bm{n}_l^{\text{\tiny Neural}}) . \end{aligned} \end{equation} Qualitative results in Section \ref{subsec:ablation} show that MVS methods with priors of a uniformly random distribution tend to produce noisy results with outliers, which would impair the iterative learning in HelixSurf. Instead, the proposed (\ref{eq:mvs_prior}) gives better results. \subsection{Improving the Efficiency by Establishing Dynamic Space Occupancies} \label{subsec:occupancy_grids} Differentiable volume rendering suffers from the heavy cost of point sampling along rays for accumulating pixel colors \cite{mildenhall2020nerf,wang2021neus,yariv2021volume}. While a common coarse-to-fine sampling strategy is used in these methods, it still counts as the main computation. In this work, we are inspired by Instant-NGP and propose a simple yet effective sampling scheme, which establishes dynamic occupancies in the 3D scene space and adaptively guides the point sampling along rays. \cref{fig:pipeline} gives the illustration. More specifically, we partition the 3D scene space regularly using a set $\mathcal{G}_\text{\tiny{Occu}}$ of occupancy grids of size $64^3$, and let the occupancy of any voxel partitioned and indexed by $\{ g \subset \mathcal{G}_\text{\tiny{Occu}} \}$ be $o_g$. During training of HelixSurf, we update $o_g$ using exponential moving average (EMA), i.e., $o_g^{\text{\tiny EMA}} \leftarrow \max(\sigma_g, \alpha (\sigma_g - o_g^{\text{\tiny EMA}}) + o_g^{\text{\tiny EMA}} )$, where $\sigma_g$ is the density at $g$ given by the inducing SDF function $f$ and $\alpha = 0.05$ is a decaying factor. We set the voxel indexed by $g$ as occupied if $o_g^{\text{\tiny EMA}} > \tau_\text{\tiny{Occu}}$, where $\tau_\text{\tiny{Occu}}$ is a pre-set threshold. Non-occupied voxels will be skipped directly when performing point sampling along each ray, thus improving the efficiency of differentiable volume rendering used in HelixSurf. More details of our scheme are given in the supplementary material. \cref{fig:head}(b) shows that our scheme improves the training efficiency at orders of magnitude when compared with existing neural implicit surface learning methods. \subsection{Training and Inference} \label{subsec:training_and_inference} At each iteration of HelixSurf training, we randomly sample pixels from the images in $\mathcal{I}$ and define the set of camera rays passing through these pixels as $\mathcal{R} \cup \tilde{\mathcal{R}}$, where $\mathcal{R}$ and $\tilde{\mathcal{R}}$ contain rays passing through texture-rich and textureless areas respectively. 
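The point sampling along these rays follows the dynamic occupancy grids of Section \ref{subsec:occupancy_grids}; a minimal, self-contained sketch of the EMA update and of skipping empty voxels is given below. All names are illustrative, a toy density oracle stands in for the SDF-induced density, and the fixed threshold is only an example of the capped threshold described above.
\begin{verbatim}
import numpy as np

class OccupancyGrid:
    """64^3 occupancy grid updated by the EMA rule of Sec. 3.4 (sketch)."""

    def __init__(self, res=64, alpha=0.05, tau=1e-2):
        self.res, self.alpha, self.tau = res, alpha, tau
        self.o = np.zeros((res, res, res))   # o_g^EMA per voxel

    def update(self, density_fn):
        """o_g <- max(sigma_g, alpha*(sigma_g - o_g) + o_g)."""
        # densities at voxel centers of the unit cube (toy evaluation)
        lin = (np.arange(self.res) + 0.5) / self.res
        x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
        sigma = density_fn(np.stack([x, y, z], -1))
        self.o = np.maximum(sigma, self.alpha * (sigma - self.o) + self.o)

    def occupied(self, pts):
        """True for sample points falling in occupied voxels."""
        idx = np.clip((pts * self.res).astype(int), 0, self.res - 1)
        return self.o[idx[:, 0], idx[:, 1], idx[:, 2]] > self.tau

# toy usage: density concentrated around the plane z = 0.5
grid = OccupancyGrid()
grid.update(lambda p: np.exp(-200.0 * (p[..., 2] - 0.5) ** 2))
# keep only samples lying in occupied space along a ray through the cube
t = np.linspace(0.0, 1.0, 256)
pts = np.stack([0.5 * np.ones_like(t), 0.5 * np.ones_like(t), t], -1)
kept = pts[grid.occupied(pts)]
print(len(kept), "of", len(t), "samples kept")
\end{verbatim}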
We optimize the following problem to learn the MLP based functions $f$ and $\bm{c}$ \begin{equation} \begin{aligned} \min_{f, \bm{c}} (& \sum_{\bm{r} \in \mathcal{R}} \mathcal{L}_\text{\tiny Neural}(\bm{r}; f, \bm{c}) + \sum_{\tilde{\bm{r}} \in \tilde{\mathcal{R}}} \mathcal{L}_\text{\tiny Neural}(\tilde{\bm{r}}; f, \bm{c}) + \\ & \lambda_\text{\tiny MVSRegu} \sum_{\bm{r} \in \mathcal{R}} \mathcal{L}_\text{\tiny MVSRegu}(\bm{r}; f) + \lambda_\text{\tiny Smooth} \sum_{\tilde{\bm{r}} \in \tilde{\mathcal{R}}} \mathcal{L}_\text{\tiny Smooth}(\tilde{\bm{r}}; f) + \\ & \quad\quad\quad \lambda_\text{\tiny Eik} \sum_{\bm{x} \in \mathbb{R}^3} \mathcal{L}_\text{Eik}(\bm{x}; f) ), \end{aligned} \end{equation} where $\mathcal{L}_\text{Eik}(\bm{x}; f)$ is the Eikonal loss \cite{gropp2020implicit} that regularizes the learning of the SDF $f$, and $\lambda_\text{\tiny MVSRegu}, \lambda_\text{\tiny Smooth}, \lambda_\text{\tiny Eik}$ are hyperparameters weighting the different loss terms. During inference, we apply the Marching Cubes algorithm \cite{lorensen1987marching} to extract the underlying surface from the learned SDF $f$. \section{Experiments} \label{sec:experiments} \noindent{\textbf{Datasets}} We conduct experiments on the benchmark datasets ScanNet \cite{dai2017scannet} and Tanks and Temples \cite{knapitsch2017tanks}. ScanNet has 1613 indoor scenes with precise camera calibration parameters and surface reconstructions obtained via the state-of-the-art SLAM technique \cite{dai2017bundlefusion}. Tanks and Temples has multiple large-scale indoor and outdoor scenes. For ScanNet, we follow ManhattanSDF \cite{guo2022neural} and select 4 scenes to conduct our experiments. For Tanks and Temples, we follow MonoSDF \cite{Yu2022MonoSDF} and select four large-scale indoor scenes to further investigate the extensibility of HelixSurf. \noindent{\textbf{Implementation Details}} We implement HelixSurf in the PyTorch \cite{paszke2019pytorch} framework with CUDA extensions, and customize a PM-MVS module for HelixSurf following COLMAP \cite{schonberger2016pixelwise} and ACMP \cite{xu2020planar}. We use the Adam optimizer \cite{kingma2014adam} with a learning rate of 1e-3 for network training, and set $\lambda_\text{\tiny{MVSRegu}}, \lambda_\text{\tiny{Smooth}}, \lambda_\text{\tiny{Eik}}$ to 0.5, 0.01, 0.03, respectively. For each iteration, we sample 5000 rays to train the model and use customized CUDA kernels for calculating the $\alpha$-composited colors of the sampled points along each ray as in \cref{eq:render_color}. To maintain the dynamic occupancy grids, we update the grids every 16 training iterations and set the density threshold $\tau_\text{\tiny{Occu}}$ to the mean grid density capped at 1e-2. \noindent{\textbf{Evaluation Metrics}} For 3D reconstruction, we assess the reconstructed surfaces in terms of Accuracy, Completeness, Precision, Recall, and F-score. To evaluate the MVS predictions, we compute the distance differences for depth maps and the angular errors for normal maps. Please refer to the supplementary for more details about these evaluation metrics. 
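For orientation, these surface metrics are typically computed from point clouds sampled on the predicted and ground-truth surfaces, as in the following illustrative sketch; the brute-force nearest-neighbour search and the 5cm threshold are assumptions made here and need not match the exact evaluation protocol.
\begin{verbatim}
import numpy as np

def nn_dist(src, dst):
    """For each point in src, distance to its nearest neighbour in dst."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.min(axis=1)

def surface_metrics(pred, gt, thresh=0.05):
    """Accuracy/Completeness (mean distances) and Precision/Recall/F-score
    at a distance threshold (5 cm assumed here for illustration)."""
    d_pred_to_gt = nn_dist(pred, gt)    # accuracy direction
    d_gt_to_pred = nn_dist(gt, pred)    # completeness direction
    acc, comp = d_pred_to_gt.mean(), d_gt_to_pred.mean()
    prec = (d_pred_to_gt < thresh).mean()
    rec = (d_gt_to_pred < thresh).mean()
    f = 2 * prec * rec / (prec + rec + 1e-8)
    return acc, comp, prec, rec, f

# toy usage: a slightly noisy copy of the ground-truth points
gt = np.random.rand(1000, 3)
pred = gt + 0.01 * np.random.randn(1000, 3)
print(surface_metrics(pred, gt))
\end{verbatim}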
\begin{table} \centering \scalebox{0.66}{ \begin{tabular}{c|lllll|c} \hline Method & Acc$\downarrow$ & Comp$\downarrow$ & Prec$\uparrow$ & Recall$\uparrow$ & F-score$\uparrow$ & Time$\downarrow$ \\ \hline & & & & & & \\[-1em] COLMAP\cite{schonberger2016pixelwise} & 0.047 \tikz\draw[bronze,fill=bronze] (0,0) circle (.6ex); & 0.235 & 0.711 & 0.441 & 0.537 & \textit{133} \\ & & & & & & \\[-1em] ACMP\cite{xu2020planar} & 0.118 & 0.081 & 0.531 & 0.581 & 0.555 & \textit{10} \\ & & & & & & \\[-1em] \hline & & & & & & \\[-1em] NeRF\cite{mildenhall2020nerf} & 0.735 & 0.177 & 0.131 & 0.290 & 0.176 & $>1$k \\ & & & & & & \\[-1em] VolSDF\cite{yariv2021volume} & 0.414 & 0.120 & 0.321 & 0.394 & 0.346 & 825 \\ & & & & & & \\[-1em] NeuS\cite{wang2021neus} & 0.179 & 0.208 & 0.313 & 0.275 & 0.291 & 531 \\ & & & & & & \\[-1em] \hline & & & & & & \\[-1em] $\text{Manhattan-SDF}^\dag$\cite{guo2022neural} & 0.053 & 0.056 & 0.715 \tikz\draw[bronze,fill=bronze] (0,0) circle (.6ex); & 0.664 & 0.688 & 528 \\ & & & & & & \\[-1em] $\text{NeuRIS}^\dag$\cite{wang2022neuris} & 0.050 & 0.049 \tikz\draw[bronze,fill=bronze] (0,0) circle (.6ex); & 0.714 & 0.670 \tikz\draw[bronze,fill=bronze] (0,0) circle (.6ex); & 0.691 \tikz\draw[bronze,fill=bronze] (0,0) circle (.6ex); & 406 \\ & & & & & & \\[-1em] $\text{MonoSDF}^\dag$\cite{Yu2022MonoSDF} & 0.035 \tikz\draw[gold,fill=gold] (0,0) circle (.6ex); & 0.048 \tikz\draw[silver,fill=silver] (0,0) circle (.6ex); & 0.799 \tikz\draw[gold,fill=gold] (0,0) circle (.6ex); & 0.681 \tikz\draw[silver,fill=silver] (0,0) circle (.6ex); & 0.733 \tikz\draw[silver,fill=silver] (0,0) circle (.6ex); & 708 \\ & & & & & & \\[-1em] \hline & & & & & & \\[-1em] \textbf{HelixSurf} & 0.038 \tikz\draw[silver,fill=silver] (0,0) circle (.6ex); & 0.044 \tikz\draw[gold,fill=gold] (0,0) circle (.6ex); & 0.786 \tikz\draw[silver,fill=silver] (0,0) circle (.6ex); & 0.727 \tikz\draw[gold,fill=gold] (0,0) circle (.6ex); & 0.755 \tikz\draw[gold,fill=gold] (0,0) circle (.6ex); & \textbf{33} \\ \hline \end{tabular} } \caption{\textbf{Reconstruction metrics comparisons on ScanNet\cite{dai2017scannet}}. We compare our method with the state-of-the-art neural implicit surface learning methods\cite{mildenhall2020nerf, wang2021neus, yariv2021volume, guo2022neural, wang2022neuris, Yu2022MonoSDF} and PatchMatch based multi-view stereo methods (PM-MVS)\cite{schonberger2016pixelwise, xu2020planar}. Methods marked with $\dag$ are assisted with auxiliary training data, and vice versa. We mark the methods performing with least error using gold \protect\tikz\draw[gold,fill=gold] (0,0) circle (.6ex);, silver \protect\tikz\draw[silver,fill=silver] (0,0) circle (.6ex);, and bronze \protect\tikz\draw[bronze,fill=bronze] (0,0) circle (.6ex); \ medals. The last column shows time consumption (in minutes) for {\textit{PM-MVS}} methods, {Nueral implicit surface learning} methods, and {\textbf{HelixSurf}}. Note that the time for HelixSurf includes both MVS inference and neural implicit surface learning. } \label{table:benchmark} \end{table} \begin{figure*} \centering \begin{minipage}[t]{1.0\linewidth} \centering \includegraphics[width=\textwidth]{images/main/comparison1.pdf} \subcaption{Comparisons with existing methods when these methods do not use auxiliary training data. } \label{fig:comparison1} \end{minipage} \quad \begin{minipage}[t]{1.0\linewidth} \includegraphics[width=\textwidth]{images/main/comparison2.pdf} \subcaption{Comparisons with existing methods when they invoke the use of auxiliary training data. 
Note that our HelixSurf does not use auxiliary training data. } \label{fig:compariso2} \end{minipage} \caption{ \textbf{Qualitative geometry comparisons on ScanNet.} Compared to existing methods, our method can better reconstruct the scene details (\eg the lamp, the cabinet and chair handles) and the smooth regions (\eg the floor and walls). Surface normals are visualized as coded colors. } \label{fig:comparison} \end{figure*} \subsection{Comparisons} \label{subsec:comparisons} We evaluate the 3D geometry metrics and time consumption of our proposed HelixSurf against existing methods on ScanNet \cite{dai2017scannet}, as shown in \cref{table:benchmark}. Each quantitative result is averaged over all the selected scenes. For the geometry comparison, HelixSurf clearly surpasses existing methods on almost every metric, even though some of the compared methods are assisted with auxiliary training data. The qualitative results in \cref{fig:comparison} further support the quantitative analyses. Without auxiliary training data, HelixSurf is capable of handling the textureless surface areas that other methods fail to tackle, as shown in \cref{fig:comparison}(a). Moreover, HelixSurf produces finer object details than the methods using auxiliary training data, as shown in \cref{fig:comparison}(b). As for learning time, the data in \cref{table:benchmark} indicate that HelixSurf improves the learning efficiency by orders of magnitude when compared with existing neural implicit surface learning methods, even when the MVS inference time is included. \subsection{Ablation Studies} \label{subsec:ablation} HelixSurf is optimized with the iterative intertwined regularization stated in Section \ref{sec:methods}. We design elaborate experiments to evaluate the efficacy of this regularization. Furthermore, the sampling guided by dynamic occupancy grids (\cf Section \ref{subsec:occupancy_grids}) is essential to realize fast training convergence. We thus compare it with the ordinary sampling alternative. These studies are conducted on the ScanNet dataset \cite{dai2017scannet}. \noindent{\textbf{Analysis on the regularization of neural implicit surface learning from MVS predictions}} The MVS inference results effectively regularize the neural implicit surface learning (\cf Section \ref{subsec:reg_neural_from_MVS}) and help the network capture fine details. The results in \cref{table:ablation} illustrate that the MVS predictions effectively promote the surface learning and that the \emph{regularized} MVS predictions further improve the quality of reconstruction. Nonetheless, the MVS predictions are less reliable on textureless surface areas; we thus leverage the homogeneity inside individual superpixels and devise a scheme (\cf Section \ref{subsec:handling_textureless_areas}) to regularize the learning on such areas. As shown in \cref{fig:ablation_planar}, our proposed scheme handles the textureless surface areas and reconstructs smoother surfaces while maintaining the details of non-planar regions. \noindent{\textbf{Analysis on the regularization of MVS from neural implicit surface learning}} During the training of the neural implicit surface, the underlying geometries are progressively recovered. We use the learned depths and normals as priors to regularize MVS, which clears up the artifacts produced by the ordinary MVS and enables the double helix to move forward and rise. 
\cref{fig:improved_geometric} qualitatively shows the inference results of the ordinary MVS method (\cref{fig:improved_geometric}(a)) and the regularized MVS (\cref{fig:improved_geometric}(b)). \cref{table:mvs_comparison} shows the quantitative comparison between the ordinary MVS method and our regularized one. Both qualitative and quantitative comparisons verify that this regularization eliminates noise and outliers and improves the quality of inference results. \noindent{\textbf{Efficacy of Sampling Guided by Dynamic Occupancy Grids}} To further alleviate the difficulties of optimization, we maintain dynamic occupancy grids and propose a sampling strategy to skip the sample points in empty space. The time consumption comparisons in \cref{table:benchmark} show that our training convergence is significantly faster than the existing neural implicit surface learning methods. The results in \cref{table:traintime_composition} present the timing consumptions of each part in the entire training process with and without the sampling strategy, respectively. \begin{table}[!htb] \centering \scalebox{0.61}{ \begin{tabular}{ccc|ccccc} \hline \multicolumn{3}{c|}{Regularization} & \multirow{3}{*}{Acc$\downarrow$} & \multirow{3}{*}{Comp$\downarrow$} & \multirow{3}{*}{Prec$\uparrow$} & \multirow{3}{*}{Recall$\uparrow$} & \multirow{3}{*}{F-score$\uparrow$} \\ \cline{1-3} \makecell{oridinary\\MVS} & \makecell{regularized\\MVS}& \makecell{Textureless\\Areas Handling} & \\ \hline & & & & & & & \\[-0.8em] & & & 0.179 & 0.208 & 0.313 & 0.275 & 0.291 \\ & & & & & & & \\[-0.8em] \checkmark & & & 0.059 & 0.076 & 0.661 & 0.605 & 0.632 \\ & & & & & & & \\[-0.8em] & \checkmark & & 0.051 & 0.066 & 0.711 & 0.649 & 0.679 \\ & & & & & & & \\[-0.8em] \checkmark & & \checkmark & 0.047 & 0.053 & 0.768 & 0.706 & 0.735 \\ & & & & & & & \\[-0.8em] & \checkmark & \checkmark & \textbf{0.038} & \textbf{0.044} & \textbf{0.786} & \textbf{0.727} & \textbf{0.755} \\ \hline \end{tabular} } \caption{Analyses on the regularization of neural implicit surface learning from MVS predictions.} \label{table:ablation} \end{table} \begin{figure}[ht] \centering \begin{minipage}[t]{0.43\linewidth} \centering \includegraphics[width=\textwidth]{images/main/non_planar.png} \subcaption{W/O smoothness on textureless surface areas} \label{fig:non_planar} \end{minipage} \quad \begin{minipage}[t]{0.43\linewidth} \includegraphics[width=\textwidth]{images/main/add_planar.png} \subcaption{With smoothness on textureless surface areas} \label{fig:add_planar} \end{minipage} \caption{ Visualization of example reconstruction without the use of smoothness scheme for textureless areas (a) and with the use of smoothness scheme (b). The colors encode surface normals. 
} \label{fig:ablation_planar} \end{figure} \begin{figure}[ht] \centering \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/main/uniform.png} \subcaption{ordinary MVS.} \label{fig:uniform} \end{minipage} \begin{minipage}[t]{0.49\linewidth} \includegraphics[width=\textwidth]{images/main/improved.png} \subcaption{regularized MVS.} \label{fig:improved} \end{minipage} \caption{Qualitative comparisons of inference results between the ordinary MVS method (a) and our regularized MVS (b).} \label{fig:improved_geometric} \end{figure} \begin{table}[!htb] \centering \scalebox{0.9}{ \begin{tabular}{c|cccc} \hline \multirow{2}{*}{\makecell{Method}} & \multicolumn{4}{c}{Depth map} \\ \cline{2-5} & Abs Diff$\downarrow$ & Abs Rel$\downarrow$ & Sq Rel$\downarrow$ & RMSE $\downarrow$\\ \hline & & & & \\[-0.9em] $\text{ordinary}$ & 0.067 & 0.098 & 0.020 & 0.147 \\ & & & & \\[-0.9em] $\text{regularized}$ & 0.053 & 0.085 & 0.011 & 0.106 \\ \hline \multirow{2}{*}{\makecell{Method}} & \multicolumn{4}{c}{Normal map} \\ \cline{2-5} & Mean $\downarrow$ & Median$\downarrow$ & RMSE$\downarrow$ & $\text{Prop}\_30^{\circ}\uparrow$ \\ \hline & & & & \\[-0.9em] $\text{ordinary}$ & 35.5$^{\circ}$ & 30.4$^{\circ}$ & 42.6$^{\circ}$ & 51.0\% \\ & & & & \\[-0.9em] $\text{regularized}$ & 27.8$^{\circ}$ & 20.2$^{\circ}$ & 35.3$^{\circ}$ & 67.4\% \\ \hline \end{tabular} } \caption{ Quantitative comparison between the ordinary MVS and our regularized MVS. } \label{table:mvs_comparison} \end{table} \begin{table}[!htb] \centering \scalebox{0.85}{ \begin{tabular}{l|c|c|c|c|c|c} \hline \makecell{Occ\\Grids} & MVS & \makecell{Texture-\\less} & Grid & \makecell{Training\\Forward} & \makecell{Training\\Backward} & \textbf{Total} \\ \hline w/ & \multirow{2}{*}{3.8} & \multirow{2}{*}{2.6} & 0.6 & 12.4 & 13.8 & \textbf{33.2} \\ \cline{1-1}\cline{4-7} w/o & & & - & 184 & 203 & 393.4 \\ \hline \end{tabular} } \caption{ Time consumption (in minutes) for each part of our training process with or without guidance of Dynamic Occupancy Grids. } \label{table:traintime_composition} \end{table} \begin{figure}[htbp] \centering \begin{minipage}[t]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{images/main/indoor1.png} \subcaption{} \label{fig:indoor1} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \includegraphics[width=\textwidth]{images/main/indoor2.png} \subcaption{} \label{fig:indoor2} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \includegraphics[width=\textwidth]{images/main/outdoor.png} \subcaption{} \label{fig:outdoor} \end{minipage} \caption{\textbf{Qualitative result of the reconstruction on Tanks and Temples \cite{knapitsch2017tanks}.} (a) and (b) are examples of indoor scenes. (c) is an outdoor scene.} \label{fig:TnT} \end{figure} \subsection{Real-world Large-scale Scene Reconstruction} To further examine the applicability and generalization of HelixSurf, we conduct experiments on an indoor subset from the Tanks and Temples \cite{knapitsch2017tanks} dataset. Results in \cref{fig:TnT}(a, b) show that HelixSurf achieves reasonable results on such large-scale indoor scenes. Furthermore, we evaluate HelixSurf on large-scale outdoor scenes from Tanks and Temples \cite{knapitsch2017tanks}. Surprisingly, HelixSurf has potential to handle large-scale outdoor scenes, as shown in \cref{fig:TnT}(c). Please refer to the supplementary for more results. {\small \bibliographystyle{ieee_fullname}
{ "arxiv_id": "2302.14304", "language": "en", "timestamp": "2023-03-01T02:08:59", "url": "https://arxiv.org/abs/2302.14304", "yymm": "2302" }
\section{Introduction} The theory of pseudo-differential operators and equations \cite{H83,Ta81,Tr80,E81} has a shorter history than many other subjects of mathematical analysis. Nevertheless, these operators and related boundary value problems arise widely in applied problems of physics and engineering (see, for example, \cite{SFO} and references therein). Discrete aspects of the theory are reflected in the mathematical literature more weakly \cite{R,Ru}, although such studies are closely related to the theory of Fourier series \cite{Ed82}. In our opinion the discrete theory is very important since it permits the use of computer calculations to solve concrete applied problems. We are interested in studying discrete pseudo-differential equations and their solvability in appropriate discrete functional spaces. There are certain approaches to studying discrete boundary value problems for partial differential equations, including the finite difference method \cite{S01,R02}. But these approaches are not applicable to studying discrete boundary value problems for elliptic pseudo-differential equations. For this reason the first author and colleagues have started to develop a discrete theory for elliptic pseudo-differential equations \cite{VV6,TV}. This is the main motivation, and we have started from certain canonical domains. The first considerations were related to the discrete $m$-dimensional space and half-space, and here we consider the discrete quadrant. We consider a special type of boundary conditions, namely integral conditions on the boundary. These conditions are nonlocal and may seem artificial. But there are many applied problems for partial differential equations with such boundary conditions \cite{AB,CK,KMT}, therefore this is a natural way. Moreover, such conditions appear naturally when determining the arbitrary functions in a general solution of an elliptic pseudo-differential equation. \section{Cones, periodic symbols, digital operators and equations} \subsection{Discrete spaces and transforms} Let $\mathbb Z^2$ be the integer lattice in the plane. Let $K=\{x\in\mathbb R^2: x=(x_1,x_2), x_1>0, x_2>0\}$ be a quadrant, $K_d=h\mathbb Z^2\cap K, h>0$. We consider functions of a discrete variable $u_d(\tilde x), \tilde x=(\tilde x_1,\tilde x_2)\in h\mathbb Z^2$. Let us denote $\mathbb T^2=[-\pi,\pi]^2, \hbar=h^{-1}$. We consider functions defined in $\hbar\mathbb T^2$ as periodic functions defined in $\mathbb R^2$ with basic square of periods $\hbar\mathbb T^2$. One can define the discrete Fourier transform of the function $u_d$ \[ (F_du_d)(\xi)\equiv\tilde u_d(\xi)=\sum\limits_{\tilde x\in h\mathbb Z^2}e^{-i\tilde x\cdot\xi}u_d(\tilde x)h^2,~~~\xi\in\hbar\mathbb T^2, \] if the latter series converges, and the function $\tilde u_d(\xi)$ is a periodic function in $\mathbb R^2$ with basic square of periods $\hbar\mathbb T^2$. This discrete Fourier transform preserves all properties of the integral Fourier transform, and the inverse discrete Fourier transform is \[ (F_d^{-1}\tilde u_d)(\tilde x)=\frac{1}{(2\pi)^2}\int\limits_{\hbar\mathbb T^2}e^{i\tilde x\cdot\xi}\tilde u_d(\xi)d\xi,~~~\tilde x\in h\mathbb Z^2. \] The discrete Fourier transform gives a one-to-one correspondence between the spaces $L_2(h\mathbb Z^2)$ and $L_2(\hbar\mathbb T^2)$ with the norms \[ ||u_d||_2=\left(\sum\limits_{\tilde x\in h\mathbb Z^2}|u_d(\tilde x)|^2h^2\right)^{1/2}, \quad ||\tilde u_d||_2=\left(\int\limits_{\xi\in\hbar\mathbb T^2}|\tilde u_d(\xi)|^2d\xi\right)^{1/2}. 
\] We need more general discrete functional spaces and we introduce such spaces using divided differences \cite{S01}. The divided differences of first order look as follows \[ (\Delta_1^{(1)}u_d)(\tilde x)=h^{-1}(u_d(\tilde x_1+h,\tilde x_2)-u_d(\tilde x_1,\tilde x_2)), \] \[ (\Delta_2^{(1)}u_d)(\tilde x)=h^{-1}(u_d(\tilde x_1,\tilde x_2+h)-u_d(\tilde x_1,\tilde x_2)), \] and their discrete Fourier transforms are given by formulas \[ \widetilde{(\Delta_k^{(1)}u_d)}(\xi)=h^{-1}(e^{-ih\cdot\xi_k}-1)\tilde u_d(\xi), k=1,2. \] The divided difference of second order is a divided difference of first order from divided difference of first order \[ (\Delta_1^{(2)}u_d)(\tilde x)=h^{-2}(u_d(\tilde x_1+2h,\tilde x_2) -2u_d(\tilde x_1+h,\tilde x_2)+u_d(\tilde x_1,\tilde x_2)), \] \[ (\Delta_2^{(2)}u_d)(\tilde x)=h^{-2}(u_d(\tilde x_1,\tilde x_2+2h) -2u_d(\tilde x_1,\tilde x_2+h)+u_d(\tilde x_1,\tilde x_2)), \] with the Fourier transform \[ \widetilde{(\Delta_k^{(2)}u_d)}(\xi)=h^{-2}(e^{-ih\cdot\xi_k}-1)^2\tilde u_d(\xi), k=1,2. \] Discrete analogue of the Laplacian is the following \[ (\Delta_du_d)(\tilde x)=(\Delta_1^{(2)}u_d)(\tilde x)+(\Delta_2^{(2)}u_d)(\tilde x), \] so that its Fourier transform is \[ \widetilde{(\Delta_du_d)}(\xi)=h^{-2}((e^{-ih\cdot\xi_1}-1)^2+(e^{-ih\cdot\xi_2}-1)^2)\tilde u_d(\xi). \] We use such discrete objects for constructing discrete Sobolev--Slobodetskii spaces to study wide class of discrete equations. First, we introduce discrete analogue of the Schwartz space $S(h\mathbb Z^2)$ as a set of discrete functions with finite semi-norms \[ |u_d|=\sup\limits_{\tilde x\in h\mathbb Z^2}(1+|\tilde x|)^l|\Delta^{({\bf k})}u_d(\tilde x)| \] for arbitrary $l\in\mathbb N, {\bf k}=(k_1,k_2), k_r\in\mathbb N, r=1,2$, \[ \Delta^{({\bf k})}u_d(\tilde x)=\Delta^{k_1}_1\Delta^{k_2}_2u_d(\tilde x). \] {\bf Definition 1.} {\it A discrete distribution is called a linear continuous functional defined on the space $S(h\mathbb Z^2)$. } A set of such distributions will be denoted by $S'(h\mathbb Z^2)$, and a value of the discrete distribution $f_d$ on the test discrete function $u_d\in S(h\mathbb Z^2)$ will be denoted by $(f_d,u_d)$. One can introduce a concept of a support for a discrete distribution. Namely, a support of the discrete function $u_d\in S(h\mathbb Z^2)$ is a subset of the set $h\mathbb Z^2$ such that $u_d(\tilde x)\neq 0$ for all points $\tilde x$ from this subset. For an arbitrary set $M\subset\mathbb R^2$ we denote $M_d=M\cap h\mathbb Z^2$, and then one says that $f_d=0$ in the discrete domain $M_d$ if $(f_d,u_d)=0, \forall u_d\in S(M_d),$ where $S(M_d)\subset S(h\mathbb Z^2)$ consists of discrete functions with supports in $M_d$. If $\widetilde M_d$ is a union of such $M_d$ where $f_d=0$ then support of the discrete distribution $f_d$ is the set $h\mathbb Z^2\setminus\widetilde M_d$. Similarly \cite{Vl} we can define standard operations in the space $S'(h\mathbb Z^2)$, but differentiation will be changed by divided difference of first order. These operations are described in \cite{VV6} in details, a convergence is meant as a weak convergence in the space $S'(h\mathbb Z^2)$. {\bf Example 1.} If the function $f_d(\tilde x)$ is locally summable then it generates the discrete distribution \begin{equation}\label{1} (f_d,u_d)=\sum\limits_{\tilde x\in h\mathbb Z^2}f_d(\tilde x)u_d(\tilde x)h^2,~~~\forall u_d\in S(h\mathbb Z^2). 
\end{equation} But there are different possibilities, for example, an analogue of the Dirac mass-function \[ (\delta_d,u_d)=u_d(0), \] which cannot be represented by the formula \eqref{1}. Let $\zeta^2=h^{-2}((e^{-ih\cdot\xi_1}-1)^2+(e^{-ih\cdot\xi_2}-1)^2)$. We introduce the following definition. {\bf Definition 2.} {\it The space $H^s(h\mathbb Z^2)$ consists of discrete distributions and is the closure of the space $S(h\mathbb Z^2)$ with respect to the norm \begin{equation}\label{2} ||u_d||_s=\left(\int\limits_{\hbar\mathbb T^2}(1+|\zeta^2|)^s|\tilde u_d(\xi)|^2d\xi\right)^{1/2}. \end{equation} } Let us recall that many properties of such discrete spaces were studied in \cite{F}. Varying the parameter $h$ in \eqref{2} we obtain different norms which are equivalent to the $L_2$-norm, but the constants in this equivalence depend on $h$. In our constructions all constants do not depend on $h$. {\bf Definition 3.} {\it The space $H^s(K_d)$ consists of discrete distributions from $H^s(h\mathbb Z^2)$ whose supports belong to the set $\overline{K_d}$. A norm in the space $H^s(K_d)$ is induced by the norm of the space $H^s(h\mathbb Z^2)$. The space $H^s_0(K_d)$ consists of discrete distributions $f_d\in S'(h\mathbb Z^2)$ with supports in $K_d$, and these discrete distributions must admit a continuation into the space $H^s(h\mathbb Z^2)$. A norm in the space $H^s_0(K_d)$ is given by the formula \[ ||f_d||^+_s=\inf||\ell f_d||_s, \] where the infimum is taken over all continuations $\ell$. } The Fourier image of the space $H^s(K_d)$ will be denoted by $\widetilde H^s(K_d)$. \subsection{Symbols, operators and projectors} Let $\widetilde A_d(\xi)$ be a measurable periodic function in $\mathbb R^2$ with basic square of periods $\hbar\mathbb T^2$. Such functions are called symbols. {\bf Definition 4.} {\it A digital pseudo-differential operator $A_d$ with the symbol $\widetilde A_d(\xi)$ in the discrete quadrant $K_d$ is called an operator of the following type \begin{equation}\label{3} (A_du_d)(\tilde x)=\sum\limits_{\tilde y\in h\mathbb Z^2}h^2\int\limits_{\hbar\mathbb T^2}\widetilde A_d(\xi)e^{i(\tilde x-\tilde y)\cdot\xi}u_d(\tilde y)d\xi,~~~\tilde x\in K_d, \end{equation} } We say that the operator $A_d$ is elliptic if \[ ess~\inf_{\xi\in\hbar\mathbb T^2}|A_d(\xi)|>0. \] A more general digital pseudo-differential operator with a symbol $\widetilde A_d(\tilde x,\xi)$ depending on the spatial variable $\tilde x$, \[ (A_du_d)(\tilde x)=\sum\limits_{\tilde y\in h\mathbb Z^2}h^2\int\limits_{\hbar\mathbb T^2} A_d(\tilde x,\xi)e^{i(\tilde x-\tilde y)\cdot\xi}u_d(\tilde y)d\xi,~~~\tilde x\in K_d, \] can be defined in the same way, but here we consider only operators of type \eqref{3}. We consider symbols satisfying the condition \begin{equation}\label{100} c_1(1+|\zeta^2|)^{\alpha/2}\leq|A_d(\xi)|\leq c_2(1+|\zeta^2|)^{\alpha/2} \end{equation} with constants $c_1, c_2$ that do not depend on $h$. The number $\alpha\in\mathbb R$ is called the order of the digital pseudo-differential operator $A_d$. The following simple result can be proved easily. {\bf Lemma 1.} {\it A digital pseudo-differential operator $A_d$ with the symbol $\widetilde A_d(\xi)$ is a linear bounded operator $H^s(h\mathbb Z^2)\to H^{s-\alpha}(h\mathbb Z^2)$ with a norm that does not depend on $h$. } We study the solvability of the discrete equation \begin{equation}\label{4} (A_du_d)(\tilde x)=v_d(\tilde x),~~~\tilde x\in K_d, \end{equation} in the space $H^s(K_d)$ assuming that $v_d\in H^{s-\alpha}_0(K_d)$. 
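For orientation, when the symbol does not depend on the spatial variable the operator $A_d$ acts as multiplication by the symbol on the Fourier side. The following NumPy sketch applies such an operator numerically on a truncated lattice; the truncation, the quadrature over $\hbar\mathbb T^2$ and the chosen symbol are illustrative and are not part of the theory.
\begin{verbatim}
import numpy as np

h = 0.5
hbar = 1.0 / h
N = 16                                    # truncated lattice: x_k = h*j, j = 0..N-1
j = np.arange(N)
x1, x2 = np.meshgrid(h * j, h * j, indexing="ij")

# xi-grid over the square of periods [-hbar*pi, hbar*pi]^2 (midpoint rule)
M = 64
xi = -hbar * np.pi + (np.arange(M) + 0.5) * (2 * hbar * np.pi / M)
dxi = 2 * hbar * np.pi / M
X1, X2 = np.meshgrid(xi, xi, indexing="ij")

def F_d(u):
    """Discrete Fourier transform of a truncated lattice function."""
    phase = np.exp(-1j * (x1[..., None, None] * X1 + x2[..., None, None] * X2))
    return (u[..., None, None] * phase).sum((0, 1)) * h * h

def F_d_inv(ut):
    """Inverse discrete Fourier transform evaluated at the lattice points."""
    phase = np.exp(1j * (x1[..., None, None] * X1 + x2[..., None, None] * X2))
    return (ut[None, None] * phase * dxi * dxi).sum((2, 3)) / (2 * np.pi) ** 2

# zeta^2 on the xi-grid and an illustrative elliptic symbol of order 2
zeta2 = hbar**2 * ((np.exp(-1j * h * X1) - 1) ** 2 + (np.exp(-1j * h * X2) - 1) ** 2)
A = 1.0 + np.abs(zeta2)

u = np.exp(-(x1 - 2) ** 2 - (x2 - 2) ** 2)  # a discrete function on the quadrant
print(np.allclose(F_d_inv(F_d(u)).real, u)) # inversion is exact on this lattice
Au = F_d_inv(A * F_d(u)).real               # (A_d u_d) on the truncated quadrant
print(Au.shape)
\end{verbatim}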
We will use a certain special domain in the two-dimensional complex space $\mathbb C^2$. A domain of the type ${\mathcal T}_h(K)=\hbar\mathbb T^2+iK$ is called a tube domain over the quadrant $K$, and we will consider analytical functions $f(x+i\tau)$ in the domain ${\mathcal T}_h(K)$. Let us introduce the periodic Bochner kernel, similar to \cite{Vl}, \[ B_h(z)=\sum\limits_{\tilde x\in K_d}e^{i\tilde x\cdot(\xi+i\tau)}h^2,~~~\xi\in\hbar\mathbb T^2,~~~\tau\in K, \] and the corresponding integral operator \[ (B_h\tilde u_d)(\xi)=\lim\limits_{\tau\to 0, \tau\in K}\frac{1}{4\pi^2}\int\limits_{\hbar\mathbb T^2}B_h(\xi+i\tau-\eta)\tilde u_d(\eta)d\eta. \] {\bf Lemma 2.} {\it For the quadrant $K$ the operator $B_h$ has the following form \[ (B_h\tilde u_d)(\xi)=\frac{h^2}{8\pi^2}\int\limits_{\mathbb T^2}\tilde u_d(\eta)d\eta+\lim\limits_{\tau\to 0+}\frac{ih}{8\pi^2}\int\limits_{\mathbb T^2}\cot\frac{h(\xi_1-\eta_1+i\tau_1)}{2}\tilde u_d(\eta)d\eta+ \] \[ +\lim\limits_{\tau\to 0+}\frac{ih}{8\pi^2}\int\limits_{\mathbb T^2}\cot\frac{h(\xi_2-\eta_2+i\tau_2)}{2}\tilde u_d(\eta)d\eta- \] \[ -\lim\limits_{\tau\to 0+}\frac{h^2}{8\pi^2}\int\limits_{\mathbb T^2}\cot\frac{h(\xi_1-\eta_1+i\tau_1)}{2}\cot\frac{h(\xi_2-\eta_2+i\tau_2)}{2}\tilde u_d(\eta)d\eta, \] and $B_h$ is a linear bounded operator $H^s(\hbar\mathbb T^2)\rightarrow H^s(\hbar\mathbb T^2)$ for $|s|<1/2$. Moreover, the operator $B_h$ is a projector $\widetilde H^s(h\mathbb Z^2)\rightarrow \widetilde H^s(K_d)$. } {\bf Proof.} The corresponding calculations for the one-dimensional discrete cone were done in \cite{VV1}. We use these evaluations, adapting them to our two-dimensional case. Since \[ \sum\limits_{\tilde x_k\in h\mathbb Z_+}e^{-i\tilde x_kz_k}h=\frac{h}{2}-\frac{ih}{2}\cot\frac{hz_k}{2},~~~z_k=\xi_k+i\tau_k,~~~k=1,2, \] multiplying the two factors and applying the Fourier correspondence between products and convolutions we obtain the assertion. The boundedness of the one-dimensional operator with the kernel $h\cot\frac{hz}{2}$ for $|s|<1/2$ was proved in \cite{F}, Theorem 6; the two-dimensional case can be treated by the same method. $\blacksquare$ {\bf Remark 1.} The operator $B_h$ is a so-called periodic bi-singular operator. Using classical results for Cauchy type integrals \cite{Ga,Mu} one can evaluate the boundary value, but it is not important here. Since these formulas are very cumbersome, we can make some simplifications without loss of generality. For example, we can consider the space $S_1(h\mathbb Z^2)\subset S(h\mathbb Z^2)$ of functions vanishing on the coordinate axes and consider the space $H^s(h\mathbb Z^2)$ as the closure of the set $S_1(h\mathbb Z^2)$, assuming that all functions of a discrete variable vanish on the coordinate axes. In this case the first three summands in $B_h$ will be zero. {\bf Lemma 3.} {\it If $|s|<1/2$ then the space $\widetilde H^s(h\mathbb Z^2)$ is uniquely represented as the direct sum \[ \widetilde H^s(h\mathbb Z^2)=\widetilde H^{s}(K_d)\oplus\widetilde H^{s}(h\mathbb Z^2\setminus K_d) \] } {\bf Proof.} It is a simple consequence of Lemma 2. Indeed, the unique representation of the function $\tilde f\in\widetilde H^s(h\mathbb Z^2)$ is the following \[ \tilde f=B_h\tilde f+(I-B_h)\tilde f. \] The uniqueness of such a representation holds only for $|s|<1/2$. $\blacksquare$ To describe the solvability picture for the discrete equation \eqref{4} we need some additional elements of multidimensional complex analysis. We give them in the next section. 
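For computations it is convenient to use, instead of the singular-integral form of Lemma 2, the characterization of $B_h$ as the Fourier image of restriction to the discrete quadrant, i.e. $B_h\tilde u_d=F_d(\chi_{K_d}\cdot F_d^{-1}\tilde u_d)$ with $\chi_{K_d}$ the indicator of $K_d$; this is an interpretation of the projector property of Lemma 2 rather than a statement taken from the text. The sketch below (truncated lattice, illustrative quadrature) checks numerically that $B_h$ acts as a projector and realizes the decomposition of Lemma 3.
\begin{verbatim}
import numpy as np

h = 0.5
js = np.arange(-8, 8)                       # truncated lattice h*js in each axis
x1, x2 = np.meshgrid(h * js, h * js, indexing="ij")
chi_K = (x1 > 0) & (x2 > 0)                 # indicator of the discrete quadrant K_d

hbar = 1.0 / h
M = 48                                      # xi-grid over hbar*T^2 (midpoint rule)
xi = -hbar * np.pi + (np.arange(M) + 0.5) * (2 * hbar * np.pi / M)
dxi = 2 * hbar * np.pi / M
X1, X2 = np.meshgrid(xi, xi, indexing="ij")

def F_d(u):                                 # discrete Fourier transform
    ph = np.exp(-1j * (x1[..., None, None] * X1 + x2[..., None, None] * X2))
    return (u[..., None, None] * ph).sum((0, 1)) * h * h

def F_d_inv(ut):                            # inverse transform on the lattice
    ph = np.exp(1j * (x1[..., None, None] * X1 + x2[..., None, None] * X2))
    return (ut[None, None] * ph * dxi * dxi).sum((2, 3)) / (2 * np.pi) ** 2

def B_h(ut):                                # projector onto Fourier images of
    # functions supported in K_d (.real is safe: lattice functions are real here)
    return F_d(chi_K * F_d_inv(ut).real)

u = np.exp(-(x1 - 1) ** 2 - (x2 + 1) ** 2)  # supported on both sides of K_d
ut = F_d(u)
plus = B_h(ut)                              # decomposition of Lemma 3
minus = ut - plus
print(np.allclose(B_h(plus), plus, atol=1e-8))  # B_h is (numerically) a projector
print(np.allclose(plus + minus, ut))            # direct-sum decomposition
\end{verbatim}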
\section{Periodic wave factorization} This concept is a periodic analogue of the wave factorization \cite{V0}. The first preliminary considerations and results were described in \cite{V2,V3,V4,V5}. {\bf Definition 5.} {\it A periodic wave factorization for the elliptic symbol $A_d(\xi)\in E_{\alpha}$ is called its representation in the form \[ A_d(\xi)=A_{d,\neq}(\xi)A_{d,=}(\xi), \] where the factors $A_{d,\neq}(\xi), A_{d,=}(\xi)$ admit analytical continuation into the tube domains ${\mathcal T}_h(K), {\mathcal T}_h(-K)$ respectively, with the estimates \[ c_1(1+|\hat\zeta^2|)^{\frac{\ae}{2}}\leq|A_{d,\neq}(\xi+i\tau)|\leq c'_1(1+|\hat\zeta^2|)^{\frac{\ae}{2}}, \] \[ c_2(1+|\hat\zeta^2|)^{\frac{{\alpha-\ae}}{2}}\leq|A_{d,=}(\xi-i\tau)|\leq c'_2(1+|\hat\zeta^2|)^{\frac{{\alpha-\ae}}{2}}, \] where the constants $c_1, c'_1, c_2, c'_2$ do not depend on $h$, \[ \hat\zeta^2\equiv\hbar^2\left((e^{-ih(\xi_1+i\tau_1)}-1)^2+(e^{-ih(\xi_2+i\tau_2)}-1)^2\right), \] \[ \xi=(\xi_1,\xi_2)\in\hbar\mathbb T^2,~~~\tau=(\tau_1,\tau_2)\in K. \] The number $\ae\in{\mathbb R}$ is called the index of the periodic wave factorization. } Unfortunately, we have no algorithm to construct the factorization, but there are certain examples of periodic symbols which admit such a factorization. We give one of them. If $f$ is an arbitrary function of a discrete variable, $f\in S(h\mathbb Z^2)$, $supp~f\subset K_d\cup(-K_d)$, then we have \[ f=\chi_+f+\chi_-f, \] where $\chi_{\pm}$ are the indicators of $\pm K_d$. Applying the discrete Fourier transform we obtain the representation $\tilde f=\tilde f_++\tilde f_-$, and $\tilde f_{\pm}$ admit an analytical continuation into ${\mathcal T}_{h}(\pm K)$ according to Lemma 2. Thus, we can write $\exp\tilde f=\exp\tilde f_+\cdot\exp\tilde f_-$, and therefore we obtain a periodic wave factorization with index zero for the function $\exp\tilde f$. {\it Everywhere below we assume the existence of such a periodic wave factorization for the symbol $A_d(\xi)$ with index $\ae$.} \subsection{Unique solvability} This section is devoted to the simplest case, when a solution of the equation \eqref{4} exists and is unique. {\bf Theorem 1.} {\it Let $|\ae-s|<1/2$. Then the equation \eqref{4} has a unique solution for an arbitrary right-hand side $v_d\in H^{s-\alpha}_0(K_d)$, and it is given by the formula \[ \tilde u_d(\xi)=A^{-1}_{d,\neq}(\xi)B_h(A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)), \] where $\ell v_d$ is an arbitrary continuation of $v_d$ into $H^{s-\alpha}(h\mathbb Z^2)$. } {\bf Proof.} Let $\ell v_d$ be an arbitrary continuation of $v_d\in H^{s-\alpha}_0(K_d)$ into $H^{s-\alpha}(h\mathbb Z^2)$. Let us introduce the function \[ w_d(\tilde x)=(\ell v_d)(\tilde x)-(A_du_d)(\tilde x), \] so that $w_d(\tilde x)=0$ for $\tilde x\notin K_d$. 
Now we write the equation \eqref{4} in the form \[ (A_du_d)(\tilde x)+w_d(\tilde x)=(\ell v_d)(\tilde x),~~~\tilde x\in h\mathbb Z^2, \] and after applying the discrete Fourier transform and periodic wave factorization we obtain \begin{equation}\label{5} A_{d,\neq}(\xi)\tilde u_d(\xi)+A^{-1}_{d,=}(\xi)\tilde w_d(\xi)=A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi),~~~\xi\in\hbar\mathbb T^2, \end{equation} We have the following inclusions according to Lemma 1 and Lemma 2 \[ A_{d,\neq}(\xi)\tilde u_d(\xi)\in\widetilde H^{s-\ae}(K_d),~~~A^{-1}_{d,=}(\xi)\tilde w_d(\xi)\in\widetilde H^{s-\ae}(h\mathbb Z^2\setminus K_d), \] \[ A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)\in\widetilde H^{s-\ae}(h\mathbb Z^2), \] and then according to Lemma 3 the right hand side of the equality \eqref{5} is uniquely represented by the sum \[ A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)=f^+_d(\xi)+f^-_d(\xi), \] where \[ f^+_d(\xi)=B_h(A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)),~~~f^-_d(\xi)=(I-B_h)(A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)) \] . Further, we rewrite the equality \eqref{5} \[ A_{d,\neq}(\xi)\tilde u_d(\xi)-f^+_d(\xi)=f^-_d(\xi)-A^{-1}_{d,=}(\xi)\tilde w_d(\xi) \] and using the uniqueness of the representation as the direct sum $\widetilde H^{s-\ae}(K_d)\oplus\widetilde H^{s-\ae}(h\mathbb Z^2\setminus K_d)$ we conclude that both left hand side and right hand side should be zero. Thus, \[ \tilde u_d(\xi)=A^{-1}_{d,\neq}(\xi)B_h(A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)), \] and Theorem 1 is proved. $\blacksquare$ \section{Discrete boundary value problem} In this section we consider more interesting case when the equation \eqref{4} has a lot of solutions. \subsection{Form of a discrete solution} This section uses some results from \cite{VV6} concerning a form of a discrete distribution supported at the origin. {\bf Theorem 2.} {\it Let $\ae-s=n+\delta, n\in\mathbb N, |\delta|<1/2$. Then a general solution of the equation \eqref{4} has the following form \[ \tilde u_d(\xi)=A^{-1}_{d,\neq}(\xi)Q_n(\xi)B_h(Q_n^{-1}(\xi)A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)) + \] \[ +A^{-1}_{d,\neq}(\xi)\left(\sum\limits_{k=0}^{n-1}\tilde c_k(\xi_1)\hat\zeta_2^k+\tilde d_k(\xi_2)\hat\zeta_1^k\right), \] where $Q_n(\xi)$ is an arbitrary polynomial of order $n$ of variables $\zeta_k=\hbar(e^{-ih\xi_k}-1), k=1,2,$ satisfying the condition \eqref{100} for $\alpha=n$, $\tilde c_k(\xi_1), \tilde d_k(\xi_2), k=0,1,\cdots,n-1,$ -- are arbitrary functions from $H^{s_k}(h\mathbb T), s_k=s-\ae+k-1/2$. The a priori estimate \[ ||u_d||_s\leq const\left(||f||^+_{s-\alpha}+\sum\limits_{k=0}^{n-1}([c_k]_{s_k}+[d_k]_{s_k})\right), \] holds, where $[\cdot]_{s_k}$ denotes a norm in $H^{s_k}(h\mathbb T)$, and $const$ does not depend on $h$. } {\bf Proof.} We start from the equality \eqref{5}. Let $Q_n(\xi)$ be an arbitrary polynomial of order $n$ of variables $\zeta_k=\hbar(e^{-ih\xi_k}-1), k=1,2,$ satisfying the condition \eqref{100} for $\alpha=n$. 
We multiply the equality \eqref{5} by $Q^{-1}_n(\xi)$: \begin{equation}\label{6} Q^{-1}_n(\xi)A_{d,\neq}(\xi)\tilde u_d(\xi)+Q^{-1}_n(\xi)A^{-1}_{d,=}(\xi)\tilde w_d(\xi)=Q^{-1}_n(\xi)A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi),~~~\xi\in\hbar\mathbb T^2. \end{equation} In view of Lemma 1 we have \[ Q^{-1}_n(\xi)A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)\in\widetilde H^{s-\ae+n}(h\mathbb Z^2), \] and since $s-\ae+n=-\delta$, according to Lemma 3 we can write the unique decomposition \[ Q^{-1}_n(\xi)A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)=F^+_d(\xi)+F^-_d(\xi), \] where \[ F^+_d(\xi)=B_h(Q^{-1}_n(\xi)A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)),~~~F^-_d(\xi)=(I-B_h)(Q^{-1}_n(\xi)A^{-1}_{d,=}(\xi)\widetilde{(\ell v_d)}(\xi)). \] Taking into account this fact we rewrite the equality \eqref{6} in the form \[ A_{d,\neq}(\xi)\tilde u_d(\xi)+A^{-1}_{d,=}(\xi)\tilde w_d(\xi)=Q_n(\xi)F^+_d(\xi)+Q_n(\xi)F^-_d(\xi), \] and further, \[ A_{d,\neq}(\xi)\tilde u_d(\xi)-Q_n(\xi)F^+_d(\xi)=Q_n(\xi)F^-_d(\xi)-A^{-1}_{d,=}(\xi)\tilde w_d(\xi). \] Since $F^+_d(\xi)\in\widetilde H^{s-\ae+n}(K_d), F^-_d(\xi)\in\widetilde H^{s-\ae+n}(h\mathbb Z^2\setminus K_d)$, according to Lemma 1 we conclude that $Q_n(\xi)F^+_d(\xi)\in\widetilde H^{s-\ae}(K_d), Q_n(\xi)F^-_d(\xi)\in\widetilde H^{s-\ae}(h\mathbb Z^2\setminus K_d)$. Applying the inverse discrete Fourier transform we obtain an equality for two discrete distributions. The left hand side vanishes if at least one of the conditions $\tilde x_1<0$ or $\tilde x_2<0$ holds, and the right hand side vanishes under the condition $\tilde x_1>0,\tilde x_2>0$. Thus, both sides must be a discrete distribution supported on the sides of the discrete quadrant $\{(\tilde x_1,\tilde x_2)\in h\mathbb Z^2: \{\tilde x_1>0,\tilde x_2=0\}\cup\{\tilde x_1=0,\tilde x_2>0\}\}$. Using the corresponding result from \cite{VV6} we obtain the following form for this discrete distribution \[ \sum\limits_{k=0}^{n-1}\left(c_k(\tilde x_1)(\Delta_2^{(k)}\delta_d)(\tilde x_2)+d_k(\tilde x_2)(\Delta_1^{(k)}\delta_d)(\tilde x_1)\right), \] where all summands should be elements of the space $H^{s-\ae}(h\mathbb Z^2)$. The remaining question is how many summands we need in the right-hand side. The counting principle is very simple: every summand should belong to the space $\widetilde H^{s}(\hbar\mathbb T^2)$. Let us consider the summand $\tilde c_k(\xi_1)\zeta_2^k$. Taking into account that the order of $A^{-1}_{d,\neq}(\xi)$ is $-\ae$, we need to verify the finiteness of the $H^{s-\ae}$-norm of $\tilde c_k(\xi_1)\zeta_2^k$. We have \[ ||c_k(\Delta^{(k)}_2\delta_d)||^2_{s-\ae}=\int\limits_{\hbar\mathbb T^2}(1+|\zeta^2|)^{s-\ae}|\tilde c_k(\xi_1)\zeta_2^k|^2d\xi \] \[ =\int\limits_{\hbar\mathbb T^2}(1+|\zeta^2|)^{s-\ae}|\tilde c_k(\xi_1)|^2|\zeta_2^k|^2d\xi\leq a_1\hbar^{2(s-\ae+k+1/2)}\int\limits_{\hbar\mathbb T}|\tilde c_k(\xi_1)|^2d\xi_1 \] \[ \leq a_2\int\limits_{\hbar\mathbb T}(1+|{\zeta_1}^2|)^{s-\ae+k+1/2}|\tilde c_k(\xi_1)|^2d\xi_1, \] and the constants $a_1, a_2$ do not depend on $h$. The last summand should be the $(n-1)$-th one, because for the $n$-th summand we would obtain a positive exponent: for $k=n$ we have $s_n=s-\ae+n+1/2=-n-\delta+n+1/2=-\delta+1/2>0$. A priori estimates can be obtained in the same way as described in \cite{VV6}. $\blacksquare$ \subsection{The Dirichlet discrete boundary condition} We first consider the simple case of discrete Dirichlet boundary conditions. 
We suppose in this section that $\ae-s=1+\delta, |\delta|<1/2, v_d\equiv 0.$ It follows from Theorem 2 that we have the following general solution of the equation \eqref{4} \begin{equation}\label{7} \tilde u_d(\xi)=A^{-1}_{d,\neq}(\xi)(\tilde c_0(\xi_1)+\tilde d_0(\xi_2)), \end{equation} where $c_0,d_0\in H^{s-\ae-1/2}(\hbar\mathbb Z)$ are arbitrary functions. To determine uniquely these functions we add the discrete Dirichlet conditions on angle sides \begin{equation}\label{8} {u_d}_{|_{\tilde x_1=0}}=f_d(\tilde x_2),~~~~~~~~~~~~~{u_d}_{|_{\tilde x_2=0}}=g_d(\tilde x_1). \end{equation} Thus, we have the discrete Dirichlet problem \eqref{4},\eqref{8}. First, we apply the discrete Fourier transform to discrete conditions \eqref{8} and obtain the following form \begin{equation}\label{9} \int\limits_{-\hbar\pi}^{\hbar\pi}\tilde u_d(\xi_1,\xi_2)d\xi_1=\tilde f_d(\xi_2),~~~\int\limits_{-\hbar\pi}^{\hbar\pi}\tilde u_d(\xi_1,\xi_2)d\xi_2=\tilde g_d(\xi_1). \end{equation} Substituting \eqref{9} into \eqref{7} we obtain the following relations \[ \int\limits_{-\hbar\pi}^{\hbar\pi}\tilde u_d(\xi)d\xi_1=\int\limits_{-\hbar\pi}^{\hbar\pi}A^{-1}_{d,\neq}(\xi)\tilde c_0(\xi_1)d\xi_1+\tilde d_0(\xi_2)\int\limits_{-\hbar\pi}^{\hbar\pi}A^{-1}_{d,\neq}(\xi)d\xi_1 \] \[ \int\limits_{-\hbar\pi}^{\hbar\pi}\tilde u_d(\xi)d\xi_2=\tilde c_0(\xi_1)\int\limits_{-\hbar\pi}^{\hbar\pi}A^{-1}_{d,\neq}(\xi)d\xi_2+\int\limits_{-\hbar\pi}^{\hbar\pi}A^{-1}_{d,\neq}(\xi)\tilde d_0(\xi_2)d\xi_2 \] Let us denote \[ \int\limits_{-\hbar\pi}^{\hbar\pi}A^{-1}_{d,\neq}(\xi)d\xi_1\equiv\tilde a_0(\xi_2),~~~\int\limits_{-\hbar\pi}^{\hbar\pi}A^{-1}_{d,\neq}(\xi)d\xi_2\equiv \tilde b_0(\xi_1) \] and suppose that $\tilde a_0(\xi_2),\tilde b_0(\xi_1)\neq 0, \forall\xi_1\neq 0,\xi_2\neq 0$. Therefore, we have the following system of two linear integral equations with respect to two unknown functions $\tilde c_0(\xi_1),\tilde d_0(\xi_2)$ \begin{equation}\label{10} \left\{ \begin{array}{rcl} \int\limits_{-\hbar\pi}^{\hbar\pi}M_1(\xi)\tilde c_0(\xi_1)d\xi_1+\tilde d_0(\xi_2)=\tilde F_d(\xi_2)\\ \tilde c_0(\xi_1)+\int\limits_{-\hbar\pi}^{\hbar\pi}M_2(\xi)\tilde d_0(\xi_2)d\xi_2=\tilde G_d(\xi_1), \end{array} \right. \end{equation} where we have used the following notations \[ \tilde F_d(\xi_2)=\tilde f_d(\xi_2)\tilde a_0^{-1}(\xi_2),~~~\tilde G_d(\xi_1)=\tilde g_d(\xi_1)\tilde b_0^{-1}(\xi_1), \] \[ M_1(\xi)=A^{-1}_{d,\neq}(\xi)\tilde a_0^{-1}(\xi_2),~~~M_2(\xi)=A^{-1}_{d,\neq}(\xi)\tilde b_0^{-1}(\xi_1). \] Unique solvability conditions for the system \eqref{10} will be equivalent to unique solvability for the discrete Dirichlet problem \eqref{4},\eqref{8}. Thus, we obtain the following result. {\bf Proposition 1.} {\it Let $f_d,g_d\in H^{s-1/2}(\mathbb R_+), s>1/2, v_d\equiv 0$. Then the discrete Dirichlet problem \eqref{4},\eqref{8} is reduced to the equivalent system of linear integral equations \eqref{10}. } \subsection{Non-local discrete boundary condition} We consider here the $\ae-s=1+\delta, |\delta|<1/2$ for the equation \eqref{4} with different boundary conditions, namely \begin{equation}\label{11} \begin{array}{rcl} \sum\limits_{\tilde x_1\in h\mathbb Z_+}u_d(\tilde x_1,\tilde x_2)h=f_d(\tilde x_2),~~~\sum\limits_{\tilde x_2\in h\mathbb Z_+}u_d(\tilde x_1,\tilde x_2)h=g_d(\tilde x_1),\\ \sum\limits_{\tilde x\in h\mathbb Z_{++}}u_d(\tilde x_1,\tilde x_2)h^2=0. \end{array} \end{equation} These additional conditions will help us to determine uniquely the unknown functions $c_0,d_0$ in the solution \eqref{7}. 
Indeed, using the discrete Fourier transform we rewrite the conditions \eqref{11} as follows:
\begin{equation}\label{12}
\tilde u_d(0,\xi_2)=\tilde f_d(\xi_2),~~~\tilde u_d(\xi_1,0)=\tilde g_d(\xi_1),~~~\tilde u_d(0,0)=0.
\end{equation}
Now we substitute the formula \eqref{7} into the conditions \eqref{12}. The first two equalities are
\[
\tilde u_d(0,\xi_2)=A^{-1}_{d,\neq}(0,\xi_2)(\tilde c_0(0)+\tilde d_0(\xi_2))=\tilde f_d(\xi_2),
\]
\[
\tilde u_d(\xi_1,0)=A^{-1}_{d,\neq}(\xi_1,0)(\tilde c_0(\xi_1)+\tilde d_0(0))=\tilde g_d(\xi_1).
\]
According to the third condition this implies the following relations:
\[
\tilde f_d(0)=\tilde g_d(0),~~~\tilde c_0(0)+\tilde d_0(0)=0,~~~\tilde c_0(0)=\tilde d_0(0)=0.
\]
Then we have, at least formally,
\begin{equation}\label{13}
\tilde u_d(\xi)=A^{-1}_{d,\neq}(\xi)\left(A_{d,\neq}(\xi_1,0)\tilde g_d(\xi_1)+A_{d,\neq}(0,\xi_2)\tilde f_d(\xi_2)\right).
\end{equation}
It remains to formulate and prove the obtained result precisely.

{\bf Theorem 3.} {\it Let $f_d,g_d\in H^{s+1/2}(h\mathbb Z), v_d\equiv 0$. Then the discrete problem \eqref{4},\eqref{11} has a unique solution, which is given by the formula \eqref{13}. The a priori estimate
\[
||u_d||_s\leq const(||f_d||_{s+1/2}+||g_d||_{s+1/2})
\]
holds with a constant which does not depend on $h$.
}

{\bf Proof.} We need to prove the a priori estimate only. Let us consider the first summand:
\[
||A^{-1}_{d,\neq}(\xi)A_{d,\neq}(\xi_1,0)\tilde g_d(\xi_1)||^2_s=
\]
\[=\int\limits_{\hbar\mathbb T^2}|A^{-1}_{d,\neq}(\xi_1,\xi_2)A_{d,\neq}(\xi_1,0)\tilde g_d(\xi_1)|^2(1+|\zeta^2|)^sd\xi_1d\xi_2\leq
\]
\[
\leq C\hbar^{2s}\int\limits_{\hbar\mathbb T^2}|\tilde g_d(\xi_1)|^2d\xi\leq C_1\hbar^{2s+1}\int\limits_{-\hbar\pi}^{\hbar\pi}|\tilde g_d(\xi_1)|^2d\xi_1\leq
\]
\[
\leq C_2\int\limits_{-\hbar\pi}^{\hbar\pi}|\tilde g_d(\xi_1)|^2(1+|\zeta_1^2|)^{s+1/2}d\xi_1=||g_d||^2_{s+1/2}.
\]
The second summand has the same estimate.
$\blacksquare$

\section{A comparison between discrete and continuous solutions}

The continuous analogue of the discrete boundary value problem is the following \cite{V1}. Let $A$ be a pseudo-differential operator with the symbol $A(\xi), \xi=(\xi_1,\xi_2)$, satisfying the condition
\[
c_1(1+|\xi|)^{\alpha}\leq|A(\xi)|\leq c_2(1+|\xi|)^{\alpha}
\]
and admitting the wave factorization with respect to the quadrant $K$ with index $\ae$. We consider the equation
\begin{equation}\label{14}
(Au)(x)=0,~~~x\in K,
\end{equation}
with the following additional conditions
\begin{equation}\label{15}
\int\limits_{0}^{+\infty}u(x_1,x_2)dx_1=f(x_2),~~~\int\limits_{0}^{+\infty}u(x_1,x_2)dx_2=g(x_1),~~~ \int\limits_{K}u(x)dx=0.
\end{equation}
A solution of the problem \eqref{14},\eqref{15} is sought in the space $H^s(K)$ \cite{V0}, and the boundary functions are taken from the space $H^{s+1/2}(\mathbb R_+)$. Such a problem was considered in \cite{V1}; it has the solution
\begin{equation}\label{16}
\tilde u(\xi)=A^{-1}_{\neq}(\xi)\left(A_{\neq}(\xi_1,0)\tilde g(\xi_1)+A_{\neq}(0,\xi_2)\tilde f(\xi_2)\right)
\end{equation}
under the condition that the symbol $A(\xi)$ admits the wave factorization with respect to the quadrant $K$,
\[
A(\xi)=A_{\neq}(\xi)A_=(\xi),
\]
with index $\ae$ such that $\ae-s=1+\delta, |\delta|<1/2$. To construct a discrete boundary value problem which is a good approximation to \eqref{14},\eqref{15} we need to choose $A_d(\xi)$ and $f_d,g_d$ in a special way. First, we introduce the operator $l_h$ which acts as follows.
For a function $u$ defined on $\mathbb R$ we take its Fourier transform $\tilde u$, then we take its restriction to $\hbar\mathbb T$ and periodically extend it to $\mathbb R$. Finally, we take the inverse discrete Fourier transform and obtain a function of the discrete variable $(l_hu)(\tilde x), \tilde x\in h\mathbb Z$. Thus, we put
\[
f_d=l_hf,~~~g_d=l_hg.
\]
Second, we construct the symbol of the digital operator $A_d$ in the same way: if we have the wave factorization of the symbol $A(\xi)$, we take the restrictions of its factors to $\hbar\mathbb T^2$, and the periodic symbol $A_d(\xi)$ is the product of these restrictions. For such $f_d,g_d$ and such a symbol $A_d(\xi)$ we obtain the following result.

{\bf Theorem 4.} {\it Let $f,g\in S(\mathbb R), \ae>1$. Then we have the following estimate for the solutions $u$ and $u_d$ of the continuous problem \eqref{14},\eqref{15} and the discrete one \eqref{4},\eqref{11}:
\[
|u(\tilde x)-u_d(\tilde x)|\leq C(f,g)h^{\beta},
\]
where the constant $C(f,g)$ depends on the functions $f,g$, and $\beta>0$ can be an arbitrary number.
}

{\bf Proof.} We need to compare the two functions \eqref{13} and \eqref{16}, more precisely their inverse discrete Fourier transform and inverse Fourier transform, at points $\tilde x\in K_d$. We have
\[
u_d(\tilde x)-u(\tilde x)=\frac{1}{4\pi^2}\left(\int\limits_{\hbar\mathbb T^2}e^{i\tilde x\cdot\xi}\tilde u_d(\xi)d\xi-\int\limits_{\mathbb R^2}e^{i\tilde x\cdot\xi}\tilde u(\xi)d\xi\right)=
\]
\[
=-\frac{1}{4\pi^2}\int\limits_{\mathbb R^2\setminus\hbar\mathbb T^2}e^{i\tilde x\cdot\xi}A^{-1}_{\neq}(\xi)\left(A_{\neq}(\xi_1,0)\tilde g(\xi_1)+A_{\neq}(0,\xi_2)\tilde f(\xi_2)\right)d\xi,
\]
since, according to our choice of $A_d,f_d,g_d$, the functions $\tilde u_d$ and $\tilde u$ coincide at points $\xi\in\hbar\mathbb T^2$. We estimate one summand:
\[
\left|\frac{1}{4\pi^2}\int\limits_{\mathbb R^2\setminus\hbar\mathbb T^2}e^{i\tilde x\cdot\xi}A^{-1}_{\neq}(\xi)A_{\neq}(\xi_1,0)\tilde g(\xi_1)d\xi\right|\leq
\]
\[
\leq~C\int\limits_{\hbar\pi}^{+\infty}\frac{d\xi_2}{(1+|\xi_1|+|\xi_2|)^{\ae}}\int\limits_{\hbar\pi}^{+\infty}|\xi_1|^{-\gamma}d\xi_1,
\]
since $\tilde g\in S(\mathbb R)$. This implies the required estimate.
$\blacksquare$

\section*{Conclusion}

In this paper we have considered the two-dimensional cone only, but the authors continue to work on multidimensional situations and hope to obtain results similar to those for a discrete half-space \cite{VV6,TV}. As a first practical application, the authors plan to study discrete variants of the quarter-plane problem in diffraction theory \cite{SFO} and in elasticity theory \cite{V0}. We hope these will be useful applications of the developed theory.

\bibliographystyle{amsplain}
\section*{Appendices} \subsection*{A) First-Order Approximation Gradient Descent Boosting} \subsubsection*{Derivation of the partial derivative of the risk function with respect to $\epsilon$:} \begin{align*} -\frac{\partial R[f^{t} + \epsilon g + \delta h]}{\partial \epsilon} &= -\frac{\partial}{\partial \epsilon}\sum_{i=1}^{n}L_{M}[y_{i}, f^{t}(x_{i})+\epsilon g(x_{i}) + \delta h(x_{i})]\\ &=-\sum_{i=1}^{n}\frac{\partial}{\partial \epsilon}\sum_{k=1}^{M}e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i}) + \delta h(x_{i}), y_{i}-y^{k}>}\\ &= -\sum_{i=1}^{n}\sum_{k=1}^{M}\Big ( \frac{\partial}{\partial \epsilon} e^{-\frac{1}{2}\epsilon <g(x_{i}), y_{i}-y^{k}>}\Big ) e^{-\frac{1}{2}<f^{t}(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>} \\ &= \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}), y_{i}-y^{k}>e^{-\frac{1}{2}\epsilon <g(x_{i}), y_{i}-y^{k}>}e^{-\frac{1}{2}<f^{t}(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>}\\ &= \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}), y_{i}-y^{k}>e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>}. \\ & \end{align*} Its value at $\epsilon = \delta = 0 $: \begin{align*} -\frac{\partial R[f^{t} + \epsilon g + \delta h]}{\partial \epsilon} \Bigr|_{\substack{\epsilon=0\\\delta=0}} &= \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}), y_{i}-y^{k}>e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>} \\ &= \frac{1}{2}\sum_{i=1}^{n} <g(x_{i}), \sum_{k=1}^{M}(y_{i}-y^{k})e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{t}>}>\\ &= \sum_{i=1}^{n}<g(x_{i}), w_{i}> , \\ & \end{align*} where: \begin{align*} w_{i} &= \frac{1}{2}\sum_{k=1}^{M}(y_{i}-y^{k})e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>} \\&= \frac{1}{2}e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}>}\sum_{k=1}^{M}(y_{i}-y^{k})e^{\frac{1}{2}<f^{t}(x_{i}), y^{k}>}. \\ & \end{align*} \subsubsection*{Similarly we can derive with respect to $\delta$:} \begin{align*} -\frac{\partial R[f^{t} + \epsilon g + \delta h]}{\partial \delta} \Bigr|_{\substack{\epsilon=0\\\delta=0}} &= \sum_{i=1}^{n}<h(x_{i}), w_{i}> , \\ & \end{align*} where $w_{i}$ is defined as before. \subsection*{B) Second-Order Approximation Gradient Descent Boosting} \subsubsection*{Derivation of the second partial derivative of the risk function with respect to $\epsilon$:} \begin{align*} \frac{\partial^{2} R[f^{t} + \epsilon g + \delta h]}{\partial \epsilon ^{2}} &=\frac{\partial}{\partial \epsilon} \Bigg[- \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}), y_{i}-y^{k}>e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>}\Bigg] \\ &= \frac{\partial}{\partial \epsilon} \Bigg[ -\frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}), y_{i}-y^{k}>e^{-\frac{1}{2}\epsilon <g(x_{i}), y_{i}-y^{k}>}e^{-\frac{1}{2}<f^{t}(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>} \Bigg ]\\ &= -\frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}),y_{i}-y^{k}> e^{-\frac{1}{2}<f^{t}(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>}\frac{\partial}{\partial \epsilon} \Big [ e^{-\frac{1}{2}\epsilon <g(x_{i}), y_{i}-y^{k}>} \Big]\\ &= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} ( <g(x_{i}),y_{i}-y^{k}>)^{2} e^{-\frac{1}{2}<f^{t}(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>} e^{-\frac{1}{2}\epsilon <g(x_{i}), y_{i}-y^{k}>} \\ &= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} ( <g(x_{i}),y_{i}-y^{k}>)^{2} \hspace{0.1cm} e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>} . 
\\& \end{align*} Its value at $\epsilon = \delta = 0 $: \begin{align*} \frac{\partial^{2} R[f^{t} + \epsilon g + \delta h]}{\partial \epsilon ^{2}} \Bigr|_{\substack{\epsilon=0\\\delta=0}} &= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} ( <g(x_{i}),y_{i}-y^{k}>)^{2} \hspace{0.1cm} e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>} \\ &= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} \Big <g(x_{i}),y_{i}-y^{k} \Big > \Big <g(x_{i}),y_{i}-y^{k}\Big > * \\ & \hspace{1cm} (e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}} (e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\\ &= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} \Big <g(x_{i}),(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\Big > *\\ & \hspace{1cm} \Big <g(x_{i}),(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\Big > \\ &= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} \Big <g(x_{i}),(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}} \Big > ^{2}\\ &=\frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} \Bigg [\Big <g(x_{i}),g(x_{i}) \Big > +2\Big <g(x_{i}),(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\Big > +\\ & \hspace{1cm}\Big <(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}},(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\Big > \Bigg ] \\ &=\frac{1}{4}\sum_{i=1}^{n} \Bigg [\Big <g(x_{i}),g(x_{i})\Big > +2\Big <g(x_{i}), \sum_{k=1}^{M} \Big [ (y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}} \Big ]\Big >\\ & \hspace{1cm}+ \sum_{k=1}^{M} \Big <(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}},(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\Big> \Bigg ]\\ &=\frac{1}{4} \sum_{i=1}^{n} \Big [ \Big <g(x_{i}), g(x_{i})\Big > + 2\Big <g(x_{i}), \tilde {w_{i}}\Big > + \hat w_{i}\Big ],\\ & \end{align*} where: \begin{align*} \tilde w_{i} &= \sum_{k=1}^{M} \Big [ (y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\Big ] \\ & \end{align*} and \begin{align*} \hat w_{i} &= \sum_{k=1}^{M} \Big <(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}},(y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>})^{\frac{1}{2}} \Big> \\ &= \sum_{k=1}^{M} e^{-\frac{1}{2}<f^{t}(x_{i}),y_{i}-y^{k}>}\Big <y_{i}-y^{k}, y_{i}-y^{k} \Big > \\ &= e^{-\frac{1}{2}<f^{t}(x_{i}),y_{i}>}\sum_{k=1}^{M}\Big < y_{i}-y^{k}, y_{i}-y^{k} \Big > e^{\frac{1}{2}<f^{t}(x_{i}),y^{k}>}. \\ & \end{align*} \subsubsection*{Similarly we can derive with respect to $\delta$:} \begin{align*} \frac{\partial^{2} R[f^{t} + \epsilon g + \delta h]}{\partial \delta ^{2}} \Bigr|_{\substack{\epsilon=0\\\delta=0}} &=\frac{1}{4} \sum_{i=1}^{n} \Big [ \Big <h(x_{i}), h(x_{i})\Big > + 2\Big <h(x_{i}), \tilde w_{i}\Big > + \hat w{i}\Big ],\\ & \end{align*} where: $\Tilde w_{i}$ and $ \hat w_{i} $ are defined as before. 
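Before turning to the mixed derivative, the closed forms above can be checked numerically. The following snippet is an illustration only: it uses random toy values in place of $f^{t}(x_{i})$, $g(x_{i})$ and $h(x_{i})$ (nothing here comes from the experiments) and compares the expressions for $\partial R/\partial\epsilon$ and $\partial^{2}R/\partial\epsilon^{2}$ at $\epsilon=\delta=0$ with finite differences of the risk.
\begin{verbatim}
import numpy as np

# Toy check of the closed forms derived above:
#   dR/d(eps)     = -sum_i <g(x_i), w_i>
#   d2R/d(eps)^2  = 1/4 sum_i sum_k <g(x_i), y_i - y^k>^2 exp(-1/2 <f^t(x_i), y_i - y^k>)
rng = np.random.default_rng(0)
n, M = 6, 3
F, G, H = rng.normal(size=(3, n, M))        # stand-ins for f^t(x_i), g(x_i), h(x_i)
Y = np.eye(M)[rng.integers(0, M, size=n)]   # one-hot labels y_i

def R(eps, delta):
    # empirical risk with the M-class exponential loss
    f = F + eps * G + delta * H
    inner = np.sum(f * Y, axis=1, keepdims=True) - f   # <f(x_i), y_i - y^k> for every k
    return np.exp(-0.5 * inner).sum()

diff = Y[:, None, :] - np.eye(M)[None, :, :]           # diff[i, k, :] = y_i - y^k
E = np.exp(-0.5 * np.einsum('im,ikm->ik', F, diff))    # exp(-1/2 <f^t(x_i), y_i - y^k>)
w = 0.5 * (diff * E[:, :, None]).sum(axis=1)           # w_i from Appendix A
d1 = -np.sum(G * w)                                    # closed-form dR/d(eps)
d2 = 0.25 * np.sum(np.einsum('im,ikm->ik', G, diff) ** 2 * E)

step = 1e-4
print(abs(d1 - (R(step, 0) - R(-step, 0)) / (2 * step)))             # ~0
print(abs(d2 - (R(step, 0) - 2 * R(0, 0) + R(-step, 0)) / step**2))  # ~0
\end{verbatim}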
\subsection*{Derivation of the mixed partial derivative of the risk function with respect to $\epsilon$ and $\delta$:}
\begin{align*}
\frac{\partial^{2} R[f^{t} + \epsilon g + \delta h]}{\partial \delta \partial \epsilon} &= \frac{\partial ^{2}R[f^{t}, g, h]}{\partial \delta \partial \epsilon} \\
&=\frac{\partial}{\partial \delta} \Bigg[ -\frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}), y_{i}-y^{k}>e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>}\Bigg] \\
&= \frac{\partial}{\partial \delta} \Bigg[ -\frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}), y_{i}-y^{k}>e^{-\frac{1}{2}\delta<h(x_{i}), y_{i}-y^{k}>}e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i}), y_{i}-y^{k}>} \Bigg ]\\
&= -\frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M}<g(x_{i}),y_{i}-y^{k}> e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i}), y_{i}-y^{k}>}\frac{\partial}{\partial \delta} \Big [ e^{-\frac{1}{2}\delta <h(x_{i}), y_{i}-y^{k}>} \Big]\\
&= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} \Big <g(x_{i}),y_{i}-y^{k} \Big > \Big < h(x_{i}),y_{i}-y^{k}\Big > e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i}), y_{i}-y^{k}>} e^{-\frac{1}{2}\delta <h(x_{i}), y_{i}-y^{k}>} \\
&= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} \Big <g(x_{i}),y_{i}-y^{k} \Big>\Big <h(x_{i}),y_{i}-y^{k} \Big> \hspace{0.1cm} e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>} \\
&= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} \Big <g(x_{i}) + h(x_{i}),y_{i}-y^{k} \Big>\ \hspace{0.1cm} e^{-\frac{1}{2}<f^{t}(x_{i})+\epsilon g(x_{i})+\delta h(x_{i}), y_{i}-y^{k}>}. \\&
\end{align*}
Its value at $\epsilon = \delta = 0 $:
\begin{align*}
\frac{\partial^{2} R[f^{t} + \epsilon g + \delta h]}{\partial \delta \partial \epsilon} \Bigr|_{\substack{\epsilon=0\\\delta=0}} &= \frac{1}{4}\sum_{i=1}^{n}\sum_{k=1}^{M} <g(x_{i})+h(x_{i}),y_{i}-y^{k}> \hspace{0.1cm} e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>} \\
&= \frac{1}{4}\sum_{i=1}^{n} <g(x_{i})+h(x_{i}),\sum_{k=1}^{M}(y_{i}-y^{k}) \hspace{0.1cm} e^{-\frac{1}{2}<f^{t}(x_{i}), y_{i}-y^{k}>}> \\
&= \sum_{i=1}^{n}<g(x_{i})+h(x_{i}), \frac{1}{2}w_{i}>, \\ &
\end{align*}
where $w_{i}$ is defined as before (in the first-order derivatives).
\subsection*{C) Percentage of relative metric improvement of single modality algorithms w.r.t. the baseline per dataset}
\begin{table}[H]
\centering
\begin{tabular}{lccc}
\toprule
\textbf{Model} & CT & MI & RW \\
\midrule
1WL$_{\mathcal{S}}$ & -7.04 & -8.17 & -9.21 \\
1WL$_{\mathcal{U}}$ & -0.48 & -0.16 & -1.93 \\
\bottomrule
\end{tabular}
\caption{Percentage of relative metric improvement of single modality algorithms w.r.t. the baseline per dataset}
\label{comp:results_single}
\end{table}
\section{Introduction}
Data in modern machine learning problems can be represented by a variety of modalities or data sources. We consider the setting in which structured data, also known as tabular data, and unstructured data are available simultaneously. A common application area of this setting is medical diagnosis, in which the decision making process is supported by unstructured data such as medical imaging and doctors' notes, in combination with patient historical data, lab analyses and blood tests, in the form of structured data. The settings where structured or unstructured data are available individually have been extensively researched. Deep neural networks (DNNs) have consistently proven successful at solving problems with unstructured data, see e.g. textbooks \cite{book1} and \cite{Goodfellow-et-al-2016}.
On the other hand, traditional boosting methods have shown significant advantages over DNNs in modeling structured data inputs \cite{survey, fttransf2021, Caruana:2008, Caruana:2006}. Examples of such benefits are observed in terms of training time, interpretability, amount of required training data, tuning efforts, and computational expense. This is commonly observed in Kaggle competitions, where better performance is achieved by boosted methods when the available data is structured \cite{RePEc:eee,DBLP:MangalK17,Taieb13agradient}, and by deep learning models when the available data is unstructured \cite{Graham, DBLP:ZouXL17}. In particular, LightGBM \cite{NIPS2017_6907} and XGBoost \cite{Chen:2016} have become de-facto modeling standards for structured data. Conversely, in the setting in which both structured and unstructured data are accessible (\US), it is not obvious what the best modeling approach is to enhance performance on both data sources simultaneously. In general, the simplest method consists of training independent models for each data modality and then combining the results by averaging or voting over the individual predictions. A big caveat is the missed opportunity of capturing any cross-data source interactions or underlying complementary information that might exist in the data. Training concurrently on both modalities of data is deemed crucial if we attempt to learn such relationships. A common approach to joint training consists of using DNNs for representation learning on each data source, concatenating the learned embeddings, and feeding the result as input to a third DNN. This approach serves as a baseline for our experiments, and performs sub-optimally given that boosting algorithms excel in structured data settings. In this paper, we propose two frameworks for the \US setting that address the above-mentioned considerations. Our frameworks aim at better capturing the best and most informative features of each data source, while simultaneously enhancing performance through a joint training scheme. To achieve this, our novel approaches combine the proven paradigms for structured and unstructured data, respectively: gradient boosting machines and DNNs. The first framework is the boosted-feature-vector deep learning network (BFV+DNN). BFV+DNN learns features from the structured data using gradient boosting and combines them with embeddings from unstructured data via a two-branch deep neural network. It requires training a boosted model on the structured data as an initial step. Then, each neural network branch learns embeddings specific to each data input, which are further fused into a shared trainable model. The post-fusion shared architecture allows the model to learn the complementary cross-data source interactions. The key novelty is the feature extraction process from boosting. Following standard terminology, we refer to model inputs as ``features'' and to DNN-learned representations as ``embeddings''. In our proposed framework, BFVs are used as inputs to BFV+DNN and hence are named features accordingly. In addition, we propose a two-weak-learner boosting framework (2WL) that extends the boosting paradigm to the \US setting. The framework is derived as a first-order approximation to the gradient boosting risk function and further expanded to a second-order approximation method (2WL2O). It should be noted that this framework can be used in the general multimodal setting and is not restricted to the \US use case.
Our experimental results show significant performance gains over the aforementioned baseline. Relative improvements on F1 metrics are observed by magnitudes of 4.7\%, 0.34\%, and 0.1\% on the modified Census, Imagenet, and Covertype datasets, respectively. We also consider a real-world dataset from an industry partner where the improvement in accuracy is 0.41\%. The main contributions of this work are as follows.
\begin{enumerate}
\item We present a boosted-feature-vector DNN model that combines structured data boosting features with deep neural networks to address the setting in which both structured and unstructured data sources are available.
\item We propose an alternative two-weak-learner-gradient-boosting framework to address the setting in which both structured and unstructured data sources are available.
\item We extend the two-weak-learner-gradient-boosting framework to a second-order approximation.
\item We show and compare the effectiveness of these approaches on public and real-world datasets.
\end{enumerate}
The rest of this paper is organized as follows. In Section \ref{sec:lr}, we discuss related work. In Section \ref{sec:model}, we formally introduce our proposed frameworks. Experimental results are discussed in Section \ref{sec:comp}, and we conclude in Section \ref{sec:conclusion}.
\section{Related work}
\label{sec:lr}
\subsection{Boosting Methods}
Boosting methods combine base models (referred to as weak learners) as a means to improve the performance achieved by individual learners \cite{ridgeway:1999}. AdaBoost \cite{Freund:1997} is one of the first concrete adaptive boosting algorithms, whereas Gradient Boosting Machines (GBM) \cite{Friedman00greedyfunction} derive the boosting algorithm from the perspective of optimizing a loss function using gradient descent; see \cite{mayr:2014}. A formulation of gradient boosting for the multi-class setting and two algorithmic approaches are proposed in \cite{NIPS2011_4450}. LightGBM \cite{NIPS2017_6907} incorporates techniques to improve GBM's efficiency and scalability. Traditionally, trees have been the base learners of choice for boosting methods, but the performance of neural networks as weak learners for AdaBoost has also been investigated in \cite{Schwenk:2000}. More recently, CNNs were explored as weak learners for GBM in \cite{BMVC2016}, integrating the benefits of boosting algorithms with the impressive results that CNNs have obtained at learning representations on visual data \cite{DBLP:journals/corr/JiaSDKLGGD14, NIPS2012_4824, RedmonDGF16}. Second-order information is employed in boosting algorithms such as Logitboost \cite{Friedman98}, Taylorboost \cite{SaberianMV11}, and XGBoost \cite{Chen:2016}. However, unlike our proposed second-order model, all these algorithms consider a single family of weak learners and individual data inputs, whereas we handle two families of weak learners and both structured and unstructured data simultaneously. Boosting approaches have also been applied to the setting in which more than one data source is available as input. For instance, a multiview boosting algorithm based on PAC-Bayesian theory is presented in \cite{Goyal:2018}, and a cost-based multimodal approach that introduces the notion of weak and strong modalities is proposed in \cite{KocoCB12}. In both cases, final classification is performed using majority or weighted voting.
Similarly, a model that assigns a different contribution of each data input to the final classification is proposed in \cite{PengASP18}, where a shared weight distribution among modalities is used. None of these algorithms make use of DNN approaches, as they employ traditional decision stumps as weak learners regardless of the data input sources. In contrast, the multimodal reward-penalty-based voting boosting model proposed in \cite{LahiriPB18} uses DNNs as weak learners, but overlooks the benefits of tree-based approaches for structured data. Common to all methods reviewed in this paragraph is the notion of addressing the setting where multiple data inputs are available. However, they do not take into account the underlying properties of these different data sources, nor do they consider specific algorithmic approaches that better suit each one of them.
\subsection{Structured \& Unstructured Data Setting}
Approaches that directly target the \US setting are scarce, even more so those that address the characteristics of structured data. In \cite{Chen:2017}, the authors deal with demographics, living habits, and examination results from patients in the form of structured data, and with doctors' records and patients' medical histories presented as unstructured text data. An intuitive DNN approach is used, with the drawbacks that have already been discussed, as no special treatment is given to the structured data. Conversely, in \cite{48133} the \US setting is tackled by combining the benefits of tree-based models and DNNs. To do so, they use stacking and boosted stacking of independently trained models. The core idea is similar to our proposed boosted-feature-vector DNN in the sense that a first model is trained and then used as an input for joint training. Their approaches differ from our BFV+DNN in two main aspects. First, their models are heavily tailored for the learning-to-rank use case; second, they use direct outputs from the first model as input to the second model, whereas we propose a novel way to extract boosted-feature vectors from the first model, rather than using its direct output.
\section{Proposed models}
\label{sec:model}
In this section, we propose two models to tackle the \US setting. The models address the inherent nature of each source of data by exploiting the specific benefits of boosted algorithms and neural networks as learners on each of them.
\subsection{Boosted-feature-vector Deep Learning Network (BFV+DNN)}
The boosted-feature-vector deep learning network aims at using DNNs as the primary learning method, while incorporating boosted-feature vectors (BFV) from the structured data source. As a means of comparison, the baseline DNN approach to the \US setting is shown in Figure \ref{exp:dnns}a and the BFV+DNN architecture in Figure \ref{exp:dnns}b. Both contain two branches, a fusion stage, and a joint learning architecture. Each branch learns representations from one data source (see DNN1 and DNN2 in Figure \ref{exp:dnns}). Then, a fusion yields a joint embedding that combines the data-source-specific representations. Finally, the combined vector is used as input to a trainable DNN to model cross-data source interactions (DNN3 in Figure \ref{exp:dnns}).
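To make this pipeline concrete, the sketch below shows one possible implementation of the joint model, assuming that the structured branch consumes boosted-feature vectors extracted from a fitted scikit-learn \texttt{GradientBoostingClassifier} (the exact BFV construction is formalized below) and that the joint architecture is written in PyTorch. The helper function, the layer sizes, and the concatenation fusion are illustrative choices, not the exact configuration used in the experiments.
\begin{verbatim}
import numpy as np
import torch
import torch.nn as nn

def boosted_feature_vectors(gbm, X_struct):
    """Collect, for each sample, the raw value of the leaf it reaches in every
    fitted tree of a scikit-learn GradientBoostingClassifier (one tree per class
    and boosting stage); this gives the BFV matrix, flattened per sample."""
    leaves = gbm.apply(X_struct)                  # (n_samples, n_stages, trees_per_stage)
    n, n_stages, n_trees = leaves.shape
    bfv = np.zeros((n, n_stages, n_trees))
    for j in range(n_stages):
        for i in range(n_trees):
            tree = gbm.estimators_[j, i].tree_
            # tree_.value[leaf] holds the raw leaf prediction (before learning-rate scaling)
            bfv[:, j, i] = tree.value[leaves[:, j, i].astype(int), 0, 0]
    return bfv.reshape(n, -1)

class FusionNet(nn.Module):
    """Two-branch joint model: DNN1 on the unstructured embedding, DNN2 on the BFV,
    fusion by concatenation, and DNN3 on the joint representation."""
    def __init__(self, unstr_dim, bfv_dim, n_classes):
        super().__init__()
        self.dnn1 = nn.Sequential(nn.Linear(unstr_dim, 256), nn.ReLU())
        self.dnn2 = nn.Sequential(nn.Linear(bfv_dim, 256), nn.ReLU())
        self.dnn3 = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x_unstr, x_bfv):
        joint = torch.cat([self.dnn1(x_unstr), self.dnn2(x_bfv)], dim=1)
        return self.dnn3(joint)
\end{verbatim}
In practice DNN1 would be a convolutional backbone (e.g., VGG16 or Resnet50) producing the unstructured embedding, and the concatenation could be replaced by an element-wise product.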
\begin{figure} [h]
\centering
\subfloat[Baseline architecture]{%
\includegraphics[width=0.3\linewidth]{figs/naivefusion.png}}
\subfloat[BFV+DNN architecture]{%
\includegraphics[width=0.3255\linewidth]{figs/viewtailored3.png}}
\caption{\US deep neural networks}
\label{exp:dnns}
\end{figure}
As noted, the common baseline ignores the structured or unstructured nature of the data source and directly learns representations via DNNs, whereas BFV+DNN uses BFVs as input to DNN2. To do so, we assume that a GBM model is first trained on the structured data. For the multiclass setting with $M$ classes and $N$ iterative GBM stages, $M$ CARTs \cite{BreimanFOS84} are fitted per iteration. Let $R_{i,j,k}$ be the region defined by class $i$, tree $j$, and leaf $k$, and $w_{i,j,k}$ the value representing the raw prediction of a sample falling in the corresponding region $(1 \leq i \leq M, 1 \leq j \leq N)$. Moreover, let each fitted tree $j$ of class $i$ have a number of leaves $l_{i,j}$. We define the boosted-feature vector of the structured portion of a sample $x$ as:
\begin{equation*}
BFV(x)=
\begin{bmatrix}
\sum_{k=1}^{l_{1,1}}w_{1,1,k}\mathbbm{1}\{x \in R_{1,1,k}\} \hspace{0.15cm}, & \dots & , \hspace{0.15cm} \sum_{k=1}^{l_{M,1}}w_{M,1,k}\mathbbm{1}\{x \in R_{M,1,k}\} \\[1ex]
\sum_{k=1}^{l_{1,2}}w_{1,2,k}\mathbbm{1}\{x \in R_{1,2,k}\} \hspace{0.15cm}, & \dots & , \hspace{0.15cm} \sum_{k=1}^{l_{M,2}}w_{M,2,k}\mathbbm{1}\{x \in R_{M,2,k}\} \\
\vdots & & \vdots \\
\sum_{k=1}^{l_{1,N}}w_{1,N,k}\mathbbm{1}\{x \in R_{1,N,k}\} \hspace{0.15cm}, & \dots & , \hspace{0.15cm} \sum_{k=1}^{l_{M,N}}w_{M,N,k}\mathbbm{1}\{x \in R_{M,N,k}\}
\end{bmatrix}
\in \mathbb{R}^{M\times N}.
\end{equation*}
Note that for the binary case, a single tree is fitted in each iteration and the boosted-feature vector of $x$ is simplified to:
\begin{equation*}
BFV(x)=
\begin{bmatrix}
\sum_{k=1}^{l_{1}}w_{1,k}\mathbbm{1}\{x \in R_{1,k}\} \hspace{0.15cm}, & \dots & ,\hspace{0.15cm} \sum_{k=1}^{l_{N}}w_{N,k}\mathbbm{1}\{x \in R_{N,k}\}
\end{bmatrix}
\in \mathbb{R}^{N} \text{,}
\end{equation*}
where $w_{j,k}$ is the raw prediction of a sample falling in region $R_{j,k}$ of tree $j$ and leaf $k$. The outputs of DNN1 and DNN2 are combined into a joint representation using a fusion method. In our experiments in Section \ref{sec:comp}, the fusion step is performed by concatenation or element-wise multiplication of the DNNs' embeddings. The best method per dataset is reported in Table \ref{exp:bfv_hyper}.
\subsection{Two-Weak-Learner-Gradient-Boosting Framework}
Multi-class boosting aims at finding a classifier $F(x) = \argmax_{k} \langle y^{k},f(x)\rangle$ where $f$ is some predictor, $y^{k}$ is the $k^{th}$ class unit vector identifier, and $\langle \cdot{,}\cdot \rangle$ is the standard dot product. Following the GD-MCBoost \cite{NIPS2011_4450} multi-class boosting approach, $f$ is a boosted predictor trained to minimize the classification risk $R(f)=\mathbb{E}_{X,Y}[L(y, f(x))] \approx \frac{1}{n}\sum_{i=1}^{n}L(y_{i}, f(x_{i}))$ where $n$ is the number of training samples and $L(y, f(x))= \sum_{k=1}^{M} e^{-\frac{1}{2}[<f(x),y-y^{k}>]}$ is the $M$-class loss function. At each iteration $t$, the update of the predictor is given by $f^{t+1}(x) = f^{t}(x) + g(x)$ with $g(x)$ a weak learner. Although the most common choices for weak learners are decision trees, we posit that weak learners must be chosen according to the available data source, such that they best capture their specific properties.
In the \US setting, each training sample is of the form $((x^{U},x^{S}),y)$, and we have two families of weak learners denoted by $g=g(x^{U})$ and $h=h(x^{S})$.
\subsubsection{Two-Weak-Learner-First-Order-Gradient-Boosting Framework (2WL)}
The two-weak-learner-gradient-boosting framework adapts the boosting paradigm to the \US setting by including two families of weak learners that target each specific data input. In the two-weak-learner case, given $f^{t}$ we have weak learners $g$ and $h$. We update the predictor at iteration $t+1$ to $f^{t+1}((x^{U}, x^{S})) = f^{t}((x^{U}, x^{S})) + \epsilon g^{*}(x^{U}) + \delta h^{*}(x^{S})$. The optimization step is taken via gradient descent along directions $g$ and $h$ of largest decrease of $R(f)$. We have that (see Appendix A):
\begin{equation*}
\begin{aligned}
R(f^{t} + \epsilon g + \delta h) &\approx R(f^{t}) + \frac{\partial R}{\partial \epsilon}\Bigr|_{\substack{\epsilon=0\\\delta=0}} \epsilon + \frac{\partial R}{\partial \delta}\Bigr|_{\substack{\epsilon=0\\\delta=0}} \delta \\
& = R(f^{t}) - \epsilon \sum_{i=1}^{n}<g(x^{U}_{i}), w_{i}> - \delta \sum_{i=1}^{n}<h(x^{S}_{i}), w_{i}> , \\ &
\end{aligned}
\end{equation*}
\vspace{-0.6cm}
\begin{equation}
\label{model:wi}
\begin{aligned}
w_{i} &= \frac{1}{2}e^{-\frac{1}{2}<f^{t}(x^{U}_{i},x^{S}_{i}), y_{i}>}\sum_{k=1}^{M}(y_{i}-y^{k})e^{\frac{1}{2}<f^{t}(x^{U}_{i},x^{S}_{i}), y^{k}>},
\end{aligned}
\end{equation}
which yields the optimization problems:
\begin{equation}
\label{model:g*}
g^{*} \in \argmin_{g} \hspace{0.5cm} ||g - w ||^{2} = \sum_{i = 1}^{n} || g(x^{U}_{i})-w_{i} ||^{2},
\end{equation}
\begin{equation}
\label{model:h*}
h^{*} \in \argmin_{h} \hspace{0.5cm} ||h - w ||^{2} = \sum_{i = 1}^{n} || h(x^{S}_{i})-w_{i} ||^{2},
\end{equation}
\begin{equation}
\label{model:epsdel}
(\epsilon^{*}, \delta^{*}) \in \argmin_{\epsilon,\delta} \hspace{0.5cm} R(f + \epsilon g^{*} + \delta h^{*}).
\end{equation}
These problems are solved iteratively using Algorithm \ref{alg:2wl}. At each iteration, weak learners $g$ and $h$ are fitted to minimize the expressions shown in (\ref{model:g*}) and (\ref{model:h*}) for $w_{i}$ as in (\ref{model:wi}). The risk function $R(f)$, evaluated at the fitted learners, is then optimized with respect to $\epsilon$ and $\delta$.
\begin{algorithm}[H]
\caption{Two-Weak-Learner-Gradient-Boosting}
\label{alg:2wl}
\textbf{Input:} Number of classes $M$, number of boosting iterations $N$ and training dataset $\mathcal{D} = \{(x_{1},y_{1}), ..., (x_{n}, y_{n})\}$, where $x_{i}$ are training samples of the form $x_{i} = (x_{i}^{1}, x_{i}^{2})$, with $x_{i}^{1}$ corresponding to one modality, $x_{i}^{2}$ corresponding to the second modality, and $y_{i}$ are the class labels. In our use case, $x_{i} = (x_{i}^{U}, x_{i}^{S})$.
\begin{algorithmic}
\State \textbf{Initialization: } Set $f^{0}=0 \in \mathbb{R}^{M}$
\For{$t=0$ to $N$}
\State Compute $w_{i}$ as in (\ref{model:wi}).
\State Fit learners $g^{*}$ and $h^{*}$ as in (\ref{model:g*}) and (\ref{model:h*}).
\State Find $\epsilon^{*}$ and $\delta^{*}$ as in (\ref{model:epsdel}).
\State Update $f^{t+1}(x)=f^{t}(x)+ \epsilon^{*} g^{*}(x^{1}) + \delta^{*} h^{*}(x^{2})$.
\EndFor
\end{algorithmic}
\textbf{Output:} $F(x) = \argmax_{k} \langle y^{k},f^{N}(x)\rangle$\\
\end{algorithm}
Problems \ref{model:g*} and \ref{model:h*} are solved by using standard mean-squared-error regression algorithms. Optimization \ref{model:epsdel} can be approximated in different ways such as heuristics, grid search, randomized search, or Bayesian optimization.
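For concreteness, a minimal sketch of one iteration of Algorithm \ref{alg:2wl} is given below. It is illustrative only: generic multi-output regressors stand in for the two weak-learner families, and a coarse grid search stands in for the optimization of $\epsilon$ and $\delta$ in (\ref{model:epsdel}).
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

def weights(F, Y):
    """The boosting weights w_i defined in the text; F holds f^t(x_i) row-wise,
    Y holds the one-hot labels y_i."""
    a = np.exp(-0.5 * np.sum(F * Y, axis=1, keepdims=True))   # e^{-1/2 <f^t(x_i), y_i>}
    e = np.exp(0.5 * F)                                       # e^{ 1/2 <f^t(x_i), y^k>}, one column per k
    return 0.5 * a * (Y * e.sum(axis=1, keepdims=True) - e)

def risk(F, Y):
    """R(f) with the M-class exponential loss."""
    inner = np.sum(F * Y, axis=1, keepdims=True) - F          # <f(x_i), y_i - y^k>
    return np.exp(-0.5 * inner).sum()

def two_weak_learner_step(F, Y, X_u, X_s, steps=np.linspace(0.01, 1.0, 20)):
    """One first-order iteration: fit g on the unstructured features and h on the
    structured ones against the same pseudo-targets w_i, then pick (eps, delta)."""
    W = weights(F, Y)
    g = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300).fit(X_u, W)  # a CNN in the image case
    h = DecisionTreeRegressor(max_depth=3).fit(X_s, W)
    G, H = g.predict(X_u), h.predict(X_s)
    eps, delta = min(((e, d) for e in steps for d in steps),
                     key=lambda p: risk(F + p[0] * G + p[1] * H, Y))
    return F + eps * G + delta * H
\end{verbatim}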
Our experimental study, detailed in Section \ref{sec:comp}, uses heuristic values or Bayes optimization. \subsubsection{Two-Weak-Learner-Second-Order Gradient Boosting Framework (2WL2O)} The two-weak-learner-gradient-boosting framework is derived from the first-order approximation to the multi-class risk function $R$. In order to improve the estimation, we use second-order Taylor approximation as follows (details are provided in Appendix B): \begin{equation*} \begin{aligned} R_{M}(f^{t} + \epsilon g + \delta h) &\approx R(f^{t}) + \frac{\partial R}{\partial \epsilon}\Bigr|_{\substack{\epsilon=0\\\delta=0}} \epsilon + \frac{\partial R}{\partial \delta}\Bigr|_{\substack{\epsilon=0\\\delta=0}} \delta \\ & + \frac{1}{2}\frac{\partial^{2} R}{\partial \epsilon^{2}}\Bigr|_{\substack{\epsilon=0\\\delta=0}} \epsilon^{2} + \frac{1}{2}\frac{\partial^{2} R}{\partial \delta^{2}}\Bigr|_{\substack{\epsilon=0\\\delta=0}} \delta^{2} + \frac{\partial^{2} R}{\partial \epsilon \partial \delta}\Bigr|_{\substack{\epsilon=0\\\delta=0}} \epsilon \delta \\ & = R(f^{t}) - \epsilon \sum_{i=1}^{n}<g(x_{i}^{1}), w_{i}> - \delta \sum_{i=1}^{n}<h(x_{i}^{2}), w_{i}> \\ & + \frac{\epsilon ^{2}}{2} \Bigg [ \frac{1}{4} \sum_{i=1}^{n} \Big ( <g(x_{i}^{1}), g(x_{i}^{1})> + 2<g(x_{i}^{1}), \tilde w_{i}> + \hat{w}_{i}\Big )\Bigg] \\ & + \frac{\delta ^{2}}{2} \Bigg [ \frac{1}{4} \sum_{i=1}^{n} \Big ( <h(x_{i}^{2}), h(x_{i}^{2})> + 2<h(x_{i}^{2}), \tilde w_{i}> + \hat{w}_{i}\Big )\Bigg] \\ & + \frac{\epsilon \delta}{2} \sum_{i=1}^{n} \Big ( <g(x_{i}^{1}), w_{i}> + <h(x_{i}^{2}), w_{i}>\Big ), & \end{aligned} \end{equation*} \begin{equation} \label{model:wi_tilde} \begin{aligned} \tilde{w}_{i} &= \sum_{k=1}^{M} \Big [ (y_{i}-y^{k})(e^{-\frac{1}{2}<f^{t}(x^{U}_{i},x^{S}_{i}), y_{i}-y^{k}>})^{\frac{1}{2}}\Big ], \end{aligned} \end{equation} \begin{equation*} \begin{aligned} \hat{w}_{i} &= e^{-\frac{1}{2}<f^{t}(x^{U}_{i},x^{S}_{i}),y_{i}>}\sum_{k=1}^{M} || y_{i} - y^{k}||^{2} e^{\frac{1}{2}<f^{t}(x^{U}_{i},x^{S}_{i}),y^{k}>}, \end{aligned} \end{equation*} and $w_{i}$ as in (\ref{model:wi}). We now have that: \begin{align} \label{model:epsdelta2d} (\epsilon^{*}, \delta^{*}) & \in \argmin_{\epsilon,\delta} \hspace{0.5cm} R(f + \epsilon g^{*}(\epsilon, \delta) + \delta h^{*}(\epsilon, \delta))\\ \text{s.t.} \hspace{0.5cm} & \label{model:g*2d} g^{*} \in \argmin_{g} \hspace{0.5cm} \Big |\Big |g -( \epsilon w - \frac{\epsilon ^{2}}{4}\tilde{w} - \frac{\epsilon \delta}{2}w ) \Big | \Big |^{2}\\ & \label{model:h*2d} h^{*} \in \argmin_{h} \hspace{0.5cm} \Big |\Big | h -( \delta w - \frac{\delta ^{2}}{4}\tilde{w}- \frac{\epsilon \delta}{2}w ) \Big | \Big |^{2} \text{,} \end{align} which we solve using Algorithm \ref{alg:2wl2d}. At each iteration, $w_{i}$ and $\tilde w_{i}$ are computed as in (\ref{model:wi}) and (\ref{model:wi_tilde}). An inner loop jointly optimizes $g$, $h$, $\epsilon$, and $\delta$ for these fixed $w$ and $\tilde w$: weak learners $g$ and $h$ are fitted to minimize the expressions shown in (\ref{model:g*2d}) and (\ref{model:h*2d}) and $R(f)$, evaluated in the learned values, is optimized with respect to $\epsilon$ and $\delta$. 
\begin{algorithm}[H]
\caption{Two-Weak-Learner-Gradient-Boosting-Second-Order}
\label{alg:2wl2d}
\textbf{Input:} Number of classes $M$, number of boosting iterations $N_{1}$, number of inner iterations $N_{2}$ and training dataset $\mathcal{D} = \{(x_{1},y_{1}), ..., (x_{n}, y_{n})\}$, where $x_{i}$ are training samples of the form $x_{i} = (x_{i}^{1}, x_{i}^{2})$, with $x_{i}^{1}$ corresponding to one modality, $x_{i}^{2}$ corresponding to the second modality, and $y_{i}$ are the class labels.
\begin{algorithmic}
\State \textbf{Initialization: } Set $f^{0}=0 \in \mathbb{R}^{M}$
\For{$t=0$ to $N_{1}$}
\State Compute $w_{i}$ and $\tilde{w}_{i}$ as in (\ref{model:wi}) and (\ref{model:wi_tilde}).
\State Initialize $\epsilon^{*}_{0}$, $\delta^{*}_{0}$.
\For{$j=0$ to $N_{2}$}
\State Fit learners $g^{*}_{j}$ and $h^{*}_{j}$ as in (\ref{model:g*2d}) and (\ref{model:h*2d}) by using $\epsilon^{*}_{j}$ and $\delta^{*}_{j}$.
\State Find $\epsilon^{*}_{j+1}$ and $\delta^{*}_{j+1}$ as in (\ref{model:epsdelta2d}).
\State Compute risk function value $R_{j}$ at point $(g^{*}_{j}, h^{*}_{j}, \epsilon^{*}_{j+1}, \delta^{*}_{j+1})$.
\EndFor
\State $j^{*} = \argmin_{j} R_{j}$
\State $g^{*} = g_{j^{*}}$, $h^{*} = h_{j^{*}}$
\State $\epsilon^{*} = \epsilon_{j^{*}}$, $\delta^{*} = \delta_{j^{*}}$
\State Update $f^{t+1}(x)=f^{t}(x)+ \epsilon^{*} g^{*}(x) + \delta^{*} h^{*}(x)$.
\EndFor
\end{algorithmic}
\textbf{Output:} $F(x) = \argmax_{k} \langle y^{k},f^{N_{1}}(x)\rangle$
\end{algorithm}
Optimization problems \ref{model:epsdelta2d}, \ref{model:g*2d}, and \ref{model:h*2d} are solved as stated for Algorithm \ref{alg:2wl}. In the experimental study in Section \ref{sec:comp}, the initialization values for $\epsilon^{*}_{0}$ and $\delta^{*}_{0}$ are set to 0.1, mimicking the default learning rate used in standard GBM implementations.
\section{Computational study}
\label{sec:comp}
The computational study of the proposed models was conducted on five datasets: two subsets of the structured Census-Income (KDD) dataset \cite{Dua:2019}, modified versions of Imagenet \cite{deng2009imagenet} and UCI Forest Covertype \cite{Blackard:1999}, and a real-world proprietary dataset.
\subsection{Datasets}
\paragraph{Census-Income Dataset (CI)} The census-income dataset contains 40 demographic and employment-related features and is used to predict income level, presented as a binary classification problem. Approximately 196,000 samples were used for training and almost 50,000 for validation. All of the features are presented in the form of structured data. We adjust it to the \US setting in two ways: CI-A) by randomly splitting the set of features and assigning them to two sets $\mathcal{S}$ and $\mathcal{U}$, representing the structured and unstructured modalities, respectively; CI-B) by using backward elimination to identify the most informative features and assigning them to one of the sets ($\mathcal{S}$), while the rest of the features are assigned to the other ($\mathcal{U}$). The latter setting, CI-B, represents the case in which one modality is much more strongly correlated with the labels than the other.
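As an illustration of the CI-A protocol only, the random split of the original features into the two pseudo-modalities can be as simple as the following sketch (the function name, column count, and seed are placeholders; CI-B instead selects $\mathcal{S}$ via backward elimination).
\begin{verbatim}
import numpy as np

def split_modalities(X, n_structured, seed=0):
    """CI-A style split: a random subset of columns plays the role of the
    'structured' set S; the remaining columns form the 'unstructured' set U."""
    cols = np.random.default_rng(seed).permutation(X.shape[1])
    return X[:, cols[:n_structured]], X[:, cols[n_structured:]]
\end{verbatim}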
\paragraph{Modified Imagenet Dataset (MI)} We sample from Imagenet ($\mathcal{I}$) and construct $\mathcal{U}$ with two classes: $\mathcal{C}_{0} = \{x_{U}| x_{U} \in \mathcal{I} \text{ and } x_{U} \text{ is a dog}\}$, which accounts for $47\%$ of the total samples in the resulting dataset, and $\mathcal{C}_{1} = \{x_{U}| x_{U} \in \mathcal{I} \text{ and } x_{U} \text{ is a feline,}$ $ \text{primate, reptile or bird}\}$, which accounts for the remaining $53\%$. These classes were selected so that the dataset has a reasonable size and is balanced. Approximately 313,000 samples were used for training and 12,000 for validation. We adjust it to the \US setting as follows: we generate $\mathcal{S}$ by creating a structured sample $x_{S} \in \mathbb{R}^{500}$ for each image $x_{U}$ in $\mathcal{U}$ such that, for a fixed $w \in \mathbb{R}^{500}$, we have that $w^{T}x_{S} >0$ if $x_{U} \in \mathcal{C}_{0} $ and $w^{T}x_{S} <0$ otherwise. Since there are many such $x_{S}$, we select one at random. Finally, we randomly switch $9\%$ of the labels in $\mathcal{S} \cup \mathcal{U}$, which introduces additional noise into the data while keeping more than $90\%$ of the dataset's deterministic label assignment unchanged.
\paragraph{Forest Covertype Dataset (CT)} We construct $\mathcal{S}$ with the 3 most represented classes in the highly imbalanced Forest Covertype dataset, resulting in approximately 424,000 and 53,000 training and validation samples, respectively. Conversely, we adjust it to the \US setting by generating an image $x_{U} \in \mathbb{R}^{128\times 128}$ for each structured sample $x_{S}$ in $\mathcal{S}$ as follows. Each $x_{U}$ consists of a white background and a random number in $\{1,...,10\}$ of randomly positioned:
\begin{itemize}
\item mixed-type shapes if $x_{S} \in \mathcal{C}_{0}$,
\item triangles if $x_{S} \in \mathcal{C}_{1}$,
\item rectangles if $x_{S} \in \mathcal{C}_{2}$.
\end{itemize}
The shapes were generated using scikit-image \cite{scikit-image} with maximum bounding box sizes of 128 pixels and minimums of 10, 20, and 15 pixels, respectively. Again, we randomly switch $9\%$ of the labels in $\mathcal{S} \cup \mathcal{U}$ to introduce noise, while keeping more than $90\%$ of the dataset's labels unchanged.
\begin{figure}[H]
\centering
\subfloat[Class $\mathcal{C}_{0}$]{%
\includegraphics[width=0.22\linewidth]{figs/shapeimages/class0-.jpg}}
\subfloat[Class $\mathcal{C}_{1}$]{%
\includegraphics[width=0.22\linewidth]{figs/shapeimages/class1-.jpg}}
\subfloat[Class $\mathcal{C}_{2}$]{%
\includegraphics[width=0.22\linewidth]{figs/shapeimages/class2-.jpg}}
\caption{Examples of generated images}
\label{comp:ct}
\end{figure}
\paragraph{Real-World Multimodal Dataset (RW)} For this experiment, we use a proprietary dataset with both structured and unstructured data inputs, which allows us to test our models in a real-world \US setting. The dataset constitutes a binary classification problem with two data sources: one is presented in the form of images $\mathcal{U}$ and the other as structured data $\mathcal{S}$, on which GBM works very well. Tens of thousands of samples were curated for training and validation, with each structured data sample containing approximately 100 features.
\subsection{Implementation and hyperparameters}
The experiments were implemented in Python and run using a GeForce RTX 2080 Ti GPU and an Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz for all datasets except RW, for which a Tesla V100 GPU and an Intel Xeon CPU E5-2697 v4 @ 2.30GHz were used.
For the BFV+DNN models, scikit-learn GradientBoostingClassifier models \cite{scikit-learn} are trained and used to generate the BFVs. We employ Bayesian Optimization (BO) \cite{NIPS2012_4522} with 10 random exploration points and 20 iterations to find $\epsilon^{*}$ and $\delta^{*}$ in steps (\ref{model:epsdel}) of Algorithm \ref{alg:2wl} and (\ref{model:epsdelta2d}) of Algorithm \ref{alg:2wl2d}. The tracked metric is F1 for all datasets, except for RW, where accuracy is used. The dataset-specific hyperparameters used for BFV+DNN and two-weak-learner experiments can be found in Tables \ref{exp:bfv_hyper} and \ref{exp:2wl_hyper}, respectively. These hyperparameters were selected as follows. For DNNs, we used a fully connected layer with $k$ neurons (FC$k$) or two fully connected layers with $k_{1}$ and $k_{2}$ neurons (FC$k_{1}$+$k_{2}$), where the number of layers and neurons was chosen based on the number of samples and features of each dataset. Regarding image datasets, the VGG16 \cite{Vgg16} and Resnet50 \cite{resnet50} convolutional architectures were compared and the best one was selected. For optimizers, we chose the best performer between RMSPROP and stochastic gradient descent with learning rate $10^{-j}$, $j \in \{3,4,5\}$ (SGD/LR). Element-wise multiplication and embedding concatenation were compared in order to select the fusion method for each dataset. The number of BFV trees is the best value in $\{1000,1500,2000,3000\}$, while the maximum tree depth is the best value in $\{3,4,5,6\}$. Values $N$, $N_{1}$, and $N_{2}$ vary according to the number of iterations each dataset took until convergence. Batch sizes were chosen based on the number of input features, and pretraining was used for datasets with image data.
\begin{table}[H]
\centering
\caption{Boosted-feature-vector Deep Learning Network Hyperparameters}
\label{exp:bfv_hyper}
\begin{tabular}{|c||c|c|c|c|c||}
\hline
 & CI-A & CI-B & CT & MI & RW \\
\hline \hline
DNN1 & FC$32$ & FC$100$+$50$ & VGG16+ & Resnet50+ & VGG16\\
 & & & FC$1024$+$200$ & FC$1024$+$256$ & \\
DNN2 & FC$256$+$32$ & FC$100$+$50$ & FC$1024$+$200$ & FC$256$ & FC$512$\\
DNN3 & FC$256$+$32$ & FC$25$ & FC$64$ & FC$128$ & FC$16$\\
Fusion & Product & Concat & Concat & Concat & Product\\
Optimizer & SGD/0.001 & RMSPROP & SGD/0.00001 & SGD/0.00001 & SGD/0.001\\
Batch size & 128 & 128 & 32 & 32 & 16\\
BFV Trees & 2000 & 1500 & 3000 & 3000 & 2000\\
Pretrained & No & No & Yes & Yes & Yes\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Two-Weak-Learner-Gradient-Boosting Hyperparameters}
\label{exp:2wl_hyper}
\begin{tabular}{ |c||c|c|c|c|c|| }
\hline
 & CI-A & CI-B & CT & MI & RW \\
\hline \hline
DNN & FC$100$+$50$ & FC$100$+$50$ & VGG16+ & Resnet50+ & VGG16\\
 & & & FC$1024$ & FC$64$ & \\
Pretrained & No & No & Yes & Yes & Yes\\
DT max depth & 3 & 3 & 6 & 3 & 5\\
Optimizer & RMSPROP & RMSPROP & SGD/0.01 & SGD/0.0001 & SGD/0.0001\\
Batch size & 512 & 128 & 32 & 32 & 16\\
$N$, $N_{1}$, $N_{2}$ & 2000,2100,1 & 3500,750,1 & 135,25,1 & 200,10,1 & 20,70,1 \\
\hline
\end{tabular}
\end{table}
\subsection{Experimental Results}
\subsubsection*{Model Comparison}
In Table \ref{comp:results}, we summarize the results of the conducted experiments. For each dataset, we compare the performance of the boosted-feature vector DNN (BFV$_{\mathcal{S}}$ +DNN$_{\mathcal{U}}$ ), the two-weak-learner-gradient-boosted model with BO (2WL), and the two-weak-learner-second-order-gradient-boosted model with BO (2WL2O).
Additionally, we analyze the impact of finding optimal steps $\epsilon^{*}$ and $\delta^{*}$ for the two-weak-learner models and conduct the same experiments with fixed $\epsilon^{*} = \delta^{*} = 0.1$ (2WL\_Fix and 2WL2O\_Fix). The value of 0.1 was chosen following the same reasoning as before regarding default hyperparameters used in GBM implementations. All results are given as percentages of relative improvement over the chosen baseline (see Figure \ref{exp:dnns} for reference). To account for randomization, we ran 5 identical experiments for each BFV+DNN model and report their average. Their coefficients of variation were smaller than $0.005$, $0.005$, $0.0001$, $0.0001$, and $0.001$ for the CI-A, CI-B, CT, MI, and RW datasets, respectively. Given the low coefficient of variation values shown by the DNN-based model and the additional computational time needed to run two-weak-learner experiments, we report a single instance for each boosting model.
\begin{table}[H]
\centering
\caption{Relative performance w.r.t. baseline per dataset}
\label{comp:results}
\begin{tabular}{|l||c|c|c|c|c|}
\hline
 & \multicolumn{5}{|c|}{\% of Relative Metric Improvement} \\
\hline
\textbf{Model} & CI-A & CI-B & CT & MI & RW \\
\hline \hline
BFV$_{\mathcal{S}}$ +DNN$_{\mathcal{U}}$ & \underline{1.87} & \textbf{4.70} & \underline{0.08} & \textbf{0.34} & \underline{0.11} \\
2WL & \underline{3.50} & \underline{0.14} & -0.37 & -0.09 & -0.60 \\
2WL\_Fix & \underline{0.68} & -2.10 & -0.36 & -3.45 & -2.37 \\
2WL2O & \textbf{3.97} & \underline{0.25} & \textbf{0.10} & \underline{0.13} & \textbf{0.41} \\
2WL2O\_Fix & -7.87 & -19.85 & -0.23 & -0.14 & -5.37 \\
\hline
\end{tabular}
\end{table}
In general, the BFV$_{\mathcal{S}}$ +DNN$_{\mathcal{U}}$ and 2WL2O models exhibit the best performance. The results on the different datasets considered help to differentiate the individual strengths of each of our proposed models. For datasets CI-A and CI-B, we observe that, given the underlying structured nature of the data, the BFVs used for BFV$_{\mathcal{S}}$ +DNN$_{\mathcal{U}}$ are responsible for a large portion of the predictive power, making this model perform significantly better than the baseline. This behavior is notably exhibited in CI-B, where the most informative features have been grouped in $\mathcal{S}$ and used to generate the BFVs. On the other hand, due to the random variable split in CI-A, not all of the most informative features are used for generating the BFV, which is reflected in the performance gap between both datasets for this model and in the two-weak-learner models outperforming BFV+DNNs for CI-A. In datasets CT, MI and RW, which contain both structured and unstructured data, we observe closer gaps between the performances of BFV$_{\mathcal{S}}$ +DNN$_{\mathcal{U}}$ and 2WL2O, but quite large improvements of these over 2WL in all cases. BFV$_{\mathcal{S}}$ +DNN$_{\mathcal{U}}$ achieves the best performance for CI-B and MI. On the other hand, 2WL2O outperforms all models for CI-A, CT, and RW. The predictive power and complexity of each data input relative to the other appear to play an important role both in the best model's performance and in the usefulness of the second-order approximation. We further observe this in Figures \ref{comp:2wl-2wl2d} and \ref{comp:1wl-2wl}.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figs/2wl-2wl2d.jpg}
\caption{First- (2WL) and second-order (2WL2O) two-weak-learners performance vs runtime in minutes}
\label{comp:2wl-2wl2d}
\end{figure}
In Figure \ref{comp:2wl-2wl2d}, we compare the performance of the first- and second-order two-weak-learner models. Given sufficient time for convergence, the second-order model outperforms the first-order one in all our experiments. However, we observe that for some datasets, such as MI and RW, the performance of two-weak-learner models may drop abruptly after reaching its maximum. To further explore this, we compare 2WL and 2WL2O with their corresponding one-weak-learner models 1WL-DT, trained on $\mathcal{S}$, and 1WL-CNN, trained on $\mathcal{U}$. Accordingly, this comparison is conducted on datasets that have both structured and image data available (MI, CT, and RW) and is shown in Figure \ref{comp:1wl-2wl}. As can be seen, the drop in performance observed for the two-weak-learner models in datasets MI and RW is consistently observed in their corresponding 1WL-CNN models. The unstructured data weak learner seems to be driving the two-weak-learner models for the aforementioned datasets.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figs/1wlvs2wl.jpg}
\caption{One- (1WL-DT, 1WL-CNN) and two-weak-learners (2WL, 2WL2O) performance vs runtime in minutes}
\label{comp:1wl-2wl}
\end{figure}
\begin{table}[H]
\centering
\caption{Percentage of relative metric improvement w.r.t the baseline per dataset}
\label{comp:resultsBis}
\begin{tabular}{|l||c|c|c|}
\hline
\textbf{Model} & CT & MI & RW \\
\hline \hline
BFV$_{\mathcal{S}}$ +DNN$_{\mathcal{U}}$ & \underline{0.08} & \textbf{0.34} & \underline{0.11} \\
1WL$_{\mathcal{S}}$ & -7.04 & -8.17 & -9.21 \\
1WL$_{\mathcal{U}}$ & -0.48 & -0.16 & -1.93 \\
2WL & -0.37 & -0.09 & -0.60 \\
2WL2O & \textbf{0.10} & \underline{0.13} & \textbf{0.41} \\
\hline
\end{tabular}
\end{table}
Interestingly, the deterioration is shown only in one of the two-weak-learner models per dataset, possibly as a result of conducting independent optimizations to find $\epsilon^{*}$ and $\delta^{*}$ in step (\ref{model:epsdel}) of Algorithm \ref{alg:2wl} and (\ref{model:epsdelta2d}) of Algorithm \ref{alg:2wl2d}. In Figure \ref{comp:fixvsBO}, we compare 2WL and 2WL2O with their corresponding 2WL\_Fix and 2WL2O\_Fix runs. In all of our experiments, fixing the values of $\epsilon^{*}$ and $\delta^{*}$ results in a significant drop in performance, further emphasizing the key role played by the Bayes optimization steps and by a careful choice of the learning rates $\epsilon$ and $\delta$. As a final remark, an important factor to consider when evaluating and comparing the proposed models is computational time. In Algorithms \ref{alg:2wl} and \ref{alg:2wl2d} for the two-weak-learner frameworks, one DNN is trained per iteration for the first-order approximation, yielding a total of $N$ trained DNNs per run. For the second-order approximation, $N_{2}$ DNNs are trained per iteration, for a total of $N_{1}N_{2}$ DNNs per run. On the other hand, the BFV+DNN model trains a single DNN (plus a previously trained GBM). Hence, BFV+DNN has a clear advantage in terms of runtime, whereas the two-weak-learner-boosted frameworks can be leveraged to improve performance when time does not pose a hard constraint.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figs/fixedvsBO.jpg}
\caption{Performance of first- and second-order two-weak-learners with fixed (2WL\_Fix, 2WL2O\_Fix) and optimized (2WL, 2WL2O) learning rates vs runtime in minutes}
\label{comp:fixvsBO}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
Traditionally, boosted models have shown stellar performance when dealing with structured data, whereas DNNs excel in unstructured data problems. However, in many real-world applications both structured and unstructured data are available. In this paper, we presented two frameworks that address this scenario. The proposed models are compared to a standard baseline model and demonstrate strong results, outperforming the baseline approach when data is presented as a combination of these two data sources.
\bibliographystyle{abbrv}
\section*{Appendix} \subsection*{A) Network Intrusion - Exploratory Data Analysis} \begin{figure}[H] \captionsetup[subfloat]{labelformat=empty} \centering \begin{subfigure}[t]{0.9\textwidth} { \centering \subfloat[\centering Content]{{\includegraphics[width=0.41\linewidth]{Figures/NI_content_val.png} }}% \hspace{1cm} \subfloat[\centering Host ]{{\includegraphics[width=0.55\linewidth]{Figures/NI_host_val_3.png} }}% } \caption*{(a) Density of Content and Host features} \end{subfigure} \\[1cm] \begin{subfigure}[t]{0.8\textwidth} \includegraphics[width=0.96\linewidth]{Figures/NI_traffic_val.png} \caption*{(b) Density of Traffic features} \end{subfigure} \\[1cm] \begin{subfigure}[t]{0.8\textwidth} \includegraphics[width=0.96\linewidth]{Figures/NI_basic_val.png} \caption*{(c) Density of Basic features} \end{subfigure} \caption{NI EDA} \label{fig8} \end{figure} \subsection*{B) Best Context Group Distributions by Sample Classification Output} \begin{figure} [H] \centering \subfloat{{\includegraphics[height=0.205\textwidth,width=0.33\linewidth]{Figures/CT_agg_bestgroupmodepermethod_T3-0.005_withtitle.png} }}% \subfloat{{\includegraphics[height=0.2\textwidth,width=0.315\linewidth]{Figures/CT_agg_bestgroupmodepermethod_T3-0.005_correctlyclass_nomissclassified.png} }}% \subfloat{{\includegraphics[height=0.2\textwidth,width=0.315\linewidth]{Figures/CT_agg_bestgroupmodepermethod_T3-0.005_incorrectlyclass_nomissclassified.png} }}% \caption{CT}% \label{figCTsampletype} \end{figure} \begin{figure} [H] \centering \subfloat{{\includegraphics[height=0.205\textwidth,width=0.33\linewidth]{Figures/NI_agg_bestgroupmodepermethod_T1-0.01_withtitle.png} }}% \subfloat{{\includegraphics[height=0.2\textwidth,width=0.315\linewidth]{Figures/NI_agg_bestgroupmodepermethod_T1-0.01_correctlyclass_nomissclassified.png} }}% \subfloat{{\includegraphics[height=0.2\textwidth,width=0.315\linewidth]{Figures/NI_agg_bestgroupmodepermethod_T1-0.01_incorrectlyclass_nomissclassified.png} }}% \caption{NI}% \label{figNIsampletype} \end{figure} \section{Introduction} Tabular data (TD) is the most prevalent data modality in real world applications. It is widely used in critical daily-life fields such as medicine, transportation, insurance and finance, many times in combination with additional unstructured data modalities like images, videos, and text. Due to its ubiquity and relevance, it is becoming increasingly common to allow automated systems to use this data as input to models trained to propose or even directly make decisions. Nonetheless, the models are often used without a clear understanding of how and why model decisions come about \cite{confalonieri2021, tjoa2019}. Given the nature of the application areas, these decisions may have significant impact and consequences in human lives, highlighting an accentuated need for interpretable and explainable machine learning models. Understanding the rationale behind the decision making of intelligent systems remains the most pertinent factor towards building trust in Artificial Intelligence and further enabling its usage \cite{das2020, gunning2019}. The huge advances of modern deep learning (DL) models have focused mostly on unstructured data sources \cite{Goodfellow-et-al-2016, Roy2019}, as have most of the efforts made towards generating better prediction explanations and interpretable models. For TD, traditional machine learning models such as boosting and random forests continue to be de-facto choices \cite{tjoa2019}. 
Yet, despite their good performance, models based on tree ensembles are usually not as interpretable as some simpler (but less successful) models like logistic regression or single decision trees \cite{caruana2015}. In this sense, improving DL models for TD would allow both single- and multi-modal problems with TD to benefit from more widely explored explainable models. Additionally, DL models leverage gradient descent learning, which is not supported by tree-based models. This type of optimization decreases the need for feature selection and engineering and facilitates joint training of models taking inputs from multiple data sources with mixed modalities \cite{ tabnet2019, fttransf2021}. In particular, a set of models that have shown promising results at pushing the boundaries of DL for TD are transformers \cite{tabnet2019, fttransf2021,tabtransf2020,saint2021, autoint2019}. Unlike other DL models, transformers have not only obtained great success in diverse tasks, but also possess a built-in capability to provide explanations for their results via attention \cite{attention2017}. Following this lead, we propose multi-layer attention-based explainability via transformers for TD, a novel method that leverages the attention mechanism and combines it with graph concepts to enable a better understanding of how groups of tabular features influence the transformer's decisions. To this end, a transformer model is trained on a classification task using TD as input. In tabular inputs, it is common to have several features that represent similar underlying concepts. As the number of features increases, their collective importance might end up being diluted across the relative importance of a large number of single features, making it hard to pinpoint relevant explanations at a conceptual level. To account for this, instead of assigning importance or relevance to each individual feature, meaningful groups of features are created a priori. A transformer model is trained with these groups as inputs and the most relevant concepts for the classification task are identified by the model at the conceptual or group level. We train transformers on three datasets and generate multi-layer attention-based explanations for their predictions, i.e., we identify groups of features that have the largest impact on the model's decision. For a transformer with a single head, prior work considers attention only at the last layer, which disregards information from all preceding layers. Our methodology considers all layers. To cope with the assumption of a single head, we use the student-teacher paradigm to train a single-head but multi-layer transformer based on a trained multi-head transformer and apply graph-based explainability on the student. We further compare our explanations with those provided by other widely known explainability methods. In summary, the contributions of this work are as follows. \begin{enumerate} \item We investigate explainable models based on transformers for tabular data. \item We propose a graph-oriented attention-based explainability method via transformers for tabular data. \item We compare this approach to attention-, gradient-, and perturbation-based explainability methods. \end{enumerate} The rest of this paper is organized as follows. In Section \ref{sec:lr}, the related work is discussed. 
Section \ref{sec:model} describes the proposed model: the conceptual transformer model for TD in Section \ref{sec:transf} and the explainability method used to identify relevant concepts in Section \ref{sec:alg}. Section \ref{sec:comp} provides the computational study, experimental details, results, and visualizations. Conclusions are given in Section \ref{sec:conclusion}. \section{Related work} \label{sec:lr} \subsection{Explainability for Deep Learning} The field of explainable Artificial Intelligence (XAI) has received increasing interest over the past decade. Surveys, reviews, and articles such as \cite{confalonieri2021, das2020, islam, samek2019explainable, tjoa2019, vilone, notions, visualsurvey} have synthesized its main motivations, approaches, and challenges. In a broad sense, XAI algorithms for DL can be organized into three major groups: perturbation-based, gradient-based, and, more recently, attention-based. Within the most famous perturbation-based methods, we find LIME \cite{LIME}, which generates a local approximation for a given model around a specific prediction, and SHAP \cite{SHAP}, which measures a feature's importance as the change in the expected prediction when conditioning on it. These methods are model agnostic, but have often been applied in DL settings. On the other hand, gradient-based algorithms have focused on DL algorithms, as they leverage gradient information to assess the relevance of the model's inputs to make its decision. The classic gradient-based methods are saliency maps \cite{Simonyan}, used for explaining the predictions of convolutional neural networks. In saliency maps, the gradients of the predictor function with respect to the input are computed and used to identify parts of the image that contribute the most either to the final decision or to a specific layer in the network. Another example is Grad-CAM \cite{gradcam}, in which gradients are used to compute an importance score that allows class-specific neuron activity visualization in images (referred to as activation maps). A more general gradient-based method is layer-wise relevance propagation \cite{LWRP}, while other well known methods falling under this category are Deeplift \cite{deeplift} and SmoothGrad \cite{smoothgrad}. Attention-based explanations gained relevance along with the success of transformer models \cite{attention2017} in a variety of application areas such as natural language processing, computer vision, and speech processing \cite{transformersurvey}. Transformers possess a built-in XAI method: the attention mechanism, which generates probability distributions over features and further interprets them as feature importances or contributions. Attention has been further combined with other attributes to generate explanations. For instance, in \cite{voita}, layer-wise relevance propagation is applied to transformers, and \cite{chefer} builds on that idea by including gradient information into the explanations. Transformers were initially introduced for machine translation, but have been extended and customized for diverse tasks such as object detection \cite{detr} and, more recently to multimodal settings \cite{vilbert, lxmert}. However, these have focused on vision and language tasks. \subsection{Transformers for Tabular Data} Following the success on unstructured data, transformers have also proven to have good performance on TD. 
One of the first models to leverage transformers for TD is TabNet \cite{tabnet2019}, which adopts transformer blocks to mimic the structure of decision trees and incorporates sequential attention to select which features to focus on at each step. Similarly, SAINT \cite{saint2021} combines self-attention with inter-sample attention to attend to both rows and columns of the TD. TabTransformer \cite{tabtransf2020} uses self-attention transformers to map categorical features to conceptual embeddings. Continuous features are not passed through the transformer architecture, but concatenated with its output for further processing. FT-Transformer \cite{fttransf2021} introduces a feature tokenizer to adapt the transformer architecture to TD. While all of these methods use attentive transformers on TD, none of them consider multiple layers for explainability of attention or incorporate a priori conceptual information into the architecture. Graphs do come into consideration in graph attention networks \cite{velickovic2018graph}, neural network architectures that operate on graph-structured data. However, to the best of our knowledge, no work has been done on leveraging graphs for attention matrices. \section{Proposed Model} \label{sec:model} In this section, we introduce multi-layer attention-based explainability leveraging transformers for TD. We propose the following process. First, a multi-head transformer (teacher) is trained. Then, a single-head (student) transformer is trained based on the output predictions of the teacher. Single-head transformers are more amenable to explanations. Finally, explanations are extracted from the student by using attention values from all layers. In the following subsections, we describe how the transformer architecture is adapted to account for the specific structure of TD and incorporate a priori conceptual information. Next, we describe how we map the underlying self-attention mechanism into attention graphs. \subsection{Conceptual Transformer Encoder for TD} \label{sec:transf} The original transformer model was designed for sequence transduction tasks on text data. TD and text have inherently different structures and, as such, their feature engineering strategies differ as well. For instance, preprocessing raw text data is usually done by tokenization, whereas for TD, it is common to normalize numerical features and one-hot-encode categorical ones. To account for these differences, two main changes are made to the transformer architecture. First, groups of features representing conceptual information are manually defined before training. Hence, for the TD case, instead of having attention matrices where each word's projection attends to every other word's projection, we have conceptual groups of features that attend to other groups of features. Second, given that TD does not provide sequential information, positional encoding is disabled. The adapted transformer architecture is trained for the classification task at hand. Let $x_{1} \in \mathcal{R}^{k_{1}}$, $x_{2} \in \mathcal{R}^{k_{2}}$, ..., $x_{m} \in \mathcal{R}^{k_{m}}$ be the concept groups of features. We project $x_{i}$ into latent space $\mathcal{R}^{d}$ by defining: $\tilde{x_{i}}= D_{i}x_{i} \in \mathcal{R}^{d}$, with $D_{i} \in \mathcal{R}^{d\times k_{i}}$ trainable. Then, $X = [\tilde{x_{1}}, ..., \tilde{x_{m}}]^{T} \in \mathcal{R}^{m\times d}$. 
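For concreteness, this group-wise projection step can be sketched as follows; this is a minimal illustration in PyTorch, in which the class name, argument names and tensor shapes are our own assumptions rather than the implementation used for the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class ConceptGroupEmbedding(nn.Module):
    """Projects each concept group x_i in R^{k_i} into the shared latent
    space R^d via a trainable matrix D_i, producing the input X in R^{m x d}
    of the conceptual transformer encoder (positional encoding is disabled,
    since tabular data carries no sequential information)."""

    def __init__(self, group_sizes, d):
        super().__init__()
        # one trainable projection D_i per concept group
        self.proj = nn.ModuleList(nn.Linear(k, d, bias=False) for k in group_sizes)

    def forward(self, groups):
        # groups: list of m tensors, groups[i] of shape (batch, k_i)
        # returns X of shape (batch, m, d)
        return torch.stack([p(x) for p, x in zip(self.proj, groups)], dim=1)
\end{verbatim}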
Following \cite{attention2017}, we obtain attention coefficients $a_{i,j}$ by defining $V = XW^{V}$, $K=XW^{K}$, $Q=XW^{Q}$, with $W^{V}, W^{K}, W^{Q} \in \mathcal{R}^{d\times d}$ trainable matrices, and $V, K, Q \in \mathcal{R}^{m\times d}$. We have that $\frac{QK^{T}}{\sqrt{d}} \in \mathcal{R}^{m\times m}$, the attention matrix is $A = [a_{i,j}] = softmax(\frac{QK^{T}}{\sqrt{d}}) \in \mathcal{R}^{m\times m}$, and the output of the self-attention layer is $AV \in \mathcal{R}^{m\times d}$. The above-mentioned transformer encoder will then have $N \times h$ attention matrices, where $N$ is the number of encoder layers and $h$ is the number of attention heads. While $N$ and $h$ are tunable hyperparameters, for optimal results they are almost always larger than 1. For XAI purposes, as $N$ and $h$ increase, multi-head attention becomes harder to interpret. To leverage the strengths of multi-head attention while simultaneously prioritizing explainability, we use knowledge distillation \cite{hinton2015distilling} as a means to learn a simplified, single-head transformer model (referred to as the student transformer) that can generalize in a similar fashion to the original model (the teacher transformer). The student architecture has $h = 1$ and $M$ encoder layers, where typically $M>N$. Furthermore, to improve the entropy of the attention matrices, a penalization term is added to the student's cross entropy loss function, yielding: \begin{center} $L = - \sum\limits_{i = 1}^{n} y_{i}log(\hat y_{i}) + \lambda \sum\limits_{l=1}^{M} \sum\limits_{j,k = 1}^{m} a^{l}_{j,k} log(a^l_{j,k}) $ \end{center} where $n$ is the number of training samples, $\lambda$ is the penalization term hyperparameter, $a^l_{j,k}$ is the value in the $j^{th}$ row and $k^{th}$ column of attention matrix $A^{l}$ corresponding to the $l^{th}$ encoder layer, and $y_{i}$ and $\hat y_{i}$ are the prediction of the multi-head teacher and the predicted value of the student for sample $i$, respectively. \subsection{Multi-Layer Attention-Based Explainability} \label{sec:alg} Multi-layer attention-based explainability for TD (MLA) leverages the conceptual transformer encoder's attention mechanism described in Section \ref{sec:transf} and maps the attention matrices across encoder layers into a directed acyclic graph (DAG). In the DAG, the vertices correspond to concept groups of features and the arcs to attention values. We further identify the concept group with the largest contribution to the prediction, that is, the \textit{best concept group} to explain the output, as the input group corresponding to the path of maximum probability in the DAG. For a given conceptual transformer, we have a collection of attention matrices $A^{l} = (a^{l}_{j,k})$ with $l \in \{1, ..., M\}$, and $j,k \in \{1, ..., m\}$ as described above. We define a weighted DAG $D = (V, A)$ as follows. Let $V = \bigcup_{l=0}^{M}\{v^{l}_{c}\}$ and $(v^{l-1}_{\hat c}, v^{l}_{\tilde{c}}) \in A$, where arc $(v^{l-1}_{\hat c}, v^{l}_{\tilde{c}})$ has weight $a^{l}_{\hat{c},\tilde{c}}$, subscripts $\hat{c}$, $\tilde{c} \in \{1, ..., m\}$ correspond to concept groups, superscript $l$ corresponds to encoder layers, and $l = 0$ is a special case corresponding to the student's input layer. In Figure \ref{fig1}, we present a visualization of the construction of $D$. 
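The extraction of the \textit{best concept group} from these matrices can be sketched as follows; this is a minimal Python/NumPy illustration, not the code used for the experiments, and it assumes the student's attention matrices are available as $m\times m$ arrays with rows summing to one. On the layered graph $D$ the maximum-probability path can be found by a forward dynamic program, which is equivalent to the Dijkstra search with negative-log arc costs described below.
\begin{verbatim}
import numpy as np

def best_concept_groups(attn, n_best=2):
    """Ranks concept groups by maximum-probability paths through the layered
    attention DAG D = (V, A).  `attn` is a list of the M student attention
    matrices A^l (each m x m), ordered from the first to the last encoder
    layer.  Returns the indices of the n_best starting groups (vertices
    v^0_c), found iteratively as described in the text."""
    M, m = len(attn), attn[0].shape[0]
    eps = 1e-12
    ranking, available = [], set(range(m))
    for _ in range(n_best):
        best_group, best_cost = None, np.inf
        for c in available:
            # shortest path from v^0_c with arc costs -log a^l_{j,k};
            # on a layered DAG a forward pass over the layers suffices
            cost = np.full(m, np.inf)
            cost[c] = 0.0
            for l in range(M):
                nxt = np.full(m, np.inf)
                for j in range(m):
                    if np.isfinite(cost[j]):
                        nxt = np.minimum(nxt, cost[j] - np.log(attn[l][j] + eps))
                cost = nxt
            if cost.min() < best_cost:
                best_group, best_cost = c, cost.min()
        ranking.append(best_group)
        available.remove(best_group)  # drop v^0_c and search for the next path
    return ranking
\end{verbatim}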
\begin{figure} [H] \centering \includegraphics[width=0.9\linewidth]{Figures/matrixtograph.png} \caption{Graph $D= (V,A)$} \label{fig1} \end{figure} The maximum probability path $p$ is found using Dijkstra's algorithm \cite{dijkstra1959note}, and is of the form $p = \{v^{0}_{i_{0}},v^{1}_{i_{1}},...,v^{M}_{i_{M}}\}$ with arc cost of $-log( a^{l}_{j,k})$ for $a^{l}_{j,k}>0$, yielding path cost $-log \Big(\prod_{l = 1}^{M} a ^{l}_{i_{l-1}, i_{l} }\Big )$. Since we are particularly interested in the concept group corresponding to the most relevant input for the prediction, we focus on group $c = i_{0}$ corresponding to $v^{0}_{i_{0}}$. Thus, we provide explanations for the student's predictions by finding the most relevant concept for the classification task, the \textit{best concept group}, defined as the concept group $c = i_{0}$ corresponding to the first vertex $v^{0}_{i_{0}}$ of the maximum probability path $p$ in graph $D$. Note that a single concept group does not always provide all the relevant information to make a prediction. To account for this, we rank additional concept groups iteratively. In each iteration we eliminate from the graph the starting point $v^{0}_{i_{0}}$ of the previously found highest probability path and then search for the respective next highest probability path in $D$. In our experiments, we use at most two \textit{best concept groups} to explain predictions. \section{Computational study} \label{sec:comp} \subsection{Datasets} The proposed explainability model is tested on three datasets: UCI Forest CoverType \cite{Blackard:1999}, KDD’99 Network Intrusion dataset \cite{NI-ds}, and a real-world proprietary dataset as described below. These were selected due to their relatively large number of features (with a fair mix of numerical and categorical) and samples, allowing adequate conceptual aggregation. \subsubsection*{Forest CoverType Dataset (CT)} In CT, the goal is to predict the most common cover type for each 30m by 30m patch of forest. We use the three most represented classes, resulting in approximately 425,000 training and 53,000 validation samples. The dataset consists of 10 quantitative features and two qualitative features, which were organized into the following five concept groups: \begin{enumerate}[label={(\alph*)}] \itemsep0em \item \textit{Generals}: Elevation, aspect, and slope of the patch \item \textit{Distances}: Horizontal and vertical distances to hydrology\footnote{The dataset info file states that this means "nearest surface water features."}, horizontal distances to roadways and fire points \item \textit{Hillshades}: Shades at 9am, noon, and 3pm \item \textit{Wild areas}: 4 different wilderness areas \item \textit{Soil types}: 40 different types of soil \end{enumerate} \subsubsection*{Network Intrusion Dataset (NI)} In NI, the classification task is to distinguish between "bad" connections (intrusions or attacks) and "good" connections. Approximately 1,000,000 samples were used for training and almost 75,000 for validation, with each sample consisting of 53 features. 
The concept groups are defined following \cite{NI-ds-groups}: \begin{enumerate}[label={(\alph*)}] \itemsep0em \item \textit{Basic}: 20 features regarding individual TCP connections \item \textit{Content}: 14 features regarding the connection suggested by domain knowledge \item \textit{Traffic}: 9 features computed using a two-second time window \item \textit{Host}: 10 features designed to assess attacks which last for more than two seconds \end{enumerate} \subsubsection*{Real-World Dataset (RW)} The proprietary real-world dataset constitutes a binary classification problem. Tens of thousands of samples were used for training and validation. Each sample has approximately 100 features, which were subsequently arranged into 8 concept groups. \subsection{Implementation and hyperparameters} The experiments were implemented in Python and run on a GeForce RTX 2080 Ti GPU and an Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz for all datasets except RW, for which a Tesla V100 GPU and an Intel Xeon CPU E5-2697 v4 @ 2.30GHz were used. The same hyperparameters were used for all teacher networks: $N=2$, $h=4$, $d=64$ and $128$ neurons in the internal layer. These parameters are standard choices for transformer encoders for TD; on the lower end for $N$ and $h$, and on the higher end for $d$ and neurons. The student's architecture is identical, but with $M=4$ and $h=1$. For training, we chose a dropout rate of $0.1$ to prevent overfitting while avoiding a large reduction of the network's capacity. Additionally, we used a temperature of $2$, which provided a balance between producing reliable soft targets and avoiding overly flattening the underlying probability distribution. A constant batch size of $128$ and the Adam \cite{adam} optimizer were employed. Between six and ten lambdas were tested for each dataset's training loss. The lambda corresponding to the highest F1 (for CT and NI) and accuracy (for RW) was selected for the final results, yielding $\lambda_{CT} = 0.005$, $\lambda_{NI} = 0.01$, and $\lambda_{RW} = 0.9$. To account for minibatch randomization, each experiment was repeated five times for each CT and NI student and ten times for each RW student, after which variance is already low (see Table \ref{f1}). In such cases, the \textit{best concept group} per method was defined as the mode of these experiments. However, the distributions of all repetitions are also presented in the pairwise method comparisons detailed in Section \ref{sec:exp}. In order to assess the quality of multi-layer attention-based explanations, we first evaluate the performance of the conceptual transformer encoder for TD presented in Section \ref{sec:transf}. The model is compared against two go-to TD methods: LightGBM and XGBoost, with 1,000 base learners each. Other DL and transformer approaches for TD were not considered in this comparison due to their higher computational requirements and similar (if not slightly worse) performance when compared to boosting methods (as reported in \cite{survey, fttransf2021}). For CT and NI, the aggregated results for five repetitions of each model are shown in Table \ref{f1}. The conceptual transformer's performance follows the same patterns previously discussed, with metrics comparable to, but not necessarily better than, those of the boosting models. 
\begin{table}[H] \caption{Validation F1} \label{f1} \begin{center} \begin{tabular}{l|cc|cc|} & \multicolumn{2}{c|}{\textbf{CT}} & \multicolumn{2}{|c|} {\textbf{NI}} \\ \textbf{Models} & \textbf{Mean} &\textbf{Std Dev} &\textbf{Mean} &\textbf{Std Dev}\\ \hline Conceptual Transformer & 0.96856 & 0.00055 & 0.88715 & 0.01537 \\ LightGBM & 0.96208 & 0.00063 & 0.88875 & 0.00064 \\ XGBoost & 0.97268 & N/A & 0.89226 & N/A \end{tabular} \end{center} \end{table} Conceptual transformers might not be the top-ranked classifiers for all TD cases, but are able to provide explanations for their predictions. As for RW, the conceptual transformer yielded a mean value of $0.88689$ with a standard deviation of $0.00234$. Having validated that its performance is satisfactory, the multi-layer attention-based explanations are extracted as discussed in Section \ref{sec:alg} and compared to those generated using the most popular method from each XAI group: attention-based, gradient-based and perturbation-based. \paragraph{Attention-based: Last-layer explainability (LL)} We consider the attention mechanism as presented in \cite{attention2017}, more specifically the last layer's self-attention head of the student's encoder. The \textit{best concept group} to explain a given prediction is defined as that which corresponds to the highest attention value. \paragraph{Gradient-based: Saliency explainability (SA)} In the same fashion as \cite{Simonyan}, but in the context of TD, the gradients of the loss function with respect to the input (concept groups) are computed. The \textit{best concept group} to explain a given prediction is defined as that which yields the largest mean absolute value. \paragraph{Perturbation-based: Shapley additive explanations (SH)} The SHAP \cite{SHAP} value of each feature is computed. The \textit{best concept group} is defined as that with the largest mean absolute SHAP value. \subsection{Results} \label{sec:exp} \subsubsection*{Explanation Distributions} We analyze explanations at an aggregate level in Figure \ref{figBestCG}, where the distributions of the \textit{best concept group} per method over the whole validation sets for CT and NI datasets are shown. For each type of explanation, we show the proportion of samples that deemed each concept group as best. In general, we do not distinguish between correctly and wrongly classified samples, unless explicitly stated. For each dataset, the number of incorrectly classified samples is less than $5\%$, which has no impact on the overall distributions (see Appendix B). \begin{figure} \centering \subfloat[\centering CT]{{\includegraphics[height=0.205\textwidth,width=0.33\linewidth]{Figures/CT_agg_bestgroupmodepermethod_T3-0.005.png} }}% \subfloat[\centering NI]{{\includegraphics[height=0.2\textwidth,width=0.315\linewidth]{Figures/NI_agg_bestgroupmodepermethod_T1-0.01.png} }}% \subfloat[\centering RW]{{\includegraphics[height=0.2\textwidth,width=0.315\linewidth]{Figures/RW_agg_bestgroupmodepermethod_Lambda0.9.png} }}% \caption{Best concept group distribution per method}% \label{figBestCG} \end{figure} In Figure \ref{figBestCG}, we observe that SA and SH tend to focus on one or two concept groups to assign predictions, whereas LL and MLA appear to take more groups into account when identifying differences among samples. This behavior is consistent across all datasets. In Figure \ref{figBestCGperClass}, we zoom in and observe the above-mentioned distributions for CT and NI, but segmented by predicted class. 
\begin{figure} \centering \begin{subfigure}[b]{1.0\textwidth} \includegraphics[width=\linewidth]{Figures/CT_bestgroup_perclass_T3-0.005.png} \caption*{(a) CT} \end{subfigure} \begin{subfigure}[b]{1.0\textwidth} \includegraphics[width=\linewidth]{Figures/NI_bestgroup_perclass_T1-0.01.png} \caption*{(b) NI} \end{subfigure} \caption{Best group of features per method by class} \label{figBestCGperClass} \end{figure} For CT, a consistent focus on \textit{Generals} and \textit{Distances} is observed across all classes. However, LL and MLA seem to also assign large explainability values to \textit{Soil Type}. A specific focus towards one concept group for a certain class is only observed by SA and SH for the second class. In contrast, the methods show a stronger focus on specific groups for a given class for NI (see Figure \ref{figBestCGperClass}b). All methods coincide in assigning the largest explainability value to \textit{Traffic} for class $0$. Interestingly, MLA points at \textit{Content} and \textit{Host} as explanations to predict class $1$, whereas LL points at \textit{Basic} and \textit{Host}, SA at \textit{Host}, and SH at \textit{Traffic}. Additionally, \textit{Host} and \textit{Traffic} are consistently referred to as the \textit{best concept groups} for class $2$ by all methods (with some explanation value assigned to \textit{Basic} by LL as well). In summary, we observe large consistency across methods on their \textit{best concept group} selection for CT, with a particularly strong alignment between LL and MLA. For NI, consistency is notable for classes $0$ and $2$, with LL showing some misaligned samples across all classes. An Exploratory Data Analysis (EDA) was conducted on CT and NI as a means to validate which features are most relevant for each class. Notably, in Figure \ref{figEDA_CT}a, we observe that CT's features corresponding to concept group \textit{Distances} do not seem to be particularly distinctive among classes according to their distributions. Perhaps their predictive power is better in conjunction with other groups. In contrast, Figure \ref{figEDA_CT}b shows that \textit{Soil Type} does provide a clear differentiation between classes. All samples from class $2$ have soil types in $\{0, ..., 9\}$, whereas samples from class $0$ do not have soil types lower than $9$. Even though the EDA clearly shows \textit{Soil Type} concept group's relevance for the classification task, only LL and MLA methods capture this information. In the NI dataset most features are continuous. Hence, only a small subset of them are presented and shown in Figure \ref{fig8} in the Appendix. Through a similar EDA, we observe that \textit{Host} and \textit{Traffic} are indicative of all classes, which is consistent with most of the distributions shown in Figure \ref{figBestCGperClass}b. However, for several samples of class $1$, LL assigns \textit{Basic} and MLA assigns \textit{Content} as the \textit{best concept groups}, while these correspondences are less clear in the EDA. \begin{figure}[h] \centering \begin{subfigure}[b]{1.0\textwidth} {\includegraphics[width=1.0\linewidth]{Figures/CT_distances_val_2.png} } \caption*{(a) Density of relevant distances features} \end{subfigure} \begin{subfigure}[b]{1.0\textwidth} \centering \includegraphics[width=0.8\linewidth]{Figures/CT_soiltypes_val.png} \caption*{(b) Number of samples per Soil Type} \end{subfigure} \caption{CT EDA} \label{figEDA_CT} \end{figure} We conclude that LL and MLA are better aligned with the findings of the EDA. 
For classes $0$ and $1$, both methods show similar correspondences (and disagreements) with it. However, for the least represented class in each dataset (class $2$), MLA's alignment to the EDA appears better than LL's, as it focuses on the groups highlighted by the EDA for a larger number of samples. \subsubsection*{Explanation Visualizations} To get a visual representation of the explanations for a given sample, each of the compared methods' explainability values for each concept group are plotted in the heatmaps below. In the 2D heatmaps, for $j, k \in \{1, ..., m\}$, the values for MLA correspond to the probability of the maximum probability path between $v^{0}_{j}$ and $v^{M}_{k}$, whereas the values for LL correspond to $a^{M}_{j,k}$. Lighter color tones correspond to lower explainability values. For comparability across methods, values have been scaled to $[0, 1]$ and only correctly classified samples were considered. \begin{figure}[h] \centering \subfloat[ Sample 36, Class 1]{{\includegraphics[width=0.46\linewidth]{Figures/Samples/CT/Sample36.png} }}% \hspace{0.8cm} \subfloat[ Sample 94, Class 2]{{\includegraphics[width=0.46\linewidth]{Figures/Samples/CT/Sample94.png} }}% \caption{CT Concept groups explainability coefficients} \label{figVis_CT} \end{figure} \begin{figure}[h] \centering \subfloat[ Sample 21259, Class 0]{{\includegraphics[width=0.46\linewidth]{Figures/Samples/NI/Sample21259.png} }}% \hspace{0.8cm} \subfloat[ Sample 13927, Class 1]{{\includegraphics[width=0.46\linewidth]{Figures/Samples/NI/Sample13927.png} }}% \caption{NI Concept groups explainability coefficients} \label{figVis_NI} \end{figure} Figures \ref{figVis_CT} and \ref{figVis_NI} show explanations corresponding to a couple of correctly classified samples from datasets CT and NI. Figure \ref{figVis_CT}a shows a sample of CT where all methods identified \textit{Distances} as the \textit{best concept group}. This implies that the information obtained from the distances to hydrology, roadways, and fire points was identified by all methods as the most relevant for the model to conclude that the correct class was $1$. In contrast, in Figure \ref{figVis_CT}b we observe a sample of class $2$ for which methods LL and MLA identified \textit{Soil Type} as the \textit{best concept group}, whereas SA and SH assigned larger explainability values to \textit{Distances} and \textit{Generals}, respectively. Similarly, two NI samples are presented in Figure \ref{figVis_NI}. In Figure \ref{figVis_NI}a, we observe that features related to \textit{Traffic} were given larger values by all methods, yet in Figure \ref{figVis_NI}b there is again a lack of agreement among methods, although the two attention-based methods coincide. \subsubsection*{Pairwise Method Comparison} We now contrast the results provided per method by conducting pairwise comparisons among them. To do so, we quantify the number of samples for which the selected \textit{best concept group} is the same for two methods, i.e., for what percentage of the samples do two methods choose the same \textit{best concept group}. The distributions of such values across the various runs are presented in Figure \ref{figPairwise}. On average, the pairwise comparisons with MLA are higher for CT and NI. As seen in Figure \ref{figPairwise}c, MLA seems to provide very different explanations to the ones generated by other methods for RW. The red square in each boxplot corresponds to the percentage of samples that chose the same \textit{best concept group} when defined as the mode across all repetitions. 
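The pairwise comparison itself amounts to simple counting; a minimal sketch (hypothetical helper functions, assuming the per-sample best groups have already been extracted for each method) is:
\begin{verbatim}
import numpy as np

def pairwise_agreement(best_a, best_b):
    """Fraction of samples for which two methods select the same best
    concept group (inputs: arrays of group indices, one per sample)."""
    return float(np.mean(np.asarray(best_a) == np.asarray(best_b)))

def two_best_agreement(two_best_a, two_best_b):
    """Fraction of samples for which the two best groups of two methods
    share at least one group."""
    return float(np.mean([len(set(a) & set(b)) > 0
                          for a, b in zip(two_best_a, two_best_b)]))
\end{verbatim}
The second function corresponds to the \textit{two best concept groups} comparison discussed below.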
We expected MLA and LL to have high agreement; however, this is not the case. \begin{figure}[H]% \centering \subfloat[\centering CT]{{\includegraphics[width=0.3\textwidth]{Figures/CT_singlebestpairwise.png} }}% \subfloat[\centering NI]{{\includegraphics[width=0.3\textwidth] {Figures/NI_singlebestpairwise.png} }}% \subfloat[\centering RW]{{\includegraphics[width=0.3\textwidth]{Figures/RW_singlebestpairwise.png} }}% \caption{Best concept group pairwise comparison}% \label{figPairwise}% \end{figure} As noted in the EDA, a single group does not always provide all the relevant information for a model to predict the class of a given sample. To account for this, we identify the \textit{two best concept groups} per method and quantify the percentage of samples where at least one of those two is the same for each pair of methods. The resulting distributions, means, and modes per pairwise comparison are reported in Figure \ref{figPairwise2B}. \begin{figure}[H]% \centering \subfloat[\centering CT]{{\includegraphics[width=0.3\textwidth]{Figures/CT_twobestpairwise.png} }}% \subfloat[\centering NI]{{\includegraphics[width=0.3\textwidth]{Figures/NI_twobestpairwise.png} }}% \subfloat[\centering RW]{{\includegraphics[width=0.3\textwidth]{Figures/RW_twobestpairwise.png} }}% \caption{Two best concept groups pairwise comparison}% \label{figPairwise2B}% \end{figure} When considering the \textit{two best concept groups}, we observe very high pairwise agreement across methods. For MLA, all mean and mode pairwise overlaps are above $75\%$ and $92\%$, respectively. MLA, SA, and SH consistently show large pairwise agreements across datasets, and LL yields the smallest number of coinciding explanations. In general, MLA shares similarities with LL, considering that MLA is itself an attention-based method. However, across all methods, LL shows the largest variability. This seems to be improved by MLA through acknowledging attention graphs as a whole. Additionally, MLA shows better pairwise results with SA and SH than LL. On the other hand, our experiments show that gradient- and perturbation-based methods (SA and SH) are more similar to each other than to the attention-based methods. They produce similar explanations and focus mostly on a reduced number of groups (which are not necessarily different across classes) to generate explanations for predictions. \subsubsection*{Stability Analysis} The stability of the explanations is analyzed by quantifying the percentage of distinct runs that agree on the same explanation for each sample. Given the previously discussed observation that SA and SH tend to steadily choose the same groups even across different samples, we focus on methods MLA and LL for this analysis. Figure \ref{figStability} shows the boxplots for the best (1B) and two best (2B) \textit{concept groups} per dataset. \begin{figure}[H]% \centering \subfloat[\centering CT]{\includegraphics[width=0.25\linewidth]{Figures/CT_stability_AG_LL.png}}% \subfloat[\centering NI]{{\includegraphics[width=0.25\linewidth]{Figures/NI_stability_AG_LL.png} }}% \subfloat[\centering RW]{{\includegraphics[width=0.25\linewidth]{Figures/RW_stability_AG_LL.png} }}% \caption{Percentage of runs that agree on the best (1B) and two best (2B) \textit{concept groups} per method}% \label{figStability}% \end{figure} For these datasets, the 1B \textit{concept groups} comparison of MLA and LL appears inconclusive. For CT, we observe a better performance of MLA but larger variability than LL. 
On the other hand, the exact opposite can be said for RW, while both distributions seem to be identical for NI. For the 2B \textit{concept groups} case, both methods appear to be quite stable, with an average agreement of over $60\%$ across runs. Again, the method-to-method comparison seems to be dataset-dependent; however, MLA shows lower variability than LL. It is important to note that correlation between concept groups could have a major impact on the 1B results. In the extreme case in which the data has perfectly correlated groups, the methods are free to choose one group over the other at random. Identifying the \textit{two best concept groups} helps to mitigate this issue. \section{Conclusion} \label{sec:conclusion} In this paper, we present a novel explainability method for TD that leverages transformer models and incorporates knowledge from the graph structure of attention matrices. Combining these two, we propose a way of identifying the concept groups of input features that provide the model with the most relevant information to make a prediction. We compare our method with well-known gradient-, attention-, and perturbation-based explanations and highlight the similarities and dissimilarities observed in our experiments. \clearpage \bibliographystyle{abbrv}
{ "arxiv_id": "2302.14281", "language": "en", "timestamp": "2023-03-01T02:07:57", "url": "https://arxiv.org/abs/2302.14281", "yymm": "2302" }
\section*{Introduction} \subsection*{1}Classical Calogero-Moser (CM) systems were among the first integrable systems of $N$ one-dimensional particles \cite{Ca}\cite{Mo}, with the potential $1/(q_i-q_j)^2$. This model was generalized to the potential $1/\textup{sh}^2(q_i-q_j)$ in \cite{Suth}. Then it was extended to other root systems and to elliptic potentials in \cite{OP}, and to a model involving spin degrees of freedom in \cite{GH}. There is an extensive literature on spin versions of CM systems. For example, in \cite{KBBT}\cite{KrZ} solutions to the equations of motion of the elliptic spin Calogero-Moser system were related to special elliptic solutions of the matrix KP hierarchy. The relation to gauge theories was explored in many papers, see for example \cite{Nekr}\cite{FoR}. A variety of spin CM systems were obtained by L. Feher, see for example \cite{FP1}\cite{FP2}\cite{Fe1}; in particular, he derived important examples related to homogeneous spaces. Two spin CM systems were studied in \cite{KLOZ1}\cite{KLOZ2}. Integrable chains of relativistic spin CM type systems were studied in \cite{ChF}\cite{AO}. Superintegrability of spin CM systems and of spin Ruijsenaars systems was established in \cite{R1}. In \cite{R3} the superintegrability of spin CM systems on homogeneous spaces was established. A family of superintegrable systems on moduli spaces of flat connections was constructed in \cite{AR}. This family includes systems studied in \cite{ChF}\cite{AO}. In this particular case the system is also Liouville integrable. In this paper we describe classical superintegrable systems which we call spin Calogero-Moser (CM) chains. We call them spin CM chains because they combine features of many particle systems (as in CM systems) and of spin chains. We distinguish two cases: a {\it periodic chain} and an {\it open chain}. The periodic case is the classical version of a quantum integrable system where joint eigenfunctions of quantum commuting Hamiltonians are trace functions, see \cite{ES0}. In this case the spin part of the system resembles a spin chain with periodic boundary conditions. In the case of rank 1 orbits for $\mathfrak{sl}_n$ these systems are linearized versions of \cite{ChF} and \cite{AO}. In the open case they are a classical version of quantum integrable systems constructed in \cite{SR}\cite{RS}. For these systems the spin part is similar to an open spin chain. In both cases, i.e. in the periodic and in the open spin Calogero-Moser chains, the phase space is a stratified symplectic space \cite{LS}, which in some cases has only one stratum and becomes a symplectic manifold. \subsection*{2} Recall that a {\it superintegrable system} is a structure on a symplectic manifold $\mathcal{M}$ that consists of a Poisson manifold $\mathcal P$, a Poisson manifold $\mathcal B$ with the trivial Poisson structure (i.e. zero Poisson tensor) and two surjective Poisson projections \begin{equation}\label{DIT} \mathcal{M}\stackrel{p_1}{\rightarrow} \mathcal P \stackrel{p_2}{\rightarrow} \mathcal B \end{equation} such that $\dim(\mathcal{M})=\dim(\mathcal P)+\dim(\mathcal B)$. For a superintegrable system, a generic fiber of $p_1$ is an isotropic submanifold of dimension $\dim(\mathcal B)$, and the connected components of a generic fiber of $p_2$ are symplectic leaves of $\mathcal P$. For details see \cite{N}, \cite{R2} and references therein. 
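For orientation, a familiar example of such a triple (stated here only as a sketch, with sign conventions and normalizations suppressed; it is not specific to the systems constructed in this paper) is the cotangent bundle of a Lie group:
\[
\mathcal{M}=T^*G\simeq \mathfrak{g}^*\times G,\qquad
\mathcal P=\{(x,y)\in\mathfrak{g}^*\times\mathfrak{g}^*\,\,|\,\, -y \hbox{ lies in the coadjoint orbit of } x\},\qquad
\mathcal B=\mathfrak{g}^*/G,
\]
with $p_1$ the pair of moment maps for the actions of $G$ by left and right translations and $p_2$ the projection to the Casimir functions. Generically $\dim(\mathcal P)=2\dim(\mathfrak{g})-\textup{rank}(\mathfrak{g})$ and $\dim(\mathcal B)=\textup{rank}(\mathfrak{g})$, so the dimensions balance; a generic fiber of $p_1$ is a coset of a centralizer, of dimension $\textup{rank}(\mathfrak{g})$, and a generic fiber of $p_2$ is a product of two coadjoint orbits, i.e. a symplectic leaf of $\mathcal P$.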
Here we adapt this notion to the case of stratified symplectic and Poisson spaces, in which case $p_1$ and $p_2$ are Poisson mappings between stratified spaces. In this paper superintegrability means the balance of dimensions on the big stratum. How the system behaves at smaller strata will be the subject of a separate publication. In the algebraic case, the appropriate setting is symplectic and Poisson stacks. Let $I$ be a Poisson commutative subalgebra of $A=C^\infty(\mathcal{M})$ that consists of functions which are constant on fibers of $p_2\circ p_1$ (the pull-back of functions on $\mathcal B$ to functions on $\mathcal{M}$) and $J$ be the Poisson algebra of functions which are constant on fibers of $p_1$ (the pull-back of functions on $\mathcal P$). The condition on $(\mathcal{M}, \mathcal P, \mathcal B)$ for being a superintegrable system is equivalent to the following condition on $I\subset J\subset A$: the Poisson algebra $A$ has trivial center, $I\subset A$ is a Poisson commutative subalgebra, and its centralizer $J$ in $A$ has the maximal possible Gelfand-Kirillov dimension for the given Gelfand-Kirillov dimension of $I$. The Hamiltonian dynamics generated by a function $H\in I$ is called {\it superintegrable}. Any function from $J$ is constant along flow lines of the vector field generated by $H$ and thus, it is an integral of motion for the Hamiltonian dynamics generated by $H$. This is why we call elements of the Poisson commutative subalgebra $I$ Hamiltonians and elements of $J$ conservation laws. \subsection*{3}Throughout this paper $G$ is a split real connected semisimple Lie group with finite center which admits a complexification, and $\Theta\in\textup{Aut}(G)$ is a Cartan involution. We denote by $K=G^{\Theta}$ the closed subgroup of fixed points of $\Theta$, which is connected and maximal compact. Let $\theta$ be the corresponding Cartan involution\footnote{Recall that an involution $\theta:\mathfrak{g}\to\mathfrak{g}$ is a Cartan involution when the bilinear form $(-\theta(x),y)$ on $\mathfrak{g}$ is positive definite. Here $(\cdot,\cdot)$ is the Killing form.} of $\mathfrak{g}$, and $\mathfrak{k}$ the Lie algebra of $K$. The associated Cartan decomposition of $\mathfrak{g}$ is $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$, with $\mathfrak{p}$ the $(-1)$-eigenspace of $\theta$. Let $\mathfrak{a}\subset \mathfrak{g}$ be a maximally noncompact $\theta$-stable Cartan subalgebra of $\mathfrak{g}$. Since $\mathfrak{g}$ is split we have $\mathfrak{a}\subseteq\mathfrak{p}$. On the Lie group level, $A=\exp(\mathfrak{a})\subset G$ is a maximal real split torus in $G$ and $H:=Z_G(A)$, the centraliser of $A$ in $G$, is a Cartan subgroup in $G$ containing $A$. The exponential map provides an isomorphism $\mathfrak{a}\overset{\sim}{\longrightarrow} A$, whose inverse we denote by $\log: A\rightarrow\mathfrak{a}$. Consider the root decomposition of $\mathfrak{g}$ with respect to the Cartan subalgebra $\mathfrak{a}$, \[ \mathfrak{g}=\mathfrak{a}\oplus \bigoplus_{\alpha\in R} \mathfrak{g}_\alpha \] where $R\subset\mathfrak{a}^*$ is the root system of $\mathfrak{g}$ relative to $\mathfrak{a}$. Choose $e_\alpha\in \mathfrak{g}_\alpha$ such that \begin{equation}\label{RB} \theta(e_\alpha)=-e_{-\alpha} \end{equation} and $(e_\alpha, e_{-\alpha})=1$ for each $\alpha\in R$, and choose a subset $R_+\subset R$ of positive roots. Let $W\subset\textup{GL}(\mathfrak{a}^*)$ be the Weyl group of $R$. The Weyl group $W$ is isomorphic to $N_G(A)/H$, where $N_G(A)$ is the normaliser of $A$ in $G$. 
Denote by $A_{reg}$ the set of regular elements $a\in A$, \[ A_{reg}:=\{a\in A \,\, | \,\, a_\alpha:=e^{\alpha(\log(a))}\not=1 \hbox{ for all } \alpha\in R\}. \] It is the union of all the regular $W$-orbits in $A$. A fundamental domain for the $W$-action on $A_{reg}$ \footnote{In the case of $SL_n(\mathbb{R})$ one can take $A$ to be the diagonal unimodular matrices with positive real entries, and $A_{reg}$ consists of those with distinct diagonal entries.} is the positive Weyl chamber \[ A_+=\{a\in A \,\, | \,\, a_\alpha:=e^{\alpha(\log(a))} > 1 \mbox{ for any } \alpha\in R_+\}. \] Let $G^\prime\subset G$ be the set of elements $g\in G$ which are $G$-conjugate to some element in $A_{reg}$. The inclusion $A_{reg}\hookrightarrow G^\prime$ induces a bijection $A_{reg}/W\overset{\sim}{\longrightarrow}G^\prime/G$, with $A_{reg}/W$ the set of $W$-orbits in $A_{reg}$ and $G^\prime/G$ the set of $G$-conjugacy classes in $G^\prime$. The Weyl group $W$ is also isomorphic to $N_K(A)/M$, where $N_K(A)=N_G(A)\cap K$ and $M=H\cap K$ (note that $M$ is a finite group since $G$ is split). The inclusion map $A\hookrightarrow G$ induces an isomorphism $A/W\overset{\sim}{\longrightarrow} K\backslash G/K$. We write $G_{reg}=KA_+K$ for the union of the double $(K,K)$-cosets intersecting $A_{reg}$. \subsection*{4} The phase space of a periodic spin Calogero-Moser chain corresponding to a collection $\mathcal O=\{\mathcal O_1,\dots, \mathcal O_n\}$ of coadjoint orbits $\mathcal O_i\subset \mathfrak{g}^*$ is the regular part of the symplectic leaf $\mathcal{S}(\mathcal O)$ of the stratified Poisson space $T^*(G^{\times n})/G_n$, with the action of the gauge group $G_n=G^{\times n}$ the lift of a twisted conjugation action on $G^{\times n}$ (see section \ref{g-act-per}).\footnote{In this paper we assume that all quotients $X/H$ are GIT quotients.} Here we assume that each of the $\mathcal O_i$ is non-trivial, i.e. $\mathcal O_i\neq \{0\}$. These symplectic leaves are obtained by Hamiltonian reduction, as described in section \ref{g-act-per}. As a stratified symplectic space \[ \mathcal{S}(\mathcal O)\simeq \{(x_1,\dots, x_n, g)\in {\mathfrak{g}^*}^{\times n}\times G\,\, | \,\, x_1-\textup{Ad}_{g^{-1}}^*x_n\in\mathcal{O}_1,\,\, x_{i}-x_{i-1}\in\mathcal{O}_i,\,\, i=2,\ldots,n \}/G, \] see section \ref{ph-space-reg}.\footnote{There are $n$ such natural isomorphisms $\varphi_j$ ($1\leq j\leq n$), see section \ref{g-red}. In the introduction we use $\varphi_n$.} Its regular part is defined as the intersection $\mathcal{S}(\mathcal O)_{reg}=\mathcal{S}(\mathcal O)\cap ({\mathfrak{g}^*}^{\times n}\times G')/G$.\footnote{A better way to think about the periodic spin Calogero-Moser system for a real split simple Lie group $G$ is to define $\mathcal{S}(\mathcal O)_\mathbb{C}$ in the complex algebraic setting and then to take the corresponding real slice. This will be addressed in another publication.} The regular part has the following structure as a symplectic manifold, $\mathcal{S}(\mathcal O)_{reg}\simeq \bigl(\nu_{\mathcal O}^{-1}(0)/H\times T^*A_{reg}\bigr)/W$, where $\nu_\mathcal O:\mathcal O_1\times\cdots\times\mathcal O_n\rightarrow\mathfrak{a}^*$ is the moment map for the diagonal coadjoint action of $H$ on $\mathcal O_1\times\cdots\times\mathcal O_n$ (see section \ref{ph-space-reg}). 
Trivialization of $T^*G$ by right translations gives an isomorphism $T^*(G^{\times n})\simeq {\mathfrak{g}^*}^{\times n} \times G^{\times n}$ and the Poisson projection $T^*(G^{\times n})/G_n\to {(\mathfrak{g}^*/G)}^{\times n}$, which is the projection to the cotangent directions followed by quotienting with respect to the coadjoint action of $G^{\times n}$. Poisson commuting Hamiltonians of the periodic spin Calogero-Moser system are functions on $T^*(G^{\times n})/G_n$ which are constant on fibers of this Poisson projection. More precisely, the Poisson commuting Hamiltonians are such functions restricted to $\mathcal{S}(\mathcal O)_{reg}$. Consider the increasing set of natural numbers $2=d_1\leq \cdots\leq d_r$ with $r=\textup{rank}(\mathfrak{g})$ and $d_k-1$ the exponents of $\mathfrak{g}$. Let $c_{d_k}$ be the nonzero coadjoint invariant functions on $\mathfrak{g}^*$ of degree $d_k$, known as Casimir functions. The function $c_2$ is the quadratic Casimir of $\mathfrak{g}$. Let $H^{(l)}_{d_k}$ be the function on $({\mathfrak{g}^*/G})^{\times n}$ which is $c_{d_k}$ on the $l$-th factor and constant on all other factors. Let us denote vectors in $\mathcal O_k\subset \mathfrak{g}^*$ by $\mu^{(k)}$, their Cartan components by $\mu_0^{(k)}$ and set $\mu_\alpha^{(k)}=\mu^{(k)}(e_{-\alpha})$ for $\alpha\in R$. Denote by $(p,a)$ points on $T^*A\simeq \mathfrak{a}^*\times A$. Now let us describe quadratic Hamiltonians in terms of these variables. The $n$-th quadratic Hamiltonian is the spin Calogero-Moser Hamiltonian. It has a particularly simple form: \[ H^{(n)}_2=\frac{1}{2} (p,p)-\sum_{\alpha>0}\frac{\mu_\alpha \mu_{-\alpha}}{2\textup{sh}^2(q_\alpha)} \] where we used the parametrization $a_\alpha=e^{q_\alpha}$ (so $q_\alpha=\alpha(\log(a))$) and $\mu_\alpha=\mu_\alpha^{(1)}+\cdots+\mu_\alpha^{(n)}$, and $(\cdot,\cdot)$ is the Euclidean form on $\mathfrak{a}^*$ obtained by dualising the restriction of the Killing form of $\mathfrak{g}$ to $\mathfrak{a}$. The differences $D_k=H^{(k)}_2-H^{(k-1)}_2$ for $1<k\leq n$ are classical analogs of topological Knizhnik-Za\-mo\-lod\-chi\-kov-Bernard differential operators, \begin{equation}\label{Dk-per-int} D_k=(\mu^{(k)}_0,p)-\sum_{l=1}^{k-1} r_{lk}+\sum_{l=k+1}^nr_{kl} \end{equation} where $r_{kl}$ for $k\not=l$ is a classical version of Felder's dynamical $r$-matrix \cite{F}, \begin{equation}\label{F-r-int} r_{kl}=-\frac{1}{2}(\mu^{(k)}_0, \mu^{(l)}_0)+\sum_{\alpha}\frac{ \mu^{(k)}_{-\alpha} \mu^{(l)}_{\alpha}}{a_\alpha-1} \end{equation} and $\sum_\alpha$ stands for the sum over all the roots $\alpha\in R$. This explicit form of $D_k$ is derived in section \ref{ph-space-reg}. The superintegrability of this system is described in section \ref{pCM-sup}. The projection method for constructing solutions of equations of motion and angle variables are described in section \ref{solutions}. One can choose $G$ to be the maximal compact real form of the complexification $G_\mathbb{C}$. In this case the integrable system is similar, but hyperbolic functions get replaced by trigonometric ones. The structure of the phase space is again a stratified symplectic space. The superintegrability of the quantum counterpart of this compact case is proven in \cite{R4}. 
\subsection*{5} The phase space of an open spin Calogero-Moser chain is the regular part of a symplectic leaf of the Poisson manifold $T^*(G^{\times n+1})/(K\times G^{\times n}\times K)$ where the action of the gauge group $K\times G^{\times n}\times K$ is described in section \ref{op-g-reduct}, and $K\subset G$ is as above. Such symplectic leaves are given by Hamiltonian reduction. They are parametrized by collections of coadjoint orbits $\mathcal O=\{\mathcal O^K_\ell, \mathcal O_1, \dots, \mathcal O_{n}, \mathcal O^K_r\}$ where $\mathcal O_i\subset \mathfrak{g}^*$ and $\mathcal O^K_{\ell,r}\subset \mathfrak{k}^*\subset \mathfrak{g}^*$ are coadjoint orbits. We assume that none of the $\mathcal O_i$ is trivial, i.e. $\mathcal O_i\neq \{0\}$. We denote the corresponding symplectic leaf by $\mathcal{S}(\mathcal O)$. It is a stratified symplectic space. Using the Cartan decomposition $G=KAK$ and a "gauge fixing", we define the regular part $\mathcal{S}(\mathcal O)_{reg}$ of $\mathcal{S}(\mathcal O)$ as the stratum \[ \mathcal{S}(\mathcal O)_{reg}\simeq (T^*A_{reg}\times \mathcal O^K_\ell\times \mathcal O_1\times\dots\times \mathcal O_{n}\times \mathcal O^K_r)/N_K(A), \] where on the right we have a natural product symplectic structure. Similarly to the periodic case, quadratic Hamiltonians can be computed explicitly in terms of Cartan components $\mu^{(k)}_0$ and root coordinates $\mu^{(k)}_\alpha$ of vectors $\mu^{(k)}\in \mathcal O_k$, coordinates $\mu'_{[\alpha]}, \mu''_{[\alpha]}$ on $\mathcal O_\ell^K$ and $\mathcal O_r^K$ respectively (in the basis elements $e_{[\alpha]}=e_{-\alpha}-e_\alpha\in \mathfrak{k}\subset \mathfrak{g}$ for $\alpha\in R_+$), and $(p,a)\in T^*A_{reg}$. Assuming the gauge fixing $\phi_n$ (see section \ref{op-g-reduct}) we have \[ H^{(n)}_2=\frac{1}{2}(p,p)+\sum_{\alpha>0}\frac{(a_\alpha\mu_{[\alpha]}^\prime+\mu_{[\alpha]}^{\prime\prime}+a_\alpha(\mu_\alpha-\mu_{-\alpha}))(a_\alpha^{-1}\mu_{[\alpha]}^\prime+\mu_{[\alpha]}^{\prime\prime}+a_\alpha^{-1}(\mu_\alpha-\mu_{-\alpha}))}{(a_\alpha-a_{-\alpha})^2} \] For other quadratic Hamiltonians the differences \[ D_k=H_2^{(k)}-H_2^{(k-1)}\qquad\quad (1\leq k\leq n) \] are more interesting. They are classical analogs of boundary Knizhnik-Zamolodchikov-Bernard differential operators \cite{SR}\cite{RS}. We have the following formula for $D_k$: \[ D_k=(\mu^{(k)}_0,p)-\sum_{l=1}^{k-1}(r_{lk}+r_{lk}^{\theta_l})+(\sum_\alpha K_\alpha\mu^{(k)}_{-\alpha}-\kappa_k) +\sum_{l=k+1}^n(r_{kl}-r_{kl}^{\theta_k}). \] Here $r_{kl}$ for $k\not=l$ is now the Felder dynamical $r$-matrix rescaled in $a\in A_{reg}$, \begin{equation}\label{F-r-b} r_{kl}=-\frac{1}{2}(\mu^{(k)}_0, \mu^{(l)}_0)+\sum_{\alpha}\frac{ \mu^{(k)}_{-\alpha}\mu^{(l)}_{\alpha}}{a_\alpha^2-1}, \end{equation} $\theta_k$ is the transpose of the Cartan involution acting on $\mu^{(k)}$, \[ \kappa_k=\frac{1}{2} (\mu^{(k)}_0,\mu^{(k)}_0)+\sum_\alpha\frac{(\mu^{(k)}_\alpha)^2}{1-a_{\alpha}^2} \] and \begin{equation}\label{Ka} K_\alpha=\frac{a_\alpha\mu'_{[\alpha]}+\mu''_{[\alpha]}}{a_\alpha-a_{\alpha}^{-1}}. \end{equation} The superintegrability of open spin CM chains is proven in section \ref{op-sint}. The projection method for solving equations of motion and angle coordinates are described in section \ref{op-proj-dyn}. \subsection*{6} The structure of the paper is as follows. 
In section \ref{pCM} we construct periodic spin CM chains by Hamiltonian reduction and prove their superintegrability. In section \ref{g-act-per} we describe the phase space of such a system. In sections \ref{g-red}, \ref{ph-space-reg} we describe the regular part of the phase space. Hamiltonians of a periodic spin CM chain, restricted to the regular part of the phase space, are described in section \ref{Ham-per}. The superintegrability of a periodic spin CM chain is proven in section \ref{pCM-sup}. In section \ref{solutions} solutions to equations of motion are described algebraically by the projection method, and angle variables are described. In section \ref{open-CM} we focus on open spin CM chains. In section \ref{ph-space-open} we describe phase spaces. In sections \ref{op-g-reduct}, \ref{op-s-leaf} we describe the regular part of the phase space. Hamiltonians of an open spin CM chain, restricted to the regular part of the phase space, are described in section \ref{qu-Ham}. The superintegrability of an open spin CM chain is proven in section \ref{op-sint}. In section \ref{op-proj-dyn} solutions to equations of motion are described algebraically by the projection method, and angle variables are described. In the conclusion (section \ref{concl}) we discuss some open problems and describe in detail the periodic spin CM chain for $SL_N$ with orbits of rank 1. In Appendix \ref{R3} we compare our symplectic leaves with the ones from \cite{R3}. Throughout this paper we will focus on split real semisimple Lie groups. However, since all constructions are algebraic they extend (with appropriate modifications) to the complex algebraic case. The non-split real case will be the subject of a separate publication (see \cite{RS} for the quantum case). Another important real case is when $G$ is compact, which can be deduced from the complex algebraic case by restriction to a compact real form. The structure of phase spaces as stratified symplectic spaces will be explored further in \cite{CJRX}. \subsection*{Acknowledgments}This paper was started as a joint project with Jasper Stokman. The author is grateful to Jasper for many discussions and for the collaboration on this paper. He also would like to thank Vladimir Fock, Eva Miranda and Hessel Postuma for important discussions and remarks, and Zhuo Chen, Kai Jiang and Husileng Xiao for discussions on stratified symplectic spaces. N.R. wants to thank ITS-ETH, where the bulk of this work was completed, for the hospitality. The work of N.R. was supported by the NSF grant DMS-1902226, by the Dutch research council (NWO 613.009.126), and by the grant RFBR No. 18-01-00916. \section{Periodic spin Calogero-Moser chains}\label{pCM} \subsection{The phase space as the Hamiltonian reduction} \label{g-act-per} Here we will describe the phase space of a periodic spin Calogero-Moser chain as a Hamiltonian reduction of $T^* (G^{\times n})$. Let us start with the description of these symplectic spaces. Consider the manifold $T^*(G^{\times n})$ with the standard symplectic structure. The cotangent bundle over a Lie group can be trivialized by right translations, which gives an isomorphism of vector bundles \[ T^*(G^{\times n})\simeq (T^*G)^{\times n}\simeq {\mathfrak{g}^*}^{\times n}\times G^{\times n} \] We will choose this trivialization throughout the paper. The Lie group $G_n:=G^{\times n}$ acts naturally on itself by left and right translations. 
Lifting these actions to $T^*(G^{\times n})$, after the trivialization of the cotangent bundle, we can write the action by left translations as: \[ h_L(x,g)=(Ad_{h_1}^*(x_1), Ad_{h_2}^*(x_2), \dots, Ad_{h_{n}}^*(x_n), h_1g_1, h_2g_2, \dots, h_{n}g_n) \] and the action by right translations as \[ h_R(x,g)=(x_1,\dots, x_n, g_1h_1^{-1}, \dots, g_nh_n^{-1}) \] Both these actions are Hamiltonian with moment maps \[ \mu_L(x,g)=(x_1,x_2,\dots, x_n) \] and \[ \mu_R(x,g)=(-Ad_{g_1^{-1}}^*(x_1), \dots, -Ad_{g_n^{-1}}^*(x_n)) \] respectively. Actions by left and right translations can be twisted by permutations. In particular, we can twist the action by left translations by a cyclic permutation. Combining the twisted left action with the non-twisted right action we obtain the "gauge action" of $G_n$ on $G^{\times n}$\footnote{One can twist both left and right actions by a permutation. This leads to other superintegrable systems.} \[ h(g_1,\dots, g_n)=(h_1g_1h_2^{-1},h_2g_2h_3^{-1},\dots, h_{n}g_nh_1^{-1}) \] Lifting the twisted conjugation action of $G_n$ on $G^{\times n}$ to $T^*(G^{\times n})$ we obtain the "gauge action" on the cotangent bundle: \begin{equation}\label{tad-act} h(x,g)=(Ad_{h_1}^*(x_1), Ad_{h_2}^*(x_2), \dots, Ad_{h_{n}}^*(x_n), h_1g_1h_2^{-1}, h_2g_2h_3^{-1}, \dots, h_{n}g_nh_1^{-1}) \end{equation} Because this is the combination of two Hamiltonian actions, the gauge action is also Hamiltonian with the moment map $\mu: T^*(G^{\times n})\to {\mathfrak{g}^*}^{\times n}$: \begin{equation}\label{tad-mmap} \mu(x,g)=\mu_L(x,g)+\mu_R^{tw}(x,g)=(x_1-Ad_{g_n^{-1}}^*(x_n), x_2-Ad_{g_1^{-1}}^*(x_1), \dots, x_n-Ad_{g_{n-1}^{-1}}^*(x_{n-1})) \end{equation} where $\mu_R^{tw}$ is the right moment map, twisted by the cyclic permutation. Because the gauge action \eqref{tad-act} of $G_n$ is Hamiltonian, the quotient space $T^*(G^{\times n})/G_n$ is a Poisson space.\footnote{This space is singular. Having in mind the classical-quantum correspondence we need the algebra of functions on $T^* G^{\times n}/G_n$. Thus, by the quotient space we will always mean the GIT quotient. By definition, functions on $T^*(G^{\times n})/G_n$ are $G_n$-invariant functions on $T^*(G^{\times n})$.} Symplectic leaves of $T^*(G^{\times n})/G_n$ are given by the Hamiltonian reduction with respect to the moment map (\ref{tad-mmap}). Let $\mathcal O_1,\dots, \mathcal O_n$ be coadjoint orbits in $\mathfrak{g}^*$, then the corresponding symplectic leaf in $T^*(G^{\times n})/G_n$ is \begin{equation}\label{SOoriginal} \mathcal{S}(\mathcal O)=\mu^{-1}(\mathcal O_1\times\dots\times \mathcal O_n)/G_n=\{(x,g)\in {\mathfrak{g}^*}^{\times n}\times G^{\times n}| x_{i}-Ad_{g_{i-1}^{-1}}^*(x_{i-1})\in \mathcal O_i\}/G_n \end{equation} where $G_n$ acts by the gauge transformations (\ref{tad-act}) and the indices $i$ should be taken modulo $n$. On each of these symplectic leaves we will construct a superintegrable system which we will call a {\it periodic spin Calogero-Moser chain}. \subsection{The gauge fixing}\label{g-red} Let us fix $i\in\{1,\dots, n\}$ and $g=(g_1,\ldots,g_n)\in G^{\times n}$. Let $h\in G_n$ be such that \begin{equation*} h_j= \begin{cases} h_ig_{i-1}^{-1}\cdots g_{j+1}^{-1}g_j^{-1}\quad &\hbox{ for }\,\, 1\leq j<i,\\ h_ig_{i-1}^{-1}\cdots g_2^{-1}g_1^{-1}g_n^{-1}\cdots g_{j+1}^{-1}g_j^{-1}\quad &\hbox{ for }\,\, i<j\leq n. \end{cases} \end{equation*} Denote this element of $G_n$ by $h_g$ (we suppress the dependence on $i$). 
It is easy to check that the gauge transformation of $g=(g_1,\dots, g_n)$ by the element $h_g$ brings it to $(1,\ldots,1,h_i(g_ig_{i+1}\cdots g_ng_1g_2,\ldots g_{i-1})h_i^{-1},1,\ldots,1)$, with the $i^{\textup{th}}$-entry being the nontrivial entry. This identifies the $G_n$ gauge orbit through $g=(g_1,\dots, g_n)$ with the $G$-conjugation orbit through $g_1\cdots g_n$. It thus gives an ($i$-independent) isomorphism \[ G^{\times n}/G_n\overset{\sim}{\longrightarrow} G/G, \] where $G/G$ denotes the set of conjugacy classes in $G$. On the cotangent bundles the gauge fixing with gives the isomorphism $\varphi_i: \bigl(\mathfrak{g}^{*\times n}\times G^{\times n}\bigr)/G_n\overset{\sim}{\longrightarrow}\bigl(\mathfrak{g}^{*\times n}\times G\bigr)/G$ mapping the $G_n$-orbit $G_n(x,g)$ through $(x,g)\in\mathfrak{g}^{*\times n}\times G^{\times n}$ to the $G$-orbit through \[ \bigl(Ad_{g_i^{-1}\cdots g_1^{-1}}^*(x_1),\ldots,Ad_{g_i^{-1}}^*(x_i),Ad_{g_i^{-1}\cdots g_1^{-1}g_n^{-1}\cdots g_{i+1}^{-1}}^*(x_{i+1}), \ldots,Ad_{g_i^{-1}\cdots g_1^{-1}g_n^{-1}}^*(x_n),g_{i+1}\cdots g_ng_1\cdots g_i\bigr) \] (for $i=n$ this should be read as $\bigl(Ad_{g_n^{-1}\cdots g_1^{-1}}^*(x_1),\ldots,Ad_{g_n^{-1}}^*(x_n),g_1\cdots g_n\bigr)$). Here $G$ is acting diagonally on $\mathfrak{g}^{*\times n}\times G$ via the coadjoint action on $\mathfrak{g}^*$ and the conjugation action on $G$. From now on we will work with the isomorphism $\varphi_n$. \subsection{The regular part of the phase space} \label{ph-space-reg} The image of the symplectic leaf $\mathcal{S}(\mathcal O)$ under the isomorphism $\varphi_n$ is \[ \mathcal{S}(\mathcal O)=\{(z_1,\dots, z_n, g)\in \mathfrak{g}^{*\times n} \times G\,\, |\,\, z_1-Ad^*_{g^{-1}}z_n\in \mathcal O_1,\,\, z_{i}-z_{i-1}\in \mathcal O_{i},\,\,\, i=2,\dots,n\}/G. \] Define the {\it regular} part $\mathcal{S}(\mathcal O)_{reg}\subset \mathcal{S}(\mathcal O)$ of the phase space as $\mathcal{S}(\mathcal O)\cap (\mathfrak{g}^{*\times n}\times G^\prime)/G$. On $\mathcal{S}(\mathcal O)_{reg}$ we can choose a representative where $g$ is in the regular part $A_{reg}$ of the real split torus $A$ in $G$: $g=bab^{-1}, z_i=\textup{Ad}_b^*x^{(i)}$ with $a\in A_{reg}$. Then we have \begin{equation*} \begin{split} \mathcal{S}(\mathcal O)_{reg}=\{(x^{(1)},\dots, x^{(n)}, a)\in \mathfrak{g}^{*\times n} \times A_{reg}\,\,|\,\, &x^{(1)}-Ad^*_{a^{-1}}x^{(n)}\in \mathcal O_1,\\ &\,\,\,x^{(i)}-x^{(i-1)}\in\mathcal O_i,\,\, i=2,\dots, n\}/N_G(A). \end{split} \end{equation*} Identify $\mathfrak{g}^*\simeq\mathfrak{g}$ and $\mathfrak{a}^*\simeq\mathfrak{a}$ via the Killing form of $\mathfrak{g}$. The element $y\in\mathfrak{g}^*$ then corresponds to $y_0+\sum_\alpha y_\alpha e_\alpha$, where $y_0$ is the element in $\mathfrak{a}$ corresponding to $y\vert_{\mathfrak{a}}$ and $y_\alpha=y(e_{-\alpha})$. Let $\mu^{(j)}\in \mathcal O_j$ be vectors $\mu^{(1)}=x^{(1)}-Ad^*_{a^{-1}} x^{(n)}$ and $\mu^{(i)}=x^{(i)}-x^{(i-1)}$ for $i=2,\dots, n$. For coordinates $x^{(i)}_\alpha$ and $\mu^{(i)}_\alpha$ of vectors $x^{(i)}$ and $\mu^{(i)}$ we then have \[ x_\alpha^{(1)}-a_\alpha^{-1} x^{(n)}_\alpha=\mu^{(1)}_\alpha,\qquad\quad x^{(i)}_\alpha-x^{(i-1)}_\alpha=\mu^{(i)}_\alpha, \qquad i=2,\dots, n. \] For the Cartan components we have \[ x^{(i)}_0-x^{(i-1)}_0=\mu^{(i)}_0, \qquad i=1,\dots, n, \] with the index $i$ taken to be modulo $n$. 
Solving these equations for $x^{(i)}$ we have \begin{equation}\label{solve} \begin{split} x^{(i)}_\alpha&=\frac{a_\alpha(\mu_\alpha^{(1)}+\mu_\alpha^{(2)}+\cdots+\mu_\alpha^{(i)})+\mu_\alpha^{(i+1)}+\mu^{(i+2)}+\cdots+\mu_\alpha^{(n)}}{a_\alpha-1},\\ x^{(i)}_0&=x^{(1)}_0+ \mu^{(2)}_0 +\dots +\mu^{(i)}_0=x^{(n)}_0-\mu^{(n)}_0 -\dots -\mu^{(i+1)}_0 \end{split} \end{equation} and we have the constraint \begin{equation}\label{constraint} \mu^{(1)}_0 +\dots +\mu^{(n)}_0=0. \end{equation} This gives an isomorphism \begin{equation}\label{radred} \mathcal{S}(\mathcal O)_{reg}\overset{\sim}{\longrightarrow} \bigl(\nu_{\mathcal O}^{-1}(0)/H\times T^*A_{reg}\bigr)/W \end{equation} which preserves the natural symplectic structures, where $\nu_\mathcal O: \mathcal O_1\times\cdots\times\mathcal O_n\rightarrow\mathfrak{a}^*$ is the moment map $(\mu^{(1)},\ldots,\mu^{(n)})\mapsto(\mu^{(1)}+\cdots+\mu^{(n)})\vert_{\mathfrak{a}}$ for the diagonal action of $H$ on the product $\mathcal O_1\times\cdots\times\mathcal O_n$ of coadjoint orbits, and $W=N_G(A)/H$ acts diagonally on $\nu_{\mathcal O}^{-1}(0)/H\times T^*A_{reg}$. The isomorphism \eqref{radred} maps the $N_G(A)$-orbit through $\bigl(x^{(1)},\ldots,x^{(n)},a\bigr)$ to the $W$-orbit through $\bigl(H(x^{(1)}-Ad_{a^{-1}}^*x^{(n)},x^{(2)}-x^{(1)},\ldots,x^{(n)}-x^{(n-1)}),x_0^{(n)},a\bigr)$, where we used the trivialisation $T^*A_{reg}\simeq\mathfrak{a}\times A_{reg}$. The inverse maps the $W$-orbit through $\bigl(H(\mu^{(1)},\ldots,\mu^{(n)}),p,a\bigr)$ to the $N_G(A)$-orbit through $(x^{(1)},\ldots,x^{(n)},a)$, with \begin{equation}\label{relationxmu} x^{(i)}=p-\mu_0^{(n)}-\cdots-\mu_0^{(i+1)}+\sum_\alpha\left(\frac{a_\alpha(\mu_\alpha^{(1)}+\cdots+\mu_\alpha^{(i)})+\mu_\alpha^{(i+1)}+\cdots+\mu_\alpha^{(n)}} {a_\alpha-1}\right)e_\alpha, \end{equation} where we use the identification $\mathfrak{g}\simeq\mathfrak{g}^*$ via the Killing form. \subsection{Hamiltonians of a periodic spin CM chain}\label{Ham-per} After the trivialization of the cotangent bundle by translations, we have a natural projection: \begin{equation}\label{pr-1} T^*(G^{\times n})\simeq {\mathfrak{g}^*}^{\times n}\times G^{\times n}\to {\mathfrak{g}^*}^{\times n} \end{equation} which is simply the projection to the first factor. This projection depends on the trivialization. In this paper we alway assume that we use the trivialization by right translations. However, the corresponding projection of quotient spaces \begin{equation}\label{pr-2} T^*(G^{\times n})/G_n\to ({\mathfrak{g}^*/G})^{\times n} \end{equation} does not depend on the trivialization and in this sense is canonical. The projection (\ref{pr-2}) is Poisson\footnote{One of the reasons for this is that the equation (\ref{pr-1}) is the moment map for the left diagonal action of $G^{\times n}$ on the cotangent bundle.} with the trivial Poisson structure on ${(\mathfrak{g}^*/G)}^{\times n}$. Thus the $G^{\times n}$-invariant functions on ${\mathfrak{g}^*}^{\times n}$ give rise to a Poisson commutative subalgebra in the algebra of functions on $T^*(G^{\times n})/G_n$. The restriction of these functions to the symplectic leaf $\mathcal{S}(\mathcal O)$ gives the algebra of Poisson commuting functions on it. This is the subalgebra of {\it Hamiltonians of the periodic spin Calogero-Moser chain}. 
Now let us describe the restriction of the Hamiltonians corresponding to quadratic Casimir functions \[ H_2^{(k)}(x,g)=\frac{1}{2}\bigl(x^{(k)},x^{(k)}\bigr)=\frac{1}{2}\bigl(x_0^{(k)},x_0^{(k)}\bigr)+\sum_{\alpha>0}x^{(k)}_\alpha x^{(k)}_{-\alpha}\qquad (1\leq k\leq n) \] to the regular part of $\mathcal{S}(\mathcal O)$, where $x=(x^{(1)},\ldots,x^{(n)})\in\mathfrak{g}^{*\times n}$ and $g\in G^{\times n}$. Consider the functions \[ D_k=H_{2}^{(k)}-H_2^{(k-1)}\qquad\quad (1<k\leq n) \] which we call {\it Knizhnik-Zamolodchikov-Bernard (KZB) Hamiltonians}\footnote{The proper name would be {\it constant Knizhnik-Zamolodchikov-Bernard Hamiltonians} emphasizing the fact that they are related to finite dimensional simple Lie algebras, not to the affine Kac-Moody algebras. See for example references \cite{F}\cite{ES}\cite{EV}\cite{S}.}. \begin{theorem}\label{thmradcyclic} The restriction of the KZB Hamiltonians to $\mathcal{S}(\mathcal O)_{reg}$ can be written as \begin{equation}\label{Dk-per} D_k=(\mu^{(k)}_0, p)-\sum_{l=1}^{k-1} r_{lk}+\sum_{l=k+1}^{n}r_{kl} \end{equation} where $r_{kl}$ for $k\not=l$ is the classical version of the Felder's dynamical $r$-matrix \cite{F}: \begin{equation}\label{F-r} r_{kl}=-\frac{1}{2}(\mu^{(k)}_0, \mu^{(l)}_0)+\sum_{\alpha}\frac{ \mu^{(k)}_{-\alpha}\mu^{(l)}_{\alpha}}{a_\alpha-1}. \end{equation} \end{theorem} \begin{remark} Note that (\ref{F-r}) can also be written as \[ r_{kl}=-\frac{1}{2}(\mu^{(k)}_0, \mu^{(l)}_0)+\sum_{\alpha>0}\frac{ \mu^{(k)}_{-\alpha}\mu^{(l)}_{\alpha}}{a_\alpha-1}-\sum_{\alpha>0}\frac{a_\alpha\mu^{(k)}_{\alpha} \mu^{(l)}_{-\alpha}}{a_\alpha-1}. \] \end{remark} \begin{proof} We need to show that formula \eqref{Dk-per} gives the expression of $D_k$ in terms of the coordinates on $\mathcal{S}(\mathcal O)_{reg}$ of $\bigl(\nu_{\mathcal O}^{-1}(0)/H\times T^*A_{reg}\bigr)/W$, obtained from the isomorphism \eqref{radred}. In particular, let $\bigl(H(\mu^{(1)},\ldots,\mu^{(n)}),p,a\bigr)\in \nu_{\mathcal O}^{-1}(0)/H\times\mathfrak{a}^*\times A_{reg}$ and let $\bigl((x^{(1)},\ldots,x^{(n)}),(1,\ldots,1,a)\bigr)$ be the corresponding point in $\mathfrak{g}^{*\times n}\times G^{\times n}$, with $x^{(i)}$ given by \eqref{relationxmu}. Taking into account the relation $x^{(k)}-x^{(k-1)}=\mu^{(k)}$ between the $x^{(i)}$ and the $\mu^{(j)}$ we have \begin{equation*} \begin{split} D_k&=\bigl(\mu^{(k)},x^{(k-1)}\bigr)+\frac{1}{2}\bigl(\mu^{(k)},\mu^{(k)}\bigr)\\ &=\bigl(\mu_0^{(k)},x_0^{(k-1)}+\frac{1}{2}\mu_0^{(k)}\bigr)+ \sum_\alpha x_\alpha^{(k-1)}\mu_{-\alpha}^{(k)}+\sum_{\alpha>0}\mu_\alpha^{(k)}\mu_{-\alpha}^{(k)}. \end{split} \end{equation*} Substitute here the expression \eqref{solve} for $x^{(k-1)}_\alpha$ in terms of the $\mu_\alpha^{(j)}$: \[ D_k=\bigl(\mu_0^{(k)},x_0^{(k-1)}+\frac{1}{2}\mu_0^{(k)}\bigr) +\sum_{l=1}^{k-1} \sum_\alpha \frac{a_\alpha\mu_\alpha^{(l)}\mu^{(k)}_{-\alpha}}{a_{\alpha}-1}+ \sum_{l=k+1}^{n} \sum_\alpha \frac{\mu_\alpha^{(l)}\mu^{(k)}_{-\alpha}}{a_{\alpha}-1}. \] From here using the identities \[ \sum_\alpha \frac{a_\alpha\mu_\alpha^{(l)}\mu^{(k)}_{-\alpha}}{a_{\alpha}-1}=-r_{lk}-\frac{1}{2}(\mu_0^{(k)},\mu_0^{(l)}) \] and \[ \sum_\alpha \frac{\mu_\alpha^{(l)}\mu^{(k)}_{-\alpha}}{a_{\alpha}-1}=r_{kl}+\frac{1}{2}(\mu_0^{(k)},\mu_0^{(l)}) \] we conclude \begin{equation*} D_k=\Bigl(\mu^{(k)}_0,x_0^{(k-1)}+\frac{1}{2}\sum_{l=k}^{n}\mu_0^{(l)}-\frac{1}{2}\sum_{l=1}^{k-1}\mu_0^{(l)}\Bigr) -\sum_{l=1}^{k-1}r_{lk}+\sum_{l=k+1}^{n}r_{kl}. 
\end{equation*} Using $x_0^{(k-1)}=p-\mu_0^{(n)}-\cdots-\mu_0^{(k)}$ (see \eqref{relationxmu}) and the constraint \eqref{constraint} we obtain (\ref{Dk-per}). \end{proof} A particularly simple expression has the quadratic Hamiltonian $H_2^{(n)}$ on $\mathcal{S}(\mathcal O)_{reg}$, \[ H_2^{(n)}=\frac{1}{2} (p,p)+\sum_{\alpha}\frac{\mu_\alpha \mu_{-\alpha}}{(1-a_\alpha)(1-a_{\alpha}^{-1})}. \] Here $\mu_\alpha=\mu^{(1)}_\alpha+\dots +\mu^{(n)}_\alpha$. Setting $q_\alpha=\alpha(\log(a))$ this formula becomes a familiar formula for the spin Calogero-Moser Hamiltonian, \begin{equation}\label{Hamper} H_2^{(n)}=\frac{1}{2} (p,p)-\sum_{\alpha>0}\frac{\mu_\alpha \mu_{-\alpha}}{2sh^2(q_\alpha)}. \end{equation} Note that the periodic spin CM chain is the classical version of the dynamical Knizhnik-Zamolodchikov equation from \cite{ES}\cite{EV}. \subsection{Periodic spin Calogero-Moser chain as a superintegrable system}\label{pCM-sup} Now let us establish the superintegrability of the periodic spin CM chain. For this we should construct an intermediate Poisson manifold and projections as in \cite{N}\cite{R2}. Observe that we have natural Poisson projections: \begin{equation}\label{pr-3} T^*(G^{\times n})/G_n \stackrel{p_1}{\rightarrow} \mathcal P_n \stackrel{p_2}{\rightarrow} \mathcal B_n \end{equation} Firstly, $\mathcal P_n=({\mathfrak{g}^*}^{\times n}\times_{{(\mathfrak{g}^*/G)}^{\times n}}{\mathfrak{g}^*}^{\times n})/G_n$ with \begin{equation*} {\mathfrak{g}^*}^{\times n}\times_{{(\mathfrak{g}^*/G)}^{\times n}}{\mathfrak{g}^*}^{\times n}:=\{(x,y)\in {\mathfrak{g}^*}^{\times n}\times {\mathfrak{g}^*}^{\times n}| Gy_{i}=-Gx_{i-1}\}, \end{equation*} where $Gz$ is the coadjoint orbit through $z\in\mathfrak{g}^*$ and the indices $i$ are taken modulo $n$, and $G_n$ is acting by \begin{equation}\label{P2} g(x,y):=(Ad_{g_1}^*(x_1),\ldots,Ad_{g_n}^*(x_n),Ad_{g_1}^*(y_1),\ldots,Ad_{g_n}^*(x_n)). \end{equation} The map $p_1$ is the map induced from the $G_n$-equivariant map $\mu_L\times \mu_R^{tw}$. Explicitly, the mapping $p_1$ acts as \begin{equation}\label{P1} \begin{split} p_1: G_n(x,g)&\mapsto G_n(\mu_L(x,g),\mu^{tw}_R(x,g))\\ &=G_n(x_1,x_2,\ldots,x_n, -Ad_{g_n^{-1}}^*(x_n),-Ad_{g_1^{-1}}(x_1),\ldots,-Ad_{g_{n-1}^{-1}}(x_{n-1})). \end{split} \end{equation} Secondly, \[ \mathcal B_n={(\mathfrak{g}^*/G)}^{\times n} \] and the map $p_2$ is the projection to the first factor. Restricting projection $p_1$ to the symplectic leaf $\mathcal{S}(\mathcal O)$ (see \eqref{SOoriginal}), we obtain the surjective Poisson projection \[ p_{1,\mathcal O}: \mathcal{S}(\mathcal O)\to \mathcal P(\mathcal O) \] where \[ \mathcal P(\mathcal O)=\{(x,y)\in {\mathfrak{g}^*}^{\times n}\times_{(\mathfrak{g}^*/G)^{\times n}}{\mathfrak{g}^*}^{\times n}\,\,|\,\,x_i+y_i\in \mathcal O_i\}/G_n\subset\mathcal P_n \] with the $G_n$-action described by \eqref{P2}. Restricting the second projection $p_2$ to $\mathcal P(\mathcal O)$ we have the Poisson projection \[ p_{2,\mathcal O}: \mathcal P(\mathcal O)\to \mathcal B(\mathcal O)\subset \mathcal B_n,\qquad G_n(x,y)\mapsto (Gx_1, \dots, Gx_n) \] where $\mathcal B(\mathcal O)$ is the image of $p_{2,\mathcal O}$. It can be explicitly described as \[ \mathcal B(\mathcal O)=\{(\mathcal O^{(1)}, \ldots, \mathcal O^{(n)})\in (\mathfrak{g}^*/G)^{\times n}\,\,|\,\, \mathcal O_i\subseteq\mathcal O^{(i)}-\mathcal O^{(i-1)}\}, \] with the indices $i$ taken modulo $n$. \begin{lemma} The dimension of $\mathcal B(\mathcal O)$ is $nr$ where $r$ is the rank of the Lie algebra $\mathfrak{g}$. 
\end{lemma} \begin{proof} Let $\mathfrak{h}^*_{\geq 0}$ be the positive Weyl chamber in the dual space to Cartan subalgebra of $\mathfrak{g}$. For each generic orbit $\mathcal O$ there is a unique representative $y\in \mathcal O\cup \mathfrak{h}^*{>0}$. Let $x_1$ be such representative of $\mathcal O^{(1)}$. Let us describe orbits $\mathcal O^{(2)}$ such that $x_1+y_1\in \mathcal O^{(2)}$ for some $y_1\in \mathcal O_1$. Assume that $\mathcal O^{(1)}$ is very large, i.e. $||x_1||>>1$. Because $\mathcal O_1$ is compact $||y_1||<C_1$ for some constant $C_1$ determined by the orbit $\mathcal O_1$. Let $c^{(i)}_k$ be the value of $k$-th Casimir function on the orbit $\mathcal O^{(i)}$. For $k$-th Casimir function $c_k^{(2)}$ we have: \[ c_k^{(2)}=c_k(x_1+y_1)=c_k^{(1)}+\sum_{i=1}^r \frac{\partial c_k(h)}{\partial h_i}|_{h=x_1}(y_1)_i+O(y^2) \] Because the matrix $\frac{\partial c_k(h)}{\partial h_i}$ is nondegerate for generic $h$, possible values of the Euclidean vector with components $c_k^{(2)}$ span an $r$-dimensional neighborhood of $\{c_k^{(1)}\}$. Repeating this argument for each $\mathcal O^{(i)}$ we conclude that each of $\mathcal O_i$ is non-zero, $\dim( \mathcal B(\mathcal O))=nr$. \end{proof} Now let us describe the fiber $\mathcal P(\mathcal O; \mathcal O^{(1)}, \dots, \mathcal O^{(n)})$ of $p_{2,\mathcal O}$ over $(\mathcal O^{(1)}, \dots, \mathcal O^{(n)})\in\mathcal B(\mathcal O)$: \begin{equation*} \begin{split} \mathcal P(\mathcal O; \mathcal O^{(1)},\dots, \mathcal O^{(n)})&=\{(x,y)\in {\mathfrak{g}^*}^{\times n}\times {\mathfrak{g}^*}^{\times n}|x_i+y_i\in \mathcal O_i, \ \ x_i\in \mathcal O^{(i)}, y_i\in -\mathcal O^{(i-1)} \}/G_n\\ &=\prod_{i=1}^n\bigl\{(x_i,y_i)\in\mathcal O^{(i)}\times-\mathcal O^{(i-1)}\,\, | \,\, x_i+y_i\in\mathcal O_i\bigr\}/G \end{split} \end{equation*} with the index $i$ taken modulo $n$ and with $G$ acting by the diagonal coadjoint action on $\mathcal O^{(i)}\times-\mathcal O^{(i-1)}$. Set \begin{equation}\label{cM3} \mathcal{M}(\mathcal O^{(1)}, \mathcal O^{(2)}, \mathcal O^{(3)})=\{(x,y,z)\in \mathcal O^{(1)}\times \mathcal O^{(2)}\times \mathcal O^{(3)}\,\,|\,\,x+y+z=0\}/G, \end{equation} with $G$ acting by the diagonal coadjoint action. Then \begin{equation}\label{cMiso} \begin{split} \bigl\{(x_i,y_i)\in\mathcal O^{(i)}\times-\mathcal O^{(i-1)}\,\, | \,\, x_i+y_i\in\mathcal O_i\bigr\}/G&\overset{\sim}{\longrightarrow}\mathcal{M}(-\mathcal O^{(i)},\mathcal O^{(i-1)},\mathcal O_i)\\ G(x_i,y_i)\mapsto G(-x_i,-y_i,x_i+y_i), \end{split} \end{equation} and hence we conclude that \begin{equation}\label{cpfactor} \mathcal P(\mathcal O; \mathcal O^{(1)},\dots, \mathcal O^{(n)})\simeq\prod_{i=1}^n\mathcal{M}(-\mathcal O^{(i)},\mathcal O^{(i-1)},\mathcal O_i), \end{equation} with the index $i$ taken modulo $n$. \begin{lemma}\label{dimrem} Let $\mathcal O^{(1)}, \mathcal O^{(2)}$ be generic, sufficiently large coadjoint orbits and $\mathcal O^{(3)}\neq 0$, then \[ \dim(\mathcal{M}(\mathcal O^{(1)}, \mathcal O^{(2)}, \mathcal O^{(3)}))=\dim(\mathcal O^{(3)})-2r. \] \end{lemma} \begin{proof} Let $x\in \mathcal O^{(1)}$ be the unique representative which lies in the positive Weyl chamber. Assume this orbit is "big", i.e. $||x||>>1$. The condition $x+y+z=0$ for $y\in \mathcal O^{(2)}$ and $z\in \mathcal O^{(3)}$ for a large orbit $\mathcal O^{(2)}$ means that we have $r$ constraints $c_k(-x-y)=c_k^{(3)}$ on $y$. For large orbits $\mathcal O^{(1)}$ and $\mathcal O^{(2)}$ these constraints are independent. 
Taking into account that we are quotiening by $H$ we have $\dim(\mathcal{M}(\mathcal O^{(1)},\mathcal O^{(2)},\mathcal O^{(3)})=\dim(\mathcal O^{(3)}-2r$. \end{proof} \begin{corollary} Thus the dimension of the fiber is $\dim(\mathcal P(\mathcal O; \mathcal O^{(1)}, \dots, \mathcal O^{(n)}))=\sum_{i=1}^n\dim(\mathcal O_i)-2nr$. \end{corollary} Each of the factors in \eqref{cpfactor} is the Hamiltonian reduction of the product of the three coadjoint orbits relative to the moment map of the diagonal coadjoint $G$-action, and therefore carries a natural symplectic structure. Moduli spaces $\mathcal{M}(\mathcal O^{(1)}, \mathcal O^{(2)}, \mathcal O^{(3)})$ and therefore fibers of $p_{2,\mathcal O}$ are stratified symplectic spaces. \begin{theorem}\label{siperiodic} The Hamiltonian system generated by any Hamiltonian for the periodic spin CM chain described in section \ref{Ham-per} is superintegrable with the superintegrable structure described by the Poisson maps \[ \mathcal{S}(\mathcal O) \stackrel{p_{1,\mathcal O}}{\longrightarrow} \mathcal P(\mathcal O) \stackrel{p_{2,\mathcal O}}{\longrightarrow} \mathcal B(\mathcal O) \] as introduced earlier in this section. \end{theorem} Here, as everywhere above, we assume that $\mathcal O_i\neq \{0\}$ for each $i=1,\dots, n$. \begin{proof} For $G_n(x,y)\in\mathcal P(\mathcal O)$ let $\widetilde{g}_i\in G$ such that $y_{i+1}=-Ad_{\widetilde{g}_i^{\,-1}}^*(x_i)$. Then \[ p_{1,\mathcal O}^{-1}\bigl(G_n(x,y)\bigr)=\{G_n(x,g)\in\mathcal{S}(\mathcal O) \,\, | \,\, g_i\in\widetilde{g}_iZ_G(y_{i+1})\} \] (index $i$ taken modulo $n$) which, generically, is isotropic and of dimension $nr=\dim(\mathcal B(\mathcal O))$. It remains to check the balance of dimensions. It follows from \eqref{radred} that \[ \dim(\mathcal{S}(\mathcal O))=\sum_{i=1}^n\dim(\mathcal O_i). \] By Remark \ref{dimrem} we have, for generic $(\mathcal O^{(1)},\ldots,\mathcal O^{(n)})\in(\mathfrak{g}^*/G)^{\times n}$, \begin{equation*} \begin{split} \dim(\mathcal P(\mathcal O))&=\dim(\mathcal B(\mathcal O))+\dim(\mathcal P(\mathcal O;\mathcal O^{(1)},\ldots,\mathcal O^{(n)}))\\ &=\dim(\mathcal B(\mathcal O))+\sum_{i=1}^n\dim(\mathcal O_i)-2nr. \end{split} \end{equation*} Then \[ \dim(\mathcal P(\mathcal O))+\dim(\mathcal B(\mathcal O))=\sum_{i=1}^n\dim(\mathcal O_i)=\dim(\mathcal{S}(\mathcal O)), \] as desired. \end{proof} \begin{remark}\label{cyclicquantum} In the compact case, the quantum version of functions on $\mathcal{M}(\mathcal O^{(1)},\mathcal O^{(2)},\mathcal O^{(3)})$ is the algebra of endomorphisms $End((V_{\lambda_1}\otimes V_{\lambda_2}\otimes V_{\lambda_3})^G)$ of the subspace of $G$-invariant vectors in the tensor product $V_{\lambda_1}\otimes V_{\lambda_2}\otimes V_{\lambda_3}$ with $V_{\lambda_i}$ the representation corresponding to $\mathcal O^{(i)}$. The quantum version of the algebra of functions on the fiber $\mathcal P(\mathcal O; \mathcal O^{(1)}, \dots, \mathcal O^{(n)})$ is the algebra of endomorphisms of the vector space \[ \textup{Hom}_G(V_{\lambda_1},V_{\lambda_n}\otimes V_{1})\otimes \textup{Hom}_G(V_{\lambda_2},V_{\lambda_1}\otimes V_{2})\otimes\cdots\otimes \textup{Hom}_G(V_{\lambda_n},V_{\lambda_{n-1}}\otimes V_{n}). \] Here the orbits $\mathcal O_i$ correspond to $V_i$, and $\textup{Hom}_G(V_{\lambda_i},V_{\lambda_{i-1}}\otimes V_i)$ is the space of $G$-linear intertwiners $V_{\lambda_i}\rightarrow V_{\lambda_{i-1}}\otimes V_i$. For details see \cite{RSQCD1}. 
\end{remark} \subsection{Constructing solutions by the projection method and angle variables}\label{solutions} For $\mathcal{H}$ a $G$-invariant function on $\mathfrak{g}^*$, write $\mathcal{H}^{(i)}$ for the $G_n$-invariant function on $T^*(G^{\times n})$ defined by $\mathcal{H}^{(i)}(x,g):=\mathcal{H}(x_i)$. The Hamiltonian flow through $(x,g)\in \mathfrak{g}^{*\times n}\times G^{\times n}$ generated by $\mathcal{H}^{(i)}$ is \begin{equation}\label{evi-cl} (x(t_i), g(t_i))=(x_1, \dots, x_n, g_1, \dots,g_{i-1}, e^{\nabla\mathcal{H}(x_i) t_i}g_i,g_{i+1} \dots, g_n) \end{equation} where $\nabla\mathcal{H}(x)\in \mathfrak{g}$ is the gradient of $\mathcal{H}$ at $x\in\mathfrak{g}^*$, i.e., \[ y(\nabla\mathcal{H}(x))=\frac{d}{dt}\mathcal{H}(x+ty)\vert_{t=0} \] for all $y\in\mathfrak{g}^*$. The projection of such flow line to $T^*(G^{\times n})/G^{\times n}$ and further restricted to $\mathcal{S}(\mathcal O)\subset T^*(G^{\times n})/G_n$ is a flow line of the Hamiltonian vector field generated by the restriction of $\mathcal{H}^{(i)}$ to $\mathcal{S}(\mathcal O)$. Now let us construct angle variables, i.e., functions on $S(\mathcal O)$ which evolve linearly with respect to the evolution (\ref{evi-cl}) for each $i=1,\ldots,n$. Write $\mathfrak{a}_+^*\subset\mathfrak{g}^*$ for the elements $x\in\mathfrak{g}^*$ which vanish on root spaces and satisfy $(x,\alpha)>0$ for $\alpha\in R_+$, where $(\cdot,\cdot)$ is the bilinear form on $\mathfrak{g}^*$ induced by the Killing form. Write $\mathfrak{g}^{\prime*}$ for the elements in $\mathfrak{g}^*$ which are $G$-conjugate to an element in $\mathfrak{a}_+^*$, relative to the coadjoint action. For $(x,g)\in \mathfrak{g}^{\prime*\times n}\times G^{\times n}$ define $s_i\in G$ by the property $Ad^*_{s_i}(x_i)\in\mathfrak{a}_+^*$. These elements are defined only up to $s_i\mapsto a_is_i$ where $a_i\in H$. Gauge transformations $h\in G_n$ act by $(s_1,\ldots,s_n)\mapsto (s_1h_1^{-1},\ldots,s_nh_n^{-1})$. Let $G_{\mathbb{C}}$ be a complexification of $G$, which we take to be connected. Let $H_{\mathbb{C}}\subset G_{\mathbb{C}}$ be the Cartan subgroup $Z_{G_{\mathbb{C}}}(\mathfrak{h})$, where $\mathfrak{h}$ is the Cartan subalgebra $\mathfrak{a}\oplus i\mathfrak{a}$ of the Lie algebra $\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\oplus i\mathfrak{g}$ of $G_{\mathbb{C}}$. Then $A\subseteq H\subset H_{\mathbb{C}}$. We identify $\mathfrak{g}^*$ with the real subspace of $\mathfrak{g}_{\mathbb{C}}^*$ consisting of the complex linear functionals that take real values on $\mathfrak{g}$. For finite dimensional $G_{\mathbb{C}}$-representations $V_1,\ldots,V_n$ choose vector $v_i\in V_i$ of $H_{\mathbb{C}}$-weight $\lambda_{i+1}$ and linear functionals $u_i^*\in V_i^*$ of $H_{\mathbb{C}}$-weight $-\lambda_i$ (indices $i$ taken modulo $n$).\footnote{Recall that $G$ acts on dual vectors as $(gu^*)(v)=u^*(g^{-1}v)$.} Define \begin{equation}\label{f-var} f_{u,v}(x,g)=u_1^*(s_1g_1s_2^{-1}v_1) u_2^*(s_2g_2s_3^{-1}v_2) \dots u_n^*(s_ng_ns_1^{-1}v_n) \end{equation} for $(x,g)\in (\mathfrak{g}^{\prime*}\times G)^{\times n}$. This expression is well defined (i.e., invariant with respect to transformations $s_i\to a_is_i$ with $a_i\in H$), and invariant with respect to gauge transformations. Thus, it defines a function on the subset $(\mathfrak{g}^{\prime*\times n}\times G^{\times n})/G_n$ of $T^*(G^{\times n})/G_n$. 
From the $G$-invariance of $\mathcal{H}$ we have the identity \[ u^*_i(s_ie^{t_i\nabla\mathcal{H}(x_i)}g_is_{i+1}^{-1}v_i)=e^{t_i\lambda_i(\nabla\mathcal{H}(y_i))}u^*_i(s_ig_is_{i+1}^{-1}v_i) \] where $y_i=Ad_{s_i}^*(x_i)\in\mathfrak{a}_+^*$, and consequently \begin{equation}\label{towaa} f_{u,v}(x(t_i),g(t_i))=e^{t_i\lambda_i(\nabla\mathcal{H}(y_i))}f_{u,v}(x,g). \end{equation} Logarithms of these functions evolve linearly, and hence they produce angle variables for the Hamiltonians $\mathcal{H}^{(i)}$ on $S(\mathcal O)\cap (\mathfrak{g}^{\prime*\times n}\times G^{\times n})/G_n$. \section{Open spin Calogero-Moser chains}\label{open-CM} Recall from the introduction that $G$ is a split real connected Lie group with finite center which admits a complexification, and $K\subset G$ is the subgroup of fixed points of a fixed Cartan involution $\Theta$ of $G$. Recall furthermore the root space decomposition $\mathfrak{g}=\mathfrak{a}\oplus \bigoplus_{\alpha>0}(\mathbb{R} e_\alpha\oplus \mathbb{R} e_{-\alpha})$ with the Cartan subalgebra $\mathfrak{a}\subset\mathfrak{g}$ and the root vectors $e_\alpha\in\mathfrak{g}_\alpha$ such that the infinitesimal Cartan involution $\theta$ acts as \[ \theta(h)=-h, \ \ \theta(e_\alpha)=-e_{-\alpha} \] for $h\in\mathfrak{a}$ and $\alpha\in R$. We will furthermore normalise the root vectors in such a way that $(e_\alpha,e_{-\alpha})=1$, with $(\cdot,\cdot)$ the Killing form of $\mathfrak{g}$. To avoid cumbersome notations, we will not always indicate in notations that we are in the open case. This leads to an overlap of some of the notations with the ones for the periodic case. For instance, the moment maps, Poisson spaces and Poisson projections will be denoted in the same way. \subsection{The phase space as the Hamiltonian reduction} \label{ph-space-open} Consider for $n\geq 0$ the manifold $T^*(G^{\times n+1})$ with the standard symplectic structure. We trivialize the cotangent bundle $T^*(G^{\times n+1})$ by right translations: \begin{equation}\label{trivialisationK} T^*(G^{\times n+1})\simeq (T^*G)^{\times n+1}\simeq {\mathfrak{g}^*}^{\times n+1}\times G^{\times n+1} \end{equation} We have a natural action of $K\times G^{\times n}$ on $G^{\times n+1}$ by left translations: \[ (k_\ell, h_1, \dots, h_n)_L(g_0, g_1, \dots, g_n)=(k_\ell g_0, h_1g_1,\dots, h_ng_n) \] This action lifts to the following Hamiltonian action on $T^*G^{\times n+1}$, \begin{equation*} \begin{split} (k_\ell, h_1, \dots, h_n)_L(x_0, &\dots, x_n,g_0, g_1, \dots, g_n)=\\ &= (Ad^*_{k_\ell}(x_0), Ad^*_{h_1}(x_1), \dots, Ad^*_{h_n}(x_n),k_\ell g_0, h_1g_1,\dots, h_ng_n) \end{split} \end{equation*} with the moment map \[ \mu_L(x,g)=(\pi(x_0), x_1, \dots, x_n) \] where the projection $\pi: \mathfrak{g}^*\rightarrow \mathfrak{k}^*$ is the dual map dual to the embedding $\mathfrak{k}\hookrightarrow \mathfrak{g}$. Similarly, the action of $G^{\times n}\times K$ on $G^{\times n+1}$ by right translations \[ (h_1,\dots, h_n, k_r)_R(g_0,g_1,\dots, g_n)=(g_0h_1^{-1}, g_1h_2^{-1},\dots, g_{n-1}h_n^{-1}, g_nk_r^{-1}) \] lifts to the following Hamiltonian action on $T^*G^{\times n+1}$, \begin{equation*} \begin{split} (h_1,\dots, h_n, k_r)_R(x_0,& \dots, x_n, g_0,g_1,\dots, g_n)\\ &=(x_0, x_1,\dots, x_n,g_0h_1^{-1}, g_1h_2^{-1},\dots, g_{n-1}h_n^{-1}, g_nk_r^{-1}), \end{split} \end{equation*} with the moment map \[ \mu_R(x,g)=(-Ad^*_{g_0^{-1}}(x_0), -Ad^*_{g_1^{-1}}(x_1),\dots, -Ad^*_{g_{n-1}^{-1}}(x_{n-1}),-\pi(Ad^*_{g_n^{-1}}(x_n))). 
\] As a result, the group $G_{n,K}:=K\times G^{\times n}\times K$ acts on $T^*(G^{\times n+1})$ as \begin{equation}\label{k-n-act} \begin{split} (k_\ell,h_1,\dots,& h_{n}, k_r)(x_0,\dots, x_n, g_0,\dots, g_n)=\\ =&(Ad^*_{k_\ell}(x_0), Ad^*_{h_1}(x_1),\dots, Ad^*_{h_{n}}(x_n), k_\ell g_0h_1^{-1}, h_1g_1h_2^{-1}, \dots, h_{n}g_nk_r^{-1}) \end{split} \end{equation} with $k_\ell,k_r\in K$ and $h_i\in G$. This action is Hamiltonian with the moment map $\mu: T^*(G^{\times n})\to \mathfrak{k}^* \times {\mathfrak{g}^*}^{\times n}\times \mathfrak{k}^*$ given by \begin{equation}\label{momentK} \begin{split} \mu((&x_0,\dots, x_n,g_0,\dots, g_n)=(\mu_L(x,g),0)+ (0,\mu_R(x,g))= \\ &=(\pi(x_0), x_1-Ad^*_{g_0^{-1}}(x_0), x_2-Ad^*_{g_1^{-1}}(x_1),\dots, x_{n}-Ad^*_{g_{n-1}^{-1}}(x_{n-1}), -\pi(Ad^*_{g_n^{-1}}(x_n))). \end{split} \end{equation} For $n=0$ this is the $K\times K$ action $(k_\ell, k_r)(x,g)=(Ad^*_{k_\ell}(x), k_\ell g k_r^{-1})$ on $T^*G$, with the moment map $(x,g)\mapsto (\pi(x), -\pi(Ad_{g^{-1}}(x)))$. It is easy to check explicitly that this moment map intertwines the action of $G_{n,K}$ on $T^*(G^{\times n+1})$ given by (\ref{k-n-act}) with its diagonal coadjoint action on $\mathfrak{k}^* \times {\mathfrak{g}^*}^{\times (n-1)}\times \mathfrak{k}^*$. Because the action of $G_{n,K}$ on $T^*(G^{\times n+1})$ is Hamiltonian, the space $T^*(G^{\times n+1})/G_{n,K}$\footnote{Recall that here and in everywhere else in this paper $X/H$ means the GIT quotient for a Lie group $H$ action on a manifold $X$.} is Poisson with symplectic leaves being given by the Hamiltonian reduction with respect to the moment map \eqref{momentK}. Let $\mathcal O=(\mathcal O_\ell^K\times \mathcal O_1 \times \dots, \mathcal O_{n} \times \mathcal O^K_r)$ with $\mathcal O_i\subset \mathfrak{g}^*$ coadjoint $G$-orbits and $\mathcal O_\ell^K, \mathcal O_r^K \subset \mathfrak{k}^*$ coadjoint $K$-orbits, then the corresponding symplectic leaf in $T^*(G^{\times n+1})/G_{n,K}$ is \begin{equation}\label{bcase} \begin{split} \mathcal{S}&(\mathcal O)=\mu^{-1}(\mathcal O)/G_{n,K}\\ &=\bigl\{(x_0,\dots, x_n, g_0,\dots, g_n) \in {\mathfrak{g}^*}^{\times n+1} \times G^{\times n+1}\,\, |\,\, \pi(x_0)\in \mathcal O^K_\ell, \, -\pi(Ad_{g_n^{-1}}^*(x_n))\in \mathcal O^K_r,\\ &\qquad\qquad\qquad\qquad\qquad\quad x_1-Ad^*_{g_0^{-1}}(x_0) \in \mathcal O_1,\dots, x_{n}-Ad^*_{g_{n-1}^{-1}}(x_{n-1})\in \mathcal O_n\bigr\}/G_{n, K}. \end{split} \end{equation} Each symplectic leaf $\S(\mathcal O)$ is a stratified symplectic space and is the phase space for the corresponding open spin Calogero-Moser chain. We will describe the largest stratum of $\mathcal{S}(\mathcal O)$ later. \subsection{The Hamiltonians of the open spin Calogero-Moser chain}\label{oCM-Ham} After the trivialization \eqref{trivialisationK} of $T^*(G^{\times n+1})$ by right translations we have a natural Poisson projection $ T^*(G^{\times n+1})\to {\mathfrak{g}^*}^{\times n+1}$ to the first factor. It is $G_{n, K}$-invariant with the following action of $G_{n,K}$ on ${\mathfrak{g}^*}^{\times n+1}$ \[ (k_\ell, h_1,\dots, h_{n}, k_r): (x_0, x_1,\dots, x_n)\mapsto (Ad^*_{k_\ell}(x_0), Ad^*_{h_1}(x_1),\dots, Ad^*_{h_{n}}(x_n)). 
\] This gives rise to the projection \[ p: T^*(G^{\times n+1})/G_{n,K}\to (\mathfrak{g}^*/G)^{\times n+1} \] which is Poisson because it is the composition of natural Poisson projections \begin{equation}\label{pi-project} T^*(G^{\times n+1})/G_{n,K}\to ({\mathfrak{g}^*}^{\times n+1})/G_{n,K}=\mathfrak{g}^*/K\times (\mathfrak{g}^*/G)^{\times n} \to (\mathfrak{g}^*/G)^{\times n+1}. \end{equation} Here the Poisson structure on the right is trivial (the Poisson tensor is zero). The last projection is a consequence of the embedding $K\hookrightarrow G$. Restricting this projection to the symplectic leaf $\mathcal{S}(\mathcal O)$ we have the Poisson projection \begin{equation}\label{p} p_\mathcal O: \mathcal{S}(\mathcal O)\to \mathcal B(\mathcal O), \quad G_{n,K}(x_0,\dots,x_n, g_0,\dots, g_n)\mapsto (Gx_0,\dots, Gx_n) \end{equation} where $\mathcal B(\mathcal O)\subset (\mathfrak{g}^*/G)^{\times n+1}$ is, by definition, the image of $\mathcal{S}(\mathcal O)$. It can be described explicitly from the description of $\mathcal{S}(\mathcal O)$ as \begin{equation}\label{cBO} \begin{split} \mathcal B(\mathcal O)=\bigl\{(\mathcal O^{(0)},\dots, \mathcal O^{(n)})\in (\mathfrak{g}^*/G)^{\times n+1}&\,\, |\,\, \mathcal O_\ell^K\subseteq \pi(\mathcal O^{(0)}),\mathcal O_r^K\subseteq -\pi(\mathcal O^{(n)}),\\ &\mathcal O_1\subseteq\mathcal O^{(1)}-\mathcal O^{(0)},\ldots,\mathcal O_n\subseteq\mathcal O^{(n)}-\mathcal O^{(n-1)}\bigr\}. \end{split} \end{equation} Hamiltonians of the open spin Calogero-Moser system are pull back $p^*$ of functions on ${(\mathfrak{g}^*/G)}^{\times n+1}$ restricted to $\mathcal{S}(\mathcal O)$. The subalgebra of Hamiltonians is a Poisson commuting subalgebra. Quadratic Hamiltonians are given by Casimir functions. We will compute the radial components of the quadratic Casimirs explicitly in section \ref{qu-Ham}. Hamiltonians are constant on fibers of the projection $p_{\mathcal O}$. \subsection{The gauge fixing}\label{op-g-reduct} Fix $i\in\{0,\ldots,n\}$. For $g=(g_0, \dots, g_n)\in G^{\times n+1}$ and $k_\ell,k_r\in K$ define $h\in G^{\times n}$ (depending on $i,g,k_\ell,k_r$) by \begin{equation*} h_j= \begin{cases} k_\ell g_0g_1\cdots g_{j-1}\qquad &\hbox{ if }\,\, j\leq i,\\ k_r g_n^{-1}g_{n-1}^{-1}\cdots g_{j}^{-1}\qquad &\hbox{ if }\,\, j>i. \end{cases} \end{equation*} Then $(k_\ell, h_1, \dots, h_n, k_r)$ acts on $g\in G^{\times n+1}$ as \[ g\mapsto (1,\dots, 1, k_\ell g_0g_1\dots g_n k_r^{-1}, 1,\dots, 1), \] Here the nontrivial entry is at the position $i$. This gives an $i$-independent isomorphism \begin{equation}\label{g-isom} G^{\times n+1}/G_{n,K}\to K\backslash G/K, \qquad G_{n,K}(g_0,\dots, g_n)\mapsto Kg_0g_1\dots g_nK. \end{equation} For the cotangent bundles the gauge fixing gives an isomorphism \begin{equation}\label{cot-map} \phi_i: \bigl(\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\bigr)/G_{n,K}\overset{\sim}{\rightarrow} K\backslash ( {\mathfrak{g}^*}^{\times {n+1}}\times G)/K \end{equation} mapping the $G_{n,K}$-orbit through $(x,g)\in \mathfrak{g}^{*\times n+1}\times G^{\times n+1}$ to the double $K$-coset through \[ \bigl(x_0,Ad_{g_0}^*(x_1),\ldots,Ad_{g_0\cdots g_{i-1}}^*(x_i),Ad_{g_n^{-1}\cdots g_{i+1}^{-1}}^*(x_{i+1}),\ldots,Ad_{g_n^{-1}}^*(x_n),g_0g_1\cdots g_n\bigr) \] For example for $i=n$ this expression is $\bigl(x_0,Ad_{g_0}^*(x_1),\ldots,Ad_{g_0\cdots g_{n-1}}^*(x_n),g_0g_1\cdots g_n\bigr)$. 
In (\ref{cot-map}) the double $K$-cosets in the codomain of $\phi_i$ are taken relative to the $i$-dependent $K\times K$-action \[ (k_\ell,k_r)(x_0,\ldots,x_n,g)=(Ad_{k_\ell}^*(x_0),\cdots,Ad_{k_\ell}^*(x_i),Ad_{k_r}^*(x_{i+1}),\ldots,Ad_{k_r}^*(x_n),k_\ell g k_r^{-1}) \] on ${\mathfrak{g}^*}^{\times n+1}\times G$. Now we can describe the symplectic leaf $\mathcal{S}(\mathcal O)$ as a subvariety in $K\backslash ( {\mathfrak{g}^*}^{\times {n+1}}\times G)/K$ though the isomorphism $\varphi_n$ as \begin{equation}\label{ps} \begin{split} \mathcal{S}(\mathcal O)=K\backslash\bigl\{(y_0,y_1,\ldots,y_n,g)\in\mathfrak{g}^{*\times n+1}\times G& \,\, | \,\, \pi(y_0)\in\mathcal O_\ell^K, -\pi\bigl(Ad_{g^{-1}}^*(y_n))\in\mathcal O_r^K,\\ &\quad\,\,\, y_1-y_0\in\mathcal O_1,\ldots,y_n-y_{n-1}\in\mathcal O_n\bigr\}/K. \end{split} \end{equation} Note that, as in the periodic case, $\mathcal{S}(\mathcal O)$ is a symplectic stratified space. From now on we will focus mostly on the largest stratum $\mathcal{S}(\mathcal O)_{reg}$. \subsection{The regular part of the symplectic leaf $\mathcal{S}(\mathcal O)$} \label{op-s-leaf} We use the gauge fixing isomorphism $\varphi_n$ in the remainder of the text. We now use $K\backslash G/K\simeq A/W$ with $W=N_K(A)/M$ the Weyl group of $G$ (see subsection \S3 of the introduction) to describe the regular part of the symplectic leaf $\mathcal{S}(\mathcal O)$ in radial coordinates. Define the regular part of the phase space $\mathcal{S}(\mathcal O)$ (see \eqref{ps}) as \[ \mathcal{S}(\mathcal O)_{reg}=\mathcal{S}(\mathcal O)\cap K\backslash (\mathfrak{g}^{*\times n+1}\times G_{reg})/K. \] The regular part $\mathcal{S}(\mathcal O)_{reg}\subset \mathcal{S}(\mathcal O)$ is its largest stratum of the stratified symplectic space $\mathcal{S}(\mathcal O)$. We can then choose a representative of $K(y_0,\ldots,y_n,g)K\in\mathcal{S}(\mathcal O)_{reg}$ with the $G$-component in $A_{reg}$ by writing $g=k_\ell ak_r^{-1}$ and $y_i=Ad_{k_\ell}^*(x^{(i)})$ with $k_\ell,k_r\in K$ and $a\in A_{reg}$. It follows that \begin{equation*} \begin{split} \mathcal{S}(\mathcal O)_{reg}\simeq\bigl\{(x^{(0)}, \dots, x^{(n)}, a)\in \mathfrak{g}^{*\times n+1}\times &A_{reg}\,\, | \,\, \pi(x^{(0)})\in \mathcal O^K_\ell,-\pi(Ad_{a^{-1}}^*(x^{(n)}))\in \mathcal O^K_r,\\ &\,\, x^{(1)}-x^{(0)}\in\mathcal O_1,\ldots,x^{(n)}-x^{(n-1)}\in\mathcal O_n\bigr\}/N_K(A). \end{split} \end{equation*} Here $N_K(A)$ acts diagonally on $\mathfrak{g}^{*\times n+1}\times A_{reg}$ via the coadjoint action on $\mathfrak{g}^*$ and the conjugation action on $A_{reg}$.\footnote{We use here the fact that $k_\ell a k_r^{-1}=k_\ell^\prime a^\prime k_r^{\prime-1}$ for $k_\ell,k_\ell^\prime,k_r,k_r^\prime\in K$ and $a,a^\prime\in A_{reg}$ is implying that $k_\ell^{-1}k_\ell^\prime=k_r^{-1}k_r^\prime\in N_K(A)$, cf., e.g., \cite[\S VII.3]{Kn}. This essentially follows from the global Cartan decomposition of $G$.} We can now also divide out the action of $M=Z_K(A)$, to obtain the isomorphism \begin{equation*} \begin{split} \mathcal{S}(\mathcal O)_{reg}\simeq\bigl\{(M(x^{(0)}, \dots, x^{(n)}), a)\in \mathfrak{g}^{*\times n+1}/M\times &A_{reg}\,\, | \,\, \pi(x^{(0)})\in \mathcal O^K_\ell,-\pi(Ad_{a^{-1}}^*(x^{(n)}))\in \mathcal O^K_r,\\ &\,\, x^{(1)}-x^{(0)}\in\mathcal O_1,\ldots,x^{(n)}-x^{(n-1)}\in\mathcal O_n\bigr\}/W, \end{split} \end{equation*} where $M$ acts by the diagonal coadjoint action on $\mathfrak{g}^{*\times n+1}$ and $W$ acts diagonally on the space $\mathfrak{g}^{*\times n+1}/M\times A_{reg}$ in the natural way. 
Recall that we identified $\mathfrak{g}\simeq\mathfrak{g}^*$ and $\mathfrak{a}\simeq\mathfrak{a}^*$ via the Killing form, so that $x\in\mathfrak{g}^*$ corresponds to $x_0+\sum_\alpha x_\alpha e_\alpha$ with $x_0$ the element in $\mathfrak{a}$ corresponding to $x\vert_{\mathfrak{a}}\in\mathfrak{a}^*$ and $x_\alpha=x(e_{-\alpha})$. Denote by $x_0^{(k)}, x_\alpha^{(k)}$ the components of vectors $x\in\mathfrak{g}^{*\times n+1}$ from the $k$-th factor of ${\mathfrak{g}^*}^{n+1}$, and $\mu_0^{(k)}, \mu_\alpha^{(k)}$ the components of $\mu\in \mathcal O_k$. For $y\in \mathfrak{k}^*$ we write $y_{[\alpha]}=y(e_{-\alpha}-e_\alpha)$, so that $y_{[\alpha]}=-y_{[-\alpha]}$. Consider \begin{equation*} \begin{split} T(\mathcal O)_{reg}=\bigl\{(x^{(0)}, \dots, x^{(n)}, a)\in \mathfrak{g}^{*\times n+1}\times A_{reg} \,\,|\,\,& \pi(x^{(0)})\in \mathcal O^K_\ell,-\pi(Ad_{a^{-1}}^*(x^{(n)}))\in \mathcal O^K_r,\\ &x^{(1)}-x^{(0)}\in\mathcal O_1,\ldots, x^{(n)}-x^{(n-1)}\in\mathcal O_n\bigr\}. \end{split} \end{equation*} Clearly $S(\mathcal O)_{reg}=T(\mathcal O)_{reg}/N_K(A)$. For $(x^{(0)},\ldots,x^{(n)},a)\in T(\mathcal O)_{reg}$ write $\mu^\prime=\pi(x^{(0)})\in\mathcal O_\ell^K$, $\mu^{\prime\prime}=-\pi(Ad_{a^{-1}}^*(x^{(n)}))\in\mathcal O_r^K$ and $\mu^{(i)}=x^{(i)}-x^{(i-1)}\in\mathcal O_i$ for $i=1,\ldots,n$. The Cartan components of $x^{(k)}$ and their root coordinates then satisfy \begin{equation}\label{x-mu} \begin{split} x_\alpha^{(0)}-x_{-\alpha}^{(0)}&=\mu_{[\alpha]}^\prime,\qquad\quad a_\alpha x_{-\alpha}^{(n)}-a_\alpha^{-1}x_\alpha^{(n)}=\mu_{[\alpha]}^{\prime\prime},\\ x^{(i)}_\alpha-x^{(i-1)}_\alpha&=\mu_\alpha^{(i)},\qquad\quad x_0^{(i)}-x_0^{(i-1)}=\mu_0^{(i)} \end{split} \end{equation} for $i=1,\ldots,n$. It is easy to solve the equations for Cartan parts $x_0^{(i)}$ ($0<i<n$) in terms of Catran components of $x^{(0)},x^{(n)}$ and $\mu^{(j)}$, \begin{equation}\label{x-var} x^{(i)}_0=\mu^{(i)}_0+\dots + \mu^{(1)}_0+ x^{(0)}_0=x^{(n)}_0-\mu^{(n)}_0-\dots-\mu^{(i+1)}_0 \end{equation} \begin{proposition} The following identities hold for $\alpha\in R$ and $k=0,1,\ldots,n$: \begin{equation}\label{x-k-b} x^{(k)}_\alpha=K_\alpha+\sum_{l=1}^{k}\frac{a_{\alpha}\mu^{(l)}_\alpha-a_{\alpha}\mu^{(l)}_{-\alpha}}{a_\alpha-a_{\alpha}^{-1}} +\sum_{l=k+1}^n\frac{a_{\alpha}^{-1}\mu^{(l)}_\alpha-a_{\alpha}\mu^{(l)}_{-\alpha}}{a_\alpha-a_{\alpha}^{-1}} \end{equation} where \begin{equation}\label{Ka} K_\alpha=\frac{a_\alpha\mu_{[\alpha]}^\prime+\mu''_{[\alpha]}}{a_\alpha-a_{\alpha}^{-1}}. \end{equation} \end{proposition} \begin{proof} Denote \begin{equation}\label{muexp} \mu=\mu^{(1)}+\dots + \mu^{(n)}. \end{equation} Note that $x^{(n)}-x^{(0)}=\mu$. Fix $\beta\in R_+$. For $x^{(1)}_{\pm\beta}$ and $x^{(n)}_{\pm\beta}$ the formula $x^{(n)}-x^{(0)}=\mu$ implies \[ x^{(n)}_\beta-x^{(0)}_\beta=\mu_\beta, \ \ x^{(n)}_{-\beta}-x^{(0)}_{-\beta}=\mu_{-\beta}. 
\] Combined with the first line of \eqref{x-mu} we end up with four linear equations in $x^{(0)}_\beta,x^{(0)}_{-\beta},x^{(n)}_\beta,x^{(n)}_{-\beta}$ which, by the assumption that $a$ is regular, are uniquely solved by \begin{equation}\label{extremes} \begin{split} x^{(0)}_\alpha&=\frac{a_\alpha\mu_{[\alpha]}^\prime+\mu_{[\alpha]}''+(a_{\alpha}^{-1}\mu_\alpha-a_{\alpha}\mu_{-\alpha})}{a_\alpha-a_{\alpha}^{-1}},\\ x^{(n)}_{\alpha}&=\frac{a_\alpha\mu_{[\alpha]}^\prime+\mu_{[\alpha]}''+(a_{\alpha}\mu_\alpha-a_\alpha\mu_{-\alpha})}{a_\alpha-a_{\alpha}^{-1}}\\ \end{split} \end{equation} for $\alpha=\beta,-\beta$ (here we used that $a_{-\beta}=a_\beta^{-1}$, $\mu_{[-\beta]}^\prime=-\mu_{[\beta]}^\prime$ and $\mu^{\prime\prime}_{[-\beta]}=-\mu^{\prime\prime}_{[\beta]}$). By the second line of \eqref{x-mu} we then obtain \[ x^{(k)}_\alpha=\frac{a_\alpha\mu_{[\alpha]}^\prime+\mu_{[\alpha]}''+(a_{\alpha}^{-1}\mu_\alpha-a_{\alpha}\mu_{-\alpha}) +(a_\alpha-a_\alpha^{-1})(\mu_\alpha^{(1)}+\cdots+\mu_\alpha^{(k)})}{a_\alpha-a_{\alpha}^{-1}} \] for $k=0,1,\ldots,n$. Substituting \eqref{muexp} it is now easy to see that this is exactly what we wanted to prove. \end{proof} The proposition and \eqref{x-var} give an isomorphism \begin{equation}\label{open-sl} S(\mathcal O)_{reg}\simeq \bigl(( \mathcal O_\ell^K \times \mathcal O_1 \times \dots \times\mathcal O_n \times \mathcal O_r^K)/M\times T^*A_{reg}\bigr)/W, \end{equation} mapping $N_K(A)(x^{(0)},\ldots,x^{(n)},a)$ to the $W$-orbit of $(M(\mu^\prime,\mu^{(1)},\ldots,\mu^{(n)},\mu^{\prime\prime}),x_0^{(n)},a)$, which preserves the natural symplectic structures. Here the finite discrete group $M=Z_K(A)\subset K$ acts diagonally via the coadjoint action, and $W=N_K(A)/M$ acts diagonally. The quantum version of this isomorphism is described in \cite{SR}. \subsection{Quadratic Hamiltonians of open spin Calogero-Moser chain on the regular part of the phase space} \label{qu-Ham} In this section we compute the restriction of the Hamiltonian corresponding to the quadratic Casimir function on $\mathfrak{g}^*$, \[ H_2^{(k)}(x,g)=\frac{1}{2}(x^{(k)},x^{(k)})=\frac{1}{2}(x^{(k)}_0,x^{(k)}_0)+\sum_{\alpha>0} x^{(k)}_\alpha x^{(k)}_{-\alpha} \] to the regular part of $\mathcal{S}(\mathcal O)$ (see \eqref{bcase}) for $k=0,\ldots,n$, where $(x,g)\in\mathfrak{g}^{*\times n+1}\times G^{\times n+1}$. Here $(\cdot,\cdot)$ is the Killing form and $x_\alpha^{(i)}, x^{(i)}_0$ are the components of $x^{(i)}$ which were computed in the previous section on the regular part of the phase space. We first consider the differences, which we will call the {\it boundary Knizhnik-Zamolodchikov-Bernard (bKZB) Hamiltonians}, \[ D_k=H_2^{(k)}-H_2^{(k-1)}\qquad\quad (1\leq k\leq n). \] \begin{theorem} For the bKZB Hamiltonians we have the following formula: \begin{equation}\label{Dk} D_k=(\mu^{(k)}_0,x^{(n)}_0)-\sum_{l=1}^{k-1}(r_{lk}+r_{lk}^{\theta_l})+(\sum_\alpha K_\alpha\mu^{(k)}_{-\alpha}-\kappa_k) +\sum_{l=k+1}^n(r_{kl}-r_{kl}^{\theta_k}). \end{equation} Here $r_{kl}$ for $k\not=l$ is Felder's rescaled dynamical $r$-matrix \begin{equation}\label{F-r-b} r_{kl}=-\frac{1}{2}(\mu^{(k)}_0, \mu^{(l)}_0)+\sum_{\alpha}\frac{ \mu^{(k)}_{-\alpha}\mu^{(l)}_{\alpha}}{a_\alpha^2-1}, \end{equation} $\theta_k$ is the transpose of the Chevalley involution $\theta$ acting on $\mu^{(k)}$, \[ \kappa_k=\frac{1}{2} (\mu^{(k)}_0,\mu^{(k)}_0)+\sum_\alpha\frac{({\mu^{(k)}_\alpha})^2}{1-a_{\alpha}^2} \] is the core quadratic classical dynamical $k$-matrix and $K_\alpha$ is given by (\ref{Ka}). 
\end{theorem} \begin{proof} The first step of the proof is the same as in the proof of Theorem \ref{thmradcyclic}, resulting in the expression \begin{equation}\label{dd} D_k=(\mu_0^{(k)},x_0^{(k-1)}+\frac{1}{2}\mu_0^{(k)})+\sum_\alpha x_\alpha^{(k-1)}\mu_{-\alpha}^{(k)}+\sum_{\alpha>0}\mu_\alpha^{(k)}\mu_{-\alpha}^{(k)}. \end{equation} Now let us the formula (\ref{x-k-b}) for $x^{(k-1)}_\alpha$, \begin{equation}\label{decompx} \begin{split} \sum_{\alpha} x^{(k-1)}_{\alpha}\mu^{(k)}_{-\alpha}&=\sum_{l=1}^{k-1}\sum_{\alpha}\frac{a_{\alpha}\mu^{(l)}_{\alpha}\mu_{-\alpha}^{(k)}-a_{\alpha}\mu^{(l)}_{-\alpha}\mu_{-\alpha}^{(k)}}{a_\alpha-a_{\alpha}^{-1}}\\ &+\sum_{\alpha} K_{\alpha}\mu^{(k)}_{-\alpha}+\sum_\alpha\frac{a_{\alpha}^{-1}\mu^{(k)}_{\alpha}\mu_{-\alpha}^{(k)}-a_{\alpha}\mu^{(k)}_{-\alpha}\mu_{-\alpha}^{(k)}} {a_\alpha-a_{\alpha}^{-1}}\\ &+\sum_{l=k+1}^n\sum_\alpha\frac{a_{\alpha}^{-1}\mu_{-\alpha}^{(k)}\mu^{(l)}_{\alpha}-a_{\alpha}\mu_{-\alpha}^{(k)}\mu^{(l)}_{-\alpha}}{a_\alpha-a_{\alpha}^{-1}}. \end{split} \end{equation} We express the different terms in the right hand side of \eqref{decompx} in terms of the dynamical $r$-matrix and $k$-matrix. Note first that \[ r_{kl}^{\theta_k}=\frac{1}{2}(\mu_0^{(k)},\mu_0^{(l)})-\sum_\alpha\frac{\mu_\alpha^{(k)}\mu_\alpha^{(l)}}{a_\alpha^2-1}=r_{lk}^{\theta_l}. \] Then the terms in the right hand side of \eqref{decompx} with $l$ strictly smaller than $k$ can be rewritten as \[ \sum_{\alpha}\frac{a_{\alpha}\mu^{(l)}_{\alpha}\mu_{-\alpha}^{(k)}-a_{\alpha}\mu^{(l)}_{-\alpha}\mu_{-\alpha}^{(k)}}{a_\alpha-a_{\alpha}^{-1}}= -(r_{lk}+r_{lk}^{\theta_l}) \] while the terms in the right hand side of \eqref{decompx} with $l$ strictly larger than $k$ reduce to \[ \sum_\alpha\frac{a_{\alpha}^{-1}\mu_{-\alpha}^{(k)}\mu^{(l)}_{\alpha}-a_{\alpha}\mu_{-\alpha}^{(k)}\mu^{(l)}_{-\alpha}}{a_\alpha-a_{\alpha}^{-1}}= (\mu_0^{(k)},\mu_0^{(l)})+(r_{kl}-r_{kl}^{\theta_k}). \] Finally, for the middle term in \eqref{decompx} a direct computation shows that \[ \sum_\alpha\frac{a_{\alpha}^{-1}\mu^{(k)}_{\alpha}\mu_{-\alpha}^{(k)}-a_{\alpha}\mu^{(k)}_{-\alpha}\mu_{-\alpha}^{(k)}} {a_\alpha-a_{\alpha}^{-1}}=\frac{1}{2}(\mu_0^{(k)},\mu_0^{(k)})-\sum_{\alpha>0}\mu_\alpha^{(k)}\mu_{-\alpha}^{(k)}-\kappa_k. \] Substitute these formulas in \eqref{decompx}, then the resulting formula \eqref{dd} for $D_k$ becomes \begin{equation*} \begin{split} D_k&=(\mu_0^{(k)},x_0^{(k-1)}+\mu_0^{(k)}+\mu_0^{(k+1)}+\cdots+\mu_0^{(n)})\\ &-\sum_{l=1}^{k-1}(r_{lk}+r_{lk}^{\theta_l})+(\sum_\alpha K_\alpha\mu^{(k)}_{-\alpha}-\kappa_k) +\sum_{l=k+1}^n(r_{kl}-r_{kl}^{\theta_k}). \end{split} \end{equation*} By \eqref{x-var} this reduces to the formula (\ref{Dk}). \end{proof} The quantum versions of the boundary KZB Hamiltonians in the present context were obtained in \cite[\S 6]{SR}. It was extended to the case of non-split real semisimple Lie groups $G$ in \cite{RS}. For the Hamiltonian $H^{(n)}_2$ we obtain by \eqref{extremes} the expression \[ H^{(n)}_2=\frac{1}{2}(p,p)+\sum_{\alpha>0}\frac{(a_\alpha\mu_{[\alpha]}^\prime+\mu_{[\alpha]}^{\prime\prime}+a_\alpha(\mu_\alpha-\mu_{-\alpha}))(a_\alpha^{-1}\mu_{[\alpha]}^\prime+\mu_{[\alpha]}^{\prime\prime}+a_\alpha^{-1}(\mu_\alpha-\mu_{-\alpha}))}{(a_\alpha-a_{-\alpha})^2} \] on $\mathcal{S}(\mathcal O)_{reg}$, where $\mu=\mu^{(1)}+\cdots+\mu^{(n)}$. Here we use notation $p=x^{(n)}_0$ for the cotangent vectors to $A_{reg}$ in formula (\ref{open-sl}). 
Note that the potential term only depends on the restrictions $\pi(\mu^{(i)})$ of $\mu^{(i)}\in\mathfrak{g}^*$ to $\mathfrak{k}$, since $\mu_\alpha-\mu_{-\alpha}=-(\pi(\mu))_{[\alpha]}$. The radial component of the quantum quadratic Hamiltonian in the current open context was obtained in \cite[\S 6]{SR}. \subsection{The superintegrability of the open spin CM chain}\label{op-sint} In this section we will prove that Poisson commutative subalgebra of Hamiltonians constructed in section \ref{oCM-Ham} defines a superintegrable system. Fix $\mathcal O=(\mathcal O_\ell^K,\mathcal O_1,\ldots,\mathcal O_n,\mathcal O_r^K)\in (\mathfrak{k}^*/K\times (\mathfrak{g}^*/G)^{\times n}\times\mathfrak{k}^*/K$. We willl construct Poisson projections \begin{equation}\label{Sint-open-pr} S(\mathcal O)\stackrel{p_{1,\mathcal O}}{\longrightarrow} \mathcal P(\mathcal O)\stackrel{p_{2,\mathcal O}}{\longrightarrow} \mathcal B(\mathcal O) \end{equation} such that $p_\mathcal O=p_{2,\mathcal O}\circ p_{1,\mathcal O}$ (see \eqref{p}), satisfying the desired properties. Let $(\mathfrak{k}^*\times\mathfrak{g}^{*\times n})\times_{(\mathfrak{g}^*/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times\mathfrak{k}^*)$ be the subset of $(\mathfrak{k}^*\times\mathfrak{g}^{*\times n})\times(\mathfrak{g}^{*\times n}\times\mathfrak{k}^*)$ consisting of elements $(z_\ell,x_1,\ldots,x_n,y_1,\ldots,y_n,z_r)$ satisfying \[ z_\ell\in -\pi(Gy_1),\quad x_i\in -Gy_{i+1}\,\, (1\leq i<n),\quad z_r\in -\pi(Gx_n). \] The gauge group $G_{n,K}$ acts on $(\mathfrak{k}^*\times\mathfrak{g}^{*\times n})\times_{(\mathfrak{g}^*/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times\mathfrak{k}^*)$ by \begin{equation}\label{g-act-P} \begin{split} (k_\ell, h_1, \dots, h_n, k_r)&(z_\ell,x_1,\dots, x_n,y_1,\dots, y_n, z_r)=\\ &= (Ad^*_{k_\ell}z_\ell, Ad^*_{h_1}x_1,\dots, Ad^*_{h_n}x_n,Ad^*_{h_1}y_1,\dots, Ad^*_{h_n}y_n, Ad^*_{k_r}z_r). \end{split} \end{equation} Consider the resulting Poisson space \[ \mathcal P=\bigl((\mathfrak{k}^*\times\mathfrak{g}^{*\times n})\times_{(\mathfrak{g}^*/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times\mathfrak{k}^*)\bigr)/G_{n,K} \] and define the Poisson map \[ p_1: T^*(G^{\times n+1})/G_{n,K}\rightarrow \mathcal P \] by \begin{equation*} \begin{split} p_1\bigl(G_{n,K}(x,g)\bigr)&=G_{n,K}\bigl(\mu_L(x,g),\mu_R(x,g)\bigr)\\ &=G_{n,K}\bigl(\pi(x_0),x_1,\ldots,x_n,-Ad_{g_0^{-1}}^*(x_0),\ldots,-Ad_{g_{n-1}^{-1}}^*(x_{n-1}),-\pi(Ad_{g_n^{-1}}^*(x_n))\bigr). \end{split} \end{equation*} Here $(x,g)=(x_0,\ldots,x_n,g_0,\ldots,g_n)\in\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\simeq T^*(G^{\times n+1})$. Define the Poisson projection \[ p_2: \mathcal P\to (\mathfrak{g}^*/G)^{\times n+1} \] by \[ p_2\bigl(G_{n,K}(z_\ell,x_1,\dots, x_n,y_1,\dots, y_n, z_r)\bigr)=(-Gy_1,Gx_1,\dots, Gx_n), \] with the trivial Poisson structure on the target space. 
The restriction of the Poisson projection $p_1$ to the symplectic leaf $S(\mathcal O)\subset T^*(G^{\times n+1})/G_{n,K}$ (see \eqref{bcase}) gives the Poisson projection \[ p_{1,\mathcal O}: S(\mathcal O)\to \mathcal P(\mathcal O) \] where \[ \mathcal P(\mathcal O)=(\mu_L\times \mu_R)(\mu^{-1}(\mathcal O))/G_{n,K}, \] or, more explicitly, \begin{equation}\label{repP} \begin{split} \mathcal P(\mathcal O)=\bigl\{(z_\ell,x_1,\ldots,x_n, &y_1,\ldots,y_n,z_r)\in (\mathfrak{k}^*\times\mathfrak{g}^{*\times n})\times_{(\mathfrak{g}^*/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times \mathfrak{k}^*)\,\, | \,\, \\ &z_\ell\in\mathcal O_\ell^K,\,x_1+y_1\in\mathcal O_1,\ldots,x_n+y_n\in\mathcal O_n,\, z_r\in\mathcal O_r^K\bigr\}/G_{n,K}. \end{split} \end{equation} The generic fibers of this mapping are isotropic submanifolds in $\mathcal{S}(\mathcal O)$. The restriction of the Poisson projection $p_2$ to $\mathcal P(\mathcal O)$ gives a surjective Poisson projection \[ p_{2,\mathcal O}: \mathcal P(\mathcal O)\to \mathcal B(\mathcal O), \] with $\mathcal B(\mathcal O)$ given by \eqref{cBO}. Clearly, the composition of $p_{2,\mathcal O}\circ p_{1,\mathcal O}: \mathcal{S}(\mathcal O)\rightarrow \mathcal B(\mathcal O)$ is the projection $p_\mathcal O$ as given by \eqref{p}. Now let us describe fibers of $p_{2,\mathcal O}$ are symplectic leaves of $\mathcal P(\mathcal O)$. \begin{lemma} We have he following symplectomorphysm \begin{equation}\label{decomppOK} p_{2,\mathcal O}^{-1}((\mathcal O^{(0)},\dots, \mathcal O^{(n)}))\overset{\sim}{\longrightarrow}\mathcal{M}(-\mathcal O^{(0)},\mathcal O_\ell^K) \times\prod_{i=1}^n\mathcal{M}(-\mathcal O^{(i)},\mathcal O^{(i-1)},\mathcal O_i)\times \mathcal{M}(\mathcal O^{(n)},\mathcal O_r^K) \end{equation} where symplectic spaces $\mathcal{M}(-\mathcal O^{(i)},\mathcal O^{(i-1)},\mathcal O_i)$ are defined in (\ref{cM3}) and \begin{equation*} \mathcal{M}(\mathcal O^\prime,\mathcal O^K)=\{(x,z)\in\mathcal O^\prime\times\mathcal O^K\,\, | \,\, \pi(x)+z=0\}/K \end{equation*} Here $\mathcal O\subset \mathfrak{g}^*$ is a $G$-coadjoint orbit and $\mathcal O^K\subset\mathfrak{k}^*$ is a $K$-coadjoint orbit. It has a natural symplectic structure because it is the Hamiltonian reduction of $\mathcal O\times \mathcal O^K$ with respect to the Hamiltonian diagonal action of $K$. \end{lemma} \begin{proof} Let $(\mathcal O^{(0)},\ldots,\mathcal O^{(n)})\in\mathcal B(\mathcal O)$. By a direct computation, the fiber $p_{2,\mathcal O}^{-1}((\mathcal O^{(0)},\dots, \mathcal O^{(n)}))$ consists of the $G_{n,K}$-orbits in $\mathcal P$ with representatives \begin{equation*} (z_\ell,x_1,\ldots,x_n,y_1,\ldots,y_n,z_r)\in (\mathfrak{k}^*\times\mathfrak{g}^{*\times n})\times (\mathfrak{g}^{*\times n}\times\mathfrak{k}^*) \end{equation*} satisfying the following conditions \begin{equation*} \begin{matrix} &z_\ell\in\mathcal O_\ell^K\cap\pi(\mathcal O^{(0)}),\quad &x_i+y_i\in\mathcal O_i\quad (1\leq i\leq n),\,\,&z_r\in\mathcal O_r^K\cap -\pi(\mathcal O^{(n)}),\\ &-y_1\in \mathcal O^{(0)},\quad &x_i\in\mathcal O^{(i)}, -y_{i+1} \mathcal O^{(i)}\in \, (1\leq i\leq n-1),\quad &x_n\in\mathcal O^{(n)}. \end{matrix} \end{equation*} Using this explicit description of the fiber, we can write it as a direct product of symplectic spaces. The isomorphism \eqref{cMiso} for symplectic spaces $\mathcal{M}(\mathcal O^{(1)},\mathcal O^{(2)},\mathcal O^{(3)})$ defined by \eqref{cM3} gives factors $\mathcal{M}(-\mathcal O^{(i)},\mathcal O^{(i-1)},\mathcal O_i)$. 
The space \begin{equation*} \mathcal{M}(\mathcal O^\prime,\mathcal O^K)=\{(x,z)\in\mathcal O^\prime\times\mathcal O^K\,\, | \,\, \pi(x)+z=0\}/K \end{equation*} with $K$ acting by the diagonal coadjoint action is symplectic because, see above. The isomorphism \[ \mathcal{M}(\mathcal O^\prime,\mathcal O^K)\overset{\sim}{\longrightarrow} (\mathcal O^\prime\cap\pi^{-1}(\mathcal O^K))/K, \] completes the proof. The isomorphism maps $G_{n,K}(z_\ell,x_1,\ldots,x_n,y_1,\ldots,y_n,z_r)\in p_{2,\mathcal O}^{-1}((\mathcal O^{(0)},\dots, \mathcal O^{(n)}))$ to \[ (K(\widetilde{y}_1,z_\ell),G(-x_1,-y_1,x_1+y_1),\ldots,G(-x_n,-y_n,x_n+y_n),K(\widetilde{x}_n,z_r)), \] with $\widetilde{y}_1\in Gy_1=-\mathcal O^{(0)}$ such that $-\pi(\widetilde{y}_1)=z_\ell$ and $\widetilde{x}_n\in Gx_n=\mathcal O^{(n)}$ such that $-\pi(\widetilde{x}_n)=z_r$. \end{proof} Note that for generic $\mathcal O^\prime$, the symplectic space $\mathcal{M}(\mathcal O^\prime,\mathcal O^K)$ is of dimension $\dim(\mathcal O^K)$. \begin{remark} In the compact case, the algebra of function on the fiber of $p_{2,\mathcal O}$ has the algebra of endomorphisms of the vector space \[ \textup{Hom}_K(V_{\lambda_0},U_{\nu_\ell})\otimes\textup{Hom}_G(V_{\lambda_1},V_{\lambda_0}\otimes V_{\mu_1})\otimes\cdots\otimes \textup{Hom}_G(V_{\lambda_n},V_{\lambda_{n-1}}\otimes V_{\mu_n})\otimes\textup{Hom}_K(U_{\nu_r},V_{\lambda_n}) \] as natural quantization, where the finite dimensional $G$-representation $V_{\lambda_i}$ \textup{(}resp. $V_{\mu_i}$\textup{)} corresponds to $\mathcal O^{(i)}$ \textup{(}resp. $\mathcal O_i$\textup{)} and the $K$-representation $U_{\nu_\ell}$ \textup{(}resp. $U_{\nu_r}$\textup{)} corresponds to $\mathcal O_\ell^K$ \textup{(}resp. $\mathcal O_r^K$\textup{)}, compare with Remark \ref{cyclicquantum} in the cyclic case. For details see \cite{SR} \textup{(}which treats the noncompact case\textup{)} and \cite{RSQCD2}. \end{remark} \begin{lemma} Dimensions of spaces $\mathcal B(\mathcal O)$ and $\mathcal P(\mathcal O)$ are \[ \dim(\mathcal B(\mathcal O))=(n+1)r, \ \ \dim(\mathcal P(\mathcal O))=\dim(\mathcal O)-2nr \] where we define $\dim(\mathcal O)$ as $\dim(\mathcal O_\ell^K)+\sum_{i=1}^n\dim(\mathcal O_i)+\dim(\mathcal O_r^K)$. \end{lemma} \begin{proof} The proof of the dimension formula for $\mathcal B(\mathcal O)$ is completely similar to the periodic case. It is enough to consider large orbits. For the dimension of $\mathcal P(\mathcal O)$ we have: \[ \dim(\mathcal P(\mathcal O))=\dim(\mathcal B(\mathcal O))+\dim\bigl(p_{2,\mathcal O}^{-1}((\mathcal O^{(0)},\ldots,\mathcal O^{(n)})) \] and by \eqref{decomppOK} the dimension of $p_{2,\mathcal O}^{-1}((\mathcal O^{(0)},\ldots,\mathcal O^{(n)}))$ is equal to \begin{equation*} \begin{split} \dim(\mathcal{M}(-\mathcal O^{(0)},\mathcal O_\ell^K))&+\sum_{i=1}^n\dim(\mathcal{M}(-\mathcal O^{(i)},\mathcal O^{(i-1)},\mathcal O_i))+\dim(\mathcal{M}(\mathcal O^{(n)},\mathcal O_r^K))=\\ &=\dim(\mathcal O_\ell^K)+\sum_{i=1}^n(\dim(\mathcal O_i)-2r)+\dim(\mathcal O_r^K)=\dim(\mathcal O)-2nr. \end{split} \end{equation*} This finishes the proof. \end{proof} We now have the following main result of this section. 
\begin{theorem}
The Hamiltonian system generated by any Hamiltonian for the open spin CM chain described in section \ref{oCM-Ham} is superintegrable, with the superintegrable structure described by the surjective Poisson maps
\[
\mathcal{S}(\mathcal O) \stackrel{p_{1,\mathcal O}}{\longrightarrow} \mathcal P(\mathcal O) \stackrel{p_{2,\mathcal O}}{\longrightarrow} \mathcal B(\mathcal O)
\]
as introduced earlier in this section.
\end{theorem}
Recall that $\mathcal O_i\neq \{0\}$ for all $i=0,1, \dots, n$.
\begin{proof}
We already verified most of the conditions. What remains to show is the matching of dimensions,
\begin{equation}\label{lastcheck}
\dim(\mathcal{S}(\mathcal O))=\dim(\mathcal P(\mathcal O))+\dim(\mathcal B(\mathcal O)).
\end{equation}
For the collection $\mathcal O=(\mathcal O_\ell^K, \mathcal O_1,\dots, \mathcal O_{n}, \mathcal O_r^K)$ of coadjoint orbits we write $\dim(\mathcal O)$ for the sum of the dimensions of the coadjoint orbits. For the dimension of the symplectic leaf $\mathcal{S}(\mathcal O)$ we have, using \eqref{open-sl},
\[
\dim(\mathcal{S}(\mathcal O))=2r+\dim(\mathcal O),
\]
with $r$ the rank of $\mathfrak{g}$. For $\mathcal P(\mathcal O)$ we obtained, for generic $(\mathcal O^{(0)},\ldots,\mathcal O^{(n)})\in\mathcal B(\mathcal O)$, that $\dim(\mathcal P(\mathcal O))=\dim(\mathcal B(\mathcal O))+\dim(\mathcal O)-2nr$. Because
\[
\dim(\mathcal B(\mathcal O))=(n+1)r,
\]
we have
\begin{equation*}
\begin{split}
\dim(\mathcal P(\mathcal O))+\dim(\mathcal B(\mathcal O))&=\dim(\mathcal O)-2nr+2\dim(\mathcal B(\mathcal O))\\
&=\dim(\mathcal O)+2r\\
&=\dim(\mathcal{S}(\mathcal O)),
\end{split}
\end{equation*}
as desired.
\end{proof}
\subsection{Constructing solutions by the projection method and angle variables}
\label{op-proj-dyn}
Let $\mathcal{H}$ be a $G$-invariant function on $\mathfrak{g}^*$ and $\mathcal{H}^{(i)}$ for $i=0,\ldots,n$ the associated $G_n$-invariant function $(x,g)\mapsto \mathcal{H}(x_i)$ on $T^*(G^{\times n+1})$ (cf. section \ref{solutions}). The Hamiltonian flow generated by $\mathcal{H}^{(i)}$ on $T^*(G^{\times n+1})\simeq \mathfrak{g}^{*\times n+1}\times G^{\times n+1}$ was already described in section \ref{solutions}. The flow line passing through $(x,g)$ at $t=0$ is
\begin{equation}\label{flow-open}
(x(t_i), g(t_i))=(x_0, \dots, x_n, g_0, \dots,g_{i-1}, e^{\nabla\mathcal{H}(x_i) t_i}g_i,g_{i+1}, \dots, g_n).
\end{equation}
The corresponding Hamiltonian flow on the symplectic leaf $\mathcal{S}(\mathcal O)\subset T^*(G^{\times n+1})/G_{n,K}$ is obtained by projecting the flow (\ref{flow-open}) to $T^*(G^{\times n+1})/G_{n,K}$ and restricting it to $\mathcal{S}(\mathcal O)$. Thus, we have reduced the problem of solving the nonlinear differential equations of motion for open spin Calogero-Moser chains to a problem of linear algebra. This is a version of the original projection method which goes back to earlier papers on Calogero-Moser type systems \cite{OP}. Now let us describe angle variables for this integrable dynamics. We use the notations from section \ref{solutions}. Fix $(x,g)\in \mathfrak{g}^{\prime*\times n+1}\times G^{\times n+1}$. For $i=1,\dots, n$ define elements $s_i\in G$ by the condition $Ad^*_{s_i}(x_i)\in \mathfrak{a}^*_{+}$. As in the periodic case (see section \ref{solutions}), $s_i$ is defined up to $s_i\mapsto a_is_i$ with $a_i\in H\subset G$, where $H\subset G$ is the Cartan subgroup containing $A$. Define $s_0\in K$ such that $Ad_{s_0}^*(x_0)\vert_{\mathfrak{p}}\in\mathfrak{a}_{+}^*$ (where we view $\mathfrak{a}_+^*$ now as a subset of $\mathfrak{p}^*$ in the natural manner).
The element $s_0$ is defined up to $s_0\mapsto ms_0$ with $m\in M=Z_K(A)$. Similarly, we define $s_{n+1}\in K$ such that $Ad_{s_{n+1}}^*(x_n)\vert_{\mathfrak{p}}\in \mathfrak{a}_+^*$. Choose finite dimensional representations $V_0,V_1,\dots, V_n$ of $G_{\mathbb{C}}$, $H_{\mathbb{C}}$-weight vectors $v_i\in V_i$ of weight $\lambda_{i+1}$ for $0\leq i<n$ and $H_{\mathbb{C}}$-weight vectors $u_j^*\in V_j^*$ of weight $-\lambda_j$ for $0\leq j\leq n$. Finally, we choose $M$-invariant vectors $u_0^*\in V_0^*$ and $v_n\in V_n$ (i.e., $mu_0^*=u_0^*$ and $mv_n=v_n$ for all $m\in M$). Define
\begin{equation}\label{fopen}
f_{u,v}(x,g)=u^*_0(s_0g_0s^{-1}_1v_0)u^*_1(s_1g_1s^{-1}_2v_1)\cdots u_{n-1}^*(s_{n-1}g_{n-1}s_n^{-1}v_{n-1})u_n^*(s_ng_ns_{n+1}^{-1}v_n).
\end{equation}
It is easy to check that $f_{u,v}(x,g)$ is a well defined $G_{n,K}$-invariant function on $\mathfrak{g}^{\prime*\times n+1}\times G^{\times n+1}$. As in the periodic case (see section \ref{solutions}) we then have for $i=1,\ldots,n$,
\begin{equation}\label{trivialflow}
f_{u,v}(x(t_i),g(t_i))=e^{t_i\lambda_i(\nabla\mathcal{H}(y_i))}f_{u,v}(x,g)
\end{equation}
with $y_i=Ad_{s_i}^*(x_i)\in\mathfrak{a}_+^*$. Logarithms of these functions thus evolve linearly, and hence give rise to angle variables for the Hamiltonians $\mathcal{H}^{(i)}$ on $\mathcal{S}(\mathcal O)\cap (\mathfrak{g}^{\prime*\times n+1}\times G^{\times n+1})/G_{n,K}$. For $i=0$ we need to restrict further to $(x,g)\in \mathfrak{g}^{\prime*\times n+1}\times G^{\times n+1}$ with $x_0\in\mathfrak{p}$, and assume that $u_0^*\in V_0^*$ is not only $M$-invariant but also a $H_{\mathbb{C}}$-weight vector, say of weight $-\lambda_0$. In this case $Ad_{s_0}^*(x_0)=y_0\in\mathfrak{a}_+^*$ and hence
\[
u^*_0(s_0e^{t_0\nabla\mathcal{H}(x_0)}g_0s_1^{-1}v_0)=e^{t_0\lambda_0(\nabla\mathcal{H}(y_0))}u^*_{0}(s_0g_0s_1^{-1}v_0).
\]
As a consequence \eqref{trivialflow} then also holds true for $i=0$, and the logarithm of $f_{u,v}(x,g)$ becomes a linear function of time $t_0$.
\section{A Liouville integrable example of a periodic spin Calogero-Moser chain for orbits of rank $1$}\label{concl}
\subsection{} Let us briefly discuss a particular case of the periodic spin CM chain corresponding to $G=SL_N(\mathbb{R})$ with rank one orbits $\mathcal O_k$. This case is related to the original paper \cite{GH} where spin CM systems were first introduced. Take $\mathfrak{a}\subset\mathfrak{sl}_N$ the Cartan subalgebra consisting of diagonal matrices, and denote the roots by $\{\epsilon_i-\epsilon_j\}_{i\not=j}\subset\mathfrak{a}^*$ with $\epsilon_i\in\mathfrak{a}^*$ the linear functional picking out the $i^{th}$ diagonal entry. We identify $\mathfrak{sl}_N$ with its dual via the Killing form $(x,y)=2N\,\textup{Tr}(xy)$. Then for $p\in\mathfrak{a}^*\simeq\mathfrak{a}$ we have $(p,p)=2N\sum_{i=1}^Np_i^2$, with $p_i$ the $i^{th}$ diagonal entry of the diagonal matrix $p$. For $y\in\mathfrak{sl}_N^*\simeq\mathfrak{sl}_N$ and $i\not=j$ we have $y_{\epsilon_i-\epsilon_j}=\sqrt{2N}y_{ij}$, with $y_{ij}$ the $(i,j)^{th}$ entry of the matrix $y$. For $\xi\in\mathbb{R}$ set
\[
\mathcal O^{(\xi)}=\Bigl\{x-\frac{\xi}{N}\textup{id}_N\,\,\, | \,\,\, x \textup{ is a rank one $N\times N$ matrix with } \,\, \textup{Tr}(x)=\xi\,\Bigr\}.
\]
Then $\mathcal O^{(\xi)}$ is a coadjoint orbit in $\mathfrak{sl}_N\simeq\mathfrak{sl}_N^*$ of dimension $2(N-1)$.
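As a quick illustration of this dimension count, consider $N=2$ and $\xi\neq 0$. A rank one $2\times 2$ matrix $x$ with $\textup{Tr}(x)=\xi$ has eigenvalues $\xi$ and $0$, so $\mu=x-\frac{\xi}{2}\textup{id}_2$ is traceless with eigenvalues $\pm\frac{\xi}{2}$, and conversely every traceless matrix with these eigenvalues arises in this way. Hence
\[
\mathcal O^{(\xi)}=\Bigl\{\mu\in\mathfrak{sl}_2(\mathbb{R})\,\,\Big|\,\,\det(\mu)=-\frac{\xi^2}{4}\Bigr\},
\]
a smooth two-dimensional quadric in $\mathfrak{sl}_2(\mathbb{R})\simeq\mathbb{R}^3$, in agreement with $\dim(\mathcal O^{(\xi)})=2(N-1)=2$.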
Viewing elements in $\mathbb{R}^N$ as column vectors, we have a natural mapping
\begin{equation}\label{param}
\bigl\{(a,b)\in\mathbb{R}^N\times\mathbb{R}^N \,\, | \,\, a^tb=\xi\bigr\}/\mathbb{R}^\times\overset{\sim}{\longrightarrow}\mathcal O^{(\xi)},\qquad \mathbb{R}^\times(a,b)\mapsto ba^t-\frac{\xi}{N}\textup{id}_N
\end{equation}
where $\lambda\in\mathbb{R}^\times$ acts by $(a,b)\mapsto (\lambda a, \lambda^{-1}b)$ and $a^t$ is the transpose of $a\in\mathbb{R}^N$. Because of the rank one condition, this is an isomorphism. It is easy to check that this is a symplectomorphism, with the Poisson brackets of the coordinate functions $a_i$ and $b_j$ of $(a,b)\in\mathbb{R}^N\times\mathbb{R}^N$ given by
\[
\{b_i,a_{j}\}=\delta_{ij}, \ \ \{a_{i},a_{j}\}=0=\{b_i,b_j\}.
\]
The value of the quadratic Casimir function $y\mapsto \frac{(y,y)}{2N}=\sum_{i,j=1}^Ny_{ij}y_{ji}$ on $\mathcal{O}^{(\xi)}$ is easily computed using \eqref{param}:
\[
\sum_{i,j=1}^N\mu_{ij}\mu_{ji}=\xi^2\Bigl(1-\frac{1}{N}\Bigr),\qquad\qquad \mu\in\mathcal O^{(\xi)}.
\]
Here we use the notation $\mu=y|_{\mathcal O^{(\xi)}}$.
\subsection{}
The quadratic $n$-th Hamiltonian $H_2^{(n)}$ in radial coordinates, rescaled by a factor of $2N$, is then
\begin{equation}\label{Cas-Ham}
H_2=\frac{1}{2}\sum_{i=1}^Np_i^2-\sum_{i<j}\frac{\mu_{ij}\mu_{ji}}{2\textup{sh}^2(q_i-q_j)}
\end{equation}
(see \eqref{Hamper}), where $q_i=\epsilon_i(\log(a))$ and
\[
\mu_{ij}=\sum_{k=1}^n \mu_{ij}^{(k)}.
\]
We now consider the Hamiltonian \eqref{Cas-Ham} on $\mathcal{S}(\mathcal O)_{reg}\simeq (\nu_{\mathcal{O}}^{-1}(0)/H\times T^*A_{reg})/W$ (see \eqref{radred}) with $\mathcal O=(\mathcal O^{(\xi_1)},\ldots,\mathcal O^{(\xi_n)})$ a collection of $n$ rank one orbits. Here $H\subset\textup{SL}_N(\mathbb{R})$ is the Cartan subgroup of diagonal matrices and $\nu_{\mathcal O}^{-1}(0)\subset\mathcal O^{(\xi_1)}\times\cdots\times\mathcal O^{(\xi_n)}$ consists of the $n$-tuples of rank one matrices
\begin{equation}\label{abpar}
(\mu^{(1)},\ldots,\mu^{(n)})=\Bigl(b^{(1)}a^{(1)t}-\frac{\xi_1}{N}\textup{id}_N,\ldots,b^{(n)}a^{(n)t}-\frac{\xi_n}{N}\textup{id}_N\Bigr)
\end{equation}
where the diagonal action of $h\in H$ is given by $a_i^{(k)}\to h_ia_i^{(k)}$, $b_j^{(k)}\to h_j^{-1}b_j^{(k)}$, and the vectors $a^{(k)},b^{(k)}\in\mathbb{R}^N$ satisfy the relations
\begin{equation}\label{g-Casnew}
\sum_{i=1}^Na_i^{(k)}b_i^{(k)}=\xi_k\quad (1\leq k\leq n),\qquad \sum_{k=1}^na_i^{(k)}b_i^{(k)}=\frac{\bm{\xi}}{N}\quad (1\leq i\leq N).
\end{equation}
Here $\bm{\xi}=\sum_{k=1}^n\xi_k$. In other words, $\mu^{(k)}_{ij}=b_i^{(k)}a_j^{(k)}-\delta_{ij}\xi_k/N$, where $b_i^{(k)}, a_j^{(k)}$ are as above. In terms of the variables $a^{(k)}$ and $b^{(k)}$ the Hamiltonian $H_2$ on $\mathcal{S}(\mathcal O)_{reg}$ can be rewritten as
\begin{equation}\label{spin-CM-ab}
H_2=\frac{1}{2}\sum_{i=1}^Np_i^2-\sum_{i<j}\frac{\sum_{k,\ell=1}^nb_i^{(k)}a_j^{(k)}b_j^{(\ell)}a_i^{(\ell)}}{2\textup{sh}^2(q_i-q_j)}.
\end{equation}
\subsection{}
Here we will rewrite the Hamiltonian (\ref{spin-CM-ab}) in terms of variables attached to each $q_i$. It is natural to think of these variables as spin variables attached to a one dimensional particle with position $q_i$. They are defined as follows.
For $\xi\in \mathbb{R}$ denote by $\widetilde{\mathcal O}^{(\xi)}$ the rank one coadjoint $\textup{SL}_n(\mathbb{R})$-orbit defined as
\[
\widetilde{\mathcal O}^{(\xi)}=\Bigl\{x-\frac{\xi}{n}\textup{id}_n\,\,\, | \,\,\, x \textup{ is a rank one $n\times n$ matrix with}\,\, \textup{Tr}(x)=\xi\,\Bigr\},
\]
and set
\[
\widetilde{\mathcal O}:=\underbrace{(\widetilde{\mathcal O}^{(\bm{\xi}/N)},\ldots,\widetilde{\mathcal O}^{(\bm{\xi}/N)})}_N.
\]
The coadjoint action of the Cartan subgroup $\widetilde{H}\subset \textup{SL}_n(\mathbb{R})$ is Hamiltonian and gives the moment map
\[
\widetilde{\nu}_{\widetilde{\mathcal O}}: \underbrace{\widetilde{\mathcal O}^{(\bm{\xi}/N)}\times\cdots\times\widetilde{\mathcal O}^{(\bm{\xi}/N)}}_N\rightarrow\widetilde{\mathfrak{a}}^*,\qquad (g_1,\ldots,g_N)\mapsto\bigl(g_1+\cdots+g_N\bigr)_0
\]
where $\widetilde{\mathfrak{a}}=\textup{Lie}(\widetilde{H})$. Finally, consider the traceless diagonal $n\times n$-matrix
\[
t_{\underline{\xi}}=\textup{diag}\Bigl(\xi_1-\frac{\bm{\xi}}{n},\ldots,\xi_n-\frac{\bm{\xi}}{n}\Bigr)\in\widetilde{\mathfrak{a}}.
\]
From the above we immediately have the following statement.
\begin{lemma}
We have the following isomorphism of $2(n-1)(N-1)$-dimensional symplectic varieties
\begin{equation*}
\begin{split}
\nu_{\mathcal O}^{-1}(0)/H&\overset{\sim}{\longrightarrow}\widetilde{\nu}_{\widetilde{\mathcal O}}^{-1}(t_{\underline{\xi}})/\widetilde{H},\\
H(\mu^{(1)},\ldots,\mu^{(n)})&\mapsto \widetilde{H}(g^{(1)},\ldots,g^{(N)}).
\end{split}
\end{equation*}
Here $\mu^{(k)}_{ij}=b_i^{(k)}a_j^{(k)}-\delta_{ij}\xi_k/N$ and the local spin variables $g^{(i)}_{k\ell}$ are
\[
g_{k\ell}^{(i)}=b_i^{(k)}a_i^{(\ell)}-\delta_{k\ell}\frac{\bm{\xi}}{Nn}.
\]
\end{lemma}
It is easy to check that if $i\neq j$ the following identity holds:
\begin{equation}\label{g-spin}
\sum_{k,\ell=1}^n g^{(i)}_{k\ell}g_{\ell k}^{(j)}=\mu_{ij}\mu_{ji}-\frac{\bm{\xi}^2}{N^2n}.
\end{equation}
Thus, we can rewrite the Hamiltonian (\ref{Cas-Ham}) in terms of spin variables from $\widetilde{\nu}_{\widetilde{\mathcal O}}^{-1}(t_{\underline{\xi}})/\widetilde{H}\times T^*A_{reg}$ as
\begin{equation}\label{spin-spin-CM}
H_2=\frac{1}{2}\sum_{i=1}^Np_i^2-\sum_{i<j}\frac{\textup{Tr}(g^{(i)}g^{(j)})+\frac{\bm{\xi}^2}{N^2n}}{2\textup{sh}^2(q_i-q_j)}.
\end{equation}
This Hamiltonian describes $N$ classical particles each carrying a ``spin'' from a rank one coadjoint orbit in $\mathfrak{sl}_n^*$ with the Casimir value given by
\begin{equation}
\sum_{\alpha, \beta=1}^n (g^{(i)})_\beta^\alpha(g^{(i)})_\alpha^\beta=\frac{\bm{\xi}^2}{N^2}\Bigl(1-\frac{1}{n}\Bigr).
\end{equation}
The system is Liouville integrable since we constructed $n(N-1)$ integrals for the periodic spin chain earlier (see the proof of Theorem \ref{siperiodic}). The integrable systems described above are closely related to those of \cite{GH} and \cite{KBBT}.
\subsection{}
This project, together with the results of \cite{AR}, is the first step towards constructing superintegrable systems on moduli spaces of flat connections on a surface where on part of the boundary the gauge group $G$ is constrained to $K$. When the boundary gauge group is not constrained, corresponding integrable systems are described in \cite{AR}. We expect that such moduli spaces have the structure of a cluster variety similar to the one described in \cite{FG}. It would be interesting to extend the construction of spin CM chains to the elliptic case as it was done for $N=1$ in \cite{KBBT}.
{ "arxiv_id": "2302.14346", "language": "en", "timestamp": "2023-03-01T02:10:27", "url": "https://arxiv.org/abs/2302.14346", "yymm": "2302" }
\section{Amortized Clustering with Mixture of Gaussians}\label{app:clustering}
We additionally tested the proposed sampled attention on 2D set datasets in the encoding-decoding framework introduced by \citet{lee2019set}. The task is to use a neural network to infer the parameters of a mixture of Gaussians from the input set data. To begin with, a mixture of Gaussians is defined as a weighted sum of $k$ Gaussian distributions. Given a dataset $\bm X=\{ \bm x_1, \dotso, \bm x_n \}$, the log-likelihood of the Gaussian mixture is defined as follows:
\begin{align}\label{eq:log_likelihood}
\log p(\bm X; \bm \theta) = \sum_{i=1}^n \log \sum_{j=1}^k \pi_j \mathcal{N}(\bm x_i; \bm \mu_j, \text{diag}(\bm \sigma_j^2)).
\end{align}
Generally, the parameters of the Gaussian mixture are inferred by maximizing the log-likelihood $\theta^*(\bm X) = \argmax_\theta \log p(\bm X; \theta)$ using the Expectation-Maximisation (EM) algorithm, as a closed-form solution cannot be obtained directly by setting the gradient equal to zero. Here we instead use the transformer to infer $\theta^*(\bm X)$. Specifically, given the input, the neural network $f$ outputs the mixture parameters $f(\bm X)=\{ \pi(\bm X), \{ \mu_j(\bm X), \sigma_j(\bm X) \}_{j=1}^k \}$ by maximizing the log-likelihood in Eq.~\ref{eq:log_likelihood} (with all parameters replaced by functions of $\bm X$). The 2D set data $\bm X$ is randomly sampled from a given mixture of Gaussians with $k=4$, and the number of elements $n$ is randomly sampled from $[100, 500]$. Since the dimension of each Gaussian is set to 2, each sampled point can be viewed as a 2D data point, so the sampled collection forms a 2D set dataset. The baseline we compare with is the Set Transformer~\citep{lee2019set} with two Induced Set Attention Blocks (ISAB) in the encoder, and one Pooling by Multihead Attention (PMA) block and two Set Attention Blocks (SAB) in the decoder, as per \href{https://github.com/juho-lee/set_transformer}{the official implementation}. The inducing points refer to the additional learnable points $\bm I \in \mathbb{R}^{m\times d}$ proposed in Eq. 9 of \citep{lee2019set}, where $m$ is the number of inducing points and $d$ their feature dimension. Here we use the terms points, tokens, and elements interchangeably to refer to a single sampled data point $\bm x_i$. As the computational complexity of the inducing points block (ISAB) is $O(nm)$, our sampled attention may be adopted in the ISAB to reduce the computational complexity to $O(n)$. However, as the number of inducing points $m$ (regarded as the query in \citet{lee2019set}) is not equal to the number of input points $n$ (regarded as key and value), and in fact inducing points and input points have different physical meanings, our Hamiltonian cycle attention cannot be applied directly. Indeed, the dense attention matrix in the inducing points layer is $m\times n$ rather than $n\times n$. We instead applied a different version of sampled attention by randomly sampling two elements per row in the attention matrix. This is a loose version of sampled attention, as no Hamiltonian cycle is constructed. As we can see in Tab.~\ref{tab:clustering}, the sampled attention can serve as a plug-in replacement for the dense attention in the inducing points structure, with competitive performance and lower theoretical computational complexity.
\input{meta_clustering.tex}
\section{Preliminary}
\subsection{Notation}
Given an integer \( a \) we define \( [a] \doteq \{ 1, \ldots, a \} \).
For a matrix \( {\bm M} \in \mathbb{R}^{n \times m} \) and \( k \in [m] \), the \( k \)-th column is denoted by \( {\bm M}_{k} \). Given an (ordered) index set \( \mathcal{A} \subset [m] \), the submatrix \( {\bm M}_{\mathcal{A}} \in \mathbb{R}^{n \times \vert \mathcal{A} \vert} \) is the matrix generated by concatenating the columns determined by the indices in \( \mathcal{A} \). See the notation guide in \S\ref{app:notation} in the supplementary material.
\subsection{Transformer}\label{subsec:tf}
The transformer $\bm X \mapsto t(\bm X)$~\citep{vaswani2017attention,dosovitskiy2020image} implements a function from point clouds to point clouds with input points $\bm X \in \mathbb{R}^{d\times n}$. It is formally defined by a multi-head self-attention layer and a feed-forward layer:
\begin{subequations}
\begin{align}
\label{eq:dense_self_attn}
&\text{Head}^j(\bm X) = (\bm W^j_V \bm X) \cdot \sigma_S [ (\bm W^j_K \bm X)^T \bm W^j_Q \bm X]\\
&\text{Attn}(\bm X) = \bm X + \bm W_O \begin{bmatrix} \text{Head}^1(\bm X) \\ \vdots \\ \text{Head}^h(\bm X) \end{bmatrix}\\
&\text{TB}(\bm X) = \text{Attn}(\bm X) + \bm W_2 \cdot \text{ReLU}(\bm W_1 \text{Attn}(\bm X)),
\end{align}%
\label{eq:dense_tf}%
\end{subequations}%
where $n$ is the number of points and $d$ is the feature dimension. $\text{Head}(\cdot)$ is the \textbf{self-attention layer}, and $\text{Attn}(\cdot)$ is the \textbf{multi-head self-attention layer} with the parameter $\bm W_O \in \mathbb{R}^{d\times mh}$. $\bm W^i_V, \bm W^i_K, \bm W^i_Q \in \mathbb{R}^{m\times d}$ are the value, key, and query parameters; $\bm W_1 \in \mathbb{R}^{r\times d}$ and $\bm W_2 \in \mathbb{R}^{d\times r}$ are the feed-forward layer parameters. We utilize a positional embedding $\bm E$ in the input $\bm X$, defined by $\bm E = \bm W_p \bm P$, where $\bm P \in \mathbb{R}^{3\times n}$ contains the ($xyz$) coordinates and $\bm W_p \in \mathbb{R}^{d\times 3}$ is an MLP layer. To simplify the notation, we write $\bm X$ for $\bm X + \bm E$, so that all the inputs $\bm X$ in this paper include the positional embedding unless specifically stated otherwise. The \textbf{attention mechanism} for a dense transformer is the $n\times n$ attention matrix $(\bm W^i_K \bm X)^T \bm W^i_Q \bm X$ in Eq.~\ref{eq:dense_self_attn}, which is in fact a similarity matrix for the $n$ elements/tokens, or a \textbf{complete attention graph}. \textbf{Sparse Attention} also refers to the same similarity matrix/attention graph but with sparse connections instead. As tokenization may not be necessary when dealing with point clouds, for clarity we use the terms points, elements, and tokens interchangeably to refer to points (which may be thought of as tokens in a traditional transformer context) in a point cloud (set).
\subsection{Universal Approximation}\label{subsec:ua}
Let $\mathcal{F}$ be the class of continuous sequence-to-sequence functions $f: \mathbb{R}^{d\times n} \mapsto \mathbb{R}^{d\times n}$ defined on any compact domain. Further define $\mathcal{T}^{h,m,r}$ as the set of transformer blocks $t(\cdot)$ with $h$ attention heads each of size $m$, and with hidden layer width $r$~\citep{yun2019transformers,yun2020n}. To measure the distance between functions in \( \mathcal{F} \), we define the standard \( \ell_{p} \) distance function by the corresponding norm:
\begin{align}
d_p(f_1, f_2) = \left( \int \Vert f_1(\bm X) - f_2(\bm X) \Vert^p_p \; \mathrm{d}\bm X \right)^{1/p},
\end{align}
which is element-wise continuous (w.r.t.\ the $\ell_p$ norm) for $1\leq p < \infty$.
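For reference, before turning to the approximation guarantee, the following minimal NumPy sketch spells out the dense transformer block of Eq.~\ref{eq:dense_tf}; all shapes follow \S\ref{subsec:tf}, and the dimensions and random weights below are purely illustrative choices rather than settings used in our experiments.
\begin{verbatim}
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(X, heads, W_O, W_1, W_2):
    """One dense transformer block TB(X) as defined above; X has shape (d, n)."""
    outs = []
    for (W_V, W_K, W_Q) in heads:                      # each W_* has shape (m, d)
        A = softmax((W_K @ X).T @ (W_Q @ X), axis=0)   # (n, n) dense attention matrix
        outs.append((W_V @ X) @ A)                     # (m, n) head output
    attn = X + W_O @ np.concatenate(outs, axis=0)      # multi-head attention + residual
    return attn + W_2 @ np.maximum(W_1 @ attn, 0.0)    # feed-forward (ReLU) + residual

# tiny usage example with random weights
rng = np.random.default_rng(0)
d, n, m, h, r = 8, 5, 4, 2, 16
heads = [tuple(rng.standard_normal((m, d)) for _ in range(3)) for _ in range(h)]
Y = transformer_block(rng.standard_normal((d, n)), heads,
                      rng.standard_normal((d, m * h)),
                      rng.standard_normal((r, d)), rng.standard_normal((d, r)))
print(Y.shape)  # (8, 5)
\end{verbatim}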
\begin{theorem}[Universal Approximation,~\citet{yun2019transformers}]\label{theorem:ua}
Let $1 \leq p < \infty$ and $\epsilon > 0$, then for any given $f\in \mathcal{F}$, there exists a Transformer network $g\in \mathcal{T}^{2,1,4}$, such that $d_p(f, g) \leq \epsilon$.
\end{theorem}
The proof of Theorem~\ref{theorem:ua} makes three stages of approximations, which are chained together via the triangle inequality to give the \( \epsilon \) bound \citep{yun2019transformers}. In particular, \ding{172} any \( f \in \mathcal{F} \) is approximated by a piece-wise constant function \( \overline{f} \in \overline{\mathcal{F}} \) (over a discretized input space). Then \ding{173} the piece-wise constant function is approximated by a \emph{modified} transformer \( \overline{\mathcal{T}}^{2,1,4} \), where the widely used \( \text{ReLU} \) and \( \sigma_{S} \) activation functions (as per Eq.~\ref{eq:dense_tf}) are replaced by the hardmax function \( \sigma_{H} \). Finally, \ding{174} it is shown that any modified transformer in \( \overline{\mathcal{T}}^{2,1,4} \) can be approximated by a regular transformer \( g \in {\mathcal{T}}^{2,1,4} \). The key step comes in the proof of the second approximation \ding{173}. In \citet{yun2019transformers}, the approximation is proved by showing that the multi-head self-attention layers of the modified transformer can implement a contextual map \( q_c: \mathbb{R}^{d\times n} \mapsto \mathbb{R}^{n} \).
\begin{definition}[Contextual Mapping]\label{def:contextual_mapping}
Consider a finite set $\mathbb{L} \subset \mathbb{R}^{d\times n}$. A map $q: \mathbb{L} \mapsto \mathbb{R}^{1\times n}$ defines a contextual map if the map satisfies the following:
\begin{enumerate}
\item For any $\bm L \in \mathbb{L}$, the $n$ entries in $q(\bm L)$ are all distinct.\label{def:contextual_mapping_rule1}
\item For any $\bm L, \bm L' \in \mathbb{L}$, with $\bm L \neq \bm L'$, all entries of $q(\bm L)$ and $q(\bm L')$ are distinct.\label{def:contextual_mapping_rule2}
\end{enumerate}
\end{definition}
Intuitively, a contextual map can be thought of as a function that outputs unique ``id-values''. The only way for a token (column) in \( \bm {L} \in \mathbb{R}^{d\times n} \) to share an ``id-value'' (element of \( q(\bm {L}) \)) is for the two inputs to be the exact same sequence. As each token in the sequence is mapped to a unique value, an appropriately constructed feed-forward neural network can map a sequence to any other desired sequence, providing a universal approximation guarantee. In \citet{yun2020n}, such a contextual map is implemented via \emph{selective shift operators} and \emph{all-max-shift operators} through careful construction of multi-head self-attention layers.
\section{Conclusion}
In this paper, we present an $O(n)$ complexity sparse transformer -- the \textit{sampled transformer} -- which directly handles point set data. By relating the permutation of set elements to the sampling of Hamiltonian cycle attention, we relieve the model of inappropriate permutation variance. The result is a sampled attention scheme that implements a Monte Carlo simulation to approximate a dense attention layer with a prohibitive $O(n^2)$ number of connections. To guarantee the representation power of the proposed sampled transformer, we showed that it is a universal approximator of set-to-set functions. Motivated also by the strong empirical performance that our model achieves, we hope this work will help to shed light on the use of sparse transformers for set data.
\section{Experiments}\label{sec:exp}
We evaluate our proposed sampled attention in popular transformer-based frameworks as well as in basic settings. To begin with, we compare our sampled attention (Fig.~\ref{fig:sampled}) with dense attention via the pre-training and fine-tuning framework~\citep{yu2022point,pang2022masked}, where we pre-train our model on ShapeNet~\citep{chang2015shapenet} via the reconstruction task, and further evaluate the performance on three downstream fine-tuning tasks: classification, transfer learning, and few-shot learning on ModelNet40~\citep{wu20153d} or ScanObjectNN~\citep{uy2019revisiting}. In addition, to eliminate the influence of other factors, we compare the dense, sparse, sampled, and $k$NN attention (Definition~\ref{def:knn_atten}), together with other sparse transformers such as Inducing Points \citep{lee2019set} and the Stratified strategy \citep{lai2022stratified}, in a basic classification setting consisting of a transformer block with a single attention layer for feature aggregation. Further, we compare the sampled attention with the $k$NN attention in the hierarchical grouping and merging structure following the Point-Transformer~\citep{zhao2021point}. Finally, we test the proposed sampled attention on the 2D set datasets introduced by \citet{lee2019set}.
\input{transfer_learning.tex}
\subsection{Comparison on the Pre-training and Fine-tuning Framework}\label{subsec:bert}
\paragraph{Pre-training.} We adopted the masked auto-encoder (MAE)~\citep{he2022masked} to process the point cloud data (denoted as MAE-dense) for pre-training, which is close to Point-MAE~\citep{pang2022masked}. Note that MAE-dense adopts dense-attention layers in its encoder and decoder networks. To evaluate the effectiveness of our claimed contribution, we replace the dense-attention layers in MAE-dense with our sampled-attention layers (Fig.~\ref{fig:sampled}) while keeping the other components fixed. The resulting model is denoted MAE-sampled.
\input{few_shot.tex}
To pre-train MAE-dense and MAE-sampled, we first follow the standard train-test split of ShapeNet~\citep{chang2015shapenet} adopted by \citet{pang2022masked,yu2022point}. Further, Furthest Point Sampling (FPS) and nearest neighbour search were adopted in the tokenization step~\citep{yu2022point}, which means each input point cloud consisting of 1024 points was divided into 64 groups/tokens of 32 points each. Tokens were further mapped to 256-dimensional latent vectors by MLP layers and max-pooling. In addition, we have 12 stacked transformer blocks in the encoder (masking ratio of 70\%) and a single transformer block in the decoder, both with $h=8$, $d=32$ and $r=256$. The batch size is 64 and the number of epochs is 300. We used the AdamW~\citep{loshchilov2017decoupled} optimizer with cosine learning rate decay~\citep{loshchilov2016sgdr}, an initial learning rate of 0.0005, and weight decay of 0.05.
\paragraph{Classification} The pre-trained MAE-dense and MAE-sampled models are first evaluated on the classification task on ModelNet40 \citep{wu20153d}. Specifically, we build the classifier by keeping the encoder structure and weights of the pre-trained MAE-dense and MAE-sampled models, followed by max-pooling as well as fully connected layers of dimensions $[256, 256, 40]$ to map the 256-dimensional global token to the 40 categories. Similar to~\citet{yu2022point}, we further augment the point cloud training set via random scaling and translation during training.
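For convenience, the pre-training and classification fine-tuning settings stated above can be collected into the following illustrative configuration sketch; the field names are ours and do not correspond to any released code.
\begin{verbatim}
# Illustrative summary of the settings stated above; names are ours, not an official config.
pretrain_config = dict(
    dataset="ShapeNet", points_per_cloud=1024,
    num_groups=64, group_size=32, token_dim=256,    # FPS + nearest-neighbour tokenization
    encoder_blocks=12, decoder_blocks=1, mask_ratio=0.70,
    heads=8, head_dim=32, ffn_width=256,            # h, d, r
    optimizer="AdamW", lr=5e-4, weight_decay=0.05,
    lr_schedule="cosine", batch_size=64, epochs=300,
)
finetune_cls_config = dict(
    dataset="ModelNet40", classifier_dims=[256, 256, 40],   # max-pool + fully connected head
    augmentations=["random_scaling", "random_translation"],
)
\end{verbatim}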
As shown in Tab.~\ref{tab:classification}, the proposed method achieved the second-best performance compared with the most recent state-of-the-art methods. Our sampled attention achieves an accuracy improvement of \( 0.1\% \) when compared to dense attention, while reducing the complexity from $O(n^2)$ to $O(n)$.
\paragraph{Transfer Learning} We additionally include transfer learning as a fine-tuning classification task, implemented on the ScanObjectNN~\citep{uy2019revisiting} dataset with 2902 point clouds from 15 categories. We follow the data pre-processing and fine-tuning setting from Point-BERT~\citep{yu2022point} with the same three variants: OBJ-BG, OBJ-ONLY, and PB-T50-RS. As we can see in Tab.~\ref{tab:transfer_learning}, our sampled attention is competitive with dense attention and reaches state-of-the-art performance.
\paragraph{Few Shot Learning} The pre-trained MAE-dense and MAE-sampled models are finally evaluated on a few-shot learning task. Following \citet{sharma2020self,wang2021unsupervised,yu2022point,pang2022masked}, the few-shot learning adopts a $k$-way, $m$-shot training setting on the ModelNet40~\citep{wu20153d} dataset, where $k$ represents the number of randomly sampled classes and $m$ the number of randomly sampled examples per class. The testing split consists of 20 randomly sampled unseen examples from each class. We set $k\in\{5, 10\}$ and $m\in\{10,20\}$, and report the mean accuracy with standard deviation over 10 independent experiments. As shown in Tab.~\ref{tab:few_shot}, our proposed MAE-sampled outperformed all state-of-the-art methods on 3 out of 4 settings, and consistently outperformed MAE-dense.
\subsection{Comparison on the Basic Classification Setting}\label{subsec:basic_setting}
\input{abl_single_layer.tex}
Our inputs are clouds of $n$ points with 3D coordinates as positions and their normals as features. The features and positions are first transformed by two separate MLP layers with hidden dimensions $[64, 256]$, and then added together as the input of a single-layer transformer with $h=8$, $r=256$, and $d=32$, as per Eq.~\ref{eq:dense_tf} and Eq.~\ref{eq:sampled_tf}. The transformer output in $\mathbb{R}^{n\times 256}$ is then summarized by max-pooling to obtain a global feature of dimension 256, followed by a fully connected layer to map it to the category vector. We tested this basic pipeline with $n \in \{256, 512, 768, 1024, 2048, 3072, 4096, 8192\}$ for each of the dense, sparse, $k$NN, and proposed sampled attention layers, as well as Inducing Points \citep{lee2019set} and Stratified \citep{lai2022stratified}, including an additional case without an attention layer (MLP + fully connected layer) as the baseline. We additionally include the memory usage of the different attention layers in Tab.~\ref{tab:memory} in \S~\ref{app:memory_usage}. As shown in Tab.~\ref{tab:single_layer} and Tab.~\ref{tab:memory}, the model with dense attention layers achieves the best performance as it considers all $O(n^2)$ connections directly with relatively few parameters to train. However, it runs out of the 24 gigabytes of memory when the number of points $n \geq 3072$, due to the quadratic complexity. While both the sparse and sampled transformers have a computational complexity of $O(n)$, our model with sampled attention outperformed the sparse one, in line with the strong theoretical guarantees we provide.
We conjecture that the improvements of the sampled transformer over the sparse transformer may indicate that the additional randomness (randomly shuffling points, w/o attention) leads to a better approximation of the $O(n^2)$ connections, in a manner analogous to Dropout~\citep{srivastava2014dropout}. In addition, the transformer with $k$NN attention layers has the worst performance, as the permutation cannot extend its receptive field. Finally, the proposed sampled attention layer also outperforms existing point-cloud-oriented sparse attentions, such as Inducing Points \citep{lee2019set} and Stratified \citep{lai2022stratified}. Details of the comparison can be found in \S~\ref{app:inducting} and \S~\ref{app:stratified}, respectively.
\subsection{Comparison on the Hierarchical Transformer Structure}
\input{hier_tf.tex}
We further compare our sampled attention with the $k$NN attention by adopting the hierarchical structure for the classification task under the framework of \cite{zhao2021point}. Each hierarchical layer is obtained by FPS, followed by nearest neighbour search for the grouping, using MLPs with max-pooling for feature merging, and transformers for feature mapping. The grouping stage within each hierarchical layer summarizes the point cloud into key (subset) points. The total number of hierarchical layers is $t=5$, for which we chose the numbers $k$ of nearest neighbours \{8, 16, 16, 16, 16\}, strides \{4, 4, 4, 4, 4\}, self-attention feature dimensions \{32, 64, 128, 256, 512\}, and numbers of transformer blocks \{2, 3, 4, 6, 3\}. The scalar attention (Eq.~\ref{eq:dense_tf} or Eq.~\ref{eq:sampled_tf}) is adopted specifically for this comparison. Results shown in Tab.~\ref{tab:hier_tf} demonstrate that our sampled attention outperforms the $k$NN attention, in line with our randomly sampled receptive field. Furthermore, the performance of the $k$NN layer improved greatly from $t=1$ to $t=2$ and from $t=2$ to $t=3$, as its receptive field extends due to the multiple hierarchical layers. Finally, $k$NN with vector attention~\citep{yu2022point} (reported in Tab.~\ref{tab:classification} on the PointTransformer row) achieved a better performance, in line with the observation that replacing the softmax with learnable MLPs $\gamma$ in the transformer makes it easier for $k$NN attention to be a universal approximator of continuous functions. Detailed analysis is provided in \S\ref{app:ua_spth} in the supplementary material. The performance difference between scalar attention and vector attention is shown in Tab. 7 of \cite{yu2022point}, and is also analyzed in \cite{yun2020n}.
\section{Amortized Clustering}
We test the proposed sampled attention on 2D set datasets in the encoding-decoding framework introduced by \citet{lee2019set}. The task is to use a neural network to infer the parameters of a mixture of Gaussians from the input set data. As we can see in Tab.~\ref{tab:clustering}, the sampled attention can serve as a plug-in replacement for the dense attention in the inducing points structure, with competitive performance and lower theoretical computational complexity. Detailed implementation and comparison can be found in \S~\ref{app:clustering}.
\section{Inductive Bias}\label{app:inductive_bias}
We use inductive bias to refer to the prior knowledge and design built into a machine learning model.
Loosely speaking, more inductive bias may yield better performance on specific tasks, while less inductive bias may offer better generalisation ability (meaning, for example, wider applicability to different tasks and frameworks) and fewer hyperparameters that need to be tuned. In this paper, our initial research goal is to have an efficient and permutation-invariant transformer for point sets/clouds. Both nearest neighbour search and inducing points are good designs, as both models are efficient and permutation invariant. However, to implement the nearest neighbour search, one must introduce the hyperparameter for the number of neighbours and choose a definition of token-to-token distance. Further, inducing points introduce additional parameters (the inducing points themselves), which means additional backpropagation computations. A close inspection of \citet{lee2019set} reveals a number of other non-trivial design choices. In contrast to, e.g., the nearest neighbour based approaches, our random permutation-based attention involves fewer intuition-guided assumptions and fewer additional hyperparameter choices.
\section{Introduction}
Encoding structured data has become a focal point of modern machine learning. In recent years, the de facto choice has been to use transformer architectures for sequence data, \emph{e.g.}, in language~\citep{vaswani2017attention} and image~\citep{dosovitskiy2020image} processing pipelines. Indeed, transformers have not only shown strong empirical results, but have also been proven to be universal approximators for sequence-to-sequence functions~\citep{yun2019transformers}. Although the standard transformer is a natural choice for set data due to its permutation invariant dense attention, its versatility is limited by the costly \(O(n^2)\) computational complexity. To decrease the cost, a common trick is to use sparse attention, which reduces the complexity from $O(n^2)$ to $O(n)$~\citep{guo2019star,yun2020n,zaheer2020big}. However, in general this results in an attention mechanism that is not permutation invariant -- swapping two set elements changes which elements they attend to. As a result, sparse attention cannot be directly used for set data. Recent work has explored the representation power of transformers on point sets as a plug-in module~\citep{lee2019set}, in a pretraining-finetuning pipeline~\citep{yu2022point,pang2022masked}, and with a hierarchical structure~\citep{zhao2021point}. However, these set transformers introduced additional inductive biases to (theoretically) approach the same performance as the densely connected case in language and image processing applications. Here inductive bias refers to the prior knowledge and design built into a machine learning model. For example, to achieve permutation invariance with efficient computational complexity, previous work has required additional inductive bias such as nearest neighbour search~\citep{zhao2021point} or inducing points sampling~\citep{lee2019set}. A detailed discussion can be found in \S\ref{app:inductive_bias} in the supplementary material.
Following the above analysis, a research question naturally arises concerning how to avoid introducing unneeded inductive bias:
\begin{displayquote}
\textit{Can $O(n)$ complexity sparse attention mechanisms be applied directly to sets?}
\end{displayquote}
\input{diagram.tex}
We propose the sampled transformer to address this question, which is distinguished from the original sparse transformer by mapping the permutation of set elements to the permutation of attention matrix elements. Viewing this permutation sampling as attention matrix sampling, the proposed sampled attention approximates \( O(n^2) \) dense attention. This is achieved with the proposed random element sampling and Hamiltonian self-attention. To be specific, in random element sampling the input point set is first randomly split into several subsets of $n_s$ points (Fig.~\ref{fig:after_sampling}), each of which is processed by shared self-attention layers. In addition, a sparse attention mechanism -- namely \textit{Hamiltonian self-attention} (Fig.~\ref{fig:Hamiltonian}) -- is applied to reduce the complexity for the subset inputs, so that $n_s$ point connections are sampled from the $O(n_s^2)$ possible connections. The combination of the Hamiltonian self-attention mechanisms for all subsets -- namely \textit{cycle attention} (Fig.~\ref{fig:cycle}) -- can be viewed as a Hamiltonian cycle in the complete attention graph. As a result, the permutation of set elements is equivalent to the permutation of nodes in a Hamiltonian cycle (Fig.~\ref{fig:swap}), which amounts to randomly sampling Hamiltonian cycles from the complete graph -- thereby yielding the proposed \textit{sampled attention} (Fig.~\ref{fig:sampled}). Finally, viewing this randomization as a Monte Carlo sample of attention pairs, repeated sampling can be used to approximate the complete \( O(n^2) \) dense connections. Furthermore, our proposed sampled transformer is proven to be a universal approximator for set data -- meaning that any continuous set-to-set function can be approximated to arbitrary precision. The contributions of this paper are summarized as follows.
\begin{itemize}
\item We propose the sampled attention mechanism which maps the random permutation of set elements to the random sampling of Hamiltonian cycle attention matrices, permitting the direct processing of point sets.
\item We prove that the proposed sampled transformer is a universal approximator of continuous set-to-set functions, see Corollary~\ref{thm:our_ua}.
\item Compared to previous transformer architectures, the empirical results show that our proposed sampled transformer achieves comparable (or better) performance with less inductive bias and complexity.
\end{itemize}
\section{Methodology}
We propose a variation of the sparse attention transformer -- the sampled sparse attention transformer -- applicable to point sets. We deviate from the typical sparse attention transformer in two ways. First, we randomly sub-sample the input point set \( l \) times, with each sub-sample being evaluated through a shared multi-head self-attention layer. Second, we propose a simple Hamiltonian self-attention mechanism, a special case of the sparse attention mechanism, to reduce the computational complexity of processing point sets. This ultimately yields a variant of the typical transformer (Eq.~\ref{eq:dense_tf}) which can be interpreted as using a \emph{sampled attention} mechanism, as depicted in Fig.~\ref{fig:diagram}.
To study the approximation capabilities of our proposed architecture, we prove that our sampled sparse attention transformer is a universal approximator of set-to-set functions.
\subsection{Random Element Sampling}
For a point set input \( \bm X \in \mathbb{R}^{d \times n} \), instead of directly applying the transformer attention layer to $n$ tokens, we process \( l \) many sub-sampled inputs \( {\bm {\mathsf{X}}}^{i} \in \mathbb{R}^{d \times n_{s}} \) for \( i \in [l] \) and \( 2 \leq n_{s} \leq n \). For simplicity, we assume that \( (n_{s} - 1) \cdot l = n \). The sub-sampled inputs \( {\bm {\mathsf{X}}}^{i} \) can be defined by taking various column submatrices:
\begin{align}\label{eq:random_element_sampling}
&{\bm {\mathsf{X}}}^{i} = \bm X_{\mathcal{R}^{i} \cup \mathcal{R}_{1}^{\gamma(i)}}; \quad \gamma(v) = 1 + (v \mod l),
\end{align}
where \( \mathcal{R}^{1}, \ldots \mathcal{R}^{l} \) are randomly selected ordered index sets, such that \( \vert \mathcal{R}^{i} \vert = n_{s} - 1 \) and \( \mathcal{R}^{i} \cap \mathcal{R}^{j} = \emptyset \) for \( i \neq j \). The index element \( \mathcal{R}_{1}^{i} \) denotes the first index in the ordered set \( \mathcal{R}^{i} \). The \emph{cycle} function \( \gamma: [l] \rightarrow [l] \) ensures that the edge-case of \( {\bm {\mathsf{X}}}^{l} \) is well defined, \emph{i.e.}, \( \gamma(l) = 1 \). Intuitively, the sequence of sub-sampled inputs \( {\bm {\mathsf{X}}}^{1}, \ldots, {\bm {\mathsf{X}}}^{l} \) can be interpreted as a rolling window of \( (n_{s} - 1) \cdot l = n \) many sampled point set elements. Indeed, by concatenating the index sets in order, \( {\bm {\mathsf{X}}}^{i} \) is a sliding window over the elements with size \( n_{s} \) and stride \( n_{s} - 1 \) (with wrapping). It should be noted that \( {\bm {\mathsf{X}}}^{i} \) can be treated as a random variable. As such, a single realization of the sampled elements can be viewed as a Monte Carlo sample over the set of ordered point sequences~\citep{metropolis1949monte}. Computationally, by applying a dense self-attention layer to each of the sub-sampled elements \( {\bm {\mathsf{X}}}^{i} \), the total complexity of evaluating the \( l \) self-attention layers is \( O(l \cdot n_{s}^{2}) \). We note, however, that the \( l \) self-attention layers can be evaluated in parallel, which yields a trade-off between the individual self-attention complexity \( O(n_s^{2}) \) and computation time. To gain intuition, consider the ``limiting behaviours'' of our random element sampling: taking \( n_{s} = n + 1 \) can be interpreted as taking the whole sequence with \( l = 1 \), \emph{i.e.}, \( {\bm {\mathsf{X}}}^{1} = \bm X \), which under dense attention would result in complexity \( O(n^2) \). On the other end, if we take \( n_{s} = 2 \), we get \( l = n \) pairs of points \( \vert {\bm {\mathsf{X}}}^i \vert = 2 \); processing every such pair with dense self-attention results in \( n \) many \( O(1) \) self-attention evaluations. Random element sampling with dense attention layers can be interpreted as an instance of sparse attention, see Fig.~\ref{fig:after_sampling}.
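As a concrete, purely illustrative sketch of Eq.~\ref{eq:random_element_sampling}, the following Python snippet builds the random index sets \( \mathcal{R}^{i} \) and the corresponding sub-sampled inputs \( {\bm {\mathsf{X}}}^{i} \); the function and variable names are ours and do not refer to a released implementation.
\begin{verbatim}
import numpy as np

def random_element_sampling(X, n_s, rng):
    """Split X of shape (d, n), with (n_s - 1) * l = n, into l overlapping subsets."""
    d, n = X.shape
    assert n % (n_s - 1) == 0
    l = n // (n_s - 1)
    perm = rng.permutation(n)                   # random, disjoint ordered index sets R^i
    R = [perm[i * (n_s - 1):(i + 1) * (n_s - 1)] for i in range(l)]
    subsets = []
    for i in range(l):
        shared = R[(i + 1) % l][0]              # first index of R^{gamma(i)}; gamma closes the cycle
        idx = np.concatenate([R[i], [shared]])  # |idx| = n_s
        subsets.append(X[:, idx])
    return subsets, R

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 12))                # toy point set with d = 3 and n = 12
subsets, R = random_element_sampling(X, n_s=4, rng=rng)
print(len(subsets), subsets[0].shape)           # 4 subsets, each of shape (3, 4)
\end{verbatim}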
\subsection{Hamiltonian Self-Attention}
The random element sampling discussed in the previous section reduces the computational complexity of dense self-attention layers from \( O(n^2) \) to \( O(l \cdot n_{s}^2) = O(n^2 / l) \) (as \( (n_{s} - 1) \cdot l = n \)) by processing each sampled set of points \( {\bm {\mathsf{X}}}^{i} \) through individual self-attention layers. Despite this improved computational complexity, the quadratic scaling can still be costly for point clouds. As such, instead of evaluating each sampled element \( {\bm {\mathsf{X}}}^{1}, \ldots, {\bm {\mathsf{X}}}^{l} \) with a dense self-attention layer, we propose a sparse attention layer. Sparse attention mechanisms can be formally defined via the attention patterns \( \{ \mathcal{A}_{k} \}_{k \in [n_{s}]} \), where \( j \in \mathcal{A}_{k} \) implies that the \( k \)-th token will attend to the \( j \)-th token. We propose the use of an attention mechanism, dubbed \emph{Hamiltonian self-attention}, which is defined by the following attention patterns:
\begin{align}\label{eq:attention_pattern}
\mathcal{A}_k =
\begin{cases}
\{k, k + 1\} & \textrm{if } 1 \leq k < n_s \\
\{ k\} & \textrm{if } k=n_s
\end{cases},
\end{align}
which ensures that the set of attention patterns \( \{ \mathcal{A}_{k} \}_{k \in [n_{s}]} \) defines a \emph{Hamiltonian path}. Indeed, if we fix a subset of elements \( {\bm {\mathsf{X}}}^{i} \), by starting at \( {\bm {\mathsf{X}}}^{i}_{1} \) and following the attended elements (ignoring self-attention \( k \in \mathcal{A}_{k} \)), we visit every token exactly once. Fig.~\ref{fig:Hamiltonian} shows the corresponding attention matrix, where the Hamiltonian path corresponds to the off-diagonal elements and self-attention corresponds to the diagonal elements. For Hamiltonian self-attention, computing the attention mechanism according to Eq.~\ref{eq:attention_pattern} only requires \( 2n_{s} = O(n_{s}) \) many evaluations. Thus, by using our proposed sparse attention for each \( {\bm {\mathsf{X}}}^{1}, \ldots, {\bm {\mathsf{X}}}^{l} \), in comparison to dense attention, the computational complexity per subset reduces from \( O(n^{2} / l^{2}) \) to \( O(n / l) \). The proposed Hamiltonian self-attention mechanism is rather simple and general. For instance, in the general case sparsity patterns can be defined for each individual layer (resulting in an additional superscript for each \( \mathcal{A}_{k} \)).
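To make the pattern in Eq.~\ref{eq:attention_pattern} concrete, the following small sketch (illustrative only) constructs the corresponding binary mask, where entry $(k, j)$ is one exactly when query $k$ may attend to key $j$:
\begin{verbatim}
import numpy as np

def hamiltonian_mask(n_s):
    """Binary mask M with M[k, j] = 1 iff j is in A_k: self edges plus a path 1 -> 2 -> ... -> n_s."""
    M = np.eye(n_s, dtype=int)      # self-attention, k in A_k
    for k in range(n_s - 1):
        M[k, k + 1] = 1             # token k attends to token k + 1 (the Hamiltonian path)
    return M

print(hamiltonian_mask(4))
# [[1 1 0 0]
#  [0 1 1 0]
#  [0 0 1 1]
#  [0 0 0 1]]
\end{verbatim}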
Despite this simplicity, the attention patterns \( \{ \mathcal{A}_{k} \}_{k \in [n_{s}]} \) satisfy the key assumptions needed to prove that the attention pattern will result in a sparse transformer that is a universal approximator~\citep[Assumption 1]{yun2020n}. In particular, by stacking \( (n_{s} - 1) \) many attention layers, our Hamiltonian self-attention allows any element to directly or indirectly attend to all other elements in \( {\bm {\mathsf{X}}}^{i} \). The proposed Hamiltonian self-attention can also be viewed as a special case of the window attention in \citet{zaheer2020big}, where neighbouring elements are linked in an undirected fashion.
\subsection{Sampled Sparse Attention Transformer}\label{sec:sampled_attention}
Given the setup of random element sampling and Hamiltonian self-attention, we can define our proposed sampled transformer for continuous set-to-set function approximation:
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
&\text{SHead}_k^{j}({\bm {\mathsf{X}}}^i) = (\bm W^j_V {\bm {\mathsf{X}}}^{i}_{\mathcal{A}_k}) \cdot \sigma_S [ (\bm W^j_K {\bm {\mathsf{X}}}^{i}_{\mathcal{A}_k})^T \bm W^j_Q {\bm {\mathsf{X}}}^i_k]\\\label{eq:g_func}
&g^i ({\bm {\mathsf{X}}}^i) = {\bm {\mathsf{X}}}^i + \bm W_O \begin{bmatrix} \text{SHead}^{1}({\bm {\mathsf{X}}}^i) \\ \vdots \\ \text{SHead}^{h}({\bm {\mathsf{X}}}^i) \end{bmatrix}\\\label{eq:sampled_attn}
&\text{SAttn}(\bm X) = g^{l}({\bm {\mathsf{X}}}^l) \circ g^{l-1}({\bm {\mathsf{X}}}^{l-1}) \circ \cdots \circ g^1({\bm {\mathsf{X}}}^1)\\\label{seq:stacked_sampled_attn}
&\text{STB}(\bm X) = \text{SAttn}(\bm X) + \bm W_2 \cdot \text{ReLU}\left(\bm W_1 \text{SAttn}(\bm X)\right).
\end{align}%
\label{eq:sampled_tf}%
\end{subequations}%
\endgroup
In Eq.~\ref{eq:sampled_attn}, the composition is w.r.t.\ the induced linear maps from the matrices given by Eq.~\ref{eq:g_func}. The learnable parameters of the sampled transformer are the same as those of the usual dense transformer in Eq.~\ref{eq:dense_tf}. As the attention pattern of each \( {\bm {\mathsf{X}}}^{i} \) forms a Hamiltonian path, and each \( {\bm {\mathsf{X}}}^{i} \) shares an element with the succeeding \( {\bm {\mathsf{X}}}^{\gamma(i)} \), the joint attention map forms a \textit{Hamiltonian cycle}. In other words, the shared index \( \mathcal{R}_{1}^{\gamma(i)} \) in Eq.~\ref{eq:random_element_sampling} links the individual Hamiltonian paths given by Eq.~\ref{eq:attention_pattern}, leading the attention matrix to form a \textit{cycle attention} as shown in Fig.~\ref{fig:cycle}. Furthermore, the permutation of elements in cycle attention corresponds to the swapping of nodes in the Hamiltonian cycle, with the corresponding links and element values in the attention matrix swapped accordingly, see Fig.~\ref{fig:swap}. As a result, the combined randomization from using random element sampling and Hamiltonian self-attention can be thought of as sampling a Hamiltonian cycle graph from the complete attention graph, resulting in the \textit{sampled attention} depicted in Fig.~\ref{fig:sampled}. Unlike dense attention, sparse attention patterns are not generally permutation invariant. Indeed, if we permute the columns of \( {\bm {\mathsf{X}}}^{i} \), the elements attended according to \( \{ \mathcal{A}_{k} \}_{k \in [n_{s}]} \) are not the same. As such, applying \( \{ \mathcal{A}_{k} \}_{k \in [n_{s}]} \) directly to \( \bm X \) is not valid for point clouds, which require a permutation invariant operation.
However, in our case the sparse attention heads are applied to \emph{randomized} sub-sampled element sets \( {\bm {\mathsf{X}}}^{i} \). Ignoring computation, if we continue to sample the randomized elements \( {\bm {\mathsf{X}}}^{i} \) and average the resulting attention (w.r.t.\ the entire point set \( \bm X \)), the attention will converge to dense attention -- through the randomization of \( {\bm {\mathsf{X}}}^{i} \), the event that any non-self-edge appears in a sampled attention graph (as per Eq.~\ref{eq:attention_pattern}) is equiprobable. This also holds when fixing the order of elements while applying randomly sampled Hamiltonian cycle attention. As such, the sampled transformer can be used to approximate a permutation invariant operator, and thus be used to approximate set-to-set functions. Of course, sampling sufficiently many realizations of Hamiltonian cycle attention to converge to dense attention is impractical. Instead, in practice, we re-sample the attention pattern only for each batch and epoch. Although this may seem like a crude approximation to dense attention, similar methods are successful in Dropout~\citep{srivastava2014dropout}, which even induces desirable model regularization. Furthermore, our empirical results indicate that sampled sparse attention closely approximates the more expensive (and infeasible at typical point set scales) dense attention.
\subsection{Sampled Transformer as a Universal Approximator}
We formally guarantee the representation power of the proposed sampled transformer by proving universal approximation for set-to-set functions. As our sampled transformer (Eq.~\ref{eq:sampled_attn}) is similar to the dense/sparse transformers presented by \citet{yun2019transformers,yun2020n}, we follow their framework (Sec.~\ref{subsec:ua}) to prove our universal approximation property.
\begin{corollary}[Sampled Transformer is a Universal Approximator]\label{thm:our_ua}
There exist sampled (sparse) Transformers that are universal approximators in the sense of Theorem~\ref{theorem:ua}.
\end{corollary}%
To prove our Corollary, we extend the proofs of \citet{yun2019transformers,yun2020n} by showing that our sparse attention mechanism with random element sampling can also implement a selective shift operator. As a result, we show that the proposed sampled sparse attention transformer is a universal approximator in the context of set-to-set functions. See \S\ref{app:ua} in the supplementary material for the full proof of the universal approximation property.
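The equiprobability claim in \S\ref{sec:sampled_attention} can also be checked with a small, self-contained simulation: repeatedly sampling a random permutation, forming the corresponding Hamiltonian cycle attention graph, and averaging the adjacency matrices places (up to sampling noise) the same weight on every non-self edge. The sketch below is purely illustrative and is independent of our training code.
\begin{verbatim}
import numpy as np

def sampled_cycle_adjacency(n, rng):
    """Adjacency of one sampled Hamiltonian cycle attention graph over n points (with self edges)."""
    order = rng.permutation(n)
    A = np.eye(n)
    for a, b in zip(order, np.roll(order, -1)):   # consecutive points in the random cycle
        A[a, b] = 1
    return A

rng = np.random.default_rng(0)
n, trials = 6, 20000
avg = sum(sampled_cycle_adjacency(n, rng) for _ in range(trials)) / trials
off_diag = avg[~np.eye(n, dtype=bool)]
print(off_diag.min(), off_diag.max())   # both close to 1 / (n - 1): every non-self edge is equiprobable
\end{verbatim}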
\section{Notations} \label{app:notation} \bgroup \def1.5{1.4} \begin{tabular}{p{1in}p{4in}} $\displaystyle f$ & a continuous function\\ $\displaystyle g$ & transformer\\ $\displaystyle \overline{g}$ & modified transformer\\ $\displaystyle \mathcal{F}$ & the class of continuous sequence-to-sequence function\\ $\displaystyle \mathcal{F}_S$ & the class of continuous set-to-set function\\ $\displaystyle \overline{\mathcal{F}}$ & the class of piece-wise constant sequence-to-sequence function\\ $\displaystyle \overline{\mathcal{F}}_S$ & the class of piece-wise constant set-to-set function\\ $\displaystyle \mathcal{T}^{h,m,r}$ & the class of (sparse) transformers with $h$ attention heads, $m$ head size, and hidden layer width $r$\\ $\displaystyle \overline{\mathcal{T}}^{h,m,r}$ & the class of the modified transformers with $h$ attention heads, $m$ head size, and hidden layer width $r$\\ $\displaystyle \sigma_S$ & softmax activation\\ $\displaystyle \sigma_H$ & hardmax activation\\ $\displaystyle \ell_p$ & p norm\\ \\ $\displaystyle \mathbb{G}_\delta$ & grid $\{0, \delta, \dotso, 1-\delta\}^{d\times n}$\\ $\displaystyle \mathbb{G}^{+}_\delta$ & extend grid $\{-\delta^{-nd}, 0, \delta, \dotso, 1-\delta\}^{d\times n}$\\ \\ $\displaystyle n$ & number of points/elements/tokens\\ $\displaystyle d$ & point/element/token feature size\\ $\displaystyle m$ & head size\\ $\displaystyle h$ & heads number\\ $\displaystyle r$ & hidden layer width\\ $\displaystyle \delta$ & step size\\ \\ $\displaystyle \bm X$ & transformer input\\ $\displaystyle {\bm {\mathsf{X}}}^i$ & $i$-th subset of transformer input\\ $\displaystyle \bm P$ & $xyz$ coordinates for point cloud (set)\\ $\displaystyle \bm E$ & positional embedding\\ $\displaystyle \bm L$ & quantized transformer input\\ $\displaystyle \bm A_{\bm L}$ & desired output for the input $\bm L$\\ $\displaystyle \bm W_V^i$ & value parameter in $i$-th single-head attention layer\\ $\displaystyle \bm W_K^i$ & key parameter in $i$-th single-head attention layer\\ $\displaystyle \bm W_Q^i$ & query parameter in $i$-th single-head attention layer\\ $\displaystyle \bm W_O$ & multi-head attention parameter\\ $\displaystyle \bm W_1$ & feed-forward layer parameter\\ $\displaystyle \bm W_2$ & feed-forward layer parameter\\ $\displaystyle \bm W_p$ & parameter for position embedding\\ \end{tabular} \egroup \bgroup \def1.5{1.4} \begin{tabular}{p{1in}p{4in}} \\ $\displaystyle \bm u$ & query, key, and value parameter used in universal approximation proof\\ $\displaystyle \bm e^{(1)}$ & indicator vector $(1,0,0,\dotso,0)\in \mathbb{R}^d$\\ $\displaystyle \bm 1_n$ & vector with all ones $(1,\dotso,1)\in \mathbb{R}^n$\\ $\displaystyle \bm 0_n$ & vector with all zeros $(0,\dotso,0)\in \mathbb{R}^n$\\ \\ $\displaystyle \text{Head}^i(\cdot)$ & $i$-th single-head attention layer\\ $\displaystyle \text{SHead}^i(\cdot)$ & $i$-th sparse/sampled single-head attention layer\\ $\displaystyle \text{Attn}(\cdot)$ & multi-head attention layer\\ $\displaystyle \text{SAttn}(\cdot)$ & multi-head attention layer with sampled sparse attention\\ $\displaystyle \text{TB}(\cdot)$ & transformer block\\ $\displaystyle \text{STB}(\cdot)$ & sampled transformer block\\ $\displaystyle t(\cdot)$ & a series of any number of transformer blocks\\ $\displaystyle q_c(\cdot)$ & contextual mapping\\ $\displaystyle \bm \Psi(\cdot;b_Q, b'_Q)$ & selective shift operation\\ $\displaystyle \psi(\cdot;b_Q)$ & a single-head attention in selective shift operation\\ $\displaystyle d_c(\cdot, \cdot)$ & distance between two 
functions\\ \end{tabular} \egroup \section{Universal Approximator Proof}\label{app:ua} A proof of Corollary~\ref{thm:our_ua} follows the steps described in \S~\ref{subsec:ua}. As we only changed the dense/sparse attention to the sampled attention, the steps \ding{172} and \ding{174} in \S~\ref{subsec:ua} remain the same as in \cite{yun2019transformers,yun2020n} and can be found in \S C and F of \cite{yun2020n}. Here we need only cover the proof of step \ding{173}. First, recall that $\mathcal{F}_S(\cdot)$ is the class of continuous set-to-set functions, and $\overline{\mathcal{F}}_S(\cdot)$ is the class of piece-wise constant set-to-set functions. \begin{lemma}[Modified Universal Approximation.]\label{prop:modified_ua} For each $\overline{f} \in \overline{\mathcal{F}}_S(\delta)$ and $1 \leq q < \infty$, $\exists \overline{g} \in \overline{\mathcal{T}}^{2,1,1}$ such that $\overline{f}(\bm X)=\overline{g}(\bm X)$ for all $\bm X \in \mathbb{D}$. \end{lemma} Without loss of generality, here $\mathbb{D} = [0, 1)^{d\times n}$. As in \citep{yun2019transformers, yun2020n}, the proof of Lemma~\ref{prop:modified_ua} can be separated into four steps: \begin{enumerate} \item Use the positional embedding $\bm E$ in \S~\ref{subsec:tf} such that the columns of the input, $\bm X_k + \bm E_k$, lie in disjoint intervals. \item The input $\bm X + \bm E$ is quantized into $\bm L$ with values in $\{0, \delta, \dotso, n-\delta\}$ by a series of modified feed-forward layers. \item The \textit{contextual mapping q} defined in Definition~\ref{def:contextual_mapping} is implemented by a series of modified sampled multi-head self-attention layers (modified version of Eq.~\ref{eq:sampled_attn}) with input $\bm L$. \item Another series of modified feed-forward layers implements the \textit{value mapping} such that each element in the unique id $q(\bm L)$ is mapped to the desired output $\bm A_{\bm X}$. \end{enumerate} As the modified feed-forward layers are the same as in \cite{yun2020n}, the definition and proof of step 2 are available in \S D.2 and E.1 of \citep{yun2020n}, while those of step 4 can be found in \S D.4 and E.3 of \citep{yun2020n}. Here we mainly explain steps 1 and 3. \subsection{Positional Embedding} The positional input for a point set is its $xyz$ coordinates $\bm P \in \mathbb{R}^{3\times n}$. We adopt a matrix $\bm W_p\in \mathbb{R}^{d\times 3}$ (a permutation invariant operation) such that the input of the sampled transformer will be $\bm X + \bm E = \bm X + \bm W_p {\bm P}$. There exists a choice of $\bm E$ such that: \begin{align} \bm E_1 = (n-1)\bm 1_d, \text{ and } \bm E_i = (i-2)\bm 1_d, \text{ for } i \in [2:n]. \end{align} In this case, the first column will be $(\bm X + \bm E)_1 \in [n-1, n)^d$, and $(\bm X + \bm E)_i \in [i-2, i-1)^d$ for $i\in [2:n]$. So the requirement of step 1, that each column lies in a disjoint interval, is satisfied. \subsection{Contextual Mapping for Stacked Multi-Head Self-Attention Layers} After step 2, the quantized input $\bm L$ will be in the set $\mathbb{H}_\delta \subset \mathbb{R}^{d\times n}$, such that: \begin{align} \mathbb{H}_\delta := \{ \bm G + \bm E \in \mathbb{R}^{d\times n} \vert \bm G \in \mathbb{G}_\delta\}, \end{align} with $\mathbb{G}_\delta := \{0, \delta, \dotso, 1-\delta\}^{d\times n}$. Then the adaptive selective shift operation $\Psi$ is defined so that the learnable parameter $\bm u \in \mathbb{R}^{d}$ can map $\bm u^T\Psi(\bm L)$ to unique scalars (ids).
Finally, with the help of the all-max-shift operation $\Omega$, the output of a series of those two operations will be a scalar in disjoint intervals w.r.t each column of $\bm L$, as well as different inputs $\bm L$ and $\bm L'$, thereby implementing the contextual mapping in Definition.~\ref{def:contextual_mapping}. \paragraph{Adaptive Selective Shift Operation.} With a 2 heads and 1 hidden layer width modified multi-heads attention layer, the adaptive selective shift operation $\Psi(\cdot)$ may be defined as: \begin{subequations} \begin{align} \Psi^l(\bm L^l;c, b_Q, b'_Q) :&= \bm L^l + c[\bm 1^1_{n_s} -\bm 1^1_{n_s}] \begin{bmatrix} \psi^l(\bm L^l; b_Q) \\ \psi^l(\bm L^l; b'_Q) \end{bmatrix}\\\label{eq:selective_op} \psi^l(\bm L^l; b_Q)_k &= \bm u^T \bm L_{\mathcal{A}_k^l} \sigma_H \left[(\bm u^T \bm L_{\mathcal{A}_k^l})^T(\bm u^T \bm L^l_k -b_Q)\right] \nonumber\\ &= \begin{cases} \max_{j\in \mathcal{A}^l_k} \bm u^T \bm L^l_j &\text{if } \bm u^T \bm L^l_k > b_Q\\ \min_{j\in \mathcal{A}^l_k} \bm u^T \bm L^l_j &\text{if } \bm u^T \bm L^l_k < b_Q, \end{cases} \end{align} \end{subequations} where we assign query, key, and value parameters as $\bm u^T$, and we introduced the superscript $l$ to denote different attention layers of self-attention layer $l$. With the help of hardmax, the $k$-th row of the attention matrix will be one-hot vectors to select the max or min vector in $\mathcal{A}^l_k$. $\bm W_O=c[\bm 1^1_{n_s} -\bm 1^1_{n_s}] \in \mathbb{R}^{n_s\times 2}$ is used to make sure only the first element in feature dimension are changed in selective shift operation. Specifically, the $1,k$-entity of the self-attention output reads: \begin{align} \Psi^l(\bm L^l; c, b_Q, b'_Q)_{1,k} &= \overline{L}^l_{1,k} + c\left( \psi^l(\bm L^l; b_Q)_k - \psi^l(\bm L^l; b'_Q)_k \right)\\ &= \begin{cases} \overline{L}^l_{1,k} + c\left( \max_{j\in \mathcal{A}^l_k} \bm u^T \bm L^l_j - \min_{j\in \mathcal{A}^l_k} \bm u^T \bm L^l_j \right) & \text{if } b_Q < \bm u^T \bm L^l_k < b'_Q,\\ \overline{L}^l_{1,k} & \text{if } \bm u^T \bm L^l_k \notin [b_Q, b'_Q]. \end{cases} \end{align} Without loss of generality, the sampled transformer in \S.~\ref{sec:sampled_attention} may be viewed as a series of stacked masked attention $\mathcal{A}^i$ for $i\in [n]$, such that: \begin{subequations}\label{eq:selective_attn} \begin{align} &\mathcal{A}^i_{k =1+(i-2+n \mod n)} = \{ i, 1+ (i-2 + n \mod n) \} \\ &\mathcal{A}^i_{k =i} = \{ i \} \\ &\mathcal{A}^i_{k \not\subset\{ i,i-1 \mod n \}} = \{\}, \end{align} \end{subequations} for $k\in [n]$. This is in fact the $n$ point pairs in the Hamiltonian cycle. So the stack of all the masked attention is the cycle attention in Fig.~\ref{fig:cycle} reflected across the diagonal line. Then the Eq.~\ref{seq:stacked_sampled_attn} will be \begin{align} \text{SAttn} (\bm L) = g^n (\bm L_{\mathcal{A}_n}) \circ g^{n-1} (\bm L_{\mathcal{A}_{n-1}}) \circ \cdots \circ g^1 (\bm L_{\mathcal{A}_1}), \end{align} noting that the updated column for previous $g^i$ will be applied to the next $g^{i+1}$. In conclusion, the contextual mapping holds as the masked attention $\mathcal{A}^i$ is designed to aggregate information from all $n$ elements / tokens by applying the $g(\cdot)$ about $O(n)$ times, which matches the design of \citep{yun2020n}. Now consider $\bm u^T = (1, \delta^{-1}, \delta^{-2}, \dotso, \delta^{-d+1})$, the mapping $l_i = p_s({ \bm L}_i) = \bm u^T \bm L_i$ is bijective as all input point features $\bm L_i$ are different with at least one element having a gap of $\delta$. 
In addition, without loss of generality, the order $l_2<l_3<\dotso<l_n<l_1$ holds as in \citep{yun2020n} because of the positional embedding $\bm E$. Further, as each $l_i$ has $\delta^{-d}$ intervals, and as the $n$ tokens are disjoint with each other, we need $n\delta^-d$ adaptive selective operations to achieve the bijective mapping of unique ids. \paragraph{First $\delta^{-d}$ selective shift operations.} The first $\delta^{-d}$ layers are all applied to the second column (token) within $l_2 \in \left[ 0: \delta: \delta^{-d+1} - \delta \right]$, and each selective shift operation will match one interval within $b_Q = b-\frac{\delta}{2}, b'_Q = b+\frac{\delta}{2}$ for $b \in \left[ 0: \delta: \delta^{-d+1} - \delta \right]$. Also $\mathcal{A}^2$ is in fact $\mathcal{A}^2_1 = \{ 1 \}$, $\mathcal{A}^2_2 = \{ 1, 2 \}$, and is empty otherwise. So all $\delta^{-d}$ layers are only applied on the first two token embeddings, then the maximum value is $l_1$ and the minimum value is $l_2$. We have the output after those selective shift operations: \begin{align} \tilde{l}_2 = l_2 + \delta^{-d} (\max_{j\in \mathcal{A}^1_2} l_j - \min_{j\in \mathcal{A}^1_2} l_j) = l_2 + \delta^{-d} (l_1 - l_2), \end{align} where with constant value $c=\delta^{-d}$ in Eq.~\ref{eq:selective_op}. Note that $\tilde{l}_2 > l_1$ because \begin{align} l_2 + \delta^{-d} (l_1 - l_2) > l_1 \Leftrightarrow (\delta^{-d} - 1)(l_1 - l_2) > 0, \end{align} which is true. So the current order becomes $l_3 < l_4 < \dotso < l_n < l_1 < \tilde{l}_2$. So in the next $\delta^{-d}$ selective shift operations, the maximum value will be $\tilde{l}_2$ and the minimum will be $l_3$. \paragraph{Second $\delta^{-d}$ selective shift operations.} The next $\delta^{-d}$ layers will be applied on the third column (token embedding) within intervals $l_3 \in \left[ \sum_{i=0}^{d-1} \delta^{-i}: \delta: \sum_{i=0}^{d-1} \delta^{-i} + \delta^{-d+1} - \delta \right]$ which results in \begin{align} \tilde{l}_3 = l_3 + \delta^{-d}(\tilde{l}_2-l_3) = l_3 + \delta^{-d}(l_2-l_3) + \delta^{-2d}(l_1-l_2), \end{align} which is again $\tilde{l}_3 > \tilde{l}_2$ because \begin{align} l_3 + \delta^{-d} (\tilde{l}_2 - l_3) > \tilde{l}_2 \Leftrightarrow (\delta^{-d} - 1)(\tilde{l}_2 - l_3) > 0. \end{align} So we have a new maximum $\tilde{l}_3$ and new minimum $l_4$. \paragraph{Repeat after $(n-1)\delta^{-d}$ operations.} The next $\delta^{-d}$ will operate on the fourth column. After all $(n-1)\delta^{-d}$ operations we have \begin{align} (n-1)\sum_{i=0}^{d-1}\delta^{-i} \leq l_1 < \tilde{l}_2 < \dotso < \tilde{l}_n. \end{align} For $j$-th column, we will have the output \begin{subequations} \begin{align} &\tilde{l}_1 = l_1,\\ &\tilde{l}_2 = l_2 + \delta^{-d}(l_1 - l_2),\\ &\tilde{l}_j = l_j + \sum^{j-2}_{k=1}\delta^{-kd} (l_{j-k}-l_{j-k+1}) + \delta^{-(j-1)d}(l_1 - l_2). \end{align} \end{subequations} And we also know the interval of each $l_i$ \begin{align} &l_1 \in [(n-1)\Delta:\delta:(n-1)\Delta+\delta^{-d+1}-\delta]\\ &l_i \in [(i-2)\Delta:\delta:(i-2)\Delta+\delta^{-d+1}-\delta], \end{align} with $\delta^{-d+1}-\delta<\Delta:=\sum_{i=0}^{d-1}\delta^{-i}=\frac{\delta^{-d}-1}{\delta^{-1}-1} \leq \delta^{-d}-1 \Rightarrow 0 < \delta \leq \frac{1}{2}$. So we have \begin{align} &l_1 - l_2 \in [(n-1)\Delta - \delta^{-d+1} + \delta:\delta:(n-1)\Delta + \delta^{-d+1} - \delta]\\ &l_i - l_{i+1} \in [-\Delta-\delta^{-d+1}+\delta:\delta:-\Delta+\delta^{-d+1}-\delta] \text{ for } i \in \{ 2, 3, \dotso, n-1 \}. 
\end{align} Then the interval of outputs are \begin{align} &\tilde{l}_1 \in [(n-1)\Delta, (n-1)\Delta+\delta^{-d+1}-\delta]\\ &\tilde{l}_2 \in [(n-1)\Delta \delta^{-d}-\delta^{-2d+1}+\delta^{-d+1}, (n-1)\Delta \delta^{-d}+\delta^{-2d+1}-\delta]\\ &\tilde{l}_i \in [(i-2)\Delta - \sum_{k=1}^{i-2}\delta^{-kd} \Delta - \sum_{k=1}^{i-2}\delta^{-kd} (\delta^{-d+1}-\delta)+\delta^{-(i-1)d}(n-1)\Delta-\delta^{-(i-1)d}(\delta^{-d+1}-\delta),\nonumber \\ & \qquad (i-2)\Delta +\delta^{-d+1}-\delta - \sum_{k=1}^{i-2}\delta^{-kd} \Delta \nonumber\\ & \qquad \qquad+ \sum_{k=1}^{i-2}\delta^{-kd}(\delta^{-d+1}-\delta ) + \delta^{-(i-1)d}(n-1)\Delta+\delta^{-(i-1)d}(\delta^{-d+1}-\delta) ], \end{align} and to check whether intervals are disjoint or not, we take the difference between the lower bound of $\tilde{l}_{i+1}$ and the upper bound of $\tilde{l}_i$ \begin{align} \tilde{l}_{i+1}^{l} - \tilde{l}_i^{u} &= \Delta - \delta^{-(i-1)d} \Delta + (\delta^{-id} - \delta^{-(i-1)d})(n-1)\Delta - (\delta^{-d+1}-\delta) \\ &\quad - \delta^{-(i-1)d}(\delta^{-d+1}-\delta) - 2\sum_{k=1}^{i-2}\delta^{-kd}(\delta^{-d+1}-\delta)\\ &\quad - \delta^{-id}(\delta^{-d+1}-\delta) - \delta^{-(i-1)d}(\delta^{-d+1}-\delta) \\ &= \left[ 1 - n \delta^{-(i-1)d} + (n-1)\delta^{-id} \right] \Delta \nonumber\\ &\quad- \left( \frac{1+\delta^{-d}}{1-\delta^{-d}} - \frac{2\delta^{-d}}{1-\delta^{-d}} \delta^{-(i-2)d} + 2\delta^{-(i-1)d} + \delta^{-id} \right)(\delta^{-d+1}-\delta)\\ &\geq \left[ \frac{2\delta^{-d}}{\delta^{-d}-1} - \frac{2\delta^{-d}}{\delta^{-d}-1}\delta^{-(i-2)d} - (n+2)\delta^{-(i-1)d} + (n-2)\delta^{-id} \right] (\delta^{-d+1}-\delta)\\ &\geq \delta^{-(i-2)d} \left[ -\frac{2\delta^{-d}}{\delta^{-d}-1} - (n+2)\delta^{-d} + (n-2)\delta^{-2d} \right] (\delta^{-d+1}-\delta)\\ &\geq \delta^{-(i-2)d} \left[ -4- (n+2)\delta^{-d} + (n-2)\delta^{-2d} \right] (\delta^{-d+1}-\delta), \end{align} which is not guaranteed to be above 0, so the addition operations should be introduced. Further, the adaptive shift operation is a one-to-one map as the map $\bm L_k \mapsto \bm u^T \bm L_k$ is one-to-one, and the permutation of columns is one-to-one, and so it sufficies to prove that the map $[l_1 \cdots l_n] \mapsto \tilde{l}_k$ is also one-to-one. See the detailed analysis in \S E.2.3 in \citep{yun2020n}. \paragraph{Preliminaries.} As in \citep{yun2020n}, the upper bound for the unique id $\tilde{l}_i$ is: \begin{align} \tilde{l}_i &:= l_i + \sum_{j=1}^{i-2} \delta^{-jd} (l_{i-j} - l_{i+1-j}) + \delta^{-(i-1)d}(l_1 - l_2) \nonumber\\ &\leq l_i + \delta^{-d}\sum_{j=1}^{i-2}(l_{i-j}-l_{i+1-j}) + \delta^{-(i-1)d}(l_1 - l_2) \nonumber\\ &= l_i + \delta^{-d}(l_2-l_i) + \delta^{-(i-1)d}(l_1 - l_2) \nonumber\\ &=\delta^{-(i-1)d}l_1 - (\delta^{-(i-1)d}-\delta^{-d})l_2 - (\delta^{-d}-1)l_i\\ &\leq \delta^{-(i-1)d} l_1 \leq \delta^{-(i-1)d}\left( (n-1)\Delta + \delta^{-d+1} -\delta \right)\\ &\leq \delta^{-(i-1)d} (i-1+\delta)(\delta^{-d}-1) \leq n\delta^{-id} - \delta.\label{eq:upper_bound_selective_op} \end{align} Similarly, we have \begin{align} l_n \leq n\delta^{-nd} - \delta. 
\end{align} Also, for any $n\geq1$, we have \begin{align}\label{eq:ineq_for_allmax} \left( \frac{2n+1}{2n} \right) \leq \left( \frac{2n+1}{2n} \right)^2 \leq \cdots \leq \left( \frac{2n+1}{2n} \right)^n \leq 2 \end{align} \paragraph{All-max-shift operations.} Following \citep{yun2020n}, to make the interval between $l_k$ are disjoint with each other, the all-max-shift operation $\Omega^l: \mathbb{R}^{d\times n} \to \mathbb{R}^{d\times n}$ is a self-attention layer defined as follows: \begin{align} \Omega^l(\bm L; c) = \bm L + c\bm e^{(1)} \psi^l(\bm L; 0). \end{align} The $(1, k)$-th entry of $\Omega^l(\bm Z; c)$ reads \begin{align} \Omega^l(\bm L; c)_{1,k} = L_{1,k} + c\psi^l(\bm L; 0)_k = L_{1,k}+c \max_{j\in \mathcal{A}^l_k}\bm u^T \bm L_j.\label{eq:all_max} \end{align} The main idea of all-max-shift operation is that, in the $i$-th layer, we will 'replace' the current 'column' by the maximum column within reach of sparse attention pattern $\mathcal{A}^i$. In the next layer, the shifted max column will again be 'replaced' by the new maximum value within reach of the shifted column. After $n$ steps or layers, all the first elements of each column will be replaced by the one in the maximum column, which is the dominated value. The steps within the dominated element are greater than the intervals of the whole $l_n$. So, for two different inputs $\bm L$, they $n$ entries are distinct, and the requirement \ref{def:contextual_mapping_rule2} in Definition~\ref{def:contextual_mapping} satisfied. Without loss of generality, in contrast with the case of the cycle attention Eq.~\ref{eq:selective_attn} in the adaptive selective operation, the case of the stacked sampled attention is the same as in Fig.~\ref{fig:cycle}, with $l=1$. \paragraph{First layer of all-max-shift.} The input of the first all-max-shift operation is $\tilde{\bm L} \in \mathbb{R}^{d\times n}$. Recall that $\bm u^T\tilde{\bm L} = [l_1, \tilde{l}_2, \tilde{l}_3, \dotso, \tilde{l}_n]$ and each element is $0<l_1<\tilde{l}_2<\tilde{l}_3<\dotso<\tilde{l}_n<n\delta^{-nd}-\delta$. The last inequality holds as in Eq.~\ref{eq:upper_bound_selective_op}. Let the output of the first layers be $\bm M^1$. The $k$-th element in the first row reads \begin{align} M^1_{1,k} := \tilde{L}_{1,k} + 2n^2 \delta^{-nd-1} \max_{j\in \mathcal{A}^1_k}\bm u^T \tilde{\bm L}_j = \tilde{L}_{1,k} + 2n^2 \delta^{-nd-1} \bm u^T \tilde{\bm L}_{k+1 \mod n}, \end{align} where with constant value $c=2n^2 \delta^{-nd-1}$ in Eq.~\ref{eq:all_max}, and for each column we will have \begin{align}\label{eq:first_all_max_shift} \bm u^T \bm M^1_k = \bm u^T \tilde{\bm L}_k + 2n^2 \delta^{-nd-1} \bm u^T \tilde{\bm L}_{k+1\mod n}, \end{align} as the first element of $\bm u$ is 1. Next, we see that $\bm u^T \bm M^1_k$ is dominated by the right term $2n^2 \delta^{-nd-1} \bm u^T \tilde{\bm L}_{k+1\mod n}$, which is defined by for any $k, k' \in [n]$, \begin{align} \bm u^T \tilde{\bm L}_{k+1\mod n} < \bm u^T \tilde{\bm L}_{k'+1\mod n} \Rightarrow \bm u^T \bm M_k < \bm u^T \bm M_{k'}. 
\end{align} This is because the minimum gap between $\bm u^T \tilde{\bm L}_{k+1}$ is $\delta$, and we have \begin{align} \bm u^T \tilde{\bm L}_k < n\delta^{-nd} < 2n^2\delta^{-nd-1} \cdot \delta, \end{align} so if we have $\bm u^T \tilde{\bm L}_{k+1\mod n} < \bm u^T \tilde{\bm L}_{k'+1\mod n}$, it determines the order $\bm u^T \bm M_k < \bm u^T \bm M_{k'}$, because $\bm u^T \tilde{\bm L}_k$ is within the minimum gap of the right term of Eq.~\ref{eq:first_all_max_shift}, and so cannot change the overall value. \paragraph{Second layer of all-max-shift.} As in the first layer, we define the output of this layer as $\bm M^2$, and the $k$-th element in the first row reads \begin{align} M^2_{1,k} := M^1_{1,k} + 2n^2 \delta^{-nd-1} \max_{j\in \mathcal{A}^2_k}\bm u^T \bm M^1_{j} = M^1_{1,k} + 2n^2 \delta^{-nd-1} \bm u^T \bm M^1_{k+1\mod n}, \end{align} so for each column, we have \begin{align} \bm u^T \bm M^2_k &= \bm u^T \bm M^1_k + 2n^2 \delta^{-nd-1} \bm u^T \bm M^1_{k+1\mod n}\nonumber\\ &= \bm u^T \tilde{\bm L}_k + 2n^2 \delta^{-nd-1} \bm u^T \tilde{\bm L}_{k+1\mod n} \nonumber\\ & \quad + 2n^2 \delta^{-nd-1}(\bm u^T \tilde{\bm L}_{k+1\mod n} + 2n^2 \delta^{-nd-1} \bm u^T \tilde{\bm L}_{k+2\mod n})\nonumber\\ &= \bm u^T \tilde{\bm L}_k + 4n^2 \delta^{-nd-1} \bm u^T \tilde{\bm L}_{k+1\mod n} + (2n^2 \delta^{-nd-1})^2 \bm u^T \tilde{\bm L}_{k+2\mod n}. \end{align} The last term dominates $\bm u^T \bm M^2_k$, because the minimum gap of $\bm u^T \tilde{\bm L}_{k+2\mod n}$ is at least $\delta$, and \begin{align} &\bm u^T \bm M^2_k - (2n^2 \delta^{-nd-1})^2 \bm u^T \tilde{\bm L}_{k+2\mod n} = \bm u^T \tilde{\bm L}_k + 4n^2 \delta^{-nd-1} \bm u^T \tilde{\bm L}_{k+1\mod n}\nonumber \\ & <(1+4n^2\delta^{-nd-1})n\delta^{-nd} \leq (1+4n)n^2\delta^{-2nd-1} \leq (2n^2 \delta^{-nd-1})^2 \cdot \delta. \end{align} The last inequality holds due to \begin{align} \left(\frac{1+2n}{2n}\right)^2 \leq 2 \Leftrightarrow 1+4n \leq 4n^2, \end{align} from Eq.~\ref{eq:ineq_for_allmax}. \paragraph{Repeat all-max-shifts.} After all $n$ layers we get $\bm M^n$, and $\bm u^T \bm M^n_k $ is dominated by \begin{align} (2n^2\delta^{-nd-1})^n \max_{j\in \mathcal{A}^n_k}\bm u^T \tilde{\bm L}_j = (2n^2\delta^{-nd-1})^n \tilde{l}_n. \end{align} This is because the remaining terms in $\bm u^T\bm M^n_k$ have the strict upper bound \begin{align} \bm u^T \bm M^n_k - (2n^2\delta^{-nd-1})^n \tilde{l}_n &< \left( \sum_{i=0}^{n-1} \begin{pmatrix} n\\ i \end{pmatrix} (2n^2\delta^{-nd-1})^i \right)n\delta^{-nd} \\ &\leq \left( \sum_{i=0}^{n-1} \begin{pmatrix} n\\ i \end{pmatrix} (2n)^i \right) (n\delta^{-nd-1})^{n-1} n\delta^{-nd}\\ &=\left( (1+2n)^n - (2n)^n \right)(n\delta^{-nd-1})^n\cdot \delta \leq (2n^2\delta^{-nd-1})^n\cdot \delta. \end{align} The last inequality uses $(1+2n)^n - (2n)^n \leq (2n)^n$ from Eq.~\ref{eq:ineq_for_allmax}. \paragraph{Verifying Contextual Mapping.} This matches the analysis in \S E.2.5 of \citep{yun2020n}. As all selective shift operations and all-max-shift operations are bijective, and $\bm u$ maps each column (token) of the input to a unique id, requirement~\ref{def:contextual_mapping_rule1} in Definition~\ref{def:contextual_mapping} holds. As the $\bm u^T \bm M^n_k$ are all dominated by $(2n^2\delta^{-nd-1})^n\tilde{l}_n$, and different inputs $\bm L$ have different $\tilde{l}_n$ (since $\tilde{l}_n$ is influenced by all of $[l_1, l_2, \dotso, l_n]$ and not all columns coincide for different inputs $\bm L$), the map $\bm u^T$ assigns a unique id to each input.
The interval may be written as \begin{align} \bm u^T \bm M^n_k \in [(2n^2\delta^{-nd-1})^n \tilde{l}_n, (2n^2\delta^{-nd-1})^n (\tilde{l}_n+\delta)]. \end{align} The upper bound holds as the terms other than the dominating one are less than $(2n^2\delta^{-nd-1})^n \cdot \delta$ in total. As we can see, the intervals of $\bm u^T \bm M^n_k$ are disjoint for different inputs, and requirement~\ref{def:contextual_mapping_rule2} in Definition~\ref{def:contextual_mapping} holds. \section{Additional Information on the Basic Classification Setting} \subsection{$k$NN Transformer}\label{app:ua_spth} \begin{definition}[$k$NN Attention]\label{def:knn_atten} For $k \in [n]$, $k$NN attention has the attention pattern $\mathcal{A}_k = \text{kNN}(k)$ for all points, where $\text{kNN}(\cdot)$ represents the Euclidean $k$-nearest neighbourhood of the input. \end{definition} \begin{definition}[$k$NN Transformer]\label{def:knn_tf} The $k$NN transformer is the transformer defined as in Eq.~\ref{eq:dense_tf}, but with the $k$NN attention of Definition~\ref{def:knn_atten}. \end{definition} In addition, in the case of vector attention (Eq. 3 in~\citep{zhao2021point}), universal approximation holds as the learnable mapping $\gamma(\cdot)$ (an MLP) is a universal approximator. This may help to explain why vector attention can outperform scalar attention in Tab.~7 of \citep{zhao2021point}. Finally, in Tab.~\ref{tab:single_layer}, the performance of the $k$NN transformer drops as the number of points increases. This is because, for a fixed $k$, the neighbourhood covers a relatively smaller fraction of the point set as the number of points grows; the receptive field therefore shrinks and the performance drops. \subsection{Memory Usage}\label{app:memory_usage} \input{memory_usage.tex} The memory usage of several sparse attention mechanisms in the basic setting is reported in Tab.~\ref{tab:memory}, which shows that the dense transformer has the largest memory usage due to its $O(n^2)$ complexity. The sparse transformer and sampled transformer have comparable memory usage due to the same $O(n)$ complexity. \subsection{In comparison with Inducing Points (Set Transformer)}\label{app:inducting} We additionally compared the proposed sampled attention with the learnable inducing points strategy~\citep{lee2019set}. The inducing points are implemented here by simply replacing the multi-head self-attention transformer block in Eq. \ref{seq:stacked_sampled_attn} with the Induced Set Attention Block (ISAB) in Eq. (9) of \citet{lee2019set}. The positional embedding is added to the key and value inputs, as in our sampled attention. Our implementation of the basic classification in Sec.~\ref{subsec:basic_setting} is different from the one in \citet{lee2019set} with respect to the data pre-processing: our data pre-processing is in line with \citet{zhao2021point,yu2022point}, while \citet{lee2019set} follow \citet{zaheer2017deep} without positional embedding. As we can see in Tab.~\ref{tab:single_layer}, our proposed sampled attention outperforms the inducing point strategy~\citep{lee2019set} while having linear complexity in the attention matrix. As the performance of \citet{lee2019set} on the two implementations is quite different, we further compared the sampled attention and inducing points strategy in \href{https://github.com/juho-lee/set_transformer}{the official implementation of \citet{lee2019set}}.
To begin with, our proposed sampled attention can be applied directly to the inducing points strategy to reduce its complexity from $O(mn)$ to $O(n)$, where $n$ is the number of input points and $m$ is the number of learnable inducing points. Specifically, we use the sampled attention to replace the dense attention in the Induced Set Attention Block (ISAB) in Eq. (9) of \citet{lee2019set}. However, as the inducing points and the input points have different physical meanings, and as the number of inducing points $m$ (the queries in the self-attention) is not equal to the number of input points $n$ (the keys and values), our Hamiltonian cycle attention cannot be applied directly. We instead apply a different version of sampled attention by randomly sampling two elements per row of the dense attention matrix. This is a loose version of sampled attention, as no Hamiltonian cycle is constructed. The results can be found in Tab.~\ref{tab:class_inducting}. As we can see, our proposed sampled attention is still comparable with the set transformer but with lower computational complexity. \input{class_inducting.tex} \subsection{In comparison with the Stratified Strategy}\label{app:stratified} The window-based transformer is another important branch of exploring the representation power of the transformer. Combined with a hierarchical backbone, it has been widely used in processing 2D images, language, and 3D point clouds, such as in \citet{liu2021swin,lai2022stratified}. The window-based transformer is designed to learn cross-window relationships as well as local relationships within non-overlapping windows. Here we compare our proposed sampled attention with the Stratified strategy from Figure 3 of \citet{lai2022stratified} in Tab.~\ref{tab:single_layer}. The Stratified strategy can be viewed as a combination of dense and sparse keys obtained from window partitions of different sizes. It is an efficient design for learning token relationships in a hierarchical backbone. However, in the single-layer setting, directly learning $O(n^2)$ connections in the attention matrix may be a better solution as it reaches the full receptive field. As our proposed sampled attention mechanism can estimate $O(n^2)$ connections through Monte Carlo sampling, it outperforms the Stratified strategy in the basic classification setting as per Tab.~\ref{tab:single_layer}. \section{Related Work} The transformer~\citep{vaswani2017attention} is widely used in language~\citep{dai2019transformer, yang2019xlnet, raffel2020exploring} and vision~\citep{ramachandran2019stand, dosovitskiy2020image, liu2021swin, touvron2021training}. For example, \citet{raffel2020exploring} explored the transformer by unifying a suite of text problems into a text-to-text format; \citet{dai2019transformer} modeled very long-term dependencies by reusing previous hidden states; \citet{dosovitskiy2020image} demonstrated that the pure transformer can be effectively applied directly to a sequence of image patches; and \citet{liu2021swin} proposed a transformer with hierarchical structure to learn various scales with linear computational complexity. In addition, the representation power of the transformer has been explored by pre-training and fine-tuning models~\citep{bao2021beit, yu2022point, he2022masked}. Recently, an increasing number of researchers have begun to explore the representation power of the transformer on 3D point cloud (set) data.
\citet{xie2018attentional} applied multi-layered dense transformers to small-scale point clouds directly; \citet{yang2019modeling} further proposed the Group Shuffle attention to deal with size-varying inputs by furthest point sampling; \citet{han2022dual} aggregated point-wise and channel-wise features by directly adding two self-attention layers. To avoid the tricky tokenization step, \citet{lee2019set} tried to deal with points directly with $O(nm)$ complexity by introducing inducing points, and proved universal approximation; \citet{mazur2021cloud} further proposed a hierarchical point set mapping, grouping, and merging structure with nearest neighbors defining the sparse attention mechanism. \citet{yu2022point} and \citet{pang2022masked} further introduced transformers to the pre-training and fine-tuning pipelines in the area of 3D point clouds. Last but not least, transformers have also been widely used in other works on 3D (point cloud) data, such as \cite{liu2019point2sequence,fuchs2020se,misra2021end,mao2021voxel,sander2022sinkformers}. Another important line of work seeks to theoretically demonstrate the representation power of the transformer by showing the universal approximation of continuous sequence-to-sequence functions~\citep{yun2019transformers,yun2020n,zaheer2020big,shi2021sparsebert,kratsios2021universal}. To be specific, \citet{yun2019transformers} demonstrated the universal approximation property of the transformer; \citet{yun2020n} and \citet{zaheer2020big} demonstrated that the transformer with a sparse attention matrix remains a universal approximator; \citet{shi2021sparsebert} claimed that the transformer without diag-attention is still a universal approximator. \citet{kratsios2021universal} showed that universal approximation under constraints is possible for the transformer. In comparison with the above works, we propose the $O(n)$ \textit{sampled transformer} -- a universal approximator of continuous set-to-set functions. To our knowledge, approximating dense attention by sampling Hamiltonian cycle attention matrices is new. \subsection{Sparse Transformer and Universal Approximator} Similar to Eq.~\ref{eq:dense_tf}, the sparse transformer is defined as: \begin{subequations} \begin{align} &\text{SAttn}^l(\bm X) = \bm X + \bm W_O \begin{bmatrix} \text{SHead}^{1,l}(\bm X) \\ \vdots \\ \text{SHead}^{h,l}(\bm X) \end{bmatrix}\\ &\text{SHead}^{i,l}(\bm X)_k = \bm W^i_V \bm X_{\mathcal{A}^l_k} \cdot \sigma_S [ (\bm W^i_K \bm X_{\mathcal{A}^l_k})^T \bm W^i_Q \bm X_k] \\ &\text{STB}(\bm X) = \text{SAttn}^l(\bm X) + \bm W_2 \cdot \text{ReLU}(\bm W_1 \text{SAttn}^l(\bm X)), \end{align} \label{eq:sparse_tf} \end{subequations} Here $\mathcal{A}^l_k$, for $l\in [p]$ and $k\in [n]$, is the $l$-th sparsity pattern on the $k$-th column (token embedding), and $\bm X_{\mathcal{A}^l_k} \in \mathbb{R}^{d\times m}$ is the set of token embeddings linked to the $k$-th token embedding under the $l$-th sparsity pattern. \begin{definition} The sparsity patterns $\{ \mathcal{A}^l_k \}$ satisfy the following: \begin{enumerate} \item For all $k\in [n]$ and $l\in [p]$, we have $k \in \mathcal{A}^l_k$. \item There exists a permutation $\gamma: [n] \to [n]$ such that, for all $i \in [n-1]$, $\gamma(i) \in \cup^p_{l=1}\mathcal{A}^l_{\gamma(i+1)}$. \item There exists a finite $s \in \mathbb{N}$ such that $s = \min\{ u \vert \mathcal{S}^u_k = [n] \text{ for all } k \in [n] \}$. \end{enumerate} \end{definition} The first rule means that each token's sparsity pattern includes the token itself.
The second says that there exists a permutation such that the sparsity pattern of the current token embedding includes the previous one. The last rule says that after $s$ sparse attention layers, every token is directly or indirectly linked to all others. The proof of universal approximation for the sparse transformer is the same as for the dense transformer except for the contextual mapping step, as all other steps only require feed-forward layers. So the key point is proving that the sparse multi-head attention implements a contextual mapping. In contrast to dense multi-head attention, the sparse version cannot visit all tokens in one step, so the selective shift operation cannot be applied directly. In fact it requires $\delta^{-nk}$ sparse multi-head attention layers rather than $\delta^{-k}$ in the dense case. The sparse multi-head attention is implemented as: \begin{subequations} \begin{align} &\Psi^l(\bm Z; c, b_Q, b'_Q) := \bm Z + [c\bm e^{(1)} -c\bm e^{(1)}] \begin{bmatrix} \psi^l (\bm Z; b_Q) \\ \psi^l (\bm Z; b'_Q) \end{bmatrix}\\ &\psi^l(\bm Z; b_Q)_k := \bm u^T \bm Z_{\mathcal{A}^l_k} \sigma_H [(\bm u^T \bm Z_{\mathcal{A}^l_k})^T(\bm u^T \bm Z_k - b_Q)]= \begin{cases} \max_{j\in \mathcal{A}^l_k}\bm u^T \bm Z_j & \text{if } \bm u^T \bm Z_k > b_Q,\\ \min_{j\in \mathcal{A}^l_k}\bm u^T \bm Z_j & \text{if } \bm u^T \bm Z_k < b_Q. \end{cases} \end{align} \end{subequations} Compared with Eq.~\ref{eq:selective_shift}, the sparse selective shift operation only takes the maximum and minimum w.r.t.\ $\mathcal{A}^l_k$ rather than over all tokens. The $(1, k)$-th entry reads \begin{subequations} \begin{align} \Psi^l (\bm Z; c, b_Q, b'_Q)_{1,k} &= Z_{1,k} + c\left(\psi^l (\bm Z; b_Q)_k - \psi^l (\bm Z; b'_Q)_k \right)\\ &= \begin{cases} Z_{1,k} + c\left( \max_{j\in \mathcal{A}^l_k} \bm u^T \bm Z_j - \min_{j\in \mathcal{A}^l_k} \bm u^T \bm Z_j \right) & \text{if } b_Q < \bm u^T \bm Z_k < b'_Q,\\ Z_{1,k} & \text{if } \bm u^T\bm Z_k \notin [b_Q, b'_Q]. \end{cases} \end{align} \end{subequations} As in Eq.~\ref{eq:selective_shift_col}, the sparse shift operation only modifies the entries $\bm Z_{1,k}$ in the first row of the input if $\bm u^T \bm Z_k \in (b_Q, b'_Q)$, which is also a one-to-one mapping. However, as the sparse attention can only reach a subset of tokens, the global maximum and minimum cannot be calculated in general. This can be solved by assuming $l_2 < l_3 < \cdots < l_n < l_1$, which is achieved by the positional embedding \begin{align}\label{eq:position_embedding} \bm E = \begin{bmatrix} n - 1 & 0 & 1 & \cdots & n - 2\\ n - 1 & 0 & 1 & \cdots & n - 2\\ \vdots & \vdots & \vdots & & \vdots \\ n - 1 & 0 & 1 & \cdots & n - 2 \end{bmatrix}, \end{align} i.e., $\bm E_1 = (n-1)\bm 1_d$ and $\bm E_i = (i-2)\bm 1_d$ for $i \in [2:n]$. We rewrite the input space from $\mathbb{G}_\delta$ to \begin{align} \mathbb{H}_\delta := \{ \bm G + \bm E \in \mathbb{R}^{d\times n} \vert \bm G \in \mathbb{G}_\delta := \{0, \delta, \dotso, 1-\delta\}^{d\times n} \}. \end{align} The intervals are then \begin{align} &\bm H_1 \in [n-1:\delta:n-\delta]^d,\\ &\bm H_i \in [i-2:\delta:i-1-\delta]^d \text{ for all } i \in [2:n]. \end{align} By choosing $\bm u := (1, \delta^{-1}, \delta^{-2}, \dotso, \delta^{-d+1})$, the map $\bm H_k \mapsto \bm u^T \bm H_k$ is one-to-one, and \begin{align}\label{eq:pe_interval} &\bm u^T \bm H_{1} \in \left[ (n-1)\sum_{j=0}^{d-1} \delta^{-j}: \delta: (n-1)\sum_{j=0}^{d-1} \delta^{-j} + \delta^{-d+1} - \delta \right],\\ &\bm u^T \bm H_{i} \in \left[ (i-2)\sum_{j=0}^{d-1} \delta^{-j}: \delta: (i-2)\sum_{j=0}^{d-1} \delta^{-j} + \delta^{-d+1} - \delta \right] \text{ for all } i \in [2:n].
\end{align} So the intervals of the dot products $\bm u^T \bm H_i$ are disjoint and $\bm u^T \bm H_2 < \bm u^T \bm H_3 < \cdots < \bm u^T \bm H_n < \bm u^T \bm H_1$. As the $j$-th token is linked to itself and the previous one, we can calculate the maximum and minimum values in the selective shift operation: the sparse selective shift operation is first applied to the second column, which is the minimum one, while its previous token (the first column) is the maximum one. After this step, the resulting $\bm u^T \tilde{\bm H}_2$ is the new maximum and the third column $\bm u^T \bm H_3$ is the new minimum. The next sparse selective shift operation can then be applied to $\bm u^T \bm H_3$, as the maximum and minimum are reachable through $\mathcal{A}^l_3$. After $n-1$ steps of sparse selective shift operations, we have $\bm u^T \bm H_1 < \bm u^T \tilde{\bm H}_2 < \bm u^T \tilde{\bm H}_3 \cdots < \bm u^T \tilde{\bm H}_n$. This operation guarantees the first rule of the contextual mapping. Then the all-max-shift operation is applied to guarantee that the `column ids' of different inputs (contexts) are disjoint, so that the second rule of the contextual mapping is implemented. The all-max-shift operation $\Omega^l : \mathbb{R}^{d\times n} \to \mathbb{R}^{d\times n}$ is defined as \begin{align} \Omega^l (\bm Z; c) = \bm Z + c\bm e^{(1)}\psi^l (\bm Z; 0). \end{align} The $(1, k)$-th entry reads \begin{align} \Omega^l (\bm Z; c)_{1,k} = Z_{1,k} + c \psi^l (\bm Z; 0)_k = Z_{1,k} + c \max_{j \in \mathcal{A}^l_k} \bm u^T \bm Z_j. \end{align} To reach the global maximum, at least $s$ all-max-shift operations should be applied. The total number of multi-head attention layers is $\delta^{-nk}+s$. \section{Masked Transformer} The masked transformer is defined by masking some tokens and replacing them with a shared learnable masking token embedding $\bm m \in \mathbb{R}^d$, where $\bm M = [\bm m, \dotso, \bm m] \in \mathbb{R}^{d\times n}$. The random masking indicator is defined as $\bm \xi_n \in \{ 0, 1\}^n$, with 0 indicating a masked token, and $\circ$ denotes the Hadamard product. \begin{subequations} \begin{align} &\text{Mask}(\bm X, \bm M) = \bm X \circ (\bm 1_d \bm \xi_n^T) + \bm M \circ \left( \bm 1_d (\bm 1_n - \bm \xi_n)^T \right) \\ &\text{SHead}^{i,l}(\bm X)_k = \bm W^i_V \text{Mask}(\bm X)_{\mathcal{A}_k^l} \cdot \sigma_S [ (\bm W^i_K \text{Mask}(\bm X)_{\mathcal{A}_k^l})^T \bm W^i_Q \text{Mask}(\bm X)_k] \\ &\text{SAttn}^l(\bm X) = \bm X + \bm W_O \begin{bmatrix} \text{SHead}^{1,l}(\bm X) \\ \vdots \\ \text{SHead}^{h,l}(\bm X) \end{bmatrix}\\ &\text{STB}^l(\bm X) = \text{SAttn}^l(\bm X) + \bm W_2 \cdot \text{ReLU}(\bm W_1 \cdot \text{SAttn}^l(\bm X)). \end{align} \label{eq:masked_tf} \end{subequations}
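For concreteness, a minimal NumPy sketch of the masking operation above (our illustration, not a reference implementation; the learnable embedding $\bm m$ appears here as a fixed placeholder vector):
\begin{verbatim}
import numpy as np

def mask_tokens(X, m, keep):
    # X: (d, n) token embeddings; m: (d,) mask token embedding;
    # keep: (n,) 0/1 indicator (0 marks a masked token).
    # Kept columns of X are preserved; masked columns are replaced by m.
    d, n = X.shape
    keep = keep.astype(X.dtype)
    M = np.tile(m[:, None], (1, n))
    return X * keep[None, :] + M * (1.0 - keep)[None, :]

rng = np.random.default_rng(0)
d, n = 4, 6
X = rng.random((d, n))
m = np.zeros(d)                           # placeholder mask embedding
keep = (rng.random(n) > 0.3).astype(int)  # roughly 30% of tokens masked
print(mask_tokens(X, m, keep)[:, keep == 0])  # masked columns equal m
\end{verbatim}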
{ "arxiv_id": "2302.14343", "language": "en", "timestamp": "2023-03-01T02:10:25", "url": "https://arxiv.org/abs/2302.14343", "yymm": "2302" }
\section{Koning-Delaroche with uncertainty quantification (KDUQ)} The KDUQ~\cite{KDUQ} is the uncertainty-quantified version of the Koning-Delaroche (KD) global parametrization. It was fitted to the same corpus of data as KD, the main difference being the Bayesian framework used to determine the parameter posteriors and their correlations. It is valid for the scattering of a nucleon off a target with $A\geq 24$ and at energies 1 keV$\,<E<\,$200~MeV. In this section, we illustrate the posteriors and scattering observables obtained with the KDUQ. For all calculations in this work, we neglect the spin-orbit terms in the KDUQ parametrization, that we expect to be small and to have a negligible effect of transfer and knockout observables. The KDUQ parametrization therefore includes only volume and surface terms, it reads \begin{align} \label{Eq1} U(r) &= -V f(r;R_V,a_V) - i W f(r;R_W,a_W)+i 4a_S W_S \frac{d}{dr} f(r;R_S,a_S) \\ \nonumber \end{align} where $f(r;R_i,a_i)=\frac{1}{1+e^{(r-R_i)/a_i}}$ and $V$, $W$, $W_S$ are the real volume, imaginary volume, and imaginary surface depths, $R_i = r_i A^{1/3}$ for the neutron-target potential, where $A$ is the mass number of the target. Fig.~\ref{Fig1}(a) shows the corner plot obtained with 1600 sets of the KDUQ posteriors for $n$- $^{33}$Ar at $9$~MeV. As noted in Ref.~\cite{KDUQ}, the mean values of these posteriors are close to the parameters of the original KD (black line and points). By sampling these posteriors, one can propagate the uncertainties to reaction observables and build uncertainty bands on these cross sections. Fig.~\ref{Fig1}(b) shows the uncertainty bands obtained for the elastic-scattering cross section of a neutron of a $^{33}$Ar target at $9$~MeV. Both the $1\sigma$ (dark blue shaded area) and $2\sigma$ (light blue shaded area) uncertainty bands include the prediction made using the KD (black line). The uncertainty bands for the transfer observables [$^{34}$Ar$(p,d)$$^{33}$Ar(g.s.), $^{36}$Ar$(p,d)$$^{35}$Ar(g.s.) and $^{46}$Ar$(p,d)$$^{45}$Ar(g.s.) at $33A$~MeV] shown in Fig.~1 of the manuscript are obtained by using 1600 sets of the KDUQ posteriors in the Adiabatic Distorted Wave Approximation (ADWA)~\cite{JT74}. The KDUQ is used for the $p$-$^{34,36,46}$Ar interactions at 33~MeV but also for the $n$- and $p$-$^{33,35,45}$Ar potentials at half the deuteron energies. Let us emphasize that each ADWA calculation uses three nucleon-nucleus potentials, that were built consistently from the same global parameter set (composed of 46 parameters~\cite{KDUQ}). \begin{figure} \centering \begin{minipage}{0.9\linewidth} {\includegraphics[width=\linewidth]{PotentialsKDUQ_n_A33_Z18_E9_pulls1600.pot_CornerPlot.pdf}} \end{minipage} \hspace{7.2cm}\begin{minipage}{0.47\linewidth} \vspace{-23.1cm} {\includegraphics[width=\linewidth]{PotentialsKDUQ_n_A33_Z18_E9_pulls1600.pot_ElScat.pdf}} \end{minipage} \caption{(a) Corner plot of the KDUQ~\cite{KDUQ} posteriors for $n$-$^{33}$Ar at $9$~MeV ($V$, $r_V$, $a_V$, $W$, $r_W$, $a_W$, $W_S$, $r_S$ and $a_S$). (b) Elastic-scattering cross section of a neutron off $^{33}$ target. The dark and light blue shaded areas correspond respectively to the $1\sigma$ and $2\sigma$ uncertainty bands. 
The black line and points in (a) and (b) correspond to the original KD potential~\cite{KD03}.}\label{Fig1} \end{figure} \clearpage \section{Bayesian analysis of $n$-$^9$Be target interaction} In this work, we have propagated the uncertainties due to the neutron-target interaction onto knockout observables of $^{32,34,46}$Ar off a $^9$Be target. Since the KDUQ global optical potential is limited to heavier target, its accuracy would be uncertain for $n$-$^9$Be system. To be able to quantify the uncertainties introduced in the fit of the potential parameters to experimental data for the $n$-$^9$Be interaction, we follow the methodology detailed in our previous work~\cite{KOUQ22}. We consider mock data, that we generate at every 1$^\circ$ using a realistic potential~\cite{Weppner18}, which was fitted to reproduce elastic-scattering observables for a nucleon off a target nucleus with $A\leq 13$ at energies between 65 MeV and 75 MeV. We assign an error of 10\% to all mock data, which is common for elastic-scattering experiments with stable beams. As in our previous work~\cite{KOUQ22}, we assume wide prior Gaussian distributions in order to obtain data-driven posterior and we fix the geometry of the imaginary volume term to be the same as that of the real volume term, i.e., $r_W=r_V$ and $a_W=a_V$. The resulting parameters posteriors obtained for 25600 parameter sets, their correlations and the priors (black lines) used are shown in the corner plot Fig.~\ref{Fig2}(a). The $1\sigma$ (dark red shaded area) and $2\sigma$ (light red shaded area) uncertainty bands for the elastic-scattering observables along with the mock data are given in Fig.~\ref{Fig2}(b). The uncertainties on the scattering cross section are large at backwards angles, where no data were included in the fit. As seen in Ref.~\cite{KOUQ22}, including more data at these large angles would have a pronounced effect in the confidence intervals obtained for elastic scattering, but their impact on knockout observables is much more modest. For each of the knockout calculations presented in the manuscript, we use 1600 samples of these posteriors to generate the uncertainty bands for knockout observables. \begin{figure}[h] \centering \begin{minipage}{0.9\linewidth} {\includegraphics[width=\linewidth]{chi2_n9Be.pdf}} \end{minipage} \hspace{7.2cm}\begin{minipage}{0.47\linewidth} \vspace{-23.1cm} {\includegraphics[width=\linewidth]{PlotElScatn9BePulls25600_fill_prior1.pdf}} \end{minipage} \caption{Phenomenological $n$-$^9\rm Be$ interaction constrained with mock data $\in [0,60^\circ]$ generated by the potential given in Ref.~\cite{Weppner18}. (a) Corner plots; (b) Elastic-scattering cross sections of $n$ off $^{9}\rm Be$ at 70~MeV as a function of the scattering angle $\theta$ with mock data $\in [0,60^\circ]$ (black points). }\label{Fig2} \end{figure} \clearpage \section{Asymmetry plot obtained with the KDUQ potential} In this section, we verify that the asymmetry dependence of the ${\cal R}(\Delta S)$ for knockout shown in Fig. 3 of the manuscript is not dependent on our methodology, i.e., the phenomenological Bayesian fit to $n$-$^9$Be mock data. We repeated our knockout calculations using the KDUQ potential for the $n$-$^9\rm Be$ interaction. Although the accuracy of the KDUQ potential is unclear for such light nuclei, it was fitted to a large sets of nuclei, and therefore it encodes a mass and energy dependence. We show in Fig.~\ref{Fig3} the resulting asymmetry plot of the ratio of the SF extracted from data and the shell-model SF. 
Similarly to what is seen in the manuscript, the SFs extracted from knockout and transfer observables are consistent within $2\sigma$. The slope of the knockout results ($a=-0.0134\pm 0.0122$) is slightly smaller in magnitude and exhibits larger uncertainties than the one obtained in Fig. 3 of the manuscript ($a=-0.0178\pm 0.088$), and its mean value is now closer to the one obtained from the transfer analysis ($a=-0.0036\pm 0.0090$). These larger relative errors might be due to the fact that we extrapolate the KDUQ parametrization outside its mass range of validity. As the slopes extracted from both knockout analyses are consistent, this indicates that the mild dependence of the ratio ${\cal R}(\Delta S)$ on the asymmetry of the nucleus is not dependent on how we construct the $n$-$^9$Be interaction. \begin{figure}[h!] \centering\includegraphics[width=0.7\linewidth]{GadePlotUQonlyKDUQPub.pdf} \caption{Ratio of the SF extracted from data to the shell-model SF (including the center-of-mass correction) as a function of the asymmetry of the nucleus $\Delta S=S_n-S_p$. The blue error bars correspond to the SFs extracted from transfer angular distributions and the red ones to the SFs extracted from knockout cross sections, both using the KDUQ parametrization~\cite{KDUQ}. Each error bar shows the $1\sigma$ and $2\sigma$ uncertainties. The shaded areas correspond to the $1\sigma$ uncertainties of linear fits to the transfer (blue) and knockout (red) results. The slopes of these linear fits and their $1\sigma$ uncertainties are given in the legend.}\label{Fig3} \end{figure} \clearpage \section{4. Asymmetry plots obtained considering the independent-particle model occupation number} Because they change with the size of the model space and the interactions considered in the shell-model calculations, the slopes extracted from the asymmetry plot in Fig. 3 of the manuscript and Fig.~\ref{Fig3} of this Supplemental Material are specific to each shell-model prediction. To avoid this ambiguity, one can instead normalize the spectroscopic factors extracted from experimental data to the independent-particle model (IPM) occupation number, simply given by $(2j+1)$, and extract the asymmetry dependence of this ratio. Fig.~\ref{Fig4} shows the asymmetry plots obtained from the analysis of experimental data using (a) the Bayesian analysis of the $n$-$^9$Be interaction for the knockout data and the KDUQ for the transfer ones, and (b) the KDUQ parametrization in both cases. Similarly to the case with the Bayesian analysis, the slopes obtained from both the knockout [(a) $a=-0.0120\pm 0.0062$ and (b) $a=-0.0088 \pm 0.0086$] and transfer ($a=-0.0039\pm 0.0061$) analyses are consistent, and the knockout calculations exhibit larger relative uncertainties. As discussed in the manuscript, the discrepancies between the knockout and transfer results are probably due to flaws of the eikonal model, which tends to overestimate the theoretical knockout cross sections and therefore underestimate the SFs extracted from knockout data.
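The slopes and $1\sigma$ slope uncertainties quoted above and in the figure legends can be obtained from a weighted linear fit of the ${\cal R}(\Delta S)$ points; the short sketch below (ours, with purely illustrative numbers rather than the data of this work, and not necessarily the exact fitting procedure used here) shows one standard way to do so:
\begin{verbatim}
import numpy as np

def weighted_slope(x, y, sigma):
    # Weighted least-squares fit of y = a*x + b, returning the slope a
    # and its 1-sigma uncertainty from the parameter covariance matrix.
    A = np.vstack([x, np.ones_like(x)]).T
    W = np.diag(1.0 / sigma**2)
    cov = np.linalg.inv(A.T @ W @ A)
    a, b = cov @ A.T @ W @ y
    return a, np.sqrt(cov[0, 0])

# Illustrative values only (not the data points of this work):
dS    = np.array([-12.0, -5.0, 0.0, 8.0, 15.0])   # Delta S = S_n - S_p (MeV)
ratio = np.array([0.55, 0.50, 0.48, 0.42, 0.38])  # R = SF_exp / SF_SM
err   = np.array([0.08, 0.06, 0.05, 0.06, 0.07])  # 1-sigma errors
a, da = weighted_slope(dS, ratio, err)
print(f"slope a = {a:.4f} +/- {da:.4f}")
\end{verbatim}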
Finally, even though to make a meaningful comparison with the $\mathcal R(0)$ deduced from $(e,e'p)$, similar uncertainty analysis for that reaction channel needs to be done, our results do not seem inconsistent with the previous analysis~\cite{KRAMER2001267}: we extract ${\cal R}(0)= 0.38 \pm0.10$ from the knockout data in the case (a) [${\cal R}(0)= 0.29 \pm0.14$ in the case (b)] and ${\cal R}(0)= 0.51\pm0.06$ from the transfer data, while in Ref.~\cite{KRAMER2001267} ${\cal R}(0)\sim 0.4-0.7$ is extracted from $(e,e'p)$ data. \begin{figure}[h!] \centering\subcaptionbox{}{\includegraphics[width=0.48\linewidth]{GadePlotUQonlyKDUQWeppPubIPM.pdf}}\subcaptionbox{}{\includegraphics[width=0.48\linewidth]{GadePlotUQonlyKDUQPubIPM.pdf}} \caption{Asymmetry plot obtained with SFs extracted from transfer angular distributions (blue) and knockout cross sections (red) and normalized SFs considering the independent particle model. Panel (a) is obtained using the KDUQ parametrization for the transfer calculations and the Bayesian analysis of the $n$-$^9$Be interaction and panel (b) using in both cases the KDUQ parametrization. Each error bars show the $1\sigma$ and $2\sigma$ uncertainties. The shaded area correspond to the $1\sigma$ uncertainties of a linear fit of the transfer (blue) and knockout (red) error bars. The slopes of these linear fits and their $1\sigma$ uncertainty are given in the legend.}\label{Fig4} \end{figure} \clearpage \vspace{-0.2cm} \bibliographystyle{unsrt}
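For completeness, the optical-model form of Eq.~(\ref{Eq1}) can be evaluated directly from the Woods-Saxon form factors; the sketch below (ours, with placeholder parameter values that are not the KDUQ posteriors) illustrates the volume terms and the surface-derivative term:
\begin{verbatim}
import numpy as np

def woods_saxon(r, R, a):
    # Woods-Saxon form factor f(r; R, a) = 1 / (1 + exp((r - R) / a))
    return 1.0 / (1.0 + np.exp((r - R) / a))

def optical_potential(r, V, W, WS, rV, aV, rW, aW, rS, aS, A):
    # Eq. (1): real and imaginary volume terms plus imaginary surface
    # term, with radii R_i = r_i * A^(1/3); spin-orbit terms neglected.
    RV, RW, RS = rV * A**(1/3), rW * A**(1/3), rS * A**(1/3)
    fS = woods_saxon(r, RS, aS)
    dfS_dr = -fS * (1.0 - fS) / aS            # derivative of the form factor
    return (-V * woods_saxon(r, RV, aV)
            - 1j * W * woods_saxon(r, RW, aW)
            + 4j * aS * WS * dfS_dr)

# Placeholder parameter values for illustration only (not fitted values):
r = np.linspace(0.1, 12.0, 5)
print(optical_potential(r, V=45.0, W=2.0, WS=7.0,
                        rV=1.2, aV=0.66, rW=1.2, aW=0.66,
                        rS=1.3, aS=0.55, A=33))
\end{verbatim}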
{ "arxiv_id": "2302.14272", "language": "en", "timestamp": "2023-03-01T02:07:41", "url": "https://arxiv.org/abs/2302.14272", "yymm": "2302" }
\section{1. Introduction} Transition metal diborides (TMDBs) have attracted considerable attention over the past few decades due to their diverse properties \cite{TMD1,TMD2,TMD3}, including high hardness and shear strength \cite{hardness1,hardness2,hardness3,hardness4}, excellent corrosion resistance \cite{corrosion1,corrosion2}, superior catalytic performance \cite{catalytic1,catalytic2}, nontrivial band topology \cite{topology1,topology2}, high electrical conductivity and even superconductivity \cite{SC1,SC2,SC3,SC4,SC5}. Among existing TMDBs, molybdenum diboride is unique in that it exists in both hexagonal AlB$_{2}$-type and rhombohedral structures \cite{MoB2}. The former is built up by planar transition metal and boron layers stacked alternately along the $c$-axis, and the latter can be derived from the former by puckering half of the boron layers. Although neither phase is superconducting, superconductivity can be induced by either doping at the transition metal site or applying high pressure. In particular, the $T_{\rm c}$ of AlB$_{2}$-type MoB$_{2}$ reaches 32 K at $\sim$110 GPa \cite{MoB2pressureSC}, comparable to that of isostructural MgB$_{2}$ \cite{MgB2}. At ambient pressure, however, MoB$_{2}$ crystallizes in the rhombohedral structure. Instead, partial substitution of Mo by Zr or Sc is needed to stabilize the AlB$_{2}$-type phase in metal-deficient (Mo$_{x}$$X$$_{1-x}$)$_{1-\delta}$B$_{2}$ ($X$ = Zr and Sc), which exhibits bulk superconductivity up to 8.3 K \cite{SC3,SC5}. In addition to Zr, preliminary results are also reported for the group IVB dopants Ti and Hf \cite{SC3}. Carbon is known to be the most effective dopant for enhancing the upper critical field of MgB$_{2}$ \cite{MgB2C1,MgB2C2}. Single crystal results indicate that carbon can substitute for up to $\sim$15\% of boron in MgB$_{2}$, which leads to a monotonic decrease in the $a$-axis while having little effect on the $c$-axis \cite{MgB2C3,MgB2C4}. With increasing carbon content, the $T_{\rm c}$ of Mg(B$_{1-x}$C$_{x}$)$_{2}$ decreases dramatically and is no longer detectable for $x$ $>$ 0.125. Despite the suppression of $T_{\rm c}$, the zero-temperature upper critical field $B_{\rm c2}$(0) is almost doubled as $x$ increases from 0 to $\sim$0.04 \cite{MgB2C1,MgB2C2}. To further optimize the high field performance, codoping of Mg(B$_{1-x}$C$_{x}$)$_{2}$ with Ti was also attempted \cite{MgB2TiC}. It turns out that the carbon still replaces boron in the MgB$_{2}$ phase while Ti precipitates out as either TiB or TiB$_{2}$ in the intra-granular region. However, neither carbon doping nor its effect on superconductivity in TMDBs has been investigated to date. In this paper, we study the structure and physical properties of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$. Structural analysis indicates the formation of an AlB$_{2}$-type phase for $x$ = 0, 0.12 and 0.16. The carbon dopant is distributed uniformly in the lattice and induces changes in the lattice parameters, microstructure, and boron binding energies. Physical property measurements show that (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ exhibits bulk superconductivity below 7.0 K while the C-doped samples remain normal down to 1.8 K, the reason for which is discussed. \section{2. Experimental section} Polycrystalline (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$ samples with $x$ = 0, 0.04, 0.08, 0.12 and 0.16 were synthesized by the arc melting method as described previously \cite{SC3,SC5}.
Stoichiometric amounts of high purity Mo (99.99\%), Ti (99.99\%), B (99.99\%) and C (99.99\%) powders were weighed, mixed thoroughly and pressed into pellets in an argon-filled glovebox. The pellets were then melted in an arc furnace under a high-purity argon atmosphere (99.999\%) with a current of 80 A, which roughly corresponds to a temperature of 2400 $^{\circ}$C. The melts were flipped and remelted at least four times to ensure homogeneity, followed by rapid cooling on a water-chilled copper plate. The phase purity of the resulting samples was checked by powder x-ray diffraction (XRD) measurements in the 2$\theta$ range of 20-80$^{\circ}$ using a Bruker D8 Advance x-ray diffractometer with Cu-K$\alpha$ radiation. The step size is 0.025$^{\circ}$ and the dwell time at each step is 0.1 s. The lattice parameters were determined by a least-squares method. The morphology and chemical composition were investigated in a Zeiss Supra 55 Schottky field-emission scanning electron microscope (SEM) equipped with an energy dispersive x-ray (EDX) spectrometer. The microstructure was examined in an FEI Tecnai G2 F20 S-TWIN transmission electron microscope (TEM) operated under an accelerating voltage of 200 kV. The x-ray photoelectron spectroscopy (XPS) measurements were performed in an ESCALAB Xi+ spectrometer with Al K$\alpha$ x-rays as the excitation source. The resistivity and specific heat measurements down to 1.8 K were performed in a Quantum Design Physical Property Measurement System (PPMS-9 Dynacool). The resistivity was measured on bar-shaped samples using the standard four-probe method, and the applied current was 1 mA. The dc magnetization was measured down to 1.8 K in a commercial SQUID magnetometer (MPMS3) with an applied field of 1 mT. \section{3. Results and discussion} \noindent\textbf{3.1 X-ray structural analysis}\\ Figure 1(a) shows the XRD patterns for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$ samples with $x$ up to 0.16. In line with the previous report \cite{SC3}, the C-free (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ ($x$ = 0) sample has a dominant hexagonal AlB$_{2}$-type phase ($P$6/$mmm$ space group). The refined lattice parameters are $a$ = 3.042 {\AA} and $c$ = 3.150 {\AA}, which are smaller than those of (Mo$_{0.96}$Zr$_{0.04}$)$_{0.8}$B$_{2}$ \cite{SC3}. However, substituting as little as 4\% of B by C destabilizes the AlB$_{2}$-type structure. Indeed, the sample with $x$ = 0.04 contains a significant amount of a rhombohedral Mo$_{2}$B$_{5}$-type impurity phase. Note that this phase is rich in boron, which may explain the absence of a boron peak in the pattern. As the carbon content $x$ increases further, the impurity phase is suppressed rapidly and a nearly single AlB$_{2}$-type phase is recovered for $x$ = 0.12 and 0.16. In particular, the absence of diffraction peaks from carbide phases provides compelling evidence that carbon is incorporated into the lattice, which, to our knowledge, is the first observation for AlB$_{2}$-type TMDBs. The $a$- and $c$-axis lengths are found to be 3.048 {\AA} and 3.094 {\AA} for $x$ = 0.12, and 3.050 {\AA} and 3.091 {\AA} for $x$ = 0.16. Compared with (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$, the C-doped samples have slightly longer $a$-axes and considerably shorter $c$-axes. Note that carbon has a smaller atomic radius and a larger electronegativity than boron \cite{radius}.
Hence the substitution of carbon for boron is expected to reduce the thickness of the boron layers and enhance their attraction to the transition metal layers, both of which tend to shorten the $c$-axis. Meanwhile, the transition metal atoms could move apart so that the Coulomb repulsion between them is reduced and consequently the $a$-axis increases. \begin{figure} \includegraphics[width=14.6cm]{Fig1.eps} \caption{ (a) XRD patterns of the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$ samples with 0 $\leq$ $x$ $\leq$ 0.16. The peaks related to different impurities are marked by different symbols. (b) Structural refinement profile for the sample with $x$ = 0.12. Here the black circles and the red line are the observed (Obs.) and calculated (Cal.) patterns, respectively; the blue solid line represents the difference (Diff.) between them; the magenta ticks represent the expected positions of Bragg reflections. The inset shows a schematic structure of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$. } \label{fig1} \end{figure} \begin{figure*} \includegraphics*[width=13.5cm]{Fig2.eps} \caption{ (a, b) SEM images for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ and (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ samples, respectively, on a scale bar of 20 $\mu$m. (c) SEM image for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ sample on a scale bar of 2.5 $\mu$m. (d-f) EDX elemental mapping results for Mo, Ti, and B, respectively. (g) SEM image for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ sample on a scale bar of 2.5 $\mu$m. (h-k) EDX elemental mapping results for Mo, Ti, B, and C, respectively. } \label{fig2} \end{figure*} The structural refinement profile for the sample with $x$ = 0.12 is displayed in Fig. 1(b). In the unit cell of AlB$_{2}$, the Al and B atoms occupy the (0, 0, 0) and (0.3333, 0.6667, 0.5) sites, respectively. For (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$, the Mo and Ti atoms are set to share the former site while the B and C atoms are set to share the latter one. As can be seen, all the diffraction peaks can be well fitted based on this structural model, which is corroborated by the small reliability factors ($R_{\rm wp}$ = 6.0\%, $R_{\rm p}$ = 4.3\%). These results confirm the AlB$_{2}$-type structure of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$, which is sketched in the inset of Fig. 1(b).\\ \noindent\textbf{3.2 Morphology, chemical composition and microstructure}\\ Typical SEM images on a scale bar of 20 $\mu$m for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ and (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ samples are displayed in Figs. 2(a) and (b). Both samples consist of aggregated grains with sizes ranging from a few tens to about one hundred $\mu$m. In some of the intergrain regions, a minor secondary phase with darker contrast is clearly visible for (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ but less evident for (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$. EDX measurements indicate that this secondary phase is elemental boron in the former and elemental carbon in the latter, consistent with the XRD results shown above. Figures 2(c-k) show the magnified SEM images (on a scale bar of 2.5 $\mu$m) and the corresponding EDX elemental maps taken on the large grains of these samples. For both cases, a dense homogeneous bulk is observed and all the constituent elements are distributed uniformly.
In addition, the average Ti/(Mo+Ti) ratio is determined to be 0.04(1), in reasonable agreement with the nominal composition. Nevertheless, the boron and carbon contents cannot be determined accurately by the EDX analysis due to their small atomic masses. \begin{figure*} \includegraphics*[width=14.8cm]{Fig3.eps} \caption{ (a,b) High resolution TEM image and corresponding SAED pattern for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ sample taken along the [0 0 1] zone axis. (c,d) High resolution TEM image and corresponding SAED pattern for the same sample taken along the [0 1 0] zone axis. In panel (c), the arrows mark the lattice defects. } \label{fig3} \end{figure*} \begin{figure} \includegraphics*[width=14.8cm]{Fig4.eps} \caption{ (a,b) High resolution TEM image and corresponding SAED pattern for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ sample taken along the [0 0 1] zone axis. In panel (a), the arrows mark the lattice defects. (c,d) High resolution TEM image and corresponding SAED pattern for the same sample taken along the [0 1 0] zone axis. } \label{fig4} \end{figure} Figure 3(a) shows the high-resolution TEM (HRTEM) image taken along the [0 0 1] zone axis for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ sample. One can see sharp lattice fringes with a spacing of 0.263 nm, which matches well with that between the (100) planes. The corresponding selected area electron diffraction (SAED) pattern displayed in Fig. 3(b) exhibits a well-defined spot pattern and the spots near the center can be indexed as the (100), (010) and (110) reflections. The HRTEM image and corresponding SAED pattern taken along the [0 1 0] zone axis for the same sample are displayed in Figs. 3(c) and (d). Along this zone axis, two lattice spacings of 0.263 nm and 0.315 nm can be resolved, in line with those of the (100) and (001) planes, respectively. Also, the spots near the center are indexable to the (100), (101) and (001) reflections. It is worth noting that, similar to (Mo$_{0.96}$Zr$_{0.04}$)B$_{2}$ \cite{SC3}, planar defects along the $c$-axis are also detected for (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$, as indicated by the arrows in Fig. 3(c). However, the absence of streaking in the SAED pattern [see Fig. 3(d)] implies that the density of such defects is significantly lower in the latter than in the former. This is consistent with the expectation that the defects are suppressed by the presence of metal deficiency \cite{SC3}. The HRTEM images and corresponding SAED patterns taken along the [0 0 1] and [0 1 0] zone axes for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ sample are shown in Figs. 4(a-d). The lattice fringes remain well visible and the two resolved lattice spacings of 0.263 nm and 0.315 nm agree well with those of the (100) and (001) planes, respectively. It is thus clear that the crystallinity does not degrade upon carbon doping. However, contrary to (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$, a number of planar defects are detected along the [0 0 1] zone axis of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$, as seen in Fig. 4(a). Indeed, the corresponding SAED pattern shown in Fig. 4(b) exhibits streaking along the (00$l$) directions. Given that the streaks are absent along the [0 1 0] zone axis and the transition metal layers remain unchanged, it is most likely that the planar defects come from the C-doped boron layers. In pristine boron layers, the boron atoms form a continuous network of six-membered rings.
The substitution of boron by smaller carbon is expected to distort the rings, which promotes the formation of planar defects.\\ \begin{figure} \includegraphics*[width=10cm]{Fig5.eps} \caption{ (a,b) B 1$s$ XPS spectra and their deconvolutions for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ and (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ samples, respectively. The two vertical dashed lines are guides to the eyes. } \label{fig5} \end{figure} \noindent\textbf{3.3 XPS spectra}\\ Figures 5(a) and (b) show the B 1$s$ XPS spectra of the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ and (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ samples, respectively. In the former case, the spectrum is deconvoluted into three peaks located at 187.1 eV, 188.2 eV and 192.6 eV, which can be assigned to B-B bonds \cite{B-Bbond}, B-Mo bonds and B-O bonds \cite{BO-Mobond}, respectively. Similarly, the deconvolution of the spectrum of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{0.88}$C$_{0.12}$)$_{2}$ gives three peaks located at 187.5 eV, 188.7 eV and 192.6 eV. Compared with (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$, the former two peaks shift clearly towards higher binding energies while the latter one remains stationary. This suggests a contribution of B-C bonding since the electronegativity of carbon is larger than that of boron \cite{B-Cbond}. Note that this bonding inevitably involves the breaking of B-B bonds and induces a charge redistribution of the six-membered rings that favors the defect formation.\\ \noindent\textbf{3.4 Superconducting and normal-state properties}\\ \noindent\textbf{3.4.1 (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$}\\ \begin{figure} \includegraphics*[width=14.8cm]{Fig6.eps} \caption{ (a-c) Low temperature resistivity, magnetic susceptibility and specific heat, respectively, for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ sample. The vertical line is a guide to the eyes, showing the $T_{\rm c}$. In panel (c), the black lines are the entropy-conserving construction to estimate the normalized specific-heat jump and the red line is a fit to the normal-state data by the Debye model. (d) Temperature dependencies of resistivity under magnetic fields up to 5 T with a field increment of 1 T. The horizontal line marks the level corresponding to half the drop of the normal-state resistivity. (e) Upper critical field versus temperature phase diagram. The red line is a fit to the data by the WHH model. } \label{fig6} \end{figure} The low-temperature resistivity ($\rho$), magnetic susceptibility ($\chi$) and specific heat ($C_{\rm p}$) data for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ sample are displayed in Figs. 6(a-c). As can be seen, $\rho$ starts to drop on cooling below 7.4 K, indicating the onset of a superconducting transition. The $\rho$ drop is sharp down to $\sim$6.8 K, but afterward becomes much slower, and reaches zero only below $\sim$6 K. From the midpoint of the $\rho$ drop, the superconducting transition temperature $T_{\rm c}$ is determined to be 7.0 K. Coinciding with $T_{\rm c}$, a diamagnetic transition in the zero-field cooling (ZFC) $\chi$ and a distinct $C_{\rm p}$ anomaly are detected. In particular, the diamagnetic signal at 1.8 K corresponds to a shielding fraction of $-$4$\pi\chi$ $\approx$ 87.4\%, which is more than one order of magnitude larger than that reported previously \cite{SC3}. These results indicate the bulk nature of superconductivity in (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$.
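As a minimal illustration of how these two diagnostics are extracted, the midpoint criterion for $T_{\rm c}$ and the conversion of the ZFC volume susceptibility into a shielding fraction can be sketched as follows. The arrays and values below are synthetic and purely hypothetical; they are not the measured data and only show the arithmetic involved.

\begin{verbatim}
import numpy as np

# Hypothetical resistivity curve rho(T): a smooth drop around 7 K
# (illustrative only; not the measured data).
T = np.linspace(2.0, 10.0, 401)              # temperature (K)
rho_n = 1.0                                  # normal-state resistivity (arb. units)
rho = rho_n / (1.0 + np.exp(-(T - 7.0) / 0.2))

# Midpoint criterion: T_c is where rho(T) crosses half its normal-state value.
# rho increases monotonically with T here, so np.interp is applicable.
Tc = np.interp(0.5 * rho_n, rho, T)
print("T_c (midpoint criterion) = %.2f K" % Tc)

# Shielding fraction from the ZFC volume susceptibility in CGS units:
# a perfect diamagnet has chi_v = -1/(4*pi), so the fraction is -4*pi*chi_v.
chi_v = -0.0696                              # hypothetical value at 1.8 K
print("shielding fraction = %.1f %%" % (-4.0 * np.pi * chi_v * 100.0))
\end{verbatim}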
To extract the Sommerfeld coefficient $\gamma$, the normal-state $C_{\rm p}$ data are fitted by the Debye model, \begin{equation} C_{\rm p}/T = \gamma + \beta T^{2}, \end{equation} where $\beta$ is the phonon specific heat coefficient. This gives $\gamma$ = 3.61 mJ/molK$^{2}$ and $\beta$ = 0.01052 mJ/molK$^{4}$, and then the entropy-conserving construction yields the normalized specific heat jump $\Delta$$C_{\rm p}$/$\gamma$$T_{\rm c}$ = 0.41. With $\beta$, the Debye temperature $\Theta_{\rm D}$ is calculated to be 802 K using the equation \begin{equation} \Theta_{\rm D} = (12\pi^{4} N R/5\beta)^{1/3}, \end{equation} where $N$ = 2.8 is the number of atoms per formula unit and $R$ is the gas constant. Then the electron-phonon coupling constant $\lambda_{\rm ep}$ is estimated to be 0.54 using the inverted McMillan formula \cite{Mcmillam} \begin{equation} \lambda_{\rm ep} = \frac{1.04 + \mu^{\ast} \rm ln(\Theta_{\rm D}/1.45\emph{T}_{\rm c})}{(1 - 0.62\mu^{\ast})\rm ln(\Theta_{\rm D}/1.45\emph{T}_{\rm c}) - 1.04}, \end{equation} where $\mu^{\ast}$ = 0.13 is the Coulomb repulsion pseudopotential. This, together with $\Delta$$C_{\rm p}$/$\gamma$$T_{\rm c}$, implies that (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ is a weak-coupling superconductor. The upper critical field ($B_{\rm c2}$) of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ is determined by resistivity measurements under magnetic fields up to 5 T, the results of which are shown in Fig. 6(d). The application of a magnetic field shifts the resistive transition toward lower temperatures. For each field, $T_{\rm c}$ is determined using the same criterion as at zero field, and the resulting temperature dependence of $B_{\rm c2}$ is plotted in Fig. 6(e). Extrapolating the $B_{\rm c2}$($T$) data to 0 K using the Werthamer-Helfand-Hohenberg model \cite{WHH} gives the zero-temperature upper critical field $B_{\rm c2}$(0) = 5.2 T. This $B_{\rm c2}$(0) is comparable to that of (Mo$_{1-x}$Sc$_{x}$)$_{1-\delta}$B$_{2}$ \cite{SC5} and much smaller than the Pauli paramagnetic limit $B_{\rm P}$(0) = 1.86$T_{\rm c}$ $\approx$ 13.1 T \cite{Paulilimit}, indicating that it is limited by the orbital effect. Once $B_{\rm c2}$(0) is known, the Ginzburg-Landau (GL) coherence length $\xi_{\rm GL}$(0) is calculated to be 8.0 nm according to the equation \begin{equation} \xi_{\rm GL}(0) = \sqrt{\frac{\Phi_{0}}{2\pi B_{\rm c2}(0)}}, \end{equation} where $\Phi_{0}$ = 2.07 $\times$ 10$^{-15}$ Wb is the flux quantum.\\ \begin{figure} \includegraphics*[width=9.2cm]{Fig7.eps} \caption{ Temperature dependencies of resistivity up to 300 K for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$ samples with $x$ = 0.12 and 0.16. The data for (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ (dashed line) are also included for comparison. } \label{fig7} \end{figure} \noindent\textbf{3.4.2 (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$}\\ Figure 7 shows the $\rho$($T$) curves for the (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$ samples with $x$ = 0.12 and 0.16, together with the data for (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ (dashed line) for comparison. For both C-doped samples, $\rho$ decreases smoothly with decreasing temperature, reflecting metallic behavior. Compared with the C-free sample, the $\rho$ magnitude is larger for $x$ = 0.12 while smaller for $x$ = 0.16.
However, no $\rho$ drop is detected down to 1.8 K for the latter two cases, indicating that $T_{\rm c}$ is strongly suppressed for carbon content $x$ $\geq$ 0.12.\\ \noindent\textbf{3.4.3 Mechanism of $T_{\rm c}$ suppression by carbon doping}\\ The overall behavior of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$ is reminiscent of that observed in single crystalline Mg(B$_{1-x}$C$_{x}$)$_{2}$ \cite{MgB2C1}. In the latter case, the suppression of $T_{\rm c}$ is mainly attributed to the effect of band filling since carbon is an electron dopant, which reduces the number of holes at the top of the boron $\sigma$ bands \cite{Tcsuppresion1}. In contrast to MgB$_{2}$, the boron $\sigma$ orbitals are completely filled in MoB$_{2}$ \cite{SC3,MoB2bandstructure}. Instead, the electronic bands near the Fermi level ($E_{\rm F}$) are dominated by the Mo 4$d$ states and the boron states at the $E_{\rm F}$ are $\pi$ bonding in nature \cite{SC3}. While this boron bonding is much weaker than that in MgB$_{2}$, its contribution to $\lambda_{\rm ep}$ is found to still play an important role in achieving the relatively high $T_{\rm c}$ in compressed MoB$_{2}$ \cite{MoB2bandstructure}. It is thus reasonable to speculate that the boron $\pi$ states are also the key to superconductivity in (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$. The carbon substitution for boron donates electrons, which fill the boron $\pi$ bands. This could lead to a reduction in the electron-phonon coupling strength and consequently suppress the superconductivity in (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$. Nonetheless, further studies are necessary to draw a more definitive conclusion. \section{4. Conclusion} In summary, we have investigated the structure and properties of (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$(B$_{1-x}$C$_{x}$)$_{2}$ diborides prepared by the arc-melting method. The samples with $x$ = 0, 0.12, and 0.16 have a nearly single AlB$_{2}$-type phase with a uniform elemental distribution. The carbon doping leads to a slight increase in the $a$-axis, a significant reduction in the $c$-axis, the formation of planar defects along the (100) planes, and a shift of the B 1$s$ peaks towards higher binding energies. Moreover, the C-free (Mo$_{0.96}$Ti$_{0.04}$)$_{0.8}$B$_{2}$ ($x$ = 0) is confirmed to be a bulk superconductor below $T_{\rm c}$ = 7.0 K, while no resistivity drop is observed down to 1.8 K for $x$ = 0.12 and 0.16. This suppression of superconductivity is attributed to a weakening of the electron-phonon coupling as a consequence of the electron filling of the boron $\pi$ bands by carbon doping. Our results not only represent the first study on the effect of carbon doping in transition metal diborides, but also help to better understand the superconductivity in this family of materials. \section*{ACKNOWLEDGEMENT} We thank the foundation of Westlake University for financial support and the Service Center for Physical Sciences at Westlake University for technical assistance in SEM measurements. The work at Zhejiang University is supported by the National Natural Science Foundation of China (12050003).
{ "arxiv_id": "2302.14288", "language": "en", "timestamp": "2023-03-01T02:08:18", "url": "https://arxiv.org/abs/2302.14288", "yymm": "2302" }
\section*{Abstract} \input{0_Abstract.tex} Keywords: Regular reflection; Mach reflection; Moving wedge; Dynamic shock waves; Supersonic flow; Dual solution domain. \input{1_Nomenclature.tex} \section{Introduction} \input{2_Introduction.tex} \section{Computational Model} \subsection{Model Description} \input{3_Model_Description.tex} \subsection{Governing Equations} \input{4_Governing_Equations.tex} \subsection{Computational Domain} \input{5_Computational_Domain.tex} \section{Results and Discussion} \input{6_Results_and_Discussion.tex} \section{Conclusion} \input{7_Conclusion.tex}
{ "arxiv_id": "2302.14245", "language": "en", "timestamp": "2023-03-01T02:06:36", "url": "https://arxiv.org/abs/2302.14245", "yymm": "2302" }
\section{Introduction} \label{sec:intro} Fluid simulation using computers has been widely used for various purposes, such as the analysis of industrial products, the generation of computer graphics, and use in computer games. Although the dynamics of fluids can be described by the Navier-Stokes equations, solving these equations analytically is unrealistic in many cases and, therefore, solving these equations numerically using computers is important. In reality, fluid interacts with various solids. As such, enabling fluid-solid interaction makes simulation more practical. In addition, within the range of daily handling, many types of fluids and solids can be approximated as incompressible fluids or rigid bodies, respectively. Therefore, coupling of incompressible fluids and rigid bodies enables more versatile and complex simulation. Coupled simulation of multiple substances can be broadly divided into two types, namely, weakly coupled simulation and strongly coupled simulation. In weakly coupled simulation, interaction between different substances is computed explicitly, whereas, in strongly coupled simulation, this interaction is implicitly computed. Therefore, strongly coupled simulation is more stable and accurate than weakly coupled simulation when the same time step size is adopted. Rigid-body simulation has long been researched, and impulse-based methods~\cite{mirtich1995impulse} are among the main computation methods. The most important part of rigid-body computation is contact computation, and impulse-based methods deal with contacts in a unified manner using the impulse, which integrates force over a small time step. Since simulation requires integration over time, impulse is easier to handle than force, and using impulses avoids mathematical and numerical problems~\cite{baraff1991coping, anitescu1997formulating}. If we consider only normal forces, we can formulate contact constraints using linear complementarity problems (LCPs)~\cite{baraff1994fast}. When we take friction into consideration, the problem is no longer linear and we have to solve nonlinear complementarity problems (NCPs). However, we can approximate friction to make things less complicated. For example, Tonge et al.~\cite{tonge2012mass} formulated frictional constraints using boxed LCPs by applying a pyramid approximation to the Coulomb friction cone. Moreover, Gholami et al.~\cite{gholami2016linear} adopted a continuity approximation to friction to formulate the entire contact problem using LCPs. On the other hand, we can also handle the exact Coulomb friction cone without approximation by applying an extension to the projected Gauss-Seidel (PGS) method, which is generally used to solve LCPs~\cite{erleben2005physics}. We also use the PGS method and its extension for solving all kinds of constraints that appear in simulation. When we formulate rigid-body contacts using LCPs, we have to predict the relative velocity after collisions prior to contact computation. This prediction is usually performed using the relationship between the relative velocity and the coefficient of restitution. However, when multiple collisions occur simultaneously, it is difficult to explicitly compute the accurate velocity after these collisions, and LCPs may give physically incorrect results.
In addition, depending on the positions and number of contact points, the coefficient matrix of an LCP may not become a symmetric positive-definite matrix, and therefore the solution of an iterative method may not converge to a unique solution~\cite{drumwright2007fast}. Although we do not adopt them in this study, several approaches have been presented to solve these problems. For example, Tang et al.~\cite{tang2014impulse} developed the energy tracking impulse (ETI) method, which does not require solving LCPs to compute the impulse. In the ETI method, by tracking energy during collisions, the physically accurate impulse can be calculated without explicitly giving the relative velocity after collision, and an improvement of this method was proposed by Li et al.~\cite{li2020energy} for particle-based rigid-body simulation. Linear complementarity problems have long been used for rigid-body computation, but there are only a few examples of the use of LCPs for fluid computation. Batty et al.~\cite{batty2007fast} used an LCP to formulate wall boundary conditions, and Bodin et al.~\cite{bodin2011constraint} proposed a method by which to simultaneously formulate and solve constant-density constraints and constraints for wall boundary conditions as a mixed LCP. In addition, Gerszewski and Bargteil~\cite{gerszewski2013physics} developed a method by describing the incompressibility of a fluid using inequalities and formulating the resulting pressure equation with an LCP. Coupled simulation of fluid using a particle method and rigid bodies has long been researched. In the context of the smoothed particle hydrodynamics (SPH) method, Monaghan et al.~\cite{monaghan2003fluid} developed a method that enables coupled simulation of fluid and rigid bodies by representing rigid bodies with particles and computing the interaction force between neighboring fluid particles and rigid-body particles. Later, a more stable method was developed by Oh et al.~\cite{oh2009impulse} to compute the interaction based on impulse rather than force, including inter-rigid-body collisions. Akinci et al.~\cite{akinci2012versatile} improved the computation at the boundaries between particle-based rigid bodies and fluid and generalized the explicit computation of the interaction so that it can handle the coupled simulation of rigid bodies with fluid computed by either the predictive-corrective SPH method~\cite{solenthaler2009predictive} or the weakly compressible SPH method. Macklin et al.~\cite{macklin2014unified} proposed a method that enables coupled simulation of liquid, gas, solid, and cloth on GPUs by approximating rigid bodies using shape matching techniques. Coupling of rigid bodies and an incompressible fluid has also been researched. Li and Asai~\cite{li2018fluid} proposed a method to couple an incompressible fluid using the incompressible SPH method~\cite{cummins1999sph} and rigid bodies that are computed using an impulse-based method. There is also a method, which was developed by Klingner et al.~\cite{klingner2006fluid}, to strongly couple an incompressible fluid and rigid bodies using meshes that are dynamically updated during simulation. In the field of computer graphics, incompressible fluid simulation with a position-based formulation has been proposed~\cite{bender2017survey}, which can easily be coupled with other position-based methods. However, position-based methods cannot capture some types of physical phenomena, such as the variation of the coefficient of restitution in rigid-body simulation.
In addition, the moving particle semi-implicit (MPS) method~\cite{koshizuka1996moving} has been used to study the coupling of fluids and rigid bodies. Shibata et al.~\cite{shibata2012lagrangian} simulated shipping water on a ship to analyze its effect using weak coupling of fluid and a rigid body. In addition, Shibata et al.~\cite{shibata2013numerical} simulated a lifeboat falling into water, also using weak coupling of fluid and a rigid body, by transferring the fluid pressure to the rigid body. Koshizuka et al.~\cite{koshizuka1998numerical} developed the passively moving solid (PMS) model, which computes rigid bodies with particles by first treating rigid-body particles as fluid particles during the pressure and advection computation and then cancelling out the deformation of the rigid bodies to restore their shapes. The PMS model can handle not only the interaction between rigid bodies and fluid but also that between rigid bodies and other rigid bodies. Tanaka et al.~\cite{tanaka2007particle} performed a coupled simulation of rigid bodies and an incompressible fluid using a penalty method for the inter-rigid-body contact calculation and the PMS model for the interaction of rigid bodies and fluid. Gotoh et al.~\cite{gotoh2006three} computed a flood flow with floating objects using the MPS method and the PMS model for fluid-and-solid coupled simulation. The PMS model itself is not necessarily combined with the MPS method, and the same approach has been used with the SPH method~\cite{bouscasse2013nonlinear} and the weakly compressible SPH method~\cite{ren2015nonlinear} to compute floating rigid bodies. This approach has also been used in the field of computer graphics~\cite{carlson2004rigid}. Large-scale computation of fluid and rigid bodies has been performed by Murotani et al.~\cite{murotani2014development} using the explicit MPS method for the fluid and the PMS model for the interaction of fluid and rigid bodies. They divided the computation space into multiple small spaces in order to make the simulation run in parallel~\cite{murotani2014development}. Wang et al.~\cite{wang2019numerical} simulated floating bodies transported by fluid using the MPS method and the PMS model for an incompressible fluid and the interaction of rigid bodies and fluid, as well as the discrete element method for explicitly calculated inter-rigid-body contacts. In the MPS method, the pressure of a fluid is implicitly computed by solving the pressure Poisson equation, and thus the pressure field of rigid bodies in the PMS method is implicitly calculated along with that of the fluid when this model is used in combination with the MPS method. As far as we know, there is no method other than ours that achieves fully strongly coupled simulation of incompressible fluid and rigid bodies using a particle method and velocity-based formulations. Rigid-body computation in impulse-based methods is performed in a velocity-based manner. Therefore, by formulating the incompressibility constraint of the fluid in a velocity-based manner as well, we can expect to simultaneously compute both the impulse due to the collision of rigid bodies and the pressure of an incompressible fluid implicitly~\cite{miyamoto2020strong}. In this study, we modify and generalize our previous work~\cite{miyamoto2020strong}. Although the strongly coupled simulation itself has already been achieved in~\cite{miyamoto2020strong}, the core technique of the method depends heavily on LCP formulations.
In the present paper, we extend the core technique that enables the strongly coupled simulation by introducing the abstract concept of ``velocity-based constraints'' so that it can be applied to a wider range of problems. The notion of velocity-based constraints provides a very flexible framework for strongly coupled simulation of various substances, not limited to fluids and rigid bodies. The remainder of the present paper is organized as follows. In Section~\ref{sec:fluid}, we introduce the proposed particle method to compute incompressible fluids. We also define the velocity-based constraints in this section. In Section~\ref{sec:rigid}, we describe the impulse-based rigid-body computation and the way to build velocity-based constraints to solve inter-rigid-body collisions. Then, in Section~\ref{sec:rigid-fluid}, the strong coupling of rigid bodies and an incompressible fluid is proposed. We present some numerical examples in Section~\ref{sec:example} in order to confirm the behavior and accuracy of the proposed method. Finally, we present the conclusion in Section~\ref{sec:conclusion}. \section{Fluid Simulation} \label{sec:fluid} In this section, we introduce a velocity-based method to simulate incompressible fluids. We use $\bs{r}$, $\bs{u}$, $m$, $\rho$, $p$, $\nu$, and $\bs{f}$ to represent position, velocity, mass, density, pressure, kinematic viscosity, and acceleration due to external forces, respectively. We use subscript $\phi_i$ to represent arbitrary physical quantity $\phi$ of particle $i$. \subsection{Governing Equation} \label{sec:fluid:gov} We use the Navier-Stokes equation \begin{align} \label{eq:fluid:ns} \frac{D\bs{u}}{Dt} = -\frac1\rho\nabla p + \nu\nabla^2\bs{u} + \bs{f} \end{align} and the continuity equation \begin{align} \label{eq:fluid:conti} \nabla\cdot\bs{u} = 0 \end{align} as the governing equations of incompressible fluids. Here, we assume that the density and the viscosity are constant. Note that $D\bs{u}/Dt$ on the left-hand side of \eqref{eq:fluid:ns} represents the Lagrangian derivative of $\bs{u}$. \subsection{Weighting Function} \label{sec:fluid:weighting} We denote the weighting function as $w$ and the effective radius as $r_e$. In the proposed method, the derivative $w'$ of the weighting function is used during computation, so it is preferable that $w'$ be continuous around effective radius $r_e$. Thus, we use the following function as the weighting function: \begin{align} w(r) = \begin{cases} (1-\frac{r}{r_e})^2 & (r<r_e) \\ 0 & (\text{otherwise}) \end{cases} \end{align} Using the weighting function, we define the particle number density of particle $i$ as \begin{align} \label{eq:fluid:pnd} n_i = \sum_{j\neq i} w(r_{ij}), \end{align} where $r_{ij} = \norm{\bs{r}_{ij}}$ and $\bs{r}_{ij} = \bs{r}_j-\bs{r}_i$. We also define the standard particle number density $n^0$ as the particle number density, in the initial particle arrangement, of a particle placed inside a sufficient volume of fluid. We use $l$ to denote the interval between particles in the initial particle arrangement. \subsection{Spatial Discretization} \label{sec:fluid:disc} We define the discretized gradient model as \begin{align} \label{eq:fluid:grad} \ang{\nabla\phi}_i = C \sum_{j\neq i} (\phi_i + \phi_j) w'(r_{ij}) \frac{\bs{r}_{ij}}{r_{ij}}, \end{align} which is widely used in various SPH-based methods, except for the scalar constant $C$.
We introduce the constant because the most standard gradient model used in the SPH method tends to give an incorrect value, especially when the effective radius is not large enough. The value of $C$ is chosen so that the gradient model shows the correct value in the initial particle arrangement. More precisely, $C$ is computed as \begin{align} C = \left(\sum_{j\neq i} (\bs{r}_i+\bs{r}_j)_x w'(r_{ij}) \frac{(\bs{r}_{ij})_x}{r_{ij}}\right)^{-1} \end{align} in the initial particle arrangement, where $(\cdot)_x\colon \Rn{3}\to\mathbb{R}$ is the first component of a vector and particle $i$ is inside a sufficient volume of the fluid. Using the gradient model of \eqref{eq:fluid:grad}, the pressure term for particle $i$ is discretized as follows: \begin{align} \label{eq:fluid:pgrad} -\frac1\rho\langle\nabla p\rangle_i=-\frac C\rho \sum_{j\neq i} (p_i + p_j) w'(r_{ij}) \frac{\bs{r}_{ij}}{r_{ij}}. \end{align} \subsection{Incompressibility Constraint} \label{sec:fluid:incompl} Since the particle number density is proportional to the fluid density, we can adopt the following equation as the incompressibility condition: \begin{align} \label{eq:fluid:incompl} n_i=n^0. \end{align} From \eqref{eq:fluid:incompl}, we obtain \begin{align} \label{eq:fluid:nulldiv} \frac{dn_i}{dt}=0. \end{align} By the definition of the particle number density, we can directly calculate the left-hand side of \eqref{eq:fluid:nulldiv} as: \begin{align} \frac{dn_i}{dt} & = \frac{d}{dt} \sum_{j\neq i} w(r_{ij}) \\ & = \sum_{j\neq i} w'(r_{ij}) \frac{d}{dt}\norm{\bs{r}_j-\bs{r}_i} \\ & = \sum_{j\neq i} w'(r_{ij}) \frac{(\bs{r}_j-\bs{r}_i)\cdot(\bs{u}_j-\bs{u}_i)}{\norm{\bs{r}_j-\bs{r}_i}} \\ & = \sum_{j\neq i} w'(r_{ij}) (\bs{u}_j-\bs{u}_i)\cdot\frac{\bs{r}_{ij}}{r_{ij}}, \end{align} and thus the discretized version of the zero-divergence condition of \eqref{eq:fluid:conti} can be written as: \begin{align} \label{eq:fluid:dconti} \sum_{j\neq i} w'(r_{ij}) (\bs{u}_j-\bs{u}_i)\cdot\frac{\bs{r}_{ij}}{r_{ij}} = 0. \end{align} However, solving only \eqref{eq:fluid:dconti} as the constraint in actual computation makes the numerical error accumulate as the computation progresses, which allows the fluid to be gradually compressed, and eventually the computation collapses. There are several approaches to avoid this problem. One approach is to utilize the particle shifting techniques \cite{xu2009accuracy, lind2012incompressible} to maintain the volume of the fluid constant. Another approach is to mix the positional constraint violation term to the velocity constraint. The latter is easier to handle because this approach requires few changes to the entire procedure and keeps the equations to be solved linear. We use this approach to solve the problem. Let $\alpha\in[0,1]$ be a constant to control the amount of mixing. We use the following equation to obtain the constraint: \begin{align} \label{eq:fluid:mixed} \sum_{j\neq i} w'(r_{ij}) (\bs{u}_j-\bs{u}_i)\cdot\frac{\bs{r}_{ij}}{r_{ij}} & = \frac\alpha h (n^0 - n_i), \end{align} where $h$ is the time step size. Setting $\alpha$ to zero means that there is no positional term and \eqref{eq:fluid:mixed} is equal to \eqref{eq:fluid:dconti}, whereas setting $\alpha$ to one means that an attempt is made to cancel out all positional constraint violations at the next time step. However, using a large value of $\alpha$ does not work well due to the linear approximation and makes the simulation unstable, so we use 0.05 for the value of $\alpha$ throughout the present paper. 
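As a concrete illustration, the following minimal NumPy sketch evaluates both sides of constraint \eqref{eq:fluid:mixed} for a small set of particles. All array names and numerical values are hypothetical, the double loop is used only for clarity, and boundary particles are not treated specially; this is not part of the proposed implementation.

\begin{verbatim}
import numpy as np

def w(r, re):
    """Weighting function w(r) = (1 - r/re)^2 for r < re, and 0 otherwise."""
    return np.where(r < re, (1.0 - r / re) ** 2, 0.0)

def dw(r, re):
    """Derivative w'(r) = -(2/re)(1 - r/re) for r < re, and 0 otherwise."""
    return np.where(r < re, -2.0 / re * (1.0 - r / re), 0.0)

def mixed_constraint(pos, vel, re, n0, alpha, h):
    """Per particle, return the velocity term (left-hand side) and the
    positional mixing term (right-hand side) of the constraint."""
    N = pos.shape[0]
    lhs = np.zeros(N)
    n = np.zeros(N)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rij = pos[j] - pos[i]
            r = np.linalg.norm(rij)
            if r >= re:
                continue
            n[i] += w(r, re)
            lhs[i] += dw(r, re) * np.dot(vel[j] - vel[i], rij / r)
    return lhs, alpha / h * (n0 - n)

# Three particles on a line with slightly converging velocities.
pos = np.array([[0.0, 0.0], [0.02, 0.0], [0.04, 0.0]])
vel = np.array([[0.001, 0.0], [0.0, 0.0], [-0.001, 0.0]])
lhs, rhs = mixed_constraint(pos, vel, re=0.042, n0=0.55, alpha=0.05, h=0.001)
print(lhs - rhs)   # the pressure computation drives this residual toward zero
\end{verbatim}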
\subsection{Time Integration} \label{sec:fluid:int} In this method, we use a semi-implicit scheme for the time integration, which is similar to the MPS method. The computation in each time step is performed roughly as follows. We first calculate the viscosity and the external force terms explicitly to obtain temporal velocity $\bs{u}^*$, and then we compute the pressure to correct the temporal velocity so that $\bs{u}^*$ satisfies constraint \eqref{eq:fluid:mixed}. Finally, we adopt the corrected temporal velocity as the velocity in the next time step, and update the position using the velocity. We use integer $t$ to denote a time step, and use superscript $\phi^t$ to denote arbitrary physical quantity $\phi$ at time $t$. We do not define the detailed algorithm for computing explicit terms, as there are many acceptable ways to compute these terms and the subsequent procedure is not affected as long as we can obtain temporal velocity $\bs{u}^*$. In the following discussion, we focus on the computation of $\bs{u}^{t+1}$ and $p^{t+1}$. Let $\bs{r}^t$ and $\bs{u}^t$ be the position and the velocity of particles, respectively, at time $t$. After obtaining temporal velocity $\bs{u}^*$ by explicit calculation, we first initialize pressure $p^{t+1}$ with zero. Pressure $p^{t+1}$ plays a role in correcting temporal velocity $\bs{u}^*$ through the pressure term $-\frac1\rho\ang{\nabla p^{t+1}}$. In order to derive an equation to obtain $p^{t+1}$, assume that $\bs{u}^{t+1} = \bs{u}^* - \frac h\rho\ang{\nabla p^{t+1}}$ holds and substitute $\bs{u}^{t+1}$ into constraint \eqref{eq:fluid:mixed} to obtain \begin{align} \frac\alpha h (n^0 - n_i) & = \sum_{j\neq i} w'(r_{ij}) (\bs{u}_j^{t+1}-\bs{u}_i^{t+1})\cdot\frac{\bs{r}_{ij}}{r_{ij}} \\ & = \sum_{j\neq i} w'(r_{ij}) \left(\bs{u}_j^*-\bs{u}_i^*-\frac h\rho\ang{\nabla p^{t+1}}_j+\frac h\rho\ang{\nabla p^{t+1}}_i\right)\cdot\frac{\bs{r}_{ij}}{r_{ij}}, \end{align} or, equivalently, \begin{align} & \frac\alpha h (n^0 - n_i) - \sum_{j\neq i} w'(r_{ij}) (\bs{u}_j^*-\bs{u}_i^*)\cdot\frac{\bs{r}_{ij}}{r_{ij}} \\ \label{eq:fluid:eqlast} & = h\frac C\rho\sum_{j\neq i} w'(r_{ij}) \left( \sum_{k\neq i} (p_i + p_k) w'(r_{ik}) \frac{\bs{r}_{ik}}{r_{ik}} -\sum_{k\neq j} (p_j + p_k) w'(r_{jk}) \frac{\bs{r}_{jk}}{r_{jk}} \right)\cdot\frac{\bs{r}_{ij}}{r_{ij}}. \end{align} Let $N$ be the number of particles; $\bs{b}\in\Rn N$ be a vector, the $i$th component of which is $\frac\alpha h (n^0 - n_i) - \sum_{j\neq i} w'(r_{ij}) (\bs{u}_j^*-\bs{u}_i^*)\cdot\frac{\bs{r}_{ij}}{r_{ij}}$; and $\bs{p}\in\Rn N$ be a vector, the $i$th component of which is $p_i^{t+1}$. Then, there exists a matrix $\bs{A}\in\Rn{N\times N}$ such that $\bs{A}\bs{p}=\bs{b}$ is equivalent to equation \eqref{eq:fluid:eqlast}. Define $\bs{v}_{ij}\in\Rn3$ as \begin{align} \bs{v}_{ij} = \begin{cases} \bs{0} & (i=j) \\ w'(r_{ij})\dfrac{\bs{r}_{ij}}{r_{ij}} & (i\neq j). 
\end{cases} \end{align} Then, the right-hand side of \eqref{eq:fluid:eqlast} can be written as \begin{align} & \phantom{=} h\frac C\rho\sum_{j\neq i}\sum_{k\neq i}(p_i+p_k)w'(r_{ij})\frac{\bs{r}_{ij}}{r_{ij}}\cdot w'(r_{ik})\frac{\bs{r}_{ik}}{r_{ik}} \\ & \phantom{=} -h\frac C\rho\sum_{j\neq i}\sum_{k\neq j}(p_j+p_k)w'(r_{ij})\frac{\bs{r}_{ij}}{r_{ij}}\cdot w'(r_{jk})\frac{\bs{r}_{jk}}{r_{jk}} \\ & = h\frac C\rho\sum_{j,k}(p_i+p_k)\bs{v}_{ij}\cdot\bs{v}_{ik} - h\frac C\rho\sum_{j,k}(p_j+p_k)\bs{v}_{ij}\cdot\bs{v}_{jk} \\ & = h\frac C\rho\biggl(p_i\sum_{j,k}\bs{v}_{ij}\cdot\bs{v}_{ik} + \sum_jp_j\sum_k\bs{v}_{ij}\cdot\bs{v}_{ik} \\ & \phantom{=} -\sum_jp_j\sum_k\bs{v}_{ij}\cdot\bs{v}_{jk} + \sum_jp_j\sum_k\bs{v}_{ik}\cdot\bs{v}_{jk}\biggr) \\ & = p_ih\frac C\rho\left(\norm{\sum_k\bs{v}_{ik}}^2+\sum_k\norm{\bs{v}_{ik}}^2\right) \\ & \phantom{=} +\sum_{j\neq i}p_jh\frac C\rho\sum_k\left(\bs{v}_{ij}\cdot\bs{v}_{ik}-\bs{v}_{ij}\cdot\bs{v}_{jk}+\bs{v}_{ik}\cdot\bs{v}_{jk}\right). \end{align} Therefore, the components of matrix $\bs{A}$ are \begin{align} A_{ij} = h\frac C\rho\begin{dcases} \norm{\sum_k\bs{v}_{ik}}^2+\sum_k\norm{\bs{v}_{ik}}^2 & (i=j) \\ \sum_k\left(\bs{v}_{ij}\cdot\bs{v}_{ik}-\bs{v}_{ij}\cdot\bs{v}_{jk}+\bs{v}_{ik}\cdot\bs{v}_{jk}\right) & (i\neq j). \end{dcases} \end{align} Let $\bs{J}_i\in\Rn{3N}$ be a vector \begin{align} \bs{J}_i = \begin{pmatrix} \bs{v}_{i,1} \\ \vdots \\ \bs{v}_{i,i-1} \\ -\sum_j\bs{v}_{i,j} \\ \bs{v}_{i,i+1} \\ \vdots \\ \bs{v}_{i,N} \end{pmatrix}, \end{align} and $\bs{J}\in\Rn{3N\times N}$ be a matrix $\bs{J}=(\bs{J}_1 \cdots \bs{J}_N)$. Then, matrix $\bs{A}$ can be decomposed into $\bs{A}=h\frac C\rho\bs{J}^T\bs{J}$. Indeed, the inner product of $\bs{J}_i$ and $\bs{J}_j$ times $h\frac C\rho$ is \begin{align} h\frac C\rho\bs{J}_i\cdot\bs{J}_j & = h\frac C\rho\begin{dcases} \sum_k\bs{v}_{ik}\cdot\bs{v}_{ik} + (\sum_k\bs{v}_{ik})\cdot(\sum_k\bs{v}_{ik}) & (i=j) \\ \sum_{k\neq i,j} \bs{v}_{ik}\cdot\bs{v}_{jk} - \bs{v}_{ij}\cdot\sum_k\bs{v}_{jk} - \bs{v}_{ji}\cdot\sum_k\bs{v}_{ik} & (i\neq j) \end{dcases} \\ & = h\frac C\rho\begin{dcases} \sum_k\norm{\bs{v}_{ik}}^2 + \norm{\sum_k\bs{v}_{ik}}^2 & (i=j) \\ \sum_k(\bs{v}_{ik}\cdot\bs{v}_{jk} - \bs{v}_{ij}\cdot\bs{v}_{jk}+\bs{v}_{ij}\cdot\bs{v}_{ik}) & (i\neq j) \end{dcases} \\ & = A_{ij}, \end{align} and, therefore, if the particles are well arranged, in other words, if matrix $\bs{J}$ is not degenerated, then matrix $\bs{A}$ is symmetric positive definite. As discussed above, we need to solve the equation $\bs{A}\bs{p}=\bs{b}$ with respect to $\bs{p}$ in order to correct the temporal velocity so that constraint \eqref{eq:fluid:mixed} is satisfied. However, directly using the pressure obtained by solving the equation may make the simulation unstable. Particle methods sometimes behave poorly when there is an attractive force between particles. Considering the gradient model of \eqref{eq:fluid:grad}, we see that the attractive force due to the pressure works between particles $i$ and $j$ if and only if $p_i+p_j<0$ holds for some $i$ and $j$. This fact leads us to an easy sufficient condition: there is no attractive force if $p_i\geq0$ holds for every particle $i$. The simplest way to make the pressure nonnegative is just to project the solution of $\bs{A}\bs{x}=\bs{b}$ onto $\mathbb{R}_+^N$, where $\mathbb{R}_+=[0,\infty)$. However, this modification of the solution breaks the equilibrium between the pressure of each particle. 
If the pressure of particle $i$ is changed from negative to zero, then the $j$th row of equation $\bs{A}\bs{p}=\bs{b}$ will no longer hold for each particle $j$ that is a neighboring particle of particle $i$. We need $\bs{p}$ to satisfy $\bs{A}\bs{p}=\bs{b}$ as much as possible, whereas $\bs{p}$ is forced to be nonnegative. These requirements can be satisfied by finding $\bs{p}$ that satisfies \begin{align} \label{eq:fluid:lcp1} (\bs{p})_i & \geq0 \\ \label{eq:fluid:lcp2} (\bs{A}\bs{p}-\bs{b})_i & \geq0 \\ \label{eq:fluid:lcp3} (\bs{p})_i(\bs{A}\bs{p}-\bs{b})_i & =0 \end{align} for each $i$, where $(\cdot)_i$ is the $i$th component of a vector. Inequality \eqref{eq:fluid:lcp1} is for the nonnegativity of $\bs{p}$. Inequality \eqref{eq:fluid:lcp2} is for the requirement that $\bs{A}\bs{p}=\bs{b}$ should hold as much as possible. If $(\bs{A}\bs{p}-\bs{b})_i<0$ holds, then we can make $(\bs{A}\bs{p}-\bs{b})_i=0$ hold by increasing $(\bs{p})_i$, because $\bs{A}$ is positive definite and, therefore, $A_{ii}>0$. Equation \eqref{eq:fluid:lcp3} has the same requirement that $\bs{A}\bs{p}=\bs{b}$ should hold as much as possible. If $(\bs{p})_i(\bs{A}\bs{p}-\bs{b})_i>0$ holds, then $(\bs{p})_i$ is unnecessarily large, and either $(\bs{p})_i=0$ or $(\bs{A}\bs{p}-\bs{b})_i=0$ can be achieved by decreasing $(\bs{p})_i$ because $A_{ii}>0$ holds. The problem of finding $\bs{p}$ that satisfies \eqref{eq:fluid:lcp1}, \eqref{eq:fluid:lcp2}, and \eqref{eq:fluid:lcp3} is called a linear complementarity problem (LCP). An LCP has a unique solution if the coefficient matrix of the problem is symmetric positive definite. Since $\bs{A}$ is proven to be symmetric positive definite, there exists a unique solution of the problem. We configured matrix $\bs{A}$ in order to define the problem. However, in actual computation, we do not compute the matrix explicitly. Instead, we use temporal velocity $\bs{u}^*$ to iteratively solve this model. Since the constraint is originally imposed on the velocity, we repeatedly apply the change in pressure to the temporal velocity using the gradient model of \eqref{eq:fluid:pgrad}, while determining the change in pressure by computing the current deviation of $\bs{u}^*$ from constraint \eqref{eq:fluid:mixed}. More precisely, we iterate the following steps until some stopping criteria are satisfied. For each particle $i$, we first compute the current deviation of constraint $\delta_i$ as \begin{align} \delta_i = \frac\alpha h (n^0 - n_i) - \sum_{j\neq i} w'(r_{ij}) (\bs{u}_j^*-\bs{u}_i^*)\cdot\frac{\bs{r}_{ij}}{r_{ij}}. \end{align} Then, we compute the next value of pressure $p'_i$ as \begin{align} p'_i = \max\left\{0,\ p_i^{t+1} + \frac{\delta_i}{A_{ii}}\right\}. \end{align} Finally, we update pressure $p_i^{t+1}$ and apply the difference of it to temporal velocity $\bs{u}^*$: \begin{align} \Delta p_i & = p'_i - p_i^{t+1} \\ p_i^{t+1} & \gets p'_i \\ \bs{u}_i^* & \gets \bs{u}_i^* - h\frac C\rho \sum_{j\neq i} \Delta p_i w'(r_{ij}) \frac{\bs{r}_{ij}}{r_{ij}} \\ \bs{u}_j^* & \gets \bs{u}_j^* + h\frac C\rho \Delta p_i w'(r_{ij}) \frac{\bs{r}_{ij}}{r_{ij}}\quad(j\neq i). \end{align} This iterative approach actually corresponds to the projected Gauss-Seidel (PGS) method, which is an iterative method to solve LCPs. The tentative solution is updated repeatedly in a Gauss-Seidel manner and is projected onto the nonnegative space. Remarkably, we can compute pressure $p^{t+1}$ based only on temporal velocity $\bs{u}^*$. 
When we update the pressure of a particle, we no longer need to refer to the pressure of the neighboring particles because that information can be obtained through the temporal velocity of the neighboring particles. This makes the method very flexible because we can actually apply external forces and change the temporal velocity in the middle of the iteration process. When this happens, the pressure starts adapting to the new environment and finally converges to a different distribution from that which was previously assumed. This enables us to strongly couple the fluid simulation with different types of simulation, as long as these simulations only have velocity-based constraints. We define velocity-based constraints here. A constraint in a system is referred to as velocity-based if it requires only the temporal velocity of the system to iteratively update its constraint force or impulse, and only the difference of that force or impulse is applied back to update the temporal velocity. A velocity-based constraint must not require knowledge of the information on other constraints that cannot be obtained through the temporal velocity of the system. By this definition, we can say that each particle $i$ has a velocity-based constraint, the constraint force of which is $p_i^{t+1}$. We can update the constraint force and reapply this force to the temporal velocity without knowing the pressure of the other particles explicitly. When the iteration process is terminated, we adopt the temporal velocity as the velocity in the next time step by $\bs{u}_i^{t+1}=\bs{u}_i^*$ for each particle $i$. Then, we update the position of each particle $i$ by $\bs{r}_i^{t+1}=\bs{r}_i^t+h\bs{u}_i^{t+1}$ to finish the time step. \subsection{Smoothing Pressure} As we will show in Section \ref{sec:example}, the computed value of the raw pressure $p$ contains a lot of noise, even though the fluid as a whole behaves very smoothly. However, we can actually obtain a smooth and sufficiently accurate pressure field by smoothing $p$ to some extent. A similar approach was presented by Kondo~\cite{kondo2020physically} to reduce the heavy pressure noise. In \cite{kondo2020physically}, the virial theorem is used to make the smoothing process physically meaningful. Considering the balance of smoothness and locality, we define smoothed pressure $\tilde p_i$ for each particle $i$ by: \begin{align} \label{eq:fluid:spressure} \tilde p_i = \frac{\sum_j w_\text{smooth}(r_{ij})p_j}{\sum_j w_\text{smooth}(r_{ij})}, \end{align} which is the weighted average of the pressure with weighting function $w_\text{smooth}$. We use the following weighting function for smoothing: \begin{align} w_\text{smooth}(r) = \begin{cases} (r_e^2-r^2)^3 & (r<r_e) \\ 0 & (\text{otherwise}), \end{cases} \end{align} which is proportional to a standard weighting function in the SPH method and has the same effective radius as the weighting function used in the main fluid computation. \section{Rigid-body Simulation} \label{sec:rigid} In this section, we introduce a rigid-body simulation method that can be strongly coupled with the incompressible fluid simulation introduced in the last section. As mentioned previously, in order to enable strong coupling, the rigid-body simulation must only have velocity-based constraints. This requirement, however, is not hard to satisfy because most of the constraints imposed on rigid bodies can be written as velocity-based constraints, which we defined in the last section.
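To make the notion concrete, a minimal sketch of the interface implied by this definition is given below; the class and method names are hypothetical and the code only indicates the structure. The pressure constraint of the previous section and the contact constraints introduced in this section both follow this pattern of accumulating a non-negative scalar and writing only its change back to the temporal velocities.

\begin{verbatim}
class VelocityBasedConstraint:
    """A constraint that needs only the temporal velocities of the system:
    it updates its own accumulated impulse (or pressure) and applies
    only the difference of that impulse back to the temporal velocities."""

    def __init__(self, gain):
        self.gain = gain      # e.g. 1/A_ii for pressure, m_ABn for contacts
        self.value = 0.0      # accumulated pressure or normal impulse

    def deviation(self):
        """Target velocity minus current constraint velocity (overridden)."""
        raise NotImplementedError

    def apply(self, delta):
        """Push the impulse change into the temporal velocities (overridden)."""
        raise NotImplementedError

    def relax(self):
        """One projected Gauss-Seidel step with projection onto [0, inf)."""
        new_value = max(0.0, self.value + self.gain * self.deviation())
        delta, self.value = new_value - self.value, new_value
        self.apply(delta)

def solve(constraints, iterations=20):
    """Strong coupling: all constraints, whatever substance they come from,
    are relaxed together in the same Gauss-Seidel sweep."""
    for _ in range(iterations):
        for c in constraints:
            c.relax()
\end{verbatim}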
We use $\bs{x}$, $\bs{v}$, $\bs\omega$, $m$, $\bs{I}$, $\bs{R}$, and $\bs{q}$ to represent the center of gravity, linear velocity, angular velocity, mass, moment of inertia, rotation matrix, and a unit quaternion that represents the orientation, respectively, of a rigid body. \subsection{Time Integration} \label{sec:rigid:time} Let $h$ be the time step size. We use the same notation as in Section \ref{sec:fluid}. We use $\phi^t$ to denote physical quantity $\phi$ at time $t$. We integrate the center of gravity $\bs{x}$ of a rigid body as follows: \begin{align} \bs{x}^{t+1} = \bs{x}^t + h\bs{v}^{t+1}. \end{align} In addition, we integrate quaternion $\bs{q}$ of a rigid body as follows: \begin{align} \label{eq:rigid:qint} \bs{q}^{t+1} = \bs{d}\bs{q}^{t+1}\bs{q}^t, \end{align} where $\bs{d}\bs{q}^{t+1}$ is a unit quaternion that represents a rotation, the rotation axis of which is $\bs\omega^{t+1}$ and the rotation angle of which is $h\norm{\bs\omega^{t+1}}$. Since the moment of inertia tensor $\bs{I}$ depends on the orientation of the rigid body, using only \eqref{eq:rigid:qint} does not accurately conserve angular momentum $\bs{I}\bs\omega$. However, computing an accurate angular velocity requires implicit calculation due to a numerical instability and is therefore costly. In the present paper, we assume that a change in angular velocity only occurs due to an external force applied to the rigid body. If the conservation of the angular momentum is important, a gyroscopic force can be calculated as an external force, which can be applied to a rigid body. The linear velocity and the angular velocity at time $t+1$ are computed as follows. Starting from the previous velocities $\bs{v}^t$ and $\bs\omega^t$, we first compute temporal velocities $\bs{v}^*$ and $\bs\omega^*$ by applying external forces explicitly and then correct the velocities by applying constraint impulses iteratively, so that they satisfy all of the velocity-based constraints. Finally, we adopt the corrected temporal velocities as the velocities at time $t+1$ as $\bs{v}^{t+1}=\bs{v}^*$ and $\bs\omega^{t+1}=\bs\omega^*$. \subsection{Contact Constraint} \label{sec:rigid:contact} When two rigid bodies touch or collide, we generate contact points between these bodies and handle them as contact constraints. Contact points are usually generated so that they form the convex hull of the touching area. In the present paper, each contact point is treated as an independent constraint, and we compute the constraint impulse of each contact point independently. Consider that two rigid bodies, namely, $A$ and $B$, are touching at point $\bs{p}$ with the unit contact normal $\bs{n}$, where the contact normal is directed from rigid body $B$ to rigid body $A$ at the contact point. We use subscript $\phi_X$ to denote arbitrary physical quantity $\phi$ of rigid body $X$. Let $\bs{r}_A=\bs{p}-\bs{x}_A$ and $\bs{r}_B=\bs{p}-\bs{x}_B$ be the relative contact positions, and let $N$ be the contact impulse along $\bs{n}$. Then, impulses $N\bs{n}$ and $-N\bs{n}$ are applied to rigid bodies $A$ and $B$, respectively. The changes in velocities of each rigid body caused by the impulses are as follows: \begin{align} \Delta\bs{v}_A & = \frac1{m_A}N\bs{n} \\ \Delta\bs\omega_A & = \inv{I_A} (\bs{r}_A\times N\bs{n}) \\ \Delta\bs{v}_B & = -\frac1{m_B}N\bs{n} \\ \Delta\bs\omega_B & = -\inv{I_B} (\bs{r}_B\times N\bs{n}). 
\end{align} We define the relative velocity at contact point $\bs{v}_{AB}$ by \begin{align} \bs{v}_{AB} = (\bs{v}_A+\bs\omega_A\times\bs{r}_A) - (\bs{v}_B+\bs\omega_B\times\bs{r}_B). \end{align} Then, the change in relative velocity $\Delta\bs{v}_{AB}$ can be written as \begin{align} \Delta\bs{v}_{AB} & = (\Delta\bs{v}_A+\Delta\bs\omega_A\times\bs{r}_A) - (\Delta\bs{v}_B+\Delta\bs\omega_B\times\bs{r}_B) \\ & = \left(\frac1{m_A}N\bs{n}+(\inv{I_A} (\bs{r}_A\times N\bs{n}))\times\bs{r}_A\right) \\ & \phantom{=} + \left(\frac1{m_B}N\bs{n}+(\inv{I_B} (\bs{r}_B\times N\bs{n}))\times\bs{r}_B\right) \\ & = \left(\frac1{m_A}+\frac1{m_B}\right)N\bs{n} - [\bs{r}_A\times](\inv{I_A} ([\bs{r}_A\times]N\bs{n})) \\ & \phantom{=} - [\bs{r}_B\times](\inv{I_B} ([\bs{r}_B\times]N\bs{n})) \\ \label{eq:rigid:drv} & = \left(\frac1{m_A}\bs{1}+\frac1{m_B}\bs{1} + [\bs{r}_A\times]^T\inv{I_A}[\bs{r}_A\times] + [\bs{r}_B\times]^T\inv{I_B}[\bs{r}_B\times]\right)N\bs{n}, \end{align} where $[\bs{a}\times]$ is a skew-symmetric matrix that satisfies $[\bs{a}\times]\bs{b}=\bs{a}\times\bs{b}$ for arbitrary vectors $\bs{a}$ and $\bs{b}$, and $\bs{1}$ is the identity matrix of order three. By taking the dot product of \eqref{eq:rigid:drv} and $\bs{n}$, we obtain the change in relative velocity along the normal: \begin{align} \Delta\bs{v}_{AB}\cdot\bs{n} & = \bs{n}^T\left(\frac1{m_A}\bs{1}+\frac1{m_B}\bs{1} + [\bs{r}_A\times]^T\inv{I_A}[\bs{r}_A\times] + [\bs{r}_B\times]^T\inv{I_B}[\bs{r}_B\times]\right)N\bs{n} \\ & = \left(\frac1{m_A}+\frac1{m_B} + \norm{\bs{r}_A\times\bs{n}}_\inv{\bs{I}_A}^2 + \norm{\bs{r}_B\times\bs{n}}_\inv{\bs{I}_B}^2\right)N \\ \label{eq:rigid:drvn} & = \inv{m_{AB\bs{n}}} N. \end{align} Here, $\norm{\bs{a}}_{\bs{A}}=\sqrt{\bs{a}^T\bs{A}\bs{a}}$ for an arbitrary vector $\bs{a}$ and symmetric positive definite matrix $\bs{A}$, and the effective mass along normal $m_{AB\bs{n}}$ is defined as \begin{align} m_{AB\bs{n}} = \left(\frac1{m_A}+\frac1{m_B} + \norm{\bs{r}_A\times\bs{n}}_\inv{\bs{I}_A}^2 + \norm{\bs{r}_B\times\bs{n}}_\inv{\bs{I}_B}^2\right)^{-1}. \end{align} From \eqref{eq:rigid:drvn}, we obtain the linear relationship between $\Delta\bs{v}_{AB}\cdot\bs{n}$ and $N$. In order to change $\bs{v}_{AB}\cdot\bs{n}$ by some amount $\varepsilon$, we need to set $N$ to $m_{AB\bs{n}}\varepsilon$. Note that in actual computation, we iteratively update the value of $N$ as well as the temporal velocities of the rigid bodies. The constraint of the contact point can be written as follows in the form of an LCP: \begin{align} \bs{v}_{AB}^*\cdot\bs{n} - b_{AB} & \geq 0 \\ N & \geq 0 \\ N(\bs{v}_{AB}^*\cdot\bs{n} - b_{AB}) & = 0, \end{align} where $b_{AB}$ is the target velocity of the contact point. The target velocity represents how fast the rigid bodies move away from each other along the normal after the collision. Using the coefficient of restitution $e$, we can estimate the relative velocity along the normal after the collision as $-e\bs{v}_{AB}^*\cdot\bs{n}$. Note that this value must be observed and fixed before any constraint impulse is applied. Although this estimation is not always physically correct because the energy passing through multiple bodies is not tracked, the estimation gives satisfactory results in most cases. Newton's cradle is a typical case in which this estimation does not work well. 
Since the estimation depends only on the velocities, simply setting $b_{AB}=-e\bs{v}_{AB}^*\cdot\bs{n}$ will let rigid bodies gradually pass through each other due to numerical error if we do not correct positions separately. This is similar to the case of the incompressibility of fluid shown in Section \ref{sec:fluid:int}, where we either correct positions or mix a positional term with the constraint in order to maintain the incompressibility. We also mix a positional term with the constraint in the case of the rigid-body simulation. Let $d_{AB}$ be the penetration depth of rigid bodies $A$ and $B$ at the contact point, and let $\alpha\in[0,1]$ be a constant to control the amount of mixing. Then, we set $b_{AB}$ as \begin{align} \label{eq:rigid:mixed} b_{AB} = \max\{-e\bs{v}_{AB}^*\cdot\bs{n}, \frac\alpha hd_{AB}\}. \end{align} Contact constraints are velocity-based constraints, and we can solve these constraints in the same way as in Section \ref{sec:fluid:int}. We iterate the following steps for each contact constraint, until some stopping criteria are met. First, we compute the difference $\delta_{AB}$ between the target velocity and the current relative velocity along the normal as \begin{align} \delta_{AB} = b_{AB} - \bs{v}^*_{AB}\cdot\bs{n}. \end{align} Then, we compute the next value of impulse $N'$ by \begin{align} N' = \max\{0, N + m_{AB\bs{n}}\delta_{AB}\}. \end{align} Finally, we update impulse $N$ and apply its difference to each rigid body to update the temporal velocities: \begin{align} \Delta N & = N' - N \\ N & \gets N' \\ \bs{v}^*_A & \gets \bs{v}^*_A + \frac1{m_A}\Delta N\bs{n} \\ \bs\omega^*_A & \gets \bs\omega^*_A + \inv{I_A}(\bs{r}_A\times \Delta N\bs{n}) \\ \bs{v}^*_B & \gets \bs{v}^*_B - \frac1{m_B}\Delta N\bs{n} \\ \bs\omega^*_B & \gets \bs\omega^*_B - \inv{I_B}(\bs{r}_B\times \Delta N\bs{n}). \end{align} \section{Rigid Body and Fluid Interaction} \label{sec:rigid-fluid} We introduced the incompressible fluid computation in Section \ref{sec:fluid}, and the rigid-body computation in Section \ref{sec:rigid}. In this section, we introduce the strongly coupled simulation method of rigid bodies and fluid. We defined velocity-based constraints that can be strongly coupled with other velocity-based constraints. Thus, for the strongly coupled simulation, we should define a rigid-body and fluid interaction model that consists only of velocity-based constraints. \subsection{Shape Representation of a Rigid Body} In the present paper, we adopt particles to represent the shape of a rigid body. Since particles are used to represent the fluid, there are many benefits to using particles for rigid bodies as well. To construct a rigid body with particles, we first fill the inside of the rigid body with particles. Each particle is placed at the center of a grid of interval $l$, which is the same as the interval of the initial particle arrangement for the fluid. Then, we compute the mass, the center of gravity, and the inertia tensor matrix of the rigid body. In this step, particles are considered to be axis-aligned boxes of edge length $l$. The position of each particle relative to the center of gravity of the rigid body is computed and stored in order to update the position of the particle when the position and orientation of the rigid body are changed. During collision detection between rigid bodies, each particle is treated as a sphere of radius $l/2$. In addition, contact information is generated based on the sphere shape. 
In other words, when a pair of particles collides, a contact point is generated at the midpoint between the two particles and the contact normal is set to a vector parallel to the relative position vector of the two particles. The penetration depth of the contact point is set to $l-r$, where $r$ is the distance between the two particles. In order to accurately compute the particle number density of fluid particles around the boundary between a rigid body and the fluid, we include rigid-body particles in the particle number density computation of fluid particles. The particle number density of a fluid particle is thus computed using all kinds of particles. \subsection{Contact Constraint between a Rigid Body and a Fluid Particle} Since rigid bodies and fluid are both represented as particles, contact computation between rigid bodies and fluid can be performed in a particle-based manner. We treat fluid particles as rigid-body particles during the rigid-body and fluid interaction computation. Each fluid particle is considered to be a rigid-body particle that constitutes an independent rigid body. The physical properties of this rigid body are defined based on a sphere of radius $l/2$ and mass $\rho l^d$, where $\rho$ is the density of the fluid and $d$ is the dimension of the simulation. The sphere is placed at the same position as the particle, and the rigid body shares its linear velocity with the velocity of the particle. That is, the linear velocity of the rigid body refers to the velocity of the particle, so changing one also changes the other to the same value. The same relationship applies to the temporal linear velocity of the rigid body and the temporal velocity of the particle while the iteration for the constraint resolution is running. Collision detection and contact generation between ordinary rigid bodies and the rigid bodies representing fluid particles are performed in the same way as described in the previous subsection. Since every interaction is treated as a rigid-body contact constraint, and a rigid-body contact constraint is a velocity-based constraint, we only need to use velocity-based constraints for the entire simulation. This enables us to strongly couple rigid bodies and fluid in the simulation. We finally show the computation procedure of the entire time step for the strongly coupled simulation in \figref{fig:rigid-fluid:procedure}. \begin{figure} \centering \includegraphics[width=\textwidth]{procedure.pdf} \caption{Computation procedure of the entire time step.} \label{fig:rigid-fluid:procedure} \end{figure} \section{Computation Examples} \label{sec:example} In this section, we show computation examples of the proposed method. In Section~\ref{sec:example:rigid}, we compute a scene that consists of multiple rigid bodies in order to verify inter-rigid-body collision. In Section~\ref{sec:example:stat}, Section~\ref{sec:example:cpatch}, and Section~\ref{sec:example:dambreak}, we compute scenes that consist of incompressible fluid in order to verify the dynamic and static properties of the incompressible fluid. In Section~\ref{sec:example:buoyancy} and Section~\ref{sec:example:seesaw}, we compute scenes that consist of both rigid bodies and incompressible fluid in order to verify the interaction between rigid bodies and fluid and the strongly coupled behavior. Finally, in Section~\ref{sec:example:complex}, we provide a complex scene that involves multiple inter-rigid-body contacts as well as interaction between rigid bodies and fluid.
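Before moving on to the examples, we note that the per-contact update of Section \ref{sec:rigid} applies unchanged to these rigid--fluid contacts, since a fluid particle is simply wrapped as a small rigid body. A minimal sketch of one such update is given below; it assumes a Gauss--Seidel-style sweep over the contacts, and the data layout (the attributes of the contact \texttt{c} and of the bodies \texttt{A} and \texttt{B}) is ours, chosen only for illustration:
\begin{verbatim}
import numpy as np

def update_contact(c, A, B):
    # One update of a single contact constraint.
    # c.n: unit contact normal; c.r_A, c.r_B: contact offsets;
    # c.b: target normal velocity b_AB (Eq. (eq:rigid:mixed));
    # c.m_n: effective normal mass m_ABn; c.N: accumulated impulse;
    # A, B: bodies with temporal velocities v, w, mass m, inverse inertia inv_I.
    v_rel = (A.v + np.cross(A.w, c.r_A)) - (B.v + np.cross(B.w, c.r_B))
    delta = c.b - v_rel @ c.n               # target minus current normal velocity
    N_new = max(0.0, c.N + c.m_n * delta)   # the impulse must stay nonnegative
    dN, c.N = N_new - c.N, N_new
    # Apply the impulse difference to the temporal velocities of both bodies.
    A.v += dN * c.n / A.m
    A.w += A.inv_I @ np.cross(c.r_A, dN * c.n)
    B.v -= dN * c.n / B.m
    B.w -= B.inv_I @ np.cross(c.r_B, dN * c.n)
\end{verbatim}
In an actual implementation, this update is swept over all contact constraints, rigid--rigid and rigid--fluid alike, until the stopping criteria mentioned above are met.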
\subsection{Rigid-Body Computation} \label{sec:example:rigid} We show a two-dimensional computation example of the rigid-body interaction introduced in Section \ref{sec:rigid}. The initial configuration of the simulation is shown in \figref{fig:example:rigid-init}. Each box has the same mass, and the floor below the boxes has infinite mass and is not affected by gravity. The coefficient of restitution is set to 0.2 for all contact points. The interval of the particle arrangement $l$ is equal to 0.03 m, and the time step $h$ is 0.005 s. The gravitational acceleration $g$ is set to 9.8 m/s$^2$. Snapshots of the simulation are shown in \figref{fig:example:rigid}. The piled boxes are dropped and collapse, scattering over the floor. Thanks to the mixed positional term in the constraint \eqref{eq:rigid:mixed}, no penetration between rigid bodies can be found in \figref{fig:example:rigid}. Although no friction is considered, we can see a friction-like effect in the simulation since the surfaces of the rigid bodies consist of particles and are uneven. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{rigid-init.pdf} \caption{Initial configuration of the rigid-body simulation.} \label{fig:example:rigid-init} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{rigid.pdf} \caption{Snapshots of the rigid-body simulation at times 0.0, 0.4, 0.8, 1.2, 1.6, and 2.0 in seconds.} \label{fig:example:rigid} \end{figure} \subsection{Hydrostatic Pressure Computation} \label{sec:example:stat} Since we introduce a novel method for fluid computation, it is worthwhile to confirm the basic properties of the fluid. We compute the hydrostatic pressure of the fluid. The initial configuration is shown in \figref{fig:example:static-init}. We set $l$ to 0.02 m, $r_e/l$ to 2.1, $g$ to 10.0 m/s$^2$, and $h$ to 0.001 s. We show the distributions of the raw pressure $p$ and the smoothed pressure $\tilde p$ that is computed according to \eqref{eq:fluid:spressure} at time 1.0 s in \figref{fig:example:static}. From \figref{fig:example:static}, we can observe that the raw pressure distribution suffers from heavy noise, whereas the smoothed distribution is much cleaner and appears accurate. As we describe in Section \ref{sec:fluid:weighting}, the smoothing length is equal to $r_e$, and thus the noise can be said to have only a local and limited effect. In addition, the distribution of the deviation from the standard particle number density $(n_i-n_0)/n_0$ for each particle $i$ at time 1.0 s is shown in \figref{fig:example:static-pnd}. From the figure, we can see that no particle is compressed to the level of 1.0\% deviation, and the incompressibility is well maintained. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{static-init.pdf} \caption{Initial configuration of the hydrostatic pressure computation.} \label{fig:example:static-init} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{static.png} \caption{Raw (left) and smoothed (right) pressure distribution at time 1.0 s. The values are shown in pascals.} \label{fig:example:static} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{static-pnd.png} \caption{Distribution of the deviation from the standard particle number density at time 1.0 s.} \label{fig:example:static-pnd} \end{figure} \subsection{Dam-break Computation} \label{sec:example:dambreak} The dam-break problem is simulated using the proposed method. The initial configuration is shown in \figref{fig:example:dambreak-init}.
This is the same configuration as in the experiment performed by Koshizuka and Oka~\cite{koshizuka1996moving}. We set $l$ to 0.005 m, $r_e/l$ to 2.1, $g$ to 9.80665 m/s$^2$, and $h$ to 0.0005 s. We show snapshots of the simulation in \figref{fig:example:dambreak}, in which the particles are colored according to the smoothed pressure. We can observe that the fluid reaches the right wall approximately $t$ = 0.3 s after the beginning of the simulation and causes a heavy splash. The fluid makes a breaking wave from approximately $t$ = 0.7 s to $t$ = 0.9 s. The features of the simulation visibly match those of the experiment. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{dambreak-init.pdf} \caption{Initial configuration of the dam break computation.} \label{fig:example:dambreak-init} \end{figure} \begin{figure} \centering \includegraphics[width=0.65\textwidth]{dambreak.pdf} \caption{Snapshots of the dam break computation at times from 0.1 s to 1.0 s, in steps of 0.1 s. Each fluid particle is colored based on its smoothed pressure, and the numbers shown at the top of each figure represent the minimum and maximum smoothed pressures in pascals.} \label{fig:example:dambreak} \end{figure} \subsection{Circular Patch Computation} \label{sec:example:cpatch} An expanding circular droplet is computed using the proposed method. The radius of the droplet is set to 0.5 m, and the center of the droplet is located at the origin. The initial velocity of each particle $i$ at $(x,y)$ is given by \begin{align} \bs{u}_i = \begin{pmatrix} x \\ -y \end{pmatrix}. \end{align} We set $l$ to 0.005 m, $r_e/l$ to 2.5, $g$ to 0 m/s$^2$, and $h$ to 0.002 s. This problem can be solved analytically~\cite{colagrossi2005meshless} and the theoretical solution is given by \begin{align} \frac{da}{dt} & = -aA \\ \frac{db}{dt} & = bA \\ \frac{d^2A}{dt^2} & = \frac{4}{A}\left(\frac{dA}{dt}\right)^2 - 2A^3 \end{align} with initial conditions \begin{align} a(0) = 0.5,\ b(0) = 0.5,\ \frac{dA}{dt}(0) = 0,\ A(0) = 1, \end{align} where $a$ and $b$ are the semi-minor axis and the semi-major axis of the droplet in meters, respectively. Snapshots are shown in \figref{fig:example:cpatch}, and the simulation result is shown in \figref{fig:example:cpatch-axes}. The theoretical result in \figref{fig:example:cpatch-axes} is computed numerically. Although a pressure disturbance due to the disorder of the particle arrangement can be observed as the droplet expands, the graph shows that the simulated values of the semi-minor axis and the semi-major axis match the theoretical result well. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{cpatch.png} \caption{Snapshots of the circular patch computation at times 0.0, 0.5, 1.0, 1.5, and 2.0 in seconds. The numbers at the top of each snapshot indicate the pressure in pascals.} \label{fig:example:cpatch} \end{figure} \begin{figure} \centering \begin{minipage}{0.6\textwidth} \centering \includegraphics[width=\hsize]{cpatch-minor.pdf} \subcaption{Semi-minor axis} \vspace{0.3cm} \end{minipage} \begin{minipage}{0.6\textwidth} \includegraphics[width=\hsize]{cpatch-major.pdf} \subcaption{Semi-major axis} \end{minipage} \caption{The result of the circular patch computation.
The horizontal axes show the time, and the vertical axes show the time evolution of the semi-minor axis (a) and the semi-major axis (b).} \label{fig:example:cpatch-axes} \end{figure} \subsection{Buoyancy Computation} \label{sec:example:buoyancy} In order to confirm that the interaction between a rigid body and fluid is computed correctly, we put boxes with different densities into a fluid to observe the effect of buoyancy. The initial configuration of the simulation is shown in \figref{fig:example:buoyancy-init}. The box is a square of edge length 0.6 m, and its density is set to have a specified density ratio to the fluid. The density ratio varies from 0.1 to 0.9 in steps of 0.1. We set $l$ to 0.01 m, $r_e/l$ to 2.1, $g$ to 9.80665 m/s$^2$, and $h$ to 0.0025 s. We measure the ratio of the submerged volume of the box when the box becomes stationary in the fluid, and the result is shown in \figref{fig:example:buoyancy}. The graph shows that the simulated values match the theoretical values well. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{buoyancy-init.pdf} \caption{Initial configuration of the buoyancy computation.} \label{fig:example:buoyancy-init} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{buoyancy.pdf} \caption{Result of the buoyancy computation. The horizontal axis shows the density of the box relative to the density of the fluid, and the vertical axis shows the ratio of the submerged volume of the box.} \label{fig:example:buoyancy} \end{figure} \subsection{Seesaw Computation} \label{sec:example:seesaw} In strongly coupled simulations, multiple interactions of rigid bodies and fluid can be computed simultaneously. To confirm this, we run the following seesaw computation. The initial configuration of the simulation is shown in \figref{fig:example:seesaw-init}. The densities of the rigid body and fluid are 490 kg/m$^2$ and 1,000 kg/m$^2$, respectively. The top square part of the fluid has an initial velocity of 1.0 m/s toward the left, and other fluid and the rigid body are set to be still. The rigid body is pinned at and can only rotate around its center of gravity. We set $l$ to 0.02 m, $r_e/l$ to 2.1, $g$ to 0.0 m/s$^2$, and $h$ to 0.005 s. If the simulation is strongly coupled, all of the following things happen instantly in a single time step. The top part of the fluid pushes the rigid body toward the left so that the rigid body starts to rotate and pushes the bottom part of the fluid toward the right. This results in positive pressure at the bottom part of the fluid. \figref{fig:example:seesaw} shows the pressure distribution and the velocity distribution just after solving all constraints and before updating positions. We can observe that the rigid body is rotating and the bottom part of the fluid has positive pressure due to the rotation. We also show the result obtained with the PMS model in \figref{fig:example:seesaw-pms}. Since the PMS model cannot handle strong coupling, the bottom part of the fluid has no positive pressure and zero velocity. In subsequent steps, however, the PMS model can handle weakly-coupled interaction between the rigid body and the lower part of the fluid and positive pressure will be observed there. Note that the result obtained with the PMS model shows a lower peak pressure, as compared to the proposed method, because the top part of the fluid cannot take into account the existence of the bottom part of the fluid. As such, it is easier for the top part of the fluid to push the rigid body. 
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{seesaw-init.pdf} \caption{Initial configuration of the seesaw computation.} \label{fig:example:seesaw-init} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{seesaw.png} \caption{Pressure distribution with (left) and without (right) velocity distribution just after all constraints are solved. The numbers at the top indicate the pressure in pascals.} \label{fig:example:seesaw} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{seesaw-pms.png} \caption{Pressure distribution with (left) and without (right) velocity distribution just after updating the velocity of the rigid body and the fluid. The interaction of the rigid body and the fluid is computed using the passively moving solid model. The numbers at the top indicate the pressure in pascals.} \label{fig:example:seesaw-pms} \end{figure} \subsection{Complex Scene} \label{sec:example:complex} Finally, we give a computation example of a complex and dynamic scene that contains both inter-rigid-body collisions and rigid-body and fluid interaction. The two cross-shaped rigid bodies at the bottom are pinned at their centers of gravity and can only rotate around them. The other rigid bodies are not fixed and can move freely. The simulation result is shown in \figref{fig:example:complex}. We set $l$ to 0.02 m, $r_e/l$ to 2.1, $g$ to 9.80665 m/s$^2$, and $h$ to 0.001 s. We can observe that the simulation runs stably without visible problems. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{complex.pdf} \caption{Snapshots of the complex simulation at times from 0.0 s to 1.6 s.} \label{fig:example:complex} \end{figure} \section{Conclusion} \label{sec:conclusion} In this research, we proposed a method to simulate an incompressible fluid that uses an LCP to formulate the incompressibility constraint and enables strong coupling with rigid bodies. We formulated velocity-based constraints that generalize incompressibility constraints in fluid computation and non-penetration constraints in rigid-body computation, which provides a general framework for various strongly coupled simulations. Through numerical examples, we have demonstrated that the proposed method can compute incompressible fluid accurately and achieves strong coupling with rigid bodies correctly. With this method, we can use ordinary impulse-based methods for rigid-body simulation; therefore, the proposed method fits in well with existing software for rigid-body simulation. The remaining problems of the proposed method include the difficulty of computing negative pressure, because we required the pressure to be nonnegative in order not to cause an attractive force between particles. Some stabilization technique is likely needed to allow for negative pressure. In addition, generalization of the shape representation of rigid bodies is desirable so that we can represent smoother surfaces and decrease the computation time by means other than the use of particles. We leave these as future research topics.
{ "arxiv_id": "2302.14283", "language": "en", "timestamp": "2023-03-01T02:08:04", "url": "https://arxiv.org/abs/2302.14283", "yymm": "2302" }
\section{Introduction} The discovery of superconductivity in infinite-layer Nd$_{0.8}$Sr$_{0.2}$NiO$_{2}$ thin films reignited an interest in the nickelates as cuprate analogues \cite{LiNat2019, OsadaNanoLett2020_SC_PrNiO2, LiPRL2020_SCdome_NdNiO2, OsadaPRM2020_PrNiO2_phasediagram, ZengPRL2020_ariando_SCdomeNdNiO2, Osada2021AdvMat_LaNiO2, Li_YNieFrontiersPhys2021_cation_stoich_SC_112,ZhouRareMetal2021_SCNdNiO2_noO2vacanciesinSTO,ZhouRareMetal2021_SCNdNiO2_noO2vacanciesinSTO,ZengSciAdv2022_LaCaNiO2,RenArxiv2021_PrSrNiO2_LSAT_STO,KyuhoLeeArxiv2022_NdNiO2_LSAT_transport}. More broadly, the infinite-layer nickelates are the $n=\infty$ member of a homologous series of `layered square-planar nickelates', $R_{n+1}$Ni$_{n}$O$_{2n+2}$ or ($R$NiO$_{2}$)$_{n}$($R$O$_{2}$), where $R = $ trivalent rare-earth cation and $n>1$. These compounds host $n$ quasi-two-dimensional NiO$_{2}$ planes separated by ($R$O$_{2}$)$^{-}$ spacer layers, as illustrated in Fig.\ \ref{Fig1}. Consequently, the layering $n$ tunes the nickel 3$d$ electron filling. Mapped onto the cuprate phase diagram, the bulk stable $n=3$ compound, Nd$_{4}$Ni$_{3}$O$_{8}$, lies in the overdoped regime with a formal electron count of 3$d^{8.67}$. Indeed, Pr$_{4}$Ni$_{3}$O$_{8}$ single crystals are metallic \cite{ZhangNatPhys2017_orb_polarization}, as corroborated by first-principles calculations \cite{Nica2020PRB_438bandstructure}. The $n=5$ compound, Nd$_{6}$Ni$_{5}$O$_{12}$, has a formal electron count of 3$d^{8.8}$ aligned with optimal doping; thin films were recently found to be superconducting \cite{PanNatMat2021}. The layering $n$ also tunes the out-of-plane electronic dispersion: density-functional theory (DFT) calculations suggest that, despite their similar $d$ electron fillings, the electronic structure of Nd$_{6}$Ni$_{5}$O$_{12}$ is more two-dimensional, and thus more cuprate-like, than that of the hole-doped infinite-layer nickelates \cite{ZhangNatPhys2017_orb_polarization,PanNatMat2021, LaBollita2022Arxiv_d9-deltalayerednickelates}. Proposals to further promote cuprate-like electronic structure and enhance $T_\textrm{c}$ in the nickelates include electron doping the lower-dimensional $n=3$ compound \cite{ZhangNatPhys2017_orb_polarization,PanNatMat2021}, decreasing the $c$-axis lattice constant \cite{Been2021PRX_RE_trends}, increasing compressive strain via epitaxy \cite{KyuhoLeeArxiv2022_NdNiO2_LSAT_transport, RenArxiv2021_PrSrNiO2_LSAT_STO}, and applying high pressure \cite{Wang2022NatComms_pressure_enhancement_Tc}. Furthermore, studies of layered square-planar nickelates in powder \cite{Lacorre1992JSSChem, PoltavetsJAmChemSoc2006_La326, PoltavetsInorChem2007_Ln438, PoltavetsPRL2008_La326_electronicproperties, Poltavets2010PRL_La438} and single crystal form have revealed a cuprate-like Fermi surface \cite{Li2023Pr4Ni3O8_ARPES}, charge/spin stripes \cite{ZhangPNAS2016_La438chargestripes,ZhangPRL2019_La438spinstripes}, orbital polarization \cite{ZhangNatPhys2017_orb_polarization}, and large super-exchange \cite{LinPRL2021_(LaPr)438_stronsuperexchange,ShenPRX2022_oxygenstatesinLa438}. Layered square-planar nickelate thin films thus form an exciting platform to investigate the role of dimensionality, epitaxial strain, and chemical doping in nickelate superconductivity. The synthesis of square-planar nickelate thin films, however, remains an immense challenge. 
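As an aside, the formal electron counts quoted above follow from simple charge counting in $R_{n+1}$Ni$_{n}$O$_{2n+2}$, assuming the usual formal valences $R^{3+}$ and O$^{2-}$: charge neutrality gives an average nickel valence of
\begin{align*}
q_{\mathrm{Ni}} = \frac{2(2n+2)-3(n+1)}{n} = \frac{n+1}{n}, \qquad \text{so the $d$-electron count is} \quad 10 - q_{\mathrm{Ni}} = 9 - \frac{1}{n},
\end{align*}
which yields $3d^{8.67}$ for $n=3$, $3d^{8.8}$ for $n=5$, and $3d^{9}$ in the $n=\infty$ infinite-layer limit.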
Due to their low decomposition temperatures, infinite-layer and layered square-planar nickelates are accessible only via low temperature topotactic reduction of the perovskite $R$NiO$_{3}$ ($n=\infty$) or Ruddlesden–Popper $R_{n+1}$Ni$_{n}$O$_{3n+1}$ ($n>1$) parent compounds, respectively \cite{CrespinJCS1983_LaNiO2_pt1,LevitzJCS1983_LaNiO2_pt2,Lacorre1992JSSChem,Lacorre1992JSSChem,HaywardJAmChemSoc1999_LaNiO2,HaywardSSS2003_NdNiO2_NaH, PoltavetsJAmChemSoc2006_La326}. Furthermore, the absence of superconductivity in reduced nickelate powders \cite{LiNatComm2020_nobulkSC, WangPRM2020_nobulkSC, Huo2022ChiPhysB_noSCinLaSrNiO2} and bulk single crystals \cite{PuphalSciAdv2021_nobulkSC} to date suggests that external stabilization by a substrate may be required to yield superconductivity. The synthesis of reduced nickelate thin films, however, is complicated by the large $\sim$2–3$\%$ increase in the in-plane lattice parameter upon reduction. To minimize compressive strain in the reduced phase, the parent compound must be synthesized under tensile strain, as shown in Fig.\ \ref{Fig1}. Perovskite nickelates synthesized under $\sim$2.6\% tensile strain on SrTiO$_{3}$, however, exhibit strain-relieving extended defects \cite{YangSymmetry2022_NdNiO3RPfaults,LiPRL2020_SCdome_NdNiO2,ZengSciAdv2022_LaCaNiO2,Osada2021AdvMat_LaNiO2,LeeAPL2020_aspects_synthesis}. A dramatic reduction of such extended defects has recently been achieved by synthesizing the parent perovskite under more modest $\sim$1.6\% tensile strain on (LaAlO$_{3}$)$_{0.3}$(Sr$_{2}$TaAlO$_{6}$)$_{0.7}$ (LSAT) \cite{KyuhoLeeArxiv2022_NdNiO2_LSAT_transport, RenArxiv2021_PrSrNiO2_LSAT_STO}. Extended defects can also form through the reduction process. For example, LaNiO$_{2}$ / SrTiO$_3$ exhibits $a$-axis oriented domains which relieve the 1.4\% compressive strain \cite{Osada2021AdvMat_LaNiO2}, while these domains are absent in NdNiO$_{2}$ / SrTiO$_{3}$ with 0.4\% compressive strain \cite{LeeAPL2020_aspects_synthesis}. Therefore, understanding the strain-dependent stability of both the parent Ruddlesden–Popper and reduced square-planar phases is essential to optimize the synthesis of layered square-planar nickelate thin films. \begin{figure} \includegraphics[width = 1\columnwidth]{Figures - main text (final revision)/fig1_final.pdf} \caption{Schematic crystal structures of (left) Nd$_{4}$Ni$_{3}$O$_{10}$ ($n=3$, Ruddlesden–Popper) and (right) Nd$_{4}$Ni$_{3}$O$_{8}$ ($n=3$, layered square-planar). Neodymium, nickel, and oxygen atoms are depicted in blue, red, and yellow, respectively. The number line presents the bulk in-plane lattice parameters of Nd$_{4}$Ni$_{3}$O$_{10}$ (3.826 \AA) \cite{OlafsenJournalSolidStateChem2000_Nd4310structure} and Nd$_{4}$Ni$_{3}$O$_{8}$ (3.915 \AA) \cite{PoltavetsInorChem2007_Ln438}, as well as the pseudocubic lattice parameters of LaAlO$_{3}$ (001), NdGaO$_{3}$ (110), SrTiO$_{3}$ (001). The lattice mismatch between Nd$_{4}$Ni$_{3}$O$_{10}$, Nd$_{4}$Ni$_{3}$O$_{8}$ and the three substrates are shown in blue and green, respectively. } \label{Fig1} \end{figure} Here, we discuss the competing requirements for the synthesis and oxygen deintercalation of Nd$_{4}$Ni$_{3}$O$_{10}$ thin films on LaAlO$_{3}$ (001), NdGaO$_{3}$ (110), and SrTiO$_{3}$ (001). We focus on the $n=3$ Ruddlesden–Popper compound because both the oxidized Nd$_4$Ni$_3$O$_{10}$ and reduced Nd$_4$Ni$_3$O$_{8}$ compounds have been synthesized as bulk single crystals, allowing us to benchmark the strain states with bulk lattice constants. 
Fig.\ \ref{Fig1} tabulates the lattice mismatch, $\epsilon= (a_{sub}-a_{bulk})/a_{bulk}$, of Nd$_{4}$Ni$_{3}$O$_{10}$ and Nd$_{4}$Ni$_{3}$O$_{8}$ on LaAlO$_{3}$, NdGaO$_{3}$, and SrTiO$_{3}$. First, we present the molecular beam epitaxy (MBE) synthesis of Nd$_{4}$Ni$_{3}$O$_{10}$ on the three substrates. We show that Nd$_{4}$Ni$_{3}$O$_{10}$ on SrTiO$_{3}$ exhibits a high degree of disorder characterized by a near-equal density of vertical and horizontal rock salt faults; consequently, we do not consider these films for reduction. Synthesized under lower tensile strain on NdGaO$_{3}$, the films form a smaller density of vertical rock salt faults while maintaining relatively high quality Ruddlesden–Popper ordering. By contrast, Nd$_{4}$Ni$_{3}$O$_{10}$ on LaAlO$_{3}$ exhibits coherent ordering of horizontal rock salt layers with very few extended defects. Next, we reduce Nd$_{4}$Ni$_{3}$O$_{10}$ to the square-planar phase, Nd$_{4}$Ni$_{3}$O$_{8}$, on LaAlO$_{3}$ and NdGaO$_{3}$. In Nd$_{4}$Ni$_{3}$O$_{8}$ on LaAlO$_{3}$, we observe regions with pristine square-planar ordering along with disordered regions where the $c$-axis cants locally by as much as 7\degree, likely a compressive strain relaxation mechanism \cite{Kawai2010CG&D_a-LaNiO2, Osada2021AdvMat_LaNiO2}. However, all reduced films on LaAlO$_{3}$ are insulating. On the other hand, Nd$_{4}$Ni$_{3}$O$_{8}$ and Nd$_{6}$Ni$_{5}$O$_{12}$ on NdGaO$_{3}$ are metallic and superconducting \cite{PanNatMat2021}, respectively, despite the extended defects in the parent compound. Finally, our density-functional theory (DFT) calculations reveal that in-plane strain alters aspects of the $R_{n+1}$Ni$_{n}$O$_{2n+2}$ electronic structure relevant to superconductivity. Our study thus demonstrates a pathway to the synthesis of a superconducting compound and sets limits on the ability to strain-engineer $R_{n+1}$Ni$_{n}$O$_{2n+2}$ thin films via epitaxy. \section{Results} \subsection{Density-Functional Theory Calculations} We perform DFT calculations to investigate how strain tunes the electronic structure of Nd$_{4}$Ni$_{3}$O$_{8}$ and Nd$_{6}$Ni$_{5}$O$_{12}$. As detailed in Supplementary Note 1, our calculations reveal the following effects: \begin{enumerate} \item The charge-transfer energy ($\Delta$, a measure of the $p$-$d$ splitting) increases (decreases) with compressive (tensile) strain. \item The rare-earth density-of-states at the Fermi level increases (decreases) with compressive (tensile) strain. \item The Ni-$d_{x^{2}-y^{2}}$ bandwidth increases (decreases) with compressive (tensile) strain. \end{enumerate} \vspace{5mm} The role of the charge-transfer energy $\Delta$ and rare-earth 5$d$ `spectator' bands in nickelate superconductivity has been extensively debated \cite{Botana2020simi,Kitatani2020Nicke,Karp2022super, Louie2022twogap}. In the cuprates, the superexchange interaction $J\sim t^{4}/\Delta^{3}$ and single-band fermiology are widely deemed essential for superconductivity \cite{Scalapino2012, OMahony2022PNAS_seamusdavispaper}. Thus strain-engineering $R_{n+1}$Ni$_{n}$O$_{2n+2}$ compounds provides an avenue to explore the role of the charge-transfer energy and multi- or single-band fermiology in nickelate and cuprate superconductivity. Next we discuss the synthesis of these compounds under a variety of strain states.
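These strain states follow from the lattice mismatch defined above. As a quick arithmetic check of the values quoted in Fig.\ \ref{Fig1}, the short script below evaluates $\epsilon$ for both compounds on the three substrates; the bulk in-plane lattice parameters are those given in Fig.\ \ref{Fig1}, while the substrate pseudocubic lattice constants are nominal literature values assumed here only for illustration:
\begin{verbatim}
# Lattice mismatch eps = (a_sub - a_bulk) / a_bulk, in percent.
bulk = {"Nd4Ni3O10": 3.826, "Nd4Ni3O8": 3.915}      # Angstrom (bulk values)
substrates = {"LaAlO3 (001)": 3.79,                  # nominal pseudocubic
              "NdGaO3 (110)": 3.858,                 # lattice constants
              "SrTiO3 (001)": 3.905}                 # (assumed)

for film, a_bulk in bulk.items():
    for sub, a_sub in substrates.items():
        eps = 100 * (a_sub - a_bulk) / a_bulk
        print(f"{film} on {sub}: {eps:+.1f}%")
\end{verbatim}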
\subsection{MBE Synthesis of Ruddlesden–Popper nickelates} \begin{figure*} \centering \includegraphics[width = 2 \columnwidth]{Figures - main text (final revision)/fig2_final.pdf} \caption{Structural characterization of Nd$_{4}$Ni$_{3}$O$_{10}$ / SrTiO$_{3}$ (001) ($\epsilon = +2.1\%$). (a) XRD scans of Nd$_{4}$Ni$_{3}$O$_{10}$ / SrTiO$_{3}$ with Nd / Ni $\sim$ 4 / 3 (nominally stoichiometric) and Nd / Ni $\sim$ 5.1 / 3 (27.5\% neodymium-rich). These compositions are measured by Rutherford back-scattering spectroscopy (RBS). The asterisks denote substrate peaks and the vertical lines mark the 00$l$ peak positions of bulk Nd$_{4}$Ni$_{3}$O$_{10}$ \cite{OlafsenJournalSolidStateChem2000_Nd4310structure}. (b) Schematic crystal structure depicting a vertical rock salt fault. (c) 90\degree \ rotation of the crystal structure in (b) illustrating the origin of the atomic contrast loss in STEM. The green dashed arrows in (b) and circled dots in (c) denote the propagation direction of the electron beam during STEM measurements. (d) MAADF-STEM image of the Nd$_{4}$Ni$_{3}$O$_{10}$ / SrTiO$_{3}$ film with Nd / Ni $\sim$ 5.1 / 3 in (a). (e) Atomic-resolution HAADF-STEM image showing the representative lattice structure of the film. The reduced atomic contrast is due to projection through vertically offset Ruddlesden–Popper regions, shown schematically in (c). } \label{Fig2} \end{figure*} The synthesis of Ruddlesden–Popper nickelate thin films shares several of the challenges encountered in the synthesis of perovskite nickelates, including the difficulty in reaching the high oxidation Ni$^{3+}$ state and the formation of secondary phases like NiO \cite{LeeAPL2020_aspects_synthesis,Li_YNie2020APL,Sun_YNie2021PRB,PanPRM2022_RPgrowth}. There are, however, additional challenges unique to the synthesis of Ruddlesden–Popper thin films. In general, the MBE synthesis of $A_{n+1}B_{n}$O$_{3n+1}$ Ruddlesden–Popper compounds requires the precise sequential deposition of $A$ and $B$ monolayers to achieve the desired composition ($A / B = (n+1) / n$) and monolayer dose \cite{LeeJFreelandNatMat2014,MattBaroneAPLM2021,PanPRM2022_RPgrowth}. Given ideal composition, errors in the monolayer dose times result in the formation of Ruddlesden–Popper layers with an average periodicity different from the one targeted \cite{MattBaroneAPLM2021}. Errors in composition, on the other hand, can be accommodated via the formation of extended defects such as rock salt faults: half unit cell stacking faults formed by additional $A$O inclusions \cite{BalzPlieth1955_K2NiF4,RuddlesdenPopper1957_K2NiO4typecompounds, RuddlesdenPopper1957_K2NiO4typecompounds, RuddlesdenPopper1957_Sr3Ti2O7}. In homoepitaxial Sr$_{n+1}$Ti$_{n}$O$_{3n+1}$, for example, up to 5\% excess strontium can be accommodated by the formation of additional SrO rock salt layers, while strontium deficiency results in missing rock salt layers \cite{Ohnishi2005APL_STO_PLDgrowth,Ohnishi2008JourAppPhys_defectsSTO,Brooks2009APL_STO_composition,Xu2016_RPfaults_STOfilms,Dawley2020APL_STO_RP}. In addition to the composition and monolayer dose, lattice mismatch plays a crucial role in the formation of extended defects. In compressively-strained LaNiO$_{3}$ / LaAlO$_{3}$ films, for example, misfit dislocations form to relax the in-plane compressive strain \cite{BakJourPhysChemLett2020_LaNiO3RPfaults,QiNanoscale2021_RPfaults_compressivestrain}. 
Synthesized under tensile strain on SrTiO$_{3}$, LaNiO$_{3}$ \cite{BakJourPhysChemLett2020_LaNiO3RPfaults} and NdNiO$_{3}$ \cite{YangSymmetry2022_NdNiO3RPfaults} films exhibit vertical rock salt faults. These extended defects effectively increase the in-plane lattice constant and thus relieve tensile strain because the distance between rare-earth planes within a rock salt layer ($\sim$2.8 \AA) is greater than the atomic layer spacing of perovskite (La,Nd)NiO$_{3}$ ($\sim$1.9 \AA) (see Supplementary Note 2). The density of vertical rock salt faults in infinite-layer nickelates has been decreased dramatically by decreasing the tensile strain of the parent perovskite compound \cite{RenArxiv2021_PrSrNiO2_LSAT_STO, KyuhoLeeArxiv2022_NdNiO2_LSAT_transport}. In comparison to perovskites, however, Ruddlesden–Popper compounds are even more prone to the formation of vertical rock salt faults because their structure already hosts horizontal rock salt layers which can instead orient vertically to relieve the in-plane tensile strain. \begin{figure*} \centering \includegraphics[width = 2 \columnwidth]{Figures - main text (final revision)/fig3_final.pdf} \caption{Structural characterization of Nd$_{4}$Ni$_{3}$O$_{10}$ / NdGaO$_{3}$ (110) ($\epsilon = +0.8\%$). (a) XRD scan of a Nd$_{4}$Ni$_{3}$O$_{10}$ / NdGaO$_{3}$ film. The asterisks denote substrate peaks and the vertical lines mark the 00$l$ peak positions of bulk Nd$_{4}$Ni$_{3}$O$_{10}$ \cite{OlafsenJournalSolidStateChem2000_Nd4310structure}. (b) Schematic crystal structures depicting three regions in (d). (c) MAADF-STEM image of the Nd$_{4}$Ni$_{3}$O$_{10}$ / NdGaO$_{3}$ film shown in (a). (d) Atomic-resolution HAADF-STEM image showing the representative lattice structure of the film with atomic model overlays. The yellow box highlights horizontal rock salt ordering while the green and red boxes show regions with half and three unit cell long vertical rock salt faults, respectively. } \label{Fig3} \end{figure*} \begin{figure*} \centering \includegraphics[width = 2\columnwidth]{Figures - main text (final revision)/fig4_final.pdf} \caption{Structural characterization of Nd$_{4}$Ni$_{3}$O$_{10}$ / LaAlO$_{3}$ (001) ($\epsilon = -0.9\%$). (a) XRD scan of a Nd$_{4}$Ni$_{3}$O$_{10}$ / LaAlO$_{3}$ film. The asterisks denote substrate peaks and the vertical lines mark the 00$l$ peak positions of bulk Nd$_{4}$Ni$_{3}$O$_{10}$ \cite{OlafsenJournalSolidStateChem2000_Nd4310structure}. (b) Schematic crystal structure of well-ordered, horizontal Ruddlesden–Popper layers. (c) MAADF-STEM image of the Nd$_{4}$Ni$_{3}$O$_{10}$ / LaAlO$_{3}$ film shown in (a). (d) Atomic-resolution HAADF-STEM image showing the representative lattice structure of the film with an atomic model overlay of the structure shown in (b), boxed in yellow. } \label{Fig4} \end{figure*} \begin{figure*} \centering \includegraphics[width =2\columnwidth]{Figures - main text (final revision)/fig5_final.pdf} \caption{Strain-dependent structural characterization of Nd$_{4}$Ni$_{3}$O$_{10}$ films. Lattice fringe strain maps highlighting the characteristic formation of horizontal and vertical rock salt layers in (a) Nd$_{4}$Ni$_{3}$O$_{10}$ / LaAlO$_{3}$ (001), (b) Nd$_{4}$Ni$_{3}$O$_{10}$ / NdGaO$_{3}$ (110), and (c) Nd$_{4}$Ni$_{3}$O$_{10}$ / SrTiO$_{3}$ (001) films. The number line depicts the in-plane lattice constant of Nd$_{4}$Ni$_{3}$O$_{10}$ in black and the corresponding strain states on LaAlO$_{3}$ in yellow, NdGaO$_{3}$ in blue, and SrTiO$_{3}$ in red. 
More details of the analysis are provided in Methods. Raw STEM data and strain map outputs are shown in Supplementary Fig.\ S26. See Supplementary Fig.\ S4 for a discussion regarding the observed strain-dependent defect formation.} \label{Fig5} \end{figure*} Most infinite-layer nickelate films to date have been synthesized on SrTiO$_{3}$, motivated by the minimal 0.4\% compressive strain in the reduced state. Here, we present attempts to synthesize Nd$_{4}$Ni$_{3}$O$_{10}$ under 2.1\% tensile strain on SrTiO$_{3}$. In Fig.\ \ref{Fig2}(a), we present x-ray diffraction (XRD) scans of two Nd$_{4}$Ni$_{3}$O$_{10}$ / SrTiO$_{3}$ films: one with Nd/Ni $\sim 4 / 3$ and the other with Nd/Ni $\sim 5.1 / 3$, a 27.5\% excess in neodymium content. We will refer to these two films as `stoichiometric' and `neodymium-rich', respectively. The stoichiometric film exhibits two primary features in the XRD scan: a film peak at $47.4\degree$ and a broad peak at $\sim$26$\degree$. These features are reminiscent of disordered and non-stoichiometric $R$NiO$_{3}$ \cite{Satyalakshmi1993APL_LaNiO3films,BreckenfeldAppliedMaterials2014_nonstoichiometryNdNiO3, Li_YNieFrontiersPhys2021_cation_stoich_SC_112}. However, the stoichiometric film exhibits none of the 00$l$ superlattice peaks expected for bulk Nd$_{4}$Ni$_{3}$O$_{10}$. The neodymium-rich film, on the other hand, exhibits superlattice peaks that are roughly consistent with bulk Nd$_{4}$Ni$_{3}$O$_{10}$. Thus, coherent horizontal rock salt ordering forms only after supplying excess neodymium. We investigate the nature of this crystalline disorder by atomic-resolution scanning transmission electron microscopy (STEM) imaging of the film. Microscopic signatures of Ruddlesden–Popper disorder evident in STEM include a zigzag arrangement of $A$-cations and reduced $A$–$B$ contrast \cite{Detemple2012JourAppPhys_RPfaults_LNO_LAO,CollJourPhysChem2017_LaNiO3RPfaults,YangSymmetry2022_NdNiO3RPfaults}. As illustrated in Fig.\ \ref{Fig2}(b), a vertical rock salt fault in the (001) plane leads to the relative shift of an adjacent rock salt layer by $a$/2 [011] \cite{TokudaAPL2011_STORPfaults}. Consequently, the neodymium and nickel atomic sites are stacked in projection along the propagation direction of the electron beam, resulting in reduced atomic number ($Z$) contrast \cite{Dawley2020APL_STO_RP} (Fig.\ \ref{Fig2}(c)). We present a medium-angle annular dark field (MAADF)-STEM image of the Nd$_{5.1}$Ni$_3$O$_{10-\delta}$ film in Fig.\ \ref{Fig2}(d). The film is distinguished from the substrate by the bright contrast which arises from the heavy neodymium nuclei in the film. Within the film, however, the clean layered perovskite structure of an $n=3$ Ruddlesden–Popper phase is obscured by a high density of disordered vertical and horizontal rock salt planes. A higher magnification (HAADF)-STEM image of the film shown in Fig.\ \ref{Fig2}(e) similarly indicates a high density of rock salt plane formation through the reduced $Z$ contrast. A near-equal density of vertical and horizontal rock salt faults thus forms in spite of the sequential MBE shuttering sequence encouraging horizontal rock salt layering. Our attempts to synthesize Nd$_{4}$Ni$_{3}$O$_{10}$ on SrTiO$_{3}$ demonstrate the extraordinary difficulties in stabilizing coherent horizontal rock salt ordering under large tensile strain. 
High quality perovskite NdNiO$_{3}$, on the other hand, can be synthesized on SrTiO$_{3}$, albeit with a small density of vertical rock salt faults \cite{LeeAPL2020_aspects_synthesis, YangSymmetry2022_NdNiO3RPfaults, BakJourPhysChemLett2020_LaNiO3RPfaults}. Ruddlesden–Popper nickelates, however, are more likely to form vertical rock salt faults than perovskites due to the composition: in NdNiO$_{3}$, Nd/Ni $=$ 1/1, while in Nd$_{4}$Ni$_{3}$O$_{10}$, Nd/Ni $=$ 4/3. We thus propose that the additional neodymium content in Nd$_{4}$Ni$_{3}$O$_{10}$ under tensile strain forms strain-relieving vertical rock salt faults instead of horizontal rock salt layers, regardless of the MBE shuttering sequence. In fact, to stabilize any horizontal rock salt ordering, it is necessary to supply an excess of neodymium, as demonstrated in Fig.\ \ref{Fig2}(a). Ruddlesden–Popper films are therefore more sensitive to the formation of tensile strain-relieving extended defects than their perovskite counterparts. Additionally, SrTiO$_3$ is unique among the substrates included in our study for its charge-neutral atomic planes, which form a stronger polar discontinuity with the charged planes in the Nd$_{4}$Ni$_{3}$O$_{10}$ film. Spectroscopic and theoretical studies of the interface between NdNiO$_2$ and SrTiO$_3$ show that a similar polar discontinuity is alleviated by a unit cell thick reconstruction layer of Nd(Ti,Ni)O$_3$ \cite{goodge2022reconstructing}, which we also observe in our Nd$_{4}$Ni$_{3}$O$_{10}$ films on SrTiO$_3$ (Supplementary Fig.\ S23). In this work we therefore focus on epitaxial strain as the primary driver of reduced crystalline quality in these films. Future studies may reveal subtle differences in polar accommodation of infinite- and several-layer nickelates. Given the high density of extended defects observed in Nd$_{4}$Ni$_{3}$O$_{10}$ / SrTiO$_{3}$, we disqualify this system from consideration for reduction and instead turn our attention to substrates which yield smaller lattice mismatch. In an effort to decrease the density of extended defects, we next synthesize Nd$_{4}$Ni$_{3}$O$_{10}$ under a more modest 0.8\% tensile strain on NdGaO$_{3}$ (Fig.\ \ref{Fig1}). We present an XRD scan of a Nd$_{4}$Ni$_{3}$O$_{10}$ / NdGaO$_{3}$ film in Fig.\ \ref{Fig3}(a), which exhibits 00$l$ superlattice peaks consistent with bulk Nd$_{4}$Ni$_{3}$O$_{10}$. The splitting of the $00\underline{10}$ peak may be indicative of a small error in the monolayer dose or composition \cite{MattBaroneAPLM2021}. Reciprocal space mapping in Supplementary Fig.\ S22 demonstrates that the film is epitaxially strained to the substrate. Cross-sectional STEM imaging in Fig.\ \ref{Fig3}(c-d) provides a microscopic view of the defects. Diffraction contrast near the rock salt planes in the large field-of-view MAADF-STEM image in Fig.\ \ref{Fig3}(c) highlights coherent horizontal ordering of Ruddlesden–Popper layers and additional vertical rock salt faults. A HAADF-STEM image of the film in Fig.\ \ref{Fig3}(d) similarly identifies the coexistence of different Ruddlesden–Popper phases as well as vertical rock salt faults. Boxes and atomic models in Figs.\ \ref{Fig3}(b) and \ref{Fig3}(d) denote regions of well-ordered horizontal Ruddlesden–Popper layers (yellow), vertical rock salt faults (red), and local step edges between regions of mixed local $n$ phase (green).
Therefore, while some vertical rock salt faults are observed, the lower tensile strain imparted by NdGaO$_3$ reduces the density of vertical Ruddlesden–Popper faults and other extended defects that dominate Nd$_{4}$Ni$_{3}$O$_{10}$ films synthesized on SrTiO$_3$. To determine whether vertical rock salt fault formation can be further mitigated, we next synthesize Nd$_{4}$Ni$_{3}$O$_{10}$ under 0.9\% compressive strain on LaAlO$_{3}$. The XRD scan of a Nd$_{4}$Ni$_{3}$O$_{10}$ / LaAlO$_{3}$ film in Fig.\ \ref{Fig4}(a) exhibits sharp superlattice peaks with no peak splitting. Furthermore, reciprocal space mapping in Supplementary Fig.\ S12 confirms the film is epitaxially strained to the substrate. More detailed structural and electronic characterization of our Ruddlesden–Popper ($n=1-5$) nickelates on LaAlO$_{3}$ can be found in Ref.\ \cite{PanPRM2022_RPgrowth}. MAADF-STEM images of this film in Fig.\ \ref{Fig4}(c) show nearly uniform adherence to horizontal layering with very few vertical rock salt faults. Atomic-resolution HAADF-STEM imaging in Fig.\ \ref{Fig4}(d) corroborates the high degree of crystalline order with only a few defects visible. Deviations from the $n=3$ Ruddlesden–Popper structure are quantified in Supplementary Fig.\ S7 in Ref.\ \cite{PanPRM2022_RPgrowth}. While such deviations from the targeted $n=3$ Ruddlesden–Popper structure are observed in La$_{4}$Ni$_{3}$O$_{10}$ \cite{Drennan1982MatResearchBulletin_LaNiO3_phaseinhomogeneity, RamJourSSChem1986_LaNiO3_RPs} and reduced Nd$_{4}$Ni$_{3}$O$_{8}$ \cite{Retoux1998JourSSChem_TEM_Nd4Ni3O8} bulk crystals, our films exhibit a small density of such deviations due to the MBE shuttering sequence that encourages the formation of the targeted Ruddlesden–Popper order. We present a summary of the characteristic crystalline microstructure for Nd$_{4}$Ni$_{3}$O$_{10}$ films under varying amounts of compressive and tensile epitaxial strain in Fig.\ \ref{Fig5}. Strain analysis of the (101) and ($\bar{1}$01) pseudocubic lattice fringes highlights local $a$/2 (011) lattice offsets at both horizontal and vertical Ruddlesden–Popper rock salt planes. Under 0.9\% compressive strain on LaAlO$_{3}$, Nd$_{4}$Ni$_{3}$O$_{10}$ exhibits coherent horizontal Ruddlesden–Popper ordering with some rock salt discontinuities but very few vertical rock salt faults (Fig.\ \ref{Fig5}(a)). Under 0.8\% tensile strain on NdGaO$_{3}$, the horizontal Ruddlesden–Popper layering structure is largely preserved with the emergence of some vertical rock salt planes (Fig.\ \ref{Fig5}(b)). The density of vertical rock salt faults in Nd$_{4}$Ni$_{3}$O$_{10}$ increases dramatically as the tensile strain is increased to $\epsilon=+2.1\%$ on SrTiO$_{3}$, with a near-equal density of vertical and horizontal rock salt faults evident in Fig.\ \ref{Fig5}(c). Such an increase in rock salt fault density is expected as the vertical Ruddlesden–Popper layers relieve tensile strain \cite{BakJourPhysChemLett2020_LaNiO3RPfaults,YangSymmetry2022_NdNiO3RPfaults} (see Supplementary Note 2). With these challenges in stabilizing the parent Ruddlesden–Popper compounds in mind, we next consider their oxygen deintercalation. Crucially, much of the cation disorder observed in the parent compounds is preserved through reduction due to the minimal cation mobility at typical reduction temperatures ($\sim$300\degree C).
\newpage \clearpage \subsection{Oxygen deintercalation to the layered square-planar phase} A fundamental issue in the topotactic reduction of nickelates is the metastability of square-planar phases at typical reduction temperatures ($\sim$300\degree C) \cite{Lacorre1992JSSChem,ZhangGreenblattJournalSolidStateChem1995_Ln4310, HaywardJAmChemSoc1999_LaNiO2,PoltavetsInorChem2007_Ln438, PoltavetsJAmChemSoc2006_La326,MalyiPRB2022_NdNiO2instability_Hintercalation}. Decomposition to $R_{2}$O$_{3}$ ($R$ = La, Pr, Nd), NiO, and nickel metal has been reported at temperatures as low as 210\degree C for LaNiO$_{2}$ \cite{HaywardJAmChemSoc1999_LaNiO2} and 200\degree C for NdNiO$_{2}$ \cite{HaywardSSS2003_NdNiO2_NaH}. For layered square-planar compounds, the decomposition temperatures are higher: 400\degree C for La$_{4}$Ni$_{3}$O$_{8}$ \cite{Lacorre1992JSSChem} and 375\degree C for La$_{3}$Ni$_{2}$O$_{7}$ \cite{PoltavetsJAmChemSoc2006_La326}. This difference may be ascribed to the suppressed out-of-plane cation mobility in Ruddlesden–Popper and layered square-planar nickelates, facilitated by the rock salt $R$O or fluorite $R$O$_{2}$ `blocking' layers, respectively. While nickelate powders can be reduced at temperatures as low as 190\degree C with metal hydrides \cite{HaywardJAmChemSoc1999_LaNiO2}, nickelate thin films typically require higher reaction temperatures ($>$250\degree C) to stabilize the square-planar phase \cite{Kawai2010CG&D_a-LaNiO2, LeeAPL2020_aspects_synthesis}, possibly because films are not ground with the reductant like powders. As a result, decomposition has been observed in reduced infinite-layer thin films \cite{OnozukaDaltonTransactions2016_NdNiO2fluorite,Ikeda2013PhysicaC_LaNiO2_NGO,Ikeda2016APE_LaNiO2_NGO_TEM,LeeAPL2020_aspects_synthesis}; the precise identification of the decomposition products is, however, difficult in thin films in part due to the small film volume. An additional challenge associated with metal hydride reductions is the possibility of hydrogen incorporation in the lattice, an effect which has been proposed as a reason for the absence of superconductivity in some films \cite{SiPRL2020_topotactichydrogen,MalyiPRB2022_NdNiO2instability_Hintercalation, SiCrystals2022_topotactic}. For example, an oxyhydride NdNiO$_{x}$H$_{y}$ phase was reported in NdNiO$_{3}$ films reduced with CaH$_{2}$ \cite{OnozukaDaltonTransactions2016_NdNiO2fluorite}, but to date no experiment has linked insulating behavior with hydrogen intercalation. Solid state reduction techniques have recently been demonstrated as a promising alternative to metal hydride reductions \cite{Wei2023PRM_solidstatereduction}. Thus to minimize the formation of decomposition products and potential hydrogen intercalation during metal hydride reductions, careful optimization of reduction duration and temperature is essential. \begin{figure*} \centering \includegraphics[width =2\columnwidth]{Figures - main text (final revision)/fig6_final.pdf} \caption{Structural and electrical transport characterization of NdNiO$_{3}$ and Nd$_{4}$Ni$_{3}$O$_{10}$ reductions on LaAlO$_{3}$ (001). (a) XRD scans of a 15.5 nm NdNiO$_{3}$ film as-synthesized (bottom) and reduced for 3 hours at 290\degree C (top). The vertical dashed lines denote the 200 and 002 peak positions of bulk NdNiO$_{2}$ \cite{HaywardSSS2003_NdNiO2_NaH}. (b) Schematic crystal structures illustrating the formation of a mixture of $a$- and $c$-axis oriented NdNiO$_{2}$ upon reduction of NdNiO$_{3}$. 
(c) XRD scans of a 20.0 nm Nd$_{4}$Ni$_{3}$O$_{10}$ film as-synthesized (bottom) and Nd$_{4}$Ni$_{3}$O$_{8}$ reduced for 3 hours at 290\degree C (top). The vertical solid and dashed lines denote 00$l$ peak positions of bulk Nd$_{4}$Ni$_{3}$O$_{10}$ \cite{OlafsenJournalSolidStateChem2000_Nd4310structure} and Nd$_{4}$Ni$_{3}$O$_{8}$ \cite{PoltavetsInorChem2007_Ln438}, respectively. The primed indices distinguish the reduced layered square-planar phase from the as-synthesized Ruddlesden–Popper \cite{PoltavetsInorChem2007_Ln438}. (d) Schematic crystal structures illustrating the reduction of $c$-axis oriented Nd$_{4}$Ni$_{3}$O$_{10}$ to $c$-axis oriented Nd$_{4}$Ni$_{3}$O$_{8}$. (e) Resistivity versus temperature measurement of the Nd$_{4}$Ni$_{3}$O$_{10}$ and Nd$_{4}$Ni$_{3}$O$_{8}$ films in (c). } \label{Fig6} \end{figure} \begin{figure*} \centering \includegraphics[width =2\columnwidth]{Figures - main text (final revision)/fig7_final.pdf} \caption{Structural characterization of a reduced Nd$_{4}$Ni$_{3}$O$_{8}$ / LaAlO$_{3}$ film. (a) Large field-of-view MAADF-STEM image. (b) Atomic-resolution HAADF-STEM image of a region exhibiting high quality layered square-planar structure. (c) HAADF and (d) ABF-STEM images of a small field-of-view with an overlaid atomic model of the square-planar structure. The full field-of-view is provided in Supplementary Fig.\ S28. (e) Gaussian-smoothed map of local $c$-axis canting relative to the out-of-plane direction. The STEM image and raw tilt map are provided in Supplementary Fig.\ S27. The XRD and transport measurements of this film can be found in Supplementary Fig.\ S7.} \label{Fig7} \end{figure*} Additionally, the dramatic expansion (contraction) of the in-plane (out-of-plane) lattice parameter through reduction further complicates the stabilization of high quality square-planar nickelates. After reduction, La$_{1-x}$Ca$_{x}$NiO$_{2}$ single crystals exhibit three orthogonally-oriented domains of the infinite-layer phase separated by micro-cracks \cite{PuphalSciAdv2021_nobulkSC}. Nevertheless, La$_{1-x}$Ca$_{x}$NiO$_{2}$ and Pr$_{4}$Ni$_{3}$O$_{8}$ \cite{PuphalSciAdv2021_nobulkSC} single crystals are metallic but not yet superconducting to date. In contrast, most reduced infinite-layer \cite{WangPRM2020_nobulkSC,LiNatComm2020_nobulkSC,HeChangpingJourPhysCondMatt2021_SmSrNiO2_noSC, Huo2022ChiPhysB_noSCinLaSrNiO2} and layered nickelate powders \cite{PoltavetsJAmChemSoc2006_La326, PoltavetsInorChem2007_Ln438, PoltavetsPRL2008_La326_electronicproperties, Poltavets2010PRL_La438, Sakurai2013PhysC_R438insulating, Hao2021PRB_Nd438_chargestripe, LiSciChinaPhys2021_4310vs438} are insulating; metallic (Pr,Nd)$_{4}$Ni$_{3}$O$_{8}$ pellets were, however, achieved with reductions using a sulfur getter to promote the removal of apical oxygens \cite{Nakata2016AdvCondMattPhys_metallicNd3.5Sm0.5Ni3O8, Miyatake202JPSCP_R438_substitution}. Thus metallicity in square-planar, or $T'$, nickelates may be exquisitely sensitive to the oxygen content like in $T'$ cuprates \cite{Brinkmann1996PhysC_Pr2CuO4_oxygenstoich}. Additional proposals for the lack of superconductivity in bulk reduced nickelates include nickel deficiency and the nucleation of ferromagnetic nickel grains \cite{SharmaArxiv2022whySCabsentinbulknickelates}. Thin films, on the other hand, have several advantages that address these issues.
First, thin films host an inherent in-plane versus out-of-plane anisotropy defined by the substrate that encourages the formation of a single orientation of the reduced phase. Furthermore, the thin film geometry minimizes the size of potential ferromagnetic nickel clusters and promotes reduction uniformity due to the large surface area exposed to the reductant. Thin films, however, face a challenge absent in bulk compounds: defects may form in response to reduction-induced compressive strain. In LaNiO$_{2}$ / SrTiO$_{3}$, for example, macroscopic \cite{Kawai2010CG&D_a-LaNiO2} and local \cite{Osada2021AdvMat_LaNiO2, Ikeda2016APE_LaNiO2_NGO_TEM} $a$-axis oriented infinite-layer domains form, likely to relieve the 1.4\% compressive strain \cite{Osada2021AdvMat_LaNiO2}. This effect was also demonstrated in LaNiO$_{2}$ films which exhibit an increasing fraction of $a$-axis oriented LaNiO$_{2}$ with increasing compressive strain \cite{Ikeda2013PhysicaC_LaNiO2_NGO}. While strain is an important factor in defect formation, synthesis optimization of the parent compound has suppressed the formation of such defects upon reduction \cite{Osada2021AdvMat_LaNiO2, KyuhoLeeArxiv2022_NdNiO2_LSAT_transport}. The reduction-induced strain-relieving mechanisms in layered nickelates may differ from those observed in infinite-layer nickelates. With these challenges in mind, we now turn to reductions on LaAlO$_{3}$, which yield the highest quality as-synthesized Ruddlesden–Popper nickelates (Fig.\ \ref{Fig5}), but also the highest compressive strain in the reduced state (Fig.\ \ref{Fig1}). We first reduce perovskite NdNiO$_{3}$ on LaAlO$_{3}$ to study how a system without horizontal rock salt layers undergoes an increase in compressive strain from 0.5\% to 3.3\% upon reduction. We present the XRD scans of an as-synthesized NdNiO$_{3}$ and reduced NdNiO$_{2}$ film on LaAlO$_{3}$ in Fig.\ \ref{Fig6}(a). The as-synthesized film exhibits the 002 NdNiO$_{3}$ peak at $\sim$47.6$\degree$ with no second phases evident over the full scan range (Supplementary Fig.\ S10). In the reduced film, we observe a peak at $\sim$46.3$\degree$ along with a lower intensity peak at $\sim$56.0$\degree$, which likely correspond to the 200 and 002 peaks of NdNiO$_{2}$, respectively. Incremental reduction of the film results in the immediate formation of the $a$-axis oriented NdNiO$_{2}$ phase (Supplementary Fig.\ S10), which suggests this phase is not formed as a result of over-reduction, as was observed in LaNiO$_{2}$ / SrTiO$_{3}$ \cite{Kawai2010CG&D_a-LaNiO2}. Furthermore, NdNiO$_{2}$ / LaAlO$_{3}$ exhibits insulating electrical transport (Supplementary Fig.\ S10). These results suggest that NdNiO$_{3}$ undergoes large-scale $a$-axis reorientation upon reduction under high compressive strain, as illustrated in Fig.\ \ref{Fig6}(b). Next we reduce Nd$_{4}$Ni$_{3}$O$_{10}$ on LaAlO$_{3}$ to investigate how rock salt layering impacts the structural transformation to the square-planar phase. The reduction-induced increase in compressive strain of the Ruddlesden–Popper (from 0.9\% to 3.2\%) is comparable to that of the perovskite (from 0.5\% to 3.3\%). In Fig.\ \ref{Fig6}(c), we present XRD scans of Nd$_{4}$Ni$_{3}$O$_{10}$ and Nd$_{4}$Ni$_{3}$O$_{8}$ films on LaAlO$_{3}$. The parent film exhibits all expected Nd$_{4}$Ni$_{3}$O$_{10}$ 00$l$ peaks and, as shown in Fig.\ \ref{Fig6}(e), a metal-to-insulator transition at $\sim$150 K consistent with previous reports \cite{Sun_YNie2021PRB,PanPRM2022_RPgrowth}.
Upon reduction, the Nd$_{4}$Ni$_{3}$O$_{10}$ 00$l$ peaks shift toward the Nd$_{4}$Ni$_{3}$O$_{8}$ 00$l'$ peaks, revealing the formation of the square-planar phase. Furthermore, the lack of 200 NdNiO$_{2}$ or 220 Nd$_{4}$Ni$_{3}$O$_{8}$ peaks at $\sim$46.3$\degree$ suggests that the film has retained $c$-axis orientation through reduction, as illustrated in Fig.\ \ref{Fig6}(d). Reciprocal space maps in Supplementary Fig.\ S12, however, demonstrate that the Nd$_{4}$Ni$_{3}$O$_{8}$ film is partially relaxed with a 3.89 \AA \ in-plane lattice constant compared to the 3.915 \AA \ bulk value. XAS at the oxygen K-edge in Supplementary Fig.\ S13 further corroborates the formation of the square-planar phase. Thus, in contrast to NdNiO$_{3}$, Nd$_{4}$Ni$_{3}$O$_{10}$ retains global $c$-axis orientation upon reduction, despite the high compressive strain on LaAlO$_{3}$. Like NdNiO$_{2}$ / LaAlO$_{3}$, however, the reduced layered nickelate is insulating (Fig.\ \ref{Fig6}(e)). In fact, all films reduced on LaAlO$_{3}$ are insulating, likely due to reduction-induced structural disorder (Supplementary Note 4). Furthermore, of the three strain states studied here, $R_{n+1}$Ni$_{n}$O$_{2n+2}$ films under high compressive strain on LaAlO$_{3}$ are the furthest from cuprate-like due to the larger charge-transfer energy and higher neodymium DOS at $\varepsilon_{\mathrm{F}}$ (Supplementary Note 1). Thus if optimized to a metallic state, $R_{n+1}$Ni$_{n}$O$_{2n+2}$ / LaAlO$_{3}$ films could be a platform to study the role of the charge transfer energy and neodymium 5$d$ states in nickelate superconductivity. Cross-sectional STEM measurements shown in Fig.\ \ref{Fig7} reveal the microstructure of the reduced Nd$_{4}$Ni$_{3}$O$_{10}$ / LaAlO$_{3}$ film. The large field-of-view MAADF-STEM image in Fig.\ \ref{Fig7}(a) shows some decomposition near the surface and diagonal defects between crystalline regions throughout the film. Atomic-resolution HAADF-STEM imaging in Fig.\ \ref{Fig7}(b) shows one such clean region with horizontal layer ordering similar to the as-synthesized film (Figs.\ \ref{Fig4} and \ref{Fig5}(a)). The reduced square-planar structure is visible by close inspection of the atomic lattice in Fig.\ \ref{Fig7}(c) which shows a more pronounced in- versus out-of-plane anisotropy of the pseudo-infinite-layer unit cell. Annular bright field (ABF)-STEM imaging in Fig.\ \ref{Fig7}(d) further illustrates the reduced square-planar phase with the absence of oxygen atomic columns in the horizontal neodymium planes. Interspersed between these regions of pristine Nd$_{4}$Ni$_{3}$O$_{8}$, we find that the diagonal defects observed by MAADF-STEM in Fig.\ \ref{Fig7}(a) are in fact regions of local $c$-axis canting. Using local wavefitting analysis \cite{smeaton2021mapping}, in Fig.\ \ref{Fig7}(e) we map the local $c$-axis orientation across one such diagonal defect, revealing regions of opposite canting of up to several degrees. Within this window of variation ($\sim\pm 10 \degree$), however, the film's $c$-axis remains globally out-of-plane. By contrast, XRD scans in Fig.\ \ref{Fig6}(a) suggest that a perovskite NdNiO$_{3}$ film on LaAlO$_{3}$ can also be reduced to NdNiO$_{2}$, but that most of the film attains $a$-axis orientation with only a tiny peak corresponding to $c$-axis orientation remaining visible.
The Ruddlesden–Popper structure therefore appears to stabilize the $c$-axis orientation even under high 3.2\% compressive strain, preventing the large-scale reorientation observed in the infinite-layer counterpart on LaAlO$_3$. Under more modest (1.4\%) compressive strain, LaNiO$_2$ / SrTiO$_3$ exhibits a small fraction of $a$-axis reorientation along diagonal planes \cite{Osada2021AdvMat_LaNiO2}, similar to those observed in the triple-layer films here. An important distinction between the infinite- and triple-layer systems is that of the heavy cation lattice and the oxygen-filled (or reduced) planes. In the perovskite / infinite-layer system, the orientation of empty oxygen planes effectively defines the local $c$-axis (orthogonal to the rare-earth planes). Without the imposition of additional symmetry by oxygen occupancy, however, the cation lattice directions are essentially equivalent within the pseudocubic approximation. Local reorientation can therefore be equivalently described as local rearrangement of occupied oxygen sites. In the Ruddlesden–Popper structure, on the other hand, the heavy cation lattice bears inherent symmetry distinctions even without the oxygen lattice, with the cation $c$-axis defined as orthogonal to the rock salt planes. Here, we observe a global preservation of this $c$-axis direction upon reduction: all the Ruddlesden–Popper layers remain in the same uniform plane from the as-synthesized to reduced phase (the energy scale of cation mobility that would be required for Ruddlesden–Popper reorientation is far above the temperature of the topotactic reduction process). How closely, then, is the oxygen lattice tied to this pre-defined cation symmetry? Locally, we find nanometer-scale regions near the canting defects which suggest an oxygen lattice reorientation internal to the global Ruddlesden–Popper structure. Supplementary Fig.\ S29 shows a high-magnification image of the defect structure mapped in Fig.\ \ref{Fig7}(e). The atomic overlay in the bottom right of the image -- where the film is epitaxial, uncanted, and well-ordered -- shows how the reduced structure can be mapped onto a rectangular sublattice with longer (shorter) in-plane (out-of-plane) atomic spacings. The rectangles outline 3 $\times$ 3 neodymium atomic sites, with the long and short dimensions colored cyan and yellow, respectively. We observe an identical (but slightly rotated) structure on one side of the canted lattice near the top left of the image. Between the two, however, atomic distances in a subset of the positively-canted region are better described by a near-90$\degree$ rotation of the 3 $\times$ 3 rectangle. Even where the planar Ruddlesden–Popper structure is preserved, the in-plane spacings are shorter than the out-of-plane spacings, suggesting the internal oxygen lattice in this region differs from elsewhere in the film. Extracting the precise atomic structure of these defects will be challenging given the higher-degree of disorder in the surrounding lattice and the small total volume they comprise. While the observation of such competing reorientation likely does not strongly impact the macroscopic properties of the films studied here, it may inspire future efforts to decouple the various atomic sublattices in these or other layered materials. These results suggest that to mitigate strain-induced extended defects upon reduction, the compressive strain in the reduced state should be minimized. 
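For reference, these misfit values follow from the pseudocubic lattice mismatch $\varepsilon = (a_{\mathrm{sub}}-a_{\mathrm{film}})/a_{\mathrm{film}}$. Taking the bulk Nd$_{4}$Ni$_{3}$O$_{8}$ in-plane constant of 3.915 \AA \ quoted above, together with approximate literature pseudocubic constants of $\sim$3.79 \AA \ for LaAlO$_{3}$ and $\sim$3.86 \AA \ for NdGaO$_{3}$,
\begin{equation*}
\varepsilon_{\mathrm{LAO}} \approx \frac{3.79-3.915}{3.915} \approx -3.2\%, \qquad
\varepsilon_{\mathrm{NGO}} \approx \frac{3.86-3.915}{3.915} \approx -1.4\%,
\end{equation*}
in line with the compressive strains discussed for the reduced films on LaAlO$_{3}$ and NdGaO$_{3}$.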
NdGaO$_{3}$ is an appealing option as the compressive strain of the reduced compound is limited to 1.5\%, compared to 3.2\% on LaAlO$_{3}$ (Fig.\ \ref{Fig1}). In Fig.\ \ref{Fig8}, we present the reduction of the same Nd$_{4}$Ni$_{3}$O$_{10}$ / NdGaO$_{3}$ film discussed in Fig.\ \ref{Fig3}. Upon reduction, the superlattice peaks in Fig.\ \ref{Fig8}(a) shift toward positions consistent with the square-planar structure. Unlike the films on LaAlO$_{3}$ which partially relax upon reduction (Supplementary Fig.\ S12), we observe that Nd$_{4}$Ni$_{3}$O$_{8}$ / NdGaO$_{3}$ is epitaxially strained to the substrate (Supplementary Fig.\ S22). As shown in Fig.\ \ref{Fig8}(b), the as-synthesized Nd$_{4}$Ni$_{3}$O$_{10}$ film exhibits a metal-to-metal transition at $\sim$150 K, consistent with the bulk compound \cite{Greenblatt1997_RPnickelates,LiSciChinaPhys2021_4310vs438}. Grown instead on LaAlO$_{3}$, Nd$_{4}$Ni$_{3}$O$_{10}$ possesses a metal-to-insulator transition (Fig.\ \ref{Fig6}(e) and Refs. \cite{Sun_YNie2021PRB,PanPRM2022_RPgrowth}). The reduced Nd$_{4}$Ni$_{3}$O$_{8}$ / NdGaO$_{3}$ is metallic with a resistive upturn, similar to the infinite-layer nickelates \cite{LiNat2019, PuphalSciAdv2021_nobulkSC} and Pr$_{4}$Ni$_{3}$O$_{8}$ \cite{ZhangNatPhys2017_orb_polarization, Miyatake202JPSCP_R438_substitution}. More details on the reduction of Nd$_{4}$Ni$_{3}$O$_{10}$ / NdGaO$_{3}$ can be found in Supplementary Note 5. Thus, metallic and epitaxially strained layered square-planar nickelates can be stabilized on NdGaO$_{3}$. Cross-sectional STEM images of the reduced Nd$_{4}$Ni$_{3}$O$_{8}$ / NdGaO$_{3}$ film in Fig.\ \ref{Fig8} are presented in Fig.\ \ref{Fig9}. The large-scale MAADF-STEM image in Fig.\ \ref{Fig9}(a) shows Nd$_{4}$Ni$_{3}$O$_{8}$ ordering of fair crystalline quality, disordered regions with extended defects, as well as reduction-induced disorder at the surface. We observe the same effects in the HAADF-STEM images in Fig.\ \ref{Fig9}(b)-(d), which show regions representative of varying levels of crystalline order. All three regions (and especially Region 1 in Fig.\ \ref{Fig9}(b)) show signs of reduced crystallinity near the film surface which are likely reduction-induced and not a result of TEM sample preparation (e.g.\ ion beam damage). In infinite-layer thin films, stabilizing capping layers of SrTiO$_3$ have proven an effective way to reduce similar surface degradation \cite{LeeAPL2020_aspects_synthesis}. Near the top left of Region 2 in Fig.\ \ref{Fig9}(c), minor $c$-axis canting similar to that discussed in Fig.\ \ref{Fig7} can also be observed. All three regions show areas with significant disorder which may have nucleated near vertical rock-salt faults in the as-synthesized film (see Figs.\ \ref{Fig3} and \ref{Fig5}(b)). Regions without amorphization but exhibiting vertical rock salt faults are highlighted with yellow arrows. In a relatively clean region of the film, HAADF- and ABF-STEM images in Fig.\ \ref{Fig9}(e-f) show the expected $n=3$ ordering and square-planar structure. Notably, the overall crystallinity of the Nd$_{4}$Ni$_{3}$O$_{8}$ / NdGaO$_{3}$ film in Fig.\ \ref{Fig9} is higher than the crystallinity of the superconducting quintuple-layer nickelate \cite{PanNatMat2021}, likely due to the challenges in stabilizing the higher order $n=5$ phase. Both reduced $n=3$ and $n=5$ compounds on NdGaO$_{3}$, however, exhibit similar reduction-induced disorder, likely nucleating at vertical rock salt regions formed during the synthesis of the parent compound. 
\begin{figure*} \centering \includegraphics[width =2.0\columnwidth]{Figures - main text (final revision)/fig8_final.pdf} \caption{Structural and electrical transport characterization of Nd$_{4}$Ni$_{3}$O$_{10}$ and Nd$_{4}$Ni$_{3}$O$_{8}$ on NdGaO$_{3}$ (110). (a) XRD scans of an as-synthesized Nd$_{4}$Ni$_{3}$O$_{10}$ and reduced Nd$_{4}$Ni$_{3}$O$_{8}$ film on NdGaO$_{3}$. The film is 25.5 nm thick and reduced for 3 hours at 300\degree C. The vertical solid and dashed lines denote the 00$l$ peak positions of bulk Nd$_{4}$Ni$_{3}$O$_{10}$ and Nd$_{4}$Ni$_{3}$O$_{8}$, respectively. The primed indices distinguish the reduced square-planar phase from the as-synthesized Ruddlesden–Popper. The asterisks denote NdGaO$_{3}$ substrate peaks. (b) Resistivity versus temperature measurements of the Nd$_{4}$Ni$_{3}$O$_{10}$ and Nd$_{4}$Ni$_{3}$O$_{8}$ films in (a). Data reproduced from Ref.\ \cite{PanNatMat2021}. } \label{Fig8} \end{figure*} \begin{figure*} \centering \includegraphics[width =2\columnwidth]{Figures - main text (final revision)/fig9_final.pdf} \caption{Structural characterization of the reduced Nd$_{4}$Ni$_{3}$O$_{8}$ / NdGaO$_{3}$ (110) film shown in Fig.\ \ref{Fig8}. (a) Large field-of-view MAADF-STEM image. (b–d) HAADF-STEM images of three separate regions with varying local densities of vertical rock salt faults. Blue, red, and yellow arrows or lines highlight local $c$-axis canting, with red marking disordered regions, and yellow marking vertical rock salt faults without major amorphization. (e) HAADF- and (f) annular bright field (ABF-)STEM images of the same region. We provide the full field-of-view images in Supplementary Fig.\ S30. } \label{Fig9} \end{figure*} \subsection{Cation stoichiometry} Optimized Nd$_{4}$Ni$_{3}$O$_{8}$ ($n=3$) and Nd$_{6}$Ni$_{5}$O$_{12}$ ($n=5$) films on NdGaO$_{3}$ are metallic and superconducting, respectively \cite{PanNatMat2021}. However, the metallicity of these films is highly sensitive to as-synthesized structural differences and reduction conditions (Supplementary Fig.\ S21 and Supplementary Fig.\ S2 in Ref.\ \cite{PanNatMat2021}). In the infinite-layer nickelates, cation stoichiometry strongly influences metallicity and superconductivity \cite{LeeAPL2020_aspects_synthesis, Li_YNieFrontiersPhys2021_cation_stoich_SC_112}. We thus investigate the role of cation stoichiometry in the metallicity of Nd$_{4}$Ni$_{3}$O$_{8}$ on NdGaO$_{3}$. In Fig.\ \ref{Fig10-stoichiometry}(a) we present XRD scans of five Nd$_{4}$Ni$_{3}$O$_{10}$ films with systematically varying neodymium content. The structural quality of the Nd$_{4}$Ni$_{3}$O$_{10}$ films deteriorates as the neodymium content is varied from the optimal value, evident from the broadening and reduced intensity of the $008$ peak. Electrical transport measurements in Fig.\ \ref{Fig10-stoichiometry}(b) demonstrate that the resistivity of the Nd$_{4}$Ni$_{3}$O$_{10}$ films is relatively insensitive to cation stoichiometry, except for the 6\% neodymium-rich film, which exhibits higher resistivity than the rest of the films in the series. The decrease in resistive upturn temperature with increased off-composition is consistent with the decrease in metal-insulator transition temperature with neodymium-richness in NdNiO$_{3}$ \cite{BreckenfeldAppliedMaterials2014_nonstoichiometryNdNiO3}. We provide more details regarding the MBE synthesis and properties of these films in Supplementary Note 8. 
Next we reduce the films shown in Fig.\ \ref{Fig10-stoichiometry}(a) and present the XRD scans in Fig.\ \ref{Fig10-stoichiometry}(c). The three films within 3\% of optimal stoichiometry exhibit all Nd$_{4}$Ni$_{3}$O$_{8}$ $00l$ peaks, while the two films that deviate from optimal stoichiometry by 6\% exhibit substantially diminished XRD peak intensity. These differences in structural quality are corroborated by the resistivity measurements in Fig.\ \ref{Fig10-stoichiometry}(d). The films that deviate from optimal stoichiometry by 6\% are insulating, while the films closer to optimal stoichiometry are metallic (more reduction trials are provided in Supplementary Figs.\ S16–S20). These results suggest that Nd$_{4}$Ni$_{3}$O$_{8}$ films can be metallic only if the cation stoichiometry lies within $\sim$3\% of the optimal value. It was similarly demonstrated that Nd$_{1-x}$Sr$_{x}$NiO$_{2}$ films are insulating if the cation stoichiometry is off by 10\% \cite{Li_YNieFrontiersPhys2021_cation_stoich_SC_112}. However, we also observe metallicity in Nd$_{4}$Ni$_{3}$O$_{8}$ films that are structurally inferior to those shown in Fig.\ \ref{Fig10-stoichiometry} (Supplementary Fig.\ S21(c)). The metallicity of reduced Nd$_{4}$Ni$_{3}$O$_{8}$ films is thus intricately dependent on a variety of structural factors including cation stoichiometry, oxygen content, and extended defect density. \begin{figure*} \centering \includegraphics[width =2\columnwidth]{Figures - main text (final revision)/fig10_final.pdf} \caption{Structural and electrical transport characterization of Nd$_{4}$Ni$_{3}$O$_{10}$ and Nd$_{4}$Ni$_{3}$O$_{8}$ films with varying neodymium content. (a) XRD and (b) resistivity measurements of the parent Nd$_{4}$Ni$_{3}$O$_{10}$ films on NdGaO$_{3}$. (c) XRD and (d) resistivity measurements of the reduced Nd$_{4}$Ni$_{3}$O$_{8}$ films in (a). All films are $\sim$11.0 nm thick and reduced for 3 hours at 290\degree C. The vertical solid and dashed lines denote 00$l$ peak positions of bulk Nd$_{4}$Ni$_{3}$O$_{10}$ \cite{OlafsenJournalSolidStateChem2000_Nd4310structure} and Nd$_{4}$Ni$_{3}$O$_{8}$ \cite{PoltavetsInorChem2007_Ln438}, respectively. The primed indices distinguish the reduced layered square-planar phase from the as-synthesized Ruddlesden–Popper \cite{PoltavetsInorChem2007_Ln438}. The asterisks denote NdGaO$_{3}$ substrate peaks. Additional reduction trials are provided in Supplementary Figs.\ S16–S20.} \label{Fig10-stoichiometry} \end{figure*} \section{Discussion} In Fig.\ \ref{Fig6}, we explore the role of rock salt layers in topotactic reductions. While the reduction of Nd$_{4}$Ni$_{3}$O$_{10}$ yields $c$-axis oriented Nd$_{4}$Ni$_{3}$O$_{8}$ with local $c$-axis canting, the reduction of NdNiO$_{3}$ primarily stabilizes $a$-axis oriented NdNiO$_{2}$. The most important distinction between NdNiO$_{3}$ and Nd$_{4}$Ni$_{3}$O$_{10}$ is the presence of rock salt layers in the Ruddlesden–Popper, which transform into fluorite-like NdO$_{2}$ layers through reduction (Fig.\ \ref{Fig1}). Our results suggest that rock salt and fluorite-like spacer layers suppress $a$-axis reorientation and promote the deintercalation of apical oxygens along planes parallel to the spacer layers, as discussed in Supplementary Fig.\ S29. These spacer layers additionally facilitate the stabilization of the square-planar phase under much higher compressive strain than currently possible in infinite-layer systems. 
This capability is particularly appealing as additional compressive strain has thus far enhanced $T_\textrm{c}$ in infinite-layer nickelates \cite{RenArxiv2021_PrSrNiO2_LSAT_STO,KyuhoLeeArxiv2022_NdNiO2_LSAT_transport}, although it is unclear to what extent this enhancement can be attributed to improved crystallinity. Additionally, oxygen diffusion is known to be anisotropic in Ruddlesden–Popper compounds, in contrast to perovskites \cite{LeeMaterials2017_RPoxygenmobility,TomkiewiczMatChemA2015_RPoxygenpathways}; the influence of rock salt layers on reduction kinetics, however, remains to be investigated. An additional issue raised by our work is the nature of the insulating and metallic states in reduced layered nickelates on LaAlO$_{3}$ and NdGaO$_{3}$, respectively. We speculate that in Nd$_{4}$Ni$_{3}$O$_{8}$ / LaAlO$_{3}$, the compressive strain-induced extended defects shown in Fig.\ \ref{Fig7}(e) preclude metallic transport through the film, despite the prevalence of high-crystallinity regions shown in Fig.\ \ref{Fig7}(b). Nevertheless, the potential role of the twinned structural domains in LaAlO$_{3}$ should be considered as well. The typical size of a twin domain in LaAlO$_{3}$ is $\sim$1–100 \micro m \cite{Burema2019JourVacSci_LAOtwinning}. The distance between $c$-axis canting defect regions in Nd$_{4}$Ni$_{3}$O$_{8}$ / LaAlO$_{3}$, on the other hand, is $\sim$100 nm, as shown in Fig.\ \ref{Fig7}(a). The length scale associated with $c$-axis canting defects is thus at least an order of magnitude smaller than the size of LaAlO$_{3}$ twin domains. This difference in length scale suggests the $c$-axis canting defects may form independently of the twin domains. In summary, we have synthesized Ruddlesden–Popper Nd$_{n+1}$Ni$_{n}$O$_{3n+1}$ and layered square-planar Nd$_{n+1}$Ni$_{n}$O$_{2n+2}$ films under multiple strain states. We show that attempts to synthesize Nd$_{4}$Ni$_{3}$O$_{10}$ under 2.1\% tensile strain on SrTiO$_{3}$ result in an extremely disordered structure with a high density of vertical rock salt faults, disqualifying these films from consideration for reduction. Synthesized under 0.8\% tensile strain on NdGaO$_{3}$, Nd$_{4}$Ni$_{3}$O$_{10}$ exhibits horizontal rock salt ordering with a small density of vertical rock salt faults. We synthesize the highest quality Nd$_{4}$Ni$_{3}$O$_{10}$ films under 0.9\% compressive strain on LaAlO$_{3}$, with few extended defects observed. Thus compared to perovskite nickelates, Ruddlesden–Popper films are more prone to the formation of tensile strain-induced extended defects because the horizontal rock salt layers can instead orient vertically to form strain-relieving rock salt faults. Therefore, minimizing the as-grown tensile strain is crucial to decreasing the extended defect density in the reduced compounds, as was demonstrated in the infinite-layer nickelates \cite{KyuhoLeeArxiv2022_NdNiO2_LSAT_transport}. Minimizing tensile strain is particularly crucial in the layered nickelates due to the increased propensity for the formation of strain-relieving extended defects compared to the infinite-layer nickelates. We reduced the Ruddlesden–Popper nickelate films on LaAlO$_{3}$ and NdGaO$_{3}$ to the layered square-planar phase. In Nd$_{4}$Ni$_{3}$O$_{8}$ / LaAlO$_{3}$, we observe $c$-axis canting diagonal defects interspersed between regions of high crystalline quality. All films reduced on LaAlO$_{3}$ are insulating, likely due to reduction-induced structural disorder. 
Reduced NdNiO$_{2}$ / LaAlO$_{3}$ is also insulating but, in contrast to the layered nickelates, is primarily $a$-axis oriented. Horizontal rock salt layers in the Ruddlesden–Popper structure thus suppress $c$-axis reorientation during reduction under high compressive strain. Reduced Nd$_{4}$Ni$_{3}$O$_{8}$ films on NdGaO$_{3}$, on the other hand, demonstrate high-quality layered square-planar ordering as well as regions with extended defects such as vertical rock salt faults. Despite the presence of these extended defects, Nd$_{4}$Ni$_{3}$O$_{8}$ ($n=3$) and Nd$_{6}$Ni$_{5}$O$_{12}$ ($n=5$) films on NdGaO$_{3}$ are metallic and superconducting, respectively \cite{PanNatMat2021}. The metallic state in Nd$_{4}$Ni$_{3}$O$_{8}$ / NdGaO$_{3}$, however, can only be stabilized if the neodymium content lies within $\sim$3\% of the optimal quantity. The competing requirements for the MBE synthesis of the parent compound and reduction to the square-planar phase are thus met by synthesizing the parent compound under tensile strain on NdGaO$_{3}$ to accommodate the large increase in the in-plane lattice parameter upon reduction, at the cost of forming strain-induced vertical rock salt faults in the as-synthesized film. Through our systematic study across multiple substrates, we establish a method to synthesize layered square-planar nickelate thin films and set limits on the ability to strain-engineer these compounds. Furthermore, our DFT calculations demonstrate that Nd$_{4}$Ni$_{3}$O$_{8}$, if optimized to a metallic state on LaAlO$_{3}$ and SrTiO$_{3}$, could be a platform to study the role of the charge-transfer energy and rare-earth 5$d$ states in nickelate superconductivity. Finally, this work provides a comprehensive starting point from which to launch future investigations into the role of epitaxial strain, dimensionality, and chemical doping in nickelate superconductivity. \section{Methods} \subsection{Molecular beam epitaxy synthesis and CaH$_{2}$ reduction} We employ ozone-assisted MBE to synthesize the Ruddlesden–Popper nickelates on LaAlO$_{3}$ (001), NdGaO$_{3}$ (110), and SrTiO$_{3}$ (001). To calibrate the nickel and neodymium elemental fluxes, we synthesize NiO on MgO (001) and Nd$_{2}$O$_{3}$ on yttria-stabilized zirconia (YSZ (111)), then measure the film thickness via x-ray reflectivity \cite{Sun2022PRM_flux_calibration}. Next, we synthesize NdNiO$_{3}$ / LaAlO$_{3}$ (001) and use the $c$-axis lattice constant and film thickness to refine the Nd / Ni ratio and monolayer dose, respectively \cite{Li_YNieFrontiersPhys2021_cation_stoich_SC_112}. Using the optimized neodymium and nickel shutter times from the synthesis of NdNiO$_{3}$ / LaAlO$_{3}$, we synthesize the Ruddlesden–Popper nickelates via monolayer shuttering. Both NdNiO$_{3}$ and Ruddlesden–Popper nickelates are synthesized with $\sim$1.0$\times$10$^{-6}$ Torr distilled ozone (Heeg Vacuum Engineering) and 500\degree-600\degree C manipulator temperature. The MBE synthesis conditions and calibration scheme are described in Refs.\ \cite{PanPRM2022_RPgrowth, PanNatMat2021}; similar techniques were also used in Refs.\ \cite{Li_YNie2020APL, Sun_YNie2021PRB}. The Ruddlesden–Popper films were reduced to the layered square-planar phase using CaH$_{2}$ topotactic reduction. The as-synthesized films are cut into $\sim$2.5$\times$5-mm$^2$ pieces, wrapped in aluminum foil (All-Foils), then inserted into borosilicate tubes (Chemglass Life Sciences) with $\sim$0.1 grams of CaH$_{2}$ pellets ($>$92\%, Alfa Aesar). 
The borosilicate tube is sealed at $<0.5$ mTorr using a small turbomolecular pump. The sealed glass ampoule is heated in a convection oven (Heratherm, Thermo Fisher Scientific) for several hours at $\sim$290\degree C, with a 10\degree C min$^{-1}$ heating rate. After reduction, the film is rinsed in 2-butanone and isopropanol to remove CaH$_{2}$ residue. Similar methods are commonly used elsewhere \cite{LeeAPL2020_aspects_synthesis, Li_YNieFrontiersPhys2021_cation_stoich_SC_112, PanNatMat2021}. \subsection{Structural characterization} X-ray diffraction (XRD) measurements were performed using a Malvern Panalytical Empyrean diffractometer with Cu K$\alpha_1$ ($\lambda = 1.5406$ \AA) radiation. Reciprocal space maps (RSMs) were taken with a PIXcel3D area detector. Lattice constants were calculated using Nelson-Riley fits of the superlattice peak positions \cite{nelsonriley}. Cross-sectional STEM specimens were prepared using the standard focused ion beam (FIB) lift-out process on a Thermo Scientific Helios G4 UX FIB or an FEI Helios 660. High-angle annular dark-field (HAADF-) and medium-angle annular dark field (MAADF-) STEM images were acquired on an aberration-corrected Thermo Fisher Scientific Spectra 300 X-CFEG operated at 300 kV with a probe convergence semi-angle of 30 mrad and inner collection angles of 66 mrad (HAADF) or 17 mrad (MAADF). Annular bright field (ABF)-STEM images were acquired on an aberration-corrected 300 kV FEI Titan Themis with a probe convergence semi-angle of 21.4 mrad and 12 mrad inner collection angle. Additional HAADF-STEM measurements were performed on a Thermo Fisher Titan Themis Z G3 operated at 200 kV with a probe convergence semi-angle of 19 mrad and inner collection angle of 71 mrad. Lattice-scale disorder in the as-synthesized films is visualized by extracting modulations in the (101) and ($\bar{1}$01) pseudocubic lattice fringes using the phase lock-in method described in Refs. \cite{goodge2022disentangling, fleck2022atomic} and implemented by the Python analysis package publicly available at \href{https://doi.org/10.34863/amcp-4s12}{https://doi.org/10.34863/amcp-4s12}. In particular, vertical and horizontal rock salt planes or rock salt faults appear as an apparent strong local compressive strain in the pseudocubic lattice fringes. The original HAADF-STEM images used for analysis and the raw output strain maps are provided in Supplementary Fig.\ S26. Furthermore, regions within each Ruddlesden–Popper layer show small negative strain values: this is due to the choice of the reference lattice spacing, which is based on the average image in this Fourier-based technique and is not reflective of real elastic strain in the atomic lattice. To highlight the Ruddlesden–Popper faults, we therefore include only positive strain values in the overlays shown in Fig.\ \ref{Fig5} with the full maps shown in Supplementary Fig.\ S27. The strain map transparency follows the local magnitude of the strain. The map of local $c$-axis orientation in Fig.\ \ref{Fig7} is generated with the local wavefitting analysis described in Ref.\ \cite{smeaton2021mapping} using the 002 pseudocubic lattice fringes, cropped windows of 24$\times$24 pixels, and a window step size of 12 pixels equivalent to $\sim$0.16 nm. The map displayed in Fig.\ \ref{Fig7} is additionally smoothed by a Gaussian kernel with $\sigma$ = 5 corresponding to a distance of about 5 pseudocubic unit cells; the original STEM image, raw wavefitting output, and smoothed result are provided in Supplementary Fig.\ S27. 
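For readers unfamiliar with the Nelson-Riley extrapolation used above to obtain the lattice constants, a minimal Python sketch of the procedure is given below; the peak positions and indices are illustrative placeholders rather than measured values from this work.
\begin{verbatim}
import numpy as np

# Illustrative 00l peak positions (two-theta, degrees) and their indices;
# Cu K-alpha1 wavelength in Angstrom.
two_theta = np.array([23.1, 47.3, 73.6])   # placeholder peak positions
l_index   = np.array([1, 2, 3])            # corresponding 00l indices
lam = 1.5406

theta = np.radians(two_theta / 2.0)
c_app = l_index * lam / (2.0 * np.sin(theta))   # Bragg's law: c = l*lambda/(2 sin(theta))
nr = 0.5 * (np.cos(theta)**2 / np.sin(theta)
            + np.cos(theta)**2 / theta)         # Nelson-Riley function
slope, intercept = np.polyfit(nr, c_app, 1)     # extrapolate linearly to NR -> 0
print(f"extrapolated c-axis lattice constant: {intercept:.4f} Angstrom")
\end{verbatim}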
Electron energy loss spectroscopy (EELS) measurements were carried out on the same instrument described above equipped with a Gatan Continuum spectrometer and camera. Spectrum images of the films on LaAlO$_3$ and NdGaO$_3$ were acquired with a spectrometer dispersion of 0.3 eV per channel. Spectrum images of the film on SrTiO$_3$ were acquired operating in DualEELS mode with a spectrometer dispersion of 0.15 eV per channel. The accelerating voltage was 300 (120) kV for measurements of the films on SrTiO$_3$ and NdGaO$_3$ (LaAlO$_3$). Due to their overlapping EELS edges, two-dimensional concentration maps of the La-M$_{4,5}$ and Ni-L$_{2,3}$ edges are determined by non-negative least squares (NNLS) fit to the weighted sum of reference components for each edge taken from substrate (La-M$_{4,5}$) and film (Ni-L$_{2,3}$) regions (Supplementary Fig.\ S25). \subsection{Electrical transport} Electrical transport data were primarily taken using a Quantum Design Physical Property Measurements System equipped with a 9 T magnet. Hall bars were patterned with Cr (5 nm) / Au (100 nm) contacts using shadow masks which were then defined using a diamond scribe. Resistivity data was taken down to 1.8K at $\sim$3\degree C/min using an AC lock-in amplifier. Hall coefficients were determined from linear fits of antisymmetrized field sweeps up to 9T. All field sweeps were taken upon warming. Several electrical transport measurements in the Supplementary Materials were taken using a home-built electrical `dipstick probe' compatible with a helium dewar. Indium contacts were soldered on the four corners of each film in a Van der Pauw configuration. AC transport measurements were taken at 17.777 Hz using SR830 lock-in amplifiers. The voltage and current were measured simultaneously to determine the resistance. \subsection{X-ray Absorption Spectroscopy} X-ray absorption spectroscopy (XAS) was performed at the Advanced Light Source, Lawrence Berkeley National Lab, at Beamline 6.3.1. The spectra were taken in the total electron yield mode at 300K with linear horizontally polarized light. The film was oriented either normal ($I_{x}$) or at 30\degree\ grazing incidence ($I_{z}$) to the beam. For the signal acquired at grazing incidence, a geometric correction factor was applied \cite{Wu2004ICESS_XAS}. The spectra are normalized to the incident x-ray flux and scaled to the same intensity at energies just below the absorption edge. Every spectrum we present is an average over eight pairs of spectra measured in normal ($I_{x}$) and grazing ($I_{z}$) x-ray incidence angle. \vspace*{5mm} \noindent \textbf{Data Availability}\\ Source data for x-ray diffraction and electrical transport in Figs.\ 2-10 and high-resolution STEM images contained within the main text are provided in the Source Data file. Additional data which support the findings of this study are available from the corresponding authors upon reasonable request. \vspace{5mm}
{ "arxiv_id": "2302.14267", "language": "en", "timestamp": "2023-03-01T02:07:31", "url": "https://arxiv.org/abs/2302.14267", "yymm": "2302" }
\section{Introduction} \label{sec:intro} Deep neural networks have achieved outstanding performance in many computer vision tasks, such as image classification~\cite{Resnet,InceptionV3}, object detection~\cite{YOLOv2,RCNN}, and semantic segmentation~\cite{FCN}. However, they are also vulnerable to various adversarial attacks, which are specifically designed to mislead DNNs with some invisible perturbations. The perturbed images, called adversarial examples, have attracted a lot of research interest and attention, since they could potentially threaten many safety-critical applications, including medical diagnosis~\cite{medical-diagnosis} and autonomous driving~\cite{autonomous-cars}. Existing adversarial attacks can be divided into two categories: 1) digital attacks, where the adversarial perturbations are crafted in the digital domain, \textit{e}.\textit{g}., the conventional gradient-based adversarial attacks; and 2) physical attacks, where the perturbations are crafted on real existing objects, such as road signs, T-shirts, \etc, to achieve the attacking goal. Since digitally crafted adversarial examples rarely exist in the real-world environment, physical attacks have recently drawn more and more attention. One popular strategy adopted by physical attacks is to add some carefully designed artifacts on the target object, \textit{e}.\textit{g}., stickers on Stop signs~\cite{physical_stopsign} or colorful patterns on eyeglass frames~\cite{physical_glasses}. However, these attacks often generate unnatural textures, which are quite visible to human eyes. Thus many works focus on generating adversarial examples with natural styles that appear legitimate to human eyes, \textit{e}.\textit{g}., adversarial shadows~\cite{physical_shadows} caused by polygons. Nevertheless, these visually valid adversarial examples are still artifacts and seldom appear in real-world environments. In recent years, some researchers~\cite{advrain,advhaze} propose to employ natural phenomena, such as rain and haze, to generate more natural adversarial attacks. For example, Zhai \textit{et al}.~\cite{advrain} generate adversarial rain with different angles and intensities to cheat the target DNN. Gao \textit{et al}.~\cite{advhaze} propose adversarial haze to attack the DNNs. Although the adversarial weather examples crafted in these works show strong attack ability against DNNs, they are not real rain or haze, and often look unnatural, either because the models used to simulate weather phenomena are not sophisticated, or because too much attention is put on the attack strength of the adversarial examples and their realism is ignored. In this paper, we investigate adversarial examples caused by raindrops to show that many natural phenomena, like raindrops, can act as adversarial attackers to DNNs. Therefore, it is crucial for applications such as autonomous driving to find an effective way to defend against these natural adversarial attacks. For this purpose, we propose a scheme, denoted as AdvRD, to generate adversarial raindrop images based on the GAN technique. Specifically, AdvRD trains a Generative Adversarial Network (GAN) on a real-world raindrop dataset~\cite{raidrop_removal_cvpr2018} until it can transform a clean input image into a natural raindrop-style image, and meanwhile, a transfer learning classifier is embedded in the GAN framework to endow the generated raindrop images with stronger adversarial attacking power. 
The adversarial raindrop images generated by our scheme are shown to be very similar to the natural raindrop images, not only from a human viewpoint, but also in terms of a statistical measure of the two distributions. Finally, we show that the adversarial raindrop images help improve the robustness of DNN models to the attacks of natural adversarial raindrops. Extensive experiments are carried out to demonstrate the effectiveness of our scheme. The main contributions of our work are the following: \begin{itemize} \item We propose a novel approach, based on the GAN technique, to generate adversarial raindrop images that are visually and statistically similar to the natural raindrop images. \item We show real-world raindrops can act as adversarial examples to mislead DNNs, which could bring substantial threats to security-critical applications such as autonomous driving. Adversarial training using our AdvRD samples can help improve the robustness of DNNs to real-world raindrop perturbations. \end{itemize} \section{Related Work} \label{sec:Rela} \subsection{Adversarial Attacks} The adversarial perturbation phenomenon was first found by Szegedy \textit{et al}.~\cite{L-BFGS}, in which carefully manipulated perturbations were added to the original images to fool DNN classifiers. The perturbed images, named adversarial examples, are usually imperceptible to humans, but could mislead the classifier into outputting incorrect predictions. Let $x\in \mathbb{R} ^d$ denote a clean image sample and $x'$ its corresponding adversarial example. Adversarial perturbation can be expressed as an optimization problem: \begin{equation} \mathop{\arg\max}_{x'}\; L_C\left( x',y \right) \quad \mathrm{s.t.}\;\; \left\| x'-x \right\|_{p} <\epsilon, \label{eq:label} \end{equation} where $y$ is the ground-truth label of $x$ and $\epsilon$ is the perturbation budget. $L_C$ denotes the loss function of the classifier. The perturbation $\delta = x'-x$ is often bounded by $\ell_{p}$-norms to guarantee its imperceptibility to human eyes. $\ell_{2}$, $\ell_{\infty}$ and $\ell_{0}$ are commonly used norms. In the literature, plenty of adversarial attacks have been proposed, which can be divided into two categories, digital attacks and physical attacks, according to the domain in which the adversarial perturbations are crafted. \subsection{Digital Attacks} Adversarial attacks in the digital domain craft a perturbation for each pixel of the input image to mislead the DNN's prediction. According to the transparency of the target DNN model to the attacker, digital attacks can be grouped into white-box~\cite{FGSM,BIM,CW} attacks and black-box attacks~\cite{MI-FGSM,DIM,TIM,NIM,NATTACK,Zoo}. In white-box attacks, attackers have access to the entire target model and can therefore design efficient ways of using the model gradients, like those in PGD~\cite{PGD}, to perturb the image and achieve their goal. In black-box scenarios, only outputs of the target model are accessible. Hence attackers need to estimate the gradient by querying the target model~\cite{NES,SPSA} or rely on adversarial transferability~\cite{DIM,TIM}. Although the adversarial examples crafted in the digital domain work well in cheating DNN models, they in fact rarely exist in the real world. \subsection{Physical Attacks} Kurakin \textit{et al}.~\cite{Physical_first_attack} first demonstrate that adversarial examples also exist in real-world environments. They find that hard copies of the digital adversarial examples can still fool the target model. 
One popular strategy of physical attacks is ``sticker-pasting'', which attaches adversarial patches to the object to mislead DNN models. For example, Eykholt \textit{et al}.~\cite{physical_stopsign_cvpr2018} put adversarial patches on road signs to fool a well-trained autonomous vehicle. Xu \textit{et al}.~\cite{physical_T_shirt} print an adversarial patch on a T-shirt to help a person evade a detection model. Sharif \textit{et al}.~\cite{physical_glasses} apply adversarial textures to eyeglass frames to fool a facial recognition system. Another line of work tries to attack the target model in a non-invasive manner. Applying semitransparent stickers~\cite{physical_sticker_recognize_ICML_2019,physical_translucent_patch_cvpr2021} to a camera lens is a simple yet effective strategy to fool recognition and detection systems. Some methods utilize optical devices to generate adversarial examples. Gnanasambandam \textit{et al}.~\cite{physical_optical_projector_iccv2021} use a projector to cast the adversarial perturbations onto the object to realize the attack. Sayles \textit{et al}.~\cite{physical_optical_light_cvpr2021} illuminate the object with a modulated light signal to craft adversarial examples invisible to humans. Duan \textit{et al}.~\cite{physical_optical_laser_cvpr2021} shoot a laser beam onto the target object to achieve a fast attack. Recently, some works try to camouflage physical-world adversarial examples in natural phenomena. Zhai \textit{et al}.~\cite{advrain} transform clean images into rainy styles to perform attacks. Gao \textit{et al}.~\cite{advhaze} camouflage perturbations in haze to mislead the classifier. Zhong \textit{et al}.~\cite{physical_optical_shadow_cvpr2022} utilize perturbations that visually resemble shadows to craft adversarial examples. These works generate visually natural images based on theoretical models. However, the synthetic samples are not truly aligned with their real-world counterparts. \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{total_1.png} \caption{Illustration of the pipeline of the raindrop generation network.} \label{fig:total} \end{figure*} \section{Methodology} In this section, we present our approach to generate adversarial raindrops in both digital and physical domains. We first describe a physical way to acquire real-world adversarial raindrop images, and show their strong capability to mislead well-trained DNN models. Then, we focus our attention on how to generate adversarial raindrops in the digital domain, using a quasi-GAN technique. \subsection{Real-World Adversarial Raindrops} \label{real-world raindrop} When we take pictures on a rainy day, sometimes through a window, there are often a few raindrops attached to the camera lens or the window glass. What we finally acquire are raindrop images, which could mislead well-trained DNN models. To investigate how often real-world raindrop images mislead a pre-trained DNN model, we follow the strategy of \cite{raidrop_removal_cvpr2018} to acquire raindrop images, and statistically estimate the probability that a raindrop image misleads a well-trained DNN classifier. We randomly spray some small drops of water on a glass and put it in front of the camera's lens. The clean images are shown on the computer screen one by one. We fix the position of the camera and the screen, and randomly move and rotate the glass to collect 5 seconds of video for each image. 
If we find at least one frame in the video that misleads a pre-trained DNN model, we call this frame a real-world adversarial raindrop image of the DNN model. Fig.~\ref{fig:lab} illustrates the process of obtaining real-world adversarial raindrops. We find that there is a good chance (over 50\%, see Sec.~\ref{physical-attack}) of obtaining a real-world adversarial raindrop image for a given DNN model. Obviously, this chance also depends on the size and density of the water drops. More detailed information and experimental results are presented in Sec.~\ref{physical-attack}. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{lab.png} \setlength{\abovecaptionskip}{0.0cm} \setlength{\belowcaptionskip}{-0.4cm} \caption{(a) The camera used to capture images; (b) the glass used to craft raindrops; (c) raindrops randomly sprayed on the glass in front of the camera lens; and (d) the final raindrop image.} \label{fig:lab} \end{figure} \subsection{Adversarial Raindrops in Digital Domain} Adversarial training is considered the most effective way to improve the robustness of DNN models to adversarial attacks. It also works for adversarial raindrops. However, it needs to train the classifier with a large number of adversarial raindrop images, which are hard to collect in practice. Therefore, we propose a novel approach, AdvRD, to simulate natural raindrops in the digital domain using a quasi-GAN framework. The generated raindrops are shown to be considerably strong in attacking DNN models, and similar to natural raindrops from the viewpoints of human vision and statistical analysis (see Sec.~\ref{realness}). Fig.~\ref{fig:total} illustrates the training stages of the proposed raindrop generation network. We generate adversarial raindrop images based on a quasi-GAN framework, which contains three sub-networks: a generator, a discriminator, and a transfer classifier. Different from the conventional GAN, the generator $G$ in our GAN architecture tries to generate raindrop images as realistic as possible, aiming to cheat not only the discriminator, but also the transfer classifier. The discriminator $D$ tries to identify whether the input image is a real raindrop image or from the generator. The transfer classifier $C$ is employed to endow the generated raindrop images with adversarial attacking capability. Our generative adversarial loss can be formulated as: \begin{equation} \begin{split} \underset{G}{\min}\underset{D}{\max}~V\left( G,D \right) &=\mathbb{E} _{o\sim P_{raindrop}}\left[ \log \left( D\left( o \right) \right) \right] \\ &+\mathbb{E} _{b\sim P_{clean}}\left[ \log \left( 1-D\left( G\left( b,z \right) \right) \right) \right] \\ &+\mathbb{E} _{x\sim P_{pred}} \left[\left\| L_C\left( G\left( x,z \right) \right) -\eta \right\| _1 \right] \end{split} \label{eq:whole-loss} \end{equation} where $o$ is a sample drawn from the ground-truth raindrop images and $b$ is the corresponding clean natural image. $z$ is a Gaussian noise vector and $x$ is a clean image that is correctly classified by $C$. $\eta$ is a factor to balance the realism and attacking ability of the generated raindrops. \subsubsection{Generative Network} The first goal of the generator is to generate realistic raindrop images to fool the discriminator. To simulate natural raindrops, the generator should consider the background scene during raindrop generation. 
As shown in Fig.~\ref{fig:total}, we apply several convolutional layers to extract the shallow features of clean images and fuse them with the intermediate features calculated from a noise vector $z$, to obtain the final raindrops. This process can be expressed as follows: \begin{equation} \begin{aligned} o' &= G\left( b ,z \right)\\ &= G(b,E(o)) , \end{aligned} \label{eq:vae-generate} \end{equation} where $o'$ represents the synthetic raindrop images, and $E$ represents the encoder. It converts the raindrop image into features, \textit{i}.\textit{e}., the mean and variance for the latent variable $z$. We use pairs of images with and without raindrops, $\left\{ o_n,b_n \right\} _{n=1}^{N}$, to train the generative network. The generative loss $L_{G}$ for the first goal of the generator is expressed as: \begin{equation} L_G=L_{gen}+\alpha _{1}\cdot L_z+\alpha _{2}\cdot L_p, \label{eq:weather generation loss} \end{equation} where the first term of Eq.~(\ref{eq:weather generation loss}), $L_{gen}$, trains the generator to output raindrop images that can fool the discriminator. This loss is computed by: \begin{equation} L_{gen}=\log(1-D(o')) . \label{eq:Lgen} \end{equation} The second term of Eq.~(\ref{eq:weather generation loss}), $L_z$, constrains the latent variable $z$ encoded by $E$ to follow an isotropic Gaussian distribution, and is calculated as: \begin{equation} \begin{aligned} L_{z} &= D_{KL}\left[ p\left( z \right) ||\mathcal{N} \left( 0,I \right) \right] \\ &=\sum_{i=1}^d{\left[ \frac{\mu ^2_i}{2}+\frac{1}{2}\left( \sigma _i -\log \sigma _i -1 \right) \right]} \label{eq:Lz} \end{aligned} \end{equation} where $\mu$ and $\sigma$ are the mean and variance of the latent variable $z$, and $d$ denotes the dimension of $z$. The last loss in Eq.~(\ref{eq:weather generation loss}), $L_p$, is adopted to make the generated raindrops more realistic. Inspired by the observation that raindrops only affect a fraction of the image pixels, we choose the $L_{1}$ norm to encourage the generator to craft sparse perturbations: \begin{equation} L_p=\left\| o'-b \right\| _1 \label{eq:Lp} \end{equation} In practice, we find that $L_p$ can help to improve the quality of synthetic raindrop images and accelerate the convergence speed of the GAN. \subsubsection{Discriminative Network} The discriminator $D$ tries to identify if an input image comes from the real data distribution rather than from $G$. Thus the loss of the discriminator is defined as: \begin{equation} L_D=- \log(D\left( o \right)) -\log(1-D(o')) \label{eq:discriminator loss} \end{equation} Some works~\cite{Globally_local_GAN} require the discriminator to identify both global and local images to improve the performance of the GAN, which requires a complex design of loss functions and more computing resources. To balance the performance and efficiency of the GAN, we only use a global discriminator whose structure is mainly based on AlexNet~\cite{Alexnet}, since we experimentally find that this simple network can satisfactorily meet our needs. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{3.png} \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.5cm} \caption{(a) The generator and the input. (b) The distributions learned by generators trained with different losses. (c) Raindrop images sampled from different distributions. 
Raindrop images in the distribution trained with larger $\eta$ will fool the target classifier more easily but deviate from the real raindrop distribution.} \label{fig:Ladv effect distribution} \end{figure} \subsubsection{Transfer Classifier} The second goal of the generator is to endow the generated raindrop images with stronger adversarial attacking power, since, according to our observation, only a small fraction of the raindrop images crafted by the above generator $G$ can successfully mislead a target DNN model. So, in our GAN architecture, a transfer learning network $C$, called the transfer classifier, is added to the conventional GAN framework, aiming to transform the generated raindrop images into adversarial examples. Below is the loss function we used in the training: \begin{equation} L_{adv}=\left\| L_C\left( G(x,z) ,y \right) -\eta \right\| _1, \label{eq:untarget Lben} \end{equation} where $x$ is a clean image correctly classified by $C$, and $\eta$ is a positive constant used to limit the range of the classification loss $L_C$, so that we can make a good trade-off between attacking ability and authenticity of the generated raindrops. Fig.~\ref{fig:Ladv effect distribution} shows the effect of $\eta$ on the attacking ability and authenticity. More detailed analyses of $\eta$ are presented in Sec.~\ref{ablution}. Overall, the generative loss $L_{G}$ used during the training of the GAN is expressed as: \begin{equation} L_G=L_{gen}+\alpha _{1}\cdot L_z+\alpha _{2}\cdot L_p +\alpha _{3}\cdot L_{adv}. \label{eq:weather generation loss_stage2} \end{equation} \subsubsection{Adversarial Raindrop Attack} After training the GAN architecture, we fix its parameters and generate adversarial raindrop images by solving the following optimization problem: \begin{equation} \mathop{\arg\max}_{z}\; L_{TC}\left( G\left( x,z \right) ,y \right), \quad \mathrm{s.t.}\; \left\| z \right\| <\epsilon_{z}, \label{eq:Lc} \end{equation} where $L_{TC}$ is the loss function of the target DNN classifier, and $\epsilon_{z}$ denotes the threshold for $z$. In the white-box scenario, Eq.~(\ref{eq:Lc}) can be solved by a gradient-based iterative method. The iteration formula can be written as follows: \begin{equation} z_{t+1}=z_{t}+\alpha _{z}\cdot \mathrm{sign}\left( \nabla _{z_{t}} L_{TC}\left( G\left( x,z_{t} \right) ,y \right) \right), \label{eq:AWA_white} \end{equation} where $\alpha_{z}$ is the step size of the iteration. In the black-box scenario, the gradients of the loss \wrt the input noise are unavailable. Hence we adopt a simple but effective way to estimate the optimal $z$, that is, sampling $N$ inputs from the Gaussian noise distribution and choosing the one that can fool the target model. This strategy is similar to query-based black-box attacks~\cite{Zoo,NATTACK}, and avoids the difficult gradient estimation of the target model, which would otherwise require a large number of queries. Experimentally, our method can achieve a fairly high black-box ASR (Attack Success Rate, 51.2\% for ResNet50) with a small query count $N=5$. The algorithm is denoted as AdvRD, and summarized in Algorithm~\ref{alg:AWA} for the white-box scenario. Note that the proposed method can be combined with any gradient-based attack to enhance its attack ability, \textit{e}.\textit{g}., MIM~\cite{MI-FGSM}, DIM~\cite{DIM}, and TIM~\cite{TIM}. \begin{algorithm} \caption{Adversarial Raindrop Attack. 
(White-box)}\label{alg:AWA} \textbf{Input:} The target classifier $f$, loss function $L_{TC}$\\ \qquad An original sample $x$, the ground-truth label $y$.\\ The raindrop generator $G$.\\ Noise sampling number $N$, iteration number $T$. \\ \textbf{Output:} An adversarial raindrop example $x_{adv}$.\\ \begin{algorithmic}[1] \For{$n=1\rightarrow N$} \State Sample a noise vector $z_{n}\sim\mathcal{N} \left( \textbf{0},\textbf{I} \right)$ \For{$t=1\rightarrow T$} \State Generate the raindrop-style sample $x_{n}^{t}=G\left( x,z_{n}^{t} \right)$ \State $x_{n}^{t} = \mathrm{clip}(x_{n}^{t},0,1)$ \If{$f\left( x_{n}^{t} \right) \ne y$ } \State $x_{adv}=x_{n}^{t}$ \State \textbf{return} $x_{adv}$ \EndIf \State Calculate the gradient $g_{t}=\nabla _{z_{n}^{t}} L_{TC}\left(x_{n}^{t} ,y \right)$ \State Update $z_{n}^{t+1}$ by applying the sign of the gradient \begin{equation} z_{n}^{t+1}=z_{n}^{t}+\alpha _z\cdot \mathrm{sign}\left( g_t \right) \end{equation} \EndFor \EndFor \end{algorithmic} \end{algorithm} \section{Experiments} \label{experiment} To demonstrate the effectiveness of our physically and digitally generated raindrops in adversarial attacks against DNN models, we conduct three groups of experiments. The first group of experiments in Sec.~\ref{realness} shows that the raindrop images crafted by our AdvRD scheme not only look like realistic ones, but are also close to the realistic raindrops in distribution, as measured by a statistical metric. The second group of experiments in Sec.~\ref{digital-attack} is carried out to compare the performance of our adversarial raindrops with that of some conventional adversarial attacking methods. In the third group of experiments in Sec.~\ref{physical-attack}, we show that adversarial training with the raindrop images generated by AdvRD can improve the robustness of DNN models to real-world raindrop attacks. \subsection{Setup} \label{setup} \noindent \textbf{Dataset.} We train our quasi-GAN architecture on the Raindrop Removal (RDR)~\cite{raidrop_removal_cvpr2018} dataset, which contains 1119 pairs of images, with various real-world background scenes and raindrops. All other experiments are carried out on three datasets, NIPS-17~\cite{NIPS-Competition}, and two traffic sign recognition datasets, Tsinghua-Tencent 100K (TT-100K)~\cite{TT-100K_traffic_sign_cvpr2016} and GTSRB~\cite{GTSRB}. The NIPS-17 dataset was released in the NIPS 2017 competition on Defenses against Adversarial Attacks and contains 1000 labeled images with a resolution of $299\times299\times3$. It has been widely used in many previous works~\cite{digital_transfer_3d22D,digital_transfer_nips2021}. TT-100K contains 45 different Chinese road sign classes, and GTSRB has 43 different German road sign classes. \noindent \textbf{Implementation Details.} The quasi-GAN architecture is trained using the Adam optimization algorithm~\cite{Adam}. The transfer classifier in Fig.~\ref{fig:total} is a ResNet50, which is pre-trained on ImageNet~\cite{ImageNet}. We set the parameter $\eta$ in Eq.~(\ref{eq:untarget Lben}) as $\eta=2$. The hyper-parameters $\alpha _{1}$--$\alpha _{3}$ are set to 100, 100, and 0.8, respectively. The dimension $d$ of the latent variable $z$ is set to be 64. In our AdvRD scheme, the noise sampling number is set to 25, the iteration number $T$ is 10, and the step size $\alpha _z$ is 0.05. \subsection{Authenticity Evaluation} \label{realness} Using the clean images in the RDR dataset, we generate adversarial raindrop images, which look similar to their corresponding real-world raindrop images. 
Fig.~\ref{fig:reality} shows the visual similarity between real-world raindrops and those generated by our AdvRD scheme. Moreover, we employ a statistical metric, Fr\'{e}chet Inception Distance (FID)~\cite{FID_distance}, to measure the distribution similarity between our adversarial raindrops and real-world raindrops. Basically, the FID metric was proposed to measure the difference between two Gaussian distributions $g_1$ and $g_2$, whose means and covariance matrices are ($m_1$,$C_1$) and ($m_2$,$C_2$), respectively. Then, the FID of $g_1$ and $g_2$ is defined as: \begin{scriptsize} \begin{equation} \mathrm{FID}(g_1,g_2)=\left\| m_1-m_2 \right\| _{2}^{2}+\mathrm{Tr}\left( C_1+C_2-2\left( C_1C_2 \right) ^{1/2} \right). \label{eq:FID} \end{equation} \vspace{-12pt} \end{scriptsize} To estimate the FID value between the adversarial raindrops and realistic raindrops, we randomly divide the RDR dataset into two disjoint subsets, $r1$ and $r2$, of roughly the same size, and then use our AdvRD algorithm to generate a set of adversarial raindrop images, denoted as $f1$, from the clean images in $r1$. The values of FID($P_{r1}$,$P_{r2}$) and FID($P_{f1}$,$P_{r2}$) are calculated for each random sampling. We repeat the experiments 10 times, and Tab.~\ref{tab:RFID} presents the results, in which RFID is the ratio of FID($P_{f1}$,$P_{r2}$) to FID($P_{r1}$,$P_{r2}$). It can be seen that the values of RFID are very close to 1, indicating that the distribution difference between adversarial raindrops and realistic raindrops is almost the same as that between two disjoint sets of realistic raindrops. So, we can say that the raindrops generated by AdvRD have almost the same distribution as the realistic raindrops. \begin{table}[!t]\small \centering \caption{Authenticity evaluation of the raindrops generated by AdvRD.} \begin{tabular}{cccc} \toprule Metrics &FID$\left( P_{f1},P_{r2} \right) $ & FID$\left( P_{r1},P_{r2} \right) $ & RFID \\ \midrule value & 34.03$\pm$0.55 & 33.28$\pm$0.50 & 1.023$\pm$0.012 \\ \bottomrule \end{tabular} \label{tab:RFID} \end{table} \begin{figure}[!t] \centering \setlength{\belowcaptionskip}{-0.4cm} \includegraphics[width=\linewidth]{5.png} \caption{Visual similarity of (a) real-world raindrop images, and (b) adversarial raindrop images generated by our AdvRD scheme.} \label{fig:reality} \end{figure} \subsection{Attacking with Adversarial Raindrops} \label{digital-attack} We conduct the second group of experiments to evaluate the performance of the adversarial raindrops crafted by our AdvRD in attacking some typical DNN models. To do so, we compare our AdvRD method with six popular adversarial attacking approaches, namely, FGSM~\cite{FGSM}, BIM~\cite{BIM}, MIM~\cite{MI-FGSM}, DIM~\cite{DIM}, TIM~\cite{TIM} and NIM~\cite{NIM}, in terms of Attack Success Rate (ASR) on the NIPS-17 dataset. The perturbation budget $\epsilon$ for these attacking approaches is set to be 16/255. In the white-box scenario, the target DNN models are four standardly trained models: Inception-v3 (Inc-v3)~\cite{InceptionV3}, Inception-v4 (Inc-v4), Inception-Resnet-v2 (IncRes-v2)~\cite{Inceptionv4}, and Resnet-v2-101 (Res-101)~\cite{Resnet}, and two robust models: Rob-ResNet50 (Rob-Res50)~\cite{PGD}, and ens-adv-Inception-ResNet-v2 (IncRes-v2$\rm _{\text{ens}}$)~\cite{Ens_incepton}. 
For black-box attacking, we choose three standardly trained networks, Inception-v4, Inception-Resnet-v2~\cite{Inceptionv4}, and Resnet-v2-101~\cite{Resnet}, and three adversarially trained models, ens3-adv-Inception-v3 (Inc-v3$\rm _{\text{ens3}}$), ens4-adv-Inception-v3 (Inc-v3$\rm _{\text{ens4}}$), and ens-adv-Inception-ResNet-v2~\cite{Ens_incepton}. For all the black-box attacks, except AdvRD, adversarial examples are generated and transferred from Inception-v3. Experimental results are presented in Tab.~\ref{tab:digital-ASR-w} and Tab.~\ref{tab:digital-ASR-b}, for white-box attacks and black-box attacks, respectively. We see that 1) In the white-box scenario, The attacking ability of AdvRD raindrops is weaker in terms of ASR than that of the adversarial examples crafted by the gradient-based methods, except FGSM; 2) In black-box scenario, AdvRD outperforms other gradient-based methods by a big margin of ASR in most cases. Even for the three robust target models, which are pre-trained by adversarial training, our AdvRD raindrops still achieve more than 50\% ASR, remarkably higher than the gradient-based methods do. This may imply that the conventional gradient-based adversarial training does not work in defense against adversarial raindrops. \begin{table*}[!t] \caption{The white-box attack success rates (\%) $\uparrow$ on four undefended models and two adversarially trained models by various attacks.} \centering \begin{tabular}{cccccccc} \toprule Methods & Inc-v3 & Inc-v4 & IncRes-v2 &Res-101 & Rob-Res50 & IncRes-v2$\rm _{\text{ens}}$ & Average \\ \midrule FGSM & 80.0 & 85.1 & 60.1 & 87.8 & 86.1 & 30.7 & 71.6 \\ BIM & \textbf{100.0} & \textbf{99.4} & \textbf{99.7} & \textbf{100.0} & \textbf{92.4} & \textbf{97.5} & \textbf{98.2} \\ MIM & \textbf{100.0} & 99.3 & 98.7 & 99.9 & 91.5 & 97.2 & 97.8 \\ DIM & 99.7 & \textbf{99.4} & 96.9 & 99.9 & 89.6 & 89.6 & 95.9 \\ TIM & 99.5 & 99.2 & 97.3 & 99.7 & 88.4 & 92.4 & 96.1 \\ NIM & \textbf{100.0} & \textbf{99.4} & 99.0 & 99.9 & 91.7 & 97.3 & 97.9 \\ AdvRD & 84.2 & 91.0 & 89.7 & 88.1 & 88.1 & 89.5 & 88.4 \\ \bottomrule \end{tabular} \label{tab:digital-ASR-w} \end{table*} \begin{table*}[!t] \caption{The black-box attack success rates (\%) $\uparrow$ on three undefended models and three adversarially trained models by various attacks.} \centering \begin{tabular}{cccccccc} \toprule Methods & Inc-v4 & IncRes-v2 & Res-101 & Inc-v3$\rm _{\text{ens3}}$ & Inc-v3$\rm _{\text{ens4}}$ & IncRes-v2$\rm _{\text{ens}}$ & Average \\ \midrule FGSM & 31.9 & 29.6 & 30.9 & 17.3 & 14.8 & 9.4 & 22.3 \\ BIM & 25.4 & 17.7 & 19.0 & 12.7 & 13.5 & 7.5 & 16.0 \\ MIM & 46.3 & 43.3 & 38.2 & 20.0 & 18.0 & 11.7 & 29.6 \\ DIM & 38.5 & 30.6 & 27.2 & 15.3 & 15.2 & 8.5 & 22.6 \\ TIM & 50.1 & 43.2 & 39.4 & 29.2 & 27.3 & 18.8 & 34.7 \\ NIM & 52.9 & \textbf{51.3} & 41.4 & 20.3 & 18.1 & 11.1 & 32.5 \\ AdvRD & \textbf{53.9} & 47.6 & \textbf{58.8} & \textbf{62.4} & \textbf{63.4} & \textbf{52.4} & \textbf{56.4} \\ \bottomrule \end{tabular} \label{tab:digital-ASR-b} \end{table*} \begin{figure*}[!t] \centering \includegraphics[width=0.95\linewidth]{phybig.png} \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.4cm} \caption{Adversarial raindrop examples captured in the real-world and the corresponding classification results.} \label{fig:phybig} \end{figure*} \begin{table} \centering \caption{Attack success rate (\%) of real-world raindrop on three datasets.} \begin{tabular}{cccc} \toprule Dataset & NIPS-17 & TT-100K & GTSRB \\ \midrule ASR & 92.0 & 54.0 &59.0\\ \bottomrule 
\end{tabular} \vspace{-9pt} \label{tab:physical-ASR} \end{table} \subsection{Defense Against Adversarial Raindrops} \label{physical-attack} In this part, we conduct experiments to show that real-world raindrops can act as adversarial perturbations to DNN models, and then provide a defense method against adversarial raindrops. \noindent \textbf{Finding real-world adversarial raindrops.}~In Sec.~\ref{real-world raindrop}, we describe how to find real-world adversarial raindrop images for a DNN model. Given a clean image, the adversarial raindrop image, if found within the 5-second video, is a physically crafted adversarial example. Fig.~\ref{fig:phybig} shows five real-world adversarial raindrop images that mislead a DNN model pre-trained on TT-100K. We conduct experiments on three datasets, NIPS-17, TT-100K, and GTSRB, to estimate the probability that we can successfully find the adversarial raindrop image. We view this probability as the Attack Success Rate (ASR) of the real-world adversarial raindrops in attacking a DNN model. Tab.~\ref{tab:physical-ASR} lists the estimated probability or ASR that our real-world raindrops mislead the DNN models. We see that the values of ASR on the datasets are relatively high (over 50\%), indicating that DNN models are vulnerable not only to digital perturbations of adversarial examples, but also to the natural perturbations of raindrops. In particular, on NIPS-17 the ASR reaches 92\%, which means that almost any image in the dataset can naturally become an adversarial example once a few raindrops are sprayed on it. \noindent \textbf{Defense against Adversarial Raindrops.} Adversarial training is usually considered the most effective way to increase the robustness of DNN models to adversarial perturbations. To defend against adversarial raindrops, we investigate the effectiveness of applying adversarial training to improve DNNs' robustness. Since adversarial training requires a large number of adversarial examples, and it is difficult to obtain enough real-world adversarial raindrop images, we instead use our AdvRD raindrops in the adversarial training experiments. Specifically, in each epoch of our adversarial training, we randomly select half of the training data to generate AdvRD raindrop images, and combine them with the other half of clean data to train the model. Tab.~\ref{tab:AT} gives the experimental results, the upper half of which is for standard training and the lower half for adversarial training. It can be seen that adversarial training with our AdvRD raindrops significantly reduces the ASR values for both digital and physical raindrop attacks. Like conventional adversarial training, ours also decreases the recognition accuracy on clean images of NIPS-17. Surprisingly, however, it even improves the recognition accuracy on clean samples of TT-100K and GTSRB. \begin{table} \centering \caption{Performance comparison of models with and without adversarial training.} \begin{tabular}{lccc} \toprule Model & Acc. & Dig. ASR & Phy.
ASR \\ \midrule NIPS-Resnet50 & 76.13 & 64.5 & 92.0 \\ TT-Resnet18 & 98.11 & 72.6 & 54.0 \\ GTSRB-Resnet18 & 97.52 & 66.6 & 59.0 \\ \midrule NIPS-Resnet50$_{\rm rob}$ & 73.37 & 29.0 & 69.0 \\ TT-Resnet18$_{\rm rob}$ & 99.67 & 18.2 & 27.0 \\ GTSRB-Resnet18$_{\rm rob}$ & 98.99 & 23.5 & 37.0 \\ \bottomrule \end{tabular} \vspace{-7pt} \label{tab:AT} \end{table} \subsection{Why Adversarial Raindrops Work} \label{discussion} To understand how raindrops can mislead DNN models, we visualize the CAM attention~\cite{CAM} before and after adding the adversarial raindrops. Fig.~\ref{fig:CAM} shows the CAM attention of a DNN model on clean images, AdvRD crafted raindrop images, and physical adversarial raindrop images, respectively. We see that, though raindrops only perturb sparse pixels, the attention maps are substantially disturbed. It is worth noting that the objects in the raindrop perturbed images are visually complete and clear, which implies that image degradation may not be the main reason for misclassification; rather, the raindrops themselves are responsible for disturbing the CAM attention maps. More real-world adversarial raindrops and their CAM maps are included in the supplementary material. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{CAM.png} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{-0.6cm} \caption{CAM attention maps for clean images, AdvRD raindrop images, and real-world adversarial raindrop images.} \label{fig:CAM} \end{figure} \subsection{Ablation Study} \label{ablution} \noindent \textbf{The parameter $\eta$.} The main effect of $\eta$ in Eq.~(\ref{eq:untarget Lben}) is to balance the trade-off between adversarial strength and the reality of synthetic raindrops. A generator trained with a larger $\eta$ tends to fool the target classifier more easily. However, an overly large $\eta$ may make the generator focus on fooling the classifier rather than generating realistic raindrop images. We test the effect of $\eta$ by setting it from 2 to 10 with a step size of 2. The ASR curves for seven target models are shown in Fig.~\ref{fig:EOY}. Tab.~\ref{tab:EOY} presents the RFID values corresponding to different $\eta$. As can be seen in Fig.~\ref{fig:EOY}, a generator fine-tuned with larger $\eta$ achieves higher values of ASR, which indicates that increasing $\eta$ can improve the attack capability of AdvRD. On the other hand, we see from Tab.~\ref{tab:EOY} that a larger $\eta$ causes a higher value of RFID, which means that realism is sacrificed. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{EOY.png} \setlength{\abovecaptionskip}{0.0cm} \setlength{\belowcaptionskip}{-0.3cm} \caption{The ASR curves of AdvRD trained with different $\eta$.} \label{fig:EOY} \end{figure} \begin{table} \centering \caption{Reality (RFID) of the proposed AdvRD trained with different $\eta$.} \begin{tabular}{cccccc} \toprule $\eta$& 2 & 4 & 6 & 8 & 10 \\ \midrule RFID & 1.023 & 1.045 & 1.046 & 1.065 & 1.180 \\ \bottomrule \end{tabular} \vspace{-5pt} \label{tab:EOY} \end{table} \noindent \textbf{Noise sampling number $N$.} Obviously, increasing $N$ will improve the attack strength but impair the attack efficiency, since the attacker queries the target model more times to search for an adversarial example. We set $N$ from 15 to 35 with a step size of 5 to test the influence of the noise sampling number. The values of ASR and running time to finish attacking NIPS-17 are shown in Tab.~\ref{tab:EON}. We can observe that both the attack strength and the running time are positively related to $N$. So, in practice, we set $N=25$ to balance attack strength and efficiency.
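To make the role of $N$ concrete, the noise-sampling search described above can be summarized by the following schematic; the generator $G$, the target classifier $f$ and their interfaces (including the latent dimension attribute) are placeholders for illustration, not the exact AdvRD implementation.
\begin{verbatim}
import torch

def advrd_search(x_clean, y_true, G, f, N=25):
    # Schematic of the noise-sampling search behind the ablation on N:
    # draw up to N latent noise vectors, synthesize a raindrop image for
    # each with the fine-tuned generator G (placeholder interface), and
    # query the target classifier f until one sample is misclassified.
    for _ in range(N):
        z = torch.randn(1, G.latent_dim)       # assumed latent dimension
        x_rain = G(x_clean, z)                 # candidate raindrop image
        if f(x_rain).argmax(dim=1) != y_true:  # one query to the target
            return x_rain                      # adversarial raindrop found
    return None                                # failed within N queries
\end{verbatim}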
\begin{table} \centering \caption{Ablation study of the noise sampling number $N$.} \begin{tabular}{cccccc} \toprule $N$ & 15 & 20 & 25 & 30 & 35 \\ \midrule ASR (\%) & 59.7 & 61.2 & 63.4 & 65.2 & 67.1 \\ Running Time (s) & 445 & 535 & 636 & 722 & 792 \\ \bottomrule \end{tabular} \vspace{-10pt} \label{tab:EON} \end{table} \section{Conclusion} In this paper, we study adversarial examples caused by natural raindrops, and present a new approach to generate adversarial raindrops in the digital domain, using a quasi-GAN technique. The generated raindrop images are very similar to real-world raindrop images, from the viewpoints of both human vision and statistical analysis. More importantly, they perform strong adversarial attacks on state-of-the-art DNNs. We also show that adversarial training using our AdvRD images can significantly improve the robustness of DNNs to real-world raindrop attacks. {\small \bibliographystyle{ieee_fullname}
{ "arxiv_id": "2302.14300", "language": "en", "timestamp": "2023-03-01T02:08:56", "url": "https://arxiv.org/abs/2302.14300", "yymm": "2302" }
\section{Introduction} Observations stemming from the anisotropies in the Cosmic Microwave Background (CMB) have played a major role in establishing the standard model of Cosmology --- the $\Lambda$CDM\ paradigm with power law primordial spectrum. Despite suffering from certain theoretical issues, this paradigm has been extremely successful in accounting for a wide variety of observations across many scales and epochs in the cosmic history. The \emph{Planck}\ satellite provided the most precise estimation of its 6 main cosmological parameters to date \citep{Planck_results_2020,P18Cosmo2020}. In addition, other ground-based CMB experiments from the Atacama Cosmology Telescope (ACT) \citep{Aiola_2020,Choi_2020} and South Pole Telescope (SPT) \citep{Dutcher_2021,SPT-3G:2022hvq} collaborations have recently provided complementary measurements of temperature and polarisation of the CMB anisotropies. These focus on smaller, sub-degree angular scales (larger multipoles $\ell\gtrsim650$) and offer a new way of testing the robustness of $\Lambda$CDM\ with higher-resolution CMB maps and independently of \emph{Planck}.\\ Despite the success of the standard model, increasingly precise (low-redshift) measurements have reported a few statistically significant discrepancies \citep{Cosmo_Intertwined_III_2021,Cosmo_Intertwined_II_2021}. The most notable example is the $\gtrsim5\sigma$ discrepancy in the value of the Hubble constant $H_0$, as measured by low-$z$ probes using the distance ladder \cite{Riess:2021jrx} and high-$z$ estimations, assuming $\Lambda$CDM. A milder ($\sim2\sigma$) but longstanding discrepancy has also been reported between high and low-redshift estimations of the amplitude of matter fluctuations---characterized by $S_{8}\equiv\ensuremath{\sigma_{8,0}}\sqrt{\ensuremath{\Omega_\text{m0}}/0.3}$---the latter preferring lower values compared to the early universe predictions \citep{HSC:2018mrq,2020A&A...634A.127A,2021Kids,DES:2021wwk,Amon_NL_S8}. The $\Lambda$CDM\ model is also facing other (less-relevant) observational challenges, see \emph{e.g.} \citep{Bull:2015stt,Bullock:2017xww,Perivolaropoulos:2021jda,Bernal:2021yli}. Moreover, CMB measurements are known to have mild inconsistencies between the different angular scales (high vs low multipoles \cite{Addison_2016}), and as measured by the different collaborations \cite{PhysRevD.103.063529}. Indeed, even within \emph{Planck}\ data alone, the TT spectrum seems to favor a lensing amplitude $A_{\rm L}>1$ and provides ``evidence'' for a non-vanishing (positive) spatial curvature \citep{Planck_results_2020,DiValentino2019}, although it has been argued that these are purely stemming from statistical fluctuations \cite{Efstathiou_2021}; see also \citep{Handley_2021,Di_Valentino_2021,Vagnozzi_2021,https://doi.org/10.48550/arxiv.2210.09865} and references therein for further discussions on this. Furthermore, the results from the ACT collaboration seem to prefer a scale-invariant spectrum of primordial fluctuations, with $n_s\simeq1$, while \emph{Planck}\ data excludes such a value at more than $3\sigma$. If these inconsistencies are not coming from systematics, they may hint towards new physics beyond the standard model.\\ In recent years, a lot of effort has gone into investigating extensions of $\Lambda$CDM\ to try and provide physical explanations for some of the aforementioned discrepancies; see \emph{e.g.} \citep{Schoneberg:2021qvd,Abdalla:2022yfr}. 
These often change the Universe's growth and/or expansion history at late times \citep{Pogosian2022,Heisenberg:2022gqk}, or introduce new physics at early times such that (i) the physical size of the sound horizon $r_d\equiv r_s(z_d)$ decreases with respect to $\Lambda$CDM\ \citep{Poulin:2018cxd,Niedermann:2019olb,ACT_EDE,NEDE_22}, (ii) the redshift of recombination is shifted \citep{Jedamzik_2020,Galli_2022,https://doi.org/10.48550/arxiv.2107.02243,Sekiguchi_2021} or (iii) new features are introduced in the primordial spectrum of fluctuations \citep{Hazra:2022rdl,Antony:2022ert}. While appealing from the theoretical standpoint, very few of the proposed solutions are actually able to simultaneously address these tensions. For example, it has been argued that no late-time modification to $H(z)$ is able to raise the value of $H_0$ \citep{Knox:2019rjx,Keeley:2022ojz}, while modifications to the early universe might create or exacerbate the tensions with low-$z$ observations; see \emph{e.g.} \citep{Ivanov:2020ril,Hill_2020,Smith:2020rxx,Murgia:2020ryi,Niedermann:2020qbw,DAmico:2020ods,2021CmPhy...4..123J,Simon:2022adh} for discussions on this topic. \\ Given the fundamental role of the CMB in cosmological analyses, it is crucial to understand whether the differences between the latest observations are coming from either statistical fluctuations, unaccounted systematics or new physics beyond $\Lambda$CDM. In this work, we test their statistical consistency using Gaussian Processes (GP) - a non-parametric method which can effectively represent smooth deformations away from the model under consideration. If the differences between the datasets are entirely consistent with random fluctuations, then the GP regression should yield a curve consistent with zero. If not, then the GP can provide insights into what shape of deformation, either from systematics or theoretical inconsistency, is preferred by the data. \\ The structure of this paper is as follows. In Section \ref{MethodData}, we describe in detail the method and the data used in the analysis. We then proceed to perform the consistency tests using the best fit $\Lambda$CDM\ predictions, by looking for structures in the residuals with respect to the mean function. We start by confronting $\Lambda$CDM\ to the most recent CamSpec data (with the highest sky fraction) in Section \ref{LCDM_CSPR4}. We then repeat the analysis using ground-based measurements by the Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT) to look for discrepancies between the experiments in Section \ref{sec:ACTSPT}. Finally, we test the robustness of our conclusions by using a different mean function in the analysis\footnote{Namely, we use $\Lambda$CDM's best fit to \texttt{ACT+WMAP}\ data as mean function.} and investigate the consistency of the \emph{Planck}\ data alone by studying the low-$\ell$ ($\ell<650$) and high-$\ell$ ranges ($\ell>650$) separately in Appendix \ref{sec:ACTWMAPmean} and \ref{sec:HvsL}. In Appendix \ref{Calibration}, we assess whether an absolute scaling of the spectra (difference in calibration) can account for the mild inconsistencies between the different collaborations. The conclusion and future prospects are summarized in Section \ref{Conclusion}. \section{Method and Data}\label{MethodData} \subsection{Data}\label{sec:data} \begin{figure} \includegraphics[width=\columnwidth]{fig/Data.pdf} \caption{TT, TE and EE residuals with respect to $\Lambda$CDM\ best fit spectra to CamSpec data. 
Solid red lines correspond to the differences in the best fit predictions from \texttt{ACT+WMAP}\ and CamSpec data.} \label{fig:residuals} \end{figure} In this work, we confront the predictions of $\Lambda$CDM\ with different space and ground-based CMB experiments as a way of testing the consistency of the model and the robustness of the measurements. Namely, we consider the following data \begin{itemize} \item \textbf{\emph{Planck}\ 2018} - Temperature (TT), Polarisation (EE) and their cross-correlation (TE) from the final \emph{Planck}\ 2018 data release \citep{Planck_results_2020,P18Cosmo2020}. More specifically, we use data from the latest (cleanest) \texttt{CamSpec NPIPE PR4\_v12.6} likelihood \cite{Rosenberg:2022sdy}; see \cite{Efstathiou_2021} for details on the \texttt{CamSpec} likelihood. These cover the range $\ell\in[30,2500]$ in TT and $\ell\in[30,2000]$ in TE and EE. We refer to this data simply as CamSpec. \item \textbf{ACT DR4} - Similarly, we use the Atacama Cosmology Telescope (ACT) Temperature (TT), $E$-mode Polarisation (EE) and their cross-correlation (TE) from the \texttt{ACTPolliteDR4} likelihood in the latest ACT data release \citep{Aiola_2020,Choi_2020}. We refer to these simply as ACT. \item \textbf{SPT 3G} - Finally, we also include the latest results from the South Pole Telescope 2018 collaboration (SPT-3G) \cite{SPT-3G:2022hvq}. These are updated measurements of both E-mode Polarisation (EE) and Temperature-Polarisation cross-correlation (TE) from \cite{Dutcher_2021}, but with the inclusion of TT measurements. These cover angular scales $\ell\in[750,3000]$ for TT and $\ell\in[350,3000]$ in TE and EE. \end{itemize} We should note that we use the minimum-variance-combined bandpowers for ACT and SPT data, which might not accurately reflect the full information contained in these datasets. We believe however that this can be seen as a zeroth-order approximation. A more rigorous analysis using the full (multi-frequency) likelihood might be needed for a more robust interpretation of the results. \subsection{Gaussian Process Regression}\label{Method} Gaussian Processes (GP) \cite{rasmussen2006gaussian} have been extensively used in the literature to fit a smooth curve from noisy and/or sparse data without the need to write down an explicit parametric model. GP excels when the noise in the data is well approximated by a (multivariate) Gaussian distribution. It provides a posterior distribution of smooth functions given the data based on two assumptions on the functional form: the mean function ($\mu(x)$) and the kernel ($k(x,x')$). In-depth analyses of GP's dependence on these assumptions are given in \citep[e.g.][]{PhysRevD.85.123530,PhysRevD.87.023520,Hwang:2022hla}. \\ The mean of the GP posterior distribution evaluated at a set of `test' points $\vect{x}_\star$ can be easily calculated through \begin{equation}\label{mean_gp} \vect{\mu}= \vect{m}_\star+ \mathbf{K_\star}\mathbf{K}^{-1}(\vect{y}-\vect{m}) \end{equation} where $\vect{m}_\star\equiv\mu(\vect{x}_\star)$, $\vect{m}\equiv \mu(\vect{x})$, $\mathbf{K}_\star \equiv k(\vect{x}_\star, \vect{x})$, $\mathbf{K}\equiv k(\vect{x},\vect{x})+\Sigma$, where the observations $\vect{y}$ are made at data points $\vect{x}$ with the data covariance matrix $\Sigma$. 
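As an illustration, the posterior mean of Eq.~\eqref{mean_gp} can be evaluated in a few lines of \texttt{numpy}. The sketch below is purely schematic (the kernel and data vectors are placeholders) and uses a Cholesky factorisation of $\mathbf{K}$ rather than an explicit inverse.
\begin{verbatim}
import numpy as np

def gp_posterior_mean(x, y, m, x_star, m_star, Sigma, kernel):
    # Posterior mean: mu = m_star + K_star K^{-1} (y - m)
    # x, y, m       : data points, observations, mean function at the data
    # x_star, m_star: test points and mean function evaluated there
    # Sigma         : data covariance matrix
    # kernel        : callable k(x1, x2) returning a covariance matrix
    K = kernel(x, x) + Sigma
    K_star = kernel(x_star, x)
    L = np.linalg.cholesky(K)              # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - m))
    return m_star + K_star @ alpha
\end{verbatim}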
Similarly, the posterior of the covariance is obtained using \begin{equation}\label{cov_gp} \vect{C}=\mathbf{K}_{\star\star}-\mathbf{K}_\star\mathbf{K}^{-1}\mathbf{K}^{T}_{\star} \end{equation} where $\mathbf{K}_{\star\star}\equiv k(\vect{x}_\star,\vect{x}_\star)$. \begin{table} \centering \caption{Uniform prior ranges in the hyperparameters for TT, TE and EE.} \label{tab:priors_params} {\rowcolors{2}{lgray}{white} \begin{tabular}{ccc} \hline Parameter & $\log_{10}\sigma_f$ & $\log_{10}\ell_f$ \\ \hline TT & $[-3,2]$ & $[0,4]$ \\ TE & $[-3,0.5]$ & $[0,4]$ \\ EE & $[-3,0.5]$ & $[0,4]$ \\ \hline \end{tabular}} \end{table} In practice, the calculation of such quantities amounts to a matrix inversion of $\mathbf{K}$. Computationally, a Cholesky decomposition is often preferred as it is a faster and numerically more stable procedure. Finally, the log-marginal likelihood (LML) under a GP is given by \begin{equation}\label{LML} \ln\mathcal{L}=-\frac12\big[\vect{r}^{T}\,\mathbf{K}^{-1}\,\vect{r}+\ln{|\mathbf{K}|} + N\ln{(2\pi)}\big] \end{equation} where $\vect{r}=\vect{y-m}$ is the residual vector, $N$ is the number of (observed) datapoints and $|\mathbf{K}|$ denotes the determinant of the full covariance matrix. The GP predictions depend on the choice of kernel describing the correlations between the data points. In this work, we use a \emph{squared exponential} (SE) kernel given by \begin{equation}\label{kernel} k(x,x';\sigma_f,\ell_f)=\sigma_f^2\,e^{-(x-x')^2/2\ell_f^2}, \end{equation} where $\sigma_f$ and $\ell_f$ determine the amplitude and typical length-scale of the correlations, respectively. These hyperparameters are optimized by maximizing the log-marginal likelihood in \eqref{LML}; we refer to \cite{rasmussen2006gaussian} for a more detailed discussion.\\ In this work, we focus on testing the consistency of the $\Lambda$CDM\ model, and thus decide to work in residual space where the best-fit ($\Lambda$CDM) predictions have been subtracted from the data---effectively choosing $\Lambda$CDM\ as a GP mean function. More specifically, we decide to work in the space of $\mathcal{D}_\ell=\ell(\ell+1)\mathcal{C}_\ell/2\pi$, where the physical (oscillatory) features would be more pronounced. Having a closer look at Eq. \eqref{LML}, it is seen that if the mean function is a good (enough) fit to the data, the first and second (penalty) terms in \eqref{LML} will tend to prefer \emph{no extra-correlations} (\emph{i.e.} $\sigma_f\simeq0$) or \emph{diverging correlation-lengths} ($\ell_f\to\infty$), as encoded in the GP kernel \eqref{kernel}. In the presence of hidden systematics, or if the mean function requires modification, however, a finite value for $(\sigma_f,\ell_f)$ might be statistically preferred. Therefore, inspecting the two-dimensional likelihood profile $\mathcal{L}(\sigma_f,\ell_f)$ can yield valuable information on the model and the dataset under consideration \citep{PhysRevD.87.023520,Aghamousa_2017,Keeley:2019hmw,Krishak:2021fxp}. Thus, if the likelihood is maximized for $\sigma_f\to0$ (or $\ell_f\to\infty$), the mean function is consistent with the data. On the other hand, any significant detection of $\sigma_f\neq0$ can be interpreted as a hint of underlying structures or systematics in the data that cannot be properly accounted for by the model, in the form of a smooth deformation with a typical amplitude and correlation length set by the preferred values of $(\sigma_f,\ell_f)$. \section{Results and Discussions} In this section, we confront the best-fit $\Lambda$CDM\ predictions with the various CMB observations.
More specifically, we choose the $\Lambda$CDM\ best-fit to CamSpec PR4 data as a mean function in our Gaussian Process since it is obtained from the analysis of the latest and the most constraining data. Following \cite{Aghamousa_2017}, our goal is to test the consistency of the $\Lambda$CDM\ model, update the analysis to include the most recent CMB measurements described before, namely the final \emph{Planck}\ \emph{CamSpec-PR4v12.6}, ACT DR4 and SPT-3G data releases, and test the consistency between these datasets. In Section \ref{LCDM_CSPR4}, we will extensively discuss the results using \emph{Planck}\ data. The results of the analysis using ground-based observations, namely ACT and SPT, will be discussed in Section \ref{sec:ACTSPT}. Finally, to explore possible systematics affecting the low-$\ell$ part of the \emph{Planck}\ data, we use $\Lambda$CDM's best-fit predictions to \texttt{ACT+WMAP}\ as the mean function instead, effectively replacing \emph{Planck}'s large scale constraints by WMAP. The results using this choice of mean function are discussed in Appendix \ref{sec:ACTWMAPmean}. Furthermore, we investigate the consistency of the \emph{Planck}\ data alone by studying the low-$\ell$ ($\ell<650$) and high-$\ell$ ranges ($\ell>650$) separately in Appendix \ref{sec:HvsL}. \subsection{Consistency of \tpdf{$\Lambda$CDM } with \emph{Planck}\ PR4}\label{LCDM_CSPR4} \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_TT_CamSpecPR4.png} \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_TE_CamSpecPR4.png} \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_EE_CamSpecPR4.png} \caption{$\Delta\ensuremath{\chi^2}$ statistic as a function of \ensuremath{(\sigma_f,\ell_f)}\ for the \emph{CamSpec PR4} data, and $\Lambda$CDM\ best-fit to the same data as mean function. The color bar shows the improvement in fit, where $\Delta\ensuremath{\chi^2}=-2(\ln\ensuremath{\mathcal{L}}^{\rm GP}-\ln\ensuremath{\mathcal{L}}^{\Lambda \rm CDM})$ and $\ln\ensuremath{\mathcal{L}}^{\rm GP}\ensuremath{(\sigma_f,\ell_f)}$ is the log marginal likelihood in Eq. \eqref{LML}.} \label{fig:posteriors_CamSpec-CamSpec} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{fig/CamSpecPR4/Reconstructions_CamSpecPR4.png} \caption{GP reconstructions for the set of hyperparameters maximizing the log-marginal likelihood in Eq. \eqref{LML} when using CamSpec PR4 data and the corresponding $\Lambda$CDM\ best-fit spectra as mean functions. The solid line and shaded regions correspond to the mean and $2\sigma$ confidence intervals around it, respectively.} \label{fig:GP_CamSpec} \end{figure} We start by considering the $\Lambda$CDM's best fit to the latest CamSpec data \cite{Rosenberg:2022sdy} as the mean function in our analysis. In Fig. \ref{fig:posteriors_CamSpec-CamSpec}, we show the two-dimensional likelihood profiles for the CamSpec residuals with respect to $\Lambda$CDM's best-fit $\mathcal{D}_\ell$'s, as a function of \ensuremath{(\sigma_f,\ell_f)}. The color bar shows the goodness of fit, where $\Delta\ensuremath{\chi^2}=-2(\ln\ensuremath{\mathcal{L}}^{\rm GP}-\ln\ensuremath{\mathcal{L}}^{\Lambda \rm CDM})$ and $\ln{\mathcal{L}^{\rm GP}}$ is the log-marginal likelihood (LML), defined in Eq. \eqref{LML}. Negative values of $\Delta\ensuremath{\chi^2}$ (in blue) reflect regions in parameter space yielding an improvement in the fit with respect to $\Lambda$CDM. 
Conversely, red colored regions correspond to deviations from the mean function leading to a degraded fit to the data ($\Delta\ensuremath{\chi^2}>0$), whereas gray shaded regions represent no improvement at all. The black dot represents the set of hyperparameters \ensuremath{(\sigma_f,\ell_f)}\ yielding the highest likelihood and the black solid line on the colorbar the corresponding improvement in fit ($\Delta\ensuremath{\chi^2}$). As mentioned before, if the mean function is a good description of the data, the LML in \eqref{LML} should peak at $\sigma_f\to0$ and/or $\ell_f\to\infty$. In other words, no smooth deviations away from the best-fit $\Lambda$CDM\ are needed to explain the data. However, if the LML peaks at finite (possibly large) values of \ensuremath{(\sigma_f,\ell_f)}, it might point towards the need for a different mean function, or indicate the existence of hidden structures/systematics in the data.\\ In this case, as can be seen from Fig. \ref{fig:posteriors_CamSpec-CamSpec}, the $\Lambda$CDM\ model provides a very good fit to TT, TE, and EE data. The LML seems to prefer small deviations from the best fit $\Lambda$CDM\ spectra: $\sigma_f\lesssim1$ for TT and TE and $\sigma_f\lesssim0.1$ for the case of EE. Any larger deviation from the mean function is highly penalized by the data, as can be seen from the color bar on the right ($\Delta\ensuremath{\chi^2}>0$ meaning a degraded fit), and the improvement in fit by the GP is negligible in both TT and TE; see Table \ref{tab:deltachi2_CSPR4}. Meanwhile, there is a noticeable improvement in fit in EE for $\ell_f\lesssim1$. Such a GP realisation is essentially white noise, where the values at each $\ell$ are uncorrelated with each other. A preference for the inclusion of white noise indicates that the fluctuations in the data around the mean are larger than expected from the data covariance matrix. In other words, the covariance matrix may have been slightly underestimated for this EE data. This may alter the weights and hence the optimality of the cosmological parameter estimation, but is less likely to have a significant effect on the estimated parameter values. \\ In fact, the $\Delta\ensuremath{\chi^2}$ statistic is expected to have a local minimum (equivalently, the LML a local maximum) at some $\sigma_f$ with $\ell_f \ll 1$ about half of the time. Taking the limit $\ell_f \rightarrow 0$ in \eqref{LML}, we found that the necessary and sufficient condition for such a minimum is given by ${\| \Sigma^{-1} \bf{r} \|^2 > \rm{tr}\left( \Sigma^{-1}\right)}$, where $\bf{r}$ is the residual vector. Assuming that the mean and the covariance matrix are exact, the expected value of the left-hand side is equal to the right-hand side. When the residual vector is large enough, either due to statistical fluctuations or an underestimation of the errors, we expect to find such a minimum at the $\sigma_f$ satisfying ${\| (\sigma_f^2 I + \Sigma)^{-1} \bf{r} \|^2 = \rm{tr}\left[ (\sigma_f^2 I + \Sigma)^{-1}\right]}.$ \\ Despite this fact, it is still intriguing that we find $\Delta\chi^2$ as low as $-10.21$ in EE for CamSpec PR4. As we discuss in Appendix \ref{sec:HvsL}, most of these improvements in fit come from the low-$\ell$ data, and notably are \textit{not} present in CamSpec Public Release 3 (PR3). Our non-parametric approach using GP indicates that the updated analysis pipeline of CamSpec PR4 has caused the residuals in the EE coadded spectrum to vary more than the expected amount given by the covariance matrix.
Indeed, we confirm that the usual chi-squared statistic for the EE data in the range $30\le\ell\le650$ is $\ensuremath{\chi^2}=709$, larger than the expected value of $620$ by $2.51\sigma$.\footnote{For the full data range of $30\le\ell\le2000$, $\ensuremath{\chi^2}=2023$, which is $0.83\sigma$ above the expected value of 1971. Note that here we look at the TT+TE+EE best-fit values.} This excess has been investigated in \citep[][see e.g. Table 2]{Rosenberg:2022sdy} and is statistically consistent with random fluctuations. However, it is interesting that our GP analysis indicates that these excess variances appear to be uncorrelated random fluctuations rather than smooth deformations in the mean, unlike the cases of TT and TE. Nonetheless, the deviations from zero are $\mathcal{O}(10^{-2})$, with relatively minor improvements in fit, suggesting $\Lambda$CDM\ is a good description of the \emph{Planck}\ data.\\ We then use the set of \ensuremath{(\sigma_f,\ell_f)}\ maximizing the likelihood in Eq. \eqref{LML} (shown as a black dot in Fig. \ref{fig:posteriors_CamSpec-CamSpec}) to obtain the mean (in orange), $68\%$ and $95\%$ C.L. (gray-shaded bands) shown in Fig. \ref{fig:GP_CamSpec}, using Eqs. \eqref{mean_gp} and \eqref{cov_gp}, respectively. Again, we see that the reconstructions are perfectly consistent with zero across the entire multipole range covered by the data.\\ \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_TT_ACT.png} \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_TE_ACT.png} \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_EE_ACT.png} \caption{$\Delta\ensuremath{\chi^2}$ statistic as a function of \ensuremath{(\sigma_f,\ell_f)}\ for the ACT DR4 data, and $\Lambda$CDM\ best-fit to CamSpecPR4 data as mean function. The color bar shows the improvement in fit, where $\Delta\ensuremath{\chi^2}=-2(\ln\ensuremath{\mathcal{L}}^{\rm GP}-\ln\ensuremath{\mathcal{L}}^{\Lambda \rm CDM})$ and $\ln\ensuremath{\mathcal{L}}^{\rm GP}\ensuremath{(\sigma_f,\ell_f)}$ is the log marginal likelihood in Eq. \eqref{LML}.} \label{fig:posteriors_ACT-CamSpec} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{fig/CamSpecPR4/Reconstructions_ACT.png} \caption{GP reconstructions for the set of hyperparameters maximizing the log-marginal likelihood in Eq. \eqref{LML} when using ACT DR4 data and $\Lambda$CDM\ best-fit to CamSpec PR4 as mean function. Solid line and shaded regions correspond to the mean and $2\sigma$ confidence intervals around it, respectively.} \label{fig:GP_ACT} \end{figure} These results suggest that the $\Lambda$CDM\ model is consistent with the \emph{Planck}\ CMB data. This should not come as a surprise, since we have chosen the best-fit predictions to CamSpec data as the mean function in our analysis. However, as explained before, this serves as a consistency test for the different CMB measurements. In the presence of systematics, or physics beyond $\Lambda$CDM, some inconsistencies might appear when using a (possibly incorrect) $\Lambda$CDM\ mean function. Finally, we would like to mention that the \emph{Planck}\ analysis has been reproduced to $0.1\sigma$ accuracy using the new pipeline for the Simons Observatory \cite{SimonsPipeline}, providing an independent cross-check of the \emph{Planck}\ results. 
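For concreteness, the qualitative behaviour of the likelihood surfaces in Fig.~\ref{fig:posteriors_CamSpec-CamSpec} can be explored with a short routine evaluating the log-marginal likelihood of Eq.~\eqref{LML} with the squared-exponential kernel of Eq.~\eqref{kernel} on a grid of hyperparameters. The sketch below is schematic: it assumes the binned multipoles \texttt{ell}, the residuals \texttt{r} (data minus the $\Lambda$CDM\ best fit) and the data covariance \texttt{Sigma} are already available, and it is not the exact pipeline used to produce the figures.
\begin{verbatim}
import numpy as np

def log_marginal_likelihood(ell, r, Sigma, sigma_f, ell_f):
    # LML of a zero-mean GP on the residuals r = y - m, with the
    # squared-exponential kernel k = sigma_f^2 exp(-(l-l')^2 / 2 ell_f^2).
    d2 = (ell[:, None] - ell[None, :]) ** 2
    K = sigma_f ** 2 * np.exp(-0.5 * d2 / ell_f ** 2) + Sigma
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (r @ alpha + logdet + len(r) * np.log(2.0 * np.pi))

def delta_chi2_grid(ell, r, Sigma, sigmas, lengths):
    # Delta chi^2 = -2 (lnL_GP - lnL_LCDM); the LCDM value is recovered
    # in the limit sigma_f -> 0, where K reduces to the data covariance.
    lnL0 = log_marginal_likelihood(ell, r, Sigma, 1e-12, 1.0)
    return np.array([[-2.0 * (log_marginal_likelihood(ell, r, Sigma, s, l)
                              - lnL0)
                      for l in lengths] for s in sigmas])
\end{verbatim}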
\begin{table} \centering {\rowcolors{2}{lgray}{white} \begin{tabular}{cccc} \hline $\Delta\ensuremath{\chi^2}$ & \emph{Planck}\ CamSpec PR4 & ACT DR4 & SPT-3G \\ \hline TT & $-0.35$ & $-15.11$ & $-0.124$ \\ TE & $-0.75$ & $-2.62$ & $-0.002$ \\ EE & $-10.21$ & $-7.89$ & $-8.916$ \\ \hline \end{tabular}} \centering \caption{$\Delta\ensuremath{\chi^2}\equiv( \ensuremath{\chi^2}_{\rm GP}-\ensuremath{\chi^2}_{\Lambda\rm CDM})$ improvements in fit for the ground and space-based experiments obtained with the GP using $\Lambda$CDM's best-fit to CamSpec PR4 as mean function.} \label{tab:deltachi2_CSPR4} \end{table} \subsection{Consistency of \tpdf{$\Lambda$CDM} with ground-based experiments (ACT \& SPT)}\label{sec:ACTSPT} Next, we use the same mean function as before and look for potential structures in the residuals of ACT and SPT data. If $\Lambda$CDM\ is the correct model describing the CMB anisotropies up to $\ell\simeq4000$, and its parameters are accurately estimated by \emph{Planck}, then the two-dimensional distributions of the hyperparameters \ensuremath{(\sigma_f,\ell_f)}\ should not prefer any finite, non-vanishing values. Any unaccounted-for systematics, or discrepancies between the experiments, would however be reflected in the two-dimensional likelihood profiles. \subsubsection{ACT DR4}\label{ACTDR4} In Fig.~\ref{fig:posteriors_ACT-CamSpec}, we show the posteriors for \ensuremath{(\sigma_f,\ell_f)}\ when using ACT DR4 data and $\Lambda$CDM's best-fit to \emph{Planck}\ as mean function. In this case, an interesting feature appears in the TT data. The LML peaks at $\ensuremath{(\sigma_f,\ell_f)} \simeq(9,2\times10^3)$, where the GP finds an improvement in fit with respect to the mean function ($\Lambda$CDM), corresponding to a $\Delta\ensuremath{\chi^2}=-15.11$; see Table \ref{tab:deltachi2_CSPR4}. Interestingly, the TE and EE posteriors show similar (bimodal) distributions, with a preference for non-vanishing values of \ensuremath{(\sigma_f,\ell_f)}, although the statistical significance of these deviations from $\Lambda$CDM\ is milder than in the TT case. The improvements in fit are reported in the middle column of Table \ref{tab:deltachi2_CSPR4}. In Fig. \ref{fig:GP_ACT}, we show the mean and $2\sigma$ reconstructions from the GP when using the set of hyperparameters maximizing the LML in Eq. \eqref{LML}, shown as black dots in Fig. \ref{fig:posteriors_ACT-CamSpec}. Note that the ACT reconstructions of the TT spectra seem to prefer lower amplitudes at $\ell\lesssim3000$ (at more than $2\sigma$) and a slightly larger amplitude at $\ell\gtrsim3000$ with respect to what is predicted by $\Lambda$CDM's best-fit to the CamSpec (\emph{Planck}) data. This is yet another (non-parametric) indication that the ACT data seem to favor a scale-invariant, Harrison-Zel'dovich ($n_s\simeq1$) spectrum of fluctuations \citep{PhysRevD.103.063529,https://doi.org/10.48550/arxiv.2210.06125,Corona:2021qxl}.
At the same time, a larger value for $n_s$ might also imply an increased value of $H_0$\footnote{However, we should note that such a shift in the cosmological parameters, $H_0\to73\;\rm km/s/Mpc$ and $n_s\to1$, would typically lead to larger values of $S_8$, worsening the fit to low-redshift (weak-lensing) measurements of the clustering amplitude.}, through a reduction of the size of the sound horizon \citep{Ye_2021,Jiang_2022}.\\ \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_TT_SPT-3G.png} \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_TE_SPT-3G.png} \includegraphics[width=0.32\textwidth]{fig/CamSpecPR4/2DPosteriors_EE_SPT-3G.png} \caption{$\Delta\ensuremath{\chi^2}$ statistic as a function of \ensuremath{(\sigma_f,\ell_f)}\ for the SPT-3G data, and $\Lambda$CDM\ best-fit to CamSpec data as mean function. The color bar shows the improvement in fit, where $\Delta\ensuremath{\chi^2}=-2(\ln\ensuremath{\mathcal{L}}^{\rm GP}-\ln\ensuremath{\mathcal{L}}^{\Lambda \rm CDM})$ and $\ln\ensuremath{\mathcal{L}}^{\rm GP}\ensuremath{(\sigma_f,\ell_f)}$ is the log marginal likelihood in Eq. \eqref{LML}.} \label{fig:Posteriors_SPT-CS} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{fig/CamSpecPR4/Reconstructions_SPT-3G.png} \caption{GP reconstructions using the set of hyperparameters maximizing the log-marginal likelihood in Eq. \eqref{LML} when using SPT-3G data and $\Lambda$CDM\ best-fit to CamSpec PR4 as mean function. Solid line and shaded regions correspond to the mean and $2\sigma$ confidence intervals around it, respectively.} \label{fig:GP_SPT} \end{figure} While the LML improvements at $\ell_f \sim 2000 $ relate to the overall scaling through $n_\mathrm{s}$, the other mode in the LML found at $\ell_f \sim 90$ in the TE and EE spectra may be closely related to the cosmological parameters affecting the width and height of the acoustic peaks. Roughly speaking, a realisation of a GP with $\ell_f \sim 90$ has a typical full width at half maximum of $\mathrm{FWHM}=2\sqrt{2\ln2}\,\ell_f\approx210$ and is likely to have oscillations that mimic the acoustic peaks in the CMB ($\Delta\ell \sim 300$). Indeed, the GP reconstruction of Fig. \ref{fig:GP_ACT} for the EE spectra has oscillation scales similar to those of the differences in the best-fit predictions from \texttt{ACT+WMAP}\ and CamSpec data (red line in Fig. \ref{fig:residuals}). The features present in our non-parametric reconstructions can therefore be a manifestation of the mild discordance in the ($\omega_b,\omega_c$)-plane between Planck and ACT (seen for instance in Fig. 7 in \cite{SPT-3G:2022hvq}), or vice versa. An interesting feature is also captured in the EE reconstructions around $\ell\sim 400-700$, with milder oscillations extending up to $\ell\sim3000$. These might be a slight hint of new features, a manifestation of the mild disagreement in the estimated cosmological parameters, or some unaccounted systematics affecting the low/high-$\ell$ part of the ACT and/or \emph{Planck}\ data. Together with the issue in TT, this might explain why recent analyses reach slightly different conclusions when considering ACT data alone, or performing cuts at a given $\ell_{\rm max}$ in the \emph{Planck}\ data \citep[e.g.][]{Poulin:2021bjr,ACT_EDE}.
Our results seem to support previous findings in the context of $\Lambda$CDM\ and simple extensions \cite[e.g.][]{PhysRevD.103.063529,Galli_2022,Corona:2021qxl}, suggesting that these mild discrepancies are mainly driven by the ACT data and in particular by the TT measurements. Whether such discrepancies have a physical, systematic, or statistical origin, however, remains to be determined by upcoming (more precise) CMB observations. In particular, the ACT collaboration is soon expected to update their results with the ACT DR6 data release. \subsubsection{SPT-3G} Similarly, we inspect the residuals of the SPT-3G data for structures, after subtracting $\Lambda$CDM's best fit to CamSpec PR4 data. The results are shown in Fig. \ref{fig:Posteriors_SPT-CS}. The posteriors of the temperature auto-correlation (TT) and temperature/polarisation cross-correlation (TE) are in good agreement with the $\Lambda$CDM\ predictions, and the GP finds negligible improvements with respect to $\Lambda$CDM; see also Fig. \ref{fig:GP_SPT}. However, this could also be explained by the larger uncertainties in the temperature measurements with respect to \emph{Planck}\ (see Fig. \ref{fig:residuals}). The situation is slightly different for EE, suggesting again a bimodal distribution in the \ensuremath{(\sigma_f,\ell_f)}-plane, with typical deviations from a zero mean-function of the order $\sigma_f\lesssim1$ and with a preferred correlation length of $\ell_f\simeq30$, indicating that the improvement in fit is likely due to subtle oscillations around the $\Lambda$CDM\ best fit predictions. The improvements in fit with respect to $\Lambda$CDM\ are again reported in the last column of Table \ref{tab:deltachi2_CSPR4}, with a maximum $\Delta\chi^2=-8.196$ for EE, which is of the same order of magnitude as the improvement found for the ACT data. Curiously, the SPT reconstructions also show a prominent feature at intermediate scales ($\ell\sim500-700$). As discussed before, these oscillations might be linked to the mild differences in the cosmological parameters (such as $\omega_b$ or $\omega_c$) affecting the width, height and position of the acoustic peaks with respect to the ones inferred by \emph{Planck}. We should mention that the SPT-3G results are overall consistent with those of \emph{Planck}\ at the parameter level; see Table IV and Fig. 7 in \cite{SPT-3G:2022hvq}. However, our results suggest that the very mild differences between the two are mostly driven by the EE measurements.\\ At this stage, the mild disagreement between the experiments is not statistically significant enough ($\lesssim2\sigma$) to confidently claim a discrepancy, although it is interesting to see that both ACT and SPT seem to prefer additional features in EE with respect to the best-fit $\Lambda$CDM\ predictions to \emph{Planck}, which could be pointing towards a common physical origin. Note that with the arrival of upcoming CMB surveys such as the Simons Observatory, CMB-S4, LiteBIRD and others, the situation might soon improve and might even shed light on the origin of these mild differences. \section{Conclusion} \label{Conclusion} In this work, we used Gaussian Processes (GP) to test the consistency of $\Lambda$CDM\ with the most recent CMB observations.
In particular, we tested the robustness of the $\Lambda$CDM\ predictions against the final \emph{Planck}\ data release \citep{Rosenberg:2022sdy,Efstathiou_2021,Planck_results_2020} as well as against other ground-based temperature and polarisation measurements by the Atacama Cosmology Telescope (ACT) \citep{Aiola_2020,Choi_2020} and South Pole Telescope (SPT-3G) \citep{Dutcher_2021,SPT-3G:2022hvq} collaborations. We find a mild inconsistency between the \emph{Planck}/SPT and ACT results mainly seen in the TT spectra, where the GP finds a non-negligible improvement in fit with respect to the best fit $\Lambda$CDM\ predictions, with indications for a Harrison-Zel'dovich spectrum with $n_s\simeq1$. This is a non-parametric confirmation of previous results, which supports the idea that the ACT data seem to favor a scale-invariant primordial power spectrum. Additionally, the EE measurements from both ACT and SPT seem to require additional features at intermediate scales ($\ell \sim 400-700$), extending up to $\ell\sim2500$, which might be pointing towards a common physical origin or minor unknown systematics in the data.\\ Throughout the main body of this paper, we discussed the results when using $\Lambda$CDM's best-fit predictions to the cleanest CamSpec data as the mean function for our Gaussian Process. However, it is known that the conclusions drawn from a GP analysis are highly sensitive to the choice of mean function. Thus, for completeness and to explore possible systematics affecting \emph{Planck}'s TT measurements, we repeated our analysis using $\Lambda$CDM's best-fit to the combination of \texttt{ACT+WMAP}\ data, effectively replacing \emph{Planck}'s measurements with WMAP measurements. The results are presented in Appendix \ref{sec:ACTWMAPmean} and our conclusions are stable under such a change in the mean function. Importantly, the TT posteriors for the CamSpec data, shown in Fig.~\ref{fig:posteriors_ACT-ACT+WMAP}, require large deviations from the mean function ($\Lambda$CDM's best fit to \texttt{ACT+WMAP}) and the GP reconstruction yields a major improvement in fit ($\Delta\ensuremath{\chi^2} \simeq-225$), which reflects again the discrepancies between ACT and \emph{Planck}\ TT measurements under the assumption of a $\Lambda$CDM\ mean function; see also the corresponding reconstructions in Fig. \ref{fig:GP_CamSpec-ACT+WMAP}. The TT measurements from SPT-3G seem to support the \emph{Planck}\ results, as we find similar posterior distributions for \ensuremath{(\sigma_f,\ell_f)}, seen in the bottom left panel of Fig. \ref{fig:posteriors_ACT-ACT+WMAP}, suggesting that the \texttt{ACT+WMAP}\ best fit predictions also conflict with the TT measurements from SPT. Similarly, as can be seen from the lower panel in Fig. \ref{fig:GP_ACT-ACT+WMAP}, the EE measurements from ACT still require the aforementioned features around $\ell\sim 400-700$, regardless of the chosen mean function, which might suggest a physical origin for these oscillations that cannot be properly accounted for by the $\Lambda$CDM\ model. Such features, however, are not (yet) statistically significant enough to draw robust conclusions.\\ To summarise, we tested the consistency of $\Lambda$CDM\ against an array of ground and space-based measurements of the temperature and polarisation anisotropies in the CMB.
Overall, our analysis again confirms the robustness of the $\Lambda$CDM\ predictions when confronted with state-of-the-art CMB measurements, although we report a slight mismatch between ACT and \emph{Planck}/SPT-3G results, mainly seen in the TT spectrum and obtained here for the first time with a non-parametric approach. The arrival of upcoming CMB experiments such as the Simons Observatory \cite{SimonsObservatory:2018koc}, CMB-S4 \cite{Abazajian:2019eic} and others, will allow for further, more careful exploration of these issues. The method discussed in this work can be readily applied to the upcoming data, hopefully helping to determine whether the mild discrepancies reported here are of physical, systematic or statistical origin. \section*{Acknowledgements} The authors would like to thank Erik Rosenberg for providing us with the latest CamSpec likelihood used in this analysis. RC would like to thank Adrien La Posta for useful discussions. This work was supported by the high-performance computing cluster Seondeok at the Korea Astronomy and Space Science Institute. DKH would like to acknowledge the support from CEFIPRA grant no. 6704-1. \section*{Data Availability} \bibliographystyle{mnras}
{ "arxiv_id": "2302.14348", "language": "en", "timestamp": "2023-03-02T02:10:40", "url": "https://arxiv.org/abs/2302.14348", "yymm": "2302" }
\section{Supplementary Material} In this supplementary document, we first show more qualitative results of our method in Section~\ref{subsec:additional_qualitative_results}. We then show the results of our additional ablation study in Section~\ref{subsec:additional_ablation_study}. Finally, we report our implementation details and experimental details in Section~\ref{subsec:implementation_details} and \ref{subsec:detailed_experimental_setups}, respectively. \subsection{Additional Qualitative Results} \label{subsec:additional_qualitative_results} \subsubsection{Video Results} \vspace{-0.2\baselineskip} We provide the video results (\url{https://youtu.be/3yNGSRz564A}) of our method on image-based two-hand reconstruction in comparison to HALO~\cite{karunratanakul2021skeleton} and IntagHand~\cite{li2022interacting}. This video contains the results on InterHand2.6M~\cite{moon2020interhand2} test image sequences that are used in the main experiments in the paper. For our method and HALO, we used DIGIT~\cite{fan2021learning} to generate keypoint inputs from single images. Please note that our method and the baseline methods~\cite{li2022interacting, karunratanakul2021skeleton} are originally proposed for two-hand reconstruction from single images and/or keypoints; thus, the shapes were reconstructed from \emph{each frame independently}. One important future research direction would be to extend our model to additionally utilize temporal information for tracking applications. \vspace{-0.3\baselineskip} \subsubsection{Ablation Study} In Figure~\ref{fig:supp_ablation}, we show qualitative examples of the ablation study in Table~\ref{table:ablation_study_refinement} in the main paper. The shown examples were produced from single images, where we used the keypoints predicted by DIGIT~\cite{fan2021learning} as inputs. In the figure, $\mathcal{I}$ produces two-hand shapes that do not look plausible due to the input errors from \emph{predicted} two-hand keypoints. $\mathcal{K + I}$ generates more plausible shapes through the input keypoint refinement step performed by $\mathcal{K}$; however, it still does not properly model hand-to-hand interactions (e.g., finger contacts). Our full model, $\mathcal{K + I + R}$, reconstructs the most accurate shapes with higher hand-to-image and hand-to-hand coherency. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{figures/fig_supp_ablation.pdf} \caption{\textbf{Qualitative examples of the ablation study on InterHand2.6M~\cite{moon2020interhand2}.} $\mathcal{I}$, $\mathcal{R}$ and $\mathcal{K}$ denote Initial Hand Occupancy Network, Two-Hand Occupancy Refinement Network, and Input Keypoint Refinement Network, respectively.
} \end{center} \end{figure} \begin{figure*}[!h] \begin{center} \includegraphics[width=\textwidth]{figures/fig_supp_comparison.pdf} \caption{\textbf{Additional qualitative examples of \emph{image-based} interacting two-hand reconstruction on InterHand2.6M~\cite{moon2020interhand2}.} We compare the results of our method with HALO~\cite{karunratanakul2021skeleton} and IntagHand~\cite{li2022interacting}. \textcolor[rgb]{0.0, 0.7, 0.0}{\textbf{Green boxes}} show penetrations, \textcolor[rgb]{0.8, 0.5, 0.2}{\textbf{brown boxes}} show non-smooth shapes, and \textcolor[rgb]{0.58, 0.44, 0.86}{\textbf{purple boxes}} show shapes with bad image alignment. Our method produces two-hand shapes with \textbf{better hand-to-image and hand-to-hand coherency, fewer penetrations, and a higher resolution}.} \end{center} \end{figure*} \vspace{0.3\baselineskip} \subsection{Additional Ablation Study} \label{subsec:additional_ablation_study} We now report the quantitative results of a more detailed ablation study. Note that our main ablation study (Section~\ref{subsec:ablation_study} in the main paper) is performed using two-hand keypoints predicted by DIGIT~\cite{fan2021learning}. In this section, we further show our results using the ground truth keypoints to examine the effectiveness of Im2Hands in a wider range of experimental settings. In what follows, we first explain the notation for each of the evaluated variations of Im2Hands. \begin{itemize} \item $\boldsymbol{\mathcal{I} - \mathrm{ImageCond}}$ denotes a variation where no image conditioning is used in $\mathcal{I}$, resulting in a model equivalent to HALO~\cite{karunratanakul2021skeleton}. \item $\boldsymbol{\mathcal{I} - \mathrm{QueryImageAtt}}$ denotes a variation where no query-image attention is used in $\mathcal{I}$. Instead, pixel-aligned features (e.g., PIFu~\cite{saito2019pifu}) are used to condition our initial occupancy on an input image. \item $\boldsymbol{\mathcal{I} + \mathcal{R} - \mathrm{InitOccCond}}$ denotes a variation where the initial occupancy probability estimated by $\mathcal{I}$ is not used to condition our two-hand refined occupancy estimation in $\mathcal{R}$. \item $\boldsymbol{\mathcal{I} + \mathcal{R} - \mathrm{FeatureCloud}}$ denotes a variation where the feature cloud conversion is not performed in $\mathcal{R}$. \item $\boldsymbol{\mathcal{I} + \mathcal{R} - \mathrm{ContextLatent}}$ denotes a variation where the context latent extraction is not performed in $\mathcal{R}$. Instead, the global latent vector of each hand point cloud is used in the refined occupancy estimation. \end{itemize} Table~\ref{table:detailed_ablation_study} shows our quantitative results across the variations of Im2Hands. Our results demonstrate that each of the proposed model components contributes to more accurate two-hand shape estimation, and thus the proposed full model achieves the best performance. Considering the main ablation study results (Table~\ref{table:ablation_study_refinement} in the main paper) together, one interesting observation is that $\mathcal{I}$ contributes more to the performance improvement when using the \emph{ground truth} keypoints input, while $\mathcal{R}$ contributes more to it when using the \emph{predicted} keypoints input. This reveals that \emph{both} $\mathcal{I}$ and $\mathcal{R}$ are essential to enable robust two-hand shape reconstruction given input keypoints with various degrees of noise.
\input{tables/table_ablation.tex} \subsection{Implementation Details} \label{subsec:implementation_details} In this section, we report more details of our implementation that could not be included in the main paper due to the space limit. Note that more implementation details will be also available through our code. \subsubsection{Network Architecture} \noindent \textbf{Initial Hand Occupancy Network ($\mathcal{I}$).} For the query positional embedding module used to compute our query-image attention ($\mathrm{PosEnc}$ in Equation~\ref{eq:query-image-att}), we use a shared MLP composed of two fully-connected layers, each of them followed by ReLU activation and dropout with a rate of $0.01$. For the image encoder-decoder ($\mathrm{ImgEnc}$ in Equation~\ref{eq:query-image-att}), we use a ResNet-50~\cite{he2016deep} architecture as an encoder and a CNN composed of four deconvolutional layers as a decoder. For the multi-headed self-attention module ($\mathrm{MSA}$ in Equation~\ref{eq:query-image-att}), we extract features of $8 \times 8$ image patches using an encoder of Vision Transformer~\cite{dosovitskiy2020image} and apply self-attention with two attention heads. The resulting features extracted by query-image attention are concatenated with the features extracted by HALO~\cite{karunratanakul2021skeleton} encoder after the first layer in the part occupancy functions of HALO. For the architecture of HALO encoder and part occupancy functions, we follow the design of HALO. We thus refer the reader to \cite{karunratanakul2021skeleton} for more details. \noindent \textbf{Two-Hand Occupancy Refinement Network ($\mathcal{R}$).} For iso-surface point extraction, we evaluate the occupancy probabilities at uniformly sampled query points in 3D space and collect the query points that are estimated to be on the surface. We then apply farthest point sampling (FPS) to obtain 512 points to create each of the hand point clouds (i.e., $\mathcal{P}_l$ and $\mathcal{P}_r$). For feature cloud conversion, we use the same image encoder-decoder used in $\mathcal{I}$. For point cloud encoder ($\mathrm{PCEnc}$ in Equation~\ref{eq:pcd_encoding}), we use the same encoder architecture as in AIR-Net~\cite{giebenhain2021air} except for the input point dimension, which is increased due to our feature cloud conversion procedure. We use a shared $\mathrm{PCEnc}$ for both sides of hand feature clouds, but we distinguish each side by concatenating the binary label -- $[1, 0]$ for left hand and $[0, 1]$ for right hand -- to each of the point features. For our context encoder ($\mathrm{ContextEnc}$ in Equation~\ref{eq:context_encoding}), we concatenate the inputs ($z_l$, $z_r$, $z_I$) and apply an MLP composed of two fully-connected layers, each of them followed by ReLU activation. For our point cloud decoder that estimates the refined occupancy ($\mathrm{PCDec}$ in Equation~\ref{eq:pcd_decoding}), we concatenate the query coordinate $x$ with the initial occupancy probability at $x$ and feed the resulting query vector to the decoder of AIR-Net along with $\mathcal{A}_s$ and $z_c$. For more details on the architecture of $\mathrm{PCEnc}$ and $\mathrm{PCDec}$, please refer to \cite{giebenhain2021air}. \noindent \textbf{Input Keypoint Refinement Network ($\mathcal{K}$).} For $\mathrm{KptEnc}$, we use (1) an embedding layer to embed the index of each keypoint and (2) an MLP composed of two fully-connected layers to encode the coordinate of each keypoint. 
We then concatenate the index feature and the coordinate feature for each of the keypoints and set them as node features in a two-hand skeleton graph. We then feed the skeleton graph to a graph convolutional network ($\mathrm{GCN}$) composed of four layers with residual connections. The updated node features are directly used for multi-headed self-attention ($\mathrm{MSA}$) between patch-wise image features, which are extracted by the same Vision Transformer~\cite{dosovitskiy2020image} encoder used in $\mathcal{I}$. The updated node features are then fed to an output keypoint coordinate regressor, which is an MLP composed of two fully-connected layers -- each of them followed by ReLU activation and dropout of a rate of $0.01$. We again emphasize that we will release our code for the reproducibility of our method. We kindly refer the reader to the code for minor implementation details (e.g., hyper-parameter values with respect to each of the layers). \subsubsection{Training Details} \label{subsec:training_details} For $\mathcal{I}$ and $\mathcal{R}$, we train each of the networks for 10 epochs with a batch size of 8. We use an Adam optimizer with an initial learning rate of $1e-4$, betas of $[0.9, 0.999]$, an epsilon of $1e-8$, and a weight decay parameter of $1e-5$. We additionally use a learning rate scheduler to decay the learning rate by $0.2$ every 5000 training steps. For the loss function to train $\mathcal{R}$, we use a weighted sum of the proposed loss terms (i.e., occupancy loss and penetration loss), with the weight values set as 1 and 0.001, respectively. Other training details (e.g., training query sampling) are the same as in the original HALO framework (please refer to \cite{karunratanakul2021skeleton} for more detail). For $\mathcal{K}$, we train the network for 30 epochs with a batch size of 32. We use an Adam optimizer with an initial learning rate of $1e-4$ with a scheduler to decay the learning rate by $0.3$ every 5000 training steps. \subsection{Detailed Experimental Setups} \label{subsec:detailed_experimental_setups} \subsubsection{Metric Computation} For Im2Hands and HALO~\cite{karunratanakul2021skeleton}, we extract the reconstructed meshes by evaluating occupancy probabilities at uniformly sampled query points in 3D space and applying Marching Cubes~\cite{lorensen1987marching}. We then compute our metrics (i.e., mean Intersection over Union and Chamfer L1-Distance) after root alignment of each hand -- following the previous works on interacting two-hand reconstruction~\cite{li2022interacting, zhang2021interacting}. Note that the existing works~\cite{li2022interacting, zhang2021interacting} use Mean Per Vertex Position Error (MPVPE) as an evaluation metric, which assumes one-to-one vertex correspondence between the ground truth and the predicted meshes. As our method does not assume such vertex correspondence, we use mean Intersection over Union and Chamfer L1-Distance as our evaluation metrics as in other implicit function-based reconstruction methods~\cite{karunratanakul2021skeleton, deng2020nasa}. \vspace{-0.2\baselineskip} \subsubsection{Setups for Generalizability Test} In this section, we report more details of our setups for the generalizability test (Section~\ref{subsec:generalizability_test} in the paper). 
For pre-processing the two-hand frames in the RGB2Hands~\cite{wang2020rgb2hands} and EgoHands~\cite{bambach2015lending} datasets, we compute a coarse foreground mask obtained by thresholding the depth map provided by \cite{wang2020rgb2hands, bambach2015lending} to mask out the approximate background region. We then directly apply Im2Hands trained only on InterHand2.6M~\cite{moon2020interhand2} to evaluate its generalization ability to unseen hand shapes and appearances. Other experimental settings are the same as in our main experiments on the InterHand2.6M dataset. \section{Experiments} \label{sec:experiments} \subsection{Experimental Setups} \label{subsec:experimental_setting} \subsubsection{Datasets, Metrics, and Keypoint Estimators} \noindent \textbf{Datasets.} We mainly use the InterHand2.6M~\cite{moon2020interhand2} dataset -- the only interacting two-hand dataset with dense shape annotations -- for both quantitative and qualitative evaluation. To maintain consistency with the previous work~\cite{li2022interacting}, we only use interacting hand (IH) samples annotated as \emph{valid} hand type. The resulting dataset contains 366K training samples, 110K validation samples, and 261K test samples. For qualitative evaluation, we additionally demonstrate our results on the RGB2Hands~\cite{wang2020rgb2hands} and EgoHands~\cite{bambach2015lending} datasets, which contain RGB videos of two interacting hands without shape annotations. \noindent \textbf{Evaluation Metrics.} For evaluating the quality of reconstructed two-hand shapes, we compute the mean Intersection over Union (IoU) and Chamfer L1-Distance (CD) between the predicted and the ground truth two-hand shapes (a minimal computation sketch is provided below). We also evaluate the accuracy of 3D hand keypoints after the proposed keypoint refinement using Mean Per Joint Position Error (MPJPE). \noindent \textbf{Two-Hand Keypoint Estimation Methods.} To evaluate Im2Hands on \emph{single-image} two-hand reconstruction, we leverage an off-the-shelf image-based two-hand keypoint estimation method. We consider DIGIT~\cite{fan2021learning} and IntagHand~\cite{li2022interacting}, whose official implementations are publicly available. We would like to note that IntagHand is also proposed for two-hand \emph{shape} estimation, so we additionally consider it as a baseline for two-hand shape reconstruction. However, when using IntagHand as a \emph{keypoint} estimation method in our image-based reconstruction experiments, we completely discard the predicted shape information and only use the estimated keypoints -- considering it as a pure two-hand keypoint estimation method. We emphasize that our two-hand occupancy function is agnostic to the architecture of the preceding two-hand keypoint estimator and can be used in a plug-and-play manner.
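To make the evaluation protocol concrete, the following is a minimal sketch of how a mesh and the IoU metric can be obtained from predicted occupancy values; the grid resolution, threshold, and function names are illustrative assumptions rather than the exact evaluation code.
\begin{verbatim}
import numpy as np
from skimage import measure

def extract_mesh(occ_grid, threshold=0.5, spacing=(1.0, 1.0, 1.0)):
    """Run Marching Cubes on a (D, H, W) grid of occupancy probabilities
    evaluated at uniformly sampled query points."""
    verts, faces, _, _ = measure.marching_cubes(occ_grid, level=threshold,
                                                spacing=spacing)
    return verts, faces

def intersection_over_union(occ_pred, occ_gt, threshold=0.5):
    """IoU between predicted and ground-truth occupancies evaluated at the
    same query points; averaging over test samples gives the mean IoU."""
    pred, gt = occ_pred >= threshold, occ_gt >= threshold
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / max(union, 1)
\end{verbatim}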
\subsubsection{Baselines} \label{subsubsection:baselines} As Im2Hands is the first neural implicit function for two interacting hands, we compare our method to (1) the existing \emph{single-hand} reconstruction method using \emph{implicit} representation~\cite{karunratanakul2021skeleton} and (2) \emph{two-hand} reconstruction methods using \emph{mesh} representation~\cite{li2022interacting, zhang2021interacting}. \vspace{0.5\baselineskip} \noindent \textbf{Implicit Single-Hand Reconstruction Methods.} We consider HALO~\cite{karunratanakul2021skeleton}, which is a neural implicit single-hand representation driven by 3D keypoints. As HALO models a hand shape only using 3D keypoints, we modify HALO to additionally leverage an input RGB image by feeding pixel-aligned features~\cite{saito2019pifu} together with the other inputs to the part occupancy functions -- to make a fairer comparison to Im2Hands by using the same input data domains (see Section~\ref{subsec:representation_power}). Although LISA~\cite{corona2022lisa} is another implicit single-hand representation that takes both an RGB image and 3D keypoints, we do not make a direct comparison to LISA, as it requires more input data domains (e.g., foreground masks, multi-view RGB images) than Im2Hands. \vspace{0.5\baselineskip} \noindent \textbf{Mesh-Based Two-Hand Reconstruction Methods.} We compare our method to IntagHand~\cite{li2022interacting} and Two-Hand-Shape-Pose~\cite{zhang2021interacting}, which are the state-of-the-art methods on interacting two-hand shape reconstruction. These methods reconstruct a fixed-topology mesh from an RGB image by using dense vertex correspondence~\cite{li2022interacting} or statistical model parameter annotations~\cite{zhang2021interacting} for training. Since they are designed for \emph{single-image} two-hand reconstruction and do not take 3D hand keypoints as input, we also evaluate our model on image-based reconstruction by using 3D hand keypoints predicted from an image (see Section~\ref{subsec:single_image_reconstruction}). \subsection{Reconstruction From Images and Keypoints} \label{subsec:representation_power} In this section, we examine the representation power of Im2Hands given an input RGB image paired with the ground truth 3D hand keypoints (e.g., obtained from a sensor). In Table~\ref{table:representation_power}, we quantitatively compare our reconstruction results to the baseline methods. In the fifth row, note that we also compare our method to a re-implemented version of HALO~\cite{karunratanakul2021skeleton} that takes the same input data domains as ours (please refer to Section~\ref{subsubsection:baselines}). In the table, Im2Hands outperforms all the other methods by effectively leveraging an RGB image and hand keypoints to model shape- and pose-dependent deformations, respectively. \input{tables/table_representation_power} \subsection{Reconstruction From Single Images} \label{subsec:single_image_reconstruction} In this section, we evaluate Im2Hands on single-image two-hand reconstruction. To this end, we estimate two-hand shapes using 3D hand keypoints predicted from an input image via an off-the-shelf two-hand keypoint estimation method~\cite{li2022interacting, fan2021learning} and the proposed keypoint refinement module ($\mathcal{K}$). Note that the goal of this experiment is to evaluate each method in a setting where \emph{no ground truth 3D hand keypoints} are available. Thus, we disable the rescaling of the reconstructed hand joints and shapes performed by some of the existing methods~\cite{li2022interacting, zhang2021interacting} using the scale ratio calculated from a subset of the \emph{ground truth} 3D keypoints at test time. For those methods~\cite{li2022interacting, zhang2021interacting} that require such rescaling, we use the mean scale of the hands in the training set of InterHand2.6M~\cite{moon2020interhand2} to perform a fair comparison. In Table~\ref{table:joint_results}, we first evaluate the effectiveness of our keypoint refinement module on noisy two-hand keypoints predicted by DIGIT~\cite{fan2021learning} and IntagHand~\cite{li2022interacting}.
Our keypoint refinement module successfully alleviates input keypoint errors; in particular, it is most effective when the input keypoint error is large. \input{tables/table_joints} We now report our two-hand shape reconstruction results from single images. In Table~\ref{table:image_results}, our method achieves state-of-the-art results in image-based interacting two-hand reconstruction on InterHand2.6M~\cite{moon2020interhand2}. We would like to re-emphasize that the baselines originally designed for image-based two-hand reconstruction~\cite{li2022interacting, zhang2021interacting} (1) can produce only fixed-resolution meshes and (2) require dense vertex correspondences or statistical model parameter annotations for training, while our method outperforms them without imposing such constraints. Also, note that most of the existing articulated implicit functions (e.g., \cite{karunratanakul2021skeleton, deng2020nasa, corona2022lisa}) are typically evaluated using \emph{noiseless} keypoints or skeletons. In contrast, our method is experimentally demonstrated to achieve competitive results in a setting where \emph{no ground truth keypoints} are available -- by leveraging the proposed keypoint and two-hand shape refinement modules. \input{tables/table_image} Our qualitative comparison of image-based two-hand reconstruction is also shown in Figure~\ref{fig:main_qualitative_results}. Note that HALO~\cite{karunratanakul2021skeleton} produces non-smooth shapes due to the lack of shape or keypoint refinement steps given the noisy keypoint observation. IntagHand~\cite{li2022interacting} fails to properly model finger contacts (\emph{rows 1 and 3}) and produces shape penetration (\emph{row 2}). Our method generates more plausible two-hand shapes that also align well with the input image. \subsection{Generalizability Test} \label{subsec:generalizability_test} In Figure~\ref{fig:generalizability_test}, we additionally show our qualitative results on the RGB2Hands~\cite{wang2020rgb2hands} and EgoHands~\cite{bambach2015lending} datasets. For two-hand keypoint estimation, we use DIGIT~\cite{fan2021learning} and IntagHand~\cite{li2022interacting} for RGB2Hands and EgoHands, respectively. Note that these reconstruction examples were directly generated by Im2Hands trained only on the InterHand2.6M~\cite{moon2020interhand2} dataset, demonstrating the generalization ability of Im2Hands. For more detailed setups for this experiment, please refer to our supplementary section. \begin{figure}[!h] \begin{center} \includegraphics[width=0.47\textwidth]{figures/fig_rgb2hands.pdf} \caption{\textbf{Generalizability test on RGB2Hands~\cite{wang2020rgb2hands} and EgoHands~\cite{bambach2015lending}.} Top 2 rows are samples from RGB2Hands~\cite{wang2020rgb2hands} and bottom 2 rows are samples from EgoHands~\cite{bambach2015lending}.} \label{fig:generalizability_test} \vspace{-1.6\baselineskip} \end{center} \end{figure} \subsection{Ablation Study} \label{subsec:ablation_study} In this section, we perform an ablation study to investigate the effectiveness of the major modules (i.e., $\mathcal{K}$, $\mathcal{I}$, and $\mathcal{R}$) of Im2Hands using 3D hand keypoints predicted by DIGIT~\cite{fan2021learning}. In Table~\ref{table:ablation_study_refinement}, our full model outperforms all the other model variants. Please refer to the supplementary section for ablation study results with respect to more detailed model variations (e.g.
models after removing \emph{each} of the proposed components inside $\mathcal{I}$ and $\mathcal{R}$). \input{tables/table_refinement_ablation.tex} \section{Introduction} \label{sec:intro} Humans use hand-to-hand interaction in everyday activities, which makes modeling 3D shapes of two interacting hands important for various applications (e.g., human-computer interaction, robotics, and augmented or virtual reality). However, the domain of two-hand shape reconstruction remains relatively under-explored, while many existing studies have put efforts into \emph{single-hand} reconstruction from RGB~\cite{ge20193d, kulon2020weakly, zhou2020monocular, lin2021end, baek2019pushing, baek2020weakly, boukhayma20193d}, depth~\cite{wan2020dual}, or sparse keypoints~\cite{karunratanakul2021skeleton, zhou2020monocular}. These single-hand methods are not effective when directly applied to interacting two-hand reconstruction, since the two-hand setting introduces additional challenges, including inter-hand collisions and mutual occlusions. Recently, a few learning-based methods for two-hand shape reconstruction~\cite{li2022interacting, zhang2021interacting, rong2021monocular} have been proposed following the release of the large-scale interacting hand dataset (i.e., InterHand2.6M~\cite{moon2020interhand2}). Two-Hand-Shape-Pose~\cite{zhang2021interacting} and IHMR~\cite{rong2021monocular} reconstruct two hands by estimating MANO~\cite{romero2017embodied} parameters, which are later mapped to triangular hand meshes using a pre-defined statistical model (i.e., MANO). IntagHand~\cite{li2022interacting} directly regresses a fixed number of mesh vertex coordinates using a graph convolutional network (GCN). These methods mainly model the shape of two interacting hands based on a low-resolution mesh representation with the fixed topology of MANO (please refer to Figure~\ref{fig:teaser_image}). In this paper, we present \textbf{Im}plicit \textbf{Two} \textbf{Hands} (Im2Hands), the first neural implicit representation of two interacting hands. Unlike the existing mesh-based two-hand reconstruction methods, Im2Hands can capture the fine-grained geometry of two interacting hands by learning a \emph{continuous} 3D occupancy field. Im2Hands (1) produces two-hand meshes with an arbitrary resolution, (2) does not require dense vertex correspondences or statistical model parameter annotations for training, and (3) learns output shapes with precise hand-to-hand and hand-to-image alignment. As two interacting hands are highly articulated objects, we take inspiration from recent neural \emph{articulated} implicit functions~\cite{deng2020nasa, karunratanakul2021skeleton, corona2022lisa, noguchi2021neural, saito2021scanimate, mihajlovic2022coap} that learn an implicit geometry in the object canonical space computed from an input pose observation. Our two-hand articulated implicit function is also driven by input pose and shape observations, which are represented as sparse 3D keypoints and an RGB image, respectively. To effectively handle the shape complexity and interaction context between two hands, Im2Hands consists of two novel attention-based modules responsible for initial hand occupancy estimation and context-aware two-hand occupancy refinement, respectively. The initial occupancy estimation network first predicts the articulated occupancy volume of each hand in the canonical space.
Given a 3D query point, it (1) performs query canonicalization using the keypoint encoder of HALO~\cite{karunratanakul2021skeleton} to effectively capture \emph{pose-dependent hand deformation} and (2) extracts a hand shape feature using our novel query-image attention module to model \emph{shape-dependent hand deformation}. As it is non-trivial to model two-hand interaction while learning in the canonical space defined for each hand, our context-aware occupancy refinement network then modifies the initial two-hand occupancy in the original posed space to enhance hand-to-hand coherency. Given the initial two-hand shape represented as anchored point clouds, it uses query-anchor attention to learn a refined two-hand occupancy in a context-aware manner. Furthermore, we consider a practical scenario of two-hand reconstruction using Im2Hands from single images, where no ground truth keypoints are observed as inputs to our method. To this end, we introduce an \emph{optional} input keypoint refinement network to enable more robust two-hand shape reconstruction by alleviating errors in the input 3D keypoints \emph{predicted} from an off-the-shelf image-based two-hand keypoint estimation method (e.g., \cite{fan2021learning, moon2020interhand2, kim2021end, li2022interacting, zhang2021interacting}). Overall, our main contributions are summarized as follows: \begin{itemize} \item We introduce Im2Hands, the first neural implicit representation of two interacting hands. Im2Hands reconstructs resolution-independent geometry of two hands with high hand-to-hand and hand-to-image coherency. \item To effectively learn an occupancy field of the complex two-hand geometries, we propose two novel attention-based modules that perform (1) initial occupancy estimation in the canonical space and (2) context-aware occupancy refinement in the original posed space, respectively. We additionally introduce an \emph{optional} keypoint refinement module to enable more robust two-hand shape estimation using a single image input. \item We demonstrate the effectiveness of Im2Hands in comparison to the existing (1) two-hand mesh-based and (2) single-hand implicit function-based reconstruction methods, where Im2Hands achieves state-of-the-art results in interacting two-hand reconstruction. \end{itemize} \section{Related Work} \label{sec:related_work} \noindent \textbf{Single-Hand Reconstruction.} Methods for single-hand reconstruction have been actively investigated in the past decades. Most existing deep learning-based approaches either reconstruct hand poses represented as 3D keypoints~\cite{ge2016robust, iqbal2018hand, moon2018v2v, zimmermann2017learning, cai2018weakly, simon2017hand, spurr2020weakly, wu2005analyzing}, estimate MANO parameters~\cite{romero2017embodied, baek2019pushing, baek2020weakly}, or directly regress mesh vertex coordinates~\cite{lin2021end, ge20193d, kulon2020weakly, wan2020dual}. Inspired by the recent success of implicit representations in modeling human bodies~\cite{deng2020nasa, noguchi2021neural, saito2021scanimate, mihajlovic2022coap}, a few recent works~\cite{karunratanakul2021skeleton, corona2022lisa} also employ neural implicit functions for single-hand reconstruction. Compared to the existing methods based on mesh representations, these implicit function-based methods~\cite{karunratanakul2021skeleton, corona2022lisa} can produce fine-grained hand geometry in a resolution-independent manner.
\noindent \textbf{Interacting Hand-Object Reconstruction.} Similar to single-hand reconstruction, most of the existing methods~\cite{hasson2019learning, hasson2020leveraging, baek2020weakly, doosti2020hope, rhoi2020} on hand-object reconstruction adopt MANO topology-based mesh representations to model a hand shape. A few recent works~\cite{karunratanakul2020grasping, karunratanakul2021skeleton} consider neural implicit representations to model hand-objects, but they often constrain their interaction based on contacts -- which do not \emph{necessarily} occur in hand-to-hand interactions~\cite{zhang2021interacting}. In addition, these methods mainly consider a rigid object in the interaction, which makes them ineffective when directly applied to articulated two-hand reconstruction. \noindent \textbf{Interacting Two-Hand Reconstruction.} Compared to single-hand or hand-object reconstruction, two-hand reconstruction is more challenging, as more complex occlusions and deformations occur from the interaction of two articulated hands. For modeling two interacting hands, there have been methods recently proposed for two-hand \emph{pose} reconstruction from RGB images~\cite{moon2020interhand2, kim2021end, fan2021learning, li2022interacting, zhang2021interacting}, depth images~\cite{mueller2019real, taylor2017articulated}, or RGB-D video sequences~\cite{kyriazis2014scalable, oikonomidis2012tracking}. Yet, there are only a few methods that can directly reconstruct the \emph{dense surface} of two closely interacting hands~\cite{li2022interacting, zhang2021interacting, rong2021monocular, mueller2019real}, which is more challenging and important for reasoning about hand-to-hand interactions. Mueller \emph{et al.}~\cite{mueller2019real} propose an energy minimization framework to fit MANO~\cite{romero2017embodied} parameters to the input depth image of two interacting hands. Two-Hand-Shape-Pose~\cite{zhang2021interacting} introduces pose-aware attention and context-aware cascaded refinement modules to directly regress two-hand MANO parameters from an RGB image. IntagHand~\cite{li2022interacting} proposes an attention-based graph convolutional network (GCN) for two-hand vertex regression from an RGB image. These recent deep learning-based frameworks~\cite{zhang2021interacting, li2022interacting, rong2021monocular, wang2020rgb2hands} have shown that (1) an attention mechanism to model non-local interactions and (2) context-aware shape refinement steps are effective in two-hand reconstruction, which has inspired the design of our two-hand occupancy function. However, compared to these existing methods, our occupancy-based method can learn resolution-independent hand surfaces with better image-shape alignment, as our output space (i.e., occupancy field) itself is more directly aligned with the input image space. \begin{figure*}[!t] \begin{center} \includegraphics[width=0.98\textwidth, height=0.48\textwidth]{figures/fig_overall_architecture.pdf} \end{center} \vspace{-\baselineskip} \caption{\textbf{Architecture overview.} Given an RGB image and coarse 3D two-hand keypoints, our method estimates two-hand occupancy volumes via (1) initial hand occupancy estimation and (2) two-hand occupancy refinement. In a nutshell, the initial hand occupancy network uses query-image attention and the HALO~\cite{karunratanakul2021skeleton} encoder to learn per-hand occupancy volumes in the canonical spaces.
Then, the two-hand occupancy refinement network encodes the initial per-hand occupancy volumes into anchored features, which are then used to provide two-hand context information for refined occupancy estimation in the original posed space.} \label{fig:architecture_overview} \vspace{-\baselineskip} \end{figure*} \noindent \textbf{Neural Articulated Implicit Representation.} Many existing methods to model articulated objects (e.g., human bodies, hands) use neural \emph{articulated} implicit representations~\cite{deng2020nasa, karunratanakul2021skeleton, corona2022lisa, noguchi2021neural, saito2021scanimate, mihajlovic2022coap}, which directly condition an implicit geometry on the object articulation. While many methods~\cite{deng2020nasa, noguchi2021neural, saito2021scanimate, mihajlovic2022coap} mainly demonstrate their results on human body modeling, there are only a few studies that explore the effectiveness of implicit representations for hands. LISA~\cite{corona2022lisa} proposes an articulated VolSDF~\cite{yariv2021volume} to model the shape and appearance of single hands, but it assumes that various ground truth inputs (i.e., multi-view RGB image sequences, 3D bone transformations and foreground masks) are available and requires model optimization for inference. HALO~\cite{karunratanakul2021skeleton} proposes a single-hand articulated occupancy function driven by 3D keypoints by incorporating a novel canonicalization layer -- eliminating the need for the ground truth 3D bone transformations used in most of the articulated implicit functions~\cite{deng2020nasa, corona2022lisa, noguchi2021neural, saito2021scanimate, mihajlovic2022coap}. However, HALO models pose-independent shape variations only with hand bone lengths, which may be a strong assumption. We take inspiration from HALO to condition our two-hand occupancy on pose represented as sparse 3D keypoints, but we further improve HALO by conditioning shape variations using an RGB image, which is a more direct form of shape observation than bone lengths. \section{Im2Hands: Implicit Two-Hand Function} \label{sec:method} Im2Hands is a neural occupancy representation of two interacting hands. Our two-hand occupancy network can be formally defined as: \vspace{-0.1\baselineskip} \begin{equation} \mathcal{O} (x\, |\, \alpha,\, \beta) \rightarrow [o_l,\, o_r], \end{equation} \vspace{-0.4\baselineskip} \noindent where $\mathcal{O}$ is a neural network with the learned weights. $\mathcal{O}$ maps an input 3D query point $x \in \mathbb{R}^3$ to occupancy probabilities $o_l,\, o_r \in [0, 1]$ for the left and right hands, conditioned on a shape observation $\alpha$ and a pose observation $\beta$, which are represented as an RGB image $I \in \mathbb{R}^{w \times h \times 3}$ and sparse 3D two-hand keypoints $J = \left[ {J_l \atop J_r} \right] \in \mathbb{R}^{42 \times 3}$, respectively. One straightforward approach to design a neural implicit function for two hands would be to directly apply the existing articulated occupancy function for a \emph{single hand} (i.e., HALO~\cite{karunratanakul2021skeleton}) to both hands separately. This existing method can robustly model pose-dependent deformations via learning hand shapes in the canonical space, but it does not effectively capture other shape-dependent deformations (e.g., identity-dependent or soft tissue deformations).
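Concretely, such a per-hand baseline amounts to evaluating two independent, keypoint-only HALO models,
\begin{equation*}
[o_l,\, o_r] = \left[\, \mathcal{H}(x\,|\, J_l),\; \mathcal{H}(x\,|\, J_r) \,\right],
\end{equation*}
with $\mathcal{H}$ denoting the HALO occupancy function reviewed below in Equation~\ref{eq:halo}; neither hand's occupancy is then informed by the RGB image $I$ or by the geometry of the other hand.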
We thus design our initial hand occupancy estimation network (Section~\ref{subsec:initial_hand_occupancy_estimation}) that combines (1) HALO to robustly model \emph{pose-dependent deformation} by learning in the hand canonical space and (2) our novel query-image attention module to capture \emph{shape-dependent deformation} observed from an RGB image. However, it is difficult to handle two-hand interaction contexts while learning in the hand canonical spaces. Thus, we additionally propose a two-hand occupancy refinement network (Section~\ref{subsec:two-hand_occupancy_refinement}) to perform context-aware shape refinement of interacting two hands in the original posed space. Furthermore, we consider two-hand reconstruction from single images, where the input two-hand keypoints can be obtained by applying an off-the-shelf two-hand pose estimation method (e.g., \cite{fan2021learning, moon2020interhand2, kim2021end, li2022interacting, zhang2021interacting}). To this end, we additionally introduce an optional input keypoint refinement module (Section~\ref{subsec:input_keypoint_refinement}) to alleviate input keypoint errors and enable more robust shape estimation. In what follows, we explain each of the proposed components of Im2Hands in more detail. \subsection{Initial Hand Occupancy Estimation} \label{subsec:initial_hand_occupancy_estimation} Given a 3D query point $x$, our initial hand occupancy network $\mathcal{I}$ predicts initial occupancy probabilities for each hand $o^{i}_l,\, o^{i}_r \in [0,\, 1]$ conditioned on an RGB image $I$ and two-hand keypoints $J$. Our network design is partly based on HALO~\cite{karunratanakul2021skeleton}, which models an implicit \emph{single-hand} shape conditioned on 3D keypoints. \vspace{0.5\baselineskip} \noindent \textbf{Background: HALO~\cite{karunratanakul2021skeleton}.} HALO aims to learn an occupancy field of a single hand driven by 3D keypoints. To learn hand shapes in the canonical space, HALO proposes a novel algorithm to transform the input 3D keypoints to canonicalization matrices $\{\mathbf{T}_b\}_{\mathit{b=1}}^{B}$, where $\mathbf{T}_b$ is a transformation matrix to the canonical pose for hand bone $b$, and $B$ is the number of hand bones. Using the obtained canonicalization matrices, hand occupancy at query $x$ is modeled as: \vspace{-0.5\baselineskip} \begin{equation} \mathcal{H}(x\,|\, J) = \max\limits_{b = 1,\, ...,\, B}\{\bar{\mathcal{H}}_b(\mathbf{T}_bx,\, f_b^\phi, f_b^\omega)\}, \label{eq:halo} \end{equation} \noindent where $\bar{\mathcal{H}}_b$ is an MLP-based part occupancy network to learn the shape of hand bone $b$. Each part occupancy network takes a canonicalized query point $\mathbf{T}_{b}x$ along with a bone length shape feature $f_b^\phi$ and a pose feature $f_b^\omega$ for bone $b$. $f_b^\phi$ is extracted by an MLP encoder that takes a bone length vector $l \in \mathbb{R}^B$ computed from the input 3D keypoints. $f_b^\omega$ is obtained by another MLP encoder that takes global pose matrices $\{\mathbf{T}_{b}\}_{b=1}^B$ and a root translation vector $t \in \mathbb{R}^3$ as inputs. For more details about HALO, we kindly refer the reader to \cite{karunratanakul2021skeleton}. \vspace{0.5\baselineskip} \noindent \textbf{Our Initial Hand Occupancy Network.} While HALO is effective in modeling pose-dependent deformations via query point canonicalization (i.e.,
$\mathbf{T}_bx$) and pose feature extraction (i.e., $f^\omega_b$), its bone length shape feature (i.e., $f^\phi_b$) cannot model shape-dependent deformations that are not captured by bone lengths. Modeling such shape variations is especially important in interacting two-hand reconstruction, as soft tissue deformations commonly occur due to inter-hand contacts. Thus, we introduce an additional shape feature conditioned on an RGB image~$I$, which provides a minimal observation for such shape-dependent deformations. We propose a per-query shape feature $f_x^\phi$ conditioned on the image $I$ as follows: \begin{equation} f_x^\phi = \mathrm{MSA}([\mathrm{PosEnc}(x),\, \mathrm{ImgEnc}(I)]), \label{eq:query-image-att} \vspace{0.3\baselineskip} \end{equation} \noindent where $\mathrm{MSA}$ denotes a multi-headed self-attention module that takes a query feature $\mathrm{PosEnc}(x)$ and patch-wise image features $\mathrm{ImgEnc}(I)$. To be more specific, $\mathrm{PosEnc}$ is an MLP that performs positional embedding to map the input query coordinate $x$ to a query feature vector $f_x \in \mathbb{R}^{d}$. $\mathrm{ImgEnc}$ is an image encoder of a Vision Transformer~\cite{dosovitskiy2020image}, which maps the input image $I$ into patch-wise image features $f_I \in \mathbb{R}^{p \times d}$, where $p$ is the number of patches. Through this query-image attention, we can obtain a per-query shape feature $f_x^\phi$ contributed by the image patch regions that are more \emph{relevant} to the input query $x$. In Section~\ref{sec:experiments}, we also show that using this query-image attention yields better results than directly using a local image feature located at the projected query position (e.g., PIFu~\cite{saito2019pifu}). Finally, our initial per-hand occupancy is modeled by feeding our per-query shape feature $f_x^\phi$ along with the other inputs to the MLP-based part occupancy network $\bar{\mathcal{H}}_b$ in Equation~\ref{eq:halo}: \vspace{-0.6\baselineskip} \begin{equation} \mathcal{I}(x\,|\, I,\, J) = \max\limits_{b = 1,\, ...,\, B}\{\bar{\mathcal{H}}_b(\mathbf{T}_bx,\, f_b^\phi, f_x^\phi, f_b^\omega)\}. \end{equation} \noindent Note that our occupancy is estimated for each hand separately in this stage. We use a shared network for both hands, but we distinguish each hand by feeding the coarse 3D keypoints of each hand $J_{l}, J_{r} \in \mathbb{R}^{21 \times 3}$ separately as inputs to our network. As these keypoints are used to compute the canonicalization matrices and the pose feature, we can robustly learn our initial occupancy in the canonical space defined per hand. \subsection{Two-Hand Occupancy Refinement} \label{subsec:two-hand_occupancy_refinement} The previous step can robustly estimate per-hand occupancy volumes, but it does not effectively model the coherency between two hands, since each hand is learned in its own canonical space. As recent mesh-based two-hand reconstruction methods~\cite{zhang2021interacting, li2022interacting} have shown that (1) two hand shapes are correlated with each other and thus (2) context-aware two-hand shape refinement is effective, we additionally propose a two-hand occupancy refinement network $\mathcal{R}$ that refines the initial two-hand shape \emph{in the original posed space}. It estimates the refined two-hand occupancies $o_l,\, o_r \in [0, 1]$ at the query point $x$ conditioned on (1) the initial two-hand occupancy probabilities at $x$ estimated by $\mathcal{I}$ and (2) the input image $I$.
To condition our refined occupancy estimation on the interaction context, it is necessary to encode the initial geometry of two hands. To effectively learn features from the current two-hand shapes, we first represent them as point clouds $\mathcal{P}_{l}, \mathcal{P}_{r} \in \mathbb{R}^{n \times 3}$, which are sets of iso-surface points of each hand geometry. These points can be easily obtained by collecting query coordinates that are evaluated to be on the surface by our initial hand occupancy network: $\{ x\, |\, 0.5 - \epsilon \leq \mathcal{I}(x\, |\, I, J) \leq 0.5 + \epsilon\}$. We then encode each hand point cloud into a global latent vector and local latent vectors anchored in 3D space: \begin{equation} \{z_{s}, \mathcal{A}_{s} = \mathrm{PCEnc}(\mathcal{P}_{s}, I)\}_{s = l, r}, \label{eq:pcd_encoding} \end{equation} \noindent where $z_{s} \in \mathbb{R}^{z}$ is a global latent vector and $\mathcal{A}_{s} \in \mathbb{R}^{m\times(3+a)}$ is a set of $m$ anchor points (i.e., a subset of the input point cloud selected by farthest point sampling) with $a$-dimensional per-point features. In our point cloud encoding module $\mathrm{PCEnc}$, we first preprocess the input point cloud $\mathcal{P}_{s}$ into a feature cloud $\mathcal{F}_{s} \in \mathbb{R}^{n \times (3 + f)}$ to incorporate texture information observed from the input image $I$. To this end, we apply a simple encoder-decoder CNN to extract an image feature map $G \in \mathbb{R}^{w \times h \times f}$ and concatenate each point $p$ in $\mathcal{P}_s$ with the image feature vector located at the projected position of $p$ in $G$. We then feed our feature cloud to the encoder of AIR-Net~\cite{giebenhain2021air} that extracts global and locally anchored point cloud features using Point Transformer~\cite{zhao2021point}. Using the obtained point cloud features, we extract a latent code that encodes interacting two-hand shapes and image context as follows: \begin{equation} z_{c} = \mathrm{ContextEnc}(z_{l}, z_{r}, z_{I}). \label{eq:context_encoding} \end{equation} \noindent $\mathrm{ContextEnc}$ is an MLP that extracts the context feature, and $z_{I}$ is the global bottleneck feature from the image encoder-decoder previously used to extract $G$. Finally, we estimate our refined occupancy at the query point $x$ as follows: \vspace{-0.5\baselineskip} \begin{equation} \{o_{s} = \mathrm{PCDec}(x, o^i_s, \mathcal{A}_{s},\, z_{c}) \}_{s = l, r}, \label{eq:pcd_decoding} \end{equation} \noindent where $\mathrm{PCDec}$ is a point cloud decoder, for which we adopt a similar architecture to the decoder of AIR-Net~\cite{giebenhain2021air}. The original AIR-Net decoder predicts an occupancy probability given a single point cloud via vector cross attention between the query point $x$, local anchor features $\mathcal{A}_s$, and a global feature $z_s$ (please refer to \cite{giebenhain2021air} for more details). Our decoder instead (1) uses a global latent vector $z_c$ that encodes the context between the input image and the initial two-hand shapes predicted by $\mathcal{I}$ and (2) conditions the occupancy estimation on our initial occupancy probability by concatenating $o^i_s$ at $x$ to the input query coordinate.
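To make the data flow of this refinement stage concrete, the following is a minimal PyTorch-style sketch; the module interfaces ($\mathrm{PCEnc}$, $\mathrm{ContextEnc}$, $\mathrm{PCDec}$) are treated as placeholder callables and the tensor shapes are illustrative assumptions, not an exact excerpt of our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class TwoHandRefinement(nn.Module):
    """Sketch of the refinement stage R: encode both hand feature clouds,
    build a two-hand + image context code, and decode refined occupancies."""
    def __init__(self, pc_enc, context_enc, pc_dec):
        super().__init__()
        self.pc_enc = pc_enc            # AIR-Net-style point cloud encoder
        self.context_enc = context_enc  # MLP over (z_l, z_r, z_I)
        self.pc_dec = pc_dec            # AIR-Net-style occupancy decoder

    def forward(self, x, o_init_l, o_init_r, feat_cloud_l, feat_cloud_r, z_img):
        # x: (B, N, 3) query points; o_init_*: (B, N) initial occupancies;
        # feat_cloud_*: (B, n, 3 + f) image-featured iso-surface points.
        z_l, anchors_l = self.pc_enc(feat_cloud_l)
        z_r, anchors_r = self.pc_enc(feat_cloud_r)
        z_c = self.context_enc(torch.cat([z_l, z_r, z_img], dim=-1))
        # Condition the decoder on the initial occupancy by concatenating it
        # to the query coordinate, and decode each hand with the shared context.
        q_l = torch.cat([x, o_init_l.unsqueeze(-1)], dim=-1)
        q_r = torch.cat([x, o_init_r.unsqueeze(-1)], dim=-1)
        o_l = self.pc_dec(q_l, anchors_l, z_c)
        o_r = self.pc_dec(q_r, anchors_r, z_c)
        return o_l, o_r
\end{verbatim}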
In summary, our refined occupancy estimation is effectively conditioned both on (1) local latent descriptors $\mathcal{A}_{s}$ that encode information about the initial hand geometry, (2) a global latent descriptor $z_c$ that encodes the global context of two-hand interaction and the input image, and (3) our initial occupancy estimation $o^i_s$. \subsection{Input Keypoint Refinement} \label{subsec:input_keypoint_refinement} In this section, we further consider image-based two-hand reconstruction using Im2Hands, where no ground truth hand keypoints are available. To enable robust shape reconstruction from keypoints \emph{predicted} from an off-the-shelf image-based two-hand keypoint estimator (e.g., \cite{moon2020interhand2, kim2021end, fan2021learning, li2022interacting, zhang2021interacting}), we introduce an optional keypoint refinement module $\mathcal{K}$ that can alleviate noise in the input two-hand keypoints. Our keypoint refinement module is formulated as: $\mathcal{K}(J, I) \rightarrow J^r$, where $J, J^r \in \mathbb{R}^{42 \times 3}$ are initial and refined 3D two-hand keypoints, respectively. To be more specific, $\mathcal{K}$ is designed as: \vspace{-0.8\baselineskip} \begin{equation} \mathcal{K}(J, I) = \mathrm{MSA}([\mathrm{GCN}(\mathrm{KptEnc}(J)),\, \mathrm{ImgEnc}(I)]). \end{equation} \vspace{-0.8\baselineskip} \noindent In the above equation, we first extract input keypoint features by $\mathrm{KptEnc}$, which encodes the index and coordinate of each keypoint using a shared MLP. We then build a two-hand skeleton graph with initial node features set as the features extracted from the previous $\mathrm{KptEnc}$ function. Next, we feed this initial two-hand skeleton graph to a $\mathrm{GCN}$ to extract keypoint features that further encode information of the initial two-hand structure. Finally, we apply multi-headed self-attention (i.e., $\mathrm{MSA}$) between the keypoint features and image features -- similarly to the query-image attention in Equation~\ref{eq:query-image-att} -- to regress the refined two-hand keypoint positions. In Section~\ref{sec:experiments}, we experimentally demonstrate that $\mathcal{K}$ significantly helps to improve the quality of two-hand reconstruction from single images. \subsection{Loss Functions} \label{subsec:loss_functions} We now explain the loss functions used to train our two-hand occupancy network. \noindent \textbf{Initial Hand Occupancy Network ($\mathcal{I}$).} $\mathcal{I}$ is trained with an MSE loss that measures the deviation between the ground truth and the predicted occupancy probabilities. For training query point generation, we combine (1) points uniformly sampled in the hand bounding box and (2) points sampled on the hand surface and perturbed with Gaussian noise, following~\cite{karunratanakul2021skeleton, saito2019pifu}. \noindent \textbf{Two-Hand Occupancy Refinement Network ($\mathcal{R}$).} Similar to $\mathcal{I}$, $\mathcal{R}$ is trained with an MSE loss using the ground truth occupancy supervision. It is additionally trained with a penetration loss, which penalizes query positions where the refined occupancies of both hands are estimated to be occupied.
Given a set of training samples $\mathcal{X}$, a set of sampled training query points $\mathcal{P}$, and $\mathcal{R}_l$ and $\mathcal{R}_r$ that output the refined occupancy probabilities for the left and right hands, our penetration loss is defined as: \vspace{-0.6\baselineskip} \begin{equation} \mathcal{L}_{pen} = \frac{1}{|\mathcal{X}|}\sum_{(I, J) \in \mathcal{X}}\sum_{x \in \mathcal{P}}\mathcal{R}_{l}(x | I,\, J) \cdot \mathcal{R}_{r}(x | I,\, J), \end{equation} \noindent where the summation only considers query points with $\mathcal{R}_l(\cdot) > 0.5$ and $\mathcal{R}_r(\cdot) > 0.5$. We incorporate this loss function to avoid inter-penetration between the two hands. \noindent \textbf{Input Keypoint Refinement Network ($\mathcal{K}$).} $\mathcal{K}$ is trained with an MSE loss computed between the ground truth and the predicted two-hand keypoints. Due to the space limit, please refer to our supplementary section for training and network architecture details. \begin{figure*}[!t] \begin{center} \includegraphics[width=\textwidth]{figures/fig_img_recon} \caption{\textbf{Qualitative comparison of \emph{single-image} interacting two-hand reconstruction on InterHand2.6M~\cite{moon2020interhand2}.} We compare our results to HALO~\cite{karunratanakul2021skeleton} and IntagHand~\cite{li2022interacting}, where our method produces two-hand shapes with significantly better hand-to-image and hand-to-hand alignment. Ours also properly captures finger contacts (\emph{rows 1 and 3}) and avoids shape penetration (\emph{row 2}). For our method and HALO, we use the hand keypoints predicted by DIGIT~\cite{fan2021learning} to condition the articulated occupancy fields. Please see the supplementary section for more qualitative examples.} \label{fig:main_qualitative_results} \vspace{-1.4\baselineskip} \end{center} \end{figure*} \section{Conclusion and Future Work} We present Im2Hands, the first neural implicit representation of two interacting hands. To effectively model the interaction context between two articulated hands, we propose two modules for initial occupancy estimation in the hand canonical space and context-aware occupancy refinement in the original posed space, respectively. Furthermore, we introduce an optional input keypoint refinement module to enable robust shape reconstruction from single images. As a result, we achieve state-of-the-art results on two-hand reconstruction -- demonstrating the capability of producing resolution-independent, well-aligned-to-image, and penetration-free two-hand shapes. \vspace{0.5\baselineskip} \noindent \textbf{Limitations and Future Work.} As Im2Hands models two-hand articulated occupancy conditioned on the pose, the reconstruction accuracy depends on the quality of the input keypoints. We plan to further investigate ways to effectively learn both the pose and shape of two interacting hands in an end-to-end manner given a single image input.
Our two-hand occupancy network can be formally defined as: \vspace{-0.3\baselineskip} \begin{equation} \mathcal{O} (x\, |\, \alpha,\, \beta) \rightarrow [o_l,\, o_r], \end{equation} \vspace{-0.4\baselineskip} \noindent where $\mathcal{O}$ is a neural network with the learned weights. $\mathcal{O}$ maps an input 3D query point $x \in \mathbb{R}^3$ to occupancy probabilities for each side of hand $o_l,\, o_r \in [0, 1]$ conditioned on a shape observation $\alpha$ and a pose observation $\beta$, for which our occupancy network directly takes an RGB image $I \in \mathbb{R}^{w \times h \times 3}$ and sparse two-hand 3D keypoints $J = [J_l,\, J_r] \in \mathbb{R}^{42 \times 3}$, respectively. One straightforward approach to design a neural implicit function for two hands would be to directly apply the existing articulated occupancy function for \emph{single hand} (i.e., HALO~\cite{karunratanakul2021skeleton}) to both hands separately. This existing method can robustly model pose-dependent deformations via learning hand shapes in the canonical space, but it does not effectively capture other shape-dependent deformations (e.g., identity-dependent or soft tissue deformations). We thus design our initial hand occupancy estimation network (Section~\ref{subsec:initial_hand_occupancy_estimation}) that combines (1) HALO that robustly models \emph{pose-dependent deformation} by learning in the hand canonical space and (2) our novel query-image attention module to capture \emph{shape-dependent deformation} observed from an RGB image. However, such initial occupancy estimation alone is not sufficient in capturing two-hand interaction context due to the adaptation of the hand canonical space. Thus, we additionally propose a two-hand occupancy refinement network (Section~\ref{subsec:two-hand_occupancy_refinement}) to perform context-aware shape refinement of interacting two hands in the original posed space. Furthermore, we consider a practical scenario of two-hand reconstruction from single images, where the input two-hand keypoints can be obtained by applying an off-the-shelf two-hand pose estimation method (e.g.,\cite{fan2021learning, moon2020interhand2, kim2021end, li2022interacting, zhang2021interacting}). To this end, we additionally introduce an optional input keypoint refinement module (Section~\ref{subsec:input_keypoint_refinement}) to alleviate input keypoint errors to enable more robust shape estimation. In what follows, we explain each of the proposed components of Im2Hands in more detail. \subsection{Initial Hand Occupancy Estimation} \label{subsec:initial_hand_occupancy_estimation} Given a 3D query point $x$, our initial hand occupancy network $\mathcal{I}$ predicts initial occupancy probabilities for each hand $o^{i}_l,\, o^{i}_r \in [0,\, 1]$ conditioned on an RGB image $I$ and sparse two-hand keypoints $J$. Our network design is partly based on HALO~\cite{karunratanakul2021skeleton}, which models an implicit \emph{single-hand} conditioned on 3D keypoints. \vspace{0.5\baselineskip} \noindent \textbf{Background: HALO~\cite{karunratanakul2021skeleton}.} HALO aims to learn an occupancy field of a single hand driven by 3D keypoints. To learn hand shapes in the canonical space, HALO proposes a novel algorithm to transform the input 3D keypoints to canonicalization matrices $\{\mathbf{T}_b\}_{\mathit{b=1}}^{B}$, where $\mathbf{T}_b$ is a transformation matrix to the canonical pose for hand bone $b$, and $B$ is the number of hand bones. 
Using the obtained canonicalization matrices, hand occupancy at query $x$ is modeled as: \vspace{-0.5\baselineskip} \begin{equation} \mathcal{H}(x\,|\, J) = \max\limits_{b = 1,\, ...,\, B}\{\bar{\mathcal{H}}_b(\mathbf{T}_bx,\, f_b^\phi, f_b^\omega)\}, \label{eq:halo} \end{equation} \noindent where $\bar{\mathcal{H}}_b$ is an MLP-based part occupancy network to learn the shape of hand bone $b$. Each part occupancy network takes a canonicalized query point $\mathbf{T}_{b}x$ along with a shape feature $f_b^\phi$ and a pose feature $f_b^\omega$ for bone $b$. $f_b^\phi$ is extracted by an MLP encoder that takes a bone length vector $l \in \mathbb{R}^B$ computed from the input 3D keypoints. $f_b^\omega$ is obtained by another MLP encoder that takes global pose matrices $\{\mathbf{T}_{b}\}_{b=1}^B$ and a root translation vector $t \in \mathbb{R}^3$ as inputs. For more details about HALO, we kindly refer the reader to \cite{karunratanakul2021skeleton}. \vspace{0.5\baselineskip} \noindent \textbf{Our Initial Hand Occupancy Network.} While HALO is effective in modeling pose-dependent deformations via query point canonicalization (i.e. $\mathbf{T}_bx$) and pose feature extraction (i.e., $f^\omega_b$), its shape feature (i.e., $f^\phi_b$) cannot model pose-independent shape variations that cannot be described by bone lengths. Modeling such shape variations is especially important in interacting two-hand reconstruction, as soft tissue deformations commonly occur due to inter-hand contacts. Thus, we introduce an additional shape feature conditioned on an RGB image~$I$, which provides a minimal observation for such pose-independent deformations. We propose a per-query shape feature $f_x^\phi$ conditioned on the image $I$ as follows: \begin{equation} f_x^\phi = \mathrm{MSA}([\mathrm{PosEnc}(x),\, \mathrm{ImgEnc}(I)]), \label{eq:query-image-att} \vspace{0.3\baselineskip} \end{equation} \noindent where $\mathrm{MSA}$ denotes a multi-headed self-attention module that takes a query feature $\mathrm{PosEnc}(x)$ and patch-wise image features $\mathrm{ImgEnc}(I)$. To be more specific, $\mathrm{PosEnc}(\cdot)$ is an MLP to perform positional embedding that maps the input query coordinate $x$ to a query feature vector $f_x \in \mathbb{R}^{d}$. $\mathrm{ImgEnc}(\cdot)$ is an image encoder of Vision Transformer~\cite{dosovitskiy2020image}, which maps the input image $I$ into patch-wise image features $f_I \in \mathbb{R}^{p \times d}$, where $p$ is the number of patches. Through this query-image attention, we can obtain a per-query shape feature $f_x^\phi$ contributed by the features of more \emph{relevant} image patch regions to the input query $x$. In Section~\ref{sec:experiments}, we also show that using such query-image attention yields better results than directly using a local image feature located at the projected query position (e.g.,~\cite{saito2019pifu}) as $f_x^\phi$. Finally, our initial per-hand occupancy is modeled by feeding our per-query shape feature $f_x^\phi$ along with the other inputs to the MLP-based part occupancy network $\bar{\mathcal{H}}_b$ in Equation~\ref{eq:halo}: \vspace{-0.6\baselineskip} \begin{equation} \mathcal{I}(x\,|\, I,\, J) = \max\limits_{b = 1,\, ...,\, B}\{\bar{\mathcal{H}}_b(\mathbf{T}_bx,\, f_b^\phi, f_x^\phi, f_b^\omega)\}. \end{equation} \noindent Note that our occupancy is estimated for each hand separately in this stage. 
We use the same network architecture for both hands, but we distinguish each hand side by feeding coarse 3D keypoints of each hand $J_{l}, J_{r} \in \mathbb{R}^{21 \times 3}$ separately as inputs to our network. As these keypoints are used to compute the pose feature and the canonicalization matrices, we can robustly learn our initial occupancy in the canonical space defined per-hand. \subsection{Two-Hand Occupancy Refinement} \label{subsec:two-hand_occupancy_refinement} The previous step can robustly estimate per-hand occupancy volumes, but it does not effectively model the coherency between two hands due to the adaptation of the hand canonical spaces. As recent mesh-based two-hand reconstruction methods~\cite{zhang2021interacting, li2022interacting} have shown that (1) two hand shapes are correlated with each other and thus (2) context-aware two-hand shape refinement is effective, we additionally propose a two-hand occupancy refinement function $\mathcal{R}$ that refines the initial two-hand shape \emph{in the original posed space}. It estimates the refined two-hand occupancies $o_l,\, o_r \in [0, 1]$ at the query point $x$ conditioned on (1) the initial two-hand occupancy probabilities at $x$ estimated by $\mathcal{I}$ and (2) the input image $I$. To condition our refined occupancy estimation on the interaction context, it is necessary to encode the initial geometry of two hands. To effectively learn features from the current two hand shapes, we first represent them as point clouds $\mathcal{P}_{l}, \mathcal{P}_{r} \in \mathbb{R}^{n \times 3}$, which are sets of iso-surface points of each side of hand geometry. These points can be easily obtained from collecting query coordinates that are evaluated to be on surface by our initial hand occupancy network: $\{ x\, |\, 0.5 - \epsilon \leq \mathcal{I}(x\, |\, I, J) \leq 0.5 + \epsilon\}$. We then encode each hand point cloud into a global latent vector and local latent vectors anchored in 3D space: \begin{equation} \{z_{s}, \mathcal{A}_{s} = \mathrm{PCEnc}(\mathcal{P}_{s}, I)\}_{s = l, r}, \end{equation} \noindent where $z_{s} \in \mathbb{R}^{z}$ is a global latent vector, and $\mathcal{A}_{s} \in \mathbb{R}^{m\times(3+a)}$ is a set of $m$ number of anchor points (i.e., a subset of the input point cloud selected by farthest point sampling) with $a$-dimensional per-point features. In our point cloud encoding module $\mathrm{PCEnc}(\cdot)$, we first preprocess the input point cloud $\mathcal{P}_{s}$ into a feature cloud $\mathcal{F}_{s} \in \mathbb{R}^{n \times (3 + f)}$ to incorporate texture information observed from the input image $I$. To this end, we apply a simple encoder-decoder CNN to extract an image feature map $G \in \mathbb{R}^{w \times h \times f}$ and concatenate each point $p$ in $\mathcal{P}_s$ to an image feature vector located at the projected position of $p$ from $G$. We then feed our feature cloud to the encoder of AIR-Net~\cite{giebenhain2021air} that extracts global and local anchored features using Point Transformer~\cite{zhao2021point}. Using the obtained point cloud features, we extract a latent code that encodes interacting two-hand shapes and image context as follows: \begin{equation} z_{c} = \mathrm{ContextEnc}(z_{l}, z_{r}, z_{I}). \end{equation} \noindent $\mathrm{ContextEnc}(\cdot)$ is an MLP that extracts the context feature, and $z_{I}$ is the global bottleneck feature from the image encoder-decoder previously used to extract $G$. 
Finally, we estimate our refined occupancy at the query point $x$ as follows: \vspace{-0.5\baselineskip} \begin{equation} \{o_{s} = \mathrm{PCDec}(x, o^i_s, \mathcal{A}_{s},\, z_{c}) \}_{s = l, r}, \end{equation} \noindent where $\mathrm{PCDec}$ is a point cloud decoder, for which we adopt a similar architecture to the decoder of AIR-Net~\cite{giebenhain2021air}. The original AIR-Net decoder predicts an occupancy probability given a single point cloud via vector cross attention between the query point $x$ and local anchor features $\mathcal{A}_s$ and a global feature $z_s$ (please refer to \cite{giebenhain2021air} for more details). Our decoder instead (1) uses a global latent vector $z_c$ that encodes the context between the input image and the initial two hand shapes predicted by $\mathcal{I}$, and (2) conditions the occupancy estimation on our initial occupancy probability by concatenating $o^i_s$ at $x$ to the input query coordinate. In summary, our refined occupancy estimation is effectively conditioned both on (1) local latent descriptors $\mathcal{A}_{s}$ that encode information about the initial hand geometry and (2) a global latent descriptor $z_c$ that encodes the global context of two-hand interaction and the input image. \subsection{Input Keypoint Refinement} \label{subsec:input_keypoint_refinement} In this section, we further consider image-based two-hand reconstruction using Im2Hands, where no ground truth hand keypoints are available. To enable robust shape reconstruction from keypoints \emph{predicted} from an off-the-shelf image-based two-hand keypoint estimator (e.g., \cite{moon2020interhand2, kim2021end, fan2021learning, li2022interacting, zhang2021interacting}), we introduce an optional keypoint refinement module $\mathcal{K}$ that can alleviate noise in the input two-hand keypoints. Our keypoint refinement module is formulized as: $\mathcal{K}(J, I) \rightarrow J^r$, where $J, J^r \in \mathbb{R}^{42 \times 3}$ are initial and refined 3D two-hand keypoints, respectively. To be more specific, $\mathcal{K}$ is designed as: \vspace{-0.8\baselineskip} \begin{equation} \mathcal{K}(J, I) = \mathrm{MSA}([\mathrm{GCN}(\mathrm{KptEnc}(J)),\, \mathrm{ImgEnc}(I)]). \end{equation} \vspace{-0.8\baselineskip} \noindent In the above equation, we first extract input keypoint features (i.e., $\mathrm{KptEnc}(\cdot)$) by encoding the index and coordinate of each keypoint using a shared MLP. We then build a two-hand skeleton graph with initial node features set as features extracted from the previous $\mathrm{KptEnc}(\cdot)$ function. Next, we feed this initial two-hand skeleton graph to a $\mathrm{GCN}$ to extract keypoint features that further encode information of the initial two-hand structure. Finally, we apply multi-headed self-attention (i.e., $\mathrm{MSA}$) between the keypoint features and image features -- similarly to query-image attention in Equation~\ref{eq:query-image-att}. In Section~\ref{sec:experiments}, we experimentally demonstrate that $\mathcal{K}$ significantly helps improving the quality of two-hand reconstruction from single images. \subsection{Loss Functions} \label{subsec:loss_functions} We now explain loss functions to train our two-hand occupancy network. \noindent \textbf{Initial Hand Occupancy Network ($\mathcal{I}$).} $\mathcal{I}$ is trained using MSE loss that measures the deviation between the ground truth and the predicted occupancy probabilities. 
For training query point generation, we combine (1) points uniformly sampled in the hand bounding box and (2) points sampled on the hand surface added with Gaussian noise following~\cite{karunratanakul2021skeleton, saito2019pifu}. \noindent \textbf{Two-Hand Occupancy Refinement Network ($\mathcal{R}$).} Similar to $\mathcal{I}$, $\mathcal{R}$ is trained using MSE loss with the ground truth occupancy supervision. It is additionally trained with penetration loss, which penalizes the refined two-hand occupancy values that are estimated to be occupied in both hands at the same query position. Given $\mathcal{R}_l(\cdot)$ and $\mathcal{R}_r(\cdot)$ that output the refined occupancy probabilities for left and right hands respectively, our penetration loss is defined as: \vspace{-0.6\baselineskip} \begin{equation} \mathcal{L}_{pen} = \frac{1}{|\mathcal{X}|}\sum_{(I, J) \in \mathcal{X}}\sum_{x \in \mathcal{P}}\mathcal{R}_{l}(x | I,\, J) \cdot \mathcal{R}_{r}(x | I,\, J), \end{equation} \noindent where $\mathcal{R}_l(\cdot) > 0.5$ and $\mathcal{R}_r(\cdot) > 0.5$. We incorporate this loss function to avoid inter-penetration between two hands. \noindent \textbf{Input Keypoint Refinement Network ($\mathcal{K}$).} $\mathcal{K}$ is trained using MSE loss between the ground truth and the predicted two-hand keypoints. Due to the space limit, please refer to our supplementary section for training and network architecture details. \begin{figure*}[!t] \begin{center} \includegraphics[width=\textwidth]{figures/fig_img_recon} \caption{\textbf{Qualitative comparison of \emph{single-image} interacting two-hand reconstruction on InterHand2.6M~\cite{moon2020interhand2}.} We compare our results to HALO~\cite{karunratanakul2021skeleton} and IntagHand~\cite{li2022interacting}, where our method produces two-hand shapes with better hand-to-image and hand-to-hand alignment. For our method and HALO, we use the hand keypoints predicted by DIGIT~\cite{fan2021learning} to condition the articulated occupancy fields. Please see the supplementary section for more qualitative examples.} \label{fig:main_qualitative_results} \vspace{-1.2\baselineskip} \end{center} \end{figure*} \section{Im2Hands: Implicit Two-Hand Function} \label{sec:method} Im2Hands is a neural occupancy representation of two interacting hands. To effectively model highly articulated two-hand shapes, we take inspiration from neural \emph{articulated} implicit functions~\cite{deng2020nasa, karunratanakul2021skeleton, corona2022lisa, noguchi2021neural, saito2021scanimate, mihajlovic2022coap}, which typically condition an implicit function on shape and pose descriptors. Our two-hand occupancy network can be formally defined as: \vspace{-0.3\baselineskip} \begin{equation} \mathcal{O} (x\, |\, \alpha,\, \beta) \rightarrow [o_l,\, o_r], \end{equation} \vspace{-0.4\baselineskip} \noindent where $\mathcal{O}$ is a neural network with the learned weights. $\mathcal{O}$ maps an input 3D query point $x \in \mathbb{R}^3$ to occupancy probabilities for each side of hand $o_l,\, o_r \in [0, 1]$ conditioned on a shape descriptor $\alpha$ and a pose descriptor $\beta$, for which our occupancy network directly takes an RGB image $I \in \mathbb{R}^{w \times h \times 3}$ and sparse 3D keypoints $J \in \mathbb{R}^{42 \times 3}$, respectively. 
\section{Introduction}
\label{sec:intro}
Humans use hand-to-hand interaction in everyday activities, which makes modeling 3D shapes of two interacting hands important for various applications (e.g., human-computer interaction, robotics, and augmented or virtual reality). However, the domain of two-hand shape reconstruction remains relatively under-explored, while many existing studies have put effort into \emph{single-hand} reconstruction from RGB~\cite{ge20193d, kulon2020weakly, zhou2020monocular, lin2021end, baek2019pushing, baek2020weakly, boukhayma20193d}, depth~\cite{wan2020dual}, and sparse keypoints~\cite{karunratanakul2021skeleton, zhou2020monocular}. These single-hand-based methods are not effective when directly applied to interacting two-hand reconstruction, since two-hand interaction introduces additional challenges, including inter-hand collisions, mutual occlusions, and a larger solution space.
Recently, a few learning-based methods for two-hand shape reconstruction~\cite{li2022interacting, zhang2021interacting, rong2021monocular} have been proposed following the release of the large-scale interacting hand dataset InterHand2.6M~\cite{moon2020interhand2}. Zhang \emph{et al.}~\cite{zhang2021interacting} and Rong \emph{et al.}~\cite{rong2021monocular} reconstruct two hands by estimating MANO~\cite{romero2017embodied} parameters, which are then mapped to triangular hand meshes using the pre-defined statistical model (i.e., MANO). Li \emph{et al.}~\cite{li2022interacting} directly regress a fixed number of mesh vertex coordinates using a graph convolutional network (GCN). These methods mainly model the shape of two interacting hands based on a low-resolution mesh representation with the fixed topology of MANO (please refer to Figure~\ref{fig:teaser_image}).

In this paper, we present \textbf{Im}plicit \textbf{Two} \textbf{Hands} (Im2Hands), the first neural implicit representation of two interacting hands. Unlike existing mesh-based two-hand reconstruction methods, Im2Hands can capture the fine-grained geometry of two interacting hands by learning a \emph{continuous} 3D occupancy field. Im2Hands (1) produces two-hand meshes at an arbitrary resolution, (2) does not require dense vertex correspondences or statistical model parameter annotations for training, and (3) learns output shapes with precise hand-to-hand and hand-to-image alignment. As two interacting hands are highly articulated objects, we take inspiration from recent neural \emph{articulated} implicit functions~\cite{deng2020nasa, karunratanakul2021skeleton, corona2022lisa, noguchi2021neural, saito2021scanimate, mihajlovic2022coap} that learn an implicit geometry in the object canonical space computed from an input pose observation. Our two-hand articulated implicit function is also driven by pose and shape observations, which are represented as sparse 3D keypoints and an RGB image, respectively.

To effectively handle the shape complexity and interaction context between two hands, Im2Hands consists of two novel attention-based modules responsible for (1) initial occupancy estimation and (2) context-aware occupancy refinement. The initial occupancy estimation network learns per-hand occupancy volumes in the hand canonical space. It models \emph{pose-dependent hand deformation} using the canonicalization layer and the input keypoint encoder of HALO~\cite{karunratanakul2021skeleton}, and \emph{shape-dependent hand deformation} (e.g., identity-dependent deformation) using our novel query-image attention module. As learning in the canonical space defined for each hand alone is not sufficient to model two-hand interaction, we additionally propose a context-aware occupancy refinement network that refines the initial occupancy in the original posed space to enhance the coherency between the two hands. Given the initial two-hand shape represented as anchored point clouds, it uses query-anchor attention to learn a globally refined occupancy function in the posed space. Furthermore, we introduce an \emph{optional} hand keypoint refinement module for a single-image reconstruction scenario, where no ground truth keypoints are observed as inputs to our method.
The proposed joint-image attention-based module enables more robust two-hand shape reconstruction by alleviating errors in 3D hand keypoints \emph{predicted} from an off-the-shelf single-image two-hand keypoint estimation method (e.g., \cite{fan2021learning, li2022interacting, kim2021end}).

Overall, our main contributions are summarized as follows:
\begin{itemize}
\item We introduce Im2Hands, the first neural implicit representation of two interacting hands. Im2Hands reconstructs resolution-independent geometry of two hands with high hand-to-hand and hand-to-image coherency.
\item To effectively learn an occupancy field of the complex two-hand geometries, we propose two novel attention-based modules that perform (1) initial occupancy estimation in the canonical space and (2) context-aware occupancy refinement in the posed space, respectively. We additionally introduce an \emph{optional} keypoint refinement module to enable more robust two-hand shape estimation using a single image input.
\item We demonstrate the effectiveness of Im2Hands in comparison to the existing (1) two-hand mesh-based and (2) single-hand implicit function-based methods, where Im2Hands obtains state-of-the-art results in two-hand reconstruction.
\end{itemize}
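To summarize the interface implied by this design, the following PyTorch-style skeleton sketches the two-stage occupancy prediction described above. It is purely illustrative; module and argument names are hypothetical and it is not our exact implementation:

\begin{verbatim}
import torch.nn as nn

class Im2HandsSketch(nn.Module):
    # Illustrative skeleton: initial per-hand occupancy estimation followed by
    # context-aware two-hand occupancy refinement.
    def __init__(self, initial_net: nn.Module, refine_net: nn.Module):
        super().__init__()
        self.initial_net = initial_net  # per-hand occupancy in canonical space
        self.refine_net = refine_net    # two-hand refinement in posed space

    def forward(self, x, image, keypoints):
        # x: (B, N, 3) query points; image: (B, 3, H, W); keypoints: (B, 42, 3)
        occ_init = self.initial_net(x, image, keypoints)   # (B, N, 2): left/right
        occ_refined = self.refine_net(x, occ_init, image)  # (B, N, 2): left/right
        return occ_refined
\end{verbatim}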
\section{Experiments}
\label{sec:experiments}
In this section, we experimentally validate the effectiveness of Im2Hands. In Section~\ref{subsec:experimental_setting}, we describe our experimental setting. In Section~\ref{subsec:representation_power}, we quantitatively evaluate the representation power of Im2Hands in interacting two-hand reconstruction. We also demonstrate our results on \emph{single-image} reconstruction using Im2Hands in Section~\ref{subsec:single_image_reconstruction}. Finally, we show our qualitative results and ablation study results in Sections~\ref{subsec:qualitative_results} and \ref{subsec:ablation_study}, respectively.

\subsection{Experimental Setting}
\label{subsec:experimental_setting}

\subsubsection{Datasets and Evaluation Metrics}

\noindent
\textbf{Datasets.}
We mainly use the InterHand2.6M~\cite{moon2020interhand2} dataset -- the only interacting two-hand dataset with dense shape annotations -- for both quantitative and qualitative evaluation. To maintain consistency with the previous work~\cite{li2022interacting}, we only use interacting hand (IH) samples annotated as \emph{valid} hand type. The resulting dataset contains 366K training samples, 110K validation samples, and 261K test samples. For qualitative evaluation, we additionally demonstrate our results on the RGB2Hands~\cite{wang2020rgb2hands} and EgoHands~\cite{bambach2015lending} datasets, which contain RGB videos of two interacting hands without shape annotations. Our overall dataset configuration follows \cite{li2022interacting}; we kindly refer the reader to \cite{li2022interacting} for more dataset details.

\noindent
\textbf{Evaluation Metrics.}
For evaluating the quality of reconstructed two-hand shapes, we compute the mean Intersection over Union (IoU) and Chamfer L1-Distance (CD) between the predicted and the ground truth two-hand meshes. We also evaluate the accuracy of 3D hand keypoints after the proposed keypoint refinement step using Mean Per Joint Position Error (MPJPE).
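For reference, these metrics admit a very small implementation. The following sketch (assuming the predicted and ground truth meshes have been sampled into point sets, occupancies have been evaluated on a shared query set, and keypoints are ordered consistently; names are hypothetical) illustrates one common way to compute them:

\begin{verbatim}
import torch

def chamfer_l1(pred_pts, gt_pts):
    # pred_pts: (N, 3), gt_pts: (M, 3) points sampled from the two meshes.
    d = torch.cdist(pred_pts, gt_pts)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def occupancy_iou(pred_occ, gt_occ, thresh=0.5):
    # pred_occ, gt_occ: (N,) occupancy values at the same query points.
    pred_in, gt_in = pred_occ > thresh, gt_occ > thresh
    inter = (pred_in & gt_in).float().sum()
    union = (pred_in | gt_in).float().sum().clamp(min=1.0)
    return inter / union

def mpjpe(pred_joints, gt_joints):
    # pred_joints, gt_joints: (42, 3) two-hand keypoints.
    return (pred_joints - gt_joints).norm(dim=-1).mean()
\end{verbatim}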
\subsubsection{Compared Methods}
As Im2Hands is the first neural implicit function for two interacting hands, we compare our method to (1) an existing \emph{single-hand} reconstruction method using an \emph{implicit} representation~\cite{karunratanakul2021skeleton} and (2) \emph{two-hand} reconstruction methods using a \emph{mesh} representation~\cite{li2022interacting, zhang2021interacting}.

\vspace{0.5\baselineskip}
\noindent
\textbf{Implicit Single-Hand Reconstruction Methods.}
We consider HALO~\cite{karunratanakul2021skeleton}, which is a neural implicit single-hand representation driven by 3D keypoints. As HALO models a hand shape using 3D keypoints only, we modify HALO to additionally leverage an input RGB image by feeding pixel-aligned features~\cite{saito2019pifu} together with the other inputs to the part occupancy functions -- to make fairer comparisons to Im2Hands by matching the input data domains (see Section~\ref{subsec:representation_power}). Although LISA~\cite{corona2022lisa} is another implicit single-hand representation that takes both an RGB image and 3D keypoints to model pose- and shape-dependent deformations, we do not compare directly against LISA, as it requires a wider range of input data (e.g., foreground masks, multi-view RGB images) than Im2Hands.

\vspace{0.5\baselineskip}
\noindent
\textbf{Explicit Two-Hand Reconstruction Methods.}
We compare our method to IntagHand~\cite{li2022interacting} and Two-Hand-Shape-Pose~\cite{zhang2021interacting}, which are the state-of-the-art methods for interacting two-hand shape reconstruction. These methods reconstruct a fixed-topology mesh from an RGB image, using dense vertex correspondences~\cite{li2022interacting} or statistical model parameter annotations~\cite{zhang2021interacting} for training. Since they are designed for \emph{single-image} two-hand reconstruction and do not take 3D hand keypoints as input, we also evaluate our model in a single-image reconstruction scenario by using 3D hand keypoints predicted from an image -- to make fair comparisons (see Section~\ref{subsec:single_image_reconstruction}).

\vspace{0.5\baselineskip}
\noindent
\textbf{Two-Hand Keypoint Estimation Baselines.}
To evaluate Im2Hands in a \emph{single-image} two-hand reconstruction scenario, we leverage an off-the-shelf keypoint estimation method to predict two-hand keypoints from an input image. As two-hand keypoint estimation baselines, we consider DIGIT~\cite{fan2021learning} and IntagHand~\cite{li2022interacting}, whose official implementations are publicly available. Note that IntagHand is also proposed for two-hand \emph{shape} estimation; thus, we additionally consider it as a compared method for two-hand shape reconstruction. However, when using IntagHand as a keypoint estimation baseline in our single-image reconstruction experiments, we discard its shape predictions entirely and use the predicted keypoints only -- treating it as a pure two-hand keypoint estimation method. We emphasize that our two-hand occupancy function is agnostic to the two-hand keypoint estimation architecture and can be used in a plug-and-play manner.

\subsection{Representation Power}
\label{subsec:representation_power}
In this section, we examine the representation power of Im2Hands given an input RGB image paired with 3D hand keypoints (e.g., obtained from a sensor).
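Since Im2Hands predicts a continuous occupancy field, the meshes used in this evaluation can be extracted at an arbitrary grid resolution. A minimal sketch of this extraction step (assuming occupancies have already been queried on a dense regular grid; function names are hypothetical) is:

\begin{verbatim}
import numpy as np
from skimage import measure

def extract_mesh(occ_grid, grid_min, grid_max, level=0.5):
    # occ_grid: (R, R, R) occupancy probabilities sampled on a regular grid.
    verts, faces, normals, _ = measure.marching_cubes(occ_grid, level=level)
    # Map voxel coordinates back to the world-space bounding box.
    scale = (grid_max - grid_min) / (np.array(occ_grid.shape) - 1)
    verts = verts * scale + grid_min
    return verts, faces
\end{verbatim}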
In Table~\ref{table:representation_power}, we show the two-hand shape reconstruction error of Im2Hands in comparison to the existing explicit two-hand~\cite{li2022interacting, zhang2021interacting} and implicit single-hand~\cite{karunratanakul2021skeleton} reconstruction methods. Im2Hands is shown to outperform all compared methods by leveraging an RGB image and hand keypoints to effectively model shape- and pose-dependent deformations, respectively. In the fifth row of Table~\ref{table:representation_power}, we also compare our results to a re-implemented version of HALO~\cite{karunratanakul2021skeleton}, where it is modified to leverage an RGB image by taking pixel-aligned features~\cite{saito2019pifu} as an additional input to the occupancy functions -- to make fairer comparisons between HALO and Im2Hands by using the same input domains. In this experiment, Im2Hands is again shown to achieve higher reconstruction quality.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.5\textwidth]{cvpr2023-author_kit-v1_1-1/latex/figures/fig_rgb2hands.pdf}
\caption{\textbf{Qualitative results on in-the-wild images.} ???}
\label{fig:main_qualitative_results}
\vspace{-1.2\baselineskip}
\end{center}
\end{figure}
\input{tables/table_representation_power}
\subsection{Single-Image Reconstruction}
\label{subsec:single_image_reconstruction}
We also evaluate Im2Hands in a single-image two-hand reconstruction scenario to make fairer comparisons to the existing explicit two-hand reconstruction methods~\cite{li2022interacting, zhang2021interacting}. To this end, we estimate two-hand shapes using 3D hand keypoints predicted from an input image via an off-the-shelf two-hand keypoint estimation method~\cite{li2022interacting, fan2021learning} and the proposed keypoint refinement module. We would like to note that the goal of this experiment is to evaluate each of the compared methods in a setting where \emph{no ground truth 3D hand keypoints} are available. Thus, we also disable the rescaling of the reconstructed hand joints and shapes performed by some of the existing methods~\cite{li2022interacting, zhang2021interacting} using the scale ratio calculated from a subset of the \emph{ground truth} 3D keypoints. For those methods~\cite{li2022interacting, zhang2021interacting} that require such rescaling, we use the mean scale of the hands in the training set of InterHand2.6M~\cite{moon2020interhand2} to perform fair comparisons.
In Table~\ref{table:joint_results}, we first evaluate the effectiveness of our keypoint refinement module on noisy two-hand keypoints predicted by DIGIT \cite{fan2021learning} and IntagHand \cite{li2022interacting}. Our method is shown to be successful in alleviating input keypoint errors; it is especially effective when the error of the input keypoints is large.
\input{tables/table_joints}
We now report our two-hand shape reconstruction results from single images. In Table~\ref{table:image_results}, our method is shown to achieve state-of-the-art results in single-image interacting two-hand reconstruction on InterHand2.6M~\cite{moon2020interhand2}. We would also like to note that most of the existing articulated implicit functions (e.g., \cite{karunratanakul2021skeleton, deng2020nasa, corona2022lisa}) are typically evaluated using \emph{noiseless} keypoints or skeletons.
In contrast, our method is shown to be capable of achieving competitive results in a setting where \emph{no ground truth keypoints} are available -- by leveraging an off-the-shelf two-hand keypoint estimation method and the keypoint refinement module. We would also like to re-emphasize that the compared methods originally designed for single-image reconstruction~\cite{li2022interacting, zhang2021interacting} can produce only fixed-topology meshes with low resolution, whereas our method can produce resolution-independent two-hand shapes.
\input{tables/table_image}
\subsection{Qualitative Results}
\label{subsec:qualitative_results}
\textcolor{red}{To be done...}
\subsection{Ablation Study}
\label{subsec:ablation_study}
We perform an ablation study to investigate the effectiveness of each of our model components. In Table~\ref{table:ablation_study}, we first report the results of our full ablation study performed using the \emph{ground truth} 3D hand keypoints. In \emph{rows 2-4}, we evaluate the two-hand reconstruction results of our model after removing image conditioning (which results in the baseline HALO model~\cite{karunratanakul2021skeleton}), image-query attention (which results in the use of pixel-aligned features as in \cite{saito2019pifu}), and the proposed two-hand occupancy refinement module $\mathcal{R}$ (which results in $\mathcal{I}$). In \emph{rows 5-7}, we further examine the effects of removing each of the proposed components inside $\mathcal{R}$. It is shown that our full model achieves the best performance in terms of both evaluation metrics.
\input{tables/table_ablation}
In addition, we present our ablation study with respect to the major modules (i.e., $\mathcal{K}$, $\mathcal{I}$, and $\mathcal{R}$) of Im2Hands using the \emph{predicted} 3D hand keypoints in Table~\ref{table:ablation_study_refinement}. Our full model is again shown to produce more accurate two-hand reconstructions in comparison to the other versions.
\input{tables/table_refinement_ablation.tex}
{ "arxiv_id": "2302.14276", "language": "en", "timestamp": "2023-03-01T02:07:55", "url": "https://arxiv.org/abs/2302.14276", "yymm": "2302" }
\section*{Acknowledgements} \section{INTRODUCTION} Social learning~\cite{jaques2019social,ndousse2021emergent} agents analyze cues from direct observation of other agents (novice or expert) in the same environment to learn an action policy from others. However, observing expert actions may not be sufficient to coordinate with other agents. Rather, by learning to communicate, agents can better model the intent of other agents, leading to better coordination. In humans, explicit communication for coordination assumes a common communication substrate to convey abstract concepts and beliefs directly~\cite{mirsky2020penny}, which may not be available for new partners. To align complex beliefs, heterogeneous agents must learn a message policy that translates from one theory of mind~\cite{li2022theory} to another to synchronize coordination. Especially when there is complex information to process and share, new agent partners need to learn to communicate to work with other agents. Emergent communication studies the creation of artificial language. Often phrased as a Lewis game, speakers and listeners learn a set of tokens to communicate complex observations~\cite{lewis1969convention}. However, in multi-agent reinforcement learning (MARL), agents suffer from partial observability and non-stationarity (due to unaligned value functions)~\cite{papoudakis2019dealing}, which aims to be solved with decentralized learning through communication. In the MARL setup, agents, as speakers and listeners, learn a set of tokens to communicate observations, intentions, coordination, or other experiences which help facilitate solving tasks~\cite{karten2022sparse,karten2022inter}. Agents learn to communicate effectively through a backpropagation signal from their task performance~\cite{foerster2016learning, lowe2017multi, lazaridou2016multi, commnet, ic3net}. This has been found useful for applications in human-agent teaming~\cite{karten2022inter,marathe2018bidirectional,lake2019human,lazaridou2020emergent}, multi-robot navigation~\cite{benSparseDiscrete}, and coordination in complex games such as StarCraft II~\cite{samvelyan2019starcraft}. Communication quality has been shown to have a strong relationship with task performance~\cite{marlow2018does}, leading to a multitude of work attempting to increase the representational capacity by decreasing the convergence rates~\cite{EcclesBiases,MA_autoencoder,karten2022sparse,wang2020learning,tucker2022towards}. Yet these methods still create degenerate communication protocols~\cite{karten2022inter,karten2022sparse,benSparseDiscrete}, which are uninterpretable due to joined concepts or null (lack of) information, which causes performance degradation. In this work, we investigate the challenges of learning a messaging lexicon to prepare emergent communication for social learning (EC4SL) scenarios. We study the following hypotheses: \textbf{H1)} EC4SL will learn faster through structured concepts in messages leading to higher-quality solutions, \textbf{H2)} EC4SL aligns the policies of expert heterogeneous agents, and \textbf{H3)} EC4SL enables social shadowing, where an agent learns a communication policy while only observing an expert agent's action policy. By learning a communication policy, the agent is encouraged to develop a more structured understanding of intent, leading to better coordination. 
The setting is very realistic among humans and many computer vision and RL frameworks may develop rich feature spaces for a specific solo task, but have not yet interacted with other agents, which may lead to failure without alignment. We enable a compositional emergent communication paradigm, which exhibits clustering and informativeness properties. We show theoretically and through empirical results that compositional language enables independence properties among tokens with respect to referential information. Additionally, when combined with contrastive learning, our method outperforms competing methods that only ground communication on referential information. We show that contrastive learning is an optimal critic for communication, reducing sample complexity for the unsupervised emergent communication objective. In addition to the more human-like format, compositional communication is able to create variable-length messages, meaning that we are not limited to sending insufficiently compressed messages with little information, increasing the quality of each communication. In order to test our hypotheses, we show the utility of our method in multi-agent settings with a focus on teams of agents, high-dimensional pixel data, and expansions to heterogeneous teams of agents of varying skill levels. Social learning requires agents to explore to observe and learn from expert cues. We interpolate between this form of social learning and imitation learning, which learns action policies directly from examples. We introduce a 'social shadowing' learning approach where we use first-person observations, rather than third-person observations, to encourage the novice to learn latently or conceptually how to communicate and develop an understanding of intent for better coordination. The social shadowing episodes are alternated with traditional MARL during training. Contrastive learning, which works best with positive examples, is apt for social shadowing. Originally derived to enable lower complexity emergent lexicons, we find that the contrastive learning objective is apt for agents to develop internal models and relationships of the task through social shadowing. The idea is to enable a shared emergent communication substrate (with minimal bandwidth) to enable future coordination with novel partners. Our contributions are deriving an optimal critic for a communication policy and showing that the information bottleneck helps extend communication to social learning scenarios. In real-world tasks such as autonomous driving or robotics, humans do not necessarily learn from scratch. Rather they explore with conceptually guided information from expert mentors. In particular, having structured emergent messages reduces sample complexity, and contrastive learning can help novice agents learn from experts. Emergent communication can also align heterogeneous agents, a social task that has not been previously studied. \section{RELATED WORK} \subsection{Multi-Agent Signaling} Implicit communication conveys information to other agents that is not intentionally communicated~\cite{grupen2022multi}. Implicit signaling conveys information to other agents based on one's observable physical position~\cite{grupen2022multi}. Implicit signaling may be a form of implicit communication such as through social cues~\cite{jaques2019social,ndousse2021emergent} or explicit communication such as encoded into the MDP through ``cheap talk"~\cite{sokota2022communicating}. 
Unlike implicit signaling, explicit signaling is a form of positive signaling~\cite{li2021learning} that seeks to directly influence the behavior of other agents in the hopes that the new information will lead to active listening. Multi-agent emergent communication is a type of explicit signaling which deliberately shares information. Symbolic communication, a subset of explicit communication, seeks to send a subset of pre-defined messages. However, these symbols must be defined by an expert and do not scale to particularly complex observations and a large number of agents. Emergent communication aims to directly influence other agents with a learned subset of information, which allows for scalability and interpretability by new agents. \subsection{Emergent Communication} Several methodologies currently exist to increase the informativeness of emergent communication. With discrete and clustered continuous communication, the number of observed distinct communication tokens is far below the number permissible~\cite{discreteComm}. As an attempt to increase the emergent ``vocabulary'' and decrease the data required to converge to an informative communication ``language'', work has added a bias loss to emit distinct tokens in different situations~\cite{EcclesBiases}. More recent work has found that the sample efficiency can be further improved by grounding communication in observation space with a supervised reconstruction loss~\cite{MA_autoencoder}. Information-maximizing autoencoders aim to maximize the state reconstruction accuracy for each agent. However, grounding communication in observations has been found to easily satisfy these input-based objectives while still requiring a myriad more samples to explore to find a task-specific communication space~\cite{karten2022sparse}. Thus, it is necessary to use task-specific information to communicate informatively. This will enable learned compression for task completion rather than pure compression for input recovery. Other work aims to use the information bottleneck~\cite{tishby2015deep} to decrease the entropy of messages~\cite{wang2020learning}. In our work, we use contrastive learning to increase representation similarity with future goals, which we show optimally optimizes the Q-function for messages. \subsection{Natural Language Inspiration} The properties of the tokens in emergent communication directly affect their informative ability. As a baseline, continuous communication tokens can represent maximum information but lack human-interpretable properties. Discrete 1-hot (binary vector) tokens allow for a finite vocabulary, but each token contains the same magnitude of information, with equal orthogonal distance to each other token. Similar to word embeddings in natural language, discrete prototypes are an effort to cluster similar information together from continuous vectors~\cite{discreteComm}. Building on the continuous word embedding properties, VQ-VIB~\cite{tucker2022towards}, an information-theoretic observation grounding based on VQ-VAE properties~\cite{van2017neural}, uses variational properties to provide word embedding properties for continuous emergent tokens. Like discrete prototypes, they exhibit a clustering property based on similar information but are more informative. However, each of these message types determines a single token for communication. Tokens are stringed together to create emergent ``sentences''. 
\section{Preliminaries} We formulate our setup as a decentralized, partially observable Markov Decision Process with communication (Dec-POMDP-Comm). Formally, our problem is defined by the tuple, $\langle\mathcal{S},\mathcal{A},\mathcal{M},\mathcal{T},\mathcal{R},\mathcal{O},\Omega,\gamma \rangle$. We define $\mathcal{S}$ as the set of states, $\mathcal{A}^i \, , \, i\in[1,N]$ as the set of actions, which includes task-specific actions, and $\mathcal{M}^i$ as the set of communications for $N$ agents. $\mathcal{T}$ is the transition between states due to the multi-agent joint action space $\mathcal{T}: \mathcal{S} \times \mathcal{A}^1,...,\mathcal{A}^N \to \mathcal{S}$. $\Omega$ defines the set of observations in our partially observable setting. Partial observability requires communication to complete the tasks successfully. $\mathcal{O}^i: \mathcal{M}^1,...,\mathcal{M}^N \times \hat{\mathcal{S}} \to \Omega$ maps the communications and local state, $\hat{\mathcal{S}}$, to a distribution of observations for each agent. $\mathcal{R}$ defines the reward function and $\gamma$ defines the discount factor. \subsection{Architecture} The policy network is defined by three stages: Observation Encoding, Communication, and Action Decoding. The best observation encoding and action decoding architecture is task-dependent, i.e., using multi-layer perceptrons (MLPs), CNNs~\cite{lecun1995convolutional}, GRUs~\cite{chung2014empirical}, or transformer~\cite{vaswani2017attention} layers are best suited to different inputs. The encoder transforms observation and any sequence or memory information into an encoding $H$. The on-policy reinforcement learning training uses REINFORCE~\cite{williams1992simple} or a decentralized version of MAPPO~\cite{yu2021surprising} as specified by our experiments. Our work focuses on the communication stage, which can be divided into three substages: message encoding, message passing (often considered sparse communication), and message decoding. We use the message passing from~\cite{karten2022sparse}. For message decoding, we build on a multi-headed attention framework, which allows an agent to learn which messages are most important~\cite{graphMA}. Our compositional communication framework defines the message encoding, as described in section~\ref{sec:composition}. \subsection{Objective} Mutual information, denoted as $I(X;Y)$, looks to measure the relationship between random variables, \begin{equation*} \begin{aligned} I(X;Y) = \mathds{E}_{p(x,y)} \left[ \log\frac{p(x|y)}{p(x)} \right] = \mathds{E}_{p(x,y)} \left[ \log\frac{p(y|x)}{p(y)} \right] \end{aligned} \end{equation*} which is often measured through Kullback-Leibler divergence~\cite{kullback1997information}, $I(X;Y) = D_{KL} (p(x,y) || p(x) \otimes p(y))$. The message encoding substage can be defined as an information bottleneck problem, which defines a trade-off between the complexity of information (compression, $I(X,\hat{X})$) and the preserved relevant information (utility, $I(\hat{X},Y)$). The deep variational information bottleneck defines a trade-off between preserving useful information and compression~\cite{alemi2017deep,tishby2015deep}. We assume that our observation and memory/sequence encoder provides an optimal representation $H^i$ suitable for sharing relevant observation and intent/coordination information. We hope to recover a representation $Y^i$, which contains the sufficient desired outputs. 
In our scenario, the information bottleneck is a trade-off between the complexity of information $I(H^i; M^i)$ (representing the encoded information exactly) and representing the relevant information $I(M^{j \neq i}; Y^i) $, which is signaled from our contrastive objective. In our setup, the relevant information flows from other agents through communication, signaling a combination of the information bottleneck and a Lewis game. We additionally promote complexity through our compositional independence objective, $I(M_1^i;\hdots ; M_L^i | H^i) $. This is formulated by the following Lagrangian, \begin{align*} \mathcal{L}(\ p(m^i | h^i )\ ) =\ &\beta_u \hat{I} (M^{j \neq i}; Y^i)\ - \beta_c \hat{I}(H^i; M^i) \\&- \beta_I \hat{I} (M_1^i;\hdots ; M_L^i | H^i) \end{align*} where the bounds on mutual information $\hat{I}$ are defined in equations~\ref{eq:inde_info},~\ref{eq:input}, and~\ref{eq:contrastive}. Overall, our objective is, \begin{align*} J(\theta) = \max\limits_{\pi} \mathds{E} \left[ \sum_{t \in T} \sum_{i \in N} \gamma_t \mathcal{R}(s_t,a_t) + \mathcal{L}(\ p(m_t | h_t )\ ) \right] \\ \text{s.t.} (a_t, m_t, h_t) \sim \pi^i, s_t \sim \mathcal{T}(s_{t-1}) \end{align*} \section{Complexity through Compositional Communication}\label{sec:composition} We aim to satisfy the complexity objective, $I(H^i, M^i)$, through compositional communication. In order to induce complexity in our communication, we want the messages to be as non-random as possible. That is, informative with respect to the input hidden state $h$. In addition, we want each token within the message to share as little information as possible with the preceding tokens. Thus, each additional token adds \textit{only informative} content. Each token has a fixed length in bits $W$. The total sequence is limited by a fixed limit, $\sum_l^L W_l \leq S$, of $S$ bits and a total of $L$ tokens. We use a variational message generation setup, which maps the encoded hidden state $h$ to a message $m$; that is, we are modeling the posterior, $\pi_m^i (m_l|h)$. We limit the vocabulary size to $K$ tokens, $e_j \in \mathds{R}^D, j \in [1,K] \subset \mathds{N}$, where each token has dimensionality $D$ and $l \in [1,L] \subset \mathds{N}$. Each token $m_l$ is sampled from a categorical posterior distribution, \begin{equation*} \pi_m^i (m_l = e_k | h) = \begin{cases} 1 & \text{for } k = \argmin\limits_{j} || m_l - e_j ||_2 \\ 0 & \text{otherwise} \end{cases} \end{equation*} such that the message $m_l$ is mapped to the nearest neighbor $e_j$. A set of these tokens makes a message $m$. To satisfy the complexity objective, we want to use $m^i$ to well-represent $h^i$ and consist of independently informative $m^i_l$. \subsection{Independent Information} We derive an upper bound for the interaction information between all tokens. \begin{proposition} For the interaction information between all tokens, the following upper bound holds: $I(m_1; \hdots; m_L | h) \leq \mathds{E}_{h \sim p(h)} \left[ D_{KL} \left(q(\hat{m}|h) || \pi^i_m(m_1|h) \otimes \cdots \otimes \pi^i_m(m_L|h)\right) \right]$. \end{proposition} The proof is in Appendix~\ref{appx:proofs}. \begin{figure*}[!t] \centering \includegraphics[width=.75\textwidth]{fig/fig2.png} \caption{By using contrastive learning, our method seeks similar representations between the state-message pair and future states while creating dissimilar representations with random states. Thus satisfying the utility objective of the information bottleneck. 
The depicted agents are blind and cannot see other cars.}
\label{fig:fig2}
\end{figure*}
Since we want the mutual information to be minimized in our objective, we minimize,
\begin{equation}\label{eq:inde_info}
\begin{aligned}
&\hat{I} (m_1;\hdots ; m_L | h) =\\
&\mathds{E}_{h \sim p(h)} \left[ D_{KL} \left(q(\hat{m}|h) || \pi^i_m(m_1|h) \otimes \cdots \otimes \pi^i_m(m_L|h)\right) \right]
\end{aligned}
\end{equation}
\subsection{Input-Oriented Information}
In order to induce complexity in the compositional messages, we additionally want to minimize the mutual information $I(H; M)$ between the composed message $\hat{m}$ and the encoded information $h$. We derive an upper bound on the mutual information that we use as a Lagrangian term to minimize.
\begin{proposition}
For the mutual information between the composed message and encoded information, the following upper bound holds: $I(H; M) \leq \sum_l^L \mathds{E}_{h \sim p(h)} \left[ D_{KL} \left( q(m_l|h) || z(m_l) \right) \right]$.
\end{proposition}
The proof is in Appendix~\ref{appx:proofs}. Thus, we have our Lagrangian term,
\begin{equation}\label{eq:input}
\begin{aligned}
\hat{I}(H^i, M^i) = \sum_l^L \mathds{E}_{h \sim p(h)} \left[ D_{KL} \left( q(m_l|h) || z(m_l) \right) \right]
\end{aligned}
\end{equation}
Conditioning on the input or observation data is a decentralized training objective.
\subsection{Sequence Length}
Compositional communication necessitates an adaptive limit on the total length of the sequence.
\begin{corollary}\label{cor:redundant}
Repeat tokens, $w$, are redundant and can be removed.
\end{corollary}
Suppose one predicts two arbitrary tokens, $w_k$ and $w_l$. Given equation~\ref{eq:inde_info}, it follows that there is low or near-zero mutual information between $w_k$ and $w_l$. A trivial issue is that the message generator will predict every available token so as to follow the unique-token objective. Since the tokens are imbued with input-oriented information (equation~\ref{eq:input}), the predicted tokens will be based on relevant referential details. Thus, it follows that tokens containing irrelevant information will not be chosen. A useful optimization objective that follows from corollary~\ref{cor:redundant} is that one can use self-supervised learning with an end-of-sequence (EOS) token to limit the variable total length of compositional message sequences.
\begin{equation}\label{eq:seq_len}
H(m_{\texttt{EOS}}, m_l) = - \pi(m_{\texttt{EOS}}) \log(\pi(m_l))
\end{equation}
\subsection{Message Generation Architecture}
Now, we can define the pipeline for message generation. The idea is to create an architecture that can generate features to enable independent message tokens. We expand each compressed token into the space of the hidden state $h$ (a 1-layer linear expansion) since each token has a natural embedding in $\mathds{R}^{|h|}$. Then, we perform attention using a \texttt{softmin} to help minimize similarity with previous tokens and sample the new token from a variational distribution. See algorithm~\ref{alg:cap} for complete details. During execution, we can generate messages directly due to equation~\ref{eq:inde_info}, resolving any computation time lost from sequential compositional message generation.
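For concreteness, a minimal PyTorch-style sketch of this token-by-token generation loop is given below; it mirrors algorithm~\ref{alg:cap}, but the module layout, tensor shapes, and the elementwise form of the softmin attention are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalMessageGen(nn.Module):
    def __init__(self, hidden_dim, token_dim, num_tokens):
        super().__init__()
        self.num_tokens = num_tokens
        self.q_mlp = nn.Linear(hidden_dim, token_dim)   # Q_MLP
        self.v_mlp = nn.Linear(hidden_dim, token_dim)   # V_MLP
        self.k_mlp = nn.Linear(token_dim, token_dim)    # K_MLP over emitted tokens
        self.mu = nn.Linear(token_dim, token_dim)       # variational mean head
        self.log_sigma = nn.Linear(token_dim, token_dim)

    def forward(self, h):
        # h: (batch, hidden_dim) encoded observation / intent state
        q, v = self.q_mlp(h), self.v_mlp(h)
        tokens = [torch.zeros_like(q)]  # stands in for the zero-initialized message m
        for _ in range(self.num_tokens):
            # Summarize previously emitted tokens, as in mean(K, 1).
            k = self.k_mlp(torch.stack(tokens, dim=1)).mean(dim=1)
            # Softmin attention: down-weight directions covered by previous tokens.
            attn = F.softmax(-(q * k) / (q.size(-1) ** 0.5), dim=-1)
            h_hat = attn * v
            # Sample the next token from a diagonal Gaussian (reparameterized).
            mu, sigma = self.mu(h_hat), self.log_sigma(h_hat).exp()
            tokens.append(mu + sigma * torch.randn_like(sigma))
        return torch.stack(tokens[1:], dim=1)  # (batch, num_tokens, token_dim)
\end{verbatim}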
\begin{algorithm}[!t]
\caption{\texttt{Compositional Message Gen.}$(h_t)$}\label{alg:cap}
\begin{algorithmic}[1]
\STATE $T \gets \texttt{num\_tokens}$
\STATE $m = \textbf{0}$ \COMMENT{$T \times d_m$, $d_m \gets \texttt{token\_size}$}
\STATE $Q \gets \texttt{Q\_MLP}(h_t)$
\STATE $V \gets \texttt{V\_MLP}(h_t)$
\FOR{$i \gets 1 \text{ to } T$}
\STATE $K \gets \texttt{K\_MLP}(m)$
\STATE $\hat{h} = \texttt{softmin}(\frac{Q^\intercal \texttt{mean}(K,1)}{\sqrt{d_k}})^\intercal V$
\STATE $m_i \sim \mathcal{N}(\hat{h}; \mu, \sigma)$
\ENDFOR
\STATE \textbf{return} $m$
\end{algorithmic}
\end{algorithm}
\section{Utility through Contrastive Learning}
First, note that our Markov Network is as follows: $H^j \rightarrow M^j \rightarrow Y^i \leftarrow H^i$. We continue to denote the current agent by $i$ and any other agent by $j$, with $j \neq i$. We aim to satisfy the utility objective of the information bottleneck, $I(M^j; Y^i)$, through contrastive learning as shown in figure~\ref{fig:fig2}.
\begin{proposition}
Utility mutual information is lower bounded by the contrastive NCE-binary objective, $I(M,Y) \geq \log \sigma (f(s,m,s_f^+)) + \log (1-\sigma(f(s,m,s_f^-)))$.
\end{proposition}
The proof is in Appendix~\ref{appx:proofs}. This result shows a need for gradient information to flow backward across agents along communication edge connections.
\section{Experiments and Results}
We condition on inputs, especially rich information (such as pixel data), and task-specific information. When evaluating an artificial language in MARL, we are interested in referential tasks, in which communication is \textit{required} to complete the task. With regard to intent-grounded communication, we study ordinal tasks, which require coordination information between agents to complete successfully. Thus, we consider tasks with a team of agents to foster messaging that communicates coordination information that also includes their observations. To test \textbf{H1}, that structuring emergent messages enables lower complexity, we evaluate our methodology and analyze its input-oriented information and utility capabilities. Next, we analyze the ability of heterogeneous agents to understand differing communication policies (\textbf{H2}). Finally, we consider the effect of social shadowing (\textbf{H3}), in which agents solely learn a communication policy from an expert agent's action policy. We additionally analyze the role of offline reinforcement learning for emergent communication in combination with online reinforcement learning to further learn emergent communication alongside an action policy. We evaluate each scenario over 10 seeds.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\columnwidth]{fig/pascal_ex.png}
\caption{An example of two possible classes, person and horse, from a single observation in the Pascal VOC game.}
\label{fig:pascal_ex}
\end{figure}
\subsection{Environments}
\paragraph{Blind Traffic Junction} We consider a benchmark that requires both referential and ordinal capabilities within a team of agents. The blind traffic junction environment~\cite{ic3net} requires multiple agents to navigate a junction without any observation of other agents. Rather, they only observe their own location. Ten agents must coordinate to traverse through the lanes without colliding with agents within their lane or in the junction. Our training uses REINFORCE~\cite{williams1992simple}.
\paragraph{Pascal VOC Game} We further evaluate the complexity of compositional communication with a Pascal VOC~\cite{everingham2010pascal}. This is a two-agent referential game similar to the Cifar game~\cite{MA_autoencoder} but requires the prediction of multiple classes. During each episode, each agent observes a random image from the Pascal VOC dataset containing exactly two unique labels. Each agent must encode information given only the raw pixels from the original image such that the other agent can recognize the two class labels in the original image. An agent receives a reward of 0.25 per correctly chosen class label and will receive a total reward of 1 if both agents guess all labels correctly. See figure~\ref{fig:pascal_ex}. Our training uses heterogeneous agents trained with PPO (modified from MAPPO~\cite{yu2021surprising} repository). For simplicity of setup, we consider images with exactly two unique labels from a closed subset of size five labels of the original set of labels from the Pascal VOC data. Furthermore, these images must be of size $375 \times 500$ pixels. Thus, the resultant dataset comprised 534 unique images from the Pascal VOC dataset. \subsection{Baselines} To evaluate our methodology, we compare our method to the following baselines: (1) \texttt{no-comm}, where agents do not communicate; (2) \texttt{rl-comm}, which uses a baseline communication method learned solely through policy loss~\cite{ic3net}; (3) \texttt{ae-comm}, which uses an autoencoder to ground communication in input observations~\cite{MA_autoencoder}; (4) \texttt{VQ-VIB}, which uses a variational autoencoder to ground discrete communication in input observations and a mutual information objective to ensure low entropy communication~\cite{tucker2022towards}. \begin{table}[t!] \centering \caption{Beta ablation: Messages are naturally sparse in bits due to the complexity loss. Redundancy measures the capacity for a bijection between the size of the set of unique tokens and the enumerated observations and intents. Min redundancy is 1.0 (a bijection). Lower is better.} \begin{tabularx}{\columnwidth}{|m{.22\columnwidth}|m{.18\columnwidth} m{.2\columnwidth} m{.19\columnwidth} |} \specialrule{.2em}{.1em}{.1em} $\beta$ & Success & Message Size in Bits & Redundancy \\\hline 0.1 & 1.0 & 64 & 1.0 \\ 0.01 & .996 & 69.52 & 1.06 \\ 0.001 & .986 & 121.66 & 2.06 \\ 0 & .976 & 147.96 & 2.31 \\ non-compositional & .822 & 512 & 587 \\ \specialrule{.1em}{.05em}{.05em} \end{tabularx} \label{table:beta} \end{table} \subsection{Input-Oriented Information Results} We provide an ablation of the loss parameter $\beta$ in table~\ref{table:beta} in the blind traffic junction scenario. When $\beta = 0$, we use our compositional message paradigm without our derived loss terms. We find that higher complexity and independence losses increase sample complexity. When $\beta=1$, the model was unable to converge. However, when there is no regularization loss, the model performs worse (with no guarantees about referential representation). We attribute this to the fact that our independence criteria learns a stronger causal relationship. There are fewer spurious features that may cause an agent to take an incorrect action. In order to understand the effect of the independent concept representation, we analyze the emergent language's capacity for redundancy. A message token $m_l$ is redundant if there exists another token $m_k$ that represents the same information. 
With our methodology, the emergent `language' converges to the exact number of observations and intents required to solve the task. With a soft discrete threshold, the independent information loss naturally converges to a discrete number of tokens in the vocabulary. Our $\beta$ ablation in table~\ref{table:beta} yields a bijection between each token in the vocabulary and the possible emergent concepts, i.e., the enumerated observations and intents. Thus for $\beta = 0.1$, there is no redundancy. \paragraph{Sparse Communication} In corollary~\ref{cor:redundant}, we assume that there is no mutual information between tokens. In practice, the loss may only be near-zero. Our empirical results yield independence loss around $1e-4$. In table~\ref{table:beta}, the size of the messages is automatically compressed to the smallest size to represent the information. Despite a trivially small amount of mutual information between tokens, our compositional method is able to reduce the message size in bits by 2.3x using our derived regularization, for a total of an 8x reduction in message size over non-compositional methods such as $\texttt{ae-comm}$. Since the base unit for the token is a 32-bit float, we note that each token in the message may be further compressed. We observe that each token uses three significant digits, which may further compress tokens to 10 bits each for a total message length of 20 bits. \begin{figure}[!t] \centering \includegraphics[width=0.49\columnwidth]{fig/contrastive.png} \includegraphics[width=0.49\columnwidth]{fig/loss_analysis.png} \caption{\textbf{Blind Traffic Junction} Left: Our method uses compositional complexity and contrastive utility to outperform other baselines in terms of performance and sample complexity. The legend provides the mean $\pm$ variance of the best performance. Right: Top: success, contrastive, and complexity losses for our method. Right, Bottom: success, autoencoder loss for \texttt{ae-comm} with supervised pretraining. } \label{fig:contrastive} \label{fig:loss_anal} \end{figure} \subsection{Communication Utility Results} Due to coordination in MARL, grounding communication in referential features is not enough. Finding the communication utility requires grounding messages in ordinal information. Overall, figure~\ref{fig:contrastive} shows that our compositional, contrastive method outperforms all methods focused on solely input-oriented communication grounding. In the blind traffic junction, our method yields a higher average task success rate and is able to achieve it with a lower sample complexity. Training with the contrastive update tends to spike to high success but not converge, often many episodes before convergence, which leaves area for training improvement. That is, the contrastive update begins to find aligned latent spaces early in training, but it cannot adapt the methodology quickly enough to converge. The exploratory randomness of most of the early online data prevents exploitation of the high utility $f^+$ examples. This leaves further room for improvement for an adaptive contrastive loss term. \paragraph{Regularization loss convergence} After convergence to high task performance, the autoencoder loss increases in order to represent the coordination information. This follows directly from the information bottleneck, where there exists a tradeoff between utility and complexity. However, communication, especially referential communication, should have an overlap between utility and complexity. 
Thus, we should seek to make the complexity loss more convex. Our compositional communication complexity loss does not converge before task performance convergence. While the complexity loss tends to spike in the exploratory phase, the normalized value is very small. Interestingly, the method eventually converges as the complexity loss converges below a normalized 0.3. Additionally, the contrastive loss tends to decrease monotonically and converges after the task performance converges, showing a very smooth decrease. The contrastive $f^-$ loss decreases during training, which may account for success spikes prior to convergence. The method is able to converge after only a moderate decrease in the $f^+$ loss. This provides empirical evidence that the contrastive loss is an optimal critic for messaging. See figure~\ref{fig:loss_anal}.
\begin{figure}[!t]
\centering
\includegraphics[width=.75\columnwidth]{fig/pascal.png}
\caption{\textbf{Pascal VOC Game} Representing compositional concepts from raw pixel data in images to communicate multiple concepts within a single image. Our method significantly outperforms \texttt{ae-comm} and \texttt{no-comm} due to our framework being able to learn composable, independent concepts.}
\label{fig:pascal}
\end{figure}
\subsection{Heterogeneous Alignment Through Communication}
In order to test the ability of our methodology to align heterogeneous agents by learning higher-order concepts from high-dimensional data, we analyze the performance on the Pascal VOC game. We compare our methodology against \texttt{ae-comm} to show that concepts should consist of independent information derived directly from the task signal rather than compression to reconstruct inputs. That is, we show an empirical result on pixel data to verify the premise of the information bottleneck.
Our methodology significantly outperforms the observation-grounded \texttt{ae-comm} baseline, as demonstrated by figure~\ref{fig:pascal}. The \texttt{ae-comm} methodology, despite using autoencoders to learn observation-grounded communication, performs only slightly better than \texttt{no-comm}. On the other hand, our methodology is able to outperform both baselines significantly. It is important to note that, based on figure~\ref{fig:pascal}, our methodology is able to guess more than two of the four labels correctly across the two agents involved, while the baseline methodologies struggle to guess exactly two of the four labels consistently. This can be attributed to our framework being able to learn compositional concepts that are much more easily discriminated due to mutual independence.
\subsection{Social Shadowing}
Critics of emergent communication may point to the increased sample complexity due to the dual communication and action policy learning. In the social shadowing scenario, heterogeneous agents can learn to generate a communication policy without learning the action policy of the watched expert agents. To enable social shadowing, the agent alternates between a batch of traditional MARL (no expert) and (first-person) shadowing of an expert agent performing the task in its trajectory. The agent only uses the contrastive objective to update its communication policy during shadowing. In figure~\ref{fig:tj_social_teach}, the agent that performs social shadowing is able to learn the action policy with almost half the sample complexity required by the online reinforcement learning agent. Our results show that the structured latent space of the emergent communication learns socially benevolent coordination.
This tests our hypothesis that by learning communication to understand the actions of other agents, one can enable lower sample complexity coordination. Thus, it mitigates the issues of solely observing actions. \begin{figure}[!t] \centering \includegraphics[width=.75\columnwidth]{fig/traffic_teaching.png} \caption{\textbf{Blind Traffic Junction} Social shadowing enables significantly lower sample complexity when compared to traditional online MARL.} \label{fig:tj_social_teach} \end{figure} \section{Discussion} By using our framework to better understand the intent of others, agents can learn to communicate to align policies and coordinate. Any referential-based setup can be performed with a supervised loss, as indicated by the instant satisfaction of referential objectives. Even in the Pascal VOC game, which appears to be a purely referential objective, our results show that intelligent compression is not the only objective of referential communication. The emergent communication paradigm must enable an easy-to-discriminate space for the game. In multi-agent settings, the harder challenge is to enable coordination through communication. Using contrastive communication as an optimal critic aims to satisfy this, and has shown solid improvements. Since contrastive learning benefits from good examples, this method is even more powerful when there is access to examples from expert agents. In this setting, the communication may be bootstrapped, since our optimal critic has examples with strong signals from the 'social shadowing' episodes. Additionally, we show that the minimization of our independence objective enables tokens that contain minimal overlapping information with other tokens. Preventing trivial communication paradigms enables higher performance. Each of these objectives is complementary, so they are not trivially minimized during training, which is a substantial advantage over comparative baselines. Unlike prior work, this enables the benefits of training with reinforcement learning in multi-agent settings. In addition to lower sample complexity, the mutual information regularization yields additional benefits, such as small messages, which enables the compression aspect of sparse communication. From a qualitative point of view, the independent information also yields discrete emergent concepts, which can be further made human-interpretable by a post-hoc analysis~\cite{yeh2021human}. This is a step towards white-box machine learning in multi-agent settings. The interpretability of this learned white-box method could be useful in human-agent teaming as indicated by prior work~\cite{karten2022inter}. The work here will enable further results in decision-making from high-dimensional data with emergent concepts. The social scenarios described are a step towards enabling a zero-shot communication policy. This work will serve as future inspiration for using emergent communication to enable ad-hoc teaming with both agents and humans. 
\section{Appendix}
\subsection{Proofs}\label{appx:proofs}
\textbf{Proposition 4.1} \textit{For the interaction information between all tokens, the following upper bound holds: $I(m_1; \hdots; m_L | h) \leq \mathds{E}_{h \sim p(h)} \left[ D_{KL} \left(q(\hat{m}|h) || \pi^i_m(m_1|h) \otimes \cdots \otimes \pi^i_m(m_L|h)\right) \right]$.}
\begin{proof}
Starting with the independent information objective, we want to minimize the interaction information,
\begin{equation*}
\begin{aligned}
&I(m_1; \hdots; m_L | h) = \\
&\int \hdots \int f_m(m_1, \hdots, m_L, h) dh\ d{m_1} \hdots d{m_L}
\end{aligned}
\end{equation*}
which defines the conditional mutual information between the tokens, where
\begin{equation} \label{eq:interaction}
\begin{aligned}
f_m(*) = p(h) p(m_1; \hdots; m_L | h) \log \frac{p(m_1; \hdots; m_L | h)}{\prod_l^L p(m_l | h)}
\end{aligned}
\end{equation}
Let $\pi_m^i (m_l | h)$ be a variational approximation of $p(m_l | h)$, which is defined by our message encoder network. Given that each token should provide unique information, we assume independence between $m_l$. Thus, it follows that our compositional message is a vector, $m = [m_1, \hdots, m_L]$, and is jointly Gaussian. Moreover, we can define $q(\hat{m} | h)$ as a variational approximation to $p(m | h) = p(m_1; \hdots, m_L | h)$. We can model $q$ with a network layer and define its loss as $||\hat{m} - m||_2$. Thus, transforming equation \ref{eq:interaction} into variational form, we have,
\begin{equation*}
\begin{aligned}
g_m(m_1, \hdots, m_L, h) = p(h) q(\hat{m} | h) \log \frac{q(\hat{m}|h)}{\prod_l^L \pi_m^i(m_l | h)}
\end{aligned}
\end{equation*}
Since the Kullback-Leibler divergence $D_{KL}$ is non-negative,
$$D_{KL}\left(q(\hat{m}|h) || \pi^i_m(m_1|h) \otimes \cdots \otimes \pi^i_m(m_L|h)\right) \geq 0,$$
it follows that
$$\int q(\hat{m}|h) \log q(\hat{m}|h) d\hat{m} \geq \int q(\hat{m}|h) \log \prod_l^L \pi^i_m(m_l|h) d\hat{m}$$
Thus, we can bound our interaction information,
\begin{equation*}
\begin{aligned}
&I(m_1;\hdots ; m_L | h) \leq \int \hdots \int g_m(*) dh d{m_1} \hdots d{m_L} \\
&= \mathds{E}_{h \sim p(h)} \left[ D_{KL} \left(q(\hat{m}|h) || \pi^i_m(m_1|h) \otimes \cdots \otimes \pi^i_m(m_L|h)\right) \right]
\end{aligned}
\end{equation*}
\end{proof}
\textbf{Proposition 4.2} \textit{For the mutual information between the composed message and encoded information, the following upper bound holds: $I(H; M) \leq \sum_l^L \mathds{E}_{h \sim p(h)} \left[ D_{KL} \left( q(m_l|h) || z(m_l) \right) \right]$.}
\begin{proof}
By the definition of mutual information between the composed messages $M$ and the encoded observations $H$, we have,
\begin{equation*}
\begin{aligned}
I(H; M) = \int \int p(h) p(\hat{m}|h) \log \frac{p(\hat{m}|h)}{p(\hat{m})} d\hat{m}\ dh
\end{aligned}
\end{equation*}
Substituting $q(\hat{m}|h)$ for $p(\hat{m}|h)$, applying the same KL-divergence identity, and defining a Gaussian approximation $z(\hat{m})$ of the marginal distribution $p(\hat{m})$, it follows that,
\begin{equation*}
\begin{aligned}
I(H; M) \leq \int \int p(h) q(\hat{m}|h) \log \frac{q(\hat{m}|h)}{z(\hat{m})} d\hat{m}\ dh
\end{aligned}
\end{equation*}
In expectation of equation~\ref{eq:inde_info}, we have,
$$q(\hat{m}|h) = \prod_l^L \pi^i_m (m_l|h).$$
This implies that, for $\hat{m}=[m_1,\hdots,m_L]$, there is probabilistic independence between $m_j, m_k, j\neq k$.
Thus, expanding, it follows that,
\begin{equation*}
\begin{aligned}
I(H; M) &\leq \sum_l^L \int \int p(h) q(m_l|h) \log \frac{q(m_l|h)}{z(m_l)} dm_l\ dh \\
&= \sum_l^L \mathds{E}_{h \sim p(h)} \left[ D_{KL} \left( q(m_l|h) || z(m_l) \right) \right]
\end{aligned}
\end{equation*}
where $z(m_l)$ is a standard Gaussian.
\end{proof}
\textbf{Proposition 5.1.} \textit{Utility mutual information is lower bounded by the contrastive NCE-binary objective, $I(M,Y) \geq \log \sigma (f(s,m,s_f^+)) + \log (1-\sigma(f(s,m,s_f^-)))$.}
\begin{proof}
We suppress the reliance on $h$ since this is directly passed through. By definition of mutual information, we have,
\begin{equation*}
\begin{aligned}
I(M^j; Y^i) = \int \int p(m) \pi_{R^+}(y|m) \log \frac{\pi_{R^+}(y|m)}{\pi_{R^-}(y)} dm\,dy
\end{aligned}
\end{equation*}
Our network model learns $\pi_{R^+}(y|m)$ from rolled-out trajectories, $R^+$, using our policy. The prior of our network state, $\pi_{R^-}(y)$, can be modeled by rolling out a random trajectory, $R^-$. Unfortunately, it is intractable to model $\pi_{R^+}(y|m)$ and $\pi_{R^-}(y)$ directly during iterative learning, but we can sample $y^+ \sim \pi_{R^+}(y|m)$ and $y^- \sim \pi_{R^-}(y)$ directly from our network during training. It has been shown that $\log p(y|m)$ provides a bound on mutual information~\cite{poole2019variational},
\begin{equation}\label{eq:contrastive_MI}
\begin{aligned}
I(M^j;Y^i) \geq \mathds{E} \left[ \frac{1}{K} \sum_{k=1}^K \log \pi_{R^+}(y_k|m_k) + \log \pi_{R^-}(y_k) \right]
\end{aligned}
\end{equation}
with the expectation over $\prod_l p(m_l, y_l)$. However, we need a tractable understanding of the information $Y$.
\begin{lemma}\label{lemma:y}
$\pi_{R^-}(y) = p(s^\prime = s_f^- | y)$.
\end{lemma}
In the information bottleneck, $Y$ represents the desired outcome. In our setup, $y$ is coordination information that helps create the desired output, such as any action $a^-$. This implies $y \implies a^-$. Since the transition is known, it follows that $a^- \implies s_f^-$, a random future state. Thus, we have $\pi_{R^-}(y) = p(s^\prime = s_f^- | y)$.
\begin{lemma}\label{lemma:ym}
$\pi_{R^+}(y|m) = p(s^\prime = s_f^+ | y,m)$.
\end{lemma}
This is similar to the proof for lemma~\ref{lemma:y}, but requires assumptions on messages $m$ from the emergent language. We note that when $m$ is random, the case defaults to lemma~\ref{lemma:y}. Thus, we assume we have at least input-oriented information in $m$ given that equation~\ref{eq:input} is sufficiently satisfied. Given a sufficient emergent language, it follows that $y \implies a^+$, where $a^+$ is an intention action based on $m$. Similarly, since the transition is known, $a^+ \implies s_f^+$, a desired goal state along the trajectory. Thus, we have $\pi_{R^+}(y|m) = p(s^\prime = s_f^+ | y,m)$.
Recall the following (as shown in~\cite{eysenbach2022contrastive}), which we have adapted to our communication objective,
\begin{proposition}[rewards $\rightarrow$ probabilities]\label{eq:eysenbach}
The Q-function for the goal-conditioned reward function $r_g (s_t, m_t) = (1-\gamma) p(s^\prime = s_g |y_t)$ is equivalent to the probability of state $s_g$ under the discounted state occupancy measure:
\begin{equation}\label{eq:qfunc}
Q_{s_g}^\pi (s,m) = p^{\pi} (s_f^+ = s_g |y)
\end{equation}
\end{proposition}
and
\begin{lemma}\label{lemma:eysenbach}
The critic function that optimizes equation~\ref{eq:contrastive_MI} is a Q-function for the goal-conditioned reward function up to a multiplicative constant $\frac{1}{p(s_f)}$: $\exp(f^*(s,m,s_f)) = \frac{1}{p(s_f)} Q_{s_f}^\pi (s,m)$.
\end{lemma}
The critic function $f(s,m,s_f) = y^\intercal \texttt{enc}(s_f)$ represents the similarity between the encoding $y = \texttt{enc}(s,m)$ and the encoding of the future rollout $s_f$. Given lemmas~\ref{lemma:y}, \ref{lemma:ym}, and \ref{lemma:eysenbach} and proposition~\ref{eq:eysenbach}, it follows that equation~\ref{eq:contrastive_MI} is the NCE-binary~\cite{ma2018noise} (InfoMAX~\cite{hjelm2018learning}) objective,
\begin{equation}\label{eq:contrastive}
\hat{I}(M^j,Y^i) = \log\left( \sigma(f(s,m,s_f^+)) \right) + \log\left(1 - \sigma(f(s,m,s_f^-)) \right)
\end{equation}
which lower bounds the mutual information, $I(M^j,Y^i) \geq \hat{I}(M^j,Y^i)$. The critic function is unbounded, so we constrain it to $[0,1]$ with the sigmoid function, $\sigma(*)$.
\end{proof}
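To make the objective in equation~\ref{eq:contrastive} concrete, the following is a minimal PyTorch-style sketch of the inner-product critic and the NCE-binary loss; the encoder modules, tensor shapes, and names are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveCritic(nn.Module):
    # f(s, m, s_f) = enc(s, m)^T enc(s_f), trained with the NCE-binary objective.
    def __init__(self, state_dim, msg_dim, embed_dim):
        super().__init__()
        self.enc_sm = nn.Linear(state_dim + msg_dim, embed_dim)  # y = enc(s, m)
        self.enc_sf = nn.Linear(state_dim, embed_dim)            # enc(s_f)

    def forward(self, s, m, s_f):
        y = self.enc_sm(torch.cat([s, m], dim=-1))
        return (y * self.enc_sf(s_f)).sum(dim=-1)                # inner-product critic

def nce_binary_loss(critic, s, m, s_f_pos, s_f_neg):
    # Negative of the NCE-binary objective: positives are future states on the
    # same trajectory, negatives are states from random rollouts.
    pos = F.logsigmoid(critic(s, m, s_f_pos))    # log sigma(f(s, m, s_f^+))
    neg = F.logsigmoid(-critic(s, m, s_f_neg))   # log(1 - sigma(f(s, m, s_f^-)))
    return -(pos + neg).mean()
\end{verbatim}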
{ "arxiv_id": "2302.14246", "language": "en", "timestamp": "2023-03-01T02:06:41", "url": "https://arxiv.org/abs/2302.14246", "yymm": "2302" }
\section{Algorithm}
\label{sec:algorithm}
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.8\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/i2lqr-diagram.png}
\caption{Proposed i2LQR. The optimization problem is solved iteratively in the outer loop (red connected lines), and multiple iLQR problems are solved in parallel at each outer-loop iteration (blue).}
\label{fig:diagram-i2LQR}
\end{subfigure}
\begin{subfigure}[t]{0.8\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/lmpc-diagram.png}
\caption{LMPC~\cite{rosolia2017learning,rosolia2021minimum}. The key difference between LMPC and our proposed i2LQR is that performance-optimal points are regarded as terminal constraints instead of iteratively updated terminal costs.}
\label{fig:diagram-LMPC}
\end{subfigure}
\caption{Illustration of i2LQR and existing LMPC algorithms}
\label{fig:strategy}
\end{figure*}
After introducing the problem setup for iterative tasks, the design of the proposed Iterative LQR for Iterative Tasks (i2LQR) in a dynamic environment is presented in this section. The general idea of the algorithm is introduced in Sec.~\ref{sec:algorithm_overview}. Sec.~\ref{sec:algorithm-nearest-point} shows how to build a target terminal set using historical data. Details about the local iLQR optimization and the best open-loop solution selection are given in Sec.~\ref{sec:algorithm_optimization_design} and Sec.~\ref{sec:algorithm-optimal-selection}, respectively.
\subsection{Structure of i2LQR}
\label{sec:algorithm_overview}
Consider the problem of achieving the system's optimal performance for iterative tasks in a dynamic environment in the form of Alg. \ref{alg:iterative-task}. The proposed control strategy i2LQR computes the optimal input using historical data from previous iterations. This includes states that the system has visited in previous iterations and the cost-to-go $h(\mathbf{x}^i_t)$ associated with each historical state. In the first iteration, any open-loop controller can be used to generate a feasible trajectory. Then, the proposed i2LQR algorithm is deployed to calculate the system's optimal input at each time step.
Fig.~\ref{fig:diagram-i2LQR} shows the structure of the proposed algorithm. As a comparison, the structure of the LMPC algorithm is also presented in Fig.~\ref{fig:diagram-LMPC}. Different from the LMPC algorithm, the proposed i2LQR controller consists of several optimization cycles. For the $r$-th optimization cycle, we define a guided state $\bar{\mathbf{x}}_{r}$, which guides the position of the open-loop terminal state of the iLQR optimization (see Fig.~\ref{fig:iterative_point_selection}). For the first cycle, the state at the current time step $\mathbf{x}^i_t$ is used as the guided state, while for the $r$-th cycle the best open-loop predicted terminal state ${\mathbf{x}}_{r-1}^*(j_{r-1}^*)_N$ from the last cycle is used as the guided state. Then, the $K$ nearest points to $\bar{\mathbf{x}}_{r}$ from the historical state set $\mathcal{H}$ form the target terminal set $\mathcal{Z}_r$, which consists of the target terminal states $\mathbf{z}_r(j)$ for the $j$-th iLQR optimization. To reduce the computational time of the proposed algorithm, these iLQR optimizations are solved through parallel computing (blue in Fig.~\ref{fig:diagram-i2LQR}). Then, the best open-loop predicted solution for the $r$-th optimization cycle is selected.
The algorithm will continue doing optimization until either the set $\mathcal{Z}_r$ remains unchanged or the maximum cycle number $r_{\text{max}}$ is reached. As a result, the algorithm will select the iLQR's optimal target terminal state $\mathbf{z}_r(j_r^*)$ from historical data in an iterative manner. Details about this algorithm will be illustrated in the following subsections. \subsection{Nearest Points Selection}\label{sec:algorithm-nearest-point} To build the target terminal set $\mathcal{Z}_r$, the following criterion will be used to select the $K$ nearest points: \begin{subequations}\label{eq:nearest_points} \begin{align} J_\mathbf{z}(\bar{\mathbf{x}}_r) = \min\limits_{\mathbf{z}_r(j)} &{\sum\limits_{j=1}\limits^{K}||{\mathbf{z}_{r}(j) - \bar{\mathbf{x}}_r}}||^{2}_{D_0} \\ \text{s.t.}\quad \mathbf{z}_{r}(j) \neq \mathbf{z}_{r}(l), &~\forall{j\neq l}\\ \mathbf{z}_{r}(j) \in \mathcal{H},&~j= 1,...,K \end{align} \end{subequations} where $j$ refers to the index of points in the $r$-th optimization cycle; $D_0$ is a diagonal matrix that contains weighting factors for the state variables. \begin{remark} The maximum number of optimization cycles $r_{\text{max}}$ and the number of selected nearest points $K$ are hyperparameters of the proposed algorithm. A larger value of $r_{\text{max}}$ or $K$ will make the system converge to the optimal performance more quickly. However, this will also increase the computational burden at each time step. \end{remark} \begin{figure} \setlength{\abovecaptionskip}{0.35cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \includegraphics[width=0.85\linewidth]{figures/iterative_point_selection.png} \caption{An illustration of nearest points selection in an iterative manner. The system state at the current time step is marked in red, while historical states are marked in blue. States on the right come with a smaller cost-to-go. Points with crosses are the selected nearest points in the corresponding optimization cycle. Specifically, yellow represents the guided state, and purple indicates the historical state for the best open-loop trajectory. The orange line is the best open-loop trajectory for the corresponding optimization cycle. In this way, the proposed i2LQR algorithm computes the optimal open-loop trajectory in an iterative manner.} \label{fig:iterative_point_selection} \end{figure} \subsection{Local iLQR Optimization} \label{sec:algorithm_optimization_design} The following constrained finite-time optimal control problem will be solved through iLQR for each $\mathbf{z}_r(j)$: \begin{subequations} \label{eq:i2lqr} \begin{align} J_l(\mathbf{x}_t^i, \mathbf{z}_r(j)) = \min\limits_{{\mathbf{u}}^*_{r}(j)} p(\mathbf{x}^*_r(j)_{1+N}, \mathbf{z}_r(j)&) \label{eq:i2lqr-cost}\\ \text{s.t.} \quad \mathbf{x}^*_{r}(j)_{k+1} = f(\mathbf{x}^*_{r}(j)_{k}, \mathbf{u}^*_{r}(j)_{k}), k& = 1,...,N\label{eq:i2lqr-dynamics} \\ \mathbf{x}^*_{r}(j)_{k+1} ,\mathbf{u}^*_{r}(j)_{k} \in \mathcal{C}^i_{t+k|t}, k& = 1,...,N \label{eq:i2lqr-constraint}\\ \mathbf{x}^*_{r}(j)_{1} = \mathbf{x}^i_t,~~~~~~~~~~~& \label{eq:i2lqr-initial-condition} \end{align} \end{subequations} where \eqref{eq:i2lqr-cost} is the objective function of the optimization problem; \eqref{eq:i2lqr-dynamics} represents the system dynamics; \eqref{eq:i2lqr-constraint} shows the constraints of the system along the prediction horizon; \eqref{eq:i2lqr-initial-condition} indicates the initial constraint.
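As a concrete illustration of the selection rule \eqref{eq:nearest_points} from Sec.~\ref{sec:algorithm-nearest-point}, the following is a minimal Python sketch of the nearest-point selection step. The function and array layout are illustrative assumptions only (they are not part of our released implementation), and the historical states in $\mathcal{H}$ are assumed to be stored as the rows of a NumPy array with duplicates removed.
\begin{verbatim}
import numpy as np

def select_nearest_points(x_bar, H, D0, K):
    # H: (M, n) array of historical states (duplicates removed beforehand),
    # x_bar: (n,) guided state, D0: (n, n) diagonal weighting matrix.
    w = np.diag(D0)                              # weights on the state variables
    costs = ((H - x_bar) ** 2 * w).sum(axis=1)   # ||z - x_bar||^2_{D0} per row
    idx = np.argsort(costs)[:K]                  # indices of the K closest states
    return H[idx]                                # rows form the target terminal set Z_r
\end{verbatim}
In the outer loop, this routine would first be called with $\bar{\mathbf{x}}_1=\mathbf{x}^i_t$ and afterwards with the best open-loop predicted terminal state of the previous cycle, as described in Sec.~\ref{sec:algorithm_overview}.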
Specifically, the terminal cost in \eqref{eq:i2lqr-cost} penalizes the difference between the open-loop predicted terminal state and the target terminal state in quadratic form: \begin{equation}\label{eq:i2lqr-terminal-cost} \begin{aligned} p(\mathbf{x}^*_r(j)_{1+N}, \mathbf{z}_r(j))&=\\(\mathbf{x}^*_r(j)_{1+N}& - \mathbf{z}_r(j))^{T}P(\mathbf{x}^*_r(j)_{1+N}- \mathbf{z}_r(j)), \end{aligned} \end{equation} where $P$ is a diagonal matrix consisting of weighting factors. Alg.~\ref{alg:iLQR} shows how to solve the above optimization problem through iLQR. Constraints on states and inputs along the prediction horizon will be converted to part of the new cost function $J_s(\cdot)$ through the exponential function as done in \cite{chen2017constrained}. As shown in Alg.~\ref{alg:iLQR}, the algorithm starts with an initial input sequence, such as zero control inputs in this work. Then, $g(\cdot)$ calculates the open-loop states from the open-loop inputs and the initial state through the system dynamics \eqref{eq:i2lqr-dynamics} during the forward pass at line \ref{alg:forward_dynamics}. The system dynamics $f(\cdot)$ and the cost function $J_s(\cdot)$ are then linearized around $\mathbf{x}_{r}^m(j)$ and $\mathbf{u}_{r}^m(j)$. The optimal solution $\delta^*(\mathbf{u}_{r}^m(j))$ can be obtained efficiently and is used to generate the input sequence for the next iteration $\mathbf{u}_{r}^{m+1}(j)$. The algorithm will do this computation repeatedly until the cost $J_s(\cdot)$ has converged or the maximum iteration number $m_{\text{max}}$ is reached. \begin{algorithm} \caption{iLQR} \label{alg:iLQR} \begin{algorithmic}[1] \State $\mathbf{u}_{r}^0(j) \leftarrow \mathbf{0}$ \Repeat \State Iteration $m$ begins \State $\mathbf{x}_{r}^m(j)\leftarrow g(\mathbf{x}_t^i,\mathbf{u}_{r}^m(j))$ \label{alg:forward_dynamics} \State Linearize $f(\cdot)$ and $J_s(\cdot)$ around $\mathbf{x}_{r}^m(j)$ and $\mathbf{u}_{r}^m(j)$\label{alg:linearization} \State $\delta^*(\mathbf{u}_{r}^m(j))\leftarrow \text{LQR}(\delta(\mathbf{x}_{r}^m(j)), \delta(\mathbf{u}_{r}^m(j)))$ \State $\mathbf{u}_{r}^{m+1}(j)\leftarrow\mathbf{u}_{r}^m(j)+\delta^*(\mathbf{u}_{r}^m(j))$ \State $m \leftarrow m+1$ \Until{Reach $m_\text{max}$ OR $J_s(\cdot)$ has converged} \end{algorithmic} \end{algorithm} \subsection{Best Open-Loop Solution Selection}\label{sec:algorithm-optimal-selection} In each optimization cycle, the best open-loop solution will be selected among the $K$ solutions. The following local cost will be used in this process: \begin{equation}\label{eq:terminal-state-local-cost} j^*_r{=}\argmin_{j = 1, ..., K} w_h h(\mathbf{z}_r(j)) + w_d ||\mathbf{x}^*_r(j)_N- \mathbf{z}_r(j)||^2_{D_1}, \end{equation} where $w_h$, $w_d$ are weighting factors; $h(\cdot)$ is the cost-to-go associated with the state $\mathbf{z}_r(j)$; $||\mathbf{x}^*_r(j)_N- \mathbf{z}_r(j)||^2_{D_1}$ describes the penalty for the state difference between $\mathbf{x}^*_r(j)_N$ and $\mathbf{z}_r(j)$ with a diagonal weighting matrix $D_1$. Since the cost-to-go function $h(\cdot)$ is considered in~\eqref{eq:terminal-state-local-cost}, this allows the iterative optimization to find the best terminal state in the outer loop of i2LQR, shown with red connected lines in Fig.~\ref{fig:diagram-i2LQR}. \section{Problem Setup} \label{sec:setup} In this section, we introduce the problem setup of iterative tasks. \begin{table}[t!] 
\setlength{\abovecaptionskip}{0.5cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \scriptsize \caption{Symbol Notations} \vskip-10pt \begin{tabular}{c | c}\hline \multicolumn{2}{c}{\textbf{Symbols for iterative tasks}}\\\hline Symbol & Description\\ \hline $\mathbf{x}_t^i$ & System state at time step $t$ of iteration $i$\\\hline $\mathbf{u}_t^i$ & System input at time step $t$ of iteration $i$\\\hline $\mathbf{x}_0$ & Initial state of a single iteration\\ \hline $\mathbf{x}_{\text{target}}$ & Target state of a single iteration \\\hline $h(\mathbf{x}^i_t)$ & Cost-to-go associated with system state $\mathbf{x}^i_t$ \\ \hline $\mathcal{X}^{i}$ & Set of historical states for iteration $i$\\ \hline $\mathcal{H}$ & Set of all historical states from previous iterations \\\hline $\mathcal{C}_{t}^i$ & Constraints on the system at time step $t$ in iteration $i$ \\ \hline $\epsilon$ & A small positive number \\ \hline \multicolumn{2}{c}{\textbf{Symbols for i2LQR}}\\\hline Symbol & Description\\ \hline $\bar{\mathbf{x}}_{r}$ & Guided state for $r$-th target terminal set \\\hline $\mathbf{z}_{r}(j)$ & $j$-th state from $r$-th target terminal set \\\hline $N$ & Prediction horizon for the optimization problem \\\hline $\mathcal{Z}_{r}$ & $r$-th target terminal set\\\hline ${\mathbf{x}}_{r}^m(j)$& Open-loop states of $m$-th iteration of iLQR for $\mathbf{z}_{r}(j)$\\\hline ${\mathbf{u}}_{r}^m(j)$ & Open-loop inputs of $m$-th iteration of iLQR for $\mathbf{z}_{r}(j)$\\\hline ${\mathbf{x}}_{r}^*(j)$ & Optimized open-loop states from iLQR for $\mathbf{z}_{r}(j)$\\\hline ${\mathbf{u}}_{r}^*(j)$ & Optimized open-loop inputs from iLQR for $\mathbf{z}_{r}(j)$\\\hline $\mathbf{x}^*_r(j_r^*)$ & Best open-loop states for $r$-th optimization cycle\\\hline $\mathbf{u}^*_r(j_r^*)$ & Best open-loop inputs for $r$-th optimization cycle\\\hline $J_{\mathbf{z}}(\bar{\mathbf{x}}_{r})$ & Cost for nearest point selection on $\bar{\mathbf{x}}_{r}$\\\hline $J_{l}(\mathbf{x}_t^i,\mathbf{z}_{r}(j))$ & Cost for the local iLQR optimization\\\hline $m_\text{max}$ & Maximum number of iLQR iterations \\\hline $r_\text{max}$ & Maximum number of optimization cycles (target terminal set updates) \\\hline \end{tabular} \label{tab:setup-parameter} \vskip-10pt \end{table} For an iterative task at time step $t$ of iteration $i$, an autonomous system with dynamics $\mathbf{x}^i_{t+1} = f(\mathbf{x}^i_{t}, \mathbf{u}^i_{t})$ performs the task repeatedly until completion. In each iteration, the system starts from the same initial state $\mathbf{x}_0$ and ends at the same target state $\mathbf{x}_\text{target}$. The system's historical data (e.g. state, time or energy consumption) is saved in a historical data set $\mathcal{H}$. The system's controller computes the optimal input based on this data set $\mathcal{H}$, the state $\mathbf{x}^i_t$, and the constraint $\mathcal{C}_t^i$ at the current time step. The constraint $\mathcal{C}_t^i$ includes the constraints on both system states and inputs. If the system works in a static environment \cite{rosolia2021minimum, rosolia2019learning}, the constraint $\mathcal{C}_t^i$ will be the same for the same time step $t$ of different iterations (i.e., $\mathcal{C}_t^i=\mathcal{C}_t^{i+1}$). If the system works in a dynamic environment (as in this work), the constraint $\mathcal{C}_t^i$ will change during the process (i.e., $\mathcal{C}_t^i\neq\mathcal{C}_t^{i+1}$ or $\mathcal{C}_{t+1}^i\neq\mathcal{C}_t^{i}$). A cost-to-go $h(\mathbf{x}^i_t)$ is associated with each historical state. 
This describes the cost to finish the corresponding iteration from that point to the target state $\mathbf{x}_{\text{target}}$. The general form of this problem is shown in Alg.~\ref{alg:iterative-task}. In this work, we focus on designing a performance-optimal controller that minimizes the cost $h(\mathbf{x}^i_0)$ of the iterative task. Details about this algorithm will be illustrated in Sec.~\ref{sec:algorithm}. Parameters used in this work are listed in TABLE \ref{tab:setup-parameter} along with their notations. \begin{algorithm}[ht] \caption{Iterative Tasks} \label{alg:iterative-task} \begin{algorithmic}[1] \State $\mathcal{H} \leftarrow \emptyset$ \Repeat \State Iteration $i$ begins \State $\mathbf{x}^i_0 \leftarrow \mathbf{x}_0$, $\mathcal{X}^i\leftarrow\mathbf{x}^i_0$ \While{$||\mathbf{x}^i_t - \mathbf{x}_{\text{target}} ||_2 \geq \epsilon$} \State $\mathbf{u}^i_t \leftarrow \text{Controller}(\mathbf{x}^i_t, \mathcal{C}^i_t, \mathcal{H})$ \label{alg:iterative-task-controller} \State $\mathbf{x}^i_{t+1}\leftarrow f(\mathbf{x}^i_t, \mathbf{u}^i_t)$ \State $t \leftarrow t+1$ \State $\mathcal{X}^i \leftarrow \mathcal{X}^i \cup \mathbf{x}^i_t$ \EndWhile \State $\mathcal{H} \leftarrow \mathcal{H} \cup \mathcal{X}^i$ \State $i \leftarrow i+1$ \Until{Task is finished} \end{algorithmic} \end{algorithm} \section{Conclusion} \label{Sec:Conclusion} In this work, a control strategy called Iterative Linear Quadratic Regulator for Iterative Tasks (i2LQR) is presented. The proposed algorithm achieves the system's optimal performance for iterative tasks in dynamic environments. The algorithm utilizes historical data in an optimization problem, which is solved in an iterative manner. To illustrate our control design, four sets of simulations are conducted. In the first two groups of simulations with a static environment, our proposed i2LQR algorithm provides the same optimal performance as the state-of-the-art LMPC algorithm. In the remaining two simulations, where the environment changes during the simulation, the i2LQR algorithm outperforms the LMPC algorithm. In future work, stability and feasibility analysis of the proposed controller will be presented. \section{Introduction}\label{sec:introduction} \subsection{Motivation} One important objective of control algorithms is to optimize the performance of autonomous systems. This can usually be formulated as minimizing time \cite{jain2020computing} or energy consumption \cite{wu2022model} during task execution. The performance-optimal controller can be applied to iterative tasks, where the autonomous system must perform the same task repeatedly. However, the environment around the autonomous system may be dynamic. This means that the constraints on the system are not always the same throughout the process: for example, new obstacles may appear after a particular iteration, or obstacles may move within a single iteration. The existing state-of-the-art algorithm cannot handle these changes effectively. This motivates us to propose a control strategy that can handle additional constraints in different iterations while achieving the system's optimal performance. \subsection{Related Work} Researchers have applied different methods to address the problem of optimizing performance. 
Model-based approaches usually leverage a high-level planner to generate the optimal trajectory, and a low-level controller is deployed to track the planned trajectory \cite{kapania2016sequential,nagy2019sequential, heilmeier2019minimum, palleschi2021fast}. However, calculating the best possible or globally optimal trajectory can be time consuming, and the low-level controller may not track the trajectory perfectly due to discretization in the planning problem. Furthermore, when the planned trajectory conflicts with newly appeared constraints, local re-planning is required to deal with these additional constraints~\cite{gao2020teach}. This may result in the loss of global optimality for the trajectory. Several attempts have been made to address the problem through data-driven, model-free approaches, where the system's historical data is used to train an end-to-end control policy directly. For instance, in \cite{jain2020computing,fuchs2021super}, the learning-based control policy shows its capability to push an autonomous racing car to its dynamics limit. In \cite{song2021autonomous, penicka2022learning}, quadrotors are shown to fly aggressive maneuvers in autonomous drone racing competitions. Nevertheless, these methods still have limitations. Learning-based methods are data hungry and require significant time to obtain the policy, which means that such methods may not be suitable for some real-time applications. Moreover, all the aforementioned policies operate in a static environment. However, in practice, the control policy is required to be functional in a dynamic environment. More importantly, since the performance of the trained policy depends heavily on the training data set, the learned policy may not guarantee globally optimal performance. Recently, reference-free model-based methods have been proposed to provide optimal performance for iterative tasks. In \cite{kabzan2019learning}, a model predictive contouring controller (MPCC) with dynamics learning is used for autonomous racing cars. The state-of-the-art algorithm, called learning-based MPC (LMPC), is introduced in \cite{rosolia2017learning, rosolia2021minimum}, where the system's historical data is used to formulate the local MPC optimization problem. This allows the system to improve its performance over the iterations, and the system is proved theoretically to achieve globally optimal performance \cite{rosolia2017learning}. The proposed algorithm has been implemented on autonomous vehicles \cite{rosolia2019learning, kim2019eco} and aerial robots \cite{li2022learning}. However, these methods still have shortcomings. Although the local objective function in \cite{kabzan2019learning} considers optimal performance along the prediction horizon, the result is not globally optimal without global planning. Moreover, the LMPC strategy in \cite{rosolia2017learning, rosolia2021minimum} must work in a static environment, which means that the scenario should be exactly the same for every iteration. This is due to the limitation of the optimization setup, where the local MPC's feasibility strictly depends on the reachability of historical states from previous iterations. If new obstacles are introduced, these historical states could become infeasible, rendering the MPC problem infeasible. One possible solution to this problem is local re-planning \cite{he2022autonomous}, but the performance may be limited due to the nonsmooth switch in the high-level planner. 
To ensure that the autonomous system can smoothly adapt to newly introduced constraints, the optimal trajectory should be computed in an iterative manner. Therefore, we want to use a method that can handle this without resulting in infeasibility. The iterative linear quadratic regulator (iLQR) shows potential in solving this problem. iLQR is an extension of LQR control, where the optimization is solved iteratively and the cost function and system dynamics are linearized in each iteration. In \cite{chen2017constrained}, iLQR computes the open-loop predicted trajectory for control problems with nonlinear dynamics and complicated constraints in an iterative manner. This motivates us to investigate the above challenging problem using an iLQR-based algorithm. \subsection{Contribution} The contributions of this paper are as follows: \begin{itemize} \item We propose a novel optimal control strategy called Iterative LQR for Iterative Tasks (i2LQR), which provides optimal performance for a general dynamic system performing iterative tasks in dynamic environments. \item We show how to use historical data from previous iterations to build the local optimization problem of i2LQR at each time step and how to solve this optimization problem in an iterative manner. \item Through numerical simulation, our proposed control strategy is shown to achieve the same optimal performance as the state-of-the-art LMPC algorithm for iterative tasks in static environments and to outperform the LMPC algorithm for iterative tasks in dynamic environments. \end{itemize} \section{Results} \label{Sec:Results} Having illustrated our optimal control strategy for iterative tasks in a dynamic environment in the previous sections, we now show the performance of the proposed algorithm. In Sec. \ref{Sec:Results-Setup}, the basic setup for the simulation is introduced. Then, the performance of the proposed controller will be compared with the state-of-the-art learning-based MPC algorithm: in Sec. \ref{Sec:Results-Static}, results on iterative tasks in a static environment will be illustrated; in Sec. \ref{Sec:Results-Dynamic}, results on iterative tasks in a dynamic environment will be presented. \subsection{Simulation Setup}\label{Sec:Results-Setup} A nonlinear kinematic bicycle model with input constraints, as in \cite{chen2017constrained}, will be used to evaluate the proposed algorithm. \begin{equation} \textbf{x}_t=[x_t, y_t, v_t, \theta_t]^T,~\textbf{u}_t=[a_t, \delta_t]^T, \end{equation} are the state and input vectors at time step $t$, respectively; specifically, $x_t$ and $y_t$ describe the system's position, $v_t$ is the system's speed, $\theta_t$ is the heading angle, $a_t$ is the acceleration, and $\delta_t$ is the steering angle. In this work, the sampling time $dt$ is set to 1 sec, which is consistent with the open-source code of~\cite{rosolia2021minimum}. The system is subject to the following input constraints: \begin{equation} -2 \text{ m/s}^2\leq a_t\leq2\text{ m/s}^2,~-\dfrac{\pi}{2} \text{ rad.}\leq \delta_t\leq \dfrac{\pi}{2} \text{ rad.} \end{equation} For each iteration, the initial state $\textbf{x}_0$ and target state $\textbf{x}_{\text{target}}$ are set to $[0 \text{ m},0\text{ m},0\text{ m/s},0\text{ rad.}]^T$ and $[201.5\text{ m},0\text{ m},0\text{ m/s},0\text{ rad.}]^T$, respectively. 
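Since the model equations are not restated here, the following is a minimal Python sketch of a standard forward-Euler kinematic bicycle update that is consistent with the state and input vectors above; the wheelbase value and the discretization are illustrative assumptions, and the exact model used in our simulations is the one of \cite{chen2017constrained}. The clipping of the inputs reflects the constraints stated above.
\begin{verbatim}
import numpy as np

DT = 1.0          # sampling time dt = 1 s, as in the setup above
WHEELBASE = 1.0   # assumed wheelbase; not specified in the text

def bicycle_step(x, u, dt=DT, L=WHEELBASE):
    # One forward-Euler step of a standard kinematic bicycle model with
    # state x = [x, y, v, theta] and input u = [a, delta].
    a = np.clip(u[0], -2.0, 2.0)                   # |a| <= 2 m/s^2
    delta = np.clip(u[1], -np.pi / 2, np.pi / 2)   # |delta| <= pi/2 rad
    px, py, v, theta = x
    return np.array([
        px + dt * v * np.cos(theta),
        py + dt * v * np.sin(theta),
        v + dt * a,
        theta + dt * v / L * np.tan(delta),
    ])
\end{verbatim}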
The cost-to-go $h(\textbf{x}_t^i)$ is the time to finish the corresponding iteration from the point $\textbf{x}_t^i$ to $\mathbf{x}_{\text{target}}$, which means that the algorithm aims to minimize the time to finish each iteration. In iteration 0, a brute force algorithm is used to calculate the initial feasible trajectory. This trajectory will be used by both i2LQR and LMPC algorithms for all simulations. In iteration 1, both algorithms will use the historical data from iteration 0, and historical data from the two previous iterations will be used by both algorithms in subsequent iterations. In this work, numerical simulation is implemented in Python. For the LMPC algorithm, CasADi \cite{andersson2019casadi} is used as modeling language and the resulting optimization is solved with IPOPT \cite{biegler2009large}. \subsection{Iterative Tasks In Static Environments} \label{Sec:Results-Static} In order to compare the performance between our proposed i2LQR and LMPC for iterative tasks in a static environment, we do two groups of simulations. In the first group of simulations, no obstacle exists in the environment. In each iteration, the system will travel from the initial state $\textbf{x}_0$ to the target state $\textbf{x}_{\text{target}}$. The initial trajectory and trajectory in iteration 10 for the two algorithms are shown in Fig.~\ref{fig:no_obstacle_trajectory}. Fig.~\ref{fig:no_obstacle_time} shows the timestamps to finish every single iteration using two algorithms. In Fig.~\ref{fig:no_obstacle_spd} and Fig.~\ref{fig:no_obstacle_acc}, the system's velocity and acceleration in iteration 10 using the two algorithms are presented. \begin{figure} \setlength{\abovecaptionskip}{0.3cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \begin{subfigure}[t]{1\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/no_obstacle_trajectory.pdf} \caption{Shared initial trajectory, and trajectories in iteration 10 for i2LQR and LMPC.} \label{fig:no_obstacle_trajectory} \end{subfigure} \begin{subfigure}[t]{1\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/no_obstacle_time.pdf} \caption{Time to finish the iteration for i2LQR and LMPC.} \label{fig:no_obstacle_time} \end{subfigure} \begin{subfigure}[t]{1\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/no_obstacle_velocity.pdf} \caption{System speed in iteration 10 for i2LQR and LMPC.} \label{fig:no_obstacle_spd} \end{subfigure} \begin{subfigure}[t]{1\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/no_obstacle_acceleration.pdf} \caption{System acceleration in iteration 10 for i2LQR and LMPC.} \label{fig:no_obstacle_acc} \end{subfigure} \caption{Simulation with no obstacle. Both algorithms could reach the system's optimal performance.} \label{fig:sim-1} \end{figure} According to Fig. \ref{fig:no_obstacle_time}, both i2LQR and LMPC algorithms minimize the time consumption for the system to reach the target state. The optimal time cost is the same for both algorithms. Specifically, given the same initial trajectory, the trajectories in iteration 10 will be straight lines between the initial state and target state for both algorithms. 
As shown in Fig.~\ref{fig:no_obstacle_acc}, for both algorithms in iteration 10, the system will accelerate for the first half of the simulation and then decelerate to reach the target state with zero velocity. In the second group of simulations, an ellipse-shaped static obstacle with center $(x_{\text{obs}},y_{\text{obs}})=(100\text{ m},-5 \text{ m})$ exists in the environment. The system must travel from the initial state $\textbf{x}_0$ to the target state $\textbf{x}_{\text{target}}$ while avoiding the obstacle. Again, Fig.~\ref{fig:sim-2} shows the information for numerical simulations using the two algorithms. \begin{figure} \setlength{\abovecaptionskip}{0.3cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/static_obstacle_trajectory.pdf} \caption{Shared initial trajectory, and trajectories in iteration 10 for i2LQR and LMPC.} \label{fig:static_obstacle_trajectory} \end{subfigure} \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/static_obstacle_time.pdf} \caption{Time to finish the iteration for i2LQR and LMPC.} \label{fig:static_obstacle_time} \end{subfigure} \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/static_obstacle_velocity.pdf} \caption{System speed in iteration 10 for i2LQR and LMPC.} \label{fig:static_obstacle_spd} \end{subfigure} \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/static_obstacle_acceleration.pdf} \caption{System acceleration in iteration 10 for i2LQR and LMPC.} \label{fig:static_obstacle_acc} \end{subfigure} \caption{Simulation with a static obstacle. Both algorithms could reach the system's optimal performance.} \label{fig:sim-2} \end{figure} Fig. \ref{fig:static_obstacle_time} shows that both i2LQR and LMPC algorithms minimize the time consumption for the system to reach the target state even when a static obstacle exists in the environment. According to Fig.~\ref{fig:static_obstacle_trajectory}, given the same initial trajectory, both algorithms can find the optimal trajectory that avoids the obstacle. According to Fig.~\ref{fig:static_obstacle_acc}, in iteration 10, the system will accelerate for the first half of the simulation and then decelerate to reach the target state with zero velocity. However, Fig.~\ref{fig:static_obstacle_time} shows that the times to finish the iteration are not stable for the i2LQR algorithm after the system reaches its optimal performance. This is due to the fact that all constraints are handled as soft penalty terms in the cost function, so numerical fluctuations may occur during this process. \subsection{Iterative Tasks In Dynamic Environments} \label{Sec:Results-Dynamic} To show the proposed i2LQR algorithm's performance for iterative tasks in dynamic environments, we conduct two groups of simulations with different environments. In the third group of simulations (Fig.~\ref{fig:sim-3}), a static circle-shaped obstacle with center $(x_{\text{obs}},y_{\text{obs}})=(35\text{ m},0 \text{ m})$ exists in the environment during iteration 6. 
According to Fig.~\ref{fig:add_static_obstacle_time}, the systems with both algorithms have reached their optimal performance before the static obstacle is introduced. During iteration 6, i2LQR spends 25 s to reach the target state while avoiding the static obstacle. Then, it returns to its optimal performance after the obstacle is removed. However, the LMPC cannot reach the target state after more than 100 s. The reason is that all the nearby historical states except the initial state are occupied by the obstacle, which results in the infeasibility of the optimization problem. \begin{figure} \setlength{\abovecaptionskip}{0.3cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/add_static_obstacle_trajectory.pdf} \caption{Shared initial trajectory, and trajectories in iteration 6 for i2LQR and LMPC. The static obstacle is plotted in red.} \label{fig:add_static_obstacle_trajectory} \end{subfigure} \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/add_static_obstacle_time.pdf} \caption{Time to finish the iteration for i2LQR and LMPC.} \label{fig:add_static_obstacle_time} \end{subfigure} \caption{Simulation with an added static obstacle in iteration 6. In iteration 6, the proposed i2LQR reaches the target state $\mathbf{x}_{\text{target}}$ while the LMPC cannot reach the target state $\mathbf{x}_{\text{target}}$.} \label{fig:sim-3} \end{figure} To further present the proposed i2LQR algorithm's performance in a more complicated dynamic environment, in the fourth simulation, a circle-shaped moving obstacle moves upwards from the initial point $(x_{\text{obs}},y_{\text{obs}})=(35\text{ m},-16 \text{ m})$ with a speed of 1 m/s in iteration 6. The obstacle is removed in the next iteration. Fig.~\ref{fig:add_moving_obstacle_trajectory} and Fig.~\ref{fig:add_moving_obstacle_total_traj} show the snapshots and trajectories for both i2LQR and LMPC algorithms in iteration 6, respectively. It's shown that i2LQR is able to avoid this moving obstacle even when the obstacle is close to the system. However, since historical states with smaller time costs are occupied by the obstacle, these states become infeasible for the local MPC optimization of the LMPC algorithm; therefore, the controller cannot drive the system towards the target state at the beginning. After the obstacle moves away from the system, the LMPC-controlled system proceeds towards the target state. According to Fig.~\ref{fig:add_moving_obstacle_time}, both algorithms have reached their optimal performance before the moving obstacle appears. The i2LQR spends 32 s to finish iteration 6, while the LMPC spends 63 s in the same environment. Both algorithms return to their optimal performance after the moving obstacle is removed from the environment. 
\begin{figure} \setlength{\abovecaptionskip}{0.3cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=1.0\linewidth]{figures/add_moving_obstacle_trajectory.pdf} \caption{Snapshots for the systems using the two algorithms in iteration 6.} \label{fig:add_moving_obstacle_trajectory} \end{subfigure} \begin{subfigure}[t]{0.95\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=1.0\linewidth]{figures/add_moving_obstacle_trajectory_total.pdf} \caption{Shared initial trajectory, and trajectories in iteration 6 for i2LQR and LMPC.} \label{fig:add_moving_obstacle_total_traj} \end{subfigure} \begin{subfigure}[t]{1.0\linewidth} \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{0.1cm} \centering \includegraphics[width=0.95\linewidth]{figures/add_moving_obstacle_time.pdf} \caption{Time to finish the iteration for i2LQR and LMPC.} \label{fig:add_moving_obstacle_time} \end{subfigure} \caption{Simulation with an added moving obstacle in iteration 6. In iteration 6, the proposed i2LQR reaches the target state $\mathbf{x}_{\text{target}}$ earlier than the LMPC, which means the system comes with a smaller cost-to-go $h(\mathbf{x}^6_0)$.} \label{fig:sim-4} \end{figure} \begin{remark} For the iterative task in a dynamic environment, it is possible to obtain a feasible solution by adding slack variables to the terminal state constraint of LMPC in \cite{rosolia2021minimum}. This converts the hard constraints on the terminal state into cost-based soft constraints. However, using slack variables is not in line with the design of the LMPC algorithm, which relies on the feasibility of the target terminal state. Furthermore, the performance of the algorithm may not be guaranteed when slack variables are used. \end{remark}
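In schematic form, and with illustrative notation rather than that of \cite{rosolia2021minimum}, such a slack-variable relaxation would replace a hard terminal constraint of the form $\mathbf{x}_N \in \mathcal{S}$, where $\mathcal{S}$ denotes the set of stored historical states, by
\begin{equation*}
\mathbf{x}_N = \mathbf{z} + \mathbf{s}, \quad \mathbf{z} \in \mathcal{S}, \qquad \text{with an additional penalty } \lambda\,||\mathbf{s}||_1 \text{ in the cost,}
\end{equation*}
so that the optimization remains feasible even when all nearby historical states are blocked by a newly introduced obstacle, at the price of weakening the guarantees that rely on exact membership of the terminal state in the historical data set.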
{ "arxiv_id": "2302.14271", "language": "en", "timestamp": "2023-03-01T02:07:37", "url": "https://arxiv.org/abs/2302.14271", "yymm": "2302" }
\section{Introduction} Many of the most significant nonlinear wave equations are geometric wave equations, such as the wave map equation, the Maxwell-Klein-Gordon equation, the hyperbolic Yang-Mills equation and the Einstein equations. In contrast to scalar nonlinear wave equations, the nonlinearity in geometric wave equations comes from the curvature of the state space itself: the target manifold for the wave map problem, the connection for the Maxwell-Klein-Gordon and the Yang-Mills equations and the metric for the Einstein equations. Simultaneously, another distinguishing feature of the geometric wave equations is their gauge-invariance, arising from the invariance of the equations under a group of local transformations. This, in a nutshell, can be thought of as the freedom to choose different coordinate systems. These two phenomena, the nature of the nonlinearity and gauge invariance, create interesting analytical and geometric challenges. In the last three decades, there have been many significant developments in the \emph{deterministic} theory of geometric wave equations. Since a complete (or even partial) overview of this literature is well beyond the scope of this introduction and since the focus of this paper is on low-regularity (stochastic) solutions, we limit our references to the local low-regularity well-posedness in \cite{C93,KM93,KM94,KM95,KT99,KRS15,LR15,LR17,MS04,O14,Pecher18,Pecher20,ST10,ST16,Tao01b,Tao03}, the small-data critical results in \cite{C93,KST15,KT17,RT04,Tao01a}, and the large-data critical results in \cite{C93,KL15,LO19a,LO19b,OT19,Tao04}. \\ For scalar nonlinear wave (and other dispersive) equations, there has also been significant progress towards a \emph{probabilistic} rather than \emph{deterministic} theory. The randomness is commonly introduced in the evolution problem through random initial data and/or stochastic forcing. While the regularity of the random initial data or stochastic forcing is often beyond the threshold for deterministic well-posedness, it can still be possible to prove probabilistic well-posedness. The first theorem on the probabilistic well-posedness of a dispersive equation was obtained in \cite{B96}, which studied the two-dimensional cubic nonlinear Schr\"odinger equation with random initial data just below the scaling-critical regularity\footnote{The random initial data in \cite{B96} and many related works is drawn from the corresponding Gibbs measure, but Gibbs measures will not be discussed further here.}. More recently, there has been significant progress on higher-order nonlinearities, on dispersive equations in three dimensions, and on problems from wave turbulence, see e.g. \cite{B20II,B21,BDNY22,CG19,DH21a,DH21b, DNY19,DNY21,DNY20,GKO18a,GKO18b,OOT20,OOT21,ST21}. For a more detailed discussion, we refer the reader to the introduction of \cite{BDNY22}. \\ Despite the significant progress towards a probabilistic theory of scalar nonlinear wave equations, there has been only limited progress towards a probabilistic well-posedness theory for geometric wave equations. At this point, the primary references are \cite{BLS21} and \cite{KLS20}. In \cite{BLS21}, the first author, Lührmann, and Staffilani proved the probabilistic local well-posedness of the $(1+1)$-dimensional wave maps equation with Brownian paths as initial data. The most important aspect of \cite{BLS21} is that Brownian paths, i.e., the random initial data, are natural from both geometric and probabilistic perspectives. 
In \cite{KLS20}, Krieger, Lührmann, and Staffilani proved the small data probabilistic global well-posedness of the energy-critical Maxwell-Klein-Gordon equation (MKG) with initial data drawn from a Wiener randomization. The most difficult aspect of \cite{KLS20} is that the well-posedness of the energy-critical MKG is already extremely involved at the deterministic level (see e.g. \cite{KST15}), and it is therefore hard to combine the deterministic functional framework with probabilistic ideas. The random initial data in \cite{KLS20}, however, is not completely natural from a geometric perspective. In particular, it is not gauge-invariant in law. We note that, in comparison to random geometric wave equations, stochastic geometric parabolic equations are much better understood. We refer the reader to the review article \cite{C22} and the recent research articles \cite{BGHZ21,CC21a,CC21b,CCHS20,CCHS22,CS23,S21}. \\ The main goal of this article is to further explore probabilistic aspects of geometric wave equations. In \eqref{intro:eq-covariant} below, we introduce a model for gauge-covariant wave equations with space-time white noise, i.e., stochastic forcing. We consider here the simplest and most natural model: a stochastic scalar field driven by a stochastic abelian connection (vector potential). The stochastic forcing leads to randomness in both the vector potential and scalar field, coupled through the interaction between the two but {\it only} in the equation for the scalar field. In our main theorem (Theorem \ref{intro:thm-main}), we then obtain the probabilistic global well-posedness of our model in the Lorenz gauge. In the last subsection of this introduction, we compare our new model with a stochastic Maxwell-Klein-Gordon equation. Through the failure of a probabilistic null-form estimate, we also expose a potential obstruction in proving probabilistic well-posedness for the stochastic Maxwell-Klein-Gordon equation. In view of the above, the current paper is the first step towards building a theory of stochastic geometric wave equations, with serious challenges already presenting themselves in the attempts to make sense of the {\it fully coupled} stochastic interaction between a scalar field and an abelian connection (stochastic Maxwell-Klein-Gordon). Further challenges await in constructing stochastic solutions of geometric hyperbolic {\it semilinear} and, eventually, {\it quasilinear}, equations. \subsection{A gauge-covariant wave equation with space-time white noise}\label{section:covariant} In this subsection, we introduce our model for a (gauge-)covariant wave equation with space-time white noise and state the main theorem of this article. Initially, our discussion will be performed in a general spatial dimension $d\geq 2$, but our main result only concerns the two-dimensional setting, i.e., $d=2$. We work on the $(1+d)$-dimensional space-time $\R_t \times \T_x^d$. Here, $\T^d := (\R \backslash 2\pi \Z)^2$ denotes the $d$-dimensional torus and thus our space-time is periodic in the spatial coordinates. We let $\eta$ be the Minkowski metric with signature $(-\, + \, \hdots \, +)$ and raise and lower all indices with respect to $\eta$. We let $(A_\alpha)_{\alpha=0}^d \colon \R_t \times \T_x^d \rightarrow \R^{1+d}$ be a vector potential. 
We then define the corresponding curvature tensor $(F_{\alpha\beta})$ and covariant derivatives $(D_\alpha)$ by \begin{align} F_{\alpha \beta } &:= \partial_\alpha A_\beta - \partial_\beta A_\alpha, \label{intro:eq-curvature-tensor} \\ D_\alpha &:= \partial_\alpha + i A_\alpha \label{intro:eq-covariant-derivative}. \end{align} We also let $\phi \colon \R_t \times \T_x^d \rightarrow \C$ be a complex-valued scalar field. The vector potential $(A_\alpha)_{\alpha=0}^d$ and scalar field $\phi$ will soon serve as the unknowns in our covariant wave equation. Finally, we let $\scrm \colon \R_t \rightarrow \R$ be a time-dependent mass, $\zeta \colon \R_t \times \T_x^d \rightarrow \C$ be a complex-valued stochastic forcing term, and $(J^\alpha)_{\alpha=0}^d\colon \R_t \times \T_x^d \rightarrow \R^{1+d}$ be a vector-valued stochastic forcing term.\\ Equipped with the above definitions, we now consider a stochastic covariant wave equation, which is given by \begin{equation}\label{intro:eq-covariant} \begin{aligned} \partial_\alpha F^{\alpha \beta} &= J^\beta \qquad (t,x) \in \mathbb{R}_t \times \T_x^d, \\ \big( D_\alpha D^\alpha + \scrm^{\hspace{-0.4ex}2} \big) \phi &= \zeta \qquad \hspace{1.5ex} (t,x) \in \mathbb{R}_t \times \T_x^d. \end{aligned} \end{equation} We note that the evolution of the curvature tensor $F$ is completely determined by the space-time current $(J^\beta)_{\beta=0}^d$. In particular, since the space-time current $(J^\beta)_{\beta=0}^d$ will be chosen as a stochastic forcing-term, this leads to a random curvature tensor $F$. We also note that the evolution equation of the scalar field $\phi$ is the Euler-Lagrange equation for \begin{equation}\label{intro:eq-Lagrangian} \phi \mapsto \int_\R \dt \int_{\T^d} \dx \, \Big( \tfrac{1}{2} D_\alpha \phi \overline{D^\alpha \phi} - \tfrac{1}{2} \scrm^{\hspace{-0.4ex}2} |\phi|^2 - \Re \big( \zeta \overline{\phi} \big) \Big). \end{equation} We emphasize that the massive term $-\scrm^{\hspace{-0.4ex}2} |\phi|^2$ is non-coercive. This is necessary for implementing our renormalization, see e.g. Definition \ref{vector:def-mass} and Lemma \ref{vector:lem-quadratic} below. \\ As we will see momentarily, \eqref{intro:eq-covariant} cannot (or should not) be studied for general stochastic forcing terms $\zeta$ and $(J^\beta)_{\beta=0}^d$. Instead, we now introduce assumptions on $\zeta$ and $(J^\beta)_{\beta=0}^d$ which formally guarantee both the gauge-invariance in law and the consistency of the evolution equations imposed on the curvature tensor. \subsubsection{Gauge-invariance} We first derive a condition on the stochastic forcing term $\zeta$ by formally imposing gauge-invariance. For any deterministic (or, more generally, progressively measurable) smooth $\varphi \colon \R_t \times \T_x^d \rightarrow \R$, we consider the gauge transformation \begin{equation}\label{intro:eq-gauge} (A_\alpha, \phi) \mapsto (\widetilde{A}_\alpha,\widetilde{\phi}):= ( A_\alpha + \partial_\alpha \varphi, e^{-i\varphi} \phi ). \end{equation} We note that the curvature tensor is gauge-invariant and the covariant derivative is gauge-covariant with respect to the gauge transformation in \eqref{intro:eq-gauge}. More precisely, if $\widetilde{F}$ and $\widetilde{D}_\alpha$ are the curvature tensor and covariant derivative corresponding to $\widetilde{A}$, then it holds that \begin{equation}\label{intro:eq-gauge-F-D} \widetilde{F}_{\alpha\beta} = F_{\alpha \beta} \qquad \text{and} \qquad \widetilde{D}_\alpha \widetilde{\phi} = \widetilde{D_\alpha \phi}. 
\end{equation} Using \eqref{intro:eq-gauge-F-D}, we obtain for any solution $(A_\alpha,\phi)$ of the stochastic covariant wave equation that the gauge-transformed unknowns $(\widetilde{A}_\alpha,\widetilde{\phi})$ satisfy \begin{equation}\label{intro:eq-covariant-gauge-transformed} \begin{aligned} \partial_\alpha \widetilde{F}^{\alpha \beta} &= J^\beta \qquad \hspace{4.25ex} (t,x) \in \mathbb{R}_t \times \T_x^d, \\ \big( \widetilde{D}_\alpha \widetilde{D}^\alpha + \scrm^{\hspace{-0.4ex}2} \big) \widetilde{\phi} &= e^{-i\varphi} \zeta \qquad \hspace{1.5ex} (t,x) \in \mathbb{R}_t \times \T_x^d. \end{aligned} \end{equation} From a physical perspective, it is natural to impose that the solution $(A_\alpha,\phi)$ should be gauge-invariant in law. After comparing \eqref{intro:eq-covariant} and \eqref{intro:eq-covariant-gauge-transformed}, it is therefore natural to impose that \begin{equation}\label{intro:eq-gauge-law-zeta} \operatorname{Law}\big( e^{-i\varphi} \zeta \big) = \operatorname{Law} \big( \zeta\big). \end{equation} In order to satisfy \eqref{intro:eq-gauge-law-zeta}, we let the stochastic forcing term $\zeta$ be a complex-valued space-time white noise. We re-emphasize that the choice of $\zeta$ as complex-valued space-time white noise is not of our own making, but is essentially dictated by gauge-invariance, i.e, dictated by physical considerations. Since space-time white noise has temporal regularity $-1/2-$ and spatial regularity $-d/2-$, which decreases in the spatial dimension, this suggests that the probabilistic well-posedness theory of our stochastic covariant wave equation is easier in lower dimensions. \subsubsection{Continuity equation and random currents} We now derive a condition on the stochastic forcing terms $(J^\alpha)_{\alpha=0}^d$ from the properties of the curvature tensor. From the definition of the curvature tensor $F$ in \eqref{intro:eq-curvature-tensor}, it is clear that $F$ is skew-symmetric, i.e., $F_{\alpha \beta}=-F_{\beta \alpha}$ for all $0\leq \alpha, \beta \leq d$. As a result, it follows that \begin{equation}\label{intro:eq-curvature-skew-symmetry-condition} \partial_\alpha \partial_\beta F^{\alpha \beta}=0. \end{equation} In order for the stochastic covariant wave equation \eqref{intro:eq-covariant} to be consistent, it is therefore necessary that \begin{equation}\label{intro:eq-continuity-equation} \partial_\beta J^\beta = 0. \end{equation} Consistency conditions such as \eqref{intro:eq-continuity-equation} are often called continuity equations and appear naturally in many physical applications. In electrodynamics, where $J^0$ represents the charge and $(J^j)_{j=1}^d$ represents the spatial currents, \eqref{intro:eq-continuity-equation} is the conservation law for the charge. Due to \eqref{intro:eq-continuity-equation}, the space-time currents $(J^\alpha)_{\alpha=0}^d$ cannot be chosen independently. We now introduce a model for a space-time white noise current, in which the spatial currents $(J^j)_{j=1}^d$ are chosen independently and the time-current $J^0$ is determined by \eqref{intro:eq-continuity-equation}. \begin{definition}[Space-time white noise current]\label{intro:def-white-noise-current} A vector-valued distribution $(J^\alpha)_{\alpha=0}^d \colon \R_t \times \T_x^d \rightarrow \R^{1+d}$ is called a space-time white noise current if the following two conditions are satisfied: \begin{enumerate}[label=(\roman*)] \item $(J^j)_{j=1}^d$ are independent, real-valued space-time white noises. 
\item $J^0$ satisfies $\partial_t J^0 = - \partial_j J^j$ and $J^0\big|_{t=0}=0$. \end{enumerate} \end{definition} \subsubsection{Lorenz gauge} Due to the gauge-invariance in law of \eqref{intro:eq-covariant}, it is possible to impose a gauge condition on the vector potential. The classical gauge choices for geometric wave equations are the Coulomb gauge $\partial_j A^j =0$, the Lorenz gauge $\partial_\alpha A^\alpha=0$, and the temporal gauge $A^0=0$ (see e.g. \cite{KM94,MS04,ST10,Tao03}). In this article, we work in the Lorenz gauge and our reasons for this are two-fold: First, the Lorenz gauge leads to a complete derivative in \eqref{intro:eq-phi-Lorenz} below, which simplifies the analysis of high$\times$low-interactions (see Section \ref{section:ansatz}). Second, the Lorenz gauge leads to a system involving only hyperbolic and no elliptic equations, which enables a unified treatment of all components of the vector potential. Despite this, however, we believe that our main theorem can also be proven in the Coulomb and temporal gauges. We now examine the covariant wave equation \eqref{intro:eq-covariant} in the Lorenz gauge. Due to the Lorenz condition $\partial_\alpha A^\alpha =0$, it holds that \begin{equation}\label{intro:eq-curvature-Lorenz} \partial_\alpha F^{\alpha \beta} = \partial_\alpha \partial^\alpha A^\beta - \partial_\alpha \partial^\beta A^\alpha = \partial_\alpha \partial^\alpha A^\beta - \partial^\beta \big( \partial_\alpha A^\alpha\big) = \partial_\alpha \partial^\alpha A^\beta. \end{equation} Furthermore, it holds that \begin{equation}\label{intro:eq-phi-Lorenz} \begin{aligned} D_\alpha D^\alpha \phi &= \partial_\alpha \partial^\alpha \phi + 2i \partial_\alpha \big( A^\alpha \phi) - i \big( \partial_\alpha A^\alpha \big) \phi - A_\alpha A^\alpha \phi \\ &= \partial_\alpha \partial^\alpha \phi + 2i \partial_\alpha \big( A^\alpha \phi) - A_\alpha A^\alpha \phi . \end{aligned} \end{equation} We emphasize that the only quadratic term in \eqref{intro:eq-phi-Lorenz} contains a complete derivative, which is the main advantage of working in the Lorenz gauge. Thus, the stochastic covariant wave equation in the Lorenz gauge takes the form \begin{align} \partial_\alpha \partial^\alpha A^\beta &= J^\beta, \label{intro:eq-covariant-Lorenz-q1} \\ \partial_\alpha A^\alpha &=0, \label{intro:eq-covariant-Lorenz-q2} \\ \partial_\alpha \partial^\alpha \phi + 2i \partial^\alpha \big( A_\alpha \phi \big) &= (A_\alpha A^\alpha - \scrm^{\hspace{-0.4ex}2}) \phi + \zeta. \label{intro:eq-covariant-Lorenz-q3} \end{align} In order to determine the vector potential $(A^\alpha)_{\alpha=0}^d$ and the scalar field $\phi$ completely, we also need to impose initial conditions. For notational simplicity (and in order to reduce the number of terms), we require that \begin{equation}\label{intro:eq-initial-A} A^\alpha [0]:= \big( A^\alpha (0), \, \partial_t A^\alpha(0) \big) = 0. \end{equation} For the scalar field $\phi$, we allow non-zero initial data, i.e., impose a condition of the form \begin{equation}\label{intro:eq-initial-phi} \phi [0] = \big( \phi(0), \, \partial_t \phi(0) \big) = (\phi_0,\phi_1), \end{equation} since it does not significantly increase the number of terms in our analysis. 
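We also note, as a formal consistency check (the following computation is only heuristic, since the forcing terms are merely distributions), why the gauge condition \eqref{intro:eq-covariant-Lorenz-q2} is compatible with \eqref{intro:eq-covariant-Lorenz-q1}, \eqref{intro:eq-initial-A}, and Definition \ref{intro:def-white-noise-current}. Applying $\partial_\beta$ to \eqref{intro:eq-covariant-Lorenz-q1} and using the continuity equation \eqref{intro:eq-continuity-equation}, we obtain
\begin{equation*}
\partial_\alpha \partial^\alpha \big( \partial_\beta A^\beta \big) = \partial_\beta J^\beta = 0,
\end{equation*}
so that $\partial_\beta A^\beta$ solves the homogeneous wave equation. Due to \eqref{intro:eq-initial-A}, it holds that $(\partial_\beta A^\beta)(0)=0$. Furthermore, evaluating the $\beta=0$ component of \eqref{intro:eq-covariant-Lorenz-q1} at the initial time and using $A^0[0]=0$ together with $J^0\big|_{t=0}=0$ yields $\partial_t^2 A^0(0)=0$, and hence $\partial_t \big( \partial_\beta A^\beta \big)(0) = \partial_t^2 A^0(0) + \partial_j \partial_t A^j(0) = 0$. Thus, at least formally, the Lorenz gauge condition is propagated by the evolution.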
\\ After putting everything together, we arrive at the following initial value problem \begin{align} \partial_\alpha \partial^\alpha A^\beta &= J^\beta \qquad \hspace{17.7ex} (t,x) \in \mathbb{R}_t \times \T_x^d, \label{intro:eq-covariant-Lorenz-e1} \\ \partial_\alpha A^\alpha &= 0 \qquad \hspace{19.2ex} (t,x) \in \mathbb{R}_t \times \T_x^d, \label{intro:eq-covariant-Lorenz-e2} \\ \partial_\alpha \partial^\alpha \phi + 2i \partial^\alpha \big( A_\alpha \phi \big) &= (A_\alpha A^\alpha - \scrm^{\hspace{-0.4ex}2}) \phi + \zeta \qquad \hspace{1.25ex} (t,x) \in \mathbb{R}_t \times \T_x^d, \label{intro:eq-covariant-Lorenz-e3} \\ A^\alpha[0]=0, \qquad \phi[0]&=(\phi_0,\phi_1).\label{intro:eq-covariant-Lorenz-e4} \end{align} \begin{remark}[Random vector potential] The vector potential $A^\alpha$ is determined by the wave equation \eqref{intro:eq-covariant-Lorenz-e1}, the gauge condition \eqref{intro:eq-covariant-Lorenz-e2}, and the initial data $A^\alpha[0]$. As part of our stochastic gauge-covariant wave equation \eqref{intro:eq-covariant}, we have therefore introduced a model for a random vector potential. While this article is only focused on wave equations, our model for the random vector potential $A^\alpha$ may also be interesting in other settings. For example, it may be interesting to study the Schr\"{o}dinger equation for a quantum particle in the electromagnetic field corresponding to the random vector potential $A^\alpha$. \end{remark} \subsubsection{Renormalization} We now describe the final step in the derivation of our covariant wave equation with space-time white noise, which concerns a renormalization and the resulting infinite and time-dependent mass. In our discussion of the renormalization, we specialize to the spatial dimension $d=2$. This is because the form of the renormalization depends heavily on the analytical properties of the stochastic forcing, which in turn depend heavily on the dimension. As we will see in Section \ref{section:vector}, the vector potential $(A^\alpha)_{\alpha=0}^2$ only has spatial regularity $0-$ and, as may then be expected, the quadratic expression $A_\alpha A^\alpha$ diverges in the sense of space-time distributions. To counteract this divergence, we need introduce a frequency-truncation and a divergent, time-dependent mass $\scrm_{\leq N}\colon \R_t \rightarrow \R$. To be precise, we let $P_{\leq N}$ be the Littlewood-Paley projection from \eqref{prelim:eq-littlewood-paley} and let $\scrm_{\leq N}$ be as in Definition \ref{vector:def-mass} below. Then, we consider the frequency-truncated initial value problem \begin{align} \partial_\alpha \partial^\alpha A^\beta_{\leq N} &= P_{\leq N} J^\beta \qquad \hspace{23.4ex} (t,x) \in \mathbb{R}_t \times \T_x^2, \label{intro:eq-covariant-Lorenz-truncated-e1} \\ \partial_\alpha A^\alpha_{\leq N} &= 0 \qquad \hspace{29.6ex} (t,x) \in \mathbb{R}_t \times \T_x^2, \label{intro:eq-covariant-Lorenz-truncated-e2} \\ \partial_\alpha \partial^\alpha \phi_{\leq N} + 2i \partial_\alpha \big( A^\alpha_{\leq N} \phi_{\leq N} \big) &= (A_{\leq N,\alpha}A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N}) \phi_{\leq N} + \zeta \qquad \hspace{1.25ex} (t,x) \in \mathbb{R}_t \times \T_x^2, \label{intro:eq-covariant-Lorenz-truncated-e3} \\ A^\alpha_{\leq N}[0]=0, \qquad \phi_{\leq N}[0]&=(\phi_0,\phi_1). 
\label{intro:eq-covariant-Lorenz-truncated-e4} \end{align} In \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4}, the sub-script $\leq \hspace{-0.5ex} N$ is used to denote the new unknowns $(A_{\leq N}^\alpha,\phi_{\leq N})$ and the only Littlewood-Paley operator occurs in $P_{\leq N} J^\beta$. For a fixed frequency-truncation parameter $N\geq 1$, it is easy to see that \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4} has a unique global solution. Indeed, the inhomogeneous linear wave equation $\partial_\alpha \partial^\alpha A^\beta_{\leq N}=P_{\leq N} J^\beta$ has a unique global solution. Furthermore, it satisfies the Lorenz gauge condition $\partial_\alpha A^{\alpha}_{\leq N}=0$ and is smooth in the spatial variables. As a result, energy estimates easily imply the global well-posedness of the covariant wave equation for $\phi_{\leq N}$. Thus, our main question now concerns the convergence of $(A_{\leq N}^\alpha, \phi_{\leq N})$ as $N$ tends to infinity. \subsubsection{Main results} We previously introduced the $(1+2)$-dimensional covariant wave equation with space-time white noise in the Lorenz gauge \eqref{intro:eq-covariant-Lorenz-e1}-\eqref{intro:eq-covariant-Lorenz-e4} and its frequency-truncated version \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4}. In our main result, we obtain the probabilistic global well-posedness of the corresponding initial value problem. In this statement, we use the notation \begin{equation}\label{intro:eq-Sobolev} \mathscr{H}_x^s (\T^2) := H_x^s (\T^2) \times H_x^{s-1}(\T^2), \end{equation} where $s\in \R$ and $H_x^s(\T^2)$ denotes the usual $L^2$-based Sobolev space (see \eqref{prelim:eq-Sobolev} below). \begin{theorem}[Probabilistic global well-posedness]\label{intro:thm-main} Let $d=2$. Furthermore, let $\delta>0$, let $\zeta$ be complex-valued space-time white noise, let $(J^\alpha)_{\alpha=0}^2$ be a space-time white noise current, and let $(\phi_0,\phi_1) \in \mathscr{H}_x^{1/4}(\T^2)$. For all $N\geq 1$, let $(A_{\leq N}[t],\phi_{\leq N}[t])$ be the unique global solution of \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4}. Then, the limit \begin{equation} (A[t],\phi[t]) := \lim_{N\rightarrow \infty} (A_{\leq N}[t],\phi_{\leq N}[t]) \end{equation} almost surely exists in \begin{equation} \Cs_t^0 \mathscr{H}_x^{-\delta} \big( [-T,T]\times \T^2 \rightarrow \R^3) \times \Cs_t^0 \mathscr{H}_x^{-\delta} \big( [-T,T]\times \T^2 \rightarrow \C) \end{equation} for all $T>0$. \end{theorem} \begin{remark} The global well-posedness follows rather easily from our local estimates, which are the main part of this article, and the algebraic structure of \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4}. Indeed, since the vector potential $A^\alpha$ solves a stochastic linear wave equation, it obeys global estimates (see e.g. Corollary \ref{vector:cor-regularity}). While the evolution equation for the scalar field \eqref{intro:eq-covariant-Lorenz-truncated-e3} is nonlinear in $(A^\alpha,\phi)$, it is still linear in $\phi$. Since $A^\alpha$ is already known to obey global bounds, our local estimates and a Gronwall-type argument then imply global bounds for $\phi$. 
\\ The only point of caution is that the initial data $(\phi_0,\phi_1)$ in Theorem \ref{intro:thm-main} belongs to $\mathscr{H}_x^{1/4}$, which is not preserved by the evolution of the scalar field. However, this problem can be circumvented by iterating our estimates over a sequence of growing intervals (rather than iterating over a sequence of initial times), and we refer the reader to the proof of Proposition \ref{proof:prop-psi} for the details. \end{remark} We now briefly discuss the main difficulties and ideas in the proof of Theorem \ref{intro:thm-main}. During this discussion, we formally set $N=\infty$. Most of our argument deals with the Duhamel integral of the derivative nonlinearity in \eqref{intro:eq-covariant-Lorenz-e3}, i.e., \begin{equation}\label{intro:eq-bilinear} \Duh \big[ \partial_\alpha (A^\alpha \phi) \big]. \end{equation} In the literature on wave equations, bilinear estimates for terms similar to \eqref{intro:eq-bilinear} have a long history. Bilinear estimates were first utilized by Klainerman and Machedon in \cite{KM93} and then further studied in \cite{KM95D,KM97,KM97D,KS97,KT99,Tao01b}. For a systematic treatment, we refer the reader to \cite{DFS10,DFS12,KS02}. Due to the space-time white noise in \eqref{intro:eq-covariant-Lorenz-e1}-\eqref{intro:eq-covariant-Lorenz-e4}, however, both $A^\alpha$ and $\phi$ only have spatial regularity $0-$. This regularity is well-beyond the regime of the aforementioned bilinear estimates (see Remark \ref{operator:rem-comparison-bilinear}) and, more generally, deterministic methods\footnote{Based on \cite{CP14,Pecher20}, we expect that the deterministic well-posedness theory requires (at the very least) that $(A,\phi) \in \Cs_t^0 \mathscr{H}_x^s \times \Cs_t^0 \mathscr{H}_x^s$ with $s\geq 1/4$. In fact, the threshold $s=1/4$ shows up even in the much simpler cubic wave equation, where it corresponds to the Lorentz-critical regularity.}. For this reason, our argument is based on probabilistic rather than deterministic estimates.\\ One of the main ideas behind probabilistic approaches to random dispersive equations is to decompose the solution into a term with a random structure, such as the solution to the inhomogeneous linear wave equation with stochastic forcing, and a smoother remainder (see e.g. \cite{B96,B21,DNY19,GKO18a}). In our setting, this method is complicated by the absence of nonlinear smoothing. As will be discussed in Section \ref{section:ansatz}, the absence of nonlinear smoothing is a result of low$\times$high-interactions in $\partial_\alpha (A^\alpha \phi)$. In order to isolate the main term in $\phi$, we therefore define $\chi \colon \R_t \times \T_x^2 \rightarrow \C$ as the solution of \begin{equation}\label{intro:eq-chi} \partial_\alpha \partial^\alpha \chi + 2 i \partial_\alpha \big( A^\alpha \parall \chi \big) = \zeta, \end{equation} where $\parall$ is the low$\times$high-paraproduct from Definition \ref{prelim:def-para-products}. In order to analyze \eqref{intro:eq-chi}, we introduce \begin{equation}\label{intro:eq-z-L} z := \Duh \big[ \zeta \big] \qquad \text{and} \qquad \Lin[ll] \chi := 2i \Duh \big[ \partial_\alpha \big( A^\alpha \parall \chi \big) \big]. \end{equation} We emphasize that both the Picard iterate $z$ and operator $\Lin[ll] $ are random. Since $z$ depends only on the complex-valued space-time white noise $\zeta$ and $\Lin[ll]$ depends only on the space-time white noise current $(J^\beta)_{\beta=0}^2$, however, $z$ and $\Lin[ll]$ are probabilistically independent. 
Equipped with \eqref{intro:eq-z-L}, we can then write \begin{equation}\label{intro:eq-chi-integral} \chi = \big( 1 + \Lin[ll][] \big)^{-1} z. \end{equation} Since the analysis of the random Picard iterate $z$ is rather elementary, the main step in the analysis of $\chi$ lies in the analysis of the random operator $\Lin[ll][]$ and its resolvent $( 1 + \Lin[ll][] )^{-1}$. This is done using the random tensor estimates from \cite{DNY20}, which are combined with problem-specific lattice point counting estimates (Lemma \ref{prelim:lem-basic-counting}). We mention that the para-controlled approach to stochastic wave equations from \cite{GKO18a}, which has also been used in \cite{B20II,BDNY22,OOT20,OOT21}, corresponds to the Neumann series approximation \begin{equation*} \big( 1 + \Lin[ll][] \big)^{-1} z = \sum_{n=0}^\infty (-1)^n \big( \Lin[ll][] \big)^n z \approx z - \Lin[ll][] z. \end{equation*} Since $ \Lin[ll][]$ is not smoothing, $( \Lin[ll][])^n z $ is not smoother than $z$ or $\Lin[ll][] z$, and therefore the para-controlled approach cannot be applied to our model \eqref{intro:eq-covariant-Lorenz-e1}-\eqref{intro:eq-covariant-Lorenz-e4}.\\ While there is no nonlinear smoothing for low$\times$high-interactions in \eqref{intro:eq-bilinear}, both high$\times$high and high$\times$low-interactions exhibit nonlinear smoothing. For the high$\times$high-interactions, the nonlinear smoothing estimate relies heavily on the random structure of $\chi$ (see Proposition \ref{product:prop-main}). In particular, we rely on the probabilistic independence of the vector potential $A^\alpha$ and Picard iterate $z$. For the high$\times$low-interaction, the nonlinear smoothing estimate utilizes the Lorenz gauge condition, which allows us to push derivatives from $A^\alpha$ onto $\phi$ (see e.g. \eqref{intro:eq-phi-Lorenz}). As a consequence of the nonlinear smoothing estimates for high$\times$high and high$\times$low-interactions, it can be shown that the difference $\phi-\chi$ has regularity $1/4-$ and is therefore smoother than $\phi$. \begin{remark} In \cite{B21}, the first author proved the probabilistic well-posedness of a quadratic derivative nonlinear wave equation on $\mathbb{R}^{1+3}$. Similar as for \eqref{intro:eq-covariant-Lorenz-e1}-\eqref{intro:eq-covariant-Lorenz-e4}, the derivative nonlinearity in \cite{B21} prevents nonlinear smoothing. However, there are significant differences between the methods of this article and \cite{B21}: First, the Ansatz in this article is much simpler than in \cite{B21}, which is possible since the random operator $\Lin[ll]$ is explicit. Second, the dispersive estimates in this article (see e.g. Proposition \ref{operator:prop-main}) are much more involved than in \cite{B21}. The reason is that the high$\times$high-product of $A^\alpha$ and $\phi$ is quite delicate, whereas the high$\times$high-products in \cite[(10)]{B21} are harmless. \end{remark} \begin{remark} In our main theorem, we prove the probabilistic well-posedness of \eqref{intro:eq-covariant-Lorenz-e1}-\eqref{intro:eq-covariant-Lorenz-e4} in \mbox{$(1+2)$-dimensions.} Of course, \eqref{intro:eq-covariant-Lorenz-e1}-\eqref{intro:eq-covariant-Lorenz-e4} can also be studied in $(1+3)$-dimensions, but proving probabilistic well-posedness in this case seems rather challenging. In $(1+3)$-dimensions, the linear stochastic objects in $A^\alpha$ and $\phi$ have regularity $-1/2-$. 
Based on the lattice point counting estimates in $(1+3)$-dimensions from \cite[Lemma 4.15]{B20II}, one further expects that the Duhamel integral of the high$\times$high-interactions in $\partial_\alpha (A^\alpha \phi)$ also has regularity $-1/2-$. Thus, the problem is critical with respect to the probabilistic scaling introduced in \cite{DNY19}. \end{remark}
\subsection{Maxwell-Klein-Gordon and the failure of a probabilistic null-form estimate}\label{section:intro-null-form} We recall that the $(1+d)$-dimensional (periodic) Maxwell-Klein-Gordon system is given by
\begin{alignat}{3} \partial_\alpha F^{\alpha \beta} &= - \Im \big( \phi \overline{D^\beta \phi} \big) &\hspace{10ex}& (t,x) \in \R_t \times \T_x^d, \label{intro:eq-MKG-1} \\ D_\alpha D^\alpha \phi &= 0 && (t,x) \in \R_t \times \T_x^d. \label{intro:eq-MKG-2} \end{alignat}
The well-posedness of \eqref{intro:eq-MKG-1}-\eqref{intro:eq-MKG-2} has been extensively studied\footnote{Most of the results on the Maxwell-Klein-Gordon equations are formulated on $\mathbb{R}^d$ instead of $\mathbb{T}^d$. Despite the elliptic aspects of the analysis, which enter through the gauge conditions, the local well-posedness theory can mostly be carried over from the Euclidean to the periodic setting.} in dimension $d=2$ in \cite{CP14,Pecher18}, in dimension $d=3$ in \cite{KM94,MS04}, in the energy-critical dimension $d=4$ in \cite{KST15,KL15,OT16}, and in high dimensions in \cite{RT04,Pecher20}. In order to understand probabilistic aspects of geometric wave equations, one would like to understand a stochastic version of \eqref{intro:eq-MKG-1}-\eqref{intro:eq-MKG-2}, which is given by
\begin{alignat}{3} \partial_\alpha F^{\alpha \beta} &= - \Im \big( \phi \overline{D^\beta \phi} \big) + J^\beta &\hspace{10ex}& (t,x) \in \R_t \times \T_x^d, \label{intro:eq-sMKG-1} \\ D_\alpha D^\alpha \phi &= \zeta && (t,x) \in \R_t \times \T_x^d. \label{intro:eq-sMKG-2} \end{alignat}
As in Subsection \ref{section:covariant}, gauge-invariance encourages us to choose $\zeta$ as a complex-valued space-time white noise. Due to the low regularity of space-time white noise, we again restrict ourselves to the $(1+2)$-dimensional setting, i.e., $d=2$. As before, the choice of the space-time current $(J^\beta)_{\beta=0}^2$ offers more flexibility. Since Theorem \ref{intro:thm-failure} below is not concerned with the space-time currents $(J^\beta)_{\beta=0}^2$, we make the simplest choice and set $J^\beta =0$. \\ The only difference between the stochastic Maxwell-Klein-Gordon equations \eqref{intro:eq-sMKG-1}-\eqref{intro:eq-sMKG-2} and the covariant wave equation \eqref{intro:eq-covariant} is the term $\Im ( \phi \overline{D^\beta \phi})$. However, this difference turns out to be substantial. As we will show in Theorem \ref{intro:thm-failure} below, $\Im ( \phi \overline{D^\beta \phi})$ diverges as a space-time distribution at the level of the first Picard iterate. This divergence is different from divergences which can be removed using renormalizations, since it concerns the probabilistically non-resonant component (see Remark \ref{intro:rem-failure} below). In Theorem \ref{intro:thm-failure}, we actually prove a stronger claim, which not only concerns $\Im ( \phi \overline{D^\beta \phi})$ but also a certain null-form. In order to state the stronger claim, we first introduce additional notation.
\\ For all $1\leq j,k\leq 2$ and $\phi,\psi\colon \R_t \times \T_x^2 \rightarrow \C$, we define the null-form \begin{equation}\label{intro:eq-Qjk} Q_{jk}\big( \phi, \psi \big) = \partial_j \phi \, \partial_k \psi - \partial_k \phi \, \partial_j \psi. \end{equation} We note that $Q_{11}(\phi,\psi)=Q_{22}(\phi,\psi)=0$, which implies that only $Q_{12}(\phi,\psi)$ is non-trivial. We also introduce the Leray-Projection $\Leray$, which is defined by \begin{equation}\label{intro:eq-Leray} \Leray B := B - \Delta^{-1} \nabla \big( \nabla \cdot B \big) \end{equation} for all smooth $B\colon \R_t \times \T_x^2 \rightarrow \R^2$. From a direct calculation, it follows that \begin{equation}\label{intro:eq-null-identity} \Leray \Im \big( \phi \overline{\nabla \phi} \big)_j = \frac{1}{(2\pi)^2} \int_{\T^2} \dx \, \Im \big( \phi \overline{\nabla \phi} \big)_j - \Delta^{-1} \partial^k \Im \big( Q_{jk}(\phi, \overline{\phi}) \big). \end{equation} We note that the constant term in \eqref{intro:eq-null-identity} only appears because we are working on the torus $\T_x^2$. The identity \eqref{intro:eq-null-identity}, which rewrites the Leray-projection of $\Im \big( \phi \overline{\nabla \phi} \big)$ in terms of the null-forms, is often used when working in the Coulomb gauge (see e.g. \cite{KM94,MS04}). This is because the Coulomb condition allows the introduction of the Leray-projection into the evolution equations for the vector potential. Finally, we let $z\colon \R_t \times \T_x^2 \rightarrow \C$ be the first Picard iterate of \eqref{intro:eq-sMKG-2}, i.e., the solution of \begin{equation}\label{intro:eq-z} \partial_\alpha \partial^\alpha z = \zeta, \qquad z[0]=0. \end{equation} Equipped with the null-forms $Q_{jk}$ and the inhomogeneous linear wave $z$, we can now state the failure of a probabilistic null-form estimate. \begin{theorem}[Failure of a probabilistic null-form estimate]\label{intro:thm-failure} Let $d=2$, let $1\leq j \leq 2$, let $0<c\ll 1$ be sufficiently small, let $A=0$, and let $z$ be as in \eqref{intro:eq-z}. Then, there exist two deterministic functions $\varphi,\psi\in \Cs^\infty_c((0,\infty) \times \T_x^2 \rightarrow [-1,1])$ such that, for all $N\in \dyadic$, \begin{equation}\label{intro:eq-failure-1} \operatorname{Var} \bigg( \int_\R \dt \int_{\T^2} \dx \, \varphi(t,x) \Im \big( P_{\leq N} z \, \overline{D^j P_{\leq N} z} \big) \bigg) \geq c \log(N) - c^{-1} \end{equation} and \begin{equation}\label{intro:eq-failure-2} \operatorname{Var} \bigg( \int_\R \dt \int_{\T^2} \dx \, \psi(t,x) \Im \big( Q_{12}\big( P_{\leq N} z, \overline{P_{\leq N}z} \big) \big) \bigg) \geq c \log(N) - c^{-1}. \end{equation} \end{theorem} \begin{remark}\label{intro:rem-failure} As mentioned above, the divergence in \eqref{intro:eq-failure-1} and \eqref{intro:eq-failure-2} is different from divergences which are often removed using renormalizations (such as the divergence of $A_{\leq N,\alpha} A^{\alpha}_{\leq N}$ discussed in Subsection \ref{section:covariant}). This is because \eqref{intro:eq-failure-1} and \eqref{intro:eq-failure-2} involve the variance and not the expectation of the space-time integrals. \end{remark} We note that Theorem \ref{intro:thm-failure} does not strictly imply the ill-posedness of the stochastic Maxwell-Klein-Gordon equation \eqref{intro:eq-sMKG-1}-\eqref{intro:eq-sMKG-2}. However, it does expose a potential obstruction towards proving probabilistic well-posedness and prevents us from using Picard iteration schemes (without additional ingredients). 
\\ \textbf{Acknowledgements:} The authors thank Ilya Chevyrev, Sung-Jin Oh, and Daniel Tataru for interesting discussions during the preparation of this work. B.B. was partially supported by the NSF under Grant No. DMS-1926686. I.R. is partially supported by a Simons Investigator Award. \section{Ansatz}\label{section:ansatz} In this section, we describe our Ansatz for solving \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4}, i.e., the covariant wave equation with space-time white noise in the Lorenz gauge. We first rigorously define all terms in our Ansatz and state the main proposition of this section (Proposition \ref{ansatz:prop-ansatz}). At the end of this section, we then describe the heuristic motivation behind our Ansatz. We first define the Duhamel integral and the wave propagator by \begin{align} \Duh \big[ G \big] &:= - \int_0^t \ds \, \frac{\sin\big( (t-s)|\nabla|\big)}{|\nabla|} G(s), \label{ansatz:eq-duhamel} \\ \Wp(t) (\phi_0,\phi_1) &:= \cos\big( t |\nabla|\big) \phi_0 + \frac{\sin\big(t |\nabla|\big)}{|\nabla|} \phi_1, \label{ansatz:eq-wp} \end{align} for all $t \in \R$, smooth $G\colon \R_t \times \T_x^2 \rightarrow \C$, and smooth $\phi_0,\phi_1 \colon \T_x^2 \rightarrow \C$. If $(J^\alpha)_{\alpha=0}^2$ is the space-time white noise current from Definition \ref{intro:def-white-noise-current} and $\zeta$ is a complex-valued space-time white noise, we define the vector potential $A$, its frequency-truncation $A_{\leq N}$, and the complex-valued field $z$ by \begin{align} A^\alpha &:= \Duh \big[ J^\alpha \big] \qquad \text{for all } 0 \leq \alpha \leq 2, \label{ansatz:eq-A} \\ A^\alpha_{\leq N}&:= \Duh \big[ P_{\leq N} J^\alpha \big] \qquad \text{for all } 0 \leq \alpha \leq 2, \label{ansatz:eq-A-truncated} \\ z &:= \Duh \big[ \zeta \big] \label{ansatz:eq-z}. \end{align} We note that, since the evolution equations for $A^\alpha$ and $A^\alpha_{\leq N}$ are linear, constant-coefficient wave equations, it holds that $A_{\leq N}^\alpha= P_{\leq N} A^\alpha$. In the same spirit, we now simplify our notation by writing \begin{equation} A^\alpha_K := P_K A^\alpha. \end{equation} As we will see in Section \ref{section:vector} below, both the vector potential $A^\alpha_{\leq N}$ and the complex-valued field $z$ have regularity $0-$. Using the notation from \eqref{ansatz:eq-A}-\eqref{ansatz:eq-z}, we can rewrite \eqref{intro:eq-covariant-Lorenz-truncated-e3} as \begin{equation}\label{ansatz:eq-phiN-a} \phi_{\leq N} + 2 i \Duh \Big[ \partial_\alpha \big( A^\alpha_{\leq N} \phi_{\leq N} \big) \Big] = z + \Duh \Big[ \Big( A_{\leq N,\alpha} A_{\leq N}^\alpha - \scrm^{\hspace{-0.4ex}2}_{\leq N}\Big) \phi_{\leq N} \Big] + \Wp(t) (\phi_0,\phi_1). \end{equation} As will be discussed at the end of this section, \eqref{ansatz:eq-phiN-a} cannot be treated as a perturbation of the constant-coefficient linear wave equation. To be slightly more precise, the low$\times$high-interactions in $\partial_\alpha (A^\alpha_{\leq N} \phi_{\leq N})$ do not exhibit nonlinear smoothing and the high$\times$high-interactions in $\partial_\alpha (A^\alpha_{\leq N} \phi_{\leq N})$ cannot be defined without additional structural information on $\phi_{\leq N}$. In order to overcome these difficulties, we let $\parall$, $\parasim$, and $\paragg$ be the low$\times$high, high$\times$high, and high$\times$low-paraproducts from Definition \ref{prelim:def-para-products} below. 
We then define the three random operators
\begin{align} \Lin[ll][\leq N] \phi &:= 2i \Duh \Big[ \partial_\alpha \Big( A_{\leq N}^\alpha \parall \phi \Big) \Big] \label{ansatz:eq-Lin-ll}, \\ \Lin[sim][\leq N] \phi &:= 2i \Duh \Big[ \partial_\alpha \Big( A_{\leq N}^\alpha \parasim \phi \Big) \Big] \label{ansatz:eq-Lin-sim}, \\ \Lin[gg][\leq N] \phi &:= 2i \Duh \Big[ \partial_\alpha \Big( A_{\leq N}^\alpha \paragg \phi \Big) \Big] \label{ansatz:eq-Lin-gg}. \end{align}
We note that, due to the definitions in \eqref{ansatz:eq-Lin-ll}, \eqref{ansatz:eq-Lin-sim}, and \eqref{ansatz:eq-Lin-gg}, it holds that
\begin{equation}\label{ansatz:eq-Lin-sum} \Big( \Lin[ll][\leq N] + \Lin[sim][\leq N] + \Lin[gg][\leq N] \Big) \phi = 2i \Duh \Big[ \partial_\alpha \Big( A^\alpha_{\leq N} \phi \Big) \Big]. \end{equation}
Equipped with our definitions, we can now state our Ansatz in the form of a proposition.
\begin{proposition}[Ansatz]\label{ansatz:prop-ansatz} Let $N\in \dyadic$, let $A_{\leq N}$ be as in \eqref{ansatz:eq-A-truncated}, let $\chi_{\leq N}, \psi_{\leq N}\colon \R_t \times \T_x^2 \rightarrow \C$, and let $\phi_{\leq N} = \chi_{\leq N} + \psi_{\leq N}$. Furthermore, assume that $(\chi_{\leq N},\psi_{\leq N})$ is a solution of \begin{align} \chi_{\leq N} + \Lin[ll][\leq N] \chi_{\leq N} &= z \label{ansatz:eq-chi} \end{align} and \begin{align} \psi_{\leq N} + \Lin[ll][\leq N] \psi_{\leq N} =& \, - \big( \Lin[sim][\leq N] + \Lin[gg][\leq N]\big) \big( \chi_{\leq N} + \psi_{\leq N} \big) \label{ansatz:eq-psi-1} \\ +& \, \Duh \Big[ \Big( A_{\leq N,\alpha} A_{\leq N}^\alpha - \scrm^{\hspace{-0.4ex}2}_{\leq N}\Big) \big(\chi_{\leq N}+\psi_{\leq N}\big) \Big] + \Wp(t) (\phi_0,\phi_1).\label{ansatz:eq-psi-2} \end{align} Then, $(A_{\leq N},\phi_{\leq N})$ solves the covariant wave equation with space-time white noise in the Lorenz gauge, i.e., \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4}. \end{proposition}
\begin{proof} We have to verify that \eqref{intro:eq-covariant-Lorenz-truncated-e1}-\eqref{intro:eq-covariant-Lorenz-truncated-e4} are satisfied. We first note that the initial conditions in \eqref{intro:eq-covariant-Lorenz-truncated-e4} can be verified directly from \eqref{ansatz:eq-A-truncated}, \eqref{ansatz:eq-chi}, and \eqref{ansatz:eq-psi-1}-\eqref{ansatz:eq-psi-2}. The wave equation \eqref{intro:eq-covariant-Lorenz-truncated-e1} is equivalent to its Duhamel integral formulation \eqref{ansatz:eq-A-truncated}. The Lorenz condition \eqref{intro:eq-covariant-Lorenz-truncated-e2} follows directly from \eqref{ansatz:eq-A-truncated}, the continuity equation for the current \eqref{intro:eq-continuity-equation}, and the initial condition $A_{\leq N}[0]=0$. Finally, the wave equation \eqref{intro:eq-covariant-Lorenz-truncated-e3} follows directly from \eqref{ansatz:eq-Lin-sum} and the evolution equations for $\chi_{\leq N}$ and $\psi_{\leq N}$, i.e., \eqref{ansatz:eq-chi} and \eqref{ansatz:eq-psi-1}-\eqref{ansatz:eq-psi-2}. \end{proof}
At the end of this section, we motivate our Ansatz heuristically. To this end, we first recall that $A_{\leq N}$ and $z$ (and therefore also $\phi_{\leq N}$) have regularity at most $0-$.
In the following, it is useful to employ a heuristic which will be partially justified by Lemma \ref{prelim:lem-basic-counting} below: If $u,v\colon \R_t \times \T_x^2 \rightarrow \C$ are random waves and $K,L,M \in \dyadic$, then multi-linear dispersive effects gain a factor of $\min(K,L,M)^{-1/4}$ over trivial estimates of \begin{equation*} \Big\| P_M \int_0^t \ds \sin\big( (t-s) |\nabla|\big) P_K u(s) \, P_L v(s) \Big\|_{\mathcal{N}} \end{equation*} for all relevant norms $\mathcal{N}$. Equipped with this heuristic, we now separately discuss low$\times$high, high$\times$high, and high$\times$low-interactions. \\ \emph{Low$\times$high-interactions:} The corresponding contribution is given by \begin{equation}\label{intro:eq-motivation-low-high} \Lin[ll][\leq N] \phi_{\leq N} = 2i \int_0^t \ds \, \sin\big((t-s)|\nabla|\big) \frac{\partial_\alpha}{|\nabla|} \big( A^\alpha_{\leq N} \parall \phi_{\leq N}\big). \end{equation} The $\partial_\alpha$-derivatives are compensated by the inverse gradient $|\nabla|^{-1}$. Since multi-linear dispersive effects do not gain derivatives of the high-frequency input $\phi_{\leq N}$, we expect that the regularity of \eqref{intro:eq-motivation-low-high} is no better than the regularity of $\phi$ and hence given by $0-$. In particular, there is no probabilistic nonlinear smoothing. In order to address this difficulty, we view $\Lin[ll][\leq N]$ as a random operator and estimate the resolvent $(1+\Lin[ll][\leq N])^{-1}$. \\ \emph{High$\times$high-interactions:} The corresponding contribution is given by \begin{equation}\label{intro:eq-motivation-high-high} \Lin[sim][\leq N] \phi_{\leq N} = 2i \int_0^t \ds \, \sin\big((t-s)|\nabla|\big) \frac{\partial_\alpha}{|\nabla|} \big( A^\alpha_{\leq N} \parasim \phi_{\leq N} \big). \end{equation} As above, the $\partial_\alpha$-derivatives are compensated by the inverse gradient $|\nabla|^{-1}$. Since high$\times$high$\rightarrow$high-interactions gain one quarter of a derivative from multi-linear dispersive effects, we expect that \eqref{intro:eq-motivation-high-high} has regularity $1/4-$. However, since high$\times$high$\rightarrow$low-interactions only gain in the low frequency-scale, and $A^\alpha_{\leq N}$ and $\phi_{\leq N}$ have negative regularity, \eqref{intro:eq-motivation-high-high} cannot be defined without additional information on $A^\alpha_{\leq N}$ and $\phi_{\leq N}$. This additional information consists of the probabilistic independence of $A_{\leq N}$ and $z$. \\ \emph{High$\times$low-interactions:} The corresponding contribution is given by \begin{equation}\label{intro:eq-motivation-high-low} \Lin[gg][\leq N] \phi_{\leq N} = 2i \int_0^t \ds \, \sin\big((t-s)|\nabla|\big) \frac{\partial_\alpha}{|\nabla|} \big( A^\alpha_{\leq N} \paragg \phi_{\leq N} \big). \end{equation} As above, the $\partial_\alpha$-derivatives are compensated by the inverse gradient $|\nabla|^{-1}$. From our discussion of low$\times$high-interactions above, one may expect that \eqref{intro:eq-motivation-high-low} only has regularity $0-$. However, due to the Lorenz gauge condition $\partial_\alpha A^{\alpha}_{\leq N}=0$, it holds that \begin{equation*} \partial_\alpha \big( A^\alpha_{\leq N} \paragg \phi_{\leq N} \big) = A^\alpha_{\leq N} \paragg \partial_\alpha \phi_{\leq N}. \end{equation*} This allows us to push the derivatives onto the low-frequency input $\phi_{\leq N}$. By also utilizing multi-linear dispersive effects, we expect that it is possible to prove that \eqref{intro:eq-motivation-high-low} has regularity $1/4-$. 
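\\ For the reader's convenience, we also record the elementary computation behind the identity $\partial_\alpha \big( A^\alpha_{\leq N} \paragg \phi_{\leq N} \big) = A^\alpha_{\leq N} \paragg \partial_\alpha \phi_{\leq N}$ used above. Since the Littlewood-Paley projections commute with the derivatives $\partial_\alpha$, the Leibniz rule yields
\begin{equation*}
\partial_\alpha \big( A^\alpha_{\leq N} \paragg \phi_{\leq N} \big) = \sum_{\substack{K,L \in \dyadic \colon \\ K\gg L}} \Big( \big( \partial_\alpha P_K A^\alpha_{\leq N} \big) \, P_L \phi_{\leq N} + P_K A^\alpha_{\leq N} \, P_L \partial_\alpha \phi_{\leq N} \Big),
\end{equation*}
and the first summand vanishes since $\partial_\alpha P_K A^\alpha_{\leq N} = P_K \partial_\alpha A^\alpha_{\leq N} = 0$ due to the Lorenz gauge condition \eqref{intro:eq-covariant-Lorenz-truncated-e2}. The remaining summand is exactly $A^\alpha_{\leq N} \paragg \partial_\alpha \phi_{\leq N}$.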
\section{Notation and preliminaries}\label{section:preliminaries} In this section, we recall definitions, estimates, and notation from the previous literature. \subsection{General notation}\label{section:notation} In all sections except for the introduction, we restrict to the spatial dimension $d=2$. Throughout this article, Greek indices such as $\alpha$ and $\beta$ take values in $\{0,1,2\}$ and Roman indices such as $a$ and $b$ take values in $\{1,2\}$. For example, we write \begin{equation*} \partial_\alpha A^\alpha = \partial_0 A^0 + \partial_{1} A^1 + \partial_{2} A^2 \quad \text{and} \quad \partial_a A^a = \partial_{1} A^1 + \partial_{2} A^2. \end{equation*} Let $\delta>0$ be as in the statement of Theorem \ref{intro:thm-main}. We introduce fixed parameters $\delta_0,\delta_1,\delta_2,\kappa>0$, $b_-\in (0,1/2)$, and $b_0,b_+\in (1/2,1)$ satisfying \begin{equation}\label{prelim:eq-parameter-condition} \kappa \ll 1/2-b_- \ll b_0 -1/2 \ll b_+ - 1/2 \ll \delta_0 \ll \delta_1 \ll \delta_2^2 \ll \delta_2 \ll \delta. \end{equation} We also introduce an implicit parameter $\theta=\theta(b_-,b_0,b_+,\delta_0,\delta_1,\delta_2,\kappa)>0$, whose value is allowed to change from line to line. \\ For any $A,B>0$, we write \begin{equation}\label{prelim:eq-lesssim} A\lesssim B \qquad \text{if} \qquad A \leq CB, \end{equation} where $C$ is a constant depending only on the parameters in \eqref{prelim:eq-parameter-condition}. Similarly, we write \begin{align} A \gtrsim B \qquad &\text{if} \qquad B\lesssim A, \label{prelim:eq-sim}\\ A \sim B \qquad &\text{if} \qquad A\lesssim B \quad \text{and} \quad B \lesssim A. \label{prelim:eq-gtrsim} \end{align} With a slight abuse of notation, we deviate from \eqref{prelim:eq-lesssim}, \eqref{prelim:eq-sim}, and \eqref{prelim:eq-gtrsim} when the quantities are frequency-scales $M,N\in \dyadic$. In that case, we write \begin{align*} M \lesssim N \qquad &\text{if} \quad M \leq 2^{10} N, \\ M \gtrsim N \qquad &\text{if} \quad M \geq 2^{-10} N, \\ M \sim N \qquad &\text{if} \quad 2^{-10} N \leq M \leq 2^{10} N. \end{align*} We also write \begin{align*} M \ll N \qquad &\text{if} \quad M < 2^{-10} N, \\ M \gg N \qquad &\text{if} \quad M > 2^{10} N. \end{align*} The precise meaning of $\lesssim$, $\sim$, and $\gtrsim$ will always be clear from context and should not cause any confusion. \subsection{Probability theory}\label{section:probability} We let $(\Omega,\mathcal{E},\mathbb{P})$ be an abstract probability space. Throughout this article, all random variables will be defined on $\Omega$, all events will be contained in $\mathcal{E}$, and all probabilities will be measured with respect to $\mathbb{P}$. \\ We first recall a few basic facts regarding maxima, moments, and tails of random variables. For a more detailed treatment (and the proofs), we refer to \cite{V18}. \begin{definition}\label{prelim:def-Psi} Let $\gamma>0$ and let $X$ be a random variable. Then, we define \begin{equation} \big\| X \big\|_{\Psi_\gamma} := \sup_{p\geq 1} \frac{\E \big[ |X|^p \big]^{1/p}}{p^{1/\gamma}}. \end{equation} \end{definition} In the next lemma, we show that the finiteness of the $\Psi_\gamma$-norm is equivalent to a stretched-exponential tail. \begin{lemma}[On moments and tails] Let $X$ be a random variable and let $\gamma>0$. Furthermore, let $C_\gamma\geq 1$ and $0<c_\gamma\leq 1$ be sufficiently large and sufficiently small, respectively. 
Then, the following implications hold: \begin{enumerate}[label=(\roman*)] \item (From moments to tails) Let $B>0$ and assume that $\| X\|_{\Psi_\gamma}\leq B$. Then, we have for all $\lambda \geq 0$ that \begin{equation*} \mathbb{P} \big( |X| \geq \lambda \big) \leq 2 \exp \Big( - c_\gamma \Big(\frac{\lambda}{B}\Big)^\gamma \Big). \end{equation*} \item (From tails to moments) Let $B>0$ and assume that \begin{equation*} \mathbb{P} \big( |X| \geq \lambda \big) \leq 2 \exp \Big( - \Big(\frac{\lambda}{B}\Big)^\gamma \Big) \end{equation*} for all $\lambda \geq 0$. Then, it holds that $\| X \|_{\Psi_\gamma}\leq C_\gamma B$. \end{enumerate} \end{lemma} In the next lemma, we estimate the $\Psi_\gamma$-norm of maxima of random variables. \begin{lemma}[Maxima of random variables]\label{prelim:lem-maxima-random} Let $J\in \mathbb{N}$ and let $(X_j)_{j=1}^J$ be a family of random variables. Then, it holds that \begin{equation*} \big\| \max_{j=1,\hdots, J} |X_j| \big\|_{\Psi_{\gamma}} \lesssim_\gamma \log ( 2+J )^{\frac{1}{\gamma}} \max_{j=1,\hdots, J} \big\| X_j \big\|_{\Psi_\gamma}. \end{equation*} \end{lemma} Finally, we recall a Gaussian hypercontractivity estimate, which is phrased in the language of Definition \ref{prelim:def-Psi}. \begin{lemma}[Gaussian hypercontractivity]\label{prelim:lem-hypercontractivity} Let $J\in \mathbb{N}$, let $(g_j)_{j=1}^J$ be a family of Gaussians, and let $X$ be a polynomial in $(g_j)_{j=1}^J$ of degree less than or equal to $m$ . Then, it holds that \begin{equation}\label{prelim:eq-hypercontractivity} \big\| X \big\|_{\Psi_{1/m}} \lesssim_m \E \big[ |X|^2 \big]^{1/2}. \end{equation} \end{lemma} We emphasize that the implicit constant in \eqref{prelim:eq-hypercontractivity} only depends on the degree and not on $J$, i.e., the number of Gaussians. \\ We now introduce the notation associated with the space-time white noise current $(J^\alpha)_{\alpha=0}^2$ and the complex-valued space-time white noise $\zeta$. In order to define the space-time white noise current $(J^\alpha)_{\alpha=0}^2$, we let $(W_t(n))_{n\in \Z^2}$ be a sequence of Gaussian processes satisfying the following properties: \begin{enumerate}[label=(\roman*)] \item $W_t(0)$ is a standard, real-valued, two-sided Brownian motion and, for all $n\in \Z^2\backslash \{0\}$, $W_t(n)$ is a standard, complex-valued, two-sided Brownian motion. \item For all $m,n\in \Z^2 \backslash \{0\}$ satisfying $m\neq \pm n$, $W_t(m)$ and $W_t(n)$ are independent processes. \item For all $n\in \Z^2$, $\overline{W_t(n)}=W_t(-n)$. \end{enumerate} Then, we let $(W_t^j(n))_{n\in \Z^2}$, where $j=1,2$, be two independent copies of $(W_t(n))_{n\in \Z^2}$ and define \begin{equation}\label{prelim:eq-Jj} J^j(t,x) = \sum_{n\in \Z^2} e^{i\langle n,x\rangle} \partial_t W^j_t(n). \end{equation} We note that $J^1$ and $J^2$ are well-defined as space-time distributions. The time-current $J^0$ is then defined via $J^1$ and $J^2$ as in Definition \ref{intro:def-white-noise-current}. \\ In order to define the complex-valued space-time white noise, we let $(Z_t(n))_{n\in \Z^2}$ be a sequence of standard, complex-valued, two-sided Brownian motions. Similar as in \eqref{prelim:eq-Jj}, we then define \begin{equation}\label{prelim:eq-zeta} \zeta(t,x) = \sum_{n\in \Z^2} e^{i\langle n ,x \rangle} \partial_t Z_t(n). \end{equation} Furthermore, we define $\sigma_A$ and $\sigma_z$ as the $\sigma$-algebras generated by the stochastic processes $A$ and $z$ from \eqref{ansatz:eq-A} and \eqref{ansatz:eq-z}, respectively. 
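\\ Although the precise definition of the time-component $J^0$ is given in Definition \ref{intro:def-white-noise-current}, we record the following formal consequence for the reader's orientation: using the continuity equation $\partial_t J^0 = - \partial_a J^a$ together with the normalization $J^0 \big|_{t=0}=0$ (both of which are used in the proof of Lemma \ref{vector:lem-explicit} below), the representation \eqref{prelim:eq-Jj} yields
\begin{equation*}
J^0(t,x) = - i \sum_{n \in \Z^2} n_a W^a_t(n) \, e^{i \langle n,x \rangle},
\end{equation*}
where the identity is understood in the sense of space-time distributions.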
\subsection{Harmonic analysis} For any smooth $f\colon \T_x^2 \rightarrow \C$, we define its Fourier transform by \begin{equation} \widehat{f}(n) := \frac{1}{2\pi} \int_{\T^2} \dx \, f(x) e^{-i\langle n ,x\rangle}. \end{equation} For a smooth, compactly supported function $f\colon \R_t \times \T_x^2\rightarrow \C$, we define its space-time Fourier transform by \begin{equation} \widetilde{f}(\lambda,n) := \frac{1}{(2\pi)^{3/2}} \int_\R \dt \int_{\T^2} \dx \, f(t,x) e^{-i t\lambda - i \langle n ,x \rangle} \end{equation} for all $\lambda \in \R$ and $n \in \Z^2$. We let $\rho \colon \R \rightarrow [0,1]$ be a smooth, even function satisfying $\rho(\xi)=1$ for all $\xi \in [-7/8,7/8]$ and $\rho(\xi)=0$ for all $|\xi| \geq 9/8$. For all $N \in \dyadic$, we define $\rho_{\leq N}(\xi):= \rho(\xi/N)$. Furthermore, we define \begin{equation*} \rho_{1}:= \rho_{\leq 1} \quad \text{and} \quad \rho_{N} := \rho_{\leq N} - \rho_{\leq N/2} \quad \text{for all } N \geq 2. \end{equation*} We then define the associated Littlewood-Paley operators by \begin{equation}\label{prelim:eq-littlewood-paley} \widehat{P_{\leq N} f}(n) = \rho_{\leq N}(n) \widehat{f}(n) \quad \text{and} \quad \widehat{P_{N} f}(n) = \rho_{N}(n) \widehat{f}(n) \end{equation} for all smooth $f\colon \T^2\rightarrow \C$ and all $n\in \Z^2$. We also define the fattened Littlewood-Paley operators by \begin{equation}\label{prelim:eq-fattened-littlewood-paley} \widetilde{P}_N := \sum_{\substack{M \in \dyadic\colon \\ M \sim N }} P_M. \end{equation} Equipped with our Littlewood-Paley operators, we can now introduce our paraproducts. \begin{definition}[Paraproducts]\label{prelim:def-para-products} For any smooth $f,g\colon \T^2 \rightarrow \C$, we define the low$\times$high, high$\times$high, and high$\times$low-paraproduct operators by \begin{align} f \parall g &:= \sum_{\substack{K,L \in \dyadic \colon \\ K\ll L}} P_K f \, P_L g, \\ f \parasim g &:= \sum_{\substack{K,L \in \dyadic \colon \\ K\sim L}} P_K f \, P_L g, \\ f \paragg g &:= \sum_{\substack{K,L \in \dyadic \colon \\ K\gg L}} P_K f \, P_L g. \end{align} \end{definition} For any $\nu \in \R$, we define the Sobolev space $H_x^\nu(\T^2)$ and the Hölder space $\Cs_x^\nu(\T^2)$ as the completions of $C^\infty(\T^2)$ with respect to the norms \begin{align} \| f \|_{H_x^\nu(\T^2)} &:= \Big( \sum_{K} K^{2\nu} \big\| P_K f \big\|_{L^2_x(\T^2)}^2 \Big)^{1/2}, \label{prelim:eq-Sobolev} \\ \| f \|_{\Cs_x^\nu(\T^2)} &:= \sup_K K^\nu \big\| P_K f \big\|_{L^\infty(\T^2)}. \end{align} We recall from \eqref{intro:eq-Sobolev} that \begin{equation*} \mathscr{H}_x^\nu(\T^2) := H_x^\nu(\T^2) \times H_x^{\nu-1}(\T^2). \end{equation*} For any interval $I\subseteq \R_t$ and $f\colon I \times \T_x^2 \rightarrow \C$, we further define \begin{equation*} \| f[t]\|_{\Cs_t^0 \mathscr{H}_x^\nu(I \times \T^2)} := \| f \|_{\Cs_t^0 H_x^\nu(I\times \T^2)} + \| \partial_t f \|_{\Cs_t^0 H_x^{\nu-1}(I\times \T^2)}. \end{equation*} \subsection{\protect{$X^{s,b}$-spaces and tensors}}\label{section:xsb-tensor} In this subsection, we discuss $X^{\nu,b}$-spaces, tensors, and related basic estimates. For more detailed treatments, we refer to \cite{Tao06} and \cite{DNY20}. \begin{definition}[$X^{\nu,b}$-spaces]\label{prelim:def-Xnub} For all $\nu \in \R$, $b\in \R$, and $u\colon \R_t \times \T_x^2 \rightarrow \C$, we define \begin{equation} \big\| u \big\|_{X^{\nu,b}(\R)} := \big\| \langle n \rangle^\nu \langle |\lambda| - |n| \rangle^b \big(\mathcal{F}_{t,x} u\big)(\lambda,n) \big\|_{L_\lambda^2 \ell_n^2 (\R \times \Z^2)}.
\end{equation} Furthermore, for any closed interval $I\subseteq \R$ and $v\colon I \times \T_x^2 \rightarrow \C$, we define \begin{equation} \big\| v \big\|_{X^{\nu,b}(I)} := \inf \big\{ \big\| u \big\|_{X^{\nu,b}(\R)} \colon u \big|_{I\times \T^2} = v \big\}. \end{equation} \end{definition} We remark that the variable $\nu$ is used for the regularity since the more common choice $s$ will be used as a second time variable. In the following lemma, we list basic estimates involving $X^{\nu,b}$-norms. \begin{lemma}[Basic properties of $X^{\nu,b}$]\label{prelim:lem-Xnub} Let $b,b^\prime \in (1/2,1)$ satisfy $b<b^\prime$ and let $\nu \in \R$. Then, we have the following estimates: \begin{enumerate}[label=(\roman*)] \item (Linear estimate) For all $(\phi_0,\phi_1)\in \mathscr{H}_x^\nu(\T^2)$ and $T_0>0$, it holds that \begin{equation*} \big\| \mathcal{W}(t) (\phi_0,\phi_1) \big\|_{\Cs_t^0 \mathscr{H}_x^\nu([0,T_0])} + \big\| \mathcal{W}(t) (\phi_0,\phi_1) \big\|_{X^{\nu,b}([0,T_0])} \lesssim (1+T_0^2) \big\| (\phi_0,\phi_1) \big\|_{\mathscr{H}_x^{\nu}(\T^2)}. \end{equation*} \item (Continuity) For all $\phi \in X^{\nu,b}([0,T_0])$, it holds that \begin{equation*} \| \phi \|_{C_t^0 H_x^\nu([0,T_0]\times \T^2)} \lesssim \| \phi \|_{X^{\nu,b}([0,T_0])}. \end{equation*} \item (Hölder-continuity) For all $0<\gamma<b-1/2$, it holds that \begin{equation*} \| \phi \|_{C_t^\gamma H_x^\nu([0,T_0]\times \T^2)} \lesssim \| \phi \|_{X^{\nu,b}([0,T_0])}. \end{equation*} \item (Time-localization) For all $T_0>0$, $\tau \in (0,1)$, and $\phi \in X^{\nu,b^\prime}([0,T_0+\tau])$, it holds that \begin{equation*} \big\| \phi \big\|_{X^{\nu,b}([0,T_0+\tau])} \lesssim \big\| \phi \big\|_{X^{\nu,b}([0,T_0])} + \tau^{b^\prime-b} \big\| \phi \big\|_{X^{\nu,b^\prime}([0,T_0+\tau])}. \end{equation*} \item (Energy estimate) For all $\nu \in \R$, $T_0 >0$, and $F\in X^{\nu-1,b-1}([0,T_0])$, it holds that \begin{equation*} \big\| \Duh \big[ F \big] \big\|_{\Cs_t^0 \mathscr{H}_x^{\nu}([0,T_0]\times \T^2)} + \big\| \Duh \big[ F \big] \big\|_{X^{\nu,b}([0,T_0])} \lesssim (1+T_0^2) \big\| F \big\|_{X^{\nu-1,b-1}([0,T_0])}. \end{equation*} \end{enumerate} \end{lemma} In the next lemma, we state a technical estimate concerning operator norms. This estimate is essentially the opposite direction of a typical transference principle for $X^{\nu,b}$-norms. \begin{lemma}[\protect{From $X^{\nu,b}$ to $\ell^2$}]\label{prelim:lem-Xnub-to-ell2} Let $\nu_1,\nu_2,b_1,b_2\in \R$, let $T\geq 1$, and let \begin{equation*} \Lc \colon X^{\nu_1,b_1}([0,T]) \rightarrow X^{\nu_2,b_2}([0,T]). \end{equation*} Furthermore, let $\varphi_m \colon \R_t \rightarrow \C$, where $m\in \Z^2$, be a sequence of functions and define the functions $\Lc_m^{(\varphi)} \in X^{\nu_2,b_2}([0,T])$ and operator $\Lc^{(\varphi)}\colon \ell^2 \rightarrow X^{\nu_2,b_2}([0,T])$ by \begin{equation*} \Lc^{(\varphi)}_m := \Lc \big( e^{i \langle m ,x \rangle} \varphi_m \big) \qquad \text{and} \qquad \Lc^{(\varphi)} v := \sum_{m\in \Z^2} v_m \Lc^{(\varphi)}_m \quad \textup{for all } v\in \ell^2. \end{equation*} Then, it holds that \begin{equation} \big\| \Lc^{(\varphi)} \big\|_{\ell^2 \rightarrow X^{\nu_2,b_2}([0,T])} \lesssim \big\| \Lc \big\|_{X^{\nu_1,b_1}([0,T]) \rightarrow X^{\nu_2,b_2}([0,T])} \sup_{m\in \Z^2} \big\| e^{i\langle m,x \rangle} \varphi_m(t) \big\|_{X^{\nu_1,b_1}([0,T])}. \end{equation} \end{lemma} \begin{proof} This follows directly from the orthogonality of $(e^{i\langle m,x\rangle} \varphi_m(t))_{m\in \Z^2}$ in $X^{\nu_1,b_1}$. 
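In more detail, for $v \in \ell^2$ with finitely many non-zero entries, linearity gives $\Lc^{(\varphi)} v = \Lc \big( \sum_{m \in \Z^2} v_m e^{i \langle m,x \rangle} \varphi_m \big)$. Since the functions $e^{i\langle m,x \rangle} \varphi_m$ have pairwise disjoint spatial frequency supports, they are orthogonal in $X^{\nu_1,b_1}$, and therefore
\begin{equation*}
\big\| \Lc^{(\varphi)} v \big\|_{X^{\nu_2,b_2}([0,T])} \leq \big\| \Lc \big\|_{X^{\nu_1,b_1}([0,T]) \rightarrow X^{\nu_2,b_2}([0,T])} \Big( \sum_{m\in \Z^2} |v_m|^2 \big\| e^{i\langle m,x \rangle} \varphi_m \big\|_{X^{\nu_1,b_1}([0,T])}^2 \Big)^{1/2},
\end{equation*}
which is bounded by the right-hand side of the desired estimate since $\sum_{m\in \Z^2} |v_m|^2 = \| v \|_{\ell^2}^2$. The general case then follows from a limiting argument.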
\end{proof} Finally, we state an $X^{\nu,b}$-estimate for It\^{o}-integrals, which will be used to combine Lemma \ref{prelim:lem-Xnub-to-ell2} with stochastic estimates. \begin{lemma}[$X^{\nu,b}$-estimate for It\^{o}-integrals]\label{prelim:lem-Xnub-ito} Let $M \in \dyadic$, let $p\geq 1$, let $T\geq 1$, and let $Z$ be as in Subsection \ref{section:probability}. Then, it holds that \begin{equation} \E \Big[ \sup_{\substack{m\in \Z^2 \colon \\ |m|\sim M}} \Big\| e^{i\langle m,x \rangle} \int_0^t \mathrm{d}Z_s(m) \, \sin\big( (t-s) |m| \big) \Big\|_{X^{0,b_+}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta M^{4(b_+-1/2)}. \end{equation} \end{lemma} \begin{proof} We first decompose \begin{equation*} \int_0^t \mathrm{d}Z_s(m) \, \sin\big( (t-s) |m| \big) = \frac{1}{2i} \Big( e^{it|m|} \int_0^t \mathrm{d}Z_s(m) \, e^{-is|m|} - e^{-it|m|} \int_0^t \mathrm{d}Z_s(m) \, e^{is|m|} \Big). \end{equation*} For all $m\in \Z^2$, it holds that \begin{equation*} \operatorname{Law}\Big( t \mapsto \int_0^t \mathrm{d}Z_s(m) \, e^{\pm is|m|} \Big) = \operatorname{Law} \Big( t \mapsto Z_t(m) \Big). \end{equation*} Using the usual $\Cs^{1/2-}$-estimate for Brownian motions and our estimate of maxima of random variables (Lemma \ref{prelim:lem-maxima-random}), it follows for all $\epsilon>0$ that \begin{equation*} \E \Big[ \sup_{\substack{m\in \Z^2 \colon \\ |m|\sim M}} \Big\| e^{i\langle m,x \rangle} \int_0^t \mathrm{d}Z_s(m) \, \sin\big( (t-s) |m| \big) \Big\|_{X^{0,b_-}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta M^\epsilon. \end{equation*} In order to increase the $b$-parameter from $b_-$ to $b_+$, we note that \begin{align*} \int_0^t \mathrm{d}Z_s(m) \, \sin\big( (t-s) |m| \big) &= \sin\big( (t-s)|m| \big) Z_s(m) \Big|_{s=0}^t + |m| \int_0^t \ds \cos\big( (t-s) |m| \big) Z_s(m) \\ &= |m| \int_0^t \ds \cos\big( (t-s) |m| \big) Z_s(m) . \end{align*} Thus, we can increase the $b$-parameter from $b_-$ to $b_+$ by paying a factor of $M^{(b_+-b_-)}$, which is acceptable. \end{proof} This completes our discussion of $X^{\nu,b}$-spaces and we now turn to tensors and their estimates. \begin{definition}[Tensors and tensor norms]\label{prelim:def-tensors} Let $\Jc$ be a finite index set. A tensor $h=h_{k_\Jc}$ is a function from $(\Z^2)^\Jc$ into $\C$. A partition of $\Jc$ is a pair of sets $(\Xc,\Yc)$ such that $\Jc = \Xc \medcup \Yc$ and $\Xc \medcap \Yc = \emptyset$. For any partition $(\Xc,\Yc)$, we define \begin{equation}\label{prelim:eq-tensor-operator-norm} \big\| h \big\|_{k_\Xc \rightarrow k_\Yc}^2 := \sup \Big\{ \sum_{k_\Yc} \Big| \sum_{k_\Xc} h_{k_\Jc} z_{k_\Xc}\Big|^2 \colon \sum_{k_\Xc} \big| z_{k_\Xc} \big|^2 = 1 \Big\}. \end{equation} We also define the Hilbert-Schmidt norm \begin{equation} \big\| h \big\|_{k_\Jc}^2 := \sum_{k_\Jc} \big| h_{k_\Jc} \big|^2, \end{equation} which corresponds to the special cases $\Xc = \emptyset$ or $\Yc = \emptyset$ in \eqref{prelim:eq-tensor-operator-norm}. \end{definition} We now recall a special case of a random tensor estimate which was obtained in \cite{DNY20}. For the version stated below, which involves It\^{o}-integrals instead of Gaussians, we also refer to \cite[Lemma C.3]{OWZ21}. \begin{lemma}[Moment method]\label{prelim:lem-moment-method} Let $J\in \mathbb{N}$ be a positive integer and let $\Jc := \{1,\hdots, J\}$. Let $h=h_{k_0 k_{\Jc}}$ be a tensor, let $K\in \dyadic$, and assume that $h$ is supported on frequency vectors satisfying $|k_j| \leq K$ for all $0\leq j \leq J$.
Furthermore, for each $s\in \R$, let $f(s):= f_{k_0 k_{\Jc}}(s)$ be an additional tensor. Finally, let $H=H_{k_{\Jc}}$ be a random tensor defined as \begin{equation} H_{k_{\Jc}} = \sum_{k_0 \in \Z^2} h_{k_0 k_{\Jc}}\int_{\R} \mathrm{d}W_s(k_0)\, f_{k_0 k_{\Jc}}(s), \end{equation} where $(W_s(k))_{k\in \Z^2}$ is as in Subsection \ref{section:probability}. Then, it holds for all partitions $(\Xc,\Yc)$ of $\Jc$, all $\epsilon>0$, and all $p\geq 1$ that \begin{equation} \E \Big[ \big\| H \big\|_{k_\Xc \rightarrow k_\Yc}^p \Big]^{1/p} \lesssim_{J,\epsilon} \sqrt{p} K^\epsilon \max \Big( \big\| \, h \big\|_{k_0 k_\Xc \rightarrow k_\Yc}, \big\| \, h \big\|_{ k_\Xc \rightarrow k_0 k_\Yc} \Big) \big\| f_{k_0 k_{\Jc}} \big\|_{\ell^\infty_{k_0}\ell^\infty_{k_\Jc} L^2_s}. \end{equation} \end{lemma} \subsection{Lattice point counting estimates} In this subsection, we present the two-dimensional version of the basic lattice point counting estimate from \cite[Lemma 4.15]{B20II}. Combined with the tensor estimates from Subsection \ref{section:xsb-tensor}, the lattice point counting estimates will be used in Section \ref{section:operator} to estimate random linear operators. \begin{lemma}[Basic lattice point counting estimate]\label{prelim:lem-basic-counting} Let $K,L \in \dyadic$ and let $l \in \Z^2$ satisfy $|l|\sim L$. Then, it holds that \begin{align} \sup_{\mu \in \R} \# \Big\{ k \in \Z^2 \colon |k| \sim K, \, \big| |k+l| - |k| -\mu \big| \leq 1 \Big\} & \lesssim \min(K,L)^{-1/2} K^2, \label{prelim:eq-basic-counting-minus} \\ \sup_{\mu \in \R} \# \Big\{ k \in \Z^2 \colon |k| \sim K, \, \big| |k+l| + |k| -\mu \big| \leq 1 \Big\} & \lesssim K^{-1/2} K^2, \label{prelim:eq-basic-counting-plus} \\ \sup_{\mu \in \R} \# \Big\{ k \in \Z^2 \colon |k| \sim K, \, \big| |k+l| -\mu \big| \leq 1 \Big\} & \lesssim K^{-1} K^2. \label{prelim:eq-basic-counting-zero} \end{align} Furthermore, for all $\sigma \in \{ -1,0,1 \}$, it holds that \begin{equation}\label{prelim:eq-basic-counting-linear} \sup_{\mu \in \R} \sup_{\unormal \in \mathbb{S}^1} \# \Big\{ k \in \Z^2 \colon |k|\sim K, \, \big| \unormal \cdot k + \sigma |k| - \mu \big| \leq 1 \Big\} \lesssim K^{-1/2} K^2. \end{equation} \end{lemma} On the right-hand sides of \eqref{prelim:eq-basic-counting-minus}, \eqref{prelim:eq-basic-counting-plus}, \eqref{prelim:eq-basic-counting-zero} and \eqref{prelim:eq-basic-counting-linear}, we always isolate the factor $K^2$, which corresponds to the cardinality of $\{ k \in \Z^2 \colon |k|\sim K\}$. \begin{remark} While the precise form of Lemma \ref{prelim:lem-basic-counting} is most closely related to \cite[Lemma 4.15]{B20II}, similar estimates have been used for wave equations since the earliest bilinear and null-form estimates (see e.g. \cite{FK20,KM93,KM95D}). In the language of \cite{FK20}, \eqref{prelim:eq-basic-counting-minus} and \eqref{prelim:eq-basic-counting-plus} essentially correspond to weighted surface integrals of hyperboloids and ellipsoids, which were estimated in \cite[Proposition 4.3 and 4.5]{FK20}. \end{remark} \begin{proof} Since the estimates are (essentially) available in the literature, we only sketch the argument. 
Similar as in \cite[Lemma 4.15]{B20II} (and \cite[Lemma 5.1]{BDNY22} for \eqref{prelim:eq-basic-counting-plus}), the first two estimates \eqref{prelim:eq-basic-counting-minus} and \eqref{prelim:eq-basic-counting-plus} can be reduced to proving that \begin{equation}\label{prelim:eq-basic-counting-p1} \sup_{\mu_1,\mu_2\in \R} \Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \, |\xi| = \mu_1 + \mathcal{O}(1), \, \Big| \xi + |l| e_1 \Big| = \mu_2 + \mathcal{O}(1) \Big\} \Big) \lesssim \min(K,L)^{-1/2} K. \end{equation} In order to estimate \eqref{prelim:eq-basic-counting-p1}, we use an angular decomposition and therefore let $\varphi:= \angle ( \xi, e_1 ) \in [0,\pi]$. Then, we dyadically decompose \begin{align*} &\Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \, |\xi| = \mu_1 + \mathcal{O}(1), \, \Big| \xi + |l| e_1 \Big| = \mu_2 + \mathcal{O}(1) \Big\} \Big) \\ \lesssim& \, \sum_{\Phi \in 2^{-\Nzero}} \Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \, |\xi| = \mu_1 + \mathcal{O}(1), \, \Big| \xi + |l| e_1 \Big| = \mu_2 + \mathcal{O}(1), \, \min \big( \varphi, \pi - \varphi\big) \sim \Phi \Big\} \Big). \end{align*} Trivially, it holds that \begin{equation}\label{prelim:eq-basic-counting-p2} \begin{aligned} &\Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \, |\xi| = \mu_1 + \mathcal{O}(1), \, \Big| \xi + |l| e_1 \Big| = \mu_2 + \mathcal{O}(1), \, \min \big( \varphi, \pi - \varphi\big) \sim \Phi \Big\} \Big) \\ \lesssim & \, \Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \, |\xi| = \mu_1 + \mathcal{O}(1), \, \min \big( \varphi, \pi - \varphi\big) \sim \Phi \Big\} \Big) \lesssim \, \Phi K. \end{aligned} \end{equation} By computing the volume using polar coordinates as in \cite[Proof of Lemma 4.15]{B20II}, it also holds that \begin{equation}\label{prelim:eq-basic-counting-p3} \begin{aligned} &\Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \, |\xi| = \mu_1 + \mathcal{O}(1), \, \Big| \xi + |l| e_1 \Big| = \mu_2 + \mathcal{O}(1), \min \big( \varphi, \pi - \varphi\big) \sim \Phi \Big\} \Big) \\ \lesssim & \, \Phi^{-1} \min(K,L)^{-1} K. \end{aligned} \end{equation} The desired estimate \eqref{prelim:eq-basic-counting-p1} then follows by combining \eqref{prelim:eq-basic-counting-p2} and \eqref{prelim:eq-basic-counting-p3} and summing over $\Phi$. We note that the additional factor of $\Phi^{-1}$, which is not present in \cite{B20II}, is due to the differences in the volume elements of polar coordinates in $\mathbb{R}^2$ and $\mathbb{R}^3$. \\ The third estimate \eqref{prelim:eq-basic-counting-zero} can be reduced to proving that \begin{equation*} \sup_{\mu \in \R} \Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \Big| |\xi + l | - \mu \Big| \leq 1 \Big\} \Big) \lesssim K, \end{equation*} which is trivial. \\ It remains to prove the fourth estimate \eqref{prelim:eq-basic-counting-linear}. Similar as in \eqref{prelim:eq-basic-counting-p1}, it suffices to prove that \begin{equation}\label{prelim:eq-basic-counting-p4} \sup_{\mu \in \R} \Leb \Big( \Big\{ \xi \in \R^2 \colon |\xi| \sim K, \Big| \xi_1 + \sigma |\xi| - \mu \Big| \leq 1 \Big\} \Big) \lesssim K^{-1/2} K^2. \end{equation} The case $\sigma=0$ is trivial and can even be bounded by $K^{-1} K^2$. Thus, it remains to treat the cases $\sigma=\pm 1$. Now, if the constraint $|\xi_1 + \sigma |\xi| - \mu| \leq 1$ is satisfied and $|\xi| \sim K$, then it follows that \begin{equation*} \xi_2^2 = - \xi_1^2 + (\mu - \xi_1)^2 + \mathcal{O}(K). 
\end{equation*} Thus, \eqref{prelim:eq-basic-counting-p4} can be obtained by first integrating over $\xi_2$, which contributes $K^{1/2}$, and then integrating over $\xi_1$, which contributes $K$. \end{proof} \section{The vector potential}\label{section:vector} In this section, we primarily examine the vector potential $(A^\alpha)_{\alpha=0}^2$, which is a solution of the system of wave equations \begin{align} \partial_\alpha \partial^\alpha A^\beta &= J^\beta, \\ A^\beta[0] &=0. \end{align} We recall that the spatial components $J^1,J^2\colon \R_t\times \T_x^2 \rightarrow \R$ of the space-time white noise current are given by \begin{equation} J^j(t,x) = \sum_{k\in \Z^2} e^{i \langle k,x \rangle} \partial_t W_t^j(k), \end{equation} where $(W_t^1(k))_{k\in \Z^2}$ and $(W_t^2(k))_{k\in \Z^2}$ are independent sequences of standard complex-valued Brownian motions as in Subsection \ref{section:probability}. In the first lemma of this section, we obtain an explicit formula for the vector potential. \begin{lemma}[Explicit formula for the vector potential]\label{vector:lem-explicit} For all $t\geq 0$ and $k \in \Z^2$, it holds that \begin{equation}\label{vector:eq-explicit-A0} \widehat{A}^0(t,k) = - 1\big\{ k\neq 0 \big\} \frac{ik_a}{|k|^2} \int_0^t \dW^a_{s}[k] \Big( \cos\big( (t-s)|k|\big)-1\Big). \end{equation} Similarly, for $a=1,2$ and all $k\in \Z^2\backslash\{0\}$, it holds that \begin{equation} \widehat{A}^a(t,k) = - |k|^{-1} \int_0^t \dW^a_{s}[k] \sin\big((t-s) |k|\big). \label{vector:eq-explicit-Aa} \end{equation} In \eqref{vector:eq-explicit-Aa}, we implicitly set $\sin((t-s)|0|)/|0|:=(t-s)$. \end{lemma} \begin{proof} The second explicit formula \eqref{vector:eq-explicit-Aa} follows directly from the definition of $A^a$ and $J^a$. To be precise, it holds that \begin{align*} A^a(t,x) = - \int_0^t \ds \, \frac{\sin\big( (t-s) |\nabla|\big)}{|\nabla|} J^a(s,x) = - \sum_{k \in \Z^2} \bigg( e^{i \langle k,x\rangle} \int_0^t \dW^a_s[k] \frac{\sin\big((t-s)|k|\big)}{|k|} \bigg), \end{align*} which implies the desired identity. Thus, it remains to prove the first explicit formula \eqref{vector:eq-explicit-A0}. Using the definition of $A^0$, integration by parts, and $J^0\big|_{t=0}=0$, it holds that \begin{align*} A^0(t,x) &= - \int_0^t \ds \, \frac{\sin\big((t-s)|\nabla|\big)}{|\nabla|} J^0(s,x) \\ &= - \frac{\cos\big((t-s)|\nabla|\big)}{|\nabla|^2} J^0(s,x) \Big|_{s=0}^t + \int_0^t \ds \, \frac{\cos\big( (t-s) |\nabla|\big)}{|\nabla|^2} (\partial_s J^0)(s,x) \\ &= - |\nabla|^{-2} J^0(t,x) + \int_0^t \ds \, \frac{\cos\big( (t-s) |\nabla|\big)}{|\nabla|^2} (\partial_s J^0)(s,x) \\ &= |\nabla|^{-2} \int_0^t \ds \, \Big( \cos\big( (t-s) |\nabla|\big) -1 \Big) (\partial_s J^0)(s,x). \end{align*} After inserting $\partial_t J^0 = - \partial_a J^a$ and using the definition of $J^a$, it follows that \begin{align*} & |\nabla|^{-2} \int_0^t \ds \Big( \cos\big( (t-s) |\nabla|\big) -1 \Big) (\partial_s J^0)(s,x) \\ =& \, - |\nabla|^{-2} \int_0^t \ds \Big( \cos\big( (t-s) |\nabla|\big) -1 \Big) (\partial_a J^a)(s,x) \\ =& - \sum_{k \in \Z^2 \backslash \{0\}} \bigg( \frac{ik_a}{|k|^2} \Big( \int_0^t \dW^a_s[k] \big( \cos\big( (t-s) |k|\big) - 1\big) \Big) e^{i \langle k,x \rangle}. \end{align*} This yields the desired identity \eqref{vector:eq-explicit-A0}. \end{proof} Equipped with Lemma \ref{vector:lem-explicit}, we now examine the space-time covariances of the vector potential. 
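In the proof of the following lemma, the resulting $s$-integrals are evaluated using elementary trigonometric identities. For the reader's convenience, we record the computation entering the spatial components (applied with $r=|k|$): for all $r>0$ and $t,t^\prime \geq 0$, the product-to-sum formula $2\sin(\theta)\sin(\vartheta)=\cos(\theta-\vartheta)-\cos(\theta+\vartheta)$ yields
\begin{equation*}
\int_0^{t\wedge t^\prime} \ds \, \sin\big( (t-s) r \big) \sin\big( (t^\prime-s) r \big) = \frac{t\wedge t^\prime}{2} \cos\big( (t-t^\prime) r \big) + \frac{1}{4r} \Big( \sin\big( |t-t^\prime| r \big) - \sin\big( (t+t^\prime) r \big) \Big).
\end{equation*}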
\begin{lemma}[Space-time covariances of the vector potential]\label{vector:lem-covariance} Let $t,t^\prime \geq 0$ and let $k,l \in \Z^2$. Then, we have the following two identities: \begin{enumerate}[label=(\roman*)] \item \label{vector:item-covariance-time} (Time-component) It holds that \begin{align*} &\E \Big[ \widehat{A}_0(t,k) \widehat{A}^0(t^\prime,l) \Big] \\ =& \mathbf{1} \big\{ k+l =0 \neq k \big\} \, \frac{1}{|k|^2} \bigg( \big( t \wedge t^\prime \big) \Big( 1+ \frac{1}{2} \cos \big( (t-t^\prime) |k| \big) \Big) \\ &+ \frac{1}{4|k|} \Big( \sin\big( (t+t^\prime) |k| \big) + 4 \sin\big( t |k| \big) + 4 \sin\big( t^\prime |k| \big) + 3 \sin \big( |t-t^\prime| |k| \big) \Big) \bigg). \end{align*} \item \label{vector:item-convariance-spatial} (Spatial components) For all $a,b=1,2$, it holds that \begin{align*} &\E \Big[ \widehat{A}^a(t,k) \widehat{A}^b(t^\prime,l) \Big] \\ =& \mathbf{1}\big\{ k+l =0 \neq k \big\} \, \delta^{ab} \frac{1}{2|k|^2} \bigg( \big( t \wedge t^\prime \big) \cos \big( (t-t^\prime) |k| \big) \\ &\, + \frac{1}{2|k|} \Big( \sin\big( |t-t^\prime| |k| \big) - \sin\big( (t+t^\prime) |k| \big) \Big) \bigg) \\ +& \mathbf{1} \big\{ k=l=0 \big\} \delta^{ab} \min(t,t^\prime)^2 \Big( \tfrac{1}{2} \max(t,t^\prime) - \tfrac{1}{6} \min(t,t^\prime) \Big). \end{align*} \end{enumerate} \end{lemma} \begin{remark} Naturally, Lemma \ref{vector:lem-explicit} also allows us to compute the space-time covariances between $A^0$ and $A^a$, where $a=1,2$, but they will not be needed in this article. \end{remark} \begin{proof} We prove \ref{vector:item-covariance-time} and \ref{vector:item-convariance-spatial} separately. \\ \emph{Proof of \ref{vector:item-covariance-time}:} If $k=0$ or $l=0$, it follows from Lemma \ref{vector:lem-explicit} that $\widehat{A}_0(t,k)=0$ or $\widehat{A}^0(t^\prime,l)=0$. Thus, it remains to treat the case $k,l\neq 0$. Using Lemma \ref{vector:lem-explicit} and It\^{o}'s isometry, it holds that \begin{align*} &\E \Big[ \widehat{A}_0(t,k) \widehat{A}^0(t^\prime,l) \Big] \\ =& \E \bigg[ \frac{ik_a}{|k|^2} \int_0^t \dW^a_{s}[k] \Big( \cos\big( (t-s)|k|\big)-1\Big) \times \frac{il_b}{|l|^2} \int_0^{t^\prime} \dW^b_{s}[l] \Big( \cos\big( (t^\prime-s)|l|\big)-1\Big) \bigg] \\ =& \mathbf{1}\big\{ k+ l = 0 \big\} \frac{k_a k_b \delta^{ab}}{|k|^4} \int_0^{t\wedge t^\prime} \ds \, \big( 1- \cos\big( (t-s) |k| \big) \big) \big( 1- \cos\big( (t^\prime-s) |k| \big) \big) \\ =& \mathbf{1}\big\{ k+ l = 0 \big\} \frac{1}{|k|^2} \int_0^{t\wedge t^\prime} \ds \, \big( 1- \cos\big( (t-s) |k| \big) \big) \big( 1- \cos\big( (t^\prime-s) |k| \big) \big). \end{align*} Thus, it only remains to compute the $s$-integral, which is elementary. \\ \emph{Proof of \ref{vector:item-convariance-spatial}:} We only treat the case $k,l\neq 0$, since the remaining cases are similar. Using Lemma \ref{vector:lem-explicit} and It\^{o}'s isometry, it holds that \begin{align*} &\E \Big[ \widehat{A}^a(t,k) \widehat{A}^b(t^\prime,l) \Big]\\ =& \E \bigg[ |k|^{-1} \int_0^t \dW^a_{s}[k] \sin\big((t-s) |k|\big) \times |l|^{-1} \int_0^{t^\prime} \dW^b_{s}[l] \sin\big((t^\prime-s) |l|\big) \bigg] \\ =& \, \mathbf{1}\big\{ k+ l = 0 \big\} \delta^{ab} |k|^{-2} \int_0^{t\wedge t^\prime} \ds \, \sin\big((t-s) |k|\big) \sin\big((t^\prime-s) |k|\big). \end{align*} Thus, it only remains to compute the $s$-integral, which is elementary. \end{proof} As a consequence of Lemma \ref{vector:lem-covariance}, we obtain the following regularity estimates.
\begin{corollary}[Regularity estimates for the vector potential]\label{vector:cor-regularity} For all $K\in \dyadic$, $T\geq 1$, $p\geq 1$, $\epsilon>0$, and $0\leq \alpha \leq 2$, it holds that \begin{align} \E \Big[ \big\| A^\alpha_K \big\|_{\Cs_t^0 \Cs_x^{-\epsilon}([0,T]\times \T^2)}^p \Big]^{1/p} &\lesssim_\epsilon \sqrt{p} T^\theta K^{-\epsilon/2}, \label{vector:eq-regularity-1} \\ \E \Big[ \big\| \partial_t A^\alpha_K \big\|_{\Cs_t^0 \Cs_x^{-1-\epsilon}([0,T]\times \T^2)}^p \Big]^{1/p} &\lesssim_\epsilon \sqrt{p} T^\theta K^{-\epsilon/2}. \label{vector:eq-regularity-2} \end{align} \end{corollary} \begin{proof} Using Gaussian hypercontractivity (Lemma \ref{prelim:lem-hypercontractivity}) and translation-invariance (as in \cite[Proof of Lemma 7.4]{BDNY22}), the proof of the first estimate \eqref{vector:eq-regularity-1} can be reduced to \begin{equation}\label{vector:eq-regularity-p1} \sup_{t\in [0,T]} \E \Big[ \big\| P_K A^\alpha (t) \big\|_{H_x^{-\epsilon}}^2 \Big] \lesssim K^{-2\epsilon} T^\theta. \end{equation} In order to prove \eqref{vector:eq-regularity-p1}, we use Lemma \ref{vector:lem-covariance}, which yields \begin{align*} \E \Big[ \big\| P_K A^\alpha (t) \big\|_{H_x^{-\epsilon}}^2 \Big] = \sum_{k \in \Z^2} \langle k \rangle^{-2\epsilon} \rho_K^2(k) \E \Big[ \big| \widehat{A}^\alpha(t,k) \big|^2 \Big] \lesssim T^\theta \sum_{k\in \Z^2} \langle k \rangle^{-2-2\epsilon} \rho_K^2(k) \lesssim K^{-2\epsilon} T^\theta. \end{align*} It therefore remains to prove \eqref{vector:eq-regularity-2}. Using Lemma \ref{vector:lem-explicit}, it follows\footnote{The It\^{o}-integrals in Lemma \ref{vector:lem-explicit} are continuously differentiable since the integrand vanishes at $s=t$.} that \begin{align} \partial_t \widehat{A}^0(t,k) &= 1\big\{ k\neq 0 \big\} \frac{ik_a}{|k|} \int_0^t \dW^a_{s}[k] \sin\big( (t-s)|k|\big), \label{vector:eq-regularity-p2} \\ \partial_t \widehat{A}^a(t,k) &= - \int_0^t \dW^a_{s}[k] \cos\big((t-s) |k|\big). \label{vector:eq-regularity-p3} \end{align} Using the explicit formulas in \eqref{vector:eq-regularity-p2} and \eqref{vector:eq-regularity-p3}, the second estimate \eqref{vector:eq-regularity-2} can then be proven using similar arguments as in the proof of \eqref{vector:eq-regularity-1}, and we omit the details. \end{proof} We now treat the quadratic expression \begin{equation} A_{\leq N,\alpha} A^{\alpha}_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N}. \end{equation} To this end, we first provide the precise definition of the renormalized mass $\scrm_{\leq N}$, which was previously left undefined. \begin{definition}[The renormalized mass]\label{vector:def-mass} For all $N \in \dyadic$ and $t\in \R$, we define \begin{equation*} \scrm_{\leq N}(t) := \sqrt{ \frac{5}{2} \bigg( \sum_{n\in \Z^2\backslash \{ 0 \}} \frac{\rho_{\leq N}^2(n)}{|n|^2} \bigg) |t|}. \end{equation*} \end{definition} Equipped with Definition \ref{vector:def-mass}, we can now state the estimate of the quadratic expression. \begin{lemma}[The renormalized quadratic term in the vector potential]\label{vector:lem-quadratic} For all $M,N \in \dyadic$, $T\geq 1$, $p\geq 1$, and $\epsilon>0$, it holds that \begin{equation}\label{vector:eq-quadratic-1} \E \Big[ \sup_N \Big\| A_{\leq N,\alpha} A^{\alpha}_{\leq N}- \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big\|_{L_t^\infty \Cs^{-\epsilon}_x([0,T]\times \T_x^2)}^p \Big]^{1/p} \lesssim_\epsilon \, p T^\theta.
\end{equation} Furthermore, it holds that \begin{equation}\label{vector:eq-quadratic-2} \begin{aligned} &\E \Big[ \sup_N \Big\| \Big( A_{\leq M,\alpha} A^{\alpha}_{\leq M}- \scrm^{\hspace{-0.4ex}2}_{\leq M} \Big) - \Big( A_{\leq N,\alpha} A^{\alpha}_{\leq N}- \scrm^{\hspace{-0.4ex}2}_{\leq N}\Big) \Big\|_{L_t^\infty \Cs^{-\epsilon}_x([0,T]\times \T_x^2)}^p \Big]^{1/p} \\ \lesssim_\epsilon&\, p T^\theta \min(M,N)^{-\kappa}. \end{aligned} \end{equation} \end{lemma} \begin{proof} We only prove \eqref{vector:eq-quadratic-1}, since \eqref{vector:eq-quadratic-2} follows from a minor modification of the same argument. We first decompose $A_{\leq N,\alpha} A^{\alpha}_{\leq N}$ into a non-resonant and a resonant term. To be more precise, we decompose \begin{align} &A_{\leq N,\alpha} A_{\leq N}^\alpha \notag \\ =& \sum_{k,l\in \Z^2} \rho_{\leq N}(k) \rho_{\leq N}(l) \, \Big( \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) - \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] \Big) e^{i\langle k+l,x \rangle} \label{vector:eq-quadratic-p1} \\ +& \sum_{k,l\in \Z^2} \rho_{\leq N}(k) \rho_{\leq N}(l) \, \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] e^{i\langle k+l,x \rangle} \label{vector:eq-quadratic-p2}. \end{align} We now treat the non-resonant term \eqref{vector:eq-quadratic-p1} and the resonant term \eqref{vector:eq-quadratic-p2} separately. \\ \emph{Contribution of the non-resonant term \eqref{vector:eq-quadratic-p1}:} We use the dyadic decomposition \begin{align*} &\sum_{k,l\in \Z^2} \rho_{\leq N}(k) \rho_{\leq N}(l) \, \Big( \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) - \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] \Big) e^{i\langle k+l,x \rangle} \\ =& \sum_{\substack{K,L\colon \\ K,L \leq N}} \sum_{k,l\in \Z^2} \rho_{K}(k) \rho_{L}(l) \, \Big( \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) - \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] \Big) e^{i\langle k+l,x \rangle}. \end{align*} Due to Gaussian hypercontractivity (Lemma \ref{prelim:lem-hypercontractivity}) and translation-invariance (see e.g. \cite[Proof of Lemma 7.4]{BDNY22}), it suffices to prove that \begin{equation}\label{vector:eq-quadratic-p3} \begin{aligned} &\sup_{t\in [0,T]} \E \Big[ \Big\| \sum_{k,l\in \Z^2} \rho_{K}(k) \rho_{L}(l) \, \Big( \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) - \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] \Big) e^{i\langle k+l,x \rangle} \Big\|_{H_x^{-\epsilon}}^2 \Big] \\ \lesssim& \, T^\theta \max(K,L)^{-2\epsilon}. \end{aligned} \end{equation} We now note that, up to permutations, the family of random variables \begin{equation*} \Big( \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) - \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] \Big)_{k,l \in \Z^2} \end{equation*} is orthogonal in $L^2(\Omega,\mathbb{P})$.
Together with Lemma \ref{vector:lem-covariance} and Gaussian hypercontractivity\footnote{Gaussian hypercontractivity is used only for converting the second-moment bound from Lemma \ref{vector:lem-covariance} into a fourth-moment bound.} (Lemma \ref{prelim:lem-hypercontractivity}), it follows that \begin{align*} &\E \Big[ \Big\| \sum_{k,l\in \Z^2} \rho_{K}(k) \rho_{L}(l) \, \Big( \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) - \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] \Big) e^{i\langle k+l,x \rangle} \Big\|_{H_x^{-\epsilon}}^2 \Big]\\ \lesssim& \, \sum_{k,l\in \Z^2} \rho_K^2(k) \rho_L^2(l) \langle k + l \rangle^{-2\epsilon} \E \Big[ \Big| \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) - \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] \Big|^2 \Big] \\ \lesssim& \, T^\theta \sum_{k,l\in \Z^2} \rho_K^2(k) \rho_L^2(l) \langle k + l \rangle^{-2\epsilon} \langle k\rangle^{-2} \langle l \rangle^{-2} \\ \lesssim& \, T^\theta \max(K,L)^{-2\epsilon}. \end{align*} This completes the proof of \eqref{vector:eq-quadratic-p3}. \\ \emph{Contribution of the resonant term \eqref{vector:eq-quadratic-p2}:} By relabeling $k\in \Z^2$ as $n\in \Z^2$, using Lemma \ref{vector:lem-covariance}, and using Definition \ref{vector:def-mass}, we obtain that \begin{align} & \sum_{k,l\in \Z^2} \rho_{\leq N}(k) \rho_{\leq N}(l) \, \E \big[ \widehat{A}_\alpha(t,k) \widehat{A}^\alpha(t,l) \big] e^{i\langle k+l,x \rangle} \notag \\ =& \, \frac{5}{2} \Big( \sum_{n\in \Z^2 \backslash \{0 \} } \frac{\rho_{\leq N}^2(n)}{|n|^2} \Big) t + \sum_{n\in \Z^2 \backslash \{0\}} \frac{\rho_{\leq N}^2(n)}{|n|^3} \Big( -\frac{1}{4} \sin\big( 2t |n|\big) + 2 \sin\big( t|n|\big) \Big) + \frac{t^3}{3}. \notag \end{align} Here, the last summand comes from the contribution for $n=0$. Since the first summand coincides with $\scrm^{\hspace{-0.4ex}2}_{\leq N}$, it easily follows for all $\nu \in \R$ that \begin{equation*} \Big\| \eqref{vector:eq-quadratic-p2} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big\|_{L_t^\infty \Cs_x^\nu ([0,T]\times \T_x^2)} \lesssim T^3. \qedhere \end{equation*} \end{proof} In the next lemma, we utilize dispersive effects in order to bound a certain time-integral involving the vector potential $A$. This lemma will be used in the proof of Proposition \ref{operator:prop-main} below. \begin{lemma}[Dispersive estimate for a time-integral]\label{vector:lem-dispersive-time-integral} Let $K \in \dyadic$ and let $0\leq \alpha \leq 2$. Then, it holds for all $T\geq 1$ and $p\geq 1$ that \begin{equation*} \E \Big[ \max_{\unormal \in \mathbb{S}^1} \sup_{\substack{\lambda \in \R \colon \\ |\lambda| \lesssim K^{10}}} \Big\| \int_0^t \dt^\prime e^{it^\prime \lambda} e^{-t^\prime \unormal \cdot \nabla} A^\alpha_K(t^\prime,x) \Big\|_{L_x^\infty H_t^{b_+}(\T_x^2 \times [0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta K^{-1/4+\delta}. \end{equation*} \end{lemma} \begin{proof} We first estimate \begin{equation}\label{vector:eq-dispersive-p1} \Big\| \int_0^t \dt^\prime e^{it^\prime \lambda} e^{-t^\prime \unormal \cdot \nabla} A^\alpha_K(t^\prime,x) \Big\|_{L_x^\infty H_t^{b_+}(\T_x^2 \times [0,T])} \lesssim \Big\|e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big\|_{L_x^\infty H_t^{b_+-1}(\T_x^2 \times [0,T])}. \end{equation} In order to estimate the right-hand side of \eqref{vector:eq-dispersive-p1}, we first want to replace $b_+-1$ by $b_--1$ (where $b_-$ is as in Subsection \ref{section:notation}). 
To this end, we note that \begin{align*} \Big\|e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big\|_{L_x^\infty H_t^{0}(\T_x^2 \times [0,T])} &\lesssim T^{1/2} \Big\|e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big\|_{L_t^\infty L_x^\infty( [0,T] \times \T_x^2)} \\ &\lesssim T^{1/2} \big\| A_K^\alpha \big\|_{L_t^\infty L_x^\infty( [0,T] \times \T_x^2)}. \end{align*} Using Corollary \ref{vector:cor-regularity}, it follows for all $\epsilon>0$ that \begin{equation}\label{vector:eq-dispersive-p2} \E \Big[ \max_{\unormal \in \mathbb{S}^1} \sup_{\substack{\lambda \in \R \colon \\ |\lambda| \lesssim K^{10}}} \Big\| e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big\|_{L_x^\infty H_t^{0}(\T_x^2 \times [0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta K^\epsilon. \end{equation} Equipped with \eqref{vector:eq-dispersive-p2}, we may then prove the desired estimate at $b_+-1$ by interpolation between $b_--1$ and $0$, and it then only remains to prove that \begin{equation}\label{vector:eq-dispersive-p3} \E \Big[ \max_{\unormal \in \mathbb{S}^1} \sup_{\substack{\lambda \in \R \colon \\ |\lambda| \lesssim K^{10}}} \Big\| e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big\|_{L_x^\infty H_t^{b_--1}(\T_x^2 \times [0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta K^{-1/4+\delta/2}. \end{equation} Using a standard meshing argument (see e.g. \cite[Section 5.7]{BDNY22}) and Gaussian hypercontractivity (Lemma \ref{prelim:lem-hypercontractivity}), \eqref{vector:eq-dispersive-p3} can be further reduced to the estimate \begin{equation}\label{vector:eq-dispersive-p4} \max_{\unormal \in \mathbb{S}^1} \sup_{\substack{\lambda \in \R \colon \\ |\lambda| \lesssim K^{10}}} \max_{x\in \T^2} \E \Big[ \Big\| e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big\|_{ H_t^{b_--1}( [0,T])}^2 \Big]^{1/2} \lesssim T^\theta K^{-1/4+\delta/4}. \end{equation} Finally, since $b_- <1/2$, the $H_t^{b_--1}$-norm can be estimated by the sup-norm of the Fourier transform. Thus, \eqref{vector:eq-dispersive-p4} can be further reduced to the estimate \begin{equation}\label{vector:eq-dispersive-p5} \max_{\unormal \in \mathbb{S}^1} \sup_{\substack{\lambda \in \R \colon \\ |\lambda| \lesssim K^{10}}} \max_{x\in \T^2} \sup_{\xi \in \R} \E \Big[ \Big| \int_{\R} \dt \, \chi\big( t/T\big) e^{it\xi} e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big|^2 \Big]^{1/2} \lesssim T^\theta K^{-1/4+\delta/4}, \end{equation} where $\chi$ is a smooth, nonnegative cut-off function. After all of the reductions above, we now prove \eqref{vector:eq-dispersive-p5} using the covariance identity (Lemma \ref{vector:lem-covariance}) and lattice point counting estimates (Lemma \ref{prelim:lem-basic-counting}). We have that \begin{align} &\E \Big[ \Big| \int_{\R} \dt \, \chi\big( t/T\big) e^{it\xi} e^{it \lambda} e^{-t \unormal \cdot \nabla} A^\alpha_K(t,x) \Big|^2 \Big] \notag \\ =& \int_{\R} \dt \int_{\R} \dt^\prime \chi\big(t/T\big) \chi\big(t^\prime/T\big) e^{i (\xi+\lambda) (t-t^\prime)} \sum_{k,k^\prime \in \Z^2} \bigg( \rho_K(k) \label{vector:eq-dispersive-p6} \\ &\times \rho_K(k^\prime) e^{-i t \unormal \cdot k + i t^\prime \unormal \cdot k^\prime} \E \Big[ \widehat{A}_K^\alpha(t,k) \overline{\widehat{A}^\alpha_K(t^\prime,k^\prime)} \Big] e^{i\langle k-k^\prime, x \rangle} \bigg). \notag \end{align} To avoid confusion, we note that the index $0\leq \alpha \leq 2$ in \eqref{vector:eq-dispersive-p6} is fixed and not summed over.
Using Lemma \ref{vector:lem-covariance}, we can write \begin{equation}\label{vector:eq-dispersive-p7} \E \Big[ \widehat{A}_K^\alpha(t,k) \overline{\widehat{A}^\alpha_K(t^\prime,k^\prime)} \Big] = \mathbf{1}\big\{ k=k^\prime \big\} \bigg( \frac{\varphi(t,t^\prime)}{\langle k \rangle^2} \sum_{\sigma=0,\pm 1} c_\sigma e^{i\sigma (t-t^\prime) |k|} + \mathcal{O} \Big( \frac{T}{\langle k\rangle^3} \Big) \bigg), \end{equation} where $\varphi \colon \R \times \R \rightarrow \R$ is continuous with bounded first-order derivatives, $c_{-1}$, $c_0$, and $c_1$ are constants, and $\mathcal{O}$ is the usual Landau symbol. Due to the estimate \begin{equation*} \int_{\R} \dt \int_{\R} \dt^\prime \chi\big(t/T\big) \chi\big(t^\prime/T\big) \sum_{k\in \Z^2} \rho_K^2(k) \langle k \rangle^{-3} \lesssim T^2 K^{-1}, \end{equation*} the contribution of the $\mathcal{O}$-term in \eqref{vector:eq-dispersive-p7} is (better than) acceptable. Thus, it remains to treat the contribution of the main term in \eqref{vector:eq-dispersive-p7}. To this end, we estimate \begin{align} &\bigg| \int_{\R} \dt \int_{\R} \dt^\prime \chi\big(t/T\big) \chi\big(t^\prime/T\big) e^{i (\xi+\lambda) (t-t^\prime)} \varphi(t,t^\prime) \sum_{k\in \Z^2} \bigg( \frac{\rho_K^2(k)}{\langle k \rangle^2} e^{-i (t-t^\prime) \unormal \cdot k} e^{i\sigma (t-t^\prime) |k|} \bigg) \bigg| \notag \\ \lesssim& \, T^\theta \sum_{k\in \Z^2} \frac{\rho_K^2(k)}{\langle k \rangle^2} \Big( 1+ \big| \xi + \lambda - \unormal \cdot k + \sigma |k| \big| \Big)^{-1}. \end{align} By using a level-set decomposition and the lattice point counting estimate (Lemma \ref{prelim:lem-basic-counting}), we have that \begin{align*} &\sum_{k\in \Z^2} \frac{\rho_K^2(k)}{\langle k \rangle^2} \Big( 1+ \big| \xi + \lambda - \unormal \cdot k + \sigma |k| \big| \Big)^{-1} \\ \lesssim& \sum_{\substack{\mu\in \Z \colon \\ |\mu|\lesssim K}} \Big( 1+ \big| \xi + \lambda - \mu \big| \Big)^{-1} \times K^{-2} \sup_{\mu \in \Z} \sum_{k\in \Z^2} \rho_K^2(k) \mathbf{1} \big\{ \big| - \unormal \cdot k + \sigma |k| - \mu \big| \leq 1\big\} \\ \lesssim& \log(K) K^{-1/2}. \end{align*} This yields an acceptable contribution to \eqref{vector:eq-dispersive-p4} and therefore completes the proof. \end{proof} At the end of this section, we shift our attention from the vector potential $A$ to the linear stochastic object $z$ from \eqref{ansatz:eq-z}. Just like the vector potential $A$, $z$ solves a constant-coefficient wave equation with stochastic forcing. Using similar arguments as above, we also obtain the following regularity estimate for $z$. \begin{corollary}[Regularity estimate for $z$]\label{vector:cor-regularity-z} For all $K\in \dyadic$, $T\geq 1$, $p\geq 1$, and $\epsilon>0$, it holds that \begin{equation}\label{vector:eq-z-cs} \E \Big[ \big\| P_K z \big\|_{\Cs_t^0 \Cs_x^{-\epsilon}([0,T] \times \T^2)}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta K^{-\epsilon/2}. \end{equation} Furthermore, for all $\epsilon \geq 10 (b_+ - 1/2)$, it also holds that \begin{equation}\label{vector:eq-z-xnub} \E \Big[ \big\| P_K z \big\|_{X^{-\epsilon,b_+}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta K^{-\epsilon/2}. \end{equation} \end{corollary} \begin{proof} The first estimate \eqref{vector:eq-z-cs} follows essentially as in Corollary \ref{vector:cor-regularity}. The second estimate \eqref{vector:eq-z-xnub} follows easily from Lemma \ref{prelim:lem-Xnub-ito}.
\end{proof} \section{Random operator estimates}\label{section:operator} In this section, we prove random operator estimates for $\Lin[ll][\leq N]$, $\Lin[sim][\leq N]$, and $\Lin[gg][\leq N]$ from \eqref{ansatz:eq-Lin-ll}-\eqref{ansatz:eq-Lin-gg}. To this end, we first use a dyadic decomposition of the vector potential and define \begin{align} \Lin[ll][K] \phi &:= 2i \Duh \Big[ \partial_\alpha \Big( A_K^\alpha \parall \phi \Big) \Big] ,\label{operator:eq-LK-ll} \\ \Lin[sim][K] \phi &:= 2i \Duh \Big[ \partial_\alpha \Big( A_K^\alpha \parasim \phi \Big) \Big] ,\label{operator:eq-LK-sim} \\ \Lin[gg][K] \phi &:= 2i \Duh \Big[ \partial_\alpha \Big( A_K^\alpha \paragg \phi \Big) \Big] . \label{operator:eq-LK-gg} \end{align} Our main estimates, which address all three operators in \eqref{operator:eq-LK-ll}, \eqref{operator:eq-LK-sim}, and \eqref{operator:eq-LK-gg}, are collected in the next proposition. \begin{proposition}[Random operator estimates]\label{operator:prop-main} Let $b_0,b_+$, and $\delta_1$ be as in Subsection \ref{section:notation}, let $p\geq 1$, let $T\geq 1$, and let $K\in \dyadic$. Then, we have the following three estimates: \begin{enumerate}[label=(\roman*)] \item (High$\times$high-estimate) \label{operator:item-high-high} For all $\nu\geq \delta_1$, it holds that \begin{equation*} \E \Big[ \big\| \Lin[sim][K] \big\|_{X^{\nu,b_0}([0,T])\rightarrow X^{\nu+1/4-\delta_1,b_+}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p}\, T^\theta K^{-\kappa}. \end{equation*} \item (High$\times$low-estimate) \label{operator:item-high-low} For all $\nu\leq 0 $, it holds that \begin{equation*} \E \Big[ \big\| \Lin[gg][K] \big\|_{X^{\nu,b_0}([0,T])\rightarrow X^{\nu,b_+}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p}\, T^\theta K^{\delta_1 - 1/4-\kappa}. \end{equation*} \item (Low$\times$high-estimate) \label{operator:item-low-high} For all $\nu\in \R$, it holds that \begin{equation*} \E \Big[ \big\| \Lin[ll][K] \big\|_{X^{\nu,b_0}([0,T])\rightarrow X^{\nu,b_+}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p}\, T^\theta K^{\delta_1 - 1/4-\kappa}. \end{equation*} \end{enumerate} \end{proposition} We remark that the three estimates in Proposition \ref{operator:prop-main} are stated in increasing order of difficulty. \begin{remark}[High$\times$high-estimate] We emphasize that the condition $\nu>0$, which is slightly weaker than $\nu\geq \delta_1$, is likely necessary for the high$\times$high-estimate. The reason is that the vector potential $A^\alpha$ is a linear wave with negative spatial regularity, and therefore the high$\times$high-product $A^\alpha \parasim \phi$ cannot be defined for all elements of $X^{0,b}$. \\ If the high$\times$high-estimate was satisfied for any $\nu <0$, then the proof of Theorem \ref{intro:thm-main} would likely be much simpler. At least for the derivative nonlinearity in \eqref{ansatz:eq-phiN-a}, one could then close all estimates directly in $X^{\nu,b}$. \end{remark} \begin{remark}[Null-structure] In the low$\times$high-estimate, we exhibit a $1/4$-gain in the lowest frequency-scale, i.e., a gain of $K^{-1/4}$. By using null structures (and potentially imposing a different gauge condition), it may be possible to replace $K^{-1/4}$ by $K^{-1/2}$. The reason for this is as follows: The proof of the low$\times$high-estimate crucially relies on the lattice point counting estimate from Lemma \ref{prelim:lem-basic-counting}. In the proof of Lemma \ref{prelim:lem-basic-counting}, the main contribution comes from angles $\Phi \sim K^{-1/2}$. 
Therefore, the worst contributions come from nearly parallel interactions, which are weakened by null forms. By utilizing a null structure, it may be possible to restrict to angles $\Phi \sim 1$, in which case the $K^{-1/2}$-factor in Lemma \ref{prelim:lem-basic-counting} can be replaced by $K^{-1}$. This improved lattice point counting estimate would then lead to the aforementioned improvement in the low$\times$high-estimate. Using similar heuristics, one may hope to improve the high$\times$high-estimate by replacing $X^{\nu+1/4-\delta_1,b_+}$ with $X^{\nu+1/2-\delta_1,b_+}$ and the high$\times$low-estimate by replacing $K^{\delta_1 - 1/4-\kappa}$ with $K^{\delta_1 - 1/2-\kappa}$. However, since Theorem \ref{intro:thm-main} can be proven without any of these further improvements, we did not pursue this direction here. \end{remark} \begin{remark}[Comparison of bilinear and operator estimates] In the deterministic literature, bilinear estimates for wave equations are often stated in the form \begin{equation}\label{operator:eq-bilinear} \big\| \Duh \big[ uv \big] \big\|_{X^{\nu_0,b_0}} \lesssim \| u \|_{X^{\nu_1,b_1}} \| v \|_{X^{\nu_2,b_2}}. \end{equation} Of course, the bilinear estimate \eqref{operator:eq-bilinear} is equivalent to the operator estimate \begin{equation}\label{operator:eq-operator} \big\| v \mapsto \Duh \big[ uv \big] \big\|_{X^{\nu_2,b_2} \rightarrow X^{\nu_0,b_0}} \lesssim \| u \|_{X^{\nu_1,b_1}}. \end{equation} In deterministic settings, the operator formulation \eqref{operator:eq-operator} is unnecessarily complicated. In random settings, however, it is more convenient to state estimates in the form of operator bounds. The reason is that random objects, such as our vector potential $A^\alpha$, are often explicit and not just any element of a given function space. \end{remark} \begin{remark}[Comparison with a deterministic bilinear estimate]\label{operator:rem-comparison-bilinear} While the literature contains several deterministic bilinear estimates for wave equations (see e.g. \cite{DFS10,DFS12,FK20,KS02}), they generally require more regularity than is available in Proposition \ref{operator:prop-main}. For example, consider the null-form estimate from \cite[Corollary 13.4]{FK20}, which implies\footnote{In the notation of \cite{FK20}, the null-form estimate corresponds to $\beta_0=-3/4+\delta$, $\beta_+=0$, $\beta_-=-1/4+\delta$, $\alpha_1=5/4+\delta$, and $\alpha_2=1/4+\delta$.} \begin{equation}\label{operator:eq-nullform} \big\| \Duh \big[ Q_{12}(|\nabla|^{-1} A_\alpha , \phi) \big] \big\|_{X^{1/4+\delta,3/4+\delta}} \lesssim \big\| A_\alpha \big\|_{X^{1/4+\delta,b}} \big\| \phi \big\|_{X^{1/4+\delta,b}}, \end{equation} where $0<\delta\ll 1$ and $0<b-1/2\ll 1$. While \eqref{operator:eq-nullform} requires that $A$ and $\phi$ have spatial regularity $1/4+$, Proposition \ref{operator:prop-main} concerns a vector potential $A$ with regularity $0-$ and scalar fields $\phi$ with regularity $0+$. \end{remark} Once Proposition \ref{operator:prop-main} has been established, it is relatively easy to prove the following two lemmas. In the first lemma, we control the resolvent $\big(1+\Lin[ll][\leq N]\big)^{-1}$ and the structured component $\chi_{\leq N}$. In the statement, $\sigma_A$ and $\sigma_z$ are the $\sigma$-Algebras generated by $A$ and $z$, which were previously introduced in Subsection \ref{section:probability}.
\begin{lemma}[Resolvent estimate]\label{operator:lem-resolvent} Let $C=C(b_0,b_+,\delta_1)\geq 1$ be sufficiently large and $0<c\leq 1$ be sufficiently small. Then, for all $\lambda\geq 1$, there exist events $E^{(A)}_\lambda \in \sigma_A$ and $E^{(z)}_\lambda \in \sigma_z$ which satisfy \begin{equation}\label{operator:eq-event-probability} \mathbb{P} \big( E^{(A)}_\lambda \big), \mathbb{P}\big( E^{(z)}_\lambda \big) \geq 1 - c \exp(-\lambda) \end{equation} and such that the following properties are satisfied: \begin{enumerate}[label=(\roman*)] \item (Resolvent estimate) On the event $E^{(A)}_\lambda$, we have for all $M,N\in \dyadic$, $T\geq 1$, and $\nu \in \R$ that \begin{align} \Big\| \big( 1 + \Lin[ll][\leq N] \big)^{-1} \Big\|_{X^{\nu,b_0}([0,T]) \rightarrow X^{\nu,b_0}([0,T])} &\leq \exp \Big( C (T\lambda)^C\Big), \label{operator:eq-resolvent-1} \\ \Big\| \big( 1 + \Lin[ll][\leq M] \big)^{-1} - \big( 1 + \Lin[ll][\leq N] \big)^{-1} \Big\|_{X^{\nu,b_0}([0,T]) \rightarrow X^{\nu,b_0}([0,T])} &\leq \exp \Big( C (T\lambda)^C\Big) \min(M,N)^{-\kappa}. \label{operator:eq-resolvent-2} \end{align} \item (Bound on $\chi$) On the event $E^{(A)}_\lambda \medcap E^{(z)}_\lambda$, we have for all $M,N\in \dyadic$ and $T\geq 1$ that \begin{align} \big\| \chi_{\leq N} \big\|_{X^{-\delta_1,b_0}([0,T])} &\leq \exp \Big( C (T\lambda)^C\Big), \label{operator:eq-chi-1} \\ \big\| \chi_{\leq M} - \chi_{\leq N} \big\|_{X^{-\delta_1,b_0}([0,T])} &\leq \exp \Big( C (T\lambda)^C\Big) \min(M,N)^{-\kappa}. \label{operator:eq-chi-2} \end{align} \end{enumerate} \end{lemma} In the second lemma, we obtain a commutator estimate, which will be useful in the proof of Proposition \ref{product:prop-main} below. \begin{lemma}[Commutator estimate]\label{operator:lem-commutator} Let $C=C(b_0,b_+,\delta_1)\geq 1$ be sufficiently large and $0<c\leq 1$ be sufficiently small. Then, for all $\lambda \geq 1$, there exists an event $E^{(A)}_\lambda \in \sigma_A$ which satisfies \begin{equation}\label{operator:eq-commutator-probability} \mathbb{P} \big( E^{(A)}_\lambda \big) \geq 1 - c \exp(-\lambda) \end{equation} and such that, on this event, the following estimates hold: For all $L,M,N \in \dyadic$, $T\geq 1$, and $\nu \in \R$, it holds that \begin{align} &\Big\| \big[ P_L , (1+\Lin[ll][\leq N])^{-1}\big] \Big\|_{X^{\nu,b_0}([0,T])\rightarrow X^{\nu,b_0}([0,T])} \leq L^{-1/4+\delta_1} \exp\big( C (T\lambda)^C \big), \label{operator:eq-commutator-1} \\ & \Big\| \big[ P_L , (1+\Lin[ll][\leq M])^{-1} - (1+\Lin[ll][\leq N])^{-1}\big] \Big\|_{X^{\nu,b_0}([0,T])\rightarrow X^{\nu,b_0}([0,T])} \label{operator:eq-commutator-2} \\ &\leq L^{-1/4+\delta_1} \exp\big( C (T\lambda)^C \big) \min(M,N)^{-\kappa}.\notag \end{align} \end{lemma} The proofs of Lemma \ref{operator:lem-resolvent} and Lemma \ref{operator:lem-commutator} will be presented at the end of Subsection \ref{section:operator-proof} and we first focus on the proof of Proposition \ref{operator:prop-main}. We split the proof of Proposition \ref{operator:prop-main} over the following two subsections. In Subsection \ref{section:operator-basic}, we first prove a basic random operator estimate which will serve as the main ingredient in the proof of Proposition \ref{operator:prop-main}. In Subsection \ref{section:operator-proof}, we then present the proof of Proposition \ref{operator:prop-main}. In addition to using the basic random operator estimate, we need to carefully treat cases in which the frequency-scales of the vector potential $A$ and argument $\phi$ are far apart. 
These cases are very delicate since, as mentioned in the introduction and further discussed in Section \ref{section:ansatz}, there is no nonlinear smoothing. \subsection{Basic random operator estimate}\label{section:operator-basic} In this subsection, we state and prove a basic random operator estimate which will be the main ingredient in the proof of Proposition \ref{operator:prop-main}. \begin{lemma}[Basic random operator estimate]\label{operator:lem-basic} Let $M,K,L\in \dyadic$, let $(c_{kl})_{k,l\in \Z^2}$ be a deterministic sequence, let $\sigma_1 \in \{-1,1\}$, and let $\sigma_2 \in \{ -1,0,1\}$. Define a random operator $\mathcal{R}$ by \begin{equation} \begin{aligned} \mathcal{R} \phi :=& \sum_{k,l \in \Z^2} \bigg( \rho_M(k+l) \rho_{K}(k) \rho_{L}(l) c_{kl} e^{i\langle k+l, x \rangle} \\ &\times \int_0^t \dt^\prime \, e^{i\sigma_1 (t-t^\prime) |k+l|} \bigg( \int_0^{t^\prime} \mathrm{d}W_s(k) \, e^{i\sigma_2 (t^\prime-s)|k|} \bigg) \widehat{\phi}(t^\prime,l) \bigg), \end{aligned} \end{equation} where $(W_s(k))_{k\in \Z^2}$ is as in Subsection \ref{section:probability}. Then, for all $T\geq 1$ and $p\geq 1$, it holds that \begin{equation}\label{operator:eq-basic-1} \E \Big[ \big\| \, \mathcal{R} \, \big\|_{X^{0,b_0}([0,T]) \rightarrow X^{0,b_+}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta \max(K,L,M)^{\delta_0} \min(K,L,M)^{-1/4} K \big\| c_{kl} \big\|_{\ell^\infty_k \ell^\infty_l}. \end{equation} In the case $\sigma_1 \neq \sigma_2$, we have the stronger estimate \begin{equation}\label{operator:eq-basic-2} \E \Big[ \big\| \, \mathcal{R} \, \big\|_{X^{0,b_0}([0,T]) \rightarrow X^{0,b_+}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta \max(K,L,M)^{\delta_0} \min(K,M)^{-1/4} K \big\| c_{kl} \big\|_{\ell^\infty_k \ell^\infty_l}, \end{equation} in which $\min(K,L,M)$ has been replaced by $\min(K,M)$. \end{lemma} The proof is based on the lattice point counting estimates from Lemma \ref{prelim:lem-basic-counting} and the random tensor estimates from \cite{DNY20}, which we recalled in Lemma \ref{prelim:lem-moment-method} above. \begin{proof}[Proof of Lemma \ref{operator:lem-basic}:] The proof consists of three main steps. In the first step, we reduce the desired estimates \eqref{operator:eq-basic-1} and \eqref{operator:eq-basic-2} to a random tensor estimate. In the second step, we then reduce the random tensor estimate to a deterministic tensor estimate (using the moment method from Lemma \ref{prelim:lem-moment-method}). In the last step, we prove the deterministic tensor estimate. \\ To unify and simplify the notation, we denote the main factor in the right-hand sides of \eqref{operator:eq-basic-1} and \eqref{operator:eq-basic-2} as $\frakC$, i.e., \begin{equation} \frakC := K \| c_{kl} \|_{\ell^\infty_k \ell^\infty_l} \begin{cases} \min(K,L,M)^{-1/4} & \text{if } \sigma_1=\sigma_2, \\ \min(K,M)^{-1/4} & \text{if } \sigma_1 \neq \sigma_2 \end{cases}. \end{equation} \emph{Step 1: Reduction to a random tensor estimate.} Due to the definition of restricted $X^{\nu,b}$-norms, $\phi\in X^{0,b_0}([0,T])$ can be replaced by $\phi \in X^{0,b_0}(\R)$.
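For the reader's convenience, we recall that, as usual, the restricted norm is given by the infimum over all extensions, i.e., \begin{equation*} \big\| \phi \big\|_{X^{\nu,b}([0,T])} = \inf \Big\{ \big\| \psi \big\|_{X^{\nu,b}(\R)} \colon \psi \big|_{[0,T]} = \phi \Big\}. \end{equation*} Since $\mathcal{R} \phi$ on $[0,T]$ only depends on the restriction of $\phi$ to $[0,T]$, it suffices to bound $\mathcal{R} \phi$ for (near-optimal) extensions $\phi \in X^{0,b_0}(\R)$.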
Then, we can write \begin{equation} \widehat{\phi}(t^\prime,l) = \sum_{\sigma^\prime = \pm 1} \int_{\R} \dlambda^\prime \, e^{it^\prime \sigma^\prime |l|} e^{it^\prime \lambda^\prime} \widetilde{\phi}^{(\sigma^\prime)}(\lambda^\prime,l), \end{equation} where \begin{equation*} \max_{\sigma^\prime = \pm 1} \big\| \langle \lambda^\prime \rangle^{b_0} \widetilde{\phi}^{(\sigma^\prime)}(\lambda^\prime,l) \big\|_{L_{\lambda^\prime}^2 \ell_l^2} \sim \big\| \phi \big\|_{X^{0,b_0}(\R)}. \end{equation*} We now write \begin{equation} \Rc \phi = \sum_{\sigma^\prime = \pm 1} \int_\R \dlambda^\prime \Rc_{\sigma^\prime, \lambda^\prime} \widetilde{\phi}^{(\sigma^\prime)}(\lambda^\prime), \end{equation} where the operators $\Rc_{\sigma^\prime,\lambda^\prime}\colon \ell^2(\Z^2) \rightarrow X^{0,b_+}([0,T])$ are defined by \begin{equation} \begin{aligned} \Rc_{\sigma^\prime, \lambda^\prime} v :=& \sum_{k,l \in \Z^2} \bigg[ \rho_M(k+l) \rho_K(k) \rho_L(l) c_{kl}\, e^{i\langle k+ l , x \rangle} \int_0^t \dt^\prime \, \bigg( e^{i\sigma_1 (t-t^\prime) |k+l|} \\ &\times \Big( \int_0^{t^\prime} \mathrm{d}W_s(k) \, e^{i\sigma_2 (t^\prime-s)|k|}\Big) e^{i\sigma^\prime t^\prime |l|} e^{it^\prime \lambda^\prime} \bigg) \, v_l \bigg] \end{aligned} \end{equation} for all $v\in \ell^2(\Z^2)$. Using Cauchy-Schwarz in $\lambda^\prime$ and $b_0>1/2$, we can then reduce the desired estimates \eqref{operator:eq-basic-1} and \eqref{operator:eq-basic-2} of the random operator $\Rc$ to the estimate \begin{equation}\label{operator:eq-basic-p1} \begin{aligned} \E \bigg[ \max_{\sigma^\prime =\pm 1} \sup_{\lambda^\prime \in \R} \langle \lambda^\prime \rangle^{-(b_0-b_-) p} \big\| \Rc_{\sigma^\prime, \lambda^\prime} \big\|_{\ell^2 \rightarrow X^{0,b_+}([0,T])}^p \bigg]^{1/p} \lesssim \sqrt{p} T^\theta \max(K,L,M)^{\delta_0} \frakC. \end{aligned} \end{equation} Using Lemma \ref{prelim:lem-maxima-random} (see also the reductions in \cite[Subsection 5.7]{BDNY22}), \eqref{operator:eq-basic-p1} can be further reduced to proving that \begin{equation}\label{operator:eq-basic-p2} \begin{aligned} \max_{\sigma^\prime =\pm 1} \sup_{\lambda^\prime \in \R} \E \bigg[ \big\| \Rc_{\sigma^\prime, \lambda^\prime} \big\|_{\ell^2 \rightarrow X^{0,b_+}([0,T])}^p \bigg]^{1/p} \lesssim \sqrt{p} T^\theta \max(K,L,M)^{\delta_0/2} \frakC. \end{aligned} \end{equation} We now want to decrease the $b_+$-parameter in \eqref{operator:eq-basic-p2}. To this end, we first use the crude estimate \begin{align*} \big\| \Rc_{\sigma^\prime, \lambda^\prime} v \big\|_{X^{0,1}([0,T])} &\lesssim \big\| \partial_t \Rc_{\sigma^\prime, \lambda^\prime} v \big\|_{L_t^2 L_x^2([0,T]\times \T^2)} + \big\| \nabla_x \Rc_{\sigma^\prime, \lambda^\prime} v \big\|_{L_t^2 L_x^2([0,T]\times \T^2)} + \big\| \Rc_{\sigma^\prime, \lambda^\prime} v \big\|_{L_t^2 L_x^2([0,T]\times \T^2)} \\ &\lesssim T^\theta (KL)^3 \max_{\substack{\,\,\,k \in \Z^2\colon \\ |k|\sim K}} \max_{t^\prime \in [0,T]} \bigg| \int_0^{t^\prime} \mathrm{d}W_s(k) \, e^{i\sigma_2 (t^\prime-s)|k|} \bigg| \, \max_{\substack{\,\,\, l \in \Z^2 \colon \\ |l|\sim L}} |v_l|, \end{align*} which follows directly from the definition of $\Rc_{\sigma^\prime, \lambda^\prime}$. 
As a result, Lemma \ref{prelim:lem-maxima-random} and Doob's maximal inequality imply that \begin{align*} \max_{\sigma^\prime =\pm 1} \sup_{\lambda^\prime \in \R} \E \bigg[ \big\| \Rc_{\sigma^\prime, \lambda^\prime} \big\|_{\ell^2 \rightarrow X^{0,1}([0,T])}^p \bigg]^{1/p} &\lesssim T^\theta (KL)^3 \E \bigg[ \max_{\substack{\,\,\,k \in \Z^2\colon \\ |k|\sim K}} \max_{t^\prime \in [0,T]} \bigg| \int_0^{t^\prime} \mathrm{d}W_s(k) \, e^{i\sigma_2 (t^\prime-s)|k|} \bigg|^p \bigg]^{1/p} \\ &\lesssim \sqrt{p} T^\theta (KL)^4. \end{align*} By interpolation, \eqref{operator:eq-basic-p2} can therefore be reduced to the estimate \begin{equation}\label{operator:eq-basic-p3} \begin{aligned} \max_{\sigma^\prime =\pm 1} \sup_{\lambda^\prime \in \R} \E \bigg[ \big\| \Rc_{\sigma^\prime, \lambda^\prime} \big\|_{\ell^2 \rightarrow X^{0,b_-}([0,T])}^p \bigg]^{1/p} \lesssim \sqrt{p} T^\theta \max(K,L,M)^{\delta_0/4} \frakC. \end{aligned} \end{equation} Comparing \eqref{operator:eq-basic-p3} with \eqref{operator:eq-basic-p2}, we decreased the $b$-parameter from $b_+$ to $b_-$, but this cost us a factor of $\max(K,L,M)^{\delta_0/4}$. We now let $\varphi \in C^\infty_c(\R\rightarrow [0,1])$ be a smooth, compactly supported cut-off function satisfying $\varphi|_{[-1,1]}=1$. For any $\lambda \in \R$, $m\in \Z^2$, and $v\in \ell^2(\Z^2)$, we then write \begin{equation}\label{operator:eq-basic-p4} \int_{\R} \dt e^{-it\lambda} e^{-i t \sigma_1 |m|} \varphi(t/T) \widehat{(\Rc_{\sigma^\prime,\lambda^\prime} v)}(t,m) = \sum_{\mu \in \Z} \big(\Rc_{\lambda,\mu,\sigma^\prime,\lambda^\prime} v\big) (m), \end{equation} where the operators $\Rc_{\lambda,\mu,\sigma^\prime,\lambda^\prime}\colon \ell^2(\Z^2)\rightarrow \ell^2(\Z^2)$ are defined as \begin{equation} \begin{aligned} &\big( \Rc_{\lambda,\mu,\sigma^\prime,\lambda^\prime} v \big)(m) \\ :=& \, \sum_{\substack{k,l\in \Z^2\colon \\ k+l=m }} \bigg[ \rho_M(k+l) \rho_K(k) \rho_L(l) c_{kl} \, \mathbf{1} \big\{ - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |l| \in [\mu,\mu+1) \big\} \\ &\times \bigg( \int_\R \mathrm{d}W_s(k) \, \bigg( \int_{\R} \dt \int_{\R} \dt^\prime \mathbf{1} \big\{ 0 \leq s \leq t^\prime \leq t \big\} \varphi(t/T) e^{-it\lambda} \\ &\times e^{it^\prime ( - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |l| + \lambda^\prime)} e^{-i\sigma_2 s |k|} \bigg) \bigg) v_l \bigg]. \end{aligned} \end{equation} In order to prove \eqref{operator:eq-basic-p3}, it therefore remains to prove \begin{equation}\label{operator:eq-basic-p5} \max_{\sigma^\prime =\pm 1} \sup_{\lambda^\prime \in \R} \bigg\| \langle \lambda \rangle^{b_-} \, \E \Big[ \big\| \Rc_{\lambda,\mu,\sigma^\prime, \lambda^\prime} \big\|_{\ell^2 \rightarrow \ell^2}^p \Big]^{1/p} \bigg\|_{L_\lambda^2 \ell_\mu^1} \lesssim \sqrt{p} T^\theta \max(K,L,M)^{\delta_0/4} \frakC. \end{equation} \emph{Step 2: From a random to a deterministic tensor estimate.} Using the moment method (Lemma \ref{prelim:lem-moment-method}), we now reduce the random tensor estimate \eqref{operator:eq-basic-p5} to a deterministic tensor estimate. 
Using Lemma \ref{prelim:lem-moment-method}, it holds that \begin{equation}\label{operator:eq-basic-p6} \begin{aligned} &\E \Big[ \big\| \Rc_{\lambda,\mu,\sigma^\prime, \lambda^\prime} \big\|_{\ell^2 \rightarrow \ell^2}^p \Big]^{1/p} \\ \lesssim& \, \sqrt{p} \max(K,L,M)^{\delta/8} \max \Big( \| h_{klm} \|_{kl\rightarrow m}, \| h_{klm} \|_{l \rightarrow km} \Big) \max_{k,l,m\in \Z^2} \big\| f_{klm} (s) \big\|_{L_s^2}, \end{aligned} \end{equation} where \begin{equation}\label{operator:eq-basic-p7} \begin{aligned} h_{klm} &:= \big| c_{kl} \big| \, \mathbf{1}\big\{ m= k+l \big\} \mathbf{1}\big\{ |k| \sim K\big\} \mathbf{1}\big\{ |l| \sim L\big\} \mathbf{1}\big\{ |m| \sim M\big\} \\ &\times \mathbf{1} \big\{ - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |l| \in [\mu,\mu+1) \big\} \end{aligned} \end{equation} and \begin{equation}\label{operator:eq-basic-p8} \begin{aligned} f_{klm}(s) &:= \mathbf{1}\big\{ m= k+l \big\} \mathbf{1}\big\{ |k| \sim K\big\} \mathbf{1}\big\{ |l| \sim L\big\} \mathbf{1}\big\{ |m| \sim M\big\} \\ &\times \mathbf{1} \big\{ - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |l| \in [\mu,\mu+1) \big\} \\ &\times \int_{\R} \dt \int_{\R} \dt^\prime \mathbf{1} \big\{ 0 \leq s \leq t^\prime \leq t \big\} \varphi(t/T) e^{-it\lambda} e^{it^\prime ( - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |l| + \lambda^\prime)} e^{-i\sigma_2 s |k|}. \end{aligned} \end{equation} We note that $h$ depends on $\mu$ and $\sigma^\prime$ and $f$ depends on $\lambda$, $\mu$, $\sigma^\prime$, and $\lambda^\prime$, but we do not reflect this in our notation. In order to estimate $f_{klm}(s)$, we first integrate in $t^\prime$ and then integrate in $t$, which yields \begin{equation}\label{operator:eq-basic-p9} \big| f_{klm}(s) \big| \lesssim T^\theta \mathbf{1}\big\{ 0 \leq s \lesssim T \big\} \mathbf{1}\big\{ |\mu| \lesssim \max(K,L,M) \big\} \langle \mu + \lambda^\prime \rangle^{-1} \Big( \langle \lambda \rangle^{-1} + \langle \lambda - \mu - \lambda^\prime \rangle^{-1} \Big). \end{equation} As a result, it follows that \begin{equation}\label{operator:eq-basic-p10} \begin{aligned} &\Big\| \langle \lambda \rangle^{b_-} \max_{k,l,m\in \Z^2} \big\| f_{klm}(s) \big\|_{L_s^2} \Big\|_{L_\lambda^2\ell_\mu^1} \\ \lesssim& \, T^\theta \, \Big\| \mathbf{1}\big\{ |\mu| \lesssim \max(K,L,M) \big\} \langle \lambda \rangle^{b_-} \langle \mu + \lambda^\prime \rangle^{-1} \big( \langle \lambda \rangle^{-1} + \langle \lambda - \mu - \lambda^\prime \rangle^{-1} \big) \Big\|_{L_\lambda^2 \ell_\mu^1} \\ \lesssim& \, \, T^\theta \log\big(2+\max(K,L,M)\big) \big\| \langle \lambda \rangle^{b_- - 1} \big\|_{L_\lambda^2} \\ \lesssim&\, T^\theta \max(K,L,M)^{\delta_0/16}. 
\end{aligned} \end{equation} By inserting \eqref{operator:eq-basic-p10} into \eqref{operator:eq-basic-p6}, it follows that \begin{equation}\label{operator:eq-basic-p11} \begin{aligned} & \max_{\sigma^\prime =\pm 1} \sup_{\lambda^\prime \in \R} \bigg\| \langle \lambda \rangle^{b_-} \, \E \Big[ \big\| \Rc_{\lambda,\mu,\sigma^\prime, \lambda^\prime} \big\|_{\ell^2 \rightarrow \ell^2}^p \Big]^{1/p} \bigg\|_{L_\lambda^2 \ell_\mu^1} \\ \lesssim& \, \sqrt{p} \max(K,L,M)^{\delta_0/8} \max_{\sigma^\prime =\pm 1} \sup_{\mu \in \Z} \max \Big( \| h_{klm} \|_{kl\rightarrow m}, \| h_{klm} \|_{l \rightarrow km} \Big)\\ &\times \max_{\sigma^\prime =\pm 1} \sup_{\lambda^\prime \in \R} \, \Big\| \langle \lambda \rangle^{b_-} \max_{k,l,m\in \Z^2} \big\| f_{klm} \big\|_{L_s^2(\R)} \Big\|_{L_\lambda^2 \ell_\mu^1} \\ \lesssim&\, \sqrt{p} T^\theta \max(K,L,M)^{\delta_0/8+\delta_0/16} \max_{\sigma^\prime =\pm 1} \sup_{\mu \in \Z} \max \Big( \| h_{klm} \|_{kl\rightarrow m}, \| h_{klm} \|_{l \rightarrow km} \Big). \end{aligned} \end{equation} In order to prove \eqref{operator:eq-basic-p5}, it therefore remains to prove that \begin{equation}\label{operator:eq-basic-p12} \begin{aligned} \max_{\sigma^\prime =\pm 1} \sup_{\mu \in \Z} \max \Big( \| h_{klm} \|_{kl\rightarrow m}, \| h_{klm} \|_{l \rightarrow km} \Big) \lesssim \max(K,L,M)^{\delta_0/16} \frakC. \end{aligned} \end{equation} \emph{Step 3: A deterministic tensor estimate.} In the last step of this argument, we prove the deterministic tensor estimate in \eqref{operator:eq-basic-p12}. We estimate the two arguments in the maximum in \eqref{operator:eq-basic-p12} separately. Using Schur's test and the lattice point counting estimate (Lemma \ref{prelim:lem-basic-counting}), the first argument is estimated by \begin{align*} &\big\| h_{klm} \big\|_{kl\rightarrow m}^2\\ \lesssim& \, \| c_{kl}\|_{\ell^\infty_k \ell^\infty_l}^2 \sup_{\substack{m \in \Z^2 \colon \\ |m| \sim M }} \sum_{\substack{k,l\in \Z^2}} \mathbf{1} \big\{ k+l=m \big\} \, \mathbf{1} \big\{ |k| \sim K \big\} \, \mathbf{1} \big\{ - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |l| \in [\mu,\mu+1) \big\} \\ =& \, \| c_{kl}\|_{\ell^\infty_k \ell^\infty_l}^2 \sup_{\substack{m \in \Z^2 \colon \\ |m| \sim M }} \sum_{\substack{k\in \Z^2 }} \mathbf{1} \big\{ |k| \sim K \big\} \, \mathbf{1} \big\{ - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |k-m| \in [\mu,\mu+1) \big\} \\ \lesssim&\, \min(K,M)^{-1/2} K^2 \| c_{kl}\|_{\ell^\infty_k \ell^\infty_l}^2 \lesssim \frakC^2. \end{align*} We note that the first argument always obeys the better bound in \eqref{prelim:eq-basic-counting-plus}, i.e., the bound involving $\min(K,M)$ instead of $\min(K,L,M)$. Using Schur's test, the second argument is estimated by \begin{align} &\big\| h_{klm} \big\|_{l\rightarrow km}^2 \notag \\ \lesssim& \, \| c_{kl}\|_{\ell^\infty_k \ell^\infty_l}^2 \sup_{\substack{l \in \Z^2 \colon \\ |l| \sim L }} \sum_{\substack{k,m\in \Z^2}} \mathbf{1} \big\{ k+l=m \big\} \, \mathbf{1} \big\{ |k| \sim K \big\} \, \mathbf{1} \big\{ - \sigma_1 |m| + \sigma_2 |k| + \sigma^\prime |l| \in [\mu,\mu+1) \big\} \notag \\ \lesssim& \, \| c_{kl}\|_{\ell^\infty_k \ell^\infty_l}^2 \sup_{\substack{l \in \Z^2 \colon \\ |l| \sim L }} \sum_{\substack{k\in \Z^2}} \mathbf{1} \big\{ |k| \sim K \big\} \, \mathbf{1} \big\{ - \sigma_1 |k+l| + \sigma_2 |k| + \sigma^\prime |l| \in [\mu,\mu+1) \big\}. 
\label{operator:eq-basic-p13} \end{align} We now estimate \eqref{operator:eq-basic-p13} using \eqref{prelim:eq-basic-counting-minus} when $\sigma_1 = \sigma_2$ and using either \eqref{prelim:eq-basic-counting-plus} or \eqref{prelim:eq-basic-counting-zero} when $\sigma_1 \neq \sigma_2$, which yields \begin{equation*} \, \| c_{kl}\|_{\ell^\infty_k \ell^\infty_l}^2 \sup_{\substack{l \in \Z^2 \colon \\ |l| \sim L }} \sum_{\substack{k\in \Z^2}} \mathbf{1} \big\{ |k| \sim K \big\} \, \mathbf{1} \big\{ - \sigma_1 |k+l| + \sigma_2 |k| + \sigma^\prime |l| \in [\mu,\mu+1) \big\} \lesssim \frakC^2. \end{equation*} This completes the proof of \eqref{operator:eq-basic-p12} and hence the proof of this lemma. \end{proof} \subsection{Proof of Proposition \ref{operator:prop-main} and its consequences}\label{section:operator-proof} In the main part of this subsection, we prove Proposition \ref{operator:prop-main} using the basic random operator estimate (Lemma \ref{operator:lem-basic}). At the end of this subsection, we then use Proposition \ref{operator:prop-main} (and its proof) to prove Lemma \ref{operator:lem-resolvent} and Lemma \ref{operator:lem-commutator}. \begin{proof}[Proof of Proposition \ref{operator:prop-main}:] We prove the high$\times$high, high$\times$low, and low$\times$high-estimate separately. \\ \emph{Proof of \ref{operator:item-high-high}: The high$\times$high-estimate.} We first use the dyadic decomposition \begin{equation} (2i)^{-1} \Lin[sim][K] \phi = \sum_{\substack{L,M\in \dyadic \colon \\ K \sim L \gtrsim M}} P_M \Duh \Big[ \partial_\alpha \Big( A^\alpha_K P_L \phi \Big)\Big]. \end{equation} We then write the contribution of the spatial components as \begin{equation}\label{operator:eq-hh-p1} \begin{aligned} &P_M \Duh \Big[ \partial_a \Big( A^a_K P_L \phi \Big)\Big] \\ =& \, - i \sum_{k,l \in \Z^2} \bigg( \rho_K(k) \rho_L(l) \rho_{M}(k+l) (k+l)_a e^{i\langle k + l , x \rangle} \\ &\times \int_0^t \dt^\prime \, \frac{\sin\big( (t-t^\prime) |k+l|\big)}{|k+l|} \widehat{A}^a(t^\prime,k) \widehat{\phi}(t^\prime,l) \bigg). \end{aligned} \end{equation} Furthermore, using integration by parts and $A^0(0)=0$, we write the contribution of the temporal component as \begin{equation}\label{operator:eq-hh-p2} \begin{aligned} &P_M \Duh \Big[ \partial_0 \Big( A^0_K P_L \phi \Big)\Big] \\ =& \, - P_M \int_0^t \dt^\prime \, \frac{\sin\big( (t-t^\prime) |\nabla|\big)}{|\nabla|} \partial_{t^\prime} \big( A^0_K(t^\prime) P_L \phi(t^\prime) \big) \\ =& P_M \int_0^t \dt^\prime \, \cos \big( (t-t^\prime) |\nabla| \big) A_K^0(t^\prime) P_L \phi (t^\prime) \\ =& \sum_{k,l\in \Z^2} \bigg( \rho_K(k) \rho_L(l) \rho_{M}(k+l) e^{i\langle k + l , x \rangle} \int_0^t \dt^\prime \, \cos\big( (t-t^\prime) |k+l|\big) \widehat{A}^0(t^\prime,k) \widehat{\phi}(t^\prime,l) \bigg). \end{aligned} \end{equation} By using \eqref{operator:eq-hh-p1}, \eqref{operator:eq-hh-p2}, the explicit formula from Lemma \ref{vector:lem-explicit}, and the basic random operator estimate (Lemma \ref{operator:lem-basic}), it follows that \begin{align*} &\E \Big[ \Big\| P_M \Duh \Big[ \partial_\alpha \Big( A^\alpha_K P_L \phi \Big) \Big] \Big\|_{X^{\nu,b_0}([0,T])\rightarrow X^{1/4+\nu-\delta_1,b_+}([0,T])}^p \Big]^{1/p} \\ \lesssim& \, \sqrt{p} T^\theta M^{\nu+1/4-\delta_1} L^{-\nu} \max(K,L,M)^\delta \min(K,L,M)^{-1/4}. 
\end{align*} Since $K \sim L \gtrsim M$ and $\nu \geq \delta_1$, it holds that \begin{equation*} M^{\nu+1/4-\delta_1} L^{-\nu} \max(K,L,M)^\delta \min(K,L,M)^{-1/4} \sim M^{\nu-\delta_1} K^{\delta-\nu} \lesssim K^{\delta-\delta_1}. \end{equation*} Since $\kappa \ll \delta \ll \delta_1$, this implies the desired estimate. \\ \emph{Proof of \ref{operator:item-high-low}: The high$\times$low-estimate.} We first use the dyadic decomposition \begin{equation*} (2i)^{-1} \Lin[gg][K] \phi = \sum_{\substack{L,M \in \dyadic \colon \\ M \sim K \gg L}} P_M \Duh \Big[ \partial_\alpha \Big( A^\alpha_K P_L \phi \Big) \Big]. \end{equation*} Using the Lorenz condition, it holds that \begin{equation*} \partial_\alpha \Big( A^\alpha_K P_L \phi \Big) = A_K^\alpha \partial_\alpha P_L \phi = A^0_K \partial_0 P_L \phi + A^a_K \partial_a P_L \phi. \end{equation*} We now treat the temporal component ($\alpha=0$) and the spatial components ($\alpha=1,2$) separately. \\ \emph{The spatial components:} To treat the contributions of the spatial components, we write \begin{align*} &P_M \Duh \Big[ A^a_K \partial_a P_L \phi \Big] \\ =& \, - i \sum_{k,l \in \Z^2} \bigg( \rho_K(k) \rho_L(l) \rho_{M}(k+l)\, l_a \, e^{i\langle k+l ,x \rangle} \int_0^t \dt^\prime \, \frac{\sin\big( (t-t^\prime) |k+l|\big)}{|k+l|} \widehat{A}^a(t^\prime,k) \widehat{\phi}(t^\prime,l) \bigg). \end{align*} By inserting the explicit formula (Lemma \ref{vector:lem-explicit}) and using the basic random operator estimate (Lemma \ref{operator:lem-basic}), it follows that \begin{align*} & \E \Big[ \Big\| P_M \Duh \Big[ A_K^a \partial_a P_L \phi \Big] \Big\|_{X^{\nu,b_0}([0,T]) \rightarrow X^{\nu,b_+}([0,T])}^p \Big]^{1/p} \\ \lesssim&\, \sqrt{p} T^\theta M^{\nu} L^{-\nu} \max(K,L,M)^\delta \min(K,L,M)^{-1/4} L M^{-1}. \end{align*} Since $M \sim K \gg L$ and $\nu \leq 0$, it holds that \begin{equation*} M^{\nu} L^{-\nu} \max(K,L,M)^\delta \min(K,L,M)^{-1/4} L M^{-1} \lesssim K^{-1+\delta} L^{3/4} \lesssim K^{-1/4+\delta}. \end{equation*} Since $\delta \ll \delta_1$, this yields an acceptable contribution. \\ \emph{The temporal component:} This case is slightly technical. Since there is not much room in the $b$-parameter, we will prevent the time-derivative from hitting $\phi$. Using integration by parts and $A^0(0)=0$, we first write \begin{equation}\label{operator:eq-hl-p1} \begin{aligned} &P_M \Duh \Big[ A^0_K \partial_0 P_L \phi \Big] \\ =& \, - \sum_{k,l \in \Z^2} \bigg( \rho_K(k) \rho_L(l) \rho_{M}(k+l) e^{i \langle k+ l , x\rangle} \int_0^t \dt^\prime \, \frac{\sin\big( (t-t^\prime) |k+l|\big)}{|k+l|} \widehat{A}^0(t^\prime,k) \partial_{t^\prime} \widehat{\phi}(t^\prime,l) \bigg) \\ =& \, \sum_{k,l \in \Z^2} \left( \rho_K(k) \rho_L(l) \rho_{M}(k+l) e^{i \langle k+ l , x\rangle} \int_0^t \dt^\prime \, \partial_{t^\prime} \bigg( \frac{\sin\big( (t-t^\prime) |k+l|\big)}{|k+l|} \widehat{A}^0(t^\prime,k) \bigg) \widehat{\phi}(t^\prime,l) \right).
\end{aligned} \end{equation} Using Lemma \ref{vector:lem-explicit}, we further compute\footnote{We note that the It\^{o}-integral below is differentiable in $t^\prime$ since the $t^\prime$-dependent integrand vanishes at $s=t^\prime$.} \begin{equation}\label{operator:eq-hl-p2} \begin{aligned} &\partial_{t^\prime} \left( \frac{\sin\big( (t-t^\prime) |k+l|\big)}{|k+l|} \widehat{A}^0(t^\prime,k) \right) \\ =& \, - i \frac{k_a}{|k|^2 |k+l|} \partial_{t^\prime} \left( \int_0^{t^\prime} \mathrm{d}W_s^a(k) \, \sin\big((t-t^\prime) |k+l|\big) \Big( \cos \big( (t^\prime-s) |k|\big)-1\Big) \right) \\ =& \, - i \frac{k_a}{|k|^2 |k+l|} \int_0^{t^\prime} \mathrm{d}W^a_s(k)\, \partial_{t^\prime} \bigg( \sin\big((t-t^\prime) |k+l|\big) \Big( \cos \big( (t^\prime-s) |k|\big)-1\Big) \bigg). \end{aligned} \end{equation} Using trigonometric identities, it follows that \begin{align} &\partial_{t^\prime} \bigg( \sin\big((t-t^\prime) |k+l|\big) \Big( \cos \big( (t^\prime-s) |k|\big)-1\Big) \bigg) \notag \\ =& -\frac{1}{2} \Big( |k+l| - |k| \Big) \cos \Big( (t-t^\prime) |k+l| + (t^\prime-s) |k| \Big) \label{operator:eq-hl-p3} \\ -& \frac{1}{2} \Big( |k+l| + |k| \Big) \cos \Big( (t-t^\prime) |k+l| - (t^\prime-s) |k| \Big) \label{operator:eq-hl-p4} \\ +& |k+l| \cos \Big( (t-t^\prime) |k+l|\Big). \label{operator:eq-hl-p5} \end{align} We now insert \eqref{operator:eq-hl-p3}-\eqref{operator:eq-hl-p5} into \eqref{operator:eq-hl-p2}, then insert \eqref{operator:eq-hl-p2} into \eqref{operator:eq-hl-p1}, and finally use the basic random operator estimate (Lemma \ref{operator:lem-basic}). For all three terms \eqref{operator:eq-hl-p3}, \eqref{operator:eq-hl-p4}, and \eqref{operator:eq-hl-p5}, this yields an acceptable contribution. For \eqref{operator:eq-hl-p3}, the reason is that $\big| |k+l|-|k| \big| \lesssim |l|$. For \eqref{operator:eq-hl-p4} and \eqref{operator:eq-hl-p5}, the reason is that we can use the improvement for $\sigma_1\neq \sigma_2$ in Lemma \ref{operator:lem-basic}. \\ \emph{Proof of \ref{operator:item-low-high}: The low$\times$high-estimate.} It suffices to treat the case $\nu=0$, since the estimates for $\nu\in \R$ are all equivalent. Since there is no nonlinear smoothing, i.e., no gain in the spatial frequency of $\phi$, the following argument is rather delicate. In particular, we cannot directly apply the basic random operator estimate (Lemma \ref{operator:lem-basic}), since this would lead to a $\delta$-loss in the highest frequency scale. We therefore split the proof into several steps. \\ \emph{Step (A): Reduction to much higher frequency-scales.} In the first step, we reduce to the case when the spatial frequency of $\phi$ is much higher than the spatial frequency of $A_K$. To this end, we decompose \begin{align} \Duh \Big[ \partial_\alpha \Big( A_K^\alpha \parall \phi \Big) \Big] &= \Duh \Big[ \partial_\alpha \Big( A_K^\alpha \parall P_{< K^{100}} \phi \Big) \Big] \label{operator:eq-lh-p1}\\ &+ \Duh \Big[ \partial_\alpha \Big( A_K^\alpha P_{\geq K^{100}} \phi \Big) \Big] \label{operator:eq-lh-p2}. \end{align} Similar as in the proof of \ref{operator:item-high-high}, it follows from the basic random operator estimate (Lemma \ref{operator:lem-basic}) and $\kappa,\delta_0 \ll \delta_1$ that \begin{align*} \E \Big[ \Big\| \Duh \Big[ \partial_\alpha \Big( A_K^\alpha \parall P_{< K^{100}} \phi \Big) \Big] \Big\|_{X^{0,b_0}([0,T]) \rightarrow X^{0,b_+}([0,T])}^p \Big]^{1/p} &\lesssim \sqrt{p} T^\theta \big( K^{100}\big)^{\delta} K^{-1/4} \\ &\lesssim \sqrt{p} T^\theta K^{-1/4+\delta_1-\kappa}. 
\end{align*} Thus, it remains to treat the contribution of \eqref{operator:eq-lh-p2}.\\ \emph{Step (B): Reduction to low-modulations.} Using the definition of the $X^{0,b_0}([0,T])$-norm, we can replace $\phi \in X^{0,b_0}([0,T])$ by $\phi \in X^{0,b_0}(\R)$. We then employ the usual decomposition of $\phi$ into time-modulated half-waves. To be precise, we write \begin{equation}\label{operator:eq-lh-p3} \phi = \sum_{\sigma = \pm 1 } \int_{\R} \dlambda \, e^{i\lambda t} e^{i\sigma t|\nabla|} \phi^{(\sigma)}(\lambda), \end{equation} where, for each $\sigma\in \{\pm 1\}$ and $\lambda \in \R$, $\phi^{(\sigma)}(\lambda) \in L^2(\T^2_x)$ and \begin{equation*} \| \phi \|_{X^{0,b_0}(\R)} \sim \max_{\sigma = \pm 1 } \big\| \langle \lambda \rangle^{b_0} \phi^{(\sigma)}(\lambda) \big\|_{L_\lambda^2 L_x^2}. \end{equation*} We then further decompose $\phi$ into a high and low-modulation term. To this end, we define\footnote{We introduce the parameter $\Lambda$ for expository purposes. Indeed, there is a-priori no reason for choosing $\Lambda$ as being equal to $K$, but this choice can be justified using the numerology below.} $\Lambda := K$ and decompose \begin{align} \phi_{\hi} := \sum_{\sigma = \pm 1 } \int_{\R} \dlambda \, \mathbf{1} \big\{ |\lambda| \geq \Lambda \big\} e^{i\lambda t} e^{i\sigma t|\nabla|} \phi^{(\sigma)}(\lambda), \label{operator:eq-lh-p4} \\ \phi_{\lo} := \sum_{\sigma = \pm 1 } \int_{\R} \dlambda \, \mathbf{1}\big\{ |\lambda| \leq \Lambda \big\} e^{i\lambda t} e^{i\sigma t|\nabla|} \phi^{(\sigma)}(\lambda). \label{operator:eq-lh-p5} \end{align} For the high-modulation term, we have that \begin{align*} &\Big\| \Duh \Big[ \partial_\alpha \Big( A^\alpha_K P_{\geq K^{100}} \phi_{\hi} \Big) \Big] \Big\|_{X^{0,b_+}([0,T])} \\ \lesssim& \, T^\theta \max_{\alpha=0,1,2} \Big\| A^\alpha_K P_{\geq K^{100}} \phi_{\hi} \Big\|_{L_t^2 L_x^2([0,T]\times \T^2)} \\ \lesssim& \, T^\theta \max_{\alpha=0,1,2} \Big\| A^\alpha_K \Big\|_{L_t^\infty L_x^\infty([0,T]\times \T^2)} \Big\| P_{\geq K^{100}} \phi_{\hi} \Big\|_{L_t^2 L_x^2([0,T]\times \T^2)} \\ \lesssim& \, T^\theta \Lambda^{-b_0} \Big\| A^\alpha_K \Big\|_{L_t^\infty L_x^\infty([0,T]\times \T^2)} \big\| \phi \big\|_{X^{0,b_0}(\R)}. \end{align*} Using Corollary \ref{vector:cor-regularity}, this yields an acceptable contribution. Thus, it remains to control the contribution of the low-modulation term in \eqref{operator:eq-lh-p5}. \\ \emph{Step (C): Simplifying the Duhamel integrals.} For the temporal component, we obtain from integration by parts and $A^0(0)=0$ that \begin{align*} \Duh \Big[ \partial_0 \Big( A^0_K P_{\geq K^{100}} \phi_{\lo} \Big) \Big] = - \int_0^t \dt^\prime \, \cos \big( (t-t^\prime) |\nabla| \big) \Big( A^0_K P_{\geq K^{100}} \phi_{\lo} \Big). \end{align*} For the spatial components, we write \begin{align*} \Duh \Big[ \partial_a \Big( A^a_K P_{\geq K^{100}} \phi_{\lo} \Big) \Big] = \frac{\partial_a}{|\nabla|} \int_0^t \dt^\prime \, \sin \big( (t-t^\prime) |\nabla| \big) \Big( A^a_K P_{\geq K^{100}} \phi_{\lo} \Big) \end{align*} Due to the boundedness of $\partial_a / |\nabla|$ on $L^2(\T_x^2)$, it therefore suffices to estimate \begin{equation}\label{operator:eq-lh-p6} e^{i\sigma t |\nabla|} \int_0^t \dt^\prime \, e^{-i\sigma t^\prime |\nabla|} \Big( A^\alpha_K \, P_{\geq K^{100}} \phi_{\lo} \Big) \end{equation} for all $\sigma = \pm 1$ and $0\leq \alpha \leq 2$. 
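For completeness, we note that the boundedness of $\partial_a / |\nabla|$ on $L^2(\T_x^2)$ used above is immediate on the Fourier side: the corresponding multiplier $i m_a / |m|$ has modulus at most one on $\Z^2 \backslash \{ 0 \}$ and $\partial_a$ annihilates the zeroth Fourier mode, so that Plancherel's theorem yields \begin{equation*} \Big\| \frac{\partial_a}{|\nabla|} f \Big\|_{L_x^2(\T^2)} \leq \big\| f \big\|_{L_x^2(\T^2)}. \end{equation*}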
\\ \emph{Step (D): Angular decomposition.} For any $\varphi \in (-\pi,\pi]$, we define the projection operator $\Qphi$ by \begin{equation}\label{operator:eq-lh-p7} \widehat{\Qphi f}(m) = \mathbf{1} \big\{ \big| \angle (m , e_1) - \varphi\big| \leq 2\pi K^{-99} \big\} \widehat{f}(m) \end{equation} for all $m\in \Z^2\backslash \{0\}$ and\footnote{In our argument, the operator $\Qphi$ is only applied to high-frequency functions, and therefore the definition for $m=0$ is irrelevant.} $\widehat{\Qphi f}(0)=0$. In \eqref{operator:eq-lh-p7}, the distance between $\angle (m,e_1)$ and $\varphi$ is measured in $\T_x$, i.e., modulo multiples of $2\pi$. For future use, we also define the unit normal corresponding to the angle $\varphi$ as \begin{equation}\label{operator:eq-lh-p8} \nphi := \big( \cos \varphi , \, \sin \varphi \big). \end{equation} For any $k\in \Z^2$ and $l\in \Z^2$ satisfying $|k|\sim K$ and $|l| \gtrsim K^{100}$, it holds that \begin{equation*} \Big| \angle ( k+l , e_1 ) - \angle (l, e_1) \Big| \lesssim K^{-99}. \end{equation*} Using almost orthogonality and Lemma \ref{prelim:lem-maxima-random}, it therefore suffices to estimate \begin{equation}\label{operator:eq-lh-p9} e^{i\sigma t |\nabla|} \int_0^t \dt^\prime \, e^{-i\sigma t^\prime |\nabla|} \Big( A^\alpha_K \cdot \Qphi P_{\geq K^{100}} \phi_{\lo} \Big) \end{equation} for a fixed $\varphi \in (-\pi,\pi]$. We now simplify the action of $e^{-i\sigma t^\prime |\nabla|}$ on the product of $A^\alpha_K$ and $\Qphi P_{\geq K^{100}} \phi_{\lo}$. For any $k,l\in \Z^2$ satisfying $|k|\sim K$, $|l|\gtrsim K^{100}$, and $|\angle (l, e_1 ) -\varphi|\lesssim K^{-99}$, it holds that \begin{equation}\label{operator:eq-lh-p10} \Big| |k+ l| - \big( \nphi \cdot k + |l| \big) \Big| \lesssim K^{-98}. \end{equation} Then, it easily follows that \begin{equation}\label{operator:eq-lh-p11} \begin{aligned} &\Big\| e^{-i\sigma t^\prime |\nabla|} \Big( A^\alpha_K \cdot \Qphi P_{\geq K^{100}} \phi_{\lo} \Big) - \Big( e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \Big) \cdot \Big( e^{-i\sigma t^\prime |\nabla|} \Qphi P_{\geq K^{100}} \phi_{\lo} \Big) \Big\|_{L_x^2} \\ \lesssim& \, K^{-96} \big\| A_K^\alpha(t^\prime) \big\|_{L_x^2} \big\| \phi_{\lo} \big\|_{L_x^2}. \end{aligned} \end{equation} Due to \eqref{operator:eq-lh-p11}, it then remains to control \begin{equation}\label{operator:eq-lh-p12} e^{i\sigma t |\nabla|} \int_0^t \dt^\prime \, \Big( e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \Big) \cdot \Big( e^{-i\sigma t^\prime |\nabla|} \Qphi P_{\geq K^{100}} \phi_{\lo} \Big) . \end{equation} \emph{Step (E): Inserting the half-wave decomposition of $\phi_{\lo}$.} Using \eqref{operator:eq-lh-p5}, it holds that \begin{align} &e^{i\sigma t |\nabla|} \int_0^t \dt^\prime \, \Big( e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \Big) \cdot \Big( e^{-i\sigma t^\prime |\nabla|} \Qphi P_{\geq K^{100}} \phi_{\lo} \Big) \notag \\ =& \, \sum_{\sigma^\prime=\pm 1} \int_{\R} \dlambda \mathbf{1} \big\{ |\lambda| \leq \Lambda \big\} e^{i\sigma t |\nabla|} \int_0^t \dt^\prime \, \bigg[ \Big( e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \Big) \label{operator:eq-lh-p13} \\ &\times \, \Big( e^{it^\prime \lambda} e^{i(\sigma^\prime-\sigma)t^\prime |\nabla|} \Qphi P_{\geq K^{100}} \phi_{\lo}^{(\sigma^\prime)}(\lambda) \Big) \bigg].
\notag \end{align} If $\sigma^\prime \neq \sigma$, integration by parts in $t^\prime$ gains a factor of \begin{equation*} K^{-100} \big( \Lambda + K \big) \lesssim K^{-99}, \end{equation*} so that the corresponding contribution can be controlled using crude arguments. Thus, it remains to treat the case $\sigma^\prime=\sigma$. In this case, we can simplify \eqref{operator:eq-lh-p13}, which can be written as \begin{equation}\label{operator:eq-lh-p14} \begin{aligned} \int_{\R} \dlambda \mathbf{1} \big\{ |\lambda| \leq \Lambda \big\} \, e^{i\sigma t |\nabla|} \bigg[ \bigg(\int_0^t \dt^\prime \, e^{it^\prime\lambda} e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \bigg) \cdot \Qphi P_{\geq K^{100}} \phi_{\lo}^{(\sigma)}(\lambda) \bigg]. \end{aligned} \end{equation} The most important aspect of \eqref{operator:eq-lh-p14} is that there no longer are any $t^\prime$-dependent operators acting on $\phi^{(\sigma)}_{\lo}(\lambda)$, which can therefore be pulled outside of the $t^\prime$-integral. \\ \emph{Step (F): Finishing up the proof.} It remains to estimate \eqref{operator:eq-lh-p14}. We have that \begin{align} &\bigg\| \int_{\R} \dlambda \mathbf{1} \big\{ |\lambda| \leq \Lambda \big\} \, e^{i\sigma t |\nabla|} \bigg[ \bigg(\int_0^t \dt^\prime \, e^{it^\prime\lambda} e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \bigg) \cdot \Qphi P_{\geq K^{100}} \phi_{\lo}^{(\sigma)}(\lambda) \bigg] \bigg\|_{X^{0,b_+}([0,T])} \notag \\ \leq& \, \int_{\R} \dlambda \, \bigg\| e^{i\sigma t |\nabla|} \bigg[ \bigg(\int_0^t \dt^\prime \, e^{it^\prime\lambda} e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \bigg) \cdot \Qphi P_{\geq K^{100}} \phi_{\lo}^{(\sigma)}(\lambda) \bigg] \bigg\|_{X^{0,b_+}([0,T])} \notag \\ \lesssim& \, \int_{\R} \dlambda \, \bigg\| \bigg(\int_0^t \dt^\prime \, e^{it^\prime\lambda} e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \bigg) \cdot \Qphi P_{\geq K^{100}} \phi_{\lo}^{(\sigma)}(\lambda) \bigg\|_{L_x^2 H_t^{b_+}(\T^2 \times [0,T])} \notag \\ \lesssim& \, \sup_{\substack{\lambda \in \R \colon \\ |\lambda| \leq \Lambda}} \bigg\| \bigg(\int_0^t \dt^\prime \, e^{it^\prime\lambda} e^{-\sigma t^\prime \nphi \cdot \nabla} A^\alpha_K \bigg) \bigg\|_{L_x^\infty H_t^{b_+}(\T^2\times [0,T])} \, \Big\| \Qphi P_{\geq K^{100}} \phi_{\lo}^{(\sigma)}(\lambda) \Big\|_{L_\lambda^1 L_x^2(\R\times \T^2)} \label{operator:eq-lh-p15}. \end{align} The first factor in \eqref{operator:eq-lh-p15} is now estimated, uniformly over $|\lambda| \leq \Lambda$, using Lemma \ref{vector:lem-dispersive-time-integral} and the second factor in \eqref{operator:eq-lh-p15} is estimated using Cauchy-Schwarz in $\lambda$ and the definition of the $X^{0,b_0}$-norm. \end{proof} Equipped with Proposition \ref{operator:prop-main}, Lemma \ref{operator:lem-resolvent} follows from a Gronwall-type argument. The only slightly non-standard aspect of the argument is that Proposition \ref{operator:prop-main} only yields bounds on intervals $[t_0,t_1]$ when $t_0=0$ and not for general $t_0 \geq 0$. For this reason, we sketch the (otherwise standard) argument. \begin{proof}[Proof of Lemma \ref{operator:lem-resolvent}:] We first prove the resolvent estimate \eqref{operator:eq-resolvent-1}, which is the main part of the argument. To this end, let $C_0=C_0(b_0,b_+)$ be sufficiently large.
Using Proposition \ref{operator:prop-main} and a standard union-bound estimate over time-scales $T\in \dyadic$, we obtain that there exists an event $E^{(A)}_\lambda \in \sigma_A$ which satisfies the probability estimate \eqref{operator:eq-event-probability} and such that, on this event, we have for all $t\geq 0$ that \begin{equation}\label{operator:eq-resolvent-p2} \sup_N \big\| \Lin[ll][\leq N] \big\|_{X^{\nu,b_0}([0,t])\rightarrow X^{\nu,b_+}([0,t])} \leq C_0 \max(1,|t|)^{C_0} \lambda. \end{equation} For the rest of this argument, we restrict ourselves to this event $E^{(A)}_\lambda$. \\ We now let $N\in \dyadic$, let $T\geq 1$, and let $v \in X^{\nu,b_0}([0,T])$. Using soft arguments (due to the finiteness of $N$), it follows that there exists a unique solution $\chi \in X^{\nu,b}([0,T])$ of \begin{equation}\label{operator:eq-resolvent-p3} \chi + \Lin[ll][\leq N] \chi = v. \end{equation} In order to prove the resolvent bound \eqref{operator:eq-resolvent-1}, it remains to prove that \begin{equation}\label{operator:eq-resolvent-p4} \big\| \chi \big\|_{X^{\nu,b_0}([0,T])} \leq \exp\big( C (T\lambda)^C \big) \big\| v \big\|_{X^{\nu,b_0}([0,T])}. \end{equation} To this end, we let $\tau=\tau(T,\lambda)\in (0,1)$ be a time-step which remains to be chosen. For all $j\geq 1$, we define \begin{equation*} D_j := \big\| \chi \big\|_{X^{\nu,b_0}([0,j\tau])} + \big\| \Lin[ll][\leq N] \chi \big\|_{X^{\nu,b_0}([0,j\tau])}. \end{equation*} Assuming that $j\in \Nzero$ satisfies $(j+1)\tau \leq T$, we obtain from Lemma \ref{prelim:lem-Xnub} and \eqref{operator:eq-resolvent-p3} that \begin{align} D_{j+1} &= \big\| \chi \big\|_{X^{\nu,b_0}([0,(j+1)\tau])} + \big\| \Lin[ll][\leq N] \chi \big\|_{X^{\nu,b_0}([0,(j+1)\tau])} \notag \\ &\leq 2 \big\| \Lin[ll][\leq N] \chi \big\|_{X^{\nu,b_0}([0,(j+1)\tau])} + \big\| v \big\|_{X^{\nu,b_0}([0,(j+1)\tau])} \notag \\ &\lesssim_{b_0,b_+} \big\| \Lin[ll][\leq N] \chi \big\|_{X^{\nu,b_0}([0,j\tau])} + \tau^{b_+-b_0} \big\| \Lin[ll][\leq N] \chi \big\|_{X^{\nu,b_+}([0,(j+1)\tau])} + \big\| v \big\|_{X^{\nu,b_0}([0,T])}. \label{operator:eq-resolvent-p5} \end{align} Using \eqref{operator:eq-resolvent-p2}, it follows that \begin{align} \eqref{operator:eq-resolvent-p5} \, &\lesssim_{b_0,b_+} D_j + \tau^{b_+-b_0} T^{C_0} \lambda \big\| \chi \big\|_{X^{\nu,b_0}([0,(j+1)\tau])} + \big\| v \big\|_{X^{\nu,b_0}([0,T])} \notag \\ &\lesssim_{b_0,b_+} D_j + \big\| v \big\|_{X^{\nu,b_0}([0,T])} + \tau^{b_+-b_0} T^{C_0} \lambda D_{j+1}. \label{operator:eq-resolvent-p6} \end{align} We now let $C_1=C_1(b_0,b_+,C_0)\geq 1$ be sufficiently large. Assuming that the time-step $\tau$ satisfies \begin{equation*} C_1 (T\lambda)^{C_1} \tau \leq \frac{1}{2}, \end{equation*} it follows from \eqref{operator:eq-resolvent-p6} that \begin{equation}\label{operator:eq-resolvent-p7} D_{j+1} \leq C_1 \Big( D_j + \big\| v \big\|_{X^{\nu,b_0}([0,T])} \Big). \end{equation} The desired estimate \eqref{operator:eq-resolvent-p4} now follows by simply iterating \eqref{operator:eq-resolvent-p7} and our condition on $\tau$. This completes the proof of \eqref{operator:eq-resolvent-1}.\\ It remains to prove the second resolvent estimate \eqref{operator:eq-resolvent-2} and the two estimates \eqref{operator:eq-chi-1} and \eqref{operator:eq-chi-2} for the structured component. The second resolvent estimate \eqref{operator:eq-resolvent-2} follows directly from the resolvent identity, the first resolvent estimate \eqref{operator:eq-resolvent-1}, and the decay in $K$ in Proposition \ref{operator:prop-main}. 
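For the reader's convenience, we record the resolvent identity used in this step: for all $M,N \in \dyadic$, it holds that \begin{equation*} \big( 1 + \Lin[ll][\leq M] \big)^{-1} - \big( 1 + \Lin[ll][\leq N] \big)^{-1} = \big( 1 + \Lin[ll][\leq M] \big)^{-1} \Big( \Lin[ll][\leq N] - \Lin[ll][\leq M] \Big) \big( 1 + \Lin[ll][\leq N] \big)^{-1}, \end{equation*} which is the elementary algebraic identity $B^{-1} - C^{-1} = B^{-1} ( C - B ) C^{-1}$ applied to $B = 1 + \Lin[ll][\leq M]$ and $C = 1 + \Lin[ll][\leq N]$.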
The estimates \eqref{operator:eq-chi-1} and \eqref{operator:eq-chi-2} for the structured component follow directly from the two resolvent estimates \eqref{operator:eq-resolvent-1} and \eqref{operator:eq-resolvent-2} and the regularity estimate for $z$ (Corollary \ref{vector:cor-regularity-z}). \end{proof} At the end of this subsection, we now present the proof of Lemma \ref{operator:lem-commutator}. \begin{proof}[Proof of Lemma \ref{operator:lem-commutator}:] We only prove \eqref{operator:eq-commutator-1}, since \eqref{operator:eq-commutator-2} follows from a minor modification. First, we observe the commutator identity \begin{equation}\label{operator:eq-commutator-p1} \big[ P_L , (1+\Lin[ll][\leq N])^{-1} \big] = - (1+\Lin[ll][\leq N])^{-1} \big[ P_L, 1+\Lin[ll][\leq N] \big] (1+\Lin[ll][\leq N])^{-1}. \end{equation} Due to \eqref{operator:eq-commutator-p1} and Lemma \ref{operator:lem-resolvent}, it suffices to prove for all $L\in \dyadic$, $p\geq 1$, and $T\geq 1$ that \begin{equation}\label{operator:eq-commutator-p2} \E \Big[ \sup_{N \in \dyadic} \Big\| \Big[ P_L, \Lin[ll][\leq N] \Big] \Big\|_{X^{\nu,b_0}([0,T])\rightarrow X^{\nu,b_0}([0,T])}^p \Big]^{1/p} \lesssim \sqrt{p} T^\theta L^{-1/4+\delta_1-\kappa}. \end{equation} To this end, we further decompose \begin{equation}\label{operator:eq-commutator-p3} \Big[ P_L, \Lin[ll][\leq N] \Big] = 2i \sum_{\substack{K,L^\prime \in \dyadic \colon \\ K \ll L \sim L^\prime}} \Duh \Big[ P_L \Big( \partial_\alpha \big( A^\alpha_K \, P_{L^\prime} \phi\big) \Big) - \partial_\alpha \big( A^\alpha_K \, P_L P_{L^\prime} \phi \big) \Big]. \end{equation} We now note that, for any $k,l\in \Z^2$, it holds that \begin{equation}\label{operator:eq-commutator-p4} \Big| \rho_L(k+l) \rho_K(k) \rho_{L^\prime}(l) - \rho_L(l) \rho_K(k) \rho_{L^\prime}(l) \Big| \lesssim K L^{-1} \rho_K(k) \rho_{L^\prime}(l). \end{equation} By combining a Fourier expansion of \eqref{operator:eq-commutator-p3}, the estimate \eqref{operator:eq-commutator-p4}, and the basic random operator estimate (Lemma \ref{operator:lem-basic}), it follows that \begin{align*} &\E \bigg[ \bigg\| \Duh \Big[ P_L \Big( \partial_\alpha \big( A^\alpha_K \, P_{L^\prime} \phi\big) \Big) - \partial_\alpha \big( A^\alpha_K \, P_L P_{L^\prime} \phi \big) \Big] \bigg\|_{X^{\nu,b_0}([0,T]) \rightarrow X^{\nu,b_+}([0,T])}^p \bigg]^{1/p} \\ \lesssim& \, \sqrt{p} T^\theta K^{3/4+\delta} L^{-1} \lesssim \sqrt{p} T^\theta L^{-1/4+\delta}. \end{align*} This implies the desired estimate \eqref{operator:eq-commutator-p2}. \end{proof} \section{A product estimate and its applications}\label{section:product} In this section, we control the contributions of \begin{equation}\label{product:eq-motivation} \partial_\alpha \big( A^\alpha_{\leq N} \parasim \chi_{\leq N} \big) \qquad \text{and} \qquad \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \, \chi_{\leq N}, \end{equation} which involve the structured component $\chi_{\leq N} = (1+\Lin[ll][\leq N])^{-1} z$ (as introduced in Section \ref{section:ansatz}). Due to the negative regularity of $A^\alpha_{\leq N}$ and $\chi_{\leq N}$, neither term in \eqref{product:eq-motivation} can be defined (even as a space-time distribution) using purely deterministic arguments. Instead, our argument will rely heavily on the independence of the vector potential $A^\alpha$ and the linear stochastic object $z$. Our main estimates are collected in the following proposition. 
\begin{proposition}\label{product:prop-main} Let $C=C(b_0,b_+,\delta_1,\delta_2)\geq 1$ be sufficiently large and let $0<c\leq 1$ be sufficiently small. Then, for all $\lambda \geq 1$, there exists an event $E_\lambda$ which satisfies \begin{equation}\label{product:eq-main-probability} \mathbb{P}\big( E_\lambda \big) \geq 1 - c \exp\big( - \lambda \big) \end{equation} and such that, on this event, the following estimates are satisfied: \begin{enumerate}[label=(\roman*)] \item (Cubic estimate) For all $T\geq 1$ and $M,N \in \dyadic$, it holds that \begin{align} &\Big\| \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \, \chi_{\leq N} \Big\|_{L_t^2 H_x^{-\delta_2}([0,T]\times \T^2)} \leq \exp\big( C(T\lambda)^C\big), \label{product:eq-cubic-1} \\ &\Big\| \Big( A_{\leq M,\alpha} A^\alpha_{\leq M} - \scrm^{\hspace{-0.4ex}2}_{\leq M} \Big) \, \chi_{\leq M} - \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \, \chi_{\leq N} \Big\|_{L_t^2 H_x^{-\delta_2}([0,T]\times \T^2)} \label{product:eq-cubic-2} \\ \leq&\, \exp\big( C(T\lambda)^C\big) \min(M,N)^{-\kappa}. \notag \end{align} \item (High$\times$high-estimate) For all $T\geq 1$ and $M,N\in \dyadic$, it holds that \begin{align} \big\| \Lin[sim][\leq N] \chi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}([0,T])} &\leq \exp\big( C (T\lambda)^C \big), \label{product:eq-hh-1} \\ \big\| \Lin[sim][\leq M] \chi_{\leq M} - \Lin[sim][\leq N] \chi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}([0,T])} &\leq \exp\big( C (T\lambda)^C \big) \min(M,N)^{-\kappa}. \label{product:eq-hh-2} \end{align} \end{enumerate} \end{proposition} \begin{remark} Proposition \ref{product:prop-main} can likely also be proven by combining Lemma \ref{operator:lem-resolvent} with the tensor merging estimates from \cite{DNY20}. However, we present a more elementary argument, which requires less of the tensor machinery. \end{remark} The main ingredient in the proof of Proposition \ref{product:prop-main} is a product estimate. Due to its importance, the product estimate is captured in its own lemma. \begin{lemma}[Product estimate]\label{product:lem-product} Let $T\geq 1$, let $2\leq q <\infty$, and let $z\colon \R_t \times \T_x^2 \rightarrow \R$ be as in \eqref{ansatz:eq-z}. Furthermore, let $f\colon [0,T] \times \T_x^2 \rightarrow \C$, let $\Mc \colon X^{0,b_0}([0,T]) \rightarrow X^{0,b_0}([0,T])$, and assume that both $f$ and $\Mc$ are probabilistically independent of $z$. Then, it holds for all $K,L \in \dyadic$ satisfying $K\sim L$ and all $p\geq 1$ that \begin{equation} \begin{aligned} &\E_z \Big[ \Big\| P_K f \, \Mc P_L z \Big\|_{L_t^q H_x^{-1}([0,T]\times \T_x^2)}^p \Big]^{1/p} \\ \lesssim& \, \sqrt{p} T^\theta K^{-1+\delta_1} \| f \|_{L_t^q L_x^2([0,T])} \| \Mc \|_{X^{0,b_0}([0,T])\rightarrow X^{0,b_0}([0,T])}. \end{aligned} \end{equation} Here, $\E_z$ denotes the expectation taken only with respect to the $\sigma$-Algebra $\sigma_z$. \end{lemma} \begin{remark} Due to the frequency-scales, time-scales, moments, and $X^{0,b}$-spaces, the statement of Lemma \ref{product:lem-product} is much more complicated than the main idea behind its proof. To illustrate the main idea, we state a toy-version of the same estimate. To this end, let $f\in \ell^2(\Z^2)$ be deterministic, let $\Mc \colon \ell^2(\Z^2) \rightarrow \ell^2(\Z^2)$ be deterministic, and let $(g_m)_{m\in \Z^2}$ be a sequence of independent, standard complex-valued Gaussian random variables. 
Then, it is easy to show that \begin{equation} \sup_{n\in \Z^2} \E \Big[ \Big| \sum_{\substack{k,l\in \Z^2 \colon \\ k+l =n}} f_k \big( \Mc g \big)_l \Big|^2 \Big] \lesssim \big\| \Mc^\ast f \big\|_{\ell^2}^2 \lesssim \big\| \Mc^\ast \big\|_{\ell^2 \rightarrow \ell^2}^2 \big\| f \big\|_{\ell^2}^2 = \big\| \Mc \big\|_{\ell^2 \rightarrow \ell^2}^2 \big\| f \big\|_{\ell^2}^2. \end{equation} \end{remark} \begin{proof}[Proof of Lemma \ref{product:lem-product}:] In the following, all space-time norms are implicitly restricted to $[0,T]$. Using Gaussian hypercontractivity (Lemma \ref{prelim:lem-hypercontractivity}), it suffices to treat the case $p=q$. Furthermore, since the conclusion is trivial for $K,L\lesssim 1$, we can assume that $K,L \gg 1$. \\ We now make use of a small trick which allows us to easily use Lemma \ref{prelim:lem-Xnub-to-ell2}. To this end, let $(Z_l)_{l\in \Z^2}$ be the Gaussian process from Subsection \ref{section:probability}. Furthermore, let $(\sigma_l)_{l\in \Z^2}$ be a sequence of independent Bernoulli variables. Due to the rotation invariance of complex-valued Brownian motions, the two sequences $(\sigma_l)_{l\in \Z^2}$ and $(\sigma_l Z_l)_{l\in \Z^2}$ are still independent. We now note that the stochastic object $P_L z$ can be written as \begin{equation}\label{product:eq-product-p1} \begin{aligned} P_L z &= \sum_{\substack{l\in \Z^2 \colon \\ |l|\sim L}} e^{i\langle l, x \rangle} \int_0^t \mathrm{d}Z_s(l) \frac{\sin\big((t-s)|l|\big)}{|l|} \\ &= \sum_{\substack{l \in \Z^2 \colon \\ |l| \sim L}} \sigma_l e^{i\langle l,x \rangle} \int_0^t \mathrm{d}\big( \sigma_l Z_s(l)\big) \frac{\sin\big((t-s)|l|\big)}{|l|} \\ &=: \sum_{\substack{l \in \Z^2 \colon \\ |l| \sim L}} \sigma_l e^{i\langle l, x \rangle} \varphi_l(t). \end{aligned} \end{equation} Similar as discussed above, we note that $(\sigma_l)_{l\in \Z^2}$ and $(\varphi_l)_{l\in \Z^2}$ are independent. In the following, we denote the corresponding expectations by $\E_\sigma$ and $\E_\varphi$ and the corresponding $L^p$-spaces by $L_\sigma^p$ and $L_\varphi^p$, respectively. We now define $\Mc^{(\varphi)}$ similar as in Lemma \ref{prelim:lem-Xnub-to-ell2}, i.e., for all $l \in \Z^2$ and $v\in \ell^2(\Z^2)$, we define \begin{equation*} \Mc^{(\varphi)}_l := \Mc \big( e^{i \langle l, x\rangle} \varphi_l(t) \big) \quad \text{and} \qquad \Mc^{(\varphi)} v := \sum_{\substack{l \in \Z^2 \colon \\ |l| \sim L}} \Mc^{(\varphi)}_l v_l. \end{equation*} After these preparations, we now proof the product estimate. To this end, let $N\in \dyadic$. Using Minkowski's integral inequality, \eqref{product:eq-product-p1}, and Khintchine's inequality, it holds that \begin{equation}\label{product:eq-product-p2} \begin{aligned} \Big\| P_N \Big( P_K f(t) \Mc P_L z(t) \Big) \Big \|_{L_\varphi^q L_\sigma^q L_t^q L_x^2} \lesssim& \, \Big\| P_N \Big( P_K f(t) \Mc P_L z(t) \Big) \Big \|_{L_\varphi^q L_t^q L_x^2 L_\sigma^q} \\ \lesssim_q &\, \bigg\| \Big( \sum_{\substack{l \in \Z^2 \colon \\ |l|\sim L}} \big| P_N \big(P_K f(t) \Mc_l^{(\varphi)} \big) \big|^2 \Big)^{1/2} \bigg\|_{L_\varphi^q L_t^q L_x^2}. 
\end{aligned} \end{equation} Using Plancherell's theorem, it holds that \begin{equation}\label{product:eq-product-p3} \begin{aligned} \bigg\| \Big( \sum_{\substack{l \in \Z^2 \colon \\ |l|\sim L}} \big| P_N \big(P_K f(t) \Mc_l^{(\varphi)} \big) \big|^2 \Big)^{1/2} \bigg\|_{L_x^2}^2 &=\sum_{\substack{l\in \Z^2 \colon \\ |l| \sim L }} \Big\| P_N \Big( P_K f(t) \Mc^{(\varphi)}_l(t) \Big)\Big\|_{L_x^2}^2 \\ &\lesssim \sum_{\substack{n \in \Z^2 \colon \\ |n|\sim N}} \sum_{\substack{l\in \Z^2 \colon \\ |l| \sim L }} \Big| \big\langle e^{i\langle n ,x \rangle} \overline{P_K f}(t), \Mc^{(\varphi)}_l(t) \big\rangle_{L^2_x} \Big|^2. \end{aligned} \end{equation} Using duality and the embedding $X^{0,b_0}\hookrightarrow L_t^\infty L_x^2$, it holds for all $n\in \Z^2$ that \begin{equation}\label{product:eq-product-p4} \begin{aligned} &\sum_{\substack{l\in \Z^2 \colon \\ |l| \sim L }} \Big| \big\langle e^{i\langle n ,x \rangle} \overline{P_K f}(t), \Mc^{(\varphi)}_l(t) \big\rangle_{L^2_x} \Big|^2 \\ \leq& \, \big\| e^{i \langle n,x \rangle} \overline{P_K f}(t) \big\|_{L_x^2}^2 \big\| \Mc^{(\varphi)}(t) \big\|_{\ell^2\rightarrow L_x^2}^2 \leq \big\| f(t) \big\|_{L_x^2}^2 \big\| \Mc^{(\varphi)}\big\|_{\ell^2 \rightarrow L_t^\infty L_x^2}^2 \lesssim \big\| f(t) \big\|_{L_x^2}^2 \big\| \Mc^{(\varphi)}\big\|_{\ell^2 \rightarrow X^{0,b_0}}^2. \end{aligned} \end{equation} By combining \eqref{product:eq-product-p2}, \eqref{product:eq-product-p3}, and \eqref{product:eq-product-p4}, we obtain that \begin{equation}\label{product:eq-product-p5} \E_\varphi \E_\sigma \Big[ \Big\| P_N \Big( P_K f(t) \, \Mc P_L z (t) \Big) \Big\|_{L_t^q L_x^2}^q \Big]^{1/q} \\ \lesssim N \big\| f \big\|_{L_t^q L_x^2} \E_\varphi \Big[ \big\| \Mc^{(\varphi)}\big\|_{\ell^2 \rightarrow X^{0,b_0}}^q \Big]^{1/q}. \end{equation} By using Lemma \ref{prelim:lem-Xnub-to-ell2}, Lemma \ref{prelim:lem-Xnub-ito}, and the definition of $\varphi_l$ from \eqref{product:eq-product-p1}, it follows that \begin{align*} \E_\varphi \Big[ \Big\| \Mc^{(\varphi)} \Big\|_{\ell^2 \rightarrow X^{0,b_0}}^q \Big]^{1/q} \lesssim& \, \big\| \Mc \big\|_{X^{0,b_0}\rightarrow X^{0,b_0}} \E_\varphi \Big[ \sup_{\substack{l \in \Z^2 \colon \\ |l|\sim L}} \big\| e^{i\langle l,x \rangle} \varphi_l(t) \big\|_{X^{0,b_0}}^q \Big]^{1/q} \\ \lesssim& \, L^{-1+\delta_1/2} T^\theta \big\| \Mc \big\|_{X^{0,b_0}\rightarrow X^{0,b_0}} . \end{align*} After inserting this back into \eqref{product:eq-product-p5}, we obtain the desired estimate. \end{proof} Equipped with Lemma \ref{product:lem-product}, we can now prove the main proposition of this section. \begin{proof}[Proof of Proposition \ref{product:prop-main}:] Let $\lambda\geq 1$ be arbitrary but fixed. In the following argument, we refer to events satisfying the probability estimate \eqref{product:eq-main-probability} simply as events with sufficiently high probability. We also note that, after possibly adjusting the choice of $0<c\ll 1$, the intersection of finitely many events with sufficiently high probability still has sufficiently high probability. Furthermore, we let $T\geq 1$ be arbitrary but fixed and choose all of our events as not depending on $T$. We also implicitly restrict all space-time norms to $[0,T]\times \T^2$. \\ We now split the proof into three substeps. In the first step, we prove a general frequency-localized estimate. In the second and third step, we then use the general frequency-localized estimate to show the cubic and high$\times$high-estimate, respectively. 
\\ \emph{Step 1: A general frequency-localized estimate.} Let $q\geq 2$ and let $f\colon \R \times \T^2 \rightarrow \C$ be measurable w.r.t. $\sigma_A$. Then, there exists an event $E_{\lambda,f}$ with sufficiently high probability such that, on this event, the estimate \begin{equation}\label{product:eq-general-localized} \Big\| P_K f P_L \chi_{\leq N} \Big\|_{L_t^q H_x^{-\delta_2}([0,T]\times \T_x^2)} \lesssim \max(K,L)^{-\delta_2/10} \big\| P_K f \big\|_{L_t^\infty \Cs_x^{-\delta_1}} \exp\Big( C (T\lambda)^C \Big) \end{equation} is satisfied for all $K,L\in \dyadic$. In order to prove \eqref{product:eq-general-localized}, we first estimate \begin{equation*} \Big\| P_K f P_L \chi_{\leq N} \Big\|_{L_t^q H_x^{-\delta_2}([0,T]\times \T_x^2)} \leq \sum_{M\in \dyadic} M^{-\delta_2} \Big\| P_M \Big( P_K f P_L \chi_{\leq N} \Big) \Big\|_{L_t^q L_x^2([0,T]\times \T_x^2)} \end{equation*} We now use different estimates depending on the relative sizes of $K$, $L$, and $M$. \\ \emph{Step 1.(a): Case $M\geq \max(K,L)^{1/5}$.} Using Lemma \ref{prelim:lem-Xnub} and Lemma \ref{operator:lem-resolvent}, it holds on an event with sufficiently high probability that \begin{align*} M^{-\delta_2} \Big\| P_M \Big( P_K f P_L \chi_{\leq N} \Big) \Big\|_{L_t^q L_x^2([0,T]\times \T_x^2)} &\lesssim T M^{-\delta_2} \Big\| P_K f P_L \chi_{\leq N} \Big\|_{L_t^\infty L_x^2([0,T]\times \T_x^2)} \\ &\lesssim T M^{-\delta_2} \big\| P_K f \big\|_{L_t^\infty L_x^\infty} \big\| P_L \chi_{\leq N} \big\|_{L_t^\infty L_x^2} \\ &\lesssim T M^{-\delta_2} \big\| P_K f \big\|_{L_t^\infty L_x^\infty} \big\| P_L \chi_{\leq N} \big\|_{X^{0,b_0}} \\ &\lesssim T M^{-\delta_2} K^{\delta_1} L^{\delta_1} \big\| P_K f \big\|_{L_t^\infty \Cs_x^{-\delta_1}} \exp\Big( C (T\lambda)^C \Big). \end{align*} Since $M\geq \max(K,L)^{1/5}$ and $\delta_1\ll \delta_2$, this yields an acceptable contribution.\\ \emph{Step 1.(b): Case $M<\max(K,L)^{1/5}$.} In this case, it automatically holds that $K\sim L$. We recall that $\chi_{\leq N}=(1+\Lin[ll][\leq N])^{-1} z$ and decompose \begin{align} P_M \Big( P_K f P_L \chi_{\leq N} \Big) &= P_M \Big( P_K f \, (1+\Lin[ll][\leq N])^{-1} P_L z \Big) \label{product:eq-general-p1} \\ &+P_M \Big( P_K f \, \Big[ P_L, \big( 1+ \Lin[ll][\leq N]\big)^{-1} \Big] \, z \Big) \label{product:eq-general-p2} \end{align} We now estimate \eqref{product:eq-general-p1} and \eqref{product:eq-general-p2} separately. \\ \emph{Estimate of \eqref{product:eq-general-p1}:} Using Lemma \ref{product:lem-product}, we obtain on an event with sufficiently high probability that \begin{align*} \Big\| P_M \Big( P_K f \, (1+\Lin[ll][\leq N])^{-1} P_L z \Big) \Big\|_{L_t^q H_x^{-\delta_2}} & \lesssim M^{1-\delta_2} \, \Big\| P_K f \, (1+\Lin[ll][\leq N])^{-1} P_L z \Big\|_{L_t^q H_x^{-1}} \\ & \lesssim M^{1-\delta_2} L^{-1+\delta_1} \Big\| P_K f \Big\|_{L_t^q L_x^2} \Big\| (1+\Lin[ll][\leq N])^{-1} \Big\|_{X^{0,b_0}\rightarrow X^{0,b_0}}. \end{align*} Using Lemma \ref{operator:lem-resolvent}, it follows that \begin{align*} &M^{1-\delta_2} L^{-1+\delta_1} \Big\| P_K f \Big\|_{L_t^q L_x^2} \Big\| (1+\Lin[ll][\leq N])^{-1} \Big\|_{X^{0,b_0}\rightarrow X^{0,b_0}}\\ \lesssim& \, M^{1-\delta_2} K^{\delta_1} L^{-1+\delta_1} \Big\| P_K f \Big\|_{L_t^\infty \Cs_x^{-\delta_1}} \exp\big( C (T\lambda)^C \big). \end{align*} Since $K\sim L$ and $M < \max(K,L)^{1/5}$, this clearly yields an acceptable contribution. 
\\ \emph{Estimate of \eqref{product:eq-general-p2}:} Using Lemma \ref{prelim:lem-Xnub}, Corollary \ref{vector:cor-regularity-z}, and Lemma \ref{operator:lem-commutator}, we obtain on an event with sufficiently high probability that \begin{align*} \Big\| P_M \Big( f \, \Big[ P_L, \big( 1+ \Lin[ll][\leq N]\big)^{-1} \Big] \, z \Big) \Big\|_{L_t^q H_x^{-\delta_2}} \lesssim&\, \Big\| P_K f \Big\|_{L_t^\infty \Cs_x^{\delta_2}} \Big\| \Big[ P_L, \big( 1+ \Lin[ll][\leq N]\big)^{-1} \Big] \, z \Big\|_{L_t^\infty H_x^{-\delta_1}} \\ \lesssim&\, K^{2\delta_2} L^{-1/4+\delta_1} \exp\big( C (T\lambda)^C\big) \Big\| P_K f \Big\|_{L_t^\infty \Cs_x^{-\delta_2}} \big\| z \big\|_{X^{-\delta_1,b_0}} \\ \lesssim&\, K^{2\delta_2} L^{-1/4+\delta_1} \exp\big( C (T\lambda)^C\big). \end{align*} Since $K\sim L \gtrsim M$ and $\delta_2 \ll 1$, this clearly yields an acceptable contribution. This completes the proof of the general frequency-localized estimate \eqref{product:eq-general-localized}.\\ \emph{Step 2: Proof of the cubic estimate.} We only prove \eqref{product:eq-cubic-1}, since \eqref{product:eq-cubic-2} follows from minor modifications. Using \eqref{product:eq-general-localized}, we obtain on an event with sufficiently high probability that \begin{equation*} \begin{aligned} &\Big\| \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \, \chi_{\leq N} \Big\|_{L_t^2 H_x^{-\delta_2}} \\ \lesssim& \, \Big\| A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big\|_{L_t^\infty \Cs_x^{-\delta_1}} \exp\Big( C (T\lambda)^C \Big). \end{aligned} \end{equation*} Together with Lemma \ref{vector:lem-quadratic}, this yields the desired estimate. \\ \emph{Step 3: Proof of the high$\times$high-estimate.} We only prove \eqref{product:eq-hh-1}, since \eqref{product:eq-hh-2} follows from minor modifications. Using \eqref{product:eq-general-localized} and Corollary \ref{vector:cor-regularity}, we obtain on an event with sufficiently high probability that, for all $K,M,N\in \dyadic$, \begin{equation*} \big\| P_M \Lin[sim][K] \chi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}} \lesssim T \max_{\alpha=0,1,2} \big\| P_M \big( A_K^\alpha \parasim \chi_{\leq N} \big) \big\|_{L_t^2 H_x^{1/4-\delta_2}} \lesssim M^{1/4}K^{-\delta_2/10} \exp\big( C (T\lambda)^C \big). \end{equation*} Using Proposition \ref{operator:prop-main}.\ref{operator:item-high-high} and Lemma \ref{operator:lem-resolvent}, we also obtain on an event with sufficiently high probability that, for all $K,M,N\in \dyadic$, \begin{align*} \big\| P_M \Lin[sim][K] \chi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}} &\lesssim M^{-\delta_2} \big\| P_M \Lin[sim][K] \chi_{\leq N} \big\|_{X^{1/4,b_+}} \\ &\lesssim T^\theta M^{-\delta_2} \big\|\widetilde{P}_K \chi_{\leq N} \big\|_{X^{\delta_1,b_0}} \\ &\lesssim M^{-\delta_2} K^{2\delta_1} \exp\big( C (T\lambda)^C\big). \end{align*} Since \begin{equation*} \min\Big( M^{1/4}K^{-\delta_2/10} , M^{-\delta_2} K^{2\delta_1} \Big) \lesssim \Big( M^{1/4}K^{-\delta_2/10} \Big)^{\delta_2} \Big( M^{-\delta_2} K^{2\delta_1} \Big)^{1-\delta_2} \lesssim K^{-2\kappa}, \end{equation*} the combined estimate yields \eqref{product:eq-hh-1}. \end{proof} We also obtain the following $\Cs_t^0$-estimate, which essentially corresponds to the endpoint $q=\infty$ in \eqref{product:eq-general-localized} from the proof of Proposition \ref{product:prop-main}. The $\Cs_t^0$-estimate will be needed in the proof of Proposition \ref{proof:prop-phi} below. 
\begin{corollary}[The product of $A^0$ and $\chi$]\label{product:cor-Ct} Let $C=C(b_-,b_0,b_+,\delta_0,\delta_1,\delta_2)$ be sufficiently large and $0<c\ll 1$ be sufficiently small. Then, for all $\lambda \geq 1$, there exists an event $E_{\lambda}$ which satisfies \begin{equation}\label{product:eq-Ct-probability} \mathbb{P}\big( E_{\lambda} \big) \geq 1 - c \exp(-\lambda) \end{equation} and such that, on this event, the following estimates are satisfied: For all $M,N \in \dyadic$ and $T\geq 1$, it holds that \begin{align} \big\| A^0_{\leq N}(t) \chi_{\leq N}(t) \big\|_{\Cs_t^0 H_x^{-\delta_2}([0,T]\times \T^2)} &\leq \exp\big( C(T\lambda)^C \big) \label{product:eq-Ct-1}, \\ \big\| A^0_{\leq M}(t) \chi_{\leq M}(t) - A^0_{\leq N}(t) \chi_{\leq N}(t) \big\|_{\Cs_t^0 H_x^{-\delta_2}([0,T]\times \T^2)} &\leq \exp\big( C(T\lambda)^C \big) \min(M,N)^{-\kappa}. \label{product:eq-Ct-2} \end{align} \end{corollary} \begin{proof} We only prove the first estimate \eqref{product:eq-Ct-1}, since \eqref{product:eq-Ct-2} follows from minor modifications. Let $1\leq q=q(\delta_2,b_0)<\infty$ remain to be chosen. Using \eqref{product:eq-general-localized} from the proof of Proposition \ref{product:prop-main} and Corollary \ref{vector:cor-regularity}, we obtain on an event with sufficiently high probability that, for all $K,L,N\in \dyadic$, \begin{equation}\label{product:eq-Ct-p1} \big\| A^0_K(t) P_L \chi_{\leq N}(t) \big\|_{L_t^q H_x^{-\delta_2}} \leq \max(K,L)^{-\delta_2/10} \exp\big( C(T\lambda)^C \big). \end{equation} We now set $\gamma:= \frac{1}{2} (b_0-1/2)$. Using the Hölder estimate from Lemma \ref{prelim:lem-Xnub} and Corollary \ref{vector:cor-regularity}, we obtain on an event with sufficiently high probability that, for all $K,L,N\in \dyadic$, \begin{equation}\label{product:eq-Ct-p2} \begin{aligned} \big\| A^0_K(t) P_L \chi_{\leq N}(t) \big\|_{\Cs_t^\gamma H_x^{-\delta_2}} &\lesssim \big\| A^0_K(t) \big\|_{\Cs_t^\gamma \Cs_x^{\delta_2}} \big\| \chi_{\leq N} \big\|_{\Cs_t^\gamma H_x^{-\delta_1}} \lesssim \big\| A^0_K(t) \big\|_{\Cs_t^\gamma \Cs_x^{\delta_2}} \big\| \chi_{\leq N} \big\|_{X^{-\delta_1,b_0}} \\ &\leq K^{2\delta_2} \exp\big( C (T\lambda)^C \big). \end{aligned} \end{equation} As long as $q=q(\delta_2,b_0)$ is sufficiently large, \eqref{product:eq-Ct-p1} and \eqref{product:eq-Ct-p2} imply the desired estimate \eqref{product:eq-Ct-1}. \end{proof} \section{The smooth remainder and proof of the main theorem} \label{section:proof-thm} The goal of this section is to prove our main theorem (Theorem \ref{intro:thm-main}). To this end, we first prove two different estimates. In Proposition \ref{proof:prop-psi}, we control the smooth remainder $\psi_{\leq N}$ (from Section \ref{section:ansatz}). The corresponding estimate essentially follows from our estimates in Section \ref{section:vector}, Section \ref{section:operator}, and Section \ref{section:product}, and the argument therefore primarily consists of references to earlier estimates. Together with Lemma \ref{operator:lem-resolvent}, Proposition \ref{proof:prop-psi} yields convergence of $\phi_{\leq N}$ in $X^{-\delta,b_0}$. \\ In Proposition \ref{proof:prop-phi}, we obtain the convergence of $\phi_{\leq N}[t]=(\phi_{\leq N}(t),\partial_t \phi_{\leq N}(t))$ in $\Cs_t^0 \mathscr{H}_x^{-\delta}$. Due to the embedding $X^{-\delta,b_0}\hookrightarrow \Cs_t^0 H_x^{-\delta}$, the convergence of $\phi_{\leq N}(t)$ directly follows from our previous results. 
However, the convergence of the time-derivatives $\partial_t \phi_{\leq N}(t)$ has to be shown separately. This is primarily because of the contribution from $\partial_t (A_{\leq N}^0 \phi_{\leq N})$, since $\partial_t (A_{\leq N}^0 \phi_{\leq N})$ cannot be placed in $L_t^2 H_x^{-1-\delta}$. At the end of this section, we then prove Theorem \ref{intro:thm-main}, which follows almost directly from Corollary \ref{vector:cor-regularity} and Proposition \ref{proof:prop-phi}. \begin{proposition}[\protect{The smooth remainder $\psi_{\leq N}$}]\label{proof:prop-psi} Let $R\geq 1$ and let $(\phi_0,\phi_1) \in \mathscr{H}^{1/4}_x(\T^2)$ satisfy $\| (\phi_0,\phi_1)\|_{\mathscr{H}_x^{1/4}}\leq R$. Furthermore, let $C=C(b_-,b_0,b_+,\delta_0,\delta_1,\delta_2)$ be sufficiently large and $0<c\ll 1$ be sufficiently small. Then, for all $\lambda \geq 1$, there exists an event $E_{\lambda}$ which satisfies \begin{equation}\label{proof:eq-psi-probability} \mathbb{P}\big( E_{\lambda} \big) \geq 1 - c \exp(-\lambda) \end{equation} and such that, on this event, the following estimates are satisfied: For all $M,N \in \dyadic$ and $T\geq 1$, it holds that \begin{align} \big\| \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_0}([0,T])} &\leq R \exp\big( C (T\lambda)^C \big), \label{proof:eq-psi-1} \\ \big\| \psi_{\leq M} - \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_0}([0,T])} &\leq R \exp\big( C (T\lambda)^C \big) \min(M,N)^{-\kappa}. \label{proof:eq-psi-2} \end{align} \end{proposition} \begin{proof} We only prove \eqref{proof:eq-psi-1}, since \eqref{proof:eq-psi-2} follows from minor modifications. As in the proof of Proposition \ref{product:prop-main}, we let $\lambda\geq 1$ be arbitrary but fixed and refer to events satisfying \eqref{proof:eq-psi-probability} as having sufficiently high probability. We also let $T\geq 1$ be arbitrary but fixed and implicitly restrict all space-time norms to $[0,T] \times \T^2$. \\ We let $\tau = \tau(T,\lambda)\in (0,1)$ remain to be chosen and define \begin{equation*} D_j := \big\| \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_0}([0,j\tau])} \end{equation*} for all $j\in \Nzero$. If $j\in \Nzero$ satisfies $(j+1)\tau\leq T$, it follows from Lemma \ref{prelim:lem-Xnub} that \begin{equation}\label{proof:eq-psi-p0} \begin{aligned} D_{j+1} &= \big\| \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_0}([0,(j+1)\tau])} \\ &\lesssim D_j + \tau^{b_+-b_0} \big\| \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])}. \end{aligned} \end{equation} Using the evolution equation for $\psi_{\leq N}$, i.e., \eqref{ansatz:eq-psi-1}-\eqref{ansatz:eq-psi-2}, it follows that \begin{align} &\hspace{3ex}\big\| \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])} \notag \\ &\lesssim \big\| \Lin[ll][\leq N] \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])} \label{proof:eq-psi-p1} \\ &+ \big\| \Lin[sim][\leq N] \big( \chi_{\leq N} + \psi_{\leq N} \big) \big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])} \label{proof:eq-psi-p2} \\ &+ \big\| \Lin[gg][\leq N] \big( \chi_{\leq N} + \psi_{\leq N} \big) \big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])} \label{proof:eq-psi-p3} \\ &+ \Big\| \Duh \Big[ \big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \big) \, \big( \chi_{\leq N} +\psi_{\leq N} \big) \Big] \Big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])} \label{proof:eq-psi-p4} \\ &+ \big\| \mathcal{W}(t) (\phi_0,\phi_1) \big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])} \label{proof:eq-psi-p5}. \end{align} We now let $C_0=C_0(b_0,b_+,\delta_1,\delta_2)$ be sufficiently large. 
Then, we claim that, on an event with sufficiently high probability, it holds that \begin{equation}\label{proof:eq-psi-p6} \begin{aligned} &\eqref{proof:eq-psi-p1} + \eqref{proof:eq-psi-p2} + \eqref{proof:eq-psi-p3} + \eqref{proof:eq-psi-p4} + \eqref{proof:eq-psi-p5} \\ \leq& \, C_0 (T\lambda)^{C_0} \Big( \big\| \psi_{\leq N} \big\|_{X^{1/4-\delta_2,b_+}([0,(j+1)\tau])} + R \Big) + \exp\big( C_0 (T\lambda)^{C_0} \big). \end{aligned} \end{equation} The estimates of the five individual terms in \eqref{proof:eq-psi-p6} can now be obtained as follows: \begin{itemize} \item[$\bullet$] For \eqref{proof:eq-psi-p1}, this estimate follows from Proposition \ref{operator:prop-main}.\ref{operator:item-low-high}. \item[$\bullet$] For \eqref{proof:eq-psi-p2}, this estimate follows from Proposition \ref{product:prop-main} (for the $\chi_{\leq N}$-term) and Proposition \ref{operator:prop-main}.\ref{operator:item-high-high} (for the $\psi_{\leq N}$-term). \item[$\bullet$] For \eqref{proof:eq-psi-p3}, this estimate follows from Proposition \ref{operator:prop-main}.\ref{operator:item-high-low}. \item[$\bullet$] For \eqref{proof:eq-psi-p4}, this estimate follows from Proposition \ref{product:prop-main} (for the $\chi_{\leq N}$-term) and just Corollary \ref{vector:cor-regularity} (for the $\psi_{\leq N}$-term). \item[$\bullet$] Finally, for \eqref{proof:eq-psi-p5}, this estimate follows from Lemma \ref{prelim:lem-Xnub}. \end{itemize} By combining \eqref{proof:eq-psi-p0} and \eqref{proof:eq-psi-p6}, it now follows that \begin{equation*} D_{j+1} \leq C_0 \Big( D_j + (T\lambda)^{C_0} R + \exp\big( C_0 (T\lambda)^{C_0}\big) \Big) + \tau^{b_+-b_0} C_0 (T\lambda)^{C_0} D_{j+1}. \end{equation*} Similar as in the proof of Lemma \ref{operator:lem-resolvent}, the desired estimate now follows by first choosing $\tau=\tau(\lambda,T)$ and then iterating. \end{proof} \begin{proposition}[Control of $\phi_{\leq N}$]\label{proof:prop-phi} Let $R\geq 1$ and let $(\phi_0,\phi_1) \in \mathscr{H}^{1/4}_x(\T^2)$ satisfy $\| (\phi_0,\phi_1)\|_{\mathscr{H}_x^{1/4}}\leq R$. Furthermore, let $C=C(b_-,b_0,b_+,\delta_0,\delta_1,\delta_2)$ be sufficiently large and $0<c\ll 1$ be sufficiently small. Then, for all $\lambda \geq 1$, there exists an event $E_{\lambda}$ which satisfies \begin{equation}\label{proof:eq-phi-probability} \mathbb{P}\big( E_{\lambda} \big) \geq 1 - c \exp(-\lambda) \end{equation} and such that, on this event, the following estimates are satisfied: For all $M,N \in \dyadic$ and $T\geq 1$, it holds that \begin{align} \big\| \phi_{\leq N}[t] \big\|_{\Cs_t^0 \mathscr{H}_x^{-\delta}([0,T]\times \T^2)} &\leq R \exp\big( C (T\lambda)^C \big), \label{proof:eq-phi-1} \\ \big\| \phi_{\leq M}[t] - \phi_{\leq N}[t] \big\|_{\Cs_t^0 \mathscr{H}_x^{-\delta}([0,T]\times \T^2)} &\leq R \exp\big( C (T\lambda)^C \big) \min(M,N)^{-\kappa}. \label{proof:eq-phi-2} \end{align} \end{proposition} \begin{proof}[Proof of Proposition \ref{proof:prop-phi}:] We only prove \eqref{proof:eq-phi-1}, since \eqref{proof:eq-phi-2} follows from minor modifications. Using Proposition \ref{ansatz:prop-ansatz}, it follows that $\phi_{\leq N}=\chi_{\leq N}+\psi_{\leq N}$. Then, the bound on $\phi_{\leq N}$ in $\Cs_t^0 H_x^{-\delta}$ follows directly from the embedding in Lemma \ref{prelim:lem-Xnub}, Lemma \ref{operator:lem-resolvent}, and Proposition \ref{proof:prop-psi}. Thus, it remains to control the time-derivative. \\ In the following proof, all space-time norms are restricted to $[0,T]\times \T^2$. 
From \eqref{ansatz:eq-phiN-a}, it follows that \begin{align} \partial_t \phi_{\leq N} &= - 2 i \partial_t \Duh \Big[ \partial_0 \big( A_{\leq N}^0 \phi_{\leq N} \big) \Big] \label{proof:eq-time-p1} \\ &- 2 i \partial_t \Duh \Big[ \partial_a \big( A_{\leq N}^a \phi_{\leq N} \big) \Big]\label{proof:eq-time-p2} \\ &+ \partial_t \Duh \Big[ \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \phi_{\leq N} \Big] \label{proof:eq-time-p3} \\ &+ \partial_t z + \partial_t \mathcal{W}(t) (\phi_0,\phi_1). \label{proof:eq-time-p4} \end{align} We first address the contribution of \eqref{proof:eq-time-p1}, which is the main term. Using $A^0_{\leq N}(0)=0$ and integration by parts, it follows that \begin{equation}\label{proof:eq-time-p9} \begin{aligned} \partial_t \Duh \Big[ \partial_0 \big( A_{\leq N}^0 \phi_{\leq N} \big) \Big] =& \partial_t \int_0^t \frac{\sin \big( (t-t^\prime) |\nabla|\big)}{|\nabla|} \partial_{t^\prime} \big(A^0_{\leq N}(t^\prime) \phi_{\leq N}(t^\prime) \big) \\ =& \int_0^t \cos \big( (t-t^\prime) |\nabla| \big) \partial_{t^\prime} \big(A^0_{\leq N}(t^\prime) \phi_{\leq N}(t^\prime) \big) \\ =& A^0_{\leq N}(t) \phi_{\leq N}(t) - |\nabla| \int_0^t \dt^\prime \sin \big( (t-t^\prime) |\nabla| \big) \big( A^0_{\leq N}(t^\prime) \phi_{\leq N}(t^\prime) \big). \end{aligned} \end{equation} We now estimate \begin{equation}\label{proof:eq-time-p7} \big\| A_{\leq N}^0(t) \phi_{\leq N}(t) \big\|_{\Cs_t^0 H_x^{-\delta}} \lesssim \big\| A_{\leq N}^0(t) \chi_{\leq N}(t) \big\|_{\Cs_t^0 H_x^{-\delta}} + \big\| A_{\leq N}^0(t) \psi_{\leq N}(t) \big\|_{\Cs_t^0 H_x^{-\delta}}. \end{equation} The first summand in \eqref{proof:eq-time-p7} is the subject of Corollary \ref{product:cor-Ct} and the second summand in \eqref{proof:eq-time-p7} can be estimated using Corollary \ref{vector:cor-regularity} and Proposition \ref{proof:prop-psi}. We further estimate \begin{equation}\label{proof:eq-time-p8} \begin{aligned} &\Big\| |\nabla| \int_0^t \dt^\prime \sin \big( (t-t^\prime) |\nabla| \big) \big( A^0_{\leq N}(t^\prime) \phi_{\leq N}(t^\prime) \big) \Big\|_{C_t^0 H_x^{-1-\delta}} \\ \lesssim& \, \big\| A^0_{\leq N} \phi_{\leq N} \big\|_{L_t^1 H_x^{-\delta}} \lesssim \, T \big\| A^0_{\leq N} \phi_{\leq N} \big\|_{L_t^\infty H_x^{-\delta}}. \end{aligned} \end{equation} Thus, \eqref{proof:eq-time-p8} is controlled by \eqref{proof:eq-time-p7}. This completes the estimate of \eqref{proof:eq-time-p9} and therefore the estimate of the main term \eqref{proof:eq-time-p1}. \\ It remains to control the contributions of \eqref{proof:eq-time-p2}, \eqref{proof:eq-time-p3}, and \eqref{proof:eq-time-p4}. 
Using Lemma \ref{prelim:lem-Xnub} and our Ansatz, it follows that \begin{align} &\big\| \eqref{proof:eq-time-p2} \big\|_{\Cs_t^0 H_x^{-1-\delta}} + \big\| \eqref{proof:eq-time-p3} \big\|_{\Cs_t^0 H_x^{-1-\delta}} \notag \\ \lesssim& \, T^2 \Big\| \partial_a \big( A_{\leq N}^a \phi_{\leq N} \big) \Big\|_{L_t^2 H_x^{-1-\delta}} + T^2 \Big\| \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \phi_{\leq N} \Big\|_{L_t^2 H_x^{-1-\delta}} \notag \\ \lesssim& \, T^2 \max_{a=1,2} \Big\| A_{\leq N}^a \chi_{\leq N} \Big\|_{L_t^2 H_x^{-\delta}} + T^2 \Big\| \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \chi_{\leq N} \Big\|_{L_t^2 H_x^{-1-\delta}} \label{proof:eq-time-p5} \\ +& \, T^2 \max_{a=1,2} \Big\| A_{\leq N}^a \psi_{\leq N} \Big\|_{L_t^2 H_x^{-\delta}} + T^2 \Big\| \Big( A_{\leq N,\alpha} A^\alpha_{\leq N} - \scrm^{\hspace{-0.4ex}2}_{\leq N} \Big) \psi_{\leq N} \Big\|_{L_t^2 H_x^{-1-\delta}}. \label{proof:eq-time-p6} \end{align} The terms in \eqref{proof:eq-time-p5} can be bounded using Proposition \ref{product:prop-main}. The terms in \eqref{proof:eq-time-p6} can be bounded using Corollary \ref{vector:cor-regularity}, Lemma \ref{vector:lem-quadratic}, and Proposition \ref{proof:prop-psi}. Finally, the contribution of \eqref{proof:eq-time-p4} can bounded directly using Corollary \ref{vector:cor-regularity-z} and Lemma \ref{prelim:lem-Xnub}. \end{proof} Equipped with Proposition \ref{proof:prop-phi}, we can now prove our main theorem. \begin{proof}[Proof of Theorem \ref{intro:thm-main}:] By time-reversal symmetry, it suffices to prove almost-sure convergence on $[0,T]\times \T^2$ for all $T\geq 1$. The almost-sure convergence of $A_{\leq N}[t]$ follows from Corollary \ref{vector:cor-regularity}. The almost-sure convergence of $\phi_{\leq N}[t]$ follows from Proposition \ref{proof:prop-phi}. \end{proof} \section{Failure of a probabilistic null-form estimate} \label{section:null-form} In this section, we prove the failure of a probabilistic null-form estimate, i.e., Theorem \ref{intro:thm-failure}. To this end, we first state the following lemma regarding the space-time covariances of $z$. \begin{lemma}[Space-time covariances of $z$]\label{failure:lem-covariance} For all $k, l \in \Z^2 \backslash \{0\}$ and $t,s\geq 0$, it holds that \begin{align*} &\E \Big[ \widehat{z}(t,k) \overline{\widehat{z}(s,l)}\Big] \\ =& \, \mathbf{1} \big\{ k = l \big\} \bigg( \frac{1}{2|k|^2} (t\wedge s) \cos \big( (t-s) |k| \big) + \frac{1}{4|k|^3} \Big( \sin\big( |t-s| |k| \big) - \sin \big( (t+s) |k| \big) \Big) \bigg). \end{align*} \end{lemma} \begin{proof} The computation is essentially as in Lemma \ref{vector:lem-covariance} and we therefore omit the details. \end{proof} Equipped with Lemma \ref{failure:lem-covariance}, we now prove Theorem \ref{intro:thm-failure}. \begin{proof}[Proof of Theorem \ref{intro:thm-failure}:] We first reduce the first claim \eqref{intro:eq-failure-1} to the second claim \eqref{intro:eq-failure-2}. To this end, assume that $\psi\in \Cs^\infty_c((0,\infty) \times \T_x^2 \rightarrow [-1,1])$ satisfies \eqref{intro:eq-failure-2}. 
Since the vector-field $(-\partial_2 \psi, \, \partial_1 \psi)$ is divergence-free, it holds that \begin{equation*} \int_{\R} \dt \int_{\T^2} \dx \, \begin{pmatrix} - \partial_2 \psi \\[-1ex] \partial_1 \psi \end{pmatrix} \cdot \Im \big( P_{\leq N} z \, \overline{\nabla P_{\leq N} z} \big) = \int_{\R} \dt \int_{\T^2} \dx \, \begin{pmatrix} - \partial_2 \psi \\[-1ex] \partial_1 \psi \end{pmatrix} \cdot \Leray \Im \big( P_{\leq N} z \, \overline{\nabla P_{\leq N} z} \big). \end{equation*} By inserting the identity \eqref{intro:eq-null-identity}, then using that $\partial_1 \psi$ and $\partial_2 \psi$ are mean-zero to eliminate the contribution of the constant term in \eqref{intro:eq-null-identity}, and finally using $Q_{11}=Q_{22}=0$ and $Q_{12}=-Q_{21}$, it follows that \begin{align} &\int_{\R} \dt \int_{\T^2} \dx \, \begin{pmatrix} - \partial_2 \psi \\[-1ex] \partial_1 \psi \end{pmatrix} \cdot \Leray \Im \big( P_{\leq N} z \, \overline{\nabla P_{\leq N} z} \big) \notag \\ =&\, -\int_{\R} \dt \int_{\T^2} \dx \, \begin{pmatrix} - \partial_2 \psi \\[-1ex] \partial_1 \psi \end{pmatrix} \cdot \begin{pmatrix} \Delta^{-1} \partial_2 \Im Q_{12} \big( P_{\leq N} z, \overline{P_{\leq N} z} \big) \\[-1ex] \Delta^{-1} \partial_1 \Im Q_{21} \big( P_{\leq N} z, \overline{P_{\leq N} z} \big) \end{pmatrix} \notag \\ =& \, -\int_{\R} \dt \int_{\T^2} \dx \, \Big( - \Delta^{-1}\partial_2^2 \psi \cdot \Im Q_{12}\big( P_{\leq N} z, \overline{P_{\leq N} z} \big) + \Delta^{-1} \partial_1^2 \psi \cdot \Im Q_{21} \big( P_{\leq N} z, \overline{P_{\leq N} z} \big) \Big) \notag \\ =& \, \int_{\R} \dt \int_{\T^2} \dx \, \psi \cdot \Im Q_{12} \big( P_{\leq N} z \, \overline{P_{\leq N} z} \big) . \label{failure:eq-p1} \end{align} Since we are currently assuming \eqref{intro:eq-failure-2}, it therefore follows that \begin{equation}\label{failure:eq-p2} \operatorname{Var} \left( \int_{\R} \dt \int_{\T^2} \dx \, \begin{pmatrix} - \partial_2 \psi \\[-1ex] \partial_1 \psi \end{pmatrix} \cdot \Im \big( P_{\leq N} z \overline{\nabla P_{\leq N} z} \big) \right) \geq c \log(N) - c^{-1}. \end{equation} As a result, it follows that \begin{align*} &\max \left( \operatorname{Var}\left( \int_{\R} \dt \int_{\T^2} \dx \, \partial_1 \psi \, \Im \big( P_{\leq N} z \overline{\partial_2 P_{\leq N} z} \big) \right), \operatorname{Var}\left( \int_{\R} \dt \int_{\T^2} \dx \, \partial_2 \psi \, \Im \big( P_{\leq N} z \overline{\partial_1 P_{\leq N} z} \big) \right) \right) \\ \geq& \, c \log(N) - c^{-1}. \end{align*} Thus, by choosing $\varphi \in C^\infty_c((0,\infty) \times \T_x^2 \rightarrow [-1,1])$ as a scalar-multiple of $\partial_2 \psi$ or $\partial_1 \psi$, we see that the desired claim \eqref{intro:eq-failure-1} is satisfied for $j=1$ or $j=2$ (or both). Using the symmetry in the first and second coordinate, however, \eqref{intro:eq-failure-1} then has to be satisfied for both $j=1$ and $j=2$ (with possibly different choices of $\varphi$).\\ It now remains to prove the second claim \eqref{intro:eq-failure-2} and we start with elementary simplifications. We first note that \begin{equation*} \overline{Q_{12}\big( P_{\leq N} z, \overline{ P_{\leq N} z} \big)} = Q_{12}\big( \overline{ P_{\leq N} z}, P_{\leq N} z \big) = - Q_{12}\big( P_{\leq N} z, \overline{ P_{\leq N} z} \big). 
\end{equation*} Thus, $Q_{12}( P_{\leq N} z, \overline{ P_{\leq N} z} )$ is purely imaginary and we can therefore consider \eqref{intro:eq-failure-2} with the imaginary part $\Im Q_{12}( P_{\leq N} z, \overline{ P_{\leq N} z} )$ replaced by $Q_{12}( P_{\leq N} z, \overline{ P_{\leq N} z})$. Furthermore, we can consider $\psi \in C^\infty_c((0,\infty)\times \T_x^2 \rightarrow \C)$ instead of $\psi \in C^\infty_c((0,\infty)\times \T_x^2 \rightarrow [-1,1])$, since we can eventually replace $\psi$ by a scalar multiple of its real or imaginary part. We now choose \begin{equation*} \psi(t,x) = \chi(t) e^{i \langle m,x\rangle}, \end{equation*} where $m\in \Z^2\backslash \{0\}$ is arbitrary and $\chi=\chi_m\in \Cs^\infty_c((0,\infty)\rightarrow \R)$ remains to be chosen. In order to compute the space-time integral in \eqref{intro:eq-failure-2}, we first compute \begin{align*} &\int_{\T_x^2} \dx \, e^{i \langle m ,x \rangle} Q_{12} \big( P_{\leq N} z , \overline{P_{\leq N} z} \big) \\ =& \sum_{k,l\in \Z^2 \backslash \{ 0 \} } \rho_{\leq N}(k) \rho_{\leq N}(l) \big( k_1 l_2 - k_2 l_1\big) \widehat{z}(t,k) \overline{\widehat{z}(t,l)} \int_{\T_x^2} \dx \, e^{i\langle m ,x \rangle} e^{i \langle k-l,x \rangle} \allowdisplaybreaks[3] \\ =& (2\pi)^2 \sum_{\substack{k,l \in \Z^2 \backslash \{ 0 \} \colon \\ l = k+m}} \rho_{\leq N}(k) \rho_{\leq N}(l) \big( k_1 l_2 - k_2 l_1\big) \widehat{z}(t,k) \overline{\widehat{z}(t,l)} \allowdisplaybreaks[3] \\ =& (2\pi)^2 \sum_{ \substack{k\in \Z^2 \colon \\ k \neq 0, \pm m}} \rho_{\leq N}(k) \rho_{\leq N}(k+m) \big( k_1 m_2 - k_2 m_1\big) \widehat{z}(t,k) \overline{\widehat{z}(t,k+m)}. \end{align*} In the last line, we used that $k_1 k_2 - k_2 k_1 =0$, i.e., the null structure. Using the factor \mbox{$(k_1 m_2 - k_2 m_1)$}, we also restricted to $k\neq 0,\pm m$, which will be convenient (for notational reasons) below. It then follows that \begin{equation}\label{failure:eq-p3} \begin{aligned} &\int_{\R} \dt \int_{\T^2} \dx \, \psi \, Q_{12}\big( P_{\leq N} z, \overline{P_{\leq N} z} \big) \\ =& (2\pi)^2 \int_\R \dt \, \chi(t) \sum_{ \substack{k\in \Z^2 \colon \\ k \neq 0, \pm m}} \rho_{\leq N}(k) \rho_{\leq N}(k+m) \big( k_1 m_2 - k_2 m_1\big) \widehat{z}(t,k) \overline{\widehat{z}(t,k+m)}. \end{aligned} \end{equation} Due to Lemma \ref{failure:lem-covariance}, \eqref{failure:eq-p3} has zero expectation. It therefore follows that \begin{align} &\frac{1}{(2\pi)^4} \operatorname{Var} \left( \int_{\R} \dt \int_{\T^2} \dx \, \psi \, Q_{12}\big( P_{\leq N} z, \overline{P_{\leq N} z} \big) \right) \notag \\ =&\, \int_{\R} \dt \int_{\R} \dt^\prime \, \chi(t) \chi(t^\prime) \sum_{ \substack{k,k^\prime\in \Z^2 \colon \\ k,k^\prime \neq 0, \pm m}} \bigg( \rho_{\leq N}(k) \rho_{\leq N}(k+m) \rho_{\leq N}(k^\prime) \rho_{\leq N}(k^\prime+m) \label{failure:eq-p4} \\ &\times\,\big( k_1 m_2 - k_2 m_1\big) \big( k_1^\prime m_2 - k_2^\prime m_1\big) \E \Big[ \widehat{z}(t,k) \overline{\widehat{z}(t,k+m)} \, \overline{\widehat{z}(t^\prime,k^\prime)} \widehat{z}(t^\prime,k^\prime+m) \Big] \bigg). 
\notag \end{align} Using Wick's theorem for products of Gaussian random variables, Lemma \ref{failure:lem-covariance}, and $m \neq 0$, it holds that \begin{align} &\E \Big[ \widehat{z}(t,k) \overline{\widehat{z}(t,k+m)} \, \overline{\widehat{z}(t^\prime,k^\prime)} \widehat{z}(t^\prime,k^\prime+m) \Big] \notag \\ =& \, \E \Big[ \widehat{z}(t,k) \overline{\widehat{z}(t^\prime,k^\prime)} \Big] \cdot \E \Big[ \overline{\widehat{z}(t,k+m)} \, \widehat{z}(t^\prime,k^\prime+m) \Big] \notag \\ =& \mathbf{1} \big\{ k=k^\prime \big\} \, \E \Big[ \widehat{z}(t,k) \overline{\widehat{z}(t^\prime,k)} \Big] \cdot \overline{\E \Big[ \widehat{z}(t,k+m) \, \overline{\widehat{z}(t^\prime,k+m)}\Big]}. \label{failure:eq-p5} \end{align} After inserting \eqref{failure:eq-p5} and using Lemma \ref{failure:lem-covariance}, it follows that \begin{align} &\hspace{1ex}\eqref{failure:eq-p4} \notag \\ \geq& \int_{\R} \dt \int_{\R} \dt^\prime \, \chi(t) \chi(t^\prime) (t\wedge t^\prime)^2 \sum_{ \substack{k\in \Z^2 \colon \\ k \neq 0, \pm m}} \bigg( \rho_{\leq N}^2(k) \rho_{\leq N}^2(k+m) \big( k_1 m_2 - k_2 m_1\big)^2 \label{failure:eq-p6} \\ &\times \, \frac{\cos\big( (t-t^\prime) |k| \big) \cos\big( (t-t^\prime) |k+m|\big)}{4|k|^2 |k+m|^2} \bigg) \notag \\ -&\, C_{m,\chi} \bigg( \sum_{\substack{k\in \Z^2\colon \\ k \neq 0, \pm m}} \frac{(k_1 m_2 - k_2 m_1)^2}{|k|^5} \bigg), \label{failure:eq-p7} \end{align} where $C_{m,\chi}$ is a sufficiently large constant depending on $m$ and $\chi$. Since the sum in \eqref{failure:eq-p7} is absolutely convergent, it remains to prove a lower bound for \eqref{failure:eq-p6}. To this end, we recall the trigonometric identity \begin{equation}\label{failure:eq-p8} \begin{aligned} &\cos\big( (t-t^\prime) |k| \big) \cos\big( (t-t^\prime) |k+m|\big) \\ =& \frac{1}{2} \Big( \cos\big( (t-t^\prime) (|k+m|-|k|) \big) + \cos\big( (t-t^\prime) (|k+m|+|k|) \big) \Big). \end{aligned} \end{equation} Due to the favorable sign in the time-frequency, the contribution of $\cos\big( (t-t^\prime) (|k+m|+|k|) \big)$ to \eqref{failure:eq-p6} can easily be controlled via integration by parts. Thus, it remains to obtain a lower bound on \begin{equation}\label{failure:eq-p9} \begin{aligned} &\int_{\R} \dt \int_{\R} \dt^\prime \, \chi(t) \chi(t^\prime) (t\wedge t^\prime)^2 \sum_{ \substack{k\in \Z^2 \colon \\ k \neq 0, \pm m}} \bigg( \rho_{\leq N}^2(k) \rho_{\leq N}^2(k+m) \big( k_1 m_2 - k_2 m_1\big)^2 \\ &\times \, \frac{\cos\big( (t-t^\prime) (|k+m|-|k|) \big)}{8|k|^2 |k+m|^2} \bigg). \end{aligned} \end{equation} We now choose a deterministic, nonnegative, cut-off function $\chi\in C^\infty_c((0,\infty) \rightarrow [0,1])$, which depends only on $m\in \Z^2\backslash \{0\}$ and satisfies \begin{equation*} \chi(t) = \begin{cases} \begin{tabular}{ll} $0$ \hspace{7ex} & if $t\leq 2^{-8} |m|^{-1}$, \\ $1$ & if $2^{-7} |m|^{-1} \leq t\leq 2^{-6} |m|^{-1}$, \\ $0$ & if $t\geq 2^{-5} |m|^{-1}$. \end{tabular} \end{cases} \end{equation*} Since $\big| |k+m| - |k| \big| \leq |m|$ for all $k\in \Z^2$, it follows that \begin{equation}\label{failure:eq-p10} \chi(t) \chi(t^\prime) \cos\big( (t-t^\prime) (|k+m|-|k|) \big) \geq \frac{1}{2} \chi(t) \chi(t^\prime). \end{equation} After inserting this into \eqref{failure:eq-p9} and integrating in $t$ and $t^\prime$, it follows that \begin{equation*} \eqref{failure:eq-p9} \geq c_{\chi,m} \sum_{ \substack{k\in \Z^2 \colon \\ k \neq 0, \pm m}} \bigg( \rho_{\leq N}^2(k) \rho_{\leq N}^2(k+m) \frac{\big( k_1 m_2 - k_2 m_1\big)^2}{|k|^2 |k+m|^2} \bigg). 
\end{equation*} This sum diverges logarithmically in $N$, which finally yields the desired claim. \end{proof}
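\begin{remark}
For the reader's convenience, we briefly sketch the logarithmic divergence of the sum appearing at the end of the preceding proof; this is a routine computation, and the only property of the cutoff that we use is that $\rho_{\leq N}(k)=1$ for, say, $|k|\leq N/4$. For all $k \in \Z^2$ with $2|m| \leq |k| \leq N/8$, we have $\rho_{\leq N}(k) = \rho_{\leq N}(k+m) = 1$, $|k+m| \leq 2|k|$, and
\begin{equation*}
\big( k_1 m_2 - k_2 m_1 \big)^2 = |k|^2 |m|^2 \sin^2 \big( \angle(k,m) \big),
\end{equation*}
so the corresponding summand is bounded below by $\tfrac{1}{4} |m|^2 \sin^2\big(\angle(k,m)\big) |k|^{-2}$. In each dyadic annulus $\{ R \leq |k| < 2R \}$ with $2|m| \leq R \leq N/16$, a positive proportion of the $\sim R^2$ lattice points satisfies $\sin^2\big(\angle(k,m)\big) \geq \tfrac{1}{2}$, and each such lattice point contributes $\gtrsim |m|^2 R^{-2}$. Summing over the $\gtrsim \log(N) - C_m$ admissible dyadic scales then yields a lower bound of the form $c_m \log(N) - C_m$.
\end{remark}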
{ "arxiv_id": "2302.14324", "language": "en", "timestamp": "2023-03-01T02:09:36", "url": "https://arxiv.org/abs/2302.14324", "yymm": "2302" }
\section{Proof of Carlini's formula}\label{app:carlini} \begin{proof}[Proof of \cref{carlini}] Recall $2I_n(t)$ is the $n^{\text{th}}$ Chebyshev coefficient for $\exp(tx)$. It will be slightly more convenient for us to reparameterize and bound $I_n(nt)$. Without loss of generality $t \geq 0$, since $\exp(tx)$ and $\exp(-tx)$ have the same Chebyshev coefficients up to sign. By \eqref{eq:cheb-integral}, for $n \geq 1$, \begin{align*} I_{n}(nt) &= \frac{1}{2\pi{\imath}} \oint_{\abs{z} = 1} z^{-n-1} \exp(\tfrac{nt}{2}(z + z^{-1})) \mathrm{d} z \\ &= \frac{1}{2\pi {\imath}}\oint_{\abs{z} = 1} \exp\Big(-n\Big(\log(z)-\frac{t}{2}(z + \tfrac1z)\Big)\Big) \frac{\mathrm{d} z} z. \end{align*} We choose a contour circling around the origin once; by Cauchy's theorem, this results in the same integral as the contour does not cross the origin. We parameterize it via $z = re^{{\imath} \theta}$ and construct $r$ as a function of $\theta$. Consider the (rescaled) imaginary part of the expression in the exponential: \begin{align*} \log(z) - \frac{t}{2}(z + \tfrac1z) &= \log(r) + {\imath} \theta - \frac{t}{2}(r e^{{\imath} \theta} + \tfrac1 r e^{-{\imath} \theta}) \\ \implies \Im\Big(\log(z) -\frac{t}{2}(z + \tfrac1z)\Big) &= \theta -\frac{t}{2}(r\sin(\theta) - \tfrac1r \sin(\theta)). \\ \intertext{ We wish to make the imaginary part constant; we set it equal to $\psi$, and solve for $r$: } \psi &= \theta - \frac{t}{2}(r\sin(\theta) - \tfrac1r \sin(\theta)) \\ \implies \tfrac{1}{2}(r - \tfrac1r) &= \frac{\theta - \psi}{t\sin(\theta)} \\ \implies r &= \frac{\theta - \psi}{t\sin(\theta)} + \sqrt{\Par{\frac{\theta - \psi}{t\sin(\theta)}}^2 + 1}. \end{align*} Above, we use that $r = x + \sqrt{x^2 + 1}$ and $\frac1r = -x + \sqrt{x^2 + 1}$ is a solution to $\frac{1}{2}(r - \frac1r) = x$. Now, we choose to take our contour for $\theta$ from $-\pi+\psi$ to $\pi+\psi$, and we take $\psi = 0$. So the contour is \begin{align*} z = re^{{\imath}\theta} = \Par{\frac{\theta}{t\sin(\theta)} + \sqrt{\Par{\frac{\theta}{t\sin(\theta)}}^2 + 1}}e^{{\imath} \theta} \text{ for } \theta \in [-\pi, \pi] \text{, where } \frac {\sin 0} 0 \coloneqq 1. \end{align*} Since $r \geq 1$ always, the contour winds once counter-clockwise around zero and is valid for evaluating the integral. By design, on this contour $\Im(\log(z) - \frac{t}{2}(z + \tfrac1z))$ vanishes, and the real part is \begin{align*} F(\theta, t) &\coloneqq \Re\Big(\log(z) - \frac{t}{2}(z + \tfrac1z)\Big) = \log(r) - \frac{t}{2}(r + \tfrac1r)\cos(\theta) \\ &= \log\Par{\frac{\theta}{t\sin(\theta)} + \sqrt{\Par{\frac{\theta}{t\sin(\theta)}}^2 + 1}} - t\cos(\theta)\sqrt{\Par{\frac{\theta}{t\sin(\theta)}}^2 + 1}. \end{align*} Now, we consider the original integral along this contour: \begin{align*} I_{n}(nt) &= \frac{1}{2\pi {\imath}}\oint \exp\Big(-n\Big(\log(z) - \frac{t}{2}(z + \tfrac1z)\Big)\Big) \frac{\mathrm{d} z} z\\ &= \frac{1}{2\pi {\imath}}\int_{-\pi}^\pi \exp\Big(-n \cdot F(\theta, t)\Big) \Big(\frac{\mathrm{d} r(\theta)}{\mathrm{d}\theta} \cdot \frac 1 {r(\theta)}+ {\imath}\Big) \mathrm{d}\theta \\ &= \frac{1}{2\pi}\int_{-\pi}^{\pi} \exp\Big(-n \cdot F(\theta, t)\Big) \mathrm{d}\theta \\ &= \frac{1}{\pi}\int_{0}^{\pi} \exp\Big(-n \cdot F(\theta, t)\Big) \mathrm{d}\theta. \end{align*} The last two lines use that $F(\theta, t)$ and $r(\theta)$ are even functions of $\theta$: the piece of the integral corresponding to $\frac{\mathrm{d} r(\theta)}{r(\theta)\mathrm{d}\theta}$ is odd and therefore vanishes, and the remaining even integrand can be folded onto $[0,\pi]$. From here, it becomes a matter of bounding $F(\theta, t)$. 
We compute \begin{align*} \frac{\partial}{\partial \theta} F(\theta, t) &= \sqrt{(t\sin(\theta))^2 + \theta^2} + \frac{(1-\theta\cot(\theta))^2}{\sqrt{(t\sin(\theta))^2 + \theta^2}}. \\ \intertext{ First, we notice that $\frac{\partial}{\partial \theta} F \geq 0$, so $F$ is increasing. Second, we notice that, for $\theta \in [0, \pi/2]$, } \frac{\partial}{\partial \theta} F(\theta, t) &\geq \theta\sqrt{(t\sin(\theta)/\theta)^2 + 1} \geq \theta\sqrt{t^2(4/\pi^2) + 1}. \end{align*} Integrating, we get that, for $\theta \in [0, \frac \pi 2]$, $F(\theta, t) - F(0, t) \geq \tfrac{1}{2}\theta^2\sqrt{1+4t^2/\pi^2}$, and using that $F(0, t) = \log(t^{-1} + \sqrt{t^{-2} + 1}) - \sqrt{1 + t^2}$, we have the desired \begin{align*} I_{n}(nt) &= \frac{1}{\pi} \int_0^\pi \exp(-n F(\theta, t))\mathrm{d}\theta \\ &\leq \frac{2}{\pi} \int_0^{\pi/2} \exp(-n F(\theta, t))\mathrm{d}\theta \\ &\leq \frac{2}{\pi} \int_0^{\pi/2} \exp(-\tfrac n 2\theta^2\sqrt{1 + 4t^2/\pi^2})\exp(-n F(0, t))\mathrm{d}\theta \\ &= \frac{\exp(-nF(0,t))}{\pi} \int_0^\pi \exp(-\tfrac{1}{2}\theta^2\sqrt{n^2 + 4n^2t^2/\pi^2})\mathrm{d}\theta \\ &< \frac{\exp(-nF(0,t))}{\pi} \int_0^\infty \exp(-\tfrac{1}{2}\theta^2\sqrt{n^2 + 4n^2t^2/\pi^2})\mathrm{d}\theta \\ &= \frac{\exp(-nF(0,t))}{\sqrt{2\pi\sqrt{n^2 + 4n^2t^2/\pi^2}}} \\ &= \frac{\exp(\sqrt{n^2 + n^2t^2})}{(4\pi^2 n^2 + 16n^2 t^2)^{1/4}(t^{-1} + \sqrt{t^{-2} + 1})^n} \\ &\leq \frac{\exp(\sqrt{n^2 + n^2t^2})(\sqrt{t^{-2} + 1} - t^{-1})^n}{2(n^2 + n^2 t^2)^{1/4}}. \tag*{\qedhere} \end{align*} \end{proof} \section{QSVT and the Cosine-Sine decomposition}\label{sec:csd} \begin{quotation} \textit{Briefly, whenever some aspect of a problem can be usefully formulated in terms of two-block by two-block partitions of unitary matrices, the CS decomposition will probably add insights and simplify the analysis.} \hspace{1em plus 1fill}---Paige and Wei, \cite{pw94} \end{quotation} \subsection{Existence of the CS decomposition} \label{ssec:csd} We begin by proving the existence of the CS decomposition (CSD), a decomposition of a partitioned unitary matrix, following Paige and Wei~\cite{pw94}. We describe its application to the quantum singular value transformation (QSVT) in the following Sections~\ref{sec:qsvt},~\ref{ssec:basic_case}, and~\ref{ssec:gen_case}. The main idea of the CSD is that when a unitary matrix $\mathbf{U}$ is split into two-by-two blocks $\mathbf{U}_{ij}$ for $i, j \in \{1, 2\}$, one can produce ``simultaneous singular value decompositions (SVDs)'' of the blocks, of the form $\mathbf{U}_{ij} = \mathbf{V}_i \mathbf{D}_{ij} \mathbf{W}_j^\dagger$.\footnote{In fact, there is some sense in which the SVD and the CSD are special cases of the same object, a \emph{generalized Cartan decomposition}. We recommend the survey by Edelman and Jeong for readers curious about this connection~\cite{ej21}.} If the reader cares just about the application to QSVT, they can read Theorem~\ref{thm:cs} and skip to \cref{sec:qsvt}. For additional intuition on the CSD, we refer the reader to Appendix~\ref{app:csd_interpret}, in which we derive principal angles and Jordan's lemma as consequences. 
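Readers who want a quick numerical sanity check of this ``simultaneous SVD'' picture may find the following sketch helpful; it is not used anywhere in our arguments. It assumes NumPy and SciPy (version 1.5 or later), whose \texttt{scipy.linalg.cossin} routine computes a CS decomposition for a user-specified block partition; SciPy's ordering and sign conventions for the middle factor differ slightly from the form in Theorem~\ref{thm:cs} below, but the outer factors are block-diagonal exactly as in the theorem.
\begin{verbatim}
# Sketch: numerically check the CS decomposition of a Haar-random unitary.
import numpy as np
from scipy.linalg import cossin
from scipy.stats import unitary_group

d, p, q = 6, 2, 3                          # total dimension and block sizes
U = unitary_group.rvs(d, random_state=0)   # Haar-random d x d unitary
u, cs, vdh = cossin(U, p=p, q=q)           # U = u @ cs @ vdh

assert np.allclose(U, u @ cs @ vdh)        # the factorization reproduces U
# u and vdh are block-diagonal, so the top-left blocks give a factorization
# of U[:p, :q] with unitary outer factors ("simultaneous SVD" of the blocks).
assert np.allclose(U[:p, :q], u[:p, :p] @ cs[:p, :q] @ vdh[:q, :q])
# The cosines appearing in cs are the singular values of U[:p, :q].
print(np.round(np.linalg.svd(U[:p, :q], compute_uv=False), 6))
\end{verbatim}
The analogous identity for the other three blocks of $\mathbf{U}$ follows in the same way from the block-diagonality of the outer factors.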
\begin{theorem}\label{thm:cs} Let $\mathbf{U} \in \mathbb{C}^{d \times d}$ be a unitary matrix, partitioned into blocks of size $\{r_1, r_2\} \times \{c_1, c_2\}$: \[\mathbf{U} = \begin{pmatrix} \mathbf{U}_{11} & \mathbf{U}_{12} \\ \mathbf{U}_{21} & \mathbf{U}_{22}\end{pmatrix},\text{ where } \mathbf{U}_{ij} \in \mathbb{C}^{r_i \times c_j} \text{ for } i,j\in\{1,2\}.\] Then, there exists unitary $\mathbf{V}_i \in \mathbb{C}^{r_i \times r_i}$ and $\mathbf{W}_j \in \mathbb{C}^{c_j \times c_j}$ for $i, j \in \{1,2\}$ such that \[ \begin{pmatrix} \mathbf{U}_{11} & \mathbf{U}_{12} \\ \mathbf{U}_{21} & \mathbf{U}_{22} \end{pmatrix} = \begin{pmatrix} \mathbf{V}_1 & \\ & \mathbf{V}_2 \end{pmatrix} \begin{pmatrix} \mathbf{D}_{11} & \mathbf{D}_{12} \\ \mathbf{D}_{21} & \mathbf{D}_{22} \end{pmatrix} \begin{pmatrix} \mathbf{W}_1 & \\ & \mathbf{W}_2 \end{pmatrix}^\dagger, \] where blanks represent zero matrices and $\mathbf{D}_{ij} \in \mathbb{R}^{r_i \times c_j}$ are diagonal matrices, possibly padded with zero rows or columns. Specifically, we can write \begin{equation} \label{eq:d-form} \mathbf{D} \coloneqq \begin{pmatrix} \mathbf{D}_{11} & \mathbf{D}_{12} \\ \mathbf{D}_{21} & \mathbf{D}_{22} \end{pmatrix} = \left(\begin{array}{@{}c|c@{}} \begin{matrix} \bm{0} & & \\ & \mathbf{C} & \\ & & \mathbf{I} \end{matrix} & \begin{matrix} \mathbf{I} & & \\ & \hphantom{-}\mathbf{S} & \\ & & \hphantom{-}\bm{0} \end{matrix} \\ \hline \begin{matrix} \mathbf{I} & & \\ & \mathbf{S} & \\ & & \bm{0} \end{matrix} & \begin{matrix} \bm{0} & & \\ & -\mathbf{C} & \\ & & -\mathbf{I} \end{matrix} \end{array} \right) \end{equation} where $\mathbf{I}$, $\mathbf{C}$, and $\mathbf{S}$ blocks are square diagonal matrices where $\mathbf{C}$ and $\mathbf{S}$ have entries in $(0,1)$ on the diagonal, and $\bm{0}$ blocks may be rectangular.\footnote{Blocks may be non-existent. The $\mathbf{I}$ blocks may not necessarily be the same size, but $\mathbf{C}$ and $\mathbf{S}$ are the same size.} Because $\mathbf{D}$ is unitary, we also have $\mathbf{C}^2 + \mathbf{S}^2 = \mathbf{I}$. \end{theorem} \begin{remark} \label{rmk:cs-rotation} The form of $\mathbf{D}$ naturally induces decompositions $\mathbb{C}^d = {\textcolor{olive5}{\mathcal{X}_0}} \oplus {\textcolor{azure5}{\mathcal{X}_C}} \oplus {\textcolor{brown5}{\mathcal{X}_1}}$ and $\mathbb{C}^d = {\textcolor{olive5}{\mathcal{Y}_0}} \oplus {\textcolor{azure5}{\mathcal{Y}_C}} \oplus {\textcolor{brown5}{\mathcal{Y}_1}}$ into direct sums of three spaces. Hence, $\mathbf{D}: \mathbb{C}^d \to \mathbb{C}^d$ can be seen as a map $\mathbf{D}: {\textcolor{olive5}{\mathcal{X}_0}} \oplus {\textcolor{azure5}{\mathcal{X}_C}} \oplus {\textcolor{brown5}{\mathcal{X}_1}} \to {\textcolor{olive5}{\mathcal{Y}_0}} \oplus {\textcolor{azure5}{\mathcal{Y}_C}} \oplus {\textcolor{brown5}{\mathcal{Y}_1}}$, such that $\mathbf{D}$ is a direct sum of three linear maps. 
\[ \left(\begin{array}{@{}c|c@{}} \begin{matrix} \hla{\bm{0}} & & \\ & \hlb{\mathbf{C}} & \\ & & \hlc{\mathbf{I}} \end{matrix} & \begin{matrix} \hla{\mathbf{I}} & & \\ & \hphantom{-}\hlb{\mathbf{S}} & \\ & & \hphantom{-}\hlc{\bm{0}} \end{matrix} \\ \hline \begin{matrix} \hla{\mathbf{I}} & & \\ & \hlb{\mathbf{S}} & \\ & & \hlc{\bm{0}} \end{matrix} & \begin{matrix} \hla{\bm{0}} & & \\ & \hlb{-\mathbf{C}} & \\ & & \hlc{-\mathbf{I}} \end{matrix} \end{array} \right) = \underbrace{\hla{\begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix}}}_{{\textcolor{olive5}{\mathcal{X}_0}} \to {\textcolor{olive5}{\mathcal{Y}_0}}} \oplus \underbrace{\hlb{\begin{pmatrix} \mathbf{C} & \hphantom{-}\mathbf{S} \\ \mathbf{S} & -\mathbf{C} \end{pmatrix}}}_{{\textcolor{azure5}{\mathcal{X}_C}} \to {\textcolor{azure5}{\mathcal{Y}_C}}} \oplus \underbrace{\hlc{\begin{pmatrix} \mathbf{I} & \hphantom{-}\bm{0} \\ \bm{0} & -\mathbf{I} \end{pmatrix}}}_{{\textcolor{brown5}{\mathcal{X}_1}} \to {\textcolor{brown5}{\mathcal{Y}_1}}}. \] The key resulting intuition for QSVT is that, supposing everything is square, these blocks can be further decomposed into $2\times 2$ blocks of the following rotation matrix form \[\begin{pmatrix} \lambda_i & \sqrt{1-\lambda_i^2} \\ \sqrt{1-\lambda_i^2} & -\lambda_i\end{pmatrix}\] from this representation, where $\{\lambda_i\}$ are the singular values of $\mathbf{U}_{11}$ (see Lemma~\ref{lem:qsp-to-qsvt}). \end{remark} For completeness we now recall how to derive the CS decomposition from other common decompositions (namely, the SVD and QR decompositions), following the proof of \cite{pw94}.\footnote{Our exposition will be slightly different, since Paige and Wei use a QR decomposition with a \emph{lower} triangular matrix, where we use the more standard one with an \emph{upper} triangular matrix.} \begin{proof}[Proof of Theorem~\ref{thm:cs}] We begin by considering $\mathbf{U}_{11} \in \mathbb{C}^{r_1 \times c_1}$. Let $\mathbf{V}_1 \mathbf{D}_{11} \mathbf{W}_1^\dagger$ be a SVD of $\mathbf{U}_{11}$, where $\mathbf{D}_{11} \in \mathbb{R}^{r_1 \times c_1}$ and $\mathbf{V}_1$ and $\mathbf{W}_1$ are square unitaries. Since $\norm{\mathbf{U}_{11}}_{\textup{op}} \le \norm{\mathbf{U}}_{\textup{op}} = 1$, all its singular values are between zero and one, so we can specify that $\mathbf{D}_{11}$ takes the form \[ \mathbf{D}_{11} = \begin{pmatrix} \bm{0} & & \\ & \mathbf{C} & \\ & & \mathbf{I} \end{pmatrix} \] for a diagonal matrix $\mathbf{C}$ with $0 < \mathbf{C}_{ii} < 1$ for all $i$. Now, take QR decompositions of $\mathbf{U}_{21}\mathbf{W}_1 \in \mathbb{C}^{r_2 \times c_1}$ and $\mathbf{U}_{12}^\dagger \mathbf{V}_1 \in \mathbb{C}^{c_2 \times r_1}$. These decompositions give unitaries $\mathbf{V}_2$ and $\mathbf{W}_2$ such that $\mathbf{D}_{21} \coloneqq \mathbf{V}_2^\dagger \mathbf{U}_{21} \mathbf{W}_1$ and $\mathbf{D}_{12}^\dagger \coloneqq \mathbf{W}_2^\dagger \mathbf{U}_{12}^\dagger \mathbf{V}_1$ are upper triangular with non-negative entries on the diagonal. By design, the $\mathbf{V}_{i}$ and $\mathbf{W}_j$ we have defined give us the decomposition \begin{align*} \begin{pmatrix} \mathbf{V}_1 \\ & \mathbf{V}_2\end{pmatrix}^\dagger \begin{pmatrix} \mathbf{U}_{11} & \mathbf{U}_{12} \\ \mathbf{U}_{21} & \mathbf{U}_{22} \end{pmatrix} \begin{pmatrix} \mathbf{W}_1 \\ & \mathbf{W}_2 \end{pmatrix} = \underbrace{\begin{pmatrix} \mathbf{D}_{11} & \mathbf{D}_{12} \\ \mathbf{D}_{21} & \mathbf{V}_2^\dagger \mathbf{U}_{22}\mathbf{W}_2 \end{pmatrix}}_{\mathbf{D}}. 
\end{align*} We claim that the blocks $\mathbf{D}_{21}$ and $\mathbf{D}_{12}$ satisfy the desired form from \eqref{eq:d-form}. This will (almost) be our final decomposition. We will only give the argument for $\mathbf{D}_{21}$; the one for $\mathbf{D}_{12}$ is similar. The columns of $\mathbf{D}$ are orthonormal and $\mathbf{D}_{21}$ is upper triangular with non-negative entries on its diagonal. So, all of the rightmost blocks of $\mathbf{D}_{21}$ (under the $\mathbf{I}$ from $\mathbf{D}_{11}$) must be zero because of orthonormality, and further, the top-left block of $\mathbf{D}_{21}$ must be $\mathbf{I}$, using upper triangularity and non-negativity of the diagonal (inducting by row, the top-left entry is $1$, forcing its row to be zeroes, and so on down the diagonal). Since the rows of $\mathbf{D}$ are orthonormal, the $\mathbf{I}$ block in $\mathbf{D}_{21}$ forces the block to its right to be the zero matrix. Finally, upper triangularity and column orthonormality forces the middle block to be non-negative diagonal with $\mathbf{S}^2 + \mathbf{C}^2 = \mathbf{I}$. The logic is displayed below: \[\mathbf{D}_{21} = \begin{pmatrix} \bm{?} & \bm{?} & \bm{?} \\ & \bm{?} & \bm{?}\\ & & \bm{?} \end{pmatrix} \to \begin{pmatrix} \bm{?} & \bm{?} & \\ & \bm{?} & \\ & & \bm{0} \end{pmatrix} \to \begin{pmatrix} \mathbf{I} & \bm{?} & \\ & \bm{?} & \\ & & \bm{0} \end{pmatrix} \to \begin{pmatrix} \mathbf{I} & & \\ & \bm{?} & \\ & & \bm{0} \end{pmatrix} \to \begin{pmatrix} \mathbf{I} & & \\ & \mathbf{S} & \\ & & \bm{0} \end{pmatrix}.\] What we have argued so far suffices to show that (recalling $\mathbf{D}$ is unitary) \begin{equation*} \mathbf{D} = \left(\begin{array}{@{}c|c@{}} \begin{matrix} \bm{0} & & \\ & \mathbf{C} & \\ & & \mathbf{I} \end{matrix} & \begin{matrix} \mathbf{I} & & \\ & \mathbf{S} & \hphantom{\bm{?}_{12}} \\ & \hphantom{\bm{?}_{21}} & \bm{0} \end{matrix} \\ \hline \begin{matrix} \mathbf{I} & & \\ & \mathbf{S} & \\ & & \bm{0} \end{matrix} & \begin{matrix} \bm{0} & & \\ & \bm{?}_{11} & \bm{?}_{12} \\ & \bm{?}_{21} & \bm{?}_{22} \end{matrix} \end{array} \right). \end{equation*} For brevity we will only sketch the rest of the argument about the bottom-right block $\mathbf{D}_{22}$. First, $\bm{?}_{11} = -\mathbf{C}$ follows from unitarity of the following block of $\mathbf{D}$, which shows $\mathbf{C} \mathbf{S} + \mathbf{S} \bm{?}_{11} = 0$: \[\begin{pmatrix} \mathbf{C} & \mathbf{S} \\ \mathbf{S} & \bm{?}_{11} \end{pmatrix}. \] The blocks $\bm{?}_{21}$ and $\bm{?}_{12}$ must then be zero using unitarity, considering the fifth (block) row and column in $\mathbf{D}$. Finally, $\bm{?}_{22}$ is unitary and can be rotated to the (negative) identity by changing $\mathbf{W}_2$: \[ \mathbf{W}_2 \leftarrow \begin{pmatrix} \mathbf{I} \\ & \mathbf{I} \\ & & -\bm{?}_{22}^\dagger \end{pmatrix}\mathbf{W}_2 . \qedhere \] \end{proof} In Appendix~\ref{app:csd_interpret}, we demonstrate that for specific choices of $\mathbf{U}$, the form of the CS decomposition reveals interesting properties about the interactions between subspaces. In particular, the diagonals of $\mathbf{C}$ and $\mathbf{S}$ can naturally be seen as the cosines and sines of ``principal angles'' between subspaces. \subsection{QSVT and quantum signal processing}\label{sec:qsvt} We now apply the machinery of Section~\ref{ssec:csd} to prove correctness of the QSVT framework of \cite{gslw19}. We first recall the situation treated by QSVT, requiring the notion of a block encoding. 
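Before formalizing block encodings, we note that Theorem~\ref{thm:cs} is easy to experiment with numerically. The following sketch is purely illustrative and not part of the development: it assumes SciPy's \texttt{scipy.linalg.cossin} routine and \texttt{scipy.stats.unitary\_group} (neither is referenced elsewhere in this paper), and SciPy's sign convention for the middle factor differs from \eqref{eq:d-form}, though the two are equivalent up to signs. The sketch checks that the outer factors are block-diagonal unitaries and that the middle factor carries the singular values of $\mathbf{U}_{11}$.
\begin{verbatim}
# Minimal numerical sketch of the CS decomposition (Theorem thm:cs).
# Assumes SciPy >= 1.5 (scipy.linalg.cossin); its sign convention for the
# middle factor differs from eq. (d-form) but is equivalent up to signs.
import numpy as np
from scipy.linalg import cossin
from scipy.stats import unitary_group

d, r1, c1 = 8, 3, 5                      # row partition {3, 5}, column partition {5, 3}
U = unitary_group.rvs(d, random_state=0)

V, D, Wh = cossin(U, p=r1, q=c1)         # U = V @ D @ Wh

assert np.allclose(V @ D @ Wh, U)
# The outer factors are block diagonal, with blocks of sizes {r1, d - r1}
# and {c1, d - c1} respectively, as in Theorem thm:cs.
assert np.allclose(V[:r1, r1:], 0) and np.allclose(V[r1:, :r1], 0)
assert np.allclose(Wh[:c1, c1:], 0) and np.allclose(Wh[c1:, :c1], 0)
# The top-left block of the middle factor has the singular values of U[:r1, :c1]
# (the diagonal of D_11) as its own singular values.
sv_U11 = np.linalg.svd(U[:r1, :c1], compute_uv=False)
sv_D11 = np.linalg.svd(D[:r1, :c1], compute_uv=False)
assert np.allclose(np.sort(sv_U11), np.sort(sv_D11))
\end{verbatim}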
\begin{definition}[Variant of {\cite[Definition 43]{gslw19}}]\label{def:block_encode} Given $\mathbf{A} \in \mathbb{C}^{r \times c}$, $\alpha > 0$, and $\varepsilon \geq 0$, we say unitary $\mathbf{U} \in \mathbb{C}^{d \times d}$ is an $(\alpha, \varepsilon)$-block encoding of $\mathbf{A}$ if there are $\mathbf{B}_{\textup{L},1} \in \mathbb{C}^{d \times r}, \mathbf{B}_{\textup{R},1} \in \mathbb{C}^{d \times c}$ with orthonormal columns such that $ \norm{\mathbf{A} - \alpha \mathbf{B}_{\textup{L},1}^\dagger \mathbf{U} \mathbf{B}_{\textup{R},1}}_{\textup{op}} \leq \varepsilon$. We denote $\mproj_{\textup{L}} = \mb_{\textup{L}, 1}\mblo^\dagger$, $\mproj_{\textup{R}} = \mb_{\textup{R}, 1}\mbro^\dagger$ to be the corresponding projections onto the spans of $\mb_{\textup{L}, 1}$ and $\mb_{\textup{R}, 1}$, respectively. \end{definition} In other words, if $\mathbf{U}$ is an $(\alpha, \varepsilon)$-block encoding of $\mathbf{A}$, then in the right basis, $\alpha$ times one of its submatrices is $\varepsilon$-close to $\mathbf{A}$. For example, for a $(1,0)$-block encoding, if $\mb_{\textup{L}} = \begin{pmatrix} \mb_{\textup{L}, 1} & \mb_{\textup{L}, 2} \end{pmatrix}$ and $\mb_{\textup{R}} = \begin{pmatrix} \mb_{\textup{R}, 1} & \mb_{\textup{R}, 2} \end{pmatrix}$ are unitary completions of $\mb_{\textup{L}, 1}$ and $\mb_{\textup{R}, 1}$, then \begin{align*} \mb_{\textup{L}}^\dagger \mathbf{U} \mb_{\textup{R}} = \begin{pmatrix} \mathbf{A} & \cdot \\ \cdot & \cdot \end{pmatrix} \text{ and } \mb_{\textup{L}}^\dagger (\mproj_{\textup{L}} \mathbf{U} \mproj_{\textup{R}}) \mb_{\textup{R}} = \begin{pmatrix} \mathbf{A} & \bm{0} \\ \bm{0} & \bm{0} \end{pmatrix}. \end{align*} In Section~\ref{ssec:basic_case}, we consider the special case when $\mathbf{U}$ is a $(1, 0)$-block encoding of $\mathbf{A}$, and furthermore, $\mb_{\textup{L}, 1}$ and $\mb_{\textup{R}, 1}$ are the first $r$ and $c$ columns of the identity, respectively. Under this restriction, the following statements about submatrices are true in the computational basis: \begin{equation}\label{eq:compbasis} \mathbf{U} = \begin{pmatrix} \mathbf{A} & \cdot \\ \cdot & \cdot \end{pmatrix} \text{ and } \mproj_{\textup{L}} \mathbf{U} \mproj_{\textup{R}} = \begin{pmatrix} \mathbf{A} & \bm{0} \\ \bm{0} & \bm{0} \end{pmatrix}. \end{equation} This simplification is for the purposes of exposition, since then $\mathbf{U}$ is clearly a block matrix to which we can apply the CS decomposition. Indeed, it is without loss of generality: we recover the general statements by simple unitary transformations which reduce to the special, simpler case above. We give a formal treatment of the general case in Section~\ref{ssec:gen_case}. We now describe the QSVT framework, a lifting of a $2\times 2$ matrix polynomial construction defined via ``phase factors'' (Definition~\ref{def:qsp}), to higher dimensions. The construction in the $2 \times 2$ case is referred to in the literature as quantum signal processing (QSP). We recall the basics of QSP here. \begin{definition}[{\cite[Corollary~8]{gslw19}}] \label{def:qsp} We say that a degree-$n$ polynomial $p(x) \in \mathbb{C}[x]$ can be implemented with QSP if there is a sequence of phase factors $\Phi = \{\phi_j\}_{j\in[n]} \in \mathbb{R}^n$ such that%
\footnote{ We define QSP with the reflection operation $\mathbf{R}(x)$, as is used for QSVT.
One can appropriately change the CS decomposition to make this work for the rotation $e^{{\imath}\arccos(x)\boldsymbol{\sigma}_x} = (\begin{smallmatrix} x & {\imath}\sqrt{1-x^2} \\ {\imath}\sqrt{1-x^2} & x\end{smallmatrix})$, denoted $\mathbf{W}(x)$ in \cite{gslw19}. } \begin{align} \label{eq:qsp} \qsp(\Phi, x) \coloneqq \prod_{j=1}^n \Bigg(\underbrace{\begin{pmatrix} e^{{\imath} \phi_j} & 0 \\ 0 & e^{-{\imath} \phi_j} \end{pmatrix}}_{e^{{\imath} \phi_j \boldsymbol{\sigma}_z}} \underbrace{\begin{pmatrix} x & \sqrt{1-x^2} \\ \sqrt{1-x^2} & -x \end{pmatrix}}_{=: \mathbf{R}(x)}\Bigg) = \begin{pmatrix} p(x) & \cdot \\ \cdot & \cdot \end{pmatrix}. \end{align} \end{definition} Corollaries~8 and 10 of \cite{gslw19} derive sufficient conditions for the existence of such a $\Phi$: see \cref{rmk:qsvt-to-alg} for a discussion. We next introduce a generalization of Definition~\ref{def:qsp} to higher dimensions. \begin{definition}[{\cite[Definition 15]{gslw19}}] \label{def:15} The \emph{phased alternating sequence} associated with a partitioned unitary $\mathbf{U}$ (following notation of Definition~\ref{def:12}) and $\Phi = \{\phi_j\}_{j \in [n]} \in \mathbb{R}^n$ is \begin{align*} \mathbf{U}_\Phi &\coloneqq \begin{cases} e^{{\imath}\phi_1(2\boldsymbol{\Pi}_{\textup{L}}-\mathbf{I})} \mathbf{U} \prod_{j\in[\frac{n - 1}{2}]} e^{{\imath}\phi_{2j}(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} \mathbf{U}^\dagger e^{{\imath}\phi_{2j+1}(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})}\mathbf{U} &\text{if } n \text{ is odd, and} \\ \hspace{5.8em}\prod_{j\in[\frac n 2]} e^{{\imath}\phi_{2j-1}(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} \mathbf{U}^\dagger e^{{\imath}\phi_{2j}(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})}\mathbf{U} &\text{if } n \text{ is even.} \\ \end{cases} \end{align*} \end{definition} \begin{remark}\label{rem:high-d} The phased alternating sequence $\mathbf{U}_\Phi$ can be seen as a generalization of the quantum signal processing circuit $\qsp(\Phi, x)$. When $d = 2$ and $r = c = 1$, $2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I} = 2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I} = \boldsymbol{\sigma}_z$, so \begin{align*} \qsp(\Phi,x) = [\mathbf{R}(x)]_\Phi \text{ where } \mathbf{R}(x) = \begin{pmatrix} x & \sqrt{1-x^2} \\ \sqrt{1-x^2} & -x \end{pmatrix}. \end{align*} \end{remark} Finally, we define the matrix polynomial we wish to target via the QSVT framework as follows. \begin{definition}[{\cite[Definition 16]{gslw19}}]\label{def:16} Let $f: \mathbb{R} \to \mathbb{C}$ be even or odd, and let $\mathbf{A} \in \mathbb{C}^{r \times c}$ have SVD $\mathbf{A} = \sum_{i\in[\min(r, c)]} \sigma_iu_iv_i^\dagger$. Then we define \begin{align*} f^{\textup{(SV)}}(\mathbf{A}) = \begin{cases} \sum_{i\in[\min(r, c)]} f(\sigma_i)u_iv_i^\dagger & f\text{ is odd} \\ \sum_{i\in[c]} f(\sigma_i)v_iv_i^\dagger & f\text{ is even} \end{cases} \end{align*} where $\sigma_i$ is defined to be zero for $i > \min(r, c)$. \end{definition} When $f(x) = p(x)$ is an even or odd polynomial, $p^{\textup{(SV)}}(\mathbf{A})$ can be written as a polynomial in the expected way, e.g.\ if $p(x) = x^2 + 1$, $p^{\textup{(SV)}}(\mathbf{A}) = \mathbf{A}^\dagger \mathbf{A} + \mathbf{I}$ and if $p(x) = x^3 + x$, $p^{\textup{(SV)}}(\mathbf{A}) = \mathbf{A} \mathbf{A}^\dagger \mathbf{A} + \mathbf{A}$. With this definition in hand, we are now ready to state the main result of the QSVT framework. 
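Before doing so, we record a small numerical sanity check of Definitions~\ref{def:qsp}, \ref{def:15}, and~\ref{def:16}, which also previews Theorem~\ref{thm:qsp-to-qsvt} below in the simplest setting. The sketch is illustrative only and assumes NumPy (not otherwise used in this paper): for a diagonal $\mathbf{A}$ with entries in $(0,1)$ and the block encoding $\mathbf{U} = (\begin{smallmatrix} \mathbf{A} & \sqrt{\mathbf{I}-\mathbf{A}^2} \\ \sqrt{\mathbf{I}-\mathbf{A}^2} & -\mathbf{A} \end{smallmatrix})$, the top-left block of the phased alternating sequence $\mathbf{U}_\Phi$ matches the scalar QSP circuit \eqref{eq:qsp} evaluated at each diagonal entry.
\begin{verbatim}
# Numerical sanity check of Definitions (def:qsp), (def:15), (def:16): for a
# diagonal A, the phased alternating sequence U_Phi block-encodes the scalar
# QSP polynomial applied to each singular value. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(1)
n = 5                                     # number of phases (odd here)
phi = rng.uniform(0, 2 * np.pi, size=n)
sigma = rng.uniform(0.1, 0.9, size=4)     # singular values of the diagonal A

def R(x):                                 # reflection from Definition (def:qsp)
    s = np.sqrt(1 - x**2)
    return np.array([[x, s], [s, -x]])

def qsp(phi, x):                          # scalar QSP product, eq. (eq:qsp)
    out = np.eye(2, dtype=complex)
    for p in phi:
        out = out @ np.diag([np.exp(1j * p), np.exp(-1j * p)]) @ R(x)
    return out

r = len(sigma)
A = np.diag(sigma)
S = np.diag(np.sqrt(1 - sigma**2))
U = np.block([[A, S], [S, -A]])           # a (1, 0)-block encoding of A
E = lambda p: np.diag(np.r_[np.exp(1j * p) * np.ones(r),
                            np.exp(-1j * p) * np.ones(r)])  # exp(i p (2 Pi - I))

U_Phi = E(phi[0]) @ U                     # odd-length phased alternating sequence
for j in range(1, n, 2):
    U_Phi = U_Phi @ E(phi[j]) @ U.conj().T @ E(phi[j + 1]) @ U

p_at_sigma = np.array([qsp(phi, x)[0, 0] for x in sigma])   # p(sigma_i)
assert np.allclose(U_Phi[:r, :r], np.diag(p_at_sigma))      # = p^(SV)(A) here
\end{verbatim}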
\begin{theorem}[{\cite[Theorem 17]{gslw19}}] \label{thm:qsp-to-qsvt} Let partitioned unitary $\mathbf{U} \in \mathbb{C}^{d \times d}$ be a $(1, 0)$-block encoding of $\mathbf{A}$. Suppose $\Phi = \{\phi_j\}_{j \in [n]} \in \mathbb{R}^n$ is such that $\qsp(\Phi, x)$ computes the degree-$n$ polynomial $p(x) \in \mathbb{C}[x]$, as in \cref{def:qsp}. Then, \begin{align*} \text{if }p\text{ is odd,}\quad \boldsymbol{\Pi}_{\textup{L}} \mathbf{U}_\Phi \boldsymbol{\Pi}_{\textup{R}} &= \begin{pmatrix} p^{\textup{(SV)}}(\mathbf{A}) & \bm{0} \\ \bm{0} & \bm{0} \end{pmatrix} = p^{\textup{(SV)}}(\boldsymbol{\Pi}_{\textup{L}} \mathbf{U} \boldsymbol{\Pi}_{\textup{R}}), \\ \text{and if }p\text{ is even,}\quad \boldsymbol{\Pi}_{\textup{R}} \mathbf{U}_\Phi \boldsymbol{\Pi}_{\textup{R}} &= \begin{pmatrix} p^{\textup{(SV)}}(\mathbf{A}) & \bm{0} \\ \bm{0} & \bm{0} \end{pmatrix} = \boldsymbol{\Pi}_{\textup{R}} p^{\textup{(SV)}}(\boldsymbol{\Pi}_{\textup{L}} \mathbf{U} \boldsymbol{\Pi}_{\textup{R}}) \boldsymbol{\Pi}_{\textup{R}}. \end{align*} \end{theorem} In other words, if $\mathbf{U}$ is a block encoding of $\mathbf{A}$, then $\mathbf{U}_\Phi$ is a block encoding of $p^{\textup{(SV)}}(\mathbf{A})$.\footnote{A warning to the reader: an $\varepsilon$-error in a block encoding might not propagate through QSVT in the expected way. This is because, if $f$ is a function with derivative bounded by $L$, $\norm{f^{\textup{(SV)}}(\mathbf{A}) - f^{\textup{(SV)}}(\mathbf{B})} \leq L\norm{\mathbf{A} - \mathbf{B}}$ is \emph{not true in general}, even up to constants. As Section~3.3 of \cite{gslw19} discusses, sometimes one must lose log factors here.} As we will see, when $\mb_{\textup{L}, 1}$ and $\mb_{\textup{R}, 1}$ are in the computational basis, the CS decomposition (Theorem~\ref{thm:cs}) readily reduces the proof of Theorem~\ref{thm:qsp-to-qsvt} to substantially simpler subproblems (see Lemma~\ref{lem:qsp-to-qsvt}). \begin{remark}[QSVT to quantum algorithms] \label{rmk:qsvt-to-alg} \cref{thm:qsp-to-qsvt} typically admits quantum algorithms in the following way. First, given a polynomial $p(x)$ with real coefficients, provided it is even or odd and $\abs{p(x)} \leq 1$ for all $x \in [-1,1]$, there is a phase sequence $\Phi$ which implements a $q(x)$ with \emph{complex} coefficients, but whose real part is $p(x)$ \cite[Corollary 10]{gslw19}. So, we can get a block encoding of $p^{\textup{(SV)}}(\mathbf{A})$ as an average of the block encodings for $q^{\textup{(SV)}}(\mathbf{A})$ and the corresponding transform with coefficients conjugated, $[q^*]^{\textup{(SV)}}(\mathbf{A})$. Corollary 18 of \cite{gslw19} shows that $(\bra{+} \otimes \boldsymbol{\Pi}_{\textup{L}}) (|0\rangle\langle0| \otimes \mathbf{U}_\Phi + |1\rangle\langle1| \otimes \mathbf{U}_{-\Phi}) (\ket{+} \otimes \boldsymbol{\Pi}_{\textup{R}})$ produces this block encoding (for $p$ odd, with the even case being similar), and Corollary 19 of \cite{gslw19} shows that one can implement this circuit with controlled $\mathbf{U}$ and $\mathbf{U}^\dagger$'s, along with other gates based on $\boldsymbol{\Pi}_{\textup{L}}$ and $\boldsymbol{\Pi}_{\textup{R}}$. Equipped with a block encoding of $p^{\textup{(SV)}}(\mathbf{A})$, we can accomplish our desired algorithmic task. For most applications, we simply want to apply $p^{\textup{(SV)}}(\mathbf{A})$ to an input state $\ket{\psi} \in \mathbb{C}^c$, which can be done by rotating $\ket{\psi}$ in $d$-dimensional state space such that it's aligned with the block in the block encoding. 
\end{remark} \subsection{Simplified QSVT in the computational basis}\label{ssec:basic_case} In this section, we provide a proof of Theorem~\ref{thm:qsp-to-qsvt} in the computational basis. We begin with some helpful notation in this special case, following the partitioning given by Theorem~\ref{thm:cs}. \begin{definition}[{Variant of \cite[Definition 12]{gslw19}}]\label{def:12} Let $\mathbf{U} \in \mathbb{C}^{d \times d}$ be a $(1, 0)$-block encoding of $\mathbf{A} \in \mathbb{C}^{r \times c}$ where $\mb_{\textup{L}, 1}$ and $\mb_{\textup{R}, 1}$ are the first $r$ and $c$ columns of the identity, respectively (see \eqref{eq:compbasis}). By \cref{thm:cs}, there is a CS decomposition compatible with the partitioning of $\mathbf{U}$: \[\mathbf{U} = \begin{pmatrix} \mathbf{A} & \mathbf{U}_{12} \\ \mathbf{U}_{21} & \mathbf{U}_{22} \end{pmatrix} = \underbrace{\begin{pmatrix} \mathbf{V}_{1} \\ & \mathbf{V}_{2} \end{pmatrix}}_{\mathbf{V}} \underbrace{\begin{pmatrix} \mathbf{D}_{11} & \mathbf{D}_{12} \\ \mathbf{D}_{21} & \mathbf{D}_{22} \end{pmatrix}}_{\mathbf{D}} \underbrace{\begin{pmatrix} \mathbf{W}_{1} \\ & \mathbf{W}_{2} \end{pmatrix}^\dagger}_{\mathbf{W}^\dagger}.\] \end{definition} In Definition~\ref{def:12}, we applied Theorem~\ref{thm:cs} to obtain an SVD of $\mathbf{A} = \mathbf{V}_1 \mathbf{D}_{11} \mathbf{W}_1^\dagger$ that we have extended to the $d$-dimensional $\mathbf{U}$. Throughout the remainder of this section, $\mb_{\textup{L}}, \mb_{\textup{R}}, \mproj_{\textup{L}}$, and $\mproj_{\textup{R}}$ are defined consistently with the choice of $\mb_{\textup{L}, 1}$ and $\mb_{\textup{R}, 1}$ in Definition~\ref{def:12}: $\mb_{\textup{L}} = \mb_{\textup{R}} = \mathbf{I}$, and $\mproj_{\textup{L}}$ and $\mproj_{\textup{R}}$ are the identity but with all but the first $r$ and $c$ 1's set to 0, respectively. We next observe that this SVD commutes appropriately with exponentiated projections respecting the partition. \begin{lemma}[Variant of {\cite[Lemma 14]{gslw19}}] \label{lem:14} Let $\phi \in \mathbb{R}$. Following notation of \cref{def:12}, \begin{align*} e^{{\imath}\phi(2\mproj_{\textup{L}} - \mathbf{I})} = \begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix},\; e^{{\imath}\phi(2\mproj_{\textup{R}} - \mathbf{I})} = \begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}, \end{align*} with appropriate block sizes, and \begin{align*} \begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix} \begin{pmatrix} \mathbf{V}_{1} \\ & \mathbf{V}_{2} \end{pmatrix} &= \begin{pmatrix} \mathbf{V}_{1} \\ & \mathbf{V}_{2} \end{pmatrix} \begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}, \\ \begin{pmatrix} \mathbf{W}_{1} \\ & \mathbf{W}_{2} \end{pmatrix} \begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix} &= \begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix} \begin{pmatrix} \mathbf{W}_{1} \\ & \mathbf{W}_{2} \end{pmatrix}. \end{align*} \end{lemma} We next state our main technical claim, whose proof is deferred to the end of the section. \begin{lemma} \label{lem:qsp-to-qsvt} Consider $\mathbf{U} \in \mathbb{C}^{d \times d}$ as a block matrix. Let $\Phi \in \mathbb{R}^n$ be the sequence of angles implementing the degree-$n$ polynomial $p(x) \in \mathbb{C}[x]$ via quantum signal processing (\cref{def:qsp}).
\begin{enumerate} \item When $\mathbf{U} = \begin{pmatrix} \bm{0}^{r\times c} & \mathbf{I}_{r} \\ \mathbf{I}_{c} & \bm{0}^{c\times r} \end{pmatrix}$, we have \begin{equation} \mathbf{U}_\Phi = \begin{pmatrix} p(0) \mathbf{I}_c & \cdot \\ \cdot & \cdot \end{pmatrix} \text{ for $n$ even, and } \mathbf{U}_\Phi = \begin{pmatrix} \bm{0}^{r\times c} & \cdot \\ \cdot & \cdot \end{pmatrix} \text{ for $n$ odd.} \label{eq:block-xzero} \end{equation} \item When $\mathbf{U} = \begin{pmatrix} \mathbf{I}_r & \hphantom{-}\bm{0}^{r\times c} \\ \bm{0}^{c\times r} & -\mathbf{I}_c \end{pmatrix}$, we have \begin{equation}\mathbf{U}_\Phi = \begin{pmatrix} p(1)\mathbf{I}_r & \cdot \\ \cdot & \cdot \end{pmatrix}. \label{eq:block-xone}\end{equation} \item Let $\mathbf{C}, \mathbf{S} \in \mathbb{C}^{r \times r}$ be diagonal with $\mathbf{C}^2 + \mathbf{S}^2 = \mathbf{I}$. Then when $\mathbf{U} = \begin{pmatrix} \mathbf{C} & \hphantom{-}\mathbf{S} \\ \mathbf{S} & -\mathbf{C} \end{pmatrix}$, we have \begin{equation} \mathbf{U}_\Phi = \begin{pmatrix} p^{{\textup{(SV)}}}(\mathbf{C}) & \cdot \\ \cdot & \cdot \end{pmatrix}. \label{eq:block-xmid} \end{equation} \end{enumerate} \end{lemma} Using this lemma, our main QSVT result (Theorem~\ref{thm:qsp-to-qsvt}) in the setting of Definition~\ref{def:12} follows directly. \begin{proof}[Proof of \cref{thm:qsp-to-qsvt}, special case] For convenience, we recall the definition of $\mathbf{U}_\Phi$: \begin{align} \mathbf{U}_\Phi &= \begin{cases} e^{{\imath}\phi_1(2\boldsymbol{\Pi}_{\textup{L}}-\mathbf{I})} \mathbf{U} \prod_{j\in[\frac{n - 1}{2}]} e^{{\imath}\phi_{2j}(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} \mathbf{U}^\dagger e^{{\imath}\phi_{2j+1}(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})}\mathbf{U} &\text{if } n \text{ is odd, and} \\ \hspace{5.8em}\prod_{j\in[\frac n 2]} e^{{\imath}\phi_{2j-1}(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} \mathbf{U}^\dagger e^{{\imath}\phi_{2j}(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})}\mathbf{U} &\text{if } n \text{ is even.} \\ \end{cases} \nonumber \intertext{Using that $\mathbf{V}$ and $\mathbf{W}^\dagger$ from the CS decomposition $\mathbf{U} = \mathbf{V} \mathbf{D} \mathbf{W}^\dagger$ commute with their adjacent exponentiated reflections (\cref{lem:14}), we continue:} &= \begin{cases} \mathbf{V} e^{{\imath}\phi_1(2\boldsymbol{\Pi}_{\textup{L}}-\mathbf{I})} \mathbf{D} \Par{ \prod_{j\in[\frac{n - 1}{2}]} e^{{\imath}\phi_{2j}(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} \mathbf{D}^\dagger e^{{\imath}\phi_{2j+1}(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})}\mathbf{D} } \mathbf{W}^\dagger &\text{if } n \text{ is odd, and} \\ \hspace{5.75em}\mathbf{W} \Par{\prod_{j\in[\frac n 2]} e^{{\imath}\phi_{2j-1}(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} \mathbf{D}^\dagger e^{{\imath}\phi_{2j}(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})}\mathbf{D}} \mathbf{W}^\dagger &\text{if } n \text{ is even} \\ \end{cases} \nonumber\\ &= \begin{cases} \mathbf{V} \mathbf{D}_\Phi \mathbf{W}^\dagger &\text{if } n \text{ is odd, and} \\ \mathbf{W} \mathbf{D}_\Phi \mathbf{W}^\dagger &\text{if } n \text{ is even.} \\ \end{cases}\label{eq:uphi_simplify} \end{align} This reduces the problem to computing $\mathbf{D}_\Phi$.
Recall from \eqref{eq:d-form} that the structure of $\mathbf{D}$ is \begin{equation*} \left(\begin{array}{@{}c|c@{}} \mathbf{D}_{11} & \mathbf{D}_{12} \\ \hline \mathbf{D}_{21} & \mathbf{D}_{22} \end{array} \right) = \left(\begin{array}{@{}c|c@{}} \begin{matrix} \hla{\bm{0}} & & \\ & \hlb{\mathbf{C}} & \\ & & \hlc{\mathbf{I}} \end{matrix} & \begin{matrix} \hla{\mathbf{I}} & & \\ & \hphantom{-}\hlb{\mathbf{S}} & \\ & & \hphantom{-}\hlc{\bm{0}} \end{matrix} \\ \hline \begin{matrix} \hla{\mathbf{I}} & & \\ & \hlb{\mathbf{S}} & \\ & & \hlc{\bm{0}} \end{matrix} & \begin{matrix} \hla{\bm{0}} & & \\ & \hlb{-\mathbf{C}} & \\ & & \hlc{-\mathbf{I}} \end{matrix} \end{array} \right) = \hla{\begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix}} \oplus \hlb{\begin{pmatrix} \mathbf{C} & \hphantom{-}\mathbf{S} \\ \mathbf{S} & -\mathbf{C} \end{pmatrix}} \oplus \hlc{\begin{pmatrix} \mathbf{I} & \hphantom{-}\bm{0} \\ \bm{0} & -\mathbf{I} \end{pmatrix}}. \end{equation*} Similarly, where the blocks below denote the same direct sum decomposition above, for $\phi \in \mathbb{R}$, \begin{align*} e^{{\imath}\phi(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})} &= \left(\begin{array}{@{}c|c@{}} e^{{\imath}\phi}\mathbf{I} \\ \hline & e^{-{\imath}\phi}\mathbf{I} \end{array}\right) = \hla{\begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}} \oplus \hlb{\begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}} \oplus \hlc{\begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}}, \\ e^{{\imath}\phi(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} &= \left(\begin{array}{@{}c|c@{}} e^{{\imath}\phi}\mathbf{I} \\ \hline & e^{-{\imath}\phi}\mathbf{I} \end{array}\right) = \hla{\begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}} \oplus \hlb{\begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}} \oplus \hlc{\begin{pmatrix} e^{{\imath}\phi}\mathbf{I} \\ & e^{-{\imath}\phi}\mathbf{I} \end{pmatrix}}. 
\end{align*} Leveraging this direct sum decomposition of $\mathbf{D}$, applying Lemma~\ref{lem:qsp-to-qsvt} to each block yields \begin{align*} \mathbf{D}_\Phi &= \hla{\begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix}_\Phi} \oplus \hlb{\begin{pmatrix} \mathbf{C} & \hphantom{-}\mathbf{S} \\ \mathbf{S} & -\mathbf{C} \end{pmatrix}_\Phi} \oplus \hlc{\begin{pmatrix} \mathbf{I} & \hphantom{-}\bm{0} \\ \bm{0} & -\mathbf{I} \end{pmatrix}_\Phi} \\ &= \begin{cases} \hla{\begin{pmatrix} \bm{0} & \cdot \\ \cdot & \cdot \end{pmatrix}} \oplus \hlb{\begin{pmatrix} p^{\textup{(SV)}}(\mathbf{C}) & \cdot \\ \cdot & \cdot \end{pmatrix}} \oplus \hlc{\begin{pmatrix} p(1)\mathbf{I} & \cdot \\ \cdot & \cdot \end{pmatrix}} & \text{if }n\text{ is odd, and} \\ \hla{\begin{pmatrix} p(0)\mathbf{I} & \cdot \\ \cdot & \cdot \end{pmatrix}} \oplus \hlb{\begin{pmatrix} p^{\textup{(SV)}}(\mathbf{C}) & \cdot \\ \cdot & \cdot \end{pmatrix}} \oplus \hlc{\begin{pmatrix} p(1)\mathbf{I} & \cdot \\ \cdot & \cdot \end{pmatrix}} & \text{if }n\text{ is even.} \end{cases} \end{align*} So, for $n$ odd, recalling \eqref{eq:uphi_simplify} and $p(0) = 0$, we have \begin{align*} \boldsymbol{\Pi}_{{\textup{L}}}\mathbf{U}_\Phi\boldsymbol{\Pi}_{{\textup{R}}} &= \boldsymbol{\Pi}_{{\textup{L}}}\mathbf{V} \mathbf{D}_\Phi \mathbf{W}^\dagger\boldsymbol{\Pi}_{{\textup{R}}} \\ &= \begin{pmatrix} \mathbf{I} & \\ \hphantom{\mathbf{I}} \end{pmatrix} \begin{pmatrix} \mathbf{V}_{1} \\ & \mathbf{V}_{2} \end{pmatrix} \mathbf{D}_\Phi \begin{pmatrix} \mathbf{W}_{1}^\dagger \\ & \mathbf{W}_{ 2}^\dagger \end{pmatrix} \begin{pmatrix} \mathbf{I} & \\ \hphantom{\mathbf{I}} \end{pmatrix} = \begin{pmatrix} \mathbf{V}_{1} & \\ \hphantom{\mathbf{I}} \end{pmatrix} \mathbf{D}_\Phi \begin{pmatrix} \mathbf{W}_{1}^\dagger & \\ \hphantom{\mathbf{I}} \end{pmatrix} \\ &= \left(\begin{array}{@{}c|c@{}} \mathbf{V}_{1}\left(\begin{matrix} \hla{\bm{0}} & & \\ & \hlb{p^{\textup{(SV)}}(\mathbf{C})} & \\ & & \hlc{p(1)\mathbf{I}} \end{matrix}\right)\mathbf{W}_{1}^\dagger & \begin{matrix} \hphantom{-} & & \\ & \hphantom{-} & \\ & & \hphantom{-} \end{matrix} \\ \hline \begin{matrix} \hphantom{-} & & \\ & \hphantom{-} & \\ & & \hphantom{-} \end{matrix} & \begin{matrix} \hphantom{-} & & \\ & \hphantom{-} & \\ & & \hphantom{-} \end{matrix} \end{array} \right) = \left(\begin{array}{@{}c|c@{}} p^{\textup{(SV)}}(\mathbf{A}) & \bm{0} \\ \hline \bm{0} & \bm{0} \end{array} \right). \end{align*} Similarly, for $n$ even, we have \begin{align*} \boldsymbol{\Pi}_{{\textup{R}}}\mathbf{U}_\Phi\boldsymbol{\Pi}_{{\textup{R}}} &= \boldsymbol{\Pi}_{{\textup{R}}}\mathbf{W} \mathbf{D}_\Phi \mathbf{W}^\dagger\boldsymbol{\Pi}_{{\textup{R}}} \\ &= \begin{pmatrix} \mathbf{W}_{1} & \\ \hphantom{\mathbf{I}} \end{pmatrix} \mathbf{D}_\Phi \begin{pmatrix} \mathbf{W}_{1}^\dagger & \\ \hphantom{\mathbf{I}} \end{pmatrix} \\ &= \left(\begin{array}{@{}c|c@{}} \mathbf{W}_{1}\left(\begin{matrix} \hla{p(0)\mathbf{I}} & & \\ & \hlb{p^{\textup{(SV)}}(\mathbf{C})} & \\ & & \hlc{p(1)\mathbf{I}} \end{matrix}\right)\mathbf{W}_{ 1}^\dagger & \begin{matrix} \hphantom{-} & & \\ & \hphantom{-} & \\ & & \hphantom{-} \end{matrix} \\ \hline \begin{matrix} \hphantom{-} & & \\ & \hphantom{-} & \\ & & \hphantom{-} \end{matrix} & \begin{matrix} \hphantom{-} & & \\ & \hphantom{-} & \\ & & \hphantom{-} \end{matrix} \end{array} \right) = \left(\begin{array}{@{}c|c@{}} p^{\textup{(SV)}}(\mathbf{A}) & \bm{0} \\ \hline \bm{0} & \bm{0} \end{array} \right). 
\end{align*} \end{proof} We conclude the section by proving Lemma~\ref{lem:qsp-to-qsvt}. \begin{proof}[Proof of \cref{lem:qsp-to-qsvt}] The basic intuition behind this argument is that, by assumption and \eqref{eq:qsp}, \begin{align*} \prod_{j\in[n]} \begin{pmatrix} e^{{\imath} \phi_j} & 0 \\ 0 & e^{-{\imath} \phi_j} \end{pmatrix} \begin{pmatrix} x & \sqrt{1-x^2} \\ \sqrt{1-x^2} & -x \end{pmatrix} = \begin{pmatrix} p(x) & \cdot \\ \cdot & \cdot \end{pmatrix}. \end{align*} So, supposing we could evaluate the polynomial at a matrix $x \leftarrow \mathbf{C}$, we get that \begin{align*} ``\prod_{j\in[n]} \begin{pmatrix} e^{{\imath} \phi_j}\mathbf{I} & 0 \\ 0 & e^{-{\imath} \phi_j}\mathbf{I} \end{pmatrix} \begin{pmatrix} \mathbf{C} & \sqrt{\mathbf{I}-\mathbf{C}^2} \\ \sqrt{\mathbf{I}-\mathbf{C}^2} & -\mathbf{C} \end{pmatrix} = \begin{pmatrix} p(\mathbf{C}) & \cdot \\ \cdot & \cdot \end{pmatrix}." \end{align*} This should hold because block matrix multiplication operates by the same rules as scalar matrix multiplication, but requires care to handle the non-square case. Here, we handle this in a more elementary manner. First, we consider \eqref{eq:block-xzero}. When $n$ is even, \begin{align*} \mathbf{U}_\Phi &= \prod_{j\in[\frac n 2]} \begin{pmatrix} e^{{\imath} \phi_{2j-1}}\mathbf{I} \\ & e^{-{\imath} \phi_{2j-1}}\mathbf{I} \end{pmatrix} \begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix}^\dagger \begin{pmatrix} e^{{\imath} \phi_{2j}}\mathbf{I} \\ & e^{-{\imath} \phi_{2j}}\mathbf{I} \end{pmatrix} \begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix} \nonumber\\ &= \prod_{j\in[\frac n 2]} \begin{pmatrix} e^{{\imath}(\phi_{2j-1} - \phi_{2j})}\mathbf{I} & \bm{0} \\ \bm{0} & e^{-{\imath}(\phi_{2j-1} - \phi_{2j})}\mathbf{I} \end{pmatrix} = \begin{pmatrix} e^{{\imath}\sum_{k\in[n]}(-1)^{k+1}\phi_k}\mathbf{I} & \bm{0} \\ \bm{0} & e^{-{\imath}\sum_{k\in[n]}(-1)^{k+1}\phi_k}\mathbf{I} \end{pmatrix}. \end{align*} Taking $\mathbf{I}$ and $\bm{0}$ to be $1$-dimensional scalars $1$ and $0$, this computation and Definition~\ref{def:qsp} also show that $p(0) = e^{{\imath}\sum_{k\in[n]}(-1)^{k+1}\phi_k}$ yielding the desired conclusion. Similarly, when $n$ is odd, \begin{align*} \mathbf{U}_\Phi &= \begin{pmatrix} e^{{\imath} \phi_{1}}\mathbf{I} \\ & e^{-{\imath} \phi_{1}}\mathbf{I} \end{pmatrix} \begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix} \prod_{j\in[\frac{n-1} 2]} \begin{pmatrix} e^{{\imath} \phi_{2j}}\mathbf{I} \\ & e^{-{\imath} \phi_{2j}}\mathbf{I} \end{pmatrix} \begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix}^\dagger \begin{pmatrix} e^{{\imath} \phi_{2j+1}}\mathbf{I} \\ & e^{-{\imath} \phi_{2j+1}}\mathbf{I} \end{pmatrix} \begin{pmatrix} \bm{0} & \mathbf{I} \\ \mathbf{I} & \bm{0} \end{pmatrix} \nonumber\\ &= \begin{pmatrix} \bm{0} & e^{{\imath}\sum_{k\in[n]}(-1)^{k+1}\phi_k}\mathbf{I} \\ e^{-{\imath}\sum_{k\in[n]}(-1)^{k+1}\phi_k}\mathbf{I} & \bm{0} \end{pmatrix}. \end{align*} Next, we prove \eqref{eq:block-xone}. Since $\mathbf{U}$ is a real diagonal matrix, it is Hermitian and commutes with the other matrices in the expression $\mathbf{U}_\Phi$. 
As an immediate consequence, \begin{align*} \mathbf{U}_\Phi &= \begin{pmatrix} \mathbf{I} & \bm{0} \\ \bm{0} & (-1)^n\mathbf{I} \end{pmatrix} \prod_{k\in[n]} \begin{pmatrix} e^{{\imath} \phi_k}\mathbf{I} & \bm{0} \\ \bm{0} & e^{-{\imath} \phi_k}\mathbf{I} \end{pmatrix} = \begin{pmatrix} e^{{\imath} \sum_{k\in[n]}\phi_k}\mathbf{I} & \bm{0} \\ \bm{0} & (-1)^ne^{-{\imath} \sum_{k\in[n]}\phi_k}\mathbf{I} \end{pmatrix}. \end{align*} As before, the same computation specialized to a $2$-dimensional $\mathbf{U} = \boldsymbol{\sigma}_z$ shows that $p(1) = e^{{\imath} \sum_{k\in[n]}\phi_k}$ giving the desired claim. Finally to prove \eqref{eq:block-xmid}, let the diagonal entries of $\mathbf{C}$ be $\{c_i\}_{i \in [r]}$. Then, $\mathbf{U}$ is the direct sum of $r$ matrices of the form $\mathbf{R}(c_i)$, where we recall we defined $\mathbf{R}$ in Definition~\ref{def:qsp}. Applying Definition~\ref{def:qsp} to each $2 \times 2$ block, and comparing to Definition~\ref{def:16}, yields the conclusion. \end{proof} \subsection{Simplified QSVT in general bases}\label{ssec:gen_case} We now finish the proof of Theorem~\ref{thm:qsp-to-qsvt} in the case of general bases $\mb_{\textup{L}}$, $\mb_{\textup{R}}$. For disambiguation we let $\mb_{\textup{L}}$, $\mb_{\textup{R}}$, $\mb_{\textup{L}, 1}$, $\mb_{\textup{R}, 1}$, $\mproj_{\textup{L}}$, $\mproj_{\textup{R}}$ refer to an arbitrary basis and associated subspace in Definition~\ref{def:block_encode}, and (for this section only) we let $\bmb_{\textup{L}}$, $\bmb_{\textup{R}}$, $\bmb_{\textup{L}, 1}$, $\bmb_{\textup{R}, 1}$, $\bmproj_{\textup{L}}$, $\bmproj_{\textup{R}}$ refer to the computational basis where $\bmb_{\textup{L}, 1}$ and $\bmb_{\textup{R}, 1}$ have the same number of columns as $\mb_{\textup{L}, 1}$ and $\mb_{\textup{R}, 1}$. These are related via \begin{equation}\label{eq:convert_bases} \mb_{\textup{L}, 1} = \mb_{\textup{L}} \bmb_{\textup{L}, 1},\; \mproj_{\textup{L}} = \mb_{\textup{L}} \bmproj_{\textup{L}} \mb_{\textup{L}}^\dagger,\; \mb_{\textup{R}, 1} = \mb_{\textup{R}} \bmb_{\textup{R}, 1},\; \mproj_{\textup{R}} = \mb_{\textup{R}} \bmproj_{\textup{R}} \mb_{\textup{R}}^\dagger. \end{equation} Finally, we define \begin{equation}\label{eq:bmmudef}\overline{\mmu} \coloneqq \mb_{\textup{L}}^\dagger \mathbf{U} \mb_{\textup{R}} \iff \mathbf{U} = \mb_{\textup{L}} \overline{\mmu} \mb_{\textup{R}}^\dagger.\end{equation} We prove the general case by reducing to the special case of Theorem~\ref{thm:qsp-to-qsvt} we proved in the prior Section~\ref{ssec:basic_case}, as suggested earlier. The following observation will be useful: \begin{equation}\label{eq:compare_svd} \bmproj_{\textup{L}} \overline{\mmu} \bmproj_{\textup{R}} = \mb_{\textup{L}}^\dagger \mproj_{\textup{L}} \mathbf{U} \mproj_{\textup{R}} \mb_{\textup{R}}. \end{equation} \begin{proof}[Proof of Theorem~\ref{thm:qsp-to-qsvt}, general case.] For simplicity we will only prove the odd case, as the reduction in the even case is essentially identical. 
Recall that when $n$ is odd, \begin{align*} \mathbf{U}_\Phi &= e^{{\imath}\phi_1(2\boldsymbol{\Pi}_{\textup{L}}-\mathbf{I})} \mathbf{U} \prod_{j\in[\frac{n - 1}{2}]} e^{{\imath}\phi_{2j}(2\boldsymbol{\Pi}_{\textup{R}} - \mathbf{I})} \mathbf{U}^\dagger e^{{\imath}\phi_{2j+1}(2\boldsymbol{\Pi}_{\textup{L}} - \mathbf{I})}\mathbf{U} \\ &= \mb_{\textup{L}} \Par{e^{{\imath}\phi_1(2\bmproj_{\textup{L}}-\mathbf{I})} \overline{\mmu} \prod_{j\in[\frac{n - 1}{2}]} e^{{\imath}\phi_{2j}(2\bmproj_{\textup{R}} - \mathbf{I})} \overline{\mmu}^\dagger e^{{\imath}\phi_{2j+1}(2\bmproj_{\textup{L}} - \mathbf{I})}\overline{\mmu}} \mb_{\textup{R}}^\dagger \\ &= \mb_{\textup{L}} \begin{pmatrix} p^{\textup{(SV)}}(\overline{\mathbf{A}}) & \bm{0} \\ \bm{0} & \bm{0} \end{pmatrix} \mb_{\textup{R}}^\dagger,\text{ where } \overline{\mathbf{A}} \coloneqq \bmproj_{\textup{L}} \overline{\mmu} \bmproj_{\textup{R}}. \end{align*} In the second line, we used the definitions \eqref{eq:convert_bases}, \eqref{eq:bmmudef} and the fact that $\bmb_{\textup{L}}$, $\bmb_{\textup{R}}$ are unitary. Finally, in the third line we used the special case of Theorem~\ref{thm:qsp-to-qsvt} we proved earlier, applied to $\overline{\mmu}$. The conclusion follows by the claim \begin{equation}\label{eq:compare_psv}\mb_{\textup{L}} \begin{pmatrix} p^{\textup{(SV)}}(\overline{\mathbf{A}}) & \bm{0} \\ \bm{0} & \bm{0} \end{pmatrix} \mb_{\textup{R}}^\dagger = p^{\textup{(SV)}}(\mathbf{A}),\text{ where } \mathbf{A} = \mproj_{\textup{L}} \mathbf{U} \mproj_{\textup{R}}.\end{equation} Indeed, as a consequence of \eqref{eq:compare_svd}, $\mathbf{A}$ and $\overline{\mathbf{A}}$ have the same singular values, their left singular vectors are related via rotation by $\mb_{\textup{L}}$, and their right singular vectors are related via rotation by $\mb_{\textup{R}}$. Comparing with the definition of $p^{\textup{(SV)}}$ from Definition~\ref{def:16} yields the claim \eqref{eq:compare_psv}. \end{proof} \section{QSVT and Chebyshev Series} \label{sec:cheb} \begin{quotation} \textit{So it becomes tempting to look at approximation methods that go beyond interpolation, and to warn people that interpolation is dangerous... the trouble with this is that for almost all the functions encountered in practice, Chebyshev interpolation works beautifully!} \hspace{1em plus 1fill}---Trefethen, \cite{trefethen13} \end{quotation} \subsection{Chebyshev polynomials}\label{ssec:cheb_poly} The QSVT framework gives a generic way of applying bounded polynomials to matrices. In applications of interest, the main goal is actually to apply a non-polynomial function; to capture these applications, it is important to develop tools for approximating the relevant functions with bounded polynomials. In this section, we introduce Chebyshev polynomials, our main tool for constructing approximations. We present only properties which are needed to achieve our results. \begin{definition}[Chebyshev polynomial] The degree-$n$ \emph{Chebyshev polynomial} (of the first kind), denoted $T_n(x)$, is the function that satisfies, for all $z \in \mathbb{C}$, \[ T_n(\tfrac{1}{2}(z + z^{-1})) = \tfrac{1}{2}(z^n + z^{-n}). \] \end{definition} For $z = \exp({\imath} \theta)$ with $\theta \in [-\pi, \pi]$, we may identify $x \coloneqq \frac{1}{2}(z + z^{-1}) = \cos\theta$. This identification yields another familiar definition of the Chebyshev polynomials, \[T_n(\cos(\theta)) = \cos(n\theta).
\] From these definitions we have that $\norm{T_n(x)}_{[-1,1]} \leq 1$, and that $T_n$ has the same parity as $n$, i.e.\ $T_n(-x) = (-1)^n T_n(x)$. We can invert $x = \tfrac{1}{2}(z + z^{-1})$ to obtain $z = x \pm \sqrt{x^2 - 1}$. Consequently, \begin{align*} T_n(x) = \frac{1}{2}\Par{\Par{x + \sqrt{x^2 - 1}}^n + \Par{x - \sqrt{x^2 - 1}}^{n}}. \end{align*} \begin{definition}[Chebyshev coefficients] Let $f: [-1,1] \to \mathbb{C}$ be Lipschitz (i.e.\ $\abs{f(x) - f(y)} \leq C\abs{x-y}$ for finite $C$). Then $f$ has a unique decomposition into Chebyshev polynomials \begin{align*} f(x) = \sum_{k=0}^\infty a_k T_k(x), \end{align*} where the \emph{Chebyshev coefficients} $a_k$ are absolutely summable, and can be computed via the following integral (counterclockwise) around the complex unit circle, where $\pi {\imath}$ is replaced by $2\pi {\imath}$ when $k = 0$: \begin{align} \label{eq:cheb-integral} a_k = \frac{1}{\pi{\imath}}\int_{\abs{z} = 1} z^{k-1}f(\tfrac{1}{2}(z + z^{-1})) \mathrm{d} z. \end{align} \end{definition} This comes from the Cauchy integral formula: the map $z \mapsto \frac{1}{2}(z + z^{-1})$ sends the unit circle (twice) onto $[-1,1]$, so that when we write $f(\frac{1}{2}(z + z^{-1}))$ as a Laurent series $\sum_{k \in \mathbb{Z}} b_k(z^k + z^{-k})$, the coefficients $b_k$ match up with the coefficients $a_k$ (up to a factor of two). For more details, see Theorem 3.1 of \cite{trefethen13}. We will typically construct polynomial approximations via \emph{Chebyshev truncation}, defined as follows. \begin{definition}[Chebyshev truncation] For a function $f: [-1,1] \to \mathbb{C}$ written as a Chebyshev series $f(x) = \sum_{k=0}^\infty a_k T_k(x)$, we denote the degree-$n$ \emph{Chebyshev truncation} of $f$ as \begin{align*} f_n(x) = \sum_{k=0}^n a_kT_k(x). \end{align*} \end{definition} To construct polynomial approximations time-efficiently, we should avoid computing the integral \eqref{eq:cheb-integral}. So, instead of the Chebyshev truncation, we would actually compute the \emph{Chebyshev interpolant}~\cite[Chapter 3]{trefethen13}, the degree-$n$ polynomial that agrees with $f$ at the $n+1$ Chebyshev points $\cos(\frac{j\pi}{n})$ for $0 \le j \le n$. Computing this interpolant only requires $n+1$ evaluations of $f$ to specify, and $O(n\log(n))$ additional time to compute its Chebyshev coefficients, via a fast cosine transform. Chebyshev truncation and Chebyshev interpolation are closely related through standard bounds (see Eq.\ 4.9, \cite{trefethen13}); we focus on the former as it is conceptually cleaner, but all bounds we prove extend to interpolation up to a factor of two.\footnote{This justifies our use of the quote from \cite{trefethen13} at the beginning of Section~\ref{sec:cheb}: though it is about Chebyshev interpolation rather than truncation, the spirit is the same.} \subsection{Chebyshev series for standard functions} If the function $f$ one wishes to approximate is standard, closed forms of the Chebyshev coefficients may be known, so one can take a Chebyshev truncation and explicitly bound the error: \begin{align*} \norm{f - f_n}_{[-1,1]} = \norm[\Big]{\sum_{k=n+1}^\infty a_kT_k(x)}_{[-1,1]} \leq \sum_{k=n+1}^\infty \abs{a_k} \norm{T_k(x)}_{[-1,1]} = \sum_{k=n+1}^\infty \abs{a_k}. \end{align*} In other words, by choosing $n$ such that the coefficient tail sum is bounded by $\varepsilon$, we obtain an $\varepsilon$-uniform approximation on $[-1, 1]$. We list some standard Chebyshev coefficient series here, which can help for converting a Taylor or Fourier series to a Chebyshev series.
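As a brief computational aside before listing them: the displayed tail-sum bound, and the quality of the Chebyshev interpolant discussed above, are easy to observe in practice. The sketch below is illustrative only; it assumes NumPy's \texttt{numpy.polynomial.Chebyshev.interpolate} (not otherwise used in this paper) and reports the uniform error of degree-$n$ Chebyshev interpolants of $e^{3x}$ on $[-1,1]$.
\begin{verbatim}
# Illustrative sketch: uniform error of Chebyshev interpolants of exp(t*x) on [-1, 1].
# Assumes NumPy's numpy.polynomial.Chebyshev.interpolate (degree-n interpolant at
# n+1 Chebyshev-type nodes), which tracks the truncation up to a factor of two.
import numpy as np
from numpy.polynomial import Chebyshev

t = 3.0
f = lambda x: np.exp(t * x)
grid = np.linspace(-1, 1, 2001)          # dense grid estimating the sup norm

for n in (2, 4, 8, 16):
    p = Chebyshev.interpolate(f, deg=n)  # agrees with f at the Chebyshev nodes
    err = np.max(np.abs(p(grid) - f(grid)))
    print(f"degree {n:2d}: sup error ~ {err:.2e}")
# The error decays rapidly in n for this entire function, consistent with the
# tail-sum bound displayed above.
\end{verbatim}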
The notation $\sum^{\prime}$ means that if a term exists for $T_0$ in the summation, it is halved: \begin{align*} x^m &= 2^{1-m}\sum_{n=0}^{\lfloor m/2\rfloor}{}^{\prime} \binom{m}{n} T_{m-2n}(x). \tag*{\cite[Eq.\ 2.14]{mh02}} \\ e^{tx} &= 2\sum_{n=0}^{\infty}{}^{\prime} I_n(t) T_n(x). \tag*{\cite[Eq.\ 5.18]{mh02}} \\ \sinh(tx) &= 2\sum_{n=0}^{\infty} I_{2n+1}(t) T_{2n+1}(x). \tag*{\cite[Eq.\ 5.19]{mh02}} \\ \cosh(tx) &= 2\sum_{n=0}^{\infty}{}^{\prime} I_{2n}(t) T_{2n}(x). \tag*{\cite[Eq.\ 5.20]{mh02}} \end{align*} In the above display, $I_n$ denotes the modified Bessel function of the first kind. This function is typically defined as the solution to a differential equation, but for us it suffices to define $I_n(t)$ as ``the $n^{\text{th}}$ Chebyshev coefficient of $e^{tx}$,'' so by \eqref{eq:cheb-integral} (and pairing Laurent coefficients), \begin{align*} I_n(t) \coloneqq \frac 1 {2\pi{\imath}}\oint_{\abs{z} = 1} z^{n-1} e^{\frac{t}{2}(z + z^{-1})}\mathrm{d} z = \frac 1 {2\pi{\imath}}\oint_{\abs{z} = 1} z^{-n-1} e^{\frac{t}{2}(z + z^{-1})}\mathrm{d} z. \end{align*} In the remainder of the section, to demonstrate the ``direct coefficient bound'' style of approximation error analysis, we analyze the use of Chebyshev truncation to give a uniform approximation to $f(x) = e^{tx}$ for $t \in \mathbb{R}$ on the interval $[-1, 1]$. The main result of this section is the following. \begin{theorem}[{\cite[Theorem 4.1]{sv14}}, {\cite[Lemmas 57, 59]{gslw19}}]\label{thm:expbound} Let $\varepsilon > 0$ and let $p(x)$ be the degree-$n$ Chebyshev truncation of $e^{tx}$ for $t \in \mathbb{R}$. Then $\norm{p(x) - e^{tx}}_{[-1,1]} \leq \varepsilon$ for \begin{align*} n \eqsim \begin{cases} \abs{t} + \frac{\log(1/\varepsilon)}{\log(e + \log(1/\varepsilon)/\abs{t})} & \varepsilon \leq 1 \\ \sqrt{\abs{t}\log(e^{\abs{t}}/\varepsilon)} & \varepsilon > 1 \end{cases}. \end{align*} \end{theorem} To elaborate on what this bound states, there are four regimes of $\varepsilon$. \begin{enumerate} \item When $\varepsilon \geq e^{\abs{t}}$, the zero polynomial $p(x) \equiv 0$ suffices. \item When $1 \leq \varepsilon < e^{\abs{t}}$, we have $n \eqsim \sqrt{\abs{t}\log(e^{\abs{t}}/\varepsilon)}$, or $n \eqsim \sqrt{\abs{t}\log(1/\delta)}$, rewriting $\varepsilon = \delta e^{\abs{t}}$ to scale with the maximum value of $e^{\abs{t}x}$ on $[-1,1]$. This is the bound shown in \cite{sv14}. \item When $e^{-C\abs{t}} \leq \varepsilon < 1$ for some universal constant $C$, the scaling is $n \eqsim \abs{t}$. \item When $\varepsilon < e^{-C\abs{t}}$, the scaling is $\frac{\log(1/\varepsilon)}{\log(e + \log(1/\varepsilon)/\abs{t})}$. This is the bound shown in \cite{gslw19}. \end{enumerate} Theorem~\ref{thm:expbound} was first proven by combining results from \cite{sv14} and \cite{gslw19}. A recent work \cite{AggarwalA22} obtained the same upper bound, as well as a lower bound showing Theorem~\ref{thm:expbound} is tight. The bounds from \cite{sv14, gslw19} are each loose in certain regimes (\cite{sv14}'s bound, $\sqrt{\abs{t}\log(e^{\abs{t}}/\varepsilon)} + \log(e^{\abs{t}}/\varepsilon)$, is loose in regime 4, whereas \cite{gslw19} assumes $\varepsilon < 1$), potentially due to the different proof techniques employed. Specifically, while \cite{gslw19} proceeds by a standard Bessel function inequality to bound the tail terms of the Chebyshev truncation, \cite{sv14} proceeds by approximating monomials in the Taylor expansion of $e^{tx}$ with Chebyshev truncation. 
As noted by \cite{LowC17}, this strategy bounds the tail terms of the Chebyshev truncation by an easier-to-understand series that dominates it. We give another (arguably more straightforward) proof of Theorem~\ref{thm:expbound}, by bounding the error of Chebyshev truncation (a strategy also employed by \cite{AggarwalA22}). To achieve the right bound in the $\varepsilon > 1$ regime, we require a sharper bound on $I_n$, the Chebyshev coefficients of $e^{tx}$. \begin{lemma}[Carlini's formula] \label{carlini} For $t \in \mathbb{R}$, \begin{align*} \abs{I_n(t)} < \frac{\exp(\sqrt{t^2 + n^2})(\sqrt{(n/t)^2 + 1} - n/\abs{t})^n}{2(n^2 + t^2)^{1/4}}. \end{align*} \end{lemma} We contribute an independent proof of \cref{carlini} in \cref{app:carlini}, which was proven without using Bessel-style techniques in \cite{AggarwalA22}. While we would guess that this bound is well-known, we could not find this statement in the literature.\footnote{ Off-the-shelf bounds like Kapteyn's inequality \cite[Eq. 10.14.8]{DLMF} are not quite tight enough for our purposes, since a very fine degree of control is necessary in the challenging regime where none of $n$, $t$, or $t/n$ remain constant. } The equivalent bound on the (unmodified) Bessel function of the first kind $J_k$ is due to Carlini~\cite[Chapter 1.4]{Watson44}, and can be viewed as a ``real-valued'' analog of \cref{carlini} following the equivalence $I_k(t) = {\imath}^{-k} J_k({\imath} t)$.\footnote{ Note that the real version of this statement is perhaps more non-trivial, since then the terms in the power series for the Bessel function are no longer nonnegative. Qualitatively similar statements may have also been made by Laplace, but for this claim Watson cites a book of the M\'ecanique C\'eleste without an English translation~\cite[page 7]{Watson44}. This felt like a good place to stop our investigation of Bessel function bounds. } Our proof follows \cite{Watson17} (who handled the real-valued version), and begins with a representation of a Bessel function as a contour integral. We bound this integral via the \emph{method of steepest descent}, where the contour is changed to a real-valued one, using that the integrand is analytic. Using Lemma~\ref{carlini}, we now prove Theorem~\ref{thm:expbound}. \begin{proof}[Proof of Theorem~\ref{thm:expbound}] By symmetry it suffices to take $t \geq 0$. We split into cases based on $\varepsilon$. \textit{Case 1: $\varepsilon \le 1$.} Define $r(t, \varepsilon)$ to be the value $r$ such that $\varepsilon = (t/r)^r$. We choose \[n = \left\lceil r\Par{3t, \varepsilon} \right\rceil.\] Note that the function $(t/r)^r$ is decreasing for $r \ge t$ and hence in the regime $\varepsilon \le 1$, we have $n \ge 3t$. 
Then, recalling that $p$ is the degree-$n$ Chebyshev truncation, we have by Lemma~\ref{carlini} that \begin{align*} \norm{p(x) - e^{tx}}_{[-1,1]} &\leq 2\sum_{k=n+1}^\infty \abs{I_k(t)} \le 2\sum_{k = n}^\infty |I_k(t)| \\ &\leq 2\sum_{k=n}^\infty \frac{\exp(\sqrt{t^2 + k^2})(\sqrt{(k/t)^2 + 1} - k/t)^k}{2(k^2 + t^2)^{1/4}}\\ &\leq \sum_{k=n}^\infty \exp(\sqrt{t^2 + k^2})(\sqrt{(k/t)^2 + 1} - k/t)^k \\ &\leq \sum_{k=n}^\infty \exp(k\sqrt{(t/n)^2 + 1})(\sqrt{(n/t)^2 + 1} - n/t)^k \tag*{by $k \ge n$ and since $\sqrt{x^2 + 1} - x$ decreases in $x$}\\ &\leq \sum_{k=n}^\infty \Big(\exp(\sqrt{(t/n)^2 + 1}) \cdot \frac{t}{2n}\Big)^k \tag*{by $\sqrt{x^2 + 1} - x \leq \frac 1 {2x}$}\\ &\leq \sum_{k=n}^\infty \Big(\exp(\sqrt{10/9})\cdot \frac{t}{2n}\Big)^k \tag*{by $n \ge 3t$} \\ &\leq \sum_{k=n}^\infty \Big(\frac{3t}{2n}\Big)^k \leq \Big(\frac{3t}{n}\Big)^{n} \sum_{k=1}^\infty \frac{1}{2^k} \leq \varepsilon \tag*{by $n \geq r(3t, \varepsilon)$.} \end{align*} The desired bound on $n$ in this regime then follows from \cite[Lemma 59]{gslw19}, which shows that, for $\varepsilon \in (0,1)$ and $t > 0$, $r(t, \varepsilon) = \Theta(t + \frac{\log(1/\varepsilon)}{\log(e + \log(1/\varepsilon)/t)})$. \textit{Case 2: $\varepsilon > 1$.} For $\varepsilon \in (1, 2]$ the conclusion follows from our proof when $\varepsilon = 1$, so assume $\varepsilon > 2$;\footnote{If $t$ is a sufficiently small constant, then applying the $\varepsilon = 1$ case gives a constant-degree polynomial. Otherwise, $t$ is sufficiently large to outweigh constant-factor changes in $\varepsilon$ (and hence additive changes in $\log \frac 1 \varepsilon$).} for $\varepsilon \ge e^t$ the zero polynomial suffices so assume $\varepsilon < e^t$. Let $\delta = \frac{\varepsilon - 1} {5}$ and $m = \lceil 3t \rceil$. We choose \[n = \left\lceil \sqrt{100t\Par{t + \log \frac 1 \delta}} \right\rceil \ge 10\sqrt{t},\] where we use $t + \log \frac 1 \delta \ge 1$ for our range of $\varepsilon$. The claim then follows from $\delta = \Theta(\varepsilon)$ and combining: \begin{equation}\label{eq:splitatr} \begin{aligned} 2\sum_{k = n + 1}^{m - 1} |I_k(t)| \le 5\delta,\; 2\sum_{k = m}^\infty |I_k(t)| \le 1. \end{aligned} \end{equation} The second claim in \eqref{eq:splitatr} was already shown by our earlier derivation setting $\varepsilon = 1$, since $m \ge r(3t, 1) = 3t$. For bounding the first sum, the following estimate will be helpful: for $k \le 3t$, \begin{equation}\label{eq:besselhelper} \begin{aligned} \exp\Par{\sqrt{t^2 + k^2}}\Par{\sqrt{\Par{\tfrac k t}^2 + 1} - \tfrac k t}^k &\le \exp\Par{\sqrt{t^2 + k^2} + k\Par{\sqrt{\Par{\tfrac k t}^2 + 1} - \tfrac k t - 1}} \\ &= \exp\Par{t\Par{1 + \tfrac k t}\Par{\sqrt{\Par{\tfrac k t}^2 + 1} - \tfrac k t}} \\ &\le \exp\Par{t\Par{1 - 0.01\Par{\tfrac k t}^2}}. \end{aligned} \end{equation} The last inequality used $(1 + x)(\sqrt{1 + x^2} - x) \le 1 - 0.01x^2$ for $0 \le x \le 3$. Since $m - 1 \le 3t$, \begin{align*} \sum_{k = n + 1}^{m - 1} |I_k(t)| &\leq \frac{1}{2(n^2 + t^2)^{1/4}}\sum_{k=n+1}^{m - 1} \exp\Par{t\Par{1 - 0.01\Par{\frac k t}^2}} \\ &= \frac{e^t}{2(n^2 + t^2)^{1/4}}\sum_{k=n+1}^{m - 1} \exp\Par{-\frac{k^2}{100t}} \\ &\le \frac{e^t}{2(n^2 + t^2)^{1/4}}\int_{n}^\infty \exp\Par{-\frac{x^2}{100t}}\mathrm{d} x \\ &= \frac{e^t \sqrt{25t}}{(n^2 + t^2)^{1/4}} \int_{\frac{n}{\sqrt{100t}}}^\infty \exp\Par{-x^2} \mathrm{d} x \\ &\le \frac{e^t \sqrt{25 t}}{(n^2 + t^2)^{1/4}} \cdot \Par{\frac{1}{2}\exp\Par{-\frac{n^2}{100t}}} \le \frac 5 2\exp\Par{t - \frac{n^2}{100t}} \le \frac 5 2\delta.
\end{align*} The first line used \cref{carlini} and \eqref{eq:besselhelper}, and the second-to-last used the Gaussian tail bound \eqref{eq:erf-bound}: \[\int_{\frac{n}{\sqrt{100t}}}^\infty \exp\Par{-x^2} \mathrm{d} x < \exp\Par{-\frac{n^2}{100t}} \cdot \frac{1}{1 + \frac n {\sqrt{100t}}} \le \frac{1}{2} \exp\Par{-\frac{n^2}{100t}},\] where we used $n \ge 10\sqrt{t}$ and $\exp(t - \frac{n^2}{100t}) \le \delta$ by construction. \end{proof} \subsection{Bounded approximations via Chebyshev series: a user's guide}\label{sec:bounded} In this section, we develop tools for constructing \emph{bounded} polynomial approximations, a requirement for the machinery of Section~\ref{sec:qsvt} (see discussion after Definition~\ref{def:qsp}). To do so, we combine a powerful meta-technique for bounding the Chebyshev coefficients of analytic functions with applications of explicit thresholding functions. This meta-technique is stated as Theorem~\ref{thm:trefethen}; in many classical settings, a direct application already yields near-optimal polynomial approximations. \begin{theorem}[{\cite[Theorems~8.1 and 8.2]{trefethen13}}] \label{thm:trefethen} Let $f$ be an analytic function in $[-1,1]$ and analytically continuable to the interior of the Bernstein ellipse $E_{\rho} = \{\frac12(z + z^{-1}) : \abs{z} = \rho\}$, where it satisfies $\abs{f(x)} \leq M$. Then its Chebyshev coefficients satisfy $\abs{a_0} \leq M$ and $\abs{a_k} \leq 2M\rho^{-k}$ for $k \geq 1$. Consequently, for each $n \geq 0$, its Chebyshev projections satisfy \begin{align*} \norm{f - f_n}_{[-1,1]} &\leq \frac{2M\rho^{-n}}{\rho - 1}, \end{align*} and choosing $n = \lceil \frac{1}{\log(\rho)}\log\frac{2M}{(\rho-1)\varepsilon} \rceil$, we have $\norm{f - f_n}_{[-1,1]} \leq \varepsilon$. \end{theorem} \begin{proof} Recall from \eqref{eq:cheb-integral} (and since inverting $z$ does not change the contour integral) that for $k \ge 1$, \begin{align*} a_k = \frac{1}{\pi {\imath}} \int_{|z| = 1} z^{-(k + 1)} f(\tfrac{1}{2}(z + z^{-1})) \mathrm{d} z. \end{align*} The boundary of $E_\rho$ is given by $\frac{1}{2}(z + z^{-1})$ for $|z| = \rho$, and $f$ is analytic in $E_\rho$, so we may choose a different contour without affecting the value of the integral: \begin{align*} a_k = \frac{1}{\pi {\imath}} \int_{|z| = \rho} z^{-(k + 1)} f(\tfrac{1}{2}(z + z^{-1})) \mathrm{d} z. \end{align*} The conclusion follows from the facts that the circumference of $|z| = \rho$ is $2\pi \rho$ and the function is bounded by $M$. A similar argument gives the case $k = 0$, where \eqref{eq:cheb-integral} has $2\pi{\imath}$ in the denominator. \end{proof} Theorem~\ref{thm:trefethen} shows that if one can analytically continue $f$ to a Bernstein ellipse with $\rho = 1 + \alpha$ for small $\alpha$, then a degree $\approx \frac 1 \alpha$ polynomial obtains good approximation error on $[-1, 1]$. Unfortunately, since the approximation in Theorem~\ref{thm:trefethen} is based on Chebyshev truncation, the approximation rapidly blows up outside the range $[-1, 1]$ (in Lemma~\ref{lem:chebz-bound}, we give estimates on the growth of Chebyshev polynomials, i.e.\ that the $n^{\text{th}}$ polynomial grows as $O(|x|^n)$ for $x$ sufficiently outside $[-1, 1]$). In interesting applications of the QSVT framework, this is an obstacle. For example, to use QSVT for matrix inversion, we need a polynomial approximation to $x^{-1}$ on $[\delta, 1]$ that is bounded on $[-1,1]$. 
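To make the blow-up concrete, the following illustrative sketch (again assuming NumPy's \texttt{numpy.polynomial.Chebyshev.interpolate}, with hypothetical parameter choices) interpolates $x^{-1}$ on $[\delta, 1]$ and evaluates the result slightly outside that interval, where it is already enormous.
\begin{verbatim}
# Illustrative sketch: a Chebyshev interpolant of 1/x on [delta, 1] is accurate
# there, but grows astronomically just outside, so it is useless as a *bounded*
# approximation on [-1, 1].
import numpy as np
from numpy.polynomial import Chebyshev

delta = 0.05
f = lambda x: 1.0 / x
q = Chebyshev.interpolate(f, deg=60, domain=[delta, 1.0])

inside = np.linspace(delta, 1.0, 1001)
print("max error on [delta, 1]:", np.max(np.abs(q(inside) - f(inside))))  # tiny
print("|q(-delta)|:", abs(q(-delta)))   # huge
print("|q(-1)|:    ", abs(q(-1.0)))     # vastly larger still
\end{verbatim}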
Upon linearly remapping $[\delta, 1]$ to $[-1, 1]$, this corresponds to a bounded approximation on $[-b, 1]$ for some $b > 1$, so Chebyshev truncations give us a very poor degree of control. To this end, we provide the following ``bounded approximation'' variant of Theorem~\ref{thm:trefethen}, as a user-friendly way of extending it to applications of the QSVT framework. \begin{theorem}\label{thm:main_bounded} Let $f$ be an analytic function in $[-1, 1]$ and analytically continuable to the interior of $E_\rho$ where $\rho = 1 + \alpha$, where it is bounded by $M$. For $\delta \in (0,\frac 1 C \min(1, \alpha^2))$ where $C$ is a sufficiently large constant, $\varepsilon \in (0, 1)$, and $b > 1$, there is a polynomial $q$ of degree $O(\frac b \delta \log\frac{b}{\delta\varepsilon})$ such that \begin{align*} \norm{f - q}_{[-1, 1]} &\leq M\varepsilon, \\ \norm{q}_{[-(1 + \delta), 1 + \delta]} &\leq M, \\ \norm{q}_{[-b, -(1 + \delta)] \cup [1 + \delta,b]} &\leq M\varepsilon. \end{align*} \end{theorem} \begin{proof}[Proof sketch.] We give a formal proof in Section~\ref{ssec:bounded_proof}, but briefly summarize our proof strategy here. \begin{enumerate} \item Applying Theorem~\ref{thm:trefethen} gives $f_n$ of degree $n \approx \frac 1 \alpha$ approximating $f$ in the interval $[-1, 1]$, but $f_n$ does not satisfy the other required conclusions due to its growth outside $[-1, 1]$. \item We multiply $f_n$ by a ``threshold'' $r$ based on the Gaussian error function $\erf$, whose tails decay much faster than the Chebyshev polynomials grow outside $[-1, 1]$. Our function $r$ has the property that inside $[-1, 1]$, it is close to $1$, and outside $[-(1 + \delta), 1 + \delta]$, it is close to $0$. \item Using bounds on the growth of $\erf$, we show $r \cdot f_n$ is bounded on a Bernstein ellipse of radius $1 + \frac \delta b$ appropriately rescaled, and applying Theorem~\ref{thm:trefethen} once more gives the conclusion. \end{enumerate} The final proof requires some care to obtain the claimed scalings on the windows of approximation, but we include this tedium to make the theorem statement as simple to use as possible. \end{proof} \begin{remark}\label{rem:bounded_comments} \cref{thm:main_bounded} is an alternative to Corollary 66 of \cite{gslw19}. Translating the statement there to our setting, the polynomial approximation it would achieve has degree $O(\frac{b}{\delta}\log\frac{M}{\varepsilon})$. Our approximation of degree $O(\frac{b}{\delta}\log\frac{b}{\delta\varepsilon})$ is comparable, matching when $\varepsilon$ and $\frac \delta b$ are polynomially related. We note that our Theorem~\ref{thm:main_bounded} has a $\log\frac{b}{\delta\varepsilon}$ dependence (instead of $\log\frac{1}{\varepsilon}$) because we use a slightly weaker type of assumption: not only does \cite[Corollary 66]{gslw19} assume that its function $f(x) = \sum_{k=0}^\infty a_k x^k$ is analytic and bounded by $M$ on a disk of radius $1 + \delta$, it also assumes that the Taylor series coefficients $\abs{a_k}$ satisfy $\sum_{k=0}^\infty \abs{a_k}(1+\delta)^k \leq M$. Without this final condition, boundedness merely implies that $\abs{a_k} = O((1+\delta)^{-k})$, and this slight weakening leads to an additional logarithmic factor when $\varepsilon$ is large. In most applications, the difference is negligible; both strategies have an additional polynomial overhead on $\frac b \delta$, which typically dominates a $\log \frac b \delta$ dependence. 
Finally, the precondition of Theorem~\ref{thm:main_bounded} is weaker than the requirement of Corollary 66 of \cite{gslw19} in another sense. Specifically, \cite{gslw19} assumes a locally bounded Taylor series in a scaled unit circle in the complex plane, whereas we only require a bound on a (potentially much smaller) Bernstein ellipse, which could enable more applications. \end{remark} We use the rest of the section to provide a user's guide on applying Theorem~\ref{thm:main_bounded} to boundedly approximate various piecewise smooth functions. All of our applications proceed as follows. \begin{enumerate} \item We linearly rescale the ``region of interest,'' i.e.\ the part of $\mathbb{R}$ where we wish to approximate a function via bounded polynomials, to the interval $[-1, 1]$. \item We apply Theorem~\ref{thm:main_bounded} to the rescaled function for appropriate choices of $b$ and $\delta$, so the region where the approximation must be bounded is captured upon undoing the rescaling. \item If additional properties of the bounded approximation are desired, e.g.\ a parity requirement, we use the additional implications of Theorem~\ref{thm:main_bounded} to obtain these properties. \end{enumerate} A simple application of \cref{thm:main_bounded} is obtaining degree-$O(\frac{1}{\delta}\log\frac{1}{\delta\varepsilon})$ polynomial approximations to the sign and rectangle functions (where our guarantee is $\varepsilon$-closeness outside of a $\delta$ interval around the points of discontinuity, as described in \cite[Lemmas 25 and 29]{gslw19}).\footnote{This result shows that direct Chebyshev truncation is sometimes not enough for these slightly different approximation guarantees: the sign function has Chebyshev series $\sum_{k\geq 0} \frac{4}{\pi}\frac{(-1)^k}{2k+1}T_{2k+1}(x)$~\cite[Exercise 3.6]{trefethen13}, which cannot be truncated without paying $\Omega(1)$ error.} We leave this as an exercise. We begin with a bounded approximation to the rescaled exponential function in Corollary~\ref{cor:exp_bounded}. Such bounds have previously seen use in quantum applications of the multiplicative weights framework via QSVT, to design faster approximate solvers for linear programs \cite{vanApeldoornG19, BoulandGJSWT23}. \begin{corollary}\label{cor:exp_bounded} Let $\varepsilon \in (0, 1)$, and let $f(x) = \exp(\beta x)$ for $\beta \ge 1$. There exists a polynomial $p$ of degree $O(\beta \log \frac \beta {\varepsilon})$ such that $\norm{p}_{[-1, 1]} = O(1)$ and $\norm{p - f}_{[-1, 0]} \le \varepsilon$. \end{corollary} \begin{proof} First, we rescale so the region of interest is $[-1, 1]$: let $g(y) = f(\tfrac{1}{2} (y - 1)) = \exp(\tfrac{\beta}{2}(y - 1))$. Note that $g(y)$ is analytic everywhere and bounded by a constant on $E_\rho$ for $\rho = 1 + \beta^{-\frac{1}{2}}$. To see this, note that $\abs{g(z)} = \exp(\tfrac{\beta}{2} \cdot \mathrm{Re}(z - 1))$, and the real part of $z - 1$ for $z \in E_\rho$ is maximized at the rightmost point of the ellipse, where Fact~\ref{lem:bern-bounds} shows it is $O(\frac 1 \beta)$. Hence, applying Theorem~\ref{thm:main_bounded} with $b = 3$ and $\delta = \Theta(\frac 1 \beta)$ for a sufficiently small constant yields the claim upon shifting the region of interest back, since $\frac{1}{2}(y - 1) = 1$ for $y = 3$. \end{proof} Next, in Corollary~\ref{cor:approx-arcsin} we provide an analog of Lemma 70 in \cite{gslw19}, regarding the bounded approximation of $\arcsin$, using the framework of Theorem~\ref{thm:main_bounded}. \begin{corollary}\label{cor:approx-arcsin} Let $\delta, \varepsilon \in (0,1)$, and let $f(x) = \frac 2 \pi \arcsin(x)$.
There exists an odd polynomial $p(x)$ of degree $O(\frac{1}{\delta}\log\frac{1}{\delta\varepsilon})$ such that $\norm{p}_{[-1, 1]} \le 1$ and $\norm{p - f}_{[-(1 - \delta), 1 - \delta]} \le \varepsilon$. \end{corollary} \begin{proof} First, we rescale so the region of interest is $[-1,1]$: let $\overline{\arcsin}(x) = \arcsin((1-\delta)x)$. The $\arcsin$ function is analytic on $\mathbb{C} \setminus ((-\infty, -1] \cup [1, \infty))$, so we choose $\rho = 1 + \sqrt{2\delta}$ so that $\overline{\arcsin}(x)$ is analytic on the interior of $E_\rho$ by the first bound in Fact~\ref{lem:bern-bounds}. By the maximum modulus principle, the maximum of $\overline{\arcsin}$ is achieved on the boundary of the ellipse. We can bound this using that, for $\abs{z} \leq 1$ (so the Taylor series~\cite[Eq.~4.24.1]{DLMF} converges), \begin{equation}\label{eq:arcsin_bound} \abs{\arcsin{z}} = \left|\sum_{n = 0}^\infty \frac{(2 n)!} {2^{2 n} (n!)^2} \frac{z^{2 n + 1}}{2 n + 1}\right| \leq \sum_{n = 0}^\infty \frac{(2 n)!} {2^{2 n} (n!)^2} \frac{\abs{z}^{2 n + 1}}{2 n + 1} \leq \arcsin\abs{z} \leq \frac{\pi}{2}. \end{equation} We can further verify by Fact~\ref{lem:bern-bounds} that $|z| \le 1 + \delta$ for $z \in E_\rho$, so the above display yields \begin{align*} \left|\overline{\arcsin}\Par{z}\right| &= \left|\arcsin\Par{(1 - \delta)z}\right| \le \frac \pi 2. \end{align*} So, by \cref{thm:main_bounded} with $b \gets \frac 1 {1 - \delta}$ and $\delta \gets \frac \delta C$ for a sufficiently large $C$, there is a polynomial $q$ with $\norm{q - \overline{\arcsin}}_{[-1, 1]} \le \frac \pi 2 \varepsilon$ and $\norm{q}_{[-(1 - \delta)^{-1}, (1 - \delta)^{-1}]} \le \frac \pi 2$. Letting $p((1 - \delta)x) = \frac 2 \pi q(x)$, we have the desired bounds. The degree of $p$ is $O(\frac{1}{\delta}\log\frac{1}{\delta\varepsilon})$, and it is odd by Corollary~\ref{cor:parity_bounded} as $\overline{\arcsin}$ is odd. \end{proof} In Corollary~\ref{cor:approx-exptarcsin}, we further apply Theorem~\ref{thm:main_bounded} to the ``fractional query'' setting of Corollary 72 in \cite{gslw19}, which requires approximations to $\exp({\imath} t \arcsin(x))$ for small $t$. As in \cite{gslw19}, we provide bounded approximations to $\cos(t\arcsin(x))$ and $\sin(t\arcsin(x))$ through our framework. \begin{corollary}\label{cor:approx-exptarcsin} Let $\varepsilon \in (0,1)$ and $t \in [-1,1]$. There exist an even polynomial $p$ and an odd polynomial $q$ of degree $O(\log \frac 1 \varepsilon)$ such that $\norm{p}_{[-1, 1]} \le 1$, $\norm{q}_{[-1, 1]} \le 1$, and \begin{align*} \norm{p(x) - \cos(t \arcsin(x))}_{[-\frac{1}{2}, \frac{1}{2}]} \leq \varepsilon,\; \norm{q(x) - \sin(t \arcsin(x))}_{[-\frac{1}{2}, \frac{1}{2}]} \leq \varepsilon. \end{align*} \end{corollary} \begin{proof} First, we rescale so the region of interest is $[-1,1]$: let $f(x) = \cos(t \arcsin(\frac x 2))$ and $g(x) = \sin(t \arcsin(\frac x 2))$. These are analytic on $\mathbb{C} \setminus ((-\infty, -2] \cup [2, \infty))$, since that is where $\arcsin(\frac x 2)$ is analytic. Let $\rho = 2$, so $f$ and $g$ are analytic on the interior of $E_\rho$. We observe that for all $z \in \mathbb{C}$, \begin{align*} |\cos(z)| = \frac{1}{2}\left|e^{{\imath} z} + e^{-{\imath} z}\right| \le \frac{1}{2} |e^{{\imath} z}| + \frac{1}{2} |e^{-{\imath} z}| \le \cosh(|z|), \end{align*} as $\cosh$ is increasing and the imaginary part of $z$ is at most $|z|$. A similar argument shows $|\sin(z)| \le \cosh(|z|)$.
By Fact~\ref{lem:bern-bounds} we observe that every point in the interior of $\frac{1}{2} E_\rho$ has modulus $\le \frac 3 4$, and $|\arcsin|$ is bounded in this region by $\frac \pi 2$ (see \eqref{eq:arcsin_bound}), so for $z \in E_\rho$, $\left|f\Par{z}\right| = \left|\cos\Par{t\arcsin\Par{\frac z 2 }} \right| \le \cosh\Par{\frac \pi 2}$, and we may analogously bound $g$ on $E_\rho$. Taking $b = 2$ and $\delta$ to be a sufficiently small constant in \cref{thm:main_bounded}, and rescaling the region of interest, gives the conclusion. The parities of $p$ and $q$ follow from Corollary~\ref{cor:parity_bounded} and the parities of $\cos(t\arcsin(x))$ and $\sin(t\arcsin(x))$. \end{proof} Finally, Corollary~\ref{cor:approx-x-c} gives a variant of Corollaries 67 and 69 in \cite{gslw19}, regarding the bounded approximation of negative power functions. Our bound has a slightly worse logarithmic factor in some regimes (as discussed in Remark~\ref{rem:bounded_comments}), but otherwise agrees with the bounds in \cite{gslw19} up to a constant factor, using arguably a more standard approach. \begin{corollary} \label{cor:approx-x-c} Let $\delta, \varepsilon \in (0, 1)$, and let $f(x) = \abs{\frac \delta x}^{c}$ for $c > 0$. There exist both even and odd polynomials $p(x)$ of degree $O(\frac{\max(1, c)}{\delta}\log\frac{1}{\delta\varepsilon})$ such that $\norm{p}_{[-1, 1]} \le 3$ and $\norm{p - f}_{[\delta, 1]} \leq \varepsilon$. \end{corollary} \begin{proof} Assume $\delta$ is sufficiently small, else taking a smaller $\delta$ only affects the bound by a constant. We rescale the region of interest: $x = \frac{1-\delta}{2}y + \frac{1+\delta}{2}$ is in $[\delta, 1]$ for $y \in [-1,1]$, so let \begin{align*} g(y) \coloneqq \delta^c\Big(\frac{1-\delta}{2}y + \frac{1+\delta}{2}\Big)^{-c}. \end{align*} We require a bound of $g$ on $E_\rho$ for $\rho = 1 + \sqrt{\delta/(4\max(1, c))}$. Since $f$ is largest closest to the origin, $g$ is largest at the point closest to $-\frac{1+\delta}{1-\delta}$, i.e.\ $-\frac12(\rho + \rho^{-1}) > -(1 + \frac \delta {8\max(1, c)})$ by \cref{lem:bern-bounds}. Further, \begin{align*} g\Par{-\frac12(\rho + \rho^{-1})} &\leq g\Par{-\Par{1 + \frac \delta {8\max(1, c)}}} \\ &\leq \delta^c\Big(-\frac{1-\delta}{2}\Par{1 + \frac \delta {8\max(1, c)}} + \frac{1+\delta}{2}\Big)^{-c} \\ &= \Par{1 - \frac{1 - \delta}{16\max(1, c)}}^{-c} \leq \frac 3 2. \end{align*} Let $\widetilde{\delta} = \frac \delta {4C\max(1, c)}$ for sufficiently large $C$, and $b = 4$. \cref{thm:main_bounded} yields $q(y)$ satisfying: \begin{align*} \norm{q(y) - g(y)}_{[-1,1]} \leq \varepsilon, \; \norm{q(y)}_{[-(1 + \widetilde{\delta}), 1+\widetilde{\delta}]} \leq \frac 3 2,\; \norm{q(y)}_{[-4, -(1 + \widetilde{\delta})] \cup [1+\widetilde{\delta}, 4]} \leq \varepsilon. \end{align*} Shifting back $y = \frac{2}{1-\delta}(x - \frac{1+\delta}{2})$, it is clear for sufficiently large $C$ that $y = -\frac{1 + 3\delta}{1 - \delta}$ (which corresponds to $x = -\delta$) has $y < -(1 + \widetilde{\delta})$, and $y = -\frac{3 + \delta}{1 - \delta}$ (which corresponds to $x = -1$) has $y > -4$. So, \begin{equation}\label{eq:qfbounds} \begin{aligned} \left\|q\Par{\frac{2}{1-\delta}\Par{x - \frac{1+\delta}{2}}} - f(x)\right\|_{[\delta,1]} \leq \varepsilon, \\ \left\|q\Par{\frac{2}{1-\delta}\Par{x - \frac{1+\delta}{2}}}\right\|_{[-\delta, \delta]} \leq \frac 3 2, \\ \left\|q\Par{\frac{2}{1-\delta}\Par{x - \frac{1+\delta}{2}}}\right\|_{[-1, -\delta]} \leq \varepsilon.
\end{aligned} \end{equation} Depending on whether we wish the final function to be even or odd, we take \[p(x) = q\Par{\frac{2}{1-\delta}\Par{x - \frac{1+\delta}{2}}} \pm q\Par{\frac{2}{1-\delta}\Par{-x - \frac{1+\delta}{2}}}.\] Then the guarantees of \eqref{eq:qfbounds} give $ \norm{p(x) - f(x)}_{[\delta,1]} \leq 2\varepsilon$ and $\norm{p(x)}_{[-1, 1]} \leq 3$, and we rescale $\varepsilon$ to conclude. The final degree of the polynomial is the degree of $q(y)$: $O(\frac{\max(1, c)}{\delta}\log\frac{1}{\delta\varepsilon})$. \end{proof} \subsection{Proof of Theorem~\ref{thm:main_bounded}}\label{ssec:bounded_proof} We conclude with a proof of Theorem~\ref{thm:main_bounded}. Our proof builds upon several elementary bounds on Bernstein ellipses and the growth of Chebyshev polynomials, as well as the construction of explicit thresholding functions. For ease of exposition, we state all the helper bounds we use in this section, but defer their proofs to Appendix~\ref{app:more_csp}. We begin with our bounds on the sizes of Bernstein ellipses. \begin{fact} \label{lem:bern-bounds} The Bernstein ellipse $E_\rho$ for $\rho \geq 1$ satisfies \[ \operatorname{interior}(E_{\rho}) \subset \Big\{x + {\imath} y \mid x, y \in \mathbb{R},\, \abs{x} \leq \tfrac{1}{2}(\rho + \rho^{-1}) \text{ and } \abs{y} \leq \tfrac{1}{2}(\rho - \rho^{-1}) \Big\}. \] Further, for $\rho = 1 + \delta \leq 2$, \begin{align*} 1 + \frac{\delta^2}{4} \leq \frac{1}{2}(\rho + \rho^{-1}) &= 1 + \frac{\delta^2}{2(1+\delta)} \leq 1 + \frac{\delta^2}{2}, \\ \frac34 \delta \leq \frac{1}{2}(\rho - \rho^{-1}) &= \delta - \frac{\delta^2}{2(1+\delta)} \leq \delta. \end{align*} \end{fact} This yields the following containment fact, whose proof is deferred to Appendix~\ref{app:more_csp}. \begin{restatable}{lemma}{restatetinyellipse} \label{lem:tiny-ellipse-rescale} For $\delta \in(0, 1)$, $(1+\delta)E_{1 + \alpha}$ is contained in the interior of $E_{\sigma}$, where $\sigma = 1 + 3(\alpha + \sqrt{\delta})$. \end{restatable} We also use the following bounds on Chebyshev polynomials, deferring a proof to Appendix~\ref{app:more_csp}. \begin{restatable}{lemma}{restatechebzbound}\label{lem:chebz-bound} There are universal constants $C, c > 0$ such that, for $n \geq 0$ and $x, y \in \mathbb{R}$, $|y| \le c$, \begin{equation*} \abs{T_n(x + {\imath} y)} \leq \begin{cases} (1 + C\sqrt{\abs{y}})^n & \abs{x} \leq 1 \\ (x + \sqrt{x^2 - 1} + C\sqrt{\abs{xy}})^n & \abs{x} > 1 \end{cases}. \end{equation*} \end{restatable} To ameliorate the polynomial growth of Chebyshev polynomials from Lemma~\ref{lem:chebz-bound}, we apply a threshold function with tails which decay superexponentially. Our thresholding is based on the Gaussian error function $\erf$; we define $\erf$ and recall some standard bounds on it in the following. \begin{fact}[Eqs.\ 7.8.3, 7.8.7, \cite{DLMF}]\label{fact:erf} For $z \in \mathbb{C}$, $\erf: \mathbb{C} \to \mathbb{C}$ by $\erf(z) \coloneqq \frac{2}{\sqrt{\pi}} \int_0^{z} e^{-t^2}\mathrm{d} t$. Then, \begin{align} 1 - \erf(x) &= \frac{2}{\sqrt{\pi}}\int_x^\infty e^{-t^2} \mathrm{d} t < \frac{2e^{-x^2}}{\sqrt{\pi}(1+x)} < 2e^{-x^2}, & \label{eq:erf-bound} \\ \abs{\erf({\imath} x)} &= \frac{2}{\sqrt{\pi}}\int_0^{x} e^{t^2} \mathrm{d} t < \frac{2(e^{x^2} - 1)}{\sqrt{\pi}\abs{x}} < 2e^{x^2} \text{ (when $x \geq 1$)}. 
\label{eq:erfi-bound} \end{align} \end{fact} For $z \in \mathbb{R}$, we note that $\frac{1}{2} + \frac{1}{2} \erf(z)$ is the cumulative distribution function for a Gaussian with mean $0$ and variance $\frac{1}{2}$, which interpolates between $0$ and $1$; consequently, one may view $\erf$ (appropriately rescaled as necessary) as a ``smoothed'' variant of the sign function \begin{align*} \sgn(x) \coloneqq \begin{cases} -1 & x < 0 \\ 0 & x = 0 \\ 1 & x > 0 \end{cases}. \end{align*} Building upon $\erf$, we state our family of thresholding functions, deferring proofs to Appendix~\ref{app:more_csp}. \begin{restatable}[Thresholding function]{lemma}{restaterectbounds}\label{lem:rect-bounds} For $\mu, s > 0$, let $r(z) \coloneqq \frac{1}{2}(\erf(s(\mu + z)) + \erf(s(\mu - z)))$. When $z \in \mathbb{R}$, $0 \le r(z) \le 1$. When $x, y \in \mathbb{R}$ and $z = x + {\imath} y$, $|r(z) - r(x)| \le \exp(-s^2\Par{\mu - |x|}^2) |\erf({\imath} sy)|$. \end{restatable} Evidently for $z \in \mathbb{R}$, the function $r$ behaves as a threshold: sufficiently inside $[-\mu, \mu]$, it is close to $1$, and sufficiently outside it is close to $0$. The size of the ``growth window'' near $\mu$ is roughly $\frac 1 s$, and Lemma~\ref{lem:rect-bounds} shows $r(x + {\imath} y) \approx r(x)$ for small $y$. Leveraging these tools, we now prove Theorem~\ref{thm:main_bounded}. \begin{proof}[Proof of Theorem~\ref{thm:main_bounded}] Without loss of generality, we rescale so that $M = 1$. To obtain the theorem statement, it suffices to prove that there exists a polynomial $q$ of degree $O(\frac{b}{\delta}\log\frac{1}{\alpha\varepsilon})$ such that \begin{align} \norm{f - q}_{[-(1-\delta), 1-\delta]} &\leq \varepsilon, \nonumber \\ \norm{q}_{[-1, 1]} &\leq 1+\varepsilon, \label{eq:bounded-true}\\ \norm{q}_{[-b, -1] \cup [1,b]} &\leq \varepsilon. \nonumber \end{align} To see this, consider $f$ as in the theorem statement. Let $\delta' = \frac{\delta}{1+\delta} = \Theta(\delta)$, so that $\frac{1}{1-\delta'} = 1+\delta$. Then $f(\tfrac{y}{1-\delta'})$ is analytic and bounded by $M$ for $y$ in the interior of $(1-\delta') E_\rho$, which contains $E_{1+\frac \alpha 4}$ by \cref{lem:tiny-ellipse-rescale} and $\sqrt{\delta'} < \sqrt{\delta} < \frac \alpha C$. Applying \eqref{eq:bounded-true}, we get a function $q(y)$ such that $q((1-\delta')x)$ satisfies the guarantees described above, with the intervals scaled up by a factor of $\frac{1}{1-\delta'}$: \begin{align*} \abs{f(x) - q((1-\delta')x)} &\leq \varepsilon & \text{for } x &\in [-1, 1], \\ \abs{q((1-\delta')x)} &\leq 1+\varepsilon & \text{for } x &\in [-\tfrac{1}{1-\delta'}, \tfrac{1}{1-\delta'}] = [-(1+\delta), 1+\delta], \\ \abs{q((1-\delta')x)} &\leq \varepsilon & \text{for } x &\in [-\tfrac{b}{1-\delta'}, -(1+\delta)] \cup [1+\delta,\tfrac{b}{1-\delta'}] \end{align*} To conclude, consider $\frac{1}{1+\varepsilon}q((1-\delta')x)$. We make $q$ slightly smaller so that it is bounded by $1$ in $[-(1+\delta), (1+\delta)]$. This only affects the closeness to $f$ by a constant factor: for $x \in [-1,1]$, \begin{align*} \abs{f(x) - \tfrac{1}{1+\varepsilon}q((1-\delta')x)} \leq \abs{f(x) - q((1-\delta')x)} + (1 - \tfrac{1}{1+\varepsilon})\abs{q((1-\delta')x)} \leq 2\varepsilon. \end{align*} The degree of $\frac{1}{1+\varepsilon}q((1-\delta')x)$ is degree of $q(x)$, as desired. We now proceed to prove \eqref{eq:bounded-true}. 
By \cref{thm:trefethen}, there is a polynomial with degree $n = \lceil\frac 1 \alpha \log \frac{6}{\alpha\varepsilon}\rceil$ with $\norm{f - f_n}_{[-1, 1]} \leq \frac \varepsilon 3$, and the Chebyshev coefficients of $f_n = \sum_{k = 0}^n a_k T_k(x)$, satisfy $\abs{a_k} \leq 2\rho^{-k}$. Next, let $\widetilde{p}(z) \coloneqq r(z)f_n(z)$ be the truncation $f_n$ multiplied by the function $r(z)$ from \cref{lem:rect-bounds} with \[ \mu \coloneqq 1 - \frac \delta 2,\; s \coloneqq \frac{C_s}{\delta}\sqrt{\log\frac{1}{\alpha\varepsilon}}, \] and $C_s$ is a constant to be chosen later. Let $\widetilde{\rho} \coloneqq 1 + \frac \delta b$; we will show $\widetilde{p}$ is bounded on $bE_{\widetilde{\rho}}$, and then our final approximation $q$ will be an application of \cref{thm:trefethen} to approximate $\widetilde{p}$ on $[-b, b]$. To this end, it suffices to bound $\widetilde{p}(z)$ for all $z \in S$, where the strip $S$ is defined as \[S \coloneqq \Brace{z = x + {\imath} y \mid |y| \le \delta},\] because Fact~\ref{lem:bern-bounds} implies $S \supseteq bE_{\widetilde{\rho}}$. We begin by bounding $r(z)$ for $z \in S$: \begin{equation}\label{eq:rbound} \begin{aligned} \abs{r(x+{\imath} y)} &\leq r(x) + e^{-s^2(\mu - |x|)^2}\abs{\erf({\imath} sy)}\\ &\leq r(x) + e^{-s^2(\mu - |x|)^2}\abs[\Big]{\erf\paren[\Big]{{\imath} C_s\sqrt{\log\tfrac{1}{\alpha\varepsilon}}}}\\ &\leq r(x) + 2e^{-s^2(\mu - |x|)^2}(\alpha\varepsilon)^{-C_s^2}. \end{aligned} \end{equation} The inequalities above respectively used \cref{lem:rect-bounds}, the definition of $S$, and \eqref{eq:erfi-bound}. We now combine \eqref{eq:rbound} with Lemma~\ref{lem:chebz-bound} to bound $\widetilde{p}$ on $S$. First, consider when $z = x + {\imath} y \in S$ and $x \in [-1, 1]$. This is the bottleneck of the argument, where $\widetilde{p}$ is largest. We bound \begin{align*} \abs{\tilde{p}(z)} = \abs{r(z)} \abs{f_n(z)} &\le \paren[\Big]{1 + \exp\paren{-s^2(\mu - |x|)^2}\poly\paren[\Big]{\frac 1 {\alpha\varepsilon}}} \abs[\Bigg]{\sum_{k = 0}^n a_k T_k(z)} \\ &\le \poly\paren[\Big]{\frac 1 {\alpha\varepsilon}}\paren[\Bigg]{2\sum_{k = 0}^n \rho^{-k}|T_k(z)|} \\ &\le \poly\paren[\Big]{\frac 1 {\alpha\varepsilon}} \paren[\Bigg]{\sum_{k = 0}^n \paren[\Big]{\frac{1 + K\sqrt{\delta}}{\rho}}^k} = \poly\paren[\Big]{\frac 1 {\alpha\varepsilon}}. \end{align*} The first inequality was \eqref{eq:rbound}, the second used the guarantees of Theorem~\ref{thm:trefethen}, the third used Lemma~\ref{lem:chebz-bound}, and the last used $\delta \ll \alpha^2$, $n = \poly(\frac 1 {\alpha\varepsilon})$. Next, for $z = x + {\imath} y \in S$ with $|x| \ge 1$, \begin{equation}\label{eq:bigx_rbound} \begin{aligned} r(x) = \frac{1}{2}\Par{\erf\Par{s(\mu + |x|)} - \erf\Par{s(|x| - \mu)}} \le \frac{1}{2}\Par{1 - \erf\Par{s(|x| - \mu)}} < e^{-s^2(|x| - \mu)^2}, \end{aligned} \end{equation} where the first inequality was $\erf(z) \le 1$ for $z \in \mathbb{R}$, and the last was \eqref{eq:erf-bound}. Further, Lemma~\ref{lem:chebz-bound} yields \begin{equation}\label{eq:bigx_fbound} \begin{aligned} |f_n(z)| &\le \sum_{k = 0}^n 2\rho^{-k}\Par{|x| + \sqrt{x^2 - 1} + K\sqrt{|xy|}}^k \\ &\le 2n\Par{|x| + \sqrt{x^2 - 1} + K\sqrt{|xy|}}^n \le 2n\exp\Par{n\Par{|x| - 1 + \sqrt{x^2 - 1} + K\sqrt{|xy|}}}. 
\end{aligned} \end{equation} Continuing, we combine \eqref{eq:rbound}, \eqref{eq:bigx_rbound}, and \eqref{eq:bigx_fbound} to conclude \begin{equation}\label{eq:tp_bigx_bound} \begin{aligned} |\widetilde{p}(z)| &\le \exp\Par{-s^2(|x| - \mu)^2} \poly\paren[\Big]{\frac 1 {\alpha\varepsilon}} |f_n(z)| \\ &\le \exp\Par{-s^2(|x| - \mu)^2 + n\Par{|x| - 1 + \sqrt{x^2 - 1} + K\sqrt{|xy|}}} \poly\paren[\Big]{\frac 1 {\alpha\varepsilon}} \\ &\leq \exp\Par{\Par{-\frac{C_s^2}{\delta^2}(|x| - \mu)^2 + \frac 1{\sqrt{\delta}}\Par{|x| - 1 + \sqrt{x^2 - 1} + K\sqrt{|x|\delta}} }\log \frac 1 {\alpha\varepsilon}}\poly\Par{\frac 1 {\alpha\varepsilon}}. \end{aligned} \end{equation} Here we used that $n \le \frac 1{\sqrt \delta} \log \frac 1 {\alpha\varepsilon}$ under the assumed relationship between $\delta$ and $\alpha$, and the definition of $s$. For sufficiently large $C_s$, it is straightforward to see that for all $|x| \ge 1$, since the left-hand side asymptotically grows faster than each term in the right-hand side, \begin{equation}\label{eq:csdominates} \frac{C_s^2}{2\delta^2}\paren[\Big]{|x| - \paren[\Big]{1 - \frac \delta 2}}^2 \ge \frac 1 {\sqrt{\delta}}\paren[\Big]{|x| - 1 + \sqrt{x^2 - 1} + K\sqrt{|x|\delta}}, \end{equation} and hence for this choice of $C_s$, plugging this into the previous bound gives that $\widetilde{p}(x + {\imath} y) \le \poly(\frac 1 {\alpha\varepsilon})$ for $|x| \ge 1$.\footnote{We note that the $\poly$ in \eqref{eq:tp_bigx_bound} hides a $C_s$-dependent exponent, which grows faster than \eqref{eq:csdominates}.} Later, we will need a tighter bound when $y = 0$ and $|x| \ge 1$: in this setting, the second additive term in \eqref{eq:rbound} vanishes, and hence taking $C_s$ such that \eqref{eq:csdominates} holds, repeating the arguments in \eqref{eq:tp_bigx_bound} without the $\poly(\frac 1 {\alpha\varepsilon})$ overhead gives for sufficiently large $C_s$, \begin{equation}\label{eq:largex_tpbound}\widetilde{p}(x) \le \exp\paren[\Big]{-\frac{C_s^2}{2\delta^2}(|x| - \mu)^2 \log \frac 1 {\alpha\varepsilon}}\le \exp\paren[\Big]{-\frac{C_s^2}{8}\log\frac 1 {\alpha\varepsilon}} \le \frac \varepsilon 3 \text{ for all } x \in \mathbb{R} \text{ with } |x| \ge 1.\end{equation} Thus, we have shown that for all $z \in S$, $|\widetilde{p}(z)| \le \poly(\frac 1 {\alpha\varepsilon})$, and as the product of analytic functions, $\widetilde{p}$ is analytic. Next, for all $z \in \mathbb{C}$ let $\widehat{p}(z) \coloneqq \widetilde{p}(bz)$. We have shown $\widehat{p}$ is bounded on $E_{\widetilde{\rho}}$, and hence Theorem~\ref{thm:trefethen} gives a Chebyshev truncation $\widehat{p}_m$ such that $\norm{\widehat{p} - \widehat{p}_m}_{[-1, 1]} \le \frac \varepsilon 3$, for \[m = O\Par{\frac b \delta \log \frac b {\delta\varepsilon}}.\] Our final approximation is $q(z) \coloneqq \widehat{p}_m(\frac z b)$. By the definitions of $q$ and $\widehat{p}$, the relationship between $\widehat{p}$ and $\widetilde{p}$ implies $\norm{q - \widetilde{p}}_{[-b, b]} \le \frac \varepsilon 3$. 
Combined with \eqref{eq:largex_tpbound}, this implies the third bound in \eqref{eq:bounded-true}, \[\norm{q}_{[-b, -1] \cup [1, b]} \le \varepsilon.\] The first bound $\norm{f - q}_{[-(1-\delta), 1-\delta]} \le \varepsilon$ in \eqref{eq:bounded-true} follows from $\norm{q - \widetilde{p}}_{[-(1-\delta), 1-\delta]} \le \frac \varepsilon 3$, $\norm{f_n - f}_{[-(1-\delta), 1-\delta]} \le \frac \varepsilon 3$, and $\norm{f_n - \widetilde{p}}_{[-(1-\delta), 1-\delta]} \le (1 + \frac \varepsilon 3)\norm{1 - r}_{[-(1-\delta), 1-\delta]} \le \frac \varepsilon 3$ choosing $C_s$ sufficiently large. Finally, the second bound in \eqref{eq:bounded-true} follows from \begin{align*} \norm{q}_{[1-\delta, 1]} &\le \norm{q - \widetilde{p}}_{[1-\delta, 1]} + \norm{\widetilde{p}}_{[1-\delta, 1]} \le \frac \varepsilon 3 + \norm{\widetilde{p}}_{[1-\delta, 1]} \\ &\le \frac \varepsilon 3 + \norm{f_n}_{[1-\delta, 1]} \le \varepsilon + \norm{f}_{[1-\delta, 1]} \le 1 + \varepsilon, \end{align*} where we used the closeness bounds between $(\widetilde{p}, q)$ and $(f_n, f)$, as well as the assumed bound on $f$ over $E_\rho$ (which contains $[1-\delta, 1]$). The bound $\norm{q}_{[-1, -(1-\delta)]} \le 1 + \varepsilon$ follows symmetrically. \end{proof} In some of our applications in Section~\ref{sec:bounded}, we used the following property of our approximations constructed via Theorem~\ref{thm:main_bounded}, which we record here for convenience. \begin{corollary}\label{cor:parity_bounded} In the setting of Theorem~\ref{thm:main_bounded}, if $f$ is even or odd, so is $q$. \end{corollary} \begin{proof} It is straightforward to check that all operations we perform on $f$ (Chebyshev truncation, multiplication by an even function $r$, and another Chebyshev truncation) preserve parity. \end{proof} \section{Introduction}\label{sec:intro} This work is intended as a ``user-friendly guide'' to understanding technical aspects of the \emph{quantum singular value transformation} (QSVT), an elegant framework introduced by Low and Chuang~\cite{LowC16} for designing quantum algorithms, particularly those that can be phrased as a \emph{linear algebraic task} on a quantum state $\ket{\psi}$, viewed as a vector of amplitudes $\sum_{k=1}^d \psi_k \ket{k}$. This includes Hamiltonian simulation~\cite{LowC17}, i.e.\ preparing $e^{{\imath} t\mathbf{H}}\ket{\psi}$ for a Hamiltonian $\mathbf{H}$; quantum linear system solving~\cite{hhl09}, i.e.\ preparing $\mathbf{A}^{-1}\ket{\psi}$ for a sparse matrix $\mathbf{A}$; and quantum random walks \cite{Szegedy04}, i.e.\ approximating large powers of a Markov chain transition matrix or discriminating its singular values. QSVT was subsequently popularized by a paper of Gily\'{e}n, Su, Low, and Wiebe~\cite{gslw19} which demonstrates that these important, seemingly disparate quantum algorithms can be seen as consequences of a single unifying primitive. Our aim is to expose the beauty of~\cite{gslw19} by simplifying or providing alternatives to its more mathematically dense proofs. This goal may be viewed as complementary to prior expositions of \cite{gslw19} such as \cite{MartynRTC21}, which focused on describing applications of QSVT to the design of quantum algorithms. On the other hand, our work directly provides streamlined proofs of the main technical results in \cite{gslw19}. 
First, we give an alternate exposition of the \emph{qubitization} technique given in \cite[Section 3.2]{gslw19}, a lifting of \emph{quantum signal processing} (a product decomposition for computing bounded scalar polynomials) to QSVT, its matrix counterpart. Specifically, QSVT implements quantum signal processing separately on each of the singular values of a ``block encoded matrix'' by mapping them through a polynomial transformation, while preserving the block encoding structure. Our exposition of QSVT is by way of the \emph{Cosine-Sine} decomposition, a strengthening of Jordan's lemma \cite{jordan75} that is more amenable to the computations needed in the proof of QSVT's correctness. We believe that this viewpoint elucidates the QSVT technique and its action on the block structure naturally induced by the constructions of encoded matrices, simplifying parts of the exposition in \cite{gslw19}. Second, we give a brief introduction to Chebyshev polynomials, and in particular, the technique of truncating \emph{Chebyshev Series}, which can be used to match or nearly-match all of the polynomial approximation results needed throughout \cite[Section 5]{gslw19}. Our starting point is a classical theorem of Trefethen (\cref{thm:trefethen}) which bounds the error incurred by Chebyshev truncation for smooth functions. We derive, as a consequence of Trefethen's result, a ``bounded Chebyshev truncation'' analog (\cref{thm:main_bounded}) which applies to piecewise-smooth functions, and is compatible with the QSVT framework. Unlike its analog in the original work \cite[Corollary 66]{gslw19}, our \cref{thm:main_bounded} does not use Taylor series or Fourier series in its proof: the only approximation theory tools used are standard properties of Chebyshev polynomials. This may be unsurprising from conventional wisdom in approximation theory, as Chebyshev, Fourier, and Laurent/Taylor series are ``essentially equivalent'' under changes of variables, as discussed in \cite[Preface]{trefethen13}. We recommend viewing this note as a companion piece to \cite{gslw19}, for readers interested in understanding or applying QSVT. Correspondingly, our exposition will be lighter on discussing motivations than is appropriate for a self-contained work, as we focus our scope on only what is needed to provide context for the techniques of \cite{gslw19}. Though this choice does limit the self-contained readability of this work, we hope it clarifies the ideas underlying QSVT by cleaning up technical details and providing potentially simpler, user-friendly alternatives to prior developments.\footnote{As our goal is expository in nature, we would like to encourage the reader to contact us with feedback on any portions of our work which could benefit from further clarification.} \paragraph{Notation.} Matrices have bolded variable names, and $\mathbf{I}$ is the identity matrix with dimension specified by context. For a matrix $\mathbf{U}$, $\mathbf{U}^\dagger$ denotes its conjugate transpose. A square matrix $\mathbf{U}$ is \emph{unitary} if $\mathbf{U}^\dagger \mathbf{U} = \mathbf{U} \mathbf{U}^\dagger = \mathbf{I}$, and we call it ``partitioned'' if it has a block matrix structure with two row blocks and two column blocks (which is clear from context). When writing a matrix as blocks, an empty block denotes a zero block, and $\cdot$ denotes that the block contains arbitrary entries. 
For brevity, we omit the dimensions of blocks when these sizes are not important to the computation: all block matrices occurring in products are compatible in the standard way. The ``computational basis'' is the standard basis in $\mathbb{C}^d$. We denote $[d] \coloneqq \{1,2,\ldots,d\}$, ${\imath} \coloneqq \sqrt{-1}$, and $\boldsymbol{\sigma}_z$ is the Pauli matrix $(\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix})$. The maximum absolute value of a real function $f$ on $[a, b]$ is denoted $\norm{f}_{[a, b]}$. We write $a \eqsim b$ to mean there are universal constants $0 < C_1 \le C_2$ with $C_1 a \le b \le C_2 a$. \section*{Acknowledgements} ET thanks t.f.\ for providing useful references. ET is supported by the NSF GRFP (DGE-1762114). KT thanks Jonathan Kelner for encouraging him to learn about the CS decomposition, Christopher Musco for encouraging him to read \cite{trefethen13}, and Yang P.\ Liu for a helpful conversation about Jordan's lemma many moons ago. \bibliographystyle{alphaurl} \section{More applications of the CS decomposition}\label{app:csd_interpret} To shed light on the CS decomposition as capturing interactions between subspaces, in this section we derive two further applications beyond QSVT. We note that we do not claim the intuition provided in this section is very helpful for understanding the particular application of QSVT. However, we hope the reader is sufficiently convinced of the virtue of the CS decomposition as a technical tool, and this section serves to provide additional background on this tool. \subsection{Principal angles} In this section we consider two rank-$a$ subspaces $\mathcal{X} = \textup{Image}(\boldsymbol{\Pi}_x) \subset \mathbb{C}^d$ and $\mathcal{Y} = \textup{Image}(\boldsymbol{\Pi}_y) \subset \mathbb{C}^d$, for some $a \in [d]$. For $k \in [a]$, we define the $k^{\text{th}}$ \emph{principal angle} between $\mathcal{X}$ and $\mathcal{Y}$ recursively via \begin{equation}\label{eq:svd_big}\cos(\theta_k) \coloneqq \max_{\substack{x \in \mathcal{X} \\ \norm{x}_2 = 1}} \max_{\substack{y \in \mathcal{Y} \\ \norm{y}_2 = 1}} \inprod{x}{y} \text{ subject to } x \perp x_i, y \perp y_i \text{ for all } i < k, \end{equation} where $x_k$, $y_k$ are the \emph{principal vectors} realizing the maximum above. In other words, the first principal angle $\theta_1$ is the smallest angle between a vector in $\mathcal{X}$ and a vector in $\mathcal{Y}$; $\theta_2$ is the smallest angle between vectors in the subspaces of $\mathcal{X}$ and $\mathcal{Y}$ orthogonal to the vectors achieving $\theta_1$, and so on. This definition only depends on the subspaces, and so is agnostic to the choice of basis for $\mathcal{X}$ and $\mathcal{Y}$. \begin{lemma} \label{lem:svs-are-angles} Let $\mathbf{X}, \mathbf{Y} \in \mathbb{C}^{d \times a}$ be such that their columns are orthonormal bases for $\mathcal{X}$ and $\mathcal{Y}$, respectively. Then, the values $\{\cos(\theta_k)\}_{k \in [a]}$ are the singular values of $\mathbf{X}^\dagger \mathbf{Y}$. Further, letting $\mathbf{V} \mathbf{C} \mathbf{W}^\dagger$ be the SVD of $\mathbf{X}^\dagger \mathbf{Y}$, the principal vectors between $\mathcal{X}, \mathcal{Y}$ are columns of $\mathbf{X}\mathbf{V}$, $\mathbf{Y}\mathbf{W}$. \end{lemma} \begin{proof} This result follows from the variational characterization of singular values and vectors. Let $\mathbf{C} = \diag{c}$. Fix $k \in [a]$ and suppose inductively the conclusion holds for all $i < k$.
Recall that the SVD is recursively defined by \begin{equation}\label{eq:svd_small}c_k = \max_{\norm{v}_2 = \norm{w}_2 = 1} v^\dagger \mathbf{X}^\dagger \mathbf{Y} w, \text{ subject to } v \perp v_i,\; w \perp w_i \text{ for all } i < k. \end{equation} Inductively assume $x_i = \mathbf{X} v_i$ for all $i < k$. Notice that every unit vector in $\mathcal{X}$ can be written as $\mathbf{X} v$ for some unit $v \in \mathbb{C}^a$, and since $\mathbf{X}^\dagger \mathbf{X} = \mathbf{I}$, we have $\mathbf{X} v \perp \mathbf{X} v_i$ iff $v \perp v_i$. By reasoning similarly for $\mathcal{Y}$, we conclude the optimization problems \eqref{eq:svd_big} and \eqref{eq:svd_small} are the same under the transformation $x \gets \mathbf{X} v$ and $y \gets \mathbf{Y} w$. Hence setting $x_k \gets \mathbf{X} v_k$ and $y_k \gets \mathbf{Y} w_k$ we may continue inducting. \end{proof} Now we explain the connection between the above digression and the CS decomposition. \cref{lem:svs-are-angles} shows that we can find the principal angles between $\mathcal{X}$ and $\mathcal{Y}$ by taking orthogonal bases, $\mathbf{X}$ and $\mathbf{Y}$, and computing the SVD of $\mathbf{X}^\dagger \mathbf{Y}$. We could further ask: what are the principal angles of the orthogonal subspaces, $\mathcal{X}_\perp = \{u \mid \langle u, x\rangle = 0 \text{ for all } x \in \mathcal{X}\}$ and $\mathcal{Y}_\perp = \{u \mid \langle u, y\rangle = 0 \text{ for all } y \in \mathcal{Y}\}$? We can apply the same lemma on bases for the subspaces, $\mathbf{X}^\perp$ and $\mathbf{Y}^{\perp}$, to compute them; however, we can say more. First, let the following matrices be unitary completions of $\mathbf{X}$, $\mathbf{Y}$: \begin{equation}\label{eq:complete_unitary} \begin{pmatrix} \mathbf{X} & \mathbf{X}_\perp \end{pmatrix},\; \begin{pmatrix} \mathbf{Y} & \mathbf{Y}_\perp \end{pmatrix}. \end{equation} We next take the CS decomposition (Theorem~\ref{thm:cs}) of the product (which is also unitary), \[\mathbf{U} = \begin{pmatrix} \mathbf{X} & \mathbf{X}_\perp \end{pmatrix}^\dagger \begin{pmatrix} \mathbf{Y} & \mathbf{Y}_\perp \end{pmatrix} = \begin{pmatrix} \mathbf{X}^\dagger \mathbf{Y} & \mathbf{X}^\dagger \mathbf{Y}_\perp \\ \mathbf{X}_\perp^\dagger \mathbf{Y} & \mathbf{X}_\perp^\dagger \mathbf{Y}_\perp\end{pmatrix}.\] This gives us $\mathbf{V}_1, \mathbf{V}_2, \mathbf{W}_1, \mathbf{W}_2$ such that \[\begin{pmatrix}\mathbf{V}_1^\dagger \mathbf{X}^\dagger \mathbf{Y} \mathbf{W}_1 & \mathbf{V}_1^\dagger \mathbf{X}^\dagger \mathbf{Y}_\perp \mathbf{W}_2 \\ \mathbf{V}_2^\dagger \mathbf{X}_\perp^\dagger \mathbf{Y} \mathbf{W}_1 & \mathbf{V}_2^\dagger \mathbf{X}_\perp^\dagger \mathbf{Y}_\perp \mathbf{W}_2 \end{pmatrix} = \begin{pmatrix} \mathbf{D}_{11} & \mathbf{D}_{12} \\ \mathbf{D}_{21} & \mathbf{D}_{22} \end{pmatrix},\] of the form in Theorem~\ref{thm:cs}. This gives simultaneous SVDs for each block, and thus gives the principal angles and vectors for all combinations of $\mathcal{X}, \mathcal{X}_\perp, \mathcal{Y}$, and $\mathcal{Y}_\perp$. In particular, we can see that the principal angles of $(\mathcal{X}, \mathcal{Y})$ and $(\mathcal{X}_\perp, \mathcal{Y}_\perp)$ are related: up to padding by 0's and 1's, they are identical! 
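To make the observation above concrete, the following short numerical sketch (ours, in Python with NumPy, over the reals for simplicity; it is only an illustration and not part of any referenced work) computes the principal angles of two random $3$-dimensional subspaces of $\mathbb{R}^8$ via Lemma~\ref{lem:svs-are-angles}, and checks that the cosines for $(\mathcal{X}_\perp, \mathcal{Y}_\perp)$ agree with those for $(\mathcal{X}, \mathcal{Y})$ up to padding by $0$'s and $1$'s.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, a = 8, 3

def orthonormal_basis(d, a):
    # Columns form an orthonormal basis of a random a-dimensional subspace of R^d.
    q, _ = np.linalg.qr(rng.standard_normal((d, a)))
    return q

X, Y = orthonormal_basis(d, a), orthonormal_basis(d, a)

# Lemma: cosines of the principal angles = singular values of X^dagger Y.
cos_xy = np.linalg.svd(X.T @ Y, compute_uv=False)

# Orthonormal bases of the orthogonal complements, from full QR completions.
X_perp = np.linalg.qr(X, mode="complete")[0][:, a:]
Y_perp = np.linalg.qr(Y, mode="complete")[0][:, a:]
cos_perp = np.linalg.svd(X_perp.T @ Y_perp, compute_uv=False)

# Up to padding by 0's and 1's, the two lists of cosines coincide.
print(np.round(cos_xy, 8))
print(np.round(cos_perp, 8))
\end{verbatim}
For generic subspaces, the second printed list equals the first padded with $d - 2a$ ones, matching the identification of the blocks $\mathbf{D}_{11}$ and $\mathbf{D}_{22}$ above.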
Further, we can take $\mathbf{X} \gets \mathbf{X} \mathbf{V}_1$, $\mathbf{Y} \gets \mathbf{Y} \mathbf{W}_1$, etc.\ without affecting the induced subspaces (since e.g.\ $\mathbf{X} \mathbf{V}_1 \mathbf{V}_1^\dagger \mathbf{X}^\dagger = \mathbf{X} \mathbf{X}^\dagger$), but such that after this transformation we simply have \begin{equation}\label{eq:cs_canonical}\begin{pmatrix}\mathbf{X}^\dagger \mathbf{Y} & \mathbf{X}^\dagger \mathbf{Y}_\perp \\ \mathbf{X}_\perp^\dagger \mathbf{Y} & \mathbf{X}_\perp^\dagger \mathbf{Y}_\perp \end{pmatrix} = \begin{pmatrix} \mathbf{D}_{11} & \mathbf{D}_{12} \\ \mathbf{D}_{21} & \mathbf{D}_{22} \end{pmatrix}.\end{equation} That is, we can choose canonical basis representations of $\mathcal{X}$, $\mathcal{Y}$, and their complement subspaces consistently, such that every pairing directly induces the ``principal angles and vectors'' defined above. We remark that this argument extends just fine to $\mathcal{X}$, $\mathcal{Y}$ of different dimensions. \subsection{Jordan's lemma} Next we derive Jordan's lemma \cite{jordan75}, a useful way of decomposing $\mathbb{C}^d$ into subspaces (induced by a unitary matrix) which are jointly compatible with two projection matrices in a certain sense. We note that Jordan's lemma has seen varied implicit or explicit uses in the quantum computing literature (including a suggestion by \cite{gslw19}), and refer to \cite{reg06} for an account of this. \begin{lemma}\label{lem:jordan} Let $\boldsymbol{\Pi}_x, \boldsymbol{\Pi}_y \in \mathbb{C}^{d \times d}$ be projection matrices. There exists a unitary matrix $\mathbf{U} \in \mathbb{C}^{d \times d}$ with columns $\{u_i\}_{i \in [d]}$, and a partition of $[d]$ into $\mathcal{S} \coloneqq \{S_j\}_{j \in [k]}$ such that $|S_j| \in \{1, 2\}$ for all $j \in [k]$, and $\mathbf{U}^\dagger \boldsymbol{\Pi}_x \mathbf{U}$, $\mathbf{U}^\dagger \boldsymbol{\Pi}_y \mathbf{U}$ are block-diagonal with blocks indexed by $\{S_j\}_{j \in [k]}$; each $2 \times 2$ block is trace-$1$. Moreover for each $S_j = \{i, i'\}$ where $|S_j| = 2$, we have $\boldsymbol{\Pi}_x u_i \parallel \boldsymbol{\Pi}_x u_{i'}$ and $\boldsymbol{\Pi}_y u_i \parallel \boldsymbol{\Pi}_y u_{i'}$. \end{lemma} In other words, there is a choice of subspaces given by $\mathbf{U}$ (whose columns are partitioned by $\mathcal{S}$) such that for $i, i' \in [d]$ with $i \in S_j$ and $i' \in S_{j'}$, the entries $u_i^\dagger \boldsymbol{\Pi}_x u_{i'}$ and $u_i^\dagger \boldsymbol{\Pi}_y u_{i'}$ can be nonzero only if $j = j'$. Moreover, the second part states that each of the $2 \times 2$ blocks in $\mathbf{U}^\dagger \boldsymbol{\Pi}_x \mathbf{U}$ and $\mathbf{U}^\dagger \boldsymbol{\Pi}_y \mathbf{U}$ is in fact rank-$1$ and trace-$1$. \begin{proof}[Proof of Lemma~\ref{lem:jordan}] We first prove this in the special case when $\boldsymbol{\Pi}_x$ and $\boldsymbol{\Pi}_y$ are dimension-$\frac d 2$ projectors with ``no intersection,'' and briefly discuss how to extend this to the general case. \\ \textit{Special case.} Suppose $\boldsymbol{\Pi}_x$ and $\boldsymbol{\Pi}_y$ are dimension-$\frac d 2$ projectors, and let us further make the following restriction.
Let $\boldsymbol{\Pi}_x = \mathbf{X}\mx^\dagger$ and $\boldsymbol{\Pi}_y = \mathbf{Y}\my^\dagger$ where $\mathbf{X}$, $\mathbf{Y}$ and their ``completions'' $\mathbf{X}_\perp$, $\mathbf{Y}_\perp$ (in the sense that the matrices \eqref{eq:complete_unitary} are unitary) are chosen such that \eqref{eq:cs_canonical} holds, as guaranteed by Theorem~\ref{thm:cs}; we make the restriction that for $\mathbf{C} = \mathbf{X}^\dagger \mathbf{Y}$, all of the diagonal entries of $\mathbf{C}$ are in $(0, 1)$. The previous section shows this means $\textup{Span}(\mathbf{X}) \cap \textup{Span}(\mathbf{Y}) = \emptyset$. We choose the basis inducing $\mathbf{U}$ as follows. Let the columns of $\mathbf{X}$ be $\{x_j\}_{j \in [\frac d 2]} \subset \mathbb{C}^d$ and the columns of $\mathbf{Y}$ be $\{y_j\}_{j \in [\frac d 2]} \subset \mathbb{C}^d$. For $j \in [\frac d 2]$, we let $\{u_{2j - 1}, u_{2j}\} \subset \mathbb{C}^d$ be an arbitrary basis of $\textup{Span}\{x_j, y_j\}$. We claim such $\mathbf{U}$ meets the requirements, where each $S_j = \{2j - 1, 2j\}$ in our partition. For all $j \in [\frac d 2]$, since $\mathbf{X}^\dagger \mathbf{Y} = \mathbf{C}$, \[\boldsymbol{\Pi}_x y_j = \mathbf{X}\mx^\dagger y_j = \mathbf{X} \Par{\mathbf{C} e_j} = c_jx_j. \] So, $\boldsymbol{\Pi}_x$ maps $\textup{Span}\{x_j, y_j\}$ to $\textup{Span}\{x_j\}$, and similarly $\boldsymbol{\Pi}_y$ maps $\textup{Span}\{x_j, y_j\}$ to $\textup{Span}\{y_j\}$. This proves the second part of the lemma, namely that $\boldsymbol{\Pi}_x$ and $\boldsymbol{\Pi}_y$ act as rank-$1$ projectors on each block of the partition. For the first part (the block-diagonal structure), it suffices to show that for all $j \neq j' \in [\frac d 2]$, $x_j \perp \textup{Span}\{x_{j'}, y_{j'}\}$, since we already argued $\boldsymbol{\Pi}_x$ maps $\textup{Span}\{x_j, y_j\}$ to $\textup{Span}\{x_j\}$. By orthonormality of $\mathbf{X}$, $x_j \perp x_{j'}$, and since we used the canonical choice where $\mathbf{X}^\dagger \mathbf{Y} = \mathbf{C}$, indeed $x_j \perp y_{j'}$ as well. To see that each block has trace $1$, write $x_j = \alpha_j u_{2j - 1} + \beta_j u_{2j}$. We have that $\mathbf{X}\mx^\dagger u_{2j - 1} = \alpha_j x_j$ and $\mathbf{X}\mx^\dagger u_{2j} = \beta_j x_j$; to see this, we already argued that $u_{2j - 1}$, $u_{2j}$ are orthogonal to all $x_i$ for $i \neq j$, since $x_j \perp x_i$ and $y_j \perp x_i$. Hence, the $2 \times 2$ block of $\boldsymbol{\Pi}_x$ indexed by $S_j$ is the outer product of $\begin{pmatrix} \alpha_j & \beta_j \end{pmatrix}$ which clearly has trace $1$; a similar argument applies to $\boldsymbol{\Pi}_y$.\\ \textit{General case.} More generally, we can ``pull out'' vectors corresponding to the $\mathbf{I}$ and $\bm{0}$ blocks of the decomposition in Theorem~\ref{thm:cs}, when the dimensions are unequal. Concretely, again let $\boldsymbol{\Pi}_x = \mathbf{X} \mathbf{X}^\dagger$ and $\boldsymbol{\Pi}_y = \mathbf{Y}\my^\dagger$, such that $\mathbf{X}^\dagger \mathbf{Y} = \mathbf{C}$ and $\mathbf{C}$ has the form guaranteed by Theorem~\ref{thm:cs}. Further, assume $d_1 = \text{dim}(\textup{Span}(\mathbf{X})) \ge \text{dim}(\textup{Span}(\mathbf{Y})) = d_2$. Whenever there is a $1$ entry in $\mathbf{C}$, this corresponds to a subset of size $1$ in the partition with the column of $\mathbf{U}$ set to the corresponding vector in $\textup{Span}(\mathbf{X}) \cap \textup{Span}(\mathbf{Y})$. 
Whenever there is a $0$ entry we simply pull out the corresponding vector in $\textup{Span}(\mathbf{X}) \setminus \textup{Span}(\mathbf{Y})$ into its own block in the partition. Finally, when $\textup{Span}(\mathbf{X}) + \textup{Span}(\mathbf{Y}) \neq \mathbb{C}^d$ we find any orthonormal basis of $(\textup{Span}(\mathbf{X}) + \textup{Span}(\mathbf{Y}))^\perp$ and add its elements as columns of $\mathbf{U}$. It is an exercise to check the overall dimension of $\mathbf{U}$ after this process is $d$. \end{proof} \section{Deferred proofs from Section~\ref{ssec:bounded_proof}}\label{app:more_csp} \restatetinyellipse* \begin{proof} Recall from Theorem~\ref{thm:trefethen} that we can parameterize the boundary of $E_\rho$ as $\frac{1}{2}(\rho + \rho^{-1}) \cos \theta + \frac{{\imath}}{2}(\rho - \rho^{-1})\sin\theta$, for $\theta \in [0, 2\pi]$. Hence, to prove the stated inclusion it suffices to prove that $\frac{1}{2}(\sigma + \sigma^{-1}) \ge (1+\delta)\frac{1}{2}(\rho + \rho^{-1})$ and $\frac{1}{2}(\sigma - \sigma^{-1}) \ge (1 + \delta)\frac{1}{2}(\rho - \rho^{-1})$. By \cref{lem:bern-bounds}, \begin{align*} \frac{1}{2}(\sigma + \sigma^{-1}) - (1+\delta)\frac{1}{2}(\rho + \rho^{-1}) &= 1 + \frac{9(\alpha^2 + 2\alpha\sqrt{\delta} + \delta)}{2(1 + 3\alpha + 3\sqrt{\delta})} - (1+\delta)\Big(1 + \frac{\alpha^2}{2(1+\alpha)}\Big) \\ &\geq 1 + \frac{2(\alpha^2 + 2\alpha\sqrt{\delta} + \delta)}{2(1 + \alpha)} - (1+\delta)\Big(1 + \frac{\alpha^2}{2(1+\alpha)}\Big) \\ &= \frac{2(\alpha^2 + 2\alpha\sqrt{\delta} + \delta) - (1+\delta)\alpha^2 - 2\delta(1+\alpha)}{2(1 + \alpha)} \geq 0. \end{align*} Further, since $\sigma \geq (1+\delta)(1+\alpha)$ and $\sigma - \sigma^{-1}$ increases in $\sigma$, \begin{align*} \frac{1}{2}(\sigma - \sigma^{-1}) - (1+\delta)\frac{1}{2}(\rho - \rho^{-1}) \geq \frac{1}{2}((1+\delta)\rho - ((1+\delta)\rho)^{-1}) - (1+\delta)\frac{1}{2}(\rho - \rho^{-1}) \geq 0. \end{align*} \end{proof} \restatechebzbound* \begin{proof} The only points not lying on the boundary of a Bernstein ellipse $E_\rho$ with $\rho > 1$ are those in the interval $[-1, 1]$, where the Chebyshev polynomials are at most $1$ in absolute value. Otherwise, suppose $z \coloneqq x + {\imath} y$ lies on the boundary of the Bernstein ellipse $E_\rho$, for some $\rho > 1$. This implies that for some $\theta \in [-\pi, \pi]$, \[\frac{1}{2} (\rho + \rho^{-1}) \cos \theta = x,\; \frac{1}{2} (\rho - \rho^{-1})\sin \theta = y,\] which follows from the parameterization of the Bernstein ellipse in Theorem~\ref{thm:trefethen}. Let $s = \cos \theta$ and $t = \frac{1}{2}(\rho + \rho^{-1})$. Noting that $\sqrt{t^2 - 1} = \frac{1}{2}(\rho - \rho^{-1})$, we then have the system of equations \[ts = x,\; \sqrt{1 - s^2}\sqrt{t^2 - 1} = y \implies t^4 - (1 + x^2 + y^2)t^2 + x^2 = 0.\] Hence, solving a quadratic equation in $t^2 = \frac 1 4(\rho^2 + \rho^{-2}) + \frac{1}{2}$ yields \[\frac{1}{2}(\rho^2 + \rho^{-2}) = x^2 + y^2 + \sqrt{(1 + x^2 + y^2)^2 - 4x^2} =: D, \] and then $\rho = (D + \sqrt{D^2 - 1})^{\frac{1}{2}}$. By definition, the Chebyshev polynomial satisfies \begin{equation}\label{eq:rhobound_suffices}\abs{T_n(z)} \le \frac{1}{2} (\rho^n + \rho^{-n}) \le \rho^n,\end{equation} so it suffices to establish bounds on $\rho$. Next, assume without loss of generality that $y \ge 0$, as Chebyshev polynomials are either odd or even and the stated conclusions are unsigned. We bound \begin{align*} D &= x^2 + y^2 + \sqrt{1 + x^4 + y^4 - 2x^2 + 2y^2 + 2x^2y^2} \\ &= x^2 + y^2 + \sqrt{(1 - x^2)^2 + 2y^2(1 + x^2) + y^4} \le x^2 + |1 - x^2| + O(y\sqrt{1 + x^2}).
\end{align*} When $|x| \le 1$, then $D = 1 + O(y)$ and $\rho = 1 + O(\sqrt{y})$, establishing the conclusion via \eqref{eq:rhobound_suffices}. Otherwise, when $|x| > 1$, we have $D = 2x^2 - 1 + O(|x|y)$, and then \begin{align*} \rho &= \sqrt{2x^2 - 1 + \sqrt{4x^4 - 4x^2 + O(|x^3| y)}} \\ &= \sqrt{\Par{|x| + \sqrt{x^2 - 1}}^2 + O\Par{\sqrt{|x^3|y}}} = |x| + \sqrt{x^2 - 1} + O(\sqrt{|xy|}), \end{align*} again proving the desired claim via \eqref{eq:rhobound_suffices}. \end{proof} \restaterectbounds* \begin{proof} To see the first claim, let $z \in \mathbb{R}$. As previously discussed we have $\erf(s(\mu + z)), \erf(s(\mu - z)) \le 1$, giving the upper bound. For the lower bound, since $\erf$ is odd and increasing, $z \ge 0$ implies \[\erf\Par{s\Par{\mu + z}} + \erf\Par{s\Par{\mu - z}} = \erf\Par{s\Par{\mu + z}} - \erf\Par{s\Par{-\mu + z}} \ge 0,\] and a similar argument handles the case $z \le 0$. Next, for the second claim we first observe, for $z = x + {\imath} y$ with $x, y \in \mathbb{R}$, \begin{equation}\label{eq:erf_close} \begin{aligned} |\erf(z) - \erf(x)| &= \frac{2}{\sqrt{\pi}} \Abs{\int_{t = x}^{t = x + {\imath} y} e^{-t^2} \mathrm{d} t} \\ &= \frac{2e^{-x^2}}{\sqrt{\pi}} \Abs{\int_0^y e^{-2{\imath} xt + t^2} \mathrm{d} t} \\ &\le \frac{2e^{-x^2}}{\sqrt{\pi}}\int_0^{|y|} e^{t^2} \mathrm{d} t = e^{-x^2} |\erf({\imath} y)|. \end{aligned} \end{equation} The last line used the triangle inequality. Finally, \begin{align*} \left|r(z) - r(x)\right|&\le \frac{1}{2} \Abs{\erf(s(\mu + z)) - \erf(s(\mu + x))} + \frac{1}{2} \Abs{\erf(s(\mu - z)) - \erf(s(\mu - x))} \\ &\le \frac{1}{2}\Par{e^{-s^2(\mu + x)^2} |\erf({\imath} sy)| + e^{-s^2(\mu - x)^2} |\erf(-{\imath} sy)|}, \end{align*} where we used \eqref{eq:erf_close}, and we conclude by noting $\mu + x, \mu - x \ge \mu - |x|$. \end{proof}
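Putting the pieces together, the construction used to prove \cref{thm:main_bounded} (a Chebyshev truncation, multiplication by the threshold $r$ of Lemma~\ref{lem:rect-bounds}, and a second Chebyshev truncation on the larger window) can also be checked numerically. The following sketch (ours, in Python with NumPy; the test function and all parameter values are ad hoc illustrative choices rather than the ones prescribed by the proof, and Chebyshev interpolation at Chebyshev nodes is used as a stand-in for exact Chebyshev-series truncation) reproduces the qualitative behavior of the theorem: small error on $[-(1-\delta), 1-\delta]$, boundedness on $[-1, 1]$, and near-vanishing values on $[1+\delta, b]$.
\begin{verbatim}
import numpy as np
from math import erf

cheb = np.polynomial.chebyshev
f = lambda x: 1.0 / (2.0 - x)          # analytic on [-1, 1], bounded on a Bernstein ellipse
b, delta = 3.0, 0.1                    # outer window and shrinkage parameter (ad hoc)
mu, s = 1.0 - delta / 2.0, 70.0        # threshold center and sharpness (ad hoc)
n, m = 60, 2000                        # degrees of the two truncations (ad hoc)

verf = np.vectorize(erf)
r = lambda x: 0.5 * (verf(s * (mu + x)) + verf(s * (mu - x)))   # ~1 inside, ~0 outside

f_n = cheb.Chebyshev(cheb.chebinterpolate(f, n))                # first truncation on [-1, 1]
p_tilde = lambda x: r(x) * f_n(x)                               # thresholded truncation
q_hat = cheb.Chebyshev(cheb.chebinterpolate(lambda y: p_tilde(b * y), m))
q = lambda x: q_hat(x / b)                                      # second truncation on [-b, b]

inner = np.linspace(-(1 - delta), 1 - delta, 2001)
window = np.linspace(-1.0, 1.0, 2001)
outer = np.linspace(1.0 + delta, b, 2001)
print("max |f - q| on [-(1-delta), 1-delta]:", np.abs(f(inner) - q(inner)).max())
print("max |q| on [-1, 1]:", np.abs(q(window)).max())
print("max |q| on [1+delta, b]:", np.abs(q(outer)).max())
\end{verbatim}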
{ "arxiv_id": "2302.14311", "language": "en", "timestamp": "2023-03-01T02:09:15", "url": "https://arxiv.org/abs/2302.14311", "yymm": "2302" }
\section{Introduction} \label{sec:intro} \begin{figure}[h] \centering \includegraphics[width=0.95\columnwidth]{figures/time_mem.pdf} \vspace{-3pt} \caption{ The training time and memory cost comparison between the proposed SLTT-1 method and the BPTT with SG method on ImageNet. SLTT-1 achieves accuracy similar to BPTT, while offering better training efficiency than BPTT both theoretically and experimentally. Please refer to \cref{sec:SLTT,sec:experiments} for details. } \label{fig:time_mem} \end{figure} Regarded as the third generation of neural network models \cite{maass1997networks}, Spiking Neural Networks (SNNs) have recently attracted wide attention. SNNs imitate the neurodynamics of power-efficient biological networks, where neurons communicate through spike trains (\ie, time series of spikes). A spiking neuron integrates input spike trains into its membrane potential. Once the membrane potential exceeds a threshold, the neuron fires a spike and resets its potential \cite{gerstner2014neuronal}. The spiking neuron is active only when it experiences spikes, thus enabling event-based computation. This characteristic makes SNNs energy-efficient when implemented on neuromorphic chips~\cite{merolla2014million,davies2018loihi,pei2019towards}. By comparison, the power consumption of deep Artificial Neural Networks (ANNs) is substantial. The computation of SNNs under discrete simulation shares a functional form similar to that of recurrent neural networks (RNNs) \cite{neftci2019surrogate}. The unique component of SNNs is the non-differentiable threshold-triggered spike generation function. As a result, the non-differentiability hinders the effective adoption of gradient-based optimization methods that can train RNNs successfully. Therefore, SNN training is still a challenging task. Among the existing SNN training methods, backpropagation through time (BPTT) with surrogate gradient (SG) \cite{cramer2022surrogate,sengupta2019going} has recently achieved high performance on complicated datasets in a small number of time steps (\ie, short length of spike trains). The BPTT with SG method defines well-behaved surrogate gradients to approximate the derivative of the spike generation function. Thus the SNNs can be trained through the gradient-based BPTT framework \cite{werbos1990backpropagation}, just like RNNs. With such a framework, gradients are backpropagated through both the layer-by-layer spatial domain and the temporal domain. Accordingly, BPTT with SG suffers from considerable memory cost and training time that are proportional to the network size and the number of time steps. The training cost is especially significant for large-scale datasets, such as ImageNet. In this paper, we develop the Spatial Learning Through Time (SLTT) method that can achieve high performance while significantly reducing the training time and memory cost compared with the BPTT with SG method. We first decompose the gradients calculated by BPTT into spatial and temporal components. With the decomposition, the temporal dependency in error backpropagation is made explicit. We then analyze the contribution of temporal information to the final calculated gradients, and propose the SLTT method to delete the unimportant routes in the computational graph for backpropagation. In this way, the number of scalar multiplications is reduced; thus, the training time is reduced.
SLTT further enables online training by calculating gradient instantaneously at each time step, without the requirement of storing information of other time steps. Then the memory occupation is independent of the number of total time steps, avoiding the significant training memory costs of BPTT. Due to the instantaneous gradient calculation, we also propose the SLTT-K method that conducts backpropagation only at $K$ time steps. SLTT-K can further reduce the time complexity without performance loss. With the proposed techniques, we can obtain high-performance SNNs with superior training efficiency. The wall-clock training time and memory costs of SLTT-1 and BPTT on ImageNet under the same experimental settings are shown in \cref{fig:time_mem}. Formally, our contributions include: \begin{itemize} \item[1.] Based on our analysis of error backpropagation in SNNs, we propose the Spatial Learning Through Time (SLTT) method to achieve better time and memory efficiency than the commonly used BPTT with SG method. Compared with the BPTT with SG method, the number of scalar multiplications is reduced, and the training memory is constant with the number of time steps, rather than grows linearly with it. \item[2.] Benefiting from our online training framework, we propose the SLTT-K method that further reduces the time complexity of SLTT. The required number of scalar multiplication operations is reduced from $\Omega(T)$\footnote{$f(x)=\Omega(g(x))$ means that there exist $c>0$ and $n>0$, such that $0\le cg(x)\le f(x)$ for all $x\ge n$.} to $\Omega(K)$, where $T$ is the number of total time steps, and $K<T$ is the parameter indicating the number of time steps to conduct backpropagation. \item[3.] Our models achieve competitive SNN performance with superior training efficiency on CIFAR-10, CIFAR-100, ImageNet, DVS-Gesture, and DVS-CIFAR10 under different network settings or large-scale network structures. On ImageNet, our method achieves state-of-the-art accuracy while the memory cost and training time are reduced by more than 70\% and 50\%, respectively, compared with BPTT. \end{itemize} \section{Related Work} \label{sec:related} \paragraph{The BPTT Framework for Training SNNs.} A natural methodology for training SNNs is to adopt the gradient-descent-based BPTT framework, while assigning surrogate gradients (SG) to the non-differentiable spike generation functions to enable meaningful gradient calculation \cite{zenke2021remarkable,neftci2019surrogate,wu2018spatio,wu2019direct,shrestha2018slayer,huh2017gradient}. Under the BPTT with SG framework, many effective techniques have been proposed to improve the performance, such as threshold-dependent batch normalization \cite{zheng2020going}, carefully designed surrogate functions \cite{li2021differentiable} or loss functions \cite{guo2022recdis,deng2022temporal}, SNN-specific network structures \cite{fang2021sew}, and trainable parameters of neuron models \cite{fang2021incorporating}. Many works conduct multi-stage training, typically including an ANN pre-training process, to reduce the latency (\ie, the number of time steps) for the energy efficiency issue, while maintaining competitive performance \cite{rathi2019enabling,rathi2020diet,chowdhurytowards,chowdhury2021one}. The BPTT with SG method has achieved high performance with low latency on both static \cite{fang2021sew,guo2022reducing} and neuromorphic \cite{li2022neuromorphic,deng2022temporal} datasets. 
However, those approaches need to backpropagate error signals through both temporal and spatial domains, thus suffering from high computational costs during training \cite{deng2020rethinking}. In this work, we reduce the memory and time complexity of the BPTT with SG framework via gradient approximation and instantaneous gradient calculation, while maintaining the same level of performance. \paragraph{Other SNN Training Methods.} The ANN-to-SNN conversion method \cite{yan2021near,deng2021optimal,sengupta2019going,rueckauer2017conversion,han2020rmp,han2020deep,ding2021optimal} has recently yielded top performance, especially on ImageNet \cite{li2021free,meng2022ann,bu2022optimal}. This method builds a connection between the firing rates of SNNs and some corresponding ANN outputs. With this connection, the parameters of an SNN are directly determined from the associated ANN. Despite the good performance, the required latency is much higher than that of the BPTT with SG method. This fact hurts the energy efficiency of SNN inference \cite{davidson2021comparison}. Furthermore, the conversion method is not suitable for neuromorphic data. Some gradient-based direct training methods establish an equivalence between spike representations (\eg, firing rates or first spike times) of SNNs and some differentiable mappings or fixed-point equations \cite{mostafa2017supervised,zhou2019temporal,xiao2021ide,meng2022training,thiele2019spikegrad,wu2021training,wu2021tandem,xiao2022spide,yang2022training}. These spike-representation-based methods then train SNNs with gradients calculated from the corresponding mappings or fixed-point equations. Such methods have recently achieved competitive performance, but, like the conversion-based methods, still suffer from relatively high latency. To achieve low latency, our work builds mainly on the BPTT with SG method and focuses on its training cost issue. \vspace{-3pt} \paragraph{Efficient Training for SNNs.} Several RNN training methods pursue online learning and memory occupation that is constant with respect to the time horizon, such as real time recurrent learning \cite{williams1989learning} and forward propagation through time \cite{kag2021training}. Inspired by them, some SNN training methods \cite{zenke2018superspike,zenke2021brain,bellec2020solution,bohnstingl2022online,yin2021accurate} apply similar ideas to achieve memory-efficient online learning. However, such SNN methods cannot scale to large-scale tasks due to limitations such as the use of feedback alignment \cite{nokland2016direct}, simple network structures, and memory costs that, although constant over time, remain large. \cite{kaiser2020synaptic} ignores temporal dependencies of information propagation to enable local training with no memory overhead for computing gradients. It approximates the gradient calculation in a way similar to ours, but does not verify the reasonableness of the approximation, and cannot achieve accuracy comparable to ours, even for simple tasks. \cite{perez2021sparse} presents a sparse SNN backpropagation algorithm in which gradients are only backpropagated through ``active neurons'', which account for a small fraction of all neurons, at each time step. However, \cite{perez2021sparse} does not consider large-scale tasks, and the memory grows linearly with the number of time steps. Recently, some methods \cite{yang2022training,xiao2022online} have achieved satisfactory performance on large-scale datasets with memory occupation independent of the number of time steps.
Still, they either rely on pre-trained ANNs and cannot conduct direct training \cite{yang2022training}, or do not consider reducing time complexity and require more memory than our work due to tracking presynaptic activities \cite{xiao2022online}. Our work can achieve state-of-the-art (SOTA) performance while maintaining superior time and memory efficiency compared with other methods. \section{Preliminaries} \subsection{The Leaky Integrate and Fire Model} A spiking neuron replicates the behavior of a biological neuron, which integrates input spikes into its membrane potential $u(t)$ and transmits spikes when the potential $u$ reaches a threshold. Such spike transmission is governed by a spiking neuron model. In this paper, we consider a widely adopted neuron model, the leaky integrate and fire (LIF) model \cite{burkitt2006review}, to characterize the dynamics of $u(t)$: \begin{equation} \small \tau \frac{\mathrm{d} u(t)}{\mathrm{d} t}=-(u(t)-u_{rest}) + R\cdot I(t), \ \text{when} \ u(t)<V_{th}, \end{equation} where $\tau$ is the time constant, $R$ is the resistance, $u_{rest}$ is the resting potential, $V_{th}$ is the spike threshold, and $I$ is the input current which depends on received spikes. The current model is given as $ I(t)=\sum_{i}w_i^\prime s_i(t) + b^\prime, $ where $w_i^\prime$ is the weight from neuron-$i$ to the target neuron, $b^\prime$ is a bias term, and $s_i(t)$ is the spike train received from neuron-$i$. $s_i(t)$ is formed as $s_i(t)=\sum_{f}\delta(t-t_{i,f})$, in which $\delta(\cdot)$ is the Dirac delta function and $t_{i,f}$ is the $f$-th fire time of neuron-$i$. Once $u\ge V_{th}$ at time $t_f$, the neuron outputs a spike, and the potential is reset to $u_{rest}$. The output spike train is described as $s_{out}(t)=\sum_{f}\delta(t-t_{f})$. In practice, the discrete computational form of the LIF model is adopted. With $u_{rest}=0$, the discrete LIF model can be described as \begin{equation} \label{eqn:lif} \left\{ \begin{aligned} &u[t]=(1-\frac{1}{\tau})v[t-1] + \sum_{i}w_i s_i[t] + b, \\[-3pt] &s_{out}[t]=H(u[t]-V_{th}), \\ &v[t]=u[t]-V_{th} s_{out}[t], \end{aligned} \right. \end{equation} where $t\in\{1,2,\cdots,T\}$ is the time step index, $H(\cdot)$ is the Heaviside step function, $s_{out}[t], s_{i}[t] \in \{0,1\}$, $v[t]$ is the intermediate value representing the membrane potential before being reset and $v[0]=0$, and $w_i$ and $b$ are reparameterized versions of $w_i^\prime$ and $b^\prime$, respectively, in which $\tau$ and $R$ are absorbed. The discrete step size is $1$, so $\tau>1$ is required. \subsection{Backpropagation Through Time with Surrogate Gradient} Consider the multi-layer feedforward SNNs with the LIF neurons based on \cref{eqn:lif}: \begin{equation} \label{eqn:feedforward} \vspace{-1pt} \mathbf{u}^{l}[t]=(1-\frac{1}{\tau})(\mathbf{u}^{l}[t-1] -V_{t h} \mathbf{s}^{l}[t-1])+ \mathbf{W}^{l} \mathbf{s}^{l-1}[t], \end{equation} where $l=1,2,\cdots,L$ is the layer index, $t=1,2,\cdots,T$, $0<1-\frac{1}{\tau}<1$, $\mathbf{s}^0$ are the input data to the network, $\mathbf{s}^l$ are the output spike trains of the $l^{\text{th}}$ layer, and $\mathbf{W}^{l}$ are the weights to be trained. We ignore the bias term for simplicity. The final output of the network is $\mathbf{o}[t]=\mathbf{W}^o\mathbf{s}^{L}[t]$, where $\mathbf{W}^o$ is the parameter of the classifier. The classification is based on the average output over all time steps, $\frac{1}{T}\sum_{t=1}^{T} \mathbf{o}[t]$.
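For concreteness, the discrete forward computation in \cref{eqn:lif,eqn:feedforward} can be summarized by the following minimal sketch (ours, in Python with NumPy; the array shapes, names, and default constants are illustrative assumptions, not the exact implementation used in the experiments).
\begin{verbatim}
import numpy as np

def snn_forward(weights, W_o, x_seq, tau=2.0, v_th=1.0):
    # weights: list of (d_l, d_{l-1}) matrices W^l; W_o: (num_classes, d_L) readout.
    # x_seq: (T, d_0) input spike trains s^0[t]. Returns the averaged output (1/T) sum_t o[t].
    T = x_seq.shape[0]
    u = [np.zeros(W.shape[0]) for W in weights]        # membrane potentials, stored after reset
    out_sum = np.zeros(W_o.shape[0])
    for t in range(T):
        s = x_seq[t]
        for l, W in enumerate(weights):
            u[l] = (1.0 - 1.0 / tau) * u[l] + W @ s    # leak the (reset) potential and integrate
            s = (u[l] >= v_th).astype(x_seq.dtype)     # spike generation: s^l[t] = H(u^l[t] - V_th)
            u[l] = u[l] - v_th * s                     # soft reset by subtraction
        out_sum += W_o @ s                             # o[t] = W^o s^L[t]
    return out_sum / T
\end{verbatim}
For example, \texttt{snn\_forward([W1, W2], Wo, spikes)} simulates a two-hidden-layer network on a spike sequence \texttt{spikes}.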
The loss function $\mathcal{L}$ is defined on $\{\mathbf{o}[1],\cdots,\mathbf{o}[T]\}$, and is often defined as \cite{zheng2020going,rathi2020diet,xiao2021ide,li2021differentiable} \vspace{-1pt} \begin{equation} \label{eqn:loss_rate} \mathcal{L} = \ell(\frac{1}{T}\sum_{t=1}^{T}\mathbf{o}[t],y), \vspace{-1pt} \end{equation} where $y$ is the label, and $\ell$ can be the cross-entropy function. \begin{figure}[t] \centering \includegraphics[width=0.78\columnwidth]{figures/bptt.pdf} \vspace{-1pt} \caption{Computational graph of multi-layer SNNs. Dashed arrows represent the non-differentiable spike generation functions.} \label{fig:bptt} \end{figure} \begin{figure*}[h] \begin{subfigure}{0.33\linewidth} \includegraphics[width=0.99\columnwidth]{figures/cifar_gradient.pdf} \end{subfigure} \begin{subfigure}{0.33\linewidth} \includegraphics[width=0.99\columnwidth]{figures/dvs_gradient.pdf} \end{subfigure} \begin{subfigure}{0.33\linewidth} \includegraphics[width=0.99\columnwidth]{figures/imagenet_gradient.pdf} \end{subfigure} \caption{The cosine similarity between the gradients calculated by BPTT and the ``spatial gradients''. For the CIFAR-10, DVS-CIFAR10, and ImageNet datasets, the network architectures of ResNet-18, VGG-11, and ResNet-34 are adopted, respectively. Other settings and hyperparameters for the experiments are described in the Supplementary Materials. We calculate the cosine similarity for different layers and report the average in the figure. For ImageNet, we only train the network for 50 iterates since the training is time-consuming. Dashed curves represent a larger number of time steps.} \label{fig:gradient_compare} \end{figure*} BPTT with SG calculates gradients according to the computational graph of \cref{eqn:feedforward} shown in \cref{fig:bptt}. The pseudocode is described in the Supplementary Materials. For each neuron $i$ in the $l$-th layer, the derivative $\frac{\partial \mathbf{s}_i^{l}[t] }{\partial \mathbf{u}_i^{l}[t]}$ is zero for all values of $\mathbf{u}_i^{l}[t]$ except when $\mathbf{u}_i^{l}[t]=V_{th}$, where the derivative is infinity. Such a non-differentiability problem is solved by approximating $\frac{\partial \mathbf{s}_i^{l}[t] }{\partial \mathbf{u}_i^{l}[t]}$ with some well-behaved surrogate function, such as the rectangle function \cite{wu2018spatio,wu2019direct} \begin{equation} \label{eqn:rectangle_sg} \frac{\partial s }{\partial u}=\frac{1}{\gamma} \mathbbm{1}\left(\left|u-V_{th}\right|<\frac{\gamma}{2}\right), \end{equation} and the triangle function \cite{deng2022temporal,EsserMACAABMMBN16} \begin{equation} \label{eqn:triangle_sg} \frac{\partial s}{\partial u}=\frac{1}{\gamma^2} \max \left(0, \gamma-\left|u-V_{t h}\right|\right), \end{equation} where $\mathbbm{1}(\cdot)$ is the indicator function, and the hyperparameter $\gamma$ for both functions is often set as $V_{th}$. \section{The proposed Spatial Learning Through Time Method} \label{sec:SLTT} \subsection{Observation from the BPTT with SG Method} \label{sec:observation} In this subsection, we decompose the derivatives for membrane potential, as calculated in the BPTT method, into spatial components and temporal components. Based on the decomposition, we observe that the spatial components dominate the calculated derivatives. This phenomenon inspires the proposed method, as introduced in \cref{sec:method}. 
According to \cref{eqn:feedforward} and \cref{fig:bptt}, the gradients for weights in an SNN with $T$ time steps are calculated by \begin{equation} \label{eqn:w-update} \nabla_{\mathbf{W}^{l}}\mathcal{L} =\sum_{t=1}^{T} \frac{\partial \mathcal{L}}{\partial \mathbf{u}^{l}[t]} ^\top \mathbf{s}^{l-1}[t]^\top, \ l = L, L-1,\cdots,1. \end{equation} We further define \begin{equation} \textcolor{black}{\mathbf{\epsilon}^{l}[t]} \triangleq \frac{\partial \mathbf{u}^{l}[t+1]}{\partial \mathbf{u}^{l}[t]} +\frac{\partial \mathbf{u}^{l}[t+1]}{\partial \mathbf{s}^{l}[t]} \frac{\partial \mathbf{s}^{l}[t]}{\partial \mathbf{u}^{l}[t]} \end{equation} as the sensitivity of $\mathbf{u}^{l}[t+1]$ with respect to $\mathbf{u}^{l}[t]$, represented by the red arrows shown in \cref{fig:bptt}. Then with the chain rule, $\frac{\partial \mathcal{L}}{\partial \mathbf{u}^{l}[t]}$ in \cref{eqn:w-update} can be further calculated recursively. In particular, for the output layer, we arrive at \begin{equation} \footnotesize \label{eqn:u-update-final} \frac{\partial \mathcal{L}}{\partial \mathbf{u}^{L}[t]} =\textcolor{newblue}{ \frac{\partial \mathcal{L}}{\partial \mathbf{s}^{L}[t]} \frac{\partial \mathbf{s}^{L}[t]}{\partial \mathbf{u}^{L}[t]} } + \textcolor{newgreen}{ \sum_{t^\prime=t+1}^{T} \frac{\partial \mathcal{L}}{\partial \mathbf{s}^{L}[t^\prime]} \frac{\partial \mathbf{s}^{L}[t^\prime]}{\partial \mathbf{u}^{L}[t^\prime]} \prod_{t^{{\prime\prime}}=1}^{t^\prime - t} \mathbf{\epsilon}^{L}[t^\prime-t^{\prime\prime}] }, \end{equation} and for the intermediate layer $l=L-1,\cdots,1$, we have \begin{equation} \footnotesize \label{eqn:u-update-inter} \begin{aligned} \frac{\partial \mathcal{L}}{\partial \mathbf{u}^{l}[t]} =&\textcolor{newblue}{ \frac{\partial \mathcal{L}}{\partial \mathbf{u}^{l+1}[t]} \frac{\partial \mathbf{u}^{l+1}[t]}{\partial \mathbf{s}^{l}[t]} \frac{\partial \mathbf{s}^{l}[t]}{\partial \mathbf{u}^{l}[t]}} \\\vspace{-2pt} &+ \textcolor{newgreen}{ \sum_{t^\prime=t+1}^{T} \frac{\partial \mathcal{L}}{\partial \mathbf{u}^{l+1}[t^\prime]} \frac{\partial \mathbf{u}^{l+1}[t^\prime]}{\partial \mathbf{s}^{l}[t^\prime]} \frac{\partial \mathbf{s}^{l}[t^\prime]}{\partial \mathbf{u}^{l}[t^\prime]} \prod_{t^{{\prime\prime}}=1}^{t^\prime - t} \mathbf{\epsilon}^{l}[t^\prime-t^{\prime\prime}] }. \end{aligned} \end{equation} The detailed derivation can be found in the Supplementary Materials. In both \cref{eqn:u-update-final,eqn:u-update-inter}, the terms before the addition symbols on the R.H.S. (the blue terms) can be treated as the spatial components, and the remaining parts (the green terms) represent the temporal components. We observe that the temporal components contribute a little to $\frac{\partial \mathcal{L}}{\partial \mathbf{u}^{l}[t]}$, since the diagonal matrix $\prod_{t^{{\prime\prime}}=1}^{t^\prime - t} \textcolor{black}{\mathbf{\epsilon}^{l}[t^\prime-t^{\prime\prime}]}$ is supposed to have a small spectral norm for typical settings of surrogate functions. To see this, we consider the rectangle surrogate (\cref{eqn:rectangle_sg}) with $\gamma=V_{th}$ as an example. Based on \cref{eqn:feedforward}, the diagonal elements of $\textcolor{black}{\mathbf{\epsilon}^{l}[t]}$ are \begin{equation} \label{eqn:small_sensitivity} \textcolor{black}{\left(\mathbf{\epsilon}^{l}[t]\right)_{jj}} = \left\{\begin{array}{l}0, \quad \frac{1}{2}V_{th}<\left(\mathbf{u}^{l}[t]\right)_j<\frac{3}{2}V_{th}, \\ 1-\frac{1}{\tau}, \quad \text{otherwise}.\end{array}\right. 
\end{equation} Define $\lambda \triangleq 1-\frac{1}{\tau}$, then $\textcolor{black}{\left(\mathbf{\epsilon}^{l}[t]\right)_{jj}}$ is zero in an easily-reached interval, and is at least not large for commonly used small $\lambda$ (\eg, $\lambda=0.5$ \cite{xiao2022online,deng2022temporal}, $\lambda=0.25$ \cite{zheng2020going}, and $\lambda=0.2$ \cite{guo2022recdis}). The diagonal values of the matrix $\prod_{t^{{\prime\prime}}=1}^{t^\prime - t} \textcolor{black}{\mathbf{\epsilon}^{l}[t^\prime-t^{\prime\prime}]}$ are smaller than the single term $\textcolor{black}{\mathbf{\epsilon}^{l}[t^\prime-t^{\prime\prime}]}$ due to the product operations, especially when $t^\prime-t$ is large. The temporal components are further unimportant if the spatial and temporal components have similar directions. Then the spatial components in \cref{eqn:u-update-final,eqn:u-update-inter} dominate the gradients. For other widely-used surrogate functions and their corresponding hyperparameters, the phenomenon of dominant spatial components still exists since the surrogate functions have similar shapes and behavior. In order to illustrate this, we conduct experiments on CIFAR-10, DVS-CIFAR10, and ImageNet using the triangle surrogate (\cref{eqn:triangle_sg}) with $\gamma=V_{th}$. We use the BPTT with SG method to train the SNNs on the abovementioned three datasets, and call the calculated gradients the baseline gradients. During training, we also calculate the gradients for weights when the temporal components are abandoned, and call such gradients the spatial gradients. We compare the disparity between baseline and spatial gradients by calculating their cosine similarity. The results are demonstrated in \cref{fig:gradient_compare}. The similarity maintains a high level for different datasets, the number of time steps, and $\tau$. In particular, for $\tau=1.1 \ (\lambda=1-\frac{1}{\tau}\approx0.09)$, the baseline and spatial gradients consistently have a remarkably similar direction on CIFAR-10 and DVS-CIFAR10. In conclusion, the spatial components play a dominant role in the gradient backpropagation process. \subsection{Spatial Learning Through Time} \label{sec:method} Based on the observation introduced in \cref{sec:observation}, we propose to ignore the temporal components in \cref{eqn:u-update-final,eqn:u-update-inter} to achieve more efficient backpropagation. In detail, the gradients for weights are calculated by \begin{equation} \label{eqn:w-update-ours} \nabla_{\mathbf{W}^{l}}\mathcal{L} =\sum_{t=1}^{T} \mathbf{e}_\mathbf{W}^{l}[t], \quad \mathbf{e}_\mathbf{W}^{l}[t] = \mathbf{e}_\mathbf{u}^{l}[t] ^\top \mathbf{s}^{l-1}[t]^\top, \end{equation} where \begin{equation} \label{eqn:u-update-ours} \mathbf{e}_\mathbf{u}^{l}[t] = \left\{\begin{array}{l} \frac{\partial \mathcal{L}}{\partial \mathbf{s}^{L}[t]} \frac{\partial \mathbf{s}^{L}[t]}{\partial \mathbf{u}^{L}[t]}, \quad\quad\quad\quad\quad \ l=L, \\ \mathbf{e}_\mathbf{u}^{l+1}[t] \frac{\partial \mathbf{u}^{l+1}[t]}{\partial \mathbf{s}^{l}[t]} \frac{\partial \mathbf{s}^{l}[t]}{\partial \mathbf{u}^{l}[t]}, \quad \quad l<L,\end{array}\right. \end{equation} and $\mathbf{e}_\mathbf{u}^{l}[t]$ is a row vector. Compared with \cref{eqn:w-update,eqn:u-update-final,eqn:u-update-inter}, the required number of scalar multiplications in \cref{eqn:w-update-ours,eqn:u-update-ours} is reduced from $\Omega(T^2)$ to $\Omega(T)$. 
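To make the online form of \cref{eqn:w-update-ours,eqn:u-update-ours} concrete, the sketch below is an illustrative NumPy rendering for fully connected layers (it is not the authors' implementation; the triangle surrogate of \cref{eqn:triangle_sg} with $\gamma=V_{th}$ is assumed for $\frac{\partial \mathbf{s}^{l}[t]}{\partial \mathbf{u}^{l}[t]}$, and $\frac{\partial \mathbf{u}^{l+1}[t]}{\partial \mathbf{s}^{l}[t]}=\mathbf{W}^{l+1}$):
\begin{verbatim}
import numpy as np

def triangle_surrogate(u, V_th=1.0, gamma=1.0):
    # Surrogate derivative ds/du (triangle function)
    return np.maximum(0.0, gamma - np.abs(u - V_th)) / gamma**2

def sltt_time_step(grad_sL, inputs, potentials, W, dW):
    """Accumulate weight gradients for a single time step t.

    grad_sL    : (1/T) * d(loss)/d(s^L[t]) at the top layer
    inputs[l]  : presynaptic spikes s^{l-1}[t] feeding layer l
    potentials : membrane potentials u^{l}[t] of every layer l
    W, dW      : weight matrices and gradient accumulators per layer
    """
    e_u = grad_sL * triangle_surrogate(potentials[-1])   # top-layer error e_u^L[t]
    for l in reversed(range(len(W))):
        dW[l] += np.outer(e_u, inputs[l])                # e_W^l[t] accumulation
        if l > 0:                                        # e_u^l[t] recursion
            e_u = (e_u @ W[l]) * triangle_surrogate(potentials[l - 1])
    return dW
\end{verbatim}
Since none of the quantities used at time step $t$ are needed at any other time step, this accumulation can run online during the forward pass.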
Note that the BPTT method does not conduct naive computation of the sum-product as shown in \cref{eqn:u-update-final,eqn:u-update-inter}, but in a recursive way to achieve $\Omega(T)$ computational complexity, as shown in the Supplementary Materials. Although BPTT and the proposed update rule both need $\Omega(T)$ scalar multiplications, such multiplication operations are reduced due to ignoring some routes in the computational graph. Please refer to Supplementary Materials for time complexity analysis. Therefore, the time complexity of the proposed update rule is much lower than that of BPTT with SG, although they are both proportional to $T$. According to \cref{eqn:w-update-ours,eqn:u-update-ours}, the error signals $\mathbf{e}_\mathbf{W}^{l}$ and $\mathbf{e}_\mathbf{u}^{l}$ at each time step can be calculated independently without information from other time steps. Thus, if $\frac{\partial \mathcal{L}}{\partial \mathbf{s}^{L}[t]}$ can be calculated instantaneously at time step $t$, $\mathbf{e}_\mathbf{W}^{l}[t]$ and $\mathbf{e}_\mathbf{u}^{l}[t]$ can also be calculated instantaneously at time step $t$. Then there is no need to store intermediate states of the whole time horizon. To achieve the instantaneous calculation of $\frac{\partial \mathcal{L}}{\partial \mathbf{s}^{L}[t]}$, we adopt the loss function \cite{guo2022recdis,deng2022temporal,xiao2022online} \begin{equation} \label{eqn:loss} \mathcal{L} = \frac{1}{T}\sum_{t=1}^{T}\ell(\mathbf{o}[t],y), \end{equation} which is an upper bound of the loss introduced in \cref{eqn:loss_rate}. We propose the Spatial Learning Through Time (SLTT) method using gradient approximation and instantaneous gradient calculation, as detailed in \cref{alg:SLTT}. In \cref{alg:SLTT}, all the intermediate terms at time step $t$, such as $\mathbf{e}_\mathbf{u}^{l}[t], \mathbf{s}^{l}[t],\frac{\partial \mathbf{u}^{l+1}[t]}{\partial \mathbf{s}^{l}[t]}$, and $\frac{\partial \mathbf{s}^{l}[t]}{\partial \mathbf{u}^{l}[t]}$, are never used in other time steps, so the required memory overhead of SLTT is constant agnostic to the total number of time steps $T$. On the contrary, the BPTT with SG method has an $\Omega(T)$ memory cost associated with storing all intermediate states for all time steps. In summary, the proposed method is both time-efficient and memory-efficient, and has the potential to enable online learning for neuromorphic substrates \cite{zenke2021brain}. \setlength{\textfloatsep}{20pt} \begin{algorithm}[t] \caption{One iteration of SNN training with the SLTT or SLTT-K methods.} \label{alg:SLTT} \begin{algorithmic}[1] \Require Time steps $T$; Network depth $L$; Network parameters $\{\mathbf{W}^l\}_{l=1}^L$; Training data $(\mathbf{s}^0,\mathbf{y})$; Learning rate $\eta$; Required backpropagation times $K$ (for SLTT-K). \item[\textbf{Initialize:}] $\Delta \mathbf{W}^l = 0, \ l=1,2,\cdots,L$. 
\If {using SLTT-K} \State Sample $K$ numbers in $[1,2,\cdots,T]$ w/o replacement to form $required\_bp\_steps$; \Else \State $required\_bp\_steps=[1,2,\cdots,T]$; \EndIf \For {$t=1,2,\cdots,T$} \State Calculate $\mathbf{s}^L[t]$ by \cref{eqn:feedforward,eqn:lif}; \quad //\textbf{Forward} \State Calculate the instantaneous loss $\ell$ in \cref{eqn:loss}; \If {$t$ in $required\_bp\_steps$} \quad\quad //\textbf{Backward} \State $\mathbf{e}_\mathbf{u}^{L}[t] =\frac{1}{T}\frac{\partial \ell}{\partial \mathbf{s}^{L}[t]}\frac{\partial \mathbf{s}^{L}[t]}{\partial \mathbf{u}^{L}[t]}$; \For {$l=L-1,\cdots,1$} \State $\mathbf{e}_\mathbf{u}^l[t] = \mathbf{e}_\mathbf{u}^{l+1}[t] \frac{\partial \mathbf{u}^{l+1}[t]}{\partial \mathbf{s}^{l}[t]} \frac{\partial \mathbf{s}^{l}[t]}{\partial \mathbf{u}^{l}[t]}$; \State $\Delta \mathbf{W}^l \mathrel{+}= \mathbf{e}_\mathbf{u}^{l}[t] ^\top \mathbf{s}^{l-1}[t]^\top$; \EndFor \EndIf \EndFor \State $\mathbf{W}^l = \mathbf{W}^l - \eta \Delta \mathbf{W}^l, \ l=1,2,\cdots,L$; \Ensure Trained network parameters $\{\mathbf{W}^l\}_{l=1}^L$. \end{algorithmic} \end{algorithm} \subsection{Further Reducing Time Complexity} \label{sec:rethink} Due to the online update rule of the proposed method, the gradients for weights are calculated according to an ensemble of $T$ independent computational graphs, and the time complexity of gradient calculation is $\Omega(T)$. The $T$ computational graphs can have similar behavior, and then similar gradient directions can be obtained with only a portion of the computational graphs. Based on this, we propose to train a portion of time steps to reduce the time complexity further. In detail, for each iteration in the training process, we randomly choose $K$ time indexes from the time horizon, and only conduct backpropagation with SLTT at the chosen $K$ time steps. We call such a method the SLTT-K method, and the pseudo-code is given in \cref{alg:SLTT}. Note that setting $K=T$ results in the original SLTT method. Compared with SLTT, the time complexity of SLTT-K is reduced to $\Omega(K)$, and the memory complexity is the same. In our experiments, SLTT-K can achieve satisfactory performance even when $K=1$ or $2$, as shown in \cref{sec:experiments}, indicating superior efficiency of the SLTT-K method. \section{Experiments} \label{sec:experiments} In this section, we evaluate the proposed method on CIFAR-10 \cite{krizhevsky2009learning}, CIFAR-100\cite{krizhevsky2009learning}, ImageNet\cite{deng2009imagenet}, DVS-Gesture\cite{amir2017low}, and DVS-CIFAR10 \cite{li2017cifar10} to demonstrate its superior performance regarding training costs and accuracy. For our SNN models, we set $V_{th}=1$ and $\tau=1.1$, and apply the triangle surrogate function (\cref{eqn:triangle_sg}). An effective technique, batch normalization (BN) along the temporal dimension \cite{zheng2020going}, cannot be adopted to our method, since it requires calculation along the total time steps and then intrinsically prevents time-steps-independent memory costs. Therefore, for some tasks, we borrow the idea from normalization-free ResNets (NF-ResNets) \cite{brock2021high} to replace BN by weight standardization (WS) \cite{qiao2019micro}. Please refer to the Supplementary Materials for experimental details. \subsection{Comparison with BPTT} \label{sec:bptt_compare} The major advantage of SLTT over BPTT is the low memory and time complexity. To verify the advantage of SLTT, we use both methods with the same experimental setup to train SNNs. 
For CIFAR-10, CIFAR-100, ImageNet, DVS-Gesture, and DVS-CIFAR10, the network architectures we adopt are ResNet-18, ResNet-18, NF-ResNet-34, VGG-11, and VGG-11, respectively, and the total number of time steps are 6, 6, 6, 20, and 10, respectively. For ImageNet, to accelerate training, we first train the SNN with only 1 time step for 100 epochs to get a pre-trained model, and then use SLTT or BPTT to fine-tune the model with 6 time steps for 30 epochs. Details of the training settings can be found in the Supplementary Materials. We run all the experiments on the same Tesla-V100 GPU, and ensure that the GPU card is running only one experiment at a time to perform a fair comparison. It is not easy to directly compare the running time for two training methods since the running time is code-dependent and platform-dependent. In our experiments, we measure the wall-clock time of the total training process, including forward propagation and evaluation on the validation set after each epoch, to give a rough comparison. For ImageNet, the training time only includes the 30-epoch fine-tuning part. \begin{table}[t] \caption{Comparison of training memory cost, training time, and accuracy between SLTT and BPTT. The ``Memory'' column indicates the maximum memory usage on an GPU during training. And the ``Time'' column indicates the wall-clock training time.} \label{table:compare} \begin{threeparttable} \begin{tabular}{lcccc} \toprule Dataset & Method & Memory & Time & Acc \\ \midrule \multirow{2}*{CIFAR-10}& BPTT & 3.00G & 6.35h & \bf{94.60\%}\\ & SLTT & \bf{1.09G} & \bf{4.58h} & 94.59\% \\ \hline \multirow{2}*{CIFAR-100} & BPTT & 3.00G & 6.39h & 73.80\% \\ & SLTT & \bf{1.12}G & \bf{4.68h} & \bf{74.67}\% \\ \hline \multirow{2}*{ImageNet} & BPTT & 28.41G & 73.8h & \bf{66.47\%} \\ & SLTT & \bf{8.47G} & \bf{66.9h} & 66.19\% \\ \hline \multirow{2}*{DVS-Gesture} & BPTT & 5.82G & 2.68h & 97.22\% \\ & SLTT & \bf{1.07G} & \bf{2.64h} & \bf{97.92\%} \\ \hline \multirow{2}*{DVS-CIFAR10} & BPTT & 3.70G & 4.47h & 73.60\% \\ & SLTT & \bf{1.07G} & \bf{3.43h} & \bf{77.30\%} \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} The results of maximum memory usage, total wall-clock training time, and accuracy for both SLTT and BPTT on different datasets are listed in \cref{table:compare}. SLTT enjoys similar accuracy compared with BPTT while using less memory and time. For all the datasets, SLTT requires less than one-third of the GPU memory of BPTT. In fact, SLTT maintains constant memory cost over the different number of time steps $T$, while the training memory of BPTT grows linearly in $T$. The memory occupied by SLTT for $T$ time steps is always similar to that of BPTT for $1$ time step. Regarding training time, SLTT also enjoys faster training on both algorithmic and practical aspects. For DVS-Gesture, the training time for both methods are almost the same, deviating from the algorithmic time complexity. That may be due to really little training time for both methods and the good parallel computing performance of the GPU. \subsection{Performance of SLTT-K} \begin{table}[t] \caption{Comparison of training time and accuracy between SLTT and SLTT-K. ``NFRN'' means Normalizer-Free ResNet. For DVS-Gesture and DVS-CIFAR10, the ``Acc'' column reports the average accuracy of 3 runs of experiments using different random seeds. 
We skip the standard deviation values since they are almost 0, except for SLTT on DVS-CIFAR10 where the value is 0.23\%.} \label{table:SLTT-k} \begin{threeparttable} \begin{tabular}{lcccc} \toprule Network & Method & Memory & Time & Acc \\ \midrule \multicolumn{5}{c}{{DVS-Gesture, $T=20$}}\\ \midrule \multirow{2}*{VGG-11} & SLTT & \multirow{2}*{$\approx$1.1G} & 2.64h & \bf{97.92\%} \\ & SLTT-4 & & \bf{1.69h} & 97.45\% \\ \midrule \multicolumn{5}{c}{{DVS-CIFAR10, $T=10$}}\\ \midrule \multirow{2}*{VGG-11} & SLTT & \multirow{2}*{$\approx$1.1G} & 3.43h & \bf{77.16\%} \\ & SLTT-2 & & \bf{2.49h} & 76.70\% \\ \midrule \multicolumn{5}{c}{{ImageNet, $T=6$}}\\ \midrule \multirow{3}*{NFRN-34} & SLTT & \multirow{3}*{$\approx$8.5G} & 66.90h & \bf{66.19\%} \\ & SLTT-2 & & 41.88h & 66.09\% \\ & SLTT-1 & & \bf{32.03h} & 66.17\% \\ \hline \multirow{3}*{NFRN-50} & SLTT & \multirow{3}*{$\approx$24.5G} & 126.05h & \bf{67.02\%} \\ & SLTT-2 & & 80.63h & 66.98\% \\ & SLTT-1 & & \bf{69.36h} & 66.94\% \\ \hline \multirow{3}*{NFRN-101} & SLTT & \multirow{3}*{$\approx$33.8G} & 248.23h & 69.14\% \\ & SLTT-2 & & 123.05h & \bf{69.26\%} \\ & SLTT-1 & & \bf{91.73h} & 69.14\% \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} As introduced in \cref{sec:rethink}, the proposed SLTT method has a variant, SLTT-K, that conducts backpropagation only in randomly selected $K$ time steps for reducing training time. We verify the effectiveness of SLTT-K on the neuromorphic datasets, DVS-Gesture and DVS-CIFAR10, and the large-scale static dataset, ImageNet. For the ImageNet dataset, we first pre-train the 1-time-step networks, and then fine-tune them with 6 time steps, as described in \cref{sec:bptt_compare}. We train the NF-ResNet-101 networks on a single Tesla-A100 GPU, while we use a single Tesla-V100 GPU for other experiments. As shown in \cref{table:SLTT-k}, the SLTT-K method yields competitive accuracy with SLTT (also BPTT) for different datasets and network architectures, even when $K=\frac{1}{6}T$ or $\frac{1}{5}T$. With such small values of $K$, further compared with BPTT, the SLTT-K method enjoys comparable or even better training results, less memory cost (much less if $T$ is large), and much faster training speed. \subsection{Comparison with Other Efficient Training Methods} \label{sec:ottt_compare} There are other online learning methods for SNNs \cite{xiao2022online,bellec2020solution,bohnstingl2022online,yin2021accurate,yang2022training} that achieve time-steps-independent memory costs. Among them, OTTT \cite{xiao2022online} enables direct training on large-scale datasets with relatively low training costs. In this subsection, we compare SLTT and OTTT under the same experimental settings of network structures and total time steps (see Supplementary Materials for details). The wall-clock training time and memory cost are calculated based on 3 epochs of training. The two methods are comparable since the implementation of them are both based on PyTorch \cite{paszke2019pytorch} and SpikingJelly \cite{SpikingJelly}. The results are shown in \cref{table:compare_ottt}. SLTT outperforms OTTT on all the datasets regarding memory costs and training time, indicating the superior efficiency of SLTT. As for accuracy, SLTT also achieves better results than OTTT, as shown in \cref{table:sota}. 
\begin{table}[h] \caption{Comparison of training memory cost and training time per epoch between SLTT and OTTT.} \label{table:compare_ottt} \centering \begin{threeparttable} \begin{tabular}{cccc} \toprule Dataset & Method & Memory & Time/Epoch \\ \midrule \multirow{2}*{CIFAR-10}& OTTT & 1.71G & 184.68s \\ & SLTT & \bf{1.00G} & \bf{54.48s} \\ \hline \multirow{2}*{CIFAR-100} & OTTT & 1.71G & 177.72s \\ & SLTT & \bf{1.00}G & \bf{54.60s} \\ \hline \multirow{2}*{ImageNet} & OTTT & 19.38G & 7.52h \\ & SLTT & \bf{8.47G} & \bf{2.23h} \\ \hline \multirow{2}*{DVS-Gesture} & OTTT & 3.38G & 236.64s \\ & SLTT & \bf{2.08G} & \bf{67.20s} \\ \hline \multirow{2}*{DVS-CIFAR10} & OTTT & 4.32G & 114.84s \\ & SLTT & \bf{1.90G} & \bf{48.00s} \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} \subsection{Comparison with the State-of-the-Art} \begin{table*}[h] \caption{Comparisons with other SNN training methods on CIFAR-10, CIFAR-100, ImageNet, DVS-Gesture, and DVS-CIFAR10. Results of our method on all the datasets, except ImageNet, are based on 3 runs of experiments. The ``Efficient Training'' column means whether the method requires less training time or memory occupation than the vanilla BPTT method for one epoch of training.} \label{table:sota} \centering \begin{threeparttable} \begin{tabular}{c|lcccc} \toprule[1.08pt] & Method & Network & Time Steps & Efficient Training & Mean$\pm$Std (Best) \\ \midrule[1.08pt] \multirow{5}*{\rotatebox{90}{CIFAR-10}} &LTL-Online\cite{yang2022training} \tnote{1} & ResNet-20 & 16 & \Checkmark & ${93.15\%}$ \\ & OTTT\cite{xiao2022online} & VGG-11 (WS) & 6 & \Checkmark & $93.52\pm0.06\%$ ($93.58\%$) \\ &Dspike\cite{li2021differentiable} & ResNet-18 & 6 & \XSolidBrush & $94.25\pm0.07\%$ \\ & TET\cite{deng2022temporal} & ResNet-19 & 6 & \XSolidBrush & $\mathbf{94.50\pm0.07\%}$ \\ \cline{2-6} &SLTT (ours) & ResNet-18 & 6 & \Checkmark & $\underline{94.44\%\pm0.21\%}$ ($\mathbf{94.59\%}$) \\ \midrule[1.08pt] \multirow{5}*{\rotatebox{90}{CIFAR-100}} &OTTT\cite{xiao2022online} & VGG-11 (WS) & 6 & \Checkmark & $71.05\pm0.04\%$ ($71.11\%$) \\ &ANN-to-SNN\cite{bu2022optimal} \tnote{1} & VGG-16 & 8 & \Checkmark & ${73.96}\%$ \\ &RecDis\cite{guo2022recdis} & ResNet-19 & 4 & \XSolidBrush & $74.10\pm0.13 \%$ \\ &TET\cite{deng2022temporal} & ResNet-19 & 6 & \XSolidBrush & $\mathbf{74.72\pm0.28\%}$ \\ \cline{2-6} &SLTT (ours) & ResNet-18 & 6 & \Checkmark & $\underline{74.38\%\pm0.30\% \ (74.67\%)}$ \\ \midrule[1.08pt] \multirow{6}*{\rotatebox{90}{ImageNet}} &ANN-to-SNN\cite{li2021free} \tnote{1} & ResNet-34 & 32 & \Checkmark & ${64.54\%}$ \\ &TET\cite{deng2022temporal} & ResNet-34 & 6 & \XSolidBrush & $64.79\%$ \\ &OTTT\cite{xiao2022online} & NF-ResNet-34 & 6 & \Checkmark & ${65.15\%}$ \\ &SEW \cite{fang2021sew} & Sew ResNet-34,50,101 & 4 & \XSolidBrush & $67.04\% ,67.78\%,68.76\%$ \\ \cline{2-6} &SLTT (ours) & NF-ResNet-34,50 & 6 & \Checkmark & ${66.19\%,67.02\%}$ \\ &SLTT-2 (ours) & NF-ResNet-101 & 6 & \Checkmark & $\mathbf{69.26\%}$ \\ \midrule[1.08pt] \multirow{6}*{\rotatebox{90}{\small DVS-Gesture}} &STBP-tdBN \cite{zheng2020going} & ResNet-17 & 40 & \XSolidBrush & $96.87\%$ \\ &OTTT \cite{xiao2022online} & VGG-11 (WS) & 20 & \Checkmark & $96.88\%$ \\ &PLIF \cite{fang2021incorporating} & VGG-like & 20 & \XSolidBrush & $97.57\%$ \\ & SEW\cite{fang2021sew} & Sew ResNet & 16 & \XSolidBrush & ${97.92}\%$ \\ \cline{2-6} &\multirow{2}*{SLTT (ours)} & VGG-11 & 20 & \Checkmark & $\underline{97.92\pm0.00\% \ (97.92\%)}$ \\ & & VGG-11 (WS) & 20 & \Checkmark & $\mathbf{98.50\pm0.21\% \ 
(98.62\%)}$ \\ \midrule[1.08pt] \multirow{5}*{\rotatebox{90}{DVS-CIFAR10}} &Dspike\cite{li2021differentiable} \tnote{2} & ResNet-18 & 10 & \XSolidBrush & $75.40\pm0.05\%$ \\ & InfLoR\cite{guo2022reducing} \tnote{2} & ResNet-19 & 10 & \XSolidBrush & $75.50\pm0.12\%$ \\ &OTTT \cite{xiao2022online} \tnote{2} & VGG-11 (WS) & 10 & \Checkmark & $76.27\pm0.05\% (76.30\%)$ \\ & TET\cite{deng2022temporal} \tnote{2} & VGG-11 & 10 & \XSolidBrush & $\mathbf{83.17\pm0.15\%}$ \\ \cline{2-6} &SLTT (ours) & VGG-11 & 10 & \Checkmark & ${77.17\pm0.23\% \ (77.30\%)}$ \\ &SLTT (ours) \tnote{2} & VGG-11 & 10 & \Checkmark & $\underline{82.20\pm0.95\% \ (83.10\%)}$ \\ \bottomrule[1.08pt] \end{tabular} \small $^{1}$ Pre-trained ANN models are required. \ $^{2}$ With data augmentation. \end{threeparttable} \end{table*} The proposed SLTT method is not designed to achieve the best accuracy, but to enable more efficient training. Still, our method achieves competitive results compared with the SOTA methods, as shown in \cref{table:sota}. Besides, our method obtains such good performance with only a few time steps, leading to low energy consumption when the trained networks are implemented on neuromorphic hardware. For the BPTT-based methods, there is hardly any implementation of large-scale network architectures on ImageNet due to the significant training costs. To our knowledge, only Fang \etal \cite{fang2021sew} leverage BPTT to train an SNN with more than 100 layers, while the training process requires near 90G GPU memory for $T=4$. Our SLTT-2 method succeeds in training the same-scale ResNet-101 network with only 34G memory occupation and 4.10h of training time per epoch (\cref{table:SLTT-k,table:sota}). Compared with BPTT, the training memory and time of SLTT-2 are reduced by more than 70\% and 50\%, respectively. Furthermore, since the focus of the SOTA BPTT-type methods (\eg, surrogate function, network architecture, and regularization) are orthogonal to ours, our training techniques can be plugged into their methods to achieve better training efficiency. Some ANN-to-SNN-based and spike representation-based methods \cite{yang2022training,bu2022optimal,li2021free} also achieve satisfactory accuracy with relatively small training costs. However, they typically require a (much) larger number of time steps (\cref{table:sota}), which hurts the energy efficiency for neuromorphic computing. \subsection{Influence of $T$ and $\tau$} For efficient training, the SLTT method approximates the gradient calculated by BPTT by ignoring the temporal components in \cref{eqn:u-update-final,eqn:u-update-inter}. So when $T$ or $\tau$ is large, the approximation may not be accurate enough. In this subsection, we conduct experiments with different $\tau$ and $T$ on the neuromorphic datasets, DVS-Gesture and DVS-CIFAR10. We verify that the proposed method can still work well for large $T$ and commonly used $\tau$ \cite{xiao2022online,deng2022temporal,zheng2020going,guo2022recdis}, as shown in \cref{fig:diff_tau_step}. Regarding large time steps, SLTT obtains similar accuracy with BPTT even when $T=50$, and SLTT can outperform BPTT when $T<30$ on the two neuromorphic datasets. For different $\tau$, our method can consistently perform better than BPTT, although there is a performance drop for SLTT when $\tau$ is large. 
\begin{figure}[t] \centering \begin{subfigure}{0.49\linewidth} \includegraphics[width=1.12\columnwidth]{figures/dvsgesture_step.pdf} \label{fig:dvsgesture_step} \end{subfigure} \begin{subfigure}{0.49\linewidth} \includegraphics[width=1.12\columnwidth]{figures/dvscifar_step.pdf} \label{fig:dvscifar_step} \end{subfigure} \begin{subfigure}{0.49\linewidth} \vspace{-1.3em} \includegraphics[width=1.12\columnwidth]{figures/dvsgesture_tau.pdf} \label{fig:dvsgesture_tau} \end{subfigure} \begin{subfigure}{0.49\linewidth} \vspace{-1.3em} \includegraphics[width=1.12\columnwidth]{figures/dvscifar_tau.pdf} \label{fig:dvscifar_tau} \end{subfigure} \vspace{-1.0em} \caption{Performance of SLTT and BPTT for different number of time steps (the top two subfigures) and for different $\tau$ (the bottom two subfigures). Experiments are conducted on the neuromorphic datasets, DVS-Gesture and DVS-CIFAR10.} \label{fig:diff_tau_step} \end{figure} \section{Conclusion} \label{sec:conclusion} In this work, we propose the Spatial Learning Through Time (SLTT) method that significantly reduces the time and memory complexity compared with the vanilla BPTT with SG method. We first show that the backpropagation of SNNs through the temporal domain contributes a little to the final calculated gradients. By ignoring unimportant temporal components in gradient calculation and introducing an online calculation scheme, our method reduces the scalar multiplication operations and achieves time-step-independent memory occupation. Additionally, thanks to the instantaneous gradient calculation in our method, we propose a variant of SLTT, called SLTT-K, that allows backpropagation only at $K$ time steps. SLTT-K can further reduce the time complexity of SLTT significantly. Extensive experiments on large-scale static and neuromorphic datasets demonstrate superior training efficiency and high performance of the proposed method, and illustrate the method's effectiveness under different network settings and large-scale network structures. {\small \bibliographystyle{ieee_fullname}
{ "arxiv_id": "2302.14305", "language": "en", "timestamp": "2023-03-01T02:09:00", "url": "https://arxiv.org/abs/2302.14305", "yymm": "2302" }
\section{Introduction} The magneto-optical trap (MOT) is a robust device to generate samples of ultracold atoms for various research and device applications. Nowadays, cold atoms are considered an important quantum system for their applications in several upcoming quantum technologies such as high precision atomic clocks \cite{Wang, Guena}, inertial sensors \cite{Geiger, Nelson, chu, wu, lee, bidel,alzar}, electro-magnetic field sensors \cite{carter, wild, beh}, quantum computers \cite{briegel, saffman}, etc. Recently, the use of cold atoms for developing a quantum vacuum pressure standard has been proposed and demonstrated \cite{shen1, shen2}. Cold atom based pressure sensors are absolute and universal, as they are based on atomic collision processes and no repeated calibration is required over time. This is an advantage of a cold atom based pressure standard over conventional pressure sensing instruments such as ionization gauges, which require repeated calibrations due to aging of filaments and electrodes. In addition, cold atom based pressure standards can work over a large dynamic range of vacuum, from the UHV to the extreme-high vacuum (XHV) regime. The loss rate of atoms in atom traps is sensitive to the background pressure in the trap chamber \cite{arpo, eckel, eckelrsi, moore, yuan, wu1, will, vivek3}. Therefore, atom traps can be utilized to sense or measure the UHV pressure in the chamber. Both the MOT and the magnetic trap are used for pressure sensing applications, with their relative advantages over each other. A MOT is easier to form but it can sense pressure in the UHV regime only, whereas magnetic traps can be used to sense the pressure down to the XHV regime. Earlier, Yuan et al. \cite{yuan} estimated the Rb pressure in the chamber from the MOT loading time in the low cooling beam intensity regime by ignoring the intra-trap collisional loss rate. Willems et al. \cite{will} estimated the background pressure (non-Cs gases) in the chamber by measuring the lifetime of a MOT and a magneto-static trap. Arpornthip et al. \cite{arpo} measured the MOT loading time as a function of the background pressure (non-Rb contents) as well as of the MOT loading rate (dependent on Rb pressure), to estimate the background pressure and the partial pressure of Rb in the chamber. Moore et al. \cite{moore} measured the non-Rb background pressure in the chamber from the Rb-MOT loading data by increasing the non-Rb gas pressure in the chamber, applying the chamber pressure rise approach demonstrated earlier by Arpornthip et al. In this method, the sputter ion pump (SIP) was turned off to change the non-Rb gas pressure and the MOT loading was studied at different non-Rb background pressures. Though this method is more time-consuming, it is suitable for detecting vacuum leaks in the chamber. In another approach \cite{vivek3}, the partial pressures due to Rb and non-Rb gases have been estimated by measuring the saturated number and loading time in a Rb-MOT. \\ In the work reported here, we have estimated the background pressure due to non-Rb gases in the chamber by measuring the Rb-MOT loading time in low Rb pressure and low cooling beam intensity regimes. We first measured the MOT loss rate ($\Gamma$) as a function of the cooling beam intensity. The MOT loading time at low cooling beam intensity was used to estimate the total (Rb and non-Rb) background collisional loss rate by neglecting the intra-trap collisional loss rate.
Then, we measured the loading time of this low cooling beam intensity MOT as a function of the Rb-dispenser current. In these measurements, the MOT loading time at low dispenser current was used to estimate the MOT loss rate due to non-Rb background gas contents, which provided an estimate of the non-Rb background pressure in the chamber. Therefore, as compared to earlier methods \cite{yuan, arpo, moore}, we show that the non-Rb partial pressure in the UHV chamber can be estimated by measuring the MOT loading time in the low cooling beam intensity and low Rb pressure regimes. Our method is comparatively less time-consuming and does not require switching off the pumping of the vacuum chamber, which prevents exposure of the chamber to undesirable gas contamination. The straightforward method presented here has the potential for developing a UHV pressure sensor device.\\ The MOT loading process can be described by a rate equation as \cite{arpo}, \begin{equation} \frac{dN(t)}{dt}= R - \gamma_{b} N(t) - \beta \bar{n}(t)N(t) \end{equation} where $N (t)$ is the number of atoms in the MOT cloud at any time $t$, $R$ is the loading rate of the MOT due to Rb vapour in the background, $\gamma_{b}$ is the loss rate in the MOT due to collisions of trapped atoms with the atoms/molecules present in the background, $\beta$ is the loss rate due to inelastic two-body intra-trap collisions, $\bar{n}(t) = \int_{}^{} n(\textbf{r},t)^2 \,dV / N(t) $ is the average number density and $n(\textbf{r},t)$ the number density of the trapped atoms in the MOT cloud. \\ The solution of equation (1) depends on the parameter regime in which the MOT is operated. For a small number of atoms in the MOT ($N<10^5$), known as the constant-volume regime, $\bar{n}(t)\approx {N(t)}/{V}$. For large $N$ ($N>10^5$), known as the constant-density regime, $\bar{n}$ is constant. In our experiments, the MOT was operated in the constant-density regime (i.e. $N>10^6$); therefore, the solution of equation (1) can be written as, \begin{equation} N(t) = N_{s} \left[1-\exp(-t/\tau_{L})\right], \end{equation} where $\tau_{L} = 1/\Gamma$ with $\Gamma = \gamma_{b} + \beta \bar{n}$. Here $N_{s} = R\tau_{L}$ is the final number in the MOT (i.e. the number of atoms in the MOT in equilibrium). The parameter $\tau_{L}$ is known as the MOT loading time. Equation (2) describes the variation in the number of atoms in a MOT with time, and its plot is referred to as the MOT loading curve. From the experimentally measured MOT loading curve, both parameters, $\tau_{L}$ and $N_{s}$, can be determined. These parameters are dependent on the background pressure in the chamber due to Rb and non-Rb gas contents, and on the MOT parameters such as the cooling beam intensity. It is known that the loss rate of atoms from the MOT cloud due to collisions with atoms/molecules of any gas species in the background is related to its partial pressure in the chamber. The loss rate $\gamma_{i}$ due to collisions with atoms/molecules of the $i^{th}$ gas species in the background is related to its partial pressure $P_{i}$ in the background as \cite{arpo}, \begin{equation} \gamma_{i} = 6.8\frac{P_{i}}{(k_{B}T)^{2/3}}\left(\frac{C_{i}}{m_{i}}\right)^{1/3} (Dm_{0})^{-1/6} = \frac{P_{i}}{k_{i}}, \end{equation} where $m_{0}$ is the mass of the trapped atom, $m_{i}$ is the mass of the incident atom/molecule of the $i^{th}$ gas species, $k_{B}$ is the Boltzmann constant, $T$ is the temperature, $D$ is the trap depth of the MOT and $C_{i}$ is the van der Waals coefficient for the $i^{th}$ gas species in the background.
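As an illustration of how $\tau_{L}$ and $N_{s}$ are extracted in practice, the loading curve of equation (2) can be fitted to the measured number of atoms by a standard least-squares routine. The short Python sketch below uses synthetic data (the numbers are placeholders, not our measured values):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def loading_curve(t, N_s, tau_L):
    # Equation (2): N(t) = N_s * (1 - exp(-t / tau_L))
    return N_s * (1.0 - np.exp(-t / tau_L))

t = np.linspace(0.0, 200.0, 400)        # time (s)
rng = np.random.default_rng(1)
N_meas = loading_curve(t, 2.0e6, 65.0) * (1.0 + 0.02 * rng.standard_normal(t.size))

(N_s, tau_L), _ = curve_fit(loading_curve, t, N_meas, p0=(1.0e6, 50.0))
print(N_s, tau_L, 1.0 / tau_L)          # N_s, tau_L (s), Gamma (1/s)
\end{verbatim}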
\\ In a typical vapour loaded MOT chamber, the background collisional loss rate has two components and can be written as, \begin{equation} \gamma_{b} =\gamma_{Rb} + \gamma_{non-Rb}, \end{equation} where $ \gamma_{Rb}$($= (k_{Rb})^{-1} P_{Rb}$) represents the loss rate due to collisions with untrapped Rb vapour atoms and $\gamma_{non-Rb}$ represents the loss rate due to other atoms/molecules in the background. We have experimentally measured $\Gamma$ for different values of laser beam intensity in the MOT. In the low intensity regime of the cooling laser beam, the value of $\Gamma$ can be approximated as $\Gamma \, \approx \, \gamma_{b}$, since the intra-trap collisional loss rate ($ \beta \bar{n} $) is negligible as compared to the background collisional loss rate. For a value of $\bar{n} \sim 10^{8} \,$ cm$^{-3}$ (for our MOT at 7.7 mW/cm$^{2}$) and $ \beta \sim 2 \times 10^{-12} \,$ cm$^{-3}$ s$^{-1}$ (as reported earlier \cite{Gensemer} for the detuning and intensity used in our MOT), the value of $ \beta \bar{n} $ is $\sim 10^{-4} \,$ s$^{-1}$. This is much smaller than the lowest value of $\gamma_{b} $ ($\sim 0.0071 \,$ s$^{-1}$) observed at that intensity (Figure \ref{gamma vs int}). \begin{figure}[ht] \centering\includegraphics[width=8.5cm]{Fig1.eps} \caption{A schematic diagram of the experimental setup. Two MOT-beams in the reflection geometry in the y-z plane are shown, whereas the other two MOT-beams along $\pm$ x-direction are not shown in the diagram. PD represents the photodiode used for the detection of fluorescence.} \label{exptsetup} \end{figure} The experiments have been performed with the loading of a mirror-MOT (U-MOT) on an atom-chip, with the schematic shown in Figure \ref{exptsetup}. The details of the experimental setup of the atom-chip mirror-MOT have been described earlier \cite{vivek2, vivek3}. Different vacuum pumps used in the setup include a 77 l/s turbo molecular pump (TMP), a 300 l/s sputter ion pump (SIP) and a titanium sublimation pump (TSP). The ultimate base pressure achieved in the chamber without Rb vapour was $1.5 \times 10^{-10}$ Torr as read by the SIP controller. The pressure values read by the SIP controller were nearly equal to those read by an extractor gauge attached to the chamber. A Rb dispenser assembly having three Rb dispensers (Rb/NF/3.4/12FT) connected in a parallel configuration was prepared by welding the dispensers on a two-pin feedthrough. This assembly was placed in the vacuum chamber through a viewport hole such that the Rb dispensers are at a distance of $\sim$ 17 cm from the centre of the octagonal chamber. The rubidium vapour is produced inside the chamber by flowing a current through this dispenser assembly. The current in each dispenser ($I_{D}$) is nearly one-third of the current supplied to the dispenser assembly. \begin{figure}[h] \centering\includegraphics[width=8.5cm]{Fig2.eps} \caption{The loading curves of the U-MOT on the atom-chip for different values of the cooling laser beam intensity at a fixed background pressure (at a dispenser current of $I_{D}$ = 3.57 A). The experimentally observed MOT loading data along with the best-fit (continuous curve) are shown for different values of intensity.} \label{loadingcurve} \end{figure} A quadrupole-like magnetic field required for the MOT was generated from a current-carrying (60 A) copper U-wire (Figure \ref{exptsetup}) placed behind the atom-chip in the presence of homogeneous bias fields ($B_{y} \, \sim$ 11 G and $B_{z} \, \sim$ 3 G). The outputs from frequency stabilized diode lasers served as the cooling and repumping laser beams.
Each MOT beam was a combination of a cooling and a re-pumping beam with a suitable ratio of power in the beams. Two MOT beams were reflected at 45$^{\circ}$ from the chip surface, which formed four MOT beams in the overlapping region. Two counter-propagating MOT beams in the orthogonal direction completed the set of six MOT beams required for operation of the MOT. This MOT configuration is called the mirror-MOT configuration. \begin{figure}[h!] \centering \subfigure[]{ \label{gamma vs int} \resizebox*{8.7 cm}{!}{\includegraphics{Fig3a.eps}}\hspace{0 cm}} \subfigure[]{ \label{gammavsdispenser} \resizebox*{9.0 cm}{!}{\includegraphics{Fig3b.eps}}} \caption{(a) The variation in the loss rate ($\Gamma$) with the cooling laser beam intensity for different Rb dispenser currents ($I_{D}$). (b) Variation in the background collisional loss rate ($\gamma_{b}$) with the Rb dispenser current.} \label{sample-figure} \end{figure} Figure \ref{loadingcurve} shows the loading curves of the U-MOT for different values of the cooling laser beam intensity at a fixed Rb dispenser current of $I_{D}$ = 3.57 A. The continuous curves show the best-fit of the experimental loading curve to equation (2). From the fit, we obtain the values of the loading time $\tau_{L}$ and $\Gamma$. A reduction in loading time from 65.25 s to 33.41 s was observed with an increase in the intensity of the cooling beam from 7.7 mW/cm$^{2}$ to 20.2 mW/cm$^{2}$, as shown in figure \ref{loadingcurve}. These intensity-dependent measurements of $\Gamma$ were carried out for different values of the dispenser current ($I_{D}$) and the results are shown in Figure \ref{gamma vs int}. As discussed before, the value of $\Gamma ( = \gamma_{b} + \beta \bar{n})$ in the low cooling beam intensity regime can be approximated as $\Gamma \, \sim \,\gamma_{b}$. Alternatively, as followed by Yuan et al. \cite{yuan}, the intercept on the y-axis in the $\Gamma$ vs cooling beam intensity plot (Figure \ref{gamma vs int}) can be used to estimate the value of $\gamma_{b}$. Figure \ref{gammavsdispenser} shows the $\gamma_{b}$ values estimated this way for different values of the dispenser current. As shown in figure \ref{gammavsdispenser}, the value of $\gamma_{b}$ increases rapidly with the dispenser current for currents beyond 4.0 A. However, at lower dispenser current values (below 4.0 A), the variation in $\gamma_{b}$ with current is negligibly small. This shows that the contribution to the loss rate from the Rb atoms in the background is negligible. Therefore, the value of $\gamma_{b}$ can be approximated as $\gamma_{non-Rb}$ in this regime. By estimating $\gamma_{non-Rb}$ this way, $\gamma_{Rb}$ can be estimated at any current value by measuring $\gamma_{b}$, as $\gamma_{non-Rb}$ is independent of the dispenser current. Thus, we can estimate both $\gamma_{non-Rb}$ and $\gamma_{Rb}$ in our method, without switching off the vacuum pumps as compared to earlier works \cite{arpo, moore}. In the very low pressure (UHV) regime, there are only a few gas species (H$_{2}$, He, Ar, etc.) which contribute to the total base pressure in the chamber ($P = \sum_{i} P_{i} = \sum_{i} k_{i} \gamma_{i}$). If we consider hydrogen (H$_{2}$) as a dominant species in the UHV pressure range as in our chamber \cite{red}, the measured $\gamma_{non-Rb}$ can be approximated as $\gamma_{H_{2}}$. This gives the non-rubidium partial pressure in the chamber as $P_{non-Rb} = k_{H_{2}} \, \gamma_{H_{2}} = \, k_{H_{2}} \, \gamma_{non-Rb}$ = $1.1 \times 10^{-10} \,$ Torr, with $k_{H_{2}} = 2.04 \times 10^{-8} $ Torr s and $\gamma_{non-Rb}$ = 0.0056 s$^{-1}$.
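For completeness, the conversion from the measured loss rate to the partial pressure is a single multiplication; the following short check reproduces the number quoted above (the values are those given in the text):
\begin{verbatim}
k_H2 = 2.04e-8          # Torr s, coefficient for H2 background, equation (3)
gamma_non_rb = 0.0056   # 1/s, loss rate estimated at low dispenser current
P_non_rb = k_H2 * gamma_non_rb
print(P_non_rb)         # ~1.1e-10 Torr
# At any dispenser current, gamma_Rb = gamma_b - gamma_non_Rb (equation (4))
# converts to the Rb partial pressure in the same way, P_Rb = k_Rb * gamma_Rb.
\end{verbatim}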
This estimated pressure of non-Rb background gases agrees well with that measured by the SIP controller ($1.5 \times 10^{-10}$ Torr). \begin{figure}[h] \centering\includegraphics[width=9.5cm]{Fig4.eps} \caption{Variation in the partial pressures due to Rb vapour ($P_{Rb}$) and non-rubidium gases ($P_{non-Rb}$). The total background pressure ($P_{b}$ \, = \,$P_{Rb}$ + $P_{non-Rb}$) is compared with the pressure measured by the SIP controller ($P_{SIP}$).} \label{pressures vs dispenser} \end{figure} After knowing the value of $\gamma_{Rb}$ at any dispenser current, the Rb partial pressure can be estimated by the relation $P_{Rb} = k_{Rb} \gamma_{Rb}$, with $k_{Rb} = 2.27 \times 10^{-8}\,$ Torr s \cite{yuan, arpo}. The variation of the estimated Rb pressure in the chamber with the dispenser current is shown in figure \ref{pressures vs dispenser}. In this figure, the total background pressure ($P_{b} \, = \, P_{Rb} \, + \, P_{non-Rb}$) estimated by our method is also shown and compared with the pressure read by the SIP controller. We note that at low dispenser current (less than 4.0 A), there is good agreement between the estimated total background pressure ($P_{b}$) and the pressure measured by the SIP controller ($P_{SIP}$). However, at higher values of dispenser current, the pressure estimated by the present method is higher than that read by the SIP controller. This difference can be attributed to the adsorption of Rb atoms at the chamber walls and the pipe connecting the SIP to the chamber. Similar observations have also been reported earlier \cite{arpo}. In conclusion, we have estimated the Rb and non-Rb partial pressure values in a UHV chamber from the loading data of a Rb-MOT on an atom-chip. The estimated pressure values agree with the pressure measured by the SIP controller. \\ We are thankful for the help extended by Amit Chaudhary and Dayanand Mewara for this work. \\ \noindent The authors declare no conflict of interest.
{ "arxiv_id": "2302.14249", "language": "en", "timestamp": "2023-03-01T02:06:53", "url": "https://arxiv.org/abs/2302.14249", "yymm": "2302" }
\section{Introduction} Many works have addressed the topic of humanoid robot movement control over the last few decades. Great challenges in the planning and control of biped robots include high degrees of freedom (DoFs), nonlinear dynamics, and balancing with one foot on the ground. In the existing literature, model-based walking methods use either a simplified robot model \cite{kajita2014introduction} or a whole-body dynamic modeling approach \cite{hirai1998development,huang1999high,yamaguchi1999development}. Both rely on the accuracy of the robot's inertial parameters, such as mass, center of mass (CoM), and link lengths. Complex control methods are also needed to compensate for modeling errors. The remaining literature on walking focuses primarily on reinforcement learning (RL). However, these RL methods often require a robot's kinematic model and inertial parameters to build simulation environments, or a pre-planned walking trajectory is used as a reference signal \cite{meyer2014machine, wen2015q, garcia2020teaching}. In addition, certain robustness adjustments, such as domain randomization, are needed for sim-to-real transfer \cite{li2021reinforcement}. A widely used model-based method is proposed by Kajita et al. \cite{kajita20013d,kajita2003biped}. They simplify the sophisticated nonlinear biped dynamics into the simple 3D Linear Inverted Pendulum Mode (LIPM). A preview controller is used to track zero-moment point (ZMP) trajectories. Because the walking pattern is produced from a specified ZMP trajectory, this method is also called ZMP-based walking pattern generation \cite{vukobratovic1972stability}. Different from using the CoM of the whole robot, the authors in \cite{ficht2023direct} utilize a five-mass distribution model to plan whole-body motion. The momentum vector is linearly related to the robot joint velocity vector. Thus, other research uses reference linear and angular momenta to generate whole-body motion, called Resolved Momentum Control \cite{kajita2003resolved, gong2021one}. Angular momentum is also used in Capture Point (CP) to prevent a legged robot from falling by taking N steps before stopping \cite{pratt2006capture,koolen2012capturability,pratt2012capturability}. The CP is calculated by extending the LIPM with a flywheel body. Mesesan et al. \cite{mesesan2019dynamic} combine CP and a passivity-based controller for a humanoid robot to walk over compliant terrain. Hybrid Zero Dynamics (HZD) \cite{westervelt2003hybrid,hereid2019rapid,sreenath2011compliant} is a full-order model-based method. It relies on input-output linearization, which can yield more natural robot walking dynamics. However, these methods rely on accurate dynamic modeling, which may not be precise in practice, and hence need external controllers. They also require high-performance CPUs and are computationally time-consuming. Additionally, these algorithms have high hardware requirements for robots. Thus, they may not be appropriate for small-sized and low-cost humanoid robots. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{Fig/gait.jpg} \caption{Planning CoP transition and foot placement for humanoid robot walking pattern. The red point is the CoP of the robot. The floating base has its origin $O$ on the center of the foot, which is on the same side as the CoP. For example, in transition a $\rightarrow$ e (including bcd), the CoP is moving on the right side of the robot, so the origin is on the right side.
The $z$-axis points up, and the $x$-axis points to the front of the robot.} \label{gait} \end{figure*} RL-based methods also show great potential in robot locomotion without detailed model-based planning and control in real-time. Instead, they rely heavily on offline training. However, directly training the RL policy on a real-world robot can be risky and may cause damage. Therefore, researchers typically train in simulation environments. To bridge the gap between simulation and the real world, complex robustness adjustments are required. In \cite{peng2017deeploco}, the authors demonstrate that a 3D biped can learn locomotion skills in simulation with limited prior model knowledge using a two-level hierarchical deep RL structure. Both levels use the same style of actor-critic algorithm. Xie et al. \cite{xie2018feedback} use a multi-layer neural network trained with a policy-gradient method to learn a feedback controller for robot walking in simulation. However, the method requires a pre-defined joint angle trajectory as a reference signal and assumes access to the robot's full state-space information. In reality, however, the states must be measured with noisy sensors. Combined with HZD, an RL framework is built in \cite{castillo2020hybrid} to learn a variable-speed controller for 3D robot walking. Nevertheless, the method is only validated in a customized simulation environment and not on a real robot. Other works aim to train stable policies for robots walking in complex environments. In \cite{krishna2021learning}, a gradient-free RL algorithm is used to achieve robust biped walking on varying sloped terrain in simulation. Ferigo et al. \cite{ferigo2021emergence} train a push-recovery whole-body motion planning policy in simulation with an RL framework. Although the works cited above obtain stable gait policies in simulation, the physics engines in the simulation environments still require robot dynamics models. That is, the above approaches cannot be generalized to robots without kinematic models or inertial parameters. In this paper, a model-free framework for humanoid robot movement control is introduced. As it is an initial effort in this area, only the quasi-static situation is addressed here. This proprioceptive framework depends only on sensor data and does not need any knowledge of robot parameters, such as inertial parameters and kinematic models. Thus, the difficulties induced by model uncertainty can be resolved. The approach can be directly implemented on real robots without pre-training in simulation or reference trajectories. Therefore, it has the potential to be extended to explore unknown environments. Compared to other frameworks, our method is much simpler, requiring only low computing power, data storage capacity, and graphics card performance. Furthermore, the loss function can converge in only dozens of updates, indicating that a stable motion trajectory can be obtained in a short amount of time, without prior knowledge of a robot. \section{Methodology}\label{Methodology} In order to evaluate the effectiveness of our method, we conduct two experiments, one focusing on walking and another on the self-calibration of shoe sensors. In this section, we provide a detailed explanation of the walking experiment as an example to illustrate our approach. To generate the motion trajectory for one walking cycle, several pairs of the CoP and foot positions are pre-designed. These pairs are treated as objectives for the robot to reach during its walking (Sec.~\ref{CoP Design}).
We then define a cost function based on the Cartesian distances from the measured CoP and foot positions to their corresponding objectives (Sec.~\ref{Cost Function Design}). Then, the robot slightly moves each of its leg joints forward and backward, while the robot's CoP and foot positions are collected by sensors at each step (Sec.~\ref{Experiment Setup}). After this, the local derivatives can be approximated using the finite difference of sensor outputs without an explicit function. Finally, an optimization algorithm is used to update the robot's motion until all the objectives are achieved (Sec.~\ref{optimization}). \subsection{CoP and Foot Objectives Design}\label{CoP Design} This part introduces the details of our objective CoP and foot position design. No prior knowledge of a robot's kinematic model or inertial parameters is used in this procedure. The ZMP reference trajectory is designed according to the humanoid robot's footsteps so that the robot stays in balance. The robot's slow movement can be approximated as quasi-static \cite{han2022watch}, in which case the ZMP and CoP are equivalent. A simplified CoP path is used as shown in Fig.~\ref{gait}. To begin, the robot is given the target of moving its CoP horizontally along the $y$-axis to the middle of the right foot while keeping both feet at the origin (Fig. \ref{gait}a $\rightarrow$ b). When the CoP is in the center of the right foot, the robot can lift its left foot vertically without falling, as in Fig. \ref{gait}b $\rightarrow$ c. After the left foot is lifted to a certain height, the robot can step forward by a certain distance $s$, as in Fig. \ref{gait}c $\rightarrow$ d. In the meantime, the CoP of the robot should also move forward by half of $s$, which is consistent with the idea that the CoP should stay close to the center of the support polygon \cite{erbatur2008natural}. Then, the robot moves its CoP horizontally along the $y$-axis back to the center, as in Fig. \ref{gait}d $\rightarrow$ e. The solved joint trajectories are stored and the motion of the other leg can just mirror the stored configuration trajectories due to the symmetry of biped robots. Therefore, the setup methods in transitions Fig. \ref{gait}e $\rightarrow$ f $\rightarrow$ g are the same as above with left and right swapped. \subsection{Cost Function Design}\label{Cost Function Design} The general formula of the cost function for each posture transition is \begin{equation} f = \frac{1}{2}(||{\bf C}-{\bf C}_d||^2_{{\bf w_{c}}}+\sum ||{\bf P}[j]-{\bf P}_{d}[j] ||^2_{{\bf w_{p}}}) \label{cost} \end{equation} where ${\bf C}=[x,y]^T$ is the measured CoP of the robot, ${\bf C}_d$ is the desired CoP, $j=l,r$ means left and right foot, ${\bf P} = [x,y,z,\alpha,\beta,\gamma]^T$ is the current position and Euler angles of the robot's foot, ${\bf P}_d$ is the desired foot posture, and ${\bf w_{c}}$ and ${\bf w_{p}}$ are the weights for the cost terms. Obviously, $f$ is a function of the robot joint angles, $\bf q$. As a result, one can find the derivatives of $f(\bf q)$ and minimize the cost. But in the present method, knowledge of $f({\bf q})$ as an analytical function is not required. The details will be shown in Sec.~\ref{Map}. According to Fig. \ref{gait}, the desired terms in the cost for each walking pattern transition are listed in Table~\ref{t}. Because the desired rotation of the robot's feet is always zero, only the 3-D positions of the feet are listed.
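As a concrete sketch of how Eq. \ref{cost} is evaluated from the sensor readings (our own illustrative code; scalar weights stand in for ${\bf w_{c}}$ and ${\bf w_{p}}$, and the numerical values are placeholders rather than entries of Table~\ref{t}):
\begin{verbatim}
import numpy as np

def walking_cost(C, C_d, P, P_d, w_c=1.0, w_p=1.0):
    """Weighted squared distances of the measured CoP and foot poses
    to their objectives; P and P_d map 'l'/'r' to 6-D foot poses."""
    cost = 0.5 * w_c * np.sum((C - C_d) ** 2)
    for j in ("l", "r"):
        cost += 0.5 * w_p * np.sum((P[j] - P_d[j]) ** 2)
    return cost

C, C_d = np.array([0.003, 0.021]), np.array([0.0, 0.04])   # measured / desired CoP
P = {"l": np.zeros(6), "r": np.zeros(6)}                   # measured foot poses
P_d = {"l": np.array([0.0, 0.08, 0.0, 0.0, 0.0, 0.0]), "r": np.zeros(6)}
print(walking_cost(C, C_d, P, P_d))
\end{verbatim}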
\begin{table}[H] \caption{Desired terms of the cost function for each gait transition} \centering \begin{threeparttable} \begin{tabular}{|c|c|ccc|} \hline \multirow{2}{*}{Index} & \multirow{2}{*}{Gait Transition} & \multicolumn{3}{c|}{Desired Terms} \\ \cline{3-5} & & \multicolumn{1}{c|}{${{\bf C}_d}^T$} & \multicolumn{1}{c|}{${{\bf P}_{d}[l]}^T$} & ${{\bf P}_{d}[r]}^T$ \\ \hline 1 & {\bf a} $\rightarrow$ {\bf b} & \multicolumn{1}{c|}{\multirow{2}{*}{$[0\;0]$}} & \multicolumn{1}{c|}{${[}0\;W\;0{]}$} & \multirow{4}{*}{${[}0\;0\;0{]}$} \\ \cline{1-2} \cline{4-4} 2 & {\bf b} $\rightarrow$ {\bf c} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{${[}0\;W\;h{]}$} & \\ \cline{1-4} 3 & {\bf c} $\rightarrow$ {\bf d} & \multicolumn{1}{c|}{$[s'\;0{]}$} & \multicolumn{1}{c|}{\multirow{2}{*}{${[}s\;W\;0{]}$}} & \\ \cline{1-3} 4 & {\bf d} $\rightarrow$ {\bf e} & \multicolumn{1}{c|}{$[s'\;w]$} & \multicolumn{1}{c|}{} & \\ \hline 5 & {\bf e} $\rightarrow$ {\bf f} & \multicolumn{1}{c|}{$-[s'\;0{]}$} & \multicolumn{1}{c|}{${[}0\ 0\ 0{]}$} & $-[s\; W\;0{]}$ \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item where $s'=s/2$ and $W=2w$ \end{tablenotes} \end{threeparttable} \label{t} \end{table} \subsection{CoP and Foot Positions Measurement}\label{Experiment Setup} The robot-sensor overview is shown in Fig. \ref{exp}a, which includes a NAO H25 V6 robot, a pair of force-sensing shoes, three April tags, and a camera. The force-sensing shoes are the same as those in \cite{blueshoe}. We further attach an April tag to the bottom of each of the robot's feet to track the foot positions (Fig. \ref{exp}b). The April tags attached to the feet are called foot frames, and the one on the test bench is called the world frame. \begin{figure}[t!] \centering \includegraphics[width=0.95\linewidth]{Fig/exp.jpg} \caption{(a) Overview of the experiment setup: a NAO robot, a pair of force-sensing shoes, three April tags, and a camera. (b) Force-sensing shoe. (c) April tags scanned by the camera.} \label{exp} \end{figure} \subsubsection{\bf{CoP Sensing Principle}} Each force-sensing shoe has four load cells at the four corners of the shoe and measures the CoP. Multiplying the force $f_i$ measured by each sensor by its corresponding 2D position ${\bf p}_i = [p_{ix}\;p_{iy}]^T$, summing over the sensors, and dividing by the sum of all the forces gives the CoP ${\bf C}$ in Eq. \ref{cost}: \begin{align} {\bf C} = \sum_{i=1}^{4}f_{i}{\bf p}_{i}/ \sum_{i=1}^{4}f_{i}. \label{CoP_Equ} \end{align} By using all eight sensors, the same sensing principle that works for single support can also be used for double support. \begin{figure}[t!] \centering \includegraphics[width=0.78\linewidth]{Fig/NAOb.jpg} \caption{Joints used for walking are marked with circled arrows. (a) Top, middle, and bottom are R(L)HipPitch, R(L)KneePitch, and R(L)AnklePitch, respectively. (b) Top and bottom are R(L)HipRoll and R(L)AnkleRoll, respectively. R denotes right and L denotes left.} \label{NAO} \end{figure} \subsubsection{\bf{Foot Positions Measurement Principle}}\label{position} There are four reference frames in the experiment: the camera frame, the world frame, the foot frames, and the floating base (Fig. \ref{gait}). Taking transition a $\rightarrow$ b as an example, to calculate ${\bf P}[j]$ in Eq. \ref{cost} one needs the poses of the foot frames with respect to the floating base. From the camera and the three April tags, one can directly obtain the transformations of the world frame and foot frames relative to the camera frame as ${\bf T}_0$ and ${\bf T}_j$, where $j=l,r$ denotes the left and right foot.
Expressing ${\bf T}_j$ in the world frame gives \begin{align} {{\bf T}_0^j} = {{\bf T}_0}^{-1} {\bf T}_j = \left[\begin{array}{cc} {\text R_0^j} & {\bf p}_0^j \\ \bf{0}^{T} & 1 \end{array}\right] \label{T} \end{align} where ${\text R_0^j} ={\text{R$_z$}} (\gamma_0^j) {\text{R$_y$}} (\beta_0^j){\text{R$_x$}} (\alpha_0^j)$. As shown in Fig. \ref{gait}, the floating base is attached to the shoe (Fig. \ref{exp}a). It coincides with the right foot frame except that it always stays parallel to the test bench; that is, it does not rotate about the $x$- or $y$-axis of the world frame, so $\beta_0^r=0$ and $\alpha_0^r=0$. Therefore, from Eq. \ref{T}, the pose of the floating base with respect to the world frame is \begin{align} {{\bf T}_0^{s}} = \left[\begin{array}{cc} {\text {R$_z$}}(\gamma_0^r) & {\bf p}_0^r \\ \bf{0}^{T} & 1 \end{array}\right] \end{align} The pose of the left foot frame with respect to the floating base is \begin{align} {{\bf T}_{s}^l} = {{\bf T}_0^{s}}^{-1} {\bf T}_0^l = \left[\begin{array}{cc} {\text R_s^l} & {\bf p}_s^l \\ \bf{0}^{T} & 1 \end{array}\right] \end{align} where ${\text R_s^l} ={\text{R$_z$}} (\gamma_s^l) {\text{R$_y$}} (\beta_s^l){\text{R$_x$}} (\alpha_s^l)$. As a result, ${\bf P}[l]$ in Eq. \ref{cost} is \begin{align} {\bf P}[l] = [ {{\bf p}_s^l}^T \, \alpha_s^l \ \beta_s^l \ \gamma_s^l]^T \end{align} The pose of the right foot frame relative to the floating base and the sensors' positions can be derived by the same process. \subsection{Optimization Algorithms}\label{optimization} This section describes the optimization algorithm used to drive the robot's measured CoP and foot positions toward their objectives. A simple gradient-based method is used. \begin{figure*}[t!] \centering \includegraphics[width=0.99\textwidth]{Fig/exp1.jpg} \caption{Two example transitions. (a) Evolution of the robot’s motion, its CoP, and the cost for transition a $\rightarrow$ b. Grey rectangles are the robot shoes. (b) Evolution of the robot’s foot motion and the cost for transition b $\rightarrow$ d.} \label{res} \end{figure*} {\bf Gradient Descent (GD)} Given a multi-variable function $f({\bf q})$, the iterative update of GD \cite{boyd2004convex} is \begin{equation} {{{\bf q}}_{t+1}} = {{{\bf q}}_t} - \eta \nabla {f({{\bf q}}_t)} \end{equation} where $\nabla{f} = \frac{\partial f}{\partial {\bf q}}$ is the gradient of the objective function and $\eta$ is a step size that can be determined in different ways, such as the Armijo rule \cite{armijo1966minimization}. To approximate the derivatives in the update above, finite differences \cite{enwiki:1105295784} are used. In the literature, the finite difference method is often used for its speed. Here it is used because there is no explicit function whose derivative can be computed analytically: the function is empirical, arising directly from the sensor measurements and from how their values change as the robot configuration is slightly varied. \section{Experiment \romannumeral1}\label{expi} \subsection{NAO Robot} In this paper, the NAO V6 robot is used to test the general method. Its mass is $5.3$ kg and its height is $57$ cm. The robot has 25 degrees of freedom (DoFs). In this model-free biped walking framework, the five joints per leg shown in Fig. \ref{NAO} are considered. \subsection{Model-Free Mapping of CoP Trajectory and Foot Placement to Joint Angles}\label{Map} In a typical humanoid robot, the Pitch joints (Fig. \ref{NAO}a) rotate about the $y$-axis, and the Roll joints (Fig. \ref{NAO}b) rotate about the $x$-axis.
Due to this structure, the robot can move the body's CoP horizontally along the $y$-axis by simply rotating the four Roll joints of both legs, as in Fig. \ref{gait}a $\rightarrow$ b and d $\rightarrow$ e $\rightarrow$ f. For the same reason, it can lift one of its feet and step forward by rotating the Pitch joints, as in Fig. \ref{gait}b $\rightarrow$ c $\rightarrow$ d. Taking transition Fig. \ref{gait}a $\rightarrow$ b as an example: assuming the joint vector in Fig. \ref{gait}a is ${\bf q}_0=[q_1\;q_2\;q_3\;q_4]^T$, the NAO rotates one of its joints forward by a small angle, giving the cost value $f(q_1+\Delta q_1,q_2,q_3,q_4)$, which can be used to estimate the gradient of $f({\bf q}_0)$. Each time the robot turns one or more joints and reaches a pose at which the cost needed for the derivative calculation can be evaluated is called a \textit{search}. With the finite-difference scheme of Section \ref{optimization}, it takes 8 searches (two per joint) to estimate the gradient of $f({\bf q}_0)$ for one update. The joints are then updated to the next state ${\bf q}_{01}$ by the optimization method in Section \ref{optimization}. The robot repeats this cycle until the cost falls below a threshold or the difference between the costs of two consecutive updates is small enough. The same procedure is used for the other transitions in Fig. \ref{gait}; the only difference is that the rotated joints are the Pitch joints (Fig. \ref{NAO}a) in transition Fig. \ref{gait}b $\rightarrow$ c $\rightarrow$ d. \subsection{Demonstrations and Results} \label{Demo and Results} To test the effectiveness of the method, we implemented it directly on a real robot without simulation. GD with a fixed step size is used to minimize the cost function in the real-world experiments. Transition examples are displayed in Fig. \ref{res}. Because transitions d $\rightarrow$ e $\rightarrow$ f, like a $\rightarrow$ b, only move the CoP horizontally along the $y$-axis, only examples of a $\rightarrow$ b $\rightarrow$ d are shown. There are two problems for the NAO hardware when it supports its entire weight on a single leg: \textbf{1.} \textit{The HipRoll joint is unable to match the reference position. This is because single support requires very high torque from this joint. As a result, the swinging leg does not leave the ground properly and will touch the ground prematurely, which destabilizes the robot} \cite{gouaillier2010omni}. \textbf{2.} \textit{For small-sized humanoid robots with under-powered motors, single support generates excessive torque. This results in the NAO robot platform suffering from severe leg joint overheating issues} \cite{han2022watch}. To address the first problem, the authors in \cite{gouaillier2010omni} design a trapezoid-function feedback controller based on pre-planned joint trajectories. However, their method is model-based and is therefore not suitable for our model-free framework. As for the second problem, the authors in \cite{han2022watch} choose double support rather than single support for safety reasons. Therefore, we perform the transition Fig. \ref{gait}b $\rightarrow$ d directly, skipping posture c, and choose a ${\bf C}_d$ closer to the center of the double-support polygon to reduce the joint torque. The results of generating gait trajectories with our model-free framework are displayed in Fig. \ref{res}. Fig. \ref{res}a shows that the CoP approaches the target location and the cost function converges within a dozen updates. The robot can then step forward after several further updates (Fig. \ref{res}b).
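For concreteness, the search-and-update loop used to generate these trajectories can be sketched as follows. This is a minimal illustration in Python: \texttt{cost\_fn} stands for a hypothetical routine that commands a joint vector on the robot and returns the measured cost of Eq. \ref{cost} from the shoe and camera readings, a helper for the CoP of Eq. \ref{CoP_Equ} is included for completeness, and the perturbation size, step size, and tolerances are illustrative values rather than the ones used in the experiments.
\begin{verbatim}
import numpy as np

def cop_from_loadcells(forces, positions):
    # CoP as the force-weighted average of the load-cell positions.
    # forces: shape (n_sensors,), positions: shape (n_sensors, 2)
    forces = np.asarray(forces, dtype=float)
    positions = np.asarray(positions, dtype=float)
    return forces @ positions / forces.sum()

def finite_difference_gradient(cost_fn, q, delta=np.deg2rad(2.0)):
    # Central differences: two "searches" per joint (8 for a 4-joint vector).
    grad = np.zeros_like(q)
    for i in range(len(q)):
        q_plus, q_minus = q.copy(), q.copy()
        q_plus[i] += delta
        q_minus[i] -= delta
        grad[i] = (cost_fn(q_plus) - cost_fn(q_minus)) / (2.0 * delta)
    return grad

def gradient_descent(cost_fn, q0, eta=0.5, tol=1e-3, max_updates=50):
    # Update the joints until the cost, or its change between updates,
    # falls below the tolerance.
    q = np.asarray(q0, dtype=float)
    prev_cost = cost_fn(q)
    for _ in range(max_updates):
        q = q - eta * finite_difference_gradient(cost_fn, q)
        cost = cost_fn(q)
        if cost < tol or abs(prev_cost - cost) < tol:
            break
        prev_cost = cost
    return q
\end{verbatim}
On the real robot, each call to \texttt{cost\_fn} corresponds to one search, i.e., physically moving the joints and reading the shoes and camera; a fixed step size is used, as noted above.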
No collision problems occurred in the experiments, because the robot's joints only turn by a small angle at each search and the desired left and right foot positions in the cost function are kept a certain distance apart. The overall results demonstrate that our approach can generate a stable gait trajectory without the robot's inertial parameters, a kinematic model, reference joint signals, or a training process. \section{Experiment \romannumeral2}\label{expi2} In our previous work \cite{han2022watch}, we utilized a robot model to plan whole-body trajectories and relied on modeled CoP and ground reaction force (GRF) as reference data to enable the robot to self-calibrate its shoe sensors. However, this model-based approach may not always be practical, especially when the robot is repaired or retrofitted with additional equipment, which may invalidate the robot model. To overcome this limitation, we integrate our model-free framework into the self-calibration process. This model-free self-calibration (MFSC) approach allows the robot to self-calibrate its shoe sensors without the need for a pre-existing model. \subsection{Sensor Parameter Identification} Our proposed method comprises two parts: 1. While the shoe sensors are still accurate, the robot follows planned CoP paths using our model-free method (Section \ref{Methodology}), and the measured CoP and GRF are saved as reference ground-truth data. 2. When the sensors lose accuracy, the robot replays the previous trajectories, and the sensor parameters are optimized by reducing the difference between the reference data and the data currently measured during the robot's movement. The load cell sensing principle is \begin{align} f = aS+b, \label{affine} \end{align} where $S$ is the voltage output of the load cell and $a$ and $b$ are constant coefficients. To determine the load cells' parameters, we use nonlinear least squares (NLS) to minimize the error between the sensors' current raw measurements and their corresponding reference GRF and CoP values (Fig. \ref{SC}a and b). The optimization is formulated as: \begin{align}\label{GRF_recovery} \underset{{\bf \zeta}}{\text{argmin}}\quad J = \sum_{k=1}^{N}(|n[k] - n_r[k]|^{2}_{{\bf w_{n}}}+ \\ \nonumber ||{\bf c}[k] - {\bf c}_{r}[k]||^{2}_{{\bf w_{c}}} + ||{\bf f}[k]-{\bf f}_{r}[k]||^{2}_{{\bf w_{f}}}), \end{align} where ${\bf \zeta} = [(a_{1},b_{1}),\hdots,(a_{8},b_{8})]$ are the optimization variables; $k$ indexes the $N$ training points; ${n}$ and ${\bf c}$ represent the current GRF and CoP measurements; $ n_{r}$ and ${\bf c_{r}}$ are their reference data; and ${\bf f}$ and ${\bf f}_r$ represent the measured force and reference force of each load cell. The weights for the cost terms are denoted $w_{n}$, $w_{c}$, and $w_{f}$. \subsection{Initial Guess of Sensor Parameters} A reasonable initial guess is required for the NLS process. Given that all sensors in the shoes are of the same type, we use the same initial estimate $[a_0, b_0]$ for all sensors' parameters. We assume that the GRF and CoP measurements obtained with this initial guess are approximately equal to their corresponding reference values, \begin{align} \begin{bmatrix} n_r\\ {\bf c}_{r} \end{bmatrix} \approx \begin{bmatrix} \sum_{i=1}^{8}(a_{0}S[i]+b_{0})\\ \sum_{i=1}^{8}(a_{0}S[i]+b_{0}){\bf p}[i]/\sum_{i=1}^{8}(a_{0}S[i]+b_{0}) \end{bmatrix}. \end{align} \begin{figure}[t!] \centering \includegraphics[width=0.99\linewidth]{Fig/SC.jpg} \caption{Model-free self-calibration.
(a) Top: example of the designed configurations. Bottom: the black rectangles are the sensing polygons of each foot, the black line is the CoP trajectory, and the blue arrow is the robot's moving direction. The blue circles and red squares indicate the reference and initial-guess CoPs. The yellow and black triangles represent the MFSC results on the training and testing datasets. (b) and (c) Blue is the average of the reference data. Red and yellow show the measured data from the initial estimation and MFSC. The white and grey backgrounds represent the training and testing datasets.} \label{SC} \end{figure} Here $i$ is the sensor index, and ${\bf p}$ is the sensor position obtained from the camera. To include more training samples, the equations can be rearranged as \begin{align} \label{initial_guess_equ} \begin{bmatrix} \vdots\\ {\bf c}_{r}[k]n_{r}[k] \\ \vdots \end{bmatrix} \approx \begin{bmatrix} \vdots & \vdots\\ \sum_{i=1}^{8}S[k][i]{\bf p}[k][i]&\sum_{i=1}^{8}{\bf p}[k][i]\\ \vdots & \vdots \end{bmatrix} \begin{bmatrix} a_{0}\\ b_{0} \end{bmatrix} \end{align} where $k$ indexes the training samples. The initial guess $(a_0,b_0)$ can therefore be obtained by least-squares regression. \subsection{Performance Evaluation}\label{error} To assess the accuracy of the shoe measurements, the mean absolute error (MAE) is used. The MAE of the GRF, $e_{n}$, is given by \begin{align} &e_{n} = (\sum_{k=1}^{N}|n[k] - n_r[k]|)/N \label{eGRF}, \end{align} and the MAE of the CoP, $e_{c}$, is defined as \begin{align} &e_{c} = (\sum_{k=1}^{N}||{\bf c}[k] - {\bf c}_r[k]||)/N \label{eCoP}. \end{align} \subsection{Demonstrations and Results} To verify the efficacy of our MFSC approach, we compare the shoe measurements obtained with the initial-guess parameters to those obtained with the model-free self-calibrated parameters, as shown in Fig. \ref{SC}a and b. In this case, the camera only serves as an unknown mass, making the robot model uncertain. The corresponding MAE values for the two datasets are displayed in Fig. \ref{SC}c. The results indicate that the GRF ($3.5$ N MAE) and CoP ($4.5$ mm MAE) measurements obtained with the initial estimation parameters are far from the ground truth (Fig. \ref{SC}c). In contrast, the MAEs of the GRF and CoP measured with our MFSC approach are around $0.09$ N and $0.8$ mm, respectively, which is much closer to the ground truth. Additionally, the performance on the training dataset (Fig. \ref{SC}c, white background) and the testing dataset (grey background) does not differ significantly, indicating that there is no over-fitting. Overall, the results suggest that our model-free self-calibration method can effectively recover sensor measurements without relying on robot models, manual intervention, or initial sensor data. \section{Conclusion and Future work} This research presents a novel model-free framework for quasi-static humanoid robot walking and for self-calibrating foot force-sensing modules based only on sensor outputs. The proposed approach uses gradient descent to optimize an objective function built from the measured CoP and positions of the robot's feet. We have implemented this framework directly on a real robot. The results show that it is capable of planning a steady walking trajectory without prior knowledge of the robot's inertial parameters or kinematic model. Although demonstrated on the NAO, this proprioceptive framework can in principle be generalized to other robots. The method only needs an initial planning run to generate a motion, which can then be saved for future use.
The framework utilizes rich sensor information and can be integrated with other model-based techniques to explore unknown environments. Future work will involve adding filters to reduce sensor noise and implementing stochastic algorithms on various robots. With faster computation, dynamic proprioceptive motion could also be achieved by including the velocity and acceleration spaces in the search process. \section{Acknowledgement} This work was supported by NUS Startup grants A-0009059-02-00, A-0009059-03-00, CDE Board account E-465-00-0009-01, Tier 2 grant REBOT A-8000424-00-00, and National Research Foundation, Singapore, under its Medium Sized Centre Programme - Centre for Advanced Robotics Technology Innovation (CARTIN), sub award A-0009428-08-00. \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.14316", "language": "en", "timestamp": "2023-03-01T02:09:21", "url": "https://arxiv.org/abs/2302.14316", "yymm": "2302" }
\section*{Data availability} The data that support the findings of this study are available from the corresponding authors upon reasonable request. \section*{Author contributions} \label{sec:authorcontributions} XG, ZD, BCD, JS, and ES performed the experiments: XG, THz s-SNOM measurements; ZD, XPS and sample etching; BCD, AFM and nano-FTIR measurements; JS, GIWAXS measurements. XG, ZD, PJ and JS analyzed the data. ZD, XG and PJ prepared the manuscript, with contributions from all authors. PJ and ADR supervised the project. All authors discussed the results and reviewed the manuscript. \section*{Acknowledgements} \label{sec:acknowledgements} The authors acknowledge that UQ operates on the land of the Jagera and Turrbal peoples. We pay our respects to their Ancestors and their descendants who continue cultural and spiritual connections to Country. The authors acknowledge the facilities, and the scientific and technical assistance, of the Microscopy Australia Facility at the Centre for Microscopy and Microanalysis, The University of Queensland. This work used the Queensland node of the NCRIS-enabled Australian National Fabrication Facility (ANFF). Financial support was provided by the Australian Research Council’s Discovery Projects’ funding scheme (No. DP210103342), the ARC Centre of Excellence for Engineered Quantum Systems (EQUS, No. CE170100009), and the Foundational Questions Institute Fund (Grant No. FQXi-IAF19-04). JS acknowledges support from Maarten B. J. Roeffaers and from the Research Foundation - Flanders (FWO: Grant No. 12Y7221N, V400622N). The authors thank the staff of the BL11 NCD-SWEET beamline at ALBA Synchrotron for their assistance in recording the GIWAXS data. \section*{Supplementary Information} \label{sec:supp} \subsubsection*{Surface treatment} Tantalum on sapphire wafers ($200$ nm Ta, c-plane sapphire; protective photoresist capping layer) were purchased from Star Cryoelectronics. The photoresist-coated samples were diced from the same wafer to dimensions of $4 \times 7$ mm$^2$. Photoresist was removed from the diced chips by a 3 min sonication clean in very large scale integration (VLSI) grade acetone, followed by a further 3 min sonication in VLSI grade IPA. The samples were then rinsed under running DI water ($18.2$ M$\Omega$ cm) for 15 s and blow-dried with nitrogen gas. We refer to samples in this state (photoresist removed, no further treatment) as solvent cleaned. All samples undergo this photoresist removal step before any further surface treatment. Piranha-cleaned samples were submerged in a 3:1 piranha solution (30 ml H$_2$SO$_4$ to $10$ ml H$_2$O$_2$) for $20$ min, then rinsed in two DI water beakers for 2 min each and dried with nitrogen gas. Buffered oxide etched (BOE) samples were submerged in a buffered HF solution (7:1 ammonium fluoride to hydrofluoric acid) for $150$ s, followed by two separate DI water dips for $15$ s and $10$ s and a nitrogen gas blow-dry. Reactive ion etching (RIE) was performed in an Oxford Instruments PlasmaPro 80 Reactive Ion Etcher. Etching used carbon tetrafluoride (CF$_4$) gas at a flow rate of $30$ sccm ($20$ mTorr) and a plasma power of $200$ W. An etch rate of $\sim 11$ nm/min was determined by comparing masked and unmasked regions of Ta test samples with a profilometer (Tencor™ P-7 Stylus Profiler). Samples for AFM/SNOM and XPS analysis were etched for 5 min at a plasma power of $200$ W, removing $\sim 55$ nm of material.
\subsubsection*{X-ray photoelectron spectroscopy} X-ray photoelectron spectroscopy (XPS) measurements were performed on a Kratos AXIS Supra+ system equipped with monochromated Al K$\alpha$ ($1486.6$ eV) with a beam spot size of $300$ $\mu$m. The XPS chamber base pressure was $<1\times 10^{-8}$ mbar. Survey scans ($0$ - $1200$ eV) and high-resolution scans of the Ta 4f, O 1s, C 1s, and F 1s levels were obtained at emission angles of $0^{\circ}$, $40^{\circ}$ and $70^{\circ}$ for each sample. The Al K$\alpha$ source was used across all samples. Tantalum films were grounded by clips and no sample charging was observed. XPS analysis was performed in CasaXPS \cite{fairley_systematic_2021}, where peaks were fitted using a symmetrical Gaussian/Lorentzian line-shape and Shirley background subtraction. The Ta 4f peak was fitted with doublet components from metallic Ta and Ta$_2$O$_5$. To use a minimal fitting model that accounts for details seen in angle-resolved XPS data across all treatments, we implement an additional doublet at intermediate binding energy to account for suboxide contributions. O 1s spectra were fit with three components corresponding to lattice oxide, surface hydroxyls, and carboxyl groups. The oxide thickness $d$ was estimated from the intensity ratio of the oxide (taken as the sum of Ta$_2$O$_5$ and suboxides) and metal components in the Ta 4f peak by \cite{alexander_quantification_2002} \begin{equation} d_{XPS} (\text{nm})=\lambda_o \sin\theta \ln\left(\frac{N_m \lambda_m I_o}{N_o \lambda_o I_m} + 1 \right), \end{equation} where $\lambda_m$ ($\lambda_o$) is the inelastic mean free path (IMFP) for electrons in Ta metal (oxide) at a given energy ($\lambda_{o}=2$ nm for Ta$_2$O$_5$, $\lambda_m=1.964$ nm), and $\theta$ is the emission angle (measured as the angle between the surface normal and detector). $I_o/I_m$ is the ratio of the oxide ($o$) and metal ($m$) peak intensities measured by XPS at the given emission angle $\theta$, and $N_m/N_o$ is the ratio of volume densities of Ta atoms ($N_m=0.092$, $N_o=0.065$). We limit ourselves to a bilayer model considering a bulk Ta layer and a mixed oxide layer incorporating both stoichiometric Ta$_2$O$_5$ and suboxides~\cite{li_tuning_2019}. The intensity of metallic Ta ($I_m$) is determined by the scaled Ta peak area, and the oxide intensity ($I_o$) is taken as the sum of the Ta$_2$O$_5$ and suboxide scaled peak areas. \subsubsection*{Scattering-type scanning near-field optical microscopy} In an s-SNOM, a metallic probe tip periodically taps the sample surface, which is simultaneously illuminated by an electromagnetic stimulus, e.g., THz radiation. The probe tip is transiently polarized by the incident illumination and thereby forms a highly concentrated electric field --- a nanofocus --- near its apex. As the nanofocus can be positioned with nanometer precision and the probe tip radius is $\sim60~\textrm{nm}$, THz s-SNOM is able to resolve nanoscale THz responses correlated to AFM sample topography. Collecting white-light (spectral-averaged) images enables a spatial mapping of the near-field optical response. In contrast, THz nanospectroscopy provides a frequency-dependent imprint of the interrogated material within the nanofocus~\cite{ChenXZ2019Review, Cocker2021Review}. Therefore, THz s-SNOM is able to bypass the typical Rayleigh diffraction limit and provides nanoscale insight into lattice dynamics and electronic processes, including the characterization of amorphous and crystalline material phases~\cite{ChenC2020, Barnett2021}.
A commercial s-SNOM (neaSNOM, neaspec GmbH, Haar, Germany) is used for near-field experiments with commercial probes (THz: 25PtIr200B-H45, Rocky Mountain Nanotechnology, LLC, Bountiful, USA; nano-FTIR: neaspec nano-FTIR probes). The metallic probe, operated in tapping-mode atomic force microscopy, is illuminated by a laser beam for near-field measurements. The tip-scattered field, $E(t)$, was demodulated at high-order harmonics ($n\geq 2$) of the tip oscillation frequency to obtain near-field signals $S_n(t)$. \textbf{Nano-FTIR:} the beam comes from an integrated laser source (Toptica Photonics AG, Gr{\"a}felfing, Germany), generated by a difference-frequency mixer in which two near-infrared (NIR) pulse trains are superposed. An asymmetric Michelson interferometer is employed to obtain interferograms for mid-IR spectral information. The backscattered field from the tip is directed to a mercury cadmium telluride detector. \textbf{THz s-SNOM:} broadband THz radiation ($0$ to $6$ THz) is generated from a photoconductive antenna (low-temperature-grown InGaAs/InAlAs, Menlo Systems GmbH, Martinsried, Germany) under excitation by a femtosecond NIR ($1560$ nm) laser. The forward tip-scattered component is detected by another photoconductive antenna gated by a NIR pulse identical to the excitation beam. For white-light nanoimaging, the time delay ($\sim$picoseconds) between the excitation and sampling pulses that yields the strongest THz scattering amplitude is selected to map the spatially varying, spectrally averaged (white-light) THz response. For THz nanospectroscopy, the optical scanning delay line is swept at the point of interest to obtain the time-dependent THz scattering field, which is then processed by lock-in detection and Fourier transformation. \textbf{High-aspect-ratio AFM:} a Bruker TESPA-HAR probe was used for tapping-mode AFM imaging. \subsubsection*{Synchrotron-based grazing incidence wide angle scattering} Room-temperature GIWAXS of the Ta films was recorded at the NCD-SWEET beamline (ALBA synchrotron in Cerdanyola del Vall\`es, Spain) with a monochromatic ($\lambda = 0.9574$~\AA) X-ray beam of $80 \times 30$~$\mu$m$^2$~[H $\times$ V], monochromatized with a Si (111) channel-cut monochromator and collimated using an array of Be lenses. The scattered signal was recorded using an LX255-HS area detector (Rayonix, LLC, Evanston, USA) placed $229.4$~mm from the sample position. Detector tilts and the sample-to-detector distance were calibrated using Cr$_2$O$_3$. GIWAXS frames were recorded at incident angles ($\alpha_i$) between $0^{\circ}$ and $0.35^{\circ}$, remaining in the surface-sensitive evanescent scattering regime, below full penetration of the Ta film. Continuous N$_2$ flow over the sample was employed during the measurements to ensure the sample remained in a chemically inert atmosphere during irradiation. Collected 2D images were azimuthally integrated using PyFAI~\cite{ashiotis_fast_2015} and processed using a custom Python routine. \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/ARXPS_Ta4f.pdf} \caption{Angle resolved XPS (ARXPS) of the Ta~4f levels.} \label{fig:S1} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/ARXPS_O1s.pdf} \caption{ARXPS of the O~1s levels.} \label{fig:S2} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/ARXPS_C1s.pdf} \caption{ARXPS of the C~1s levels.
On the RIE-etched sample, the C~1s spectrum has components at higher binding energy ($290$ - $295$ eV) which are attributed to C-F bonds formed during etching with CF$_4$.} \label{fig:S3} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/thzabsorption.png} \caption{THz spectral absorption. Absorption coefficients, without the frequency-dependent normalization, for (a) a solvent-cleaned and (b) an RIE-etched sample in regions A (red) and B (blue), with high-resistivity silicon (black) as a reference over the probed THz bandwidth. Near-field scattering signals demodulated at multiple high-order harmonics ($n\geq 2$) of the probe oscillation frequency are used to suppress the background noise. The localized absorption coefficient of the un-etched sample in region B ((a), blue) shares the same characteristic behavior as reported far-field THz time-domain spectroscopic measurements on a glassy system~\cite{mori_detection_2020}; the same holds for the absorption coefficient normalized by the square of the frequency (Figure 2b).} \label{fig:S4} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/supp4.pdf} \caption{Relative stripe directions at a domain intersection. (a) $5 \times 5$ $\mu$m AFM image and (b) $10 \times 10$ $\mu$m THz nanoimaging ($3^\text{rd}$ harmonic) of nanoridges found in region A meeting at a domain intersection. As expected for bcc Ta, the stripes meet at a relative angle close to $110^{\circ}$. The images were taken with the s-SNOM probe tip used for THz nanospectroscopy ($\sim60$~nm tip radius).} \label{fig:S5} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/giwaxs.pdf} \caption{Surface-sensitive study of metallic tantalum and surface tantalum (sub)oxide using synchrotron-based GIWAXS. (a) Schematic illustration of the scattering geometry of synchrotron GIWAXS measurements (ALBA Synchrotron) performed on Ta thin films. The incident X-ray beam ($\lambda = 0.95774$ Å, $12.95$ keV) scatters from the sample under a grazing angle, projecting diffraction signals onto the large-area imaging detector. Depending on the polycrystalline texture (i.e. orientation and distribution of scattering domains), scattering rings may only appear in a certain azimuthal direction, $\chi$. Integrated data, derived by integrating over the whole image (i.e. $q_{x,y,z}$), are used for the profile analysis in the main text. (b) A schematic illustrating the dependence of penetration depth ($\Lambda$) on the incident angle $\alpha_i$ and critical angle $\alpha_c$. With the Ta and TaO$_x$ materials having a high-frequency refractive index less than 1, the incident X-rays are totally reflected at shallow incident angles ($\alpha_i \leq \alpha_c$), while at higher incident angles they penetrate the material ($\alpha_i > \alpha_c$). The transition between these disparate scattering conditions defines the critical angle ($\alpha_c$) and depends on the X-ray energy and the material dielectric properties. (c) The calculated penetration depth ($12.95$ keV) for Ta (both $\alpha$- and $\beta$-phase material densities~\cite{abadias_elastic_2019}) in comparison to fully (Ta$_2$O$_5$) and partially (TaO) oxidized Ta, which is relatively less dense~\cite{mathieu_aes_1984}. The open circles indicate the calculated critical angle, $\alpha_c$.
For angles below and near the critical angle, the X-ray electromagnetic field only interacts a short distance below the film surface due to evanescent damping ($5$ - $10$ nm); this constitutes the evanescent regime. Fortuitously, the less-dense oxide layer acts as a waveguide in this regime, amplifying TaO$_x$-related signals arising from the top surface.} \label{fig:S6} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.6\linewidth]{figures/1dscattering.pdf} \caption{Full 1D scattering profile of integrated GIWAXS signals recorded from Ta films exhibiting a fresh (solvent-cleaned) and partially oxidized (aged) surface.} \label{fig:S7} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/depthprofile.pdf} \caption{Surface depth profiles of Ta films. Integrated GIWAXS profiles recorded as a function of incident angle (values inset) from (a) a fresh and (b) an aged solvent-cleaned Ta film. As the angle of incidence ($\alpha_i$) nears the critical angle of the Ta metal film ($\sim 0.32^{\circ}$), the $\beta$-Ta phase signals disappear in (a), indicating that they arise from the surface rather than the bulk. Likewise, while $\alpha_i<0.34^{\circ}$, the relatively less dense TaO$_x$ has its scattering signal amplified by waveguide-like effects, before the angle becomes high enough for the beam to stop reflecting off the Ta surface and penetrate the bulk.} \label{fig:S8} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{figures/crystals.pdf} \caption{Orientation and distribution of $\beta$-Ta and $\alpha$-Ta crystals in the film. (a) Relative orientation of $\alpha$-Ta (110) and $\beta$-Ta (002) Bragg scattering planes incident on the 2D $q_z$ vs $q_{x,y}$ image; the so-called “missing wedge” is absent in these data for simplicity. (b) Geometric illustrations of the relative crystal texture on the substrate, showing the most probable orientations of the measured $\beta$-Ta and $\alpha$-Ta crystal structures.} \label{fig:S9} \end{figure*}
{ "arxiv_id": "2302.14243", "language": "en", "timestamp": "2023-03-01T02:06:29", "url": "https://arxiv.org/abs/2302.14243", "yymm": "2302" }
\section{Introduction} \label{sec: intro} An aggregate data meta-analysis combines summary statistics of an outcome of interest from multiple studies. When the outcome of interest is continuous, statistical methods for performing an aggregate data meta-analysis often assume that the primary studies report the sample mean and standard deviation of the outcome. However, primary studies may report the sample median of the outcome rather than the sample mean, which commonly occurs when the distribution of the outcome is skewed \cite{higgins2020cochrane}. Standard meta-analytic methods typically cannot be directly applied when primary studies report sample medians. Consequently, a number of statistical methods have been developed to incorporate studies reporting sample medians in meta-analysis. These methods can be classified into two groups: mean-based methods and median-based methods. Mean-based methods impute the sample means and standard deviations of the outcome from primary studies that report sample medians in order to apply standard meta-analytic methods based on the (imputed) study-specific sample means and standard deviations. Many authors have proposed sample mean and standard deviation estimators for this context \cite{hozo2005estimating,bland2015estimating, wan2014estimating, kwon2015simulation, luo2018optimally, mcgrath2020estimating, shi2020optimally, shi2020estimating, rychtavr2020estimating, walter2022estimation, cai2021estimating, yang2021generalized, balakrishnan2022unified}. Median-based methods directly meta-analyze the study-specific sample medians in order to estimate a pooled median of the outcome or the difference of medians across groups \cite{mcgrath2019one, mcgrath2020meta, ozturk2020meta}. A software tool implementing these methods can facilitate performing high-quality meta-analyses for a few reasons. First, some of these methods can be challenging and laborious to apply without available software (e.g., \cite{mcgrath2020estimating, cai2021estimating, mcgrath2020meta} require numerically solving ad-hoc optimization problems), which may limit their adoption in practice. Second, it can be insightful to perform sensitivity analyses where data analysts apply several of these methods and evaluate how the conclusions of their meta-analysis change, as these methods are based on different assumptions and estimation strategies. A comprehensive software tool implementing these methods in a standardized way can facilitate performing such sensitivity analyses. In this paper, we present the \pkg{metamedian} R package \cite{metamedian}, a freely available and open-source software tool for meta-analyzing primary studies that report sample medians. The \pkg{metamedian} R package is available on the Comprehensive R Archive Network (CRAN) at \url{https://CRAN.R-project.org/package=metamedian}, and the development version of the package is available on GitHub at \url{https://github.com/stmcg/metamedian}. The package implements both mean-based methods and median-based methods. When applicable, the widely used \pkg{metafor} package \cite{metafor} is internally applied when pooling the study-specific estimates, which gives users a wide array of pooling options (e.g., supporting common effect and random effects models, various between-study heterogeneity estimators, meta-regression analyses, etc.) and facilitates performing a number of subsequent analyses (e.g., generating forest plots and funnel plots, testing small study effects, etc.). 
The remainder of the paper is structured as follows. We briefly summarize the literature on statistical methods for meta-analyzing studies reporting sample medians. We then describe the main features of the software and illustrate its application in a real-life meta-analysis. We conclude with a discussion of related software, limitations, and future directions in the development of \pkg{metamedian}. \section{Methods} \label{sec: methods} \subsection{Standard meta-analytic methods} \label{sec: standard methods} We begin by recalling some standard statistical models and estimators for an aggregate data meta-analysis of a continuous outcome. These models and estimators are the foundation of many of the methods implemented in the \pkg{metamedian} package. Suppose that the meta-analysis consists of $K$ primary studies. Let $y_k$ denote the estimate of the outcome measure in the $k\textsuperscript{th}$ primary study (e.g., the difference of sample means across two arms in a trial), and let $\theta_k$ denote the true outcome measure in the $k\textsuperscript{th}$ primary study. It is assumed that \begin{equation*} y_k \sim \text{Normal}(\theta_k, \sigma^2_k), \end{equation*} where the within-study sampling variances, $\sigma^2_k$, are considered to be known. The common effect model assumes that the true outcome measures of the primary studies are equal (i.e., $\theta_k = \theta$ for all $k$). The random effects model assumes that the true outcome measures of the primary studies differ and are distributed as \begin{equation*} \theta_k \sim \text{Normal}(\theta, \tau^2). \end{equation*} Throughout, we refer to $\theta$ as the pooled outcome measure and $\tau^2$ as the between-study variance. The classic inverse-variance weighted estimator of the pooled outcome measure and an estimate of its standard error (SE) are given by \begin{equation*} \hat{\theta} = \frac{\sum_{k = 1}^K w_k y_k}{\sum_{k = 1}^K w_k}, \qquad \widehat{\text{SE}}(\hat{\theta}) = \sqrt{\frac{1}{\sum_{k = 1}^K w_k}}, \end{equation*} where the weights, $w_k$, depend on the type of meta-analytic model used. In a common effect meta-analysis, $w_k = \frac{1}{\sigma_k^2}$. In a random effects meta-analysis, $w_k = \frac{1}{\sigma_k^2 + \hat{\tau}^2}$ where $\hat{\tau}^2$ denotes an estimate of the between-study variance. See \cite{viechtbauer2005bias, veroniki2016methods, langan2019comparison} for discussions of estimators of the between-study variance. \subsection{Methods to meta-analyze studies reporting medians} There are two key challenges to applying standard meta-analytic methods when primary studies report the sample median of the outcome. One challenge is that studies reporting the sample median of the outcome often do not report an estimate of its SE (i.e., $\sigma_k$), which is needed to compute the weights in the inverse-variance weighted estimator of the pooled outcome measure. Instead, studies reporting sample medians commonly report the first and third quartiles and/or minimum and maximum values of the outcome. In other cases, these studies may not report any other summary statistics of the outcome besides the sample median, which typically occurs when the outcome of interest (in the meta-analysis) is not one of the primary outcomes in the study. A second challenge is that some primary studies may report estimates of the mean of the outcome whereas other primary studies may report estimates of the median of the outcome. 
In general, outcome measures based on means (e.g., the difference of means across groups) are not equal to outcome measures based on medians (e.g., the difference of medians across groups). To apply standard meta-analytic methods, all primary studies must contribute an estimate of the same outcome measure. The following subsections describe statistical methods developed to address these challenges. \subsubsection{Mean-based methods} The most commonly applied approach to meta-analyze studies reporting sample medians involves imputing sample means and standard deviations of the outcome from studies reporting medians \cite{hozo2005estimating,bland2015estimating, wan2014estimating, kwon2015simulation, luo2018optimally, mcgrath2020estimating, shi2020optimally, shi2020estimating, rychtavr2020estimating, walter2022estimation, cai2021estimating, yang2021generalized, balakrishnan2022unified}. Then, data analysts may apply standard meta-analytic methods based on the (imputed) study-specific sample means and standard deviations. For instance, this approach may be used to impute the difference of sample means and its usual SE estimate from studies reporting sample medians in order to meta-analyze the difference of means. Because these methods estimate an outcome measure based on means, we refer to these methods as mean-based methods. The literature on mean-based methods is growing large, which we very briefly summarize next. These methods typically consider that a primary study may report some of the following statistics of the outcome: minimum value ($q_{\text{min}}$), first quartile ($q_1$), median ($q_2$), third quartile ($q_{3}$), maximum value ($q_{\text{max}}$), and sample size ($n$). In particular, they typically consider that one of the following sets of summary statistics is reported: \begin{align*} S_1 & = \{q_{\text{min}}, q_{2}, q_{\text{max}}, n \} \\ S_2 & = \{q_{1}, q_{2}, q_{3}, n \} \\ S_3 & = \{q_{\text{min}}, q_{1}, q_{2}, q_{3}, q_{\text{max}}, n \}. \end{align*} Hozo et al.\ \cite{hozo2005estimating} were amongst the first to propose and systematically study mean-based methods, whereby they developed sample mean and standard deviation estimators from $S_1$. After, Bland \cite{bland2015estimating} proposed corresponding estimators from $S_3$, and Wan et al.\ \cite{wan2014estimating} proposed estimators from $S_2$ along with other estimators from $S_1$ and $S_3$ under the assumption that the outcome is normally distributed. Since then, Luo et al.\ \cite{luo2018optimally} and Shi et al.\ \cite{shi2020optimally} developed new estimators to achieve certain optimality properties under the assumption that the outcome is normally distributed. Motivated by the observation that studies often report sample medians instead of sample means because the distribution of the outcome is skewed, Kwon and Reis \cite{kwon2015simulation}, McGrath et al. \cite{mcgrath2020estimating}, Shi et al. \cite{shi2020estimating}, and Cai et al. \cite{cai2021estimating} developed estimators for skewed data. Other estimators have been developed which require the data analysts to specify the assumed parametric distribution of the outcome, such as the normal or log-normal distribution \cite{yang2021generalized, balakrishnan2022unified}. While most of the literature on mean-based methods has focused on better estimating the sample mean and standard deviation from $S_1$, $S_2$, or $S_3$, another important consideration is the performance of such methods in the context of meta-analysis. 
McGrath et al.\ \cite{mcgrath2023standard} showed that using the imputed study-specific sample means and standard deviations in place of the actual (unreported) sample means and standard deviations in standard meta-analytic methods can severely underestimate the within-study SEs for studies reporting medians, which can result in negative downstream consequences in meta-analysis. Moreover, they described a bootstrap approach to better estimate the within-study SEs when using the mean estimators of McGrath et al.\ \cite{mcgrath2020estimating} and Cai et al.\ \cite{cai2021estimating}. Yang et al.\ \cite{yang2021generalized} and Balakrishnan et al.\ \cite{balakrishnan2022unified} also provided estimators of the within-study SE that appropriately account for the variability of their mean estimators. \subsubsection{Median-based methods} Another line of the literature has focused on developing methods to directly meta-analyze the study-specific sample medians, i.e., without first imputing sample means and standard deviations \cite{mcgrath2019one, mcgrath2020meta, ozturk2020meta}. These methods estimate the pooled median of the outcome when the meta-analysis consists of one-group studies, and they estimate the pooled difference of medians in the case of two-group studies (e.g., studies with treatment and control groups). We refer to these methods as median-based methods because they estimate an outcome measure based on medians. The main distinction between the different median-based methods is how they address the challenge of unreported within-study SEs. McGrath et al.\ \cite{mcgrath2019one, mcgrath2020meta} considered methods that avoid the need for within-study SEs altogether. Specifically, they considered an approach that takes the median of the study-specific estimates as the point estimate of the pooled outcome measure. For instance, in a meta-analysis of one-group studies, this approach uses the median of the study-specific medians as the pooled estimate. For two-group studies, this approach uses the median of the study-specific differences of medians as the pooled estimate. A nonparametric confidence interval around the pooled estimate is obtained by taking suitable quantiles of the study-specific estimates. Weighted versions (based on the study-specific sample sizes) were also considered. McGrath et al.\ \cite{mcgrath2020meta} and Ozturk and Balakrishnan \cite{ozturk2020meta} considered methods that estimate the within-study SEs in order to perform an inverse-variance weighted meta-analysis. Specifically, McGrath et al.\ \cite{mcgrath2020meta} proposed parametric estimators of the within-study SE from $S_1$, $S_2$, or $S_3$ summary statistics based on an estimation strategy referred to as Quantile Matching Estimation (QE). To distinguish this median-based method using QE from the mean-based method using QE \cite{mcgrath2020estimating} (see Section \ref{sec: software mean-based methods}), we refer to the median-based method as $\text{QE}_{\text{median}}$ and the mean-based method as $\text{QE}_{\text{mean}}$. Ozturk and Balakrishnan \cite{ozturk2020meta} proposed nonparametric estimators of the within-study SE from $S_2$ summary statistics as well as other sets of summary statistics (see Appendix A for details), which they referred to as the Confidence Distribution (CD) approach. 
When the meta-analysis consists of some primary studies that report the sample median of the outcome and other primary studies that report the sample mean, all of these methods assume that the distribution of the outcome is symmetric in the primary studies that report sample means. That is, they assume that the mean of the outcome equals the median in such primary studies. \subsubsection{Comparison of methods} \label{sec: comparison} Given the number of methods developed for meta-analyzing studies reporting medians, data analysts may ask how to choose the most suitable methods for their applications. In this subsection, we summarize some comparisons of these methods performed in previous studies. First, consider the comparison between mean-based methods and median-based methods. Since mean-based methods and median-based methods estimate different outcome measures, one consideration is which outcome measure is most appropriate for the application at hand. For instance, if the distribution of the outcome is highly skewed, one may find that the mean of the outcome distribution is not a very meaningful outcome measure for the application and instead prefer to estimate the median. A more detailed discussion comparing mean-based outcome measures and median-based outcome measures in this context is given in McGrath et al.\ \cite{mcgrath2020meta}. Apart from considerations on the outcome measure, two factors that strongly differentiate the performance of mean-based methods and median-based methods are (i) the proportion of primary studies reporting medians and (ii) the skewness of the outcome distribution in the primary studies. For (i), as the proportion of primary studies reporting medians increases, the performance of median-based methods often improves and the performance of mean-based methods often worsens \cite{mcgrath2019one, mcgrath2020meta, mcgrath2023standard}, as one may expect. For (ii), when the outcome distribution is highly skewed, mean-based methods can perform poorly \cite{mcgrath2019one, mcgrath2020meta,mcgrath2023standard}. In practice, data analysts can use Bowley's coefficient of skewness \cite{bowley1901} to evaluate the skewness of the outcome in a primary study, as it only depends on $S_2$ summary statistics (e.g., see \cite{mcgrath2019one, mcgrath2020meta, ozturk2020meta, mcgrath2023standard}). Second, consider the comparison of the various mean-based methods. Most simulation studies evaluating the performance of mean-based methods found that the performance of the mean and standard deviation estimators strongly depends on the underlying distribution of the outcome. If the outcome is normally distributed in the primary studies, then the methods that assume normality (e.g., Wan et al.\ \cite{wan2014estimating}, Luo et al.\ \cite{luo2018optimally}, Shi et al.\ \cite{shi2020optimally}, Yang et al.\ \cite{yang2021generalized}) often perform best. However, when the distribution of the outcome is not normal (especially skewed), the methods developed under more flexible distributional assumptions (e.g., McGrath et al.\ \cite{mcgrath2020estimating}, Cai et al.\ \cite{cai2021estimating}) often perform best. 
Additionally, regardless of the approach used to estimate the study-specific means, using a within-study SE estimator that appropriately accounts for the variability of the mean estimators (i.e., the parametric bootstrap \cite{mcgrath2023standard} and plug-in approaches \cite{yang2021generalized}) can yield better inference at the meta-analysis level compared to the naïve SE estimator \cite{mcgrath2023standard}. Last, consider the comparison of the median-based methods. An advantage of the approach based on taking the median of the study-specific outcome measure estimates \cite{mcgrath2019one, mcgrath2020meta} is that it is able to incorporate studies that only report the sample median of the outcome (rather than $S_1$, $S_2$, $S_3$, or other sets of summary statistics). However, since the $\text{QE}_{\text{median}}$ \cite{mcgrath2020meta} and CD \cite{ozturk2020meta} methods meta-analyze studies using inverse-variance weighting, advantages of these approaches include the following: they are able to estimate between-study heterogeneity; they are typically more efficient; and they allow data analysts to perform a number of standard subsequent analyses (e.g., generating forest plots and funnel plots, testing small study effects). \section{Software functionality} \label{sec: software} To demonstrate the software's functionality, we will use as an example a meta-analysis recently performed by Katzenschlager et al.\ \cite{katzenschlager2021can} that aimed to identify risk factors for a severe course of COVID-19. The analysis we focus on compared the average age between COVID-19-infected patients who died and those who survived. All 52 primary studies either reported $S_2$ summary statistics of age in both groups of patients or reported the mean, standard deviation, and sample size in both groups. The \pkg{metamedian} package includes two data sets for this example: \verb|dat.age_raw| contains the extracted summary data for all 52 primary studies, and \verb|dat.age| contains the summary data for all primary studies except one, which had a very small sample size in the nonsurvivor group (see Section \ref{sec: examples} for further details). We chose this example because all of the methods in the \pkg{metamedian} package give similar results and lead to the same conclusions. We believe that using an example where different methods lead to different conclusions could potentially distract readers from the primary focus of the paper (i.e., describing the utility of the \pkg{metamedian} package). However, for interested readers, we provide analyses of two other outcome variables (i.e., aspartate transaminase and creatine kinase levels) from the meta-analysis of Katzenschlager et al.\ \cite{katzenschlager2021can} in Appendix C. In these additional examples, some methods lead to noticeably different results. The \pkg{metamedian} package can be installed from CRAN and loaded by running the following commands in the R console: \begin{verbatim} install.packages("metamedian") library("metamedian") \end{verbatim} The following subsections describe the requirements on the input data set and the key functions in the \pkg{metamedian} package. \subsection{Data set} \label{sec: data set} The key functions in the \pkg{metamedian} package require users to supply an input data set, \verb|data|, containing the summary data extracted from the primary studies. The input data set must be a data frame, where each row corresponds to a primary study in the meta-analysis.
For one-group studies, the input data set can include the following columns: \verb|min.g1| (minimum value), \verb|q1.g1| (first quartile), \verb|med.g1| (median), \verb|q3.g1| (third quartile), \verb|max.g1| (maximum value), \verb|n.g1| (sample size), \verb|mean.g1| (mean), and \verb|sd.g1| (standard deviation). For two-group studies, the input data set can additionally include the following columns for the summary data in the second group: \verb|min.g2|, \verb|q1.g2|, \verb|med.g2|, \verb|q3.g2|, \verb|max.g2|, \verb|n.g2|, \verb|mean.g2|, and \verb|sd.g2|. When constructing the input data set for two-group studies, note that the outcome measure (i.e., difference of means or the difference of medians across groups) is based on the group 1 value minus the group 2 value. If the study does not report one of the summary statistics, the relevant entry in the input data set must be set to \verb|NA|. Some additional columns can be included in the input data set when using some methods (e.g., the CD method \cite{balakrishnan2022unified}), which are detailed in Appendix A. As an example, the data set \verb|dat.age| corresponding to our example is formatted in this manner. The first five rows of \verb|dat.age| (excluding the \verb|author| column corresponding to the authors of the primary studies) are displayed below. The group 1 values correspond to the ages (in years) in the nonsurvivor group, and the group 2 values correspond to the ages in the survivor group. \begin{verbatim} n.g1 q1.g1 med.g1 q3.g1 mean.g1 sd.g1 n.g2 q1.g2 med.g2 q3.g2 mean.g2 sd.g2 1 35 NA NA NA 77.00 8.30 157 NA NA NA 65.60 15.60 2 32 NA NA NA 64.60 11.20 20 NA NA NA 51.90 12.90 3 31 65.0 76.0 82.00 NA NA 122 56.0 63.0 69.0 NA NA 4 16 68.0 72.0 81.00 NA NA 137 56.0 63.0 70.0 NA NA 5 29 NA NA NA 63.60 17.68 351 NA NA NA 52.85 16.35 \end{verbatim} \subsection{Main functions} \subsubsection{Mean-based methods} \label{sec: software mean-based methods} The \verb|metamean| function performs a meta-analysis using mean-based methods. For one-group studies, this function meta-analyzes the mean of the outcome. For two-group studies, this function meta-analyzes the difference of means across the two groups. The method for estimating the mean(s) of the outcome in a primary study reporting $S_1$, $S_2$, or $S_3$ summary statistics is specified by the \verb|mean_method| argument. The argument \verb|mean_method| can either be a vector, in which case the $k\textsuperscript{th}$ element is the method used for the $k\textsuperscript{th}$ primary study in the input data set, or a scalar, in which case the same method is used for all primary studies. The options for \verb|mean_method| are listed below: \begin{itemize} \item \verb|"wan"|: This option applies the mean estimator of Hozo et al.\ \cite{hozo2005estimating} in scenario $S_1$, Wan et al.\ \cite{wan2014estimating} in scenario $S_2$, and Bland \cite{bland2015estimating} in scenario $S_3$. \item \verb|"luo"|: This option applies the mean estimator of Luo et al.\ \cite{luo2018optimally} in all three scenarios. \item \verb|"shi_normal"|: This option applies the mean estimator of Luo et al.\ \cite{luo2018optimally} in scenarios $S_1$ and $S_2$ and the mean estimator of Shi et al. \cite{shi2020optimally} in scenario $S_3$. \item \verb|"shi_lognormal"|: This option applies the mean estimator of Shi et al.\ \cite{shi2020estimating} in all three scenarios. 
\item \verb|"qe"|: This option applies the $\text{QE}_{\text{mean}}$ mean estimator of McGrath et al.\ \cite{mcgrath2020estimating} in all three scenarios. \item \verb|"bc"|: This option applies the Box-Cox (BC) mean estimator of McGrath et al.\ \cite{mcgrath2020estimating} in all three scenarios. \item \verb|"mln"|: This option applies the Method for Unknown Non-Normal Distributions (MLN) mean estimator of Cai et al.\ \cite{cai2021estimating} in all three scenarios. \item \verb|"yang"|: This option applies the mean estimator of Yang et al.\ \cite{yang2021generalized} under the assumption of normality in all three scenarios. \end{itemize} Note that some of these options involve different mean estimators in different scenarios. This is because those papers did not propose mean estimators in all three scenarios (e.g., Wan et al.\ \cite{wan2014estimating}), and instead recommended existing approaches in the scenarios they did not consider. The method for estimating the SE of the mean estimators in scenarios $S_1$, $S_2$, and $S_3$ is specified by the \verb|se_method| argument. The options include the following: \begin{itemize} \item \verb|"naive"|: This option applies the naïve SE estimator, which uses the estimated standard deviation divided by the square root of the sample size. This option is available when using any of the mean estimators. The approach used to estimate the standard deviation is specified by the \verb|sd_method| argument, which can be set to \verb|"wan"| (Wan et al.\ \cite{wan2014estimating}), \verb|"shi_normal"| (Shi et al.\ \cite{shi2020optimally}), \verb|"shi_lognormal"| (Shi et al.\ \cite{shi2020estimating}), \verb|"qe"| (McGrath et al.\ \cite{mcgrath2020estimating}, $\text{QE}_{\text{mean}}$ approach), \verb|"bc"| (McGrath et al.\ \cite{mcgrath2020estimating}, BC approach), \verb|"mln"| (Cai et al.\ \cite{cai2021estimating}, MLN approach), and \verb|"yang"| (Yang et al.\ \cite{yang2021generalized}). Note that the \verb|"shi_normal"| option uses the standard deviation estimators of Wan et al.\ \cite{wan2014estimating} in scenarios $S_1$ and $S_2$. \item \verb|"bootstrap"|: This option applies the parametric bootstrap approach described by McGrath et al.\ \cite{mcgrath2023standard}. This option is available when using the $\text{QE}_{\text{mean}}$, BC, and MLN mean estimators. The number of bootstrap replicates is specified by the \verb|nboot| argument, which is set to \verb|1000| by default. Since the bootstrap approach involves random sampling, users may wish to set a random number seed to ensure reproducibility (e.g., running \verb|set.seed(1234)| in the R console) when using this option. \item \verb|"plugin"|: This option uses the analytically derived SE of the mean estimator, and plugs in the estimated standard deviation in place of the distributional standard deviation. This option is available when using the Yang et al.\ \cite{yang2021generalized} mean estimator. \end{itemize} When a primary study reports the sample mean and standard deviation of the outcome, the sample mean is used as the point estimate and the standard deviation divided by the square root of the sample size is used as the SE estimate. For two-group studies, the outcome measure estimate is the difference in sample means and its SE estimate is the square root of the of the sum of the squared SE estimates in both groups. 
After obtaining study-specific outcome measure estimates and their SE estimates, an inverse-variance weighted meta-analysis is performed via the \verb|rma.uni| function in the \pkg{metafor} package. By default, a random effects meta-analysis is performed where the Restricted Maximum Likelihood (REML) approach is used to estimate between-study heterogeneity. Additional arguments supplied to the \verb|metamean| function are directly passed to the \verb|rma.uni| function so that users can specify how to pool the study-specific estimates, such as specifying whether a common effect or random effects model is used or including effect modifier variables (i.e., performing a meta-regression). The output of the \verb|rma.uni| function is returned, giving users the option of performing subsequent analyses using functions available in the \pkg{metafor} package. As an example, consider the following application of the \verb|metamean| function to the data set from our example (i.e., \verb|dat.age|). We can perform a random effects meta-analysis where we apply the MLN method to estimate the study-specific means and the parametric bootstrap approach (with 100 bootstrap replicates for ease of computation) to estimate their SEs as follows: \begin{verbatim} set.seed(1234) metamean(data = dat.age, mean_method = "mln", se_method = "bootstrap", nboot = 100) \end{verbatim} The output is given below. \begin{verbatim} Random-Effects Model (k = 51; tau^2 estimator: REML) tau^2 (estimated amount of total heterogeneity): 28.2204 (SE = 7.0657) tau (square root of estimated tau^2 value): 5.3123 I^2 (total heterogeneity / total variability): 86.75% H^2 (total variability / sampling variability): 7.55 Test for Heterogeneity: Q(df = 50) = 362.9420, p-val < .0001 Model Results: estimate se zval pval ci.lb ci.ub 12.8410 0.8367 15.3464 <.0001 11.2010 14.4809 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 \end{verbatim} The output indicates that the pooled estimate of the difference of means (mean age in the nonsurvivor group minus the mean age in the survivor group) is 12.84 years [95\% CI: 11.20, 14.48]. \begin{comment} As an example of leveraging the functionalities of the \pkg{metafor} package in the pooling stage of the analysis, we consider extending the previous analysis to perform a meta-regression. Suppose that \verb|df| has a column \verb|"year"| specifying the year of publication of the primary studies. To perform a meta-regression with the publication year as a covariate, we set the \verb|mods| argument in the \verb|rma.uni| function as follows: \begin{verbatim} metamean(df = df, mean_method = "mln", se_method = "bootstrap", nboot = 100, mods = "year") \end{verbatim} \end{comment} \subsubsection{Median-based methods} The \verb|metamedian| function performs a meta-analysis using median-based methods. These methods meta-analyze the median of the outcome for one-group studies and the difference of medians for two-group studies. The argument \verb|median_method| is a scalar that specifies the method used to perform the meta-analysis. The options are listed below: \begin{itemize} \item \verb|"mm"|: This option applies the Median of Medians (MM) method \cite{mcgrath2019one} when the meta-analysis consists of one-group primary studies and applies the Median of the Difference of Medians (MDM) method \cite{mcgrath2020meta} when the meta-analysis consists of two-group primary studies.
These methods only require that \verb|data| contain the median or mean of the outcome (in both groups) in the primary studies. \item \verb|"wm"|: This option applies weighted versions of the methods described for the \verb|"mm"| option, where studies are weighted proportional to their total sample size \cite{mcgrath2019one, mcgrath2020meta}. \item \verb|"qe"|: This option applies the $\text{QE}_{\text{median}}$ method \cite{mcgrath2020estimating}, which is applicable whether the meta-analysis consists of one-group studies or two-group studies. Recall that this method parametrically estimates the SEs of the study-specific medians (for one-group studies) or differences of medians (for two-group studies). Then, an inverse-variance weighted meta-analysis is performed via the \verb|rma.uni| function in the \pkg{metafor} package in the same manner as described for the mean-based methods. The output of the \verb|rma.uni| function is returned. This method requires that \verb|data| contains the study-specific $S_1$, $S_2$, or $S_3$ summary statistics of the outcome or the sample mean, standard deviation, and sample size (in both groups). \item \verb|"cd"|: This option applies the CD method \cite{ozturk2020meta}, which is applicable for one-group studies. Recall that this method nonparametrically estimates the SEs of the study-specific medians. Then, an inverse-variance weighted meta-analysis is performed where a Jackknife approach is used to estimate the variance of the pooled outcome measure estimate. This method requires that \verb|data| contains the study-specific (i) $S_2$ summary statistics, (ii) mean, standard deviation, and sample size, or (iii) other sets of summary statistics detailed in Appendix A. \end{itemize} For example, the following code applies the $\text{QE}_{\text{median}}$ method to the data set from our example: \begin{verbatim} metamedian(data = dat.age, median_method = "qe") \end{verbatim} The output is given below. \begin{verbatim} Random-Effects Model (k = 51; tau^2 estimator: REML) tau^2 (estimated amount of total heterogeneity): 33.5585 (SE = 8.3883) tau (square root of estimated tau^2 value): 5.7930 I^2 (total heterogeneity / total variability): 86.95% H^2 (total variability / sampling variability): 7.66 Test for Heterogeneity: Q(df = 50) = 373.6841, p-val < .0001 Model Results: estimate se zval pval ci.lb ci.ub 13.2238 0.9121 14.4980 <.0001 11.4361 15.0115 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 \end{verbatim} The output indicates that the pooled estimate of the difference of medians (median age in the nonsurvivor group minus the median age in the survivor group) is 13.22 years [95\% CI: 11.44, 15.01]. \subsection{Descriptive analyses} \label{sec: descriptive analyses} Recall from Section \ref{sec: comparison} that the following factors often strongly affect the performance of the mean-based methods and median-based methods: the proportion of primary studies reporting sample medians, the sets of summary statistics reported by the primary studies reporting medians (e.g., $S_1$, $S_2$, or $S_3$), and the skewness of the outcome distribution in the primary studies. The \verb|describe_studies| function prints some descriptive statistics of these factors to help guide data analysts in choosing the most suitable methods to use in their applications. Specifically, the \verb|describe_studies| function prints the following information: \begin{itemize} \item The number of primary studies and the number of primary studies reporting medians.
\item The number of primary studies reporting each set of relevant summary statistics (e.g., $S_1$, $S_2$, and $S_3$). By default, the sets of summary statistics considered are $S_1$, $S_2$ and $S_3$, as well as the sample mean, standard deviation, and sample size. If the argument \verb|method| is set to \verb|"cd"|, the sets of summary statistics considered by the CD median-based method \cite{ozturk2020meta} are used instead (see Appendix A for details). \item The five-number summary (i.e., the minimum value, first quartile, median, third quartile, and maximum value) and mean of the study-specific Bowley skewness values \cite{bowley1901}. Bowley skewness values range from -1 to 1, where positive values indicate that the distribution is right skewed and negative values indicate that the distribution is left skewed. Since the Bowley skewness depends on the sample median and first and third quartiles, the Bowley skewness is only computed for primary studies that report $S_2$ or $S_3$ summary statistics. \end{itemize} When the meta-analysis consists of two-group primary studies, the descriptive analyses are performed for each of the two groups. When printing the results, users can specify the labels corresponding to the two groups with the \verb|group_labels| argument. For instance, the following code applies the \verb|describe_studies| function to the data set from our example: \begin{verbatim} describe_studies(data = dat.age, group_labels = c("Nonsurvivors", "Survivors")) \end{verbatim} The output is given below. \begin{verbatim} DESCRIPTION OF PRIMARY STUDIES Nonsurvivors Survivors N. studies: 51 51 N. studies reporting the median: 29 29 N. studies reporting S1 (min, med, max, n): 0 0 N. studies reporting S2 (q1, med, q3, n): 29 29 N. studies reporting S3 (min, q1, med, q3, max, n): 0 0 N. studies reporting the mean: 22 22 N. studies reporting the mean, sd, and n: 22 22 Bowley skewness Minimum: -0.4000 -0.6000 First quartile: -0.0818 -0.1304 Median: 0.0000 -0.0526 Mean: -0.0087 -0.0250 Third quartile: 0.0909 0.1458 Maximum: 0.3846 0.4167 \end{verbatim} \section{Example} \label{sec: examples} In this section, we apply the \pkg{metamedian} package to perform a complete analysis of our example meta-analysis comparing the age between survivors and nonsurvivors of COVID-19. In these analyses, we do not include one of the primary studies (Qi et al.\ \cite{qi2021clinical}) included in the original meta-analysis, which had a sample size of 5 in the group of nonsurvivors (i.e., we analyze the \verb|dat.age| data set). When including this study, the estimated within-study SE for this study is very large compared to the other primary studies, which can cause instability when estimating between-study heterogeneity. See Appendix B for the results of the analysis when including the study of Qi et al.\ \cite{qi2021clinical}. Recall that the descriptive analyses for this data set were performed in Section \ref{sec: descriptive analyses}. Next, we apply several mean-based methods to estimate the pooled difference of mean age between survivors and nonsurvivors. Specifically, we apply all of the applicable mean estimators for $S_2$ summary statistics (i.e., Wan et al.\ \cite{wan2014estimating}, Luo et al.\ \cite{luo2018optimally}, Shi et al.\ \cite{shi2020estimating}, $\text{QE}_{\text{mean}}$ \cite{mcgrath2020estimating}, BC \cite{mcgrath2020estimating}, MLN \cite{cai2021estimating}, and Yang et al.\ \cite{yang2021generalized}).
We use the corresponding naïve SE estimator for the Wan et al.\ \cite{wan2014estimating}, Luo et al.\ \cite{luo2018optimally}, and Shi et al.\ \cite{shi2020estimating} mean estimators, the bootstrap SE estimator for the $\text{QE}_{\text{mean}}$ \cite{mcgrath2020estimating}, BC \cite{mcgrath2020estimating}, and MLN \cite{cai2021estimating} mean estimators, and the plug-in SE estimator for the Yang et al.\ \cite{yang2021generalized} mean estimator. We also apply all of the applicable median-based methods (i.e., the (weighted) MDM method \cite{mcgrath2019one, mcgrath2020meta}, $\text{QE}_{\text{median}}$ \cite{mcgrath2020meta} method) to estimate the pooled difference of median age between survivors and nonsurvivors. For the approaches based on performing an inverse-variance weighted meta-analysis, we assume a random effects model and obtain estimates of the between-study variance and $I^2$ index \cite{higgins2002quantifying, higgins2003measuring} as implemented in the \pkg{metafor} package. The code for applying the mean-based methods is given below. Due to the computationally intensive nature of bootstrapping and the large number of primary studies, the code took approximately seven minutes to run on a standard laptop computer (8 GB RAM, 1.1 GHz Quad-Core Intel Core i5 processor). \begin{verbatim} set.seed(1234) res_wan <- metamean(dat.age, mean_method = "wan", se_method = "naive", sd_method = "wan") res_luo <- metamean(dat.age, mean_method = "luo", se_method = "naive", sd_method = "wan") res_shi <- metamean(dat.age, mean_method = "shi_lognormal", se_method = "naive", sd_method = "shi_lognormal") res_qe_mean <- metamean(dat.age, mean_method = "qe", se_method = "bootstrap") res_bc <- metamean(dat.age, mean_method = "bc", se_method = "bootstrap") res_mln <- metamean(dat.age, mean_method = "mln", se_method = "bootstrap") res_yang <- metamean(dat.age, mean_method = "yang", se_method = "plugin") \end{verbatim} Similarly, the code for applying the median-based methods is given below. \begin{verbatim} res_mm <- metamedian(dat.age, median_method = "mm") res_wm <- metamedian(dat.age, median_method = "wm") res_qe_median <- metamedian(dat.age, median_method = "qe") \end{verbatim} The results are summarized in Table \ref{tab: application}. Most of the entries in the table can be obtained by printing the objects returned by the \verb|metamean| and \verb|metamedian| functions. The 95\% CIs around $\hat{\tau}^2$ and $\hat{I}^2$ can be obtained by applying the \verb|confint| function from the \pkg{metafor} package (e.g., \verb|confint(res_mln)|). All of the mean-based methods gave similar estimates of the pooled difference of means and between-study heterogeneity, and all of the median-based methods gave similar estimates of the pooled difference of medians. Moreover, the mean-based methods and median-based methods gave similar pooled estimates to each other despite estimating different outcome measures, which presumably occurred because the distribution of age was not highly skewed in the primary studies (e.g., most primary studies had Bowley skewness values smaller than 0.1). \begin{table}[H] \caption{Results of the meta-analysis comparing the age between survivors and nonsurvivors of COVID-19. The column titled ``Pooled Estimate [95\% CI]" displays the pooled estimate of the difference of means (mean age in the nonsurvivor group minus the mean age in the survivor group) for the mean-based methods and displays the pooled estimate of the difference of medians for the median-based methods. 
A value of NA (Not Applicable) is displayed when the method does not provide an estimate of the corresponding parameter. The unit of measurement for age is years. \label{tab: application}} \begin{center} \begin{tabular}{llll} \hline Method & Pooled Estimate [95\% CI] & $\hat{\tau}^2$ [95\% CI] & $\hat{I}^2$ [95\% CI] \\ \hline Mean-Based Methods \\ \,\,\,\, Wan et al. & 13.34 [11.64, 15.03] & 30.84 [19.19, 50.37] & 89\% [83\%, 93\%] \\ \,\,\,\, Luo et al. / Wan et al.$^*$ & 13.34 [11.65, 15.04] & 30.69 [19.09, 50.18] & 89\% [83\%, 93\%]\\ \,\,\,\, Shi et al. & 12.87 [11.25, 14.49] & 27.51 [16.93, 45.66] & 87\% [81\%, 92\%]\\ \,\,\,\, $\text{QE}_{\text{mean}}$ & 13.23 [11.62, 14.84] & 26.40 [16.02, 44.19] & 85\% [77\%, 90\%] \\ \,\,\,\, BC & 13.33 [11.72, 14.94] & 25.64 [15.19, 43.53] & 82\% [73\%, 89\%] \\ \,\,\,\, MLN & 12.86 [11.23, 14.50] & 28.04 [17.35, 46.90] & 87\% [80\%, 92\%] \\ \,\,\,\, Yang et al. & 13.28 [11.57, 14.99] & 31.28 [19.49, 51.42] & 88\% [82\%, 92\%]\\ Median-Based Methods \\ \,\,\,\, MDM & 13.00 [11.72, 16.00] & NA & NA \\ \,\,\,\, Weighted MDM & 13.00 [8.00, 16.00] & NA & NA \\ \,\,\,\, $\text{QE}_{\text{median}}$ & 13.22 [11.44, 15.01] & 33.56 [20.70, 55.68] & 87\% [80\%, 92\%] \\\hline \end{tabular} \end{center} \caption*{$^*$We applied the mean estimator of Luo et al. and the standard deviation estimator of Wan et al.} \end{table} To generate a forest plot, we can apply the \verb|forest| function from the \pkg{metafor} package. For instance, the code below generates a forest plot corresponding to the analysis with the MLN mean-based method, which is given in Figure \ref{fig:forest}. \begin{verbatim} library("metafor") forest(res_mln, header = c("Study", "Difference of Means [95% CI]"), slab = dat.age$author, xlab = "Difference of Mean Age (years)") \end{verbatim} \begin{figure} [H] \centering \includegraphics[width=\textwidth]{Forest.pdf} \caption{Forest plot showing the difference in mean age between COVID-19 survivors and nonsurvivors. When a primary study reported medians, the MLN method was applied to estimate the difference of means and parametric bootstrap was applied to estimate its standard error \cite{cai2021estimating, mcgrath2023standard}. \label{fig:forest}} \end{figure} \section{Discussion} \label{sec: discussion} The \pkg{metamedian} R package facilitates performing meta-analyses when primary studies report the sample median of the outcome. The package implements a number of methods that estimate mean-based outcome measures as well as methods that estimate median-based outcome measures. We developed the package in response to the surge of methodological development in this area in recent years and the lack of a comprehensive software tool for such analyses. The following subsections discuss some related software, limitations, and future directions. \subsection{Related software} In prior work, we developed the \pkg{estmeansd} R package and Shiny web application (\url{https://smcgrath.shinyapps.io/estmeansd/}), which implement the mean and standard deviation estimators proposed by McGrath et al.\ \cite{mcgrath2020estimating} and Cai et al.\ \cite{cai2021estimating} from $S_1$, $S_2$, and $S_3$ summary statistics as well as the corresponding bootstrap SE estimators \cite{mcgrath2023standard}. This package is internally applied in \pkg{metamedian} when applying the relevant mean-based methods.
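To illustrate what \pkg{estmeansd} does when it is applied internally, the short sketch below estimates a single study's mean and standard deviation from $S_2$ summary statistics with the MLN approach, using the group 1 summary data of the third study displayed in the \verb|dat.age| excerpt in Section \ref{sec: data set}; it assumes the \verb|mln.mean.sd| interface of \pkg{estmeansd} (quartile-based arguments and returned \verb|est.mean| and \verb|est.sd| components).
\begin{verbatim}
library("estmeansd")
set.seed(1234)  # the MLN estimator may involve random sampling
fit <- mln.mean.sd(q1.val = 65, med.val = 76, q3.val = 82, n = 31)
fit$est.mean  # estimated sample mean
fit$est.sd    # estimated sample standard deviation
\end{verbatim}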
In fact, one motivation for developing \pkg{metamedian} was to facilitate applying \pkg{estmeansd} in meta-analyses and comparing these methods to other mean-based methods and median-based methods in the literature. Data analysts should be aware of a few other software options that implement methods to meta-analyze studies reporting medians. The \pkg{meta} R package \cite{balduzzi2019} recently added some mean-based methods (i.e., Wan et al.\ \cite{wan2014estimating}, Luo et al.\ \cite{luo2018optimally}, Shi et al.\ \cite{shi2020optimally}, McGrath et al.\ \cite{mcgrath2020estimating}, and Cai et al.\ \cite{cai2021estimating}, where the latter two are applied via the \pkg{estmeansd} package). The current development version of the \pkg{metafor} R package \cite{metafor} (i.e., version 3.9-21) includes some mean-based methods as well (i.e., Hozo et al.\ \cite{hozo2005estimating}, Walter and Yao \cite{walter2007effect}, Wan et al.\ \cite{wan2014estimating}, Luo et al.\ \cite{luo2018optimally}, and Shi et al.\ \cite{shi2020optimally, shi2020estimating}). Both \pkg{meta} and \pkg{metafor} use the imputed sample means and standard deviations in place of the actual (unreported) sample means and standard deviations in standard meta-analytic approaches (i.e., they use the naïve SE estimator). Last, the \pkg{metaBLUE} R package \cite{metaBLUE} implements the meta-analytic model proposed by Yang et al. \cite{yang2021generalized} to estimate a global mean or global difference of means. A key advantage of the \pkg{metamedian} package is that it implements median-based methods, which are currently not available in other software. As previously discussed, prior studies \cite{mcgrath2019one, mcgrath2020meta,mcgrath2023standard} have found that mean-based methods can perform very poorly for highly skewed data and median-based methods may be preferable (and possibly the only option) in such settings. Moreover, by including both mean-based methods and median-based methods in a single software package with the same requirements on formatting the input data set and applying the main functions, we believe that \pkg{metamedian} can greatly facilitate performing sensitivity analyses based on using other methods in the package. A few other advantages of \pkg{metamedian} include the following: (1) \pkg{metamedian} offers within-study bootstrap SE estimators for the mean-based methods, which have been shown to often perform better than the naïve SE estimators. (2) \pkg{metamedian} offers the \verb|describe_studies| function to help data analysts with data cleaning and choosing suitable methods for their application. (3) \pkg{metamedian} includes several real data sets (e.g., \verb|dat.age|, \verb|dat.asat|, \verb|dat.ck|) of meta-analyses consisting of studies reporting medians, which can be useful for methodological development in this area. Finally, it should be noted that there are a few software options apart from \pkg{estmeansd} for estimating the sample mean and standard deviation from $S_1$, $S_2$, and $S_3$ summary statistics. Specifically, the \pkg{metaBLUE} R package \cite{metaBLUE} implements the mean and standard deviation estimators of Wan et al. \cite{wan2014estimating}, Luo et al. \cite{luo2018optimally}, and Yang et al. \cite{yang2021generalized}, which is also internally applied in \pkg{metamedian}. 
The \pkg{ABCMETAapp} Shiny web application \cite{kwon2021abcmetaapp} implements the approximate Bayesian computation (ABC) approach of Kwon and Reis \cite{kwon2015simulation} to estimate the mean and standard deviation. \subsection{Limitations and future directions} The \pkg{metamedian} package has a few limitations that should be acknowledged. First, the package is restricted to performing univariate meta-analyses of one-group studies and two-group studies. While some of the methods implemented in the package are applicable in more general settings (e.g., multivariate/multilevel meta-analysis, network meta-analysis, fully Bayesian estimation strategies in meta-analysis), we restricted the package to perform univariate meta-analyses to simplify the user interface. If users are interested in performing meta-analyses that do not fall within the scope of the \pkg{metamedian} package, they can set the \verb|pool_studies| argument (in the \verb|metamean| and \verb|metamedian| functions) to \verb|FALSE| in order to obtain the study-specific outcome measure estimates and their SE estimates, which can be used in conjunction with other software options to perform the pooling stage of the analysis. Second, while we strove to make the package as comprehensive as possible, the package does not include some mean and standard deviation estimators developed in this context. For example, we did not include the ABC approach of Kwon and Reis \cite{kwon2015simulation} because it is computationally intensive and its performance can be highly sensitive to a number of settings (e.g., the assumed outcome distribution, prior distributions, acceptance thresholds) that may need to be carefully tailored for each primary study. The mean and standard deviation estimators included in the current version of \pkg{metamedian} are those that we believe users are most likely to apply based on the current literature and that typically do not need to be highly tailored for each primary study. We aim to expand \pkg{metamedian} in a few directions in the future. For instance, we would like to include additional outcome measures for the mean-based methods, such as the standardized difference of means and the ratio of means. Similarly, some of the median-based methods can conceivably be extended for other outcome measures (e.g., the ratio of medians). However, we did not include such outcome measures in the current version of \pkg{metamedian} because they first require methodological work. More generally, we anticipate adding new methods to the package as they continue to be developed. We also welcome those working in this area to contribute to the development version of the packages on GitHub. \section*{Acknowledgements} We thank Alexandra Zimmer, Alexander Seitel, and Claudia Denkinger for helping collect the data sets used in the examples. This work was supported by the National Science Foundation Graduate Research Fellowship Program under Grant No.\ DGE2140743. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. \section*{Conflict of interest} The authors declare no potential conflict of interest. \section*{Data availability statement} The source code of the software and data presented in this paper are publicly available on GitHub (\url{https://github.com/stmcg/metamedian}).
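For readers who wish to try methods as they are added to the development version mentioned above, the following minimal sketch installs the package from the GitHub repository given in the data availability statement; it assumes the repository builds as a standard R package and that the \pkg{remotes} package is used for installation.
\begin{verbatim}
## Install the development version from GitHub
install.packages("remotes")
remotes::install_github("stmcg/metamedian")
\end{verbatim}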
\printbibliography \end{document} \section{Additional details on the Confidence Distribution method} Recall that Ozturk and Balakrishnan \cite{ozturk2020meta} proposed a median-based method for meta-analyzing one-group studies, which they referred to as the Confidence Distribution (CD) method. This method is based on performing an inverse-variance weighted meta-analysis, where one nonparametrically estimates the within-study SEs from various sets of summary statistics. Unlike most of the methods implemented in \pkg{metamedian} -- which consider that primary studies report $S_1$, $S_2$, or $S_3$ summary statistics of the outcome or the sample mean, standard deviation, and sample size -- Ozturk and Balakrishnan \cite{ozturk2020meta} consider different sets of summary statistics that may be reported by the primary studies. In this section, we describe these other sets of summary statistics and how to incorporate them when using \pkg{metamedian}. To describe the summary statistics considered by Ozturk and Balakrishnan \cite{ozturk2020meta}, we use the following notation. We observe an independent and identically distributed sample of $n$ observations of the outcome of interest, denoted by $\{X_i\}_{i = 1}^n$. Let $X_{(i)}$ denote the $i$th order statistic. Let $\xi$ denote the median of the distribution. Ozturk and Balakrishnan \cite{ozturk2020meta} consider the following sets of summary statistics that may be reported by a primary study: \begin{itemize} \item[$C_1$:] Two central order statistics, $X_{(r)}$ and $X_{(s)}$ where $1 < r < s < n$, that can form a $100(1-\alpha)\%$ confidence interval (CI) for $\xi$ such that \begin{equation} \label{eq: ci} P(X_{(r)} \leq \xi \leq X_{(s)}) \geq 1 - \alpha_1 - \alpha_2 = 1 - \alpha \end{equation} where $P(X_{(r)} \geq \xi) \leq \alpha_1$ and $P(X_{(s)} \leq \xi) \leq \alpha_2$. \item[$C_2$:] A $100(1-\alpha)\%$ CI $[L, U] = [X_{(r)}, X_{(s)}]$ for $\xi$ such that (\ref{eq: ci}) holds. \item[$C_3$:] The sample median and an estimate of its variance. \item[$C_4$:] The sample mean, standard deviation, and $n$. \item[$C_5$:] The sample median, first and third quartiles, and $n$. \end{itemize} Recall that the \verb|metamedian| function requires an input data set, \verb|data|, containing the summary data from the primary studies. If a primary study reports $C_1$ or $C_2$ summary statistics, users can specify the lower and upper bound of the CI in columns named \verb|med.ci.lb.g1| and \verb|med.ci.ub.g1|, respectively. The $\alpha_1$ and $\alpha_2$ values can be specified in columns named \verb|alpha.1.g1| and \verb|alpha.2.g1|, respectively. If a primary study reports $C_3$ summary statistics, users can specify the estimated variance of the median in a column named \verb|med.var.g1| (and can specify the median in a column named \verb|med.g1|). Recall that the main text discusses how to specify the $C_4$ and $C_5$ summary statistics in the input data set (i.e., in columns named \verb|mean.g1|, \verb|sd.g1|, \verb|n.g1|, \verb|med.g1|, \verb|q1.g1|, and \verb|q3.g1|).
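As a minimal sketch of this column naming, the toy example below lays out a one-group data set in which the first (hypothetical) study reports a 95\% CI for the median ($C_2$) and the second reports the median and an estimate of its variance ($C_3$); all values are made up for illustration, and the exact combinations of columns accepted by the CD method should be checked against the package documentation.
\begin{verbatim}
## Hypothetical one-group studies reporting C2 and C3 summary statistics
toy_cd <- data.frame(n.g1         = c(40,    55),
                     med.g1       = c(11.0,  12.0),
                     med.ci.lb.g1 = c(8.5,   NA),
                     med.ci.ub.g1 = c(14.0,  NA),
                     alpha.1.g1   = c(0.025, NA),
                     alpha.2.g1   = c(0.025, NA),
                     med.var.g1   = c(NA,    1.6))
## Such a data set can then be supplied to
## metamedian(data = toy_cd, median_method = "cd")
\end{verbatim}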
\begin{comment} \section{TBD} \begin{table}[H] \caption{Summary of mean estimators \label{tab: mean estimators}} \begin{center} \begin{tabular}{lllll} \hline Method & $S_1$ & $S_2$ & $S_3$ & Distributional assumption \\ \hline Hozo et al.\, \cite{hozo2005estimating} & \cmark & & & \\ Bland et al.\, \cite{bland2015estimating} & & & \cmark \\ Wan et al.\, \cite{wan2014estimating} & & \cmark & & Normal \\ Luo et al.\ \cite{luo2018optimally} & \cmark & \cmark & \cmark & Normal \\ Shi et al.\ \cite{shi2020optimally} & & & \cmark & Normal \\ Shi et al.\ \cite{shi2020estimating} & \cmark & \cmark & \cmark & Log-Normal \\ QE \cite{mcgrath2020estimating} & \cmark & \cmark & \cmark \\ BC \cite{mcgrath2020estimating} & \cmark & \cmark & \cmark \\ MLN \cite{cai2021estimating} & \cmark & \cmark & \cmark \\ Yang et al.\ \cite{yang2021generalized} & \cmark & \cmark & \cmark \\\hline \end{tabular} \end{center} \end{table} \end{comment} \section{Additional results from the main example} \setcounter{table}{0} \renewcommand{\thetable}{B\arabic{table}} Table \ref{tab: application with qui} summarizes the results of the analysis of our main example when including the study of Qi et al.\ \cite{qi2021clinical}. The code for performing the analysis is the same as that presented in Section 4 of the main text, replacing the data set \verb|dat.age| with \verb|dat.age_raw|. \begin{table}[H] \caption{Results of the meta-analysis comparing the age between survivors and nonsurvivors of COVID-19 when including the study of Qi et al.\ \cite{qi2021clinical}. The column titled ``Pooled Estimate [95\% CI]" displays the pooled estimate of the difference of means (mean age in the nonsurvivor group minus the mean age in the survivor group) for the mean-based methods and displays the pooled estimate of the difference of medians for the median-based methods. A value of NA (Not Applicable) is displayed when the method does not provide an estimate of the corresponding parameter. The unit of measurement for age is years.\label{tab: application with qui}} \begin{center} \begin{tabular}{llll} \hline Method & Pooled Estimate [95\% CI] & $\hat{\tau}^2$ [95\% CI] & $\hat{I}^2$ [95\% CI] \\ \hline Mean-Based Methods \\ \,\,\,\, Wan et al. & 13.28 [11.58, 14.97] & 30.86 [19.20, 50.77] & 89\% [83\%, 93\%] \\ \,\,\,\, Luo et al. / Wan et al.$^*$ & 13.28 [11.59, 14.97] & 30.71 [19.12, 50.66] & 89\% [83\%, 93\%]\\ \,\,\,\, Shi et al. & 12.81 [11.19, 14.43] & 27.54 [16.98, 46.17] & 87\% [81\%, 92\%]\\ \,\,\,\, $\text{QE}_{\text{mean}}$ & 13.15 [11.55, 14.76] & 26.56 [16.26 45.30] & 84\% [77\%, 90\%] \\ \,\,\,\, BC & 13.33 [11.72, 14.93] & 25.30 [14.53, 41.57] & 82\% [72\%, 88\%] \\ \,\,\,\, MLN & 12.77 [11.16, 14.39] & 27.44 [16.93, 46.18] & 86\% [80\%, 91\%] \\ \,\,\,\, Yang et al. & 13.23 [11.52, 14.93] & 31.26 [19.40, 51.43] & 88\% [82\%, 92\%]\\ Median-Based Methods \\ \,\,\,\, MDM & 13.00 [11.51, 16.00] & NA & NA \\ \,\,\,\, Weighted MDM & 13.00 [8.00, 16.00] & NA & NA \\ \,\,\,\, $\text{QE}_{\text{median}}$ & 13.15 [11.37, 14.94] & 33.59 [20.74, 56.16] & 87\% [80\% 92\%] \\\hline \end{tabular} \end{center} \caption*{$^*$We applied the mean estimator of Luo et al. and the standard deviation estimator of Wan et al.} \end{table} \section{Additional examples} In this section, we analyze two additional example data sets from the meta-analysis of Katzenschlager et al.\ \cite{katzenschlager2021can}. Specifically, we meta-analyze aspartate transaminase (ASAT) and creatine kinase (CK) levels in the group of COVID-19 nonsurvivors. 
We chose these additional examples for two main reasons: 1) Unlike the outcome of age in the main example, the distribution of ASAT levels is moderately skewed and the distribution of CK levels is highly skewed in most of the primary studies. When meta-analyzing skewed outcomes, the different methods in the \pkg{metamedian} package can lead to considerably different results. This underscores the importance of \pkg{metamedian} allowing data analysts to choose from various methods and to easily perform sensitivity analyses with different methods. 2) These examples are meta-analyses of single group data, unlike the main example which meta-analyzes two-group data. This allows us to illustrate using \pkg{metamedian} to meta-analyze other outcome measures (i.e., the mean and median of the outcome) and to apply the CD method \cite{ozturk2020meta}. \subsection{Aspartate transaminase levels in COVID-19 nonsurvivors} \setcounter{table}{0} \renewcommand{\thetable}{C\arabic{table}} \setcounter{figure}{0} \renewcommand{\thefigure}{C\arabic{figure}} The data sets \verb|dat.asat| and \verb|dat.asat_raw| in \pkg{metamedian} contain the summary data of ASAT levels in COVID-19 survivors and nonsurvivors, where \verb|dat.asat_raw| includes the primary study of Qi et al.\ \cite{qi2021clinical} and \verb|dat.asat| does not. For simplicity, we use \verb|dat.asat| in this subsection. The first five rows of \verb|dat.asat| (excluding the \verb|author| column) are displayed below. The group 1 values correspond to nonsurvivor group, and the group 2 values correspond to the survivor group. The unit of measurement for the ASAT levels is U/L. \begin{verbatim} n.g1 q1.g1 med.g1 q3.g1 mean.g1 sd.g1 n.g2 q1.g2 med.g2 q3.g2 mean.g2 sd.g2 1 29 NA NA NA 104.56 117.76 351 NA NA NA 64.38 234.43 2 65 30.00 43.0 68.0 NA NA 274 22.00 29.0 43.00 NA NA 3 12 27.25 38.0 70.5 NA NA 9 36.50 112.0 293.25 NA NA 4 13 28.75 38.0 64.5 NA NA 40 20.00 25.0 35.00 NA NA 5 44 30.00 37.0 52.0 NA NA 40 32.25 38.5 57.25 NA NA \end{verbatim} Since \verb|dat.asat| includes columns for the survivor group, we need to subset \verb|dat.asat| in order for the \verb|metamean| and \verb|metamedian| functions to meta-analyze the mean and median ASAT levels, respectively, in the nonsurvivor group. This can be performed as follows: \begin{verbatim} my_data <- dat.asat[, c("author", "q1.g1", "med.g1", "q3.g1", "mean.g1", "sd.g1", "n.g1")] \end{verbatim} Next, we perform the descriptive analyses by applying the command \verb|describe_studies(my_data)|, which produces the following output: \begin{verbatim} DESCRIPTION OF PRIMARY STUDIES N. studies: 26 N. studies reporting the median: 23 N. studies reporting S1 (min, med, max, n): 0 N. studies reporting S2 (q1, med, q3, n): 23 N. studies reporting S3 (min, q1, med, q3, max, n): 0 N. studies reporting the mean: 3 N. studies reporting the mean, sd, and n: 3 Bowley skewness Minimum: -0.1917 First quartile: 0.1901 Median: 0.2903 Mean: 0.2686 Third quartile: 0.3818 Maximum: 0.5556 \end{verbatim} Observe that most primary studies report $S_2$ summary statistics and have Bowley skewness values greater than 0.2, indicating a moderate degree of right skewness in the distribution of ASAT levels. Next, we apply the same mean-based methods and median-based methods as described in Section 4 of the main text. In addition, since this example is a meta-analysis of single group data, we also apply the CD method \cite{ozturk2020meta} in a random effects meta-analysis. The code for performing these analyses is given below. 
The code took approximately three minutes to run on the same computer as described in Section 4 of the main text. \begin{verbatim} ## Applying mean-based methods set.seed(1234) res_wan <- metamean(my_data, mean_method = "wan", se_method = "naive", sd_method = "wan") res_luo <- metamean(my_data, mean_method = "luo", se_method = "naive", sd_method = "wan") res_shi <- metamean(my_data, mean_method = "shi_lognormal", se_method = "naive", sd_method = "shi_lognormal") res_qe_mean <- metamean(my_data, mean_method = "qe", se_method = "bootstrap") res_bc <- metamean(my_data, mean_method = "bc", se_method = "bootstrap") res_mln <- metamean(my_data, mean_method = "mln", se_method = "bootstrap") res_yang <- metamean(my_data, mean_method = "yang", se_method = "plugin") ## Applying median-based methods res_mm <- metamedian(my_data, median_method = "mm") res_wm <- metamedian(my_data, median_method = "wm") res_qe_median <- metamedian(my_data, median_method = "qe") res_cd <- metamedian(my_data, median_method = "cd") \end{verbatim} Table \ref{tab: application asat} summarizes the results. The mean-based methods that assume that the outcome is normally distributed (i.e., Wan et al.\ \cite{wan2014estimating}, Luo et al.\ \cite{luo2018optimally}, and Yang et al.\ \cite{yang2021generalized}) gave pooled mean estimates of approximately 46 U/L, whereas the mean-based methods that do not assume normality (i.e., Shi et al.\ \cite{shi2020estimating}, $\text{QE}_{\text{mean}}$ \cite{mcgrath2020estimating}, BC \cite{mcgrath2020estimating}, and MLN \cite{cai2021estimating}) gave pooled mean estimates of approximately 50 U/L. All of the median-based methods gave pooled median estimates of approximately 42 U/L. The observed differences in the pooled estimates between the various methods may be attributed to ASAT levels being right skewed in many of the primary studies. The mean-based methods assuming normality had smaller estimates of $\tau^2$ compared to those that do not assume normality. The trends in the $I^2$ estimates were less clear, which may be attributed to $I^2$ strongly depending on both the distributional assumptions of the mean-based method as well as the choice of the within-study SE estimator. Moreover, all of the mean-based methods had larger estimates of between-study heterogeneity compared to the median-based methods. \begin{table}[H] \caption{Results of the meta-analysis of aspartate transaminase (ASAT) levels in COVID-19 nonsurvivors. The column titled ``Pooled Estimate [95\% CI]" displays the pooled estimate of mean ASAT level (in U/L) for the mean-based methods and displays the pooled estimate of the median ASAT level (in U/L) for the median-based methods. A value of NA (Not Applicable) is displayed when the method does not provide an estimate of the corresponding parameter. \label{tab: application asat}} \begin{center} \begin{tabular}{llll} \hline Method & Pooled Estimate [95\% CI] & $\hat{\tau}^2$ [95\% CI] & $\hat{I}^2$ [95\% CI] \\ \hline Mean-Based Methods \\ \,\,\,\, Wan et al. & 45.93 [42.35, 49.50] & 57.20 [37.68, 286.85] & 82\% [76\%, 96\%] \\ \,\,\,\, Luo et al. / Wan et al.$^*$ & 46.13 [42.53, 49.74] & 58.36 [38.20, 287.27] & 83\% [76\%, 96\%]\\ \,\,\,\, Shi et al. 
& 51.37 [46.57, 56.17] & 112.94 [75.70, 484.90] & 88\% [83\%, 97\%]\\ \,\,\,\, $\text{QE}_{\text{mean}}$ & 49.45 [45.14, 53.76] & 73.93 [42.08, 365.46] & 76\% [64\%, 94\%] \\ \,\,\,\, BC & 49.57 [45.27, 53.87] & 69.90 [35.46, 332.15] & 73\% [58\%, 93\%] \\ \,\,\,\, MLN & 49.86 [45.39, 54.32] & 92.16 [60.54, 429.83] & 83\% [77\%, 96\%] \\ \,\,\,\, Yang et al. & 45.76 [42.21, 49.31] & 55.74 [35.60, 275.11] & 81\% [73\%, 95\%]\\ Median-Based Methods \\ \,\,\,\, MDM & 41.50 [38.00, 43.89] & NA & NA \\ \,\,\,\, Weighted MDM & 43.40 [42.00, 47.00] & NA & NA \\ \,\,\,\, $\text{QE}_{\text{median}}$ & 42.58 [39.58, 45.58] & 30.50 [20.36, 264.74] & 63\% [54\%, 94\%] \\ \,\,\,\, CD & 42.53 [39.41, 45.65] & 34.26 [NA, NA] & NA \\\hline \end{tabular} \end{center} \caption*{$^*$We applied the mean estimator of Luo et al. and the standard deviation estimator of Wan et al.} \end{table} Last, we generate a forest plot corresponding to the analysis with the MLN mean-based method. The code for generating the forest plot is given below, and the forest plot is given in Figure \ref{fig:forest asat}. \begin{verbatim} forest(res_mln, header = c("Study", "Mean [95% CI]"), slab = my_data$author, xlab = "Mean ASAT (U/L)") \end{verbatim} \begin{figure} [H] \centering \includegraphics[width=\textwidth]{Forest-ASAT.pdf} \caption{Forest plot showing the mean ASAT level in COVID-19 nonsurvivors. When a primary study reported the median, the MLN method was applied to estimate the mean and parametric bootstrap was applied to estimate its standard error \cite{cai2021estimating, mcgrath2023standard}. \label{fig:forest asat}} \end{figure} \subsection{Creatine kinase levels in COVID-19 nonsurvivors} The data sets \verb|dat.ck| and \verb|dat.ck_raw| in \pkg{metamedian} contain the summary data of CK levels in COVID-19 survivors and nonsurvivors. Similar to the previous example, \verb|dat.ck_raw| includes the primary study of Qi et al.\ \cite{qi2021clinical}. Once again, we use the data set excluding the primary study of Qi et al.\ \cite{qi2021clinical} in this subsection. The first five rows of \verb|dat.ck| (excluding the \verb|author| column) are below. As in the previous examples, the group 1 values correspond to the nonsurvivor group, and the group 2 values correspond to the survivor group. The unit of measurement for the CK levels is U/L. \begin{verbatim} n.g1 q1.g1 med.g1 q3.g1 mean.g1 sd.g1 n.g2 q1.g2 med.g2 q3.g2 mean.g2 sd.g2 1 29 NA NA NA 1033.45 1754.19 351 NA NA NA 216.38 474.51 2 65 50.00 84.0 222.0 NA NA 274 40.0 60.0 97.00 NA NA 3 12 38.75 96.0 415.5 NA NA 9 63.0 100.5 2322.75 NA NA 4 48 106.70 251.0 392.0 NA NA 185 59.0 93.0 182.20 NA NA 5 11 NA NA NA 57.00 36.80 25 NA NA NA 85.00 69.00 \end{verbatim} As in the previous example, we need to subset \verb|dat.ck| in order to meta-analyze the mean and median CK level in the nonsurvivor group. The code for performing this is the same as in the previous example when replacing \verb|dat.asat| with \verb|dat.ck|. Moreover, since the code for performing the descriptive analyses and the meta-analyses is identical to that in the previous example after subsetting the data, we do not display the code throughout this example for parsimony. The output of the descriptive analyses is given below: \begin{verbatim} DESCRIPTION OF PRIMARY STUDIES N. studies: 17 N. studies reporting the median: 15 N. studies reporting S1 (min, med, max, n): 0 N. studies reporting S2 (q1, med, q3, n): 15 N. studies reporting S3 (min, q1, med, q3, max, n): 0 N. studies reporting the mean: 2 N.
studies reporting the mean, sd, and n: 2 Bowley skewness Minimum: -0.0116 First quartile: 0.2934 Median: 0.3840 Mean: 0.4381 Third quartile: 0.6076 Maximum: 0.8416 \end{verbatim} Observe that most primary studies report $S_2$ summary statistics and that the skewness of the outcome in the primary studies is somewhat larger than that of the previous example. Most studies had Bowley skewness values greater than 0.3, and one study (Zhang et al.\ \cite{zhang2020obesity}) had a Bowley skewness value as large as 0.84. Next, we apply the same methods as described in the previous example to meta-analyze the mean and median CK level. The code for performing the analyses took approximately two minutes to run on the same computer as described in Section 4 of the main text. Table \ref{tab: application ck} summarizes the results. The mean-based methods that assume that the outcome is normally distributed gave pooled estimates ranging from 181 U/L to 187 U/L, whereas the mean-based methods that do not assume normality gave pooled mean estimates ranging from 221 U/L to 257 U/L. All the median-based methods gave similar pooled median estimates of approximately 140 U/L. As in the previous example, the discrepancies between the pooled estimates of these methods are presumably due to the skewness of the distribution of CK levels in the primary studies. The estimates of between-study heterogeneity were very large for all of the methods. The trends in the estimates of between-study heterogeneity for the various methods were less clear in this example compared to the previous one. The results for the $\text{QE}_{\text{mean}}$ and BC mean-based methods should be interpreted with some caution. Both of these methods had very large within-study SE estimates for the primary study of Yang et al.\ \cite{yang2020extracorporeal}, and the $\text{QE}_{\text{mean}}$ method additionally had a very large within-study SE estimate for the primary study of Zhang et al.\ \cite{zhang2020obesity}. These two primary studies had very small sample sizes ($n = 8$ and $n = 12$, respectively), and the bootstrap SE estimators have been found to perform poorly for such small sample sizes \cite{mcgrath2023standard}. The presence of very large within-study SE estimates can cause instability when estimating between-study heterogeneity. \begin{table}[H] \caption{Results of the meta-analysis of creatine kinase (CK) levels in COVID-19 nonsurvivors. The column titled ``Pooled Estimate [95\% CI]" displays the pooled estimate of mean CK level (in U/L) for the mean-based methods and displays the pooled estimate of the median CK level (in U/L) for the median-based methods. A value of NA (Not Applicable) is displayed when the method does not provide an estimate of the corresponding parameter. \label{tab: application ck}} \begin{center} \begin{tabular}{llll} \hline Method & Pooled Estimate [95\% CI] & $\hat{\tau}^2$ [95\% CI] & $\hat{I}^2$ [95\% CI] \\ \hline Mean-Based Methods \\ \,\,\,\, Wan et al. & 184.77 [134.21, 235.34] & 9272.31 [5962.93, 66248.33] & 95\% [92\%, 99\%] \\ \,\,\,\, Luo et al. / Wan et al.$^*$ & 187.16 [136.38, 237.95] & 9365.94 [6004.80, 66146.39] & 95\% [92\%, 99\%]\\ \,\,\,\, Shi et al. 
& 244.54 [180.69, 308.38] & 12499.12 [9871.40, 138972.32] & 92\% [90\%, 99\%]\\ \,\,\,\, $\text{QE}_{\text{mean}}$ & 257.12 [175.24, 339.00] & 13536.63 [3262.90 70239.81] & 88\% [64\%, 97\%] \\ \,\,\,\, BC & 234.82 [169.24, 300.41] & 9108.76 [2642.80, 57973.02] & 84\% [60\%, 97\%] \\ \,\,\,\, MLN & 220.69 [160.29, 281.09] & 10015.95 [6184.21, 120680.56] & 90\% [85\%, 99\%] \\ \,\,\,\, Yang et al. & 180.91 [131.54, 230.28] & 8760.71 [5733.60, 66314.43] & 94\% [92\%, 99\%]\\ Median-Based Methods \\ \,\,\,\, MDM & 137.00 [97.68, 188.41] & NA & NA \\ \,\,\,\, Weighted MDM & 142.00 [137.00, 189.00] & NA & NA \\ \,\,\,\, $\text{QE}_{\text{median}}$ & 145.42 [102.61, 188.23] & 6187.63 [4436.58, 66290.36] & 91\% [88\% 99\%] \\ \,\,\,\, CD & 140.57 [96.20, 184.93] & 4121.38 [NA, NA] & NA \\\hline \end{tabular} \end{center} \caption*{$^*$We applied the mean estimator of Luo et al. and the standard deviation estimator of Wan et al.} \end{table} For consistency with the previous examples, we illustrate a forest plot of the analysis based on the MLN mean-based method in Figure \ref{fig:forest ck}. \begin{figure} [H] \centering \includegraphics[width=\textwidth]{Forest-CK.pdf} \caption{Forest plot showing the mean CK level in COVID-19 nonsurvivors. When a primary study reported the median, the MLN method was applied to estimate the mean and parametric bootstrap was applied to estimate its standard error \cite{cai2021estimating, mcgrath2023standard}. \label{fig:forest ck}} \end{figure} \printbibliography \end{document}
{ "arxiv_id": "2302.14261", "language": "en", "timestamp": "2023-03-01T02:07:11", "url": "https://arxiv.org/abs/2302.14261", "yymm": "2302" }
\section{Introduction} \label{sec:intro} Transformers have achieved tremendous success in computer vision \cite{han2020survey} in a variety of image-based tasks, including object detection \cite{zhou2017scene}, semantic segmentation \cite{liu2021swin}, and image recognition \cite{dosovitskiy2020image,atienza2021vision}. Since transformers were originally developed for natural language processing \cite{devlin2018bert}, considerable effort is required to design and train transformers for vision tasks \cite{chen2020generative, feng2020scene}. One attractive idea \cite{dosovitskiy2020image} is to apply a pure transformer, called ViT, directly to sequences of image patches, significantly reducing the required computational resources. \begin{figure} \begin{center} \includegraphics[width=0.9 \columnwidth]{fig1.pdf} \end{center} \caption{\textbf{Pure transformers for scene text recognition.} (a) ViTSTR (Atienza, 2021), an existing transformer for monolingual scene text recognition. (b) TANGER: The proposed augmented transformer architecture with adaptive n-grams embedding and cross-language rectification for multilingual scene text recognition, where $Loss_{V}$ denotes the loss of the vision model and $Loss_{clr}$ denotes the cross-language rectification loss. } \label{fig1} \end{figure} Scene text recognition (STR), one class of important and challenging tasks involving both image and text features, aims to identify texts in natural scenes such as object labels, street and road signs \cite{zhu2016scene}. On the basis of ViT, an image patch-based transformer for STR, called ViTSTR, has been proposed in \cite{atienza2021vision} to achieve an optimal trade-off between performance and speed. However, as illustrated in Fig.~\ref{fig1}(a), ViTSTR works directly on sequences of single image patches without considering their associated neighbors, which may be inadequate in dealing with more complex scenes such as multilingual scene texts. Despite their potentially competitive performance on STR tasks, little attention has been paid to adapting vision transformers to the specific challenges encountered in multilingual scene text recognition \cite{buvsta2018e2e}. Multilingual scene text recognition is particularly challenging in that it involves multi-scale, multi-orientation, and low-resolution images containing multiple languages. For instance, the scene texts in Fig.~\ref{fig1}(b) contain English and Chinese, which have different sizes and orientations, making it hard for existing transformers that use single image patches to efficiently recognize multilingual scene texts. To tackle the above challenges, we propose an augmented Transformer architecture with Adaptive N-Grams Embeddings and cross-language Rectification (TANGER) for multilingual scene text recognition. Our key hypothesis here is that adaptive patch-based n-grams embeddings can allow TANGER to deal with complex multilingual scene characters more flexibly. In addition to the single image patches for the primary vision transformer, the supplementary pyramid transformer takes various representations of the neighboring image patches as the input. The primary and supplementary vision transformers jointly learn the representation of multilingual scene text features by sharing their parameters, reducing the computational complexity of the proposed model.
To further improve the performance for multilingual scene text recognition, a loss function for cross-language rectification is designed for text prediction in the presence of multiple languages by considering the language category information as well as the coherence of a sequence of characters or words. We evaluate the proposed TANGER by comparing its performance with the state-of-the-art approaches on three public multilingual benchmark datasets including E2E-MLT \cite{buvsta2018e2e}, CTW1500 \cite{yuliang2017detecting}, and RCTW17 \cite{shi2017icdar2017}, and one monolingual dataset, Totaltext \cite{ch2017total}. In addition, comparative experiments are carried out on a new multilingual database we collected from tourism scenes in Indonesia, called TsiText, which consists of Indonesian, English, and Chinese. Our experimental results demonstrate that TANGER achieves state-of-the-art results on all multilingual and monolingual scene text recognition tasks considered in this work. The key contributions of this work are summarized as follows: \begin{itemize} \item We propose an augmented transformer architecture (TANGER) for multilingual scene text recognition by integrating a primary vision transformer with a supplementary pyramid transformer with n-grams embeddings. To our knowledge, this is the first pure vision transformer-based method for multilingual scene text recognition. \item An adaptive n-grams embedding method is designed to more flexibly extract key text features in locally related neighboring visual patches. The adaptive n-grams embedding can determine the optimal number of grams for different input image patches, which is highly beneficial for complex scene text recognition. \item To further enhance TANGER's capability of handling multilingual scene texts, we design a cross-language rectification loss function to account for language identification errors and text coherence. \end{itemize} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{fig2.pdf} \caption{\textbf{Overall architecture of TANGER}. The primary and supplementary transformers are employed to deal with regular and n-grams embedding visual information, respectively, where $F_{1}$, $F_{2}$, $F_{3}$ and $F_{4}$ are the feature maps. } \label{fig2} \end{figure*} \section{Related Work} \subsection{Scene Text Recognition} Existing work on STR \cite{xie2019aggregation} can be categorized into language-free and language-based approaches \cite{zhu2016scene}. Language-free methods do not consider the linguistic information between characters and focus on the visual textures for recognition \cite{tounsi2018multilingual}. Yao \textit{et al.} \cite{yao2014strokelets} combine various feature descriptors with localized individual characters, and present a multi-scale representation for STR. Shi \textit{et al.} \cite{shi2016end} propose a unified neural network architecture for STR that integrates feature extraction, sequence modeling, and transcription, considering STR as a pixel-wise classification task. Language-based methods focus on the language model with different structures, or consider the relationship between vision and language. For example, attention-based methods \cite{li2019show,zhan2019esir} are adopted for STR by following an end-to-end neural network model in recurrent neural networks \cite{shi2018aster}.
Fang \textit{et al.} \cite{fang2018attention} propose a text recognizer based on convolutional neural networks (CNNs) by fusing visual and language-based methods to boost the recognition performance. Little work on multilingual scene text recognition has been reported with only a few exceptions. Busta \textit{et al.} \cite{buvsta2018e2e} design probably the first method, called E2E-MLT, for multilingual scene text recognition on the basis of a single fully convolutional network, which is demonstrated to be competitive even on rotated and vertical text instances. An E2E approach to script identification with different recognition heads, called Multiplexed Multilingual Mask TextSpotter, was proposed in \cite{huang2021multiplexed}, which can support the removal of existing or inclusion of new languages. However, due to the diversity of scene texts, complexity of the background, large amounts of uncertainty \cite{qiao2020seed}, and different sizes of different language scripts, existing methods fail to work effectively on multilingual scene text recognition tasks. \subsection{Vision Transformers for STR} To benefit from the generated visual features following linguistic rules, increasing research interest has recently been dedicated to using transformers for STR \cite{lyu20192d, dosovitskiy2020image,na2021multi}, where the encoder extracts visual features and the decoder predicts characters in images. Owing to their parallel self-attention and prediction mechanisms, transformers can overcome the difficulties of sequential inference with diverse scene text features to some extent \cite{han2020survey}. Sporadic research efforts on adapting transformers to address various challenges in monolingual STR have been reported. For instance, to deal with images with different resolutions, Raisi \textit{et al.} \cite{raisi2020} develop a transformer-based architecture for recognizing texts in images by using a 2D positional encoder so that the spatial information of the features can be preserved. Biten \textit{et al.} \cite{biten2022latr} propose a layer-aware transformer with a pre-training scheme on the basis of text and spatial cues only and show that it works well on scanned documents to handle multimodality in scene text visual question answering. Based on ViT \cite{dosovitskiy2020image}, Tan \textit{et al.} \cite{tan2022pure} propose a mixture of experts of pure transformers for processing different resolutions for scene text recognition. Atienza \cite{atienza2021vision} presents a new transformer, called ViTSTR, for scene text recognition, which only uses the encoder architecture and emphasizes the balance between the performance and computational efficiency. Recently, Wang \textit{et al.} \cite{wang2022petr} explore the linguistic information between the local visual patches and propose transformer-based language rectification modules for optimizing word length and guiding rectification in monolingual scene text recognition. Despite the progress made on the application of visual transformers to STR tasks, it remains an open question how to explicitly identify inter-patch correlations for multilingual scene text recognition containing multi-scale, multi-orientation and low-resolution texts. \section{TANGER} \subsection{Overall Architecture} Our goal is to introduce a new vision transformer architecture for handling complex multilingual STR tasks.
The proposed TANGER, as shown in Fig.~\ref{fig2}, consists of a primary transformer, which is a standard vision transformer, and a supplementary pyramid transformer with adaptive n-grams embeddings. The primary vision transformer focuses on the patch-based image features without considering the associated local patches, in which a dropout layer is adopted to alleviate overfitting. The supplementary pyramid transformer concentrates on effectively extracting visual features from the neighboring image patches with the help of the adaptive n-grams embedding, allowing us to deal with multi-scale and multi-orientation visual appearances of multilingual texts. Similar to \cite{atienza2021vision}, we use a single model architecture in the primary transformer, while in the supplementary transformer, three layers of pyramid transformer encoders are stacked to control the scale of feature maps in three stages. As shown in Fig.~\ref{fig2}, $F_{2}$, $F_{3}$ and $F_{4}$ are the feature maps extracted from the previous stage and serve as the inputs to encoder layers $L_1$, $L_2$, and $L_3$, respectively. To reduce the computational complexity of TANGER, the encoder in the primary transformer and all three encoder layers in the supplementary transformer share the same parameters. In our implementation, we first divide an input RGB image into non-overlapping patches, and the features of a patch are regarded as a concatenation of the raw pixel RGB values. For example, we use a patch size of $m \times m$, and split a given input image of size $H\times W\times 3$ into $\frac{HW}{m^{2}}$ patches, where $m$, $H$ and $W$ are parameters to be defined. After that, a linear embedding layer is applied to this raw-valued feature to project it to an arbitrary dimension $C$ for each patch. In this way, we can flexibly pass the embedded patches of different dimensions as well as a position embedding to the input of the two transformers. \subsection{Patch-based Adaptive n-grams Embedding} The vision transformer in \cite{atienza2021vision} only considers single image patches as a set of isolated vision words or characters without taking into account the correlation between the features in the neighboring image patches, thus limiting the model's ability to recognize complex scene texts. To obtain efficient and relevant features of the image patches, we treat each image patch as a bag of visual words \cite{yang2007evaluating} to capture the relationship between neighboring patches. To this end, we introduce an adaptive $n$-grams embedding method for determining an optimal $n$ for each image patch. As illustrated in Fig.~\ref{fig2}, the pyramid transformer architecture \cite{wang2021pyramid} with $n$-grams embedding is employed to flexibly capture the linguistic information in the neighboring patches. However, the value of $n$ for the $i$-th image patch $p_{i}$ must be determined individually to best capture the correlations between its neighboring patches. In this work, we set $n\in\{2,3,4,5\}$ according to our pilot studies. Then, we choose the optimal $n_i$ for $p_i$ as follows: \begin{equation} n_i(p_{i}) = \arg\max_{n}{P(p_{i}|p_{i-n+1}p_{i-n+2}...p_{i-1})},\quad n\in\{2,3,4,5\}, \label{eq1} \end{equation} where $P(.)$ is the probability that $p_{i}$ is correlated with its previous $(n-1)$ patches, and $p_{i-n+1}p_{i-n+2}...p_{i-1}$ represents the $n-1$ sequential local visual patches.
We can estimate $P(.)$ using a feature histogram of a group of continuous patches built by using the bag of visual words (BVW) as suggested in \cite{yang2007evaluating,tripathi2022bag}: \begin{equation} {P(p_{i}|p_{i-n+1}p_{i-n+2}...p_{i-1})} = \frac{\max_{k=1}^K\{N(VW_{k})\}}{\sum_{k=1}^{K}N(VW_{k})}, \label{eq2} \end{equation} where $VW_{k}$ is the $k$-th visual word, $K$ is the total number of visual words in the histogram, and $N(VW_{k})$ refers to the frequency of features of the input image patches $p_{i-n+1}p_{i-n+2}...p_{i-1}p_{i}$ that are similar to each visual word in the histogram. An illustrative example is given in Fig.~\ref{fig3}, in which the input image is partitioned into nine patches. To determine the number of grams for $p_6$, we calculate $N(VW_{k}), k=1,\cdots, 5$ when $n=2,3,4,5$, respectively. Finally, the maximum probability is obtained when $n=2$. Consequently, bi-gram embedding is adopted for $p_6$. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig3.pdf} \caption{\textbf{An illustrative example of determining the optimal number of grams}. In this example, $p_6$ is the current patch, and bi-gram will be adopted for $p_6$. } \label{fig3} \end{figure} Since the length of the local patches differs from patch to patch (in the above example, the length varies from 2 to 5), we use a spatial pyramid pooling layer to obtain the final n-grams embedding representation that conforms to the input size of the transformer. This way, we can flexibly adjust the length of the local image patches most relevant to the current patch, enabling the model to extract the most effective features for complex multilingual scene texts. \subsection{Cross-language Rectification Loss} For multilingual scene text recognition, we must pay particular attention to the variability of the extracted features of the two transformers for different languages. To achieve this, we design a loss function by including an extra cross-language rectification loss in addition to the vision loss: \begin{equation} Loss_{total} = Loss_{v} + \alpha Loss_{clr}, \label{eq3} \end{equation} where $Loss_{v}$ denotes the loss of the vision model, which is set following \cite{atienza2021vision}, $Loss_{clr}$ represents the cross-language rectification loss, and $\alpha$ is the weight coefficient, which is set to 0.01 based on our pilot studies. $Loss_{clr}$ is defined as follows: \begin{equation} Loss_{clr} = L_{class} + L_{score}, \label{eq4} \end{equation} where $L_{score}$ is the loss of the word coherence scores, and $L_{class}$ is the loss of language class identification: \begin{equation} L_{class} = SoftCE(Maxpool(MLP(T_{pt}) + MLP(T_{st}))). \label{eq5} \end{equation} where $MLP(\cdot)$ is a multilayer perceptron (MLP) in the transformers that performs feature extraction, $T_{pt}$ and $T_{st}$ denote the extracted cross-language features from the vision transformer and pyramid transformer, respectively, and $SoftCE(.)$ is the soft cross entropy over language classes. With the help of the supplementary transformer, the coherence scores of the predicted characters in different patches are obtained by the linear inference layer in MLP, and the loss of the word coherence scores for multilingual texts can be estimated as follows: \begin{equation} L_{score} = -log (Linear (\arg max(y_{i}))), \label{eq7} \end{equation} where $y_{i}, i \in [1, maxlen]$, represents the inference result of the $i$-th character, and $maxlen$ is the maximum length of the predicted characters.
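As a concrete illustration of the adaptive selection rule in Eqs.~\eqref{eq1} and \eqref{eq2}, the following minimal Python sketch (not the implementation used in this work; the patch features and the visual-word codebook are random placeholders, and the function names are ours) scores each candidate $n$ by its bag-of-visual-words histogram and keeps the $n$ with the largest score.
\begin{verbatim}
import numpy as np

def bvw_histogram(patch_features, codebook):
    # N(VW_k): count how many patch feature vectors fall closest to
    # each visual word (row of the codebook).
    dists = np.linalg.norm(patch_features[:, None, :] - codebook[None, :, :],
                           axis=-1)
    words = dists.argmin(axis=1)
    return np.bincount(words, minlength=codebook.shape[0])

def select_optimal_n(patch_features, i, codebook, candidates=(2, 3, 4, 5)):
    # Choose the n maximizing max_k N(VW_k) / sum_k N(VW_k) over the
    # patch group p_{i-n+1} ... p_i (cf. the selection rule above).
    best_n, best_score = candidates[0], -1.0
    for n in candidates:
        group = patch_features[max(0, i - n + 1): i + 1]
        hist = bvw_histogram(group, codebook)
        score = hist.max() / max(hist.sum(), 1)
        if score > best_score:
            best_n, best_score = n, score
    return best_n

# Toy usage: nine 8-dimensional patch features and five visual words.
feats = np.random.rand(9, 8)
codebook = np.random.rand(5, 8)
print(select_optimal_n(feats, i=5, codebook=codebook))
\end{verbatim}
In practice the patch features would come from the embedding layer and the visual-word codebook from a clustering step, so the sketch above should only be read as a statement of the selection logic.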
\subsection{Discussion} Here, we discuss further the relationship between TANGER and ViTSTR \cite{atienza2021vision}. Like ViTSTR, TANGER is an image-based transformer model without relying on language resources such as dictionaries or corpora. Similar to the traditional vision transformer \cite{dosovitskiy2020image}, ViTSTR is mainly concerned with extracting features from a sequence of separate image patches. By contrast, TANGER explicitly takes into account the potential correlation between neighboring visual patches, which is of paramount importance for feature extraction in complex scenes, such as multi-scale, multi-orientation and multilingual texts \cite{nanda2019illumination}. \section{Experimental Results} In this section, we experimentally validate our proposed TANGER by comparing the performance with the state-of-the-art methods on several public datasets as well as a newly collected multilingual dataset, TsiText. First, we examine the performance of TANGER for multilingual scene text recognition in comparison with two end-to-end methods \cite{buvsta2018e2e, huang2021multiplexed} and one dictionary-guided method \cite{nguyen2021dictionary}. Then, we compare our model with the vision transformer ViTSTR \cite{atienza2021vision} in three variants, \textit{i.e.}, the tiny, small, and base versions, for monolingual scene text recognition. Finally, we present the results of ablation studies comparing TANGER with its two variants, one without the adaptive $n$-gram patch embedding method, and the other without cross-language rectification for multilingual scene text recognition. \subsection{Experimental Settings} \textbf{Datasets}\quad We validate the effectiveness of TANGER on three multilingual datasets, MLT17 \cite{buvsta2018e2e}, CTW1500 \cite{yuliang2017detecting}, RCTW17 \cite{shi2017icdar2017}, one monolingual dataset Totaltext \cite{ch2017total}, as well as a new multilingual dataset TsiText. MLT17 \cite{saha2020multi}, which comes from the ICDAR 2017 Robust Reading Competition, contains 7200 training, 1800 validation and 9000 testing natural scene images in Arabic, Latin, Chinese, Japanese, Korean, and Bangla. CTW1500 \cite{yuliang2017detecting} includes 1000 natural scene images for training and 500 for testing in English and Chinese languages. RCTW17 \cite{shi2017icdar2017} contains 8034 training and 4229 testing images, including scene texts primarily in Chinese and a few in English. Totaltext \cite{ch2017total} contains 1255 training images and 300 images for testing with 11459 annotated text instances of wild scenes in English. TsiText contains 3600 training, 600 validation, and 1800 testing natural images collected from tourism scenes. All images in TsiText are downloaded from the Internet and do not contain sensitive information. There are on average four text instances per scene image, with a maximum of 61 text instances. Each text instance contains four different attributes, including position, character content, scene class, and language, and is annotated in a similar way to \cite{yuliang2017detecting}. TsiText aims to provide complex multilingual scene texts containing rich and diverse travel-related scenes such as food, shop signs, traffic signs, directions, landmarks, and posters. It also includes multiple text types, including handwriting, print, and complex artistic styles. Finally, there is a certain degree of variety in text shapes, such as horizontal, multi-directional, curved, circular, and partially obscured texts, among others.
To the best of our knowledge, this is the richest dataset for multilingual tourism scene texts. \textbf{Metrics and Parameter Configurations} \quad We adopt the character-level accuracy as the metric for comparing the algorithms on multilingual scene text recognition tasks. In addition, the frequency of the word-level edit distance between the predicted and ground truth words is employed to compare the recognition performance of the algorithms under comparison on Latin and non-Latin language scene texts. To verify the effectiveness of TANGER on monolingual scene texts, we compare it with ViTSTR in terms of model parameters, speed, and FLOPS, as well as accuracy. During training, we adopt Adam with a mini-batch size of 192 to train the transformer models for 300 epochs from scratch with a learning rate of 0.001 on two A100 GPUs. \subsection{Performance on Multilingual Scene Texts} Table \ref{tab1} compares the performance of the proposed TANGER with that of three state-of-the-art algorithms for multilingual scene text recognition, namely ABCNet+D \cite{nguyen2021dictionary}, E2E-MLT \cite{buvsta2018e2e}, and Multiplexed \cite{huang2021multiplexed}. ABCNet+D \cite{nguyen2021dictionary} is a dictionary-based recognition algorithm that incorporates dictionaries to handle ambiguous cases. E2E-MLT \cite{buvsta2018e2e} is an end-to-end approach with a single fully convolutional network applicable to both Latin and non-Latin languages for multilingual scene text. Multiplexed \cite{huang2021multiplexed} proposes a unified loss combining a disentangled loss and an integrated loss, and trains multiple text recognition heads in an end-to-end manner for script identification in different languages. Note that E2E-MLT and Multiplexed are end-to-end methods for both text detection and recognition; however, this work focuses on the recognition performance of the compared algorithms. As listed in Table \ref{tab1}, we can see that TANGER achieves the best character-level accuracy on all four multilingual scene text recognition tasks. Compared with ABCNet+D, a dictionary-guided approach, TANGER can significantly improve the accuracy by 14.8\%, 11.9\%, 10.2\%, and 10.1\% on the MLT17, CTW1500, RCTW17, and TsiText datasets, respectively. The impressive results of TANGER may be attributed to the fact that visual feature extraction is more effective than using lexicons when dealing with a series of multilingual texts in complex scenes. Experimental results also show that TANGER consistently outperforms the two end-to-end approaches on all multilingual scene text recognition tasks. Specifically, TANGER achieves a state-of-the-art accuracy of 55.1\% and 89.9\% on the MLT17 and TsiText datasets, respectively. We surmise that the competitive performance of TANGER comes mainly from the more effective representation of the neighboring visual features, as well as the use of cross-language rectification for complex multilingual scene texts, which has also been verified in our ablation studies. Figure \ref{fig4} exemplifies the recognition results of TANGER on six complex testing images from the TsiText dataset. We see that TANGER can successfully recognize the texts in multiple languages in various complex scenes. On these six images, TANGER achieves an accuracy of 99.1\%, while ABCNet+D, E2E-MLT, and Multiplexed achieve 91.2\%, 94.6\% and 95.1\%, respectively, confirming the benefits of the proposed adaptive n-grams embedding and cross-language rectification mechanisms.
\begin{figure*} \centering \includegraphics[width=0.9\linewidth]{figmu.pdf} \caption{Multilingual text recognition results of TANGER on six complex images from the TsiText dataset. } \label{fig4} \end{figure*} To further demonstrate the performance of TANGER for multilingual scene text recognition on both Latin and non-Latin languages, we examine the histograms of edit distance \cite{saluja2017error} between pairs of predicted and ground-truth words for Arabic (non-Latin) and Latin language texts on the MLT17 dataset. Note that the smaller the edit distance, the smaller the recognition error of the algorithm is, and the words are recognized correctly when the edit distance is 0. From Fig.~\ref{fig5}, we observe that TANGER obtains the highest frequency of occurrences at an edit distance of 0 among all compared methods for both Arabic and Latin languages. In addition, the edit distance of TANGER is always smaller than 5 for Latin languages. Overall, we can conclude that TANGER outperforms the compared methods by observing the histograms of the edit distance, further demonstrating the effectiveness of TANGER for multilingual scene text recognition. \begin{figure} \centering \includegraphics[width=1.0 \linewidth]{fig5.pdf} \caption{Histograms of word frequencies at different edit distances for Arabic (non-Latin) and Latin language text on MLT17.} \label{fig5} \end{figure} \begin{table} \small \centering \caption{Recognition results in terms of character-level accuracy of the four compared methods on four multilingual scene text datasets.} \begin{tabular}{lcccc} \toprule Method & MLT17 & CTW1500 & RCTW17 & TsiText \\ \midrule ABCNet+D \cite{nguyen2021dictionary}& 40.3 & 73.3 & 63.2 & 79.8\\ E2E-MLT \cite{buvsta2018e2e}& 49.1 & 65.3 & 69.2 & 84.9\\ Multiplexed \cite{huang2021multiplexed}& 53.2 & 80.3 & 72.2 & 88.3 \\ TANGER(ours)& \textbf{55.1} & \textbf{85.2} & \textbf{73.4} & \textbf{89.9}\\ \bottomrule \end{tabular} \label{tab1} \end{table} \subsection{Performance on Monolingual Scene Texts} Here we evaluate TANGER for monolingual scene text recognition on the Totaltext dataset by comparing it with ViTSTR \cite{atienza2021vision}. Table \ref{tab2} reports the comparative results in terms of the accuracy, speed, model parameters, and FLOPS for three variants of the two compared models, namely the tiny, small, and base versions. Since the transformer-based ViTSTR model is limited to Latin language text recognition, we compare the two algorithms on Totaltext, which contains English texts only. From the results in Table \ref{tab2}, we can find that TANGER enhances the recognition accuracies by 2.1\%, 3.6\%, and 1.1\% in relative terms for the tiny, small, and base versions, respectively, compared to ViTSTR, without considerably slowing down the inference speed. Interestingly, we find that the coherence score loss designed for cross-language rectification in our method is also helpful for enhancing the recognition performance for monolingual tasks, which may be attributed to the supplementary transformer that takes neighboring patches into account for complex scene text extraction. Figure~\ref{fig6} provides some selected cases, where ViTSTR makes mistakes whilst TANGER correctly recognizes the scene texts in some artistic fonts, or multi-scale and multi-orientation texts.
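For completeness, the word-level edit distance underlying Fig.~\ref{fig5} is the standard Levenshtein distance between a predicted word and its ground truth; a minimal sketch with unit insertion, deletion, and substitution costs (our assumption, since the exact cost setting is not spelled out here) is given below.
\begin{verbatim}
def edit_distance(pred: str, gt: str) -> int:
    # Levenshtein distance with unit costs, computed with a single
    # rolling row of the dynamic-programming table.
    m, n = len(pred), len(gt)
    dp = list(range(n + 1))  # distances for the empty prediction prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # delete pred[i-1]
                        dp[j - 1] + 1,                      # insert gt[j-1]
                        prev + (pred[i - 1] != gt[j - 1]))  # substitute/match
            prev = cur
    return dp[n]

print(edit_distance("TANGER", "TANGFR"))  # prints 1
\end{verbatim}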
\begin{table} \small \centering \caption{Experimental results comparing the recognition accuracy, speed, model parameters, and FLOPS. Three configurations of TANGER and ViTSTR are compared, namely the tiny, small, and base versions as used in \cite{atienza2021vision}.} \begin{tabular}{lcccc} \toprule \multirow{2}{*}{Method} & Acc & Speed & Parameters & FLOPS \\ & \% & msec/image & $1 \times 10^{6}$ & $1\times 10^{9}$ \\ \midrule ViTSTR-Tiny & 80.6 & 8.1 & 5.2 & 1.6\\ ViTSTR-Small & 81.3 & 8.5 & 21.2 & 4.6\\ ViTSTR-Base& 84.9 & 9.1 & 83.7 & 17.3 \\ \hline TANGER-Tiny & 82.3 & 8.1 & 5.2 & 1.7\\ TANGER-Small& 84.2 & 9.2 & 22.4 & 5.1 \\ TANGER-Base& 85.8 & 9.5 & 84.6 & 18.2\\ \bottomrule \end{tabular} \label{tab2} \end{table} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{fig6.pdf} \caption{Six selected cases representing typical complex monolingual text scenarios, where ViTSTR makes mistakes while TANGER does not.} \label{fig6} \end{figure} \subsection{Ablation Studies} In this section, we conduct ablation studies on the MLT17 and TsiText datasets. Our goal is two-fold. First, to assess the importance of the proposed adaptive n-grams embeddings for multilingual text representation, we adopt different but fixed values of $n$ on local neighboring image patches and compare them with the proposed adaptive n-grams embeddings in TANGER. Then, we examine the effect of the proposed cross-language rectification loss function on TANGER's recognition performance. \textbf{Adaptive n-grams Patch Embedding} \quad The proposed adaptive n-grams embedding method aims to choose an optimal $n$ for each image patch. To demonstrate its benefit, we compare the performance of TANGER variants when $n$ is set to $2,3,4,5$, respectively. The comparative results are given in Table \ref{tab3}, from which we can clearly see that the adaptive n-grams embedding has led to significant improvements in the performance for multilingual scene text recognition on both MLT17 and TsiText. Specifically, the adaptive n-grams embedding can achieve a maximum performance increase of 1.9\% and 3.2\% on MLT17 and TsiText, respectively, compared with the best variant with a fixed $n$. A possible explanation is that the font sizes of different scene texts in different languages may differ dramatically, which requires an adaptive setting of the n-grams embedding. \begin{table} \small \centering \caption{Comparison of TANGER variants with different but fixed $n$ in n-grams embedding on MLT17 and TsiText.} \begin{tabular}{llc} \toprule Dataset & \quad Method & \quad Accuracy \\ \midrule \multirow{5}{*}{MLT17} & \quad TANGER(n=2) & 46.3 \\ & \quad TANGER(n=3) & 48.6 \\ & \quad TANGER(n=4) & 53.2 \\ & \quad TANGER(n=5) & 52.8 \\ & \quad \textbf{TANGER(ours)} & \textbf{55.1} \\ \hline \multirow{5}{*}{TsiText} & \quad TANGER(n=2) & 84.7 \\ & \quad TANGER(n=3) & 85.6 \\ & \quad TANGER(n=4) & 86.7 \\ & \quad TANGER(n=5) & 85.9 \\ & \quad \textbf{TANGER(ours)} & \textbf{89.9} \\ \bottomrule \end{tabular} \label{tab3} \end{table} \textbf{Cross-language Rectification} \quad Next, we consider the effect of the proposed cross-language rectification loss on the performance of TANGER on the MLT17 and TsiText datasets. We consider two metrics for comparison, including the character-based accuracy and the frequencies of the word-level edit distance. From the results in Table \ref{tab4}, we see that the accuracy of TANGER increases by 3.5\% on MLT17 and 2.3\% on TsiText when the cross-language rectification loss is used.
These results indicate that the cross-language rectification loss is able to improve recognition performance on multilingual scenes. \begin{table} \small \centering \caption{Results of TANGER on multilingual scene text recognition on MLT17 and TsiText with or without the cross-language loss function.} \begin{tabular}{lcc} \toprule \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Cross-language rectification used?} \\ & No & Yes \\ \midrule MLT17 & 51.6 & \textbf{55.1} \\ TsiText & 87.6 & \textbf{89.9}\\ \bottomrule \end{tabular} \label{tab4} \end{table} \section{Conclusions} We have proposed a novel transformer-based architecture with adaptive n-grams embeddings and cross-language rectification to tackle complex multilingual scene text recognition. To the best of our knowledge, this is the first pure vision transformer-based approach to multilingual scene text recognition. TANGER can leverage the local neighboring visual patches with the help of the proposed adaptive n-grams embedding method, which is highly beneficial for handling complex scene texts containing multi-scale and multi-orientation texts. In addition, a cross-language loss function is suggested on the basis of the primary and supplementary transformers to effectively recognize multilingual scene texts. Finally, a new database containing complex multilingual tourism scenes is introduced, providing a challenging benchmark for multilingual scene text recognition. In the future, we plan to extend the proposed method to an end-to-end approach that also includes text detection. In addition, we will explore the proposed augmented transformer architecture for multi-modal text recognition tasks. \section*{Acknowledgment} This work was supported in part by the National Natural Science Foundation of China under Grant No. 62006053 and in part by the Program of Science and Technology of Guangzhou under Grant No. 202102020878 and No. 202102080491. Y. Jin is funded by an Alexander von Humboldt Professorship for Artificial Intelligence endowed by the German Federal Ministry of Education and Research. \bibliographystyle{IEEEtran} \small
\section{Introduction}\label{Sec:intro} Multiquark states that are beyond the traditional quark-antiquark ($q\bar{q}$) mesons and three-quark ($qqq$) baryons have been one of the most interesting topics in hadron physics since the dawn of the quark model. In the past few decades, although a lot of multiquark states have been theoretically predicted or experimentally reported, no compelling multiquark candidates were unambiguously identified until 2015 when the LHCb Collaboration presented striking evidence for $J/\psi \, p$ resonances, named $P_c^+(4380)$ and $P_c^+(4450)$, in $\Lambda^0_b\to K^- J/\psi \, p$ decays \cite{Aaij:2015tga}. In 2019, the LHCb Collaboration further reported the $P_c^+(4312)$ state and a two-peak structure of the $P_c^+(4450)$ state which was resolved into $P_c^+(4440)$ and $P_c^+(4457)$ \cite{Aaij:2019}. Unlike the low-energy nucleon resonances whose excitation energies are hundreds of MeV and which can thus be accommodated as either excited three-quark states or baryon-meson states or compact pentaquark states, the $P_c$ states have excitation energies of more than $3$ GeV, definitely excluding the possibility of being excited three-quark configuration dominated states. Indeed, they are the most promising candidates for hidden-charm pentaquark states or baryon-meson states as predicted in Refs.\cite{Wu:2010jy,Wu:2010vk,Wang:2011prc,Yang:2011wz,Yuan:2012wz,Xiao:2013yca}. In the literature, there are many theoretical investigations on the nature of the $P_c$ states \cite{Guo:2017jvc,Chen:2016qju}. The fact that the reported masses of $P_c^+(4380)$ and $P_c^+(4457)$ lie just below the thresholds of $\bar{D}\Sigma_c^\ast$ and $\bar{D}^\ast\Sigma_c$ at $4382$ MeV and $4459$ MeV seems to strongly support the interpretation of $P_c^+(4380)$ and $P_c^+(4457)$ as hadronic molecules composed of $\bar{D} \Sigma_c^\ast$ and $\bar{D}^\ast \Sigma_c$, respectively. Analogously, in the light quark sector, as the masses of $N(1875){3/2}^-$ and $N(2080){3/2}^-$ are just below the thresholds of $K\Sigma^\ast$ and $K^\ast\Sigma$ at $1880$ MeV and $2086$ MeV, respectively, the $N(1875){3/2}^-$ and $N(2080){3/2}^-$ are proposed to be the strange partners of the $P_c^+(4380)$ and $P_c^+(4457)$ molecular states \cite{He:2017aps,Lin:2018kcc}. In Ref.\cite{Lin:2018kcc}, the decay patterns of $N(1875){3/2}^-$ and $N(2080){3/2}^-$ as $S$-wave $K\Sigma^\ast$ and $K^\ast\Sigma$ molecular states were calculated within an effective Lagrangian approach, and it was found that the measured decay properties of $N(1875){3/2}^-$ and $N(2080){3/2}^-$ can be reproduced well, supporting the molecular interpretation of the $N(1875){3/2}^-$ and $N(2080){3/2}^-$ states. The limited number of available data points for $\bar{D}\Sigma_c^\ast$ and $\bar{D}^\ast\Sigma_c$ interactions restricts, to some extent, our exploration of the nature of the $P_c^+(4380)$ and $P_c^+(4457)$ states. In contrast, the situation in the light quark sector is much better. So far, a large number of experimental data points on differential and total cross sections for $K\Sigma^\ast$ and $K^\ast\Sigma$ photoproduction are available \cite{Hleiqawi:2005sz,Hleiqawi:2007ad,Nanova:2008kr,Hwang2012,Wei:2013,Moriya:2013}, providing good opportunities to investigate the possible molecular scenario of the $N(1875){3/2}^-$ and $N(2080){3/2}^-$ states. In the present work, we focus on the $\gamma p\to K^{\ast +}\Sigma^0$ and $\gamma p \to K^{\ast 0}\Sigma^+$ reactions to test the effects of $N(2080){3/2}^-$ as a $K^\ast \Sigma$ molecular state on these reactions.
Note that in the most recent Particle Data Group (PDG) review \cite{PDG2022}, the two-star $N(2080){3/2}^-$ listed before the 2012 review has been split into two three-star states, i.e., the $N(1875){3/2}^-$ and $N(2120){3/2}^-$ states. For $N(1875){3/2}^-$, the Breit-Wigner mass and width are given as $1850 < W < 1920$ MeV and $120 < \Gamma < 250$ MeV, respectively. For $N(2120){3/2}^-$, the corresponding values are $2060 < W < 2160$ MeV and $260 < \Gamma < 360$ MeV, respectively. Since hadronic molecules are in general very shallowly bound, in Ref.\cite{Lin:2018kcc} the masses of $N(1875){3/2}^-$ and $N(2120){3/2}^-$ were taken as $1875$ MeV and $2080$ MeV, respectively, and the old name $N(2080){3/2}^-$ was used for the $N(2120){3/2}^-$ state. In the present work, we follow Ref.\cite{Lin:2018kcc} and use the same naming convention. The $K^\ast \Sigma$ photoproduction process has been investigated in several theoretical works by use of either the chiral quark model \cite{Zhao:2001jw} or effective Lagrangian approaches \cite{Oh:2006in,Kim:20132,Wang:2018vlv}. Our previous work of Ref.\cite{Wang:2018vlv} provides so far the most recent and most comprehensive analysis of the available data for $\gamma p\to K^{\ast +}\Sigma^0$ and $\gamma p \to K^{\ast 0}\Sigma^+$ reactions. In Ref.\cite{Wang:2018vlv}, it was found that the $K^\ast \Sigma$ photoproduction data can be well reproduced by introducing the $s$-channel $\Delta(1905)5/2^+$ resonance exchange in addition to the $t$-channel $K$, $\kappa$, $K^\ast$ exchanges, $s$-channel nucleon and $\Delta$ exchanges, $u$-channel $\Lambda$, $\Sigma$, $\Sigma^\ast$ exchanges, and the generalized contact term in constructing the reaction amplitudes. The $\Delta(1905)5/2^+$ resonance exchange was found to dominate the cross sections of $\gamma p\to K^{\ast +}\Sigma^0$ and provide considerable contributions to the cross sections of $\gamma p \to K^{\ast 0}\Sigma^+$ in the near-threshold energy region. In the present work, we re-analyze the data for $\gamma p\to K^{\ast +}\Sigma^0$ and $\gamma p \to K^{\ast 0}\Sigma^+$ within the effective Lagrangian approach as employed in Ref.\cite{Wang:2018vlv}. Our purpose is to investigate the effects of $N(2080){3/2}^-$ as $K^\ast \Sigma$ molecular state on $K^\ast \Sigma$ photoproduction reactions. Instead of introducing the $s$-channel $\Delta(1905)5/2^+$ resonance exchange as done in Ref.\cite{Wang:2018vlv}, we now consider the contributions from the $N(2080){3/2}^-$ molecule exchange in addition to the background contributions, i.e., the contributions from all diagrams other than the $\Delta(1905)5/2^+$ resonance exchange considered in Ref.\cite{Wang:2018vlv}. We concentrate on the low-energy region where the $N(2080){3/2}^-$ is expected to have prominent contributions. Our results show that the available data for $\gamma p\to K^{\ast +}\Sigma^0$ and $\gamma p \to K^{\ast 0}\Sigma^+$ can be well described in the energy region considered, indicating that the $K^\ast \Sigma$ molecular picture of $N(2080){3/2}^-$ is compatible with the available data of $K^\ast \Sigma$ photoproduction reactions. The contributions of the $N(2080){3/2}^-$ molecule to the cross sections are discussed. The reaction mechanisms are analyzed and compared with those extracted from Ref.\cite{Wang:2018vlv}.
The predictions of the beam asymmetry $\Sigma$, target asymmetry $T$, and recoil baryon asymmetry $P$ that can distinguish the reaction models constructed in the present work and Ref.\cite{Wang:2018vlv} are presented for future experiments. The paper is organized as follows. In Sec.\ref{Sec:forma}, we briefly introduce the framework of our theoretical model. In Sec.\ref{Sec:results}, the results of our theoretical calculations with some discussions are presented. Finally, we give a brief summary and conclusions in Sec.\ref{sec:summary}. \section{Formalism}\label{Sec:forma} \begin{figure}[tb] \centering {\vglue 0.15cm} \subfigure[~$s$ channel]{ \includegraphics[width=0.45\columnwidth]{sdao}} {\hglue 0.4cm} \subfigure[~$t$ channel]{ \includegraphics[width=0.45\columnwidth]{tdao}} \\[6pt] \subfigure[~$u$ channel]{ \includegraphics[width=0.45\columnwidth]{udao}} {\hglue 0.4cm} \subfigure[~Interaction current]{ \includegraphics[width=0.45\columnwidth]{contact}} \caption{Generic structure of the $K^\ast$ photoproduction amplitude for $\gamma p \rightarrow K^\ast \Sigma$. Time proceeds from left to right.} \label{fig1} \end{figure} In the effective Lagrangian approach, the amplitude of the $K^\ast\Sigma$ photoproduction process can be expressed as \begin{equation} \mathcal{M} = \mathcal{M}_s + \mathcal{M}_t + \mathcal{M}_u + \mathcal{M}_{\rm int}, \label{1} \end{equation} where $\mathcal{M}_s$, $\mathcal{M}_t$, and $\mathcal{M}_u$ denote the amplitudes obtained straightforwardly from the $s$-, $t$-, and $u$-channel tree-level Feynman diagrams, respectively, with $s$, $t$, and $u$ being the Mandelstam variables associated with the internally exchanged particles. The last term $\mathcal{M}_{\rm int}$ is the interaction current arising from the photon attaching to the internal structure of the $\Sigma N K^\ast$ interaction vertex. All these four terms in Eq.~\eqref{1} are diagrammatically depicted in Fig.~\ref{fig1}. As shown in Fig.~\ref{fig1}, the following contributions are considered in the present work: (i) $N$, $\Delta$, and $N(2080){3/2}^-$ molecule exchanges in the $s$ channel, (ii) $K$, $\kappa$, and $K^\ast$ exchanges in the $t$ channel, (iii) $\Sigma$, $\Lambda$, and $\Sigma^\ast$ exchanges in the $u$ channel, and (iv) the interaction current. Most parts of the formalism, including the Lagrangians, propagators, form factors attached to hadronic vertices, the gauge-invariance preserving term, and the interaction coupling constants, can be found in Ref.~\cite{Wang:2018vlv}. For simplicity, we do not repeat them here. In the following subsections, we just present the additional parts of the theoretical formalism. \subsection{Lagrangians and couplings for $N(2080){3/2}^-$} \label{Sec:2080} The $N(2080){3/2}^-$, which is treated as a bound state of $K^\ast$ and $\Sigma$, is considered in the present work to construct the $s$-channel reaction amplitude. The effective Lagrangian for the coupling of $N(2080){3/2}^-$ to $\Sigma K^\ast$ reads \begin{equation} \mathcal{L}_{K^\ast\Sigma R}^{3/2^-} = g_{K^\ast\Sigma R} {\bar R}_\mu \Sigma K^{\ast\mu} + \text{H.\,c.}, \end{equation} where $R \equiv N(2080){3/2}^-$.
Considering that the $N(2080){3/2}^-$ is assumed to be a pure $S$-wave molecular state of $K^\ast$ and $\Sigma$, the coupling constant $g_{K^\ast\Sigma R}$ can be estimated model-independently with the Weinberg compositeness criterion, which gives \cite{Lin:2017mtz,Baru:2003qq,Weinberg:1965zz} \begin{equation} g_{K^\ast\Sigma R}^2 = \frac{4\pi}{4 M_R M_\Sigma} \frac{\left(M_{K^\ast}+M_\Sigma\right)^{5/2}} {\left(M_{K^\ast} M_\Sigma\right)^{1/2}} \sqrt{32\,\epsilon}, \label{eq:coupling} \end{equation} where $M_R$, $M_{K^\ast}$, and $M_\Sigma$ denote the masses of $N(2080){3/2}^-$, $K^\ast$, and $\Sigma$, respectively, and $\epsilon$ is the $K^\ast\Sigma$ binding energy, \begin{equation} \epsilon \equiv M_{K^\ast} + M_\Sigma - M_R. \label{mass} \end{equation} Following Ref.~\cite{Lin:2018kcc}, we take the mass of $N(2080){3/2}^-$ to be $M_R=2080$ MeV. Then one gets from Eq.~\eqref{eq:coupling} \begin{equation} g_{K^\ast\Sigma R} = 1.72. \end{equation} Note that in practical calculations, the isospin factors $\sqrt{2/3}$ and $\sqrt{1/3}$ are applied to the $N(2080)\Sigma^+ K^{\ast 0}$ and $N(2080)\Sigma^0 K^{\ast +}$ vertices, respectively. \begin{figure}[tb] \centering \includegraphics[width=0.3\textwidth]{triangle} \vglue 6pt \caption{Electromagnetic coupling of $N(2080){3/2}^-$ as a $K^\ast \Sigma$ molecule.} \label{fig2} \end{figure} The electromagnetic coupling of $N(2080){3/2}^-$ in the hadronic molecular picture is, in principle, dictated by the loop diagram illustrated in Fig.~\ref{fig2}. Here for simplicity, we introduce an effective Lagrangian for the coupling of $N(2080){3/2}^-$ to $N\gamma$: \begin{eqnarray} \mathcal{L}_{\gamma NR} &=& -\, i e\frac{g^{(1)}_{R N\gamma}}{2M_N}\bar{R}_\mu\gamma_\nu F^{\mu \nu} N \nonumber \\ && + \, e\frac{g^{(2)}_{R N \gamma}}{\left(2M_N\right)^2} \bar{R}_\mu F^{\mu \nu}\partial_\nu N + \text{H.\,c.} \label{eq:nnr} \end{eqnarray} Then the electromagnetic vertex of $N(2080){3/2}^-$ is approximated by calculating the tree-level Feynman diagram from this Lagrangian, and an additional phase factor ${\rm Exp}[i\phi_R]$ is attached in front of the amplitude resulting from the $s$-channel $N(2080){3/2}^-$ exchange to partially mimic the loop contribution of Fig.~\ref{fig2}. Here $\phi_R$ is treated as a fit parameter. In practical calculations, the $g^{(2)}_{R N\gamma}$ term in Eq.~\eqref{eq:nnr} is ignored due to the lack of experimental information, and the parameter $g^{(1)}_{R N\gamma}$ will be fixed by fitting the cross-section data of $K^\ast\Sigma$ photoproduction. In Ref.~\cite{Lin:2018kcc}, Lin {\it et al.} showed that their calculated width of $N(2080){3/2}^-$ depends on the choice of the cutoff parameter. Here we treat the width of $N(2080){3/2}^-$, $\Gamma_{R}$, as a fit parameter too.
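As a quick numerical cross-check of Eq.~\eqref{eq:coupling}, the short script below evaluates the coupling for input masses close to their PDG values; the specific mass choices (and hence the binding energy $\epsilon$) are our own assumption, so the output is only expected to be close to, not identical with, the quoted value $g_{K^\ast\Sigma R}=1.72$.
\begin{verbatim}
import math

# Approximate hadron masses in MeV (our choice of charge states; the
# precise isospin-averaged values used in the text may differ slightly).
M_Kstar, M_Sigma, M_R = 891.66, 1192.64, 2080.0

eps = M_Kstar + M_Sigma - M_R            # binding energy
g2 = (4 * math.pi / (4 * M_R * M_Sigma)) \
     * (M_Kstar + M_Sigma) ** 2.5 / math.sqrt(M_Kstar * M_Sigma) \
     * math.sqrt(32 * eps)               # Weinberg compositeness relation
print(eps, math.sqrt(g2))                # roughly 4.3 MeV and g ~ 1.7
\end{verbatim}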
\subsection{Single spin observables} \label{Sec:asy} Following Refs.\cite{Fasano:1992,Sandorfi:2010uv}, the single-polarization observables of photon beam asymmetry ($\Sigma$), target nucleon asymmetry ($T$), and recoil nucleon asymmetry ($P$) are defined as \begin{align} \Sigma & \,=\, \dfrac{\dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(\perp,0,0) \,-\, \dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(\parallel,0,0)}{\dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(\perp,0,0) \,+\, \dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(\parallel,0,0)}, \label{eq:beam} \\[6pt] T & \,=\, \dfrac{\dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,+y,0) \,-\, \dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,-y,0)}{\dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,+y,0) \,+\, \dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,-y,0)}, \label{eq:target} \\[6pt] P & \,=\, \dfrac{\dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,0,+y) \,-\, \dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,0,-y)}{\dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,0,+y) \,+\, \dfrac{\,{\rm d}\sigma}{{\rm d}\Omega}(0,0,-y)}. \label{eq:recoil} \end{align} Here the three arguments of ${{\rm d}\sigma}/{{\rm d}\Omega}$ denote the polarizations of the beam photon, target nucleon, and recoil $\Sigma$ baryon, respectively. The symbols ``$\perp$" and ``$\parallel$" denote that the photon beam is linearly polarized perpendicular and parallel to the reaction plane, respectively. The symbols ``$+y$" and ``$-y$" denote that the target nucleon or recoil $\Sigma$ baryon is polarized along the directions of ${\bm k}\times{\bm q}$ and $-\left({\bm k}\times{\bm q}\right)$, respectively, with ${\bm k}$ and ${\bm q}$ being the three-momentum of incoming photon and outgoing $K^\ast$. The symbol ``0" denotes that the corresponding argument is unpolarized. \section{Results and Discussion}\label{Sec:results} As has been mentioned in Sec.\ref{Sec:intro}, in literature the most recent and comprehensive investigation of the $\gamma p \to K^{\ast +} \Sigma^0$ and $\gamma p \to K^{\ast 0} \Sigma^+$ reactions is the one from Ref.~\cite{Wang:2018vlv}, where all the available differential and total cross-section data for $K^\ast \Sigma$ photoproduction off proton have been analyzed in an effective Lagrangian approach with the $\Delta(1905){5/2}^+$, a four-star resonance advocated in the most recent PDG review \cite{PDG2022}, being considered. It was found in Ref.~\cite{Wang:2018vlv} that the cross sections of $\gamma p \to K^{\ast +} \Sigma^0$ are dominated by $s$-channel $\Delta(1905){5/2}^+$ exchange at low energies and $t$-channel $K^\ast$ exchange at high energies, while for the $\gamma p \to K^{\ast 0} \Sigma^+$ reaction, the angular dependences are dominated by $t$-channel $K$ exchange at forward angles and $u$-channel $\Sigma^\ast$ exchange at backward angles. In the present work, we re-analyze the $\gamma p \to K^{\ast +} \Sigma^0$ and $\gamma p \to K^{\ast 0} \Sigma^+$ reactions by substituting the $\Delta(1905){5/2}^+$ resonance introduced in Ref.~\cite{Wang:2018vlv} with the $N(2080){3/2}^-$ state which was proposed to be the strange partner of $P_c(4457)$ \cite{He:2017aps,Lin:2018kcc}. The purpose is to check whether the differential and total cross-section data of $K^\ast \Sigma$ photoproduction off proton can accommodate the molecular scenario of $N(2080){3/2}^-$ as a $K^\ast \Sigma$ shallowly bound state. 
In view of this, unlike in Ref.~\cite{Wang:2018vlv} where all the available data for $K^\ast \Sigma$ photoproduction from the $K^\ast \Sigma$ threshold ($\sim 2086$ MeV) up to the center-of-mass energy $W=2.8$ GeV were considered, in the present work, we concentrate on the energy region from $K^\ast \Sigma$ threshold up to $W=2.3$ GeV only. Beyond this energy region, the $N(2080){3/2}^-$ state is not expected to have significant contributions. Note that $K^\ast \Sigma$ can couple to $N(2080){3/2}^-$ in $S$ wave, while it couples to $\Delta(1905){5/2}^+$ in $P$ wave or even higher odd partial waves. In this sense, the $N(2080){3/2}^-$ might have stronger effects than $\Delta(1905){5/2}^+$ in the energy region near the $K^\ast \Sigma$ threshold. \begin{table}[tb] \caption{\label{tab:new_set} Fitted values of model parameters.} \begin{tabular*}{0.75\columnwidth}{@{\extracolsep\fill}lr} \hline\hline $g^{(1)}_{\Delta \Sigma K^\ast}$ & $1.79 \pm 0.31$ \\ $g^{(1)}_{RN\gamma}$ & $ 0.10 \pm 0.02 $ \\ $\phi_R$ & $5.81 \pm 0.34$ \\ $\Gamma_R$ [MeV] & $83.8 \pm 17.6$ \\ $\Lambda_{N, \Delta, N(2080)}$ [MeV] & $2059\pm 41$ \\ $\Lambda_K$ [MeV] & $1116\pm 112$ \\ $\Lambda_{K^\ast, \kappa}$ [MeV] & $894\pm 113$ \\ $\Lambda_{\Sigma, \Lambda }$ [MeV] & $856\pm 24$ \\ $\Lambda_{\Sigma^\ast}$ [MeV] & $851\pm 26$ \\ \hline\hline \end{tabular*} \end{table} \begin{figure*}[htb] \centering \includegraphics[width=0.9\textwidth]{diff1} \caption{Differential cross sections for $\gamma p \rightarrow K^{\ast +} \Sigma^0$ as a function of $\cos\theta$. The blue dash-dotted lines, green dotted lines, and cyan dashed lines represent the individual contributions from the $s$-channel $N(2080){3/2}^-$, $\Delta$, and $N$ exchanges, respectively. The scattered symbols denote the CLAS data in Ref.~\cite{Wei:2013}. } \label{fig:diff-kp} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{diff2} \caption{Differential cross sections for $\gamma p \rightarrow K^{\ast 0} \Sigma^+$ as a function of $\cos\theta$. Notations are the same as in Fig.~\ref{fig:diff-kp} except that now the orange sparse dashed lines and pink sparse dotted lines represent the individual contributions from the $t$-channel $K$ exchange and $u$-channel $\Sigma^\ast$ exchange, respectively. The scattered symbols denote the CLAS data in Ref.~\cite{Hleiqawi:2007ad}. } \label{fig:diff-k0} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.75\textwidth]{total} \caption{Total cross sections with dominant individual contributions for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ (left) and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ (right). The black solid lines represent the full results. The blue dash-dotted, green dotted, and cyan dashed lines represent the individual contributions from the $N(2080){3/2}^-$, $\Delta$, and $N$ exchanges, respectively. The orange sparse dashed and pink sparse dotted line in the right graph represent the individual contributions from the $K$ and $\Sigma^*$ exchanges, respectively. The red long dashed lines represent the full results of Ref.~\cite{Wang:2018vlv}. The scattered symbols are data from CLAS Collaboration \cite{Wei:2013}. } \label{fig:total} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.9\textwidth]{asy} \caption{Single spin asymmetries $\Sigma$ (left), $T$ (middle), and $P$ (right) predicted at $W=2131$ MeV. 
The first row shows the results for $\gamma p\rightarrow K^{\ast +}\Sigma^0$, the second row shows the results for $\gamma p \rightarrow K^{\ast 0}\Sigma^+$. The blue solid lines represent the results from the present work, and the red dashed lines denote the results from Ref.~\cite{Wang:2018vlv}. } \label{fig:asy} \end{figure*} In practice, we take the couplings of $u$-channel hyperon exchanges and $t$-channel strange meson exchanges from Ref.~\cite{Wang:2018vlv}, which are determined by data in the higher-energy region. All the other parameters that are used to fit the near-threshold data considered in the present work are listed in the first column of Table~\ref{tab:new_set}. There, $g^{(1)}_{\Delta\Sigma K^\ast}$ is the hadronic coupling constant for the $\Delta$ pole diagram, $g^{(1)}_{R N \gamma}$ is the electromagnetic coupling constant for the $N(2080){3/2}^-$ pole diagram, $\phi_R$ is the parameter in the phase factor ${\rm Exp}[i\phi_R]$ attached in front of the amplitude resulting from the $N(2080)3/2^-$ pole diagram, $\Gamma_R$ is the width of the $N(2080)3/2^-$ state, and $\Lambda_{B(M)}$ is the cutoff parameter in the form factor attached to the diagram of baryon $B$ (meson $M$) exchange. The fitted values of these parameters are listed in the second column of Table~\ref{tab:new_set}. There, the uncertainties are estimates arising from the uncertainties (error bars) associated with the fitted data points. The obtained chi-squared ($\chi^2$) per data point is $1.277$, indicating a good fitting quality of the theoretical results. Note that our fitted decay width of $N(2080)3/2^-$ is $83.8$ MeV, smaller than the value $141.1$ MeV obtained by calculating the partial decay widths of various decay channels in an effective Lagrangian approach in Ref.~\cite{Lin:2018kcc}, although the same mass of $N(2080)3/2^-$ is adopted in both of these two works. The results of near-threshold differential cross sections for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ corresponding to the parameters listed in Table~\ref{tab:new_set} are shown in Fig.~\ref{fig:diff-kp} and Fig.~\ref{fig:diff-k0}, respectively. There, the black solid lines represent the results from the full amplitudes. The blue dash-dotted lines, green dotted lines, and cyan dashed lines represent the individual contributions from the $s$-channel $N(2080){3/2}^-$, $\Delta$, and $N$ exchanges, respectively. The orange sparse dashed lines and pink sparse dotted lines in Fig.~\ref{fig:diff-k0} denote the individual contributions from the $t$-channel $K$ exchange and $u$-channel $\Sigma^\ast$ exchange, respectively, for the $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reaction. The contributions from other terms are too small to be clearly seen with the scale used, and thus they are not plotted. One sees from Fig.~\ref{fig:diff-kp} and Fig.~\ref{fig:diff-k0} that our overall description of the CLAS angular distribution data for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ in the near-threshold energy region is fairly satisfactory. Compared with the results from Ref.~\cite{Wang:2018vlv}, for the $\gamma p \rightarrow K^{\ast +}\Sigma^0$ reaction, the fitting quality is similar, while for the $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reaction, the fitting quality is now improved significantly.
For $\gamma p \rightarrow K^{\ast +}\Sigma^0$, Fig.~\ref{fig:diff-kp} shows that the $s$-channel $N(2080){3/2}^-$, $\Delta$, and $N$ exchanges provide dominant contributions to the differential cross sections in the near-threshold energy region. This is quite different from the reaction mechanism reported in Ref.~\cite{Wang:2018vlv}, where it was found that the $s$-channel $\Delta(1905){5/2}^+$ exchange dominates the near-threshold angular distributions, and the $s$-channel $\Delta$ exchange and $t$-channel $K^\ast$ exchange provide considerable contributions also. The contributions from the $s$-channel $\Delta$ exchange in the present work are much bigger than those in Ref.~\cite{Wang:2018vlv}. This can be understood if one notices that the fitted cutoff parameter for $\Delta$ exchange is $2059$ MeV in the present work, much bigger than the value $1358$ MeV obtained in Ref.~\cite{Wang:2018vlv}, although the fitted value of the magnitude of the coupling constant $g^{(1)}_{\Delta \Sigma K^\ast}$ in the present work is smaller than that in Ref.~\cite{Wang:2018vlv}. The contributions from the $s$-channel $N$ exchange in the present work are rather significant, while they are negligible in Ref.~\cite{Wang:2018vlv}. The $t$-channel $K^\ast$ exchange provides considerable contributions in Ref.~\cite{Wang:2018vlv}, while its contributions are negligible in the present work. Both of these properties for $N$ and $K^\ast$ can be understood from the different values of the fitted cutoff parameters in the present work and Ref.~\cite{Wang:2018vlv}. For $\gamma p \rightarrow K^{\ast 0}\Sigma^+$, Fig.~\ref{fig:diff-k0} shows that the dominant contributions to the differential cross sections are coming from the $s$-channel $N(2080){3/2}^-$ and $N$ exchanges. The $s$-channel $\Delta$ exchange, $u$-channel $\Sigma^\ast$ exchange, and $t$-channel $K$ exchange also provide considerable contributions. The $s$-channel $N(2080){3/2}^-$ exchange is seen to provide rather important contributions to the differential cross sections at $W=2153$ MeV, while its contributions are relatively small at the other two energy points. In Ref.~\cite{Wang:2018vlv}, it was reported that in the near-threshold energy region, the dominant contributions to the differential cross sections for $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ are coming from the $s$-channel $\Delta(1905){5/2}^+$ exchange and $t$-channel $K$ exchange, and considerable contributions are also seen from the $s$-channel $\Delta$ exchange and $u$-channel $\Sigma^\ast$ exchange. The $\Delta(1905){5/2}^+$ provides dominant contributions at all $W=2153$, $2222$, and $2280$ MeV energy points in Ref.~\cite{Wang:2018vlv} as the $\Delta(1905){5/2}^+$ resonance has a relatively large width, $\Gamma_{\Delta(1905){5/2}^+}\approx 330$ MeV. In the present work, the contributions from $N(2080){3/2}^-$ to the differential cross sections are significantly dominant only at the lowest energy $W=2153$ MeV since the value of the width of $N(2080){3/2}^-$ is fitted to be narrow, $\Gamma_{N(2080){3/2}^-} \approx 83.8$ MeV, as listed in Table~\ref{tab:new_set}. The differences of the contributions from other exchange diagrams in the present work and in Ref.~\cite{Wang:2018vlv} can be understood from the differences in the fitted values of the corresponding cutoff parameters.
Note that the contributions from both the $N(2080){3/2}^-$ and $N$ exchanges to the differential cross sections of $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ in the present work are much bigger than those in Ref.~\cite{Wang:2018vlv}. As a consequence, the theoretical differential cross sections from the present work agree much better with the data than the results of Ref.~\cite{Wang:2018vlv}. Figure~\ref{fig:total} shows our predicted total cross sections for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ (left graph) and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ (right graph) obtained via an integration of the corresponding differential cross sections as shown in Fig.~\ref{fig:diff-kp} and Fig.~\ref{fig:diff-k0}. The individual contributions that can be clearly seen with the scale used are also plotted to help understand the reaction mechanisms. In this figure, the black solid lines represent the full results. The blue dash-dotted, green dotted, and cyan dashed lines represent the individual contributions from the $s$-channel $N(2080){3/2}^-$, $\Delta$, and $N$ exchanges, respectively. The orange sparse dashed line and pink sparse dotted line in the right graph represent the individual contributions from the $t$-channel $K$ and $u$-channel $\Sigma^\ast$ exchanges, respectively. The total cross sections from Ref.~\cite{Wang:2018vlv} are also plotted (red long dashed lines) for comparison. One sees from Fig.~\ref{fig:total} that our predicted total cross sections for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ are in good agreement with the data, and for both $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reactions, the $s$-channel $N(2080){3/2}^-$, $N$, and $\Delta$ exchanges provide rather important contributions. The fact that in $\gamma p \rightarrow K^{\ast +}\Sigma^0$ the contributions from $\Delta$ exchange are bigger while the contributions from $N(2080){3/2}^-$ and $N$ exchanges are smaller than those in $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ is due to the difference of isospin factors attached to the corresponding hadronic vertices. For the $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reaction, considerable contributions are also seen from the $t$-channel $K$ exchange and $u$-channel $\Sigma^\ast$ exchange. Compared with Ref.~\cite{Wang:2018vlv}, for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ the total cross sections in these two works are similar, both in agreement with the data, while for $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ the total cross sections in the present work are much bigger than those in Ref.~\cite{Wang:2018vlv}. Moreover, in the present work, the total cross sections for $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ are much bigger than those for $\gamma p \rightarrow K^{\ast +}\Sigma^0$, especially in the very near-threshold energy region, while in Ref.~\cite{Wang:2018vlv} the opposite pattern is observed. Unfortunately, no data are available for the total cross sections of $\gamma p \rightarrow K^{\ast 0}\Sigma^+$. But note that the differential cross sections for $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ are described much better in the present work (cf. Fig.~\ref{fig:diff-k0}) than in Ref.~\cite{Wang:2018vlv}. In this sense, the total cross sections predicted in the present work might be more reliable than those in Ref.~\cite{Wang:2018vlv}.
Future data on this observable may give further insights into the reaction mechanisms of $\gamma p \rightarrow K^{\ast 0}\Sigma^+$, and provide further clues to the existence of the $N(2080){3/2}^-$ as a $K^\ast\Sigma$ molecule. In Fig.~\ref{fig:asy}, we show the theoretical results for the beam asymmetry ($\Sigma$), target asymmetry ($T$), and recoil asymmetry ($P$) predicted in the models of both the present work and Ref.~\cite{Wang:2018vlv}. In the present work, the contributions of the molecular state $N(2080)3/2^-$ to the total cross sections peak around the center-of-mass energy $W = 2100$ MeV, whereas in Ref.~\cite{Wang:2018vlv} the contributions of the resonance state $\Delta(1905)5/2^+$ dominate the total cross sections over a much wider energy region around $W\sim 2.2$ GeV; we therefore calculate and compare the single spin observables at $W=2131$ MeV, which corresponds to the incoming photon energy $E_{\gamma} = 1950$ MeV, where the differential cross section data for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ are also available. In Fig.~\ref{fig:asy}, the upper three panels and lower three panels show the corresponding results for the $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reactions, respectively. The blue solid lines and red dashed lines represent the corresponding results from the present work and Ref.~\cite{Wang:2018vlv}, respectively. One sees that for both reactions, these spin observables calculated in the present work are quite different from those obtained in Ref.~\cite{Wang:2018vlv}. We hope that these observables can be measured in the near future in experiments, as they can help to distinguish the models of the present work and Ref.~\cite{Wang:2018vlv}, and thus can further confirm the existence of the $N(2080){3/2}^-$ state as a $\Sigma K^\ast$ molecule. From the results shown and discussed above, one sees that the available cross-section data for both $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ in the near-threshold energy region can be well described in both the present work and Ref.~\cite{Wang:2018vlv}. However, the reaction mechanisms extracted from these two works are quite different. In particular, the resonance $\Delta(1905){5/2}^+$ introduced in Ref.~\cite{Wang:2018vlv} is now replaced in the present work by $N(2080){3/2}^-$, a $\Sigma K^\ast$ molecular state proposed in Refs.~\cite{He:2017aps,Lin:2018kcc} as the strange partner of the $P_c(4457)$ state. Even though we cannot prefer one model over the other at the moment, it seems to be appropriate to say that the available cross-section data for $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ do not exclude the possibility of the existence of the $N(2080){3/2}^-$ state as a $\Sigma K^\ast$ shallowly bound state. \section{Summary and Conclusion} \label{sec:summary} In the literature, one of the plausible explanations of the $P_c^+(4380)$ and $P_c^+(4457)$ states is that they are $\bar{D}\Sigma_c^\ast$ and $\bar{D}^\ast\Sigma_c$ molecules as their masses are just below the $\bar{D}\Sigma_c^\ast$ and $\bar{D}^\ast\Sigma_c$ thresholds. Analogously, in the light quark sector, the $N(1875){3/2}^-$ and $N(2080){3/2}^-$ states are proposed to be $K\Sigma^\ast$ and $K^\ast\Sigma$ molecules as strange partners of the $P_c^+(4380)$ and $P_c^+(4457)$ states \cite{He:2017aps,Lin:2018kcc}.
In the present work, we study the $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reactions to check if the $K^\ast \Sigma$ molecular picture of $N(2080){3/2}^-$ is compatible with the available data for $K^\ast \Sigma$ photoproduction reactions. The $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reactions have already been investigated in Ref.\cite{Wang:2018vlv} within an effective Lagrangian approach. There, the $t$-channel $K$, $\kappa$, $K^\ast$ exchanges, the $s$-channel $N$, $\Delta$, $\Delta(1905){5/2}^+$ exchanges, the $u$-channel $\Lambda$, $\Sigma$, $\Sigma^\ast$ exchanges, and the generalized contact term were taken into account in constructing the reaction amplitudes, and all the available data for both $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ were well reproduced. It was found in Ref.\cite{Wang:2018vlv} that the cross sections of $\gamma p \rightarrow K^{\ast +}\Sigma^0$ are dominated by the $s$-channel $\Delta(1905){5/2}^+$ exchange at low energies and $t$-channel $K^\ast$ exchange at high energies, with the $s$-channel $\Delta$ exchange providing significant contributions in the near-threshold region, and the cross sections of $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ are dominated by the $t$-channel $K$ exchange at forward angles and $u$-channel $\Sigma^\ast$ exchange at backward angles, with the $s$-channel $\Delta$ and $\Delta(1905){5/2}^+$ exchanges making considerable contributions at low energies. In the present work, we restudy the $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reactions by employing the same theoretical framework as Ref.\cite{Wang:2018vlv} except that the $\Delta(1905){5/2}^+$ resonance introduced in Ref.\cite{Wang:2018vlv} is now replaced by the $N(2080){3/2}^-$ state. The coupling constants for $t$-channel meson exchanges and $u$-channel hyperon exchanges are taken from Ref.\cite{Wang:2018vlv}, and the hadronic coupling constant of $N(2080){3/2}^-$ is estimated by the Weinberg compositeness criterion under the assumption of a molecular structure for $N(2080){3/2}^-$. We concentrate on the near-threshold energy region where the $N(2080){3/2}^-$ is supposed to have significant contributions. Our results show that the available cross-section data in the considered energy region for both $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reactions can be well described. Further analysis shows that the cross sections of $\gamma p \rightarrow K^{\ast +}\Sigma^0$ are dominated by the $s$-channel $N(2080){3/2}^-$, $\Delta$, and $N$ exchanges, and the cross sections of $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ are dominated by the $s$-channel $N(2080){3/2}^-$ and $N$ exchanges, with the $s$-channel $\Delta$ exchange, $u$-channel $\Sigma^\ast$ exchange, and $t$-channel $K$ exchange providing considerable contributions. Both of the models in the present work and Ref.\cite{Wang:2018vlv} describe the available cross-section data of $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ quite well in the near-threshold energy region, but the reaction mechanisms extracted from these two models are quite different. At the moment we cannot prefer one model over the other.
Nevertheless, we conclude from the present work that the molecular picture of the $N(2080){3/2}^-$ state is compatible with the available cross-section data of the $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ reactions. The total cross sections for $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ predicted in the present work are much larger than those for $\gamma p \rightarrow K^{\ast +}\Sigma^0$, while in Ref.~\cite{Wang:2018vlv} the opposite pattern is observed. The single spin observables $\Sigma$, $T$, and $P$ for both $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ predicted in the models of the present work and Ref.~\cite{Wang:2018vlv} are also presented, and they are all found to be strongly model dependent. We hope that these observables can be measured in the near future in experiments, as they can be used to further constrain the reaction mechanisms of $\gamma p \rightarrow K^{\ast +}\Sigma^0$ and $\gamma p \rightarrow K^{\ast 0}\Sigma^+$ and, in particular, to further verify the molecular scenario of the $N(2080){3/2}^-$ state. \begin{acknowledgments} This work is partially supported by the National Natural Science Foundation of China under Grants No.~12175240, No.~12147153, No.~12070131001, No.~11835015, and No.~12047503, the Fundamental Research Funds for the Central Universities, the China Postdoctoral Science Foundation under Grant No.~2021M693141, and the Grant of Chinese Academy of Sciences (XDB34030000). \end{acknowledgments}
{ "arxiv_id": "2302.14350", "language": "en", "timestamp": "2023-03-02T02:10:54", "url": "https://arxiv.org/abs/2302.14350", "yymm": "2302" }
\section{Introduction} Group activity recognition is an important sub-task in the field of video understanding. It shows wide application prospects in intelligent robots, security monitoring, and sports event analysis. Unlike action recognition, which focuses on a single individual~\cite{shi2019skeleton,plizzari2021skeleton,c3d,i3d}, group activity recognition needs to understand scenes containing multiple individuals. This task is more challenging since it relies on the understanding of not only the actions of multiple individuals but also the relations among them in the scene. Therefore, both effective individual features and relation modeling are essential to group activity recognition. \begin{figure} \centering \includegraphics[width = 0.97\linewidth]{Images_pdf/cvpr_picture_fig1.pdf} \caption{Visualization of our motivation. In team sports, one particular group activity represents the execution and implementation of a specific tactic that reflects the correlation and the distribution of corresponding individual actions. We concretize the abstract knowledge, such as tactics, into the action distribution in different group activities and utilize it to improve the individual representations.} \label{fig1} \end{figure} Existing methods generally enhance the visual representation of individuals by introducing relation inference~\cite{ibrahim2016hierarchical,bagautdinov2017social,ibrahim2018hierarchical,GLSTM,ARG,gavrilyuk2020actor,yuan2021spatio,li2021groupformer,yuan2021learningcontext,wq,han2022dualai} with a graph network or transformer. However, they build relations mainly based on the visual representations or locations of individuals, which are not completely consistent with the semantic-level individual relations in the group activity. Some methods~\cite{liu2021multimodal,tang2018mining,tang2019learning,contextaware,GINs} introduce extra knowledge, such as action labels, to build semantic relations. By introducing knowledge, these methods improve group activity recognition performance considerably, but the knowledge they explore merely stays at the semantic level (\emph{i.e.,} individual action labels), which is insufficient for pursuing notable accuracy. In fact, there is abundant knowledge in real group activity recognition scenarios. For example, in team sports, one particular group activity represents the execution and implementation of a specific tactic that reflects the correlation and distribution of corresponding individual actions. As shown in \cref{fig1}, the \textit{r-spike} activity in volleyball matches involves a ``spiking'' player in the offensive zone, several ``waiting'' players in the defensive zone, and several ``blocking'' players in the opposing offensive zone. Under this description, an \textit{r-spike} activity will never involve a ``setting'' player. This fact provides a clue for distinguishing it from other activities such as \textit{r-set}. Therefore, if such knowledge can be leveraged more fully, we have a good chance to improve the reliability of visual representation and interaction modeling, and hence further improve recognition performance. Nevertheless, knowledge is usually extracted from a large number of samples; it is a highly generalized, abstract representation. In contrast, the input samples are concrete. Therefore, it is critical to concretize abstract knowledge into the same space as the input samples.
Although several methods utilize cross-modal aggregation~\cite{liu2021multimodal} or knowledge distillation~\cite{tang2019learning} to concretize semantic labels as latent feature vectors for interaction with the visual representation, such a concretization manner can hardly leverage richer knowledge. In fact, a group activity is a comprehensive expression of a group of individual actions; it is correlated with the individual actions and their positions, and such correlation is implied in a large number of samples. Thus, it is possible to obtain a concrete representation of knowledge from a large number of training samples through statistics. In this paper, we propose to concretize the abstract knowledge, such as tactics, into the action distribution in different group activities, which is further represented as a Class-Class Distribution Map (C-C Map) and a Class-Position Distribution Map (C-P Map). These maps present the correlation and distribution of individual actions. Furthermore, we propose a novel Knowledge Augmented Relation Inference Framework to construct interactions among individuals and use the above two maps to enhance the individual representations for group activity recognition. Specifically, we first design a Visual Representation Module to extract the individual appearance representations. Then we design a Semantic Relation Module to construct the correlation between different individual actions with the assistance of the C-C Map. After that, a Knowledge-Semantic-Visual Interaction Module is devised to integrate visual information and semantic information through a cross-modal interacting block, combining the C-P Map to perform the relation inference and improve the individual representation ability. Finally, the enhanced individual features, along with the raw visual features, are utilized for activity recognition. We evaluate our method on the Volleyball dataset and the Collective Activity dataset, and the experimental results show that the proposed framework achieves competitive performance compared to the state-of-the-art methods. The contributions of this paper are summarized below: \begin{itemize} \item We propose an idea of knowledge concretization for group activity recognition. The knowledge in specific application scenarios, such as team sports or surveillance, is concretized as the Class-Class Distribution Map (C-C Map) and the Class-Position Distribution Map (C-P Map). \item We propose a novel Knowledge Augmented Relation Inference framework which integrates visual representation and knowledge (\emph{i.e.,} action labels, C-C Map, C-P Map) in a unified relation inference architecture. \item Experiments on two public datasets show that the proposed method achieves competitive results compared with state-of-the-art methods. The introduction of knowledge is also helpful in improving performance with limited training data. \end{itemize} \section{Related Work} Group activity recognition has been studied for over a decade, and many methods have been proposed. Early methods extract hand-crafted features to infer group behavior by probabilistic graphical models~\cite{amer2012cost,lan2012social,amer2014hirf,choi2013understanding,Amer2013}. Recently, deep relation inference based methods have shown promising performance~\cite{wu2021comprehensive}. They can be generally classified into visual representation based methods and visual-semantic representation based methods, according to the information they use.
\begin{figure*} \centering \includegraphics[width=0.95\linewidth]{Images_pdf/cvpr_picture_fig2.pdf} \caption{Overview of the proposed framework. It first devises the Visual Representation Module to extract the appearance representation of individuals. Then, it designs a Knowledge Augmented Semantic Relation Module to capture semantic representations of individual actions with the assistance of the Class-Class Distribution Map. After that, a Knowledge-Semantic-Visual Interaction Module is proposed to integrate visual and semantic information, and enhance individual representations with the help of the Class-Position Distribution Map. Finally, the individual representations are utilized to perform the final classification.} \label{fig2} \end{figure*} \textbf{Visual Representation based Methods.} Visual representation based methods usually obtain enhanced visual representations by introducing visual relation inference~\cite{ibrahim2016hierarchical,wang2017recurrent,bagautdinov2017social,ibrahim2018hierarchical,stagNet,tang2019learning,azar2019convolutional,hu2020progressive,yuan2021learningcontext,Detector-Free,han2022dualai}. Some methods adopt RNNs or LSTMs to explore the individual spatio-temporal relations in the scene~\cite{ibrahim2016hierarchical,shu2017cern,bagautdinov2017social,stagNet,HANHCN,CCG,yan2018participation,PCTDM}. Alternatively, some researchers introduce attention mechanisms into relation inference~\cite{ARG,yan2020higcin,lu2019gaim,yuan2021spatio,gavrilyuk2020actor,ehsanpour2020joint,li2021groupformer,Detector-Free, han2022dualai} to improve the representation of visual features. Wu {\it et al}.~\cite{ARG} utilize a graph structure to construct the relations between actors in the scene and enhance their representations with a graph convolutional network. Yan {\it et al}.~\cite{yan2020higcin} construct a cross-graph to explore the temporal dynamics and spatial interaction context. Yuan {\it et al}.~\cite{yuan2021spatio} use a well-designed dynamic relation module and dynamic walk module to build person-specific interactions, which can model spatial-temporal relations effectively. Gavrilyuk {\it et al}.~\cite{gavrilyuk2020actor} adopt a transformer architecture to model spatial-temporal relations among individuals and use a multi-modal information fusion strategy. Li {\it et al}.~\cite{li2021groupformer} propose a Clustered Spatial-Temporal Transformer to deeply explore the correlation of spatial and temporal context in a parallel manner. Yuan and Ni~\cite{yuan2021learningcontext} encode the global contextual information into individual features and explore all pairwise interactions between individuals. Han {\it et al}.~\cite{han2022dualai} propose a Dual-path Actor Interaction framework to learn complex actor relations in videos and further enhance individual representations by using an efficient self-supervised signal. \textbf{Visual-Semantic Representation based Methods.} Visual-semantic representation based methods introduce semantic information into relation inference and improve the consistency of visual relations with the semantic-level individual relations in the group activity~\cite{sbgar,stagNet,tang2018mining,GINs,contextaware,liu2021multimodal}. Li {\it et al}.~\cite{sbgar} propose a novel semantics based scheme that recognizes group activities based on the semantic meaning of video captions generated by an LSTM.
Qi {\it et al}.~\cite{stagNet} introduce a semantic graph to explicitly describe the spatial content of the scene and employ a structural-RNN to incorporate it with the temporal factor. Liu {\it et al}.~\cite{liu2021multimodal} directly utilize individual action labels to construct a semantic graph to refine visual representations. Tang {\it et al}.~\cite{tang2018mining} adopt knowledge distillation to force individual visual representations to be consistent with semantic representations embedded from action labels. The existing methods demonstrate that the introduction of extra knowledge is helpful in improving visual representation. However, the knowledge they explore is merely a small part of the knowledge available in real application scenarios, and their way of utilizing knowledge cannot adapt to more complicated knowledge. Unlike these methods, we utilize richer knowledge, such as tactical information, and introduce the concretized knowledge into the relation inference framework to improve the feature representation. \section{Method} \subsection{Overall Architecture} \label{sec3.1} Our framework is mainly composed of three modules: the Visual Representation Module for extracting the appearance representation of individuals, the Knowledge Augmented Semantic Relation Module for encoding semantic representations of individual actions, and the Knowledge-Semantic-Visual Interaction Module for aggregating visual and semantic information. As illustrated in \cref{fig2}, our framework first summarizes the individual actions, and constructs a Class-Class Distribution Map (C-C Map) and a Class-Position Distribution Map (C-P Map). Then, it feeds the C-C Map into the Knowledge Augmented Semantic Relation Module to enhance the semantic representation. Afterward, it utilizes the semantic representation and the C-P Map to enhance the visual representation through the Knowledge-Semantic-Visual Interaction Module. Finally, it predicts the group activities using the enhanced visual representations. \subsection{Knowledge Concretization} \label{sec3.2} As discussed above, knowledge is an abstract semantic representation with a gap from the training samples. Thus, prior to the training procedure, we concretize knowledge in a form that can be integrated with the visual representations of samples. To be specific, we summarize the individual actions of samples in the training set to concretize the semantic representation of knowledge, and further construct a Class-Class Distribution Map (C-C Map) and a Class-Position Distribution Map (C-P Map). These two maps present the correlation and distribution of individual actions in one particular group activity. \textbf{Class-Class Distribution Map.} Given $K$ different individual action labels ${\mathbf{L}}=\{{l_i}\}_{i=1}^K$ in the training set, we count the total number of co-occurrences $m_{ij}$ of the $i$-th action label $l_{i}$ and the $j$-th action label $l_j$. For example, if two ``blocking'' players and three ``standing'' players occur in an image, we record the number of co-occurrences of ``blocking'' and ``standing'' in this image as 6 (each ``blocking'' player is paired with each ``standing'' player). Then we add up these counts over all images to get the total number of co-occurrences of ``blocking'' and ``standing''.
In this way, we construct the Class-Class Distribution Map $\mathbf{P^{cc}}\in {\mathbb{R}^{K\times K}}$, which measures the correlation degree among individual action labels, as follows: \begin{equation} p^{cc}_{ij}=\frac{{{m}_{ij}}}{\sum\limits_{i=1}^K\sum\limits_{j=1}^K{m_{ij}}} \end{equation} where $p^{cc}_{ij}\in\mathbf{P^{cc}}$ corresponds to the correlation of the $i$-th and $j$-th labels. This value reflects the probability of the simultaneous occurrence of different actions in a specific scenario. \textbf{Class-Position Distribution Map.} On the middle frame of every video clip in the training set, for the $i$-th individual action, we mark the coordinate $(x,y)$ of each individual who performs it. Then we project these coordinates of all video clips onto one single image $b_{i}$, which has the same size as the input frames. In this way, we can obtain the distribution maps of the $K$ individual action labels. Similar to the Class-Class Distribution Map, we construct the Class-Position Distribution Map $\mathbf{P^{cp}}\in {\mathbb{R}^{H\times W\times K}}$, which represents the distribution of individual actions, as follows: \begin{equation} p^{cp}_{ixy}=\frac{{b_{ixy}}}{\sum\limits_{x=1}^H\sum\limits_{y=1}^Wb_{ixy}} \end{equation} where $p^{cp}_{ixy}\in\mathbf{P^{cp}}$ denotes the value of $\mathbf{P^{cp}}$ of the $i$-th individual action at the coordinate $(x,y)$, and $b_{ixy}$ denotes the value of $b_{i}$ at the coordinate $(x,y)$; it reflects the occurrence probability of each individual action in a specific spatial location. \subsection{Visual Representation Module} \label{sec3.3} The Visual Representation Module aims to extract the appearance features of individuals. As shown in \cref{fig2}, given a $T$-frame video clip, we adopt an Inflated 3D ConvNet (I3D)~\cite{i3d} pre-trained on the Kinetics dataset~\cite{kay2017kinetics} as the backbone to extract image appearance features, and employ a two-dimensional positional encoding (PE) to provide position information as in \cite{detr,wang2021end}. In this way, we obtain the raw individual visual representation $\mathbf{X} \in \mathbb{R}^{H \times W \times C}$, where $C$ is the number of channels. $\mathbf{X}$ also represents the global scene information directly extracted from the input frame. Then, the RoIAlign~\cite{he2017mask} operation is applied to extract the refined visual features of individuals from $\mathbf{X}$ according to the body part bounding boxes of each individual in the scene. After that, we utilize a fully-connected layer and the ReLU~\cite{nair2010rectified} activation function to encode the extracted features as a $D$-dimensional feature vector $\mathbf{\overline{X}} \in \mathbb{R}^{N \times P \times D}$, where $N$ represents the number of actors in the scene and $P$ is the number of body parts. $\mathbf{\overline{X}}$ represents the visual representation of individuals and is further utilized to perform the individual relational inference in a later module. \begin{figure} \centering \includegraphics[width=0.83\linewidth]{Images_pdf/cvpr_picture_fig3.pdf} \caption{Illustration of multi-head self-attention in the Semantic Transformer.
This process aims to explore the correlation between different action labels, and introduces the C-C Map through an element-wise operation to influence the semantic relation modeling process.} \label{SRM} \end{figure} \subsection{Knowledge Augmented Semantic Relation Module} \label{sec3.4} In this module, we first encode the $K$ different individual action labels $\mathbf{L}$ into one-hot vectors and embed them into a $D$-dimensional latent space to obtain the semantic features $\mathbf{Y} \in {\mathbb{R}^{K\times D}}$. After that, we propose a Semantic Transformer to infer the relations among semantic features of different individual actions, and introduce the C-C Map into the multi-head self-attention mechanism of the standard transformer. In the conventional self-attention operation, the output is computed as a weighted sum of values ($V$), where the weights are computed by the correlation function of queries ($Q$) and keys ($K$). As shown in \cref{SRM}, to better explore the correlation of different semantic features, we add the C-C Map to the weights of the values before the weighted sum operation. In this way, we can use the real data distribution to facilitate the relation modeling. The output of the multi-head self-attention mechanism $\mathbf{\overline{Y}}$ can be formulated as: \begin{equation} A^s_i=\sigma\left(\frac{\mathbf{Y}W^Q_i\cdot{(\mathbf{Y}W^K_i)}^T}{\sqrt{d}}\right) \end{equation} \begin{equation} h^s_i = \left(A^s_i+\mathbf{P^{cc}}\right)\cdot\mathbf{Y}W^V_i \end{equation} \begin{equation} \mathbf{\overline{Y}}=\mathrm{F}^s([{h}^s_1,{h}^s_2,...,{h}^s_i]) \end{equation} where $W^Q_i,W^K_i,W^V_i$ are learnable matrices of dimension $D \times d$, and $i$ indexes the attention heads. $\sigma$ denotes the softmax operation. $[,]$ denotes the concatenation operation. $\mathrm{F}^s$ refers to the fully-connected layer adopted to integrate the outputs of multiple attention heads. The residual connection operation and a feed-forward network are adopted to enhance the feature representation, and the final output of the Semantic Transformer is denoted as $\mathbf{\widehat{Y}}\in \mathbb{R}^{K \times D}$. \begin{figure} \centering \includegraphics[width=0.97\linewidth]{Images_pdf/cvpr_picture_fig4.pdf} \caption{Illustration of the Knowledge-Semantic-Visual Interaction Module. This module integrates visual and semantic information to enhance the individual representations, and introduces the C-P Map into the multi-head cross-attention mechanism.} \label{HRM} \end{figure} \subsection{Knowledge-Semantic-Visual Interaction Module} \label{sec3.5} This module enhances the individual representations by integrating visual and semantic representations with the assistance of the C-P Map, and performs the task of group activity recognition. We first employ a conventional vision transformer encoder~\cite{gavrilyuk2020actor} to enhance the output $\mathbf{\overline{X}}$ of the Visual Representation Module and obtain a refined individual representation $\mathbf{\widehat{X}} \in \mathbb{R}^{N \times P \times D}$, which has undergone preliminary relational inference, and use a multilayer perceptron to reshape $\mathbf{\widehat{X}}$ into ${N \times D}$. Then, we design a Visual-Semantic Inference Transformer to perform individual relation inference across the visual and semantic representations. Specifically, we feed $\mathbf{\widehat{X}}$ into the encoder of the transformer and output the encoded visual representation.
The multi-head self-attention mechanism used in the above encoder is similar to that in the Semantic Transformer introduced in \cref{sec3.4}, except that the C-C Map is not added before the weighted sum operation. The sum of $\mathbf{\widehat{X}}$ and the output of the self-attention mechanism is denoted as $\mathbf{\widetilde{X}}\in \mathbb{R}^{N \times D}$ and further used in the subsequent process. The decoder of the Visual-Semantic Inference Transformer takes the enhanced semantic feature representation $\mathbf{\widehat{Y}}$, the encoded visual representation $\mathbf{\widetilde{X}}$, and the C-P Map as input to produce the knowledge augmented individual features $\mathbf{\overline{O}} \in {\mathbb{R}^{N\times D}}$. In addition, to better associate the individuals with the distribution of their actions, we first select the $N$ corresponding position coordinates from the C-P Map according to the bounding boxes of the $N$ individuals in the frame, converting the dimension of the C-P Map from $H\times W\times K$ to $N\times K$. As shown in \cref{HRM}, we use a multi-head cross-attention mechanism to perform semantic-visual interaction as follows: \begin{equation} {A}^o_i=\sigma\left(\frac{\mathbf{\widetilde{X}}{W^{\widehat{Q}}_i}\cdot{(\mathbf{\widehat{Y}}{W^{\widehat{K}}_i})}^T}{\sqrt{d}}\right) \end{equation} \begin{equation} {h}^o_i = \left(A^o_i+\mathbf{P^{cp}}\right)\cdot\mathbf{\widehat{Y}}{W^{\widehat{V}}_i} \end{equation} \begin{equation} \mathbf{O}=\mathrm{F}^o([{h}^o_1,{h}^o_2,...,{h}^o_i]) \end{equation} where ${W^{\widehat{Q}}_i},{W^{\widehat{K}}_i},{W^{\widehat{V}}_i}$ are learnable matrices of dimension $D \times d$, and $i$ indexes the attention heads. $\mathrm{F}^o$ denotes the fully-connected layer adopted to integrate the outputs of multiple attention heads. In this way, we can use the distribution of individual actions to guide the feature representation. The residual connection operation and a feed-forward network are adopted to enhance the feature representation and output the knowledge augmented individual features $\mathbf{\overline{O}}$. \subsection{Training and Reasoning} \label{sec3.6} Our framework is trained in an end-to-end manner. To supervise the feature representation learning, we design two classification heads to perform the individual action classification and group activity classification for both $\mathbf{\widehat{X}}$ and $\mathbf{\overline{O}}$. We also utilize the global scene information $\mathbf{X}$ to perform group activity classification. The classification losses $\mathcal{L}_{x}$, $\mathcal{L}_{o}$ and $\mathcal{L}_{s}$ can be formulated as: \begin{equation} \mathcal{L}_{x} = \mathcal{L}_{CE}(\mathbf{\hat{y}}^{x}_\mathbf{g},{\mathbf{y}_\mathbf{g}})+\lambda \mathcal{L}_{CE}(\mathbf{\hat{y}}^{x}_{a},{\mathbf{y}_{a}}) \end{equation} \begin{equation} \mathcal{L}_{o} = \mathcal{L}_{CE}(\mathbf{\hat{y}}^{o}_\mathbf{g},{\mathbf{y}_\mathbf{g}})+\lambda \mathcal{L}_{CE}(\mathbf{\hat{y}}^{o}_{a},{\mathbf{y}_{a}}) \end{equation} \begin{equation} \mathcal{L}_{s} = \mathcal{L}_{CE}(\mathbf{\hat{y}}^{s}_\mathbf{g}, {\mathbf{y}_\mathbf{g}}) \end{equation} where $\mathcal{L}_{CE}(\cdot)$ denotes the cross-entropy loss function. $\mathbf{y}_{a}$ and $\mathbf{y}_\mathbf{g}$ are the ground truth labels for individual actions and group activities, respectively. $\mathbf{\hat{y}}^{x}_{a}$ and $\mathbf{\hat{y}}^{o}_{a}$ represent individual action scores predicted from $\mathbf{\widehat{X}}$ and $\mathbf{\overline{O}}$.
Similarly, $\mathbf{\hat{y}}^{x}_\mathbf{g}$ and $\mathbf{\hat{y}}^{o}_\mathbf{g}$ represent group activity scores predicted from $\mathbf{\widehat{X}}$ and $\mathbf{\overline{O}}$, respectively. $\mathbf{\hat{y}}^{s}_\mathbf{g}$ is the group activity score predicted from $\mathbf{X}$. $\lambda$ is the scalar weight to balance the different classification tasks. The overall loss function is formed as follows: \begin{equation} \mathcal{L} = \mathcal{L}_{x} + \mathcal{L}_{o} + \mathcal{L}_{s} \end{equation} In the inference stage, we sum the group activity classification scores $\mathbf{\hat{y}}^{x}_\mathbf{g}$, $\mathbf{\hat{y}}^{o}_\mathbf{g}$ and $\mathbf{\hat{y}}^{s}_\mathbf{g}$ as the final classification score. \section{Experiments} \subsection{Datasets} \label{sec4.1} \textbf{Volleyball dataset.} The Volleyball dataset (VD) is one of the largest public datasets for the evaluation of group activity recognition. It has 3,493 and 1,337 video clips for training and testing, respectively. Moreover, it provides high-resolution video clips containing eight group activity categories: left spike, right spike, left set, right set, left pass, right pass, left winpoint, and right winpoint. In the middle frame of each video clip, all individuals are labeled by bounding box coordinates and individual action categories (blocking, digging, falling, jumping, moving, setting, passing, spiking, standing, and waiting). The image resolution for VD is 1280 $\times$ 720. \textbf{Collective Activity dataset.} The Collective Activity dataset (CAD) is another high-quality public dataset for group activity recognition. It contains 5 group activity categories (walking, crossing, waiting, talking, and queuing). The middle frame of every ten frames in this dataset is labeled with bounding box coordinates and individual action categories (NA, walking, crossing, waiting, talking, and queuing). The group activity category is determined by the majority of the individual action categories in the scene. The image resolution for CAD is 720 $\times$ 480. \subsection{Implementation Details} \label{sec4.2} For each video clip, we select ten frames (the middle frame, five frames before, and four frames after) as the input of the backbone network. We utilize the RoIAlign layer with crop size $7 \times 7$ to obtain $\mathbf{\overline{X}}$ and embed the features into $D=256$ dimensions. For experiments on VD, we employ two attention heads in the encoder layer of the transformers and one attention head in the decoder. For experiments on CAD, we set the number of attention heads in the encoder of the transformers to 16. The dimension $d$ is set to 128 and the size of the fully-connected layer in the feed-forward network is set to 1024. For the VD, we utilize the Adam optimizer with $\beta_{1}=0.9, \beta_{2}=0.999$ and $\varepsilon=10^{-8}$, empirically. The batch size is set to 1, and the learning rate ranges from $1 \times 10^{-4}$ to $1 \times 10^{-6}$. For the CAD, we set the Adam optimizer hyper-parameter $\varepsilon=10^{-10}$, the batch size to 2, and the initial learning rate to $5 \times 10^{-5}$. The other settings are the same as on the Volleyball dataset. We adopt the widely used Multi-class Classification Accuracy (MCA) and Mean Per Class Accuracy (MPCA) as evaluation metrics. Our experiments are conducted on an NVIDIA GeForce GTX 2080 GPU with the PyTorch deep learning framework.
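To make the knowledge concretization of \cref{sec3.2} and the knowledge-augmented attention of \cref{sec3.4} more tangible, we include a minimal PyTorch sketch below. It only illustrates how the C-C Map can be built from per-image action labels and added to the attention weights of a single Semantic Transformer layer; the function and class names, the handling of same-label pairs, and all details not stated above are illustrative assumptions rather than our released implementation.
\begin{verbatim}
# Minimal sketch of the C-C Map construction (Sec. 3.2) and the
# C-C-Map-augmented self-attention (Sec. 3.4). Names and details not
# stated in the paper are illustrative assumptions.
import torch
import torch.nn as nn

def build_cc_map(per_image_labels, K):
    """per_image_labels: list of lists of action-label ids, one list per image."""
    m = torch.zeros(K, K)
    for labels in per_image_labels:
        for i in labels:
            for j in labels:
                if i != j:              # assumption: same-label pairs are not counted
                    m[i, j] += 1.0      # co-occurrence count m_ij
    return m / m.sum().clamp(min=1.0)   # normalized C-C Map P^cc

class KnowledgeAugmentedSelfAttention(nn.Module):
    """One multi-head self-attention layer with P^cc added to the attention weights."""
    def __init__(self, D=256, d=128, num_heads=2):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.ModuleDict({"q": nn.Linear(D, d, bias=False),
                           "k": nn.Linear(D, d, bias=False),
                           "v": nn.Linear(D, d, bias=False)})
            for _ in range(num_heads)])
        self.fuse = nn.Linear(num_heads * d, D)   # F^s: fully-connected fusion layer
        self.scale = d ** 0.5

    def forward(self, Y, P_cc):
        # Y: (K, D) semantic embeddings of the action labels; P_cc: (K, K) C-C Map
        outs = []
        for h in self.heads:
            A = torch.softmax(h["q"](Y) @ h["k"](Y).t() / self.scale, dim=-1)
            outs.append((A + P_cc) @ h["v"](Y))   # knowledge prior added before the weighted sum
        return self.fuse(torch.cat(outs, dim=-1))
\end{verbatim}
A residual connection and a feed-forward network, as described in \cref{sec3.4}, would follow this layer to produce $\mathbf{\widehat{Y}}$.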
\begin{table} \centering \caption{Comparison with state-of-the-art methods on the Volleyball dataset.} \begin{tabular}{@{}lccc@{}} \toprule Method & Backbone & \makecell{Optical \\ Flow} & MCA \\ \midrule HDTM~\cite{ibrahim2016hierarchical} & AlexNet & & 81.9\\ CERN~\cite{shu2017cern} & Vgg16 & & 83.3\\ StagNet~\cite{stagNet} & Vgg16 & & 89.3\\ Detector-Free~\cite{Detector-Free} & Resnet-18 & & 90.5\\ SSU~\cite{bagautdinov2017social} & Inception-v3 & & 90.6\\ HiGCIN~\cite{yan2020higcin} & Resnet-18 & & 91.4\\ AT~\cite{gavrilyuk2020actor} & I3D & & 91.4\\ PRL~\cite{hu2020progressive} & Vgg16 & & 91.4\\ ARG~\cite{ARG} & Inception-v3 & & 92.5\\ STBiP~\cite{yuan2021learningcontext} & Inception-v3 & & 93.3\\ STDIN~\cite{yuan2021spatio} & Vgg16 & & 93.6\\ Groupformer~\cite{li2021groupformer} & Inception-v3 & & 94.1\\ Dual-AI~\cite{fu2019dual} & Inception-v3 & & 94.4\\ \hline SBGAR~\cite{sbgar} & Inception-v3 & \checkmark & 66.9\\ CRM~\cite{azar2019convolutional} & I3D & \checkmark & 93.0\\ AT~\cite{gavrilyuk2020actor} & I3D & \checkmark & 93.0\\ JLSG~\cite{ehsanpour2020joint} & I3D & \checkmark & 93.1\\ MSCA-GNN~\cite{liu2021multimodal} & I3D & \checkmark & 93.4\\ ERN~\cite{pramono2020empowering} & R50-FPN+I3D & \checkmark & 94.1\\ Groupformer~\cite{li2021groupformer} & I3D & \checkmark & 94.9\\ Dual-AI~\cite{han2022dualai} & Inception-v3 & \checkmark & \textbf{95.4}\\ \hline Ours(RGB) & I3D & & \textbf{94.5}\\ Ours(RGB+Flow) & I3D & \checkmark & 94.8\\ \bottomrule \end{tabular} \label{table1} \end{table} \begin{table}[h] \centering \caption{Comparison against state-of-the-art methods on the Volleyball dataset under limited training data.} \begin{tabular}{@{}lccccc@{}} \toprule \multirow{2}{*}{Method} & \multicolumn{5}{c}{Data Ratio} \\ \cmidrule{2-6} & 5\% & 10\% & 25\% & 50\% & 100\% \\ \midrule PCTDM~\cite{PCTDM} & 53.6 & 67.4 & 81.5 & 88.5 & 90.3 \\ AT~\cite{gavrilyuk2020actor} & 54.8 & 67.7 & 84.2 & 88.0 & 90.0 \\ HiGCIN~\cite{yan2020higcin} & 35.5 & 55.5 & 71.2 & 79.7 & 91.4\\ ERN~\cite{ehsanpour2020joint} & 41.2 & 52.5 & 73.1 & 75.4 & 90.7\\ ARG~\cite{ARG} & 69.4 & 80.2 & 87.9 & 90.1 & 92.3\\ STDIN~\cite{yuan2021spatio} & 58.3 & 71.7 & 84.1 & 89.9 & 93.1\\ Dual-AI~\cite{han2022dualai} & 76.2 & 85.5 & 89.7 & 92.7 & 94.4\\ \hline Ours-Base & 66.2 & 78.8 & 88.2 & 92.2 & 93.4\\ \textbf{Ours} & \textbf{79.0} & \textbf{85.6} & \textbf{92.1} & \textbf{93.2} & \textbf{94.5}\\ \bottomrule \end{tabular} \label{table2} \end{table} \begin{table} \centering \caption{Comparison with state-of-the-art methods on the Collective Activity dataset.} \begin{tabular}{@{}lccc@{}} \toprule Method & Backbone & MCA & MPCA\\ \midrule SBGAR~\cite{sbgar} & Inception-v3 & 86.1 & -\\ Recurrent~\cite{wang2017recurrent} & Vgg16 & - & 89.4\\ PCTDM~\cite{yan2018participation} & AlexNet & - & 92.2\\ PRL~\cite{hu2020progressive} & Vgg16 & - & 93.8\\ CRM~\cite{azar2019convolutional} & I3D & 85.8 & 94.2\\ JLSG~\cite{ehsanpour2020joint} & I3D & 89.4 & -\\ SPTS \cite{tang2018mining} & Vgg16 & 90.7 & 95.7\\ AT~\cite{gavrilyuk2020actor} & I3D & 92.8 & 98.5\\ MSCA-GNN~\cite{liu2021multimodal} & I3D & 93.1 & - \\ ERN~\cite{pramono2020empowering} & R50-FPN+I3D & 93.9 & - \\ Groupformer~\cite{li2021groupformer} & I3D & \textbf{94.7} & -\\ Dual-AI~\cite{han2022dualai} & Inception-v3 & - & 96.5\\ \hline Ours(RGB) & I3D & 92.8 & 98.5 \\ Ours(RGB+Flow) & I3D & 93.5 & \textbf{98.7}\\ \bottomrule \end{tabular} \label{table3} \end{table} \subsection{Comparison with the State-of-the-Arts} \label{sec4.3} \textbf{Results on Volleyball dataset.} 
We compare our framework with the state-of-the-art methods on VD and report the results in \cref{table1}. Our method (RGB only) reaches an MCA of 94.5\%, achieving the best performance among all comparison methods~\cite{ibrahim2016hierarchical,HANHCN,stagNet,shu2017cern,ARG,azar2019convolutional,hu2020progressive,pramono2020empowering,yuan2021learningcontext,liu2021multimodal} implemented without optical flow. Moreover, we propose an extended version of the framework that utilizes a late fusion strategy, as in~\cite{gavrilyuk2020actor,yuan2021learningcontext}, to fuse RGB and Flow results. This version reaches an MCA of 94.8\%. Such results illustrate that our framework can achieve competitive performance compared with state-of-the-art methods. More importantly, although the performance of our method is slightly lower than~\cite{han2022dualai} in the above experiments, our framework achieves better results under limited training data. To demonstrate this, we conduct experiments on the VD with data ratios of 5\%, 10\%, 25\%, and 50\%. For a fair comparison, we select the same samples as~\cite{han2022dualai}. The results of the comparison methods~\cite{PCTDM,ARG,gavrilyuk2020actor,yan2020higcin,ehsanpour2020joint,yuan2021spatio,han2022dualai} are reported directly from~\cite{han2022dualai}. \cref{table2} presents the experimental results. As can be seen, our method clearly performs better than the state-of-the-art methods at data ratios of 5\%, 10\%, 25\%, and 50\%. Particularly, it achieves MCAs of 79.0\%, 85.6\%, 92.1\%, and 93.2\%, surpassing the existing best results by 2.8\%, 0.1\%, 2.4\%, and 0.5\%, respectively. Furthermore, compared with the baseline model, which only consists of the Visual Representation Module and predicts the classification score simply from $\mathbf{\overline{X}}$ and $\mathbf{X}$, our method significantly improves performance by exploiting knowledge under limited training data. In particular, when taking 5\%, 10\%, and 25\% of the samples as the training data, our method achieves an MCA higher than the baseline model by 12.8\%, 6.8\%, and 3.9\%, respectively. This clearly demonstrates the effectiveness and superiority of our method under limited training data. \begin{figure*}[h] \centering \includegraphics[width=0.97\linewidth]{Images_pdf/cvpr_picture_fig6.pdf} \caption{The t-SNE visualization of the representations learned by different models on the Volleyball dataset.} \label{fig5} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=0.97\linewidth]{Images_pdf/cvpr_picture_fig5.pdf} \caption{Visualization of the predictions on the Volleyball dataset. (a) The ground truth of individual action labels in \textquotedblleft right-set\textquotedblright. (b) The model without the C-P Map and C-C Map mistakenly classifies the \textquotedblleft right-set\textquotedblright\ activity as \textquotedblleft right-spike\textquotedblright. (c) Our model with the C-C Map and C-P Map classifies the activity correctly.} \label{fig6} \end{figure*} \textbf{Results on Collective Activity dataset.} On CAD, we use both MCA and MPCA for evaluation. Similar to other methods~\cite{azar2019convolutional,yuan2021learningcontext,yuan2021spatio,han2022dualai}, we merge the categories ``walking'' and ``crossing'' into ``moving'' to calculate MPCA. In addition, since the scenes in this dataset change dramatically, there is no consistent correlation between action labels and individual positions.
Therefore, the Class-Position Distribution Map (C-P Map) is not used on this dataset. \cref{table3} reports the experimental results. All the compared methods adopt optical flow. As can be seen from this table, our method achieves excellent results in terms of the MPCA. Specifically, our method gains an MPCA of 98.5\% with only RGB input, which matches or outperforms all of the compared methods. When using optical flow inputs, it further improves by 0.2\% and achieves an MPCA of 98.7\%. For the MCA, our method is slightly lower than the existing best result, mainly because ``walking'' and ``crossing'' have highly similar appearances, which confuses our model when constructing the C-C Map and further impacts the recognition of these two categories. \begin{table} \centering \caption{Ablation studies of the introduced knowledge on the Volleyball dataset.} \begin{tabular}{@{}lc@{}} \toprule Model & MCA \\ \midrule Base & 93.4\\ Base + Semantic & 93.6\\ Base + Semantic + C-P Map & 93.9\\ Base + Semantic + C-C Map & 94.0\\ \hline \textbf{Base + Semantic + C-C Map + C-P Map} & \textbf{94.5} \\ \bottomrule \end{tabular} \label{table4} \end{table} \subsection{Ablation Study} \label{sec4.4} To investigate the effect of the introduced knowledge in the proposed framework, we conduct ablation studies on the VD with the following variants: (A) Base: it only consists of the Visual Representation Module and directly uses the individual representation $\mathbf{\overline{X}}$ and global scene information $\mathbf{X}$ for the final classification. (B) Base + Semantic: it removes both the Class-Class Distribution Map and the Class-Position Distribution Map from the overall framework. (C) Base + Semantic + C-P Map: it removes the Class-Class Distribution Map from the whole framework. (D) Base + Semantic + C-C Map: it removes the Class-Position Distribution Map from the whole framework. As shown in \cref{table4}, the MCA of model (B) is improved by 0.2\% compared with model (A), which indicates the effectiveness of introducing semantic information. The MCA of model (C) and model (D) is improved by 0.3\% and 0.4\% compared with the result of model (B), respectively. The complete framework, which introduces both distribution maps, reaches an MCA of 94.5\%, outperforming the ablation models by 1.1\%, 0.9\%, 0.6\%, and 0.5\%, respectively. These results show the effectiveness of utilizing knowledge to enhance the relation inference process for group activity classification. \subsection{Visualization} \textbf{The t-SNE visualization of learned representations.} We adopt t-SNE~\cite{van2008visualizing} to analyze the feature distributions of different models on the VD. As shown in \cref{fig5} (a), the feature representations of ``l-spike'' cannot be separated well from ``l-winpoint''. As in \cref{fig5} (b) and (c), when the C-P Map or C-C Map is introduced to enhance the relation inference process, our method can distinguish ``l-spike'' and ``l-winpoint'' well. As in \cref{fig5} (d), our final framework is able to differentiate feature representations much better than the others. These results clearly demonstrate the effectiveness of introducing knowledge to group activity recognition. \textbf{The visualization of predictions.} An example of group activity recognition on the VD is visualized in \cref{fig6}.
Compared with the ground truth in \cref{fig6} (a), the model without the C-P Map and C-C Map mistakenly classifies the actions of four players, as shown in \cref{fig6} (b); misclassifying ``setting'' as ``spiking'' may result in a mistake in the group activity recognition. In fact, the appearance of a jumping-like player in \cref{fig6} (b) looks like ``spiking''. Therefore, such misclassification is understandable, since the model performs classification mainly based on visual information. In comparison, by introducing the knowledge concretized as the C-C Map and C-P Map, for example that the ``setting'' action appears in the defensive zone with a higher probability than the ``spiking'' action, our model enhances the visual representation and correctly classifies the ``setting'' action. Therefore, the group activity is correctly classified, as shown in \cref{fig6} (c). \section{Conclusion} In this paper, we observe that the existing visual representation based group activity recognition methods have not explored the influence of abundant knowledge on the relation modeling process, which limits their performance. We propose an idea of knowledge concretization and further present an end-to-end group activity recognition framework. Our framework first utilizes a Visual Representation Module to extract appearance features, and then a Knowledge Augmented Semantic Relation Module to extract semantic information and explore the semantic relations. Finally, a Knowledge-Semantic-Visual Interaction Module integrates visual and semantic information with the help of knowledge. Extensive experiments validate that knowledge can effectively enhance the relation inference process and the individual representations. Benefiting from the design of these modules, our framework achieves competitive experimental results on two widely-used datasets. \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.14323", "language": "en", "timestamp": "2023-03-01T02:09:36", "url": "https://arxiv.org/abs/2302.14323", "yymm": "2302" }
\title{Read Pointer Meters in Complex Environments Based on a Human-like Alignment and Recognition Algorithm} \par{Yan Shu, Shaohui Liu, Honglei Xu and Feng Jiang} \par{Harbin Institute of Technology} \noindent {\small\bf Abstract} \quad {\small Recently, developing an automatic reading system for analog measuring instruments has gained increased attention, as it enables the collection of numerous equipment states. Nonetheless, two major obstacles still obstruct its deployment to real-world applications. The first is that existing methods rarely take the speed of the entire pipeline into account. The second is that they are incapable of dealing with some low-quality images (e.g., meter breakage, blur, and uneven scale). In this paper, we propose a human-like alignment and recognition algorithm to overcome these problems. More specifically, a Spatial Transformed Module (STM) is proposed to obtain the front view of images in an autonomous way based on an improved Spatial Transformer Network (STN). Meanwhile, a Value Acquisition Module (VAM) is proposed to infer accurate meter values through an end-to-end trained framework. In contrast to previous research, our model aligns and recognizes meters entirely through learnable processing, which mimics human behaviour and thus achieves higher performance. Extensive results verify the robustness of the proposed model in terms of accuracy and efficiency. The code and the datasets will be available at https://github.com/shuyansy/A-detection-and-recognition-pipeline-of-complex-meters-in-wild.} \vspace*{3mm} \noindent{\small\bf Keywords} \quad {\small analog measuring instruments, pointer meter reading, Spatial Transformed Module, Value Acquisition Module} \vspace*{4mm} \section{Introduction} \label{sec:intro} In complex industrial environments with harsh conditions such as radiation, toxicity, and high temperature, it is necessary to inspect the production condition with the help of instruments to ensure safety\cite{article1}. Traditionally, the acquired data are read manually by humans, who are capable of deriving precise readings from complex meters in a variety of shapes, forms, and styles, despite never having seen the meter in question.
However, the manual method is labor intensive and time consuming, so it is of great practical significance to rely on inspection robots and computer vision technology\cite{article2,article3,article4,article5} for automatic meter reading. Substation meters can be classified into digital and pointer types. While reading digital meters can be considered an OCR task and is relatively simple to accomplish using text spotting techniques\cite{QinICCV,MaskTextspotter,FOTS,ABCNet}, reading pointer meters presents a different and more difficult problem: there are major visual changes between meter faces, the camera viewpoint has a significant effect on their depicted shape and numbering location, and the existence of shadows, meter breakage, and specular reflections adds to the ambiguity of the pointer hands. While this issue has been around for a long time, few previous solutions have been capable of reliably obtaining readings from meters, except in extremely limited circumstances. Additionally, it is difficult for researchers to work on this problem due to the lack of reliable training and evaluation benchmarks. \vspace{3mm} \begin{center} \label{figpipeline} \includegraphics[width=8cm]{fig/pipeline.png}\\ \vspace{3mm} \parbox[c]{8.3cm}{\footnotesize{Fig.1.~} Overview of the previous pointer meter reading pipeline (i) compared to ours (ii). ``MD", ``PM", ``PT", and ``CR" mean Meter Detection, Points Matching, Perspective Transform, and Component Retrieval, respectively. ``STM" and ``VAM" mean the Spatial Transformed Module and Value Acquisition Module we propose.} \end{center} \vspace{3mm} \setcounter{figure}{1} \begin{figure*}[!htb] \centering \subfigure[]{ \includegraphics[width=6cm,height=5cm]{fig/1.jpg}} \subfigure[]{ \includegraphics[width=5cm,height=5cm]{fig/2.pdf}} \caption{(a) The efficiency of our STM for meter alignment, which is 5 times faster than the conventional perspective transform method. (b) Our VAM (bottom line) can read more accurate values in some low-quality images than prior methods (top line).} \label{figcompare} \end{figure*} Existing automatic meter reading systems\cite{meter1,meter2,meter3,meter4}, according to the relevant literature, generally consist of the following pipeline: first, the meter's pure area is detected using conventional neural network-based detection algorithms or image processing techniques; then the captured target is aligned to a front view by a perspective transform method; lastly, meter values are obtained by meter component (the pointer and the scale) retrieval and meter number recognition. However, most of these methods suffer from two main problems. First, the alignment process is typically time-consuming due to its intricate point-matching steps, which hinders the overall efficiency of the system. Second, their reading model is not robust; it consists of isolated and independent modules for meter component retrieval and number recognition, which are unaware of their interdependence, resulting in poor accuracy. Therefore, ``how to design an algorithm for efficient alignment and robust recognition of pointer meters" remains largely unsolved. To address these issues, we propose a novel human-like alignment and recognition algorithm, which simplifies the meter reading pipeline as shown in Fig \ref{figpipeline}. To be more precise, we propose a novel Spatial Transformed Module (STM) for alignment via implicitly learning the homography transformation, which is heavily inspired by the Spatial Transformer Networks (STN)\cite{stn}.
STM aligns meters more efficiently than previous morphological conversion methods by discarding the point-matching process. Additionally, a Value Acquisition Module (VAM) is established in a unified framework of meter component retrieval and meter number recognition, simulating the structure of an end-to-end text spotter. By excavating the relationship between meter components and meter numbers, VAM can learn a richer representation and thus can read precise meter values from low-quality images. As shown in Fig \ref{figcompare}, on the MC1296 dataset we propose, STM runs at 50 FPS, which is 5 times faster than the conventional alignment method. Meanwhile, VAM can handle some difficult data such as meter breakage, blur and uneven scale. In this paper, we make the following contributions: (i) We design a unified framework involving detection, alignment and recognition stages. The detection can simply be an off-the-shelf object detection model. The alignment stage involves a deep neural network which introduces an improved STN to regress homography transformation parameters implicitly. At the recognition stage, we are the first to establish an end-to-end architecture to tightly couple meter component retrieval and meter number recognition, boosting both the accuracy and efficiency of pointer meter reading. (ii) We propose a new benchmark dataset called Meter\_Challenge (MC1296) which contains 1296 images captured in real scenes by inspection robots. MC1296 is organized in a tree structure, containing images, annotations and evaluation metrics for different tasks (meter detection, meter alignment, and meter recognition) from top to bottom. (iii) Extensive experiments verify the effectiveness and robustness of the method we propose. The rest of this paper is organized as follows. The related background knowledge is provided in Section \ref{sec:re}, including the previous pointer meter reading pipelines, the Spatial Transformer Networks (STN) and the end-to-end text spotting methods highly related to our work. Section \ref{sec:method} introduces the implementation process of the proposed method. In Section \ref{sec:experiment}, the proposed method is verified by extensive experiments and ablation studies. The conclusions of this paper are summarized in Section \ref{sec:conclusion}. \setcounter{figure}{2} \begin{figure*}[!htb] \centering \includegraphics[width=18cm,height=6cm]{fig/model.png} \caption{The proposed framework for pointer meter recognition. YDM detects meter targets and crops meter regions, which are fed into STM to obtain aligned views. VAM then outputs meter values accurately and efficiently.} \label{figmethod} \end{figure*} \section{Related Works} \label{sec:re} We commence this section by reviewing major pointer meter reading frameworks. Additionally, we discuss the research on STN and end-to-end text spotting methods that is highly relevant to our work. \subsection{Pointer meter reading frameworks} Numerous advances\cite{meter1,meter2,meter3,meter4,meterdeep,metermaskrcnn,meterrobust,metertemplate} have been made in the reading of pointer meters over the last few years. The existing frameworks are generally divided into three stages: meter detection, meter alignment, and meter recognition. Traditional algorithms\cite{metertemplate} such as template matching and the table lookup method are used in meter detection. To deal with complex backgrounds, some object detection methods such as Faster RCNN\cite{meter4} have been introduced.
In order to calibrate the camera angle to obtain a front-view image, perspective transform techniques\cite{meter4,meter2} are applied by calculating a transformation matrix determined by point matching. Image processing methods\cite{meterdesign} also propose using the image subtraction method or the Hough Transform algorithm to extract the pointer for meter recognition. Additionally, machine learning and deep learning are used to improve reading accuracy; for example, He et al.\cite{metermaskrcnn} improve the Mask RCNN\cite{maskrcnn} method for pointer segmentation. Following that, final values can be determined by calculating the pointer angle and the meter number output. The majority of the aforementioned approaches are able to read pointer meters, but few of them can balance accuracy and speed, due to complex post-processing in meter alignment\cite{meter4} or inadequate visual representations in meter recognition. \subsection{Spatial Transformer Networks (STN)} In contrast to the conventional perspective transform method, which explicitly calculates the transformation matrix, STN\cite{stn} introduces a novel learnable module that enables spatial manipulation of data within the network. STN is advantageous for a wide variety of computer vision tasks due to its efficiency and flexibility. ASTER\cite{aster} consists of a rectification network and a recognition network that can deal with text that is distorted or has an irregular layout. Lee et al.\cite{imageregis} propose Image-and-Spatial Transformer Networks (ISTNs) for downstream image registration optimization. Additionally, Yang et al.\cite{clock} introduce a clock alignment architecture based on STN, which motivates us to develop a more efficient meter alignment module. \subsection{End-to-end text spotters} To spot text in images, a straightforward two-stage approach is to cascade an existing detector and recognizer sequentially. However, due to the lack of complementarity between the detector and the recognizer, such methods suffer from low efficiency and accuracy. To mitigate this problem, end-to-end trainable neural networks for text spotting have been explored, achieving state-of-the-art performance. Li et al.~\cite{2017TCRNN} first build a unified end-to-end framework that simultaneously localizes and recognizes text with a single forward pass, achieving positive results in the horizontal text spotting task. Benefiting from a convolution sharing strategy, FOTS~\cite{2018FOTS} and EAA~\cite{2018EAA} pool multi-oriented text regions from the feature map by designing a RoIRotate layer and a Text-Alignment layer, respectively. Unfortunately, few researchers have incorporated end-to-end text spotters into their pointer meter recognition frameworks. Our work is structured similarly to existing frameworks for pointer meter reading. To increase the applicability of previous work, we replace the traditional perspective transform method with an improved STN and then create an end-to-end meter recognition module for meter component retrieval and meter number recognition. \section{Methods} \label{sec:method} The purpose of this paper is to design an algorithm for efficient alignment and robust recognition of pointer meters. To achieve this goal, we establish a unified framework, which is shown in Fig \ref{figmethod}. Our proposed architecture accepts an image as input and then performs detection, alignment, and recognition sequentially. It is noteworthy that our STM (see Sec.\ref{MA}) can directly transform the detected meter into an aligned view without any post-processing steps.
Meanwhile, the VAM (see Sec.\ref{MR}) we propose can learn rich visual representations by excavating the relationship between component retrieval and number recognition. \subsection{Meter Detection} Cropping meter regions prior to recognition is necessary to eliminate background interference. To accomplish this, some traditional image processing techniques such as Hough Circle Detection and Template Matching have been used, both of which have shortcomings on some low-quality images. Currently, object detection networks are used to detect and crop the meter, as follows: \begin{equation}\label{eq1} \begin{aligned}\ I_{det}=\Phi_{det}(I;\Theta_{det})\in \mathbb{R}^{^{3 \times h \times w}} \end{aligned} \end{equation} where $I$ is the given unlabeled image, while $\Phi_{det}$ and $\Theta_{det}$ represent the detection function and its learnable parameters, respectively. The detection can actually be performed by any off-the-shelf object detector. However, to reduce the efficiency cost and handle small meter targets, we propose a YOLO-based Detection Module (YDM) based on YOLO-v5\cite{yolov4}, which has achieved promising performance in many tasks. To achieve better performance in our tasks, where data are scarce and targets are small, we apply a multi-scale training strategy and artificially augment the images by copy-pasting some small objects. The performance of YDM can be seen in Sec. \ref{sec:experiment}. \subsection{Meter Alignment} \label{MA} \textbf{Motivation.} The detected pure meter image could be directly passed to a module for reading recognition. This is typically not ideal for two reasons: first, due to the limitations of the localisation module; and second, even when the meter is properly localised, it can be hard to read at times due to the interference of the viewpoint. Previous methods directly apply a perspective transform to calibrate the camera angle and obtain a front-view image, as follows: \begin{equation}\label{eq2} \begin{aligned}\ (x,y,w')=(u,v,w) \cdot T=\\ (u,v,w)\cdot\begin{bmatrix} a_{11} & a_{12} &a_{13} \\ a_{21} & a_{22} &a_{23} \\ a_{31} & a_{32} & a_{33}\\ \end{bmatrix}\\ \end{aligned} \end{equation} \begin{equation}\label{eq3} \begin{aligned}\ (X,Y)=(\frac{x}{w'},\frac{y}{w'})\\ (U,V)=(\frac{u}{w},\frac{v}{w}) \end{aligned} \end{equation} where $(U,V)$ represents the coordinate of a point in the original image, $(X,Y)$ is the coordinate of the corresponding point in the transformed image, and $(u,v,w)$ and $(x,y,w')$ are the homogeneous-space representations of $(U,V)$ and $(X,Y)$, respectively. By matching four pairs of feature points between the two images, the transform matrix $T$ is determined. These methods, however, suffer primarily from complex point-matching algorithms, which are time-consuming and not very robust. This drives us to design a more efficient and stronger module for meter alignment. \textbf{Revisiting vanilla STN.} Different from the perspective transform, which calculates the transformation matrix by point matching, STN can transform the detected meter to a fronto-parallel view using learned homography transformation parameters.
Specifically, given the output $I_{det}$ of YDM, STN establishes a mapping $\Phi_{stn}$ by predicting a homography transformation $H$ with 8 degrees of freedom, and $\Phi_{sam}$ denotes the Differentiable Image Sampling (DIS) operation that obtains the canonical view of $I_{det}$ by bilinear interpolation: \begin{equation}\label{eq4} \begin{aligned} H&=\Phi_{stn}(I_{det})\in \mathbb{R}^{3\times3} \\ I_{align}&=\Phi_{sam}(I_{det},H)\in \mathbb{R}^{3\times h\times w} \end{aligned} \end{equation} Therefore, predicting an accurate homography transformation $H$ is the key issue. \textbf{Spatial Transformed Module (STM).} A straightforward idea is to regress $H$ against the ground truth $\hat{H}$ in a supervised way. Nonetheless, based on our findings and rigorous testing, the deep network fails to learn the explicit parameters of $H$ for the following reasons: (i) the training data are limited relative to the deep CNN's large number of parameters; (ii) the parameters of $H$ span a large range of values, making the regression difficult to optimize. To circumvent these problems, we model the implicit spatial transformation relationship between images instead of regressing $H$ directly. Specifically, for each $\hat{I}_{det}$ in the training set, we first annotate its inner dial region with a binary mask map. Then, for various meter forms, we match four pairs of feature points to determine the real $\hat{H}$. For an elliptical dial, the endpoints of the major and minor axes are used as the initial points, while the corresponding points are defined by the intersections of the major axis, the minor axis, and the circumcircle. For a rectangular shape, $\hat{H}$ can be calculated by mapping the vertices of the rectangle directly to the vertices of the image. We can then obtain the aligned image $\hat{I}_{align}$ by perspective transform: \begin{equation}\label{eq5} \begin{aligned} \hat{I}_{align}=warp(\hat{I}_{det},\hat{H}) \end{aligned} \end{equation} The vertex coordinate offsets $\hat{\delta}_{c}$ between $\hat{I}_{det}$ and $\hat{I}_{align}$ can then be obtained and serve as the training target of STM, supervised with a mean-squared error (MSE) loss: \begin{equation}\label{eq6} \begin{aligned} {L_{align}}=\sum_{i}(\delta_{ci}-\hat{\delta}_{ci})^2 \end{aligned} \end{equation} where $i$ indexes the coordinates. The STM pipeline can thus be written as: \begin{equation}\label{eq7} \begin{aligned} \delta_{c}&=\Phi_{stm}(I_{det})\in \mathbb{R}^{4\times2}\\ H&=warp\_inv(I_{det},I_{det}+\delta_{c})\in \mathbb{R}^{3\times3}\\ I_{align}&=\Phi_{sam}(I_{det},H)\in \mathbb{R}^{3\times h \times w} \end{aligned} \end{equation} In training, we use ResNet18\cite{resnet} to extract features from $I_{det}$; through the forward propagation of the network, an accurate $H$ and canonical images are acquired. \subsection{Meter Recognition} \label{MR} \textbf{Overall Design.} How can a model read meters the way a human does? Previous methods predict key meter elements such as the pointer, scales, and numbers to achieve this goal. However, they tend to create independent modules to handle the different components and the numbers, resulting in suboptimal solutions for meter recognition. We propose a unified module called the Value Acquisition Module (VAM), which consists of a meter component retrieval branch and a meter number recognition branch that exploit the deep relationship between the two tasks. As illustrated in Fig \ref{figmethod}, we apply ResNet18 as the backbone and create two separate feature-merging modules to form a pair of complementary branches.
Specifically, upsampling and pixel-wise addition are used to fuse intermediate layers of ResNet. VAM allows these two rather different tasks to benefit from each other by disentangling weight sharing and introducing a mirror symmetry of FPN\cite{fpn}. Ablation studies are presented in Sec. \ref{sec:experiment}. \textbf{Meter component retrieval branch.} We retrieve meter components (the pointer and the key scales) using a semantic segmentation head heavily inspired by Mask R-CNN\cite{maskrcnn}. The branch generates two 1-channel segmentation maps, namely the Pointer Map and the Key Scale Map, by applying two distinct $1 \times 1$ convolutions to the backbone features. The Pointer Map indicates the location of the meter's pointer, whereas the Key Scale Map indicates the key scale regions from which the pointer angle is later derived. The Pointer Map and Key Scale Map are both trained by minimizing the Dice loss: \begin{equation}\label{eq_dice} \begin{aligned} L_{pm}&=1-\dfrac{2 \sum_{i} P_{pm}(i) G_{pm}(i)}{\sum_{i} P_{pm}(i)^{2}+\sum_{i} G_{pm}(i)^{2}}\\ L_{k s m}&=1-\dfrac{2 \sum_{i} P_{k s m}(i) G_{k s m}(i)}{\sum_{i} P_{k s m}(i)^{2}+\sum_{i} G_{k s m}(i)^{2}} \end{aligned} \end{equation} where $pm$ and $ksm$ denote the Pointer Map and the Key Scale Map, $P_{(\cdot)}(i)$ refers to the value of the $i$\textsuperscript{th} pixel in the predicted map, and $G_{(\cdot)}(i)$ refers to the value of the $i$\textsuperscript{th} pixel in the ground truth map. The final loss for the meter component retrieval branch is a weighted combination of the two map losses, balanced by $\lambda \in (0,1)$: \begin{equation}\label{eq8} \begin{aligned} L_{com}=\lambda L_{pm} + (1-\lambda) L_{ksm} \end{aligned} \end{equation} In our experiments, we set $\lambda$ to 0.4, assigning more importance to the Key Scale Map, which is relatively difficult to learn during training due to its small spatial extent. \textbf{Meter number recognition branch.} Previous methods recognize meter numbers with a separate system, which wastes memory and lowers efficiency. In our VAM, the meter number recognition branch resembles standard text spotters, as discussed in Sec.\ref{sec:re}. To further boost inference speed, we only detect the key number in the meter, i.e., the one closest to the number `0', and then recognize it with the assistance of feature sampling. The key number detection task is treated as a per-pixel classification task, in which a single convolution outputs dense per-pixel predictions of the key number location. The key number bounding box is obtained by a minimum bounding rectangle operation. Meanwhile, to overcome the class imbalance problem, we introduce online hard example mining (OHEM)\cite{ohem} to better distinguish between number areas and backgrounds, with the balancing factor set to 3 in our work. Denoting the set of elements selected by OHEM in the score map as $\Omega$, the loss function for key number detection can be formulated as: \begin{equation}\label{eq9} \begin{aligned} L_{num\_det}&=\frac{1}{\parallel\Omega\parallel }\sum_{x\in\Omega}\text{CrossEntropy}(p_x,p_x^*)\\ &=\frac{1}{\parallel\Omega\parallel }\sum_{x\in\Omega}(-p_x^*\log p_x-(1-p_x^*)\log(1-p_x)) \end{aligned} \end{equation} where $\parallel\cdot\parallel$ denotes the number of elements in a set, and $p_x$ and $p_x^*$ are the predicted probability and the ground truth label of pixel $x$, respectively.
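For clarity, a minimal PyTorch-style sketch of the Dice losses above and their weighted combination is given below; the tensor shapes, function names, and small epsilon are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch

def dice_loss(pred, gt, eps=1e-6):
    # pred, gt: (B, H, W) probability map and binary ground truth mask.
    inter = (pred * gt).sum(dim=(1, 2))
    denom = (pred ** 2).sum(dim=(1, 2)) + (gt ** 2).sum(dim=(1, 2))
    return (1.0 - 2.0 * inter / (denom + eps)).mean()

def component_loss(pm_pred, pm_gt, ksm_pred, ksm_gt, lam=0.4):
    # Weighted combination of the Pointer Map and Key Scale Map losses (lambda = 0.4).
    return lam * dice_loss(pm_pred, pm_gt) + (1.0 - lam) * dice_loss(ksm_pred, ksm_gt)
\end{verbatim}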
The feature sampling layer aims to convert detected feature regions into fixed-size outputs from which an RNN-based sequence recognizer can be established. We adopt RoIRotate from \cite{FOTS}, which transforms a rotated region into a fixed-size feature map via max-pooling and bilinear interpolation. Similar to but distinct from STN, RoIRotate obtains the affine transformation in an unsupervised way, resulting in a more general operation for extracting features from regions of interest. To improve recognition performance, we use only ground truth key number regions during training rather than predicted number regions. \setcounter{figure}{3} \begin{figure*}[!htb] \centering \includegraphics[width=12cm,height=8cm]{fig/dataset.png} \caption{ Visualization results of one sample in the dataset. (i), (ii), and (iii) show the data for meter detection, meter alignment, and meter recognition, respectively. } \label{figdataset} \end{figure*} Given the transformed number feature, we first convert the key number feature $F\in \mathbb{R}^{C\times H\times W}$ into a 2D sequence feature $L\in \mathbb{R}^{C\times W}$ with several sequential convolutions, following the same configuration as CRNN\cite{crnn}. Then, for each time step $t=0,1,\ldots,T+1$, we feed $l_{1},\ldots,l_{W}\in L$ into a bi-directional LSTM with $D=256$ output channels per direction, which can be formulated as follows: \begin{equation}\label{eq10} \begin{aligned} h^{'}_t&=f(l_t,h^{'}_{t-1})\\ y_t&=\varphi (h^{'}_t)=\mathrm{softmax}(W_0h^{'}_t)\\ \end{aligned} \end{equation} where $f(\cdot)$ is the recurrence function, $h^{'}_t$ is the hidden state at time step $t$, and $W_0$ linearly transforms hidden states to the output space of size 12, comprising the 10 Arabic numerals, a token representing ``.'', and a special END token. Finally, a CTC layer is applied to align the predicted sequence to the label sequence. Following \cite{crnn}, the recognition loss can be formulated as \begin{equation}\label{eq11} \begin{aligned} L_{num\_reco}=-\frac{1}{N}\sum_{n=1}^{N}\log p(y^{*}_n\mid x) \end{aligned} \end{equation} where $N$ is the number of number regions in an input image, and $y^*_n$ is the recognition label. \textbf{Training procedure and inference.} VAM is a unified module that can be trained end-to-end. The overall loss function is \begin{equation}\label{eq12} \begin{aligned} L=L_{com}+L_{num\_det}+L_{num\_reco} \end{aligned} \end{equation} During inference, binarized score maps for the pointer and key scales are first obtained by thresholding at 0.5. Then, a thinning algorithm turns the pointer region into a thin line segment, and the Hough line transform is used to obtain the pointer position. Meanwhile, the key scale centers are localized by averaging the pixel positions within each closed area. Finally, the meter reading is calculated with the angle method: \begin{equation}\label{eq13} \begin{aligned} Result=\frac{\alpha_1 }{\alpha_2} \times num\_rec \end{aligned} \end{equation} where $\alpha_1$ is the angle between the pointer and the zero scale, and $\alpha_2$ is the angle between the zero scale and the key scale. $num\_rec$ is the output of the meter number recognition branch; the meter reading is thus obtained automatically.
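As an illustration of the angle method, the final reading could be computed as in the short sketch below, assuming that the dial center and the 2D coordinates of the pointer tip, the zero scale, and the key scale have already been extracted from the binarized maps as described above; the angle convention and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def angle_from_center(center, point):
    # Clockwise angle (radians) of `point` around the dial center,
    # measured from the upward axis in image coordinates (y pointing down).
    v = np.asarray(point, dtype=float) - np.asarray(center, dtype=float)
    return np.arctan2(v[0], -v[1]) % (2 * np.pi)

def read_meter(center, pointer_tip, zero_scale, key_scale, num_rec):
    # Angle method: reading = (pointer angle / key-scale angle) * recognized key number.
    zero = angle_from_center(center, zero_scale)
    a1 = (angle_from_center(center, pointer_tip) - zero) % (2 * np.pi)
    a2 = (angle_from_center(center, key_scale) - zero) % (2 * np.pi)
    return a1 / a2 * num_rec
\end{verbatim}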
\setcounter{figure}{4} \begin{figure*}[!htb] \centering \includegraphics[width=18cm,height=8cm]{fig/detection.png} \caption{ Qualitative results of meter detection, where a yellow bounding box denotes a pointer meter and a green bounding box denotes a digital meter. ``ID-num'' is the detection confidence. } \label{figdetect} \end{figure*} \section{Experiments} \label{sec:experiment} \subsection{Datasets} To our knowledge, there are no publicly available and appropriate benchmarks for this task. We therefore created a new dataset called Meter\_Challenge (MC1296), which contains 1296 images of scenes captured by automated inspection robots. To help the model adapt to its natural environment, the dataset includes complex backgrounds, multiple scales, a variety of viewpoint angles, and a variety of meter shapes. To better fit the meter reading task, we organized the dataset into a tree structure, with each level representing a distinct task (meter detection, meter alignment, and meter recognition), complete with associated images, annotations, and evaluation metrics. Fig \ref{figdataset} illustrates some visualization results, while Table \ref{dataset} contains summary statistics. \begin{center} \captionof{table}{Statistics of the proposed MC1296 dataset. ``mb'', ``co'', and ``psn'' represent meter bounding boxes, coordinate offsets, and pointer/scale/number masks with number labels, respectively.} \begin{tabular}{lccc} \toprule Dataset\_task & Train\_size & Test\_size & Annotations \\ \midrule M\_detection & 1036 & 260 & mb \\ M\_alignment & 1028 & 247 & co \\ M\_reading & 739 & 185 & psn \\ \bottomrule \end{tabular} \label{dataset} \end{center} \subsection{Implementation details} The system we propose consists of YDM, STM, and VAM. YDM follows a configuration similar to \cite{yolov4}, so we focus on the implementation of STM and VAM. Specifically, for both modules we use a ResNet pretrained on ImageNet\cite{imagenet} as the backbone; the input image size is $640 \times 640$ and the training batch size is 8. We use Adam to optimize the two networks, with an initial learning rate of $1\times 10^{-4}$ and a momentum of 0.9. Meanwhile, some basic data augmentation techniques are applied, such as random cropping, random rotation, and brightness/contrast jittering. Our experiments are conducted on a single GTX-1080 GPU with PyTorch 1.5.0. \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\linewidth]{fig/align.png} \end{center} \vspace{0pt} \caption{ Qualitative results of meter alignment, where the top row shows the original images, and the middle and bottom rows show the transformed images generated by STN and STM, respectively. Note that STN cannot handle images with extremely large camera angles. } \label{figalign} \end{figure*} \subsection{Meter detection results} To disentangle the effect of YDM, we begin by reporting the meter detection results on our dataset. Following the object detection literature, we report the average precision (AP) at two bounding box IoU thresholds, AP50 and AP75, i.e., the average precision at IoU thresholds of 0.5 and 0.75, respectively. As shown in Table \ref{detect}, the meter detection task is handled well. To demonstrate the advantages of our method, we compare it to the commonly used YOLO algorithm\cite{yolo} and the method in \cite{meter4}; the comparison shows that our YDM performs better in terms of both accuracy and efficiency.
The qualitative results are shown in Fig \ref{figdetect}, which demonstrates that YDM can detect meters of different shapes and sizes. \begin{center} \captionof{table}{Quantitative results of different methods for meter detection.} \begin{tabular}{@{}lccc@{}} \toprule Model & AP50(\%) & AP75(\%) & FPS \\ \midrule Liu.et al\cite{meter4} & 91.3 & 89.5 & 4.3 \\ YOLO\cite{yolo} & 90.0 & 88.2 & 6.7 \\ Ours & 98.6 & 97.1 & 12.4 \\ \bottomrule \end{tabular} \label{detect} \end{center} \begin{center} \captionof{table}{Quantitative results of different methods for meter alignment. ``Rel'' is the average relative error and ``Ref'' is the average reference error.} \begin{tabular}{@{}lccc@{}} \toprule Method & Rel(\%) & Ref(\%) & FPS \\ \midrule None & 5.91 & 1.20 & - \\ Perspective transform\cite{meter4} & 1.72 & 0.23 & 10 \\ STN\cite{stn} & 3.40 & 0.95 & 44 \\ STM & 1.70 & 0.26 & 50 \\ \bottomrule \end{tabular} \label{align} \end{center} \subsection{Meter alignment results} To demonstrate the effectiveness and robustness of STM in the recognition system, we conducted extensive experiments on the validation dataset, comparing it to the traditional perspective transform method and STN. Fig.\ref{figalign} illustrates the qualitative findings. As can be seen, the image can be easily and automatically transformed into a front-view image using STM, regardless of the camera angle. However, due to the limited learning capability of the vanilla STN, it struggles to align meters under some extremely large camera angles. Additionally, as shown in Tab \ref{align}, we conducted ablation studies that demonstrate the superiority of STM in terms of inference speed and its influence on the meter recognition task. Note that the average relative error and the average reference error are the evaluation metrics used to represent the meter recognition error rate, which will be discussed in detail in Sec \ref{recognition_result}. It can be seen that STM helps reduce the recognition error rate, as it allows meters to be read from various angles and sizes. Our STM also achieves accuracy competitive with the perspective transform while increasing inference speed, indicating that STM achieves a more favorable trade-off between accuracy and efficiency. \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{fig/result.png} \end{center} \vspace{0pt} \caption{ Some visualization results produced by our method. The red line is the predicted pointer line, the blue points are the key scale areas, and the meter reading results are shown in the top left. } \label{figresult} \end{figure*} \subsection{Meter recognition results} \label{recognition_result} To demonstrate our method's recognition performance, we compare it to other methods. To minimize inter-person variability in readings, the readings obtained by human vision are the average of the results of twenty expert workers. Meanwhile, to make the comparison fairer, we follow evaluation metrics similar to \cite{metermaskrcnn}. Specifically, we choose the average relative error $\hat{\Theta}$ and the average reference error $\hat{\Gamma}$ as evaluation indicators, defined as \begin{equation}\label{eq14} \begin{aligned} \hat{\Theta}=\frac{\sum_{i=1}^{n}\frac{\mid p_i-g_i \mid}{g_i}}{n} \times 100\% \\ \hat{\Gamma}=\frac{\sum_{i=1}^{n}\frac{\mid p_i-g_i \mid}{R}}{n} \times 100\% \end{aligned} \end{equation} where $p_i$ is the predicted meter value, $g_i$ is the ground truth value,
$R$ is the meter's range, and $n$ is the total number of evaluated samples. As shown in Tab \ref{result}, our method outperforms previous methods in terms of average relative error and achieves results competitive with \cite{meter4} in average reference error, indicating that our algorithm has a strong capacity for reading recognition. Additionally, our method can perform inference at approximately 25 frames per second, demonstrating that it is practical for real-world applications. We show some visualization results in Fig \ref{figresult}, demonstrating our method's high adaptability to complex environments with variable illumination, scale, and image tilt. \begin{center} \captionof{table}{Quantitative results of different methods for meter reading recognition. ``Rel'' is the average relative error and ``Ref'' is the average reference error.} \begin{tabular}{@{}lccc@{}} \toprule Method & \makecell{Venue} & \makecell{Rel(\%)} & \makecell{Ref(\%)} \\ \midrule Zheng et al.\cite{meter2} & Measurement(2016) & 10.32 & 0.91 \\ He et al.\cite{metermaskrcnn} & ICIST(2019) & 1.85 & 0.30 \\ Liu et al.\cite{meter4} & Measurement(2020) & 1.77 & \textbf{0.24} \\ Ours & - & \textbf{1.70} & 0.26 \\ \bottomrule \end{tabular} \label{result} \end{center} To disentangle the effects of the unified VAM framework, we conduct ablation studies to investigate the relationship between the meter component retrieval and meter number recognition branches. We begin by reporting the full model's end-to-end results in Tab \ref{ram}. Notably, we evaluate pointer/key scale detection and key number recognition using AP50 and number-level recognition accuracy, respectively. By optimizing all loss functions simultaneously, our model achieves a reasonable level of success in both detection and recognition tasks. Additionally, we construct a two-stage model in which the meter component retrieval and meter number recognition branches are trained independently: the meter component retrieval network is built by removing the meter number recognition branch, and similarly, the meter number recognition network is built by removing the meter component retrieval branch from the original network. Our proposed VAM outperforms the two-stage method by a significant margin on both the meter component retrieval and meter number recognition tasks. The results indicate that our joint training strategy accelerates the convergence of the model parameters. \begin{center} \captionof{table}{Ablation studies on VAM. ``MCRB'' and ``MNRB'' denote training only the meter component retrieval branch or only the meter number recognition branch, respectively.} \begin{tabular}{@{}lccc@{}} \toprule Method & \makecell{pointer\_det(\%)} & \makecell{key\\ scale\_det(\%)} & \makecell{key num-\\ber\_reco(\%)} \\ \midrule VAM & 95.6 & 93.2 & 88.7 \\ MCRB & 93.1 & 90.5 & - \\ MNRB & - & - & 87.2 \\ \bottomrule \end{tabular} \label{ram} \end{center} \section{Conclusion} \label{sec:conclusion} We propose a novel method for accurate and efficient pointer meter reading, built from YDM, STM, and VAM. Specifically, STM obtains a front view of the meter autonomously with an improved STN, and VAM recognizes meters accurately within a unified framework combining a meter component retrieval branch and a meter number recognition branch. Experiments on the challenging dataset we propose demonstrate that the method has a strong capacity for pointer meter reading.
Currently, the algorithm has been successfully applied to robots performing substation inspections. Future work will concentrate on model acceleration in order to develop a more efficient framework for video meter reading. {\small \bibliographystyle{JCST}}
{ "arxiv_id": "2302.14277", "language": "en", "timestamp": "2023-03-01T02:07:56", "url": "https://arxiv.org/abs/2302.14277", "yymm": "2302" }
\section{Introduction} \label{sec:intro} The coronavirus disease (COVID-19) has spread rapidly to countries around the world since 2019. In March 2020, it was declared a pandemic by the World Health Organization (WHO) \cite{roosa2020real}. The pandemic has presented severe challenges to routine daily life, the global economy, and general public health. However, medical resources for COVID-19 are very limited compared to the demand. As a complement to the RT-PCR test, computed tomography (CT) imaging of the human lung is considered a tool for diagnosing and monitoring COVID-19 infection. Several patterns, such as ground-glass opacity, multifocal patchy consolidation, and the crazy-paving pattern, have been reported as manifestations of COVID-19 infection \cite{liu20202019}. However, the segmentation of COVID-19 infection remains challenging due to high variation in pattern, shape, and location, as well as blurred boundaries, as shown in Figure \ref{Figinf}. \begin{figure}[t] \centering \includegraphics[width=6cm]{patient.png} \caption{Examples of COVID-19 infected regions from the COVID-19 Challenge dataset \cite{roth2021rapid}. The infection area is delineated by the red line.} \label{Figinf} \end{figure} In the field of medical image segmentation, deep learning networks such as U-net \cite{ronneberger2015u}, Res-UNet \cite{diakogiannis2020resunet}, and transformer-based models \cite{cao2021swin} have been proposed and achieved promising results. Recently, many networks have been proposed for COVID-19 infection segmentation. Inf-Net \cite{fan2020inf} takes edge information of the lung as one of the supervision signals and introduces specific attention and multi-scale mechanisms. \cite{karthik2022contour} utilizes boundary and shape information to precisely capture infected tissues. These studies try to take advantage of edge and shape information, but this may only provide limited help due to the blurred boundaries of the infections. To alleviate the problem of insufficient data, semi-supervised \cite{liu2022ccat}, self-supervised \cite{fung2021self}, and weakly supervised \cite{laradji2021weakly} learning have been applied to the segmentation of COVID-19 infection. Radiologists mainly rely on low-level features such as texture, line, and intensity to identify infections, due to the heterogeneity of their shape and location \cite{ng2020imaging}. However, there is still a lack of models that emphasize low-level features, which are crucial for identifying the pattern of the infection. To detect subtle differences in low-level features, we propose DECOR-Net, a network that adds more decorrelated features in the shallow layers. We apply the channel re-weighting strategy to increase the number of channels in the early layers of the network and add the proposed decorrelation loss to ensure the diversity of low-level features. Compared with other decorrelation methods \cite{cogswell2015reducing, wang2020orthogonal}, the proposed decorrelation loss does not have the undesired effect of weight decay and directly decorrelates the feature maps instead of the weights. Note that our method does not add any extra parameters to the network. Our contributions in this work are threefold: (1) We introduce a strategy that improves COVID-19 infection segmentation by utilizing abundant decorrelated low-level features. (2) A novel loss function is proposed to reduce the correlation between feature channels. (3) Extensive experiments show that, with a relatively small model size, our proposed network outperforms most existing networks.
\begin{figure}[htb] \centering \includegraphics[width=8.5cm]{Network.png} \caption{Overview of the network. The gray numbers around each block denote the output size of each unit. The purple dashed arrows illustrate the channel re-weighting strategy.} \label{Fig.main1} \end{figure} \section{Method} \label{sec:method} The detailed architecture of the proposed network is presented in Figure \ref{Fig.main1}. Res-UNet\cite{diakogiannis2020resunet}, a widely used network for medical image segmentation, is selected as the baseline of our model. We make two main changes to Res-UNet: applying the channel re-weighting strategy and adding the proposed decorrelation loss to the output feature maps of all encoder units. \subsection{Channel re-weighting} \label{ssec:channel} \cite{zeiler2014visualizing} shows that the shallow layers of a network capture low-level features such as texture, edge, and intensity, while the deep layers preserve semantic information. Usually, Res-UNet has five layers, with 32, 64, 128, 256, and 512 channels, respectively. However, this regular setting is not suitable for the COVID-19 infection segmentation task, which requires relatively more low-level features. Thus, the channel re-weighting strategy strengthens the model's ability to perceive low-level features by directly increasing the number of channels in the shallow layers of the network. In addition, to keep the number of parameters unchanged, it simultaneously reduces the number of channels in the deep layers. In other words, it transfers parameters from deep layers to shallow layers. After trying different settings, we changed the channel configuration of Res-UNet from (32, 64, 128, 256, 512) to (248, 248, 112, 112, 112). We retain the number of encoder units in the network to preserve a large field of view and some high-level semantic information for identifying lobe regions. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{decor_loss.png} \caption{The pipeline for computing the decorrelation loss.} \label{Fig.main2} \end{figure} \subsection{Decorrelation loss} \label{ssec:decorloss} Each channel of the feature map contains a specific representation of the input. The channel re-weighting strategy greatly increases the number of channels in the shallow layers, which may result in redundancy among the learned low-level features. To ensure the diversity of the low-level features, the decorrelation loss (Decor loss) is proposed. The pipeline for computing the decorrelation loss is shown in Figure \ref{Fig.main2}. Referring to the channel interdependency modeling of \cite{fu2019dual}, we calculate the channel correlation map $C\in \mathbb{R} ^{C\times C}$ from the original features $H\in \mathbb{R} ^{C\times H\times W}$: \begin{equation} c_{i,j} = \sum_{h}^H\sum_{w}^W h_{i}^{h,w}h_{j}^{h,w} \end{equation} where $c_{i,j}$ denotes the value in the $i$th row and $j$th column of the correlation map $C$, and $h_{i}^{h,w}$ denotes the value in the $h$th row and $w$th column of the $i$th channel of the feature map $H$. For simplicity, we can also reshape $H\in \mathbb{R} ^{C\times H\times W}$ to $H\in \mathbb{R} ^{C\times HW}$: \begin{equation} c_{i,j} = H_{i}\cdot H_{j} \end{equation} Then, we apply a softmax function to obtain the probability map $X\in \mathbb{R} ^{C\times C}$: \begin{equation} x_{i,j} = \frac{\exp(\frac{c_{i,j}}{z_{i}})}{\sum_{k}^C \exp(\frac{c_{i,k}}{z_{i}})} \end{equation} where $z_{i}$ denotes the normalization term and is defined as the largest value in the $i$th row of $C$.
Adding $z_{i}$ prevents the model from simply enlarging the scale of $C$ to reduce the loss. Note that we treat the selected maximum value as a constant when computing the gradient. Finally, we apply a cross-entropy loss that encourages each channel to correlate only with itself: \begin{equation} L_{decor} = -\sum_{i}^C \log(x_{i,i}) \end{equation} To understand this further, consider the gradient of the loss with respect to a particular activation $h_{a}^{h',w'}$: \begin{equation} \begin{split} \frac{\partial L_{decor}}{\partial h_{a}^{h',w'}} = \sum_{i\neq a}(\frac{x_{a,i}}{z_{a}}+ \frac{x_{i,a}}{z_{i}})h_{i}^{h',w'}\\ +\frac{2(x_{a,a}-1)}{z_{a}}h_{a}^{h',w'} \end{split} \end{equation} There are two terms in the above formula. The first term describes the influence of the other channels on channel $a$, which pushes channel $a$ to decorrelate from the others. The second term encourages channel $a$ to strengthen its own response. Compared with \cite{cogswell2015reducing}, our proposed decorrelation loss does not have the undesired effect of weight decay, because increasing or decreasing the scale of the weights does not change $X$. Denoting $\lambda > 0$ as the weight of the decorrelation loss, the overall loss function $L$ is defined as follows: \begin{equation} L = \frac{1}{2}L_{CE}+\frac{1}{2}L_{DC}+\lambda L_{decor} \label{eq.L} \end{equation} where $L_{CE}$ denotes the binary cross-entropy loss and $L_{DC}$ denotes the Dice loss. \section{Experiments} \label{sec:experiments} \begin{table}[!htb] \centering \small \setlength\tabcolsep{2pt} \caption{Comparison with other state-of-the-art methods on the test set.} \label{Tab1} \begin{tabular}{c|c|cccc} \hline Method& Param.& Dice & IoU& Precision& Recall\\ \hline U-Net \cite{ronneberger2015u}& 2.637 M& 0.6097& 0.4654& 0.6647& 0.6457\\ Attention U-net \cite{oktay2018attention}& 8.725 M& 0.5883& 0.4500& 0.6449& 0.6195\\ U-net++ \cite{zhou2018unet}& 9.045 M& 0.5977& 0.4580& 0.6702& 0.6169\\ Inf-Net \cite{fan2020inf}& 33.122 M& 0.6129& 0.4689& 0.6504& 0.6508\\ U-net++ (large) \cite{zhou2018unet}& 36.165 M& 0.6053& 0.4664& 0.6672& 0.6372\\ Swin-Unet \cite{cao2021swin}& 41.342 M& 0.5998 & 0.4567 & 0.6423& 0.6502\\ \textbf{DECOR-Net(Ours)}& 6.457 M& \textbf{0.6378} & \textbf{0.4940}& \textbf{0.6788}& \textbf{0.6799}\\ \hline \end{tabular} \end{table} \subsection{Datasets and Evaluation Metrics} \label{ssec:metrcs} We run our experiments on the public COVID-19 Challenge dataset \cite{roth2021rapid}, which contains 199 lung CT volumes of COVID-19 patients comprising 9704 CT slices of size $512\times 512$. After splitting, there are 127 volumes in the training set, 32 volumes in the validation set, and 40 volumes in the test set. In this study, we adopt the Dice similarity coefficient (DSC), intersection over union (IoU), precision, and recall as evaluation metrics. \subsection{Implementation details} \label{ssec:imple} Our model was implemented in PyTorch using the MONAI framework \cite{cardoso2022monai}. The model was optimized with stochastic gradient descent using the Adam optimizer. The learning rate was initialized to $1\times10^{-4}$. Whenever the training loss did not decrease by at least $5\times10^{-3}$ within 30 epochs, the learning rate was reduced by a factor of 5. All models were trained for 300 epochs. The validation set was used to select the best epoch, while model performance was finally evaluated on the test set.
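For reference, the decorrelation term defined above can be computed with a few tensor operations. A minimal PyTorch-style sketch is given below, assuming a single sample whose encoder output has shape $(C, H, W)$; the function name and the small constant added for numerical stability are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch

def decorrelation_loss(feat):
    # feat: (C, H, W) output feature map of one encoder unit.
    c = feat.shape[0]
    flat = feat.reshape(c, -1)                          # H reshaped to (C, HW)
    corr = flat @ flat.t()                              # channel correlation map C, shape (C, C)
    z = corr.max(dim=1, keepdim=True).values.detach()   # row-wise max, treated as a constant
    probs = torch.softmax(corr / z, dim=1)              # probability map X
    return -torch.log(probs.diagonal() + 1e-12).sum()   # cross-entropy on the diagonal entries
\end{verbatim}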
The augmentation methods we apply include random rotation, scaling, elastic deformation, gamma correction, mirroring, and intensity shifting. The hyper-parameter $\lambda$ for the decorrelation loss was set to 0.01. \begin{table}[!htb] \centering \small \setlength\tabcolsep{2pt} \caption{Model performance using different decorrelation methods. CR denotes the channel re-weighting strategy.} \label{Tab.main3} \begin{tabular}{c|cccc} \hline Decorrelation Method& Dice & IoU& Precision& Recall\\ \hline Only CR& 0.6264& 0.4853& 0.6749& 0.6690\\ CR + Deconv loss\cite{cogswell2015reducing}& 0.6301& 0.4868& 0.6669& 0.6799\\ CR + Ortho loss\cite{wang2020orthogonal}& 0.6298& 0.4862& 0.6702& \textbf{0.6837}\\ CR + Decor loss (ours)& \textbf{0.6378}& \textbf{0.4940}& \textbf{0.6788}& 0.6799\\ \hline \end{tabular} \end{table} \subsection{Comparison with other models} \label{ssec:compare} Table \ref{Tab1} shows the quantitative results compared with other state-of-the-art methods. Our model achieves competitive results on all metrics. We experimented without loading pre-trained weights. All models are trained under the same framework, except that Inf-Net uses its published framework. \begin{figure}[htb] \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=4.3cm]{loss1.png}} \centerline{(a)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=4.3cm]{loss0.png}} \centerline{(b)}\medskip \end{minipage} \caption{(a) and (b) show the probability matrix $X$ for the output of the second layer with and without the Decor loss, respectively.} \label{fig:3} \end{figure} In Table \ref{Tab.main3}, we compare the proposed decorrelation loss with two other decorrelation methods. Hyper-parameter tuning is performed for each method. The proposed decorrelation loss still achieves the best performance. Figure \ref{fig:3} shows the average probability map $X$ over all slices in the validation set, indicating that the decorrelation loss enlarges the values on the diagonal and reduces the interdependencies between channels. Figure \ref{fig:4} shows that adding the decorrelation loss consistently improves model performance under different channel settings. In addition, model performance improves as the number of channels in the early layers increases. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{Fig3.png} \caption{Performance under different channel settings.} \label{fig:4} \end{figure} \subsection{Ablation study} The ablation study was performed to demonstrate the effectiveness of our methods. The results in Table \ref{Tab3} show the benefits of the channel re-weighting strategy and the decorrelation loss. \label{ssec:ablation} \begin{table}[!htb] \centering \small \setlength\tabcolsep{2pt} \caption{Ablation study of the channel re-weighting strategy (CR) and the decorrelation loss.} \label{Tab3} \begin{tabular}{cc|c|cccc} \hline CR& Decor loss& Param.& Dice & IoU& Precision& Recall\\ \hline & & 6.495 M& 0.5871& 0.4450& 0.6371& 0.6488\\ &\checkmark & 6.495 M& 0.5963& 0.4554& 0.6541& 0.6411\\ \checkmark & & 6.457 M& 0.6264& 0.4853& 0.6749& 0.6690\\ \checkmark& \checkmark & 6.457 M& \textbf{0.6378} & \textbf{0.4940}& \textbf{0.6788}& \textbf{0.6799}\\ \hline \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} In this paper, we build DECOR-Net to accurately segment COVID-19 infections from CT volumes.
To capture more discriminating low-level features, we introduce a channel re-weighting strategy and a novel decorrelation loss. The channel re-weighting strategy enlarges the number of channels in the shallow layers, and the decorrelation loss helps avoid redundancy between channels. Comprehensive experiments show that the proposed network outperforms most cutting-edge methods. The proposed decorrelation loss consistently improves model performance under different settings and outperforms other decorrelation methods. Moreover, our method does not increase the model size, which helps prevent overfitting and reduces the hunger for data. The proposed network may be widely applicable to lesion segmentation tasks that rely on texture or edge information. \section{Acknowledgments} \label{sec:acknowledgments} This study is supported by grants from the National Natural Science Foundation of China (62276081), the Innovation Team and Talents Cultivation Program of the National Administration of Traditional Chinese Medicine (NO:ZYYCXTD-C-202004), the Basic Research Foundation of Shenzhen Science and Technology Stable Support Program (GXWD20201230155427003-20200822115709001), and the Major Key Project of PCL. \bibliographystyle{IEEEbib}
{ "arxiv_id": "2302.14355", "language": "en", "timestamp": "2023-03-01T02:10:45", "url": "https://arxiv.org/abs/2302.14355", "yymm": "2302" }
\section{Problem Formulation} \label{formulation} We consider the problem of learning a function $\mathcal{F}$ that receives a visual scene observation $O \in \mathbb{R}^{H\times W\times3}$ and a task instruction $I=\{{s_t}\}_{t=1}^{T}$, and outputs a task-oriented grasp pose $g$ (in the image space), where $s_t$ is the $t$-th word token and $T$ is the maximum length: \begin{equation*} g = \mathcal{F}(O, I) \end{equation*} Here, $O$ is an RGB image of multiple objects, including one or more target objects and distractors. $I$ is a natural language sentence describing the task. Depicted in Fig.\ref{fig:grasp_example} are two examples of $g$. Each of them is a 5-dimensional grasp rectangle parameterized by grasp location $(x, y)$, orientation $\theta$, opening width $w$, and length $h$: \begin{equation*}\label{eq:2} g = \{x, y, \theta, w, h\} \end{equation*} where the first three parameters are in $SE(2)$ and represent the reference frame of the rectangle, and the last two describe its dimensions. $h$ is a fixed value for a designated gripper in our implementation, although it could be a learnable parameter in general. For orientation, the space of $SO(2)$ rotations is discretized into 120 bins. \begin{figure*}[th] \centering \vspace*{-0.2in} \begin{tikzpicture}[inner sep = 0pt, outer sep = 0pt] \node[anchor=south west] (fnC) at (0in,0in) {\includegraphics[height=4.0in,clip=true,trim=0in 0in 0in 0.0in]{pipeline_new.png}}; \end{tikzpicture} \vspace*{-0.1in} \caption{An overview of the GraspCLIP architecture: (a) GraspCLIP consists of two CLIP-based encoders, a Task-Oriented Fusion module, and a decoder. (b) The Coarse-grained Object Grounding module coarsely localizes the target object. (c) The Fine-grained Affordance Grounding module creates fine-grained correspondences between functional/affordance regions and task instructions.} \label{fig:pipeline} \vspace*{-0.20in} \end{figure*} We approximate the function $\mathcal{F}$ with a deep neural network, namely GraspCLIP. To train GraspCLIP, a dataset $\mathcal{D}=\{d_1, d_2,..., d_n\}$ of $n$ tuples is required. The details of data generation are introduced later. Each tuple consists of a visual scene observation $O_j$, a task instruction $I_j$, and a set of $m$ task-oriented grasp annotations $\mathcal{G}_j=\{g_{i, j}\}_{i=1}^{m}$: \begin{equation*} d_j = (O_j, I_j, \mathcal{G}_j) \end{equation*} where $j=1,2,...,n$. \section{Approach} \label{approach} An overview of the proposed GraspCLIP is presented in Fig.\ref{fig:pipeline}. The model architecture consists of four major components: two CLIP-based encoders, a two-stage coarse-to-fine TOF module, and a decoder. Each component is introduced in the rest of this section. \subsection{Encoder Module} Given a visual scene observation $O$ and a task instruction $I$, GraspCLIP first encodes the multi-modal inputs into a joint representation space, which enables cross-modal reasoning and semantic understanding. Previous works on VLG usually use backbone networks trained on small-scale, single-modal datasets. This leads to (1) limited generalization to novel concepts and (2) a large semantic gap between the two modalities. We therefore opt for CLIP-based encoders. The two encoders are pre-trained jointly on a dataset of 400 million (image, text) pairs and inherently learn a broad context of semantics. They contain rich prior knowledge for grounding open-ended, high-level semantic concepts (see Fig.\ref{fig:clip}).
This property is beneficial since an assistive robot in real-world applications needs to deal with an open set of object categories and tasks. Specifically, we use a CLIP pre-trained ResNet50 \cite{he2016deep} and a CLIP pre-trained BERT\cite{devlin2018bert} to encode $O$ and $I$, respectively. Although CLIP-based encoders provide a strong basis, CLIP is originally designed to align the whole image (instead of pixels or regions) with the input sentence, leading to a significant gap between high-level image understanding and low-level task-oriented grasping. We next address this problem with a multi-modal fusion module and a decoder that transfer the CLIP encodings to a task-oriented grasp prediction. \begin{figure}[t] \centering \begin{tikzpicture}[inner sep = 0pt, outer sep = 0pt] \node[anchor=south west] (fnC) at (0in,0in) {\includegraphics[height=1.5in,clip=true,trim=0.0in 0.2in 0.1in 0.2in]{encoder.png}}; \end{tikzpicture} \vspace*{-0.28in} \caption{Given detected object proposals and natural language descriptions, CLIP outputs distributions over proposals without training.} \label{fig:clip} \vspace*{-0.25in} \end{figure} \subsection{Task-Oriented Fusion Module}\label{VLRF} Using a single output from each CLIP encoder is not enough for accurate task-oriented grasp prediction, since a task instruction contains information at multiple levels of granularity. For example, ``Use the \textit{knife} to \textit{cut} an apple'' requires both coarsely grounding the target object ``\textit{knife}'' and understanding which fine-grained object part to grasp for the target task ``\textit{cut}''. To tackle this issue, a hierarchical approach is first employed to capture the semantic meaning of the multi-modal inputs. First, we transform $I$ into two types of language embeddings: a sentence embedding vector $l_{sen} \in \mathbb{R}^{1024}$ and a word embedding sequence $l_{word} \in \mathbb{R}^{77\times 512}$ (with zero-padding). While $l_{sen}$ provides a broad abstraction of the whole instruction, $l_{word}$ stores a detailed embedding vector for each individual word token. Similarly, the intermediate features from the CLIP visual encoder are also extracted to obtain a hierarchical representation of $O$ (i.e., object-part-shape). To achieve both object grounding and task grounding, we then need to build hierarchical correspondences between the two sets of representations. Thus, a two-stage, coarse-to-fine Task-Oriented Fusion (TOF) module is proposed. It consists of a Coarse-grained Object Grounding (COG) module and a Fine-grained Affordance Grounding (FAG) module. In the first stage, COG creates a coarse mapping from $I$ to the target object in $O$. It takes as input the high-level visual feature map $v_{high} \in \mathbb{R}^{10 \times 10\times 1024}$ and the sentence embedding $l_{sen}$. To reduce the semantic gap between the two modalities, a linear projection is first applied: $l_{sen}\rightarrow \Tilde{l}_{sen} \in \mathbb{R}^{1024}$. The Hadamard product is then taken at each spatial location to perform object grounding: \begin{equation*} \Tilde{v}_{high, i} = v_{high, i} \odot \Tilde{l}_{sen}, i=1,...,10\times10 \end{equation*} $\Tilde{v}_{high}$ is upsampled and concatenated with the mid-level visual feature map $v_{mid} \in \mathbb{R}^{20 \times 20 \times 1024}$, followed by a transposed convolution block to output $v_{cog} \in \mathbb{R}^{40 \times 40 \times 256}$. However, as discussed before, object grounding alone is insufficient to predict task-oriented grasps.
The model must also establish a fine-grained correspondence between the target task in $I$ and a functional/affordance region on the target object. To tackle this, the FAG module is introduced in the second stage. Following the theory of affordance\cite{gibson1977theory}, affordance is defined here in the second order. For example, the affordances ``\textit{cut}'' and ``\textit{handover}'' correspond to the knife handle and blade, respectively. The architecture of FAG is shown in Fig.\ref{fig:pipeline}(c). The computational procedure can be divided into two steps: FAG first explores affordance regions on $v_{cog}$ and then maps $l_{word}$, especially the object and task tokens, to these regions. To model the fine-grained intra-modal (visual affordance exploration) and inter-modal (word-to-affordance mapping) interactions, two types of Transformer-based attention mechanisms\cite{vaswani2017attention} are utilized in cascade. According to \cite{myers2015affordance}, regions sharing similar geometric structures are likely to have the same affordance. Therefore, we incorporate a self-attention layer to capture the non-local structural information in $v_{cog}$. An example is depicted in Fig.\ref{fig:attention}(a) for clarification. Self-attention provides a global context for local point-wise affordance exploration and parses $v_{cog}$ into a set of functional regions. Specifically, $v_{cog}$ is first flattened into $z_{cog} \in \mathbb{R}^{1600 \times 256}$. The self-attended feature map $z_{sa}$ can then be computed as: \begin{gather*} z_{sa} = \text{softmax}(\frac{Q_{sa}K_{sa}^{\top}}{\sqrt{256}})V_{sa}, \\ Q_{sa} = W_{sa}^{Q}z_{cog}, K_{sa} = W_{sa}^{K}z_{cog}, V_{sa} = W_{sa}^{V}z_{cog} \end{gather*} where $W_{sa}^{Q}$, $W_{sa}^{K}$, and $ W_{sa}^{V}$ are the self-attention query, key, and value projection matrices, respectively. After the visual affordance exploration, we are ready to build fine-grained correspondences between affordance regions and word tokens. A cross-attention layer is adopted. As shown in Fig.\ref{fig:attention}(b), the intuition is to reconstruct $z_{sa, i}$ from all elements in $l_{word}$, weighted by their normalized cross-modal correspondences, where $i=1,...,1600$. To match the dimension of $z_{sa}$, $l_{word}$ is first projected to a lower dimension of 256: $l_{word}\rightarrow \Tilde{l}_{word} \in \mathbb{R}^{77 \times 256}$. The cross-attended feature map $z_{ca}$ can then be computed as: \begin{gather*} z_{ca} = \text{softmax}(\frac{Q_{ca}K_{ca}^{\top}}{\sqrt{256}})V_{ca}, z_{ca} = \text{FFN}(z_{ca}),\\ Q_{ca} = W_{ca}^{Q}z_{sa}, K_{ca} = W_{ca}^{K}\Tilde{l}_{word}, V_{ca} = W_{ca}^{V}\Tilde{l}_{word} \end{gather*} where $W_{ca}^{Q}$, $W_{ca}^{K}$, and $ W_{ca}^{V}$ are the cross-attention query, key, and value projection matrices, respectively, and FFN is a feedforward layer. Since the training of the two attention layers is unstable at the beginning, we equip them with a learnable gating parameter $\alpha$ initialized to 0. In this way, GraspCLIP learns to localize the target object in the initial training stage, and gradually attends to the fine-grained affordance regions supporting the target task. Finally, $z_{ca}$ is reshaped to $v_{ca} \in \mathbb{R}^{40\times40\times256}$, and then fused with the low-level visual feature map $v_{low}\in \mathbb{R}^{80\times80\times256}$ to output $v_{fag} \in \mathbb{R}^{160 \times 160 \times 64}$.
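To make the FAG computation concrete, a minimal single-head PyTorch-style sketch of the gated self-/cross-attention cascade is given below. The residual connections, single-head attention, and all module and variable names are illustrative assumptions; the actual implementation may differ (e.g., multi-head attention).
\begin{verbatim}
import torch
import torch.nn as nn

class FAGSketch(nn.Module):
    # Self-attention over flattened visual tokens, then cross-attention to word
    # embeddings, each modulated by a learnable gate initialized to zero.
    def __init__(self, dim=256, word_dim=512):
        super().__init__()
        self.q_sa = nn.Linear(dim, dim)
        self.k_sa = nn.Linear(dim, dim)
        self.v_sa = nn.Linear(dim, dim)
        self.word_proj = nn.Linear(word_dim, dim)
        self.q_ca = nn.Linear(dim, dim)
        self.k_ca = nn.Linear(dim, dim)
        self.v_ca = nn.Linear(dim, dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.alpha_sa = nn.Parameter(torch.zeros(1))  # gates start at 0
        self.alpha_ca = nn.Parameter(torch.zeros(1))

    def forward(self, z_cog, l_word):
        # z_cog: (1600, 256) flattened v_cog; l_word: (77, 512) word embeddings.
        scale = z_cog.shape[-1] ** 0.5
        attn = torch.softmax(self.q_sa(z_cog) @ self.k_sa(z_cog).t() / scale, dim=-1)
        z_sa = z_cog + self.alpha_sa * (attn @ self.v_sa(z_cog))    # affordance exploration
        w = self.word_proj(l_word)                                  # project l_word to dim 256
        attn = torch.softmax(self.q_ca(z_sa) @ self.k_ca(w).t() / scale, dim=-1)
        z_ca = z_sa + self.alpha_ca * self.ffn(attn @ self.v_ca(w)) # word-to-affordance mapping
        return z_ca                                                 # reshaped downstream to (40, 40, 256)
\end{verbatim}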
\begin{figure}[t] \centering \vspace*{-0.2in} \begin{tikzpicture}[inner sep = 0pt, outer sep = 0pt] \node[anchor=south west] (fnC) at (0in,0in) {\includegraphics[height=2.2in,clip=true,trim=0.2in 0.0in 0.2in 0.3in]{attention.png}}; \end{tikzpicture} \vspace*{-0.2in} \caption{Visualizations of the two attention mechanisms.} \label{fig:attention} \vspace*{-0.2in} \end{figure} \subsection{Decoder Module} The decoder predicts a task-oriented grasp pose based on $v_{fag}$. Specifically, three consecutive bottleneck layers are first applied to output $v_{pred} \in \mathbb{R}^{640\times640\times16}$. The grasp prediction is then divided into three parallel tasks, each solved by appending a prediction head to $v_{pred}$. The quality head outputs a heatmap $M_{q} \in \mathbb{R}^{H\times W}$, measuring the probability (between 0 and 1) that a grasp at each spatial location $(x, y)$ satisfies the task instruction. The other two heads output the orientation map $M_{\theta} \in \mathbb{R}^{ H\times W\times l}$ and the opening width map $M_{w} \in \mathbb{R}^{H\times W}$, respectively. A task instruction may correspond to multiple ground truth grasp poses. Here, GraspCLIP only outputs the top-1 prediction during inference, by first taking the argmax over the smoothed $M_{q}$ and then querying the other two maps: \begin{align*} x^{*},y^{*} = \argmax_{x, y} \ & \text{Gaussian}(M_{q}) \\ \theta^{*} = \argmax_{dim=2}M_{\theta} |_{(x^{*},y^{*})} , \ \ &w^{*} = \text{Gaussian}(M_{w})|_{(x^{*}, y^{*})} \end{align*} The output grasp pose $g^{*}$ is constructed as: \begin{equation*} g^{*} = \{x^{*}, y^{*}, \theta^{*}, w^{*}, h\} \end{equation*} \subsection{Implementation Details}\label{ID} The loss function consists of a location loss, an orientation loss, and an opening width loss: \begin{equation*} \begin{aligned} &\mathcal{L}(M_{q}, M_{\theta}, M_{w},\hat{M_{q}}, \hat{M_{\theta}}, \hat{M_{w}}) = \beta * \mathcal{L}_{loc}(M_{q}, \hat{M_{q}}) \\ &+ \gamma * \mathcal{L}_{ori}(M_{\theta}, \hat{M_{\theta}}) + \mathcal{L}_{width}(M_{w}, \hat{M_{w}}) \end{aligned} \end{equation*} where each $\mathcal{L}_{(\cdot)}$ denotes a binary cross-entropy loss, and $\hat{M_{q}}$, $\hat{M_{\theta}}$, and $\hat{M_{w}}$ are the ground truth maps. The model is trained on a single NVIDIA RTX 3090 GPU for 500 epochs with a batch size of 1. We use Adam \cite{kingma2014adam} as the optimizer with an initial learning rate of $10^{-4}$ and weight decay. During training, the two CLIP pre-trained encoders are frozen. At each iteration, we randomly sample an input-output tuple $d_j$ from $\mathcal{D}$. \begin{figure}[t] \centering \begin{tikzpicture}[inner sep = 0pt, outer sep = 0pt] \node[anchor=south west] (fnC) at (0in,0in) {\includegraphics[height=1.12in,clip=true,trim=0in 0in 0in 0in]{data_new-min.png}}; \end{tikzpicture} \vspace*{-0.18in} \caption{Data generation: (a) A human operator teleoperates the robot arm to stable grasp poses and assigns task labels. (b) 5D grasp poses on a single object are collected. (c) A multi-object scene with ground truth task-oriented grasp annotations is generated automatically.} \label{fig:data} \vspace*{-0.25in} \end{figure} \section{Conclusion} \label{conclusion} To address the challenge of task grounding in addition to object grounding in the context of VLG, GraspCLIP is proposed to enable task-oriented grasp prediction from visual-language inputs. Evaluation on a custom dataset demonstrates that GraspCLIP outperforms established baselines that perform object grounding only.
To further validate its effectiveness, we deploy GraspCLIP on an assistive robotic arm for grasping previously unseen kitchen tools given a task specification. As future directions, we consider the incorporation of interactive language correction into the GraspCLIP framework, as well as an extension of GraspCLIP to support 6 DoF dexterous VLG. \section{Dataset}\label{data} To evaluate the performance of our design and established methods, a dataset $\mathcal{D}$ is required. Since there are no such datasets in the context of VLG, we build a custom one in two steps: multi-object scene synthesis and template-based instruction generation. For multi-object scene synthesis, we first crowdsource a list of object categories and tasks from four highly cited VG datasets: ContactDB\cite{brahmbhatt2019contactdb}, SG14000\cite{liu2020cage}, TOG-Net\cite{fang2020learning}, and TaskGrasp\cite{murali2020same}. Note that we are particularly interested in kitchen tools, as they are frequently manipulated by an assistive robot. The full object set and task set can be found in our presentation video. Then, a human operator teleoperates a robot arm (see Fig.\ref{fig:data}(a)) to collect single-object grasping data (see Fig.\ref{fig:data}(b)). Each grasp may afford one or more tasks. Teleoperation allows for the extraction of tool grasping skills from real human behavior, without the significant risk of a sim-to-real gap that may arise when using simulated data. Additionally, an assistive robot usually perceives more than one object (i.e., target objects + distractors) in real-world applications. Therefore, similar to domain randomization \cite{tobin2017domain}, we randomly drop single-object data on synthetic backgrounds (see Fig.\ref{fig:data}(c)) to generate multi-object scenes with ground truth grasp annotations. This process is done automatically. We apply a template-based instruction generation strategy to efficiently create $I_j$ at each iteration. Eleven templates are adapted from \cite{nguyen2020robot}. Similar to \cite{chen2021joint}, we further augment the templates with QuillBot, an automatic paraphraser, to enrich the vocabulary and grammatical diversity. There are two types of instructions: (1) task with a target object (e.g., ``Use \textit{obj} to \textit{task}''), and (2) task only (e.g., ``Use something to \textit{task}''). Finally, \textit{obj} and \textit{task} are substituted with a target object category label and a target task label, respectively. Tab.\ref{tab:dataset} provides additional details of $\mathcal{D}$. \section{Results} \label{exp} \subsection{Result of Perception Experiments} The results of the perception experiments are reported in Tab.\ref{tab:sim_result}. Scene- and instance-level generalization focus on cases with limited variance relative to the training data. The \textbf{scene-level generalization} experiment creates novel scene layouts with seen categories, tasks, and category-task combinations. TAG randomly explores scenes without considering task instructions, setting a lower performance bound of 38.77\%. By incorporating CLIP, CLIP+TAG achieves a minor performance boost compared to TAG. Although CLIP contains rich priors for grounding high-level concepts, it has a limited ability to inform low-level grasping directly. OG+TAG can accurately ground the task instruction to the bounding box of the target object but is unable to predict task-oriented grasps. CLIPort-S is a competitive baseline, achieving a relatively high success rate of 80.19\%.
It still falls behind GraspCLIP since it does not explicitly consider the fine-grained effects of tasks on object grasping. GraspCLIP outperforms all baselines on scene-level generalization. In terms of \textbf{instance-level generalization}, all methods suffer a performance drop due to intra-category variance. Still, GraspCLIP achieves the best performance at 85.73\%. \textbf{Category-level generalization} aims to transfer knowledge learned from familiar tool categories to novel ones. For example, having been taught that ``\textit{cup}'' has the function of ``\textit{pour}'', the robot can recognize that the novel category ``\textit{bowl}'' affords the same function. Only task-only language instructions are used in this experiment. Any object that affords the task counts as a target object, which increases the probability that TAG predicts correct grasps. OG+TAG struggles to detect novel categories and performs poorly on this evaluation. GraspCLIP outperforms the second-best CLIPort-S by 4.43\%. \textbf{Category-task-level generalization} is a challenging though practical evaluation. A user may teach the category-task pairs \textit{spoon}-\textit{scoop} and \textit{ladle}-\textit{dispense}, and the robot should be able to mix the knowledge from the two sources (i.e., \textit{spoon}-\textit{dispense}, \textit{ladle}-\textit{scoop}). GraspCLIP outperforms all the baselines. The performance on these two generalizations demonstrates the superiority of GraspCLIP in generalizing to relatively significant variances. \begin{figure*}[t] \centering \vspace*{-0.2in} \begin{tikzpicture}[inner sep = 0pt, outer sep = 0pt] \node[anchor=south west] (fnC) at (0in,0in) {\includegraphics[height=2.8in,clip=true,trim=0in 0in 0in 0.0in]{qualitative.png}}; \end{tikzpicture} \vspace*{-0.1in} \caption{Qualitative results of real-robot experiments. The grasps predicted by GraspCLIP and OG+TAG are represented by red-blue rectangles and yellow-green rectangles, respectively. The green boxes represent the bounding boxes detected by OG+TAG.} \label{fig:qualitative} \vspace*{-0.10in} \end{figure*} \subsection{Result of Real-Robot Experiments} Real-robot experiments reveal the performance gap between perception and physical grasping. Tab.\ref{tab:real_result} presents the quantitative results, and Fig.\ref{fig:qualitative} illustrates the qualitative results. Although the baselines are not statistically evaluated in the real-robot experiments, we provide the qualitative results of a representative baseline in Fig.\ref{fig:qualitative} for comparison. GraspCLIP achieves a high success rate in uncluttered or lightly cluttered scenes (Fig.\ref{fig:qualitative}(a)-(c)). One limitation is the low grasping DoF. While humans can perform 6 DoF grasping, such as grasping along the $x$ or $y$ axis, GraspCLIP can only predict planar grasps (i.e., along the $z$ axis). We plan to extend our framework to 6 DoF dexterous VLG. We deliberately create complex scene layouts in the four-object setup to further gauge the limits of the implementation. The model performs reasonably well when the structure of the target object remains fairly visible. Some heavily cluttered layouts, such as stacking and containing (Fig.\ref{fig:qualitative}(d)-(e)), are hard to deal with, and the robot fails in some of these cases. Fig.\ref{fig:qualitative}(e) shows a failure case, highlighted in the red box on the rightmost side. A potential solution to this problem could involve equipping GraspCLIP with an active exploration module as in \cite{liu2020interactive}. Another type of failure stems from language grounding errors.
In this case, the robot grasps a distractor with the same function as the target object. For example, when the task instruction is ``Use the \textit{laundry brush} to \textit{clean}", the robot falsely grounds the instruction to a sponge brush next to the laundry brush. Interactive correction with natural language \cite{sharma2022correcting } could fix this error. \begin{table}[t] \renewcommand\arraystretch{1.4} \centering \begin{tabular}{ccccc} \bottomrule \multirow{2}{*}{Method} & \multicolumn{4}{c}{Generalization Type} \\ \cline{2-5} & Scene & Instance & Category & Category-Task \\ \midrule TAG & 38.77 & 37.87 & 42.38 & 35.10 \\ CLIP+TAG & 43.76 & 43.43 & 41.56 & 44.72 \\ OG+TAG & 63.42 & 62.40 & 56.35 & 59.66 \\ CLIPort-S & 80.19 & 75.01 & 78.80 & 75.99 \\ \midrule GraspCLIP (Ours) & \textbf{88.02} & \textbf{85.73} & \textbf{83.23} & \textbf{82.54} \\ \bottomrule \end{tabular} \caption{Result of Perception Experiments (\%)} \label{tab:sim_result} \vspace*{-0.3in} \end{table} \begin{table}[t] \centering \renewcommand\arraystretch{1.5} \begin{tabular}{ccccc} \bottomrule \multirow{2}{*}{Num of Objects} & \multicolumn{4}{c}{Success} \\ \cline{2-5} & Perc & Plan & Act & Overall \\ \hline $N=1$ & 19/20 & 19/20 & 19/20 & 95.00\% \\ $N=2$ & 19/20 & 18/20 & 18/20 & 90.00\% \\ $N=4$ (light clutter) & 18/20 & 18/20 & 17/20 & 85.00\% \\ $N=4$ (heavy clutter) & 15/20 & 14/20 & 14/20 & 70.00\% \\ \bottomrule \end{tabular} \caption{Result of Real-Robot Experiments} \label{tab:real_result} \vspace*{-0.4in} \end{table} \begin{table}[t] \renewcommand\arraystretch{1.4} \centering \begin{tabular}{ccccc} \bottomrule \multirow{2}{*}{Method} & \multicolumn{4}{c}{Generalization Type} \\ \cline{2-5} & Scene & Instance & Category & Category-Task \\ \midrule COG-only & 72.38 & 70.87 & 72.68 & 63.58 \\ FAG-only & 77.48 & 74.07 & 77.93 & 73.14 \\ FAG-COG & 68.31 & 64.39 & 58.37 & 61.93 \\ \midrule RN50+BERT & 82.68 & 78.49 & 82.80 & 80.74 \\ \midrule GraspCLIP (Ours) & \textbf{88.02} & \textbf{85.73} & \textbf{83.23} & \textbf{82.54} \\ \bottomrule \end{tabular} \caption{Result of Ablation Studies (\%)} \label{tab:as_result} \vspace*{-0.4in} \end{table} \subsection{Ablation Study} To gain further insights into the effectiveness of each component, we conduct two sets of ablation studies. The result is reported in Tab.\ref{tab:as_result}. \subsubsection{TOF Module Structure} \ \newline \indent The proposed TOF module consists of a COG module and a FAG module. To investigate their effectiveness, we test three ablated versions shown in the first three rows of Tab.\ref{tab:as_result}. COG-only uses two consecutive COG modules without task-level grounding, and FAG-only uses one FAG module without object grounding. Using two FAG modules gives a meaningless result, thus not reported. We observe that FAG-only performs consistently better than COG-only. This suggests that (1) task grounding is more critical for task-oriented grasp prediction, and (2) FAG is able to perform object localization to some extent. GraspCLIP outperforms both COG-only and FAG-only by a large margin. FAG-COG reverses the order of two grounding modules. The significant performance gap between FAG-COG and COG-FAG (i.e., GraspCLIP) justifies our coarse-to-fine design. \subsubsection{CLIP-Based Encoders} \ \newline \indent To highlight the effectiveness of CLIP-based encoders over alternative pre-trained models, we substitute CLIP pre-trained encoders with an ImageNet-pretrained ResNet50 with BERT, denoted as RN50+BERT. 
In the fourth row of Tab.\ref{tab:as_result}, we observe that CLIP-based encoders consistently improve the performance across four generalization types, validating their effectiveness. \section{Experimental Setup} \label{exp_setup} \subsection{Perception Experiments} \label{perception_exp_setup} We evaluate the proposed method and baselines under four different test settings. The performance is measured by the ability to generalize to novel scenes, instances, object categories, and category-task combinations. For each level of generalization, the data is split into 80\% for training and 20\% for testing. Manually annotated grasps are used to evaluate models trained on $\mathcal{D}$. Four established baselines retrained on $\mathcal{D}$ are compared. The details are as follows: \begin{itemize} \item \textbf{TAG} represents task-agnostic VG methods \cite{chu2018real, mousavian20196, mahler2017dex} that focus on grasp stability and ignore task suitability. It receives only visual inputs and randomly ranks each candidate's task suitability. We remove the language component of GraspCLIP to model TAG. \item \textbf{CLIP+TAG} is a naive combination of CLIP and a task-agnostic grasp detector. It is originally introduced in \cite{gadre2022clip} for object localization. CLIP+TAG follows a two-stage pipeline where the task instruction is first grounded via Grad-CAM\cite{selvaraju2017grad} at the pixel level, followed by a standalone task-agnostic grasp detector. \item \textbf{OG+TAG} represents methods \cite{hatori2018interactively, shridhar2018interactive,zhang2021invigorate} focusing on grounding natural language to coarse object-centric representations. Specifically, we first ground the task instruction to the object bounding box with the highest matching score. A standalone task-agnostic grasp detector is then applied to that object. \item \textbf{CLIPort-S} is an adapted version of the state-of-the-art visual-language manipulation and grasping framework CLIPort \cite{shridhar2022cliport}. We only keep its semantic branch and drop its spatial branch since depth information is unavailable. CLIPort does not explicitly consider task constraints when predicting grasps on the target object. \end{itemize} \subsection{Real-Robot Experiments} To further validate the effectiveness in real-world robotic applications, we deploy GraspCLIP on a 7-DoF Kinova Gen3 robot arm equipped with a Robotiq 2-finger adaptive gripper. Test objects are selected from the same categories as the training data but unseen during training. Some test kitchen tools collected from our laboratory and the YCB dataset are shown in Fig.\ref{fig:ycb}. Here, we are only interested in revealing the gap between perception and execution. Therefore, the four baselines in Section \ref{perception_exp_setup} are not physically evaluated. Converting the predicted grasp pose from image space to the robot frame involves a sequence of transforms: \begin{equation*} g_{robot} = T_{RC}(T_{CI}(g_{img})) \end{equation*} where $g_{img}$ and $g_{robot}$ are the grasp poses in image space and in the robot frame, respectively. $T_{CI}$ transforms from 2D image space to the 3D camera frame, and $T_{RC}$ transforms from the camera frame to the robot frame. The experimental procedure is as follows: (1) a set of $N$ objects ($N\geq1$) is placed in the robot workspace; (2) a natural language instruction is sent to the robot; and (3) the robot uses GraspCLIP to predict a task-oriented grasp pose $g$ and execute it on the target object.
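For illustration, this transform chain can be sketched as follows. This is a minimal example assuming a calibrated pinhole camera and a hand-eye calibration matrix; the intrinsics, depth value, and variable names are illustrative rather than the exact values used on the platform.
\begin{verbatim}
import numpy as np

K = np.array([[615.0,   0.0, 320.0],   # assumed pinhole intrinsics (fx, fy, cx, cy)
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_RC = np.eye(4)                       # camera-to-robot extrinsics from hand-eye calibration

def pixel_to_camera(u, v, depth):
    # T_CI: back-project a pixel (u, v) with depth (meters) into the 3D camera frame
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def camera_to_robot(p_cam):
    # T_RC: map a 3D point from the camera frame to the robot base frame
    p_h = np.append(p_cam, 1.0)        # homogeneous coordinates
    return (T_RC @ p_h)[:3]

# g_img = (u, v, theta): planar grasp center in pixels plus an in-plane rotation
u, v, theta, depth = 330.0, 250.0, np.pi / 4, 0.62
grasp_position_robot = camera_to_robot(pixel_to_camera(u, v, depth))
# The gripper yaw is obtained by composing theta with the camera-to-robot rotation.
\end{verbatim}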
\begin{figure}[t] \centering \begin{tikzpicture}[inner sep = 0pt, outer sep = 0pt] \node[anchor=south west] (fnC) at (0in,0in) {\includegraphics[height=1.9in,clip=true,trim=0in 0in 0in 0in]{ycb-min.jpg}}; \end{tikzpicture} \caption{A subset of the test objects collected from the laboratory and the YCB dataset.} \label{fig:ycb} \vspace*{-0.25in} \end{figure} \subsection{Evaluation Metrics} Perception experiments examine the correctness of output grasps, while real-robot experiments test how well the robot physically interacts with objects. Two sets of evaluation metrics are adopted accordingly: \begin{itemize} \item \textit{Perception experiments}: Following previous works \cite{mahler2017dex}\cite{chu2018real}\cite{wang2021high}, we consider a predicted grasp $g$ correct if two criteria are met: (1) the difference between the angle of $g$ and that of a ground-truth grasp pose $\hat{g}$ is less than $30^{\circ}$ and (2) the Jaccard index (similar to IOU) between $g$ and $\hat{g}$ is greater than 0.25. The Jaccard index $J$ is defined as: \begin{equation*} J(g, \hat{g}) = \frac{| \hat{g} \cap g|}{| \hat{g} \cup g|} \end{equation*} Here, we choose the top-1 grasp candidate. \item \textit{Real-robot experiments}: To systematically evaluate the performance in real-robot experiments, we divide the pipeline into three stages and record their statistics separately. The three stages are Perception ($Perc$), Planning ($Plan$), and Action ($Act$). A grasp is considered successful if the target object is grasped subject to the task requirement and lifted stably for three seconds by the robot. \end{itemize} \section{Introduction} Language provides a natural interface for task specification in unstructured environments such as kitchens and offices, complementing pure vision-based robotic frameworks \cite{mahler2017dex, mousavian20196, chu2018real}. Guided by natural language, an assistive robot is able to perform a wide range of household manipulation tasks using verbal instructions, such as ``Use the \textit{knife} to \textit{cut} the apple for me" and ``\textit{Clean} the mug with a \textit{brush}". The initial step in performing such tasks is to grasp the intended tool in a task-oriented manner. This necessitates that the robot both coarsely localizes the target object (i.e., object grounding) and comprehends which fine-grained object part to grasp for the intended task execution (i.e., task grounding). However, previous research on VLG, such as natural language object retrieval\cite{hatori2018interactively, shridhar2018interactive,zhang2021invigorate} and object rearrangement \cite{liu2022structformer}, focuses on grounding language instructions to coarse object-centric representations (e.g., bounding box, instance segmentation mask), while disregarding the fine-grained, task-oriented effects on object grasping. Fig.\ref{fig:concept}(a) illustrates an example of manipulating kitchen tools. The language instruction ``Use the \textit{knife} to \textit{cut} an apple" necessitates both grounding the target object ``\textit{knife}" and grounding the target task ``\textit{cut}" to the handle of the knife. Conversely, when the language instruction is ``\textit{Handover} the \textit{knife} to me", humans would choose a different grasp, holding the blade for the target task of ``\textit{handover}". It is clear from this example that a language instruction affects not only what object to grasp but also how the target object is grasped for the intended task execution.
We humans take this skill for granted, but it has not been explored in previous VLG research. Disregarding the fine-grained effects of tasks on grasp poses may result in task failures. For instance, handing over a knife by grabbing its handle may cause physical injury to the receiver. Furthermore, imprecise grasping of the knife handle may result in cutting failure. How, then, can we endow robots with the same ability to predict task-oriented grasps from visual-language inputs? \begin{figure}[t] \centering \begin{tikzpicture}[inner sep = 0pt, outer sep = 0pt] \node[anchor=south west] (fnC) at (0in,0in) {\includegraphics[height=3.6in,clip=true,trim=0in 0in 0in 0in]{main_new-min.png}}; \end{tikzpicture} \vspace*{-0.25in} \caption{(a) Task grounding and object grounding revealed in humans' grasping behavior. (b) GraspCLIP takes as input a visual scene observation $O$ of multiple objects and a task instruction $I$, and outputs a task-oriented grasp pose $g$.} \label{fig:concept} \vspace*{-0.3in} \end{figure} To answer this question, we propose GraspCLIP, which addresses task grounding in addition to object grounding to enable task-oriented grasp prediction. Fig.\ref{fig:concept}(b) shows an assistive robot operating in a kitchen environment. GraspCLIP takes as input a visual scene observation $O$ of multiple objects and a task instruction $I$, and outputs a task-oriented grasp pose $g$. GraspCLIP first leverages the visual-language model (VLM) CLIP\cite{radford2021learning}, pre-trained on large-scale internet data, to encode multi-modal inputs into a joint representation space. Then, to simultaneously achieve task grounding and object grounding, a two-stage, coarse-to-fine Task-Oriented Fusion (TOF) module is proposed to build hierarchical correspondences between visual observations and task instructions. This is in contrast to previous works, which have focused only on object grounding. In the last stage, a decoder predicts task-oriented grasp poses based on the instruction-conditioned representations generated by the previous stage. Evaluation on a custom dataset demonstrates the superiority of GraspCLIP over established baselines with object grounding only. We further validate its effectiveness in real-world applications on an assistive robotic arm platform for grasping previously unseen kitchen tools given the task specification. In summary, our contributions are as follows: \begin{itemize} \item To address the challenge of task grounding in addition to object grounding, we contribute GraspCLIP to enable task-oriented grasp prediction with visual-language inputs. \item To evaluate the task-oriented grasp prediction performance, we provide a custom dataset comprising 28 object categories, 96 instances, 38 household tasks, task-oriented grasp annotations, and template-based language instructions. \item A system is built to enable an assistive robotic arm to predict and execute task-oriented grasps guided by user language instructions. \end{itemize} \section*{ACKNOWLEDGMENT} \bibliographystyle{IEEEtran} \balance \section{Related Work} \label{related_works} Vision-based grasping (VG) has been a fundamental problem in robotics. With the rise of deep learning in recent years, VG has achieved significant advances. For example, Mahler et al. \cite{mahler2017dex} and Chu et al. \cite{chu2018real} use CNN-based networks to predict planar grasps from RGB-D images. Mousavian et al. \cite{mousavian20196} propose to generate 6 degree-of-freedom (DoF) grasp poses on point clouds with a variational autoencoder.
Most works in VG consider task-agnostic grasping, which finds stable grasp poses satisfying form and force closure. Failure to consider task constraints limits their usage in many application scenarios. To address this problem, some recent researches have proposed to merge language grounding into vision-based manipulation and grasping pipelines \cite{shridhar2020alfred, liu2021structformer, ahn2022can, shridhar2022cliport, chen2021joint, hatori2018interactively, shridhar2018interactive, zhang2021invigorate}. Conditioned on language, the robot can understand and execute a diverse range of VLG tasks. Hatori et al. \cite{hatori2018interactively} present the first system to resolve ambiguity in language instructions for object picking. Similarly, Shridhar et al. \cite{shridhar2018interactive} interactively pick objects using referring expressions. Built on top of \cite{hatori2018interactively} and \cite{shridhar2018interactive}, Zhang et al. \cite{zhang2021invigorate} address language-conditioned picking in the clutter. The above methods focus on grounding natural language to coarse object-centric representations such as bounding boxes and use off-the-shelf task-agnostic grasp detectors. They do not explicitly consider the fine-grained effects of tasks on object grasping. This effect is essential since a task instruction would affect not only what object to grasp but also how to grasp it for the subsequent task execution. Another problem is that they rely on deep learning models trained on small-scale, self-collected datasets or public datasets such as RefCOCO \cite{kazemzadeh2014referitgame}, limiting their generalization capability to novel scenes, instances, categories, and tasks. There has been a recent trend of building VLG pipelines based on large pre-trained models to improve the generalization capability. For example, SayCan\cite{ahn2022can} and CLIPort\cite{shridhar2022cliport} explore the power of large pre-trained models from natural language processing (NLP) and computer vision (CV) communities to build priors for robots efficiently. Ahn et al. \cite{ahn2022can} combine low-level skills with large language models (LLMs) \cite{brown2020language}\cite{chowdhery2022palm} to complete long-horizon, language-guided mobile manipulation tasks. Shridhar et al. \cite{shridhar2022cliport} present a CLIP \cite{radford2021learning} based imitation-learning agent trained for solving various language-specified tabletop tasks. Despite demonstrating a capacity to solve complex VLG tasks, they do not explicitly consider the fine-grained effects of tasks on object grasping. As a supplement to previous VLG researches, we address the challenge of task grounding in addition to object grounding to enable task-oriented grasp prediction with visual-language inputs.
{ "arxiv_id": "2302.14306", "language": "en", "timestamp": "2023-03-01T02:09:01", "url": "https://arxiv.org/abs/2302.14306", "yymm": "2302" }
\section{Introduction} ~\label{sec:intro} 3D understanding is of key importance in a wide range of applications including healthcare, medicine, entertainment, robotics, and human-machine interaction. Several 3D vision research problems (e.g., 3D point cloud classification~\cite{qi2017pointnet,qi2017pointnet++,wang2019dynamic}, detection~\cite{misra2021end}, and segmentation~\cite{qi2017pointnet++,thomas2019kpconv,wang2019dynamic}) have recently drawn much attention. However, obtaining 3D point cloud representations from the raw point clouds is challenging and often requires supervision, which causes high annotation costs. \begin{figure}[t!] \includegraphics[width=0.9\columnwidth]{figures/issues2-cropped.pdf} \caption{Motivation for CLR-GAM: a) motivation for Guided Feature Mapping, for better association b) motivation for Guided Augmentations, for better exploration of augmentation space} \label{fig:issues} \vspace{-0.3cm} \end{figure} As a result, self-supervised learning for 3D point cloud representations has witnessed much progress and can potentially improve sample efficiency and generalization for these 3D understanding tasks. Existing works are mainly based on generative models~\cite{achlioptas2018learning, han2019view,wu2016learning}, reconstruction~\cite{eckart2021self,han2019multi, li2018so, yang2018foldingnet, zhao20193d}, pretext task~\cite{wang2021unsupervised, poursaeed2020self, sauder2019self, hassani2019unsupervised, sun2021point, yang2021progressive, rao2020global}, and contrastive learning~\cite{ zhang2019unsupervised, sanghi2020info3d, xie2020pointcontrast,huang2021spatio, liu2021point, zhang2021self,du2021self}. Much progress has been made in recent contrastive learning-based methods. However, we observe the following two limitations. \noindent\textbf{Issue 1 (Contrast Ambiguity):} \textbf{a) GCA (Global Contrast Ambiguity).} With augmentations like cropping and nonrigid body transformation, the shape of an augmented object is entirely different from the original object, leading to ambiguity for contrastive learning. For instance, if we remove the back part of a "Chair" point cloud, the resulting point cloud could be similar in shape to a sample of the "Table" class, as shown in Figure~\ref{fig:issues}.a. It poses a challenge for contrastive learning based methods because they do not access class labels for training. \textbf{b) LCA (Local Contrast Ambiguity).} In addition, local feature contrasting techniques~\cite{xie2020pointcontrast,liu2021point} treat every other point's feature in the same point cloud as a negative. The drawback with this objective is that there are symmetries and similar shapes in an object that can have the same features. \noindent\textbf{Issue 2 (Curse of Dimensionality):} contrastive learning requires a variety of augmentations to learn discriminative 3D point cloud representations. However, searching over these high-dimensional augmentations is time-consuming and does not guarantee proper coverage with a dynamic limited number of samples. In this work, we introduce two novel modules, i.e., guided feature mapping (GFM) and guided augmentation (GA), to overcome the above limitations. We introduce the GFM module to associate features of the same structure between two augmented samples for effective feature association under heavy shape deformation. The feature contrasting is done at the object or global level, like most works, but with a tight coupling of local feature association. 
The GA module is introduced to efficiently explore the high-dimensional augmentation space with a dynamically limited number of samples while maintaining diverse coverage. We conducted extensive experiments to validate the effectiveness of the proposed contrastive learning framework. Specifically, we benchmark three downstream tasks, i.e., classification, few-shot learning, and object part semantic segmentation. We obtain state-of-the-art performance on the three tasks, and extensive ablative studies are conducted to justify the design choices. \noindent\textbf{Our main contributions:} i) We propose Guided Augmentation (GA) and Guided Feature Mapping (GFM) for learning discriminative 3D point cloud representations. ii) Our proposed approach achieves state-of-the-art performance on three downstream tasks, i.e., object classification, few-shot learning, and part segmentation. iii) Extensive ablative studies are presented to justify our design choices. \section{Related Works} \subsection{Contrastive Learning on Point Clouds} \vspace{-0.3cm} \begin{table}[h!] \centering \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{ c|c|c|c|c } Method&\multicolumn{2}{c|}{Feature Contrast}&\multicolumn{2}{c}{Contrast Ambiguity}\\\hline Contrastive&global contrast & local contrast& GCA & LCA \\ \hline PointContrast~\cite{xie2020pointcontrast}&&\checkmark& &\checkmark\\ PointDisc~\cite{liu2021point}&&\checkmark &&\checkmark\\ DepthContrast~\cite{zhang2021self}&\checkmark& &\checkmark&\\ STRL~\cite{huang2021spatio}&\checkmark& &\checkmark&\\ CrossPoint~\cite{afham2022crosspoint}&\checkmark& &\checkmark&\\\hline \textbf{CLR-GAM (ours)}&\checkmark& & & \\ \end{tabular} } \caption{Comparison of existing works and the contrast ambiguity problems they suffer from} \label{tbl:comp_related_works} \vspace{-0.2cm} \end{table} Following the recent success of self-supervised contrastive learning for images, recent works~\cite{du2021self, huang2021spatio, liu2021point, sanghi2020info3d, xie2020pointcontrast, zhang2019unsupervised, zhang2021self} explore contrastive learning for point clouds. PointContrast~\cite{xie2020pointcontrast} applies a contrastive loss to pointwise features extracted from two randomly augmented views of a point cloud to learn invariant features. PointContrast uses local feature contrasting, whereas in our approach, we tightly couple local feature association with object-level/global feature contrasting. Most importantly, the features of different points in the same object can be similar because of symmetries and similar shapes, but PointContrast treats every other point's feature in the same point cloud as a negative and therefore suffers from LCA, as shown in Table~\ref{tbl:comp_related_works}. DepthContrast~\cite{zhang2021self} performs global-level contrasting with two encoders (a voxel encoder and a point encoder) but does not address GCA. Zhu et al.~\cite{zhu2021improving} use a feature memory bank~\cite{he2020momentum} to store negatives and positives for hard sample mining. Huang et al.~\cite{huang2021spatio} propose STRL, which applies spatial augmentation to temporally correlated frames of a sequential point cloud dataset and performs contrastive learning. Recently, Afham et al.~\cite{afham2022crosspoint} proposed CrossPoint, which learns cross-modal representations (images and point clouds) using contrastive learning. All these methods rely on contrastive learning of the encoded global features of point clouds, ignoring the structural deformations that lead to intraclass confusion (GCA).
Recently, the authors of PointDisc~\cite{liu2021point} apply a point discrimination loss within an object to enforce similarity in features for points within a local vicinity. PointDisc makes the geometric assumption of a fixed radius for obtaining positives from the encoded features of the same point cloud and also suffers from LCA, similar to PointContrast. In this work, we introduce the GFM to identify structurally similar features between two different augmentations of the same point cloud without any geometric assumptions. We empirically demonstrate the effectiveness of the proposed GFM for learning discriminative 3D representations on three different downstream tasks. \vspace{-0.2cm} \subsection{Guided Augmentation} Several guided augmentation approaches for the image modality~\cite{charalambous2016data, hauberg2016dreaming, rogez2016mocap, peng2015learning, dixit2017aga} have been shown to synthesize varied, realistic samples for training. Generalizing an algorithm to unseen test samples, which are expected to exhibit wide variations, is an important problem. In the context of human posture, \cite{charalambous2016data} generates synthetic videos for gait recognition, and \cite{rogez2016mocap} augments images with 2D poses using 3D MoCAP data for pose estimation. To improve image-based detection, \cite{peng2015learning, su2015render} render 3D CAD models with variable texture, background, and pose to generate synthetic images. Hauberg et al.~\cite{hauberg2016dreaming} learn class-specific transformations (diffeomorphisms) from external data, whereas another work~\cite{miller2000learning} synthesizes new images using an iterative process. Since these existing works are task-specific and designed for supervised learning on the image modality, they require class labels during training. AGA~\cite{dixit2017aga} extends augmentation to the feature space to be class agnostic, but it requires a huge corpus of annotated data with class labels for pretraining. Because those approaches cannot be directly adapted to self-supervised point cloud learning, we instead draw on exploration strategies from reinforcement learning for unsupervised guided augmentation. \subsection{Exploration of High Dimensional Spaces} Efficient exploration in high-dimensional spaces is a fundamental problem in reinforcement learning. Several strategies for selecting new states have been proposed, including epsilon-greedy, which selects random states with probability epsilon~\cite{mnih2015human}, upper confidence bounds~\cite{auer2002using}, Boltzmann exploration~\cite{watkins1989learning, sutton1990integrated}, which applies a softmax over the utility of actions, and Thompson sampling~\cite{agrawal2012analysis}. The motivation or curiosity to explore new states is coined intrinsic motivation~\cite{oudeyer2008can}, which is adapted in~\cite{bellemare2016unifying, haber2018learning, houthooft2016vime, oh2015action, ostrovski2017count, pathak2017curiosity, stadie2015incentivizing} as an intrinsic reward that quantifies how different a new state is from already explored states. Some existing methods~\cite{haber2018learning, houthooft2016vime, oh2015action, pathak2017curiosity, stadie2015incentivizing} use prediction error as an intrinsic reward, while others use count-based techniques~\cite{ostrovski2017count, bellemare2016unifying}. However, computing the intrinsic reward with function approximation is slow to adapt and is not efficient enough for contrastive learning.
In this work, we introduce a guided augmentation mechanism for efficient exploration of new states using a memory-based module motivated by~\cite{badia2020never}. Badia et al. construct an episodic memory-based intrinsic reward using k-nearest neighbors over the explored states to train the directed exploratory policies. \section{Methodology} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{figures/block_diagram.pdf} \vspace{-0.3cm} \caption{The proposed CLR-GAM framework with guided augmentation (GA) and guided feature mapping (GFM). $\otimes$ is the augmentation operator, $\odot$ is the indexing operator and $S_{12}$ is the structural index mapping.} \label{fig:motivation} \vspace{-0.6cm} \end{figure*} \subsection{Preliminaries and Notation} We denote a point cloud as $P_i$, which consists of an unordered set of points $\mathbf{x}_{j=1:n}$ and $\mathbf{x}_j\in \mathbb{R}^3$, where the parameter $n$ is the number of points, and a point $\mathbf{x}_j$ is in 3D coordinate space. A point cloud $P_i$ can be augmented by changing scale $\mathbf{a}^S_k\in \mathbb{R}^3$, translation $\mathbf{a}^T_k\in \mathbb{R}^3$, rotation $\mathbf{a}^R_k\in \mathbb{R}^3$, cropping $\mathbf{a}^C_k$, and jittering $\mathbf{a}^J_k$. The combined set of the above operations is denoted as $\mathbf{a}_{k}$, where $\mathbf{a}_{k} = [\mathbf{a}^C_k, \mathbf{a}^S_k, \mathbf{a}^R_k, \mathbf{a}^T_k, \mathbf{a}^J_k]$. Given a point cloud $P_i$, we apply the order defined in $\mathbf{a}_{k}$ to obtain an augmented point cloud $P^k_i$. In the remaining of this paper, we use $i,j,k$ as the index of a point cloud $P_i \in \mathbb{R}^{n\times3}$ and the corresponding encoded features $F_i \in \mathbb{R}^{n\times d}$, a point in point cloud $x_j = P_i(j)\in \mathbb{R}^{1\times3}$ and a row of the encoded features $F_i(j) \in \mathbb{R}^{1\times d}$, and an augmentation operation $\mathbf{a}_{k}$, respectively. Note that the parameter $n$ is the number of points in a point cloud. \vspace{-0.2cm} \subsection{Framework} The detailed architecture of the CLR-GAM framework, a contrastive learning-based approach with the proposed GA and GFM modules, is depicted in Figure~\ref{fig:motivation}. We briefly introduce the overall contrastive learning algorithm in this section. First, a point cloud $P_i$ is transformed into $P^1_i$ and $P^2_i$ by applying two augmentation operations $\mathbf{a}_1$ and $\mathbf{a}_2$. We utilize a Siamese architecture with shared weights for feature extraction. In this work, we utilize PointNet (a MLP based method)~\cite{qi2017pointnet} and DGCNN (a graph convolution-based method)~\cite{wang2019dynamic} to extract features that are invariant to the input order. The augmented point clouds $P^1_i, P^2_i \in \mathbb{R}^{n\times3}$ are encoded into latent space $F_i^1, F_i^2 \in \mathbb{R}^{n\times d}$, respectively. The parameter $n$ is the number of points, and $d$ is the feature dimension. The augmented point clouds $P^1_i$, $P^2_i$ could contain different structures, while both point clouds originate from the same point cloud $P_i$. To ensure an effective feature association between $F^1_i$ and $F^2_i$, we introduce the Guided Feature Mapping (GFM) module to associate the features that belong to the same structure between two augmented point clouds. The feature $F^{1}_i$ is mapped to $F^{12}_i$ to entail similar structural features when $F^{2}_i$ is considered. 
The features $F^{12}_i$ and $F^2_i$ are pooled and projected into the latent space, resulting in $z^1_i$ and $z^2_i$, respectively. We apply a contrastive loss to enforce that the latent distance between features of the same point cloud (positives) is smaller than the distance between features from different point clouds (negatives) in a minibatch. In addition, contrastive learning heavily relies on the quality of augmentation. An efficient strategy for exploring the augmentation space is indispensable. We introduce a guided augmentation search to explore various augmentations efficiently, motivated by~\cite{badia2020never}. \textbf{a) Guided Augmentation:} \label{sec:ga} Augmentation is the key to the success of self-supervised contrastive learning. We hypothesize that if we can efficiently identify a wide range of informative augmentations, a discriminative representation can be learned. Existing approaches apply random sampling in augmentation spaces, which leads to ineffective augmentation and a high computational burden. Thus, we utilize a dynamic and efficient exploration strategy commonly used in reinforcement learning to mitigate this limitation. The ranges of each dimension of rotation $\mathbf{a}^R$, translation $\mathbf{a}^T$, and scaling $\mathbf{a}^S$ are $[0, 2\pi)$ radians, $[-1 , 1]$ meters, and $[0.5 , 1]$, respectively. Since the jittering and cropping operations are point specific, we ignore them in guided augmentation for simplicity. Specifically, motivated by~\cite{badia2020never}, we utilize a memory bank $M$ to save explored augmentation samples $\mathbf{a}_m$, where $m$ is the index of a slot. The goal is to ensure that a new sample is different from the explored samples. It is worth noting that this behavior is hard to obtain when just the average $L$-norm distance is used to select novel augmentations. To start, we randomly sample $N$ augmentations $\hat{\mathbf{a}}_{k=1:N}$ from the augmentation space. We then compute the distance of each new sample $\hat{\mathbf{a}}_k$ from all the explored samples $\mathbf{a}_m$ in the memory bank; this distance is used to evaluate the novelty of a sample. A novel augmentation $\mathbf{a}^*_k$ is identified using equation~\ref{eq:inverse_dirac}. \begin{equation} \mathbf{a}^*_k=\arg\max_{\hat{\mathbf{a}}_k} \frac{1}{\sqrt{\sum_{m \in M} K(\mathbf{a}_m , \hat{\mathbf{a}}_k)}+c} \label{eq:inverse_dirac} \end{equation} where $K(\mathbf{a}_m, \mathbf{a}_k) =\frac{\epsilon}{d(\mathbf{a}_m,\mathbf{a}_k)+\epsilon}$. The distance function $d$ between two augmentations is the $L_2$-norm. The parameters $c,\epsilon$ are small values added for numerical stability. The memory bank is updated with the selected novel augmentation $\mathbf{a}^*_k$. This selection is performed twice for each point cloud $P_i$ in an iteration to obtain two novel augmentations $\mathbf{a}_1, \mathbf{a}_2$, which are applied to the input point cloud $P_i$, as shown in Figure~\ref{fig:motivation}. Since rotation angles of $2\pi$ and $0$ are identical in angular space, we utilize an angular distance measure, i.e., $d_R(\mathbf{a}^R_m,\mathbf{a}^R_n)= \sum ( 0.5 - |\,|\mathbf{a}^R_m - \mathbf{a}^R_n| - 0.5|)$, instead of the $L_2$ distance for rotations.
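As a concrete illustration, a minimal sketch of this guided selection is given below, assuming augmentation parameters already normalized to $[0,1]$ (rotation, translation, and scale, three dimensions each); the candidate count, the constants, and the unweighted combination of the distance terms are illustrative simplifications.
\begin{verbatim}
import numpy as np

EPS, C = 1e-3, 1e-3
memory = [np.random.rand(9)]   # explored samples a_m: [rotation(3), translation(3), scale(3)]

def distance(a, b):
    # circular distance on the three rotation dimensions, L2 on the rest
    rot = np.sum(0.5 - np.abs(np.abs(a[:3] - b[:3]) - 0.5))
    rest = np.linalg.norm(a[3:] - b[3:])
    return rot + rest

def novelty(candidate):
    # inverse of the summed kernel K(a_m, a_k) = eps / (d + eps) over the memory bank
    k = sum(EPS / (distance(m, candidate) + EPS) for m in memory)
    return 1.0 / (np.sqrt(k) + C)

def select_novel_augmentation(num_candidates=32):
    candidates = np.random.rand(num_candidates, 9)
    best = candidates[np.argmax([novelty(c) for c in candidates])]
    memory.append(best)        # update the memory bank with the selected augmentation
    return best

a1, a2 = select_novel_augmentation(), select_novel_augmentation()
\end{verbatim}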
To be consistent with the different scales and ranges of the augmentations, we normalize each augmentation to $[0,1]$ before computing the total distance $d$ as shown in equation~\ref{eq:total_distance}, where $\alpha_{R}$, $\alpha_{T}$, and $\alpha_{S}$ are the weights for the three distance terms. \begin{equation} \begin{split} d(\mathbf{a}_m,\mathbf{a}_n)=&\alpha_R d_R(\mathbf{a}^R_m,\mathbf{a}^R_n) +\alpha_T||\mathbf{a}^T_m-\mathbf{a}^T_n||_2\\&+\alpha_S||\mathbf{a}^S_m-\mathbf{a}^S_n||_2 \label{eq:total_distance} \end{split} \end{equation} \textbf{b) Guided Feature Mapping:} \label{sec:gfm} To learn discriminative point cloud representations, it is crucial to project features with similar structural characteristics for contrastive learning. Existing methods may fail to identify the structural similarity between two augmented point clouds because certain augmentations (e.g., cropping, scaling) can lead to heavy deformations, such that an augmented point cloud has a shape completely different from its original class and similar to a different class. We observe that when both augmentations $\mathbf{a}_1, \mathbf{a}_2$ contain crop operations, there is very limited structural similarity between the augmented point clouds. Therefore, we exclude the crop augmentation $\mathbf{a}^C_1$ from $\mathbf{a}_1$, whereas $\mathbf{a}_2$ uses all augmentations, i.e., rotation, translation, scaling, cropping, and jittering. Note that $\mathbf{a}^R_k,\mathbf{a}^T_k,\mathbf{a}^S_k$ are invertible operations as they are applied to the whole point cloud. The jittering operation $\mathbf{a}^J_k$ is point-specific but still invertible. On the other hand, the cropping operation $\mathbf{a}^C_k$ is not invertible as information is lost. An invertible augmentation operation can be written as $P_i= (\mathbf{a}_1)^{-1}\otimes P^1_i$, where $P^1_i$ is an augmented point cloud, $P_i$ is the original point cloud, and $\otimes$ denotes the augmentation operator. The equation holds because the augmentation $\mathbf{a}_1$ does not contain a cropping operation. In contrast, inverting the augmentation of $P^2_i$ yields $P^C_i= (\mathbf{a}_2)^{-1}\otimes P^2_i$, a cropped point cloud; the crop operation is ignored in the inverse operation with $\mathbf{a}_2$, as it is not invertible. The point order and structures of these two inverted point clouds cannot be directly associated, even when they contain the same number of points. The closest-point association mapping $S_{12}$ between the inverted point clouds of $P^1_i$ and $P^2_i$ is therefore computed using equation~\ref{eq:association}: for every point of $P^C_i$ with index $j$, the structural index mapping $S_{12}$ stores the index of the closest point in $P_i$. \begin{equation} S_{12}(j)= \arg\min_q || P^C_i(j)-P_i(q)||_2 \label{eq:association} \end{equation} The operators $P_i(\cdot)$ and $F_i(\cdot)$ denote indexing operations into the point cloud and the feature set, respectively. The guided mapped feature $F^{12}_i$ is obtained according to $F^{12}_i= F^{1}_i(S_{12})$. The feature $F^{12}_i$ is projected to $z^1_i$ using the feature projection module after pooling; the feature projection module is an MLP that reduces the dimensionality of the features. Similarly, $F^2_i$ is projected to $z^2_i$. The contrastive loss~\cite{chen2020simple} is utilized to compute the similarity between positives ($z^1_i, z^2_i$) and negatives from the minibatch.
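A minimal sketch of this guided feature mapping is given below, assuming numpy arrays, per-point features already computed by the encoder, and a forward augmentation of the form $p' = R(s\,p) + t$; all names and shapes are illustrative.
\begin{verbatim}
import numpy as np

def invert_augmentation(points, R, t, s):
    # undo translation, rotation, and scale (crop and jitter are ignored here)
    return ((points - t) @ np.linalg.inv(R).T) / s

def guided_feature_mapping(P1, F1, P2, a1, a2):
    # Return F1 reordered so that row j corresponds structurally to row j of F2.
    P_full = invert_augmentation(P1, *a1)        # approximates the original P_i
    P_crop = invert_augmentation(P2, *a2)        # P^C_i: inverted but still cropped view
    # S_12(j): index of the closest point in P_full for every point j of P_crop
    dists = np.linalg.norm(P_crop[:, None, :] - P_full[None, :, :], axis=-1)
    S12 = np.argmin(dists, axis=1)
    return F1[S12]                               # F^{12}_i = F^1_i(S_12)

# Random stand-ins for the encoder outputs; the cropped view has fewer points.
n, d = 1024, 64
P1, F1 = np.random.rand(n, 3), np.random.rand(n, d)
P2 = np.random.rand(700, 3)
identity = (np.eye(3), np.zeros(3), 1.0)
F12 = guided_feature_mapping(P1, F1, P2, identity, identity)   # shape (700, d)
\end{verbatim}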
We do not store negatives over multiple iterations in a memory bank for comparability with other techniques~\cite{afham2022crosspoint}, which is commonly done for improving the performance~\cite{he2020momentum}. The loss can be found in equation~\ref{eq:contrast_loss}. The similarity measure is the cosine distance between two features, $\textrm{sim}(z_1,z_2)=(z_1^Tz_2)/(||z_1|| ||z_2||)$. Given a minibatch, the final contrastive loss is $L_c=\frac{1}{2B}\sum_{b=1}^B(L^b_{\mathbf{1,2}}+ L^b_\mathbf{2,1})$. The parameter $\tau$ is temperature 0.5, $b$ is the index of the feature in the minibatch of total size $B$. \begin{equation} \resizebox{\columnwidth}{!}{ $L^i_\mathbf{1,2} = -log\frac{\textrm{exp}(\textrm{sim}(z^{1}_i, z^{2}_i)/\tau)}{\sum^B_{b=1, b\neq i} \textrm{exp}(\textrm{sim}(z^{1}_i, z^{1}_b)/\tau)+\sum^B_{b=1} \textrm{exp}(\textrm{sim}(z^{1}_i, z^{2}_b)/\tau)}$ } \label{eq:contrast_loss} \end{equation} \section{Experiments} In this section, we first quantitatively evaluate the self-supervised trained approach on different downstream tasks and different object data sets (synthetic and real world). Second, we qualitatively visualize the features on an unseen object dataset. Finally, we do ablation studies of a) our novel modules and augmentations, b) t-SNE feature visualization on the unseen dataset, and c) qualitatively visualize the features on an unseen driving scenario. \subsection{Quantitative Results} \begin{table}[h!] \centering \resizebox{0.87\columnwidth}{!}{ \begin{tabular}{l l c c} Modality&Method &\multicolumn{2}{c}{ModelNet-40}\\\hline\hline point cloud &3D-GAN~\cite{wu2016learning} & \multicolumn{2}{c}{83.3}\\ &Latent-GAN~\cite{achlioptas2018learning}& \multicolumn{2}{c}{85.7}\\ &SO-Net~\cite{li2018so} & \multicolumn{2}{c}{87.3}\\ &FoldingNet~\cite{yang2018foldingnet}& \multicolumn{2}{c}{88.4}\\ &MRTNet~\cite{gadelha2018multiresolution} & \multicolumn{2}{c}{86.4}\\ &3D-PCapsNet~\cite{zhao20193d}& \multicolumn{2}{c}{88.9}\\ &ClusterNet~\cite{zhang2019unsupervised} &\multicolumn{2}{c}{86.8}\\ &VIP-GAN~\cite{han2019view}& \multicolumn{2}{c}{90.2}\\ + Image Modality&DepthContrast~\cite{zhang2021self}& \multicolumn{2}{c}{85.4}\\\hline &&PNet& DGCNN\\\hline point cloud&Multi-Task~\cite{hassani2019unsupervised}&-&89.1\\ &PoinDisc~\cite{liu2021point}&86.2&89.3\\ &Self-contrast~\cite{du2021self}&-&89.6\\ &PoinContrast~\cite{xie2020pointcontrast}&86.7&89.9\\ &Jigsaw~\cite{sauder2019self}&87.3&90.6\\ &STRL~\cite{huang2021spatio}&88.3&90.9\\ &Rotation~\cite{poursaeed2020self} &88.6&90.8\\ &OcCo~\cite{wang2021unsupervised}&88.7&89.2\\ &\textbf{CLR-GAM (ours)}&\textbf{88.9}&\textbf{91.3}\\\hline + Image Modality&CrossPoint~\cite{afham2022crosspoint}&\underline{89.1}&91.2\\ \end{tabular} } \caption{We pretrained using the proposed contrastive self-supervised learning framework on ShapeNet. We evaluate on the test split of ModelNet-40 by fitting a linear SVM classifier. The reported results are the overall accuracy. The upper subtable uses custom backbone and training strategies.} \label{tbl:classification_mn40_results} \vspace{-0.4cm} \end{table} \begin{table}[h!] 
\centering \resizebox{0.6\columnwidth}{!}{ \begin{tabular}{ c|c|c } Method & PNet & DGCNN \\\hline Jigsaw~\cite{sauder2019self} & 55.2 & 59.5\\ PoinDisc~\cite{liu2021point} & 68.3 & 78.0\\ OcCo~\cite{wang2021unsupervised} & 69.5 & 78.3\\ PointContrast~\cite{xie2020pointcontrast} & 70.4 & 78.6\\ STRL~\cite{huang2021spatio} &74.2 & 77.9\\ \textbf{CLR-GAM (ours)}&\textbf{75.7}&\textbf{82.1}\\ \hline CrossPoint~\cite{afham2022crosspoint}&75.6&81.7\\ \end{tabular} } \caption{3D Object classification on ScanObjectNN. We pretrained using the proposed contrastive self-supervised learning framework on ShapeNet. We evaluate on test split of ScanObjectNN by fitting a linear SVM classifier. The reported results are the overall accuracy on the test split.} \label{tbl:classification_scanobject_resutls} \vspace{-0.5cm} \end{table} \noindent\textbf{a) 3D Object Classification:} For this task, we utilize the ModelNet-40 (synthetic) and ScanObjectNN (real-world) datasets. The ModelNet-40 dataset consists of a wide range of 3D objects' CAD models. The dataset contains 12,331 objects that are categorized into 40 classes. We use 9,843 for training and 2,468 for testing. The ScanObjectNN dataset is challenging because data is collected in cluttered environments, so objects could be partially observable due to occlusions. It consists of 15 classes totaling 2,880 objects (2,304 for training and 576 for testing). We follow the same evaluation strategy as in the existing works~\cite{huang2021spatio, afham2022crosspoint, wang2021unsupervised}. Specifically, we freeze the pretrained point cloud feature extractor pretrained on the ShapeNet dataset. % We randomly sample 1024 points from each object for testing classification accuracy on ModelNet-40 and ScanObjectNN. We fit a linear SVM~\cite{cortes1995support} on the extracted features. The results on the testing set of ModelNet-40 and ScanObjectNN can be found in Table~\ref{tbl:classification_mn40_results} and Table~\ref{tbl:classification_scanobject_resutls}, respectively. Additionally, we also conduct experiments using two different backbones, i.e., PNet~\cite{qi2017pointnet} and DGCNN~\cite{wang2019dynamic}, on the two datasets. We demonstrate state-of-the-art performance on the ModelNet-40 dataset using both backbone architectures compared to point cloud pretrained approaches in the bottom sub-table, as shown in Table~\ref{tbl:classification_mn40_results}. With the DGCNN backbone, the proposed approach performs better than CrossPoint and DepthContrast. It is worth noting that both methods utilize extra image modality for pretraining, while the proposed contrastive self-supervised learning framework only uses point clouds. Compared to the previous SOTA on a single modality (OcCo), the accuracy is improved by 2.35\% (with DGCNN). The results conducted on ScanObjectNN further justify the effectiveness of the proposed framework, as shown in Table~\ref{tbl:classification_scanobject_resutls}. State-of-the-art performance is present compared to both point cloud and multimodal pretrained approaches using both backbone architectures. Noticeably, compared to previous SOTA on a single modality (OcCo), the accuracy is improved by 4.8\% (with DGCNN). In addition to satisfactory results, we empirically demonstrate that the proposed approach has better generalization capability in a real-world setting under severe occlusions than other methods. \begin{table*}[t!] 
\centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{ l | c c c c | c c c c} &\multicolumn{4}{c|}{5-way}&\multicolumn{4}{c}{10-way}\\\hline Method &\multicolumn{2}{c}{10-shot} &\multicolumn{2}{c|}{20-shot} &\multicolumn{2}{c}{ 10-shot} &\multicolumn{2}{c}{20-shot}\\\hline\hline FoldingNet~\cite{yang2018foldingnet}&\multicolumn{2}{c}{33.4±4.1} & \multicolumn{2}{c|}{35.8±5.8} &\multicolumn{2}{c}{18.6±1.8}& \multicolumn{2}{c}{15.4±2.2}\\ Latent GAN~\cite{achlioptas2018learning}&\multicolumn{2}{c}{41.6±5.3} & \multicolumn{2}{c|}{46.2±6.2} &\multicolumn{2}{c}{32.9±2.9}& \multicolumn{2}{c}{25.5±3.2}\\ 3D-PointCapsNet~\cite{zhao20193d}&\multicolumn{2}{c}{42.3±5.5} & \multicolumn{2}{c|}{53.0±5.9} &\multicolumn{2}{c}{38.0±4.5}& \multicolumn{2}{c}{27.2±4.7}\\ PointNet++~\cite{qi2017pointnet++}&\multicolumn{2}{c}{38.5±4.4} & \multicolumn{2}{c|}{42.4±4.5} &\multicolumn{2}{c}{23.1±2.2}& \multicolumn{2}{c}{18.8±1.7}\\ PointCNN~\cite{li2018pointcnn}&\multicolumn{2}{c}{65.4±2.8} & \multicolumn{2}{c|}{68.6±2.2} &\multicolumn{2}{c}{46.6±1.5 }& \multicolumn{2}{c}{50.0±2.3}\\ RSCNN~\cite{liu2019relation}&\multicolumn{2}{c}{65.4±8.9} & \multicolumn{2}{c|}{68.6±7.0} &\multicolumn{2}{c}{46.6±4.8}& \multicolumn{2}{c}{50.0±7.2}\\ \hline &PNet&DGCNN&PNet&DGCNN&PNet&DGCNN&PNet&DGCNN\\\hline Rand&52.0±3.8&31.6±2.8&57.8±4.9&40.8±4.6&46.6±4.3&19.9±2.1& 35.2±4.8&16.9±1.5\\ Jigsaw~\cite{sauder2019self}&66.5±2.5&34.3±1.3&69.2±2.4&42.2±3.5&56.9±2.5&26.0±2.4&66.5±1.4&29.9±2.6\\ cTree~\cite{sharma2020self}&63.2±3.4&60.0±2.8&68.9±3.0&65.7±2.6&49.2±1.9&48.5±1.8&50.1±1.6&53.0±1.3\\ PoinDisc~\cite{liu2021point}&67.8±2.3&81.5±1.8&71.7±2.9&83.3±2.3&53.4±2.7&68.1±3.0&56.3±2.2&70.5±2.9\\ PointContrast~\cite{xie2020pointcontrast}&69.2±2.9&83.6±2.4&72.5±2.5&87.4±2.8&51.7±3.1&71.5±2.5&57.9±2.4&74.5±3.2\\ OcCo~\cite{wang2021unsupervised}&89.7±1.9&90.6±2.8&92.4±1.6&92.5±1.9&83.9±1.8 &82.9±1.3&\textbf{89.7±1.5}&86.5±2.2\\ \textbf{CLR-GAM (ours)}&\textbf{91.8$\pm$2.6}&\textbf{93.7$\pm$1.2}&\textbf{94.8$\pm$2.4}&\textbf{96.0$\pm$2.6}&\textbf{84.6$\pm$2.2}&\textbf{87.9$\pm$2.7}&89.1$\pm$2.0&\textbf{91.1$\pm$1.9}\\\hline CrossPoint~\cite{afham2022crosspoint}&90.9±4.8&92.5±3.0& 93.5±4.4&94.9±2.1& 84.6±4.7&83.6±5.3& \underline{90.2±2.2}&87.9±4.2\\ \end{tabular} } \caption{Few shot object classification on ModelNet-40. A linear SVM is fit on the training set of ModelNet-40 using the pretrained model learned from ShapeNet. Compared with existing methods, the proposed CLR-GAM achieves state-of-the-art performance under different few-shot settings. The results are the overall accuracy.} \label{tbl:fsl_mn10} \vspace{-0.2cm} \end{table*} \begin{table*}[t!] 
\centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{ l | c c c c | c c c c} &\multicolumn{4}{c|}{5-way}&\multicolumn{4}{c}{10-way}\\\hline Method &\multicolumn{2}{c}{10-shot} &\multicolumn{2}{c|}{20-shot} &\multicolumn{2}{c}{ 10-shot} &\multicolumn{2}{c}{20-shot}\\\hline\hline &PNet&DGCNN&PNet&DGCNN&PNet&DGCNN&PNet&DGCNN\\\hline Rand&57.6±2.5 &62.0±5.6& 61.4±2.4 &67.8±5.1& 41.3±1.3&37.8±4.3 & 43.8±1.9& 41.8±2.4\\ Jigsaw~\cite{sauder2019self}&58.6±1.9&65.2±3.8& 67.6±2.1&72.2±2.7& 53.6±1.7&45.6±3.1& 48.1±1.9&48.2±2.8\\ cTree~\cite{sharma2020self}&59.6±2.3&68.4±3.4& 61.4±1.4&71.6±2.9& 53.0±1.9&42.4±2.7& 50.9±2.1&43.0±3.0\\ PointDisc~\cite{liu2021point}&57.4±1.8&58.2±2.7&58.1±2.7&60.1±2.1&38.1±2.5&41.5±2.9&39.5±2.2&40.1±3.1\\ PointContrast~\cite{xie2020pointcontrast}&59.2±1.7&60.6±3.2&62.7±2.9&65.5±2.6&42.3±1.7&46.3±3.2&45.9±1.7&52.5±2.9\\ OcCo~\cite{wang2021unsupervised}&70.4±3.3& 72.4±1.4& 72.2±3.0&77.2±1.4& 54.8±1.3& 57.0±1.3& 61.8±1.2&61.6±1.2\\ \textbf{CLR-GAM (ours)}&\textbf{71.8$\pm$2.8}&\textbf{80.6$\pm$1.9}&\textbf{78.4$\pm$3.2}&\textbf{86.3$\pm$2.3}&\textbf{63.8$\pm$2.6}&\textbf{67.2$\pm$1.5}&\textbf{69.4$\pm$2.8}&\textbf{76.4$\pm$2.7}\\ \hline CrossPoint~\cite{afham2022crosspoint}&68.2±1.8&74.8±1.5& 73.3±2.9&79.0±1.2& 58.7±1.8&62.9±1.7& 64.6±1.2&73.9±2.2\\ \end{tabular} } \caption{Few shot object classification on ScanObjectNN. A linear SVM is fit on few-shot samples from ScanObjectNN using the pretrained model learned from ShapeNet. Compared with existing methods, the proposed CLR-GAM outperforms the state-of-the-art method~\cite{wang2021unsupervised} by a large margin. Reported results are the overall accuracy.} \label{tbl:scanobject_fsl} \vspace{-0.4cm} \end{table*} \noindent\textbf{b) Few Shot Object Classification:} \label{subsec: few-shot} Few Shot Learning (FSL) is a learning paradigm that aims to train a model that generalizes with limited data. We conduct experiments on N-way K-shot learning, in which a model is trained on N classes with K samples per class. The test/query set for each of the N classes consists of 20 unseen samples in all these experiments. We use ModelNet-40 and ScanObjectNN for these experiments. The same pretrained model is used for both classification and FSL tasks with the respective backbones. Similar to the classification task, we fit a linear SVM classifier for testing the FSL task. A similar protocol is used in earlier works~\cite{afham2022crosspoint, sharma2020self}. We report the results in Tables~\ref{tbl:fsl_mn10},~\ref{tbl:scanobject_fsl}. As there is no standard benchmark test set, we follow the setting used in~\cite{afham2022crosspoint, sharma2020self, wang2021unsupervised}. Specifically, we report the mean and standard deviation over 10 runs. As shown in Table~\ref{tbl:fsl_mn10}, we observe that CLR-GAM with DGCNN achieves SOTA compared to all other approaches in the challenging 5-way setting. In the 10-way setting, CLR-GAM performs on par with CrossPoint (multimodal pretrained) and OcCo (single-modality pretrained). The results show the same trend as in Table~\ref{tbl:classification_mn40_results}. The few-shot object classification results on ScanObjectNN are reported in Table~\ref{tbl:scanobject_fsl}. CLR-GAM with both DGCNN and PointNet achieves SOTA compared to both point cloud and multimodal pretrained approaches. Specifically, on ScanObjectNN we show a large-margin improvement (more than 11\%) using DGCNN on all sets, and more than an 8\% improvement with PNet (5-way 20-shot, 10-way 10-shot, 10-way 20-shot).
There is a 24\% improvement with both DGCNN and PNET backbones in 10 way-20shot. The results further testify that CLR-GAM learns discriminative 3D point cloud representations, and the representations can generalize to challenging real-world settings. \begin{figure*}[t!] \centering \includegraphics[width=0.8\textwidth]{figures/qualitative_feats.png} \caption{Feature visualization of unseen objects selected from the test sets of ShapeNet and ModelNet-40. For more qualitative results please check the supplementary material. } \label{fig:feat_viz} \vspace{-0.5cm} \end{figure*} \noindent\textbf{c) 3D Object Part Segmentation:} \begin{table}[h!] \centering \resizebox{0.85\columnwidth}{!}{ \begin{tabular}{ c|c|c } Category & Method & Mean IOU \\\hline Supervised & PointNet~\cite{qi2017pointnet} & 83.7\\ & PointNet++~\cite{qi2017pointnet++} & 85.1\\ & DGCNN~\cite{wang2019dynamic} & 85.1\\ \hline Self-Supervised & Self-Contrast~\cite{du2021self} & 82.3\\ & PointContrast ~\cite{xie2020pointcontrast} & 85.1\\ & PointDisc~\cite{liu2021point} & 85.3\\ & Jigsaw~\cite{sauder2019self} & 85.3\\ & OcCo~\cite{wang2021unsupervised} & 85.0\\ & \textbf{CLR-GAM (ours)} & \textbf{85.5} \\\hline + Image Modality & CrossPoint~\cite{afham2022crosspoint} & 85.5\\ \end{tabular} } \caption{We report the mean IOU results for 3D object part segmentation on the ShapeNet-part dataset. Supervised methods are trained with randomly initialized weights, whereas self-supervised methods are initialized with pretrained weights learned from ShapeNet.} \label{tbl:part_seg} \vspace{-0.3cm} \end{table} ShapeNet-part dataset~\cite{yi2016scalable}, which contains 50 different parts from 16 distinct object categories with a total of 16,881 3D objects, is used for 3D part object segmentation. We use the same pretrained model for both classification and FSL tasks with respective backbones. To be consistent with the evaluation for part segmentation, we finetune the pretrained model using 2048 points sampled from point clouds. We observe that the performance of CLR-GAM is better than the other point cloud contrastive learning-based approaches and on par with CrossPoint (multimodal pretrained). The reported results in Table~\ref{tbl:part_seg} are the average of intersection over union (IOU) computed for each part. \begin{table}[h!] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{ c|c|c|c|c|c|c|c } \multicolumn{5}{c|}{augmentations}&\multicolumn{2}{c|}{novel modules}&dataset\\\hline jitter&translation & rotation & scaling & crops & GFM& GA& Modelnet-40\\ \hline \hline \checkmark&\checkmark&\checkmark &\checkmark&&&&84.8\\ \checkmark&\checkmark&\checkmark &\checkmark&\checkmark&&&89.7\\ \checkmark&\checkmark&\checkmark &\checkmark&\checkmark&\checkmark &&90.7\\ \checkmark&\checkmark&\checkmark &\checkmark&\checkmark&&\checkmark& 90.4\\ \checkmark&\checkmark&\checkmark &\checkmark&\checkmark&\checkmark&\checkmark&\textbf{91.3}\\ \end{tabular} } \caption{Ablation Study of CLR-GAM: Trained on ShapeNet using the self-supervised method and evaluated ModelNet-40 using Linear-SVM. Reported results are overall accuracy } \label{tbl:ablation_augs} \vspace{-0.5cm} \end{table} \subsection{Qualitative Results} We visualize feature representations (learned from the proposed CLR-GAM) of each point/node in an unseen object's point cloud selected from test sets of ShapeNet and ModelNet-40 in Figure~\ref{fig:feat_viz}. 
We compute the cosine distance between the feature of a randomly selected point (colored in red) and the other points' features in the same point cloud. The color scale is Yellow-Green-Blue. The closest feature in the feature space is yellow, and the farthest is blue. Our approach learns similar representations for the whole planar region of simple planar structures such as the stool (a) and table (b). Moreover, in the case of a chair (f), a more complicated planar structure, the proposed model can learn similar features for the back part of a seat. For the monitor (k), the plane is assigned closer/similar features, whereas the features at the corners (structural irregularities) are dissimilar to the center. A similar observation can be found in the case of a knife (e), i.e., the handle and sharp edge have different features. For a curved object like a bathtub (g), the whole tub has similar features except for the legs. Similarly, for the cone (h), the whole curved region has similar features except for the edges. In the case of the lamp (i), the curved stand has similar features, separating it from the stem. For irregular-shaped objects, e.g., the flowerpot (c), all leaves have similar features, and different features are learned for the pot and stem. For the airplane (d), all turbines have similar features since they are relatively small and curved, and the sharply curved front and back regions of the airplane likewise have similar features. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{figures/tsne_combined.png} \vspace{-0.4cm} \caption{t-SNE plots: visualization of features from three different approaches, generated from unseen samples of the ModelNet-10 test dataset. } \label{fig:tsne_plots} \vspace{-0.4cm} \end{figure*} \subsection{Ablation Study} \noindent\textbf{a) Augmentations and Novel Modules: }We conduct an ablation study on the ModelNet-40 dataset to understand the contribution of GFM, GA, and the augmentations. The results are shown in Table~\ref{tbl:ablation_augs}. Contrastive learning without cropping achieves around 84.8\% in overall accuracy. With cropping, a large improvement of 4.9\% is observed. The result is similar to the performance of CrossPoint~\cite{afham2022crosspoint} without multimodal training (i.e., only Intra Modal Instance Discrimination, IMID). We treat this model as the vanilla baseline, i.e., the second row in Table~\ref{tbl:ablation_augs}. With GFM, we observe a performance improvement of 1.1\% compared to the vanilla baseline. A 0.77\% improvement is observed when GA is added. When both novel modules are introduced, we observe a 1.78\% improvement compared to the vanilla baseline. The ablative studies demonstrate the effectiveness of the proposed GA and GFM. \noindent\textbf{b) Feature Visualization:} We depict the features generated from our CLR-GAM approach and its ablated variants on unseen samples of the ModelNet-10 test dataset using the DGCNN backbone in Figure~\ref{fig:tsne_plots}. To generate t-SNE plots, we use a perplexity of 30. In the vanilla contrastive learning approach, except for the monitor class, all the other classes have a wider spread, bringing the classes closer together. With the proposed GFM, we observe an improvement in the nightstand and toilet classes, but with a similar overlap between the bed and bathtub classes as in the vanilla approach. With GA added, i.e., our full CLR-GAM, we observe a further improvement in the separation of the toilet class from the nightstand class and more concentrated class clusters. In all cases, the dresser and nightstand classes remain the most confused because of their shape similarity.
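For reference, the per-point visualization and the t-SNE embedding above can be reproduced along the following lines; the encoder call is a stand-in and all shapes are illustrative.
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE

def encode_points(points):
    # placeholder for the pretrained per-point feature extractor (returns n x d)
    return np.random.rand(points.shape[0], 64)

points = np.random.rand(1024, 3)
feats = encode_points(points)
feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)

# per-point cosine distance to a randomly selected seed point (Figure feat_viz style)
seed = np.random.randint(len(points))
cosine_distance = 1.0 - feats @ feats[seed]     # 0 = identical, larger = farther
# cosine_distance is then mapped to the Yellow-Green-Blue color scale for rendering

# pooled object-level features of many samples are embedded with t-SNE (perplexity 30)
object_feats = np.random.rand(200, 256)          # stand-in for pooled global features
embedding = TSNE(n_components=2, perplexity=30).fit_transform(object_feats)
\end{verbatim}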
\noindent\textbf{c) Generalization to driving scene:} To understand the generalization of the proposed unsupervised approach to real-world applications and datasets, we visualize the features of driving-scene point cloud data from the KITTI dataset~\cite{geiger2013vision}, as shown in Figure~\ref{fig:feat_viz_kitti}. The full scene, extending 80 meters in all directions from the ego-vehicle (a 160m x 160m scene), is shown in (a) as a top-down image. In subfigure-a, the gray color is used for the ground, and the red color is used for the non-ground points or obstacles. The separation is done using a threshold of -1.5 meters on the height axis of the point cloud data from the Velodyne sensor. The blue box is the region of interest (ROI), which is zoomed in subfigure-b (a 20m x 20m region). This region is subsampled to around 4000 points using voxel-based sampling with a 0.3-meter voxel length in all three axes. 1024 points are randomly selected and passed to the feature encoder. The features are visualized in subfigure-c. The color scale is the same as in Figure~\ref{fig:feat_viz}, Yellow-Green-Blue. The closest feature in the feature space is yellow, and the farthest one is blue with respect to a randomly selected point (red circle). The two vehicles have features different from the ground, highlighted in pink boxes. \section{Discussion} Existing local feature contrastive techniques, which contrast inter-point-cloud features (PointContrast~\cite{xie2020pointcontrast}) or intra-point-cloud features (PointDisc~\cite{liu2021point}), suffer from LCA. It is also observed from our qualitative results shown in Figure~\ref{fig:feat_viz} that similar parts/shapes or symmetries in an object can have similar features. Compared to these local contrast approaches, our novel approach performs better in downstream tasks using a linear SVM on the learned feature representation. Our proposed approach avoids LCA by avoiding the local feature contrast objective. However, global contrast introduces GCA, as mentioned in Section~\ref{sec:intro}. With our novel GFM and global contrast, we address GCA and, in our quantitative evaluation, also perform better than other global contrast techniques such as DepthContrast, pretext-based approaches, and the multimodal-trained CrossPoint. Our proposed self-supervised approach not only generalizes to different object datasets but also to driving scenes, as shown in Figure~\ref{fig:feat_viz_kitti}. Please check the supplementary material for further discussion. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{figures/kitti_2_2.pdf}% \caption{Feature visualization of an unseen \textbf{driving scene} selected from the KITTI dataset.} \label{fig:feat_viz_kitti} \vspace{-0.5cm} \end{figure} \section{Conclusion} In this paper, we present a contrastive learning framework (CLR-GAM) with guided augmentation (GA) to search augmentation parameters efficiently and guided feature mapping (GFM) to associate structural features precisely. The former is realized by adapting the inverse Dirac delta function with a memory bank, and the latter is fulfilled by the global contrasting of associated structural features between two augmented point clouds. Both processes help boost the contrastive learning of point cloud data. We benchmark on three different downstream tasks and show that our method achieves state-of-the-art performance compared to other methods trained on single-modality point cloud data. It also performs similarly to or better than CrossPoint, a recent multimodal-trained approach. {\small \bibliographystyle{ieee_fullname}
{ "arxiv_id": "2302.14302", "language": "en", "timestamp": "2023-03-01T02:08:59", "url": "https://arxiv.org/abs/2302.14302", "yymm": "2302" }
\section{Introduction} Although deep neural networks (DNNs) have performed well on many tasks, the strong assumption that training and test samples are independent and identically distributed (i.i.d.) is hardly satisfied in real world applications. Due to biased processes of collecting training data and confounding factors~(\cite{hendrycks2019benchmarking, recht2019imagenet, corbett2018measure, fonseca2022similarity}), the occurrence of out-of-distribution (OOD) samples is commonplace. From a statistical viewpoint, the distribution of OOD samples may suffer from unknown shifts from the training data, which leads to the catastrophic failure of DNNs' generalization to OOD samples as shown in~\cite{calian2021defending}. Even small corruptions, such as blurring and adding noise, can greatly degrade the performance of existing classifiers~(e.g., \cite{vasiljevic2016examining,geirhos2018generalisation}). A lack of generalization to OOD data has been an obstacle to the practical application of DNNs. \begin{figure*}[t] \centering \includegraphics[width=0.95\linewidth]{fig1_v3.pdf} \caption{Boosting model generalization with on-manifold adversarial examples in frequency domain. The OOD data (denoted as a green triangle) lies on a manifold, which is connected but distribution shifted from the original data manifold. The off-manifold adversarial example (denoted as a red point) moves out of the manifold, while the on-manifold one (denoted as a blue box) moves on the manifold. The on-manifold perturbations are more related to semantic meaning in wavelet domain. Extensive results, such as Swin Transformer Small~(\cite{liu2021swin}), demonstrate that our AdvWavAug can significantly improve the model generalization on ImageNet~(\cite{russakovsky2015imagenet}) and its distorted versions for various backbone models. } \label{fig:1} \end{figure*} Data augmentations have been proven to be effective in improving model generalization to OOD data. Previous work~(\cite{devries2017improved,yun2019cutmix,zhang2017mixup}) has reduced generalization errors by randomly selecting transform-based augmentations. Some of these methods~(\cite{cubuk2019autoaugment,cubuk2020randaugment}) will learn adaptive augmentation policies. However, most of these methods depend on the diversity of the adopted transforms. It is still uncertain about how well a transform can benefit OOD generalization. A more active data augmentation is adversarial training in~\cite{tramer2017ensemble}, which augments input data with adversarial examples. Most of previous work~(\cite{das2018shield,akhtar2018defense,papernot2016distillation,xie2017mitigating}) has focused on regular adversarial examples, such as those generated by PGD~(\cite{madry2017towards}). Regular adversarial examples actually define a precise upper bound of corrupted risks as shown in~\cite{yi2021improved}, which means they can help in improving OOD generalization. However, regular adversarial examples have many noise-like patterns in the background, which is uncorrelated to the objects in the benign images. Adversarial augmentations with these off-manifold adversarial examples may suffer from internal friction (i.e., the noises that DNNs can defend against are rare in real world applications), which may limit their effectiveness on OOD generalization. 
In order to utilize adversarial augmentations to explore the generalization boundary and improve OOD generalization in a more certain way, we augment data with on-manifold adversarial examples, in which the data manifold is defined as a lower-dimensional latent space inside the high-dimensional data space that occurs in the real world. This concept has been introduced in~\cite{stutz2019disentangling} to separate common data from regular adversarial examples. Compared with off-manifold adversarial examples, on-manifold ones are more related to semantic meanings. Therefore, data augmentation with these on-manifold examples can make the classifier concentrate more on semantic changes, which are more beneficial to OOD generalization. To verify this, we provide a theoretical analysis which shows that the models robust to on-manifold adversarial examples have a smaller upper bound of corrupted risks than models robust to off-manifold adversarial examples. However, it is still non-trivial to generate on-manifold adversarial examples, because the real manifold is generally unknown. Some work~(\cite{zhou2020manifold,stutz2019disentangling}) approximates the lower-dimensional data manifold by training a variational auto-encoder (VAE)~(\cite{kingma2013auto}). However, it is inefficient to represent the manifold with a VAE on large-scale datasets, because training a VAE and using it to produce more samples incur much higher computational cost. Inspired by compression algorithms (e.g.,~\cite{villasenor1995wavelet,rabbani2002jpeg2000}), which indicate that images in the frequency domain are highly sparse, we intend to approximate the manifold in a frequency domain. We find that perturbing the non-sparse coefficients in a frequency domain is a similar but more efficient process, compared with a VAE implementation in~\cite{stutz2019disentangling}. In this paper, we propose a novel framework to boost the generalization of a DNN model by Augmenting data with Adversarial examples via a Wavelet module (AdvWavAug). Specifically, we introduce a wavelet projection module, which is composed of a discrete wavelet transform and its inverse, to obtain the wavelet coefficients. Then, the coefficients are modified according to the gradients back-propagated from the loss function. Different from regular adversarial attacks, we construct a multiplicative attention map for the wavelet coefficients, and any perturbation can be added to the attention map. The benefit of using a multiplicative attention map is retaining the sparsity of the wavelet coefficients, as wavelet coefficients at sparse positions will always remain zero. Finally, the modified adversarial examples are generated by performing the inverse wavelet transform on the product of the wavelet coefficients and the modified attention map. The whole process can be essentially regarded as projecting a back-propagated gradient to the wavelet domain, and retaining the adversarial perturbations located at the non-sparse coefficients. We also provide a detailed analysis showing that augmentations with on-manifold adversarial examples yield a smaller upper bound on the corrupted risks, compared with those with off-manifold adversarial examples. To verify the effectiveness of our method, we conduct model training with these on-manifold adversarial examples. Note that AdvProp~(\cite{xie2020adversarial}) has more advantages in improving generalization to OOD data. Therefore, we adopt it as the basic training framework of our AdvWavAug. 
Experiments on the ImageNet dataset~(\cite{russakovsky2015imagenet}) and its distorted versions (ImageNet-A~(\cite{hendrycks2021natural}), ImageNet-R~(\cite{hendrycks2021many}) and ImageNet-C~(\cite{hendrycks2019benchmarking})) validate the effectiveness of the proposed method. Taking Swin Transformer Small~(\cite{liu2021swin}) as an example, compared with vanilla AdvProp, our method significantly improves generalization on ImageNet by $0.9\%$, ImageNet-A by $5.5\%$, ImageNet-R by $5.6\%$ and ImageNet-C by $8.0\%$. We have also implemented the adversarial augmentation with a pre-trained VAE network and compared the effectiveness of our method with the VAE implementation. With similar performance on OOD data, the total training time of our method is $59.4\%$ less than a VQ-VAE~(\cite{razavi2019generating}) implementation. Because on-manifold adversarial examples help the model understand the semantic meaning in an image, our method can further improve the generalization of models pre-trained by self-supervised learning. For example, we obtain new SOTA results by integrating our method into the fine-tuning process of Masked AutoEncoder (MAE) in~\cite{he2021masked}, achieving improvements on ImageNet by $0.3\%$, ImageNet-A by $1.8\%$, ImageNet-R by $1.4\%$ and ImageNet-C by $3.7\%$ on Vision Transformer Large~(\cite{dosovitskiy2020image}). In summary, we make the following technical contributions: \begin{itemize} \item To solve the OOD generalization problem, we establish connections between on-manifold adversarial examples and OOD samples, and transform the original problem into improving robustness to on-manifold adversarial examples. A detailed proof is provided to verify that models robust to on-manifold adversarial examples have a smaller upper-bounded expected risk to OOD samples, compared with models robust to regular ones. \item We propose an adversarial augmentation module AdvWavAug to approximate the data manifold in a frequency domain, in which we project the noise-like perturbations into the frequency domain and keep the components positioned on the non-sparse coefficients. We integrate it into the AdvProp framework to verify the effectiveness of our method in terms of model generalization. \item We conduct comprehensive experiments on ImageNet and its wide range of OOD variants. The results demonstrate that our algorithm can significantly boost the generalization of modern DNN architectures. We have achieved new SOTA results on two transformer-based architectures, including Vision Transformers and Swin Transformers. \end{itemize} \section{Background} In this section, we will give a brief introduction to the augmentation techniques, which include normal transform-based augmentations and adversary-based ones. \subsection{Transform-based Augmentations} Data augmentation has been shown to be effective for improving model generalization. Most of these augmentations randomly pick some basic transforms. In image classification tasks, random rotations, crops and flips are common transforms for data augmentation in~\cite{he2016deep}. More complex operations have been used to improve accuracy on clean data, such as Cutout~(\cite{devries2017improved}), which randomly occludes parts of an input image; CutMix~(\cite{yun2019cutmix}), which replaces a part of the target image with a different image; Mixup~(\cite{zhang2017mixup}), which produces a convex combination of two different images. 
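As a concrete illustration of this family of transform-based augmentations, a minimal PyTorch-style sketch of Mixup is given below; the Beta-distribution parameter and the use of one-hot labels are illustrative choices rather than the exact configuration of~\cite{zhang2017mixup}.
\begin{verbatim}
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    # Draw the mixing coefficient and a random pairing within the batch.
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    # Convex combination of two images and of their one-hot labels.
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
\end{verbatim}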
To automatically combine some simple augmentation techniques, an enhanced mixup~(\cite{guo2019mixup}) is proposed by adaptively adjusting the mixing policy. Some related works, such as AutoAugment~(\cite{cubuk2019autoaugment}) and RandAugment~(\cite{cubuk2020randaugment}), try to automatically learn an augmentation policy. There also exist some works on generalization to OOD data, but these methods usually suffer from unseen categories of corruptions~(\cite{hendrycks2019benchmarking}). Another study has tried to understand robustness to different corruptions in frequency domains~(\cite{saikia2021improving}). \subsection{Adversary-based Augmentations} Adversarial training is another augmentation policy, which takes adversarial examples as the augmented data. \noindent \textbf{Adversarial Attack.} Let $\bm{x}$ denote the benign image and $y$ denote the ground-truth label. A classifier can be denoted as $c(\bm{x}): \mathcal{X} \rightarrow \mathcal{Y}$, where $\bm{x} \in \mathcal{X} \subseteq \mathbb{R}^{d_0}$ is the input image and $\mathcal{Y} = \{1, 2, \cdots, N\}$ is the label set. An adversary tries to find an adversarial example $\bm{x}^{adv}$ to fool the classifier. Formally, $\bm{x}^{adv}$ is defined in an $L_p$-norm ball of $\bm{x}$, which is expressed as $\|\bm{x}^{adv}-\bm{x} \|_p \leq \epsilon$ with $\epsilon$ being the maximum perturbation constraint of regular adversarial examples. Denoting $\mathcal{L}$ as the loss function and $\bm{\delta}$ as the perturbation, the objective of an untargeted attack is to maximize $\mathcal{L}(\bm{\theta},\bm{x}+\bm{\delta}, y)$ as \begin{equation} \label{eq:1} \mathop{\arg \max} \limits_{\|\bm{\delta}\|_p \leq \epsilon} \mathcal{L}(\bm{\theta},\bm{x}+\bm{\delta}, y). \end{equation} In the white-box scenario, where the network structure, parameters and gradients are accessible, adversarial examples can be generated by adding perturbations to the benign images according to the gradient. FGSM~(\cite{goodfellow2014explaining}) is a common one-step gradient-based white-box attack algorithm. PGD~(\cite{madry2017towards}) improves FGSM by iterative optimization. Deepfool~(\cite{moosavi2016deepfool}) iteratively attacks input images based on the decision hyper-plane. Carlini \& Wagner's method (C\&W)~(\cite{carlini2017towards}) optimizes the attack problem in Lagrangian form and adopts Adam~(\cite{kingma2014adam}). Due to the great threat brought by adversarial examples, adversarial training is proposed to defend against these malicious modifications. \noindent \textbf{Adversarial Training.} We first recall that the objective function of the vanilla training setting for a DNN model is \begin{equation} \label{eq:2} \mathop{\arg \min} \limits_{\bm{\theta}}\mathbb{E}_{(\bm{x},y) \sim \mathbb{D}}\mathcal{L}(\bm{\theta},\bm{x},y), \end{equation} where $\mathbb{D}$ is the real data distribution, $\mathcal{L}(\cdot,\cdot,\cdot)$ is the loss function, $\bm{\theta}$ represents the network parameters, $\bm{x}$ is the input image, and $y$ is the ground-truth label. Adversarial training advances the vanilla training to improve the adversarial robustness~(\cite{madry2017towards}) by training the networks with adversarially perturbed samples. AdversarialAugment~(\cite{calian2021defending}) tries to generate adversarially corrupted examples, which are used as augmentation samples in model training. 
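To make the adversary-based augmentation loop concrete, the following PyTorch-style sketch performs the inner maximization of Eq.~\ref{eq:1} with a single gradient-sign step and then trains on the perturbed batch, in the spirit of the adversarial training scheme described above; it is a simplified sketch rather than the exact procedure of any particular method.
\begin{verbatim}
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=1.0 / 255):
    # Inner maximization (Eq. 1): one-step L_inf attack on the benign batch.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_attack = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_attack, x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    # Outer minimization: the perturbed batch is used as augmented data.
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}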
The recently proposed AdvProp~(\cite{xie2020adversarial}) extends the framework by considering the performance on both the clean and the adversarial examples as \begin{align} \label{eq:3} \mathop{\arg \min} \limits_{\bm{\theta}} \mathbb{E}_{(\bm{x},y) \sim \mathbb{D}}(&\mathcal{L}(\bm{\theta},\bm{x},y) \nonumber\\ +&\max \limits_{\| \bm{\delta} \|_p \leq \epsilon}\mathcal{L}(\bm{\theta},\bm{x}+\bm{\delta},y)), \end{align} in which $\bm{\delta}$ is the perturbation added to the benign image and $\epsilon$ is the constraint on the $L_p$-norm of $\bm{\delta}$. However, most of the adversarial augmentation techniques focus on defending against regular adversarial examples. We intend to apply these techniques to the OOD generalization problem, because OOD data may also be adversarial to a classifier with different kinds of corruptions. \section{On-manifold Adversarial Augmentation in Frequency Domain} In this section, we will give a detailed description of the benefit of on-manifold adversarial augmentation for generalization on OOD data, as well as the manifold definition in a frequency domain. \subsection{On-manifold Augmentation Improving OOD Generalization} \label{sec:3.1} Consider a training set $\{ ( \bm{x_i}, y_i ) \}$ with $n$ i.i.d. training samples $\{ \bm{x_i}\}$ and labels $\{y_i\}$. It is assumed that the training set distribution $P_0$ has compact support $\mathcal{X} \subseteq \mathbb{R}^{d_{0}}$. The loss function of a given input $(\bm{x},y)$ with parameter $\bm{\theta}$ can be defined as $\mathcal{L}(\bm{\theta}, \bm{x}, y)$, which is commonly assumed to be a continuous, differentiable and bounded function, i.e., $0 \leq \mathcal{L}(\bm{\theta}, \bm{x}, y) \leq M$ for a constant $M$. To illustrate the relationships between OOD data, regular adversarial examples and on-manifold adversarial examples, we first need to provide the definitions of these three kinds of samples. The OOD data can be defined as a shifted distribution, of which the offset to the original distribution can be measured by the Wasserstein distance as defined in~\cite{arjovsky2017wasserstein}. The OOD data lies on a distribution set $\mathcal{P}(P_0,\epsilon)=\{ P: W_p(P_0, P) \leq \epsilon\}$, in which $\epsilon$ represents the distance to the real data distribution $P_0$ and $W_p$ represents the $W_p$-distance\footnote{The Wasserstein distance between $P$ and $Q$ can be defined as $W_{p}(P,Q)=\left(\inf_{\lambda \in \Lambda(P,Q)} \mathbb{E}_{\lambda}(\|x-y\|^{p})\right)^{1/p}$, in which $\Lambda(P,Q)$ is the coupling distribution of $P$ and $Q$, $x$ is sampled from $P$ and $y$ is sampled from $Q$.} with $p \in \{2, \infty\}$. We mainly focus on $p=\infty$. The expected risk of a given data distribution $P$ and the label distribution $P_{y \mid \bm{x}}$ is expressed as $\mathbb{E}_P [\mathbb{E}_{P_{y \mid \bm{x}}} [\mathcal{L}(\bm{\theta}, \bm{x}, y)]]$, in which $\mathbb{E}_{P_{y \mid \bm{x}}} [\mathcal{L}(\bm{\theta}, \bm{x}, y)]$ can be denoted as $\ell (\bm{\theta}, \bm{x})$. Therefore, the expected risk gap on OOD data in $\mathcal{P}(P_0,\epsilon_o)$ can be defined as \begin{equation} \label{eq:4} \gamma(\epsilon_{o}, p) = \left\vert \sup_{P\in \mathcal{P}(P_0,\epsilon_o)}\mathbb{E}_P(\ell (\bm{\theta}, \bm{x'})) - \mathbb{E}_{P_{0}}(\ell (\bm{\theta}, \bm{x})) \right\vert, \end{equation} in which, $\epsilon_{o}$ represents the distribution offset. 
As the real data distribution $P_0$ is unknown, the problem can be simplified with $P_0$ replaced by the empirical distribution $P_n$\footnote{The empirical distribution is $P_n(\cdot)=\frac{1}{n}\sum_{i=1}^{n}\bm{1}_{\{\cdot =x_i\}}$}. The perturbation of a regular adversarial example to $\bm{x}$ is defined within an $L_p$-norm ball. A model is robust to regular adversarial noise when \begin{equation} \label{eq:5} \mathbb{E}_{P_0} \left[ \sup_{\Vert \bm{\delta} \Vert_{p} \leq \epsilon} \left\vert \ell(\bm{\theta}, \bm{x}+\bm{\delta})-\ell(\bm{\theta},\bm{x}) \right\vert \right] \leq \tau, \end{equation} in which, $\bm{\delta}$ is the perturbation noise, $\epsilon$ represents the maximum perturbation range, $\tau$ represents the upper bound of the expected risk. On-manifold adversarial examples are generated by modifying the latent code in a lower-dimensional subspace. For training sample $\bm{x}$ and its label $y$, it is assumed that there exists a mapping function $g$ which can project a natural image $\bm{x} \in \mathcal{X}$ into a lower-dimensional manifold space $\bm{z} \in \mathcal{Z} \subseteq \mathbb{R}^{d}$, i.e., $g: \mathcal{X} \rightarrow \mathcal{Z}$. Due to the redundancy of the original space to represent natural images, it is generally accepted that $d \ll d_0$. A model is robust to on-manifold adversarial examples when \begin{equation} \label{eq:6} \mathbb{E}_{P_0}\left[ \sup_{\Vert \bm{\delta}_m \Vert_{p} \leq \epsilon_m} \left\vert \ell(\bm{\theta}, g^{-1}(\bm{z}+\bm{\delta}_m))-\ell(\bm{\theta},\bm{x}) \right\vert \right] \leq \tau, \end{equation} in which, latent vector $\bm{z}$ is obtained by $\bm{z}=g(\bm{x})$, $\bm{\delta}_m$ is the perturbation noise in latent space, $\epsilon_m$ represents the maximum perturbation range in latent space, $\tau$ represents the upper bound of the expected risk. The relationship between OOD generalization and regular adversarial robustness has been studied as shown in Theorem~\ref{th1}. \begin{theorem}[OOD Upper-bound for Models Robust on Regular Adversarial Examples~(\cite{yi2021improved})] \label{th1} If a model is robust to regular adversarial examples given $2\epsilon$, $\tau$ and $p=\infty$, then for any $\epsilon_o \leq \epsilon$ with probability at least $1-e^t$, \begin{equation} \label{eq:7} \gamma(\epsilon_o,\infty) \leq \tau+M \sqrt{\frac{c \cdot N_r -2t}{n}}=U_r, \end{equation} in which, $N_r=(2d_0)^{\frac{2D}{\epsilon^2}+1}$, $d_0$ is the dimension of the data space, $D$ is the diameter of $\mathcal{X} \subseteq \mathbb{R}^{d_{0}}$, $c=\log 2$ is a constant and $M$ is the upper bound of the loss function $\ell(\bm{\theta},\bm{x})$. \end{theorem} We now provide the relationship between OOD generalization and on-manifold adversarial robustness. It should be noticed that the on-manifold perturbation range $\epsilon_m$ will increase as the dimension of the representative code decreases. Considering discrete situations, the total number of samples within perturbation $\epsilon$ should be $D(\epsilon) \cdot d_0$, in which $D(\epsilon)$ is the discretized number of $\epsilon$ and $d_0$ is the dimension of the data space, i.e., $d_0=C \times H \times W$. Supposing the original images are projected into the manifold with dimension $d$, to have full access to all the perturbed samples, the maximum perturbation range in latent space $\epsilon_m$ should satisfy $D(\epsilon_m)=\frac{d}{d_0} \cdot D(\epsilon)$. It is assumed that the manifold projection $\bm{z}=g(\bm{x})$ can maintain the perturbed samples within an $\epsilon_m$-ball in the latent space. 
Then, we have the first lemma as \begin{lemma} \label{lemma:1} For any $\bm{\theta}$ and $\epsilon$, we have \begin{align} \label{eq:8} \sup_{P \in \mathcal{P}(P_0,\epsilon)}&\mathbb{E}_P(\ell (\bm{\theta}, \bm{x'})) =\nonumber\\ &\mathbb{E}_{P_{0}}\left[\sup_{\|\delta\|_\infty\leq \epsilon_m}\ell (\bm{\theta}, g^{-1}(\bm{z}+\bm{\delta}))\right], \end{align} in which, $\bm{z}=g(\bm{x})$, $\bm{x} \in \mathbb{R}^{d_0}$, $\bm{z} \in \mathbb{R}^{d}$ and $\epsilon_m=\frac{d}{d_0} \cdot \epsilon$. \end{lemma} \begin{proof} Let $T_{\epsilon_m}(\bm{z})=\bm{z}+\mathop{\arg \max}_{\|\bm{\delta}\|_\infty \leq \epsilon_m}\ell(\bm{\theta}, \bm{z}+\bm{\delta})$, where $\bm{z}$ is the latent code of an input sample, i.e., $\bm{z}=g(\bm{x})$. The projection function $\bm{z}=g(\bm{x})$ can be formulated as a one-to-one function when the perturbation range in latent space is enlarged to $\epsilon_m$. Therefore, there exists some point $\bm{z}+\bm{\delta}_z$ with $\|\bm{\delta}_z\| \leq \epsilon_m$ such that $\ell(\bm{\theta}, g^{-1}(\bm{z}+\bm{\delta}_z))=\max_{\|\bm{\delta}\|_\infty \leq \epsilon}\ell(\bm{\theta}, \bm{x}+\bm{\delta})$. Let $P_{\epsilon_m}$ be the distribution of $T_{\epsilon_m}(\bm{z})$ with $g^{-1}(\bm{z}) \sim P_0$ and $P_{\epsilon}$ be the distribution of $T_{\epsilon}(\bm{x})=\bm{x}+\mathop{\arg \max}_{\|\bm{\delta}\|_\infty \leq \epsilon} \ell(\bm{\theta},\bm{x}+\bm{\delta})$, we have \begin{align} \mathbb{E}_{P_0}\left[ \sup_{\|\bm{\delta}\|_\infty \leq \epsilon} \ell(\bm{\theta}, \bm{x}+\bm{\delta}) \right] &=\mathbb{E}_{P_{\epsilon}}\left[\ell (\bm{\theta}, \bm{x})\right] \nonumber\\ &=\mathbb{E}_{P_{\epsilon_m}}\left[\ell (\bm{\theta}, g^{-1}(\bm{z}))\right]. \end{align} Besides, we have \begin{equation} \mathbb{E}_{P_0}\left[\Vert g(\bm{x})-T_{\epsilon_m}(g(\bm{x}))\Vert_\infty\right] \leq \epsilon_m \Leftrightarrow \mathbb{E}_{P_0}\left[\Vert \bm{x}-T_{\epsilon}(\bm{x})\Vert_\infty\right] \leq \epsilon. \end{equation} Referring to the theorem in~\cite{yi2021improved}, we obtain the conclusion. \end{proof} Lemma~\ref{lemma:1} shows that models robust to an on-manifold perturbation range are actually equivalent to those robust to some regular perturbation range in ideal situations. The only difference is that the perturbation range should be scaled, because the dimension of the latent space changes during the projection process. Taking $p=\infty$ as an example, the upper bound of OOD generalization can be expressed by Theorem~\ref{th2} as \begin{theorem}[OOD Upper-bound for Models Robust on On-manifold Adversarial Examples] \label{th2} If a model is robust to on-manifold adversarial examples given $2\epsilon_m$, $\tau$ and $p=\infty$, then for any $\epsilon_o \leq \frac{d_0}{d} \cdot \epsilon_m$ with probability at least $1-e^t$, \begin{equation} \label{eq:11} \gamma(\epsilon_o,\infty) \leq \tau+M \sqrt{\frac{c \cdot N_m-2t}{n}}=U_m, \end{equation} in which, $N_m=(2d)^{\frac{2D'}{\epsilon_m^2}+1}$, $d$ is the dimension of the latent space, $D'$ is the diameter of $\mathcal{Z} \subseteq \mathbb{R}^{d}$, $c=\log 2$ is a constant and $M$ is the upper bound of the loss function $\ell(\bm{\theta},\bm{x})$. \end{theorem} The detailed proof of the theorem will be provided in Appendix~\ref{appendix:a}. From Theorem~\ref{th1} and Theorem~\ref{th2}, we can obtain a corollary as \begin{corollary} \label{coro} If the latent space $\mathcal{Z} \subseteq \mathbb{R}^{d}$ is compressed from the original data space $\mathcal{X} \subseteq \mathbb{R}^{d_{0}}$, i.e., $k=\frac{d_0}{d} \geq 1$, we have \begin{equation} U_m \leq U_r. 
\end{equation} \end{corollary} The detailed proof of the corollary will be provided in Appendix~\ref{appendix:a}. Theorem~\ref{th2} and Corollary~\ref{coro} reveal two important observations: \begin{itemize} \item Robustness to on-manifold adversarial examples is highly related to generalization to OOD data. We can simply improve OOD generalization by adversarial augmentation with on-manifold adversarial examples, because the OOD upper bound is limited by on-manifold adversarial robustness. \item Compared with robustness to regular adversarial examples as defined in~\cite{yi2021improved}, on-manifold adversarial robustness defines a smaller upper bound of OOD generalization, because the dimension of the latent code in the latent space is much smaller than the original dimension of the image space. Therefore, adversarial augmentation with on-manifold adversarial examples can better benefit OOD generalization than adversarial augmentation with regular ones. \end{itemize} \subsection{Manifold Representation in Frequency Domain} Despite the benefit of training models with on-manifold adversarial examples, it is nontrivial to generate on-manifold samples. Some previous works~(\cite{lin2020dual,stutz2019disentangling}) have tried to approximate the data manifold with VAE models. The key idea is training a VAE with the encoder $g(\bm{x})$ projecting the original images into a latent code $\bm{z}$ and the decoder $g^{-1}(\bm{z})$ reconstructing the images $\bm{x}'$ from the latent code. However, approximating the data manifold places high demands on the representational ability of the VAE network, which is difficult to satisfy, especially on large-scale datasets. Also, the training process of a VAE network requires considerably more computational resources and training tricks. Therefore, it is necessary to replace the VAE implementation with a more stable and efficient one. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{wavelet_v2.pdf} \caption{Illustration of on-manifold and off-manifold adversarial examples in the wavelet domain. Given the original image on the data manifold (denoted as a yellow star), a PGD attacker generates an off-manifold adversarial example (denoted as a red point) with a perturbation $\bm{\delta}_{pgd}$; our method generates an on-manifold adversarial example (denoted as a blue box) with a perturbation $\bm{\delta}_{wt}$ parallel to the manifold. The wavelet decomposition of these two perturbations demonstrates that $\bm{\delta}_{wt}$ is more related to the semantic meaning, while $\bm{\delta}_{pgd}$ is noise-like. } \label{fig:2} \end{figure} We have noticed that projecting images into a frequency domain has a similar effect to a VAE. Taking the wavelet domain as an example, the wavelet transform $\mathcal{W}(\cdot)$ and its inverse $\mathcal{W}^{-1}(\cdot)$ correspond to the encoder $g(\cdot)$ and the decoder $g^{-1}(\cdot)$. Besides, the wavelet coefficients lie in a lower-dimensional space because of the sparsity of natural images in the wavelet domain, which is similar to the latent code $\bm{z}$ in the latent space. It is more convenient and efficient to approximate the data manifold in the frequency domain, because the wavelet transform requires no pre-training and less computation time. We propose to construct a subspace $\mathcal{Z}$ containing only the non-sparse coefficients in the wavelet domain. It contains most of the energy of a signal, which means any modification in this subspace is highly relevant to semantic manipulation. 
With a threshold $T$ and input $\bm{x}$, the latent space can be expressed as \begin{equation} \label{eq:13} \mathcal{Z}(\bm{x},T)=\{ \bm{\psi}_i \mid \langle \bm{\psi}_i,\bm{x} \rangle \geq T,\bm{\psi}_i \in \Psi \}, \end{equation} in which, $\Psi$ contains all the wavelet bases given the mother wavelet; and $T$ is a predefined threshold to filter out the sparse coefficients. In this way, the manifold projection function is defined as a composition of the wavelet transform $\mathcal{W}$ and a filtering operation using an indicator function to choose the non-sparse coefficients $\mathcal{F}(\bm{z})=\bm{z} \cdot \bm{1}_{\{ z_i \geq T \} }$, i.e., $g(\bm{x})=\mathcal{F} \circ \mathcal{W}(\bm{x})$. The inverse of $g(\bm{x})$ should be the corresponding inverse wavelet transform, i.e., $g^{-1}(\bm{z})=\mathcal{W}^{-1}(\bm{z})$. As shown in Fig.~\ref{fig:2}, regular adversarial attacks generate off-manifold adversarial examples with noise-like patterns in the frequency domain. The perturbation $\bm{\delta}_{pgd}$ of a PGD attacker reveals the gradient of a classifier but no semantic meaning. We can obtain an on-manifold perturbation $\bm{\delta}_{wt}$ by projecting $\bm{\delta}_{pgd}$ onto the pre-defined manifold. Then, we provide the definition of the models robust to on-manifold adversarial examples in a frequency domain as \begin{equation} \label{eq:14} \mathbb{E}_{P_0}\left[ \sup_{\Vert \bm{\delta} \Vert_{p} \leq \epsilon_f} \left\vert \ell(\bm{\theta}, \mathcal{W}^{-1}(\bm{z}_f+\bm{\delta}))-\ell(\bm{\theta},\bm{x}) \right\vert \right] \leq \tau, \end{equation} in which, latent vector $\bm{z}_f$ is obtained by $\bm{z}_f=\mathcal{F} \circ \mathcal{W}(\bm{x})$, $\epsilon_f$ represents the maximum perturbation range in the latent space, $\tau$ represents the upper bound of the expected loss. It is shown that the approximation of the data manifold in a frequency domain can be regarded as a special case of the general manifold estimation, which means the models robust to the on-manifold adversarial examples in a frequency domain also have a smaller upper bound on OOD generalization, compared with the models robust to the regular ones. \section{Boosting Model Generalization via AdvWavAug} \label{sec:4} In this section, we introduce the training scheme combined with our on-manifold augmentation module AdvWavAug. \subsection{Problem Formulation} \label{sec:4.1} The definition of the on-manifold adversarial examples in a frequency domain requires deciding the threshold $T$ in Eq.~\ref{eq:13}. A more adaptive way to decide which coefficients are non-sparse is replacing the traditional additive perturbation with a multiplicative one. To obtain on-manifold adversarial examples, we use the multiplicative perturbation in a frequency domain as \begin{equation} \label{eq:15} \mathop{\arg \max} \limits_{\|\bm{\delta}\|_p \leq \Tilde{\epsilon}_f} \ell(\bm{\theta}, \mathcal{W}^{-1}(\bm{z}_f \odot (\bm{1}+\bm{\delta}))), \end{equation} where $\bm{z}_f=\mathcal{F} \circ \mathcal{W}(\bm{x})$, $\mathcal{W}(\cdot)$ is the wavelet transform and $\mathcal{W}^{-1}(\cdot)$ is its inverse, $\bm{1}+\bm{\delta}$ can represent an adversarially modified attention map (wavelet components with a higher contribution to model degradation will be strengthened more). It is noted that we introduce the adversarial perturbations in the frequency domain rather than the image domain, which helps preserve the on-manifold characteristics of the examples. 
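To make the frequency-domain manifold concrete, the following sketch (using the PyWavelets package, chosen here only for illustration) projects a given image-space perturbation onto the non-sparse wavelet coefficients of the clean image, in the spirit of projecting $\bm{\delta}_{pgd}$ onto the pre-defined manifold in Fig.~\ref{fig:2}; a per-band keep-ratio stands in for the threshold $T$, the decomposition depth is illustrative, and the single-channel function below is a simplified sketch rather than the full AdvWavAug module.
\begin{verbatim}
import numpy as np
import pywt

def project_to_wavelet_manifold(x, delta, wavelet="sym8", level=3,
                                keep_ratio=0.1):
    # x, delta: 2-D float arrays (a gray-scale image and a perturbation).
    cx = pywt.wavedec2(x, wavelet, level=level)      # coefficients of the image
    cd = pywt.wavedec2(delta, wavelet, level=level)  # coefficients of delta

    filtered = [cd[0]]  # keep the coarsest approximation band
    for bands_x, bands_d in zip(cx[1:], cd[1:]):
        kept = []
        for bx, bd in zip(bands_x, bands_d):
            # Per-band threshold: keep delta only where the image itself
            # has non-sparse (large-magnitude) coefficients.
            t = np.quantile(np.abs(bx), 1.0 - keep_ratio)
            kept.append(np.where(np.abs(bx) >= t, bd, 0.0))
        filtered.append(tuple(kept))

    delta_on = pywt.waverec2(filtered, wavelet)
    return delta_on[:x.shape[0], :x.shape[1]]  # crop possible padding
\end{verbatim}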
The perturbation range can also be derived from a regular perturbation constraint by simply setting all the wavelet coefficients below the threshold $T$ to zero as \begin{equation} \label{eq:16} \| \bm{\delta} \|_{p} \leq \frac{\| \mathcal{W}(\bm{\delta}) \|_{p}}{\sqrt[p]{n}T} \leq \frac{PQ\epsilon_f}{\sqrt[p]{n}T}=\Tilde{\epsilon}_f, \end{equation} in which $n$ is the total number of non-sparse coefficients, and $P$ and $Q$ are two positive constants. It is noted that $\Tilde{\epsilon}_f$ serves as a new upper bound of $\| \bm{\delta} \|_{p}$. In practice, requiring the perturbation $\| \bm{\delta} \|_{p}$ to satisfy the constraint $\Tilde{\epsilon}_f$ is a sufficient condition. The assignment of the perturbation range to each wavelet scale is to be determined. A more detailed analysis is provided in Appendix~\ref{appendix:b}. \begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{fig4_v2.pdf} \vspace{-1ex} \caption{The overall pipeline of adversarial data augmentation with AdvWavAug, yielding an improved model generalization. To obtain on-manifold adversarial examples $\bm{x}^{adv}$, we send the input image $\bm{x}$ into an adversarial augmentation module, in which $\bm{x}$ is projected into the wavelet domain by the wavelet transform $\mathcal{W}$ and reconstructed by its inverse $\mathcal{W}^{-1}$. We design an attention map to receive the perturbations backpropagated from the loss. The wavelet decomposition of the perturbations ensures that the proposed AdvWavAug can generate on-manifold adversarial examples. In the model training process, we augment the original input $\bm{x}$ with the adversarial examples $\bm{x}^{adv}$, and utilize the augmented data to train the target model. } \label{fig:3} \end{figure*} AdvProp~(\cite{xie2020adversarial}) has been shown to be an effective training scheme to improve model generalization with adversarial examples. However, the off-manifold adversarial examples play a limited role in AdvProp, because the perturbations are unrelated to semantic meanings. Therefore, we advance the AdvProp framework by introducing our on-manifold adversarial augmentation, yielding the resultant problem formulation as \begin{align} \label{eq:17} \mathop{\arg \min} \limits_{\bm{\theta}} [&\mathbb{E}_{(\bm{x},y) \sim \mathbb{D}}(\ell(\bm{\theta},\bm{x}) \\ \nonumber &+\max \limits_{\Vert \bm{\delta} \Vert_p \leq \Tilde{\epsilon}_f}\ell(\bm{\theta}, \mathcal{W}^{-1}(\bm{z}_f \odot (\bm{1}+\bm{\delta}))))], \end{align} where $\bm{z}_f=\mathcal{F} \circ \mathcal{W}(\bm{x})$ and clean data $\bm{x}$ is augmented with our on-manifold adversarial examples $\mathcal{W}^{-1}(\bm{z}_f \odot (\bm{1}+\bm{\delta}))$. In this way, we can find the real pitfalls of a well-trained model on the clean data distribution and improve the model generalization. \subsection{Implementation Details} \label{sec:4.2} In this section, we will introduce the training process, which integrates our augmentation module AdvWavAug with the AdvProp structure. The whole network is shown in Fig.~\ref{fig:3}. The input image is first decomposed into different scales by the fast wavelet transform proposed in~\cite{mallat1999wavelet}. Then, the wavelet coefficients are multiplied by an attention map, which can be regarded as a frequency filter in signal processing. Next, the product of the wavelet coefficients and the attention map is sent to the corresponding inverse wavelet transform. The reconstructed adversarial examples are sent to the target model to calculate the gradient with respect to the attention map. 
Finally, with the optimal attention map, we can generate on-manifold adversarial examples, which serve as an intentional data augmentation to boost the model generalization to OOD data. Since AdvWavAug simply projects the gradient map onto the data manifold, it is a portable module that can be integrated into other training frameworks. The detailed algorithm is shown in Algorithm~\ref{alg:A}. \begin{algorithm} \caption{Pseudo code of adversarial training with AdvWavAug for $T$ epochs, step size $\alpha$, $N$ adversarial steps and a dataset of size $M$} \label{alg:A} \begin{algorithmic}[1] \State \textbf{Data:} A set of clean images $\boldsymbol{X}$ and labels $\boldsymbol{Y}$ \State \textbf{Result:} Network parameter $\bm{\theta}$ \For{$t=1\cdots T$} \For{$m=1\cdots M$} \State Sample $\bm{x}^c \subset \boldsymbol{X}$ with label $y \subset \boldsymbol{Y}$ \State $\bm{z}_f=\mathcal{W}(\bm{x}^c)$ \State $\bm{\delta}=\bm{0}$ \For{$n=1\cdots N$} \State $\bm{\delta}=\bm{\delta}+\alpha \cdot \nabla_{\bm{\delta}}\ell(\bm{\theta}, \mathcal{W}^{-1}(\bm{z}_f \odot (\bm{1}+\bm{\delta})))$ \EndFor \State $\bm{x}^a=\mathcal{W}^{-1}(\bm{z}_f \odot (\bm{1}+\bm{\delta}))$; \State $\bm{\theta}=\bm{\theta}-\nabla_{\bm{\theta}}(\ell(\bm{\theta},\bm{x}^c)+\ell(\bm{\theta},\bm{x}^a))$ \EndFor \EndFor \end{algorithmic} \end{algorithm} \section{Experiments} \label{sec:5} \subsection{Experimental Setup} \subsubsection{Datasets} To verify the effectiveness of our adversarial augmentation module, we conduct all of our training on the standard ImageNet~2012~(\cite{russakovsky2015imagenet}) training dataset. In addition to evaluating the clean accuracy on the ImageNet~2012 validation dataset, we also evaluate the generalization on its distorted versions as OOD data. \begin{itemize} \setlength{\itemsep}{0pt} \item ImageNet-A~(\cite{hendrycks2021natural}): The ImageNet-A dataset consists of 7,500 real-world, unmodified, and naturally occurring examples, drawn from some challenging scenarios. \item ImageNet-C~(\cite{hendrycks2019benchmarking}): The ImageNet-C dataset consists of 19 different corruption types grouped into noise, blur, weather, digital and extra corruptions. Each type has five levels of severity. \item ImageNet-R~(\cite{hendrycks2021many}): The ImageNet-R dataset contains 16 different renditions (e.g., art, cartoons, deviantart, etc.). \end{itemize} \subsubsection{Architectures} To verify the performance of AdvWavAug across architectures, we conduct model training on ResNet~(\cite{he2016deep}), Swin Transformer~(\cite{liu2021swin}) and Vision Transformer~(\cite{dosovitskiy2020image}). For ResNet, ResNet~50 (Res50), ResNet~101 (Res101) and ResNet~152 (Res152) are chosen to verify the performance on smaller and larger models. For Swin Transformer, we select the Swin Transformer Tiny (SwinT), Swin Transformer Small (SwinS) and Swin Transformer Base (SwinB). For Vision Transformer, we select the Vision Transformer Base (ViTB), Vision Transformer Large (ViTL) and Vision Transformer Huge (ViTH). Accordingly, we switch the original batch normalization in AdvProp into layer normalization, as transformers have no batch normalization. 
\begin{table}[h] \begin{center} \begin{minipage}{205pt} \caption{Wavelet settings of the AdvWavAug module with different step sizes in different frequency bands.} \label{tab:1}% \begin{tabular}{@{}lccccccc@{}} \toprule & $H_{1}$ & $H_{2}$ & $H_{3}$ & $H_{4}$ & $H_{5}$ & $H_{6}$ & $L$ \\ \midrule S1 & 0.50&0.07&0.05&0.03&0.02&0.010&0.001 \\ S2 & 0.40&0.06&0.04&0.03&0.02&0.010&0.001 \\ S3 & 0.30&0.05&0.04&0.03&0.02&0.015&0.015 \\ S4 & 0.10&0.30&0.05&0.03&0.02&0.010&0.010 \\ S5 & 0.09&0.09&0.13&0.15&0.17&0.150&0.150 \\ S6 & 0.09&0.09&0.09&0.11&0.13&0.150&0.170 \\ \botrule \end{tabular} \end{minipage} \end{center} \end{table} \subsubsection{Augmentation Module} We choose AdvWavAug as the default augmentation module to generate adversarial examples for training, and compare its performance with other three kinds of augmentation module: PGD adversarial augmentation (vanilla AdvProp), Gaussian noise augmentation and VQ-VAE(~\cite{razavi2019generating}) augmentation. During model training with AdvProp, we set up the generator of adversarial examples with one-step attack iteration. The PGD attacker is set up with $\epsilon=1/255$, number of iteration $n=1$, and attack step size $\alpha=1/255$. During model training with Gaussian noise augmentation, we add Gaussian noise with mean 0.0 and std 0.001 to the benign images and take these augmented images as the auxiliary inputs. During model training with VQ-VAE augmentation, we first approximate the data manifold by training VQ-VAE model based on ImageNet. Then, we generate on-manifold adversarial examples following the process in~\cite{stutz2019disentangling}. The perturbation noise is added in the latent space with one step attack and step size 0.007. Finally, the on-manifold images are sent into the auxiliary channel. During model training with our AdvWavAug, we adopt wavelet decomposition with 6 layers and sym8 mother wave as in~\cite{daubechies1993ten}. As modifications in high frequency may have lower contribution to the overall visual quality, the perturbation ranges in different frequency bands should be balanced by taking larger step sizes in higher frequency bands. Except for the ablation study of different wavelet settings, the basic setting of the step sizes during model training is Setting 3 (S3) in Tab.~\ref{tab:1}, in which $H_{1}$ to $H_{6}$ denote multi-scale decomposition (high frequency to low frequency bands) and $L$ denotes the lowest frequency band. To illustrate the effect of balancing step sizes in different frequency bands, we gradually shift the attention area from higher frequency bands to lower ones, i.e., from Setting 1 (S1) to Setting 6 (S6) in Tab.~\ref{tab:1}. \subsection{Adversarial Propagation with \textbf{AdvWavAug}} \label{sec:5.2} \subsubsection{Training Hyperparameters} \label{sec:5.2.1} For AdvProp training of ResNets, we follow these settings: SGD optimizer with momentum 0.9 and weight decay 5e-5; initial learning rate 0.2 with cosine learning rate scheduler; epoch 105 including 5 warm up epochs; batch size 256; random horizontal flipping and random cropping as the basic data augmentations. For AdvProp training of Swin Transformers, we follow the same basic training settings as in~\cite{liu2021swin}, except that training epoch is set to 105 including 5 warm up epochs, and batch size is set to 512 (for Swin-Tiny) and 256 (for Swin-Small) due to limited computing resources. 
Incidentally, we simply combine our proposed AdvWavAug and the AugMix strategy with no Jensen-Shannon divergence consistency loss in Sec.~\ref{sec:5.2.4}. For the normal adversarial training part in Sec.~\ref{sec:5.3.2}, to keep the efficiency of training, we set up PGD attackers with $\epsilon=2/255$, number of iterations $n=1$, and attack step size $\alpha=2/255$. Then we set up the one-step AdvWavAug attackers with competitive attack success rates. For model training of the VQ-VAE, we adopt the settings in~\cite{razavi2019generating}: mean square error loss; latent loss weight 0.25; learning rate 3e-4 with cycle scheduler; 560 epochs; batch size 256. For model fine-tuning with MAE, we adopt the settings in~\cite{he2021masked}. \begin{figure}[!t] \centering \includegraphics[width=0.98\linewidth]{flopv2.pdf} \caption{Generalization comparison on ImageNet. AdvWavAug boosts model performance over the original AdvProp with PGD attacker on ImageNet. The improvement on larger models is more significant. Our best result is based on Swin-Small (Swin-S), i.e., 81.6\% Top-1 Acc. on ImageNet.} \label{fig:advprop_clean} \end{figure} \subsubsection{Comparison with Gaussian Augmentation and Vanilla AdvProp} In this part, we compare our method with two typical augmentations: Gaussian noise augmentation (transform-based) and AdvProp (adversary-based). To align the training setting, we modify the training epochs of Swin Transformers to be the same as those of ResNets. We train all the models according to the training hyperparameters in Sec.~\ref{sec:5.2.1}. \noindent \textbf{ImageNet Validation Results} In Fig.~\ref{fig:advprop_clean}, we show the experimental results on the ImageNet validation set. We verify our method on the classical architecture ResNet and the current SOTA architecture Swin Transformer. As we can see, our proposed AdvWavAug method outperforms both the Gaussian augmentation and the original AdvProp with PGD on all the tested architectures and model sizes. The experimental results also verify our basic opinion that on-manifold adversarial examples can improve model generalization better than off-manifold adversarial examples. That is because on-manifold adversarial examples improve generalization by directly finding the real pitfalls of a model, instead of introducing noise-like adversarial patterns. In addition, we have also found that the performance improvement is related to the capacity of the model. Our method has greater improvement on models with large capacity. For example, a ResNet 152 trained with our method has achieved 80.5\% Top-1 Acc., which is 0.7\% higher than AdvProp and 1.7\% higher than the vanilla training baseline. \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Comparison of generalization with different training methods on different datasets, including ImageNet, ImageNet-A, ImageNet-R and ImageNet-C. We compare the baseline model, Gaussian Augmentation, AdvProp and AdvWavAug.} \label{tab:2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llcccc@{\extracolsep{\fill}}} \toprule% Model & Method & ImageNet & ImageNet-A & ImageNet-R & ImageNet-C \\ \cmidrule{3-6} && Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. 
\textcolor{red}{$\uparrow$} & mCE \textcolor{red}{$\downarrow$} \\ \midrule \multirow{4}*{Res50} & Baseline & 76.3& 2.5& 35.9& 77.4\\ & Gaussian & 77.0& 2.5& 36.8& 75.7\\ & AdvProp & 77.4& 3.5& 37.8& 69.9\\ & AdvWavAug&\bf77.5&\bf4.1&\bf39.3&\bf69.0\\ \midrule \multirow{4}*{Res101} & Baseline & 78.2& 5.5& 39.4& 70.8\\ & Gaussian & 78.7& 6.1& 40.4& 69.4\\ & AdvProp &\bf79.2& 8.1&\bf42.2& 65.5\\ & AdvWavAug&\bf79.2&\bf8.6&\bf42.2&\bf63.7\\ \midrule \multirow{4}*{Res152} & Baseline & 78.8& 7.3& 40.6& 69.1\\ & Gaussian & 79.0& 8.6& 40.8& 67.3\\ & AdvProp & 79.8& 11.4& 43.6&\bf62.3\\ & AdvWavAug&\bf80.5&\bf13.0&\bf45.9& 63.2\\ \midrule \multirow{4}*{SwinT} & Baseline & 78.6& 13.9& 38.1& 67.0\\ & Gaussian & 79.4&\bf16.6& 39.3& 63.2\\ & AdvProp & 79.5& 15.3& 40.6& 61.9\\ & AdvWavAug&\bf79.7& 15.8&\bf44.9&\bf56.1\\ \midrule \multirow{4}*{SwinS} & Baseline & 80.6& 21.2& 41.0& 61.1\\ & Gaussian & 80.2& 20.3& 40.9& 58.2\\ & AdvProp & 80.7& 20.3& 42.1& 56.9\\ & AdvWavAug&\bf81.6&\bf25.8&\bf47.7&\bf48.9\\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \noindent \textbf{Generalization on OOD Datasets} We also evaluate the models on the more challenging distorted ImageNet datasets: ImageNet-A, ImageNet-C, ImageNet-R. The results are summarized in Tab.~\ref{tab:2}. From the results of Gaussian augmentations, it has been shown that adversary-based augmentations generally outperform one kind of corrupted augmentation. This is owed to the fact that adversary-based augmentations actually explore the boundary of a classifier, which can benefit generalization on unseen corruptions, while the improvement of Gaussian noise augmentation is minor. Due to the slight improvement of Gaussian augmentations, we will focus on the comparisons between our method with the vanilla AdvProp. Taking Swin-Small as an example, AdvWavAug achieves the best performance with performance gains 5.5\% on ImageNet-A, 8.0\% on ImageNet-C and 5.6\% on ImageNet-R, compared with original AdvProp. It has been shown that model generalization to these distorted datasets is improved by our AdvWavAug, which indicates that on-manifold adversarial examples can further improve generalization on OOD data. The results support our proof that the models robust to on-manifold adversarial examples have smaller upper bound on OOD samples, compared with the models robust to off-manifold ones. \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Comparison of generalization with AdvWavAug and VQ-VAE augmentation on different datasets, including ImageNet, ImageNet-A, ImageNet-R and ImageNet-C. The training settings of Res50, Res101, Res152, SwinT and SwinS are based on the aligned ones.} \label{tab:3} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llcccc@{\extracolsep{\fill}}} \toprule% Model & Method & ImageNet & ImageNet-A & ImageNet-R & ImageNet-C \\ \cmidrule{3-6} && Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. 
\textcolor{red}{$\uparrow$} & mCE \textcolor{red}{$\downarrow$} \\ \midrule \multirow{2}*{Res50} & VQ-VAE & 77.4&\bf4.5&\bf41.0&\bf68.0 \\ & AdvWavAug&\bf77.5& 4.1& 39.3& 69.0 \\ \midrule \multirow{2}*{Res101} & VQ-VAE &\bf79.2&\bf10.2&\bf44.1&\bf62.4 \\ & AdvWavAug&\bf79.2& 8.6& 42.2& 63.7 \\ \midrule \multirow{2}*{Res152} & VQ-VAE & 80.3&\bf14.5&\bf47.1&\bf58.2 \\ & AdvWavAug&\bf80.5& 13.0& 45.9& 63.2 \\ \midrule \multirow{2}*{SwinT} & VQ-VAE & 79.4& 15.4&\bf45.7& 57.0 \\ & AdvWavAug&\bf79.7&\bf15.8& 44.9&\bf56.1 \\ \midrule \multirow{2}*{SwinS} & VQ-VAE &\bf81.7&\bf27.5&\bf50.7& 49.1 \\ & AdvWavAug& 81.6& 25.8& 47.7&\bf48.9 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Comparison of generalization with AdvWavAug and VQ-VAE augmentation on different datasets, including ImageNet, ImageNet-A, ImageNet-R and ImageNet-C. The training settings of SwinT, SwinS and SwinB are based on the standard ones.} \label{tab:4} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llccccc@{\extracolsep{\fill}}} \toprule% Model & Method & ImageNet & ImageNet-A & ImageNet-R & ImageNet-C \\ \cmidrule{3-6} && Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & mCE \textcolor{red}{$\downarrow$} \\ \midrule \multirow{3}*{SwinT} & Baseline & 81.2& 20.7& 41.9& 62.0 \\ & VQ-VAE & 81.2& 21.1& 42.3& 60.3 \\ & AdvWavAug&\bf81.4&\bf22.2&\bf47.0&\bf53.2 \\ \midrule \multirow{3}*{SwinS} & Baseline & 83.2& 32.1& 45.3& 54.9 \\ & VQ-VAE & 82.9& 31.6&\bf52.8& 46.2 \\ & AdvWavAug&\bf83.4&\bf35.0& 49.4&\bf45.8 \\ \midrule \multirow{3}*{SwinB} & Baseline & 83.5& 34.4& 47.2& 54.5 \\ & VQ-VAE & 83.3& 33.4& 49.4& 50.2 \\ & AdvWavAug&\bf83.9&\bf38.3&\bf51.1&\bf44.9 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \subsubsection{Comparison with VQ-VAE Augmentation} We also train a VQ-VAE model to approximate the data manifold of ImageNet. With the assitance of VQ-VAE model, we can generate on-manifold adversarial examples as shown in~\cite{stutz2019disentangling}. We first compare the performance of our method with the VQ-VAE augmentation based on aligned setting, i.e., the training epochs of Swin Transformers are set as 105. The results are summarized in Tab.~\ref{tab:3}. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{time.pdf} \caption{Comparison of computation cost with AdvWavAug and VQ-VAE augmentation based on Swin Transformer Small.} \label{fig:time} \end{figure} Then, to achieve SOTA results on Swin Transformers, we follow the standard settings as defined in~\cite{liu2021swin} and Swin Transformer Base model is added. The results are summarized in Tab.~\ref{tab:4}, in which the baseline models are taken directly from the official implementation in~\cite{liu2021swin}. Taking Swin Transformer Base with standard settings as an example, our method improves OOD generalization on ImageNet by 0.6\%, ImageNet-A by 4.9\%, ImageNet-R by 1.7\% and ImageNet-C by 5.3\%. It has been shown that the implementation of data manifold with AdvWavAug has competitive OOD generalization, compared with the VQ-VAE implementation based on aligned training settings, while our method outperforms the VQ-VAE implementation based on standard Swin Transformers. From the aligned settings, it seems that the convergence time of the VQ-VAE augmentation is faster than our AdvWavAug. 
However, the final converged results of the VQ-VAE implementation are worse than those of the AdvWavAug, which reveals the difficulty to approximate the data manifold with VQ-VAE. Generally speaking, approximating the data manifold, especially on the large scale datasets, places great demands on the representational ability of the VAEs, while our method overcomes it in a more stable and simple way with the assistance of a frequency analysis. We have also compare the total training time of both augmentations in Fig.~\ref{fig:time}. We conduct the experiments based on Swin Transformer Small with 32 NVIDIA GeForce RTX 3090 GPUs with 24 GB memory each GPU. The computation cost of AdvWavAug is nearly $59.4\%$ lower than the VQ-VAE augmentation, because our method requires no pre-training process. Although the number of convolution operations of our method is smaller than the VQ-VAE implementation, our method shows no obvious advantage during the model training process. That is because the wavelet transforms have not been fully optimized on the PyTorch implementation, while the convolutions in VQ-VAE are all optimized based on the PyTorch framework. We believe the training time will be shortened once the operations are fully optimized. Overall, it is shown that AdvWavAug can approximate the data-manifold in a more efficient way, because AdvWavAug takes advantage of sparsity in frequency domain to approximate data manifold. Considering both the augmentation effect and efficiency, AdvWavAug is a good replacement of VQ-VAE augmentation. \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Combination with another augmentation technique AugMix on different datasets, including ImageNet, ImageNet-A and ImageNet-R. We combine our $\rm AdvWavAug$ with AugMix, and compare its performance with the original AugMix on ResNet-50.} \label{tab:5} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llccccc@{\extracolsep{\fill}}} \toprule% Model & Method & ImageNet & ImageNet-A & ImageNet-R & ImageNet-C \\ \cmidrule{3-6} && Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & mCE \textcolor{red}{$\downarrow$} \\ \midrule \multirow{4}*{Res50} & Baseline &76.3&2.5&35.9&77.4 \\ \cmidrule{2-6} & AugMix &77.5&3.8&41.1&65.3 \\ \cmidrule{2-6} & AdvWavAug & 77.5& 4.3& 39.5& 68.0 \\ & +AugMix &\bf77.7&\bf5.6&\bf41.3&\bf63.3 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Comparison of generalization with MAE and MAE+AdvWavAug on different datasets, including ImageNet, ImageNet-A and ImageNet-R.} \label{tab:6} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llccccc@{\extracolsep{\fill}}} \toprule% Model & Method & ImageNet & ImageNet-A & ImageNet-R & ImageNet-C \\ \cmidrule{3-6} && Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. 
\textcolor{red}{$\uparrow$} & mCE \textcolor{red}{$\downarrow$} \\ \midrule \multirow{2}*{ViTB} & Baseline & 83.6& 35.9& 48.3& 51.7 \\ & AdvWavAug&\bf83.9&\bf37.9&\bf50.8&\bf47.0 \\ \midrule \multirow{2}*{ViTL} & Baseline & 85.9& 57.1& 59.9& 41.8 \\ & AdvWavAug&\bf86.2&\bf58.9&\bf61.3&\bf38.1 \\ \midrule \multirow{2}*{ViTH} & Baseline & 86.9& 68.2& 64.4& 33.8 \\ & AdvWavAug&\bf87.1& \bf 68.3&\bf65.4&\bf32.6 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \subsubsection{Integration with Other Data Augmentations} \label{sec:5.2.4} We have also combined our module with other data augmentation techniques, such as AugMix~(\cite{hendrycks2019augmix}). The effectiveness of the combined version on transformers has already been verified by the results in Tab.~\ref{tab:4}, because AugMix is integrated into the training process of Swin Transformers in the baseline setting. Therefore, we only provide the results based on ResNet-50 here. The results are shown in Tab.~\ref{tab:5}, in which the AugMix results are directly taken from~\cite{hendrycks2019augmix}. Our method has a similar augmentation effect to common data augmentation techniques such as AugMix. Moreover, our AugMix+$\rm AdvWavAug$ brings additional gains of 0.2\% on ImageNet, 1.8\% on ImageNet-A, 0.2\% on ImageNet-R, and 2.0\% on ImageNet-C, compared with the single AugMix operation. This indicates that the generalization boost of our method has no conflict with common data augmentations. This is because our method improves generalization by finding the worst cases (adversarial examples) on the manifold, while common data augmentations improve generalization by increasing data diversity. \subsubsection{Integration into Masked Autoencoders} It has been shown that masked autoencoders can learn the structure of the objects in an image. The on-manifold adversarial examples are more related to the semantic meanings of the input images, which helps the model gain a deeper understanding of the structure of an image. Therefore, we integrate our method into the fine-tuning process of the MAE method. The baseline models are directly taken from~\cite{he2021masked}. The results are shown in Tab.~\ref{tab:6}, in which our method improves ViTL on ImageNet by 0.3\%, ImageNet-A by 1.8\%, ImageNet-R by 1.4\% and ImageNet-C by 3.7\%. It has been shown that our method can further improve the OOD generalization of Vision Transformers pre-trained with MAE. That is because our AdvWavAug helps to explore the boundary of a classifier on the data manifold. \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Comparison of generalization with AdvWavAug and AdvWavAug+PGD on different datasets, including ImageNet, ImageNet-A, ImageNet-R and ImageNet-C.} \label{tab:7} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llccccc@{\extracolsep{\fill}}} \toprule% Model & Method & ImageNet & ImageNet-A & ImageNet-R & ImageNet-C \\ \cmidrule{3-6} && Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc.
\textcolor{red}{$\uparrow$} & mCE \textcolor{red}{$\downarrow$} \\ \midrule \multirow{2}*{Res50} & AdvWavAug &\bf77.5& 4.1& 39.3& 69.0 \\ & +PGD & 77.4&\bf4.3&\bf39.5&\bf68.0 \\ \midrule \multirow{2}*{Res101} & AdvWavAug & 79.2& 8.6& 42.2& 63.7 \\ & +PGD &\bf79.5&\bf8.9&\bf43.4&\bf63.0 \\ \midrule \multirow{2}*{Res152} & AdvWavAug &\bf80.5&\bf13.0&\bf45.9& 63.2 \\ & +PGD & 79.1& 9.6& 42.1&\bf62.7 \\ \midrule \multirow{2}*{SwinT} & AdvWavAug &\bf79.7& 15.8&\bf44.9&\bf56.1 \\ & +PGD &\bf79.7&\bf16.8& 43.9& 56.6 \\ \midrule \multirow{2}*{SwinS} & AdvWavAug &\bf81.6&\bf25.8&\bf47.7&\bf48.9 \\ & +PGD & 80.5& 20.9& 44.9& 53.4 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Comparison of the normal adversarial training scheme on ResNet-50 with adversarial examples generated by PGD and our AdvWavAug. We have chosen four datasets, including Top-1 Acc. (\%) on ImageNet, Top-1 Acc. (\%) on ImageNet-A, Top-1 Acc. (\%) on ImageNet-R and mCE (\%) on ImageNet-C.} \label{tab:8} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llcccc@{\extracolsep{\fill}}} \toprule% Model & Method & ImageNet & ImageNet-A & ImageNet-R & ImageNet-C \\\cmidrule{3-6}% && Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & Top-1 Acc. \textcolor{red}{$\uparrow$} & mCE \textcolor{red}{$\downarrow$} \\\midrule \multirow{3}*{Res50} & Baseline &\bf76.3&2.5&35.9&77.4 \\ \cmidrule{2-6} & PGD-AT &75.9&2.6&\bf74.7&38.6 \\ \cmidrule{2-6} & AdvWavAug-AT &76.1&\bf2.9&74.0&\bf37.0 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \subsubsection{Visualization of Adversarially Augmented Data} In Fig.~\ref{fig:visual}, we show adversarial examples generated by AdvWavAug, which have more natural details and whose perturbations are more related to the semantic content. We have also compared our AdvWavAug with PGD in terms of the image quality of the adversarial examples (e.g., FID and LPIPS) in Appendix~\ref{appendix:c}. The results demonstrate that AdvWavAug can generate adversarial examples with desirable attack performance and a closer distance to the data manifold than PGD. \begin{figure}[!t] \centering \includegraphics[width=0.98\columnwidth]{visual.pdf} \caption{Visualization of adversarial examples generated by AdvWavAug and PGD. It can be seen that adversarial examples of our method are more natural, while PGD adversarial examples have many noise-like patterns, or off-manifold perturbations.} \label{fig:visual} \end{figure} \begin{table*}[h]\small \begin{center} \begin{minipage}{0.85\textwidth} \caption{Comparison of AdvWavAug generalization with different wavelet settings. We have chosen ResNet-50 as the base model and compare the Top-1 Acc. (\%). From Setting 1 (S1) to Setting 6 (S6), the weights of perturbations gradually shift from high frequency bands to low frequency bands.} \label{tab:9} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{\extracolsep{\fill}}} \toprule% Model & \multicolumn{6}{@{}c@{}}{Wavelet Setting} \\ \cmidrule{2-7} &1&2&3&4&5&6 \\ \midrule Res50 &76.7&76.8&\bf77.5&77.1&77.0&76.9 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \subsection{Ablation Study} \label{sec:5.3} \subsubsection{Integration with Off-manifold Adversarial Augmentation} Although the performance of on-manifold adversarial augmentation is generally better than that of the off-manifold one, the effect of combining both augmentations is still unknown.
We randomly choose adversarial examples from the AdvWavAug and PGD attackers as augmentation samples. The detailed results are shown in Tab.~\ref{tab:7}. Interestingly, the performance of AdvWavAug+PGD decreases on some of the tested models. It means that introducing off-manifold adversarial examples cannot stably improve model generalization and may even conflict with on-manifold adversarial examples. This phenomenon matches our analysis that off-manifold adversarial examples cannot find the real pitfalls of a model. Changing the model structure may invalidate the generalization improvement of off-manifold adversarial examples. Our method can find the real pitfalls of a model and consistently improves generalization across different model architectures. \subsubsection{Normal Adversarial Training} \label{sec:5.3.2} Here we conduct adversarial training by following the same objective function in Eq.~\eqref{eq:3} and Eq.~\eqref{eq:17}, which means we feed the clean and adversarial data to the network in a mini-batch. Tab.~\ref{tab:8} shows the experimental results. Except for ImageNet-R, our AdvWavAug outperforms PGD on the other datasets under normal adversarial training. The results show that our method can also better improve the overall generalization of the model under normal adversarial training. This is because on-manifold adversarial examples stay on the data manifold and are more suitable for normal adversarial training. \subsubsection{Training with Different Wavelet Settings} We have also studied the relationship between the modified frequency bands and the generalization performance. We gradually shift the attention from high frequency bands to low frequency bands as the wavelet setting ranges from Setting 1 (S1) to Setting 6 (S6) in Tab.~\ref{tab:1}. Results in Tab.~\ref{tab:9} show that wavelet Setting 3 (S3), which modifies relatively higher frequency bands together with subtle modifications to the remaining bands, performs best on ResNet-50. This is because modifications to low frequency bands are difficult to control and may lead to adversarial examples detached from the manifold. The chosen Setting 3 (S3) is a relatively balanced setting. \section{Conclusion} \label{sec:6} In this paper, we have proposed AdvWavAug, a new data augmentation algorithm based on on-manifold adversarial examples in the frequency domain to improve OOD generalization, followed by an AdvProp training scheme to minimize the loss function of both the clean samples and the on-manifold adversarial examples. We have provided a theoretical proof that models robust to on-manifold adversarial examples have a smaller upper bound of OOD generalization than those robust to off-manifold adversarial examples, which means on-manifold adversarial augmentation can better improve OOD generalization. We have conducted extensive experiments to verify the effectiveness of our method. We have achieved new SOTA results on two transformer-based architectures, including Swin Transformers and Vision Transformers pre-trained by MAE. The future work of this paper lies in two aspects. We will build an adversarial dataset based on the adversarial examples generated by our method, which can be used to test the robustness of a classifier to more natural adversarial examples. Besides, our method can be regarded as a new paradigm for generating adversarial examples, which can be extended to other research involving adversarial examples.
For example, in some studies~(\cite{ho2020contrastive}) that combine adversarial augmentations with contrastive learning, we can replace the original adversarial module with our method. \backmatter \bmhead{Acknowledgments} This work was supported by the National Natural Science Foundation of China (No.~61976094) and the National Key Research and Development Program of China (Nos.~2020AAA0106000 and 2020AAA0104304). \begin{appendix} \section*{Outline of the Appendix} In this appendix, we provide additional technical details of the proposed method. In Appendix~\ref{appendix:a}, we provide a detailed proof of the upper bound of OOD generalization based on on-manifold adversarial robustness in Sec.~\ref{sec:3.1}. Then, in Appendix~\ref{appendix:b}, we give a detailed analysis of the perturbation range in the frequency domain defined in Sec.~\ref{sec:4.1}. In Appendix~\ref{appendix:c}, we provide a detailed analysis of the on-manifold adversarial examples generated by our method from the perspective of both image quality and attack performance. In Appendix~\ref{appendix:d}, we provide detailed results on different corruptions in ImageNet-C. \section{Detailed Proof for Relationship Between OOD Generalization and On-manifold Adversarial Robustness} \label{appendix:a} Lemma~\ref{lemma:1} shows that models robust to on-manifold perturbations are ideally equivalent to those robust to regular perturbations. Therefore, we can obtain the conclusion of Theorem~\ref{th2} in a way similar to~\cite{yi2021improved}. The detailed proof still starts from the definition of the cover of a point set. \begin{definition}[\cite{wainwright2019high}] For a point set $\mathcal{Z}$, the $\epsilon$-cover in norm $\|\cdot\|_\infty$ is defined as $\mathcal{Z}_c=\{z_1, \cdots , z_N\}$, in which, for any $z \in \mathcal{Z}$, there exists $z_i \in \mathcal{Z}_c$ such that $\|z-z_i\|_\infty \leq \epsilon$. The covering number can be defined as $N(\mathcal{Z},\epsilon, \|\cdot\|_\infty)=\inf\{N \in \mathbb{N}~\mid$ $\mathcal{Z}_c=\{z_1, \cdots , z_N\}$ is an $\epsilon$-cover of $\mathcal{Z}$ in the $\|\cdot\|_\infty$ norm\}. \end{definition} Then, we come to the proof of Theorem~\ref{th2}, which is borrowed from~\cite{yi2021improved}. \begin{proof} As we transfer the image data space into the latent space, we only need to cover the latent space, which has an effect equivalent to a cover of the image data space. An $\epsilon$-cover in norm $\|\cdot\|_\infty$ of the image space $\mathbb{X}$ is equivalent to an $\epsilon_m$-cover in norm $\|\cdot\|_\infty$ of the latent space $\mathbb{Z}$. Just as the conclusion in~\cite{yi2021improved} that $N(\mathcal{X}, \epsilon, \|\cdot\|_\infty) \leq (2d_0)^{(2D/\epsilon^2+1)}=N_r$, we can obtain the inequality $N(\mathcal{Z}, \epsilon_m, \|\cdot\|_\infty) \leq (2d)^{(2D'/\epsilon_m^2+1)}=N_m$, in which $\mathcal{Z}$ is the latent space, $d$ is the number of dimensions of the latent code, and $D'$ is the diameter of the support in the latent space. Then, we can construct an $\epsilon_m$-cover in norm $\|\cdot\|_\infty$ of $\mathcal{Z}$ with pairwise disjoint sets $\{C_1,\cdots,C_{N_m}\}$. For each $c_m,c_n \in C_i$, we have $\|c_m-c_n\|_{\infty} \leq \epsilon_m$. We denote by $P_0(C_i)$ the probability that a sampled $z=g(x)$ ($x \sim P_0$) lies in the subset $C_i$, and by $P_n(C_i)$ the empirical probability that $z=g(x)$ ($x\in \mathcal{X}$) lies in $C_i$.
Due to Lemma~\ref{lemma:1}, we have \begin{align} \gamma(\epsilon_o,p)=&\Biggl\vert\mathbb{E}_{P_{0}}\left[\sup_{\|\delta\|_\infty\leq \epsilon_m}\ell (\bm{\theta}, g^{-1}(\bm{z}+\bm{\delta}))\right] \\ \nonumber &-\mathbb{E}_{P_{0}}(\ell (\bm{\theta}, \bm{x})) \Biggr\vert \\ \nonumber =&\Biggl\vert \sum_{i=1}^{N_m} \mathbb{E}_{P_0} \left[ \sup_{\|\delta\|_\infty\leq \epsilon_m}\ell (\bm{\theta}, g^{-1}(\bm{z}+\bm{\delta})) \mid z \in C_i \right] \\ \nonumber &\cdot P_0(C_i)- \mathbb{E}_{P_{0}}(\ell (\bm{\theta}, \bm{x}))\Biggr\vert \\ \nonumber \leq& \Biggl\vert \sum_{i=1}^{N_m} \mathbb{E}_{P_0} \left[ \sup_{\|\delta\|_\infty\leq \epsilon_m}\ell (\bm{\theta}, g^{-1}(\bm{z}+\bm{\delta})) \mid z \in C_i \right] \\ \nonumber &\cdot P_n(C_i)- \frac{1}{n}\sum_{j=1}^{n}\ell (\bm{\theta}, \bm{x}_j)\Biggr\vert \\ \nonumber &+M\sum_{i=1}^{N_m}\left\vert P_0(C_i)-P_n(C_i) \right\vert \\ \nonumber \leq& \Biggl\vert\frac{1}{n} \sum_{i=1}^{N_m} \sum_{z_j \in C_i} \sup_{z \in B(C_i,\epsilon_m)} \vert \ell (\bm{\theta}, g^{-1}(\bm{z})) \\ \nonumber &-\ell(\bm{\theta}, g^{-1}(\bm{z}_i)) \vert\Biggr\vert +M\sum_{i=1}^{N_m}\left\vert P_0(C_i)-P_n(C_i) \right\vert \\ \nonumber \leq& \frac{1}{n} \sum_{j=1}^n \sup_{\|\delta\|_\infty \leq 2\epsilon_m} \vert \ell (\bm{\theta}, g^{-1}(\bm{z}_j+\bm{\delta})) \\ \nonumber &-\ell(\bm{\theta}, g^{-1}(\bm{z}_j)) \vert +M\sum_{i=1}^{N_m}\left\vert P_0(C_i)-P_n(C_i) \right\vert \\ \nonumber \leq& \tau+M\sum_{i=1}^{N_m}\left\vert P_0(C_i)-P_n(C_i) \right\vert, \end{align} in which $B(C_i,\epsilon_m)$ represents all the points $p_m$ such that, for each $c_n \in C_i$, $\|p_m-c_n\|_{\infty}\leq\epsilon_m$. By the same proposition in~\cite{wellner2013weak}, we get the conclusion in Theorem~\ref{th2}. \end{proof} Considering the upper bound obtained from the regular adversarial examples in Theorem~\ref{th1} and that obtained from the on-manifold adversarial examples in Theorem~\ref{th2}, we notice that the only differing factor between Eq.~\ref{eq:7} and Eq.~\ref{eq:11} is the covering number. What should be noticed is that the perturbation range of the latent code should be different from the original perturbation range in the data space, because the probability density functions in these two spaces are different. Supposing that the ratio of the dimension $d_0$ of the original space to the dimension $d$ of the latent space is $k=\frac{d_0}{d} \geq 1$, we have the following. \begin{proof} Considering the difference between $N_r$ and $N_m$, we get \begin{align} \label{eq:19} \Delta N&=\log N_r-\log N_m \\ \nonumber &= \left(\frac{2D}{\epsilon^2}+1\right)\log(2d_0)-\left(\frac{2D'}{\epsilon_m^2}+1\right)\log(2d) \\ \nonumber &=\left(\frac{2D}{\epsilon^2}+1\right)\log(2d_0)-\left(\frac{2D'}{k^2 \epsilon^2}+1\right)\log \left(\frac{2d_0}{k}\right) \\ \nonumber &=\left(\frac{2D}{\epsilon^2}-\frac{2D'}{k^2\epsilon^2}\right)\log(2d_0)+\left(\frac{2D'}{k^2\epsilon^2}+1\right)\log k \\ \nonumber &=\left(\frac{a}{k^2}+1\right)\log k-\frac{ab}{k^2}, \end{align} in which the notations are simplified as $a=\frac{2D'}{\epsilon^2}$ and $b=\log (2d_0)$. We can obtain the derivative of Eq.~\ref{eq:19} as \begin{align} \label{eq:20} \frac{d \Delta N}{dk}&=\frac{d}{dk}\left[\left(\frac{a}{k^2}+1\right)\log k-\frac{ab}{k^2}\right] \\ \nonumber &=\frac{1}{k}\left(\frac{a}{k^2}\left(\log k^2 +2b+1\right)+1\right). \end{align} It can be easily obtained that $\frac{d \Delta N}{dk} >0$ when $k>1$, $a>0$ and $b>0$. Therefore, $\Delta N$ is monotonically increasing when $k>1$.
Then we get the inequality $U_m \leq U_r$, and the inequality is tight when $k=1$. From the analysis above, we can conclude that models robust to on-manifold adversarial examples have a smaller upper bound on OOD generalization, compared with models robust to normal adversarial examples. \end{proof} \section{Detailed Analysis for Adversarial Augmentation of AdvWavAug} \label{appendix:b} In Sec.~\ref{sec:4.1}, we formulate the multiplicative perturbation in the frequency domain as Eq.~\ref{eq:15}. The perturbation range can be derived from the regular perturbation constraint. Comparing the latent code of the regular adversarial example $\bm{z}^{adv}_r$ and that of our proposed adversarial example $\bm{z}^{adv}_m$ in the frequency domain \begin{align} \bm{z}^{adv}_r&=\mathcal{W}(\bm{x})+\mathcal{W}(\bm{\delta}), \label{eq:22}\\ \bm{z}^{adv}_m&=\mathcal{W}(\bm{x}) \odot (\textbf{1}+\bm{\delta}_{m}), \label{eq:23} \end{align} the perturbation in Eq.~\eqref{eq:22} should be lower than that in Eq.~\eqref{eq:23}. By simply setting all the wavelet coefficients below the threshold $T$ to zero, we get \begin{equation} \label{eq:24} \| \bm{\delta}_{m} \|_{p} \leq \frac{\| \mathcal{W}(\bm{\delta}) \|_{p}}{\sqrt[p]{n} T}, \end{equation} in which $n$ is the number of all the non-sparse coefficients. As we conduct an orthogonal wavelet transform, the $L_2$-norms of the coefficients in the frequency domain and the spatial domain are equal by Parseval's theorem. Therefore, there exist two positive constants $P$ and $Q$ such that \begin{align} \label{eq:25} \| \mathcal{W}(\bm{\delta}) \|_{p} &\leq P \| \mathcal{W}(\bm{\delta}) \|_{2} \\ \nonumber &= P \| \bm{\delta} \|_{2} \\ \nonumber &\leq PQ\| \bm{\delta} \|_{p} \\ \nonumber &\leq PQ\epsilon_f. \end{align} By combining Eq.~\eqref{eq:24} and Eq.~\eqref{eq:25}, we can obtain the constraint on $\delta_{m}$ as \begin{equation} \label{eq:26} \| \delta_{m} \|_{p} \leq \frac{PQ\epsilon_f}{\sqrt[p]{n}T} = \Tilde{\epsilon}_f, \end{equation} in which $\Tilde{\epsilon}_f$ is the new upper bound of $\| \delta_{m} \|_{p}$. \section{Image Quality of On-manifold Adversarial Examples} \label{appendix:c} In this section, we compare the effectiveness of the on-manifold adversarial examples generated by our AdvWavAug with regular adversarial examples, e.g., PGD~(\cite{madry2017towards}) and DIM~(\cite{xie2019improving}), from the perspective of both image quality and attack performance. \subsection{White-box Attack Settings} During the white-box attack, we mainly compare the performance of the PGD and AdvWavAug attack modules. We set up PGD with $\epsilon=1/255$, iteration number $n=1$ and step size $\alpha=1/255$, and the AdvWavAug module with iteration number $n=1$ and the step size of Setting 3 (S3) in Tab.~\ref{tab:1}. To prevent the clip operation from destroying the continuity of attacks, we use a technique that sets the constraints at different scales, by setting the step size of each step instead of the final constraint. In this way, the attack perturbation $\Tilde{\epsilon}$ can also be limited by an adaptive constraint. \begin{figure*}[tp] \centering \includegraphics[width=0.96\linewidth]{app1.pdf} \caption{The ASR-FID curves of different target models. The curves in (a)-(f) exhibit the transferability to other models with AdvIncV3, IncV3, Res50, Res152, Swin-T and EffiAp as the target models, respectively. The curves with higher positions mean better attack performance given the same image quality.
It has been shown that adversarial examples generated by AdvWavAug (solid lines) have better performance than adversarial examples generated by DIM (dashed lines), given the same target model.} \label{fig:app1} \end{figure*} \begin{figure*}[tp] \centering \includegraphics[width=0.96\linewidth]{app2.pdf} \caption{The ASR-LPIPS curves of different target models. The curves in (a)-(f) exhibit the transferability to other models with AdvIncV3, IncV3, Res50, Res152, Swin-T and EffiAp as the target models, respectively. The curves with higher positions mean better attack performance given the same image quality. It has been shown that adversarial examples generated by AdvWavAug (solid lines) have better performance than adversarial examples generated by DIM (dashed lines), given the same target model.} \label{fig:app2} \end{figure*} \subsection{Black-box Attack Settings} To demonstrate that our method can be plugged into any attack method and make the adversarial examples closer to the manifold, we also conduct black-box attacks. During the black-box attack, we adopt DIM as the base attack method, which is combined with our method to form AdvWavAug-DIM. As perturbations in the frequency domain cannot be made equivalent to perturbations in the spatial domain, we compare the attack performance under similar image quality, which is further described in Sec.~\ref{sec:c.5}. We set up DIM with a probability of 0.7 to transform the input images, and the transformations include random scaling, cropping and padding. The maximum perturbation range of DIM is set as $\epsilon=16/255$. We intend to show the relationship between black-box attack performance, such as the attack success rate (ASR), and image quality, such as the Fréchet Inception Distance (FID)~(\cite{heusel2017gans}) and the Learned Perceptual Image Patch Similarity (LPIPS)~(\cite{zhang2018unreasonable}). To comprehensively observe the attack process, in which the ASR gradually increases and the image quality gradually decreases as the attack goes on, we draw a curve to describe this relationship, in which each point denotes intermediate adversarial examples, with the x-axis denoting the image quality score and the y-axis denoting the ASR. To obtain adversarial examples with different perturbation ranges, we would need to gradually change our perturbation settings, which would make the attack process discontinuous. A better way is to set up a small step size and a large iteration number. In this way, different perturbation ranges can be simulated, because the step size and the iteration number indirectly define the perturbation range. As the iteration goes on, the perturbation range gradually increases. Besides, the attack process will be smoother since it is continuous. Therefore, we set the number of iterations to $n=100$ and the step size to $16/(255*100)$. The maximum perturbation range $\epsilon=16/255$ is reached only at the end of the iterations. We also remove the clipping operations that limit the perturbations to the perturbation ranges, as the perturbation ranges are dynamic in our settings. As the black-box performance has no relation to the model training process, we select the parameters with the best performance. We set up the AdvWavAug-DIM module with the same DIM settings, the step size of Setting 3 (S3) in Tab.~\ref{tab:1} and the adaptive constraint.
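For concreteness, the following short Python sketch (illustrative only, not our actual implementation) shows how the ASR--image-quality curves are assembled: the attack is run with a small step size for many iterations, and one (quality, ASR) point is recorded after each step. The callables \texttt{attack\_step}, \texttt{target\_model} and \texttt{quality\_score} are placeholders to be supplied by the user.
\begin{verbatim}
# Illustrative sketch (not the authors' code): assemble an ASR vs. image-quality
# curve for an iterative transfer attack.  `attack_step`, `target_model` and
# `quality_score` are assumed callables supplied by the user.
from typing import Callable, List, Tuple
import numpy as np

def attack_quality_curve(
    x_clean: np.ndarray,            # clean images, shape (N, ...)
    y_true: np.ndarray,             # ground-truth labels, shape (N,)
    attack_step: Callable[[np.ndarray, np.ndarray], np.ndarray],
    target_model: Callable[[np.ndarray], np.ndarray],   # returns predicted labels
    quality_score: Callable[[np.ndarray, np.ndarray], float],
    n_iter: int = 100,
) -> List[Tuple[float, float]]:
    """Run n_iter small attack steps; record one (quality, ASR) point per step."""
    curve: List[Tuple[float, float]] = []
    x_adv = x_clean.copy()
    for _ in range(n_iter):
        # one small step on the surrogate model keeps the attack process continuous
        x_adv = attack_step(x_adv, y_true)
        asr = float(np.mean(target_model(x_adv) != y_true))
        curve.append((quality_score(x_clean, x_adv), asr))
    return curve
\end{verbatim}
The recorded pairs are exactly the points plotted in Fig.~\ref{fig:app1} and Fig.~\ref{fig:app2}, with the image quality score on the x-axis and the ASR on the y-axis.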
\begin{table}[h]\tiny \begin{center} \begin{minipage}{205pt} \caption{The white-box attack performance, including ASR (\%), FID (\%), LPIPS (\%) and SCORE (\%), of PGD vs AdvWavAug for different models.} \label{tab:white_box}% \begin{tabular}{@{}llcccc@{}} \toprule Model & Attack & ASR & FID & LPIPS & SCORE \\ \midrule \multirow{2}*{Res50} & PGD &62.7&\bf96.1&97.8&58.9 \\ & AdvWavAug &\bf70.7&95.9&\bf99.3&\bf67.3 \\ \midrule \multirow{2}*{Res152} & PGD &82.7&95.9&97.6&77.4 \\ & AdvWavAug &\bf83.5&\bf96.1&\bf99.3&\bf79.6 \\ \midrule \multirow{2}*{IncV3} & PGD &71.8&72.8&97.2&50.8 \\ & AdvWavAug &\bf72.9&\bf91.9&\bf99.3&\bf66.5 \\ \midrule \multirow{2}*{AdvIncV3} & PGD &40.0&97.4&98.9&38.5 \\ & AdvWavAug &\bf40.8&\bf97.6&\bf99.6&\bf39.6 \\ \midrule \multirow{2}*{EffiAP} & PGD &37.5&79.9&96.2&28.8 \\ & AdvWavAug &\bf37.7&\bf96.6&\bf98.4&\bf35.8 \\ \midrule \multirow{2}*{SwinT} & PGD &55.2&\bf98.5&99.0&53.8 \\ & AdvWavAug &\bf60.5&96.9&\bf99.3&\bf58.2 \\ \botrule \end{tabular} \end{minipage} \end{center} \end{table} \subsection{Evaluation Metrics for Attacks} \label{sec:c.3} We adopt some image quality metrics including FID and LPIPS to measure the distance to the manifold. In this section, we provide the calculation formulation of the attack success rate (ASR) and the normalized image quality metrics including FID and LPIPS. The normalization process follows the CVPR 2021 challenge: Unrestricted Adversarial Attacks on ImageNet. We change some detailed parameters to make the difference more obvious, but the scores are still limited to the range $[0,1]$. Suppose that the $N$ original clean images $X=\{x_1,x_2,\cdots,x_N\}$ are perturbed into adversarial examples $\hat{X}=\{\hat{x}_1,\hat{x}_2,\cdots,\hat{x}_N\}$. The ASR measures the attack ability as \begin{equation} {\rm ASR}=\frac{| \{ \hat{x} \in \hat{X} \mid \mathcal{F}(\hat{x}) \ne y\} |}{N}, \end{equation} in which $\mathcal{F}(\hat{x})$ is the output of a classifier and $y$ is the corresponding ground-truth label. One of the image quality metrics is FID, which represents the naturalness of a generated image. The normalized version is expressed as \begin{equation} {\rm FID}=\sqrt{1-\frac{\min(fid(X,\hat{X}),ubf)}{ubf}}, \end{equation} in which $fid$ is calculated following the original setting in~\cite{heusel2017gans} and $ubf$ is the upper bound of the original $fid$ score. We set $ubf=10$ for the white-box attack and $ubf=200$ for the black-box attack. Another image quality metric is LPIPS, which represents the perceptual distance between two images. The normalized LPIPS score is expressed as \begin{equation} {\rm LPIPS}=\sqrt{1-2*(\min(\max(lpips,lbl),ubl))}, \end{equation} in which $lpips$ is the original score defined in~\cite{zhang2018unreasonable}, and $ubl$ and $lbl$ are the upper and lower bounds of the original $lpips$ score. We set $lbl=0.0$, $ubl=0.5$ in white-box attacks and $lbl=0.1$, $ubl=0.6$ in black-box attacks. The total score SCORE, combining attack performance and image quality scores, is expressed as \begin{equation} \rm SCORE=100 *ASR*FID*LPIPS. \end{equation} \subsection{Experimental Results on White-box Attacks} The results in Tab.~\ref{tab:white_box} show that our AdvWavAug can generate adversarial examples with higher ASR, FID, LPIPS and SCORE, compared to PGD. The results demonstrate that AdvWavAug can generate adversarial examples with desirable attack performance and a closer distance to the data manifold than PGD.
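For concreteness, the following Python sketch (illustrative only, not our actual implementation) evaluates the normalized scores defined in Sec.~\ref{sec:c.3}. The raw $fid$ and $lpips$ values are assumed to come from the reference implementations of~\cite{heusel2017gans} and~\cite{zhang2018unreasonable}; the clamping of $lpips$ to $[lbl, ubl]$ and the flooring at zero are our reading of the formulas above.
\begin{verbatim}
# Illustrative sketch (not the authors' code) of the normalized scores in Sec. C.3.
# Raw fid and lpips values are assumed to be computed elsewhere; only the
# normalization step is shown here.
import math

def normalized_scores(asr, fid_raw, lpips_raw, ubf, lbl, ubl):
    fid_n = math.sqrt(1.0 - min(fid_raw, ubf) / ubf)        # sqrt(1 - min(fid, ubf)/ubf)
    lpips_clamped = min(max(lpips_raw, lbl), ubl)            # clamp lpips to [lbl, ubl]
    lpips_n = math.sqrt(max(1.0 - 2.0 * lpips_clamped, 0.0)) # sqrt(1 - 2 * clamped), >= 0
    return {"ASR": asr, "FID": fid_n, "LPIPS": lpips_n,
            "SCORE": 100.0 * asr * fid_n * lpips_n}

# White-box bounds from the text: ubf = 10, lbl = 0.0, ubl = 0.5, e.g.
# normalized_scores(asr=0.7, fid_raw=0.8, lpips_raw=0.01, ubf=10, lbl=0.0, ubl=0.5)
\end{verbatim}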
\renewcommand\arraystretch{1.4} \begin{sidewaystable*}\small \sidewaystablefn% \begin{center} \begin{minipage}{\textheight} \caption{Comparison of generalization with different training methods on different corruptions in ImageNet-C. We compare the baseline model, Gaussian Augmentation, AdvProp and AdvWavAug.} \label{tab:2.2} \begin{tabular*}{\textheight}{@{\extracolsep{\fill}}lllccccccccccccccc@{\extracolsep{\fill}}} \toprule% & Method & mCE \textcolor{red}{$\downarrow$} & \multicolumn{3}{@{}c@{}}{Noise} & \multicolumn{4}{@{}c@{}}{Blur} & \multicolumn{4}{@{}c@{}}{Weather} & \multicolumn{4}{@{}c@{}}{Digital} \\\cmidrule{4-6} \cmidrule{7-10} \cmidrule{11-14} \cmidrule{15-18} & & & Gau & shot & Imp & Def & Glass & Mot & Zoom & Snow & Frost & Fog & Bright & Cont & Elas & Pixel & JPEG \\ \midrule \multirow{4}*{\rotatebox{90}{Res50}} & Baseline &77.4&78.3&79.5&82.0&73.7&89.7&78.1&81.0&82.6&77.5&68.3&57.8&72.2&85.5&77.4&77.7 \\ & Gaussian &75.7&75.5&76.7&78.3&73.8&88.6&76.3&80.3&79.2&75.7&69.0&57.2&71.6&84.3&74.7&74.8 \\ & AdvProp &69.9&74.0&74.2&76.2&69.4&\bf79.6&74.0&\bf73.8&\bf74.0&\bf67.6&\bf65.0&\bf52.2&68.3&\bf77.1&61.2&\bf61.4 \\ & AdvWavAug &\bf69.0&\bf66.7&\bf68.4&\bf68.8&\bf68.4&81.0&\bf73.5&74.9&78.2&71.1&66.6&53.5&\bf67.9&77.2&\bf56.2&63.3 \\ \midrule \multirow{4}*{\rotatebox{90}{Res101}} & Baseline &70.8&71.8&74.0&73.4&68.0&82.8&73.9&77.3&74.8&71.5&63.8&53.0&66.4&79.1&63.4&68.0 \\ & Gaussian &69.4&68.4&69.8&69.7&67.7&82.6&73.7&75.2&74.4&69.8&61.7&52.0&66.4&78.2&62.7&68.0 \\ & AdvProp &65.5&65.4&67.7&67.7&64.1&\bf75.5&\bf67.0&\bf69.7&\bf71.2&\bf65.6&64.9&\bf49.3&66.7&\bf70.7&58.7&\bf58.5 \\ & AdvWavAug &\bf63.7&\bf60.2&\bf61.5&\bf61.8&\bf62.9&75.7&68.2&70.2&72.4&65.9&\bf62.3&49.7&\bf63.2&70.8&\bf50.4&60.0 \\ \midrule \multirow{4}*{\rotatebox{90}{Res152}} & Baseline &69.1&69.0&70.9&71.2&65.5&83.3&71.1&74.0&73.6&70.0&61.8&51.6&65.2&77.5&63.1&68.9 \\ & Gaussian &67.3&66.8&68.5&67.8&66.3&81.0&67.5&71.8&72.2&68.9&61.9&49.9&63.8&76.2&60.8&66.2 \\ & AdvProp &\bf62.3&62.2&63.5&63.9&\bf61.9&\bf71.7&\bf64.4&\bf67.7&68.9&\bf62.0&60.9&48.0&64.1&\bf68.7&\bf52.1&\bf55.4 \\ & AdvWavAug &63.2&\bf62.0&\bf63.4&\bf63.6&62.4&74.9&65.8&69.9&\bf68.1&62.8&\bf56.3&\bf47.5&\bf61.9&70.7&60.4&57.8 \\ \midrule \multirow{4}*{\rotatebox{90}{SwinT}} & Baseline &67.0&57.1&58.3&57.7&70.9&82.4&69.4&78.1&62.0&58.8&60.5&53.9&50.0&80.1&79.2&86.5 \\ & Gaussian &63.2&52.8&53.8&54.7&69.2&80.9&66.9&77.7&56.3&52.9&54.9&49.8&45.7&78.3&74.9&79.3 \\ & AdvProp &61.9&51.2&52.2&53.0&68.1&79.4&67.3&76.2&54.6&51.8&57.3&49.2&44.0&76.3&74.1&74.6 \\ & AdvWavAug &\bf56.1&\bf46.6&\bf47.4&\bf47.7&\bf62.9&\bf73.6&\bf61.7&\bf71.2&\bf51.6&\bf47.1&\bf53.3&\bf46.1&\bf40.0&\bf70.4&\bf61.2&\bf60.1 \\ \midrule \multirow{4}*{\rotatebox{90}{SwinS}} & Baseline &61.1&49.9&51.2&50.8&66.1&78.7&62.4&72.1&57.3&54.1&51.4&49.3&46.8&74.3&69.3&83.4 \\ & Gaussian &58.2&47.8&49.4&48.5&65.0&78.1&62.7&73.2&52.8&51.1&57.9&48.2&44.6&74.5&66.8&52.9 \\ & AdvProp &56.9&45.1&46.6&46.3&63.2&74.9&60.6&71.0&50.6&48.9&52.8&46.5&42.8&71.0&62.2&71.8 \\ & AdvWavAug &\bf48.9&\bf37.9&\bf39.1&\bf38.1&\bf56.6&\bf67.3&\bf54.0&\bf63.3&\bf44.4&\bf42.8&\bf45.0&\bf41.0&\bf36.3&\bf63.4&\bf49.4&\bf55.2 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{sidewaystable*} \renewcommand\arraystretch{1.2} \begin{sidewaystable}\small \sidewaystablefn% \begin{center} \begin{minipage}{\textheight} \caption{Comparison of generalization with AdvWavAug and VQ-VAE augmentation on different corruptions in ImageNet-C.} \label{tab:3.2} 
\begin{tabular*}{\textheight}{@{\extracolsep{\fill}}lllccccccccccccccc@{\extracolsep{\fill}}} \toprule% & Method & mCE \textcolor{red}{$\downarrow$} & \multicolumn{3}{@{}c@{}}{Noise} & \multicolumn{4}{@{}c@{}}{Blur} & \multicolumn{4}{@{}c@{}}{Weather} & \multicolumn{4}{@{}c@{}}{Digital} \\\cmidrule{4-6} \cmidrule{7-10} \cmidrule{11-14} \cmidrule{15-18} & & & Gau & shot & Imp & Def & Glass & Mot & Zoom & Snow & Frost & Fog & Bright & Cont & Elas & Pixel & JPEG \\ \midrule \multirow{2}*{\rotatebox{90}{Res50}} & VQ-VAE &\bf68.0&68.3&70.7&72.1&68.9&\bf79.1&\bf71.2&\bf73.6&\bf73.9&\bf67.5&\bf62.7&\bf52.8&\bf67.6&\bf76.7&57.9&\bf57.3 \\ & AdvWavAug &69.0&\bf66.7&\bf68.4&68.8&68.4&81.0&73.5&74.9&78.2&71.1&66.6&53.5&67.9&77.2&\bf56.2&63.3 \\ \midrule \multirow{2}*{\rotatebox{90}{Res101}} & VQ-VAE &\bf62.4&61.9&64.4&64.6&63.6&\bf72.1&\bf67.7&\bf68.0&\bf70.2&\bf63.2&\bf57.9&\bf48.3&\bf61.7&\bf70.0&50.8&\bf50.8 \\ & AdvWavAug &63.7&\bf60.2&\bf61.5&\bf61.8&\bf62.9&75.7&68.2&70.2&72.4&65.9&62.3&49.7&63.2&70.8&\bf50.4&60.0 \\ \midrule \multirow{2}*{\rotatebox{90}{Res152}} & VQ-VAE &\bf58.2&\bf56.0&\bf57.9&\bf57.3&\bf60.2&\bf69.8&\bf63.0&\bf64.1&\bf65.2&\bf58.4&\bf54.4&\bf45.1&\bf57.8&\bf65.8&\bf48.9&\bf49.4 \\ & AdvWavAug &63.2&62.0&63.4&63.6&62.4&74.9&65.8&69.9&68.1&62.8&56.3&47.5&61.9&70.7&60.4&57.8 \\ \midrule \multirow{2}*{\rotatebox{90}{SwinT}} & VQ-VAE &57.0&\bf44.6&\bf45.4&\bf45.8&65.8&\bf71.6&64.1&73.3&51.9&48.0&58.9&\bf44.8&42.2&\bf69.7&68.7&60.4 \\ & AdvWavAug &\bf56.1&46.6&47.4&47.7&\bf62.9&73.6&\bf61.7&\bf71.2&\bf51.6&\bf47.1&\bf53.3&46.1&\bf40.0&70.4&\bf61.2&\bf60.1 \\ \midrule \multirow{2}*{\rotatebox{90}{SwinS}} & VQ-VAE &49.1&\bf37.8&\bf38.3&\bf38.0&57.7&\bf65.1&54.3&63.8&\bf43.5&\bf42.2&53.1&\bf39.9&37.4&\bf62.5&\bf49.4&\bf53.4 \\ & AdvWavAug &\bf48.9&37.9&39.1&38.1&\bf56.6&67.3&\bf54.0&\bf63.3&44.4&42.8&\bf45.0&41.0&\bf36.3&63.4&\bf49.4&55.2 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{sidewaystable} \renewcommand\arraystretch{1.2} \begin{sidewaystable}\small \sidewaystablefn% \begin{center} \begin{minipage}{\textheight} \caption{Comparison of generalization with baseline, AdvWavAug and VQ-VAE augmentation on different corruptions in ImageNet-C.} \label{tab:4.2} \begin{tabular*}{\textheight}{@{\extracolsep{\fill}}lllccccccccccccccc@{\extracolsep{\fill}}} \toprule% & Method & mCE \textcolor{red}{$\downarrow$} & \multicolumn{3}{@{}c@{}}{Noise} & \multicolumn{4}{@{}c@{}}{Blur} & \multicolumn{4}{@{}c@{}}{Weather} & \multicolumn{4}{@{}c@{}}{Digital} \\\cmidrule{4-6} \cmidrule{7-10} \cmidrule{11-14} \cmidrule{15-18} & & & Gau & shot & Imp & Def & Glass & Mot & Zoom & Snow & Frost & Fog & Bright & Cont & Elas & Pixel & JPEG \\ \midrule \multirow{3}*{\rotatebox{90}{SwinT}} & Baseline &62.0&52.2&53.7&53.6&67.9&78.6&64.1&75.3&55.9&52.8&51.3&48.1&45.1&75.7&76.3&79.1 \\ & VQ-VAE &60.3&48.7&49.8&50.7&66.8&76.9&65.2&75.4&52.4&49.2&50.7&46.8&45.5&75.3&76.5&75.0 \\ & AdvWavAug &\bf53.2&\bf44.3&\bf46.1&\bf46.3&\bf60.8&\bf70.9&\bf60.1&\bf70.0&\bf46.4&\bf43.6&\bf49.0&\bf42.1&\bf38.0&\bf66.2&\bf59.2&\bf54.9 \\ \midrule \multirow{3}*{\rotatebox{90}{SwinS}} & Baseline &54.9&42.9&44.9&43.3&61.3&74.1&56.6&67.5&50.9&48.5&46.0&44.1&42.1&68.9&62.1&70.7 \\ & VQ-VAE &46.2&35.6&35.7&35.1&55.9&\bf63.8&53.0&62.2&42.3&39.1&43.6&\bf37.6&33.8&\bf60.5&45.5&\bf49.0 \\ & AdvWavAug &\bf45.8&\bf34.4&\bf35.3&\bf34.9&\bf54.1&64.1&\bf51.4&\bf61.8&\bf40.7&\bf38.9&\bf43.2&37.8&\bf33.4&61.6&\bf44.8&50.6 \\ \midrule \multirow{3}*{\rotatebox{90}{SwinB}} & Baseline 
&54.5&43.4&44.9&43.8&61.4&71.4&55.1&66.7&50.0&48.4&47.2&43.2&38.9&70.4&65.5&66.6 \\ & VQ-VAE &50.2&38.7&39.7&39.6&57.3&66.0&54.4&63.9&47.1&42.9&44.1&38.3&34.6&64.6&62.0&60.1 \\ & AdvWavAug &\bf44.9&\bf33.9&\bf35.2&\bf34.2&\bf53.5&\bf62.8&\bf50.9&\bf61.1&\bf40.1&\bf37.9&\bf39.4&\bf36.8&\bf31.7&\bf60.6&\bf45.9&\bf49.5 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{sidewaystable} \renewcommand\arraystretch{1.2} \begin{sidewaystable}\small \sidewaystablefn% \begin{center} \begin{minipage}{\textheight} \caption{Combination with another augmentation technique AugMix on different corruptions in ImageNet-C. We combine our $\rm AdvWavAug$ with AugMix, and compare its performance with the original AugMix on ResNet-50.} \label{tab:5.2} \begin{tabular*}{\textheight}{@{\extracolsep{\fill}}lllccccccccccccccc@{\extracolsep{\fill}}} \toprule% & Method & mCE \textcolor{red}{$\downarrow$} & \multicolumn{3}{@{}c@{}}{Noise} & \multicolumn{4}{@{}c@{}}{Blur} & \multicolumn{4}{@{}c@{}}{Weather} & \multicolumn{4}{@{}c@{}}{Digital} \\\cmidrule{4-6} \cmidrule{7-10} \cmidrule{11-14} \cmidrule{15-18} & & & Gau & shot & Imp & Def & Glass & Mot & Zoom & Snow & Frost & Fog & Bright & Cont & Elas & Pixel & JPEG \\ \midrule \multirow{4}*{\rotatebox{90}{Res50}} & Baseline &77.4&78.3&79.5&82.0&73.7&89.7&78.1&81.0&82.6&77.5&68.3&57.8&72.2&85.5&77.4&77.7 \\ \cmidrule{2-18} & AugMix &65.3&67.0&66.0&68.0&64.0&79.0&\bf59.0&64.0&\bf69.0&68.0&65.0&54.0&\bf57.0&\bf74.0&60.0&66.0 \\ \cmidrule{2-18} & $\rm AdvWavAug$ &68.0&65.2&67.1&66.3&67.5&77.5&72.3&71.9&75.5&68.9&67.4&53.2&68.5&75.7&\bf59.8&62.5 \\ & +AugMix &\bf63.3&\bf61.3&\bf60.4&\bf59.8&\bf61.2&\bf74.3&59.1&\bf61.9&71.8&\bf66.7&\bf60.7&\bf51.9&62.6&74.7&61.3&\bf61.4 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{sidewaystable} \renewcommand\arraystretch{1.2} \begin{sidewaystable}\small \sidewaystablefn% \begin{center} \begin{minipage}{\textheight} \caption{Comparison of generalization with MAE and MAE+AdvWavAug on different corruptions in ImageNet-C.} \label{tab:6.2} \begin{tabular*}{\textheight}{@{\extracolsep{\fill}}lllccccccccccccccc@{\extracolsep{\fill}}} \toprule% & Method & mCE \textcolor{red}{$\downarrow$} & \multicolumn{3}{@{}c@{}}{Noise} & \multicolumn{4}{@{}c@{}}{Blur} & \multicolumn{4}{@{}c@{}}{Weather} & \multicolumn{4}{@{}c@{}}{Digital} \\\cmidrule{4-6} \cmidrule{7-10} \cmidrule{11-14} \cmidrule{15-18} & & & Gau & shot & Imp & Def & Glass & Mot & Zoom & Snow & Frost & Fog & Bright & Cont & Elas & Pixel & JPEG \\ \midrule \multirow{2}*{\rotatebox{90}{ViTB}} & Baseline &51.7&-&-&-&-&-&-&-&-&-&-&-&-&-&-&- \\ & AdvWavAug &\bf47.0&34.9&35.6&35.3&57.7&65.6&51.8&63.8&38.8&38.5&39.5&38.1&34.6&66.7&48.4&55.0 \\ \midrule \multirow{2}*{\rotatebox{90}{ViTL}} & Baseline &41.8&-&-&-&-&-&-&-&-&-&-&-&-&-&-&- \\ & AdvWavAug &\bf38.1&27.2&27.2&26.4&48.7&56.8&40.4&49.7&30.1&31.3&32.4&32.5&29.2&55.5&38.8&45.4 \\ \midrule \multirow{2}*{\rotatebox{90}{ViTH}} & Baseline &33.8&-&-&-&-&-&-&-&-&-&-&-&-&-&-&- \\ & AdvWavAug &\bf32.6&23.9&22.7&23.8&40.6&50.2&32.6&39.9&24.9&29.4&25.8&28.2&27.9&50.0&30.7&38.0 \\ \botrule \end{tabular*} The baseline models are taken directly from~\cite{he2021masked}, which have no detailed results on different corruptions. 
\end{minipage} \end{center} \end{sidewaystable} \renewcommand\arraystretch{1.2} \begin{sidewaystable}\small \sidewaystablefn% \begin{center} \begin{minipage}{\textheight} \caption{Comparison of generalization with AdvWavAug and AdvWavAug+PGD on different corruptions in ImageNet-C.} \label{tab:7.2} \begin{tabular*}{\textheight}{@{\extracolsep{\fill}}lllccccccccccccccc@{\extracolsep{\fill}}} \toprule% & Method & mCE \textcolor{red}{$\downarrow$} & \multicolumn{3}{@{}c@{}}{Noise} & \multicolumn{4}{@{}c@{}}{Blur} & \multicolumn{4}{@{}c@{}}{Weather} & \multicolumn{4}{@{}c@{}}{Digital} \\\cmidrule{4-6} \cmidrule{7-10} \cmidrule{11-14} \cmidrule{15-18} & & & Gau & shot & Imp & Def & Glass & Mot & Zoom & Snow & Frost & Fog & Bright & Cont & Elas & Pixel & JPEG \\ \midrule \multirow{2}*{\rotatebox{90}{Res50}} & AdvWavAug &69.0&66.7&68.4&68.8&68.4&81.0&73.5&74.9&78.2&71.1&\bf66.6&53.5&\bf67.9&77.2&\bf56.2&63.3 \\ & +PGD &\bf68.0&\bf65.2&\bf67.1&\bf66.3&\bf67.5&\bf77.5&\bf72.3&\bf71.9&\bf75.5&\bf68.9&67.4&\bf53.2&68.5&\bf75.7&59.8&\bf62.5 \\ \midrule \multirow{2}*{\rotatebox{90}{Res101}} & AdvWavAug &63.7&\bf60.2&\bf61.5&\bf61.8&\bf62.9&75.7&68.2&70.2&72.4&65.9&\bf62.3&49.7&\bf63.2&\bf70.8&50.4&60.0 \\ & +PGD &\bf63.0&61.0&62.9&62.5&64.0&\bf73.1&\bf65.6&\bf68.4&\bf71.6&\bf64.5&62.8&\bf48.7&63.9&71.0&\bf49.4&\bf55.7 \\ \midrule \multirow{2}*{\rotatebox{90}{Res152}} & AdvWavAug &63.2&62.0&63.4&63.6&\bf62.4&74.9&\bf65.8&69.9&\bf68.1&\bf62.8&\bf56.3&\bf47.5&\bf61.9&70.7&60.4&\bf57.8 \\ & +PGD &\bf62.7&\bf60.4&\bf61.6&\bf62.4&62.9&\bf70.7&65.7&\bf67.4&71.0&63.8&64.2&50.6&65.7&\bf68.1&\bf48.1&58.3 \\ \midrule \multirow{2}*{\rotatebox{90}{SwinT}} & AdvWavAug &\bf56.1&46.6&\bf47.4&47.7&\bf62.9&\bf73.6&\bf61.7&\bf71.2&51.6&\bf47.1&53.3&\bf46.1&\bf40.0&\bf70.4&\bf61.2&\bf60.1 \\ & +PGD &56.6&\bf46.4&\bf47.4&\bf47.4&65.0&74.6&62.0&71.9&\bf50.9&47.4&\bf52.7&46.3&40.3&70.8&63.6&62.5 \\ \midrule \multirow{2}*{\rotatebox{90}{SwinS}} & AdvWavAug &\bf48.9&\bf37.9&\bf39.1&\bf38.1&\bf56.6&\bf67.3&\bf54.0&\bf63.3&\bf44.4&\bf42.8&\bf45.0&\bf41.0&\bf36.3&\bf63.4&\bf49.4&\bf55.2 \\ & +PGD &53.4&41.0&42.2&42.2&60.4&72.0&58.3&68.9&47.7&45.0&50.2&44.3&39.7&66.6&61.7&61.4 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{sidewaystable} \subsection{Experimental Results on Black-box Attacks} \label{sec:c.5} In addition to white-box attack, black-box attack is also an important perspective to measure the attack performance of a method. We conduct black-box attack on different models, including ResNet 50 (Res50)~(\cite{he2016deep}), ResNet 152 (Res152)~(\cite{he2016deep}), Inception V3 (IncV3)~(\cite{szegedy2016rethinking}), Adv Inception V3 (AdvIncV3)~(\cite{kurakin2018adversarial}), Efficientnet B6 AP (EffiAP)~(\cite{tan2019efficientnet}), and Swin Transformer Tiny (SwinT)~\cite{liu2021swin}. We set the attack iteration number as $n=100$. The ASR-FID curves are shown in Fig.~\ref{fig:app1}. The ASR-LPIPS curves are shown in Fig.~\ref{fig:app2}. It is shown that the ASR-FID and ASR-LPIPS curves of our AdvWavAug-DIM appear above DIM. The black-box attacks reveal the transferability of a method. It means that adversarial examples generated by our AdvWavAug-DIM have better performance (better attack performance given the same image quality) than adversarial examples generated by DIM. That is because our method projects the original perturbations onto the data manifold. 
\section{Detailed Results on Different Corruptions of the ImageNet-C Dataset} \label{appendix:d} In this part, we provide detailed results on different corruptions (e.g., Gaussian noise, glass blur, etc.) of the ImageNet-C dataset. Since there are many corruption categories in ImageNet-C, we use the following abbreviations in these tables: Gaussian noise (Gau), shot noise (Shot), impulse noise (Imp), defocus blur (Def), glass blur (Glass), motion blur (Mot), zoom blur (Zoom), snow (Snow), frost (Frost), fog (Fog), brightness (Bright), contrast (Cont), elastic transform (Elas), pixelate (Pixel), and JPEG compression (JPEG). We extend the detailed ImageNet-C results in Tab.~\ref{tab:2} to Tab.~\ref{tab:2.2}, those in Tab.~\ref{tab:3} to Tab.~\ref{tab:3.2}, those in Tab.~\ref{tab:4} to Tab.~\ref{tab:4.2}, those in Tab.~\ref{tab:5} to Tab.~\ref{tab:5.2}, those in Tab.~\ref{tab:6} to Tab.~\ref{tab:6.2}, and those in Tab.~\ref{tab:7} to Tab.~\ref{tab:7.2}. What should be noticed in Tab.~\ref{tab:6.2} is that we take the baseline results directly from~\cite{he2021masked}, which provides no detailed data on the different corruptions. \end{appendix}
{ "arxiv_id": "2302.14280", "language": "en", "timestamp": "2023-03-01T02:07:57", "url": "https://arxiv.org/abs/2302.14280", "yymm": "2302" }
\section{Introduction} \label{Section 1} The notion of error-sum function was first studied by Ridley and Petruska \cite{RP00} in the context of the regular continued fraction expansion. For any real number $x$, the error-sum function of the continued fraction expansion is defined by \[ P (x) \coloneqq \sum_{n=0}^\infty q_n(x) \left( x - \frac{p_n(x)}{q_n(x)} \right), \] where \[ \frac{p_n(x)}{q_n(x)} \coloneqq [a_0(x); a_1(x), a_2(x), \dotsc, a_n(x)] \coloneqq a_0(x) + \cfrac{1}{a_1(x) + \cfrac{1}{a_2(x) + \cfrac{1}{\ddots + \cfrac{1}{a_n(x)}}}} \] is the $n$th convergent (or approximant) of the continued fraction expansion, with $p_n(x)$, $q_n(x)$ coprime, $a_0(x)$ an integer, and $a_1(x), \dotsc, a_n(x)$ positive integers. For a rational number $x$, the $a_k(x)$ are undefined from some index on, and hence $P(x)$ is a series of finitely many terms. In such a case, $x = [a_0(x); a_1(x), \dotsc, a_n(x)]$ for some $n \geq 0$ and if, further, $n \geq 1$, then $a_n(x)>1$. In fact, Petruska \cite{Pet92} used the error-sum function to prove the existence of a $q$-series $F(z) = 1 + \sum_{n=1}^\infty \left( \prod_{k=1}^n (A-q^k) \right) z^n$ with radius of convergence $R$ for an arbitrarily given $R>1$. Here, $A = e^{2 \pi i P(\beta)}$ and $q = e^{2 \pi i \beta}$, where $\beta$ is some irrational number satisfying certain conditions in terms of the $q_n(\beta)$. Moreover, there are a number of studies using error-sum functions to obtain number-theoretical results. See, e.g., \cite{AB16, Els11, Els14, ES11, ES12} for further applications of error-sum functions. The continued fraction expansion is, along with the decimal expansion, one of the most famous representations of a real number. Since there is, as is well known, a wide range of representations of real numbers (see \cite{Gal76} and \cite{Sch95} for details), it was natural for intrigued researchers to define the error-sum function for other types of representations and investigate its basic properties. To name but a few, the error-sum functions were defined and studied in the context of the decimal expansion \cite{Ton07}, the integer $p$ base expansion \cite{QD10}, the non-integer $\beta$ base expansion \cite{MXH11}, the classical L{\"u}roth series \cite{SW05, SW07}, the alternating L{\"u}roth series \cite{SMZ06, SWZ06}, the $\alpha$-L{\"u}roth series \cite{CWY14}, the Engel continued fraction \cite{SZZ08}, and the alternating Sylvester series \cite{JS12}. In the previous studies, the list of examined basic properties includes, but is not limited to, the boundedness, continuity, integrability, and intermediate value property (or Darboux property) of the error-sum function, and the Hausdorff dimension of the graph of the function. The {\em Pierce expansion} is another classical representation of real numbers, introduced by Pierce \cite{Pie29} about a century ago. Since then, a number of studies have investigated the arithmetic and metric properties of Pierce expansions. See, e.g., \cite{DCS22, ES91, Fan15, PVB98, Rem54, Sha84, Sha86, Sha93, Var17, VBP99}. The Pierce expansion has proven to be useful in number theory, and we mention two applications among others. Firstly, the Pierce expansion provides us with a simple irrationality proof of a real number (see \cite{Sha86}, p. 24): {\em A real number has an infinite Pierce expansion if and only if it is irrational}.
For instance, the irrationalities of $1-e^{-1}$, $\sin 1$, and $\cos 1$ follow, respectively, from their infinite Pierce expansions, which coincide with their usual series expansions obtained from the Maclaurin series. As for the other application, Varona \cite{Var17} constructed transcendental numbers by means of Pierce expansions. Although the Pierce expansion has a long history and is widely studied, unlike the other representations mentioned above, its error-sum function has not yet been studied. In this paper, we define the error-sum function of the Pierce expansion, and analyze its basic properties and the fractal properties of its graph. The paper is organized as follows. In Section \ref{Section 2}, we introduce some elementary notions of the Pierce expansion and then define the error-sum function of the Pierce expansion. In Section \ref{Section 3}, we investigate the basic properties, e.g., boundedness and continuity, of the error-sum function. In Section \ref{Section 4}, we determine the Hausdorff dimension, the box-counting dimension, and the covering dimension of the graph of the error-sum function. Throughout the paper, $\mathbb{N}$ denotes the set of positive integers, $\mathbb{N}_0$ the set of non-negative integers, and $\mathbb{N}_\infty \coloneqq \mathbb{N} \cup \{ \infty \}$ the set of extended positive integers. Following the convention, we define $\infty + c \coloneqq \infty$ and $\frac{c}{\infty} \coloneqq 0$ for any constant $c \in \mathbb{R}$. We denote the Lebesgue measure on $[0,1]$ by $\lambda$. For any subset $A$ of a topological space $X$, the closure of $A$ is denoted by $\overline{A}$. Given any function $g \colon A \to B$, we write the preimage of any singleton $\{ b \} \subseteq B$ under $g$ simply as $g^{-1}(b)$ instead of $g^{-1}(\{ b \})$. \section{Pierce expansion} \label{Section 2} This section is devoted to introducing some basic notions of the Pierce expansion. We refer the reader to \cite{DCS22, ES91, Fan15, PVB98, Pie29, Rem54, Sha84, Sha86, Sha93, Var17, VBP99} or Chapter 2 of \cite{Sch95} for arithmetic and metric properties of the Pierce expansion. The classical Pierce expansion is concerned with the numbers in the half-open unit interval $(0,1]$. In this paper, we extend our scope to the numbers in the closed unit interval $I \coloneqq [0,1]$. This extension is consistent with our use of $\mathbb{N}_\infty$ instead of $\mathbb{N}$ in this paper. To dynamically generate a Pierce expansion of $x \in I$, we begin with two maps $d_1 \colon I \to \mathbb{N}_\infty$ and $T \colon I \to I$ given by \[ d_1(x) \coloneqq \begin{cases} \left\lfloor 1/x \right\rfloor, &\text{if } x \neq 0; \\ \infty, &\text{if } x = 0, \end{cases} \quad \text{and} \quad T(x) \coloneqq \begin{cases} 1 - d_1 (x) \cdot x, &\text{if } x \neq 0; \\ 0, &\text{if } x =0, \end{cases} \] respectively, where $\lfloor y \rfloor$ denotes the largest integer not exceeding $y \in \mathbb{R}$. Observe by definition that, for each $n \in \mathbb{N}$, we have $d_1(x) =n$ if and only if $x \in I$ lies in the interval $\left( \frac{1}{n+1}, \frac{1}{n} \right]$ on which $T$ is linear. For each $n \in \mathbb{N}$, we write $T^n$ for the $n$th iterate of $T$, and $T^0 \coloneqq \id_{I}$. For notational convenience, we write $T^n x$ for $T^n (x)$ whenever no confusion could arise. Given $x \in I$, we define the sequence of {\em digits} $( d_n (x) )_{n \in \mathbb{N}}$ by $d_n(x) \coloneqq d_1 (T^{n-1}x)$ for each $n \in \mathbb{N}$.
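As a brief illustration (not needed in the sequel), the digits can be computed by direct iteration of the maps $d_1$ and $T$; the following Python sketch, with our own function name \texttt{pierce\_digits}, uses exact rational arithmetic so that the expansion of a rational number terminates exactly.
\begin{verbatim}
# Illustrative sketch: compute the Pierce digits d_1(x), d_2(x), ... by iterating
# d_1 and T with exact rational arithmetic (finite output iff x is rational).
from fractions import Fraction
from typing import List

def pierce_digits(x: Fraction, max_terms: int = 20) -> List[int]:
    """Return the (finite or truncated) Pierce digit sequence of x in [0, 1]."""
    assert 0 <= x <= 1
    digits: List[int] = []
    while x != 0 and len(digits) < max_terms:
        d = x.denominator // x.numerator   # d_1(x) = floor(1/x) for x = p/q in lowest terms
        digits.append(d)
        x = 1 - d * x                      # T(x) = 1 - d_1(x) * x
    return digits

# Example: pierce_digits(Fraction(23, 36)) returns [1, 2, 3, 6], i.e.
# 23/36 = 1/1 - 1/(1*2) + 1/(1*2*3) - 1/(1*2*3*6).
\end{verbatim}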
Then, for any $n \in \mathbb{N}$, by definitions of the map $T$ and the digits, we have \begin{align} \label{T n-1 formula} T^{n-1}x = \frac{1}{d_1(T^{n-1}x)} - \frac{T(T^{n-1}x)}{d_1(T^{n-1}x)} = \frac{1}{d_n(x)} - \frac{T^nx}{d_n(x)}. \end{align} We recall two well-known facts about the digits in the following proposition. In particular, part (i) characterizes the digit sequence and it is stated in any study of Pierce expansions with or without proof. We include the proof to make it clear that the replacement of $(0,1]$ and $\mathbb{N}$ by $I$ and $\mathbb{N}_\infty$, respectively, does not violate the basic properties. \begin{proposition} [See \cite{Sha86} and Proposition 2.2 of \cite{Fan15}] \label{basic facts} Let $x \in I$ and $n \in \mathbb{N}$. Then the following hold. \begin{enumerate} \item[(i)] $d_{n+1}(x) \geq d_n(x) + 1$. \item[(ii)] $d_n(x) \geq n$. \end{enumerate} \end{proposition} \begin{proof} (i) If $d_n (x) = \infty$, then $T^{n-1}x=0$ and so $T^n x = 0$, which implies that $d_{n+1}(x) = \infty$. Since $\infty+1 = \infty$ by convention, we conclude that $d_{n+1}(x) = d_n(x)+1$. Assume $d_n(x) \in \mathbb{N}$. Then $T^{n-1}x \neq 0$ and $T^{n-1} x \in \left( \frac{1}{k+1}, \frac{1}{k} \right]$ for some $k \in \mathbb{N}$. Hence $d_n(x) = k$ and by definition of $T$, it follows that $T^n x \in \left[ 0, \frac{1}{k+1} \right)$. Now $d_{n+1}(x) = \infty$ if $T^nx=0$ and $k+1 \leq d_{n+1} (x) < \infty$ if $T^nx \neq 0$. In either case, $d_{n+1}(x) \geq d_n(x) + 1$. (ii) Clearly, $d_1(x) \geq 1$ by definition of $d_1$. If $n \geq 2$, using part (i) $(n-1)$ times, we obtain $d_n(x) \geq d_{n-1}(x) + 1 \geq \dotsb \geq d_1(x) + (n-1)$, and thus $d_n(x) \geq n$. \end{proof} We shall consider a symbolic space which is a subspace of $\mathbb{N}_\infty^\mathbb{N}$ closely related to Pierce expansions. Let $\Sigma_0 \coloneqq \{ (\sigma_k)_{k \in \mathbb{N}} \in \{ \infty \}^\mathbb{N} \}$ and for each $n \in \mathbb{N}$, let \begin{align*} \Sigma_n &\coloneqq \{ (\sigma_k)_{k \in \mathbb{N}} \in \mathbb{N}^n \times \{ \infty \}^{\mathbb{N} \setminus \{ 1, \dotsc, n \}} : \sigma_1 < \sigma_2 < \dotsb < \sigma_n \}. \end{align*} For ease of notation, we will occasionally write $(\sigma_k)_{k \in \mathbb{N}} \in \Sigma_n$ as $(\sigma_1, \dotsc, \sigma_n)$ in place of $(\sigma_1, \dotsc, \sigma_n, \allowbreak \infty, \infty, \dotsc)$. We also define \begin{align*} \Sigma_\infty &\coloneqq \{ (\sigma_k)_{k \in \mathbb{N}} \in \mathbb{N}^\mathbb{N} : \sigma_k < \sigma_{k+1} \text{ for all } k \in \mathbb{N} \}. \end{align*} Then $\Sigma_n$, $n \in \mathbb{N}_0$, consists of a sequence with strictly increasing $n$ many initial terms and $\infty$ for the remaining terms, and $\Sigma_\infty$ consists of strictly increasing infinite sequences of positive integers. Finally, let \begin{align*} \Sigma &\coloneqq \bigcup_{n \in \mathbb{N}_0} \Sigma_n \cup \Sigma_\infty \end{align*} in $\mathbb{N}_\infty^\mathbb{N}$. Each element of $\Sigma$ is said to be a {\em Pierce sequence}. In view of Proposition \ref{basic facts}(i), for any $x \in I$, the digit sequence $(d_n(x))_{n \in \mathbb{N}}$ is a Pierce sequence. We say $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma$ is {\em realizable} if there exists $x \in I$ such that $d_k(x) = \sigma_k$ for all $k \in \mathbb{N}$ and we denote by $\Sigma_{\realizable}$ the collection of all realizable Pierce sequences. 
Similar to Proposition \ref{basic facts}(ii), for any $(\sigma_n)_{n \in \mathbb{N}} \in \Sigma$, we have \begin{align} \label{sigma n bound} \sigma_n \geq n \end{align} for all $n \in \mathbb{N}$. It is well known that for each $x \in I$, the iterations of $T$ yield a unique expansion \begin{align} \label{series expansion} x = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{d_1(x) \dotsm d_n(x)} = \cfrac{1-\cfrac{1+\cfrac{1-\dotsb}{d_3(x)}}{d_2(x)}}{d_1(x)}, \end{align} where the digit sequence $( d_n(x) )_{n \in \mathbb{N}}$ is a realizable Pierce sequence. (See Proposition \ref{characterization of Sigma re} below.) The expression \eqref{series expansion} is called the {\em Pierce expansion}, {\em Pierce (ascending) continued fraction}, or {\em alternating Engel expansion} of $x$. We denote \eqref{series expansion} by \[ x = [d_1(x), d_2(x), \dotsc, d_n(x), \dotsc]_P. \] For brevity, if the digit is $\infty$ at some point and on, i.e., if $x = [d_1(x), \dotsc, d_n(x), \infty, \infty, \dotsc]_P$ with $d_n(x) < \infty$ for some $n \in \mathbb{N}$, then we write $x = [d_1(x), \dotsc, d_n(x)]_P$ and say $x$ has a {\em finite} Pierce expansion of length $n$. As mentioned in Section \ref{Section 1}, it is a classical result that the Pierce expansion of $x \in (0,1]$ is finite if and only if $x$ is rational. Since $0 = [\infty, \infty, \dotsc]_P$, we may write the Pierce expansion of $0$ as $[ \, ]_P$ which is of length zero. Thus $x \in I$ has a finite Pierce expansion if and only if $x$ is rational. \begin{proposition} [See \cite{Sha86}, pp. 23--24] \label{digit condition proposition} For any $x \in I$, if its Pierce expansion is of length $n \geq 2$, then $d_{n-1}(x) + 1 < d_n(x)$. \end{proposition} \begin{proof} The result follows from the definition of the digits. To see this, suppose otherwise. Put $M \coloneqq d_{n-1}(x)$ for some $M \in \mathbb{N}$, so that $d_{n}(x) = M+1$ by Proposition \ref{basic facts}(i). Since $d_{n+1}(x) = \infty$, we have $T^nx=0$. By \eqref{T n-1 formula}, we see that \[ T^{n-2} x = \frac{1}{d_{n-1}(x)} - \frac{1}{d_{n-1}(x)} \left( \frac{1}{d_n(x)} - \frac{T^nx}{d_n(x)} \right) = \frac{1}{M} - \frac{1}{M(M+1)} = \frac{1}{M+1}, \] and so $T^{n-1} x =0$. It follows that $d_n(x) = \infty$, which is a contradiction. \end{proof} We denote by $f \colon I \to \Sigma$ the map sending a number in $I$ to its sequence of Pierce expansion digits, that is, for each $x \in I$, $f$ is given by \begin{align*} f(x) \coloneqq (d_n(x))_{n \in \mathbb{N}} = (d_1(x), d_2(x), d_3(x), \dotsc). \end{align*} Clearly, $f$ is well-defined. We also note that $f(I) = \Sigma_{\realizable}$ by definition. Conversely, we shall introduce a function mapping a Pierce sequence to a real number in $I$ by means of the formula modelled on \eqref{series expansion}. Define a map $\varphi \colon \Sigma \to I$ by \begin{align*} \varphi ( \sigma ) \coloneqq \sum_{n=1}^\infty \frac{(-1)^{n+1}}{\sigma_1 \dotsm \sigma_n} = \frac{1}{\sigma_1} - \frac{1}{\sigma_1 \sigma_2} + \dotsb + \frac{(-1)^{n+1}}{\sigma_1 \dotsm \sigma_n} + \dotsb, \end{align*} for each $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}} \in \Sigma$. Observe that $\varphi$ is well-defined since $\sum_{n=1}^\infty \frac{1}{\sigma_1 \dotsm \sigma_n} \leq \sum_{n=1}^\infty \frac{1}{n!} < \infty$, where the first inequality follows from \eqref{sigma n bound}. We rephrase Proposition 2.1 of \cite{Fan15} in terms of the maps $f$ and $\varphi$ in the following proposition. According to Fang \cite{Fan15}, the proposition is credited to Remez \cite{Rem54}. 
See also Section 4.1 of \cite{DCS22}. Let $E \coloneqq I \cap \mathbb{Q}$ and $E' \coloneqq E \setminus \{ 0, 1 \} = (0,1) \cap \mathbb{Q}$. \begin{proposition}[\cite{Fan15}, Proposition 2.1] \label{inverse image of phi} Let $x \in I$. Then the following hold. \begin{enumerate} \item[(i)] If $x \in E'$, then we have $\varphi^{-1}( x ) = \{ \sigma, \sigma' \}$, where \begin{align*} \sigma &\coloneqq (d_1(x), d_2(x), \dotsc, d_{n-1}(x), d_n(x)) = f(x) \in \Sigma_n \cap \Sigma_{\realizable}, \\ \sigma' &\coloneqq (d_1(x), d_2(x), \dotsc, d_{n-1}(x), d_n(x)-1, d_n(x)) \in \Sigma_{n+1} \cap (\Sigma \setminus \Sigma_{\realizable}), \end{align*} for some $n \in \mathbb{N}$. \item[(ii)] If $x \in I \setminus E'$, then we have $\varphi^{-1}( x ) = \{ \sigma \}$, where $\sigma \coloneqq f(x)$. More precisely, $\sigma \in \Sigma_\infty$ if $x \in I \setminus E$; $\sigma = (\infty, \infty, \dotsc) \in \Sigma_0$ if $x = 0$; $\sigma = ( 1, \infty, \infty, \dotsc ) \in \Sigma_1$ if $x = 1$. \end{enumerate} \end{proposition} The following proposition is a characterization of the set of realizable Pierce sequences. \begin{proposition} \label{characterization of Sigma re} Let $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma$. Then $\sigma \not \in \Sigma_{\realizable}$ if and only if $\sigma \in \Sigma_n$ for some $n \geq 2$ with $\sigma_{n-1} + 1 = \sigma_n$. \end{proposition} \begin{proof} The reverse implication follows from Proposition \ref{digit condition proposition}, since there does not exist $x \in I$ whose Pierce expansion is given by $[d_1(x), \dotsc, d_n(x)]_P$ with $d_{n-1}(x) = \sigma_{n-1}$ and $d_n(x) = \sigma_{n-1}+1$. Now, for the forward implication, suppose $\sigma \in \Sigma \setminus \Sigma_{\realizable}$. Put $x \coloneqq \varphi (\sigma) \in I$. Then $\sigma \neq f(x)$ by definition of $\Sigma_{\realizable}$. The preimage $\varphi^{-1}(x)$ contains $\sigma$ and by Proposition \ref{inverse image of phi}, it is either a singleton or a doubleton. If $\varphi^{-1}(x) = \{ \sigma \}$, then by part (ii) of the proposition, we have $\sigma = f(x)$, which is a contradiction. Hence $\varphi^{-1}(x)$ is a doubleton and by part (i) of the proposition, it follows that $\sigma \in \Sigma_{n}$ for some $n \geq 2$ with $\sigma_{n-1} = d_{n-1}(x)-1$ and $\sigma_n = d_{n-1}(x)$, in which case $\sigma_{n-1}+1 = \sigma_n$. \end{proof} For each $x \in I$ and $n \in \mathbb{N}$, define the $n$th {\em Pierce convergent}, or {\em approximant}, $s_n \colon I \to \mathbb{R}$ by \[ s_n (x) \coloneqq [d_1(x), \dotsc, d_n(x)]_P = \sum_{k=1}^n \frac{(-1)^{k+1}}{d_1 (x) \dotsm d_k(x)}. \] Then $s_n(x)$ is nothing but the $n$th partial sum of the Pierce expansion \eqref{series expansion}. Using \eqref{T n-1 formula} repeatedly, we find that \begin{align} \label{simplified series} x &= \frac{1}{d_1(x)} - \frac{Tx}{d_1(x)} \nonumber \\ &= \frac{1}{d_1(x)} - \frac{1}{d_1(x)} \left( \frac{1}{d_2(x)} - \frac{T^2x}{d_2(x)} \right) = \frac{1}{d_1(x)} - \frac{1}{d_1(x) d_2(x)} + \frac{T^2x}{d_1(x) d_2(x)} \nonumber \\ &= \dotsb \nonumber \\ &= \sum_{k=1}^n \frac{(-1)^{k+1}}{d_1(x) \dotsm d_k(x)} + \frac{(-1)^n T^nx}{d_1(x) \dotsm d_n(x)} = s_n(x) + \frac{(-1)^n T^nx}{d_1(x) \dotsm d_n(x)}. \end{align} For every $x \in I$, we define \[ \mathcal{E} (x) \coloneqq \sum_{n=1}^\infty ( x - s_n (x) ) \] and call $\mathcal{E} \colon I \to \mathbb{R}$ the {\em error-sum function of Pierce expansions} on $I$. 
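As a small numerical illustration (not needed in the sequel), the following Python sketch evaluates $\mathcal{E}(x)$ by iterating $T$ with exact rational arithmetic and accumulating the truncation errors $x - s_n(x)$; for a rational $x$ the series has only finitely many non-zero terms, so the returned value is exact.
\begin{verbatim}
# Illustrative sketch: the error-sum function E(x) = sum_{n>=1} (x - s_n(x)),
# computed with exact rational arithmetic.
from fractions import Fraction

def pierce_error_sum(x: Fraction, n_terms: int = 30) -> Fraction:
    """Accumulate the truncation errors x - s_n(x) of the Pierce expansion of x."""
    assert 0 <= x <= 1
    total, s_n, sign, prod, y = Fraction(0), Fraction(0), 1, 1, x
    for _ in range(n_terms):
        if y == 0:                        # remaining digits are infinity; later errors vanish
            break
        d = y.denominator // y.numerator  # d_n(x) = floor(1 / T^{n-1} x)
        y = 1 - d * y                     # T^n x
        prod *= d                         # d_1(x) * ... * d_n(x)
        s_n += Fraction(sign, prod)       # s_n(x) = s_{n-1}(x) + (-1)^{n+1} / (d_1 ... d_n)
        sign = -sign
        total += x - s_n                  # n-th error term
    return total

# Example: pierce_error_sum(Fraction(23, 36)) == Fraction(-1, 4), since the
# successive errors are -13/36, 5/36, -1/36 and 0.
\end{verbatim}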
Note that for any $x \in I$, by \eqref{simplified series}, Proposition \ref{basic facts}(ii), and boundedness of $T$, we have \[ \left| x - s_n (x) \right| = \left| \frac{(-1)^n T^nx}{d_1(x) \dotsm d_n(x)} \right| \leq \frac{1}{n!} \to 0 \] as $n \to \infty$, with $\sum_{n=1}^\infty \frac{1}{n!} < \infty$. It follows that $\mathcal{E}(x)$ is well-defined as an absolutely and uniformly convergent series (or as a series with finitely many non-zero terms if $T^nx=0$ for some $n \in \mathbb{N}$). Defining an error-sum function on $\Sigma$ is in order. For each $n \in \mathbb{N}$, define the $n$th partial sum $\varphi_n \colon \Sigma \to \mathbb{R}$ by \[ \varphi_n(\sigma) \coloneqq \sum_{k=1}^n \frac{(-1)^{k+1}}{\sigma_1 \dotsm \sigma_k} = \frac{1}{\sigma_1} - \frac{1}{\sigma_1 \sigma_2} + \dotsb + \frac{(-1)^{n+1}}{\sigma_1 \sigma_2 \dotsm \sigma_n} \] for $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma$. For every $\sigma \in \Sigma$, we define \begin{align} \label{definition of E star} \mathcal{E}^* (\sigma) \coloneqq \sum_{n=1}^\infty ( \varphi (\sigma) - \varphi_n(\sigma) ) \end{align} and call $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$ the {\em error-sum function of Pierce sequences} on $\Sigma$. Notice that for any $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}} \in \Sigma$, by \eqref{sigma n bound}, we have \begin{align} \label{phi - phi n} | \varphi (\sigma) - \varphi_n (\sigma) | &= \frac{1}{\sigma_1 \dotsm \sigma_{n}} \left| \frac{1}{\sigma_{n+1}} - \sum_{j=1}^\infty \left( \frac{1}{\sigma_{n+1} \dotsm \sigma_{n+2j}} - \frac{1}{\sigma_{n+1} \dotsm \sigma_{n+(2j+1)}} \right) \right| \nonumber \\ &\leq \frac{1}{\sigma_1 \dotsm \sigma_{n+1}} \leq \frac{1}{(n+1)!} \to 0 \end{align} as $n \to \infty$, with $\sum_{n=1}^\infty \frac{1}{(n+1)!} < \infty$. Hence the series converges absolutely and uniformly on $\Sigma$, and it follows that $\mathcal{E}^*(\sigma)$ is well-defined. \section{Some basic properties of $\mathcal{E}(x)$} \label{Section 3} This section is devoted to investigating some basic properties of the error-sum function of Pierce expansions $\mathcal{E} \colon I \to \mathbb{R}$. It will usually be done by the aid of the symbolic space $\Sigma$ and the error-sum function of Pierce sequences $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$. \subsection{Symbolic space $\Sigma$} We equip $\mathbb{N}$ with the discrete topology and consider $(\mathbb{N}_\infty, \mathcal{T})$ as its one-point compactification, so that a subset in $(\mathbb{N}_\infty, \mathcal{T})$ is open if and only if it is either a subset of $\mathbb{N}$ or a set whose complement with respect to $\mathbb{N}_\infty$ is a finite set in $\mathbb{N}$. For a metric space $(X,d)$, we denote by $B_d(x;r)$ the $d$-open ball centered at $x \in X$ with radius $r>0$, i.e., $B_d(x;r) \coloneqq \{ y \in X : d(x,y) < r \}$. \begin{lemma} \label{metric on N infinity} Define $\rho : \mathbb{N}_\infty \times \mathbb{N}_\infty \to \mathbb{R}$ by \[ \rho (x, y) \coloneqq \begin{dcases} \frac{1}{x} + \frac{1}{y}, &\text{if } x \neq y; \\ 0, &\text{if } x=y, \end{dcases} \] for $x,y \in \mathbb{N}_\infty$. Then $\rho$ is a metric on $\mathbb{N}_\infty$ and induces $\mathcal{T}$. \end{lemma} \begin{proof} It is straightforward to check that $\rho$ is a metric on $\mathbb{N}_\infty$, so we prove the second assertion only. Let $O \in \mathcal{T}$. We show that $O$ is $\rho$-open. Suppose $x \in O$. Then either $x \in \mathbb{N}$ or $x = \infty$. 
If $x \in \mathbb{N}$, then $x \in \{ x \} = B_{\rho} \left( x ; \frac{1}{2 x} \right) \subseteq O$, and hence $O$ is a neighborhood of $x$ in the $\rho$-topology. Now assume $x = \infty$. Then $\mathbb{N}_\infty \setminus O \subseteq \mathbb{N}$ is finite. So we can find a $K \in \mathbb{N}$ such that every $n \geq K$ is in $O$. Then $x \in \{ \infty \} \cup \{ K+1, K+2, K+3, \dotsc \} = B_\rho \left( \infty; \frac{1}{K} \right) \subseteq O$. Thus $O$ is a neighborhood of $x$ in the $\rho$-topology. Since $x \in O$ is arbitrary, we conclude that $O$ is $\rho$-open. Conversely, let $U \subseteq \mathbb{N}_\infty$ be a $\rho$-open set. Suppose $x \in U$. Then either $x \in \mathbb{N}$ or $x = \infty$. Assume first $x \in \mathbb{N}$. Then $\{ x \}$, which is open in $(\mathbb{N}_\infty, \mathcal{T})$, satisfies $x \in \{ x \} \subseteq U$. Hence $U$ is a neighborhood of $x$ in $(\mathbb{N}_\infty, \mathcal{T})$. Now assume $x = \infty$. Since $U$ is $\rho$-open, we can find an $r>0$ such that $B_{\rho} (\infty; r) \subseteq U$. Note that $y \in B_{\rho} (\infty; r)$ if and only if $y = \infty$ or $y > \frac{1}{r}$. Then $B_\rho (\infty; r)$ contains $\{ \infty \} \cup \{ \lfloor 1/r \rfloor + 1, \lfloor 1/r \rfloor + 2, \lfloor 1/r \rfloor + 3, \dotsc \}$ which is an open set in $(\mathbb{N}_\infty, \mathcal{T})$ containing $x = \infty$. Thus $U$ is a neighborhood of $x$ in $(\mathbb{N}_\infty, \mathcal{T})$. \end{proof} Tychonoff's theorem tells us that $\mathbb{N}_\infty^\mathbb{N}$ is compact in the product topology, as a countable product of the compact space $(\mathbb{N}_\infty, \mathcal{T})$. It is easy to see that any non-empty open set in the product topology contains a non-Pierce sequence, so that $\Sigma$ is not open in $\mathbb{N}_\infty^\mathbb{N}$. However, $\Sigma$ is closed in the product topology. \begin{lemma} \label{Sigma is closed} $\Sigma$ is closed in $\mathbb{N}_\infty^\mathbb{N}$, and so $\Sigma$ is compact in the product topology. \end{lemma} \begin{proof} We show that $\mathbb{N}_\infty^\mathbb{N} \setminus \Sigma$ is open, i.e., every point of $\mathbb{N}_\infty^\mathbb{N} \setminus \Sigma$ is an interior point. Let $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}} \in \mathbb{N}_\infty^\mathbb{N} \setminus \Sigma$. By definition of $\mathbb{N}_\infty^\mathbb{N} \setminus \Sigma$, we can find an index $K \in \mathbb{N}$ such that $\sigma_K \geq \sigma_{K+1} \neq \infty$, say $M \coloneqq \sigma_{K+1} \in \mathbb{N}$. Consider a subset $O \subseteq \mathbb{N}_\infty^\mathbb{N}$ given by \[ O \coloneqq\mathbb{N}_\infty^{\{ 1, \dotsc, K-1 \}} \times (\mathbb{N}_\infty \setminus \{ 1, 2, \dotsc, M-1 \} ) \times \{ M \} \times \mathbb{N}_\infty^{\mathbb{N} \setminus \{ 1, 2, \dotsc, K+1 \}}. \] Then $O$ is open in $\mathbb{N}_\infty^\mathbb{N}$ by definition of the product topology, since $\mathbb{N}_\infty \setminus \{ 1, 2, \dotsc, M-1 \}$ and $\{ M \}$ are open in $(\mathbb{N}_\infty, \mathcal{T})$. It is clear that $\sigma \in O \subseteq \mathbb{N}_\infty^\mathbb{N} \setminus \Sigma$. Thus $\sigma$ is an interior point of $\mathbb{N}_\infty^\mathbb{N} \setminus \Sigma$ and this proves that $\Sigma$ is closed in $\mathbb{N}_\infty^\mathbb{N}$. The second assertion follows immediately since $\mathbb{N}_\infty^\mathbb{N}$ is compact in the product topology by Tychonoff's theorem, so that its closed subspace $\Sigma$ is compact.
\end{proof} Let $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}}$ and $\tau \coloneqq (\tau_n)_{n \in \mathbb{N}}$ be any two elements in $\mathbb{N}_\infty^\mathbb{N}$. We define $\rho^\mathbb{N} \colon \mathbb{N}_\infty^\mathbb{N} \times \mathbb{N}_\infty^\mathbb{N} \to \mathbb{R}$ by \[ \rho^\mathbb{N} (\sigma, \tau) \coloneqq \sum_{n=1}^\infty \frac{\rho (\sigma_n, \tau_n)}{n!}, \] where $\rho \colon \mathbb{N}_\infty \times \mathbb{N}_\infty \to \mathbb{R}$ is defined as in Lemma \ref{metric on N infinity}. Notice that we have $\rho (\sigma_n, \tau_n) \leq \frac{1}{\sigma_n} + \frac{1}{\tau_n} \leq \frac{1}{n} + \frac{1}{n} \leq 2$ for each $n \in \mathbb{N}$ by \eqref{sigma n bound}, and $\sum_{n=1}^\infty \frac{2}{n!} < \infty$. We deduce that $\rho^\mathbb{N}$ is well-defined. \begin{lemma} \label{metric on Sigma} $\rho^\mathbb{N}$ is a metric on $\mathbb{N}_\infty^\mathbb{N}$ and the topology induced by $\rho^\mathbb{N}$ is equivalent to the product topology on $\mathbb{N}_\infty^\mathbb{N}$. \end{lemma} \begin{proof} It is straightforward to check that $(\mathbb{N}_\infty^\mathbb{N},\rho^\mathbb{N})$ is a metric space. The proof of the second assertion is almost identical to the standard proof of a well-known fact that the countable product of metric spaces is metrizable. So we omit the details. \end{proof} \begin{lemma} \label{Sigma is compact} $(\Sigma, \rho^\mathbb{N})$ is a compact metric space. \end{lemma} \begin{proof} The lemma is immediate from Lemmas \ref{Sigma is closed} and \ref{metric on Sigma}. \end{proof} For a given $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma$, we define $\sigma^{(n)} \coloneqq (\tau_k)_{k \in \mathbb{N}}\in \Sigma$ for each $n \in \mathbb{N}$, by \[ \tau_k \coloneqq \begin{cases} \sigma_k, &\text{if } 1 \leq k \leq n; \\ \infty, &\text{otherwise}, \end{cases} \] i.e., $\sigma^{(n)} = (\sigma_1, \dotsc, \sigma_n, \infty, \infty, \dots)$. It is worth pointing out that it is not always the case that $\sigma^{(n)} \in \Sigma_n$, since we might have $\sigma_k = \infty$ for some $1 \leq k \leq n$. Fix $n \in \mathbb{N}$ and $\sigma \in \Sigma_n$. Let $\Upsilon_\sigma$ be the collection of sequences in $\Sigma$ defined as \[ \Upsilon_\sigma \coloneqq \{ \upsilon \in \Sigma : \upsilon^{(n)} = \sigma \} \] and we call $\Upsilon_\sigma$ the {\em cylinder set} of order $n$ associated with $\sigma$. Then $\Upsilon_\sigma$ consists of all sequences in $\Sigma$ whose initial $n$ terms agree with those of $\sigma$. By Lemma \ref{Sigma is closed}, it is clear that $\Upsilon_\sigma$ is compact in $\Sigma$ as a closed set in a compact space. Since $\Upsilon_\sigma$ is open in $\Sigma$ as well, it follows that $\Sigma \setminus \Upsilon_\sigma$ is compact by the same lemma. We also define the {\em fundamental interval} of order $n$ associated with $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}}$ by \begin{align*} I_\sigma \coloneqq \{ x \in I : d_k(x) = \sigma_k \text{ for all } 1 \leq k \leq n \} = f^{-1} (\Upsilon_\sigma). \end{align*} Then any number $x \in I_\sigma$ has its Pierce expansion beginning with $(\sigma_k)_{k=1}^n$, i.e., \[ x = [d_1(x), \dotsc, d_n(x), d_{n+1}(x), d_{n+2}(x), \dots]_P = [\sigma_1, \dotsc, \sigma_n, d_{n+1}(x), d_{n+2}(x), \dotsc]_P. \] In view of the following proposition, the reason for $I_\sigma$ being called an interval should be clear. 
For each $n \in \mathbb{N}$ and $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma_n$, we write $\widehat{\sigma} \coloneqq (\widehat{\sigma}_k)_{k \in \mathbb{N}} \in \Sigma_n$ where the $\widehat{\sigma}_k$ are given by \[ \widehat{\sigma}_k = \begin{cases} \sigma_n+1, &\text{if } k = n; \\ \sigma_k, &\text{otherwise}, \end{cases} \] i.e., $\widehat{\sigma} = (\sigma_1, \dotsc, \sigma_{n-1}, \sigma_n+1, \infty, \infty, \dotsc)$. \begin{proposition} [\cite{Sha86}, Theorem 1] \label{I sigma} Let $n \in \mathbb{N}$ and $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma_n$. If $\sigma \in \Sigma_{\realizable}$, then \begin{align} \label{fundamental interval 1} I_\sigma = \begin{cases} ( \varphi (\widehat{\sigma}), \varphi (\sigma)], &\text{if $n$ is odd}; \\ [ \varphi (\sigma), \varphi (\widehat{\sigma})), &\text{if $n$ is even}. \end{cases} \end{align} If $\sigma \not \in \Sigma_{\realizable}$, we have instead that $I_\sigma$ is an open interval with the same endpoints, i.e., \begin{align*} \label{fundamental interval 2} I_\sigma = \begin{cases} ( \varphi (\widehat{\sigma}), \varphi (\sigma)), &\text{if $n$ is odd}; \\ ( \varphi (\sigma), \varphi (\widehat{\sigma})), &\text{if $n$ is even}. \end{cases} \tag{\ref{fundamental interval 1}$'$} \end{align*} Consequently, the length of $I_\sigma$ is \begin{align} \label{length of I sigma} \lambda (I_\sigma) = | \varphi (\sigma) - \varphi (\widehat{\sigma})| = \frac{1}{\sigma_1 \dotsm \sigma_{n-1} \sigma_n (\sigma_n+1)}. \end{align} \end{proposition} We illustrate the exclusion of the endpoint $\varphi (\sigma)$ in \eqref{fundamental interval 2} by an example. Consider two sequences $\sigma \coloneqq (2) \in \Sigma_1 \cap \Sigma_{\realizable}$ and $\sigma' \coloneqq (1,2) \in \Sigma_2 \cap (\Sigma \setminus \Sigma_{\realizable})$. Then $\varphi (\sigma) = \frac{1}{2}$ and $\varphi (\sigma') = \frac{1}{1} - \frac{1}{1 \cdot 2} = \frac{1}{2}$ are equal and so they have the same Pierce expansion, namely $[2]_P$. It follows by the definition of fundamental intervals that $I_{\sigma}$ contains $\varphi (\sigma)$, whereas $I_{\sigma'}$ fails to contain $\varphi (\sigma')$. For later use, we record an upper bound for $\lambda (I_\sigma)$ derived from \eqref{length of I sigma}. For each $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma_n$, since $\sigma_k \geq k$ for $1 \leq k \leq n$ by \eqref{sigma n bound}, we have that \begin{align} \label{bound for length of I sigma} \lambda (I_\sigma) \leq \frac{1}{(1)(2) \dotsm (n-1)(n)(n+1)} = \frac{1}{(n+1)!}. \end{align} \subsection{Mappings $\varphi \colon \Sigma \to I$ and $f \colon I \to \Sigma$} By definition, the following observation is immediate. \begin{observation} $\varphi \circ f = \id_{I}$ and $f \circ (\varphi|_{\Sigma_{\realizable}}) = \id_{\Sigma_{\realizable}}$, where $\varphi|_{\Sigma_{\realizable}}$ is the restriction of $\varphi$ to $\Sigma_{\realizable}$, but $f \circ \varphi \neq \id_\Sigma$ in general. \end{observation} For a fixed $\sigma \in \Sigma_n$ for some $n \in \mathbb{N}$, we can explicitly describe the relation between the cylinder set $\Upsilon_\sigma \subseteq \Sigma$ and the fundamental interval $I_\sigma \subseteq I$ in terms of the map $f \colon I \to \Sigma$. We first observe the following from the definition $I_\sigma = f^{-1}(\Upsilon_\sigma)$. \begin{observation} Let $n \in \mathbb{N}$ and $\sigma \in \Sigma_n$. Then $f(I_\sigma) \subseteq \Upsilon_\sigma$. 
\end{observation} The inclusion in the above observation is proper, i.e., $f(I_\sigma) \subsetneq \Upsilon_\sigma$, and by Proposition \ref{characterization of Sigma re} we explicitly have \begin{align} \label{cylinder set minus f image} \Upsilon_\sigma \setminus f(I_\sigma) = \bigcup_{m \geq n} \{ \tau \coloneqq (\tau_k)_{k \in \mathbb{N}} \in \Sigma_{m} : \tau^{(n)} = \sigma \text{ and } \tau_{m-1}+1 = \tau_{m} \} \subseteq \Sigma \setminus f(I). \end{align} But $f(I_\sigma)$ is not much smaller than $\Upsilon_\sigma$, in the sense that $f (I_\sigma)$ is dense in $\Upsilon_\sigma$. \begin{lemma} \label{f image of fundamental interval is dense} Let $n \in \mathbb{N}$ and $\sigma \in \Sigma_n$. Then $\overline{f (I_\sigma)} = \Upsilon_\sigma$. \end{lemma} \begin{proof} Put $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma_n$. It is clear that $\Upsilon_\sigma$ is closed in $\Sigma$. Since $f(I_\sigma) \subseteq \Upsilon_\sigma$ and $\Upsilon_\sigma$ is closed, it suffices to show that any point in $\Upsilon_\sigma \setminus f(I_\sigma)$ is a limit point of $f(I_\sigma)$. Let $\upsilon \coloneqq (\upsilon_k)_{k \in \mathbb{N}} \in \Upsilon_\sigma \setminus f (I_\sigma)$. Then, by \eqref{cylinder set minus f image}, $\upsilon$ is of the form \[ (\sigma_1, \dotsc, \sigma_{n-1}, \sigma_n, \dotsc, \upsilon_{m-1}, \upsilon_{m}, \infty, \infty, \dotsc) \] where $\upsilon_{m-1} + 1 = \upsilon_m$, for some $m \geq n$. Consider a sequence $(\bm{\tau}_k)_{k \in \mathbb{N}}$ in $\Sigma$ given by \[ \bm{\tau}_k \coloneqq (\sigma_1, \dotsc, \sigma_n, \dotsc, \upsilon_{m-1}, \upsilon_m, \upsilon_m + (k+1), \infty, \infty, \dotsc) \] for each $k \in \mathbb{N}$. Then $\bm{\tau}_k \in f(I_\sigma)$ for all $k \in \mathbb{N}$ by Proposition \ref{characterization of Sigma re}, since $\bm{\tau}_k \in \Sigma_{m+1}$ and $\upsilon_m + 1 < \upsilon_m + (k+1)$. Clearly, $\bm{\tau}_k \to \upsilon$ as $k \to \infty$. This completes the proof. \end{proof} Similarly, any sequence in $\Sigma$ can be approximated arbitrarily closely by sequences in $f(I)$. \begin{lemma} \label{closure of f I} $\Sigma = \overline{f(I)}$. \end{lemma} \begin{proof} Since $f(I) \subseteq \Sigma$, it suffices to show that any point in $\Sigma \setminus f(I)$ is a limit point of $f(I)$. Let $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma \setminus f(I)$. Then $\sigma \in \Sigma \setminus \Sigma_{\realizable}$, so by Proposition \ref{characterization of Sigma re}, $\sigma \in \Sigma_n$ for some $n \geq 2$ with $\sigma_{n-1}+1 = \sigma_n$. Now, an argument similar to the one in the proof of Lemma \ref{f image of fundamental interval is dense} shows that there is a sequence in $f(I)$ converging to $\sigma$. Hence the result. \end{proof} We are now concerned with the continuity of two maps of interest. We first show that $\varphi \colon \Sigma \to I$ is a Lipschitz mapping. \begin{lemma} \label{phi is Lipschitz} For any $\sigma, \tau \in \Sigma$, we have $|\varphi (\sigma) - \varphi (\tau)| \leq \rho^\mathbb{N} (\sigma, \tau)$. \end{lemma} \begin{proof} Let $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}}, \tau \coloneqq (\tau_n)_{n \in \mathbb{N}} \in \Sigma$. If $\sigma = \tau$, there is nothing to prove, so we suppose that $\sigma$ and $\tau$ are distinct. If $\sigma_1 \neq \tau_1$, then \[ |\varphi (\sigma) - \varphi (\tau)| \leq |\varphi (\sigma)| + |\varphi (\tau)| \leq \frac{1}{\sigma_1} + \frac{1}{\tau_1} \leq \rho^{\mathbb{N}} (\sigma, \tau).
\] Assume that $\sigma$ and $\tau$ share the initial block of length $n \in \mathbb{N}$, i.e., $\sigma^{(n)} = \tau^{(n)}$ and $\sigma_{n+1} \neq \tau_{n+1}$. Then \begingroup \allowdisplaybreaks \begin{align*} | \varphi (\sigma) - \varphi (\tau)| &= | ( \varphi (\sigma) - \varphi(\sigma^{(n)}) ) - (\varphi (\tau) - \varphi (\tau^{(n)}) ) | \\ &= |(\varphi (\sigma) - \varphi_n (\sigma)) - (\varphi (\tau) - \varphi_n (\tau))| \leq |\varphi (\sigma) - \varphi_n (\sigma)| + |\varphi (\tau) - \varphi_n (\tau)| \\ &\leq \frac{1}{\sigma_1 \dotsm \sigma_{n}} \left( \frac{1}{\sigma_{n+1}} + \frac{1}{\tau_{n+1}} \right) \leq \frac{1}{n!} \left( \frac{1}{\sigma_{n+1}} + \frac{1}{\tau_{n+1}} \right) \\ &\leq \frac{1}{n!} \left( \frac{1}{\sigma_{n}} + \frac{1}{\tau_{n}} \right) \leq \rho^{\mathbb{N}} (\sigma, \tau), \end{align*} \endgroup where we used \eqref{phi - phi n} and \eqref{sigma n bound} for the second and third inequality, respectively. \end{proof} Now we prove that $f \colon I \to \Sigma$ is continuous at every irrational number and at two rational numbers $0$ and $1$. \begin{lemma} \label{f is continuous at irrational} $f \colon I \to \Sigma$ is continuous at every $x \in I \setminus E'$. \end{lemma} \begin{proof} By Proposition \ref{inverse image of phi}(ii), it suffices to show that $f$ is continuous at $x \in I$ for which $\varphi^{-1} ( x )$ is a singleton. Suppose otherwise. Put $\{ \sigma \} \coloneqq \varphi^{-1} ( x )$ for some $\sigma \in \Sigma$. Then $f(x) = \sigma$ by definition. Since $f$ is not continuous at $x$, we can find an $\varepsilon > 0$ and a sequence $(x_n)_{n \in \mathbb{N}}$ in $I$ such that $|x-x_n| < \frac{1}{n}$ but $\rho^\mathbb{N} (\sigma, f(x_n)) \geq \varepsilon$ for all $n \in \mathbb{N}$. Since $(\tau_n)_{n \in \mathbb{N}} \coloneqq (f(x_n))_{n \in \mathbb{N}}$ is a sequence in a compact metric space $\Sigma$ (Lemma \ref{Sigma is compact}), there is a subsequence $(\tau_{n_k})_{k \in \mathbb{N}}$ converging to some $\tau \in \Sigma$. Note that $x_{n_k} = \varphi (f(x_{n_k})) = \varphi (\tau_{n_k})$ for each $k \in \mathbb{N}$. Now, by continuity of $\varphi$ (Lemma \ref{phi is Lipschitz}), we see that $x_{n_k} \to \varphi (\tau)$ as $k \to \infty$. Since $x$ is the limit of $(x_n)_{n \in \mathbb{N}}$, it follows that $x = \varphi (\tau)$. Thus $\tau = \sigma$ by the singleton assumption. But then $\rho^\mathbb{N} (\tau, f(x_{n_k})) = \rho^\mathbb{N} (\tau, \tau_{n_k}) \geq \varepsilon$ for all $k \in \mathbb{N}$, by our choice of $\varepsilon$ and $(x_n)_{n \in \mathbb{N}}$. This contradicts the convergence of $( \tau_{n_k})_{k \in \mathbb{N}}$ to $\tau$. Therefore, $f$ is continuous at $x \in I$ for which $\varphi^{-1} ( x )$ is a singleton, and hence at every $x \in I \setminus E'$. \end{proof} However, the continuity does not hold at any rational number in the open unit interval $(0,1)$. Notice in Proposition \ref{inverse image of phi}(i) that $\sigma \not \in \Upsilon_{\sigma'}$ and $\sigma' \not \in \Upsilon_\sigma$. \begin{lemma} \label{f is discontinuous at rational} Let $x \in E'$ and put $\varphi^{-1} ( x ) = \{ \sigma, \tau \}$. Then $f$ is not continuous at $x$, in particular, we have \[ \lim_{\substack{t \to x \\ t \in I_\sigma}} f(t) = \sigma \quad \text{and} \quad \lim_{\substack{t \to x \\ t \not \in I_\sigma}} f(t) = \tau. \] \end{lemma} \begin{proof} The argument is similar to the proof of Lemma \ref{f is continuous at irrational}. 
The main difference in this proof is the use of compactness of $\Upsilon_\sigma$ and $\Sigma \setminus \Upsilon_\sigma$. Suppose to the contrary that the first convergence fails to hold. We can find an $\varepsilon > 0$ and a sequence $(x_n)_{n \in \mathbb{N}}$ in $I_\sigma$ such that $|x-x_n| < \frac{1}{n}$ but $\rho^\mathbb{N} (\sigma, f(x_n)) \geq \varepsilon$ for all $n \in \mathbb{N}$. Since $(\upsilon_n)_{n \in \mathbb{N}} \coloneqq (f(x_n))_{n \in \mathbb{N}}$ is a sequence in $f(I_\sigma) \subseteq \Upsilon_\sigma$ and $\Upsilon_\sigma$ is a compact metric space, there is a subsequence $(\upsilon_{n_k})_{k \in \mathbb{N}}$ converging to some $\upsilon \in \Upsilon_\sigma$. Note that $x_{n_k} = \varphi (f(x_{n_k})) = \varphi (\upsilon_{n_k})$ for each $k \in \mathbb{N}$. Now, by continuity of $\varphi$ (Lemma \ref{phi is Lipschitz}), we see that $x_{n_k} \to \varphi (\upsilon)$ as $k \to \infty$. Since $x$ is the limit of $(x_n)_{n \in \mathbb{N}}$, it follows that $x = \varphi (\upsilon)$. Thus $\upsilon = \sigma$ or $\upsilon = \tau$ by the doubleton assumption. Since $\tau \not \in \Upsilon_\sigma$ by Proposition \ref{inverse image of phi}(i), it must be that $\upsilon = \sigma$. But then $\rho^\mathbb{N} (\upsilon, f(x_{n_k})) = \rho^\mathbb{N} (\upsilon, \upsilon_{n_k}) \geq \varepsilon$ for all $k \in \mathbb{N}$, by our choice of $\varepsilon$ and $(x_n)_{n \in \mathbb{N}}$. This contradicts the convergence of $( \upsilon_{n_k})_{k \in \mathbb{N}}$ to $\upsilon$. Therefore, $\lim_{\substack{t \to x \\ t \in I_\sigma}} f(t) = \sigma$. The proof for the second convergence is similar. First note that since $\Upsilon_\sigma \setminus f(I_\sigma)$ and $f(I)$ are disjoint by \eqref{cylinder set minus f image}, we have \[ f(I \setminus I_\sigma) = f(I) \setminus f(I_\sigma) = [f(I) \cup (\Upsilon_\sigma \setminus f(I_\sigma))] \setminus [f(I_\sigma) \cup (\Upsilon_\sigma \setminus f(I_\sigma))] \subseteq \Sigma \setminus \Upsilon_\sigma, \] where the first equality follows from the injectivity of $f$. Now, in the preceding paragraph, by replacing $I_\sigma$ and $\Upsilon_\sigma$ by $I \setminus I_\sigma$ and $\Sigma \setminus \Upsilon_\sigma$, respectively, and exchanging the roles of $\sigma$ and $\tau$, we obtain the desired result. \end{proof} In the preceding lemma, notice that there is no additional assumption for $\sigma$ and $\tau$. Compare this with Proposition \ref{inverse image of phi}(i). Hence, Lemma \ref{f is discontinuous at rational} holds for either the case where $\sigma \in \Sigma_{\realizable}$ with $\tau \in \Sigma \setminus \Sigma_{\realizable}$ or where $\sigma \in \Sigma \setminus \Sigma_{\realizable}$ with $\tau \in \Sigma_{\realizable}$. \subsection{The error-sum functions $\mathcal{E}$ and $\mathcal{E}^*$} We first establish the relation between two error-sum functions $\mathcal{E} \colon I \to \mathbb{R}$ and $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$. \begin{lemma} \label{commutative diagram} $\mathcal{E} = \mathcal{E}^* \circ f$, i.e., the following diagram commutes: \begin{center} \begin{tikzcd}[column sep=small] I \arrow{rr}{\mathcal{E}} \arrow[swap, shift right=.75ex]{dr}{f}& &\mathbb{R} \\ & \Sigma \arrow[swap]{ur}{\mathcal{E}^*} \end{tikzcd} \end{center} \end{lemma} \begin{proof} Let $x \in I$. Under the mapping $f \colon I \to \Sigma$, we obtain the sequence of Pierce expansion digits, namely, if $x = [d_1(x), d_2(x), \dotsc]_P$, then $\sigma \coloneqq f(x) = ( d_1(x), d_2(x), \dotsc ) \in \Sigma$. 
Now, by definition of $\varphi$, we have \[ \varphi (\sigma) = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{d_1(x) \dotsm d_k(x)} = x \] and by definitions of $\varphi_n$ and $s_n$, we have, for each $n \in \mathbb{N}$, \[ \varphi_n(\sigma) = \sum_{k=1}^n \frac{(-1)^{k+1}}{d_1(x) \dotsm d_k(x)} = s_n(x). \] Thus $(\mathcal{E}^* \circ f)(x) = \mathcal{E}(x)$ for all $x \in I$. \end{proof} However, the equality $\mathcal{E} \circ \varphi = \mathcal{E}^*$ does not hold in general. For instance, consider $(2,3) \in \Sigma_2 \cap (\Sigma \setminus \Sigma_{\realizable})$. On one hand, $\varphi ((2,3)) = \frac{1}{2} - \frac{1}{2 \cdot 3} = \frac{1}{3} = [3]_P$, and so $(\mathcal{E} \circ \varphi)((2,3)) = \mathcal{E}([3]_P) = \frac{1}{3} - \frac{1}{3} = 0$. On the other hand, $\mathcal{E}^*((2,3)) = \left( \frac{1}{3} - \frac{1}{2} \right) + \left( \frac{1}{3} - \left( \frac{1}{2} - \frac{1}{2 \cdot 3} \right) \right) = -\frac{1}{6}$. \begin{lemma} \label{sequence lemma} For any $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}} \in \Sigma$, the sequence $\left( \frac{n}{\sigma_1 \dotsm \sigma_{n+1}} \right)_{n \in \mathbb{N}}$ is monotonically decreasing to $0$ and the series $\sum_{n=1}^\infty \frac{n}{\sigma_1 \dotsm \sigma_{n+1}}$ is convergent. \end{lemma} \begin{proof} Note that $n > \frac{n+1}{n+2}$ for each $n \in \mathbb{N}$. Hence, by \eqref{sigma n bound}, \begin{align*} \frac{n}{(n+1)!} \geq \frac{n}{\sigma_1 \dotsm \sigma_{n+1}} \geq \frac{n+1}{\sigma_1 \dotsm \sigma_{n+1} \sigma_{n+2}} \geq 0 \end{align*} for every $n \in \mathbb{N}$, with $\sum_{n=1}^\infty \frac{n}{(n+1)!} < \infty$. Hence the result. \end{proof} We derive one simple formula for $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$ which will be used frequently in the subsequent discussion. \begin{lemma} \label{E star formula lemma} Let $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}} \in \Sigma$. Then \begin{align} \label{E star formula} \mathcal{E}^*(\sigma) = \sum_{n=1}^\infty \frac{(-1)^n n}{\sigma_1 \dotsm \sigma_{n+1}} \end{align} \end{lemma} \begin{proof} Let $(a_n)_{n \in \mathbb{N}}$ be a sequence of real numbers defined by \[ a_n \coloneqq \frac{(-1)^k}{\sigma_1 \dotsm \sigma_{k+1}} \quad \text{for} \quad \frac{(k-1) k}{2} < n \leq \frac{k(k+1)}{2} = \frac{(k-1)k}{2} + k \] for each $k \in \mathbb{N}$. Notice that $k$ many consecutive terms are all equal, e.g., the first few terms of the sequence are as follows: \begin{align*} a_1 = - \frac{1}{\sigma_1 \sigma_2}, \quad a_2 = a_3 = \frac{1}{\sigma_1 \sigma_2 \sigma_3}, \quad a_4 = a_5 = a_6 = - \frac{1}{\sigma_1 \sigma_2 \sigma_3 \sigma_4}, \quad \dotsc. \end{align*} Then the series $\sum_{n=1}^\infty a_n$ is absolutely convergent by Lemma \ref{sequence lemma}. By the absolute convergence, we have, on one hand, \begin{align*} \sum_{n=1}^\infty a_n &= \sum_{j=1}^\infty \sum_{k=j}^\infty a_{\frac{(k-1) k}{2} + j} = \sum_{j=1}^\infty \sum_{k=j}^\infty \frac{(-1)^k}{\sigma_1 \dotsm \sigma_{k+1}} = \sum_{j=1}^\infty \sum_{l=j+1}^\infty \frac{(-1)^{l+1}}{\sigma_1 \dotsm \sigma_l} \\ &= \sum_{j=1}^\infty \left( \sum_{l=1}^\infty \frac{(-1)^{l+1}}{\sigma_1 \dotsm \sigma_l} - \sum_{l=1}^j \frac{(-1)^{l+1}}{\sigma_1 \dotsm \sigma_l} \right) = \sum_{j=1}^\infty ( \varphi (\sigma) - \varphi_j (\sigma) ) = \mathcal{E}^* (\sigma) . 
\end{align*} On the other hand, again by the absolute convergence, we have \begin{align*} \sum_{n=1}^\infty a_n &= \sum_{k=1}^\infty \sum_{j=1}^k a_{\frac{(k-1) k}{2} + j} = \sum_{k=1}^\infty \sum_{j=1}^k \frac{(-1)^k}{\sigma_1 \dotsm \sigma_{k+1}} = \sum_{k=1}^\infty \frac{(-1)^k k}{\sigma_1 \dotsm \sigma_{k+1}}. \end{align*} Therefore, \eqref{E star formula} follows by combining the two results. \end{proof} The boundedness of $\mathcal{E} \colon I \to \mathbb{R}$ readily follows. \begin{theorem} \label{boundedness theorem} For any $\sigma \in \Sigma$, we have $- \frac{1}{2} \leq \mathcal{E}^* (\sigma) \leq 0$. Consequently, $- \frac{1}{2} < \mathcal{E}(x) \leq 0$ for all $x \in I$. \end{theorem} \begin{proof} We make use of \eqref{E star formula} and Lemma \ref{sequence lemma} to obtain both the desired upper and lower bounds. On one hand, for any $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}} \in \Sigma$, we have \begin{align*} \mathcal{E}^*(\sigma) &= -\frac{1}{\sigma_1 \sigma_2} + \sum_{j=1}^\infty \left( \frac{2j}{\sigma_1 \dotsm \sigma_{2j+1}} - \frac{2j+1}{\sigma_1 \dotsm \sigma_{2j+2}} \right) \geq - \frac{1}{\sigma_1 \sigma_2} \geq - \frac{1}{(1)(2)} = - \frac{1}{2}, \end{align*} where the last inequality follows from \eqref{sigma n bound}. Notice that the equalities hold if and only if $\sigma = (1,2) \in \Sigma_2 \cap (\Sigma \setminus \Sigma_{\realizable})$. On the other hand, for any $\sigma \coloneqq (\sigma_n)_{n \in \mathbb{N}} \in \Sigma$, we have \begin{align*} \mathcal{E}^*(\sigma) &= - \sum_{j=1}^\infty \left( \frac{2j-1}{\sigma_1 \dotsm \sigma_{2j}} - \frac{2j}{\sigma_1 \dotsm \sigma_{2j+1}} \right) \leq 0. \end{align*} The second assertion is immediate in view of Lemma \ref{commutative diagram} and $(1,2) \not \in f(I)$. \end{proof} \begin{lemma}\label{E star is continuous} $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$ is continuous. \end{lemma} \begin{proof} We showed that the series in \eqref{definition of E star} is uniformly convergent on $\Sigma$. Moreover, $\varphi$ is continuous by Lemma \ref{phi is Lipschitz} and $\varphi_n$ is clearly continuous, and so each term in the series of $\mathcal{E}^*$ is continuous. Therefore, $\mathcal{E}^*$ is continuous as a uniformly convergent series of continuous functions. \end{proof} The $\lambda$-almost everywhere continuity theorem for $\mathcal{E}(x)$ is now immediate. \begin{theorem} \label{continuity theorem} The error-sum function $\mathcal{E} \colon I \to \mathbb{R}$ is continuous on $I \setminus E'$ and so $\mathcal{E}$ is continuous $\lambda$-almost everywhere. \end{theorem} \begin{proof} Let $x \in I \setminus E'$. By Lemma \ref{f is continuous at irrational}, we know that $f \colon I \to \Sigma$ is continuous at $x$. Moreover, $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$ is continuous by Lemma \ref{E star is continuous}. But $\mathcal{E} = \mathcal{E}^* \circ f$ by Lemma \ref{commutative diagram}, and therefore $\mathcal{E}$ is continuous at $x$. For the second assertion, it is enough to recall that $E' = \mathbb{Q} \cap (0,1)$, which has zero $\lambda$-measure. Thus $I \setminus E'$ is of full $\lambda$-measure and the result follows. \end{proof} On the other hand, we will show that $\mathcal{E} \colon I \to \mathbb{R}$ fails to be continuous at every point of $E'$ (Theorem \ref{discontinuity theorem}). The following lemma plays a key role in the proof of the theorem.
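To see the phenomenon at the simplest rational point, take $x = \frac{1}{2} = [2]_P$. By Proposition \ref{inverse image of phi}(i), $\varphi^{-1} \big( \frac{1}{2} \big) = \{ (2), (1,2) \}$, and directly from \eqref{definition of E star} we obtain \[ \mathcal{E}^* ((2)) = 0 = \mathcal{E} \Big( \frac{1}{2} \Big) \quad \text{and} \quad \mathcal{E}^* ((1,2)) = \Big( \frac{1}{2} - 1 \Big) + \Big( \frac{1}{2} - \frac{1}{2} \Big) = - \frac{1}{2}. \] The lemma below identifies the second value as the limit of $\mathcal{E}(t)$ as $t$ approaches $\frac{1}{2}$ from outside the fundamental interval $I_\sigma$ associated with $\sigma \coloneqq (2)$, which, since $I_\sigma = \big( \frac{1}{3}, \frac{1}{2} \big]$ by Proposition \ref{I sigma}, means from the right.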
\begin{lemma} \label{discontinuity lemma} Let $x \in E'$ and put $\varphi^{-1}( x ) = \{ \sigma, \sigma' \}$ where $\sigma \in \Sigma_{\realizable}$ and $\sigma' \in \Sigma \setminus \Sigma_{\realizable}$. Then \[ \lim_{\substack{t \to x \\ t \not \in I_{\sigma}}} \mathcal{E}(t) = \mathcal{E}^*(\sigma') = \mathcal{E}(x) + \frac{(-1)^n}{d_1(x) \dotsm d_{n-1}(x) (d_n(x)-1) d_n(x)}. \] \end{lemma} \begin{proof} By Lemma \ref{commutative diagram}, the continuity of $\mathcal{E}^*$ (Lemma \ref{E star is continuous}), and Lemma \ref{f is discontinuous at rational}, we obtain the first equality as follows: \[ \lim_{\substack{t \to x \\ t \not \in I_{\sigma}}} \mathcal{E}(t) = \lim_{\substack{t \to x \\ t \not \in I_{\sigma}}} (\mathcal{E}^* \circ f) (t) = \mathcal{E}^* \big( \lim_{\substack{t \to x \\ t \not \in I_{\sigma}}} f(t) \big) = \mathcal{E}^* (\sigma'). \] Since $x \in E'$, by Proposition \ref{inverse image of phi}(i), $x$ has a finite Pierce expansion of positive length, say $x = [d_1(x), d_2(x), \dotsc, d_n(x)]_P$ for some $n \in \mathbb{N}$. Then, since $\varphi (\sigma') = x$, we have \begin{align*} \mathcal{E}^* (\sigma') &= \sum_{k=1}^{n-1} (x - \varphi_k(\sigma')) + (x-\varphi_{n} (\sigma')) + \sum_{k=n+1}^{\infty} (x - \varphi_k(\sigma')). \end{align*} Note that $\sigma' = (d_1(x), \dotsc, d_{n-1}(x), d_n(x)-1, d_n(x)) \in \Sigma_{n+1}$. Hence $s_k(x) = \varphi_k(\sigma')$ for $1 \leq k \leq n-1$ and $x = \varphi_{k+1}(\sigma') = s_k (x)$ for all $k \geq n$. In particular, we have \[ x- \varphi_n (\sigma') = \varphi_{n+1} (\sigma') - \varphi_n (\sigma') = \frac{(-1)^{n}}{d_1(x) \dotsm d_{n-1}(x) (d_n(x)-1) d_n(x)}. \] Thus \begin{align*} \mathcal{E}^* (\sigma') &= \sum_{k=1}^{n-1} (x - s_k(x)) + (x-\varphi_{n} (\sigma')) = \mathcal{E}(x) + \frac{(-1)^n}{d_1(x) \dots d_{n-1}(x) (d_{n}(x)-1) d_{n}(x)}. \end{align*} \end{proof} Now we are ready to prove that $\mathcal{E}$ is discontinuous at every point of the dense subset $E' \subseteq I$. \begin{theorem} \label{discontinuity theorem} Let $x \in E'$ and put $\varphi^{-1}( x ) = \{ \sigma, \sigma' \}$ where $\sigma \in \Sigma_{\realizable}$ and $\sigma' \in \Sigma \setminus \Sigma_{\realizable}$. Write $x = [d_1(x), d_2(x), \dotsc, d_n(x)]_P$ for some $n \in \mathbb{N}$. Then the following hold. \begin{enumerate} \item[(i)] If $n$ is odd, then $\mathcal{E}$ is left-continuous but has a right jump discontinuity at $x$ with the right-hand limit \begin{align} \label{right jump formula} \lim_{t \to x^+} \mathcal{E}(t) = \mathcal{E}^*(\sigma') = \mathcal{E}(x) - \frac{1}{d_1(x) \dots d_{n-1}(x) (d_{n}(x)-1) d_{n}(x)}. \end{align} \item[(ii)] If $n$ is even, then $\mathcal{E}$ is right-continuous but has a left jump discontinuity at $x$ with the left-hand limit \begin{align} \label{left jump formula} \lim_{t \to x^-} \mathcal{E}(t) = \mathcal{E}^*(\sigma') = \mathcal{E}(x) + \frac{1}{d_1(x) \dots d_{n-1}(x) (d_{n}(x)-1) d_{n}(x)}. \end{align} \end{enumerate} \end{theorem} \begin{proof} By Proposition \ref{inverse image of phi}(i), we have \begin{align*} \sigma &= (d_1(x), \dotsc, d_{n-1}(x), d_n(x)) \in \Sigma_n \cap \Sigma_{\realizable}, \\ \sigma' &= (d_1(x), \dotsc, d_{n-1}(x), d_n(x)-1, d_n(x)) \in \Sigma_{n+1} \cap (\Sigma \setminus \Sigma_{\realizable}). \end{align*} Then $\varphi (\sigma) = \varphi (\sigma') = x$ and $\sigma = f(x)$, but $\sigma' \not \in f(I)$. (i) Assume $n$ is odd. 
Since $I_\sigma = (\varphi (\widehat{\sigma}), \varphi (\sigma)]$ by \eqref{fundamental interval 1} and $x = \varphi (\sigma)$, we have that $t \to x^+$ if and only if $t \to x$ with $t \not \in I_\sigma$. For the right-hand limit, apply Lemma \ref{discontinuity lemma} to obtain \eqref{right jump formula}. For the left-hand limit, note that $t \to x^-$ if and only if $t \to x$ with $t \in I_\sigma \setminus \{ x \}$. Then by Lemma \ref{commutative diagram}, the continuity of $\mathcal{E}^*$ (Lemma \ref{E star is continuous}), and Lemma \ref{f is discontinuous at rational} we deduce that \[ \lim_{t \to x^-} \mathcal{E}(t) = \lim_{t \to x^-} (\mathcal{E}^* \circ f)(t) = \mathcal{E}^* \bigg( \lim_{\substack{t \to x \\ t \in I_\sigma \setminus \{ x \}}} f(t) \bigg) = \mathcal{E}^* (\sigma). \] But $\mathcal{E}(x) = (\mathcal{E}^* \circ f)(x) = \mathcal{E}^*(\sigma)$ by Lemma \ref{commutative diagram}, we conclude that $\mathcal{E}$ is left-continuous at $x$. (ii) The proof is similar to that of part (i), so we omit the details. \end{proof} Note that for every point $x \in E'$, $\lim_{t \to x^-} \mathcal{E}(t)$ is strictly greater than $\lim_{t \to x^+} \mathcal{E}(t)$, regardless of left or right discontinuity. The following lemma provides us with the supremum and infimum of $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$ on each cylinder set $\Upsilon_\sigma$. Recall that given $\sigma \coloneqq (\sigma_1, \dotsc, \sigma_{n-1}, \sigma_n) \in \Sigma_n$ for some $n \in \mathbb{N}$, the sequence $\widehat{\sigma}$ is defined as $(\sigma_1, \dotsc, \sigma_{n-1}, \sigma_n+1) \in \Sigma_n$. \begin{lemma} \label{sup inf in Upsilon sigma} Let $n \in \mathbb{N}$ and $\sigma \in \Sigma_n$. Then the following hold. \begin{enumerate} \item[(i)] If $n$ is odd, we have \[ \max_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) = \mathcal{E}^*(\sigma) \quad \text{and} \quad \min_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) = \mathcal{E}^*(\sigma) - n \cdot \lambda (I_\sigma). \] \item[(ii)] If $n$ is even, we have \[ \max_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) = \mathcal{E}^*(\sigma) + n \cdot \lambda (I_\sigma) \quad \text{and} \quad \min_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) = \mathcal{E}^*(\sigma). \] \end{enumerate} \end{lemma} \begin{proof} Put $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma_n$. Let $\widehat{\sigma}' \coloneqq (\widehat{\sigma}'_k)_{k \in \mathbb{N}}$, where $\widehat{\sigma}'_{n+1} = \sigma_{n}+1$ and $\widehat{\sigma}'_k = \sigma_k$ for all $k \in \mathbb{N} \setminus \{ n+1 \}$, i.e., \[ \widehat{\sigma}' = (\sigma_1, \dotsc, \sigma_{n-1}, \sigma_n, \sigma_n+1, \infty, \infty, \dotsc) \in \Upsilon_\sigma. \] Then $\widehat{\sigma} \in \Sigma_{\realizable}$ and $\widehat{\sigma}' \in \Sigma \setminus \Sigma_{\realizable}$ with $\varphi (\widehat{\sigma}) = \varphi (\widehat{\sigma}')$. (i) Assume $n$ is odd. For any $\tau \coloneqq (\tau_k)_{k \in \mathbb{N}} \in \Upsilon_\sigma$, we have $\tau^{(n)} = \sigma$ by definition of the cylinder set, so by using \eqref{E star formula} and Lemma \ref{sequence lemma}, we obtain \begin{align*} \mathcal{E}^*(\tau) &= \sum_{k=1}^{n-1} \frac{(-1)^k k}{\sigma_1 \dotsm \sigma_{k+1}} + \sum_{k=n}^{\infty} \frac{(-1)^k k}{\sigma_1 \dotsm \sigma_{n} \tau_{n+1} \dotsm \tau_{k+1}} \\ &= \mathcal{E}^*(\sigma) - \frac{1}{\sigma_1 \sigma_2 \dotsm \sigma_{n}} \sum_{j=1}^\infty \left( \frac{n+(2j-2)}{\tau_{n+1} \dotsm \tau_{n+(2j-1)}} - \frac{n+(2j-1)}{\tau_{n+1} \dotsm \tau_{n+2j}} \right) \leq \mathcal{E}^*(\sigma). 
\end{align*} This shows that $\mathcal{E}^*(\tau) \leq \mathcal{E}^*(\sigma)$ for any $\tau \in \Upsilon_\sigma$ and that $\mathcal{E}^*(\tau)$ attains the maximum when $\tau = \sigma \in \Upsilon_\sigma$. Again by \eqref{E star formula} and Lemma \ref{sequence lemma}, for any $\tau \coloneqq (\tau_k)_{k \in \mathbb{N}} \in \Upsilon_\sigma$, we have \begin{align*} \mathcal{E}^*(\tau) &= \mathcal{E}^*(\sigma) - \frac{n}{\sigma_1 \sigma_2 \dotsm \sigma_{n} \tau_{n+1}} + \frac{1}{\sigma_1 \sigma_2 \dotsm \sigma_{n}} \sum_{j=1}^\infty \left( \frac{n+(2j-1)}{\tau_{n+1} \dotsm \tau_{n+2j}} - \frac{n+2j}{\tau_{n+1} \dotsm \tau_{n+(2j+1)}} \right) \\ &\geq \mathcal{E}^*(\sigma) - \frac{n}{\sigma_1 \sigma_2 \dotsm \sigma_{n} \tau_{n+1}} \geq \mathcal{E}^*(\sigma) - \frac{n}{\sigma_1 \sigma_2 \dotsm \sigma_{n} (\sigma_n+1)} = \mathcal{E}^*(\sigma) - n \cdot \lambda (I_\sigma), \end{align*} where we used $\tau_{n+1} > \sigma_n$ for the second inequality and \eqref{length of I sigma} for the last equality. Notice that the equalities hold if and only if $\tau_{n+1} = \sigma_{n}+1$ and $\tau_{k} = \infty$ for all $k \geq n+2$, i.e., if and only if $\tau = \widehat{\sigma}' \in \Upsilon_\sigma$. Therefore, $\mathcal{E}^*(\tau) \geq \mathcal{E}^*(\sigma) - n \cdot \lambda (I_\sigma)$ for any $\tau \in \Upsilon_\sigma$ and the minimum is attained when $\tau = \widehat{\sigma}' \in \Upsilon_\sigma$. (ii) The proof is similar to that of part (i), so we omit the details. \end{proof} Using the preceding lemma, we can describe the supremum and infimum of $\mathcal{E} \colon I \to \mathbb{R}$ on each fundamental interval $I_\sigma$. We show that approaching the left endpoint from the right yields the infimum, while approaching the right endpoint from the left yields the supremum. (See Proposition \ref{I sigma} for the left and right endpoints of the fundamental intervals.) \begin{lemma} \label{sup inf in fundamental interval} Let $n \in \mathbb{N}$ and $\sigma \in \Sigma_n$. Then the following hold. \begin{enumerate} \item[(i)] If $n$ is odd, we have \[ \sup_{t \in I_\sigma} \mathcal{E}(t) = \lim_{t \to (\varphi(\sigma))^-} \mathcal{E}(t) \quad \text{and} \quad \inf_{t \in I_\sigma} \mathcal{E}(t) = \lim_{t \to (\varphi(\widehat{\sigma}))^+} \mathcal{E}(t). \] \item[(ii)] If $n$ is even, we have \[ \sup_{t \in I_\sigma} \mathcal{E}(t) = \lim_{t \to (\varphi(\widehat{\sigma}))^-} \mathcal{E}(t) \quad \text{and} \quad \inf_{t \in I_\sigma} \mathcal{E}(t) = \lim_{t \to (\varphi(\sigma))^+} \mathcal{E}(t). \] \end{enumerate} \end{lemma} \begin{proof} (i) Assume $n$ is odd. Put $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma_n$. Then $\widehat{\sigma} = (\sigma_1, \dotsc, \sigma_{n-1}, \sigma_n+1) \in \Sigma_n$ and by Proposition \ref{inverse image of phi}, we have $\varphi (\widehat{\sigma}) = \varphi (\widehat{\sigma}')$, where $\widehat{\sigma}' \coloneqq (\sigma_1, \dotsc, \sigma_{n-1}, \sigma_n, \sigma_n+1) \in \Sigma_{n+1}$ with $\widehat{\sigma} \in \Sigma_{\realizable}$ and $\widehat{\sigma}' \in \Sigma \setminus \Sigma_{\realizable}$. 
By using $\mathcal{E} = \mathcal{E}^* \circ f$ (Lemma \ref{commutative diagram}), $\overline{f(I_\sigma)} = \Upsilon_\sigma$ (Lemma \ref{f image of fundamental interval is dense}), and the continuity of $\mathcal{E}^*$ (Lemma \ref{E star is continuous}), we find that \begin{align} \sup_{t \in I_\sigma} \mathcal{E}(t) = \sup_{f(t) \in f(I_\sigma)} \mathcal{E}^*(f(t)) = \sup_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) &= \max_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) = \mathcal{E}^*(\sigma), \label{sup in I sigma} \\ \inf_{t \in I_\sigma} \mathcal{E}(t) = \inf_{f(t) \in f(I_\sigma)} \mathcal{E}^*(f(t)) = \inf_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) &= \min_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) = \mathcal{E}^*(\widehat{\sigma}') \label{inf in I sigma}, \end{align} where the last two equalities for both \eqref{sup in I sigma} and \eqref{inf in I sigma} follow from Lemma \ref{sup inf in Upsilon sigma} and its proof. For the supremum, notice that, by Proposition \ref{I sigma}, $t \to (\varphi (\sigma))^-$ if and only if $t \to \varphi (\sigma)$ with $t \in I_\sigma \setminus \{ \varphi (\sigma) \}$. Then by Lemma \ref{commutative diagram}, the continuity of $\mathcal{E}^*$ (Lemma \ref{E star is continuous}), and Lemma \ref{f is discontinuous at rational}, we obtain \[ \lim_{t \to (\varphi (\sigma))^-} \mathcal{E}(t) = \lim_{\substack{t \to \varphi (\sigma) \\ t \in I_{\sigma} \setminus \{ \varphi (\sigma) \}}} \mathcal{E}(t) = \mathcal{E}^* \bigg( \lim_{\substack{t \to \varphi (\sigma) \\ t \in I_{\sigma} \setminus \{ \varphi (\sigma) \}}} f(t) \bigg) = \mathcal{E}^*(\sigma). \] Combining this with \eqref{sup in I sigma} gives the result. For the infimum, notice that, by Proposition \ref{I sigma}, $t \to (\varphi (\widehat{\sigma}))^+$ if and only if $t \to \varphi (\widehat{\sigma})$ with $t \not \in I_{\widehat{\sigma}}$. Hence Lemma \ref{discontinuity lemma} tells us that \[ \lim_{t \to (\varphi(\widehat{\sigma}))^+} \mathcal{E}(t) = \lim_{\substack{t \to \varphi(\widehat{\sigma}) \\ t \not \in I_{\widehat{\sigma}}}} \mathcal{E}(t) = \mathcal{E}^* (\widehat{\sigma}'). \] Combining this with \eqref{inf in I sigma} gives the result. (ii) The proof is similar to that of part (i), so we omit the details. \end{proof} The following lemma is an analogue of Lemma 2.6 of \cite{SW07}, where the error-sum function of L{\"u}roth series is discussed. This lemma will serve as the key ingredient in finding a suitable covering for the graph of $\mathcal{E}(x)$ in Section \ref{Section 4}. \begin{lemma} \label{fourth lemma} Let $n \in \mathbb{N}$ and $\sigma \in \Sigma_n$. Then \[ \sup_{t,u \in I_\sigma} |\mathcal{E}(t) - \mathcal{E}(u)| = n \cdot \lambda (I_\sigma). \] \end{lemma} \begin{proof} By Lemmas \ref{commutative diagram}, \ref{f image of fundamental interval is dense}, \ref{E star is continuous}, and \ref{sup inf in Upsilon sigma} we have \[ \sup_{t,u \in I_\sigma} |\mathcal{E}(t) - \mathcal{E}(u)| = \sup_{t \in I_\sigma} \mathcal{E}(t) - \inf_{t \in I_\sigma} \mathcal{E}(t) = \max_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) - \min_{\tau \in \Upsilon_\sigma} \mathcal{E}^*(\tau) = n \cdot \lambda (I_\sigma). \] \end{proof} One might be tempted to say that $\mathcal{E} \colon I \to \mathbb{R}$ is fairly regular in the sense of $\lambda$-almost everywhere continuity (Theorem \ref{continuity theorem}). However, the following theorem tells us that $\mathcal{E}$ is not well-behaved in the bounded variation sense. 
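For instance, for $\sigma \coloneqq (1) \in \Sigma_1$ we have $I_\sigma = \big( \frac{1}{2}, 1 \big]$ and $\lambda (I_\sigma) = \frac{1}{2}$, while \eqref{sup in I sigma} and \eqref{inf in I sigma} give \[ \sup_{t \in I_\sigma} \mathcal{E}(t) = \mathcal{E}^*((1)) = 0 \quad \text{and} \quad \inf_{t \in I_\sigma} \mathcal{E}(t) = \mathcal{E}^*((1,2)) = - \frac{1}{2}, \] so the oscillation of $\mathcal{E}$ on this single fundamental interval is already $1 \cdot \lambda (I_\sigma) = \frac{1}{2}$, in accordance with Lemma \ref{fourth lemma}. Summing such oscillations over all fundamental intervals of a fixed order $n$ is what yields the following result.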
\begin{theorem} \label{bounded variation theorem} The error-sum function $\mathcal{E} \colon I \to \mathbb{R}$ is not of bounded variation. \end{theorem} \begin{proof} Let $V_I (\mathcal{E})$ denote the total variation of $\mathcal{E}$ on $I$. Let $n \in \mathbb{N}$. We consider the collection $\mathcal{I} \coloneqq \{ I_\sigma : \sigma \in \Sigma_n \}$, i.e., the collection of all fundamental intervals of order $n$. Note that $\sum_{\sigma \in \Sigma_n} \lambda (I_\sigma) = 1$. Then, by Lemma \ref{fourth lemma}, we have \[ n = n \sum_{\sigma \in \Sigma_n} \lambda (I_\sigma) = \sum_{\sigma \in \Sigma_n} \sup_{t, u \in I_\sigma} |\mathcal{E}(t) - \mathcal{E}(u)| \leq V_I(\mathcal{E}), \] where the inequality follows from the fact that the $I_\sigma \in \mathcal{I}$ are mutually disjoint intervals. Since $n \in \mathbb{N}$ is arbitrary, it follows that $V_I(\mathcal{E})$ is not finite. This completes the proof. \end{proof} We prove that $\mathcal{E} \colon I \to \mathbb{R}$ enjoys an intermediate value property in some sense, which is an analogue of Theorem 4.3 in \cite{RP00}. Similar results can also be found in Theorem 5 of \cite{JS12}, Theorem 2.6 of \cite{QD10}, and Theorem 2.5 of \cite{SMZ06}. In fact, every result aforementioned is a consequence of the following theorem. \begin{theorem} \label{IVT theorem} Suppose that $g \colon J \to \mathbb{R}$ is a function on an interval $J \subseteq \mathbb{R}$ satisfying the following conditions. \begin{enumerate} \item[(i)] There exists a subset $D$ of the interior of $J$ such that $g$ is continuous on $J \setminus D$. \item[(ii)] For any $x \in D$, $g$ is either left-continuous or right-continuous at $x$ with $\lim_{t \to x^-} g(t) > \lim_{t \to x^+} g(t)$. \end{enumerate} Let $a,b \in J$ with $a<b$. If $g(a) < y < g(b)$, then there exists an $x \in (a,b) \setminus D$ such that $g(x) = y$. \end{theorem} \begin{proof} Consider the set \[ S \coloneqq \{ t \in [a,b] : g (t) < y \} \] and let $t_0 \coloneqq \sup S$. Since $g(a) < y$ by assumption, we have $a \in S$, and hence $S$ is non-empty. So $t_0 \neq - \infty$ and $t_0 \in [a,b]$. Our aim is to show that $t_0$ is a desired root, that is, $g(t_0) = y$ and $t_0 \in (a,b) \setminus D$. We claim that $t_0 > a$. We consider three cases depending on the continuity at $a$. {\sc Case I}. Assume $a \in J \setminus D$, so that $g$ is continuous at $a$ by condition (i). Then, since $g(a) < y$, there is an $\eta_1 \in (0, b-a)$ such that $g(t) < y$ for all $t \in (a-\eta_1, a+\eta_1) \cap J$. So $t_0 \geq a+\eta_1$ and hence $t_0 > a$. {\sc Case II}. Assume that $a \in D$ and $g$ is left-continuous at $a$. Then $\lim_{t \to a^+} g(t) < \lim_{t \to a^-} g(t) = g(a) < y$ by condition (ii) and assumption. By definition of the right-hand limit, there exists an $\eta_2 \in (0, b-a)$ such that $g (t) < y$ for all $t \in (a, a+ \eta_2)$. So $t_0 \geq a + \eta_2$ and hence $t_0 > a$. {\sc Case III}. Assume that $a \in D$ and $g$ is right-continuous at $a$. Then $\lim_{t \to a^+} g(t) = g(a) < y$ by assumption. By definition of the right-hand limit, there exists an $\eta_3 \in (0, b-a)$ such that $g (t) < y$ for all $t \in (a, a + \eta_3)$. So $t_0 \geq a + \eta_3$ and hence $t_0 > a$. By a similar argument, which we omit here, we can show that $t_0 < b$. We have shown above that $t_0 \in (a,b)$. It remains to prove that $g (t_0) = y$ with $t_0 \not \in D$. We show first that $t_0 \not \in D$. Suppose $t_0 \in D$ to argue by contradiction. 
Since $t_0 = \sup S$, we can find a sequence $( a_n )_{n \in \mathbb{N}}$ in $S$ such that $a_n \leq t_0$ for each $n \in \mathbb{N}$ and $a_n \to t_0$ as $n \to \infty$. (We can choose $a_n \in S$ such that $t_0 - \frac{1}{n} < a_n \leq t_0$ for each $n \in \mathbb{N}$.) Similarly, we can find a sequence $( b_n )_{n \in \mathbb{N}}$ in $[a,b] \setminus S$ such that $b_n \geq t_0$ for each $n \in \mathbb{N}$ and $b_n \to t_0$ as $n \to \infty$. Then by our choice of the two sequences, $g(a_n) < y$ and $g(b_n) \geq y$ for all $n \in \mathbb{N}$. Now note that since $t_0 \in D$, $g$ is either left-continuous or right-continuous at $t_0$ by condition (ii). If $g$ is left-continuous at $t_0$, then by condition (ii), we have \[ y \geq \lim_{n \to \infty} g(a_n) = g (t_0) > \lim_{n \to \infty} g(b_n) \geq y, \] which is a contradiction. If $g$ is right-continuous at $t_0$, then by condition (ii), we have \[ y \geq \lim_{n \to \infty} g (a_n) > g (t_0) = \lim_{n \to \infty} g (b_n) \geq y, \] which is a contradiction. This proves that $t_0 \not \in D$, as desired. Since $t_0 \in J \setminus D$, we know from condition (i) that $g$ is continuous at $t_0$. Hence $g(t_0) = y$ by the definitions of $S$ and $t_0$. For, if not, say $g(t_0) < y$, then we can find a $\delta \in (0, \min \{ t_0-a, b-t_0 \})$ such that $g(t)<y$ for all $t \in (t_0-\delta, t_0+\delta)$, which contradicts $t_0 = \sup S$. Similarly, $g(t_0) > y$ gives a contradiction. This completes the proof that $t_0 \in (a,b) \setminus D$ is a root of $g(x) = y$, as desired. \end{proof} \begin{remark} In Theorem \ref{IVT theorem}, the assumption $g(a) < y < g(b)$ for $a<b$ is stricter than that of the standard Intermediate Value Theorem in $\mathbb{R}$. This additional assumption is necessary because $g$ has a sudden drop at every discontinuity. To be precise, for every $x \in D$, we have $\lim_{t \to x^-}g(t) > \lim_{t \to x^+} g(t)$. For the same phenomenon for $\mathcal{E} \colon I \to \mathbb{R}$, see Theorem \ref{discontinuity theorem} and \eqref{right jump formula}, \eqref{left jump formula} therein. \end{remark} \begin{corollary} \label{IVT} Let $a,b \in I$ with $a<b$. If $\mathcal{E}(a) < y < \mathcal{E}(b)$, then there exists an $x \in (a,b) \setminus E$ such that $\mathcal{E}(x) = y$. \end{corollary} \begin{proof} By Theorems \ref{continuity theorem} and \ref{discontinuity theorem}, $\mathcal{E}$ satisfies the two conditions of Theorem \ref{IVT theorem} with $J \coloneqq I$ and $D \coloneqq E'$. Since $(a,b) \setminus E = (a,b) \setminus E'$, the result follows from Theorem \ref{IVT theorem}. \end{proof} Using Theorem \ref{IVT theorem}, we can prove the intermediate value property of $P \colon \mathbb{R} \to \mathbb{R}$, the error-sum function of the regular continued fraction expansion, defined as in Section \ref{Section 1}. Compare the following corollary with Theorem 4.3 of \cite{RP00}, where the authors considered $P|_I$, the restriction of $P$ to $I$. \begin{corollary} Let $a, b \in \mathbb{R}$ with $a<b$. If $P (a) < y < P (b)$, then there exists an $x \in (a,b) \setminus \mathbb{Q}$ such that $P (x) = y$. \end{corollary} \begin{proof} Let $x$ be rational. Then the regular continued fraction expansion of $x$ is of finite length, say $x = [a_0(x); a_1(x), \dotsc, a_n(x)]$ for some $n \in \mathbb{N}_0$.
By Lemma 1.1 and Theorem 2.3 of \cite{RP00}, the following hold: \begin{enumerate} \item[(i)] If $n$ is odd, then $P$ is left-continuous but has a right jump discontinuity at $x$ with the right-hand limit $\lim_{t \to x^+} P(t) = P(x) - \frac{1}{q_n(x)}$. \item[(ii)] If $n$ is even, then $P$ is right-continuous but has a left jump discontinuity at $x$ with the left-hand limit $\lim_{t \to x^-} P(t) = P(x) + \frac{1}{q_n(x)}$. \end{enumerate} Since $q_n(x) >0$ by definition (see \cite{RP00}, p. 274), it follows that $\lim_{t \to x^-} P(t) > \lim_{t \to x^+} P(t)$ for every $x \in \mathbb{Q}$. Moreover, by Theorem 2.3 of \cite{RP00}, $P$ is continuous at every irrational point. Therefore, by taking $J \coloneqq \mathbb{R}$ and $D \coloneqq \mathbb{Q}$ in Theorem \ref{IVT theorem}, the result follows. \end{proof} We have shown that $\mathcal{E}$ is bounded on $I$ (Theorem \ref{boundedness theorem}) and that it is continuous $\lambda$-almost everywhere (Theorem \ref{continuity theorem}). Hence $\mathcal{E}$ is Riemann integrable on $I$. Before calculating the integral, we first find a simple formula for $\mathcal{E}$. \begin{lemma} \label{E formula lemma} For every $x \in I$ and for each $n \in \mathbb{N}$, we have \begin{align} \label{simplified formula} \mathcal{E}(x) = \sum_{k=1}^n ( x - s_k(x) ) + \frac{(-1)^n}{d_1(x) \dotsm d_n(x)} \mathcal{E} (T^n x). \end{align} \end{lemma} \begin{proof} Let $x \in I$ and $n \in \mathbb{N}$. From the definition of digits, we have $d_{n+j}(x) = d_j (T^nx)$ for any $j \in \mathbb{N}$. Then by making use of \eqref{simplified series}, we obtain \begingroup \allowdisplaybreaks \begin{align*} \mathcal{E}(x) - \sum_{k=1}^n ( x - s_k(x) ) &= \sum_{k=n+1}^\infty ( x - s_k(x) ) = \sum_{k=n+1}^\infty \frac{(-1)^k T^kx}{d_1(x) \dotsm d_n(x) d_{n+1}(x) \dotsm d_k(x)} \\ &= \frac{(-1)^n}{d_1(x) \dotsm d_n(x)} \sum_{j=1}^\infty \frac{(-1)^jT^{n+j} x}{d_{n+1}(x) \dotsm d_{n+j}(x)} \\ &= \frac{(-1)^n}{d_1(x) \dotsm d_n(x)} \sum_{j=1}^\infty \frac{(-1)^j T^j (T^nx)}{d_1(T^nx) \dotsm d_j(T^nx)} \\ &= \frac{(-1)^n}{d_1(x) \dotsm d_n(x)} \sum_{j=1}^\infty ( T^nx - s_j (T^nx) ) = \frac{(-1)^n}{d_1(x) \dotsm d_n(x)} \mathcal{E} (T^n x), \end{align*} \endgroup as desired. \end{proof} \begin{theorem} \[ \int_0^1 \mathcal{E}(x) \, dx = - \frac{1}{8}. \] \end{theorem} \begin{proof} Note that on the interval $\left( \frac{1}{2}, 1 \right]$, we have $d_1(x) = 1$ and $s_1(x) = 1$, and hence $\mathcal{E}(x) = x-1 - \mathcal{E}(Tx)$ by \eqref{simplified formula}. By letting $Tx = u = 1-x$ so that $du = - dx$ on the interval $\left( \frac{1}{2}, 1 \right]$, we obtain \begin{align*} \int_0^1 \mathcal{E}(x) \, dx &= \int_0^{\frac{1}{2}} \mathcal{E}(x) \, dx + \int_{\frac{1}{2}}^1 \left( (x-1) - \mathcal{E}(Tx) \right) \, dx \\ &= \int_0^{\frac{1}{2}} \mathcal{E}(x) \, dx + \int_{\frac{1}{2}}^1 (x-1) \, dx - \int_0^{\frac{1}{2}} \mathcal{E}(u) \, du = - \frac{1}{8}. \end{align*} \end{proof} Before we move on to the next section, we prove one lemma which will be used in Section \ref{Section 4}. \begin{lemma} \label{epsilon cover} Let $\sigma \coloneqq (\sigma_k)_{k \in \mathbb{N}} \in \Sigma$. For any $n \in \mathbb{N}$, we have \[ \big| \varphi (\sigma) - \varphi (\sigma^{(n)}) \big| \leq \frac{1}{\sigma_1 \dotsm \sigma_n \sigma_{n+1}} \] and \begin{align*} \big| \mathcal{E}^* (\sigma) - \mathcal{E}^* (\sigma^{(n)}) \big| \leq \frac{n}{\sigma_1 \dotsm \sigma_n \sigma_{n+1}}. \end{align*} \end{lemma} \begin{proof} Let $n \in \mathbb{N}$.
The first inequality is immediate from the definitions of $\varphi$ and $\sigma^{(n)}$. Indeed, since $\sigma$ and $\sigma^{(n)}$ share the initial block of length $n$, by \eqref{phi - phi n} we have \begin{align*} \big| \varphi (\sigma) - \varphi (\sigma^{(n)}) \big| &= \big| \varphi (\sigma) - \varphi_n (\sigma) \big| \leq \frac{1}{\sigma_1 \dotsm \sigma_n \sigma_{n+1}}. \end{align*} The second inequality follows from Lemma \ref{E star formula lemma}. For $\mathcal{E}^*(\sigma^{(n)})$, we just need to take $\sigma_k = \infty$ for all $k \geq n+1$ in the formula \eqref{E star formula} to obtain \begin{align*} \mathcal{E}^* (\sigma^{(n)}) = \sum_{k=1}^{n-1} \frac{(-1)^k k}{\sigma_1 \dotsm \sigma_{k+1}}. \end{align*} Thus, by Lemma \ref{sequence lemma}, we find that \begin{align*} \big| \mathcal{E}^* (\sigma) - \mathcal{E}^* (\sigma^{(n)}) \big| &= \left| \sum_{k=n}^\infty \frac{(-1)^k k}{\sigma_1 \dotsm \sigma_{k+1}} \right| \\ &= \left| \frac{n}{\sigma_1 \dotsm \sigma_{n+1}} - \sum_{j=1}^\infty \left( \frac{n+(2j-1)}{\sigma_1 \dotsm \sigma_{n+2j}} - \frac{n+2j}{\sigma_1 \dotsm \sigma_{n+(2j+1)}} \right) \right| \leq \frac{n}{\sigma_1 \dotsm \sigma_{n+1}}. \end{align*} \end{proof} \section{Dimension of the Graph of $\mathcal{E}(x)$} \label{Section 4} In this section, we determine three widely used and well-known dimensions, namely the Hausdorff dimension, the box-counting dimension, and the covering dimension, of the graph of the error-sum function $\mathcal{E} \colon I \to \mathbb{R}$. In fact, although $\mathcal{E}$ is discontinuous on a dense subset of $I$ (Theorem \ref{discontinuity theorem}) and is not of bounded variation (Theorem \ref{bounded variation theorem}), it is not sufficiently irregular to have a graph of any dimension strictly greater than $1$. Nevertheless, we show that the Hausdorff dimension of the graph is strictly greater than its covering dimension. This will lead to the conclusion that the graph is indeed a fractal according to Mandelbrot's definition in his prominent book \cite{Man82}, where he coined the term {\em fractal} in a Euclidean space and defined it as a set whose covering dimension is strictly less than its Hausdorff dimension. Throughout this section, $\rho_2$ denotes the usual Euclidean distance in $\mathbb{R}^2$. For a subset $F$ of $\mathbb{R}^2$, we denote by $\mathcal{H}^s (F)$ the $s$-dimensional Hausdorff measure of $F$ and by $\hdim F$ the Hausdorff dimension of $F$. In addition, we denote by $\lbdim F$ and $\ubdim F$ the lower and upper box-counting dimension of $F$, respectively. If $\lbdim F = \ubdim F$, we call this common value the box-counting dimension of $F$ and denote the value by $\bdim F$. Lastly, the covering dimension of $F$ is denoted by $\covdim F$. We refer the reader to Chapters 2--4 of \cite{Fal14} for details on the Hausdorff measure, the Hausdorff dimension, and the box-counting dimension, and Chapters 1--2 of \cite{Coo15} for the covering dimension which is called the topological dimension in the book. \subsection{Hausdorff dimension of the graph of $\mathcal{E}(x)$} Define $G \colon I \to I \times \mathbb{R}$ by $G(x) \coloneqq (x, \mathcal{E}(x))$ for $x \in I$. Then $G(I) = \{ ( x, \mathcal{E}(x) ) : x \in I \}$ is the graph of $\mathcal{E}$. It should be mentioned that the proof idea of the following theorem is borrowed from earlier studies, e.g., \cite{CWY14, DT11, JS12, RP00, SMZ06, SW07}. 
\begin{theorem} \label{Hausdorff dimension of graph} The graph of the error-sum function $\mathcal{E} \colon I \to \mathbb{R}$ has the Hausdorff dimension one, i.e., $\hdim G(I)=1$. \end{theorem} \begin{proof} For the lower bound, we use the projection map onto the first coordinate $\Proj \colon \mathbb{R}^2 \to \mathbb{R}^2$ given by $\Proj ( (x,y) ) = (x,0)$ for each $(x,y) \in \mathbb{R}^2$. It is clear that $\Proj$ is Lipschitz: \[ \rho_2 ( \Proj ((x_1, y_1)), \Proj ((x_2, y_2)) ) \leq \rho_2 ( (x_1, y_1), (x_2, y_2) ), \] for any $(x_1, y_1), (x_2, y_2) \in \mathbb{R}^2$. It follows that \[ 1 = \lambda ( I ) = \mathcal{H}^1 ( I ) = \mathcal{H}^1 ( \Proj (G(I)) ) \leq \mathcal{H}^1 ( G(I) ), \] where the last inequality is due to Proposition 3.1 of \cite{Fal14}. Thus $\hdim G(I) \geq 1$. For the upper bound, we find a suitable covering for $G(I)$. For any $n \in \mathbb{N}$ and $\sigma \in \Sigma_n$, we define a closed interval $J_\sigma \subseteq \mathbb{R}$ by \[ J_\sigma \coloneqq \left[ \inf_{t \in I_\sigma} \mathcal{E}(t), \sup_{t \in I_\sigma} \mathcal{E}(t) \right]. \] Then $I_\sigma \times \mathcal{E}(I_\sigma) \subseteq I_\sigma \times J_\sigma$. We claim that, for any $n \in \mathbb{N}$, we have $I \setminus E \subseteq \bigcup_{\sigma \in \Sigma_n} I_\sigma$ where $E = I \cap \mathbb{Q}$. Indeed, if $x \in I \setminus E$, then $\tau \coloneqq f(x) \in \Sigma_\infty$ by Proposition \ref{inverse image of phi}(ii). Clearly $f(x) \in \Upsilon_{\tau^{(n)}}$ with $\tau^{(n)} \in \Sigma_n$. Hence $x \in f^{-1} (\Upsilon_{\tau^{(n)}}) = I_{\tau^{(n)}} \subseteq \bigcup_{\sigma \in \Sigma_n} I_\sigma$ and this proves the claim. It follows that, for any $n \in \mathbb{N}$, the collection $\mathcal{J} \coloneqq \{ I_\sigma \times J_\sigma : \sigma \in \Sigma_n \}$ is a covering of $F \coloneqq G(I) \setminus G(E)$. Here, we have $\sum_{\sigma \in \Sigma_n} \lambda (I_\sigma) = \lambda (I) = 1$ from the first coordinate. Due to Lemma \ref{fourth lemma}, we have $\lambda (J_\sigma) = n \cdot \lambda (I_\sigma)$. So we can cover each $I_\sigma \times J_\sigma \in \mathcal{J}$ by a rectangle of base $\lambda (I_\sigma)$ and height $n \cdot \lambda (I_\sigma)$. Note that such rectangles have diameter $\sqrt{n^2+1} \cdot \lambda (I_\sigma)$. Let $\varepsilon >0$ be given. Recall from \eqref{bound for length of I sigma} that $\lambda (I_\sigma) \leq \frac{1}{(n+1)!}$. Then \begin{align*} \mathcal{H}^{1+\varepsilon} ( F ) &\leq \liminf_{n \to \infty} \left[ \sum_{\sigma \in \Sigma_n} ( \sqrt{n^2+1} \cdot \lambda (I_\sigma) )^{1+\varepsilon} \right] \\ &\leq \liminf_{n \to \infty} \left[ (\sqrt{n^2+1})^{1+\varepsilon} \left( \frac{1}{(n+1)!} \right)^{\varepsilon} \sum_{\sigma \in \Sigma_n} \lambda (I_\sigma) \right] = 0. \end{align*} The above calculation shows that $\hdim F \leq 1+\varepsilon$. Since $\varepsilon>0$ was arbitrary, it follows that $\hdim F \leq 1$. Now note that the set $G(E) = \bigcup_{x \in E} \{ G(x) \}$ is countable as a union of singletons over a countable set $E = I \cap \mathbb{Q}$. Therefore, by countable stability of Hausdorff dimension (the third property in \cite{Fal14}, pp. 48--49), we deduce that $\hdim G(I) = \sup_{x \in E} \{ \hdim F, \hdim \{ G(x) \} \} \leq 1$ and this completes the proof. \end{proof} \subsection{Box-counting dimension of the graph of $\mathcal{E}(x)$} We shall establish the following theorem. 
\begin{theorem} \label{box dimension of graph} The graph of the error-sum function $\mathcal{E} \colon I \to \mathbb{R}$ has the box-counting dimension one, i.e., $\bdim G(I)=1$. \end{theorem} We define $\Gamma \colon \Sigma \to I \times \mathbb{R}$ by $\Gamma (\sigma) \coloneqq ( \varphi (\sigma), \mathcal{E}^* (\sigma) )$ for $\sigma \in \Sigma$. We list and prove two properties of $\Gamma$ in the following lemma. \begin{lemma} \label{properties of Gamma} \begin{enumerate} \item[(i)] $\Gamma \colon \Sigma \to \Gamma (\Sigma)$ is a homeomorphism. \item[(ii)] $\Gamma (\Sigma)$ is compact. \end{enumerate} \end{lemma} \begin{proof} (i) It is enough to show that $\Gamma$ is a continuous injection, since $\Sigma$ is compact (Lemma \ref{Sigma is closed}) and $\Gamma (\Sigma) \subseteq \mathbb{R}^2$ is Hausdorff. Since $\varphi \colon \Sigma \to I$ and $\mathcal{E}^* \colon \Sigma \to \mathbb{R}$ are continuous by Lemmas \ref{phi is Lipschitz} and \ref{E star is continuous}, respectively, it follows that $\Gamma$ is continuous. To prove injectivity, suppose $\Gamma (\sigma) = \Gamma (\tau)$. Then $\varphi (\sigma) = \varphi (\tau)$ from the first coordinate. Assume $\sigma \neq \tau$. Then by Proposition \ref{inverse image of phi}, we have either $\sigma \in \Sigma_{\realizable}$ with $\tau \not \in \Sigma_{\realizable}$ or $\tau \in \Sigma_{\realizable}$ with $\sigma \not \in \Sigma_{\realizable}$. In either case, Theorem \ref{discontinuity theorem} tells us that $\mathcal{E}^*(\sigma) \neq \mathcal{E}^*(\tau)$, which is a contradiction. Thus $\sigma = \tau$. (ii) Since $\Sigma$ is compact by Lemma \ref{Sigma is closed}, the result follows from part (i). \end{proof} \begin{lemma} \label{Gamma is a closure of G} $\Gamma (\Sigma) = \overline{G(I)}$. \end{lemma} \begin{proof} First note that since $\varphi \circ f = \id_{I}$ and $\mathcal{E}^* \circ f = \mathcal{E}$ (Lemma \ref{commutative diagram}), we have \[ (\Gamma \circ f) (x) = ( \varphi(f(x)), \mathcal{E}^*(f(x)) ) = (x, \mathcal{E}(x)) \] for any $x \in I$, and hence $(\Gamma \circ f) (I) = G(I)$. Since $\Sigma$ is compact (Lemma \ref{Sigma is closed}), the continuity of $\Gamma$ (Lemma \ref{properties of Gamma}(i)) tells us that $\Gamma (\overline{f (I)}) = \overline{\Gamma (f (I))}$. Then by Lemma \ref{closure of f I}, we have \[ \Gamma (\Sigma) = \Gamma (\overline{f (I)}) = \overline{\Gamma (f (I))} = \overline{G(I)}. \] \end{proof} The following proposition gives us a general relation among $\hdim$, $\lbdim$, and $\ubdim$ for certain subsets of $\mathbb{R}^2$. \begin{proposition}[\cite{Fal14}, Proposition 3.4] \label{hdim lbdim ubdim} If $F \subseteq \mathbb{R}^2$ is non-empty and bounded, then \[ \hdim F \leq \lbdim F \leq \ubdim F. \] \end{proposition} To prove Theorem \ref{box dimension of graph}, we first find a lower bound for the lower box-counting dimension. \begin{lemma} \label{lower box dimension} $\lbdim \overline{G(I)} \geq 1$. \end{lemma} \begin{proof} By Proposition \ref{hdim lbdim ubdim}, we have $\lbdim \overline{G(I)} \geq \hdim \overline{G(I)}$. By monotonicity of the Hausdorff dimension and by Theorem \ref{Hausdorff dimension of graph}, we further have $\hdim \overline{G(I)} \geq \hdim G(I) = 1$. Combining the inequalities, the result follows. \end{proof} We need the following proposition to find an upper bound for the upper box-counting dimension. The lemma gives an upper bound for the number of finite sequences whose length and the product of all terms are dominated, respectively, by prescribed numbers. 
The logarithm without base, denoted $\log$, will always mean the natural logarithm. \begin{proposition}[\cite{Luc97}, Claim 3] \label{Luczak lemma} Let $p, m \in \mathbb{N}$. Denote by $S(p,m)$ the set of sequences $(\sigma_1, \dotsc, \sigma_k) \in \mathbb{N}^k$ of finite length $k$ such that $1 \leq k \leq m$ and $\prod_{j=1}^k \sigma_j \leq p$, i.e., \[ S(p,m) \coloneqq \left\{ (\sigma_1, \dotsc, \sigma_k) \in \mathbb{N}^k : 1 \leq k \leq m \text{ and } \prod_{j=1}^k \sigma_j \leq p \right\}. \] Then \[ |S(p,m)| \leq p (2+\log p)^{m-1}. \] \end{proposition} We use Proposition \ref{Luczak lemma} to obtain an upper bound for the number of all strictly increasing finite sequences of fixed length whose product of digits is bounded above by a given number. \begin{lemma} \label{refined bound lemma} Let $p, m \in \mathbb{N}$. Let $I(p,m)$ be defined by \[ I(p,m) \coloneqq \left\{ (\sigma_1, \dotsc, \sigma_m) \in \mathbb{N}^m : \sigma_1 < \sigma_2 < \dotsb < \sigma_m \text{ and } \prod_{j=1}^m \sigma_j \leq p \right\}, \] i.e., the set $I(p,m)$ consists of all finite sequences of length $m$ whose terms are strictly increasing and the product of all terms is at most $p$. Then, we have \[ |I(p,m)| \leq \frac{p (2+\log p)^{m-1}}{m!}. \] \end{lemma} \begin{proof} Let $S(p,m)$ be as in Proposition \ref{Luczak lemma}. Obviously, $I(p,m) \subseteq S(p,m)$. For any $(\sigma_1, \dotsc, \sigma_m) \in I(p,m)$, all the terms $\sigma_k$, $1 \leq k \leq m$, are distinct, so that there are $m!$ many ways to form a sequence of length $m$ with the same terms. It is clear that all the sequences formed so are members of $S(p,m)$. Thus $m! |I(p,m)| \leq |S(p,m)|$. Therefore, the desired upper bound for $|I(p,m)|$ follows from Proposition \ref{Luczak lemma}. \end{proof} The following inequalities are well-known lower and upper bounds for the factorial function. These bounds are rougher than the famous Stirling's formula but the proof is elementary and they are satisfactory enough in our argument. \begin{proposition}[\cite{KL16}, Lemma 10.1] \label{factorial bound} For every $n \in \mathbb{N}$, we have \begin{align*} \frac{n^n}{e^{n-1}} \leq n! \leq \frac{n^{n+1}}{e^{n-1}}. \end{align*} \end{proposition} \begin{proof} The core idea of the proof is the fact that the map $x \mapsto \log x$ is increasing on $(0, \infty)$. We refer the interested readers to \cite{KL16} in which the detailed proof is given. \end{proof} Now we are in a position to establish the upper bound by considering a suitable covering of $\Gamma (\Sigma)$. \begin{lemma} \label{upper box dimension} $\ubdim \overline{G(I)} \leq 1$. \end{lemma} \begin{proof} Let $\varepsilon \coloneqq 2 e^{-M}$ with $M > 0$ large enough. Take $n = n(M) \in \mathbb{N}$ such that $(n-1)! \leq e^M \leq n!$. Clearly, $n \to \infty$ as $M \to \infty$ and vice versa. Then for any $(\sigma_k)_{k \in \mathbb{N}} \in \Sigma$, by \eqref{sigma n bound}, we have \begin{align} \label{bound for product of n+1 digits} \sigma_1 \sigma_2 \dotsm \sigma_n \sigma_{n+1} \geq (n+1)! \geq (n+1) e^M. \end{align} We obtain lower and upper bounds for $M$ by means of Proposition \ref{factorial bound}: \begin{align} \label{bounds for M} (n-1) \log (n-1) - (n-2) \leq M \leq (n+1) \log n - (n-1). \end{align} Since $\frac{(n-1)!}{e^{n}} \to \infty$ as $n \to \infty$ but $\frac{(n-1)!}{e^M} \leq 1$ by our choice of $n$, it must be that $n < M$. We first write $\Sigma$ as a union of finitely many sets. 
Define \begin{align*} \Lambda_1 &\coloneqq \{ (\sigma_j)_{j \in \mathbb{N}} \in \Sigma : \sigma_1 \geq e^M \} \end{align*} and for $k \geq 2$, define \[ \Lambda_k \coloneqq \left\{ (\sigma_j)_{j \in \mathbb{N}} \in \Sigma : \prod_{j=1}^{k-1} \sigma_j < (k-1) e^M \text{ and } \prod_{j=1}^k \sigma_j \geq k e^M \right\}. \] We claim that $\Sigma = \bigcup_{k=1}^{n+1} \Lambda_k$. To prove the claim, we need to show that $\Sigma \subseteq \bigcup_{k=1}^{n+1} \Lambda_k$ since the reverse inclusion is obvious. Let $\sigma \coloneqq (\sigma_j)_{j \in \mathbb{N}} \in \Sigma$ and assume $\sigma \in \Sigma \setminus \bigcup_{k=1}^n \Lambda_k$. Then $\sigma_1 < e^M$ since $\sigma \not \in \Lambda_1$, $\sigma_1 \sigma_2 < 2 e^M$ since $\sigma \not \in \Lambda_2$, $\dotsc$, $\prod_{j=1}^{n-1} \sigma_j < (n-1) e^M$ since $\sigma \not \in \Lambda_{n-1}$, and $\prod_{j=1}^n \sigma_j < n e^M$ since $\sigma \not \in \Lambda_n$. Since we have $\prod_{j=1}^{n+1} \sigma_j \geq (n+1) e^M$ by \eqref{bound for product of n+1 digits}, it must be that $\sigma \in \Lambda_{n+1}$. Therefore, $\sigma \in \bigcup_{k=1}^{n+1} \Lambda_k$ and this proves the claim. For each $1 \leq k \leq n+1$, our aim is to find a covering of $\Gamma (\Lambda_k)$ consisting of squares of side length $\varepsilon = 2 e^{-M}$ and to determine an upper bound, which we will denote by $a_k$, of the number of required squares. Let $\sigma \coloneqq (\sigma_j)_{j \in \mathbb{N}} \in \Lambda_1$. Then $\varphi (\sigma) \leq \frac{1}{\sigma_1}$ by definition of $\varphi$, and so $\varphi (\sigma) \in [0, e^{-M}]$ by definition of $\Lambda_1$. We know that $- \frac{1}{\sigma_1 \sigma_2} \leq \mathcal{E}^*(\sigma) \leq 0$ from Theorem \ref{boundedness theorem} and its proof. Since $\sigma_1 \geq e^M$ and $\sigma_2 > \sigma_1$, it follows that \begin{align*} \big| \mathcal{E}^* (\sigma) \big| &\leq \frac{1}{\sigma_1 \sigma_2} \leq \frac{1}{e^M (e^M+1)} < e^{-M} < \varepsilon. \end{align*} Hence $\Gamma (\Lambda_1)$ can be covered by $a_1 \coloneqq 1$ square of side length $\varepsilon= 2e^{-M}$. Let $k \in \{ 2, \dotsc, n+1 \}$. For every $\sigma \coloneqq (\sigma_j)_{j \in \mathbb{N}} \in \Lambda_k$, since $\prod_{j=1}^k \sigma_j \geq k e^M$, we have by Lemma \ref{epsilon cover} that \[ \big| \varphi (\sigma) - \varphi (\sigma^{(k-1)}) \big| \leq \frac{1}{\sigma_1 \sigma_2 \dotsm \sigma_k} \leq e^{-M} \quad \text{and} \quad \big| \mathcal{E}^*(\sigma) - \mathcal{E}^*(\sigma^{(k-1)}) \big| \leq \frac{k-1}{\sigma_1 \sigma_2 \dotsm \sigma_k} \leq e^{-M}. \] This shows that for a fixed $\tau \coloneqq (\tau_j)_{j \in \mathbb{N}} \in \Sigma_{k-1}$, we can cover $\Gamma(\Lambda_k \cap \Upsilon_\tau)$ by one square of side length $2e^{-M} = \varepsilon$. Since $\prod_{j=1}^{k-1} \sigma_j < (k-1) e^M$ by definition of $\Lambda_k$, using Lemma \ref{refined bound lemma}, we see that at most \[ a_k \coloneqq \frac{1}{(k-1)!} (k-1) e^M ( 2 + M + \log (k-1) )^{k-2} \] squares of side length $2 e^{-M} = \varepsilon$ are needed to cover $\Gamma (\Lambda_k)$. Denote by $N_\varepsilon$ the smallest number of squares of side length $\varepsilon$ needed to cover $\Gamma(\Sigma)$. Clearly, $\Gamma(\Sigma) = \bigcup_{k=1}^{n+1} \Gamma(\Lambda_k)$. Then, by the discussion so far, \[ N_\varepsilon \leq \sum_{k=1}^{n+1} a_k = 1 + \sum_{k=2}^{n+1} \frac{(k-1) e^M ( 2 + M + \log (k-1) )^{k-2}}{(k-1)!}. 
\] Now note that $a_1 < a_2 = e^M$, and for $2 \leq k \leq n$, \begin{align*} \frac{a_{k+1}}{a_k} = \frac{k}{k-1} \left( \frac{2+M+\log k}{2+M+\log (k-1)} \right)^{k-1} \frac{2+M+\log (k-1)}{k} > 1 \cdot 1^{k-1} \cdot \frac{M}{n} > 1, \end{align*} where the last inequality holds true since $M > n$. So $a_{n+1}> a_n > \dots > a_1$, and it follows that \begin{align*} N_\varepsilon \leq \sum_{k=1}^{n+1} a_{n+1} = (n+1) \frac{n e^M ( 2 + M + \log n )^{n-1}}{n!}. \end{align*} Recall that by our choice of $n$, we have $e^M \leq n!$ and $n < M$, and so \[ N_\varepsilon \leq (n+1) n ( 2 + M + \log n )^{n-1} < (M+1)^2 ( 2 + M + \log M )^{n-1}. \] Now \begin{align*} \frac{\log N_\varepsilon}{\log (1/\varepsilon)} &\leq \frac{2\log (M+1) + (n-1) \log ( 2 + M + \log M )}{M - \log 2} \\ &= \frac{2\log (M+1)}{M - \log 2} + \frac{(n-1)\log M}{M - \log 2} + \frac{(n-1) \log \left( \frac{2}{M} + \frac{\log M}{M} + 1 \right)}{M - \log 2} \end{align*} and we will estimate the upper limit of each of the three terms in the second line above. Clearly, the limit of the first term is $0$ as $M \to \infty$. For the second term, using \eqref{bounds for M}, we have \[ \frac{(n-1) \log M}{M - \log 2} \leq \frac{(n-1) \log (n+1) + (n-1) \log (\log n)}{(n-1) \log (n-1)-(n-2) -\log 2} \to 1 \] as $n \to \infty$. For the last term, notice that $\log \left( \frac{2}{M} + \frac{\log M}{M} + 1 \right) \to 0$ as $M \to \infty$ and $n-1 < M-\log 2$, to deduce that the limit is $0$. Thus, since $\overline{G(I)} = \Gamma (\Sigma)$ by Lemma \ref{Gamma is a closure of G}, we finally obtain \[ \ubdim \overline{G(I)} = \ubdim \Gamma (\Sigma) = \limsup_{\varepsilon \to 0} \frac{\log N_\varepsilon}{\log (1/\varepsilon)} \leq 1. \] \end{proof} \begin{proof}[Proof of Theorem \ref{box dimension of graph}] In view of Proposition \ref{hdim lbdim ubdim}, combining Lemmas \ref{lower box dimension} and \ref{upper box dimension} gives $\lbdim \overline{G(I)} = \ubdim \overline{G(I)} \allowbreak = 1$. Since taking closure of a set does not alter the upper and lower box-counting dimensions by Proposition 2.6 of \cite{Fal14}, it follows that $\lbdim G(I) = \ubdim G(I) = 1$ and, therefore, we conclude that $\bdim G(I) = 1$. \end{proof} \begin{remark} We point out that Lemma \ref{upper box dimension} gives an alternative proof of the upper bound part in Theorem \ref{Hausdorff dimension of graph}. In fact, due to Proposition \ref{hdim lbdim ubdim}, we have $\hdim \overline{G(I)} \leq \lbdim \overline{G(I)} \leq \ubdim \overline{G(I)}$ and, furthermore, $\ubdim \overline{G(I)} \leq 1$ by Lemma \ref{upper box dimension}. But then monotonicity of the Hausdorff dimension implies that $\hdim G(I) \leq 1$, which is the upper bound in Theorem \ref{Hausdorff dimension of graph}. \end{remark} \subsection{Covering dimension of the graph of $\mathcal{E}(x)$} The graph of $\mathcal{E} \colon I \to \mathbb{R}$ has the same Hausdorff dimension and box-counting dimension, both equaling one. In this subsection, we show that the covering dimension of the graph of $\mathcal{E}$ is zero, so that it is strictly smaller than the Hausdorff dimension. \begin{theorem} \label{covering dimension theorem} The graph of the error-sum function $\mathcal{E} \colon I \to \mathbb{R}$ has the covering dimension zero, i.e., $\covdim G(I) = 0$. \end{theorem} We say that a topological space $X$ is {\em totally separated} if for every pair of distinct points $x, y \in X$, there are disjoint open sets $U$ and $V$ such that $x \in U$, $y \in V$, and $X = U \cup V$. 
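For instance, every discrete space is totally separated. Moreover, total separation is inherited by subspaces, by arbitrary products, and by homeomorphic images; these simple facts are precisely the form in which the notion will be used below.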
The following propositions will be used for the proof of the theorem. \begin{proposition} [\cite{Coo15}, Theorem 2.7.1] \label{covering dimension lemma} Let $X$ be a non-empty compact Hausdorff space. Then $X$ is totally separated if and only if $\covdim X = 0$. \end{proposition} \begin{proposition} [\cite{Coo15}, Theorem 1.8.3] \label{monotonicity lemma} If $X$ is a metrizable space and $Y \subseteq X$, then $\covdim Y \leq \covdim X$. \end{proposition} The theorem is a consequence of the following lemma. \begin{lemma} \label{totally separated lemma} $\covdim \Gamma (\Sigma) = 0$. \end{lemma} \begin{proof} Obviously $\Gamma (\Sigma) \subseteq \mathbb{R}^2$ is non-empty and Hausdorff, and, furthermore, by Lemma \ref{properties of Gamma}(ii), it is compact. By Proposition \ref{covering dimension lemma} it is sufficient to show that $\Gamma (\Sigma)$ is totally separated. To see this, first recall from Lemma \ref{properties of Gamma}(i) that $\Gamma \colon \Sigma \to \Gamma (\Sigma)$ is a homeomorphism. It is clear that $\mathbb{N}_\infty$ is totally separated, and so is its (countable) product $\mathbb{N}_\infty^\mathbb{N}$. It follows that $\Sigma \subseteq \mathbb{N}_\infty^\mathbb{N}$ is also totally separated. Hence its homeomorphic image $\Gamma (\Sigma)$ is totally separated. This proves the result. \end{proof} \begin{proof}[Proof of Theorem \ref{covering dimension theorem}] On one hand, since $G(I) \neq \varnothing$ we have $\covdim G(I) \geq 0$ by Example 1.1.9 of \cite{Coo15}. On the other hand, since $G(I)$ is a subset of the metrizable space $\Gamma (\Sigma) \subseteq \mathbb{R}^2$, Proposition \ref{monotonicity lemma} and Lemma \ref{totally separated lemma} tell us that $\covdim G(I) \leq 0$. This completes the proof. \end{proof} \section*{Acknowledgements} I wish to thank my advisor, Dr. Hanfeng Li, for helpful comments and suggestions, continuous support, and encouragement to work on the subject.
{ "arxiv_id": "2302.14285", "language": "en", "timestamp": "2023-03-01T02:08:15", "url": "https://arxiv.org/abs/2302.14285", "yymm": "2302" }
\section{Introduction and motivation} \label{sec:intro} In several models and extensions of the standard electroweak theory the neutrinos interact with a scalar ($\phi$) and fermion ($f$) via a coupling of the form $\bar f_R\nu_L\phi$, or just with neutrinos themselves $\bar\nu^c_R\nu_L\phi$. Couplings of the form $\bar f_R\nu_L\phi$ produce additional contributions to the neutrino effective potential when the neutrino propagates in a background of $\phi$ and $f$ particles and their possible effects have been considered in various contexts, such as collective oscillations in supernova (see for example \Rrefs{Duan:2010bg}{Chakraborty:2016yeg} and the works cited therein), the hot plasma of the Early-Universe\cite{Wong:2002fa,Mangano:2006ar}, cosmological observations such as cosmic microwave background and big bang nucleosynthesis data\cite{babu}, and in particular Dark Matter-neutrino interactions\cite{Mangano:2006mp, Binder:2016pnr,Primulando:2017kxf,Campo:2017nwh,Franarin:2018gfk, Pandey:2018wvh}. Motivated by these developments, we have carried out in previous works a systematic calculation of the neutrino dispersion relation in such models, including the damping and decoherence effects (see \Rref{ns:nuphiresonance} and references therein). These works have been based on the calculation of the neutrino thermal self-energy using thermal field theory (TFT) methods\cite{ftft:reviews}. Analytic formulas for the various quantities of interest have been obtained by considering various different cases of the $f$ and $\phi$ background, such as the non-relativistic or ultra-relativistic gases, and in particular the case in which the $f$ background is a completely degenerate Fermi-gas. To complement that previous work, our goal is to determine the corresponding quantities (e.g, effective potential and/or dispersion relation and damping) of a neutrino that propagates in a thermal background that contains a scalar Bose-Einstein (BE) condensate. The hypothesis that the dark matter (DM) can be self-interacting is intriguing, and a DM background of scalar particles is a candidate for such environments\cite{garani:becdm, kirkpatrick:becdm,bohmen:becdm,craciun:bdcdm}. In that context, the interest is the application to the case of a neutrino propagating in such a background. The problem of fermions propagating in such backgrounds can be relevant in other contexts as well. For example, the possibility of BE condensation of pions and/or kaons in the interior of a neutron star, or kaon condensation in heavy ion collisions\cite{baym:nstar, thorsson:kaon,schmitt:kaon,li:hion}. Our purpose here is to propose an efficient and consistent method to treat the propagation of a fermion in the background of the BE condensate, in particular the calculation of the effective potential and dispersion relation, in a general way and not tied to any specific application. To model the fermion propagation in such an environment, we assume some simple Yukawa-type interactions between the fermions and the scalar. We consider three generic, but specific, models of the fermion-scalar interaction: \begin{enumerate} \item Model I: Two massless chiral fermions, $f_L$ and $f_R$, with a coupling to the scalar particle $\phi$ of the form $\bar f_R f_L\phi$. \item Model II: A massless chiral fermion $f_L$ with coupling $\bar f^c_R f_L\phi$. \item Model III: One massive Dirac fermion $f$ with a coupling $\bar f^c f\phi$. 
\end{enumerate} As we will see, the symmetry breaking process produces a Dirac fermion, a Majorana fermion and a pseudo-Dirac fermion in Models I, II and III, respectively\cite{wolfensteinpetcov}. The field theoretical method we use to treat the BE condensate has been discussed by various authors\cite{weldon:phimu,filippi:phimu, schmitt:phimu}. For completeness we first discuss those aspects and details of the method that are relevant for our purposes. We then present the extension we propose to treat the fermion propagation in the BE condensate, in the context of the three models mentioned above for concreteness and illustrative purposes. Although one of our motivations is the possible application in neutrino physics contexts, the method we propose for the propagation of fermions in a BE condensate has never been used before, and most importantly, is general and paves the way for applications to problems in other systems, for example condensed matter or nuclear matter systems and heavy-ion collisions, as already mentioned. The plan of the paper is as follows. In \Section{sec:becmodel} we review the model we use to describe the BE condensate. There we focus on the essential elements of the symmetry breaking mechanism that we need in the next sections. In \Section{sec:modeli} we consider in detail the method we use for calculating the dispersion relations of the propagating fermions in the BE condensate, in the context of Model I mentioned above. The method is further illustrated by applying it to the models II and III in Sections \ref{sec:modelii} and \ref{sec:modeliii}, respectively. With a view to possible interest and/or future work, we summarize in an appendix the details related to the scalar modes that have a definite dispersion relation, which are useful for the calculation of the thermal corrections to the fermion dispersion relations due to the thermal excitations of the BE condensate. Our concluding remarks and outlook are given in \Section{sec:conclusions}. \section{Model for the BE condensate} \label{sec:becmodel} To describe the BE condensate the proposal is to start with the complex scalar field $\phi$ that has a standard $\phi^4$ Lagrangian \begin{equation} \label{Lphi} L^{(\phi)} = (\partial^\mu\phi)^\ast(\partial_\mu\phi) - V_0\,, \end{equation} where \begin{equation} \label{V0} V_0(\phi) = m^2_\phi\phi^\ast\phi + \lambda_\phi(\phi^\ast\phi)^2\,. \end{equation} In the context of thermal field theory (TFT), denoting the temperature by $T$ and the chemical potential of $\phi$ by $\mu_\phi$, the procedure is to calculate the \emph{effective potential} of $\phi$, call it $V^{(\phi)}_\text{eff}(T,\mu_\phi)$, and then see under what conditions $V^{(\phi)}_\text{eff}$ has a minimum at $\phi = 0$ or some other value. In the latter case, there has been a phase transition, and \begin{equation} \langle\phi\rangle \not = 0\,, \end{equation} indicative of the symmetry breaking. The alternative approach that we use, which is particularly useful for treating the symmetry breaking associated with the transition to the BE condensate, is to consider the field $\phi^\prime$ defined by\cite{weldon:phimu,filippi:phimu,schmitt:phimu} \begin{equation} \label{defphiprime} \phi^\prime \equiv e^{i\mu_\phi t}\phi\,. \end{equation} The recipe is to substitute $\phi = e^{-i\mu_\phi t}\phi^\prime$ in $L^{(\phi)}$ to obtain the Lagrangian for the field $\phi^\prime$, which we denote by $L^{(\phi^\prime)}$.
To express $L^{(\phi^\prime)}$ in a convenient form we write \begin{equation} \mu_\phi t = \mu_\phi (u\cdot x)\,, \end{equation} where \begin{equation} \label{u} u^\mu = (1,\vec 0)\,, \end{equation} and define \begin{equation} \label{D} D_\mu \equiv \partial_\mu - i v_\mu\,, \end{equation} with \begin{equation} \label{v} v_\mu = \mu_\phi u_\mu\,. \end{equation} Then using \begin{equation} \label{derivativerelation} \partial_\mu\phi = \partial_\mu(e^{-i\mu_\phi t}\phi^\prime) = e^{-i\mu_\phi t}D_\mu\phi^\prime\,, \end{equation} it follows that \begin{equation} \label{Lphiprime} L^{(\phi^\prime)} = (D^\mu\phi^\prime)^\ast(D_\mu\phi^\prime) - V_0(\phi^\prime)\,. \end{equation} Expanding the $D$ term in \Eq{Lphiprime}, \begin{equation} \label{Lmu} L^{(\phi^\prime)} = (\partial^\mu\phi^\prime)^\ast(\partial_\mu\phi^\prime) + i[\phi^{\prime\,\ast} (v\cdot\partial\phi^\prime) - (v\cdot\partial\phi^\prime)^\ast \phi^\prime] - U(\phi^\prime)\,, \end{equation} where \begin{equation} \label{U} U = -(\mu^2_\phi - m^2_\phi)\phi^{\prime\,\ast}\phi^\prime + \lambda_\phi(\phi^{\prime\,\ast}\phi^\prime)^2\,. \end{equation} Now comes the key observation. If $m^2_\phi > \mu^2_\phi$, this $U$ corresponds to a standard massive complex scalar with squared mass $m^2_\phi - \mu^2_\phi$. On the other hand, if $\mu^2_\phi > m^2_\phi$, the minimum of the potential is not at $\phi = 0$, and therefore $\phi$ develops a non-zero expectation value and the $U(1)$ symmetry is broken. We assume the second option, \begin{equation} \label{sbcondition} \mu^2_\phi > m^2_\phi\,, \end{equation} and proceed accordingly. Namely, we put \begin{equation} \label{phi} \phi^\prime = \frac{1}{\sqrt{2}}\left(\phi_0 + \phi_1 + i\phi_2\right), \end{equation} where \begin{equation} \label{phiexpectationvalue} \langle\phi^\prime\rangle \equiv \frac{1}{\sqrt{2}}\phi_0\,, \end{equation} is chosen to be the minimum of \begin{equation} \label{U0} U_0 = -\frac{1}{2}(\mu^2_\phi - m^2_\phi)\phi^2_0 + \frac{1}{4}\lambda_\phi\phi^4_0\,. \end{equation} Thus, \begin{equation} \label{phi0} \phi^2_0 = \frac{\mu^2_\phi - m^2_\phi}{\lambda_\phi}\,. \end{equation} Substituting \Eqs{phi}{phi0} in \Eq{Lphi} we obtain the Lagrangian for $\phi_{1,2}$. $\phi_1$ and $\phi_2$ are mixed by the $v^\mu$ term. The central result that we invoke now is that the calculation of the effective potential $V^{(\phi)}_\text{eff}(T,\mu_\phi)$ can be carried out in TFT using $\mu_\phi = 0$ in the partition (and/or distribution) function, but using the $\mu_\phi$-dependent Lagrangian $L^{(\phi^\prime)}$ given in \Eq{Lmu} \cite{weldon:comment}. Therefore, the next step would be to find the propagator matrix of the $\phi_{1,2}$ system, determine the modes that have a definite dispersion relation, and then define the thermal propagators of the modes. However, for our purposes in what follows, it is sufficient to observe that, neglecting the $T$-dependent terms (that is, at zero temperature), $V^{(\phi)}_\text{eff}(0,\mu_\phi)$ is simply the $U$ potential given in \Eq{U}, and the zero-temperature expectation value of $\phi^\prime$ is given by \Eqs{phiexpectationvalue}{phi0}. As we will see, this strategy will allow us to determine the contribution to the effective potential of fermions propagating in the BE condensate. The thermal propagators of the $\phi_{1,2}$ modes would allow us to calculate the corresponding corrections due to the thermal excitations.
While we do not pursue here the calculation of those thermal corrections, for completeness and possible relevance in future work we give in \Appendix{sec:phi12modes} some details about the propagator matrix of the $\phi_{1,2}$ complex, the modes that have a definite dispersion relation, and the corresponding propagators of the modes. \section{Model I} \label{sec:modeli} \subsection{Formulation} We consider two chiral fermions $f_L$ and $f_R$, with an interaction \begin{equation} L_{\text{int}} = -\lambda \phi \bar f_R f_L + h.c\,. \end{equation} There are two conserved charges, which we will label as $Q_{1,2}$. The assignments must satisfy \begin{equation} \label{chargeassignments} Q_i(\phi) + Q_i(f_L) - Q_i(f_R) = 0\,. \end{equation} We can take \begin{eqnarray} Q_1(f_L) = Q_1(f_R) = 1,& \qquad & Q_1(\phi) = 0\,,\nonumber\\ Q_2(\phi) = Q_2(f_R) = 1, & \qquad & Q_2(f_L) = 0\,. \end{eqnarray} Remembering how the $Q_i$ enter the partition function operator, namely \begin{equation} \label{Zgen} Z = e^{-\beta{\cal H} + \sum_i\alpha_i Q_i}\,, \end{equation} the assignments in \Eq{chargeassignments} imply that the chemical potentials satisfy \begin{equation} \label{murelation} \mu_\phi + \mu_L - \mu_R = 0\,, \end{equation} where we are denoting by $\mu_L$ and $\mu_R$ the chemical potentials of $f_L$ and $f_R$, respectively. From our discussion of the BE condensate model in \Section{sec:becmodel}, it follows that we should rewrite the Lagrangian in terms of the field $\phi^\prime$ defined in \Eq{defphiprime}. The generalization that we propose here is that every field with non-zero $Q_i$ must be transformed accordingly. Therefore, a generalization of the transformation considered in \Section{sec:becmodel} is to put \begin{eqnarray} \label{primedfieldsi} \phi & = & e^{-i\mu_\phi t}\phi^\prime\,,\nonumber\\ f_L & = & e^{-i\mu_L t}f^\prime_L\,,\nonumber\\ f_R & = & e^{-i\mu_R t}f^\prime_R\,. \end{eqnarray} Our assumption here is that the prime fields, $f^\prime_R$ and $f^\prime_L$, are the convenient ones to use to determine the fermion modes in the BE condensate. With the condition in \Eq{murelation}, the interaction coupling keeps the same form, namely \begin{equation} \label{Lintprime} L_{\text{int}} = -\lambda \phi^\prime \bar f^\prime_R f^\prime_L + h.c\,. \end{equation} However, the kinetic part of the Lagrangian changes. For $\phi^\prime$ we will borrow what we did in \Section{sec:becmodel}. But now we have to do something analogous for the fermion fields. The kinetic part of the fermion Lagrangian, \begin{equation} L_{f} = i\bar f_L\lslash{\partial} f_L + i\bar f_R\lslash{\partial}f_R\,, \end{equation} in terms of $f^\prime_R$ and $f^\prime_L$ is \begin{equation} L_{f} = i\bar f^\prime_L\lslash{\partial}f^\prime_L + i\bar f^\prime_R\lslash{\partial}f^\prime_R + \mu_L\bar f^\prime_L\lslash{u}f^\prime_L + \mu_R\bar f^\prime_R\lslash{u}f^\prime_R\,. \end{equation} As discussed in \Section{sec:becmodel}, we assume a symmetry breaking by the mechanism implemented around \Eq{sbcondition}. Therefore, we put \begin{equation} \langle\phi^\prime\rangle \equiv \frac{1}{\sqrt{2}}\phi_0\,, \end{equation} where $\phi_0$ is given in \Eq{phi0}. As a result $Q_2$ is broken, but $Q_1$ remains unbroken.
This produces a mass term in \Eq{Lintprime} of the form \begin{equation} -m\bar f^\prime_R f^\prime_L + h.c.\,, \end{equation} with \begin{eqnarray} \label{msymmbreaking} m & = & \frac{\lambda\phi_0}{\sqrt{2}}\,,\nonumber\\ & = & \frac{\lambda}{\sqrt{2}}\left( \frac{\mu^2_\phi - m^2_\phi}{\lambda_\phi}\right)^{1/2}\,, \end{eqnarray} where in the second equality we have used \Eq{phi0}. The total Lagrangian is then \begin{equation} \label{modelitotalL} L = L^{(\phi^\prime)} + L_0 + L^\prime_{\text{int}}\,, \end{equation} where $L^{(\phi^\prime)}$ is given in \Eq{Lmu}, \begin{equation} L_0 = \bar f^\prime_L i\lslash{\partial} f^\prime_L + \bar f^\prime_R i\lslash{\partial} f^\prime_R + \mu_L\bar f^\prime_L\lslash{u} f^\prime_L + \mu_R \bar f^\prime_R \lslash{u} f^\prime_R - (m\bar f^\prime_R f^\prime_L + h.c.)\,, \end{equation} and \begin{equation} L^\prime_{\text{int}} = -\frac{\lambda}{\sqrt{2}} (\phi_1 + i\phi_2) \bar f^\prime_R f^\prime_L + h.c\,. \end{equation} Defining \begin{equation} f = f^\prime_L + f^\prime_R\,, \end{equation} in momentum space $L_0$ is given by \begin{equation} L_0(k) = \bar f(k)(\lslash{k} - \Sigma(k))f(k)\,, \end{equation} where \begin{equation} \label{Sigmai} \Sigma = mL + m^\ast R - \mu_L\lslash{u}L - \mu_R\lslash{u}R \,. \end{equation} The two chiral fermions form a Dirac particle, in which the left and right components have different dispersion relations. The next step is to find the propagating modes (dispersion relations and wave functions) at the tree-level. This is most conveniently done using the Weyl representation of the $\gamma$ matrices. \subsection{Dispersion relations} The field equation in momentum space is \begin{equation} (\lslash{k} - \Sigma)f = 0\,, \end{equation} or, in terms of the left- and right-hand components of $f$, \begin{eqnarray} \lslash{A}_L f^\prime_L - m^\ast f^\prime_R & = & 0\nonumber\\ \lslash{A}_R f^\prime_R - m f^\prime_L & = & 0\,, \end{eqnarray} where \begin{eqnarray} A_{L\mu} & = & k_\mu + \mu_L u_\mu\nonumber\\ A_{R\mu} & = & k_\mu + \mu_R u_\mu\,. \end{eqnarray} In the one-generation case we are considering the phase of $m$ is irrelevant, since it can be absorbed by a field redefinition, so that we could take $m^\ast = m$. However, since in more general cases such field redefinitions cannot be done independently, we keep $m$ arbitrary. We use the Weyl representation of the gamma matrices and put \begin{eqnarray} f^\prime_L & = & \left(\begin{array}{cc}0\\ \eta\end{array}\right)\,, \nonumber\\ f^\prime_R & = & \left(\begin{array}{cc}\xi\\ 0\end{array}\right)\,. \end{eqnarray} The equations to be solved then become \begin{eqnarray} \label{modelixietaeqs} \left(A_L^0 + \vec{\sigma}\cdot\vec\kappa\right)\eta - m^\ast\xi & = & 0\,, \nonumber\\ \left(A_R^0 - \vec{\sigma}\cdot\vec\kappa\right)\xi - m\eta & = & 0\,, \end{eqnarray} where \begin{eqnarray} A_{L0} & = & \omega + \mu_L\,,\nonumber\\ A_{R0} & = & \omega + \mu_R\,, \end{eqnarray} and we have used $\vec A_{L} = \vec A_R = \vec\kappa$. In general, leaving out the case that $\mu_R = \mu_L$ (i.e., assuming $\mu_\phi \not= 0$), these equations have non-trivial solutions only if $\xi$ and $\eta$ are proportional to the same eigenvector of $\vec{\sigma}\cdot\vec\kappa$. This can be seen in various ways. 
For example, using the second equation of \Eq{modelixietaeqs} to eliminate $\eta$ in the first equation gives \begin{equation} \left[A^0_L A^0_R - \kappa^2 - |m|^2 + \vec\sigma\cdot\vec\kappa(A^0_R - A^0_L)\right]\xi = 0\,, \end{equation} which implies that $\xi$ is eigenvector of $\vec{\sigma}\cdot\vec\kappa$, and then the second equation implies that $\eta$ is proportional to $\xi$. Therefore, we write the solution in the form \begin{eqnarray} \label{etachiparammodeli} \eta & = & x\chi_s\,,\nonumber\\ \xi & = & y\chi_s\,, \end{eqnarray} where $\chi_s$ is the spinor with definite helicity, defined by \begin{equation} \label{helicityspinor} \left(\vec\sigma\cdot\hat\kappa\right)\chi_s = s\chi_s\,, \end{equation} with $s = \pm 1$. For a given helicity $s$, the equations for $x$ and $y$ are \begin{eqnarray} \label{xymodeli} (\omega + s\kappa + \mu_L)x - m^\ast y & = & 0\,,\nonumber\\ (\omega - s\kappa + \mu_R)y - mx & = & 0\,, \end{eqnarray} which imply that $\omega$ must satisfy \begin{equation} (\omega + s\kappa + \mu_L)(\omega - s\kappa + \mu_R) - |m|^2 = 0\,. \end{equation} Expressing $\mu_R$ and $\mu_L$ in terms of their sum and their difference $\mu_R \pm \mu_L$, this equation can be written in the form \begin{equation} \left[\omega + \frac{1}{2}(\mu_R + \mu_L)\right]^2 - \left[s\kappa - \frac{1}{2}(\mu_R - \mu_L)\right]^2 - |m|^2 = 0\,. \end{equation} For each $s$, we have two solutions, one with positive $\omega$ and another with a negative $\omega$. They correspond to the positive and negative helicity states of the Dirac particle and its anti-particle, which are associated with the unbroken $Q_1$. We label the two solutions for each $s$ as $\omega^{(\pm)}_{s}$. With this notation the solutions are \begin{equation} \label{drmodeli} \omega^{(\pm)}_{s}(\vec\kappa) = \pm \left\{\left[\kappa - \frac{s}{2}(\mu_R - \mu_L)\right]^2 + |m|^2\right\}^{1/2} - \frac{1}{2}(\mu_R + \mu_L)\,. \end{equation} Denoting the particle and anti-particle dispersion relations by $\omega_s$ and $\bar\omega_s$, respectively, they are to be identified according to \begin{eqnarray} \label{drmodeliantiparticle} \omega_s(\vec\kappa) & = & \omega^{(+)}_s(\vec\kappa)\nonumber\\ & = & \left\{\left[\kappa - \frac{s}{2}\mu_\phi\right]^2 + |m|^2\right\}^{1/2} - \frac{1}{2}\mu_{RL}\,,\nonumber\\ \bar\omega_s(\vec\kappa) & = & -\omega^{(-)}_s(-\vec\kappa)\nonumber\\ & = & \left\{\left[\kappa - \frac{s}{2}\mu_\phi\right]^2 + |m|^2\right\}^{1/2} + \frac{1}{2}\mu_{RL}\,, \end{eqnarray} where we have used \Eq{murelation}, and defined \begin{equation} \label{muRLdef} \mu_{RL} = \mu_R + \mu_L\,. \end{equation} It should be kept in mind that, apart from the explicit dependence on $\mu_\phi$ in \Eq{drmodeliantiparticle}, $m$ also depends on $\mu_\phi$ [see \Eq{msymmbreaking}]. \subsection{Discussion} \label{discussion} To gain some insight into the solution we can consider some particular cases. For example, while the particle and anti-particle dispersion relations are different in general, they are approximately equal in the limit of small $\mu_{RL}$. We also note that in the limit $\kappa \gg |\mu_\phi|$, the dispersion relations are approximately independent of $s$. 
They are strictly independent of $s$ at $\kappa = 0$, \begin{eqnarray} \omega_s(0) & = & \left\{\frac{1}{4}\mu_\phi^2 + |m|^2\right\}^{1/2} - \frac{1}{2}\mu_{RL}\,,\nonumber\\ \bar\omega_s(0) & = & \left\{\frac{1}{4}\mu_\phi^2 + |m|^2\right\}^{1/2} + \frac{1}{2}\mu_{RL}\,, \end{eqnarray} which can be interpreted as the effective masses of the particle and anti-particle. On top of these effects, the dispersion relations will also get corrections due to the interactions with the background excitations. In the context of thermal field theory such corrections can be determined by calculating the one-loop self-energy diagrams. As we have already indicated, those calculations are not in the scope of the present work. \section{Model II} \label{sec:modelii} We consider a massless chiral fermion $f_L$ with an interaction \begin{equation} L_{\text{int}} = -\frac{\lambda}{2} \phi \bar f^c_R f_L + h.c\,. \end{equation} In this case there is one conserved charge, with \begin{equation} Q(f_L) = 1\,,\qquad Q(\phi) = -2\,, \end{equation} and the chemical potentials satisfy \begin{equation} \label{murelatrionii} \mu_\phi + 2\mu_L = 0\,, \end{equation} where we are denoting by $\mu_L$ the chemical potential of $f_L$. Proceeding as in \Section{sec:modeli}, the total Lagrangian is given as in \Eq{modelitotalL}, but in the present case \begin{equation} \label{L0modelii} L_0 = \bar f^\prime_L i\lslash{\partial} f^\prime_L + \mu_L\bar f^\prime_L\lslash{u} f^\prime_L - \left(\frac{m}{2}\bar f^{c\,\prime}_R f^\prime_L + h.c.\right)\,, \end{equation} and \begin{equation} \label{Lintmodelii} L^\prime_{\text{int}} = -\frac{\lambda}{2\sqrt{2}} (\phi_1 + i\phi_2) \bar f^{c\,\prime}_R f^\prime_L + h.c\,. \end{equation} Defining \begin{equation} f = f^\prime_L + f^{c\,\prime}_R\,, \end{equation} $L_0$ can be written in the form \begin{equation} L_0 = \frac{1}{2}\bar f\left(i\lslash{\partial} - \Sigma\right)f\,, \end{equation} or in momentum space \begin{equation} L_0 = \frac{1}{2}\bar f(k)\left(\lslash{k} - \Sigma\right) f(k)\,, \end{equation} where \begin{equation} \label{Sigmaii} \Sigma = mL + m^\ast R - \mu_L\lslash{u}L + \mu_L\lslash{u}R\,. \end{equation} Thus in this case, as a consequence of the symmetry breaking, the fields $f_L$ and $f^c_R$ form a Majorana fermion, with the two helicities having different dispersion relations. In order to obtain the solution for the dispersion relation explicitly, by comparing \Eqs{Sigmaii}{Sigmai} we observe that the equations for the dispersion relations in the present case can be obtained from those of Model-I by setting $\mu_R \rightarrow -\mu_L$. Thus, from \Eq{drmodeli}, making the indicated substitution and remembering \Eq{murelatrionii} [$\mu_L = -\frac{1}{2}\mu_\phi$], the solutions in the present case are \begin{equation} \label{drmodelii} \omega^{(\pm)}_{s} = \pm \left\{\left(\kappa - \frac{s}{2}\mu_\phi\right)^2 + |m|^2\right\}^{1/2}\,. \end{equation} Furthermore, by the same identification given in \Eq{drmodeliantiparticle}, in this case we have \begin{equation} \bar\omega_s(\vec\kappa) = \omega_s(\vec\kappa)\,, \end{equation} that is, the particle and anti-particle dispersion relations are the same, as it must be for Majorana modes. Similar to the discussion in \Section{sec:modeli} we can consider some limiting cases. 
For illustrative purposes, in the limit of small or large $\kappa$, the dispersion relation reduces to \begin{eqnarray} \label{drmodeliiexamples} \omega_s & = & \sqrt{\frac{1}{4}\mu^2_\phi + |m|^2} - \frac{\frac{s}{2}\kappa\mu_\phi}{\sqrt{\frac{1}{4}\mu^2_\phi + |m|^2}} \qquad \mbox{(small $\kappa$)}\,, \nonumber\\[12pt] \omega_s & = & \kappa - \frac{s}{2}\mu_\phi + \frac{\frac{1}{4}\mu^2_\phi + |m|^2}{2\kappa} \qquad \mbox{(large $\kappa$)}\,, \end{eqnarray} respectively. \section{Model III} \label{sec:modeliii} \subsection{Formulation} We consider a massive Dirac fermion $f$ with mass $M$, and an interaction \begin{equation} L_{\text{int}} = -\frac{\lambda}{2} \phi \bar f^c f + h.c\,. \end{equation} Similar to Model II, there is one conserved charge, and the chemical potentials satisfy \begin{equation} \label{mufmuphirelmodeliii} \mu_\phi + 2\mu_f = 0\,. \end{equation} Putting once more \begin{eqnarray} \label{primedfieldsii} \phi & = & e^{-i\mu_\phi t}\phi^\prime\,,\nonumber\\ f & = & e^{-i\mu_f t} f^\prime\,, \end{eqnarray} instead of \Eqs{L0modelii}{Lintmodelii} in this case we have \begin{eqnarray} \label{L0modeliii} L_0 & = & \bar f^\prime i\lslash{\partial} f^\prime + \mu_f\bar f^\prime\lslash{u}f^\prime - M\bar f^\prime f^\prime - \left(\frac{m}{2}\bar f^{\prime\,c} f^\prime + h.c.\right)\,,\\ \label{Lintmodeliii} L^\prime_{\text{int}} & = & -\frac{\lambda}{2\sqrt{2}} (\phi_1 + i\phi_2)\bar f^{\prime\,c} f^\prime + h.c\,, \end{eqnarray} where $m$ is given in \Eq{msymmbreaking}. The mass term $\frac{m}{2}\bar f^{c\,\prime}f^\prime$ breaks the degeneracy between the two Majorana components of what would otherwise be a Dirac fermion. $L_0$ in \Eq{L0modeliii} resembles the kinetic part of the Lagrangian of the pseudo-Dirac neutrino model\cite{wolfensteinpetcov}, but here it has the additional term involving the chemical potential. We take $m$ to be complex in general, and denote its phase by $\theta$, i.e., \begin{equation} m = |m|e^{i\theta}\,. \end{equation} To proceed we introduce the Majorana fields \begin{eqnarray} f_1 & = & \frac{1}{\sqrt{2}}\left(e^{i\theta/2}f^\prime + e^{-i\theta/2}f^{\prime\,c}\right)\,, \nonumber\\ f_2 & = & \frac{1}{i\sqrt{2}}\left(e^{i\theta/2}f^\prime - e^{-i\theta/2}f^{\prime\,c}\right)\,, \end{eqnarray} and therefore \begin{equation} \label{nudecomposition} f^\prime = \frac{e^{-i\theta/2}}{\sqrt{2}}(f_1 + if_2)\,. \end{equation} In terms of the Majorana fields $f_{1,2}$, $L_0$ becomes \begin{equation} L_0 = \frac{1}{2}(\bar f_1 i\lslash{\partial} f_1 + \bar f_2 i\lslash{\partial} f_2) + \frac{i\mu_f}{2}(\bar f_1\lslash{u} f_2 - \bar f_2\lslash{u}f_1) - \frac{M}{2}(\bar f_1 f_1 + \bar f_2 f_2) - \frac{|m|}{2}(\bar f_1 f_1 - \bar f_2 f_2)\,. \end{equation} Therefore, in the absence of the $\mu_f$ term, $f_1$ and $f_2$ are uncoupled in $L_0$ with masses $M \pm |m|$, respectively. In the presence of the $\mu_f$ term, $f_1$ and $f_2$ are mixed. Our purpose now is to obtain the proper combinations that have a definite dispersion relation in the presence of the $\mu_f$ term. \subsection{Dispersion relations} To restate the problem in a more compact algebraic form we introduce the notation \begin{equation} f_M = \left(\begin{array}{c}f_1 \\ f_2\end{array}\right)\,.
\end{equation} In momentum space, $L_0$ is then \begin{equation} L_0 = \frac{1}{2}\bar f_M\left(\lslash{k} + \hat\mu_f\lslash{u} - \hat M\right)f_M\,, \end{equation} where \begin{equation} \hat\mu_f = \mu_f\left(\begin{array}{cc} 0 & i\\ -i & 0 \end{array}\right)\,, \end{equation} and \begin{equation} \label{matrixM} \hat M = \left(\begin{array}{cc} M_{+} & 0\\ 0 & M_{-} \end{array}\right)\,, \end{equation} where we have defined \begin{equation} M_{\pm} = M \pm |m|\,. \end{equation} The equation for the dispersion relations and the corresponding eigenspinors is \begin{equation} \label{eveqmodeliii} (\lslash{k} + \hat\mu_f\lslash{u} - \hat M)f_M = 0\,. \end{equation} As in the previous cases, we use the Weyl representation of the gamma matrices, and decompose \begin{equation} f_i = \left(\begin{array}{c}x_i\chi_s\\ y_i\chi_s\end{array}\right) \qquad (i = 1,2)\,, \end{equation} using the helicity spinors $\chi_s$ (defined in \Eq{helicityspinor}) as basis. The equations for the coefficients $x_i$ and $y_i$ then become \begin{eqnarray} (\omega + s\kappa + \hat\mu_f)x - \hat M y & = & 0\,,\nonumber\\ (\omega - s\kappa + \hat\mu_f)y - \hat M x & = & 0\,, \end{eqnarray} where $x,y$ are two-dimensional spinors in the $f_{1,2}$ \emph{flavor} space, \begin{eqnarray} x & = & \left(\begin{array}{c}x_1 \\ x_2\end{array}\right)\,,\nonumber\\ y & = & \left(\begin{array}{c}y_1 \\ y_2\end{array}\right)\,. \end{eqnarray} Again, if the $\mu_f$ term is dropped, we get back two uncoupled pairs of equations, in the Weyl representation and the helicity basis, for two massive fermions with dispersion relations $\omega = \sqrt{\kappa^2 + (M \pm |m|)^2}$. We now seek the solutions in the presence of $\mu_f$ term. Using the first to write \begin{equation} y = \frac{1}{\hat M}(\omega + s\kappa + \hat\mu_f)x\,, \end{equation} and substituting in the second one, we get the equation for $x$, \begin{equation} \label{eqx} \left[(\omega - s\kappa + \hat\mu_f)\frac{1}{\hat M} (\omega + s\kappa + \hat\mu_f) - \Hat M\right]x = 0\,. \end{equation} By straightforward algebra, we obtain \begin{equation} \label{eqxrelation} (\omega - s\kappa + \hat\mu_f)\frac{1}{\hat M} (\omega + s\kappa + \hat\mu_f) = \frac{1}{\hat M}\hat A\,, \end{equation} where \begin{equation} \label{matrixA} \hat A = \left( \begin{array}{cc} \omega^2 - \kappa^2 + r\mu^2_f & i\mu_f(\omega - s\kappa) + i\mu_f r(\omega + s\kappa)\\[6pt] - i\mu_f(\omega - s\kappa) - i\frac{\mu_f}{r}(\omega + s\kappa) & \omega^2 - \kappa^2 + \frac{\mu^2_f}{r} \end{array} \right) \end{equation} with \begin{equation} r = \frac{M_{+}}{M_{-}}\,. \end{equation} Substituting \Eq{eqxrelation} in \Eq{eqx} and multiplying by $\hat M$, the equation for $x$ is \begin{equation} (\hat A - \hat M^2)x = 0\,, \end{equation} where $\hat M$ and $\hat A$ are given in \Eqs{matrixM}{matrixA}, respectively. The dispersion relations are obtained by solving the equation \begin{equation} \label{eqdisprel1} (A_{11} - M^2_{+})(A_{22} - M^2_{-}) - A_{12}A_{21} = 0\,, \end{equation} where $A_{ij}$ are the elements of the matrix $\hat A$ defined in \Eq{matrixA}. It follows by inspection of \Eq{matrixA} that the products of the $A_{ij}$ that appear in \Eq{eqdisprel1} have the form \begin{eqnarray} (A_{11} - M^2{_+})(A_{22} - M^2_{-}) & = & \omega^4 + A_1\omega^2 + A_0\,, \nonumber\\ A_{12}A_{21} & = & A^\prime_1 \omega^2 + A^\prime_0\,, \end{eqnarray} where $A_{0,1}$ and $A^\prime_{0,1}$ are independent of $\omega$. 
\Eq{eqdisprel1} then leads to the following equation for the dispersion relation, \begin{equation} \label{eqdisprel2} \omega^4 - 2b\omega^2 + c = 0\,, \end{equation} where \begin{eqnarray} \label{bcmodeliiieq} b & = & -\frac{1}{2}(A_1 - A^\prime_1)\nonumber\,,\\ c & = & A_0 - A^\prime_0\,. \end{eqnarray} By straightforward algebra, after some simplifications, we find \begin{eqnarray} A^\prime_1 & = & \frac{\mu^2_f}{r}(1 + r)^2\,,\nonumber\\ A^\prime_0 & = & -\frac{\mu^2_f}{r}(1 - r)^2\kappa^2\,,\nonumber\\ A_0 & = & (\kappa^2 + M^2_{+} - r\mu^2_f) \left(\kappa^2 + M^2_{-} - \frac{\mu^2_f}{r}\right)\,,\nonumber\\ A_1 & = & -\left[2(\kappa^2 + M^2 + |m|^2 + \mu^2_f) - \frac{\mu^2_f}{r}(1 + r)^2\right]\,. \end{eqnarray} Then from \Eq{bcmodeliiieq}, \begin{eqnarray} \label{bcmodeliii} b & = & \kappa^2 + M^2 + |m|^2 + \frac{1}{4}\mu^2_\phi\,,\nonumber\\ c & = & \kappa^4 + 2\kappa^2\left(M^2 + |m|^2 - \frac{1}{4}\mu^2_\phi\right) + \left(M^2_{+} - \frac{r\mu^2_\phi}{4}\right) \left(M^2_{-} - \frac{\mu^2_\phi}{4r}\right)\,. \end{eqnarray} The dispersion relations are given by \begin{equation} \label{disprelmodeliii} \omega^2_{\pm} = b \pm \sqrt{d}\,, \end{equation} with \begin{equation} \label{dmodeliiieq} d = b^2 - c\,, \end{equation} where, from \Eq{bcmodeliii}, \begin{equation} \label{dmodeliii} d = 4M^2|m|^2 + \mu^2_\phi\kappa^2 + \frac{\mu^2_\phi}{4r}\left[(1 + r)^2(M^2 + |m|^2) + 2(1 - r^2)M|m|\right]\,. \end{equation} Once again we recall that $m$ is given in \Eq{msymmbreaking}. \subsection{Discussion} To gain some insight we can consider various limiting cases. \begin{description} \item[Pseudo-Dirac limit.] If the situation is such that the term $\mu^2_\phi \kappa^2$ in \Eq{dmodeliii} can be dropped (sufficiently small $\mu_\phi$ and/or $\kappa$), then the dispersion relations are given by \begin{equation} \omega^2_{\pm} = \kappa^2 + M^{\prime\,2}_{\pm}\,, \end{equation} where \begin{equation} \label{drmodeliiipseudodirac1} M^{\prime\,2}_{\pm} = M^2 + |m|^2 + \frac{1}{4}\mu^2_\phi \pm \left\{ 4M^2|m|^2 + \frac{\mu^2_\phi}{4r}\left[(1 + r)^2(M^2 + |m|^2) + 2(1 - r^2)M|m|\right]\right\}^{1/2}\,, \end{equation} which are the dispersion relations for two fermions with effective masses $M^\prime_{\pm}$. Further, in the special case that $\mu_\phi$ is sufficiently small that the explicit $\mu_\phi$ terms can be dropped in \Eq{drmodeliiipseudodirac1} (while $|m|$ is kept), the dispersion relations reduce to \begin{equation} \label{drmodeliiipseudodirac2} \omega^2_{\pm} = \kappa^2 + M^2_{\pm}\,, \end{equation} which resemble the dispersion relations in vacuum for two fermions with masses $M_{\pm}$, as already anticipated above. In the neutrino context \Eq{drmodeliiipseudodirac2} is the familiar pseudo-Dirac neutrino model\cite{wolfensteinpetcov}. However, it must be kept in mind that in the more general case in which the term $\mu^2_\phi \kappa^2$ in \Eq{dmodeliii} cannot be dropped, the $\kappa$ dependence of the dispersion relations does not have the canonical form of \Eqs{drmodeliiipseudodirac1}{drmodeliiipseudodirac2}. \begin{figure} \begin{center} \epsfig{file=fig1.eps,bbllx=163,bblly=274,bburx=474,bbury=517} \end{center} \caption[] {Plot of the dispersion relations of the Majorana modes in the case of negligible $M$, given in \Eq{omegamodeliiismallM}. For the plot we have taken $|m|^2 \sim \mu^2_\phi$. For reference, the plot of the dispersion relation $\omega_0 = \kappa$ is superimposed. \label{fig:wmodeliii} } \end{figure} \item[$|m| \ll M$ limit.]
In this limit, the $d$ term in \Eq{dmodeliii} can be approximated by \begin{equation} d = 4M^2|m|^2 + \mu^2_\phi\kappa^2 + \mu^2_\phi M^2\,, \end{equation} so that the dispersion relations reduce to \begin{equation} \label{approxsol1modeliii} \omega^2_{\pm}(\kappa) = \kappa^2 + M^2 + |m|^2 + \frac{1}{4}\mu^2_\phi \pm 2\sqrt{M^2 |m|^2 + \frac{1}{4}\mu^2_\phi\kappa^2 + \frac{1}{4}\mu^2_\phi M^2}\,. \end{equation} Further, taking the $\kappa\rightarrow 0$ limit, \begin{equation} \omega^2_{\pm}(0) = \left(M \pm\sqrt{|m|^2 + \frac{1}{4}\mu^2_\phi}\right)^2\,, \end{equation} which can be interpreted as the effective masses of the Majorana modes, in the $|m| \ll M$ limit. But again, the $\kappa$ dependence of the dispersion relation is different than the one given in \Eqs{drmodeliiipseudodirac1}{drmodeliiipseudodirac2}. In the case that $|m|$ can be neglected relative to $\mu_\phi$ (for example, if $\mu_\phi$ is sufficiently close to $m_\phi$), \Eq{approxsol1modeliii} can be approximated by \begin{equation} \omega_{\pm}(\kappa) = \sqrt{\kappa^2 + M^2} \pm \frac{1}{2}\mu_\phi\,, \end{equation} which resemble the dispersion relation of a neutrino propagating in a matter background with a Wolfenstein-like potential $V_{\text{eff}} = \frac{1}{2}\mu_\phi$. \item[Small $M$ limit.] For sufficiently small values of $M$, the dispersion relations are approximated by \begin{equation} \label{omegamodeliiismallM} \omega^2_{\pm} = \kappa^2 + |m|^2 + \frac{1}{4}\mu^2_\phi \pm \mu_\phi\kappa\,. \end{equation} Therefore, the two modes have the same effective mass \begin{equation} \omega(0) = \sqrt{|m|^2 + \frac{1}{4}\mu^2_\phi}\,, \end{equation} but different dispersion relations away from $\kappa = 0$. A plot of \Eq{omegamodeliiismallM} is shown in \Fig{fig:wmodeliii}. \end{description} \section{Conclusions and outlook} \label{sec:conclusions} In previous works we have carried out a systematic calculation of the neutrino dispersion relation, as well as the damping and decoherence effects, when the neutrino propagates in a thermal background of fermions and scalars, with a Yukawa-type interaction between the neutrino and the background particles [see \Rref{ns:nuphiresonance} and references therein]. As a complement to that work, the motivation of the present work is to determine the corresponding quantities for the case in which the scalar background consists of a Bose-Einstein condensate. To this end, here we have proposed an efficient and consistent method to treat the propagation of generic fermions in the background of a BE condensate. With an outlook to possible application in other contexts, we have illustrated and implemented the method in a general way, not tied to any specific application. In the present work we have focused exclusively on the calculation of the dispersion relations. To model the propagation of the fermions in such an environment, we assumed some simple Yukawa-type interactions between the fermions and the scalar. As mentioned in the Introduction, the method we use to treat the BE condensate has been discussed by various authors\cite{weldon:phimu,filippi:phimu, schmitt:phimu}. In \Section{sec:becmodel} we reviewed those aspects and details of the method that are relevant for our purposes. In the following three sections we presented the extension we propose of that method to treat the propagation of fermions in the BE condensate, in the context of three generic, but specific, models of the fermion-scalar interaction.
Specifically in \Section{sec:modeli} we considered two massless chiral fermions, $f_L$ and $f_R$, with a coupling to the scalar particle $\phi$ of the form $\bar f_R f_L\phi$ (Model I). In \Section{sec:modelii} we considered a massless chiral fermion $f_L$ with coupling $\bar f^c_R f_L\phi$ (Model II). Finally in \Section{sec:modeliii} we considered one massive Dirac fermion $f$ with a coupling $\bar f^c f\phi$ (Model III). In each case we determined the fermion modes and corresponding dispersion relations and pointed out some of their particular characteristics. For example, as a result of the symmetry breaking the propagating mode is a Dirac fermion and a Majorana fermion in Models I and II, respectively. In Model III the symmetry breaking produces two non-degenerate Majorana modes of what otherwise would form a Dirac fermion field in the unbroken phase. In the latter case, various particular features of the dispersion relations of the Majorana modes were illustrated by considering particular limiting cases of the parameters of the model. For example, one interesting observation is that, while in general the two Majorana modes have different effective masses (the value of the dispersion relation at zero momentum), in some limits the two modes have the same effective mass although the dispersion relations at non-zero momentum are different. The method we propose for the propagation of fermions in a BE condensate has never been used before, and can be applied in various contexts, for example neutrino physics, condensed or nuclear matter systems and heavy-ion collisions. In addition, the work lays the groundwork for considering the case of various fermion flavors, as would be required for the application to neutrinos, or the corrections to the dispersion relations due to the thermal effects of the background excitations, which could be required for particular applications. The work of S. S. is partially supported by DGAPA-UNAM (Mexico) PAPIIT project No. IN103522.
{ "arxiv_id": "2302.14268", "language": "en", "timestamp": "2023-03-01T02:07:37", "url": "https://arxiv.org/abs/2302.14268", "yymm": "2302" }
\section{Method} \begin{figure*}[ht] \centering \includegraphics[width=0.90\textwidth]{./imgs/overall-pipeline-z.pdf} \vspace{-10pt} \caption{ \footnotesize Overview of the proposed self-supervised articulated object pose estimation strategy. The method takes a complete or partial point cloud of an articulated object as input and factorizes it into canonical shapes, object structure, and the articulated object pose. The network is trained by a shape reconstruction task. \textbf{Left:} A high-level abstraction of our pipeline. \textbf{Right:} An illustration of the decomposed information used for shape reconstruction. \textcolor{mygreen}{Green} lines (\textcolor{mygreen}{$\leftarrow$}) denote the iterative pose estimation process. } \label{fig_overall_pipeline} \vspace{-10pt} \end{figure*} We present our method for self-supervised category-level articulated object pose estimation. We first propose to learn part-level SE(3) equivariant features through a novel pose-aware equivariant point convolution module (sec.~\ref{sec_method_revisit}). Based on such features, we then design a disentanglement strategy to factorize an arbitrarily posed 3D point cloud into three types of information: a canonical shape with category-aligned pose and articulation state, the object structure describing part adjacency and joints, and the articulated object pose (sec.~\ref{sec_method_ssl_method}). We find that part-level SE(3) equivariant features are key to achieving this factorization. Further, we adopt a part-by-part shape reconstruction task that recombines the factorized information to reconstruct the input shape, thereby self-supervising the factorization (sec.~\ref{sec_method_self_supervised_task}). Our method assumes a category-level setting where input shapes share the same kinematic chain. Throughout the following text, $N$, $C$, and $K$ denote the number of points, the feature dimension, and the number of parts per shape, respectively. \subsection{Part-level SE(3)-equivariant Network} \label{sec_method_revisit} We first elaborate on our part-level SE(3) equivariant network. The network $\phi(\cdot)$ operates on a point cloud $X = \{ \mathbf{x}_i \vert 1\le i\le N \}$ with per-point poses and outputs part-level SE(3) equivariant features for all points $F = \{F_i=\phi(X)[i] \vert 1\le i\le N \}$. Here the pose of a point refers to the pose of that point's parent part. We introduce the concept of part-level equivariant features to differentiate them from the object-level equivariant features in~\cite{chen2021equivariant}, where the per-point feature changes equivariantly with the global transformation applied to the object. The part-level equivariant feature $F_i$ of each point $x_i$ changes equivariantly with the rigid transformation applied to its parent part, but remains invariant to transformations of other parts. We develop our network based on the Equivariant Point Network (EPN)~\cite{chen2021equivariant}, adding a novel pose-aware equivariant point convolution module to support part-level equivariance. In the following, we briefly review EPN and then describe our pose-aware equivariant point convolution. \vpara{Equivariant Point Network.} EPN takes a point cloud $X$ containing $N$ points and a rotation group $G$ with $\vert G\vert$ elements as input and extracts $C$-dimensional per-point, per-rotation features, forming a feature matrix $F\in \mathbb{R}^{N\times C\times \vert G\vert}$.
$F$ is rotationally and translationally equivariant to a specific rigid transformation group $G_A$ induced by $G$. The rotation-equivariant transformation for each rotation element $g\in G$ acts in the feature domain as a corresponding permutation of $F$ along its last dimension. The translational equivariance achieved by EPN is essentially translational invariance: simply using relative point coordinates for convolution allows $F$ to remain the same when the input point cloud is translated. \vpara{Pose-aware Equivariant Point Convolution.} For part-level SE(3) equivariant features, we design a pose-aware point convolution strategy that operates on a point cloud with per-point poses. When conducting convolution within a group of points, our core idea is to align point poses to the pose of the group center. Since we use the pose of a point to refer to the pose of its parent part, such alignment cancels out the influence of varying articulation states on the geometric description of each point. Intuitively, if a point comes from the same part as the group center, information is aggregated as in a normal convolution. If a point comes from a different part than the group center, pose alignment canonicalizes the articulation state so that the convolution outcome remains the same regardless of articulation changes. Our pose-aware convolution strategy thus aggregates context information from different parts while preventing features from changing as the articulation changes. Equipping EPN with this strategy, we achieve part-level equivariance: the feature of each point changes only as its parent part transforms and remains invariant to transformations of other parts. We then formally define our convolution operator. Taking a point cloud $X$ and the per-point poses $P = \{ {P}_i \vert 1\le i\le N \}$ as input, our convolution operator for the point $x_i$'s feature at the rotation element $g$ is as follows: \begin{align} (\mathcal{F}* h_1)(x_i,g) = \sum_{x_j\in \mathcal{N}_{x_i}} \mathcal{F}(x_j, g \mathbf{R}_i\mathbf{R}_j^{-1} ) h_1(g(x_i - {P}_{i}{P}_j^{-1}x_j)), \label{eq_part_level_convolution} \end{align} where $\mathcal{F}(x_i,g)$ is an input function, $h_1(\cdot)$ is a kernel function, $\mathcal{N}_{x_i}$ is the set of points in the neighbourhood of $x_i$, $P_i$ and $P_j$ denote the input poses of points $x_i$ and $x_j$ respectively, and $\mathbf{R}_i$ and $\mathbf{R}_j$ are their rotation components. We prove that using the above convolution within EPN leads to part-level equivariance in Appendix~\ref{sec_appen_part_level_equiv}. We highlight that we adopt an iterative pose estimation strategy (see Appendix~\ref{sec_appen_additional_explanations} for details) for the per-point poses and rotations in Eq.~\ref{eq_part_level_convolution}, which are initialized to the identity in the first iteration. \subsection{Part Shape, Structure, and Pose Disentanglement} \label{sec_method_ssl_method} To obtain a fine-grained understanding of an articulated object, we disentangle three types of information from the input: 1) {Canonical shape}; 2) Object structure; 3) Articulated object pose. To be more specific, we first use the designed part-level SE(3)-equivariant network to extract per-point features from an input shape. We then leverage a self-supervised slot-attention module to group the featured points, forming a set of featured parts for the disentanglement.
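\par As a concrete illustration of the pose-aware convolution in Eq.~\ref{eq_part_level_convolution}, the NumPy-style sketch below evaluates the operator for a single point. It is illustrative only and not our actual implementation: it assumes a nearest-anchor lookup for the aligned rotation $g\mathbf{R}_i\mathbf{R}_j^{-1}$ (needed because the estimated rotations are continuous while $G$ is discrete) and a generic kernel function \texttt{kernel} standing in for $h_1$.
\begin{verbatim}
import numpy as np

def pose_aware_conv_point(i, anchors, feats, pts, poses, neighbors, kernel):
    """Sketch of Eq. (1) for one point x_i.

    anchors  : (|G|, 3, 3) rotation matrices of the discrete group G.
    feats    : (N, C, |G|) per-point, per-anchor features F(x_j, g).
    pts      : (N, 3) point coordinates in the camera space.
    poses    : list of (R_j, t_j) estimated per-point part poses P_j.
    neighbors: indices of the points in x_i's neighbourhood.
    kernel   : callable h1 mapping a 3-vector offset to a (C,) weight.
    returns  : (|G|, C) convolved feature for x_i at every anchor g.
    """
    R_i, t_i = poses[i]
    out = np.zeros((anchors.shape[0], feats.shape[1]))
    for a, g in enumerate(anchors):
        for j in neighbors:
            R_j, t_j = poses[j]
            # Feature of x_j at the aligned rotation g R_i R_j^{-1};
            # approximated here by the nearest discrete anchor.
            g_aligned = g @ R_i @ R_j.T
            a_near = int(np.argmin(
                [np.linalg.norm(g_aligned - ga) for ga in anchors]))
            f_j = feats[j, :, a_near]
            # x_j mapped through P_i P_j^{-1}, offset then rotated by g.
            x_j_aligned = R_i @ (R_j.T @ (pts[j] - t_j)) + t_i
            rel = g @ (pts[i] - x_j_aligned)
            out[a] += f_j * kernel(rel)
    return out
\end{verbatim}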
We predict a canonical shape for each part to induce the category-level canonical part spaces required by the part pose definition. Then we disentangle the structure- and pose-related information that gradually transforms the canonical part shapes into the observed shape. First, we predict \emph{part-assembling parameters} to transform each canonical part shape to form the canonical object shape. After that, the \emph{kinematic chain}, \emph{joint parameters}, and \emph{joint states} are predicted to articulate the canonical object shape into the observed articulation state. Finally, a \emph{base part rigid transformation} is predicted to further transform the resulting articulated object to the observed shape in the camera space. We elaborate on the above designs in the following text. \vpara{Part Proposal.} The part proposal module groups the $N$ points in the input shape $X$ into $K$ parts for per-part equivariant feature extraction. It learns an invariant grouping function that maps $X$ together with a point feature matrix $F$ to a point-part association matrix $\mathbf{W}\in \mathbb{R}^{N\times K}$. Specifically, we adopt an attention-pooling operation over the per-point invariant features together with a slot attention module~\cite{locatello2020objectcentric} for grouping. Based on the proposed parts, we can group points in the input shape $X$ into $K$ point clouds $\{ X_i\vert 1\le i\le K \}$ and compute the per-part equivariant features $\{ F_i\vert 1\le i\le K\}$. \vpara{Shape: Canonical Part Shape Reconstruction.} With per-part equivariant features, we aim to predict a canonical shape for each part, which should be aligned within a category so that category-level part poses can be defined. The canonical shape of each part should be invariant to every part's rigid transformation. Thus, we adopt an SE(3)-invariant canonical shape reconstruction module constructed from an SO(3)-PointNet module as utilized in~\cite{li2021leveraging}. The reconstruction module first converts per-part equivariant features $F_i$ into per-part invariant features through attention pooling and then predicts an SE(3)-invariant shape $Z_i$ for each part. \vpara{Structure: Kinematic Chain Prediction.} In addition to the canonical shape of each part, we also need to understand the kinematic chain of a shape. The kinematic chain defines how different parts are connected and the order in which they are transformed when a cascaded transformation happens, \emph{i.e.}, from the chain leaves to the chain root. To estimate the kinematic chain for a given shape, we first construct an adjacency confidence graph over the object parts and then extract its maximum spanning tree, consisting of the set of confident adjacency edges. We set the part with the largest degree in the graph to be the root of the tree, which also serves as the base part of the object. The transformation order is then predicted as the inverse DFS visiting order of the tree. Since the kinematic chain should not be affected by the input articulation pose, we leverage per-part SE(3)-invariant features for its estimation. \vpara{Structure: Joint Parameters Prediction.} For each pair of adjacent parts, we then infer their joint parameters, including an invariant pivot point $\mathbf{p}_{i,j}^v$ and a joint axis orientation hypothesis $\mathbf{u}_i^g$ for each rotation element $g\in G_g$. For pivot points, we treat them as invariant properties and again adopt an invariant shape reconstruction module for prediction.
Specifically, we predict the pivot point $\mathbf{p}_{i,j}^v$ between every two adjacent parts $i,j$ from their equivariant features $( F_i, F_j )$ using an invariant shape reconstruction module~\cite{li2021leveraging}. For joint axis orientations, we regress an axis orientation hypothesis $\mathbf{u}_i^g$ for part $i$ corresponding to each rotation group element $g\in G_g$ from its equivariant feature $F_i$. \vpara{Pose: Part-assembling Parameters Prediction.} The part-assembling parameters transform the predicted canonical part shapes to assemble a canonical object shape. As parameters connecting invariant canonical shapes, they should be invariant to every part's rigid transformation as well. Here, we simply predict a translation vector $\mathbf{p}_i^c \in \mathbb{R}^{3}$ for each part $i$. We predict them through invariant shape reconstruction modules from the per-part equivariant features $\{F_i\vert 1\le i\le K\}$. We can then assemble the predicted canonical part shapes together to form the canonical object shape: $Z = \{ Z_i + \mathbf{p}_i^c \vert 1 \le i\le K \}$. \vpara{Pose: Joint States Prediction.} Joint states describe the articulation state of an object. For each part $i$, we predict a joint state hypothesis for each rotation element $g\in G$ from its equivariant feature $F_i$, \emph{i.e.}, a rotation angle $\theta_i^g$ for a revolute part or a translation scalar $s_i^g$ for a prismatic part. We can therefore articulate the canonical object shape based on the predicted kinematic chain and joint states with the base part fixed, so as to match the object articulation of the input observation. \vpara{Pose: Base Part Rigid Transformation.} The base part rigid transformation transforms the articulated canonical object shape to the camera space. Since we have predicted joint state hypotheses for every rotation element $g$, we correspondingly need multiple base transformation hypotheses. We simplify the base part transformation to be a rotation, which proves effective in practice. A straightforward way is to use the rotation matrix corresponding to each rotation element $g$ as the base transformation hypothesis. We follow this idea but also predict an additional residual rotation as a refinement. By transforming the articulated canonical object shape via the predicted base part rigid transformation, we can align the resulting shape with the observed input object. \vpara{Articulated Object Pose.} With the above predicted quantities, we can calculate per-rotation articulated object pose hypotheses for an input articulated object $X$, consisting of three parts: 1) the translation $\mathbf{p}_i^c$ of each part $i$, which assembles category-aligned canonical parts into a canonical object; 2) the per-rotation articulated transformation of the canonical object based upon the predicted kinematic chain, joint parameters, and per-rotation joint states; 3) the per-rotation base part rigid transformation, which transforms the articulated canonical object into the camera space. The rigid transformation hypothesis for each part $i$ corresponding to each rotation element $g\in G$ is denoted as $P_i^g = (\mathbf{R}_i^g, \mathbf{t}_i^g)$. We treat these as part pose hypotheses.
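\par To make the composition above concrete, the sketch below assembles the pose hypothesis of a single revolute part from its part-assembling translation, joint parameters, joint state, and base rotation. It is a simplified illustration under the stated simplification that the base transformation is a pure rotation; variable names are illustrative and it is not our actual implementation.
\begin{verbatim}
import numpy as np

def axis_angle_to_R(u, theta):
    """Rodrigues' formula: rotation by angle theta about unit axis u."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def compose_revolute_part_pose(p_assemble, pivot, axis, theta, R_base):
    """Compose one revolute part's pose hypothesis (R_i^g, t_i^g).

    Canonical part -> canonical object : z + p_assemble
    Articulation about the joint       : R_art (x - pivot) + pivot
    Base transform to the camera space : R_base x
    """
    p = np.asarray(p_assemble, dtype=float)
    v = np.asarray(pivot, dtype=float)
    R_art = axis_angle_to_R(axis, theta)
    R = R_base @ R_art
    t = R_base @ (R_art @ (p - v) + v)
    return R, t

# Posed reconstruction of part i for one hypothesis g, with Z_i of shape (N_i, 3):
#   Y_i_g = Z_i @ R.T + t
\end{verbatim}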
\subsection{Shape Reconstruction-based Self-supervised Task} \label{sec_method_self_supervised_task} Based on the reconstructed canonical part shapes and the predicted per-rotation part pose hypotheses, we obtain a per-rotation shape reconstruction for each part $i$: $\{ Y_i^{g} = \mathbf{R}_i^g Z_i + \mathbf{t}_i^g \vert g\in G\}$. A part-by-part reconstruction task is adopted to self-supervise the network. In addition, we add a regularization term for each predicted joint so that the joint indeed connects its two parts. \vpara{Shape Reconstruction-based Self-supervised Loss.} The per-rotation shape reconstruction of the whole object is obtained by concatenating all part reconstructions: $Y^{g} = \{ Y_i^{g} \vert 1\le i\le K \}$. We then adopt a min-of-N loss between the input observation $X$ and the reconstructed posed point clouds: \begin{align} \mathcal{L}_{rec} = \min_{ g \in G } d(X, Y^{g}), \label{eq_recon_loss} \end{align} where $d: \mathbb{R}^{N_X\times 3} \times \mathbb{R}^{N_Y\times 3}\rightarrow \mathbb{R}$ denotes a distance function between two point clouds, e.g., the unidirectional or bidirectional Chamfer Distance. \vpara{Regularization for Joint Prediction.} Predicted joints should connect adjacent parts and support natural articulations. However, supervising the joint parameters only through the reconstruction loss is not sufficient for these needs. Therefore, we devise a point-based joint constraint term for each predicted joint $(\mathbf{u}_i^{g_0}, \mathbf{p}_{i,j}^v)$, where $g_0 = \text{argmin}_{g\in G}d(X, Y^g)$ (Eq.~\ref{eq_recon_loss}). Specifically, given the predicted pivot point $\mathbf{p}_{i,j}^v$ and joint orientation $\mathbf{u}_i^{g_0}$, we independently and randomly sample a set of points on the joint by shifting the pivot point $\mathbf{p}_{i,j}^v$: $P_{i,j}^v = \{ \mathbf{p}_{i,j}^{v,k} \vert 0\le k\le K^v\}$. The joint regularization loss term is as follows: \begin{align*} \mathcal{L}_{reg} = \sum_{(i,j)\in \mathcal{E}_{\mathcal{T}}} d(P_{i,j}^v, Z_i^2) + d(P_{i,j}^v, Z_j^2) + d(P_{i,j}^v, Z_i^1) + d(P_{i,j}^v, Z_j^1), \end{align*} where $Z_i^1$ and $Z_i^2$ are the shapes of part $i$ in the canonical object space before and after its articulated transformation, $\mathcal{E}_{\mathcal{T}}$ is the set of adjacent parts, and $d(X_1, X_2)$ is the unidirectional Chamfer Distance from point cloud $X_1$ to $X_2$. Our final self-supervised shape reconstruction loss is a linear combination of the above two loss terms: $\mathcal{L} = \mathcal{L}_{rec} + \lambda \mathcal{L}_{reg}$, where $\lambda$ is a hyper-parameter. \begin{figure*}[ht] \centering \includegraphics[width=0.80\textwidth]{./imgs/vis_com_ours_npcs.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization for qualitative evaluation. In every pair of rows, the first row shows the results of our method and the second row shows those of NPCS. Each group of three shapes, from left to right, shows the input point cloud (\textbf{Input}), the reconstruction (\textbf{Recon.}), and the reconstructed canonical object shape (\textbf{Canon.}). \textbf{We do not assume input shape alignment; shapes are aligned here only for better viewing.} Please zoom in for details.
} \label{fig_canon_vis} \vspace{-20pt} \end{figure*} \section{Introduction} Articulated object pose estimation is a crucial and fundamental computer vision problem with a wide range of applications in robotics, human-object interaction, and augmented reality~\cite{katz2008manipulating,mu2021maniskill,labbe2021single,jiang2022ditto,goyal2022human,li2020detailed}. Different from 6D pose estimation for rigid objects~\cite{tremblay2018deep,xiang2017posecnn,sundermeyer2018implicit,wang2019normalized}, articulated object pose estimation requires a hierarchical pose understanding at both the object level and the part level~\cite{li2020category}. This problem has long been studied at the instance level, where an exact CAD model is required to understand the pose of a specific instance. Recently, there has been a trend toward estimating category-level object poses so that the algorithm can generalize to novel instances. Despite such merits, supervised category-level approaches typically assume rich annotations that are extremely expensive to acquire~\cite{li2020category,chi2021garmentnets,9670684}. To get rid of such restrictions, we tackle this problem under a self-supervised setting instead. Given a collection of unsegmented articulated objects in various articulation states with different object poses, our goal is to design a network that can acquire a category-level articulated object pose understanding in a self-supervised manner, without any human labels such as pose annotations, segmentation labels, or reference frames for pose definition. The self-supervised category-level articulated object pose estimation problem is highly ill-posed, since it requires knowledge of the object structure and per-part poses, which are usually entangled with part shapes. Very few previous works attempt to solve this problem or even similar ones. The most related attempt is the work of~\cite{li2021leveraging}. It tackles the unsupervised category-level pose estimation problem, but only for rigid objects. It leverages SE(3)-equivariant shape analysis to disentangle the global object pose and shape information so that a category-aligned canonical object space can emerge. In this way, category-level object poses can be learned automatically by predicting a transformation from the canonical space to the camera space. Going beyond rigid objects, estimating articulated object poses demands more than just global pose and shape disentanglement. It requires a more fine-grained disentanglement of part shapes, object structure such as part adjacency relationships, joint states, part poses, and so on. To achieve such fine-grained disentanglement, we propose to leverage part-level SE(3) equivariant shape analysis. In particular, we introduce the concept of part-level SE(3) equivariant features to equip equivariance with spatial support. The part-level SE(3) equivariant feature of a local region should only change as its parent part transforms, but should not be influenced by the transformations of other parts. This is in contrast to the object-level SE(3) equivariant feature of a local region, which is influenced by both the region's parent part and other parts. To densely extract part-level SE(3) equivariant features from an articulated shape, we propose a novel pose-aware equivariant point convolution operator.
Based on such features, we are able to achieve a fine-grained disentanglement that learns three types of information from input shapes: 1) \emph{Canonical part shapes}, which are invariant to input pose or articulation changes and are category-aligned to provide a consistent reference frame for part poses; 2) \emph{Object structure}, which is also invariant to input pose or articulation changes and contains structural information about the part adjacency relationships, part transformation order, and joint parameters such as pivot points; 3) \emph{Articulated object pose}, which is composed of a series of estimated transformations. Such transformations include per-part rigid transformations that assemble canonical part shapes into a canonical object shape, per-part articulated transformations that articulate the canonical object shape to match the input articulation state, and a base part rigid transformation that transforms the articulated canonical object to the camera space. To allow such disentanglement, we guide the network learning through a self-supervised part-by-part shape reconstruction task that combines the disentangled information to recover the input shapes. With the above self-supervised disentanglement strategy, our method demonstrates the possibility of estimating articulated object poses in a self-supervised way for the first time. Extensive experiments demonstrate its effectiveness on both complete and partial point clouds from various categories covering both synthetic and real datasets. On the Part-Mobility Dataset~\cite{wang2019shape2motion}, our method, without any human annotations, already outperforms the iterative pose estimation strategy that uses ground-truth segmentation masks in both the complete and partial settings by a large margin, \emph{e.g.}, reducing the rotation estimation error by around 30 degrees on complete shapes and by 40 degrees on partial shapes. Moreover, our method performs on par with or even better than supervised methods such as NPCS~\cite{li2020category}. For instance, we achieve an average rotation estimation error of 7.9$^{\circ}$ on complete shapes, comparable to NPCS's 5.8$^{\circ}$ error. We can even outperform NPCS on some specific categories such as partial Eyeglasses. Finally, we verify the effectiveness of our part-level SE(3) equivariance design and the fine-grained disentanglement strategy in the ablation study. Our main contributions are summarized as follows: \begin{itemize}[leftmargin=.5cm] \item To the best of our knowledge, we are the first to tackle the self-supervised articulated object pose estimation problem. \item We design a pose-aware equivariant point convolution operator to learn part-level SE(3)-equivariant features. \item We propose a self-supervised framework to achieve the disentanglement of canonical shape, object structure, and articulated object pose. \end{itemize} \section{Method Details} \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{./imgs/detailed-overview.pdf} \vspace{-10pt} \caption{ \footnotesize Overview of the proposed self-supervised articulated object pose estimation strategy. The method takes a complete or partial point cloud of an articulated object as input and factorizes it into canonical shapes, object structure, and the articulated object pose. The network is trained by a shape reconstruction task. Part-level SE(3) equivariant features are learned by iterating between part pose estimation and pose-aware equivariant point convolution.
\textcolor{mygreen}{Green} lines (\textcolor{mygreen}{$\leftarrow$}) denote procedures for feeding the estimated part poses back to the pose-aware point convolution module. } \label{fig_overall_pipeline_detailed} \vspace{-10pt} \end{figure*} \subsection{Overview} \label{sec_appen_method_detailed_overview} We provide a detailed diagram of our self-supervised learning strategy in Figure~\ref{fig_overall_pipeline_detailed}. \subsection{Proof of the Part-Level Equivariant Property of the Pose-aware Point Convolution Module} \label{sec_appen_part_level_equiv} In this section, we prove the part-level equivariant property of the designed pose-aware point convolution module: \begin{align} (\mathcal{F} * h_1)(x_i,g) = \sum_{{P}_j^{-1}x_j\in \mathcal{N}_{{P}_i^{-1}{x_i}}^c} \mathcal{F}(x_j, g \mathbf{R}_i\mathbf{R}_j^{-1} ) h_1(g(x_i - {P}_i{P}_j^{-1}x_j)), \end{align} where ${P}_i$ and ${P}_j$ are the (estimated) poses of $x_i$ and $x_j$ from the canonical object space to the camera space respectively, $\mathbf{R}_i$ and $\mathbf{R}_j$ are the (estimated) rotations of points $x_i$ and $x_j$ from the canonical object space to the camera space respectively, and $\mathcal{N}_{{P}_i^{-1}x_i}^c$ denotes the set of point $x_i$'s neighbours in the canonical object space. Note that the neighbourhood set $\mathcal{N}_{x_i}$ in Equation~\ref{eq_part_level_convolution} represents the neighbourhood of $x_i$ in the camera space, whose points belong to $x_i$'s neighbourhood in the canonical object space, \emph{i.e.}, $\mathcal{N}_{{P}_i^{-1}{x_i}}^c$. $\mathcal{N}_{x_i}$ would vary as $x_i$'s pose changes, while $\mathcal{N}_{{P}_i^{-1}{x_i}}^c$ remains the same. Hence, $\{ x_j \vert x_j \in \mathcal{N}_{x_i} \} = \{ x_j \vert {P}_j^{-1}x_j\in \mathcal{N}_{{P}_i^{-1}{x_i}}^c \}$. To prove the part-level equivariance of $(\mathcal{F}* h_1)(x_i,g)$, we need to prove that 1) $(\mathcal{F}* h_1)(x_i,g)$ is invariant to the rigid transformation of each of point $x_i$'s neighbouring points $x_j$; 2) $(\mathcal{F}* h_1)(x_i,g)$ is equivariant to the rigid transformation of $x_i$ itself. We prove those properties for the continuous convolution operation, $(\mathcal{F} * h_1)(x_i, g) = \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1})h_1(g(x_i - {P}_i{P}_j^{-1}x_j))$. \begin{theorem} The continuous operation $(\mathcal{F} * h_1)(x_i, g) = \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1})h_1(g(x_i - {P}_i{P}_j^{-1}x_j))$ is invariant to an arbitrary rigid transformation $\Delta {P}_j = (\Delta \mathbf{R}_j \in \text{SO(3)}, \Delta \mathbf{t}_j \in \mathbb{R}^{3})$ applied to any neighbouring point $x_j$ of $x_i$ ($\forall x_j\in \mathbb{R}^3, x_j\neq x_i$). \end{theorem} \begin{proof} To prove the invariance of $(\mathcal{F} * h_1)(x_i, g)$, we need to prove that $\forall x_j \in \mathbb{R}^{3}, x_j\neq x_i, \forall \Delta{P}_j \in \text{SE(3)}, \mathbf{R}_j' = \Delta \mathbf{R}_j \mathbf{R}_j$, we have \begin{align*} \Delta {P}_j (\mathcal{F} * h_1) (x_i,g) = (\mathcal{F} * h_1) (x_i,g).
\end{align*} Let $x_j' = \Delta {P}_j x_j$ and ${P}_j' = \Delta {P}_j {P}_j$; then we have \begin{align*} \Delta {P}_j (\mathcal{F} * h_1)(x_i, g) &= \int_{x_j'\in \mathbb{R}^{3}} \mathcal{F}(x_j', g\mathbf{R}_i\mathbf{R}_j^{'-1}) h_1(g(x_i - {P}_i{P}_j^{'-1}x_j')) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(\Delta {P}_j x_j, g\mathbf{R}_i\mathbf{R}_j^{-1}\Delta \mathbf{R}_j^{-1}) h_1(g(x_i - {P}_i{P}_j^{-1}\Delta {P}_j^{-1}\Delta {P}_jx_j)) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(\Delta \mathbf{R}_j x_j, g\mathbf{R}_i\mathbf{R}_j^{-1}\Delta \mathbf{R}_j^{-1}) h_1(g(x_i - {P}_i{P}_j^{-1}x_j)) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1}) h_1(g(x_i - {P}_i{P}_j^{-1}x_j)) \\ &= (\mathcal{F} * h_1)(x_i,g). \end{align*} \end{proof} \begin{theorem} The continuous operation $(\mathcal{F} * h_1)(x_i, g) = \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1})h_1(g(x_i - {P}_i{P}_j^{-1}x_j))$ is equivariant to the rigid transformation $\Delta {P}_i = (\Delta \mathbf{R}_i \in \text{SO(3)}, \Delta \mathbf{t}_i \in \mathbb{R}^{3})$ of $x_i$. \end{theorem} \begin{proof} To prove that $(\mathcal{F} * h_1)(x_i,g)$ is equivariant to the rigid transformation of $x_i$, we need to prove that $\forall \Delta {P}_i\in \text{SE(3)}$, we have \begin{align*} \Delta {P}_i (\mathcal{F} * h_1)(x_i,g) = (\Delta \mathbf{R}_i\mathcal{F} * h_1)(x_i, g). \end{align*} It can be proved by \begin{align*} \Delta {P}_i (\mathcal{F} * h_1)(x_i,g) &= (\mathcal{F} * h_1)(\Delta{P}_ix_i, g\Delta\mathbf{R}_i) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\Delta \mathbf{R}_i \mathbf{R}_i\mathbf{R}_j^{-1}) h_1(g(\Delta{P}_ix_i - \Delta{P}_i{P}_j^{-1}x_j)) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, (g\Delta \mathbf{R}_i) \mathbf{R}_i\mathbf{R}_j^{-1}) h_1((g\Delta\mathbf{R}_i)(x_i - {P}_j^{-1}x_j)) \\ &= (\Delta \mathbf{R}_i\mathcal{F} * h_1)(x_i, g). \end{align*} \end{proof} \subsection{Further Explanations on Some Method Components} \label{sec_appen_additional_explanations} \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{./imgs/kinematic_chain.pdf} \vspace{-10pt} \caption{ \footnotesize Kinematic chain prediction procedure (an example of an object containing three parts). } \label{fig_kinematic_chain_prediction} \vspace{-10pt} \end{figure*} \vpara{Kinematic Chain Prediction.} The kinematic chain is predicted as an invariant property from per-part invariant features to describe the part articulation transformation order. It is predicted through the following four steps: 1) Predict an adjacency confidence value $c_{i,j}$ for each part pair $(i,j)$; 2) Construct a fully-connected adjacency confidence graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ based on the predicted confidence values, with all parts as its nodes and the predicted confidence values as edge weights; 3) Find a maximum spanning tree of the constructed graph $\mathcal{G}$: $\mathcal{T} = (\mathcal{V}, \mathcal{E}_{\mathcal{T}})$; 4) Calculate the DFS visiting order of $\mathcal{T}$ and take the inverse visiting order as the predicted kinematic chain. We draw the prediction procedure in Figure~\ref{fig_kinematic_chain_prediction}. \vpara{Invariant/Equivariant Features for Prediction.} Given the per-part equivariant feature $F_i$ output by the feature backbone, the equivariant feature used for predicting equivariant properties is further calculated by an SO(3)-PointNet, \emph{i.e.}, $\hat{F}_i = \text{SO(3)-PointNet}(X_i, F_i)$.
The invariant feature used for predicting invariant properties is then computed through a max-pooling operation: $F_i^{inv} = \text{Max-Pooling}(\hat{F}_i)$. \vpara{Joint Axis Orientation.} We assume that all joints' axis orientations in one shape are consistent. Thus, in practice, we set the orientation of all joints to the same predicted orientation, \emph{i.e.}, $\mathbf{u}_i^{g}\leftarrow \mathbf{u}_{i_m}^g, \forall (i, j)\in \mathcal{E}_{\mathcal{T}}, \forall g\in G_g$, where $(i_m, j_m)$ is set to the part pair connected to the tree root. By $(i,j)\in \mathcal{E}_{\mathcal{T}}$ we mean a directional edge from part $i$ to part $j$. In the node pair $(i,j) \in \mathcal{E}_{\mathcal{T}}$, node $i$ is deeper than node $j$ in the tree $\mathcal{T}$, indicating that node $i$'s subtree should rotate around the joint axis $\mathbf{u}_i^g$ passing through the joint between $i$ and $j$. \vpara{Iterative Pose Estimation.} Our pose-aware equivariant point convolution module requires per-point poses as input. Since our self-supervised setting does not assume input poses, we adopt an iterative pose estimation strategy. Through this design, we gradually improve the quality of the part-level equivariant features by feeding the poses estimated in the previous iteration back to the pose-aware point convolution module in the current iteration. This is because more accurate input per-point poses lead to better ``part-level'' SE(3) equivariant features, given the nature of our pose-aware point convolution. In practice, we set the per-point poses to identity values in the first iteration due to the lack of estimated poses, \emph{i.e.}, $P_0 = (\mathbf{R}_0, \mathbf{t}_0)$. \begin{figure*}[ht] \centering \includegraphics[width=0.7\textwidth]{./imgs/three-spaces.pdf} \caption{ \footnotesize The relationship between our three crucial spaces: the canonical part spaces, the canonical object space, and the camera space. } \label{fig_three_spaces} \end{figure*} \vpara{Canonical Part Spaces, Canonical Object Space, Camera Space.} For each part, the canonical part space normalizes its pose. For each object, the canonical object space normalizes its orientation and articulation state. The camera space denotes the observation space. Each part shape in the canonical part space is its canonical part shape. Each object shape with its articulation state canonicalized is called its canonical object shape. Canonical \emph{spaces} are category-level concepts, while canonical shapes are instance-level concepts. Figure~\ref{fig_three_spaces} illustrates the relationship between these three spaces, which are mentioned frequently in our method. \vpara{Partial Point Clouds.} The loss function used in~\cite{li2021leveraging} for partial point clouds is the unidirectional Chamfer Distance. Using this function can make the network aware of complete shapes by observing partial point clouds from different viewpoints. This can be achieved for asymmetric objects if the viewpoints cover the full SO(3) space. However, we restrict the range of viewpoints when rendering partial point clouds of articulated objects to keep every part visible. Such a restriction results in relatively homogeneous occlusion patterns. Therefore, we choose to use the unidirectional Chamfer Distance only for certain categories, such as Safe, when testing on partial point clouds.
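\par For clarity, a minimal sketch of the two Chamfer variants discussed above is given below. The exact reduction (mean versus sum over points) used in our implementation may differ; the sketch is illustrative only.
\begin{verbatim}
import numpy as np

def chamfer_l1(X, Y, bidirectional=True):
    """Chamfer distance between point clouds X (Nx, 3) and Y (Ny, 3).

    The unidirectional variant d(X -> Y) only penalizes input points that
    lie far from the reconstruction, so the decoder is free to complete
    regions that are unobserved in a partial scan.
    """
    diff = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # (Nx, Ny)
    d_xy = diff.min(axis=1).mean()  # input -> reconstruction
    if not bidirectional:
        return d_xy
    d_yx = diff.min(axis=0).mean()  # reconstruction -> input
    return d_xy + d_yx
\end{verbatim}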
\vpara{Equivariant/Invariant Properties of the Designed Modules.} The designed method aims to use part-level SE(3) equivariance to reduce the difficulty of factorizing part pose and part shape. Exact part-level equivariant features would make those modules meet our expectations. However, due to the approximate SE(3) equivariance of the employed feature backbone and the possibly inaccurate estimated part poses, we cannot expect exact invariance/equivariance from them. For instance, if we do not consider part kinematic constraints, the part shape reconstruction module and the part-assembling parameters prediction module should be invariant to K rigid transformations in the quotient group $(\text{SE(3)}/G_A)(\text{SE(3)})^{K-1}$ if using global equivariant features, while they should be invariant to the rigid transformation in the quotient group $\text{SE(3)}/G_A$ if using part-level equivariant features given correct part pose estimation. Similarly, the pivot point prediction module should be invariant to two rigid transformations in the quotient group $(\text{SE(3)} / G_A)^2$ if using part-level equivariant features. The part-level equivariance design reduces the difficulty of the factorization performed by the network, which may explain its effectiveness. \section{Experiments} \subsection{Data Preparation} \label{sec_appen_data_prepare} \vpara{Data Collection.} We choose seven categories from three different datasets, namely Oven, Washing Machine, Eyeglasses, and Laptop (S) with revolute parts from Shape2Motion~\cite{wang2019shape2motion}, Drawer with prismatic parts from SAPIEN~\cite{xiang2020sapien}, and Safe and Laptop (R) with revolute parts from HOI4D~\cite{liu2022hoi4d}. The first five categories are selected according to previous works on articulated object pose estimation or part decomposition~\cite{li2020category,kawana2022uppd}. To further test the effectiveness of our method on objects collected from the real world, we choose two more categories (Safe and Laptop (R)) from a real dataset~\cite{liu2022hoi4d}. \vpara{Data Splitting.} We split our data according to the per-category data split approach introduced in~\cite{li2020category}. Note that not all shapes in a category are used for training/testing. Incomplete shapes and instances whose canonical articulation states are inconsistent with other shapes are excluded from the experiments. Per-category train/test splits are listed in Table~\ref{tb_exp_dataset_meta_info}. \vpara{Data Preprocessing.} For each shape, we generate 100 posed shapes in different articulation states. Then, for complete point clouds, we randomly generate 10 rotated samples from each articulated posed object. When generating articulated posed objects, we restrict the valid articulation state ranges. For Oven, Safe, and Washing Machine, the valid degree range of their lids is [45$^\circ$, 135$^\circ$). For Eyeglasses, the range of the degree between the two legs and the frame is set to [0$^\circ$, 81$^\circ$). For Laptop (S) and Laptop (R), the range of the degree between the two parts is set to [9$^\circ$, 99$^\circ$). For partial point clouds, we render depth images of complete object instances using the same rendering method described in~\cite{li2021leveraging}. The difference is that we manually set a viewpoint range for each category to ensure that all parts are visible in the rendered depth images. For each articulated posed shape, we render 10 depth images. The dataset will be made public.
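\par For reference, the articulation-state sampling described above can be sketched as follows. The degree ranges are those listed in the text; the routine itself (including the random global rotation) is a simplified, illustrative assumption rather than the released data-generation script, and the prismatic Drawer category is not covered.
\begin{verbatim}
import numpy as np

# Valid joint ranges in degrees, taken from the text above.
ARTIC_RANGES_DEG = {
    "Oven": (45.0, 135.0), "Safe": (45.0, 135.0),
    "Washing Machine": (45.0, 135.0), "Eyeglasses": (0.0, 81.0),
    "Laptop (S)": (9.0, 99.0), "Laptop (R)": (9.0, 99.0),
}

def sample_articulated_pose(category, rng=None):
    """Sample one joint angle (radians) and one random global rotation."""
    rng = rng or np.random.default_rng()
    lo, hi = ARTIC_RANGES_DEG[category]
    theta = np.deg2rad(rng.uniform(lo, hi))
    # Random rotation from the QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    R = q * np.sign(np.diag(r))  # scale columns by the sign of diag(r)
    if np.linalg.det(R) < 0:
        R[:, 0] *= -1.0          # make it a proper rotation (det = +1)
    return theta, R
\end{verbatim}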
\vpara{Data sample visualization.} In Figure~\ref{fig_data_train_test_split}, we provide samples of training and test shapes for some categories for an intuitive understanding of intra-category shape variations. Such variations mainly come from part geometry (\emph{e.g.}, Eyeglasses frames, Oven bodies, Laptop) and part size (\emph{e.g.}, Washing Machine, Laptop). \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/train_tesst_211_compressed.pdf} \vspace{-5pt} \caption{ \footnotesize Samples of training and test shapes. } \label{fig_data_train_test_split} \vspace{-8pt} \end{figure*} \begin{table}[t] \centering \caption{\footnotesize Per-category data splitting. } \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \#Total & 32 & 41 & 42 & 82 & 30 & 50 & 30 \\ \#Train & 28 & 36 & 37 & 73 & 26 & 44 & 24 \\ \#Test & 4 & 5 & 5 & 9 & 4 & 6 & 6 \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-10pt} \label{tb_exp_dataset_meta_info} \end{table} \subsection{Implementation Details} \label{sec_appen_imple_details} \vpara{Architecture.} For point convolution, we use the kernel-rotated version of kernel point convolution (KPConv~\cite{thomas2019kpconv}) proposed in EPN~\cite{chen2021equivariant}. The size of the (single) convolution kernel is determined by the number of anchor points and the feature dimension. In our implementation, we use 24 anchor points. The feature dimensions at the different convolution blocks are set to 64, 128, and 512, respectively. \vpara{Training Protocol.} In the training stage, the learning rate is set to $0.0001$ and decayed by 0.7 every 1000 iterations. The model is trained for 10000 steps with batch size 8 on all datasets. We use the self-supervised reconstruction loss to train the network, with the weight for joint regularization $\lambda$ set to $1.0$ empirically. We use the Adam optimizer with $\beta = (0.9, 0.999), \epsilon = 10^{-8}$. \vpara{Software and Hardware Configurations.} All models are implemented in PyTorch 1.9.1, with torch\_cluster 1.5.1, torch\_scatter 2.0.7, pyrender 0.1.45, trimesh 3.2.0, and Python 3.8.8. All experiments are conducted on an Ubuntu 20.04.3 server with 8 NVIDIA GPUs, 504\,GB RAM, and CUDA 11.4. \subsection{Implementation Details for Baselines} \label{sec_appen_baselines} \vpara{NPCS~\cite{li2020category}.} The original version of NPCS~\cite{li2020category} trains a network for category-level articulated object pose estimation in a supervised manner. It utilizes PointNet++~\cite{qi2017pointnetpp} to regress three kinds of information and a set of pre-defined normalized part coordinate spaces. Then, in the evaluation process, the RANSAC algorithm is leveraged to calculate the rigid transformation of each part from its predicted normalized part coordinates to the shape in the camera space. To apply NPCS in our experiments, we make the following two modifications: 1) We change the backbone used in NPCS from PointNet++ to EPN. We further add supervision on its rotation mode selection process for the major rotation matrix prediction, as done in~\cite{chen2021equivariant}. 2) We add a joint axis orientation prediction branch and a pivot point regression branch for joint parameter estimation.
These two prediction branches act on the global shape feature corresponding to the selected rotation mode and predict a residual rotation and a translation for estimation. By applying the major rotation matrix of the selected mode, we can then obtain the joint axis orientations and pivot points in the camera space. \vpara{Oracle ICP.} To apply ICP to articulated object pose estimation, we introduce Oracle ICP. Oracle ICP iteratively registers each ground-truth part from the template shape to the observed shape. We randomly select 5 segmented shapes from the training set to register on each test shape. For complete point clouds, we first center the part shapes of both the template shape and the observed shape, and then iteratively register the template part shape to the observed part shape under 60 initial rotation hypotheses. For partial point clouds, we iteratively register the template part shape to the observed part shape under 60 initial rotation hypotheses together with 10 initial translation hypotheses. The one that achieves the smallest inlier RMSE value is selected as the registration result. After each registration, we assign each point the segmentation label of its nearest part. Therefore, we also treat Oracle ICP as one of our segmentation baselines. \vpara{BSP-Net~\cite{chen2020bsp}.} BSP-Net reconstructs an input shape using implicit fields as the representation by learning to partition the shape through three levels of representations, from planes to convexes, and further to the concave shape. The indices of the reconstructed convexes are consistent across different shapes in the same category. Thus, we can map each convex index to a ground-truth segmentation label and use this mapping to segment each test shape by assigning each convex to its corresponding part segment. The intra-category convex partition consistency further provides cross-instance aligned part segmentations. However, due to the global pose variations of our data, such convex index consistency may not be observed when directly applying BSP-Net to our data. Thus, we improve the evaluation process of BSP-Net to mitigate this problem. Specifically, for each test shape, we find the shape from the training set that is most similar to it. Then, we directly use that shape's convex-segmentation mapping to obtain segments for the test shape. The segments are then used to calculate the segmentation IoU for the test shape. \vpara{NSD~\cite{kawana2020neural}.} Neural Star Domain~\cite{kawana2020neural} decomposes shapes into parameterized primitives. To test its part segmentation performance on our data with arbitrary global pose variations, we adopt an evaluation strategy similar to that used for BSP-Net. \subsection{Experiments on Partial Point Clouds} \label{sec_appen_exp_partial} In this section, we present the experimental results of our method and the baseline methods on rendered partial point clouds. \vpara{Articulated Object Pose Estimation.} In Table~\ref{tb_exp_pose_cmp_partial}, we present the part pose estimation and joint parameter prediction results of our method and the baseline methods. As on complete point clouds, our model can sometimes outperform the \textbf{supervised} NPCS baseline (using EPN as the backbone), \emph{e.g.}, with better part pose estimation and joint parameter prediction results on the Laptop (S) dataset.
In Figure~\ref{fig_partial_vis}, we draw some samples for a qualitative evaluation. Moreover, we also provide visualizations of all categories for complete point clouds in Figure~\ref{fig_complete_vis}. In the figure, the point cloud distance function used for Safe is the unidirectional Chamfer Distance, while that used for the others is still the bidirectional Chamfer Distance. Using the unidirectional Chamfer Distance can relieve the problems joint regularization faces on partial point clouds to some extent, because point cloud completion is naturally encouraged in this way. For instance, reconstruction results for the Safe category are drawn in Figure~\ref{fig_partial_vis}. However, points that are not mapped to any point in the input shape will also affect the point-based joint regularization. For simple shapes, using the bidirectional Chamfer Distance can also sometimes make the decoder produce complete part shapes, e.g., the reconstructions for Laptop (R). As for the reconstructed reference shape in the canonical object space, better joint predictions lead to better global shape alignment. For instance, we can observe that the angle between the two parts of Laptop (R) and Laptop (S) is relatively consistent across shapes with different articulation states. Joint regularization enforces the connectivity between two adjacent parts both before and after the articulated transformation in the canonical object space. It thus helps make the predicted joint behave like a real joint, based on which the ``lazy'' network tends to decode part shapes with consistent orientations. However, there is a degenerate solution where the decoded rotation angle is close to zero. In that case, the joint regularization term can be satisfied by decoding ``twisted'' part shapes: since the decoded angle is near zero, the connectivity between the two parts is not broken when rotating about the decoded joint. Sometimes, decoding near-zero angles is a local minimum in which the optimization process gets stuck. In that case, the regularization loss term is large, but the decoded joint parameters are not optimized in the correct direction by the network. The reconstructed shapes in the canonical object space then do not have consistent angles, e.g., Washing Machine and Safe drawn in Figure~\ref{fig_partial_vis}. \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/complete-vis-3_compressed.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization of experimental results on complete point clouds. Each group of three shapes, from left to right, shows the input point cloud, the reconstruction, and the predicted canonical object shape. \textbf{We put drawers in an aligned space just for better visualization.} Their global pose may vary when fed into the network. Please zoom in for details. } \label{fig_complete_vis} \vspace{-8pt} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/partial-vis-8_compressed.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization of experimental results on \textbf{\textcolor{red}{partial point clouds}}. Each group of three shapes, from left to right, shows the input point cloud, the reconstruction, and the predicted canonical object shape. Please zoom in for details. } \label{fig_partial_vis} \vspace{-8pt} \end{figure*} \vpara{Part Segmentation.} In Table~\ref{tb_exp_seg_cmp_partial}, we evaluate the segmentation performance of our method.
BSP-Net is not compared here since it requires mesh data for pre-processing, which is not compatible with the rendered partial point clouds. Oracle ICP uses ground-truth segmentation labels to register each part from the example shape to the observed shape. Despite this, it still cannot achieve satisfactory estimation results due to shape occlusion and part-symmetry-related pose ambiguity issues. \vpara{Shape Reconstruction.} In Table~\ref{tb_exp_completion_cmp_partial}, we evaluate the shape reconstruction performance of our method. The part-by-part reconstruction strategy used by our method outperforms the EPN-based whole-shape reconstruction strategy in most categories except Washing Machine. One possible reason is the poor segmentation performance of our model on shapes in the Washing Machine category. \begin{table}[t] \centering \caption{\footnotesize Comparison of the part pose estimation performance of different methods on all test categories (\textbf{\textcolor{red}{partial point clouds}}). ``R'' denotes rotation errors with the value format ``Mean $R_{err}$/Median $R_{err}$''. ``T'' denotes translation errors with the value format ``Mean $T_{err}$/Median $T_{err}$''. ``J'' denotes joint parameter estimation results with the value format ``Mean $\theta_{err}$/Mean $d_{err}$''. \textbf{ICP cannot predict joint parameters.} Therefore, only the joint prediction results of the supervised NPCS and our method are presented. For all metrics, the smaller, the better. \textbf{Bold} numbers denote the best values, while \emph{\textcolor{blue}{blue}} values denote the second best ones. } \resizebox{\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer & Avg.
\\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{R} & \makecell[c]{NPCS-EPN \\ (supervised)} & \makecell[c]{\textbf{2.51}/\textbf{2.27}, \\ \textbf{{2.93}}/\textbf{{2.64}}} & \makecell[c]{\emph{\textcolor{blue}{4.71}}/\emph{\textcolor{blue}{3.84}}, \\ \textbf{8.56}/\textbf{7.46}} & \makecell[c]{\emph{\textcolor{blue}{7.26}}/\emph{\textcolor{blue}{6.08}}, \\ \emph{\textcolor{blue}{23.39}}/\emph{\textcolor{blue}{17.33}}, \\ \emph{\textcolor{blue}{20.86}}/\emph{\textcolor{blue}{18.76}}} & \makecell[c]{\emph{\textcolor{blue}{21.40}}/\emph{\textcolor{blue}{23.56}}, \\ \emph{\textcolor{blue}{29.90}}/\emph{\textcolor{blue}{32.71}}} & \makecell[c]{\textbf{6.64}/\textbf{5.76}, \\ \textbf{5.43}/\textbf{5.19}} & \makecell[c]{\emph{\textcolor{blue}{9.39}}/\emph{\textcolor{blue}{8.75}}, \\ \textbf{{6.75}}/\emph{\textcolor{blue}{6.14}}} & \makecell[c]{\textbf{21.74}/\textbf{10.80}, \\ \textbf{22.92}/\textbf{10.18}, \\ \textbf{25.10}/\textbf{14.16}, \\ \textbf{7.34}/\textbf{6.83}} & \textbf{13.34}/\textbf{10.73} \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Oracle ICP & \makecell[c]{{21.53}/10.80, \\ {20.68}/20.50} & \makecell[c]{32.42/17.82, \\ 19.39/16.99} & \makecell[c]{73.24/78.73, \\ 68.74/74.09, \\ 69.23/74.53} & \makecell[c]{67.48/73.01, \\ 63.22/68.23} & \makecell[c]{38.72/34.44,\\ 52.28/42.16} & \makecell[c]{30.78/28.674, \\ 42.06/39.25} & \makecell[c]{82.93/82.64,\\61.31/59.51,\\54.39/52.82,\\26.88/29.66} & 48.55/47.29 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{\emph{\textcolor{blue}{11.77}}/\emph{\textcolor{blue}{7.87}}, \\ \emph{\textcolor{blue}{10.83}}/\emph{\textcolor{blue}{9.15}}} & \makecell[c]{\textbf{{1.61}}/\textbf{{1.52}}, \\ \emph{\textcolor{blue}{12.81}}/\emph{\textcolor{blue}{12.51}}} & \makecell[c]{\textbf{{4.69}}/\textbf{{3.77}}, \\ \textbf{{9.56}}/\textbf{{5.36}}, \\ \textbf{{7.53}}/\textbf{{6.12}}} & \makecell[c]{\textbf{{10.18}}/\textbf{5.30}, \\ \textbf{11.10}/\textbf{5.22}} & \makecell[c]{\emph{\textcolor{blue}{15.38}}/\emph{\textcolor{blue}{14.46}}, \\ \emph{\textcolor{blue}{21.91}}/\emph{\textcolor{blue}{19.04}}} & \makecell[c]{\textbf{8.50}/\textbf{6.85}, \\ \emph{\textcolor{blue}{6.92}}/\textbf{5.66}} & \makecell[c]{\emph{\textcolor{blue}{2.60}}/\emph{\textcolor{blue}{1.79}}, \\ \emph{\textcolor{blue}{2.60}}/\emph{\textcolor{blue}{1.79}}, \\ \emph{\textcolor{blue}{2.06}}/\emph{\textcolor{blue}{1.79}}, \\ \emph{\textcolor{blue}{2.06}}/\emph{\textcolor{blue}{1.79}}} & \emph{\textcolor{blue}{8.36}}/\emph{\textcolor{blue}{6.47}} \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{T} & \makecell[c]{NPCS-EPN \\ (supervised)} & \makecell[c]{\textbf{0.028}/\textbf{0.030}, \\ \textbf{0.028}/\textbf{0.023}} & \makecell[c]{\textbf{0.034}/\textbf{0.030}, \\ \textbf{0.033}/\textbf{{0.028}}} & \makecell[c]{\textbf{0.085}/\textbf{0.075}, \\ \textbf{0.056}/\textbf{0.052}, \\ \textbf{0.057}/{\textbf{0.049}}} & \makecell[c]{\emph{\textcolor{blue}{0.263}}/\emph{\textcolor{blue}{0.253}}, \\ {0.286}/{0.236}} & \makecell[c]{\textbf{0.022}/\textbf{0.021}, \\ \textbf{0.034}/\textbf{0.034}} & \makecell[c]{\textbf{0.048}/\textbf{0.043}, \\ \textbf{0.047}/\textbf{0.044}} & \makecell[c]{{\textbf{0.441}}/\emph{\textcolor{blue}{0.365}}, \\ \textbf{0.367}/\emph{\textcolor{blue}{0.343}}, \\ \textbf{0.549}/\textbf{0.299}, \\ \textbf{0.081}/\textbf{0.065}} & \textbf{0.145}/\emph{\textcolor{blue}{0.117}} \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Oracle ICP & \makecell[c]{{0.324}/{{0.321}}, \\ \emph{\textcolor{blue}{0.169}}/\emph{\textcolor{blue}{0.171}}} 
& \makecell[c]{0.322/{{0.311}}, \\ \emph{\textcolor{blue}{0.136}}/\emph{\textcolor{blue}{0.144}}} & \makecell[c]{\emph{\textcolor{blue}{0.092}}/\emph{\textcolor{blue}{0.097}}, \\ 0.188/0.197, \\ 0.185/0.193} & \makecell[c]{0.265/0.278, \\ \emph{\textcolor{blue}{0.267}}/\emph{\textcolor{blue}{0.277}}} & \makecell[c]{0.281/{{0.280}}, \\ 0.246/{{0.248}}} & \makecell[c]{0.280/0.289,\\ 0.305/0.306} & \makecell[c]{\emph{\textcolor{blue}{0.193}}/\emph{\textcolor{blue}{0.197}},\\ \emph{\textcolor{blue}{0.161}}/\emph{\textcolor{blue}{0.170}}, \\ \emph{\textcolor{blue}{0.159}}/\textbf{0.164},\\\emph{\textcolor{blue}{0.129}}/{0.132}} & 0.218/0.222 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{\emph{\textcolor{blue}{0.071}}/\emph{\textcolor{blue}{0.065}}, \\ {{0.204}}/\emph{\textcolor{blue}{0.120}}} & \makecell[c]{\emph{\textcolor{blue}{0.179}}/\emph{\textcolor{blue}{0.164}}, \\ 0.253 /0.254} & \makecell[c]{{{ 0.219}}/{{0.226}}, \\ \emph{\textcolor{blue}{0.169}}/\emph{\textcolor{blue}{0.166}}, \\ \emph{\textcolor{blue}{0.177}}/\emph{\textcolor{blue}{0.171}}} & \makecell[c]{{\textbf{0.044}}/\textbf{{0.034}}, \\ \textbf{{0.031}}/\textbf{0.025}} & \makecell[c]{\emph{\textcolor{blue}{0.030}}/\emph{\textcolor{blue}{0.030}}, \\ \emph{\textcolor{blue}{0.100}}/\emph{\textcolor{blue}{0.104}}} & \makecell[c]{\emph{\textcolor{blue}{0.088}}/\emph{\textcolor{blue}{0.082}}, \\ \emph{\textcolor{blue}{0.070}}/\emph{\textcolor{blue}{0.067}}} & \makecell[c]{{0.046}/{0.046}, \\ {0.047}/{0.050}, \\ {0.122}/{0.131}, \\ 0.172/0.142} & \emph{\textcolor{blue}{0.119}}/\textbf{0.110} \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{J} & \makecell[c]{NPCS-EPN \\ (supervised)} & {28.62}/\textbf{0.092} & \textbf{8.05}/\textbf{0.194} & \makecell[c]{\textbf{20.11}/0.221,\\ \textbf{20.11}/\textbf{0.239}} & {10.91}/0.155 & \textbf{11.23}/\textbf{0.084 } & \textbf{12.25}/\textbf{0.134} & {11.21}/- & \textbf{15.31}/\textbf{0.160} \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \textbf{5.24}/{0.105 } & 22.30/0.212 &\makecell[c]{26.96/\textbf{0.087},\\ 26.96/0.260} & \textbf{10.83}/\textbf{0.142} & 55.16/0.170 & 18.02/0.170 & \textbf{7.43}/- & 21.61/0.164 \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \end{tabular} } \label{tb_exp_pose_cmp_partial} \end{table} \begin{table}[t] \centering \caption{\footnotesize Comparison between the part segmentation performance of different methods (\textbf{\textcolor{red}{partial point clouds}}). The metric used for this task is Segmentation MIoU, calculated on 4096 points for each shape. Values presented in the table are scaled by 100. Larger values indicate better performance. } \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Oven & \makecell[c]{Washing\\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} Oracle ICP & 75.83 & \textbf{73.07} & \textbf{68.92} & 54.01 & \textbf{66.90} & 59.96 & \textbf{58.38} \\ Ours & \textbf{87.07} & 51.73 & 56.80 & \textbf{84.94} & 44.64 & \textbf{86.04} & {45.45} \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-12pt} \label{tb_exp_seg_cmp_partial} \end{table} \begin{table}[t] \centering \caption{\footnotesize Comparison between the shape reconstruction performance of different methods (\textbf{\textcolor{red}{partial point clouds}}). The metric used in this task is unidirectional Chamfer L1 from the original input shape to the reconstructed shape. The smaller, the better. 
} \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} EPN~\cite{li2021leveraging} & {0.040} & \textbf{0.043} & 0.044 & 0.032 & 0.020 & 0.026 & 0.079 \\ Ours & \textbf{0.035} & 0.062 & \textbf{0.041} & \textbf{0.025} & \textbf{0.019} & \textbf{0.024} & \textbf{0.061} \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-10pt} \label{tb_exp_completion_cmp_partial} \end{table} \subsection{Additional Comparisons and Applications} \label{sec_appen_more_exp_additional} \vpara{Comparison with Other Baselines.} We compare our method with two other baselines, not discussed in the main body, in Table~\ref{tb_exp_pose_cmp_other_baseline}. First, we use KPConv~\cite{thomas2019kpconv} as NPCS's feature backbone (denoted ``NPCS-KPConv'') and test its performance on our data with arbitrary global pose variations. We can see that this version of NPCS performs poorly compared to our unsupervised method. NPCS estimates part poses by estimating the transformation from the estimated NPCS coordinates to the observed shape. It therefore requires invariant NPCS predictions to estimate category-level part poses. However, such prediction consistency may not be easily achieved for input shapes with various global pose variations. The second baseline is Oracle EPN, where we assume ground-truth part segmentation labels and use EPN to estimate the pose of each individual part. Despite this oracle setting, EPN cannot infer joint parameters since it estimates per-part poses individually. Besides, the part symmetry problem also hinders this strategy from achieving good performance to some extent, as discussed in Section~\ref{sec_appen_symmetric_parts}. \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/all-mani-1.pdf} \vspace{-5pt} \caption{ \footnotesize Reconstructions of shapes in different articulation states and manipulations changing their states. Shapes (in blue and orange) drawn on the two sides are shapes manipulated from their nearest reconstructions. The others are reconstructions (in purple and green). Please zoom in for details. } \label{fig_mani_vis} \vspace{-8pt} \end{figure*} \vpara{Shape Reconstruction and Manipulation.} The predicted joints enable us to manipulate the reconstruction by changing the predicted rotation angles, arriving at shapes in new articulation states different from the input shapes. In Figure~\ref{fig_mani_vis}, we draw some examples for Laptop (S) and Oven. \begin{table}[t] \centering \caption{\footnotesize Comparison of the part pose estimation performance of different methods. The backbone used for NPCS is KPConv. ``R'' denotes rotation errors with the value format ``Mean $R_{err}$/Median $R_{err}$''. ``T'' denotes translation errors with the value format ``Mean $T_{err}$/Median $T_{err}$''. ``J'' denotes joint parameter estimation results with the value format ``Mean $\theta_{err}$/Mean $d_{err}$''. For all metrics, the smaller, the better. \textbf{{Bold}} numbers denote the best values. } \resizebox{\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer & Avg.
\\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{R} & \makecell[c]{NPCS-KPConv \\ (supervised)} & \makecell[c]{{44.16}/{43.09}, \\ {{60.58}}/{{63.35}}} & \makecell[c]{56.20/56.22,\\50.16/51.38} & \makecell[c]{51.99/53.97,\\ 42.48/38.08, \\ 42.29/38.11} & \makecell[c]{55.67/66.44,\\55.63/61.33} & \makecell[c]{11.68/11.10,\\ 43.48/42.22} & \makecell[c]{49.98/68.43,\\ 73.40/83.55} & \makecell[c]{62.73/69.42,\\ 56.16/60.34, \\ 57.23/63.90, \\ 48.76/46.82} & 50.74/53.99 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Oracle EPN & \makecell[c]{\textbf{{7.07}}/\textbf{{6.88}}, \\ {16.33}/{9.17}} & \makecell[c]{{{7.97}}/{{7.60}}, \\ {{33.56}}/{{20.49}}} & \makecell[c]{{{54.01}}/{{13.09}}, \\ {{86.12}}/{{65.07}}, \\ {{116.56}}/{{119.23}}} & \makecell[c]{{{18.33}}/{9.73}, \\ {18.98}/{12.75}} & \makecell[c]{{{45.85}}/{{48.59}}, \\ {{38.03}}/{{27.67}}} & \makecell[c]{{20.46}/{14.03}, \\ {21.08}/{19.30}} & \makecell[c]{{{47.88}}/{{47.03}}, \\ {{30.84}}/{{25.23}}, \\ {{35.79}}/{{37.17}}, \\ {{43.83}}/{{39.46}}} & 37.81/30.73 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{{{7.74}}/{{7.35}}, \\ \textbf{{4.07}}/\textbf{{3.97}}} & \makecell[c]{\textbf{{7.49}}/\textbf{{7.37}}, \\ \textbf{{19.27}}/\textbf{{19.19}}} & \makecell[c]{\textbf{{8.16}}/\textbf{{8.21}}, \\ \textbf{{12.29}}/\textbf{{10.89}}, \\ \textbf{{12.53}}/\textbf{{9.88}}} & \makecell[c]{\textbf{{7.34}}/\textbf{5.16}, \\ \textbf{10.41}/\textbf{9.34}} & \makecell[c]{\textbf{{9.03}}/\textbf{{9.09}}, \\ \textbf{{13.83}}/\textbf{{13.59}}} & \makecell[c]{\textbf{5.71}/\textbf{3.61}, \\ \textbf{3.64}/\textbf{2.84}} & \makecell[c]{\textbf{{3.18}}/\textbf{{2.73}}, \\ \textbf{{3.18}}/\textbf{{2.73}}, \\ \textbf{{3.18}}/\textbf{{2.71}}, \\ \textbf{{3.18}}/\textbf{{2.71}}} & \textbf{7.90}/\textbf{7.14} \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{T} & \makecell[c]{NPCS-KPConv \\ (supervised)} & \makecell[c]{{0.133}/{0.121}, \\ {0.104}/0.091} & \makecell[c]{0.146/0.142,\\ 0.066/0.065} & \makecell[c]{{0.401}/{0.326}, \\ {0.418}/{0.257}, \\ {0.396}/{{0.263}}} & \makecell[c]{0.233/0.203,\\ 0.217/0.169} & \makecell[c]{\textbf{0.055}/\textbf{0.052}, \\ {0.098}/{0.091}} & \makecell[c]{0.179/0.226,\\ 0.161/0.174} & \makecell[c]{{{0.791}}/{{0.742}}, \\ {0.694}/{{0.640}}, \\ {1.005}/{0.942}, \\ {0.271}/{0.240}} & 0.316/0.279 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Oracle EPN & \makecell[c]{\textbf{{0.031}}/\textbf{{0.030}}, \\ \textbf{{0.058}}/{{0.052}}} & \makecell[c]{\textbf{{0.046}}/\textbf{{0.044}}, \\ {0.059}/{0.053}} & \makecell[c]{{{0.197}}/{{0.129}}, \\ {{0.128}}/{{0.118}}, \\ {{0.334}}/{{0.292}}} & \makecell[c]{{{0.132}}/{{0.128}}, \\ {{0.117}}/{0.090}} & \makecell[c]{{{0.157}}/0.157, \\ {{0.158}}/{0.151}} & \makecell[c]{\textbf{{0.092}}/\textbf{{0.086}}, \\ \textbf{{0.094}}/\textbf{{0.082}}} & \makecell[c]{{0.204}/{0.187}, \\ {0.177}/{0.166}, \\ {0.161}/{0.146}, \\ { 0.290}/{0.282}} & 0.143/0.129 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{{{0.054}}/{{0.052}}, \\ {{0.067}}/\textbf{{0.046}}} & \makecell[c]{\textbf{{0.082}}/\textbf{{0.083}}, \\ \textbf{{0.042}}/\textbf{{0.034}}} & \makecell[c]{{\textbf{{0.054}}}/\textbf{{0.039}}, \\ \textbf{{0.086}}/\textbf{{0.088}}, \\ \textbf{{0.070}}/\textbf{{0.055}}} & \makecell[c]{{\textbf{0.040}}/\textbf{{0.037}}, \\ \textbf{{0.046}}/\textbf{0.042}} & \makecell[c]{{{0.066}}/0.069, \\ \textbf{{0.037}}/\textbf{0.035}} & \makecell[c]{\textbf{{0.021}}/\textbf{{0.019}}, \\ \textbf{{0.027}}/\textbf{{0.026}}} & 
\makecell[c]{\textbf{0.096}/\textbf{0.096}, \\ \textbf{0.097}/\textbf{0.092}, \\ \textbf{0.108}/\textbf{0.105}, \\ \textbf{0.109}/\textbf{0.100}} & \textbf{0.065}/\textbf{0.060} \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{J} & \makecell[c]{NPCS-KPConv \\ (supervised)} & {55.62}/{0.194} & 55.01/0.149 & \makecell[c]{{60.58}/0.329,\\ {60.59}/{0.379}} & 41.40/0.259 & 54.07/0.055 & 57.04/0.070 & 52.48/- & 54.60/0.205 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \textbf{20.30}/\textbf{0.089} & \textbf{28.40}/\textbf{0.118} & \makecell[c]{\textbf{17.75}/\textbf{0.045},\\ \textbf{17.75}/\textbf{0.129}} & \textbf{30.31}/\textbf{0.122} & \textbf{4.36}/\textbf{0.031} & \textbf{17.17}/\textbf{0.169} & \textbf{38.86}/- & \textbf{21.86}/\textbf{0.100} \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \end{tabular} } \label{tb_exp_pose_cmp_other_baseline} \end{table} \subsection{Robustness to Input Data Noise} \label{sec_appen_robustness} \begin{table}[t] \centering \caption{\footnotesize Performance comparison of the proposed method on clean data and data corrupted by random normal noise. } \resizebox{1.0\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} Category & Method & Seg. IoU & Mean $R_{err}(^\circ)$ & Median $R_{err}(^\circ)$ & Mean $T_{err}$ & Median $T_{err}$ & Joint Error & Chamfer L1 \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{Oven} & Without noise & \textbf{76.22} & \textbf{7.74}, \textbf{4.07} & \textbf{7.35}, \textbf{3.97} & \textbf{0.054}, {0.067} & \textbf{0.052}, \textbf{0.046} & 20.30/\textbf{0.089} & \textbf{0.025} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & With noise & {55.35} & {9.84}, {11.05} & {9.94}, {9.99} & {0.073}, \textbf{0.063} & {0.073}, {0.057} & \textbf{9.28}/0.310 & 0.049 \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{Laptop (S)} & Without noise & \textbf{82.97} & \textbf{7.34, 10.41} & \textbf{5.16, 9.34} & \textbf{0.040}, \textbf{0.046} & \textbf{0.037}, \textbf{0.042} & \textbf{30.31}/0.122 & \textbf{0.024} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & With noise & {70.04} & {16.01}, {13.27} & {11.47}, {9.52} & {0.082}, {0.067} & {0.075}, {0.065} & {32.84}/\textbf{0.029} & 0.044 \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \end{tabular} } \label{tb_exp_abl_oven_noise} \end{table} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/noise-vis-2.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization for the model performance on input data with random noise. Shapes for each three drawn from left to the right are input data corrupted by random normal noise, segmentation, and reconstruction, respectively. We align shapes here just for a better visualization, while they may be put into arbitrary poses for input. } \label{fig_noose_vis} \vspace{-8pt} \end{figure*} Besides testing the performance of the proposed method on partial point clouds with occlusion patterns caused by viewpoint changes, we also test its effectiveness on noisy data. Specifically, we add noise for each point in the shape by sampling offsets for its x/y/z coordinates from normal distributions, \emph{e.g.} $\Delta x \sim \mathcal{N}(0, \sigma^2)$, where we set $\sigma = 0.02$ here. Results on Oven and Laptop (S) are presented in Table~\ref{tb_exp_abl_oven_noise}. From the table, we can see the degenerated segmentation IoU on Oven's noisy data, while still relatively good part pose estimation performance. 
Another observation is that the joint axis orientation prediction even improves, while the joint offset prediction becomes worse, perhaps due to the poor segmentation. Besides, the shape reconstruction quality also degrades considerably, probably due to the randomly shifted point coordinates. We can observe a similar phenomenon on Laptop (S). In Figure~\ref{fig_noose_vis}, we draw some examples for a qualitative understanding of the model's performance on noisy data. \subsection{Visualization of Part-Level Equivariant Features} \label{sec_appen_part_level} \begin{figure*}[htbp] \centering \includegraphics[width=0.5\textwidth]{./imgs/vis-part-level-feats-laptop.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization for an intuitive understanding of the difference between the part-level equivariant feature and the global equivariant feature of a specific part. Visualized features are obtained by using PCA to reduce the feature dimension to 3; the resulting values are further normalized to the range [0, 1]. We only draw point features of the non-motion part, with the moving part in gray. Features drawn on the left are global equivariant features, while those on the right are from the part-level equivariant network. } \label{fig_part_level_vis_laptop} \vspace{-8pt} \end{figure*} Aiming for an intuitive understanding of the features output by the designed part-level equivariant network, we draw features output by the global equivariant network and the part-level equivariant network for some laptop samples in Figure~\ref{fig_part_level_vis_laptop}. From the figure, we can see that the point features of the non-motion part (base) do not change much when the moving part (display) rotates by an angle. This echoes the goal of the part-level equivariance design: disentangling other parts' rigid transformations from the current part's feature learning. \subsection{Evaluation Strategy for Category-Level Articulated Object Poses} \label{sec_appen_eval} To evaluate the category-level part pose estimation performance of our model, we adopt the evaluation strategy used in~\cite{li2021leveraging}. For part-based metrics, we first feed a set of training shapes in the canonical articulation state and canonical object pose to get a set of per-part pose predictions $\{ P_i \}$. Then we calculate the residual pose $\hat{P}_i$ for each part $i$ from the canonical part space defined by humans to the canonical part space defined by the network from the pose prediction set (via RANSAC). After that, the predicted pose with respect to the human-defined canonical part space can be computed by applying the inverse residual pose to the estimated per-part pose, \emph{e.g.} $P_i \leftarrow \hat{P}_i^{-1} P_i$. When calculating the rotation and translation from part shape $X_1$ to $X_2$, we first centralize their bounding boxes ($\overline{X_1}$ and $\overline{X_2}$, respectively). Then, the transformation from $\overline{X_1}$ to $\overline{X_2}$ is taken as the transformation from $X_1$ to $X_2$. For joint parameters, we take the angle error between the predicted joint axis orientation and the ground-truth axis orientation as the metric for joint axis orientation prediction. The metric for joint position prediction is the minimum line-to-line distance, following~\cite{li2020category}. Only the joint axis orientation prediction error is computed for prismatic joints. \section{Discussion on Part Symmetry} \label{sec_appen_symmetric_parts} In this section, we discuss the part-symmetry problem that one encounters in part pose estimation.
For rigid objects, the pose of a symmetric shape is ambiguous. By saying a shape $X$ is symmetric, we mean that there is a non-trivial SE(3) transformation $S_{A_0}$ such that $X = S_{A_0}[X]$. In such cases, the performance of a pose estimation algorithm may degenerate due to ambiguous poses, which is an expected phenomenon. For articulated objects, however, we may have symmetric parts even if the whole shape is not symmetric. For such shapes, we still expect accurate part pose estimation. This indicates that estimating each part's pose individually is not reasonable due to part pose ambiguity, which is why we choose to model the relationships between parts, specifically the kinematic chain and joint parameters. Without such object-level inter-part modeling, we cannot obtain accurate part poses by estimating each part's pose individually, even when using ground-truth segmentation. The comparison between Oracle EPN and our method in Table~\ref{tb_exp_pose_cmp_other_baseline} demonstrates this point to some extent. \section{Method} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/abs-overview.pdf} \vspace{-10pt} \caption{ \footnotesize Overview of the proposed self-supervised articulated object pose estimation strategy. The method takes a complete or partial point cloud of an articulated object as input and factorizes canonical shapes, object structure, and the articulated object pose from it. The network is trained by a shape reconstruction task. Part-level SE(3) equivariant features are learned by iterating between part pose estimation and pose-aware equivariant point convolution. \textcolor{mygreen}{Green} lines (\textcolor{mygreen}{$\leftarrow$}) denote procedures for feeding the estimated part poses back to the pose-aware point convolution module. } \label{fig_overall_pipeline} \vspace{-10pt} \end{figure*} In this section, we present our method for self-supervised category-level articulated object pose estimation. We first propose to learn part-level SE(3) equivariant features through a novel pose-aware equivariant point convolution module to reduce the difficulty of the decomposition problem (sec.~\ref{sec_method_revisit}). Based on such features, we then design a disentanglement strategy that factorizes invariant canonical shapes, object structure, and articulated object poses from input shapes (sec.~\ref{sec_method_ssl_method}). We hope that the category-level canonical object space and part spaces can be induced automatically from input shapes by taking advantage of part-level SE(3) equivariance. Further, we adopt a part-by-part shape reconstruction task that combines the factorized information to self-supervise the factorization (sec.~\ref{sec_method_self_supervised_task}). Our method assumes a category-level setting where input shapes have the same kinematic chain. For notation frequently used in the following text, $N$, $C$, and $K$ denote the number of points, the feature dimension, and the number of parts of the input shape, respectively. \subsection{Part-level SE(3)-equivariant Network} \label{sec_method_revisit} We first introduce our part-level SE(3) equivariant network. The network $\phi(\cdot)$ operates on a point cloud $X = \{ \mathbf{x}_i \vert 1\le i\le N \}$ with per-point poses and outputs a part-level SE(3) equivariant feature for each point, $F = \{\phi(X)[i] \in \mathcal{S} \vert 1\le i\le N \}$, where $\mathcal{S}$ is an arbitrary feature domain.
The part-level equivariant feature $F_i$ of each point $x_i$ changes equivariantly with an arbitrary rigid transformation of the part it belongs to, but remains invariant to transformations of other parts. Formally, for any set of per-point transformations $\{A_j \in \text{SE}(3) \vert 1\le j\le N \}$ with their corresponding rigid transformations in the spatial domain $\{T_{A_j}: \mathbb{R}^{n\times 3} \rightarrow \mathbb{R}^{n\times 3} \vert 1\le j \le N \}$, there is an equivariant transformation $S_{A_i}: \mathcal{S}\rightarrow \mathcal{S}$ for each point $x_i$, which depends only on $A_i$, no matter what $A_j$ is applied to any other point $x_j$ ($j \neq i$), such that: \begin{align} S_{A_i} [F_i] \equiv \phi( \{ T_{A_j} [\mathbf{x}_j] \}_{j=1}^{N} )[i], \forall A_j \in \text{SE}(3). \label{eq_part_equiv_property} \end{align} We design the network based on the Equivariant Point Network (EPN)~\cite{chen2021equivariant}. The core of our design is a pose-aware equivariant point convolution module. Therefore, in the following text, we first briefly review EPN as background and then continue with our pose-aware equivariant point convolution. \vpara{Equivariant Point Network.} EPN takes a point cloud $X$ and a rotation group $G$ as input and extracts per-point per-rotation features $F\in \mathbb{R}^{N\times C\times \vert G\vert}$ that are rotationally and translationally equivariant to a specific rigid transformation group $G_A$ induced by $G$. The rotationally equivariant transformation for each rotation element $g\in G$ in the feature domain is a corresponding feature permutation of $F$ along the rotation dimension $G$. The translational equivariance achieved by EPN is in fact translational invariance, obtained by using relative point coordinates for convolution. \vpara{Pose-aware Equivariant Point Convolution.} For part-level SE(3) equivariant features, we design a pose-aware point convolution strategy that operates on a point cloud and per-point poses. The basic idea is to cancel neighbours' relative poses with respect to the center point when communicating features. Formally, taking the per-point poses $P = \{ {P}_i \vert 1\le i\le N \}$ and the input point cloud $X$ as input, our convolution operator for the point $x_i$'s feature at the rotation element $g$ is as follows: \begin{align} (\mathcal{F}* h_1)(x_i,g) = \sum_{{P}_j^{-1}x_j\in \mathcal{N}_{P_i^{-1}x_i}^c} \mathcal{F}(x_j, g \mathbf{R}_i\mathbf{R}_j^{-1} ) h_1(g(x_i - {P}_{i}{P}_j^{-1}x_j)), \label{eq_part_level_convolution} \end{align} where $\mathcal{F}$ is the input function, $h_1(x_i,g): \mathbb{R}^3\times \text{SO}(3)\rightarrow \mathbb{R}^D$ is a kernel, $\mathcal{N}_{P_i^{-1}x_i}^c$ is the set of point $x_i$'s neighbours in the canonical object space, $P_i$ and $P_j$ denote the input poses of points $x_i$ and $x_j$ respectively, and $\mathbf{R}_i$ and $\mathbf{R}_j$ are their rotation components. The proof is deferred to the Appendix. We highlight that we adopt an \emph{iterative pose estimation} strategy. Therefore, the input poses and rotations in Eq.~\ref{eq_part_level_convolution} are those estimated in the previous iteration and are set to identities in the first iteration. \subsection{Part Shape, Structure, and Pose Disentanglement} \label{sec_method_ssl_method} To get a fine-grained understanding of articulated objects, we design a shape, structure, and pose disentanglement strategy.
To be more specific, we predict the following three types of information from an input articulated object, leveraging part-level SE(3)-equivariance: 1) {Canonical part shapes}; 2) Object structure; 3) Articulation-aware part rigid transformations. Canonical part shapes are predicted to induce category-level canonical spaces for part pose definition. The object structure contains a kinematic chain that encodes part adjacency information and the transformation order, and {joint parameters} such as joint axis directions and pivot points. Articulation-aware part transformations include 1) {part-assembling parameters} that transform each part from the canonical part space to the canonical object space; 2) {joint states} that describe the part articulation states in the canonical object space; 3) a {base part rigid transformation} that transforms the articulated posed shape from the canonical object space to the camera space. Besides, we add a part proposal module to obtain per-part equivariant features and take advantage of part-level equivariance. We elaborate on the details of the above designs in the following text. \vpara{Part Proposal.} The part proposal module groups points in the input shape $X$ into parts for per-part equivariant feature extraction. It learns an invariant grouping function that maps $X$ together with the per-point feature $F$ to a point-part association matrix $\mathbf{W}\in \mathbb{R}^{N\times K}$. Specifically, we adopt an attention-pooling operation for per-point invariant features together with a slot attention module~\cite{locatello2020objectcentric} for the grouping purpose. Based on the proposed parts, we can group points in the input shape $X$ into $K$ point clouds $\{ X_i \in \mathbb{R}^{N_i^p\times 3}\vert 1\le i\le K \}$ and compute per-part equivariant feature matrices $\{ F_i\in \mathbb{R}^{N_i^p\times C\times \vert G\vert} \vert 1\le i\le K\}$, where $N_i^p$ is the number of points assigned to part $i$. \vpara{Shape: Canonical Part Shape Reconstruction.} With per-part equivariant features, we wish to predict a category-level canonical shape for each part to help with the emergence of pose-invariant canonical part spaces of the shape collection. The canonical shape for each part should be invariant to every part's rigid transformation. Thus, we adopt the same SE(3)-invariant canonical shape reconstruction module as utilized in~\cite{li2021leveraging}, constructed based on an SO(3)-PointNet module. The reconstruction module for each part $i$ takes its equivariant feature matrix $F_i$ and the part coordinate matrix $X_i$ as input and outputs an invariant shape $Z_i\in \mathbb{R}^{N_i^z\times 3}$ with $N_i^z$ points. \vpara{Structure: Kinematic Chain Prediction.} A kinematic chain defines the part adjacency relationships and the part transformation order in the canonical object space, \emph{i.e.} $(i_1,..., i_K)$, where $1\le i_1,...,i_K \le K$ are the indices of parts. The kinematic chain should not be affected by objects' articulation state variations and is therefore an invariant property. We first construct a confidence graph from object parts and find its maximum spanning tree, denoted as $\mathcal{T} = (\mathcal{V}, \mathcal{E}_{\mathcal{T}})$. The kinematic chain is then predicted as the inverse DFS visiting order of the tree. The last part of the predicted kinematic chain is treated as the base part, \emph{i.e.} $i_K$.
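For illustration, the following is a minimal sketch (not the released implementation) of how a kinematic chain could be derived from a symmetric $K\times K$ part-confidence matrix: a maximum spanning tree followed by an inverse DFS visiting order, as described above. The confidence values, function names, and the choice of DFS root are placeholders.

\begin{verbatim}
import numpy as np

def predict_kinematic_chain(conf):
    """Toy kinematic-chain prediction from a symmetric K x K part-confidence matrix.

    Maximum spanning tree (Prim's algorithm) followed by a DFS from an arbitrary
    root; the chain is the inverse DFS visiting order, so chain[-1] (the DFS root
    in this sketch) plays the role of the base part.
    """
    K = conf.shape[0]
    in_tree, edges = [0], []
    while len(in_tree) < K:
        best = None
        for i in in_tree:
            for j in range(K):
                if j in in_tree:
                    continue
                if best is None or conf[i, j] > conf[best[0], best[1]]:
                    best = (i, j)
        edges.append(best)
        in_tree.append(best[1])

    adj = {i: [] for i in range(K)}          # adjacency list of the spanning tree
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v)
    dfs(0)
    return order[::-1], edges                 # inverse DFS order, tree edges

# toy 3-part example (two "leg"-like parts attached to one base-like part 0)
conf = np.array([[0.0, 0.9, 0.8],
                 [0.9, 0.0, 0.1],
                 [0.8, 0.1, 0.0]])
chain, tree_edges = predict_kinematic_chain(conf)
print(chain, tree_edges)   # [2, 1, 0] and [(0, 1), (0, 2)]
\end{verbatim}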
\vpara{Structure: Joint Parameters Prediction.} Then, for each adjacent part $(i,j)\in \mathcal{E}_{\mathcal{T}}$, we want to infer their joint parameters, including an invariant pivot point $\mathbf{p}_{i,j}^v$ and a joint axis direction hypothesis $\mathbf{u}_i^g$ for each rotation element $g\in G_g$. For pivot points, we treat them as invariant properties and still adopt an invariant shape reconstruction module for prediction. Specifically, we predict the pivot point $\mathbf{p}_{i,j}^v$ between every two adjacent parts $i,j$ from their equivariant feature $( F_i, F_j )$ and the coordinate matrices $(X_i, X_j)$ using an invariant shape reconstruction module~\cite{li2021leveraging}. For joint axis directions, we regress an axis direction hypothesis $\mathbf{u}_i^g$ for part $i$ corresponding to each rotation group element $g\in G_g$ from its equivariant feature $F_i$. \vpara{Pose: Part-assembling Parameters Prediction.} Part-assembling parameters transform the predicted canonical part shapes such that we can get the canonical object shape by assembling the transformed parts together. As parameters connecting invariant canonical shapes, they should be invariant to every parts’ rigid transformations as well. Here, we simply predict a translation vector $\mathbf{p}_i^c \in \mathbb{R}^{3}$ for each part $i$. We predict them through invariant shape reconstruction modules from per-part equivariant feature $\{F_i, 1\le i\le K\}$. The canonical object shape is formed by assembling parts together $Z = \{ Z_i + \mathbf{p}_i^c \vert 1 \le i\le K \}$. \vpara{Pose: Joint States Prediction.} Joint states describe the articulation state of an object. For each part $i$, we predict a joint state hypothesis for each rotation element $g\in G$ from its equivariant feature $F_i$, \emph{i.e.} a rotation angle $\theta_i^g$ for a revolute part or a translation scalar $s_i^g$ for a prismatic part. By transforming parts in the canonical object space based on the predicted kinematic chain and joint states, we can get the posed articulated shape in the canonical object space. \vpara{Pose: Base Part Rigid Transformation.} The base part transformation transforms the posed articulated shape in the canonical object space to the camera space. We predict a residual rotation hypothesis for each discrete major rotation matrix of the rotation element $g$. Each residual translation vector is set to zero. \vpara{Articulated Object Pose.} With the above predicted quantities, we can calculate per-rotation articulated object pose hypotheses for an input articulated object $X$, including three parts: 1) rigid transformation $(\mathbf{1}, \mathbf{p}_i^c)$ of each part $i$ from the canonical part space to the canonical object space; 2) per-rotation articulated transformation of each part in the canonical object space computed from the predicted kinematic chain, joint parameters and per-rotation joint states; 3) per-rotation base part rigid transformation from the canonical object space to the camera space. The rigid transformation hypothesis for each part $i$ corresponding to each rotation element $g\in G$ is denoted as $P_i^g = (\mathbf{R}_i^g, \mathbf{t}_i^g)$. We treat them as part pose hypotheses. \subsection{Shape Reconstruction-based Self-supervised Task} \label{sec_method_self_supervised_task} Based on the reconstructed canonical part shape and predicted per-rotation part pose hypothesis, we can get per-rotation shape reconstruction for each part $i$: $\{ Y_i^{g} = \mathbf{R}_i^g Z_i + \mathbf{t}_i^g \vert g\in G\}$. 
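As an illustration of how these hypotheses are consumed, the following is a minimal numerical sketch (not the released implementation) of composing the per-rotation part reconstructions $Y_i^{g} = \mathbf{R}_i^g Z_i + \mathbf{t}_i^g$ and selecting the best hypothesis with a Chamfer-based min-of-N criterion, in the spirit of Eq.~\ref{eq_recon_loss} described below; array shapes and function names are illustrative.

\begin{verbatim}
import numpy as np

def chamfer_l1(x, y):
    """Symmetric Chamfer distance between two point sets (N x 3 and M x 3)."""
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def reconstruct_hypotheses(Z, R, t):
    """Compose per-rotation reconstructions Y^g by assembling R_i^g Z_i + t_i^g.

    Z: list of K canonical part point clouds (N_i x 3)
    R: (|G|, K, 3, 3) per-rotation per-part rotation hypotheses
    t: (|G|, K, 3) per-rotation per-part translation hypotheses
    """
    recons = []
    for g in range(R.shape[0]):
        parts = [Z[i] @ R[g, i].T + t[g, i] for i in range(len(Z))]
        recons.append(np.concatenate(parts, axis=0))
    return recons

def min_of_n_reconstruction(X, Z, R, t):
    """Return the best hypothesis index g0 and its distance to the observation X."""
    recons = reconstruct_hypotheses(Z, R, t)
    dists = [chamfer_l1(X, Y) for Y in recons]
    g0 = int(np.argmin(dists))
    return g0, dists[g0]

# toy usage: 2 parts, 4 rotation hypotheses
rng = np.random.default_rng(0)
Z = [rng.normal(size=(50, 3)), rng.normal(size=(40, 3))]
R = np.stack([np.stack([np.eye(3)] * 2)] * 4)     # identity rotation hypotheses
t = rng.normal(size=(4, 2, 3)) * 0.1
X = np.concatenate([Z[0] + t[2, 0], Z[1] + t[2, 1]])
print(min_of_n_reconstruction(X, Z, R, t))        # selects hypothesis 2
\end{verbatim}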
A part-by-part reconstruction task is adopted to self-supervise the network. Besides, we add a regularization term that encourages the predicted joints to behave like real joints. \vpara{Shape Reconstruction-based Self-supervised Loss.} The per-rotation shape reconstruction for the whole object can be calculated by concatenating all part reconstructions: $Y^{g} = \{ Y_i^{g} \vert 1\le i\le K \}$. We then adopt a min-of-N loss between the input observation $X$ and the reconstructed posed point clouds: \begin{align} \mathcal{L}_{rec} = \min_{ g \in G } d(X, Y^{g}), \label{eq_recon_loss} \end{align} where $d: \mathbb{R}^{N_X\times 3} \times \mathbb{R}^{N_Y\times 3}\rightarrow \mathbb{R}$ denotes the distance function between two point clouds. \vpara{Regularization for Joint Prediction.} Predicted joints should behave like real joints to support realistic articulated part transformations and part-level SE(3) equivariant feature learning. However, merely predicting joint parameters for computing rotation matrices is not sufficient to obtain real joints. Therefore, we devise a point-based joint constraint term for each predicted joint $(\mathbf{u}_i^{g_0}, \mathbf{p}_{i,j}^v)$, where $g_0 = \text{argmin}_{g\in G}d(X, Y^g)$ (Eq.~\ref{eq_recon_loss}). Specifically, given the predicted pivot point $\mathbf{p}_{i,j}^v$ and joint direction $\mathbf{u}_i^{g_0}$, we independently and randomly sample a set of points along the joint by shifting the pivot point $\mathbf{p}_{i,j}^v$: $P_{i,j}^v = \{ \mathbf{p}_{i,j}^{v,k} \vert 0\le k\le K^v\}$. The joint regularization loss term is as follows: \begin{align*} \mathcal{L}_{reg} = \sum_{(i,j)\in \mathcal{E}_{\mathcal{T}}} d(P_{i,j}^v, Z_i^2) + d(P_{i,j}^v, Z_j^2) + d(P_{i,j}^v, Z_i^1) + d(P_{i,j}^v, Z_j^1), \end{align*} where $Z_i^1$ and $Z_i^2$ are the shapes of part $i$ in the canonical object space before and after its articulated transformation, $\mathcal{E}_{\mathcal{T}}$ is the set of adjacent parts, and $d(X_1, X_2)$ is the unidirectional Chamfer Distance function from point cloud $X_1$ to $X_2$. Our overall self-supervised loss is therefore a linear combination of the above two loss terms: $\mathcal{L} = \mathcal{L}_{rec} + \lambda \mathcal{L}_{reg}$, where $\lambda$ is a hyper-parameter. \section{Conclusion} \label{sec_conclusion} In this work, we propose a self-supervised strategy for category-level articulated object pose estimation without any annotations. Leveraging part-level SE(3) equivariant features, we propose a part shape, structure, and pose disentanglement strategy that successfully accomplishes the category-level articulated object pose estimation task. A part-by-part shape reconstruction task is adopted to self-supervise the network learning. Experiments demonstrate the effectiveness of our method and our core ideas. This work can reduce the annotation effort required for this task and may also promote further thinking on the design of part-level equivariant networks. \section{Reproducibility Statement} \vpara{Novel Models and Algorithms.} Our method is composed of two novel designs: 1) a part-level SE(3) equivariant network; 2) a part shape, structure, and pose disentanglement strategy. Please refer to the \emph{Key Implementations} section in the file \emph{equi-articulated-pose/README.md} in our Supplementary Materials. For each key design/component, we provide the reference to its implementation in our submitted code in that file.
\vpara{Theoretical Results.} Please refer to Section~\ref{sec_appen_part_level_equiv} in the Appendix for the proof of the part-level SE(3) equivariance property of our pose-aware point convolution strategy. \vpara{Experiments.} For experiments, we provide 1) the source code (please refer to the folder \emph{equi-articulated-pose} in our Supplementary Materials for details); 2) details of the data preparation process (Section~\ref{sec_appen_data_prepare} in the Appendix); 3) implementation details of our method and the compared baselines (Sections~\ref{sec_appen_imple_details} and~\ref{sec_appen_baselines} in the Appendix); and 4) the category-level articulated object pose evaluation strategy (Section~\ref{sec_appen_eval}). We also provide related references to the code in the \emph{README.md} file. \section{Related Works} \vpara{Unsupervised Part Decomposition for 3D Objects.} Decomposing an observed 3D object shape into parts in an unsupervised manner has been a recent interest in shape representation learning. Previous works typically adopt a generative shape reconstruction task to self-supervise the shape decomposition. They often choose to represent parts via learnable primitive shapes~\cite{tulsiani2017learning,kawana2020neural,yang2021unsupervised,paschalidou2021neural,deng2020cvxnet,zhu2020adacoseg,chen2020bsp} or non-primitive-based implicit field representations~\cite{chen2019bae,kawana2022uppd}. Shape alignment is a common assumption of such methods for achieving consistent decomposition across different shapes. \vpara{Articulated Object Pose Estimation.} Pose estimation for articulated objects aims to acquire a fine-grained understanding of target articulated objects at both the object level and the part level. The prior work~\cite{li2020category} proposes to estimate object orientations, joint parameters, and per-part poses in a fully-supervised setting. They define the Articulation-aware Normalized Coordinate Space Hierarchy (ANCSH), composed of the canonical object space and a set of canonical part spaces, as a consistent representation for articulated objects to support pose estimation. In this work, we also want to estimate a hierarchy of articulation-aware object poses, but in a fully unsupervised setting. Instead of hand-crafting normalized coordinate spaces, we wish to let them be induced automatically during learning. \vpara{SE(3) Equivariant Networks.} Recently, there has been a trend of pursuing SE(3)-equivariant and invariant features through network design~\cite{weiler20183d,thomas2018tensor,fuchs2020se,zhao2020quaternion,chen2021equivariant}. Equivariance is achieved by designing kernels~\cite{thomas2018tensor,fuchs2020se} or feature convolution strategies~\cite{chen2021equivariant,zhao2020quaternion}. In this work, we design our part-level SE(3) equivariant feature network based on the Equivariant Point Network~\cite{chen2021equivariant} for articulated object pose estimation. A common SE(3) equivariant feature of a local region is affected by both its parent part's and other parts' rigid transformations; by contrast, its part-level SE(3) equivariant feature is only affected by its parent part. \section{Experiments} We evaluate our method on the category-level articulated object pose estimation task (sec.~\ref{sec_exp_pose_est}) to demonstrate its effectiveness.
Besides, we also test its performance on two side tasks that can be completed by our network at the same time, namely part segmentation (sec.~\ref{sec_exp_seg}) and shape reconstruction (sec.~\ref{sec_exp_recon}). \subsection{Datasets} Following previous literature~\cite{li2020category}, we choose seven categories from three datasets for evaluation on both complete shapes and rendered partial point clouds: 1) four categories from the Part-Mobility~\cite{wang2019shape2motion} dataset, namely Oven, Washing Machine, Laptop (denoted as Laptop (S)), and Eyeglasses, with revolute parts; 2) one category with prismatic parts, Drawer, from the SAPIEN dataset~\cite{xiang2020sapien}; 3) two categories from the real-world dataset HOI4D~\cite{liu2022hoi4d}, namely Safe and Laptop (denoted as Laptop (R)), with revolute parts. Please refer to Appendix~\ref{sec_appen_data_prepare} for data preparation details. \subsection{Category-level Articulated Object Pose Estimation} \label{sec_exp_pose_est} \vpara{Metrics.} Following ~\cite{li2020category}, we use the following metrics to evaluate our method: 1) part-based pose metrics, namely the per-part rotation error $R_{err}(^\circ)$ and the per-part translation error $T_{err}$, both reported as mean and median values; 2) joint parameter metrics, namely the joint axis orientation error $\theta_{err}(^\circ)$ in degrees and the joint position error $d_{err}$, both reported as mean values. Please refer to Appendix~\ref{sec_appen_eval} for details of our evaluation strategy. \vpara{Baselines.} Since there are no previous works with exactly the same setting as ours, we choose NPCS~\cite{li2020category}, a \textbf{supervised} pose estimation method for articulated objects, and ICP, a traditional pose estimation approach, as our baseline methods. To apply them to our articulated objects with arbitrary global poses, we make the following modifications: 1) We change the backbone of NPCS to EPN~\cite{chen2021equivariant} (denoted as ``NPCS-EPN'') and add supervision on its discrete rotation mode selection process to make it work on our shapes with arbitrary global pose variations. We observe that NPCS without EPN fails to get reasonable results on our data (see Appendix~\ref{sec_appen_more_exp_additional} for details). Beyond part poses, we also add a joint prediction branch for joint parameter estimation. 2) We equip ICP with ground-truth segmentation labels (denoted as ``Oracle ICP'') and register each part individually for part pose estimation. Note that Oracle ICP cannot estimate joint parameters. \vpara{Experimental Results.} Table~\ref{tb_exp_pose_cmp} presents the experimental results of our method and the baseline methods on complete point clouds. We defer the results on partial point clouds to Table~\ref{tb_exp_pose_cmp_partial} in Appendix~\ref{sec_appen_exp_partial}. We can make the following observations: 1) As a self-supervised strategy, our average and per-category performance is comparable to that of the supervised baseline NPCS-EPN. We even sometimes outperform NPCS-EPN, such as in the joint axis orientation estimation on Safe. 2) Without any human labels available during training, our method outperforms Oracle ICP, which uses ground-truth segmentation labels, by a large margin in all categories. As a further discussion, the poor performance of Oracle ICP may be caused by the part-symmetry problem, which adds ambiguity to part poses, especially when each part is treated individually for estimation.
Please refer to Appendix~\ref{sec_appen_symmetric_parts} for more discussions. For a qualitative evaluation and comparison, we visualize the input objects, reconstructions, and the predicted canonical object shapes by our method and NPCS in Figure~\ref{fig_canon_vis}. Our method is able to reconstruct category-level aligned canonical shapes, which serve as good support for estimating category-level articulated object poses. \begin{table}[t] \centering \caption{\footnotesize Comparison between the part pose estimation performance of different methods on all categories. ``R'' denotes rotation errors with the value format ``Mean $R_{err}$/Median $R_{err}$''. ``T'' denotes translation errors with the value format ``Mean $T_{err}$/Median $T_{err}$''. ``J'' denotes joint parameters estimation results with the value format ``Mean $\theta_{err}$/Mean $d_{err}$''. ``Avg.'' refers to ``Average Value''. \textbf{Since Oracle ICP could not predict joint parameters}, we only present joint parameter prediction results of our method and supervised NPCS-EPN. For all metrics, the smaller, the better. Best values and \textbf{Bold}, while second best ones are shown in \emph{\textcolor{blue}{blue}}. } \resizebox{0.7\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer & Avg. \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{R} & \makecell[c]{NPCS-EPN \\ (supervised)} & \makecell[c]{\textbf{5.47}/\textbf{4.45}, \\ \emph{\textcolor{blue}{7.35}}/\emph{\textcolor{blue}{7.30}}} & \makecell[c]{\textbf{4.76}/\textbf{4.07}, \\ \textbf{6.66}/\textbf{5.41}} & \makecell[c]{\textbf{2.75}/\textbf{2.44}, \\ \textbf{9.34}/\textbf{7.64}, \\ \textbf{7.93}/\textbf{6.74}} & \makecell[c]{\textbf{6.72}/\emph{\textcolor{blue}{6.08}}, \\ \emph{\textcolor{blue}{15.96}}/\emph{\textcolor{blue}{13.91}}} & \makecell[c]{\textbf{1.75}/\textbf{1.59}, \\ \textbf{2.67}/\textbf{2.50}} & \makecell[c]{\emph{\textcolor{blue}{8.20}}/\emph{\textcolor{blue}{7.12}}, \\ \emph{\textcolor{blue}{5.13}}/\emph{\textcolor{blue}{4.72}}} & \makecell[c]{\textbf{1.52}/\textbf{1.31}, \\ \textbf{2.01}/\textbf{1.81}, \\ \textbf{2.15}/\textbf{1.81}, \\ \textbf{1.14}/\textbf{0.94}} & \textbf{5.38}/\textbf{4.70} \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Oracle ICP & \makecell[c]{{46.46}/38.56, \\ {47.11}/43.41} & \makecell[c]{55.12/50.42, \\ 52.38/51.57} & \makecell[c]{34.41/23.25, \\ 34.58/25.82, \\ 35.71/25.12} & \makecell[c]{43.26/42.02, \\ 44.04/43.64} & \makecell[c]{52.80/56.02,\\ 53.04/52.13} & \makecell[c]{42.50/43.06, \\ 42.06/39.25} & \makecell[c]{50.15/47.14,\\50.12/47.15,\\50.11/47.15,\\50.07/46.41} & 46.11/42.48 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{\emph{\textcolor{blue}{7.74}}/\emph{\textcolor{blue}{7.35}}, \\ \textbf{4.07}/\textbf{3.97}} & \makecell[c]{\emph{\textcolor{blue}{7.49}}/\emph{\textcolor{blue}{7.37}}, \\ \emph{\textcolor{blue}{19.27}}/\emph{\textcolor{blue}{19.19}}} & \makecell[c]{\emph{\textcolor{blue}{8.16}}/\emph{\textcolor{blue}{8.21}}, \\ \emph{\textcolor{blue}{12.29}}/\emph{\textcolor{blue}{10.89}}, \\ \emph{\textcolor{blue}{12.53}}/\emph{\textcolor{blue}{9.88}}} & \makecell[c]{\emph{\textcolor{blue}{7.34}}/\textbf{5.16}, \\ \textbf{10.41}/\textbf{9.34}} & \makecell[c]{\emph{\textcolor{blue}{9.03}}/\emph{\textcolor{blue}{9.09}}, \\ \emph{\textcolor{blue}{13.83}}/\emph{\textcolor{blue}{13.59}}} & 
\makecell[c]{\textbf{5.71}/\textbf{3.61}, \\ \textbf{3.64}/\textbf{2.84}} & \makecell[c]{\emph{\textcolor{blue}{3.18}}/\emph{\textcolor{blue}{2.73}}, \\ \emph{\textcolor{blue}{3.18}}/\emph{\textcolor{blue}{2.73}}, \\ \emph{\textcolor{blue}{3.18}}/\emph{\textcolor{blue}{2.71}}, \\ \emph{\textcolor{blue}{3.18}}/\emph{\textcolor{blue}{2.71}}} & \emph{\textcolor{blue}{7.90}}/\emph{\textcolor{blue}{7.14}} \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{T} & \makecell[c]{NPCS-EPN \\ (supervised)} & \makecell[c]{\textbf{0.029}/\textbf{0.029}, \\ {0.020}/0.019} & \makecell[c]{\textbf{0.021}/\textbf{0.018}, \\ \textbf{0.016}/\emph{\textcolor{blue}{0.015}}} & \makecell[c]{\textbf{0.025}/\textbf{0.025}, \\ \textbf{0.022}/\textbf{0.020}, \\ \textbf{0.027}/{\textbf{0.024}}} & \makecell[c]{\textbf{0.040}/\textbf{0.019}, \\ \textbf{0.027}/\textbf{0.023}} & \makecell[c]{\textbf{0.005}/\textbf{0.005}, \\ \textbf{0.010}/\textbf{0.009}} & \makecell[c]{\textbf{0.014}/\textbf{0.011}, \\ \textbf{0.023}/\textbf{0.021}} & \makecell[c]{{\textbf{0.035}}/\emph{\textcolor{blue}{0.033}}, \\ \textbf{0.039}/\emph{\textcolor{blue}{0.033}}, \\ \textbf{0.025}/\textbf{0.016}, \\ \textbf{0.013}/\textbf{0.011}} & \textbf{0.023}/\textbf{0.019} \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Oracle ICP & \makecell[c]{{0.091}/\emph{\textcolor{blue}{0.041}}, \\ {0.070}/\emph{\textcolor{blue}{0.030}}} & \makecell[c]{0.126/\emph{\textcolor{blue}{0.028}}, \\ \emph{\textcolor{blue}{0.032}}/\textbf{0.013}} & \makecell[c]{0.092/0.097, \\ 0.188/0.197, \\ 0.185/0.193} & \makecell[c]{0.071/0.037, \\ 0.120/\emph{\textcolor{blue}{0.030}}} & \makecell[c]{0.072/\emph{\textcolor{blue}{0.036}}, \\ 0.060/\emph{\textcolor{blue}{0.017}}} & \makecell[c]{0.123/0.122,\\ 0.120/0.123} & \makecell[c]{\emph{\textcolor{blue}{0.053}}/\textbf{0.029},\\\emph{\textcolor{blue}{0.054}}/\textbf{0.027},\\\emph{\textcolor{blue}{0.050}}/\emph{\textcolor{blue}{0.028}},\\ \emph{\textcolor{blue}{0.052}}/\emph{\textcolor{blue}{0.031}}} & 0.092/0.063 \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{\emph{\textcolor{blue}{0.054}}/0.052, \\ \emph{\textcolor{blue}{0.067}}/0.046} & \makecell[c]{\emph{\textcolor{blue}{0.082}}/0.083, \\ 0.042/0.034} & \makecell[c]{\emph{\textcolor{blue}{0.054}}/\emph{\textcolor{blue}{0.039}}, \\ \emph{\textcolor{blue}{0.086}}/\emph{\textcolor{blue}{0.088}}, \\ \emph{\textcolor{blue}{0.070}}/\emph{\textcolor{blue}{0.055}}} & \makecell[c]{{\textbf{0.040}}/\emph{\textcolor{blue}{0.037}}, \\ \emph{\textcolor{blue}{0.046}}/0.042} & \makecell[c]{\emph{\textcolor{blue}{0.066}}/0.069, \\ \emph{\textcolor{blue}{0.037}}/0.035} & \makecell[c]{\emph{\textcolor{blue}{0.021}}/\emph{\textcolor{blue}{0.019}}, \\ \emph{\textcolor{blue}{0.027}}/\emph{\textcolor{blue}{0.026}}} & \makecell[c]{{0.096}/{0.096}, \\ {0.097}/{0.092}, \\ {0.108}/{0.105}, \\ 0.109/0.100} & \emph{\textcolor{blue}{0.065}}/\emph{\textcolor{blue}{0.060}} \\ \cline{1-10} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{J} & \makecell[c]{NPCS-EPN \\ (supervised)} & \textbf{5.04}/\textbf{0.076} & \textbf{5.66}/\textbf{0.078} & \makecell[c]{\textbf{7.42}/0.090,\\ \textbf{7.42}/\textbf{0.101}} & \textbf{5.74}/0.129 & 14.15/0.063 & \textbf{8.53}/\textbf{0.084} & \textbf{20.18}/- & \textbf{9.27}/\textbf{0.089} \\ \cline{2-10} \specialrule{0em}{1pt}{0pt} ~ & Ours & 20.30/0.089 & 28.40/0.118 & \makecell[c]{17.75/\textbf{0.045},\\ 17.75/0.129} & 30.31/\textbf{0.122} & \textbf{4.36}/\textbf{0.031} & 17.17/0.169 & 38.86/- & 21.86/0.100 \\ \cline{1-10} 
\specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-16pt} \label{tb_exp_pose_cmp} \end{table} \subsection{Part Segmentation} \label{sec_exp_seg} \vpara{Evaluation Metric and Baselines.} The metric used for this task is Segmentation MIoU. We choose three position-based segmentation strategies, namely BAE-Net~\cite{chen2019bae}, NSD~\cite{kawana2020neural}, and BSP-Net~\cite{chen2020bsp}, and one motion-based segmentation method, ICP~\cite{algo_icp}, as our baselines for this task. For BAE-Net and BSP-Net, we generate data in their implicit representation using the data generation method described in IM-NET~\cite{chen2019learning}. We improve the evaluation strategy for NSD and BSP-Net considering the global pose variation of our data (see Appendix~\ref{sec_appen_baselines} for details). \vpara{Experimental Results.} In Table~\ref{tb_exp_seg_cmp}, we present the experimental results of our method and the baselines on complete point clouds. Results on partial data are deferred to Table~\ref{tb_exp_seg_cmp_partial} in Appendix~\ref{sec_appen_exp_partial}. Our method consistently outperforms these four part segmentation methods in all categories. BSP-Net, BAE-Net, and NSD assume input data alignment and rely heavily on position information for segmentation. However, such segmentation cues may not be well preserved in our data with arbitrary global pose variations. By contrast, part motions can serve as a more consistent cue than positions for segmenting articulated objects. We hypothesize that ICP's poor registration performance on some categories such as Eyeglasses further leads to its low segmentation IoUs. \subsection{Shape Reconstruction} \label{sec_exp_recon} \vpara{Evaluation Metric and Baselines.} We use Chamfer L1 as our evaluation metric for shape reconstruction. To demonstrate the superiority of part-by-part reconstruction for articulated objects over whole-shape reconstruction, we choose EPN, which treats articulated objects as rigid objects for reconstruction, as the baseline. \vpara{Experimental Results.} As shown in Table~\ref{tb_exp_completion_cmp}, our method consistently outperforms the EPN-based whole-shape reconstruction. We suppose that part-by-part reconstruction, where only simple parts need to be recovered, poses an easier problem for the network than recovering the whole shape. \begin{table}[h] \centering \caption{\footnotesize Comparison between the part segmentation performance of different methods on all categories. The metric used for this task is Segmentation MIoU, calculated on 4096 points for each shape. Values presented in the table are scaled by 100. Larger values indicate better performance. ``*'' denotes cases where the network fails by segmenting input shapes into single parts. } \vspace{-1pt} \resizebox{0.7\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Oven & \makecell[c]{Washing\\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer & Avg.
\\ \cline{1-9} \specialrule{0em}{1pt}{0pt} BAE-Net~\cite{chen2019bae} & 55.04 & 46.07* & 37.19* & 65.21 & 39.83* & 66.35 & 22.83* & 47.50 \\ NSD~\cite{kawana2020neural} & 60.59 & 56.43 & 53.31 & 80.88 & 71.30 & 76.86 & 33.61 & 61.85 \\ BSP-Net~\cite{chen2020bsp} & 67.24 & 62.52 & 54.28 & 79.41 & 76.59 & 81.33 & 42.15 & 66.22 \\ Oracle ICP~\cite{algo_icp} & 75.17 & 72.80 & 49.49 & 56.20 & 66.90 & 59.96 & 45.68 & 60.89 \\ Ours & \textbf{76.22} & \textbf{73.27} & \textbf{62.84} & \textbf{82.97} & \textbf{80.06} & \textbf{86.04} & \textbf{51.39} & \textbf{73.26} \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-16pt} \label{tb_exp_seg_cmp} \end{table} \begin{table}[htbp] \centering \caption{\footnotesize Comparison between the shape reconstruction performance of different methods on all categories. The metric used in this task is Chamfer L1. The smaller, the better. } \vspace{-1pt} \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer & Avg. \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} EPN~\cite{li2021leveraging} & {0.033} & 0.051 & {0.028} & {0.029} & 0.030 & {0.028} & 0.057 & 0.036 \\ Ours & \textbf{0.025} & \textbf{0.049} & \textbf{0.025} & \textbf{0.024} & \textbf{0.026} & \textbf{0.026} & \textbf{0.045} & \textbf{0.031} \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-16pt} \label{tb_exp_completion_cmp} \end{table} \begin{table}[htbp] \centering \caption{\footnotesize Ablation study w.r.t. the effectiveness of joint regularization for part pose estimation and the design of pose-aware equivariant feature communication (denoted as ``Pose.''). Reported values are per-category per-part average values. Please refer to the caption of Table~\ref{tb_exp_abl_with_part_proposal} for the data format of ``Joint''. } \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} Method & Seg. IoU & Mean $R_{err}(^\circ)$ & Median $R_{err}(^\circ)$ & Mean $T_{err}$ & Median $T_{err}$ & Joint & Chamfer L1 \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} No $\mathcal{L}_{reg}$ & {76.40} & {11.74} & 10.87 & 0.070 & 0.065 & - & 0.038 \\ With $\mathcal{L}_{reg}$ & 74.32 & 10.40 & 9.30 & 0.072 & 0.073 & 22.01/0.111 & 0.032 \\ With $\mathcal{L}_{reg}$ (Pose.) & \textbf{76.90} & \textbf{9.21} & \textbf{8.40} & \textbf{0.052} & \textbf{0.047} & \textbf{19.72/0.103} & \textbf{0.025} \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-18pt} \label{tb_exp_abl_iteration_joint} \end{table} \begin{table}[htbp] \centering \caption{\footnotesize Ablation study w.r.t. the effectiveness of accumulating part-level features for part-based property prediction. Reported values are per-category per-part average values on all categories. ``Joint'' represents joint parameter estimation errors, with the value in the format of ``Mean $\theta_{err}$/Mean $d_{err}$''. } \vspace{-2pt} \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} Method & Seg.
IoU & Mean $R_{err}(^\circ)$ & Median $R_{err}(^\circ)$ & Mean $T_{err}$ & Median $T_{err}$ & Joint & Chamfer L1 \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} Without Parts & 71.68 & 12.85 & 11.52 & 0.068 & 0.060 & {27.97/0.172} & 0.036 \\ With Parts & \textbf{76.90} & {\textbf{9.21}} & \textbf{8.40} & \textbf{0.052} & \textbf{0.047} & \textbf{19.72/0.103} & \textbf{0.029} \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-18pt} \label{tb_exp_abl_with_part_proposal} \end{table} \section{Ablation Study} In this section, we ablate several crucial designs in the method to demonstrate their effectiveness, including part-level feature accumulation, pose-aware point convolution, and joint regularization. \vpara{Part-level Feature Accumulation.} We use a grouping module to group points into parts for part-level features in our method. To demonstrate the effectiveness of using part-level features for part shape, structure, and pose disentanglement, we ablate part-level features and only use features from the whole shape for part-level property prediction, similar to those used in~\cite{kawana2022uppd,chen2019bae}. Table~\ref{tb_exp_abl_with_part_proposal} compares their performance. For each metric, we report its per-category per-part average value. It can be observed that part-level features help with part-based property prediction, enabling the network to achieve better performance on all pose-related metrics. \vpara{Pose-aware Point Convolution.} Our method contains a pose-aware equivariant feature convolution design for part-level SE(3) equivariant feature learning. To demonstrate the superiority of part-level equivariance over common global equivariance, we compare the model's performance when using part-level equivariant features (With $\mathcal{L}_{reg}$ (Pose.)) with the one using global equivariant features (With $\mathcal{L}_{reg}$) in Table~\ref{tb_exp_abl_iteration_joint}. For each metric, its per-category per-part average value is reported. The network using part-level equivariant features consistently outperforms the one using only global equivariant features on all metrics. \vpara{Joint Regularization.} Besides the reconstruction loss, we additionally add a joint regularization term to predict joints that connect two adjacent parts. Beyond acquiring joint-related parameters, joint regularization improves the pose estimation performance as well, especially for translation prediction, as shown in Table~\ref{tb_exp_abl_iteration_joint}. \section{Method Details} \subsection{Proof of Part-Level Equivariant Design} \label{sec_appen_part_level_equiv} In this section, we give the proof of the part-level equivariance property of the designed pose-aware point convolution strategy: \begin{align} (\mathcal{F} * h_1)(x_i,g) = \sum_{{P}_j^{-1}x_j\in \mathcal{N}_{{P}_i^{-1}x_i}^c} \mathcal{F}(x_j, g \mathbf{R}_i\mathbf{R}_j^{-1} ) h_1(g(x_i - {P}_i{P}_j^{-1}x_j)), \end{align} where ${P}_i$ and ${P}_j$ are the (estimated) poses of $x_i$ and $x_j$ from the canonical object space to the observed camera space respectively, $\mathbf{R}_i$ and $\mathbf{R}_j$ are the (estimated) rotations of points $x_i$ and $x_j$ from the canonical object space to the observed camera space respectively, and $\mathcal{N}_{{P}_i^{-1}x_i}^c$ denotes the set of point $x_i$'s neighbours in the canonical object space.
To prove the part-level equivariance of $(\mathcal{F}* h_1)(x_i,g)$, we need to prove that 1) $(\mathcal{F}* h_1)(x_i,g)$ is invariant to the rigid transformation of each of point $x_i$'s neighbouring points $x_j$; 2) $(\mathcal{F}* h_1)(x_i,g)$ is equivariant to the rigid transformation of $x_i$ itself. We prove these properties for the continuous convolution operation, $(\mathcal{F} * h_1)(x_i, g) = \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1})h_1(g(x_i - {P}_i{P}_j^{-1}x_j))$. \begin{theorem} The continuous operation $(\mathcal{F} * h_1)(x_i, g) = \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1})h_1(g(x_i - {P}_i{P}_j^{-1}x_j))$ is invariant to each arbitrary rigid transformation $\Delta {P}_j = (\Delta \mathbf{R}_j \in \text{SO(3)}, \Delta \mathbf{t}_j \in \mathbb{R}^{3})$ of each neighbouring point $x_j$, $\forall x_j\in \mathbb{R}^3, x_j\neq x_i$. \end{theorem} \begin{proof} To prove the invariance of $(\mathcal{F} * h_1)(x_i, g)$, we need to prove that $\forall x_j \in \mathbb{R}^{3}, x_j\neq x_i, \forall \Delta{P}_j \in \text{SE(3)}, \mathbf{R}_j' = \Delta \mathbf{R}_j \mathbf{R}_j$, we have \begin{align*} \Delta {P}_j (\mathcal{F} * h_1) (x_i,g) = (\mathcal{F} * h_1) (x_i,g). \end{align*} Let $x_j' = \Delta {P}_j x_j$, ${P}_j' = \Delta {P}_j {P}_j$, then we have, \begin{align*} \Delta {P}_j (\mathcal{F} * h_1)(x_i, g) &= \int_{x_j'\in \mathbb{R}^{3}} \mathcal{F}(x_j', g\mathbf{R}_i\mathbf{R}_j^{'-1}) h_1(g(x_i - {P}_i{P}_j^{'-1}x_j')) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(\Delta {P}_j x_j, g\mathbf{R}_i\mathbf{R}_j^{-1}\Delta \mathbf{R}_j^{-1}) h_1(g(x_i - {P}_i{P}_j^{-1}\Delta {P}_j^{-1}\Delta {P}_jx_j)) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(\Delta \mathbf{R}_j x_j, g\mathbf{R}_i\mathbf{R}_j^{-1}\Delta \mathbf{R}_j^{-1}) h_1(g(x_i - {P}_i{P}_j^{-1}x_j)) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1}) h_1(g(x_i - {P}_i{P}_j^{-1}x_j)) \\ &= (\mathcal{F} * h_1)(x_i,g). \end{align*} \end{proof} \begin{theorem} The continuous operation $(\mathcal{F} * h_1)(x_i, g) = \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\mathbf{R}_i\mathbf{R}_j^{-1})h_1(g(x_i - {P}_i{P}_j^{-1}x_j))$ is equivariant to the rigid transformation $\Delta {P}_i = (\Delta \mathbf{R}_i \in \text{SO(3)}, \Delta \mathbf{t}_i \in \mathbb{R}^{3})$ of $x_i$. \end{theorem} \begin{proof} To prove that $(\mathcal{F} * h_1)(x_i,g)$ is equivariant to the rigid transformation of $x_i$, we need to prove that $\forall \Delta {P}_i\in \text{SE(3)}$, we have \begin{align*} \Delta {P}_i (\mathcal{F} * h_1)(x_i,g) = (\Delta \mathbf{R}_i\mathcal{F} * h_1)(x_i, g). \end{align*} It can be proved by \begin{align*} \Delta {P}_i (\mathcal{F} * h_1)(x_i,g) &= (\mathcal{F} * h_1)(\Delta{P}_ix_i, g\Delta\mathbf{R}_i) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, g\Delta \mathbf{R}_i \mathbf{R}_i\mathbf{R}_j^{-1}) h_1(g(\Delta{P}_ix_i - \Delta{P}_i{P}_j^{-1}x_j)) \\ &= \int_{x_j\in \mathbb{R}^{3}} \mathcal{F}(x_j, (g\Delta \mathbf{R}_i) \mathbf{R}_i\mathbf{R}_j^{-1}) h_1((g\Delta\mathbf{R}_i)(x_i - {P}_j^{-1}x_j)) \\ &= (\Delta \mathbf{R}_i\mathcal{F} * h_1)(x_i, g). \end{align*} \end{proof} \subsection{Additional Details of the Method Components} \label{sec_appen_method_additional_explanations} In this section, we provide additional explanations for several components and strategies used in the proposed method.
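Before detailing individual components, we include a small self-contained numerical sanity check of the invariance property above (Theorem 1). This is an illustrative sketch rather than part of the released code: the toy feature function is constructed so that it satisfies the part-level equivariance assumption on the input features of Eq.~\ref{eq_part_level_convolution}, and the kernel, point sets, and all names are placeholders.

\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def apply(P, x):                 # apply rigid transform P = (R, t) to a 3-vector
    R, t = P
    return R @ x + t

def inverse(P):
    R, t = P
    return (R.T, -R.T @ t)

def compose(P, Q):               # composition P o Q
    Rp, tp = P; Rq, tq = Q
    return (Rp @ Rq, Rp @ tq + tp)

def feat(x, r, P):
    """Toy input feature for point x with pose P at rotation element r.
    Chosen to satisfy the part-level assumption on the input features: it
    depends only on the canonical coordinate P^{-1}x and on r composed with
    the point's rotation."""
    R, _ = P
    return np.concatenate([apply(inverse(P), x), (r @ R) @ np.array([1.0, 0.0, 0.0])])

def kernel(d):                   # toy direction-sensitive scalar kernel h_1
    return np.exp(-np.sum(d ** 2)) * (1.0 + 0.3 * d[0])

def conv_at(i, X, poses, g):
    """Pose-aware convolution (Eq. 2) at point i for rotation element g,
    summing over all other points (neighbourhood selection omitted for brevity)."""
    Ri, _ = poses[i]
    out = np.zeros(6)
    for j in range(len(X)):
        if j == i:
            continue
        Rj, _ = poses[j]
        d = g @ (X[i] - apply(compose(poses[i], inverse(poses[j])), X[j]))
        out += feat(X[j], g @ Ri @ Rj.T, poses[j]) * kernel(d)
    return out

rng = np.random.default_rng(0)
part_a = rng.normal(size=(5, 3))
part_b = rng.normal(size=(5, 3)) + np.array([2.0, 0.0, 0.0])
X = np.concatenate([part_a, part_b])
poses = [(np.eye(3), np.zeros(3))] * len(X)       # canonical input: identity poses

g = Rot.from_euler("zyx", [0.4, -0.2, 0.9]).as_matrix()
before = conv_at(0, X, poses, g)                  # point 0 belongs to part A

# rigidly transform part B only, updating its points and their per-point poses
dR = Rot.from_euler("zyx", [1.1, 0.5, -0.7]).as_matrix()
dP = (dR, np.array([0.5, -1.0, 0.3]))
X2, poses2 = X.copy(), list(poses)
for j in range(5, 10):
    X2[j] = apply(dP, X[j])
    poses2[j] = compose(dP, poses[j])

after = conv_at(0, X2, poses2, g)
print(np.allclose(before, after))                 # True: part A is unaffected by part B's motion
\end{verbatim}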
\vpara{Kinematic Chain.} The kinematic chain describes the order in which different parts of the shape move, with adjacency information kept in the chain. For instance, the spanning tree of an Eyeglasses shape is always a binary tree with height $2$; the kinematic chain thus indicates that two parts rotate around their joints relative to the third part, while no motion is added to the third part. \vpara{Joint Axis Direction.} We assume that all joints' axis directions in one shape are consistent. Thus, in practice, we set the direction of all joints to the same predicted direction, i.e. $\mathbf{u}_i^{g}\leftarrow \mathbf{u}_{i_m}^g, \forall (i, j)\in \mathcal{E}_{\mathcal{T}}, \forall g\in G_g$, where $(i_m, j_m)$ is set to the connected pair with the tree root as one of its nodes. By $(i,j)\in \mathcal{E}_{\mathcal{T}}$ we mean a directed edge where node $i$ is deeper than node $j$ in the tree $\mathcal{T}$; it indicates that node $i$'s subtree rotates around the joint $\mathbf{u}_i^g$ passing through node $j$. \vpara{Prismatic Parts.} For shapes having only prismatic parts, the part proposal module is not used. Instead, we feed features of the whole shape directly to the following modules to predict properties for each part. Thus, to get part segmentations for shapes with prismatic parts such as drawers, we adopt a reconstruction-based part label assignment strategy that directly assigns, to each point $\mathbf{x}$ in the original shape $X$, the label of its nearest reconstructed part. \vpara{Iterative Pose Estimation.} In practice, an additional zeroth iteration is added by treating the whole shape as a single rigid part and estimating its pose $P_0 = (\mathbf{R}_0, \mathbf{t}_0)$. Its inverse rigid transformation $P_0^{-1}$ is then used to transform the input shape for the following part pose estimation iterations. \vpara{Partial Point Clouds.} The loss function used in~\cite{li2021leveraging} for partial point clouds is the unidirectional Chamfer Distance. Using this function can make the network aware of complete shapes by observing partial point clouds from different viewpoints. Such an expectation can be met for asymmetric objects if the viewpoints cover the full SO(3) space. However, we restrict the range of viewpoints when rendering partial point clouds of articulated objects to keep each part visible. This restriction results in relatively homogeneous occlusion patterns. Therefore, we choose to use the unidirectional Chamfer Distance only for certain categories, such as Safe, when testing on partial point clouds. \vpara{Equivariant/Invariant Properties of the Designed Modules.} The designed method uses part-level SE(3) equivariance to reduce the difficulty of factorizing part pose and part shape. Exact part-level equivariant features would make these modules meet our expectations. However, due to the approximate SE(3) equivariance of the employed feature backbone and part pose estimates that may not be very accurate, we cannot expect exact invariance/equivariance from them.
For instance, if we do not consider part kinematic constraints, the part shape reconstruction module and the part-assembling parameters prediction module should be invariant to K rigid transformations in the quotient group $(\text{SE(3)}/G_A)(\text{SE(3)})^{K-1}$ if using global equivariant features, while it should be invariant to the rigid transformation in the quotient group $\text{SE(3)}/G_A$ if using part-level equivariant features given correct part pose estimation. Similarly, the pivot point prediction module should be invariant to two rigid transformations in the quotient group $(\text{SE(3)} / G_A)^2$ if using part-level equivariant features. Part-level equivariance design could reduce the difficulty of a network doing factorization, which may count as a reason for its effectiveness. \section{Experiments} \subsection{Data Preparation} \label{sec_appen_data_prepare} We choose seven categories from three different datasets, namely Oven, Washing Machine, Eyeglasses, Laptop (S) from Shape2Motion~\cite{wang2019shape2motion}, Drawer from SAPIEN~\cite{xiang2020sapien}, and Safe, Laptop (R) from HOI4D~\cite{liu2022hoi4d}. The first five datasets are selected regarding previous works on part pose estimation~\cite{li2020category,kawana2022uppd}. To test the effectiveness of our method on objects collected from the real world, we choose two more categories from a real dataset~\cite{liu2022hoi4d}. We split out data according to the per-category data split approach introduced in~\cite{li2020category}. Not all shapes in a category are used for training/testing, due to their canonical articulation states that are inconsistent with other shape or missing parts. Number of shapes in the train/test sets are listed in Table~\ref{tb_exp_dataset_meta_info}. For each shape, we generate 100 posed shapes with different articulation states. Then for complete point clouds, we generate 10 samples with each of them rotated by a random rotation matrix. To avoid the ambiguity caused by the shape's global symmetry, we restrict the articulation state change to a certain range. For Oven, Safe, and Washing Machine, the valid degree range of their lids is [45$^\circ$, 135$^\circ$). For Eyeglasses, the range of the degree between two legs and the frame is set to [0$^\circ$, 81$^\circ$). For Laptop (S) and Laptop (R), the range of the degree between two parts is set to [9$^\circ$, 99$^\circ$). For partial point cloud, we render depth images of complete object instances using the same rendering method described in~\cite{li2021leveraging}. The difference is that we manually set a viewpoint range for each category to ensure that all parts are visible in the rendered depth images. For each articulated posed shape, we render 10 depth images for it, the same number for generating shapes with arbitrary global poses. The dataset will be made public. \vpara{Data samples visualization.} In Figure~\ref{fig_data_train_test_split}, we provide samples of training and test shapes for some categories to give an intuitive understanding w.r.t. intra-category shape variations. For instance, shape variation in the Eyeglasses category mainly lies in the geometry of eyeglass frames. The variation of the Oven category lies in the shape of oven bodies. Shape variations of the Washing Machine category come from the size and the geometric appearance of the body and the lid. Shape variations of the Laptop category come from the size and the geometric appearance of the base and screen of laptop shapes. 
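To make the data generation procedure above concrete, we sketch below how one posed sample could be drawn: a revolute joint angle is sampled within the per-category valid range and a uniformly random global rotation is applied to the resulting point cloud. This is only an illustrative sketch; the function names and the placeholder point cloud are assumptions for exposition rather than part of our released pipeline.
\begin{verbatim}
import numpy as np

# Valid articulation ranges (in degrees) listed above; intervals are half-open.
ARTICULATION_RANGES = {
    "Oven": (45.0, 135.0), "Safe": (45.0, 135.0), "Washing Machine": (45.0, 135.0),
    "Eyeglasses": (0.0, 81.0), "Laptop (S)": (9.0, 99.0), "Laptop (R)": (9.0, 99.0),
}

def sample_articulation_angle(category, rng):
    """Sample one revolute joint angle (in radians) within the valid range."""
    low, high = ARTICULATION_RANGES[category]
    return np.deg2rad(rng.uniform(low, high))

def random_rotation_matrix(rng):
    """Uniformly random global rotation from a normalized Gaussian quaternion."""
    q = rng.normal(size=4)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

rng = np.random.default_rng(0)
theta = sample_articulation_angle("Laptop (S)", rng)   # articulation state
points = rng.normal(size=(4096, 3))                    # placeholder point cloud
rotated = points @ random_rotation_matrix(rng).T       # one random global pose
\end{verbatim}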
\begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/train_tesst_211_compressed.pdf} \vspace{-5pt} \caption{ \footnotesize Samples of training and test shapes. } \label{fig_data_train_test_split} \vspace{-8pt} \end{figure*} \begin{table}[t] \centering \caption{\footnotesize Per-category data splitting. 100 samples in different articulation states are generated for each shape. For each articulated posed sample, 10 randomly rotated samples or 10 partial point clouds with different viewpoints are generated from it. } \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \#Total & 32 & 41 & 42 & 82 & 30 & 50 & 30 \\ \#Train & 28 & 36 & 37 & 73 & 26 & 44 & 24 \\ \#Test & 4 & 5 & 5 & 9 & 4 & 6 & 6 \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-10pt} \label{tb_exp_dataset_meta_info} \end{table} \subsection{Implementation Details} \label{sec_appen_imple_details} \vpara{Training Protocol.} In the training stage, the learning rate is set to $0.0001$, which is decayed by 0.7 every 1000 iterations. The model is trained for 10000 steps with batch size 8 on all datasets. We use the self-supervised reconstruction loss to train the network, with the weight for joint regularization $\lambda$ set to $1.0$ empirically. \vpara{Software and Hardware Configurations.} All models are implemented by PyTorch version 1.9.1, torch\_cluster version 1.5.1, torch\_scatter version 2.0.7, pyrender version 0.1.45, trimesh version 3.2.0, and Python 3.8.8. All the experiments are conducted on a Ubuntu 20.04.3 server with 8 NVIDIA 24576MiB GPUs, 504G RAM, CUDA version 11.6. \vpara{Architecture.} For point convolution, we use a kernel-rotated version kernel point convolution (KPConv~\cite{thomas2019kpconv}) proposed in EPN~\cite{chen2021equivariant}. The size of the (one) convolution kernel is determined by the number of anchor points and the feature dimension. In our implementation, we use 24 anchor points. Feature dimensions at different convolution blocks are set to 64, 128, and 512 respectively. \subsection{Additional Implementation Details for the Baselines} \label{sec_appen_baselines} \vpara{NPCS~\cite{li2020category}.} The original version of NPCS proposed in~\cite{li2020category} which supervised trains a network for category-level articulated object pose estimation. It trains a PointNet++~\cite{qi2017pointnetpp} to regress three kinds of information and a pre-defined normalized part coordinate space. Then in the evaluation process, the RANSAC algorithm is used to calculate the rigid transformation of each part from its predicted normalized part coordinates to the coordinates in the observed shape. To apply NPCS on our data where shapes may undergo an arbitrary global rigid transformation and also use NPCS to predict joint parameters, we make the following two improvements on the original version of NPCS: 1) We change the backbone used in NPCS from PointNet++ to EPN with the mode selection process supervisedly trained~\cite{chen2021equivariant}. Specifically, we regress the feature in each mode to a score that is learned under the supervision of the ground-truth nearest mode. 2) We add a joint axis direction and a pivot point regression branch to learn to predict the joint axis direction and pivot points from the feature of the selected mode. 
After that, the rotation matrix of the selected mode is applied on the predicted result to get the joint axis direction and predicted pivot points of the observed shape. \vpara{Oracle ICP.} This method denotes the strategy using template shapes and observed shapes with ground-truth segmentations for part-by-part registration. For complete point clouds, we first centralize the part shape from both of the template shape and the observed shape, then we iteratively register the template part shape to the observed part shape under 60 initial hypotheses. For partial point clouds, we iteratively register the template part shape to the observed part shape under 60 initial hypotheses together with 10 initial translation hypotheses. \vpara{BSP-Net~\cite{chen2020bsp}.} BSP-Net reconstructs an input shape using implicit fields as the representation by learning to partition the shape using three-level representations, from planes to convexes, and further to the concave. Indices of reconstructed covexes are consistent across different shapes. Thus, we can build a mapping relationship from segmented convex indices to segmentation labels. The relationship can then help us get segmentations for test shapes. However, due to the change of part articulation states and the global pose variation of our input shape, the mapping may not be consistent across different shapes. Thus, we propose to improve the evaluation process of BSP-Net to tackle this problem. Specifically, for each test shape, we find a shape from the train set that is the most similar to the current test shape. Then, we directly use its convex-segmentation mapping relationship to get segments for the test shape. The segments are then used to calculate the segmentation IoU for the test shape. \vpara{NSD~\cite{kawana2020neural}.} Neural Star Domain~\cite{kawana2020neural} decomposes shapes into parameterized primitives. To test its part segmentation performance on our data with arbitrary global pose variations, we adopt the evaluation strategy similar to that used for BSP-Net. \subsection{Experiments on Partial Point Clouds} \label{sec_appen_exp_partial} In this section, we present the experimental results on rendered partial point clouds of both our method and baseline methods. \vpara{Articulated Object Pose Estimation.} In Table~\ref{tb_exp_pose_cmp_partial}, we present the part pose estimation and joint parameter prediction results of our method and baseline methods. Compared to the pose estimation results of different models achieved on complete point clouds, our model can sometimes outperform the supervised NPCS baseline (using EPN as the backbone), such as better values on rotation, translation, and joint prediction parameters' error of Laptop (S) datatset. In Figure~\ref{fig_partial_vis}, we draw some samples for a qualitative evaluation. Moreover, we also provide the visualization of all categories for complete point clouds in Figure~\ref{fig_complete_vis}. In the figure, the point cloud distance function used for Safe is unidirectional Chamfer Distance, while that used for others is still bidirectional Chamfer Distance. Using unidirectional Chamfer Distance can relieve the problem of joint regularization on partial point clouds to some extent since in this way the point cloud completion could be naturally enforced. For instance, reconstruction results for the Safe category are drawn in Figure~\ref{fig_partial_vis}. However, points that are not mapped to any point in the input shape will also affect the point-based joint regularization. 
For simple shapes, using bidirectional Chamfer Distance could also sometimes make the decoder decode complete part shapes, e.g., the reconstructions for Laptop (R). As for the reconstructed reference shape in the canonical object space, good global shape alignment and articulation state normalization can be observed if the joint prediction result is good. For instance, we can observe that the angle between the two parts of Laptop (R) and Laptop (S) is relatively consistent across shapes with different articulation states. Joint regularization, which enforces the connectivity between two adjacent parts both before and after the articulated transformation in the canonical object space, can help make the predicted joint behave like a real joint; based on it, the ``lazy'' network tends to decode part shapes with consistent orientations. However, there is a degenerate solution where the decoded rotation angle is close to zero. In that case, the joint regularization term can be satisfied by decoding ``twisted'' part shapes: since the decoded angle is near zero, the connectivity between the two parts will not be broken when rotating along the decoded joint. Sometimes, decoding angles near zero is a local minimum that the optimization process gets stuck in. In that case, the regularization loss remains large, but the decoded joint parameters are not optimized in the correct direction by the network. The reconstructed shapes in the canonical object space then do not have consistent angles, e.g., the Washing Machine and Safe examples drawn in Figure~\ref{fig_partial_vis}.
\begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/complete-vis-3.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization of experimental results on complete point clouds. For every group of three shapes, from left to right, we show the input point cloud, the reconstructed shape, and the reconstructed canonical object shape. \textbf{We put drawers in an aligned space just for better visualization.} An arbitrary SE(3) transformation is added to the input shapes of drawers as well. Please zoom in for details. } \label{fig_complete_vis} \vspace{-8pt} \end{figure*}
\begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/partial-vis-8.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization of experimental results on \textbf{\textcolor{red}{partial point clouds}}. For every group of three shapes, from left to right, we show the input point cloud, the reconstructed shape, and the reconstructed canonical object shape. Please zoom in for details. } \label{fig_partial_vis} \vspace{-8pt} \end{figure*}
\vpara{Part Segmentation.} In Table~\ref{tb_exp_seg_cmp_partial}, we evaluate the segmentation performance of our method. BSP-Net is not compared here because its data preparation process requires mesh data as input, which is not compatible with the rendered partial point clouds. Oracle ICP uses ground-truth segmentation labels to register each part of the example shape to the observed shape. Thus, it is not surprising that it achieves good performance on the tested categories. However, due to occlusion and the ambiguity caused by occluded part shapes and part symmetry, the part pose estimation performance of ICP is less satisfactory.
\vpara{Shape Reconstruction.} In Table~\ref{tb_exp_completion_cmp_partial}, we evaluate the shape reconstruction performance of our method.
The part-by-part reconstruction strategy used by our method can outperform the EPN-based whole shape reconstruction strategy in most of those categories except for Washing Machine. One possible reason is the poor segmentation performance of our model on shapes in the Washing Machine category. \begin{table}[t] \centering \caption{\footnotesize Comparison between the part pose estimation performance of different methods on all test categories (\textbf{\textcolor{red}{partial point clouds}}). ``R'' denotes rotation errors, whose values are in the format of ``Mean $R_{err}$/Median $R_{err}$''. ``T'' denotes translation errors, whose values are in the format of ``Mean $T_{err}$/Median $T_{err}$''. ``J'' denotes joint parameters-related estimation errors,whose values are in the format of ``Mean $\theta_{err}$/Mean $d_{err}$''. \textbf{ICP could not predict joint parameters.} Therefore, only the results of supervised NPCS and our method on joint prediction are presented. For all metrics, the smaller, the better. \textbf{Bold} numbers for best values, while \emph{\textcolor{blue}{blue}} values represent second best ones. } \resizebox{\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{R} & NPCS (supervised) & \makecell[c]{\textbf{2.51}/\textbf{2.27}, \\ \textbf{{2.93}}/\textbf{{2.64}}} & \makecell[c]{\emph{\textcolor{blue}{4.71}}/\emph{\textcolor{blue}{3.84}}, \\ \textbf{8.56}/\textbf{7.46}} & \makecell[c]{\emph{\textcolor{blue}{7.26}}/\emph{\textcolor{blue}{6.08}}, \\ \emph{\textcolor{blue}{23.39}}/\emph{\textcolor{blue}{17.33}}, \\ \emph{\textcolor{blue}{20.86}}/\emph{\textcolor{blue}{18.76}}} & \makecell[c]{\emph{\textcolor{blue}{21.40}}/\emph{\textcolor{blue}{23.56}}, \\ \emph{\textcolor{blue}{29.90}}/\emph{\textcolor{blue}{32.71}}} & \makecell[c]{\textbf{6.64}/\textbf{5.76}, \\ \textbf{5.43}/\textbf{5.19}} & \makecell[c]{\emph{\textcolor{blue}{9.39}}/\emph{\textcolor{blue}{8.75}}, \\ \textbf{{6.75}}/\emph{\textcolor{blue}{6.14}}} & \makecell[c]{\textbf{21.74}/\textbf{10.80}, \\ \textbf{22.92}/\textbf{10.18}, \\ \textbf{25.10}/\textbf{14.16}, \\ \textbf{7.34}/\textbf{6.83}} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Oracle ICP & \makecell[c]{{21.53}/10.80, \\ {20.68}/20.50} & \makecell[c]{32.42/17.82, \\ 19.39/16.99} & \makecell[c]{73.24/78.73, \\ 68.74/74.09, \\ 69.23/74.53} & \makecell[c]{67.48/73.01, \\ 63.22/68.23} & \makecell[c]{38.72/34.44,\\ 52.28/42.16} & \makecell[c]{30.78/28.674, \\ 42.06/39.25} & \makecell[c]{82.93/82.64,\\61.31/59.51,\\54.39/52.82,\\26.88/29.66} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{\emph{\textcolor{blue}{11.77}}/\emph{\textcolor{blue}{7.87}}, \\ \emph{\textcolor{blue}{10.83}}/\emph{\textcolor{blue}{9.15}}} & \makecell[c]{\textbf{{1.61}}/\textbf{{1.52}}, \\ \emph{\textcolor{blue}{12.81}}/\emph{\textcolor{blue}{12.51}}} & \makecell[c]{\textbf{{4.69}}/\textbf{{3.77}}, \\ \textbf{{9.56}}/\textbf{{5.36}}, \\ \textbf{{7.53}}/\textbf{{6.12}}} & \makecell[c]{\textbf{{10.18}}/\textbf{5.30}, \\ \textbf{11.10}/\textbf{5.22}} & \makecell[c]{\emph{\textcolor{blue}{15.38}}/\emph{\textcolor{blue}{14.46}}, \\ \emph{\textcolor{blue}{21.91}}/\emph{\textcolor{blue}{19.04}}} & \makecell[c]{\textbf{8.50}/\textbf{6.85}, \\ \emph{\textcolor{blue}{6.92}}/\textbf{5.66}} & 
\makecell[c]{\emph{\textcolor{blue}{2.60}}/\emph{\textcolor{blue}{1.79}}, \\ \emph{\textcolor{blue}{2.60}}/\emph{\textcolor{blue}{1.79}}, \\ \emph{\textcolor{blue}{2.06}}/\emph{\textcolor{blue}{1.79}}, \\ \emph{\textcolor{blue}{2.06}}/\emph{\textcolor{blue}{1.79}}} \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{T} & NPCS (supervised) & \makecell[c]{\textbf{0.028}/\textbf{0.030}, \\ \textbf{0.028}/\textbf{0.023}} & \makecell[c]{\textbf{0.034}/\textbf{0.030}, \\ \textbf{0.033}/\textbf{{0.028}}} & \makecell[c]{\textbf{0.085}/\textbf{0.075}, \\ \textbf{0.056}/\textbf{0.052}, \\ \textbf{0.057}/{\textbf{0.049}}} & \makecell[c]{\emph{\textcolor{blue}{0.263}}/\emph{\textcolor{blue}{0.253}}, \\ {0.286}/{0.236}} & \makecell[c]{\textbf{0.022}/\textbf{0.021}, \\ \textbf{0.034}/\textbf{0.034}} & \makecell[c]{\textbf{0.048}/\textbf{0.043}, \\ \textbf{0.047}/\textbf{0.044}} & \makecell[c]{{\textbf{0.441}}/\emph{\textcolor{blue}{0.365}}, \\ \textbf{0.367}/\emph{\textcolor{blue}{0.343}}, \\ \textbf{0.549}/\textbf{0.299}, \\ \textbf{0.081}/\textbf{0.065}} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Oracle ICP & \makecell[c]{{0.324}/{{0.321}}, \\ \emph{\textcolor{blue}{0.169}}/\emph{\textcolor{blue}{0.171}}} & \makecell[c]{0.322/{{0.311}}, \\ \emph{\textcolor{blue}{0.136}}/\emph{\textcolor{blue}{0.144}}} & \makecell[c]{\emph{\textcolor{blue}{0.092}}/\emph{\textcolor{blue}{0.097}}, \\ 0.188/0.197, \\ 0.185/0.193} & \makecell[c]{0.265/0.278, \\ \emph{\textcolor{blue}{0.267}}/\emph{\textcolor{blue}{0.277}}} & \makecell[c]{0.281/{{0.280}}, \\ 0.246/{{0.248}}} & \makecell[c]{0.280/0.289,\\ 0.305/0.306} & \makecell[c]{\emph{\textcolor{blue}{0.193}}/\emph{\textcolor{blue}{0.197}},\\ \emph{\textcolor{blue}{0.161}}/\emph{\textcolor{blue}{0.170}}, \\ \emph{\textcolor{blue}{0.159}}/\textbf{0.164},\\\emph{\textcolor{blue}{0.129}}/{0.132}} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{\emph{\textcolor{blue}{0.071}}/\emph{\textcolor{blue}{0.065}}, \\ {{0.204}}/\emph{\textcolor{blue}{0.120}}} & \makecell[c]{\emph{\textcolor{blue}{0.179}}/\emph{\textcolor{blue}{0.164}}, \\ 0.253 /0.254} & \makecell[c]{{{ 0.219}}/{{0.226}}, \\ \emph{\textcolor{blue}{0.169}}/\emph{\textcolor{blue}{0.166}}, \\ \emph{\textcolor{blue}{0.177}}/\emph{\textcolor{blue}{0.171}}} & \makecell[c]{{\textbf{0.044}}/\textbf{{0.034}}, \\ \textbf{{0.031}}/\textbf{0.025}} & \makecell[c]{\emph{\textcolor{blue}{0.030}}/\emph{\textcolor{blue}{0.030}}, \\ \emph{\textcolor{blue}{0.100}}/\emph{\textcolor{blue}{0.104}}} & \makecell[c]{\emph{\textcolor{blue}{0.088}}/\emph{\textcolor{blue}{0.082}}, \\ \emph{\textcolor{blue}{0.070}}/\emph{\textcolor{blue}{0.067}}} & \makecell[c]{{0.046}/{0.046}, \\ {0.047}/{0.050}, \\ {0.122}/{0.131}, \\ 0.172/0.142} \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{J} & NPCS (supervised) & {28.62}/\textbf{0.092} & \textbf{8.05}/\textbf{0.194} & \makecell[c]{\textbf{20.11}/0.221,\\ \textbf{20.11}/\textbf{0.239}} & {10.91}/0.155 & \textbf{11.23}/\textbf{0.084 } & \textbf{12.25}/\textbf{0.134} & {11.21}/- \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Ours & \textbf{5.24}/{0.105 } & 22.30/0.212 &\makecell[c]{26.96/\textbf{0.087},\\ 26.96/0.260} & \textbf{10.83}/\textbf{0.142} & 55.16/0.170 & 18.02/0.170 & \textbf{7.43}/- \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \end{tabular} } \label{tb_exp_pose_cmp_partial} \end{table} \begin{table}[t] \centering \caption{\footnotesize Comparison between the part segmentation performance of different methods on all tested categories 
(\textbf{\textcolor{red}{partial point clouds}}). The metric used for this task is Segmentation MIoU, calculated on 4096 points for each shape. Values presented in the table are scaled by 100. Larger values indicate better performance. } \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Oven & \makecell[c]{Washing\\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} Oracle ICP & 75.83 & \textbf{73.07} & \textbf{68.92} & 54.01 & \textbf{66.90} & 59.96 & \textbf{58.38} \\ Ours & \textbf{87.07} & 51.73 & 56.80 & \textbf{84.94} & 44.64 & \textbf{86.04} & {45.45} \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-12pt} \label{tb_exp_seg_cmp_partial} \end{table} \begin{table}[t] \centering \caption{\footnotesize Comparison between the shape reconstruction performance of different methods on all tested categories (\textbf{\textcolor{red}{partial point clouds}}). The metric used in this task is unidirectional Chamfer L1 from the original input shape to the reconstructed shape. The smaller, the better. } \resizebox{0.8\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} EPN~\cite{li2021leveraging} & {0.040} & \textbf{0.043} & 0.044 & 0.032 & 0.020 & 0.026 & 0.079 \\ Ours & \textbf{0.035} & 0.062 & \textbf{0.041} & \textbf{0.025} & \textbf{0.019} & \textbf{0.024} & \textbf{0.061} \\ \cline{1-8} \specialrule{0em}{1pt}{0pt} \end{tabular} } \vspace{-10pt} \label{tb_exp_completion_cmp_partial} \end{table} \subsection{Robustness to Input Data Noise} \label{sec_appen_robustness} \begin{table}[t] \centering \caption{\footnotesize Performance comparison of the proposed method on clean data and data corrupted by random normal noise. } \resizebox{1.0\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} Category & Method & Seg. IoU & Mean $R_{err}(^\circ)$ & Median $R_{err}(^\circ)$ & Mean $T_{err}$ & Median $T_{err}$ & Joint Error & Chamfer L1 \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{Oven} & Without noise & \textbf{76.22} & \textbf{7.74}, \textbf{4.07} & \textbf{7.35}, \textbf{3.97} & \textbf{0.054}, {0.067} & \textbf{0.052}, \textbf{0.046} & 20.30/\textbf{0.089} & \textbf{0.025} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & With noise & {55.35} & {9.84}, {11.05} & {9.94}, {9.99} & {0.073}, \textbf{0.063} & {0.073}, {0.057} & \textbf{9.28}/0.310 & 0.049 \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{Laptop (S)} & Without noise & \textbf{82.97} & \textbf{7.34, 10.41} & \textbf{5.16, 9.34} & \textbf{0.040}, \textbf{0.046} & \textbf{0.037}, \textbf{0.042} & \textbf{30.31}/0.122 & \textbf{0.024} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & With noise & {70.04} & {16.01}, {13.27} & {11.47}, {9.52} & {0.082}, {0.067} & {0.075}, {0.065} & {32.84}/\textbf{0.029} & 0.044 \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \end{tabular} } \label{tb_exp_abl_oven_noise} \end{table} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/noise-vis-2.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization for the model performance on input data with random noise. 
Shapes for each three drawn from left to the right are input data corrupted by random normal noise, segmentation, and reconstruction, respectively. We align shapes here for better visualization, while a random SE(3) rigid transformation will be added to them for input. } \label{fig_noose_vis} \vspace{-8pt} \end{figure*} Besides testing the performance of the proposed method on partial point clouds with occlusion patterns caused by viewpoint changes, we also test its effectiveness on noisy data. Specifically, we add noise for each point in the shape by sampling offset for its x/y/z coordinates from a normal distribution, e.g. $\Delta x \sim \mathcal{N}(0, \sigma^2)$, where we set $\sigma = 0.02$ here. Results on Oven and Laptop (S) are presented in Table~\ref{tb_exp_abl_oven_noise}. From the table, we can see the degenerated segmentation IoU on Oven's noisy data, while still relatively good part pose estimation performance. Another discovery is the even better joint axis direction prediction, but larger offset prediction perhaps due to the poor segmentation. Besides, the shape reconstruction quality also drops a lot, probably due to the randomly shifted point coordinates. We can observe a similar phenomenon on Laptop (S). In Figure~\ref{fig_noose_vis}, we draw some examples for a qualitative understanding w.r.t. model's performance on noise data. \subsection{Visualization of Part-Level Equivariant Features} \label{sec_appen_part_level} \begin{figure*}[htbp] \centering \includegraphics[width=0.5\textwidth]{./imgs/vis-part-level-feats-laptop.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization for an intuitive understanding w.r.t. the difference between part features output by the global equivariant network and those output by the part-level equivariant network of the selected mode. Visualized features are obtained by using the PCA algorithm to reduce the high-dimensional features to features of dimension 3, which are further normalized to the range of [0, 1]. We only draw point features of the non-motion part with the moving part in gray. Features drawn on the left are those output by the global equivariant network while those on the right are from the designed part-level equivariant network. } \label{fig_part_level_vis_laptop} \vspace{-8pt} \end{figure*} Aiming for an intuitive understanding w.r.t. the property output by the designed part-level equivariant network, we draw features output by the global equivariant network and part-level equivariant network for some laptop samples in Figure~\ref{fig_part_level_vis_laptop}. From the figure, we can see that the point features of the non-motion part (base) do not change a lot when the moving part (display) rotates an angle. That echoes the wish for the part-level equivariance design to disentangle other parts' rigid transformation from the current part's feature learning. \subsection{Additional Experimental Comparisons and Applications} \label{sec_appen_more_exp_additional} \vpara{Comparison with Other Baselines.} We compare our method with other two baselines that are not discussed in the main body in Table~\ref{tb_exp_pose_cmp_other_baseline}. Firstly, we use KPConv~\cite{thomas2019kpconv} as NPCS's feature backbone and test its performance on our data with arbitrary global pose variation. We can see that NPCS of this version performs terribly compared to our unsupervised method. 
This is because NPCS estimates part poses from the transformation between the predicted NPCS coordinates and the observed shape, which requires the shape reconstruction network to be invariant when the input shape undergoes an arbitrary SE(3) transformation. The poor performance achieved with KPConv may therefore reveal the effectiveness of using equivariant networks for pose and shape factorization. The second baseline is Oracle EPN, where we use EPN to estimate the pose of each individual part. Despite having ground-truth segmentation labels, Oracle EPN cannot infer joint parameters since it estimates the pose of each part individually. Besides, the part symmetry problem also hinders such a strategy from getting good performance to some extent, which will be discussed in Section~\ref{sec_appen_symmetric_parts}.
\begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/all-mani-1.pdf} \vspace{-5pt} \caption{ \footnotesize Reconstructions of shapes in different articulation states and manipulations that change their states. Shapes (in blue and orange) drawn on the two sides are manipulated versions of their nearest reconstructions. The others are reconstructions (in purple and green). } \label{fig_mani_vis} \vspace{-8pt} \end{figure*}
\vpara{Shape Reconstruction and Manipulation.} The predicted joints enable us to manipulate a reconstruction by changing the predicted rotation angles. The resulting shapes then differ from the input shape in their articulation states. In Figure~\ref{fig_mani_vis}, we show some examples for Laptop (S) and Oven.
\begin{table}[t] \centering \caption{\footnotesize Comparison between the part pose estimation performance of different methods on all tested categories. The backbone used for NPCS is KPConv. ``R'' denotes rotation errors, whose values are in the format of ``Mean $R_{err}$/Median $R_{err}$''. ``T'' denotes translation errors, whose values are in the format of ``Mean $T_{err}$/Median $T_{err}$''. ``J'' denotes joint parameters-related estimation errors, whose values are in the format of ``Mean $\theta_{err}$/Mean $d_{err}$''. For all metrics, the smaller, the better. \textbf{{Bold}} numbers for best values.
} \resizebox{\linewidth}{!}{% \begin{tabular}{@{\;}c@{\;}|c|c|c|c|c|c|c|c@{\;}} \midrule \hline \specialrule{0em}{1pt}{0pt} ~ & Method & Oven & \makecell[c]{Washing \\ Machine} & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{R} & NPCS (supervised) & \makecell[c]{{44.16}/{43.09}, \\ {{60.58}}/{{63.35}}} & \makecell[c]{56.20/56.22,\\50.16/51.38} & \makecell[c]{51.99/53.97,\\ 42.48/38.08, \\ 42.29/38.11} & \makecell[c]{55.67/66.44,\\55.63/61.33} & \makecell[c]{11.68/11.10,\\ 43.48/42.22} & \makecell[c]{49.98/68.43,\\ 73.40/83.55} & \makecell[c]{62.73/69.42,\\ 56.16/60.34, \\ 57.23/63.90, \\ 48.76/46.82} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Oracle EPN & \makecell[c]{\textbf{{7.07}}/\textbf{{6.88}}, \\ {16.33}/{9.17}} & \makecell[c]{{{7.97}}/{{7.60}}, \\ {{33.56}}/{{20.49}}} & \makecell[c]{{{54.01}}/{{13.09}}, \\ {{86.12}}/{{65.07}}, \\ {{116.56}}/{{119.23}}} & \makecell[c]{{{18.33}}/{9.73}, \\ {18.98}/{12.75}} & \makecell[c]{{{45.85}}/{{48.59}}, \\ {{38.03}}/{{27.67}}} & \makecell[c]{{20.46}/{14.03}, \\ {21.08}/{19.30}} & \makecell[c]{{{47.88}}/{{47.03}}, \\ {{30.84}}/{{25.23}}, \\ {{35.79}}/{{37.17}}, \\ {{43.83}}/{{39.46}}} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{{{7.74}}/{{7.35}}, \\ \textbf{{4.07}}/\textbf{{3.97}}} & \makecell[c]{\textbf{{7.49}}/\textbf{{7.37}}, \\ \textbf{{19.27}}/\textbf{{19.19}}} & \makecell[c]{\textbf{{8.16}}/\textbf{{8.21}}, \\ \textbf{{12.29}}/\textbf{{10.89}}, \\ \textbf{{12.53}}/\textbf{{9.88}}} & \makecell[c]{\textbf{{7.34}}/\textbf{5.16}, \\ \textbf{10.41}/\textbf{9.34}} & \makecell[c]{\textbf{{9.03}}/\textbf{{9.09}}, \\ \textbf{{13.83}}/\textbf{{13.59}}} & \makecell[c]{\textbf{5.71}/\textbf{3.61}, \\ \textbf{3.64}/\textbf{2.84}} & \makecell[c]{\textbf{{3.18}}/\textbf{{2.73}}, \\ \textbf{{3.18}}/\textbf{{2.73}}, \\ \textbf{{3.18}}/\textbf{{2.71}}, \\ \textbf{{3.18}}/\textbf{{2.71}}} \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{6}{*}{T} & NPCS (supervised) & \makecell[c]{{0.133}/{0.121}, \\ {0.104}/0.091} & \makecell[c]{0.146/0.142,\\ 0.066/0.065} & \makecell[c]{{0.401}/{0.326}, \\ {0.418}/{0.257}, \\ {0.396}/{{0.263}}} & \makecell[c]{0.233/0.203,\\ 0.217/0.169} & \makecell[c]{\textbf{0.055}/\textbf{0.052}, \\ {0.098}/{0.091}} & \makecell[c]{0.179/0.226,\\ 0.161/0.174} & \makecell[c]{{{0.791}}/{{0.742}}, \\ {0.694}/{{0.640}}, \\ {1.005}/{0.942}, \\ {0.271}/{0.240}} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Oracle EPN & \makecell[c]{\textbf{{0.031}}/\textbf{{0.030}}, \\ \textbf{{0.058}}/{{0.052}}} & \makecell[c]{\textbf{{0.046}}/\textbf{{0.044}}, \\ {0.059}/{0.053}} & \makecell[c]{{{0.197}}/{{0.129}}, \\ {{0.128}}/{{0.118}}, \\ {{0.334}}/{{0.292}}} & \makecell[c]{{{0.132}}/{{0.128}}, \\ {{0.117}}/{0.090}} & \makecell[c]{{{0.157}}/0.157, \\ {{0.158}}/{0.151}} & \makecell[c]{\textbf{{0.092}}/\textbf{{0.086}}, \\ \textbf{{0.094}}/\textbf{{0.082}}} & \makecell[c]{{0.204}/{0.187}, \\ {0.177}/{0.166}, \\ {0.161}/{0.146}, \\ { 0.290}/{0.282}} \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Ours & \makecell[c]{{{0.054}}/{{0.052}}, \\ {{0.067}}/\textbf{{0.046}}} & \makecell[c]{\textbf{{0.082}}/\textbf{{0.083}}, \\ \textbf{{0.042}}/\textbf{{0.034}}} & \makecell[c]{{\textbf{{0.054}}}/\textbf{{0.039}}, \\ \textbf{{0.086}}/\textbf{{0.088}}, \\ \textbf{{0.070}}/\textbf{{0.055}}} & \makecell[c]{{\textbf{0.040}}/\textbf{{0.037}}, \\ \textbf{{0.046}}/\textbf{0.042}} & \makecell[c]{{{0.066}}/0.069, \\ \textbf{{0.037}}/\textbf{0.035}} & 
\makecell[c]{\textbf{{0.021}}/\textbf{{0.019}}, \\ \textbf{{0.027}}/\textbf{{0.026}}} & \makecell[c]{\textbf{0.096}/\textbf{0.096}, \\ \textbf{0.097}/\textbf{0.092}, \\ \textbf{0.108}/\textbf{0.105}, \\ \textbf{0.109}/\textbf{0.100}} \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \multirow{2}{*}{J} & NPCS (supervised) & {55.62}/{0.194} & 55.01/0.149 & \makecell[c]{{60.58}/0.329,\\ {60.59}/{0.379}} & 41.40/0.259 & 54.07/0.055 & 57.04/0.070 & 52.48/- \\ \cline{2-9} \specialrule{0em}{1pt}{0pt} ~ & Ours & \textbf{20.30}/\textbf{0.089} & \textbf{28.40}/\textbf{0.118} & \makecell[c]{\textbf{17.75}/\textbf{0.045},\\ \textbf{17.75}/\textbf{0.129}} & \textbf{30.31}/\textbf{0.122} & \textbf{4.36}/\textbf{0.031} & \textbf{17.17}/\textbf{0.169} & \textbf{38.86}/- \\ \cline{1-9} \specialrule{0em}{1pt}{0pt} \end{tabular} } \label{tb_exp_pose_cmp_other_baseline} \end{table} \subsection{Evaluation Strategy for the Category-Level Part Poses} \label{sec_appen_eval} To evaluate the category-level part pose estimation performance of our model, we adopt the evaluation strategy used in~\cite{li2021leveraging}. When calculating the rotation and translation from part shape $X_1$ to $X_2$, we first centralize them by putting their bounding box center to the zero point ($\overline{X_1}$ and $\overline{X_2}$, respectively) and then taking the transformation from $\overline{X_1}$ to $\overline{X_2}$ as their transformation. \section{Discussion on Part Symmetry} \label{sec_appen_symmetric_parts} In this section, we discuss the part-symmetry-related problem that one would encounter in the part pose estimation problem. For rigid objects, the pose of a shape is ambiguous for symmetric shapes. To say a shape $X$ is symmetric, we mean that there is a non-trivial SE(3) transformation $S_{A_0}$ such that $X = S_{A_0}[X]$. In those cases, the performance of the pose estimation algorithm may degenerate due to ambiguous poses. It is a reasonable phenomenon, however. But for articulated objects, we may have symmetric parts even if the whole shape is not a symmetric one. For those shapes, we still expect for accurate part pose estimation. It indicates that estimating part poses for each part individually is not reasonable due to part pose ambiguity. That's why we choose to model the relationship between parts, or specifically, the kinematic chain, joint parameters. Without such object-level inter-part modeling, we cannot get accurate part poses by estimating their pose individually, even using ground-truth segmentation. The comparison between Oracle EPN and our method in Table~\ref{tb_exp_pose_cmp_other_baseline} can demonstrate this point to some extent. \section{Limitations} \label{sec_appen_limitations} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{./imgs/partial_washing.pdf} \vspace{-5pt} \caption{ \footnotesize Visualization on partial point clouds of the Washing Machine category. We restrict the range of the viewpoint to a very small one around the original viewpoint where its lid is straight ahead. Even so, the lid of the washing machine in many cases is hard to identify. } \label{fig_partial_washing} \vspace{-8pt} \end{figure*} One of the limitations is the unsatisfactory performance of the model on some categories for partial point clouds, including partial washing machine and partial safe , especially for joint prediction. Reasons may come from the limitation of our proposed method on partial point clouds. 
This may come from the following three aspects: 1) Symmetric shapes that are hard to align globally to a fixed pose, e.g., a single rectangle that may appear in partial washing machine shapes (Figure~\ref{fig_partial_washing}). 2) Occluded parts that are hard to distinguish in a partial point cloud, e.g., the occluded lids of washing machines. Shape and size variations of the lid further make it hard for a network to decide whether the lid is a moving part of the shape or just a valid intra-category shape variation of a single part. 3) Missing points around the joint of the shape, which make our point-based joint regularization strategy less effective at finding the real joint direction and the real pivot point when observing two shapes. The joint axis direction prediction results on clean and noisy data also reveal the limitation of the point-based joint regularization strategy to some extent (see Sec.~\ref{sec_appen_robustness}). This limitation may be addressed by introducing a category-level joint prediction strategy that predicts category-common joints for shapes in the same category. Another possible direction is to improve the robustness of the joint regularization strategy to missing points.
{ "arxiv_id": "2302.14309", "language": "en", "timestamp": "2023-03-01T02:09:14", "url": "https://arxiv.org/abs/2302.14309", "yymm": "2302" }
\section{Introduction} Deep neural networks have achieved tremendous success in many computer vision tasks when the training and test data are identically and independently distributed (i.i.d). However, the mismatch between them is common when the model is deployed in the real world (\cite{wang2022variational, li2020domain}). For example, weather changes like rain and fog, and data pre-processing like saturate adjustment and compression can corrupt the test data. Many works show that the common corruptions arising in nature can degrade the performance of models at test time significantly~\citep{hendrycks2019benchmarking,yi2021benchmarking,kar20223d,geirhos2018generalisation, yu2022towards}. In video classification, \cite{yi2021benchmarking} demonstrates the vulnerability of models against corruptions like noise, blur, and weather variations. Various techniques have been proposed to improve the robustness of models against common corruptions (a.k.a.corruption robustness). The most popular direction is increasing the diversity of input training data via data augmentations and applying regularization at training time~\citep{zheng2016improving,wang2021augmax,hendrycks2019augmix,rusak2020increasing, wang2020heterogeneous}. It trains one model and evaluates on all types of corruption. However, the training data is often unavailable because of privacy concerns and policy constraints, making it more challenging to deploy models in the real world. In this work, we focus on the direction of test-time optimization. Under such a scheme, the parameters of one model will be optimized by one type of corrupted test data specifically. Test-time optimization updates the model to fit into the deployment environment at test time, without access to training data. There are several test-time optimization techniques emerging in image-based tasks~\citep{wang2020tent,schneider2020improving,pmlr-v119-liang20a,sun2020test}. However, we find these techniques are not able to generalize to video-based corruption robustness tasks well from empirical analysis. We hypothesize the gap between image and video-based tasks comes from several aspects. Firstly, the corruptions in the video can change temporal information, which requires the model to be both generalizable and robust~\citep{yi2021benchmarking}. Hence, improving model robustness against the corruptions in video like bit error, and frame rate conversion is more challenging. Secondly, video input data has a different format from image data. For example, video data has a much larger size than image data. It is impractical to use a similar batch size as image-based tasks (e.g., batch size of 256 in Tent~\citep{wang2020tent}), though the batch size is an important hyper-parameter in test-time optimization~\citep{wang2020tent,schneider2020improving}. Lastly, these techniques ignore the huge information hidden in the temporal dimension. To improve the video classification model robustness against corruptions consistently, we propose a temporal coherent test-time optimization framework \textbf{TeCo}. TeCo is a test-time optimization technique with two self-supervised objectives. We propose to build our method upon the test-time optimization which updates all the parameters in shallow layers and only normalization layers in the deep layers. It can benefit from both training and test data, and such an optimization strategy remains part of model parameters and statistics obtained at training time and updates the unfrozen parameters with test information. 
Besides, we utilize global and local spatio-temporal information for self-supervision. We use uniform sampling to ensure global information in input video data and optimize the model parameters via entropy minimization. By dense sampling, we extract another local stream that has a smaller time gap between consecutive frames. Due to the smooth and continuous nature of adjacent frames in the video~\citep{li2008unsupervised,wood2016development}, we apply a temporal coherence regularization as a self-supervisor in test-time optimization on the local pathway. As such, our proposed technique enables the model to learn more invariant features against corruption. As a result, TeCo achieves promising robustness improvement on two large-scale video corruption robustness datasets, Mini Kinetics-C and Mini SSV2-C. Its performance is superior across various video classification backbones. TeCo increases average accuracy by 6.5\% across backbones on Mini Kinetics-C and by 4.1\% on Mini SSV2-C, which is better than the baseline methods Tent (1.9\% and 0.9\%) and SHOT (1.9\% and 2.0\%). Additionally, We show that TeCo can guarantee the smoothness between consequent frames at the feature level, which indicates the effectiveness of temporal coherence regularization. We summarize our contributions as follows: \begin{itemize} \item To the best of our knowledge, we make the first attempt to study the test-time optimization techniques for video classification corruption robustness across datasets and model architectures. \item We propose a novel test-time optimization framework TeCo for video classification, which utilizes spatio-temporal information in training and test data to improve corruption robustness. \item For video corruption robustness, TeCo outperforms other baseline test-time optimization techniques significantly and consistently on Mini Kinetics-C and Mini SSV2-C datasets. \end{itemize} \section{Related Works} \subsection{Problem Setting} \textbf{Corruption Robustness.} Deep neural networks are vulnerable to common corruptions generated in real-world deployment~\citep{geirhos2017comparing,hendrycks2019benchmarking}. \cite{hendrycks2019benchmarking} firstly propose ImageNet-C to benchmark the common corruption robustness of deep learning models. It assumes the tested model has no prior knowledge of the corruption arising during test time. The model is trained with clean data while tested on corrupted data. Under such a setting, we are able to estimate the overall robustness of models against corruption. In the following, benchmark studies and techniques across various computer vision tasks are booming~\citep{kar20223d,michaelis2019benchmarking,kamann2020benchmarking}. These studies on corruption robustness bridge the gap between research in well-setup lab environments and deployment in the field. Recently, studies have emerged on corruption robustness in video classification~\citep{yi2021benchmarking,Wu_2020_CVPR}. In this work, we tap the potential of spatio-temporal information in video data and improve the corruption robustness of video classification models during testing. \textbf{Video Classification.} Recently, with the introduction of a number of large-scale video datasets such as Kinetics~\citep{carreira2017quo}, Something-Something V2 (SSV2)~\citep{goyal2017something}, and Sports1M~\citep{karpathy2014large}, video classification has attracted increasing attention. 
Most existing video classification works mainly focus on two perspectives: improving accuracy~\citep{simonyan2014two,wang2016temporal,carreira2017quo,hara2017learning,xie2018rethinking,ECCV2020ysy,bertasius2021space,li2022mvitv2} and model efficiency~\citep{feichtenhofer2019slowfast,lin2019tsm,feichtenhofer2020x3d,kondratyuk2021movinets,wang2021tdn}. However, as mentioned above, techniques for improving video classification corruption robustness remain few. In this paper, we focus on the optimization of video corruption robustness and conduct experiments on Mini Kinetics-C and Mini SSV2-C datasets. The Kinetics dataset relies on spatial semantic information for video classification, while SSV2 contains more temporal information. It enables us to evaluate the effectiveness of our proposed optimization method on these two different types of video data. \vspace{-0.3cm} \subsection{Test-time Optimization} Optimizing the model with unlabeled test data has been used in many machine learning tasks~\citep{shu2022,huang2020neural,shocher2018zero,azimi2022self,niu2022efficient,wang2022continual}. These methods only require the pretrained model and test data for optimization in the inference. Test-Time Training (TTT)~\citep{sun2020test} and TTT++~\citep{NEURIPS2021_b618c321} update models at test-time, but they jointly optimize the supervised loss and the self-supervised auxiliary loss in training. Though these methods demonstrate promising improvement in image-based corruption robustness, there is little progress made in video-based test-time optimization. We make the first attempt to utilize the temporal information for model robustness under the test-time optimization setting. \vspace{-0.3cm} \subsection{Temporal Coherence in Video} The concept of temporal coherence makes an assumption that adjacent frames in video data have semantically similar information~\citep{goroshin2015unsupervised}. The assumption is based on the phenomenon that the visual world is smoothly varying and continuous. The stability of the visual world in the temporal dimension is significant for many studies in biological vision~\citep{li2008unsupervised,wood2016smoothness,wood2016development}. With the assumption, the invariance between neighboring frames in the video can provide self-supervision for many computer vision tasks~\citep{jayaraman2015learning,wiskott2002slow,Dwibedi_2019_CVPR,wang2019learning}. For example, \cite{jayaraman2015learning} exploits motion signal in the egocentric video to regularize image recognition task; \cite{wang2019learning} utilizes the cycle consistency in time to learn visual temporal correspondence. In the corruption robustness problem, we use temporal coherence as a self-supervisor to regularize models to be more invariant against corruption. \section{Method} We propose an end-to-end training framework to improve the robustness of video classification models against corruption at test time. In such a framework, we optimize pre-trained deep learning models during the test time and minimize two objectives without label information. These two objectives correspond to two pathways. The first pathway uses a global stream as input and minimizes the entropy of prediction; the second pathway leverages a local stream as input and regularizes the temporal coherence among consecutive video frames. We integrate these two pathways and named our framework as \textbf{TeCo}. Figure~\ref{framework-fig} outlines our temporal coherent test-time optimization framework. 
The backbone in Figure~\ref{framework-fig} is initialized by a pre-trained model. Any deep video classification model pretrained under a supervised learning scheme can fit into it. \begin{figure}[t] \begin{center} \includegraphics[width=0.85\linewidth]{./images/framework.pdf} \end{center} \caption{The proposed framework TeCo at the test-time optimization stage. TeCo consists of two pathways: the local pathway uses dense video stream $I_{d}$ as input. It passes through the shallow layers $f$ and attention module $h$ to generate an attention-based feature map $z$. Then it applies temporal coherence regularization on the feature map. We update all the parameters in the shallow layers in this pathway; the global pathway uses a sparse video stream $I_{u}$, which captures the long-term information. TeCo minimizes the prediction entropy in the global pathway by updating all the parameters in the shallow layers and only normalization parameters in the deep layers. } \label{framework-fig} \vspace{-0.5cm} \end{figure} \subsection{Test-Time Optimization Strategy} Though BN~\citep{schneider2020improving} and Tent~\citep{wang2020tent} achieve promising performance on image-based test-time optimization tasks, they only show minor improvements on video data and sometimes degrade the robustness on corruptions in our empirical analysis. BN only adapts the mean and variance of normalization layers by test data partially, it limits the searching space of optimization. Tent removes the training time batch normalization statistics fully, while video classification tasks usually have smaller batch size. It can only capture sub-optimal statistics when just relying on test-time data. We perform a simple but effective mechanism by grafting the advantage of these two classic techniques. Following~\citep{schneider2020improving}, we initialize mean $\mu = \alpha \mu_{s} + (1-\alpha) \mu_{t}$ and variance $ \sigma^2 =\alpha \sigma_{s}^2 + (1-\alpha) \sigma_{t}^2$, where $s$ stands for training, and $t$ stands for testing. $\alpha$ is the hyper-parameter that controls the weights of training and test time statistics. At the same time, we update the affine transformation parameters in the normalization layers, which helps the model utilize both training and test-time statistics after adaptation. Apart from normalization layers, we also fine-tune other network parameters (e.g., Conv Layers) in the shallow layers, which are the initial (lower) layers in the network. The shallow layers of robust models generate smoother feature tensors that are better aligned with human perception~\citep{xie2019feature,tsipras2018robustness}. They play the role of denoiser when encountering natural corruptions arising in input data. Inspired by this phenomenon, we update all the parameters in the shallow layers and only normalization layers in the deep layers, as shown in Figure~\ref{framework-fig}. It gives more flexibility to the shallow layers while not interrupting the higher-level semantic information. The effect of different divisions of the shallow and deep layers is explored in the ablation study. \subsection{Global and Local Pathways} In the test-time optimization of TeCo, we use two pathways (i.e, global pathway and local pathway) and two objectives to optimize model. The global pathway uses uniform sampling to extract frames from the input video $I_{u}$. The uniform sampling splits the video into multiple clips with equal length. Then we randomly take one frame from each clip. 
As a result, the selected frames by uniform sampling can capture the global information effectively~\citep{chen2020deep}. In this pathway, we optimize the overall parameters of shallow layers and the transformation parameters in deep normalization layers by minimizing the entropy of prediction. The objective is: \begin{equation} \label{ent-equ} L_{ent}=H(Softmax(g(f(I_{u}))) \end{equation} where $g(\cdot)$ represents the deep layers, $f(\cdot)$ stands for the shallow layers, $H$ is the Shannon entropy of prediction $\hat{y}$ with N classes: $H(\hat{y})=-\sum^{N}_{n=1} p(\hat{y}_{n})\log p(\hat{y}_{n})$. The Shannon entropy serves as the optimization of the global pathway. The local pathway leverages dense sampling to take consecutive frames $I_{d}$, at a random location of the input video, which randomly chooses a location as the center of the sampled sequence. Compared to the large time gap in frames by uniform sampling, the time gap in the frames by dense sampling can be as low as one. In this pathway, the temporal coherence between frames is much stronger. Because the visual world is smoothly varying, the neighboring frames in video data will not change abruptly in the temporal dimension. When common corruptions interrupt the natural structure of video data, we apply a temporal-coherent regularization to the output feature of the shallow layer. The regularization can be achieved by enforcing the feature similarity in consecutive feature maps of corrupted data. We explicitly represent it with the loss function: $L_{coherence} = \sum_{t} || x^{t}-x^{t-i}||_{1}$. We use $l_{1}$ distance to measure the similarity of feature tensors since it penalizes more on outliers and benefits the robustness~\citep{alizadeh2020gradient,bektacs2010comparison}. $x^{t}$ is the tensor at time $t$, which is the output after $f(I^{t}_{d})$. $i$ is the time gap between two neighboring tensors. $L_{coherence}$ encourages the layers to generate smoother features. Correspondingly, the feature extractor will be more invariant to the perturbation, corruption, and disruption in the input data. It leads to more robust video classification models. \subsection{Attentive Temporal Coherence Regularization} When we apply $L_{coherence}$ on feature tenors, we simply assume the frames are varying in a uniform speed because of the same weights along the time dimensions. However, the changes in neighboring frames have different scales in the real world. The example of bungee jumping in Figure~\ref{framework-fig} shows that the change is small at the beginning of the video because the background is fixed. It has only the blue sky and a jumping platform. When the person is reaching the ground, the background changes rapidly, and there are houses and other people appearing. Hence, we propose to assign different weights to different frames. To encourage temporal consistency in the static background, the slower changes in temporal dimension correspond to higher weights in regularization. The weights can be generated by an attention module $h(\cdot)$. In this work, we use the time dimension-only 1D non-local operation~\citep{wang2018non} to obtain the weights along the time dimension. Hence, the attentive temporal coherent regularization can be written as: \begin{equation} L_{att-co} = \sum_{t}||h(f(I_{d}))^{t}-h(f(I_{d}))^{t-i}||_{1}, \end{equation} where $h(\cdot)$ is the attention module. 
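For clarity, we provide a minimal PyTorch-style sketch of the two objectives above (Eq.~\ref{ent-equ} and the attentive temporal coherence regularization); they are combined into the overall objective in the next subsection. The tensor layout $(B, T, C, H, W)$ for the shallow-layer features and the module handles are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import torch

def entropy_loss(logits):
    """Shannon entropy of the softmax prediction, averaged over the batch (Eq. 1)."""
    log_p = logits.log_softmax(dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()

def attentive_coherence_loss(feats, attn, gap=1):
    """Attentive temporal coherence (Eq. 2): l1 distance between attention-weighted
    features of frames that are `gap` steps apart.
    feats: (B, T, C, H, W) shallow-layer features f(I_d) of the dense stream."""
    z = attn(feats)  # h(f(I_d)), same shape as feats
    return (z[:, gap:] - z[:, :-gap]).abs().sum(dim=(2, 3, 4)).mean()

def teco_loss(shallow, deep, attn, clip_uniform, clip_dense, beta=1.0):
    """Overall test-time objective of TeCo (combined as in the next subsection)."""
    ent = entropy_loss(deep(shallow(clip_uniform)))
    coh = attentive_coherence_loss(shallow(clip_dense), attn)
    return ent + beta * coh
\end{verbatim}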
\subsection{Overall Objective Function} To summarize, the global pathway uses $I_{u}$ as input and entropy minimization as its objective, while the local pathway uses $I_{d}$ as input and attentive temporal coherence regularization as its objective. The overall objective function of TeCo can be written as: \begin{equation} L = L_{ent}(I_{u},f,g) +\beta L_{att-co}(I_{d},f,h), \label{eq:overall_equation} \end{equation} where $\beta>0$ is a balancing hyper-parameter.
\section{Experiments} We evaluate the corruption robustness of TeCo on Mini Kinetics-C and Mini SSV2-C. The mean performance on corruptions is the main metric for robustness measurement. We also compare TeCo with other baseline methods across architectures.
\textbf{Dataset.} Kinetics~\citep{carreira2017quo} and Something-Something V2 (SSV2)~\citep{goyal2017something} are two of the most popular large-scale datasets in the video classification community. Kinetics is collected from YouTube and relies mainly on spatial information for classification. As a complement, SSV2 is a first-person video dataset constructed systematically in a lab environment, which contains more temporal changes. We use their variants Mini Kinetics-C and Mini SSV2-C~\citep{yi2021benchmarking} to evaluate the robustness of models against corruptions. The mini-version datasets randomly sample half of the classes from the original large-scale datasets. Hence, Mini Kinetics-C contains around 10K videos in its test set, and Mini SSV2-C has a test set of around 13K videos. These two datasets apply 12 types of corruptions arising in nature to the original clean test sets. Each corruption has 5 levels of severity. Samples from Mini Kinetics-C and Mini SSV2-C are shown in the supplementary material.
\textbf{Metrics.} The classification accuracy on corrupted data is the most common metric to measure the corruption robustness of models. In our experiments, the corrupted datasets have 12 types of corruptions, each with 5 levels of severity. We use $mPC$ (mean performance on corruptions) to evaluate the overall robustness. We compute $mPC$ by averaging the classification accuracy as: $mPC=\frac{1}{12\times 5}\sum_{c=1}^{12} \sum_{s=1}^{5} CA_{c,s},$ where $CA_{c,s}$ is the classification accuracy on corruption $c$ at severity level $s$.
\textbf{Models.} Test-time optimization techniques can usually be applied across architectures. We use several classic 2D and 3D CNN-based architectures as well as a transformer as backbones to show the superiority of our method TeCo. For 2D CNNs, we choose vanilla ResNet18 and its variant TAM-ResNet18~\citep{fan2019more}. The vanilla ResNet18 treats single image frames of the video as input data and only fuses predictions by averaging at the output stage for the overall video classification. TAM-ResNet18 integrates a temporal module into the ResNet architecture, which helps the model store more temporal information for classification. Hence, it obtains improvements in clean accuracy, especially on the SSV2 dataset. For 3D CNNs, we use the 3D version of ResNet18~\citep{hara2017learning} for training and evaluation. The 3D convolution layers enable it to capture sufficient spatio-temporal features in an end-to-end training manner. Since TeCo is model agnostic in principle, we also use the state-of-the-art transformer MViTv2-S~\citep{li2022mvitv2} as a backbone. Different from the CNN-based architectures, the transformer uses Layer Normalization~\citep{ba2016layer}.
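For completeness, the aggregation behind the $mPC$ metric described above can be sketched as follows; we assume the per-corruption, per-severity accuracies have already been measured, and the function name is only illustrative.
\begin{verbatim}
import numpy as np

def mean_performance_on_corruptions(accuracy):
    """mPC: mean accuracy over 12 corruption types and 5 severity levels.
    accuracy[c][s] is the classification accuracy on corruption c at severity s."""
    accuracy = np.asarray(accuracy, dtype=float)
    assert accuracy.shape == (12, 5)
    return accuracy.mean()
\end{verbatim}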
\textbf{Baseline.} Though the video classification area lacks research on test-time optimization, we adopt baseline methods from image-based test-time optimization. BN (test-time batch normalization)~\citep{schneider2020improving} is a simple method that adapts the batch normalization statistics with test-time data. Tent~\citep{wang2020tent} extends BN by updating the transformation parameters of the normalization layers with an entropy minimization objective. SHOT~\citep{pmlr-v119-liang20a} is well-known for combining entropy minimization and pseudo-labeling. TTT~\citep{sun2020test} optimizes the model with a self-supervised auxiliary task at both the training and test stages. Since we only conduct optimization at the test stage and have no access to training data in our setting, we denote this variant as TTT*. We choose rotation prediction~\citep{hendrycks2019using} as the self-supervised auxiliary task in TTT*. All of these methods can improve the corruption robustness of image-based deep learning models. We re-implement them in the video classification frameworks based on the publicly available code.

\textbf{Training and Evaluation Setup.} We separate the experimental setup into three stages: pretraining, test-time optimization, and test-time evaluation. In the pretraining stage, we use uniform sampling to create 16-frame input clips. We then apply multi-scale cropping for data augmentation and resize the cropped frames to 224$\times$224. We train the models with an initial learning rate of 0.01 and a cosine annealing learning rate schedule. In test-time optimization, we initialize the network weights with the pre-trained model and update the model parameters for one epoch. For the standard (no test-time optimization), BN, Tent, and SHOT methods, we use uniform sampling to extract input from the corrupted test data. For TeCo, we apply both uniform and dense sampling to create the input. In all test-time optimization experiments, we follow the offline adaptation setting of~\cite{wang2020tent}. Because the baseline methods have not previously been implemented for video classification, we follow their image classification implementations and tune the hyper-parameters to obtain the best results\footnote{Detailed implementation settings can be found in the appendix.}. For the optimization, we use SGD with momentum. On Mini Kinetics-C, we use a batch size of 32 and a learning rate of 0.001; on Mini SSV2-C, we use the same batch size but a learning rate of 0.00001. After adapting the models with test data, we freeze the model weights and evaluate on the corrupted data. Inference is based only on the global pathway. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{./images/mean_mpc-v2.pdf} \end{center} \caption{Mean robustness of the compared methods on Mini Kinetics-C and Mini SSV2-C. TeCo significantly reduces the gap between model accuracy on clean and corrupted data.} \vspace{-0.5cm} \label{mean-robustness-fig} \end{figure} \begin{table}[htp] \caption{mPC across architectures on Mini Kinetics-C and Mini SSV2-C. TeCo outperforms the other baseline methods on different architectures and datasets.
\textbf{Clean Acc} is the accuracy of the model tested on clean data.} \label{mPC-table} \begin{center} \resizebox{0.98\textwidth}{!}{ \begin{tabular}{cc|c|cccccc} \toprule & \bf Backbone &\bf Clean Acc &\bf Standard & \bf BN &\bf Tent&\bf SHOT &\bf TTT* &\bf TeCo \\ \midrule \multirow{4}{*}{\bf Mini Kinetics-C} &3D ResNet18 & 61.7 &49.4 &50.6 &53.9 &52.6 &54.6& \textbf{56.9} \\ & ResNet18 & 66.0 &51.6 & 53.0 &55.6 & 53.2 & 59.5& \textbf{60.8} \\ & TAM-ResNet18 & 68.5 &55.9 & 57.1 & 53.8 & 58.4 & 62.2&\textbf{63.4}\\ & MViTv2-S &84.4 & 77.9 & 78.0& 79.2 & 78.2 &78.0 &\textbf{80.1} \\ \midrule \multirow{4}{*}{\bf Mini SSV2-C} &3D ResNet18 & 52.2 &39.3 &40.0 & 39.9 & 42.4 & 41.5&\textbf{45.7}\\ & ResNet18 & 30.2 &20.0 & 20.3 & 22.5 & 23.8 & 22.2 &\textbf{24.5} \\ & TAM-ResNet18 & 55.5 &44.2 & 45.0 & 44.5 & 45.2 & 46.7 &\textbf{49.4}\\ & MViTv2-S &56.8 & 48.1 & 48.1 & 48.4 & 48.2 & 48.1 & \textbf{48.5}\\ \bottomrule \end{tabular} } \end{center} \vspace{-0.7cm} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\linewidth]{./images/kinetics-I3D-RESNET18-full-plot-v2.png} \end{center} \caption{Full results on Mini Kinetics-C with a 3D ResNet18 backbone. The accuracy of the standard and BN methods drops sharply when the models encounter severe corruptions such as noise, fog, rain, and contrast change. TeCo remains robust against corruptions even at level-5 severity.} \vspace{-0.3cm} \label{fig:full-kinetics-c} \end{figure} \subsection{Experimental Results} \textbf{mPC across architectures and datasets.} Benefiting from temporal coherence, our method TeCo consistently outperforms image-based test-time optimization methods across architectures and datasets. Table~\ref{mPC-table} shows the mPC of the various methods on Mini Kinetics-C. Compared with standard inference, BN improves the mPC by up to 2\% because it only adapts the BN statistics; Tent, SHOT, and TTT* increase robustness by a larger margin due to their updates of more parameters. However, the improvements are not consistent across architectures: for TAM-ResNet18, Tent degrades performance. We hypothesize that Tent fully discards the training-time BN statistics, but the test-time data is not sufficient for the parameters to reach a good optimum. Though the gap between clean and corruption accuracy is smaller on MViTv2-S, TeCo is still able to improve the mPC by 2.2\%. As a result, our proposed TeCo achieves the best performance, enhancing the mPC by 2.2$\sim$9.2\% across the various architectures. In Figure~\ref{mean-robustness-fig}, we show the mPC for various architectures, including 3D ResNet18, ResNet18, TAM-ResNet18, and MViTv2-S. There is only a small gap of about 5\% between the accuracy on clean data and the mPC achieved by TeCo. On Mini SSV2-C, TeCo surpasses the other baseline methods significantly. Table~\ref{mPC-table} reports the overall mPC of all methods on Mini SSV2-C. We find that the ResNet18-based model shows an obvious degradation in robustness: because it has no module for capturing temporal information apart from output fusion, its performance is relatively poor on Mini SSV2-C, which relies more on temporal information for classification. Similar to the results on Mini Kinetics-C, BN increases the mPC by up to 1\%; Tent, SHOT, and TTT* show diverging improvements across architectures, of up to 3.8\%. TeCo consistently improves the corruption robustness on the various architectures by 0.4$\sim$6.4\%.
Comparing the performance on Mini Kinetics-C and Mini SSV2-C, there is a larger gap between clean accuracy and mPC on Mini SSV2-C in both absolute and relative terms, as shown in Figure~\ref{mean-robustness-fig}. We hypothesize that corrupted temporal information is hard to compensate for, whereas spatial semantics can still be recovered from neighboring frames thanks to temporal coherence.

\textbf{Accuracy w.r.t.\ corruption and severity.} Digging deeper into the robustness of the methods, we find that TeCo achieves superior performance on corruptions at all severity levels. Figure~\ref{fig:full-kinetics-c} shows the complete results on the various corruptions from Mini Kinetics-C, with a 3D ResNet18 backbone. The horizontal axis indicates the severity of the corruption; a value of 0 corresponds to the accuracy on clean test data. We find that the standard and BN methods perform much worse than Tent, SHOT, TTT*, and TeCo when data is corrupted by shot noise, fog, contrast change, saturation change, and rain. We hypothesize that simply adapting the batch normalization statistics cannot correct the shifts caused by these corruptions. Tent, SHOT, TTT*, and TeCo show a similar trend across severity levels. However, TeCo always lies at the top of the trend lines, especially at severity level 5. For instance, TeCo maintains an accuracy of 46.7\% at level-5 contrast change, while the accuracy of the standard and BN methods drops to 12\%. This shows that TeCo benefits from the temporal coherence in video data, in addition to its end-to-end test-time optimization scheme. In the Appendix, we also show that TeCo consistently outperforms the other methods on the various corruptions of Mini SSV2-C. \subsection{Ablation Studies} \textbf{Disentangling the Modules in TeCo.} Table~\ref{ablation-table} shows the utility of three key components of TeCo: the test-time optimization strategy, the integration of global and local pathways, and attentive temporal coherence regularization on local content. The test-time optimization strategy balances training and test-time parameters and statistics, which is more effective than the other baseline methods in improving the corruption robustness of video classification. The global and local pathways provide complementary information for optimization. If we only use uniform sampling for both entropy minimization and temporal coherence regularization, the improvement is as low as 0.1\% on Mini SSV2-C. When we apply the regularization to local content, the mPC increases by 1\% and 1.3\% on Mini Kinetics-C and Mini SSV2-C, respectively. \begin{table}[t] \caption{Ablating components of TeCo on Mini Kinetics-C and Mini SSV2-C. The test-time optimization strategy, the local pathway (l-path), and attentive temporal coherence regularization all improve robustness.} \label{ablation-table} \begin{center} \begin{tabular}{c|c|c | c c c } \toprule model, 3D R18 & Standard & SHOT& TeCo (w/o l-path) & TeCo (uniform) & TeCo \\ \midrule Mini Kinetics-C &49.4 &52.6 &55.9& 56.5 & \textbf{56.9} \\ Mini SSV2-C & 39.3& 42.4& 44.4 & 44.5 & \textbf{45.7} \\ \bottomrule \end{tabular} \end{center} \vspace{-0.7cm} \end{table} \textbf{Hyper-parameter Sensitivity.} Table~\ref{beta-table} shows the impact of $\beta$ in Equation~\ref{eq:overall_equation}; we obtain the best result at $\beta$=1. Table~\ref{stage-table} compares placing the attentive temporal coherence regularization module after different blocks of 3D ResNet18.
The performance after $res_{1-3}$ is similar, while there is a slight drop after $res_{4}$. We hypothesize that deeper layers generate more discriminative features, which benefit from variety rather than consistency for classification. Table~\ref{batchsize-table} studies the impact of batch size on the performance of TeCo. The mPC keeps improving as we increase the batch size from 8 to 32, but drops at a batch size of 64. Because we optimize the model for one epoch for a fair comparison, a larger batch size leads to fewer update iterations. A batch size of 32 balances the amount of data in one batch and the number of iterations in one epoch, which enables the model to obtain the best performance. Hence, TeCo can improve robustness reliably without careful hyper-parameter tuning.

\textbf{Balancing training and test statistics with $\alpha$}. Figure~\ref{data-fig} shows the mPC on Mini Kinetics-C with respect to the number of optimization iterations, for different $\alpha$ and a 3D ResNet18 backbone. The $\alpha$ in Section 3.1 controls the weights of the training and test statistics when initializing the normalization layers. When we set $\alpha$ to a smaller value (e.g., 0.2), the normalization layers initially rely more on the test data; hence, the model converges faster during test-time optimization. After 40 iterations of test-time optimization, the mPC is enhanced from 49.6\% to 55.3\%. As the number of iterations increases, performance improves more slowly and the curves for different $\alpha$ converge. In summary, model robustness is more sensitive to $\alpha$ when less data is used in test-time optimization.

\textbf{Visualizing Space-Time Feature Maps.} Figure~\ref{featuremap-fig} shows the feature maps of a noise-corrupted video after the $res_{1}$ stage. Horizontally, we visualize the features of different frames; the time gap between neighboring frames is $i=1$. The standard and BN methods fail to recover human-recognizable semantics from the noise-corrupted frames. Tent obtains noisy outlines along the temporal dimension. TeCo removes the noise and generates temporally coherent feature maps, indicating that TeCo helps the model obtain smoother and more consistent features. \begin{table}[t] \caption{TeCo ablation experiments with 3D ResNet18 on Mini Kinetics-C. If not specified, the default setting is: the weight $\beta$ of the temporal coherence regularization is 1, the regularization is applied after the $res_2$ stage of the model, test-time optimization uses a batch size of 32, and $\alpha$ is 0.4. Default settings are shown in bold.} \vspace{-0.2cm} \begin{subtable}[h]{0.31\linewidth} \caption{\textbf{$\beta$ in Equation~\ref{eq:overall_equation}}. TeCo applies a balanced weight to the temporal coherence regularization.} \label{beta-table} \begin{center} \begin{tabular}{c|c} \toprule $\beta$& mPC \\ \midrule 0.1& 56.4\\ 0.5 & 56.7\\ \bf 1 & \bf 56.9\\ 5 & 56.6\\ \bottomrule \end{tabular} \end{center} \end{subtable} \hfill \begin{subtable}[h]{0.31\linewidth} \caption{\textbf{Block stage}. The temporal coherence regularization module is added after a given block stage.} \label{stage-table} \begin{center} \begin{tabular}{c|c} \toprule Stage & mPC \\ \midrule $res_{1}$ & 56.6\\ \bf $res_{2}$ & \bf 56.9\\ $res_{3}$ & 56.6\\ $res_{4}$ & 56.2\\ \bottomrule \end{tabular} \end{center} \end{subtable} \hfill \begin{subtable}[h]{0.31\linewidth} \caption{\textbf{Batch size}.
TeCo uses batches of data in test-time optimization.} \label{batchsize-table} \begin{center} \begin{tabular}{c|c} \toprule Batch Size& mPC \\ \midrule 8 & 56.6\\ 16 & 56.7\\ \bf 32 & \bf 56.9\\ 64& 56.6\\ \bottomrule \end{tabular} \end{center} \end{subtable} \vfill \vspace{-0.1cm} \end{table} \begin{figure}[t] \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width=0.83\linewidth]{./images/alpha-iteration-full_v3.png} \end{center} \captionof{figure}{Test-time optimization iterations vs.\ mPC, for various $\alpha$. The mPC improves rapidly at the beginning, and the improvements slow down as more data is used. A smaller $\alpha$ uses more test-data statistics, which enables the performance to converge faster.} \label{data-fig} \end{minipage} \hfill \begin{minipage}{0.48\textwidth} \begin{center} \includegraphics[width=0.97\linewidth]{./images/feature_map.png} \end{center} \captionof{figure}{We visualize $res_{1}$ feature maps of consecutive frames. The time gap between frames is $i=1$. TeCo generates smoother feature maps in both the spatial and temporal dimensions.} \label{featuremap-fig} \end{minipage} \vspace{-0.5cm} \end{figure} \vspace{-0.3cm} \section{Conclusion} TeCo is a test-time optimization technique for improving the corruption robustness of video classification models. It updates model parameters during testing with two self-supervised objectives: entropy minimization on global information and attentive temporal coherence regularization on local information. Benefiting from both long-term and short-term spatio-temporal information, TeCo achieves significant improvements in corruption robustness on Mini Kinetics-C and Mini SSV2-C. We hope TeCo becomes a new baseline that makes video classification models more reliable and robust in real-world deployment. \section{Acknowledgements} This work was done at the Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University. This research is supported in part by the NTU-PKU Joint Research Institute (a collaboration between Nanyang Technological University and Peking University that is sponsored by a donation from the Ng Teng Fong Charitable Foundation). This work is also supported by the Research Grants Council (RGC) of Hong Kong through the Early Career Scheme (ECS) under Grant 21200522, CityU Applied Research Grant (ARG) 9667244, and the Sichuan Science and Technology Program 2022NSFSC0551.
{ "arxiv_id": "2302.14326", "language": "en", "timestamp": "2023-03-01T02:09:37", "url": "https://arxiv.org/abs/2302.14326", "yymm": "2302" }
\section{Introduction} We believe in the essentiality of maintaining high ethical standards when conducting and evaluating computer security research. As examples of the field's\footnote{When we say ``the field'', we refer to the computer security research field even though our team is composed of both computer security researchers and a moral philosopher. We use this terminology because our primary goal is to contribute to the computer security research field.} commitment to moral considerations, conference program committees are leveraging ethics review boards and authors are discussing ethics in submissions. There also exist tools to help community members make adequate moral decisions, such as the 2012 Menlo Report~\cite{Menlo} and recent author guidelines in conference calls for papers, e.g.,~\cite{Oakland2023CFP,Oakland2022CFP,USENIXSecurity2023CFP}. However, challenges still arise. Central to these challenges is that in some cases there may \textit{not} be universal agreement on what constitutes an adequate decision. Consider, for example, a hypothetical scenario (our Scenario~A) in which researchers find a vulnerability in a wireless implantable medical device. Assume that the device manufacturer is out of business and, hence, it is impossible to patch the vulnerability. Also, assume that there is \emph{zero} chance of the vulnerability ever being exploited, even if adversaries know about it.\footnote{To enable us to focus on the philosophical aspects of ethics and morality and not become entangled in real-world details, we make simplifying assumptions in this and all our scenarios. We elaborate on this decision in Section~\ref{sec:scenarios-initial}.} Should the researchers \textit{disclose the vulnerability} to the government and the public, thereby respecting patients' right to be informed (a key component of the ``respect for persons'' principle of the Menlo Report~\cite{Menlo} as well as the earlier Belmont Report~\cite{Belmont} and the principle of ``autonomy'' in the Principles of Biomedical Ethics~\cite{BeauchampChildress})? Or, should the researchers, knowing that adversaries would \emph{never} manifest but that a knowledge of the vulnerability's existence could harm patients (who might remove the device from their bodies and hence lose the health benefits out of unnecessary concerns), \textit{not disclose} the vulnerability to the government and the public (thereby respecting the principle of ``beneficence'' and the avoidance of harm, another core element of the Menlo Report~\cite{Menlo}, the Belmont Report~\cite{Belmont}, and the Principles of Biomedical Ethics~\cite{BeauchampChildress})? There are strong arguments for \emph{both} decisions. In a situation with conflicting arguments, how are we as a field to make the right decision? We argue that whatever process is used, that process will benefit from being informed by philosophy's understanding of the different approaches that people take to ethics \dash approaches that \textit{can} result in different people coming to different conclusions. Hence, our research. In short, we seek to contribute to future conversations about \emph{what} is morally right, good, or allowed, and we do so by studying \emph{how}, from a philosophical perspective, to have such discussions. \paragraph{Ethics and Moral Philosophy} The field of ethics / moral philosophy centers the question of what it means to be ``good'', ``morally right'', or ``morally allowed'' (i.e., not prescribed but also not forbidden). 
Even in the field of philosophy, there is no consensus for \textit{what} this exactly means. But, there is a mature understanding for \emph{how} to discuss what it might mean. Conversations about ``good'' and ``bad'' begin by centering a perspective \dash an \emph{ethical framework}. \paragraph{Ethical Frameworks} Ethical frameworks define approaches for reasoning about what is morally right or wrong, i.e., what is ``good'' or ``bad''. Two of today's leading frameworks (or, more precisely, categories of frameworks) are \textit{consequentialist} and \textit{deontological ethics}. Consequentialist ethics centers questions about the impacts (consequences) of different decisions. Under consequentialist ethics, one might assess the benefits and harms of different options before making a decision that maximizes net benefits. Deontological ethics centers questions about duties (deon) and rights. Under deontological ethics, one might ask what rights different stakeholders have, e.g., a right to privacy or a right to autonomy. We center consequentialist and deontological ethics \dash and in particular utilitarianism and Kantian deontological ethics, respectively (Section~\ref{sec:frameworks}) \dash in our study because of (1) their prominence in the field of ethics / moral philosophy (they are two of the three leading frameworks) and (2) their existing impact on the computer security research field's approach to ethics and morality (e.g., the Menlo Report~\cite{Menlo} derives from the Belmont Report~\cite{Belmont}, which itself embeds both consequentialist and deontological elements). We stress, however, that both consequentialist and deontological ethics have limitations and that by centering them we are \emph{not} arguing that anyone adopt a strict consequentialist or deontological perspective. At a minimum, one might include considerations from both frameworks, as the Menlo Report~\cite{Menlo} does. As we discuss more in Section~\ref{sec:scenarios-initial}, modern frameworks include a more critical perspective. Additionally, much of philosophy's discussion of consequentialist and deontological ethics centers a Western perspective. While Western frameworks encompass ethical considerations that are part of non-Western traditions (e.g., about duties towards each other, the nature of fundamentally relating to each other, the outcomes of actions / policies, and so on), each tradition has its own unique history and elements. Although outside the scope of this work, we encourage the computer security research community to gain greater familiarity with other frameworks as well. \paragraph{Our Work} As exemplified by the Menlo Report~\cite{Menlo} and recent calls for papers, e.g.,~\cite{Oakland2023CFP,Oakland2022CFP,USENIXSecurity2023CFP}, there are \emph{already} connections between the computer security research field and ethics / moral philosophy. Our assessment is that many of these connections are implicit. We seek to make these connections explicit and, by doing so, contribute to \emph{how} the field discusses and considers moral questions. To do our research, we composed a team of researchers consisting of both those trained in computer security research \emph{and} those trained in moral philosophy. The computer security researchers on our team have significant prior work addressing and discussing ethical questions in computer security research. 
However, prior to this collaboration, the security researchers approached ethics from a ``we should be good'' and ``having thought carefully and talked with others, I think this is right'' approach rather than from an approach informed by ethics / moral philosophy. The moral philosopher on our team has significant prior work in applied ethics outside of computer security. Our work is thus cross-disciplinary and could be read as both a work in philosophy (particularly normative and applied ethics) and (we believe) a contribution to the computer security research field. \paragraph{Goals, Methods, and Findings} We seek to leverage tools and insights from ethics / moral philosophy to facilitate clear, thoughtful, and rigorous conversations within the computer security research field about what are morally right or allowed decisions / policies / institutions \dash i.e., we seek to contribute to \emph{how} the field discusses moral questions. We do \textit{not} seek to define \textit{what} (morally) ``good'' means (which would be a metaethical question). Our methodology is to: \begin{itemize} \item Develop computer security scenarios reminiscent of classical ethical dilemmas and for which evaluations under different ethical frameworks justify different outcomes; our scenarios are akin to philosophy's classic trolley problems, which we describe later. (Section~\ref{sec:scenarios-initial}.) \item Explore those computer security scenarios using both consequentialist (utilitarianism) and (Kantian) deontological analyses. (Section~\ref{sec:analysis-scenarios-initial}.) \item Develop additional computer security scenarios that, individually, may not pose ethical dilemmas (i.e., there may be stronger agreement for what constitutes a morally right or allowed decision for some scenarios) but that, together, facilitate deeper explorations about moral considerations within the field. (Summarized in Section~\ref{sec:scenarios-more}.) \item Reflect upon the above scenarios and explorations and derive lessons about \emph{how} to have informed conversations about ethics and morality in the computer security research community. (Section~\ref{sec:discussion}.) \end{itemize} Our findings and methodology, per the first three bullets above, are thus in the realm of moral philosophy \emph{and} computer security. We believe that our findings will be valuable to the computer security research field, per the last bullet above. \paragraph{Summary of Our Three Main Scenarios} We summarized Scenario~A above. Scenario~B explores the morality of studying stolen data \dash data that people did not intend to be public. Scenario~C explores what to do if a program committee member encounters a submission containing undisclosed information about their company's product. All our scenarios are based on actual situations encountered within the computer security research community, though we modified the scenarios to make them more conducive to ethical analyses, per our research goals. While reflective of real-world scenarios, our scenarios do not cover the full spectrum of moral dilemmas encountered within the security research community, nor is it our intent to do so. \paragraph{Example Use Case of Our Results: Program Committee Discussions} The security researchers on this team have on multiple occasions encountered the following situation: \begin{itemize} \item A paper is submitted to a peer-reviewed conference. The paper reports on work that one program committee member flags as possibly unethical. 
\item Program committee members discuss the morality of the work but cannot agree; some committee members think it was ethical, and others think it was not. \end{itemize} Such disagreements can be challenging if, for example, some committee members adopt consequentialist perspectives and other committee members adopt deontological perspectives, but the committee members do not realize that they are using different frameworks for evaluating morality. Prior to this collaboration, we (the security researchers on this team) did not have the tools and language to untangle such disagreements. Now, through this collaboration and the exploration of computer security scenarios via the consequentialist and deontological frameworks, we have that language. \paragraph{Example Use Case of Our Results: Discussing Research Path} In many cases, there may already exist clarity for researchers on how to navigate moral questions, e.g., researchers might follow the recommendations in the Menlo Report~\cite{Menlo}. However, there may remain times when clarity does not exist, e.g., when there are tensions between what is morally right from a benefits / harms perspective (consequentialist ethics) and what is right from a duties / rights perspective (deontological ethics). Through the articulation of established ethical frameworks, and through the exploration of computer security scenarios via these frameworks, we hope to help researchers have more methodical and informed discussions about ethics and morality when there are such tensions. Since different frameworks can lead to different conclusions of what is morally right, however, we stress that the frameworks should \emph{not} be used to justify a path that researchers have \emph{a priori} decided that they want to take. Rather, we argue that the ethically correct process is to \textit{center} ethics in the decision of whether or not to do a research project or do some component of the research and accept that sometimes the answer is ``no''. \paragraph{Example Use Case of Our Results: Community Conversations and Education} In ethics / moral philosophy, ethical dilemmas, like the trolley problems in Section~\ref{sec:background:trolley}, often feature prominently in education and also certain debates within the community. Consider, for example, the centrality of trolley problems in the popular book \emph{Justice} by Sandel~\cite{Justice}. We believe that our computer security dilemmas (Section~\ref{sec:scenarios-initial}) and other scenarios (Section~\ref{sec:scenarios-more}) can likewise facilitate community conversations and education. For example, conference ethics review committees (and the field at large) could use these scenarios as starting points for discussing norms, and instructors could use these scenarios to help students understand ethical thinking in the computer security field (and the importance of ethics in the first place). Additionally, we hope that our work inspires others to likewise create and share computer security-themed ethical scenarios and trolley problems for broader community conversations and educational discussions. As we create additional scenarios and educational materials, we will put them online at \url{https://securityethics.cs.washington.edu/}. \section{Motivation and Background} \label{sec:background} We begin in Section~\ref{sec:background:ethics} with a brief background on ethics / moral philosophy and then turn to a background discussion on ethics and computer security research in Section~\ref{sec:background:security}. 
In Section~\ref{sec:background:trolley} we summarize the trolley problem, a classic moral dilemma. \input{ScenarioFigures/trolley} \subsection{Ethics / Moral Philosophy} \label{sec:background:ethics} Ethics / moral philosophy is a field that has existed for centuries. In Western culture, the most well-known ethical frameworks are virtue ethics (most notably developed by ancient Greek philosophers such as Plato and Aristotle), deontological ethics (a famous example from German Enlightenment philosopher Immanuel Kant), and utilitarianism (an example of consequentialist ethics, first developed by Jeremy Bentham and John Stuart Mill). In other cultures, classic ethical frameworks include Confucianism, Daoism (as first coined by Laozi), and Ubuntu. As a field that is centuries old and spans cultures and histories, it is natural that there is no universal consensus on what, precisely, ethics and morality mean. For our work, we use \textit{ethics / moral philosophy} to refer to the (scientific) exploration of \emph{how} to consider, evaluate, and discuss moral questions,\footnote{Whereas ``moral philosophy'' refers to a mainly philosophical endeavor, ``ethics'' also comprises non-strictly-philosophical (e.g., theological) reasoning.} and \textit{morality} to refer to the object of this exploration. For example, ethicists / moral philosophers use ethics in the sense of moral reasoning to determine whether an action, social institution, or set of norms is moral or not~\cite{sep-morality-definition}. Modern ethicists often use \textit{ethics} and \textit{morality} interchangeably.\footnote{Against this, Habermas argues that ``ethics'' refers to questions about the good life, whereas ``morality'' is concerned with what we owe others~\cite{habermas1994justification}.} Thus, the computer security field is not wrong in its use of the term \textit{ethics} to encompass both ethics and morality. When precision is not necessary, we may do so as well. \subsection{Ethics and Computer Security Research} \label{sec:background:security} \input{02b_ethics_security} \subsection{A Classic Moral Dilemma} \label{sec:background:trolley} Ethicists / moral philosophers have, for generations, proposed dilemmas for ethical debate and consideration. A classic dilemma (or, more precisely, family of dilemmas) are the ``trolley problems''. These are dilemmas because they present a choice between two options, both of which contain undesired aspects. Therefore, different ethical frameworks potentially present different answers to such dilemmas. Some authors (among them Philippa Foot herself, who came up with the original trolley problem~\cite{Foot2002MoralDilemmas}) take them to show that people's moral intuitions will most likely diverge in important cases. Figure~\ref{fig:scenario:trolley} presents an archetypical trolley problem. In this trolley problem, a runaway trolley with no brakes is heading straight down a track. Five people are tied to that track. A trolley operator is watching the trolley. They could do nothing, in which case five people would die. The trolley operator could, however, choose to redirect the trolley down a second, adjacent track. If the operator does so, then the trolley would kill only one person\dash the person tied to that adjacent track. 
Philosophers and psychologists have studied people's responses to trolley problems such as in Figure~\ref{fig:scenario:trolley} and, indeed, there is no universal consensus for what constitutes the morally correct action of the trolley operator~\cite{Foot2002MoralDilemmas}. In psychology studies, for example, differences can arise due to the moral intuitions and values of the participant and may vary by culture, e.g.,~\cite{conway2013deontological, conway2018sacrificial, gold_colman_pulford_2014, ChineseandWesternersRespondDifferentlytotheTrolleyDilemmas, yamamoto2020causes, lamont2017bridging}. Variants of the trolley problem feature different outcomes and can elicit different thought processes and decisions.\footnote{An interactive exploration of different trolley problems is available at \url{https://neal.fun/absurd-trolley-problems/}.} As an example variant, the single person on the alternate track might be a young child whereas the five people on the main track might already be near death. In this variant, if the operator does nothing, five near-death people will die; if the operator changes the trolley's track, a single young child will die. As another variant, the five people tied to the main track might have tied themselves there intentionally whereas the single person on the other track might be there against their will. Or, the five people on the main track might have been convicted of war crimes by an international tribunal whereas the person on the alternate track is known to have led a virtuous life. \section{Computer Security Trolley Problems} \label{sec:scenarios-initial} We first describe our scenario generation process (Section~\ref{sec:scenarios-initial:process}) and then present our three scenarios (Sections~\ref{sec:scenario:medical}, \ref{sec:scenario:immoraldata}, and~\ref{sec:scenario:inadvertentdisclosure}). \subsection{Scenario Generation Process} \label{sec:scenarios-initial:process} Our research team used a collaborative and interactive process for scenario generation. After discussing our initial approach, we present our final methodology and scenario selection criteria. \paragraph{Initial Approach} Initially, the security researchers on the team created scenarios representative of scenarios that we (the security researchers) had previously encountered (e.g., as program committee members or as researchers). Our team generated dozens of such scenarios with an initial goal of exhaustively and systematically surfacing the full spectrum of ethical considerations encountered within our field. 
While the generative process was important toward normalizing an understanding of scenarios and moral issues in the field across the entire team (including both the security researchers and the philosopher), these scenarios had several key limitations: \begin{itemize} \item \textbf{Too late.} Some scenarios were framed as ``a program committee member reads a conference submission in which the authors did such-and-such; the program committee member believes that such-and-such should not have been done; should the program committee accept the paper?'' \item \textbf{Open-ended.} Other scenarios were very open-ended and highly unconstrained, e.g., ``here is an issue that a research group encountered, what should they do?'' \item \textbf{Not a dilemma.} Some scenarios had relatively clear and uncontroversial moral implications; we encountered them as program committee members because (for example) of an oversight by the authors of a paper submission, e.g., because the authors assumed that the IRB process was sufficient to cover all aspects of moral decision-making. \item \textbf{Indecisive.} Some of our scenarios did not have conclusive decisions under different ethical frameworks, at least not without significant additional information that would greatly expand the scenarios and make them unwieldy. \end{itemize} The ``too late'' scenarios all shared a common theme: researchers made decision $X$, for some $X$; what should the program committee do if they question the morality of $X$? A discussion of the ethical processes for program committees when encountering such papers is important, and indeed we consider a family of such scenarios in Section~\ref{sec:scenarios-more}. For our core ethical dilemmas (this section), we sought scenarios featuring a decision \emph{before} a controversial act $X$ is committed in the first place. As a concrete example, for our Scenario~B (to be described), an initial version featured a scenario in which a program committee reviews a paper that studies data that some program committee members believe should not have been studied. What should the program committee do? Our final Scenario~B asks: should researchers study that data? The ``open-ended'' scenarios, while representative of what researchers might encounter in the real world, made analyses of the scenarios under established ethical frameworks too unconstrained for focused treatments. As evidenced by our team's internal discussions, when faced with open-ended questions of the form ``what should the researchers do?'', it is possible to spend hours, and hence volumes of written pages, exploring different possible paths forward. While for some scenarios such explorations would be important contributions of their own, those are not the contributions we sought with this work. Rather, we wanted scenarios conducive to short, precise, and focused analyses with minimal (binary) options. The ``not a dilemma'' scenarios were intellectually interesting and important in establishing our team's shared understanding of questions of morality and computer security research. However, because these scenarios were not actual dilemmas, evaluation under different ethical frameworks resulted in the same conclusions and hence were not as generative of philosophical explorations as scenarios that yielded different conclusions under different ethical frameworks. In short, we sought scenarios for which people \dash including computer security research community members \dash might through sound reasoning plausibly disagree. 
Still, our team found value in comparing multiple ``not a dilemma'' scenarios: even if two scenarios do not individually present dilemmas, the comparison of what is morally correct in those two scenarios can contribute insights into how our field reasons about moral questions. We return to the comparison of multiple scenarios in Section~\ref{sec:scenarios-more}. The ``indecisive'' scenarios featured decisions for which all possible choices would result in ``comparable'' benefits / harms that would need extensive empirical work to assess. In the real world, if one were to encounter such a situation, a significant portion of the conversation might center on assessing those empirical claims. For our work, we wanted to center ethical and moral thought processes, not empirical questions. Hence, we sought scenarios without complicated benefits / harms calculus. \paragraph{Revised Approach: Criteria, Creation, and Validation} \label{sec:scenarios-initial:criteria} Informed by the results of our analyses of and conversations about our initial scenarios, our team developed the following criteria for scenario generation: \begin{itemize} \item \textbf{Early.} We sought scenarios that featured moral questions that actors (e.g., researchers) might encounter about their own future actions, not questions about what to do after it has been determined that researchers have already committed a morally questionable act. \item \textbf{Binary options.} We sought scenarios that \dash like the trolley problems \dash have binary options for some actor (e.g., the trolley operator in the trolley problem in Figure~\ref{fig:scenario:trolley} or a research team in computer security-related scenarios). \item \textbf{Dilemmas.} We sought scenarios that were true dilemmas. Specifically, we sought scenarios for which analyses under consequentialist and deontological ethics would yield different conclusions. \item \textbf{Decisive.} We sought scenarios for which analyses under the consequentialist and deontological ethical frameworks were clear, straightforward, and decisive. Sometimes this came at the cost of simplifying and artificially contrasting the ethical traditions to bring out key differences in perspective and focus. \end{itemize} Having these criteria meant that our resulting scenarios were not ``too late'', were not ``open-ended'', were actually dilemmas, and were analyzable in a contrasting way under at least the consequentialist and deontological ethical frameworks. We discuss the consequentialist and deontological ethical frameworks in Section~\ref{sec:frameworks}. Although we use the term ``early'', we observe that every decision is influenced by earlier, preceding decisions, and hence there may exist important-to-consider scenarios even earlier in a timeline. Our research team iterated extensively on the creation of scenarios that satisfied these criteria, over regular meetings throughout late summer and fall 2022 and early 2023. Our iteration was both at a high level, focusing on the scenario's overall setup and context, and at a low level, focusing on fine nuances and details. As we iterated on these scenarios, we presented variants in university seminars (at other universities) and in courses (at the undergraduate and graduate levels). After each presentation, we reflected upon and revised the scenarios as needed to address ambiguities or clarify key aspects relevant to the scenarios' intended moral questions. 
We additionally shared our scenarios with others in the computer security research community for feedback. In addition to being instrumental to the process of scenario creation, this iterative process also served as scenario validation. Specifically, the iterative process with systematic philosophical analyses and external discussions helped us validate that our scenarios met our ``dilemmas'' and ``decisive'' criteria. (That our scenarios met the ``early'' and ``binary options'' criteria was easy to assess by construction.) \paragraph{On the Chosen Scenarios} Our final scenarios were inspired by scenarios previously encountered by the security field, though we modified the scenarios to meet our design criteria. While we initially intended for our scenarios to capture, systematically, the full spectrum of ethical questions that we have encountered within the field, we soon realized that a full analysis of archetypical examples of \emph{all} such questions would be beyond the scope of a single paper, and especially so for a paper (such as this one) intended to dive into and explore the relationship between ethical frameworks and computer security scenarios. Thus, we chose to focus on three scenarios \dash scenarios that are each different from each other and that enable philosophical analyses of the form we sought. Our selected scenarios reflect ethical scenarios encountered within our field: what to do after discovering a vulnerability (Scenario~A), whether to study stolen data (Scenario~B), and what to do if a program committee member learns about an undisclosed vulnerability in their company's product (Scenario~C). We provide additional scenarios in Section~\ref{sec:scenarios-more}, though again our full set of scenarios are not exhaustive. Researchers seeking to create additional scenarios might draw inspiration from their own experiences or from the works surveyed in Section~\ref{sec:background:security}. \paragraph{Based on Reality, But Not Real} We stress that although our scenarios are based on reality, they are \emph{not} realistic. Real-world scenarios generally do not present only a binary option to decision-makers \dash they present a medley of options. Additionally, to enable precise analyses under different ethical frameworks, our scenarios minimize uncertainty. The real world, on the other hand, is full of uncertainty, e.g., uncertainty about when or if an adversary might manifest or the actual benefits / harms of a technology or exploit. Thus, assessing benefits / harms (for consequentialist ethics) and rights violations (for deontological ethics) is significantly more challenging in the real world than in our scenarios. Real-world scenarios may have multiple actors simultaneously making decisions, each of which might impact the other actors; in our scenarios, we consider only a single decision-maker. Additionally, to simplify our analyses, we reduce the impacts of decisions on the decision maker in our core scenarios (Scenarios~A, B, and~C); we add such impacts back in Scenarios~D$^*$ and~F (Section~\ref{sec:scenarios-more}). In the real world, decision-makers may involve others in the decision-making process; our scenario descriptions do not preclude such discussions but leave the final decision in the hands of the specified decision-maker rather than allow for the transference of the decision responsibility to another entity (e.g., a committee or government). 
\paragraph{The Structure of a Scenario} For each scenario, we use a structure similar to Figure~\ref{fig:scenario:trolley} for the trolley problem. Each scenario centers a decision-maker and has: \begin{itemize} \item \textbf{Context:} The ``context'' of the scenario provides the background context for the decision that the actor needs to make. \item \textbf{Choice:} The ``choice'' of the scenario describes two options that the actor must choose between. \end{itemize} In the body of the paper, we use prose to describe the context and choice. Appendix~\ref{ap:figures} provides figures, like Figure~\ref{fig:scenario:trolley}, for each of our scenarios. We defer the figures to the appendix because it is not necessary to read the figures in order to understand this paper. Still, the figures provide self-contained descriptions of each scenario and as such may be useful in other contexts (e.g., presenting a scenario to a class or for discussion with other researchers). \paragraph{Reflection} The significant number of scenarios that we initially created that did not meet all our criteria nonetheless provided valuable insights for the formulation of the above criteria. Here, the interdisciplinary work proved especially fruitful in realizing and establishing these criteria. We encourage future researchers at the intersection of ethical frameworks and computer security to also adopt the above criteria for scenario creation. \subsection{Scenario A: Medical Device Vulnerability} \label{sec:scenario:medical} Scenario~A centers researchers who discover a vulnerability in a wireless implantable medical device. For a self-contained description of Scenario~A, see Figure~\ref{fig:scenario:medical_device} in Appendix~\ref{ap:figures}. \paragraph{Context} Researchers found a vulnerability in a wireless implantable medical device made by a manufacturer that is no longer in business. Existing patients still use the device and new patients are still receiving the device. It is not possible to update the software on the device and patch the vulnerability. Even if the researchers disclose the vulnerability to the public, there is zero probability of the vulnerability being exploited in the wild. There are no field- or industry-wide gains to be made via the public disclosure and discussion of the vulnerability, e.g., the public disclosure of the vulnerability would not teach the field any new lessons about computer security and medical devices. \paragraph{The Choice} For this scenario, a disclosure to some sufficiently large group (e.g., all healthcare professionals who work with the relevant medical condition) would eventually result in a disclosure to the public (through information leakage). Hence, the researchers must choose between not disclosing the vulnerability to anyone or disclosing the vulnerability to the government, the healthcare industry, patients, and the public. If the researchers disclose the vulnerability to the public, then patients may be harmed psychologically (a fear of having a vulnerable / imperfect device even if the likelihood of it being compromised is zero) or physically (the device increases a person's life by ten years; if a patient removes or does not receive the device, they would not receive the health benefits). If the researchers do not disclose the vulnerability to anyone, then patients do not have the option to make an informed choice with respect to whether they keep the device or, for new patients, whether or not they receive the device. 
\paragraph{On this Scenario} In 2008, one of us (T.K.) co-authored a study that discovered and reported on vulnerabilities in a wireless implantable medical device~\cite{HalHeyRan2008ICD}. We thought deeply about ethics and responsible disclosure at the time of that study, and the medical device security field has continued to reflect upon ethics and responsible disclosure thereafter, e.g.,~\cite{FDAadvisoryIMD,IMDguide}. We designed Scenario~A to center patient-focused elements of consideration: the fundamental rights that patients have and the benefits and harms to patients of either disclosing or not disclosing a vulnerability. To center the ethical considerations on the patients, in Scenario~A it is not possible to update the software on the medical device, and hence a traditional coordinated disclosure process of first notifying the manufacturer and then giving them time to respond is not an option (a situation which, unfortunately, is plausible~\cite{IEEESpectrumEye}). Additionally, the healthcare industry has already internalized the importance of computer security for wireless implantable medical devices, e.g., \cite{FDAMITREPlaybook2018,FDAsecurityWeb}, and hence there are no significant field-wide positive impacts from a public disclosure. To meet our scenario design criteria, this scenario presents only two options to the researchers. In a real-world scenario, we anticipate much greater involvement from organizations like the U.S.\ Food and Drug Administration (FDA); the researchers might even cede the final decision to the FDA or another entity such as U.S.\ CERT. Some elements of Scenario~A, like the continued implantation of devices after Company A's bankruptcy, may be implausible. \subsection{Scenario B: Studying Stolen Data} \label{sec:scenario:immoraldata} Scenario~B centers researchers who are trying to decide whether or not to study stolen data. For a self-contained description of this scenario, see Figure~\ref{fig:scenario:immoral_data_jobs} in Appendix~\ref{ap:figures}. \paragraph{Context} Company B offers a service that matches job applicants with jobs. The public believes that Company~B's AI matching system has racial and gender biases. Some people also believe that Company~B's AI system could be manipulated by adversaries. Adversaries compromise Company~B's servers and steal the entirety of their data, including all data about all past job postings, all past job application packets, and the outputs of all past job-applicant matches from Company~B's AI system. The adversaries also steal all the internal details of Company~B's AI matching system, including the underlying ML model. Many victims of the data breach \dash the job applicants \dash have publicly stated their desire for the stolen data to be permanently deleted, everywhere. A research group obtained a copy of the stolen data before all publicly-available copies were deleted. \paragraph{The Choice} The research group wishes to study the stolen data and scientifically assess whether Company~B's AI matching system is, in fact, biased. If it is biased, the researchers seek to measure past impacts of those biases, e.g., by counting the number of applicants not forwarded to employers because of racial or gender biases. Additionally, using the stolen data \dash including both the ML model and knowledge of the contents of past application packets \dash the researchers hope to assess the vulnerability of Company~B's AI system to adversarial manipulation.
Informed by a scientific understanding of the biases and vulnerabilities in Company~B's AI system, the researchers intend to propose technical and policy mechanisms to mitigate such biases and vulnerabilities in the future. The researchers know, however, that the data was stolen and made public over the objections of many job applicants. The researchers must choose between doing nothing (not studying the data) or studying the data and reporting on the results. If the researchers study the data and report on their results, they know not to include anything in their publication that could lead to the identification of any of the job applicants. If they study the data, they also know that they must continue to retain a copy of the data even after publishing their results in case their results are challenged, e.g., by Company~B. \paragraph{On this Scenario} Adjacent to the computer security research field, the human-computer interaction field has an extensive history of considering the morality of studying data that people might have technically made public but that they might not wish to be used in research or that might cause harms if quoted in a publication. These works also consider best practices for how to study such data and how to report on the results. See, for example,~\cite{bruckman2002studying,FieslerTwitter,ZimmerPublic,FieslerReddit,FieslerFan, MarkHam2012}. Within the security research community, it is not uncommon to study datasets containing information that users did not intend to be public. A typical example is the study of the contents of stolen password or other databases~\cite{thomas2017ethical}. An adjacent example is the study of anonymized datasets that are, in actuality, not fully anonymized, e.g.,~\cite{netflixprize,adar2007user,sweenymerge}. The ubiquity of such studies speaks to at least partial agreement within the community on the morality of such studies in general, though we observe that researchers must still pay attention to details. For example, even if researchers study the contents of a leaked password database, they might not include real username and password pairs in a resulting publication, similar to how human-computer interaction researchers might not include full quotes in publications even if quoting from public data, e.g., \cite{MarkHam2012,NatureQuote,bruckman2002studying,FieslerTwitter}. For Scenario~B, we sought a scenario related to stolen data but with content that, by itself, is more sensitive than usernames and passwords. We explored numerous possibilities, including (for example) scenarios related to data from victims of intimate partner violence (motivated by, e.g.,~\cite{chatterjee2018spyware,havron2019clinical} from within the security community and earlier works in adjacent areas, e.g.,~\cite{matthews2017stories,freed2018stalker,freed2017digital}), scenarios related to face recognition systems created from scraping ``public'' images (motivated by, e.g.,~\cite{evtimov2021foggysight,shan2020fawkes}), and scenarios related to stolen data about activists during a revolution (motivated by, e.g.,~\cite{daffalla2021defensive}). Motivated in part by past computer security research on biases and vulnerabilities in remote proctoring software~\cite{burgess2022watching} as well as past concerns about biases in job-applicant matching systems, e.g.,~\cite{techreviewAIbias}, we chose to focus on an AI job-applicant matching system: a system for which job applicants might submit an extensive amount of private information.
We found that the other scenarios were too difficult to fully present and explain in a ``short'' amount of space; the explanation needed to include, for example, a broader context about intimate partner violence or activism. As with all our scenarios, our goals in Section~\ref{sec:scenarios-initial:criteria} influenced our scenario design. Here we highlight two aspects of this scenario that enable it to meet our ``decisive'' goal. First, while one might argue that people's right to privacy extends to data that they intended to be private even after others (illegally) made the data public, we make the right to privacy in Scenario~B even more definitive by having those impacted by the data leak explicitly request that all copies of the data be deleted. Second, if biases are present in the AI system, and if those biases are removed, that would change which applicants are shown to employers. Although preferable in terms of overall fairness, such a change could also do harm, e.g., to the removed applicants. To simplify our consequentialist analyses, we explicitly assume that anyone removed through this process would still be able to find a job that they desire. \subsection{Scenario C: Inadvertent ``Disclosure''} \label{sec:scenario:inadvertentdisclosure} Scenario~C features an ethical dilemma for a conference program committee member. We selected this scenario to be among the three featured in this section because questions of ethics and morality arise not only in research (Scenarios~A and~B), but also during the peer review process (this scenario). Figure~\ref{fig:scenario:inadvertent_disclosure} in Appendix~\ref{ap:figures} provides a self-contained presentation of this scenario. \paragraph{Context} A program committee member works for Company C and, as part of the program committee process, encounters a confidential paper submission detailing an undisclosed vulnerability in Company C's product. Upon reading the results in the submission, the Company C employee realizes that the vulnerability is very serious and that it will take a significant amount of time to patch. The employee feels an obligation to their employer and to Company C's users. But, the program chairs required all committee members to explicitly agree to maintain the confidentiality of all submissions. Company C's leadership team decided that the Company C employee should agree to the confidentiality condition and join the program committee. \paragraph{The Choice} The employee of Company C must decide between doing nothing (not disclosing the vulnerability in the paper to Company C) or disclosing the vulnerability to their employer. \paragraph{On this Scenario} While we are aware of real-world scenarios similar to Scenario~C, we are unaware of written public statements about those situations and consequently include no background citations here. Scenario~C is thus based solely on the memories and experiences of the computer security researchers on this team as well as discussions with others. As with our other scenarios, the real world is more complex, with additional options available to the program committee member, e.g., the program committee member could work with the program chairs to determine a course of action. 
And, as one possibility, if the program committee member contacts the program chairs, the program chairs could assert full decision-making responsibility, thereby not requiring (or even allowing) the Company~C employee to make any subsequent decisions (the Company~C employee already made at least one decision: to discuss with the chairs).
\section{Ethical Frameworks}
\label{sec:frameworks}
Ethical frameworks define approaches for reasoning about whether actions are morally right or wrong. In ethics / moral philosophy, the oft-cited three main ethical frameworks are consequentialist ethics, deontological ethics, and virtue ethics. A fourth oft-discussed framework is discourse ethics. We discuss the first two in Section~\ref{sec:frameworks:CandD}. Although our analyses focus on the first two, we discuss the latter two along with several other frameworks, including principlism (featured in the Belmont and Menlo Reports~\cite{Belmont,Menlo}) in Section~\ref{sec:frameworks:other}. The frameworks we explore have in some cases evolved over considerable periods of time, with a multitude of contributions, objections, and adaptations. There can thus exist a vast variety of different branches and nuances within each framework. Since our goal is to explore moral dilemmas in computer security research from the perspective of different ethical frameworks and not to argue, for example, the benefits of one framework over another or for a new theory of ethics for computer security research, we limit our descriptions to the general features of each framework. Our summaries are sufficient to clearly contrast the different frameworks with each other and to obtain clearly distinguishable reasoning and outcomes with regard to our scenarios from Section~\ref{sec:scenarios-initial}. While we believe that our summaries are sufficient to enable security researchers to explore their own problems with these frameworks, we refer interested readers to works such as Baggini and Fosl's \textit{The Ethics Toolkit}~\cite{EthicsToolkit} or online resources such as~\cite{StanfordEthics} and~\cite{SCU:Ethics} for additional information. Further, as we stress elsewhere, we are \emph{not} arguing for the application of any of these frameworks; rather, we are arguing for the use of these frameworks as mechanisms to facilitate thoughtful dialog and inquiry while, for example, applying the principles in the Menlo Report~\cite{Menlo}.
\subsection{Consequentialist and Deontological Ethics}
\label{sec:frameworks:CandD}
As discussed earlier, we center consequentialist and deontological ethics in our analyses because of their prominence in the field of ethics / moral philosophy and because of their existing role in the computer security research community, e.g., their presence in the Menlo Report~\cite{Menlo}.
\paragraph{Consequentialist Ethics} Consequentialism centers the consequences --- the outcomes --- of an action, both positive (benefits) and negative (harms). Each consequentialist theory comprises a value theory (e.g., hedonism) and a moral principle (e.g., impartial maximizationism), according to which an action is morally right exactly when there is no other action with better consequences as measured by the respective value theory. Utilitarianism is an example of consequentialism in which positive and negative outcomes are generally assessed with respect to the well-being (welfare) of people. We use utilitarianism in the consequentialist analyses in this paper.
Under utilitarianism, the right action is the action that produces the greatest net positive well-being. There are three main categories of utilitarianism,\footnote{For the purposes of this paper and the decisions in the scenarios, we focus on direct action utilitarianism and ignore rule utilitarianism.} each corresponding to one of three main theories of well-being:
\begin{itemize}
\item \textbf{Hedonic utilitarianism:} An action is right if it produces the greatest net happiness --- the greatest aggregate happiness over a given set of individuals~\cite{stuart1863utilitarianism}.
\item \textbf{Preference utilitarianism:} An action is right if it enables the greatest number of people to live by their own preferences~\cite{hare1981moral}.
\item \textbf{Objective list utilitarianism:} An action is right if it produces the greatest net positive impacts on the greatest number of people with respect to an objective list of measures~\cite{parfit1984reasons}; example measures are the levels of one's health, wealth, or access to resources.
\end{itemize}
For objective list utilitarianism, the standards to maximize are not subjective desires or preferences, but rather ``objective'' (in the sense of ``applicable to all'') measures such as level of health, wealth, and safety (happiness could also be one standard on the list, though some argue that happiness cannot be objectively measured). These categories are related but distinct. For example, increased health (an objective list measure) can lead to increased happiness (the hedonic measure). Likewise, if someone can live by their preference (preference), then they may be more likely to be happy (hedonic). On the other hand, and as a security-related example, people might prefer to create short passwords or not waste time waiting for software updates to complete (preference), but the use of short passwords and declining software updates could make people's computer systems less secure (an objective list measure). Rather than rely solely on a single definition of well-being and hence a single category of utilitarianism, those evaluating the morality of actions may employ:
\begin{itemize}
\item \textbf{Pluralistic utilitarianism:} Pluralistic utilitarianism considers happiness, preference, objective lists, and other forms of benefits / harms in combination.
\end{itemize}
In moral considerations, a central focus is on the question, ``what is the right decision to make?'' However, the question ``did we make the right decision?''\ is equally important, as it deals with questions of (retrospective) responsibility, redress, and retributive justice. When evaluating the moral quality of an action that has already happened, one view of consequentialism focuses on the actual outcomes regardless of what the likely outcomes were prior to the action. A probabilistic view of consequentialism asks whether the action was likely to have produced a net positive outcome regardless of whether it actually did so. Under the former view, an action that would likely have produced net negative results but that did not is still a right action; under the latter view, the action is not right. Relatedly, when considering what decision to make, direct action utilitarianism focuses on the outcome of an action. Rule utilitarianism focuses on whether the decision follows rules designed to maximize positive net outcomes. Under rule utilitarianism, an action that causes net harm is still right if it follows rules that, across all scenarios, produce the greatest net positive results.
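To make the structural choices behind these categories concrete, the following minimal sketch (ours alone, not tied to any established framework or to the scenarios in this paper; all stakeholder names, weights, and numeric scores are hypothetical) illustrates how a direct action, pluralistic utilitarian comparison of two candidate actions might be organized: each stakeholder receives a hedonic, preference, and objective-list score for each action, and the action with the greater aggregate is deemed right.
\begin{verbatim}
# Illustrative sketch only: a toy direct action utilitarian comparison.
# Stakeholders, actions, weights, and scores are hypothetical; real moral
# evaluation cannot simply be reduced to numeric scoring like this.
from dataclasses import dataclass

@dataclass
class Impact:
    stakeholder: str
    happiness: float   # hedonic measure
    preference: float  # degree to which preferences can be lived by
    objective: float   # objective-list measure (e.g., health, safety)

def net_utility(impacts, weights=(1.0, 1.0, 1.0)):
    # Pluralistic aggregation: a weighted sum of the three measures,
    # summed impartially over all stakeholders.
    wh, wp, wo = weights
    return sum(wh * i.happiness + wp * i.preference + wo * i.objective
               for i in impacts)

action_1 = [Impact("group 1", happiness=-1, preference=-1, objective=+3),
            Impact("group 2", happiness=+1, preference=0,  objective=+1)]
action_2 = [Impact("group 1", happiness=+2, preference=+1, objective=-1),
            Impact("group 2", happiness=0,  preference=0,  objective=0)]

# Direct action utilitarianism: the right action is the one with the
# greater net utility for this particular decision.
print(net_utility(action_1), net_utility(action_2))
\end{verbatim}
The sketch is not a method we advocate; rather, it makes visible the choices that any consequentialist analysis must make explicit: whose well-being counts, which theory of well-being is used, and how impacts are aggregated across people.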
We designed the scenarios in Section~\ref{sec:scenarios-initial} to highlight key points of consideration about benefits and harms in individual situations and not as vehicles to discuss generalizable rules for the field. Hence, in Section~\ref{sec:analysis-scenarios-initial}, we adopt a direct action utilitarian perspective.
\paragraph{Deontological Ethics} Deontological ethics focuses on the moral duties of a given moral actor, such as an individual or an institution. These duties are often specified as direct duties (obligations) toward others\footnote{This is one aspect that differentiates deontological reasoning from consequentialism, which at most posits a general duty to be moral (i.e., produce the best outcomes).} \dash i.e., what does one person (morally) owe others~\cite{kant1785grundlegung}? These duties are often specified in terms of justice, either as negative duties (refrain from doing harm)\footnote{While consequentialism is also concerned about (overall) harm, it does not hold that there are specific duties to particular individuals not to harm them. Rather, it aggregates harms and benefits, such that one given individual might suffer considerable harm if the net benefit for others is positive.} or as positive duties corresponding to certain claims that others have. In modern rights-based theories, these duties correspond to (moral) rights of the moral patient to whom the duty is owed. For example, if one person has a right to privacy, others owe this person (the moral patient) a certain behavior associated with that right.\footnote{These rights are often spelled out as Hohfeldian claim rights and corresponding obligations. Since, as in our scenarios, the moral agents (researchers, program committee members, and so on) incur direct moral obligations, Hohfeld's distinction between claim rights and liberty rights is not important here.} A defining feature of the duty- / rights-based approach of deontological ethics is the focus on the right intention to act. While consequentialist ethics is mainly concerned with outcomes, deontological ethics asks whether the action is undertaken out of a consideration for one's moral duty, or by some other thought process. Only an action that is performed with the intention to discharge a moral duty is considered moral.\footnote{This is not to say that deontological ethics entirely disregards the outcomes of an action. Neo-Kantian versions like John Rawls' ``difference principle'' (unequal distribution of certain goods is just, as long as it also benefits the least well-off), for example, often add a consequentialist aspect to the otherwise deontological reasoning. However, even in these cases the primary factor is individual rights and the intention to discharge duties associated with these rights.} Kant, as one of the main protagonists of deontological ethics, distinguishes between acting morally (i.e., out of consideration for one's moral duty) and legally (e.g., out of consideration for an actual legal framework or out of fear of sanctions). For example, completing a human subjects review process, such as an IRB within the U.S., solely because it is a university requirement is not a moral act. Deontological theories differ widely in their justification of the respective duties. One historical example is Divine Command theory and the duty to fulfill God's will.
Most famously, Kantian ethics derives the moral duties from the faculty of reason that human beings have: because we can reason about what to do and thus control our desires, we have the obligation to do so in order to become autonomous (giving ourselves the moral law). And, since we are all potentially autonomous, we have a duty to treat all other human beings as such, i.e., as ``ends and never purely as means'' in Kant's words; Kant refers to this obligation as the Categorical Imperative. A modern version of this Kantian thought is a specific version of contractualism~\cite{scanlon2000we}, which posits that we should act in a way that cannot be reasonably rejected by anyone. Natural rights theories, on the other hand, take the idea that human beings as moral actors have certain faculties and justify natural (i.e., unalienable) rights (and corresponding duties) from those faculties for all persons (e.g., John Locke's ``life, liberty, and estate'' or the Virginia Bill of Rights)~\cite{locke1988}. Given the influence of Kantian deontological ethics on the Belmont Report~\cite{Belmont} and the Principles of Biomedical Ethics~\cite{BeauchampChildress}, and hence on the Menlo Report~\cite{Menlo} and (sometimes implicit) arguments within the computer security research community, we take a Kantian approach to our ethics analyses. In order to make deontological ethics more tangible for computer security ethics and to contrast it more sharply with consequentialist ethics, in this paper we make a somewhat simplified assumption that deontological ethics conducts moral evaluation in the form of (individual or collective) duties and corresponding (individual) rights, that are spelled out in (absolute) terms of right or wrong. For example, if it is a duty not to harm someone, then killing one person to save five other lives \dash as presented in the classical trolley problem (Figure~\ref{fig:scenario:trolley}) \dash directly interferes with this duty (and the person's right not to be killed) and is therefore wrong, no matter what.\footnote{Of course, there are also pro-tanto-duties, which only hold as long as no more important or pressing duty surfaces. In order to contrast the two traditions more sharply, however, we ignore these in this paper.} In contrast, consequentialist ethics, as exemplified by utilitarianism, conducts moral evaluation in the form of overall well-being (net utility), which allows comparative evaluations. The state in which only one person is dead can thus be a better state than the state in which five people are dead. While deontological ethics focuses on the intention (to discharge one's moral duties and to honor the rights of others to be treated in a certain way), consequentialist ethics focuses on the outcome of an action, policy, social practice, and more. \subsection{Other Ethical Frameworks} \label{sec:frameworks:other} \input{04b_otherframeworks} \section{Analysis of Scenarios A, B, and C} \label{sec:analysis-scenarios-initial} We now turn to using consequentialist and deontological ethics (Section~\ref{sec:frameworks}) to analyze the scenarios in Section~\ref{sec:scenarios-initial}. As we noted in the discussion of our scenario criteria (Section~\ref{sec:scenarios-initial:criteria}), we designed our scenarios to reflect the real world but to also facilitate clear, precise analyses. Real-world scenarios can be and often are significantly more complicated and may present more than a binary option to the decision-makers. 
Decisions may depend more on an ethical risk assessment about uncertain future states. Further, decision-makers must also consider the law, in addition to ethics. We encourage readers to review Scenarios~A, B, and~C first and consider what decisions they would make before reading our analyses. If readers wish, they may complete an online Google Form with their decisions (link available at \url{https://securityethics.cs.washington.edu}) and, upon doing so, see how others chose to respond.\footnote{ The Google Form is anonymous \dash it requires Google authentication but does not reveal any identifiers to the authors of this paper. When interpreting the results of this survey, we caution that no mechanisms, other than Google authentication, are used to protect against the use of different Google accounts to vote multiple times.}
\subsection{Analysis of Scenario A (Medical Device Vulnerability)}
\label{sec:analysis-scenarios-initial:medical}
Here we consider the medical device vulnerability scenario from Section~\ref{sec:scenario:medical} (see also Figure~\ref{fig:scenario:medical_device} in Appendix~\ref{ap:figures}).
\paragraph{Consequentialist Ethics} Physical health in this scenario is an objective measure; if a patient chooses to remove a device or chooses not to obtain one because of a known vulnerability, then they would have a shorter life expectancy. Psychological health in this scenario is also an objective measure; if a patient knows about the vulnerability and still chooses to keep or get the implant, then they could live in fear of a security incident even though the likelihood of an incident is zero (by scenario construction).\footnote{In the real world, decision-makers must also consider family members, loved ones, and other stakeholders; we focus on patients for expositional simplicity.} From a hedonic perspective, the knowledge that one has a shorter life expectancy (if they do not have the device) or the fear of a security incident (if they have the device) could lead to decreased happiness. In addition, the fact that removing or not opting for the device will result in ten years less of potential happiness may also significantly decrease overall happiness. From a preference utilitarian perspective, under the assumption in Figure~\ref{fig:scenario:medical_device} that most patients would prefer not to learn about the vulnerability, not disclosing the vulnerability would maximize the ability of people to live by their preference. Hence, the morally correct decision is to \emph{not disclose the vulnerability}.
\paragraph{Deontological Ethics} Under deontological ethics, the researchers have a duty to respect people's right to informed consent and the right to self-agency. In the medical context, this right to informed consent manifests (for example) as warnings in TV advertisements for medicines. These are fundamental human rights, and not disclosing the vulnerability would violate those rights. Hence, the morally correct decision is to \emph{disclose the vulnerability}. This conclusion holds even if most people would have preferred not to know about the vulnerability.
\paragraph{Informed by the Real World, Not Real} As discussed in Section~\ref{sec:scenarios-initial}, although real-world observations and experiences informed our scenario designs, there are gaps between our scenarios and what one might encounter in the real world.
Rather than choose between only the two provided (binary) options, the researchers might, for example, choose to involve others in the decision-making process or cede the decision responsibility to another entity entirely. In the U.S., the FDA \dash not the researchers \dash could make or strongly contribute to the decision on whether to disclose the vulnerability to the public. Should they choose to disclose the vulnerability, they might work with healthcare providers to thoughtfully and conscientiously craft the message, thereby reducing patient alarm. Given the medical and security contexts, the decision-makers might leverage the Principles of Biomedical Ethics~\cite{BeauchampChildress} and the Menlo Report~\cite{Menlo}. Thus, even if the decision-makers do not solely rely on consequentialist or deontological analyses, and indeed consequentialist and deontological ethics both have limitations, consequentialist and deontological thinking may be part of the final decision-making process.
\subsection{Analysis of Scenario B (Studying Immorally Obtained Data)}
\label{sec:analysis-scenarios-initial:immoraldata}
We now turn to analyzing the second scenario in Section~\ref{sec:scenario:immoraldata} (see also Figure~\ref{fig:scenario:immoral_data_jobs} in Appendix~\ref{ap:figures}).
\paragraph{Consequentialist Ethics} Being able to find a job that one is qualified for is an objective measure of well-being in this scenario. The research has the potential to uncover biases or attack capabilities that can limit people's ability to find jobs. By proposing mechanisms to mitigate these biases or vulnerabilities, the research output can improve the ability of people to find such jobs.\footnote{Recall from Section~\ref{sec:scenario:immoraldata} that addressing biases will improve the ability of some people to find a job (those impacted by biases) but, to simplify the analysis, will not negatively impact the ability of other people to find a job.} Thus, from an objective list utilitarian perspective, the benefits of studying the data are high. Further, the data is already ``public'' and hence harm to job applicants has already happened. Moreover, the number of people harmed by the theft and release of the data is small compared to the one hundred-fold prediction of future use. Thus, from an objective list utilitarian perspective, the morally correct decision is \emph{to study} the data. A hedonic or preference utilitarian would, respectively, observe that analyzing the data could degrade the happiness of the people whose data was stolen and would also prevent them from living by their preference, if they would prefer that the data not be studied. However, the number of people who would benefit (in both happiness and the ability to live by their own preferences) after the data is studied is far greater. Hence, even with the hedonic and preference utilitarian frameworks, the morally correct decision is \emph{to study} the data.
\paragraph{Deontological Ethics} Taking a Kantian deontological view, we observe that people have inalienable rights, including agency and privacy. Those rights extend to data intended to be private, whether or not it remains private. Further, even if the right to privacy did not extend to adversarially-released data after it becomes public, in this scenario, the victims of the data breach have explicitly requested that their data be deleted everywhere.
In order to do their research, the researchers would need to retain a copy of the data, thereby disrespecting the request to delete all data copies. They would also need to retain a copy \emph{after} their research is complete in case their results are challenged, e.g., by Company~B. One might observe that future job applicants have a right to be treated fairly during the job application process, that the research could lead to a fairer AI system, and hence ultimately that the research would result in greater respect for the rights of future job applicants. However, under Kantian deontological ethics, individuals enjoy dignity. In Kant's own terms, this means that individuals may never be treated \emph{merely} as a means, but always also as \emph{ends in themselves}. Violating privacy rights in order to study the data (and prevent future harm) amounts to treating those whose data is studied solely as means to a different end, and is therefore morally wrong. One might ask whether it would be appropriate to contact victims of the data breach and ask if their data can be retained and used for research \dash i.e., to obtain those victims' informed consent. This scenario does not present that option to the researchers. However, even if it did, under a deontological perspective, the act of asking a victim for informed consent in this scenario requires using the stolen data (to obtain victim identity or contact information); using the stolen data in this way is already a violation of privacy. Further, the act of contacting the victims could have unknown consequences. Therefore, under Kantian deontological ethics, the stolen data should \emph{not be studied}.
\paragraph{Informed by the Real World, Not Real} As with Scenario~A, this scenario is informed by real-world experiences and observations but is not real. Researchers in the real world might have a mandatory first step prior to analyzing the data, e.g., if the researchers are in the U.S., they should work with their institution's IRB. The IRB would leverage the principles in the Belmont Report~\cite{Belmont}, which itself includes both consequentialist and deontological reasoning. The researchers may also seek input from others. For example, they may seek input from AI and security ethics experts, who might then also reference consequentialist, deontological, and other ethical frameworks. The researchers might also seek input from populations impacted by the study or non-study of the data, including representatives of people impacted by the data breach and representatives of people who could be harmed by the perpetuation of biases in Company~B's AI system. Moreover, the researchers might offer these groups the option to lead the decision on whether to study the data.
\subsection{Analysis of Scenario C (Inadvertent Data ``Disclosure'')}
\label{sec:analysis-scenarios-initial:inadvertentdisclosure}
We now turn to the third scenario in Section~\ref{sec:scenario:inadvertentdisclosure} (see also Figure~\ref{fig:scenario:inadvertent_disclosure} in Appendix~\ref{ap:figures}).
\paragraph{Consequentialist Ethics} The consequentialist must weigh harms against benefits.
There are harms to authors if the employee of Company C discloses the vulnerability to Company~C \dash the authors will not be able to disclose at their preferred time (preference utilitarianism) and may be unhappy (hedonic utilitarianism) and may have their careers or other aspects of their lives negatively impacted if the early disclosure to Company C limits their impact or ability to publish (career advancement could be a measure of well-being per objective list utilitarianism). However, the harms to Company C's users if the employee does not disclose the vulnerability to Company C are much greater --- without early disclosure, Company C will not be able to protect their users and, as a result, millions of people around the world could be significantly harmed. Hence, the morally correct action is for the employee \emph{to disclose} the vulnerability internally to Company C.
\paragraph{Deontological Ethics} The employee of Company C may feel a sense of duty to their company and to their company's users. However, program committee members also have a duty to respect the autonomy of authors and a duty to respect the confidentiality of the peer review process. Moreover, the employee of Company C agreed to respect this duty when they joined the program committee, as did Company C's leadership team when they granted the employee permission to join the program committee. Further, under Kantian reasoning, the employee could not form a maxim that allowed breaking the confidentiality promise, as otherwise the peer review process and the institution of program committees would not be possible. Thus, the morally correct thing for the Company C employee to do is respect the rights of the authors and the confidentiality of the review process and \emph{not disclose the vulnerability} to Company C.
\paragraph{Informed by the Real World, Not Real} In a real-world scenario, the employee of Company~C might not make the decision on their own. For example, rather than decide between the two options we presented, they might first reach out to the program chairs and ask them to give advice or render a decision. The program chairs might then explore questions such as: should they reach out to the paper's authors, asking for more information about the disclosure timeline? If Company~C's employee assesses the harms of the vulnerability as significant, should the program chairs ask the authors to disclose to Company~C right away? What are the impacts on the scientific peer review process if the program chairs ask the authors to disclose to Company~C? Would the authors feel compelled to grant permission because they want their paper to be accepted even if granting permission is not in their best interest? Since not all companies have members on the program committee, is it morally right to give this company (and their users) advance notice of a vulnerability (even with author permission) solely because one of their employees is on the program committee?
\section{Additional Scenario Contributions}
\label{sec:scenarios-more}
Although Scenarios~A, B, and~C, along with their analyses, are our core contributions, in developing our initial sets of scenarios (per Section~\ref{sec:scenarios-initial:criteria}), we identified numerous other scenarios that we believed would be valuable to document and make available for community discussion. We summarize several additional scenarios, and the reasons for their creation, here. Appendix~\ref{ap:scenarios-more-full} provides more detailed descriptions of these scenarios.
Reading Appendix~\ref{ap:scenarios-more-full} is not necessary to understand the core contributions of this paper. Still, the contents of the appendix may be of interest to those wishing to dive even more deeply into an exploration of computer security research scenarios from the perspective of ethics / moral philosophy. Scenarios~D$^*$ might be of particular interest to researchers working on vulnerability disclosures, and Scenarios~E$^*$ and~F might be of particular interest to program committee members. As with the Google Form for Scenarios~A, B, and~C, we provide Google Forms for Scenarios~D$^*$, E$^*$, and~F at \url{https://securityethics.cs.washington.edu}.
\paragraph{Scenarios~D$^*$: Vulnerability Disclosure}
\label{sec:scenarios:more:disclosure}
This is a family of scenarios, D1 through D7, featuring different considerations with respect to vulnerability disclosure. In the base scenario, Scenario~D1, the morally right decision is obvious across consequentialist and deontological ethics: researchers should disclose a vulnerability to a manufacturer first, before disclosing the vulnerability to the public. (As with all our scenarios, we intentionally simplify our scenarios and limit options; in the real world, other options might include anonymous disclosure or disclosure through an entity such as CERT.) The remaining scenarios in this sequence provide variations that increase the complexity of the moral decision. For example, what if the company is litigious and would block the publication of the research after receiving a disclosure (Scenario~D2)? To aid in the analysis, in Scenario~D2, we also assume that the company does not care about security and will not work on a patch even after a private vulnerability disclosure, and hence disclosing privately to the company first would ultimately lead to the greatest harm to users (the researchers cannot publicly discuss their results, and the company will leave users vulnerable). Next, what if the company in question is in an industry that is highly litigious but it is unknown whether the company itself is litigious (Scenario~D3)? And, for Scenario~D3, it is also known that the company is in an industry that, as a whole, does not care about security and will not begin developing a patch even after a private disclosure, but it is also unknown whether the company itself cares about security or not. Next, what if there is significant uncertainty about the likelihood that adversaries will discover the vulnerability on their own (Scenario~D4)? Or, what if the company is litigious but now it is known that the company cares significantly about security: it would immediately begin working on a patch while at the same time entangling the researchers in a lawsuit (Scenario~D5)? Next, does the calculus change if the lead researcher is a PhD student and the research is the final piece of their dissertation? If the research gets entangled in a legal battle, the PhD student cannot graduate and must either decline the industry offer that they already accepted and stay in graduate school longer or leave graduate school without a PhD. How should the PhD advisor handle such a situation (Scenario~D6)? Lastly, does the calculus change if the company's users are, for the most part, engaged in an illegal activity, e.g., the vulnerability, if exploited, would allow someone to learn the names and email addresses of people sharing non-consensual explicit material (so-called revenge porn) (Scenario~D7)?
\paragraph{Scenarios E$^*$: Submission Raises Ethical Concerns}
\label{sec:scenarios:more:submission}
This family of scenarios relates to the following situation: a program committee reviews a paper that reports on research that should not have been done, e.g., an Internet crawl that caused insulin pump machines in hospitals to crash. The paper is long, and the part that should not have been done (the Internet crawl) is confined to only one small section of the paper (Section 9.3). What decision should the program committee make about the paper? In the base case (Scenario~E1), the program committee has two options: reject the paper, or accept it without any required modification; if the latter option is selected, the authors will receive reviews from the program committee and will have the option to voluntarily revise the paper as they see fit. While offering only two options is consistent with our ``binary decisions'' design goal in Section~\ref{sec:scenarios-initial:criteria}, we decided to add an additional option for Scenario~E2: accept the paper with the relevant results (Section 9.3) removed. We added another option for Scenario~E3: accept the paper but attach a note (written by the program committee) to the paper that explains the ethical concerns. We added these additional options because our goals with these scenarios are different from those in Section~\ref{sec:scenarios-initial:criteria}. A central goal here is to offer scenarios that facilitate conversations within the community, and program committees will likely not consider only the initial two binary options. Our next scenario asks whether the program committee would make a different decision if they know, for certain, that the researchers did extensive testing and tried to minimize the likelihood of crashes and, in fact, thought that they had done so (Scenario~E4). And, does the calculus of the program committee change if they learn that the authors \emph{did} know about the potential for crashes (in general, not for insulin pumps), but that the authors felt the moral responsibility to do their crawls anyway because the crawling results will benefit society (Scenario~E5)? Or, what if the researchers knew there was a risk of crashes (in general, not for insulin pumps) but decided to proceed anyway because they thought that the results would increase the likelihood of their paper being published (Scenario~E6)? Or, what if the scenario is exactly like Scenario~E6 but the researchers were simply lucky and no crashes happened \dash does the absence of any actual harms, even though the researchers believed that their crawls could cause crashes, change the program committee's calculus (Scenario~E7)? Or, what if the scenario is exactly like Scenario~E6 except that the program committee strongly believes that the results in Section 9.3 of the submission should be public, and that not publishing the findings in Section 9.3 of the submission would result in harms to people (Scenario~E8)? Or, does the program committee's calculus change if the researchers were previously naive and, after learning about the impact of their work on insulin pumps, they express significant regret (Scenario~E9)?
\paragraph{Scenario~F: Response to Submission Rejection}
\label{sec:scenarios:more:rejection}
Scenario~F is a continuation of Scenario~E1, from the perspective of the researchers.
Suppose the researchers receive a rejection, along with an explanation from the program committee about how the crawls in Section 9.3 of the submission caused insulin pumps to crash. What should the researchers do? Should they stop working on the project? Should they submit the paper, unmodified, to a new conference? Should they remove Section 9.3 of their paper, pretend that the crawls never existed, and submit to a new conference? Should they add a note to Section 9.3 that explains the harms that their crawls caused and then submit to a new conference?
\section{Discussion}
\label{sec:discussion}
\subsection{Reflection on Analyses}
We begin by reflecting upon our analyses and summarizing key points and observations.
\paragraph{Different Frameworks Can Lead to Different Conclusions} For some moral questions, different ethical frameworks lead to \textit{different} conclusions regarding what is right and wrong.
\paragraph{Different Frameworks Can Lead to the Same Conclusion} For other moral questions, different ethical frameworks lead to the \textit{same} conclusion regarding what is right and wrong.
\paragraph{A Framework Can Fail to Reach a Conclusion} We intentionally designed our scenarios to be ``decisive'', per the goals in Section~\ref{sec:scenarios-initial:criteria}; real-world scenarios may \emph{not} be decisive and may \emph{not} lead to conclusive decisions under either the consequentialist or deontological frameworks. Also, it could be the case that under a framework a certain action is morally permitted, i.e., not necessarily required but also not forbidden.
\paragraph{Ethical Frameworks Can Provide Tools for Discussion} That different ethical frameworks can lead to different conclusions about right and wrong, or can fail to reach a conclusion, may seem intuitive and perhaps obvious, at least in retrospect, to members of the computer security research community. Indeed, given past debates about ethics within the community, it should be clear that there are differences of opinion about what constitutes right and wrong and that some scenarios are highly complicated. A key question is: what should one do when there are differences of opinion or a lack of clarity about what constitutes the right decision? Here is where the tools \dash the frameworks \dash from ethics / moral philosophy can help. In short, they can help decision-makers thoughtfully, methodically, and articulately analyze moral questions; we elaborate more below. In discussions of right or wrong, when there is disagreement, we suggest first surfacing the communicants' underlying values and their frameworks of consideration. Simply knowing that another communicant is centering different values and a different framework may help further a collaborative discussion.
\paragraph{Ethicists Often Use a Combination of Frameworks; Frameworks Are Tools for Thought and Conversation} In this work, we primarily consider consequentialist and deontological ethics. Both of these frameworks have limitations, and we are \emph{not} advocating for strict adherence to either of them. In fact, it is not uncommon for people \dash including modern ethicists \dash to include elements of multiple frameworks (consequentialist, deontological, and other) as they reason through decisions. Within the security research community, the Menlo Report~\cite{Menlo} includes both consequentialist and deontological elements, for example.
On the one hand, the observation above might call into question the value of articulating ethical frameworks in the first place: if people are not strictly consequentialist or deontological, what value is there in exploring scenarios from strict consequentialist or deontological perspectives? We argue that precise analyses of scenarios under different perspectives can help the decision-maker in multiple ways. At a minimum, precise thinking via the ethical frameworks can help slow the decision-making process and encourage thoughtful reflection and contemplation. Additionally, the frameworks can help decision-makers identify which parts of arguments they agree with and which parts they do not and, by doing so, help the decision-maker better articulate their own arguments, even if their arguments are neither consequentialist nor deontological. \paragraph{Sometimes the Morally Correct Action is Not in the Best Interest of the Decision-Maker} In Scenarios~A, B, and~C, we tried to minimize the impact of either decision on the decision-makers themselves. Thus, the decision-makers could focus on the impacts on and rights of others. In the real world, a decision-maker's decision might also impact themselves (e.g., a researcher might desire a publication, and the decision on how to proceed might impact their ability to publish). We explore such situations in Scenarios~D$^*$ and~F. In short, sometimes the morally right decision might \emph{not} be the decision that seems to be in the best interest of the decision-maker. \paragraph{Shifting Morality Earlier} Our scenarios all feature moral questions for decision-makers. However, one might ask (not just for our scenarios, but for the field) how to shift questions of morality earlier, such that the scenarios we consider (or the real world encounters) do not come up. In a trolley problem, an example of ``shifting earlier'' might be to ensure that all trolleys have better, more resilient brakes. For Scenario~A, code escrow might enable the patching of devices even after the manufacturer ceases operation. For Scenario~B, the researchers would not have needed to study the data if the underlying AI algorithms were already unbiased and secure. For Scenario~C, the situation could be mitigated if the conference had pre-specified rules for such situations or if the conference required disclosure before submission; of course, whether such a rule should be in place raises its own ethical questions, as exhibited (for example) by the role of legal threats in Scenarios~D$^*$. \subsection{For Consideration} With the background of our results and our reflections, we now present a collection of considerations for members of the security research community. \paragraph{For Decision-Makers} Decision-makers (researchers, program committees, others) should consider ethics \emph{before} making decisions, rather than after. For certain moral dilemmas (e.g., Scenarios~A, B, and~C), it is possible to pick an outcome and then find the ethical framework that justifies that outcome. We do \emph{not} argue for this practice. Instead, decision-makers should let the decision follow from a disinterested ethical analysis. Toward facilitating disinterested analyses, we encourage decision-makers to explicitly enumerate and articulate any interests that they might have in the results of the decision; such an articulation could be included as part of a positionality statement in a paper. 
\paragraph{For Researchers Writing Papers} For researchers new to ethics, the Menlo Report~\cite{Menlo} provides concrete guidance. The Menlo Report and other ethical frameworks can help researchers reach a conclusion about what is morally right. This paper, we hope, can help researchers consider and discuss morality when there are differences of opinion or uncertainty regarding what to do. Because (1) we believe that the field can grow through the explicit articulation of ethical thought and (2) there can be differences in ethical perspectives and thought (as Section~\ref{sec:analysis-scenarios-initial} shows), we encourage researchers to do more than just apply a single approach (consequentialist, deontological, the principles in the Menlo Report, or otherwise) and then act accordingly. Rather, we encourage researchers to conduct analyses under multiple ethical frameworks \textit{and} include the reasoning for their decisions under the multiple frameworks in their paper submissions and publications. If the frameworks lead to the \textit{same} conclusion, the inclusion of multiple arguments can strengthen the paper's ethics section and can serve as part of the growing foundation for ethical thought in the field. If different frameworks lead to \textit{different} conclusions, and the authors proceed with what is considered morally right under one framework but morally wrong under another, then surfacing those different considerations and the final thought processes can be particularly valuable. To aid in the above, we suggest a process that is familiar to the security community \dash threat modeling \dash but adapted for ethics analyses. This process shares commonality with the Menlo Report~\cite{Menlo} and other approaches to evaluate ethics~\cite{manzeschke2015meestar}. Namely, we suggest that researchers first do a stakeholder analysis to identify all stakeholders potentially impacted by the decision, e.g., using methods from value sensitive design~\cite{friedman2019value}. Then, for each stakeholder, we suggest explicitly identifying the assets that might be impacted by the possible decisions. Then, for each possible decision, for each stakeholder, and for each asset, enumerate the benefits / harms (consequentialist ethics) and the rights supported / violated (deontological ethics). The benefits / harms and rights analyses should consider situations in which no adversaries manifest and situations in which adversaries manifest. We refer to this process as ``ethics modeling'' as it combines elements of both ethical analyses and threat modeling. We further encourage researchers to become familiar with ethical frameworks not deeply considered in this work. An example in the context of computer security and victims of intimate partner violence is care ethics, as considered in Section 6.2 of~\cite{EmilyCare}. \paragraph{For Program Committees Discussing Submissions} We encourage program committees and paper reviewers to become familiar with the different ethical frameworks. When questions of ethics arise in the review process, we encourage program committee discussions to explicitly reference not just \emph{what} the communicants believe is morally right and wrong, but \emph{why} they believe that. The latter \dash the why \dash can explicitly refer to analyses under one or more ethical frameworks. \paragraph{For the Community} We further encourage the community at large to familiarize themselves with different ethical frameworks. 
Those community discussions could leverage the scenarios that we developed over the course of this research. For example, prior to reviewing papers, a program committee could, together, discuss the committee's perspective on the right decisions for the scenarios that we present. From preliminary conversations with members of our community, we believe that such discussions will not lead to a universal consensus. But we believe that the resulting conversations, and the points raised, would be helpful for those community members as they, for example, embark on reviewing papers with possible ethical concerns. \paragraph{For Educators} We encourage educators to include explicit discussions of ethics and ethical frameworks in their courses if they are not already doing so. Our Scenarios~A, B, and~C, by design, do not lead to obvious right and wrong answers. As a result, we have found that our scenarios are particularly conducive to conversations in classes. Educators are welcome to use our scenarios in their classes as well, and we are happy to provide a companion slide deck. As part of our lectures, we have found it generative of in-class discussions to have course participants complete one or more of the Google Forms on \url{https://securityethics.cs.washington.edu}, corresponding to the scenarios in this paper, and then discuss. If educators wish to create new scenarios, we encourage them to consider scenarios that meet the design criteria in Section~\ref{sec:scenarios-initial:criteria}. \paragraph{For Everyone} Creating ethical norms for computer security research is fundamentally challenging because different ethical frameworks can lead to different conclusions about right and wrong. We believe that a more achievable near-term goal is the creation of extensive sets of case studies (like our scenarios) that community members can discuss and learn from. \paragraph{For Us} This is the first public draft of our paper, after extensive iteration and discussion. Although we are confident that our dilemmas in Scenarios~A, B, and~C are true dilemmas (per our criteria in Section~\ref{sec:scenarios-initial:criteria} and our validation methodology), this version of our paper does not report concrete data (as doing so is not the goal of this paper). Our ongoing work seeks to provide such concrete data across cultures and communities. We are additionally preparing other computer scenario descriptions, in the format of Scenarios~A through~F, for community consideration. As we create these scenarios, we will add them to \url{https://securityethics.cs.washington.edu/}. \section{Conclusions} In this paper, we embark on a research collaboration spanning (1) ethics / moral philosophy and (2) computer security research. We develop criteria for computer security-themed trolley problems. We present three such trolley problems (Scenarios~A, B, and~C) and then evaluate those trolley problems under today's main ethical frameworks. We provide additional scenarios that, together, surface additional points of consideration about ethics for the computer security research community. Given the findings of our research, we reflect and offer considerations for the computer security research community. \section*{Acknowledgements} This work was supported in part by the U.S.\ National Science Foundation under awards CNS-2205171 and CNS-2206865, the University of Washington Tech Policy Lab (which receives support from the William and Flora Hewlett Foundation, the John D. and Catherine T. 
MacArthur Foundation, Microsoft, and the Pierre and Pamela Omidyar Fund at the Silicon Valley Community Foundation), and gifts from Google, Meta, Qualcomm, and Woven Planet. We are grateful to everyone who contributed to this project. Thank you to all who offered comments, questions, insights, and conversations, hosted talks, and reviewed preliminary drafts, including Lujo Bauer, Hauke Behrendt, Dan Boneh, Kevin Butler, Aylin Caliskan, Inyoung Cheong, Lorrie Faith Cranor, Sauvik Das, Zakir Durumeric, Kevin Fu, Alex Gantman, Gennie Gebhart, Kurt Hugenberg, Umar Iqbal, Erin Kenneally, David Kohlbrenner, Seth Kohno, Phil Levis, Rachel McAmis, Alexandra Michael, Bryan Parno, Elissa Redmiles, Katharina Reinecke, Franziska Roesner, Stuart Schechter, Sudheesh Singanamalla, Patrick Traynor, Emily Tseng, and Miranda Wei. We also sincerely thank all attendees of past presentations about this work.
\section{Ethics and Computer Security Research: A Short Historical Perspective}
\label{ap:ethics_and_security}
Here we expand on the history that we touched on in Section~\ref{sec:background:security}. \input{02b_ethics_security_longbody}
\section{Additional Ethical Frameworks}
\label{ap:ethics:other}
\input{04b_otherframeworks}
\section{Additional Scenario Contributions}
\label{ap:scenarios-more-full}
Here we provide more details about the scenarios mentioned in Section~\ref{sec:scenarios-more}. Unlike the trolley problem-like scenarios from Section~\ref{sec:scenarios-initial}, for the scenarios here, we loosen the design criteria and no longer require the scenarios to be ``early'', have ``binary decisions'', be ``dilemmas'', and yield ``decisive'' analyses. Instead, our primary goals are for these new scenarios to (1) reflect scenarios encountered in the computer security research field and (2) inspire thoughtful conversation and reflection upon how individuals and the computer security research field at large reason about moral decisions. We assess goal (1) by construction and through reflection upon our own experiences as computer security researchers and, additionally, through the review of our scenarios with other security researchers. Goal (2) is more subjective; we highlight specific points for consideration and reflection in our discussions below.
\subsection{Vulnerability Disclosure Scenarios}
\label{ap:scenario:disclosure}
After discovering a vulnerability in a company's product, computer security researchers often face the following question: do they disclose privately to the company first, before publicly discussing their results (e.g., before publishing a paper)? Or, do they instead publicly discuss their results first, before privately disclosing to the company? Scenario A (Section~\ref{sec:scenario:medical} and Figure~\ref{fig:scenario:medical_device} in Appendix~\ref{ap:figures}) captures a particularly complicated vulnerability disclosure situation in which it is impossible to patch the vulnerable artifact because the maker of the artifact no longer exists. In general, there \emph{is} a company to disclose to, it \emph{is} possible to patch the vulnerable artifact, and the company \emph{will} patch the vulnerability once it is disclosed to them.
Scenario~D1 provides an example of this general setting (Figure~\ref{fig:scenario:disclosure:base} in Appendix~\ref{ap:figures}). By itself, Scenario~D1 is rather uninteresting: coordinated disclosure best practices in the computer security research community would have the researchers privately disclose the vulnerability to the company before any public discussion. We present Scenario~D1 not because it is interesting unto itself but because, as a baseline, it enables the philosophical exploration of variants to Scenario~D1 where the ethical analysis might be different or more complicated. We summarize Scenario~D1 and the sequence of related scenarios below, beginning first with more details about Scenario~D1: \begin{itemize} \item \textbf{Scenario D1, Vulnerability Disclosure (Base Case).} Scenario~D1 (Figure~\ref{fig:scenario:disclosure:base} in Appendix~\ref{ap:figures}) is a generic vulnerability disclosure scenario in which researchers discover a vulnerability in a product made by Company~D. The description of Scenario~D1 contains additional complexities; these complexities exist in order to enable precise comparisons between Scenario~D1 and the other scenarios below. Company~D is known \textit{not} to be litigious. Company~D is known to take security seriously and will begin working on a patch as soon as it learns about a vulnerability. The researchers are tenured full professors who do not need a publication. The timeline for adversaries to manifest without a public vulnerability disclosure is one year; the timeline for how long it will take to develop a patch once Company D knows about the vulnerability is six months; and it will take adversaries three months to weaponize the vulnerability if it were publicly disclosed first. Given these timelines, a private vulnerability disclosure to Company D would give it enough time to patch its product before adversaries manifest. On the other hand, a public disclosure first would make users vulnerable to significant harm for three months (adversaries manifest after three months, a patch is available after six months). \end{itemize} \begin{itemize} \item \textbf{Scenario D2, Vulnerability Disclosure (Legal Threat).} Scenario~D2 (Figure~\ref{fig:scenario:disclosure:legalthreat} in Appendix~\ref{ap:figures}) is a variant of Scenario~D1 in which Company~D is known to be highly litigious and to not consider security seriously. If the researchers disclose privately to the company first, the researchers would become entangled in a legal battle and not be able to publicly disclose the vulnerability. Further, the company will not begin efforts to patch their product. This means that when adversaries manifest (and they are guaranteed to manifest within a year), users will be significantly harmed. It is only after the public learns about the vulnerability, e.g., via a public disclosure or the manifestation of adversaries, that the company will begin to develop a patch. A public disclosure would thus result in users being vulnerable for three months (starting after month three and until after month six). A private disclosure to Company D would result in users being vulnerable for six months (starting after one year and continuing until after eighteen months). 
\item \textbf{Scenario D3, Vulnerability Disclosure (Legal Threat with Uncertainty).} Scenario~D3 (Figure~\ref{fig:scenario:disclosure:legalthreat:uncertainty} in Appendix~\ref{ap:figures}) is like Scenario~D2 except now there is uncertainty whether Company D is highly litigious and whether they take security seriously. It \emph{is} known that Company D is in an industry (Industry~D) that is highly litigious and that does not take security seriously. Compared to Scenario~D2, the central ethical question is thus whether the researchers should assume (with high probability) that Company D is like the rest of Industry~D or whether to give Company~D the benefit of the doubt and assume that it is like any other unknown company. \item \textbf{Scenario D4, Vulnerability Disclosure (Legal Threat with Uncertainty and Uncertain Adversaries).} Scenario~D4 (Figure~\ref{fig:scenario:disclosure:legalthreat:uncertainty:uncertainadvesaries} in Appendix~\ref{ap:figures}) is like Scenario~D3 but adds even greater uncertainty. Whereas the researchers in Scenario~D3 know for a fact that adversaries would manifest within a year, the researchers in this scenario do not know when (or even if) adversaries will manifest if there is no public disclosure. \item \textbf{Scenario D5, Vulnerability Disclosure (Legal Threat, Security Responsible).} Scenario~D5 (Figure~\ref{fig:scenario:disclosure:legalthreat:securityresponsible} in Appendix~\ref{ap:figures}) is like Scenario~D2 except that it separates the legal battle following a private vulnerability disclosure from the protection of users. In this scenario, the day before their planned publication date, the researchers learn that the company \textit{will} start taking computer security vulnerabilities seriously. Given this change in policy for Company~D, if the researchers privately disclose to the company first, they will not be able to publish their results for three years (Company~D remains litigious), but the company will begin working on a patch and users will be protected from any eventual manifestation of adversaries. The researchers are all tenured full professors who would not be significantly harmed if they are unable to publish for three years. \item \textbf{Scenario D6, Vulnerability Disclosure (Legal Threat, Security Responsible, Career-critical Research).} Scenario~D6 (Figure~\ref{fig:scenario:disclosure:legalthreat:securityresponsible:student} in Appendix~\ref{ap:figures}) is like Scenario~D5 except that the lead researcher is a senior PhD student and the planned publication is their final PhD defense and the filing of their dissertation. They will be unable to defend and graduate if their research (the final portion of their dissertation) becomes entangled in a legal battle. If they cannot defend and file their dissertation, then they must decide whether to decline a job offer and remain in graduate school for longer or leave academia without a PhD. The PhD student's department's executive committee met and decided that \dash unless the student's PhD advisor intervenes \dash the student should defend and file their dissertation as planned and \emph{not} notify Company~D first. The department chair additionally told the PhD advisor that it is their responsibility \dash not the student's \dash to decide whether to disrupt the current plans and notify Company~D before the defense and dissertation filing. 
\item \textbf{Scenario D7, Vulnerability Disclosure (Legal Company, Law-breaking Users).} Scenario~D7 (Figure~\ref{fig:scenario:disclosure:legalthreat:usersbreaklaw} in Appendix~\ref{ap:figures}) is like Scenario~D1 in Figure~\ref{fig:scenario:disclosure:base} in that Company~D is not litigious and takes the development of security patches seriously. It is unlike the company in Scenario~D1 in that this scenario's Company~D, although operating legally in a country due to a legal loophole, is used by people who are breaking the law. Specifically, the users with accounts on Company~D's web service use Company~D's web server to share non-consensual explicit material (often called revenge porn), an act that is illegal. \end{itemize} \subsection{Paper Raises Concerns} \label{ap:scenario:submissionconcerns} We now turn to another set of scenarios, beginning with the base scenario, Scenario~E1 (Figure~\ref{fig:scenario:program_committee_concerns:base} in Appendix~\ref{ap:figures}). In Scenario~E1, while reviewing a paper under submission, the program committee determines that part of the research (the research reported in Section 9.3 of the submission) should not have been done. Section 9.3 of the submission presents the results of repeated Internet-wide scans. The program committee observes that the scanning infrastructure could have caused computers, including insulin pumps in hospitals, to crash. Moreover, after discussing with staff at a local hospital, the program committee learns that the researcher's Internet-wide scans \emph{did} cause insulin pumps to crash. We describe key elements of Scenario~E1, as well as the subsequent scenarios in this collection of scenarios, below: \begin{itemize} \item \textbf{Scenario~E1 (Base Case).} In this base case scenario (Figure~\ref{fig:scenario:program_committee_concerns:base} in Appendix~\ref{ap:figures}), program committee members are only given two options: (1) to reject the paper or (2) to accept the paper as-is, with no required modifications. If option (2) is selected, the authors will receive reviews from the program committee and will have the opportunity to revise the paper however they see fit. \item \textbf{Scenario~E2 (Reject, Accept, Remove).} This scenario builds on Scenario~E1. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:moreoptionsremove} in Appendix~\ref{ap:figures}), program committee members are given an additional option: (3) to accept the paper under the condition that the authors agree to remove Section 9.3. \item \textbf{Scenario~E3 (Accept, Reject, Remove, Note).} This scenario builds on Scenario~E2. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:moreoptions} in Appendix~\ref{ap:figures}), program committee members are given an additional option: (4) to accept the paper under the condition that the authors agree to add a clearly visible note to the first page, written by the program committee, detailing the ethical concerns with the methods used in Section 9.3. \item \textbf{Scenario~E4 (Extensive Author Testing).} This scenario builds on Scenario~E3. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:authorstried} in Appendix~\ref{ap:figures}), the program committee learns that the authors conducted extensive, thorough testing and evaluation prior to deploying their crawling infrastructure and believed that their scans would not cause any crashes. 
Out of an abundance of caution, the authors also developed mechanisms to learn whether their system caused any crashes. For example, the authors followed the practices described in~\cite{durumeric2013zmap}. The reason the authors did not learn about crashes to insulin pumps was because of errors made by the operators of the impacted insulin pumps. \item \textbf{Scenario~E5 (Known Risk and Moral Responsibility).} This scenario builds on Scenario~E3. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:authorsknewandfeltduty} in Appendix~\ref{ap:figures}), after extensive testing, the researchers learned that their crawling infrastructure could cause some computers to crash. They decided to proceed with their Internet-wide scans anyway, out of a sense of duty and moral responsibility. Because of the seriousness of the vulnerability, they felt a need to crawl the Internet, identify vulnerable webservers, and then contact those webservers' operators and provide instructions on how to patch. In fact, there is a direct and measurable impact to the security of those webservers (and their users) because of the authors' crawls and subsequent efforts to reach webserver operators. Due to space limitations, the authors did not include any of the above details in their submission \dash the program committee members only learned these details because the program chairs reached out to the authors with questions. \item \textbf{Scenario~E6 (Authors Ignored Risks)}. This scenario builds on Scenario~E3. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:authorsknew} in Appendix~\ref{ap:figures}), the authors knew that their scans could cause crashes but proceeded anyway because they thought that the results of their scans would increase the likelihood of their paper being accepted. The authors believed that a few crashes here and there would be okay since people's computers crash all the time anyway. \item \textbf{Scenario~E7 (Authors Ignored Risks, Luck Prevents Crashes)}. This scenario builds on Scenario~E6. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:authorsknew:nocrashesluck} in Appendix~\ref{ap:figures}), the crawling infrastructure did \emph{not} cause any crashes out of sheer luck --- the researchers accidentally had a bug in their crawling infrastructure that caused extra delays, and the extra delays meant no crashes. Nevertheless, as with Scenario~E6, the researchers believed that their crawls could cause crashes and proceeded anyway because they thought that the results of their scans would increase the likelihood of their paper being accepted. The extra delays in the crawling infrastructure do not impact the correctness of the results in Section 9.3. \item \textbf{Scenario~E8 (Authors Ignored Risks, Section 9.3 Results Critical).} This scenario builds on Scenario~E6, where the researchers knew that their scans could cause crashes but proceeded anyway because they thought that the results of their scans would increase the likelihood of their paper being accepted. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:authorsknew:73critical} in Appendix~\ref{ap:figures}), the program committee believes that it is vital for the results in Section 9.3 to be published. With the publication of the paper, and the results in Section 9.3, the remaining vulnerable webservers will have increased motivation to patch. 
Without the publication of the paper, including Section 9.3, many webservers and hence many users will remain vulnerable. \item \textbf{Scenario~E9 (Authors Ignored Risks, Moral Implications Not Realized).} This scenario builds on Scenario~E6. In this scenario (Figure~\ref{fig:scenario:program_committee_concerns:authorsknew:phdstudent:implicationsnotrealized} in Appendix~\ref{ap:figures}), the researchers are deeply regretful after they are informed by the program chair about the crashes caused by their crawls. With this new information, the researchers strongly wish that they did not conduct the crawls reported in Section 9.3. \end{itemize} For these scenarios, the program committee will be evaluating not just the morality of the work done, but their moral responsibilities as a program committee. \subsection{After Rejection} \label{ap:scenario:afterrejection} Our Scenario~F continues Scenario~E1 from the perspective of the researchers: after review by the program committee, the program committee rejected the authors' paper. The rejection email explains to the authors the reason for the rejection: that the authors' crawls caused medical devices to crash. Until receiving the notification email, the authors did not know that their crawls could crash machines. But, now that they do know, they agree that the research in Section 9.3 should not have been done. The authors believe that their research, even without Section 9.3, is valuable and important to publish. The authors know of another conference with zero overlapping program committee members with the conference that rejected the authors' research, and the authors know that the two conferences' program committees will not discuss. (Scenario~F is captured in Figure~\ref{fig:scenario:resubmit_paper_with_concerns} in Appendix~\ref{ap:figures}.) The question for these researchers is: what should they do? Should they stop working on the project and never try to publish any part of it? Should they submit the paper again, to the new conference, without modification? Should they remove Section 9.3 and, in their new submission, pretend like Section 9.3 never existed and that the crawls never happened? Or should they keep Section 9.3 in their new submission, but add a note about what happened and why, in retrospect, they should not have done those crawls? \section{Figures for Scenarios} \label{ap:figures} Since the main body of the text captures key elements of our scenarios, for readability, we do not include our scenario figures in the body of the paper. The scenario figures are in this appendix. The following is a mapping from scenarios to figures: \begin{itemize} \item \textbf{Scenario A}: Figure~\ref{fig:scenario:medical_device}. (Section~\ref{sec:scenario:medical}.) \item \textbf{Scenario B}: Figure~\ref{fig:scenario:immoral_data_jobs}. (Section~\ref{sec:scenario:immoraldata}.) \item \textbf{Scenario C}: Figure~\ref{fig:scenario:inadvertent_disclosure}. (Section~\ref{sec:scenario:inadvertentdisclosure}.) \item \textbf{Scenario D1}: Figure~\ref{fig:scenario:disclosure:base}. (Section~\ref{sec:scenarios:more:disclosure}.) \item \textbf{Scenario D2}: Figure~\ref{fig:scenario:disclosure:legalthreat}. (Section~\ref{sec:scenarios:more:disclosure}.) \item \textbf{Scenario D3}: Figure~\ref{fig:scenario:disclosure:legalthreat:uncertainty}. (Section~\ref{sec:scenarios:more:disclosure}.) \item \textbf{Scenario D4}: Figure~\ref{fig:scenario:disclosure:legalthreat:uncertainty:uncertainadvesaries}. (Section~\ref{sec:scenarios:more:disclosure}.) 
\item \textbf{Scenario D5}: Figure~\ref{fig:scenario:disclosure:legalthreat:securityresponsible}. (Section~\ref{sec:scenarios:more:disclosure}.) \item \textbf{Scenario D6}: Figure~\ref{fig:scenario:disclosure:legalthreat:securityresponsible:student}. (Section~\ref{sec:scenarios:more:disclosure}.) \item \textbf{Scenario D7}: Figure~\ref{fig:scenario:disclosure:legalthreat:usersbreaklaw}. (Section~\ref{sec:scenarios:more:disclosure}.) \item \textbf{Scenario E1}: Figure~\ref{fig:scenario:program_committee_concerns:base}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E2}: Figure~\ref{fig:scenario:program_committee_concerns:moreoptionsremove}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E3}: Figure~\ref{fig:scenario:program_committee_concerns:moreoptions}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E4}: Figure~\ref{fig:scenario:program_committee_concerns:authorstried}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E5}: Figure~\ref{fig:scenario:program_committee_concerns:authorsknewandfeltduty}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E6}: Figure~\ref{fig:scenario:program_committee_concerns:authorsknew}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E7}: Figure~\ref{fig:scenario:program_committee_concerns:authorsknew:nocrashesluck}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E8}: Figure~\ref{fig:scenario:program_committee_concerns:authorsknew:73critical}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario E9}: Figure~\ref{fig:scenario:program_committee_concerns:authorsknew:phdstudent:implicationsnotrealized}. (Section~\ref{sec:scenarios:more:submission}.) \item \textbf{Scenario F}: Figure~\ref{fig:scenario:resubmit_paper_with_concerns}. (Section~\ref{sec:scenarios:more:rejection}.) \end{itemize} For Figure~\ref{fig:scenario:medical_device}, the 85\,000 number of patients corresponds to roughly 2.6 in every 10\,000 people, which is one-tenth the rate of the number of people who had pacemakers as of 1988~\cite{NumberIMDs}. For the calculation of 85\,000, we used the U.S.\ Census Bureau's estimated population on January 1, 2022~\cite{Census2022}. \input{ScenarioFigures/medical_device} \input{ScenarioFigures/immoral-data-jobs} \input{ScenarioFigures/inadvertent_disclosure} \input{ScenarioFigures/disclosure_base} \input{ScenarioFigures/program_committee_concerns} \input{ScenarioFigures/resubmit_paper_with_concerns} \section{0pt}{7pt plus 2pt minus 2pt}{3.5pt plus 1pt minus 2pt} \usepackage{tcolorbox} \input{macros} \AtBeginEnvironment{quote}{\itshape} \renewcommand{\headrulewidth}{0pt} \addtolength{\headheight}{15pt} \fancypagestyle{firstpage} \lhead{Tadayoshi Kohno, Yasemin Acar, Wulf Loh. \textit{Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations}, February 2023. 
Available online at \url{https://securityethics.cs.washington.edu}.} \rhead{} } \title{Ethical Frameworks and Computer Security Trolley Problems: \\ Foundations for Conversations } \author{ Tadayoshi Kohno\\ University of Washington \and Yasemin Acar \\ George Washington University \and Wulf Loh\\ Universit\"at T\"ubingen } \ifnum\showpublicationinfobelowauthor=1 \newcommand{\publicationinfo}{February 8, 2023 \\ \url{https://securityethics.cs.washington.edu} \\ \tt \texttt{yoshi@cs.washington.edu}, \texttt{acar@email.gwu.edu}, \texttt{wulf.loh@izew.uni-tuebingen.de} } \fi \begin{document} \maketitle \ifnum\showcitationheader=1 \thispagestyle{firstpage} \fi \input{00_abstract} \input{01_intro} \input{02_motivation} \input{03_scenarios-initial} \input{04_frameworks} \input{05_analysis-scenarios-initial} \input{06_scenarios-more} \input{07_discussion} \input{08_conclusions} \input{09_acknowledgements} \balance \bibliographystyle{plain}
{ "arxiv_id": "2302.14327", "language": "en", "timestamp": "2023-03-01T02:09:39", "url": "https://arxiv.org/abs/2302.14327", "yymm": "2302" }
\section{Introduction} \label{sec:intro} Frequency-modulated continuous wave (FMCW) radars have become a popular choice for short-range applications like automotive radars\cite{sun2020mimo,patole2017automotive}, human vital sign monitoring\cite{xu2022simultaneous}, synthetic aperture radars (SARs)\cite{meta2007signal}, and surveillance systems\cite{saponara2017radar}. The main advantages of FMCW radar are portability, low cost, and high resolution. An FMCW radar transmits a finite train of (piece-wise) linear frequency-modulated (LFM) chirps in each coherent processing interval (CPI). At the receiver, the target returns are mixed with the transmitted signal to obtain a complex sinusoidal beat or intermediate frequency (IF) signal. The targets' locations (and velocities if moving) information can be extracted from the frequencies of this IF signal. To this end, fast Fourier transforms (FFTs) have traditionally been used to estimate the IF signal frequencies\cite{sun2020mimo}. However, to localize targets in the angular domain, multiple transmit and receive antennas are required. In MIMO radars, multiple orthogonal waveforms are transmitted simultaneously with the target returns processed jointly by the multiple receive antennas. The MIMO radar achieves a better angular resolution than conventional radar by exploiting a large number of degrees of freedom of a virtual array synthesized with a small number of physical antenna elements. In this work, we focus on multi-target range-angle detection using MIMO FMCW radars. Conventionally, two-dimensional frequency estimation algorithms are used to estimate both targets' ranges and angles of arrival (AOAs) from the received signal. Other frequency estimation algorithms considered for MIMO FMCW radar include 2D-FFT\cite{feger200977}, 2D-MUSIC\cite{belfiori20122d}, and ESPRIT\cite{lemma2003analysis}. From the array processing theory, it is known that a high angular resolution requires a large array aperture\cite{richards2014fundamentals}. Further, increasing the aperture without a parallel increase in antenna elements leads to ambiguities in angle estimation. Although MIMO technology helps to achieve higher resolution, the cost of synthesizing a large virtual array with the half-wavelength element spacing (spatial Nyquist sampling rate) can be very high. In this context, sparse linear arrays (SLAs) have been proposed recently for both pulsed-waveform and continuous-wave radars\cite{rossi2013spatial,feger200977,sun2020sparse}. Optimal sparse array design was considered in \cite{diamantaras2021sparse} while \cite{feger200977} designed a non-uniform SLA and applied digital beamforming techniques for AOA estimation after interpolating for the missing measurements in the synthesized SLA. On the other hand, \cite{sun2020sparse} suggested matrix completion techniques to complete the corresponding linear array for angle detection. Compressed sensing (CS) addresses sparse signal recovery with fewer measurements\cite{elad2010sparse}. The sparse array setup enables spatial compressive sensing such that the CS recovery naturally suits our target localization problem. Note that the target scene is sparse since only a small number of targets are present in the scene. The CS-recovery-based localization has recently been applied for angle estimation for pulsed-MIMO radar\cite{rossi2013spatial}. In \cite{alistarh2022compressed}, CS-based algorithms were used to process measurements from a traditional full array. 
Besides spatial compression, CS techniques have also been considered in radars for reduced sampling rates\cite{bar2014sub,yu2010mimo}, interference mitigation\cite{correas2019sparse}, and multi-target shadowing effect mitigation in constant false-alarm rate (CFAR) detection\cite{cao2021compressed}. \textbf{Contributions:} In this paper, we present a novel multi-target localization algorithm to detect targets' ranges and AOAs using a random SLA. Prior methods employing CS-based techniques (e.g., \cite{rossi2013spatial}) often address only angle detection at an a priori known range bin. Here, we consider both range and angle detection in a MIMO FMCW radar. For range detection, we exploit a discrete Fourier transform (DFT)-based focusing operation followed by binary integration\cite{richards2014fundamentals} of measurements across pulses and virtual array channels, trading off range resolution for higher detection probability. For angle recovery, we use CS-based techniques, which relax the dependence of the angular resolution on the number of antenna elements. Finally, we illustrate the proposed method's performance through numerical simulations, comparing it with classical-FFT processing. The rest of the paper is organized as follows. In the next section, we describe the FMCW radar system model with the random sparse MIMO array setup. In Section~\ref{sec:recovery algorithm}, we present the proposed range and angle detection algorithm. The simulation results are discussed in Section~\ref{sec:simulation}, followed by conclusions in Section~\ref{sec:summary}. \section{Radar System model} \label{sec:system model} Consider a colocated MIMO radar system, as shown in Fig.~\ref{fig:mimo radar}, composed of $N_{T}$ transmitters and $N_{R}$ receivers forming (possibly overlapping) arrays of total apertures $Z_{T}$ and $Z_{R}$, respectively, and define $Z\doteq Z_{T}+Z_{R}$. The $n$-th transmitter's and $m$-th receiver's locations along the x-axis are $Z\alpha_{n}/2$ and $Z\beta_{m}/2$, respectively, where $\alpha_{n}\in[ -Z_{T}/Z, Z_{T}/Z]$ and $\beta_{m}\in[ -Z_{R}/Z, Z_{R}/Z]$. Note that $\alpha_{n}$ and $\beta_{m}$ are randomly drawn from appropriate uniform distributions\cite{rossi2013spatial}. The transmitters transmit LFM chirps that are orthogonal across transmitters. Let $f_{c}$ denote the carrier frequency, $\gamma$ the chirp rate, and $T$ the chirp duration. The FMCW radar's transmitted chirp is modeled as \par\noindent\small \begin{align*} s(\overline{t})=\exp{\left(j2\pi\left(f_{c}\overline{t}+\frac{\gamma}{2}\overline{t}^{2}\right)\right)},\;\;\; 0\leq \overline{t}\leq T, \end{align*} \normalsize with $\overline{t}$ as the continuous-time index. A total of $P$ chirps is transmitted in each CPI. Different orthogonal waveform designs for MIMO-FMCW radar transmitters have been proposed in \cite{de2011orthogonal,babur2013nearly}. For simplicity, we consider time-domain multiplexing, where the transmitters transmit the same signal with relative time shifts. In our proposed detection algorithm, we process each transmitted chirp independently and use binary integration\cite{richards2014fundamentals} after detection across pulses (in a CPI) to obtain the estimated ranges. In contrast, classical-FFT processing considers coherent or non-coherent integration of the pulses to average out the interference and noise before detection\cite{richards2014fundamentals}. In Section~\ref{sec:simulation}, we discuss how binary integration improves the detection probability over classical processing. 
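To make the transmitted-waveform model above concrete, the following is a minimal, self-contained NumPy sketch of the standard FMCW dechirp step for a single target and a single channel: the sampled beat signal is formed directly from the chirp parameters, and the target range is read off from the beat-frequency peak (the beat frequency equals $\gamma\tau$ for a round-trip delay $\tau$). All numerical values are illustrative placeholders and are not the parameters used later in this paper.
\begin{verbatim}
import numpy as np

# Illustrative placeholder parameters (not the values used in this paper).
c = 3e8            # speed of light [m/s]
B = 250e6          # sweep bandwidth [Hz]
T = 100e-6         # chirp duration [s]
gamma = B / T      # chirp rate [Hz/s]
fs = 2e6           # IF sampling frequency [Hz]
fc = 9.4e9         # carrier frequency [Hz]
N = int(fs * T)    # samples per chirp

R_true = 25.0                 # target range [m]
tau = 2 * R_true / c          # round-trip delay [s]

# Dechirped (IF) samples for one chirp and one channel, following
# y[t] = exp(j*2*pi*(gamma*tau*t/fs + fc*tau - 0.5*gamma*tau**2)).
t = np.arange(N)
y = np.exp(1j * 2 * np.pi * (gamma * tau * t / fs
                             + fc * tau - 0.5 * gamma * tau**2))
y += 0.1 * (np.random.randn(N) + 1j * np.random.randn(N))  # additive noise

# The beat frequency gamma*tau appears as a peak in the DFT of y.
Y = np.fft.fft(y)
l_peak = int(np.argmax(np.abs(Y[:N // 2])))   # positive-frequency peak bin
f_beat = l_peak * fs / N                      # [Hz]
R_est = c * f_beat / (2 * gamma)              # range from the beat frequency

print(f"true range = {R_true:.2f} m, estimated range = {R_est:.2f} m")
\end{verbatim}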
Similarly, the orthogonality of the transmitted signals allows the corresponding received signal components to be separated at each receiver. Hence, we first focus on the received signal component at the $m$-th receiver due to the single chirp transmitted from the $n$-th transmitter. \begin{figure} \centering \includegraphics[width = 0.85\columnwidth]{MIMO_radar_setup.png} \caption{MIMO radar system ($\square$ and $\circ$ denote receivers and transmitters, respectively).} \label{fig:mimo radar} \end{figure} We assume a target scene of $K$ stationary, far-field, non-fluctuating point targets (Swerling Case 0 model\cite{richards2014fundamentals}). We denote the $k$-th target's range and angle of arrival (AOA) as $R_{k}$ and $\theta_{k}$, respectively. Denote $\tau_{m,n,k}$ as the total time-delay in the $k$-th target's return at the $m$-th receiver from the $n$-th transmitted signal such that the received signal component is given as \par\noindent\small \begin{align*} r_{m,n}(\overline{t})=\sum_{k=1}^{K}a_{k}s(\overline{t}-\tau_{m,n,k}), \end{align*} \normalsize where $a_{k}$ is the complex amplitude proportional to the $k$-th target's radar cross-section (RCS). The time delay $\tau_{m,n,k}$ consists of the range delay $\tau^{R}_{k}$ and angular delay $\tau^{\theta}_{m,n,k}$ as \par\noindent\small \begin{align} \tau_{m,n,k}=\tau^{R}_{k}+\tau^{\theta}_{m,n,k},\label{eqn:total delay} \end{align} \normalsize where $\tau^{R}_{k}=2R_{k}/c$ and $\tau^{\theta}_{m,n,k}=Z(\alpha_{n}+\beta_{m})\sin{(\theta_{k})/2c}$ with constant $c$ denoting the speed of light. Note that the far-field assumption leads to a constant AOA across the array. After mixing the $m$-th received signal with the $n$-th transmitted signal, the FMCW radar's IF signal $y_{m,n}(\overline{t})$ is represented as \par\noindent\small \begin{align*} y_{m,n}(\overline{t})&=\sum_{k=1}^{K}a_{k}^{*}\exp{\left(j2\pi\left(\gamma\tau_{m,n,k}\overline{t}+f_{c}\tau_{m,n,k}-\frac{\gamma}{2}\tau_{m,n,k}^{2}\right)\right)}\\ &\;\;\;+w_{m,n}(\overline{t}), \end{align*} \normalsize where $(\cdot)^{*}$ represents the conjugate operation and $w_{m,n}(\overline{t})$ is the interference plus-noise term. Each IF signal $y_{m,n}(\overline{t})$ is sampled at sampling frequency $f_{s}$ as \par\noindent\small \begin{align*} y_{m,n}[t]&=\sum_{k=1}^{K}a_{k}^{*}\exp{\left(j2\pi\left(\gamma\tau_{m,n,k}\frac{t}{f_{s}}+f_{c}\tau_{m,n,k}-\frac{\gamma}{2}\tau_{m,n,k}^{2}\right)\right)}\\ &\;\;\;+w_{m,n}[t], \end{align*} \normalsize for $0\leq t\leq N-1$, where $N=f_{s}T$ is the total number of samples in a single pulse and $w_{m,n}[t]$ is the sampled noise. Here, we represent the discrete-time index by $t$. For the $N_{T}$ transmitters and $N_{R}$ receivers MIMO setup, we obtain `$N_{T}N_{R}$' sampled measurements $\{y_{m,n}[t]\}_{1\leq m\leq N_{R},1\leq n\leq N_{T}}$ for all $P$ pulses. \section{Sparse array Recovery algorithm} \label{sec:recovery algorithm} In this section, we describe the proposed range-angle detection algorithm. The spatial compressive sensing framework proposed in \cite{rossi2013spatial} for pulsed MIMO radar assumes an independent range-Doppler processing and focus only on targets in a given range-Doppler bin for AOA estimation. On the contrary, here, we consider both range and AOA detection. In Section~\ref{subsec:range detection}, we adopt a DFT-focusing operation to estimate the targets' ranges and separate the range and AOA information. 
Finally, in Section~\ref{subsec:angle detection}, the CS-based recovery provides the AOA estimates at each detected range bin. \subsection{Range detection} \label{subsec:range detection} Consider the $N$-point DFT of the sampled IF signal $y_{m,n}[t]$ as \par\noindent\small \begin{align} &Y_{m,n}[l]=\sum_{t=0}^{N-1}y_{m,n}[t]\exp{(-j2\pi lt/N)},\nonumber\\ &=\sum_{k=1}^{K}a^{*}_{k}\exp{\left(j2\pi\left(f_{c}\tau_{m,n,k}-\frac{\gamma}{2}\tau_{m,n,k}^{2}\right)\right)}\nonumber\\ &\;\;\;\times\sum_{t=0}^{N-1}\exp{\left(j2\pi\left(\frac{\gamma\tau_{m,n,k}}{f_{s}}-\frac{l}{N}\right)t\right)}+W_{m,n}[l],\label{eqn:DFT of y} \end{align} \normalsize for $0\leq l\leq N-1$, where $W_{m,n}[l]=\sum_{t=0}^{N-1}w_{m,n}[t]\exp{(-j2\pi lt/N)}$ represents the noise term. Replacing $N=f_{s}T$, we first analyze the sum of exponents $\sum_{t=0}^{N-1}\exp{\left(j(\frac{2\pi\gamma}{f_{s}})\left(\tau_{m,n,k}-\frac{l}{\gamma T}\right)t\right)}$ in \eqref{eqn:DFT of y}. Consider the sum of $M$ exponents $g(x|\overline{x})=\sum_{q=0}^{M-1}e^{j(x-\overline{x})q\omega}$ for given constants $\overline{x}$ and $\omega$. We can approximate $|g(x|\overline{x})|$ as \par\noindent\small \begin{align*} |g(x|\overline{x})|=\begin{cases}M,&|x-\overline{x}|\leq\pi/M\omega\\0,&|x-\overline{x}|>\pi/M\omega\end{cases}. \end{align*} \normalsize The approximation implies that in the focus zone $|x-\overline{x}|\leq\pi/M\omega$, the $M$ exponents are coherently integrated while the signal outside the focus zone is severely attenuated. In \cite{bar2014sub}, this focusing approximation was introduced as Doppler focusing across pulses in a CPI to reduce the joint delay-Doppler estimation problem to delay only estimation at a particular Doppler frequency. In our case, the sum of exponents appears naturally in the DFT of $y_{m,n}[t]$. Using the focusing approximation for the sum of $N$ exponents (indexed by $t$) in \eqref{eqn:DFT of y}, we have \par\noindent\small \begin{align} Y_{m,n}[l]\approx\sum_{k'=1}^{K'}a^{*}_{k'}N\exp{\left(j2\pi\left(f_{c}\tau_{m,n,k'}-\frac{\gamma}{2}\tau_{m,n,k'}^{2}\right)\right)}+W_{m,n}[l],\label{eqn:Ymn} \end{align} \normalsize where $\{a_{k'},\tau_{m,n,k'}\}_{1\leq k'\leq K'}$ represents the subset of targets which satisfy $|\tau_{m,n,k'}-l/(\gamma T)|\leq 1/(2\gamma T)$ for the given $l$-th DFT bin. Assuming $\tau^{R}_{k}\gg\tau^{\theta}_{m,n,k}$ for all targets, we have $\tau_{m,n,k'}\approx\tau^{R}_{k'}$ such that the received signal from targets at ranges satisfying $|\tau^{R}_{k'}-l/(\gamma T)|\leq 1/(2\gamma T)$ are coherently integrated, resulting in a (magnitude) peak at the $l$-th DFT bin. Furthermore, the practical values of $\gamma$ and $T$ for an FMCW radar ensures that the value $1/(2\gamma T)$ is small enough and $\tau^{R}_{k'}\approx l/(\gamma T)$. Hence, using threshold detection to identify the peaks in $Y_{m,n}[l]$ (corrupted by noise), we obtain the range estimates. The estimated range $R'$ corresponding to a DFT peak at $l'$-th bin is computed as \par\noindent\small \begin{align*} R'=\frac{c l'}{2\gamma T}. \end{align*} \normalsize These range estimates are computed independently for all $P$ pulses and for all $N_{T}N_{R}$ measurements $\{y_{m,n}[t]\}_{1\leq m\leq N_{R},1\leq n\leq N_{T}}$. The detected ranges are first filtered for false alarms across the $P$ pulses using binary integration, i.e., only the ranges detected in a sufficient number of pulses are considered valid target ranges. 
Similarly, the detected ranges are also filtered across the $N_{T}N_{R}$ measurements which further reduces the false alarm probability. The classical-FFT range processing also involves threshold detection for peaks in the DFT of the sampled IF signal. However, in classical processing, all the pulses are processed together non-coherently to compute the DFT, which increases the range resolution by increasing the frequency resolution of the computed DFT. On the other hand, by processing each pulse independently, we trade off range resolution for reduced missed detection probability. In particular, in the case of close-range targets, the classical processing often suffers from false peaks dominating the actual target peaks. Using binary integration across pulses and then across $N_{T}N_{R}$ virtual array channels, the detection probability is enhanced with a constant false alarm probability. This performance improvement with binary integration is further discussed in Section~\ref{subsec:DFT processing} with a simulated example of three close-range targets. \subsection{Angle detection} \label{subsec:angle detection} Consider a detected range bin at the $l'$-th DFT point. Substituting \eqref{eqn:total delay} in \eqref{eqn:Ymn} for $\tau_{m,n,k'}$, we obtain \par\noindent\small \begin{align*} &Y_{m,n}[l']=W_{m,n}[l']\\ &+\sum_{k'=1}^{K'}a^{*}_{k'}N\exp{\left(j2\pi\left(f_{c}\tau^{R}_{k'}-\frac{\gamma}{2}(\tau^{R}_{k'})^{2}\right)\right)}\exp{(j2\pi(f_{c}-\gamma\tau^{R}_{k'})\tau^{\theta}_{m,n,k'})}, \end{align*} \normalsize using $(\tau^{R}_{k'})^{2}\gg(\tau^{\theta}_{m,n,k'})^{2}$. For practical FMCW radars, carrier frequency $f_{c}$ (in GHz), chirp rate $\gamma$ (in MHz/$\mu$s) and short-range delay $\tau^{R}_{k}$ (a few $\mu$s) are such that the term $\gamma\tau^{R}_{k'}$ is negligible and \par\noindent\small \begin{align} Y_{m,n}[l']&=\sum_{k'=1}^{K'}a^{*}_{k'}N\exp{\left(j2\pi\left(f_{c}\tau^{R}_{k'}-\frac{\gamma}{2}(\tau^{R}_{k'})^{2}\right)\right)}\exp{(j2\pi f_{c}\tau^{\theta}_{m,n,k'})}\nonumber\\ &\;\;\;+W_{m,n}[l'].\label{eqn:Ymn for angle} \end{align} \normalsize Note that the exponential terms with the range and angle delays are now separated in $Y_{m,n}[l']$. Denote $x_{k}\doteq a^{*}_{k}N\exp{\left(j2\pi\left(f_{c}\tau^{R}_{k}-\frac{\gamma}{2}(\tau^{R}_{k})^{2}\right)\right)}$ as the complex amplitude independent of the AOAs. Further, we denote $Y^{p}_{m,n}[l']$ as the $l'$-th DFT coefficient computed for the $p$-th pulse. Stack the measurements $Y^{p}_{m,n}[l']$ for all $(m,n)$-pairs in a $N_{T}N_{R}\times 1$ vector $\mathbf{y}_{p}$. Now, define the `$N_{T}N_{R}\times P$' matrix $\mathbf{Y}=[\mathbf{y}_{1},\hdots,\mathbf{y}_{p}]$. Similarly, define the $K'\times P$ matrix $\widetilde{\mathbf{X}}=[\widetilde{\mathbf{x}}_{1},\hdots,\widetilde{\mathbf{x}}_{P}]$ with $\widetilde{\mathbf{x}}_{p}=[x_{1},\hdots,x_{K'}]^{T}$. 
Now, substituting $\tau^{\theta}_{m,n,k}=Z(\alpha_{n}+\beta_{m})\sin(\theta_{k})/(2c)$ in \eqref{eqn:Ymn for angle} yields \par\noindent\small \begin{align} \mathbf{Y}=\widetilde{\mathbf{C}}(\bm{\theta})\widetilde{\mathbf{X}}+\mathbf{W},\label{eqn:AOA matrix eqn} \end{align} \normalsize where the $N_{T}N_{R}\times K'$ matrix $\widetilde{\mathbf{C}}(\bm{\theta})=[\mathbf{c}(\theta_{1}),\hdots,\mathbf{c}(\theta_{K'})]$ with each column \small $\mathbf{c}(\theta)=[\exp{(j\pi f_{c}Z(\alpha_{1}+\beta_{1})\sin(\theta)/c)},\hdots,\exp{(j\pi f_{c}Z(\alpha_{N_{T}}+\beta_{N_{R}})\sin(\theta)/c)}]^{T}$, \normalsize known as the virtual array steering vector\cite{rossi2013spatial} parameterized by the AOA $\theta$. Here, $\mathbf{W}$ represents the $N_{T}N_{R}\times P$ noise matrix obtained from similarly stacking $W_{m,n}[l']$ from all pulses. We need to recover $\bm{\theta}$ and $\widetilde{\mathbf{X}}$ from $\mathbf{Y}$ with a small number of antenna elements. To this end, we use a sparse localization framework. Assume a grid of $G$ points $\bm{\phi}_{1\leq g\leq G}$ covering the possible target AOAs $\theta$, with $G\gg K$ and negligible discretization errors. Each grid element $\phi_{g}$ parameterizes a column of $\widetilde{\mathbf{C}}(\bm{\theta})$. Hence, we can define an $N_{T}N_{R}\times G$ dictionary matrix $\mathbf{C}=[\mathbf{c}(\phi_{1}),\hdots,\mathbf{c}(\phi_{G})]$. From \eqref{eqn:AOA matrix eqn}, the measurements $\mathbf{Y}$ are then expressed as \par\noindent\small \begin{align} \mathbf{Y}=\mathbf{C}\mathbf{X}+\mathbf{W},\label{eqn:AOA cs eqn} \end{align} \normalsize where the unknown $G\times P$ matrix $\mathbf{X}$ contains the target AOAs and complex amplitudes ($x_{k}$). A non-zero row of $\mathbf{X}$ represents a target present at the corresponding grid point. Hence, the system \eqref{eqn:AOA cs eqn} is sparse since $\mathbf{X}$ has only $K'\ll G$ non-zero rows for a particular detected range bin. Given the measurements $\mathbf{Y}$ and matrix $\mathbf{C}$, AOA estimation reduces to determining the support (non-zero rows) of $\mathbf{X}$. Note that the matrix $\mathbf{C}$, and hence the recovery guarantees, depend on the choice of grid points $\bm{\phi}_{1\leq g\leq G}$ as well as the number and (random) positions of the transmitters and receivers ($\{\alpha_{n}\}_{1\leq n\leq N_{T}}$ and $\{\beta_{m}\}_{1\leq m\leq N_{R}}$). In \cite{rossi2013spatial}, the authors also discuss sufficient conditions on the grid and the random array for recovery of $\mathbf{X}$ with high probability. For the recovery of the sparse matrix $\mathbf{X}$ with limited antenna elements, we consider CS-based algorithms. CS problems can be classified as single measurement vector (SMV) models for $P=1$, where $\mathbf{Y}$ reduces to a single vector, or multiple measurement vector (MMV) models for $P>1$. Our problem \eqref{eqn:AOA cs eqn} is an MMV setting. However, we first consider the SMV setting with $P=1$ such that $\mathbf{Y}=\mathbf{y}$, $\mathbf{X}=\mathbf{x}$ and $\mathbf{W}=\mathbf{w}$ in \eqref{eqn:AOA cs eqn}. Recovering a sparse $\mathbf{x}$ from $N_{T}N_{R}$ measurements $\mathbf{y}$ involves solving the non-convex combinatorial $l_{0}$-norm problem \par\noindent\small \begin{align} \textrm{min}_{\mathbf{x}} \|\mathbf{x}\|_{0}\;\;\; \textrm{s.t.}\;\;\;\|\mathbf{y}-\mathbf{C}\mathbf{x}\|_{2}\leq\nu,\label{eqn:CS problem} \end{align} \normalsize where parameter $\nu$ is chosen based on the noise level $\|\mathbf{w}\|_{2}$ or the sparsity of $\mathbf{x}$. 
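To make the construction of the dictionary in \eqref{eqn:AOA cs eqn} concrete, the following is a small NumPy sketch that draws random normalized element positions, forms the virtual-array steering vectors $\mathbf{c}(\phi_{g})$, and stacks them into $\mathbf{C}$. The array sizes, random seed, and grid choice (uniform in $\sin\theta$, as adopted in the simulations below) are illustrative assumptions on our part.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (placeholders).
c = 3e8
fc = 9.4e9
lam = c / fc
NT, NR = 3, 3
ZT = ZR = 6 * lam
Z = ZT + ZR
G = 150                                  # number of AOA grid points

# Random normalized element positions alpha_n and beta_m.
alpha = rng.uniform(-ZT / Z, ZT / Z, size=NT)
beta = rng.uniform(-ZR / Z, ZR / Z, size=NR)

# One virtual channel per (m, n) pair; phase = 2*pi*fc*tau_theta with
# tau_theta = Z*(alpha_n + beta_m)*sin(theta)/(2*c).
pair_sum = (alpha[None, :] + beta[:, None]).ravel()   # length NT*NR

def steering(theta):
    """Virtual-array steering vector c(theta) for an AOA in radians."""
    return np.exp(1j * np.pi * fc * Z * pair_sum * np.sin(theta) / c)

# Grid uniform in sin(theta), spanning [-45, 45] degrees.
phi = np.arcsin(np.linspace(-np.sin(np.pi / 4), np.sin(np.pi / 4), G))
C = np.stack([steering(p) for p in phi], axis=1)   # (NT*NR) x G dictionary

print(C.shape)   # (9, 150)
\end{verbatim}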
Solution of \eqref{eqn:CS problem} requires an exhaustive search of exponential complexity\cite{elad2010sparse}. However, an approximate solution can be obtained using a variety of polynomial complexity algorithms. Matching pursuit (MP) is one such family of methods, which iteratively refines the provisional support by adding one dictionary element at a time. Orthogonal MP (OMP)\cite{pati1993orthogonal}, orthogonal least squares (OLS)\cite{chen1989orthogonal}, and compressive sampling MP (CoSaMP)\cite{needell2009cosamp} are some popular MP algorithms for the SMV setting. For the general MMV setting, simultaneous OMP (SOMP)\cite{tropp2006algorithms} extends the OMP algorithm to matrix measurements. Another class of recovery algorithms is the Basis pursuit (BP) which relaxes the $l_{0}$-norm in \eqref{eqn:CS problem} with $l_{1}$-norm, resulting in a convex problem whose global solution can be found in polynomial time\cite{candes2008introduction}. In Section~\ref{subsec:performance}, we consider OMP and SOMP, respectively, for the sparse recovery in SMV and MMV settings. \section{Simulation results} \label{sec:simulation} We now demonstrate the performance of the proposed method in comparison to the classical FFT-processing. In Section~\ref{subsec:DFT processing}, we first investigate the effect of binary integration for range processing discussed in Section~\ref{subsec:range detection}. The simulation results for a sparse target scene are provided in Section~\ref{subsec:performance}. We considered a MIMO-FMCW radar system transmitting at carrier frequency $f_{c}=9.4$ GHz. The transmitted bandwidth was chosen as $B=250$ MHz with chirp duration $T=363 \mu$s (chirp rate $\gamma=B/T$) and sampling frequency $f_{s}=1.4$ MHz such that the range resolution was $0.6$ m. One CPI consisted of $P=10$ MIMO sweeps. For the sparse array, $3$ transmitters and $3$ receivers (total $6$ antenna elements) were placed uniformly over the array apertures $Z_{T}=Z_{R}=6\lambda$, where $\lambda$ is the wavelength of the transmitted signal. Note that in this case $\alpha_{n},\beta_{m}\in[-0.5,0.5]$ for $1\leq n,m\leq 3$. For the full array, we considered $4$ transmitters and $8$ receivers arranged as in \cite{belfiori20122d}. In particular, two transmitters were placed on either side of the array with an inter-element spacing of $\lambda$. The receivers were placed in the middle with an inter-element spacing of $0.5\lambda$ and $0.25\lambda$ spacing between the closest transmitter-receiver elements. This arrangement results in a virtual array of $20$ unique element locations with $0.5\lambda$ uniform separation. The target gains were generated as $a_{k}=\exp{(j\psi_{k})}$ with $\psi_{k}$ drawn from i.i.d. uniform distribution over $[0,2\pi)$. The noise term $w_{m,n}[t]$ is modeled as i.i.d. zero-mean complex circular Gaussian noise $\mathcal{CN}(0,\sigma^{2}\mathbf{I})$, mutually independent across pulses and virtual array channels. The signal-to-noise ratio (SNR) is then defined as $-10\log_{10}{(\sigma^{2})}$\cite{rossi2013spatial}. 
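Before turning to the results, we include a compact sketch of the OMP recovery step referred to above, applied to a synthetic stand-in problem; the dimensions and data are placeholders chosen only to make this toy example reliable and are not the simulation settings of this section. The SOMP variant used in the MMV case replaces the single correlation with a row-wise norm of the correlations across the $P$ columns of the residual matrix.
\begin{verbatim}
import numpy as np

def omp(C, y, k_max, tol=1e-6):
    """Greedy OMP: pick the atom most correlated with the residual,
    refit by least squares on the current support, and repeat."""
    support = []
    residual = y.copy()
    x_hat = np.zeros(C.shape[1], dtype=complex)
    for _ in range(k_max):
        corr = np.abs(C.conj().T @ residual) / np.linalg.norm(C, axis=0)
        g = int(np.argmax(corr))
        if g not in support:
            support.append(g)
        coef, *_ = np.linalg.lstsq(C[:, support], y, rcond=None)
        residual = y - C[:, support] @ coef
        if np.linalg.norm(residual) <= tol:
            break
    x_hat[support] = coef
    return x_hat, support

# Synthetic stand-in: 24 pseudo-measurements, 100 grid points, 2 targets.
rng = np.random.default_rng(1)
C = rng.standard_normal((24, 100)) + 1j * rng.standard_normal((24, 100))
x_true = np.zeros(100, dtype=complex)
x_true[[30, 70]] = [1.0, 0.8 * np.exp(1j * 0.3)]
y = C @ x_true + 0.01 * (rng.standard_normal(24)
                         + 1j * rng.standard_normal(24))

x_hat, support = omp(C, y, k_max=10)
detected = sorted(g for g in support if np.abs(x_hat[g]) > 0.1)
print("detected grid indices:", detected)   # expected near [30, 70]
\end{verbatim}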
\subsection{DFT processing: classical and proposed method} \label{subsec:DFT processing} \begin{figure} \centering \includegraphics[width = \columnwidth]{dtft_outputs.png} \caption{Normalized DFT magnitude for (a) Classical range-FFT; and (b) Three different pulses for the proposed method (arrows indicate the detected peaks).} \label{fig:DFT} \end{figure} Consider three close-range targets with ranges $R_{1}=20.6$ m, $R_{2}=20.0$ m and $R_{3}=19.4$ m at AOAs $\theta_{1}=\theta_{2}=\theta_{3}=0^{\circ}$. Considering the noise-free case, Fig.~\ref{fig:DFT}a shows the range-FFT computed in the classical-FFT processing. Fig.~\ref{fig:DFT}b shows the DFT computed in the proposed method for three different pulses from measurement $y_{1,1}[t]$, which are then used to estimate the target ranges using binary integration as detailed in Section~\ref{subsec:range detection}. We observe that non-coherent processing of the pulses in the classical method provides a refined spectrum as compared to the proposed method of processing one pulse at a time. However, the classical range-FFT suffers from side-lobe effect which results in a false peak of the same order of the third target ($R_{3}$) peak. Hence, reducing the false alarms (increasing the threshold) results in a missed detection. On the other hand, in the proposed binary integration method, the missed targets in one pulse can be detected at other pulses (or some other $y_{m,n}[t]$ measurement). Hence, binary integration can enhance the detection probability for a constant false alarm rate by trading off range resolution. \subsection{Performance analysis} \label{subsec:performance} \begin{figure} \centering \includegraphics[width = \columnwidth]{false_hit.png} \caption{Average (a) false alarm rate, and (b) hit rate at different SNRs for classical-FFT processing and the proposed method.} \label{fig:rates} \end{figure} \begin{figure} \centering \includegraphics[width = \columnwidth]{error.png} \caption{Root MSE in (a) range, and (b) angle estimation at different SNRs for classical-FFT processing and the proposed method.} \label{fig:error} \end{figure} We considered $K=5$ targets with target delays and AOAs chosen uniformly at random with ranges in $[10 m, 40 m]$ and AOAs in $[-15^{\circ},15^{\circ}]$. This ensures the target scene has close-range targets as well as multiple targets at the same range. For the proposed CS-based angle recovery, we considered OMP with the vector measurement $\mathbf{y}$ as the sum across $10$ pulses, and SOMP for matrix measurement $\mathbf{Y}$. In \cite{rossi2013spatial}, authors assumed a known sparsity level and used the prior information of the actual number of targets $K$ in the CS algorithms. On the contrary, here, we assumed a sparsity level of $K_{max}$ for OMP and SOMP algorithms. The target AOAs were then obtained using threshold detection on the recovered signal. Hence, we do not require a prior estimate of $K$. We set $K_{max}=10$ for both OMP and SOMP. The grid $\bm{\phi}_{1\leq g\leq G}$ was chosen as $150$ uniformly spaced points in the $\sin(\theta)$ domain in the interval $[-0.7071,0.7071]$. Note that the AOA estimates are uniform in the $\sin{(\theta)}$ domain. This holds for classical-FFT processing as well where the DFT samples are equally spaced in the $\sin{(\theta)}$ domain between $[-1,1]$ and assume a non-linear distribution in the $\theta$ domain. The grid $\bm{\phi}_{1\leq g\leq G}$ spans the interval $[-45^{\circ},45^{\circ}]$ in the AOA domain. 
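As a quick check on the grid just described, points uniform in $\sin\theta$ over $[-0.7071,0.7071]$ indeed span $[-45^{\circ},45^{\circ}]$ but are non-uniform in the angle domain, with wider angular spacing toward the edges:
\begin{verbatim}
import numpy as np

s = np.linspace(-np.sin(np.pi / 4), np.sin(np.pi / 4), 150)  # uniform in sin
phi_deg = np.degrees(np.arcsin(s))                           # AOA grid [deg]
print(phi_deg[0], phi_deg[-1])                 # -45.0 ... 45.0
print(np.diff(phi_deg)[:2], np.diff(phi_deg)[73:75])  # edge vs. center spacing
\end{verbatim}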
We consider hit rate and root-mean-squared error (RMSE) of the recovered targets as the performance metrics. A `hit' is defined as a range-angle estimate within $0.6$ m in range and $1^{\circ}$ in angle of the true target. The recovery error is computed for the estimates classified as hits. The target estimates not classified as hits are the false alarms. We vary the thresholds of the threshold detectors to maintain a constant false-alarm rate at different SNRs. The hit rate and false alarm rate for different SNRs, averaged over $300$ independent simulations, are shown in Fig.~\ref{fig:rates} for the proposed method and classical-FFT processing considering both full and sparse arrays. The corresponding range and angle recovery errors are shown in Fig.~\ref{fig:error}. From Fig.~\ref{fig:rates}, we observe that for high SNRs, the proposed method with OMP-based recovery achieves the same hit rate as the classical processing for the full array with same false alarm rates. However, the full array consists of $12$ ($4$ Tx+ $8$ Rx) antenna elements, while the sparse array requires only half of these elements. On the other hand, reducing the transmitter and receiver elements drastically degrades the detection ability of classical-FFT processing. Interestingly, at lower SNRs, the hit rate of the classical processing (full array) reduces due to the side-lobe effect discussed earlier. Note that at high SNRs, the false peaks from the side-lobes are not prominent compared to the actual target peaks. On the contrary, the proposed method maintains the same hit rate with varying noise levels. The SOMP-based angle recovery in the proposed method further helps to increase the detection probability with a reduced false alarm rate, compared to OMP-based recovery. SOMP improves the detection ability by exploiting the correlation among the measurements across different pulses to recover the true target AOAs. In Fig.~\ref{fig:error}a, we observe that the classical method has a lower range recovery error for both full and sparse arrays, because of the refined range FFT computed in the classical method. The proposed method achieves a slightly higher range error of about $0.15$ m. Similarly, in Fig.~\ref{fig:error}b, the classical method slightly outperforms the proposed method in terms of angle recovery error. However, the classical method's angular resolution (hence, the error) depends on the array aperture. A higher angular resolution requires an increase in the array aperture and hence, the number of antenna elements. On the other hand, the proposed method's angular resolution is determined by the number of grid points $G$. Hence, the angle recovery error of the proposed method can be reduced with a finer grid $\bm{\phi}_{1\leq g\leq G}$. However, the number and locations of the antenna elements still affect the dictionary matrix $\mathbf{C}$, which in turn, determines the recovery probability of the CS-based algorithms. \section{Summary} \label{sec:summary} We have proposed a novel sparse-recovery-based multi-target detection algorithm in the range-angle domain for MIMO FMCW radar. The proposed method enables a random array MIMO system to localize multiple targets in a sparse scene with reduced antenna elements compared to the traditional full array system. For range detection, we considered a DFT-based focusing operation with binary integration across pulses and virtual array channels. 
The binary integration in range detection provided a lower missed-detection probability than the classical non-coherent range-FFT processing. Finally, we considered a sparse recovery framework for target AOA detection using both SMV- and MMV-based CS recovery algorithms. Through numerical simulations, we illustrated the proposed method's target recovery performance in comparison with classical-FFT processing. Our numerical experiments suggest that the proposed method can achieve the traditional full-array hit rate with limited antenna elements. Furthermore, the MMV-based angle recovery can outperform both the SMV-based and classical-FFT methods. \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.14328", "language": "en", "timestamp": "2023-03-01T02:09:39", "url": "https://arxiv.org/abs/2302.14328", "yymm": "2302" }
\section{Introduction} Quantum magnetism in crystalline solids and the study of spin liquids are experiencing a resurgence. This is partly due to a remarkable exactly solvable quantum spin model on a honeycomb lattice by Kitaev \cite{KITAEV2006}, followed by an exciting proposal by Jackeli and Khaliullin \cite{Jackeli2009} for the experimental realization of the Kitaev spin liquid in certain real materials. Several potential Kitaev proximity materials are appearing on the scene \cite{Yogesh2010,Liu2011,Choi2012,Ye2012,Comin2012,Hwan2015,Kubota2015,Williams2016,kitagawa2018}. New experimental results in possible Kitaev systems, such as $\alpha$-RuCl$_3$ \cite{Plumb2014,Sandilands2015,Sears2015,Majumder2015,Johnson2015,Banerjee2016,Do2017,Wagner2022magneto}, continue to surprise us. Beyond basic science, developments in quantum spin liquids give hope and pave the way for novel qubits, topological quantum computation, and quantum information science and technology. Spin glasses, both classical and quantum, are often found in systems with spatial disorder. In an exciting recent experiment, glassiness is found in a nearly disorder-free $\alpha$-RuCl$_3$ at low temperatures and intermediate magnetic fields \cite{Shivaram2021}, as seen in anomalous non-linear susceptibilities. \textit{Is $\alpha$-RuCl$_3$, a Kitaev proximity system, showing us a way to intrinsic glassiness in disorder-free and generic quantum spin liquid systems?} At the end of this article, we suggest an affirmative answer to this question, using our present work and the above developments, and make connections to earlier proposals on glass physics in a variety of quantum systems. We begin by viewing the glass phase in a generic sense as a ground state exhibiting slow dynamics. Emergent glassiness in disorder-free many-body systems is seen, sporadically or otherwise, in many earlier works, although the observed phase was not often associated with glassiness. Intuitively, if the ground state is in proximity to a wealth of local minima due to (say, frustration-induced or topological) degeneracy \cite{Chamon2005}, `emergent disorder' arising from an excessive number of conserved quantities \cite{Moessner2017,Prem2017,Hart2021Logarithmic}, the orthogonality catastrophe near a critical point \cite{Anderson1967,Sachdev1999}, or local constraints or a local bath \cite{Chamon2005,Heyl2021QLM,Feldmeier2019}, its dynamics are impeded. In modern calculations, it is also shown that if the Hilbert space is partitioned \cite{Khemani2020,Lee2021} and/or disentangled \cite{Grover2014} into (local) Hilbert spaces, then ergodicity is hampered. The Kitaev model has been studied extensively in the presence of a magnetic field on the 2D honeycomb lattice \cite{Nandini2019,Pollmann2018,LiangFu2018,MiangLu2018,Trebst2019,Valenti2019}, in ladder setups \cite{Gordon2019,Affleck2020}, and combined with other interactions \cite{Yadav2016,Jiang2019,Kim2020}. There has been a variety of results and proposals, some of which are ubiquitous while others remain active research topics, yet the physics of glassiness was not reported earlier. There is theoretical evidence of a $U(1)$ quantum spin liquid (QSL) in the intermediate field regime, with gapless excitations whose nature is still debated (for reviews see \cite{Kee2016,Hermanns2018,Takagi2019Concept,TREBST2022}). 
Our understanding of the constituent gauge and matter excitations in the Kitaev model with other interactions \cite{Schaffer2012,Craig2012,Oitmaa2015,Janssen2019,Gohlke2018,Batista2021Var,Feng2020,PRADHAN2021} and external perturbations is gradually evolving \cite{Berke2020,Batista2022,Aprem2022,Sodemann2022,He2020,Subhro2020}. In particular, the behavior of the gauge fluxes was not explicitly investigated in previous numerical studies at finite magnetic fields, and hence their role in the corresponding phases has remained unknown. The experimental situation similarly remains inconclusive. Experiments have observed half quantization in the thermal Hall effect \cite{Matsuda2018}, and quantum oscillations in the in-plane longitudinal thermal conductivity without any observed quantization in the corresponding transverse conductivity \cite{Czajka2021}, in $\alpha$-RuCl$_3$ in the intermediate magnetic field region. Another experiment has indicated multiple phase transitions in the same field region based on anomalies in the thermal (both longitudinal and Hall) conductivity \cite{Takagi2022}. Evidence of magnetic excitations \cite{Banerjee2018} and phonon anomalies \cite{Dean2021} has also been presented in experiments in the same field region (before the polarized phase appears). More interestingly, this is roughly the same magnetic field region where a recent experiment finds a signature of glassiness \cite{Shivaram2021}. A magnetic field modifies the dynamics of spins. But why are the dynamics slow in the intermediate field region without any external inhomogeneity causing glassiness? Here we carry out a DMRG study of the Kitaev model on the 1D ladder at finite magnetic field and zero temperature. Such a model has been studied earlier with ED (and DMRG) \cite{Gordon2019} and with iDMRG \cite{Sorensen2021}, but these studies missed some of our phases, presumably due to numerical limitations and to overlooking the role of the flux operators responsible for the phase transitions. We find a set of interesting phases with increasing magnetic field. At low fields, the $Z_2$ gauge flux stabilizes in a spatially homogeneous phase, before it tends to crystallize. In the intermediate field region, we spot a robust glass phase determined by random spatial distributions of the $Z_2$ gauge fluxes, with possible gapless excitations. The emergence of glass physics is corroborated by signature results for the correlation functions and by quantum fidelity calculations of the ground state. The glass phase intervenes between the homogeneous flux phase on one side and a homogeneous polarized phase at high field on the other. On the basis of these observations, we provide a possible mechanism for intrinsic glass physics and make connections to other examples of glass physics to outline an organizing principle for the many-body glass state. The remainder of this article is organized as follows. We present our DMRG method and results for the Kitaev ladder at $T = 0$ as a function of the magnetic field and discuss the emergence of various phases, with emphasis on the intrinsic glass phase. Before concluding, we attempt to obtain a unifying understanding of emergent glassiness in disorder-free many-body systems by connecting various other proposals, dating back to the RVB theory of the QSL state, to modern languages in which slow dynamics of a ground state are promoted by temperature, magnetic field, or other external baths. \section{Method}\label{sec:Methods} \begin{figure}[tb!] 
\centering \includegraphics[width=1\linewidth]{./figures/Ladder_Poster_New.pdf} \caption{A Kitaev ladder setup that we study here. At each site, there are three nearest-neighbor bonds with exchange interactions $J_{x,y,z}$ between $S^{x,y,z}$, respectively, as in the honeycomb analog. The $J_z$ interactions ($J_{3}$, $J_4$) are taken either equal or different, for comparison. ${\bf a}$ denotes the lattice constant, while $W$, $T_i$ are flux operators defined in the text.} \label{fig:Lattice} \end{figure} We consider the Kitaev model with a magnetic field (${\bf h}$) along the [111] direction, \begin{equation}\label{eq:ham} H = \sum_{\langle ij \rangle_\alpha} J_\alpha S_i^\alpha S_j^\alpha - \sum_{i,\alpha}h_{\alpha}S_i^{\alpha}. \end{equation} Here $J_{\alpha} > 0$ are bond-dependent exchange couplings, $\alpha = x, y, z$. This model is set on the 1D ladder shown in Fig.~\ref{fig:Lattice}. Each site has three nearest-neighbor bonds with distinct interaction types, hence mimicking the setup proposed by Kitaev on the honeycomb lattice. The coupling along the $z$-bond (between the chains) is taken to be staggered, in general, as $J_z=J_3$ or $J_4$ on alternating rungs; see Fig.~\ref{fig:Lattice}. The spin operator $S_i^{\alpha}$ at each site $i$ can be factorized into matter Majorana fermion ($c_i$) and gauge Majorana fermion ($b_i^{\alpha}$) operators. The gauge Majorana operators on nearest-neighbor bonds can then be combined into a bilinear operator $u_{ij}^{\alpha}=ib_i^{\alpha}b_j^{\alpha}$, which serves as a $Z_2$ gauge field. With this, we can define a flux operator at a six-bond plaquette $p$ as \begin{equation} W_p = S_{i}^yS_{j}^zS_{k}^xS_{l}^yS_{m}^zS_{n}^x = \prod_{l_p} u_{l_p}^{\alpha}, \label{eq:Wp} \end{equation} where $l_p=ij, jk, kl, lm, mn,$ and $ni$ are nearest-neighbor bonds. The chosen spin component at a given site is the one associated with the outward bond (normal to the plaquette). It turns out that $W_p$ at each plaquette commutes with the Hamiltonian at $h=0$, giving $N$ conserved quantities on both the 2D honeycomb lattice and the 1D ladder. In addition, in the present 1D ladder setting, there are two additional local conserved quantities, which are four-bond plaquette operators defined by \begin{eqnarray} T_{1p} &=& S_{i}^{y}S_{j}^{y}S_{k}^{x}S_{l}^{x} = - \prod_{l_p} u_{l_p}^{\alpha},\nonumber\\ T_{2p}&=& S_{j}^{x}S_{m}^{x}S_{n}^{y}S_{k}^{y} = - \prod_{l_p} u_{l_p}^{\alpha}, \end{eqnarray} where $l_p=ij, jk, kl,$ and $li$ are the bonds of the $1p$-plaquette, and so on. These operators are shown in Fig.~\ref{fig:Lattice}. Consequently, $W_p = T_{1p}T_{2p}$ and $\left[ T_{1p}, T_{2p} \right]=0$ \footnotemark[1]. In the ground state, all these conserved quantities assume $W_p = +1$ and $T_{1p/2p} = +1$ (uniform flux-free phase),\footnotemark[2] giving us an extensive number of conserved quantities. Hence the many-body Hilbert space is made of `trivial' product states of gauge and matter sectors \cite{Baskaran2007}. This is a $Z_2$ QSL state \cite{Brenig2017}. The energy dispersion of the matter (Majorana) fermions is gapless (quadratic) for the couplings $J_x=J_y=J_3=1, J_4=0$; for $J_y=J_3=J_4=1$, it is gapped at $J_x=1$ and gapless, with linear dispersion, at $J_x=2$. 
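To make the conserved-flux statement concrete, the following is a minimal exact check on a small cluster that a six-bond plaquette operator of the form in Eq.~\eqref{eq:Wp} commutes with the Hamiltonian of Eq.~\eqref{eq:ham} at $h=0$ and ceases to commute once the [111] field is switched on. The site labeling, bond assignment, and the use of bare Pauli matrices (so that $W_p^2=1$) are illustrative choices on our part and need not coincide with the conventions of Fig.~\ref{fig:Lattice}.
\begin{verbatim}
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
pauli = {"x": sx, "y": sy, "z": sz}

n_sites = 8   # bottom leg: sites 0-3, top leg: sites 4-7

def op(site_ops):
    """Tensor product acting with the given single-site operators."""
    mats = [site_ops.get(i, I2) for i in range(n_sites)]
    return reduce(np.kron, mats)

# Assumed bond list: legs alternate x/y bonds, rungs carry z bonds.
bonds = [(0, 1, "x"), (1, 2, "y"), (2, 3, "x"),              # bottom leg
         (4, 5, "y"), (5, 6, "x"), (6, 7, "y"),              # top leg
         (0, 4, "z"), (1, 5, "z"), (2, 6, "z"), (3, 7, "z")]  # rungs

def hamiltonian(J=1.0, h=0.0):
    H = np.zeros((2**n_sites, 2**n_sites), dtype=complex)
    for i, j, a in bonds:
        H += J * op({i: pauli[a], j: pauli[a]})
    for i in range(n_sites):            # [111] field, equal components
        for a in "xyz":
            H -= (h / np.sqrt(3)) * op({i: pauli[a]})
    return H

# Six-bond plaquette on sites (0, 1, 2, 6, 5, 4): each site contributes
# the Pauli component of its bond that is *not* on the loop, mirroring
# the definition of W_p in the text.
W = op({0: sy, 1: sz, 2: sx, 4: sx, 5: sz, 6: sy})

for h in (0.0, 0.2):
    H = hamiltonian(h=h)
    comm = H @ W - W @ H
    print(f"h = {h}: ||[H, W_p]|| = {np.linalg.norm(comm):.2e}")
# Expected: ~1e-15 at h = 0 and a nonzero value at h = 0.2.
\end{verbatim}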
\footnotetext{These four-bond plaquette operators do not commute among themselves or with the 2D Kitaev Hamiltonian, but they commute with the 1D ladder Hamiltonian in Eq.~\ref{eq:ham} at $h=0$.} \footnotetext{The uniform flux-free ground state is obtained by fixing the gauge: $u_{ij}^\alpha = +1$ on the $\alpha= x, y$ bonds on the legs, and $u_{ij}^z=+1$ ($-1$) along the $J_3$ ($J_4$) couplings \cite{Brenig2017, Satoshi2019, Tao2007}.} We study Eq.~\ref{eq:ham} at $h\ne 0$ using the DMRG method for $N=200,300,400$ sites, with cylindrical boundary conditions between the chains and open boundary conditions at the edges. A randomly initialized matrix product state (MPS) is variationally tuned to the ground state by minimising the expectation value (the energy) of the matrix product operator of $H$ in Eq.~\ref{eq:ham}, with bond dimension up to $D \leq 2500$ and truncation error $\epsilon \sim 10^{-10}$. The DMRG algorithm is implemented using the ITensors Library \cite{itensor}. The expectation values of gauge-invariant operators are calculated by contracting the corresponding MPO with the DMRG-predicted ground-state MPS. We repeat some of the calculations on a four-leg 1D lattice with cylindrical boundary conditions along the armchair direction and open boundary conditions along the zig-zag direction. This geometry is closer to the 2D honeycomb lattice; see Appendix~\ref{sec:2D}. The salient properties presented in the main text for the two-leg ladder are reproduced in the four-leg setting. \section{Results}\label{sec:Results} \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{./figures/Mag_Comb_zoom.pdf} \caption{{(a) The spatial average of the magnetization along the magnetic field direction is plotted as a function of the field strength. (b) The corresponding values of the uniform spin susceptibility ($\chi$) are plotted here. Three different colors denote the same calculated values for three different system sizes $N=400, 300, 200$}. The vertical dashed lines mark the phase boundaries, which are located at $h \simeq 0.24, 0.28, 0.3$, and $0.43$. (The plots are magnified between $h=0.2$ and $0.5$ for visualization.)} \label{fig:magGapped} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{./figures/PlqAvrg.pdf} \caption{Computed values of the spatial average of the three flux operators $W$, $T_{i}$, plotted as a function of the field strength. The results are shown for a DMRG run on a $400$-site lattice. The vertical dashed lines indicate the same phase boundaries as in Fig.~\ref{fig:magGapped}. The horizontal dashed line marks the $\langle W\rangle =0$ line.} \label{fig:AvrgPlq} \end{figure} In gauge theories of the present kind, it is often difficult to find the right order parameter(s), especially when there are multiple phases that compete and/or coexist. As $h\rightarrow 0$, we have a non-local multi-linear operator, $W_p$, which acquires a fixed eigenvalue at each plaquette, as discussed above. At $h\rightarrow \infty$, the local linear (magnetization) operator $S_{i}^{\hat{h}}$ has a uniform average value in the polarised phase, with the easy axis oriented along the field direction $\hat{h}$. There is no obvious way to smoothly interpolate between these two (quasi-)local operators, and a phase transition between them, if it exists, evades Landau theory and can occasionally be classified within the deconfined quantum critical paradigm. Non-local string operators arise as dynamics are introduced at intermediate magnetic field strengths.
These string operators bind flux-flux, matter-matter, and/or flux-matter excitations. It is numerically expensive to evaluate their expectation values within DMRG. We will, however, occasionally comment on the possible role of such non-local string operators in the slow dynamics of the glassy phase we obtain here. We present the spatially averaged ground-state expectation values $\langle \mathcal{O}\rangle = \frac{1}{N}\sum_l\langle \mathcal{O}_l\rangle$, where $\mathcal{O}_l=S_i^{\hat{h}}$ with $l=i$ the site index, as shown in Fig.~\ref{fig:magGapped}, and $\mathcal{O}_l=W_p$, $T_{1p}$, $T_{2p}$ with $l=p$ the plaquette index, as shown in Fig.~\ref{fig:AvrgPlq}. In both $M=\langle S\rangle$ and $\langle W\rangle$, we observe concurrent kinks or jumps with increasing magnetic field strength $h$. We denote the resulting finite-field phases by I, II, III, IV, and V. In Phase I we see a uniform flux value at all plaquettes, with the average value decreasing with $h$, and hence we dub it the uniform-flux phase; see Fig.~\ref{fig:Plq_1}. In Phase II, the local flux values (we will call them $Z_2$ vortices) begin to fluctuate around their finite mean value. Phase III appears in the region where the number of vortices is nearly half of the number of lattice sites (half-filling), and the $Z_2$ vortices tend to crystallize. Phase IV corresponds to the glass phase, with random fluctuations of the $Z_2$ vortices around a zero mean value. Finally, Phase V corresponds to the uniform polarised phase. The magnetization grows near-linearly at all field strengths except in the intermediate region. The uniform spin susceptibility, defined as $\chi= \frac{\partial M}{\partial h}$, shows divergence features at all phase boundaries. The divergence in $\chi$ is sharpest at $h=0.43J$, at the phase boundary between the glass and polarized phases, possibly indicating a phase transition driven by long-wavelength collective excitations (magnons). \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{./figures/Plq_1.pdf} \caption{The computed values of $\langle W_p \rangle $ are shown for each plaquette $p$ for two different fields, (a) $h=0.2$ and (b) $h=0.275$, which correspond to Phase I and Phase II. (c),(d) The values of $\langle T_{\beta p} \rangle$ are shown in the corresponding bottom panels. Having $T_{2p} > T_{1p}$ at a plaquette $p$ corresponds to the $T_{1p}$ flux sitting at the boundaries, and vice versa.} \label{fig:Plq_1} \end{figure} \subsection{Uniform and Crystalline phases of fluxes} The expectation values of the flux operators show an intriguing behavior, as shown in Fig.~\ref{fig:AvrgPlq}. Up to $h\approx 0.24J$, we observe a uniform value of $\langle W_p\rangle$, while $\langle T_{ip}\rangle$ acquires staggered mean values between alternate four-bond plaquettes, as shown in Fig.~\ref{fig:Plq_1}(a) and (c), respectively. (Whether $T_{1p}>T_{2p}$ or $T_{1p}<T_{2p}$ at a given plaquette depends on the open boundary condition.) Moreover, the uniform value $\langle W_p\rangle <1$ at all plaquettes suggests that the gauge sector of the ground state can still be approximated as a product state in a local basis, but now the local states have changed from $|+\rangle_p$ at $h=0$ to $\alpha_p|+\rangle_p + \beta_p|-\rangle_p$ for $h>0$, where $W_p|\pm\rangle_p=\pm |\pm\rangle_p$ and $\alpha_p^2-\beta_p^2=\langle W_p\rangle$ for all $p$. The normalization condition then dictates $\alpha_p^2 = (1+\langle W_p\rangle)/2$.
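The phase boundaries quoted above are read off from the concurrent kinks in $M$ and the peaks in $\chi=\partial M/\partial h$ shown in Fig.~\ref{fig:magGapped}. The following minimal post-processing sketch (with a synthetic magnetization curve standing in for the actual DMRG output and an arbitrary peak threshold; it is meant only to illustrate the finite-difference step) locates such peaks; an analogous estimate can be used for the higher-order susceptibilities $\chi_n$ discussed below.
\begin{verbatim}
import numpy as np

# Synthetic M(h) standing in for the DMRG data; the sharp feature near
# h = 0.43 mimics the boundary between the glass and polarized phases.
h = np.linspace(0.0, 0.6, 241)
M = np.tanh(2.0 * h) + 0.05 * np.tanh(40.0 * (h - 0.43))

chi = np.gradient(M, h)                      # chi   = dM/dh
chi3 = np.gradient(np.gradient(chi, h), h)   # chi_3 = d^3 M/dh^3

# Candidate phase boundaries: local maxima of chi above a chosen threshold.
peaks = [h[i] for i in range(1, len(h) - 1)
         if chi[i] > chi[i - 1] and chi[i] >= chi[i + 1] and chi[i] > 2.5]
print(peaks)
\end{verbatim}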
When the Kitaev model is perturbed, one gets, in general, complicated multi-body interactions among the Majorana fermions and the Z$_2$ gauge fluxes. The Z$_2$ gauge fluxes become dynamic and acquire finite effective masses \cite{Aprem2022}. Further, open string operators carrying Majorana fermion modes (both $b_i^{x, y, z}$ and $c_i$) at their ends also acquire expectation values in the ground state. The study of open strings using DMRG at finite fields is cumbersome. An elaborate discussion of these string objects at finite fields and their role in the dynamics is presented in Appendix~\ref{sec:Strings}. There are virtual excitations due to the $T_{ip}$ fluxes, whose energy scale is $<10^{-3}J$, but in the uniform-$\langle W_p\rangle$ phase they make no contribution. There are long-wavelength collective excitations, in which $\alpha_p$ (i.e.\ $\langle W_p\rangle$) varies slowly across the lattice but with a gap which scales with the system size. Finally, single matter Majorana excitations appear at higher energy. Creating a single $Z_2$ vortex in the uniform-flux case at a six-bond plaquette, i.e.\ changing $W_p$ from $+1$ to $-1$, costs energy $E\sim 0.24J$. Therefore, for $h > 0.24J$, $W_p$ vortex creation is energetically feasible. In the dilute limit, the vortices start to proliferate in the lattice like a vortex gas or liquid phase, which is Phase II in our phase diagram. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{./figures/Plq_2.pdf} \caption{Similar to Fig.~\ref{fig:Plq_1}, but here the results are shown at two representative fields in Phase II ($h=0.2975$) and Phase III ($h = 0.365$). In the middle panels (c),(d), we plot the real part of the Fourier transform of $\langle W_p \rangle$ with wave vector $k$.} \label{fig:Plq_2} \end{figure} With further increase of the field strength, for $h \geq 0.28J$ there is a tendency for the vortices to crystallize, as shown in Fig.~\ref{fig:Plq_2}(a). This is Phase III. Here the $W_p=\pm1$ plaquettes are nearly equal in number, giving $\langle W\rangle\rightarrow 0$, which is close to half-filling. In this case, the vortices are `frozen' to the lattice, with alternating plaquettes having opposite $W_p$ signs; see Fig.~\ref{fig:Plq_2}(a). This phase is analogous to density-wave order in a correlated fermionic insulator, or to a hard-core bosonic insulator at half-filling. The vortex-lattice formation is evident in the dominant Fourier component of the flux operators at a single wavevector, as shown in Fig.~\ref{fig:Plq_2}(c). Slightly away from half-filling on either side, we instead observe a few wavevectors and quasi-long-range correlation functions, which suggests an amorphous behavior. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{./figures/Corr_Zoom.pdf} \caption{We plot the correlation function of the flux operator, $\Delta W_p$, with $p=50$ for (a) $h=0.295$ and (b) $h = 0.365$.} \label{fig:Corr} \end{figure} \subsection{Emergent Glassiness} An amorphous crystal is a precursor to glassiness, and this may be at play in the present case as well. The energy to create a single $W_p$ flux in the crystalline phase is $\sim 0.05J$ (assuming a uniform crystal for this estimate; see Appendix~\ref{sec:Gaps} for more details). Therefore, for $h>0.3J$, we enter the dense vortex region (Phase IV). The high $Z_2$ vortex density is evident in the value $\langle W\rangle \leq 0$ shown in Fig.~\ref{fig:AvrgPlq}.
Because of this high density, any small local fluctuation tends to impede the ordering of the entire lattice, and hence {\it glassiness} arises. We calculate the correlations of $W_p$, quantifying the fluctuations about its mean, as $\Delta W =\langle W_pW_{q}\rangle-\langle W_p \rangle \langle W_{q}\rangle$, where the expectation value is calculated with respect to the MPS ground state. The correlations of the fluxes are small but finite, with several wavevectors, as shown in Fig.~\ref{fig:Corr}. Interestingly, the correlation survives to a larger length in this glass phase than in the preceding crystalline phase. This is in contrast to a solid-to-liquid phase transition, where the correlation length decreases in the liquid phase. This is one aspect of the glassiness that distinguishes Phase IV from a liquid phase. Furthermore, we have also checked that the phase has a non-zero central charge, signaling gapless excitations. Note that this is the approximate range of fields where a gapless $U(1)$ QSL state is proposed in the 2D honeycomb Kitaev model \cite{Nandini2019,Pollmann2018,LiangFu2018,MiangLu2018,Trebst2019,Valenti2019}. The convergence of the DMRG minimization in this range of fields is slow compared to that in the other phases. Referring to the definition of $W_p$ in Eq.~\ref{eq:Wp}, it is easy to associate the fluctuation of $\langle W_p\rangle$ with the quantum fluctuations of the spins. This sets the present glass physics apart from classical glassy phases of frozen spin configurations. Note that, apart from single-flux production, there are also non-local flux pairs connected by the Wilson operator $(W_p)^n$, which in spin-operator form becomes a string operator. This automatically generates $n$-point spin-spin correlations in this system. Defining an $n^{\rm th}$-order uniform susceptibility $\chi_n\sim\partial^n M/\partial h^n$, we have checked that the second- and third-order susceptibilities in this region are large and more chaotic as a function of the magnetic field. Note that in a Gaussian fluctuation theory the third- and higher-order susceptibilities vanish, as we also find in the other phases. But in Phase IV, we find a significant enhancement of the mean-square values of the second- and third-order susceptibilities, in the range of $\mathcal{O}\left(10^2\right)$ to $\mathcal{O}\left(10^3\right)$. At high magnetic fields, Phase V is trivially polarised along the $[111]$ direction. On average, half of the plaquettes carry $\pi$-fluxes, resulting in $\langle W_p \rangle = 0$ and $\langle T_{1p/2p} \rangle = 0$ uniformly in every plaquette. \subsection{Robustness of results with other Lattice Settings} We repeat the DMRG calculation on a 2D lattice strip, namely a four-leg honeycomb lattice with cylindrical boundary conditions; see Appendix~\ref{sec:2D} for more details. We find four phases (Fig.~\ref{fig:2Dfluxes}), where Phase II and Phase III are not distinguishable within the finite-size calculation. More importantly, the glass phase is reproduced here. We also repeat the DMRG calculation on the 1D ladder for $J_4=0$, with the other parameters fixed at 1. This creates open boundary conditions between the chains. The result is presented in Appendix~\ref{sec:OtherCouplings}. We find two phases: at $h>0$ we immediately find a crystalline phase (Phase III), followed by the uniform polarized phase (Phase V). The glass phase is absent here. \section{Phase transitions and Fidelity} \begin{figure}[htb!]
\centering \includegraphics[width=1\linewidth]{./figures/Fidelity.pdf} \caption{We plot the quantum Fidelity (defined in the text) as a function of the magnetic field for $N=400$ lattice sites. The vertical dashed lines indicate all five phase transition points, which coincide with those in Figs.~\ref{fig:magGapped} and \ref{fig:AvrgPlq}.} \label{fig:Fidelity} \end{figure} In the absence of a well-defined local order parameter, the phase boundaries and phase transitions are difficult to characterize. In such a scenario, we can study how `orthogonal' different variational ground states are as a function of the control parameter. This information is obtained from the quantum Fidelity analysis. The quantum Fidelity is defined as $F(h)= \left|\langle \psi_0\left(h\right)| \psi_0\left(h+\delta h\right) \rangle \right|$, where $|\psi_0 \left(h\right) \rangle$ is the ground state vector obtained from the DMRG calculation at $h$ \cite{Zanardi2006,Vidal2008,Zhou2008,Yang2008,Mukherjee2012}. It is evident that if the states $|\psi(h)\rangle$ and $|\psi(h+\delta h)\rangle$ are linearly dependent we have $F\rightarrow 1$, if they are completely orthogonal we get $F\rightarrow 0$, and any value in between measures the overlap between the two wavefunctions. At $F=0$, with an infinitesimal change in $h$, the system picks a different local-minimum configuration whose state is orthogonal to the other local-minima states \cite{Castelnovo2010}. Earlier, the vanishing of the Fidelity was captured via the concept of the `Orthogonality Catastrophe' \`a la Anderson in free-fermion systems. This is an infrared catastrophe arising from gapless excitations. The Fidelity can, however, vanish for different reasons, such as emerging glassiness. As shown in Fig.~\ref{fig:Fidelity}, we see that $F\rightarrow 1$ in both uniform phases, of flux (Phase I) and of spin (Phase V), suggesting a unique ground state in these phases. $F$ sharply decreases at the phase boundary between Phases I and II, implying that the vortex gas phase is separated from the uniform phase by a phase transition. Within the Phase II region, the Fidelity does not completely reach 1, suggesting the presence of configurations that partially overlap with the chosen ground state. The most exciting feature is obtained in Phase III (amorphous vortex crystal) and Phase IV (vortex glass), where $F=0$. This clearly indicates the presence of a plethora of local minima whose wavefunctions are orthogonal to the chosen ground states. These local minima are not necessarily degenerate with each other, but lie within the energy fluctuation scale provided by the magnetic field. Such an abundance of local minima hinders dynamics and ergodicity and is responsible for glassiness. For a given value of $h$ in Phases III and IV, we have repeated our DMRG runs many times, and each time the DMRG iterations yield a different flux configuration that is orthogonal to the previous ones. In Phase IV, this behavior hints at a gapless feature of the ground state, as reaffirmed by the calculation of the central charge (not shown). \section{Discussion: Intrinsic Glassiness of Generic Quantum Spin Liquids} Building on previous works on quantum glass physics and the above detailed demonstration of it in a Kitaev model, it is important to ask whether glass physics is proximate to any generic QSL state when a perturbation is introduced to cause dynamics. Various numerical studies have suspected intrinsic glassiness as a function of temperature, and anomalous thermalisation, in the pure Kitaev model.
Localisation features are observed for a range of different couplings in the Kitaev ladder at low fields \cite{Brenig2021}, indicating that the localisation behaviour goes beyond low-field uniform-flux approximations. A phase with non-ergodic dynamics is also observed in the 2D Kitaev model under a quench with a skew magnetic field \cite{Heyl2021}, and without the field for anisotropic couplings \cite{Rademaker2019}. The exact solubility of the Kitaev model is a result of the $N$ local conserved quantities (flux operators) in a honeycomb lattice of $N$ plaquettes. As claimed in Refs.~\onlinecite{Khemani2020,Lee2021}, the local conservation constraint leads to the shattering of the full Hilbert space of dimension 2$^{2N}$ into 2$^N$ sectors of equal dimension. Each sector defines a 2$^N$-dimensional Hilbert space of free neutral fermions. This perfect partitioning of the Hilbert space and the consequent superselection challenge ergodicity and lead to many-body localization, among other phenomena. A complementary view is given by a pure gauge theory (the 3D toric code model) in Ref.~\onlinecite{Chamon2005}. The spins are {\it locally} coupled to a dissipative bosonic bath (due to, say, local displacement fields or a local temperature gradient) to induce dynamics. Yet, there is a tendency for slow relaxation of the ground state due to topological (over-)protection of the emergent low-energy excitations (for instance, the matter field being confined to the gauge flux(es)). Such a state is associated with emergent glassiness (see Appendix~\ref{sec:Strings} for a discussion in the Kitaev model). Caution should be taken that the aforementioned glass behaviors are not always characterized by all the signatures of glass physics. In an RVB state, a spin operator S$^\alpha_i$ acting at a given site produces two spinons, which separate during time evolution. However, spinons, as sources of emergent gauge fields, carry gauge fluxes \cite{baskaran1988gauge}; sometimes both electric and magnetic charges, called dyons \cite{Affleck1986,Baskaran2003Skyrmion}. In the context of Z$_2$ spin liquids, Z$_2$ flux excitations are also present \cite{Senthil2001}. The net gauge fluxes created by the spin operators are zero, even though the spin operators are gauge invariant. Flux attachment endows spinons with fractional exchange statistics in two dimensions \cite{Read1989}. This is also transparent in the Kalmeyer-Laughlin chiral spin liquid state \cite{Kalmeyer1987,Laughlin1990} and later works, where a low-energy spinon carries a vison or a meron (half-Skyrmion) \cite{Baskaran2003Skyrmion} or SU(2) gauge fluxes \cite{Lee2000}. Any external degree of freedom which couples directly to the constituent local spin degree of freedom can generate (extended) topologically-protected excitations that do not disappear easily \cite{Hart2021}. Consequently, we expect glassiness in spin liquids in general. Anomalous behaviour may generally arise in the non-linear susceptibilities, as magnetic fields couple directly to the spin operator via the Zeeman coupling. Glassiness arising from emergent gauge fluxes and neutral fermions will also be visible in experiments involving field quenches and other non-equilibrium analyses. Femtosecond laser pulses can be used to probe glassiness in Kitaev spin liquid materials. \section{Conclusions} Our detailed DMRG study of the 1D Kitaev model in a magnetic field reveals an intriguing phase diagram with five phases, and among them we discover a glass phase.
The model was also studied earlier with exact diagonalization (ED) \cite{Gordon2019} as well as with the iDMRG method \cite{Sorensen2021}. But numerical studies of the 2D Kitaev model have the disadvantage of an even smaller number of plaquettes than the 1D ladder counterpart, and hence the phases distinguished by the behavior of the flux operators may not be discernible due to boundary effects. The earlier DMRG study found three phases, which correspond to our Phase I, our Phases II, III, and IV combined, and Phase V. We are able to distinguish between the vortex gas, crystal, and glass phases within the otherwise known $U(1)$ QSL phase thanks to the detailed analysis of the vortex operators as well as the Fidelity calculation. We find evidence of gapless excitations in the vortex glass phase, but not in the gas and crystal phases. How robust is our phase diagram beyond a two-leg ladder geometry and beyond the limitations of the DMRG studies? A complete answer to this question is not yet known. We have, however, repeated the DMRG calculation on a four-leg ladder geometry, as given in the Appendix. Here we find four phases: Phase I, Phase III, Phase IV, and Phase V. This means the boundary between Phase II (vortex gas) and Phase III (vortex crystal) is not discernible. However, the vortex glass of present interest is well reproduced. Numerical software is now available for finite-temperature calculations within the DMRG and tensor-network formalisms. A future extension of our calculation to finite temperature will shed light on the possibility of BKT-like physics for the $Z_2$ vortices, as well as on the stability of the glass phase against thermal broadening. \textit{Note:} As we were finishing this manuscript, we came across an interesting paper by Zheng Yan et al. \cite{sachdev2023}, where they report the numerical finding of emergent glassiness in disorder-free Rydberg atom arrays in 2D. \section*{Acknowledgements} GB thanks B. Shivaram for discussions on experimental results from his group on the anomalous non-linear susceptibility in $\alpha$-RuCl$_3$, and Tarun Grover and Mathew Fisher for discussions. We thank Vijay Shenoy for suggesting the Fidelity calculation. GB acknowledges continuing support from the Institute of Mathematical Sciences, the Indian Institute of Technology in Chennai, India, and the Perimeter Institute for Theoretical Physics at Waterloo, ON, Canada. GB's research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. TD acknowledges research funding from S.E.R.B. Department of Science and Technology, India, under I.R.H.P.A Grant No. IPA/2020/000034 and acknowledges the computational facility at S.E.R.C. Param Pravega under NSM grant No. DST/NSM/R\&D HPC Applications/2021/39. KBY thanks ICTS for the accommodation during the program ``Frustrated Metals and Insulators'' (code: ICTS$/$frumi2022$/$9).
{ "arxiv_id": "2302.14351", "language": "en", "timestamp": "2023-03-01T02:10:37", "url": "https://arxiv.org/abs/2302.14351", "yymm": "2302" }
\section{Introduction} In this paper we study the (nonlocal) torsional rigidity in the ambient setting of random walk spaces. Important examples of these spaces are locally finite weighted graphs, finite Markov chains and nonlocal operators on domains in $\R^N$ where the jumps are driven by a non-negative integrable and radially symmetric kernel (see~\cite{MST0} and \cite{MSTBook}). In the classical local setting, the torsional rigidity of a Lebesgue subset of $\mathbb{R}^N$ has been, and still is, a source of interesting problems. Let us consider an isotropic elastic cylindrical beam in $\mathbb{R}^3$ whose cross-section, perpendicular to the $z$-axis, is an open bounded domain $D \subset \R^2$. The {\it torsional rigidity problem} (see e.g. \cite{Sokolnikoff}) is to find the shape of the cross-section $D$ which provides the greatest {\it torsional rigidity}, under an area constraint, when a torque is applied around the $z$-axis. It was conjectured by A. Saint-Venant in 1856 that the simply connected cross-section with maximal torsional rigidity is the circle, and this was proved by G. P\'{o}lya in 1948. The distribution of stress generated in the beam due to the applied torque is determined by the {\it stress function} $u_D$, the unique positive weak solution of the Dirichlet problem \begin{equation}\label{DPTorsion} \left\{ \begin{array}{ll} - \Delta u_D = 1 \quad \hbox{in} \ D \\[10pt] u_D= 0 \quad \hbox{on} \ \partial D. \end{array} \right. \end{equation} Notice that the function $u_D$ is also the unique minimizer of the {\it torsional energy} $$E(D) = \min_{v\in W^{1,2}_0(D)} \frac12 \int_D |\nabla v|^2 dx - \int_D v dx.$$ The total resultant torque due to this stress function is called the {\it torsional rigidity} and is expressed as $$T(D):= \int_D u_D(x) dx$$ or equivalently (see~\cite{P1} or \cite{Bandle}) \begin{equation}\label{var1}T(D) = \displaystyle\max_{v\in W^{1,2}_0(D)\setminus\{0\}}\frac{\displaystyle \left(\int_D v dx \right)^2}{\displaystyle\int_D |\nabla v|^2 dx}. \end{equation} Throughout this paper, we adopt the following notation. If $D$ is open in $\R^N$ with $0 < \vert D \vert < \infty$ then $D^*$ is the ball in $\R^N$ centered at the origin with $\vert D^* \vert = \vert D \vert$. Furthermore, $B_R$ is a ball with radius $R$. We put $\omega_N = \vert B_1 \vert$. The {\it Saint-Venant inequality} reads, for $D$ a bounded domain, as follows: $$T(D) \leq T(D^*).$$ This inequality was established by G. P\'{o}lya \cite{P1} using symmetrization methods (see also E. Makai \cite{M}). On the other hand, the {\it Faber-Krahn inequality} establishes that $$\lambda_1(D^*) \leq \lambda_1(D),$$ where $\lambda_1(D)$ is the lowest $\lambda$ for which the eigenvalue problem \begin{equation}\label{Eigen} \left\{ \begin{array}{ll} - \Delta u = \lambda u, \quad \hbox{in} \ D \\[10pt] u = 0, \quad \hbox{on} \ \partial D. \end{array} \right. \end{equation} admits a nontrivial solution. The first proof of the Faber-Krahn inequality was given by P\'{o}lya and Szeg\"o in \cite{PS1}, based on the spherically symmetric decreasing rearrangement. Since $\lambda_1(D)$ is the minimum of the Rayleigh quotient $$\lambda_1(D) = \min_{v\in W^{1,2}_0(D)\setminus\{0\}}\frac{\displaystyle \int_D |\nabla v|^2 dx}{\displaystyle\int_D v^2 dx },$$ it is easy to see (see, for example, \cite{BBV}) that \begin{equation}\label{lambdatorsionlocal1}\lambda_1(D)\le\frac{|D|}{T(D)}. \end{equation} Let $D \subset \R^N$ be an open bounded domain.
The {\it spectral heat content} of $D$ is given by $$\mathbb{Q}_D(t):= \int_D v_D(x, t) dx$$ where $v_D$ is the solution of the Dirichlet problem $$\left\{ \begin{array}{lll} \displaystyle\frac{\partial v_D}{\partial t}(x, t) = \Delta v_D(x,t), \quad &\hbox{if} \ (x,t) \in D \times (0, \infty), \\[10pt] v_D(x, t) =0, \quad &\hbox{if} \ (x,t) \in \partial D \times (0, \infty), \\[10pt] v_D(x,0) = \1_D(x), \quad &\hbox{if} \ x\in D. \end{array}\right. $$ $\mathbb{Q}_D(t)$ represents the amount of heat contained in $D$ at time $t$ when $D$ has initial temperature $1$ and the boundary of $D$ is kept at temperature $0$ for all $t >0$. The functions $u_D$ and $v_D$ have a probabilistic interpretation (see for instance \cite{BBC}). For this, let $(B(s), s \geq 0, \mathbb{P}_x, x \in\R^N)$ be a Brownian motion associated with the Laplacian on $\R^N$, and let $\tau$ be the first exit time from $D$: $$\tau = \inf \{ s \geq 0 \ : \ B(s) \not\in D \}.$$ Then \begin{equation}\label{exp1}u_D(x) = \mathbb{E}_x[\tau], \quad x \in D, \end{equation} where $\mathbb{E}_x$ denotes expectation with respect to $ \mathbb{P}_x$, and \begin{equation}\label{exp2}v_D(x,t) = \mathbb{P}_x[\tau >t], \quad x \in D, \ t>0.\end{equation} For $j \in \N$ the {\it sequence of exit-moments} of $D$ is defined as $$EM_j(D):= \int_D \mathbb{E}_x[\tau^j] \, dx.$$ Notice that, by \eqref{exp1}, \begin{equation}\label{exp3}T(D) = EM_1(D). \end{equation} Using \eqref{exp2}, we can express the moments of the exit time in terms of $v_D$ as \begin{equation}\label{exp4} \mathbb{E}_x[\tau^j] = j \int_0^\infty t^{j-1} v_D(x, t) dt. \end{equation} Integrating in \eqref{exp4} and using Fubini's Theorem, we see that the sequence of exit-moments can be expressed as moments of the heat content: \begin{equation}\label{exp5} EM_j(D) = j \int_0^\infty t^{j-1} \mathbb{Q}_D(t) dt. \end{equation} In particular, by \eqref{exp3}, we have \begin{equation}\label{exp6} T(D) = \int_0^\infty \mathbb{Q}_D(t) dt. \end{equation} Our aim is to study the torsional rigidity in the general framework of random walk spaces. We obtain the nonlocal versions of the previous local results \eqref{var1}, \eqref{lambdatorsionlocal1}, \eqref{exp3} and \eqref{exp6}. In particular, we give a precise characterization of the nonlocal torsional rigidity of a set, and of all the nonlocal exit moments, using only probabilistic quantities involving the set, see~\eqref{specheatcont01t02} and~\eqref{Nspecheatcont01t02mom}, and we recover the first eigenvalue of the nonlocal Laplacian with homogeneous Dirichlet boundary conditions, when it exists, by a limit formula involving such quantities, see~\eqref{pueq010}. For the random walk in $\R^N$ associated with a nonsingular kernel, we obtain a nonlocal version of the Saint-Venant inequality and, under rescaling, we recover the classical Saint-Venant inequality. We also obtain the variational characterization of the nonlocal $p$-torsional rigidity. We relate the nonlocal $p$-torsional rigidity of a set to its $1$-Cheeger and $p$-Cheeger constants in~\eqref{ChegeerT1}, and as a consequence we prove that the nonlocal $1$-Cheeger constant of a set is the limit, as $p\to 1^+$, of the inverses of its nonlocal $p$-torsional rigidities, see~\eqref{ChegeerT2}. See also~\eqref{ChegeerT2dd01} for another limit attaining the nonlocal $1$-Cheeger constant by means of nonlocal Poincar\'{e} constants. We also obtain a nonlocal version of P\'{o}lya-Makai-type inequalities.
To the best of our knowledge, most of the results we obtain are new even for the particular cases of locally finite weighted graphs and nonlocal problems in domains of $\mathbb{R}^N$. Finally, we relate the torsional rigidity given here for graphs to the torsional rigidity on metric graphs stated in~\cite{MP}. \section{Preliminaries} \subsection{Random walk spaces}\label{RWS1} We recall some concepts and results about random walk spaces given in \cite{MST0}, \cite{MST2} and \cite{MSTBook}. Let $(X,\mathcal{B})$ be a measurable space such that the $\sigma$-field $\mathcal{B}$ is countably generated. A random walk $m$ on $(X,\mathcal{B})$ is a family of probability measures $(m_x)_{x\in X}$ on $\mathcal{B}$ such that $x\mapsto m_x(B)$ is a measurable function on $X$ for each fixed $B\in\mathcal{B}$. The notation and terminology chosen in this definition come from Ollivier's paper \cite{O}. As noted in that paper, geometers may think of $m_x$ as a replacement for the notion of balls around $x$, while in probabilistic terms we can rather think of these probability measures as defining a Markov chain whose transition probability from $x$ to $y$ in $n$ steps is \begin{equation} \displaystyle dm_x^{*n}(y):= \int_{z \in X} dm_z(y)dm_x^{*(n-1)}(z), \ \ n\ge 1 \end{equation} and $m_x^{*0} = \delta_x$, the Dirac measure at $x$. \begin{definition}\label{convolutionofameasure}{\rm Let $m$ be a random walk on $(X,\mathcal{B})$ and let $\mu$ be a $\sigma$-finite measure on $X$. The convolution of $\mu$ with $m$ on $X$ is the measure defined as follows: $$\mu \ast m (A) := \int_X m_x(A)d\mu(x)\ \ \forall A\in\mathcal{B},$$ which is the image of $\mu$ by the random walk $m$.} \end{definition} \begin{definition}\label{def.invariant.measure} {\rm If $m$ is a random walk on $(X,\mathcal{B})$, a $\sigma$-finite measure $\nu$ on $X$ is {\it invariant} with respect to the random walk $m$ if $$\nu\ast m = \nu.$$ The measure $\nu$ is said to be {\it reversible} if, moreover, the detailed balance condition $$dm_x(y)d\nu(x) = dm_y(x)d\nu(y) $$ holds true.} \end{definition} \begin{definition}\label{DefMRWSf}{\rm Let $(X,\mathcal{B})$ be a measurable space where the $\sigma$-field $\mathcal{B}$ is countably generated. Let $m$ be a random walk on $(X,\mathcal{B})$ and $\nu$ an invariant measure with respect to $m$. The measurable space together with $m$ and $\nu$ is then called a random walk space and is denoted by $[X,\mathcal{B},m,\nu]$.} \end{definition} If $(X,d)$ is a Polish metric space (a separable completely metrizable topological space), $\mathcal{B}$ is its Borel $\sigma$-algebra and $\nu$ is a Radon measure (i.e. $\nu$ is inner regular and locally finite), then we denote $[X,\mathcal{B},m,\nu]$ as $[X,d,m,\nu]$ and call it a metric random walk space. \begin{definition}\label{def.m.connected.random.walk.space}{\rm Let $[X,\mathcal{B},m,\nu]$ be a random walk space. We say that $[X,\mathcal{B},m,\nu]$ is $m$-connected if, for every $D\in \mathcal{B}$ with $\nu(D)>0$ and $\nu$-a.e. $x\in X$, $$\sum_{n=1}^{\infty}m_x^{\ast n}(D)>0.$$ } \end{definition} \begin{definition}\label{def.m.interaction}{\rm Let $[X,\mathcal{B},m,\nu]$ be a random walk space and let $A$, $B\in\mathcal{B}$. We define the {\it $m$-interaction} between $A$ and $B$ as \begin{equation}\label{m.interaction} L_m(A,B):= \int_A \int_B dm_x(y) d\nu(x)=\int_A m_x(B) d\nu(x). \end{equation} } \end{definition} The following result gives a characterization of $m$-connectedness in terms of the $m$-interaction between sets.
\begin{proposition}\label{connectedness.iff.Lm}(\cite[Proposition 2.11]{MST0}, \cite[Proposition 1.34]{MSTBook}) Let $[X,\mathcal{B},m,\nu]$ be a random walk space. The following statements are equivalent: \begin{itemize} \item[(i)] $[X,\mathcal{B},m,\nu]$ is $m$-connected. \item[(ii)] If $ A,B\in\mathcal{B}$ satisfy $A\cup B=X$ and $L_m(A,B)= 0$, then either $\nu(A)=0$ or $\nu(B)=0$. \item[(iii)] If $A\in \mathcal{B}$ is a $\nu$-invariant set then either $\nu(A)=0$ or $\nu(X\setminus A)=0$. \end{itemize} \end{proposition} \begin{definition}\label{defomegaconnected} Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space, and let $\Omega\in\mathcal{B}$ with $\nu(\Omega)>0$. We denote by $\mathcal{B}_\Omega$ the following $\sigma$-algebra: $$\mathcal{B}_\Omega:=\{B\in\mathcal{B} \, : \, B\subset \Omega\}.$$ We say that $\Omega$ is {\it $m$-connected (with respect to $\nu$)} if $L_m(A,B)>0$ for every pair of non-$\nu$-null sets $A$, $B\in \mathcal{B}_\Omega$ such that $A\cup B=\Omega$. \end{definition} Let us now see some examples of random walk spaces. \begin{example}\label{example.nonlocalJ} \rm Consider the metric measure space $(\R^N, d, \mathcal{L}^N)$, where $d$ is the Euclidean distance and $\mathcal{L}^N$ the Lebesgue measure on $\R^N$ (which we will also denote by $|.|$). For simplicity, we will write $dx$ instead of $d\mathcal{L}^N(x)$. Let $J:\R^N\to[0,+\infty[$ be a measurable, nonnegative and radially symmetric function verifying $\int_{\R^N}J(x)dx=1$. Let $m^J$ be the following random walk on $(\R^N,d)$: $$m^J_x(A) := \int_A J(x - y) dy \quad \hbox{ for every $x\in \R^N$ and every Borel set } A \subset \R^N .$$ Then, applying Fubini's Theorem, it is easy to see that the Lebesgue measure $\mathcal{L}^N$ is reversible with respect to $m^J$. Therefore, $[\R^N, d, m^J, \mathcal{L}^N]$ is a reversible metric random walk space. \end{example} \begin{example}[Weighted discrete graphs]\label{example.graphs} \rm Consider a locally finite weighted discrete graph $$G = (V(G), E(G)),$$ where $V(G)$ is the vertex set, $E(G)$ is the edge set and each edge $(x,y) \in E(G)$ (we will write $x\sim y$ if $(x,y) \in E(G)$) has a positive weight $w_{xy} = w_{yx}$ assigned. Suppose further that $w_{xy} = 0$ if $(x,y) \not\in E(G)$. Note that there may be loops in the graph, that is, we may have $(x,x)\in E(G)$ for some $x\in V(G)$ and, therefore, $w_{xx}>0$. Recall that a graph is locally finite if every vertex is only contained in a finite number of edges. A finite sequence $\{ x_k \}_{k=0}^n$ of vertices of the graph is called a {\it path} if $x_k \sim x_{k+1}$ for all $k = 0, 1, ..., n-1$. The {\it length} of a path $\{ x_k \}_{k=0}^n$ is defined as the number $n$ of edges in the path. With this terminology, $G = (V(G), E(G))$ is said to be {\it connected} if, for any two vertices $x, y \in V$, there is a path connecting $x$ and $y$, that is, a path $\{ x_k \}_{k=0}^n$ such that $x_0 = x$ and $x_n = y$. Finally, if $G = (V(G), E(G))$ is connected, the {\it graph distance} $d_G(x,y)$ between any two distinct vertices $x, y$ is defined as the minimum of the lengths of the paths connecting $x$ and $y$. Note that this metric is independent of the weights. For $x \in V(G)$ we define the weight at $x$ as $$d_x:= \sum_{y\sim x} w_{xy} = \sum_{y\in V(G)} w_{xy},$$ and the neighbourhood of $x$ as $N_G(x) := \{ y \in V(G) \, : \, x\sim y\}$. Note that, by the definition of a locally finite graph, the sets $N_G(x)$ are finite.
When all the weights are $1$, $d_x$ coincides with the degree of the vertex $x$ in the graph, that is, the number of edges containing $x$. For each $x \in V(G)$ we define the following probability measure \begin{equation}\label{discRW}m^G_x:= \frac{1}{d_x}\sum_{y \sim x} w_{xy}\,\delta_y. \end{equation} It is not difficult to see that the measure $\nu_G$ defined as $$\nu_G(A):= \sum_{x \in A} d_x, \quad A \subset V(G),$$ is a reversible measure with respect to this random walk. Therefore, $[V(G),\mathcal{B},m^G,\nu_G]$ is a reversible random walk space, where $\mathcal{B}$ is the $\sigma$-algebra of all subsets of $V(G)$. Moreover, $[V(G),d_G,m^G,\nu_G]$ is a reversible metric random walk space. \end{example} \begin{example}\label{example.restriction.to.Omega} \rm Given a random walk space $[X,\mathcal{B},m,\nu]$ and $\Omega \in \mathcal{B}$ with $\nu(\Omega) > 0$, let $$m^{\Omega}_x(A):=\int_A d m_x(y)+\left(\int_{X\setminus \Omega}d m_x(y)\right)\delta_x(A) \quad \hbox{ for every } A\in\mathcal{B}_\Omega \hbox{ and } x\in\Omega. $$ Then, $m^{\Omega}$ is a random walk on $(\Omega,\mathcal{B}_\Omega)$ and it is easy to see that $\nu \res \Omega$ is invariant with respect to $m^{\Omega}$. Therefore, $[\Omega,\mathcal{B}_\Omega,m^{\Omega},\nu \res \Omega]$ is a random walk space. Moreover, if $\nu$ is reversible with respect to $m$ then $\nu \res \Omega$ is reversible with respect to $m^{\Omega}$. Of course, if $\nu$ is a probability measure we may normalize $\nu \res \Omega$ to obtain the random walk space $$\left[\Omega,\mathcal{B}_\Omega,m^{\Omega}, \frac{1}{\nu(\Omega)}\nu \res \Omega \right].$$ Note that, if $[X,d,m,\nu]$ is a metric random walk space and $\Omega$ is closed, then $[\Omega,d,m^{\Omega},\nu \res \Omega]$ is also a metric random walk space, where we abuse notation and denote by $d$ the restriction of $d$ to $\Omega$. In particular, in the context of Example \ref{example.nonlocalJ}, if $\Omega$ is a closed and bounded subset of $\R^N$, we obtain the metric random walk space $[\Omega, d, m^{J,\Omega},\mathcal{L}^N\res \Omega]$ where $m^{J,\Omega} := (m^J)^{\Omega}$; that is, $$m^{J,\Omega}_x(A):=\int_A J(x-y)dy+\left(\int_{\R^N\setminus \Omega}J(x-z)dz\right)\delta_x(A)$$ for every Borel set $A \subset \Omega$ and $x\in\Omega$. \end{example} \subsection{The nonlocal gradient, divergence and Laplace operators}\label{nonlocal.notions.1.section} Let us introduce the nonlocal counterparts of some classical concepts. \begin{definition}\label{nonlocalgraddiv}{\rm Let $[X,\mathcal{B},m,\nu]$ be a random walk space. Given a function $f: X \rightarrow \R$ we define its {\it nonlocal gradient} $\nabla f: X \times X \rightarrow \R$ as $$\nabla f (x,y):= f(y) - f(x) \quad \forall \, x,y \in X.$$ Moreover, given $\z : X \times X \rightarrow \R$, its {\it $m$-divergence} ${\rm div}_m \z : X \rightarrow \R$ is defined as $$({\rm div}_m \z)(x):= \frac12 \int_{X} (\z(x,y) - \z(y,x)) dm_x(y).$$ } \end{definition} We define the (nonlocal) Laplace operator as follows. \begin{definition}\label{deflap1310}{\rm Let $[X,\mathcal{B},m,\nu]$ be a random walk space. We define the {\it $m$-Laplace operator} (or {\it $m$-Laplacian}) from $L^1(X,\nu)$ into itself as $\Delta_m:= M_m - I$, i.e., $$\Delta_m f(x)= \int_X f(y) dm_x(y) - f(x) = \int_X (f(y) - f(x)) dm_x(y), \quad x\in X,$$ for $f\in L^1(X,\nu)$.
} \end{definition} Note that $$\Delta_m f (x) = {\rm div}_m (\nabla f)(x).$$ In the case of the random walk space associated with a locally finite weighted discrete graph $G=(V,E)$ (as defined in Example~\ref{example.graphs}), the $m^G$-Laplace operator coincides with the graph Laplacian (also called the normalized graph Laplacian) studied by many authors (see, for example, \cite{BJ}, \cite{BJL}, \cite{DK}, \cite{Elmoatazetal}, \cite{Hafiene}): $$\Delta_{m^G} u(x):=\frac{1}{d_x}\sum_{y\sim x}w_{xy}(u(y)-u(x)), \quad u\in L^2(V,\nu_G), \ x\in V .$$ In \cite{MST2} (see also \cite{MSTBook}) we introduce the following concepts and prove the following facts. We set $$BV_m(X):= \left\{ f: X \rightarrow \R \ \hbox{measurable} \ : \ \int_{X \times X} \vert \nabla f(x,y) \vert \, d(\nu\otimes m_x)(x,y) < \infty \right\},$$ and for $f \in BV_m(X)$ we define its {\it $m$-total variation} as $$TV_m(f):= \frac12 \int_{X \times X} \vert \nabla f(x,y) \vert \, d(\nu\otimes m_x)(x,y).$$ For a set $E \in \mathcal{B}$ such that $\1_E \in BV_m(X)$, we define its {\it $m$-perimeter} as $$P_m(E):= TV_m(\1_E) = L_m(E, X \setminus E).$$ If $\nu(E)<+\infty$ then \begin{equation}\label{secondf021}\displaystyle P_m(E)=\nu(E) -\int_E\int_E dm_x(y) d\nu(x). \end{equation} The following {\it coarea formula} holds: \begin{equation}\label{coarea} TV_m(f) = \int_{-\infty}^{+\infty} P_m( \{ x\in X : f(x) >t \}) dt,\quad\hbox{for }f \in BV_m(X). \end{equation} Furthermore, we introduce the following nonlocal concept of mean curvature. Let $E \in \mathcal{B}$ with $\nu(E)>0$. For a point $x \in X$ we define the {\it $m$-mean curvature of $\partial E$ at $x$} as \begin{equation}\label{defcurdefdef}H^m_{\partial E}(x):= \int_{X} (\1_{X \setminus E}(y) - \1_E(y)) dm_x(y).\end{equation} Observe that \begin{equation}\label{defcur}H^m_{\partial E}(x) = 1 - 2 \int_E dm_x(y).\end{equation} Having in mind \eqref{secondf021}, we have that, if $\nu(E)<+\infty$, $$\int_E H^m_{\partial E}(x) d\nu(x) = \int_E \left( 1 - 2 \int_E dm_x(y) \right) d\nu(x) = \nu(E) - 2\int_E\int_E dm_x(y) d\nu(x)$$ $$ = P_m(E) - \int_E\int_E dm_x(y) d\nu(x) = 2P_m(E) -\nu(E).$$ Consequently, \begin{equation}\label{1secondf021}\displaystyle \int_E H^m_{\partial E}(x) d\nu(x)=2P_m(E) -\nu(E), \end{equation} and \begin{equation}\label{pararm01}\frac{1}{\nu(E)}\int_E H_{\partial E}^m(x)d\nu(x)=2\frac{P_m(E)}{\nu(E)}-1. \end{equation} \subsection{Schwarz’s symmetrization} Let $E \subset \R^N$ be a measurable set of finite measure, and let $\1_E$ be its characteristic function. The {\it symmetric rearrangement} of $E$ is the ball $E^*$ centered at zero with $\vert E^* \vert = \vert E \vert$, i.e., with radius $\left(\frac{ \vert E \vert }{\omega_N} \right)^{\frac{1}{N}}$, where $\omega_N$ denotes the volume of the $N$-dimensional unit ball. For a non-negative measurable function $f : \R^N \rightarrow \R$ vanishing at infinity, the {\it Schwarz’s symmetrization} of $f$ is $$f^*(x):= \int_0^\infty \1_{\{ f > s \}^*}(x) ds,$$ where, by definition, $(\1_E)^* = \1_{E^*}$. Thus, the level sets of $f^*$ are the rearrangements of the level sets of $f$, implying the equimeasurability property $$\vert \{ x \ : \ f^*(x) >s \} \vert = \vert \{ x \ : \ f(x) >s \} \vert.$$ The Schwarz symmetrization $f^*$ of a function $f$ inherits many measure-geometric properties from its source function $f$ (see \cite{Bandle}). It also fulfils some optimization properties with respect to integration.
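As a toy illustration of these two features, equimeasurability and the optimizing role in pairings, consider the following one-dimensional discrete analogue (a sketch for illustration only; the results of this paper use the Schwarz symmetrization in $\R^N$ described above): sorting a nonnegative finite sequence in decreasing order preserves the size of its level sets and can only increase the pairing with another decreasingly sorted sequence.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
f = rng.random(10)
g = rng.random(10)

f_star = np.sort(f)[::-1]          # decreasing rearrangement of f
g_star = np.sort(g)[::-1]          # decreasing rearrangement of g

# Equimeasurability: the level sets of f and f_star have the same size.
s = 0.5
assert np.sum(f > s) == np.sum(f_star > s)

# Discrete analogue of the Hardy-Littlewood inequality stated below.
assert f @ g <= f_star @ g_star + 1e-12
print(f @ g, f_star @ g_star)
\end{verbatim}
The inequalities that follow make this precise in $\R^N$.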
We will make use of the following inequalities (see \cite{Liebloss}): the {\it Hardy-Littlewood inequality}, \begin{equation}\label{HLineq} \int_{\R^N} f_1(x) f_2(x) dx \leq \int_{\R^N} f^*_1(x) f^*_2(x) dx; \end{equation} and the {\it Riesz inequality}, \begin{equation}\label{Rieszineq} \int_{\R^N} f_1(x) \left( \int_{\R^N} f_2(x-y) f_3(y) dy \right) dx \leq \int_{\R^N} f^*_1(x) \left( \int_{\R^N} f^*_2(x-y) f^*_3(y) dy \right) dx. \end{equation} We also need the general rearrangement inequality proved in~\cite{BLLuttinger}: \begin{theorem}[see Theorem~3.8 in \cite{Liebloss}]\label{gririesz} Let $m,k\in \mathbb{N}$, $m\ge k$, and let $f_i$, $i=1,2,...,m$, be nonnegative functions in $\mathbb{R}^N$ vanishing at infinity. Let $B$ be a $k\times m$ matrix with coefficient $b_{ij}$ in row $i$ and column $j$. Then, if $$I(f_1,f_2,...,f_m):=\int_{\mathbb{R}^N}\dots\int_{\mathbb{R}^N}\prod_{j=1}^m f_j\left(\sum_{i=1}^k b_{ij}x_i\right)dx_1\cdots dx_k,$$ we have that $$I(f_1,f_2,...,f_m)\le I(f_1^*,f_2^*,...,f_m^*),$$ where each $f_j^*$ is the symmetric-nonincreasing rearrangement of $f_j$. \end{theorem} \section{Torsional rigidity in random walk spaces} Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space. Given $\Omega \in \mathcal{B}$, we define the {\it $m$-boundary of $\Omega$} by $$\partial_m\Omega:=\{ x\in X\setminus \Omega \, : \, m_x(\Omega)>0 \}$$ and its {\it $m$-closure} as $$\Omega_m:=\Omega\cup\partial_m\Omega.$$ From now on we will assume that $\Omega$ is $m$-connected (which implies that $\Omega_m$ is also $m$-connected) and that $$0<\nu(\Omega)<\nu(\Omega_m)<\infty.$$ \begin{remark}\label{04012301}\rm A first consequence of the above assumptions is that \begin{equation}\label{04012302} 0<P_m(\Omega)<\nu(\Omega). \end{equation} Indeed, if $P_m(\Omega)=0$ then, by~\eqref{secondf021}, $\displaystyle\int_\Omega m_x(\Omega)d\nu(x)=\nu(\Omega),$ and consequently $m_x(\Omega)=1$ for $\nu$-a.e. $x\in\Omega$. Therefore $$L_m(\Omega_m\setminus\Omega,\Omega)=\int_\Omega m_x(\Omega_m\setminus\Omega)d\nu(x)=\int_\Omega (1-m_x(\Omega))d\nu(x)=0,$$ which contradicts that $\Omega_m$ is $m$-connected (we are assuming $0<\nu(\Omega)<\nu(\Omega_m)$). On the other hand, if $P_m(\Omega)=\nu(\Omega)$ then, by~\eqref{secondf021}, $m_x(\Omega)=0$ for $\nu$-a.e. $x\in\Omega$. Therefore $$L_m(\Omega,\Omega)=\int_\Omega m_x(\Omega)d\nu(x)=0,$$ which contradicts that $\Omega$ is $m$-connected. $\blacksquare$ \end{remark} Given $p\geq 1$, we define $$L^p_0(\Omega_m, \nu):= \{ f \in L^p(\Omega_m, \nu) \ : \ f(x)=0 \ \hbox{for a.e.} \ x \in \partial_m\Omega \}.$$ We say that $\Omega$ satisfies a {\it $p$-Poincar\'{e} inequality} if there exists $\lambda >0$ such that \begin{equation}\label{PoincareIneq2} \lambda \int_\Omega \vert f (x) \vert^p d\nu(x)\leq \int_{\Omega_m \times \Omega_m} \vert \nabla f(x,y) \vert^p d(\nu\otimes m_x)(x,y) \end{equation} for all $f \in L^p_0(\Omega_m, \nu)$. Let us point out that the random walk spaces given in Example~\ref{example.nonlocalJ}, for~$J$ with compact support, and in Example~\ref{example.graphs} satisfy a $2$-Poincar\'{e}-type inequality; see~\cite{ElLibro, MSTBook}. In this section we will assume that $\Omega$ satisfies a $2$-Poincar\'{e} inequality. As a consequence of the results in \cite{ST0} (see also~\cite{MSTBook}), there is a unique solution of the following homogeneous Dirichlet problem for the $m$-Laplacian \begin{equation}\label{torsioneq} \left\{\begin{array}{ll} -\Delta_m f_\Omega =1&\hbox{in } \Omega,\\[10pt] f_\Omega =0&\hbox{on }\partial_m\Omega; \end{array}\right.
\end{equation} that is, \begin{equation}\label{torsioneqti} \left\{\begin{array}{ll}\displaystyle -\int_{\Omega_m} \left( f_\Omega(y)-f_\Omega(x)\right) dm_x(y)=1, & x\in \Omega, \\ \\ f_\Omega(x)=0,&x\in \partial_m\Omega. \end{array}\right. \end{equation} We denote this unique solution by $f_\Omega$ and call it the {\it $m$-stress function} of $\Omega$. By the comparison principle given in~\cite{ST0}, we have that $f_\Omega \ge 0.$ \begin{definition}{\rm The {\it $m$-torsional rigidity of~$\Omega$}, $T_m(\Omega)$, is defined as the $L^1(\nu)$-norm of the torsion function: $$T_m(\Omega)= \int_\Omega f_\Omega (x) d \nu(x).$$} \end{definition} In the local case, it is well known (see, for example, \cite{BBP}) that $$T(B_R) = \frac{\omega_N}{N(N+2)}R^{N+2}.$$ Then, $$T(B_R) \geq \vert B_R \vert \iff \frac{\omega_N}{N(N+2)}R^{N+2} \geq R^N \omega_N\iff R \geq \sqrt{N(N+2)}.$$ Contrary to the local setting, the $m$-torsional rigidity of $\Omega$ always satisfies \begin{equation}\label{torsiomass}T_m(\Omega)\ge \nu(\Omega).\end{equation} Indeed, by the first equation in~\eqref{torsioneq}, for $x\in \Omega$, since $m_x(\Omega_m)=1$, we have \begin{equation}\label{recu01}f_\Omega(x)=1+\int_\Omega f_\Omega(y)dm_x(y). \end{equation} Hence $$T_m(\Omega)= \int_\Omega f_\Omega (x) d \nu(x) = \nu(\Omega) + \int_\Omega \int_\Omega f_\Omega(y)dm_x(y) d \nu(x) \geq \nu(\Omega).$$ We will give in Theorem~\ref{proptors01} a detailed description of $T_m(\Omega)$ in terms of probabilistic quantities associated with $\Omega$ via the random walk. The next result is the nonlocal version of equation \eqref{var1}. It is a particular case of Theorem~\ref{Charact1}. \begin{theorem}\label{$T_m$formula} We have \begin{equation}\label{thest01}\displaystyle T_m(\Omega)=\max_{ \hbox{\tiny$\begin{array}{c}g\in L^2(\Omega_m)\setminus\{0\}\\ g=0\hbox{ on }\partial_m\Omega \end{array}$} }\frac{\displaystyle\left(\int_\Omega gd\nu\right)^2}{\displaystyle \frac12\iint_{\Omega_m\times \Omega_m}|\nabla g(x,y)|^2dm_x(y)d\nu(x)}, \end{equation} and the maximum is attained at $f_\Omega$. \end{theorem} In \cite{MSTBook} (see also~\cite{redbook}) we introduce the {\it spectral $m$-heat content} of $\Omega$ as $$ \mathbb{Q}_\Omega^m(t) =\int_\Omega v(t,x)d\nu(x),$$ where $v(t,x)$ is the solution of the {\it homogeneous Dirichlet problem for the $m$-heat equation}: \begin{equation}\label{CPNL1dir} \left\{ \begin{array}{ll} \displaystyle\frac{\partial v}{\partial t}(t,x) = \displaystyle\int_{\Omega_m} (v(t,y) - v(t,x))dm_x(y), &(t,x)\in (0, +\infty)\times \Omega, \\[12pt] v(t,x)=0,&(t,x)\in(0, +\infty)\times\partial_m \Omega, \\[12pt] v(0,x) =1,&x\in \Omega.\end{array}\right.
\end{equation} Moreover, we have (see~\cite{MSTBook} and~\cite{redbook}): \begin{equation}\label{specheatcont01t01} \mathbb{Q}_\Omega^m(t) = \sum_{k=0}^{+\infty} g_{m,\Omega}(k) \frac{e^{-t}t^k}{k!}, \end{equation} where, for $k\in \mathbb{N}\cup\{0\}$, $g_{m,\Omega}(k)$ is the measure of the amount of individuals that, starting in $\Omega$, end up in $\Omega$ after $k$ jumps without ever leaving $\Omega$; that is: $$g_{m,\Omega}(0)= \nu (\Omega)$$ and $$g_{m,\Omega}(1)= \int_\Omega\int_\Omega dm_x (y)d\nu(x) = L_m(\Omega, \Omega), $$ $$g_{m,\Omega}(2)= \int_\Omega\int_\Omega\int_\Omega dm_y(z)dm_x (y)d\nu(x), $$ $$ \vdots$$ \begin{equation}\label{rem001}g_{m,\Omega}(n)= \int_{\hbox{\tiny$\underbrace{\Omega\times...\times\Omega}_n\times\Omega$}}dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1).\end{equation} That is, $\mathbb{Q}_\Omega^m(t)$ is the expected value of the amount of individuals that start in $\Omega$ and end in $\Omega$ at time $t$ without ever leaving $\Omega$, when these individuals move by successively jumping according to $m$ and the number of jumps made up to time $t$ follows a Poisson distribution with rate~$t$. \begin{lemma}\label{lemma-gmn02} We have that \begin{equation}\label{gmn01} \hbox{the sequence $\{g_{m,\Omega}(n) \, : \, n \in \N\}$ is non-increasing.} \end{equation} \end{lemma} \begin{proof} For $n\ge 1$, $$\begin{array}{c} \displaystyle g_{m,\Omega}(n)= \int_{\hbox{\tiny$\underbrace{\Omega\times...\times\Omega}_n\times\Omega$}}dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)\\ \\ \displaystyle =\int_{\hbox{\tiny$\underbrace{\Omega\times...\times\Omega}_{n-1}\times\Omega$}}m_{x_n}(\Omega)dm_{x_{n-1}}(x_{n})\dots dm_{x_1}(x_2)d\nu(x_1)\\ \\ \displaystyle \le\int_{\hbox{\tiny$\underbrace{\Omega\times...\times\Omega}_{n-1}\times\Omega$}} dm_{x_{n-1}}(x_{n})\dots dm_{x_1}(x_2)d\nu(x_1)=g_{m,\Omega}(n-1). \end{array}$$ Then \eqref{gmn01} holds. \end{proof} \begin{remark}\label{rem35}\rm Observe that, by~\eqref{04012302}, we have $g_{m,\Omega}(1) < g_{m,\Omega}(0)$. We also have $$g_{m,\Omega}(2) < g_{m,\Omega}(1).$$ Indeed, using reversibility, $$\begin{array}{l} \displaystyle g_{m,\Omega}(2)= \int_\Omega\int_\Omega\int_\Omega dm_y(z)dm_x (y)d\nu(x) \\ \\ \displaystyle =\int_X\int_X\int_X \1_\Omega(z)\1_\Omega(y)\1_\Omega(x)dm_y(z)dm_x (y)d\nu(x) \\ \\ \displaystyle =\int_X\int_X\int_X \1_\Omega(z)\1_\Omega(y)\1_\Omega(x)dm_x(z)dm_x (y)d\nu(x) \\ \\ \displaystyle = \int_\Omega\int_\Omega m_x(\Omega)dm_x (y)d\nu(x) = \int_\Omega \left(m_x(\Omega)\right)^2 d\nu(x) \\ \\ \displaystyle \le \int_\Omega m_x(\Omega)d\nu(x) = g_{m,\Omega}(1). \end{array} $$ Then, if $g_{m,\Omega}(2) = g_{m,\Omega}(1)$, we have $$\displaystyle\int_\Omega m_x(\Omega)(1-m_x(\Omega))d\nu(x)=0.$$ Hence $\Omega=A\cup B$ up to a $\nu$-null set, where $A:=\{x\in\Omega:m_x(\Omega)=0\}$ and $B:=\{x\in\Omega:m_x(\Omega)=1\}.$ Now, we have $$L_m(A,B)=\int_A m_x(B)d\nu(x)=0,$$ and consequently, since $\Omega$ is $m$-connected, $\nu(A)=0$ or $\nu(B)=0$, which yields a contradiction (recall Remark~\ref{04012301}). $\blacksquare$ \end{remark} Let us now see the nonlocal version of equation \eqref{exp6}. Observe that the second statement in the next result gives a complete description of $T_m(\Omega)$ in terms of the sequence of {\it probabilistic} terms $\{g_{m,\Omega}(n) \, : \, n \in \N\}$.
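Before stating the result, it may help to see these objects on a concrete example. The following minimal sketch works in the weighted-graph setting of Example~\ref{example.graphs}, with a hypothetical weighted path graph and an arbitrary choice of $\Omega$ (both chosen only for illustration): it solves the system \eqref{torsioneqti} directly and compares $T_m(\Omega)$ with a partial sum of $\sum_k g_{m,\Omega}(k)$.
\begin{verbatim}
import numpy as np

# Hypothetical weighted path graph on 6 vertices; Omega = {1,2,3,4},
# so the m-boundary of Omega is {0,5}.
n = 6
w = np.zeros((n, n))
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0 + 0.5 * i

d = w.sum(axis=1)                 # d_x ; nu_G({x}) = d_x
P = w / d[:, None]                # random walk: P[x, y] = m^G_x({y})

Omega = [1, 2, 3, 4]
P_O = P[np.ix_(Omega, Omega)]     # jump probabilities restricted to Omega
nu_O = d[Omega]

# m-stress function: f - P_O f = 1 on Omega, f = 0 on the m-boundary.
f = np.linalg.solve(np.eye(len(Omega)) - P_O, np.ones(len(Omega)))
T_direct = nu_O @ f               # T_m(Omega) = sum_x d_x f_Omega(x)

# g_{m,Omega}(k): mass of walkers staying in Omega for k consecutive jumps.
g, v = [], np.ones(len(Omega))
for k in range(500):
    g.append(nu_O @ v)
    v = P_O @ v

print(T_direct, sum(g))
\end{verbatim}
The two printed values agree up to the truncation of the series, in accordance with \eqref{specheatcont01t02} below.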
\begin{theorem}\label{proptors01} We have \begin{equation}\label{Nspecheatcont01t02} T_m(\Omega)=\int_0^\infty \mathbb{Q}_\Omega^m(t) dt \end{equation} and \begin{equation}\label{specheatcont01t02} T_m(\Omega)=\sum_{k=0}^{+\infty} g_{m,\Omega}(k). \end{equation} \end{theorem} \begin{proof} It is easy to see that if $v$ is the solution of the Dirichlet problem \eqref{CPNL1dir}, then $$f(x):= \int_0^\infty v(t,x) dt$$ is the unique solution $f_\Omega$ of problem \eqref{torsioneq}. Hence, by Fubini's Theorem, $$T_m(\Omega) = \int_\Omega f (x) d \nu(x) = \int_\Omega \int_0^\infty v(t,x) dt\, d\nu(x) = \int_0^\infty \mathbb{Q}_\Omega^m(t) dt.$$ By \eqref{Nspecheatcont01t02} and \eqref{specheatcont01t01}, since all the terms in \eqref{specheatcont01t01} are non-negative, we can interchange the integral and the sum to obtain $$T_m(\Omega)=\int_0^\infty \sum_{k=0}^{+\infty} g_{m,\Omega}(k) \frac{e^{-t}t^k}{k!} dt = \sum_{k=0}^{+\infty} \int_0^\infty g_{m,\Omega}(k) \frac{e^{-t}t^k}{k!} dt = \sum_{k=0}^{+\infty} g_{m,\Omega}(k).$$ \end{proof} As a consequence of \eqref{specheatcont01t02} we have the following result. \begin{corollary} If $\Omega_1 \subset \Omega_2$, then $T_m(\Omega_1) \leq T_m(\Omega_2)$. \end{corollary} Having in mind \eqref{exp5}, we give the following definition. \begin{definition}{\rm We define {\it the sequence of exit-$m$-moments of $\Omega$} as $$EM^m_{j}(\Omega)=j\int_0^{+\infty}t^{j-1}\mathbb{Q}_\Omega^m(t) dt, \quad j\in \N.$$ } \end{definition} Note that, as in~\eqref{exp3}, $$EM^m_{1}(\Omega)=T_m(\Omega).$$ In the next result we also describe explicitly the sequence of exit-$m$-moments in terms of the sequence $\{ g_{m,\Omega}(k) \, : \, k \in \N \}$. In the context of Riemannian manifolds, see~\cite{CLM2017} for other types of expansions. \begin{proposition}\label{proptors02} We have \begin{equation}\label{Nspecheatcont01t02mom} EM^m_{j}(\Omega)=j!\, \sum_{k=0}^{+\infty} \binom{k+j-1}{j-1} g_{m,\Omega}(k),\quad j=1,2,3,... \end{equation} \end{proposition} \begin{proof} Let $j\ge 1$; then $$EM^m_{j}(\Omega)=j\int_0^{+\infty}t^{j-1}\mathbb{Q}_\Omega^m(t)dt =j\int_0^{+\infty}t^{j-1}\sum_{k=0}^{+\infty} g_{m,\Omega}(k) \frac{e^{-t}t^k}{k!}dt.$$ Now we can interchange the integral with the sum to get $$\begin{array}{c}\displaystyle EM^m_j(\Omega)=j\,\sum_{k=0}^{+\infty}g_{m,\Omega}(k)\frac{1}{k!} \int_0^{+\infty} t^{j+k-1}e^{-t}dt\\ \\ \displaystyle=j\,\sum_{k=0}^{+\infty}g_{m,\Omega}(k)\frac{1}{k!} (k+j-1)!=j!\, \sum_{k=0}^{+\infty} \binom{k+j-1}{j-1} g_{m,\Omega}(k). \end{array}$$ \end{proof} Let us now define \begin{equation}\label{Realyq}\lambda_{m,2}(\Omega) = \inf_{\hbox{\tiny$\begin{array}{c}g\in L^2(\Omega_m)\setminus\{0\}\\ g=0\hbox{ on }\partial_m\Omega\end{array}$}}\frac{\displaystyle \frac12\int_{\Omega_m}\int_{\Omega_m} |\nabla g(x,y)|^2 dm_x(y)d\nu(x)}{\displaystyle\int_\Omega g(x)^2 d\nu(x) }.\end{equation} Since we are assuming that $\Omega$ satisfies a $2$-Poincar\'{e}-type inequality, we have $$\lambda_{m,2}(\Omega) >0.$$ And, since $\big||a|-|b|\big|\le |a-b|$ for all $a,b\in \mathbb{R}$, \begin{equation}\label{deflmo}\lambda_{m,2}(\Omega) = \inf_{\hbox{\tiny$\begin{array}{c}g\in L^2(\Omega_m)\setminus\{0\}\\ g\ge 0\hbox{ on }\Omega\\g=0\hbox{ on }\partial_m\Omega\end{array}$}}\frac{\displaystyle \frac12\int_{\Omega_m}\int_{\Omega_m} |\nabla g(x,y)|^2 dm_x(y)d\nu(x)}{\displaystyle \int_\Omega g(x)^2 d\nu(x) }.
\end{equation} Similarly to the local case we have the following nonlocal version of~\eqref{lambdatorsionlocal1} (see Corollary~\ref{corlambdatorsionlocal1nlp} later on): \begin{equation}\label{lambdatorsionlocal1nl}\lambda_{m,2}(\Omega)\le\frac{\nu(\Omega)}{T_m(\Omega)}. \end{equation} We also have that, see~\eqref{04012303}, \begin{equation}\label{da01}\frac{\nu(\Omega)}{T_m(\Omega)}\le\frac{P_m(\Omega)}{\nu(\Omega)}. \end{equation} Observe that, by~\eqref{04012302} we have that $\frac{P_m(\Omega)}{\nu(\Omega)}<1$. Therefore, from~\eqref{da01} and ~\eqref{lambdatorsionlocal1nl}, \begin{equation}\label{da02} 0<\lambda_{m,2}(\Omega)< 1. \end{equation} The following assumption will be used in the next result: {\it There exists a non-null function $f\in L^2(\Omega_m,\nu)$ such that} \begin{equation}\label{eigenproblem}\left\{\begin{array}{l} \displaystyle-\int_{\Omega_m}(f(y)-f(x))dm_x(y)=\lambda_{m,2}(\Omega)f(x),\quad x\in\Omega, \\ \\ f(x)=0,\quad x\in\partial_m\Omega. \end{array}\right. \end{equation} Observe that then the infimum defining $\lambda_{m,2}(\Omega)$ in~\eqref{Realyq} is attained at $f$. We say that $\lambda_{m,2}(\Omega)$ is the first eigenvalue of the $m$-Laplacian with homogeneous Dirichlet boundary conditions with associated eigenfunction $f$. Note that, in fact, there is a non-negative eigenfunction associated to $\lambda_{m,2}(\Omega)$. In the next result we see that it is possible to obtain $\lambda_{m,2}(\Omega)$ via the sequence $\{ g_{m,\Omega}(k) \, : \, k \in \N \}$ that characterize the torsional rigidity $T_m(\Omega)$ (Theorem~\ref{proptors01}) and the exit-$m$-moments (Proposition \ref{proptors02}). \begin{theorem}\label{fk01} Assume $\lambda_{m,2}(\Omega)$ is an eigenvalue of the $m$-Laplacian with homogeneous Dirichlet boundary conditions. Then: \item{1.} $ g_{m,\Omega}(n) > 0\quad\hbox{for all } n \in \N. $ \item{ 2.} Assume moreover that there exists an eigenfunction $f$ associated to $\lambda_{m,2}(\Omega)$ such that \begin{equation}\label{condH} \hbox{ $\alpha \le f\le \widetilde\alpha$ in $\Omega$, for some constants $\alpha,\widetilde\alpha>0$.} \end{equation} Then, \begin{equation}\label{pueq010} \lambda_{m,2}(\Omega)=1-\displaystyle\lim_n\sqrt[n]{\frac{g_{m,\Omega}(2n)} {g_{m,\Omega}(n)}}. \end{equation} \end{theorem} \begin{proof} We have, for a non-negative (non-null) eigenfunction $f$ associated to $\lambda_{m,2}(\Omega)$: \begin{equation}\label{est001} (1-\lambda_{m,2}(\Omega))f(x)=\int_\Omega f(y)dm_x(y),\quad x\in\Omega. 
\end{equation} Now, since $0<\lambda_{m,2}(\Omega)< 1$, we can write~\eqref{est001} as $$f(x)=\frac{1}{1-\lambda_{m,2}(\Omega)}\int_\Omega f(y)dm_x(y).$$ Then, by induction, for $n\in \mathbb{N}$, $$f(x_1)=\frac{1}{(1-\lambda_{m,2}(\Omega))^n}\int_{\Omega\times...\times\Omega}f(x_{n+1})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2);$$ and, then, integrating over $\Omega$ with respect to $d\nu$, we have \begin{equation}\label{expfn}0<\int_\Omega fd\nu=\frac{1}{(1-\lambda_{m,2}(\Omega))^n}\int_{\Omega\times...\times\Omega\times\Omega}f(x_{n+1})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1).\end{equation} Let us see that \begin{equation}\label{mund02}\displaystyle\int_{\Omega\times...\times\Omega\times\Omega}f(x_{n+1})^2dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)\le \int_\Omega f^2d\nu, \end{equation} In fact, by the reversibility of $\nu$ with respect to the random walk, for $n=1$, we have $$\begin{array}{c}\displaystyle \int_{\Omega\times\Omega}f(y)^2 dm_{x}(y)d\nu(x) =\int_{X\times X}f(y)^2\1_\Omega(y)\1_\Omega(x) dm_{x}(y)d\nu(x)\\ \\ \displaystyle =\int_{X\times X}f(x)^2\1_\Omega(x)\1_\Omega(y) dm_{x}(y)d\nu(x)) \displaystyle =\int_{X}f(x)^2\1_\Omega(x)m_x(\Omega) d\nu(x)\\ \\ \displaystyle\le\int_{X}f(x)^2\1_\Omega(x) d\nu(x)=\int_\Omega f^2d\nu. \end{array} $$ For $n=2$, using moreover Fubini's theorem, $$\begin{array}{c}\displaystyle \int_{\Omega\times\Omega\times\Omega}f(z)^2 dm_y(z)dm_{x}(y)d\nu(x)\\ \\ \displaystyle =\int_{X\times X\times X}f(z)^2\1_\Omega(z)\1_\Omega(y)\1_\Omega(x) dm_y(z)dm_{x}(y)d\nu(x)\\ \\ \displaystyle =\int_{X\times X\times X}f(z)^2\1_\Omega(z)\1_\Omega(x)\1_\Omega(y) dm_x(z)dm_{x}(y)d\nu(x)\\ \\ \displaystyle =\int_{X\times X\times X}f(z)^2\1_\Omega(z)\1_\Omega(x)\1_\Omega(y)dm_{x}(y) dm_x(z)d\nu(x) \\ \\ \displaystyle =\int_{X\times X }f(z)^2\1_\Omega(z)\1_\Omega(x)m_x(\Omega)dm_{x}(y) dm_x(z)d\nu(x) \\ \\ \displaystyle \le\int_{X\times X}f(z)^2\1_\Omega(z)\1_\Omega(x) dm_x(z)d\nu(x), \end{array} $$ and now we can use the case $n=1$. The general case follows by induction. 
Then, by \eqref{expfn}, we have $$\begin{array}{c}\displaystyle 0<\int_{\Omega\times...\times\Omega\times\Omega}f(x_{n+1})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)\\ \\ \displaystyle \le \left(\int_{\Omega\times...\times\Omega\times\Omega}f(x_{n+1})^2dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)\right)^{1/2}g_{m,\Omega}(n)^{1/2} \\ \\ \displaystyle \le \left(\int_\Omega f^2d\nu\right)^{1/2}g_{m,\Omega}(n)^{1/2}; \end{array}$$ therefore, $g_{m,\Omega}(n)>0.$ \noindent {\it Proof of 2.} Dividing the expression~\eqref{expfn} for $n$ by the corresponding expression for $n+1$, we get $$1=(1-\lambda_{m,2}(\Omega))\frac{\displaystyle\int_{\Omega\times...\times\Omega\times\Omega}f(x_{n+1})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)} {\displaystyle\int_{\Omega\times\Omega\times...\times\Omega\times\Omega}f(x_{n+2})dm_{x_{n+1}}(x_{n+2})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)}.$$ Therefore, $$1-\lambda_{m,2}(\Omega)=\frac{\displaystyle\int_{\Omega\times\Omega\times...\times\Omega\times\Omega}f(x_{n+2})dm_{x_{n+1}}(x_{n+2})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)} {\displaystyle\int_{\Omega\times...\times\Omega\times\Omega}f(x_{n+1})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)},$$ or equivalently, \begin{equation}\label{tl01} 1-\lambda_{m,2}(\Omega)=\frac{\tau_{n+1}\,g_{m,\Omega}(n+1)}{\tau_{n}\ g_{m,\Omega}(n)}, \end{equation} where \begin{equation}\label{media01}\tau_n=\frac{1}{g_{m,\Omega}(n)}\int_{\Omega\times...\times\Omega\times\Omega}f(x_{n+1})dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1), \end{equation} with $g_{m,\Omega}(n)$ given in~\eqref{rem001}, that is, $$g_{m,\Omega}(n)=\int_{\Omega\times...\times\Omega\times\Omega} dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1).$$ Observe that $\tau_n$ is the average of $g(x_1,x_2,...,x_n,x_{n+1}):=f(x_{n+1})$ in $\Omega\times...\times\Omega\times\Omega$ with respect to the measure $dm_{x_n}(x_{n+1})\dots dm_{x_1}(x_2)d\nu(x_1)$. Since $0<\alpha\le f\le \widetilde\alpha $, we have \begin{equation}\label{re003}\alpha\le \tau_n\le\widetilde{\alpha}.\end{equation} Now, from~\eqref{tl01}, we have that \begin{equation}\label{re001} (1-\lambda_{m,2}(\Omega))^n=\frac{\tau_{2n}\,g_{m,\Omega}(2n)}{\tau_{n}\ g_{m,\Omega}(n)}. \end{equation} Hence, \begin{equation}\label{re002} \log(1-\lambda_{m,2}(\Omega))=\frac{1}{n}\log\left(\frac{\tau_{2n}}{\tau_{n}}\right)+ \log\sqrt[n]{\frac{g_{m,\Omega}(2n)}{g_{m,\Omega}(n)}}. \end{equation} Since by~\eqref{re003}, $\displaystyle \lim_n\frac{1}{n}\log\left(\frac{\tau_{2n}}{\tau_{n}}\right)=0$, taking limits in \eqref{re002} we get~\eqref{pueq010}. \end{proof} \begin{remark}\label{remdeejem}\rm \ \item{ 1.} Let $[\R^N, d, m^J, \mathcal{L}^N]$ be the metric random walk space given in Example~\ref{example.nonlocalJ} with $J$ continuous and compactly supported. For $\Omega$ a bounded domain, the assumption~\eqref{condH} is true, see~\cite[Section 2.1.1]{ElLibro}. \item{ 2.} For weighted discrete graphs, $\lambda_{m^G,2}(\Omega)$ is an eigenvalue with $0<\lambda_{m^G,2}(\Omega)\le 1$ (see~\cite{AGrigor}). Now, since we are assuming that $\Omega$ is $m^G$-connected, $0<\lambda_{m^G,2}(\Omega)< 1.$ And, by connectedness, using~\eqref{est001}, we have that~\eqref{condH} is also true. \item{ 3.} Let us see what can happen if $\Omega$ is not $m^G$-connected. Consider, for example, the weighted graph $G$ with five different vertices $V:=V(G) = \{x_1, x_2, x_3, x_4, x_5 \}$ and $w_{x_i,x_{i+1}}=1$, for $i=1,2,3,4$, and $w_{x_i,x_j}=0$ otherwise.
We have, $$m^G_{x_1} = \delta_{x_2}, m^G_{x_2} = \frac12 \delta_{x_1} + \frac12 \delta_{x_3}, m^G_{x_3} = \frac12 \delta_{x_2} + \frac12 \delta_{x_4}, m^G_{x_4} = \frac12 \delta_{x_3} + \frac12 \delta_{x_5}, m^G_{x_5} = \delta_{x_4},$$ $$ \nu^G= \delta_{x_1} + 2\delta_{x_2} + 2 \delta_{x_3} + 2\delta_{x_4} +\delta_{x_5}.$$ \item{ 3.1} Take $\Omega = \{ x_2,x_4 \}$, which is not $m^G$-connected. It is easy to see that $g_{m^G, \Omega}(n)=0$ for all $n\ge 1.$ And we have that $$T_{m^G}(\Omega) = \nu_G(\Omega)=\nu_G(\{x_2\})+\nu_G(\{x_4\})=T_{m^G}(\{x_2\})+T_{m^G}(\{x_4\}).$$ \item{ 3.2} Take now $\Omega:=\{x_1,x_2,x_4,x_5\}$, which is also not $m^G$-connected. In this case $g_{m^G,\Omega}(n)\neq 0$ for all $n\ge1$, and $$T_{m^G}(\Omega) =T_{m^G}(\{x_1,x_2\})+T_{m^G}(\{x_4,x_5\})>\nu_{G}(\{x_1,x_2\})+\nu_{G}(\{x_4,x_5\})=\nu_G(\Omega).$$ Observe that $\{x_1,x_2\}$ is $m^G$-connected, and $\{x_4,x_5\}$ is also $m^G$-connected. $\blacksquare$ \end{remark} \section{The particular case of a nonlocal operator with non-singular kernel} In this section we study the particular case of the random walk space given in Example~\ref{example.nonlocalJ}, that is, we consider the metric measure space $(\R^N, d, \mathcal{L}^N)$, where $d$ is the Euclidean distance and $\mathcal{L}^N$ the Lebesgue measure on $\R^N$. Let $J:\R^N\to[0,+\infty[$ be a measurable, nonnegative and radially symmetric function verifying $\int_{\R^N}J(x)dx=1$. Let $m^J$ be the random walk $$m^J_x(A) = \int_A J(y-x) dy, \quad x \in \R^N,$$ for which the Lebesgue measure is reversible. We are going to prove a nonlocal version of the Saint-Venant inequality. For this we need the following result. \begin{lemma}\label{buenisimo} Let $\Omega$ be a bounded domain in $\R^N$. If $J$ is radial and non-increasing, then \begin{equation}\label{despargo} g_{m^J,\Omega}(k)\le g_{m^J,\Omega^*}(k)\quad\forall k\ge 0. \end{equation} \end{lemma} \begin{proof} It is obvious that $$g_{m^J,\Omega}(0)=g_{m^J,\Omega^*}(0),$$ and, by the Riesz inequality and having in mind that $J^*=J$ and $(\1_\Omega)^* = \1_{\Omega^*}$, we have $$g_{m^J,\Omega}(1)= \int_\Omega\int_\Omega J(x-y)\, dy\, dx = \int_{\R^N} \1_\Omega(x) \left( \int_{\R^N} J(x-y) \1_\Omega(y) dy \right) dx $$ $$ \leq \int_{\R^N} (\1_\Omega)^*(x) \left( \int_{\R^N} J^*(x-y) (\1_\Omega)^*(y) dy \right) dx $$ $$= \int_{\R^N} \1_{\Omega^*}(x) \left( \int_{\R^N} J(x-y) \1_{\Omega^*}(y) dy \right) dx $$ $$ = \int_{\Omega^*}\int_{\Omega^*} J(x-y)\, dy\, dx =g_{m^J,\Omega^*}(1).$$ Let us now see that $$g_{m^J,\Omega}(k)\le g_{m^J,\Omega^*}(k)\quad\forall k\ge 2.$$ Indeed, for $k=2$, $$g_{m^J,\Omega}(2)=\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\1_\Omega(x)\1_\Omega(y)\1_\Omega(z)J(z-y)J(y-x)dxdydz.$$ Now, since $$\left(\begin{array}{ccc} x\ &y\ &z\end{array}\right) \cdot \left(\begin{array}{ccccc} 1&0&0&0&-1\\ 0&1&0&-1&1\\ 0&0&1&1&0 \end{array} \right) =\left(\begin{array}{ccccc} x\ &y\ &z\ &z\!-\!y\ &y\!-\!x\end{array}\right),$$ choosing the $3\times 5$ matrix $$(b_{ij}):= \left(\begin{array}{ccccc} 1&0&0&0&-1\\ 0&1&0&-1&1\\ 0&0&1&1&0 \end{array} \right),$$ we have $$g_{m^J,\Omega}(2) = I(\1_\Omega, \1_\Omega, \1_\Omega, J,J).$$ Then, by Theorem~\ref{gririesz}, we have $$\begin{array}{l}\displaystyle g_{m^J,\Omega}(2) = I(\1_\Omega, \1_\Omega, \1_\Omega, J,J)\qquad\qquad\\ \\ \displaystyle \qquad\qquad \leq I((\1_\Omega)^*, (\1_\Omega)^*, (\1_\Omega)^*, J^*,J^*) = I(\1_{\Omega^*}, \1_{\Omega^*}, \1_{\Omega^*}, J,J) = g_{m^J,\Omega^*}(2).\end{array}$$ The inequalities for the rest of the $g_{m^J,\Omega}(k)$ are obtained similarly.
\end{proof} \begin{theorem}\label{simetr01} Let $\Omega$ be a bounded measurable subset of $\mathbb{R}^N$ and assume that $J$ is radial and non-increasing. Then, we have the following inequalities: \item[ 1.] $\mathbb{Q}_\Omega^{m^J}(t) \le \mathbb{Q}_{\Omega^*}^{m^J}(t)\quad\forall t\ge 0.$ \item[ 2.] $T_{m^J}(\Omega)\le T_{m^J}(\Omega^*)$ \ \ {\it (Saint-Venant inequality)}. \item[ 3.] $EM^{m^J}_{j}(\Omega)\le EM^{m^J}_{j}(\Omega^*)\quad\forall j\ge 1.$ \end{theorem} \begin{proof} $1.$ It is a consequence of \eqref{specheatcont01t01} and Lemma \ref{buenisimo}. \noindent $2$. It is a consequence of \eqref{specheatcont01t02} and Lemma \ref{buenisimo}. \noindent $3$. It is a consequence of Proposition \ref{proptors02} and Lemma \ref{buenisimo}. \end{proof} \begin{remark} {\rm A Faber-Krahn inequality \begin{equation}\label{pueq02}\lambda_{m^J,2}(\Omega^*)\le \lambda_{m^J,2}(\Omega) \end{equation} can be obtained as a consequence of \cite[Lemma A.2]{FS}. Moreover, assuming that $J$ is decreasing, and assuming also $\lambda_{m^J,2}(\Omega)$ is an eigenvalue, or equivalently the infimum in the Rayleigh quotient $$\lambda_{m^J,2}(\Omega) = \inf_{\hbox{\tiny$\begin{array}{c}g\in L^2(\Omega_m^J)\setminus\{0\}\\ g\ge 0\hbox{ on }\Omega\\g=0\hbox{ on }\partial_m\Omega\end{array}$}}\frac{\displaystyle \frac12\int_{\Omega_m^J}\int_{\Omega_m^J} |\nabla g(x,y)|^2dy dx}{\displaystyle \int_\Omega g(x)^2 dx } $$ is a minimum (we know this is true for $J$ with compact support, which, obviously, is not decreasing), by \cite[Lemma A.2]{FS}, one can also prove $$\lambda_{m^J,2}(\Omega^*) = \lambda_{m^J,2}(\Omega) \iff \Omega \ \hbox{is a ball}.$$ $\blacksquare$ } \end{remark} \subsection{Rescaling results} In this subsection we see that we can recover the local concepts and some of their properties from the nonlocal ones. In particular we give a different proof of the classical Saint-Venant inequality. Set \begin{equation}\label{fjai01} J_\epsilon(x):=\frac{1}{\epsilon^N}J\left(\frac{x}{\epsilon}\right),\quad\epsilon>0, \end{equation} and define $$C_{J,2}=\frac{2}{\displaystyle \int_{\mathbb{R}^N}J(x)|x_N|^2dx}.$$ Observe that $C_{J_\epsilon,2}=\frac{1}{\epsilon^2}C_{J,2}$. \begin{theorem}\label{fkineq} Let $\Omega$ be a bounded domain in $\R^N$. Assume $\displaystyle\int_{\mathbb{R}^N}J(x)|x|dx<+\infty$. We have: $$\lim_{\epsilon\downarrow 0}\mathbb{Q}_\Omega^{m^{J_\epsilon}}\left(\frac{C_{J,2}}{\epsilon^2}t\right)=\mathbb{Q}_\Omega(t),$$ where $\mathbb{Q}_\Omega(t)$ is the (local) spectral heat content of $\Omega$; and \begin{equation}\label{resc001} \lim_{\epsilon\downarrow 0}\frac{\epsilon^2}{C_{J,2}}T_{m^{J_\epsilon}}(\Omega)=T(\Omega). \end{equation} \end{theorem} \begin{proof} The first part is a consequence of the rescaling results proved in~\cite{ElLibro} (see also~\cite{redbook}) that also work if $\int_{\mathbb{R}^N}J(x)|x|dx<+\infty$ thanks to the general results given by A. Ponce in~\cite{Ponce}. The second part is a consequence of the fact that we can interchange the limit with the integral. \end{proof} By Theorems \ref{fkineq} and \ref{simetr01}, we can recover the classical {\it Saint-Venant inequality}: \begin{theorem}[Saint-Venant inequality]\label{simetr02} Let $\Omega$ be a bounded domain in $\R^N$.
Then, $$T(\Omega)\le T(\Omega^*).$$ And, more generally, for any $j\ge 1$, $$EM_j(\Omega)\le EM_j(\Omega^*).$$ \end{theorem} \section{The particular case of a weighted graph} In this section we describe an iterative numerical method to get the torsional rigidity of a non-trivial subset of a weighted discrete graph. It is not our intention to give numerical results. We only want to show that~\eqref{rem001} and~\eqref{specheatcont01t02} allow the use of such an iterative method. Consider a weighted discrete graph $[V(G),\mathcal{B},m^G,\nu_G]$ as in Example~\ref{example.graphs} and $\Omega$ a finite connected subset of $V(G)$. Let us write $\Omega=\{x_1,x_2,...,x_N\}$ and $\partial_{m^{G}}\Omega=\{x_{N+1},...,x_M\}$, $x_i\ne x_j$ for $i\neq j$. Set $w_{ij}$ the weights between $x_i$ and $x_j$ (remember that $w_{ij}=0$ if $x_i\not\sim x_j$). Set the weight of each $x_i\in\Omega$: $$\displaystyle d_i=\sum_{j=1}^Mw_{ij},\ i=1,2,...,N.$$ Then, from~\eqref{rem001} and~\eqref{specheatcont01t02}, the following iterative scheme gives an approximation $T(n)$ of the torsion: $$\left\{\begin{array}{ll} \displaystyle T(0)=\sum_{i=1}^Nd_i, &\hbox{ (the term $g_{m^G,\Omega}(0)$)} \\[16pt] \displaystyle f^1_i=\sum_{j=1}^Nw_{i,j},\ i=1,2,...,N, \\[16pt] \displaystyle g(1)=\sum_{i=1}^Nf_i^1,& \hbox{ (the term $g_{m^G,\Omega}(1)$)} \\[16pt] \displaystyle T(1)=T(0)+g(1), \\[16pt] \hbox{for } n\ge2: \\[8pt] \displaystyle \qquad f^{n}_i=\sum_{j=1}^N\frac{1}{d_j}f_j^{n-1}w_{i,j},\ i=1,2,...,N, \\[16pt] \displaystyle \qquad g(n)=\sum_{i=1}^Nf_i^n,& \hbox{ (the term $g_{m^G,\Omega}(n)$)} \\[16pt] \displaystyle \qquad T(n)=T(n-1)+g(n).& \hbox{ ($\displaystyle \lim_nT(n)=T_{m^G}(\Omega)$)} \end{array}\right.$$ From~\eqref{tl01} we have that $$\left|T_{m^G}(\Omega)-T(n)\right|= O\left((1-\lambda_{m,2}(\Omega))^{n+1}\right).$$
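For illustration purposes only, this scheme can be implemented in a few lines. The following Python/NumPy sketch (with our own, ad hoc naming, and assuming that the weight matrix of $\Omega$ together with its $m^G$-boundary is available) computes the partial sums $T(n)$ and checks them against the closed form $T_{m^G}(\{x\})=(a+b)^2/b$ of the lasso graph of Example~\ref{7en01} below.
\begin{verbatim}
import numpy as np

def torsional_rigidity_graph(W_full, N, n_iter=500):
    # W_full: (M, M) symmetric weight matrix of Omega together with its
    #         m^G-boundary; vertices 0..N-1 form Omega, N..M-1 the boundary.
    # Returns the partial sum T(n_iter) of the series T_m(Omega) = sum_k g(k).
    W_full = np.asarray(W_full, dtype=float)
    d = W_full.sum(axis=1)[:N]        # d_i = nu_G(x_i) for x_i in Omega
    W = W_full[:N, :N]                # weights restricted to Omega x Omega
    T = d.sum()                       # T(0) = g(0) = nu_G(Omega)
    f = W.sum(axis=1)                 # f^1_i = sum_j w_ij, j in Omega
    T += f.sum()                      # T(1) = T(0) + g(1)
    for _ in range(2, n_iter + 1):
        f = W @ (f / d)               # f^n_i = sum_j (1/d_j) f^{n-1}_j w_ij
        T += f.sum()                  # T(n) = T(n-1) + g(n)
    return T

# Lasso graph: vertices {x, y}, w_xx = a (loop), w_xy = b, Omega = {x}.
a, b = 2.0, 1.0
W = np.array([[a, b], [b, 0.0]])
print(torsional_rigidity_graph(W, N=1))   # approximately 9.0
print((a + b) ** 2 / b)                   # exact value: 9.0
\end{verbatim}
In practice one stops the iteration once $g_{m^G,\Omega}(n)$ falls below a prescribed tolerance, which is justified by the geometric error estimate above.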
\section{The $m$-$p$-torsional rigidity} Brasco in \cite{Brasco1}, for $p >1$, defines the {\it $p$-torsional rigidity} of the set $D$ as $$T_p(D):= \max_{v\in W^{1,p}_0(D)\setminus\{0\}}\frac{\displaystyle \left( \int_D \vert v \vert dx \right)^{p}}{\displaystyle \int_D \vert\nabla v \vert^p dx}.$$ In \cite[Proposition 2.2]{Brasco1}, it is proved that \begin{equation}\label{BasrcoR1} T_p(D) = \left(\int_D v_D dx \right)^{p-1}, \end{equation} where $v_D$ is the unique weak solution of the problem \begin{equation}\label{torsioneqforplocal} \left\{\begin{array}{ll} -\Delta_p v_D =1&\hbox{in } D,\\[10pt] v_D(x)=0&\hbox{on }\partial D. \end{array}\right. \end{equation} Now we are going to get the nonlocal version of equation \eqref{BasrcoR1}. In this section we will assume that $1 < p < \infty$, $\Omega \in \mathcal{B}$, $0 < \nu(\Omega) < \nu(X)$ and $\Omega$ satisfies a $p$-Poincar\'{e} inequality (see~\eqref{PoincareIneq2}). From the reversibility of $\nu$ with respect to $m$, we have the following {\it integration by parts formula} \begin{equation}\label{INTBYPart} \begin{array}{ll} -\displaystyle\int_{\Omega_m \times \Omega_m} \vert \nabla f(x,y) \vert^{p-2} \nabla f(x,y) g(x) dm_x(y) d\nu(x) \\[10pt] = \displaystyle\frac12 \int_{\Omega_m \times \Omega_m} \vert \nabla f(x,y) \vert^{p-2} \nabla f(x,y) \nabla g(x,y) dm_x(y) d\nu(x), \end{array} \end{equation} if $f, g \in L^p(\Omega_m, \nu)$. We give the following definition of the homogeneous Dirichlet problem for the $m$-$p$-Laplacian. \begin{definition}{\rm Given $g \in L^1(\Omega, \nu)$, we say that $f \in L^p_0(\Omega_m, \nu)$ is a solution of problem \begin{equation}\label{Dirichletp} \left\{\begin{array}{ll} -\Delta_{m,p} f =g&\hbox{in } \Omega,\\[10pt] f(x)=0&\hbox{on }\partial_m\Omega; \end{array}\right. \end{equation} if it verifies $$\left\{\begin{array}{ll} -\hbox{div}_m(|\nabla f|^{p-2}\nabla f)(x)=g&\hbox{in } \Omega,\\[10pt] f(x)=0&\hbox{on }\partial_m\Omega; \end{array}\right.$$ that is, $$ \left\{\begin{array}{ll}\displaystyle -\int_{\Omega_m} |f(y)-f(x)|^{p-2}(f(y)-f(x)) dm_x(y)= g(x), & x\in \Omega, \\ \\ f(x)=0,&x\in \partial_m\Omega. \end{array}\right. $$ } \end{definition} Existence and uniqueness are given in \cite{ST0} (see also~\cite{MSTBook}). Nevertheless, and for the sake of completeness, we give the next result with a different proof. \begin{theorem}\label{EUp} There is a unique solution $f_{\Omega,p} \geq 0$ of the homogeneous Dirichlet problem for the $m$-$p$-Laplacian, \begin{equation}\label{mpLaplace} \left\{\begin{array}{ll}\displaystyle -\int_{\Omega_m} |f_{\Omega,p}(y)-f_{\Omega,p}(x)|^{p-2}(f_{\Omega,p}(y)-f_{\Omega,p}(x)) dm_x(y)= 1, & x\in \Omega, \\ \\ f_{\Omega,p}(x)=0,&x\in \partial_m\Omega. \end{array}\right. \end{equation} Moreover, $f_{\Omega,p}$ is the only minimizer of the variational problem \begin{equation}\label{VarPro2} \min_{f \in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}} \mathcal{F}_{m,p}(f), \end{equation} where $$\mathcal{F}_{m,p}(f):= \frac{1}{2p} \int_{\Omega_m \times \Omega_m} \vert \nabla f(x,y) \vert^p d(\nu\otimes m_x)(x,y) - \int_{\Omega} f(x) d\nu(x).$$ And, \begin{equation}\label{igualdaB} \int_{\Omega} f_{\Omega,p}(x) d\nu(x) = \frac{1}{2} \int_{\Omega_m \times \Omega_m} \vert \nabla f_{\Omega,p}(x,y) \vert^p d(\nu\otimes m_x)(x,y). \end{equation} \end{theorem} \begin{proof} First note that $\mathcal{F}_{m,p}$ is convex and lower semicontinuous in $L^p(\Omega,\nu)$, thus weakly lower semicontinuous (see~\cite[Corollary 3.9]{BrezisAF}). Set $$\theta := \inf_{f \in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}} \mathcal{F}_{m,p}(f),$$ and let $\{ f_n \}$ be a minimizing sequence. Then, $$\theta = \lim_{n \to \infty} \mathcal{F}_{m,p}(f_n) \quad \hbox{and} \quad K:= \sup_{n \in \NN} \mathcal{F}_{m,p}(f_n) < + \infty\,.$$ Since $\Omega_m$ satisfies the Poincar\'{e} inequality \eqref{PoincareIneq2}, by Young's inequality, we have $$\displaystyle \lambda\int_\Omega \left|f_n(x) \right|^p \, {d\nu(x)} \leq \int_{\Omega_m \times \Omega_m} \vert \nabla f_n(x,y) \vert^p d(\nu\otimes m_x)(x,y)$$ $$= 2p \mathcal{F}_{m,p}(f_n) + \int_{\Omega} f_n(x) d \nu(x) \leq 2pK + \int_\Omega \vert f_n (x) \vert d\nu(x) $$ $$ \leq 2pK + \frac{\lambda}{2} \int_\Omega \left|f_n(x) \right|^p \, {d\nu(x)} + \left(\frac{\lambda}{2} \right)^{-\frac{1}{p-1}} \nu(\Omega).$$ Therefore, we obtain that $$\int_{\Omega} |f_n(x)|^p \, {d\nu(x)} \leq C \quad \forall n \in \NN.$$ Hence, up to a subsequence, we have $$f_n \rightharpoonup f_{\Omega,p} \ \ \ \hbox{in } L_0^p(\Omega_m, \nu).$$ Furthermore, using the weak lower semicontinuity of the functional $\mathcal{F}_{m,p}$, we get $$\mathcal{F}_{m,p}(f_{\Omega,p}) = \inf_{f \in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}} \mathcal{F}_{m,p}(f).$$ Since the functional $\mathcal{F}_{m,p}$ is strictly convex, we have that $f_{\Omega,p}$ is the unique minimizer, and since $ \mathcal{F}_{m,p}(\vert f \vert ) \leq \mathcal{F}_{m,p}(f)$, we have that $f_{\Omega,p} \geq 0$.
Thus, given $\lambda >0$ and $w \in L_0^p(\Omega_m,\nu)$, we have $$0 \leq \frac{\mathcal{F}_{m,p}(f_{\Omega,p} + \lambda w)- \mathcal{F}_{m,p}(f_{\Omega,p})}{\lambda} $$ or, equivalently, $$0\leq \frac{1}{\lambda} \Big[ \frac{1}{2p} \int_{\Omega_m \times \Omega_m} \vert \nabla (f_{\Omega,p} + \lambda w)(x,y) \vert^p d(\nu\otimes m_x)(x,y) - \int_{\Omega} (f_{\Omega,p} + \lambda w)(x) d\nu(x) $$ $$ - \left( \frac{1}{2p} \int_{\Omega_m \times \Omega_m} \vert \nabla f_{\Omega,p}(x,y) \vert^p d(\nu\otimes m_x)(x,y) - \int_{\Omega} f_{\Omega,p}(x) d\nu(x) \right)\Big].$$ Now, since $p >1$, we pass to the limit as $\lambda \downarrow 0$ to obtain $$0 \leq \frac12\int_{\Omega_m \times \Omega_m} \vert \nabla f_{\Omega,p}(x,y) \vert^{p-2} \nabla f_{\Omega,p}(x,y) \nabla w(x,y) d(\nu\otimes m_x)(x,y) - \int_\Omega w(x) d \nu(x).$$ Taking $\lambda <0$ and proceeding as above we obtain the opposite inequality. Consequently, we conclude that $$0 = \frac12\int_{\Omega_m \times \Omega_m} \vert \nabla f_{\Omega,p}(x,y) \vert^{p-2} \nabla f_{\Omega,p}(x,y) \nabla w(x,y) d(\nu\otimes m_x)(x,y) - \int_\Omega w(x) d \nu(x)$$ $$= -\int_{\Omega_m}\int_{\Omega_m} \vert \nabla f_{\Omega,p}(x,y) \vert^{p-2} \nabla f_{\Omega,p}(x,y) dm_x(y)w(x)d\nu(x) - \int_\Omega w(x) d \nu(x),$$ which shows that $f_{\Omega,p}$ is a solution of \eqref{mpLaplace}. Finally, taking $w=f_{\Omega,p}$ in the first equation above, we get~\eqref{igualdaB}. \end{proof} \begin{definition}{\rm We call $f_{\Omega,p}$ the {\it $p$-torsional function} of $\Omega$, and we define the {\it $m$-$p$-torsional rigidity} of $\Omega$ as $$T_{m,p}(\Omega):= \left(\int_\Omega f_{\Omega,p} d \nu \right)^{p-1}.$$ Note that $T_{m}(\Omega) = T_{m,2}(\Omega)$. } \end{definition} \begin{theorem}\label{Charact1} We have \begin{equation}\label{NonBasrcoR1} T_{m,p}(\Omega) = \max_{g\in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}} \frac{\displaystyle\left(\int_\Omega |g|d\nu \right)^p}{\displaystyle \frac{1}{2}\int_{\Omega_m\times \Omega_m}|\nabla g(x,y)|^p \, d(\nu\otimes m_x)(x,y)}, \end{equation} and the maximum is attained at $f_{\Omega,p}$. \end{theorem} \begin{proof} By~\eqref{igualdaB}, \begin{equation}\label{eq01paratN} \frac{1}{2}\int_{\Omega_m}\int_{\Omega_m}|\nabla f_{\Omega,p}(x,y)|^p \, d(\nu\otimes m_x)(x,y)=\int_\Omega f_{\Omega,p}(x) d\nu(x). \end{equation} Therefore, $$\displaystyle T_{m,p}(\Omega)=\left(\int_\Omega f_{\Omega,p} d\nu \right)^{p-1} = \frac{\displaystyle\left(\int_\Omega f_{\Omega,p} d\nu\right)^p}{\displaystyle \frac12\int_{\Omega_m\times \Omega_m}|\nabla f_{\Omega,p}(x,y)|^pdm_x(y)d\nu(x)}.$$ Let $g\in L_0^p(\Omega_m, \nu)$, $g \not=0$. Since $f_{\Omega,p}$ is a solution of Problem~\eqref{mpLaplace}, $$\int_\Omega |g| d\nu =\frac12 \int_{\Omega_m}\int_{\Omega_m} \vert \nabla f_{\Omega,p}(x,y) \vert^{p-2} \nabla f_{\Omega,p}(x,y) \nabla |g|(x,y) d(\nu\otimes m_x)(x,y).$$ Then, by H\"{o}lder's inequality, $$ \begin{array}{c}\displaystyle\int_\Omega |g| d\nu \le \frac{1}{2}\left(\int_{\Omega_m}\int_{\Omega_m}\vert \nabla f_{\Omega,p}(x,y) \vert^{p} \, d(\nu\otimes m_x)(x,y)\right)^{1/p'} \\ \\ \displaystyle \times\left(\int_{\Omega_m}\int_{\Omega_m}\vert \nabla g(x,y) \vert^p \, d(\nu\otimes m_x)(x,y)\right)^{1/p}.
\end{array}$$ Then, from \eqref{igualdaB}, $$\int_\Omega |g| d\nu \le \frac{1}{2}\left(2\int_\Omega f_{\Omega,p}(x) d\nu(x)\right)^{1/p'}\left(\int_{\Omega_m}\int_{\Omega_m}\vert \nabla g(x,y) \vert^p \, d(\nu\otimes m_x)(x,y)\right)^{1/p}$$ $$=\frac{1}{2^{1/p}} \left( T_{m,p}(\Omega) \right)^{1/p}\left(\int_{\Omega_m}\int_{\Omega_m}\vert \nabla g(x,y) \vert^p \, d(\nu\otimes m_x)(x,y)\right)^{1/p}.$$ Thus, $$\displaystyle T_{m,p}(\Omega) \geq \frac{\displaystyle\left(\int_\Omega |g| d\nu\right)^p}{\displaystyle \frac12\int_{\Omega_m\times \Omega_m}|\nabla g(x,y)|^pdm_x(y)d\nu(x)},$$ and consequently \eqref{NonBasrcoR1} holds. \end{proof} We now define, for $p\ge 1$, \begin{equation}\label{RayleighC} \begin{array}{c}\displaystyle \lambda_{m,p}(\Omega):= \inf_{f \in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}} \frac{ \displaystyle\frac{1}{2} \displaystyle\int_{\Omega_m \times \Omega_m} \vert \nabla f(x,y) \vert^p d(\nu\otimes m_x)(x,y)}{\displaystyle\int_\Omega \vert f (x) \vert^p d\nu(x)} \\ \\ \displaystyle =\inf_{\hbox{\tiny$\begin{array}{c}f \in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}\\ f\ge 0\hbox{ on }\Omega\end{array}$}} \frac{ \displaystyle\frac{1}{2} \displaystyle\int_{\Omega_m \times \Omega_m} \vert \nabla f(x,y) \vert^p d(\nu\otimes m_x)(x,y)}{\displaystyle\int_\Omega \vert f (x) \vert^p d\nu(x)}. \end{array} \end{equation} As a consequence of the above result we have: \begin{corollary}\label{corlambdatorsionlocal1nlp} For $p>1$ we have \begin{equation}\label{lambdatorsionlocal1nlp}\lambda_{m,p}(\Omega)\le \frac{\nu(\Omega)^{p-1}}{T_{m,p}(\Omega)}. \end{equation} \end{corollary} \begin{proof} By Theorem \ref{Charact1}, we have $$ \frac{1}{T_{m,p}(\Omega)} = \frac{\displaystyle \frac{1}{2}\int_{\Omega_m\times \Omega_m}|\nabla f_{\Omega,p}(x,y)|^p \, d(\nu\otimes m_x)(x,y)}{\displaystyle\left(\int_\Omega f_{\Omega,p}d\nu \right)^p}. $$ Now $$\int_\Omega f_{\Omega,p}(x) d\nu(x) \leq \left(\int_\Omega (f_{\Omega,p}(x))^p d\nu(x)\right)^{\frac1p} \nu(\Omega)^{\frac{1}{p'}}.$$ Hence $$\frac{1}{T_{m,p}(\Omega)} \geq \frac{\displaystyle \frac{1}{2}\int_{\Omega_m}\int_{\Omega_m} |\nabla f_{\Omega,p}(x,y)|^p d(\nu\otimes m_x)(x,y)}{\nu(\Omega)^{p-1}\displaystyle \int_\Omega (f_{\Omega,p}(x))^p d\nu(x) } \ge \frac{\lambda_{m,p}(\Omega)}{\nu(\Omega)^{p-1}}.$$ \end{proof} Fusco, Maggi and Pratelli in \cite{FMP1} (see also \cite{Avinyo}, \cite{FMP2} and \cite{PratelliSaracco}) generalized the classical concept of Cheeger constant, introducing, for $p \geq \frac{N-1}{N}$, the {\it $p$-Cheeger constant} of an open set $\Omega \subset \R^N$ of finite measure as $$h_p(\Omega):= \inf \left\{ \frac{P(E)}{\vert E \vert^p} : E \subset \Omega \ \hbox{is open} \right\}.$$ Note that $h_1(\Omega)$ is the classical Cheeger constant. In \cite{MST2} (see also~\cite{MSTBook}), for a set $\Omega \in \mathcal{B}$ such that $0 < \nu(\Omega) < \nu(X)$, we define its {\it $m$-Cheeger constant} as $$h_1^m(\Omega):= \inf \left\{ \frac{P_m(E)}{\nu(E)} : E \in \mathcal{B}, \ E \subset \Omega, \ \nu(E) >0 \right\}$$ and we prove (see~\cite[Theorem 3.37]{MSTBook}) that \begin{equation}\label{RayleighCns01}\lambda_{m,1}(\Omega)=h_1^m(\Omega). \end{equation} \begin{remark}\rm For any $p\ge 1$, \begin{equation}\label{29d001}\lambda_{m,p}(\Omega)\le h_1^m(\Omega)=\lambda_{m,1}(\Omega).
\end{equation} Indeed, for any $E\subset \Omega$, $\nu(E)>0$, $$\lambda_{m,p}(\Omega) \le \frac{\displaystyle \frac12\int_{\Omega_m}\int_{\Omega_m} |\nabla \1_E(x,y)|^p dm_x(y)d\nu(x)}{\displaystyle \int_\Omega \1_E(x) d\nu(x) } $$ $$= \frac{\displaystyle\frac12\int_{\Omega_m}\int_{\Omega_m} |\nabla \1_E(x,y)| dm_x(y)d\nu(x)}{\displaystyle \int_\Omega \1_E(x) d\nu(x) } =\frac{P_m(E)}{\nu(E)}.$$ Then taking infimum, and on account of~\eqref{RayleighCns01}, we get~\eqref{29d001}. $\blacksquare$ \end{remark} Now, we introduce the following nonlocal version of the $p$-Cheeger constant. \begin{definition}{\rm Let $p >1$, we define its {\it $m$-$p$-Cheeger constant} of~$\Omega$ as $$h_p^m(\Omega):= \inf \left\{ \frac{P_m(E)}{\nu(E)^p} : E \in \mathcal{B}, \ E \subset \Omega, \ \nu(E) >0 \right\},$$ } \end{definition} Similarly to the local case (see, for example,~\cite[Proposition 5.2]{BBPrinari}), we have the following relation between the Chegeer constants and the $m$-$p$-torsional rigidity. \begin{theorem} For $p >1$ we have \begin{equation}\label{ChegeerT1} 2^{p-1}\frac{h_1^m(\Omega){}^p}{\nu(\Omega_m)^{p-1}} \leq \frac{1} {T_{m,p}(\Omega)} \leq h_p^m(\Omega), \end{equation} and \begin{equation}\label{ChegeerT2} \lim_{p \to 1^+} \frac{1}{T_{m,p}(\Omega)}=\lim_{p\to 1^+}h_p^m(\Omega)= h_1^m(\Omega). \end{equation} \end{theorem} \begin{proof} By the coarea formula \eqref{coarea} and Cavalieri's principle, we have $$\displaystyle \frac12\int_{\Omega_m\times \Omega_m}|\nabla f_{\Omega,p}(x,y)|d(\nu\otimes m_x)(x,y) = \int_{0}^{+\infty} P_m( \{ x\in \Omega: f_{\Omega,p}(x) >t \}) dt$$ $$ \geq \int_{0}^{+\infty} h_1^m(\Omega) \nu(\{ x\in \Omega: f_{\Omega,p}(x) >t \}) dt $$ $$= h_1^m(\Omega) \int_{0}^{+\infty} \nu(\{ x\in \Omega: f_{\Omega,p} (x) >t \}) dt = h_1^m(\Omega)\int_\Omega f_{\Omega,p}(x) d \nu(x) . $$ Hence, by H\"older's inequality and \eqref{igualdaB}, we obtain $$h_1^m(\Omega) \leq \frac{\displaystyle\frac12\int_{\Omega_m\times \Omega_m}|\nabla f_{\Omega,p}(x,y)|d(\nu\otimes m_x)(x,y)}{\displaystyle\int_\Omega f_{\Omega,p}(x) d \nu(x)} $$ $$ \leq \frac{\displaystyle\frac12 \left(\int_{\Omega_m\times \Omega_m}|\nabla f_{\Omega,p}(x,y)|^p d(\nu\otimes m_x)(x,y)\right)^{\frac{1}{p}} \nu(\Omega_m)^{\frac{p-1}{p}}}{\displaystyle\int_\Omega f_{\Omega,p}(x) d \nu(x)} $$ $$= \frac{1}{2^{\frac{p-1}{p}}} \frac{\left(\displaystyle\int_\Omega f_{\Omega,p}(x) d \nu(x)\right)^{\frac{1}{p}} \nu(\Omega_m)^{ \frac{p-1}{p}}}{\displaystyle\int_\Omega f_{\Omega,p}(x) d \nu(x)}$$ $$ = \frac{1}{2^{\frac{p-1}{p}}} \frac{ \nu(\Omega_m)^{\frac{p-1}{p}}}{\left(\displaystyle\int_\Omega f_{\Omega,p}(x) d \nu(x)\right)^{\frac{p-1}{p}}} = \frac{1}{2^{\frac{p-1}{p}}}\left( \frac{ \nu(\Omega_m)^{p-1}}{T_{m,p}(\Omega)} \right)^{\frac{1}{p}},$$ and, from here \begin{equation}\label{ChegeerT1laprime} 2^{p-1}\frac{h_1^m(\Omega){}^p}{\nu(\Omega_m)^{p-1}} \leq \frac{1} {T_{m,p}(\Omega)}. \end{equation} On the other hand, by \eqref{NonBasrcoR1}, for any $E \in \mathcal{B}, \ E \subset \Omega, \ \nu(E) >0$, we have $$ \frac{1}{T_{m,p}(\Omega) } \leq \frac{\displaystyle \frac{1}{2}\int_{\Omega_m\times \Omega_m}|\nabla \1_E(x,y)|^p \, d(\nu\otimes m_x)(x,y)}{\displaystyle\left(\int_\Omega \1_E d\nu \right)^p} = \frac{P_m(E)}{\nu(E){}^p},$$ from where, \begin{equation}\label{ChegeerT1lasegun} \frac{1} {T_{m,p}(\Omega)} \leq h_p^m(\Omega). \end{equation} And~\eqref{ChegeerT1} is proved. 
Taking limits in~\eqref{ChegeerT1}, we have \begin{equation}\label{siiprevm01}h_1^m(\Omega) \leq \liminf_{p \to 1^+} \frac{1}{T_{m,p}(\Omega)}\le \liminf_{p\to 1^+}h_p^m(\Omega), \end{equation} and $$ \limsup_{p \to 1^+} \frac{1}{T_{m,p}(\Omega)}\le \limsup_{p\to 1^+}h_p^m(\Omega).$$ Let us now see that \begin{equation}\label{sii} \limsup_{p\to 1+}h_p^m(\Omega) \leq h_1^m(\Omega). \end{equation} Indeed, for any $E \in \mathcal{B}, \ E \subset \Omega, \ \nu(E) >0$, we have $$ h_p^m(\Omega) \leq \frac{P_m(E)}{\nu(E)^p},$$ and, from here $$\limsup_{p\to 1+}h_p^m(\Omega) \leq \frac{P_m(E)}{\nu(E)},$$ which allows to prove~\eqref{sii}. Finally,~\eqref{siiprevm01} and~\eqref{sii} gives~\eqref{ChegeerT2}. \end{proof} P\'{o}lya \cite{P2} proves that, among all bounded open and convex planar sets, the following inequality holds \begin{equation}\label{PolyaInq1} \frac13 \leq \frac{T(D) P(D)^2}{\vert D \vert^3}, \end{equation} being the constant $\frac13$ optimal. This was generalized in~\cite{BBPrinari0} to dimension $N\ge 3$. On the other hand, Makai \cite{Makai} proves that, among all bounded open and convex planar sets, the following upper bound holds \begin{equation}\label{MakaiInq} \frac{T(D) P(D)^2}{\vert D \vert^3} \leq \frac23, \end{equation} being the constant $\frac23$ optimal. See~ \cite{BBPrinari0} for a conjecture in dimension $N\ge 3$. Estimates \eqref{PolyaInq1} and \eqref{MakaiInq} are generalized for the $p$-Laplacian by Fragala, Gazzola and Lamboley in \cite{FGL}. Recall that $\Omega$ is $m$-calibrable if $ h_1^m(\Omega)= \frac{P_m(\Omega)}{\nu(\Omega)}.$ \begin{corollary} We have \begin{equation}\label{Polya2} \frac{\nu(\Omega)^2}{ P_m(\Omega)}\leq{T_{m}(\Omega)}. \end{equation} Moreover, if $\Omega$ is $m$-calibrable, then \begin{equation}\label{Polya1} T_m(\Omega) \leq \frac12\frac{\nu(\Omega)^2\nu(\Omega_m)}{P_m(\Omega)^2}. \end{equation} \end{corollary} \begin{proof} Taking $p=2$ in~\eqref{ChegeerT1}, since $T_{m,2}(\Omega) = T_{m}(\Omega)$, we have \begin{equation}\label{inf2} 2 \frac{h_1^m(\Omega){}^2}{\nu(\Omega_m)} \le\frac{1} {T_{m}(\Omega)} \leq h_2^m(\Omega) . \end{equation} Then, since $h_2^m(\Omega) \leq \frac{P_m(\Omega)}{\nu(\Omega)^2}$, from the second inequality in~\eqref{inf2} we get $$\frac{1} {T_{m}(\Omega)} \leq \frac{P_m(\Omega)}{\nu(\Omega)^2},$$ and~\eqref{Polya2} holds. On the other hand, assuming that $\Omega$ is $m$-calibrable, we have $h_1^m(\Omega)= \frac{P_m(\Omega)}{\nu(\Omega)}$, and, substituting this value in the first inequality of~\eqref{inf2}, we have $$ 2\frac{P_m(\Omega)^2}{\nu(\Omega)^2\nu(\Omega_m)} \leq \frac{1} {T_{m}(\Omega)},$$ from where \eqref{Polya1} holds. \end{proof} Observe that, from~\eqref{04012302}, \eqref{lambdatorsionlocal1nl} and~\eqref{Polya2}, we have \begin{equation}\label{04012303}\nu(\Omega)<\frac{\nu(\Omega)^2}{ P_m(\Omega)}\leq{T_{m}(\Omega)} \le \frac{\nu(\Omega)}{\lambda_{m,2}(\Omega)}. \end{equation} In the next example we will see that the second and third inequalities in \eqref{04012303} are sharp. We see that they are equalities for the most simple connected set for weighted discrete graphs, which is trivially $m^G$-calibrable. \begin{example}\label{7en01}\rm \item{ 1.} Consider the weighted discrete lasso graph $V(G)=\{x,y\}$ with weights $w_{xx}=a>0$, $w_{xy}=b>0$ and $w_{yy}=0$ (we are in a situation of Example~\ref{example.graphs}). And take $\Omega=\{x\}$, which is $m^G$-connected (because of the loop). 
It is easy to see that $$\nu_G(\Omega)=a+b,$$ $$P_{m^G}(\Omega)=b,$$ $$T_{m^G}(\Omega)=\frac{(a+b)^2}{b},$$ and $$\lambda_{m^G,2}(\Omega)=\frac{b}{a+b}.$$ Hence, $$ \frac{\nu_G(\Omega)^2}{ P_{m^G}(\Omega)}={T_{m^G}(\Omega)}=\frac{\nu_G(\Omega)}{\lambda_{m^G,2}(\Omega)}.$$ \item{ 2.} For the weighted discrete graph $V(G)=\{x,y_1,y_2,\dots,y_k\}$, $k\ge 2$, with weights $w_{xx}=a>0$, $w_{xy_i}=b_i>0$ and $w_{y_iy_j}=0$ for any $i,j$, if we set $b=\sum_{i=1}^kb_i$, and take $\Omega=\{x\}$, we have the same results as for the lasso graph. $\blacksquare$ \end{example} In the next result we will see the influence of the $m$-mean curvature of $\Omega$. Observe first that, by \eqref{pararm01}, \begin{equation}\label{pararm01NN}\frac{\displaystyle 1+\frac{1}{\nu(\Omega)}\int_\Omega H_{\partial\Omega}^m(x)d\nu(x)}{2}=\frac{P_m(\Omega)}{\nu(\Omega)}. \end{equation} Then,~\eqref{Polya2} is equivalent to \begin{equation}\label{equivtopolya2}\frac{\displaystyle1+\frac{1}{\nu(\Omega)}\int_\Omega H_{\partial\Omega}^m(x)d\nu(x)}{2}\frac{\nu(\Omega)^3}{ P_m(\Omega)^2}\leq{T_{m}(\Omega)}. \end{equation} Remember also that $$-1\le \frac{1}{\nu(\Omega)}\int_\Omega H_{\partial\Omega}^m(x)d\nu(x)\le 1.$$ Then, as an immediate consequence of~\eqref{equivtopolya2} we have: \begin{corollary}\label{PMIneq} Assume that there exists $\beta \in \R$ such that \begin{equation}\label{lapol002dd} -1<\beta\le \frac{1}{\nu(\Omega)}\int_\Omega H_{\partial\Omega}^m(x)d\nu(x)<1. \end{equation} Then \begin{equation}\label{la3001} \left(\frac{\beta+1}{2} \right)\frac{\nu(\Omega)^3}{ P_m(\Omega)^2}\leq{T_{m}(\Omega)}. \end{equation} \end{corollary} By~\eqref{pararm01}, we have $$\frac{1}{\nu(\Omega)}\int_\Omega H_{\partial\Omega}^m(x)d\nu(x)\le \alpha<1\ \Leftrightarrow\ \frac{P_m(\Omega)}{\nu(\Omega)}\le \frac{\alpha+1}{2}.$$ Now, since~\eqref{Polya2} can be written as $$T_{m}(\Omega)\ge \frac{\nu(\Omega)^2}{ P_m(\Omega)}=\frac{\nu(\Omega)}{ P_m(\Omega)}\nu(\Omega),$$ we obtain the following result. \begin{corollary}\label{Corlapol001dxcur} Assume that there exists $\alpha \in \R$ such that \begin{equation}\label{lapol001dxcur} \displaystyle -1<\frac{1}{\nu(\Omega)}\int_\Omega H_{\partial\Omega}^m(x)d\nu(x)\le \alpha <1. \end{equation} Then $$T_{m}(\Omega)\ge \frac{2}{\alpha+1}\nu(\Omega).$$ \end{corollary} \begin{remark}\rm \item{1.} Let us remark that, assuming \eqref{lapol001dxcur}, by the above Corollary and by~\eqref{lambdatorsionlocal1nl}, we have $$ \lambda_{m,2}(\Omega) \le \frac{\alpha+1}{2}.$$ \item{2.} Observe that~\eqref{la3001} is a P\'{o}lya-type inequality for subsets satisfying~\eqref{lapol002dd}; and that~\eqref{Polya1} is a Makai-type inequality for calibrable subsets. \item{3.} As a consequence of~\eqref{equivtopolya2} and~\eqref{Polya1}, if $\Omega$ is calibrable then $$\frac{1}{\nu(\Omega)}\int_\Omega H_{\partial\Omega}^m(x)d\nu(x)\le\frac{\nu(\partial_m\Omega)}{\nu(\Omega)},$$ or equivalently, using~\eqref{pararm01}, $$h_1^m(\Omega)=\frac{P_m(\Omega)}{\nu(\Omega)}\le \frac12\left(1+\frac{\nu(\partial_m\Omega)}{\nu(\Omega)}\right).$$ $\blacksquare$ \end{remark} We have the following result (see~\cite{kafri} for the local case). \begin{theorem} We have, \begin{equation}\label{29d002} \left(\frac{\lambda_{m,1}(\Omega)}{p} \right)^p \leq \lambda_{m,p}(\Omega)\le \lambda_{m,1}(\Omega). \end{equation} And consequently, \begin{equation}\label{ChegeerT2dd01} \lim_{p\to 1^+}\lambda_{m,p}(\Omega) = \lambda_{m,1}(\Omega)= h_1^m(\Omega).
\end{equation} \end{theorem} \begin{proof} The second inequality of~\eqref{29d002} is given in~\eqref{29d001}. On the other hand, for $p >1$, we have, for any $a,b\in \mathbb{R}$, $$\vert \vert b \vert^{p-1} b - \vert a \vert^{p-1} a \vert \leq p\vert b - a\vert \max\left\{\vert b \vert^{p-1},\vert a \vert^{p-1}\right\}.$$ Hence $$\vert \nabla (\vert u \vert^{p-1} u)(x,y) \vert \leq p\vert \nabla u(x,y)\vert\max\left\{ \vert u(y) \vert^{p-1},\vert u(x) \vert^{p-1}\right\},$$ and consequently, for $u \in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}$, we have \begin{equation}\label{mund03} \begin{array}{c} \displaystyle\lambda_{m,1}(\Omega) \leq \frac{ \displaystyle\frac{1}{2} \displaystyle\int_{\Omega_m \times \Omega_m} \vert \nabla (\vert u \vert^{p-1} u)(x,y) \vert d(\nu\otimes m_x)(x,y)}{\displaystyle\int_\Omega \vert u(x)\vert^p d\nu(x)} \\ \\ \displaystyle \leq \frac{ \displaystyle\frac{p}{2} \displaystyle\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert \max\left\{ \vert u(y) \vert^{p-1},\vert u(x) \vert^{p-1}\right\}d(\nu\otimes m_x)(x,y)}{\displaystyle\int_\Omega \vert u(x)\vert^p d\nu(x)}. \end{array} \end{equation} We claim now that \begin{equation}\label{max} \begin{array}{c} \displaystyle \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert \max\left\{ \vert u(y) \vert^{p-1},\vert u(x) \vert^{p-1}\right\}d(\nu\otimes m_x)(x,y)\\ \\ \displaystyle = 2\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert\,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}}(x,y) \vert u(x) \vert^{p-1}d(\nu\otimes m_x)(x,y) .\end{array} \end{equation} Indeed, by the reversibility of $\nu$ respect to $m$, and having in mind that $\nabla u(x,y) =0$ if $u(x) = u(y)$ and $\vert \nabla u(x,y) \vert = \vert \nabla u(y,x) \vert$, we have $$ \displaystyle \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert \max\left\{ \vert u(y) \vert^{p-1},\vert u(x) \vert^{p-1}\right\}d(\nu\otimes m_x)(x,y) $$ $$= \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert\,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}}(x,y) \vert u(x) \vert^{p-1}d(\nu\otimes m_x)(x,y) $$ $$+\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert\,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(y)> u(x)\}}(x,y) \vert u(y) \vert^{p-1}d(\nu\otimes m_x)(x,y) $$ $$ = \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert\,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}}(x,y) \vert u(x) \vert^{p-1}d(\nu\otimes m_x)(x,y) $$ $$ +\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert\,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}}(x,y) \vert u(x) \vert^{p-1}d(\nu\otimes m_x)(x,y) $$ $$= \displaystyle 2\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert\,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}}(x,y) \vert u(x) \vert^{p-1}d(\nu\otimes m_x)(x,y). 
$$ Now, applying H\"older's inequality, we get $$ \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert\,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}} (x,y) \vert u(x) \vert^{p-1}d(\nu\otimes m_x)(x,y) $$ $$ \begin{array}{c} \displaystyle \le \left(\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert^p \,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}}(x,y)d(\nu\otimes m_x)(x,y) \right)^{\frac{1}{p}} \\[6pt] \displaystyle \hfill \times\left(\displaystyle\int_{\Omega_m\times\Omega_m} \vert u(x) \vert^{p} d\nu\otimes m_x(y) \right)^{\frac{p-1}{p}} \end{array} $$ $$ \begin{array}{c} \displaystyle \le \left(\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert^p \,\1_{\{(x,y)\in\Omega_m\times\Omega_m:u(x)> u(y)\}}(x,y)d(\nu\otimes m_x)(x,y) \right)^{\frac{1}{p}} \\[6pt] \displaystyle \hfill \times\left(\displaystyle\int_\Omega \vert u(x) \vert^{p} d \nu(x) \right)^{\frac{p-1}{p}} \end{array} $$ $$ =\left(\int_{\Omega_m \times \Omega_m} \frac12\vert \nabla u(x,y) \vert^p (x,y)d(\nu\otimes m_x)(x,y) \right)^{\frac{1}{p}} \left(\displaystyle\int_\Omega \vert u(x) \vert^{p} d \nu(x) \right)^{\frac{p-1}{p}}, $$ where reversibility is used, as in the proof of \eqref{max}, to get the last equality. Then $$ \begin{array}{c} \displaystyle \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert \max\left\{ \vert u(y) \vert^{p-1},\vert u(x) \vert^{p-1}\right\}d(\nu\otimes m_x)(x,y)\\ \\ \displaystyle \le 2\left(\int_{\Omega_m \times \Omega_m} \frac12\vert \nabla u(x,y) \vert^p (x,y)d(\nu\otimes m_x)(x,y) \right)^{\frac{1}{p}} \left(\displaystyle\int_\Omega \vert u(x) \vert^{p} d \nu(x) \right)^{\frac{p-1}{p}} \end{array}.$$ Hence, using the above inequality, from~\eqref{mund03} we get $$\lambda_{m,1}(\Omega) \leq \frac{\displaystyle p \left(\displaystyle\frac{1}{2}\int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert^p d(\nu\otimes m_x)(x,y) \right)^{\frac{1}{p}} \left(\displaystyle\int_\Omega \vert u(x) \vert^{p} d \nu(x) \right)^{\frac{p-1}{p}}}{\displaystyle\int_\Omega \vert u(x)\vert^p d\nu(x)}$$ $$= \frac{\displaystyle p \left(\displaystyle \frac{1}{2} \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert^p d(\nu\otimes m_x)(x,y) \right)^{\frac{1}{p}}}{\displaystyle\left(\int_\Omega \vert u(x)\vert^p d\nu(x) \right)^{\frac{1}{p}}}.$$ Thus $$ \left(\frac{\lambda_{m,1}(\Omega)}{p} \right)^p \leq \frac{ \displaystyle\frac12 \int_{\Omega_m \times \Omega_m} \vert \nabla u(x,y) \vert^p d(\nu\otimes m_x)(x,y) }{\displaystyle\int_\Omega \vert u(x)\vert^p d\nu(x) }.$$ The, taking infimum in $u \in L_0^p(\Omega_m, \nu) \setminus \{ 0 \}$, we obtain that $$ \left(\frac{\lambda_{m,1}(\Omega)}{p} \right)^p \leq \lambda_{m,p}(\Omega),$$ and~\eqref{29d002} is proved. Finally,~\eqref{ChegeerT2dd01} is a direct consequence of~\eqref{29d002} and~\eqref{RayleighCns01}. \end{proof} \subsection{A rescaling result} Set $J_\epsilon$ as in~\eqref{fjai01}. Define $$C_J=\frac{2}{\displaystyle \int_{\mathbb{R}^N}J(x)|x_N|dx}.$$ Observe that $C_{J_\epsilon}=\frac{1}{\epsilon}C_J$. If $\displaystyle\int_{\mathbb{R}^N}J(x)|x|dx<+\infty$, we have (see~\cite{redbook}): \begin{equation}\label{limitT}\lim_{\epsilon\downarrow 0} \frac{C_J}{\epsilon}h_{1}^{m^{J_\epsilon}}(\Omega)=h_1(\Omega). 
\end{equation} Remember that by~\eqref{resc001}, $$\lim_{\epsilon\downarrow 0}\frac{\epsilon^2}{C_{J,2}}T_{m^{J_\epsilon}}(\Omega)=T(\Omega).$$ Now, from~\eqref{inf2}, $$ \frac{h_1^{m^{J_\epsilon}}(\Omega){}^2}{|\Omega_{m^{J_\epsilon}}|} \leq \frac12 \frac{1} {T_{{m^{J_\epsilon}}}(\Omega)}.$$ Then $$ \frac{\displaystyle \frac{1}{\epsilon^2}h_1^{m^{J_\epsilon}}(\Omega){}^2}{|\Omega_{m^{J_\epsilon}}|} \leq \frac12 \frac{1} {\epsilon^2T_{{m^{J_\epsilon}}}(\Omega)},$$ and, taking limits as $\epsilon\to 0$, \begin{equation}\label{nb02} \frac{ h_1 (\Omega){}^2}{|\Omega|} \leq \frac{C_J{}^2}{2C_{J,2}} \frac{1} { T(\Omega)}. \end{equation} Observe that $$\frac{C_J{}^2}{2C_{J,2}}= \frac{\displaystyle \int_{\mathbb{R}^N}J(x)|x_N|^2dx}{\left(\displaystyle\int_{\mathbb{R}^N}J(x)|x_N|dx\right)^2}\ge 1.$$ But $\frac{C_J{}^2}{2C_{J,2}}$ can be made as close to $1$ as we want by choosing $J$ adequately. So we can get \begin{equation}\label{nb02wc01} \frac{ h_1 (\Omega){}^2}{|\Omega|} \leq \frac{1} { T(\Omega)}, \end{equation} and in particular, for $\Omega$ calibrable we get the Makai-type inequality \begin{equation}\label{quasiMakaiInq} \frac{T(\Omega) P(\Omega)^2}{\vert \Omega \vert^3} \leq 1. \end{equation} \section{Torsional rigidity on Quantum Graphs as an $m$-torsional rigidity on graphs} Torsional rigidity on quantum graphs was introduced by Colladay, Kaganovskiy and McDonald in~\cite{CKM2017}. To the best of our knowledge, after this paper, the only existing literature on this topic is the paper by Mugnolo and Plumer~\cite{MP}, where the torsional rigidity of a quantum graph is related to the rigidity of an associated weighted combinatorial graph. We will interpret that result here in terms of the (nonlocal) torsional rigidity of a weighted graph. Let $\mathcal{G}$ be a compact, finite, connected quantum graph. Let $V$ be the set of vertices of $\mathcal{G}$ and $E$ be the set of edges. For a vertex $x \in V$, let ${\rm deg}_{\mathcal{G}}(x)$ denote its {\it degree}, i.e., the number of edges incident to $x$. We suppose that $\mathcal{G}$ has at least one vertex of degree $1$. Set $$V_D:= \{ x \in V : {\rm deg}_{\mathcal{G}}(x) = 1 \}$$ and set $V_N:= V \setminus V_D$. We assume that the graph does not contain multiple edges between the same vertices but it can contain at most one loop at each vertex (we comment on this later on). Let us call $\ell_e$ or $\ell_{x,y}$ the length of the edge $e$ that joins the vertices $x$ and $y$. For each $e \in E$ there exists an increasing and bijective function $$ \begin{array}{rlcc} c_e:&e&\to& [0,\ell_e]\\ &x&\rightsquigarrow& x_{e}, \end{array} $$ where $x_{e}$ is called the coordinate of the point $x\in e$. A function $u$ on a metric graph $\mathcal{G}$ is a collection of functions $[u]_{e}$ defined on $[0,\ell_{e}]$ for all $e\in E.$ Throughout this work, $ \int_{\mathcal{G}} u(x) dx$ denotes $ \sum_{e\in E} \int_{0}^{\ell_{e}} [u]_{e}(x_e)\, dx_e$.
For $A \subset \mathcal{G}$, the {\it length} of $A$ is defined as $$\ell(A) = \int_{\mathcal{G}} \1_A dx.$$ Let $\Delta_{\mathcal{G}}$ be the Laplacian on $\mathcal{G}$ with homogeneous Dirichlet boundary condition at vertices in $V_D$ and with the Kirchhoff type condition on the vertices in $V_N$, that is, its associated quadratic form $a_{\mathcal{G}}$ is given by $$a_{\mathcal{G}}(u):= \int_{\mathcal{G}} \vert u'(x) \vert^2 dx = \sum_{e \in E} \int_{0}^{\ell_e} \vert [u]_e'(x_e) \vert^2 dx_e$$ on the domain $$H_{\mathcal{G}}(\mathcal{G},V_D):= \left\{ u = (u_e)_{e \in E} \in \bigoplus_{e \in E} H^1(0,\ell_e) : u(v) = 0 \hbox{ for } v \in V_D, \ u \ \hbox{continuous in } V_N \right\}.$$ Let $v$ be the solution of \begin{equation}\label{eqparaquantumgraph} \left\{\begin{array}{ll} -\Delta_\mathcal{G} v(x)=1,&x\in \mathcal{G},\\[10pt] v(x)=0,&x\in V_D. \end{array}\right. \end{equation} The function~$v$ is called the {\it torsion function} of $\mathcal{G}$, and the {\it (quantum) torsional rigidity} of~$\mathcal{G}$ is given by the $L^1$-norm of $v$: $$T_q(\mathcal{G}):= \int_\mathcal{G} \vert v \vert dx.$$ In \cite{MP} Mugnolo and Plumer show that, if $v$ is the torsion function of $\mathcal{G}$, then $f=2v_{|_V} : V \rightarrow \R$ is the unique solution of the following problem: \begin{equation}\label{torsioneqgraphs} \left\{\begin{array}{ll}\displaystyle -\frac{1}{\displaystyle \sum_{y\sim x}\ell_{yx}}\sum_{y\sim x}\frac{1}{\ell_{yx}}\left(f(y)-f(x)\right)=1,& x\in V_N, \\ \\ f(x)=0,&x\in V_D. \end{array}\right. \end{equation} And they prove that \begin{equation}\label{ftrf01} T_q(\mathcal{G})=\frac{1}{12}\sum_{e \in E}\ell_e^3+\frac12\sum_{x\in V}\left(\sum_{y\sim x}\ell_{yx} +\ell_{xx} \right)v(x) . \end{equation} Observe that in the above expression, $\displaystyle\sum_{y\sim x}\ell_{yx} +\ell_{xx} = \sum_{y\sim x,\, y\neq x}\ell_{yx} +2\ell_{xx} $. If we had $k\ (\ge 2)$ loops at the vertex $x$ with lengths $\ell_{xx}(i)$, $i=1,2,...,k$, then we should replace $\displaystyle 2\ell_{xx} $ by $ 2(\ell_{xx}(1)+...+\ell_{xx}(k)).$ Take $c>0$ large enough such that (we do not mark the dependence on $c$) \begin{equation}\label{choiceofc} \widetilde{w}_{xx}:= \sum_{y\sim x}c\ell_{yx}-\sum_{y\sim x}\frac{1}{c\ell_{yx}} > 0 \quad\forall x\in V_N. \end{equation} Observe that, since $\mathcal{G}$ is finite, such a $c$ exists. Let us consider the weighted graph $G_c$ having the same vertices and edges as $\mathcal{G}$ with weights (we do not mark the dependence on $c$ in $\widetilde w_{xx}$): $$\begin{array}{l}w_{yx}=\frac{1}{c\ell_{yx}}\quad \hbox{for } y\sim x,\ y\neq x,\\ \\ w_{xx}=\frac{1}{c\ell_{xx}}+ c\ell_{xx}+\widetilde{w}_{xx}\quad\hbox{if }\ \ell_{xx}\neq 0, \\ \\ w_{xx}=\widetilde{w}_{xx}\quad\hbox{if }\ \ell_{xx}= 0. \end{array} $$ On account of~\eqref{choiceofc}, we have that \begin{equation}\label{onacc01}\sum_{y\sim x}w_{yx}=\sum_{y\sim x}c\ell_{yx} + c\ell_{xx}. \end{equation} And then, from~\eqref{torsioneqgraphs}, we have that $f_c:=2c^2v_{|_V}$ satisfies \begin{equation}\label{torsioneqconc} \left\{\begin{array}{ll}\displaystyle -\frac{1}{\displaystyle \sum_{y\sim x}w_{yx}}\sum_{y\sim x}w_{yx}\left(f_c(y)-f_c(x)\right)=1,& x\in V_N, \\ \\ f_c(x)=0,&x\in V_D. \end{array}\right. \end{equation} Observe that, since $V_D=\partial_{m^{G_c}}V_N$, $f_c$ is a solution of the problem $$\left\{\begin{array}{ll} -\Delta_{m^{G_c}} f_c =1&\hbox{in } V_N,\\[10pt] f_c=0&\hbox{on }\partial_{m^{G_c}}V_N; \end{array}\right.
$$ Then we have that formula~\eqref{ftrf01} given in~\cite{MP} can be written using weighted discrete graphs, seen as random walk spaces, as follows. \begin{theorem}\label{Charact1QG} We have \begin{equation}\label{iiets01} T_q(\mathcal{G}) = \frac{1}{12}\sum_{e \in E}\ell_e^3+\frac{1}{4}\frac{1}{c^3}T_{m^{G_c}}(V_N), \end{equation} for any $c$ chosen as in~\eqref{choiceofc}. \end{theorem} \begin{proof}Indeed, from~\eqref{onacc01}, $$\begin{array}{c} \displaystyle T_{m^{G_c}}(V_N)=\sum_{x\in V}\left(\sum_{y\sim x}w_{xy}\right)f_c(x) \\ \\ \displaystyle =2c^2\sum_{x\in V}\left(\displaystyle \sum_{y\sim x}c\ell_{yx} + c\ell_{xx} \right) v(x) =2c^3\sum_{x\in V}\left(\displaystyle \sum_{y\sim x}\ell_{yx} + \ell_{xx} \right) v (x). \end{array}$$ And hence the statement~\eqref{iiets01} follows from~\eqref{ftrf01}. \end{proof} As a consequence of the above theorem and \eqref{Polya2} we recover the analogue of Proposition 4.8 of~\cite{MP}. \begin{corollary} We have, for any $c>0$ satisfying~\eqref{choiceofc}, \begin{equation}\label{IneqIT} T_q(\mathcal{G}) \geq \frac{1}{12}\sum_{e \in E}\ell_e^3 +\frac{1}{4}\frac{1}{c^3} \frac{\nu_{G_c}(V_N)^2}{P_{m^{G_c}}(V_N)}. \end{equation} \end{corollary} \begin{remark}\rm \item{ 1.} Observe that if we assume that $\ell_e =1$ for every edge $e$ in $\mathcal{G}$, and there are no loops, \begin{equation}\label{MGboundal01} T_q(\mathcal{G}) \geq \frac{1}{12} \sharp(E) + \frac{1}{4}\frac{\left( \sum_{x \in V_N} {\rm deg}_{\mathcal{G}}(x) \right)^2}{\sum_{x \in V_N}\sharp( \{ y \in V_D \, : \, y \sim x \})} \ge \frac{1}{12} \sharp(E) + \frac{1}{4}\sum_{x \in V_N} {\rm deg}_{\mathcal{G}}(x).\end{equation} Indeed, $\nu_{G_c}(V_N) = c \sum_{x \in V_N} {\rm deg}_{\mathcal{G}}(x)$ and $ P_{m^{G_c}}(V_N) = \frac{1}{c} \sum_{x \in V_N} \sharp( \{ y \in V_D : y \sim x \}).$ Then, the first inequality in~\eqref{MGboundal01} follows from~\eqref{IneqIT}, and the second inequality follows since, for each $x\in V_N$, ${\rm deg}_{\mathcal{G}}(x) \ge \sharp( \{ y \in V_D : y \sim x \})$. \item{ 2.} Consider a star metric graph $\mathcal{G}$, with Dirichlet conditions imposed on all vertices except the central one, and with a possible loop at the central vertex. Suppose that there are $k$ Dirichlet vertices with their edges joining the central vertex having length $\ell_i$, $i=1,2,...,k$, and the possible loop at the central vertex with length $\ell_0\ge 0$ (if $\ell_0=0$ we do not have a loop and we have only a star). Then, on account of Theorem~\ref{Charact1QG} and Example~\ref{7en01}, for $c$ satisfying~\eqref{choiceofc}, we have that $$\begin{array}{ll}\displaystyle T_q(\mathcal{G})&\displaystyle =\frac{1}{12}\sum_{i=0}^k\ell_i^3+\frac{1}{4c^3}\frac{\left(2c\ell_0+c\sum_{i=1}^k\ell_i\right)^2}{\sum_{i=1}^k\frac{1}{c\ell_i}} \\ \\&\displaystyle =\frac{1}{12}\sum_{i=0}^k\ell_i^3+\frac{1}{4}\frac{\left(2\ell_0+\sum_{i=1}^k\ell_i\right)^2}{\sum_{i=1}^k\frac{1}{\ell_i}}. \end{array} $$ The above equality recovers, as it must, the result of Example~3.10 of~\cite{MP}. We see that in this case we have equality in~\eqref{IneqIT} (this is also remarked in~\cite[Proposition 4.8]{MP}). In the particular case that $\ell_i=1$ for $i=1,2,...,k$ and $\ell_0=0$, we have $T_q(\mathcal{G}) = \frac13 k$, and all the inequalities in~\eqref{MGboundal01} are equalities. $\blacksquare$ \end{remark} \ \noindent {\bf Acknowledgments.} The authors have been partially supported by Conselleria d'Innovaci\'{o}, Universitats, Ci\`{e}ncia y Societat Digital, project AICO/2021/223.
{ "arxiv_id": "2302.14339", "language": "en", "timestamp": "2023-03-01T02:10:06", "url": "https://arxiv.org/abs/2302.14339", "yymm": "2302" }
\section{INTRODUCTION} Reinforcement learning (RL) has shown great promise in many robotic control tasks \cite{hwangbo2019learning,andrychowicz2020learning}. RL-based algorithms enable the agent to maximize the expected sum of rewards (return), which is a manually designed metric. Meanwhile, the safety of the agent should be considered carefully due to obstacles or other constraints in real-world applications \cite{garcia2015comprehensive,yang2022safe}. Therefore, achieving both optimality and safety remains an essential problem in the field of RL. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figs/lcpo-fig0-v3.pdf} \caption{Intuitive example showing the impact of efficient exploration in the early stage. The red region represents the obstacles. When the robot attends heavily to the safety constraints at the initial stage, it may only find a sub-optimal trajectory for the task. Instead, when the robot can ignore the constraints for unsafe states in the beginning, it can find a direct path to finish the task. Afterward, it can gradually meet the demand of avoiding collisions, so that the optimal trajectory is finally obtained.} \label{fig:toy} \end{figure} To balance rewards and costs, researchers have proposed Lagrangian-based methods that transform the primal problem into an unconstrained problem with a Lagrangian multiplier \cite{stooke2020responsive,liu2020ipo}. Such methods can be plugged into any RL-based algorithm conveniently. Because those methods lack theoretical guarantees, safe RL algorithms based on the trust region method have been proposed to achieve adequate policy updates \cite{achiam2017constrained,Yang2020Projection-Based}. Furthermore, inspired by the success of the Lyapunov stability theorem, some methods design safety certificates to ensure constraint satisfaction \cite{chow2018lyapunov,yang2023model}. Despite the recent progress in safe RL, previous studies enforce the full set of constraints consistently throughout the training process. The strong constraints are likely to harm policy exploration in the early training stage. Thus, consistently enforced constraints are detrimental to early exploration, and the policy may be trapped in sub-optimal points. To address this issue, the proposed algorithm should encourage early exploration and gradually enforce the safety constraints. We provide a toy example, shown in Fig. \ref{fig:toy}, to illustrate why the above strategy can improve performance. Based on this simple yet effective idea, we propose the Constrained Policy Optimization with Extra Safety Budget (ESB-CPO) algorithm \footnote{See our project page at \href{https://sites.google.com/view/esb-cpo}{https://sites.google.com/view/esb-cpo}.}. Our method can achieve higher rewards under the same cost limits compared with baselines. Our contributions can be summarized as follows: \begin{itemize} \item We construct a novel metric, Lyapunov-based Advantage Estimation (LAE), to evaluate safe and unsafe transitions. It consists of two parts, a stability value and a safety value. The safety value part has a significant impact only on unsafe transitions. \item We propose the Constrained Policy Optimization with Extra Safety Budget (ESB-CPO) algorithm based on LAE. To encourage exploration, our method loosens the constraints of unsafe transitions by adding an extra safety budget. The extra safety budget comes from the safety value part of LAE. Furthermore, the extra safety budget becomes very close to 0 in the final stage of training.
\item To achieve this goal in ESB-CPO, we update the two factors, $\alpha$ and $\beta$, in LAE using adaptive methods. By introducing a variable concerning safety, LAE can distinguish safe and unsafe transitions via $\beta$. In the early stages, the optimization-based adaptation of $\alpha$ controls the degree to which the constraints are loosened. \end{itemize} \section{RELATED WORKS} Safe reinforcement learning aims to solve a constrained optimization problem with safety constraints. The Constrained Markov Decision Process (CMDP) is commonly used to describe this problem. Concretely, a safe policy keeps the expected sum of safety violation costs below a given threshold. Some methods transform such a constrained problem into an unconstrained one \cite{liu2021policy,ding2021provably}. Earlier methods introduced Lagrangian relaxation to take the reward and cost into consideration jointly \cite{stooke2020responsive,Ray2019,peng2022model}. Liu \emph{et al.} applied the primal-dual interior point method to constrained reinforcement learning, transforming the constraints into a penalty on the objective function via a logarithmic barrier function \cite{liu2020ipo}. Another line of work adds trust region constraints to policy optimization, thereby providing a guarantee on safety violations \cite{achiam2017constrained}. TRPO is a model-free RL algorithm that guarantees the monotonicity of policy updating \cite{schulman2015trust}, and CPO implements constrained reinforcement learning based on TRPO \cite{achiam2017constrained}. Chow \emph{et al.} proposed algorithms based on Lyapunov functions to ensure safety during the training process \cite{chow2018lyapunov,chow2019lyapunovbased}. Inspired by traditional control methods, other works jointly learn a policy and a neural barrier certificate under a stepwise state constraint setting \cite{yang2023model,mathiesen2022safety}. Furthermore, some researchers use a two-stage method to ensure safety at each time step \cite{Yang2020Projection-Based,yang2022cup,xu2020primal,srinivasan2020learning}: the first stage of each training step uses TRPO to solve the unconstrained optimization problem, and the second stage projects the resulting policy onto the set of policies that satisfy the constraints. Additionally, introducing new variables that directly reflect the current state's safety is also a promising direction \cite{sootla2022saute, sootla2022enhancing}. Sootla \emph{et al.} proposed Sauté RL, which introduces a state variable that tracks the remaining safety budget as the cumulative cost changes and reflects it in the reward function, so that the agent can satisfy the constraints well \cite{sootla2022saute}. However, those methods impose the full constraints throughout training, which makes the agent learn a comparatively conservative policy. Our method lets the agent explore in the early stage and then restricts its behaviour gradually until the safety constraints are satisfied.
\section{PRELIMINARY}\label{pre} An MDP (Markov Decision Process) is defined as a tuple ($\mathcal S$, $\mathcal A$, $\mathcal P$, $\mathcal R$, $\mathcal \mu$, $\mathcal \gamma$), where $\mathcal S$ and $\mathcal A$ are the state and action spaces respectively, $\mathcal P: \mathcal S \times \mathcal A \times \mathcal S \mapsto [0, 1]$ is the transition probability function, $\mathcal R: \mathcal S \times \mathcal A \times \mathcal S \mapsto \mathds{R}$ is the reward function, $\mathcal \mu: \mathcal S \mapsto [0, 1]$ is the distribution of the initial state, and $\mathcal \gamma$ is the discount factor for future rewards. A CMDP is defined as a tuple ($\mathcal S$, $\mathcal A$, $\mathcal P$, $\mathcal R$, $\mathcal C_i$, $\mathcal \mu$, $\mathcal \gamma$), where $\mathcal S$, $\mathcal A$, $\mathcal P$, $\mathcal R$, and $\mathcal \mu$ have the same meaning as in an MDP, $\mathcal C_i: \mathcal S \times \mathcal A \times \mathcal S \mapsto \mathds{R}$ is the cost function which describes the satisfaction of the $i$-th constraint, and $\mathcal \gamma$ is the discount factor for both future rewards and costs. A policy $\mathcal \pi: \mathcal S \mapsto P(\mathcal A)$ maps given states to probability distributions over the action space, and $\pi(a_t|s_t)$ is the probability of taking action $a_t$ in state $s_t$ at time step $t$. We use $\pi_\theta$ to denote a policy parameterized by $\theta$. The expected discounted cumulative return of a policy is \begin{equation} J^R(\theta)=\mathds{E}_{\tau \sim \pi_\theta}\left[ \sum_{t=0}^{\infty}\gamma^t R(s_t, a_t, s_{t+1})\right]\label{JR} \end{equation} where $\tau \sim \pi_\theta$ is a trajectory sampled from $\pi_\theta$. The expected discounted cumulative cost of a policy is \begin{equation} J^{C_i}(\theta)=\mathds{E}_{\tau \sim \pi_\theta}\left[ \sum_{t=0}^{\infty}\gamma^t {C_i}(s_t, a_t, s_{t + 1})\right]\label{JC} \end{equation} The optimization goal of a safe RL algorithm is to find the optimal policy $\pi_{\theta^*}$ which maximizes $J^R$ while guaranteeing $J^{C_i} \leq d_i$, where $d_i$ is the cost limit for the $i$-th constraint. Formally, the optimization problem is defined as: \begin{equation} \begin{aligned} & \max_\theta J^R(\theta) \\ & {\rm s.t.} \quad J^{C_i}(\theta) \leq d_i \end{aligned} \label{problem defination} \end{equation} Standard definitions of the value function $V_{\theta}$, the state-action value function $Q_{\theta}$, the cost value function $V_{\theta}^{C_i}$ and the state-action cost value function $Q_{\theta}^{C_i}$ are used in most previous studies, so we omit them in the main text. The commonly used advantage functions are defined as $A_{\theta}^R(s,a)=Q_{\theta}(s,a)-V_{\theta}(s)$ and $A_{\theta}^{C_i}(s,a)=Q_{\theta}^{C_i}(s,a)-V_{\theta}^{C_i}(s)$. The following theorem provides bounds on the objectives and constraints under policies $\pi_\theta$ and $\pi_{\Tilde{\theta}}$ \cite{achiam2017constrained}.
\begin{theorem} For any function $f: \mathcal S \times \mathcal A \times \mathcal S \mapsto \mathds{R} $ and any policies $\pi_\theta$ and $\pi_{\Tilde{\theta}}$, define \begin{equation} \epsilon_f^{\Tilde{\theta}} \doteq \max_{s_t} \left| \mathop{\mathds{E}}_{a_t \sim \pi_{\Tilde{\theta}} \atop s_{t+1} \sim P_{a_t}^{s_t}} \left[ f(s_t, a_t, s_{t+1}) \right] \right|, \label{epsilon} \end{equation} The following bounds hold: \begin{equation} \begin{aligned} J^R(\Tilde{\theta}) \geq &J^R(\theta) + \frac{1}{(1-\gamma)}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}} \left[ \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} A_\theta^R (s,a)\right] \\ &- M_\theta^R (\Tilde{\theta}) \mathop{\mathds{E}}_{s \sim \rho_\theta} \left[ D_{TV}(\Tilde{\theta} || \theta) [s] \right], \label{Jrbound} \end{aligned} \end{equation} \begin{equation} \begin{aligned} J^{C_i}(\Tilde{\theta}) \leq &J^{C_i}(\theta) + \frac{1}{(1-\gamma)}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}} \left[ \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} A_\theta^{C_i} (s,a)\right] \\ &+ M_\theta^{C_i} (\Tilde{\theta}) \mathop{\mathds{E}}_{s \sim \rho_\theta} \left[ D_{TV}(\Tilde{\theta} || \theta) [s] \right], \label{Jcbound} \end{aligned} \end{equation} where $M^{{R}}_{\theta}(\Tilde{\theta})=\frac{2 \gamma }{(1-\gamma)^2} \epsilon_{V_\theta}^{\Tilde{\theta}}$, $M^{{C_i}}_{\theta}(\Tilde{\theta})=\frac{2 \gamma }{(1-\gamma)^2} \epsilon_{V_\theta^{C_i}}^{\Tilde{\theta}}$. \end{theorem} These bounds can be used as surrogate objectives to theoretically guarantee monotonic improvement during policy updates. CPO is a practical algorithm that uses these surrogate objectives with trust region theorems, optimizing (\ref{problem defination}) by the following update step: \begin{equation} \begin{aligned} \theta' = & \mathop{\rm argmax}_{\Tilde{\theta}} \mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} A_{\theta}^R (s, a) \right] \\ {\rm s.t.} \quad & J^{C_i}(\theta) + \frac{1}{(1-\gamma)}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} {A_{\theta}^{C_i}} (s, a) \right] \leq d_i \\ & \mathop{\mathds{E}}_{s \sim \rho_\theta} \left[ D_{KL}(\pi_{\Tilde{\theta}}(\cdot | s) || \pi_{\theta}(\cdot | s)) \right] \leq \delta \label{CPO} \end{aligned} \end{equation} \section{METHODOLOGY} \subsection{Lyapunov-Based Advantage Estimation} Lyapunov functions have become an effective tool to evaluate a system's stability in RL \cite{chang2021stabilizing,han2020actor,wang2023rl}. Recent studies utilize the control Lyapunov function (CLF) to assess the system's safety, achieving promising results on some robotic tasks \cite{lawrence2020almost,mittal2020neural,dawson2022safe}. Inspired by these successes, we observe that such functions can separately evaluate the performance of safe and unsafe transitions. Thus, we construct a new metric, namely Lyapunov-based Advantage Estimation (LAE), ${A^{C_i}_\theta}'(s_t, a_t)$, as follows: \begin{equation} \begin{aligned} & {A^{C_i}_\theta}'(s_t, a_t) = \mathop{\mathds{E}}_{s_{t+1} \sim P_a^{s_t}}[ V^{C_i}_\theta(s_{t+1}) \\ &-V^{C_i}_\theta(s_t) + \alpha \left( V^{C_i}_\theta(s_t) - \beta V^{C_i}_\theta(s_{t+1}) \right)] \label{Lya adv} \end{aligned} \end{equation} where $\alpha, \beta \in [0, 1]$ are adaptive factors and $P_a^{s_t}$ is the distribution of the next state after taking $a_t$ in $s_t$.
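To make the new metric concrete, the following minimal Python sketch evaluates LAE from cost-value estimates of the current and next states; the function name, the array-based interface, and the assumption that a learned cost critic already supplies $V^{C_i}_\theta$ are illustrative choices of ours rather than part of the algorithm.
\begin{verbatim}
import numpy as np

def lae(v_c, v_c_next, alpha, beta):
    # Lyapunov-based Advantage Estimation:
    #   A' = V^C(s_{t+1}) - V^C(s_t) + alpha * (V^C(s_t) - beta * V^C(s_{t+1})),
    # with the expectation over s_{t+1} replaced by sampled next-state values.
    stability = v_c_next - v_c                  # stability value part
    safety = alpha * (v_c - beta * v_c_next)    # safety value part
    return stability + safety

v_c = np.array([0.5, 2.0])        # V^C at current states (illustrative numbers)
v_c_next = np.array([0.4, 2.5])   # V^C at sampled next states
print(lae(v_c, v_c_next, alpha=0.9, beta=0.0))  # Lyapunov-type evaluation
print(lae(v_c, v_c_next, alpha=0.9, beta=1.0))  # CLF-type evaluation
\end{verbatim}
Setting $\beta$ to $0$ or $1$ in this sketch recovers the Lyapunov-type and CLF-type conditions discussed next.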
Notice that ${A^{C_i}_\theta}'(s_t,a_t) \leq 0$ means \begin{equation} \begin{aligned} &\mathop{\mathds{E}}_{s_{t+1} \sim P_a^{s_t}} \left[ V^{C_i}_\theta(s_{t+1}) \right] -V^{C_i}_\theta(s_t) \\ & \leq -\alpha \left( V^{C_i}_\theta(s_t) - \beta \mathop{\mathds{E}}_{s_{t+1} \sim P_a^{s_t}} \left[ V^{C_i}_\theta(s_{t+1}) \right] \right) \label{constraint function} \end{aligned} \end{equation} Concretely, the above inequality corresponds to a Lyapunov function constraint when $\beta = 0$ \cite{murray2017mathematical}. On the other hand, when $\beta = 1$, it equals the constraint of a control Lyapunov function (CLF) \cite{mittal2020neural}. Intuitively, the CLF is a stronger constraint than the Lyapunov function since it contains an extra safety consideration. Fig. \ref{fig:adv} shows an illustrative example depicting the relationship between our advantage estimation ${A^{C_i}_\theta}'(s_t, a_t)$ and the total cost. ${A^{C_i}_\theta}'(s_t, a_t)$ contains two parts, the stability and safety values. In the safe region, it only represents a value function concerning stability. When the agent is in the unsafe region, the extra part, $\beta V^{C_i}_\theta(s_{t+1})$, acts as a metric of safety. Thus, evaluating the performance of safe transitions depends on the stability value part, whereas evaluating unsafe transitions depends on both the stability and safety value parts. More specifically, $\beta$ adjusts the evaluation function of LAE; in other words, $\beta$ controls how threatening a transition is estimated to be, according to the policy's satisfaction of safety at step $t$. To sum up, our advantage estimation can magnify the gap between safe and unsafe transitions through the safety value part. \subsection{Constrained Policy Optimization With Extra Safety Budget} Based on problem \eqref{CPO}, we derive our optimization problem using LAE, which updates the policy as: \begin{equation} \begin{aligned} \theta' = & \mathop{\rm argmax}_{\Tilde{\theta}} \mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} A_{\theta}^R (s, a) \right] \\ {\rm s.t.} \quad & J^{C_i}(\theta) + \frac{1}{(1-\gamma)}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \Delta_{\theta, \Tilde{\theta}} (s,a) \frac{{A_{\theta}^{C_i}}' (s, a)}{1 - \alpha_{i\theta}} \right] \leq d_i \\ & \mathop{\mathds{E}}_{s \sim \rho_\theta} \left[ D_{KL}(\pi_{\Tilde{\theta}}(\cdot | s) || \pi_{\theta}(\cdot | s)) \right] \leq \delta \label{ESB-CPO} \end{aligned} \end{equation} where $\alpha_{i\theta}$ decreases from $1^-$ to $0$ as updating proceeds and $\Delta_{\theta, \Tilde{\theta}} (s, a) = \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} - 1$. $\Delta_{\theta, \Tilde{\theta}} (s, a)$ describes the tendency of the policy update from $\pi_\theta$ to $\pi_{\Tilde{\theta}}$: if the new policy tries to avoid choosing action $a$ in state $s$, then $\Delta_{\theta, \Tilde{\theta}} (s, a) < 0$; otherwise, $\Delta_{\theta, \Tilde{\theta}} (s, a) > 0$. In the following, we explain why our method encourages exploration in the early stage and gradually meets the safety requirement.
First, we relate ${A_{\theta}^{C_i}}' (s, a)$ to $A_{\theta}^{C_i}(s,a)$: \begin{equation} \frac{{A_{\theta}^{C_i}}' (s, a)}{1-\alpha_\theta} = A_{\theta}^{C_i}(s,a) + B_{1\theta}^i (s, a) + B_{2\theta}^i (s'), \end{equation} where $B_{1\theta}^i (s, a) = (1-\gamma)V_\theta^{C_i}(s') - C(s, a, s')$, $B_{2\theta}^i (s) = \frac{\alpha_\theta (1 - \beta_\theta (s))}{1-\alpha_\theta} V_\theta^{C_i}(s)$. Therefore, (\ref{ESB-CPO}) can be obtained by adding two gaps to the constraint function in (\ref{CPO}): \begin{equation} \begin{aligned} &J^{C_i}(\theta) + \frac{1}{(1-\gamma)}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \Delta_{\theta, \Tilde{\theta}} (s,a) \frac{{A_{\theta}^{C_i}}' (s, a)}{1 - \alpha_{i\theta}} \right] \leq d_i \\ &\Leftrightarrow \begin{aligned} &J^{C_i}(\theta) + \frac{1}{(1-\gamma)}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} {A_{\theta}^{C_i}} (s, a) \right] \\ &+ G_{1\theta}^i (s, a) + G_{2\theta}^i (s, a)\leq d_i \end{aligned} \end{aligned} \end{equation} where $G_{1\theta}^i (s, a) = \frac{1}{1-\gamma}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \Delta_{\theta, \Tilde{\theta}} (s,a) B_{1\theta}^i (s, a) \right]$, $G_{2\theta}^i (s, a) = \frac{1}{1-\gamma}\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \Delta_{\theta, \Tilde{\theta}} (s,a) B_{2\theta}^i (s') \right]$. It is clear that if these gaps are negative, they loosen the constraint; otherwise they tighten it. Therefore, since the safety budget is defined as $d_i - J^{C_i}(\theta)$, the gaps can be seen as Extra Safety Budgets (ESBs). Notice that if a gap is negative, the corresponding ESB is positive: \begin{equation} \rm ESB = -Gap. \end{equation} In the rest of this subsection, we show how these ESBs match our new metric and how they influence the policy update in detail. \begin{figure}[t!] \centering \includegraphics[width=0.25\textwidth]{figs/lcpo-fig2-v3.pdf} \caption{Stability value and safety value of the Lyapunov-based advantage estimation (LAE) under safe and unsafe transitions.} \label{fig:adv} \end{figure} \subsubsection{ESB For Stability} Notice that \begin{equation} A_{\theta}^{C_i}(s,a) + B_{1\theta}^i (s, a) = V_{\theta}^{C_i}(s') - V_{\theta}^{C_i}(s). \end{equation} This means that by adding $G_{1\theta}^i (s, a)$ we effectively replace the advantage function with our stability value. Our experiments demonstrate that $G_{1\theta}^i (s, a)$ is very close to 0 in practice. Furthermore, $B_{1\theta}^i (s, a)$ provides another benefit: when a transition is safe, a sparse-cost environment gives zero immediate cost, so $B_{1\theta}^i (s, a)$ provides a prediction of the average future cost. Since $\gamma < 1$, $B_{1\theta}^i (s, a)$ is then always positive, which means we tighten the bound when $\Delta_{\theta, \Tilde{\theta}} (s,a) > 0$ (the action should be encouraged). \subsubsection{ESB Balancing Exploration Efficiency and Constraint Satisfaction} $G_{2\theta}^i (s, a)$ matches the safety value in our new metric. From the form of $B_{2\theta}^i (s)$ we know that when state $s$ is safe, $\beta_\theta (s) = 1$ and thus $B_{2\theta}^i (s) = 0$; when state $s$ is unsafe, $\beta_\theta (s) < 1$ and $B_{2\theta}^i (s) > 0$ since $V_\theta^{C_i}(s)$ is positive.
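For concreteness, the following minimal sketch estimates $B_{1\theta}^i$, $B_{2\theta}^i$ and the resulting gaps from a batch of sampled transitions; the helper name, the batched-array interface, and the use of a plain sample mean in place of the expectations over $\rho_\theta$ and $\pi_\theta$ are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

def esb_gaps(v_c_next, cost, beta, alpha, gamma, ratio):
    # v_c_next: cost critic V^C at next states; cost: immediate costs C(s, a, s');
    # beta: safety factor per transition; ratio: pi_new(a|s) / pi_old(a|s).
    delta = ratio - 1.0                                    # policy-update tendency
    b1 = (1.0 - gamma) * v_c_next - cost                   # B1 term
    b2 = alpha * (1.0 - beta) / (1.0 - alpha) * v_c_next   # B2 term (zero when beta = 1)
    g1 = np.mean(delta * b1) / (1.0 - gamma)               # gap G1
    g2 = np.mean(delta * b2) / (1.0 - gamma)               # gap G2
    return -g1, -g2                                        # extra safety budgets: ESB = -Gap

print(esb_gaps(v_c_next=np.array([1.2]), cost=np.array([1.0]),
               beta=np.array([0.3]), alpha=0.9, gamma=0.99, ratio=np.array([0.8])))
\end{verbatim}
In this toy call, the new policy lowers the probability of a costly action (ratio below one), so the second gap is negative and the corresponding extra safety budget is positive.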
Similar to $G_{1\theta}^i (s, a)$, if the new policy tends to avoid an unsafe transition in most states, then $G_{2\theta}^i (s, a) < 0$; otherwise $G_{2\theta}^i (s, a)>0$. This means we strengthen the constraint when the policy tries to take more risk in most states, while loosening it when the policy tends to be safer. Since the policy is updated toward minimizing total costs, $G_{2\theta}^i (s, a)$ is more likely to be negative than positive because of $\Delta_{\theta, \Tilde{\theta}} (s, a)$. Thus $G_{2\theta}^i (s, a)$ is more likely to provide a positive extra safety budget that loosens the constraint and thereby encourages exploration. Furthermore, this encouragement decreases as updating proceeds. At the beginning, when $\alpha_\theta \rightarrow 1$, the total constraint is greatly loosened, so we achieve excellent exploration efficiency; as the influence of $G_{2\theta}^i (s, a)$ becomes weaker, the extra safety budget becomes smaller, and satisfaction of the original constraint is gradually obtained. Since $G_{2\theta}^i (s, a)$ is zero when $\alpha_\theta=0$, our policy can reach a constraint satisfaction no worse than that of CPO. In the practical algorithm, $\alpha_{i\theta}$ decreases from $1^-$ to $0$, so the influence of $G_{2\theta}^i (s, a)$ becomes weaker. Fig. \ref{fig:constraint-steps} clearly shows the influences of the ESBs. The total safety budget determines the constraint: if the total safety budget is higher, the constraint is weaker. In the early epochs, $G_{2\theta}^i (s, a)$ provides a large extra safety budget depending on the safety state. In the end, the total safety budget is close to the original cost limit, so the constraint is close to the original one. $G_{1\theta}^i (s, a)$ provides a small, stability-related extra safety budget that is independent of safety or training steps. The direction of the policy update determines whether the ESBs are positive or negative. \subsection{Sample-Based Adaptation of Factors} Based on the above analysis, we give practical sample-based methods to update the factors $\alpha_{i\theta}$ and $\beta_{i\theta}(s_t)$. \subsubsection{Adaptation of $\beta_{i\theta}(s_t)$ based on safety states} The normalized safety state $z_{i\theta}(s_t)$ proposed in \cite{sootla2022saute} is a sample-based internal state that directly reflects the safety of the state at step $t$, based on the remaining safety budget. It is defined as \begin{equation} z_{i\theta}(s_t) = \frac{d_i - \sum_{l=0}^t \gamma^l {C_i}(s_l, a_l, s_{l+1})}{\gamma^t d_i} , \end{equation} where $s_l$, $a_l$ and $s_{l+1}$ are taken from a trajectory sampled from $\pi_\theta$. Notice that our ESBs are not included here because we need $z_{i\theta}(s_t)$ to reflect the actual safety. It can be seen that when the accumulated cost exceeds the safety budget $d_i$, $z_{i\theta}(s_t)$ becomes less than 0. It is easy to see that $z_{i\theta}(s_{t})$ can be updated as \begin{equation} \begin{aligned} z_{i\theta}(s_{t+1}) &= \frac{z_{i\theta}(s_t) - \frac{{C_i}(s_t, a_t, s_{t+1})}{d_i}}{\gamma} \label{z_t} \end{aligned} \end{equation} with an initial value of $1$ before $t=0$. Considering the range $[0,1]$, we calculate $\beta_{i\theta}(s_t)$ by \begin{equation} \beta_{i\theta}(s_t) = 1 + \min \left( \tanh \left( z_{i\theta}(s_t) \right),0 \right). \label{beta} \end{equation} When $z_{i\theta}(s_t)$ is less than 0 (unsafe state), $\beta_{i\theta}(s_t)$ decreases towards 0. \begin{figure}[t!]
\centering \includegraphics[width=0.5\textwidth]{figs/fig3_v2.pdf} \caption{Impact of $G_{1\theta}^i$ and $G_{2\theta}^i$ on the practical constraints as training proceeds. $G_{1\theta}^i$ stays very close to 0 throughout, and the total safety budget decreases gradually due to the change of $G_{2\theta}^i$. } \label{fig:constraint-steps} \end{figure} \subsubsection{Adaptation of $\alpha_{i\theta}$ based on optimization} In our view, the policy gradient directly reflects how the constraint influences the policy. Therefore, we introduce a Lagrangian multiplier $\lambda_i$ to calculate $\alpha_{i\theta}$ based on the policy gradient of the constraint function. First, we construct the following local optimization problem: \begin{equation} \min_{\Tilde{\theta}} \max_{\lambda_i} \lambda_i P_{i\theta}(\Tilde{\theta})\label{prime} \end{equation} where $P_{i\theta}(\Tilde{\theta})=\mathop{\mathds{E}}_{s \sim \rho_\theta \atop a \sim \pi_{\theta}}\left[ \frac {\pi_{\Tilde{\theta}}(a|s)}{\pi_{\theta}(a|s)} {A_\theta^{C_i}}'(s, a) \right]$. The dual problem of (\ref{prime}) is \begin{equation} \begin{aligned} \max_{\lambda_i} \min_{\Tilde{\theta}} &\quad \lambda_i P_{i\theta}(\Tilde{\theta}), \\ {\rm s.t.} &\quad \lambda_i \geq 0. \end{aligned} \end{equation} Thus $\lambda_i$ can be updated as \begin{equation} \lambda_{i, t+1} = \max \left( \lambda_{i, t} + \eta P_{i\theta}(\Tilde{\theta}), 0 \right),\label{lagragian} \end{equation} where $\eta$ is the step size. Notice that during policy optimization, $P_{i\theta} (\Tilde{\theta})$ is more likely to be negative. Therefore, Eq. (\ref{lagragian}) indicates that $\lambda_i$ decreases during the training process. Considering the range of $\alpha_{i\theta}$, we calculate $\alpha_{i\theta}$ by \begin{equation} \alpha_{i\theta} = \tanh \left( \frac{k_i}{e^{-\lambda_{i}}} \right) , \end{equation} where $k_i$ is a hyperparameter that globally controls the decreasing speed of $\alpha_{i\theta}$. As $\lambda_i$ decreases, $\alpha_{i\theta}$ changes from 1 towards 0. \begin{figure*}[th] \centering \includegraphics[width=0.88\textwidth]{figs/framework.png} \caption{Framework of the ESB-CPO algorithm. The method first computes the adaptive factors $\alpha_{i\theta}$ and $\beta_{i\theta}(s_t)$; the LAE value is then obtained from them; finally, an approximate trust region method is used to update the current policy.} \label{fig:framework} \end{figure*} \subsection{Algorithm Description} For a small step size $\delta$, the optimization problem can be solved approximately by updating with first-order approximations of the objective and constraints and a second-order approximation of the KL-divergence. Denoting the gradient of the objective as $\mathfrak{g}$, the gradient of the constraint as $\mathfrak{b}$, the Hessian of the KL-divergence as $\mathcal{H}$, and defining $\mathfrak{c}_i \doteq J^{C_i} (\theta) - d_i$, the approximation to Eq. (\ref{ESB-CPO}) is \begin{equation} \begin{aligned} \theta' = & \mathop{\rm argmax}_{\Tilde{\theta}} \mathfrak{g}^\top (\Tilde{\theta} - \theta) \\ {\rm s.t.} \quad & \mathfrak{c}_i + \mathfrak{b}^\top (\Tilde{\theta} - \theta) \leq 0 \\ & \frac{1}{2}(\Tilde{\theta} - \theta)^\top \mathcal{H} (\Tilde{\theta} - \theta) \leq \delta \label{approx LCPO} \end{aligned} \end{equation}
Eq.~(\ref{approx LCPO}) directly matches the form of approximate CPO \cite{achiam2017constrained}, whose dual problem is \begin{equation} \max_{\mu_1 \geq 0 \atop \mu_2 \succeq 0} \frac{-1}{2\mu_1}\left( \mathfrak{g}^\top \mathcal{H}^{-1} \mathfrak{g} -2\mathfrak{r}^\top \mu_2 + \mu_2^\top \mathcal{S} \mu_2 \right) + \mu_2^\top \mathfrak{c} - \frac{\mu_1 \delta}{2},\label{dual approx} \end{equation} where $\mathfrak{c} = [\mathfrak{c}_0, \mathfrak{c}_1, ...]$, $\mathfrak{r} \doteq \mathfrak{g}^\top \mathcal{H}^{-1} \mathcal{B}$, $\mathcal{S} \doteq \mathcal{B}^\top \mathcal{H}^{-1} \mathcal{B}$, $\mathcal{B}=[\mathfrak{b}_0, \mathfrak{b}_1, ...]$. Therefore, in our experiments with a single constraint, Eq. (\ref{approx LCPO}) can be solved via the approximate CPO update: \begin{equation} {\rm If \quad (\ref{dual approx}) \quad is \quad feasible:} \quad \hat{\theta} = \theta + \frac{1}{\mu_1^*}\mathcal{H}^{-1}(\mathfrak{g} - \mu_2^* \mathfrak{b}), \label{feasible} \end{equation} \begin{equation} {\rm else:} \quad \hat{\theta} = \theta - \sqrt{\frac{2\delta}{\mathfrak{b}^\top \mathcal{H}^{-1} \mathfrak{b}}} \mathcal{H}^{-1} \mathfrak{b}, \label{unfeasible} \end{equation} where $\mu_1^*$ and $\mu_2^*$ are the solutions to (\ref{dual approx}). Finally, the new policy $\pi_{\theta'}$ is obtained by a backtracking line search to enforce satisfaction of the constraints. The pseudo-code of our algorithm is shown in Algorithm \ref{alg:1}, and the corresponding framework is shown in Fig. \ref{fig:framework}. \begin{algorithm}[h] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \caption{ESB-CPO} \label{alg:1} \begin{algorithmic}[1] \STATE Orthogonally initialize the actor network and critic networks \FOR {$k$ in 0, 1, 2, ...} \STATE Sample a set of trajectories $D = \{ \tau \} \sim \pi_{\theta_k}$ \FOR{$\tau$ in $D$} \FOR{$s$ in $\tau$} \STATE Compute $\beta_{\theta_k}(s)$ with (\ref{beta}) \ENDFOR \ENDFOR \STATE Compute $\alpha_{\theta_{k}}$ by solving the local dual problem \STATE Form sample estimates $\hat{\mathfrak{g}}$, $\hat{\mathfrak{b}}$, $\hat{\mathcal{H}}$, $\hat{\mathfrak{c}}$ with $D$ \IF {approximate ESB-CPO is feasible} \STATE Compute policy proposal $\hat{\theta}$ with (\ref{feasible}) \ELSE \STATE Compute policy proposal $\hat{\theta}$ with (\ref{unfeasible}) \ENDIF \STATE Obtain $\theta_{k+1}$ by backtracking line search to enforce satisfaction of the constraint function in (\ref{ESB-CPO}) \STATE Update critic networks by TD-like critic learning \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure*}[ht!]
\centering Average Returns: \begin{subfigure}{\textwidth} \centering \vspace{0.22em} \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/Safexp-DoggoGoal1-v0-reward.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/Safexp-PointPush1-v0-reward.png} \end{subfigure} \centering \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/SafetyBallReach-v0-reward.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/SafetyDroneCircle-v0-reward.png} \end{subfigure} \end{subfigure} Average Costs: \begin{subfigure}{\textwidth} \centering \vspace{0.22em} \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/Safexp-DoggoGoal1-v0-cost.png} \caption{Doggo-Goal} \end{subfigure} \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/Safexp-PointPush1-v0-cost.png} \caption{Point-Push} \end{subfigure} \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/SafetyBallReach-v0-cost.png} \caption{Ball-Reach} \end{subfigure} \begin{subfigure}{0.245\textwidth} \includegraphics[width=\textwidth]{figs/baseline/SafetyDroneCircle-v0-cost.png} \caption{Drone-Circle} \end{subfigure} \end{subfigure} \caption{Average performance of ESB-CPO, CPO, SPPO, TRPO-L and TRPO over several seeds; the x-axis is the training iteration. ESB-CPO outperforms the baselines in terms of average return, or is at least no worse than the best of them, which indicates that our method has better exploration efficiency. Though at the beginning of training ESB-CPO may fail to satisfy the constraints, it eventually satisfies them or at least gets very close to the limit during training.} \label{fig:results} \end{figure*} \begin{figure}[h] \centering \begin{subfigure}{0.235\textwidth} \centering \includegraphics[width=0.75\linewidth]{figs/Safexp-Doggogoal.jpg} \caption{Doggo-Goal: a Safety Gym task, where a quadruped robot needs to navigate to a goal in an environment with obstacles.} \end{subfigure} \begin{subfigure}{0.235\textwidth} \centering \includegraphics[width=0.75\linewidth]{figs/Safexp-PointPush.jpg} \caption{Point-Push: a Safety Gym task, where a point robot needs to push a box to a goal in an environment with obstacles.} \end{subfigure} \begin{subfigure}{0.235\textwidth} \centering \includegraphics[width=0.75\linewidth]{figs/SafetyBallReach.jpg} \caption{Ball-Reach: a Bullet-Safety-Gym task, where a spherical robot needs to reach a series of goals in an environment with obstacles.} \end{subfigure} \begin{subfigure}{0.235\textwidth} \centering \includegraphics[width=0.75\linewidth]{figs/SafetyDroneCircle.jpg} \caption{Drone-Circle: a Bullet-Safety-Gym task, where an aerial vehicle needs to move along a circle clockwise without leaving the safe region.} \end{subfigure} \caption{Tasks used in the experiments.} \label{fig:env} \vspace{-10pt} \end{figure} \section{EXPERIMENTS} In this section, we design experiments to answer the following questions: \begin{itemize} \item Does ESB-CPO outperform baseline algorithms in exploration efficiency? \item Does ESB-CPO achieve good constraint satisfaction at the end of training? \item Do the ESBs adaptively change as we expect? \end{itemize} \subsection{Comparison With Baselines} We construct experiments on four tasks from two benchmarks, Bullet-Safety-Gym\cite{Gronauer2022BulletSafetyGym} and Safety Gym\cite{Ray2019}.
We describe the tasks in Fig. \ref{fig:env}. We use CPO\cite{achiam2017constrained}, SPPO\cite{chow2019lyapunovbased}, and TRPO-Lagrangian\cite{peng2022model} as baselines \footnote{Baselines are implemented in \href{https://github.com/PKU-MARL/Safe-Policy-Optimization}{https://github.com/PKU-MARL/Safe-Policy-Optimization}.}. These baselines are representative works of trust-region-based methods, Lyapunov-based methods and primal-dual methods, respectively. We also run TRPO\cite{schulman2015trust} to show what happens when no constraints are imposed. The results are shown in Fig. \ref{fig:results}. In our experiments, ESB-CPO outperforms most of the baselines in total returns and achieves good constraint satisfaction. The training process on Drone-Circle clearly shows how our method works. In the early epochs, the returns and costs are both high and close to those of TRPO, and the costs then decrease gradually to the cost limit. These results indicate that in the early epochs the agent explored efficiently under very loose constraints and gradually learned to avoid unsafe situations as the ESBs approached zero. The results show that our method allows overshoots of returns and violations of constraints in the early epochs, and then constrains the policy back to a safe region. In the Doggo-Goal experiments, we set a cost limit close to the average cost of TRPO, which means the task is almost constraint free. In this case we expect the policy to achieve a performance close to TRPO. The results show that ESB-CPO achieves this goal eventually, whereas some baselines perform much worse than TRPO. These results demonstrate that loosening constraints according to their satisfaction encourages exploration. The total ESBs ($-(G_{1\theta} (s, a) + G_{2\theta} (s, a))$) in the Drone-Circle and Doggo-Goal experiments are shown in Fig. \ref{fig:exp-ESB}, which provides evidence that the constraints changed as we expected. In the early epochs the ESBs greatly influence the total safety budget since their absolute values are much larger than the cost limits; in the end, the ESBs are close to 0, so the policies are optimized to satisfy the original constraints. In most cases the ESBs are positive; thus they loosen the constraints for better exploration efficiency.
\begin{figure}[h] \centering \begin{subfigure}{0.237\textwidth} \centering \includegraphics[width=\linewidth]{figs/baseline/SafetyDroneCircle-v0-figure.png} \caption{Drone-Circle} \end{subfigure} \begin{subfigure}{0.237\textwidth} \centering \includegraphics[width=\linewidth]{figs/baseline/Safexp-DoggoGoal1-v0-figure.png} \caption{Doggo-Goal} \end{subfigure} \caption{Extra Safety Budgets in the experiments.} \label{fig:exp-ESB} \vspace{-10pt} \end{figure} \begin{figure}[h] \centering Average Returns: \begin{subfigure}{0.5\textwidth} \centering \vspace{0.22em} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figs/ablation/SafetyDroneCircle-v0-reward_ablation.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figs/ablation/Safexp-PointPush1-v0-reward_ablation.png} \end{subfigure} \end{subfigure} Average Costs: \begin{subfigure}{0.5\textwidth} \centering \vspace{0.22em} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figs/ablation/SafetyDroneCircle-v0-cost_ablation.png} \caption{Drone-Circle} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figs/ablation/Safexp-PointPush1-v0-cost_ablation.png} \caption{Point-Push} \end{subfigure} \end{subfigure} \caption{Average performance for the ablation study.} \label{fig:ablation} \vspace{-10pt} \end{figure} \subsection{Ablation Study} In Fig. \ref{fig:ablation}, we compare the performance of CPO (ESB-CPO with no ESBs), ESB-CPO with only $G_{1\theta} (s, a)$, and full ESB-CPO (with both $G_{1\theta} (s, a)$ and $G_{2\theta} (s, a)$). We denote the latter two algorithms as ESB-CPO (G1) and ESB-CPO (G1+G2), respectively. $G_{1\theta} (s, a)$ controls the constraints based on stability, which is independent of training epochs. Thus the costs of ESB-CPO (G1) are similar to those of CPO, with a slight gap since in some tasks stability is a tighter constraint. Stability is a task-dependent constraint and thus influences the two tasks differently. Notice that in Point-Push, ESB-CPO (G1) has worse performance but better constraint satisfaction. $G_{2\theta} (s, a)$ has a greater influence on the constraint in the early epochs to encourage exploration. Thus ESB-CPO (G1+G2) gains a significant improvement in return, though with slightly higher costs than the other methods. \section{CONCLUSIONS} The Constrained Policy Optimization with Extra Safety Budget (ESB-CPO) algorithm constructs a constrained optimization problem based on the trust region method. Different from the CPO algorithm, we propose a new metric, Lyapunov-based Advantage Estimation (LAE), which consists of stability and safety values and can magnify the gap between safe and unsafe transitions through the safety value part. By viewing the safety value part as an extra safety budget, our method loosens the constraints on unsafe transitions in the early stage. Meanwhile, our method gradually enforces the safety constraints because the theoretical bound becomes very close to the bound in the CPO algorithm. A promising direction for future work is to evaluate our method on more practical robotic tasks. Furthermore, we hope to extend our work to off-policy and model-based RL methods. \bibliographystyle{ieeetr}
{ "arxiv_id": "2302.14297", "language": "en", "timestamp": "2023-03-01T02:08:44", "url": "https://arxiv.org/abs/2302.14297", "yymm": "2302" }
\section{Introduction} In mobile networks, an enormous amount of data is being continuously generated by billions of edge devices. Data analytics can be performed on distributed data to distill useful insights to empower a broad range of mobile applications, from e-commerce to auto-driving to IoT sensing~\cite{Bennis6Gvision,Letaief2021JSAC}. One basic class of techniques, tensor decomposition, extracts a low-dimensional structure from large-scale multi-attribute data represented as tensors (i.e., high-dimensional counterparts of matrices) \cite{Larsson2010,TDreview2017}. A popular technique in this class, called Tucker decomposition, is a higher-dimensional extension of singular-value decomposition (SVD) that has supported diverse applications such as image recognition at Google and anomaly spotting at Cynefin. In mobile networks, tensor decomposition can be implemented in a centralized manner, which requires the uploading of high-dimensional data from many devices to a central server. However, such an implementation is stymied not only by a communication bottleneck but also by the issue of data privacy~\cite{MZChen2021,NiyatoFL2020}. In view of these issues, we focus on \emph{distributed tensor decomposition} (DTD), which avoids direct data uploading and reduces communication overhead by distributing the computation of data tensors to multiple devices. A direct distributed implementation is to parallelize centralized iterative methods such as alternating least squares~\cite{PARAFAC2005} and stochastic gradient descent~\cite{Kijung2017,zhang2021turning} over edge devices, which, however, results in large communication overhead due to slow convergence. On the other hand, DTD can be realized via \emph{one-shot} distributed matrix analysis techniques~\cite{chen2022analog,JF2019estimation2019,VC2021DPCA}, since the desired orthogonal factor matrices can be estimated as the principal eigenspaces of unfolding matrices of the tensor along different modes~\cite{TensorReview}. These one-shot methods improve communication efficiency at a slight cost of decomposition accuracy by following two steps: 1) computing local estimates of the desired factor matrix at devices using local data; 2) uploading and aggregating the local estimates at the server to compute a global estimate. Though alleviated, the communication bottleneck still exists due to the required aggregation of high-dimensional local tensors over potentially many devices. This multi-access problem can be addressed by using a technique called \emph{over-the-air computation} (AirComp), which exploits the waveform superposition property of a multi-access channel to realize over-the-air data aggregation in one shot~\cite{GZhu2021WCM,MZChen2021}. In general, AirComp finds applications in communication-efficient distributed computing and learning, with a recent focus on federated learning (see, e.g., \cite{Eldar2021TSP, Ding2020TWC,Deniz2020TSP}). Considering a DTD system with AirComp, this work aims to solve two open problems. The first is the prohibitive cost and latency of computation at resource-constrained devices. A traditional one-shot DTD algorithm requires each device to perform eigenvalue decomposition of a potentially high-dimensional local dataset (from, e.g., its sensor or user).
The resultant complexity increases \emph{super-linearly} with the data dimensions~\cite{SVDComplexity2019}, and the latency is high as it is limited by frequent access to memory, which is usually much slower than processing~\cite{randomprojection2011}. These issues make it difficult for DTD to support emerging mission-critical applications~\cite{Petar2022TimePersp}. The second problem is that the one-shot transmissions by devices are susceptible to link disruption. Specifically, the loss of the mobile connection while transmitting high-dimensional local principal components can render the already received partial data useless. In other words, the existing designs lack the feature of graceful performance degradation under fading. To solve these problems, we propose the novel framework of \emph{on-the-fly communication-and-computing} (FlyCom$^2$). Underpinning the framework is the use of a technique from randomized linear algebra, \emph{randomized sketching}, that generates low-dimensional random representations, called \emph{sketches}, of a high-dimensional data sample by projecting it into randomly generated low-dimensional sub-spaces~\cite{StreamingTD, randomprojection2011}. The technique has been successfully used in diverse applications ranging from online data tracking~\cite{Sketching} to matrix approximation~\cite{randomprojection2011}. In FlyCom$^2$, in place of the traditional high-dimensional local eigenspaces, each device generates a stream of low-dimensional sketches for uploading to the server. Considering a \emph{multiple-input-multiple-output} (MIMO) channel, the simultaneous transmission of local sketches is enabled by spatially multiplexed AirComp~\cite{GXZhuAirComp2019}. Upon its arrival at the server, each aggregated sketch is immediately used to improve the global tensor decomposition, hence the name FlyCom$^2$. Since random sketches serve as independent observations of the tensor, the server can produce an estimate for tensor decomposition in every time slot based on the sketches already received. The FlyCom$^2$ framework addresses the aforementioned open problems in several respects. First, random sketching, which involves only matrix multiplication, has much lower complexity than eigen-decomposition and helps to reduce the complexity of on-device computation. Second, the DTD accuracy depends on the number of successfully received sketches and hence is robust against the loss of sketches in transmission. This endows FlyCom$^2$ with a key property of graceful degradation in the event of link disruption or packet loss. Third, as the principal components of a high-dimensional tensor are usually low-dimensional, the progressive DTD at the server is shown to approach its optimal performance quickly as the number of received aggregated sketches increases, thereby reining in the communication overhead. Last, the parallel streaming communication and computation in FlyCom$^2$ are more efficient than the sequential operations of traditional one-shot algorithms due to the communication-computation separation. In designing the FlyCom$^2$ framework, this work makes the following key contributions. \begin{itemize} \item \emph{On-the-Fly Sub-space Detection:} One key component of the framework is an optimal on-the-fly detector at the server to estimate the tensor's principal eigenspace from the received, noisy (aggregated) sketches.
To design a \emph{maximum likelihood} (ML) detector, a whitening technique is used to pre-process the sketches so as to yield an effective global observation, which is shown to have a covariance matrix sharing the same eigenspace as the tensor. Using this result, the ML estimation problem is formulated as a \emph{subspace alignment problem} and is solved in closed form. It is observed from the solution that the optimal estimate of the desired principal eigenspace approaches its ground truth as the said observation's dimensionality grows (or equivalently, as more sketches are received). \item \emph{DTD Error Analysis:} The end-to-end performance of the FlyCom$^2$-based DTD system is measured by the squared error of the estimated principal eigenspace \emph{with respect to} (w.r.t.) its ground truth. Using perturbation theory and concentration of measure, bounds are derived on both the error and its expectation. These results reveal that the error consists of one residual component contributed by non-principal components and another component caused by random sketching. Moreover, the error is observed to scale inversely with the number of received sketches, validating the earlier claims on the progressive nature of the designed DTD as well as its feature of graceful degradation. This also suggests a controllable trade-off between the decomposition accuracy and communication overhead, which is useful for practical implementation. \item \emph{Threshold Based Sketch Selection:} Removing severely channel-distorted sketches from use in sub-space detection can lead to performance improvement. This motivates the design of a sketch-selection scheme that applies a threshold on a scaling factor in MIMO AirComp that reflects the received \emph{signal-to-noise ratio} (SNR) of an aggregated sketch. We show that such a threshold can be efficiently optimized by an enumeration method with polynomial complexity w.r.t. the population of received sketches. \end{itemize} The remainder of the paper is organized as follows. Section~\ref{section:model} introduces system models and metrics, followed by an overview of the proposed FlyCom$^2$ framework in Section~\ref{section:overview}. Then, Section~\ref{section:design} presents the design of the on-the-fly sub-space estimator and its error analysis. The sketch-selection scheme is proposed in Section~\ref{section:selection}. Numerical results are provided in Section~\ref{section:experiment}, followed by concluding remarks in Section~\ref{section:conclusion}. \section{Models, Operations, and Metrics}\label{section:model} We consider the support of DTD in a MIMO system as illustrated in Fig.~\ref{fig:scenario}. The relevant models, operations and metrics are described as follows. \begin{figure*}[t] \centering \begin{minipage}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{figures/scenario.pdf} \vspace{-8mm} \end{minipage} \caption{On-the-fly communication-and-computing for distributed tensor decomposition.} \label{fig:scenario} \end{figure*} \subsection{Distributed Tensor Decomposition} We consider the distributed implementation of the popular Tucker method of tensor decomposition~\cite{TensorReview}. For ease of notation, the tensor is assumed to have $N$ modes, which generalize the concepts of columns and rows in matrices, with the first $(N-1)$ modes corresponding to data features and mode $N$ indexing data samples.
For instance, in a surveillance system, images captured by multiple cameras are expressed as local tensors with three modes indicating pixels, colors, and data sample indices, respectively. Let the data samples collected by an arbitrary device, say device $k$, be represented as a local tensor $\mathcal{X}_k\in\mathbb{R}^{I_1^{(k)}\times I_2^{(k)}\cdots\times I_N^{(k)}}$, where $I_n^{(k)}$ denotes the dimensionality of mode $n$ of local tensor $k$. For ease of notation, we assume that local tensors have the same dimensions for their feature modes: $I_n^{(k)}=I_n$, $\forall k, 1\leq n\leq N-1$. Next, these local tensors are aggregated from $K$ devices to form a global tensor $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\cdots\times I_N}$ with $I_N = \sum_kI_N^{(k)}$. The Tucker decomposition of $\mathcal{X}$ can be written as~\cite{TensorReview} \begin{equation}\label{eq:Tucker} \mathcal{X} \approx \mathcal{G}\times_1\mathbf{U}_1\times_2\mathbf{U}_2\cdots\times_N\mathbf{U}_N\overset{\triangle}{=}\tilde{\mathcal{X}}, \end{equation} where $\mathcal{G}\in\mathbb{R}^{r_1\times r_2\cdots\times r_N}$ represents a core tensor [that generalizes singular values in \emph{singular value decomposition} (SVD)], $\mathbf{U}_n\in\mathbb{R}^{I_n\times r_n}$ is an orthogonal factor matrix corresponding to the $n$-th mode satisfying $\mathbf{U}_n^{\top}\mathbf{U}_n = \mathbf{I}_{r_n}$ with $r_n$ ($r_n\leq I_n$) representing the number of principal dimensions, and $\times_n$ denotes the mode-$n$ matrix product~\cite{TensorReview}. In the sequel, we pursue these factor matrices $\{\mathbf{U}_n\}$ as they reveal the characteristics of the data tensor at different modes; given $\{\mathbf{U}_n\}$, the computation of $\mathcal{G}$ is straightforward~\cite{Larsson2010}. In centralized computation with full data aggregation, $\{\mathbf{U}_n\}$ can be computed using the \emph{higher-order SVD} approach~\cite{Larsson2010,StreamingTD}. In this approach, the tensor is first flattened along the chosen mode $n$ to yield a matrix $\mathbf{X}^{(n)}\in\mathbb{R}^{I_n\times J_n}$ with $J_n=\prod_{j=1,j\neq n}^NI_j$, termed the \emph{mode-$n$ unfolding}; then the desired factor matrix is computed as $\mathbf{U}_n = [\mathbf{u}_1,\cdots,\mathbf{u}_{r_n}]$, where $\mathbf{u}_i$ is given by the $i$-th principal eigenvector of the mode-$n$ unfolding. Let this operation be represented by $\mathcal{S}_{r_n}(\cdot)$, and hence $\mathbf{U}_n = \mathcal{S}_{r_n}(\mathbf{X}^{(n)}(\mathbf{X}^{(n)})^{\top})$. In contrast with its centralized counterpart, DTD computes the eigenspaces of different unfolding matrices in a distributed manner, avoiding the aggregation of raw data so as to preserve data ownership~\cite{JF2019estimation2019,chen2022analog}. Considering the computation of $\mathbf{U}_n$, DTD goes through the following procedure: 1) local tensors are flattened along the chosen mode $n$ to generate local unfoldings, denoted by $\{\mathbf{X}_k^{(n)}\}$; 2) devices compute low-dimensional components $\{\mathbf{S}_k\}$ from the local unfoldings $\{\mathbf{X}_k^{(n)}\}$ through dimensionality reduction techniques; 3) the server gathers these local components from the devices and aggregates them into a global component, denoted as $\mathbf{S}$, to yield a global estimate of the ground truth $\mathbf{U}_n$. It is worth mentioning that the computation results $\{\mathbf{S}_k\}$ depend on the particular dimensionality reduction technique.
For example, when using \emph{principal component analysis} (PCA)~\cite{JF2019estimation2019,chen2022analog}, $\{\mathbf{S}_k\}$ are computed as the principal eigenspaces of $\{\mathbf{X}_k^{(n)}\}$ at the devices, and the server then averages them to estimate $\mathbf{U}_n$. In this work, the random sketching approach is adopted, as elaborated in Section~\ref{section:overview}. \subsection{MIMO Over-the-Air Computation} FlyCom$^2$ builds on MIMO AirComp to aggregate local results over the air, as described below. First, let $N_\text{r}$ and $N_{\text{t}}$ with $N_{\text{r}} \geq N_{\text{t}}$ denote the numbers of antennas at the edge server and each device, respectively. We assume perfect transmit \emph{channel state information} (CSI) and symbol-level synchronization between devices~\cite{GXZhuAirComp2019}. Time is slotted and the slots are grouped to form durations, with $t$ denoting the duration index. In each time slot, an $N_{\text{t}}\times 1$ vector of complex scalar symbols is transmitted over the $N_{\text{t}}$ antennas. A \emph{matrix-symbol} duration then spans at least $I$ symbol durations to support the transmission of an $N_{\text{t}}\times I$ matrix. In an arbitrary duration, say $t$, all edge devices simultaneously transmit their $I\times M$ real matrices, denoted as $\{\mathbf{S}_{t,k}\}$, each of which is termed a \emph{matrix symbol}. As a result, the server receives an over-the-air aggregated matrix symbol, $\mathbf{Y}_t$, as \begin{equation*} \mathbf{Y}_t = \mathbf{A}_t\sum_{k=1}^K\mathbf{H}_{t,k}\mathbf{B}_{t,k}\mathbf{S}_{t,k}^{\top} + \mathbf{A}_t\mathbf{Z}_t, \end{equation*} where $\mathbf{H}_{t,k}\in \mathbb{C}^{N_{\text{r}}\times N_{\text{t}}}$ denotes the channel matrix corresponding to device $k$, $\mathbf{Z}_t$ models additive Gaussian noise with \emph{independent and identically distributed} (i.i.d.) elements of $\mathcal{CN}(0,\sigma^2)$, and $\mathbf{A}_t\in \mathbb{C}^{L\times N_{\text{r}}}$ and $\mathbf{B}_{t,k}\in \mathbb{C}^{N_{\text{t}}\times M}$ denote the receive and transmit beamforming matrices, respectively. To realize AirComp, we consider \emph{zero forcing} (ZF) transmit beamforming that inverts the individual MIMO channels~\cite{GXZhuAirComp2019}. Mathematically, conditioned on a fixed receive beamformer, the transmit beamforming matrices are given as \begin{equation}\label{eq:example:ZF} \mathbf{B}_{t,k} = \left(\mathbf{A}_{t}\mathbf{H}_{t,k}\right)^{H}\left(\mathbf{A}_{t}\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{H}\mathbf{A}_{t}^{H}\right)^{-1}. \end{equation} The received matrix $\mathbf{Y}_t$ is then rewritten as \begin{equation}\label{eq:WA} \mathbf{Y}_t =\sum_{k=1}^K\mathbf{S}_{t,k}^{\top}+\mathbf{A}_t\mathbf{Z}_t. \end{equation} In the absence of noise, the AirComp in~\eqref{eq:WA} provides a one-shot realization of the desired aggregation operation for DTD. The average transmission power of each device is enforced not to exceed a power budget of $P$ per slot, i.e., \begin{equation}\label{eq:powerconstraint} \mathsf{E}\left[\Vert\mathbf{B}_{t,k}\mathbf{S}_{t,k}^{\top}\Vert_F^2\right] = \mathsf{Tr}\left(\left(\mathbf{A}_{t}\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{H}\mathbf{A}_{t}^{H}\right)^{-1}\mathsf{E}[\mathbf{S}_{t,k}^{\top}\mathbf{S}_{t,k}]\right)\leq IP,\ \forall t,k. \end{equation} The transmit \emph{signal-to-noise ratio} (SNR) is then given by $\gamma = \frac{P}{\sigma^2}$.
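To make the ZF-based AirComp model concrete, the following minimal numerical sketch verifies that, with the beamformer in \eqref{eq:example:ZF}, every device's precoded contribution reduces to its raw matrix symbol, so that the received symbol approximates $\sum_k\mathbf{S}_{t,k}^{\top}$ plus beamformed noise; the dimensions, the randomly drawn receive beamformer, and the noise level are illustrative assumptions, and the power constraint in \eqref{eq:powerconstraint} is not enforced here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, Nt, Nr, I, sigma = 4, 2, 6, 8, 0.05
M = Nt                       # sketch width fixed to the number of transmit antennas
L = M                        # rows of the receive beamformer A_t
A = rng.standard_normal((L, Nr)) + 1j * rng.standard_normal((L, Nr))  # receive beamformer
S = [rng.standard_normal((I, M)) for _ in range(K)]                   # matrix symbols S_{t,k}

Y = np.zeros((L, I), dtype=complex)
for k in range(K):
    H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))  # channel H_{t,k}
    AH = A @ H
    B = AH.conj().T @ np.linalg.inv(AH @ AH.conj().T)  # ZF precoder: A_t H_{t,k} B_{t,k} = I
    Y += A @ H @ B @ S[k].T
Z = (rng.standard_normal((Nr, I)) + 1j * rng.standard_normal((Nr, I))) * sigma / np.sqrt(2)
Y += A @ Z                                              # received matrix symbol
print(np.max(np.abs(Y - sum(S).T)))                     # small residual caused only by noise
\end{verbatim}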
\subsection{Error Metric} Given the MIMO AirComp described earlier, a noisy version of $\mathbf{U}_n$, denoted by $\tilde{\mathbf{U}}_n$, will be computed progressively from a set of received matrices $\{\mathbf{Y}_t\}$ (see Section~\ref{section:overview}). The estimate $\tilde{\mathbf{U}}_n$ deviates from the ground truth due to both distributed computation and channel noise. The resultant error serves as a performance metric of DTD in the wireless system. Mathematically, given $\tilde{\mathcal{X}}$ as the tensor derived from $\{\tilde{\mathbf{U}}_n\}$, the error is measured as $\Vert\mathcal{X}-\tilde{\mathcal{X}}\Vert_F^2$, which can be bounded as $ \Vert\mathcal{X}-\tilde{\mathcal{X}}\Vert_F^2 \leq \sum_{n=1}^N\Vert (\mathbf{I}_{I_n} - \tilde{\mathbf{U}}_n\tilde{\mathbf{U}}_n^{\top})\mathbf{X}^{(n)}\Vert_F^2$~\cite{StreamingTD}. As this result allows tractability, we define the DTD error as \begin{equation}\label{eq:error} d\left(\tilde{\mathbf{U}}_n,\mathbf{X}^{(n)}\right) = \Vert (\mathbf{I}_{I_n} - \tilde{\mathbf{U}}_n\tilde{\mathbf{U}}_n^{\top})\mathbf{X}^{(n)}\Vert_F^2. \end{equation} \section{Overview of On-the-Fly Communication-and-Computing}\label{section:overview} To support DTD over edge devices with limited computation power, we propose the FlyCom$^2$ framework as shown in Fig.~\ref{fig:scenario}. To elaborate on it, we first briefly introduce the random-sketching approach exploited in FlyCom$^2$ and then explain how to use FlyCom$^2$ to support DTD. \subsection{Data Dimensionality Reduction via Random Sketching} Recall that DTD requires data dimensionality reduction on devices prior to transmission. For high-dimensional tensors, the traditional PCA technique becomes too complex for resource-constrained devices. To address this issue, we adopt a technique for random dimensionality reduction, known as \emph{random sketching}, which is simpler than PCA as it relies only on matrix multiplication and also requires a smaller number of passes over the dataset~\cite{randomprojection2011}. Specifically, given an $I\times J$ data matrix $\mathbf{X}$, random sketching uses a $J\times M$ random matrix, termed the \emph{dimension reduction mapping} (DRM) and denoted by $\mathbf{\Omega}$, to map $\mathbf{X}$ to an $I\times M$ sketch matrix $\mathbf{S}$ with $J\gg M$: $\mathbf{S} = \mathbf{X}\mathbf{\Omega}$. The mapping $\mathbf{\Omega}$ can be composed of i.i.d. Gaussian elements and projects the high-dimensional $\mathbf{X}$ onto random directions in a space of low dimensionality. Despite the random projection, the mutual vector distances between the rows of $\mathbf{X}$ can be approximately preserved, such that the principal (column) eigenspace of the sketch, $\mathbf{S}$, constitutes a good approximation of that of $\mathbf{X}$. The approximation accuracy grows as $M$ increases and becomes perfect when $M$ is equal to $J$~\cite{randomprojection2011}. Importantly, to estimate an $r$-dimensional principal eigenspace, random sketching has a complexity of $\mathcal{O}(IJM)$ and requires only a single pass over the data, as opposed to the complexity of $\mathcal{O}(\min\{I,J\}^2\times \max\{I,J\})$ and the $\mathcal{O}(r)$ memory passes of PCA~\cite{SVDComplexity2019}. \subsection{FlyCom$^2$-Based DTD} Based on the preceding random-sketching technique, we propose the FlyCom$^2$ framework that decomposes the high-dimensional DTD into on-the-fly processing and transmission of streams of low-dimensional random sketches.
Thereby, we not only overcome devices' resource constraints but also achieve the graceful reduction of DTD error as the communication time increases. Without loss of generality, we focus on the computation of the principal eigenspace $\mathbf{U}_n$ for an arbitrary data-feature mode $n$ with $n\in[1,\cdots, N-1]$. To simplify notation, the superscript $(n)$ and subscript $n$ are removed. The detailed operations of FlyCom$^2$ are described as follows. \subsubsection{On-the-Fly Computation at Devices} Each device streams a sequence of low-dimensional local sketches to the server by generating and transmitting them one by one in data packets. First, the progressive computation of local sketches at devices is introduced as follows. Let each local tensor, say $\mathcal{X}_k$ at device $k$, be flattened along the desired mode to generate the unfolding matrix $\mathbf{X}_k$. Then, in the (matrix-symbol) slot $t$, each device $k$ draws i.i.d. $\mathcal{N}(0,1)$ entries to form a $J\times M$ DRM, denoted by $\mathbf{\Omega}_{t,k}$, or retrieves it efficiently from a memory~\cite{TRP_Tropp2018}. Then an $M$-dimensional local sketch for $\mathbf{X}_{k}$ can be computed as $\mathbf{S}_{t,k} = \mathbf{X}_{k} \mathbf{\Omega}_{t,k}$, which is then uploaded to the server immediately before computing the next sketch $\mathbf{S}_{t+1,k}$. This allows the efficient communication-and-computation parallelization as shown in Fig.~\ref{fig:parallel}. \begin{figure*}[t] \centering \begin{minipage}[b]{0.65\textwidth} \centering \includegraphics[width=\textwidth]{figures/parallel.pdf} \vspace{-8mm} \end{minipage} \caption{Parallelization between communication and computation.} \label{fig:parallel} \end{figure*} \subsubsection{On-the-Fly Global Random Sketching} MIMO AirComp is used for low-latency aggregation of the local sketches simultaneously streamed by devices. Local temporal sketches are progressively aggregated at the server by linearly modulating them as MIMO AirComp symbols. For ease of notation, we consider the dimension of local sketches to be fixed as $M = N_{\text{t}}$ but this assumption can be easily relaxed similarly as in~\cite{chen2022analog}. Consider the uploading of the $t$-th local sketches. It follows from~\eqref{eq:WA} that the matrix symbol received at the server can be written as \begin{equation}\label{eq:receivedsymbol} \mathbf{Y}_t^{\top} = \sum_k\mathbf{X}_k\mathbf{\Omega}_{t,k} + \mathbf{Z}_t^{\top}\mathbf{A}_t^{\top}. \end{equation} To explain how to use $\mathbf{Y}_t$ in estimating the principal eigenspace of the global unfolding matrix $\mathbf{X}$, we first consider the case without channel noise, in which $\mathbf{Y}_t^{\top} = \sum_k\mathbf{X}_k\mathbf{\Omega}_{t,k}$. Since the global tensor $\mathcal{X}$ is given by assembling local tensors along mode $N$, the corresponding global unfolding matrix, denoted by $\mathbf{X}$, is related to the local unfoldings $\{\mathbf{X}_k\}$ as \begin{equation}\label{eq:distributed_samples} \mathbf{X}=[\mathbf{X}_1,\mathbf{X}_2,\cdots,\mathbf{X}_K]. \end{equation} It follows that \begin{align*} \mathbf{Y}_t^{\top}&= [\mathbf{X}_1,\cdots,\mathbf{X}_K][\mathbf{\Omega}_{t,1}^{\top},\cdots,\mathbf{\Omega}_{t,K}^{\top}]^{\top}\\ & \overset{\triangle}{=}\mathbf{X}\mathbf{F}_t, \end{align*} where we define $\mathbf{F}_{t} = [\mathbf{\Omega}_{t,1}^{\top},\cdots,\mathbf{\Omega}_{t,K}^{\top}]^{\top}$. As $\{\mathbf{\Omega}_{t,k}\}$ are mutually independent, $\mathbf{F}_{t}$ has i.i.d. 
$\mathcal{N}(0,1)$ elements and can be used as an $M$-dimensional DRM for randomly sketching $\mathbf{X}$. Therefore, in the absence of channel noise, $\mathbf{Y}_t$ gives an $M$-dimensional global sketch for $\mathbf{X}$. The dimension of the global sketch grows, thereby improving the DTD accuracy, as more aggregated local sketches are received (or, equivalently, as $t$ progresses), hence the name of on-the-fly global sketching. \subsubsection{On-the-Fly Sub-space Detection at the Server} In the case with channel noise, the server can produce an estimate of the desired principal eigenspace, $\mathbf{U}$, based on the noisy observations accumulated up to the current symbol slot. Specifically, in slot $t$, given the current and past received matrix symbols, $\{\mathbf{Y}_{\ell}\}_{\ell\leq t}$, and the receive beamformers $\{\mathbf{A}_{\ell}\}_{\ell\leq t}$ (discussed in the sequel), the server estimates $\mathbf{U}$ as \begin{equation} \tilde{\mathbf{U}} = f(\{\mathbf{Y}_{\ell}\}_{\ell\leq t},\{\mathbf{A}_{\ell}\}_{\ell\leq t}), \end{equation} where the estimator $f(\cdot)$ is optimized in the sequel to minimize the DTD error in~\eqref{eq:error}. Following the above discussion, the procedure for FlyCom$^2$-based DTD is summarized as follows. \begin{equation*} \boxed{ \begin{array}{l} \mathrm{To\ compute\ the\ principal\ eigenspace\ of\ the\ global\ unfolding\ matrix\ }\mathbf{X}\mathrm{,\ initialize\ }t=1,\ \\ \mathrm{and\ FlyCom}^2\mathrm{-based\ DTD\ repeats:}\\ \quad \mathrm{Step\ 1}: \mathrm{Each\ device,\ say\ device\ }k, \mathrm{\ computes\ a\ local\ sketch\ using\ } \mathbf{S}_{t,k} = \mathbf{X}_{k} \mathbf{\Omega}_{t,k}; \\ \quad \mathrm{Step\ 2}: \mathrm{The\ server\ receives\ } \mathbf{Y}_t^{\top} = \sum_k\mathbf{X}_k\mathbf{\Omega}_{t,k} + \mathbf{Z}_t^{\top}\mathbf{A}_t^{\top}\mathrm{\ via\ MIMO\ AirComp};\\ \quad \mathrm{Step\ 3}: \mathrm{The\ server\ computes\ an\ estimate\ of\ the\ eigenspace\ of\ }\mathbf{X}: \\ \quad\quad\quad\quad\ \ \tilde{\mathbf{U}} = f(\{\mathbf{Y}_{\ell}\}_{\ell\leq t},\{\mathbf{A}_{\ell}\}_{\ell\leq t});\\ \quad \mathrm{Step\ 4}: \mathrm{Set\ }t=t+1.\\ \mathrm{Until\ }t=T. \end{array} } \end{equation*} The key component of the FlyCom$^2$ framework, the on-the-fly sub-space estimator $f(\cdot)$, is designed in Section~\ref{section:design}. The performance of FlyCom$^2$-based DTD is enhanced using a sketch-selection algorithm designed in Section~\ref{section:selection}. \section{Optimal Sub-space Detection for FlyCom$^2$}\label{section:design} In this section, we design the sub-space detection function of the FlyCom$^2$ framework, namely the function $f(\cdot)$ mentioned in the preceding section. It consists of two stages -- pre-processing of received symbols and the subsequent sub-space estimation, which are summarized in Algorithm~\ref{algo:detection} and designed in the following sub-sections. Furthermore, the resultant DTD error is analyzed. \subsection{Pre-Processing of Received Matrix Symbols} The function of pre-processing is to accumulate received matrix symbols from slot $1$ to the current slot, $t$, and generate from them an effective matrix for the ensuing sub-space detection. This operation is instrumental in allowing the on-the-fly detection to attain a progressive performance improvement. The design of the pre-processing takes several steps.
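Before detailing these steps, the boxed FlyCom$^2$ procedure above can be made concrete with a minimal end-to-end simulation. The following hypothetical NumPy sketch assumes a noiseless channel, so the estimator reduces to the principal eigenvectors of the accumulated global sketches; the pre-processing and ML detection designed in this section handle the noisy case.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, I, Jk, M, r, T = 20, 100, 75, 4, 12, 50   # illustrative sizes

# Local unfolding matrices X_k (I x Jk); global X = [X_1, ..., X_K]
X_locals = [rng.standard_normal((I, Jk)) for _ in range(K)]
X = np.hstack(X_locals)

sketches = []                                # accumulated global sketches
for t in range(T):
    # Step 1: each device draws a fresh DRM and computes a local sketch
    local = [Xk @ rng.standard_normal((Jk, M)) for Xk in X_locals]
    # Step 2: (noiseless) AirComp aggregation: Y_t = sum_k X_k Omega_{t,k}
    Y_t = sum(local)
    # Step 3: accumulate and estimate the r principal eigenvectors
    sketches.append(Y_t)
    Y_hat = np.hstack(sketches)              # I x tM accumulated observation
    if (t + 1) * M >= r:                     # need tM >= r accumulated dimensions
        U_tilde = np.linalg.svd(Y_hat, full_matrices=False)[0][:, :r]
        err = np.linalg.norm(X - U_tilde @ (U_tilde.T @ X), 'fro') ** 2
    # Step 4: move to the next slot (the loop index)
\end{verbatim}
The error computed in Step 3 decreases as more sketches are accumulated, which previews the progressive behavior analyzed later in this section.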
First, since the transmitted symbol $\mathbf{X}\mathbf{F}_{t}$ is real but the channel noise is complex, the real part of the received symbols, namely $\mathbf{Y}_t$ in~\eqref{eq:receivedsymbol}, gives an effective observation of the transmitted symbol\footnote{It is possible to transmit the coefficients of $\mathbf{X}\mathbf{F}_t$ over both the in-phase and quadrature channels, which halves air latency. The extension is straightforward (see, e.g.~\cite[Section II]{chen2022analog}) but complicates the notation without providing new insights. Hence, only the in-phase channel is used in this work.}. Let $\tilde{\mathbf{Y}}_t$ denote the effective observation in slot $t$ and $\tilde{\mathbf{Z}}_t$ the real part of $\mathbf{A}_t\mathbf{Z}_t$. It follows that \begin{equation}\label{eq:observations} \tilde{\mathbf{Y}}_t = \Re\{{\mathbf{Y}_t^{\top}}\} = \mathbf{X}\mathbf{F}_{t}+\tilde{\mathbf{Z}}_t^{\top}. \end{equation} Second, the relation between the eigenspace of $\mathbf{X}$ and the accumulated observations up to the current slot is derived as follows. To this end, let the SVD of $\mathbf{X}$ be expressed as \begin{equation}\label{eq:original_decomposition} \mathbf{X} =\mathbf{U}_{\mathbf{X}}\mathbf{\Sigma}_{\mathbf{X}}\mathbf{V}_{\mathbf{X}}^{\top}, \end{equation} where $\mathbf{\Sigma}_{\mathbf{X}}$ comprises descending singular values along its diagonal. Then, the accumulation of the current and past observations, denoted by $\hat{\mathbf{Y}}_t = [\tilde{\mathbf{Y}}_1,\tilde{\mathbf{Y}}_2,\cdots,\tilde{\mathbf{Y}}_t]$, is a random Gaussian matrix as shown below. \begin{Lemma}\label{Lemma:GaussianMatrix} \emph{The accumulated aggregations, $\hat{\mathbf{Y}}_t$, can be decomposed as \begin{equation*} \hat{\mathbf{Y}}_t = \mathbf{C}^{\frac{1}{2}}\mathbf{W}\mathbf{D}^{\frac{1}{2}}, \end{equation*} where $\mathbf{W}$ is a random Gaussian matrix with i.i.d. $\mathcal{N}(0,1)$ entries, the left covariance matrix $\mathbf{C} = \mathbf{X}\mathbf{X}^{\top} + \frac{1}{2tM}\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})\mathbf{I}_I$, and the right one $\mathbf{D} = \frac{\mathsf{Tr}(\mathbf{X}^{\top}\mathbf{X})\mathbf{I}_{tM} + \frac{1}{2}I\sigma^2\mathsf{diag}(\mathbf{A}_1\mathbf{A}_1^{H},\cdots,\mathbf{A}_t\mathbf{A}_t^{H})}{\mathsf{Tr}(\mathbf{X}^{\top}\mathbf{X}) + \frac{1}{2tM}I\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})}$.} \end{Lemma} \begin{proof} See Appendix~\ref{Apdx:representation} \end{proof} Third, based on~\eqref{eq:original_decomposition}, the covariance matrix, $\mathbf{C}$, in Lemma~\ref{Lemma:GaussianMatrix} can be rewritten as \begin{align} \mathbf{C} &= \mathbf{U}_{\mathbf{X}}\left(\mathbf{\Sigma}_{\mathbf{X}}^2 + \frac{1}{2tM}\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})\mathbf{I}_I\right)\mathbf{U}_{\mathbf{X}}^{\top},\nonumber\\ & \overset{\triangle}{=}\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}, \end{align} where we define $\mathbf{\Lambda} = \mathbf{\Sigma}_{\mathbf{X}}^2 + \frac{1}{2tM}\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})\mathbf{I}_I$. Hence, the square root, $\mathbf{C}^{\frac{1}{2}}$, is given as \begin{equation}\label{eq:effectivecovariance} \mathbf{C}^{\frac{1}{2}} = \mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}. 
\end{equation} \begin{Remark}[Effective Sketching with Channel Noise]\label{Remark:analogtransmission} \emph{According to Lemma~\ref{Lemma:GaussianMatrix} and~\eqref{eq:effectivecovariance}, the accumulated observations, $\hat{\mathbf{Y}}_t$, gives a sketch of the matrix $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}$ using a Gaussian DRM with the covariance of $\mathbf{D}$. The matrix $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}$ and the unfolding matrix $\mathbf{X}$ share the eigenspace, $\mathbf{U}_{\mathbf{X}}$. Furthermore, as $\mathbf{\Lambda}$ retains the descending sort of singular values, the top-$r$ principal eigenspace of $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}$ is identical to that of $\mathbf{X}$ for any $1\leq r\leq M$.} \end{Remark} Finally, according to the preceding discussion, the desired principal eigenspace of $\mathbf{X}$ can be estimated from the sketch $\hat{\mathbf{Y}}_t$. It is known that randomized sketching prefers DRMs with i.i.d. entries~\cite{randomprojection2011}. To improve the performance, $\hat{\mathbf{Y}}_t$ can be further ``whitened" to equalize the right covariance $\mathbf{D}$. Specifically, let $\hat{\mathbf{Y}}_t$ be right-multiplied by $\mathbf{D}^{-\frac{1}{2}}$ to yield the final \emph{effective observation} in time slot $t$ as \begin{equation}\label{eq:whitening} \boxed{\mathbf{\Phi}_t = \hat{\mathbf{Y}}_t\mathbf{D}^{-\frac{1}{2}} = \mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{W}.} \end{equation} To compute the covariance matrix $\mathbf{D}$, the server needs to acquire the value of $\mathsf{Tr}(\mathbf{X}\mathbf{X}^{\top}) = \sum_{k}\mathsf{Tr}(\mathbf{X}_k\mathbf{X}_k^{\top})$. Note that each term in the summation, say $\mathsf{Tr}(\mathbf{X}_k\mathbf{X}_k^{\top})$, relates to the covariance of transmitted symbols, $\mathsf{E}[\mathbf{S}_{t,k}^{\top}\mathbf{S}_{t,k}]$, as \begin{equation*} \mathsf{E}[\mathbf{S}_{t,k}^{\top}\mathbf{S}_{t,k}] = \mathsf{E}[\mathbf{\Omega}_{t,k}^{\top}\mathbf{X}_k^{\top}\mathbf{X}_k\mathbf{\Omega}_{t,k}] = \mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)\mathbf{I}_{M}. \end{equation*} Then, $\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)$ can be acquired at the server by one-time feedback. \subsection{Optimal Sub-space Estimation} In this sub-section, the principal eigenspace of the unfolding matrix $\mathbf{X}$ with dimensions fixed as $r$, is estimated from the effective observation given in~\eqref{eq:whitening} under the ML criterion. First, using~\eqref{eq:whitening}, the distribution of the observation $\mathbf{\Phi}_t$ conditioned on $\mathbf{U}$ and $\mathbf{\Lambda}$ is given as \begin{equation*} \mathsf{Pr}\left(\mathbf{\Phi}_t|\mathbf{U}_{\mathbf{X}},\mathbf{\Lambda}\right) = \frac{\exp\left(-\frac{tM}{2}\mathsf{Tr}\left(\mathbf{\Phi}_t^{\top}\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{-1}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\right)\right)}{(2\pi)^{ItM/2}\mathsf{det}(\mathbf{\Lambda})^{tM/2}}. \end{equation*} This yields the logarithmic likelihood function required for ML estimation as \begin{align}\label{eq:likelihood} \mathcal{L}\left(\mathbf{U}_{\mathbf{X}};\mathbf{\Phi}_t,\mathbf{\Lambda}\right) &= \ln\left(\mathsf{Pr}\left(\mathbf{\Phi}_t|\mathbf{U}_{\mathbf{X}},\mathbf{\Lambda}\right)\right),\nonumber\\ & = -\frac{tM}{2}\mathsf{Tr}\left(\mathbf{\Phi}_t^{\top}\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{-1}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\right) -\frac{ItM}{2}\ln(2\pi) - \frac{tM}{2}\ln(\mathsf{det}(\mathbf{\Lambda})). 
\end{align} Let $\mathbf{U}$ denote the desired $r$-dimensional principal components of $\mathbf{X}$, as obtained from splitting $\mathbf{U}_{\mathbf{X}} = [\mathbf{U},\mathbf{U}^{\bot}]$. It is observed from~\eqref{eq:likelihood} that only the first term depends on the variable $\mathbf{U}$. Then, letting $(\tilde{\mathbf{U}},\tilde{\mathbf{U}}_{\mathbf{X}})$ denote an estimate of $(\mathbf{U},\mathbf{U}_{\mathbf{X}})$, the ML-estimation problem can be formulated as \begin{equation}\label{problem:MLestimation} \begin{aligned} \mathop{\min}_{\tilde{\mathbf{U}}}&\ \ \mathsf{Tr}\left(\mathbf{\Phi}_t^{\top}\tilde{\mathbf{U}}_{\mathbf{X}}\mathbf{\Lambda}^{-1}\tilde{\mathbf{U}}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\right)\\ \mathrm{s.t.}&\ \ \tilde{\mathbf{U}}_{\mathbf{X}}^{\top}\tilde{\mathbf{U}}_{\mathbf{X}}=\tilde{\mathbf{U}}_{\mathbf{X}}\tilde{\mathbf{U}}_{\mathbf{X}}^{\top} = \mathbf{I},\\ &\ \ \tilde{\mathbf{U}}_{\mathbf{X}}=[\tilde{\mathbf{U}},\tilde{\mathbf{U}}^{\bot}]. \end{aligned} \end{equation} Despite the non-convex orthogonality constraints, the problem in~\eqref{problem:MLestimation} admits an optimal closed-form solution, derived as follows. First, define the eigenvalue decomposition $\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top} = \mathbf{Q}\mathbf{\Gamma}\mathbf{Q}^{\top}$ with $\mathbf{Q}=[\mathbf{q}_1,\cdots,\mathbf{q}_I]$ and $\mathbf{\Gamma}=\mathsf{diag}(\gamma_1,\cdots,\gamma_I)$ with the eigenvalues arranged in descending order. Then, given $\mathbf{\Lambda}=\mathsf{diag}(\lambda_1,\cdots,\lambda_I)$ and $\mathbf{U}_{\mathbf{X}}=[\mathbf{u}_1,\cdots,\mathbf{u}_I]$, the objective function of~\eqref{problem:MLestimation} can be rewritten as \begin{equation*} \mathsf{Tr}\left(\mathbf{\Lambda}^{-1}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}\mathbf{U}_{\mathbf{X}}\right) = \sum_{i=1}^I\sum_{j=1}^I\lambda_j^{-1}\gamma_i(\mathbf{q}_i^{\top}\mathbf{u}_j)^2. \end{equation*} Next, define $x_{ij} = \mathbf{q}_i^{\top}\mathbf{u}_j$, and the constraints in~\eqref{problem:MLestimation} can be rewritten as $\sum_{i=1}^Ix_{ij}^2 = \mathbf{u}_j^{\top}\mathbf{Q}\mathbf{Q}^{\top}\mathbf{u}_j=1$ and $\sum_{j=1}^Ix_{ij}^2 = \mathbf{q}_i^{\top}\mathbf{U}_{\mathbf{X}}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{q}_i=1$. Without loss of optimality, these constraints can be relaxed to $\sum_{i=1}^Ix_{ij}^2\geq 1$ and $\sum_{j=1}^Ix_{ij}^2\geq 1$. This allows the problem in~\eqref{problem:MLestimation} to be reformulated as \begin{equation}\label{problem:new} \begin{aligned} \mathop{\min}_{\{x_{ij}\}}&\ \ \sum_{i=1}^I\sum_{j=1}^I\lambda_j^{-1}\gamma_ix_{ij}^2\\ \mathrm{s.t.}&\ \ \sum_{i=1}^Ix_{ij}^2\geq 1,\ \forall j,\\ &\ \ \sum_{j=1}^Ix_{ij}^2\geq 1,\ \forall i. \end{aligned} \end{equation} Since $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_I$, the objective of~\eqref{problem:new} subject to the constraints is lower bounded as \begin{equation} \sum_{i=1}^I\sum_{j=1}^I\lambda_j^{-1}\gamma_ix_{ij}^2\geq \sum_{i=1}^I\lambda_i^{-1}\gamma_i. \end{equation} The lower bound can be achieved by letting $x_{ii} = 1$, $\forall i$ and $x_{ij} = 0$, $\forall i\neq j$. The optimal solution for~\eqref{problem:new} follows as shown below.
\begin{Proposition} \emph{Based on the ML criterion, in slot $t$, the optimal on-the-fly estimate of the $r$-dimensional principal components of the unfolding matrix, $\mathbf{X}$, is denoted as $\tilde{\mathbf{U}}^{\star}$ and given as \begin{equation}\label{eq:MLestimate} \tilde{\mathbf{U}}^{\star} = [\mathbf{q}_1,\cdots,\mathbf{q}_r] = \mathcal{S}_r\left(\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}\right), \end{equation} where $\mathbf{\Phi}_t$ is the effective observation in slot $t$ as given in~\eqref{eq:whitening} and we recall $\mathcal{S}_r(\cdot)$ to yield the $r$-dimensional principal eigenspace of its argument.} \end{Proposition} \begin{Remark}[Minimum Number of FlyCom$^2$ Operations] \emph{For the result in~\eqref{eq:MLestimate} to hold, the dimensions of the current effective observations $\mathbf{\Phi}_t$ should be larger than those of $\mathbf{U}$, i.e. $tM\geq r$. This implies that the FlyCom$^2$ should run at least $t\geq r/M$ rounds to enable the estimation of an $r$-dimensional principal eigenspace of the tensor.} \end{Remark} \begin{algorithm}[t] \caption{On-the-Fly Sub-space Detection for FlyCom$^2$ Based DTD} \label{algo:detection} \textbf{Initialize:} Received in-phase matrix symbols $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ in slot $t$\; \textbf{Perform:}\\ \begin{enumerate} \item[1:] \emph{Aggregation:} Aggregate all received matrix symbol $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ into $\hat{\mathbf{Y}}_t = [\tilde{\mathbf{Y}}_1,\cdots,\tilde{\mathbf{Y}}_t]$; \item[2:] \emph{Whitening:} Compute the whitened version, $\mathbf{\Phi}_t$, of the aggregated matrix $\hat{\mathbf{Y}}_t$ by~\eqref{eq:whitening}; \item[3:] \emph{Sub-space extraction:} Compute the first $r$ eigenvectors of $\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}$ and aggregate them into $\tilde{\mathbf{U}}$. \end{enumerate} \textbf{Output:} $\tilde{\mathbf{U}}$ used as the principal eigenspace of the unfolding matrix $\mathbf{X}$. \end{algorithm} \subsection{DTD Error Analysis} Based on the optimal sub-space detection designed in the preceding sub-section, we mathematically quantify the key feature of FlyCom$^2$ that the DTD error gracefully decreases with communication. The existing error analysis for random sketching does not target distributed implementation and hence requires no communication links~\cite{randomprojection2011,StreamingTD}. Then, the new challenge for the current analysis arises from the need to account for distortion increased by the air interface based on MIMO AirComp. By tackling the challenge, we derive deterministic and probabilistic bounds on the DTD error defined in~\eqref{eq:error}. \subsubsection{Deterministic Error Bound} As the unfolding matrix comprises $r$ principal components, its singular values can be represented as $\mathbf{\Sigma}_{\mathbf{X}} = \mathsf{diag}(\sigma_1,\sigma_2,\cdots,\sigma_I)$ with $\sigma_1=\cdots=\sigma_r\gg\sigma_{r+1}\geq\cdots\geq\sigma_I$, where we assume the same principal singular values following the literature (see, e.g.~\cite{StreamingTD}). \begin{Lemma}\label{Lemma:deviation} \emph{Consider the DTD of the unfolding matrix $\mathbf{X}$ in tensor decomposition that has an $r$-dimensional principal eigenspace $\mathbf{U}= [\mathbf{u}_1,\cdots,\mathbf{u}_r]$ and the singular values $\mathbf{\Sigma}_{\mathbf{X}}$. 
The estimation of $\mathbf{U}$ as in~\eqref{eq:MLestimate} yields the DTD error given as \begin{equation}\label{eq:rewrite_error} d(\tilde{\mathbf{U}},\mathbf{X})=\sum_{i=1}^r\sum_{j\geq r+1}(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2 + \sum_{i\geq r+1}\sigma_i^2. \end{equation} } \end{Lemma} \begin{proof} See Appendix~\ref{Apdx:deviation}. \end{proof} At the right side of the equation, the first term, $\sum_{i=1}^r\sum_{j\geq r+1}(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2$, represents the error due to random sketching; the second term $\sum_{i\geq r+1}\sigma_i^2$ represents the residual error due to non-zero non-principal components of $\mathbf{X}$. Next, we make an attempt to characterize the behavior of each error term, $(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2$. Let $\tilde{\mathbf{u}}_i$ and $\mathbf{u}_j$ denote the $i$-th and $j$-th ($i\leq r<j$) eigenvectors of the sample covariance matrix $\frac{1}{tM}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}$ and the covariance matrix $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}$, respectively. The error in~\eqref{eq:rewrite_error} is caused by the perturbation $\mathbf{\Delta}=\frac{1}{tM}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}-\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}$. Using the fact allows us to obtain the following desired result. \begin{Lemma}\label{Lemma:UPofEachTerm} \emph{Consider a fixed realization $\mathbf{W}$ in the DRM, $\mathbf{\Phi}_t$, in~\eqref{eq:whitening} and the error term, $(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2$, in Lemma~\ref{Lemma:deviation} is upper bounded as \begin{equation*} (\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2\leq \max \left\{4,\delta_{ij}^2\right\}\frac{\Vert\mathbf{\Delta}\mathbf{u}_j\Vert_2^2}{\sigma_i^2-\sigma_j^2},\quad i\leq r<j, \end{equation*} where $\delta_{ij} \overset{\triangle}{=} \frac{\min\{2|\tilde{\lambda}_i-\lambda_i|,(\sigma_i^2-\sigma_j^2)\}}{|\tilde{\lambda}_i-\lambda_j|}$ with $\lambda_i$ and $\tilde{\lambda}_i$ being the $i$-th eigenvalues of $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}$ and $\frac{1}{tM}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}$, respectively. } \end{Lemma} \begin{proof} See Appendix~\ref{Apdx:deterministicerror}. \end{proof} The upper bound in Lemma~\ref{Lemma:UPofEachTerm} suggests two scaling regions of the DTD error, namely $\delta_{ij} \geq 2$ and $\delta_{ij}<2$. Invoking the well-known Weyl's theorem (see, e.g.~\cite{WeylsTheorem}), the norm of the perturbation $\mathbf{\Delta}$ and hence the value of $\delta_{ij}$ reduce as FlyCom$^2$ progresses in time. This is aligned with the result in Fig.~\ref{subfig:eigenvalues} where the average value of $\{\delta_{ij}\}$ is observed to decrease with increasing communication time $t$. To simplify the analysis, we focus on the case of $\delta_{ij}\leq 2$, $\forall i\leq r<j$, by assuming sufficiently large $t$. In this case, the upper bound in Lemma~\ref{Lemma:UPofEachTerm} is simplified as \begin{equation}\label{eq:simplifiedUP} (\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2\leq \frac{4\Vert\mathbf{\Delta}\mathbf{u}_j\Vert_2^2}{\sigma_i^2-\sigma_j^2},\quad i\leq r<j. \end{equation} Next, based on Lemma~\ref{Lemma:deviation} and~\eqref{eq:simplifiedUP}, the desired deterministic error bound is derived as follows. 
\begin{Theorem}[Expected Error Bound]\label{Theorem:expectation} \emph{Given the receive beamformers $\{\mathbf{A}_{\ell}\}_{\ell\leq t}$ of FlyCom$^2$-based DTD and $\delta_{ij}\leq 2$, $\forall i\leq r<j$, the expected error can be bounded as \begin{equation*} \mathsf{E}[d(\tilde{\mathbf{U}},\mathbf{X})]\leq \frac{4}{tM}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2} + \sum_{i\geq r+1}\sigma_i^2, \end{equation*} where $\delta_{ij}$ and $\lambda_j$ follow those in Lemma~\ref{Lemma:UPofEachTerm}.} \end{Theorem} \begin{proof} See Appendix~\ref{Apdx:expectation}. \end{proof} \begin{figure*}[t] \centering \subfigure[Average value of $\{\delta_{ij}\}$ versus communication time]{\label{subfig:eigenvalues}\includegraphics[width=0.48\textwidth]{figures/eigenvalues-eps-converted-to.pdf}} \subfigure[Expected DTD error versus communication time]{\label{subfig:expectederror}\includegraphics[width=0.48\textwidth]{figures/validation-eps-converted-to.pdf}} \caption{Validation of theoretical results under the settings of $r = 12$, $I=100$, $\mathbf{\Sigma}_{\mathbf{X}} = \mathsf{diag}(1,\cdots,1,\frac{1}{2},\frac{1}{3},\cdots,\frac{1}{88})$, and $\mathbf{A}_t\mathbf{A}_t^{\top} =\frac{1}{10\sigma}\mathbf{I}$.} \label{fig:validation} \end{figure*} The error bound in Theorem~\ref{Theorem:expectation} is compared numerically with the exact error and that of centralized tensor decomposition in Fig.~\ref{subfig:expectederror}. One can observe that the bound captures the trend of decreasing DTD error as $t$ progresses. In particular, it shows that under a small perturbation, \begin{equation*} \mathsf{E}[d(\tilde{\mathbf{U}},\mathbf{X})]\propto \frac{1}{tM}. \end{equation*} \subsubsection{Probabilistic Error Bound} We derive in the sequel a probabilistic bound on the DTD error using the method of \emph{concentration of measure}. A relevant useful result is given below. \begin{Lemma}[McDiarmid's Inequality~\cite{McDiarmid}]\label{Lemma:McDiarmid} \emph{Let $g$ be a positive function on independent variables $\{W_m\}$ satisfying the bounded difference property: \begin{equation*} \sup_{\{W_{M}\}_{M\neq m},W_m,W_M}|g(\{W_{M}\}_{M\neq m},W_m) - g(\{W_{M}\}_{M\neq m},W_M)|\leq c_m,\ \forall m, \end{equation*} with constants $\{c_m\}$ and with $W_M$ being an i.i.d. copy of $W_m$. Then, for any $\epsilon>0$, \begin{equation*} \mathsf{Pr}\left[g(\{W_m\})-\mathsf{E}[g(\{W_m\})]\geq\epsilon\right]\leq \exp\left(-\frac{2\epsilon^2}{\sum_mc_m^2}\right). \end{equation*}} \end{Lemma} Using Lemma~\ref{Lemma:McDiarmid}, the desired result is obtained as shown below. \begin{Theorem}[Probabilistic Error Bound]\label{Theorem:probability} \emph{Given receive beamformers $\{\mathbf{A}_{\ell}\}_{\ell\leq t}$ and $\delta_{ij}\leq 2$, $\forall i\leq r<j$, for any $\epsilon\geq 0$, the error of the FlyCom$^2$-based DTD can be upper bounded as \begin{equation*} d(\tilde{\mathbf{U}},\mathbf{X})\leq \frac{4(1 + \epsilon)}{tM}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2} + \sum_{i\geq r+1}\sigma_i^2, \end{equation*} with probability at least $\left[1-\exp\left(-\frac{\epsilon^2}{2\kappa^8}\right)\right]\mathsf{erf}\left(\frac{\kappa}{\sqrt{2}}\right)^{tM(I-r)}$, where $\mathsf{erf}(\cdot)$ denotes the error function defined as $\mathsf{erf}(y) = \frac{2}{\sqrt{\pi}}\int_{0}^y\exp(-x^2)\mathrm{d}x$ and $\kappa\geq 1$.} \end{Theorem} \begin{proof} See Appendix~\ref{Apdx:probability}.
\end{proof} The upper bound on the DTD error in Theorem~\ref{Theorem:probability} holds \emph{almost surely} if the constant $\epsilon$ is sufficiently large. Comparing Theorem~\ref{Theorem:expectation} and Theorem~\ref{Theorem:probability}, one can make the important observation that as the communication time ($t$) progresses, both the DTD error and its expectation vanish at the same rate of \begin{equation}\label{eq:scalinglaw} \mathsf{Error} \propto \frac{1}{tM}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2}. \end{equation} Another observation is that the non-principal components of the tensor contribute to the DTD error, but the effect is negligible when the eigen-gap is large. \section{Optimal Sketch Selection for FlyCom$^2$}\label{section:selection} Discarding aggregated sketches that have been transmitted under unfavourable channel conditions can improve the FlyCom$^2$ performance. This motivates us to design a sketch-selection scheme in this section. \subsection{Threshold-Based Sketch Selection} First, we follow the approach in~\cite{GXZhuAirComp2019} to design the receive beamforming, $\{\mathbf{A}_t\}$, for MIMO AirComp. To this end, we decompose $\mathbf{A}_t$ as $\mathbf{A}_t = \eta_t\mathbf{U}_{\mathbf{A}_t}$, where the positive scalar $\eta_t$ is called a denoising factor and $\mathbf{U}_{\mathbf{A}_t}$ is an $N_{\text{t}}\times N_{\text{r}}$ unitary matrix. Following steps similar to those in~\cite{GXZhuAirComp2019}, we can show that to minimize the DTD error bounds in Theorems~\ref{Theorem:expectation} and~\ref{Theorem:probability}, the beamformer component should be aligned with the channels of the devices as \begin{equation}\label{eq:receive_alignment} \mathbf{U}_{\mathbf{A}_t}^{\top}=\mathcal{S}_{N_{\text{t}}}\left(\frac{1}{K}\sum_{k}\lambda_{\mathbf{H}_{t,k}}\mathbf{U}_{\mathbf{H}_{t,k}}\mathbf{U}_{\mathbf{H}_{t,k}}^{\top}\right), \end{equation} where $\lambda_{\mathbf{H}_{t,k}}$ and $\mathbf{U}_{\mathbf{H}_{t,k}}$ denote the $N_{\text{t}}$-th eigenvalue and the first $N_{\text{t}}$ eigenvectors of $\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{\top}$, respectively. Furthermore, the denoising factor $\eta_t$ should cope with the weakest channel by being set as \begin{equation}\label{eq:denoising} \eta_t = \max_k \frac{1}{IP} \mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)\mathsf{Tr}\left(\left(\mathbf{U}_{\mathbf{A}_t}\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{H}\mathbf{U}_{\mathbf{A}_t}^{H}\right)^{-1}\right). \end{equation} It follows from~\eqref{eq:receive_alignment} and~\eqref{eq:denoising} that $\mathsf{Tr}(\mathbf{A}_t^{H}\mathbf{A}_t) = \eta_tN_{\text{t}} = \eta_tM$, and $\lambda_j$ in the DTD error bounds in Theorems~\ref{Theorem:expectation} and~\ref{Theorem:probability} can be expressed as \begin{equation}\label{eq:lambda_j} \lambda_j=\sigma_j^2+\frac{\sigma^2}{2t}\sum_{\ell\leq t}\eta_{\ell}, \end{equation} which shows that the error depends only on the denoising factors up to the current time slot. The result also suggests that it is preferable to select from the received sketches $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ those associated with a small $\eta_{\ell}$, which reflects a favourable channel condition. Naturally, we can derive a threshold-based selection scheme as follows: \begin{equation}\label{eq:selection} \boxed{\tilde{\mathbf{Y}}_{\ell}\ \mathrm{is\ selected\ if}\ \eta_{\ell}\leq \eta_{\mathrm{th}},\ \forall \ell\leq t,} \end{equation} where the threshold $\eta_{\mathrm{th}}$ is optimized in the sequel.
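To make the selection rule concrete, the following sketch illustrates how the receive beamformer in~\eqref{eq:receive_alignment}, the denoising factors in~\eqref{eq:denoising}, and the rule in~\eqref{eq:selection} could be computed. This is a hypothetical NumPy illustration: the channel realizations, the feedback values $\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)$, and the threshold value are all placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, Nr, Nt, I, P = 20, 16, 4, 100, 1.0

# One-time feedback Tr(X_k^T X_k) from each device (placeholder values)
tr_Xk = rng.uniform(0.5, 1.5, K) * I

def receive_beamformer(H_list):
    # Eq. (receive_alignment): U_A^T spans the Nt principal eigenvectors of
    # the weighted sum of the devices' channel covariances
    A_sum = np.zeros((Nr, Nr), dtype=complex)
    for H in H_list:                              # H: Nr x Nt Rayleigh channel
        w, V = np.linalg.eigh(H @ H.conj().T)     # ascending eigenvalues
        A_sum += w[-Nt] * (V[:, -Nt:] @ V[:, -Nt:].conj().T)
    _, V = np.linalg.eigh(A_sum / K)
    return V[:, -Nt:].conj().T                    # U_A, of size Nt x Nr

def denoising_factor(U_A, H_list):
    # Eq. (denoising): eta_t is set by the weakest effective channel
    return max(tr_Xk[k] / (I * P) *
               np.trace(np.linalg.inv(U_A @ H_list[k] @ H_list[k].conj().T
                                      @ U_A.conj().T)).real
               for k in range(K))

etas = []
for t in range(200):                              # 200 matrix-symbol slots
    H_list = [(rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)))
              / np.sqrt(2) for _ in range(K)]
    etas.append(denoising_factor(receive_beamformer(H_list), H_list))

eta_th = np.median(etas)                          # placeholder threshold
selected = [l for l, e in enumerate(etas) if e <= eta_th]   # rule in Eq. (selection)
\end{verbatim}
The threshold used above is only a placeholder; its optimization is the subject of the next subsection.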
\subsection{Threshold Optimization} The threshold, $\eta_{\mathrm{th}}$, in~\eqref{eq:selection} needs to be optimized to minimize the error in~\eqref{eq:scalinglaw}. Solving the problem is hindered by the fact that the singular values, $\{\sigma_j\}$, in the DTD error are not available at the server in advance. We tackle this problem by designing a practical optimization scheme. To this end, we resort to using an upper bound on the DTD error as shown below. \begin{Lemma}\label{Lemma:selection} \emph{Let $\tilde{M}$ denote the number of aggregated sketches selected from $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ based on~\eqref{eq:selection} with the threshold $\eta_{\mathrm{th}}$. Then, the DTD error in~\eqref{eq:scalinglaw} satisfies \begin{equation*} \frac{1}{\tilde{M}}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2}\leq \frac{c}{\tilde{M}}\left[1+\frac{r\sigma^2\eta_{\mathrm{th}}}{2\sum_{k}\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)}\right]^2, \end{equation*} where $c$ is a constant. } \end{Lemma} \begin{proof} See Appendix~\ref{Apdx:selection}. \end{proof} Lemma~\ref{Lemma:selection} suggests that a sub-optimal threshold can be obtained by minimizing the error upper bound. Let $S$ denote a set and $|S|$ its cardinality. Then, the threshold-optimization problem can be formulated as \begin{equation}\label{problem:threshold} \begin{aligned} \mathop{\min}_{\eta_{\mathsf{th}}}&\ \ \frac{1}{|S|}\left[1+\frac{r\sigma^2\eta_{\mathrm{th}}}{2\sum_{k}\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)}\right]^2\\ \mathrm{s.t.}&\ \ S = \left\{\eta_{\ell}|\eta_{\ell}\leq \eta_{\mathsf{th}}, \ell\leq t\right\}, \end{aligned} \end{equation} where $\eta_{\ell}$ follows the definition in~\eqref{eq:denoising}. One can observe that the objective function of~\eqref{problem:threshold} is a monotonically increasing function with respect to the variable $\eta_{\mathrm{th}}$ if $\tilde{M}$ is fixed. Such piecewise monotonicity of the objective function of~\eqref{problem:threshold} renders solution methods based on one-dimensional search (e.g., bisection) infeasible, but it allows the optimal solution, $\eta_{\mathrm{th}}^{\star}$, to be restricted to a finite set, say $\eta_{\mathsf{th}}^{\star} \in \left\{\eta_1,\eta_2,\cdots,\eta_t\right\}$. Then, finding $\eta_{\mathsf{th}}^{\star}$ is straightforward via exhaustive enumeration as follows. Let $\tilde{M}_{\ell}$ denote the number of selected sketches corresponding to the threshold fixed as $\eta_{\mathsf{th}} = \eta_{\ell}$. Define \begin{equation*} \ell^{\star} = \arg\min_{\ell\leq t}\frac{1}{\tilde{M}_{\ell}}\left[1+\frac{r\sigma^2\eta_{\ell}}{2\sum_{k}\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)}\right]^2. \end{equation*} The optimal threshold solving the problem in~\eqref{problem:threshold} is $\eta_{\mathsf{th}} = \eta_{\ell^{\star}}$. Two remarks are offered as follows. First, the above search for the optimal threshold has complexity linearly proportional to $t$, the population of accumulated sketches at the server. Second, the implementation of the optimization at the server requires feedback of a scalar from each device, namely $\mathsf{Tr}\left(\mathbf{X}_k^{\top}\mathbf{X}_k\right)$ from device $k$. \section{Experimental Results}\label{section:experiment} \subsection{Experimental Settings} First, the MIMO AirComp system is configured to have the following settings. There are $K=20$ edge devices connected to the server.
The array sizes at each device and at the server are set as $N_{\text{t}}=4$ and $N_{\text{r}}=16$, respectively. The Rayleigh channel model is adopted, in which each MIMO channel matrix $\mathbf{H}_{t,k}$ comprises i.i.d. $\mathcal{CN}(0,1)$ entries and different channels are independent. Second, following the DTD literature (see, e.g.,~\cite{StreamingTD}), we use a synthetic data model. Considering the computation of the $n$-th factor matrix, the unfolding matrix of the data tensor has the size of $I_n\times \prod_{j=1,j\neq n}^N I_j = 100\times 1500$, and its columns are uniformly distributed over the devices. Under such settings, each local sketch has a length of $I_n = 100$, which is smaller than a single channel-coherence duration (see, e.g.,~\cite{Bjornson2016Tenmyths}, for justification). To demonstrate the performance of the proposed FlyCom$^2$ for data with a range of parameterized spectral distributions, the singular values of the unfolding matrix are set to decay with a polynomial rate: \begin{equation*} \mathbf{\Sigma}_{\mathbf{X}} = \mathsf{diag}\left(1,\cdots,1,\frac{1}{2^{\xi}},\frac{1}{3^{\xi}},\cdots,\frac{1}{(I_n-r)^{\xi}}\right), \end{equation*} where the first $r=12$ principal singular values are fixed as $1$ and $\xi>0$ controls the decay rate of the residual values. Furthermore, the left and right eigenspaces of the unfolding matrix are generated as those of random matrices with i.i.d. $\mathcal{N}(0,1)$ entries~\cite{StreamingTD}. Third, we consider two benchmarking schemes that are variants of SVD-based DTD. \begin{itemize} \item Centroid SVD-DTD: Devices compute local principal eigenspaces $\{\hat{\mathbf{U}}_k\}$ of their on-device data samples by using SVD, and the server then aggregates these local results as $\mathbf{P}=\frac{1}{K}\sum_k\hat{\mathbf{U}}_k\hat{\mathbf{U}}_k^{\top}$. The principal eigenspace of $\mathbf{P}$ represents the centroid of all local estimates $\{\hat{\mathbf{U}}_k\}$ on the Grassmannian manifold and is extracted to form a global estimate of the ground truth~\cite{JF2019estimation2019,chen2022analog}. \item Alignment SVD-DTD: The scheme follows a procedure similar to the above, except that the local results, $\{\hat{\mathbf{U}}_k\}$, are aggregated as $\mathbf{P}=\frac{1}{K}\sum_k\hat{\mathbf{U}}_k\mathbf{J}_k$, where the orthogonal matrices $\{\mathbf{J}_k\}$ are alignment matrices that are optimized by using past global estimates to improve the system performance~\cite{VC2021DPCA}. \end{itemize} The aggregation operations in both benchmark schemes are implemented using MIMO AirComp~\cite{GZhu2021WCM, GXZhuAirComp2019}, as in FlyCom$^2$, for a fair comparison. \subsection{Performance Gain of Sketch Selection} \begin{figure*}[t] \centering \subfigure[SNR $= 0\mathrm{dB}$]{\label{subfig:0dB}\includegraphics[width=0.32\textwidth]{figures/Compare_selection_0dB_decay2-eps-converted-to.pdf}} \subfigure[SNR $= 5\mathrm{dB}$]{\label{subfig:5dB}\includegraphics[width=0.32\textwidth]{figures/Compare_selection_5dB_decay2-eps-converted-to.pdf}} \subfigure[SNR $= 10\mathrm{dB}$]{\label{subfig:10dB}\includegraphics[width=0.32\textwidth]{figures/Compare_selection_10dB_decay2-eps-converted-to.pdf}} \caption{Error-performance comparison between FlyCom$^2$ with and without sketch selection.} \label{fig:selection} \end{figure*} In Fig.~\ref{fig:selection}, we compare the error performance of FlyCom$^2$ between the cases with and without sketch selection.
The communication time is measured by the total number of symbol slots used in uploading local sketches, namely $tI_n$, where $I_n$ is the number of rows of local sketches. It is observed from Fig.~\ref{fig:selection} that the proposed selection scheme helps to reduce the expected DTD for different transmit SNRs. The gain emerges when the communication time exceeds a SNR dependent threshold (e.g. $2500$ time slots for $10$ dB) as the total number of available sketches becomes sufficiently large. Another observation from Fig.~\ref{fig:selection} is that the DTD error decreases at an approximately linear rate with respect to the communication time, which is aligned with our conclusion in~\eqref{eq:scalinglaw}. In the sequel, FlyCom$^2$ is assumed to have sketch selection. \begin{figure*}[t] \centering \subfigure[Spectral decay rate $\xi = 0.5$]{\label{subfig:decay05}\includegraphics[width=0.32\textwidth]{figures/benchmarks_10dB_100x1500_decay05-eps-converted-to.pdf}} \subfigure[Spectral decay rate $\xi = 1$]{\label{subfig:decay1}\includegraphics[width=0.32\textwidth]{figures/benchmarks_10dB_100x1500_decay1-eps-converted-to.pdf}} \subfigure[Spectral decay rate $\xi = 2$]{\label{subfig:decay2}\includegraphics[width=0.32\textwidth]{figures/benchmarks_10dB_100x1500_decay2-eps-converted-to.pdf}} \caption{FlyCom$^2$ versus Benchmark schemes, SNR $\gamma = 10\mathrm{dB}$.} \label{fig:benchmark} \end{figure*} \subsection{Error Performance of FlyCom$^2$} While FlyCom$^2$ requires much simpler on-device computation than benchmarking schemes (see Section~\ref{subsec:experiment_error}), we demonstrate in Fig.~\ref{fig:benchmark} that it can achieve comparable or even better error performance than the latter. Fig.~\ref{fig:benchmark} displays the curves of expected DTD error versus communication time. Note that the performance of the benchmarking schemes with one-shot computation and communication appears as single points in the figure. The results in Fig.~\ref{fig:benchmark} show that FlyCom$^2$-based DTD achieves comparable decomposition accuracies as the benchmarking schemes with time progressing. Furthermore, its performance is improved by increasing the decay rate ($\xi$) of singular values, which validates our conclusion that large eigen-gaps help distinguish principal from non-principal eigenvectors during random sketching. For instance, for $\xi=1$, the proposed scheme approaches the centroid and alignment based SVD-DTD in performance for communication time larger than $800$ and $1700$ symbol slots, respectively. As $\xi$ increases to $2$, the former outperforms both of the latter schemes. Furthermore, one can also observe from Fig.~\ref{fig:benchmark} that the proposed on-the-fly framework realizes a flexible trade-off between the decomposition accuracy and communication time, which is the distinctive feature of the design. 
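For completeness, the synthetic unfolding matrix described in the experimental settings can be generated as in the following sketch. This is a hypothetical NumPy illustration: the sizes and the decay rate $\xi$ approximately follow the stated settings, while the random seed and the uniform column split over devices are assumptions made for the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
I_n, J_n, r, xi = 100, 1500, 12, 2.0    # sizes and decay rate from the settings

# Polynomially decaying spectrum: the first r singular values equal 1;
# the tail is chosen so that the full spectrum has exactly I_n entries
tail = 1.0 / np.arange(2, I_n - r + 2, dtype=float) ** xi
sigma = np.concatenate([np.ones(r), tail])

# Random orthonormal left/right eigenspaces (QR of Gaussian matrices)
U, _ = np.linalg.qr(rng.standard_normal((I_n, I_n)))
V, _ = np.linalg.qr(rng.standard_normal((J_n, I_n)))
X = U @ np.diag(sigma) @ V.T            # I_n x J_n unfolding matrix

# Columns split uniformly over K = 20 devices
X_locals = np.array_split(X, 20, axis=1)
\end{verbatim}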
\begin{figure*}[t] \centering \subfigure[Computation complexity ($I=100$, $N_{\text{t}}=4$)]{\label{subfig:complexity}\includegraphics[width=0.48\textwidth]{figures/complexity-eps-converted-to.pdf}} \subfigure[Passes of raw data in memory]{\label{subfig:memory}\includegraphics[width=0.48\textwidth]{figures/memory_passes-eps-converted-to.pdf}} \caption{Computation-cost comparison between FlyCom$^2$-based DTD and benchmarking schemes.} \label{fig:computation} \end{figure*} \subsection{Device Computation Costs of FlyCom$^2$}\label{subsec:experiment_error} In Fig.~\ref{fig:computation}, we compare two kinds of computational costs at devices, namely complexity and memory passes, between FlyCom$^2$ and the benchmark schemes. The complexity refers to the flop count of the computation, and the number of memory passes equals the number of memory visits needed for reading data entries. The computational advantage of FlyCom$^2$ is demonstrated by comparing the cost of matrix multiplication in random sketching with that of the deterministic SVD used in the one-shot benchmarking schemes. Specifically, given $I\times J$ local unfolding matrices, deterministic SVD has complexity proportional to $\min\{I,J\}^2\times \max\{I,J\}$~\cite{SVDComplexity2019}; based on matrix multiplication, the complexity of FlyCom$^2$ to yield an $N_{\text{t}}$-dimensional sketch at each time slot is $IJN_{\text{t}}$. For the schemes in comparison, their curves of computation complexity versus sample size are plotted in Fig.~\ref{subfig:complexity}. One can observe that the proposed FlyCom$^2$ dramatically reduces the devices' complexity by more than an order of magnitude. On the other hand, Fig.~\ref{subfig:memory} displays the curves of the number of memory passes versus the principal dimensionality, $r$. The proposed design keeps a constant number of memory passes for matrix multiplication, as opposed to that of SVD, which increases linearly with the principal dimensionality. Specifically, the number of memory passes is reduced by FlyCom$^2$ by factors of $10$ and $30$ for $r=10$ and $r=30$, respectively. \section{Conclusion}\label{section:conclusion} We have presented a FlyCom$^2$ framework to support the progressive computation of DTD in mobile networks. Through the use of the random-sketching technique at devices, the traditional one-shot high-dimensional mobile communication and computation is reduced to low-dimensional operations spread over multiple time slots. Thereby, the resource constraints of devices are overcome. Furthermore, FlyCom$^2$ attains its distinctive feature of progressively improving DTD accuracy with increasing communication time, which provides robustness against link disruption. To develop the FlyCom$^2$-based DTD framework, we have designed an on-the-fly sub-space estimator and a sketch-selection scheme to ensure close-to-optimal system performance. Beyond DTD, high-dimensional communication and computation pose a general challenge for machine learning and data analytics in wireless networks. We expect that FlyCom$^2$ can be further developed into a broad approach for the efficient deployment of relevant algorithms such as federated learning and distributed optimization. For the current FlyCom$^2$ targeting DTD, its extension to accommodate other wireless techniques such as broadband transmission and radio resource management is also a direction worth pursuing. \section{Introduction} In mobile networks, an enormous amount of data is being continuously generated by billions of edge devices.
Data analytics can be performed on distributed data to distill useful insight for empowering a broad range of mobile applications ranging from e-commerce to auto-driving to IoT sensing~\cite{Bennis6Gvision,Letaief2021JSAC}. One basic class of techniques is called tensor decomposition which extracts a low-dimensional structure from large-scale multi-attribute data with tensor representation (i.e., high-dimensional counterparts of a matrix) \cite{Larsson2010,TDreview2017}. A popular technique in this class, called Tucker decomposition, is a higher-dimensional extension of singular-value decomposition (SVD) that has supported diverse applications such as Google's image recognition and Cynefin's spotting anomalies. In mobile networks, the tensor decomposition can be implemented in a centralized manner which requires uploading of high-dimensional data from many devices to a central server. However, such implementation is stymied not only by a communication bottleneck but also by the issue of data privacy~\cite{MZChen2021,NiyatoFL2020}. In view of these issues, we focus on \emph{distributed tensor decomposition} (DTD) that avoids direct data uploading and reduces communication overhead via distributing the computation of data tensors to multiple devices. A direct distributed implementation therein is to parallel centralized iterative methods such as alternating least square~\cite{PARAFAC2005} and stochastic gradient descent~\cite{Kijung2017,zhang2021turning} over edge devices, which, however, results in great communication overhead due to slow convergence. On the other hand, DTD can be realized via \emph{one-shot} distributed matrix analysis techniques~\cite{chen2022analog,JF2019estimation2019,VC2021DPCA}, since the desired orthogonal factor matrices can be estimated as the principal eigenspaces of unfolding matrices of the tensor along different modes~\cite{TensorReview}. These one-shot methods improve communication efficiency at a slight cost of decomposition accuracy by following two steps: 1) computing local estimates of the desired factor matrix at devices using local data; 2) uploading and aggregating the local estimates at the server to compute a global estimate. Though alleviated, the communication bottleneck still exists due to the required aggregation of high-dimensional local tensors over potentially many devices. This multi-access problem can be addressed by using a technique called \emph{over-the-air computation} (AirComp), which exploits the waveform superposition property of a multi-access channel to realize over-the-air data aggregation in one shot~\cite{GZhu2021WCM,MZChen2021}. In general, AirComp finds applications in communication-efficient distributed computing and learning with a recent focus on federated learning (see, e.g., \cite{Eldar2021TSP, Ding2020TWC,Deniz2020TSP}). Considering a DTD system with AirComp, this work aims to solve two open problems. The first is the prohibitive cost and latency of computation at resource-constrained devices. A traditional one-shot DTD algorithm requires each device to perform eigenvalue decomposition of a potentially high-dimensional local dataset (from, e.g., its sensor or user). The resultant complexity increases \emph{super-linearly} with the data dimensions~\cite{SVDComplexity2019} and latency is high as limited by frequent access to memory that is usually much slower than processing speeds~\cite{randomprojection2011}. These make it difficult for DTD to support emerging mission-critical applications~\cite{Petar2022TimePersp}. 
The second problem is that the one-shot transmissions by devices are susceptible to link disruption. Specifically, the loss of mobile connection during the process of transmitting high-dimensional local principal components can render the already received partial data useless. In other words, the existing designs lack the feature of graceful performance degradation due to fading. To solve these problems, we propose the novel framework of \emph{on-the-fly communication-and-computing} (FlyCom$^2$). Underpinning the framework is the use of a technique from randomized linear algebra, \emph{randomized sketching}, that generates low-dimensional random representations, called \emph{sketches}, of a high-dimensional data sample by projecting it onto randomly generated low-dimensional sub-spaces~\cite{StreamingTD, randomprojection2011}. The technique has been successfully used in diverse applications ranging from online data tracking~\cite{Sketching} to matrix approximation~\cite{randomprojection2011}. In FlyCom$^2$, in place of the traditional high-dimensional local eigenspaces, each device generates a stream of low-dimensional sketches for uploading to the server. Considering a \emph{multiple-input-multiple-output} (MIMO) channel, the simultaneous transmission of local sketches is enabled by spatially multiplexed AirComp~\cite{GXZhuAirComp2019}. Upon its arrival at the server, each aggregated sketch is immediately used to improve the global tensor decomposition, giving the name of FlyCom$^2$. Since random sketches serve as independent observations on the tensor, the server can produce an estimate for tensor decomposition in every time slot based on the sketches already received. The FlyCom$^2$ framework addresses the aforementioned open problems in several aspects. First, random sketching, which involves only matrix multiplication, has much lower complexity than eigendecomposition and helps to reduce the complexity of on-device computation. Second, the DTD accuracy depends on the number of successfully received sketches and hence is robust against the loss of sketches in transmission. This endows FlyCom$^2$ with a key property of graceful degradation in the event of link disruption or packet loss. Third, as the principal components of a high-dimensional tensor are usually low-dimensional, the progressive DTD at the server is shown to approach its optimal performance quickly as the number of received aggregated sketches increases, thereby reining in the communication overhead. Last, the parallel streaming communication and computation in FlyCom$^2$ are more efficient than the sequential operations of the traditional one-shot algorithms due to the communication-computation separation. In designing the FlyCom$^2$ framework, this work makes the following key contributions. \begin{itemize} \item \emph{On-the-Fly Sub-space Detection:} One key component of the framework is an optimal on-the-fly detector at the server to estimate the tensor's principal eigenspace from the received, noisy (aggregated) sketches. To design a \emph{maximum likelihood} (ML) detector, a whitening technique is used to pre-process the sketches so as to yield an effective global observation, which is shown to have a covariance matrix sharing the same eigenspace as the tensor. Using this result, the ML estimation problem is formulated as a \emph{subspace alignment problem} and is solved in a closed form.
It is observed from the solution that the optimal estimate of the desired principal eigenspace approaches its ground truth as the dimensionality of this observation grows (or, equivalently, as more sketches are received). \item \emph{DTD Error Analysis:} The end-to-end performance of the FlyCom$^2$-based DTD system is measured by the square error of the estimated principal eigenspace \emph{with respect to} (w.r.t.) its ground truth. Using perturbation theory and the concentration of measure, bounds are derived on both the error and its expectation. These results reveal that the error consists of a residual component contributed by the non-principal components and another component caused by random sketching. Moreover, the error is observed to be inversely proportional to the number of received sketches, validating the earlier claims on the progressive nature of the designed DTD as well as its feature of graceful degradation. This also suggests a controllable trade-off between the decomposition accuracy and the communication overhead, which is useful for practical implementation. \item \emph{Threshold-Based Sketch Selection:} Removing severely channel-distorted sketches from use in sub-space detection can lead to a performance improvement. This motivates the design of a sketch-selection scheme that applies a threshold on a scaling factor in MIMO AirComp that reflects the received \emph{signal-to-noise ratio} (SNR) of an aggregated sketch. We show that such a threshold can be efficiently optimized by an enumeration method of polynomial complexity w.r.t. the population of received sketches. \end{itemize} The remainder of the paper is organized as follows. Section~\ref{section:model} introduces system models and metrics, followed by an overview of the proposed FlyCom$^2$ framework in Section~\ref{section:overview}. Then, Section~\ref{section:design} presents the design of the on-the-fly sub-space estimator and its error analysis. The sketch-selection scheme is proposed in Section~\ref{section:selection}. Numerical results are provided in Section~\ref{section:experiment}, followed by concluding remarks in Section~\ref{section:conclusion}. \section{Models, Operations, and Metrics}\label{section:model} We consider the support of DTD in a MIMO system as illustrated in Fig.~\ref{fig:scenario}. The relevant models, operations, and metrics are described as follows. \begin{figure*}[t] \centering \begin{minipage}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{figures/scenario.pdf} \vspace{-8mm} \end{minipage} \caption{On-the-fly communication-and-computing for distributed tensor decomposition.} \label{fig:scenario} \end{figure*} \subsection{Distributed Tensor Decomposition} We consider the distributed implementation of the popular Tucker method of tensor decomposition~\cite{TensorReview}. For ease of exposition, the tensor is assumed to have $N$ modes, which generalize the concepts of columns and rows in matrices, with the first $(N-1)$ modes corresponding to data features and mode $N$ indexing data samples. For instance, in a surveillance system, images captured by multiple cameras are expressed as local tensors with three modes indicating pixels, colors, and data sample indices, respectively. Let data samples collected by an arbitrary device, say device $k$, be represented as a local tensor $\mathcal{X}_k\in\mathbb{R}^{I_1^{(k)}\times I_2^{(k)}\cdots\times I_N^{(k)}}$, where $I_n^{(k)}$ denotes the dimensionality of mode $n$ of local tensor $k$.
For ease of notation, we assume that the local tensors have the same dimensions for their feature modes: $I_n^{(k)}=I_n$, $\forall k, 1\leq n\leq N-1$. These local tensors are aggregated from $K$ devices to form a global tensor $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\cdots\times I_N}$ with $I_N = \sum_kI_N^{(k)}$. The Tucker decomposition of $\mathcal{X}$ can be written as~\cite{TensorReview} \begin{equation}\label{eq:Tucker} \mathcal{X} \approx \mathcal{G}\times_1\mathbf{U}_1\times_2\mathbf{U}_2\cdots\times_N\mathbf{U}_N\overset{\triangle}{=}\tilde{\mathcal{X}}, \end{equation} where $\mathcal{G}\in\mathbb{R}^{r_1\times r_2\cdots\times r_N}$ represents a core tensor [that generalizes singular values in \emph{singular value decomposition} (SVD)], $\mathbf{U}_n\in\mathbb{R}^{I_n\times r_n}$ is an orthogonal factor matrix corresponding to the $n$-th mode satisfying $\mathbf{U}_n^{\top}\mathbf{U}_n = \mathbf{I}_{r_n}$ with $r_n$ ($r_n\leq I_n$) representing the number of principal dimensions, and $\times_n$ denotes the mode-$n$ matrix product~\cite{TensorReview}. In the sequel, we pursue these factor matrices $\{\mathbf{U}_n\}$ as they reveal the characteristics of the data tensor at different modes; moreover, given $\{\mathbf{U}_n\}$, the computation of $\mathcal{G}$ is straightforward~\cite{Larsson2010}. In centralized computation with full data aggregation, $\{\mathbf{U}_n\}$ can be computed by using the \emph{higher-order SVD} approach~\cite{Larsson2010,StreamingTD}. In this approach, the tensor is first flattened along a chosen mode $n$ to yield a matrix $\mathbf{X}^{(n)}\in\mathbb{R}^{I_n\times J_n}$ with $J_n=\prod_{j=1,j\neq n}^NI_j$, termed the \emph{mode-$n$ unfolding}; then the desired factor matrix is computed as $\mathbf{U}_n = [\mathbf{u}_1,\cdots,\mathbf{u}_{r_n}]$, where $\mathbf{u}_i$ is given by the $i$-th principal left singular vector of the mode-$n$ unfolding. Let this operation be represented by $\mathcal{S}_{r_n}(\cdot)$ and hence $\mathbf{U}_n = \mathcal{S}_{r_n}(\mathbf{X}^{(n)}(\mathbf{X}^{(n)})^{\top})$. In contrast with its centralized counterpart, DTD computes the eigenspaces of the different unfolding matrices in a distributed manner, avoiding the aggregation of raw data so as to preserve data ownership~\cite{JF2019estimation2019,chen2022analog}. Considering the computation of $\mathbf{U}_n$, DTD goes through the following procedure: 1) local tensors are flattened along the chosen mode $n$ to generate local unfoldings, denoted by $\{\mathbf{X}_k^{(n)}\}$; 2) devices compute low-dimensional components $\{\mathbf{S}_k\}$ from the local unfoldings $\{\mathbf{X}_k^{(n)}\}$ through dimensionality-reduction techniques; 3) the server gathers these local components from the devices and aggregates them into a global component, denoted as $\mathbf{S}$, to yield a global estimate of the ground truth, $\mathbf{U}_n$. It is worth mentioning that the computation results $\{\mathbf{S}_k\}$ depend on the particular dimensionality-reduction technique. For example, when using \emph{principal component analysis} (PCA)~\cite{JF2019estimation2019,chen2022analog}, $\{\mathbf{S}_k\}$ are computed as the principal eigenspaces of $\{\mathbf{X}_k^{(n)}\}$ at the devices, and the server then averages them to estimate $\mathbf{U}_n$. In this work, the random-sketching approach is adopted, as elaborated in Section~\ref{section:overview}. \subsection{MIMO Over-the-Air Computation} FlyCom$^2$ builds on MIMO AirComp to aggregate the local results over the air, which is described as follows.
First, let $N_\text{r}$ and $N_{\text{t}}$ with $N_{\text{r}} \geq N_{\text{t}}$ denote the numbers of antennas at the edge server and at each device, respectively. We assume perfect transmit \emph{channel state information} (CSI) and symbol-level synchronization between devices~\cite{GXZhuAirComp2019}. Time is slotted, and the slots are grouped into longer durations, with $t$ denoting the duration index. In each time slot, an $N_{\text{t}}\times 1$ vector of complex scalar symbols is transmitted over the $N_{\text{t}}$ antennas. Then a \emph{matrix-symbol} duration spans at least $I$ symbol durations to support the transmission of an $N_{\text{t}}\times I$ matrix. In an arbitrary duration, say $t$, all edge devices simultaneously transmit their $I\times M$ real matrices, denoted as $\{\mathbf{S}_{t,k}\}$, each of which is termed a \emph{matrix symbol}. As a result, the server receives an over-the-air aggregated matrix symbol, $\mathbf{Y}_t$, as \begin{equation*} \mathbf{Y}_t = \mathbf{A}_t\sum_{k=1}^K\mathbf{H}_{t,k}\mathbf{B}_{t,k}\mathbf{S}_{t,k}^{\top} + \mathbf{A}_t\mathbf{Z}_t, \end{equation*} where $\mathbf{H}_{t,k}\in \mathbb{C}^{N_{\text{r}}\times N_{\text{t}}}$ denotes the channel matrix corresponding to device $k$, $\mathbf{Z}_t$ models additive Gaussian noise with \emph{independent and identically distributed} (i.i.d.) elements of $\mathcal{CN}(0,\sigma^2)$, and $\mathbf{A}_t\in \mathbb{C}^{M\times N_{\text{r}}}$ and $\mathbf{B}_{t,k}\in \mathbb{C}^{N_{\text{t}}\times M}$ denote the receive and transmit beamforming matrices, respectively. To realize AirComp, we consider \emph{zero forcing} (ZF) transmit beamforming that inverts individual MIMO channels~\cite{GXZhuAirComp2019}. Mathematically, conditioned on a fixed receive beamformer, the transmit beamforming matrices are given as \begin{equation}\label{eq:example:ZF} \mathbf{B}_{t,k} = \left(\mathbf{A}_{t}\mathbf{H}_{t,k}\right)^{H}\left(\mathbf{A}_{t}\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{H}\mathbf{A}_{t}^{H}\right)^{-1}. \end{equation} The received matrix $\mathbf{Y}_t$ is then rewritten as \begin{equation}\label{eq:WA} \mathbf{Y}_t =\sum_{k=1}^K\mathbf{S}_{t,k}^{\top}+\mathbf{A}_t\mathbf{Z}_t. \end{equation} In the absence of noise, the AirComp in~\eqref{eq:WA} provides the one-shot realization of the desired aggregation operation for DTD. The average transmission power of each device is enforced not to exceed a power budget of $P$ per slot, i.e., \begin{equation}\label{eq:powerconstraint} \mathsf{E}\left[\Vert\mathbf{B}_{t,k}\mathbf{S}_{t,k}^{\top}\Vert_F^2\right] = \mathsf{Tr}\left(\left(\mathbf{A}_{t}\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{H}\mathbf{A}_{t}^{H}\right)^{-1}\mathsf{E}[\mathbf{S}_{t,k}^{\top}\mathbf{S}_{t,k}]\right)\leq IP,\ \forall t,k. \end{equation} The transmit \emph{signal-to-noise ratio} (SNR) is then given by $\gamma = \frac{P}{\sigma^2}$.
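As a concrete illustration of the AirComp model above, the following sketch simulates the ZF transmit beamforming in~\eqref{eq:example:ZF} and the resulting aggregation in~\eqref{eq:WA}. This is a hypothetical NumPy illustration: the receive beamformer is drawn at random here (its design is addressed in Section~\ref{section:selection}), and the power constraint in~\eqref{eq:powerconstraint} is not enforced for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, Nr, Nt, I, sigma2 = 20, 16, 4, 100, 0.1
M = Nt                                            # sketch dimension equals Nt

# Local matrix symbols S_{t,k} (I x M) and i.i.d. Rayleigh channels H_{t,k}
S = [rng.standard_normal((I, M)) for _ in range(K)]
H = [(rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)))
     / np.sqrt(2) for _ in range(K)]

# Receive beamformer A (M x Nr), drawn at random for illustration only
A = rng.standard_normal((M, Nr)) + 1j * rng.standard_normal((M, Nr))

# ZF transmit beamforming: B_k = (A H_k)^H (A H_k H_k^H A^H)^{-1}
Y = np.zeros((M, I), dtype=complex)
for k in range(K):
    AH = A @ H[k]
    B = AH.conj().T @ np.linalg.inv(AH @ AH.conj().T)
    Y += A @ H[k] @ B @ S[k].T                    # equals S_k^T for every device
Z = (rng.standard_normal((Nr, I)) + 1j * rng.standard_normal((Nr, I))) \
    * np.sqrt(sigma2 / 2)
Y += A @ Z                                        # received matrix symbol

# Without noise, the received symbol recovers the sum of the local matrices
print(np.allclose((Y - A @ Z).real, sum(S).T))
\end{verbatim}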
Mathematically, given $\tilde{\mathcal{X}}$ as the tensor derived from $\{\tilde{\mathbf{U}}_n\}$, the error is measured as $\Vert\mathcal{X}-\tilde{\mathcal{X}}\Vert_F^2$, which can be bounded as $ \Vert\mathcal{X}-\tilde{\mathcal{X}}\Vert_F^2 \leq \sum_{n=1}^N\Vert (\mathbf{I}_{I_n} - \tilde{\mathbf{U}}_n\tilde{\mathbf{U}}_n^{\top})\mathbf{X}^{(n)}\Vert_F^2$~\cite{StreamingTD}. As this result allows tractability, we define the DTD error as
\begin{equation}\label{eq:error}
d\left(\tilde{\mathbf{U}}_n,\mathbf{X}^{(n)}\right) = \Vert (\mathbf{I}_{I_n} - \tilde{\mathbf{U}}_n\tilde{\mathbf{U}}_n^{\top})\mathbf{X}^{(n)}\Vert_F^2.
\end{equation}
\section{Overview of On-the-Fly Communication-and-Computing}\label{section:overview}
To support DTD over edge devices with limited computation power, we propose a FlyCom$^2$ framework as shown in Fig.~\ref{fig:scenario}. To elaborate on it, we first briefly introduce the random sketching approach exploited in FlyCom$^2$ and then explain how to use FlyCom$^2$ to support DTD.
\subsection{Data Dimensionality Reduction via Random Sketching}
Recall that DTD requires data dimensionality reduction on devices prior to transmission. For high-dimensional tensors, the traditional PCA technique becomes too complex for resource-constrained devices. To address this issue, we adopt a technique for random dimensionality reduction, known as \emph{random sketching}, which is simpler than PCA as it relies only on matrix multiplication and also requires a smaller number of passes over the dataset~\cite{randomprojection2011}. Specifically, given an $I\times J$ data matrix $\mathbf{X}$, random sketching uses a $J\times M$ random matrix, termed the \emph{dimension reduction mapping} (DRM) and denoted by $\mathbf{\Omega}$, to map $\mathbf{X}$ to an $I\times M$ sketch matrix $\mathbf{S}$ with $J\gg M$: $\mathbf{S} = \mathbf{X}\mathbf{\Omega}$. The mapping $\mathbf{\Omega}$ can be composed of i.i.d. Gaussian elements and projects the high-dimensional $\mathbf{X}$ onto random directions in a space of low dimensionality. Despite the random projection, the mutual distances between the rows of $\mathbf{X}$ are approximately preserved, such that the principal (column) eigenspace of the sketch, $\mathbf{S}$, constitutes a good approximation of that of $\mathbf{X}$. The approximation accuracy grows as $M$ increases and becomes perfect when $M$ is equal to $J$~\cite{randomprojection2011}. Importantly, to estimate an $r$-dimensional principal eigenspace, random sketching has a complexity of $\mathcal{O}(IJM)$ and requires a single pass over the data in memory, as opposed to the $\mathcal{O}(\min\{I,J\}^2\times \max\{I,J\})$ complexity and $\mathcal{O}(r)$ memory passes of PCA~\cite{SVDComplexity2019}.
\subsection{FlyCom$^2$-Based DTD}
Based on the preceding random-sketching technique, we propose the FlyCom$^2$ framework that decomposes the high-dimensional DTD into on-the-fly processing and transmission of streams of low-dimensional random sketches. Thereby, we not only overcome the devices' resource constraints but also achieve a graceful reduction of the DTD error as the communication time increases. Without loss of generality, we focus on the computation of the principal eigenspace $\mathbf{U}_n$ for an arbitrary data-feature mode $n$ with $n\in\{1,\cdots, N-1\}$. To simplify notation, the superscript $(n)$ and subscript $n$ are removed. The detailed operations of FlyCom$^2$ are described as follows.
\subsubsection{On-the-Fly Computation at Devices} Each device streams a sequence of low-dimensional local sketches to the server by generating and transmitting them one by one in data packets. First, the progressive computation of local sketches at devices is introduced as follows. Let each local tensor, say $\mathcal{X}_k$ at device $k$, be flattened along the desired mode to generate the unfolding matrix $\mathbf{X}_k$. Then, in the (matrix-symbol) slot $t$, each device $k$ draws i.i.d. $\mathcal{N}(0,1)$ entries to form a $J\times M$ DRM, denoted by $\mathbf{\Omega}_{t,k}$, or retrieves it efficiently from a memory~\cite{TRP_Tropp2018}. Then an $M$-dimensional local sketch for $\mathbf{X}_{k}$ can be computed as $\mathbf{S}_{t,k} = \mathbf{X}_{k} \mathbf{\Omega}_{t,k}$, which is then uploaded to the server immediately before computing the next sketch $\mathbf{S}_{t+1,k}$. This allows the efficient communication-and-computation parallelization as shown in Fig.~\ref{fig:parallel}. \begin{figure*}[t] \centering \begin{minipage}[b]{0.65\textwidth} \centering \includegraphics[width=\textwidth]{figures/parallel.pdf} \vspace{-8mm} \end{minipage} \caption{Parallelization between communication and computation.} \label{fig:parallel} \end{figure*} \subsubsection{On-the-Fly Global Random Sketching} MIMO AirComp is used for low-latency aggregation of the local sketches simultaneously streamed by devices. Local temporal sketches are progressively aggregated at the server by linearly modulating them as MIMO AirComp symbols. For ease of notation, we consider the dimension of local sketches to be fixed as $M = N_{\text{t}}$ but this assumption can be easily relaxed similarly as in~\cite{chen2022analog}. Consider the uploading of the $t$-th local sketches. It follows from~\eqref{eq:WA} that the matrix symbol received at the server can be written as \begin{equation}\label{eq:receivedsymbol} \mathbf{Y}_t^{\top} = \sum_k\mathbf{X}_k\mathbf{\Omega}_{t,k} + \mathbf{Z}_t^{\top}\mathbf{A}_t^{\top}. \end{equation} To explain how to use $\mathbf{Y}_t$ in estimating the principal eigenspace of the global unfolding matrix $\mathbf{X}$, we first consider the case without channel noise, in which $\mathbf{Y}_t^{\top} = \sum_k\mathbf{X}_k\mathbf{\Omega}_{t,k}$. Since the global tensor $\mathcal{X}$ is given by assembling local tensors along mode $N$, the corresponding global unfolding matrix, denoted by $\mathbf{X}$, is related to the local unfoldings $\{\mathbf{X}_k\}$ as \begin{equation}\label{eq:distributed_samples} \mathbf{X}=[\mathbf{X}_1,\mathbf{X}_2,\cdots,\mathbf{X}_K]. \end{equation} It follows that \begin{align*} \mathbf{Y}_t^{\top}&= [\mathbf{X}_1,\cdots,\mathbf{X}_K][\mathbf{\Omega}_{t,1}^{\top},\cdots,\mathbf{\Omega}_{t,K}^{\top}]^{\top}\\ & \overset{\triangle}{=}\mathbf{X}\mathbf{F}_t, \end{align*} where we define $\mathbf{F}_{t} = [\mathbf{\Omega}_{t,1}^{\top},\cdots,\mathbf{\Omega}_{t,K}^{\top}]^{\top}$. As $\{\mathbf{\Omega}_{t,k}\}$ are mutually independent, $\mathbf{F}_{t}$ has i.i.d. $\mathcal{N}(0,1)$ elements and can be used as an $M$-dimensional DRM for randomly sketching $\mathbf{X}$. Therefore, in the absence of channel noise, $\mathbf{Y}_t$ gives an $M$-dimensional global sketch for $\mathbf{X}$. The dimension of the global sketch grows, thereby improving the DTD accuracy, as more aggregated local sketches are received (or equivalently $t$ progresses), giving the name of on-the-fly global sketching. 
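For concreteness, the following minimal NumPy sketch (ours, not part of the proposed system) illustrates the mechanism described above on synthetic data: each device applies its own Gaussian DRM to its local unfolding, the noise-free AirComp sum of the local sketches equals a sketch of the global unfolding with the stacked DRM $\mathbf{F}_t$, and accumulating sketches over slots improves the recovered principal eigenspace. All dimensions and the synthetic data are illustrative assumptions.
\begin{verbatim}
# Minimal NumPy illustration (not the system implementation) of on-the-fly
# global random sketching: K devices hold column blocks X_k of a global
# unfolding X; in each slot every device applies its own Gaussian DRM, and
# the (noise-free) AirComp sum of local sketches equals X times the stacked DRM.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, M, T, r = 100, 1500, 20, 4, 50, 12

# Synthetic unfolding with an r-dimensional dominant column space.
U0, _ = np.linalg.qr(rng.standard_normal((I, I)))
spec = np.concatenate([np.ones(r), 1.0 / np.arange(2, I - r + 2) ** 2])
X = (U0 * spec) @ rng.standard_normal((I, J)) / np.sqrt(J)
X_blocks = np.array_split(X, K, axis=1)          # local unfoldings X_k

sketches = []
for t in range(T):
    # Device k: S_{t,k} = X_k Omega_{t,k}; the server receives the sum over k.
    Y_t = sum(Xk @ rng.standard_normal((Xk.shape[1], M)) for Xk in X_blocks)
    sketches.append(Y_t)

Phi = np.hstack(sketches)                        # accumulated I x (T*M) global sketch
U_hat = np.linalg.svd(Phi, full_matrices=False)[0][:, :r]
U_true = np.linalg.svd(X, full_matrices=False)[0][:, :r]
overlap = np.linalg.norm(U_true.T @ U_hat) ** 2 / r
print(f"subspace overlap after {T} slots: {overlap:.3f}")   # approaches 1 as T grows
\end{verbatim}
Increasing the number of slots $T$ in this toy enlarges the accumulated sketch and drives the reported subspace overlap towards one, mirroring the progressive-accuracy behavior described above.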
\subsubsection{On-the-Fly Sub-space Detection at the Server} In the case with channel noise, the server can produce an estimate of the desired principal eigenspace, $\mathbf{U}$, based on the noisy observations accumulated up to the current symbol slot. Specifically, in slot $t$, given the current and past received matrix symbols, $\{\mathbf{Y}_{\ell}\}_{\ell\leq t}$, and the receive beamformers $\{\mathbf{A}_{\ell}\}_{\ell\leq t}$ (discussed in the sequel), the server estimates $\mathbf{U}$ as
\begin{equation}
\tilde{\mathbf{U}} = f(\{\mathbf{Y}_{\ell}\}_{\ell\leq t},\{\mathbf{A}_{\ell}\}_{\ell\leq t}),
\end{equation}
where the estimator $f(\cdot)$ is optimized in the sequel to minimize the DTD error in~\eqref{eq:error}. Following the above discussion, the procedure for FlyCom$^2$-based DTD is summarized as follows.
\begin{equation*}
\boxed{
\begin{array}{l}
\mathrm{To\ compute\ the\ principal\ eigenspace\ of\ the\ global\ unfolding\ matrix\ }\mathbf{X}\mathrm{,\ initialize\ }t=1,\ \\
\mathrm{and\ FlyCom}^2\mathrm{-based\ DTD\ repeats:}\\
\quad \mathrm{Step\ 1}: \mathrm{Each\ device,\ say\ device\ }k, \mathrm{\ computes\ a\ local\ sketch\ using\ } \mathbf{S}_{t,k} = \mathbf{X}_{k} \mathbf{\Omega}_{t,k}; \\
\quad \mathrm{Step\ 2}: \mathrm{The\ server\ receives\ } \mathbf{Y}_t^{\top} = \sum_k\mathbf{X}_k\mathbf{\Omega}_{t,k} + \mathbf{Z}_t^{\top}\mathbf{A}_t^{\top}\mathrm{\ via\ MIMO\ AirComp};\\
\quad \mathrm{Step\ 3}: \mathrm{The\ server\ computes\ an\ estimate\ of\ the\ eigenspace\ of\ }\mathbf{X}: \\
\quad\quad\quad\quad\ \ \tilde{\mathbf{U}} = f(\{\mathbf{Y}_{\ell}\}_{\ell\leq t},\{\mathbf{A}_{\ell}\}_{\ell\leq t});\\
\quad \mathrm{Step\ 4}: \mathrm{Set\ }t=t+1.\\
\mathrm{Until\ }t=T.
\end{array}
}
\end{equation*}
The key component of the FlyCom$^2$ framework, the on-the-fly sub-space estimator $f(\cdot)$, is designed in Section~\ref{section:design}. The performance of FlyCom$^2$-based DTD is further enhanced using a sketch-selection algorithm designed in Section~\ref{section:selection}.
\section{Optimal Sub-space Detection for FlyCom$^2$}\label{section:design}
In this section, we design the sub-space detection function of the FlyCom$^2$ framework, namely $f(\cdot)$ mentioned in the preceding section. It consists of two stages -- pre-processing of received symbols and the subsequent sub-space estimation, which are summarized in Algorithm~\ref{algo:detection} and designed in the following sub-sections. Furthermore, the resultant DTD error is analyzed.
\subsection{Pre-Processing of Received Matrix Symbols}
The function of pre-processing is to accumulate received matrix symbols from slot $1$ to the current slot, $t$, and to generate from them an effective matrix for the ensuing sub-space detection. This operation is instrumental for the on-the-fly detection to achieve a progressive performance improvement. The design of the pre-processing takes several steps. First, since the transmitted symbol $\mathbf{X}\mathbf{F}_{t}$ is real but the channel noise is complex, the real part of the received symbols, namely $\mathbf{Y}_t$ in~\eqref{eq:receivedsymbol}, gives an effective observation of the transmitted symbol\footnote{It is possible to transmit the coefficients of $\mathbf{X}\mathbf{F}_t$ over both the in-phase and quadrature channels, which halves the air latency. The extension is straightforward (see, e.g.~\cite[Section II]{chen2022analog}) but complicates the notation without providing new insights. Hence, only the in-phase channel is used in this work.}.
Let $\tilde{\mathbf{Y}}_t$ denote the effective observation in slot $t$ and $\tilde{\mathbf{Z}}_t$ the real part of $\mathbf{A}_t\mathbf{Z}_t$. It follows that \begin{equation}\label{eq:observations} \tilde{\mathbf{Y}}_t = \Re\{{\mathbf{Y}_t^{\top}}\} = \mathbf{X}\mathbf{F}_{t}+\tilde{\mathbf{Z}}_t^{\top}. \end{equation} Second, the relation between the eigenspace of $\mathbf{X}$ and the accumulated observations up to the current slot is derived as follows. To this end, let the SVD of $\mathbf{X}$ be expressed as \begin{equation}\label{eq:original_decomposition} \mathbf{X} =\mathbf{U}_{\mathbf{X}}\mathbf{\Sigma}_{\mathbf{X}}\mathbf{V}_{\mathbf{X}}^{\top}, \end{equation} where $\mathbf{\Sigma}_{\mathbf{X}}$ comprises descending singular values along its diagonal. Then, the accumulation of the current and past observations, denoted by $\hat{\mathbf{Y}}_t = [\tilde{\mathbf{Y}}_1,\tilde{\mathbf{Y}}_2,\cdots,\tilde{\mathbf{Y}}_t]$, is a random Gaussian matrix as shown below. \begin{Lemma}\label{Lemma:GaussianMatrix} \emph{The accumulated aggregations, $\hat{\mathbf{Y}}_t$, can be decomposed as \begin{equation*} \hat{\mathbf{Y}}_t = \mathbf{C}^{\frac{1}{2}}\mathbf{W}\mathbf{D}^{\frac{1}{2}}, \end{equation*} where $\mathbf{W}$ is a random Gaussian matrix with i.i.d. $\mathcal{N}(0,1)$ entries, the left covariance matrix $\mathbf{C} = \mathbf{X}\mathbf{X}^{\top} + \frac{1}{2tM}\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})\mathbf{I}_I$, and the right one $\mathbf{D} = \frac{\mathsf{Tr}(\mathbf{X}^{\top}\mathbf{X})\mathbf{I}_{tM} + \frac{1}{2}I\sigma^2\mathsf{diag}(\mathbf{A}_1\mathbf{A}_1^{H},\cdots,\mathbf{A}_t\mathbf{A}_t^{H})}{\mathsf{Tr}(\mathbf{X}^{\top}\mathbf{X}) + \frac{1}{2tM}I\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})}$.} \end{Lemma} \begin{proof} See Appendix~\ref{Apdx:representation} \end{proof} Third, based on~\eqref{eq:original_decomposition}, the covariance matrix, $\mathbf{C}$, in Lemma~\ref{Lemma:GaussianMatrix} can be rewritten as \begin{align} \mathbf{C} &= \mathbf{U}_{\mathbf{X}}\left(\mathbf{\Sigma}_{\mathbf{X}}^2 + \frac{1}{2tM}\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})\mathbf{I}_I\right)\mathbf{U}_{\mathbf{X}}^{\top},\nonumber\\ & \overset{\triangle}{=}\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}, \end{align} where we define $\mathbf{\Lambda} = \mathbf{\Sigma}_{\mathbf{X}}^2 + \frac{1}{2tM}\sigma^2\sum_{\ell\leq t}\mathsf{Tr}(\mathbf{A}_{\ell}^{H}\mathbf{A}_{\ell})\mathbf{I}_I$. Hence, the square root, $\mathbf{C}^{\frac{1}{2}}$, is given as \begin{equation}\label{eq:effectivecovariance} \mathbf{C}^{\frac{1}{2}} = \mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}. \end{equation} \begin{Remark}[Effective Sketching with Channel Noise]\label{Remark:analogtransmission} \emph{According to Lemma~\ref{Lemma:GaussianMatrix} and~\eqref{eq:effectivecovariance}, the accumulated observations, $\hat{\mathbf{Y}}_t$, gives a sketch of the matrix $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}$ using a Gaussian DRM with the covariance of $\mathbf{D}$. The matrix $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}$ and the unfolding matrix $\mathbf{X}$ share the eigenspace, $\mathbf{U}_{\mathbf{X}}$. 
Furthermore, as $\mathbf{\Lambda}$ retains the descending sort of the singular values, the top-$r$ principal eigenspace of $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}$ is identical to that of $\mathbf{X}$ for any $1\leq r\leq M$.}
\end{Remark}
Finally, according to the preceding discussion, the desired principal eigenspace of $\mathbf{X}$ can be estimated from the sketch $\hat{\mathbf{Y}}_t$. It is known that randomized sketching prefers DRMs with i.i.d. entries~\cite{randomprojection2011}. To improve the performance, $\hat{\mathbf{Y}}_t$ can be further ``whitened'' to equalize the right covariance $\mathbf{D}$. Specifically, let $\hat{\mathbf{Y}}_t$ be right-multiplied by $\mathbf{D}^{-\frac{1}{2}}$ to yield the final \emph{effective observation} in time slot $t$ as
\begin{equation}\label{eq:whitening}
\boxed{\mathbf{\Phi}_t = \hat{\mathbf{Y}}_t\mathbf{D}^{-\frac{1}{2}} = \mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{W}.}
\end{equation}
To compute the covariance matrix $\mathbf{D}$, the server needs to acquire the value of $\mathsf{Tr}(\mathbf{X}\mathbf{X}^{\top}) = \sum_{k}\mathsf{Tr}(\mathbf{X}_k\mathbf{X}_k^{\top})$. Note that each term in the summation, say $\mathsf{Tr}(\mathbf{X}_k\mathbf{X}_k^{\top})$, relates to the covariance of the transmitted symbols, $\mathsf{E}[\mathbf{S}_{t,k}^{\top}\mathbf{S}_{t,k}]$, as
\begin{equation*}
\mathsf{E}[\mathbf{S}_{t,k}^{\top}\mathbf{S}_{t,k}] = \mathsf{E}[\mathbf{\Omega}_{t,k}^{\top}\mathbf{X}_k^{\top}\mathbf{X}_k\mathbf{\Omega}_{t,k}] = \mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)\mathbf{I}_{M}.
\end{equation*}
Then, $\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)$ can be acquired at the server by one-time feedback from device $k$.
\subsection{Optimal Sub-space Estimation}
In this sub-section, the principal eigenspace of the unfolding matrix $\mathbf{X}$, with its dimension fixed as $r$, is estimated from the effective observation given in~\eqref{eq:whitening} under the ML criterion. First, using~\eqref{eq:whitening}, the distribution of the observation $\mathbf{\Phi}_t$ conditioned on $\mathbf{U}_{\mathbf{X}}$ and $\mathbf{\Lambda}$ is given as
\begin{equation*}
\mathsf{Pr}\left(\mathbf{\Phi}_t|\mathbf{U}_{\mathbf{X}},\mathbf{\Lambda}\right) = \frac{\exp\left(-\frac{tM}{2}\mathsf{Tr}\left(\mathbf{\Phi}_t^{\top}\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{-1}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\right)\right)}{(2\pi)^{ItM/2}\mathsf{det}(\mathbf{\Lambda})^{tM/2}}.
\end{equation*}
This yields the logarithmic likelihood function required for ML estimation as
\begin{align}\label{eq:likelihood}
\mathcal{L}\left(\mathbf{U}_{\mathbf{X}};\mathbf{\Phi}_t,\mathbf{\Lambda}\right) &= \ln\left(\mathsf{Pr}\left(\mathbf{\Phi}_t|\mathbf{U}_{\mathbf{X}},\mathbf{\Lambda}\right)\right),\nonumber\\
& = -\frac{tM}{2}\mathsf{Tr}\left(\mathbf{\Phi}_t^{\top}\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}^{-1}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\right) -\frac{ItM}{2}\ln(2\pi) - \frac{tM}{2}\ln(\mathsf{det}(\mathbf{\Lambda})).
\end{align}
Let $\mathbf{U}$ denote the desired $r$-dimensional principal components of $\mathbf{X}$, as obtained from the splitting $\mathbf{U}_{\mathbf{X}} = [\mathbf{U},\mathbf{U}^{\bot}]$. It is observed from~\eqref{eq:likelihood} that only the first term depends on the variable $\mathbf{U}$.
Then, letting $(\tilde{\mathbf{U}},\tilde{\mathbf{U}}_{\mathbf{X}})$ denote an estimate of $(\mathbf{U},\mathbf{U}_{\mathbf{X}})$, the ML-estimation problem can be formulated as
\begin{equation}\label{problem:MLestimation}
\begin{aligned}
\mathop{\min}_{\tilde{\mathbf{U}}}&\ \ \mathsf{Tr}\left(\mathbf{\Phi}_t^{\top}\tilde{\mathbf{U}}_{\mathbf{X}}\mathbf{\Lambda}^{-1}\tilde{\mathbf{U}}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\right)\\
\mathrm{s.t.}&\ \ \tilde{\mathbf{U}}_{\mathbf{X}}^{\top}\tilde{\mathbf{U}}_{\mathbf{X}}=\tilde{\mathbf{U}}_{\mathbf{X}}\tilde{\mathbf{U}}_{\mathbf{X}}^{\top} = \mathbf{I},\\
&\ \ \tilde{\mathbf{U}}_{\mathbf{X}}=[\tilde{\mathbf{U}},\tilde{\mathbf{U}}^{\bot}].
\end{aligned}
\end{equation}
Despite the non-convex orthogonality constraints, the problem in~\eqref{problem:MLestimation} admits an optimal closed-form solution, derived as follows. First, define the eigenvalue decomposition $\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top} = \mathbf{Q}\mathbf{\Gamma}\mathbf{Q}^{\top}$ with $\mathbf{Q}=[\mathbf{q}_1,\cdots,\mathbf{q}_I]$ and $\mathbf{\Gamma}=\mathsf{diag}(\gamma_1,\cdots,\gamma_I)$, where the eigenvalues are arranged in descending order. Then, given $\mathbf{\Lambda}=\mathsf{diag}(\lambda_1,\cdots,\lambda_I)$ and $\mathbf{U}_{\mathbf{X}}=[\mathbf{u}_1,\cdots,\mathbf{u}_I]$, the objective function of~\eqref{problem:MLestimation} can be rewritten as
\begin{equation*}
\mathsf{Tr}\left(\mathbf{\Lambda}^{-1}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}\mathbf{U}_{\mathbf{X}}\right) = \sum_{i=1}^I\sum_{j=1}^I\lambda_j^{-1}\gamma_i(\mathbf{q}_i^{\top}\mathbf{u}_j)^2.
\end{equation*}
Next, define $x_{ij} = \mathbf{q}_i^{\top}\mathbf{u}_j$, so that the constraints in~\eqref{problem:MLestimation} can be rewritten as $\sum_{i=1}^Ix_{ij}^2 = \mathbf{u}_j^{\top}\mathbf{Q}\mathbf{Q}^{\top}\mathbf{u}_j=1$ and $\sum_{j=1}^Ix_{ij}^2 = \mathbf{q}_i^{\top}\mathbf{U}_{\mathbf{X}}\mathbf{U}_{\mathbf{X}}^{\top}\mathbf{q}_i=1$. Without loss of optimality, these constraints can be relaxed to $\sum_{i=1}^Ix_{ij}^2\geq 1$ and $\sum_{j=1}^Ix_{ij}^2\geq 1$. This allows the problem in~\eqref{problem:MLestimation} to be reformulated as
\begin{equation}\label{problem:new}
\begin{aligned}
\mathop{\min}_{\{x_{ij}\}}&\ \ \sum_{i=1}^I\sum_{j=1}^I\lambda_j^{-1}\gamma_ix_{ij}^2\\
\mathrm{s.t.}&\ \ \sum_{i=1}^Ix_{ij}^2\geq 1,\ \forall j,\\
&\ \ \sum_{j=1}^Ix_{ij}^2\geq 1,\ \forall i.
\end{aligned}
\end{equation}
Since $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_I$, the objective of~\eqref{problem:new} subject to the constraints is lower bounded as
\begin{equation}
\sum_{i=1}^I\sum_{j=1}^I\lambda_j^{-1}\gamma_ix_{ij}^2\geq \sum_{i=1}^I\lambda_i^{-1}\gamma_i.
\end{equation}
The lower bound is achieved by letting $x_{ii} = 1$, $\forall i$, and $x_{ij} = 0$, $\forall i\neq j$. The optimal solution for~\eqref{problem:new} follows as shown below.
\begin{Proposition} \emph{Based on the ML criterion, in slot $t$, the optimal on-the-fly estimate of the $r$-dimensional principal components of the unfolding matrix, $\mathbf{X}$, is denoted as $\tilde{\mathbf{U}}^{\star}$ and given as \begin{equation}\label{eq:MLestimate} \tilde{\mathbf{U}}^{\star} = [\mathbf{q}_1,\cdots,\mathbf{q}_r] = \mathcal{S}_r\left(\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}\right), \end{equation} where $\mathbf{\Phi}_t$ is the effective observation in slot $t$ as given in~\eqref{eq:whitening} and we recall $\mathcal{S}_r(\cdot)$ to yield the $r$-dimensional principal eigenspace of its argument.} \end{Proposition} \begin{Remark}[Minimum Number of FlyCom$^2$ Operations] \emph{For the result in~\eqref{eq:MLestimate} to hold, the dimensions of the current effective observations $\mathbf{\Phi}_t$ should be larger than those of $\mathbf{U}$, i.e. $tM\geq r$. This implies that the FlyCom$^2$ should run at least $t\geq r/M$ rounds to enable the estimation of an $r$-dimensional principal eigenspace of the tensor.} \end{Remark} \begin{algorithm}[t] \caption{On-the-Fly Sub-space Detection for FlyCom$^2$ Based DTD} \label{algo:detection} \textbf{Initialize:} Received in-phase matrix symbols $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ in slot $t$\; \textbf{Perform:}\\ \begin{enumerate} \item[1:] \emph{Aggregation:} Aggregate all received matrix symbol $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ into $\hat{\mathbf{Y}}_t = [\tilde{\mathbf{Y}}_1,\cdots,\tilde{\mathbf{Y}}_t]$; \item[2:] \emph{Whitening:} Compute the whitened version, $\mathbf{\Phi}_t$, of the aggregated matrix $\hat{\mathbf{Y}}_t$ by~\eqref{eq:whitening}; \item[3:] \emph{Sub-space extraction:} Compute the first $r$ eigenvectors of $\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}$ and aggregate them into $\tilde{\mathbf{U}}$. \end{enumerate} \textbf{Output:} $\tilde{\mathbf{U}}$ used as the principal eigenspace of the unfolding matrix $\mathbf{X}$. \end{algorithm} \subsection{DTD Error Analysis} Based on the optimal sub-space detection designed in the preceding sub-section, we mathematically quantify the key feature of FlyCom$^2$ that the DTD error gracefully decreases with communication. The existing error analysis for random sketching does not target distributed implementation and hence requires no communication links~\cite{randomprojection2011,StreamingTD}. Then, the new challenge for the current analysis arises from the need to account for distortion increased by the air interface based on MIMO AirComp. By tackling the challenge, we derive deterministic and probabilistic bounds on the DTD error defined in~\eqref{eq:error}. \subsubsection{Deterministic Error Bound} As the unfolding matrix comprises $r$ principal components, its singular values can be represented as $\mathbf{\Sigma}_{\mathbf{X}} = \mathsf{diag}(\sigma_1,\sigma_2,\cdots,\sigma_I)$ with $\sigma_1=\cdots=\sigma_r\gg\sigma_{r+1}\geq\cdots\geq\sigma_I$, where we assume the same principal singular values following the literature (see, e.g.~\cite{StreamingTD}). \begin{Lemma}\label{Lemma:deviation} \emph{Consider the DTD of the unfolding matrix $\mathbf{X}$ in tensor decomposition that has an $r$-dimensional principal eigenspace $\mathbf{U}= [\mathbf{u}_1,\cdots,\mathbf{u}_r]$ and the singular values $\mathbf{\Sigma}_{\mathbf{X}}$. 
The estimation of $\mathbf{U}$ as in~\eqref{eq:MLestimate} yields the DTD error given as
\begin{equation}\label{eq:rewrite_error}
d(\tilde{\mathbf{U}},\mathbf{X})=\sum_{i=1}^r\sum_{j\geq r+1}(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2 + \sum_{i\geq r+1}\sigma_i^2.
\end{equation}
}
\end{Lemma}
\begin{proof}
See Appendix~\ref{Apdx:deviation}.
\end{proof}
On the right-hand side of~\eqref{eq:rewrite_error}, the first term, $\sum_{i=1}^r\sum_{j\geq r+1}(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2$, represents the error due to random sketching; the second term, $\sum_{i\geq r+1}\sigma_i^2$, represents the residual error due to the non-zero non-principal components of $\mathbf{X}$. Next, we characterize the behavior of each error term, $(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2$. Let $\tilde{\mathbf{u}}_i$ and $\mathbf{u}_j$ denote the $i$-th and $j$-th ($i\leq r<j$) eigenvectors of the sample covariance matrix $\frac{1}{tM}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}$ and the covariance matrix $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}$, respectively. The error in~\eqref{eq:rewrite_error} is caused by the perturbation $\mathbf{\Delta}=\frac{1}{tM}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}-\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}$. Using this fact, we obtain the following result.
\begin{Lemma}\label{Lemma:UPofEachTerm}
\emph{Consider a fixed realization of $\mathbf{W}$ in the effective observation $\mathbf{\Phi}_t$ in~\eqref{eq:whitening}. Then the error term, $(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2$, in Lemma~\ref{Lemma:deviation} is upper bounded as
\begin{equation*}
(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2\leq \max \left\{4,\delta_{ij}^2\right\}\frac{\Vert\mathbf{\Delta}\mathbf{u}_j\Vert_2^2}{\sigma_i^2-\sigma_j^2},\quad i\leq r<j,
\end{equation*}
where $\delta_{ij} \overset{\triangle}{=} \frac{\min\{2|\tilde{\lambda}_i-\lambda_i|,(\sigma_i^2-\sigma_j^2)\}}{|\tilde{\lambda}_i-\lambda_j|}$ with $\lambda_i$ and $\tilde{\lambda}_i$ being the $i$-th eigenvalues of $\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}\mathbf{U}_{\mathbf{X}}^{\top}$ and $\frac{1}{tM}\mathbf{\Phi}_t\mathbf{\Phi}_t^{\top}$, respectively. }
\end{Lemma}
\begin{proof}
See Appendix~\ref{Apdx:deterministicerror}.
\end{proof}
The upper bound in Lemma~\ref{Lemma:UPofEachTerm} suggests two scaling regions of the DTD error, namely $\delta_{ij} \geq 2$ and $\delta_{ij}<2$. Invoking the well-known Weyl's theorem (see, e.g.~\cite{WeylsTheorem}), the norm of the perturbation $\mathbf{\Delta}$, and hence the value of $\delta_{ij}$, decreases as FlyCom$^2$ progresses in time. This is aligned with the result in Fig.~\ref{subfig:eigenvalues}, where the average value of $\{\delta_{ij}\}$ is observed to decrease with increasing communication time $t$. To simplify the analysis, we focus on the case of $\delta_{ij}\leq 2$, $\forall i\leq r<j$, by assuming sufficiently large $t$. In this case, the upper bound in Lemma~\ref{Lemma:UPofEachTerm} is simplified as
\begin{equation}\label{eq:simplifiedUP}
(\sigma_i^2-\sigma_j^2)\langle\tilde{\mathbf{u}}_i,\mathbf{u}_j\rangle^2\leq \frac{4\Vert\mathbf{\Delta}\mathbf{u}_j\Vert_2^2}{\sigma_i^2-\sigma_j^2},\quad i\leq r<j.
\end{equation}
Next, based on Lemma~\ref{Lemma:deviation} and~\eqref{eq:simplifiedUP}, the desired deterministic error bound is derived as follows.
\begin{Theorem}[Expected Error Bound]\label{Theorem:expectation}
\emph{Given the receive beamformers $\{\mathbf{A}_{\ell}\}_{\ell\leq t}$ of FlyCom$^2$-based DTD and $\delta_{ij}\leq 2$, $\forall i\leq r<j$, the expected error can be bounded as
\begin{equation*}
\mathsf{E}[d(\tilde{\mathbf{U}},\mathbf{X})]\leq \frac{4}{tM}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2} + \sum_{i\geq r+1}\sigma_i^2,
\end{equation*}
where $\delta_{ij}$ and $\lambda_j$ follow those in Lemma~\ref{Lemma:UPofEachTerm}.}
\end{Theorem}
\begin{proof}
See Appendix~\ref{Apdx:expectation}.
\end{proof}
\begin{figure*}[t]
\centering
\subfigure[Average value of $\{\delta_{ij}\}$ versus communication time]{\label{subfig:eigenvalues}\includegraphics[width=0.48\textwidth]{figures/eigenvalues-eps-converted-to.pdf}}
\subfigure[Expected DTD error versus communication time]{\label{subfig:expectederror}\includegraphics[width=0.48\textwidth]{figures/validation-eps-converted-to.pdf}}
\caption{Validation of theoretical results under the settings of $r = 12$, $I=100$, $\mathbf{\Sigma}_{\mathbf{X}} = \mathsf{diag}(1,\cdots,1,\frac{1}{2},\frac{1}{3},\cdots,\frac{1}{88})$, and $\mathbf{A}_t\mathbf{A}_t^{\top} =\frac{1}{10\sigma}\mathbf{I}$.}
\label{fig:validation}
\end{figure*}
The error bound in Theorem~\ref{Theorem:expectation} is compared numerically with the exact error and with that of centralized tensor decomposition in Fig.~\ref{subfig:expectederror}. One can observe that the bound captures the trend of decreasing DTD error as $t$ progresses. In particular, it shows that under a small perturbation,
\begin{equation*}
\mathsf{E}[d(\tilde{\mathbf{U}},\mathbf{X})]\propto \frac{1}{tM}.
\end{equation*}
\subsubsection{Probabilistic Error Bound}
We derive in the sequel a probabilistic bound on the DTD error using the method of \emph{concentration of measure}. A relevant useful result is given below.
\begin{Lemma}[McDiarmid's Inequality~\cite{McDiarmid}]\label{Lemma:McDiarmid}
\emph{Let $g$ be a positive function on independent variables $\{W_m\}$ satisfying the bounded difference property:
\begin{equation*}
\sup_{\{W_{M}\}_{M\neq m},W_m,W_M}|g(\{W_{M}\}_{M\neq m},W_m) - g(\{W_{M}\}_{M\neq m},W_M)|\leq c_m,\ \forall m,
\end{equation*}
where $\{c_m\}$ are constants and $W_M$ is an i.i.d. copy of $W_m$. Then, for any $\epsilon>0$,
\begin{equation*}
\mathsf{Pr}\left[g(\{W_m\})-\mathsf{E}[g(\{W_m\})]\geq\epsilon\right]\leq \exp\left(-\frac{2\epsilon^2}{\sum_mc_m^2}\right).
\end{equation*}}
\end{Lemma}
Using Lemma~\ref{Lemma:McDiarmid}, the desired result is obtained as shown below.
\begin{Theorem}[Probabilistic Error Bound]\label{Theorem:probability}
\emph{Given receive beamformers $\{\mathbf{A}_{\ell}\}_{\ell\leq t}$ and $\delta_{ij}\leq 2$, $\forall i\leq r<j$, for any $\epsilon\geq 0$, the error of the FlyCom$^2$-based DTD can be upper bounded as
\begin{equation*}
d(\tilde{\mathbf{U}},\mathbf{X})\leq \frac{4(1 + \epsilon)}{tM}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2} + \sum_{i\geq r+1}\sigma_i^2,
\end{equation*}
with probability at least $\left[1-\exp\left(-\frac{\epsilon^2}{2\kappa^8}\right)\right]\mathsf{erf}\left(\frac{\kappa}{\sqrt{2}}\right)^{tM(I-r)}$, where $\mathsf{erf}(\cdot)$ denotes the error function defined as $\mathsf{erf}(y) = \frac{2}{\sqrt{\pi}}\int_{0}^y\exp(-x^2)\mathrm{d}x$ and $\kappa\geq 1$.}
\end{Theorem}
\begin{proof}
See Appendix~\ref{Apdx:probability}.
\end{proof}
The upper bound on the DTD error in Theorem~\ref{Theorem:probability} holds \emph{almost surely} if the constant $\epsilon$ is sufficiently large. Comparing Theorem~\ref{Theorem:expectation} and Theorem~\ref{Theorem:probability}, one can make the important observation that, as the communication time ($t$) progresses, both the DTD error and its expectation vanish at the same rate of
\begin{equation}\label{eq:scalinglaw}
\mathsf{Error} \propto \frac{1}{tM}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2}.
\end{equation}
Another observation is that the non-principal components of the tensor contribute to the DTD error, but the effect is negligible when the eigen-gap is large.
\section{Optimal Sketch Selection for FlyCom$^2$}\label{section:selection}
Discarding aggregated sketches that have been transmitted under unfavourable channel conditions can improve the FlyCom$^2$ performance. This motivates us to design a sketch-selection scheme in this section.
\subsection{Threshold-Based Sketch Selection}
First, we follow the approach in~\cite{GXZhuAirComp2019} to design the receive beamforming, $\{\mathbf{A}_t\}$, for MIMO AirComp. To this end, we decompose $\mathbf{A}_t$ as $\mathbf{A}_t = \eta_t\mathbf{U}_{\mathbf{A}_t}$, where the positive scalar $\eta_t$ is called a denoising factor and $\mathbf{U}_{\mathbf{A}_t}$ is an $N_{\text{t}}\times N_{\text{r}}$ unitary matrix. Following similar steps as in~\cite{GXZhuAirComp2019}, we can show that, to minimize the DTD error bounds in Theorems~\ref{Theorem:expectation} and~\ref{Theorem:probability}, the beamformer component should be aligned with the channels of the devices as
\begin{equation}\label{eq:receive_alignment}
\mathbf{U}_{\mathbf{A}_t}^{\top}=\mathcal{S}_{N_{\text{t}}}\left(\frac{1}{K}\sum_{k}\lambda_{\mathbf{H}_{t,k}}\mathbf{U}_{\mathbf{H}_{t,k}}\mathbf{U}_{\mathbf{H}_{t,k}}^{\top}\right),
\end{equation}
where $\lambda_{\mathbf{H}_{t,k}}$ and $\mathbf{U}_{\mathbf{H}_{t,k}}$ denote the $N_{\text{t}}$-th eigenvalue and the first $N_{\text{t}}$ eigenvectors of $\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{\top}$, respectively. Furthermore, the denoising factor $\eta_t$ should cope with the weakest channel by being
\begin{equation}\label{eq:denoising}
\eta_t = \max_k \frac{1}{IP} \mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)\mathsf{Tr}\left(\left(\mathbf{U}_{\mathbf{A}_t}\mathbf{H}_{t,k}\mathbf{H}_{t,k}^{H}\mathbf{U}_{\mathbf{A}_t}^{H}\right)^{-1}\right).
\end{equation}
It follows from~\eqref{eq:receive_alignment} and~\eqref{eq:denoising} that $\mathsf{Tr}(\mathbf{A}_t^{H}\mathbf{A}_t) = \eta_tN_{\text{t}} = \eta_tM$ and that $\lambda_j$ in the DTD error bounds in Theorems~\ref{Theorem:expectation} and~\ref{Theorem:probability} can be expressed as
\begin{equation}\label{eq:lambda_j}
\lambda_j=\sigma_j^2+\frac{\sigma^2}{2t}\sum_{\ell\leq t}\eta_{\ell},
\end{equation}
which shows that the error relies only on the denoising factors up to the current time slot. The result also suggests that it is preferable to select from the received sketches $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ those associated with a small $\eta_{\ell}$, which reflects a favourable channel condition. Naturally, we can derive a threshold-based selection scheme as follows:
\begin{equation}\label{eq:selection}
\boxed{\tilde{\mathbf{Y}}_{\ell}\ \mathrm{is\ selected\ if}\ \eta_{\ell}\leq \eta_{\mathrm{th}},\ \forall \ell\leq t,}
\end{equation}
where the threshold $\eta_{\mathrm{th}}$ is optimized in the sequel.
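The following NumPy sketch (illustrative; not the authors' implementation) shows how the receive-beamformer component in \eqref{eq:receive_alignment}, the denoising factor in \eqref{eq:denoising}, the ZF transmit beamformers in \eqref{eq:example:ZF}, and the selection rule in \eqref{eq:selection} could be evaluated for random Rayleigh channels. The channel draws, the fed-back values of $\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)$, and the placeholder threshold (here simply the median of the observed denoising factors) are assumptions for illustration; the actual threshold optimization is the subject of the next subsection.
\begin{verbatim}
# Illustrative NumPy sketch (not the authors' code) of receive-beamformer
# alignment (eq:receive_alignment), the denoising factor (eq:denoising),
# ZF transmit beamforming (eq:example:ZF), and threshold-based selection
# (eq:selection).  Channels, powers, and the placeholder threshold are
# assumptions; H H^T is read as the Hermitian form H H^H for complex channels.
import numpy as np

rng = np.random.default_rng(1)
K, Nr, Nt, I, P = 20, 16, 4, 100, 1.0
trace_Xk = rng.uniform(40.0, 60.0, size=K)   # Tr(X_k^T X_k), one-time feedback

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def beamformers(H):                          # H: (K, Nr, Nt) channels in one slot
    S = np.zeros((Nr, Nr), dtype=complex)
    for k in range(K):                       # weighted sum of dominant channel subspaces
        w, V = np.linalg.eigh(H[k] @ H[k].conj().T)          # ascending eigenvalues
        S += w[-Nt] * (V[:, -Nt:] @ V[:, -Nt:].conj().T) / K
    U_A = np.linalg.eigh(S)[1][:, -Nt:].conj().T             # Nt x Nr component of A_t
    eta = max(trace_Xk[k] / (I * P) *
              np.trace(np.linalg.inv(U_A @ H[k] @ H[k].conj().T @ U_A.conj().T)).real
              for k in range(K))                              # denoising factor
    A = eta * U_A
    B = [(A @ H[k]).conj().T @ np.linalg.inv(A @ H[k] @ H[k].conj().T @ A.conj().T)
         for k in range(K)]                                   # ZF: A_t H_k B_k = I
    return A, B, eta

etas = [beamformers(crandn(K, Nr, Nt))[2] for _ in range(30)]
eta_th = np.median(etas)                     # placeholder threshold (optimized later)
selected = [ell for ell, e in enumerate(etas) if e <= eta_th]
print(f"selected {len(selected)} of {len(etas)} aggregated sketches")
\end{verbatim}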
\subsection{Threshold Optimization}
The threshold, $\eta_{\mathrm{th}}$, in~\eqref{eq:selection} needs to be optimized to minimize the error in~\eqref{eq:scalinglaw}. Solving the problem is hindered by the fact that the singular values, $\{\sigma_j\}$, in the DTD error are not available at the server in advance. We tackle this problem by designing a practical optimization scheme. To this end, we resort to using an upper bound on the DTD error as shown below.
\begin{Lemma}\label{Lemma:selection}
\emph{Let $\tilde{M}$ denote the number of aggregated sketches selected from $\{\tilde{\mathbf{Y}}_{\ell}\}_{\ell\leq t}$ based on~\eqref{eq:selection} with the threshold $\eta_{\mathrm{th}}$. Then the DTD error in~\eqref{eq:scalinglaw} satisfies
\begin{equation*}
\frac{1}{\tilde{M}}\sum_{i=1}^r\sum_{j\geq r+1}\frac{\lambda_j^2+\lambda_j\mathsf{Tr}(\mathbf{\Lambda})}{\sigma_i^2-\sigma_j^2}\leq \frac{c}{\tilde{M}}\left[1+\frac{r\sigma^2\eta_{\mathrm{th}}}{2\sum_{k}\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)}\right]^2,
\end{equation*}
where $c$ is a constant. }
\end{Lemma}
\begin{proof}
See Appendix~\ref{Apdx:selection}.
\end{proof}
Lemma~\ref{Lemma:selection} suggests that a sub-optimal threshold can be obtained by minimizing the error upper bound. Let $S$ denote a set and $|S|$ its cardinality. Then, the threshold-optimization problem can be formulated as
\begin{equation}\label{problem:threshold}
\begin{aligned}
\mathop{\min}_{\eta_{\mathrm{th}}}&\ \ \frac{1}{|S|}\left[1+\frac{r\sigma^2\eta_{\mathrm{th}}}{2\sum_{k}\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)}\right]^2\\
\mathrm{s.t.}&\ \ S = \left\{\eta_{\ell}\,|\,\eta_{\ell}\leq \eta_{\mathrm{th}}, \ell\leq t\right\},
\end{aligned}
\end{equation}
where $\eta_{\ell}$ follows the definition in~\eqref{eq:denoising}. One can observe that the objective function of~\eqref{problem:threshold} is monotonically increasing with respect to the variable $\eta_{\mathrm{th}}$ if $\tilde{M}$ is fixed. Such piecewise monotonicity of the objective function of~\eqref{problem:threshold} renders a linear-search solution method (e.g., bisection search) infeasible, but allows the optimal solution, $\eta_{\mathrm{th}}^{\star}$, to be restricted to a finite set, namely $\eta_{\mathrm{th}}^{\star} \in \left\{\eta_1,\eta_2,\cdots,\eta_t\right\}$. Then, $\eta_{\mathrm{th}}^{\star}$ can be found by simple exhaustive enumeration as follows. Let $\tilde{M}_{\ell}$ denote the number of selected sketches corresponding to the threshold fixed as $\eta_{\mathrm{th}} = \eta_{\ell}$. Define
\begin{equation*}
\ell^{\star} = \arg\min_{\ell\leq t}\frac{1}{\tilde{M}_{\ell}}\left[1+\frac{r\sigma^2\eta_{\ell}}{2\sum_{k}\mathsf{Tr}(\mathbf{X}_k^{\top}\mathbf{X}_k)}\right]^2.
\end{equation*}
The optimal threshold solving the problem in~\eqref{problem:threshold} is $\eta_{\mathrm{th}} = \eta_{\ell^{\star}}$. Two remarks are in order. First, the above search for the optimal threshold has complexity linearly proportional to $t$, the number of accumulated sketches at the server. Second, the implementation of the optimization at the server requires feedback of a scalar from each device, namely $\mathsf{Tr}\left(\mathbf{X}_k^{\top}\mathbf{X}_k\right)$ from device $k$.
\section{Experimental Results}\label{section:experiment}
\subsection{Experimental Settings}
First, the MIMO AirComp system is configured with the following settings. There are $K=20$ edge devices connected to the server.
The array sizes at each device and at the server are set as $N_{\text{t}}=4$ and $N_{\text{r}}=16$, respectively. The Rayleigh channel model is adopted, in which each MIMO channel matrix $\mathbf{H}_{t,k}$ comprises i.i.d. $\mathcal{CN}(0,1)$ entries and different channels are independent. Second, following the DTD literature (see, e.g.~\cite{StreamingTD}), we use a synthetic data model. Considering the computation of the $n$-th factor matrix, the unfolding matrix of the data tensor has the size of $I_n\times (\prod_{j=1,j\neq n}^N I_j) = 100\times 1500$, and its columns are uniformly distributed over the devices. Under such settings, each local sketch has length $I_n = 100$, which is smaller than a single channel-coherence duration (see, e.g.~\cite{Bjornson2016Tenmyths}, for justification). To demonstrate the performance of the proposed FlyCom$^2$ for data with a parameterized range of spectral distributions, the singular values of the unfolding matrix are set to decay at a polynomial rate:
\begin{equation*}
\mathbf{\Sigma}_{\mathbf{X}} = \mathsf{diag}\left(1,\cdots,1,\frac{1}{2^{\xi}},\frac{1}{3^{\xi}},\cdots,\frac{1}{(I_n-r)^{\xi}}\right),
\end{equation*}
where the first $r=12$ principal singular values are fixed as $1$ and $\xi>0$ controls the decay rate of the residual values. Furthermore, the left and right eigenspaces of the unfolding matrix are generated as those of random matrices with i.i.d. $\mathcal{N}(0,1)$ entries~\cite{StreamingTD}. Third, we consider two benchmarking schemes that are variants of SVD-based DTD.
\begin{itemize}
\item Centroid SVD-DTD: Devices compute the local principal eigenspaces $\{\hat{\mathbf{U}}_k\}$ of their on-device data samples by using SVD, and the server then aggregates these local results as $\mathbf{P}=\frac{1}{K}\sum_k\hat{\mathbf{U}}_k\hat{\mathbf{U}}_k^{\top}$. The principal eigenspace of $\mathbf{P}$ represents the centroid of all local estimates $\{\hat{\mathbf{U}}_k\}$ on the Grassmannian manifold and is extracted to form a global estimate of the ground truth~\cite{JF2019estimation2019,chen2022analog} (see the illustrative sketch later in this section).
\item Alignment SVD-DTD: The scheme follows a similar procedure as above except for aggregating the local results, $\{\hat{\mathbf{U}}_k\}$, as $\mathbf{P}=\frac{1}{K}\sum_k\hat{\mathbf{U}}_k\mathbf{J}_k$, where the orthogonal matrices $\{\mathbf{J}_k\}$ are alignment matrices that are optimized by using past global estimates to improve the system performance~\cite{VC2021DPCA}.
\end{itemize}
The aggregation operations in both benchmark schemes are implemented using MIMO AirComp~\cite{GZhu2021WCM, GXZhuAirComp2019}, as in FlyCom$^2$, for a fair comparison.
\subsection{Performance Gain of Sketch Selection}
\begin{figure*}[t]
\centering
\subfigure[SNR $= 0\mathrm{dB}$]{\label{subfig:0dB}\includegraphics[width=0.32\textwidth]{figures/Compare_selection_0dB_decay2-eps-converted-to.pdf}}
\subfigure[SNR $= 5\mathrm{dB}$]{\label{subfig:5dB}\includegraphics[width=0.32\textwidth]{figures/Compare_selection_5dB_decay2-eps-converted-to.pdf}}
\subfigure[SNR $= 10\mathrm{dB}$]{\label{subfig:10dB}\includegraphics[width=0.32\textwidth]{figures/Compare_selection_10dB_decay2-eps-converted-to.pdf}}
\caption{Error-performance comparison between FlyCom$^2$ with and without sketch selection.}
\label{fig:selection}
\end{figure*}
In Fig.~\ref{fig:selection}, we compare the error performance of FlyCom$^2$ between the cases with and without sketch selection.
The communication time is measured by the total number of symbol slots used in uploading local sketches, namely $tI_n$, where $I_n$ is the number of rows of the local sketches. It is observed from Fig.~\ref{fig:selection} that the proposed selection scheme helps to reduce the expected DTD error for different transmit SNRs. The gain emerges when the communication time exceeds an SNR-dependent threshold (e.g., $2500$ time slots for $10$ dB), as the total number of available sketches becomes sufficiently large. Another observation from Fig.~\ref{fig:selection} is that the DTD error decreases approximately in inverse proportion to the communication time, which is aligned with our conclusion in~\eqref{eq:scalinglaw}. In the sequel, FlyCom$^2$ is assumed to use sketch selection.
\begin{figure*}[t]
\centering
\subfigure[Spectral decay rate $\xi = 0.5$]{\label{subfig:decay05}\includegraphics[width=0.32\textwidth]{figures/benchmarks_10dB_100x1500_decay05-eps-converted-to.pdf}}
\subfigure[Spectral decay rate $\xi = 1$]{\label{subfig:decay1}\includegraphics[width=0.32\textwidth]{figures/benchmarks_10dB_100x1500_decay1-eps-converted-to.pdf}}
\subfigure[Spectral decay rate $\xi = 2$]{\label{subfig:decay2}\includegraphics[width=0.32\textwidth]{figures/benchmarks_10dB_100x1500_decay2-eps-converted-to.pdf}}
\caption{FlyCom$^2$ versus benchmark schemes, SNR $\gamma = 10~\mathrm{dB}$.}
\label{fig:benchmark}
\end{figure*}
\subsection{Error Performance of FlyCom$^2$}
While FlyCom$^2$ requires much simpler on-device computation than the benchmarking schemes (see Section~\ref{subsec:experiment_error}), we demonstrate in Fig.~\ref{fig:benchmark} that it can achieve comparable or even better error performance than the latter. Fig.~\ref{fig:benchmark} displays the curves of expected DTD error versus communication time. Note that the performance of the benchmarking schemes, with one-shot computation and communication, appears as single points in the figure. The results in Fig.~\ref{fig:benchmark} show that FlyCom$^2$-based DTD achieves decomposition accuracy comparable to the benchmarking schemes as time progresses. Furthermore, its performance improves as the decay rate ($\xi$) of the singular values increases, which validates our conclusion that large eigen-gaps help distinguish principal from non-principal eigenvectors during random sketching. For instance, for $\xi=1$, the proposed scheme approaches the centroid- and alignment-based SVD-DTD in performance for communication times larger than $800$ and $1700$ symbol slots, respectively. As $\xi$ increases to $2$, the former outperforms both of the latter schemes. Furthermore, one can also observe from Fig.~\ref{fig:benchmark} that the proposed on-the-fly framework realizes a flexible trade-off between decomposition accuracy and communication time, which is the distinctive feature of the design.
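As an illustration of the comparison above, the following self-contained sketch (our own noise-free toy, not the simulation code behind Fig.~\ref{fig:benchmark}) generates an unfolding matrix with the polynomial-decay spectrum of the experimental settings and contrasts the residual error of the Centroid SVD-DTD benchmark with that of an accumulated-global-sketch estimate; all parameter values are illustrative assumptions.
\begin{verbatim}
# Toy, noise-free comparison (ours) between the Centroid SVD-DTD benchmark and a
# sketching-based estimate, using the polynomial-decay spectrum (here xi = 2).
import numpy as np

rng = np.random.default_rng(2)
I, J, K, r, xi = 100, 1500, 20, 12, 2.0

spec = np.concatenate([np.ones(r), 1.0 / np.arange(2, I - r + 2) ** xi])
U_L = np.linalg.qr(rng.standard_normal((I, I)))[0]
V_R = np.linalg.qr(rng.standard_normal((J, I)))[0]           # J x I orthonormal columns
X = U_L @ np.diag(spec) @ V_R.T                               # unfolding with spectrum spec
X_blocks = np.array_split(X, K, axis=1)

def residual(U):                                              # DTD error of eq:error
    return np.linalg.norm(X - U @ (U.T @ X)) ** 2

P = np.zeros((I, I))                                          # Centroid SVD-DTD
for Xk in X_blocks:
    Uk = np.linalg.svd(Xk, full_matrices=False)[0][:, :r]
    P += Uk @ Uk.T / K
U_centroid = np.linalg.eigh(P)[1][:, -r:]

tM = 48                                                       # accumulated sketch size
U_sketch = np.linalg.svd(X @ rng.standard_normal((J, tM)),
                         full_matrices=False)[0][:, :r]

print("centroid residual:", residual(U_centroid))
print("sketch residual  :", residual(U_sketch))
print("best rank-r      :", np.sum(spec[r:] ** 2))
\end{verbatim}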
\begin{figure*}[t]
\centering
\subfigure[Computation complexity ($I=100$, $N_{\text{t}}=4$)]{\label{subfig:complexity}\includegraphics[width=0.48\textwidth]{figures/complexity-eps-converted-to.pdf}}
\subfigure[Passes of raw data in memory]{\label{subfig:memory}\includegraphics[width=0.48\textwidth]{figures/memory_passes-eps-converted-to.pdf}}
\caption{Computation-cost comparison between FlyCom$^2$-based DTD and the benchmarking schemes.}
\label{fig:computation}
\end{figure*}
\subsection{Device Computation Costs of FlyCom$^2$}\label{subsec:experiment_error}
In Fig.~\ref{fig:computation}, we compare two kinds of computational costs at the devices, namely complexity and memory passes, between FlyCom$^2$ and the benchmark schemes. The complexity refers to the flop count of the computation, and the number of memory passes equals the number of memory visits needed for reading the data entries. The computational advantage of FlyCom$^2$ is demonstrated by comparing the cost of the matrix multiplication in random sketching with that of the deterministic SVD used in the one-shot benchmarking schemes. Specifically, given $I\times J$ local unfolding matrices, deterministic SVD has complexity proportional to $\min\{I,J\}^2\times \max\{I,J\}$~\cite{SVDComplexity2019}; based on matrix multiplication, the complexity of FlyCom$^2$ to yield an $N_{\text{t}}$-dimensional sketch at each time slot is $IJN_{\text{t}}$. For the schemes in comparison, their curves of computation complexity versus sample size are plotted in Fig.~\ref{subfig:complexity}. One can observe that the proposed FlyCom$^2$ dramatically reduces the devices' complexity, by more than an order of magnitude. On the other hand, Fig.~\ref{subfig:memory} displays the curves of the number of memory passes versus the principal dimensionality, $r$. The proposed design requires a constant number of memory passes for matrix multiplication, as opposed to SVD, whose number of passes increases linearly with the principal dimensionality. Specifically, the number of memory passes is reduced using FlyCom$^2$ by $10$ times and $30$ times for $r=10$ and $r=30$, respectively.
\section{Conclusion}\label{section:conclusion}
We have presented a FlyCom$^2$ framework to support the progressive computation of DTD in mobile networks. Through the use of the random sketching technique at devices, the traditional one-shot high-dimensional mobile communication and computation is reduced to low-dimensional operations spread over multiple time slots. Thereby, the resource constraints of devices are overcome. Furthermore, FlyCom$^2$ achieves its distinctive feature of progressive improvement of DTD accuracy with increasing communication time, providing robustness against link disruption. To develop the FlyCom$^2$-based DTD framework, we have designed an on-the-fly sub-space estimator and a sketch-selection scheme to ensure close-to-optimal system performance. Beyond DTD, high-dimensional communication and computation pose a general challenge for machine learning and data analytics in wireless networks. We expect that FlyCom$^2$ can be further developed into a broad approach for the efficient deployment of relevant algorithms such as federated learning and distributed optimization. For the current FlyCom$^2$ targeting DTD, its extension to accommodate other wireless techniques such as broadband transmission and radio resource management is also a direction worth pursuing.
{ "arxiv_id": "2302.14266", "language": "en", "timestamp": "2023-03-01T02:07:28", "url": "https://arxiv.org/abs/2302.14266", "yymm": "2302" }
\section{Introduction}
Accurate predictions of charm-meson lifetimes are challenging due to strong-interaction contributions to the decay amplitudes, and precise lifetimes are important ingredients in many theoretical calculations as well as experimental measurements. The predictions must resort to effective models, such as the heavy-quark expansion~\cite{Neubert:1997gu,Uraltsev:2000qw,Lenz:2013aua,Lenz:2014jha,Kirk:2017juj,Cheng:2018rkz}, and precise lifetime measurements provide excellent tests of such models. The lifetime measurement with early Belle II data also demonstrates the excellent vertexing capability of the Belle II detector, which is essential for future analyses of decay-time-dependent effects. In this paper, we report the measurement of the $D^0$ and $D^+$ lifetimes by reconstructing $D^{*+}\to(D^0\to K^-\pi^+)\pi^+$ and $D^{*+}\to(D^+\to K^-\pi^+\pi^+)\pi^0$ decays using ${\rm 72~fb^{-1}}$ of data collected by the Belle II detector~\cite{belle2} (charge-conjugate decays are implied throughout). The $D^{*+}$ tag is required to suppress the combinatorial background. At SuperKEKB~\cite{skekb}, the $D^{*+}$ mesons are produced with a boost that displaces the $D^0$ and $D^+$ mesons. The decay time is estimated from the projection of this displacement, $\vec{L}$, onto the direction of the momentum, $\vec{p}$, as $t=m_{D}\vec{L}\cdot\vec{p}/|\vec{p}|^2$, where $m_D$ is the known mass of the relevant $D$ meson~\cite{pdg}. The uncertainty in the decay time, $\sigma_t$, is estimated by propagating the uncertainties in $\vec{L}$ and $\vec{p}$, including their correlations.
\section{Belle II detector}
The Belle II detector is built around the interaction region of the SuperKEKB $e^+e^-$ collider. The innermost part is a two-layer silicon-pixel detector (PXD), which together with a four-layer double-sided silicon-strip detector (SVD) and a central drift chamber (CDC) forms the tracking system. A time-of-propagation counter and an aerogel ring-imaging Cherenkov counter, covering the barrel and forward end-cap regions of the detector, respectively, are used for charged-particle identification. An electromagnetic calorimeter is used to reconstruct photons and electrons. All these components are kept inside a 1.5 T magnetic field. A dedicated system to identify $K^0_L$ mesons and muons is installed in the outermost part of the detector.
\section{Reconstruction}
$D^0\to K^-\pi^+$ and $D^+\to K^-\pi^+\pi^+$ candidates are reconstructed using charged tracks identified as kaons and pions. Each track is required to have at least one hit in the first layer of the PXD and one hit in the SVD. Tracks from the $D^0$ ($D^+$) must have at least 20 (30) hits in the CDC. The low-momentum $\pi^+$ from the $D^{*+}$ decay is a track consistent with originating from the interaction region and having at least one hit in the SVD and one hit in the CDC. The low-momentum $\pi^0$ is reconstructed from two photons via $\pi^0\to\gamma\gamma$. The $D^{*+}$ momentum in the $e^+e^-$ centre-of-mass frame is required to be greater than $2.5~(2.6)~{\rm GeV}/c$ to suppress $D^0$ ($D^+$) mesons coming from decays of bottom mesons. A global decay-chain vertex fit~\cite{vxf}, constraining the tracks according to the decay topology, is applied, and only candidates with fit $\chi^2$ probabilities larger than 0.01 are retained for further analysis. The mass of the $D^0$ ($D^+$) candidate is required to be between $1.75$ and $2.00~{\rm GeV}/c^2$.
The difference between the $D^{*+}$ and $D$ candidate masses, $\Delta M$, must satisfy $144.94 < \Delta M < 145.90~{\rm MeV}/c^2$ and $138 < \Delta M < 143~{\rm MeV}/c^2$ for $D^0$ and $D^+$ candidates, respectively. After applying these selections, approximately $171\times10^3$ signal $D^0$ candidates with a signal purity of 99.8\% are observed in the signal region, defined as $1.815<m(K^-\pi^+)<1.878~{\rm GeV}/c^2$. The signal region in $m(K^-\pi^+\pi^+)$ is defined as $1.855<m(K^-\pi^+\pi^+)<1.883~{\rm GeV}/c^2$ and contains approximately $59\times10^{3}$ signal candidates with a background contamination of 9\%. Mass distributions of the $D^0\to K^-\pi^+$ and $D^+\to K^-\pi^+\pi^+$ candidates are shown in Fig.~\ref{fig:massfit}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]{figs/fig1}\\
\caption{Mass distributions of (top) $D^0\to K^-\pi^+$ and (bottom) $D^+\to K^-\pi^+\pi^+$ candidates with fit projections overlaid. The vertical dashed and (for the bottom plot) dotted lines indicate the signal regions and the sideband, respectively.\label{fig:massfit}}
\end{figure}
\section{Lifetime extraction}
Lifetimes are extracted using unbinned maximum-likelihood fits to the $(t,\sigma_t)$ distributions of the candidates populating the signal regions. The signal probability-density function (PDF) is the convolution of an exponential function in $t$ with a resolution function that depends on $\sigma_t$, multiplied by the PDF of $\sigma_t$. The time constant of the exponential function is the lifetime. The PDF of $\sigma_t$ is a histogram template derived directly from the signal region of the data. In the $D^0$ case, the PDF of $\sigma_t$ is obtained assuming that all candidates in the signal region are signal decays. In the $D^+$ case, instead, the template is obtained from the candidates in the signal region after subtracting the distribution of the sideband data. Simulation shows that a double (single) Gaussian with a common mean describes the resolution function for the $D^0$ ($D^+$). The mean of the resolution function is allowed to float in the fit to account for a possible bias in the determination of the decay time; the width is the per-candidate $\sigma_t$ scaled by a free parameter $s$ to account for a possible misestimation of the decay-time uncertainty. In the $D^0$ case, the per-mille-level fraction of background candidates in the signal region is neglected and a systematic uncertainty is assigned for this. The sizable background contamination in the $D^+$ case is accounted for using the data sideband, $1.758<m(K^-\pi^+\pi^+)<1.814~{\rm GeV}/c^2$ or $1.936<m(K^-\pi^+\pi^+)<1.992~{\rm GeV}/c^2$. The background PDF consists of a zero-lifetime component and two exponential components, all convolved with a Gaussian resolution function having a free mean and a width corresponding to $s\sigma_t$. To better constrain the background parameters, a simultaneous fit to the candidates in the signal region and in the sideband is performed, with the background fraction constrained to the value obtained from a fit to $m(K^-\pi^+\pi^+)$. The lifetime fits are tested on simulated samples, and the returned lifetimes are consistent with the true values. The decay-time distributions of the data, with fit projections overlaid, are shown in Fig.~\ref{fig:lifetime-fit}. The $D^0$ and $D^+$ lifetimes are measured to be $410.5\pm 1.1\,({\rm stat})\pm0.8\,({\rm syst})$ fs and $1030.4\pm4.7\,({\rm stat})\pm3.1\,({\rm syst})$ fs, respectively~\cite{dl_prl}. The results are consistent with their respective world-average values~\cite{pdg}.
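To make the fit model concrete, the following minimal sketch (ours, not the analysis code) implements the core of the signal decay-time PDF described above: an exponential with lifetime $\tau$ convolved with a Gaussian resolution of floating mean $\mu$ and per-candidate width $s\sigma_t$, i.e., an exponentially modified Gaussian. The $\sigma_t$ template factor, the double-Gaussian option for the $D^0$, and the $D^+$ background components are omitted, and the toy numbers are illustrative assumptions.
\begin{verbatim}
# Minimal sketch (not the analysis code) of the signal decay-time model: an
# exponential with lifetime tau convolved with a Gaussian resolution of mean mu
# and per-candidate width s*sigma_t (an exponentially modified Gaussian).
import numpy as np
from scipy.special import erfc

def signal_pdf(t, sigma_t, tau, mu=0.0, s=1.0):
    """Exp(tau) convolved with Gauss(mu, s*sigma_t), evaluated at decay time t."""
    sig = s * sigma_t
    z = (mu + sig**2 / tau - t) / (np.sqrt(2.0) * sig)
    return 0.5 / tau * np.exp((mu - t) / tau + 0.5 * (sig / tau) ** 2) * erfc(z)

def nll(params, t, sigma_t):
    """Negative log-likelihood for a pure-signal sample (D0-like case)."""
    tau, mu, s = params
    return -np.sum(np.log(signal_pdf(t, sigma_t, tau, mu, s)))

# Toy usage: pseudo-data with tau = 410.5 fs and per-candidate smearing.
rng = np.random.default_rng(3)
sigma_t = rng.uniform(80.0, 180.0, size=10_000)               # fs, illustrative
t_obs = rng.exponential(410.5, size=10_000) + rng.normal(0.0, sigma_t)
print("NLL at generated parameters:", nll((410.5, 0.0, 1.0), t_obs, sigma_t))
\end{verbatim}
In practice the negative log-likelihood would be minimized over $(\tau,\mu,s)$ with a numerical optimizer, and, for the $D^+$, extended with the sideband-constrained background components described above.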
The systematic uncertainties are summarized in Table~\ref{tab:syst}, and the total systematic uncertainty is the sum in quadrature of the individual components.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]{figs/fig2}\\
\caption{Decay-time distributions of (top) $D^0\to K^-\pi^+$ and (bottom) $D^+\to K^-\pi^+\pi^+$ candidates in their respective signal regions with fit projections overlaid.\label{fig:lifetime-fit}}
\end{figure}
\begin{table}[t]
\centering
\caption{Systematic uncertainties.\label{tab:syst}}
\begin{tabular}{lcc}
\hline
Source & $\tau(D^0\to K^-\pi^+)$ [fs] & $\tau(D^+\to K^-\pi^+\pi^+)$ [fs]\\
\hline
Resolution model & 0.16 & 0.39 \\
Backgrounds & 0.24 & 2.52 \\
Detector alignment & 0.72 & 1.70 \\
Momentum scale & 0.19 & 0.48 \\
\hline
Total & 0.80 & 3.10 \\
\hline
\end{tabular}
\end{table}
\section{Systematic Uncertainty}
A small correlation between $t$ and $\sigma_{t}$ is neglected in our nominal fit model. To quantify its effect, 1000 signal-only samples of simulated events, each with the same size as the data, are fitted with the nominal PDF. Upper bounds of 0.16 fs and 0.39 fs on the average absolute deviation of the measured lifetimes from their true values are assigned as systematic uncertainties due to imperfect resolution modeling for $D^0\to K^-\pi^+$ and $D^+\to K^-\pi^+\pi^+$, respectively. A background contamination of 0.2\% is neglected in the signal region of $D^0\to K^-\pi^+$. To estimate its effect on our result, 500 simulated samples of $e^+e^-$ events with the same size and signal-to-background ratio as the data are fitted with the nominal model. The average absolute deviation of the fitted lifetime from the true value, after subtracting the uncertainty due to resolution modeling, is 0.24 fs and is assigned as the systematic uncertainty due to background contamination. The background in the $D^+\to K^-\pi^+\pi^+$ signal region is modeled using the data sideband. A mismatch between data and simulation in the sideband may indicate an imperfect description of the background components in the signal region by the sideband. A total of 1000 samples, prepared using pseudoexperiments in the signal region and simulated data in the sideband that reproduce the same level of disagreement, are fitted, and the average absolute difference between the measured and simulated lifetimes, 2.52 fs, is assigned as the systematic uncertainty due to background modeling. Misalignment of the tracking detectors may bias the decay-length determination and hence the lifetime. Two sources of uncertainty associated with the alignment are considered: the statistical precision and a possible systematic bias. The day-to-day difference between alignments in real data is used for the statistical contribution. Samples with the same size as the data are simulated with realistic misalignment effects, and the difference between the lifetime residual for a given misalignment configuration and that from a perfectly aligned sample is assigned as the systematic uncertainty.
\section{Conclusions}
In conclusion, the $D^0$ and $D^+$ lifetimes are measured using data collected by the Belle II experiment corresponding to an integrated luminosity of $72~{\rm fb}^{-1}$. The results are the most precise to date and are consistent with previous measurements.
{ "arxiv_id": "2302.14265", "language": "en", "timestamp": "2023-03-01T02:07:28", "url": "https://arxiv.org/abs/2302.14265", "yymm": "2302" }
\section{Introduction}
ML/AI is often (not entirely unjustifiably) thought of as an `existential threat' to model-based sciences, from physics to conventional control theory. In recent years, a framework has emerged \cite{lu2019deeponet,lu2021deeponet, li2020neural,li2021fourier}, initiated by George Karniadakis, his coauthors, and teams led by Anima Anandkumar and Andrew Stuart, which promises to unite the goals of physics and learning, rather than presenting learning as an alternative or substitute for first-principles physics. In this framework, often referred to as neural operators (NO), which is formulated as the learning of mappings from function spaces into function spaces and is particularly suitable for PDEs, solution/``flow'' maps can be learned after a sufficiently large number of simulations for different initial conditions. (In some cases, parameters of models can also be identified from experiments.)
\paragraph{Mappings of plant parameters to control gains and learning of those maps}
One cannot help but ask what the neural operator reasoning can offer to control theory, namely, to the design of controllers, observers, and online parameter estimators. This paper is the first venture in this direction, a breakthrough with further possibilities, and a blueprint (the first of a long series of steps) for learning PDE control designs and proving their stability. In control systems (feedback controllers, observers, identifiers), various kinds of nonlinear maps arise, some from vector spaces into vector spaces, others from vector or function spaces into function spaces. Some of the maps have time as an argument (making the domain infinite) and others are mappings from compact domains into compact image sets, such as the mappings converting system coefficients into controller coefficients, for example the mapping $K(A,B)$ for the closed-loop system $\dot x = Ax + Bu,\ u=Kx$ (under either pole placement or LQR). While learning nonlinear maps for various design problems for nonlinear ODEs would be worth a study of its own, in this initial work we focus one step beyond, on a benchmark PDE control class. Our focus on an uncomplicated---but unstable---PDE control class is for pedagogical reasons. Combining operator learning with {\em PDE backstepping} is complex enough even for the simplest-looking among PDE stabilization problems.
\paragraph{PDE backstepping control with the gain computation obviated using neural operators}
Consider 1D hyperbolic partial integro-differential equation systems of the general form $v_t(x,t) = v_x(x,t) + \lambda(x) v(x,t) + g(x) v(0,t) +\int_0^x f(x,y) v(y,t) dy$ on the unit interval $x\in[0,1]$, which are transformable, using an invertible backstepping ``pre-transformation'' introduced in \cite{Bernard2014}, into the simple PDE
\begin{eqnarray}
\label{eq-PDE}
u_t(x,t) &=& u_x(x,t) + \beta(x) u(0, t) \\
\label{eq-PDEBC}
u(1, t) &=& U(t).
\end{eqnarray}
Our goal is the design of a PDE backstepping boundary control
\begin{equation}\label{eq-bkstfbkk}
U(t) = \int_0^1 k(1-y) u(y,t) dy.
\end{equation}
Physically, \eqref{eq-PDE} is a ``transport process (from $x=1$ towards $x=0$) with recirculation'' of the outlet variable $u(0,t)$. Recirculation causes instability when the coefficient $\beta(x)$ is positive and large. This instability is prevented by the backstepping boundary feedback \eqref{eq-bkstfbkk} with the gain function $k(\cdot)$ as a kernel in the spatial integration of the measured state $u(y,t)$.
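To see the instability caused by recirculation concretely, the following is a minimal finite-difference sketch; the uniform grid, the unit CFL number (which makes the transport part exact), and the constant recirculation coefficient $\beta\equiv 5$ are assumptions made purely for illustration, not the simulation setup used later in the paper.
\begin{verbatim}
# Minimal sketch of u_t = u_x + beta(x) u(0,t), u(1,t) = 0 (open loop).
# Illustrative assumptions: uniform grid, dt = dx (unit CFL), beta = 5.
import numpy as np

N = 200
dx = 1.0 / N
dt = dx
x = np.linspace(0.0, 1.0, N + 1)
beta = 5.0 * np.ones_like(x)          # "positive and large" recirculation

u = np.exp(-50.0 * (x - 0.5) ** 2)    # some initial profile u(x, 0)
norms = []
for _ in range(int(2.0 / dt)):        # simulate up to t = 2
    u_new = np.empty_like(u)
    u_new[:-1] = u[1:] + dt * beta[:-1] * u[0]   # transport + recirculation
    u_new[-1] = 0.0                              # open loop: U(t) = 0
    u = u_new
    norms.append(float(np.sqrt(dx * np.sum(u ** 2))))

print(norms[::50])   # the L2 norm grows over time: open-loop instability
\end{verbatim}
Replacing the last boundary assignment with a discretization of \eqref{eq-bkstfbkk} closes the loop; producing the gain $k$ in that feedback, either exactly or through a neural operator, is the subject of the rest of the paper.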
(The full state $u(y,t)$ entering the feedback \eqref{eq-bkstfbkk} does not need to be measured, as explained in Remark \ref{rem-observer} at the end of Section \ref{sec-stabilityDeepONet}.) Backstepping produces the gain kernel $k$ for a given $\beta$. The mapping ${\cal K} : \beta \mapsto k$ is nonlinear and continuous, and we learn it. Why do we care to learn ${\cal K}$? The kernel {\em function} $k$ can always be computed for a particular $\beta$, so what is the interest in learning the {\em functional mapping/operator}? Once ${\cal K}$ is learned, $k$ no longer needs to be sought, for a new $\beta$, as a solution to a partial differential or integral equation. For the next/new $\beta$, finding $k$ is simply a ``function evaluation'' of the learned mapping ${\cal K}$. This provides benefits both in adaptive control, where, at each time step, the gain estimate $\hat k$ has to be computed for a new parameter update $\hat\beta$, and in gain scheduling for nonlinear PDEs, where the gain has to be recomputed at each current value of the state.
\begin{figure}[t] \centering \includegraphics{images/algorithm.pdf} \caption{An algorithmic representation of our design paradigm of employing neural operators in boundary control of PDEs. Three major step clusters are performed: (1) \underline{derivation} of the integral equations for the backstepping kernels, performed only once; (2) \underline{learning} of the mapping from the plant parameter functions into the backstepping kernel functions, also performed only once; and (3) \underline{implementation} of the controller for specific plant parameters. The task in the top box has been completed in \cite{krstic2008Backstepping}. In this paper, the task in the middle box is introduced and stability guarantees for the task in the bottom box are provided.} \label{fig:0} \end{figure}
As is well known, learning (ML in general, and its operator-learning varieties: DeepONet, FNO, LOCA, NOMAD, etc.) comes with an upfront price. Large data sets need to be produced first, and then large (possibly ``deep'') neural networks need to be trained. There is no exception to this in the approach we propose. For a large sample set of recirculation functions $\beta_i$, we need to first solve for the corresponding backstepping kernels $k_i$. After that, an NN approximation of ${\cal K}$ needs to be trained on that data set of the $(\beta_i, k_i)$ pairs. One can stop at producing the NN approximation of the mapping ${\cal K}$ and proceed with a heuristic use of the approximated gains $\hat k$. But we don't stop there. We ask whether the PDE system will still be stabilized with the NN-approximated gain kernel $\hat k$. Our main theoretical result is affirmative. With a large enough data set of solved pairs $(\beta_i, k_i)$, and a large enough trained (deep) NN, closed-loop stability is guaranteed for a new $\beta$ not in the training set. When ML is applied in the control context (as RL or other approaches), it is usually regarded as a model-free design. Our design, summarized in Figure \ref{fig:0}, is not model-free; it is model-based. It is only that the computational portion of this model-based (PDE backstepping) design is obviated through ML. Our learning is offline, not online as in adaptive control \cite{Bernard2014,Anfinsen2019Adaptive}.
\paragraph{Neural operator literature---a brief summary}
Neural operators are NN-parameterized maps for learning relationships between function spaces. They originally gained popularity due to their success in mapping PDE solutions while remaining discretization-invariant.
Generally, neural operators consist of three components: an encoder, an approximator, and a reconstructor \cite{lanthaler2022errorDeeponet}. The encoder is an interpolation from an infinite-dimensional function space to a finite-dimensional vector representation. The approximator aims to mimic the infinite-dimensional map using a finite-dimensional representation of both the domain function space and the target function space. The reconstructor then transforms the approximation output into the infinite-dimensional target function space. The implementations of the approximator and the reconstructor are generally coupled and can take many forms. For example, the original DeepONet \cite{lu2021deeponet} contains a ``branch'' net that represents the approximation network and a ``trunk'' net that builds a basis for the target function space. The outputs of the two networks are then taken in linear combination with each other to form the operator. FNO \cite{li2021fourier} utilizes the approximation network in the Fourier domain, where the reconstruction is done on a basis of trigonometric polynomials. LOCA \cite{kissas2022loca} integrates the approximation network and reconstruction step with a unified attention mechanism. NOMAD \cite{seidman2022nomad} extends the linear reconstructor map in DeepONet to a nonlinear map that is capable of learning on nonlinear submanifolds in function spaces. Many more extensions of the neural operator architectures, usually designed around domain-specific enhancements, are omitted here \cite{wang2021physicsinformedNeuralOperators,li2022dissipative,Pickering2022}. Another line of work, physics-informed neural networks (PINNs) \cite{raissi2019physics, karniadakis2021physics}, can be used as a generic solver of PDEs by adding a physics-constraint loss to the neural network training. However, PINNs need to be retrained for each new recirculation function $\beta$, and thus do not provide as much acceleration in the computation of the backstepping kernels as neural operators do.
\paragraph{Advances in learning-based control}
Among the first to demonstrate the stability of learning-based model predictive controllers (MPC) were the papers \cite{ASWANI20131216,rosolia2017learning}, which have since been followed in several directions. First, for nonlinear systems, deep learning-based approaches consist of jointly learning the controller and/or Lyapunov functions via NNs~\cite{tedrakeNNControl, chang2019neural, chen2020learning, chen2021learning, chen2021learning2, dawson2022safe, chen2022large}. \cite{chang2019neural} proposed a method for learning control policies and NN Lyapunov functions using an empirical Lyapunov loss, and then validating them using formal verification. \cite{chen2020learning, chen2021learning} generalize the method to learning Lyapunov functions for piecewise linear and hybrid systems, and \cite{chen2021learning2} to learning regions of attraction of nonlinear systems. In addition, \cite{taylor2019control, nguyen2021} have explored how learning-based control affects nominal systems with known Lyapunov functions, and \cite{boffi2021learning, pfrommer2022tasil, claudioNonlinear2023} studied the problem of learning stability certificates and stable controllers directly from data. In a similar vein, \cite{frank2022} has developed a provably stable data-driven algorithm based on system measurements and prior knowledge for linear time-invariant systems.
In a separate but related direction, many reinforcement learning (RL) \cite{Bert05,sutton2018reinforcement} control approaches have been developed over the past few years. On the one hand, model-based RL has been studied due to its superior sample efficiency and interpretable guarantees. The main focus has been on learning the system dynamics and providing closed-loop guarantees in \emph{finite time}, both for linear systems \cite{dean2018regret,chen2021black,lale2022reinforcement,faradonbeh2018finite,tsiamis2022learning} (and references therein) and for nonlinear systems~\cite{berkenkamp2017safe,kakade2020information,singh2021learning,lale2022kcrl}. For model-free RL methods, \cite{fazel2018global,mohammadi2021convergence,jiang2022,zhao2023global} proved the convergence of policy optimization, a popular model-free RL method, to the optimal controller for linear time-invariant systems, \cite{PANG2020109035, pramod2022} for linear time-varying systems, and \cite{tang2021analysis} for partially observed linear systems. See \cite{hu2022towards} for a recent review of policy optimization methods for continuous control problems such as the LQR, $H_{\infty}$ control, risk-sensitive control, LQG, and output feedback synthesis. For nonlinear systems, \cite{chow2018lyapunov,choi2020reinforcement, cui2022structured,shi2022stability} investigated policy optimization with stability guarantees in which the stability constraints are derived from control Lyapunov functions. In addition to policy optimization methods, \cite{vamvoudakis2010online, lewis2012reinforcement, bhasin2013novel, vamvoudakis2017q} have studied and proved the stability and asymptotic convergence of other model-free RL algorithms, such as actor-critic methods~\cite{vamvoudakis2010online, lewis2012reinforcement} and Q-learning~\cite{vamvoudakis2017q}, in control-affine systems. In the domain of cyber-physical systems (CPS), a theoretical framework has been developed for learning-based control to handle partially observable systems \cite{andreasSeparation2023}. Many advances have been made in learning-based control in games and multi-agent systems \cite{zhang2019policy, mazumdar2020gradient, fiez2020implicit, mojica2022stackelberg, zhang2021multi, mao2022provably, poveda2022fixed, vamvoudakis2017game, qu2020scalable, lin2021multi}. Convergence to Nash equilibria is characterized for various learning-based methods in zero-sum linear quadratic games~\cite{zhang2019policy}, continuous games~\cite{mazumdar2020gradient}, Stackelberg games~\cite{fiez2020implicit, mojica2022stackelberg}, Markov games~\cite{zhang2020model,mao2022provably}, and multi-agent learning over networked systems~\cite{qu2020scalable, lin2021multi, poveda2022fixed}. A recent review of learning-based control in games is given in~\cite{zhang2021multi}. We focus on learning-based control for PDE systems. In our previous work~\cite{shi2022machine}, we demonstrated the empirical success of using NOs for accelerating PDE backstepping observers, without theoretical guarantees. This work represents the first step towards using NOs for provably bypassing gain computations (with exponential stability guarantees) or directly learning the controller (with practical stability) in PDE backstepping.
\paragraph{Backstepping control of first-order hyperbolic PDEs}
The PDE system \eqref{eq-PDE}, \eqref{eq-PDEBC} is the simplest open-loop unstable PDE of any kind that can be of interest to researchers working on PDE stabilization by boundary control.
This system is treated here as a technical benchmark, as was also done in \cite{Bernard2014} and a number of other references offering methodological advances in PDE stabilization. System \eqref{eq-PDE}, \eqref{eq-PDEBC} is a particular case of a single-PDE hyperbolic class in \cite{krstic2008Backstepping}, for which PDE backstepping was first introduced in the hyperbolic setting. Coupled systems of first-order hyperbolic PDEs are of greater interest because they arise in fluid flows, traffic flows, elastic structures, and other applications. The first result on backstepping for a {\em pair} of coupled hyperbolic PDEs was in \cite{Coron2013Local}. The extension from two to $n+1$ hyperbolic PDEs, with actuation of only one and with counterconvection of $n$ other PDEs, was introduced in \cite{Meglio2013Stabilization}. An extension from $n+1$ to $n+m$ coupled PDEs, with actuation on $m$ ``homodirectional'' PDEs, was provided in \cite{Hu2016Control,hu2019boundary}. Redesigns that are robust to delays were provided in \cite{Auriol2018Delay1}. An extension from coupled hyperbolic PDEs to cascades with ODEs was presented in \cite{DIMEGLIO2018281}. An extension from hyperbolic PDE-ODE cascades to ``sandwiched'' ODE-PDE-ODE systems was presented in \cite{WANG2020109131}, and an event-triggered design for such systems was given in \cite{9319184}. The extension of PDE backstepping to output-feedback regulation with disturbances is proposed in \cite{DEUTSCHER201556,deutscher2018}. For coupled hyperbolic PDEs with unknown parameters, a comprehensive collection of adaptive control designs was provided in the book \cite{Anfinsen2019Adaptive}. Applications of backstepping to coupled hyperbolic PDE models of traffic are introduced in \cite{Yu2019Traffic,Yu2022}.
\paragraph{Paper outline and contributions}
After a brief introduction to the backstepping design for system \eqref{eq-PDE}, \eqref{eq-PDEBC} in Section \ref{sec-bkst-intro}, in Section \ref{sec-kernelLip} we prove that the backstepping kernel operator is locally Lipschitz between the spaces of continuous functions, with which we satisfy a sufficient condition for the existence of a neural operator approximation of a nonlinear operator to arbitrarily high accuracy---stated at the section's end in a formal result and illustrated with an example of approximating the operator $k={\cal K}(\beta)$. In Section \ref{sec-stabilityDeepONet} we present the first of our main results: the closed-loop stabilization (not merely practical but exponential) with a DeepONet-approximated backstepping gain kernel function. In Section \ref{sec-simulations} we present simulation results that illustrate stabilization under DeepONet-approximated gains. Then, in Section~\ref{sec-beta,u->u} we pose the question of whether we can approximate not only the gain kernel mapping $\beta(x)\mapsto k(x)$, as in Sections \ref{sec-kernelLip} and \ref{sec-stabilityDeepONet}, but also the entire feedback law mapping $(\beta(x),u(x,t)) \mapsto \int_0^1 k(1-y)u(y,t)dy$ at each time instant $t$; we provide an affirmative answer and a guarantee of semiglobal practical exponential stability under such a DeepONet approximation. In Section \ref{sims-fbklawapprox} we illustrate this feedback law approximation with a theory-confirming simulation.
Then, in Section \ref{sec-PIDEextension}, we present the paper's most general result, which we leave for the end for pedagogical reasons, since it deals with Volterra operator kernel functions of two variables, $(x,y)$, on a triangular domain, and requires continuity of mappings between spaces of functions that are not just continuous but continuously differentiable, so that not only the backstepping kernel but also the kernel's spatial derivatives are accurately approximable, as required for closed-loop stability. We close with a numerical illustration for this general case in Section \ref{sec-simulationsQ}. In summary, the paper's contributions are the PDE stabilization under DeepONet approximations of backstepping gain kernels (Theorems \ref{thm-stabDeepONet} and \ref{thm-stabDeepONet-gf}) and under the approximation of backstepping feedback laws (Theorem \ref{thm-semiglobalpractical}). Our stabilization results also hold for any other neural operators with a universal approximation property (shown for LOCA~\cite{kissas2022loca} and for FNO on the periodic domain~\cite{kovachki2021universal}).
\paragraph{Notation}
We denote convolution operations as
\begin{eqnarray}
( a * b)(x) = \int_0^x a(x-y) b(y) dy .
\end{eqnarray}
In the sequel, we suppress the arguments $x$ and $t$ wherever clear from the context. For instance, we write \eqref{eq-PDE}, \eqref{eq-PDEBC} compactly as $u_t=u_x +\beta u(0)$ and $u(1)=U$, where, from the context, the boundary values $u(0), u(1)$ depend on $t$ as well.
\section{Backstepping Design for a Transport PDE with `Recirculation'}
\label{sec-bkst-intro}
Consider the PDE system \eqref{eq-PDE}, \eqref{eq-PDEBC}. We employ the following backstepping transformation:
\begin{eqnarray}\label{eq-bkrsttransconv}
w = u - k\ast u,
\end{eqnarray}
i.e., $w(x, t) = u(x, t) - \int_0^x k(x- y)u(y, t) dy$, to convert the plant into the target system
\begin{eqnarray}\label{eq-targetPDE}
w_t &= &w_x \\
\label{eq-targetPDEBC}
w(1) &=& 0
\end{eqnarray}
with the help of feedback
\begin{equation}\label{sec-perfectfbk}
U = (k\ast u)(1),
\end{equation}
namely, $U(t) = \int_0^1 k(1- y)u(y, t) dy$. To yield the target system, $k$ must satisfy the integral/convolution equation
\begin{eqnarray} \label{eq:1.11}
k(x) = - \beta(x)+ \int_0^x \beta(x-y) k(y) dy
\end{eqnarray}
for $x\in[0,1]$. Note that, while this integral equation is linear in $k$ for a given $\beta$, the mapping from $\beta$ to $k$ is actually nonlinear, due to the product in the convolution of $\beta$ with $k$.
\section{Accuracy of Approximation of Backstepping Kernel Operator with DeepONet}
\label{sec-kernelLip}
An $n$-layer NN $f^\mathcal{N}:\mathbb{R}^{d_1}\rightarrow \mathbb{R}^{d_n}$ is given by
\begin{eqnarray} \label{eq-nn}
f^\mathcal{N}(x, \theta) := (l_n \circ l_{n-1} \circ ... \circ l_2 \circ l_1) (x,\theta)
\end{eqnarray}
where the layers $l_i$ start from $l_0 = x\in\mathbb{R}^{d_1}$ and continue as
\begin{equation}
l_{i+1}(l_i,\theta_{i+1}):= \sigma(W_{i+1} l_i + b_{i+1}), \quad i=0,\ldots,n-1 ,
\end{equation}
$\sigma$ is a nonlinear activation function, and weights $W_{i+1} \in \mathbb{R}^{d_{i+1} \times d_i}$ and biases $b_{i+1} \in \mathbb{R}^{d_{i+1}}$ are parameters to be learned, collected into $\theta_i\in \mathbb{R}^{d_{i+1}(d_i+1)}$, and then into $\theta = [\theta_1^{\rm T},\ldots, \theta_n^{\rm T}]^{\rm T}\in\mathbb{R}^{\sum_{i=1}^{n-1} d_{i+1}(d_i+1)}$. Let $\vartheta^{(k)}, \theta^{(k)} \in\mathbb{R}^{\sum_{i=1}^{k-1} d_{k,(i+1)}(d_{k,i}+1)}$ denote a sequence of NN weights.
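As a concrete reading of \eqref{eq-nn}, the following is a minimal sketch of the $n$-layer map; the choice of a $\tanh$ activation and of random weights is made purely for illustration and is not tied to any architecture used later in the paper.
\begin{verbatim}
# Minimal sketch of the n-layer feedforward map defined above.
# Illustrative assumptions: tanh activation, randomly initialized weights.
import numpy as np

def make_layer(d_in, d_out, rng):
    W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
    b = np.zeros(d_out)
    return lambda v: np.tanh(W @ v + b)

rng = np.random.default_rng(0)
dims = [10, 32, 32, 5]                # layer widths d_1, ..., d_n
layers = [make_layer(dims[i], dims[i + 1], rng)
          for i in range(len(dims) - 1)]

def f_N(x):
    """Composition l_n o ... o l_1 applied to the input x."""
    for layer in layers:
        x = layer(x)
    return x

print(f_N(np.ones(10)).shape)         # (5,)
\end{verbatim}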
A neural operator (NO) for approximating a nonlinear operator $\mathcal{G}: {\cal U} \rightarrow {\cal V}$ is defined as
\begin{eqnarray}
{\cal G}_\mathbb{N}(\mathbf{u}_m)(y) = \sum_{k=1}^p g^\mathcal{N} (\mathbf{u}_m; \vartheta^{(k)}) f^\mathcal{N} (y; \theta^{(k)})
\end{eqnarray}
where ${\cal U}, {\cal V}$ are function spaces of continuous functions $u \in {\cal U}, v \in {\cal V}$, $\mathbf{u}_m$ is the evaluation of the function $u$ at the points $x_1, \ldots, x_m$, $p$ is the number of chosen basis components in the target space, $y \in Y$ is the location at which the output function $v(y)$ is evaluated, and $g^\mathcal{N}$, $f^\mathcal{N}$ are NNs termed the branch and trunk networks. Note that $g^\mathcal{N}$ and $f^\mathcal{N}$ are not limited to feedforward NNs as in \eqref{eq-nn}, but can also be convolutional or recurrent.
\begin{theorem}\label{thm-DeepONet}
{\em (DeepONet universal approximation theorem \cite[Theorem 2.1]{lu2021advectionDeepONet}).}
Let $X \subset \mathbb{R}^{d_x}$ and $Y \subset \mathbb{R}^{d_y}$ be compact sets of vectors $x\in X$ and $y\in Y$, respectively. Let ${\cal U}:X\rightarrow U\subset \mathbb{R}^{d_u}$ and ${\cal V}:Y\rightarrow V\subset \mathbb{R}^{d_v}$ be sets of continuous functions $u(x)$ and $v(y)$, respectively. Let ${\cal U}$ be also compact. Assume the operator $\mathcal{G}: {\cal U} \rightarrow {\cal V}$ is continuous. Then, for all $\epsilon > 0$, there exist $m^*, p^* \in \mathbb{N}$ such that for each $m \geq m^*$, $p \geq p^*$, there exist $\theta^{(k)}, \vartheta^{(k)}$, neural networks $f^{\mathcal{N}}(\cdot ; \theta^{(k)}) , g^\mathcal{N}(\cdot ; \vartheta^{(k)}), k=1,\ldots, p$, and $ x_j \in X, j=1, \ldots, m$, with corresponding $\mathbf{u}_m = (u(x_1), u(x_2), \cdots, u(x_m))^{\rm T}$, such that
\begin{equation} \label{eq:2.14G}
|\mathcal{G}(u)(y) - \mathcal{G}_\mathbb{N}(\mathbf{u}_m)(y)| < \epsilon
\end{equation}
for all functions $u \in {\cal U}$ and all values $y \in Y$ of ${\cal G}(u)\in{\cal V}$.
\end{theorem}
\begin{definition}{\em (backstepping kernel operator).} \label{def:kernel}
A mapping ${\cal K}:\beta \mapsto k$ of $C^0[0,1]$ into itself, where $k={\cal K} (\beta)$ satisfies
\begin{eqnarray}\label{eq:2.12}
\mathcal{K}(\beta) = -\beta + \beta \ast\mathcal{K}(\beta),
\end{eqnarray}
namely, in the Laplace transform notation,
\begin{equation}
k=\mathcal{K}(\beta) := \mathscr{L}^{-1} \left\{\frac{\mathscr{L}\{ \beta\}}{\mathscr{L}\{\beta\} - 1}\right\}
\end{equation}
is referred to as the {\em backstepping kernel operator}.
\end{definition}
\begin{lemma}{\em (Lipschitzness of backstepping kernel operator ${\cal K}$).}\label{lemma:holder}
The kernel operator $\mathcal{K}:\beta \mapsto k$ in Definition \ref{def:kernel} is locally Lipschitz. Specifically, for any $B>0$ the operator $\mathcal{K}$ satisfies
\begin{eqnarray}
||\mathcal{K}(\beta_1) - \mathcal{K}(\beta_2) ||_\infty \leq C ||\beta_1 - \beta_2||_\infty
\end{eqnarray}
with the Lipschitz constant
\begin{equation}\label{eq-LipC}
C= {\rm e}^{3B}
\end{equation}
for any pair of functions $(\beta_1, \beta_2)$ such that $\|\beta_1\|_\infty, \|\beta_2\|_\infty \leq B$, where $\|\cdot\|_\infty$ is the supremum norm over the argument of $\beta$ and $k$.
\end{lemma}
\begin{proof}
Start with the iteration $k^0 = -\beta, \ k^{n+1} =k^0 + \beta\ast k^{n}, \ n\geq 0, $ and consider the iteration
\begin{equation}\label{eq-Deltakn}
\Delta k^{n+1} =\beta\ast \Delta k^n, \qquad \Delta k^0 = k^0 = - \beta
\end{equation}
for the difference $\Delta k^n = k^n-k^{n-1}$, which sums to
\begin{eqnarray}\label{eq-Deltaknsum}
k = \sum_{n=0}^\infty \Delta k^n.
\end{eqnarray}
Next, for $\bar\beta = \|\beta\|_\infty$ and all $x\in[0,1]$,
\begin{eqnarray}\label{eq-Deltaknbound}
\left|\Delta k^{n}(x) \right| \leq {\bar\beta^{n+1}x^{n}\over n!},
\end{eqnarray}
which is established by induction by postulating $ \left|\Delta k^{n-1}(x) \right| \leq {\bar\beta^{n}x^{n-1}\over (n-1)!}$ and by computing, from \eqref{eq-Deltakn},
\begin{eqnarray}\label{eq-Deltakninduction}
\left|\Delta k^{n}(x) \right| &=& \left| \int_0^x\beta (x-y) \Delta k^{n-1}(y)dy\right| \nonumber \\
&\leq& \bar\beta \int_0^x {\bar\beta^{n}y^{n-1}\over (n-1)!}\, dy \leq {\bar\beta^{n+1}x^{n}\over n!}.
\end{eqnarray}
And then, \eqref{eq-Deltaknbound} and \eqref{eq-Deltaknsum} yield
\begin{equation}\label{eq-kbound}
|k(x)|\leq \bar\beta {\rm e}^{\bar\beta x} .
\end{equation}
Next, for $k_1 = {\cal K}(\beta_1)$ and $k_2 = {\cal K}(\beta_2)$ it is easily verified that
\begin{equation}
k_1 - k_2 = \beta_1 \ast(k_1-k_2) -\delta\beta +\delta\beta\ast k_2
\end{equation}
where $\delta\beta = \beta_1 - \beta_2$. Define the iteration
\begin{eqnarray}\label{eq-deltakndelta}
\delta k^{n+1} &=& \beta_1\ast \delta k^n \\
\label{eq-deltak0}
\delta k^0 &=& - \delta\beta + \delta\beta \ast k_2
\end{eqnarray}
which verifies $ k_1 - k_2 = \sum_{n=0}^\infty \delta k^n$. Noting that \eqref{eq-kbound} ensures that $k_2={\cal K}(\beta_2)$ verifies $|k_2(x)|\leq \bar\beta_2 {\rm e}^{\bar\beta_2 x}$, we get from \eqref{eq-deltak0} that
\begin{equation}
|\delta k^0 (x)| \leq \left(1+\bar\beta_2{\rm e}^{\bar\beta_2 x}\right)\overline{\delta\beta} \leq \mu_2 \overline{\delta\beta}
\end{equation}
where $ \mu_2 := 1+\bar\beta_2{\rm e}^{\bar\beta_2 }$ and $ \overline{\delta\beta} = \|\beta_1-\beta_2\|_\infty$. It can then be shown by induction, by mimicking the chain of inequalities \eqref{eq-Deltakninduction}, that, for all $x\in[0,1]$,
\begin{equation}
\left|\delta k^n(x)\right| \leq \mu_2\overline{\delta\beta} {\bar\beta_1^{n}x^{n}\over n!}
\end{equation}
and therefore it follows that, for all $x\in[0,1]$,
\begin{eqnarray}
|k_1(x) - k_2(x)|& \leq &\left(1+\bar\beta_2{\rm e}^{\bar\beta_2}\right) {\rm e}^{\bar\beta_1 x} \|\beta_1-\beta_2\|_\infty \nonumber \\
& \leq & {\rm e}^{3B } \|\beta_1-\beta_2\|_\infty .
\end{eqnarray}
Hence, local Lipschitzness is proven with \eqref{eq-LipC}.
\end{proof}
\begin{corollary}{\em (to Theorem~\ref{thm-DeepONet}).}\label{cor-DeepONet}
Consider the backstepping kernel operator ${\cal K}$ in Definition \ref{def:kernel}. For all $B>0$ and $\epsilon > 0$, there exist $p^*(B,\epsilon), m^*(B,\epsilon) \in \mathbb{N}$, with an increasing dependence on $B$ and $1/\epsilon$, such that for each $p \geq p^*$ and $m \geq m^*$ there exist $\theta^{(k)}, \vartheta^{(k)}$, neural networks $f^{\mathcal{N}}(\cdot ; \theta^{(k)}) , g^\mathcal{N}(\cdot ; \vartheta^{(k)}), k=1,\ldots, p$, and $ x_j \in [0,1], j=1, \ldots, m$, with corresponding $\mathbf{\beta}_m = (\beta(x_1), \beta(x_2), \cdots, \beta(x_m))^{\rm T}$, such that
\begin{equation} \label{eq:2.14}
|\mathcal{K}(\beta )(x) - \mathcal{K}_\mathbb{N}(\mathbf{\beta}_m)(x)| < \epsilon
\end{equation}
holds for all Lipschitz $\beta$ with the property that $\|\beta\|_\infty\leq B$.
\end{corollary}
\begin{figure*}[t]
\includegraphics{images/fig1.pdf}
\caption{Examples of $\beta$ and $\hat{k}$ for Chebyshev polynomials defined as $\beta = 6 \cos (\gamma \cos^{-1}(x))$, with $\gamma = 3$ and $7.35$ on the left and right, respectively. The $\gamma$ parameter controls the wave frequency of $\beta$ and therefore affects the resulting kernel. Additionally, the DeepONet absolute approximation error between $\hat{k}$ and $k$ is shown. The DeepONet approximates the ``smoother'' function on the left with better precision than the large, oscillating function on the right. }
\label{fig:1}
\end{figure*}
So the backstepping kernel is approximable, qualitatively, but how many neurons and how much data are needed for a given $\epsilon$? We recall a result on the minimum-sized DeepONet.
\begin{proposition}{\em (DeepONet size for kernel operator approximation {\cite[Theorem 3.3 and Remark 3.4]{lu2021advectionDeepONet}}).}
If the kernel operator defined in \eqref{eq:2.12} is Lipschitz (or at least H\"{o}lder) continuous, a DeepONet that approximates it to a {\em required} error tolerance $\epsilon >0$ indicated by \eqref{eq:2.14} employs a number of data point evaluations of $\beta$ on the order of
\begin{equation}
m \sim \epsilon^{-1},
\end{equation}
a number of basis components in the interpolation when reconstructing into $C^0[0,1]$ on the order of
\begin{equation}
p \sim \epsilon^{-\frac{1}{2}},
\end{equation}
a number of layers $L_{g^\mathcal{N}}$ and of neurons per layer $N_{g^\mathcal{N}}$ in the branch network whose product is on the order of
\begin{eqnarray}
N_{g^\mathcal{N}} \cdot L_{g^\mathcal{N}} \sim \left({1\over\epsilon}\right)^{\frac{1}{\epsilon}},
\end{eqnarray}
and a total size of the trunk network on the order of
\begin{eqnarray}
|\theta^{(k)}| \sim \left(\frac{3}{2} \log \frac{1}{\epsilon}\right)^2.
\end{eqnarray}
\end{proposition}
\begin{example} \label{example1}
In Figure \ref{fig:1} we present two examples of approximation of $k$ using a DeepONet approximation of ${\cal K}(\beta)$ for given $\beta_1$ and $\beta_2$, which are taken as Chebyshev polynomials $\beta(x) = 6 \cos (\gamma \cos^{-1}(x))$. The DeepONet is trained to approximate kernels from 900 samples with $\gamma \in \text{uniform}[2, 8]$.
\end{example}
\section{Stability under Kernel Approximation with DeepONet}
\label{sec-stabilityDeepONet}
For our stability study under an approximate (imperfect) kernel, we begin with a derivation of the target PDE system under a backstepping transformation employing a DeepONet approximation of the backstepping kernel. For a given $\beta$, let $\hat k = \hat{\mathcal{K}} (\beta)$, where $\hat{\mathcal{K}} = \mathcal{K}_\mathbb{N}$, denote an NO approximation of the exact backstepping kernel $k$, whose existence is established in Corollary \ref{cor-DeepONet} for DeepONet. Let
\begin{equation}
\tilde k = k-\hat k
\end{equation}
denote the approximation error. Finally, let the backstepping transformation with the approximate kernel $\hat k$ be
\begin{eqnarray}\label{eq-bksttrans}
\hat{w} = u - \hat{k} * u.
\end{eqnarray}
With routine calculations, employing the approximate backstepping transformation and the feedback
\begin{equation}\label{eq-khatfbk}
U = (\hat k \ast u) (1)
\end{equation}
we arrive at the target system
\begin{eqnarray}\label{eq-what-target1}
\hat{w}_t &=& \hat{w}_x + \delta\hat{w}(0) \\
\label{eq-what-target2}
\hat w(1) &=&0,
\end{eqnarray}
where the function $\delta(x)$ is defined as
\begin{equation}\label{eq:2.30}
\delta = -\tilde k + \beta\ast\tilde k.
\end{equation}
Next, we proceed with a Lyapunov analysis.
\begin{lemma}{\em (a Lyapunov estimate).} \label{lem-Lyap}
Given arbitrarily large $B>0$, for all Lipschitz $\beta$ with $\|\beta\|_\infty \leq B$, and for all neural operators $\hat{\cal K}$ with approximation accuracy $\epsilon \in (0, \epsilon^*)$, where
\begin{equation} \label{eq-eps*}
\epsilon^*(B) = {c {\rm e}^{-c/2}\over 1+B}
\end{equation}
the Lyapunov functional
\begin{equation}
V(t) = \int_0^1 {\rm e}^{cx} \hat w^2(x,t) dx, \qquad c>0,
\end{equation}
satisfies the following estimate along the solutions of the target system \eqref{eq-what-target1}, \eqref{eq-what-target2},
\begin{equation}
V(t) \leq V(0) {\rm e}^{-c^*t},
\end{equation}
for
\begin{equation}\label{eq-c*}
c^* =c- \frac{e^c}{c}\epsilon^2 \left(1+ B \right)^2 >0.
\end{equation}
The admissible approximation error $\epsilon^*(B)$, given by \eqref{eq-eps*}, is maximized with $c=2$ and has the value $\epsilon^*(B) = \frac{2}{{\rm e}\left(1+B\right)}$.
\end{lemma}
\begin{proof}
Several steps of calculation (chain rule, substitution, integration by parts) result in
\begin{eqnarray}
\dot V &=& - \hat w^2(0) - c \int_0^1 {\rm e}^{cx} \hat w^2(x,t) dx \nonumber \\
&& + \hat w(0) \int_0^1 \delta(x) {\rm e}^{cx} \hat w(x) dx \nonumber \\
&\leq & -{1\over 2 }\hat w^2(0) - c \int_0^1 {\rm e}^{cx} \hat w^2(x,t) dx \nonumber\\
&& + \left(\int_0^1 \delta(x) {\rm e}^{cx} \hat w(x) dx\right)^2
\end{eqnarray}
With the Cauchy--Schwarz inequality
\begin{eqnarray}
&& \left(\int_0^1 \delta(x) {\rm e}^{cx} \hat w(x) dx\right)^2 \nonumber \\
&& \leq \int_0^1 \delta^2(x) {\rm e}^{cx} dx\int_0^1 {\rm e}^{cx} \hat w(x)^2 dx
\end{eqnarray}
we get
\begin{eqnarray}
\dot V & \leq & -{1\over 2 }\hat w^2(0) - \left(c- \int_0^1 \delta^2(x) {\rm e}^{cx} dx\right)V
\end{eqnarray}
The function $\delta$ in \eqref{eq:2.30} is bounded by $|\delta(x)| \leq \left(1+||\beta||_\infty\right)||\tilde{k}||_\infty $ which, in turn, using \eqref{eq:2.14}, yields
\begin{equation}\label{eq-deltabar}
|\delta(x)| \leq (1+ \bar{\beta})\epsilon =: \bar\delta.
\end{equation}
Substituting this bound into the preceding inequality, we obtain
\begin{eqnarray}
\dot V &\leq & -{1\over 2 }\hat w^2(0) - \left(c- \epsilon^2\left(1 + \bar{\beta}\right)^2\int_0^1 {\rm e}^{cx} dx\right)V \nonumber\\
&\leq & -{1\over 2 }\hat w^2(0) - \left(c- \frac{e^c}{c}\epsilon^2 \left(1+ \bar{\beta} \right)^2 \right)V \nonumber\\
&\leq & -{1\over 2 }\hat w^2(0) - \left(c- \frac{e^c}{c}\epsilon^2 \left(1+ B \right)^2 \right)V
\end{eqnarray}
For $0< \epsilon < \epsilon^*$, where $\epsilon^*$ is defined in \eqref{eq-eps*}, we have
\begin{eqnarray}
\dot V \leq -{1\over 2 }\hat w^2(0) - c^* V
\end{eqnarray}
for some $c^*>0$ in \eqref{eq-c*}.
\end{proof}
The size of the NO and of the dataset needs to increase with $\bar\beta$, i.e., with the potential instability in the open-loop system.
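For a rough sense of scale---purely as an illustration, with numbers that are assumptions rather than values tied to the simulations that follow---take $c=2$ and $B=6$, which covers the Chebyshev examples with $\|\beta\|_\infty\leq 6$: the admissible approximation error is then $\epsilon^*(6)=2/(7{\rm e})\approx 0.105$, and an achieved accuracy of, say, $\epsilon=0.05$ yields, from \eqref{eq-c*}, the decay rate $c^*=2-\frac{{\rm e}^2}{2}(0.05)^2(7)^2\approx 1.55$.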
\begin{lemma}{\em (bound on inverse approximate kernel).} \label{eq-Lyapsandwich}
The kernel $\hat l$ of the inverse to the backstepping transformation \eqref{eq-bksttrans},
\begin{eqnarray}
u = \hat{w} + \hat{l} \ast \hat w,
\end{eqnarray}
satisfies, for all $x\in[0,1]$, the estimate
\begin{equation}\label{eq-lhatestimate}
|\hat l(x)| \leq \left(\bar\beta + (1+ \bar{\beta})\epsilon \right) {\rm e}^{(1+ \bar{\beta})\epsilon x}.
\end{equation}
\end{lemma}
\begin{proof}
It is easily shown that $\hat l$ obeys the integral equation
\begin{equation}
\hat l = -\beta +\delta + \delta\ast\hat l.
\end{equation}
Using the successive approximation approach, we get that the following bound holds for all $x\in[0,1]$:
\begin{equation}
|\hat l(x)| \leq \left(\bar\beta + \bar\delta\right) {\rm e}^{\bar\delta x}.
\end{equation}
With \eqref{eq-deltabar}, we get \eqref{eq-lhatestimate}.
\end{proof}
\begin{theorem}{\em (Closed-loop stability robust to DeepONet approximation of backstepping kernel).} \label{thm-stabDeepONet}
Let $B>0$ be arbitrarily large and consider the closed-loop system consisting of \eqref{eq-PDE}, \eqref{eq-PDEBC} with any Lipschitz $\beta$ such that $\|\beta\|_\infty \leq B$, and the feedback \eqref{eq-khatfbk} with the NO gain kernel $\hat k = \hat{\cal K}(\beta)$ of arbitrary desired accuracy of approximation $\epsilon\in (0,\epsilon^*)$ in relation to the exact backstepping kernel $k$, where $\epsilon^*(B)$ is defined in \eqref{eq-eps*}. This closed-loop system obeys the exponential stability estimate
\begin{equation}\label{eq-esestimate}
\|u(t)\| \leq M {\rm e}^{-c^*t/2} \|u(0)\|, \qquad \forall t\geq 0
\end{equation}
with the overshoot coefficient
\begin{equation}
M=\left(1+\left(\bar\beta + (1+ \bar{\beta})\epsilon \right) {\rm e}^{(1+ \bar{\beta})\epsilon } \right) \left(1+\bar\beta {\rm e}^{\bar\beta} \right) {\rm e}^{c/2}.
\end{equation}
\end{theorem}
\begin{proof}
First, we note that $V$ from Lemma \ref{lem-Lyap} satisfies
\begin{equation}\label{eq-Vsandwich}
{1\over \left(1+\|\hat l\|_\infty \right)^2}\|u\|^2 \leq V \leq {\rm e}^c \left(1+\|\hat k\|_\infty \right)^2 \|u\|^2.
\end{equation}
Since, by Lemma~\ref{lem-Lyap}, $V(t) \leq V(0) {\rm e}^{-c^*t}$, we get, for all $t\geq 0$,
\begin{eqnarray}
\|u(t)\| &\leq& \left(1+\|\hat l\|_\infty \right) \left(1+\|\hat k\|_\infty \right){\rm e}^{c/2} \nonumber \\
&& \times {\rm e}^{-c^*t/2} \|u(0)\|.
\end{eqnarray}
Then, noting, with Theorem \ref{thm-DeepONet}, \eqref{eq-kbound}, and Lemma \ref{eq-Lyapsandwich}, that
\begin{eqnarray}\label{eq-khatinf}
\|\hat k\|_\infty &\leq & \|k\|_\infty +\epsilon \leq \bar\beta {\rm e}^{\bar\beta} +\epsilon \\
\label{eq-lhatinf}
\|\hat l\|_\infty &\leq & \left(\bar\beta + (1+ \bar{\beta})\epsilon \right) {\rm e}^{(1+ \bar{\beta})\epsilon }
\end{eqnarray}
we finally arrive at the exponential stability estimate \eqref{eq-esestimate}.
\end{proof}
\begin{remark} \label{rem-observer}
Full-state measurement $u(x,t)$ is employed in the feedback law \eqref{eq-khatfbk}, but it can be avoided by measuring only the outlet signal $u(0,t)$, from which the full state $u(x,t)$ is observable, and by employing the observer
\begin{eqnarray}\label{exp-obsPDE}
\breve u_t &=& \breve u_x +\beta u(0)\\
\label{exp-obsPDEBC}
\breve u(1) &=& U
\end{eqnarray}
along with the observer-based controller
\begin{equation}
U=(\hat k \ast \breve u)(1).
\end{equation}
Solving the PDE \eqref{exp-obsPDE}, \eqref{exp-obsPDEBC} online can itself be avoided by employing its explicit solution: an arbitrary function $\breve u(x,t) = \breve u_0(x)$ for $t+x \in [0,1)$ and
\begin{equation}
\breve u(x,t) = U(t+x-1) + \int_{t+x-1}^t \beta(t+x-\tau) u(0,\tau) d\tau
\end{equation}
for $t+x\geq 1$. A closed-loop stability result as in Theorem \ref{thm-stabDeepONet} can be established for this observer-based controller.
\end{remark}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{images/figCombo.pdf}
\caption{The top row showcases the open-loop instability for the recirculation functions $\beta$ that are the same as in Fig. \ref{fig:1}, with $\gamma=3$ and $7.35$ on the left and right, respectively. The bottom two rows highlight examples of the PDE closed-loop state response and the errors between the response with the ``perfect gain'' $k$ and the ``approximate gain'' $\hat{k}$. $\beta$ corresponds to the same values as in Figure \ref{fig:1}. For the more ``fluctuating'' plant parameter $\beta$, on the right of Figure \ref{fig:1}, the control task is more challenging and, consequently, the state approximation error is also higher (bottom right). }
\label{fig:3}
\end{figure*}
\section{Simulations: Stabilization with NO-Approximated Gain Kernel $\beta\mapsto {\cal K}(\beta)$}
\label{sec-simulations}
Continuing with Example \ref{example1}, in Figure \ref{fig:3} we show that the system is open-loop unstable for both $\beta$s and we present tests with the learned kernels in closed-loop simulations up to $t=2$. In both cases, the PDE settles (nearly perfectly) by $t=1$, as expected from the target system with the perfect kernel $k$. The small ripple in the right simulation is due to the use of the approximated kernel $\hat k$. The simulations confirm the theoretical guarantee that an NO-approximated kernel can successfully emulate a backstepping kernel while maintaining stability. The NO architecture in $\hat{\cal K}$ consists of about 680 thousand parameters, with a training time of $1$ minute (using an Nvidia RTX 3090Ti GPU) on a dataset of $900$ different $\beta$ defined as the Chebyshev polynomials $\beta = 6 \cos (\gamma \cos^{-1}(x))$ where $\gamma \sim \text{uniform}(2, 10)$. We choose $\beta$ of this form due to the rich set of PDEs and kernel functions constructed by varying only a single parameter. The resulting training relative $L_2$ error was $4\times 10^{-3}$ and the testing relative $L_2$ error on $100$ instances sampled from the same distribution was $5\times 10^{-3}$. If a wider distribution of $\gamma$ is chosen, the mapping can be learned but requires both a larger network and more data for the same accuracy.
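For readers who wish to experiment, the following is a minimal end-to-end sketch of the offline pipeline of Figure \ref{fig:0}: generating $(\beta_i, k_i)$ pairs by solving \eqref{eq:1.11} with successive approximations on a uniform grid, and fitting a small fully connected branch/trunk DeepONet by minimizing a mean-squared error. The grid size, network widths, optimizer settings, and sample counts below are illustrative assumptions and are not the ones used to produce the figures.
\begin{verbatim}
# Minimal sketch of the offline pipeline (illustrative assumptions only):
# (1) solve k = -beta + beta * k by successive approximations;
# (2) train a small branch/trunk DeepONet on the (beta, k) pairs.
import numpy as np
import torch
import torch.nn as nn

def solve_kernel(beta, dx):
    """Successive approximations for the Volterra convolution equation."""
    n = len(beta)
    k = -beta.copy()
    for _ in range(60):
        conv = np.empty(n)
        for j in range(n):
            vals = beta[j::-1] * k[:j + 1]            # beta(x_j - y) k(y)
            conv[j] = dx * (np.sum(vals) - 0.5 * (vals[0] + vals[-1]))
        k_new = -beta + conv
        err = np.max(np.abs(k_new - k))
        k = k_new
        if err < 1e-10:
            break
    return k

m = 101
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]
gammas = np.random.uniform(2.0, 8.0, size=600)        # Chebyshev-type betas
betas = np.stack([6.0 * np.cos(g * np.arccos(x)) for g in gammas])
kerns = np.stack([solve_kernel(b, dx) for b in betas])

class DeepONet(nn.Module):
    def __init__(self, m, p=32, width=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, width), nn.Tanh(),
                                    nn.Linear(width, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, width), nn.Tanh(),
                                   nn.Linear(width, p))
    def forward(self, beta_m, y):
        # beta_m: (batch, m) samples of beta;  y: (n_y, 1) query points
        return self.branch(beta_m) @ self.trunk(y).T  # (batch, n_y)

net = DeepONet(m)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
B = torch.tensor(betas, dtype=torch.float32)
K = torch.tensor(kerns, dtype=torch.float32)
Y = torch.tensor(x[:, None], dtype=torch.float32)
for epoch in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(B, Y), K)
    loss.backward()
    opt.step()
print(float(loss))   # matching the errors quoted above requires tuning
\end{verbatim}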
\section{Approximating the Full Feedback Law Map $(\beta,u) \mapsto U$}
\label{sec-beta,u->u}
We have so far pursued only the approximation of the operator ${\cal K}(\beta)$, while treating the feedback operator \eqref{sec-perfectfbk}, given by $U = (k\ast u)(1)=({\cal K}(\beta)\ast u)(1)$, as straightforward to compute---merely an integral in $x$, i.e., a simple inner product between the functions ${\cal K}(\beta)(1-x)$ and the state measurement $u(x,t)$. It is of theoretical (if not practical) interest to explore the neural approximation of the mapping from $(\beta,u)$ into the scalar control input $U$. Such a mapping is clearly from a much larger space of functions $(\beta,u)$ into scalars (i.e., the mapping is a {\em functional}) and is, therefore, considerably more training-intensive and learning-intensive. Nevertheless, since it is legitimate to ask how one would approximate not just the feedback gain kernel but the entire feedback law map, we examine this option in this section. We emphasize that we are approximating just the feedback operator $({\cal K}(\beta)\ast u)(1)$, whose second argument is the current state $u$ as a function of $x$, not the entire trajectory $u(x,t)$. We do not train the NO using a trajectory-dependent cost $\int_0^{t_{\rm f}} \left(\int_0^1 u^2(x,t) dx + U^2(t)\right) dt $ for different initial conditions $u_0$, as, e.g., in the application of RL to the hyperbolic PDEs of traffic flow in \cite{9568241}. Instead, we perform the training simply on the kernel integral equation \eqref{eq:2.12} and the convolution operation \eqref{sec-perfectfbk} for sample functions $\beta$ and $u$ of $x$. The form of stability we achieve in this section is less strong than in Theorem \ref{thm-stabDeepONet}. While Theorem \ref{thm-stabDeepONet} guarantees global exponential stability, here we achieve only {\em semiglobal practical} exponential stability. Because in this section we do not just train a multiplicative gain ${\cal K}(\beta)$ but a feedback of $u$ as well, the approximation error is not just multiplicative but additive, which is the cause of the exponential stability being {\em practical}. Because the data set involves samples $u$ of bounded magnitude, stability is {\em semiglobal} only. Nevertheless, in comparison to the training on closed-loop solutions over a finite time horizon for the traffic flow in \cite{9568241}, where the finite horizon precludes the possibility of stability guarantees, the semiglobal practical exponential stability achieved here is a rather strong result. We start by establishing the Lipschitzness of the backstepping feedback map.
\begin{lemma}Consider the feedback \eqref{sec-perfectfbk}, namely,
\begin{equation}\label{sec-perfectfbk+}
U = ({\cal K}(\beta) \ast u)(1),
\end{equation}
and the associated map ${\cal U}: (\beta,u) \mapsto U$ from $C^0([0,1])\times C^0([0,1])$ into $\mathbb{R}$. For arbitrary $B_\beta,B_u>0$, the mapping ${\cal U}$ is Lipschitz on any set of $x$-dependent Lipschitz functions $(\beta,u)$ such that $\|\beta\|_\infty\leq B_\beta, \|u\|_\infty \leq B_u$, with a Lipschitz constant
\begin{equation}
C_{\cal U} = B_\beta {\rm e}^{B_\beta} + B_u {\rm e}^{3B_\beta}.
\end{equation}
\end{lemma}
\begin{proof}
Let $U_1 = {\cal U}(\beta_1,u_1) = ({\cal K}(\beta_1) \ast u_1)(1)$ and $U_2 = {\cal U}(\beta_2,u_2) = ({\cal K}(\beta_2) \ast u_2)(1)$.
A calculation gives \begin{eqnarray} && |U_1-U_2| = |({\cal K}(\beta_1) \ast u_1)(1) - ({\cal K}(\beta_2) \ast u_2)(1)| \nonumber\\ &&\leq \|{\cal K}(\beta_1)\|_\infty \|u_1-u_2\|_\infty + \|u_2\|_\infty \|{\cal K}(\beta_1)- {\cal K}(\beta_2) \|_\infty. \nonumber \\ \end{eqnarray} Let $\|\beta_1\|_\infty, \|\beta_2\|_\infty \leq B_\beta$ and $\|u_1\|_\infty, \|u_2\|_\infty \leq B_u$. Recall that $\|{\cal K}(\beta)\|_\infty \leq B_\beta {\rm e}^{B_\beta}$ and $\|{\cal K}(\beta_1)- {\cal K}(\beta_2) \|_\infty\leq {\rm e}^{3B_\beta}\|\beta_1-\beta_2\|_\infty$. Then we get \begin{eqnarray} && |{\cal U}(\beta_1,u_1) -{\cal U}(\beta_2,u_2)| \nonumber\\ &&\leq \left(B_\beta {\rm e}^{B_\beta} + B_u {\rm e}^{3B_\beta} \right) \|(\beta_1-\beta_2, u_1-u_2)\|_\infty. \end{eqnarray} \end{proof} Taking the backstepping transformation $w = u - k\ast u$, where $k={\cal K}(\beta)$ is the exact backstepping kernel for $\beta$, we get \begin{eqnarray} w_t &= &w_x \\ w(1) &=& U - ({\cal K}(\beta)\ast u)(1) \end{eqnarray} Let now $\hat{\cal U}$ be the NO version of the mapping ${\cal U}(\beta,u) = ({\cal K}(\beta)\ast u)(1)$. Taking the NO control $U=\hat{\cal U}(\beta,u)$, we obtain the boundary condition $w(1) = \hat{\cal U}(\beta,u) - ({\cal K}(\beta)\ast u)(1)$, namely, the target system \begin{eqnarray} w_t &= &w_x \\ w(1) &=& \hat{\cal U}(\beta,u) - {\cal U}(\beta,u) \end{eqnarray} Due to the Lipschitzness of ${\cal U}$, based on the DeepONet approximation accuracy theorem, we get the following. \begin{lemma}\label{lemma-Utilde} For all $B_\beta,B_u>0$ and $\epsilon$, there exists an NO $\hat{\cal U}$ such that \begin{eqnarray} |{\cal U}(\beta,u) - \hat{\cal U}(\beta,u)|<\epsilon \end{eqnarray} for all $\beta, u \in C^0[0,1]$ that are Lipschitz in $x$ and such that $\|\beta\|_\infty\leq B_\beta, \|u\|_\infty \leq B_u$. \end{lemma} Next, we state and then prove the main result. \begin{theorem}{\em (Semiglobal practical stability under DeepONet approximation of backstepping feedback law).}\label{thm-semiglobalpractical} If $\epsilon <\epsilon^*$, where \begin{equation} \epsilon^*(B_\beta,B_u,c) := {\sqrt{c} B_u\over {\rm e}^{c/2} \left(1+ B_{\beta}\right)}\, >0, \end{equation} and $\|u(0)\| \leq B_u^0$, where \begin{equation}\label{em-ROA-Ucal} B_u^0(\epsilon,B_\beta,B_u,c):= {1\over 1+B_\beta {\rm e}^{B_\beta}} \left({B_u\over {\rm e}^{c/2} \left(1+ B_{\beta}\right)}-{\epsilon\over\sqrt{c}} \right)\, >0, \end{equation} the closed-loop solutions under the NO approximation of the PDE backstepping feedback law, i.e., \begin{eqnarray} \label{eq-PDEUcal} u_t(x,t) &=& u_x(x,t) + \beta(x) u(0, t) \\ \label{eq-PDEBCUcal} u(1, t) &=& \hat{\cal U}(\beta,u)(t) \end{eqnarray} satisfy the {\em semiglobal practical exponential stability} estimate \begin{eqnarray}\label{eq-esestimateUcal-final} \|u(t)\| &\leq& \left(1+B_\beta \right) \left(1+B_\beta {\rm e}^{B_\beta} \right) {\rm e}^{c/2}{\rm e}^{-c t/2} \|u(0)\| \nonumber\\ && +\left(1+ B_{\beta}\right) {{\rm e}^{c/2}\over \sqrt{c}} \epsilon , \qquad \forall t\geq 0. \end{eqnarray} \end{theorem} The estimate \eqref{eq-esestimateUcal-final} is semiglobal because the radius $B_u^0$ of the ball of initial conditions in $L^2[0,1]$ is made arbitrarily large by increasing $B_u$, and by increasing, in accordance with the increase of $B_u$, the training set size and the number of NN nodes. Nevertheless, though semiglobal, the attraction radius $B_u^0$ in \eqref{em-ROA-Ucal} is much smaller than the magnitude $B_u$ of the samples of $u$ in the training set. 
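As a purely illustrative calculation (the numbers below are assumptions, not values tied to the simulations of Section \ref{sims-fbklawapprox}), take $c=2$, $B_\beta=1$, $B_u=10$, and $\epsilon=0.5$: then $\epsilon^*=\sqrt{2}\cdot 10/(2{\rm e})\approx 2.6$, so $\epsilon<\epsilon^*$; the attraction radius is $B_u^0=\frac{1}{1+{\rm e}}\left(\frac{10}{2{\rm e}}-\frac{0.5}{\sqrt{2}}\right)\approx 0.40$, indeed much smaller than $B_u=10$; and the residual bound in \eqref{eq-esestimateUcal-final} is $2\,\frac{{\rm e}}{\sqrt{2}}\,0.5\approx 1.9$.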
The residual value,
\begin{equation}
\limsup_{t\rightarrow\infty} \|u(t)\| \leq \left(1+ B_{\beta}\right) {{\rm e}^{c/2}\over \sqrt{c}} \epsilon ,
\end{equation}
is made arbitrarily small by decreasing $\epsilon$, and by increasing, in accordance with the decrease of $\epsilon$, the training set size and the number of NN nodes. As the magnitude $B_\beta$ of the (potentially destabilizing) recirculation functions $\beta$ used for training grows, the residual error grows.
\begin{proof}{\em (of Theorem \ref{thm-semiglobalpractical})}
To make the notation concise, denote $\tilde{\cal U} = {\cal U} - \hat{\cal U}$ and note that this mapping satisfies $|\tilde{\cal U}(\beta,u)|= |w(1)|\leq \epsilon$ for all $\|\beta\|_\infty\leq B_\beta, \|u\|_\infty \leq B_u$. Note also that $\tilde{\cal U}$ depends on $\epsilon, B_\beta, B_u$ through the number of training data and the NO size. Consider now the Lyapunov functional $ V(t) = \int_0^1 {\rm e}^{cx} w^2(x,t) dx$. Its derivative is
\begin{eqnarray}
\dot V &=& {\rm e}^{c} w^2(1) - w^2(0) - c \int_0^1 {\rm e}^{cx} w^2(x,t) dx \nonumber \\
&\leq & - c V + {\rm e}^{c} w^2(1)
\end{eqnarray}
which yields
\begin{eqnarray}
V(t) &\leq& V(0) {\rm e}^{-ct} + {{\rm e}^{c}\over c} \sup_{0\leq\tau\leq t} w^2(1,\tau) \nonumber\\
&\leq& V(0) {\rm e}^{-ct} + {{\rm e}^{c}\over c} \sup_{0\leq\tau\leq t} \left(\tilde{\cal U}(\beta,u)(\tau)\right)^2.
\end{eqnarray}
Using the fact that
\begin{equation}\label{eq-Vsandwich-k}
{1\over \left(1+\| l\|_\infty \right)^2}\|u\|^2 \leq V \leq {\rm e}^c \left(1+\| k\|_\infty \right)^2 \|u\|^2,
\end{equation}
along with the bounds $\|k\|_\infty \leq B_{\beta} {\rm e}^{B_{\beta}}$ and $\|l\|_\infty \leq B_{\beta}$, we get
\begin{eqnarray}\label{eq-esestimateUcal}
\|u(t)\| &\leq& \left(1+B_\beta \right) \left(1+B_\beta {\rm e}^{B_\beta} \right) {\rm e}^{c/2}{\rm e}^{-c t/2} \|u(0)\| \nonumber\\
&& +\left(1+ B_{\beta}\right) {{\rm e}^{c/2}\over \sqrt{c}} \sup_{0\leq\tau\leq t} \left|\tilde{\cal U}(\beta,u)(\tau)\right|.
\end{eqnarray}
The conclusions of the theorem are directly deduced from this estimate and the bound $|\tilde{\cal U}|<\epsilon$ in Lemma \ref{lemma-Utilde}.
\end{proof}
The NO $\hat{\cal U}: (\beta,u) \mapsto U$ is complex, and therefore computationally burdensome in real time. Why not instead precompute the neural operator $\hat{\cal K}: \beta\mapsto \hat k$ and also find a DeepONet $\hat\Omega$ approximation of the {\em bilinear} map $\Omega : (k,u) \mapsto U$, which is simply the convolution $\Omega(k,u)(t) = \int_0^1 k(1-x) u(x,t) dx$, and then compute just $\hat\Omega(\hat k,u)(t)$ in real time, after computing $\hat k = \hat{\cal K}(\beta)$ offline? This is certainly possible. Why haven't we developed the theory for this approach? Simply because the theory for such a ``composition-of-operators'' approach, for $\hat\Omega(\hat{\cal K}(\beta),u)$, would be hardly any different, but just notationally more involved, than the theory that we provide here for the one-shot neural operator $\hat{\cal U}(\beta,u)$.
\section{Simulations: Practical Stabilization with NO-Approximated Feedback Law $(\beta,u)\rightarrow U$}
\label{sims-fbklawapprox}
Learning the map $(\beta, u) \mapsto U$ is harder than learning $\beta \mapsto k$ due to the combination of two functions, $\beta$ and $u$. We can learn the mapping using a training set defined by $\beta$ as in Figure \ref{fig:1} with $\gamma \in \text{uniform}(2, 6)$ and random values of $u$. We present results with the learned mapping in Figure \ref{fig:4}, where the learned control contains significant error.
Due to this, we see that the PDE on the right of Figure \ref{fig:4} contains a significant ripple past time $T=1$, whereas the analytically controlled PDE is stabilized, as stipulated by the target system, by $T=1$. When compared to the operator approximation of the gain kernel in Figure \ref{fig:3} (left), the PDE error is at least twice as large, confirming the theoretical results in Theorems \ref{thm-semiglobalpractical} and \ref{thm-stabDeepONet}. Furthermore, the network architecture, as presented in Figure \ref{fig:5}, requires significant enhancement over a traditional DeepONet. To learn this mapping, we emulate the operator structure: the map $(\beta, u)\mapsto U$ requires two DeepONet layers for the integral operators, adjoined with linear layers for the multiplicative operation. Additionally, to make the network feasible, we use a smaller spatial resolution than in Section \ref{sec-simulations} and a larger dataset. The dataset requires a combination of both $\beta$ and $u$ and thus consists of 50000 instances. The resulting network of approximately 415 thousand parameters takes approximately 20 minutes to train. We achieved a training relative $L_2$ error of $7.2\times 10^{-3}$ and a testing relative $L_2$ error of $3.3\times 10^{-2}$. This demonstrates, to the practical user, that learning the map $(\beta, u)\mapsto U$ requires more training data and significant architectural enhancements, which increase the training time, and yet the error in Figure \ref{fig:4} is larger than when employing the learned map $\beta \mapsto k$.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{images/fig4.pdf}
\caption{Examples of the PDE closed-loop state response and the errors between the response with ``perfect control'' $U$ and ``approximate control'' $\hat{U}$. $\beta$ is the same as in Fig.~\ref{fig:1}, with $\gamma=3$.}
\label{fig:4}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{images/network1.pdf}
\caption{Network architecture for the map $(\beta, u) \mapsto U$ presented in Section \ref{sec-beta,u->u}. The network first solves the kernel function using a DeepONet layer, then utilizes linear layers to multiply $k$ with the PDE state $u$, and concludes by learning a second neural operator layer for the nonlinear integral operation yielding the final control output $U$. }
\label{fig:5}
\end{figure*}
\section{Extension to Hyperbolic PIDEs}
\label{sec-PIDEextension}
We present the ``general case'' for a class of hyperbolic partial integro-differential equations (PIDE) of the form
\begin{eqnarray} \label{eq-PIDE}
u_t(x,t) &=& u_x(x,t) + g(x) u(0,t) \nonumber \\&& + \int_0^x f(x,y) u(y,t) dy , \qquad \mbox{$x\in[0,1)$} \\
\label{eq-PIDEBC}
u(1, t) &=& U(t).
\end{eqnarray}
We have left this generalization for the end of the paper for pedagogical reasons---in order not to overwhelm and daze the reader---since the case where the Volterra operator kernel $f(x,y)$ is a function of two variables complicates the treatment considerably. The backstepping transformation is no longer a convolution with a function of a single variable, the gain mapping is no longer of $C^0$ functions on $[0,1]$ but of $C^1$ functions on the triangle $\{0\leq y\leq x\leq 1\}$, and the DeepONet theorem requires estimates of the derivatives of the backstepping kernel.
While \cite[(11)--(17)]{Bernard2014} shows that \eqref{eq-PDE}, \eqref{eq-PDEBC} can be transformed into \eqref{eq-PIDE}, \eqref{eq-PIDEBC}, this transformation involves a nonlinear mapping $f\mapsto \beta$, which itself would have to be learned to produce an approximation of the complete kernel mapping $(g,f)\mapsto k$ as a composition of two mappings. This is why the results of the previous sections do not provide a solution to the general case \eqref{eq-PIDE}, \eqref{eq-PIDEBC}, but only a pedagogical introduction, and this is why a generalization in this section is necessary. To find the mapping from the PIDE coefficients $(g,f)$ to the kernel $k$ of the backstepping controller
\begin{equation}\label{eq-bkstfbkkPIDE}
U(t) = \int_0^1 k(1,y) u(y,t) dy,
\end{equation}
we take the backstepping transform
\begin{eqnarray}
w(x, t) = u(x, t) - \int_0^x k(x,y)u(y, t) dy,
\end{eqnarray}
which is not a simple convolution as in \eqref{eq-bkrsttransconv}, with a kernel depending on a single argument, and the same target system as in \eqref{eq-targetPDE}, \eqref{eq-targetPDEBC}, $ w_t = w_x, \ w(1) = 0$, which gives the kernel integral equation derived in \cite{krstic2008Backstepping} as
\begin{eqnarray} \label{eq:kernelIntegral}
k(x,y) &=& F_0(x,y)+F(g,f,k)(x,y),
\end{eqnarray}
where
\begin{align}\label{eq-F_0}
&F_0(x,y) := - g(x-y)- \int_0^y f(x-y+\xi,\xi) d\xi \\
\label{eq-Fgf}
&F(g,f,\kappa)(x,y):= \int_0^{x-y} g(\xi) \kappa(x-y,\xi) d\xi \nonumber\\
& +\int_0^y \int_0^{x-y} f(\xi+\eta,\eta)\kappa(x-y+\eta, \xi+\eta) d\xi d\eta.
\end{align}
Denote by ${\cal T} = \{0\leq y\leq x\leq 1\}$ the domain of the functions $f$ and $k$. Further, denote
\begin{align}
\bar g = \sup_{[0,1]} |g|, \quad & \overline{g'} = \sup_{[0,1]} |g'| \\
\bar f = \sup_{\cal T} |f|, \quad &\overline{f_x} = \sup_{\cal T} |f_x|.
\end{align}
It was proven in \cite{krstic2008Backstepping} that
\begin{equation}\label{eq-kbarbound}
|k(x,y)|\leq \left(\bar g + \bar f\right) {\rm e}^{\bar g + \bar f} =:\bar k\left(\bar g, \bar f\right).
\end{equation}
For the partial derivatives
\begin{eqnarray}
k_x &=& F_0^x + F(g,f,k_x)\\
k_y &=& F_0^y - F(g,f,k_x)
\end{eqnarray}
where
\begin{align}
& F_0^x(x,y) = -\int_0^y f_x(x-y+\xi, \xi) d\xi +\phi_0(x,y) \\
&F_0^y(x,y) = f_x(x,y) + \int_y^x f(\sigma, y) k(x, \sigma) d\sigma -\phi_0(x,y) \\
& \phi_0(x,y) = -g'(x-y)+g(x-y)k(x-y,x-y) \nonumber\\
& + \int_0^y f(x-y+\eta, \eta) k(x-y+\eta, x-y+\eta) d\eta
\end{align}
it is proven using the same approach (successive approximation, infinite series, induction) that, on the triangle ${\cal T}$,
\begin{eqnarray}\label{eq-kxbarbound}
|k_x(x,y)| &\leq& \left(\overline{f_x}+\overline{\phi_0}\right){\rm e}^{\bar g + \bar f} = :\overline{k_x}\left(\bar g, \overline{g'},\bar f, \overline{f_x}\right) \\
\label{eq-kybarbound}
|k_y(x,y)| &\leq& \overline{f_x} + \bar f \bar k + \overline{\phi_0}+ \left(\bar g + \bar f\right) \overline{k_x}
\end{eqnarray}
where
\begin{equation}
\overline{\phi_0}(\bar g, \overline{g'},\bar f) := \overline{g'}+ \left(\bar g + \bar f\right) \bar k.
\end{equation}
Hence, along with the existence, uniqueness, and continuous differentiability of $k$ \cite{krstic2008Backstepping}, we have proven the following.
\begin{lemma}
The map ${\cal Q}:C^1([0,1])\times C^1({\cal T})\rightarrow C^1({\cal T})$ defined by $k = {\cal Q}(g,f)$, and representing the solution of \eqref{eq:kernelIntegral}, is continuous.
In addition, $|k|, |k_x|, |k_y|$ are bounded, respectively, as in \eqref{eq-kbarbound}, \eqref{eq-kxbarbound}, \eqref{eq-kybarbound}, in terms of the bounds on $|g|, |g'|, |f|, |f_x|$. \end{lemma} From the continuity of the map ${\cal Q}$ on the Banach space $C^1([0,1]\times{\cal T})$, the following result is inferred from the DeepONet theorem. \begin{lemma} For all $\epsilon>0$ and $B_g, B_{g'}, B_f, B_{f_x}>0$ there exists an NO $\hat{\cal Q}$ such that, for all $(x,y) \in {\cal T}$, \begin{align}\label{eq-Qerrorbound} &\left|\hat{\cal Q}(g,f)(x,y) - {\cal Q}(g,f)(x,y)\right| \nonumber \\& +\left|{\partial\over\partial x}\left(\hat{\cal Q}(g,f)(x,y) - {\cal Q}(g,f)(x,y)\right)\right| \nonumber\\& +\left|{\partial\over\partial y}\left(\hat{\cal Q}(g,f)(x,y) - {\cal Q}(g,f)(x,y)\right)\right| < \epsilon \end{align} for all functions $g\in C^1([0,1])$ and $f\in C^1({\cal T})$ whose derivatives are Lipschitz and which satisfy $\|g\|_\infty \leq B_g$, $\|g'\|_\infty \leq B_{g'}$, $\|f\|_\infty \leq B_f$, $\|f_x\|_\infty \leq B_{f_x}$. \end{lemma} Denoting $\tilde k = k-\hat k = {\cal Q}(g,f) - \hat {\cal Q}(g,f)$, \eqref{eq-Qerrorbound} can be written as $\left|\tilde k(x,y)\right|+\left|\tilde k_x(x,y)\right| + \left|\tilde k_y(x,y)\right|<\epsilon$. Now take the backstepping transformation \begin{equation} \hat w(x,t) = u(x,t) - \int_0^x \hat k(x,y) u(y,t) dy. \end{equation} With the control law \begin{equation}\label{eq-fbkgf} U(t) = \int_0^1 \hat k(1,y) u(y,t) dy, \end{equation} the target system becomes \begin{eqnarray}\label{eq-target-gf} \hat w_x(x,t) &=& \hat w_t(x,t) +\delta_0(x) \hat w(0,t) \nonumber \\ &&+ \int_0^x \delta_1(x,y) u(y,t) dy \\ \hat w(1,t) &=&0, \end{eqnarray} where \begin{eqnarray} \delta_0(x) &=& -\tilde k(x,0) + \int_0^x g(y) \tilde k(x,y) dy \\ \delta_1(x,y) &=& -\tilde k_x(x,y) -\tilde k_y(x,y) \nonumber\\ && +\int_y^x f(\xi,y) \tilde k(x,\xi) d\xi \end{eqnarray} satisfy \begin{eqnarray} \|\delta_0\|_\infty &\leq & (1+\bar g) \epsilon\\ \|\delta_1\|_\infty &\leq & (2+\bar f) \epsilon. \end{eqnarray} Since the state $u$ appears under the integral in the $\hat w$-system \eqref{eq-target-gf}, in the Lyapunov analysis we need the inverse backstepping transformation \begin{equation}\label{eq-invbkstgf} u(x,t) = \hat w(x,t) + \int_0^x \hat l(x,y) \hat w(y,t) dy. \end{equation} It is shown in \cite{krstic2008boundary} that the direct and inverse backstepping kernels satisfy in general the relationship \begin{equation} \hat l(x,y) = \hat k(x,y) + \int_y^x \hat k(x,\xi) \hat l(\xi,y) d\xi. \end{equation} The inverse kernel satisfies the following conservative bound \begin{equation} \|\hat l\|_\infty \leq \|\hat k\|_\infty {\rm e}^{\|\hat k\|_\infty}. \end{equation} Since $\|k-\hat k\|_\infty <\epsilon$, we have that $\|\hat k\|_\infty \leq \|k\|_\infty+\epsilon$. With \eqref{eq-kbarbound} we get $\|\hat k\|_\infty \leq \bar k(\bar g, \bar f) +\epsilon$ and hence \begin{equation} \|\hat l\|_\infty \leq \left(\bar k +\epsilon\right) {\rm e}^{\bar k +\epsilon}. \end{equation} Going back to \eqref{eq-invbkstgf}, we get \begin{equation} \|u\| \leq \left( 1+ \left(\bar k +\epsilon\right) {\rm e}^{\bar k +\epsilon}\right) \|\hat w\|. \end{equation} Mimicking and generalizing the steps of the proofs of Lemma \ref{lem-Lyap} and Theorem \ref{thm-stabDeepONet}, we get the following exponential stability result.
(We omit the explicit but conservative and exceedingly complicated and uninformative estimates of the overshoot coefficient, the decay rate, and the upper bound $\epsilon^*$ on the approximation accuracy needed to guarantee stability under the gain approximation.) \begin{theorem} \label{thm-stabDeepONet-gf} Let $B_g, B_{g'}, B_f, B_{f_x}>0$ be arbitrarily large and consider the system \eqref{eq-PIDE}, \eqref{eq-PIDEBC} with any $g\in C^1([0,1])$ and $f\in C^1({\cal T})$ whose derivatives are Lipschitz and which satisfy $\|g\|_\infty \leq B_g$, $\|g'\|_\infty \leq B_{g'}$, $\|f\|_\infty \leq B_f$, $\|f_x\|_\infty \leq B_{f_x}$. There exists a sufficiently small $\epsilon^*(B_g, B_{g'}, B_f, B_{f_x})>0$ such that the feedback law \eqref{eq-fbkgf} with the NO gain kernel $\hat k = \hat{\cal Q}(g,f)$ of arbitrary desired accuracy of approximation $\epsilon\in (0,\epsilon^*)$ in relation to the exact backstepping kernel $k$ ensures that there exist $M, c^*>0$ such that the closed-loop system satisfies the exponential stability bound \begin{equation}\label{eq-esestimategf} \|u(t)\| \leq M {\rm e}^{-c^*t/2} \|u(0)\|, \qquad \forall t\geq 0. \end{equation} \end{theorem} \section{Simulations: Stabilization of PIDE with NO-Approximated Gain Kernel $f\mapsto {\cal Q}(f)$ Dependent on $(x,y)$} \label{sec-simulationsQ} For clarity, we consider the systems of the form \eqref{eq-PIDE} with $g=0$, so that the focus is solely on the mapping of two-dimensional plant kernels $f(x,y)$ into two-dimensional backstepping kernels $k(x,y)$, which are governed by the (double) integral equation \begin{align} \label{eq:kernelIntegralf} & k(x,y) = - \int_0^y f(x-y+\xi,\xi) d\xi \nonumber \\ & + \int_0^y \int_0^{x-y} f(\xi+\eta,\eta)k(x-y+\eta, \xi+\eta) d\xi d\eta. \end{align} We illustrate in this section the NO approximation $\hat{\cal Q}$ of the nonlinear operator ${\cal Q}:f\mapsto k$ mapping $C^1({\cal T})$ into itself. First, in Figure \ref{fig:7} we present the construction of the two-dimensional function $f$ via a product of Chebyshev polynomials and highlight the PDE's open-loop instability. Then, we showcase the corresponding learned kernel and the error in Figure \ref{fig:8}. The pointwise error for the learned kernel peaks at around $10\%$ of $k$, as it ``ripples'' in the right of Figure \ref{fig:8}. The learned kernel $\hat k$ achieves stabilization in Figure \ref{fig:9} (right), but not by $t=1$, as it would with perfect $k$ in \eqref{eq-targetPDE}, \eqref{eq-targetPDEBC}, but only exponentially, as guaranteed for the learned $\hat k$ in Theorem \ref{thm-stabDeepONet-gf}. For this 2D problem ($f$ and $k$ are functions of $x$ and $y$), we design the branch network of the NO with convolutional neural networks (CNNs) as they have had large success in handling 2D inputs \cite{alexnet,lecunCNN}. The network consists of 70 million parameters (due to the CNNs), yet only takes around 5 minutes to train. On 900 instances, the network achieves a relative $L_2$ training error of $1.3e-3$ and a relative $L_2$ testing error of $1.8e-3$ on 100 instances. \section{Conclusions} \label{sec-concl} \paragraph{What is achieved} PINN, DeepONet, FNO, LOCA, NOMAD---they have all been used with success to approximate solution maps of PDEs. What we introduce is a novel framework: for approximating the solution maps for {\em integral equations} \eqref{eq:kernelIntegral}, \eqref{eq-F_0}, \eqref{eq-Fgf}, or simply \eqref{eq:1.11}, for the feedback gain functions $k$ in {\em control of PDEs}. 
We provide the guarantees that (i) any desired level of accuracy of NO approximation of the backstepping gain kernel is achieved for any $\beta$ that satisfies $\|\beta\|_\infty \leq B$ for arbitrarily large given $B>0$, and (ii) the PDE is stabilized with an NO-approximated gain kernel for any $\|\beta\|_\infty \leq B$. These results generalize to a class of PIDEs with functional coefficients $(g,f)$ that depends on two variables, $(x,y)$, and result in kernels $k$ that are also functions of $(x,y)$. For a given $B>0$ and any chosen positive $\epsilon < \epsilon^*(B)$, the determination of the NO approximate operator $\hat{\cal K}(\cdot)$ is done offline, once only, and such a $\hat{\cal K}(\cdot)$, which depends on $B$ and $\epsilon$, is usable ``forever,'' so to speak, for any recirculation kernel that does not violate $\|\beta\|_\infty \leq B$. When the entire PDE backstepping feedback law---rather than just its gain kernel---is being approximated, globality and perfect convergence are lost, but only slightly. Decay remains exponential, over infinite time, and stability is semiglobal. \paragraph{What is gained by making a particular controller class with theoretical guarantees the object of learning} By now it is probably clear to the reader that what we present here is a method for learning an entire {\em class} of model-based controllers, by learning the gains $\hat k=\hat{\cal K}(\beta)$, or $\hat k = {\cal Q}(g,f)$, for any plant parameters $\beta$ or $(g,f)$. What does one profit from learning a particular class of controllers backed up by theory? Suppose that, instead of learning the {\em PDE backstepping} gain mapping ${\cal K}(\cdot)$, we were trying to find {\em any} gain function $k(x)$ that meets some performance objective. This goal could be formulated as a finite-time minimization of $\int_0^{t_{\rm f}} \left(\int_0^1 u^2(x,t) dx + U^2(t)\right) dt $, for a given $\beta$, over a set of gain functions $k$ for a ball of initial conditions $u_0(x) = u(x,0)$ around the origin. Not only would this be a much larger search, over $(k,u_0)$, but such a finite-time minimization could ensure only finite-time performance, not exponential stability. Our achievement of global exponential stability (not ``practical''/approximate, but with an actual convergence of the state to zero) relies crucially---in each of the lemmas and theorems that we state---on the theoretical steps from the PDE backstepping toolkit (backstepping transform, target system, integral equation for kernel, successive infinite-series approximation, Lyapunov analysis). It is only by assigning the NO a service role in an otherwise model-based design that stability is assured. Stability assurance is absent from learning approaches in which the feedback law design is left to ML and a finite-time cost, as in RL for the traffic flow PDEs \cite{9568241}. \paragraph{Future research} Of immediate interest are the extensions of the results of this paper to parabolic PDEs in \cite{1369395}, as well as extensions from the approximations of controller kernels to the NO approximations of PDE backstepping observer kernels \cite{SMYSHLYAEV2005613}, with guarantees of observer convergence, and with observer-based stabilization (separation principle). \begin{figure*} \centering \includegraphics[width=\textwidth]{images/fig7.pdf} \caption{On the left is an example of $f(x,y)$ generated by the following form $f(x, y)$ = $\beta(x)\beta(y)$ where $\beta$ is the Chebyshev polynomial as in Fig. \ref{fig:1} with $\gamma=6$. 
On the right is the corresponding PDE with open loop instability.} \label{fig:7} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{images/fig8.pdf} \caption{Examples of the learned kernel $\hat{k}(x, y)$ and the kernel error $k(x, y) - \hat k(x, y)$ with a peak of about $10\%$ of $k(x, y)$, around the coordinate $(0.9, 0.9)$. The kernel shown corresponds to $f(x, y)$ in Figure \ref{fig:7}.} \label{fig:8} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{images/fig9.pdf} \caption{PDE with the analytical kernel on the top left and with the learned kernel on the top right. Additionally, the $L_2$ error between the systems peaks around $t=1.3$ at a value of $0.2$. The PDE shown corresponds to the $f(x, y)$ presented in Figure \ref{fig:7}. } \label{fig:9} \end{figure*}
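As a practical companion to the neural-operator approximation illustrated in Figures \ref{fig:7}--\ref{fig:9}, the following is a minimal, generic sketch (in PyTorch) of a DeepONet-style model for the two-dimensional kernel map $f(x,y)\mapsto k(x,y)$ with a CNN branch network. It is only an illustration of the architecture class described in Section \ref{sec-simulationsQ}: the layer sizes, grid resolution, and synthetic input shapes are placeholders of ours and do not reproduce the network, dataset, or error values reported above. Training pairs $(f,k)$ would be obtained offline, e.g., by solving the kernel integral equation by successive approximation.
\begin{verbatim}
# Illustrative DeepONet-style sketch for the 2D kernel map f(x, y) -> k(x, y):
# a CNN "branch" net encodes f sampled on a uniform grid, an MLP "trunk" net
# encodes a query point (x, y), and k_hat(x, y) is their inner product.
# All sizes are placeholders; real (f, k) training pairs come from solving
# the kernel integral equation offline.
import torch
import torch.nn as nn

class DeepONet2D(nn.Module):
    def __init__(self, grid=31, width=64):
        super().__init__()
        self.branch = nn.Sequential(            # encodes f on a grid x grid mesh
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * ((grid + 1) // 2) ** 2, width),
        )
        self.trunk = nn.Sequential(             # encodes the query point (x, y)
            nn.Linear(2, 64), nn.Tanh(),
            nn.Linear(64, width), nn.Tanh(),
        )

    def forward(self, f_grid, xy):
        # f_grid: (batch, 1, grid, grid);  xy: (batch, n_points, 2)
        b = self.branch(f_grid)                  # (batch, width)
        t = self.trunk(xy)                       # (batch, n_points, width)
        return torch.einsum("bw,bpw->bp", b, t)  # k_hat at the query points

model = DeepONet2D()
f_grid = torch.randn(8, 1, 31, 31)   # synthetic stand-in for sampled plant kernels
xy = torch.rand(8, 100, 2)           # random query points in the unit square
print(model(f_grid, xy).shape)       # torch.Size([8, 100])
\end{verbatim}
A model of this form would be trained by minimizing a mean squared or relative $L_2$ loss between $\hat k$ and the exact kernels, evaluated on the triangle ${\cal T}$.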
{ "arxiv_id": "2302.14279", "language": "en", "timestamp": "2023-03-01T02:07:57", "url": "https://arxiv.org/abs/2302.14279", "yymm": "2302" }
\section{Introduction} With the development of quantum devices and quantum algorithms, it is possible to solve problems on quantum computers that are hard for classical ones. Quantum computers have already been successfully implemented in many fields, including quantum chemistry, condensed matter physics and lattice field theory, see references~\cite{McArdle_20, Klco_2018, Klco_2020, Ciavarella_2021, Funcke_2022,Clemente_strategies,Banuls2020} as some examples. With the growing number of qubits and improved fidelities of quantum devices, more realistic physical models can be tackled, and the potential of quantum computers can be explored. As an example of application, in this article, we prepare the thermal state of the Ising model with a quantum algorithm at various temperatures, including points close to the critical temperature and the low-temperature region. To demonstrate the feasibility of our approach, we compare the quantum simulation results of the chosen physical quantities with the results from classical simulations. Numerous algorithms have been proposed to enable a quantum computer to prepare a thermal state. These include the quantum thermal dynamic method, where the target system is coupled with a bath at equilibrium~\cite{Terhal_2000}, variational quantum algorithm based on the thermofield double state~\cite{Wu_2019,21npj_Sagastizabal}, as well as many quantum imaginary time evolution(QITE) algorithms such as the one utilizing Hubbard-Stratonovich transformation~\cite{Chowdhury_16}, QITE based on variational ansatz (QITE-ansatz)~\cite{McArdle_19} and QITE based on measurement (QITE-measure)~\cite{Motta_20}. The scope of our research is to focus on the usage of noisy intermediate-scale quantum (NISQ) devices~\cite{Preskill_2018,Bharti22}. Given the presence of quantum noise, it is necessary to minimize the depth of the quantum circuits. We utilize the QITE-ansatz algorithm to generate thermal states in our research, as it has a relatively shallower circuit depth in comparison to other algorithms mentioned previously. In QITE-ansatz algorithm, the imaginary time evolution is carried out on a prior parameterized quantum circuit, and the parameters are evolved variationally. Thus, the parameterized quantum circuit is usually called variational ansatz. The variational ansatz is designed for ground state preparation in most references utilizing QITE-ansatz, such as~\cite{McArdle_19,Liu2021,Zoufal_2021}. Here, for thermal state preparation, we propose to construct a variational ansatz converted from quantum circuits utilized in QITE-measure~\cite{Motta_20}. The circuit in QITE-measure can also carry out imaginary time evolution, but the circuit depth is quite large. The circuit depth can be much reduced by converting the circuit into a variational ansatz. For example, when simulating the Ising model, the quantum circuits in QITE-measure have $\sim 100$ layers, while the variational ansatz circuits used in this work have less than 10 layers. In this article, we study the long-range interacting Ising model. Long-range interaction between spins is introduced naturally in trapped-ion spin systems~\cite{Islam_2013}, and its dynamics can be simulated utilizing quantum simulation algorithms. The long-range interaction also leads to interesting physics such as confinement~\cite{Liu_2019} and meson scattering~\cite{vovrosh2022dynamical}. Meanwhile, the long-range interaction leads to effective dimensions that impact the system's critical behavior. 
Here, we calculate the specific heat of the long-range interacting Ising model near the critical point and in the low-temperature region. This article is organized as follows. In section~\ref{sec:Long-range-interacting-Ising-model}, we introduce the long-range interacting Ising model and the measurement method of relevant physical quantities on a quantum computer. In section~\ref{sec:QITE}, we discuss the process of thermal state preparation using QITE-ansatz algorithm in detail, especially the method of variational ansatz design. In section~\ref{sec:numerical-result}, we present the numerical results and discuss the observed indications of the criticality. Finally, in section~\ref{sec:discussion-and-outlook}, we summarize the techniques used in this article and discuss the possible extension for further works. \section{Long-range interacting Ising model}\label{sec:Long-range-interacting-Ising-model} We consider the $D=2$ dimensional Ising model on a square lattice $\Lambda$ with long-range interactions. The Hamiltonian reads \begin{align} H = -\sum_{i > j\in \Lambda}\frac{J}{r_{ij}^{\alpha}}Z_i Z_j-h\sum_i Z_i, \label{eq:long-range-interacting-Hamiltonian} \end{align} where $Z_i$ is the Pauli-$Z$ operator on the $i$th spin. $J$ is the bare coupling strength, and $\alpha$ denotes the range of the interaction. $h$ denotes the strength of the longitudinal external field. The distance $r_{ij}$ is defined by the Manhattan distance under periodic boundary condition(PBC): Assuming the position of spin $i$ on the square lattice is represented by integer vector $\vec{r}^{i}=(r^i_1,\ldots,r^i_D,)$ and the volume of the lattice is $|\Lambda|=N_1\times \ldots \times N_D$, then \begin{align} r_{ij}=\sum_{d=1}^D \min(|r^i_d-r^j_d|,N_d-|r^i_d-r^j_d|). \end{align} This Hamiltonian is a generalization of the interaction part of the Hamiltonian introduced in reference \cite{Liu_2019}. It reduces to the original nearest-neighbor Ising model~(NNIM) in the limit $\alpha\rightarrow \infty$. The state of the Ising system at a finite temperature is described by the density operator. Its equilibrium state is the Gibbs state of which the density operator reads \begin{align} \rho=\frac{1}{Z_{\beta}}e^{-\beta H},\quad Z_{\beta}\equiv \tr(e^{-\beta H}). \label{eq:gibbs-state} \end{align} Here $\beta$ is the inverse temperature $\beta\equiv 1/(k_B T)$ and we define $K\equiv J\beta$ for later convenience. For an arbitrary observable $O$, its expectation value of the thermal state is given by \begin{align} \langle O\rangle\equiv \tr(\rho O). \label{eq:thermal-expectation} \end{align} This article targets the case where the expectation values are evaluated for different $K$ and a zero external field $h=0$. Now we exhibit observables to compute the Ising model's specific heat and susceptibility. Analyzing these measures allows us to examine the critical behavior of the Ising model. The specific heat is defined by the changing rate of the internal energy in a unit volume when varying the temperature $T$. It can be evaluated by the energy-fluctuation relation: \begin{align} C_v\equiv\frac{1}{|\Lambda|}\frac{\partial \langle H\rangle}{\partial T}=\frac{1}{|\Lambda| T^2}\left[ \langle H^2\rangle- \langle H\rangle^2\right], \end{align} where the last expression can be derived by taking the Gibbs state Eq.~(\ref{eq:gibbs-state}) to evaluate the expectation values. 
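As a purely classical cross-check of the fluctuation relation above (our illustration, not part of the quantum algorithm), $C_v$ can be evaluated by brute-force enumeration of spin configurations for very small lattices, since the Hamiltonian in Eq.~(\ref{eq:long-range-interacting-Hamiltonian}) is diagonal in the computational basis. A minimal Python sketch, assuming a $3\times 3$ nearest-neighbour lattice ($\alpha\rightarrow\infty$) with $J=1$, $h=0$ and $k_B=1$, is:
\begin{verbatim}
# Specific heat of a small classical Ising lattice from the energy-fluctuation
# relation, by enumerating all 2^N spin configurations (feasible only for
# very small lattices).
import itertools
import numpy as np

Nd = 3                       # 3 x 3 lattice, periodic boundary conditions
J, K = 1.0, 0.5              # coupling and K = J * beta
beta = K / J
N = Nd * Nd

def energy(spins):
    s = spins.reshape(Nd, Nd)
    # nearest-neighbour bonds (one per +x and +y direction), PBC, h = 0
    return -J * np.sum(s * (np.roll(s, -1, axis=0) + np.roll(s, -1, axis=1)))

energies = np.array([energy(np.array(c))
                     for c in itertools.product([1, -1], repeat=N)])
weights = np.exp(-beta * (energies - energies.min()))   # shifted for stability
Z = weights.sum()
E = (weights * energies).sum() / Z
E2 = (weights * energies ** 2).sum() / Z
Cv = beta ** 2 * (E2 - E ** 2) / N                      # specific heat per spin
print(f"K = {K:.2f}, C_v = {Cv:.4f}")
\end{verbatim}
For the classical (diagonal) Hamiltonian considered here, exact diagonalization reduces to precisely this kind of enumeration; the quantum algorithm described below targets the same thermal expectation values.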
Similarly, the susceptibility is defined by the changing rate of the magnetization in a unit volume with respect to the external field strength $h$ (evaluated at $h=0$). The total magnetization is given by \begin{align} \langle M\rangle\equiv \langle Z_{tot}\rangle, \end{align} where $Z_{tot}\equiv\sum_i Z_i$, i.e., the sum of all the spins in the lattice. Then the susceptibility can be evaluated according to the susceptibility-fluctuation relation \begin{align} \chi\equiv\frac{1}{|\Lambda|}\frac{\partial\langle M\rangle}{\partial h}\Big|_{h=0}=\frac{1}{|\Lambda|T}\left[ \langle Z_{tot}^2\rangle- \langle Z_{tot}\rangle^2\right]. \label{eq:susceptibility-fluctuation-relation} \end{align} In summary, evaluating the specific heat and susceptibility is equivalent to calculating the expectation values of the corresponding operators. The operators to be measured include \begin{align} H^2,H , Z_{tot}^2,Z_{tot} \label{eq:concerned-observable} \end{align} which can all be reduced to linear combinations of Pauli operators. To evaluate the expectation values of the above operators on quantum computers, we can generate the thermal state utilizing a quantum algorithm and then evaluate the expectation values of the Pauli operators. Notice that for the above operators, the elementary Pauli operators can be written as products of Pauli-$Z$ operators, so they commute and can be measured simultaneously on the quantum computer. Combined with the fact that the Hamiltonian in Eq.~(\ref{eq:long-range-interacting-Hamiltonian}) consists of only Pauli-$Z$ operators, we can simplify the initial state to be evolved on quantum computer. It enables us to simulate the system on a larger lattice. For general models, such as the Ising model with a transversal field, the simplification does not hold. More details can be found in section~(\ref{sec:cps}). \section{Thermal state preparation with quantum imaginary time evolution}\label{sec:QITE} One can use the quantum imaginary time evolution(QITE) algorithm to prepare a thermal state, as demonstrated in previous studies~\cite{McArdle_19,Motta_20}. This section provides an explanation of the QITE-ansatz algorithm. QITE-ansatz algorithm is designed to evolve an $N_q$-qubit quantum state $\ket{\psi(0)}$ to \begin{align} \ket{\psi(\tau)}=\frac{e^{-\tau H}\ket{\psi(0)}}{\sqrt{\bra{\psi(0)}e^{-2\tau H}\ket{\psi(0)}}}, \label{eq:psi_QITE} \end{align} where $\tau$ is a real number denoting imaginary time. The denominator is a normalization factor to guarantee the evolution's unitarity. Assuming we have the quantum circuit to carry out the unitary evolution, then by choosing the initial state to be the maximally mixed state~(defined as the density operator) $\ket{\psi(0)}\bra{\psi(0)}=\ensuremath{\mathbf{I}}/\mathbf{d}$~\footnote{Here we abuse bra-ket notation for the mixed state. It will be explained in detail in the subsection~(\ref{sec:initial-state-preparation})}~($\ensuremath{\mathbf{I}}$ is the identity operator of the $\mathbf{d}\equiv 2^{N_q}$ dimensional Hilbert space), one finds the final state is the thermal state with inverse temperature $\beta=2\tau$ \begin{align} \ket{\psi(\tau)}\bra{\psi(\tau)}= \frac{1}{Z_{2\tau}}e^{-2\tau H},\quad Z_{2\tau} \equiv \tr(e^{-2\tau H}). \label{eq:QITE-thermal-state} \end{align} The QITE-ansatz algorithm was proposed in references \cite{McArdle_19, Yuan_2019}. This technique is originally used to project out the ground state of the Hamiltonian according to Eq.~(\ref{eq:psi_QITE}). 
It has been successfully implemented in the field of quantum chemistry, quantum field theory and machine learning, see e.g.~\cite{McArdle_20,Liu2021,Zoufal_2021}. Following \cite{Yuan_2019}, we first review the QITE-ansatz algorithm within the density operator formalism. The density operator of Eq.~(\ref{eq:psi_QITE}) reads \begin{align} \rho(\tau) = \frac{e^{-\tau H}\ket{\psi(0)}\bra{\psi(0)}e^{-\tau H}}{\bra{\psi(0)}e^{-2\tau H}\ket{\psi(0)}}. \end{align} The mathematical description of a quantum state with the density operator is equivalent to that with the pure state. In particular, the expectation values of any observable $O$ coincide \begin{align} \tr(\rho(\tau)O)=\bra{\psi(\tau)}O\ket{\psi(\tau)}. \end{align} The imaginary time evolution of the density operator follows the von-Neumann equation~\cite{Yuan_2019} \begin{align} \frac{\mathrm{d} \rho(\tau)}{\mathrm{d} \tau}=\mathcal{L}[\rho(\tau)], \label{eq:von-Neumann equation} \end{align} where $\mathcal{L}$ is the Liouville operator defined by $\mathcal{L}(\rho)=-\{H,\rho\}+2\tr(\rho H)\rho$ with anti-commutator $\{H,\rho\}=H\rho+\rho H$. As the Hilbert space of the whole $N_q$ qubits is hard to be explored by a quantum circuit, we utilize a density operator $\hat{\rho}(\tau)=\ket{\phi(\tau)}\bra{\phi(\tau)}$ to approximate the target density $\rho(\tau)$. The approximation $\hat{\rho}(\tau)$ satisfies the following requirements: (1) It has the same initial state $\hat{\rho}(0)=\rho(0)=\ket{\psi(0)}\bra{\psi(0)}$. (2) The evolution of $\hat{\rho}(\tau)$ approximately satisfies the von-Neumann equation $\mathrm{d} \hat{\rho}(\tau)/\mathrm{d} \tau-\mathcal{L}[\hat{\rho}(\tau)]=0$. The approximation $\hat{\rho}(\tau)$ is generated with a variational ansatz $\ket{\phi(\vec{\theta}(\tau))}=U(\vec{\theta}(\tau))\ket{\psi(0)}$, where $\vec{\theta}$ is a real variational parameter vector with $N$ components. $U(\vec{\theta})=U_N(\theta_N)\ldots U_1(\theta_1)$ is a series of parameterised unitary quantum gates. According to the first requirement mentioned above, $U(\vec{\theta}(0))$ should be the identity operator $\ensuremath{\mathbf{I}}$. With the variational ansatz, the evolution of the quantum state is converted to the evolution of the variational parameters $\vec{\theta}$. However, as the variational ansatz cannot explore the whole Hilbert space, $\ket{\phi(\vec{\theta}(\tau))}$ can not fulfill the von-Neumann equation exactly. Instead, we demand that the von-Neumann equation is fulfilled sufficiently well according to the second requirement. The violation of the von-Neumann equation is measured by the McLachlan distance $L^2$, which is defined by \begin{align} L^2\equiv \left|\left| \frac{\mathrm{d} \hat{\rho}(\tau)}{\mathrm{d} \tau}- \mathcal{L}[\hat{\rho}(\tau)]\right|\right|^2, \end{align} where $||A||^2=\tr(A^{\dagger}A)$ represents Frobenius norm. According to the differential chain rule, we have \begin{align} L^2= \left|\left| \sum_{\mu} \frac{\partial \hat{\rho}(\theta)}{\partial \theta_{\mu}}\dot{\theta}_{\mu}-\mathcal{L}(\hat{\rho}) \right|\right|^2. \end{align} So that the McLachlan distance is a quadratic function of the time derivatives of the variational parameters $\dot{\theta}_{\mu}\equiv \partial \theta_{\mu}/\partial \tau$. 
$L^2$ can be minimized with the variational principle, which leads to \begin{align} \delta L^2=0\Rightarrow \frac{\partial L^2}{\partial \dot{\theta}_{\mu}}=\sum_{\nu} M_{\mu\nu}\dot{\theta}_{\nu}-V_{\mu}=0, \end{align} where \begin{equation} \begin{aligned} M_{\mu\nu}&\equiv 2\Re\left[\frac{\partial \bra{\phi(\vec{\theta})}}{\partial \theta_{\mu}}\frac{\partial \ket{\phi(\vec{\theta})}}{\partial \theta_{\nu}}\right],\\ V_{\mu}&\equiv -2\Re\left[\frac{\partial \bra{\phi(\vec{\theta})}}{\partial \theta_{\mu}} H\ket{\phi(\vec{\theta})}\right]. \label{eq:M-V-calculation} \end{aligned} \end{equation} Here $M$ is a $N\times N$ matrix while $V$ is a $N$ dimensional vector. Following \cite{McArdle_19,Funcke2021dimensional}, one can construct some specific quantum circuits to measure $M$ and $V$, which cost $\mathcal{O}(N^2)$ quantum device calls and one additional ancilla qubit. After deriving $M$ and $V$, we can construct the following linear equations \begin{align} \sum_{\nu} M_{\mu\nu}\dot{\theta}_{\nu}=V_{\mu}. \end{align} Then one can solve for the time derivative of the variational parameters $\dot{\theta}_{\nu}|_{\tau=\tau_0}$ at a given imaginary time $\tau_0$, utilizing methods such as pseudo-inverse~\cite{McArdle_19}. The variational parameters at the next time slice $\tau_0+\delta \tau$ are given according to the Euler method \begin{align} \vec{\theta}(\tau_0+\delta \tau) \simeq \vec{\theta}(\tau_0)+\dot{\vec{\theta}}\delta\tau, \label{eq:Euler-integration} \end{align} where $\dot{\theta}_{\nu}=\sum_{\mu}M^{-1}_{\nu\mu}V_{\mu}$. The computational complexity of the QITE-ansatz grows polynomially with the number of variational parameters $N$. In each time slice, the time complexity of solving linear equations grows polynomially with $N$, while the matrix $M$ and vector $V$ can also be evaluated using quantum computers within polynomial time. Thus as long as $N$ grows polynomially with the system size $N_q$, the time complexity of the QITE-ansatz grows polynomially with $N_q$ and can be extended to large-scale quantum systems. The following subsections will introduce how to prepare the maximally mixed state and choose an appropriate variational ansatz. \subsection{Initial state preparation}\label{sec:initial-state-preparation}\label{sec:cps} Here we introduce how to prepare the initial state as the maximally mixed state $\ensuremath{\mathbf{I}}/\mathbf{d}$. Quantum circuits are suitable for generating pure states. We need some strategies to generate mixed states utilizing pure states. As discussed in \cite{White2009}, there are two strategies: ancilla pair state~(APS) and classical product state~(CPS). Both strategies can be used to prepare maximally mixed state $\ensuremath{\mathbf{I}}/\mathbf{d}$. However, preparing $\ensuremath{\mathbf{I}}/\mathbf{d}$ with APS doubles the number of qubits to $2N_q$~\cite{Zoufal_2021}. It also introduces some complexities in variational ansatz design to evolve the pair state. Instead, we can prepare the maximally mixed state via CPS, which reduces the required qubits to $N_q$. The maximally mixed state $\ensuremath{\mathbf{I}}/\mathbf{d}$ describes that the probabilities of sampling every basis vector from a given orthogonal basis are the same, where each basis vector is a pure state. As the maximally mixed state is unitarily invariant $U(\ensuremath{\mathbf{I}}/\mathbf{d})U^{-1}=\ensuremath{\mathbf{I}}/\mathbf{d}$, the orthogonal basis can be chosen arbitrarily. 
To generate the thermal state, it is recommended in \cite{White2009} to use a basis formed by classical product states, such as $\{\ket{+},\ket{-}\}^{\otimes N_q}$, where $\{\cdot\}^{\otimes N_q}$ represents a set generated by the $N_q$ times tensor product of each element in $\{\cdot\}$. For example, \begin{align} \{\ket{+},\ket{-}\}^{\otimes 2}=\{\ket{++},\ket{+-},\ket{-+},\ket{--}\}. \label{eq:X-basis-vector} \end{align} Here $\ket{+}$,$\ket{-}$ represent the eigenvectors of the Pauli-$X$ operator \begin{align} X\ket{+}=\ket{+},\quad X\ket{-}=-\ket{-}. \label{eq:X-basis} \end{align} If we use the classical product state as the initial state, the thermal expectation value $\langle O\rangle$ can not be measured straightforwardly due to the normalization factor in Eq.~(\ref{eq:psi_QITE}). Assume that we take the orthogonal basis as $\{\ket{i}\}$. Evolving all basis vectors $\ket{i}$ for imaginary time $\tau$, one gets the expectation values of an observable $O$, which read \begin{align} \bra{i(\tau)}O\ket{i(\tau)}=\frac{\bra{i}e^{-\tau H}Oe^{-\tau H}\ket{i}}{\bra{i}e^{-2\tau H}\ket{i}}. \label{eq:expectation_CPS} \end{align} Usually, the denominators would be different for different basis vectors $\ket{i}$. To derive the thermal expectation value $\langle O\rangle$ in Eq.~(\ref{eq:thermal-expectation}), we should multiply the above expectation values with coefficients $\{p_i\}$ \begin{align} \langle O\rangle =\sum_i p_i \bra{i(\tau)}O\ket{i(\tau)}, \label{eq:CPS-expectation} \end{align} where $p_i$ is defined by \begin{align} p_i\equiv\frac{\bra{i}e^{-2\tau H}\ket{i}}{Z_{2\tau}}. \end{align} Here $\{p_i\}$ can be treated as a probability distribution, as they are all positive and satisfy the normalization condition $\sum_i p_i=1$. To evaluate the thermal expectation value of the operator $O$, as mentioned in \cite{Motta_20}, we do not need to calculate all the $\{p_i\}$(which would be impossible to calculate, as the number of $p_i$ grows exponentially with the number of qubits). With the minimally entangled typical thermal state(METTS) algorithm proposed by Stoudenmire and White~\cite{Stoudenmire_2010}, one can sample $\{\ket{i}\}$ according to the distribution $\{p_i\}$. The thermal expectation value $\langle O\rangle$ is the average of the expectation of $O$ with the time-evolved sampled vectors. In conclusion, though imaginary time evolution with CPS as initial states requires the number of qubits equal to the system size, one has to evolve different initial states $\ket{i}$ to acquire statistics. On the other hand, imaginary time evolution with APS as an initial state doubles the number of qubits while evolving only one initial state. However, the situation gets simplified when we consider the classical Ising model and the observables in Eq.~(\ref{eq:concerned-observable}), which consist of Pauli-$Z$ operators. The observables can be generally expressed as \begin{align} O=\sum_m h_m\tilde{Z}_m. \label{eq:observable-Z-decomposition} \end{align} Here $\tilde{Z}_m$ represents the tensor product of $Z$ operators at some sites and identity operators at the others, such as $\tilde{Z}_m=Z_{N_q-1}\ldots I_1Z_0$. 
In Appendix~\ref{app:cps-expectation}, we prove that the thermal expectation value of $O$ can be calculated according to \begin{equation} \begin{aligned} \langle O\rangle=\sum_m h_m \bra{\tilde{+}(\tau)}\tilde{Z}_m\ket{\tilde{+}(\tau)}, \label{eq:simplified-estimation} \end{aligned} \end{equation} where $\ket{\tilde{+}(\tau)}$ is imaginary time evolved state according to Eq.~(\ref{eq:psi_QITE}). The state is initialized as $\ket{\tilde{+}(0)}=\ket{\tilde{+}}$, where $\ket{\tilde{+}}\equiv\ket{+}^{\otimes N_q}$ is the $N_q$-fold tensor product of $\ket{+}$ in Eq.~(\ref{eq:X-basis}). Thus for the Ising model, we only need to calculate the imaginary time evolution with the initial state $\ket{\tilde{+}}$. In this work, we use $\ket{\tilde{+}}$ as the initial state to present our results. For general models, such as the Ising model with a transversal field, the above simplification does not hold. We need to sample the classical product states using the METTS algorithm or utilize the ancilla pair state. \subsection{Variational ansatz design} Choosing a proper variational ansatz is a cornerstone for the success of the QITE-ansatz algorithm~\cite{Bharti22}. In most literature on QITE-ansatz, the variational ansatz is designed to prepare the ground state of a Hamiltonian, and it is suitable to evolve some specific initial states, such as the unitary coupled cluster ansatz evolving Hartree-Fock states~\cite{McArdle_20}. Focusing on thermal state preparation and the initial state introduced in the previous section, we propose to construct a variational ansatz converted from quantum circuits utilized in the QITE-measure algorithm proposed by Motta et. al.~\cite{Motta_20}. We briefly introduce how to construct the quantum circuits used in the QITE-measure algorithm. The goal of QITE-measure is also evolving an initial state $\ket{\psi(0)}$ according to Eq.~(\ref{eq:psi_QITE}). Consider evolving the state $\ket{\psi(\tau_0)}$ for a small time slice $\Delta\tau$ \begin{align} \ket{\psi(\tau_0+\Delta\tau))}=\frac{e^{-\Delta\tau H}\ket{\psi(\tau_0)}}{\sqrt{\bra{\psi(\tau_0)}e^{-2\Delta\tau H}\ket{\psi(\tau_0)}}}. \end{align} As this transformation is unitary, we can always find a Hermitian operator $\hat{A}(\tau_0)$ such that \begin{align} \ket{\psi(\tau_0+\Delta\tau)}=e^{-i\Delta \tau \hat{A}(\tau_0)}\ket{\psi(\tau_0)}, \label{eq:psi_measurement_QITE} \end{align} and $\hat{A}(\tau_0)$ can be expanded in a complete Pauli basis \begin{align} \hat{A}(\tau_0)=\sum_{i_1\ldots i_{N_q}} a_{i_1\ldots i_{N_q}}^{(\tau_0)} \sigma_{i_1}\ldots \sigma_{i_{N_q}}\equiv \sum_{I} a_{I}^{(\tau_0)} \tilde{\sigma}_I, \label{eq:Hermitian-expansion} \end{align} where the expansion coefficients $ a_{i_1\ldots i_{N_q}}^{(\tau_0)}$ are real due to the Hermicity of $\hat{A}(\tau_0)$, and $\sigma_{i_j}=I,X,Y,Z$ corresponding to $i_j=0,1,2,3$ is the single-qubit Pauli operator on the site $j$, and we call the tensor product of the single-qubit Pauli operator, $\tilde{\sigma}_I$ as Pauli string. For this reason, the single-qubit Pauli operator is sometimes called Pauli letter~\cite{Oliver_22}. For each imaginary time $\tau_0$, one can calculate all the expansion coefficients $a_{I}^{(\tau_0)}$ by evaluating the expectation values of some observables with respect to the quantum state $\ket{\psi(\tau_0)}$. The observables are the composition of Pauli strings and the Hamiltonian (See more details in \cite{Motta_20}). 
Notice that the transformation in Eq.~(\ref{eq:psi_measurement_QITE}) can be approximated by \begin{align} e^{-i\Delta \tau \sum_{I} a_{I}^{(\tau_0)} \tilde{\sigma}_I}=\prod_I e^{-i\Delta \tau a_{I}^{(\tau_0)} \tilde{\sigma}_I}+\mathcal{O}(\Delta \tau^2), \end{align} where the product consists of several Pauli exponentials which have the form $e^{-i\theta \tilde{\sigma}_I}$, and the Pauli exponential can be realized with quantum gates in a standard way~\cite{Nielsen2000}. Thus, the whole quantum circuit used in the QITE-measure can be constructed using several Pauli exponentials for each time slice. In the last time slice, the circuit depth is proportional to the final imaginary time $\tau$. Notice that if a system has $N_q$ qubits, the total number of Pauli strings on these qubits is $4^{N_q}$. Thus the number of Pauli exponentials required for evolving each time slice seems exponential as a function of system size according to Eq.~(\ref{eq:Hermitian-expansion}). However, the situation gets simplified when the Hamiltonian $H$ consists of some local interaction terms \begin{align} H=\sum_m H_m, \end{align} where each $H_m$ acts on a local set of qubits, and the number of $H_m$ is polynomial as a function of system size. For example, $H_m\propto Z_i Z_j$ and the number of $H_m$ is $\mathcal{O}(N_q^2)$ in case of long-range interacting Ising model. Though the local terms $H_m$ may not commute, the imaginary time evolution $e^{-\Delta \tau H}$ can be decomposed by \begin{align} e^{-\Delta \tau H}=\prod_m e^{-\Delta \tau H_m}+\mathcal{O}(\Delta \tau^2). \end{align} Then the previous steps in QITE-measure can be implemented for each $e^{-\Delta \tau H_m}$. As shown in \cite{Motta_20}, when the Hamiltonian consists of local terms and the correlation length of the system is finite, the expansion in Eq.~(\ref{eq:Hermitian-expansion}) for each $H_m$ can be implemented with Pauli strings on a support constantly larger than the support of $H_m$ (Support of a Pauli string is defined by the set of qubits on which the Pauli letters are not identity). The correlation length of a system is finite when its Hamiltonian is outside the critical region. Thus the support of the Pauli strings has no dependence on the system size, and the total number of Pauli exponentials $e^{-i\theta \tilde{\sigma}_I}$ is a polynomial function of the system size at least when the Hamiltonian is sufficiently far away from the critical point. Compared with the QITE-ansatz, the precision of the QITE-measure is not limited by the variational ansatz. However, the circuit depth grows linearly with the evolution time $\tau$. Thus this algorithm would be very sensitive to coherent or incoherent noise in real quantum devices and can only be applied to small spin systems~\cite{Aydeniz_2020}. Quantum circuits constructed in QITE-measure can be naturally converted into a variational ansatz with the following steps: (1) using all the necessary Pauli exponentials at one time slice as one layer of the variational ansatz; (2) sequentially repeating the layer several times in the quantum circuit; (3) converting all the expansion coefficients $a_{I}^{(\tau_0)}$ into undetermined parameters, which are initially zero and to be evolved according to the QITE-ansatz algorithm. Times of repetition for one layer is called the depth of the variational ansatz, also called the number of layers. The behavior of this variational ansatz can be analyzed with the help of QITE-measure. 
Assume that the variational ansatz in QITE-ansatz has the same quantum circuit layers as the quantum circuits in QITE-measure. Because the states prepared in QITE-measure can all be explored by the variational ansatz, one can expect QITE-ansatz with this circuit to perform at least as well as QITE-measure. The systematic error of the QITE-measure circuit is of the first-order Trotter type, i.e., $\mathrm{error}\sim \mathcal{O}(\Delta \tau)$~\cite{Motta_20}. By equating the longest circuit depth used in QITE-measure with the depth of the variational ansatz, it can be deduced that, in the worst case, the variational ansatz leads to an error of $\mathcal{O}(1/L)$, where $L$ is the number of layers. \begin{figure*} \centering \includegraphics[width = 0.65\textwidth]{Circuits_ab.pdf} \includegraphics[width = 0.9\textwidth]{Circuits_c.pdf} \caption{Example of circuits for the imaginary time evolution of Ising systems. The basic building blocks of the circuits are defined as $U_{ZY}(\theta)\equiv e^{-i\theta ZY}, U_{YZ}(\theta)\equiv e^{-i\theta YZ}$. (\textbf{a}) The quantum circuit in the QITE-measure algorithm to carry out the imaginary time evolution $e^{\tau ZZ}\ket{++}$. $\Delta \tau$ is the length of one time slice. (\textbf{b}) The variational ansatz converted from the QITE-measure circuit. $L$ is the number of circuit layers, and $\theta_i,\theta_i', i\in[1,L]$ are free variational parameters. (\textbf{c}) The variational ansatz for the nearest-neighbor 1-D Ising chain under periodic boundary conditions. Each layer consists of one layer of ZY-Pauli exponentials and one layer of YZ-Pauli exponentials, as shown in the dashed box. The figure shows the case of two layers, and we have measurements denoted by black boxes at the end of the circuit.} \label{fig:circuit} \end{figure*} In the numerical simulations, we find that the circuit depth required in QITE-ansatz is much smaller than that required in QITE-measure. For example, in our numerical simulation of the Ising model, if the imaginary time of the final state is $\tau=0.5$, with step size $\Delta \tau=0.002$, QITE-measure requires $\tau/\Delta \tau=250$ layers. In contrast, to reach a sufficiently good precision using the variational ansatz, we find that the number of layers required is at most $L=N_d$ for the 2-D nearest-neighbor Ising model, where $N_d$ is the side length of the Ising lattice system. More details on the number of circuit layers required are shown in Appendix~\ref{app:error-and-layer-estimation}. The variational ansatz can be simplified due to some special structures of the Hamiltonian and the initial state. In the numerical simulations, we notice that some of the variational parameters are always zero during the whole evolution, corresponding to the same set of Pauli strings over all the layers. We call the Pauli strings in this set \textit{irrelevant}, and the other Pauli strings, corresponding to non-zero variational parameters, \textit{relevant}. As the irrelevant Pauli exponentials are the identity, they can be removed a priori when constructing the variational ansatz. These irrelevant Pauli strings can be identified according to the symmetry and some special structures of the Hamiltonian and the initial state. For example, if all the entries in the Hamiltonian and the initial state are real, then the corresponding unitary operator $e^{-i\Delta \tau \hat{A}}$ should also be real. Thus all Pauli strings with an even number of Pauli-$Y$ letters are irrelevant.
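The parity argument above is straightforward to apply programmatically. As a small illustration (our sketch, not code from this work), the Pauli strings that survive the even-$Y$ elimination can be enumerated as follows:
\begin{verbatim}
# Enumerate Pauli strings on N_q qubits and keep only those with an odd
# number of Y letters; by the reality argument above, only these can
# contribute to the generator and hence to the variational ansatz.
from itertools import product

def odd_Y_strings(n_qubits):
    strings = ["".join(p) for p in product("IXYZ", repeat=n_qubits)]
    return [s for s in strings if s.count("Y") % 2 == 1]

print(odd_Y_strings(2))
# ['IY', 'XY', 'YI', 'YX', 'YZ', 'ZY']  -- the six two-qubit candidates
# discussed in the example below
\end{verbatim}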
\begin{figure*} \centering \includegraphics[width = 0.40\textwidth]{Energy_capacity_2D.pdf} \includegraphics[width = 0.40\textwidth]{susceptibility_2D.pdf} \includegraphics[width = 0.40\textwidth]{Energy_capacity_3D.pdf} \includegraphics[width = 0.40\textwidth]{susceptibility_3D.pdf} \caption{Specific heat(left column) and susceptibility(right column) as a function of $K$ in 2-D(upper row) and 3-D(lower row) nearest-neighbour Ising model($\alpha=\infty$). ED represents results from exact diagonalization. We see that the results from the noiseless quantum simulation are close to exact diagonalization, especially in the region far from the critical point. The black dashed lines in the four panels are the exact critical temperatures $K_c$ of the corresponding dimension in the infinite volume limit. The solid grey line in the upper left panel shows the peak movement as the system size enlarged, which is fitted inspired by finite size scaling~(FSS). As the lattice size increases, the peaks of the specific heat and the leaps of the susceptibility are more obvious, and the transition points approach the exact critical point.} \label{fig:2-3-d-Cv-Chi} \end{figure*} We demonstrate the above construction of variational ansatz using an example of a two-qubit~($N_q=2$) Ising system. There are $4^2=16$ Pauli strings on the two-qubit system. Assume we have an initial state $\ket{++}$ and the system Hamiltonian $H=-Z_1Z_0$. Because all the entries in the Hamiltonian and the initial state are real, eliminating Pauli strings with an even number of Pauli-$Y$ letters leaves 6 Pauli strings: $I_1Y_0, X_1Y_0, Y_1I_0, Y_1X_0, Z_1Y_0, Y_1Z_0$. Evolving one layer with these 6 Pauli strings using QITE-ansatz, we further find 4 Pauli strings are irrelevant. It leaves only two relevant Pauli strings for the imaginary time evolution \begin{align} Z_1Y_0,Y_1Z_0. \end{align} One can verify that \begin{equation} \begin{aligned} e^{-\Delta \tau H}\ket{++}&=e^{\Delta \tau Z_1 Z_0}\ket{++}\\ &\propto e^{-ia_1(0) Z_1 Y_0}e^{-ia_2(0) Y_1 Z_0}\ket{++}, \label{eq:two-qubit-QITE} \end{aligned} \end{equation} with expansion coefficients \begin{align} a_1^{(0)}=a_2^{(0)}=\frac{1}{2}\tan^{-1}(\tanh \Delta \tau). \end{align} In the QITE-measure algorithm, to evolve the initial state to an arbitrary time $\tau$, the quantum circuit is shown in figure~\ref{fig:circuit}\textbf{a}. It has $\tau/\Delta \tau$ layers. The variational ansatz with $L$ layers for the two-qubit Ising system is constructed as shown in figure~\ref{fig:circuit}\textbf{b}. In this circuit, $\{\theta_1,\theta_1'\ldots \theta_L,\theta_L'\}$ are all variational parameters, taking zero as initial values, and to be evolved according to the QITE-ansatz algorithm. \section{Numerical results}\label{sec:numerical-result} In this section, we apply the previous variational ansatz design procedure to the long-range interacting Ising model, where we will prepare CPS as the initial state. Equipped with the thermal state, we can calculate the specific heat $C_v$ and susceptibility $\chi$ as a function of $K\equiv J\beta$. Our numerical simulations are carried out on the Qiskit noiseless statevector quantum simulator~\cite{Qiskit}. The initial state and variational ansatz are chosen as described in section~\ref{sec:QITE}. To calculate the thermal expectation values of the Ising model, we only need to calculate the imaginary time evolution of the product state $\ket{\tilde{+}}$. 
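For readers who wish to reproduce the circuit structure, the two-qubit ansatz of figure~\ref{fig:circuit}\textbf{b} can be assembled, for instance, with Qiskit's \texttt{PauliEvolutionGate}. The sketch below is ours and purely illustrative; the gate synthesis and parameter handling used in the actual simulations may differ, and the parameter values here are simply set to zero (the $\tau=0$ starting point):
\begin{verbatim}
# Illustrative Qiskit sketch of the two-qubit ansatz of Fig. 1(b): L layers of
# exp(-i*theta*Z1Y0) exp(-i*theta'*Y1Z0) acting on |++>.  In QITE-ansatz the
# angles start at zero and are updated at every imaginary-time step; here the
# circuit is simply rebuilt for a given set of angles.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import PauliEvolutionGate
from qiskit.quantum_info import SparsePauliOp, Statevector

def ansatz(angles):
    # angles: array of shape (L, 2), one (theta, theta') pair per layer
    qc = QuantumCircuit(2)
    qc.h([0, 1])                                  # initial state |++>
    for th, thp in angles:
        # PauliEvolutionGate(P, time=t) implements exp(-i t P); Qiskit Pauli
        # strings are little-endian, so "ZY" = Z on qubit 1, Y on qubit 0.
        qc.append(PauliEvolutionGate(SparsePauliOp("ZY"), time=float(th)), [0, 1])
        qc.append(PauliEvolutionGate(SparsePauliOp("YZ"), time=float(thp)), [0, 1])
    return qc

L = 2
angles = np.zeros((L, 2))                         # tau = 0: identity circuit
state = Statevector(ansatz(angles))
print(state.expectation_value(SparsePauliOp("ZZ")).real)   # <Z1 Z0> = 0 at tau = 0
\end{verbatim}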
With the initial state, and for every local interaction term in Ising model $Z_iZ_j(\forall i,j\in\Lambda$), we have the corresponding relevant Pauli strings \begin{align} Z_iY_j, Y_i Z_j. \label{eq:ZZ_unitary} \end{align} Then we can construct the variational ansatz for the target Ising Hamiltonian. An example of a variational ansatz for nearest-neighbour Ising chain under periodic boundary conditions is shown in figure~\ref{fig:circuit}\textbf{c}. Each layer of the variational ansatz consists of one layer of ZY-Pauli exponentials and one layer of YZ-Pauli exponentials, as shown in the dashed box. In the figure, we show the case of two layers, and we will use two layers in the following numerical simulations if not specified otherwise. One has to notice that here we assume the imaginary time evolution of each local interaction term $e^{\tau Z_iZ_j}$ can be realized with the Pauli exponentials $e^{-i\theta Z_i Y_j}e^{-i\theta' Y_i Z_j}$, which have the same support of $Z_iZ_j$. These two Pauli exponentials are enough in the 2-qubit case as indicated by Eq.~(\ref{eq:two-qubit-QITE}), but are not when the system size is large and when the system approaches the critical point, as explained in the previous section. It means that the expressivity of this variational ansatz is not sufficiently good to carry out the whole imaginary time evolution $e^{-\tau H}$. Limited expressivity leads to systematic errors, which will affect the numerical results. First, we present the numerical results of the nearest-neighbor Ising model(NNIM), i.e., taking the limit $\alpha\rightarrow\infty$ in Eq.~(\ref{eq:long-range-interacting-Hamiltonian}). With the nearest-neighbor interaction, there are $N=2D|\Lambda|L$ parameters in the variational ansatz. In two and three-dimensional NNIMs, there is a second-order phase transition in the infinite volume limit, where the critical points are $K_c=\ln(1+\sqrt{2})/2\approx 0.441$~\cite{Schultz_64} and $0.222$~\cite{Sonsin_2015} for dimension $D=2,3$, respectively. The specific heat and susceptibility would hence diverge near the critical point in the infinite volume limit. Figure~\ref{fig:2-3-d-Cv-Chi} shows the specific heat and susceptibility for various $K$ values obtained via QITE-ansatz. The lattice size is $2\times 2,3\times 3,4\times 4$ for the 2-D system, marked by triangular-down, circle and triangular-up, respectively, and $2\times 2\times 2,3\times 3\times 2$ for the 3-D system, with results marked by triangular-down and circle respectively. In the evolution of the variational parameters, we use the Euler method with step length $\delta\tau=0.002$ as in Eq.~(\ref{eq:Euler-integration}), which is chosen such that further shrinking the step length has no impact on the numerical results (We will take this step length also throughout the following simulations.). We see that the QITE results converge well with the results from exact diagonalization(ED) when the system size is small for both 2-D and 3-D systems. For $4\times 4$ and $3\times 3\times 2$ lattices, the specific heat curves deviate from the ED curves near the critical point, which result from the limitation of the variational ansatz expressivity. The expressivity can be improved by increasing the number of ansatz layers and using longer Pauli strings for each local interaction term beyond $Z_iY_j, Y_i Z_j$. More detailed error analyses are shown in Appendix~\ref{app:error-and-layer-estimation}. Indications of the Ising criticality can be observed in figure~\ref{fig:2-3-d-Cv-Chi}. 
The critical temperatures of the 2-D and 3-D systems in the infinite volume limit are denoted by the black dashed lines. Near the critical points, the values of the specific heat and susceptibility increase, and there are peaks in the specific heat as a function of $K$. For the 2-D NNIM with volume $N_d\times N_d$, we denote the position of the peak as $K_c(N_d)$. For larger system sizes, $K_c(N_d)$ moves slowly towards the infinite volume critical point $K_c$. To guide the eye through this movement, we draw the grey solid line in the upper left panel of figure~\ref{fig:2-3-d-Cv-Chi}. The analytic expression of the grey solid line is inspired by finite size scaling~\cite{Pascal_lecture}. \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{Energy_capacity_2D_3Col_3Row_long_range.pdf} \caption{Specific heat as a function of $K$ in the 2-D long-range interacting Ising model with interaction range $\alpha=1,2,3,\infty$, where a smaller $\alpha$ indicates a larger interaction range. The system size is $|\Lambda|=3\times 3$. ED represents results from exact diagonalization. We see that for various $\alpha$ and $K$, the QITE results and the ED results are consistent. The black dashed line denotes the exact critical point of the 2-D NNIM in the infinite volume limit. As $\alpha$ decreases, the peak of the specific heat curve shifts to the left, indicating that the effective dimension is raised for a larger interaction range.} \label{fig:specific-heat-long-range} \end{figure} Figure~\ref{fig:specific-heat-long-range} presents the behavior of the specific heat for the 2-D long-range interacting Ising model with finite $\alpha$. Compared with the nearest-neighbor interaction, the long-range model introduces more $Z_iZ_j$ interactions and requires more variational parameters. There are $N=|\Lambda|(|\Lambda|-1)L$ parameters in the variational ansatz. The system size in the figure is $|\Lambda|=3\times 3$, with $\alpha=1,2,3$ and the nearest-neighbor case $\alpha=\infty$, marked by the triangular-up, cross, triangular-down and circle, respectively. We see that for various $\alpha$ and $K$, the QITE-ansatz results and the ED results are consistent. Moreover, the peak of the specific heat shifts towards higher temperature (smaller $K$) for a larger interaction range (smaller $\alpha$). This behavior is reasonable since the long-range interaction effectively raises the system's dimension, and a higher system dimension leads to a higher critical temperature; e.g., the 3-D NNIM critical temperature is higher than that of the 2-D NNIM. \section{Discussion}\label{sec:discussion-and-outlook} This work discussed the possibility of using the imaginary time evolution algorithm to prepare the thermal state of the Ising model on NISQ devices. We numerically calculated the specific heat and susceptibility of the long-range interacting Ising model with the prepared thermal state. We found that the results using the quantum algorithm are consistent with the ones from exact diagonalization for various temperatures, including the critical and low-temperature regions. We presented a systematic procedure to design a variational ansatz for the thermal state preparation. This ansatz is inherited from the quantum circuits used in the QITE-measure algorithm. We showed that it outperforms the original circuit designed using QITE-measure. This variational ansatz can be further simplified according to the symmetry of the Hamiltonian and the initial state.
The ideas proposed in this work can be applied to study the critical behavior of other classical models, such as the $Q$-state Potts model, which would be difficult to simulate using the Monte-Carlo algorithm when $Q$ is very large. Additionally, according to the correspondence of the $D$ dimensional quantum model to the $D+1$ dimensional classical model~\cite{Hsieh2012FromDQ}, the algorithm can also be used to study quantum phase transition. \section{Acknowledgements} We thank Xiao Yuan, Jinzhao Sun, Lena Funcke, Stefan K\"uhn and Yahui Chai for helpful discussions. X.W. and X.F. were supported in part by NSFC of China under Grants No. 12125501, No. 12070131001, and No. 12141501, and National Key Research and Development Program of China under No. 2020YFA0406400. PS acknowledges support from: ERC AdG NOQIA; Ministerio de Ciencia y Innovation Agencia Estatal de Investigaciones (PGC2018-097027-B-I00/10.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI, QUANTERA MAQS PCI2019-111828-2, QUANTERA DYNAMITE PCI2022-132919, Proyectos de I+D+I “Retos Colaboración” QUSPIN RTC2019-007196-7); MICIIN with funding from European Union NextGenerationEU(PRTR-C17.I1) and by Generalitat de Catalunya; Fundació Cellex; Fundació Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT \ U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2022-1-0042); EU Horizon 2020 FET-OPEN OPTOlogic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 — NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal “QuantumGaudi” project; European Union’s Horizon 2020 research and innovation program under the Marie-Skłodowska-Curie grant agreement No 101029393 (STREDCH) and No 847648 (“La Caixa” Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012, LCF/BQ/PR21/11840013). Views and opinions expressed in this work are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them.
{ "arxiv_id": "2302.14347", "language": "en", "timestamp": "2023-03-01T02:10:27", "url": "https://arxiv.org/abs/2302.14347", "yymm": "2302" }
\section{Introduction}\label{intro} \begin{figure*} \centering \includegraphics[scale=0.5]{Fig1.jpeg} \caption{This figure shows the WISE $\rm 22~\mu m$ image of the IC 1396. The cross mark (`$\times$') shows the position of the massive star HD 206267. The major globules from \citet{1991ApJS...77...59S} are marked as `+' symbols, and their IDs are mentioned. The white dashed circle (radius of $1.5^\circ$) is the region considered in this study for searching members using the Gaia-DR3 data. Center of the circle ($\rm \alpha = 21:40:39.28$ and $\rm \delta = +57:49:15.51$) marked as a magenta `+'. A scale bar of 10~pc is shown in the top-left corner.} \label{wise} \end{figure*} Star formation is one of the most complicated yet least understood phenomena in the field of astrophysics. Most of the stars form in clusters \citep{1964ARA&A...2..213B,1983MNRAS.203.1011E,1987IAUS..115....1L,2000prpl.conf..151C, 2004ApJS..154..367M,2008MNRAS.389.1556B,2010ARA&A..48..431P,2011MNRAS.410L...6G,2012MNRAS.419.2606B} by the fragmentation and hierarchical collapsing of molecular clouds \citep{1981MNRAS.194..809L,2004ARA&A..42..211E,2004RvMP...76..125M,2007ARA&A..45..565M}. Star clusters are unique tracers of galactic properties such as their origin, dynamics, and evolution \citep{2008IAUS..246...13K,2016ApJ...828...75F}. In addition to this, such studies aid in investigating the kinematics, dispersion, and evolution of the star-forming environment \citep{2019ApJ...870...32K,2019ApJ...871...46K,2020ApJ...900L...4P}. Clusters with massive O and B type stars serve as important laboratories for star-formation since these massive stars ionize their surroundings, create H\,{\scshape ii}\ regions and shape the evolution of low mass star population in the vicinity through their feedback effects \citep{2016ApJ...822...49J,2014A&A...566A.122S,2017MNRAS.472.4750D,2021MNRAS.500.3123D,2020A&A...638A...7Z,2021MNRAS.508.3388G,2022ApJ...926...25P}. Hence, the identification and characterization of cluster members are essential to investigate various star-formation properties, such as stars form hierarchically by the natural collapse of clumpy molecular clouds or by the collapsing gas formed through sweeping and compression of the cold neutral gas by the H\,{\scshape ii}\ regions and bubbles. The distinction between these processes is important in understanding the net outcome of star formation, such as star formation efficiency (SFE) and star formation rate (SFR) due to various modes of star formation processes \citep{2012MNRAS.424..377D,2013MNRAS.430..234D,2015MNRAS.454..238W}. The Global Astrometric Interferometer for Astrophysics (Gaia; \citealt{2016A&A...595A...1G}) data has revolutionized the identification and investigation of various scientific properties of the Galactic clusters \citep{2017MNRAS.470.2702K,2018A&A...616A..12G, 2019A&A...623A.108B,2019ApJ...870...32K, 2021MNRAS.504.2557D}. The Gaia-DR2 \citep{2018A&A...616A...1G} data contains the five parameters (positions, parallax, and proper motions) and astrometric solutions of $\sim$1.3 billion of stars up to G-band magnitude of 21 \citep{2018A&A...616A...1G}. Compared to Gaia-DR2, the Gaia-EDR3 improved the accuracy in proper motion and parallax measurements by factors of 2 and 2.5, respectively \citep{2020arXiv201201533G}. This accuracy improvement has benefited a better distinction of cluster members, especially for distant clusters. The final data release, Gaia-DR3, has significantly improved the radial velocity measurement of stars. 
The Gaia-DR3 preserves the astrometry properties of Gaia-EDR3, but it has improved the radial velocity measurement compared to the Gaia-DR2 in terms of accuracy and number of stars. This work aims to identify the new member population associated with the star-forming complex around Trumpler 37 (Tr 37) in IC 1396 using the multi-dimensional Gaia-DR3 data and machine learning techniques. This work is arranged as follows. We describe the complex IC 1396 in Section \ref{source_detl}. In Section \ref{analy}, we present the analysis and results of this work. This includes the details of Gaia-DR3 data, the membership analysis using the machine learning approach, and the properties of the identified members. In Section \ref{res}, we discuss the various physical properties of IC 1396 derived using new members identified in this work along with literature-based members. We discuss the complex's 3D kinematic property and star-formation history in Section \ref{diss}. We summarize our work in Section \ref{summ}. \\ \\ \section{IC 1396} \label{source_detl} The star-forming complex around Trumpler 37 (Tr 37; \citealt{1930LicOB..14..154T}) in IC 1396, shown in Figure \ref{wise}, is one of the classic examples of H\,{\scshape ii}\ regions with simple circular morphology, which is a part of the Cepheus OB2 complex \citep{1999AJ....117..354D}. IC 1396 has relatively low ($\rm A_V<$5~mag) foreground reddening \citep{2005AJ....130..188S,2012MNRAS.426.2917G,2012AJ....143...61N}. The star-forming complex is believed to be powered by the massive star (HD 206267) of spectral type O6 V, located near the center \citep{1995Obs...115..180S}. This H\,{\scshape ii}\ region is well known for its association with more than 20 bright-rimmed clouds (BRCs; \citealt{1991ApJS...77...59S}), fingertip structures, and elephant trunk structures in and around them, suggesting feedback effect from the massive central star \citep{1991ApJ...370..263S, 2005A&A...432..575F, 2012MNRAS.421.3206S}. The well-known BRCs at the peripheries of the H\,{\scshape ii}\ region (IC 1396A and IC 1396N) have often been referred to as the best examples of feedback-driven star formation \citep{2004AJ....128..805S,2006ApJ...638..897S,2007ApJ...654..316G,2010ApJ...717.1067C, 2013A&A...559A...3S,2014MNRAS.443.1614P,2014A&A...562A.131S,2019A&A...622A.118S}, with many previous studies focused around IC 1396A. Using Gaia-DR2 data of the previously identified members, \citet{2019A&A...622A.118S} estimate a distance of $\rm 945^{+90}_{-73}~pc$, which is consistent within errors with the previous estimate of \citet{2002AJ....124.1585C}. Also, \citet{2005AJ....130..188S} obtained a mean age of $\rm \sim2 - 4~Myr$ of the complex based on the spectroscopically identified members. The modest distance and low foreground reddening make IC 1396 an ideal target for understanding the evolution of the H\,{\scshape ii}\ region and exploring the low-mass population associated with the complex. We present the entire field of view of IC 1396 using the WISE $\rm 22~\mu m$ image in Figure \ref{wise}. The region exhibits a prominent mid-infrared cavity of radius $\sim$ $1.5^\circ$, which signifies the role of UV photons from the associated massive stars towards the gas and dust content of the cluster. BRCs, fingertip, and elephant trunk structures are visible towards the periphery of the H\,{\scshape ii}\ region displaying the feedback-driven activity in the region. 
To better understand the evolution of the host H\,{\scshape ii}\ region and its possible impact on the next generation stars associated with BRCs/globules and hence the star formation history of the complex, it is important to identify the total member population of the whole complex. There have been many studies in the past in search of the young stellar objects (YSOs) associated with the complex, however these surveys have different area coverage and sensitivity. A brief detail of the membership analysis from previous works towards the complex is given in the next subsection. Gaia-DR3, due to its improvement in both photometry, astrometry, and radial velocity measurements over Gaia-DR2, is the best data set to obtain the membership population of the complex and, subsequently, its physical properties. \subsection{Member population from previous studies} \label{lit_stars} \begin{table*} \small \centering \caption{Area covered and the number of stars obtained in previous works of literature.} \label{work_mem} \begin{tabular}{ccccc} \\ \hline \hline Work & No. of stars & Radius (degree) & RA (J2000) & DEC (J2000) \\ \hline \citet{2002AJ....124.1585C} & 66 & 0.5 & 21:39:09.89 & +57:30:56.07 \\ \citet{2006AJ....132.2135S} & 172 & 0.6 & 21:37:54.41 & +57:33:15.32 \\ Sicilia-Aguilar et al. (2013) & 67 & 0.25 & 21:37:03.17 & +57:29:05.43 \\ \citet{2004ApJS..154..385R} & 17 & 0.12 & 21:36:33.09 & +57:29:13.83 \\ \citet{2006ApJ...638..897S} & 57 & 0.15 & 21:36:39.73 & +57:29:28.45 \\ \citet{2009ApJ...702.1507M} & 69 & 0.15 & 21:36:36.32 & +57:29:54.78 \\ \citet{2011MNRAS.415..103B} & 158 & 1.5 & 21:40:00.43 & +57:26:42.60 \\ \citet{2012AJ....143...61N} & 639 & 1.4 & 21:39:48.76 & +57:30:31.56 \\ \citet{2007ApJ...654..316G} & 24 & 0.1 & 21:40:36.73 & +58:15:37.51 \\ \citet{2009AJ....138....7M} & 39 & 0.15 & 21:38:54.67 & +57:29:17.61 \\ \citet{2012MNRAS.426.2917G} & 457 & 0.25 & 21:37:05.85 & +57:32:30.06 \\ \citet{2021AJ....162..279S} & 421 & 0.37 & 21:33:59.30 & +57:29:30.76\\ Cantat-Gaudin et al. (2018) & 460 & 0.7 & 21:38:58.80 & +57:30:50.40 \\ \hline \end{tabular} \end{table*} The identified member population towards this complex in the previous studies can broadly be divided into four categories. Spectroscopically identified members \citep{2002AJ....124.1585C,2006AJ....132.2135S,2013A&A...559A...3S}, {\it Spitzer} based NIR excess sources \citep{2004ApJS..154..385R,2006ApJ...638..897S,2009ApJ...702.1507M}, identification based on $\rm H_{\alpha}$ excess emission \citep{2011MNRAS.415..103B,2012AJ....143...61N}, and X-ray emission sources \citep{2007ApJ...654..316G,2009AJ....138....7M,2012MNRAS.426.2917G}. In addition, a relatively more recent analysis by \citet{2021AJ....162..279S} combines the near-infrared data from UKIRT with X-ray data from XMM-Newton to identify Class III YSO cluster members in a region covering the IC 1396A region. Altogether, there are 1791 candidate members identified in the literature. Apart from this, \citet{2018A&A...618A..93C} have analyzed a large number (1229) of Milky Way clusters using the Gaia-DR2 catalog. They used an unsupervised machine-learning technique to detect the member stars. They have listed the stars with membership probability greater than $50\%$ as candidate cluster members. For IC 1396, they have identified 460 stars within a radius of $~0.7^{\circ}$ centered at $\rm \alpha = 21:38:58.80$ and $\rm \delta = +57:30:50.40$. This region mostly covers the central part of the complex around the massive star HD 206267. 
Recently, \citet{2022arXiv221011930P}\footnote{This article is in press, hence a detailed comparison of the sources could not be incorporated.}, using Gaia-EDR3 and an optical spectroscopic analysis of the complex, provide the distance, age, and distribution of the member sources. In Table \ref{work_mem}, we summarize details of the area covered and the number of stars retrieved in each individual work. We search for member stars within the region of $1.5^\circ$ radius shown as a white dashed circle in Figure \ref{wise}, aiming to identify new members of the complex. In Section \ref{compa_lit}, we compare the catalog identified in this work with the literature. \section{Analysis \& results} \label{analy} \subsection{Data from Gaia-DR3} \label{data} To obtain the Gaia-based membership of the region, we use the Gaia-DR3 catalog, downloaded from the Gaia archive\footnote{https://gea.esac.esa.int/archive/}. We retrieve all the sources within the $1.5^\circ$ radius centered at $\rm \alpha = 21:40:39.28$ and $\rm \delta = +57:49:15.51$. The search region is shown as the white dashed circle in Figure \ref{wise}, covering the entire IC 1396 complex. To identify the likely cluster members of this complex, we select sources based on the following criteria. All the selected sources must have positive parallax values ($\rm \pi>0~mas$). We consider all the sources with proper motions in the range $\rm |\mu_{\alpha}cos\delta| \leq 20~mas/yr$ and $\rm |\mu_{\delta}| \leq 20~mas/yr$. This constraint on the proper motion values removes a large fraction of contaminants \citep{2018ApJ...869....9G, 2018AJ....156..121G}. All the sources we consider must have magnitude values in the G, BP, and RP bands. We thus obtain 458875 sources within the $1.5^\circ$ region which satisfy all the criteria mentioned above (an illustrative query sketch implementing these criteria is given below). Following the histogram turnover method \citep{2007ApJ...669..493W,2013MNRAS.432.3445J,2017ApJ...836...98J,2017ApJS..229...28G,2021MNRAS.504.2557D}, we obtain the 90\% photometry completeness limits of the G, BP, and RP bands to be 20.5, 21.5, and 19.5~mag, respectively. This is in agreement with the survey completeness, which is between $\rm G\approx 19$ and $\rm G\approx 21~mag$ \citep{2020arXiv201201533G}. The corresponding mass completeness limits are estimated in Section \ref{age}. \subsection{Membership analysis} \label{memb_analysis} Detecting the membership of a star-forming region is the first step towards analyzing its various star-formation properties. If the regions are large (e.g., IC 1396, Lupus) or are not isolated, then the identification of members is not straightforward. Several authors have used different methods to achieve this. Here we briefly summarize the different methods of segregating member stars from the field population. The pioneering works of \citet{1971A&A....14..226S, 1958AJ.....63..387V} adopted probability measurements based on stellar proper motions to confirm membership. In these works, they modeled the distribution of stars in the vector point diagram (VPD) using a bi-variate Gaussian mixture model (GMM). Later, adding the celestial coordinates of stars to their proper motions, \citet{1995AJ....109..672K} refined the membership probabilities. Some researchers selected the stars by partitioning the data space into bins \citep{1991A&AS...87...69P,2012MNRAS.422.1495L}. In another work, \citet{2007A&A...470..585B} separated the cluster members from the field stars based on their probability density in the VPD space.
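For reference, the following is a minimal, illustrative sketch of the source selection of Section \ref{data}; it assumes the public {\tt gaiadr3.gaia\_source} table and column names of the Gaia archive, and the search-centre coordinates are approximate values in degrees, not the exact values used in the analysis.
\begin{verbatim}
# Illustrative sketch only: 1.5 deg cone search around IC 1396 with the
# selection cuts described in the text (positive parallax, |pm| <= 20 mas/yr,
# valid G, BP, RP photometry).  Note: Gaia "pmra" is mu_alpha * cos(delta).
from astroquery.gaia import Gaia

query = """
SELECT source_id, ra, dec, parallax, parallax_error,
       pmra, pmra_error, pmdec, pmdec_error, ruwe,
       phot_g_mean_mag, phot_bp_mean_mag, phot_rp_mean_mag
FROM gaiadr3.gaia_source
WHERE CONTAINS(POINT('ICRS', ra, dec),
               CIRCLE('ICRS', 325.164, 57.821, 1.5)) = 1
  AND parallax > 0
  AND ABS(pmra) <= 20 AND ABS(pmdec) <= 20
  AND phot_g_mean_mag IS NOT NULL
  AND phot_bp_mean_mag IS NOT NULL
  AND phot_rp_mean_mag IS NOT NULL
"""
job = Gaia.launch_job_async(query)
sources = job.get_results()   # astropy Table of the selected sources
print(len(sources))
\end{verbatim}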
The broadband photometry is also considered as a tool to separate the cluster members from field stars with the help of color-magnitude (CMD) and color-color diagram (CCD) \citep{2004A&A...416..125D,2007A&A...470..585B}. \citet{2014A&A...561A..57K} have developed a method of computing membership probabilities in an unsupervised manner from the combination of celestial coordinates and photometric measurements. Their method is unsupervised photometric membership assignment in stellar clusters (UPMASK). The method of \citet{2014A&A...563A..45S,2019A&A...625A.115O} uses astrometric and photometric features of the stars for membership analysis. Then they apply the GMM with different components to model the field population and follow the Bayesian information criteria to choose a model. Then this method modeled the cluster with GMM in the astrometric space and a principal curve in the photometric space. Several recent works have used this methodology for membership analysis \citep{2020A&A...643A.148G,2021A&A...646A..46G}. The use of unsupervised and supervised computation of membership probabilities has also followed in several works in the recent past. In these works, the unsupervised GMM is used to generate a first catalog for the computation of supervised membership probability. These works used the random forest (RF; \citealt{breiman2001random, DBLP:journals/corr/abs-1201-0490}) classifier of the machine learning algorithm for the supervised computation of membership probability. Recently, \citet{2022arXiv220913302M} used various CMDs effectively along with RF classifier to obtain the membership of NGC 2244. So both astrometry and photometric properties of stars play a crucial role in identifying member populations. As discussed, unsupervised and supervised membership probability estimation works efficiently and effectively. The crucial part of this method is preparing a training set, which comes through the unsupervised estimation, the GMM method. However, GMM suffers difficulty in filtration if the field contamination is relatively high. A safe way to overcome this difficulty is to combine the photometric properties in CMDs to obtain a set of stars, which can be used for the supervised membership probability estimation. Our present analysis uses the various CMDs to refine the member population obtained from the GMM method. We use a few CMDs and theoretical isochrones with prior knowledge about the nature of the star-forming complex from earlier studies. This helps to derive a cleaner member data set, which is used as a training set to derive the supervised membership probability using the RF classifier. We discuss the application of both GMM and RF in the following. More detail about the GMM method is explained in Appendix \ref{gmm_det}. \subsubsection{Applying the Gaussian Mixture Model} \label{GMM} We use five parameters (proper motions, parallax, and positions) for our clustering analysis using GMM. We have neither used the errors of the corresponding parameters nor the magnitude and color values as input parameters since they do not follow the Gaussian distributions. The GMM method fails drastically in cluster identification if we apply it to all stars (i.e., 458875 number of sources within the whole area). This is one of the significant limitations of the GMM method, which is also observed in other analyses \citep{2018AJ....156..121G,2018ApJ...869....9G}. The possible reasons for this failure are described by \citet{1990A&A...235...94C}. 
They pointed out that if the ratio between field stars and member stars is very high, it might cause an issue in the clustering analysis using GMM. The other possible reason could be that the field stars do not follow a Gaussian distribution. To avoid the above issues related to the GMM method, we apply GMM to a small sample with minimal field star contamination. We must remember that obtaining the member population is not straightforward when dealing with a large star-forming complex such as IC 1396, whose radius is $\sim 1.5^\circ$. The reason is that the member populations of IC 1396 might not follow a single Gaussian distribution in their proper motion parameters, unlike an isolated cluster. So, we have to choose a small region very carefully, such that the astrometric and photometric properties of the stars in this region represent the whole complex while, at the same time, the field star contamination remains as low as possible. In this work, we choose a conservative small central circular region of radius $30\arcmin$ around the coordinate mentioned in Section \ref{data}. We also use information from previous studies to minimize regional field star contamination. The previous studies suggest the distance of IC 1396 to be $\rm \sim 1~kpc$ \citep{2002AJ....124.1585C,2019A&A...622A.118S}. So we consider only the stars that lie between distances of 700~pc and 1100~pc when running the GMM algorithm, safely discarding the stars that lie outside this distance range. With these conditions, there are 6263 stars within the circular region of radius $30\arcmin$. We apply GMM to the 6263 stars and, based on the unsupervised membership estimation, retrieve an initial sample of member stars, which is then used for the membership analysis based on supervised probability computation with the RF method. \begin{figure} \centering \includegraphics[scale=0.45]{Fig2.jpeg} \caption{(a) shows the proper-motion vector point diagram (VPD), and (b,c,d) show various combinations of CMDs of the 6263 sources within a $30\arcmin$ radius. On the CMDs, the red curve displays the PARSEC isochrone \citep{2014MNRAS.444.2525C} for 10~Myr, plotted after correcting for a distance of 900~pc and extinction of $\rm A_V=1~mag$ (discussed in Section \ref{age}). In all the plots, the gray dots are the 6263 stars, and the black dots are the 3760 stars separated by the GMM method with a probability greater than 80\%. The 577 stars (out of the 3760) that lie to the right of the 10~Myr isochrone in all the CMDs are shown as blue dots. } \label{gmm_vpd} \end{figure} Since the stars can broadly be separated into two groups, cluster members and field contaminants, we apply the GMM method with two components to these 6263 stars and retrieve 3760 stars with $\rm P_{GMM}\geqslant0.8$; the remaining 2503 stars are mostly non-members belonging to the field star population. A few possible combinations of CMDs and the VPD of these 6263 stars (gray), along with the 3760 stars extracted by GMM (black), are shown in Figure \ref{gmm_vpd}. As seen from the VPD diagram (Figure \ref{gmm_vpd}(a)), the 3760 stars populate the central black region. This is expected since the member stars of a region usually lie within a narrow circular distribution in the VPD plot. However, the VPD plot and the distribution of the 3760 stars on the CMDs show that the member stars are still affected by contamination. There could be a few probable reasons for this.
In this analysis, we do not apply any constraint on the magnitude of the stars, in order to retain the maximum number of member stars at the faint end. However, the fainter stars have higher uncertainties and are less reliable. The other possible reason is that, in the case of a giant star-forming region such as IC 1396, the member stars might have a slightly wider distribution in proper motions compared to an isolated stellar cluster. That again increases the chance of contamination in the member star population. A second check is therefore required to minimize contamination in the 3760 stars extracted from the GMM method. For this, we use various CMDs, shown in Figure \ref{gmm_vpd}. Though the cluster associated with IC 1396 has a mean age of $\sim$ 2$-$4~Myr \citep{2005AJ....130..188S}, there is a spread in age up to 10~Myr for some stars, so here we consider only those sources younger than 10~Myr as members. This further removes a significant fraction of contaminating stars from the member population. This leaves 577 stars that are more reliable member candidates. These 577 stars are shown as blue dots in Figure \ref{gmm_vpd}. The selected member stars show a distribution that is largely indistinguishable from that of the field stars, likely due to the large number of field stars along the line of sight compared to the small number of cluster stars. However, compared to the distribution of field stars, the distribution of member stars peaks at different locations and occupies a more confined region in the VPD diagram. For the training sample of the RF method, we keep the 577 stars as member stars and the 2503 stars as non-member stars. \subsubsection{Applying the random forest classifier method} \label{RF} In this section, we apply the supervised machine-learning technique, the RF classifier, to identify the membership of the entire complex. This technique is an ensemble of decision trees for classification and regression tasks. Due to its robustness, the RF technique is widely used in the astrophysical field \citep{2011MNRAS.414.2602D, 2013MNRAS.435.1047B, 2017ApJ...843..104L,2018PASJ...70S..39L,2018MNRAS.476.3974P, 2018AJ....156..121G,2018ApJ...869....9G,2021arXiv210305826M}. In this work, we use the Python-based RF classifier available in the scikit-learn package\footnote{https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.\\ RandomForestClassifier.html}. Before using RF on the total population to identify member stars, we need to train the classifier, as described in Appendix \ref{rf_eff}. After checking RF's efficiency, we run the RF method to obtain the most probable population of the whole complex. The relative importance of the parameters in separating the member and non-member stars is also listed in Appendix \ref{rf_eff}. After training the classifier with the training set retrieved from the GMM method, we run the RF classifier on a total of 458875 stars located in the direction of the complex IC 1396. Out of these stars, we need to retrieve the most reliable member population of the complex. As described in Appendix \ref{rf_eff}, while training the classifier, a few color and magnitude terms also become essential in segregating members from the non-member population. In order to make the detection more robust, we use the parallax parameter to filter out non-member stars. Here, we run RF on the $\sim 70000$ stars that lie within the parallax range of 0.8 to 1.6~mas.
With this restriction, the color and magnitude parameters can be used effectively; otherwise, the number of unlikely sources would increase. RF assigns a membership probability to each star based on its training in the previous step. In our analysis, we retrieve 1803 likely members with a probability value of $\rm P_{RF}\geqslant0.6$. Details of the 1803 likely member stars are listed in Table \ref{star_list}. Of these 1803 stars, 1243 have a high probability value of $\rm P_{RF}\geqslant0.8$. Hereafter we use these highly probable candidate members for the follow-up analysis. In this work, the massive star HD 206267 has $\rm P_{RF}<0.6$. HD 206267 is a multiple-star system of spectral type O5V $-$ O9V and an older member of the cluster \citep{2012A&A...538A..74P,2020A&A...636A..28M}. The RUWE, parallax, $\rm \mu_{\alpha}cos\delta$, and $\rm \mu_{\delta}$ of the star are 5.07, 1.360$\pm$0.218~mas, -1.951$\pm$0.120~mas/yr, and -5.493$\pm$0.281~mas/yr, respectively. The multiplicity of the system results in its higher RUWE value and affects its measured proper motions. The radial velocity of the star is $\rm -24.8\pm1.4~ km/s$ \citep{2021ApJS..254...42B}, which is well within the radial velocity distribution of the member stars (Figure \ref{hist_Rv}). This is a direct confirmation of its membership. Also, many earlier studies using multi-wavelength data sets show the connection of the massive star with the star-forming complex \citep{1995ApJ...447..721P,2012MNRAS.426.2917G,2014A&A...562A.131S,2015A&A...573A..19S,2019A&A...622A.118S}. In Figure \ref{rf_vpd}, we show the proper-motion VPD for all the 458875 stars. The member stars with $\rm P_{RF}\geqslant0.8$ identified by RF are shown as blue dots. This plot shows that the members are concentrated within a narrow range of proper motion values. \begin{figure} \centering \includegraphics[scale=0.42]{Fig3.jpeg} \caption{VPD of all the 458875 stars is shown as a gray density plot. The blue dots indicate the member population with $\rm P_{RF}\geqslant0.8$.} \label{rf_vpd} \end{figure} The G versus G-RP CM diagram is shown in Figure \ref{g_rp_cm_iso} for the member stars ($\rm P_{RF}\geqslant0.8$) within the region of radius $1.5^\circ$ (shown in Figure \ref{wise}). All the identified member stars trace a well-defined pre-main-sequence locus on the CM diagram. In Figure \ref{rf_wise}, we over-plot the likely members on the $\rm 22~\mu m$ WISE image, highlighting their distribution as a function of their $\rm P_{RF}$ value. An over-density of sources is visible in the central part of IC 1396. Within the complex, the stars display a diagonal distribution ranging from the BRC IC 1396A to IC 1396N. Most of the stars are clustered around the massive star HD 206267, shown as the white `$\times$' symbol in the figure. IC 1396N is also associated with a small cluster. A tiny clustering of stars is also visible towards the tip of BRC SFO39. A small fraction of stars is seen to be randomly distributed all around the complex. A clustering of stars is also found towards the northern periphery of the complex. The overall density of stars is higher towards the west of the complex than towards the east. \begin{figure} \centering \includegraphics[scale=0.35]{Fig4.jpeg} \caption{Spatial distribution of the likely cluster members identified from the RF method on the WISE $\rm 22~\mu m$ band. The white `$\times$' symbol marks the position of the massive central star HD 206267. The candidate members are color-coded by their $\rm P_{RF}$ values, as indicated by the color bar.
Locations of the BRCs IC 1396A, IC 1396N, and SFO39 are labeled on the plot. } \label{rf_wise} \end{figure} \begin{table*} \small \centering \caption{List of the Gaia-based member population identified using the RF method. The table provides the positions, parallax, proper motions, and magnitude values in G, BP, and RP bands along with $\rm P_{RF}$ values of 1803 stars identified with $\rm P_{RF}\geqslant0.6$. For analysis in this paper, we consider stars with $\rm P_{RF}\geqslant0.8$. } \label{star_list} \begin{tabular}{ccccccccccc} \\ \hline Star No. & RA (2000) & DEC (2000) & RUWE & Parallax & pmra & pmdec & G & BP & RP & $\rm P_{RF}$\\ & (degree) & (degree) & & (mas) & (mas/yr) & (mas/yr) & (mag) & (mag) & (mag) & \\ \hline 1 & 327.6486 & 57.3557 & 0.922 & 1.204$\pm$0.178 & -2.553$\pm$0.232 & -3.151$\pm$0.194 & 18.80 & 20.60 & 17.56 & 0.716 \\ 2 & 327.6364 & 57.4715 & 1.055 & 1.299$\pm$0.157 & -2.162$\pm$0.194 & -3.075$\pm$0.156 & 18.59 & 20.34 & 17.33 & 0.788 \\ 3 & 327.6372 & 58.4742 & 0.951 & 0.940$\pm$0.012 & -1.580$\pm$0.016 & -3.890$\pm$0.012 & 12.04 & 13.02 & 11.07 & 0.682 \\ 4 & 327.4235 & 57.5640 & 0.999 & 1.032$\pm$0.022 & -3.746$\pm$0.027 & -4.302$\pm$0.024 & 15.18 & 15.95 & 14.30 & 0.682 \\ 5 & 327.3769 & 57.5977 & 1.043 & 1.009$\pm$0.013 & -4.326$\pm$0.016 & -4.777$\pm$0.014 & 11.15 & 11.53 & 10.57 & 0.664 \\ 6 & 327.4919 & 57.6289 & 0.964 & 1.018$\pm$0.011 & -3.333$\pm$0.012 & -2.184$\pm$0.011 & 12.53 & 13.71 & 11.46 & 0.682 \\ 7 & 327.1520 & 57.5298 & 0.962 & 1.017$\pm$0.014 & -3.234$\pm$0.017 & -4.564$\pm$0.015 & 14.04 & 14.82 & 13.16 & 0.780 \\ 8 & 327.2893 & 57.6506 & 0.852 & 1.094$\pm$0.012 & -2.117$\pm$0.014 & -4.260$\pm$0.012 & 11.20 & 11.44 & 10.80 & 0.828 \\ 9 & 327.5741 & 57.8185 & 0.981 & 1.025$\pm$0.069 & -1.073$\pm$0.076 & -3.113$\pm$0.082 & 17.59 & 19.09 & 16.42 & 0.702 \\ 10 & 327.1886 & 57.7081 & 1.014 & 1.049$\pm$0.030 & -1.048$\pm$0.035 & -4.333$\pm$0.033 & 15.98 & 17.29 & 14.84 & 0.766 \\ \hline \end{tabular} \\(This table is available in its entirety as online material. A portion is shown here for guidance regarding its form and content.) \end{table*} \subsection{Characterstics of the member stars} \label{charc_stars} In Figure \ref{hist_parallax}, we show histogram distributions of RUWE\footnote{$\rm https://gea.esac.esa.int/archive/documentation/GDR2/ \\ Gaia\_archive/chap\_datamodel/sec\_dm\_main\_tables/ssec\_dm\_ruwe.html$} (Renormalised unit weight error), parallax, and proper-motions of member stars detected in this work. Table \ref{gaia_range} provides the range of these parameters. RUWE parameter provides a measure of astrometric solutions. The RUWE value of around 1.0 is expected for sources where the single-star model provides a good fit for the astrometric observations. Stars with RUWE greater than 1.4 are considered resolved doubles \citep{2020arXiv201201533G}. In our list of selected members, only 144 and 82 stars have RUWE $>$ 1.4, from the list with $\rm P_{RF}\geqslant0.6$, and 0.8, respectively. These sources with higher RUWE could be multiple-star systems. The stars detected in this work are of good quality sources. Out of the 1243 stars, $\sim95\%$ stars have relative parallax error less than $20\%$. 
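As an illustration of the quality screening described above, and of the inverse-variance-weighted mean parallax used below for the distance estimate, a minimal sketch follows; the small table is a placeholder standing in for the actual catalog of 1243 candidate members.
\begin{verbatim}
# Illustrative sketch only: astrometric quality cuts (RUWE < 1.4, relative
# parallax error < 20%) and a weighted mean parallax.  The table below is a
# placeholder for the real member catalog with the standard Gaia columns.
import numpy as np
from astropy.table import Table

members = Table({
    'parallax':       np.array([1.08, 1.12, 1.05, 0.95, 1.30]),   # mas
    'parallax_error': np.array([0.02, 0.05, 0.30, 0.04, 0.03]),   # mas
    'ruwe':           np.array([1.00, 1.20, 1.10, 2.00, 0.90]),
})

good = (members['ruwe'] < 1.4) & \
       (members['parallax_error'] / members['parallax'] < 0.2)
plx = np.asarray(members['parallax'][good])
plx_err = np.asarray(members['parallax_error'][good])

weights = 1.0 / plx_err**2
plx_mean = np.sum(weights * plx) / np.sum(weights)   # weighted mean parallax
plx_mean_err = np.sqrt(1.0 / np.sum(weights))
distance_pc = 1000.0 / plx_mean                      # simple 1/parallax estimate

print(f"{good.sum()} stars pass the cuts")
print(f"weighted mean parallax = {plx_mean:.3f} +/- {plx_mean_err:.3f} mas")
print(f"distance ~ {distance_pc:.0f} pc")
\end{verbatim}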
\begin{table} \tiny \centering \caption{Range, mean, median, and standard deviation of RUWE, parallax, and proper motions of the 1243 member stars.} \label{gaia_range} \begin{tabular}{ccccc} \\ \hline \hline Parameter & Range & Mean & Median & SD \\ \hline RUWE & $0.77 - 13.79$ & 1.12 & 1.02 & 0.59 \\ Parallax (mas) & 0.834$\pm$0.162 -- 1.564$\pm$0.184 & 1.085$\pm$0.003 & 1.078 & 0.109 \\ $\rm \mu_{\alpha}cos\delta$ (mas/yr) & -2.506$\pm$0.006 -- -0.378$\pm$0.015 & -1.194$\pm$0.002 & -1.187 & 0.325 \\ $\rm \mu_{\delta}$ (mas/yr) & -6.011$\pm$0.216 -- -0.764$\pm$0.014 & -4.215$\pm$0.004 & -4.404 & 0.712 \\ \hline \end{tabular} \end{table} Figure \ref{hist_parallax} (b) displays the histogram distribution of parallaxes for all these identified member stars. In parallax, the stars detected in this work lie within a spread of $\sim$0.8~mas with mean, median, and standard deviation values of $\rm 1.085\pm0.003~mas$, 1.078~mas, and 0.109~mas, respectively. The distance to the cluster is estimated using the parallax values of those sources whose relative parallax error ($\sigma \pi/ \pi$) is better than 20\% and $\rm RUWE<1.4$. Out of 1243, we find 1107 stars satisfy this condition. From these 1107 stars, we estimate the weighted mean parallax to be $\rm 1.090\pm0.003~mas$, which translates to a distance of $\rm 917\pm2.7~pc$. This distance estimate matches well with earlier estimates in literature \citep{2002AJ....124.1585C, 2019A&A...622A.118S,2022arXiv221011930P}. In Figure \ref{hist_parallax} (c) and (d), we show the histogram distributions of the proper motions ($\rm \mu_{\alpha}cos\delta, and ~\mu_{\delta}$). We derive the mean, median, and standard deviation values for $\rm \mu_{\alpha}cos\delta$ to be $\rm -1.194\pm0.002~mas/yr$, $\rm -1.187~mas/yr$, and $\rm 0.325~mas/yr$, respectively. For $\rm \mu_{\delta}$, these values are $\rm -4.215\pm0.004~mas/yr$, $\rm -4.404~mas/yr$, and $\rm 0.712~mas/yr$, respectively. \begin{figure} \centering \includegraphics[scale=0.5]{Fig5.jpeg} \caption{Histogram distributions of RUWE (a), parallax (b), $\rm \mu_{\alpha}cos\delta$ (c), and $\rm \mu_{\delta}$ (d) of the 1243 member stars identified in this work. Bin size of the histograms are 0.25, 0.05~mas, 0.2~mas/yr, and 0.2~mas/yr, for RUWE, parallax, $\rm \mu_{\alpha}cos\delta$, and $\rm \mu_{\delta}$, respectively. } \label{hist_parallax} \end{figure} \subsection{Comparision with literature} \label{compa_lit} In this section, we compare our detected member stars with the sources detected in the literature. As discussed in Section \ref{lit_stars}, there are 1791 stars detected towards the complex based on various surveys. Also, using Gaia-DR2 data, \citet{2018A&A...618A..93C}, detected 460 stars towards IC 1396. We compare our findings separately with the source lists found in the literature. To compare with the sources of various surveys, we first find their Gaia-DR3 counterpart information. Out of the 1791 stars, 1002 stars have Gaia counterparts. Then we refine the catalog further based on the astrometry quality. Thus we use the 705 stars, which have a relative parallax error of $<20\%$, for comparison. Of the 705 stars, 360 stars ($\sim51\%$) are retrieved in our work as member stars with $\rm P_{RF}\geqslant0.8$. The number is 409 ($\sim 60\%$) with $\rm P_{RF}\geqslant0.6$. Due to their poor membership probability, the remaining stars are not detected as members. Then we compare our member list with the 460-star list of \citet{2018A&A...618A..93C}. 
Within the common area, out of the 460 stars, we retrieved 348 ($\sim76\%$) stars in this work with $\rm P_{RF}\geqslant0.8$. The number is 389 ($\sim85\%$) with $\rm P_{RF}\geqslant0.6$. In this work, we consider only the stars with a higher probability of $80\%$. In \citet{2018A&A...618A..93C}, they considered all the stars with membership probability above $50\%$. So the stars, with higher probability, are retrieved in our work. In our work, we identify more member stars than \citet{2018A&A...618A..93C} mainly due to the large area we consider. Then we also compared the source list obtained by the various surveys (section \ref{lit_stars}) with the stars detected by \citet{2018A&A...618A..93C}. Here, also we considered the good quality 705 stars for comparison. In this case, we found 221 ($\sim31\%$) survey-based stars common with the catalog of \citet{2018A&A...618A..93C}. There are 196 stars common to all three catalogs discussed here. We summarize the analysis as a Venn diagram (Figure \ref{venn_compa}). \begin{figure} \centering \includegraphics[scale=0.5]{Fig6.jpeg} \caption{Venn diagram summarizing the comparison between the member population from this work with the stars from several other surveys, and with stars from work of \citet{2018A&A...618A..93C}. } \label{venn_compa} \end{figure} \section{Properties of the complex} \label{res} \subsection{Sub-clusters within the complex} \label{cluster} \begin{figure} \centering \includegraphics[scale=0.35]{Fig7.jpeg} \caption{Contours of stellar surface density distribution generated using the 1243 candidate members identified towards IC 1396 complex overlaid on the WISE $\rm 22~\mu m$ band image. Contours are at levels of 0.6, 1, 2, 3, 5, 7, 10, 12, and 20 $\rm stars~ pc^{-2}$. The white `$\times$' symbol marks the position of the massive star HD 206267. Different clusters are retrieved from the stellar density map. The blue curves are the clusters shown along with their nomenclature.} \label{NN} \end{figure} The spatial distribution of the 1243 stars (Figure \ref{rf_wise}) displays the association of clustering with IC 1396. In this section, we attempt to identify the clusters quantitatively. To do this, we generate the surface density plot using the 1243 member stars and apply the nearest neighbor (NN) method \citep{1985ApJ...298...80C,2011AN....332..172S}. According to this method, the j-th nearest neighbour density is defined as \begin{equation} \rm \rho_j~ = ~ \frac{j-1}{S(r_j)} \end{equation} where $\rm r_j$ is the distance to its j-th nearest neighbour and $\rm S(r_j), $ is the surface area with radius $\rm r_j$. To obtain the distribution of member stars, we use $\rm j = 20$, which is found to be an optimum value for cluster identification \citep{2008MNRAS.389.1209S,2017MNRAS.465.4753R,2021MNRAS.504.2557D}. With this procedure, we generate the stellar density map with a pixel size of 0.1~pc ($20.5^{\prime\prime}$). Figure \ref{NN} shows the WISE 22~$\rm \mu m$ map overlaid with density contours. The lowest contour is at 0.6~stars $\rm pc^{-2}$, within which the maximum number of sources falls. These stellar density contours reveal the cluster of stars towards the star-forming complex. \begin{figure*} \centering \includegraphics[scale=0.5]{Fig8.jpeg} \caption{The spatial distribution of proper motions and parallax of the member stars detected in this work. Blue dots represent the 1243 member stars. The population of the clusters is shown in different colors and shapes. 
Cluster C-1 (red dot), C-1A (green square), C-1B (magenta dot), C-2 (cyan circle), C-3 (purple diamond), C-4 (black plus mark), and C-5 (yellow dot).} \label{parallax_propermotion} \end{figure*} \begin{table*} \tiny \centering \caption{The number of stars, mean, median, and standard deviation of RUWE, parallax, and proper motions of the clusters associated with IC 1396.} \label{stat_cluster} \begin{tabular}{ccccccccccccccc} \\ \hline \hline Cluster & Radius & No. & \multicolumn{3}{c}{RUWE} & \multicolumn{3}{c}{Parallax} & \multicolumn{3}{c}{$\rm \mu_{\alpha}cos\delta$} & \multicolumn{3}{c}{$\rm \mu_{\delta}$} \\ & (pc) & of stars & \multicolumn{3}{c}{} & \multicolumn{3}{c}{(mas)} & \multicolumn{3}{c}{(mas/yr)} & \multicolumn{3}{c}{(mas/yr)} \\ \hline & & & Mean & Median & SD & Mean & Median & SD & Mean & Median & SD & Mean & Median & SD \\ \hline C-1 & 3.80 & 426 & 1.11 & 1.02 & 0.38 & 1.098$\pm$0.006 & 1.084 & 0.118 & -1.329$\pm$0.004 & -1.327 & 0.197 & -4.671$\pm$0.006 & -4.691 & 0.334 \\ C-1A & 1.72 & 162 & 1.11 & 1.02 & 0.37 & 1.101$\pm$0.009 & 1.079 & 0.126 & -1.298$\pm$0.007 & -1.309 & 0.159 & -4.690$\pm$0.010 & -4.699 & 0.287 \\ C-1B & 1.22 & 80 & 1.12 & 1.02 & 0.53 & 1.108$\pm0.013$ & 1.103 & 0.116 & -1.442$\pm$0.010 & -1.449 & 0.222 & -4.656$\pm$0.016 & -4.656 & 0.366 \\ C-2 & 1.40 & 27 & 1.07 & 1.03 & 0.21 & 1.076$\pm$0.026 & 1.066 & 0.102 & -1.512$\pm$0.019 & -1.608 & 0.274 & -4.825$\pm$0.030 & -4.882 & 0.411 \\ C-3 & 2.17 & 60 & 1.11 & 1.02 & 0.38 & 1.047$\pm$0.013 & 1.052 & 0.085 & -1.172$\pm$0.008 & -1.117 & 0.317 & -3.438$\pm$0.014 & -3.266 & 0.580 \\ C-4 & 1.84 & 23 & 1.04 & 1.02 & 0.09 & 1.085$\pm$0.021 & 1.080 & 0.097 & -0.831$\pm$0.014 & -0.772 & 0.296 & -3.332$\pm$0.028 & -3.417 & 0.225 \\ C-5 & 3.76 & 87 & 1.07 & 1.02 & 0.12 & 1.083$\pm$0.012 & 1.074 & 0.095 & -0.854$\pm$0.007 & -0.816 & 0.141 & -3.401$\pm$0.013 & -3.330 & 0.336 \\ \hline \end{tabular} \end{table*} For identification of the clusters in this region, we use the {\it astrodendro} algorithm \citep{2019ascl.soft07016R} in Python. This algorithm works based on constructing tree structures starting from the brightest pixels in the dataset and progressively adding fainter and fainter pixels. It requires the threshold flux value (minimum value), contour separation (min delta), and the minimum number of pixels required for a structure to be considered a cluster. In our analysis, we use the threshold and minimum delta to be 1.0, 0.3~stars $\rm pc^{-2}$, respectively. We use the minimum number of pixels as 150 to detect the potential clusters. These parameters are adopted after multiple trials for optimal detection of clusters. We identify six individual leaf structures with these input parameters, which we call clusters here. Two individual clusterings (C-1A and C-1B) are seen towards the massive star HD 206267, and collectively (C-1) is the central cluster of this complex. Towards the tail of BRC IC 1396A, another grouping (C-2) of stars is also seen. Except for this cluster towards the central part, another three clusters are also seen. They (C-3 and C-4) are linked with the BRC IC 1396N and SFO 39, respectively. We also detect a cluster (C-5) close to the boundary of the star-forming complex. Cluster identification in our work matches well with the clusters identified by \citet{2012AJ....143...61N} from the $\rm H_{\alpha}$ emission line survey. In their work, a cluster is associated with the southern BRC SFO 37. 
However, in our analysis, we do not see any such cluster associated with SFO 37, which could be due to the sensitivity of Gaia. The cluster C-5, which we detect in this work, was not seen by \citet{2012AJ....143...61N}, which could be because their survey covered a smaller area than that of this study. In Table \ref{stat_cluster}, we list the statistics (radius, number of stars, mean, median, and standard deviation) of the RUWE, parallax, $\rm \mu_{\alpha}cos\delta$, and $\rm \mu_{\delta}$ for all the identified clusters. We derive the physical radius of the clusters ($\rm R_{cluster} = (A_{cluster}/\pi)^{0.5}$; \citealt{2017MNRAS.472.4750D}) using the apertures retrieved from {\it astrodendro}. The area of each cluster is calculated as $\rm A_{cluster}=N\times A_{pixel}$, where N is the number of pixels and $\rm A_{pixel}$ is the area of each pixel. The distributions of parallax and proper motions of the cluster stars are also displayed in Figure \ref{parallax_propermotion}. Two groupings are seen in this plot: the larger group arises mainly from the stars of the C-1 and C-2 clusters, and the second, smaller group arises from the stars of the other three clusters (C-3, C-4, and C-5). This is also evident from the histogram distribution of $\rm \mu_{\delta}$ (Figure \ref{hist_parallax}(d)). To quantitatively confirm our findings, we carry out two-sample Kolmogorov-Smirnov (KS) tests on the parallax and proper motions. The $p$-value from the test is close to zero for the proper motions. For parallax, the $p$-value is 0.02. This quantitatively confirms that the proper motion parameters are the distinctive astrometric features distinguishing the stars projected in the two sub-groups seen in Figure \ref{parallax_propermotion}. \subsection{Age and mass range of the candidate cluster members}\label{age} \begin{figure} \centering \includegraphics[scale=0.5]{Fig9.jpeg} \caption{G versus G-RP CMD of the 1243 member stars within the IC 1396 complex. PARSEC isochrones of 0.1, 0.5, 2.0, and 10.0~Myr are over-plotted. All the curves are plotted after correcting for a distance of 917~pc and a minimum extinction of $\rm A_V=1~mag$ (see text for details). Evolutionary tracks for stars having masses of 0.09, 0.3, 0.5, 1.0, and 2.0~$\rm M_{\odot}$ are also shown. The colored symbols have the same meaning as in Figure \ref{parallax_propermotion}.} \label{g_rp_cm_iso} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{Fig10.jpeg} \caption{Histogram distribution of the logarithmic age of the 1243 candidate members of IC 1396. The red curve displays the Gaussian fit.} \label{hist_age} \end{figure} In this section, we estimate the mean age and the mass completeness limit of the member stars identified in this analysis. Studies such as \citet{2005AJ....130..188S,2012MNRAS.426.2917G} and references therein claim an approximate age of $\sim$4~Myr for the primary cluster. To estimate the member population's age and mass completeness limit, we use the PARSEC isochrones available for the filters of Gaia-DR3 \citep{2014MNRAS.444.2525C}. The isochrones need to be corrected for distance and extinction before fitting. In an earlier study using NIR and optical data, \citet{2005AJ....130..188S} derived the average visual extinction towards the entire complex to be $\rm A_V = 1.5\pm0.5~mag$. This value also matches the estimations by \citet{2002AJ....124.1585C} and \citet{2012AJ....143...61N}.
The majority of detected stars in this work are located towards the central part of IC 1396, which is expected to be of less extinction due to the presence of massive star(s) around them compared to the surrounding regions such as BRCs, which are associated with the dense molecular clouds. For further analysis, we use the minimum extinction value of $\rm A_V = 1~mag$ obtained from \citet{2012AJ....143...61N}. After correcting for distance (917~pc) and extinction ($\rm A_V = 1~mag$), we plot the isochrones of various ages on the G versus G-RP CMD in Figure \ref{g_rp_cm_iso}. To correct the extinction in individual bands for all the sources, we use the empirical relations of $\rm A_G/A_V, A_{RP}/A_V$ \citep{2018A&A...616A..10G,2019A&A...623A.108B}. In Figure \ref{g_rp_cm_iso}, we plot various isochrones of evolutionary ages 0.1, 0.5, 2, and 10~Myr along with the evolutionary tracks corresponding to 0.09, 0.3, 0.5, 1, and 2~$\rm M_{\odot}$. From Figure \ref{g_rp_cm_iso}, we derive the age of individual stars by assigning the age of the closest isochrone. Similarly, by assigning the closest mass evolutionary track, we derive the mass of individual stars. However, local variation in extinction and binarity of stars might affect the accurate estimation of these parameters. In Figure \ref{hist_age}, we show the histogram distribution of logarithmic values of the age. By fitting a Gaussian curve to the distribution, we obtain the mean logarithmic age of the cluster to be $6.17\pm0.50$, which corresponds to a mean age of $\sim1.5\pm1.6$~Myr. Using of the upper limit of extinction, i.e., $\rm A_V = 1.5~mag$, the mean age obtained to be $\sim1.6\pm1.7$~Myr, which is still in match with the previous studies. As discussed in Section \ref{data}, we see that the 90\% completeness limits of G, BP, and RP bands are 20.5, 21.5, and 19.5~mag, respectively. We use the G-band to estimate the mass-completeness limit of the cluster. Using an extinction value of $\rm A_V = 1 - 1.5~mag$, distance of 917~pc and considering pre-main-sequence isochrone of 2~Myr \citep{2014MNRAS.444.2525C}, the magnitude limit of G-band (20.5~mag) corresponds to a mass of $\rm \sim 0.1 - 0.2 ~M_{\odot}$. This analysis shows that the Gaia-DR3 is complete down to the low-mass end. However, compared to the central region, i.e., towards the IC 1396A region, the extinction might be higher due to the presence of BRC and an associated molecular cloud. This local variation in extinction will play a role in the local mass completeness of the member stars towards the outer edge of the complex. We list the mean, median, and standard deviation values of log(age) and mass for the entire complex and the individual clusters in Table \ref{stat_age_mass}. The mean and median values of log(age) and mass are similar, considering the whole complex and the clusters. This suggests that most of the population has evolved within the similar time scale of $\rm \sim~ 3~Myr$. However, previous studies have shown that, in the proximity of BRC candidates, multi-episodic star-formation is happening \citep{2014A&A...562A.131S}. Similarly, the stellar mass distribution appears uniform for the entire complex, which can be seen from the mean and median values for all the clusters. However, local mass segregation might be happening within the individual clusters. 
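To make the age-assignment procedure described above concrete, the following is a minimal, illustrative sketch; the isochrone grid, the extinction coefficients, and the placeholder log(age) values are assumptions for demonstration only, whereas the actual analysis uses the PARSEC isochrones and the empirical extinction relations cited in the text.
\begin{verbatim}
# Illustrative sketch only: nearest-isochrone age assignment in the
# (G-RP, G) plane and a Gaussian fit to the log(age) distribution.
# `isochrones`, the extinction ratios, and `log_ages` are placeholders.
import numpy as np
from scipy.stats import norm

DIST_MOD = 5.0 * np.log10(917.0) - 5.0   # distance modulus for 917 pc
A_V = 1.0                                # adopted minimum extinction (mag)
A_G, A_RP = 0.8 * A_V, 0.6 * A_V         # placeholder extinction ratios

def nearest_log_age(g, g_rp, isochrones):
    """Return the log(age) of the isochrone closest to the star in the CMD.

    `isochrones` maps log(age/yr) -> (array of G-RP colors, array of M_G).
    """
    abs_g = g - DIST_MOD - A_G           # distance- and extinction-corrected G
    color = g_rp - (A_G - A_RP)          # de-reddened G-RP color
    best, best_d2 = None, np.inf
    for log_age, (iso_color, iso_mg) in isochrones.items():
        d2 = np.min((iso_color - color) ** 2 + (iso_mg - abs_g) ** 2)
        if d2 < best_d2:
            best, best_d2 = log_age, d2
    return best

# Gaussian fit to the log(age) distribution (cf. the log(age) histogram);
# placeholder values standing in for the ages of the 1243 members.
log_ages = np.random.default_rng(1).normal(6.2, 0.5, 1243)
mu, sigma = norm.fit(log_ages)
print(f"mean log(age) = {mu:.2f} +/- {sigma:.2f}")
\end{verbatim}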
\begin{table} \small \centering \caption{Mean, median, and standard deviation of log(age) and mass derived from the stars of the whole IC 1396 complex and of the individual clusters.} \label{stat_age_mass} \begin{tabular}{ccccccc} \\ \hline \hline Cluster & \multicolumn{3}{c}{log(age)} & \multicolumn{3}{c}{Mass}\\ & \multicolumn{3}{c}{(yr)} & \multicolumn{3}{c}{($\rm M_{\odot}$)}\\ \hline & Mean & Median & SD & Mean & Median & SD \\ \hline Full & 6.17 & 6.25 & 0.49 & 0.68 & 0.43 & 1.01 \\ C-1 & 6.05 & 6.14 & 0.48 & 0.62 & 0.4 & 1.07 \\ C-1A & 6.06 & 6.14 & 0.49 & 0.70 & 0.4 & 1.37 \\ C-1B & 5.98 & 6.07 & 0.47 & 0.60 & 0.37 & 0.93 \\ C-2 & 5.99 & 6.20 & 0.40 & 0.66 & 0.35 & 1.29 \\ C-3 & 5.97 & 5.98 & 0.48 & 0.74 & 0.36 & 1.26 \\ C-4 & 6.33 & 6.34 & 0.30 & 0.44 & 0.45 & 0.20 \\ C-5 & 6.37 & 6.42 & 0.37 & 0.70 & 0.50 & 0.70 \\ \hline \end{tabular} \end{table} \subsection{Cluster properties} \label{clust_prop} Several clusters have been identified towards IC 1396 based on the spatial distribution of the associated stellar members. Each cluster leaves an imprint of the ongoing star formation in the complex. In this section, we briefly discuss the formation of the clusters, taking into account their ages and spatial distribution. \subsubsection{Inner clusters (C-1 and C-2)} \label{c1_c2} Clusters C-1 and C-2 are located towards the center of the complex. Two sub-clusters (C-1A and C-1B) are also observed within cluster C-1. Subcluster C-1A is on the eastern side of the massive star, and C-1B is on the western side. The subcluster C-1B is linked to the head of BRC IC 1396 A, while C-2 is seen towards its tail. C-1A contains more stars with slightly higher ages than C-1B, so the mean age of C-1A is slightly higher than that of C-1B. Similarly, the mean age of cluster C-2 is similar to that of C-1B. This indicates multi-generation star formation triggered by the feedback effect of the central massive star. Earlier studies \citep{2014A&A...562A.131S,2019A&A...622A.118S,2022arXiv221011930P} have reported such triggered star formation activities towards the head of IC 1396 A. The presence of cluster C-2 is also a signature of ongoing triggered star formation towards the BRC complex. Using {\it Herschel} PACS images and analyzing the properties of young members in the head of IC 1396 A, \citet{2014A&A...562A.131S} suggested that this second generation of star formation is triggered via radiation-driven implosion (RDI) induced by the massive star HD 206267. However, a more in-depth analysis with multiwavelength data would be helpful to understand the mechanism behind the triggered star formation towards the entire IC 1396A region. \subsubsection{Outer clusters (C-3, C-4, and C-5)} \label{c3_c4_c5} The outer clusters (C-3, C-4, and C-5) differ from the inner clusters in their astrometric properties (see Figure \ref{parallax_propermotion}). C-3 is linked with BRC IC 1396 N, C-4 with SFO 39, and C-5 lies at the northwest boundary of IC 1396. The mean age of C-3 is slightly lower than that of C-1 (see Table \ref{stat_age_mass}). This indicates that the triggered star formation mechanism also forms the stars associated with IC 1396 N. The mean ages of C-4 and C-5 appear slightly higher than those of all the other clusters. In these two clusters, a significant fraction of stars of higher age is present. Earlier studies carried out by \citet{2008AJ....135.2323I} and \citet{2014MNRAS.443.1614P} have already reported sequential star formation in the direction of the BRCs SFO 37 and SFO 39 (see Figure \ref{wise}) due to the impact of UV radiation from the exciting central star.
The cluster C-4 is associated with SFO 39, but we do not detect any significant clustering towards SFO 37, as it is a small globule-like structure consisting of mainly a few embedded pre-main-sequence stars. \subsection{Radial Velocity} \label{Rv_vel} We searched for the stars with the radial velocity ($\rm R_V$) information in our member list. We obtained 107 stars with radial velocity information from Gaia-DR3. This is an improvement in $\rm R_V$ measurements in the Gaia-DR3 catalog compared to the DR2 catalog. Out of these 107 stars, 85 stars with good astrometry quality, i.e., $\rm RUWE<1.4$, are considered for further analysis. The mean and median of $\rm R_V$ of the 85 stars are -16.30$\pm$1.28 and -16.56~$\rm km/s$, respectively. To maximize the $\rm R_V$ measurements of the member stars of the complex, we also search for the $\rm R_V$ measurements in the literature. In previous work towards the region, \citet{2006AJ....132.2135S} has carried out high-resolution ($\rm R\sim 34000$) spectroscopic observations and obtained the radial velocity information for 136 stars. By cross-matching these stars with our Gaia-detected member lists, we find 78 stars in common, out of which 67 stars are of good astrometry quality, i.e., $\rm RUWE<1.4$. The mean and median of $\rm R_V$ of the 67 stars are -16.54$\pm$0.25 and -15.80~$\rm km/s$, respectively. The $\rm R_V$ has a broad range for 85 stars compared to the list of 67 stars taken from \citet{2006AJ....132.2135S}. However, the mean and median values for both lists are similar. In Figure \ref{hist_Rv}, we display smooth histogram distribution for sources from both lists. In the figure, we scaled down the curve for the 67 stars by $50\%$ for a better representation. Smoothed distribution from this figure also suggests similar mean and median values found from the different lists. The spatial distribution of these 152 stars with $\rm R_V$ information is shown in Figure \ref{radec_pm}. Most of the stars are distributed within the central part of the complex, with few distributed all around the complex. Out of these 152 stars, 68 are members of the central cluster (C-1). We note that the properties of the complex and identification of different sub-groups in the complex by us are in close agreement with the recent work by \citet{2022arXiv221011930P}. \begin{figure} \centering \includegraphics[scale=0.45]{Fig11.jpeg} \caption{Smoothed histogram distribution of $\rm R_V$ values for the 85 stars (red), 67 stars (green), and the total 152 stars (blue). We have $\rm R_V$ values for 85 stars from Gaia-DR3. For the 67 stars, $\rm R_V$ values taken from \citet{2006AJ....132.2135S}. To better represent the plot, the green curve is scaled down by $50\%$. The small vertical lines represent each star's $\rm R_V$ value. } \label{hist_Rv} \end{figure} \section{Discussion} \label{diss} \subsection{Kinematic properties of IC 1396} \label{kinematic} \begin{figure} \centering \includegraphics[scale=0.37]{Fig12.jpeg} \caption{Spatial distribution of the 1243 member stars as red dots on the WISE $\rm 22~\mu m$ band. The proper-motion values are shown as arrows. A reference arrow of 10~mas/yr is shown in the top right corner of the image. The 152 stars having $\rm R_V$ information are highlighted, where the 85 stars from Gaia based $\rm R_V$ are shown as solid circles and the 67 sources from \citet{2006AJ....132.2135S} are shown as square symbols. 
The colors of these objects mark their variation in $\rm R_V$, which is displayed as the color bar.} \label{radec_pm} \end{figure} In Figure \ref{radec_pm}, we show the spatial distribution of the 1243 stars on the WISE $\rm 22~\mu m$ band as red dots, along with their proper-motion values as blue arrows. The magnitudes of $\rm \mu_{\alpha}cos\delta$ and $\rm \mu_{\delta}$ give the length of each arrow, and their signs determine its direction. All the arrows are scaled according to the white reference arrow of length 10~mas/yr. As seen from the plot, most stars are moving towards the south, which is one of the distinctive features of this star-forming complex. In this section, we analyze the kinematics of the complex to shed more light on the internal motion of the member stars. \subsubsection{Determination of 3-dimensional position and velocity} \label{3d_vel} Since IC 1396 is a relatively large star-forming complex, it is essential to inspect its physical structure and spatial distribution in Galactic Cartesian coordinates, XYZ. We derive the XYZ coordinates for all the sources associated with IC 1396. The origin of the coordinate system is chosen to be the Sun. In this system, the X-axis runs along the Sun--Galactic center direction, with its positive direction toward the Galactic center; the Y-axis lies in the Galactic plane, orthogonal to the X-axis, with its positive direction along the Galactic rotation; and the Z-axis is perpendicular to the Galactic plane, oriented towards the Galactic North Pole. This makes a right-handed coordinate system. We use the Gaia-DR3 astrometric information of the detected stars and derive their 3-dimensional positions (X, Y, Z) and heliocentric velocities (U, V, W). We also compute the LSR velocities for each star along with the heliocentric velocities. The transformation from heliocentric to LSR velocities is made using the solar motion velocities ($\rm U_0=11.1\pm0.7~km/s$, $\rm V_0=12.2\pm0.47~km/s$, and $\rm W_0=7.25\pm0.37~km/s$) from \citet{2010MNRAS.403.1829S}. The majority of stars with $\rm R_V$ information lie towards the complex's central region. So, to derive the kinematic properties, we focus only on the central cluster C-1. Table \ref{3dvel} lists the derived 3-dimensional positions (X, Y, Z), the heliocentric velocities (U, V, W), and the LSR velocities of the 68 stars of the cluster C-1. \begin{table*} \tiny \centering \caption{3-dimensional positions, heliocentric velocities, and LSR velocities of the 68 stars within the cluster C-1.} \label{3dvel} \begin{tabular}{cccccccccccccccc} \\ \hline Star No.
& RA (2000) & DEC (2000) & X & Y & Z & U & V & W & u & v & w & $\rm \bf {\hat{r}_*~.~\bf{\delta v_*}}$ & \multicolumn{3}{c}{$\rm \bf {\hat{r}_*~\times~\bf{\delta v_*}}$} \\ & (degree) & (degree) &(pc)&(pc)&(pc)&(km/s)&(km/s)&(km/s)&(km/s)&(km/s)&(km/s) & (km/s)& \multicolumn{3}{c}{(km/s)} \\ \hline 1 & 325.1479 & 57.4754 & -165.86 & 998.44 & 90.92 & 22.66 & -18.99 & -13.75 & 33.76 & -6.75 & -6.50 & -5.02 & -0.87 & -0.16 & 1.19\\ 2 & 324.8110 & 57.3874 & -150.43 & 925.01 & 87.06 & 17.96 & -6.24 & -12.17 & 29.06 & 6.00 & -4.92 & -7.60 & 3.56 & 1.14 & 0.50\\ 3 & 324.9923 & 57.4759 & -142.16 & 861.63 & 83.00 & 19.24 & -18.27 & -12.72 & 30.34 & -6.03 & -5.47 & 3.72 & -0.40 & 0.11 & 1.75\\ 4 & 325.0083 & 57.5653 & -170.96 & 1028.83 & 95.00 & 20.73 & -54.06 & -11.33 & 31.83 & -41.82 & -4.08 & -38.90 & 3.59 & 0.29 & -7.38\\ 5 & 324.6100 & 57.4779 & -135.03 & 832.29 & 83.10 & 16.58 & -14.38 & -11.84 & 27.68 & -2.14 & -4.59 & -0.66 & -0.90 & 0.10 & 3.83\\ 6 & 324.5352 & 57.4465 & -142.94 & 885.98 & 86.76 & 13.13 & -10.52 & -8.97 & 24.23 & 1.72 & -1.72 & -5.37 & -3.48 & -0.29 & 6.45\\ 7 & 324.4657 & 57.4479 & -143.82 & 894.13 & 87.71 & 18.56 & -13.26 & -10.34 & 29.66 & -1.02 & -3.09 & -1.56 & -2.31 & -0.42 & 1.64\\ 8 & 324.7432 & 57.4737 & -145.45 & 891.47 & 86.29 & 21.12 & -20.71 & -14.08 & 32.22 & -8.47 & -6.83 & 6.49 & 0.79 & 0.15 & 0.34\\ 9 & 324.7699 & 57.4714 & -137.55 & 842.19 & 82.85 & 21.43 & -30.57 & -13.51 & 32.53 & -18.33 & -6.26 & 16.20 & -0.46 & 0.05 & 1.60\\ 10 & 324.8082 & 57.4760 & -141.29 & 863.34 & 84.10 & 16.27 & 5.70 & -11.68 & 27.37 & 17.94 & -4.43 & -20.46 & 0.53 & 0.17 & 1.10\\ \hline \end{tabular} \\(This table is available in its entirety as online material. A portion is shown here for guidance regarding its form and content.) \end{table*} \subsubsection{Kinematic properties of the stars} \label{kine_stars} In Figure \ref{xyz_vel}, we show the spatial distribution of the 68 stars of C-1, which have radial velocity information in the XY, YZ, and XZ planes. In the top row, we display the heliocentric and LSR velocities. The heliocentric and the LSR velocities indicate the stars' bulk motion. \begin{figure*} \centering \includegraphics[scale=0.25]{Fig13_1.jpeg} \includegraphics[scale=0.25]{Fig13_2.jpeg} \includegraphics[scale=0.25]{Fig13_3.jpeg} \includegraphics[scale=0.25]{Fig13_4.jpeg} \includegraphics[scale=0.25]{Fig13_5.jpeg} \includegraphics[scale=0.25]{Fig13_6.jpeg} \caption{Spatial distribution of the 68 stars of C-1 on XY, YZ, and XZ planes. Top: arrows represent the heliocentric (green) and LSR (magenta) velocities of the 68 stars, respectively. Bottom: Same as top, but the arrows represent the difference between the individual velocities and mean velocity. Details of the velocities are given in the text.} \label{xyz_vel} \end{figure*} To investigate the stability of the cluster C-1, it is essential to analyze the internal kinematics of the stars. First, we derive the mean value of the velocities of the stars. The values are listed in Table \ref{3d_mean_vel}. To assess the internal motion of the stars, we calculate the difference in velocities $\rm (\delta u, \delta v, \delta w)$ of individual stars with respect to the mean value. In the bottom row of Figure \ref{xyz_vel}, we show the $\rm (\delta u, \delta v, \delta w)$. This displays the random movement of the stars with respect to the central velocity. 
This shows that the $\rm (\delta u, \delta v, \delta w)$ of the stars cancel each other, and the mean values of $\rm (\delta u, \delta v, \delta w)$ are close to zero, indicating no real expansion. The three-dimensional dispersion, $\rm \sigma = \sqrt{(\sigma u)^2 + (\sigma v)^2 + (\sigma w)^2}$, is derived to be $\rm 16.56~km/s$. We then conduct a qualitative analysis of the relative motion of the stars within the complex, in a manner similar to that carried out by \citet{2015ApJ...807..119R} for the Taurus complex. This analysis provides an indication of the stability of the complex. Each star is located at a certain distance from the complex's center and moves with a relative velocity. We denote the separation from the complex center with a position vector $\rm \bf {r_*}$ and the relative velocity vector as $\rm \bf{\delta v_*}$. Each position vector is associated with a unit vector, $\rm \bf{\hat{r}_* = r_* / |r_*|}$, pointing from the center of the complex towards the location of each star. The relative motion of the stars with respect to the complex center can thus be used to analyze two types of motion: expansion or contraction, and rotation. The expansion and contraction properties can be gauged by comparing the directions of the position vector and the relative velocity vector. For expansion, $\rm \bf{\delta v_*}$ will be parallel to $\rm \bf {r_*}$, and for contraction $\rm \bf{\delta v_*}$ will be anti-parallel to $\rm \bf {r}_*$. Hence for expansion, the dot product ($\rm \bf {\hat{r}_*~.~\bf{\delta v_*}}$) should be large and positive, and for contraction, it should be large and negative. By the same reasoning, the cross product ($\rm \bf {\hat{r}_*~\times~\bf{\delta v_*}}$) will be small for both expansion and contraction. Conversely, the cross product ($\rm \bf {\hat{r}_*~\times~\bf{\delta v_*}}$) will be large for large-scale rotation, while the dot product ($\rm \bf {\hat{r}_*~.~\bf{\delta v_*}}$) will be minimal. In the following, we derive the dot and cross products and list them in Table \ref{3dvel}. Since both the dot and cross product parameters use the unit position vector $\rm \bf{\hat{r}_*}$, the values of both parameters are in velocity units. The mean values of the parameters can be expressed as $\rm v_{exp} = \overline{ \bf {\hat{r}_*}~.~\bf{\delta v_*} }$ and $\rm v_{rot} = \overline{ \bf {\hat{r}_*}~\times~\bf{\delta v_*} }$. \begin{table*} \tiny \centering \caption{Mean values of heliocentric and LSR velocities and dispersions derived from the 68 stars of cluster C-1. Values of the expansion and rotation velocities are also listed.} \label{3d_mean_vel} \begin{tabular}{cccccccccccccccccc} \\ \hline Cluster & $\rm \bar{U}$ & $\rm \bar{V}$ & $\rm \bar{W}$ & $\rm \bar{u}$ & $\rm \bar{v}$ & $\rm \bar{w}$ & $\rm \sigma u$ & $\rm \sigma v$ & $\rm \sigma w$ & $\rm \sigma$ & $\rm v_{exp}$ & \multicolumn{3}{c}{$\rm v^a_{rot}$} \\ &(km/s)&(km/s)&(km/s)&(km/s)&(km/s)&(km/s) & (km/s)&(km/s)&(km/s)&(km/s)&(km/s)& \multicolumn{3}{c}{(km/s)} \\ \hline C-1 & 20.47 & -14.33 & -12.75 & 32.57 & -2.09 & -5.50 & 3.04 & 16.15 & 2.03 & 16.56 & 1.11 & -0.06 & 0.07 & 1.02 \\ \hline \end{tabular} \\ $\rm ^a$ Columns 13, 14, and 15 list values of the three components of the rotation velocity. \end{table*} We derive the expansion velocity, $\rm v_{exp}$, to be 1.11~km/s. The derived rotation velocities are listed in Table \ref{3d_mean_vel}.
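For concreteness, the following minimal Python sketch (ours, for illustration only; the array names are assumed placeholders rather than the actual analysis code) shows how the above diagnostics can be computed from the Cartesian positions and LSR velocities of the C-1 members.
\begin{verbatim}
import numpy as np

# Illustrative sketch (not the paper's code): xyz and uvw are assumed
# (N, 3) arrays holding the Cartesian positions (pc) and LSR velocities
# (km/s) of the cluster members discussed above.
def expansion_rotation_diagnostics(xyz, uvw):
    r = xyz - xyz.mean(axis=0)                 # offsets from the cluster center
    dv = uvw - uvw.mean(axis=0)                # (du, dv, dw) relative velocities
    r_hat = r / np.linalg.norm(r, axis=1, keepdims=True)  # unit position vectors
    v_exp = np.einsum('ij,ij->i', r_hat, dv)   # per-star dot products (km/s)
    v_rot = np.cross(r_hat, dv)                # per-star cross products (km/s)
    sigma = np.sqrt((dv.std(axis=0) ** 2).sum())  # 3D velocity dispersion
    return v_exp.mean(), v_rot.mean(axis=0), sigma

# Consistency check of the quoted dispersion from the per-axis values:
# np.sqrt(3.04**2 + 16.15**2 + 2.03**2) is approximately 16.56 km/s.
\end{verbatim}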
From the CO maps, \citet{1995ApJ...447..721P} obtained an expansion velocity of 5~km/s for the whole complex. Their analysis suggests that the gas within the complex is pushed away to the outskirts by the central massive star, resulting in an expansion of the system. A similar expansion velocity is also observed by \citet{2022arXiv221011930P}. Though cluster C-1 is expanding, its expansion is slow compared to that of the whole complex. This could be because young stars dominate the central region, and cluster C-1 is expanding slowly due to its higher density. Nearby Galactic clusters are observed to expand with velocities similar to that of cluster C-1 \citep{2019ApJ...870...32K}. Their study of a set of 28 Galactic clusters using Gaia-DR2 reported a typical expansion velocity of $\rm \sim0.5~km/s$. Similarly, the study conducted by \citet{2021ApJ...912..162P} of 13 open clusters within a distance of 500~pc using Gaia-EDR3 reported many clusters to be super-virial and expanding in nature. \subsection{Star-formation history in IC 1396} \label{over_star_form} IC 1396 is one of the nearby star-forming complexes dominated by feedback-driven star formation activity (see Section \ref{source_detl}). The energetic stellar wind from the central massive star has cleared up most of the gas, resulting in a cavity of radius $\sim1.5^{\circ}$. The large cavity can be seen at infrared wavelengths, with photodissociation regions (PDRs) associated with the boundary of the complex (see Figure \ref{wise}). This massive feedback effect also forms BRCs and fingertip structures within the complex \citep{1991ApJ...370..263S, 2005A&A...432..575F, 2012MNRAS.421.3206S}. Here, we discuss the overall star formation history of the complex. The spatial distribution of the member sources (see Figure \ref{rf_wise}) and their association with the BRCs all indicate ongoing feedback-driven star formation activity within the complex. The mean age of the sub-clusters (see Section ) suggests multi-generation star formation activity within the complex. However, the formation of the sub-clusters in the complex might have happened through a hierarchical process. To assess this, we conduct a KS test on the ages of the two major groups of stars (see Section \ref{cluster}). One group is from the inner clusters (C-1 and C-2), and the other is from the outer clusters (C-3, C-4, and C-5). The $p$-value of the KS test comes out to be 0.00026. This low $p$-value indicates that a majority of the stars from both groups might have formed over a similar time scale. The hierarchical star formation could be due to the fractal and turbulent nature of the ambient cloud, where star formation can occur simultaneously or near-simultaneously at different locations of the clouds \citep{2003MNRAS.343..413B,2018MNRAS.481..688G,2022MNRAS.510.2097T}. However, one limitation of our analysis is that we have probed stars using optical measurements. Thus, many sources embedded in the BRCs might be missing in our analysis; as a result, the estimated ages of the groups associated with the BRCs are likely upper limits. Kinematic and age analyses of the embedded members are needed to understand whether the groups associated with the BRCs formed through entirely hierarchical collapse processes or whether stellar feedback from the central cluster has helped induce star formation in these clouds. In favorable conditions, stellar feedback can enhance or accelerate star formation in pre-existing clouds where star formation is already underway.
In this case, one may find both an older and a young population of sources. Observations show that young clusters tend to have a typical velocity dispersion of 2~km/s \citep{2019ApJ...870...32K}. Thus, older stars can move $\sim2$~pc in 2~Myr, so inferences such as an age gradient and elongated morphology, which are signatures of induced star formation as we move from the ionizing sources to the tips of the BRCs, can be erased, particularly if we are dealing with small groups or small numbers of stars. Thus, comprehensive spectroscopic and kinematic analysis of member stars in both the optical and infrared bands would be highly desirable to shed more light on the formation of the different sub-groups in the complex. \section{Summary} \label{summ} We use high-precision Gaia-DR3 astrometry and photometry data and apply machine learning algorithms to carry out the membership analysis of the complex. Using the members identified in this work, we study various star-formation properties of the complex. In the following, we report our significant findings from this work. \begin{enumerate} \item Using the Gaia-DR3 astrometry and photometry data and applying the supervised RF machine learning technique, we identify 1243 highly probable members of the complex. The identified member population is of high quality, with 95\% of the stars having a relative parallax error of less than 20\%. More than 99\% of the stars have RUWE less than 1.4, suggesting high astrometric quality. Of the 1243 stars, 731 are entirely new members identified in this work. This has significantly enhanced the reliable member population list for IC 1396. \item The mean values of the parameters RUWE, parallax, $\rm \mu_{\alpha}cos\delta$, and $\rm \mu_{\delta}$ are 1.12, $1.085\pm0.003$~mas, $-1.194\pm0.002$~mas/yr, and $-4.215\pm0.004$~mas/yr, respectively. The spatial distribution of the parallax, $\rm \mu_{\alpha}cos\delta$, and $\rm \mu_{\delta}$ suggests that the total population is broadly segregated into two groups. Our KS test shows that the proper motion parameters are the most distinctive astrometric features distinguishing the stars projected in the two sub-groups. \item The spatial distribution of the stars reveals the associated clusters. We use the NN method to identify 6 clusters (C-1A, C-1B, C-2, C-3, C-4, and C-5) towards IC 1396. C-1A and C-1B are subclusters of the central cluster C-1. We study the statistical properties of the stars lying within the subclusters. \item Using the G vs. G-RP CMD and PARSEC isochrones, we estimate the age and mass of individual stars. The mean age derived from all 1243 stars is $1.5\pm1.6$~Myr, consistent with the estimates from previous studies. Using the completeness limit of 19~mag in the G band and a distance of 917~pc, we derive the mass completeness limit for the complex to be $\rm \sim0.1~M_{\odot}$, suggesting that the complex is associated with a very low-mass population. \item Of the 1243 stars, 152 good-quality stars ($\rm RUWE<1.4$) have $\rm R_V$ measurements, out of which 85 stars have $\rm R_V$ information from Gaia-DR3 and the remaining 67 from the high-resolution spectroscopic study of \citet{2006AJ....132.2135S}. The mean and median values of $\rm R_V$ derived from the 152 stars are $-16.41\pm0.72$ and 15.80~km/s, respectively. \item We carry out a 3D kinematic analysis to understand the internal motion of stars within the central cluster C-1. We use the $\rm R_V$ values and astrometric data of the 68 stars of the cluster.
We derive the 3D Cartesian positions and velocities of each star. To study the stability of the cluster, we derive the expansion velocity, which is low compared to the previous value derived from CO maps. The low expansion velocity of the cluster suggests a slow expansion compared to the whole complex. The slow expansion might be due to the higher density of recently formed young stars. \item Considering the spatial distribution, the association with BRCs, and the ages of the stars, we study the overall star formation within the complex. The variation in the age of the sub-clusters suggests an ongoing multi-generation star formation process in the complex. However, the sub-clusters of the complex might have formed through a hierarchical process. \end{enumerate} We thank the anonymous referee for a constructive review of the manuscript, which helped in improving the quality of the paper. SRD acknowledges support from a Fondecyt Postdoctoral fellowship (project code 3220162). This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This work made use of various packages of the Python programming language.
{ "arxiv_id": "2302.14320", "language": "en", "timestamp": "2023-03-01T02:09:28", "url": "https://arxiv.org/abs/2302.14320", "yymm": "2302" }
\section{Introduction} Generative adversarial networks (GANs) have become a crucial data-driven tool for generating synthetic data. GANs are generative models trained to produce samples from an unknown (real) distribution using a finite number of training data samples. They consist of two modules, a generator G and a discriminator D, parameterized by vectors $\theta\in\Theta\subset \mathbb{R}^{n_g}$ and $\omega\in\Omega\subset\mathbb{R}^{n_d}$, respectively, which play an adversarial game with each other. The generator $G_\theta$ maps noise $Z\sim P_Z$ to a data sample in $\mathcal{X}$ via the mapping $z\mapsto G_\theta(z)$ and aims to mimic data from the real distribution $P_{r}$. The discriminator $D_\omega$ takes as input $x\in\mathcal{X}$ and classifies it as real or generated by computing a score $D_\omega(x)\in[0,1]$ which reflects the probability that $x$ comes from $P_r$ (real) as opposed to $P_{G_\theta}$ (synthetic). For a chosen value function $V(\theta,\omega)$, the adversarial game between G and D can be formulated as a zero-sum min-max problem given by \thinmuskip=1mu \begin{align}\label{eqn:GANgeneral} \inf_{\theta\in\Theta}\sup_{\omega\in\Omega} \,V(\theta,\omega). \end{align} Goodfellow \emph{et al.}~\cite{Goodfellow14} introduce the vanilla GAN for which \thickmuskip=2mu \medmuskip=0mu \begin{align*} V_\text{VG}(\theta,\omega) \nonumber =\mathbb{E}_{X\sim P_r}[\log{D_\omega(X)}]+\mathbb{E}_{X\sim P_{G_\theta}}[\log{(1-D_\omega(X))}]. \end{align*} For this $V_\text{VG}$, they show that when the discriminator class $\{D_\omega\}_{\omega\in\Omega}$ is rich enough, \eqref{eqn:GANgeneral} simplifies to minimizing the Jensen-Shannon divergence~\cite{Lin91} between $P_r$ and $P_{G_\theta}$. Various other GANs have been studied in the literature using different value functions, including $f$-divergence based GANs called $f$-GANs~\cite{NowozinCT16}, IPM based GANs~\cite{ArjovskyCB17,sriperumbudur2012empirical,liang2018well}, etc. Observing that the discriminator is a classifier, recently, Kurri \emph{et al.}~\cite{KurriSS21,kurri-2022-convergence} show that the value function in \eqref{eqn:GANgeneral} can be written using a class probability estimation (CPE) loss $\ell(y,\hat{y})$ whose inputs are the true label $y\in\{0,1\}$ and predictor $\hat{y}\in[0,1]$ (soft prediction of $y$) as \begin{align*} V(\theta,\omega) =\mathbb{E}_{X\sim P_r}[-\ell(1,D_\omega(X))]+\mathbb{E}_{X\sim P_{G_\theta}}[-\ell(0,D_\omega(X))]. \end{align*} Using this approach, they introduce $\alpha$-GAN using the tunable CPE loss $\alpha$-loss~\cite{sypherd2019tunable,sypherd2022journal}, defined for $\alpha \in(0,\infty]$ as \begin{align} \label{eq:cpealphaloss} \ell_\alpha(y,\hat{y})\coloneqq\frac{\alpha}{\alpha-1}\left(1-y\hat{y}^{\frac{\alpha-1}{\alpha}}-(1-y)(1-\hat{y})^{\frac{\alpha-1}{\alpha}}\right). \end{align} They show that the $\alpha$-GAN formulation recovers various $f$-divergence based GANs including the Hellinger GAN~\cite{NowozinCT16} ($\alpha=1/2$), the vanilla GAN~\cite{Goodfellow14} ($\alpha=1$), and the Total Variation (TV) GAN~\cite{NowozinCT16} ($\alpha=\infty$). Further, for a large enough discriminator class, the min-max optimization for $\alpha$-GAN in \eqref{eqn:GANgeneral} simplifies to minimizing the Arimoto divergence~\cite{osterreicher1996class,LieseV06}. 
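As a concrete illustration of the CPE loss in \eqref{eq:cpealphaloss}, the following Python sketch (ours, for illustration only; it is not taken from any of the cited works) evaluates $\ell_\alpha(y,\hat{y})$ and treats $\alpha=1$ as the limiting log-loss case.
\begin{verbatim}
import numpy as np

def alpha_loss(y, y_hat, alpha):
    """CPE alpha-loss for a label y in {0, 1} and soft prediction y_hat in (0, 1).

    alpha = 1 is handled as the limiting case (log-loss); alpha = inf
    (the total-variation-type loss) is not treated in this sketch.
    """
    if np.isclose(alpha, 1.0):
        return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    c = alpha / (alpha - 1.0)
    e = (alpha - 1.0) / alpha
    return c * (1.0 - y * y_hat ** e - (1 - y) * (1 - y_hat) ** e)

# Sanity check: alpha close to 1 approaches the log-loss -log(0.8).
print(alpha_loss(1, 0.8, 1.001), -np.log(0.8))
\end{verbatim}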
While each of the abovementioned GANs has distinct advantages, they continue to suffer from one or more types of training instabilities, including vanishing/exploding gradients, mode collapse, and sensitivity to hyperparameter tuning. In \cite{Goodfellow14}, Goodfellow \emph{et al.} note that the generator's objective in the vanilla GAN can \emph{saturate} early in training (due to the use of the sigmoid activation) when D can easily distinguish between the real and synthetic samples, i.e., when the output of D is near zero for all synthetic samples, leading to vanishing gradients. Further, a confident D induces a steep gradient at samples close to the real data, thereby preventing G from learning such samples due to exploding gradients. To alleviate these, \cite{Goodfellow14} proposes a \emph{non-saturating} (NS) generator objective: \begin{align} V_\text{VG}^\text{NS}(\theta,\omega)=\mathbb{E}_{X\sim P_{G_\theta}}[-\log{D_\omega(X)}]. \end{align} This NS version of the vanilla GAN may be viewed as involving different objective functions for the two players (in fact, with two versions of the $\alpha=1$ CPE loss, i.e., log-loss, for D and G). However, it continues to suffer from mode collapse \cite{arjovsky2017towards,wiatrak2019stabilizing}. While other dual-objective GANs have also been proposed (e.g., Least Squares GAN (LSGAN)~\cite{Mao_2017_LSGAN}, R\'{e}nyiGAN~\cite{bhatia2021least}, NS $f$-GAN \cite{NowozinCT16}, hybrid $f$-GAN \cite{poole2016improved}), few have fully addressed training instabilities. Recent results have shown that $\alpha$-loss demonstrates desirable gradient behaviors for different $\alpha$ values \cite{sypherd2022journal}. It also enables learning robust classifiers that can reduce the confidence of D (a classifier), thereby allowing G to learn without gradient issues. To this end, we introduce a different $\alpha$-loss objective for each player to address training instabilities. We propose a tunable dual-objective $(\alpha_D,\alpha_G)$-GAN, where the objective functions of D and G are written in terms of $\alpha$-loss with parameters $\alpha_D\in(0,\infty]$ and $\alpha_G\in(0,\infty]$, respectively. Our key contributions are: \begin{itemize}[leftmargin=*] \item For this non-zero-sum game, we show that a Nash equilibrium exists. For appropriate $(\alpha_D,\alpha_G)$ values, we derive the optimal strategies for D and G and prove that for the optimal $D_{\omega^*}$, G minimizes an $f$-divergence and can therefore learn the real distribution $P_r$. \item Since $\alpha$-GAN captures various GANs, including the vanilla GAN, it can potentially suffer from vanishing gradients due to a saturation effect. We address this by introducing a non-saturating version of the $(\alpha_D,\alpha_G)$-GAN and present its Nash equilibrium strategies for D and G. \item A natural question that arises is how to quantify the theoretical guarantees for dual-objective GANs, specifically for $(\alpha_D,\alpha_G)$-GANs, in terms of their estimation capabilities in the setting of limited capacity models and finite training samples. To this end, we define estimation error for $(\alpha_D,\alpha_G)$-GANs, present an upper bound on the error and a matching lower bound under additional assumptions. \item Finally, we demonstrate empirically that tuning $\alpha_D$ and $\alpha_G$ significantly reduces vanishing and exploding gradients and alleviates mode collapse on a synthetic 2D-ring dataset.
For the high-dimensional Stacked MNIST dataset, we show that our tunable approach is more robust in terms of mode coverage to the choice of GAN hyperparameters, including number of training epochs and learning rate, relative to both vanilla GAN and LSGAN. \end{itemize} \section{Main Results} \subsection{$(\alpha_D,\alpha_G)$-GAN} We first propose a dual-objective $(\alpha_D,\alpha_G)$-GAN with different objective functions for the generator and discriminator. In particular, the discriminator maximizes $V_{\alpha_D}(\theta,\omega)$ while the generator minimizes $V_{\alpha_G}(\theta,\omega)$, where \medmuskip=0mu \begin{align} &V_{\alpha}(\theta,\omega)\nonumber\\ &=\mathbb{E}_{X\sim P_r}[-\ell_{\alpha}(1,D_\omega(X))]+\mathbb{E}_{X\sim P_{G_\theta}}[-\ell_{\alpha}(0,D_\omega(X))]\label{eqn:sat-gen-objective}, \end{align} for $\alpha=\alpha_D,\alpha_G \in (0,\infty]$. We recover the $\alpha$-GAN \cite{KurriSS21,kurri-2022-convergence} value function when $\alpha_D=\alpha_G=\alpha$. The resulting $(\alpha_D,\alpha_G)$-GAN is given by \begin{subequations} \begin{align} &\sup_{\omega\in\Omega}V_{\alpha_D}(\theta,\omega) \label{eqn:disc_obj} \\ & \inf_{\theta\in\Theta} V_{\alpha_G}(\theta,\omega) \label{eqn:gen_obj}. \end{align} \label{eqn:alpha_D,alpha_G-GAN} \end{subequations} The following theorem presents the conditions under which the optimal generator learns the real distribution $P_r$ when the discriminator set $\Omega$ is large enough. \begin{theorem}\label{thm:alpha_D,alpha_G-GAN-saturating} For a fixed generator $G_\theta$, the discriminator optimizing \eqref{eqn:disc_obj} is given by \begin{align} D_{\omega^*}(x)=\frac{p_r(x)^{\alpha_D}}{p_r(x)^{\alpha_D}+p_{G_\theta}(x)^{\alpha_D}}, \label{eqn:optimaldisc-gen-alpha-GAN} \end{align} where $p_r$ and $p_{G_\theta}$ are the corresponding densities of the distributions $P_r$ and $P_{G_\theta}$, respectively, with respect to a base measure $dx$ (e.g., Lebesgue measure). For this $D_{\omega^*}$ and the function $f_{\alpha_D,\alpha_G}:\mathbb R_+ \to \mathbb R$ defined as \begin{align}\label{eqn:f-alpha_d,alpha_g} f_{\alpha_D,\alpha_G}(u)=\frac{\alpha_G}{\alpha_G-1}\left(\frac{u^{\alpha_D\left(1-\frac{1}{\alpha_G}\right)+1}+1}{(u^{\alpha_D}+1)^{1-\frac{1}{\alpha_G}}}-2^{\frac{1}{\alpha_G}}\right), \end{align} \eqref{eqn:gen_obj} simplifies to minimizing a non-negative symmetric $f_{\alpha_D,\alpha_G}$-divergence $D_{f_{\alpha_D,\alpha_G}}(\cdot||\cdot)$ as \begin{align}\label{eqn:gen-alpha_d,alpha_g-obj} \inf_{\theta\in\Theta} D_{f_{\alpha_D,\alpha_G}}(P_r||P_{G_\theta})+\frac{\alpha_G}{\alpha_G-1}\left(2^{\frac{1}{\alpha_G}}-2\right), \end{align} which is minimized iff $P_{G_\theta}=P_r$ for $(\alpha_D,\alpha_G)\in (0,\infty]^2$ such that \begin{align*} \Big (\alpha_D \le 1,\;\alpha_G > \frac{\alpha_D}{\alpha_D+1} \Big ) \; \text{ or } \; \Big ( \alpha_D > 1,\;\frac{\alpha_D}{2}< \alpha_G \le \alpha_D \Big ). \end{align*} \end{theorem} \begin{proof}[Proof sketch]\let\qed\relax We substitute the optimal discriminator of \eqref{eqn:disc_obj} into the objective function of \eqref{eqn:gen_obj} and translate it into the form \begin{align} \int_\mathcal{X} p_{G_\theta}(x)f_{\alpha_D,\alpha_G}\left(\frac{p_r(x)}{p_{G_\theta}(x)}\right) dx + \frac{\alpha_G}{\alpha_G-1}\left(2^{\frac{1}{\alpha_G}}-2\right). \label{eq:gen-obj-with-opt-disc} \end{align} We then find the conditions on $\alpha_D$ and $\alpha_G$ for $f_{\alpha_D,\alpha_G}$ to be strictly convex so that the first term in \eqref{eq:gen-obj-with-opt-disc} is an $f$-divergence. 
Figure \ref{fig:convexity_regions}(a) illustrates the feasible $(\alpha_D,\alpha_G)$-region. A detailed proof can be found in Appendix \ref{appendix:alpha_D,alpha_G-GAN-saturating}. \end{proof} Noting that $\alpha$-GAN recovers various well-known GANs, including the vanilla GAN, which is prone to saturation, the $(\alpha_D,\alpha_G)$-GAN formulation using the generator objective function in \eqref{eqn:sat-gen-objective} can similarly saturate early in training, causing vanishing gradients. We therefore propose the following NS alternative to the generator's objective in \eqref{eqn:sat-gen-objective}: \begin{align} V^\text{NS}_{\alpha_G}(\theta,\omega) &= \mathbb{E}_{X\sim P_{G_\theta}}[\ell_{\alpha_G}(1,D_\omega(X))], \label{eqn:nonsat-gen-objective} \end{align} thereby replacing \eqref{eqn:gen_obj} with \begin{align} \inf_{\theta\in\Theta} V^\text{NS}_{\alpha_G}(\theta,\omega). \label{eqn:gen_obj_ns} \end{align} Comparing \eqref{eqn:gen_obj} and \eqref{eqn:gen_obj_ns}, note that the additional expectation term over $P_r$ in \eqref{eqn:sat-gen-objective} results in \eqref{eqn:gen_obj} simplifying to a symmetric divergence for $D_{\omega^*}$ in \eqref{eqn:optimaldisc-gen-alpha-GAN}, whereas the single term in \eqref{eqn:nonsat-gen-objective} will result in \eqref{eqn:gen_obj_ns} simplifying to an asymmetric divergence. The optimal discriminator for this NS game remains the same as in \eqref{eqn:optimaldisc-gen-alpha-GAN}. The following theorem provides the solution to \eqref{eqn:gen_obj_ns} under the assumption that the optimal discriminator can be attained. \begin{theorem}\label{thm:alpha_D,alpha_G-GAN-nonsaturating} For the same $D_{\omega^*}$ in \eqref{eqn:optimaldisc-gen-alpha-GAN} and the function $f_{\alpha_D,\alpha_G}^\text{NS}:\mathbb R_+ \to \mathbb R$ defined as \begin{align}\label{eqn:f-alpha_d,alpha_g-ns} f^\text{NS}_{\alpha_D,\alpha_G}(u)=\frac{\alpha_G}{\alpha_G-1}\left(2^{\frac{1}{\alpha_G}-1}-\frac{u^{\alpha_D\left(1-\frac{1}{\alpha_G}\right)}}{(u^{\alpha_D}+1)^{1-\frac{1}{\alpha_G}}}\right), \end{align} \eqref{eqn:gen_obj_ns} simplifies to minimizing a non-negative asymmetric $f^\text{NS}_{\alpha_D,\alpha_G}$-divergence $D_{f^{\text{NS}}_{\alpha_D,\alpha_G}}(\cdot||\cdot)$ as \begin{align}\label{eqn:gen-alpha_d,alpha_g-obj-ns} \inf_{\theta\in\Theta} D_{f^\text{NS}_{\alpha_D,\alpha_G}}(P_r||P_{G_\theta})+\frac{\alpha_G}{\alpha_G-1}\left(1-2^{\frac{1}{\alpha_G}-1}\right), \end{align} which is minimized iff $P_{G_\theta}=P_r$ for $(\alpha_D,\alpha_G) \in (0,\infty]^2$ such that $\alpha_D + \alpha_G > \alpha_G\alpha_D.$ \end{theorem} \vspace{-0.05in} The proof mimics that of Theorem \ref{thm:alpha_D,alpha_G-GAN-saturating} and is detailed in Appendix \ref{appendix:alpha_D,alpha_G-GAN-nonsaturating}. Figure \ref{fig:convexity_regions}(b) illustrates the feasible $(\alpha_D,\alpha_G)$-region; in contrast to the saturating setting of Theorem \ref{thm:alpha_D,alpha_G-GAN-saturating}, the NS setting constrains $\alpha< 2$ when $\alpha_D=\alpha_G=\alpha$. Nonetheless, we later show empirically in Section~\ref{subsec:stacked-mnist} that even tuning over this restricted set provides robustness against hyperparameter choices.
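As a quick way to probe the feasibility conditions of Theorems \ref{thm:alpha_D,alpha_G-GAN-saturating} and \ref{thm:alpha_D,alpha_G-GAN-nonsaturating} numerically, the following Python sketch (illustrative only; it checks the sign of a finite-difference second derivative on a grid rather than proving convexity) evaluates $f_{\alpha_D,\alpha_G}$ and $f^{\text{NS}}_{\alpha_D,\alpha_G}$ for a given pair $(\alpha_D,\alpha_G)$ with $\alpha_G\neq 1$.
\begin{verbatim}
import numpy as np

def f_sat(u, aD, aG):
    c = aG / (aG - 1.0)
    return c * ((u ** (aD * (1 - 1 / aG) + 1) + 1)
                / (u ** aD + 1) ** (1 - 1 / aG) - 2 ** (1 / aG))

def f_ns(u, aD, aG):
    c = aG / (aG - 1.0)
    return c * (2 ** (1 / aG - 1)
                - u ** (aD * (1 - 1 / aG)) / (u ** aD + 1) ** (1 - 1 / aG))

def looks_strictly_convex(f, aD, aG, u=np.linspace(0.05, 20.0, 2000)):
    # Finite-difference probe of f'' > 0 on a grid; indicative, not a proof.
    second = np.gradient(np.gradient(f(u, aD, aG), u), u)
    return bool(np.all(second[2:-2] > 0))

# (aD, aG) = (0.2, 1.5) lies in the feasible regions of both theorems.
print(looks_strictly_convex(f_sat, 0.2, 1.5),
      looks_strictly_convex(f_ns, 0.2, 1.5))
\end{verbatim}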
\begin{figure}[t] \centering \footnotesize \setlength{\tabcolsep}{1pt} \begin{tabular}{@{}cc@{}} \includegraphics[page=1,width=0.48\linewidth]{Figures/plots_sat_and_nonsat.pdf} & \raisebox{1pt}{\includegraphics[page=2,width=0.48\linewidth]{Figures/plots_sat_and_nonsat.pdf}} \\ (a) & (b) \end{tabular} \caption{(a) Plot of regions $R_1 = \{(\alpha_D,\alpha_G) \in (0,\infty]^2 \bigm\vert \alpha_D \le 1,\alpha_G > \frac{\alpha_D}{\alpha_D+1}\}$ and $R_2 = \{(\alpha_D,\alpha_G) \in (0,\infty]^2 \bigm\vert \alpha_D > 1,\frac{\alpha_D}{2}< \alpha_G \le \alpha_D\}$ for which $f_{\alpha_D,\alpha_G}$ is strictly convex. (b) Plot of region $R_\text{NS}=\{(\alpha_D,\alpha_G)\in (0,\infty]^2 \mid \alpha_D+\alpha_G > \alpha_D\alpha_G\}$ for which $f^\text{NS}_{\alpha_D,\alpha_G}$ is strictly convex.} \label{fig:convexity_regions} \end{figure} \subsection{Estimation Error} \label{subsec:est-err} Theorems \ref{thm:alpha_D,alpha_G-GAN-saturating} and \ref{thm:alpha_D,alpha_G-GAN-nonsaturating} assume sufficiently large number of training samples and ample discriminator and generator capacity. However, in practice both the number of training samples and model capacity are usually limited. We consider a setting similar to prior works on generalization and estimation error for GANs (e.g., \cite{JiZL21,kurri-2022-convergence}) with finite training samples $S_x=\{X_1,\dots,X_n\}$ and $S_z=\{Z_1,\dots,Z_m\}$ from $P_r$ and $P_Z$, respectively, and with neural networks chosen as the discriminator and generator models. The sets of samples $S_x$ and $S_z$ induce the empirical real and generated distributions $\hat{P}_r$ and $\hat{P}_{G_\theta}$, respectively. A useful quantity to evaluate the performance of GANs in this setting is that of the estimation error, defined in \cite{JiZL21} as the performance gap of the optimized value function when trained using only finite samples relative to the optimal when the statistics are known. Using this definition, \cite{kurri-2022-convergence} derived upper bounds on this error for $\alpha$-GANs. However, such a definition requires a common value function for both discriminator and generator, and therefore, does not directly apply to the dual-objective setting we consider here. Our definition relies on the observation that estimation error inherently captures the effectiveness of the generator (for a corresponding optimal discriminator model) in learning with limited samples. We formalize this intuition below. Since $(\alpha_D,\alpha_G)$-GANs use different objective functions for the discriminator and generator, we start by defining the optimal discriminator ${\omega}^*$ for a generator model $G_\theta$ as \begin{align} {\omega}^*(P_r,P_{G_\theta}) \coloneqq \argmax_{\omega \in \Omega} \; V_{\alpha_D}(\theta,\omega)\big\rvert_{P_r,P_{G_\theta}}, \label{eq:est-err-opt-disc} \end{align} where the notation $|_{\cdot,\cdot}$ allows us to make explicit the distributions used in the value function. In keeping with the literature where the value function being minimized is referred to as the neural net (NN) distance (since D and G are modeled as neural networks) \cite{AroraGLMZ17,JiZL21,kurri-2022-convergence}, we define the generator's NN distance $d_{\omega^*(P_r,P_{G_\theta})}$ as \begin{align} d_{\omega^*(P_r,P_{G_\theta})}(P_r,{P}_{G_{{\theta}}}) \coloneqq V_{\alpha_G}(\theta,\omega^*(P_r,P_{G_\theta}))\big\rvert_{P_r,P_{G_\theta}}. 
\label{eq:est-err-gen-obj} \end{align} The resulting minimization for training the $(\alpha_D,\alpha_G)$-GAN using finite samples is \begin{align} \inf_{\theta\in\Theta} d_{\omega^*(\hat{P}_r,\hat{P}_{G_\theta})}(\hat{P}_r,\hat{P}_{G_{{\theta}}}). \label{eq:training-empirical-alpha_d,alpha_g-GAN} \end{align} Denoting $\hat{\theta}^*$ as the minimizer of \eqref{eq:training-empirical-alpha_d,alpha_g-GAN}, we define the estimation error for $(\alpha_D,\alpha_G)$-GANs as \begin{align} d_{\omega^*(P_r,P_{G_{\hat{\theta}^*}})}(P_r,{P}_{G_{\hat{\theta}^*}})-\inf_{\theta\in\Theta} d_{\omega^*({P}_r,{P}_{G_\theta})}(P_r,P_{G_{\theta}}) \label{eq:est-error-def-alpha_d,alpha_g-GAN}. \end{align} We use the same notation as in \cite{kurri-2022-convergence}, detailed in the following for easy reference. For $x\in\mathcal{X}\coloneqq\{x\in\mathbb{R}^d:||x||_2\leq B_x\}$ and $z\in\mathcal{Z}\coloneqq\{z\in\mathbb{R}^p:||z||_2\leq B_z\}$, we model the discriminator and generator as $k$- and $l$-layer neural networks, respectively, such that $D_\omega$ and $G_\theta$ can be written as: \begin{align} D_\omega&:x\mapsto \sigma\left(\mathbf{w}_k^\mathsf{T}r_{k-1}(\mathbf{W}_{k-1}r_{k-2}(\dots r_1(\mathbf{W}_1x)))\right)\, \label{eqn:disc-model}\\ G_\theta&:z\mapsto \mathbf{V}_ls_{l-1}(\mathbf{V}_{l-1}s_{l-2}(\dots s_1(\mathbf{V}_1z))), \end{align} where (i) $\mathbf{w}_k$ is a parameter vector of the output layer; (ii) for $i\in[1:k-1]$ and $j\in[1:l]$, $\mathbf{W}_i$ and $\mathbf{V}_j$ are parameter matrices; (iii) $r_i(\cdot)$ and $s_j(\cdot)$ are entry-wise activation functions of layers $i$ and $j$, respectively, i.e., for $\mathbf{a}\in\mathbb{R}^t$, $r_i(\mathbf{a})=\left[r_i(a_1),\dots,r_i(a_t)\right]$ and $s_j(\mathbf{a})=\left[s_j(a_1),\dots,s_j(a_t)\right]$; and (iv) $\sigma(\cdot)$ is the sigmoid function given by $\sigma(p)=1/(1+\mathrm{e}^{-p})$. We assume that $r_i(\cdot)$ and $s_j(\cdot)$ are $R_i$- and $S_j$-Lipschitz, respectively, and also that they are positive homogeneous, i.e., $r_i(\lambda p)=\lambda r_i(p)$ and $s_j(\lambda p)=\lambda s_j(p)$, for any $\lambda\geq 0$ and $p\in\mathbb{R}$. Finally, as is common in such analysis \cite{neyshabur2015norm,salimans2016weight,golowich2018size,JiZL21}, we assume that the Frobenius norms of the parameter matrices are bounded, i.e., $||\mathbf{W}_i||_F\leq M_i$, $i\in[1:k-1]$, $||\mathbf{w}_k||_2\leq M_k$, and $||\mathbf{V}_j||_F\leq N_j$, $j\in[1:l]$. We now present an upper bound on \eqref{eq:est-error-def-alpha_d,alpha_g-GAN} in the following theorem.
\begin{theorem}\label{thm:estimationerror-upperbound-alpha_d,alpha_g-GAN} In the setting described above, with probability at least $1-2\delta$ over the randomness of training samples $S_x=\{X_i\}_{i=1}^n$ and $S_z=\{Z_j\}_{j=1}^m$, we have \begin{align} &d_{\omega^*(P_r,P_{G_{\hat{\theta}^*}})}(P_r,{P}_{G_{\hat{\theta}^*}})-\inf_{\theta\in\Theta} d_{\omega^*({P}_r,{P}_{G_\theta})}(P_r,P_{G_{\theta}})\nonumber\\ &\leq \frac{4C_{Q_x}(\alpha_G) B_xU_\omega\sqrt{3k}}{\sqrt{n}}+\frac{4C_{Q_z}(\alpha_G) U_\omega U_\theta B_z\sqrt{3(k+l-1)}}{\sqrt{m}}\nonumber\\ &\hspace{12pt}+U_\omega\sqrt{\log{\frac{1}{\delta}}}\left(\frac{4C_{Q_x}(\alpha_G) B_x}{\sqrt{2n}}+\frac{4C_{Q_z}(\alpha_G) B_zU_\theta}{\sqrt{2m}}\right), \label{eq:estimationbound} \end{align} where the parameters $U_\omega\coloneqq M_k\prod_{i=1}^{k-1}(M_iR_i)$ and $U_\theta\coloneqq N_l\prod_{j=1}^{l-1}(N_jS_j)$, $Q_x\coloneqq U_\omega B_x$, $Q_z\coloneqq U_\omega U_\theta B_z$, and \begin{align} \label{eq:clipalpha} C_h(\alpha)\coloneqq\begin{cases}\sigma(h)\sigma(-h)^{\frac{\alpha-1}{\alpha}}, \ &\alpha\in(0,1]\\ \left(\frac{\alpha-1}{2\alpha-1}\right)^{\frac{\alpha-1}{\alpha}}\frac{\alpha}{2\alpha-1}, &\alpha\in(1,\infty). \end{cases} \end{align} \end{theorem} \begin{figure*}[h] \centering \footnotesize \setlength{\tabcolsep}{1pt} \begin{tabular}{@{}cc@{}} \includegraphics[height=4.3cm]{Figures/2dring_modes.png} & \raisebox{1pt}{\includegraphics[height=4.3cm]{Figures/2dring_sat.pdf}} \\ (a) & (b) \end{tabular} \caption{(a) Plot of mode coverage over epochs for $(\alpha_{D}, \alpha_{G})$-GAN training with the \textbf{saturating} objectives in \eqref{eqn:alpha_D,alpha_G-GAN}. Fixing $\alpha_{G}=1$, we compare $\alpha_{D} = 1$ (vanilla GAN) with $\alpha_{D} = 0.2$. Placed above this plot are 2D visuals of the generated samples (in black) at different epochs; these show that both GANs successfully capture the ring-like structure, but the vanilla GAN fails to maintain the ring over time. We illustrate the discriminator output in the same visual as a heat map to show that the $\alpha_{D} = 1$ discriminator exhibits more confident predictions (tending to 0 or 1), which in turn subjects G to vanishing and exploding gradients when its objective $\log(1-D)$ saturates as $D\rightarrow 0$ and diverges as $D\rightarrow 1$, respectively. This combination tends to repel the generated data when it approaches the real data, thus freezing any significant weight update in the future. In contrast, the less confident predictions of the $(0.2,1)$-GAN create a smooth landscape for the generated output to descend towards the real data. (b) Plot of success and failure rates over 200 seeds for a range of $\alpha_{D}$ values with $\alpha_{G} = 1$ for the \textbf{saturating} $(\alpha_{D}, \alpha_{G})$-GAN on the 2D-ring, which underscores the stability of $(\alpha_{D} < 1,\alpha_G)$-GANs relative to vanilla GAN. } \label{fig:sat-figure} \end{figure*} The proof is similar to that of \cite[Theorem 3]{kurri-2022-convergence} (and also \cite[Theorem 1]{JiZL21}). We observe that \eqref{eq:estimationbound} does not depend on $\alpha_D$, an artifact of the proof techniques used, and is therefore most likely not the tightest bound possible. See Appendix \ref{appendix:estimationerror-upperbound-alpha_d,alpha_g-GAN} for proof details. 
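For readers who wish to evaluate the bound numerically, the following Python sketch (ours; the function and variable names are illustrative, and any numeric inputs supplied to it are placeholders rather than values used in this paper) computes $C_h(\alpha)$ from \eqref{eq:clipalpha} and assembles the right-hand side of \eqref{eq:estimationbound}.
\begin{verbatim}
import numpy as np

def sigmoid(p):
    return 1.0 / (1.0 + np.exp(-p))

def C(h, alpha):
    """Constant C_h(alpha) from the theorem; the alpha -> inf limit is 1/4."""
    if np.isinf(alpha):
        return 0.25
    if alpha <= 1.0:
        return sigmoid(h) * sigmoid(-h) ** ((alpha - 1) / alpha)
    return ((alpha - 1) / (2 * alpha - 1)) ** ((alpha - 1) / alpha) \
           * alpha / (2 * alpha - 1)

def estimation_error_bound(n, m, delta, alphaG, Bx, Bz, Uw, Ut, k, l):
    """Right-hand side of the estimation-error upper bound."""
    Qx, Qz = Uw * Bx, Uw * Ut * Bz
    t1 = 4 * C(Qx, alphaG) * Bx * Uw * np.sqrt(3 * k) / np.sqrt(n)
    t2 = 4 * C(Qz, alphaG) * Uw * Ut * Bz * np.sqrt(3 * (k + l - 1)) / np.sqrt(m)
    t3 = Uw * np.sqrt(np.log(1 / delta)) * (
        4 * C(Qx, alphaG) * Bx / np.sqrt(2 * n)
        + 4 * C(Qz, alphaG) * Bz * Ut / np.sqrt(2 * m))
    return t1 + t2 + t3
\end{verbatim}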
When $\alpha_D=\alpha_G=\infty$, \eqref{eq:gen-obj-with-opt-disc} reduces to the total variation distance (up to a constant) \cite[Theorem 2]{KurriSS21}, and \eqref{eq:est-err-gen-obj} simplifies to the loss-inclusive NN distance $d^{\ell}_{\mathcal{F}_{nn}}(\cdot,\cdot)$ defined in \cite[eq. (13)]{kurri-2022-convergence} with $\phi(\cdot)=-\ell_\alpha(1,\cdot)$ and $\psi(\cdot)=-\ell_\alpha(0,\cdot)$ for $\alpha=\infty$. We consider a slightly modified version of this quantity with an added constant to ensure nonnegativity (more details in Appendix~\ref{appendix:est-error-lower-bound-alpha-infinity}). For brevity, we henceforth denote this as $d^{\ell_\infty}_{\mathcal{F}_{nn}}(\cdot,\cdot)$. As in \cite{JiZL21}, suppose the generator's class $\{G_\theta\}_{\theta \in \Theta}$ is rich enough such that the generator $G_\theta$ can learn the real distribution $P_r$ and that the number $m$ of training samples in $S_z$ scales faster than the number $n$ of samples in $S_x$\footnote{Since the noise distribution $P_Z$ is known, one can generate an arbitrarily large number $m$ of noise samples.}. Then $\inf_{\theta \in \Theta} d^{\ell_\infty}_{\mathcal{F}_{nn}}(P_r,P_{G_\theta}) = 0$, so the estimation error simplifies to the single term $d^{\ell_\infty}_{\mathcal{F}_{nn}}(P_r,P_{G_{\hat{\theta}^*}})$. Furthermore, the upper bound in \eqref{eq:estimationbound} reduces to $O(c/\sqrt{n})$ for some constant $c$ (note that, in \eqref{eq:clipalpha}, $C_h(\infty)=1/4$). In addition to the above assumptions, also assume the activation functions $r_i$ for $i \in [1:k-1]$ are either strictly increasing or ReLU. For the above setting, we derive a matching min-max lower bound (up to a constant multiple) on the estimation error. \begin{theorem} \label{thm:est-error-lower-bound-alpha-infinity} For the setting above, let $\hat{P}_n$ be an estimator of $P_r$ learned using the training samples $S_x=\{X_i \}_{i=1}^n$. Then, \[\inf_{\hat{P}_n} \sup_{P_r \in \mathcal{P}(\mathcal{X})} \, \mathbb P\left\{d^{\ell_\infty}_{\mathcal{F}_{nn}}(\hat{P}_n,P_r) \ge \frac{C(\mathcal{P}(\mathcal{X}))}{\sqrt{n}} \right\} > 0.24,\] where the constant $C(\mathcal{P}(\mathcal{X}))$ is given by \begin{align} C(\mathcal{P}(\mathcal{X})) = \frac{\log(2)}{20} \Big[ \sigma&\big(M_k r_{k-1}(\dots r_1(M_1 B_x))\big) \nonumber \\ &- \sigma\big(M_k r_{k-1}(\dots r_1(-M_1 B_x))\big) \Big]. \label{eq:est-error-lower-bound-constant} \end{align} \end{theorem} \begin{proof}[Proof sketch]\let\qed\relax To obtain min-max lower bounds, we first prove that $d^{\ell_\infty}_{\mathcal{F}_{nn}}$ is a semi-metric. The remainder of the proof is similar to that of \cite[Theorem 2]{JiZL21}, replacing $d_{\mathcal{F}_{nn}}$ with $d^{\ell_\infty}_{\mathcal{F}_{nn}}$ and noting that the additional sigmoid activation function after the last layer in D satisfies the monotonicity assumption as detailed in Appendix \ref{appendix:est-error-lower-bound-alpha-infinity}. A challenge that remains to be addressed is to verify if $d^{\ell_\alpha}_{\mathcal{F}_{nn}}$ is a semi-metric for $\alpha<\infty$. \end{proof} \section{Illustration of Results} In this section, we compare $(\alpha_{D}, \alpha_{G})$-GAN to two state-of-the-art GANs, namely the vanilla GAN (i.e., the $(1,1)$-GAN) and LSGAN \cite{Mao_2017_LSGAN}, on two datasets: (i) a synthetic dataset generated by a two-dimensional, ring-shaped Gaussian mixture distribution (2D-ring) \cite{srivastava2017veegan} and (ii) the Stacked MNIST image dataset \cite{PACGANLin}.
For each dataset and different GAN objectives, we report several metrics that encapsulate the stability of GAN training over hundreds of random seeds. This allows us to clearly showcase the potential for tuning $(\alpha_{D}, \alpha_{G})$ to obtain stable and robust solutions for image generation. \subsection{2D Gaussian Mixture Ring} The 2D-ring is an oft-used synthetic dataset for evaluating GANs. We draw samples from a mixture of 8 equal-prior Gaussian distributions, indexed $i \in \{1,2,\hdots , 8 \}$ with a mean of $(\cos(2\pi i / 8), \text{ } \sin(2\pi i / 8))$ and variance $10^{-4}$. We generate 50,000 training and 25,000 testing samples; additionally, we generate the same number of 2D latent Gaussian noise vectors. Both the D and G networks have 4 fully-connected layers with 200 and 400 units, respectively. We train for 400 epochs with a batch size of 128, and optimize with Adam \cite{kingma2014adam} and a learning rate of $10^{-4}$ for both models. We consider three distinct settings that differ in their objective functions: \textbf{(i)} $(\alpha_{D}, \alpha_{G})$-GAN in \eqref{eqn:alpha_D,alpha_G-GAN}; \textbf{(ii)} NS $(\alpha_{D}, \alpha_{G})$-GAN in \eqref{eqn:disc_obj}, \eqref{eqn:gen_obj_ns}; \textbf{(iii)} LSGAN with the 0-1 binary coding scheme (see Appendix \ref{appendix:experimental-details-results} for details). For every setting listed above, we train our models on the 2D-ring dataset for 200 random state seeds, where each seed contains different weight initializations for D and G. Ideally, a stable method will reflect similar performance across randomized initializations and also over training epochs; thus, we explore how GAN training performance for each setting varies across seeds and epochs. Our primary performance metric is \textit{mode coverage}, defined as the number of Gaussians (0-8) that contain a generated sample within 3 standard deviations of its mean. A score of 8 conveys successful training, while a score of 0 conveys a significant GAN failure; on the other hand, a score in between 0 and 8 may be indicative of common GAN issues, such as mode collapse or failure to converge. For the saturating setting, the improvement in stability of the $(0.2,1)$-GAN relative to the vanilla GAN is illustrated in Fig. \ref{fig:sat-figure} as detailed in the caption. In fact, vanilla GAN completely fails to converge to the true distribution 30\% of the time while succeeding only 46\% of the time. In contrast, the $(\alpha_{D}, \alpha_{G})$-GAN with $\alpha_{D} < 1$ learns a more stable G due to a less confident D (see also Fig.~\ref{fig:sat-figure}(a)). For example, the $(0.3,1)$-GAN success and failure rates improve to 87\% and 2\%, respectively. Finally, for the NS setting in Fig. \ref{fig:NS}, we find that tuning $\alpha_D$ and $\alpha_G$ yields more consistently stable outcomes than vanilla and LSGANs. Mode coverage rates over 200 seeds for saturating (Tables \ref{table:2d-ring-sat-success-rates} and \ref{table:2d-ring-sat-failure-rates}) and NS (Table \ref{table:2d-ring-ns-success-rates}) are in Appendix \ref{appendix:experimental-details-results}. \begin{figure}[t] \centering \includegraphics[width=8cm]{Figures/2dring_grid.png} \caption{Generated samples from two $(\alpha_{D}, \alpha_{G})$-GANs trained with the \textbf{NS} objectives in \eqref{eqn:disc_obj}, \eqref{eqn:gen_obj_ns}, as well as the LSGAN.
We provide 6 seeds to illustrate the stability in performance for each GAN across multiple runs.} \label{fig:NS} \vspace{-0.075in} \end{figure} \vspace{-0.075in} \subsection{Stacked MNIST} \label{subsec:stacked-mnist} The Stacked MNIST dataset is an enhancement of MNIST \cite{deng2012mnist} as it contains images of size $3 \times 28 \times 28$, where each RGB channel is a $28 \times 28$ image randomly sampled from MNIST. Stacked MNIST is a popular choice for image generation since its use of 3 channels allows for a total of $10^{3} = 1000$ modes, as opposed to the 10 modes (digits) in MNIST, which makes the latter much easier for GANs to learn. We generate 100,000 training samples, 25,000 testing samples, and the same number of 100-dimension latent Gaussian noise vectors. We use the DCGAN architecture \cite{radford2015} for training, which uses deep convolutional neural networks (CNN) for both D and G (details in Tables \ref{table:disc}, \ref{table:gen} of Appendix \ref{appendix:experimental-details-results}). As in other works, we focus solely on the NS setting using appropriate objective functions for vanilla GAN, $(\alpha_{D}, \alpha_{G})$-GAN, and LSGAN. We compute the mode coverage of each trial by feeding each generated sample to a 1000-mode CNN classifier. The classifier is obtained by pretraining on MNIST to achieve 99.5\% validation accuracy. We also consider a range of settings for two key hyperparameters: the number of epochs and learning rate for Adam optimization. Each combination of objective function, number of epochs, and learning rate is trained for 100 seeds; this allows us to report the \emph{mean mode coverage}. We also report the mean Fréchet Inception Distance (FID)\footnote{FID is an unsupervised similarity metric between the real and generated feature distributions extracted by InceptionNet-V3~\cite{heusel2017fid}.}. In Fig. \ref{fig:stacked-modes}(a) and \ref{fig:stacked-modes}(b), we empirically demonstrate the dependence of mode coverage on learning rate and number of epochs, respectively (FID plots are in Appendix \ref{appendix:stacked-mnist}). Achieving robustness to hyperparameter initialization is highly desirable in the unsupervised GAN setting as the choices that facilitate steady model convergence are not easily determined without prior mode knowledge. Observing the mode coverage of different $(\alpha_{D}, \alpha_{G})$-GANs, we find that as the learning rate or training time increases, the performance of both vanilla GAN and LSGAN deteriorates faster than a GAN with $\alpha_{D}= \alpha_{G} > 1$ (see Appendix \ref{appendix:experimental-details-results} for additional details that motivate this choice). Finally, as shown in Fig. \ref{fig:stacked-output}, we observe that the outputs of $(\alpha_{D}, \alpha_{G})$-GAN are more consistent and accurate across multiple seeds, relative to LSGAN and vanilla GAN. \begin{figure}[t] \centering \footnotesize \setlength{\tabcolsep}{1pt} \begin{tabular}{@{}cc@{}} \includegraphics[width=4.5cm]{Figures/stacked_lrs_modes.pdf} & \includegraphics[width=4.5cm]{Figures/stacked_epochs_modes.pdf} \\ (a) & (b) \end{tabular} \caption{Mode coverage vs. (a) varied learning rates with fixed epoch number ($=50$) and (b) varied epoch numbers with fixed learning rate ($=5\times 10^{-4}$) for different GANs, underscoring the vanilla GAN's hyperparameter sensitivity. 
} \label{fig:stacked-modes} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8cm]{Figures/stacked_grid.png} \caption{Generated Stacked MNIST samples from three GANs over 6 seeds when trained for 200 epochs with a learning rate of $5 \times 10^{-4}$.} \label{fig:stacked-output} \vspace{-0.075in} \end{figure} \section{Concluding Remarks} We have introduced a dual-objective GAN formulation, focusing in particular on using $\alpha$-loss for both players' objectives. Our results highlight the value of tuning $\alpha$ in alleviating training instabilities and enhancing robustness to learning rates and training epochs, hyperparameters whose optimal values are generally not known \emph{a priori}. Generalization guarantees of $(\alpha_D,\alpha_G)$-GANs are a natural extension to study. An equally important problem is to evaluate whether our observations hold more broadly, including when the training data is noisy \cite{nietert2022outlier}. \clearpage \balance \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.14263", "language": "en", "timestamp": "2023-03-01T02:07:12", "url": "https://arxiv.org/abs/2302.14263", "yymm": "2302" }
\section{Introduction} With the rapid development of 5G and 6G networks, autonomous driving, and the Internet of Things (IoT), we have witnessed the widespread deployment of active sensing systems (such as radar and communication systems). Typically, active sensing systems utilize the radio frequency (RF) spectrum to determine the properties of a target or of the propagation medium. However, because of the scarcity of the RF spectrum, the increasing number of active sensing systems makes the spectrum extremely crowded. For active systems operating in spectrally crowded environments, the mutual interference will severely degrade the system performance \cite{Griffiths2014spectrum,Griffiths2015spectrum}. Therefore, improving the coexistence between active sensing systems has gained considerable interest in the past few years (see, e.g., \cite{Aubry2016Optimization,Qian2018Coexistence,Zheng2019Coexistence} and the references therein). Many paradigms have been implemented to improve the spectral coexistence between radar and communication systems. One possibility is to use opportunistic illuminators (e.g., signals from satellites, broadcast transmitters, base stations, etc.) for radar detection. Since such radar systems (also called passive radar) receive the direct-path signals and the target reflections passively to detect the targets \cite{howland2005passive,kuschel2019tutorial}, the conflict between radar and communication systems could be minimized. However, passive radar systems might suffer from low range resolution and high sidelobes. Recently, cognitive paradigms have been extensively discussed to improve the spectral compatibility among active sensing systems \cite{Jakabosky2016spectrum,Aubry2021Cognitive}. Through the perception of the outside environment, cognitive systems can minimize the mutual interference, e.g., by adaptive processing in the receiver, or by adjusting the waveforms intelligently in the transmitter \cite{Shi2022TAES}. In \cite{Lindenfeld2004Sparse,Rowe2014SHAPE,Aubry2014spectrally,Liang2015LPNN,Tang2019Efficient}, by assuming that the operating frequency bands of nearby communication systems can be obtained through spectrum sensing, methods for synthesizing spectrally constrained waveforms were proposed. The proposed waveforms can form deep notches in the stopbands (i.e., the operating frequency bands of the communication systems), thus improving the spectral coexistence between radar and nearby communication systems. In \cite{Li2016codesign,Li2017Coexistence,Qian2018Coexistence}, the co-design of multiple-input-multiple-output (MIMO) radar and MIMO communication systems was discussed. By sensing the channel state as well as the clutter/interference parameters, the co-design paradigm can improve the target acquisition performance and ensure the quality of service (QoS) for communications. In addition to the aforementioned paradigms, there have been many attempts at pursuing functional coexistence. Herein, functional coexistence refers to using an integrated system to simultaneously support multiple functions, including radar and communications. Such systems are also called dual-function radar-communication (DFRC) systems, or integrated sensing and communication (ISAC) systems \cite{Tavik2005RF, Zhang2022Survey,Zhang2021Overview,Liu2020overview,Shi2021DRFC,Tang2022MFRF}. Since these functions are realized with shared hardware and an integrated waveform, DFRC systems enjoy many advantages, such as reduced size and weight and improved hardware and spectral efficiency.
One important problem in DFRC systems is the proper design of transmit waveforms. In \cite{Hassanien2016DFRC}, through controlling the sidelobes of the array beampattern by elaborate waveform design, the DFRC system was capable of detecting the target at the mainlobe and delivering the information bits toward the communication user at the sidelobes. However, the bit rate associated with this approach was limited by the number of orthogonal waveforms. In \cite{McCormick2017Simultaneous}, an algorithm was proposed to design waveforms for a MIMO array to simultaneously transmit radar and communication signals toward different directions. In \cite{Liu2018DFRC,Tang2020DFRC,Shi2020DFRC}, the authors proposed waveform design methods to match a radar beampattern and communicate with multiple users simultaneously. In \cite{Liu2020Joint}, the authors proposed a joint transmit beamforming approach for DFRC systems. The optimized waveforms therein could approximate a desired beampattern and guarantee the SINR performance for each communication user. However, the designed waveforms in \cite{Liu2020Joint} are not constant-modulus (CM), which might result in nonlinear effects and distortions. In this work, we investigate waveform design methods for DFRC systems in the presence of clutter. To maximize the detection performance, the output signal-to-interference-plus-noise ratio (SINR) is used as the design metric. Different from the waveform design approach in \cite{Tsinos2021DFRC}, which used an indirect method to control the overall synthesis error of the communication signals, we directly constrain the synthesis error associated with each communication user to be lower than a prescribed level (thus, the performance of every user can be controlled). Moreover, we impose a CM constraint on the transmit waveforms, to comply with the requirements of saturated amplifiers. To tackle the encountered non-convex waveform design problem, a nested optimization algorithm is derived, which is based on cyclic optimization, Dinkelbach's transform, and the alternating direction method of multipliers (ADMM). Simulations show that the waveforms synthesized by the proposed algorithm can obtain superior detection performance and achieve multi-user communication capability at the same time. The outline of the rest of the paper is as follows. Section \ref{Sec:ProblemFormulation} presents the signal model and formulates the waveform design problem. Section \ref{Sec:JointDesign} derives an iterative algorithm to tackle the waveform design problem. Section \ref{Sec:NumericalResults} presents numerical examples to demonstrate the performance of the proposed algorithm and analyzes the impact of several factors (including the transmit energy of the desired communication signals, the number of communication users, the code length, and the synthesis errors) on the performance of DFRC systems. Finally, conclusions are drawn in Section \ref{Sec:Conclusion}. \emph{Notations}: The notations used in this paper are listed in Table \ref{tab:notations}.
\begin{table}[!htbp] \caption{List of Notations} \renewcommand{\arraystretch}{1.15} \centering \begin{tabular}{cl} \toprule Symbol & Meaning\\ \midrule ${\mathbf A}$ & Matrix \\ ${\mathbf a}$ & Vector\\ $a$ & Scalar\\ ${\mathbf I}$ & The identity matrix with the size determined by the subscript\\ $(\cdot)^\ast$, $(\cdot)^\top$, $(\cdot)^\dagger$ & Conjugate, transpose, and conjugate transpose\\ $(\cdot)^{1/2}$ & Square root of a matrix\\ $\textrm{tr}(\cdot)$ & Trace of a matrix\\ $\left| \cdot \right|$, $\left\| \cdot \right\|_2$, $\left\| \cdot \right\|_\textrm{F}$ & Magnitude, Euclidean norm (of a vector), and Frobenius norm (of a matrix) \\ ${\mathbb{E}}\{\cdot \}$ & Expectation of a random variable\\ ${\mathbb{R}}$, ${\mathbb{C}}$ & Domains of real and complex numbers\\ $\textrm{vec}(\cdot)$ & Vectorization\\ $\otimes$ & Kronecker product\\ $\textrm{Re}(\cdot)$ & The real part of a complex-valued scalar/vector/matrix\\ ${\mathbf A} \succ 0$ $({\mathbf A} \succeq 0)$ & ${\mathbf A}$ is positive definite (semidefinite)\\ \bottomrule \end{tabular} \label{tab:notations} \end{table} \section{Signal Model and Problem Formulation}\label{Sec:ProblemFormulation} Consider a DFRC system based on a MIMO array, which has $N_{\textrm{T}}$ transmit and $N_{\textrm{R}}$ receive antennas, as illustrated in \figurename~\ref{fig_1}. We assume that a target of interest and $Q$ interference sources are present. In addition, the system serves $M$ communication users. To support simultaneous target detection and communication with multiple users, we next establish the signal models. \begin{figure*}[!htbp] \centering \includegraphics[width= 0.55 \textwidth] {1-paper.pdf} \caption{Illustration of a MIMO DFRC system.} \label{fig_1} \end{figure*} \subsection{Radar Model} Denote the target direction by $\theta_0$, and the direction of the $q$th interference source by $\theta_q$ ($\theta_q\neq\theta_0$, $q=1,2, \cdots, Q$). The received signal at the $l$th instant ($l=1,2,\cdots,L$, where $L$ is the code length) is given by \begin{align}\label{eq:sig_l} {\mathbf y}_l &= \alpha_0 {\mathbf a}_{\textrm{R}} (\theta_0) {\mathbf a}_{\textrm{T}}^\top (\theta_0) {\mathbf x}_l +\sum\limits_{q=1}^Q \alpha_q {\mathbf a}_{\textrm{R}} (\theta_q) {\mathbf a} _{\textrm{T}}^\top(\theta_q) {\mathbf x}_l + {\mathbf n}_l, \end{align} where $\alpha_0, \alpha_1, \cdots, \alpha_Q$ are the amplitudes of the target and the $Q$ interference sources, ${\mathbf a}_{\textrm{T}} (\theta)$ and ${\mathbf a}_{\textrm{R}} (\theta)$ denote the transmit array steering vector and the receive array steering vector at $\theta$, ${\mathbf x}_l = [x_l(1), x_l(2), \cdots,$ $x_l({N_{\textrm{T}}})]^\top$, $x_l(n)$ denotes the code of the $l$th subpulse of the (baseband) waveforms in the $n$th transmitter $(n = 1,2, \cdots N_{\textrm{T}})$, and ${\mathbf n}_l$ is the receiver noise (we assume that it is white Gaussian).
Stacking ${\mathbf y}_1, {\mathbf y}_2,\cdots, {\mathbf y}_L$ in one column, we obtain \begin{align} \label{eq:RadarSig} {\mathbf y} = \alpha_0 {\mathbf A} (\theta_0) {\mathbf x} + \sum\limits_{q = 1}^Q \alpha_q {\mathbf A} (\theta_q) {\mathbf x} + {\mathbf n}, \end{align} where ${\mathbf y} = [{\mathbf y}_1^\top,{\mathbf y}_2^\top,\cdots,{\mathbf y}_L^\top]^\top\in {\mathbb{C}}^{LN_{\textrm{R}} \times1}$, ${\mathbf A} (\theta) = {\mathbf I}_L \otimes ({\mathbf a}_{\textrm{R}} (\theta) {\mathbf a}_{\textrm{T}}^\top (\theta))$, ${\mathbf x} = [{\mathbf x}_1^\top, {\mathbf x}_2^\top,\cdots,{\mathbf x}_L^\top]^\top\in {\mathbb{C}}^{LN_{\textrm{T}} \times1}$, and ${\mathbf n} = [{\mathbf n}_1^\top,{\mathbf n}_2^\top,\cdots,{\mathbf n}_L^\top]^\top\in {\mathbb{C}}^{LN_{\textrm{R}} \times1}$. To detect the target, we pass ${\mathbf y}$ through a finite impulse response filter, denoted by ${\mathbf w}$. The filter output $z$ can be written as \begin{align}\label{eq:FilterOutput} z &= {\mathbf w}^\dagger {\mathbf y} \nonumber\\ &= \underbrace{\alpha_0 {\mathbf w}^\dagger {\mathbf A} (\theta_0) {\mathbf x}}_{\textrm{Target}} + \underbrace{{\mathbf w}^\dagger \sum\limits_{q = 1}^Q \alpha_q {\mathbf A} (\theta_q) {\mathbf x}}_{\textrm{Interference}} + \underbrace{{\mathbf w}^\dagger {\mathbf n}}_{\textrm{Noise}}. \end{align} According to \eqref{eq:FilterOutput}, we define the output SINR as follows: \begin{align}\label{eq:SINR} \textrm{SINR}_{\rm{r}} ({\mathbf x},{\mathbf w}) = \frac{\sigma_0^2|{\mathbf w}^\dagger {\mathbf A} (\theta_0) {\mathbf x}|^2}{{\mathbf w}^\dagger \left[ \sum\nolimits_{q = 1}^Q \sigma_q^2 {\mathbf A} (\theta_q) {\mathbf x} {\mathbf x}^\dagger {\mathbf A}^\dagger (\theta_q) \right]{\mathbf w} + \sigma_{\textrm{n}}^2 {\mathbf w}^\dagger {\mathbf w}}, \end{align} where the subscript r stands for ``radar", $\sigma_0^2$ is the target power, $\sigma_q^2 = {\mathbb{E}}\{|\alpha_q|^2\}$ is the average power of the $q$th interference source, $q=1,2, \cdots, Q$, $\sigma _{\textrm{n}}^2$ is the noise power level, and we have assumed that the interference sources are uncorrelated. \subsection{Communication Model} The signals received by the $M$ users are given by \begin{equation}\label{eq:CommSig} \begin{split} {\mathbf Y} = \mathbf{HX} + {\mathbf Z}, \end{split} \end{equation} where ${\mathbf H}=[{\mathbf h}_1, {\mathbf h}_2, \cdots, {\mathbf h}_{M}]^\top \in {\mathbb{C}}^{M \times N_{\textrm{T}}}$ is the channel matrix; ${\mathbf X} = [{\mathbf x}_1, {\mathbf x}_2, \cdots, {\mathbf x}_L]\in {\mathbb{C}}^{ N_{\textrm{T}}\times L}$ denotes the transmit waveform matrix (${\mathbf x} = \textrm{vec}({\mathbf X})$), and ${\mathbf Z}$ is the noise matrix in the $M$ communication receivers. Let ${\mathbf s}_m\in {\mathbb{C}}^{L \times 1}$ denote the desired symbols for the $m$th user ($m=1,2,\cdots,M$), and let ${\mathbf S} = [{\mathbf s}_1, {\mathbf s}_2, \cdots, {\mathbf s}_M]^\top\in{\mathbb{C}}^{M \times L}$. Then we can rewrite \eqref{eq:CommSig} as \begin{equation} \label{eq:CommSig2} \begin{split} {\mathbf Y} = {\mathbf S}+\underbrace{\mathbf{HX}-{\mathbf S}}_{{\textrm{MUI}}}+ {\mathbf Z}, \end{split} \end{equation} where $\mathbf{HX}-{\mathbf S}$ stands for the multi-user interference (MUI).
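To make these quantities concrete, the following Python sketch (ours; the half-wavelength uniform linear array and all variable names are assumptions for illustration, not specifications from this paper) evaluates the output SINR in \eqref{eq:SINR} for a given waveform--filter pair and forms the MUI residual $\mathbf{HX}-\mathbf{S}$ in \eqref{eq:CommSig2}.
\begin{verbatim}
import numpy as np

def steering(N, theta):
    # Uniform linear array with half-wavelength spacing (an assumption).
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

def A(theta, L, Nt, Nr):
    # A(theta) = I_L kron (a_R(theta) a_T(theta)^T), size (L*Nr, L*Nt).
    return np.kron(np.eye(L), np.outer(steering(Nr, theta), steering(Nt, theta)))

def radar_sinr(x, w, theta0, thetas_q, sig0, sigs_q, sig_n, L, Nt, Nr):
    # Output SINR for waveform x (length L*Nt) and filter w (length L*Nr).
    num = sig0 * np.abs(w.conj() @ A(theta0, L, Nt, Nr) @ x) ** 2
    R = sig_n * np.eye(L * Nr, dtype=complex)
    for sq, thq in zip(sigs_q, thetas_q):
        v = A(thq, L, Nt, Nr) @ x
        R += sq * np.outer(v, v.conj())
    return num / np.real(w.conj() @ R @ w)

def mui(H, X, S):
    # Residual multi-user interference HX - S; row m corresponds to user m.
    return H @ X - S
\end{verbatim}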
In \cite{Larsson2013precoding}, it is shown that for a Gaussian broadcasting channel and a Gaussian input, the achievable rate of the $m$th communication user ($m=1,2,\cdots,M$) is given by \begin{equation}\label{eq:SumRate} \begin{split} \ell_m = \log_2 (1 + {\textrm{SINR}}_{m,\rm{c}}), \end{split} \end{equation} where ${\textrm{SINR}}_{m,\rm{c}}$ is the SINR of the $m$th communication receiver, defined by \begin{equation}\label{eq:SINRComm} \begin{split} {\textrm{SINR}}_{m,\rm{c}}= \frac{{\mathbb{E}}\{ |s_{m,l}|^2\} }{{\mathbb{E}}\{ |{\mathbf h}_m^\top {\mathbf x}_l - s_{m,l}|^2\} + \sigma_{z,m}^2}, \end{split} \end{equation} where the subscript c stands for ``communication'', $s_{m,l}$ is the $l$th symbol of ${\mathbf s}_m$ ($l=1,2,\cdots,L$), and $\sigma_{z,m}^2$ is the noise power level of the $m$th communication receiver. Note that ${\mathbf h}_m^\top {\mathbf x}_l - s_{m,l}$ is the $(m,l)$th element of $\mathbf{HX}-{\mathbf S}$. Thus, the minimization of \begin{equation}\label{eq:MUI} \psi_m({\mathbf X}) = \| {\mathbf h}_m^\top{\mathbf X} - {\mathbf s}_m^\top\|_2^2 =\| {\mathbf H}_m{\mathbf x} - {\mathbf s}_m\|_2^2\approx L{\mathbb{E}}\{ |{\mathbf h}_m^\top {\mathbf x}_l - s_{m,l}|^2\}, \end{equation} which is the MUI energy for the $m$th communication receiver, results in the maximization of the achievable rate of the $m$th user, where ${\mathbf H}_m={\mathbf I}_L\otimes{\mathbf h}_m^\top$. \subsection{Problem Formulation} To maximize the target detection performance and ensure the communication performance of each user, we formulate the waveform design problem as follows: \begin{align} \max_{{\mathbf x},{\mathbf w}} &\ \textrm{SINR}_{\rm{r}} ({\mathbf x},{\mathbf w}) \nonumber \\ \textrm{s.t.} &\ \psi_m({\mathbf X}) \leq \varsigma_m, m=1,2,\cdots,M, \nonumber \\ &\ {\mathbf x} \in \mathcal{X}, \end{align} where $\varsigma_m$ is the maximum allowed synthesis error of the $m$th user, and $\mathcal{X}$ denotes the constraint set on the waveforms. In practice, to make the radio frequency (RF) amplifier work at maximum efficiency and avoid nonlinear distortions, CM waveforms are often used. Thus, we enforce the CM constraint on the transmit waveforms, i.e., \begin{equation}\label{eq:CM} \begin{split} |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}, \end{split} \end{equation} where ${\mathbf x}(n)$ indicates the $n$th element of ${\mathbf x}$, $p_s = e_{\textrm{T}} /(LN_{\textrm{T}})$, and $e_{\textrm{T}}$ is the total transmit energy. By combining the results in \eqref{eq:SINR}, \eqref{eq:MUI}, and \eqref{eq:CM}, the joint design of transmit waveforms and receive filters for the DFRC system can be formulated as \begin{align} \label{eq:objective function} \max_{{\mathbf x},{\mathbf w}} &\ \frac{|{\mathbf w}^\dagger {\mathbf A} (\theta_0) {\mathbf x}|^2}{{\mathbf w}^\dagger \left[ \sum\nolimits_{q = 1}^Q \sigma_q^2 {\mathbf A} (\theta_q) {\mathbf x} {\mathbf x}^\dagger {\mathbf A}^\dagger (\theta_q) \right]{\mathbf w} + \sigma_{\textrm{n}}^2 {\mathbf w}^\dagger {\mathbf w}} \nonumber \\ \textrm{s.t.} &\ \| {\mathbf H}_m{\mathbf x} - {\mathbf s}_m\|_2^2 \leq \varsigma_m, m=1, 2,\cdots,M, \nonumber \\ &\ |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}. \end{align} \textit{Remark 1:} In the formulation of \eqref{eq:objective function}, we have assumed that the average power and the directions of the interference sources as well as the channel matrix are known \textit{a priori}.
Such an assumption is justified if estimates of these quantities are available from previous scans (see also \cite{Tang2020Polyphase,Tsinos2021DFRC} for similar assumptions). \textit{Remark 2:} We note that the formulated problem in \eqref{eq:objective function} is different from that in \cite{Tsinos2021DFRC}. The studies in \cite{Tsinos2021DFRC} controlled the total MUI energy indirectly by tuning the combination coefficient, while we constrain the MUI energy of each user directly to control their communication performance. More specifically, according to \eqref{eq:MUI}, if the per-user MUI energy constraint is satisfied, we approximately have \begin{align} {\mathbb{E}}\{ |{\mathbf h}_m^\top {\mathbf x}_l - s_{m,l}|^2\} \leq \varsigma_m/L, m=1,2,\cdots,M. \end{align} As a result, the achievable information rate for the $m$th user satisfies \begin{align} \ell_m \geq \log_2 \left(1 + \frac{p_m}{\varsigma_m/L+\sigma_{z,m}^2}\right), \end{align} where $p_m = {\mathbb{E}}\{ |s_{m,l}|^2\}$ is the average power of the $m$th communication signals, $m=1,2,\cdots,M$. On the other hand, if only the total MUI energy constraint is enforced, the initialization of the transmit waveforms will affect the communication performance of each user (i.e., the communication performance of each user cannot be guaranteed). \section{Algorithm Design}\label{Sec:JointDesign} Due to the CM constraint, it is evident that the fractional programming problem in \eqref{eq:objective function} is non-convex. In this section, a cyclic optimization method is used to deal with this non-convex problem. Specifically, at the $(k+1)$th iteration, we optimize ${\mathbf w}^{(k+1)}$ for fixed ${\mathbf x}^{(k)}$; then we optimize ${\mathbf x}^{(k+1)}$ for fixed ${\mathbf w}^{(k+1)}$. Next we present solutions to the two optimization problems. To lighten the notation, we omit the superscripts when doing so does not cause confusion. \subsection{Optimization of ${\mathbf w}^{(k+1)}$ for fixed ${\mathbf x}^{(k)}$} The corresponding optimization problem is given by \begin{align} \label{eq:ProblemFixedS} \max_{{\mathbf w}} &\ \frac{|{\mathbf w}^\dagger {\mathbf A} (\theta_0) {\mathbf x}|^2}{{\mathbf w}^\dagger \left[ \sum\nolimits_{q = 1}^Q \sigma_q^2 {\mathbf A} (\theta_q) {\mathbf x} {\mathbf x}^\dagger {\mathbf A}^\dagger (\theta_q) \right]{\mathbf w} + \sigma_{\textrm{n}}^2 {\mathbf w}^\dagger {\mathbf w}}. \end{align} It can be checked that the solution to \eqref{eq:ProblemFixedS} is given by \begin{equation}\label{eq:slove w} \begin{split} {\mathbf w}= \gamma {\mathbf R}_x^{-1} {\mathbf A} (\theta_0) {\mathbf x}, \end{split} \end{equation} where $\gamma \neq 0$ is an arbitrary constant, and \begin{equation}\label{eq:transform 1} \begin{split} {\mathbf R}_x = \sum\limits_{q = 1}^Q \sigma_q^2 {\mathbf A} (\theta_q) {\mathbf x} {\mathbf x}^\dagger {\mathbf A}^\dagger (\theta_q) + \sigma_{\textrm n}^2 {\mathbf I}_{LN_{\textrm{R}}}. \end{split} \end{equation} \subsection{Optimization of ${\mathbf x}^{(k+1)}$ for fixed ${\mathbf w}^{(k+1)}$} Define \begin{equation}\label{eq:R0} {\mathbf R}_0= {\mathbf A}^\dagger (\theta_0) {\mathbf w}\bw^\dagger {\mathbf A} (\theta_0) \end{equation} and \begin{equation} \label{eq:R1} {\mathbf R}_1 = \sum\limits_{q = 1}^Q \sigma_q^2 {\mathbf A}^\dagger (\theta_q) {\mathbf w} {\mathbf w}^\dagger {\mathbf A} (\theta_q) + \sigma_{\textrm n}^2 {\mathbf w}^\dagger {\mathbf w} /e_{\textrm{T}}\cdot {\mathbf I}_{LN_{\textrm{T}}}.
\end{equation} Then the waveform design problem for fixed ${\mathbf w}^{(k+1)}$ can be written as \begin{align} \label{eq:ProblemFixedW} \max_{{\mathbf x}} &\ \frac{{\mathbf x}^\dagger {\mathbf R}_0 {\mathbf x} }{{\mathbf x}^\dagger {\mathbf R}_1 {\mathbf x}} \nonumber \\ \textrm{s.t.} &\ \| {\mathbf H}_m{\mathbf x} - {\mathbf s}_m\|_2^2 \leq \varsigma_m, m=1,2,\cdots,M, \nonumber \\ &\ |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}. \end{align} Next we resort to Dinkelbach's transform \cite{1967Dinkelbach} to deal with the optimization problem in \eqref{eq:ProblemFixedW}. To this end, we use $g({\mathbf x}^{(k,l)})$ to indicate the objective value of \eqref{eq:ProblemFixedW} at the $(k,l)$th iteration (Here we use the superscript $k$ to indicate the outer iteration for cyclic optimization, and $l$ to denote the inner iteration for Dinkelbach's transform). By using Dinkelbach's transform, the optimization problem at the $(k,l+1)$th iteration is formulated as \begin{align} \label{eq:DinT} \max_{{\mathbf x}} &\ {\mathbf x}^\dagger \widetilde{{\mathbf T}} {\mathbf x} \nonumber \\ \textrm{s.t.} &\ \| {\mathbf H}_m{\mathbf x} - {\mathbf s}_m\|_2^2 \leq \varsigma_m, m=1,2, \cdots,M, \nonumber \\ &\ |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}, \end{align} where $\widetilde{{\mathbf T}}= {\mathbf R}_0 - g({\mathbf x}^{(k,l)}){\mathbf R}_1$. To proceed, we define \begin{equation} \label{eq:T} {\mathbf T}= \widetilde{{\mathbf T}} - \beta {\mathbf I}_{LN_{\textrm{T}}}, \end{equation}\label{eq:proof T}where $\beta \leq \lambda_{\textrm{min}}(\widetilde{{\mathbf T}})$, and $\lambda_{\textrm{min}}(\widetilde{{\mathbf T}})$ is the smallest eigenvalue of $\widetilde{{\mathbf T}}$. It is evident that ${\mathbf T}$ is positive semidefinite such that its square root exists. In addition, \begin{equation} {\mathbf x}^\dagger {{\mathbf T}}{\mathbf x} = {\mathbf x}^\dagger \widetilde{{\mathbf T}}{\mathbf x} - \beta e_{\textrm{T}}. \end{equation} Therefore, the optimization problem in \eqref{eq:DinT} is equivalent to \begin{align} \label{eq:DinT2} \max_{{\mathbf x}} &\ {\mathbf x}^\dagger {{\mathbf T}} {\mathbf x} \nonumber \\ \textrm{s.t.} &\ \| {\mathbf H}_m{\mathbf x} - {\mathbf s}_m\|_2^2 \leq \varsigma_m, m=1,2,\cdots,M, \nonumber \\ &\ |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}. \end{align} Next we apply ADMM (we refer to \cite{Boyd2010ADAMM} for a comprehensive survey of ADMM) to tackle \eqref{eq:DinT2}. By using the variable splitting trick and introducing auxiliary variables ${{\hat{\bx}}}$ and ${{\widetilde \bx}}_m$ ($m=1,\cdots,M$), the optimization problem in \eqref{eq:DinT2} is recast by \begin{subequations}\label{eq:objective function ADMM} \begin{align} \max_{{\mathbf x},{{\hat{\bx}}},{{\widetilde \bx}}} &\ {{\hat{\bx}}}^\dagger {{\hat{\bx}}} \\ \textrm{s.t.} &\ {{\hat{\bx}}} = {\mathbf T}^{1/2} {\mathbf x}, \label{eq:constraintA}\\ &\ \| {{\widetilde \bx}}_m\|_2^2 \leq \varsigma_m, {{\widetilde \bx}}_m = {\mathbf H}_m{\mathbf x} - {\mathbf s}_m, m=1,2, \cdots,M, \label{eq:constraintB}\\ &\ |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}. 
\label{eq:constraintC} \end{align} \end{subequations} The augmented Lagrange function of \eqref{eq:objective function ADMM} is written as \begin{align}\label{eq:Lagrange function} \mathop{{L_\mu }({\mathbf x},{{\hat{\bx}}},\{{{\widetilde \bx}}_m\},\boldsymbol{\nu},\{\boldsymbol{\upsilon}_m\})} &= - {{\hat{\bx}}}^\dagger {{\hat{\bx}}}\nonumber\\ &+ \frac{\mu}{2}\left[ {\| {{\hat{\bx}}} -{\mathbf T}^{1/2}{\mathbf x} + \boldsymbol{\nu} \|_2^2 - \left\| \boldsymbol{\nu} \right\|_2^2} \right]\nonumber\\ &+ \frac{\mu}{2}\left\{\sum\limits_{m = 1}^M \Big{[} {\| {{\widetilde \bx}}_m - {\mathbf H}_m {\mathbf x} + {\mathbf s}_m + \boldsymbol{\upsilon}_m \|_2^2 - \| \boldsymbol{\upsilon}_m \|_2^2} \Big{]} \right\}, \end{align} where ${\mu}$ is the penalty parameter; $\boldsymbol{\nu}$ and $\boldsymbol{\upsilon}_m (m=1,\cdots,M)$ are the Lagrange multipliers associated with the constraints in \eqref{eq:constraintA} and \eqref{eq:constraintB}, respectively. In the $(t+1)$th iteration of the ADMM algorithm, we carry out the following steps sequentially: \begin{equation}\label{eq:ADMM1} {\mathbf x}^{{(t+1)}} = \mathop {\rm{argmin}} \limits_{{\mathbf x} \in \mathcal{X}} L_\mu ({\mathbf x}, {{\hat{\bx}}}^{(t)}, \{{{\widetilde \bx}}_m^{(t)}\}, \boldsymbol{\nu}^{(t)}, \{\boldsymbol{\upsilon}_m^{(t)}\}), \end{equation} \begin{equation}\label{eq:ADMM2} {{\hat{\bx}}}^{{(t+1)}} = \mathop {\rm{argmin}} \limits_{{{\hat{\bx}}}} L_\mu ({\mathbf x}^{{(t+1)}}, {{\hat{\bx}}}, \{{{\widetilde \bx}}_m^{(t)}\}, \boldsymbol{\nu}^{(t)}, \{\boldsymbol{\upsilon}_m^{(t)}\}), \end{equation} \begin{equation}\label{eq:ADMM3} {{\widetilde \bx}}_m^{{(t+1)}} = \mathop {\rm{argmin}} \limits_{{{\widetilde \bx}}_m} L_\mu ({\mathbf x}^{{(t+1)}}, {{\hat{\bx}}}^{{(t+1)}}, \{{{\widetilde \bx}}_m\}, \boldsymbol{\nu}^{(t)}, \{\boldsymbol{\upsilon}_m^{(t)}\}), \end{equation} \begin{equation}\label{eq:ADMM4} \boldsymbol{\nu}^{{(t+1)}} = \boldsymbol{\nu}^{(t)} + {{\hat{\bx}}}^{{(t+1)}} - {\mathbf T}^{1/2} {\mathbf x}^{{(t+1)}}, \end{equation} \begin{equation}\label{eq:ADMM5} \boldsymbol{\upsilon}_m^{{(t+1)}} = \boldsymbol{\upsilon}_m^{(t)} + {{\widetilde \bx}}_m^{{(t+1)}} - {\mathbf H}_m {\mathbf x}^{{(t+1)}} + {\mathbf s}_m. \end{equation} Next we derive the solutions to \eqref{eq:ADMM1}, \eqref{eq:ADMM2}, and \eqref{eq:ADMM3}. \subsubsection{Solution to \eqref{eq:ADMM1}\rm} The optimization problem in \eqref{eq:ADMM1} can be recast as \begin{align}\label{eq:Update1} \min_{{\mathbf x}}&\ \| {{\hat{\bx}}} - {\mathbf T}^{1/2}{\mathbf x} + \boldsymbol{\nu} \|_2^2 + \sum\limits_{m = 1}^M\| {{\widetilde \bx}}_m - {\mathbf H}_m {\mathbf x} + {\mathbf s}_m + \boldsymbol{\upsilon}_m \|_2^2 \nonumber \\ \textrm{s.t.} &\ |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}. \end{align} Let \begin{equation}\label{eq:B} {\mathbf B} = {\mathbf T} + \sum\limits_{m = 1}^M{\mathbf H}_m^\dagger {\mathbf H}_m, \end{equation} and \begin{equation}\label{eq:b} {\mathbf b} = {\mathbf T}^{1/2}({{\hat{\bx}}}+\boldsymbol{\nu})+\sum\limits_{m = 1}^M{\mathbf H}_m^\dagger({{\widetilde \bx}}_m + {\mathbf s}_m + \boldsymbol{\upsilon}_m). \end{equation} Then, we can rewrite the optimization problem in \eqref{eq:Update1} as \begin{align}\label{eq:ADMM1 transform} \mathop {\min}\limits_{\mathbf x} &\ {\mathbf x}^\dagger {\mathbf B} {\mathbf x} - 2{\mathop{\textrm{Re}}\nolimits} ({\mathbf b}^\dagger {\mathbf x}) \nonumber \\ \textrm{s.t.} &\ |{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}. 
\end{align} Note that \eqref{eq:ADMM1 transform} is a standard unimodular quadratic programming (UQP) problem. A number of algorithms have been proposed to tackle the UQP problem, including the power-method like (PML) iterations \cite{Soltanalian2013Joint,Soltanalian2014Optimization}, the coordinate descent method (CDM) \cite{Cui2017Quadratic,Tsinos2022CM}, the gradient projection (GP) method \cite{Tranter2017CM}, and the majorization-minimization (MM) method \cite{Tang2019Efficient,Tang2021Information,Tang2021Profiling,Tsinos2022CM,Palomar2018CM}. We find that these methods have similar performance but the MM method is usually the fastest. Therefore, we use the MM method to tackle the problem in \eqref{eq:ADMM1 transform}. \subsubsection{Solution to \eqref{eq:ADMM2}\rm} The optimization problem in \eqref{eq:ADMM2} is formulated by \begin{equation}\label{eq:Update2} \min_{{{\hat{\bx}}}} - {{\hat{\bx}}}^\dagger {{\hat{\bx}}} + \frac{\mu}{2}\| {{\hat{\bx}}} -{\mathbf T}^{1/2} {\mathbf x} + \boldsymbol{\nu} \|_2^2 . \end{equation} Define \begin{equation}\label{eq:q} {\mathbf q} = {\mathbf T}^{1/2} {\mathbf x} -\boldsymbol{\nu}. \end{equation} Then, we can recast \eqref{eq:Update2} as \begin{equation}\label{eq:ADMM2 transform} \mathop {\min}\limits_{{\hat{\bx}}} \frac{\mu}{2}\left\| {{\hat{\bx}}} - {\mathbf q} \right\|_2^2 - {{\hat{\bx}}}^\dagger {{\hat{\bx}}}, \end{equation} Assume that $\mu >2$. Then the quadratic optimization problem in \eqref{eq:ADMM2 transform} is convex. Taking the derivative of \eqref{eq:ADMM2 transform} with respect to ${{\hat{\bx}}}$ and setting it equal to zero, we can acquire the solution to \eqref{eq:ADMM2 transform}: \begin{equation}\label{eq:solve ADMM2 transform} {{\hat{\bx}}}= \frac{\mu}{\mu-2} {\mathbf q}. \end{equation} \subsubsection{Solution to \eqref{eq:ADMM3}\rm} Define \begin{equation}\label{eq:p} {\mathbf p}_m={\mathbf H}_m {\mathbf x}- \boldsymbol{\upsilon}_m - {\mathbf s}_m. \end{equation} Then, we can rewrite \eqref{eq:ADMM3} as \begin{align} \label{eq:ADMM3 transform} \min_{{{\widetilde \bx}}_m} &\ \left\| {{\widetilde \bx}}_m- {\mathbf p}_m \right\|_2^2 \nonumber \\ \textrm{s.t.} &\ \| {{\widetilde \bx}}_m\|_2^2 \leq \varsigma_m. \end{align} It can be verified that the solution to \eqref{eq:ADMM3 transform} is \begin{align}\label{eq:slove ADMM3 transform} {{\widetilde \bx}}_m = \begin{cases} {\mathbf p}_m, & \|{\mathbf p}_m\|_2^2 \le \varsigma_m, \\ \sqrt{\varsigma_m} {\mathbf p}_m/\left\| {\mathbf p}_m \right\|, & \|{\mathbf p}_m\|_2^2 > \varsigma_m. 
\end{cases} \end{align} We summarize the proposed ADMM algorithm in Algorithm \ref{alg1}, where we stop the proposed ADMM method when the norm of the primal residual $\|{\mathbf r}_m^{(t)}\|_2 \leq \epsilon^{\textrm{primal}}$ ($m = 1,2,\cdots,M+1$) and the norm of the dual residual $\|{\mathbf d}_m^{(t)}\|_2 \leq \epsilon^{\textrm{dual}}$ ($m = 1,2,\cdots,M+2$), where \begin{align}\label{eq:primal residual} {\mathbf r}_m^{(t)}= \begin{cases} {\mathbf H}_m {\mathbf x}^{(t)} - {\mathbf s}_m - {{\widetilde \bx}}_m^{(t)}, &1\leq m \leq M,\\ {\mathbf T}^{1/2} {\mathbf x}^{(t)} - {{\hat{\bx}}}^{(t)}, &m = M+1, \end{cases} \end{align} \begin{align}\label{eq:Dual residuals} {\mathbf d}_m^{(t)} = \begin{cases} {{\widetilde \bx}}_m^{(t)} - {{\widetilde \bx}}_m^{(t-1)}, & 1\leq m \leq M,\\ {{\hat{\bx}}}^{(t)} - {{\hat{\bx}}}^{(t-1)}, &m = M+1, \\ {\mathbf x}^{(t)} - {\mathbf x}^{(t-1)}, &m = M+2, \end{cases} \end{align} ${\epsilon^{\textrm{primal}}} > 0$ and ${\epsilon^{\textrm{dual}}} > 0$ are the feasible tolerances of the primal and dual conditions, respectively. \begin{algorithm}[!htbp] \renewcommand{\arraystretch}{1.3} \caption{ \small ADMM algorithm for the problem in \eqref{eq:DinT2}.}\label{alg1} \KwIn{${\mathbf T}, p_s,\{{\mathbf H}_m, {\mathbf s}_m,\varsigma_m\}_{m=1}^M$.} \KwOut{${\mathbf x}^{(k,l+1)}$.} \textbf{Initialize:} \\ Compute ${\mathbf B}$ by \eqref{eq:B}. \\ $t=0,{\mathbf x}^{(t)}={\mathbf x}^{(k,l)}$, $\boldsymbol{\nu}^{(t)} = {\mathbf{0}}$, $\boldsymbol{\upsilon}_m^{(t)} = {\mathbf{0}}$.\\ Compute ${{\hat{\bx}}}^{(t)}$, $\{{{\widetilde \bx}}_m^{(t)}\}_{m=1}^M$. \\ \Repeat{convergence}{ ${\mathbf b}^{(t)} = {\mathbf T}^{1/2}({{\hat{\bx}}}^{(t)}+\boldsymbol{\nu}^{(t)})+\sum\nolimits_{m = 1}^M{\mathbf H}_m^\dagger({{\widetilde \bx}}_m^{(t)} + {\mathbf s}_m + \boldsymbol{\upsilon}_m^{(t)}).$\\ Update ${\mathbf x}^{{(t+1)}}$ through solving \eqref{eq:ADMM1 transform} by MM.\\ ${\mathbf q}^{(t)} = {\mathbf T}^{1/2} {\mathbf x}^{{(t+1)}} -\boldsymbol{\nu}^{{(t)}}$.\\ ${{\hat{\bx}}}^{(t+1)} = \dfrac{\mu}{\mu-2} {\mathbf q}^{(t)} $. \\ ${\mathbf p}_m^{(t)} ={\mathbf H}_m {\mathbf x}^{(t+1)} - \boldsymbol{\upsilon}_m^{(t)} - {\mathbf s}_m$. \\ ${{\widetilde \bx}}_m^{(t+1)} = \min(\sqrt{\varsigma_m} /\| {\mathbf p}_m^{(t)} \|,1)\cdot {\mathbf p}_m^{(t)} $.\\ $\boldsymbol{\nu}^{{(t+1)}} = \boldsymbol{\nu}^{(t)} + {{\hat{\bx}}}^{{(t+1)}} - {\mathbf T}^{1/2} {\mathbf x}^{{(t+1)}}$. \\ $\boldsymbol{\upsilon}_m^{{(t+1)}} = \boldsymbol{\upsilon}_m^{(t)} + {{\widetilde \bx}}_m^{{(t+1)}} - {\mathbf H}_m {\mathbf x}^{{(t+1)}} + {\mathbf s}_m.$ \\ $t = t+1$.\\ } ${\mathbf x}^{(k,l+1)}={\mathbf x}^{(t)}$. \end{algorithm} \subsection{Algorithm Summary and Computational Complexity Analysis} We summarize the proposed constant-modulus waveform design algorithm for the DFRC systems in Algorithm \ref{alg2}, where we terminate the algorithm if \begin{equation}\label{eq:Stop Criterion} \frac{|{\textrm{SINR}^{(k)}_{\rm{r}}}-{\textrm{SINR}^{(k-1)}_{\rm{r}}}|}{\textrm{SINR}^{(k-1)}_{\rm{r}}} < \vartheta_{\textrm{O}}, \end{equation} and we terminate the inner loop (for Dinkelbach's transform) if \begin{equation}\label{eq:inner loop Criterion} \frac{|g({\mathbf x}^{(k,l)})-g({\mathbf x}^{(k,l-1)})|}{g({\mathbf x}^{(k,l-1)})}< \vartheta_{\textrm{I}}, \end{equation} where $\vartheta_{\textrm{O}}$ and $\vartheta_{\textrm{I}}$ are predefined small values (e.g., $10^{-5}$). 
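For readers who prefer executable pseudocode, here is a compact NumPy rendering of Algorithm \ref{alg1}. It is only a sketch under simplifying assumptions: the residual-based stopping rules above are replaced by fixed iteration counts, and the unimodular quadratic subproblem \eqref{eq:ADMM1 transform} is handled by a plain MM loop with the $\lambda_{\max}({\mathbf B}){\mathbf I}$ majorizer rather than the accelerated variants of the cited references; all function and variable names are our own.
\begin{verbatim}
import numpy as np

def sqrtm_psd(M):
    # Square root of a Hermitian positive semidefinite matrix via eigendecomposition.
    w, V = np.linalg.eigh((M + M.conj().T) / 2)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def mm_uqp(B, b, x0, ps, n_iter=200):
    # MM iterations for min_x x^H B x - 2 Re(b^H x) s.t. |x(n)| = sqrt(ps):
    # majorize x^H B x with lambda_max(B) I and update the phases in closed form.
    lam = np.max(np.linalg.eigvalsh((B + B.conj().T) / 2))
    x = x0.copy()
    for _ in range(n_iter):
        c = (lam * x - B @ x) + b
        x = np.sqrt(ps) * np.exp(1j * np.angle(c))
    return x

def admm_inner(T, Hs, ss, varsigmas, ps, x_init, mu=4.0, n_iter=300):
    # Sketch of Algorithm 1: ADMM for the problem in eq. (DinT2); requires mu > 2.
    T12 = sqrtm_psd(T)
    B = T + sum(H.conj().T @ H for H in Hs)
    x = x_init.copy()
    nu = np.zeros(T.shape[0], dtype=complex)             # multiplier for hat_x
    ups = [np.zeros_like(s, dtype=complex) for s in ss]   # multipliers for tilde_x_m
    hat_x = T12 @ x
    til = [H @ x - s for H, s in zip(Hs, ss)]             # simple initialization
    for _ in range(n_iter):
        b = T12 @ (hat_x + nu) + sum(H.conj().T @ (t + s + u)
                                     for H, s, t, u in zip(Hs, ss, til, ups))
        x = mm_uqp(B, b, x, ps)                            # eq. (ADMM1 transform)
        q = T12 @ x - nu
        hat_x = mu / (mu - 2.0) * q                        # eq. (solve ADMM2 transform)
        til = []
        for H, s, u, vs in zip(Hs, ss, ups, varsigmas):
            p = H @ x - u - s                              # projection, eq. (ADMM3)
            til.append(min(np.sqrt(vs) / (np.linalg.norm(p) + 1e-15), 1.0) * p)
        nu = nu + hat_x - T12 @ x                          # dual updates
        ups = [u + t - H @ x + s for u, t, H, s in zip(ups, til, Hs, ss)]
    return x
\end{verbatim}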
\begin{algorithm}[!htbp] \label{alg2} \caption{\small Joint Design of CM Waveforms and Receive Filters for DFRC Systems} \KwIn{$\left\{\theta_q, \sigma_q^2 \right\}_{q = 1}^Q, \theta_0, \alpha_0^2, \{{\mathbf s}_m,\varsigma_m\}_{m=1}^M, {\mathbf H}, p_s$} \KwOut{${\mathbf x}$, ${\mathbf w}$} \textbf{Initialize:} \\ $\left\{{\mathbf H}_m = {\mathbf I}_L \otimes {\mathbf h}_m^{\top}\right\}_{m = 1}^M$, ${\mathbf A}(\theta) = {\mathbf I}_L \otimes ({\mathbf a}_{\textrm{R}}(\theta){\mathbf a}_{\textrm{T}}^\top(\theta))$.\\ $k=0, {\mathbf x}^{(0)}$.\\ \Repeat{convergence}{ $\textit {// Optimization of}$ ${\mathbf w}$.\\ Compute ${\mathbf R}_x^{(k)}$ by \eqref{eq:transform 1}.\\ Update ${\mathbf w}^{(k+1)}$ by \eqref{eq:slove w}.\\ $\textit {// Optimization of}$ ${\mathbf x}$.\\ Compute ${\mathbf R}_0^{(k)}$ by \eqref{eq:R0}.\\ Compute ${\mathbf R}_1^{(k)}$ by \eqref{eq:R1}.\\ $l=0, {\mathbf x}^{(k,l)}={\mathbf x}^{(k)}$.\\ \Repeat{convergence}{ Compute $g({\mathbf x}^{(k,l)})= \dfrac{({\mathbf x}^{(k,l)})^\dagger {\mathbf R}_0^{(k)} {\mathbf x}^{(k,l)} }{({\mathbf x}^{(k,l)})^\dagger {\mathbf R}_1^{(k)} {\mathbf x}^{(k,l)}}$.\\ Compute ${{\widetilde \bT}}^{(k,l)} = {\mathbf R}_0^{(k)}-g({\mathbf x}^{(k,l)}){\mathbf R}_1^{(k)}$.\\ Compute ${\mathbf T}^{(k,l)}= \widetilde{{\mathbf T}}^{(k,l)} - \beta {\mathbf I}_{LN_{\textrm{T}}}$.\\ Compute ${\mathbf x}^{(k,l+1)}$ by Algorithm \ref{alg1}.\\ $l=l+1$.\\ } ${\mathbf x}^{(k+1)}={\mathbf x}^{(k,l)}$.\\ $k=k+1$.\\ } ${\mathbf x}={\mathbf x}^{(k)},{\mathbf w}={\mathbf w}^{(k)}$. \end{algorithm} The computational complexity of Algorithm \ref{alg2} is determined by the number of outer iterations and the complexity at each outer iteration. We present the computational complexity for each outer iteration in Table \ref{tab:Complexity}, where $N_{\textrm{D}}$ and $N_{\textrm{A}}$ denote the number of iterations for Dinkelbach's transform and the proposed ADMM algorithm to reach convergence, respectively. \textit{Remark 3:} Due to the constant-modulus constraint and the fractional objective function, the optimization problem in \eqref{eq:objective function} is non-convex and difficult to tackle. To synthesize the constant-modulus waveforms, we derive Algorithm \ref{alg2}, which is a nested optimization algorithm based on cyclic optimization, Dinkelbach's transform, and ADMM. Essentially, cyclic optimization and the method based on Dinkelbach's transform are ascent algorithms. However, the convergence property of the proposed ADMM algorithm, which deals with a non-convex optimization problem, remains unknown (for some recent progress on this problem, we refer to \cite{Hong2016Convergence}). Therefore, it is non-trivial to prove the convergence property of Algorithm \ref{alg2}. Fortunately, we have not encountered any convergence problems in our extensive numerical studies.
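The nesting of the loops in Algorithm \ref{alg2} can likewise be made explicit in code. The sketch below reuses \texttt{stacked\_A}, \texttt{sqrtm\_psd}, and \texttt{admm\_inner} from the earlier snippets and, as before, replaces the stopping rules \eqref{eq:Stop Criterion} and \eqref{eq:inner loop Criterion} with fixed iteration counts; it is meant only as an illustration of the control flow, not as the exact implementation used for the experiments below.
\begin{verbatim}
def joint_design(x0, theta0, thetas_q, sigmas_q_sq, sigma_n_sq,
                 Hs, ss, varsigmas, Nt, Nr, L, eT,
                 n_outer=30, n_dinkelbach=10):
    # Sketch of Algorithm 2: cyclic optimization of the waveform x and the filter w.
    ps = eT / (L * Nt)
    A0 = stacked_A(theta0, Nt, Nr, L)
    Aq = [stacked_A(th, Nt, Nr, L) for th in thetas_q]
    x = x0.copy()
    for _ in range(n_outer):
        # w-update, eq. (slove w) with gamma = 1: w = R_x^{-1} A(theta0) x.
        Rx = sigma_n_sq * np.eye(L * Nr, dtype=complex)
        for sq, A in zip(sigmas_q_sq, Aq):
            v = A @ x
            Rx = Rx + sq * np.outer(v, v.conj())
        w = np.linalg.solve(Rx, A0 @ x)
        # x-update: Dinkelbach inner loop, eqs. (R0), (R1), (DinT), (T).
        R0 = A0.conj().T @ np.outer(w, w.conj()) @ A0
        R1 = sigma_n_sq * np.real(np.vdot(w, w)) / eT * np.eye(L * Nt, dtype=complex)
        for sq, A in zip(sigmas_q_sq, Aq):
            u = A.conj().T @ w
            R1 = R1 + sq * np.outer(u, u.conj())
        for _ in range(n_dinkelbach):
            g = np.real(np.vdot(x, R0 @ x)) / np.real(np.vdot(x, R1 @ x))
            T_tilde = R0 - g * R1
            beta = np.min(np.linalg.eigvalsh((T_tilde + T_tilde.conj().T) / 2))
            T = T_tilde - beta * np.eye(L * Nt)
            x = admm_inner(T, Hs, ss, varsigmas, ps, x)
    return x, w
\end{verbatim}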
\begin{table}[!htbp] \caption{{{Computational Complexity Analysis}}} \renewcommand\arraystretch{1.2} \centering \begin{tabular}{l|ll} \toprule \multicolumn{2}{c}{Computation} & {Complexity} \\ \cline{1-3} \multirow{3}{*}{Update ${\mathbf w}$} & ${\mathbf R}_x$ & $O(L^2 N_{\textrm{R}} N_{\textrm{T}})$ \\ & ${\mathbf R}_x^{-1}$ & $O(L^3 N_{\textrm{R}}^3)$\\ & ${\mathbf w}$ & $O(L^2 N_{\textrm{R}} N_{\textrm{T}})$\\ \cline{1-3} \multirow{3}{*}{Update ${\mathbf x}$} & ${\mathbf R}_0, {\mathbf R}_1$ & $O(L^2 N_{\textrm{R}} N_{\textrm{T}})$ \\ & $g({\mathbf x})$ & $O(N_{\textrm{D}}L^2 N_{\textrm{T}}^2)$ \\ & Compute ${\mathbf x}^{(k,l+1)}$ by Algorithm 1 & $O(N_{\textrm{D}}N_{\textrm{A}}(2M+3)L^2 N_{\textrm{T}}^2)$ \\ \cline{1-3} Total & \multicolumn{2}{c}{$O(L^3 N_{\textrm{R}}^3+N_{\textrm{D}}N_{\textrm{A}}(2M+3)L^2 N_{\textrm{T}}^2)$} \\ \cline{1-3} \end{tabular} \label{tab:Complexity} \end{table} \renewcommand\arraystretch{1} \subsection{Algorithm Extension} We note that the proposed algorithm can be extended to deal with the peak-to-average-power-ratio (PAPR) constraint. To this end, we only need to replace the optimization problem in \eqref{eq:ADMM1 transform} by the following: \begin{align}\label{eq:ADMM_PAPR} \mathop {\min}\limits_{\mathbf x} &\ {\mathbf x}^\dagger {\mathbf B} {\mathbf x} - 2{\mathop{\textrm{Re}}\nolimits} ({\mathbf b}^\dagger {\mathbf x}) \nonumber \\ \textrm{s.t.} &\ {\mathbf x}(n)^\dagger{\mathbf x}(n) = {e_{\textrm{T}}}/N_\textrm{T}, \textrm{PAPR}({\mathbf x}(n)) \le \rho, n=1,2,\cdots,N_\textrm{T}, \end{align} where, with a slight abuse of notation, ${\mathbf x}(n)\in{\mathbb{C}}^{L\times 1}$ here denotes the waveform transmitted by the $n$th antenna (i.e., the $n$th row of ${\mathbf X}$), $1 \le \rho \le L$, \begin{align}\label{eq:PAPR} \textrm{PAPR}({\mathbf x}(n)) = \dfrac{\max_l |x_l(n)|^2}{{\dfrac{1}{L}}\sum\nolimits_{l = 1}^{L} |x_l(n)|^2}, \end{align} and we have assumed that the transmit energy is uniform across the antenna elements. Similarly, we can use the MM method to tackle the optimization problem in \eqref{eq:ADMM_PAPR} \cite{Tang2021Information}. In addition to the PAPR constraint, we can also extend the proposed algorithm to design waveforms under a similarity constraint \cite{Li2006SPL,Maio2008TSP} as well as both similarity and constant-modulus constraints. However, we skip the details due to space limitations. \section{Numerical Results}\label{Sec:NumericalResults} In this section, numerical examples are provided to demonstrate the performance of the proposed algorithm. The DFRC system under consideration has $N_{\textrm{T}}=16$ transmit antennas and $N_{\textrm{R}}=8$ receive antennas. Both the transmit and the receive antenna arrays are uniform linear arrays with inter-element spacing $\lambda/2$ ($\lambda$ is the wavelength). The target is at the direction of $\theta_0 = 20^\circ$ and has a power of $0$ dB. We assume that $Q=4$ interference sources are present, with the directions and powers being $\left\{-40^\circ, -20^\circ, 40^\circ, 50^\circ \right\}$ and $\sigma_q^2 = 30$ dB ($q = 1,2,3,4$), respectively. The noise power level in the radar receiver is $0$ dB. The available transmit energy is ${e_{\textrm{T}}} = 20$. The elements of the channel matrix ${\mathbf H}$ are independent and identically distributed, obeying a Gaussian distribution with zero mean and variance of $1$ (i.e., we assume a flat fading channel).
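For concreteness, this simulation setup can be instantiated in the notation of the earlier code sketches roughly as follows; the random seed, the particular rectangular 8QAM constellation, and the symbol scaling are our own assumptions and are only meant to illustrate the parameter choices listed above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, L, M, Q = 16, 8, 20, 2, 4
eT = 20.0
ps = eT / (L * Nt)
theta0 = 20.0                                  # target direction (deg), 0 dB power
thetas_q = [-40.0, -20.0, 40.0, 50.0]          # interference directions (deg)
sigmas_q_sq = [10.0 ** (30 / 10)] * Q          # 30 dB interference power
sigma_n_sq = 1.0                               # 0 dB receiver noise power

# flat Rayleigh fading channel: i.i.d. CN(0, 1) entries
H = (rng.standard_normal((M, Nt)) + 1j * rng.standard_normal((M, Nt))) / np.sqrt(2)
Hs = [np.kron(np.eye(L), H[m].reshape(1, -1)) for m in range(M)]   # H_m = I_L kron h_m^T

# desired symbols: QPSK for user 1 and a rectangular 8QAM for user 2 (unit average
# power), scaled so that ||s_m||^2 = e_m with e_1 = e_2 = 20
e1 = e2 = 20.0
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, L)))
qam8 = (rng.choice([-3.0, -1.0, 1.0, 3.0], size=L)
        + 1j * rng.choice([-1.0, 1.0], size=L)) / np.sqrt(6)
ss = [np.sqrt(e1 / L) * qpsk, np.sqrt(e2 / L) * qam8]
varsigmas = [1e-3, 5e-3]

# constant-modulus random-phase initialization of the transmit waveform
x0 = np.sqrt(ps) * np.exp(2j * np.pi * rng.random(L * Nt))
\end{verbatim}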
Regarding the stopping criteria of the proposed algorithm, we set ${\epsilon^{\textrm{primal}}}=10^{-4}$, ${\epsilon^{\textrm{dual}}}=10^{-2}$, and $\vartheta_{\textrm{I}}=\vartheta_{\textrm{O}}= 10^{-5}$. Finally, all the analysis is performed on a standard laptop with a 2.8 GHz Core i7 CPU and 16 GB of RAM. Firstly, we analyze the convergence of the proposed algorithm. We assume $M=2$ communication users. The desired signals of user 1 and user 2 are modulated with quadrature phase shift keying (QPSK) and 8-ary quadrature amplitude modulation (8QAM), respectively. The associated information bits are randomly generated. The transmit energies of the desired communication signals are $e_1=e_2=20$. \figurename~\ref{fig_2} illustrates the convergence of the $\textrm{SINR}$ of the synthesized waveforms versus the number of outer iterations, where the code length is $L = 20$, and the maximum allowed synthesis errors are $ \varsigma_1 = 10^{-3}$ and $\varsigma_2 = 5\times 10^{-3}$, respectively. In addition, the upper bound\footnote{It can be verified that the upper bound of $\textrm{SINR}_{\rm{r}}$ is $N_{\textrm{T}}N_{\textrm{R}}{e_{\textrm{T}}}$.} and the $\textrm{SINR}$ curve associated with the radar-only case\footnote{The waveforms for the radar-only case are synthesized by removing the communication constraints in \eqref{eq:objective function}, and the associated waveform design problem can be tackled by Algorithm 3 in \cite{Tang2016TxRx}.} are also drawn. We can see that the $\textrm{SINR}$ at convergence is $32.49$ dB, which is about $10$ dB higher than that of the linear frequency modulated (LFM) signals. However, since the DFRC system has to spare some transmit energy to satisfy the additional communication constraints, the proposed waveforms will suffer some performance loss (compared with the upper bound and the radar-only case, the losses of the proposed waveforms are about $1.59$ dB and $1.38$ dB, respectively). \begin{figure*}[!htbp] \centering \includegraphics[width=2.9in]{2-paper.pdf} \caption{$\textrm{SINR}_{\rm{r}}$ versus the number of outer iterations. $M = 2, e_1 = e_2 = 20, L =20, \varsigma_1 = 10^{-3}, \varsigma_2 = 5\times 10^{-3}$.} \label{fig_2} \end{figure*} \figurename~\ref{fig_3} shows the beampattern associated with the designed waveforms and filters, where the beampattern is defined by \begin{equation}\label{eq:pattern} \begin{split} P(\theta) = |{\mathbf w}^\dagger {\mathbf A}(\theta) {\mathbf x}|^2. \end{split} \end{equation} Note that the beampattern has the highest gain at the target direction ($\theta_0 = 20^\circ$), and forms deep nulls (lower than $-120$ dB) at the interference directions ($-40^\circ, -20^\circ, 40^\circ,$ and $50^\circ $). Therefore, the designed transmit waveforms and receive filters can suppress the interference power to a very low level, ensuring the target detection performance. \begin{figure*}[!htbp] \centering \includegraphics[width=2.9in]{3-paper.pdf} \caption{Beampattern of the DFRC systems. $M = 2, e_1 = e_2 = 20, L =20, \varsigma_1 = 10^{-3}, \varsigma_2 = 5\times 10^{-3}$.} \label{fig_3} \end{figure*} Next we compare the detection performance of the proposed waveforms with that of the LFM waveforms. To detect the target, we set up a hypothesis test as follows: \begin{equation}\label{eq:binary hypothesis} \left\{ \begin{array}{l} {{\cal H}_0: {\mathbf y} = {\mathbf r}},\\ {{\cal H}_1: {\mathbf y} = \alpha_0 {\mathbf A} (\theta_0) {\mathbf x} + {\mathbf r}}, \end{array} \right.
\end{equation} where ${\mathbf r} = \sum_{q = 1}^Q \alpha_q {\mathbf A} (\theta_q) {\mathbf x} +{\mathbf n}$. Assume that ${\mathbf n} \sim {\cal C} {\cal N}({\mathbf{0}},\sigma_{\textrm{n}}^2 {\mathbf I})$ and ${\mathbf r} \sim {\cal C} {\cal N}({\mathbf{0}},{\mathbf R}_x)$ (${\mathbf R}_x$ is defined in \eqref{eq:transform 1}). According to the Neyman-Pearson criterion \cite{kaybook1998}, we decide ${\cal H}_1$ if\footnote{This detector is also called the generalized matched filter \cite[pp. 478-479]{kaybook1998}. We can also use the Bayesian detector proposed in \cite{Pd1973}, i.e., we decide ${\cal H}_1$ if $|{\mathbf w}^\dagger {\mathbf y}|> T_{\text{h}}$.} \begin{equation}\label{eq:H_1} \begin{split} \textrm{Re}({\mathbf w}^\dagger {\mathbf y}) > T_{\text{h}}, \end{split} \end{equation} where ${\mathbf w} = \alpha_0 {\mathbf R}_x^{-1} {\mathbf A} (\theta_0) {\mathbf x}$, and $T_{\text{h}}$ is the detection threshold. The detection probability associated with this detector is given by \cite{kaybook1998}\footnote{The detection probability presented in \eqref{eq:P_D} represents an upper bound for the waveform ${\mathbf x}$. If the prior knowledge of the clutter or the target is imprecise, the detection performance degrades. } \begin{equation}\label{eq:P_D} \begin{split} P_{\textrm{D}} = \frac{1}{2}\textrm{erfc}\left(\textrm{erfc}^{-1}(2P_{\textrm{FA}})-\sqrt{\textrm{SINR}_{\rm{r}}}\right), \end{split} \end{equation} where $\textrm{erfc}(x)=\frac{2}{\sqrt{\pi}}\displaystyle\int_{x}^{\infty} \mathrm{e}^{-t^2}\mathrm{d}t$ is the complementary error function, $P_{\textrm{FA}}$ is the probability of false alarm, and $\textrm{SINR}_{\rm{r}}$ is given by \begin{align}\label{eq:SINR new} \textrm{SINR}_{\rm{r}} = \sigma_0^2{\mathbf x}^\dagger{\mathbf A}^\dagger(\theta_0){\mathbf R}_x^{-1}{\mathbf A}(\theta_0){\mathbf x}. \end{align} We can observe from \eqref{eq:P_D} and \eqref{eq:SINR new} that the detection probability $P_{\textrm{D}}$ depends on the transmit waveforms. Therefore, it is expected that the optimized waveforms achieve a larger SINR and hence a higher detection probability. To illustrate the superiority of the proposed waveforms, we let $\sigma_0^2 = -20$ dB. It can be verified that the $\textrm{SINR}_{\rm{r}}$ of the proposed waveforms and the LFM signals are $12.49$ dB and $2.04$ dB, respectively. The associated detection probabilities are shown in \figurename~\ref{fig_4}. We can see that the detection probability of the proposed waveforms is significantly higher than that of the LFM signals. \begin{figure*}[!htbp] \centering \includegraphics[width=2.9in]{4-paper.pdf} \caption{Target detection probability of the DFRC systems. $N_{\textrm{T}}=16,N_{\textrm{R}}=8,{e_{\textrm{T}}} = 20,L=20,\sigma_0^2 = -20$ dB.} \label{fig_4} \end{figure*} \figurename~\ref{fig_5} shows the synthesis error of the communication signals versus the number of outer iterations. We can see that the waveforms at convergence satisfy the communication constraints, implying that the communication functionality is supported. To verify this claim, \figurename~\ref{fig_6}(a) and \figurename~\ref{fig_6}(d) compare the synthesized communication signals and the desired signals for the two users. The constellation diagrams of the synthesized communication signals are displayed in \figurename~\ref{fig_6}(b) and \figurename~\ref{fig_6}(e).
It can be seen that the real and imaginary parts of the synthesized signals are close to those of the desired ones and the constellation diagrams of these signals are nearly ideal. \figurename~\ref{fig_6}(c) and \figurename~\ref{fig_6}(f) show the symbol error rate (SER) of the synthesized signals, where we define the SNR for the $m$th communication user as \begin{equation}\label{eq:SNR} \begin{split} {\textrm{SNR}}_{m,c} = \frac{{\mathbb{E}}\{ |s_{m,l}|^2\} }{\sigma_{z,m}^2}, \end{split} \end{equation} Here, $s_{m,l}$ is the $l$th symbol of ${\mathbf s}_m$ ($l=1,2,\cdots,L$), $\sigma_{z,m}^2$ is the noise power level of the $m$th communication receiver, and $2000$ independent trials are conducted to obtain the SER. To achieve a given ${\textrm{SNR}}_{m,c}$, we keep the amplitude of the communication signals fixed and vary the noise power. We can observe that the SER performance of the synthesized signals is also close to that of the desired ones. \begin{figure*}[!htbp] \centering \includegraphics[width=2.9in]{5-paper.pdf} \caption{The synthesis error of communication signals versus the number of outer iterations. $M = 2, e_1 = e_2 = 20, L =20, \varsigma_1 = 10^{-3}, \varsigma_2 = 5\times 10^{-3}$.} \label{fig_5} \end{figure*} \begin{figure*}[!htbp] \centering {\subfigure[]{{\includegraphics[width = 2.1in]{6-1-paper.pdf}} \label{fig_6-1}} } {\subfigure[]{{\includegraphics[width = 1.615in]{6-2-paper.pdf}} \label{fig_6-2}} } {\subfigure[]{{\includegraphics[width = 2.1in]{6-3-paper.pdf}} \label{fig_6-3}} } {\subfigure[]{{\includegraphics[width = 2.1in]{6-4-paper.pdf}} \label{fig_6-4}} } {\subfigure[]{{\includegraphics[width = 1.615in]{6-5-paper.pdf}} \label{fig_6-5}} } {\subfigure[]{{\includegraphics[width = 2.1in]{6-6-paper.pdf}} \label{fig_6-6}} } \caption{Analysis of the synthesized communication signals. (a)-(c): Comparison of the synthesized communication signals with the desired ones, the constellation diagram, and the SER of the synthesized communication signals for user 1. (d)-(f): Comparison of the synthesized communication signals with the desired ones, the constellation diagram, and the SER of the synthesized communication signals for user 2. $M = 2, e_1 = e_2 = 20, L =20, \varsigma_1 = 10^{-3}, \varsigma_2 = 5\times 10^{-3}$.} \label{fig_6} \end{figure*} Next, we vary the transmit energy of the communication signals and analyze the $\textrm{SINR}$ performance for target detection. \figurename~\ref{fig_7} illustrates the $\textrm{SINR}$ curves of the DFRC system versus the number of outer iterations for different communication energies. The associated $\textrm{SINR}$ values at convergence are listed in Table \ref{tab:transmit energy}. Note that $\textrm{SINR}$ decreases with the transmit energy of the communication signals. This is because the DFRC system has to devote more transmit energy to ensuring the communication performance, resulting in degraded target detection performance. \begin{figure*}[!htbp] \centering \includegraphics[width=2.9in]{7-paper.pdf} \caption{$\textrm{SINR}_{\rm{r}}$ for different $e_m$.
$M = 2, L =20, \varsigma_1 = 10^{-3}, \varsigma_2 = 5\times 10^{-3}$.} \label{fig_7} \end{figure*} \begin{table}[!htbp] \caption{$\textrm{SINR}_{\rm{r}}$ versus $e_m$} \centering \begin{tabular}{ccccc} \hline $e_1$ & $10$ & $20$ & $20$ & $30$ \\ \hline $e_2$ & $10$ & $20$ & $30$ & $30$ \\ \hline $\textrm{SINR}_{\rm{r}}$ (dB) & $32.91$ & $32.59$ & $31.87$ & $31.73$\\ \hline \end{tabular} \label{tab:transmit energy} \end{table} Then, we analyze the impact of the number of communication users on the system performance. \figurename~\ref{fig_8} draws the $\textrm{SINR}$ curves versus the number of outer iterations for different numbers of communication users ($M=2,3$, and $4$), where the code length is $L = 20$, and the maximum allowed synthesis errors are $\varsigma_m =10^{-3}$ ($m=1, \cdots, M$). The $\textrm{SINR}$ values at convergence for different $M$ are shown in Table \ref{tab:different users}. We find that as the number of communication users increases (and thus the number of constraints grows and the feasibility region shrinks), the $\textrm{SINR}$ performance decreases rapidly. Moreover, the proposed algorithm takes a longer time to converge. \begin{figure*}[!htbp] \centering {\subfigure[]{{\includegraphics[width = 2.5in]{8-1-paper.pdf}} \label{fig_8-1}} } {\subfigure[]{{\includegraphics[width = 2.5in]{8-2-paper.pdf}} \label{fig_8-2}} } \caption{$\textrm{SINR}_{\rm{r}}$ for different $M$. $L =20$, $e _m = 20, \varsigma_m =10^{-3}, m=1,\cdots,M$. (a) $\textrm{SINR}_{\rm{r}}$ versus the number of outer iterations. (b) $\textrm{SINR}_{\rm{r}}$ versus the CPU time.} \label{fig_8} \end{figure*} \begin{table}[!htbp] \caption{$\textrm{SINR}_{\rm{r}}$ versus $M$} \centering \begin{tabular}{cccc} \hline $M$ & $2$ & $3$ & $4$ \\ \hline $\textrm{SINR}_{\rm{r}}$ (dB) & $32.74$ & $31.22$ & $29.57$\\ \hline \end{tabular} \label{tab:different users} \end{table} Now we study how the code length affects the DFRC system performance. We use the same parameter setting as that in \figurename~\ref{fig_2}, but vary the code length and set the maximum allowed synthesis error to be $\varsigma_m =10^{-3}$ ($m=1,2$). \figurename~\ref{fig_9-1} and \figurename~\ref{fig_9-2} plot the $\textrm{SINR}$ of the synthesized waveforms versus the number of outer iterations and the CPU time, respectively, for $L = 10,20$, and $30$. Table \ref{tab:code length} presents the $\textrm{SINR}$ values at convergence. \figurename~\ref{fig_9-1} indicates that the target detection performance improves for a larger $L$. This is because the degrees of freedom (DOFs) of the waveforms increase with the code length (which implies that the system has better interference suppression capability). However, as shown in \figurename~\ref{fig_9-2}, the increased code length will also lead to a longer time for the algorithm to reach convergence. \begin{figure*}[!htbp] \centering {\subfigure[]{{\includegraphics[width = 2.5in]{9-1-paper.pdf}} \label{fig_9-1}} } {\subfigure[]{{\includegraphics[width = 2.5in]{9-2-paper.pdf}} \label{fig_9-2}} } \caption{$\textrm{SINR}_{\rm{r}}$ for different $L$. $M=2, e_1 = e_2 = 20, \varsigma_1 = \varsigma_2 = 10^{-3}$. (a) $\textrm{SINR}_{\rm{r}}$ versus the number of outer iterations.
(b) $\textrm{SINR}_{\rm{r}}$ versus the CPU time.} \label{fig_9} \end{figure*} \begin{table}[!htbp] \caption{$\textrm{SINR}_{\rm{r}}$ versus $L$} \centering \begin{tabular}{cccc} \hline $L$ & $10$ & $20$ & $30$\\ \hline $\textrm{SINR}_{\rm{r}}$ (dB) & $31.54$ & $32.38$ & $32.59$\\ \hline \end{tabular} \label{tab:code length} \end{table} Subsequently, we study the impact of the number of antennas on the DFRC system performance. We use the same parameter setting as that in \figurename~\ref{fig_2}, but vary the number of antennas and set the maximum allowed synthesis error to be $\varsigma_m =10^{-3}$ ($m=1,2$). \figurename~\ref{fig_10}(a) and \figurename~\ref{fig_10}(b) show the convergence of the proposed algorithm versus the number of outer iterations and the CPU time, respectively, for different numbers of transmit and receive antennas. The $\textrm{SINR}$ values at convergence are displayed in Table \ref{tab:the number of antennas}. It can be seen that increasing the number of antennas improves the detection performance, but it also leads to a heavier computational load. \begin{figure*}[!htbp] \centering {\subfigure[]{{\includegraphics[width = 2.5in]{10-1-paper.pdf}} \label{fig_10-1}} } {\subfigure[]{{\includegraphics[width = 2.5in]{10-2-paper.pdf}} \label{fig_10-2}} } \caption{$\textrm{SINR}_{\rm{r}}$ for different $N_\textrm{T}, N_\textrm{R}$. $L=20, M=2, e_1 = e_2 = 20, \varsigma_1 = \varsigma_2 = 10^{-3}$. (a) $\textrm{SINR}_{\rm{r}}$ versus the number of outer iterations. (b) $\textrm{SINR}_{\rm{r}}$ versus the CPU time.} \label{fig_10} \end{figure*} \begin{table}[!htbp] \caption{$\textrm{SINR}_{\rm{r}}$ versus $N_\textrm{T}$ and $N_\textrm{R}$} \centering \begin{tabular}{ccccc} \hline $N_\textrm{T}$ & $8$ & $8$ & $16$ & $16$\\ \hline $N_\textrm{R}$ & $8$ & $16$ & $8$ & $16$\\ \hline $\textrm{SINR}_{\rm{r}}$ (dB) & $28.68$ & $30.53$ & $32.55$ & $35.28$\\ \hline \end{tabular} \label{tab:the number of antennas} \end{table} In \figurename~\ref{fig_11}, the impact of the maximum allowed synthesis error on the $\textrm{SINR}$ performance is analyzed, where we use the same parameter setting as that in \figurename~\ref{fig_2} except for varying the maximum allowed synthesis error. The $\textrm{SINR}$ values at convergence for these cases are given in Table \ref{tab:the different synthesis error}. The SER performance of the synthesized communication signals is analyzed in \figurename~\ref{fig_12}. We can see that for a smaller value of the maximum allowed synthesis error, the $\textrm{SINR}$ performance only degrades slightly. However, the associated communication performance improves considerably. This implies that the synthesis error can be kept at a reasonably low level to guarantee the quality of service for communication without noticeably affecting the detection performance. \begin{figure*}[!htbp] \centering \includegraphics[width=3in]{11-paper.pdf} \caption{$\textrm{SINR}_{\rm{r}}$ for different $\varsigma_m$. $M = 2, e_1 = e_2 = 20, L =20, \varsigma_1 = \varsigma_2$.} \label{fig_11} \end{figure*} \begin{figure*}[!htbp] \centering {\subfigure[]{{\includegraphics[width = 2.5in]{12-1-paper.pdf}} \label{fig_12-1}} } {\subfigure[]{{\includegraphics[width = 2.5in]{12-2-paper.pdf}} \label{fig_12-2}} } \caption{SER for different $\varsigma_m$. $M=2, e_1 = e_2 = 20, L =20, \varsigma_1 = \varsigma_2$. (a) User 1.
(b) User 2.} \label{fig_12} \end{figure*} \begin{table}[!htbp] \caption{$\textrm{SINR}_{\rm{r}}$ versus $\varsigma_m$} \centering \begin{tabular}{cccc} \hline $\varsigma_m$ & $10^{-3}$ & $10^{-1}$ & $1$ \\ \hline $\textrm{SINR}_{\rm{r}}$ (dB) & $32.50$ & $32.67$ & $33.01$\\ \hline \end{tabular} \label{tab:the different synthesis error} \end{table} Finally, we demonstrate that the proposed algorithm can be extended to deal with different constraints (including the PAPR constraint, the similarity constraint, and the joint CM and similarity constraints). We set $\rho = 2$, the similarity parameters $\delta = 0.25e_{\textrm{T}}$, and $\delta_{\infty} = 1.5\sqrt {p_s}$\footnote{The similarity constraint is written as $\| {\mathbf x}-{\mathbf x}_0\|_2^2 \le \delta$; the constant-modulus and similarity constraints are written as $|{\mathbf x}(n)| = \sqrt {p_s} , n=1, 2, \cdots, LN_{\textrm{T}}, \| {\mathbf x}-{\mathbf x}_0\|_{\infty} \le \delta_{\infty}$.}. The reference waveform ${\mathbf x}_0$ is given by: \begin{equation}\label{eq:LFM} {\mathbf x}_0(n) = \sqrt{p_s}\textrm{e}^{\textrm{j}\pi (n-1)^2/{LN_{\textrm{T}}}}, n=1, 2, \cdots, LN_{\textrm{T}}. \end{equation} The other parameter setting is the same as that in \figurename~\ref{fig_2}. In \figurename~\ref{fig_13}, we can observe that increasing the PAPR results in a better detection performance. By contrast, enforcing a similarity constraint degrades the detection performance. \figurename~\ref{fig_14} analyzes the performance of the communication signals synthesized under different constraints. We find that since all the synthesized communication signals satisfy the communication constraints, they have similar performance. \begin{figure*}[!htbp] \centering \includegraphics[width=3in]{13-paper.pdf} \caption{$\textrm{SINR}_{\rm{r}}$ under different constraints. $M = 2, e_1 = e_2 = 20, L =20, \varsigma_1 = 10^{-3}, \varsigma_2 = 5\times 10^{-3}$.} \label{fig_13} \end{figure*} \begin{figure*}[!htbp] \centering {\subfigure[]{{\includegraphics[width = 2.5in]{14-1-paper.pdf}} \label{fig_14-1}} } {\subfigure[]{{\includegraphics[width = 2.5in]{14-2-paper.pdf}} \label{fig_14-2}} } \caption{SER under different constraints. $M=2, e_1 = e_2 = 20, \varsigma_1 = 10^{-3}, \varsigma_2 = 5\times 10^{-3}$. (a) User 1. (b) User 2.} \label{fig_14} \end{figure*} \section{Conclusion}\label{Sec:Conclusion} An algorithm for jointly designing the transmit waveforms and receive filters was devised to maximize the detection performance of the DFRC system and ensure communication with multiple users. Results showed that the optimized waveforms and filters could form deep nulls at the interference directions while maintaining the target response. In addition, the synthesized communication signals approximated the desired ones with the synthesis error of every user being precisely controlled. Moreover, if the transmit energy of the desired communication signals or the number of communication users was increased, the target detection performance degraded. Therefore, there is a fundamental tradeoff between the target detection and the communication performance. \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.14318", "language": "en", "timestamp": "2023-03-01T02:09:23", "url": "https://arxiv.org/abs/2302.14318", "yymm": "2302" }
\section{Introduction} Consider a general, full-rank, bi-partite state $\Psi$ in the (for the moment, finite dimensional) tensor product Hilbert space $\mathcal{H}_{L}\otimes \mathcal{H}_R$. Any such state can always be written in the form: \begin{equation} |\Psi\rangle = \sum_{n} \sqrt{p_n}\, |\tilde{\chi}_n\rangle_L\otimes |\chi_n\rangle_R, \end{equation} where $p_n$ are the eigenvalues of the reduced density matrices on $L$ and $R$, $\chi_n$ form an orthonormal set of eigenstates of the reduced density matrix on the right, and $\tilde{\chi}_n$ form an orthonormal set of eigenstates of the reduced density matrix on the left. While the eigenvalues are common to both the parties, the eigenstates are not. If we only had access to, say, the left factor, then we could write down a purification for the density matrix as follows: \begin{equation} |\Psi^{\star}\rangle = \sum_{n} \sqrt{p_n}\, |\tilde{\chi}_n\rangle_L\otimes |\tilde{\chi}_n^{\star}\rangle_{L^{\star}}. \end{equation} Here \begin{equation} |\tilde{\chi}_n^{\star}\rangle = \Theta |\tilde{\chi}_n\rangle, \end{equation} where $\Theta$ is an anti-unitary operator on $L$.\footnote{In quantum field theory, we could take it to be CPT in even dimensions or CRT in odd dimensions, with R being reflection along one spatial direction. \cite{Witten:2018zxz}} This new state is called the \emph{canonical purification} of $\Psi$ with respect to the left side \cite{Dutta:2019gen}.\footnote{Equivalently, the canonical purification of a density matrix $\rho$ is defined as the state $|\sqrt{\rho}\rangle$ viewed as a vector in the Hilbert space $\text{End}(\mathcal{H}) = \mathcal{H}\otimes \mathcal{H}^*$.} Note that $\Psi^{\star}$ resembles the thermofield double state. Physically, if one only had access to the left party in $\Psi$ and not to the right, then we can think of $\Psi^{\star}$ as the ``simplest'' purification that one could build from this information. Since $\Psi$ and $\Psi^{\star}$ are two different purifications of the same reduced density matrix on $L$, these two states are related by a unitary transformation on the right:\footnote{Here, we are using the full-rank condition. More generally, the two purifications would be related by an isometry.} \begin{equation} |\Psi^{\star}\rangle := \mathcal{R}_{\Psi}|\Psi\rangle, \end{equation} where \begin{equation} \mathcal{R}_{\Psi}: \mathcal{H}_R \to \mathcal{H}_{L^{\star}},\;\;\;\mathcal{R}_{\Psi} := \sum_n |\tilde{\chi}^{\star}_n\rangle_{L^{\star}}\langle \chi_n|_R. \end{equation} The operator $\mathcal{R}_{\Psi}$ quantifies how ``complex'' it is to reconstruct the original state $\Psi$ from $\Psi^{\star}$. We should emphasize that entanglement or R\'enyi entropies between the left and the right parties are blind to $\mathcal{R}_{\Psi}$. It therefore seems interesting to study aspects of this operator $\mathcal{R}_{\Psi}$, which goes beyond entanglement in quantifying properties of the state $\Psi$. We will call this operator $\mathcal{R}_{\Psi}$ which maps $\Psi$ to its canonical purification with respect to $L$ the \emph{reflection operator} with respect to $L$. When $\Psi$ is full-rank, the reflection operator is uniquely specified by the condition that it maps $\Psi$ to its canonical purification. There are several motivations to study the reflection operator in holographic conformal field theories: one important motivation comes from the quantum error correction perspective on the bulk to boundary map in AdS/CFT \cite{Papadodimas:2013jku, Almheiri:2014lwa, Dong:2016eik, Harlow:2016vwg}. 
It was shown by Harlow \cite{Harlow:2016vwg} that for a bulk degree of freedom (say, a qudit) within the entanglement wedge of a boundary subregion $A$, the encoding map $V$ into the dual CFT takes the general form: \begin{equation} |\psi_i\rangle_{\text{CFT}} = V|i\rangle_{\text{bulk}} = U_{A}|i\rangle_{A_1}\otimes |\chi\rangle_{A_2,\bar{A}}, \end{equation} where $\{|i\rangle\}$ forms a basis of states for the bulk qudit, and $\mathcal{H}_A= \mathcal{H}_{A_1}\otimes \mathcal{H}_{A_2} \oplus \mathcal{H}_{A_3}$, with $\mathcal{H}_{A_1}$ being the same dimension as that of the code subspace. Harlow's structure theorem is a general consequence of the Ryu-Takayanagi formula \cite{Ryu:2006bv, Hubeny:2007xt, Faulkner:2013ana}. Importantly, we can think of the unitary $U_A$ appearing in Harlow's structure theorem as a reflection operator: introduce an auxiliary reference system $``\text{ref}"$ which has the same dimension as that of the code subspace, and consider the maximally entangled state: \begin{equation} |\Psi\rangle = \frac{1}{\sqrt{d_{\text{code}}}}\sum_i |i\rangle_{\text{ref}}\otimes |\psi_i\rangle_{\text{CFT}}. \end{equation} Then, the unitary $U_A$ is precisely the adjoint of the reflection operator for this state $\Psi$ with respect to $\text{ref}\cup \bar{A}$. Furthermore, this particular reflection operator gives a simple recipe for bulk reconstruction: we can represent any bulk operator $\phi$ on the code subspace as a boundary operator on $A$ via the formula \begin{equation} O_A = U_{A}\phi_{A_1} U_{A}^{\dagger}. \end{equation} Thus, the reflection operator finds a natural role in formulating the bulk-to-boundary map in AdS/CFT as a quantum error correcting code. A second (perhaps much more direct) motivation, comes from the fact that the reflection operator is closely linked with the canonical purification, which finds several interesting applications in holography. For instance, the canonical purification is crucially used in identifying the area of the outermost extremal surface as the simple entropy \cite{Engelhardt:2018kcs, Engelhardt:2021mue}. Along similar lines, the reflected entropy \cite{Dutta:2019gen} for a mixed two-party state $\rho_{AB}$ is also defined in terms of the canonical purification $\Psi^{\star}_{ABA^{\star}B^{\star}}$ as the entanglement entropy of $AA^{\star}$. The reflected entropy is an interesting information theoretic quantity \cite{Akers:2019gcv, Chandrasekaran:2020qtn, Zou:2020bly, Hayden:2021gno, Akers:2021pvd, Akers:2022max}, one which finds a natural bulk dual in terms of the cross section area of the entanglement wedge of $AB$. Finally, it was recently argued in \cite{Engelhardt:2022qts} that for a black hole evaporating into a non-gravitational bath, the canonical purification of the total state with respect to the black hole side is dual to a connected wormhole, thus realizing the ER=EPR idea in the context of an evaporating black hole (see also \cite{Geng:2020fxl, Anderson:2020vwi} for other approaches). While the original state of the radiation plus the evaporating black hole does not appear to have a wormhole in it, the state after the action of the corresponding reflection operator does; in this way, the reflection operator in this case acts to ``geometrize" the entanglement in the originally complex and non-geometric state. 
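As a small sanity check of these definitions, the reflection operator and the canonical purification can be computed explicitly for a random full-rank state of two qudits. The following sketch (ours, purely illustrative) takes the anti-unitary $\Theta$ to be complex conjugation in the chosen basis, represents states by their matrices of amplitudes, and verifies that acting with $\mathcal{R}_{\Psi}$ on the right factor of $\Psi$ indeed produces $\Psi^{\star}$, whose amplitude matrix is just $\sqrt{\rho_L}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 3                                    # dimension of H_L and H_R

# random full-rank bipartite state: Psi_{ab} = (<a|_L tensor <b|_R) |Psi>
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
M = M / np.linalg.norm(M)

# Schmidt decomposition M = U diag(sqrt(p_n)) V^dagger:
# columns of U are |chi_tilde_n>_L, rows of Vh are the components of |chi_n>_R
U, s, Vh = np.linalg.svd(M)

# canonical purification w.r.t. L (Theta = complex conjugation):
# (Psi*)_{ab} = sum_n U_{an} sqrt(p_n) conj(U_{bn}), i.e. the matrix sqrt(rho_L)
Psi_star = (U * s) @ U.conj().T

# reflection operator R = sum_n |chi_tilde_n^*><chi_n| : H_R -> H_{L*}
R = (U @ Vh).conj()

# acting with (1 tensor R) on Psi multiplies the amplitude matrix by R^T on the right
print(np.allclose(M @ R.T, Psi_star))    # True
\end{verbatim}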
For holographic theories, it was proposed by Engelhardt and Wall \cite{Engelhardt:2018kcs} that the classical, Lorentzian bulk geometry dual to the canonical purification with respect to a boundary subregion $A$ is obtained by taking the entanglement wedge of $A$ and gluing it to its CPT image at the RT or HRRT surface \cite{Ryu:2006bv, Hubeny:2007xt} dual to $A$ (see figure \ref{fig:EW}). A replica trick argument for this proposal was later given by Dutta and Faulkner \cite{Dutta:2019gen} (see also \cite{Marolf:2019zoo}). In gluing together portions of solutions of Einstein equations to obtain new solutions, one must impose junction conditions at the gluing surface in order to ensure that the resulting geometry also satisfies Einstein's equations. In the case at hand, the fact that the co-dimension two surface we are gluing across is a classically extremal surface implies that these junction conditions are trivially satisfied (see section \ref{sec:EWconstruction} for more details). The resulting geometry contains an entire Cauchy surface, and one can obtain the full solution by evolving the data on this surface with the Einstein equations. Upon including quantum corrections, the gluing must be done across the quantum extremal surface (QES) \cite{Engelhardt:2014gca}. However, due to quantum corrections, the QES is not generically classically extremal, and now the junction conditions imply that the bulk matter must be in a state whose stress tensor has a delta function ``shock''\cite{Bousso:2019dxk} proportional to the first shape-derivative of the bulk entanglement entropy, in order for Einstein's equations to be satisfied. \begin{figure} \centering \begin{tabular}{c c c} \includegraphics[height=4cm]{EW1.pdf} & \hspace{1cm} & \includegraphics[height=4cm]{EW2.pdf} \end{tabular} \caption{(Left) A portion of the bulk geometry dual to some holographic state $\Psi$. The entanglement wedge of the left party is shown in blue and the entanglement wedge of the right party is shown in green. (Right) The Engelhardt-Wall proposal for the geometry dual to the canonical purification $\Psi^{\star}$ with respect to the left party consists of the left entanglement wedge glued to its CPT image at the quantum extremal surface. In situations where the quantum extremal surface is not classically extremal, the geometry needs to be supported by a shock (red dashed lines) in the bulk matter stress tensor. } \label{fig:EW} \end{figure} Our goal in this paper will be to study the reflection operator in a perturbative setup. The main application we have in mind is to verify the above prediction of general relativity for the bulk stress tensor shock in the context of the Engelhardt-Wall construction. We will consider a family of states $\Psi_{\lambda}$ labelled by some parameter $\lambda$. We will first derive a differential equation for $\mathcal{R}_{\lambda} \equiv \mathcal{R}_{\Psi_{\lambda}}$ along the flow parametrized by $\lambda$; this equation will involve more familiar quantities such as the modular Hamiltonian and modular flow. In order to be concrete, we will then apply this general equation to the thermofield double (TFD) state perturbed by turning on a source (with a small amplitude) in the Euclidean path integral. 
In a holographic quantum system, we will then use this to compute the bulk stress tensor one-point function (to first order in the deformation) in the bulk dual to the canonically purified state and show that it has the quantum extremal shock contribution required for the Engelhardt-Wall construction to work. While we will explicitly demonstrate the existence of this shock to first order in perturbation theory around the TFD state, we expect that with some mild assumptions, our calculation can be extended beyond perturbation theory (i.e., at finite deformation parameter $\lambda$). Since the shock is a prediction of Einstein's equations from the bulk point of view, we are seeing here the emergence of bulk gravitational physics from the CFT entanglement structure \cite{Lashkari:2013koa, Faulkner:2013ica, Jacobson:2015hqa, Faulkner:2017tkh, Haehl:2017sot, Lewkowycz:2018sgn}, but in a context where quantum corrections in the bulk are important (see also \cite{Haehl:2019fjz, Belin:2021htw} for related previous work). The rest of the paper is organized as follows: in section \ref{sec:EWconstruction}, we review the Engelhardt-Wall construction of the bulk dual to the canonical purification. In section \ref{sec:flow}, we study the reflection operator for a general one-parameter family of states. We then apply this to the special case of the TFD state deformed by a source in the Euclidean path integral, and derive an explicit formula for the reflection operator in this context to first order in perturbation theory. In section \ref{sec:gravity}, we apply this formula to holographic quantum systems in order to study the one-point function of the bulk matter stress tensor and demonstrate the existence of the quantum extremal shock. We end in section \ref{sec:discussion} with some concluding remarks and open directions. \section{Review of Engelhardt-Wall construction}\label{sec:EWconstruction} In this section, we briefly review the construction of Engelhardt and Wall (EW) for the holographic dual of the canonical purification of a bi-partite state. The EW geometry is a Lorentzian geometry constructed in the following way: let us begin with the original Lorentzian spacetime $M$ dual to the original state $\Psi$. Let $\sigma$ be the quantum extremal surface (QES) corresponding to the left subregion, and let $D_{\sigma}$ be the corresponding entanglement wedge. The EW proposal for the geometry dual to the canonical purification with respect to the left is to glue $D_{\sigma}$ to its CPT image at the surface $\sigma$, then evolve the resulting data on a Cauchy slice using Einstein's equations to obtain the full Lorentzian geometry. However, for this to work, we must impose a set of co-dimension two junction conditions on the geometric data at $\sigma$ in $M$. These junction conditions are analogous to, and in fact follow from, the standard, co-dimension one junction conditions which are imposed when gluing two solutions to Einstein's equations across a co-dimension one hypersurface \cite{Israel:1966rt, Barrabes:1991ng}. The basic idea is as follows: let us imagine, for the moment, that we have two different spacetimes $M$ and $M'$ with some Cauchy slices $\Sigma$ and $\Sigma'$ respectively. Now, consider co-dimension two surfaces $\sigma$ and $\sigma'$ in $M$ and $M'$ respectively, which divide $\Sigma$ and $\Sigma'$ into two parts. Let us call one part $\text{In}_{\Sigma}(\sigma)$ and the other $\text{Out}_{\Sigma}(\sigma)$ in $M$; similar divisions can be written for the Cauchy slice in $M'$.
This procedure naturally divides each spacetime into four parts, namely, $I_W(\sigma) \equiv D[\text{In}_{\Sigma}(\sigma)]$, $O_W(\sigma) \equiv D[\text{Out}_{\Sigma}(\sigma)]$, $J^+[\sigma]$ and $J^-[\sigma]$, where $D$ denotes the domain of dependence and $J^{+(-)}$ denotes the causal future (past). We have a similar division for $M'$ as well. We wish to glue $I_W(\sigma)$ to $O_W(\sigma')$ by identifying the two surfaces $\sigma$ and $\sigma'$. For this to work, the most basic thing we must demand is that the intrinsic geometry on $\sigma$ and $\sigma'$ should be identical, or more precisely, the induced metrics $h= h_{ij}dy^idy^j$ on the two surfaces should be equivalent (up to a change of coordinates) -- this is the first junction condition. Next, let us imagine that there exists a consistent solution to Einstein's equations which contains $V_{\text{in}} \equiv I_W(\sigma)$ and $V_{\text{out}} \equiv O_W(\sigma')$ glued together at $\sigma=\sigma'$. Let us consider the null surface $\mathcal{N}_k$ which separates $V_{\text{out}}\cup J^-(\sigma)$ from $V_{\text{in}} \cup J^+(\sigma)$. Let $k$ be the generating vector field tangent to null geodesics (not necessarily affinely parametrized) along $\mathcal{N}_k$; at $\sigma$, we can take $k$ to be orthogonal to $\sigma$. Let $\ell^{\mu}$ be a transverse null vector field satisfying $\ell\cdot k=-1$ everywhere on $\mathcal{N}_k$. We can take $\ell$ such that at $\sigma$ it is orthogonal to $\sigma$ and agrees with the generating vector field of the null surface $\mathcal{N}_{\ell}$ separating $V_{\text{out}} \cup J^+(\sigma)$ and $V_{\text{in}} \cup J^-(\sigma)$. The idea is to now apply the co-dimension one Barrab\`es-Israel junction conditions \cite{Barrabes:1991ng, Israel:1966rt} individually to $\mathcal{N}_k$ and $\mathcal{N}_{\ell}$. For instance, the junction condition across $\mathcal{N}_k$ gives the following expression for the matter stress tensor localized to this null sheet: \begin{equation} 8\pi G_N \;T_{\mu\nu}^{(k)}= -\left(\left[\theta_{(\ell)}\right] k_{\mu}k_{\nu} + [{\chi_{(\ell)}}_{(\mu}]k_{\nu)}+ [\kappa_{(\ell)}] h_{\mu\nu}\right)\delta(\mathcal{N}_k), \end{equation} where $\theta_{(\ell)}$ is the expansion of the null-geodesic congruence generated by the vector field $\ell$, ${\chi_{(\ell)}}_{\mu}$ is called its twist, and $\kappa_{(\ell)}$ measures the in-affinity of the geodesic congruence generated by $k$: \begin{eqnarray} \theta_{(\ell)} &=& h^{ij} h^{\mu}_i h^{\nu}_j \nabla_{\mu} \ell_{\nu},\\ \chi_{(\ell)\,\mu}&=&\frac{1}{2} h^{\alpha}{}_{\mu} k^{\nu} \nabla_{\alpha}\ell_{\nu},\\ \kappa_{(\ell)}&=&-k^{\nu} k^{\mu} \nabla_{\mu} \ell_{\nu}=\ell_{\nu}k^{\mu}\nabla_{\mu}k^{\nu}. \end{eqnarray} Finally, the notation $[\cdot]$ stands for the difference across $\mathcal{N}_k$. We can write a similar equation for $\mathcal{N}_{\ell}$ as well. We are interested in evaluating these constraints at $\sigma$. Since the in-affinity at a point along a geodesic (in the present case, corresponding to where it intersects $\sigma$) can be adjusted by an arbitrary rescaling, we can set the discontinuity in the in-affinity to zero at $\sigma$ by a suitable choice of parametrization. Furthermore, for our specific case where we wish to glue an entanglement wedge to its CPT image, the twist term also drops out, since the twist is even under CPT. On the other hand, the expansion is odd under CPT, and so we get \begin{equation}\label{IJC} 8\pi G_N T^{(k)}_{\mu\nu} = -2\theta_{(\ell)} k_{\mu}k_{\nu}\,\delta(\mathcal{N}_k)\;\;\;\;\;\;\; (\text{at}\;\sigma). 
\end{equation} For classically extremal surfaces, the expansion vanishes and the gluing does not require any singular matter stress tensor. However, for a quantum extremal surface, the expansion is not zero, but given by the quantum extremality formula \cite{Engelhardt:2014gca}: \begin{equation} \theta_{(\ell)} = -\frac{4G_N}{\sqrt{h}} \ell^{\mu}\frac{\delta S_{\text{bulk}}}{\delta x^{\mu}}. \end{equation} Thus, general relativity makes a prediction for the singular part of the matter stress tensor at the quantum extremal surface in the Lorentzian geometry dual to the canonical purification of a holographic state: \begin{equation} 2\pi T^{(k)}_{\mu\nu} = \frac{2}{\sqrt{h}} \ell^{\alpha}\frac{\delta S_{\text{bulk}}}{\delta x^{\alpha}}\,k_{\mu}k_{\nu}\delta(\mathcal{N}_k), \end{equation} with a similar prediction for the stress tensor localized to $\mathcal{N}_{\ell}$. In principle, we need to compute the state of bulk matter fields in the bulk dual to the canonically purified state, evaluate the corresponding bulk stress tensor, and check whether it satisfies the above prediction. Our goal is to do this in the perturbative framework. \section{Perturbation theory for the reflection operator}\label{sec:flow} Consider a bi-partite Hilbert space $\mathcal{H}_L\otimes \mathcal{H}_R$, where $\mathcal{H}_L$ and $\mathcal{H}_R$ are both finite dimensional Hilbert spaces of the same dimension. Let us say that we have a general one-parameter family of states $\Psi_{\lambda} \in \mathcal{H}_L\otimes \mathcal{H}_R$ which are all full rank. At any value of $\lambda$, we can construct the reduced density matrices $\rho_L(\lambda)$ and $\rho_R(\lambda)$ corresponding to the left and right factors respectively. Accordingly, we have the one-parameter family of modular Hamiltonians $K_L(\lambda)$ and $K_R(\lambda)$, where the modular Hamiltonian for a density matrix $\rho$ is defined as $K=-\log\,\rho$. At any given value of $\lambda$, we have a Schmidt decomposition for the state $\Psi_{\lambda}$: \begin{equation} |\Psi_{\lambda}\rangle = \sum_n e^{-\frac{1}{2}E_n(\lambda)}|\tilde{\chi}_n(\lambda)\rangle_L\otimes |\chi_n(\lambda)\rangle_R. \end{equation} In terms of the modular Hamiltonians, the $\chi_n$ and $\tilde{\chi}_n$ satisfy \begin{equation} K_R(\lambda)|\chi_n(\lambda)\rangle_R = E_n(\lambda) |\chi_n(\lambda)\rangle_R, \end{equation} \begin{equation} K_L(\lambda)|\tilde{\chi}_n(\lambda)\rangle_L = E_n(\lambda) |\tilde{\chi}_n(\lambda)\rangle_L, \end{equation} where we note that the eigenvalues are common to both factors. In terms of these quantities, recall that the reflection operator $\mathcal{R}_{\lambda}$ is defined as: \begin{equation} \mathcal{R}_{\lambda} = \sum_n |\tilde{\chi}_n^{\star}\rangle_{L^{\star}}\langle \chi_n|_R, \end{equation} where $|\tilde{\chi}_n^{\star}\rangle_{L^{\star}} = \Theta |\tilde{\chi}_n\rangle_{L}$, and $\Theta$ is an anti-unitary operator which we will take to be CPT. Our first goal is to derive a differential equation for $\mathcal{R}_{\lambda}$ along the flow parametrized by $\lambda$. \subsection{Flow equation} Upon an infinitesimal deformation of the parameter $\lambda$, the change in the eigenstates of, say, $K_R$ is given by \begin{equation} \frac{d}{d\lambda} |\chi_n\rangle_R = \sum_{m\neq n} \frac{\langle \chi_m | \frac{d}{d\lambda} K_R |\chi_n\rangle_R}{(E_n(\lambda)-E_m(\lambda))} |\chi_m\rangle_R. \end{equation} Here we have assumed that the eigenvalues are non-degenerate. 
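Before manipulating this expression further, it is useful to see the objects defined above realized concretely. The following is a minimal finite-dimensional sketch (a toy check only, assuming NumPy and SciPy are available, and modeling $\Theta$ simply as complex conjugation in the chosen basis rather than a genuine CPT operator): it builds a random full-rank bipartite state, extracts the Schmidt data from a singular value decomposition, forms the reflection operator, and checks that acting with $\mathcal{R}$ on the right factor produces the canonical purification, i.e., the state whose coefficient matrix is $\sqrt{\rho_L}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
d = 4  # dim(H_L) = dim(H_R)

# random full-rank state |Psi> = sum_{ij} M_ij |i>_L |j>_R, normalized
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M /= np.linalg.norm(M)

# Schmidt decomposition: M = U diag(e^{-E_n/2}) V^dagger, so the left Schmidt
# vectors are the columns of U and the right Schmidt vectors are conj(V[:, n])
U, s, Vh = np.linalg.svd(M)
V = Vh.conj().T

# reflection operator R = sum_n |Theta tilde_chi_n><chi_n|, with Theta = conjugation
R = U.conj() @ V.T

rho_L = M @ M.conj().T
print(np.allclose(R.conj().T @ R, np.eye(d)))  # True: R is unitary
print(np.allclose(M @ R.T, sqrtm(rho_L)))      # True: (1 x R)|Psi> has coefficient matrix sqrt(rho_L)
\end{verbatim}
In this finite-dimensional language the canonical purification is simply the vectorization of $\sqrt{\rho_L}$, which is the familiar statement that $|\sqrt{\rho}\rangle$ canonically purifies $\rho$.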
We can rewrite this in the following way: \begin{eqnarray} \frac{d}{d\lambda} |\chi_n\rangle_R &=& \sum_{m\neq n} \int_0^{\infty}idt\,e^{-\epsilon t}\left(\langle \chi_m | e^{it K_R(\lambda)} \frac{d}{d\lambda} K_R e^{-it K_R(\lambda)}|\chi_n\rangle_R\right)|\chi_m\rangle_R\nonumber\\ &=& \int_0^{\infty}idt\,e^{-\epsilon t} e^{it K_R(\lambda)} \frac{d}{d\lambda} K_R e^{-it K_R(\lambda)}|\chi_n\rangle_R-\frac{i}{\epsilon}\frac{d}{d\lambda} E_n(\lambda)\,|\chi_n\rangle_R. \end{eqnarray} Here we have introduced a regulator $\epsilon \to 0^+$, which plays two roles: firstly, it regulates the $t$-integral at large $t$ in the first line. Secondly, it allows us to add and subtract the $m=n$ term in the sum, which together with $$\sum_m |\chi_m\rangle\langle \chi_m|_R = \mathbb{1}_R,$$ allows us to rewrite the expression as in the second line. Note that the $\frac{1}{\epsilon}$ divergence is not really present, since it cancels the corresponding divergence from the first term; we have merely chosen to write the expression in this way for convenience. A similar formula is also true for the modular eigenstates of the left party, and so we get the following flow equations for the eigenstates: \begin{equation} \label{transport} \frac{d}{d\lambda}|\chi_n\rangle_R = i \mathcal{A}_{R}|\chi_n\rangle_R,\;\;\; \frac{d}{d\lambda}|\tilde{\chi}_n\rangle_L = i \mathcal{A}_{L}|\tilde{\chi}_n\rangle_L, \end{equation} where \begin{equation} \label{conn0R} \mathcal{A}_R(\lambda) = a_R(\lambda)+\int_0^{\infty}dte^{-\epsilon t}\,e^{itK^{(\lambda)}_R}\frac{d}{d\lambda} K^{(\lambda)}_R e^{-it K^{(\lambda)}_R}, \end{equation} \begin{equation} \label{conn0L} \mathcal{A}_L(\lambda) = a_L(\lambda)+\int_0^{\infty}dte^{-\epsilon t}\,e^{itK^{(\lambda)}_L}\frac{d}{d\lambda} K^{(\lambda)}_L e^{-it K^{(\lambda)}_L}. \end{equation} Here $a_L$ and $a_R$ are the diagonal terms proportional to $\frac{1}{\epsilon}$. There is an important subtlety we need to address at this point: orthonormality does not fix the overall phase of an eigenstate of the modular Hamiltonian, i.e., we have the freedom $\chi_n \to e^{i\phi_n}\chi_n$. So, as far as the eigenstates of the modular Hamiltonian are concerned, the diagonal terms in the above flow equation are ambiguous. Some of this ambiguity is fixed by the fact that we want $\chi_n(\lambda)$ and $\tilde{\chi}_n(\lambda)$ to be a Schmidt basis for the family of states $\Psi(\lambda)$. In particular, the sum of the left and the right phases $(\phi_n + \tilde{\phi}_n)$ is fixed, but the relative phase $(\phi_n - \tilde{\phi}_n)$ is not; this is good enough for our purposes, because the reflection operator is unambiguous once the Schmidt condition is imposed. Crucially, these ambiguities all correspond to diagonal terms in the modular eigenstate basis, and for what we are interested in, we will not need to worry about fixing them. We will simply gather all these diagonal terms inside $a_L$ and $a_R$ henceforth.\footnote{More precisely, $(a_L+a_R)$ can be fixed by the Schmidt condition. 
But as we will see later, $a_L$ and $a_R$ will drop out of the calculations we are interested in.} Coming back to the reflection operator, the change in $\mathcal{R}_{\lambda}$ in these terms is given by \begin{equation}\label{diff} i\frac{d}{d\lambda} \mathcal{R}_{\lambda} = i\sum_n \Big(\Theta\frac{d}{d\lambda} |\tilde{\chi}_n\rangle_{L^{\star}}\langle \chi_n|_R + \Theta|\tilde{\chi}_n\rangle_{L^{\star}}\frac{d}{d\lambda} \langle \chi_n|_R\Big)=\mathcal{A}^{\star}_{L}(\lambda)\,\mathcal{R}_{\lambda}+\mathcal{R}_{\lambda}\,\mathcal{A}_{R}(\lambda), \end{equation} where we have defined \begin{equation} \mathcal{A}^{\star}_{L}(\lambda) = \Theta\,\mathcal{A}_{L}(\lambda)\,\Theta^{-1}. \end{equation} While we have focused on the special case with only one parameter $\lambda$, the formulas above apply naturally to the more general case where the parameter space is an $n$-dimensional manifold $\mathcal{M}$ parametrized locally by coordinates $\lambda^i$. In this case, $\mathcal{A}_R$ and $\mathcal{A}_{L^{\star}}$ become one-forms on this parameter space. It is natural to interpret them as connection one-forms for a $\boldsymbol{U}(\text{dim}\,\mathcal{H}_L)\times \boldsymbol{U}(\text{dim}\,\mathcal{H}_R)$ bundle over the base space $\mathcal{M}$, where $\boldsymbol{U}(D)$ is the unitary group. To see this more explicitly, imagine that we consider a modified state $\Psi' =U \Psi$, where $U$ is a one-sided unitary transformation acting on $R$, but we can let $U$ depend on the parameters $\lambda^i$. Then, it follows from a short calculation (using the defining equation \eqref{conn0R}) that the connections transform as \begin{equation} \mathcal{A}_{L}' = \mathcal{A}_{L}, \end{equation} \begin{equation} \mathcal{A}_R' = U\,\mathcal{A}_R \,U^{-1} - i dU\,U^{-1}, \end{equation} which is precisely the transformation property of a connection 1-form. The same formula is also true for the transformation of $\mathcal{A}_{L}$ under a one-sided unitary acting on $L$. Thus, $\mathcal{A}_{R}$ and $\mathcal{A}_{L}$ are connection 1-forms under the action of local, one-sided unitary transformations, and we can think of equation \eqref{transport} as defining transport with respect to these connections. We will refer to these connections as \emph{modular Berry connections}. The curvature for these connections must only lie along the diagonal $U(1)^{\mathrm{dim}\,\mathcal{H}}$ subgroups in the non-degenerate case. However, the curvature is much more interesting to study in the degenerate case, where one encounters further ambiguities in how to transport eigenstates within degenerate subspaces; this is a non-Abelian generalization of the phase ambiguities we encountered previously (see \cite{Czech:2017zfq, Czech:2018kvg, Czech:2019vih} for some related work on modular Berry connections). 
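For completeness, here is the short calculation behind the transformation rule quoted above, written for a single parameter $\lambda$ (for the multi-parameter case one simply replaces $d/d\lambda$ by the exterior derivative on $\mathcal{M}$). If $\Psi' = U\Psi$ with a one-sided unitary $U=U(\lambda)$ acting on $R$, then $\rho'_R = U\rho_R U^{\dagger}$, so we may take the right Schmidt vectors to transform as $|\chi'_n\rangle = U|\chi_n\rangle$ while the left ones are untouched. Differentiating and using the transport equation \eqref{transport},
\begin{equation}
\frac{d}{d\lambda}|\chi'_n\rangle = \frac{dU}{d\lambda}|\chi_n\rangle + U\frac{d}{d\lambda}|\chi_n\rangle = \left(\frac{dU}{d\lambda}U^{-1} + i\, U\mathcal{A}_R U^{-1}\right)|\chi'_n\rangle \equiv i\,\mathcal{A}'_R|\chi'_n\rangle,
\end{equation}
which gives $\mathcal{A}'_R = U\mathcal{A}_R U^{-1} - i\,\frac{dU}{d\lambda}U^{-1}$, i.e., the connection transformation written above in one-form notation.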
Coming back to the case with one parameter $\lambda$, the general solution to the differential equation \eqref{diff} takes the form:\footnote{The flow equation satisfied -- for instance, by $U_R$ -- is a regulated version of the flow equation satisfied by the Connes cocycle $u_s = e^{is K_R^{(\lambda)}}e^{-isK_R^{(0)}}$, in the large $s$ limit; see \cite{Ceyhan:2018zfg, Lashkari:2019ixo, Bousso:2020yxi, Levine:2020upy} for some recent discussions of the Connes cocycle.} \begin{equation} \mathcal{R}_{\lambda} = U^{\star}_{L}(\lambda)\cdot \mathcal{R}_0 \cdot U_R^{\dagger}(\lambda), \end{equation} where \begin{equation} \label{udiff2} i\frac{dU^{\star}_{L}}{d\lambda} = \mathcal{A}^{\star}_{L} U^{\star}_{L},\;\;\; -i\frac{dU_R}{d\lambda}= \mathcal{A}_RU_{R},\;\;\;U^{\star}_{L}(0) = \mathbb{1}_{L^{\star}},\;\; U_R(0) = \mathbb{1}_R, \end{equation} The formal solutions to these equations are given by \begin{equation} U^{\star}_{L} = \mathcal{P}\exp\left\{-i\int_0^{\lambda}d\lambda'\mathcal{A}^{\star}_{L}(\lambda')\right\},\;\;\;U_R = \mathcal{P}\exp\left\{i\int_0^{\lambda}d\lambda'\mathcal{A}_R(\lambda')\right\}, \end{equation} where $\mathcal{P}$ stands for path-ordering. The matrices $U_R$ and $U_{L}$ supply a notion of parallel transport. This, in principle, allows us to completely solve for the reflection operator $\mathcal{R}_{\lambda}$ in terms of the modular Hamiltonians of the left and right subregions for the one-parameter family of states $\Psi_{\lambda}$.\footnote{Note that the reflection operator only depends on $a_L$ and $a_R$ through the combination $(a_L + a_R)$. We also need to impose the Schmidt condition to fix this phase ambiguity, as discussed previously. With this, the reflection operator is completely determined, but this phase ambiguity will not be important for us.} \subsection{Expanding around the TFD state} So far, we have derived a general differential equation satisfied by the operator $\mathcal{R}_{\lambda}$ for a one-parameter family of states $\Psi_{\lambda}$. Now we wish to apply this to a more concrete setting. Let us consider the TFD state \begin{equation} |\Psi_0\rangle = \frac{1}{\sqrt{Z}}\sum_n e^{-\frac{\beta}{2}E_n(0)}|\chi_n(0)\rangle_{L}\otimes |\chi^{\star}_n(0)\rangle_R, \end{equation} where $E_n(0)$ and $\chi_n(0)$ are the eigenstates of some local Hamiltonian $H$, and the right Hilbert space $\mathcal{H}_R$ can be identified with $ \mathcal{H}_{L^{\star}}$. The TFD state can also be thought of as a Euclidean path integral over a Euclidean time segment of length $\beta/2$. The TFD state is itself the canonical purification of the thermal ensemble, and so the reflection operator in the present case is essentially the identity operator. We wish to consider a one-parameter deformation of the TFD state. A natural such family of states can be constructed by turning on a source $\tilde{J}(\tau)$ for some operator $\mathcal{O}(\tau)$ in the Euclidean path integral \cite{Marolf:2017kvq}. Concretely, we change the action inside the Euclidean path integral in the following way:\footnote{We are only displaying the time coordinate here and in what follows, but in principle the sources can also depend on spatial directions.} \begin{equation} S_{\text{new}} = S_{\text{old}}+\lambda \int_{-\pi}^0 d\tau \tilde{J}(\tau)\mathcal{O}(\tau), \end{equation} where we have defined $\tau=\frac{2\pi}{\beta}\hat{\tau}$ and $\hat{\tau}$ is the Euclidean time coordinate with period $\beta$. 
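As a quick numerical sanity check of the statement that the undeformed TFD state has a trivial reflection operator, one can repeat the earlier finite-dimensional exercise with a thermal coefficient matrix (again a sketch assuming NumPy and SciPy, with $\Theta$ modeled as complex conjugation):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d, beta = 5, 2.0

# random Hermitian "Hamiltonian" H; the TFD coefficient matrix is e^{-beta H / 2} / sqrt(Z)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
M = expm(-0.5 * beta * H)
M /= np.linalg.norm(M)   # dividing by the Frobenius norm implements the 1/sqrt(Z)

U, s, Vh = np.linalg.svd(M)
R = U.conj() @ Vh.conj()          # same construction as before: R = conj(U) V^T
print(np.allclose(R, np.eye(d)))  # True: the reflection operator of the TFD is the identity
\end{verbatim}
The phases chosen by the numerical SVD cancel between the left and right Schmidt vectors, so the conclusion does not depend on them.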
This new path integral now constructs a new bi-partite state which we will call $\Psi_{\lambda}$. We wish to construct the reflection operator $\mathcal{R}_{\lambda}$ for this family of states to first order in $\lambda$. In order to do this, we first need to compute the change in the modular Hamiltonians of the $L$ and $R$ subsystems to first order in perturbation theory. This has been computed previously in several works, see for instance \cite{Faulkner:2016mzt, Sarosi:2017rsq, Lashkari:2018tjh, Balakrishnan:2020lbp}: \begin{equation}\label{ModHam} \frac{dK_R}{d\lambda} = \int_0^{2\pi} d\tau\, J_R(\tau)\int_{-\infty}^{\infty}\frac{ds}{4\sinh^2(\frac{s+i\tau}{2})}e^{\frac{is}{2\pi}K_{R}(0)}\mathcal{O}(0)e^{-\frac{is}{2\pi}K_{R}(0)}. \end{equation} Here $K_R(0)=\beta H$ is the original, undeformed modular Hamiltonian for $\Psi_0$, which is simply $\beta$ times the Hamiltonian $H$ corresponding to the TFD state. The source $J_R(\tau)$ is a time-reflection symmetric version of $\tilde{J}(\tau)$: \begin{equation} J_R(t) = \begin{cases} \tilde{J}(\tau) & -\pi < \tau < 0\\ \tilde{J}^*(- \tau) & 0 < \tau < \pi. \end{cases} \end{equation} Note that the operator on the right hand side of equation \eqref{ModHam} is a fully Lorentzian operator; all the Euclidean time dependence is now in the $\sinh^{-2}(\frac{s+i\tau}{2})$ kernel. A similar formula can also be written for the left subsystem. The only difference is that the corresponding source $J_L$ is related to $J_R$ by a left-right reflection, i.e., $J_L(\tau) = J_R(\pi - \tau)$. Let us briefly recap where equation \eqref{ModHam} comes from. In the finite dimensional case, one proceeds as follows:\footnote{We will temporarily drop the subscripts $L$ and $R$, since this derivation applies to both and the subscript is not so relevant.} \begin{eqnarray} \frac{dK}{d\lambda}\Big|_{\lambda= 0} &=& -\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\log(\rho_0 +\epsilon \frac{d\rho}{d\lambda})-\log \rho_0\right)\nonumber\\ &=&-\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\log[\rho_0(1 +\epsilon \rho_0^{-1} \frac{d\rho}{d\lambda})]-\log \rho_0\right)\nonumber\\ &=&-\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\log[e^{-K(0)}e^{\epsilon \rho_0^{-1} \frac{d\rho}{d\lambda}}]-\log \rho_0\right). \end{eqnarray} Here, we have only assumed that $\rho_0$ is invertible. Using the Baker-Campbell-Hausdorff formula in the first term, we get \begin{equation} \frac{dK}{d\lambda}\Big|_{\lambda= 0} = -\sum_{n=0}^{\infty}(-1)^n\frac{B_n}{n!}\left[K(0),\cdots\left[K(0),\rho_0^{-1} \frac{d\rho}{d\lambda}\right]\cdots\right]. \end{equation} Now, using the integral formula \begin{equation} B_n = \int_{-\infty+i\epsilon}^{\infty+i\epsilon}ds \frac{\left(\frac{-is}{2\pi}\right)^n}{4\sinh^2(s/2)}, \end{equation} we can re-sum the BCH expansion to obtain \begin{equation} \frac{dK}{d\lambda}\Big|_{\lambda= 0} = -\int_{-\infty+i\epsilon}^{\infty+i\epsilon}\frac{ds}{4\sinh^2(s/2)} e^{\frac{is}{2\pi}K(0)}\rho_0^{-1} \frac{d\rho}{d\lambda}e^{-\frac{is}{2\pi}K(0)}. 
\end{equation} For Euclidean path-integral states, a path-integral argument \cite{Rosenhaus:2014zza} shows that\footnote{More precisely, the operator which appears in this equation is $:\mathcal{O}:=\mathcal{O}-\langle \mathcal{O}\rangle_0$, but for simplicity, we can assume that the one point function of $\mathcal{O}$ vanishes.} \begin{equation} \rho_0^{-1} \frac{d\rho}{d\lambda} = -\int_0^{2\pi}d\tau J(\tau)\mathcal{O}(\tau), \end{equation} so we obtain \begin{equation} \frac{dK}{d\lambda}\Big|_{\lambda= 0} = \int d\tau J(\tau)\int_{-\infty+i\epsilon}^{\infty+i\epsilon}\frac{ds}{4\sinh^2(s/2)} e^{\frac{is}{2\pi}K(0)}\mathcal{O}(\tau)e^{-\frac{is}{2\pi}K(0)}. \end{equation} In the finite-dimensional case, this expression is good enough, but we would like to obtain a formula which is well-defined in the infinite dimensional or continuum quantum field theory limit as well. In the latter case, the above expression becomes problematic, since the operator $\mathcal{O}(\tau)$ is a Euclidean operator and does not admit a bounded continuum limit for all $\tau$. In order to avoid this problem, we first deform the $s$-contour integral\footnote{The integrand is analytic in the $0<\text{Im}(s)<2\pi$ strip of the complex $s$-plane. Furthermore, in the finite dimensional setting, the vertical contours at $s=\pm \infty$ can be dropped because $\sinh^{-2}(s/2)$ decays exponentially.} (before taking the continuum limit), to write the above expression as \begin{equation} \frac{dK}{d\lambda}\Big|_{\lambda= 0} = \int d\tau J(\tau)\int_{-\infty}^{\infty}\frac{ds}{4\sinh^2(\frac{s+i\tau}{2})} e^{\frac{is}{2\pi}K(0)}\mathcal{O}(0)e^{-\frac{is}{2\pi}K(0)}. \end{equation} Now we have a completely Lorentzian operator at hand, and at this stage we can take the continuum limit to obtain a well-defined continuum operator. With the first order change of the modular Hamiltonian in hand, we can now obtain the first order change in $U_R$: \begin{equation} -i\frac{dU_R}{d\lambda}(0) = \mathcal{A}_R(0), \end{equation} where \begin{equation} \mathcal{A}_R(0) = a_R(0)+\int_0^{\infty}dt\,e^{-\epsilon t}\int d\tau J_R(\tau)\int_{-\infty}^{\infty}\frac{ds}{4\sinh^2(\frac{s+i\tau}{2})}e^{\frac{i(s+2\pi t)}{2\pi}K_{R}(0)}\mathcal{O}(0)e^{-\frac{i(s+2\pi t)}{2\pi}K_{R}(0)}. \end{equation} Shifting $s$ by $2\pi t$ allows us to perform the $t$ integral: \begin{equation} \int_0^{\infty}idt\,\frac{e^{-\epsilon t}}{4\sinh^2(\frac{s-2\pi t+i\tau}{2})} = \frac{1}{2\pi i}\frac{1}{\left(1-e^{-(s+i\tau)}\right)}+\frac{\epsilon}{\pi^2}e^{-\frac{\epsilon}{2\pi}(s+i\tau)}B_{e^{s+i\tau}}(1+\frac{\epsilon}{2\pi},0) \end{equation} where $$B_z(a,b)= \int_0^z dt\,t^{a-1}(1-t)^{b-1},$$ is the incomplete Beta function. In the $\epsilon\to 0$ limit, the second term drops out, as long as the source $J_R(\tau)$ is supported away from $\tau = 0$. Thus, we get \begin{equation} \label{MBC1} \mathcal{A}_R(0) =a_R(0)- \frac{1}{2\pi }\int d\tau J_R(\tau)\int_{-\infty}^{\infty} ds\frac{1}{\left(1-e^{-(s+i\tau)}\right)}e^{\frac{is}{2\pi}K_{R}(0)}\mathcal{O}(0)e^{-\frac{is}{2\pi}K_{R}(0)}. \end{equation} Similarly, \begin{equation}\label{MBC2} \mathcal{A}_L(0) = a_L(0)- \frac{1}{2\pi }\int d\tau J_L(\tau)\int_{-\infty}^{\infty} ds\frac{1}{\left(1-e^{-(s+i\tau)}\right)}e^{\frac{is}{2\pi}K_{L}(0)}\mathcal{O}(0)e^{-\frac{is}{2\pi}K_{L}(0)}. \end{equation} Equations \eqref{MBC1} and \eqref{MBC2} are our main formulas for the modular Berry connections evaluated on the TFD state. 
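The elementary $t$-integral used in the last step is easy to check numerically. A minimal sketch (assuming SciPy; the values of $s$ and $\tau$ below are arbitrary, with $\tau$ kept away from $0$ so that no pole of the integrand is crossed) evaluates the integral directly at $\epsilon=0$ and compares it with the first term on the right-hand side:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

s, tau = 0.7, 1.3        # generic boost parameter and source angle in (0, 2*pi)
z = s + 1j * tau

def integrand(t):
    # i / (4 sinh^2((s - 2 pi t + i tau)/2)), i.e. the eps -> 0 limit of the integrand
    return 1j / (4 * np.sinh((z - 2 * np.pi * t) / 2) ** 2)

lhs = (quad(lambda t: integrand(t).real, 0, np.inf)[0]
       + 1j * quad(lambda t: integrand(t).imag, 0, np.inf)[0])
rhs = 1 / (2j * np.pi * (1 - np.exp(-z)))
print(abs(lhs - rhs))    # agrees to quadrature accuracy: the incomplete-Beta term has dropped out
\end{verbatim}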
In the next section, we will use these to derive the quantum extremal shock in the Engelhardt-Wall geometry. As another application of these formulas, it is not hard to show that in holographic conformal field theories, these expressions for the modular Berry connections can be put in a manifestly geometric form in the bulk. Indeed, when $\mathcal{O}$ is taken to be a single-trace operator, we find that \begin{equation}\label{SF1} \Pi_{\text{code}}\mathcal{A}_R(0)\Pi_{\text{code}} = \Pi_{\text{code}}a_R(0)\Pi_{\text{code}} + \int_{\Sigma_R} \boldsymbol{\omega}(\delta_{\lambda}\phi,\boldsymbol{\phi}), \end{equation} \begin{equation}\label{SF2} \Pi_{\text{code}}\mathcal{A}_L(0)\Pi_{\text{code}} = \Pi_{\text{code}}a_L(0)\Pi_{\text{code}} + \int_{\Sigma_L} \boldsymbol{\omega}(\delta_{\lambda}\phi,\boldsymbol{\phi}). \end{equation} Here, $\Pi_{\text{code}}$ is the projector onto states where we can think of the bulk in terms of quantum fields on a fixed background geometry, $\boldsymbol{\phi}$ is the bulk operator valued field dual to $\mathcal{O}$, $\delta_{\lambda}\phi$ is the linearized change in the bulk field configuration under the boundary deformation $J_R$, and $\boldsymbol{\omega}$ is the symplectic current for the bulk fields:\footnote{In the case where $\mathcal{O}$ is the stress tensor, one uses the gravitational symplectic form which appears naturally in the covariant phase space method \cite{Iyer:1994ys}. The region of integration for the gravitational symplectic flux turns out to be the entanglement wedge of the boundary subregion in the deformed geometry. } \begin{equation} \boldsymbol{\omega}(\delta_1\phi, \delta_2\phi) = (\delta_1\phi\, n^{\mu}\partial_{\mu}\delta_2\phi - \delta_2\phi\, n^{\mu}\partial_{\mu}\delta_1\phi). \end{equation} The derivation of equations \eqref{SF1} and \eqref{SF2} more or less follows the same logic as in \cite{Faulkner:2017tkh}, so we will not repeat it here. These equations give a natural generalization of \cite{Belin:2018fxe} to the case of subregions (see also \cite{Kirklin:2019ror} for a different approach). It is intriguing that the above expressions can be written as a sum of two terms, where the first term comes from the ``diagonal'' part of the connection, while the second term is related to the symplectic flux of bulk quantum fields in the relevant entanglement wedge; it would be interesting to understand the first term better. One thing to note is that if the source $J_R$ is tuned in order to create a localized excitation at some point in the bulk, then the geometric term in $\mathcal{A}_R$ is also localized at that point. Thus, the deeper in the bulk the excitation created by the source, the more ``complex'' is the corresponding unitary $U_R$. \section{Quantum extremal shock}\label{sec:gravity} In this section, we wish to study the state of bulk matter in the holographic dual corresponding to the canonical purification $\Psi_{\lambda}^{\star}$. To be concrete, we will work to first order in perturbation theory near the TFD state. \subsection{Double-trace deformation} We wish to turn on an operator $\mathcal{O}$ in the Euclidean path-integral which sources the bulk stress tensor at $O(\lambda)$. The reason is that in order to see the quantum extremal shock at $O(\lambda)$ in the canonically purified state, we need to have a non-trivial shape derivative for the bulk entanglement entropy at $O(\lambda)$ in the original state. 
But to linear order in $\lambda$, we have \begin{eqnarray} \label{EEfromANEC} \frac{1}{\sqrt{h(y^i)}}\frac{d}{d\lambda}\frac{\delta S_{\text{bulk}}}{\delta x^+}\Big|_{\lambda=0,y^i} &=& - 2\pi\int_0^{\infty} dx^+ \frac{d}{d\lambda}\langle T_{++}^{\text{bulk}}(x^+,x^-=0,y^i)\rangle_{\Psi_{\lambda}}\Big|_{\lambda=0},\nonumber\\ &=& 2\pi\int^0_{-\infty} dx^+ \frac{d}{d\lambda}\langle T_{++}^{\text{bulk}}(x^+,x^-=0,y^i)\rangle_{\Psi_{\lambda}}\Big|_{\lambda=0}, \end{eqnarray} with a similar equation for the shape derivative along $x^-$. Here $(x^+,x^-)$ are light-cone coordinates on which Schwarzschild boosts act simply as $(x^+,x^-) \to (x^+e^s,x^-e^{-s})$, $y^i$ are transverse bulk coordinates which parametrize the original extremal surface (i.e., the bifurcation point), $h$ is the determinant of the induced metric on the original extremal surface, and the shape derivative at the point $y^i$ is defined as \begin{equation} \frac{\delta S_{\text{bulk}}}{\delta x^+}\Big|_{\lambda=0,y^i} = \lim_{\epsilon \to 0}\frac{1}{\epsilon}\left[S_{\text{bulk}}[x^+=\epsilon \delta(y^i),x^-=0] -S_{\text{bulk}}[x^+=0,x^-=0]\right], \end{equation} where the arguments of the entropies on the right hand side above are the coordinate locations of the corresponding bulk entanglement cuts. We can derive equation \eqref{EEfromANEC} as follows: consider the bulk relative entropy for the region corresponding to the entanglement wedge $r$ of the boundary subregion $R$: \begin{equation} S_{\text{bulk}}(\rho_{r}(\lambda)||\rho_{r}(0)) = \Delta \langle K_{\text{bulk},r}(0)\rangle - \Delta S_{\text{bulk}}, \end{equation} where the $\Delta$ symbol stands for subtraction with respect to the background TFD state: \begin{equation} \Delta \langle K_{\text{bulk},r}(0)\rangle = \langle K_{\text{bulk},r}(0)\rangle_{\Psi_{\lambda}}-\langle K_{\text{bulk},r}(0)\rangle_{\Psi_0}, \end{equation} \begin{equation} \Delta S_{\text{bulk}} = S_{\text{bulk}}(\Psi_{\lambda})- S_{\text{bulk}}(\Psi_{0}). \end{equation} Since the first derivative of the relative entropy at $\lambda=0$ vanishes, we conclude that \begin{equation} \frac{d}{d\lambda}S_{\text{bulk}}\Big|_{\lambda=0}= \frac{d}{d\lambda}\langle K_{\text{bulk},r}(0)\rangle_{\Psi_{\lambda}}\Big|_{\lambda=0}. \end{equation} Taking a derivative of this equation with respect to the shape of the bulk entanglement cut and using \cite{Faulkner:2016mzt, Casini:2017roe} \begin{equation} \frac{\delta K_{\text{bulk},r}(0)}{\delta x^+}\Big|_{y^i} = -2\pi\sqrt{h(y^i)}\int_0^{\infty} dx^+ T^{\text{bulk}}_{++}(x^+,x^-=0,y^i), \end{equation} we land on the first equality in equation \eqref{EEfromANEC}, while applying the same arguments to the entanglement wedge $\ell$ of $L$ and using \begin{equation} \frac{\delta K_{\text{bulk},\ell}(0)}{\delta x^+}\Big|_{y^i} = 2\pi\sqrt{h(y^i)}\int^0_{-\infty} dx^+ T^{\text{bulk}}_{++}(x^+,x^-=0,y^i), \end{equation} gives the second equality. Importantly, equation \eqref{EEfromANEC} implies that for us to see the shock in the bulk dual to the canonical purification at $O(\lambda)$, we need to turn on a deformation which will source the bulk stress tensor at $O(\lambda)$ in the original state. For this reason, we cannot take $\mathcal{O}$ to be a single-trace operator, as single-trace operators only source the bulk stress tensor at $O(\lambda^2)$. 
Instead, we can imagine turning on a double-trace operator $\mathcal{O} =\;:\phi\phi:$, for some single trace operator $\phi$; although the details of what $\mathcal{O}$ we choose will not be relevant in the discussion below. Now, the quantum extremal surface in the geometry dual to $\Psi_{\lambda}$ will deviate from the classical extremal surface at $O(\lambda G_N)$. Following the Engelhardt-Wall construction reviewed in section \ref{sec:EWconstruction}, the bulk spacetime dual to the canonical purification $\Psi_{\lambda}^{\star}$ consists of the entanglement wedge $\text{EW}(L)$ (in the original geometry dual to $\Psi_{\lambda}$) glued to its CPT image at the QES. In order for the junction conditions to be satisfied, the bulk matter stress tensor must have a singular contribution at the location of the QES. Importantly, even though the QES deviates from the classical extremal surface at $O(\lambda G_N)$, the singular contribution in the bulk stress tensor in the bulk dual to the canonically purified state must appear at $O(\lambda)$. It is this contribution that we are after. \subsection{Bulk one point function} In order to proceed, we wish to compute the bulk stress tensor in the canonically-purified state. We can be general, and compute the one-point function of a more general operator $\Phi$ acting on the $L^{\star}$ factor: \begin{equation} \label{1pt} \langle \Phi \rangle_{\Psi_{\lambda}^{\star}} = \langle \Psi_{\lambda} | \mathcal{R}^{\dagger}_{\lambda}\,\Phi\, \mathcal{R}_{\lambda} |\Psi_{\lambda}\rangle, \end{equation} at first order in $\lambda$. Later, we will take $\Phi$ to be the bulk matter stress tensor $T^{\text{bulk}}_{\mu\nu}(x_B)$, where we will take $x_B$ to lie in the entanglement wedge of $L^{\star}$ in the geometry dual to $\Psi_{\lambda}^{\star}$. In particular, we are interested in the $T^{\text{bulk}}_{\pm\pm}$ components of the stress tensor, and we wish to take the limit where the bulk point approaches the quantum extremal surface. Let us take a moment to discuss what this means. The backreaction from turning on a double-trace operator is of $O(\lambda G_N)$. If we ignore this effect for now, the classical bulk spacetime dual to the canonically purified state is the undeformed, eternal black hole spacetime, where we simply re-label the right subsystem as $L^{\star}$. However, the state of bulk matter fields receives corrections at $O(\lambda)$, and this is what we wish to probe via the bulk operator $\Phi$; in particular, we want to take $\Phi = T^{\text{bulk}}_{\pm\pm}$ and take the limit where this operator approaches the original extremal surface (i.e., the bifurcation point) in the eternal black hole. With this preamble, we now wish to compute the first $\lambda$ derivative of the above one-point function. Taking a $\lambda$-derivative of equation \eqref{1pt}, we get: \begin{equation} \frac{d}{d\lambda}\langle \Phi \rangle_{\lambda,\star} = \langle \Psi_{\lambda}|\frac{d\mathcal{R}_{\lambda}^{\dagger}}{d\lambda}\,\Phi\, \mathcal{R}_{\lambda}|\Psi_{\lambda}\rangle+ \langle \Psi_{\lambda}|\mathcal{R}_{\lambda}^{\dagger}\Phi\, \frac{d\mathcal{R}_{\lambda}}{d\lambda}|\Psi_{\lambda}\rangle + \langle \frac{d\Psi_{\lambda}}{d\lambda}|\widehat{\Phi}|\Psi_{\lambda}\rangle + \langle \Psi_{\lambda}|\widehat{\Phi}|\frac{d\Psi_{\lambda}}{d\lambda}\rangle, \end{equation} where in the last two terms we have defined the operator $\widehat{\Phi} \equiv\mathcal{R}_{\lambda}^{\dagger}\,\Phi\,\mathcal{R}_{\lambda}$. 
Using the flow equation for $\mathcal{R}_{\lambda}$, we can rewrite this as \begin{equation} \label{3terms} \frac{d}{d\lambda}\langle \Phi \rangle_{\lambda,\star}=i\langle \Psi_{\lambda}|\left[\mathcal{A}_R, \widehat{\Phi}\right]|\Psi_{\lambda}\rangle-i\langle \Psi^{\star}_{\lambda}|\left[\mathcal{A}^{\star}_{L^{\star}}, \Phi\right]|\Psi^{\star}_{\lambda}\rangle+\langle \frac{d\Psi_{\lambda}}{d\lambda}|\widehat{\Phi}|\Psi_{\lambda}\rangle + \langle \Psi_{\lambda}|\widehat{\Phi}|\frac{d\Psi_{\lambda}}{d\lambda}\rangle. \end{equation} Note that at $\lambda=0$, $\widehat{\Phi}=\Phi$, and so henceforth we will drop the hats. Further, the last two terms can simply be written as \begin{equation} \left(\langle\frac{d\Psi_{\lambda}}{d\lambda}|\widehat{\Phi}|\Psi_{\lambda}\rangle + \langle \Psi_{\lambda}|\widehat{\Phi}|\frac{d\Psi_{\lambda}}{d\lambda}\rangle\right)_{\lambda=0}=\frac{d}{d\lambda}\langle \Phi\rangle_{\Psi_{\lambda}}\Big|_{\lambda=0}. \end{equation} Let us now focus on the first term involving the commutator with $\mathcal{A}_R$; the same logic will also apply to the second term. We proceed by assuming that $\Phi(x_B)$ is an operator acting strictly on the $\mathcal{H}_{R}$ factor (i.e., $x_B$ is well within the entanglement wedge of $R$). As explained above, we will eventually take $\Phi = T^{\text{bulk}_{\pm\pm}}$ and take the limit where the bulk point approaches the bifurcation point. To be precise, when the operator acts ``at the bifurcation point'', we cannot take it to be supported in $\mathcal{H}_R$ alone. For instance, after a little bit of smearing to make this bulk operator well-defined, we will in general find that it acts on both sides of the bifurcation surface. A simple smearing is to instead consider the operators \begin{equation}\label{smear} \Phi_{\text{smear}}= \lim_{\delta \to 0}\int_{-\delta}^{\delta}dx^{\pm}\,T^{\text{bulk}}_{\pm\pm}. \end{equation} Indeed, later we will encounter the need for such a smearing, but for now we proceed with the above simplifying assumption. If we work at $\lambda=0$, and use equation \eqref{MBC1}: \begin{equation} \mathcal{A}_R(0) =a_R(0)- \frac{1}{2\pi }\int d\tau J_R(\tau)\int_{-\infty}^{\infty} ds\frac{1}{\left(1-e^{-(s+i\tau)}\right)}e^{\frac{is}{2\pi}K_{R}(0)}\mathcal{O}(0)e^{-\frac{is}{2\pi}K_{R}(0)}. \end{equation} then we get \begin{eqnarray} \langle \Psi_{0}|\left[\mathcal{A}_{R}(0), \Phi(x_B)\right]|\Psi_{0}\rangle &=& \mathrm{Tr}_{R}\left(\rho^{(0)}_{R}\left[\mathcal{A}_{R}(0), \Phi(x_B)\right]\right)\\ &=&\frac{1}{2\pi i}\int d\tau J_R(\tau)\int_{-\infty}^{\infty}\frac{ds}{\left(1 - e^{-(s+i\tau)}\right)}\,\mathrm{Tr}_{R}\left(\rho^{(0)}_{R}\left[\mathcal{O}(s),\Phi\right]\right)\nonumber\\ &=&\frac{1}{2\pi i}\int d\tau J_R(\tau)\int_{-\infty-i\epsilon}^{\infty-i\epsilon}\frac{ds}{\left(1 - e^{-(s+i\tau)}\right)}\,\mathrm{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O}(s)\Phi\right)\nonumber\\ &-&\frac{1}{2\pi i}\int d\tau J_R(\tau)\int_{-\infty-i(2\pi-\epsilon)}^{\infty-i(2\pi-\epsilon)}\frac{ds}{\left(1 - e^{-(s+i\tau)}\right)}\,\mathrm{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O}(s)\Phi\right).\nonumber \end{eqnarray} \begin{figure} \centering \includegraphics[height=4cm]{contour.pdf} \caption{The strip $-2\pi \leq \text{Im}(s)\leq 0$ in the complex-$s$ plane. The contour $\Gamma$ is shown in the bold blue. The dashed blue lines indicate the vertical contours at infinity. 
The red lines indicate potential branch cuts which may develop in the correlation function in the infinite dimensional limit, while the black dot indicates the pole coming from $\frac{1}{1-e^{-(s+i\tau)}}$.} \label{fig:contour} \end{figure} In the second line, we have used the fact that $a_R(0)$ commutes with $\rho_R^{(0)}$ to drop that term. In the third line, we have introduced a new regulator $\epsilon \to 0^+$ to separate the two operators infinitesimally in Euclidean time, and further used the KMS condition to bring the two operators in the same order. So, we conclude that \begin{equation}\label{corfunc} \langle \Psi_{0}|\left[\mathcal{A}_{R}, \Phi(x_B)\right]|\Psi_{0}\rangle = \frac{1}{2\pi i}\int d\tau\,J_R(\tau)\int_{\Gamma}\frac{ds}{\left(1 - e^{-(s+i\tau)}\right)}\,\mathrm{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O}(s)\Phi\right), \end{equation} where the contour $\Gamma = (\mathbb{R}-i\epsilon) \cup (\mathbb{R}-i(2\pi - \epsilon))$ is the union of the two horizontal contours at $\text{Im}(s)=-\epsilon$ and $\text{Im}(s) = - (2\pi -\epsilon)$. Using Cauchy's theorem, we can then rewrite this integral as the sum over three contributions: the pole at $s=-i\tau$, and the two ``vertical'' contours at $\mathbb{Re}(s) = \pm \Lambda$ (with $\Lambda \to \infty$): \begin{equation} \langle \Psi_{0}|\left[\mathcal{A}_{R}(0), \Phi(x_B)\right]|\Psi_{0}\rangle = -\int d\tau\,J_R(\tau)\mathrm{Tr}_{R}\left(\rho^{(0)}_{R}\mathcal{O}(\tau)\Phi\right)+\mathcal{I}^R_++\mathcal{I}^R_-, \end{equation} where \begin{equation}\label{VC1} \mathcal{I}^R_{\pm}=\pm \frac{1}{2\pi}\int d\tau\,J_R(\tau)\int_{\epsilon}^{2\pi-\epsilon}\frac{d\theta}{\left(1 - e^{-(\pm\Lambda+i\tau)}e^{i\theta}\right)}\,\mathrm{Tr}_{R}\left(\rho^{(0)}_{R}e^{i(\pm\Lambda-i\theta)K_{R}(0)}\mathcal{O} e^{-i(\pm\Lambda-i\theta)K_{R}(0)}\Phi\right). \end{equation} The vertical contour contributions are seemingly suppressed exponentially in $\Lambda$ from the large relative boost between the two operators, and so it is tempting to discard them. This is correct in most cases, but not all; we will return to this point below, where we will find that the shocks we are looking for actually come from these terms. For now, let us focus on the contribution of the pole: \begin{eqnarray} \langle \Psi_{0}|\left[\mathcal{A}_{R}(0), \Phi(x_B)\right]|\Psi_{0}\rangle\Big|_{\text{pole}} &=& -\int d\tau\,J_R(\tau)\mathrm{Tr}_{R}\left(\rho^{(0)}_{R}\mathcal{O}(\tau)\Phi\right)\nonumber\\ &=&-\int d\tau\,J_R(\tau)\left\langle \mathcal{O}(\tau)\Phi\right\rangle_{\Psi_0}\nonumber\\ &=&-\frac{d}{d\lambda}\langle\Phi\rangle_{\Psi_\lambda}\Big|_{\lambda=0}. \end{eqnarray} This term simply cancels the last term in equation \eqref{3terms}. This is expected: this term measures how the entanglement wedge of $R$ would change in the geometry dual to $\Psi_{\lambda}$, but in the canonical purification, the entanglement wedge of $R$ is replaced by a CPT reflected image of the entanglement wedge of $L$. Thus, the above cancellation ensures that all information about the entanglement wedge of $R$ is removed. We must now show that, in fact, the entanglement wedge of $R$ is replaced with a CPT image of the entanglement wedge of $L$. 
This comes from the pole contribution in the second term of \eqref{3terms}: \begin{eqnarray} \langle \Psi^{\star}_{0}|\left[\mathcal{A}^{\star}_{L^{\star}}(0), \Phi(x_B)\right]|\Psi^{\star}_{0}\rangle\Big|_{\text{pole}} &=& \int d\tau\,J_L(\tau)\mathrm{Tr}_{L^{\star}}\left(\rho^{(0)}_{L^{\star}}\Theta\,\mathcal{O}(\tau)\,\Theta^{-1}\,\Phi\right)\nonumber\\ &=&\int d\tau\,J_L(\tau)\mathrm{Tr}_{L}\left(\rho^{(0)}_{L}\mathcal{O}(\tau)\,\Theta^{-1}\,\Phi\,\Theta\right)\nonumber\\ &=&\frac{d}{d\lambda}\langle\Phi^{\star}\rangle_{\lambda=0}, \end{eqnarray} where $\Phi^{\star}= \Theta^{-1}\Phi \Theta$ is the CPT conjugate of the operator, but now inserted in the entanglement wedge of $L$. This is in precise agreement with our expectation for what the canonical purification should do. \subsection{Vertical contours at infinity} So far, we have reproduced the standard, expected properties of the bulk dual to the canonical purification. Now we turn to the non-trivial part, which is to reproduce the quantum extremal shock in the bulk. For this, we need to choose a specific bulk operator, i.e., we need to take $\Phi = T^{\text{bulk}}_{\pm\pm}$. Moreover, we need to take the limit where this bulk operator approaches the extremal surface in the original background geometry. To be concrete, let us take $\Phi=T^{\text{bulk}}_{++}$ and consider the vertical contour integral $\mathcal{I}_{\pm}^R$: \begin{equation} \mathcal{I}^R_{\pm} =\pm \frac{1}{2\pi}\int d\tau\,J_R(\tau)\int_{\epsilon}^{2\pi-\epsilon}\frac{d\theta}{\left(1 - e^{-(\pm\Lambda+i\tau)}e^{i\theta}\right)}\,\mathrm{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O} e^{i(\mp\Lambda+i\theta)K_{R}(0)}T^{\text{bulk}}_{++}e^{-i(\mp\Lambda+i\theta)K_{R}(0)}\right) \end{equation} This is the same as \eqref{VC1}, but we have now put the boost on the bulk operator. As $\Lambda \to \infty$, the relative boost between the two operators goes off to infinity, and so we expect the correlator to decay exponentially. Thus, in the $\Lambda \to \infty$ limit, this contour integral vanishes. The exception to this occurs when the bulk operator approaches the extremal surface. To see this, let us use light-cone coordinates $(x^+,x^-)$ in the plane transverse to the black hole extremal surface. At $\lambda=0$, boundary modular flow acts locally on bulk operators as a Schwarzschild boost: \begin{equation} e^{is K_{R}(0)}T^{\text{bulk}}_{\mu\nu}(x^+,x^-,y^i)e^{-isK_{R}(0)}={J^{\alpha}}_{\mu}(s){J^{\beta}}_{\nu}(s) T^{\text{bulk}}_{\alpha\beta}(x^+e^{s} ,x^-e^{-s},y^i), \end{equation} where the ${J^{\alpha}}_{\beta}$ represents the action of the boost on the indices of the stress tensor. In more detail, \begin{equation} e^{is K_{R}(0)}T^{\text{bulk}}_{\pm\pm}(x^+,x^-,y^i)e^{-isK_{R}(0)}=e^{\pm 2s} T^{\text{bulk}}_{\pm \pm}(x^+e^{s} ,x^-e^{-s},y^i), \end{equation} \begin{equation} e^{is K_{R}(0)}T^{\text{bulk}}_{\pm i}(x^+,x^-,y^i)e^{-isK_{R}(0)}=e^{\pm s} T^{\text{bulk}}_{\pm i}(x^+e^{s} ,x^-e^{-s},y^i), \end{equation} \begin{equation} e^{is K_{R}(0)}T^{\text{bulk}}_{ij}(x^+,x^-,y^i)e^{-isK_{R}(0)}= T^{\text{bulk}}_{ij}(x^+e^{s} ,x^-e^{-s},y^i). 
\end{equation} Consider first $\mathcal{I}_-^R$: \begin{eqnarray} \label{ir-} \mathcal{I}_-^R &=& \frac{1}{2\pi}\int d\tau\,J_R(\tau)\int_{\epsilon}^{2\pi-\epsilon}\frac{d\theta e^{2\Lambda}}{\left(1 - e^{(\Lambda - i\tau)}e^{i\theta}\right)}\,\mathrm{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O} e^{-\theta K_{R}(0)}T^{\text{bulk}}_{++}(x^+e^{\Lambda}, x^- e^{-\Lambda})e^{\theta K_{R}(0)}\right),\nonumber\\ &\simeq & -\frac{1}{2\pi}\int d\tau\,J_R(\tau)\int_{\epsilon}^{2\pi-\epsilon} d\theta e^{\Lambda +i(\tau-\theta)}\,\mathrm{Tr}_{R}\left(\rho^{(0)}_{R}\mathcal{O} e^{-\theta K_{R}(0)}T^{\text{bulk}}_{++}(x^+e^{\Lambda}, x^- e^{-\Lambda})e^{\theta K_{R}(0)}\right). \end{eqnarray} If $x^+ \neq 0$, then the above correlation function will decay exponentially in $\Lambda$ as previously mentioned, and is thus zero in the $\Lambda \to \infty$ limit because the bulk operator is getting boosted off to infinity. However, when $x^+=0$, the operator does not get boosted away, and we instead get a divergence from the $e^{\Lambda}$ factor in equation \eqref{ir-}.\footnote{A similar effect is responsible for the Ceyhan-Faulkner shock \cite{Ceyhan:2018zfg} in Connes-cocyle flowed states in the perturbative setup \cite{Bal}.} We can see this quite explicitly in the BTZ black hole, for instance. In Kruskal coordinates, the bulk metric is given by \begin{equation} ds^2 = -\frac{4dx^+dx^-}{(1+x^+x^-)^2}+ \frac{4\pi^2}{\beta^2}\frac{(1-x^+x^-)^2}{(1+x^+x^-)^2} d\phi^2, \end{equation} where $\phi$ is a periodic coordinate along the bifurcation surface. The bulk to boundary propagator is given by \cite{Keski-Vakkuri:1998gmz, Maldacena:2001kr} \begin{equation} \label{BBP} K(x^+,x^-,\phi)=\sum_{n}\frac{(1+x^+x^-)^{2\Delta}}{\left\{(1-x^+x^-)[\cosh(\frac{2\pi}{\beta}(\phi-\phi_0+2\pi n))-1]+(x^+-e^{-i\theta_0})(x^--e^{i\theta_0})\right\}^{2\Delta}}, \end{equation} where $(\phi_0, \tau_0)$ label the coordinates on the boundary torus with $\tau_0$ being the Euclidean time direction and $\phi_0$ being the spatial direction. The bulk stress tensor in the presence of the boundary double-trace operator is given by \begin{equation} \langle T_{++}^{\text{bulk}} \mathcal{O}\rangle \sim \sum_n\partial_+K_n \partial_+K_n, \end{equation} where $K_n$ is the $n$th term in the summation in equation \eqref{BBP}. For fixed $n$ and $x^+\neq 0$, the bulk stress tensor goes as $e^{-(4\Delta+2)s}$ in the large $s$ limit. However, when $x^+=0$, there is no suppression as the operator does not get boosted away and $\mathcal{I}^R_-$ diverges, because of the factor of $e^{\Lambda}$ out front in equation \eqref{ir-}; this suggests a delta-function contribution at $x^+=0$. 
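The mechanism here is the elementary scaling fact that $e^{\Lambda}F(x^+e^{\Lambda})$ tends, as a distribution, to $\delta(x^+)\int_0^{\infty}dx\,F(x)$ for any sufficiently fast-decaying profile $F$: the value at any fixed $x^+>0$ dies off, while the integrated weight near $x^+=0$ saturates. A toy numerical illustration (assuming SciPy; the power-law profile below merely stands in for the boosted correlator and is purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

F = lambda x: (1.0 + x) ** -4   # toy profile; its integral over (0, infinity) is 1/3

for Lam in (2.0, 4.0, 6.0):
    boosted = lambda x, L=Lam: np.exp(L) * F(x * np.exp(L))
    at_fixed_x = boosted(0.3)                        # -> 0 as Lambda grows
    weight = quad(boosted, 0.0, 0.1, limit=200)[0]   # -> 1/3, independent of the window size
    print(f"Lambda={Lam}: value at x+=0.3 is {at_fixed_x:.2e}, weight in [0, 0.1] is {weight:.4f}")
\end{verbatim}
With the smearing of equation \eqref{smear}, this is precisely how a finite delta-function weight survives the $\Lambda\to\infty$ limit in the calculation below.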
To check this, we really need to smear the operator in the $x^+$ direction in an infinitesimally small window of $x^+\in [0,\delta]:$\footnote{We can think of this as the part of $\Phi_{\text{smear}}$ (see equation \eqref{smear}) which contributes to $[\mathcal{A}_R,\Phi]$.} \begin{eqnarray} \int_0^{\delta} dx^+\,\mathcal{I}^R_- &=&-\frac{1}{2\pi}\int d\tau\,J_R(\tau)\int_{\epsilon}^{2\pi-\epsilon}d \theta\int_0^{\delta e^\Lambda}d \tilde{x}^+e^{i(\tau-\theta)}\,\mathrm{Tr}_{R}\left(\rho^{(0)}_{R}\mathcal{O} e^{-\theta K_{R}(0)}T^{\text{bulk}}_{++}(\tilde{x}^+, x^-e^{-\Lambda})e^{\theta K_{R}(0)}\right)\nonumber\\ &\simeq &-\frac{1}{2\pi}\int d\tau\,J_R(\tau)\int_{\epsilon}^{2\pi-\epsilon}d \theta\int_0^{\infty}d \tilde{x}^+e^{i(\tau-\theta)}\,\mathrm{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O} e^{-\theta K_{R}(0)}T^{\text{bulk}}_{++}(\tilde{x}^+, 0)e^{\theta K_{R}(0)}\right), \end{eqnarray} where in the first line, we have defined a new coordinate $\tilde{x}^+ = x^+e^{\Lambda}$, and in the second line we have sent $\Lambda \to \infty$. By deforming the $\tilde{x}^+$ contour in the complex plane, we can remove all the $\theta$ dependence from the correlator, and replace it with $\tau$. Performing the $\theta$ integral then gives \begin{equation} \int_0^{\delta} dx^+\,\mathcal{I}_-^R = -\int d\tau\,J_R(\tau)\int_0^{\infty}d x^+\,\mathrm{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O}(\tau)T^{\text{bulk}}_{++}(x^+, 0,y^i)\right) = \frac{1}{2\pi}\frac{d}{d \lambda}\frac{\delta S_{\text{bulk}}}{ \delta x^+}\Big|_{\lambda=0,y^i}, \end{equation} where in the last equality we used equation \eqref{EEfromANEC}. Thus, the vertical contour precisely gives us the delta function contribution we had expected. Note that $\mathcal{I}_+^R$ does not give a delta function contribution because the enhancement factor of $e^{2\Lambda}$ is now replaced with a suppression factor of $e^{-2\Lambda}$. Similarly, we can evaluate the vertical contour contributions coming from the term involving $\mathcal{A}^{\star}_{L^{\star}}$. In this case, the contour at $s=+\Lambda$ contributes: \begin{equation} \mathcal{I}^L_{+} = \frac{1}{2\pi}\int d\tau\,J_L(\tau)\int_{\epsilon}^{2\pi-\epsilon}\frac{d\theta}{\left(1 - e^{-(\Lambda+i\tau)}e^{i\theta}\right)}\,\mathrm{Tr}_{L}\left(\rho_{L}^{(0)}\mathcal{O} e^{i(-\Lambda+i\theta)K_{L}(0)}(\Theta^{-1}T^{\text{bulk}}_{++}\Theta)e^{-i(-\Lambda+i\theta)K_{L}(0)}\right), \end{equation} where \begin{equation} \Theta^{-1}T^{\text{bulk}}_{++}(x^+,x^-,y^i)\Theta = T^{\text{bulk}}_{++}(-x^+,-x^-,y^i). \end{equation} The left-sided boost acts on this operator as: \begin{equation} e^{-i\Lambda K_{L}(0)}T^{\text{bulk}}_{++}(-x^+,-x^-)e^{i\Lambda K_{L}(0)} = e^{2\Lambda}T^{\text{bulk}}_{++}(-x^+e^{\Lambda},-x^-e^{-\Lambda}). \end{equation} In the large $\Lambda$ limit, we can expand: \begin{equation} \frac{1}{\left(1 - e^{-(\Lambda+i\tau)}e^{i\theta}\right) }= 1 + e^{-(\Lambda+i\tau)} e^{i\theta}+\cdots. \end{equation} The first term leads to a $e^{2\Lambda}$ divergence, but the $\theta$ integration kills this term, as can be seen by smearing in the infinitesimal interval $x^+ \in (-\delta,0)$. The first non-trivial contribution comes from the second term, which gives (following the same steps as before): \begin{equation} \int_{-\delta}^0dx^+ \mathcal{I}^L_+ = -\int d\tau\int J_L(\tau) \int_{-\infty}^0 dx^+\mathrm{Tr}_{L}\left(\rho_{L}^{(0)}O(\tau)T_{++}^{\text{bulk}}(x^+,0,y^i)\right)= -\frac{1}{2\pi}\frac{d}{d\lambda}\frac{\delta S_{\text{bulk}}}{\delta x^+}\Big|_{\lambda=0,y^i}. 
\end{equation} The extra minus sign above cancels with the minus sign in front of the $\mathcal{A}_{L^\star}$ term, and thus we get the same vertical contribution from here as we had from the $\mathcal{A}_R$ term, resulting in an overall factor of 2. Thus, we learn that the bulk stress tensor has the following shock contribution in the canonically purified state: \begin{equation} 2\pi\frac{d}{d\lambda}\langle T_{++}^{\text{bulk}}(x^+,x^-,y^i)\rangle_{\Psi^{\star}_{\lambda}}\Big|_{\lambda=0} = 2\delta(x^+) \frac{d}{d\lambda} \frac{\delta S_{\text{bulk}}}{\delta x^+}\Big|_{\lambda=0,y^i}+\cdots, \end{equation} where the $\cdots$ indicate the other non-singular parts. This is precisely the shock required to support the Engelhardt Wall geometry.\footnote{Our calculation is valid in the limit $x^+\to 0$ with $x^-$ fixed. However, we see that the dependence on $x^-$ is trivial in the end. This is a simple consequence of the conservation of the shock stress tensor, $\partial_-T^{\text{shock}}_{++}=0$.} Thus, the boundary entanglement structure in the canonically purified state gives rise to a state of the matter fields in the bulk which precisely supports the Engelhardt-Wall geometry, in a way consistent with the bulk Einstein's equations. \section{Discussion}\label{sec:discussion} To summarize, we have studied the canonical purification of Euclidean path integral states to first order in sources. In holographic conformal field theories, we have demonstrated that the state of the bulk matter in the bulk dual to the canonically purified state is precisely such that it gives rise to a shock in the bulk stress tensor which is required to support the Engelhardt-Wall geometry. We can view our result in two different ways. Firstly, let us assume that the bulk geometry dual to the canonically purified boundary CFT state must satisfy the semi-classical Einstein's equations, order by order in the state perturbation parameter $\lambda$. In this case, the bulk must satisfy the junction conditions, equation \eqref{IJC}. Together with our result for the bulk shock, we conclude that the co-dimension two surface across which the gluing happens must satisfy \begin{equation}\label{QES2} \frac{1}{4G_N}\theta_\pm + \frac{\delta S_{\text{bulk}}}{\delta x^\pm}=0, \end{equation} at $O(\lambda)$, i.e., at first order in the state deformation. This is indeed the quantum extremal surface formula. On the other hand, we could assume that the gluing surface in the bulk must satisfy the quantum extremal surface formula \eqref{QES2}, without assuming that the bulk geometry satisfies the gravitational junction conditions. In this case, combining our result for the bulk stress tensor shock together with the QES formula, we would deduce the co-dimension-two junction conditions in general relativity, equation \eqref{IJC}, at first order in perturbation theory. From this point of view, the bulk gravitational equations (in this case, the junction conditions) are a consequence of the boundary entanglement structure satisfying the quantum extremal surface formula. This is in the same spirit as the results in \cite{Faulkner:2013ana, Faulkner:2017tkh, Lewkowycz:2018sgn}, but generalized now to a context where quantum corrections in the bulk are important. The quantum extremal surface formula is deeply tied-in with the structure of the bulk-to-boundary map being a quantum error correcting code, and so one might hope that this viewpoint sheds some light on the emergence of gravity from quantum error correction. 
It would be nice to generalize our results beyond first order in perturbation theory. One approach to do this could be to work to leading order in perturbation theory around a more general background state/geometry. We expect that with some mild assumptions on the nature of modular flow, such as approximate locality in a neighbourhood of the entanglement cut, we should be able to extend our result to this more general scenario. Secondly, the existence of the shock in the bulk stress tensor is deeply tied to the emergence of bulk spacetime and a corresponding quantum field theory subregion algebra in the bulk. Indeed, the calculation we presented is consistent with the expectation that the bulk state dual to the boundary canonical purification is the bulk canonical purification. From this point of view, the bulk canonical purification destroys the delicate entanglement structure at the bifurcation surface, resulting in a ``firewall''. This is in line with the results in \cite{Ceyhan:2018zfg}, where it was shown that one-sided purifications in quantum field theory can result in such shocks. In more formal terms, this is associated with the emergence of an effective type III von Neumann algebra in the bulk from the type I algebra of the boundary CFT in the large N limit \cite{Leutheusser:2021frk, Witten:2021unn, Chandrasekaran:2022eqq}. It has been recently argued in \cite{Witten:2021unn, Chandrasekaran:2022eqq} that including $1/N$ corrections, and in particular, incorporating one quantum gravitational mode (corresponding to relative time fluctuations between the two boundaries, or equivalently, one-sided mass fluctuations) changes the nature of the bulk algebra from type III to type II$_{\infty}$, thus explaining the ``renormalization'' of the UV divergence in the generalized entropy in gravity. It would be nice to understand, in a similar vein, what effect these $\frac{1}{N}$ corrections can have on the shock that we encountered, and what this means for the bulk spacetime. To this end, it would be satisfying to derive the shock from the more formal machinery of Tomita-Takesaki theory (see \cite{Witten:2018zxz} for a review). The techniques in \cite{Ceyhan:2018zfg} may be of direct relevance. Finally, it would be nice to develop more tools to study the reflection operator introduced in this paper. This would have direct applications in several useful directions in AdS/CFT, such as bulk reconstruction, the complexity of the bulk-to-boundary map, etc. \section*{Acknowledgements} We thank Abhijit Gadde, Arjun Kar, Gautam Mandal, Shiraz Minwalla, Pratik Rath, Arvin Shahbazi-Moghaddam, Joan Simon, Jonathan Sorce, Sandip Trivedi and Mark Van Raamsdonk for helpful discussions and comments on the draft. We acknowledge support from the Department of Atomic Energy, Government of India, under project identification number RTI 4002.
{ "arxiv_id": "2302.14254", "language": "en", "timestamp": "2023-03-01T02:06:57", "url": "https://arxiv.org/abs/2302.14254", "yymm": "2302" }
\section{Introduction} Predictions of beauty and charm hadron lifetimes are obtained within the heavy quark expansion (HQE) model~\cite{Neubert:1997gu,Uraltsev:2000qw,Lenz:2013aua,Lenz:2014jha,Kirk:2017juj,Cheng:2018rkz}. Charm lifetime predictions are particularly challenging due to significant higher-order corrections and spectator-quark effects. Charm lifetime measurements therefore allow for validation and refinement of the HQE, increasing the reliability and precision of Standard Model predictions in flavor dynamics. The best measurements of charm meson lifetimes date back to FOCUS~\cite{FOCUS}, while LHCb recently reported precise measurements of charm baryon lifetimes relative to the $D^+$ lifetime~\cite{Ll_prd,LHCb:omgc,LHCb:omgc2}. We report absolute lifetime measurements of charm hadrons using the data collected by the Belle II detector~\cite{belle2}, which is built around the interaction region (IR) of the SuperKEKB~\cite{skekb} asymmetric-energy $e^+e^-$ collider. SuperKEKB adopts a nano-beam scheme that squeezes the IR to achieve large instantaneous luminosity. The Belle II detector consists of a tracking system, a particle identification system, and an electromagnetic calorimeter kept inside a 1.5 T superconducting magnet. The outer layer consists of a dedicated muon and $K_L^0$ detector. The details of the Belle II detector can be found in Ref.~\cite{belle2}. Excellent vertex resolution, precise alignment of the vertex detector, and accurate calibration of particle momenta in Belle II are crucial for the lifetime measurements. \section{Lifetime extraction} The proper decay times of charm hadrons are calculated as $t=m(\vec{L}\cdot\hat{p})/p$, where $m$ is the known mass of the hadron, $\vec{L}$ is the flight length between the production and decay vertices, and $p$ is the momentum of the hadron. Lifetimes are extracted using unbinned maximum-likelihood fits to $t$ and its uncertainty, $\sigma_t$, for the candidates populating the signal regions of the data. The signal probability-density function (PDF) is the convolution of an exponential function in $t$ with a resolution function that depends on $\sigma_t$, multiplied by the PDF of $\sigma_t$. The time constant of the exponential function returns the lifetime. The $\sigma_t$ PDF is a histogram template derived directly from the signal region of the data. In all cases but $D^0$, the template is obtained from the candidates in the signal region after having subtracted the distribution of the sideband data. Simulation demonstrates that for $D^+$, $\Lambda_c^+$, and $\Omega_c^0$ a single Gaussian resolution function is sufficient, whereas for $D^0$ a double Gaussian with a common mean is required. \section{$D^0$ and $D^+$ lifetimes} We measured the $D^0$ and $D^+$ lifetimes with $\rm72~fb^{-1}$ of Belle II data, using samples of reconstructed $D^0\to K^-\pi^+$ and $D^+\to K^-\pi^+\pi^+$ decays, respectively. A total of $171\times10^3$ signal candidates are reconstructed for $D^{*+}\to D^0(\to K^-\pi^+)\pi^+$ decays in the signal region: $1.851<m(K^-\pi^+)<1.878~{\rm GeV}/c^2$. In the $D^0$ case, the per-mille-level fraction of background candidates in the signal region is neglected, and a systematic uncertainty is assigned for this. A total of $59\times10^3$ signal candidates are reconstructed for $D^{*+}\to D^+(\to K^-\pi^+\pi^+)\pi^0$ decays in the signal region: $1.855<m(K^-\pi^+\pi^+)<1.883~{\rm GeV}/c^2$. 
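As an aside, the decay-time model described in the previous section can be illustrated with a small toy study (a sketch only, assuming NumPy and SciPy; the sample size, lifetime, and uniform $\sigma_t$ spectrum are invented for illustration and do not correspond to the Belle II analysis): the signal PDF is an exponential convolved with a per-candidate Gaussian resolution, and the lifetime is the time constant returned by the maximum-likelihood fit.
\begin{verbatim}
import numpy as np
from scipy.special import erfc
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
tau_true, n = 0.4105, 20000                   # toy lifetime in ps and sample size

sigma = rng.uniform(0.05, 0.15, n)            # toy per-candidate decay-time uncertainties (ps)
t = rng.exponential(tau_true, n) + rng.normal(0.0, sigma)   # smeared decay times

def nll(tau):
    # exponential in t convolved with a Gaussian of width sigma (per candidate)
    pdf = (0.5 / tau * np.exp(0.5 * (sigma / tau) ** 2 - t / tau)
           * erfc((sigma / tau - t / sigma) / np.sqrt(2.0)))
    return -np.sum(np.log(pdf))

fit = minimize_scalar(nll, bounds=(0.2, 0.8), method="bounded")
print(f"fitted lifetime: {fit.x:.4f} ps (true value {tau_true} ps)")
\end{verbatim}
In the real measurement the resolution function is itself calibrated (a single or double Gaussian depending on the channel), the $\sigma_t$ distribution enters as a histogram template, and background components are added, but the toy captures the basic structure of the likelihood.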
For the $D^+$ case, a sizable background contamination in the signal region is accounted for using the data sideband: $1.758<m(K^-\pi^+\pi^+)<1.814~{\rm GeV}/c^2, 1.936<m(K^-\pi^+\pi^+)<1.992~{\rm GeV}/c^2$. The background PDF consists of a zero-lifetime component and two exponential components, all convolved with the resolution function. The decay-time distributions of the data, with fit projections overlaid, are shown in Fig.~\ref{fig:lifetime_D}. The $D^0$ and $D^+$ lifetimes are measured to be $410.5\pm 1.1\pm0.8$~fs and $1030.4\pm4.7\pm3.1$~fs, respectively~\cite{dl_prl}. The errors are statistical and systematic (all relevant effects are studied as summarized in \cref{tab:D}), respectively. The results are consistent with their respective world average values~\cite{pdg}. \begin{figure}[t!] \centering \includegraphics[width=0.5\linewidth]{fig1.pdf}\\ \caption{Decay-time distributions of (top) $D^0\to K^-\pi^+$ and (bottom) $D^+\to K^-\pi^+\pi^+$ candidates in their respective signal regions with fit projections overlaid. \label{fig:lifetime_D}} \end{figure} \begin{table}[t] \centering \caption{Systematic uncertainties for $D^0$ and $D^+$ lifetimes.\label{tab:D}} \begin{tabular}{lcc} \hline Source & $\tau(D^0\to K^-\pi^+)$ [fs] & $\tau(D^+\to K^-\pi^+\pi^+)$ [fs]\\ \hline Resolution model & 0.16 & 0.39 \\ Backgrounds & 0.24 & 2.52 \\ Detector alignment & 0.72 & 1.70 \\ Momentum scale & 0.19 & 0.48 \\ \hline Total & 0.80 & 3.10 \\ \hline \end{tabular} \end{table} \section{$\Lambda_c^+$ lifetime} The most precise measurement of the $\Lambda_c^+$ lifetime is reported by the LHCb experiment~\cite{Ll_prd}. We report a preliminary result on the absolute measurement of the $\Lambda_c^+$ lifetime in $\Lambda_c^+\to pK^-\pi^+$ decays reconstructed using $207~\rm fb^{-1}$ of the Belle II data. We reconstruct $116\times10^3$ candidates for the decay $\Lambda_c^+\to pK^-\pi^+$ in the signal region: $2.283<m(pK^-\pi^+)<2.290~{\rm GeV}/c^2$, with a background contamination of 7.5\%. The $\Lambda_c^+$ lifetime is extracted in the same way as the $D^+$ lifetime. Background events in the signal region are constrained using data sideband ($2.249<m(pK^-\pi^+)<2.264~{\rm GeV}/c^2$, $2.309<m(pK^-\pi^+) <2.324~{\rm GeV}/c^2$). Decays of $\Xi_c^0\to\pi^-\Lambda_c^+$ and $\Xi_c^+\to\pi^0\Lambda_c^+$ may bias the measurement of the $\Lambda_c^+$ lifetime, since the $\Xi_c^0$ and $\Xi_c^+$ have non-zero lifetimes and may shift the production vertex of the $\Lambda_c^+$ away from the IR. A veto is applied to suppress such candidates, and a systematic uncertainty is assigned for the remaining contamination (details can be found in Ref.~\cite{Ll_prl}). We measure the $\Lambda_c^+$ lifetime to be $\rm203.20\pm0.89\pm0.77~fs$, where the uncertainties are statistical and systematic (summarized in the \cref{tab:Lbc}), respectively~\cite{Ll_prl}. Our result is consistent with the current world average~\cite{pdg}. \begin{figure}[t!] \centering \includegraphics[width=0.5\linewidth]{fig2.pdf}\\ \caption{Decay-time distributions of $\Lambda_c^+\to pK^-\pi^+$ candidates in their (top) signal and (bottom) sideband regions with fit projections overlaid. 
\label{fig:lifetime_Lbc}} \end{figure} \begin{table}[t] \centering \caption{Systematic uncertainties for the $\Lambda_c^+$ lifetime.\label{tab:Lbc}} \begin{tabular}{lc} \hline Source & Uncertainty (fs) \\ \hline $\Xi_c$ contamination & 0.34 \\ Resolution model & 0.46 \\ Non-$\Xi_c$ background model & 0.20 \\ Detector alignment & 0.46 \\ Momentum scale & 0.09 \\ \hline Total & 0.77 \\ \hline \end{tabular} \end{table} \section{$\Omega_c^0$ lifetime} The $\Omega_c^0$ was believed to be the shortest-lived singly charmed baryon that decays weakly. In 2018, LHCb measured a large value of the $\Omega_c^0$ lifetime~\cite{LHCb:omgc}, and this observation inverted the lifetime hierarchy of singly charmed baryons. LHCb confirmed their result in 2022 using a different data sample~\cite{LHCb:omgc2}. We performed the first independent measurement of the $\Omega_c^0$ lifetime using $\rm207~fb^{-1}$ of data collected at Belle II. We reconstructed 90 signal candidates in the signal region ($2.68<m(\Omega^-\pi^+)<2.71~{\rm GeV}/c^2$) for the decay $\Omega_c^0\to\Omega^-\pi^+$, where $\Omega^-\to\Lambda^0(\to p\pi^-) K^-$. It is a complex decay chain with two extra decay vertices in addition to the $\Omega_c^0$ decay vertex. The lifetime is extracted by fitting the signal and sideband regions simultaneously. The signal region has a background contamination of 33\%, which is constrained using events in the sidebands ($2.55<m(\Omega^-\pi^+)<2.65~{\rm GeV}/c^2$, $2.75<m(\Omega^-\pi^+)<2.85~{\rm GeV}/c^2$). The $\Omega_c^0$ lifetime is measured to be $\rm243\pm48\pm11~fs$, where the uncertainties are statistical and systematic (summarized in~\cref{tab:syst_omgc}), respectively~\cite{Ol_prl}. The result is consistent with the LHCb measurements and inconsistent with previous measurements at the level of 3.4 standard deviations. \begin{figure}[t!] \centering \includegraphics[width=0.5\linewidth]{fig3.pdf}\\ \caption{ Decay-time distributions of $\Omega_c^0\to\Omega^-\pi^+$ candidates in their (top) signal and (bottom) sideband regions with fit projections overlaid. \label{fig:lifetime_Omgc}} \end{figure} \begin{table}[t] \centering \caption{Systematic uncertainties for the $\Omega_c^0$ lifetime.\label{tab:syst_omgc}} \begin{tabular}{lc} \hline Source & Uncertainty (fs) \\ \hline Fit bias & 3.4 \\ Resolution model & 6.2 \\ Background model & 8.3 \\ Detector alignment & 1.6 \\ Momentum scale & 0.2 \\ Input $\Omega_c^0$ mass & 0.2 \\ \hline Total & 11.0 \\ \hline \end{tabular} \end{table} \section{Conclusions} In conclusion, the $D^0$, $D^+$, $\Lambda_c^+$, and $\Omega_c^0$ lifetimes are measured using the data collected by the Belle II experiment. The results on the $D^0$, $D^+$, and $\Lambda_c^+$ lifetimes are the most precise to date and are consistent with previous measurements. Our result on the $\Omega_c^0$ lifetime is consistent with the LHCb results~\cite{LHCb:omgc, LHCb:omgc2}, and inconsistent at 3.4 standard deviations with the pre-LHCb world average~\cite{pdg2018}. The Belle II result, therefore, confirms that the $\Omega_c^0$ is not the shortest-lived weakly decaying charmed baryon.
{ "arxiv_id": "2302.14338", "language": "en", "timestamp": "2023-03-02T02:08:30", "url": "https://arxiv.org/abs/2302.14338", "yymm": "2302" }
\section{Introduction} \label{sec:intro} Scene text detection is a long-standing research topic aiming to localize the bounding box or polygon of each text instance in natural images, as it has wide practical application scenarios, such as office automation, instant translation, automatic driving, and online education. With the rapid development of fully-supervised deep learning technologies, scene text detection has achieved remarkable progress. Although supervised approaches have made remarkable progress in the field of text detection, they require extensive and elaborate annotations, \emph{e.g.}, character-level, word-level, and text-line level bounding boxes, especially polygonal boxes for arbitrarily-shaped scene text. Therefore, it is very important to investigate text detection methods that work with a small amount of labeled data, \emph{i.e.}, few-shot training. \begin{figure}[tbp] \centering \includegraphics[width=0.39\textwidth]{figs/compare_pipeline-horizon_label.pdf} \caption{Comparisons of different paradigms of using text knowledge for scene text detection. } \vspace{-1em} \label{fig:compare_intro} \end{figure} Recently, by leveraging pretrained vision and language knowledge, the large-scale Contrastive Language-Image Pretraining (CLIP) model~\cite{Radford2021LearningTV} has demonstrated its significance in various downstream tasks, \emph{e.g.}, image classification~\cite{Zhou2022ConditionalPL}, object detection~\cite{Gu2022OpenvocabularyOD}, and semantic segmentation~\cite{Rao2022DenseCLIPLD}. Compared to general object detection, scene text in natural images usually presents both visual and rich character information, which has a natural connection with the CLIP model. Therefore, how to make full use of cross-modal information from visual, semantic, and text knowledge to improve the performance of text detection models has received increasing attention in recent studies. For example, Song {\em et al.}~\cite{Song2022VisionLanguagePF}, inspired by CLIP, adopt fine-grained cross-modality interaction to align unimodal embeddings for learning better backbone representations via carefully designed pretraining tasks. Xue {\em et al.}~\cite{Xue2022LanguageMA} present a weakly supervised pretraining method that jointly learns and aligns visual and partial textual information for learning effective visual text representations for scene text detection. Wan {\em et al.}~\cite{Wan2021SelfattentionBT} propose self-attention based text knowledge mining to enhance the backbone via an image-level text recognition pretraining task. Different from these works, as shown in Figure~\ref{fig:compare_intro}, this paper focuses on turning the CLIP model into a text detector without a pretraining process. However, it is not trivial to incorporate the CLIP model into a scene text detector. The key is seeking a proper method to exploit the visual and semantic prior information conditioned on each image. In this paper, we develop a new method for scene text detection, termed TCM, short for \textbf{T}urning a \textbf{C}LIP \textbf{M}odel into a scene text detector, which can be easily plugged in to improve existing scene text detection frameworks. We design a cross-modal interaction mechanism through visual prompt learning, implemented with cross-attention, to recover locality features from the image encoder of CLIP; these features capture the fine-grained information needed to respond to coarse text regions for the subsequent matching between text instances and language.
Besides, to steer the pretrained knowledge of the text encoder conditioned on each input image, we employ a predefined language prompt, learnable prompts, and a language prompt generator that uses a simple linear layer to encode global image information. In addition, we design an instance-language matching method to align the image embedding and text embedding, which encourages the image encoder to explicitly refine text regions using cross-modal visual-language priors. Compared to previous pretraining approaches, our method can be directly finetuned for the text detection task without a pretraining process, as elaborated in Fig.~\ref{fig:compare_intro}. In this way, the text detector can absorb the rich visual and semantic information of text from CLIP. We summarize the advantages of our method as follows: \begin{itemize} \item We construct a new text detection framework, termed TCM, which can be easily plugged in to enhance existing detectors. \item Our framework enables effective few-shot training. This advantage is more obvious when fewer training samples are used, compared to the baseline detectors. Specifically, by using 10\% of the labeled data, we improve the performance of the baseline detector by an average of 22\% in terms of the F-measure on 4 benchmarks. \item TCM introduces promising domain adaptation ability, \emph{i.e.}, when the training data is out-of-distribution with respect to the testing data, the performance can be significantly improved. This phenomenon is further demonstrated on a NightTime-ArT text dataset\footnote{\href{https://drive.google.com/file/d/1v3CshPqlvhpnK1_MKwqqkWJDikKl_g4Y}{NightTime-ArT Download Link}}, which we collected from the ArT dataset. \item Without a pretraining process based on specific pretext tasks, TCM can still leverage the prior knowledge of the CLIP model, outperforming previous scene text pretraining methods~\cite{Wan2021SelfattentionBT,Song2022VisionLanguagePF,Xue2022LanguageMA}. \end{itemize} \section{Related works} \label{sec:rela} \paragraph{Unimodal Scene Text Detection.} Unimodal scene text detection refers to methods that directly adopt only bounding-box annotations~\cite{Long2020SceneTD}. It can be roughly divided into two categories: segmentation-based methods and regression-based methods. The segmentation-based methods usually conduct pixel-level~\cite{Liao2020RealtimeST,Tian2019LearningSE,Xue2019MSRMS,Li2019ShapeRT,Wang2019EfficientAA,Xie2019SceneTD,Liao2019RealTimeST}, segment-level~\cite{Shi2017DetectingOT,Long2018TextSnakeAF, Zhang2020DeepRR,Baek2019CharacterRA, Xu2019TextFieldLA,Tian2016DetectingTI,Tang2019SegLinkDD, Ye2020TextFuseNetST}, or contour-level~\cite{Wang2020TextRayCG,Wang2020ContourNetTA} segmentation, and then group segments into text instances via post-processing. The regression-based methods~\cite{zhu2021fourier,Zhang2016MultiorientedTD,He2017SingleST, He2017DeepDR, Liao2017TextBoxesAF, Zhou2017EASTAE, He2021MOSTAM,Zhang2019LookMT,Wang2019ArbitrarySS} regard text as a whole object and regress the bounding boxes of the text instances directly. \paragraph{Cross-modal Assisted Scene Text Detection.} Unlike unimodal scene text detection, cross-modal assisted scene text detection aims to make full use of cross-modal information, including visual, semantic, and text knowledge, to boost the performance.
Wan {\em et al.}~\cite{Wan2021SelfattentionBT} utilized an image-level text recognition pretraining task to enhance the backbone via the proposed self-attention based text knowledge mining mechanism. Song {\em et al.}~\cite{Song2022VisionLanguagePF}, inspired by CLIP, designed three fine-grained cross-modality interaction pretraining tasks to align unimodal embeddings for learning better backbone representations. Xue {\em et al.}~\cite{Xue2022LanguageMA} jointly learned and aligned visual and partial text instance information for learning effective visual text representations via the proposed weakly supervised pretraining method. Long {\em et al.}~\cite{Long2022TowardsEU} proposed an end-to-end model that performs unified scene text detection and visual layout analysis simultaneously. The above methods explicitly leverage text or visual information to assist text detection. Instead, our method focuses on improving performance by turning a CLIP model into a scene text detector, leveraging its pretrained text knowledge. \section{Methodology} \label{sec:method} We begin by illustrating the CLIP model, which we use to fetch prior knowledge. Next, we introduce the technical details of TCM as well as the rationale behind it. An overview of our approach is shown in Fig.~\ref{fig:method_overall}. \subsection{Contrastive Language-Image Pretraining} CLIP~\cite{Radford2021LearningTV}, which collects 400 million image-text pairs without human annotation for model pretraining, has well demonstrated the potential of learning transferable knowledge and open-set visual concepts. A previous study~\cite{goh2021multimodal} shows that different neurons in the CLIP model can capture the corresponding concept literally, symbolically, and conceptually. As shown in Fig.~\ref{fig:clip_neuron}, the CLIP model is an inherently text-friendly model which can effectively abstract the mapping space between image and text~\cite{Petroni2019LanguageMA}. During training, CLIP learns a joint embedding space for the two modalities via a contrastive loss. Given a batch of image-text pairs, for each image, CLIP maximizes the cosine similarity with the matched text while minimizing that with all other unmatched texts. For each text, the loss is computed analogously. In this way, CLIP can be used for zero-shot image recognition~\cite{Zhou2022ConditionalPL}. However, to exploit the relevant information from such a model, there are two challenges: 1) a proper method is needed to effectively query the prior knowledge of CLIP; 2) the original model can only measure the similarity between a whole image and a single word or sentence, whereas in scene text detection there are usually many text instances per image, all of which need to be recalled equally. \subsection{Turning a CLIP into a Text Detector} To turn the CLIP model into a scene text detector, we propose TCM, as shown in Fig.~\ref{fig:method_overall} and Fig.~\ref{fig:method_plug_tcm}. TCM is a pluggable module that can be directly applied to enhance existing scene text detectors. It extracts the image and text embeddings from the image encoder and text encoder of the CLIP model, respectively. We then design a cross-modal interaction mechanism through visual prompt learning to recover locality features from the image encoder of CLIP, which can capture fine-grained information to respond to the coarse text regions for the subsequent matching between text instances and language.
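Before detailing the individual components, we sketch the symmetric contrastive objective of CLIP described above. The Python snippet below is a simplified illustration rather than the original implementation; the temperature value and tensor shapes are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # image_emb, text_emb: (B, C) embeddings of B matched image-text pairs.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)         # image -> matched text
    loss_t = F.cross_entropy(logits.t(), targets)     # text  -> matched image
    return 0.5 * (loss_i + loss_t)
\end{verbatim}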
For better steering the pretrained knowledge, we introduce a language prompt generator to generate a conditional cue for each image, and we design a visual prompt generator that learns image prompts for adapting the frozen CLIP text encoder to the text detection task. TCM is directly applicable to a broad range of text detection methods with only minor modifications. In addition, we design an instance-language matching method to align the image embedding and text embedding, which encourages the image encoder to explicitly refine text regions using cross-modal visual-language priors. \begin{figure}[htbp] \centering \includegraphics[width=0.46\textwidth]{figs/method_overall.pdf} \caption{The overall framework of our approach.} \vspace{-2em} \label{fig:method_overall} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{figs/method_plug_tcm_small_width.pdf} \vspace{-1em} \caption{The details of the TCM. The image encoder and text encoder are directly from the CLIP model. Det. Head is short for detection head.} \vspace{-1em} \label{fig:method_plug_tcm} \end{figure} \paragraph{Image Encoder.} We use the pretrained ResNet50~\cite{He2016DeepRL} of CLIP as the image encoder, which produces an embedding vector for every input pixel. Given the input image $ \bm{I}' \in \mathbb{R}^{ H \times W \times 3} $, the image encoder outputs the image embedding $\bm{I} \in \mathbb{R}^{\tilde{H} \times \tilde{W} \times C}$, where $\tilde{H} = \frac{H}{s}$, $\tilde{W} = \frac{W}{s}$, $C$ is the image embedding dimension ($C$ is set to 1024), and $s$ is the downsampling ratio ($s$ is empirically set to 32), which can be expressed as: \begin{equation} \label{eq:image_encoder} \bm{I} = \operatorname{ImageEncoder}(\bm{I}')\,. \end{equation} \paragraph{Text Encoder.} The text encoder takes as input prompts for $K$ classes and embeds them into a continuous vector space $\mathbb{R}^C$, producing text embeddings $ \bm{T} = \{\bm{t}_1,\ldots,\bm{t}_K\} \in \mathbb{R}^{K \times C}$ as outputs, where $ \bm{t}_i \in \mathbb{R}^C$. Specifically, we leverage the frozen pretrained text encoder of CLIP throughout, as the text encoder can provide language prior knowledge for text detection. $K$ is set to 1 because there is only one text class in the text detection task. Different from the original model that uses templates like ``a photo of a [CLS].'', we predefine the discrete language prompt as ``\emph{Text}''. Then, a part of the text encoder input $\bm{t}_{in}'$ is defined as follows: \begin{equation} \label{eq:image_text_encoder_input} \bm{t}_{in}' = \operatorname{WordEmbedding}( \rm Text) \in \mathbb{R}^{D}\,, \end{equation} where $\operatorname{WordEmbedding}(\cdot)$ denotes the word embedding of the predefined prompt ``Text''. $D$ is the word embedding dimension and is set to 512. Inspired by CoOp~\cite{Zhou2021LearningTP, Zhou2022ConditionalPL}, we also add learnable prompts $\{\bm{c}_1,\ldots,\bm{c}_n\}$ to learn robust, transferable text embeddings and facilitate zero-shot transfer of the CLIP model, where $n$ is the number of learnable prompts, which is set to 4 by default, and $\bm{c}_i \in \mathbb{R}^D$. Thus, the input $\bm{t}_{in}$ of the text encoder is as follows: \begin{equation} \label{eq:t_in} \bm{t}_{in} = [\bm{c}_1,\ldots,\bm{c}_n, \bm{t}_{in}'] \in \mathbb{R}^{(n+1) \times D}\,.
\end{equation} The text encoder takes $\bm{t}_{in}$ as input and generates the text embedding $ \bm{T} = \{\bm{t}_1\} \in \mathbb{R}^{C}$, which is denoted by $\bm{t}_{out} \in \mathbb{R}^C$ for simplicity: \begin{equation} \label{eq:text_encoder} \bm{t}_{out} = \operatorname{TextEncoder}( \bm{t}_{in}) \in \mathbb{R}^{C}\,. \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figs/clip_neurons_half.pdf} \caption{The neurons in the CLIP model can directly respond to text. The source images are from~\cite{goh2021multimodal}. } \vspace{-1em} \label{fig:clip_neuron} \end{figure} \paragraph{Language Prompt Generator.} Although the predefined prompt and learnable prompts are effective for steering the CLIP model, they may suffer from limited few-shot or generalization ability in open-ended scenarios where the testing text instances are out-of-distribution with respect to the training images. To this end, we present a language prompt generator to generate a feature vector, termed the conditional cue ($\bm{cc}$). For each image, the $\bm{cc}$ is then combined with the input of the text encoder $\bm{t}_{in}$, formulated as follows: \begin{equation} \label{eq:text_gen} \hat{\bm{t}}_{in} = \bm{cc} + \bm{t}_{in} \in \mathbb{R}^{(n+1) \times D}\,, \end{equation} where $\hat{\bm{t}}_{in}$ is the new prompt input of the text encoder conditioned on the input image, and we replace $\bm{t}_{in}$ with $\hat{\bm{t}}_{in}$ in Eq.~\ref{eq:text_encoder}. In practice, the language prompt generator is built with a two-layer feed-forward network, which is applied to generate the conditional cue ($\bm{cc}$) from the global image embedding $\bm{I}$. It consists of two blocks, each a layer normalization followed by a linear transformation, with a ReLU activation in between, formulated as follows: \begin{equation} \label{eq:language_prompt_generator} \bm{cc} = \operatorname{LN}(\sigma(\operatorname{LN}(\bm{\bar{I}})\bm{W}_1+\bm{b}_1)) \bm{W}_2+\bm{b}_2 \in \mathbb{R}^{D}\,, \end{equation} where $\bm{\bar{I}} \in \mathbb{R}^{C}$ is the global image-level feature generated from the image embedding $\bm{I}$ by the same global attention pooling layer as in CLIP, $\bm{W}_1 \in \mathbb{R}^{ C \times C}$, $\bm{W}_2 \in \mathbb{R}^{ C \times D}$, $\bm{b}_1 \in \mathbb{R}^{ C }$, and $\bm{b}_2 \in \mathbb{R}^{ D }$; we broadcast-add $\bm{cc}$ to $\bm{t}_{in}$ to get $\hat{\bm{t}}_{in}$ in Eq.~\ref{eq:text_gen}. \paragraph{Visual Prompt Generator.} We design a visual prompt generator to adaptively propagate fine-grained semantic information from textual features to visual features. Formally, we use the cross-attention mechanism of the Transformer~\cite{Vaswani2017AttentionIA} to model the interactions between the image embedding (as $\bm{Q}$) and the text embedding (as $\bm{K}$ and $\bm{V}$). The visual prompt $\tilde{\bm{I}}$ is then learned to transfer the prior information from the image level to the text instance level, which is defined as: \begin{equation} \label{eq:vis_prompt_gen} \tilde{\bm{I}} = \operatorname{TDec}( Q= \bm{I}, K= \bm{t}_{out}, V= \bm{t}_{out} ) \in \mathbb{R}^{ \tilde{H} \times \tilde{W} \times C}\,, \end{equation} where TDec denotes the Transformer decoder.
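As a concrete reference for the prompt construction and the language prompt generator above (Eq.~\ref{eq:t_in}, Eq.~\ref{eq:text_gen}, and Eq.~\ref{eq:language_prompt_generator}), a minimal PyTorch-style sketch is given below. The module and variable names are ours, $\sigma$ is taken to be ReLU as stated in the text, and the embedding of the predefined prompt ``Text'' is assumed to be provided by the frozen CLIP token embedding.
\begin{verbatim}
import torch
import torch.nn as nn

class LanguagePromptGenerator(nn.Module):
    # cc = LN(ReLU(LN(I_bar) W1 + b1)) W2 + b2
    def __init__(self, c_dim=1024, d_dim=512):
        super().__init__()
        self.ln1, self.fc1 = nn.LayerNorm(c_dim), nn.Linear(c_dim, c_dim)
        self.ln2, self.fc2 = nn.LayerNorm(c_dim), nn.Linear(c_dim, d_dim)

    def forward(self, i_bar):                 # i_bar: (B, C) pooled image embedding
        h = torch.relu(self.fc1(self.ln1(i_bar)))
        return self.fc2(self.ln2(h))          # (B, D) conditional cue cc

class PromptBuilder(nn.Module):
    # t_in = [c_1, ..., c_n, WordEmbedding("Text")]; then t_in_hat = cc + t_in
    def __init__(self, text_token_emb, n_ctx=4, d_dim=512):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, d_dim) * 0.02)  # learnable prompts
        self.register_buffer("cls_emb", text_token_emb)            # embedding of "Text", (D,)

    def forward(self, cc):                    # cc: (B, D)
        t_in = torch.cat([self.ctx, self.cls_emb.unsqueeze(0)], dim=0)  # (n+1, D)
        return cc.unsqueeze(1) + t_in.unsqueeze(0)                      # (B, n+1, D)
\end{verbatim}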
Based on the conditional visual prompt, the original image embedding $\bm{I}$ is combined with $\tilde{\bm{I}}$ to produce the prompted text-aware locality embedding $\hat{\bm{I}}$ used for instance-language matching~(Eq.~\ref{eq:updated_pixel_text_matching}) and the downstream detection head: \begin{equation} \label{eq:vis_gen} \hat{\bm{I}} = \bm{I} + \tilde{\bm{I}}. \end{equation} \paragraph{Instance-language Matching.} Given the outputs of the text encoder and image encoder, we perform text instance-language matching on the text-aware locality image embedding $\hat{\bm{I}}$ and the text embedding $\bm{t}_{out}$ via a dot product followed by a sigmoid activation to obtain a binary score map. Mixing the generated conditional fine-grained embedding $\tilde{\bm{I}}$ with the visual embedding $\bm{I}$ allows text instances present in the visual features to be better matched with the pretrained language knowledge. The matching mechanism is formulated as follows: \begin{equation} \label{eq:updated_pixel_text_matching} \bm{P} = \operatorname{sigmoid}( \hat{\bm{I}}\bm{t}_{out}^T / \tau ) \in \mathbb{R}^{\tilde{H} \times \tilde{W} \times 1}, \end{equation} where $\bm{t}_{out}$ is the text embedding (there is only one text class in text detection scenarios), and $\bm{P}$ is the binary text segmentation map. The segmentation map is supervised by the ground truth as an auxiliary loss and concatenated with the prompted embedding $\hat{\bm{I}}$ for the downstream text detection head, explicitly incorporating language priors into detection. During training, we minimize a binary cross-entropy loss between the segmentation map $\bm{P}$ and the ground truth, which is defined as follows: \begin{equation} \label{eq:l_aux} \mathcal{L}_{aux} = -{\sum_i^{\tilde{H}}}\sum_j^{\tilde{W}} \left[ y_{ij}\log(P_{ij}) + (1-y_{ij})\log(1-P_{ij}) \right]\,, \end{equation} where $y_{ij}$ and $P_{ij}$ are the label and predicted probability of pixel $(i,j)$ belonging to a text instance, respectively. \paragraph{Optimization.} The loss function $\mathcal{L}_{total}$ is the sum of the detection loss $\mathcal{L}_{det}$ and the auxiliary loss $\mathcal{L}_{aux}$, formulated as follows: \begin{equation} \label{eq:total_loss} \mathcal{L}_{total} = \mathcal{L}_{det} + \lambda \mathcal{L}_{aux} \,, \end{equation} where $\lambda$ is a trade-off hyper-parameter and is set to 1 in this paper. $\mathcal{L}_{det}$ depends on the downstream text detection method, covering both segmentation-based and regression-based categories. During inference, we use the output of the detection head as the final result. \section{Experiments} \label{sec:experiments} We conduct four sets of experiments to validate TCM. Our first set of experiments examines how TCM can be incorporated into existing text detectors to achieve consistent performance improvements. Next, we demonstrate the few-shot training capability and generalization ability brought by incorporating the TCM method. In the third set of experiments, we compare our method with previous pretraining methods. Finally, we provide thorough experiments to evaluate the sensitivity \wrt the proposed designs. \paragraph{Datasets.} Our experiments are conducted on a number of commonly known scene text detection benchmarks including ICDAR2013 (IC13)~\cite{Karatzas2013ICDAR2R}, ICDAR2015 (IC15)~\cite{karatzas2015icdar}, MSRA-TD500 (TD)~\cite{yao2012detecting}, CTW1500 (CTW)~\cite{liu2019curved}, Total-Text (TT)~\cite{ch2019total}, ArT~\cite{chng2019icdar2019}, MLT17~\cite{Nayef2017ICDAR2017RR}, and MLT19~\cite{nayef2019icdar2019}.
More details of the datasets are provided in the appendix. \paragraph{Evaluation Metric.} We use intersection over union (IoU) to determine whether the model correctly detects a text region, and we calculate precision (P), recall (R), and F-measure (F) for comparison, following common practice~\cite{Karatzas2013ICDAR2R}. For fair comparisons, text regions labeled with either ``do not care'' or ``\#\#\#'' are ignored in all datasets during training and testing. \paragraph{Implementation Details.} For text detection, we experiment with the popular text detection methods DBNet (DB)~\cite{Liao2020RealtimeST}\footnote{\url{https://github.com/MhLiao/DB}}, PAN~\cite{Wang2019EfficientAA}\footnote{\url{https://github.com/whai362/pan_pp.pytorch}}, and FCENet (FCE)~\cite{zhu2021fourier}\footnote{\url{https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/fcenet}} to evaluate TCM. For settings consistent with these methods, we train the detector using both SynthText and the real datasets. Specifically, the backbone is instantiated with the pretrained ResNet50~\cite{He2016DeepRL} image encoder of CLIP unless specified. The visual prompt generator has 3 transformer decoder layers with 4 heads; the transformer width is 256; and the feed-forward hidden dimension is set to 1024. We use the corresponding detection heads of DBNet, PAN, and FCENet to predict the final results. To test the few-shot learning ability of the model, we directly train on each benchmark with different proportions of the training data, without pretraining, and test on the corresponding test data. To test the generalization ability, we use the model trained on the corresponding source datasets and evaluate it on a target dataset with a dissimilar distribution. We consider two kinds of adaptation, synthtext-to-real and real-to-real, to validate the domain adaptation ability of TCM. The ablation studies are conducted \wrt the predefined prompt, the learnable prompt, the language prompt generator, the visual prompt generator, and the different settings. DBNet is used as the baseline for TCM. \subsection{Cooperation with Existing Methods} We report the text detection results of our TCM combined with three text detection methods on IC15, TD, and CTW in Table~\ref{tab:text_det_ic15}. Our method is +0.9\%, +1.7\%, and +1.9\% higher than the original FCENet, PAN, and DBNet, respectively, in terms of F-measure on IC15; similar consistent improvements are observed on TD and CTW. Note that the inference speed of our method is 18, 8.4, and 10 FPS evaluated on the IC15, TD, and CTW datasets, respectively, with PAN, FCENet, and DBNet, retaining the high efficiency of the detectors. \begin{table}[htbp] \centering \normalsize \setlength\tabcolsep{2.4pt} \input{table/text_det_ic15_td_ctw} \caption{ Text detection results of cooperating with existing methods on IC15, TD, and CTW. $^\dagger$ indicates the results from~\cite{Zhan2019GADANGD}. Reg. and Seg. are short for regression and segmentation methods, respectively. FPS is reported with a ResNet50 backbone on a single V100. } \label{tab:text_det_ic15} \vspace{-0.1em} \end{table} We visualize our method in Fig.~\ref{fig:vp_results}. It shows that the fine-grained features $\tilde{\bm{I}}$ containing text information are recovered from the global image embedding $\bm{I}$, demonstrating that TCM can identify text regions and provide these prior cues for downstream text detection.
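For reference, a minimal PyTorch-style sketch of the visual prompt generator of Eq.~\ref{eq:vis_prompt_gen}, configured as in the implementation details above (3 decoder layers, 4 heads, width 256, feed-forward dimension 1024), together with the instance-language matching of Eq.~\ref{eq:updated_pixel_text_matching} and the auxiliary loss of Eq.~\ref{eq:l_aux}, is given below. This is an illustrative sketch rather than the released implementation: the input/output projections, the embedding normalization, and the value of $\tau$ are our assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualPromptGenerator(nn.Module):
    # Cross-attention decoder: image tokens as queries, text embedding as keys/values.
    def __init__(self, clip_dim=1024, width=256, heads=4, layers=3, ffn=1024):
        super().__init__()
        self.q_proj = nn.Linear(clip_dim, width)   # assumed projections in/out of the decoder width
        self.kv_proj = nn.Linear(clip_dim, width)
        layer = nn.TransformerDecoderLayer(d_model=width, nhead=heads,
                                           dim_feedforward=ffn, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        self.out_proj = nn.Linear(width, clip_dim)

    def forward(self, image_emb, text_emb):
        # image_emb: (B, H*W, clip_dim) flattened image embedding I
        # text_emb:  (B, 1, clip_dim) text embedding t_out
        prompt = self.decoder(tgt=self.q_proj(image_emb), memory=self.kv_proj(text_emb))
        return self.out_proj(prompt)               # visual prompt, added to I to give I_hat

def instance_language_matching(i_hat, t_out, tau=0.07):
    # P = sigmoid(I_hat . t_out / tau); the normalization and tau value are assumptions.
    i_hat = F.normalize(i_hat, dim=-1)             # (B, H*W, C)
    t_out = F.normalize(t_out, dim=-1)             # (B, C)
    return torch.sigmoid(torch.einsum("bnc,bc->bn", i_hat, t_out) / tau)

def auxiliary_loss(p_map, gt_mask):
    # Binary cross-entropy between the matching score map and the ground-truth text mask.
    return F.binary_cross_entropy(p_map, gt_mask)
\end{verbatim}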
\subsection{Few-shot Training Ability} To further verify the few-shot training ability of our method, we directly train our model on real datasets using various training data ratios without pretraining, and evaluate it on the corresponding 4 benchmarks. As shown in Fig.~\ref{fig:few_show_ability}, our method is robust with limited data and outperforms the three baseline methods DB, PAN, and EAST~\cite{Zhou2017EASTAE}. The results show that TCM can capture the inherent characteristics of text by leveraging the pretrained vision and language knowledge of the zero-shot trained CLIP model. \begin{figure}[htbp] \centering \subcaptionbox{TD500}{\includegraphics[width=3.2cm,height=3.2cm]{figs/few_show_data_ratio_a.pdf}\label{fig:conditional_cue}} \subcaptionbox{ICDAR15}{\includegraphics[width=3.2cm,height=3.2cm]{figs/few_show_data_ratio_b.pdf}\label{fig:training_ratio_b}} \\ \subcaptionbox{TotalText}{\includegraphics[width=3.2cm,height=3.2cm]{figs/few_show_data_ratio_c.pdf}\label{fig:training_ratio_c}} \subcaptionbox{CTW1500}{\includegraphics[width=3.2cm,height=3.2cm]{figs/few_show_data_ratio_d.pdf}\label{fig:training_ratio_d}} \caption{Few-shot training ability with varying training data ratios. ``F'' represents F-measure. } \label{fig:few_show_ability} \vspace{-0.3cm} \end{figure} \subsection{Generalization Ability} \label{sec:exp:gn} We conduct two types of experiments, synthtext-to-real adaptation and real-to-real adaptation, as shown in Table~\ref{tab:synth_to_real} and Table~\ref{tab:real_to_real}, respectively. From the tables, we can see that by plugging TCM into DBNet, we significantly improve the performance by an average of 8.2\% in terms of F-measure over four different settings covering synthtext-to-real and real-to-real, which further demonstrates the effectiveness of our method for domain adaptation. \begin{table}[t] \centering \renewcommand\arraystretch{1} \setlength{\tabcolsep}{2.0mm} \input{table/synth_to_real} \caption{Synthtext-to-real adaptation. $^\dagger$ indicates the results from~\cite{Wu2020SynthetictoRealUD}. ST indicates SynthText. F-measure (\%) is reported.} \label{tab:synth_to_real} \vspace{-0.2cm} \end{table} \begin{table}[ht] \centering \renewcommand\arraystretch{1} \setlength{\tabcolsep}{2.0mm} \resizebox{\linewidth}{!}{ \input{table/real_to_real} } \caption{Real-to-real adaptation. $^\dagger$ indicates that the results are from~\cite{Zhan2019GADANGD}. Note that the proposed method outperforms other methods. F-measure (\%) is reported. } \label{tab:real_to_real} \vspace{-0.2cm} \end{table} \subsection{Comparison with Pretraining Methods} \label{subset:pretrain} Pretraining methods based on specifically designed pretext tasks have made effective progress in the field of text detection. In contrast to these efforts, TCM can turn the CLIP model directly into a scene text detector without a pretraining process. The comparison results are shown in Table~\ref{tab:comap_pretraining}, from which we can see that, without pretext tasks for pretraining, DB+TCM consistently outperforms previous methods including DB+STKM~\cite{Wan2021SelfattentionBT}, DB+VLPT~\cite{Song2022VisionLanguagePF}, and DB+oCLIP~\cite{Xue2022LanguageMA}. Especially on IC15, our method outperforms the previous state-of-the-art pretraining method by a large margin, with 89.4\% versus 86.5\% in terms of the F-measure.
\begin{table}[tb] \centering \normalsize \setlength\tabcolsep{2.0pt} { \begin{tabularx}{1.0\linewidth}{l|lccccc} \hline & Methods & Pretext task & IC15 & TT & TD & CTW \\ \hline \multirow{5}{*}{\rotatebox{90}{Convention}} & SegLink~\cite{Shi2017DetectingOT} & $\times$ & - & - & 77.0 & - \\ & PSENet-1s~\cite{Li2019ShapeRT} & $\times$ & 85.7 & 80.9 & - & 82.2 \\ & LOMO~\cite{Zhang2019LookMT} & $\times$ & 87.2 & 81.6 & - & 78.4 \\ & MOST~\cite{He2021MOSTAM} & $\times$ & 88.2 & - & 86.4 & - \\ & Tang {\em et al.}\cite{Tang2022FewCB} & $\times$ & 89.1 & - & 88.1 & - \\ \hline \multirow{4}{*}{\rotatebox{90}{VLP}} & DB+ST$^\dagger$ & $\times$ & 85.4 & 84.7 & 84.9 & - \\ & DB+STKM$^\dagger$~\cite{Wan2021SelfattentionBT} & \checkmark & 86.1 & 85.5 & 85.9 & - \\ & DB+VLPT$^\dagger$~\cite{Song2022VisionLanguagePF} & \checkmark & 86.5 & 86.3 & 88.5 & - \\ & DB+oCLIP*~\cite{Xue2022LanguageMA} & \checkmark & - & - & - & 84.4 \\\hline & DB+TCM(Ours) & $\times$ & \textbf{89.4} & 85.9 & \textbf{88.8} & \textbf{85.1} \\ \hline \end{tabularx}} \caption{Comparison with existing scene text pretraining techniques on DBNet (DB). $^\dagger$ indicates the results from~\cite{Song2022VisionLanguagePF}. ST and VLP denote SynthText pretraining and visual-language pretraining methods, respectively. * stands for our reimplementation results. F-measure (\%) is reported. } \label{tab:comap_pretraining} \vspace{-0.6em} \end{table} \subsection{Ablation Studies} \noindent\textbf{Pretrained CLIP Backbone.} First, we conduct experiments in which we only replace the original backbone of DBNet with the pretrained ResNet50 image encoder of CLIP to quantify the performance variation across backbones. As shown in Table~\ref{tab:abla_clip_on_db}, the original pretrained backbone of CLIP alone is insufficient for leveraging the visual-language knowledge of CLIP. Therefore, it is necessary to use a proper method to excavate the knowledge of the CLIP model. \begin{table}[htbp] \setlength\tabcolsep{1pt} \centering \renewcommand\arraystretch{1} \setlength{\tabcolsep}{2.0mm} \resizebox{\linewidth}{!}{ \input{table/ablation_clip_on_db} } \caption{Ablation study of the ResNet50 backbone on IC15, TD, TT, and CTW. BB indicates Backbone. R50 and CR50 represent the ResNet50 backbones of DBNet and CLIP, respectively. F-measure (\%) is reported. } \label{tab:abla_clip_on_db} \end{table} \begin{table}[htbp] \centering \normalsize \setlength\tabcolsep{3.8pt} \input{table/ablation_ic15_td_tt_ctw} \caption{Ablation study of our proposed components on IC15, TD, TT and CTW. ``BSL'', ``PP'', ``LP'', ``LG'', and ``VG'' represent the baseline method DBNet, the predefined prompt, the learnable prompt, the language prompt generator, and the visual prompt generator, respectively. F (\%) represents F-measure. $\Delta $ represents the variance. } \label{tab:abla_ic15_td} \end{table} \begin{table*}[htbp] \centering \renewcommand\arraystretch{1} \setlength{\tabcolsep}{2.0mm} \input{table/abla_da_tg_vg} \caption{ Ablation study of the effect of LG and VG on generalization performance. F-measure (\%) is reported. } \label{tab:abla_da_tg_vg} \end{table*} \noindent\textbf{Ablation Study for the Predefined Prompt.} When using the predefined prompt, as illustrated in the second row of Table~\ref{tab:abla_ic15_td}, the performance is slightly improved on all four datasets (IC15, TD, TT, and CTW), being 0.05\%, 0.2\%, 0.04\%, and 0.1\% higher than the baseline method, respectively.
\noindent\textbf{Ablation Study for the Learnable Prompt.} Besides, results combining the learnable prompt with the predefined prompt on the four datasets are provided in the third row of Table~\ref{tab:abla_ic15_td}. We notice that a consistent improvement can be achieved by adding the learnable prompt. We also show the influence of using different numbers of learnable prompts in rows 4 to 6 of Table~\ref{tab:abla_ic15_td}. We observe that as the number of learnable prompts increases, the performance gradually increases on all datasets. Compared to the value 4, the value 32 obtains obvious improvements on CTW, TD, and TT. We conjecture that this is because a larger number of learnable prompts can better steer the pretrained text encoder knowledge that is useful for text detection. In the following experiments, the default number of learnable prompts is set to 4 for simplicity. \noindent\textbf{Ablation Study for the Language Prompt Generator.} Furthermore, we evaluate the performance of the proposed language prompt generator, shown in the 7$_{th}$ row of Table~\ref{tab:abla_ic15_td}. With the help of the language prompt generator, we find that TCM achieves further improvements on all four datasets, especially on ICDAR2015, indicating that the conditional cue generated by the language prompt generator for each image can ensure better generalization over different types of datasets. \noindent\textbf{Ablation Study for the Visual Prompt Generator.} Finally, combining the proposed visual prompt generator with the other components above, the F-measure improves over the baseline on all four datasets, with larger margins of 1.7\% and 2.0\% on IC15 and TD, respectively. The reason for this obvious complementary effect is that the visual prompt generator can propagate fine-grained semantic information from textual features to visual features. Besides, the prompted locality image embedding generated by the visual prompt generator can guide the model to obtain more accurate text instance-level visual representations, which boosts the ability of instance-language matching and generates a precise segmentation score map that is useful for the downstream detection head. \noindent\textbf{Ablation Study for the VG and LG on Generalization Performance.} As described in Table~\ref{tab:abla_da_tg_vg}, removing the VG and LG elements from TCM dramatically deteriorates the generalization performance, which further indicates the effectiveness of the VG and LG. \noindent\textbf{Ablation Study for Image Encoder and Text Encoder.} We have investigated how the quality of the frozen text encoder and image encoder affects the performance by adjusting the corresponding learning rate (LR) factor. The experimental results of TCM-DBNet on the TD500 dataset are shown in Table~\ref{tab:appendix_expl_ie_te}. The results show that using a lower learning rate for both encoders and fixing the text encoder is the optimal setting for training the whole model. Note that we observe performance degradation when directly using a $1.0\times$ learning rate for both encoders, which suggests that the frozen text encoder can stabilize the training process. The core components of the architecture, including the language prompt generator and the visual prompt generator, are designed to better steer the knowledge of the pretrained CLIP model. Appropriate design of the network architecture and the use of the pretrained CLIP model are complementary.
\begin{table}[ht] \centering \renewcommand\arraystretch{1} \setlength{\tabcolsep}{2.0mm} \begin{tabular}{llll} \toprule & Image encoder & Text encoder & F (\%) \\ \midrule \multirow{4}{*}{LR Factor} & 0.1 & 0.0 & \textbf{88.7} \\ & 0.1 & 0.1 & 87.8 \\ & 0.1 & 1.0 & 87.1 \\ & 1.0 & 1.0 & 86.3 \\ \bottomrule \end{tabular} \caption{Ablation study of the image encoder and text encoder learning rates. ``LR'' represents the learning rate. } \label{tab:appendix_expl_ie_te} \end{table} \noindent\textbf{Ablation Study for Different Amounts of Data.} To further explore whether TCM can learn additional knowledge that is hard to obtain by simply increasing the training data, we train the model on large-scale public joint data including IC13, IC15, TD, CTW, TT, and MLT17, with 13,784 images in total, and test it on the NightTime-ArT data (326 images) carefully collected from ArT. The nighttime examples of ArT are shown in Fig.~\ref{fig:night_image}. Results are shown in Table~\ref{tab:appendix_more_data}. The results show that even with the addition of large amounts of training data, existing methods still show limitations on the nighttime data, which is obviously out-of-distribution with respect to the training set. However, TCM can still perform robustly in such cases, indicating its promising generalization ability. \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{figs/night_image.pdf} \caption{Examples from our constructed NightTime-ArT dataset.} \label{fig:night_image} \end{figure} \begin{table}[h] \begin{tabular}{llll} \toprule Method & Training Data & Testing Data & F (\%) \\ \midrule FCENet & Joint data & NightTime-ArT & 55.2 \\ DBNet & Joint data & NightTime-ArT & 52.8 \\ \midrule TCM-DBNet & Joint data & NightTime-ArT & \textbf{70.2} \\ \bottomrule \end{tabular} \caption{Ablation study on large amounts of training data. } \label{tab:appendix_more_data} \vspace{-0.2cm} \end{table} \noindent\textbf{Ablation Study for the Parameters Comparison.} For a fair comparison, we increase the parameters of DBNet by replacing the backbone with a larger ResNet and then conduct experiments on the TD500 dataset. Trainable parameters and FLOPs are calculated with an input size of 1280$\times$800. Results are shown in Table~\ref{tab:appendix_fair_db}. The results show that TCM-DBNet achieves better performance than DBNet with a smaller model size and computation overhead, demonstrating its effectiveness for scene text detection. \begin{table}[ht] \centering \renewcommand\arraystretch{1} \setlength{\tabcolsep}{2.0mm} \begin{tabular}{lllll} \toprule Method & Backbone & Params & FLOPs & F (\%) \\ \midrule DBNet & R50 & 26 (M) & 98 (G) & 84.9 \\ DBNet & R101 & 46 (M) & 139 (G) & 85.9 \\ DBNet & R152 & 62 (M) & 180 (G) & 87.3 \\ \midrule TCM-DBNet & R50 & 50 (M) & 156 (G) & \textbf{88.7} \\ \bottomrule \end{tabular} \caption{Ablation study of the parameters comparison with DBNet. } \label{tab:appendix_fair_db} \vspace{-0.2cm} \end{table} \noindent\textbf{Ablation Study for the Auxiliary Loss.} We further compare the results with and without the auxiliary loss on the TD500 dataset, as shown in Table~\ref{tab:appendix_aux_loss}. We see that using the auxiliary loss achieves higher performance. The results indicate that the auxiliary loss is beneficial for training the model by imposing constraints on the instance-language matching score map. In addition, the performance improvement suggests that it might help the image encoder of the pretrained CLIP to perceive local text regions effectively.
\begin{table}[ht] \centering \renewcommand\arraystretch{1} \setlength{\tabcolsep}{2.0mm} \begin{tabular}{ll} \toprule Model & F (\%) \\ \midrule TCM-DBNet with auxiliary loss & \textbf{88.7} \\ TCM-DBNet w/o auxiliary loss & 85.1 \\ \bottomrule \end{tabular} \caption{Ablation study of the auxiliary loss. } \label{tab:appendix_aux_loss} \vspace{-0.2cm} \end{table} \begin{figure}[ht] \centering \subcaptionbox{CTW1500}{\includegraphics[width=3.5cm,height=2.3cm]{figs/ctw1500.pdf}\label{fig:vis_ctw1500}} \subcaptionbox{MSRA-TD500}{\includegraphics[width=3.5cm,height=2.3cm]{figs/td.pdf}\label{fig:vis_roic13}} \caption{Visualization results of our method. For each pair, the left is the image embedding $\bm{I}$, and the right is the generated visual prompt $\tilde{\bm{I}}$. Best viewed on screen. More results can be found in the appendix.} \label{fig:vp_results} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.43\textwidth]{figs/failure_one.pdf} \caption{Failure cases. The red circle marks a false positive region. } \vspace{-1em} \label{fig:failure} \end{figure} \section{Discussion of Failure Cases} \label{sec:dis} There are some insightful failure cases, as shown in Figure~\ref{fig:failure}. The instance-language matching score map contains false positive regions whose appearance is very similar to that of text, as shown by the red circle in Fig.~\ref{fig:failure}; such regions should be treated as noise. Therefore, it is necessary for the downstream text detection head to further refine this initial score map instead of directly using the instance-language matching score map as the final result. We leave alleviating these false positives in the instance-language matching score map as future work. \section{Conclusion}\label{sec:conclusion} This paper proposes TCM, which directly excavates prior knowledge from the CLIP model into a scene text detector without a pretraining process. Such a new text detection paradigm reveals the importance of using visual-language priors to seek information from the zero-shot, off-the-shelf model, thus guiding the text detector to adapt to small-scale data, divergent data distributions, and complicated scenes, without relying on carefully-designed pretraining tasks. Experiments comprehensively demonstrate the effectiveness of our method. It is worth mentioning that we also construct a NightTime-ArT dataset to further demonstrate that TCM can steer useful prior knowledge from the CLIP model. As the CLIP model is inherently text-friendly, extending TCM to scene text spotting is also a promising direction for future work. \bibliographystyle{ieee_fullname}
{ "arxiv_id": "2302.14298", "language": "en", "timestamp": "2023-03-01T02:08:48", "url": "https://arxiv.org/abs/2302.14298", "yymm": "2302" }
\section{INTRODUCTION} \label{Introduction} As a widely-used solution for 6-degree-of-freedom (DOF) pose estimation and map reconstruction, LiDAR-inertial odometry and mapping (LI-OAM) is a fundamental technique for many robotics applications, e.g., unmanned vehicles and autonomous navigation. LI-OAM combines the measurements from a three-dimensional light detection and ranging (LiDAR) sensor and an inertial measurement unit (IMU) to estimate the state (i.e., pose and velocity) of the hardware platform in real time, and then utilizes the solved state to register the points of a new sweep into the map. According to the degree of coupling, existing LI-OAM systems \cite{zhang2014loam, zhang2017low, shan2018lego, li2021towards, ye2019tightly, qin2020lins, shan2020lio, xu2021fast, xu2022fast, chen2022direct, yuan2022sr} can be divided into two groups: loosely-coupled and tightly-coupled. The loosely-coupled framework \cite{zhang2014loam, zhang2017low, shan2018lego} mainly uses IMU measurements to correct the motion distortion of LiDAR points and to provide motion priors for Iterative Closest Point (ICP) pose estimation. For instance, LOAM \cite{zhang2014loam} and LeGO-LOAM \cite{shan2018lego} have loosely-coupled IMU interfaces in their open-source code, although they are described as LiDAR-only odometry and mapping systems in the literature. However, calculating the motion priors at the current time relies on the velocity of the last state, which is neither directly observed by a sensor nor involved in the optimization. Therefore, the accumulated error of the velocity increases with time, degrading the accuracy of state estimation. In addition, LOAM and LeGO-LOAM do not estimate the gravity vector, which needs to be removed from the raw accelerometer measurements. Instead, they obtain the roll, pitch, and yaw angles in real time by fusing the magnetometer with the accelerometer and gyroscope measurements in an attitude and heading reference system (AHRS). Then, the obtained roll, pitch, and yaw angles are used to remove the gravity vector. However, the magnetometer only exists in an AHRS, preventing those systems from being used on most hardware platforms with 6-axis IMUs. The tightly-coupled methods \cite{li2021towards, ye2019tightly, qin2020lins, shan2020lio, xu2021fast, xu2022fast, chen2022direct, yuan2022sr} also use IMU measurements to provide motion constraints for ICP so as to improve the accuracy and robustness of state estimation. The LIO joint optimization systems based on the tightly-coupled framework can be mainly categorized into three types: iterated extended Kalman filter (iEKF) \cite{qin2020lins, xu2021fast, xu2022fast}, bundle adjustment (BA) \cite{li2021towards, ye2019tightly, yuan2022sr}, and graph optimization \cite{shan2020lio, chen2022direct}. All three types regard pose and velocity as state variables that need to be solved. The predictions of pose and velocity are calculated by integrating IMU measurements from the last state, and the observation of the pose is obtained from LiDAR ICP. However, the velocity does not have any observation; therefore, it can only adjust itself according to the estimated pose to satisfy the kinematic constraints. In other words, the accuracy of the velocity mainly depends on the accuracy of the pose. Once the pose is incorrect, the velocity adjusts itself to fit the incorrect pose. Although IMU pre-integration can constrain the velocity, this constraint is not very useful if the velocity at the last time is not accurate.
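To make this issue concrete, the following Python snippet sketches one step of IMU-based state prediction (cf. the IMU measurement model in Sec.~\ref{Preliminary}); it is a simplified illustration rather than part of LIW-OAM, and it shows that any error in the last velocity feeds directly into the predicted pose.
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def propagate_imu(p, v, R, a_meas, w_meas, b_a, b_w, g_w, dt):
    # One step of IMU-only state prediction.
    # De-biased acceleration rotated to the world frame, with gravity removed.
    a_world = R @ (a_meas - b_a) - g_w
    p_next = p + v * dt + 0.5 * a_world * dt ** 2   # an error in v biases p directly
    v_next = v + a_world * dt
    # First-order rotation update from the de-biased gyroscope measurement.
    R_next = R @ Rotation.from_rotvec((w_meas - b_w) * dt).as_matrix()
    return p_next, v_next, R_next
\end{verbatim}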
In this paper, we present LIW-OAM, an accurate and robust LiDAR-inertial-wheel (LIW) odometry and mapping system, which fuses constraints from LiDAR, IMU, and wheel encoder in a BA based tightly-coupled framework. Our system integrates the IMU and wheel encoder measurements to obtain the initial pose of the current sweep, and then refines the pose with a BA based LIW-optimization module. The wheel encoder can make the initial velocity value of the LIW-optimization more reliable, and meanwhile provides velocity observations to address the limitation of the IMU. In addition, compared with \cite{zhang2014loam, shan2018lego}, which need AHRS support, our system is compatible with hardware platforms with 6-axis IMUs and thus is much more convenient in practice. Although we utilize an extra wheel encoder, its cost is much lower than that of an AHRS. Experimental results on the public datasets $nclt$ \cite{carlevaris2016university} and $kaist$ \cite{jeong2019complex} demonstrate that: 1) our system outperforms existing state-of-the-art LI-OAM systems (i.e., \cite{li2021towards, qin2020lins, shan2020lio, xu2022fast, yuan2022sr}) in terms of a smaller absolute trajectory error (ATE); 2) compared with a variant of our system that utilizes only IMU pre-integration to provide constraints, our final system, which embeds velocity observations from the wheel encoder, further improves accuracy and robustness. To summarize, the main contributions of this work are as follows: 1) We propose a novel BA based LIW-OAM system, which embeds the velocity observations from a wheel encoder into BA based LI-optimization. Our LIW-OAM system outperforms most state-of-the-art LI-OAM systems in terms of accuracy. 2) We have released the source code of this work for the development of the community\footnote{https://github.com/ZikangYuan/liw\_oam}. The rest of this paper is structured as follows. In Sec. \ref{Related Work}, we briefly discuss the relevant literature. Sec. \ref{Preliminary} provides preliminaries. Sec. \ref{System Overview} illustrates the overview of our system. Sec. \ref{System Details} details each module of our system, followed by experimental evaluation in Sec. \ref{Experiments}. Sec. \ref{Conclusion} concludes the paper. \section{RELATED WORK} \label{Related Work} \textbf{LiDAR-Only Odometry and Mapping.} LiDAR-only odometry and mapping systems \cite{zhang2014loam, zhang2017low, shan2018lego, behley2018efficient, wang2020intensity, deschaud2018imls, dellenbach2022ct} rely on the geometric information contained in LiDAR points for tracking, and constantly register new points to the map. LOAM \cite{zhang2014loam, zhang2017low} first proposes a complete LiDAR odometry that mainly consists of three steps: 1) extracting edge and surface features from raw points; 2) performing sweep-to-sweep pose estimation at the input sweep frequency; 3) performing sweep-to-map pose optimization and utilizing the optimized pose to register points to the map at a lower frequency. However, due to the huge number of 3D points to be processed, the output frequency of LOAM is low. On the basis of LOAM, LeGO-LOAM \cite{shan2018lego} proposes to cluster raw LiDAR points and then remove clusters with weak geometric structure information to reduce computation. However, accurately removing clusters with weak geometry is a nontrivial task, and incorrect removal of useful clusters would degrade the accuracy and robustness of pose estimation.
SuMa \cite{behley2018efficient} proposes to represent the map via a surfel-based representation that aggregates information from points. However, GPU acceleration is necessary for SuMa to achieve real-time performance, and the pose estimation accuracy of SuMa is not better than that of systems based on the LOAM framework. Fast-LOAM proposes to eliminate the sweep-to-map module and keep only the sweep-to-sweep module to make the system lightweight. However, without the sweep-to-map refinement, the accuracy of Fast-LOAM is sacrificed. In \cite{wang2020intensity}, the authors propose the intensity scan context (ISC) to improve the performance of loop detection based on Fast-LOAM. IMLS-LOAM \cite{deschaud2018imls} designs an IMLS pose solution algorithm to replace the conventional ICP. However, the large computational cost of IMLS makes it impossible for IMLS-LOAM to run in real time. Based on IMLS-LOAM, CT-ICP \cite{dellenbach2022ct} estimates the state at the beginning and ending times of each sweep. In this way, the state at any time during a sweep can be expressed as a function of the beginning state and the ending state. Compared with the previous schemes \cite{zhang2014loam, zhang2017low, shan2018lego, behley2018efficient, wang2020intensity, deschaud2018imls}, which represent the state of a sweep by only the state at the beginning or ending time, CT-ICP is more realistic and meanwhile achieves superior performance. \textbf{LiDAR-Inertial Odometry and Mapping.} LiDAR-inertial odometry and mapping systems \cite{zhang2014loam, zhang2017low, shan2018lego, li2021towards, ye2019tightly, qin2020lins, shan2020lio, xu2021fast, xu2022fast, chen2022direct, yuan2022sr} are mainly divided into the loosely-coupled framework \cite{zhang2014loam, zhang2017low, shan2018lego} and the tightly-coupled framework \cite{li2021towards, ye2019tightly, qin2020lins, shan2020lio, xu2021fast, xu2022fast, chen2022direct, yuan2022sr}. The loosely-coupled framework, such as LOAM \cite{zhang2014loam} and LeGO-LOAM \cite{shan2018lego} with an IMU interface, uses IMU measurements to correct motion distortion and provide motion priors for ICP pose estimation. The tightly-coupled framework \cite{li2021towards, ye2019tightly, qin2020lins, shan2020lio, xu2021fast, xu2022fast, chen2022direct, yuan2022sr} uses IMU measurements to provide motion constraints for ICP, so as to improve the accuracy and robustness of pose estimation. According to the type of LiDAR-inertial joint optimization, the tightly-coupled framework can be further divided into the iEKF based framework \cite{qin2020lins, xu2021fast, xu2022fast}, the BA based framework \cite{li2021towards, ye2019tightly, yuan2022sr}, and the graph optimization based framework \cite{shan2020lio, chen2022direct}. LINS \cite{qin2020lins} first fuses a 6-axis IMU and a 3D LiDAR in an iEKF based framework, where the iEKF is designed to correct the estimated state recursively by generating new feature correspondences in each iteration, and to keep the system computationally tractable. Fast-LIO \cite{xu2021fast} proposes a new method of computing the Kalman gain to avoid high-order matrix inversion, which in turn greatly reduces the computational burden. Based on Fast-LIO, Fast-LIO2 \cite{xu2022fast} proposes the ikd-tree algorithm \cite{cai2021ikd}. Compared with the original kd-tree, the ikd-tree reduces the time cost of building a tree, traversing a tree, removing elements, and other operations. LIO-SAM \cite{shan2020lio} formulates LiDAR-inertial odometry as a factor graph.
Measurements from LiDAR and IMU are used to provide absolute constraints for each graph node and relative constraints between nodes respectively. DLIO \cite{chen2022direct} builds an internal map by registering dense points to a local submap with a translational and rotational prior generated by a nonlinear motion model. \cite{ye2019tightly} fuses a 6-axis IMU and a 3D LiDAR in a BA based framework. Besides, to obtain more reliable pose estimation, a rotation-constrained refinement algorithm is proposed to further align the pose with the global map. LiLi-OM \cite{li2021towards} selects key-sweeps from solid-state LiDAR data, and performs BA based multi-key-sweep joint LI-optimization. However, when the type of LiDAR changes from solid-state to spinning, the time interval between two consecutive key-sweeps becomes longer, and the accumulative error of IMU pre-integration increases. To reduce the accumulative error of IMU pre-integration in the BA based framework, our previous work SR-LIO \cite{yuan2022sr} segments and reconstructs raw input sweeps from spinning LiDAR to obtain reconstructed sweeps with higher frequency. The increased frequency shortens the time interval between two consecutive sweeps, and thus reduces the accumulative error of IMU pre-integration. However, the increased sweep frequency requires the system to finish the solution of the current sweep in a shorter time, which is very challenging on current off-the-shelf computing resources.

\textbf{LiDAR-Inertial-Wheel Odometry and Mapping.} \cite{zhang2019lidar} is the first approach trying to fuse LiDAR, IMU and wheel encoder in a loosely-coupled manner. First, the state at a particular time is calculated by the LiDAR, the IMU and the wheel encoder odometer respectively. Then, the three states calculated from the three sensors are integrated by an extended Kalman filter (EKF) to obtain the final state. EKF-LOAM \cite{junior2022ekf} also adopts the EKF framework, and uses a simple and lightweight adaptive covariance matrix based on the number of detected geometric features. There are not many publicly available datasets that include LiDAR, IMU and wheel encoder measurements. Therefore, there are not many LIW-OAM systems, and even fewer of them are open-sourced.

\section{PRELIMINARY}
\label{Preliminary}

\subsection{Coordinate Systems}
\label{Coordinate Systems}
We denote $\left({\cdot}\right)^w$, $\left({\cdot}\right)^l$, $\left({\cdot}\right)^o$ and $\left({\cdot}\right)^k$ as a 3D point in the world coordinates, the LiDAR coordinates, the IMU coordinates and the odometer (i.e., wheel encoder) coordinates respectively. The world coordinate system coincides with $\left({\cdot}\right)^l$ at the starting position. In all coordinates, the x-axis points forward, the y-axis points to the left, and the z-axis points upward. We denote the LiDAR coordinates for taking the $i_{th}$ sweep at time $t_i$ as $l_i$ and the corresponding IMU coordinates at $t_i$ as $o_i$; then the transformation matrix (i.e., extrinsic parameters) from the LiDAR coordinates $l_i$ to the IMU coordinates $o_i$ is denoted as $\mathbf{T}_{l_i}^{o_i} \in S E(3)$:
\begin{equation} \label{equation1} \mathbf{T}_{l_i}^{o_i}=\left[\begin{array}{cc} \mathbf{R}_{l_i}^{o_i} & \mathbf{t}_{l_i}^{o_i} \\ \mathbf{0} & 1 \end{array}\right] \end{equation}
where $\mathbf{T}_{l_i}^{o_i}$ consists of a rotation matrix $\mathbf{R}_{l_i}^{o_i} \in S O(3)$ and a translation vector $\mathbf{t}_{l_i}^{o_i} \in \mathbb{R}^3$.
The extrinsic parameters are usually calibrated once offline and remain constant during online pose estimation; therefore, we can represent $\mathbf{T}_{l_i}^{o_i}$ by $\mathbf{T}_{l}^{o}$ for simplicity. Similarly, the transformation from the odometer coordinates to the IMU coordinates is denoted as $\mathbf{T}_{k}^{o}$, which consists of $\mathbf{R}_{k}^{o}$ and $\mathbf{t}_{k}^{o}$. We use both rotation matrices $\mathbf{R}$ and Hamilton quaternions $\mathbf{q}$ to represent rotation. We primarily use quaternions in state vectors, but rotation matrices are also used for the convenient rotation of 3D vectors. $\otimes$ represents the multiplication operation between two quaternions. Finally, we denote $\left(\hat{\cdot}\right)$ as the noisy measurement or estimate of a certain quantity. In addition to the pose, we also estimate the velocity $\mathbf{v}$, the accelerometer bias $\mathbf{b}_{\mathbf{a}}$ and the gyroscope bias $\mathbf{b}_{\boldsymbol{\omega}}$, which are represented uniformly by a state vector:
\begin{equation} \label{equation2} \boldsymbol{x}=\left[\mathbf{t}^T, \mathbf{q}^T, \mathbf{v}^T, \mathbf{b}_{\mathbf{a}}{ }^T, \mathbf{b}_{\boldsymbol{\omega}}{ }^T\right]^T \end{equation}

\subsection{Sweep State Expression}
\label{Sweep State Expression}
Inspired by CT-ICP \cite{dellenbach2022ct}, we represent the state of a sweep $S$ by: 1) the state at the beginning time $t_b$ of $S$ (i.e., $\boldsymbol{x}_{b}$) and 2) the state at the end time $t_e$ of $S$ (i.e., $\boldsymbol{x}_{e}$). In this way, the state of each point during $\left[t_b, t_e\right]$ can be represented as a function of $\boldsymbol{x}_{b}$ and $\boldsymbol{x}_{e}$. For instance, for a point $\mathbf{p} \in S$ collected at time $t_{\mathbf{p}} \in\left[t_b, t_e\right]$, the state at $t_{\mathbf{p}}$ can be calculated as:
\begin{equation} \label{equation3} \begin{gathered} \alpha=\frac{t_{\mathbf{p}}-t_b}{t_e-t_b} \\ \mathbf{t}_{\mathbf{p}}=(1-\alpha) \mathbf{t}_b+\alpha \mathbf{t}_e \\ \mathbf{q}_{\mathbf{p}}=\mathbf{q}_b . slerp\left(\alpha, \mathbf{q}_e\right) \\ \mathbf{v}_{\mathbf{p}}=(1-\alpha) \mathbf{v}_b+\alpha \mathbf{v}_e \\ \mathbf{b}_{\mathbf{a}_{\mathbf{p}}}=(1-\alpha) \mathbf{b}_{\mathbf{a}_b}+\alpha \mathbf{b}_{\mathbf{a}_e} \\ \mathbf{b}_{\boldsymbol{\omega}_{\mathbf{p}}}=(1-\alpha) \mathbf{b}_{\boldsymbol{\omega}_b}+\alpha \mathbf{b}_{\boldsymbol{\omega}_e} \end{gathered} \end{equation}
where $slerp\left(\cdot\right)$ is the spherical linear interpolation operator for quaternions.

\subsection{IMU-Odometer Measurement Model}
\label{IMU-Odometer Measurement Model}
The IMU-odometer includes a wheel encoder and an IMU, which consists of an accelerometer and a gyroscope. The raw accelerometer and gyroscope measurements from the IMU, i.e., $\hat{\mathbf{a}}_t$ and $\hat{\boldsymbol{\omega}}_t$, are given by:
\begin{equation} \label{equation4} \begin{gathered} \hat{\mathbf{a}}_t=\mathbf{a}_t+\mathbf{b}_{\mathbf{a}_t}+\mathbf{R}_w^t \mathbf{g}^w+\mathbf{n}_{\mathbf{a}} \\ \hat{\boldsymbol{\omega}}_t=\boldsymbol{\omega}_t+\mathbf{b}_{\boldsymbol{\omega}_t}+\mathbf{n}_{\boldsymbol{\omega}} \end{gathered} \end{equation}
IMU measurements, which are expressed in the IMU coordinates, combine the force countering gravity and the platform dynamics, and are affected by the acceleration bias $\mathbf{b}_{\mathbf{a}_t}$, the gyroscope bias $\mathbf{b}_{\boldsymbol{\omega}_t}$, and additive noise.
As mentioned in VINs-Mono \cite{qin2018vins}, the additive noises in the accelerometer and gyroscope measurements are modeled as Gaussian white noise, $\mathbf{n}_{\mathbf{a}} \sim N\left(\mathbf{0}, \boldsymbol{\sigma}_{\mathbf{a}}^2\right)$, $\mathbf{n}_{\boldsymbol{\omega}} \sim N\left(\mathbf{0}, \boldsymbol{\sigma}_{\boldsymbol{\omega}}^2\right)$. The acceleration bias and gyroscope bias are modeled as random walks, whose derivatives are Gaussian, $\dot{\mathbf{b}}_{\mathbf{a}_t}=\mathbf{n}_{\mathbf{b}_{\mathbf{a}}} \sim N\left(\mathbf{0}, \boldsymbol{\sigma}_{\mathbf{b}_{\mathbf{a}}}^2\right)$, $\dot{\mathbf{b}}_{{\boldsymbol{\omega}}_t}=\mathbf{n}_{\mathbf{b}_{\boldsymbol{\omega}}} \sim N\left(\mathbf{0}, \boldsymbol{\sigma}_{\mathbf{b}_{\boldsymbol{\omega}}}^2\right)$.

The wheel encoder obtains the rotational speed $\tau$ of the shaft according to the pulses received by the counter, and then calculates the speeds of the left rear wheel and the right rear wheel according to $\tau$ and the wheel radius $r$:
\begin{equation} \label{equation5} \begin{gathered} \hat{\mathbf{v}}_{left}=\left[\begin{array}{lll} \hat{\tau}_{left} r_{left} & 0 & 0 \end{array}\right]^T \\ \hat{\mathbf{v}}_{right}=\left[\begin{array}{lll} \hat{\tau}_{right} r_{right} & 0 & 0 \end{array}\right]^T \\ \hat{\tau}_{left}=\tau_{left}+n_{\tau_{left}}, \hat{\tau}_{right}=\tau_{right}+n_{\tau_{right}} \end{gathered} \end{equation}
where $n_{\tau_{left}}$ and $n_{\tau_{right}}$ are the corresponding zero-mean white Gaussian noises of $\tau_{left}$ and $\tau_{right}$, and $\hat{\mathbf{v}}_{left}$ and $\hat{\mathbf{v}}_{right}$ are the measured linear speeds of the two wheels calculated from $\hat{\tau}_\cdot$ and $r_\cdot$. Then the final measurement model of the wheel encoder odometer, expressed in the odometer coordinates, can be defined as:
\begin{equation} \label{equation6} \begin{gathered} \hat{\mathbf{v}}=\frac{\hat{\mathbf{v}}_{left}+\hat{\mathbf{v}}_{right }}{2}+\mathbf{n}_{\mathbf{v}} \\ \mathbf{n}_{\mathbf{v}}=\left[\begin{array}{ccc} \frac{r_{left} n_{\tau_{left}}+r_{right} n_{\tau_{right}}}{2} & 0 & 0 \end{array}\right]^T \end{gathered} \end{equation}
where $\left[\mathbf{n}_{\mathbf{v}}\right]_x$ is the sum of two zero-mean Gaussian variables and is therefore still zero-mean Gaussian, i.e., $\mathbf{n}_{\mathbf{v}} \sim N\left(\mathbf{0}, \boldsymbol{\sigma}_{\mathbf{v}}^2\right)$.

\section{SYSTEM OVERVIEW}
\label{System Overview}
\begin{figure*} \begin{center} \includegraphics[scale=0.5]{framework} \caption{Overview of our LIW-OAM which consists of four main modules: a pre-processing module, an initialization module, a state estimation module and a point registration module.} \label{fig1} \end{center} \end{figure*}
Fig. \ref{fig1} illustrates the framework of our LIW-OAM, which consists of four main modules: pre-processing, initialization, state estimation and point registration. The pre-processing module down-samples the input raw points, and pre-integrates the IMU-odometer measurements at the same frequency as the input sweeps. The initialization module estimates some state parameters including the gravitational acceleration, accelerometer bias, gyroscope bias, and initial velocity. The state estimation module first integrates the IMU-odometer measurements from the last state to predict the current state, and then performs BA based LIW-optimization to refine the state of the current sweep. Finally, the point registration module adds the new points to the map and deletes points that are far away.
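Before detailing each module, the following minimal Python sketch (ours, not the released implementation) illustrates the per-point state interpolation of Sec. \ref{Sweep State Expression} that the LIW-optimization below relies on: translation, velocity and biases are interpolated linearly, while the rotation is interpolated by slerp. The state containers and field names are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_state(t_p, t_b, t_e, state_b, state_e):
    """Interpolate the state at a point's timestamp t_p in [t_b, t_e] (Eq. (3)).

    state_b / state_e are dicts holding the sweep's begin/end states:
      't' (3,) translation, 'q' (4,) quaternion (x, y, z, w),
      'v' (3,) velocity, 'ba' (3,) accel bias, 'bw' (3,) gyro bias.
    """
    alpha = (t_p - t_b) / (t_e - t_b)

    # Linear interpolation for translation, velocity and IMU biases.
    t  = (1.0 - alpha) * state_b['t']  + alpha * state_e['t']
    v  = (1.0 - alpha) * state_b['v']  + alpha * state_e['v']
    ba = (1.0 - alpha) * state_b['ba'] + alpha * state_e['ba']
    bw = (1.0 - alpha) * state_b['bw'] + alpha * state_e['bw']

    # Spherical linear interpolation (slerp) for the rotation.
    rots = Rotation.from_quat([state_b['q'], state_e['q']])
    q = Slerp([0.0, 1.0], rots)(alpha).as_quat()

    return {'t': t, 'q': q, 'v': v, 'ba': ba, 'bw': bw}
\end{verbatim}
With this interpolation, every raw point collected during a sweep can be projected into the world frame (as in the point-to-plane residual of Sec. \ref{BA based LIW-Optimization}) using only the two boundary states as optimization variables.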
\section{SYSTEM DETAILS}
\label{System Details}

\subsection{Pre-Processing}
\label{Pre-Processing}
\subsubsection{Down Sampling}
\label{Down Sampling}
Processing a huge number of 3D points yields a high computational cost. To reduce the computational complexity, we down-sample the input points as follows. We put the points of the current input sweep $S_{i+1}$ into a volume with a 0.5$\times$0.5$\times$0.5 (unit: m) voxel size, and keep only one point per voxel, to obtain the down-sampled sweep $P_{i+1}$. This down-sampling strategy ensures that the density distribution of points is uniform in 3D space after down-sampling.

\subsubsection{Pre-Integration}
\label{Pre-Integration}
Typically, the IMU-odometer sends out data at a much higher frequency than the LiDAR. The pre-integration of all IMU-odometer measurements between two consecutive sweeps $S_i$ and $S_{i+1}$ can well summarize the dynamics of the hardware platform from time $t_{e_i}$ to $t_{e_{i+1}}$, where $e_i$ and $e_{i+1}$ are the end time stamps of $S_i$ and $S_{i+1}$ respectively. In this work, we employ the discrete-time quaternion-based derivation of the IMU-odometer pre-integration approach \cite{liu2019visual}, and handle the IMU bias using the method in \cite{qin2018vins}. Specifically, the pre-integrations between $S_i$ and $S_{i+1}$ in the corresponding IMU coordinates $o_{e_i}$ and $o_{e_{i+1}}$, i.e., $\hat{\boldsymbol{\alpha}}_{e_{i+1}}^{e_i}$, $\hat{\boldsymbol{\eta}}_{e_{i+1}}^{e_i}$, $\hat{\boldsymbol{\beta}}_{e_{i+1}}^{e_i}$, and $\hat{\boldsymbol{\gamma}}_{e_{i+1}}^{e_i}$, are calculated, where $\boldsymbol{\alpha}_{e_{i+1}}^{e_i}$, $\boldsymbol{\beta}_{e_{i+1}}^{e_i}$, $\boldsymbol{\gamma}_{e_{i+1}}^{e_i}$ are the pre-integrations of translation, velocity and rotation from the IMU measurements respectively, and $\boldsymbol{\eta}_{e_{i+1}}^{e_i}$ is the pre-integration of translation from the gyroscope and wheel encoder odometer measurements. In addition, the Jacobians of the pre-integration with respect to the biases, i.e., $\mathbf{J}_{\mathbf{b}_{\mathbf{a}}}^{\boldsymbol{\alpha}}$, $\mathbf{J}_{\mathbf{b}_{\boldsymbol{\omega}}}^{\boldsymbol{\alpha}}$, $\mathbf{J}_{\mathbf{b}_{\mathbf{a}}}^{\boldsymbol{\beta}}$, $\mathbf{J}_{\mathbf{b}_{\boldsymbol{\omega}}}^{\boldsymbol{\beta}}$, $\mathbf{J}_{\mathbf{b}_{\boldsymbol{\omega}}}^{\boldsymbol{\gamma}}$, $\mathbf{J}_{\mathbf{b}_{\mathbf{a}}}^{\boldsymbol{\eta}}$ and $\mathbf{J}_{\mathbf{b}_{\boldsymbol{\omega}}}^{\boldsymbol{\eta}}$, are also calculated according to the error state kinematics.

\subsection{Initialization}
\label{Initialization}
The initialization module aims to estimate all necessary values, including the initial pose, velocity, gravitational acceleration, accelerometer bias and gyroscope bias, for subsequent state estimation. Similar to our previous work SR-LIO \cite{yuan2022sr}, we adopt motion initialization and static initialization for handheld devices and vehicle-mounted devices respectively. Please refer to \cite{yuan2022sr} for more details about our initialization module.

\subsection{State Estimation}
\label{State Estimation}
\subsubsection{State Prediction}
\label{State Prediction}
Whenever a new down-sampled sweep $P_{i+1}$ is completed, we use the IMU-odometer measurements to predict the state at the beginning time stamp of $P_{i+1}$ (i.e., $\boldsymbol{x}_{b_{i+1}}^w$) and the state at the end time stamp of $P_{i+1}$ (i.e., $\boldsymbol{x}_{e_{i+1}}^w$), providing the motion prior for LIW-optimization.
Specifically, the predicted state $\boldsymbol{x}_{b_{i+1}}^w$ (i.e., $\mathbf{t}_{b_{i+1}}^w$, $\mathbf{R}_{b_{i+1}}^w$, $\mathbf{v}_{b_{i+1}}^w$, $\mathbf{b}_{\mathbf{a}_{b_{i+1}}}$ and $\mathbf{b}_{\boldsymbol{\omega}_{b_{i+1}}}$) is assigned as:
\begin{equation} \label{equation7} \boldsymbol{x}_{b_{i+1}}^w=\boldsymbol{x}_{e_i}^w \end{equation}
and $\boldsymbol{x}_{e_{i+1}}^w$ (i.e., $\mathbf{t}_{e_{i+1}}^w$, $\mathbf{R}_{e_{i+1}}^w$, $\mathbf{v}_{e_{i+1}}^w$, $\mathbf{b}_{\mathbf{a}_{e_{i+1}}}$ and $\mathbf{b}_{\boldsymbol{\omega}_{e_{i+1}}}$) is calculated as:
\begin{equation} \label{equation8} \begin{gathered} \mathbf{R}_{n+1}^w=\mathbf{R}_n^w Exp\left(\left(\frac{\hat{\boldsymbol{\omega}}_n+\hat{\boldsymbol{\omega}}_{n+1}}{2}-\mathbf{b}_{\boldsymbol{\omega}_{e_i}}\right) \delta t\right) \\ \mathbf{v}_{n+1}^w=\mathbf{R}_{n+1}^w \mathbf{R}_k^l \hat{\mathbf{v}}_{n+1} \\ \mathbf{t}_{n+1}^w=\mathbf{t}_n^w+\mathbf{v}_n^w \delta t+\frac{1}{2}\left(\frac{\hat{\mathbf{a}}_n+\hat{\mathbf{a}}_{n+1}}{2}-\mathbf{b}_{\mathbf{a}_{e_i}}-\mathbf{R}_n^w \mathbf{g}^w\right) \delta t^2 \end{gathered} \end{equation}
where $\hat{\boldsymbol{\omega}}_{\cdot}$, $\hat{\mathbf{a}}_{\cdot}$ and $\hat{\mathbf{v}}_{\cdot}$ are the measurements from the IMU gyroscope, the IMU accelerometer and the wheel encoder respectively, $\mathbf{g}^w$ is the gravitational acceleration in the world coordinates, $n$ and $n+1$ are two consecutive time instants at which IMU-odometer measurements are obtained during $\left[t_{e_i}, t_{e_{i+1}}\right]$, and $\delta t$ is the time interval between $n$ and $n+1$. We iteratively increase $n$ from $0$ to $\left(t_{e_{i+1}}-t_{e_i}\right) / \delta t$ to obtain $\boldsymbol{x}_{e_{i+1}}^w$. When $n=0$, $\boldsymbol{x}_{n}^w=\boldsymbol{x}_{e_i}^w$. For $\mathbf{b}_{\mathbf{a}_{e_{i+1}}}$ and $\mathbf{b}_{\boldsymbol{\omega}_{e_{i+1}}}$, we set their predicted values as $\mathbf{b}_{\mathbf{a}_{e_{i+1}}}=\mathbf{b}_{\mathbf{a}_{e_i}}$ and $\mathbf{b}_{\boldsymbol{\omega}_{e_{i+1}}}=\mathbf{b}_{\boldsymbol{\omega}_{e_i}}$.

\subsubsection{BA based LIW-Optimization}
\label{BA based LIW-Optimization}
We jointly utilize measurements from the LiDAR, IMU and wheel encoder to optimize the beginning state (i.e., $\boldsymbol{x}_{b_{i+1}}^w$) and the end state (i.e., $\boldsymbol{x}_{e_{i+1}}^w$) of the current sweep $P_{i+1}$, where the variable vector is expressed as:
\begin{equation} \label{equation9} \boldsymbol{\chi}=\left\{\boldsymbol{x}_{b_{i+1}}^w, \boldsymbol{x}_{e_{i+1}}^w\right\} \end{equation}

\textbf{Residual from the LiDAR constraint.} For a point $\mathbf{p}$, we first project $\mathbf{p}$ to the world coordinates to obtain $\mathbf{p}^w$, and then find the 20 nearest points around $\mathbf{p}^w$ from the volume. To search for the nearest neighbors of $\mathbf{p}^w$, we only search in the voxel $V$ to which $\mathbf{p}^w$ belongs, and the 8 voxels adjacent to $V$. The 20 nearest points are used to fit a plane with a normal $\mathbf{n}$ and a distance $d$. Accordingly, we can build the point-to-plane residual $r^{\mathbf{p}}$ for $\mathbf{p}$ as:
\begin{equation} \label{equation10} \begin{gathered} r^{\mathbf{p}}=\omega_{\mathbf{p}}\left(\mathbf{n}^T \mathbf{p}^w+d\right) \\ \mathbf{p}^w=\mathbf{q}_{\mathbf{p}}^w \mathbf{p}+\mathbf{t}_{\mathbf{p}}^w \\ \alpha=\frac{t_{\mathbf{p}}-t_{b_{i+1}}}{t_{e_{i+1}}-t_{b_{i+1}}} \\ \mathbf{t}_{\mathbf{p}}^w=(1-\alpha) \mathbf{t}_{b_{i+1}}^w+\alpha \mathbf{t}_{e_{i+1}}^w \\ \mathbf{q}_{\mathbf{p}}^w=\mathbf{q}_{b_{i+1}}^w . slerp\left(\alpha, \mathbf{q}_{e_{i+1}}^w\right) \end{gathered} \end{equation}
where $\omega_{\mathbf{p}}$ is a weight parameter defined by \cite{ye2019tightly}, $\mathbf{q}_{b_{i+1}}^w$ and $\mathbf{q}_{e_{i+1}}^w$ are the rotations with respect to $\left({\cdot}\right)^w$ at $t_{b_{i+1}}$ and $t_{e_{i+1}}$ respectively, and $\mathbf{t}_{b_{i+1}}^w$ and $\mathbf{t}_{e_{i+1}}^w$ are the translations with respect to $\left({\cdot}\right)^w$ at $t_{b_{i+1}}$ and $t_{e_{i+1}}$ respectively. $\mathbf{q}_{b_{i+1}}^w$, $\mathbf{q}_{e_{i+1}}^w$, $\mathbf{t}_{b_{i+1}}^w$ and $\mathbf{t}_{e_{i+1}}^w$ are all variables to be refined, and their initial values are obtained from Sec. \ref{State Prediction}.

\textbf{Residual from the IMU-odometer constraint.} Considering the IMU-odometer measurements during $\left[t_{b_{i+1}}, t_{e_{i+1}}\right]$, according to the pre-integration introduced in \cite{liu2019visual}, the residual for the pre-integrated IMU-odometer measurements can be computed as:
\begin{equation} \label{equation11} \begin{gathered} {{\mathbf{r}_o}_{{e_{i+1}}}^{b_{i+1}}}= \\ {\left[\begin{array}{c} \mathbf{R}_w^{b_{i+1}}\left(\mathbf{t}_{e_{i+1}}^w-\mathbf{t}_{b_{i+1}}^w+\frac{1}{2} \mathbf{g}^w \Delta t^2-\mathbf{v}_{b_{i+1}}^w \Delta t\right)-\hat{\boldsymbol{\alpha}}_{e_{i+1}}^{e_i} \\ \mathbf{R}_w^{b_{i+1}}\left(\mathbf{v}_{e_{i+1}}^w+\mathbf{g}^w \Delta t-\mathbf{v}_{b_{i+1}}^w\right)-\hat{\boldsymbol{\beta}}_{e_{i+1}}^{e_i} \\ 2\left[\mathbf{q}_{b_{i+1}}^{{w}^{-1}} \otimes \mathbf{q}_{e_{i+1}}^w \otimes\left(\hat{\boldsymbol{\gamma}}_{e_{i+1}}^{e_i}\right)^{-1}\right]_{x y z} \\ \mathbf{R}_w^{b_{i+1}}\left(\mathbf{t}_{e_{i+1}}^w-\mathbf{t}_{b_{i+1}}^w\right)-\mathbf{t}_k^o+\mathbf{R}_w^{b_{i+1}} \mathbf{R}_{e_{i+1}}^w \mathbf{t}_k^o-\hat{\boldsymbol{\eta}}_{e_{i+1}}^{e_i} \\ \mathbf{b}_{\mathbf{a}_{i+1}}-\mathbf{b}_{\mathbf{a}_i} \\ \mathbf{b}_{\boldsymbol{\omega}_{i+1}}-\mathbf{b}_{\boldsymbol{\omega}_i} \end{array}\right]} \end{gathered} \end{equation}
where $[\cdot]_{x y z}$ extracts the vector part of a quaternion $\mathbf{q}$ for the error state representation. At the end of each iteration, we update $\left[\hat{\boldsymbol{\alpha}}_{e_{i+1}}^{e_i}, \hat{\boldsymbol{\beta}}_{e_{i+1}}^{e_i}, \hat{\boldsymbol{\gamma}}_{e_{i+1}}^{e_i}, \hat{\boldsymbol{\eta}}_{e_{i+1}}^{e_i}\right]^T$ with the first order Jacobian approximation \cite{qin2018vins}.

\textbf{Residual from the velocity observation constraint.} As mentioned in Sec. \ref{Introduction}, existing LI-OAM systems lack velocity observations to constrain the velocity during optimization. In our system, we utilize the measurements from the wheel encoder as observations to constrain the velocity:
\begin{equation} \label{equation12} \mathbf{r}_w=\left[\begin{array}{l} \mathbf{r}_{w_{b_{i+1}}} \\ \mathbf{r}_{w_{e_{i+1}}} \end{array}\right]=\left[\begin{array}{l} \mathbf{v}_{b_{i+1}}^w-\mathbf{R}_{b_{i+1}}^w \mathbf{R}_k^l \hat{\mathbf{v}}_{b_{i+1}} \\ \mathbf{v}_{e_{i+1}}^w-\mathbf{R}_{e_{i+1}}^w \mathbf{R}_k^l \hat{\mathbf{v}}_{e_{i+1}} \end{array}\right] \end{equation}
where $\mathbf{R}_k^l$ is the rotation from the wheel encoder coordinates to the LiDAR coordinates, and $\hat{\mathbf{v}}_{b_{i+1}}$ and $\hat{\mathbf{v}}_{e_{i+1}}$ are the velocity measurements from the wheel encoder at $t_{b_{i+1}}$ and $t_{e_{i+1}}$ respectively.

\textbf{Residual from the consistency constraint.} According to CT-ICP \cite{dellenbach2022ct}, $\boldsymbol{x}_{b_{i+1}}^w$ and $\boldsymbol{x}_{e_{i}}^w$ are two states at the same time stamp $t_{b_{i+1}}$ (i.e., $t_{e_i}$).
Logically, $\boldsymbol{x}_{e_{i}}^w$ and $\boldsymbol{x}_{b_{i+1}}^w$ should be the same. Therefore, we build the consistency residual as follows:
\begin{equation} \label{equation13} \mathbf{r}_c=\left[\begin{array}{c} \mathbf{r}_c^{\mathbf{t}} \\ \mathbf{r}_c^{\mathbf{q}} \\ \mathbf{r}_c^{\mathbf{v}} \\ \mathbf{r}_c^{\mathbf{b}_{\mathbf{a}}} \\ \mathbf{r}_c^{\mathbf{b_{\boldsymbol{\omega}}}} \end{array}\right]=\left[\begin{array}{c} \mathbf{t}_{b_{i+1}}^w-\mathbf{t}_{e_i}^w \\ 2\left[{\mathbf{q}_{e_i}^{w}}^{-1} \otimes \mathbf{q}_{b_{i+1}}^w\right]_{x y z} \\ \mathbf{v}_{b_{i+1}}^w-\mathbf{v}_{e_i}^w \\ \mathbf{b}_{\mathbf{a}_{b_{i+1}}}-\mathbf{b}_{\mathbf{a}_{e_i}} \\ \mathbf{b}_{\boldsymbol{\omega}_{b_{i+1}}}-\mathbf{b}_{\boldsymbol{\omega}_{e_i}} \end{array}\right] \end{equation}
where $\mathbf{t}_{b_{i+1}}^w$, $\mathbf{q}_{b_{i+1}}^w$, $\mathbf{v}_{b_{i+1}}^w$, $\mathbf{b}_{\mathbf{a}_{b_{i+1}}}$ and $\mathbf{b}_{\boldsymbol{\omega}_{b_{i+1}}}$ are variables to be optimized.

By minimizing the sum of the point-to-plane residuals, the IMU-odometer pre-integration residuals, the velocity observation residuals and the consistency residuals, we obtain the maximum a posteriori estimation as:
\begin{equation} \label{equation14} \begin{gathered} \boldsymbol{\chi}=\min _{\boldsymbol{\chi}} \\ \left\{\rho\left(\sum_{\mathbf{p} \in P_{i+1}}\left\|r^{\mathbf{p}}\right\|_{\mathbf{P}_L}^2+\left\|{\mathbf{r}_o}_{e_{i+1}}^{b_{i+1}}\right\|_{\mathbf{P}_{e_{i+1}}^{e_i}}^2+\left\|\mathbf{r}_w\right\|^2+\left\|\mathbf{r}_c\right\|^2\right)\right\} \end{gathered} \end{equation}
where $\rho$ is the Huber kernel used to eliminate the influence of outlier residuals. $\mathbf{P}_{e_{i+1}}^{e_i}$ is the covariance matrix of the pre-integrated IMU-odometer measurements, and its inverse is utilized as the weight of the IMU-odometer pre-integration residuals. $\mathbf{P}_L$ is a constant (e.g., 0.001 in our system) indicating the reliability of the point-to-plane residuals, and its inverse is utilized as the weight of the point-to-plane residuals. After finishing LIW-optimization, we selectively add the points of the current sweep to the map.

\subsection{Point Registration}
\label{Point Registration}
The cloud map is stored in a volume, and the size of each voxel is 1.0$\times$1.0$\times$1.0 (unit: m). Each voxel contains a maximum of 20 points. When the state of the current down-sampled sweep $P_{i+1}$ has been estimated, we transform $P_{i+1}$ to the world coordinate system $\left({\cdot}\right)^w$, and add the transformed points into the volume map. If a voxel already has 20 points, no new points can be added to it.

\section{EXPERIMENTS}
\label{Experiments}
\begin{table}[]
\begin{center}
\caption{Datasets for Evaluation}
\label{table1}
\begin{tabular}{c|cc|cc|cc}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{LiDAR} & \multicolumn{2}{c|}{IMU} & \multicolumn{2}{c}{Wheel encoder} \\ \cline{2-7}
& Line & Rate & Type & Rate & Type & Rate \\ \hline
$nclt$ & 32 & 10\,Hz & 9-axis & 100\,Hz & speed & 10\,Hz \\
$kaist$ & 16 & 10\,Hz & 9-axis & 100\,Hz & pulse & 100\,Hz \\ \hline
\end{tabular}
\end{center}
\end{table}
We evaluate our LIW-OAM on the public datasets $nclt$ \cite{carlevaris2016university} and $kaist$ \cite{jeong2019complex}. $nclt$ is a large-scale, long-term autonomous unmanned ground vehicle dataset collected on the University of Michigan's North Campus.
The $nclt$ dataset contains the full data stream from a Velodyne HDL-32E LiDAR, 50\,Hz data from a Microstrain MS25 IMU and 10\,Hz data from the Segway vehicle platform's wheel encoder. The $nclt$ dataset has a much longer duration and a larger amount of data than other datasets, and contains several open scenes, such as a large open parking lot. In addition, 50\,Hz IMU measurements cannot meet the requirements of some systems (e.g., LIO-SAM \cite{shan2020lio}). Therefore, we increase the frequency of the IMU to 100\,Hz by interpolation.
\begin{table}[]
\begin{center}
\caption{Datasets of All Sequences for Evaluation}
\label{table2}
\begin{tabular}{cccc}
\hline
& Name & \begin{tabular}[c]{@{}c@{}}Duration\\ (min:sec)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distance\\ (km)\end{tabular} \\ \hline
\textit{nclt\_1} & 2012-01-08 & 92:16 & 6.4 \\
\textit{nclt\_2} & 2012-01-15 & 110:46 & 7.5 \\
\textit{nclt\_3} & 2012-01-22 & 86:11 & 6.1 \\
\textit{nclt\_4} & 2012-02-02 & 96:39 & 6.2 \\
\textit{nclt\_5} & 2012-02-18 & 88:19 & 6.2 \\
\textit{nclt\_6} & 2012-03-17 & 81:51 & 5.8 \\
\textit{nclt\_7} & 2012-05-11 & 83:36 & 6.0 \\
\textit{nclt\_8} & 2012-05-26 & 97:23 & 6.3 \\
\textit{nclt\_9} & 2012-06-15 & 55:10 & 4.1 \\
\textit{nclt\_10} & 2012-08-04 & 79:27 & 5.5 \\
\textit{nclt\_11} & 2012-08-20 & 88:44 & 6.0 \\
\textit{nclt\_12} & 2013-09-28 & 76:40 & 5.6 \\
\textit{kaist\_1} & urban\_08 & 5:07 & 1.56 \\
\textit{kaist\_2} & urban\_13 & 24:14 & 2.36 \\
\textit{kaist\_3} & urban\_14 & 29:06 & 8.20 \\ \hline
\end{tabular}
\end{center}
\end{table}
The $kaist$ dataset is collected with a human-driven robocar in a variety of longer and larger environments. The robocar is equipped with two 10\,Hz Velodyne VLP-16 LiDARs, a 200\,Hz Xsens MTi-300 IMU and a 100\,Hz RLS LM13 wheel encoder. The two 3D LiDARs are tilted by approximately $45^{\circ}$. For point clouds, we utilize the data from both 3D LiDARs. The datasets' information, including the sensor types and data rates, is listed in Table \ref{table1}. As both datasets are collected with vehicle platforms, we employ static initialization in our system. Details of all the 15 sequences used in this section, including name, duration, and distance, are listed in Table \ref{table2}. For both datasets, we utilize the universal evaluation metric, the absolute trajectory error (ATE), for pose accuracy evaluation. A consumer-level computer equipped with an Intel Core i7-12700 and 32 GB RAM is used for all experiments.
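For clarity, the ATE numbers reported below are root mean square errors of the translational differences between the estimated and ground-truth trajectories. The following is a minimal Python sketch of this computation, assuming the two trajectories have already been associated by timestamp and rigidly aligned; the function and variable names are ours and not from the released code.
\begin{verbatim}
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of the absolute translational error between two associated,
    aligned trajectories given as (N, 3) arrays of positions (unit: m)."""
    err = est_xyz - gt_xyz                      # per-pose translational error
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Example with dummy data: a constant 1 cm offset on each axis gives an
# RMSE equal to the offset's norm, about 0.017 m.
gt = np.cumsum(np.random.randn(100, 3), axis=0)
print(ate_rmse(gt + 0.01, gt))
\end{verbatim}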
\subsection{Comparison with the State-of-the-Arts}
\label{Comparison with the State-of-the-Arts}
\begin{table}[]
\begin{center}
\caption{RMSE of ATE Comparison of State-of-the-art (Unit: m)}
\label{table3}
\begin{tabular}{c|ccccc|c}
\hline
& \begin{tabular}[c]{@{}c@{}}LiLi-\\ OM\end{tabular} & \begin{tabular}[c]{@{}c@{}}LIO-\\ SAM\end{tabular} & LINs & \begin{tabular}[c]{@{}c@{}}Fast-\\ LIO2\end{tabular} & \begin{tabular}[c]{@{}c@{}}SR-\\ LIO\end{tabular} & Ours \\ \hline
nclt\_1 & 60.98 & 1.71 & x & \textbf{1.34} & 1.55 & 1.42 \\
nclt\_2 & 127.5 & 2.12 & x & 1.65 & 1.53 & \textbf{1.46} \\
nclt\_3 & 42.32 & 9.70 & x & 1.91 & 6.72 & \textbf{1.20} \\
nclt\_4 & 40.14 & 1.45 & x & 1.95 & 1.57 & \textbf{1.45} \\
nclt\_5 & x & 5.66 & x & 4.37 & 1.46 & \textbf{1.44} \\
nclt\_6 & 146.2 & x & x & 6.11 & 2.07 & \textbf{1.52} \\
nclt\_7 & 89.98 & x & x & 2.42 & 1.87 & \textbf{1.79} \\
nclt\_8 & 43.46 & x & x & 2.62 & 2.04 & \textbf{1.41} \\
nclt\_9 & 82.66 & 1.51 & x & 2.09 & 2.00 & \textbf{1.31} \\
nclt\_10 & 96.87 & 2.26 & x & 2.43 & 2.15 & \textbf{1.46} \\
nclt\_11 & 207.1 & 10.81 & x & 2.29 & 1.97 & \textbf{1.33} \\
nclt\_12 & 1137.8 & x & x & 2.91 & 2.32 & \textbf{1.55} \\
kaist\_1 & x & x & x & 11.41 & 8.23 & \textbf{2.92} \\
kaist\_2 & x & x & x & x & x & \textbf{3.99} \\
kaist\_3 & x & x & x & x & x & \textbf{46.40} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[]
\begin{center}
\caption{Ablation Study of Embedding Sensors on RMSE of ATE (Unit: m)}
\label{table4}
\begin{tabular}{c|cc|c}
\hline
& LiDAR-only & LI-OAM & LIW-OAM \\ \hline
nclt\_1 & x & x & \textbf{1.42} \\
nclt\_2 & x & \textbf{1.43} & 1.46 \\
nclt\_3 & x & x & \textbf{1.20} \\
nclt\_4 & x & \textbf{1.36} & 1.45 \\
nclt\_5 & x & x & \textbf{1.44} \\
nclt\_6 & x & 1.73 & \textbf{1.52} \\
nclt\_7 & x & x & \textbf{1.79} \\
nclt\_8 & x & 1.65 & \textbf{1.41} \\
nclt\_9 & x & 1.72 & \textbf{1.31} \\
nclt\_10 & x & x & \textbf{1.46} \\
nclt\_11 & x & 76.69 & \textbf{1.33} \\
nclt\_12 & x & 32.84 & \textbf{1.55} \\
kaist\_1 & x & \textbf{2.31} & 2.92 \\
kaist\_2 & x & x & \textbf{3.99} \\
kaist\_3 & x & x & \textbf{46.40} \\ \hline
\end{tabular}
\end{center}
\end{table}
We compare our LIW-OAM with five state-of-the-art LI-OAM systems, i.e., LiLi-OM \cite{li2021towards}, LIO-SAM \cite{shan2020lio}, LINs \cite{qin2020lins}, Fast-LIO2 \cite{xu2022fast} and SR-LIO \cite{yuan2022sr}. It is necessary to emphasize that the LiDAR of $nclt$ takes 130$\sim$140\,ms to complete a $360^{\circ}$ sweep (i.e., the sweep frequency is about 7\,Hz), and SR-LIO requires $360^{\circ}$ sweeps as input. Therefore, for SR-LIO, we package sweeps at 7\,Hz from the full LiDAR data stream of $nclt$. For a fair comparison, we obtain the results of the above systems based on the source code provided by the authors. In addition to the above-mentioned LI-OAM systems, there are also a few open-sourced LIW-OAM systems (e.g., EKF-LOAM \cite{junior2022ekf}). However, we failed to configure the environment of EKF-LOAM following the guidance in its GitHub repository. In addition, the EKF-LOAM paper does not report ATE results on $nclt$ and $kaist$, but only tests on its own dataset. Therefore, the results of EKF-LOAM are not included in Table \ref{table3}.
\begin{table}[]
\begin{center}
\caption{Time Consumption Per Sweep (Unit: ms)}
\label{table5}
\begin{tabular}{c|cc|c}
\hline
& LIW-Optimization & Point Registration & Total \\ \hline
nclt\_1 & 51.1 & 10.3 & 62.7 \\
nclt\_2 & 51.2 & 13.9 & 66.4 \\
nclt\_3 & 54.7 & 9.2 & 65.2 \\
nclt\_4 & 51.8 & 9.7 & 63.0 \\
nclt\_5 & 53.2 & 10.1 & 64.8 \\
nclt\_6 & 51.6 & 9.6 & 62.6 \\
nclt\_7 & 52.3 & 9.7 & 63.3 \\
nclt\_8 & 54.5 & 9.5 & 65.3 \\
nclt\_9 & 55.7 & 8.0 & 65.1 \\
nclt\_10 & 54.5 & 9.2 & 65.1 \\
nclt\_11 & 54.8 & 9.6 & 65.7 \\
nclt\_12 & 54.1 & 9.8 & 65.2 \\
kaist\_1 & 71.9 & 4.2 & 77.2 \\
kaist\_2 & 76.5 & 3.7 & 81.0 \\
kaist\_3 & 68.0 & 4.2 & 73.2 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*} \begin{center} \includegraphics[scale=0.7]{velocity} \caption{The velocity distribution of the X-direction component of LI-OAM and LIW-OAM on the sequence $nclt\_9$. Compared to the high-frequency oscillation curve of LI-OAM, the curve of LIW-OAM is much smoother. This shows that the accuracy of velocity estimation is greatly improved after embedding the velocity observations, because the velocity of a moving vehicle should in theory be continuous and smooth, not oscillating at high frequency.} \label{fig2} \end{center} \end{figure*}
\begin{figure*} \begin{center} \includegraphics[scale=0.7]{visualization} \caption{(a) is the comparison between our estimated trajectory and the ground truth on the exemplar sequence $kaist\_1$. (b) is the local point cloud map of $kaist\_1$.} \label{fig3} \end{center} \end{figure*}
Results in Table \ref{table3} demonstrate that our LIW-OAM outperforms the state-of-the-art LI-OAM systems on almost all sequences in terms of smaller ATE. ``x'' means the system fails to run the entire sequence. Except for our system, the other systems break down on several sequences (especially the $kaist$ sequences), which demonstrates that our method is robust in challenging scenes. It is necessary to emphasize that the Segway vehicle platform enters a long indoor corridor through a door from the outdoor scene at the end of some $nclt$ sequences, yielding significant scene changes. These large scene changes cause great difficulties for ICP point cloud registration, and hence almost all LI-OAM systems break down there. Therefore, we omit the test for these cases, which are usually located at the end of the sequences.

\subsection{Ablation Study of Embedding Sensors}
\label{Ablation Study of Embedding Sensors}
In this section, we examine the impact of embedding the IMU into our LiDAR-only system and embedding the wheel encoder into our BA based LI-OAM system. To this end, we evaluate the ATE of the estimated poses under the following three configurations: 1) using only the LiDAR point-to-plane residuals and consistency residuals to provide constraints for state estimation; 2) using the LiDAR point-to-plane residuals, IMU pre-integration residuals and consistency residuals; 3) using the LiDAR point-to-plane residuals, IMU-odometer pre-integration residuals, velocity observation residuals and consistency residuals. Table \ref{table4} shows the comparison results. Although the accuracy of LIW-OAM is not the best on $nclt\_2$, $nclt\_4$ and $kaist\_1$, it is very close to the best. On the other sequences, embedding the wheel encoder achieves the best performance.
In addition, both LiDAR-only and LI-OAM break down on many sequences, while LIW-OAM runs successfully on all sequences. This demonstrates that embedding a wheel encoder can greatly improve the robustness of the BA based LI-OAM framework.

\subsection{Time Consumption}
\label{Time Consumption}
We evaluate the runtime breakdown (unit: ms) of our system for all sequences. In general, the most time-consuming modules are the BA based LIW-optimization module and the point registration module. Therefore, for each sequence, we test the time cost of the above two modules and the total time for handling a sweep. Results in Table \ref{table5} show that our LIW-OAM takes 60$\sim$80\,ms to handle a sweep, while the time interval between two consecutive input sweeps is 100\,ms. This means that our system not only runs in real time, but also saves 20$\sim$40\,ms per sweep.

\subsection{Evaluation of Velocity}
\label{Evaluation of Velocity}
Introducing the wheel encoder measurements, which provide velocity observations, yields the greatest accuracy improvement for the estimated velocity, as the raw LI-OAM framework does not have any velocity observations. In practice, it is difficult to obtain ground-truth velocity values. However, we can still make a qualitative evaluation of the velocity accuracy based on kinematic assumptions. In theory, the velocity of a moving vehicle should be continuous and smooth, not oscillating at a high frequency. Therefore, the smoother the curve of the velocity as a function of time, the better the curve fits this kinematic assumption. As illustrated in Fig. \ref{fig2}, compared to the high-frequency oscillation curve of LI-OAM, the curve of LIW-OAM is much smoother. This shows that the accuracy of velocity estimation is greatly improved after embedding the velocity observations.

\subsection{Visualization of the Map}
\label{Visualization for map}
We also visualize the trajectories and point cloud maps estimated by our LIW-OAM. The comparison between our estimated trajectory and the ground truth on the sequence $kaist\_1$ is shown in Fig. \ref{fig3} (a), where the estimated trajectory and the ground truth almost exactly coincide. Fig. \ref{fig3} (b) shows that the map reconstructs local structures with sufficient accuracy, and the distribution of the points is also uniform.

\section{CONCLUSION}
\label{Conclusion}
In this work, we proposed LIW-OAM, an accurate and robust BA based LiDAR-inertial-wheel framework for real-time state estimation and mapping. Compared with existing LI-OAM systems, the involvement of a wheel encoder provides velocity measurements as an extra observation, which greatly enhances the accuracy and robustness of state estimation. Experimental results on the $nclt$ and $kaist$ datasets demonstrate that our LIW-OAM outperforms existing state-of-the-art LI-OAM systems in terms of smaller ATE. In addition, we do not require a magnetometer, which makes our system compatible with hardware platforms equipped with user-level IMUs.
\bibliographystyle{IEEEtrans}
{ "arxiv_id": "2302.14335", "language": "en", "timestamp": "2023-03-01T02:09:56", "url": "https://arxiv.org/abs/2302.14335", "yymm": "2302" }
\section{Introduction}
Person re-identification (re-ID) aims at identifying persons across different camera views, which is very important in many applications, such as intelligent surveillance, cross-camera tracking and smart cities. While re-ID has attracted great research interest and gained considerable development in recent years, there still exist some challenges~\cite{yan2021occluded, yang2021learning}, such as blur, low resolution, occlusion, illumination and viewpoint variation. These factors cause the intra-class distance of samples to be larger than the inter-class distance, which makes it challenging to retrieve pedestrians of the correct identities.
\begin{figure}[t!] \begin{center} \includegraphics[width=8cm]{img/intro.png} \end{center} \caption{Illustration of DC-Former for representation learning. On the left, each circle with dots denotes an embedding space. DC-Former uses multiple embeddings to represent each sample, and a self-diverse constraint is imposed on these embeddings to push them apart. Finally, DC-Former obtains multiple diverse embedding subspaces for representation. Each subspace is more compact than the original space, which increases the identity density of the embedding space and helps the model improve its discrimination of similar classes. The figures on the right, visualized by Grad-CAM~\cite{selvaraju2017grad}, show that the diverse embeddings from DC-Former focus on multiple different discriminative regions, and their fusion provides more fine-grained information.} \label{fig:inro} \end{figure}

Plenty of efforts~\cite{transreid, AutoLoss_2022_CVPR, dpm2022lei} have been made recently to improve the performance of re-ID, among which increasing the amount of training data may be the most powerful. On the one hand, adding more instances for each identity helps the model recognize a person under different circumstances, extracting the most common and discriminative features for the same class and thus reducing the intra-class distance. On the other hand, adding more identities means there is a higher chance of placing more similar (easy-to-confuse) classes in the embedding space, which helps the model extract discriminative features with larger inter-class distance for similar classes. Therefore, more identities and more instances for each identity, resulting in larger inter-class distance and smaller intra-class distance, make the re-ID task easier. As mentioned above, great potential lies in data. However, due to limited data, existing works concentrate more on other aspects, such as stronger backbones~\cite{swin}, metric losses~\cite{sun2020circle}, pre-trained models~\cite{luperson}, etc. As far as we know, few studies consider the re-ID task from the perspective of data except for~\cite{grayscale,RandomErasing}, all of which are data augmentations that ultimately increase the instances of each identity, and no study tries to increase the number of identities. Directly increasing the number of identities is impractical because it amounts to adding more labeled data, which is expensive; but if there is a way to simulate an increase in the number of identities, it will also improve re-ID performance. Here we hypothesize that increasing the number of identities is equivalent to increasing the identity density in a given space. Reducing the size of the embedding space makes all the identities more compact, so the relative number of identities (identity density) increases.
In a more compact embedding space, the reduction of inter-class distance makes it more difficult for the model to distinguish samples from similar classes. Therefore, more discriminative and robust information is extracted to ensure that the classifier correctly identifies similar classes. In this paper, we propose a Diverse and Compact Transformer (DC-Former) that can achieve an effect similar to increasing the identities of the training data. As shown in Figure~\ref{fig:inro}, it compresses the embedding space by dividing the original space into multiple subspaces. The more compact representations in the compressed embedding subspaces help the model extract more discriminative features to identify similar classes. And since the embeddings of different subspaces are diverse, their fusion contains more information that can further improve performance. Specifically, multiple class tokens (CLSes) are used in the vision transformer, and each CLS is supervised by an identity loss to obtain multiple representations. Then, a self-diverse constraint (SDC) is applied to the CLSes to push their distributions as far apart as possible. In this way, the original space is divided into multiple subspaces. Due to the different learning status of different CLSes when dividing the space, some CLSes are pushed far apart while others remain very close. A dynamic weight controller (DWC) is further designed to balance the relative importance among them during training. Finally, each compact subspace learns a more robust representation. The experimental results of our method are promising, surpassing previous state-of-the-art methods on three commonly used person re-ID benchmarks. The contributions of this paper are summarized as follows:
\begin{itemize}
\item We propose DC-Former to obtain multiple diverse and compact embedding subspaces; each embedding of these compact subspaces is more robust and discriminative for identifying similar classes, and the fusion of these diverse embeddings can further improve re-ID performance.
\item We propose a self-diverse constraint (SDC) to make the embedding subspaces represented by the class tokens non-overlapping, and devise a dynamic weight controller (DWC) to balance the relative importance among multiple class tokens during training.
\item Our method surpasses previous methods and sets the state-of-the-art on three person re-ID benchmarks including \emph{MSMT17}, \emph{Market-1501} and \emph{CUHK03}.
\end{itemize}
\begin{figure*}[t] \begin{center} \includegraphics[scale=0.5]{img/framework.pdf} \end{center} \caption{The framework of \MethodName. Multiple class tokens are concatenated with patch embeddings, added with positional embeddings, and fed into the transformer encoder. A self-diverse constraint is employed on these class tokens in the last transformer layer to push them far away from each other, leading to diverse representation spaces. Then each of them is supervised by a re-ID head, which contains a triplet loss and a classification loss. During training, a dynamic weight controller is used to dynamically adjust the constraint loss for easier optimization. $f_a$, $f_b$ and $f_c$ are different representations of the same object.} \label{fig:arch} \end{figure*}

\section{Related Work}
\subsection{Image-based re-ID}
Re-ID can be conducted on either images \cite{transreid} or videos \cite{zhao2021phd}. Recent image-based tasks mainly focus on person re-ID and vehicle re-ID.
Studies of person re-ID have paid attention to feature representation learning \cite{chen2019self,suh2018part} and deep metric learning \cite{zheng2017person,deng2018image}. Typically, there are three main categories of feature learning strategy, including global features \cite{chen2019self}, local features \cite{suh2018part} and auxiliary features (e.g., domain information \cite{lin2017consistent}, semantic attributes \cite{lin2019improving}, viewpoint information \cite{zhu2020aware}). As for deep metric learning, many existing works have designed loss functions to guide feature representation learning, which can be divided into three groups, i.e., identity loss \cite{zheng2017person}, verification loss \cite{deng2018image} and triplet loss \cite{hermans2017defense}.

\subsection{Representation Learning in re-ID}
Re-ID is a fine-grained task with large intra-class distance and small inter-class distance. Extracting discriminative features to distinguish similar classes is challenging. To minimize the intra-class distance and maximize the inter-class distance, proxy-based losses \cite{sce,imagenet} and pair-based losses \cite{triplet,hermans2017defense} are commonly used to push different classes apart and pull the same class together. Some works use attention-based methods \cite{Huynh_2020_CVPR} to discover discriminative features or enhance low-discriminative features, which helps discriminative feature representation. For example, DAM \cite{dam} iteratively identifies insufficiently trained elements and improves them. Some works \cite{RandomErasing,srivastava2014dropout} impose regularization operations (e.g., random erasing and dropout) to prevent overfitting. Self-supervised representation learning \cite{he2019moco,chen2020mocov2,dino,CCL} uses contrastive losses to maximize the agreement of features from the same image under different augmentations.

\subsection{Transformer-based re-ID}
Before the vision transformer, CNN-based methods had achieved dominant performance on re-ID tasks. Methods like PCB~\cite{pcb}, MGN~\cite{mgn}, etc., partition an image into several stripes to obtain local feature representations for each stripe at multiple granularities. With the success of the transformer in the field of computer vision, it is also widely used in person re-ID. Compared with CNN-based methods, ViT can keep more detailed information because there are no convolution and downsampling operators, which makes it better suited for fine-grained tasks like re-ID. TransReID~\cite{transreid} is the first pure transformer-based method for re-ID; it proposes a jigsaw patches module (JPM), which shuffles patch embeddings and re-groups them for further feature learning to extract several local features, and aggregates them to obtain a robust feature with global context. TransReID-SSL \cite{transreid-ssl} uses a massive person re-ID dataset, $LUPerson$ \cite{luperson}, to train a stronger pre-trained model with DINO \cite{dino}. Many works~\cite{zhu2021aaformer,PAT,oh-former,TPM} are devoted to extracting part-level representations with transformers. For example, AAformer \cite{zhu2021aaformer} uses additional learnable vectors of `part tokens' to learn part representations by clustering the patch embeddings into several groups, and integrates the parts into the self-attention for alignment. Some methods \cite{PAT,TPM} use transformers to learn several different part-aware masks to obtain partial features. And some works aim to fuse features at different granularities.
For example, HAT \cite{HAT} feeds hierarchical features at different granularities from a CNN backbone into a transformer to aggregate them. These methods have one thing in common: multiple different features are extracted and integrated to obtain a more robust representation. However, they impose constraints on specific goals, such as focusing only on local receptive fields or different granularities, which limits their ability to extract the representations the model really needs. It is insightful to let the model itself learn multiple feature spaces through a certain implicit constraint.

\section{Methodology}
\subsection{Overview}
To enhance the identity density and increase the ability to discriminate similar classes, the original embedding space is divided into multiple diverse and compact subspaces. The proposed method consists of three parts: a ViT-based network with multiple class tokens to produce multiple embedding spaces, a self-diverse constraint loss (SDC) to push these embedding spaces far away from each other, and a dynamic weight controller (DWC) to balance the relative importance among class tokens during training as the token number increases.

\subsection{Network Architecture}
\label{sec:3.1}
To construct multiple different embedding spaces for a single input image, an architecture with multiple parallel output features is required. Most multi-feature representation methods extract different features from different granularities or different partial regions with multiple branches \cite{mgn,pcb}. The multiple features generated by these branches are often independent and have no interaction with each other during training, which may cause the multiple embedding spaces to overlap and become homogeneous. In the vision transformer, thanks to the stacked self-attention modules, information flows from the patch embeddings to one class token, layer by layer, gradually and autonomously. The class token acts as an information collector here: it receives information from each patch and outputs a summary according to its prior knowledge (learnable parameters). It is natural to come up with the idea that if there exist multiple class tokens acting as multiple information collectors, there will be a chance to gather different information from the patch embeddings. Then, multiple different embedding spaces can be supported by multiple class tokens. Therefore, a ViT with additional class tokens is chosen as the main architecture.

Figure~\ref{fig:arch} illustrates the proposed framework. The input image is divided into $H \times W$ patches, and then each patch is mapped to a vector of dimension $D$ through a trainable linear projection. The output of this projection is denoted as the patch embeddings $P^{C \times D}$, where $C$ is the number of patches. Then multiple learnable embeddings of dimension $D$ are concatenated as a sequence, denoted as the class tokens $f^{N\times D}$, where $N$ is the number of class tokens. After that, the class tokens and patch embeddings are concatenated to obtain a vector sequence $X^{(N + C) \times D}=[f^{N\times D}; P^{C \times D}]$. Next, this sequence, added with a learnable positional embedding sequence, is fed into the transformer layers to obtain multiple representations. Notably, in our design, the multiple class tokens are mutually visible in the self-attention layers, which helps the model converge because intermediate features are shared. Each class token receives information not only from all the patch embeddings but also from the other class tokens, which improves the information acquisition efficiency.
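The following simplified PyTorch sketch illustrates the input construction described above: $N$ learnable class tokens are prepended to the patch embeddings, a learnable positional embedding is added, and the whole sequence passes through standard transformer encoder layers in which class tokens and patches are mutually visible. This is an illustrative sketch of ours, not the exact training configuration; in particular, it uses plain non-overlapping $16\times16$ patches, whereas the experiments below use overlapping patch embedding.
\begin{verbatim}
import torch
import torch.nn as nn

class MultiTokenViT(nn.Module):
    """Simplified ViT backbone with N class tokens (illustrative sketch)."""
    def __init__(self, num_tokens=2, num_patches=128, dim=768, depth=12, heads=12):
        super().__init__()
        self.patch_proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # linear patch projection
        self.cls_tokens = nn.Parameter(torch.zeros(1, num_tokens, dim))  # N learnable class tokens
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens + num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.num_tokens = num_tokens

    def forward(self, images):                    # images: (B, 3, 256, 128)
        p = self.patch_proj(images)               # (B, dim, 16, 8)
        p = p.flatten(2).transpose(1, 2)          # (B, C, dim) patch embeddings
        cls = self.cls_tokens.expand(p.size(0), -1, -1)
        x = torch.cat([cls, p], dim=1) + self.pos_embed
        x = self.encoder(x)                       # class tokens and patches attend to each other
        return x[:, :self.num_tokens]             # (B, N, dim): one embedding per class token
\end{verbatim}
At inference, the $N$ output token embeddings can be concatenated to form the final representation of an image.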
\begin{figure}[] \begin{center} \subfigure[w/o SDC]{ \includegraphics[width=4cm]{img/tsne_a.png} } \hspace{-0.4cm} \subfigure[with SDC]{ \includegraphics[width=4cm]{img/tsne_b.png} } \caption{Feature distributions of two class tokens visualized by t-SNE. Different colors represent different classes; the marker $\star$ stands for feature points from one class token and $\bigcirc$ from the other. The data points are sampled from the \emph{MSMT17} test set. (a) Without SDC, the feature points from the two class tokens heavily overlap, which implies that the embedding spaces of the two class tokens are very close. (b) With SDC, the feature points from the two class tokens can easily be separated into two parts without any overlap, and each embedding space becomes more compact.} \label{fig:feat_space} \end{center} \end{figure}

\subsection{Self-diverse Constraint Loss}
\label{sce:diverse_loss}
If there are multiple different embeddings to represent each sample, the embedding space becomes more compact, which helps the model improve the embeddings' discrimination of similar classes. With multiple class tokens, the proposed framework can output multiple embeddings in parallel. But in practice, the learned embedding spaces overlap with each other because the class tokens are homogeneous in structure, their learning goals are the same, and there is no external force to push them to be different. One way to learn different embedding spaces is to change the learning objectives: DeiT~\cite{deit} proposes a distillation token to learn the output logits of another CNN model, which requires training two models and much manual effort, making the training procedure complex. Considering that a class token is computed as a weighted sum of $V$, although the multiple class tokens share the same K-V pairs from the patch embeddings $P^{C \times D}$, they can still receive different attentions from $P^{C \times D}$ if the query $Q$ is different. In other words, the class tokens can learn different information as long as they are different. Here we hypothesize that the more different the class tokens are, the farther apart their embedding spaces are, which pushes overlapping embedding spaces away from each other. Not only do we want the class tokens to be different from each other, but we also want to maximize the difference between them. Under the cosine similarity metric, if two vectors are orthogonal, i.e., $cos(*,*)=0$, they are uncorrelated and the distance between them is maximal. If the class tokens are orthogonal to each other, each of their embedding spaces is compressed more tightly in the finite space to ensure their mutual distances are maximized. We propose a self-diverse constraint loss (SDC) to constrain the relationship between class tokens. It forces the class tokens to be orthogonal to each other and can be expressed as:
\begin{equation} \label{loss:sdc} L_{sdc} = \frac{1}{C_{N}^{2}}{\textstyle{\sum_{i}^{}} }^{} {\textstyle \sum_{j}^{}}\nu _{ij}, \ i<j, \ \ i,j=1,...,N \end{equation}
where $\nu _{ij} = \left | cos(f_i, f_j) \right |$, and $f_i$, $f_j$ denote any two class tokens of $f^{N\times D}$. The self-diverse constraint loss is imposed on these class tokens to make sure that the embedding subspaces are far apart from each other. It makes each embedding subspace compact and helps the model extract more discriminative features to identify similar classes.
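A minimal PyTorch sketch of this constraint as we read Eq.~\ref{loss:sdc} is given below: the loss is the mean absolute cosine similarity over all class-token pairs, here additionally averaged over the batch; the softmax re-weighting of the dynamic weight controller introduced later is omitted. The function and variable names are ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def sdc_loss(tokens):
    """Self-diverse constraint: mean |cos| over all class-token pairs.

    tokens: (B, N, D) class-token embeddings for a batch of images.
    """
    t = F.normalize(tokens, dim=-1)                  # unit-normalize each token
    cos = torch.einsum('bnd,bmd->bnm', t, t).abs()   # (B, N, N) pairwise |cosine|
    n = tokens.size(1)
    iu = torch.triu_indices(n, n, offset=1)          # indices of pairs with i < j
    return cos[:, iu[0], iu[1]].mean()               # average over pairs and batch

# Example: two class tokens per image.
loss = sdc_loss(torch.randn(8, 2, 768))
\end{verbatim}
Driving this loss toward zero encourages every pair of class tokens to become orthogonal, so that each token spans its own compact subspace.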
The representations from different embedding subspaces contain not only identity information, but also features from different perspectives, i.e., coarse/fine-grained granularity, global/local regions, and other unrecognized complementary aspects. Their fusion can further facilitate robust and perturbation-invariant feature representation for re-ID tasks.
\begin{table*}[t]
\caption{Comparisons with state-of-the-art methods on person re-ID benchmarks. ${368\uparrow}$ and ${384\uparrow}$ denote that the input images are resized to $368 \times 128$ and $384 \times 128$, otherwise $256 \times 128$. Best results for previous methods are underlined and the best of our methods are labeled in bold.}
\label{tab:sota_person}
\vspace{-0.2cm}
\centering
\begin{tabular}{l|c|cc|cc|cc|cc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Methods}} & \multicolumn{1}{c|}{\multirow{2}{*}{Publications}} & \multicolumn{2}{c|}{\emph{MSMT17}} & \multicolumn{2}{c|}{\emph{Market-1501}} & \multicolumn{2}{c|}{\emph{CUHK03-l}} & \multicolumn{2}{c}{\emph{CUHK03-d}}\\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & mAP & R1 & mAP & R1 & mAP & R1 & mAP & R1 \\ \hline
PFD~\cite{wang2022pfd} & AAAI & 64.4 & 83.8 & 89.7 & 95.5 & - & - & - & - \\
TransReID~\cite{transreid} & ICCV & 67.4 & 85.3 & 88.9 & 95.2 & - & - & - & - \\
DPM~\cite{dpm2022lei} & ACM'MM & - & - & 89.7 & 95.5 & - & - & - & - \\
GASM$^{384\uparrow}$~\cite{GASM} & ECCV & 52.5 & 79.5 & 84.7 & 95.3 & - & - & - & - \\
CDNet$^{384\uparrow}$~\cite{CDNet_2021_CVPR} & CVPR & 54.7 & 78.9 & 86.0 & 95.1 & - & - & - & - \\
AutoLoss$^{384\uparrow}$~\cite{AutoLoss_2022_CVPR} & CVPR & 63.0 & 83.7 & {\ul 90.1} & {\ul 96.2} & 74.3 & 75.6 & - & -\\
AAformer$^{384\uparrow}$~\cite{zhu2021aaformer} & - & 63.2 & 83.6 & 87.7 & 95.4 & {\ul 77.8} & {\ul 79.9} & {\ul 74.8} & {\ul 77.6}\\
OH-former$^{368\uparrow}$~\cite{oh-former} & - & 69.2 & {\ul 86.6} & 88.7 & 95.0 & - & - & - & - \\
TransReID$^{384\uparrow}$~\cite{transreid} & ICCV & {\ul 69.4} & 86.2 & 89.5 & 95.2 & - & - & - & - \\ \hline
\MethodName & - & 69.8 & 86.2 & 90.4 & 96.0 & 79.4 & 81.6 & 77.5 & \textbf{80.1} \\
\MethodName$^{384\uparrow}$ & - & \textbf{70.7} & \textbf{86.9} & \textbf{90.6} & 96.0 & \textbf{83.3} & \textbf{84.4} & \textbf{77.5} & 79.6 \\ \hline
\end{tabular}
\end{table*}

\subsection{Dynamic Weight Controller}
\label{sec:3.2}
To obtain more compact and robust features, more class tokens are needed to produce more distinct embedding subspaces. However, as the number of class tokens increases, optimization becomes harder because there are more pairs of class tokens, each of which is required to be orthogonal. In experiments, we find that when the number of class tokens increases beyond a certain number, the loss in Eq.~\ref{loss:sdc} cannot be minimized as expected. Although some of the pairs have low cosine similarity, others are still very similar (cosine similarity close to 1). In effect, the model learns fewer distinct embedding subspaces than expected, and the class token pairs with higher self-diverse constraint loss are not optimized well during training. The reason is that randomness in the training process makes it easier for some pairs to be pushed farther apart while making it harder for others. Therefore, it is necessary to change the relative importance among class token pairs in the self-diverse constraint loss during training. We propose a dynamic weight controller (DWC) to adjust the loss weight of each pair on the fly during training.
Instead of simply averaging the pair losses as in Eq.~\ref{loss:sdc}, the loss of each pair is re-weighted by its own softmax-normalized value. The weight of each pair is defined as: \begin{equation} \omega_{ij} = \frac{\exp(\nu_{ij})}{\sum_{m<n} \exp(\nu_{mn})}, \quad m,n=1,\ldots,N \end{equation} The balanced self-diverse constraint loss is then defined as: \begin{equation} L_{SDC} = \sum_{i<j} \omega_{ij}\nu_{ij}, \quad i,j=1,\ldots,N \end{equation} Pairs with smaller cosine similarities receive smaller weights, while pairs with larger similarities receive larger weights, making the model focus more on the similar pairs. In this way, all pairs are optimized more evenly so that they can all become orthogonal to each other. \subsection{Objective Function} \label{sec:3.3} During training, to ensure that each class token can distinguish identities, each token is supervised by a cross-entropy classification loss (ID loss) after being normalized by BNNeck~\cite{Luo_2019_CVPR_Workshops}. To pull samples of the same class closer and push samples of different classes apart, the triplet loss with a soft margin is used to mine hard examples in each embedding subspace and is calculated as: \begin{equation} L_{triplet} = \log\left[1+\exp\left(\| f_a - f_p \|_2^2 - \| f_a - f_n \|_2^2 \right)\right] \label{eq:1} \end{equation} where $f_a$, $f_p$, and $f_n$ denote the features of the anchor, positive, and negative samples, respectively. The overall objective function is: \begin{equation} L_{total} = \frac{1}{N} \sum_{i=1}^{N} (L_{ID}^{i} + L_{triplet}^{i}) + \lambda L_{SDC} \label{eq:2} \end{equation} During inference, all the class tokens are concatenated to represent an image. \section{Experiments} \subsection{Experimental settings} \textbf{Implementation Details.} We adopt ViT-B/16~\cite{vit} as our backbone; it contains 12 transformer layers with a hidden size of 768. Overlapping patch embedding (step size = 12) and SIE~\cite{transreid} are also used in our experiments. All images are resized to $256 \times 128$ unless otherwise specified. The training images are augmented with random horizontal flipping, padding, random cropping, random erasing~\cite{RandomErasing}, and random grayscale~\cite{grayscale}. The initial weights of the models are pre-trained on ImageNet. The batch size is set to 64 with 4 images per ID. The SGD optimizer is employed with a momentum of 0.9 and a weight decay of $1\times10^{-4}$. The learning rate is initialized to 0.032 with cosine learning rate decay. All experiments are performed on 4 Nvidia Tesla V100 GPUs. \textbf{Datasets and Evaluations.} The proposed method is evaluated on three widely used person re-ID benchmarks, i.e., \emph{MSMT17} \cite{msmt}, \emph{Market-1501} \cite{market}, and \emph{CUHK03} \cite{cuhk} (with both labeled and detected bounding boxes). Mean Average Precision (mAP) and the Cumulative Matching Characteristic (CMC) curve are used to evaluate the performance on re-ID tasks. \begin{figure}[!t] \begin{center} \includegraphics[width=8cm]{img/gradcam_token2.png} \caption{Grad-CAM visualization of attention maps on \emph{Market-1501}. (a) Input image. (b) Baseline. (c)-(d) Two class tokens without SDC. (e)-(f) Two class tokens with SDC. As can be seen, the self-diverse constraint loss makes multiple class tokens focus on different discriminative regions.} \label{fig:grad_cam_2tokens} \end{center} \end{figure} \subsection{Comparisons with State-of-the-Arts} To verify the effectiveness of the proposed method, experiments are conducted on three commonly used person re-ID benchmarks in Table~\ref{tab:sota_person}.
On \emph{MSMT17}, our method outperforms previous SOTA methods (e.g., TransReID) by a large margin, especially on mAP (+2.4\%), at $256\times 128$ resolution, and also achieves the best performance at the higher $384\times 128$ resolution. On \emph{Market-1501}, our method achieves the best performance on mAP and comparable performance on Rank-1. On \emph{CUHK03}, our method achieves absolute superiority, outperforming the previous SOTA method (AAformer) by a large margin on both mAP (+4.2\%) and Rank-1 (+2.5\%). \begin{table}[] \caption{The ablation study of SDC on \emph{MSMT17}. $T_1$ and $T_2$ denote the two class tokens, and $Cat.$ denotes the concatenation of the two class tokens.} \label{tab:SDC_person} \centering \begin{tabular}{ccccccc} \hline \multirow{2}{*}{} & \multicolumn{3}{c}{mAP} & \multicolumn{3}{c}{Rank-1} \\ & $T_1$ & $T_2$ & $Cat.$ & $T_1$ & $T_2$ & $Cat.$ \\ \hline Baseline & - & - & 66.1 & - & - & 84.6 \\ w/o SDC & 66.9 & 66.9 & 66.9 & 84.8 & 84.8 & 84.8 \\ with SDC & 67.9 & 68.1 & \textbf{68.3} & 85.3 & 85.4 & \textbf{85.5} \\ \hline \end{tabular} \end{table} \subsection{Ablation Study} Experiments are conducted to study the effectiveness of the self-diverse constraint loss (SDC), the intra-class and inter-class distances of the compressed embedding space, the hyper-parameters $\lambda$ and token number $N$, the effectiveness of the dynamic weight controller (DWC), and the effectiveness of training with a smaller number of identities. \textbf{Impact of Multiple Class Tokens and SDC.} The effectiveness of multiple class tokens is validated on \emph{MSMT17} in Table~\ref{tab:SDC_person}. The baseline has only one class token. Adding one additional class token to the baseline (two class tokens in total) provides a +0.8\% mAP improvement on \emph{MSMT17}. During training, each class token aggregates information not only from the patch embeddings but also from the other class tokens, which improves the information acquisition efficiency. However, in a further study, we find that the cosine similarity between these two class tokens is very high (0.999), which implies that there is almost no difference between them, so fusion brings no improvement; continuing to increase the number of tokens also does not improve the performance. With SDC imposed on these two class tokens, their cosine similarity drops to 0.007, which is very close to 0, implying that the two class tokens have learned two non-overlapping representation subspaces. Figure~\ref{fig:feat_space} illustrates the feature distributions of the two class tokens, which are well separated from each other. Also, the performance of both class tokens improves by a large margin, especially on mAP (more than +1.0\%), which means both tokens have learned more robust features. After concatenation, feature fusion further improves the performance, reaching 68.3\% mAP and 85.5\% Rank-1, which is +2.2\% mAP and +0.9\% Rank-1 higher than the baseline. The attention maps visualized in Figure~\ref{fig:grad_cam_2tokens} show that both tokens correctly attend to the foreground of the object. Moreover, the two tokens represent different embedding spaces, and the information they represent also differs. Compared with the baseline, the two class tokens with SDC have captured more fine-grained features. The fusion of multiple class tokens helps the model learn more discriminative representations.
\begin{figure}[] \centering \subfigure[Baseline]{ \vspace{-0.2cm} \includegraphics[width=4cm]{img/pair_dist/vit.png} } \vspace{-0.2cm} \hspace{-0.4cm} \subfigure[$Cat(T_1,T_2)$]{ \includegraphics[width=4cm]{img/pair_dist/DRformer_cat.png} } \subfigure[$T_1$]{ \includegraphics[width=4cm]{img/pair_dist/DRformer_1.png} } \hspace{-0.4cm} \subfigure[$T_2$]{ \includegraphics[width=4cm]{img/pair_dist/DRformer_2.png} } \caption{The distances of positive and negative pairs in the \emph{MSMT17} test set. A positive/negative pair denotes that the two samples from $Query$ and $Gallery$ belong to the same/different class(es). For better visualization, only a randomly sampled subset of negative pairs is shown in this figure.} \label{fig:distance} \end{figure} \begin{table}[] \caption{The confusion of positive and negative pairs on \emph{MSMT17}. $Confusion$ denotes the number of overlapping pairs in Figure~\ref{fig:distance}.} \label{tab:distance} \centering \vspace{-0.2cm} \begin{tabular}{ccc} \hline & $Confusion \downarrow$ & mAP/R1 (\%) $\uparrow$ \\ \hline Baseline & 26,469 & 66.1/84.6 \\ $T_1$ & 25,738 & 67.9/85.3 \\ $T_2$ & 25,064 & 68.1/85.4 \\ $Cat.$ & 24,390 & 68.3/85.5 \\ \hline \end{tabular} \end{table} \textbf{Intra-class and inter-class distance.} DC-Former divides the original embedding space into multiple compact subspaces, reducing the intra-class distance but also reducing the inter-class distance. To verify the effectiveness of the compact space produced by DC-Former, the intra-class and inter-class distances of DC-Former's embeddings are calculated and visualized in Figure~\ref{fig:distance}, and the confusion of positive and negative pairs in Figure~\ref{fig:distance} is further reported in Table~\ref{tab:distance}. Compared to the baseline, the embedding space of each token in DC-Former is smaller, as is that of their concatenation. Moreover, the embedding space of $T_2$ is more compact than that of $T_1$, so $T_2$ achieves higher performance. Furthermore, the confusion of DC-Former is lower than that of the baseline, which means that the compact embedding space pulls embeddings of the same class together more tightly than embeddings of different classes. \begin{figure}[] \centering \subfigure[Impact of $\lambda$]{ \includegraphics[width=3.5cm]{img/lambda.png} } \hspace{-0.4cm} \subfigure[Impact of $N$]{ \includegraphics[width=4.5cm]{img/N.png} } \caption{Ablation studies on \emph{MSMT17}: impact of the two hyper-parameters of SDC.} \label{fig:ablation} \end{figure} \begin{figure}[!t] \centering \vspace{-0.2cm} \subfigure[w/o DWC]{ \includegraphics[width=4cm]{img/train_loss.png} \label{fig:loss_w/o_dwc} } \vspace{-0.2cm} \hspace{-0.4cm} \subfigure[with DWC]{ \vspace{-0.2cm} \includegraphics[width=4cm]{img/train_loss+dwc.png} } \subfigure[mAP]{ \includegraphics[width=4cm]{img/dwc_mAP.png} } \vspace{-0.2cm} \hspace{-0.4cm} \subfigure[Rank-1]{ \includegraphics[width=4cm]{img/dwc_rank1.png} } \caption{The evaluation of DWC on \emph{MSMT17}. $N$ denotes the number of class tokens. (a-b) DC-Former's SDC loss when training with/without DWC. (c-d) The mAP and Rank-1 of DC-Former with/without DWC.} \label{fig:dwc_ablation} \end{figure} \textbf{Hyper-parameters of SDC.} SDC has two hyper-parameters: the loss weight $\lambda$ and the number of class tokens $N$. We analyze the influence of $\lambda$ on the performance in Figure~\ref{fig:ablation}(a). When $\lambda = 0$ (i.e., without SDC), the model achieves 66.9\% mAP and 84.8\% Rank-1 on \emph{MSMT17}. When $\lambda = 0.1$, SDC has little effect because the weight is too small.
As $\lambda$ increases, the performance improves. When $\lambda = 1$, the mAP and Rank-1 are improved to 68.3\% and 85.5\%, respectively. Increasing $\lambda$ further degrades the performance because an excessive weight forces the model to pull the features apart at the very beginning of training, making it difficult to optimize the classification loss. Therefore, $\lambda = 1$ is the most beneficial for learning multiple diverse features. The experiments on the number of class tokens $N$ are shown in Figure~\ref{fig:ablation}(b). Increasing the number of class tokens improves the performance of the model. mAP and Rank-1 are higher when $N$ is between 3 and 6, and $N = 4$ reaches the best performance of 69.2\% mAP and 86.2\% Rank-1. Increasing $N$ further degrades the performance because fitting too many embedding subspaces makes training difficult. As the SDC loss in Figure~\ref{fig:loss_w/o_dwc} shows, with too many tokens the model cannot be optimized well because the subspaces cannot be separated due to the competition between tokens. The best $N$ may differ slightly across datasets. In the SOTA experimental configuration (Table~\ref{tab:sota_person}), $N$ is set to 6, 5, and 4 for \emph{MSMT17}, \emph{Market-1501}, and \emph{CUHK03}, respectively. \textbf{Dynamic Weight Controller.} The effectiveness of the proposed DWC module is validated in Figure~\ref{fig:dwc_ablation}. The SDC loss during training, illustrated in Figure~\ref{fig:dwc_ablation}(a-b), shows that DWC makes the pairs learn evenly so that they all become orthogonal to each other, which demonstrates its effectiveness in balancing the relative importance among class token pairs as the number of tokens increases. The performance in Figure~\ref{fig:dwc_ablation}(c-d) shows that DWC has limited effect when $N$ is small (less than 5), while it provides about a +1.0\% improvement when $N$ is large. When $N\geq7$, the performance decreases because the number of subspaces in the finite embedding space has reached its limit; subspaces that are too small make the embeddings lose their discrimination. \begin{figure}[] \centering \subfigure[mAP]{ \vspace{-0.2cm} \includegraphics[width=4cm]{img/data_vit_mAP.png} } \vspace{-0.2cm} \hspace{-0.4cm} \subfigure[Rank-1]{ \includegraphics[width=4cm]{img/data_dcformer_rank1.png} } \caption{Performance when training with a smaller number of identities on \emph{MSMT17}. We randomly select a subset of identities as the training set and keep the test set unchanged.} \label{fig:amount_data} \end{figure} \textbf{Effectiveness with a smaller number of identities.} We evaluate the effectiveness of DC-Former with a smaller number of identities in Figure~\ref{fig:amount_data}. DC-Former achieves results comparable to the baseline while using less than 20\% of the identities, and the smaller the number of identities, the more obvious the advantage of DC-Former. Increasing the number of identities strengthens the ability of the model to identify similar identities. The more compact embedding space of DC-Former also enhances the discrimination of similar classes' representations, which has an effect similar to increasing the number of identities. \section{Conclusions} In this paper, we propose a transformer-based network for re-ID that learns multiple diverse and compact embedding subspaces, which improves the robustness of the representation by increasing the identity density of the embedding space. The fusion of the representations from different subspaces further improves performance. Our method outperforms previous state-of-the-art methods on three person re-ID benchmarks.
Based on these promising results, we believe our method has great potential to be further explored in other areas, and we hope it can bring new insights to the community. \clearpage
{ "arxiv_id": "2302.14287", "language": "en", "timestamp": "2023-03-01T02:08:17", "url": "https://arxiv.org/abs/2302.14287", "yymm": "2302" }
\section{Introduction} \label{sec1} \input{sections/intro} \section{Preliminaries} \label{sec2} \input{sections/preliminaries} \section{Index Overview} \label{sec3} \input{sections/overview} \section{Partitioning Optimization} \label{sec4} \input{sections/partition} \section{Bottom-up Packing} \label{sec5} \input{sections/packing} \section{Optimization} \label{sec6} \input{sections/optimization} \section{Experiments} \label{sec7} \input{sections/experiment} \section{Related Work} \label{sec8} \input{sections/related} \section{Conclusions and Future Work} \label{sec9} \input{sections/conclusion} \clearpage \balance \bibliographystyle{ACM-Reference-Format} \subsection{Implementation and Setup} \noindent\textbf{Implementation.} The learning process of the CDF NN models, Algorithm \ref{algo:train}, and Algorithm \ref{algo:rltrain} are implemented with PyTorch \cite{DBLP:conf/nips/PaszkeGMLBCKLGA19}. The performance evaluation of all index structures is implemented in C++ and compiled using GCC 9.3 with the -O3 flag. In the process of generating the bottom clusters, we empirically set the weights of stage 1 and stage 2, i.e., $w_1$ and $w_2$, to 0.1 and 1, respectively. The CDF network consists of 4 layers, and each hidden layer has 16 units. We use ReLU as the activation function of the hidden layers, and the output of the CDF network is activated by a sigmoid function. When packing the bottom clusters, we follow the original implementation of DQN \cite{DBLP:journals/nature/MnihKSRVBGRFOPB15}. The neural network consists of 3 layers, and each hidden layer has 64 units. We set the capacity of the experience replay to 256, and the discount factor is set to 0.99. For the $\epsilon$-greedy algorithm, the initial value of $\epsilon$ is set to 1 and decreases with more learning steps, which balances exploration and exploitation well. \noindent\textbf{Environment.} We run single-threaded experiments \textcolor{edit}{in the main memory} on an Ubuntu machine with an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, 128GB RAM, and a 500 GB SSD disk. Besides, we train our CDF models on an Ubuntu machine with an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz, 256GB RAM, and an RTX 2080 Ti GPU. \noindent \textbf{Baselines.} We compare WISK\xspace\ with four SOTA conventional indexes, i.e., \textbf{CDIR-Tree} \cite{DBLP:journals/pvldb/CongJW09}, \textbf{SFC-Quad} \cite{DBLP:conf/cikm/ChristoforakiHDMS11}, \textbf{ST2I} \cite{DBLP:conf/cikm/Hoang-VuVF16}, and \textbf{ST2D} \cite{DBLP:conf/ssd/TampakisSDPKV21}. We implement these indexes using the default parameter values reported in their original papers. Note that ST2D is only evaluated on \textbf{FS} by setting the similarity threshold to 0, since it is only suitable for datasets containing a few distinct keywords (a few hundred) because of its textual clustering. We also loosely integrate a learned spatial index with a textual index, following traditional spatial keyword indexes. This results in a \emph{learned spatial-first index} (\textbf{SFI}) and a \emph{textual-first index} (\textbf{TFI}). SFI attaches an inverted file for keyword indexing to each leaf node of a learned spatial index, while TFI uses an inverted file as its top-level index and creates a learned spatial index for the objects containing each keyword. It has been shown that textual-first indexes outperform their spatial-first counterparts~\cite{DBLP:conf/cikm/ZhouXWGM05, DBLP:conf/ssd/VaidJJS05}. Therefore, we only report results for TFI in our experiments.
LISA~\cite{DBLP:conf/sigmod/Li0ZY020} is used as the learned spatial index since it returns exact results. We further extend a learned multi-dimensional index, i.e., Flood~\cite{DBLP:conf/sigmod/NathanDAK20}. We build an inverted file for each grid cell in Flood and also improve its cost function for building the grid index by incorporating the textual information, utilizing our CDF models on the geo-textual data, following the method presented in Section~\ref{sec4}. We denote this index by \textbf{Flood-T}. It splits the data along \textit{only one dimension} in the 2D geographical space, which limits its capability to capture complex data distributions. \textcolor{edit}{We also compare with \textbf{LSTI} \cite{ding2022learned}, the latest index supporting spatial keyword queries. This method maps the data into one dimension using a Z-order curve based on the spatial coordinates and builds a RadixSpline index \cite{kipf2020radixspline} over the mapped values. Then, an inverted file is created for each spline point by scanning the dataset again. } \subsection{Datasets and Workloads} We use four real-world datasets, summarized in Table \ref{exp:data}. The \textbf{FS} dataset \cite{DBLP:conf/www/YangQYC19} consists of global-scale check-in records of Foursquare (\url{https://foursquare.com/}) from Apr. 2012 to Jan. 2014. Each check-in record has a spatial location and a category. The \textbf{SP} dataset includes recreational and sports areas extracted from OpenStreetMap (\url{https://www.openstreetmap.org}). We use the center of each area and the original description as the spatial location and keywords, respectively. The \textbf{BPD} dataset contains global POIs published by the SLIPO project~\cite{DBLP:conf/ssd/PatroumpasSMGA19} (\url{http://slipo.eu/}). The \textbf{OSM} dataset contains 100M POIs extracted from OpenStreetMap, published in UCR STAR \cite{GVE+19}. Each POI has a point location, and its keywords include all related information such as street and category. \begin{table}[tb] \small \caption{Dataset Statistics} \vspace{-0.3cm} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Property} & \textbf{FS} & \textbf{SP} & \textbf{BPD} & \textbf{OSM} \\ \hline Number of data objects & 3M & 4M & 25M & 100M\\ \hline Number of distinct keywords & 462 & 1M & 24M & 447M\\ \hline Total number of keywords & 6M & 11M & 116M & 478M\\ \hline \end{tabular} \label{exp:data} \end{table} \begin{table}[tb] \small \caption{Parameters and their settings} \vspace{-0.3cm} \begin{tabular}{|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Parameter}} & \multicolumn{1}{c|}{\textbf{Setting}} \\ \hline Query distribution & UNI LAP GAU \textbf{\underline{MIX}} \\ \hline Query region size (\%) & 0.005 0.01 \textbf{\underline{0.05}} 0.1 0.5 1\\ \hline Number of query keywords & 1 3 \textbf{\underline{5}} 7 9 \\ \hline \end{tabular} \label{exp:query} \end{table} As there is no public real-world query workload for the geo-textual datasets, we generate the queries following previous works \cite{DBLP:journals/pvldb/WangQWWZ21, DBLP:journals/pvldb/ChenCJW13, DBLP:journals/pvldb/WangZZLW14, DBLP:conf/sigmod/HuLXAPRY22, DBLP:journals/corr/abs-2103-04541}. Specifically, to generate a query, we first sample an object from the dataset and then generate a bounding rectangular area centered at the location of this object.
Inspired by previous works \cite{DBLP:journals/pvldb/WangQWWZ21, DBLP:journals/corr/abs-2103-04541}, we use four methods to generate the centers: (i) \textbf{UNI}, where centers are uniformly sampled from the dataset. (ii) \textbf{LAP}, where centers are sampled from the Laplace distribution \cite{Kotz_2001}. We set the location and scale parameters, i.e., $\mu$ and $b$, to $|D| / 2$ and $|D| / 10$ respectively, where $D$ is the object set. (iii) \textbf{GAU}, where centers are sampled from a Gaussian distribution ($\mu=|D| / 2$, $\sigma=100$). (iv) \textbf{MIX}, composed of centers generated by (i) and (ii) in equal proportions. Finally, we associate keywords with the queries following prior works \cite{DBLP:journals/pvldb/ChenCJW13, DBLP:journals/pvldb/WangZZLW14}. If the number of query keywords is smaller than the number of keywords of the sampled center object, we choose the query keywords from that object; otherwise, we randomly choose the remaining keywords from the global keyword set. To evaluate the performance of the indexes in different scenarios, we generate query sets with different numbers of keywords and query region sizes. Table \ref{exp:query} summarizes the parameters, where the default values are in bold and underlined. We generate 2000 queries under each setting, of which 1000 queries are used to test the performance of all indexes, and the others are used to train the learned indexes. \subsection{Query Time Evaluation} \textcolor{edit}{To evaluate the query time, we execute the testing queries 100 times and report the average cost of the queries in each query set.} \subsubsection{\textbf{Effect of query distribution.}} In this experiment, we fix all other settings except the query distribution and show the results on all datasets in Figure \ref{exp:dis}. Clearly, the conventional indexes (SFC-Quad, ST2I, and CDIR-Tree) perform worse on the skewed workloads since they do not use the query characteristics when constructing the index. Among the learned indexes, TFI performs even worse than the conventional indexes since it only loosely combines a learned spatial index with a textual index. Flood-T shows a slight fluctuation in its performance: it learns from the underlying data and the query workload simultaneously, but in the geo-textual scenario it only splits along one dimension, making it incompatible with the skewed workloads. Our WISK\xspace\ improves the partitioning and adopts RL to build a tree, so it is less sensitive to the change in query distribution. \begin{figure}[tb] \centering \includegraphics[width=.8\columnwidth]{figures/experiments/dis/legend1.pdf} \subcaptionbox{FS\label{dis-fs}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/dis/dis-fs.pdf} } \subcaptionbox{SP\label{dis-sp}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/dis/dis-sp.pdf} } \subcaptionbox{BPD\label{dis-bpd}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/dis/dis-bpd.pdf} } \subcaptionbox{OSM\label{dis-osm}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/dis/dis-osm.pdf} } \vspace{-0.3cm} \caption{Varying the distribution of the query workload} \label{exp:dis} \end{figure} \subsubsection{\textbf{Effect of query region size.}} We show the performance of all indexes by varying the query region size from $0.005\%$ to $1\%$ of the whole region in Figure \ref{exp:size}. Again, WISK\xspace\ performs the best on all four datasets.
Besides, Flood-T performs slightly worse than ST2I on \textbf{SP}, even though it optimizes its layout by learning from the data and the query workload. This is because it only splits the whole region along one dimension. We therefore improve this partitioning process to generate the leaf nodes of WISK\xspace and also pack the bottom clusters into a hierarchical structure. Together, these two techniques result in the superiority of WISK\xspace\ over the other indexes. \begin{figure}[tb] \centering \includegraphics[width=.8\columnwidth]{figures/experiments/size/legend2.pdf} \subcaptionbox{FS\label{size-fs}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/size/size-fs.pdf} } \subcaptionbox{SP\label{size-sp}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/size/size-sp.pdf} } \subcaptionbox{BPD\label{size-bpd}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/size/size-bpd.pdf} } \subcaptionbox{OSM\label{size-osm}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/size/size-osm.pdf} } \vspace{-0.3cm} \caption{Varying the query region size} \label{exp:size} \end{figure} \subsubsection{\textbf{Effect of number of query keywords.}} We evaluate query sets with different numbers of keywords. Figure \ref{exp:keys} shows that the query time of all indexes grows with the size of the query keyword set. The reason is that with more query keywords, more candidates need to be verified after the filtering step. Besides, WISK\xspace\ consistently outperforms the other baseline indexes, and its cost grows much more slowly than those of the others, e.g., the increase for WISK\xspace\ on \textbf{BPD} is around 100 $\mu$s while those of Flood-T and ST2I are both over 250 $\mu$s. Hence, compared to the other indexes, WISK\xspace\ is less sensitive to the number of query keywords. \begin{figure}[tb] \centering \includegraphics[width=.8\columnwidth]{figures/experiments/keys/legend2.pdf} \subcaptionbox{FS\label{keys-fs}}{ \vspace{-0.2cm} \includegraphics[width=.475\linewidth]{figures/experiments/keys/keys-fs.pdf} } \subcaptionbox{SP\label{keys-sp}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/keys/keys-sp.pdf} } \subcaptionbox{BPD\label{keys-bpd}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/keys/keys-bpd.pdf} } \subcaptionbox{OSM\label{keys-osm}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/keys/keys-osm.pdf} } \vspace{-0.3cm} \caption{Varying no. of query keywords} \label{exp:keys} \end{figure} \subsubsection{\textbf{Scalability.}} We generate five sub-datasets of \textbf{OSM} containing from 1 to 100 million objects and run experiments on these sub-datasets. We choose ST2I, LSTI, and Flood-T as our baselines. As shown in Figure \ref{exp:scale}, the query processing time increases with the size of the dataset, but WISK\xspace\ performs more stably.
\begin{figure}[tb] \begin{minipage}{.49\linewidth} \setcaptionwidth{1.5in} \centering \includegraphics[width=\linewidth]{figures/experiments/others/scalability.pdf} \abovecaptionskip 0.1cm \caption{Comparison of performance varying dataset size (number of records)} \label{exp:scale} \end{minipage} \begin{minipage}{.49\linewidth} \setcaptionwidth{1.5in} \centering \includegraphics[width=\linewidth]{figures/experiments/others/robust.pdf} \abovecaptionskip 0.1cm \caption{Comparison of performance when changing query distribution} \label{exp:robust} \end{minipage} \end{figure} \subsubsection{\textbf{Robustness.}}\label{sec:robust} We evaluate the performance of ST2I, Flood-T, and WISK\xspace\ on \textbf{FS} when the query distribution changes. We initially train Flood-T and WISK\xspace\ on a query workload with the \textbf{UNI} distribution. Then, we keep the index unchanged and vary the ratio of \textbf{LAP}-distributed queries in the testing query set from 0.2 to 1.0. As shown in Figure \ref{exp:robust}, the performance of the query-aware indexes degrades as the query distribution deviates further from the training distribution. However, WISK\xspace\ is more robust than Flood-T due to its improved partitioning algorithm and bottom-up packing process. Additionally, the query time of ST2I also increases since it ignores the query knowledge when building the index, but its fluctuation is smaller than that of Flood-T. \subsection{Index Size \& Construction} \subsubsection{\textbf{Index Sizes.}} Table \ref{exp:space} reports the index sizes. Overall, WISK\xspace\ takes less space than the conventional indexes and is comparable to the best adapted learned indexes. In particular, the size of CDIR-Tree is larger than those of the others since each of its nodes has an inverted file. For query efficiency, we do not compress SFC-Quad, which leads to a larger size. ST2I has the smallest size among the conventional indexes. Among the learned indexes, the size of TFI is the largest, as it uses inverted files. WISK\xspace\ takes more space than Flood-T on the small datasets since the number of its bottom clusters is similar to the number of columns of Flood-T, while WISK\xspace\ additionally builds a hierarchical structure on top. On the larger datasets, however, WISK\xspace\ needs less space, since Flood-T splits into more columns for better performance and builds inverted files for them. \begin{table}[htb] \small \caption{Index structure size} \vspace{-0.3cm} \begin{tabular}{|c|c|c|c|c|} \hline Index & \textbf{FS} & \textbf{SP} & \textbf{BPD} & \textbf{OSM} \\ \hline CDIR-Tree & 2002MB & 3571MB & 33.15GB & 108.45GB \\ \hline SFC-Quad & 1406MB & 2568MB & 15.65GB & 58.71GB \\ \hline ST2I & 761MB & 1554MB & 15.18GB & 56.05GB \\ \hline TFI & 573MB & 1423MB & 8.86GB & 32.05GB \\ \hline \textcolor{edit}{LSTI} & \textcolor{edit}{642MB} & \textcolor{edit}{1073MB} & \textcolor{edit}{8.85GB} & \textcolor{edit}{8.09GB} \\ \hline Flood-T & 400MB & 937MB & 7.15GB & 27.94GB \\ \hline WISK\xspace & 483MB & 980MB & 7.02GB & 25.78GB \\ \hline \end{tabular} \label{exp:space} \end{table} \subsubsection{\textbf{Index Construction Time.}} We compare the efficiency of the index construction algorithms and report the results in Table \ref{exp:construction}. SFC-Quad and ST2I take the least time to build on the small datasets.
However, the time cost of ST2I increases significantly when the dataset becomes larger, since ST2I is built on the set of converted points and its time cost is positively correlated with the total number of keywords. CDIR-Tree takes the longest time because it inserts the objects sequentially. \begin{table}[htb] \small \caption{Index construction time} \vspace{-0.3cm} \begin{tabular}{|c|c|c|c|c|} \hline Index & \textbf{FS} & \textbf{SP} & \textbf{BPD} & \textbf{OSM} \\ \hline CDIR-Tree & 391 sec & 490 sec & 56.17 min & 196.87 min \\ \hline SFC-Quad & 20 sec & 30 sec & 3.35 min & 9.18 min \\ \hline ST2I & 19 sec & 29 sec & 6.55 min & 26.23 min \\ \hline TFI & 125 sec & 283 sec & 33.75 min & 143.07 min \\ \hline \textcolor{edit}{LSTI} & \textcolor{edit}{23 sec} & \textcolor{edit}{32 sec} & \textcolor{edit}{4.16 min} & \textcolor{edit}{16.32 min} \\ \hline \textcolor{edit}{Flood-T} & \textcolor{edit}{188 sec} & \textcolor{edit}{974 sec} & \textcolor{edit}{19.66 min} & \textcolor{edit}{25.97 min} \\ \hline \textcolor{edit}{WISK\xspace} & \textcolor{edit}{353 sec} & \textcolor{edit}{1216 sec} & \textcolor{edit}{55.28 min} & \textcolor{edit}{65.37 min} \\ \hline \begin{tabular}[c]{@{}c@{}}\textcolor{edit}{WISK\xspace}\\ \textcolor{edit}{(Accelerated)}\end{tabular} & \textcolor{edit}{131 sec} & \textcolor{edit}{547 sec} & \textcolor{edit}{12.18 min} & \textcolor{edit}{17.43 min} \\ \hline \end{tabular} \label{exp:construction} \end{table} \textcolor{edit}{For the learned indexes, we report the training time. LSTI takes the least time to build because it only needs to scan the whole dataset twice. The time cost of TFI increases significantly when there are more distinct keywords. For Flood-T and WISK\xspace, we report the average time, as the time costs of query-aware learned indexes usually increase with more query keywords. } \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/legend6.pdf} \subcaptionbox{Sampling ratio (\%)\label{sample}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/sample.pdf} } \subcaptionbox{Clustering ratio (\%)\label{spectral}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/spectral-cluster.pdf} } \vspace{-0.3cm} \caption{Training time and resulting query time on \textbf{SP}} \label{fig:speed-up} \end{figure} \textcolor{edit}{We design two training-time acceleration techniques, as presented in Section~\ref{sec6}. We report the training and query times of WISK\xspace with different sampling ratios in Figure~\ref{sample}. The result of each sampling ratio is an average of 10 runs. While the training time decreases by 72\%, we do not observe a large drop in query performance with a sample of only 30\% of the full query workload. We also observe that the standard deviation (represented by the width of the bands) of the training and query times of WISK\xspace is consistently small for all sampling ratios. This demonstrates that WISK\xspace has stable performance using stratified sampling. We vary the clustering ratio, i.e., the number of groups obtained over the number of bottom clusters, to balance the training and querying time. Figure~\ref{spectral} shows that even when the number of bottom clusters decreases by 80\%, the query time of WISK\xspace still only changes slightly. We set the sampling ratio to 30\% and the clustering ratio to 20\% to generate the \textbf{Accelerated WISK\xspace}.
As shown in Table~\ref{exp:construction}, WISK\xspace has longer training times than the other learned indexes, but the acceleration techniques can reduce the index training time by up to 4 times while the query time is only marginally affected.} \subsection{Index Update} \subsubsection{\textbf{Dynamic Query Workload Changes.}} To update the index when the query distribution changes, we can retrain WISK\xspace periodically, following a prior study \cite{DBLP:conf/sigmod/NathanDAK20}. \textcolor{edit}{We generate six workloads for \textbf{FS}. For each workload, we randomly select the query region size and the number of query keywords; the query distribution follows the default setting (\textbf{MIX}), and we randomly select the proportions of \textbf{UNI} and \textbf{LAP}. Each workload runs for 30 minutes and consists of 100 queries. As Figure \ref{fig:query-retrain} shows, at the beginning of each 30-minute period when a new query workload starts, WISK\xspace retraining is triggered, which happens in a separate thread and does not interrupt the query processing. While the index is being rebuilt, WISK\xspace runs the new queries on its old layout, which explains the jumps in the figure. The retraining process lasts about 3 minutes, and then WISK\xspace switches to the new layout adapted to the new query workload. Thus, the query time drops back again.} \textcolor{edit}{ To capture minor changes in the query distribution during retraining, we propose to apply incremental updates to the original index, in parallel to the retraining process.
We locate the bottom clusters that are affected by the new queries, re-partition these clusters if the query costs can be reduced using the new queries, and then insert the new clusters back into the non-leaf nodes they previously belonged to. The incremental updates may also help reduce the query times, which explains the multiple drops (e.g., at 00:30 and 01:30).} \textcolor{edit}{Figure \ref{fig:query-retrain} also indicates the necessity of learning from the query workload. When a new workload arrives, the performance of WISK\xspace drops due to its outdated layout. Re-learning the layout based on the new workload mitigates the impact of the changing query distribution. The other two indexes do not utilize the queries during construction. As a result, on the skewed query workloads (i.e., a high proportion of LAP queries), their query performance is much worse than that of WISK\xspace (e.g., at 01:00, 02:00, and 02:30).} \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/query-retrain.pdf} \vspace{-0.3cm} \caption{Impact of dynamic workload changes} \label{fig:query-retrain} \end{figure} \textcolor{edit}{ \subsubsection{\textbf{Data Insertion.}} WISK\xspace also handles data insertion well. Given a new object \textit{o}, we can traverse WISK\xspace to find the bottom cluster where \textit{o} falls. Next, we update the inverted file or the bitmap of the affected nodes to obtain an updated index. This simple process, however, cannot guarantee an optimal layout because the bottom clusters might need to be split after the insertion. Thus, we buffer the inserted objects and retrain our index when the buffer is full.} \textcolor{edit}{We set the buffer size to 100,000 objects (around 20MB) and run experiments to examine the impact of data insertions. We randomly select 500,000 objects from \textbf{FS} for the insertions and insert 100,000 objects every 30 minutes. Figure~\ref{fig:data-insert} shows the performance of ST2I, LSTI, and WISK\xspace. We also compare with a WISK\xspace variant that uses the simple insertion process without retraining. It can be seen that the query time of all indexes increases as more objects are inserted. Between the two WISK\xspace variants, the query time of WISK\xspace without retraining increases faster with more insertions, thus verifying the importance of retraining in improving the query time of WISK\xspace under dynamic data settings. We also observe that the retraining process takes only 1 to 2 minutes each time, since only the affected bottom clusters need to be split, and the RL-based packing can inherit knowledge from the previous training process, i.e., the unaffected bottom clusters are initially packed into their previous corresponding upper nodes.} \begin{figure}[tb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/data-insert.pdf} \vspace{-0.3cm} \caption{Impact of data insertion} \label{fig:data-insert} \end{figure} \subsection{Ablation Study} \subsubsection{\textbf{RL-based Packing.}} We conduct an experiment to compare the cost at the leaf level with that at the non-leaf levels. Figure \ref{exp:leaf-inter} shows that the time at the leaf level dominates the query processing time, accounting for around 90\% of the total cost, which verifies that our way of defining the cost function is reasonable. In Figure \ref{exp:hier-comp}, we observe that packing our bottom clusters directly using the CDIR-Tree construction method can hurt the query time because it may pack together leaf nodes that intersect with many different queries.
\begin{figure}[htb] \begin{minipage}{.49\linewidth} \setcaptionwidth{1.5in} \centering \includegraphics[width=\linewidth]{figures/experiments/others/leaf-inter.pdf} \abovecaptionskip 0.1cm \caption{Comparison of processing time} \label{exp:leaf-inter} \end{minipage} \begin{minipage}{.49\linewidth} \setcaptionwidth{1.5in} \centering \includegraphics[width=\linewidth]{figures/experiments/others/hier-comp.pdf} \abovecaptionskip 0.1cm \caption{Comparison of packing methods} \label{exp:hier-comp} \end{minipage} \end{figure} We evaluate the effectiveness of the bottom-up construction process. As shown in Figure \ref{rl-bpd-keys}, the improvement is similar across different numbers of query keywords. This is because the number of query keywords has little effect on the number of bottom clusters. Thus, the improvement brought by the RL-based packing algorithm is stable. \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/legend3.pdf} \subcaptionbox{Varying no. of keywords\label{rl-bpd-keys}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/rl-bpd-keys.pdf} } \subcaptionbox{Varying query region size\label{rl-bpd-size}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/rl-bpd-size.pdf} } \vspace{-0.3cm} \caption{Comparison of index layouts on \textbf{BPD}} \label{fig:abl_rl} \end{figure} It can be seen from Figure \ref{rl-bpd-size} that the improvement of the hierarchical index becomes more significant for queries with larger region sizes. This is because queries with larger regions cover a wider space and intersect more bottom nodes, so the bottom-up construction process can reduce more filtering cost. However, we also observe that the improvement becomes stable as the query region size continues to grow, since the query region then covers most of the data space. \textcolor{edit}{\subsubsection{\textbf{CDF Model.}} When generating the bottom clusters, we use CDF models to estimate the number of objects sharing the same keywords inside a region. To reduce the number of parameters, we propose to use Gaussian functions and NNs for keywords of different frequencies. In Figure~\ref{gau-nn}, we compare our method with the settings in which only Gaussian models or only NN models are used. Although the Gaussian-only method has the least training time, its estimation results are inaccurate, leading to much worse query time. In contrast, the NN-only method achieves the best query time, but it needs much more training time. The proposed mixed method achieves query performance similar to the NN-only method without significantly increasing the training time. } \begin{figure}[htb] \centering \subcaptionbox{Model selection\label{gau-nn}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/gau-nn.pdf} } \subcaptionbox{1D vs. 2D model \label{cdf-loss}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/cdf-loss.pdf} } \vspace{-0.3cm} \caption{Effect of different CDF settings} \label{fig:cdf-model} \end{figure} \textcolor{edit}{To speed up pre-processing, we assume that the two spatial dimensions are independent so that we can decompose the joint CDF into the product of two marginal CDFs. We next study the impact of this assumption. We observe that keywords with higher frequency have a stronger impact on the query time, and thus they need a more accurate CDF estimation.
} \textcolor{edit}{ We run experiments with a randomly selected high-frequency keyword on \textbf{FS}. We train two marginal (1D) models and a joint (2D) model for the selected keyword on the whole dataset. The 1D models and the 2D model all employ a single-layer neural network with 16 hidden units. We also compare with two variants of the 2D model, one using more hidden units (32) and the other using more layers (3). We randomly sample 1,000 rectangular query regions within the data space as testing data. For each query region, we compute the proportion of objects that fall within it as the ground truth. The product of the two 1D models and the output of the 2D models are used as the estimation results of the 1D and 2D models, respectively. Every 50 training rounds, we calculate the mean squared error (MSE) between the estimations and the ground truth. Figure \ref{cdf-loss} shows that the 1D model converges much faster while its final loss is comparable to those of the 2D models. These results justify the use of 1D CDFs in our method.} \subsubsection{\textbf{Frequent Itemset (FI)}} The FI mining extracts frequent interrelations among keywords. In this experiment, the minimum support is set to $0.01\text{\textperthousand}$ and the maximum size of the target itemsets is equal to the number of query keywords. Figure \ref{fig:abl_fi} shows the effect of FI mining on the index construction in terms of query efficiency on \textbf{FS} and \textbf{BPD}. We see that FI mining consistently improves the performance of WISK\xspace\ when there is more than one query keyword. Without the FI mining component, we learn a model for each keyword separately, and redundancies occur when there is more than one keyword. We also observe that this adaptation is more beneficial given more query keywords. In general, more query keywords lead to a higher possibility of redundancy since the probability of an object including more than one query keyword increases. \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/legend5.pdf} \subcaptionbox{FS\label{fi-fs}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/fi-fs.pdf} } \subcaptionbox{BPD\label{fi-bpd}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/fi-bpd.pdf} } \vspace{-0.3cm} \caption{Effect of the frequent itemset} \label{fig:abl_fi} \end{figure} Besides, the performance improvement on \textbf{BPD} is more obvious than that on \textbf{FS}. This is because the number of distinct keywords on \textbf{FS} is much smaller, making the number of frequent itemsets smaller than on the other datasets. Additionally, the improvement becomes consistent when the number of query keywords exceeds a threshold. The reason is that each object includes a finite number of keywords, and thus we cannot generate frequent itemsets with more keywords. \subsubsection{\textbf{Action Mask.}} When using the RL framework, the environment applies the action mask to reduce the action space. Here, we evaluate its effectiveness in two aspects. We use the SmoothL1Loss with the sum reduction as the loss function in our RL framework. Figure \ref{rl-loss} shows that the RL framework with the action mask speeds up the model convergence and reaches a smaller loss. In Figure \ref{rl-reduction}, we sum the total rewards and the number of bottom nodes in each epoch to show the reduction in the average number of accessed nodes.
The results show that the pruning capability is always better when the action mask is applied. \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/legend4.pdf} \subcaptionbox{Loss\label{rl-loss}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/rl-loss.pdf} } \subcaptionbox{Filtering capacity\label{rl-reduction}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/rl-reduction.pdf} } \vspace{-0.3cm} \caption{Comparison of model convergence and reward} \label{fig:abl_mask} \end{figure} There are some fluctuations in the results, as the RL agent learns from feedback through trial-and-error interactions with the environment. RL balances the trade-off between exploration and exploitation, which reduces the fluctuation with more training epochs. These results confirm that the action mask helps to decrease the number of training epochs and hence leads to less training time. \textcolor{edit}{ \subsection{\textit{k}NN Query Support} WISK\xspace can also support the Boolean \textit{k}NN (\textbf{B\textit{k}}) query without any modification to the index layout. A B\textit{k} query on geo-textual data is formed by a set of keywords $\psi$, a spatial query point \textit{o}, and the result size \textit{k}. It aims to retrieve the \textit{k} objects that each cover at least one keyword in $\psi$ and are the closest to \textit{o}. To process B$k$ queries, we follow existing works~\cite{wu2011joint, DBLP:journals/pvldb/CongJW09} by using a best-first search. Here, we compare WISK\xspace with two SOTA indexes, WBIR-Tree \cite{wu2011joint} and LSTI, and we use the index layout generated under the default setting. As shown in Figure \ref{knn-key}, the query times of WBIR-Tree and our WISK\xspace grow with the number of query keywords, which is expected. LSTI shows the opposite trend, as it needs to scan more spline points with fewer keywords. We note that WISK\xspace shows performance comparable to the best baseline results. Figure \ref{knn-top} further shows the query times when the result size $k$ is varied. WISK\xspace and WBIR-Tree show stable performance as $k$ increases, while LSTI degrades rapidly. WISK\xspace achieves the best performance when $k > 15$.} \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/knn-legend.pdf} \subcaptionbox{Varying no. of query keywords\label{knn-key}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/knn-keys.pdf} } \subcaptionbox{Varying $k$\label{knn-top}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/knn-top.pdf} } \vspace{-0.3cm} \caption{Query time} \label{fig:knn-query} \end{figure} \textcolor{edit}{Note that these experiments only aim to show that WISK\xspace is applicable to the B\textit{k} query as well. Our work mainly focuses on SKR queries. Optimizing WISK\xspace or designing an optimized learned index for other types of queries remains our future work.} \vspace{-2ex} \textcolor{edit}{ \subsection{Parameter Sensitivity Study} We further evaluate the sensitivity of WISK\xspace's training and query times to the key parameters. We show the results of varying the numbers of hidden units and layers in the neural networks in Figure \ref{hidden-unit}. Using more hidden units significantly increases the training time but only slightly improves the query time. Increasing the number of hidden layers shows a similar effect.
Additionally, increasing the size of the network structure requires more memory. Thus, we set the default numbers of hidden units and hidden layers to 16 and 2, respectively.} \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{figures/experiments/others/train-legend.pdf} \subcaptionbox{Varying no. of hidden units\label{hidden-unit}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/hidden-unit.pdf} } \subcaptionbox{Varying no. of hidden layers\label{hidden-layers}}{ \vspace{-0.2cm} \includegraphics[width=.475\columnwidth]{figures/experiments/others/hidden-layer.pdf} } \vspace{-0.3cm} \caption{Comparison of training time and query time} \label{fig:hidden} \end{figure} \textcolor{edit}{We also vary the capacity of the experience replay and the discount factor. The results show that they have minor impacts on the query time and the RL convergence rate. We omit the details due to space limits. The performance of WISK\xspace\ is not very sensitive to these hyper-parameters, and we follow the existing work \cite{DBLP:journals/nature/MnihKSRVBGRFOPB15} to set them.} \subsection{Design Considerations} As shown in Section \ref{sec3}, when executing a query with a hierarchical index, we traverse all qualified nodes until reaching the leaf nodes (i.e., the bottom clusters). An internal node and its descendants can be pruned if the node does not intersect with the query region or does not include any query keyword. We build our hierarchical index level by level, i.e., we recursively pack the clusters to maximize the reduction in the node pruning cost at each level. Here we omit the object checking cost as it is only incurred at the bottom clusters. \subsubsection{\textbf{Optimization Goal}} The query time spent on node pruning directly reflects the pruning capability of a hierarchical index. Measuring the query time, however, requires running all queries in the workload on an existing index, which makes it unsuitable as the optimization metric of our bottom-up packing problem. We observe that the pruning time cost is proportional to the number of accessed nodes for the workload, so the latter can be used to evaluate the pruning capability. To adopt this criterion, we associate each bottom cluster $c_{i}$ with a query label set denoted by $c_{i}.l$. If a cluster $c_{i}$ intersects with a training query $q_{j}$ and its textual document includes any keyword of $q_{j}$, we add $q_{j}$ to the query label set of this cluster, i.e., $c_{i}.l \leftarrow c_{i}.l \cup \{q_{j}\}$. During packing, the labels of a node at an upper level (an ``upper node'' for short hereafter) can be easily generated by merging all labels of its sub-tree. \subsubsection{\textbf{Bottom-up Packing Problem}} Next, we define the bottom-up packing problem to minimize the number of accessed nodes. \begin{prob}[\textbf{Bottom-up Packing}] \label{prob3} Given a query workload $W$ and the set of bottom clusters $G$, the bottom-up packing process aims to generate a hierarchical index $I$ that minimizes the number of accessed nodes when processing the queries in $W$. \end{prob} Given the leaf nodes, i.e., the bottom clusters, we can build a hierarchical index using techniques from traditional indexes such as the CDIR-Tree. However, those techniques only consider the underlying data distribution, which might lead to worse performance, as shown in our experiments. To address these issues, and motivated by the strong performance of query-aware structures learned by reinforcement learning (RL), we propose an RL-based algorithm to learn the packing.
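To make the optimization metric concrete, the following is a minimal Python sketch of how the per-query node-access count and the label merging could be computed from the query label sets; the helper names are illustrative assumptions, and the exact bookkeeping in WISK\xspace may differ (e.g., in how per-level constants are counted).
\begin{verbatim}
from typing import List, Set

def merge_labels(children_labels: List[Set[int]]) -> Set[int]:
    """Label set of an upper node: the union of its children's label sets."""
    return set().union(*children_labels) if children_labels else set()

def avg_node_accesses(node_labels: List[Set[int]], num_queries: int) -> float:
    """Average number of accessed nodes per query at one level.

    node_labels[i] is the query label set of node i: query q is in the set
    iff q intersects the node's region and shares a keyword with it, so q
    has to access (and descend into) that node.
    """
    accesses_per_query = [
        sum(1 for labels in node_labels if q in labels)
        for q in range(num_queries)
    ]
    return sum(accesses_per_query) / num_queries
\end{verbatim}
This quantity is what the bottom-up packing below seeks to reduce level by level.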
We construct our index level by level with a bottom-up packing process, and we model the packing problem at each level as a sequential decision-making process, in particular a Markov decision process (MDP), which makes it solvable by RL. To pack each level, the nodes from the lower level to be packed (``bottom nodes'' hereafter) are processed sequentially, and we find an upper node to host each bottom node. When there are no more bottom nodes to be hosted, the packing process of the level stops, and the non-empty upper nodes become the new bottom nodes to be packed at the next level. \subsection{Packing with Reinforcement Learning} We propose an RL-based packing algorithm following the idea of the Deep Q-Network (DQN)~\cite{DBLP:journals/nature/MnihKSRVBGRFOPB15} to learn the optimal policy (i.e., a packing strategy) for solving the packing problem (Problem \ref{prob3}). To form a tree structure, we require that the number of upper nodes does not exceed that of the bottom nodes. There are two main challenges in our packing problem. \begin{enumerate} \item To use a neural network to estimate an expected reward (e.g., the reduction in the number of node accesses), the states (e.g., the relation between two levels resulting from a packing decision) need to be represented by a fixed-length vector. However, there are many possible configurations of bottom nodes, and it is challenging to design such a vector that encodes the current packing of the bottom nodes effectively. \item Every time a node is added to the structure, it may lead to a reduced reward (i.e., more node accesses). However, it is necessary to keep adding nodes so that the structure can be built up. How to adapt the cost model to this case is another challenge. \end{enumerate} To address these challenges, we formulate an MDP for our packing problem as follows: \noindent \textbf{States.} A state needs to capture the status of a (partially packed) level in the index structure. As mentioned above, the number of bottom nodes bounds the number of upper nodes. Hence, we initialize $N$ empty upper nodes given $N$ bottom nodes. Suppose $m$ queries are used in the learning process. Each of the $N$ upper nodes to be constructed takes an $(m+1)$-dimensional vector representation. The first $m$ dimensions denote whether the node is labeled by each of the $m$ queries, and the last dimension counts the number of bottom nodes connected to this node. The $N$ upper nodes together form an $(m+1)\cdot N$-dimensional vector. We further append $m$ dimensions to the vector to represent the query label of the next bottom node to be connected to (i.e., packed into) one of the upper nodes. Overall, these form an $\big((m+1)\cdot N +m\big)$-dimensional vector representing a state. Figure \ref{fig:rlstate} shows an example, assuming $m=3$ queries and $N=3$ bottom nodes ($X$, $Y$, and $Z$). The circles denote upper nodes, and the colors denote different query labels. \begin{figure}[tb] \centering \includegraphics[width=.65\linewidth]{figures/state.pdf} \vspace{-0.3cm} \caption{An example of the state representation} \label{fig:rlstate} \end{figure} \noindent \textbf{Actions.} An action adds a bottom node to an upper node. To make the action space and the state representation consistent, we define the action space $A=\{1, 2, \ldots, N\}$, where action $a = i$ denotes packing the next bottom node into the $i$-th upper node.
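For concreteness, the following is a minimal sketch of how such a state vector could be assembled; it follows the $\big((m+1)\cdot N + m\big)$-dimensional layout described above, but the helper names, the use of NumPy, and the exact encoding order are illustrative assumptions rather than the WISK\xspace implementation.
\begin{verbatim}
import numpy as np

def encode_state(upper_labels, upper_counts, next_node_labels, m):
    """Fixed-length state vector of one (partially packed) level.

    upper_labels:     list of N sets, the query labels of each upper node
    upper_counts:     list of N ints, bottom nodes already packed into each upper node
    next_node_labels: set, the query labels of the next bottom node to pack
    m:                number of training queries
    """
    N = len(upper_labels)
    state = np.zeros((N, m + 1), dtype=np.float32)
    for i, (labels, count) in enumerate(zip(upper_labels, upper_counts)):
        for q in labels:
            state[i, q] = 1.0          # query q must access upper node i
        state[i, m] = float(count)     # number of hosted bottom nodes
    nxt = np.zeros(m, dtype=np.float32)
    for q in next_node_labels:
        nxt[q] = 1.0                   # label of the incoming bottom node
    return np.concatenate([state.reshape(-1), nxt])   # shape: ((m+1)*N + m,)
\end{verbatim}
Keeping the vector length fixed for a level is what allows a single Q-network to score all $N$ candidate upper nodes for every incoming bottom node.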
\noindent \textbf{Transition.} Given a state and an action, the agent transits to a new state by packing a bottom node into the chosen upper node and moving on to the next bottom node. The agent reaches a terminal state when there are no more bottom nodes to be packed. \noindent \textbf{Reward.} A larger reward represents a packing with better quality. Since we aim to reduce the number of node accesses when executing the query workload, the reward signal should reflect the expected number of node accesses before and after taking an action. We propose to use the average number of node accesses per query to formulate the rewards since the total number of node accesses grows monotonically as more bottom nodes are added to the consideration, which will lead to constant negative rewards. \begin{equation} \label{equ:reward} r = N_{a} - N_{a}^{\prime} \end{equation} The reward function is formulated as Eq.~\ref{equ:reward}, where $N_a$ ($N_a^{\prime}$) denotes the average number of node accesses before (after) action $a$ is taken. The agent chooses the action that maximizes the reward during exploitation. Additionally, we observe a positive correlation between the sum of rewards and the reduction in the average number of node accesses after packing all bottom nodes. Let $N_{a}^{*}$ be the average number of node accesses of packing the last bottom node. As $N_{a}$ in each iteration is identical to $N_{a}^{\prime}$ of the last iteration, the sum of rewards after packing all $N$ bottom nodes is equal to $1 - N_{a}^{*}$, and it is positively correlated to $N + 1 -N_{a}^{*}$. Note that the number of node accesses is equal to $N + 1$ before creating the upper nodes. Thus, if the sum of the rewards at a level is not larger than $-N$, the bottom-up packing process will be terminated. \begin{figure*}[tb] \centering \includegraphics[width=.95\linewidth]{figures/mdpexample1.pdf} \abovecaptionskip 0.2cm \caption{An example of MDP formulation for Problem \ref{prob3}} \label{fig:mdpexample} \end{figure*} \noindent \textbf{Example 5.2:\label{exa5.2}} Figure \ref{fig:mdpexample} presents an example of the MDP for the bottom-up packing problem. Same as in Figure~\ref{fig:rlstate}, the colors represent different query labels. Here, we only show state transitions with nonzero probabilities, and we have omitted the rewards to avoid clutter. The ellipse nodes and edges represent states and actions, respectively. Since there are 3 bottom nodes (rectangle), we initialize 3 upper nodes (circle), and the bottom nodes are to be packed sequentially. When no incoming bottom node is to be inserted at one level, i.e. the leaf node in Figure \ref{fig:mdpexample}, we reach the terminal states at this level and move to the upper level. \subsection{Training} Recall that Q-learning is a commonly used RL algorithm as introduced in Section \ref{sec2.3}. We train a deep Q-network (DQN) \cite{DBLP:journals/nature/MnihKSRVBGRFOPB15} to project the high-dimensional state and action spaces to low-dimension spaces using neural networks and efficiently predict the value of the Q-function $Q(s, a)$. In our model, we adopt the deep Q-learning with a technique known as experience replay where we store the agent's experience $e_{t} = (s_{t}, a_{t}, r_{t}, s_{t + 1})$ at each time-step $t$. We implement two networks, a policy network $Q$ and a target network $\hat{Q}$ separately, which has been shown to be more stable than using only one network as done in the standard Q-learning~\cite{DBLP:journals/nature/MnihKSRVBGRFOPB15}. 
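Concretely, the update described next (the MSE loss of Eq. \ref{equ:rloss} and the soft target update of Eq. \ref{equ:softup}) can be sketched in PyTorch as follows. This is an illustrative sketch of ours, not the actual implementation: the two hidden layers of 16 units follow the default configuration mentioned earlier, $\tau=0.001$ follows the text, the discount factor $\gamma$ is a placeholder value, and the soft update is applied at every step here rather than periodically as in Algorithm \ref{algo:rltrain}.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_q_net(state_dim, num_actions, hidden=16):
    # Two hidden layers of 16 units each (the default configuration).
    return nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, num_actions))

def dqn_step(policy, target, optimizer, batch, gamma=0.99, tau=0.001):
    """batch: tensors (s, a, r, s_next) sampled from the replay memory."""
    s, a, r, s_next = batch
    q = policy(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; theta)
    with torch.no_grad():
        q_next = target(s_next).max(dim=1).values        # max_a' Q_hat(s', a')
    loss = F.mse_loss(q, r + gamma * q_next)              # cf. the MSE loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                                 # soft target update
        for p, p_t in zip(policy.parameters(), target.parameters()):
            p_t.mul_(1.0 - tau).add_(tau * p)
    return loss.item()
\end{verbatim}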
Given a batch of transitions $(s, a, r, s^{\prime})$, the policy network parameters $\theta$ are updated with a gradient descent step by minimizing the mean square error (MSE) loss as shown in Eq. \ref{equ:rloss}, where $\gamma\in(0,1)$ denotes a discount factor determining the importance of future rewards, and $\theta^{-}$ are the parameters of the target network. \begin{equation} \label{equ:rloss} L(\theta) = \sum_{s,a,r,s^{\prime}} \left(r + \gamma\max_{a^{\prime}}\hat{Q}(s^{\prime}, a^{\prime};\theta^{-})-Q(s, a;\theta)\right)^{2} \end{equation} Note that in the standard DQN, the target network parameters $\theta^{-}$ are only synchronized with the policy network parameters $\theta$ every \textit{T} steps and are held fixed between weight updates. However, directly copying the weights has been shown to be unstable due to noise and outliers. Inspired by prior works \cite{DBLP:journals/nn/KobayashiI21,DBLP:journals/corr/LillicrapHPHETS15}, we apply the soft update (Eq. \ref{equ:softup}) to the target network. The weights of the target network are updated by interpolating between the weights of the target network and those of the policy network through a fixed ratio $\tau = 0.001$~\cite{DBLP:journals/corr/LillicrapHPHETS15}. \begin{equation} \label{equ:softup} \theta^{-} = \tau \theta + (1-\tau) \theta^{-}, \tau \ll 1 \end{equation} We present the learning process in Algorithm \ref{algo:rltrain}. We first initialize the policy network and the target network with the same random parameters (line 1). In each epoch, we reset the replay memory $M$ and the set of upper nodes $G_{u}$ (line 3). Then, the learning process sequentially packs the bottom nodes into the upper nodes (lines 4 to 13). For every incoming bottom node $c_i$, we generate the state by combining $G_{u}$ and $c_i$ (line 5). We compute the average number of node accesses based on the query labels of the current upper nodes (line 6). To balance RL exploration and exploitation, we use the $\epsilon$-greedy algorithm \cite{DBLP:journals/tnn/SuttonB98} to choose a random action with probability $\epsilon$ (i.e., exploration) or the action that maximizes the action-value function of the policy network (i.e., exploitation) (line 7). After $c_i$ is packed, we update the state representation and compute the average number of node accesses again (lines 8 and 9). Then, we compute the reward and store this transition in the replay memory (lines 10 and 11). To train the DQN, we draw a batch of transitions to train the policy network (line 12) and periodically update the target network from the policy network via the soft update (line 13). Finally, we use the learned action-value function $Q(s, a; \theta)$ to pack the nodes. \begin{algorithm}[tb] \caption{DQN Learning for Node Packing} \label{algo:rltrain} \footnotesize \KwIn{$G$, the bottom nodes with query labels; $M$, replay memory; $E$, the number of epochs} \KwOut{$Q(s, a; \theta)$, action-value function} Initialize $Q(s, a; \theta), \hat{Q}(s^{\prime}, a^{\prime}; \theta^{-})$\; \For{epoch $\in [1, E]$} { $G_{u} \leftarrow$ NewList(); $M \leftarrow$ NewList()\; \For{$c_i \in$ G} { Update $s$ using $G_{u}$ and $c_i$\; Compute the average number of node accesses $N_{a}$ according to $G_{u}.l$\; Choose $a$ by the $\epsilon$-greedy method\; Pack $c_i$ into $G_{u}$[$a$] and generate the new state $s^{\prime}$\; Compute $N_{a}^{\prime}$ according to the new $G_{u}.l$\; Compute reward $r$ based on Eq.
\ref{equ:reward}\; Store transition $(s,a,r,s^{\prime})$ into $M$\; Draw a batch of samples from $M$ and perform a gradient step based on Eq. \ref{equ:rloss}\; Update $\hat{Q}(;\theta^{-})$ with $Q(;\theta)$ softly based on Eq. \ref{equ:softup} after every \textit{C} steps\; } } \Return{$Q(s, a; \theta)$\;} \end{algorithm} \subsection{Cost Model} We model the time cost $C(q_i)$ to process an SKR query $q_i$ over a set of bottom clusters $G$ as a linear combination of (1) the cost to scan all bottom clusters to find the subset $G_i \subset G$ of clusters that overlap with $q_i.area$ and contain at least one keyword in $q_i.kws$, and (2) the cost to examine the inverted file in each cluster $c \in G_i$ and find the objects that are in $q_i.area$ and contain at least one query keyword. Eq.~\ref{equ:cost} formalizes the cost, where $|G|$ denotes the total number of clusters, and $\sum_{c \in G_i}|O_c|$ denotes the number of objects in $G_i$ that contain at least one query keyword. The parameters $w_1$ and $w_2$ model the times to check each bottom cluster and to check each object, respectively. \textcolor{edit}{In particular, $w_1$ measures the time cost for checking (1) if the MBR of a cluster intersects with the query region, and (2) if the cluster contains some query keywords by scanning the textual index of the cluster. Both checks are independent of the cluster size. Meanwhile, $w_2$ measures the time cost to perform the same checks but at the object level. Following recent studies~\cite{DBLP:conf/ssd/ZhangRLZ21, DBLP:journals/pvldb/DingNAK20}, we use fixed values for these parameters.} \begin{equation} C(q_i) = w_1|G| + w_2\sum\nolimits_{c \in G_i}|O_c| \label{equ:cost} \end{equation} \begin{figure}[tb] \centering \subcaptionbox{All objects in a cluster\label{trade-off1}}{ \vspace{-0.2cm} \includegraphics[width=0.45\columnwidth]{figures/tradeoff1.pdf} } \subcaptionbox{Objects in two clusters\label{trade-off2}}{ \vspace{-0.2cm} \includegraphics[width=.45\columnwidth]{figures/tradeoff2.pdf} } \vspace{-0.3cm} \caption{Partitioning the space increases the number of clusters, which leads to a larger cluster scanning cost, but also potentially a lower object scanning cost.} \label{fig:trade-off} \end{figure} \noindent \textbf{Example 4.1:\label{exa:4.1}} Figure \ref{fig:trade-off} illustrates this cost function. Suppose that the red and green points represent objects that contain keywords $k_1$ and $k_2$, respectively. There are two queries, with $q_1.kws=\{k_1\}$ and $q_2.kws=\{k_2\}$. If all objects are in a single cluster (i.e., no partitioning, Figure~\ref{trade-off1}), according to Eq.~\ref{equ:cost}, the two queries incur a cost of $2(w_1 + 4w_2) = 2w_1 + 8w_2$. This is because there is only one cluster, and each query needs to check four objects containing the query keywords (i.e., four red points for $k_1$ and four green points for $k_2$). If the space is split, forming two clusters of five and three points, respectively (Figure~\ref{trade-off2}), the costs of $q_{2}$ and $q_{1}$ become $2w_1 + 2w_2$ (checking two clusters and two green points) and $2w_1 + 4w_2$ (checking two clusters and four red points), which sum up to $4w_1 + 6w_2$. The partitioning may lead to an overall lower query cost if $w_2$ dominates the cost. \subsection{The Optimal Partitioning Problem} We formulate an optimal partitioning problem to find a set of clusters that minimizes the query cost over a given set of queries.
\begin{prob}[\textbf{Optimal Partitioning}] Given a geo-textual dataset $D = \{o_1, o_2, \ldots,o_n\}$ and a query workload $W = \{q_1, q_2, \ldots, q_m\}$, we aim to find an optimal partition, i.e., a set of \textit{k} clusters $G = \{c_1, c_2, \ldots, c_k\}$ where (1) each object belongs to exactly one cluster, i.e., $\bigcup_{c_i \in G} c_i = D$, and $\forall_{c_i, c_j \in G}\, c_i \cap c_j = \varnothing$, and (2) the total cost, $\sum_{q_i \in W}C(q_i)$, is minimized, where $C(q_i)$ (Eq. \ref{equ:cost}) is the cost of $q_i$. \label{prob:partition} \end{prob} \subsubsection{\textbf{Problem Analysis.}} We proceed to show that the optimal partitioning problem is NP-hard by reducing from the MaxSkip partitioning problem, which has been shown to be NP-hard \cite{DBLP:conf/sigmod/SunFKX14, DBLP:conf/sigmod/YangCWGLMLKA20}. \begin{theorem} Problem~\ref{prob:partition} is NP-hard. \end{theorem} \begin{proof} We first briefly introduce the MaxSkip partitioning problem, which arises from big data analytics systems. Let \textit{Q} be a collection of queries. Consider a set of partitions $P = \{p_1, p_2, \ldots, p_k\}$ where each partition is a collection of tuples, and the size of each partition is larger than a minimum size bound $b$. A big data analytics system can prune a partition $p_i$ if none of the tuples in this partition satisfies a query $q\in Q$ when processing $q$. A cost function (Eq.~\ref{equ:np1}) can thus be defined on each partition, which denotes the number of tuples that can be skipped for processing all queries in $Q$, if such a partition is formed. Here, $|p_{i}|$ denotes the number of tuples in partition $p_i$, and $Q_i$ denotes the set of queries that can be processed without accessing $p_i$. \begin{equation} \label{equ:np1} Cost(p_i) = |Q_i||p_i|, \text{where}\ Q_i\subseteq Q \end{equation} The MaxSkip partitioning problem aims to find the optimal partitions $P_{opt}$ maximizing the total number of tuples that can be skipped when executing \textit{Q}, i.e., $P_{opt} = \arg\max_{P}\sum\nolimits_{p_{i} \in P}Cost(p_{i})$. We map an instance of the MaxSkip partitioning problem to an instance of our optimal partitioning problem as follows: for each query $q_w$ in $Q$, we create a keyword $d_w$ and form an SKR query $q=\{q.L, q.d_w\}$ where $q.L$ is the MBR of the entire space. For each tuple $t_m$, we create a geo-textual object $o_m$ such that its location is in $L$, and its keywords correspond to the queries it can satisfy in $Q$. Given this mapping, in the MaxSkip partitioning problem, for a partition $p_i\in P$, if a set of queries $Q_i\subseteq Q$ can be skipped when processing $p_i$, we can get a cluster $c_i$ in our problem and a set of SKR queries $R_i\subseteq W$ that are irrelevant to $c_i$ (since no geo-textual object in $c_i$ contains a keyword of any query in $R_i$). Hence, finding a partitioning that maximizes the total number of skipped tuples for the queries in $Q$ is equivalent to finding a partitioning that minimizes the total cost of the mapped queries in $W$ in our problem. Since the mapping takes linear time, this completes the proof. \end{proof} \subsection{A Heuristic Partition Algorithm} As pointed out by Christoforaki et al.~\cite{DBLP:conf/cikm/ChristoforakiHDMS11}, a query region is usually much smaller than the data space, so many data objects are not queried by the workload $W$. Hence, to fully utilize the query workload for partitioning, a purely data-based partitioning method is not suitable for our problem.
Instead, we employ a space-disjoint partitioning approach and propose a heuristic partition algorithm. Our algorithm aims to learn how to split the spatial data space along different dimensions and coordinate values. Our partition algorithm starts by initializing a single partition that covers the full data space (which corresponds to a cluster that contains the full dataset). At this point, each query contributes the same $w_1 + |D| \cdot w_2$ cost to the overall cost of the query workload $W$. Then, we find a split dimension $d_s$ and a split value $v_s$ that yield the largest reduction in the query cost. We use the resulting $d_s$ and $v_s$ to split the data space into two sub-spaces and update the total query cost. For each sub-space, we repeat the splitting process recursively until the total query cost cannot be reduced or some pre-defined conditions, e.g., a minimum number of queries intersecting with the sub-space, are met. When the algorithm terminates, we use the MBR of the data objects in each resultant sub-space as a bottom cluster in WISK\xspace. \subsubsection{\textbf{Learning the Split Dimension and Value}} \label{sec4.3.1} A na\"ive method to find the split value is a brute-force search. Let $V_d$ ($d \in \{x, y\}$) be a sorted list of distinct object coordinate values along dimension $d$ in the current (sub-)space to be partitioned. Except for the first and the last values, every value in $V_d$ can be used to split the space into two sub-spaces. Examining all candidate values takes $\mathcal{O}\big(f\cdot(|V_{x}|+|V_{y}|)\big)$ time, where $f$ denotes the time cost to split on a value and run the queries on the resulting splitting. This approach becomes impractical for large datasets with a large value of $|V_x| + |V_{y}|$. Motivated by the recent success of machine learning in solving complex problems~\cite{DBLP:conf/aaai/PratesALLV19, DBLP:journals/eor/BengioLP21}, we propose a learning-based method to predict the query costs given a split dimension and a split value, such that the optimal split can be approximated by minimizing the predicted query cost with high efficiency. At the core of the query cost prediction problem for a split is to (1)~predict the number of resultant sub-spaces overlapping with the query, and (2)~predict the number of objects that contain any of the query keywords and reside in the resultant sub-spaces. To address the first prediction problem, we use the indicator function \cite{kleene1952introduction} to denote whether a sub-space overlaps with the query region. For example, let $[q_{x_{b}}, q_{x_{u}}]$ be the \textit{x} range of query \textit{q} and $p_x$ be a split value along dimension \textit{x}. The indicator functions $\vmathbb{1}(p_x \ge q_{x_{b}})$ and $\vmathbb{1}(p_x < q_{x_{u}})$ are used to decide whether $q$ intersects with the resultant left and right sub-spaces, respectively. If a sub-space has an indicator function value of 1, we need to further predict the number of query result objects within the sub-space. Otherwise, we can ignore the sub-space when computing the query cost. The indicator function is not differentiable, and machine learning methods such as gradient descent cannot be applied to solve a split value optimization problem formulated with such functions.
As such, we use the sigmoid function~\cite{DBLP:conf/iwann/HanM95}, $\sigma(\beta x)$ with $\beta = 3$, to approximate the indicator function, as is done in prior work~\cite{DBLP:journals/tnn/ChenTY14, cao2020sigmoidal}, e.g., $\vmathbb{1}(p_x \ge q_{x_b}) = \vmathbb{1}(p_{x} - q_{x_b} \ge 0) \approx \sigma(3(p_x - q_{x_b}))$. To address the second prediction problem, we follow the idea in recent studies~\cite{DBLP:conf/sigmod/KraskaBCDP18, DBLP:conf/sigmod/NathanDAK20, DBLP:conf/sigmod/0001ZC21} that learn the Cumulative Distribution Function (CDF) to estimate the density of objects in a data space. Our goal is to learn the joint CDF $F_{X, Y}(x, y)$ of two variables \textit{X} and \textit{Y}, corresponding to the spatial coordinates in two dimensions. The learned CDF can quickly estimate the number of objects in a rectangular region, i.e., a sub-space. \textcolor{edit}{To accelerate the CDF learning, we assume that \textit{X} and \textit{Y} are independent, following a previous study \cite{DBLP:conf/sigmod/NathanDAK20}. Thus, we can decompose the joint CDF into the product of two marginal CDFs, $F_X(x)$ and $F_Y(y)$, as shown in Eq. \ref{equ:jointcdf}.} \begin{equation} F_{X, Y}(x, y) = P(X \le x, Y \le y) = F_X(x)F_Y(y) \label{equ:jointcdf} \end{equation} For ease of presentation, we use $F(x)$ and $F(y)$ to denote the marginal CDFs of \textit{X} and \textit{Y} in the rest of the paper, respectively. \begin{lemma} Given a two-dimensional object $(x, y)$ and a rectangular region $[(x_b, y_b), (x_u, y_u)]$ where $(x_b, y_b)$ and $(x_u, y_u)$ denote the bottom-left and the upper-right points of the rectangular region, respectively, the probability of an object residing in the area is: \begin{equation*} P(x_b \le x \le x_u, y_b \le y \le y_u)=\big(F(x_u)-F(x_b)\big)\big(F(y_u)-F(y_b)\big) \label{equ:lemma1} \end{equation*} \end{lemma} \begin{proof} According to the definition of the CDF, we have $P(x_b \le x \le x_u, y_b \le y \le y_u)=F(x_u, y_u) - F(x_b, y_u) - F(x_u, y_b) + F(x_b, y_b)$. Due to the independence assumption, we can decompose each joint CDF based on Eq. \ref{equ:jointcdf}, and obtain the equation in Lemma \ref{equ:lemma1}. \end{proof} The CDF in Eq. \ref{equ:jointcdf} only estimates the spatial density of objects without considering the keyword distribution. To address this issue, we learn the marginal CDFs, i.e., $F_{k}(x)$ and $F_{k}(y)$, for each keyword $k$. The choice of CDF models will be detailed in Section~\ref{sec6}. With the CDF models and the sigmoid functions, we formulate the cost for processing a query $q$ with region $[(x_b, y_b), (x_u, y_u)]$ after splitting on dimension $x$ or $y$ in Eq. \ref{equ:loss}. \begin{equation} \label{equ:loss} \begin{aligned} L_{q}(x)=\sigma \big(3(x-x_{b})\big) \left|O_{1}\right| + \sigma \big(3(x_{u}-x)\big) \left|O_{2}\right| \\ L_{q}(y)=\sigma \big(3(y-y_{b})\big) \left|O_{1}\right| + \sigma \big(3(y_{u}-y)\big) \left|O_{2}\right| \end{aligned} \end{equation} where $\left|O_{1}\right|$ and $\left|O_{2}\right|$ denote the numbers of objects containing the query keywords in the two resulting sub-spaces, respectively, which are estimated through the learned keyword-based marginal CDF models. The sigmoid functions (e.g., $\sigma \big(3(x-x_{b})\big)$ and $\sigma \big(3(x_{u}-x)\big)$) predict whether the query intersects the two resultant sub-spaces, respectively.
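For illustration, the relaxed cost $L_q(x)$ can be evaluated as in the following simplified Python sketch (ours; the function and variable names are illustrative, the per-keyword CDFs are taken over the objects of the current sub-space, and objects carrying several query keywords are counted once per keyword). The actual CDF models used by WISK\xspace are discussed in Section~\ref{sec6}; in practice the relaxed cost would be implemented in an automatic-differentiation framework so that it can be minimized by gradient descent, as described next.
\begin{verbatim}
import numpy as np

def sigmoid(t, beta=3.0):
    return 1.0 / (1.0 + np.exp(-beta * t))

def empirical_cdf(values):
    # Marginal CDF of one keyword's x-coordinates within the current sub-space.
    xs = np.sort(np.asarray(values, dtype=float))
    return lambda v: np.searchsorted(xs, v, side="right") / len(xs)

def relaxed_cost_x(split, x_range, keywords, cdf_x, count):
    """Sigmoid-relaxed cost L_q(x) of splitting at `split` along x, for one
       query with x-range (x_b, x_u) and keyword set `keywords`.
       cdf_x[k]: marginal CDF along x of objects containing keyword k.
       count[k]: number of such objects in the current sub-space."""
    x_b, x_u = x_range
    o_left = sum(count[k] * cdf_x[k](split) for k in keywords)
    o_right = sum(count[k] * (1.0 - cdf_x[k](split)) for k in keywords)
    return sigmoid(split - x_b) * o_left + sigmoid(x_u - split) * o_right
\end{verbatim}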
We apply stochastic gradient descent (SGD) to learn the values of $x$ and $y$ that minimize $L_{q}(x)$ and $L_{q}(y)$ in Eq.~\ref{equ:loss}, using the query workload as the training data. \subsubsection{\textbf{Bottom Cluster Generation}} When splitting a data space, there are both a profit and a loss in the query costs. The profit is gained from the reduced number of objects to be checked, while the loss reflects an increased number of sub-spaces to be checked. In Example \hyperref[exa:4.1]{4.1}, the profit and loss are equal to $2w_2$ and $2w_1$, respectively. The difference between the profit and the loss determines whether a split is needed, and where the split should be made. \begin{algorithm}[tb] \caption{Bottom Clusters Generation} \label{algo:train} \footnotesize \KwIn{\textit{W}, the query workload; \textit{S}, the data space} \KwOut{\textit{G}, the set of clusters} \SetKwFunction{init}{InitializeCost} \SetKwFunction{find}{FindOptimalPartition} \SetKwProg{Fp}{Function}{:}{} \textit{Q} $\leftarrow$ NewPriorityQueue()\; \textit{Q}.Enqueue(\textit{S})\; \textit{G} $\leftarrow \varnothing$\; \While{Q is not empty} { $s \leftarrow$ \textit{Q}.Dequeue()\; $C_{s} \leftarrow$ InitializeObjectCheckingCost($s$)\; $opt_{x} \leftarrow$ FindOptimalPartition($s$, x)\; $opt_{y} \leftarrow$ FindOptimalPartition($s$, y)\; $best \leftarrow$ $opt_{x}$ if $opt_{x}$.cost $\le$ $opt_{y}$.cost else $opt_{y}$\; \If{$C_{s} - w_2 \cdot best.cost$ > $w_1\cdot|W|$ } { $s_{1}, s_{2} \leftarrow$ GenerateSubSpace($best$.dim, $best$.val)\; \textit{Q}.Enqueue($s_{1}$)\; \textit{Q}.Enqueue($s_{2}$)\; } \Else { \textit{c} $\leftarrow$ GenerateMBR($s$)\; \textit{G}.add(\textit{c})\; } } \textbf{return} $G$\; \BlankLine \Fp{\find{$s, d$}} { $opt.dim$ $\leftarrow$ \textit{d} \tcc*{a map structure to record optimal split result} $cost, val \leftarrow$ SGDLearn(\textit{s}.queries) \tcc*{ SGDLearn() returns the optimal cost and split value} $opt.cost$ $\leftarrow$ $cost$\; $opt.val$ $\leftarrow$ $val$\; \textbf{return} $opt$\; } \end{algorithm} Algorithm \ref{algo:train} summarizes our bottom cluster generation algorithm. The algorithm takes the query workload $W$ and the data space $S$ enclosing all geo-textual objects as the input, and it aims to return a set of clusters that minimizes the cost of executing all the queries in $W$. The algorithm maintains a priority queue $Q$ of sub-spaces to be examined, which are prioritized by their numbers of intersecting queries. At the start, $Q$ contains only the input data space $S$ (lines 1 and 2). Then, we iterate through the sub-spaces in $Q$. Let the current sub-space to be split be $s$. We set the initial object checking cost $C_{s}$ of $s$ to be $|O_s| \cdot |W_s| \cdot w_2$ where $|O_s|$ and $|W_s|$ denote the number of objects in $s$ and the number of queries intersecting with $s$, respectively (lines 5 and 6). Then, we find the optimal split along the $x$- and $y$-dimensions (lines 7 and 8), and we use the one with the smaller object checking cost as our candidate split (line 9). If the reduction in the object checking cost from $C_s$ outweighs the increase in the cluster checking cost, i.e., $w_1 \cdot |W|$ (every split adds a cluster to be checked against $|W|$ queries), we execute the split and enqueue the resultant sub-spaces (lines 10 to 13). Otherwise, $s$ is finalized, and we generate the MBR for the data objects in $s$ and use it as a bottom cluster (lines 14 to 16). The process terminates when $Q$ becomes empty (line 4).
When finding the optimal splitting value along a dimension (lines 18 to 24), we apply SGD~\cite{bottou-98x} to minimize Eq.~\ref{equ:loss} (line 21). Here, we use a map structure $opt$ to record the new object checking cost, the dimension, and the value of a learned optimal split. The time complexity of each iteration in Algorithm \ref{algo:train} is $\mathcal{O}\big(h \cdot (E_{x} + E_{y})\big)$, where $h$ denotes the per-iteration time complexity of SGD and $E_{d}$ ($d \in \{x, y\}$) denotes the number of SGD iterations along dimension $d$. Recall that the time complexity of the brute-force algorithm is $\mathcal{O}\big(f\cdot(|V_{x}|+|V_{y}|)\big)$. We note that $f$ is larger than $h$ because our heuristic algorithm does not need to run a split to calculate a query cost (while the brute-force algorithm does). $E_{x}$ and $E_{y}$ depend on the algorithm configurations, such as the learning rate and the number of model parameters. They are usually much smaller than $|V_{x}|$ and $|V_{y}|$, respectively. Therefore, the time complexity of our heuristic algorithm is lower than that of the brute-force algorithm. \subsection{Problem Statement} We consider a geo-textual dataset $D$ where each data object, i.e., a geo-textual object $o\in D$, has a point location denoted as $o.loc$ and a text description denoted as $o.kws$. For ease of discussion, we assume two-dimensional coordinates in Euclidean space to represent $o.loc$, although our proposed techniques can generalize to multi-dimensional spaces easily. The text description $o.kws$ is represented as a set of keywords, e.g., tags indicating the functionality of a POI. We aim to process \emph{spatial keyword range queries} over $D$. \begin{definition}[\textbf{Spatial Keyword Range (SKR) Query}] An SKR query \textit{q} is represented by a pair $(q.area, q.keys)$ where \textit{q.area} and \textit{q.keys} denote a spatial region and a set of keywords, respectively. The result of \textit{q}, $q(D) = \{o \in D \mid o.loc \in q.area, o.kws \cap q.keys \neq \varnothing\}$, is the subset of \textit{D} that includes all objects within the query region that contain at least one query keyword. \end{definition} Here, we use a rectangular query region. Our techniques can be easily extended to handle query regions of other shapes (e.g., circles) by an extra filtering step after querying with the bounding rectangle. \smallskip\noindent\textbf{Problem.} Our goal is to learn an index structure that can efficiently process SKR queries utilizing the distributions of the geo-textual data and the given query workload. \subsection{Reinforcement Learning} \label{sec2.3} Reinforcement learning (RL) \cite{DBLP:journals/jair/KaelblingLM96} is a machine learning technique where an agent learns from feedback obtained from trial-and-error interactions with an environment. It has been shown to be effective for sequential decision-making problems \cite{DBLP:journals/nature/SilverHMGSDSAPL16, DBLP:conf/aaai/ZhaoS0Y021}. The RL formulation is based on the Markov Decision Process (MDP) \cite{DBLP:journals/siamrev/Feinberg96}. An MDP has four components: a set of states \textit{S}, a set of actions \textit{A}, transition probabilities $P$, and rewards $R$. At some state $s \in S$, an agent may take an action $a \in A$. As a result, there is a probability $P_{a}(s, s^{\prime})$ that the agent transits to state $s^{\prime}$, and a reward $R_{a}(s, s^{\prime})$ is received from such an action and state transition.
The goal of the agent is to learn a policy function $\pi : A \times S \rightarrow [0, 1]$, i.e., the probability of taking action $a$ at state $s$, such that the cumulative reward of state transitions is maximized. Figure \ref{fig:basic_rl} shows the basic workflow of RL. The environment connects to the agent via perception and action and it offers the agent the possible action choices based on the current state of the agent. The agent learns its policy based on rewards accumulated from interactions with the environment. Its learning process stops when a terminal state is reached. \begin{figure}[tb] \centering \includegraphics[width=.7\columnwidth]{figures/rl.pdf} \vspace{-0.3cm} \caption{The typical RL learning framework} \label{fig:basic_rl} \end{figure} Q-learning \cite{DBLP:journals/ml/WatkinsD92} is a commonly used value-based policy learning algorithm, which learns the value of an action given a state. It learns a policy that maximizes the value of a so-called Q-function, $Q(s, a)$, i.e., the overall expected reward when an agent plays following the policy \cite{melo2001convergence}. State-of-the-art RL models such as Deep-Q-Network (DQN)~\cite{DBLP:journals/nature/MnihKSRVBGRFOPB15} use a deep neural network $Q(s, a;\theta)$ with parameters $\theta$ to estimate the value of the Q-function $Q(s, a)$. Once $Q(s, a;\theta)$ is trained, it can be used for decision-making for future events. \section{Introduction} \label{sec1} \input{sections/intro} \section{Preliminaries} \label{sec2} \input{sections/preliminaries} \section{Index Overview} \label{sec3} \input{sections/overview} \section{Partitioning Optimization} \label{sec4} \input{sections/partition} \section{Bottom-up Packing} \label{sec5} \input{sections/packing} \section{Design Optimizations} \label{sec6} \input{sections/optimization} \section{Experiments} \label{sec7} \input{sections/experiment} \section{Related Work} \label{sec8} \input{sections/related} \section{Conclusions and Future Work} \label{sec9} \input{sections/conclusion} \clearpage \balance \bibliographystyle{ACM-Reference-Format}
{ "arxiv_id": "2302.14258", "language": "en", "timestamp": "2023-03-01T02:07:09", "url": "https://arxiv.org/abs/2302.14258", "yymm": "2302" }
\subsection*{Notation} \section{Introduction} Curve shortening flow is the gradient flow of length for regular curves. It was introduced as a model for wearing processes \cite{Firey} and the evolution of grain boundaries in annealing metals \cite{Mullins,vonNeumann}, and has found a number of further applications, for example in image processing \cite{Sapiro}. It arises in areas as diverse as quantum field theory \cite{Bakas} and cellular automata \cite{MR1669736}. The ultimate fate of closed, embedded curves in $\mathbb{R}^2$ under curve shortening flow is characterised by the theorems of Gage--Hamilton \cite{GageHamilton86} and Grayson \cite{Grayson}, which imply that any such curve must remain embedded, eventually becoming convex, before shrinking to a round point after a finite amount of time. Recently, there has been significant interest in so-called \textit{free boundary} problems in geometry. Study of the free-boundary curve shortening flow (whereby the endpoints of the solution curve are constrained to move on a fixed barrier curve which they must meet orthogonally) was initiated by Huisken \cite{Huisken89}, Altschuler--Wu \cite{AltschulerWu}, and Stahl \cite{Stahl96b,Stahl96a}. In particular, Stahl proved that bounded, convex, locally uniformly convex curves with free boundary on a smooth, convex, locally uniformly convex barrier remain convex and shrink to a point on the barrier curve. Our main theorem completely determines the long-time behaviour of simple closed intervals under free-boundary curve shortening flow in a convex domain. \begin{usethmcounterof}{thm:fbGrayson} Let $\Omega \subset \mathbb{R}^2$ be a convex domain of class $C^2$ and let $\{\Gamma_t\}_{t\in [0,T)}$ be a maximal free boundary curve shortening flow starting from a properly embedded interval $\Gamma_0$ in $\Omega$. Either: \begin{enumerate} \item[(a)] $T=\infty$, in which case $\Gamma_t$ converges smoothly as $t\to \infty$ to a chord in $\Omega$ which meets $\partial\Omega$ orthogonally; or \item[(b)] $T<\infty$, in which case $\Gamma_t$ converges uniformly to some $z\in \partial\Omega$, and \[ \tilde\Gamma_t\doteqdot \frac{\Gamma_t -z}{\sqrt{2(T-t)}} \] converges uniformly in the smooth topology as $t\to T$ to the unit semicircle in $T_z\Omega$. \end{enumerate} \end{usethmcounterof} Theorem \ref{thm:fbGrayson} represents a free-boundary analogue of the Gage--Hamilton--Grayson theorem. Note, however, that we must allow for long-time convergence to a stationary chord, which was not a possibility for closed planar curves. Observe that our statement includes uniqueness of the limiting chord, which is a subtle issue.\footnote{Indeed, there are examples of closed curve shortening flows in three-manifolds which have non-unique limiting behaviour as $t\to \infty$ (see \cite[Remark 4.2]{WhiteNotesEdelen}). On the other hand, Gage \cite{MR1046497} showed that closed curve shortening flow on the round $S^2$ does converge to a unique limiting geodesic. 
More generally, on a closed Riemannian surface, Grayson \cite{MR979601} proved that a closed curve shortening flow always subconverges to a closed geodesic as $t\to\infty$ if $T=\infty$, but uniqueness of the limiting geodesic appears to remain open.} Returning to the setting of closed planar curves, we recall that Gage \cite{Gage84} and Gage--Hamilton \cite{GageHamilton86} established that closed convex curves remain convex and shrink to ``round'' points in finite time under curve shortening flow by exploiting monotonicity of the isoperimetric ratio and Nash entropy of the evolving curves. By carefully exploiting ``zero-counting'' arguments for parabolic equations in one space variable, Grayson \cite{Grayson} was able to show that general closed embedded curves eventually become convex. Further proofs of these results were discovered by Hamilton \cite{MR1369140}, Huisken \cite{Huisken96}, Andrews \cite{MR2967056} and Andrews--Bryan \cite{MR2843240,AndrewsBryan}. Huisken's argument provides a rather quick route to the Gage--Hamilton--Grayson theorem via distance comparison: using only the maximum principle, he shows that the ratio of extrinsic to intrinsic distances --- the \textit{chord-arc} ratio --- does not degenerate under the flow. This precludes ``collapsing'' singularity models, and the result follows by a (smooth) ``blow-up'' argument. Andrews and Bryan provided a particularly direct route to the theorem by refining Huisken's argument: they obtained a \emph{sharp} estimate for the chord-arc profile, which implied much stronger control on the evolution, allowing for a direct proof of convergence. Inspired by the approach of Huisken and Andrews--Bryan to planar curve shortening flow, we introduce a new ``extended'' chord-arc profile for embedded curves with orthogonal contact angle in a convex planar domain $\Omega$, and show that it cannot degenerate under free boundary curve shortening flow. The latter is sufficient to rule out collapsing singularity models, which is the key step in establishing Theorem \ref{thm:fbGrayson}. Our extended chord-arc profile is motivated by the half-planar setting: $\Omega=\mathbb{R}^2_+$. In this case, reflection across $\partial\ensuremath{\mathbb{R}}_+^2$ yields a curve shortening flow of closed curves in $\mathbb{R}^2$, so any suitable notion of chord-arc profile in the free boundary setting should account for the reflected part of the curve. Accordingly, we define the ``reflected'' distance $\tilde{d}(x,y)$ to be the length of the shortest single-bounce billiard trajectory in $\Omega$ connecting $x$ to $y$, and similarly define a reflected arclength. The extended chord-arc profile (see \S \ref{sec:extended-chord-arc}) then controls the relationship between extrinsic and intrinsic distance, with and without reflection. Our arguments therefore fit into the broader framework of maximum principle techniques for ``multi-point'' functions. Such techniques have been successfully applied to prove a number of key results in geometric analysis (see \cite{AndrewsSurvey,MR3061135} for a survey), including the distance comparison principles of Huisken \cite{Huisken96} and Andrews--Bryan \cite{AndrewsBryan}. However, applications in the context of Neumann boundary conditions are typically much more difficult than the closed (or periodic) case. 
Finally, we mention that ruling out collapsing singularity models was also a key component of work of the second author and collaborators \cite{EHIZ} on free boundary \textit{mean curvature flow}, under the assumption of mean convexity. We remark that similar techniques provide a plausible alternative route to Theorem \ref{thm:fbGrayson}, so long as a suitable ``sheeting'' theorem can be established in the absence of the convexity condition. The remainder of this paper is organised as follows. In Section \ref{sec:prelim}, we establish some preliminaries on free-boundary curve shortening flow, and in Section \ref{sec:extended-chord-arc} we define our reflected (and extended) chord-arc profile. In Section \ref{sec:spatial variation}, we establish first and second derivative conditions on the extended chord-arc profile at a spatial minimum. We compute the time derivative of the chord-arc profile in Section \ref{sec:evolution}, and then use it, in conjunction with the spatial derivative conditions, to establish our extended chord-arc estimate. In Section \ref{sec:grayson}, we deduce Theorem \ref{thm:fbGrayson}, via two different blow-up methods (``intrinsic'' and ``extrinsic''). Finally, in Section \ref{sec:unbounded}, we discuss how our chord-arc estimates may be applied to free boundary curve shortening flows (in an unbounded convex domain) with one free boundary point and one end asymptotic to a ray. \subsection*{Acknowledgements} The project originated while the first author was visiting, and while the second author was a research fellow at, the Australian National University. Both authors would like to thank Ben Andrews and the ANU for their generous support. M.L. was supported by the Australian Research Council (Grant DE200101834). J.Z. was supported in part by the Australian Research Council under grant FL150100126 and the National Science Foundation under grant DMS-1802984. \section{Free boundary curve shortening flow} \label{sec:prelim} Let $\Omega$ be a closed domain in $\ensuremath{\mathbb{R}}^2$ with non-empty interior and $C^2$ boundary $\partial\Omega$. We shall often use the notation $S\doteqdot \partial\Omega$ and denote interiors by $\mathring{\Omega}$ and so forth. A family $\{\Gamma_t\}_{t\in I}$ of connected properly immersed curves-with-boundary $\Gamma_t$ satisfies \emph{free boundary curve shortening flow} in $\Omega$ if $\mathring{\Gamma_t}\subset\mathring{\Omega}$ and $\partial\Gamma_t\subset\partial\Omega$ for all $t\in I$, and there is a 1-manifold $M$ and a smooth family of immersions $\gamma:M\times I\to \Omega$ of $\Gamma_t=\gamma(M,t)$ such that \begin{equation}\label{eq:fb-csf} \left\{\begin{aligned}\partial_t\gamma={}&-\kappa N\;\;\text{in}\;\; \mathring{M}\times I\\ \inner{N}{N^S}={}&0\;\;\text{on}\;\;\partial M\times I\, \end{aligned}\right. \end{equation} where $\kappa(\cdot,t)$ is the curvature of $\Gamma_t$ with respect to the unit normal field $N(\cdot, t)$ and $N^S$ is the outward unit normal to $S=\partial\Omega$ along $\gamma|_{\partial M\times I}$. We will work with the unit tangent vectors $T\doteqdot JN$ and $T^S\doteqdot JN^S$, where $J$ is the counterclockwise rotation by $\frac{\pi}{2}$ in $\mathbb{R}^2$. Up to a reparametrization, we may arrange that $T=\frac{\gamma'}{|\gamma'|}$. We will consider the setting where $\gamma(\cdot,t)$ are embeddings (a condition which is preserved under the flow) and $\Omega$ is convex. In general, the curves $\Gamma_t$ could be either bounded or unbounded and could have zero, one or two endpoints.
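A model example to keep in mind is the shrinking semicircle in a half-plane: if $\Omega=\ensuremath{\mathbb{R}}\times[0,\infty)$ and
\[ \Gamma_t\doteqdot\left\{x\in\Omega:|x|=\sqrt{R^2-2t}\right\},\qquad t\in[0,\tfrac{R^2}{2})\,, \]
then each $\Gamma_t$ meets $\partial\Omega$ orthogonally and the family satisfies \eqref{eq:fb-csf}, since a circle of radius $r(t)=\sqrt{R^2-2t}$ has curvature $r(t)^{-1}$ and $r'(t)=-r(t)^{-1}$. The solution shrinks to the origin $z=0\in\partial\Omega$ at time $T=\frac{R^2}{2}$, and the rescaled curves $\frac{\Gamma_t-z}{\sqrt{2(T-t)}}$ are precisely the unit semicircle, exhibiting the behaviour described in case (b) of Theorem \ref{thm:fbGrayson}.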
If $\partial\Gamma_t=\emptyset$, the work of Gage--Hamilton \cite{GageHamilton86}, Grayson \cite{Grayson} and Huisken \cite{Huisken96} provides a complete description of the flow (upon imposing mild conditions at infinity in case the $\Gamma_t$ are unbounded). Our primary interest is therefore those cases in which the $\Gamma_t$ have either one or two boundary points. In the latter case, the timeslices $\Gamma_t$ are compact, and solutions will always remain in some compact subset $K$ of $\Omega$. (Indeed, since $\Omega$ is convex, we can enclose the initial curve $\Gamma_0$ by suitable half-lines or chords which meet $\partial\Omega$ in acute angles with respect to the side on which $\Gamma_0$ lies; these act as barriers for the flow.) Finally, since $\Omega$ is taken to be convex, there is no loss of generality in assuming that $\Gamma_t$ does not touch $\partial \Omega$ at interior points: the strong maximum principle ensures that interior touching cannot occur at interior times, unless $\Gamma_t=\partial\Omega$ for all $t$ and $\partial\Omega$ is flat. \section{Extending the chord-arc profile} \label{sec:extended-chord-arc} \begin{comment} Recall that the \emph{chord-arc profile} \cite{AndrewsBryan} (cf. \cite{Huisken96}) $\psi(\cdot,t)$ of the curve $\Gamma_t$ is defined to be \[ \psi(\delta,t) \doteqdot \inf \{d(x,y,t) : x,y \in M^1,\ell(x,y,t) = \delta\}. \] In order to adapt the Andrews--Bryan analysis to the free boundary setting, we need to extend the chord-arc profile across the boundary. To this end, define the \emph{reflected chordlength} $\tilde d(x,y,t)$ and the \emph{reflected arclength} $\tilde\ell(x,y,t)$ between two points $\gamma(x,t), \gamma(y,t)\in \Gamma_t$ by \[ \tilde d(x,y,t)\doteqdot \min_{z\in S}\big(\vert\gamma(x,t)-z\vert+\vert\gamma(y,t)-z\vert\big) \] and \[ \tilde\ell(x,y,t)\doteqdot \min_{s\in \partial M^1}\big(\ell(x,s,t)+\ell(y,s,t)\big)\,, \] respectively. Note that this introduces points of non-smoothness when $\ell(x,s_0,t) = \ell(y,s_1,t)$, where $s_0$ and $s_1$ are the boundary points of $M^1$. This occurs precisely when $\tilde{\ell}=\tilde L/2$, where $\tilde L(t)$ is twice the total length of $\Gamma_t$ (the total ``reflected length''). The \emph{reflected chord-arc profile} is then defined by \[ \tilde\psi(\delta,t) \doteqdot \inf\left\{\tilde d(x,y,t): x,y\in M^1, \tilde\ell(x,y,t) =\delta \right\} \] and we take the ``extended chord-arc profile'' to be \[ \Psi(\delta,t) \doteqdot \min\{\psi(\delta,t), \tilde{\psi}(\delta,t)\}\,. \] If $\gamma(y,t)$ is intrinsically nearer the boundary of $\Gamma_t$ than $\gamma(x,t)$, then the reflected chordlength $\tilde d(x,y,t)$, resp. reflected arclength $\tilde \ell(x,y,t)$ is equal to the extrinsic resp. intrinsic distance in $\tilde\Gamma_t$ between $\gamma(x,t)$ and $\tilde\gamma(y,t)$, where $\tilde \Gamma_t$ is the union of $\Gamma_t$ with its reflection across the tangent line to $S$ at the (intrinsically) nearest boundary point of $\Gamma_t$ to $\gamma(y,t)$, and $\tilde\gamma(y,t)$ is the image of the point $\gamma(y,t)$ under the reflection. {\color{cyan}If we denote by $L$ and $R$ the respective reflections of the plane about the tangent lines to $S$ at the left and right boundary points of $\Gamma_t$, then we may take $\tilde d$ and $\tilde\ell$ to be the chord- and arclength functions on $L\Gamma_t\cup\Gamma_t\cup R\Gamma_t$. 
Observe that, for any pair of points $\gamma(x,t)$ and $\gamma(y,t)$ in $\Gamma_t$, the sum of the intrinsic distances in $L\Gamma_t\cup\Gamma_t\cup R\Gamma_t$ between $L\gamma(x,t)$ and $\gamma(y,t)$ and between $R\gamma(x,t)$ and $\gamma(y,t)$ is equal to the total reflected length $\tilde L$. Thus, any function of $\tilde\ell$ (between points separated by intrinsic distance at most $\tilde L$) which is symmetric about $\tilde L/2$ descends to a function on the quotient $\tilde \Gamma_t$ of $L\Gamma_t\cup\Gamma_t\cup R\Gamma_t$ obtained by identifying points of $L\Gamma_t$ with points of $R\Gamma_t$ according to their preimages under the respective reflections.} \end{comment} Recall that the (``classical") \emph{chord-arc profile} \cite{EGF} (cf. \cite{AndrewsBryan,Huisken96}) $\psi_\Gamma$ of an embedded planar curve $\Gamma$ is defined to be \[ \psi_\Gamma(\delta) \doteqdot \inf \{d(x,y) : x,y \in \Gamma,\ell(x,y) = \delta\}\,, \] where $d(x,y )$ is the chordlength (Euclidean distance) and $\ell(x,y)$ the arclength between the points $x$ and $y$. \subsection{The (reflected) profile} For curves $\Gamma$ embedded in a convex planar domain $\Omega$ with nontrivial boundary $\partial\Gamma$ on $\partial\Omega$, we introduce an ``extended'' chord-arc profile as follows: first, we define the \emph{reflected distance} $\tilde{d}(x,y)$ between two points $x,y$ in $\Omega$ (or \emph{reflected chordlength} if $x,y\in\Gamma$) by \[\tilde{d}(x,y) = \min_{z\in\partial\Omega}\left(|x-z|+|y-z|\right)\] and the \emph{reflected arclength} $\tilde\ell(x,y)$ between two points $x,y\in \Gamma$ by \[ \tilde\ell(x,y)\doteqdot \min_{s\in \partial\Gamma}\big(\ell(x,s)+\ell(y,s)\big)\,. \] The \emph{reflected chord-arc profile} $\tilde\psi_\Gamma$ of $\Gamma$ is then defined by \[ \tilde\psi_\Gamma(\delta) \doteqdot \inf\left\{\tilde d(x,y): x,y\in \Gamma, \tilde\ell(x,y) =\delta \right\} \] and the \emph{extended chord-arc profile} $\boldsymbol{\psi}_\Gamma$ is taken to be \[ \boldsymbol{\psi}_\Gamma(\delta) \doteqdot \min\{\psi_\Gamma(\delta),\tilde{\psi}_\Gamma(\delta)\}\,. \] Given a parametrisation $\gamma:M\to\Omega$ of $\Gamma$, we may sometimes conflate the functions $d, \ell, \tilde{d},\tilde{\ell}$ with their pullbacks to $M\times M$ by $\gamma$. \subsection{The completed curve and profile} The extended chord-arc profile of an embedded curve-with-endpoints has a natural interpretation on its formal doubling. Consider a connected, properly immersed curve-with-boundary $\Gamma$ in a planar set $\Omega$ with endpoints on $\partial\Omega$. Given a parametrisation $\gamma:M\to \Omega$ of $\Gamma$, we define the formal double $\boldsymbol{M} = (M\sqcup M) / \partial M$ and write $\boldsymbol{x} = (x,\sign(\boldsymbol{x}))$ for elements of $\boldsymbol{M}$, where $x\in M$ and $\sign(\boldsymbol{x})=\pm$ distinguishes to which copy of $M$ it belongs. We also define continuous curve $\boldsymbol{\gamma}: \boldsymbol{M} \to \Omega$ by $\boldsymbol{\gamma}(\boldsymbol{x})= \gamma(x)$. Next observe that the arclength function $\ell$ is well-defined on $\boldsymbol{M}\times\boldsymbol{M}$, and satisfies \begin{equation} \boldsymbol\ell(\boldsymbol{x},\boldsymbol{y})\doteqdot \left\{\begin{aligned}\ell(x,y),{}&\;\;\text{if $\sign(\boldsymbol{x}) = \sign(\boldsymbol{y})$}\\ \tilde{\ell}(x,y), {}&\;\;\text{if $\sign(\boldsymbol{x})\neq \sign(\boldsymbol{y})$}. \end{aligned} \right. 
\end{equation} Similarly, we may define a ``completed chordlength'' function on $\boldsymbol{M}\times \boldsymbol{M}$ by \begin{equation} \boldsymbol{d}(\boldsymbol{x},\boldsymbol{y})\doteqdot \left\{\begin{aligned}d(\gamma(x),\gamma(y)),{}&\;\;\text{if $\sign(\boldsymbol{x}) = \sign(\boldsymbol{y})$}\\ \tilde{d}(\gamma(x),\gamma(y)), {}&\;\;\text{if $\sign(\boldsymbol{x})\neq \sign(\boldsymbol{y})$}. \end{aligned} \right. \end{equation} The \emph{completed chord-arc profile} $\boldsymbol{\psi}$ of $\Gamma$ is then defined by \[ \boldsymbol{\psi}(\delta) \doteqdot \inf\left\{\boldsymbol{d}(x,y): x,y\in \boldsymbol\Gamma, \boldsymbol{\ell}(x,y) =\delta \right\}. \] Note that this coincides with the notion of extended chord-arc profile defined above. \begin{remark} \label{rmk:gluing} The formal double $\boldsymbol{M}$ has an obvious smooth structur , and the arclength $\boldsymbol{\ell}$ is smooth with respect to this structure. Furthermore, if $\Gamma$ contacts $\partial\Omega$ orthogonally, then the completed chordlength $\boldsymbol{d}$ is essentially $C^1$ on $\boldsymbol{M}$. This $C^1$ gluing is basically what is needed to guarantee first derivative conditions at minima of the chord-arc profile, although we have presented their proof in a more direct and precise manner; see Lemma \ref{lem:C1-endpoints} in particular. \end{remark} \begin{comment} Given a properly embedded, connected curve-with-boundary $\Gamma$ in a planar set $\Omega$ with endpoints on $\partial\Omega$, we can find a unit speed embedding $\gamma:M\to\Omega$ with $M\in \{S^1,\ensuremath{\mathbb{R}},[0,L],[0,\infty)\}$. If $\partial M\ne\emptyset$, we define the \emph{reflected completion} of $(M,ds^2)$, where $ds^2$ is the induced metric, to be the Riemannian 1-manifold $(\boldsymbol{M},d\boldsymbol{s}^2)$ as follows: \begin{enumerate} \item[(a)] If $M=[0,L]$, then $\boldsymbol{M}\doteqdot [-L,L]/\{\pm L\}$ and $d\boldsymbol{s}^2$ is the metric induced by the immersion $\boldsymbol\gamma:\boldsymbol M\to\Omega$ defined by $\boldsymbol{\gamma}(x) \doteqdot \gamma(|x|)$. \item[(b)] If $M=[0,\infty)$, then $\boldsymbol{M}\doteqdot \ensuremath{\mathbb{R}}$ and $d\boldsymbol{s}^2$ is the metric induced by the immersion $\boldsymbol\gamma:\boldsymbol M\to\Omega$ defined by $\boldsymbol{\gamma}(x) \doteqdot \gamma(|x|)$. \end{enumerate} (Of course, if $\partial M=\emptyset$, then we may simply take $\boldsymbol M=M$, $\boldsymbol\gamma=\gamma$ and $d\boldsymbol{s}^2=ds^2$.) When $M=[0,L]$, the intrinsic length $\boldsymbol\ell(x,y)$ between two points $x,y\in \boldsymbol M$ is given by $\boldsymbol{\ell}(x,y) = \min\{|x-y| , \boldsymbol{L}-|x-y|\}$, where $\boldsymbol{L}\doteqdot 2L$. When $\boldsymbol L\doteqdot \length(\boldsymbol M,d\boldsymbol{s}^2)<\infty$, the composition $\varphi(\boldsymbol{\ell}/\boldsymbol{L})$ is $C^2$ away from the diagonal $\boldsymbol D\doteqdot \{(x,x):x\in \boldsymbol M\}\subset \boldsymbol M\times\boldsymbol M$ for any smooth function $\varphi:[0,1]\to\ensuremath{\mathbb{R}}$ with $\varphi'(\frac{1}{2})=0$. 
The intrinsic length $\boldsymbol \ell$ is related to the reflected arclength $\tilde{\ell}$ by \[ \boldsymbol{\ell}(x,y) = \begin{cases} \ell(|x|, |y|)\;\;\text{if} & xy\geq 0 \\ \tilde{\ell}(|x|, |y|)\;\;\text{if} & xy\leq 0\,.\end{cases}\] We define the \emph{(completed) chordlength} $\boldsymbol d$ on $\boldsymbol{M}\times\boldsymbol M$ in the analogous way: \[ \boldsymbol{d}(x,y) \doteqdot \begin{cases} d(\gamma(|x|), \gamma(|y|))\;\;\text{if} & xy\geq 0 \\ \tilde{d}(\gamma(|x|), \gamma(|y|))\;\;\text{if} & xy\leq 0\,.\end{cases}\] The \emph{completed chord-arc profile} $\boldsymbol{\psi}$ of $\Gamma$ is then defined by \[ \boldsymbol{\psi}(\delta) \doteqdot \inf\left\{\boldsymbol{d}(x,y): x,y\in \boldsymbol{M}, \boldsymbol{\ell}(x,y) =\delta \right\}. \] Note that this indeed coincides with the notion of extended chord-arc profile above. \end{comment} \subsection{Variation of the (reflected) chordlength} It will be convenient to introduce some notation (see also Figure \ref{fig:angles} below). Given $x,y\in \mathring{\Omega}$ and $z\in S=\partial\Omega$, we define the angles $\theta_{x},\theta_{y}\in \ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}$ by \[ \frac{x-z}{|x-z|} = \cos\theta_{x}N^S_z + \sin\theta_{x}T^S_z \] and \[ \frac{y-z}{|y-z|} = \cos\theta_{y}N^S_z + \sin\theta_{y}T^S_z\,. \] Note that, due to our convention for $N^S$, we have $\kappa^S >0$ and $\cos\theta_{x},\cos\theta_{y} <0$. If $x\ne y$, then, given unit vectors $X,Y$ in $\ensuremath{\mathbb{R}}^2$, we may further define the angles $\alpha_x^X, \alpha_y^Y$, $\beta_x^X$, and $\beta_y^Y$ by \[ \frac{x-y}{|x-y|} = \cos \alpha_x^X (-JX) + \sin \alpha_x^X X = \cos \alpha_y^Y (-JY) + \sin\alpha_y^Y Y \] \[ \frac{x-z}{|x-z|} = \cos \beta_x^X (-JX) + \sin \beta_x^X X, \] and \[ \frac{y-z}{|y-z|} = \cos\beta_y^Y (-JY) + \sin\beta_y^Y Y. \] Note that \[ \langle X,Y\rangle = \cos(\alpha_x^X-\alpha_y^Y)\;\;\text{and}\;\; \inner{JX}{Y}=\sin(\alpha_x^X-\alpha_y^Y)\,, \] and also that \[ \langle X,T^S_z\rangle = \cos(\beta_x^X-\theta_{x})\;\; \text{and} \;\;\langle Y,T^S_z\rangle = \cos(\beta_y^Y-\theta_{y})\, \] We emphasise that the subscripts $x$ or $y$ appear only to distinguish which vectors each angle is defined by; in particular, each of the angles defined above may depend on $x,y,z$ (and $X,Y$). The regularity of the distance function is well-established. Its first and second variations are given as follows \cite{AndrewsBryan,Huisken96}. \begin{proposition} \label{prop:d} Denote by $\Delta = \{(x,x) : x\in \Omega\}$ the diagonal in $\Omega\times \Omega$. The distance $d$ is continuous on ${\Omega}\times {\Omega}$ and smooth on $({\Omega}\times {\Omega})\setminus \Delta$. Moreover, given $(x,y) \in ({\Omega}\times {\Omega})\setminus \Delta$ and unit vectors $X,Y$ in $\mathbb{R}^2$, we have \[\partial^x_Xd= \inner{\frac{x-y}{d}}{X} = \sin \alpha_x^X, \] \[\partial^y_Y d= \inner{\frac{y-x}{d}}{Y} = - \sin\alpha_y^Y , \] \[\partial^x_X\partial^x_Xd= \frac{1}{d} - \frac{1}{d} \inner{\frac{x-y}{d}}{X}^2 = \frac{1}{d} \cos^2 \alpha_x^X, \] \[\partial^y_Y\partial^y_Yd= \frac{1}{d} - \frac{1}{d} \inner{\frac{y-x}{d}}{Y}^2 = \frac{1}{d} \cos^2 \alpha_y^Y ,\] \[\partial^x_X\partial^y_Yd= -\frac{1}{d}\langle X,Y\rangle - \frac{1}{d} \inner{\frac{x-y}{d}}{X}\inner{\frac{y-x}{d}}{Y} = -\frac{1}{d} \cos \alpha_x^X \cos \alpha_y^Y\,.\] \end{proposition} \begin{comment} \begin{proposition} \label{prop:d} The distance $d$ is continuous on ${\Omega}\times {\Omega}$ and smooth on $({\Omega}\times {\Omega})\setminus \Delta$. 
Moreover, given $(x,y) \in ({\Omega}\times {\Omega})\setminus \Delta$ and unit vectors $X,Y$, we have \[ \partial^x_X d= \langle\frac{x-y}{d}, X\rangle = \sin \alpha_x^X, \] \[ \partial^y_Y d= \langle \frac{y-x}{d}, Y\rangle = \sin\alpha_y^Y , \] \[ (\partial^x_X)^2 d= \frac{1}{d} - \frac{1}{d} \langle \frac{x-y}{d},X\rangle^2 = \frac{1}{d} \cos^2 \alpha_x^X, \] \[ (\partial^y_Y)^2 d= \frac{1}{d} - \frac{1}{d} \langle \frac{y-x}{d},Y\rangle^2 = \frac{1}{d} \cos^2 \alpha_y^Y ,\] \[ \partial^x_X \partial^y_Y d= \frac{1}{d}\langle X,Y\rangle - \frac{1}{d} \langle \frac{x-y}{d}, X\rangle \langle \frac{y-x}{d}, Y\rangle = \frac{1}{d} \cos \alpha_x^X \cos \alpha_y^Y\] \end{proposition} \end{comment} \begin{lemma}[Snell's law]\label{lem:snell} Given any $(x,y)\in\mathring{\Omega}\times \mathring{\Omega}$ there exists $z\in \partial \Omega$ such that \begin{equation*} \tilde{d}(x,y) = d(x,z) + d(y,z)\, \end{equation*} The triple $(x,y,z)$ necessarily satisfies \begin{equation*} \theta_{x}+\theta_{y}=0\,. \end{equation*} \end{lemma} \begin{proof} Since $x$ and $y$ are interior points, the function $d(x,\cdot) + d(y,\cdot)$ is smooth on $\partial\Omega$. It thus attains its minimum over $\partial\Omega$, due to compactness of $\Omega\cap \overline B_R$ for arbitrary (large) $R$. Moreover, at any minimum $z\in\partial\Omega$, the first derivative test gives the reflected angle condition \ba\label{eq:snell} 0={}&\partial^z_{T^S_z} (d(x,z) + d(y,z))\nonumber\\ ={}&-\inner{\frac{x-z}{|x-z|}}{T^S_z} -\inner{\frac{y-z}{|y-z|}}{T^S_z}\nonumber\\ ={}& -\sin \theta_{x} - \sin\theta_{y}. \ea Convexity of $\Omega$ then ensures that $\theta_{x} =- \theta_{y}$ (mod $2\pi$). \end{proof} \section{Spatial variation of the chord-arc profile}\label{sec:spatial variation} For the purposes of computing spatial variations it will be convenient to restrict attention to a fixed simple closed interval $\Gamma$ in $\Omega$ which meets $\partial\Omega$ orthogonally. Throughout this section, we may assume without loss of generality that $\gamma: M\to \Omega$ is a unit speed parametrisation of $\Gamma$, and $M=[0,L]$. As in \cite[Chapter 3]{EGF}, we will control the chord-arc profile by a smooth to-be-determined function $\varphi\in C^{\infty}([0,1])$ satisfying the following properties. \begin{enumerate}[(i)] \item $\varphi(1-\zeta) = \varphi(\zeta)$ for all $\zeta\in[0,1]$. \item $|\varphi'|<1$. \item $\varphi$ is strictly concave. \end{enumerate} Note that, since $\varphi$ is smooth and symmetric about $\zeta=\frac{1}{2}$, the function $\varphi(\frac{\boldsymbol\ell}{\boldsymbol{L}})$ is smooth away from the diagonal $\boldsymbol D$ in $\boldsymbol M\times \boldsymbol M$. The following observation about such functions will be useful. \begin{lemma}[{\cite[Lemma 3.14]{EGF}}] Let $\varphi\in C^{\infty}([0,1])$ be any function satisfying properties (i)-(iii) above. For all $\zeta \in [0,\frac{1}{2})$, we have $\varphi'(\zeta)>0$ and $\varphi(\zeta)-\zeta \varphi' (\zeta)>0$. 
\end{lemma} We proceed to consider the auxiliary functions on $M\times M$ given by \[ Z(x,y) = d(\gamma(x),\gamma(y)) - \boldsymbol{L} \varphi\left(\frac{\ell(x,y)}{\boldsymbol{L}}\right), \] \[ \tilde{Z}(x,y) = \tilde{d}(\gamma(x),\gamma(y)) - \boldsymbol{L} \varphi\left(\frac{\tilde{\ell}(x,y)}{\boldsymbol{L}}\right), \] and the auxiliary function on $M\times M\times S$ given by \[\bar{Z}(x,y,z) = d(\gamma(x) ,z) + d(\gamma(y) ,z) - \boldsymbol{L} \varphi\left(\frac{\tilde{\ell}(x,y)}{\boldsymbol{L}}\right).\] Note that $\tilde{Z}(x,y) = \min_{z\in\partial\Omega}\bar{Z}(x,y,z)$. Our completed two-point function on $\boldsymbol{M}\times \boldsymbol{M}$ is defined by \begin{equation}\label{eq:bs Z} \boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y}) = \boldsymbol{d}(\boldsymbol{x}, \boldsymbol{y}) - \boldsymbol{L} \varphi\left(\frac{\boldsymbol{\ell}(\boldsymbol{x},\boldsymbol{y})}{\boldsymbol{L}}\right) = \begin{cases} Z(x, y)\;\;\text{if} & \sign(\boldsymbol{x})=\sign(\boldsymbol{y}) \\ \tilde{Z}(x,y)\;\;\text{if} & \sign(\boldsymbol{x})\neq\sign(\boldsymbol{y})\,.\end{cases} \end{equation} Let us define (as functions of $x,y \in M$ and $z\in S=\partial\Omega$) the angles $\alpha_x = \alpha_{\gamma(x)}^{T_x}$, $\alpha_y\doteqdot \alpha_{\gamma(y)}^{T_y}$, $\beta_x\doteqdot\beta_{\gamma(x)}^{T_x}$, $\beta_y\doteqdot\beta_{\gamma(y)}^{T_y}$, and (in a slight abuse of notation) $\theta_x\doteqdot\theta_{\gamma(x)}$ and $\theta_y\doteqdot\theta_{\gamma(y)}$. In particular, we have \begin{equation} \label{eq:alpha} \frac{\gamma(x)-\gamma(y)}{|\gamma(x)-\gamma(y)|} = \cos \alpha_{x} N_{x} + \sin \alpha_{x} T_{x} = \cos \alpha_{y} N_{y} + \sin\alpha_{y} T_{y}\,, \end{equation} \begin{equation} \label{eq:beta-x} \frac{\gamma(x)-z}{|\gamma(x)-z|} = \cos \beta_{x} N_{x} + \sin \beta_{x} T_{x} = \cos \theta_{x} \, N^S_z + \sin\theta_{x} \, T^S_z \end{equation} and \begin{equation} \label{eq:beta-y} \frac{\gamma(y)-z}{|\gamma(y)-z|} = \cos\beta_{y} N_{y} + \sin\beta_{y} T_{y} = \cos \theta_{y} \, N^S_z + \sin\theta_{y} \, T^S_z. \end{equation} \begin{figure} \centering \includegraphics[width=0.54\textwidth]{angles} \caption{The angles $\alpha_x,\alpha_y,\beta_x,\beta_y,\theta_x,\theta_y$, which depend on the configuration of $\gamma$ at $x,y$ and the boundary point $z$.}\label{fig:angles} \end{figure} \subsection{Classical profile} We first calculate an outcome of the second derivative test at an (unreflected) minimum where the first derivative vanishes. (Note that we include the vanishing of the first derivatives as hypotheses, to account for the endpoints; these of course hold automatically at an interior minimum.) Denote by $D\doteqdot\{(x,x):x\in M\}$ the diagonal in $M\times M$. \begin{proposition} Suppose that $0=Z(x,y) = \min_{M\times M}Z$ and that $\partial_x Z(x,y) = \partial_y Z(x,y)=0$ for some $(x,y)\in (M\times M)\setminus D$. At $(x,y)$, we have \[ \alpha_x+ \alpha_y=\pi \] and \begin{equation} 0\leq (\partial_x - \partial_y)^2 Z = \frac{4}{d}(1-\varphi'^2) - 4\frac{\varphi''}{\boldsymbol{L}} - \kappa_x\cos\alpha_x + \kappa_y\cos\alpha_y. \end{equation} \end{proposition} \begin{proof} Recall that $\Gamma$ is parametrised by arclength. By symmetry in $x,y$, we may also assume $x<y$, so that $\ell(x,y) = y-x$; in particular, $\partial_x\ell=-1$ and $\partial_y\ell=1$. 
Then $\partial_y \varphi=-\partial_x\varphi = \frac{1}{\boldsymbol{L}}\varphi'$ and hence, by Proposition \ref{prop:d}, \[ 0= \partial_x Z = \inner{\frac{\gamma(x)-\gamma(y)}{|\gamma(x)-\gamma(y)|}}{T_x} + \varphi' = \sin\alpha_{x} + \varphi',\] \[0=\partial_y Z=\inner{\frac{\gamma(y)-\gamma(x)}{|\gamma(y)-\gamma(x)|}}{T_y} - \varphi'= -\sin\alpha_{y} - \varphi'.\] Therefore $\sin \alpha_{x} = \sin \alpha_{y}=-\varphi'$ and hence either $\alpha_{x}=\alpha_{y}$ or $\alpha_{x}+\alpha_{y}=\pi$. In fact, since the minimum is zero, only the latter case can occur: \begin{claim}\label{claim:angle condition vanilla} $\alpha_x+\alpha_y=\pi$. \end{claim} \begin{proof}[Proof of Claim \ref{claim:angle condition vanilla}] We argue as in \cite[Lemma 3.13]{EGF}. Indeed, let $\sigma$ be the line segment connecting $\gamma(x)$ to $\gamma(y)$ and let $w=\frac{\gamma(x)-\gamma(y)}{|\gamma(x)-\gamma(y)|}$. The curve $\Gamma$ divides the domain $\Omega$ into two connected components $\Omega_\pm$, where $N=N^{\Gamma}$ points towards $\Omega_+$ at all points on $\Gamma$. The points in $\sigma$ near $\gamma(x)$ are given by $\gamma(x)- \epsilon w$, and hence lie in $\Omega_{-\sign \langle w, N_x\rangle} = \Omega_{-\sign(\cos \alpha_x)}$; similarly the points in $\sigma$ near $\gamma(y)$ are given by $\gamma(y)+ \epsilon w$, and hence lie in $\Omega_{\sign \langle w, N_{y}\rangle} =\Omega_{\sign(\cos\alpha_y)}$. If $\alpha_x=\alpha_y$, then this shows that $\sigma$ contains points on either side of $\Gamma$. In particular, $\sigma$ must intersect $\Gamma$ in a third point $\gamma(u)$. Since $d(x,u)+d(u,y)=d(x,y)$ and either $\ell(x,u)+\ell(u,y)=\ell(x,y)$ or $\ell(x,u)+\ell(u,y)=2L-\ell(x,y)$, the strict concavity of $\varphi$ now implies that either $Z(x,u)<0$ or $Z(y,u)<0$, which contradicts the assumption $\min_{M\times M}Z=0$. \end{proof} We proceed to compute $\partial_x^2 \varphi = \partial_y^2 \varphi = -\partial_x\partial_y \varphi = \frac{1}{\boldsymbol{L}^2}\varphi''$, and so \[\partial_x^2 Z = \frac{1}{d}\cos^2\alpha_x - \inner{\frac{\gamma(x)-\gamma(y)}{|\gamma(x)-\gamma(y)|}}{\kappa_x N_x} - \frac{1}{\boldsymbol{L}}\varphi'' = \frac{1}{d}\cos^2\alpha_x -\kappa_x \cos\alpha_x - \frac{1}{\boldsymbol{L}}\varphi'' ,\] \[\partial_y^2 Z = \frac{1}{d}\cos^2\alpha_y - \inner{\frac{\gamma(y)-\gamma(x)}{|\gamma(x)-\gamma(y)|}}{\kappa_y N_y} - \frac{1}{\boldsymbol{L}}\varphi'' = \frac{1}{d}\cos^2\alpha_y +\kappa_y \cos \alpha_y - \frac{1}{\boldsymbol{L}}\varphi'',\] and \[\partial_x\partial_y Z = -\frac{1}{d} \cos\alpha_x\cos\alpha_y + \frac{1}{\boldsymbol{L}}\varphi''.\] The second derivative test then gives \[ \begin{split} 0 &\leq (\partial_x \pm \partial_y)^2 Z \\ &= \frac{1}{d} ( \cos^2 \alpha_{x} + \cos^2 \alpha_{y} \mp 2\cos\alpha_{x}\cos\alpha_{y}) - (2\mp 2)\frac{\varphi''}{\boldsymbol{L}} - \kappa_x\cos\alpha_x + \kappa_y\cos\alpha_y \\&= \frac{1}{d} (\cos\alpha_{x} \mp \cos \alpha_{y})^2 - (2\mp 2)\frac{\varphi''}{\boldsymbol{L}}- \kappa_x\cos\alpha_x + \kappa_y\cos\alpha_y. \end{split} \] By Claim \ref{claim:angle condition vanilla} we have $(\cos \alpha_{x} +\cos \alpha_{y})^2 = 0$, and hence \begin{equation} 0\leq (\partial_x - \partial_y)^2 Z = - 4\frac{\varphi''}{\boldsymbol{L}} - \kappa_x\cos\alpha_x + \kappa_y\cos\alpha_y\,.\qedhere \end{equation} \end{proof} \subsection{Reflected profile} We next apply the first and second derivative tests (in the viscosity sense) to the reflected profile. It will be enough to consider interior points.
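Before carrying out the reflected computation, it may be helpful to record a model example (not needed for the arguments below, and stated under the simplifying assumption that $\Omega$ is the halfplane $\ensuremath{\mathbb{R}}^2_+$): for the semicircle $\Gamma=\{r(\cos\vartheta,\sin\vartheta):\vartheta\in[0,\pi]\}$, which meets $\partial\ensuremath{\mathbb{R}}^2_+$ orthogonally, the doubled curve may be identified (by reflecting across $\partial\ensuremath{\mathbb{R}}^2_+$) with the full circle of radius $r$, and the reflection principle gives $\tilde d(x,y)=|x-\bar y|$, where $\bar y$ denotes the reflection of $y$ across $\partial\ensuremath{\mathbb{R}}^2_+$. It follows that every pair of points satisfies
\[
\boldsymbol{d} = 2r\sin\left(\frac{\boldsymbol{\ell}}{2r}\right) = \frac{\boldsymbol{L}}{\pi}\sin\left(\frac{\pi\boldsymbol{\ell}}{\boldsymbol{L}}\right),\qquad \boldsymbol{L}=2\pi r\,,
\]
so that $\boldsymbol{Z}\equiv 0$ for the comparison function $\varphi(\zeta)=\frac{1}{\pi}\sin(\pi\zeta)$ (which satisfies properties (i) and (iii), and satisfies (ii) except at the endpoints $\zeta\in\{0,1\}$). The comparison functions employed in the next section may be viewed as small perturbations of this model profile.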
Recall that \[\bar{Z}(x,y,z) = d(\gamma(x) ,z) + d(\gamma(y) ,z) - \boldsymbol{L} \varphi\left(\frac{\tilde{\ell}(x,y)}{\boldsymbol{L}}\right).\] We write $d_x = d(\gamma(x),z)$, $d_y = d(\gamma(y),z)$. \begin{proposition} \label{prop:tilde-Z} Suppose that $\boldsymbol Z\ge 0$ with $0=\tilde{Z}(x,y)$ for some off-diagonal pair $(x,y)\in (\mathring{M}\times \mathring{M})\setminus D$. At $(x,y)$, \begin{equation} 0\leq -\kappa_x \cos \beta - \kappa_y \cos\beta + \left(\frac{1}{d_x} + \frac{1}{d_y}\right)\frac{2\kappa^S_z}{\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z}(1-\varphi'^2) - 4 \frac{\varphi''}{\boldsymbol{L}} \,, \end{equation} where $\theta = \theta_{x} = - \theta_{y}$, $\beta = \beta_{x}=\beta_{y}$ and \[\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z <0.\] \end{proposition} \begin{proof} Recall that $\Gamma$ is parametrised by arclength. First, note that $\tilde{\ell}(x,y) = \min\{x+y, \boldsymbol{L}-(x+y)\}$. By reversing the parametrisation if needed, we may assume without loss of generality that $\tilde{\ell}(x,y) = x+y$, and in particular $\partial_x \tilde{\ell} = \partial_y \tilde{\ell} =1$. By Lemma \ref{lem:snell} there exists $z\in S$ such that $0=\tilde{Z}(x,y)= \bar{Z}(x,y,z) = \min \bar{Z}$. Moreover, we have $\theta_{x} = -\theta_{y} =: \theta$. Now as $x,y,z$ are all pairwise distinct, $\bar{Z}(x,y,z)$ is smooth, and we may freely apply the first and second derivative tests. We now have $\partial_x \varphi=\partial_y\varphi = \frac{1}{\boldsymbol{L}}\varphi'$, so the first derivatives are \[ \partial_x \bar{Z} = \partial^x_{T_x} d|_{\gamma(x), z} - \varphi' = \inner{ \frac{\gamma(x)-z}{|\gamma(x)-z|}}{T_x} -\varphi' = \sin\beta_{x} - \varphi', \] \[ \partial_y \bar{Z} = \partial^y_{T_y} d|_{\gamma(y), z} - \varphi' = \inner{ \frac{\gamma(y)-z}{|\gamma(y)-z|}}{T_y} -\varphi' = \sin\beta_{y} - \varphi', \] and, as in Lemma \ref{lem:snell}, \[\partial_z \bar{Z} = \inner{\frac{z-\gamma(x)}{|z-\gamma(x)|}}{T^S_z}+\inner{\frac{z-\gamma(y)}{|z-\gamma(y)|}}{T^S_z} = -\sin \theta_{x} - \sin\theta_{y} =0 .\] The first derivative test also gives $\partial_x \bar{Z}=\partial_y \bar{Z}=0$, so $\sin\beta_{x} = \sin\beta_{y} = \varphi'$. Thus, either $\beta_{x} = \beta_{y}$ or $\beta_{x} + \beta_{y}=\pi$. In fact, since the minimum is zero, only the former occurs: \begin{claim}\label{claim:angle condition reflected} $\beta_{x}=\beta_{y}$. \end{claim} \begin{figure} \centering \includegraphics[width=0.42\textwidth]{wrong_config}\quad\includegraphics[width=0.5\textwidth]{right_config} \caption{Left: an inadmissible configuration. Right: an admissible configuration.}\label{fig:reflected_configs} \end{figure} \begin{proof}[Proof of Claim \ref{claim:angle condition reflected}] Let $\sigma_x$ be the line segment connecting $\gamma(x)$ to $z$ and $\sigma_y$ the line segment connecting $\gamma(y)$ to $z$, and set $w_x = \frac{\gamma(x)-z}{|\gamma(x)-z|}$ and $w_y= \frac{\gamma(y) -z}{|\gamma(y)-z|}$. Again, the curve $\Gamma$ divides the domain $\Omega$ into two connected components $\Omega_\pm$, where $N$ points towards $\Omega_+$ at all points on $\Gamma$. The points in $\sigma_x$ near $\gamma(x)$ are given by $\gamma(x)- \epsilon w_x$, and hence lie in $\Omega_{-\sign \langle w_x, N_x\rangle} = \Omega_{-\sign(\cos \beta_{x})}$; similarly the points in $\sigma_y$ near $\gamma(y)$ are given by $\gamma(y)- \epsilon w_y$, and hence lie in $\Omega_{-\sign \langle w_y, N_y\rangle}=\Omega_{-\sign(\cos\beta_{y})}$.
If $\beta_{x}+\beta_{y}=\pi$, then this shows that $\sigma_x\cup \sigma_y$ contains points on either side of $\Gamma$. In particular, $\sigma_x\cup \sigma_y$ must intersect $\Gamma$ in a third point $\gamma(u)$, $u\geq 0$ (see Figure \ref{fig:reflected_configs}). We have the following two possibilities: \begin{enumerate} \item $\tilde d(x,y)= d(x,u)+\tilde d(u,y)$ and either $\ell(x,u)+\tilde\ell(u,y)=\tilde\ell(x,y)$ or $\ell(x,u)+\tilde\ell(u,y)=2L-\tilde\ell(x,y)$; \item $\tilde d(x,y)=\tilde d(x,u)+d(u,y)$ and either $\tilde\ell(x,u)+\ell(u,y)=\tilde\ell(x,y)$ or $\tilde\ell(x,u)+\ell(u,y)=2L-\tilde\ell(x,y)$. \end{enumerate} So strict concavity and symmetry of $\varphi$ ensure in case (1) that either $Z(x,u)<0$ or $\tilde Z(u,y)<0$ and in case (2) that either $\tilde Z(x,u)<0$ or $Z(u,y)<0$, all of which are impossible since, by assumption, $\boldsymbol Z\ge 0$. \end{proof} Henceforth, we write $\beta\doteqdot \beta_x=\beta_y$. We now compute $\partial_x^2 \varphi = \partial_y^2 \varphi = \partial_x\partial_y \varphi = \frac{1}{\boldsymbol{L}^2}\varphi''$, so the second derivatives are given by \[ \begin{split} \partial_x^2 \bar{Z} &= \frac{1}{d_x} - \frac{1}{d_x} \inner{ \frac{\gamma(x)-z}{|\gamma(x)-z|}}{T_x}^2 - \kappa_x \inner{ \frac{\gamma(x)-z}{|\gamma(x)-z|}}{N_x} - \frac{1}{\boldsymbol{L}}\varphi'' \\&= \frac{1}{d_x} \cos^2\beta - \kappa_x \cos \beta - \frac{1}{\boldsymbol{L}}\varphi'', \end{split} \] \[ \begin{split} \partial_y^2 \bar{Z} &= \frac{1}{d_y} - \frac{1}{d_y} \inner{ \frac{\gamma(y)-z}{|\gamma(y)-z|}}{T_y}^2 - \kappa_y \inner{ \frac{\gamma(y)-z}{|\gamma(y)-z|}}{N_y} - \frac{1}{\boldsymbol{L}}\varphi'' \\&= \frac{1}{d_y} \cos^2\beta - \kappa_y \cos \beta - \frac{1}{\boldsymbol{L}}\varphi'', \end{split} \] \[ \partial_x \partial_y \bar{Z} = - \frac{1}{\boldsymbol{L}}\varphi'', \] \begin{align*} \partial_z^2 \bar{Z} ={}& \frac{1}{d_x} - \frac{1}{d_x} \inner{\frac{\gamma(x)-z}{|\gamma(x)-z|}}{T^S_z}^2+\kappa^S_z \inner{\frac{\gamma(x)-z}{|\gamma(x)-z|}}{N^S_z}\\&+ \frac{1}{d_y} - \frac{1}{d_y} \inner{\frac{\gamma(y)-z}{|\gamma(y)-z|}}{T^S_z}^2+\kappa^S_z\inner{\frac{\gamma(y)-z}{|\gamma(y)-z|}}{N^S_z} \\ ={}& \left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos^2\theta + 2\kappa^S_z\cos\theta\,, \end{align*} \[ \begin{split} \partial_z \partial_x \bar{Z}={}& -\frac{1}{d_x} \inner{T^S_z}{T_x} + \frac{1}{d_x}\inner{ \frac{\gamma(x)-z}{|\gamma(x)-z|}}{T_x} \inner{ \frac{\gamma(x)-z}{|\gamma(x)-z|}}{T^S_z} \\ ={}& \frac{1}{d_x}\left(-\cos(\beta-\theta) + \sin \beta \sin \theta\right)\\ ={}&-\frac{1}{d_x}\cos\beta\cos \theta\,, \end{split} \] and \[ \begin{split} \partial_z \partial_y \bar{Z}={}& -\frac{1}{d_y} \inner{T^S_z}{T_y} + \frac{1}{d_y}\inner{\frac{\gamma(y)-z}{|\gamma(y)-z|}}{T_y} \inner{\frac{\gamma(y)-z}{|\gamma(y)-z|}}{T^S_z}\\ = {}&\frac{1}{d_y}\left(-\cos(\beta+\theta) - \sin\beta\sin \theta\right)\\ ={}& -\frac{1}{d_y}\cos\beta\cos\theta\,.
\end{split} \] The second derivative test then gives \[ 0\le\partial_z^2 \bar{Z} = \left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos^2\theta + 2\kappa^S_z\cos\theta \] and, for any $c$, \begin{align}\label{eq:second variation Z} \qquad\qquad 0 \leq{}& (\partial_x + \partial_y + c \partial_z)^2 \bar{Z}\nonumber\\ ={}& -\kappa_x \cos \beta - \kappa_y \cos\beta + \frac{1}{d_x} \cos^2 \beta + \frac{1}{d_y} \cos^2 \beta - 4\frac{\varphi''}{\boldsymbol{L}}\nonumber\\ & -2c\frac{1}{d_x}\cos \theta \cos \beta - 2c\frac{1}{d_y}\cos\theta \cos \beta + c^2 \cos\theta\left(\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z\right)\\ ={}& -\kappa_x \cos \beta - \kappa_y \cos\beta + \left(\frac{1}{d_x} + \frac{1}{d_y}\right)(1-\varphi'^2) - 4 \frac{\varphi''}{\boldsymbol{L}}\nonumber\\ {}&+\cos\theta \left( c^2 \left(\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z\right) - 2c\left(\frac{1}{d_x} + \frac{1}{d_y}\right)\cos\beta\right). \nonumber \end{align} We may actually now conclude the strict inequality \[ \left(\frac{1}{d_x} + \frac{1}{d_y}\right)\cos\theta+2\kappa^S_z<0\,. \] Indeed, note first that $\cos\theta<0$: since $\Gamma$ is properly embedded, $\gamma(x)$ lies in the interior of the convex domain $\Omega$, while $N^S_z$ is the outer normal at $z\in\partial\Omega$, so $\cos\theta=\inner{\frac{\gamma(x)-z}{|\gamma(x)-z|}}{N^S_z}<0$. The inequality $0\le\partial_z^2\bar{Z}$ above therefore already gives $\left(\frac{1}{d_x} + \frac{1}{d_y}\right)\cos\theta+2\kappa^S_z\le0$. If this quantity were to vanish, then the right hand side of \eqref{eq:second variation Z} would be linear in $c$. Since this linear function would be bounded from below, the coefficient of $c$ would also have to vanish; as $\cos\theta\neq0$, this forces $\cos\beta=0$, hence $|\varphi'| = |{\sin\beta}| =1$. By the assumptions on $\varphi$, this is only possible if $\tilde{\ell}(x,y)=0$, which in turn can only hold if $x,y$ are the same endpoint, which is not the case by hypothesis. Thus $\left(\frac{1}{d_x} + \frac{1}{d_y}\right)\cos\theta+2\kappa^S_z$ is strictly negative, as claimed. We now take the optimal value for $c$, which is \[ c=\frac{\left(\frac{1}{d_x} + \frac{1}{d_y}\right)\cos\beta}{\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z}\,. \] This yields \[ \begin{split} 0 \leq {}& (\partial_x + \partial_y + c \partial_z)^2 \bar{Z} \\ = {}&-\kappa_x \cos \beta - \kappa_y \cos\beta + \left(\frac{1}{d_x} + \frac{1}{d_y}\right)(1-\varphi'^2) - 4 \frac{\varphi''}{\boldsymbol{L}}\\ & -\frac{\cos\theta}{\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z}\left(\frac{1}{d_x} + \frac{1}{d_y}\right)^2 \cos^2 \beta \,. \end{split} \] We recall that $\cos^2 \beta = 1- \sin^2\beta = 1-\varphi'^2$, which completes the proof. \end{proof} \subsection{Completed profile} Here we consider the completed two-point function $\boldsymbol Z$, which controls the completed chord-arc profile. We use the glued function to ensure that the first derivatives vanish, even at a `boundary' minimum. Recall that we write $\boldsymbol{x}=(x, \sign(\boldsymbol{x}))$ for elements of $\boldsymbol{M} = (M\sqcup M)/\partial M$. Also note that $\boldsymbol{Z}$ has the symmetry $\boldsymbol{Z}(\boldsymbol{x}, \boldsymbol{y}) = \boldsymbol{Z}(-\boldsymbol{x}, -\boldsymbol{y})$, where $-\boldsymbol{x} = (x, -\sign(\boldsymbol{x}))$. \begin{lemma} \label{lem:C1-endpoints} If $0=\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y}) = \min_{\boldsymbol M\times \boldsymbol M}\boldsymbol{Z}$ with $\boldsymbol{x}\in \partial M$, then \[0=Z(x,y) = \min_{M\times M} Z,\] and, moreover, $\partial_x Z|_{x,y} = \partial_y Z|_{x,y}=0$. \end{lemma} \begin{proof} By reparametrising, we may assume without loss of generality that $M=[0, L]$, $x=0$ and $\partial_x \ell (x,y)=-1$, $\partial_y\tilde{\ell}(0,y)=1$.
As $\boldsymbol{x}\in \partial M$, we have $\boldsymbol{x} = -\boldsymbol{x}$ in $\boldsymbol{M}$; by the symmetry mentioned above we have $ 0= \boldsymbol{Z}(\boldsymbol{x}, \boldsymbol{y}) = \boldsymbol{Z}(\boldsymbol{x}, - \boldsymbol{y})$. In particular, $0=\boldsymbol{Z}(\boldsymbol{x},(y,+))=Z(0,y) = \min_{M\times M}Z$. We will first show that $\partial_x Z|_{0,y}=0$. Since $Z$ is smooth, the first derivative test gives \begin{equation} \label{eq:endpt-Z} 0 \leq \partial_x Z(0 ,y) = \sin \alpha_x + \varphi'. \end{equation} On the other hand, we also have $0=\boldsymbol{Z}(\boldsymbol{x},(y,-)) = \tilde{Z}(0,y) = \min_{M\times M}\tilde{Z}(\cdot,\cdot)$. Take a unit speed parametrisation $\zeta$ of $S$ so that $\zeta(0)=\gamma(0)=:z_0$ and $\zeta'(0) = T^S_{z_0}=-N_{z_0}$ (for the last equality we have used the orthogonal contact). Then for any $c\in \mathbb{R}$ and $s\geq 0$, we must have $0\leq \bar{Z}( s, y, \zeta(cs))$, with equality at $s=0$. Taking the difference quotients directly gives \[ \begin{split} 0&\leq \lim_{s\to 0^+} \frac{\bar{Z}(s,y, \zeta(cs)) - \bar{Z}(0,y, z_0)}{s} \\&= \lim_{s\to 0^+} \frac{ d(\gamma(s), \zeta(cs))+ d( \zeta(cs),\gamma(y)) -d( z_0, \gamma(y))}{s} - \varphi' \\&= \lim_{s\to 0^+} \frac{ d(\gamma(s), \zeta(cs))}{s} -c\, (\partial^x_{N_{z_0}} d)|_{z_0, \gamma(y)} -\varphi' \\&= \lim_{s\to 0^+} \sqrt{1+c^2} -c \cos\alpha_x -\varphi'\,. \end{split} \] Choosing $c=- \cot\alpha_x$, we obtain (note that $\sin\alpha_x \leq 0$) \[ 0\leq -\sin\alpha_x - \varphi'\,. \] Combining this with (\ref{eq:endpt-Z}), we find that indeed \[\partial_x Z(0,y) = \sin \alpha_x + \varphi'=0\] as desired. If $\boldsymbol{y}$ is also in $\partial M$, then the same argument shows that $\partial_y Z|_{x,y} =0$. On the other hand, if $\boldsymbol{y}\notin \partial M$, then $y\in \mathring{M}$ is an interior point, and the first derivative test for $Z$ yields $\partial_y Z|_{x,y}=0$. \end{proof} Lemma \ref{lem:C1-endpoints} ensures that the first derivatives vanish if a minimum occurs at an endpoint. Morally, this works because the reflected profile glues with the vanilla chord-arc profile in an essentially $C^1$ manner (as emphasized in Remark \ref{rmk:gluing}). We briefly list the remaining possibilities for ``interior'' minima: \begin{lemma} Suppose that $0=\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y}) = \min_{\boldsymbol M\times \boldsymbol M}\boldsymbol{Z}$, where $\boldsymbol{x}\ne \boldsymbol{y}$ and $\boldsymbol{x},\boldsymbol{y}\notin\partial M$. We may arrange that either: \begin{enumerate}[(a)] \item $\sign(\boldsymbol{x}) = \sign(\boldsymbol{y})$, $0=Z(x,y) =\min_{M\times M}Z$ and $\partial_x Z|_{x,y} = \partial_y Z|_{x,y}=0$; or \item $\sign(\boldsymbol{x}) \neq \sign(\boldsymbol{y})$, and $0=\tilde{Z}(x,y) = \min_{M\times M}\tilde{Z}$. \end{enumerate} \end{lemma} Note that in the first case, the vanishing derivatives follow from the first derivative test as $Z$ is smooth ($x\neq y$). Combining these lemmata with the second derivative tests earlier in this section yields the following dichotomy.
\begin{proposition} \label{prop:spatial-minimum} If $0=\min_{\boldsymbol M\times \boldsymbol M}\boldsymbol{Z}=\boldsymbol Z(\boldsymbol{x}_0,\boldsymbol{y}_0)$ for some $(\boldsymbol{x}_0,\boldsymbol{y}_0)\in(\boldsymbol M\times \boldsymbol M)\setminus\boldsymbol D$, then there exists $(\boldsymbol{x},\boldsymbol{y})\in (\boldsymbol M\times\boldsymbol M)\setminus\boldsymbol D$ such that $\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y})=0$ and either: \begin{enumerate}[(a)] \item $\sign(\boldsymbol{x}) = \sign(\boldsymbol{y})$, $\alpha_x+ \alpha_y=\pi$, and \[ 0\leq - 4\frac{\varphi''}{\boldsymbol{L}} - \kappa_x\cos\alpha_x + \kappa_y\cos\alpha_y; \] or \item $\sign(\boldsymbol{x}) \neq \sign(\boldsymbol{y})$, $x,y\in\mathring{M}$ and for any $z\in S$ such that \[\tilde{d}(\gamma(x),\gamma(y)) =d_x+d_y, \qquad d_x= d(\gamma(x),z) , d_y=d(\gamma(y),z),\] we have \[ 0\leq -\kappa_x \cos \beta - \kappa_{y} \cos \beta + \left(\frac{1}{d_x} + \frac{1}{d_y}\right)\frac{2\kappa^S_z}{\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z}(1-\varphi'^2) - 4 \frac{\varphi''}{\boldsymbol{L}} ,\] where $\theta = \theta_{x} = - \theta_{y}$, $\beta=\beta_x=\beta_{y}$ and \[\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z <0\,.\] \end{enumerate} \end{proposition} \section{Evolution and lower bounds for the chord-arc profile} \subsection{Evolution of the chord-arc profile}\label{sec:evolution} We now consider a free boundary curve shortening flow $\{\Gamma_t\}_{t\in [0,T)}$ with parametrisation $\gamma:M\times[0,T)\to \Omega$ and a smooth function $\varphi:[0,1]\times[0,T)\to\ensuremath{\mathbb{R}}$ satisfying the following conditions at every time $t$. (Here primes indicate spatial derivatives.) \begin{enumerate}[(i)] \item $\varphi(1-\zeta, t) = \varphi(\zeta,t)$ for all $\zeta\in[0,1]$. \item $|\varphi'(\cdot, t)|<1$. \item $\varphi(\cdot,t)$ is strictly concave. \end{enumerate} Denote by $\boldsymbol{d}(\cdot,\cdot,t)$, $\boldsymbol\ell(\cdot,\cdot,t)$, and $\boldsymbol{L}(t)$ the (completed) chordlength, (completed) arclength and length of the timeslice $\Gamma_t$. We consider the time-dependent auxiliary functions \[ Z(x,y,t) = d(\gamma(x,t),\gamma(y,t)) - \boldsymbol{L}(t) \varphi\left(\frac{\ell(x,y,t)}{\boldsymbol{L}(t)},t\right), \] \[ \tilde{Z}(x,y,t) = \tilde{d}(\gamma(x,t),\gamma(y,t),t) - \boldsymbol{L}(t) \varphi\left(\frac{\tilde{\ell}(x,y,t)}{\boldsymbol{L}(t)},t\right), \] \[ \boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y},t) \doteqdot \boldsymbol{d}(\boldsymbol{x},\boldsymbol{y},t)-\boldsymbol{L}(t)\varphi\left(\frac{\boldsymbol{\ell}(\boldsymbol{x},\boldsymbol{y},t)}{\boldsymbol{L}(t)},t\right), \] the first two on $M\times M$ and the third on $\boldsymbol{M}\times\boldsymbol{M}$, and \[\bar{Z}(x,y,z,t) = d(\gamma(x,t) ,z) + d(\gamma(y,t) ,z) - \boldsymbol{L}(t) \varphi\left(\frac{\tilde{\ell}(x,y,t )}{\boldsymbol{L}(t)},t\right)\] on $M\times M\times S$. Denote by $[\boldsymbol{x}:\boldsymbol{y}]$ the shorter portion of $\boldsymbol{M}\setminus \{\boldsymbol{x},\boldsymbol{y}\}$. \begin{proposition}\label{prop:comparison equation} Suppose that $\boldsymbol Z(\cdot,\cdot,0)\ge 0$ with strict inequality away from the diagonal. Further suppose that $t_0\doteqdot \sup\{t\in [0,T):\boldsymbol Z(\cdot,\cdot,t)\ge 0\}<T$.
Then there exist $\boldsymbol{x},\boldsymbol{y}\in (\boldsymbol M\times\boldsymbol M)\setminus \boldsymbol D$ such that $\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y},t_0)=0$ and either: \begin{enumerate}[(a)] \item $\sign(\boldsymbol{x}) = \sign(\boldsymbol{y})$, $\alpha_{x}+ \alpha_{y}=\pi$, and \begin{equation}\label{eq:case-a} 0 \geq 4\frac{\varphi''}{\boldsymbol{L}} +2 \left( \varphi - \varphi' \frac{\boldsymbol\ell}{\boldsymbol{L}}\right)\int_{\Gamma_{t}} \kappa^2 ds + \varphi' \int_{[x:y]}\kappa^2 ds- \boldsymbol{L} \partial_t \varphi \end{equation} or \item $\sign(\boldsymbol{x}) \neq \sign(\boldsymbol{y})$, $x,y\in\mathring{M}$, $\beta_{x}=\beta_{y}$, and for any $z\in S$ such that \[\tilde{d}(\gamma(x),\gamma(y)) = d(\gamma(x),z) + d(\gamma(y),z) = d_x+d_y,\] we have \begin{equation}\label{eq:case-b} \begin{split} 0 \geq {}& 4\frac{\varphi''}{\boldsymbol{L}}+2 \left( \varphi - \varphi' \frac{\boldsymbol\ell}{\boldsymbol{L}}\right)\int_{\Gamma_{t_0}} \kappa^2 ds + \varphi' \int_{[\boldsymbol{x}:\boldsymbol{y}]}\kappa^2 ds- \boldsymbol{L} \partial_t \varphi\\ {}&-\left(\frac{1}{d_x} + \frac{1}{d_y}\right)\frac{2\kappa^S_z}{\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z}(1-\varphi'^2) \,, \end{split} \end{equation} where $\theta = \theta_{x} = - \theta_{y}$ and \[\left(\frac{1}{d_x} + \frac{1}{d_y}\right) \cos\theta + 2\kappa^S_z <0.\] \end{enumerate} \end{proposition} \begin{proof} We will simply write $\Gamma =\Gamma_{t_0}$, etc., and we may reparametrise so that $\gamma=\gamma(\cdot,t_0)$ has unit speed. First observe that \[ \lim_{s\to 0^+}\partial_x\boldsymbol Z|_{(\xi+s,\xi,\cdot)}=1-\varphi'(0,\cdot)>0 \;\;\text{and}\;\; \lim_{s\to 0^-}\partial_x\boldsymbol Z|_{(\xi+s,\xi,\cdot)}=-1+\varphi'(0,\cdot)<0 \] and, similarly, \[ \lim_{s\to 0^+}\partial_y\boldsymbol Z|_{(\eta,\eta+s,\cdot)}=1-\varphi'(0,\cdot)>0 \;\;\text{and}\;\; \lim_{s\to 0^-}\partial_y\boldsymbol Z|_{(\eta,\eta+s,\cdot)}=-1+\varphi'(0,\cdot)<0\,. \] This ensures that the diagonal is a \emph{strict} local minimum for $Z$. In fact, due to compactness of $[0,t_0]$, we are guaranteed the existence of a neighbourhood $\boldsymbol U$ of the diagonal $\boldsymbol D$ such that $\boldsymbol Z|_{(\boldsymbol U\setminus \boldsymbol D)\times[0,t_0]}>0$. So there must indeed exist an \emph{off-diagonal} pair $(\boldsymbol{x},\boldsymbol{y}) \in (\boldsymbol{M}\times \boldsymbol{M})\setminus\boldsymbol D$ attaining a zero minimum for\footnote{The same conclusion can be reached by analyzing the \emph{second} derivatives of $\boldsymbol Z$ in case $\varphi'(0,t)\equiv 1$, but we will in any case eventually choose $\varphi$ to satisfy the strict inequality $\varphi'(0,t)<1$.} $\boldsymbol Z(\cdot,\cdot,t_0)$. Proposition \ref{prop:spatial-minimum} now reduces to the following two cases depending on the location of the spatial minimum. 
\textbf{Case (a):} $\sign(\boldsymbol{x}) = \sign(\boldsymbol{y})$, so that $0\leq \boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y},t) = Z(x,y,t) $, with equality at $t_0$, and hence \begin{align*} 0 \geq \partial_t Z(x,y,t_0) ={}& -\kappa_x \inner{\frac{\gamma(x)-\gamma(y)}{|\gamma(x)-\gamma(y)|}}{N_x} -\kappa_y \inner{\frac{\gamma(y)-\gamma(x)}{|\gamma(x)-\gamma(y)|}}{N_y}\\ & - \varphi \partial_t \boldsymbol{L} - \varphi' \partial_t\ell + \frac{\ell}{\boldsymbol{L}} \varphi' \partial_t \boldsymbol{L} - \boldsymbol{L} \partial_t \varphi \\ ={}& -\kappa_x \cos \alpha_x + \kappa_y \cos \alpha_y - \varphi \partial_t \boldsymbol{L} - \varphi' \partial_t\ell + \frac{\ell}{\boldsymbol{L}} \varphi' \partial_t \boldsymbol{L} - \boldsymbol{L} \partial_t \varphi\,. \end{align*} Note that \[ \partial_t \boldsymbol{L} = -2 \int_{\Gamma_t} \kappa^2 ds\, \;\;\text{and}\;\; \partial_t \ell = -\int_{[x:y]} \kappa^2 ds\,, \] where $[x:y]$ is the interval between $x$ and $y$. Applying the spatial minimum condition of Proposition \ref{prop:spatial-minimum} now yields \eqref{eq:case-a}. \textbf{Case (b):} $\sign(\boldsymbol{x}) \ne \sign(\boldsymbol{y})$, so that $0=\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y},t_0) = \tilde{Z}(x,y,t_0) = \bar{Z}(x,y,z,t_0)$ for some $z\in S$, and hence \begin{align*} 0 \geq \partial_t \bar{Z}(x,y,z,t_0) ={}& -\kappa_x \inner{\frac{\gamma(x)-z}{|\gamma(x)-z|}}{N_x} -\kappa_{y} \inner{\frac{\gamma(y)-z}{|\gamma(y)-z|}}{N_{y}} \\ & - \varphi \partial_t \boldsymbol{L} - \varphi' \partial_t \tilde{\ell} + \frac{\tilde{\ell}}{\boldsymbol{L}} \varphi' \partial_t \boldsymbol{L} - \boldsymbol{L} \partial_t \varphi \\ ={}& -\kappa_x \cos \beta_{x} - \kappa_{y} \cos \beta_{y} - \varphi \partial_t \boldsymbol{L} - \varphi' \partial_t\tilde{\ell} + \frac{\tilde{\ell}}{\boldsymbol{L}} \varphi' \partial_t \boldsymbol{L} - \boldsymbol{L} \partial_t \varphi. \end{align*} Now note that $\partial_t \tilde{\ell} = -\int_{[\boldsymbol{x}:\boldsymbol{y}]} \kappa^2 ds$, where $[\boldsymbol{x}:\boldsymbol{y}]$ is the shorter portion of $\boldsymbol{M}\setminus \{\boldsymbol{x},\boldsymbol{y}\}$. Applying the spatial minimum condition of Proposition \ref{prop:spatial-minimum} now yields \eqref{eq:case-b} \end{proof} \subsection{Lower bounds for the chord-arc profile} Note that the length is monotone non-increasing under free boundary curve shortening flow, and hence attains a limit as $t\to T$. In order to (crudely) estimate the curvature integrals in Proposition \ref{prop:comparison equation}, we will make use of the following lemma. \begin{lemma}\label{lem:annoying angle lemma} Let $\epsilon\in(0,\frac{\pi}{2})$. Suppose $\Gamma$ is a curve in $\Omega$ which meets $S=\partial\Omega$ orthogonally at $\partial\Gamma=\{z_0,z_1\}$. Denote by $L$ the length of $\Gamma$, and $C= \sup_{S\cap B_{3L}(\partial\Gamma)} \kappa^S$. If $L (1+C) \leq \frac{\epsilon}{100}$, then $|\measuredangle(N^S_{z_0}, N^S_{z_1})| \leq \frac{\epsilon}{2}$. Moreover, if $x,y\in \Gamma$, then any $z\in S$ realising $\tilde{d}(x,y) = d(x,z) + d(y,z)$ must satisfy $|\measuredangle(N^S_z, N^S_{z_i})| \leq \frac{\epsilon}{2}$ for each $i=0,1$. \end{lemma} \begin{proof} To prove the first claim, we first estimate \[|\measuredangle(N^S_{z_0}, N^S_{z_1})| = \int_{[z_0:z_1]} \kappa^S ds \leq \ell_S(z_0, z_1) C,\] where $[z_0:z_1]$ denotes the portion of $S$ between $z_0$ and $z_1$. 
On the other hand, by \cite[Lemma 3.5]{EGF}, \[ \ell_S(z_0,z_1) \leq \frac{2}{C} \sin^{-1}\left(\frac{C}{2} d(z_0,z_1)\right) \leq \frac{2}{C} \sin^{-1}\left(\frac{\epsilon}{200}\right) \leq\frac{\epsilon}{2C}\,.\] For the second claim, let $B$ be the ball of radius $3L$ about $z_0$, so that the $2L$-neighbourhood of $\Gamma$ is contained in $B$; in particular, any $z\in S$ realising $\tilde{d}(x,y) = d(x,z) + d(y,z)$ must also lie in $B$. Then as above we have \[ |\measuredangle(N^S_z, N^S_{z_i})| \leq \ell_S(z,z_i) C\] and \[ \ell_S(z,z_i) \leq \frac{2}{C} \sin^{-1}\left(\frac{C}{2} d(z,z_i)\right) \leq \frac{2}{C} \sin^{-1}\left(\frac{6\epsilon}{50}\right) \leq\frac{\epsilon}{2C}\,.\qedhere\] \end{proof} The following theorem provides a uniform lower bound for the chord-arc profile $\boldsymbol\psi(\cdot,t)$ of $\Gamma_t$ so long as $L(t)\to 0$ as $t\to T$. Recall that the evolution of any compact curve in a convex domain will always remain in some compact set. \begin{theorem} Given any $\epsilon \in (0, \frac{\pi}{2})$ there exists $c_\epsilon$ such that the following holds. Let $\{\Gamma_t\}_{t\in[0,T)}$ be a compact free boundary curve shortening flow in a convex domain $\Omega$ which remains in the compact set $K$. Suppose that $L(0)(1+C) \leq \frac{\epsilon}{100}$, where $C=\sup_{S\cap K} \kappa^S$, $S=\partial\Omega$. Given any $c_0\in(0,c_\epsilon)$, if the inequality \[ \boldsymbol{\psi}(\delta,t) \geq c_0\boldsymbol L(t)\left(\sin\left((\pi-\epsilon)\frac{\delta}{\boldsymbol L(t)}+ \frac{\epsilon}{2}\right)-\sin\left(\frac{\epsilon}{2}\right)\right) \] holds at $t=0$, then it holds for all $t\in [0,T)$. \end{theorem} \begin{proof} Observe that the function $\varphi\in C^\infty([0,1])$ defined by \[ \varphi(\zeta)\doteqdot c_0\left(\sin\left((\pi-\epsilon)\zeta+ \frac{\epsilon}{2}\right)-\sin\left(\frac{\epsilon}{2}\right)\right) \] is symmetric about $\zeta=\frac{1}{2}$ and strictly concave, and has subunital gradient for sufficiently small $c_0$. So it is an admissible comparison function. Forming the auxiliary function $\boldsymbol{Z}$ as in \eqref{eq:bs Z} with this choice of $\varphi$, we have $\boldsymbol{Z}\geq0$ at $t=0$ by supposition, and we will show that $\boldsymbol{Z}\geq 0$ for all $t>0$. Indeed, if, to the contrary, $t_0 \doteqdot \sup\{t\in[0,T):\boldsymbol{Z}(\cdot,\cdot,t)\ge 0\}<T$, then we may apply Proposition \ref{prop:comparison equation}; we will first work generally to simplify the conditions at the minimum. We remark that these conditions are not, in general, sharp. Let $\boldsymbol{x},\boldsymbol{y}$ be as given by Proposition \ref{prop:comparison equation}, and define \[ \Theta \doteqdot \frac{1}{2} \left( \boldsymbol{L} \int_{\Gamma_t} \kappa^2 ds \right)^\frac{1}{2}\;\;\text{and}\;\;\boldsymbol{\omega} \doteqdot \frac{1}{2} \left( \boldsymbol{\ell} \int_{[\boldsymbol{x}:\boldsymbol{y}]} \kappa^2 ds \right)^\frac{1}{2}\,. \] Applying the Cauchy--Schwarz inequality yields \[ 2\Theta \geq \sqrt{2} \int_{\Gamma_t} |\kappa|ds. \] On the other hand, by the free boundary condition and the theorem of turning tangents, we have \[ \cos\left(\int_{\Gamma_t} \kappa \,ds\right) = \cos \measuredangle(T_{z_1}, T_{z_0})= \cos \measuredangle(N^S_{z_1} , -N^S_{z_0}) = \cos\left( \pi +\measuredangle(N^S_{z_1},N^S_{z_0})\right),\] where $z_i$ are the endpoints of $\Gamma_t$. 
By Lemma \ref{lem:annoying angle lemma}, we may estimate $|\measuredangle(N^S_{z_1}, N^S_{z_0})| <\epsilon$, hence \[(2\Theta)^2 \geq 2(\pi-\epsilon)^2.\] We may similarly estimate \[ 2\boldsymbol{\omega} \geq \int_{[\boldsymbol{x}:\boldsymbol{y}]} |\kappa|ds\,. \] If $\sign(\boldsymbol{x})=\sign(\boldsymbol{y})$, then $\int_{[\boldsymbol{x}:\boldsymbol{y}]} \kappa\,ds$ measures the turning angle from $T_x$ to $T_y$; in particular \[ \cos\left(\frac{1}{2}\int_{[\boldsymbol{x}:\boldsymbol{y}]} \kappa\,ds\right) = \cos \frac{\measuredangle(T_x,T_y)}{2} = \cos\left(\frac{\alpha_x - \alpha_y}{2}\right) = \cos\left(\alpha_x -\frac{\pi}{2}\right) = \varphi'. \] Here we have used that $\alpha_x+\alpha_y=\pi$ and $\varphi' = -\sin\alpha_x$. Thus in case (a) we may estimate \[ (2\boldsymbol{\omega})^2 \geq 2 (\cos^{-1}\varphi')^2.\] If $\sign(\boldsymbol{x})\neq \sign(\boldsymbol{y})$, then again by reversing the parametrisation if necessary, we may assume without loss of generality that $[\boldsymbol{x}:\boldsymbol{y}]$ is precisely the interval $[y,x]$. In particular, $\int_{[\boldsymbol{x}:\boldsymbol{y}]} \kappa$ gives the turning angle from $T_x$ to $T_{z_0}$ plus the turning angle from $T_{y}$ to $T_{z_0}$, where $z_0 = \gamma(0)$. But now \[ \begin{split} \measuredangle(T_x, T_{z_0}) &= \measuredangle(T_x, T^S_{z_0})- \frac{\pi}{2}\\ & = \measuredangle(T_x, T^S_z) +\measuredangle(T^S_z, T^S_{z_0}) - \frac{\pi}{2} \\&= \measuredangle(T_x, T^S_z) +\measuredangle(N^S_z, N^S_{z_0}) - \frac{\pi}{2} \\ & = \beta_{x} - \theta_{x} + \measuredangle(N^S_z, N^S_{z_0}) - \frac{\pi}{2} . \end{split} \] Arguing similarly for $T_{y}$, we estimate \[\begin{split} \cos\left(\frac{1}{2}\int_{[\boldsymbol{x}:\boldsymbol{y}]} |\kappa|\,ds\right) &\geq \cos\left( \frac{1}{2} \int_{[0,x]} \kappa\,ds + \frac{1}{2}\int_{[0,y]} \kappa\,ds \right) \\& = \cos\left(\frac{1}{2}(\measuredangle(T_x, T_{z_0}) + \measuredangle(T_{y}, T_{z_0}) )\right) \\& = \cos\left(\frac{1}{2}(\beta_{x} - \theta -\frac{\pi}{2} + \beta_{y} + \theta -\frac{\pi}{2}) + \measuredangle(N^S_z, N^S_{z_0})\right) \\& = \cos\left(\frac{1}{2}(\beta_{x} + \beta_{y}-\pi)+ \measuredangle(N^S_z, N^S_{z_0})\right) \\&= \cos\left(\frac{\pi}{2} -\beta - \measuredangle(N^S_z, N^S_{z_0})\right)\,. \end{split}\] Here we have used $\beta_{x} = \beta_{y}=\beta$. By Lemma \ref{lem:annoying angle lemma}, we may estimate $|\measuredangle(N^S_z, N^S_{z_0})| \leq \frac{\epsilon}{2}$. Recall also that $\sin\beta=\varphi'$. Note that our choice of $\varphi$ satisfies $|\varphi'| \leq c_0(\pi-\varepsilon)$, so for sufficiently small $c_0$ we have $|\beta| < \frac{\pi}{2}-\epsilon$. In particular, $\cos^{-1}\varphi' = \frac{\pi}{2}-\beta>\epsilon$. It follows that in case (b) we may estimate \[(2\boldsymbol{\omega})^2 \geq 2\left(\cos^{-1}\varphi' -\frac{\epsilon}{2}\right)^2.\] Since by Lemma \ref{lem:annoying angle lemma}, we also have $|\measuredangle(N^S_{z_0}, N^S_{z_1})| \leq \frac{\epsilon}{2}$, we have shown that in either case, \begin{equation}\label{eq:comparison-final} \boldsymbol{L}^2 \partial_t\varphi \geq 4\varphi'' + 4 \left( \varphi - \varphi' \frac{\boldsymbol\ell}{\boldsymbol{L}}\right) \left(\pi- \epsilon\right)^2 +4 \frac{\boldsymbol{L}}{\boldsymbol{\ell}} \varphi'\left(\cos^{-1}\varphi'-\frac{\epsilon}{2}\right)^2\,. 
\end{equation} Finally, we note that our explicit choice of $\varphi$ satisfies \[ \partial_t\varphi=0\,,\] \[\varphi'' + (\pi-\epsilon)^2\varphi = (\pi-\epsilon)^2 \sin\frac{\epsilon}{2}\ge 0\,,\] and \[\left(\cos^{-1}\varphi' -\frac{\epsilon}{2}\right)^2 - (\pi-\epsilon)^2 \zeta^2\ge 0\] so long as $c_0\le (\pi-\epsilon)^{-1}$. Together, these contradict (\ref{eq:comparison-final}), and we conclude that indeed $\boldsymbol{Z}\geq0$ for all $t>0$, which implies the result. \end{proof} Note that if $L(t)\to 0$ as $t\to T$, then the condition $\left(1+\sup_{S\cap K} \kappa^S\right) L(0)\leq \frac{\epsilon}{100}$ is eventually satisfied. So, provided $L(t)\to 0$ as $t\to T$, we can always find some $c_0>0$ and $t_0\in [0,T)$ such that the theorem applies to $\{\Gamma_t\}_{t\in[t_0,T)}$. On the other hand, if $L(t)\not\to0$, then we may apply the following (very) crude bound. \begin{theorem} Let $\{\Gamma_t\}_{t\in[0,T)}$ be a free boundary curve shortening flow with $\boldsymbol{\psi}(\delta,0) \geq c_0\boldsymbol{L}(0)\sin\left(\frac{\pi\delta}{\boldsymbol{L}(0)}\right)$ and $T<\infty$. If $\boldsymbol{L}_T \doteqdot \lim_{t\to T}\boldsymbol{L}(t)>0$, then \[ \boldsymbol{\psi}(\delta,t) \geq c_0\boldsymbol{L}(t)e^{-\frac{4\pi^2T}{\boldsymbol{L}_T^2}} \sin\left(\frac{\pi\delta}{\boldsymbol{L}(t)}\right) \] for all $t\in[0,T)$. \end{theorem} \begin{proof} We introduce the modified time coordinate $\tau \doteqdot \int_0^t \frac{dt'}{\boldsymbol{L}^2(t')}$ and take $\varphi(\zeta,\tau) \doteqdot c_0 e^{-4\pi^2 \tau} \sin(\pi \zeta)$. Then $\boldsymbol{Z}\geq0$ at $t=0$ by supposition, and we will show that $\boldsymbol{Z}\geq 0$ for all $t>0$. As above, suppose to the contrary that $\boldsymbol{Z} <0$ at some positive time, so that $t_0 \doteqdot \sup \{t: \boldsymbol{Z}(\cdot,\cdot,t)\ge 0\}\in(0,T)$ and we can find $(\boldsymbol{x},\boldsymbol{y})\in (\boldsymbol M\times \boldsymbol M)\setminus\boldsymbol D$ such that $\boldsymbol{Z}\geq 0$ for $t\in [0,t_0]$ and $\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y},t_0)=\min \boldsymbol{Z}(\cdot,\cdot,t_0)=0$. Noting that $\partial_\tau \varphi = \boldsymbol{L}^2 \partial_t \varphi$, and that (unless $\partial\Omega$ is flat) all the terms in Proposition \ref{prop:comparison equation} involving spatial derivatives of $\varphi$ are strictly positive (except $\varphi''$), we find that \[ 0 < \partial_\tau\varphi - 4\varphi''=0\,, \] which is absurd. We conclude that $\boldsymbol{Z}\geq 0$ for all $t\in [0,T)$. The claim follows since $\tau \leq \frac{1}{\boldsymbol{L}_T^2}T$. \end{proof} \subsection{Boundary avoidance} The chord-arc bound immediately yields the following ``quantitative boundary avoidance'' estimate. Given a curve $\gamma: M\to \Omega$, we shall denote by $\lambda:M\to \mathbb{R}$ the distance to the nearest endpoint; if $\gamma$ is parametrised by arclength, then $\lambda(x) = \min\{x, L-x\}$. \begin{proposition}\label{prop:boundary-avoidance} Let $\{\Gamma_t\}_{t\in[0,T)}$ be a compact free boundary curve shortening flow in a convex domain $\Omega$. Given any $\delta>0$, there exists $\varepsilon= \varepsilon(\Gamma_0, \Omega,\delta)>0$ such that \[ \lambda(x,t)>\delta\;\;\implies\;\; d(\gamma(x,t),\partial\Omega)>\varepsilon\,.
\] \end{proposition} \begin{proof} The chord-arc estimate yields $\boldsymbol{d}/\boldsymbol{\ell} >c>0$; in particular \[ d(\gamma(x,t),\partial\Omega)=\frac{1}{2}\boldsymbol d(x,-x,t)\ge \frac{c}{2}\boldsymbol\ell(x,-x,t)=c\lambda(x,t)\,.\qedhere \] \end{proof} \begin{comment} \begin{proof} Fix a smooth boundary-defining function $b:\mathbb{R}^2 \to \mathbb{R}$ such that $b|_{\partial \Omega}=0$, $|Db| \leq 1$, and $\partial_{N^S} b =-1$. Recall that $N^S$ is the outer normal. Since $S=\partial \Omega$ is convex, we may take $b$ to be concave. Given $\delta>0$, fix a cutoff function $\eta$ which satisfies $\eta(s)=0$ for $s>2\delta$, $\eta(0)=\delta$, and $\eta'(0)=-\frac{3}{4}$. We compute the variation of $\eta(\lambda)$ (which we conflate with $\eta$). Assume that $\gamma$ is parametrised by arclength and (without loss of generality) that $x\leq \frac{L(t)}{2}$, so that $\partial_x \lambda=1$ and $\partial_x \lambda = -\int_{[0,x]} \kappa^2$, and hence \[(\partial_t -\partial_x^2) \eta = -\eta'\int_{[0,x]} \kappa^2 -\eta''.\] On the other hand, computing the evolution of $b$, we have $\partial_t b= - \kappa_x \langle Db, N_x\rangle$, $\partial_x b = \langle Db, T_x\rangle$ and $\partial_x^2 b= D^2b(T_x,T_x) - \kappa_x\langle Db, N_x\rangle$. Therefore, \[ (\partial_t -\partial_x^2)b = -D^2b(T_x,T_x) \] and hence \[ (\partial_t - \partial_x^2)(b+\eta\circ\lambda) \geq -\eta''. \] Suppose that $\lambda(x)=0$, so that $x$ is an endpoint. Again without loss of generality, $x=0$, and so $\partial_x(b+\eta\circ \lambda) =-\langle Db, N^S\rangle +\eta'(0)= 1-\frac{3}{4}>0$. In particular, $\min_x (b+\eta\circ\lambda)$ is not achieved at either endpoint, and the previous inequality shows that it is increasing in $t$. We conclude that \[ \min_{\Gamma_t}(b+\eta\circ\lambda)\ge \min_{\Gamma_0}(b+\eta\circ\lambda)\doteqdot \varepsilon>0 \] for all $t\in[0,T)$. In particular, if $\lambda\geq 2\delta$, then $b\geq \epsilon>0$, which proves the claim. \end{proof} \end{comment} \section{Convergence to a critical chord or a round half-point} \label{sec:grayson} We now exploit the chord-arc estimate to rule out collapsing at a finite time singularity, resulting in a free-boundary version of Grayson's theorem, Theorem \ref{thm:fbGrayson}. Given $z\in S=\partial\Omega$, we denote by $T_z\Omega$ the halfplane $\{p\in\ensuremath{\mathbb{R}}^2: \langle p, N^S(z) \rangle \le 0\}$. \begin{theorem}\label{thm:fbGrayson} Let $\Omega \subset \mathbb{R}^2$ be a convex domain of class $C^2$ and let $\{\Gamma_t\}_{t\in [0,T)}$ be a maximal free boundary curve shortening flow starting from a properly embedded interval $\Gamma_0$ in $\Omega$. Either: \begin{enumerate} \item[(a)] $T=\infty$, in which case $\Gamma_t$ converges smoothly as $t\to \infty$ to a chord in $\Omega$ which meets $\partial\Omega$ orthogonally; or \item[(b)] $T<\infty$, in which case $\Gamma_t$ converges uniformly to some $z\in \partial\Omega$, and \[ \tilde\Gamma_t\doteqdot \frac{\Gamma_t -z}{\sqrt{2(T-t)}} \] converges uniformly in the smooth topology as $t\to T$ to the unit semicircle in $T_z\Omega$. \end{enumerate} \end{theorem} \subsection{Long time behaviour} We first address the long-time behaviour. \begin{proof}[Proof of Theorem \ref{thm:fbGrayson} part (a)] \begin{comment} Assume $T=\infty$. We will show that for any sequence $t_i\to\infty$, there is a subsequence for which $\Gamma_{t_i}$ converges smoothly to a chord $C$ in $\Omega$ which meets $\partial\Omega$ orthogonally, which implies the result. 
First, since \ba \frac{dL}{dt}={}&-\int\kappa^2\,ds\label{eq:evolve length}\\ \le{}&0\,,\nonumber \ea the length must have a long-time limit $\lim_{t\to\infty} L(t) = L_\infty \geq 0$. If $L_\infty=0$, then we could eventually enclose $\Gamma_t$ by a small convex arc which meets $\partial\Omega$ orthogonally, which must contract to a point on the boundary in finite time (in accordance with Stahl's theorem). The avoidance principle would then imply that $T<\infty$, which is a contradiction. So we must have $L_\infty>0$. Now suppose $t_i \to \infty$. Integrating \eqref{eq:evolve length}, we have \[L(t_i) - L(t_i-1) = -\int_{-1}^0 \left( \int_{\Gamma_{t+t_i}} \kappa^2 ds\right) dt,\] which tends to 0 as $i\to\infty$ since $L(t)$ has a limit. In particular, passing to a subsequence we can arrange that \[ \int_{-1}^0\sum_{i=1}^\infty \left( \int_{\Gamma_{t+t_i}} \kappa^2 ds\right) dt = \sum_{i=1}^\infty \int_{-1}^0 \left( \int_{\Gamma_{t+t_i}} \kappa^2 ds\right) dt <\infty.\] Therefore, for a.e. $\tau \in[-1,0]$, we have $\sum_{i=1}^\infty \left( \int_{\Gamma_{\tau+t_i}} \kappa^2 ds\right) dt <\infty$ and hence \[\int_{\Gamma_{\tau+t_i}} \kappa^2 ds\to 0.\] Choose such a $\tau$. After reparametrising by arclength, the $W^{2,2}$ norms of $\gamma(\cdot,\tau+t_i):[0,L_i]\to\Omega)$ tend to zero. {\color{red}Reparamatrizing by a sequence of uniformly controlled diffeomorphisms $\phi_i:[0,L_i]\to [0,1]$, we obtain a sequence of embeddings $\gamma_i\doteqdot \gamma\circ\phi_i^{-1}\in W^{2,2}([0,1];\ensuremath{\mathbb{R}}^2)$ with uniformly bounded $W^{2,2}$-norm and $\vert\gamma_i'\vert$ uniformly bounded from below. By the Sobolev embedding $W^{2,2}([0,1])\hookrightarrow C^{1,\alpha}([0,1])$, there is a subsequence along which $\gamma_i$ converges in $C^{1,\alpha}([0,1])$ (for every $\alpha<\frac{1}{2}$) to a limit immersion $\gamma_{\infty}\in W^{2,2}([0,1])$ satisfying $\int\kappa^2=0$.} In particular, $\gamma_\infty$ is a weak solution of $\kappa=0$, that is, a straight line segment $C$ which meets $\partial\Omega$ orthogonally {\color{red}[I think this is a direct argument, rather than ``elliptic regularity'']}. \textcolor{red}{(would like to do smooth convergence here. can do PDE or brakke reg)} Note that by smooth dependence on initial data, $\gamma(\cdot, t_i)$ also converges smoothly to $\gamma_\infty$. {\color{red}[This bit seems to do all the work. Could you give some explanation?]} {\color{red}[I still feel the other argument is simpler: straightforward estimates give you better control on the curvature which gives direct convergence.]} \textcolor{magenta}{Mat's argument: This proves convergence of $\tilde\gamma(\cdot,t)$ to a diameter in $C^{1,\alpha}([0,1];\ensuremath{\mathbb{R}}^2)$ as $t\to \infty$. In particular, we may eventually write $\Gamma_t$ as a graph over the limit diameter with uniformly H\"older controlled height and gradient, so estimates for quasilinear parabolic partial differential equations with transversal Neumann boundary condition (cf. \cite{Stahl96a}) and interpolation yield convergence in the smooth topology.} \end{comment} First recall that, since $\Omega$ is convex, $\Gamma_t$ remains in some compact subset $K\subset \Omega$ for all time Next, observe that the length approaches a positive limit as $t\to\infty$. 
Indeed, a limit exists due to the monotonicity \ba \frac{dL}{dt}={}&-\int\kappa^2\,ds\label{eq:evolve length}\\ \le{}&0\,,\nonumber \ea and the limit cannot be zero: If it were zero, then we could eventually enclose $\Gamma_t$ by a small convex arc which meets $\partial\Omega$ orthogonally. The latter would contract to a point on the boundary in finite time (in accordance with Stahl's theorem), whence the avoidance principle would force $\Gamma_t$ to become singular in finite time, contradicting $T=\infty$. Then integrating \eqref{eq:evolve length} from time $0$ to $\infty$, we find that, for every $\varepsilon>0$, we can find $t_\varepsilon<\infty$ such that \begin{equation}\label{eq:aeL2est} \int_{\Gamma_t}\kappa^2\,ds<\varepsilon \end{equation} for \emph{almost every} $t\ge t_\varepsilon$. We can bootstrap this to full convergence as follows (cf. \cite{MR1046497,MR979601}): integrating by parts and applying the boundary condition yields \bann \frac{d}{dt}\int_{\Gamma_t}\kappa^2\,ds={}&\int_{\Gamma_t}\left(2\kappa(\Delta\kappa+\kappa^3)-\kappa^4\right)\\ ={}&2\sum_{\partial\Gamma_t}\kappa^S\kappa^2-2\int_{\Gamma_t}\vert\nabla\kappa\vert^2\,ds+\int_{\Gamma_t}\kappa^{4}\\ \le{}&2C\sum_{\partial\Gamma_t}\kappa^2-2\int_{\Gamma_t}\vert\nabla\kappa\vert^2+\max_{\Gamma_t}\kappa^2\int_{\Gamma_t}\kappa^2\,, \eann where $C\doteqdot \max_{S\cap K}\kappa^S$, $S=\partial\Omega$, and $L_0\doteqdot L(0)$. Since (by Stahl's theorem, say) $\min_{\Gamma_t}\vert\kappa\vert=0$ for each $t$, the fundamental theorem of calculus and the H\"older inequality yield \bann \max_{\Gamma_t}\kappa^2\le{}&\left(\int_{\Gamma_t}\vert\nabla\kappa\vert\right)^2\le L\int_{\Gamma_t}\vert\nabla\kappa\vert^2\le L_0\int_{\Gamma_t}\vert\nabla\kappa\vert^2\,, \eann while the fundamental theorem of calculus and the Cauchy--Schwarz inequality yield \bann 2C\sum_{\partial\Gamma_t}\kappa^2\le2C\int_{\Gamma_t}\vert\nabla\kappa^2\vert \le{}&4C^2 \int_{\Gamma_t}\kappa^2+\int_{\Gamma_t}\vert\nabla\kappa\vert^2\,. \eann Thus, \ba\label{eq:L2 subexp growth} \frac{d}{dt}\int_{\Gamma_t}\kappa^2\,ds\le{}&4C^2 \int_{\Gamma_t}\kappa^2+\left(L_0\int_{\Gamma_t}\kappa^2-1\right)\int_{\Gamma_t}\vert\nabla\kappa\vert^2\,. \ea Now, given any $\varepsilon\in (0,\frac{1}{2L_0})$ we can find $t_\varepsilon$ such that \eqref{eq:aeL2est} holds for almost every $t\ge t_\varepsilon$. But then by \eqref{eq:L2 subexp growth} there is a dense set of times $t'\ge t_\varepsilon$ such that $\int_{\Gamma_{t}}\kappa^2\,ds\le 2\varepsilon$ for \emph{every} $t\in [t', t'+\delta]$, where $\delta\doteqdot\frac{\log2}{4C^2}>0$. It follows that \[\int_{\Gamma_t} \kappa^2 \,ds \to 0\] as $t\to\infty$. In particular, with respect to an arclength parametrization, the $W^{2,2}$ norm of $\gamma(\cdot,t):[0,L(t)]\to\Omega$ is bounded independent of $t$. Reparametrizing by a family of uniformly controlled diffeomorphisms $\phi(\cdot,t):[0,L(t)]\to [0,1]$, we obtain a family of embeddings $\tilde\gamma(\cdot,t)\doteqdot \gamma(\phi^{-1}(\cdot,t),t)\in W^{2,2}([0,1];\ensuremath{\mathbb{R}}^2)$ with uniformly bounded $W^{2,2}$-norm and $\vert\tilde\gamma'\vert$ uniformly bounded from below.
Since the Sobolev embedding theorem then implies uniform bounds in $C^{1,\alpha}([0,1];\ensuremath{\mathbb{R}}^2)$ for every $\alpha<\frac{1}{2}$, the Arzel\`a--Ascoli theorem yields, for any sequence of times $t_j\to\infty$, a subsequence along which $\tilde\gamma(\cdot,t_j)$ converges in $C^{1,\alpha}([0,1];\ensuremath{\mathbb{R}}^2)$, for every $\alpha<\frac{1}{2}$, to a limit immersion $\gamma_{\infty}\in W^{2,2}([0,1];\ensuremath{\mathbb{R}}^2)$ satisfying $\kappa=0$ in the weak sense and the orthogonal boundary condition. In particular, $\gamma_\infty$ must parametrise a straight line segment which meets $\partial \Omega$ orthogonally; we call such a segment a \textit{critical chord}. We need to show that the limit chord, which we denote by $\sigma$, is unique. To achieve this, we will show that the \emph{endpoints} of $\Gamma_t$ converge to those of $\sigma$. \begin{claim}\label{claim:finite crossing} The endpoints of $\Gamma_t$ cross those of $\sigma$ at most finitely many times. \end{claim} \begin{proof}[Proof of Claim \ref{claim:finite crossing}] Observe that the height function $y\doteqdot \langle \gamma , N^\sigma\rangle$, where $N^\sigma$ is a choice of unit normal to $\sigma$, satisfies \begin{equation} \label{eq:height} \left\{\begin{aligned}(\partial_t-\Delta)y={}&0\;\;\text{in}\;\;\Gamma_t\setminus\partial\Gamma_t\\ \langle \nabla y, N^S\rangle ={}&\langle N^\sigma, N^S\rangle \;\;\text{on}\;\;\partial\Gamma_t\,. \end{aligned}\right. \end{equation} In particular, the conormal derivative $\langle \nabla y , N^S\rangle$ vanishes at any boundary zero of $y$. We claim that, unless $\{\Gamma_t\}_{t\in[0,T)}$ is the stationary chord $\Gamma_t\equiv \sigma$, the boundary of $\Omega$ is a \emph{strict zero sink} for $y$; that is, \begin{itemize} \item if $p\in\partial\Gamma_{t_0}$ is a zero of $y$ at a positive time $t_0$, then we can find $r>0$ such that $\Gamma_t\cap B_r(p)$ contains a zero of $y$ for all $t\in(t_0-r^2,t_0)$ but not for $t\in(t_0,t_0+r^2)$. \end{itemize} Indeed, if $p\in\partial\Gamma_{t_0}$ is a zero of $y$ for $t_0>0$, then $\Gamma_{t_0}$ lies locally (and nontrivially) to one side of $\sigma$ in a neighbourhood $B$ of $p$ (above, say). But then, since $\nabla y\cdot N^S|_p=0$, the strong maximum principle implies that $y>0$ in $B$ for a short time. On the other hand, if we can find $r>0$ such that $\Gamma_t\cap B_r(p)$ does not contain a zero of $y$ for $t\in (t_0-r^2,t_0)$, then the Hopf boundary point lemma implies that $\nabla y\cdot N^S<0$ at $p$, which contradicts (\ref{eq:height}). Since, with respect to a parametrization $\tilde{\gamma}:[0,1]\times[0,T)\to \Omega$ for $\{\Gamma_t\}_{t\in[0,T)}$, $y$ satisfies a linear diffusion equation with suitably bounded coefficients, it now follows from Angenent's Sturmian theory \cite{Ang88} that the zero set of $y$ is finite and non-increasing at positive times (and in fact strictly decreasing each time $y$ admits a degenerate or boundary zero). The claim follows. \end{proof} \begin{claim}\label{claim:finite direction change} The endpoints of $\Gamma_t$ change direction at most finitely many times. \end{claim} \begin{proof}[Proof of Claim \ref{claim:finite direction change}] Recall that the curvature satisfies \cite{Stahl96a} \begin{equation}\label{eq:evolvekappa} \left\{\begin{aligned}(\partial_t-\Delta)\kappa={}&\kappa^3\;\;\text{in}\;\;\Gamma_t\\ \langle \nabla \kappa, N^S\rangle={}&\kappa^S\kappa\;\;\text{on}\;\;\partial\Gamma_t\,. \end{aligned}\right.
\end{equation} In particular, the conormal derivative $\langle \nabla \kappa , N^S\rangle$ vanishes at any boundary zero of $\kappa$. Thus, applying essentially the same argument as in Claim \ref{claim:finite crossing}, we find that the number of zeroes of $\kappa$ is finite and non-increasing at positive times, and strictly decreasing any time $\kappa$ admits a degenerate or boundary zero. The claim follows. \end{proof} \begin{comment} To this end, recall that the curvature satisfies \begin{equation}\label{eq:evolvekappa} \left\{\begin{aligned}(\partial_t-\Delta)\kappa={}&\kappa^3\\ \nabla\kappa\cdot N^S={}&\kappa^S\kappa\,. \end{aligned}\right. \end{equation} With respect to a fixed parametrization $\gamma:[-1,1]\times[0,T)\to\Omega$ for $\{\Gamma_t\}_{t\in[0,T)}$, this becomes \[ \left\{\begin{aligned} \kappa_t={}&a\kappa_{xx}+b\kappa_x+\kappa^3 \;\;\text{in}\;\; [-1,1]\\ \kappa_x(\pm 1,t)={}&f_{\pm}(t)\kappa(\pm1,t)\,, \end{aligned}\right. \] where \[ a\doteqdot \vert\gamma'\vert^{-2}\,,\;\;b\doteqdot -\vert\gamma'\vert^{-4}\inner{\gamma''}{\gamma'}\;\;\text{and}\;\; f_\pm=\mp\vert\gamma'(\pm 1,t)\vert\kappa^S(\gamma(\pm 1,t))\,. \] So a beautiful argument of Matano \cite[Lemma 2]{MR501842} shows that the boundary values of $\kappa$ change sign at most a finite number of times; this means that each boundary point changes direction at most a finite number of times. Next, observe that the height function $y\doteqdot \gamma\cdot N^D$, where $N^D$ is a choice of unit normal to $D$, satisfies \[ \left\{\begin{aligned}(\partial_t-\Delta)y={}&0\\ \nabla y\cdot N^S={}&fy\,, \end{aligned}\right. \] where $f\doteqdot \frac{\nabla y\cdot N^S}{y}|_{\partial\Gamma_t}$ is a positive continuous function of $t$ at each boundary point. So Matano's argument shows that the boundary values of $y$ change sign at most a finite number of times; this means that the endpoints of $\Gamma_t$ cross those of $D$ at most a finite number of times. It now follows that the endpoints of $\Gamma_t$ converge monotonically to those of $D$, and we conclude that the limit chord is indeed unique. \end{comment} Claims \ref{claim:finite crossing} and \ref{claim:finite direction change} imply that the endpoints of $\Gamma_t$ converge to some pair of limit boundary points, which must then be the endpoints of $\sigma$, and we may thus conclude that the limit chord is indeed unique. This proves convergence of $\tilde\gamma(\cdot,t)$ to a chord in $C^{1,\alpha}([0,1];\ensuremath{\mathbb{R}}^2)$ as $t\to \infty$. In particular, we may eventually write $\Gamma_t$ as a graph over the limit chord with uniformly H\"older controlled height and gradient, so the Schauder estimate \cite[Theorem 4.23]{Lieberman} and interpolation yield convergence in the smooth topology. \end{proof} We present two routes to case (b) of Theorem \ref{thm:fbGrayson}: one using (smooth) intrinsic blowups in the spirit of Hamilton \cite{HamiltonPinched,MR1369140} and Huisken \cite{Huisken96}; and one using (weak) extrinsic blowups in the spirit of White \cite{Wh03} (cf. Schulze \cite{Schulze}). We present the extrinsic method first, as (utilising the powerful theory of free-boundary Brakke flows developed by Edelen \cite{Edelen}) it quickly reduces the problem to ruling out multiplicity of blowup limits, which the chord-arc bound easily achieves. The intrinsic method is more elementary but requires the adaptation of a number of (interesting) results to the free boundary setting (for instance a monotonicity formula for the total curvature). 
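Both routes exploit the parabolic scaling invariance of the problem together with the scale invariance of the chord-arc bound; we record this elementary observation here (in the normalisation used below), since it is what allows the estimate to pass to blow-up limits. If $\{\Gamma_t\}$ is a free boundary curve shortening flow in $\Omega$ and $\lambda>0$, then
\[
\Gamma^\lambda_t\doteqdot\lambda\big(\Gamma_{t_0+\lambda^{-2}t}-x_0\big)
\]
is a free boundary curve shortening flow in the rescaled domain $\lambda(\Omega-x_0)$, and $\boldsymbol{d}$ and $\boldsymbol{\ell}$ are simply multiplied by $\lambda$ at corresponding pairs of points; in particular, any positive lower bound for the ratio $\boldsymbol{d}/\boldsymbol{\ell}$ is inherited by the rescaled flows.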
\subsection{Extrinsic blowup} We follow the treatment of Schulze \cite{Schulze} (taking care to explain the modifications required in order to contend with the boundary condition). We begin by classifying the tangent flows following Edelen's theory of free boundary Brakke flows \cite{Edelen} (cf. \cite[Theorem 6.4]{Edelen} and \cite[Theorem 6.9]{Buckland}). We say that a spacetime point $(x_0,t_0)$ is reached by a free boundary curve shortening flow if (for instance) its Gaussian density is positive: $\Theta(x_0,t_0)>0$. \begin{lemma} \label{lem:tangent-flow} Let $(x_0,t_0)$ be a point in spacetime reached by the free boundary curve shortening flow $\{\Gamma_t\}_{t\in I}$. For any sequence of scales $\lambda_i\to \infty$, there is a subsequence such that \[\{\Gamma_t^{i}\doteqdot\lambda_i(\Gamma_{t_0+\lambda_i^{-2}t}-x_0)\}_{t\in [-\lambda_i^2t_0,0)} \to \{\Gamma^\infty_t\}_{t\in(-\infty,0)}\] graphically and locally smoothly, with multiplicity 1, where $\{\Gamma_t^\infty\}_{t\in(-\infty,0)}$ is one of the following: \begin{enumerate}[(a)] \item a static line through the origin; \item a static half-line from the origin; \item a shrinking semicircle. \end{enumerate} \end{lemma} \begin{proof} By \cite[Theorems 4.10 and 6.4]{Edelen}, there is certainly a subsequence along which the flows $\{\Gamma_t^i\}$ converge as free boundary Brakke flows to some self-shrinking free boundary Brakke flow $(\mu_t)$. Moreover, a slight modification of the proof of Edelen's reflected, truncated monotonicity formula \cite[Theorem 5.1]{Edelen} reveals that \begin{equation}\label{eq:RHS of MF} \int_{\tilde\Gamma{}^i_{t}}\left\vert\frac{\tilde x^\perp}{-2t}+\tilde\kappa\tilde\nu\right\vert^2\tilde\phi\tilde\rho+\int_{\Gamma^i_t}\left\vert\frac{x^\perp}{-2t}+\kappa\nu\right\vert^2\phi\rho\to 0\,, \end{equation} in $L^1_{\mathrm{loc}}((-\infty,0))$. (Indeed, the second term in the penultimate line of the estimate at the bottom of page 115 of \cite{Edelen} is discarded by Edelen, and one need not discard all of the first term on that line in producing the estimate on the top of page 116.) This gives a corresponding (extrinsic) $L^2_{\mathrm{loc}}$ bound for $\kappa$: for almost every $t\in(-\infty,0)$ and for every $R>0$, \[ \int_{\Gamma^i_t\cap B_R}\kappa^2\le C \] independent of $i$. Consider such a time $t=\tau$. Since $(x_0,t_0)$ is reached by the flow, there must exist points $p_i \in \Gamma^i_\tau$ converging to some limit $p$. We may consider an arclength parametrisation $\gamma^i_\tau$ of each $\Gamma^i_\tau$ such that $\gamma^i_\tau(0)=p_i$. By applying the Sobolev embedding theorem (cf. the proof of case (a) of Theorem \ref{thm:fbGrayson} above), there will be a further subsequence along which the maps $\gamma^i_\tau$ converge, in the $C^{1,\alpha}_{\mathrm{loc}}$ topology, to a proper and connected limiting immersion $\gamma_\tau^\infty:M_\infty\to \Pi$ of class $W^{2,2}_{\mathrm{loc}}$. By \eqref{eq:RHS of MF}, this limit satisfies \[ \vec\kappa=\frac{x^\perp}{-2\tau} \] in the weak sense, where $\Pi =T_{x_0}\Omega$ is the whole plane if $x_0\in\mathring{\Omega}$ or the closed halfspace $\{p\in\ensuremath{\mathbb{R}}^2:\langle p, N^S(x_0)\rangle\le 0\}$ if $x_0\in S=\partial\Omega$. By the $C_{\mathrm{loc}}^{1,\alpha}$ convergence, the limit $\Gamma^\infty_\tau\doteqdot\gamma_\tau^\infty(M_\infty)$ meets $\partial\Pi$ orthogonally at any boundary points, so the Schauder estimates \cite[Theorems 6.2 and 6.30]{GilbargTrudinger} imply that the limit immersion $\gamma^\infty_\tau$ is smooth.
Moreover, the boundary avoidance estimate implies that $\mathring{\Gamma}^\infty_\tau\subset\mathring{\Pi}$ (which in particular rules out the barrier $\partial \Pi$ as a limit). Now, the only smoothly embedded curves in $\mathbb{R}^2$ or $\mathbb{R}^2_+$ which satisfy the free boundary self-shrinker equation are the (half-)lines through the origin and the (semi)circle of radius $\sqrt{2}$. (Indeed, for $\mathbb{R}^2_+$ a standard reflection argument, as in Proposition \ref{prop:ancient} below, reduces the classification to the planar case \cite{MR2931330}.) If $\Gamma^\tau_\infty$ is compact, then the embeddings $\gamma^i_\tau$ converge globally in the $C^{1,\alpha}$ topology. In particular, $\Gamma^\infty_\tau$ is topologically a compact interval. The only possibility is that $x_0\in\partial\Omega$, so that $\Pi$ is a half-plane, and $\Gamma^\infty_\tau$ is the semicircle in $\Pi$ centred at the origin with radius $\sqrt{-2\tau}$. Thus $(\mu_t)$ is the corresponding shrinking semicircle of multiplicity 1. If $\Gamma^\infty_\tau$ is noncompact, then it must be a (half-)line. The $C^{1,\alpha}_{\mathrm{loc}}$ convergence means that for any $R>0$, the restriction $\gamma^i_\tau |_{\{|x|<R\}}$ converges to a (half-)line. Using our chord-arc estimate, it follows that there is some $c>0$ (independent of $R$) such that, for large $i$, we have \[\Gamma^i_\tau \cap B_{cR} = \mathrm{im}\left(\gamma^i_\tau |_{\{|x|<R\}}\right) \cap B_{cR}.\] In particular, this implies that $\Gamma^i_\tau$ converges, locally and graphically (in the $C^{1,\alpha}$ topology), to a (half-)line with multiplicity 1. In particular, $(\mu_t)$ must be a stationary (half-)line of multiplicity 1 (which is orthogonal to $\partial\Pi$). In either case, local regularity for free boundary Brakke flows \cite[Theorem 8.1]{Edelen} implies that the flows $\{\Gamma^i_t\}$ converge locally and graphically, in the smooth topology, to $(\mu_t)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:fbGrayson} part (b) (extrinsic method)] By Edelen's local regularity theorem \cite[Theorem 8.1]{Edelen}, if any tangent flow to $\{\Gamma_t\}_{t\in [0,T)}$ at $(x_0,T)$ is a multiplicity one (half-)line, then $(x_0,T)$ is a smooth point of the flow. Since the flow becomes singular in finite time $T<\infty$, by Lemma \ref{lem:tangent-flow} there must be a point $x_0 \in \partial \Omega$ such that every tangent flow at $(x_0,T)$ converges smoothly, with multiplicity 1, to the shrinking semicircle in $\Pi_{x_0}$. The result follows (fix any time and consider the scale factors $\lambda = \frac{1}{\sqrt{T-t}}$). \end{proof} \subsection{Intrinsic blowup} We now follow the (smooth) ``type-I vs type-II'' blow-up argument of Huisken \cite{Huisken96}. We first exploit the classification of convex ancient planar curve shortening flows to classify smooth free boundary blow-ups. (Recall that a curve in a convex subset $\Omega$ of the plane is \emph{convex} if it is the relative boundary of a convex subset of $\Omega$.) 
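For instance (a quick check, not needed in the sequel, and stated in the congruent halfplane $\{x_1\ge 0\}$ for convenience): the half-Grim Reapers
\[
\Gamma_t=\left\{\big(x_1,\,t-\log\cos x_1\big):0\le x_1<\tfrac{\pi}{2}\right\},\qquad t\in\ensuremath{\mathbb{R}},
\]
form a convex ancient free boundary curve shortening flow, since the Grim Reaper translates vertically with unit speed under curve shortening flow, while its tangent at $x_1=0$ is horizontal, so that each timeslice meets the boundary line $\{x_1=0\}$ orthogonally.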
\begin{proposition}\label{prop:ancient} The only convex ancient free boundary curve shortening flows in the halfplane $\ensuremath{\mathbb{R}}^2_+$ are the shrinking round semicircles, the stationary (half-)lines and pairs of parallel (half-)lines, the (half-)Grim Reapers, and the (half-)Angenent ovals.\footnote{Note that there are two geometrically distinct half-Angenent ovals.} \end{proposition} \begin{proof} Let $\{\Gamma_t\}_{t\in[-\infty,\omega)}$ be a convex ancient free boundary curve shortening flow in $\ensuremath{\mathbb{R}}^2_+$ with nontrivial boundary on $\partial\ensuremath{\mathbb{R}}^2_+$. By differentiating the evolution and boundary value equations \eqref{eq:evolvekappa} for $\kappa$, we find (by induction) that all of the odd-order derivatives of $\kappa$ vanish at $\partial\ensuremath{\mathbb{R}}^2_+$. We therefore obtain, upon doubling $\Gamma_t$ through (even) reflection across $\partial\ensuremath{\mathbb{R}}^2_+$, a convex ancient (boundaryless) curve shortening flow in the plane. So the claim follows from the classification from \cite{BLTcsf,DHScsf}. \end{proof} In order to ensure \emph{convex} blow-up limits, we will adapt a monotonicity formula of Altschuler \cite[Theorem 5.14]{Altschuler} to the free boundary setting. In order to achieve this, we first need to control the vertices of $\Gamma_t$ under the flow. \begin{comment} To this end, observe that, with respect to a fixed parametrization $\gamma:[-1,1]\times[0,T)\to\Omega$ for $\{\Gamma_t\}_{t\in[0,T)}$, the curvature satisfies \[ \left\{\begin{aligned} \kappa_t={}&a\kappa_{xx}+b\kappa_x+\kappa^3 \;\;\text{in}\;\; [-1,1]\\ \kappa_x(\pm 1,t)={}&f_{\pm}(t)\kappa(\pm1,t)\,, \end{aligned}\right. \] where \[ a\doteqdot \vert\gamma'\vert^{-2}\,,\;\;b\doteqdot -\vert\gamma'\vert^{-4}\inner{\gamma''}{\gamma'}\;\;\text{and}\;\; f_\pm=\mp\vert\gamma'(\pm 1,t)\vert\kappa^S(\gamma(\pm 1,t))\,. \] So Matano's argument \cite[Lemma 2]{MR501842} implies that $\kappa|_{\partial\Gamma_t}$ (and hence also $\kappa_s|_{\partial\Gamma_t}$) changes sign at most a finite number of times. \end{comment} \begin{lemma}\label{lem:Sturm} Let $\{\Gamma_t\}_{t\in[0,T)}$ be a compact free boundary curve shortening flow in a convex domain $\Omega\subset\ensuremath{\mathbb{R}}^2$. Unless $\{\Gamma_t\}_{t\in[0,T)}$ is a stationary chord or a shrinking semicircle, the inflection points $\{p\in\Gamma_t:\kappa(p)=0\}$ and interior vertices $\{p\in\mathring{\Gamma_t}:\kappa_s(p)=0\}$ are finite in number for all $t>0$. The number of inflection points is non-increasing, and strictly decreases each time $\Gamma_t$ admits a degenerate or boundary inflection point. \end{lemma} \begin{proof} Recall from the proof of part (a) of Theorem \ref{thm:fbGrayson} that the number of zeroes of $\kappa$ is finite and non-increasing at positive times, and strictly decreasing any time $\kappa$ admits a degenerate or boundary zero. In particular, $\kappa_s$ changes sign at most a finite number of times at any boundary point (but could still vanish on an open set of times if $\partial\Omega$ contains flat portions.) 
Since, with respect to a fixed parametrization $\gamma:[-1,1]\times[0,T)\to\Omega$ for $\{\Gamma_t\}_{t\in[0,T)}$, $\kappa_x$ satisfies a linear diffusion equation, we may apply \cite[Theorems C and D]{Ang88} to complete the proof.\footnote{Note that the argument of the Dirichlet case of \cite[Theorem C]{Ang88} yields the same conclusions under the mixed boundary condition: $u(x_1,t)=0$ and $u(x_2,t)\ne 0$ for all $t$.} \end{proof} Denoting the total curvature of $\Gamma_t$ by \[ K(\Gamma_t)\doteqdot \int_{\Gamma_t}\vert\kappa\vert\,, \] we now obtain the following free boundary version of Altschuler's formula. \begin{lemma}\label{lem:Altschuler} On any compact free boundary curve shortening flow $\{\Gamma_t\}_{t\in(\alpha,\omega)}$ in a convex domain $\Omega\subset \ensuremath{\mathbb{R}}^2$, we have \begin{equation}\label{eq:Altschuler} \frac{d}{dt}K(\Gamma_t)=\sum_{\partial\Gamma_t}\vert\kappa\vert\kappa^S-2\sum_{\{p:\kappa(p,\cdot)=0\}}\vert\nabla\kappa\vert\,, \end{equation} except at finitely many times. \end{lemma} \begin{proof} By Lemma \ref{lem:Sturm}, either the solution is a stationary chord or shrinking semicircle (and hence the claim holds trivially) or the inflection points of $\Gamma_t$ are finite in number and non-degenerate, except possibly at a finite set of times. Away from these times, we may split $\Gamma_t$ into $N$ segments $\{\Gamma_t^j\}_{j=1}^N$, with boundaries $\{a_{i-1},a_i\}_{i=1}^N$, on which $\kappa$ is nonzero and alternates sign, so that, for an appropriate choice of arclength parameter, \bann \frac{d}{dt}\int_{\Gamma_t}\vert\kappa\vert={}&\sum_{j=1}^N(-1)^{j-1}\frac{d}{dt}\int_{\Gamma_t^j}\kappa\,ds\\ ={}&\sum_{j=1}^N(-1)^{j-1}\int_{\Gamma_t^j}\kappa_{ss}\,ds\\ ={}&-\kappa_s(a_0)+2\sum_{j=1}^{N-1}(-1)^{j-1}\kappa_{s}(a_j)+(-1)^{N-1}\kappa_s(a_N)\,. \eann Observe that $(-1)^{i}\kappa_{s}(a_i)\ge 0$ for each $i$ and \ba\label{eq:boundary gradient} 0={}&\frac{d}{dt}\inner{N}{N^S}\nonumber\\ ={}&\inner{\nabla\kappa}{N^S}-\kappa\inner{N}{D_NN^S}\nonumber\\ ={}&\inner{\nabla\kappa}{N^S}-\kappa\kappa^S \ea at the boundary. The claim follows since $\inner{\partial_s}{N^S}=-1$ at $a_0$ and $+1$ at $a_N$. \end{proof} We may eliminate the boundary term in \eqref{eq:Altschuler}, resulting in a genuine monotonicity formula, by introducing the total curvature $\tilde K(\Gamma_t)$ of the portion of the boundary $\partial\Omega$ (counted with multiplicity) traversed by the endpoints of $\Gamma_t$. That is, we set \[ \tilde K(\Gamma_t) \doteqdot \int_t^T\vert\kappa^\text{L}\vert\,ds_{\mathrm{L}}+\int_t^T\vert\kappa^\text{R}\vert\,ds_{\mathrm{R}}\,, \] where $\kappa^{\text{L}}$ (resp. $\kappa^{\text{R}}$) denotes the curvature and $ds_{\mathrm{L}}$ (resp. $ds_{\mathrm{R}}$) the length element of the piecewise smoothly immersed curve $\zeta^{\text{L}}:[0,T)\to\partial\Omega$ (resp. $\zeta^{\text{R}}:[0,T)\to\partial\Omega$) determined by the left (resp. right) boundary point $\zeta_{\text{L}}(t)$ (resp. $\zeta_{\text{R}}(t)$) of $\partial\Gamma_t$. Recall that the boundary points may only change direction at most finitely many times. Moreover, the boundary avoidance estimate, Proposition \ref{prop:boundary-avoidance}, provides room for uniform barriers that prevent the boundary from cycling $\partial\Omega$ an infinite number of times as $t\to T<\infty$. It follows that the boundary points converge, and in particular $\tilde K(\Gamma_t)$ is finite.
Now observe that, away from the (finitely many) boundary inflection times, the rate of change of $\tilde K(\Gamma_t)$ exactly cancels the boundary term in \eqref{eq:Altschuler}. Thus, if we define \[ \boldsymbol K(\Gamma_t)\doteqdot K(\Gamma_t)+\tilde K(\Gamma_t)\,, \] then we obtain the following monotonicity formula. \begin{corollary} On any compact free boundary curve shortening flow $\{\Gamma_t\}_{t\in(\alpha,\omega)}$ in a convex domain $\Omega\subset \ensuremath{\mathbb{R}}^2$ \begin{equation}\label{eq:Altschuler_prime} \frac{d}{dt}\boldsymbol K(\Gamma_t)=-2\sum_{\{p:\kappa(p,\cdot)=0\}}\vert\nabla\kappa\vert \end{equation} except at finitely many times. \end{corollary} Putting these ingredients together, we arrive at Theorem \ref{thm:fbGrayson}. \begin{proof}[Proof of Theorem \ref{thm:fbGrayson} part (b) (intrinsic method)] By hypothesis, $T<\infty$. By applying the ODE comparison principle to \eqref{eq:evolvekappa}, we find that \begin{equation}\label{eq:kappa lower bound} \max_{\Gamma_t}\kappa^2\ge \frac{1}{2(T-t)}\,. \end{equation} We claim that \begin{equation}\label{eq:type I} \limsup_{t\to T}\max_{\Gamma_t}(T-t)\kappa^2<\infty\,. \end{equation} Indeed, if this is not the case, then we may blow-up \emph{\`a la Hamilton} to obtain a Grim Reaper solution, which will contradict the chord-arc estimate: choose a sequence of times $t_{j}\in[0,T-j^{-1})$ and a sequence of points $x_{j}\in M$ such that \[ \kappa^{2}(x_{j},t_{j})\left(T-\tfrac{1}{j}-t_{j}\right)=\max_{(x,t)\in M\times \left [0,T-\frac{1}{j}\right]}\kappa^{2}(x,t)\left(T-\tfrac{1}{j}-t\right)\,, \] set $\sigma_j\doteqdot\lambda_j^2t_j$ and $T_{j}=\lambda^2_{j}\left(T-\frac{1}{j}-t_{j}\right)$, where $\lambda_j\doteqdot \vert\kappa(x_{j},t_{j})\vert$, and consider the sequence of rescaled solutions $\gamma_j:M\times (-\sigma_j,T_j)\to \Omega_j\doteqdot \lambda_j(\Omega-\gamma(x_j,t_j))$ defined by \[ \gamma_{j}(x,t)=\lambda_{j}\left(\gamma\left(x,t_{j}+\lambda_j^{-2}t\right)-\gamma(x_{j},t_{j})\right). \] By hypothesis, we can pass to some subsequence such that $t_j\to T$, $\lambda_j\to\infty$, $\sigma_j\to \infty$, $T_j\to\infty$, and $\Omega_j\to\Omega_\infty$, where $\Omega_\infty$ is either the plane or some halfplane. Observe also that $\gamma_{j}(x_{j},0)=0$ and \begin{align*} \kappa_{j}^2(x,t)={}&\lambda_j^{-2}\kappa^2\left(x,t_{j}+\lambda_j^{-2}t\right)\leq\frac{T-\frac{1}{j}-t_{j}}{T-\frac{1}{j}-(t_{j}+\lambda_j^{-2}t)}=\frac{T_{j}}{T_{j}-t}. \end{align*} So the curvature is uniformly bounded on any compact time interval and, up to a change of orientation, takes unit value at the spacetime origin. Thus, by estimates for quasilinear parabolic partial differential equations with transverse Neumann boundary condition (cf. \cite{Stahl96b}), the rescaled solutions $\gamma_j:M\times [-\sigma_j,T_j)\to\Omega_j$ converge in $C^\infty$, after passing to a subsequence, to a smooth, proper limiting solution $\gamma_{\infty}:M_\infty\times(-\infty,\infty)\to \Omega_\infty$ uniformly on compact subsets of $M_\infty\times(-\infty,\infty)$. By an argument of Altschuler \cite{Altschuler}, we find that the limit flow is locally uniformly convex (and hence convex by the strong maximum principle and the curvature normalization at the spacetime origin): integrating the identity \eqref{eq:Altschuler_prime} in time on the rescaled flows between times $a$ and $b$ yields \[ \boldsymbol K(\Gamma_{\lambda_j^{-2}b+t_j})-\boldsymbol{K}(\Gamma_{\lambda_j^{-2}a+t_j})=-2\sum_{\{p:\kappa^j(p,t)=0\}}\int_a^b\vert\nabla\kappa^j\vert dt\,. 
\] Since $\boldsymbol{K}(\Gamma_t)$ is non-negative and non-increasing, it takes a limit as $t\to T$, and hence the left hand side tends to zero as $j\to\infty$. Since $a$ and $b$ were arbitrary, we conclude that any inflection point of the limit flow is degenerate. Since the limit flow is not a critical chord (due to the normalization of $\kappa$ at the spacetime origin), we conclude that there are no inflection points, and hence the limit is indeed locally uniformly convex. Proposition \ref{prop:ancient} now implies that the limit solution, being eternal and non-flat, is a (half-)Grim Reaper, which is impossible due to the chord-arc estimate. This proves \eqref{eq:type I}. We next claim that \ba\label{eq:smoothestimatesHamilton} \limsup_{t\nearrow T}\max_{\Gamma_t}\vert 2(T-t)\kappa^2-1\vert=0\;\;\text{and}\;\;\limsup_{t\nearrow T}\max_{\Gamma_t}(T-t)^{-(m+1)}\vert \nabla^m\kappa\vert^2=0 \ea for all $m\in\ensuremath{\mathbb{N}}$. Indeed, given any sequence of times $t_j\to T$, choose $x_j\in M$ so that $\kappa^2(x_j,t_j)=\max_{x\in M}\kappa^2(x,t_j)$, set $\lambda_j\doteqdot (T-t_j)^{-\frac{1}{2}}$ and $\sigma_j\doteqdot\lambda_j^2t_j$, and consider the sequence of rescaled solutions $\gamma_j:M\times (-\sigma_j,1)\to\Omega_j\doteqdot\lambda_j(\Omega-\gamma(x_j,t_j))$ defined by \[ \gamma_{j}(\,\cdot\,,t)=\lambda_{j}\left(\gamma\left(\,\cdot\,,t_{j}+\lambda_j^{-2}t\right)-\gamma(x_{j},t_{j})\right)\,. \] For each $j$, $\gamma_{j}(x_{j},0)=0$, $\sigma_j\to\infty$, and \[ \kappa_j^2(\,\cdot\,,t)=\lambda_j^{-2}\kappa^2\left(\,\cdot\,,t_j+\lambda_j^{-2}t\right)\leq \frac{C^2}{1-t}\,. \] Thus, as above, the rescaled solutions $\gamma_j:M\times [-\sigma_j,1)\to\Omega_j$ converge in the smooth topology, after passing to a subsequence, to a smooth, proper limiting solution $\gamma_{\infty}:M_\infty\times(-\infty,1)\to\Omega_\infty$ uniformly on compact subsets of $M_\infty\times(-\infty,1)$, where $\Omega_\infty$ is either the plane or a halfplane. By \eqref{eq:kappa lower bound}, the curvature must, up to a change of orientation, be positive at the spacetime origin, and hence positive everywhere by the above argument. Since the chord-arc estimate is scale invariant, we deduce from Proposition \ref{prop:ancient} that the limit is a shrinking semicircle, from which the estimates \eqref{eq:smoothestimatesHamilton} follow. Convergence to a point on $\partial\Omega$ now follows by integrating the curve shortening flow equation and applying the first of the estimates \eqref{eq:smoothestimatesHamilton}; smooth convergence to the corresponding unit semi-circle after rescaling then follows by converting the geometric estimates \eqref{eq:smoothestimatesHamilton} into estimates for the rescaled immersions. \end{proof} \subsection{Remarks} \emph{Existence} of a (geometrically unique) free boundary curve shortening flow out of any given embedded closed interval having orthogonal boundary condition in a convex domain was proved by Stahl \cite{Stahl96a}. Note that our argument for part (b) of Theorem \ref{thm:fbGrayson} does not require Stahl's result \cite[Proposition 1.4]{Stahl96b} on the convergence to points of bounded, convex, non-flat free boundary curve shortening flows in convex domains, and hence provides a new proof of it (finiteness of $T$ is a straightforward consequence of non-trivial convexity and the maximum principle; cf. \cite[Theorem 3.2]{Stahl96b}). 
We found it convenient in the intrinsic blow-up approach to exploit the full classification of convex ancient solutions to planar curve shortening flow \cite{BLTcsf} (via Proposition \ref{prop:ancient}). It would suffice, however, as in \cite{Huisken96}, to exploit the (easier) classification of convex translators and shrinkers, since a suitable monotonicity formula is available \cite{Buckland} (see also \cite[Theorem 5.1]{Edelen}) and the differential Harnack inequality (which is not available in the general free boundary setting) may be invoked \emph{after} obtaining an eternal convex limit flow in $\ensuremath{\mathbb{R}}^2_+$ or $\ensuremath{\mathbb{R}}^2$. \section{Remarks on unbounded solutions} \label{sec:unbounded} The above arguments also yield information for solutions with unbounded timeslices (in unbounded convex domains $\Omega$) so long as suitable conditions at infinity are in place. In this case, since $L=\infty$, we consider the unnormalized auxiliary functions \[ Z(x,y,t) = d(\gamma(x,t),\gamma(y,t))-\phi\left(\ell(x,y,t),t\right), \] \[ \tilde{Z}(x,y,t) = \tilde{d}(\gamma(x,t),\gamma(y,t),t)-\phi(\tilde{\ell}(x,y,t),t),\] \[\bar{Z}(x,y,z,t) = d(\gamma(x,t) ,z) + d(\gamma(y,t) ,z)-\phi(\tilde{\ell}(x,y,t ),t)\] and \[ \boldsymbol{Z}(x,y,t) \doteqdot \boldsymbol{d}(x,y,t)-\phi\left(\boldsymbol{\ell}(x,y,t),t\right)\,, \] where $\phi$ is any smooth modulus. Following Sections \ref{sec:spatial variation} and \ref{sec:evolution}, we find (in particular) that \bann \phi_t >{}& 4\phi'' \eann at an interior parabolic minimum of $\boldsymbol Z$. Taking $\phi(\lambda,t)=\varepsilon\lambda$ generalizes an estimate of Huisken \cite[Theorem 2.1]{Huisken96}: if each timeslice $\Gamma_t$ has one end and the ends are all asymptotic to a fixed ray, then initial lower bounds for $\boldsymbol d/\boldsymbol\ell$ are preserved. If the asymptotic ray points into the interior of the asymptotic cone of $\Omega$, then we find that $\boldsymbol d/\boldsymbol\ell$ is uniformly bounded from below. This rules out finite time singularities and, we expect, can be used to give a simple proof of smooth convergence to a half-line or a half-expander as $t\to\infty$. Note that the above estimate is vacuous if the asymptotic ray does not point into the interior of the asymptotic cone of $\Omega$ (since in that case $\inf_{\Gamma_0}\boldsymbol d/\boldsymbol\ell\equiv 0$). On the other hand, taking $\phi$ to be the error function solution to the heat equation, we may still obtain an (exponentially decaying) lower bound of the form $\boldsymbol d\ge \phi(\boldsymbol\ell,t)$ in the case that the asymptotic ray is parallel to the boundary of the asymptotic cone. This also rules out finite time singularities and, we expect, can be used to give a simple proof of smooth convergence to a half-line or a half-Grim Reaper as $t\to\infty$.
\item[(b)] if $\partial\Gamma_t\to\infty$ as $t\to\infty$ and $L$ points into the interior of the asymptotic cone $T\Omega$ of $\Omega$, then $\{\Gamma_{t+s}-\partial\Gamma_s\}_{t\in[-s,\infty)}$ converges smoothly to the unique expander in $T\Omega$ which is asymptotic to $L$ (from the same side as $\partial\Gamma_t$ as $t\to\infty$); \item[(c)] if $\partial\Gamma_t\to\infty$ as $t\to\infty$ and $L$ is parallel to one of the boundary rays of $T\Omega$, then $\{\Gamma_{t+s}-\partial\Gamma_s\}_{t\in[-s,\infty)}$ converges smoothly as $s\to \infty$ to the Grim Reaper of half-width equal to the asymptotic distance between $L$ and $T\Omega$ (from the same side as $\partial\Gamma_t$ as $t\to\infty$). \end{enumerate} \end{theorem} \begin{proof}[Proof sketch] Existence of a smooth solution on a maximal time interval $[0,T)$ with either $T=\infty$ or $\limsup_{t\to T}\max_{\Gamma_t}\kappa^2=\infty$ is proved using standard methods. The idea is to evolve $\Gamma_t\cap B_R$, for $R\gg 0$, by curve shortening flow with free boundary condition at $\partial\Gamma_t\subset\partial\Omega$ and Dirichlet boundary condition at $\partial B_R\cap\Gamma_t$, and then take $R\to\infty$ using estimates of the form \cite{EckerHuisken91,Stahl96a}; we shall not include the (lengthy) details of this argument as it detracts from the main purpose of this article. In case (b), we obtain a lower bound for $\boldsymbol d/\boldsymbol\ell$ as outlined above. Blowing up as in Theorem \ref{thm:fbGrayson} rules out finite time singularities.... In case (c), we obtain a lower bound for $\boldsymbol d/\boldsymbol\ell$ on any compact time-interval, which again rules out finite time singularities... \end{proof} \end{comment} \bibliographystyle{acm}
{ "arxiv_id": "2302.14269", "language": "en", "timestamp": "2023-03-01T02:07:37", "url": "https://arxiv.org/abs/2302.14269", "yymm": "2302" }
\section{Introduction} Research in the field of psychology has traditionally focused on three main forms of both emotional and attentional responses: subjective perceptions of the individual about their own state, effects on behavior, and changes in their physiological patterns, such as acceleration or deceleration of heart rate, increase in skin conductivity, etc. (\cite{Bradley2000, Mauss2009}). Each one of these approaches shows both advantages and disadvantages. First, self-report methods, such as questionnaires or interviews, in which participants are directly asked to report their status, are the only way to access the individual's subjective perception, but they are limited by the individual's own ability to introspect, since many psychological processes can occur unconsciously or with low levels of consciousness (\cite{Nisbett1977}). Moreover, in certain cases it is possible that cognitive biases (such as social desirability bias) interfere with the reports, making the information not entirely reliable. For this reason, in the field of experimental psychology research, the analysis of physiological responses (for example, variations in heart rate, skin conductivity, activation of facial muscles, etc.) has been introduced as a way of obtaining information about the cognitive and emotional processes of individuals in an indirect way. On the other hand, the main disadvantage of the physiological methods with respect to the self-reported ones is that data is obtained in a much slower and more resource-intensive way, so it is recommended to combine both methodologies: qualitative studies, for which data can be obtained from a large number of participants, and laboratory studies with smaller samples (\cite{Cacioppo2017}). Experimental designs should therefore be adapted to the type of information that each of these measures can offer us. Here are the most relevant physiological measures that we used in our study: \begin{itemize} \item Electrodermal activity (or EDA, also known as galvanic skin response or GSR) is a correlate of the activation of the sympathetic branch of the autonomic nervous system, which provides reliable information with a high temporal resolution about participants' physiological arousal, related to the intensity of experienced cognitive-emotional responses. \item Cardiac activity (ECG or electrocardiogram), or optical pulse (PPG), as a measure of Heart Rate. This signal is mediated by both the sympathetic and parasympathetic systems, thus responding to both physiological arousal and emotional regulation processes. It depends on both the intensity of emotions and their hedonic load, as well as on the cognitive resources deployed for stimulus processing. The variability of heart rate can be analyzed by looking at the ratio of sympathetic vs parasympathetic activity within the PPG signal. \end{itemize} \subsection*{Related Work} Physiological studies were previously carried out in the game literature (\cite{Kivikangas2011, Argasiski2017, Alhargan2017}) analyzing signals such as the ECG/PPG and the GSR/EDA, showing relevant differences in the participants given their affective state, mostly validated with questionnaires. Various other studies have also compared muscle signals (Electromyography/EMG) (\cite{Ahsan2009}), brain signals (Electroencephalography/EEG) (\cite{Hafeez2021}), and facial gestures (\cite{Samara2017}).
The main focus of some of these latter studies is to assess the affective state of participants from these measures, classifying the 7 affective categories of basic emotions initially defined by \cite{Ekman2005} (anger, sadness, fear, disgust, joy, surprise, and contempt), plus a neutral state. Each of the affective categories is standardized and validated with different psychophysiological tests such as the Russell test (\cite{Russell1980}) or the Self-Assessment Manikin (\cite{Bradley1994}), serving as a guideline for the validation of physiological measures that could reflect the affective responses of each individual. In this case, the EDA and the ECG are signals widely validated by the literature that can give insight into the changes in the affective states of each individual in a real-time manner. In a recent review by \cite{Leis2020}, 17 studies were meta-analyzed in Esports contexts for psychological and physiological stress, and it was concluded that simply playing in a non-competitive Esports environment produced no stress reactions, whereas in competitive environments several studies reported increases in anxiety levels, cortisol levels, and physiological sympathetic activation, all three indicators of stress (\cite{Jones2012, Yaribeygi2017}). However, stress is not the only interesting indicator to consider in Esports environments, whether competitive or not; peripheral physiology can also provide insight into various aspects of cognitive/emotional information processing, such as polarity, emotion, engagement, boredom, frustration, etc. The benefits of extracting this information are evident, as exemplified by \cite{Smerdov2020}. They performed a very comprehensive sensor-based analysis, resulting in the release of a dataset collected from professional and amateur teams in 22 League of Legends matches, including the simultaneous recording of the physiological (i.e. movements, pulse, saccades) and psychological (self-reported post-match survey) activity of five players, with interesting results such as the lack of correlation between stress and concentration levels for professional players. We take a similar approach, focusing on a simultaneous exploration of electrodermal activity and cardiac activity in all five players of a team in different events. \paragraph{Contributions} In our study we perform the first physiological analysis of the widely played desktop videogame "League of Legends". These are our main objectives: \begin{itemize} \item Evaluate the affective responses by analyzing distinct EDA and ECG metrics depending on game performance (winning or losing). \item Analyze differences in physiological responses over game events (e.g. "Killing", "Dying", "Kill Assist", "Destroy Turret", "Destroy Turret Plate", "Placing Ward", etc.) \item Investigate differences in individual physiological responses based on player roles (Jungle, Middle, Utility, Bottom, and Top) \end{itemize} \section{Methods and Experimental Design} For our experiments we used a Shimmer3\footnote{\url{https://shimmersensing.com/}} pack with 5 simultaneous GSR+ units providing galvanic skin response for the acquisition of Electrodermal Activity (EDA), as well as Optical Pulse (PPG) for estimating heart rate variations.
We developed our own Python tools (see\footnote{\url{https://github.com/dberga/riotwatcher-shimmer-pynput}}) for capturing the data streamed through Bluetooth (using pyserial and pylsl), synchronizing it with game events (using riotwatcher), and preparing it for later statistical analysis of EDA (Ledalab\footnote{\url{http://www.ledalab.de/}} V3.4.9) and PPG (HeartPy\footnote{\url{https://github.com/smritisridhar41/Heartrate_Analysis}}). A total of 4 sessions were performed at an Asobu eSports Experience with 16 participants (contacted and selected by United Gamers Academy) taking part in the gameplay experimentation. Of these, we captured data from a total of 12 participants playing on a specific team during Summoner's Rift gameplay (avg. time 30-45 min), later filtered to 7 participants with enough valid events for statistical comparison. We cut the recordings of these participants from the start to the end of the game and set specific time windows for each event (e.g. 5 sec). These data are synchronized with events downloaded through the riotwatcher API\footnote{Riot API tokens available in \url{https://developer.riotgames.com/}}. The events selected for capture are "killing", "dying", "kill assist", "special killing", "item purchased", "level up", "ward placed", "building kill", "champion transform", "turret plate destroyed" and "elite monster kill". After gameplay, we annotated Riot's metadata for each participant, such as game session data (total kills/deaths, damage done, etc.), win or loss condition, and player role (top, mid, bot, utility, and jungle). Average player level was 216 (lowest 82, highest 402), corresponding to silver-gold S12 competitive rank gamers. \subsection{Physiological data processing} \subsubsection{GSR preprocessing} The MATLAB-based toolbox Ledalab (\cite{benedek2010continuous}) was used for the GSR signal preprocessing and analysis. First, we carried out a preliminary visual examination to look for periodic drift in the signal, which reflects artifacts, and we resampled the raw signal to 50Hz using Neurokit2\footnote{\url{https://neuropsychology.github.io/NeuroKit/}}. The following preprocessing operations were then carried out using the Ledalab toolbox: low-pass Butterworth filtering with a cutoff frequency of 5 Hz, and smoothing to eliminate any remaining artifacts. Finally, we performed an event-related analysis utilizing Continuous Decomposition Analysis (CDA) to extract the features indicating Skin Conductance Responses (SCRs). By extracting the phasic (driver) information underlying EDA, this approach attempts to obtain the signal features of the underlying sudomotor nerve activity. Skin conductance data is deconvolved by the overall response shape, considerably enhancing temporal accuracy. This method enables the extraction of continuous phasic and tonic activity based on traditional deconvolution within a predetermined time window, which for us corresponded to a window spanning from three seconds before an event marker to five seconds after it. The number of SCRs within the response window, response latency for the first SCR, mean SCR amplitudes, maximum phasic activity, and average tonic activity within the specified window were therefore collected for each event described in the previous section. \subsubsection{PPG data processing} Processing and analysis of raw PPG data were conducted using the Python-based toolkit "Heartpy" \cite{van2019analysing, van2019heartpy}, specialized for the analysis of the PPG signal as compared to ECG.
At every heartbeat, blood perfuses via the capillaries and arteries, causing the skin to become discolored. The PPG detects this discoloration. The systolic peak, diastolic notch, and diastolic peak make up the signal. First, as we did with the GSR signal, we resampled the raw PPG signal to 50Hz using Neurokit2\footnote{\url{https://neuropsychology.github.io/NeuroKit/}}. Then, we run the processing algorithm that comes with the Heartpy toolkit and which allows for the peak detection to extract reliable time-domain measures, such as beats per minute (BPM), and Interbeat Intervals (IBI). Furthermore, for each event, we extracted measures that reflect Heart Rate Variability (HRV) such as the RMSSD (root mean square of successive differences) and the SDSD (standard deviation of successive differences). \section{Results} We performed data curation for our statistical analysis using data from 7 participants (a total of 2 game sessions with recorded measures in which 3 participants played twice) with enough event samples for later analysis and processing. \subsection{Physiological results: Skin Conductance} We have processed raw GSR data with Ledalab to extract the following measures: nrSCRs (total skin conductance number "\#" of responses above threshold), Latency (delay/surpassed time "s" to elicit EDA with respect to the event), Amplitude (mean activity "mV" inside the event window), PhasicMax (max phasic value "mV" from the gap with respect the response and the event window) and Tonic (max tonic activity "mV" with respect window). Previous literature in electrodermal physiology has shown EDA can be a reliable quantifier of sympathetic dynamics \cite{PosadaQuintero2016}, meaning higher EDA correlated with higher sympathetic (stress/alert) levels. In Table \ref{tab:stats-gsr-wins-losses} we show mean statistics of nrSCR, Latency, Amplitude, PhasicMax, and Tonic values of players that win the gameplay and lose the gameplay. Similarly, in Table \ref{tab:stats-gsr-events} we show statistics for events "Killing", "Dying", "Place Ward", "Destroy Turret" and "Destroy Turret Plate". We expand these statistics in Table \ref{tab:stats-gsr-roles} filtering player roles in the game. \begin{filecontents*}{stats-gsr-wins-losses.csv} wincon,nrSCR,Latency,Amplitude,PhasicMax,Tonic WIN,1.853±1.730,-.151±1.926,.274±.631,.545±1.165,12.577±7.047 LOSS,2.599±2.210,-.758±1.698,.190±.414,.370±.545,6.881±4.743 TOTAL,2.372±2.101,-.574±1.788,.215±.490,.423±.788,8.611±6.121 \end{filecontents*} \begin{filecontents*}{stats-gsr-events.csv} EVENT,nrSCR,Latency,Amplitude,PhasicMax,Tonic,N KILL,1.225±1.170,0.0485±2.114,0.031±0.0416,0.101±0.066,7.764±3.233,9 DIE,2.423±2.102,-0.191±0.739,0.323±0.627,0.504±0.862,12.198±5.818,35 PLACE WARD,2.32±2.160,-0.295±2.145,0.225±0.448,0.152±0.138,12.064±5.410,75 DES.TURRET,2.182±2.214,-0.611±1.537,0.156±0.233,0.250±0.439,9.745±6.270,79 DES.PLATE,2.175±1.885,-0.457±1.837,0.294±0.587,0.457±0.680,3.087±1.768,49 \end{filecontents*} \begin{table}[h!] \centering \csvautotabular[respect all]{stats-gsr-wins-losses.csv} \caption{Win and Loss mean GSR statistics (2 sessions) by stacking all events in one statistic.} \label{tab:stats-gsr-wins-losses} \end{table} \begin{table}[h!] 
\centering \footnotesize \vspace{-0.25in} \centering \csvautotabular[respect all]{stats-gsr-events.csv} \caption{Event mean GSR statistics (2 sessions) from events "Killing", "Dying", "Placing Ward", "Destroying Turret" and "Destroying Turret Plate".} \label{tab:stats-gsr-events} \end{table} Given the Chi-squared measured distributions (non-parametric) we performed Friedman's tests over win-loss and event data for each GSR metric. For the case of winning and losing conditions, we found significant differences in player's activity when they won or lost the game during "Killing" in nrSCR, Amplitude, and PhasicMax activity ($p$=.046, $\chi^2$=4.000). We also observed significant differences when winning/losing the game during "Destroying Turret" in nrSCR and Amplitude ($p$=.020, $\chi^2$=5.444) as well as "Destroying Plate" for nrSCR ($p$=.008, $\chi^2$=7.143) with a trend in Amplitude ($p$=.071, $\chi^2$=3.266). The tonic activity was only significantly distinct depending on winning/losing for the events of "Dying" ($p$=.035, $\chi^2$=4.455) and "Placing Ward" ($p$=.002, $\chi^2$=10.000). \begin{filecontents*}{stats-gsr-roles-killing.csv} Role,nrSCR,Latency,Amplitude,PhasicMax,Tonic,N Jungle,1.200±2.168,.644±1.578,.039±.056,.114±.085,7.974±3.427,5 Middle,1.333±.577,-.713±3.271,.024±.017,.100±.014,5.993±.479,3 Utility,.000±.000,.000±.000,.000±.000,.007±.000,11.78±.000,1 Bottom,.000±.000,.000±.000,.000±.000,.000±.000,.000±.000,0 Top,.000±.000,.000±.000,.000±.000,.000±.000,.000±.000,0 \end{filecontents*} \begin{filecontents*}{stats-gsr-roles-dying.csv} Role,nrSCR,Latency,Amplitude,PhasicMax,Tonic,N Jungle,3.000±2.793,-.436±2.147,.522±1.037,.694±1.490,11.97±4.11,11 Middle,.800±1.789,-.276±.617,.034±.076,.163±.148,6.157±.780,5 Utility,2.333±.577,-2.060±1.45,.083±.039,.041±.014,11.79±1.45,3 Bottom,2.909±2.071,-.969±1.125,.158±.170,.279±.223,6.329±.415,11 Top,1.737±1.968,-.645±1.377,.127±.267,.292±.333,10.84±9.42,19 \end{filecontents*} \begin{filecontents*}{stats-gsr-roles-placeward.csv} Role,nrSCR,Latency,Amplitude,PhasicMax,Tonic,N Jungle,3.000±2.512,.201±2.827,.151±.189,.241±.250,9.458±4.04,14 Middle,.750±1.035,-.598±1.109,.029±.054,.064±.105,9.40±2.91,8 Utility,.250±.500,.560±1.120,.005±.010,.238±.203,9.537±1.67,4 Bottom,.000±.000,.000±.000,.000±.000,.000±.000,.000±.000,0 Top,2.778±.833,-.544±1.462,.675±1.048,1.335±2.00,21.59±2.16,9 \end{filecontents*} \begin{filecontents*}{stats-gsr-roles-desturret.csv} Role,nrSCR,Latency,Amplitude,PhasicMax,Tonic,N Jungle,3.353±2.396,-.813±1.615,.186±.198,.311±.342,14.05±3.70,17 Middle,1.800±2.168,-.592±.849,.068±.088,.278±.214,5.802±.444,5 Utility,2.458±1.911,-.419±2.134,.177±.287,.623±1.294,3.355±4.192,24 Bottom,3.077±1.935,-.592±2.155,.350±.562,.546±.718,6.248±.617,13 Top,1.750±1.915,-.488±1.661,.082±.127,.454±.494,6.731±7.723,16 \end{filecontents*} \begin{filecontents*}{stats-gsr-roles-desplate.csv} Role,nrSCR,Latency,Amplitude,PhasicMax,Tonic,N Jungle,3.647±1.967,-.673±1.915,.678±1.02,.844±1.136,11.036±2.89,17 Middle,1.400±1.174,-.794±1.194,.044±.053,.218±.270,5.962±.420,10 Utility,2.571±2.181,-1.298±1.51,.148±.185,.354±.303,5.897±3.913,21 Bottom,3.615±1.758,-.740±2.099,.152±.185,.233±.198,6.215±.366,13 Top,1.278±1.965,-.243±1.888,.195±.489,.368±.439,8.588±9.302,18 \end{filecontents*} \begin{filecontents*}{stats-gsr-roles-allevents.csv} Role,nrSCR,Latency,Amplitude,PhasicMax,Tonic,N Jungle,3.125±2.380,-.375±2.089,.355±.716,.488±.893,11.41±4.02,64 Middle,1.194±1.376,-.619±1.267,.040±.059,.167±.199,6.858±2.119,31 
Utility,2.283±1.984,-.778±1.861,.144±.229,.443±.902,5.465±4.515,53 Bottom,3.216±1.888,-.756±1.842,.223±.366,.357±.468,6.260±.471,37 Top,1.758±1.853,-.473±1.593,.215±.523,.507±.891,10.69±9.41,62 \end{filecontents*} \begin{table}[h!] \centering \footnotesize \begin{minipage}{\linewidth} \hspace*{0.5in} \rotatebox[origin=c]{90}{KILLING} \csvautotabular[respect all]{stats-gsr-roles-killing.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \hspace*{0.5in} \rotatebox[origin=c]{90}{DYING} \csvautotabular[respect all]{stats-gsr-roles-dying.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \hspace*{0.5in} \rotatebox[origin=c]{90}{PLACE WARD} \csvautotabular[respect all]{stats-gsr-roles-placeward.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \hspace*{0.5in} \rotatebox[origin=c]{90}{DES.TURRET} \csvautotabular[respect all]{stats-gsr-roles-desturret.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \hspace*{0.5in} \rotatebox[origin=c]{90}{DES.PLATE} \csvautotabular[respect all]{stats-gsr-roles-desplate.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \hspace*{0.5in} \rotatebox[origin=c]{90}{ALL EVENTS} \csvautotabular[respect all]{stats-gsr-roles-allevents.csv} \end{minipage} \caption{Role-dependent mean \textbf{nrSCR (\#)}, \textbf{Latency (sec)}, \textbf{Amplitude (mV)}, \textbf{PhasicMax (mV)} and \textbf{Tonic (mV)} statistics (2 sessions) from events "Killing", "Dying", "Placing Ward", "Destroying Turret" and "Destroying Turret Plate". N are event occurrences.} \label{tab:stats-gsr-roles} \end{table} We also analyzed in distributions of GSR activity between events for the same winners and found significant differences in Latency ($p$=.024, $\chi^2$=11.265), Amplitude ($p$=.041, $\chi^2$=9.959), Tonic activity ($p$=.010,$\chi^2$=13.28) and a trend for PhasicMax ($p$=.092, $\chi^2$=8.000). In the same analysis, we did not find differences in GSR activity between the events in the case of losing the game. \subsection{Physiological results: Heart Rate} We have processed raw PPG data with Heartpy to obtain the BPM ("\#" beats per minute), IBI (time "ms" of interbeat interval or R-R), SDNN (standard deviation of intervals "ms" between adjacent beats of the IBI of normal sinus beats), SDSD (standard deviation of successive differences between adjacent R-R intervals "ms") and RMSSD (root mean square of successive differences between adjacent R-R intervals "ms"). The latter metrics (SDSD and RMSSD) are related to the measurement of HRV (heart rate variability). Indeed, higher HRV (higher values of SDSD or RMSSD) can represent parasympathetic/vagal activity ( associated with a state of relaxation), while a lower HRV (lower values for SDSD or RMSSD) represents sympathetic/flight-or-fight activity (being stressed or alert; \cite{Valenza2018}). Here we have to point out that studies on HRV are commonly analyzed over large timeline streams of heart rate data (about 5 min or more; \cite{Shaffer2017}) for more simple and large tasks. However, our measurements of HRV are considering 5 to 10-second windows of activity according to the League of Legends fast-paced events. In Tables \ref{tab:stats-ppg-wins-losses}-\ref{tab:stats-ppg-roles} we show mean statistics of pulse metrics according to win condition, event, and role. 
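To make the event-level analysis above more concrete, the following minimal Python sketch illustrates one way of obtaining per-event HRV measures and running a Friedman comparison. It is an illustrative sketch only, not our exact analysis code: the input arrays, the event labels, the window length, and the assumption that game-event timestamps are already aligned to the physiological recording are placeholders for the reader to adapt.
\begin{verbatim}
# Illustrative sketch: per-event PPG windowing, HeartPy measures,
# and a Friedman test across event types (assumed, not the exact pipeline).
import numpy as np
import neurokit2 as nk
import heartpy as hp
from scipy.stats import friedmanchisquare

FS = 50  # target sampling rate (Hz), as used in this study

def event_windows(ppg_raw, fs_raw, events, pre=0.0, post=5.0):
    """Resample the PPG to FS and slice one window per event (default 5 s).
    `events` is a list of dicts such as {"type": "KILL", "t": 1234.5},
    with t in seconds already aligned to the start of the recording."""
    ppg = nk.signal_resample(ppg_raw, sampling_rate=fs_raw,
                             desired_sampling_rate=FS)
    windows = {}
    for ev in events:
        start = int((ev["t"] - pre) * FS)
        stop = int((ev["t"] + post) * FS)
        if 0 <= start < stop <= len(ppg):
            windows.setdefault(ev["type"], []).append(ppg[start:stop])
    return windows

def hrv_measures(window):
    """Run HeartPy on one short window and keep the measures of interest."""
    _, m = hp.process(np.asarray(window, dtype=float), sample_rate=FS)
    return {k: m[k] for k in ("bpm", "ibi", "sdnn", "sdsd", "rmssd")}

# Friedman test across (at least three) event types, one SDSD value per
# player and event type (hypothetical arrays built from hrv_measures):
# stat, p = friedmanchisquare(sdsd["KILL"], sdsd["DES.TURRET"],
#                             sdsd["PLACE WARD"])
\end{verbatim}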
\begin{filecontents*}{stats-ppg-wins-losses.csv} wincon,BPM,IBI,SDNN,SDSD,RMSSD WIN,172.145±112.795,531.156±332.714,128.673±48.392,100.174±46.513,201.980±87.488 LOSS,93.540±50.271,743.045±212.297,76.840±52.382,57.060±44.479,117.470±92.097 TOTAL,114.062±79.606,687.724±265.416,90.373±56.104,68.316±48.750,139.534±98.038 \end{filecontents*} \begin{filecontents*}{stats-ppg-events.csv} EVENT,BPM,IBI,SDNN,SDSD,RMSSD,N KILL,68.620±10.396,892.429±139.157,73.404±46.177,56.974±31.484,99.246±53.674,7 DIE,103.381±65.949,725.839±249.362,86.549±53.718,67.178±46.320,132.395±85.887,37 PLACE WARD,121.905±75.396,647.536±278.835,92.073±53.922,75.722±54.092,143.491±93.544,31 DES.TURRET,118.132±94.834,678.146±260.378,94.580±55.782,72.386±51.682,145.934±96.149,57 DES.PLATE,117.418±78.046,672.916±275.895,89.918±60.236,63.527±46.951,140.361±110.991,71 \end{filecontents*} \begin{table}[h!] \centering \hspace*{-0.25in} \csvautotabular[respect all]{stats-ppg-wins-losses.csv} \caption{Win and Loss mean PPG statistics (2 sessions) by stacking all events in one statistic.} \label{tab:stats-ppg-wins-losses} \end{table} \begin{table}[h!] \centering \footnotesize \hspace*{-0.15in} \csvautotabular[respect all]{stats-ppg-events.csv} \caption{Event mean PPG statistics (2 sessions) from events "Killing", "Dying", "Placing Ward", "Destroying Turret" and "Destroying Turret Plate".} \label{tab:stats-ppg-events} \end{table} \begin{filecontents*}{stats-ppg-roles-killing.csv} Role,BPM,IBI,SDNN,SDSD,RMSSD,N Jungle,63.195±10.832,967.750±142.479,84.5987±57.767,66.8081±40.275,113.310±67.023,4 Middle,75.852±3.301,792.000±34.176,58.4780±28.399,43.8612±9.449,80.492±31.328,3 Utility,.000±.000,.000±.000,.000±.000,.000±.000,.000±.000,0 Bottom,.000±.000,.000±.000,.000±.000,.000±.000,.000±.000,0 Top,.000±.000,.000±.000,.000±.000,.000±.000,.000±.000,0 \end{filecontents*} \begin{filecontents*}{stats-ppg-roles-dying.csv} Role,BPM,IBI,SDNN,SDSD,RMSSD,N Jungle,75.215±8.200,806.800±93.180,69.730±48.760,49.185±41.834,99.139±63.691,10 Middle,80.846±8.515,748.667±77.577,67.150±31.274,55.873±25.619,95.511±31.233,5 Utility,176.58±60.409,375.714±157.912,158.366±16.102,104.249±26.433,254.481±58.683,3 Bottom,71.112±5.453,848.111±64.272,58.362±25.412,48.458±29.071,97.162±44.958,9 Top,149.90±100.33,628.457±392.726,116.888±63.553,96.551±58.618,179.177±108.76,37 \end{filecontents*} \begin{filecontents*}{stats-ppg-roles-placeward.csv} Role,BPM,IBI,SDNN,SDSD,RMSSD,N Jungle,72.025±8.956,845.821±111.874,68.544±40.886,54.092±32.402,104.573±69.816,13 Middle,75.660±5.498,796.250±59.320,60.883±25.517,41.489±21.775,80.912±39.805,4 Utility,151.11±70.545,513.417±302.925,111.647±53.012,96.855±71.343,175.363±99.071,8 Bottom,.000±.000,.000±.000,.000±.000,.000±.000,.000±.000,0 Top,221.87±73.442,297.601±101.160,137.749±61.433,117.229±51.861,227.036±95.086,6 \end{filecontents*} \begin{filecontents*}{stats-ppg-roles-desturret.csv} Role,BPM,IBI,SDNN,SDSD,RMSSD,N Jungle,81.985±10.532,742.917±92.725,66.538±35.728,46.652±31.067,85.828±44.360,15 Middle,96.998±49.985,696.306±174.643,83.246±45.988,53.699±26.111,128.027±86.284,12 Utility,176.16±91.996,455.294±261.896,144.335±49.317,121.310±43.578,241.965±84.882,14 Bottom,86.635±19.452,721.889±157.714,127.253±59.682,110.750±68.850,184.948±96.020,6 Top,135.37±176.90,844.944±376.067,60.987±44.726,41.902±40.029,99.731±75.180,10 \end{filecontents*} \begin{filecontents*}{stats-ppg-roles-desplate.csv} Role,BPM,IBI,SDNN,SDSD,RMSSD,N Jungle,76.362±8.247,795.708±100.562,60.503±46.094,46.599±35.956,87.578±64.595,16 
Middle,76.874±6.404,785.385±63.742,61.381±29.526,47.089±34.529,87.474±37.099,13 Utility,202.95±88.446,355.705±182.813,129.698±42.153,111.779±44.273,205.033±81.351,16 Bottom,96.659±35.939,691.758±214.864,113.291±80.127,51.086±25.804,193.149±167.07,11 Top,120.34±94.250,769.005±378.607,86.455±68.067,53.487±50.697,134.803±127.02,15 \end{filecontents*} \begin{filecontents*}{stats-ppg-roles-allevents.csv} Role,BPM,IBI,SDNN,SDSD,RMSSD,N Jungle,75.739±10.170,807.065±114.546,67.119±42.557,50.132±34.495,94.703±59.744,58 Middle,83.723±29.622,753.243±116.137,68.963±35.358,49.553±27.323,100.437±58.497,37 Utility,181.76±84.367,421.948±236.871,133.272±46.371,111.570±48.563,215.473±86.246,41 Bottom,85.503±27.045,752.833±172.923,97.499±66.037,63.945±46.580,158.030±125.32,26 Top,146.07±119.30,684.261±387.718,95.172±64.519,70.493±56.388,150.569±111.94,41 \end{filecontents*} \begin{table}[h!] \centering \footnotesize \begin{minipage}{\linewidth} \rotatebox[origin=c]{90}{KILLING} \csvautotabular[respect all]{stats-ppg-roles-killing.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \rotatebox[origin=c]{90}{DYING} \csvautotabular[respect all]{stats-ppg-roles-dying.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \rotatebox[origin=c]{90}{PLACE WARD} \csvautotabular[respect all]{stats-ppg-roles-placeward.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \rotatebox[origin=c]{90}{DES.TURRET} \csvautotabular[respect all]{stats-ppg-roles-desturret.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \rotatebox[origin=c]{90}{DES.PLATE} \csvautotabular[respect all]{stats-ppg-roles-desplate.csv} \end{minipage}\\ \begin{minipage}{\linewidth} \rotatebox[origin=c]{90}{ALL EVENTS} \csvautotabular[respect all]{stats-ppg-roles-allevents.csv} \end{minipage} \caption{Role-dependent mean \textbf{BPM (\#)}, \textbf{IBI (ms)}, \textbf{SDNN (ms)}, \textbf{SDSD (ms)} and \textbf{RMSSD (ms)} statistics (2 sessions) from events "Killing", "Dying", "Placing Ward", "Destroying Turret" and "Destroying Turret Plate". N are event occurrences.} \label{tab:stats-ppg-roles} \end{table} After performing Friedman tests over PPG metrics between all events and we found significant differences in SDSD ($p=$.016, $\chi$=10.371) when winning the game, while when losing the game we found differences in BPM and IBI ($p=$.041, $\chi$=8.28). We also tested if there were differences when winning or losing the game for each specific event and we saw significant differences for "Destroying Turret Plate" between win and loss for SDSD ($p=$5.32$\times10^{-4}$, $\chi$=12.0) and RMSSD ($p=$.004, $\chi$=8.333), and "Dying" (SDSD; $p=$.011, $\chi$=6.4)(RMSSD; $p=$.002, $\chi$=10) as well as for SDSD when "Placing a Ward" ($p=$.011, $\chi$=6.4). \iffalse \subsection{Results on Keyboard and Mouse usage} -Diferencia de clicks (Button.left y Button.right) y botones de teclado (Q,W,E,R,D,F,1,2,3,4,5,6,7) -> pre y post 60 segundos en eventos típicos (KILL, KILL/ASSIST y DEATH) de celdas -Diferencias de clicks (Button.left y Button.right) y botones de teclado (Q,W,E,R,D,F,1,2,3,4,5,6,7) -> entre roles top, mid, jungle, estos están especificados por cada persona con el identificador de partida \fi \section{Conclusions} This study shows the potential of using physiological measurements (EDA and ECG) over ESports/desktop gameplay environment. In this study, we characterized physiological responses depending on performance, events as well as participants' roles in the game. 
In most cases, we found significant differences in EDA (for nrSCR, Amplitude, and PhasicMax activity) during "Killing", "Destroying Turret" or "Destroying Turret Plate" between players that were winning the game and players that were losing the game. When players were winning the game, they showed distinct patterns of physiological activity depending on the events in the game (e.g., "Killing", "Destroying Turret", "Destroying Plate"). In contrast, we did not find any significant difference between these events for players that were losing the game. This hints at the possibility that players that perform badly show similar physiological states across the game, while players that perform well have distinct physiological behavior during the game. For the case of ECG, SDSD differed significantly between events for players that were winning the game. On the other hand, we found that only the IBI and BPM measures showed significant differences for players that were losing the game. Overall, players that performed better (winning) showed significantly higher parasympathetic activity (i.e., relaxation) than the ones that were losing. These results suggest that higher relaxation states could significantly improve in-game performance, while players that are losing the game tend to be more alert. Furthermore, the analysis for specific events, like "Dying", "Destroying Turret Plate" or "Placing Ward", has shown that players have distinct values of SDSD and RMSSD, with "Killing" or "Dying" events inducing higher sympathetic activity (lower HRV). Despite the limited number of participants and game sessions, we obtained enough measurements to pinpoint differences related to in-game performance and events. With a higher number of participants and game sessions, we would suggest conducting similar studies, not only analyzing physiology with respect to game performance and events, but also performing an in-depth analysis of game roles, champions, player level, and type of match (beyond League of Legends' Summoner's Rift) in relation to EDA and ECG measurements. \subsection*{Author Contributions} David Berga contributed to the development, data analysis, experimentation, writing and reviewing of the paper. Alexandre Pereda contributed to the experimental design with subjects, writing and reviewing the paper. Eleonora de Filippi contributed to the data analysis and statistics as well as reviewing of the paper. Arijit Nandi contributed to the development of the Shimmer tools. Eulàlia Febrer, Marta Reverté and Lautaro Russo contributed to the session management and contact with partners for the experimental setting. \subsection*{Funding} This study has been possible through the Grant IRC 2020 (Reference ACE033/21 /000046) funded by ACCIÓ (Catalan Agency for Business Competitiveness), from the project "ESports-LAB" led by INDESCAT (Associació Catalana Clúster de la Indústria de l'Esport), in partnership with Generalitat de Catalunya, EsportCat and Fundació Catalana per l'Esport. \subsection*{Conflicts of Interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Experiment participants signed an authorization form prior to the study to authorize the usage of the captured physiological data as well as performance data from their Riot gamertag, and remained anonymous according to the national LOPD (Ley Orgánica de Protección de Datos de Carácter Personal). \printbibliography \end{document}
{ "arxiv_id": "2302.14303", "language": "en", "timestamp": "2023-03-01T02:08:59", "url": "https://arxiv.org/abs/2302.14303", "yymm": "2302" }
\section{Introduction} The AdS/CFT correspondence establishes an equivalence between $(d+1)$-dimensional gravity theory in asymptotically AdS spacetime and $d$-dimensional conformal field theory on the boundary\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, and has motivated much research related to quantum information theory in high-energy physics in recent years. One of the most important aspects of the AdS/CFT correspondence is the geometrization of quantum entanglement, which provides a geometric prescription to compute entanglement entropy in holographic field theory. This geometric prescription was first given by Ryu and Takayanagi (RT) in \cite{Ryu:2006bv,Ryu:2006ef} for the static Lorentzian asymptotically AdS spacetime, and was subsequently generalized to generic time-dependent situations by Hubeny, Rangamani, and Takayanagi (HRT) in \cite{Hubeny:2007xt}. The RT/HRT proposals tell us that the entanglement entropy of the boundary CFT is dual to the area of the bulk codimension-2 extremal surface which is homologous to the boundary entangling surfaces. In a static asymptotically AdS spacetime, the extremal surface sits on the canonical time slice and coincides with the minimal area surface, while the Wick rotation into the Euclidean AdS/CFT setup is straightforward. However, such a Wick rotation becomes ambiguous for time-dependent asymptotically AdS spacetimes, raising a question about the physical meaning of the area of the minimal surfaces in the time-dependent Euclidean asymptotically AdS space. Motivated by this, \cite{PhysRevD.103.026005} introduced a novel quantity, \emph{viz}., the pseudo entropy, as a generalization of the entanglement entropy which is dual to the area of the minimal surfaces in the time-dependent Euclidean asymptotically AdS space. However, the holographic pseudo entropy is always real-valued, provided that the bulk metric is real-valued when formulated in Euclidean AdS space, although the generic transition matrix, from which the pseudo entropy is constructed, is non-Hermitian by definition. Recently, the pseudo entropy in the dS/CFT duality and the time-like entanglement entropy have also been studied in \cite{Doi:2022iyj,Doi:2023zaf,Li:2022tsv,Narayan:2022afv}; both are complex-valued and are argued to be correctly understood as pseudo entropy \cite{Doi:2022iyj}. For more developments of pseudo entropy, refer to \cite{PhysRevLett.126.081601, PhysRevResearch.3.033254, Camilo:2021dtt, PhysRevD.104.L121902, Miyaji:2021lcq, PhysRevD.105.126026, Mori:2022xec, Akal:2022qei, Berkooz:2022fso, Mukherjee:2022jac, Guo:2022sfl, Miyaji:2022dna, Akal:2021dqt, Bhattacharya:2022wlp, Guo:2022jzs, He:2023eap} for example. In spite of the original motivation, it is reasonable to suspect that the pseudo entropy, as a generalized entanglement measure, is not inherently a CFT quantity whose dual lives only in Euclidean asymptotically AdS space. One may also expect its complex \emph{nature} to manifest itself via the AdS/CFT duality in a somewhat intuitive manner, as one immediately deduces from the non-Hermiticity of the transition matrix. Such an expectation comes from a roughly two-fold reason: for computational purposes, one may want to figure out an appropriate prescription to incorporate the generically \emph{complex} holographic pseudo entropy as a new measure of entanglement for future use; while such a prescription may in turn help one better understand its \emph{complex} nature in the AdS/CFT context.
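As a simple illustration of this complex nature, independent of holography, one may recall the standard two-qubit toy computation (included here only for orientation): for real angles $\theta,\varphi$, take $\vert\Psi_1\rangle=\cos\theta\,\vert 00\rangle+\sin\theta\,\vert 11\rangle$ and $\vert\Psi_2\rangle=\cos\varphi\,\vert 00\rangle+\sin\varphi\,\vert 11\rangle$, so that $\langle\Psi_2\vert\Psi_1\rangle=\cos(\theta-\varphi)$. Tracing the normalized transition matrix $\vert\Psi_1\rangle\langle\Psi_2\vert/\langle\Psi_2\vert\Psi_1\rangle$ over the second qubit gives a diagonal operator with eigenvalues
\[
\lambda_1=\frac{\cos\theta\cos\varphi}{\cos(\theta-\varphi)}\,,\qquad \lambda_2=\frac{\sin\theta\sin\varphi}{\cos(\theta-\varphi)}\,,\qquad \lambda_1+\lambda_2=1\,,
\]
and whenever $\sin\theta\sin\varphi<0$ (with $\cos(\theta-\varphi)>0$) one eigenvalue is negative, so that $-\sum_a\lambda_a\log\lambda_a$ acquires an imaginary part.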
Motivated by these, instead of working with the Euclidean AdS as initiated in the previous literature, we provide in this note a Lorentzian or real-time path integral prescription for the computation of the holographic pseudo entropy, which shows that the holographic pseudo entropy generally takes complex values. Using this real-time prescription, we argue that the holographic pseudo entropy may be dual to the extremal codimension-2 area surface in the Lorentzian asymptotically AdS spacetime, receiving imaginary contributions from the extrinsic curvature terms in addition to the area of the extremal surface. In this sense, we also argue that the holographic pseudo entropy can be considered as a generalization of the covariant holographic entanglement entropy. In Section 2, we provide the basic real-time path integral presentation for pseudo entropy in the QFT realm. In Section 3, we recast the previous real-time prescription into the Schwinger-Keldysh formalism and compare with the covariant holographic entanglement entropy. In Section 4, the complex nature of the holographic pseudo entropy and its dual in the bulk are discussed. \section{Real-time prescription} The pseudo entropy is a generalization of the von Neumann entanglement entropy with the density matrices replaced by the transition matrices\cite{PhysRevD.103.026005}, \begin{equation}\label{transition matrix} \tau^{i\vert j}=\frac{\vert \Psi_i\rangle\langle \Psi_j\vert}{\langle\Psi_j\vert\Psi_i\rangle} \end{equation} where $i,j=1,2$ (we let $i\neq j$), and $\vert\Psi_1\rangle$ and $\vert\Psi_2\rangle$ are two pure quantum states which are non-orthogonal: $\langle \Psi_1\vert \Psi_2\rangle\neq 0$. Taking the limit $\vert \Psi_2\rangle =\vert \Psi_1\rangle$, the transition matrix reduces to the ordinary density matrix. Dividing the whole system into $\mathcal{A}$ and its complement $\mathcal{A}_c$, the pseudo entropy of $\mathcal{A}$ is defined as \begin{equation} S(\tau^{i\vert j}_{\mathcal{A}})=-\textrm{tr}\left[\tau^{i\vert j}_{\mathcal{A}}\log\tau^{i\vert j}_\mathcal{A}\right] \end{equation} where $\tau^{i\vert j}_\mathcal{A}=\textrm{tr}_{\mathcal{A}_c}[\tau^{i\vert j}]$ is the reduced transition matrix. Note that the transition matrix is non-Hermitian by definition, so that the pseudo entropy generally develops complex values. For future purposes, we also use a re-expression of the transition matrix \cite{Guo:2022jzs}, \emph{viz.}, \begin{equation}\label{sym tra mat} \tau^{i\vert j}=\frac{\rho_i\rho_j}{\textrm{tr}[\rho_i \rho_j]} \end{equation} with $\rho_i=\vert\Psi_i\rangle\langle \Psi_i\vert$ the ordinary density matrices. In this section, we restrict ourselves to the real-time path integral representation of the reduced transition matrices $\tau^{i\vert j}_{\mathcal{A}}$ and pseudo entropies $S(\tau^{i\vert j}_\mathcal{A})$, in the QFT realm. Below, we choose the $t=0$ Cauchy slice $\Sigma_0\equiv\mathcal{A}\cup\mathcal{A}_c$ to compute these quantities. Let us proceed with the transition amplitude $\langle \Psi_j\vert \Psi_i\rangle$, since a generic reduced transition matrix can be simply thought of as cutting open the path integral of this amplitude along some specified regions on a Cauchy slice, up to the conventional normalization. We place the initial state $\vert \Psi_1\rangle$ in the past, with the final state $\vert \Psi_2\rangle$ in the future.
Doing QFT, we recast the amplitude into a path integral, \emph{i.e.} we write down \begin{equation}\label{2.1} \langle \Psi_j \vert \Psi_i\rangle=\int \mathcal{D}\phi_0\langle \Psi_j\vert \phi_0\rangle\langle \phi_0\vert \Psi_i\rangle, \end{equation} where we use the shorthand $\phi(t=0,\vec x) \equiv \phi_0$, with the spatial coordinates $\vec x$ suppressed for clarity. In \eqref{2.1} one effectively cuts open the path integral along $\Sigma_{0}$ by inserting corresponding instantaneous states of interest that characterize the system, and then glues it up again along $\Sigma_{0}$. On the other hand, viewing \eqref{2.1} as the total trace of $\vert \Psi_i\rangle\langle \Psi_j\vert$, the reduced transition matrices $\tau^{i\vert j}_\mathcal{A}$ can thus be similarly obtained by a partial gluing, leaving the cuts open along the subregion $\mathcal{A}$ on the Cauchy slice $\Sigma_0$. The path integral \eqref{2.1} can be formulated in the Euclidean setup, which however leads to the real-valued holographic pseudo entropy. The idea in this note is to deform the path-integral time-contour from the Euclidean one to a piece-wise extension with a mixture of Euclidean and Lorentzian path integrals\cite{Skenderis:2008dh,vanRees:2009rw}. In such a piece-wise prescription and for our purpose, the Euclidean segments serve to produce the initial and final wavefunctions (or conditions) of interest, whereas the non-trivial evolution is encoded into the Lorentzian ones. We do not presume the analytical continuation from the Euclidean to the Lorentzian, leaving the examination of such a continuation as a future problem. To this end, we decompose the wavefunctionals as below, \begin{equation}\label{2.2} \begin{split} \langle \phi_0\vert \Psi_1\rangle&=\int\mathcal{D}\phi_{-} \langle \phi_0\vert\phi_-\rangle\langle\phi_-\vert \Psi_1\rangle,\\ \langle \Psi_2\vert \phi_0\rangle&=\int\mathcal{D}\phi_{+} \langle \Psi_2\vert\phi_+\rangle\langle\phi_+\vert \phi_0\rangle, \end{split} \end{equation} \begin{figure}[ht] \centering \includegraphics[width=0.55\linewidth]{comptim} \caption{\footnotesize Complex-time contour for $\langle \Psi_2\vert \Psi_1\rangle$ with the direction of time flow shown explicitly and spatial coordinates expressed. For $\langle \Psi_1\vert \Psi_2\rangle$, one reverses the flow of time in the Lorentzian segment and translates the Euclidean segments vertically. For our purpose, the Hamiltonians in the two Euclidean segments are set to be time-independent and different from each other, so that the initial and final states thus produced differ, which finally ends up with a non-trivial time-dependence in the Lorentzian segment.}\label{comptim} \end{figure} where the subscripts `$-$, $0$, $+$' indicate that the integrated field configurations live on the Cauchy slices $\Sigma_{-T}$, $\Sigma_{0}$ and $\Sigma_{+T}$, respectively. Hereafter, we set $T>0$ to fix the time direction. The strategy now is to arrange $\langle \phi_0\vert\phi_\pm\rangle$ along the real-time axes, and think of $\langle\phi_\mp\vert\Psi\rangle$ as the static Euclidean path integrals which prepare the initial/final state on $\Sigma_{\mp T}$. This piece-wise arrangement leads to a complex-time contour as depicted in figure \ref{comptim}. Note that the path integral contour of $\langle\Psi_1\vert\Psi_2\rangle$ differs from that of $\langle\Psi_2\vert\Psi_1\rangle$ by a time reversal, since we fix $\vert\Psi_1\rangle$ as the initial state and $\vert\Psi_2\rangle$ as the final state in the real-time evolution sense.
This time-reversal feature can be made exact, noting that the real-time evolution is prescribed to be unitary. We refer to the spacetimes related to $\langle\Psi_2\vert\Psi_1\rangle$ and $\langle\Psi_1\vert\Psi_2\rangle$ as the forward spacetime $\mathcal{B}^{1\vert2}$ and the backward spacetime $\mathcal{B}^{2\vert1}$, respectively. While the ``evolution'' from $\vert \Psi_1\rangle$ to $\vert\Psi_2\rangle$ is not necessarily unitary by definition, we can consider only the unitary evolution w.l.o.g., noting that the non-unitarity in the overlap $\langle\Psi_i\vert\Psi_j\rangle$ can be encoded into the Euclidean segments as part of the state preparation process, leaving the real-time evolution unitary. Alternatively, one can consider a certain unitary evolution whose \emph{true} initial and final states are $\vert \Psi_1\rangle$ and $\vert \Psi_2\rangle$. In particular, if $\vert \Psi_1\rangle$ is chosen as a pure ground state, one can set the corresponding Euclidean segment to go from time $-T+i \beta$ to $-T$ along the imaginary-time axis with a large $\beta$, while the final pure ground state wavefunctional of $\vert \Psi_2\rangle$ can be similarly constructed but with a distinct Hamiltonian. Moreover, one can also prepare excited states from the ground state by inserting various operators into the Euclidean segments as in \cite{PhysRevD.103.026005}, which would be understood in our prescription as deforming the initial or final condition for the real-time solutions. For related considerations on the state preparation, see for example Section 4 of \cite{Colin-Ellerin:2020mva}. \begin{figure}[ht] \centering \subfigure[]{\includegraphics[width=0.34\linewidth]{tau12}} \quad\quad\quad\quad \subfigure[]{\includegraphics[width=0.34\linewidth]{tau21}} \caption{\footnotesize The path integral representation of (a) $\tau^{1\vert 2}_{\mathcal{A}}$ and (b) $\tau^{2\vert1}_{\mathcal{A}}$, related to the forward spacetime $\mathcal{B}^{1\vert 2}$ and backward spacetime $\mathcal{B}^{2\vert1}$, respectively, with the Euclidean segments omitted. There is a cut along the subregion $\mathcal{A}$ on the Cauchy slice $\Sigma_0=\mathcal{A}\cup\mathcal{A}_c$, with the entangling surface $\partial \mathcal{A}$ explicitly shown. The cuts in the different spacetimes are shown in different colors, blue and red. The arrows colored in orange indicate the direction of the time flow.} \label{tau12} \end{figure} The path integral of $\tau_{\mathcal{A}}^{i\vert j}$ can be obtained by cutting open the path integral of $\langle\Psi_j \vert \Psi_i \rangle$ along the subregion $\mathcal{A}$ on $\Sigma_0$, while the two cases differ from each other by a time reversal, as depicted in figure \ref{tau12}. The time reversal reflects the complex nature of the pseudo entropy, and it is instructive to consider the $i\epsilon$ prescription inherent in the real-time context. For instance, if we begin by preparing the initial/final pure ground state in the distant past/future via an explicit $i\epsilon$ rotation of the time axes into the complex plane, as in ordinary real-time QFT, instead of slicing open the Euclidean path integrals, we immediately see that the forward and backward time-contours are equipped with ``conjugated rotations'' in the complex plane.
Requiring this $i\epsilon$ prescription in the boundary field theory to extend to the bulk geometry via AdS/CFT, we are thus led to double the bulk manifolds in the complex plane and to pick up different branches when computing $S(\tau_{\mathcal{A}}^{1\vert 2})$ and $S(\tau_{\mathcal{A}}^{2\vert 1})$. Indeed, such subtleties of the $i\epsilon$ prescription are routinely encountered in QFT when dealing with various real-time correlation functions,\footnote{Most relevant to our discussion are perhaps the time-ordered and anti-time-ordered correlation functions.} and of course should be preserved when calculating such correlation functions in the real-time AdS/CFT context. We will return to the discussion of this time-reversal feature in the next section via the Schwinger-Keldysh formalism. Having the real-time QFT path integral of the reduced transition matrices $\tau^{i\vert j}_{\mathcal{A}}$, we use the replica trick to compute the traces of their $n$th powers, $\textrm{tr}_{\mathcal{A}}(\tau^{i\vert j}_{\mathcal{A}})^{n}$. By cyclically gluing $n$ copies of $\mathcal{B}^{i\vert j}$ across $\mathcal{A}$ with the associated boundary conditions in a replica $\mathbb{Z}_n$ symmetric manner, we obtain the $n$-fold branched cover spacetime $\mathcal{B}^{i\vert j}_n$ (branched at $\partial\mathcal{A}$) of the original manifold $\mathcal{B}^{i\vert j}$. As usual, $\partial\mathcal{A}$ is the fixed point set of the replica $\mathbb{Z}_n$ action on $\mathcal{B}^{i\vert j}_n$. It follows that $\textrm{tr}_{\mathcal{A}}(\tau^{i\vert j}_{\mathcal{A}})^{n}$ are given by the generating functionals $Z[\mathcal{B}^{i\vert j}_n]/Z[\mathcal{B}^{i\vert j}]^n$, where $Z[\mathcal{B}^{i\vert j}]\equiv\langle\Psi_j\vert\Psi_i\rangle$ is the normalization. Then the $n$th pseudo R\'enyi entropies are obtained from the standard formula\cite{PhysRevD.103.026005}, viz., \begin{equation}\label{renyi1} S^{(n)}(\tau^{i\vert j}_{\mathcal{A}})=\frac{1}{1-n}\log \textrm{tr}_{\mathcal{A}}\left(\tau^{i\vert j}_{\mathcal{A}}\right)^{n}=\frac{1}{n-1}\left(I^{i\vert j}_n-n I^{i\vert j}_1\right), \end{equation} where $I_n^{i\vert j}=-\log Z[\mathcal{B}^{i\vert j}_n]$, while $\mathcal{B}^{i\vert j}_1=\mathcal{B}^{i\vert j}$. The $n\rightarrow 1$ limit of \eqref{renyi1} after the analytical continuation to non-integer $n$ gives the pseudo entropies $S(\tau^{i\vert j}_{\mathcal{A}})=-\textrm{tr}_{\mathcal{A}}[\tau^{i\vert j}_{\mathcal{A}}\log\tau^{i\vert j}_{\mathcal{A}}]$. \section{The Schwinger-Keldysh formalism} In this section, we introduce an equivalent \emph{time-folded} prescription for the pseudo entropy, which gives rise to the more familiar Schwinger-Keldysh path integral; the latter has been adopted for the derivation of the HRT proposal in \cite{Dong:2016hjy}, see \cite{Rangamani:2016dms} for a review. To begin with, we recall the symmetric re-expression of the transition matrix, \emph{i.e.} \eqref{sym tra mat}.
By taking the partial trace over $\mathcal{A}_c$, we obtain the reduced transition matrices of $\mathcal{A}$, viz., \begin{equation}\label{recall} \tau_{\mathcal A}^{i\vert j}=\textrm{tr}_{\mathcal{A}_c}\left[\frac{\rho_i\rho_j}{\textrm{tr}[\rho_i \rho_j]}\right] \end{equation} The strategy for computing \eqref{recall} here is to formulate the matrix elements of the density matrices, $\langle\phi_0\vert\rho_1\vert \phi^\prime_0\rangle$ and $\langle\phi_0\vert\rho_2\vert \phi^\prime_0\rangle$, separately, with the aforementioned piece-wise prescription, and then sew them together by performing the matrix multiplication. \begin{figure}[ht] \centering \subfigure[]{\includegraphics[width=0.45\linewidth]{Stau12}} \quad \subfigure[]{\includegraphics[width=0.45\linewidth]{Stau21}} \caption{\footnotesize The time-folded contour of (a) $\tau^{1\vert 2}_\mathcal{A}$ and (b) $\tau^{2\vert1}_{\mathcal{A}}$, which is of the Schwinger-Keldysh type. The path integrals have been glued up on ${\Sigma}_T$ as discussed in the main text. The cut along the subregion $\mathcal{A}$ related to $\tau^{1\vert 2}_\mathcal{A}$ or $\tau^{2\vert 1}_\mathcal{A}$ is now arranged into a single branch of the Schwinger-Keldysh contour.}\label{Stau} \end{figure} Choosing the conventional Cauchy slice $\Sigma_{0}$ and fixing the initial and final conditions as before, the path integrals for $\tau_{\mathcal A}^{i\vert j}$ are straightforwardly obtained, as depicted in figure \ref{Stau}. We then immediately recognize the time-folded geometry on which $\tau_{\mathcal A}^{1\vert 2}$ corresponds to the forward evolution and $\tau_{\mathcal A}^{2\vert 1}$ to the backward one, with a certain cut along $\mathcal{A}$. Note that the forward and backward Lorentzian segments have been glued on the Cauchy slice $\Sigma_{T}$. To see this, recall that in our piece-wise prescription the two future Euclidean segments prepare identical final conditions on $\Sigma_{T}$, and each is glued to its Lorentzian segment smoothly, in the sense that the induced values of the fields and their conjugate momenta are continuous across the gluing surfaces\cite{Skenderis:2008dh,vanRees:2009rw}. In other words, the forward and backward segments have identical boundary conditions on $\Sigma_{T}$, while every single field configuration crosses continuously into or out of each Euclidean segment. By virtue of this continuity, provided that the final condition on $\Sigma_{T}$ is always specified, we can glue up the forward and backward segments across $\Sigma_{T}$ in the sense of a time reversal. This formally brings us to the Schwinger-Keldysh contour. Note that the computation of $\tau_{\mathcal A}^{1\vert 2}$ or $\tau_{\mathcal A}^{2\vert 1}$ involves the forward $\mathcal{B}^{1\vert2}$ or backward $\mathcal{B}^{2\vert1}$ only, with specified initial and final conditions. Thus, in practical situations, one can think of each single branch of the time-folded path integral as merely imposing the initial or final condition on its companion. This turns out to be in accord with the aforementioned piece-wise construction, modulo some redundancies from the path-integral time-contour deformation. For convenience, we denote the Schwinger-Keldysh geometry introduced above as $\mathcal{B}$; $\mathcal{B}$ reduces to $\mathcal{B}^{i\vert j}$ when there is a cut along $\mathcal{A}$ in the forward or backward segment. That is, we only want to compute the pseudo entropies using one branch of the Schwinger-Keldysh contour, as in figure \ref{tau12}.
It remains to discuss how this time-folded prescription reflects the complex nature of the pseudo entropies. The observation that the path integral of the transition matrices can be recast into the Schwinger-Keldysh form implicitly relies on the prerequisite that the initial/final state can be thought of as a certain unitary evolution of the instantaneous states on the Cauchy slice $\Sigma_{0}$, \emph{i.e.} $\vert \Psi_2\rangle\sim\mathcal{U}_2\vert \phi_0\rangle$ and $\vert \Psi_1\rangle\sim\mathcal{U}_1^\dagger\vert\phi_0\rangle$, such that the forward and backward segments can be referred to as the conjugate unitary evolutions, namely $\mathcal{U}_2\mathcal{U}_1$ and $\mathcal{U}^\dagger_1\mathcal{U}^\dagger_2$, respectively.\footnote{An appropriate $i\epsilon$ prescription should conform to this conjugate unitary evolution to preserve the causality structure of various real-time correlation functions. This is implicit in the Schwinger-Keldysh contour.} As a result, the time-reversal feature of our real-time prescription can be treated exactly on the same footing as the Schwinger-Keldysh one, which is conventionally defined from $\rho_t=\mathcal{U}_{t,t_i}\rho_{t_i}\mathcal{U}^\dagger_{t,t_i}$. As is known in the Schwinger-Keldysh context, the Schwinger-Keldysh generating functional, $Z\sim e^{i(S^{for}-S^{bac})}$, possesses the CPT-conjugation symmetry, while the forward and backward branches exchange under the CPT conjugation\cite{Haehl:2016pec}. \begin{figure}[ht] \centering \includegraphics[width=.65\linewidth]{Srho} \caption{\footnotesize Schwinger-Keldysh construction for $\rho_{\mathcal{A}}(t)$ in the case of the covariant holographic entanglement entropy. $\rho(t)$ is the generic time-dependent reduced density matrix. The cut along the subregion $\mathcal{A}$ on the Cauchy slice $\Sigma_0$ of interest lies at the boundary between the forward and backward branch of the Schwinger-Keldysh contour.}\label{Srho} \end{figure} It is now crucial to emphasize that the path integral of the covariant holographic entanglement entropy (CHEE) enjoys such a CPT-conjugation symmetry as well, which finally renders the R\'enyi and von Neumann entanglement entropies real-valued. To see this, we recall that CHEE is also constructed from a generic time-dependent density matrix, \emph{i.e.}, $\vert \Psi(t)\rangle\langle\Psi(t)\vert=\mathcal{U}_{t,t_i}\vert\Psi(t_i)\rangle\langle\Psi(t_i)\vert\mathcal{U}^\dagger_{t,t_i}$, with $t_i$ being the time (slice) at which the initial state $\vert\Psi(t_i)\rangle$ is prepared, and $t$ the time at which the entropy is computed. We hereafter set $t_i=-T$ and $t=0$ as the convention. Generally speaking, CHEE involves only the causal past of the Cauchy slice $\Sigma_{0}$ of interest, in the fashion of a forward-backward or ket-bra evolution, in which one evolves the initial condition on $\Sigma_{-T}$ up to $\Sigma_{0}$ and then retraces the evolution back to the initial state, instead of evolving forward from $\Sigma_{0}$ (to the future Cauchy slice $\Sigma_{T}$). Then the cut along the subregion $\mathcal{A}$ in the path integral that computes the reduced density matrix $\rho_{\mathcal{A}}$ occurs on the Cauchy slice $\Sigma_{0}$ where the forward ($-T\rightarrow 0$) and backward ($0\rightarrow -T$) branches join, as shown in figure \ref{Srho}. With this forward-backward construction, the generating functional for CHEE thus takes the form $e^{i (S^{for}-S^{bac})}$, such that only the imaginary part of the (gravitational) action contributes.
For detailed discussions in such contexts, refer to \cite{Colin-Ellerin:2020mva,Colin-Ellerin:2021jev}, and see also Appendix A of \cite{Dong:2016hjy}. However, in our real-time prescription for the pseudo entropy, the path integral of $\tau^{1\vert 2}_{\mathcal{A}}$ or $\tau^{2\vert 1}_{\mathcal{A}}$ traces the entire history of the evolution, forward or backward, and takes the form $e^{i S^{for}}$ or $e^{-i S^{bac}}$, which ensures that the pseudo entropy generally takes complex values. Indeed, this central feature of the pseudo entropy stems from the fact that the information of both the initial and final states is involved, which is inherent in the definition of the transition matrix. \section{Complex-valued holographic pseudo entropy}\label{sec_4} Having the real-time QFT path integral of the pseudo entropy at hand, we turn to its holographic implications; we only briefly discuss how the complex-valued holographic pseudo entropy (HPE) arises from the real-time AdS/CFT duality, leaving rigorous considerations as a future problem. The methodology used below is standard; refer, for example, to \cite{Rangamani:2016dms,Nishioka:2018khk} for reviews and the references therein. Going to the AdS/CFT context, we assume a holographic field theory with a semi-classical Einstein-Hilbert gravitational dual in Lorentzian AdS spacetime, and adopt the Schwinger-Keldysh time-folded geometry $\mathcal{B}$ as the boundary geometry that provides the asymptotic boundary conditions for the bulk gravity dual $\mathcal{M}$. We refer to the bulk Cauchy slices anchored on the boundary Cauchy slice $\Sigma_t$ as $\widetilde{\Sigma}_t$, with $\partial \widetilde{\Sigma}_t=\Sigma_t$. In our real-time prescription, the pseudo entropies $S(\tau^{i\vert j}_{\mathcal{A}})$ are computed separately by singling out a spacetime branch $\mathcal{B}^{i\vert j}$ with a cut along $\mathcal{A}$ and constructing the replica manifold $\mathcal{B}^{i\vert j}_{n}$, as discussed in Section 2. It would thus be dual to the gravitational problem with both the initial and final conditions specified. We refer to the bulk dual to $\mathcal{B}^{i\vert j}_n$ as the covering space geometry $\mathcal{M}^{i\vert j}_n$, with $\partial \mathcal{M}^{i\vert j}_n=\mathcal{B}^{i\vert j}_n$. As usual, we assume the boundary replica $\mathbb{Z}_n$ symmetry extends to the bulk $\mathcal{M}^{i\vert j}_n$, with $\partial \mathcal{A}$ extending naturally to a bulk codimension-2 spacelike surface $\mathbf{e}_n$, the bulk fixed point set of the $\mathbb{Z}_n$ action. For our purpose, we take a replica $\mathbb{Z}_n$ quotient and restrict ourselves to the orbifold spacetime $\hat{\mathcal{M}}^{i\vert j}_n=\mathcal{M}^{i\vert j}_n/\mathbb{Z}_n$, which has the same boundary condition $\mathcal{B}^{i\vert j}$ as the original manifold $\mathcal{M}^{i\vert j}$, but with a conical singularity at the fixed locus $\mathbf{e}_n$. In this way, $\hat{\mathcal{M}}^{1\vert 2}$ and $\hat{\mathcal{M}}^{2\vert 1}$ can be thought of as the time-folded $\mathcal{M}$ with a certain conical singularity localized, respectively, in its forward and backward branch. This is perhaps the most straightforward way to exploit the CPT-conjugation symmetry of the Schwinger-Keldysh geometry $\mathcal{B}$, which shall be assumed to extend into the bulk dual $\mathcal{M}$, to reveal the complex nature of HPE.
Via the AdS/CFT duality, the pseudo entropies evaluated in the bulk are given as \begin{equation}\label{bulkrenyi} S(\tau^{i\vert j}_{\mathcal{A}})=\lim_{n\rightarrow1}\frac{n}{1-n}\left(i S[\hat{\mathcal{M}}^{i\vert j}_n]-i S[\mathcal{M}^{i\vert j}]\right), \end{equation} where $S[\hat{\mathcal{M}}^{i\vert j}_n]$ is the gravitational action of the bulk geometry $\hat{\mathcal{M}}^{i\vert j}_n$ in the stationary phase approximation, and we have used the definition $n S[\hat{\mathcal{M}}_n^{i\vert j}]=S[\mathcal{M}_n^{i\vert j}]$. Viewing $\mathcal{M}^{i\vert j}$ as the forward and backward segments of the time-folded bulk geometry $\mathcal{M}$, which is dual to the boundary Schwinger-Keldysh geometry, we then rewrite $S[\hat{\mathcal{M}}^{1\vert 2}]\equiv S[\mathcal{M}^{for}]$ and $S[\hat{\mathcal{M}}^{2\vert 1}]\equiv -S[\mathcal{M}^{bac}]$, noting that there is now a contribution from the conical singularity at the locus $\mathbf{e}_n$, the codimension-2 surface homologous to the entangling surface $\partial\mathcal{A}$. The relative sign comes from the fact that $\mathcal{M}^{for}$ and $\mathcal{M}^{bac}$ are related to each other by a time reversal, and is fixed in accord with the Schwinger-Keldysh generating functional, which is of the form $e^{i (S^{for}-S^{bac})}$. Assuming the CPT-conjugation symmetry extends to the bulk geometry (the bulk metric shall also be conjugated if it is complex\cite{Colin-Ellerin:2020mva}), we thus expect $S[\mathcal{M}^{bac}]=S[\mathcal{M}^{for}]^*$. At this stage, we can deduce that the HPE computed in the bulk indeed takes complex values (at least, it is generally complex-valued by definition), just as naturally as the generic non-Hermitian transition matrix is defined. To make the above argument more apparent, let us compare the computations of the holographic pseudo entropy (HPE) and the covariant holographic entanglement entropy (CHEE).\footnote{The first point is of course that CHEE involves generic time-dependent states, while the computation of HPE here is merely formulated in the Lorentzian path integral representation, complementary to its Euclidean counterpart.} As aforementioned, the crucial observation is that CHEE involves only the causal past of the bulk Cauchy slice $\widetilde{\Sigma}_0$ of interest ($\partial\widetilde{\Sigma}_0=\Sigma_0$) and possesses the CPT-conjugation symmetry which renders the entropies real-valued. To see this, we recall that the bulk on-shell action related to CHEE is given as, \begin{equation} S[\mathcal{M}]=S[\mathcal{M}^{for}]-S[\mathcal{M}^{bac}]=2 i \;\textrm{Im}(S[\mathcal{M}^{for}]), \end{equation} which is purely imaginary due to the pairwise cancellation between the forward and backward segments and finally gives rise to the real-valued entropies\cite{Colin-Ellerin:2021jev}. Such a pairwise cancellation will not generally happen in the case of HPE, since the computation there involves $S[\mathcal{M}^{for}]$ or $S[\mathcal{M}^{bac}]$ only. More precisely, in HPE, the fixed point locus $\mathbf{e}_n$ is localized in the interior of the forward or backward branch of the time-folded bulk manifold, while in CHEE, $\mathbf{e}_n$ lies at the boundary between the forward and backward branches. In the case of HPE, the extremal surface condition\cite{Hubeny:2007xt} for the surface $\mathbf{e}_n$ in the $n\rightarrow 1$ limit is recovered by examining the local geometry near this surface through the standard argument\cite{Dong:2016hjy}.
Indeed, in the case of CHEE, before gluing the forward and backward branches across the Cauchy slice $\widetilde{\Sigma}_{0}$ to compute $\textrm{tr}_{\mathcal{A}}\rho_\mathcal{A}^n$ (near $n\sim1$), $\widetilde{\Sigma}_0$ is not arbitrary: it should pass through the extremal surface\cite{Dong:2016hjy}. To simplify our discussion of HPE, we simply continue the bulk evolution from the Cauchy slice $\widetilde{\Sigma}_0$ of CHEE to the future Cauchy slice $\widetilde{\Sigma}_T$ ($\partial\widetilde{\Sigma}_T=\Sigma_T$) in both the forward and backward branches, with the final condition specified on $\widetilde{\Sigma}_{T}$, and then glue up the forward and backward branches along $\widetilde{\Sigma}_{T}$ in a certain continuous manner, so that the evolution is reversed on $\widetilde{\Sigma}_T$ and we do not have boundary terms there; this allows us to use the time-folded bulk geometry of the Schwinger-Keldysh type to perform the computation. Recall that, this time, the locus $\mathbf{e}_n$ is localized in the interior of the forward or backward branch. Knowing that $\mathbf{e}_n$ reduces to the extremal surface in the $n\rightarrow 1$ limit, we turn to evaluate the on-shell action using Einstein gravity with a conical singularity at $\mathbf{e}_n$. The evaluation turns out to be similar to that of CHEE \cite{Dong:2016hjy}, and gives, \begin{equation} \begin{split} &S(\tau^{1\vert 2}_{\mathcal{A}})=i \lim_{n\rightarrow1}\partial_n S[\mathcal{M}^{for}];\\ &S[\mathcal{M}^{for}]=-\lim_{\epsilon\rightarrow 0}\frac{1}{8\pi G_N}\int_{\mathbf{e}_n(\epsilon)}\mathcal{K}^{for}_\epsilon, \end{split} \end{equation} where we have regulated the codimension-2 locus $\mathbf{e}_n$ to be the codimension-1 surface $\mathbf{e}_n(\epsilon)$ defined as the hypersurface $r=\epsilon$ in the local coordinates around $\mathbf{e}_n$, with $\mathcal{K}^{for}_{\epsilon}$ being the extrinsic curvature of $\mathbf{e}_n(\epsilon)$ located in the forward branch. The \emph{modular entropy} is used in the computation\cite{Dong:2016hjy}. According to the previous discussion, the pseudo entropy $S(\tau^{2\vert1}_{\mathcal{A}})$ evaluated in the backward branch is expected to be the complex conjugate of $S(\tau^{1\vert2}_{\mathcal{A}})$. As in CHEE, the light-like singularities of the extrinsic curvature contribute the imaginary part of the action and the real part of the entropies, which ends up with the HRT formula\cite{Hubeny:2007xt}. However, there is also an additional imaginary contribution to the entropies from the regular extrinsic curvature term, as long as it survives in the $\epsilon\rightarrow 0$ limit, which is now understood as part of the pseudo entropy. The computation of HPE requires overall knowledge of the local geometry around the extremal surfaces. This seems to agree with the argument that, since the transition matrix involves both the initial and final states, the causal past and the causal future of the Cauchy slice of interest, as well as the information of the evolution, should be included in computing the pseudo entropies. In the AdS/CFT context, the dual bulk on-shell action turns out to be localized at the extremal surface; thus it is natural to expect that the extrinsic curvature of the extremal surface encodes, to some extent, the information of the evolution with both the initial and final states specified.
It is thus intriguing to think of the HPE as a generalization of the CHEE, which picks up only partial information of the extremal surfaces, noting that the CHEE and the ordinary holographic entanglement entropy both involve only the initial state. \section{Conclusion} We provided a real-time prescription for the computation of the pseudo entropy in quantum field theory, and recast it into the Schwinger-Keldysh formalism, which makes the complex nature of the pseudo entropy more apparent. We also drew a distinction between the real-time path integral representations of the holographic pseudo entropy and the covariant holographic entanglement entropy, and explained how the holographic pseudo entropy becomes complex-valued through the argument of the CPT-conjugation symmetry of the Schwinger-Keldysh contour. Via the real-time AdS/CFT correspondence, we argued that the holographic pseudo entropy is indeed generally complex-valued and may be dual to the bulk codimension-2 extremal surface, receiving a complex contribution from the extrinsic curvature of the extremal surface, which can be considered as a generalization of the covariant holographic entanglement entropy. \section*{Acknowledgement} This work is not supported by any funding.
{ "arxiv_id": "2302.14242", "language": "en", "timestamp": "2023-03-01T02:06:29", "url": "https://arxiv.org/abs/2302.14242", "yymm": "2302" }
\section{Introduction} Recently there has been tremendous success in using deep reinforcement learning for various tasks involving decision-making. Deep RL is, in principle, a versatile approach that can directly learn from interaction data, often without an explicit, hand-coded dynamics model. By interacting with environments, RL agents can learn optimal policies from either dense or sparse environment feedback. A rich collection of state-of-the-art approaches allows the learning of policies for both discrete actions and continuous action spaces, while taking in either low-dimensional state vectors or high-dimensional sensor readings \cite{sac, dqn, ppo, trpo}. However, it remains challenging to apply Deep RL to real-life domains, including real-hardware robot learning. One major challenge comes from the need to reliably track the complete system state \cite{dexterous}. To mitigate this challenge, learning a policy that directly maps camera readings to optimal actions requires much less engineering effort. Recently, by incorporating ideas from the computer vision domain, including data augmentation and self-supervised learning, state-of-the-art RL algorithms can learn control policies purely from image observations, with high sample efficiency \cite{dreamerv2, curl, rad, drq}. On the other hand, it is also difficult to assign credit to the learning agent in a scalable way. Reward engineering utilizes domain-specific knowledge to provide better training signals: popular simulated environments provide optional dense reward functions based on heuristics \cite{gym, robosuite2020}, but might rely on state information not readily available in the real physical system. Thus, there has been a lot of effort to help RL agents explore more effectively in environments with sparse or no rewards \cite{diayn, vic}. For example, a sparse reward setting might be a manipulation task where a robot receives a positive reward only at task completion (e.g., for successfully placing an item on a shelf) and a zero reward at all other time steps. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{images/photos/cover.jpg} \caption{\small We enable efficient model-free reinforcement learning on a real Franka Emika Panda robot with 7 degrees of freedom from RGB image observations, sparse rewards, and only a few demonstrations. This is achieved by learning a distance metric in an embedding space, and rewarding the agent for staying close to the demonstrations.} \label{fig:franka} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{images/figures/nn-figure.pdf} \caption{\small Our approach augments the sparse task reward with a dense exploration reward to enable efficient training of vision-based robotic manipulation policies using Deep Reinforcement Learning. (\textbf{Left}) We have a set of variable length demonstrations $i$, each consisting of observations $o^i_t$, actions $a^i_t$ etc. (\textbf{Middle}) During learning, we update our policy with a mixture of stored demonstrations (blue) and examples from the replay buffer (yellow). (\textbf{Right}) Our RL agent maps a sensor observation $o$ to an action $a$. It finds the closest demonstration to infer a dense reward $r'$ based on how close we are to the goal.} \label{fig:method} \end{figure*} In this work, we propose a novel approach to tackle the exploration challenge in image-based control tasks with sparse rewards. 
We build on the idea of using demonstrations to help with exploration and draw inspiration from reward shaping, which has been explored in prior works \cite{modem, coder}. The main insight is that each step in the demonstration can be considered a checkpoint, and the agent should be credited for reaching a similar state (see Fig. \ref{fig:method}). Our main contributions are two-fold. First, we define a distance metric between pairs of image observations from the task domain by learning an embedding from images to a lower-dimensional latent space, making it possible to quantify how close the agent is to a demonstration state. Second, we design a systematic way to utilize the learned distance metric to provide much denser reward signals in sparse-reward tasks. Our approach is independent of the underlying RL algorithm. In this work, we implement our approach on top of the state-of-the-art off-policy algorithm SAC \cite{sac}. Simulation results show that our method significantly improves sample efficiency on long-horizon, sparse-reward tasks compared to state-of-the-art model-based and model-free methods. In the real world, our method enables the learning of manipulation tasks from scratch with only a few hours of training and a handful of demonstrations. \section{Related Work} During the training of an RL agent, exploring efficiently is challenging, especially in sparse-reward environments -- agents need to interact with the rewards sufficiently often to obtain meaningful learning signals. One popular approach to providing denser rewards is to incorporate human and domain knowledge. For example, reward shaping is usually used in long-horizon control tasks by adding intermediate rewards at important checkpoint locations \cite{robosuite2020}, or a dense reward is derived from physical understanding of the underlying system \cite{reward_shaping}. In the domain of real robot manipulation, a recent work by \citet{reward-sketching} takes advantage of imperfect dense reward signals provided by human operators. During training, engineers annotate episodes of robot experience to help train a reward model. Aside from efforts to provide dense rewards, novel algorithms have been proposed to improve agents' exploration behavior. For instance, maximum entropy RL balances the agent's exploration and exploitation behavior by encouraging exploration as part of the learning objective \citep{sac}. Hierarchical RL and intrinsically motivated skill learning are also potential remedies. For example, \citet{diayn} proposed to learn a set of diverse low-level skills, which can then be used as action primitives for a high-level RL agent to explore larger areas of the state space. \citet{curiosity} propose to utilize an intrinsic reward to model curiosity. In this approach, the agent learns an internal model of the world, and is rewarded for visiting states that are not captured well by its model. It has also been shown that warm-starting and demonstrations are helpful in sparse-reward tasks. \citet{qt-opt} learn a closed-loop grasping policy purely from real robot data, with only binary reward signals. To assist with initial exploitation, the method employs a warm-start technique by using a scripted policy as the starting point. \citet{sparse_rewards} propose an algorithm that alternates between updating the RL policy from the environment and pulling it towards demonstration behaviors using an information theoretic approach.
\citet{coder} employs a contrastive learning objective on demonstration trajectories to pre-train the image feature extractors, while also retaining them for RL updates. \citet{reward-sketching} provide demonstrations through tele-operation as part of the training loop. Finally, model-based RL methods aim to improve training sample efficiency by learning a transition model of the environment from past experience and generating new training samples for RL updates. \citet{dreamerv2} learns an accurate discrete world model from high-dimensional image inputs for Atari games, which enables the training of an RL agent that achieves human-level performance. \citet{modem} combines model-based learning with demonstrations by over-sampling demonstrated data to form a behavior prior. The key innovation of our work compared to prior art is that we introduce a systematic pipeline for learning sparse reward tasks from high-dimensional observations using model-free reinforcement learning. As shown in Fig. \ref{fig:method}, our innovation comes from (a) matching to similar demonstrations using a locally linear latent model, (b) using data augmentation to improve the robustness of state matching, and (c) inferring a dense reward based on promising states in the demonstrations. \section{Background} \subsection{Deep Reinforcement Learning} In this work, we adopt a typical deep reinforcement learning setting, where the underlying dynamics of the environment is modeled as a Markov Decision Process (MDP). The MDP is defined by the tuple $(\mathcal{S}, \mathcal{A}, p, r, \gamma)$ where $\mathcal{S}$ is the state space and $\mathcal{A}$ is the action space. $p$ models the state transition probability, where $p(s' \,|\, s, a)$ is the probability of landing in state $s'$ after taking action $a$ from state $s$. The reward function $r(s, a, s')$ indicates the reward given to the agent for taking this transition. Finally, $\gamma$ is the discounting factor. An RL algorithm aims to find the optimal policy $\pi^*(a \, | \, s)$ that maximizes the agent's cumulative reward in expectation. \begin{equation} \label{eqn:rl} \pi^* = \argmax_{\pi(a \, | \, s)} \mathbbm{E}_\pi\bigg[\sum_{t = 0}^\infty \gamma^t \cdot r(s_t, a_t, s_{t+1})\bigg]. \end{equation} To reflect the use of image observations, we denote $\mathcal{O}$ as the observation space, and $o_t$ as the input images at time $t$. For the tasks that we solve, we assume that the necessary environment states are captured within our image observations. \subsection{Problem Setting} This work tackles the challenges that come with vision-based RL in long-horizon, sparse reward tasks, where the agent only receives a positive reward $r_\mathrm{done}$ at task completion, while getting negative rewards $r_\mathrm{live}$ everywhere else. The interpretation of such a reward function is that the agent gets a high reward only at task completion, but is also penalized for the length of the trajectory. Formally, letting $\mathcal{G}$ denote the set of goal states, we define the reward function as follows: \begin{equation} \label{eqn:reward} r(s, a, s') = \begin{cases} r_\mathrm{done} > 0 & s' \in \mathcal{G}\\ r_\mathrm{live} < 0& \text{otherwise}. \end{cases} \end{equation} The sparse reward function reduces the need for expert knowledge or human intervention, making it much easier to implement in a real-world environment. However, this makes exploration hard: the robot gets a meaningful signal only when it reaches a goal, which is unlikely in long-horizon, complex control tasks.
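As a concrete (and deliberately minimal) illustration of Eq. \eqref{eqn:reward}, the environment-side reward can be written as a two-case function; the sketch below is ours, and the particular magnitudes are placeholders rather than the exact scaling used in every experiment:
\begin{verbatim}
R_DONE = 100.0   # positive reward at task completion (placeholder value)
R_LIVE = -1.0    # per-step penalty everywhere else (placeholder value)

def sparse_reward(next_state_in_goal_set: bool) -> float:
    # r(s, a, s') depends only on whether s' lies in the goal set G
    return R_DONE if next_state_in_goal_set else R_LIVE
\end{verbatim}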
We now propose a method to address the challenge of vision-based, sparse reward robotic tasks. \section{Method} We propose a training pipeline centered around a few demonstrations to tackle the exploration challenge of sparse-reward learning. The core idea is to provide additional dense rewards at states close to the demonstrations. To correctly measure the distance between states from their image observations, we find a structured embedding space by learning a latent dynamics model as an auxiliary task. In addition, we employ value clipping to take advantage of the sparse reward structure, and importance sampling to prioritize learning signals from demonstrations. \subsection{Off-policy Reinforcement Learning with Demonstrations} \label{subsec:rl-demo} We provide a set $\mathcal{D}$ of demonstrations consisting of $n$ successful trajectories of observations and actions. Each demonstration trajectory $i$ may be of different length $T_i$, but must terminate in the goal set $\mathcal{G}$. We assume the demonstrations come from a human operator or a heuristic controller, and thus may be imperfect in nature. We formalize the mathematical notations below, where $\mathcal{T}^i$ denotes the trajectory for demonstration $i$. $s_t$ is the underlying true state of the robot and $o_t$ is the high-dimensional observation received at time $t$, such as a pair of camera images. $a_t$ is the action taken by the demonstrator. Note that we don't have access to the ground truth state $s_t$ in the demonstrations, as shown below. \begin{align} \label{eqn:demo} \mathcal{D} &= \{\mathcal{T}^1 , \mathcal{T}^2 \cdots \mathcal{T}^n \} \nonumber\\ \mathcal{T}^i &= (o_0^i , a_0^i , o_1^i , a_1^i , \cdots o_{T_i - 1}^i, a_{T_i - 1}^i , o_{T_i}^i)\\ \forall i &, s_{T_i}^i \in \mathcal{G} \nonumber . \end{align} The demonstrations are kept in two forms. First, the trajectories are stored to help keep track of the steps-to-success for each state. This information is used to properly discount the exploration reward, as will be discussed later. Next, they are also sliced into experience tuples $(o, a, o', r, d)$ and placed in the replay buffer $\mathcal{B}$ of our off-policy, model-free reinforcement learning algorithm. Here, the notation $o'$ means the next observation after observation $o$. $r$ is the sparse reward signal as defined in Equation \ref{eqn:reward}, and $d$ is a Boolean variable indicating episode termination. These experience tuples are essential to propagate the reward signals into the value function and policy. As the replay buffer fills from new experience, we make sure the demonstration transitions are never ejected. \subsection{Augmentation-invariant Distance Metric} Our next goal is to calculate the distance between the robot's current observation during learning and the closest observation in the demonstration set. This will allow the robot to infer how close it is to achieving the sparse reward. Computing the distance between two states from their respective image observations is challenging: two drastically different states might only differ by a few pixels. Conversely, the same underlying state might appear very different in two camera observations due to task-irrelevant background features. Thus, we propose to embed the image observations into a low-dimensional latent space to obtain a viable distance metric. Inspired by \citet{e2c}, we utilize a Variational Auto-encoder (VAE) \cite{vae} and a locally-linear dynamics model to regularize the structure of the latent space.
The locally-linear dynamics model captures our goal for the latent space to be temporally consistent, since we have a multi-step control task. Meanwhile, it has been shown that data augmentation is essential for efficient and robust image-based Reinforcement Learning \cite{rad, drq}. To make the distance metric invariant to data augmentations, we train the VAE and latent dynamics model to encode augmented observations, while predicting the unchanged images. We use $f(\cdot)$ to denote the random augmentation function. The trainable components include the CNN encoder $E_{\phi}$, the decoder $D_{\theta}$, and the latent transition model $M_{\psi}$. Our model is trained by optimizing a variational evidence lower bound (ELBO) objective, using transition tuples $(o, a, o')_i$ sampled from the replay buffer. We assume a unit Gaussian prior $p(\mathbf{z})$ in the latent space. Additionally, the encoded distribution $q(\mathbf{z} \,|\, o)$ and the decoded distribution $p(\mathbf{o} \,|\, z)$ are also modeled as Gaussian distributions, which is a good fit for predicting color images. First, the data-augmented observations are encoded into their latent distributions whose mean $\mu$ and diagonal covariance matrix $\Sigma$ are predicted by the encoder network $E_{\phi}$: \begin{align} \begin{split} z \sim q_\phi(\mathbf{z} \,|\, f(o)) = \mathcal{N}(\mu, \Sigma) &, \; (\mu, \Sigma) = E_{\phi}(f(o)) \\ z' \sim q_\phi(\mathbf{z'} \,|\, f(o')) = \mathcal{N}(\mu', \Sigma') &, \; (\mu', \Sigma') = E_{\phi}(f(o')). \end{split} \end{align} The one-step dynamics model in the latent space is locally linear in the state and action; its parameters (matrices $A$, $B$ and offset $c$) depend on the starting state and are predicted by the latent transition model $M_{\psi}$. Here $z$ is the latent state representation predicted by the encoder. Prior work shows that a latent linear dynamics model is tractable to learn, while providing modeling flexibility through local linearity \cite{e2c}. The latent dynamics are given by: \begin{equation} \hat{z'} = Az + Ba + c, \hspace{0.5cm} (A, B, c) = M_{\psi}(z). \end{equation} The linear transition model allows the prediction of the next step latent distribution using the starting distribution and action as follows: \begin{equation} q_{\psi}(\mathbf{\hat{z'}} \,|\, z, a) = \mathcal{N}(\hat{\mu'}, \hat{\Sigma'}), \end{equation} where \begin{align} \begin{split} \hat{\mu'} &= A \mu + B a + c \\ \hat{\Sigma'} &= A \Sigma A^T. \end{split} \end{align} The encoder $E_{\phi}$, decoder $D_{\theta}$, and transition model $M_{\psi}$ are updated jointly using a combined loss function which comprises three objectives. First, we want the sampled starting latent state $z$ to be reconstructed back to the original observation $o$. Similarly, as we pass the sample through the dynamics model, the resulting latent state prediction $\hat{z'}$ should reconstruct back to $o'$. Finally, to ensure the latent dynamics model is consistent over multiple steps, we want the predicted distribution $q_\psi(\mathbf{\hat{z'}} | f(o), a)$ and encoded distribution $q_\phi(\mathbf{z'} | f(o'))$ to be similar.
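As an illustration of the locally linear transition just described, a minimal PyTorch-style sketch of $M_{\psi}$ is given below. This is our own simplified rendering: the class name, hidden width, and the dense (full-matrix) parameterization of $A$ and $B$ are assumptions, not the exact architecture used in our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class LocallyLinearTransition(nn.Module):
    """Predicts A, B, c from z and propagates a latent Gaussian one step."""
    def __init__(self, z_dim: int, a_dim: int, hidden: int = 256):
        super().__init__()
        self.z_dim, self.a_dim = z_dim, a_dim
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim * z_dim + z_dim * a_dim + z_dim),
        )

    def forward(self, z, a, mu, sigma):
        # z, mu: (N, z_dim); a: (N, a_dim); sigma: (N, z_dim, z_dim)
        params = self.net(z)
        nA, nB = self.z_dim * self.z_dim, self.z_dim * self.a_dim
        A = params[:, :nA].view(-1, self.z_dim, self.z_dim)
        B = params[:, nA:nA + nB].view(-1, self.z_dim, self.a_dim)
        c = params[:, nA + nB:]
        # z' = A z + B a + c, applied to the sample and to the mean
        z_next = (A @ z.unsqueeze(-1)).squeeze(-1) \
               + (B @ a.unsqueeze(-1)).squeeze(-1) + c
        mu_next = (A @ mu.unsqueeze(-1)).squeeze(-1) \
                + (B @ a.unsqueeze(-1)).squeeze(-1) + c
        sigma_next = A @ sigma @ A.transpose(1, 2)   # A Sigma A^T
        return z_next, mu_next, sigma_next
\end{verbatim}
The three requirements above are then combined into a single training objective.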
Formally, we write the overall training objective $\mathcal{L}$ as follows, where $\lambda$ is a hyper-parameter for weighing the two loss terms: \begin{align} \begin{split} \mathcal{L}_\mathrm{ELBO} &= \Ebig_{z \sim q_\phi, \, \hat{z'} \sim q_{\psi}} \bigg[ -\log p(o | z) - \log p(o' | \hat{z'}) \bigg] \\ &+ D_{\mathrm{KL}} \bigg(q_\phi(\mathbf{z} \,|\, f(o)) \;\bigg\Vert\; p(\mathbf{z})\bigg) \end{split}\\ \mathcal{L}_{\mathrm{dynamics}} &= \Ebig_{z \sim q_\phi} \bigg[ D_{\mathrm{KL}} \bigg( q_\psi(\mathbf{\hat{z'}} \,|\, z, a) \;\bigg\Vert\; q_\phi(\mathbf{z'} \,|\, f(o'))\bigg) \bigg]\\ \mathcal{L} &= \Ebig_{(o, a, o') \in \mathcal{B}} \bigg[ \mathcal{L}_\mathrm{ELBO} + \lambda \mathcal{L}_\mathrm{dynamics} \bigg]. \end{align} In essence, we want to minimize the reconstruction error for the VAE using the ELBO objective (term 1), but also want to minimize the dynamics prediction error in the latent space (term 2). Crucially, given the learned encoder, we can now define a task-relevant distance metric between high-dimensional observations $o$. Specifically, given two augmented observations $f(o_1)$ and $f(o_2)$, we define the Augmentation-invariant Distance Metric (ADM) to be the $L^2$ distance between the augmented and \textit{encoded} states: \begin{equation} \label{eqn:distance_metric} d(f(o_1), f(o_2)) := ||E_{\phi}(f(o_1)) - E_{\phi}(f(o_2))||_2. \end{equation} We now describe how to use the above Augmentation-invariant Distance Metric to find the closest demonstration to the robot's current observation. \subsection{Demonstration-Guided Exploration} To encourage the agent to explore more efficiently, we propose a reward-engineering approach to credit the agent for staying close to demonstrations. Given an experience tuple $(o, a, o', r, d)$, we assign an additional exploration reward $r_e$ if $o'$ is sufficiently close to an observation in our demonstrations, up to the distance threshold $\epsilon$, which is dynamically computed. We now formally define the exploration reward $r_e$ and the distance threshold $\epsilon$. First, we compute the distance threshold $\epsilon$, which is defined as the average distance between consecutive observations within the demonstrations. In this way $\epsilon$ approximates the distance of one environment step. As we get new experience from learning, we re-compute the distance threshold $\epsilon$. Re-computation is necessary because the encoder $E_{\phi}$, decoder $D_{\theta}$, and dynamics model parameters $M_{\psi}$ are constantly updated with the latest experience to ensure our distance metric is not overfitted to only the demonstration data. The definition of $\epsilon$ is formulated as follows. \begin{equation} \label{eqn:eps} \epsilon := \Ebig_{i, t}\big[ d(o_t^i, o_{t+1}^i) \big] , \hspace{0.5cm} o_t^i, o_{t+1}^i \in \mathcal{D} \end{equation} Next, we find the trajectory index $i$ and time step $t$ of the nearest demonstration observation to our observation $o'$ using the distance function $d$: \begin{equation} \label{eqn:dge} i^*, t^* = \argmin_{i, t}d(o', o_t^i), \hspace{0.5cm} o_t^i \in \mathcal{D}. \end{equation} Our goal is to create a dense reward to encourage the agent to reach promising states which led to task success in the demonstrations. Thus, we add an additional exploration reward $r_e$ if we are close to a demonstration state and modulate it by how far we are expected to be from success. In our experiments, $r_e = 1$.
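To make the lookup concrete, the sketch below implements the ADM of Eq. \eqref{eqn:distance_metric}, the threshold of Eq. \eqref{eqn:eps}, and the nearest-demonstration search of Eq. \eqref{eqn:dge}. It is an illustration only: we assume the demonstration observations have already been augmented and encoded (using the encoder mean as the embedding), and names such as \texttt{demo\_embeddings} are placeholders.
\begin{verbatim}
import numpy as np

def adm_distance(z1, z2):
    # Eq. (12): L2 distance between encoded (augmented) observations
    return float(np.linalg.norm(z1 - z2))

def compute_epsilon(demo_embeddings):
    # Eq. (13): average distance between consecutive demo observations;
    # re-computed whenever the encoder is updated.
    gaps = []
    for traj in demo_embeddings:              # traj: (T_i + 1, z_dim)
        for t in range(len(traj) - 1):
            gaps.append(adm_distance(traj[t], traj[t + 1]))
    return float(np.mean(gaps))

def nearest_demo(z_next, demo_embeddings):
    # Eq. (14): demonstration index i* and step t* whose embedding is
    # closest to z_next, the embedding of the agent's next observation o'.
    best = (None, None, np.inf)
    for i, traj in enumerate(demo_embeddings):
        dists = np.linalg.norm(traj - z_next, axis=1)
        t = int(np.argmin(dists))
        if dists[t] < best[2]:
            best = (i, t, float(dists[t]))
    return best                                # (i*, t*, distance)
\end{verbatim}
How the matched step $(i^*, t^*)$ is converted into a discounted exploration reward is specified next.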
First, we discount the exploration reward $r_e$ by powers of $\alpha$ depending on the number of steps until success; the exploration reward is greater when closer to the goal. The discounting factor $\alpha$ is a hyper-parameter chosen independently from the RL discounting factor $\gamma$. The extra reward is added to the environment reward $r$, and the resulting sum $r_{\mathrm{dense}}$ is used to update the RL agent. As shown in the equation below, we focus on the demonstration with index $i^*$, where we find our nearest neighbor observation. $T_{i^*}$ is the length of this demonstration trajectory, as introduced in Section \ref{subsec:rl-demo}. Note that we don't give an exploration reward if the state is already a successful state. Thus, our final reward becomes: \begin{equation} \label{eqn:demo_reward} r_{\mathrm{dense}} = \begin{cases} r + \alpha^{T_{i^*} - t^*} r_e & [d(o', o_{t^*}^{i^*}) \leq \epsilon] \land [o' \notin \mathcal{G}]\\ r & \text{otherwise}. \end{cases} \end{equation} We observe that in the case where $o'$ finds its nearest neighbor from one of the successful terminal states, we have $t^* = T_{i^*}$. Then, $o'$ is awarded the full exploration reward $r_e$ along with the nominal reward $r$. In contrast, when $o'$ is close to one of the earlier steps in a demonstration, $r_e$ is heavily discounted. Finally, if we are very far from any demonstration observation (relative to the distance threshold $\epsilon$), we simply pass the RL agent the nominal reward $r$ (case 2 in Eq. \ref{eqn:demo_reward}). Similarly, if we are at the goal, we receive the goal reward, again through case 2 of Eq. \ref{eqn:demo_reward}. Given this new reward $r_\mathrm{dense}$, we can train using any off-the-shelf RL algorithm, which makes our approach versatile. \subsection{Importance Sampling and Value Clipping} To further improve training efficiency, we take advantage of the reward and demonstration structures by implementing importance sampling and $q$-value clipping. Both importance sampling and $q$-value clipping are standard tools in RL to improve learning efficiency, but in our case we have a rigorous derivation of the specific bound we choose. We perform importance sampling when choosing transitions from the replay buffer for updating the RL agent. When sampling a batch of $b$ transitions, we prioritize demonstration transitions so that they constitute at least a fraction $p_d$ of the batch. This technique allows the task reward $r_\mathrm{done}$ to propagate quickly into the values of promising states and allows the agent to learn effectively from demonstrations. The batch size $b$ and fraction $p_d$ are hyper-parameters. To further take advantage of the sparse reward structure and stabilize training, we scale the three reward sources $r_\mathrm{done}$, $r_\mathrm{live}$ and $r_e$ so that the resulting $q$-value is bounded, allowing us to clip the value target when updating $Q(o, a)$. Concretely, we make $|r_e| \leq |r_\mathrm{live}|$. In our dense reward structure, an environment step either receives a positive reward of $r_\mathrm{done}$ and terminates the episode or receives a non-positive dense reward $r_\mathrm{dense}$. Thus, the highest $q$-value is achieved when a transition completes the task with reward $r_\mathrm{done}$. On the other hand, in the worst case where the agent receives $r_\mathrm{live}$ all the time and never succeeds, the $q$-value is bounded below by $\sum_{t=0}^\infty \gamma^t r_\mathrm{live} = r_\mathrm{live}/(1 - \gamma)$.
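Before stating the clipping rule, the reward shaping of Eq. \eqref{eqn:demo_reward} and the value-target bound derived above can be summarized in a short sketch (ours; it reuses \texttt{nearest\_demo} from the previous sketch, and the numerical constants are placeholders chosen only to satisfy $|r_e| \leq |r_\mathrm{live}|$):
\begin{verbatim}
import numpy as np

ALPHA = 0.9                    # exploration-reward discount (placeholder)
GAMMA = 0.99                   # RL discount factor (placeholder)
R_E = 1.0                      # exploration reward
R_DONE, R_LIVE = 100.0, -1.0   # sparse task rewards (placeholder scaling)

def dense_reward(r, z_next, at_goal, demo_embeddings, demo_lengths, eps):
    # Eq. (15): add a discounted exploration reward when o' lies within
    # eps of a demonstration state and is not itself a goal state.
    i_star, t_star, dist = nearest_demo(z_next, demo_embeddings)
    if dist <= eps and not at_goal:
        return r + ALPHA ** (demo_lengths[i_star] - t_star) * R_E
    return r

def clipped_target(r_dense, q_next, done):
    # One-step TD target, clipped to the bounds
    #   r_live / (1 - gamma) <= Q(o, a) <= r_done.
    target = r_dense + (0.0 if done else GAMMA * q_next)
    return float(np.clip(target, R_LIVE / (1.0 - GAMMA), R_DONE))
\end{verbatim}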
Thus, given this worst-case bound, we apply the following clipping when computing the value target: \begin{equation} \frac{r_\mathrm{live}}{1 - \gamma} \leq Q(o, a) \leq r_\mathrm{done}. \end{equation} In essence, the above value clipping exploits our domain knowledge of the reward structure to bound the value function, which stabilizes learning. \section{Experiments} \subsection{Motivating Example} \begin{figure}[t] \centering \begin{subfigure}[b]{2.7cm} \centering \includegraphics[height=2.5cm]{images/envs/maze-1.png} \caption{\small $s = (0.8, 0.8)$} \end{subfigure} \begin{subfigure}[b]{2.7cm} \centering \includegraphics[height=2.5cm]{images/envs/maze-2.png} \caption{\small $s = (0.8, 1.3)$} \end{subfigure} \begin{subfigure}[b]{2.7cm} \centering \includegraphics[height=2.5cm]{images/envs/maze-3.png} \caption{\small $s = (2.8, 0.8)$} \end{subfigure} \caption{\small Observations from the PointMaze environment. The point-mass in green is the controllable agent. $s$ is the agent's location. Clearly, states $a$ and $b$ are closer to each other than $a$ and $c$ are.} \label{fig:point_maze} \end{figure} We first motivate the need for learning a latent representation of a state and using our augmentation-invariant distance metric. In this experiment, we show that it is non-trivial to find a good distance metric between states from their respective high-dimensional image observations. To provide a clear illustration, we use the PointMaze environment provided in the D4RL benchmark \cite{d4rl}. In this environment, the controllable point-mass agent is marked in green. In Figure \ref{fig:point_maze}, we place the point-mass at three different positions in the same U-shaped maze. From the ground-truth state information, we know that state (a) is much closer to state (b) than to state (c). \begin{figure*}[t] \centering \includegraphics[width=0.245\linewidth]{images/plots/robosuite_lift.pdf} \includegraphics[width=0.245\linewidth]{images/plots/robosuite_door.pdf} \includegraphics[width=0.245\linewidth]{images/plots/robosuite_stack.pdf} \includegraphics[width=0.245\linewidth]{images/plots/robosuite_pick_place_can.pdf} \caption{\small We compare our method with state-of-the-art model-based and model-free methods, all given the same number of demonstrations. Clearly, our method (red) learns faster with a higher success rate than all three baseline methods.} \label{fig:results} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.245\linewidth]{images/plots/ablation-robosuite_lift.pdf} \includegraphics[width=0.245\linewidth]{images/plots/ablation-robosuite_door.pdf} \includegraphics[width=0.245\linewidth]{images/plots/ablation-robosuite_stack.pdf} \includegraphics[width=0.245\linewidth]{images/plots/ablation-robosuite_pick_place_can.pdf} \caption{\small Starting from a standard algorithm (RAD), we successively add key improvements, including IS (importance sampling), VC (value clipping) and exploration reward. We observe that both importance sampling and value clipping are helpful in terms of propagating the training signal and stabilizing training.
Finally, the additional exploration reward slows down training at the very beginning, but is crucial in the learning of long-horizon tasks.} \label{fig:ablation} \end{figure*} \begin{table}[h] \centering \begin{tabular}{| c | c | c | c |} \hline Distance metric $d$ & $d(a, b)$ & $d(a, c)$ & $d(a, b) / d(a, c)$\\ \hline Ground truth $L^2$ & 0.5 & 2.0 & \textbf{0.25} \\ \hline Pixel $L^2$ & 4.102 & 4.178 & 0.982\\ \hline VAE Embedding $L^2$ & 1.80e-1 & 1.53e-2 & 11.76\\ \hline ADM & 1.092 & 2.255 & \textbf{0.484}\\ \hline \end{tabular} \caption{\small Comparison across different distance metrics in PointMaze. Distance in the pixel space and in the VAE embedding space is unable to capture the true underlying states' relations, whereas our ADM estimates the relative distance most accurately.} \label{tab:point_maze} \end{table} In Table \ref{tab:point_maze}, we compare two baseline distance metrics with our proposed method ADM. The first row is the ground truth, indicating that states $a$ and $b$ are indeed closer to each other than states $a$ and $c$ are. For Pixel $L^2$, we consider each image as a long vector, and directly compute the $L^2$ distance between them. For VAE Embedding $L^2$, we train a Variational Auto-encoder to reconstruct the image observations without considering the dynamics, and compute the distance in the embedding space. For a consistent comparison, we don't apply data augmentations in this experiment. As expected, $d(a, b)$ and $d(a, c)$ are hardly distinguishable under the pixel-wise distance. While the VAE embedding is able to reconstruct the images, it doesn't incorporate the transition information and fails to estimate the distance between states. ADM is the only method to capture the relative proportions of the ground-truth distances. This simple experiment illustrates the benefits of using our latent space distance metric, ADM, to measure task-relevant similarity between image observations. \subsection{Robot Manipulation from Sparse Rewards} To verify the capability of our proposed approach, we aim to solve 4 robot manipulation tasks from the Robosuite simulator \cite{robosuite2020}, including picking up a block, stacking blocks, opening a door, and moving a soda can. In particular, moving the soda can requires a pick-and-place motion that takes around 100 steps to complete. The observations provided to the agent at each timestep are $128 \times 128$ RGB images from two cameras, one mounted on the wrist of the robot arm and one in the front. We use Operational Space Control to move the robot -- the RL agent outputs actions to change the displacement, rotation and opening of the robot hand, for a total of 7 degrees of freedom. For a better illustration, Figure \ref{fig:robosuite} shows the front-view renderings from all four environments.
\begin{figure}[h] \centering \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=0.9\linewidth]{images/envs/lift-obs.png} \caption{\small Lift a Block} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=0.9\linewidth]{images/envs/door-obs.png} \caption{\small Open a Door} \end{subfigure}\\ \vspace{0.2cm} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=0.9\linewidth]{images/envs/stack-obs.png} \caption{\small Stack Blocks} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=0.9\linewidth]{images/envs/pick-obs.png} \caption{\small Move a Can} \end{subfigure} \caption{\small We experiment with 4 diverse manipulation tasks in the RoboSuite simulator. Shown above are observations from the front-view camera. The other camera is mounted on the robot's end effector. These 2 RGB images are the only observations provided to the RL agent.} \label{fig:robosuite} \end{figure} We use a hard-coded policy to collect demonstrations. Importantly, the scripted policy utilizes ground-truth state information and renders the image observations alongside, which are saved in the demonstration set. The state information is not available for the RL agents, which must learn from camera observations alone. The number of demonstration trajectories and total steps collected for each task are shown in Table \ref{tab:demo_samples}. We emphasize that our method requires only a few demonstrations, which are quick to collect even on real robot platforms, as we show in the next section. \begin{table}[h] \centering \begin{tabular}{| c | c | c | c | c |} \hline Task & Lift & Open a Door & Stack & Move a Can \\ \hline \# demo & 5 & 10 & 10 & 20\\ \# total steps & 94 & 538 & 548 & 1783 \\ \hline \end{tabular} \caption{\small We only need a small number of demonstrations for each task. Among the tasks, Moving a Can is the hardest, as even the demonstrator requires about 90 steps to finish.} \label{tab:demo_samples} \end{table} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.9\linewidth]{images/photos/patch.png} \caption{\small Reach a colored patch} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.9\linewidth]{images/photos/block.png} \caption{\small Reach a block} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.9\linewidth]{images/photos/drawer.png} \caption{\small Open a drawer} \end{subfigure} \caption{\small We evaluate our method on 3 tasks with the Franka Emika Panda robot. The reinforcement learning agent controls the position and rotation of the robot's end effector, and additionally the gripper movement in task (b). The robot is able to achieve a 100\% success rate on each task within a few hours of training.} \label{fig:franka_tasks} \end{figure*} We compare the performance of our approach with CoDER \cite{coder}, which is the state-of-the-art model-free approach for vision-based RL using a few demonstrations. Additionally, we compare with state-of-the-art model-based methods including MoDem \cite{modem} and DreamerV2 \cite{dreamerv2}. We initialize all methods with the same set of demonstrations. We measure the evaluation success rates during training, and repeat each experiment with 5 different random seeds to obtain the final results. In the case of MoDem and DreamerV2, to make our tasks compatible, we don't terminate the episode at success, but rather at a fixed number of steps.
In Figure \ref{fig:results}, we report the mean and standard deviation of the evaluation success rates as a function of training environment steps. We observe that our method outperforms all three baselines across all four tasks, while showing major advantages in the two more challenging tasks. DreamerV2 does not natively accept demonstrations. To make an apples-to-apples comparison with our work and other benchmarks, we provide demonstrations by initializing the training buffer with demonstrated episodes. However, DreamerV2 struggles to learn the tasks even with demonstrations as a result of the sparse reward structure. \subsection{Ablation Studies} Our approach utilizes multiple techniques to maximize the utility of demonstrations and to speed up learning. To gain more insight into how each component contributes to the overall performance, we perform ablation studies by starting from a standard RL algorithm, RAD \citep{rad} (initialized with demonstrations), and introducing our key components one at a time, namely importance sampling (IS), value clipping (VC) and finally, our exploration reward $r_\mathrm{dense}$. From the results shown in Figure \ref{fig:ablation}, we observe that both importance sampling and value clipping are crucial for our approach. In block-stacking, they result in faster convergence to the optimal policy, and in can-moving, they enable the agent to complete the task. Clearly, they are helpful in quickly propagating the reward signals from the demonstrations and stabilizing training of the $q$-values by utilizing prior knowledge on the structure of our sparse reward task. However, as can be seen from the hardest can-moving task, the exploration reward makes the most significant difference as it allows the robot to complete the task reliably much earlier into training. This is because it effectively matches the current state to the closest demonstration state and encourages meaningful exploration progress. \section{Real Robot Training} \label{sec:panda} \subsection{Highly Compliant Real-time Controller} Because reinforcement learning requires a trial-and-error process \cite{rl}, we expect the robot to make frequent physical contact with the environment. In order to eliminate safety hazards, the controller must be compliant to external forces. On the other hand, we want the robot to move swiftly so that each RL step takes less time to execute. To this end, we develop a highly compliant real-time controller extended from the Cartesian Impedance controller \cite{impedance, jt}. At a high level, the end-effector tracks an equilibrium pose following a mass-spring-damper model. The robot exerts higher torque in the opposite direction as the current pose deviates further from the equilibrium. In the Cartesian space, we limit the maximum force exerted on the end effector by the robot, preventing it from causing damage. In the joint space, we apply a counter torque when a joint gets close to its hardware limit. The combination of these control rules builds a safety net around the robot for smoother RL training. \subsection{System Architecture} The robot gets sensory information from two Intel Realsense cameras -- one mounted on the end-effector and another on the side, as illustrated in Figure \ref{fig:franka}. While the cameras can capture RGB-D images, our method only utilizes the color images. Both cameras are directly connected via USB to an Nvidia GPU desktop, which runs inference and training for the RL agent.
Specifically, the GPU desktop runs an OpenAI gym interface \cite{gym}, with which the RL agent interacts. At each timestep, the agent chooses its action based on its policy: $a = \pi(o)$. The action $a$ consists of a displacement of the end effector position, change in roll/pitch/yaw, and open/close of the gripper. Next, the action selected by the RL agent is sent via ethernet to an Intel NUC, which directly interfaces with the Panda Robot. Specifically, the Intel NUC runs the Robot Operating System (ROS), where our real-time controller communicates with the \textsc{franka\_ros} library. \paragraph*{Collecting Demonstrations via Tele-Operation} A key aspect of our approach is to efficiently learn from a small set of human demonstrations. We built an application where a user can tele-operate the robot by moving and rotating an iPhone. Our iPhone application utilizes primitives from Apple's ARKit \cite{linowes2017augmented} to stream the position and orientation of the device. During demonstration collection, a Python script translates the tracking data into gym actions, and executes them on the real robot, which updates its equilibrium pose to follow the iPhone. These demonstration trajectories are stored on the Nvidia GPU desktop and used during training. \subsection{Tasks} We test on three diverse tasks on the Panda arm, as shown in Figure \ref{fig:franka_tasks}. We describe the reward function and resetting scheme of each task below. \textit{Goal Reaching:} The simplest task is goal reaching, where the Panda must reach a fixed target patch of blue color. The sparse reward $r_\mathrm{done} = 100$ is given only when more than 25\% of the pixels from the gripper camera are blue, indicating we have correctly hovered over the target patch. The reward is $r_\mathrm{live} = -1$ everywhere else. The robot returns to its initial configuration at the beginning of each new episode. \textit{Block Reaching:} Similar to goal-reaching, the task is for the robot's end effector to reach a randomly-placed red block. This task is challenging, because (1) an episode is deemed successful only if more than 65\% of the pixels in the bottom half of the gripper camera image are red, in which case the gripper will be able to grasp the block, which requires highly precise movements close to the goal; and (2) the block is randomly placed after a successful episode, requiring the robot to follow it to different locations instead of memorizing a fixed spot. \textit{Opening a Drawer:} Another challenging task is to open a drawer. The robot must rotate from its original configuration and exercise fine control over its roll/pitch/yaw to grasp the drawer handle and open it, making use of the entire 7-dimensional action space. The sparse reward is given only if the drawer has been opened and extended by at least 50\%. When an episode succeeds, the robot moves in a pre-set path to close the drawer for automatic resetting. \subsection{Results} Experimental results show that our method is extremely data-efficient, leading to task success with only a couple of hours of training on the Panda Arm. The reason is that we effectively learn from demonstrations using our nearest-neighbor approach, which allows us to extrapolate a dense reward for the task.
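One simple way to realize this nearest-neighbor reward extrapolation is sketched below; the exact functional form of the bonus is an illustration (the discount \texttt{alpha} mirrors the exploration-reward discount reported in the appendix), not a verbatim excerpt of our implementation.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
import numpy as np

def exploration_bonus(z, demo_latents, steps_to_goal, alpha=0.98):
    # z: latent state of the current observation.
    # demo_latents[i]: latent state of the i-th demonstration step.
    # steps_to_goal[i]: remaining steps from that state to success.
    dists = np.linalg.norm(demo_latents - z, axis=1)
    nearest = int(np.argmin(dists))
    # The bonus grows as the matched demonstration state gets
    # closer to the goal, yielding a dense proxy reward.
    return alpha ** steps_to_goal[nearest]
\end{lstlisting}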
\begin{table}[h] \centering \begin{tabular}{| c | c | c | c |} \hline Task & Reach Goal & Reach Block & Open Drawer \\ \hline \# demo trajectories & 5 & 10 & 5\\ \# demo steps & 63 & 171 & 128 \\ Wall clock time & $\sim$2:00 & $\sim$6:00 & $\sim$4:00 \\ \hline \# steps first success & 460 & 253 & 1169 \\ \# steps convergence & 628 & 2714 & 2122 \\ Wall clock time & 25:35 & 1:40:10 & 2:32:16\\ \hline \end{tabular} \caption{\small In our real robot experiment, only a small number of demonstrations are needed for each task. All three tasks achieve 100\% success rate within a few hours of training.} \label{tab:franka-results} \end{table} Table \ref{tab:franka-results} shows the number of demonstrations provided and training performance of the three tasks completed by the Panda Arm. The Panda Arm was able to learn the goal-reaching task with only about 30 minutes of training time. Compared to goal-reaching, it only took a couple of hours longer, including evaluation, to master the harder tasks of block reaching and opening a drawer, all starting from scratch. Moreover, we point out that the wall clock times we report are obtained prior to any computational optimization: we pause the robot and the RL updates while we train the latent dynamics model, which consumes about half the time. We also evaluate for 10 episodes every 10 training episodes, which is also counted in the total time. Overall, our results on a real Panda Arm show the efficacy of learning a latent dynamics model and using the latent representation to find similar observations in expert demonstrations. \section{Limitations} \label{sec:limitations} Our approach does not automatically determine the optimal number of expert demonstration trajectories, nor the diversity required within that set. Moreover, it does not determine which states are truly \textit{causal} and directly lead to high rewards and task success. Finding the nearest causal state using our latent dynamics model and distance metric might improve the speed of learning. Finally, in our real robot framework, self-collision is handled by the robot's firmware, resulting in occasional delays during RL training for error-handling. Utilizing a path planner and actively preventing collisions might make training smoother. \section{Conclusion} \label{sec:conclusion} This paper presents a data-efficient RL algorithm that learns sparse-reward robotic tasks from image observations only, by utilizing a few demonstrations, and outperforms state-of-the-art model-free and model-based benchmarks. First, we start with a small set of expert demonstration trajectories. Then, we try to match a robot's current state with the closest state in a successful demonstration, in order to credit the robot for making meaningful exploratory progress. Our key insight is that simply matching states based on visual similarity is problematic -- the underlying states might be similar, but the high-dimensional pixel observations can differ due to background variation and task-irrelevant features. On the other hand, two image observations that differ only by a few pixels might be multiple environment steps apart. Thus, our key innovation is to learn a latent dynamics model, which provides a temporally consistent, concise latent state representation for each scene. Then, we find the closest latent state in a set of expert demonstrations (using data augmentation to make the matching more robust) and assign an extra reward depending on the estimated number of steps away from the goal.
As such, we can effectively convert a sparse reward task to a task with dense proxy rewards, which improves learning efficiency dramatically. Our work lends itself to several exciting future directions. First, we can leverage recent advancements in causal RL and counterfactual analysis \cite{bareinboim2016causal, mesnard2020counterfactual, zhu2019causal} to determine the state in an expert demonstration that directly caused high reward and task success. This might improve our search for nearest-neighbor demonstration states and overall learning. Second, we can utilize the latent dynamics model to further improve sample efficiency with model-based RL methods. Finally, we can explore theoretical guarantees for simple linear settings to show whether our proxy dense reward is optimal. \section{Appendix} \subsection{Environment Details} \textit{Maximum Episode Length:} For our simulated experiments in Robosuite, we set maximum episode lengths based on the difficulty of each task, as shown in Table \ref{tab:episode_length}. Note that these numbers are chosen to be slightly over the average steps taken by the demonstrator to complete each task, leaving the RL agent plenty of time to finish. For our real robot experiments, all 3 tasks share the same maximum episode length of 30 steps. Episodes are terminated when the maximum length is reached or the task is completed. \begin{table}[h] \centering \begin{tabular}{| c | c | c | c | c |} \hline Task & Lift & Open a Door & Stack & Move a Can \\ \hline \# Max Episode Length & 40 & 80 & 80 & 120\\ \hline \end{tabular} \caption{\small We choose the maximum episode length of each task to be slightly over the average steps taken by the demonstrator, giving the RL agent ample time to finish.} \label{tab:episode_length} \end{table} \textit{Task-specific Settings:} The block-lifting, door-opening and block-stacking tasks are all taken directly from the Robosuite simulator. The move-a-can task is a single-object version of the pick-and-place task where only the soda can is included. For the move-a-can task, the base of the robot is placed at the middle of the two bins. \subsection{Neural Network Architectures} Our method consists of 6 components parameterized by neural networks, namely: model encoder $E_\phi$, model decoder $D_\theta$, locally-linear dynamics model $M_{\psi}$, RL encoder $E_\mathrm{RL}$, Actor $\pi_\mathrm{RL}$ and Critic $Q_\mathrm{RL}$. The numbers shown below correspond to our specific setting where the latent space has 16 dimensions, and the action space has 7 dimensions. Our locally linear dynamics model predicts a low-rank approximation of the $16 \times 16$ matrix $A$ using two $16$-dimensional vectors $u$ and $v$, where $A = I + uv^T$.
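The predicted quantities can be assembled into a one-step latent transition as in the sketch below; the names and tensor shapes are illustrative, and the low-rank matrix $A$ never needs to be formed explicitly.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
import torch

def predict_next_latent(z, a, u, v, B, c):
    # Locally-linear transition z' = A z + B a + c with A = I + u v^T.
    # z: (16,) latent state, a: (7,) action,
    # u, v: (16,), B: (16, 7), c: (16,) predicted by the dynamics net.
    Az = z + u * torch.dot(v, z)   # (I + u v^T) z without forming A
    return Az + B @ a + c
\end{lstlisting}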
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
Model Encoder:
Input: 6x112x112 randomly cropped images
ReLU(LayerNorm(Conv2D(6, 32, kernel=3, stride=2)))
ReLU(LayerNorm(Conv2D(32, 32, kernel=3, stride=1)))
ReLU(LayerNorm(Conv2D(32, 32, kernel=3, stride=1)))
Flatten()
ReLU(Linear(out_features=32))
ReLU(Linear(out_features=32))
Linear(out_features=32)
Output: 16-dim mean + 16-dim log-std
\end{lstlisting}
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
Model Decoder:
Input: 16-dim latent vectors
ReLU(Linear(out_features=128))
ReLU(Linear(out_features=128))
ReLU(Linear(out_features=32768))
Reshape into 128x16x16
Upsample into 128x32x32
ReLU(Conv2D(128, 128, kernel=3, stride=1, pad=1))
Upsample into 128x64x64
ReLU(Conv2D(128, 128, kernel=3, stride=1, pad=1))
Upsample into 128x128x128
Conv2D(128, 6, kernel=3, stride=1, pad=1)
Output: 6x128x128 images
\end{lstlisting}
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
Locally-Linear Dynamics model:
Input: 16-dim latent vectors
ReLU(Linear(out_features=512))
ReLU(Linear(out_features=512))
ReLU(Linear(out_features=160))
Output: 16-dim vector u + 16-dim vector v + 16x7 matrix B + 16-dim offset c
\end{lstlisting}
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
RL Encoder:
Input: 6x112x112 randomly cropped images
ReLU(LayerNorm(Conv2D(6, 32, kernel=3, stride=2)))
ReLU(LayerNorm(Conv2D(32, 32, kernel=3, stride=2)))
ReLU(LayerNorm(Conv2D(32, 32, kernel=3, stride=2)))
ReLU(LayerNorm(Conv2D(32, 32, kernel=3, stride=2)))
Flatten()
LayerNorm(Linear(out_features=32))
Output: 32-dim feature
\end{lstlisting}
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
Actor:
Input: 32-dim feature
ReLU(Linear(out_features=1024))
ReLU(Linear(out_features=1024))
Linear(out_features=14)
Output: 7-dim mean + 7-dim log-std
\end{lstlisting}
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
Critic:
Input: 32-dim feature + 7-dim action
ReLU(Linear(out_features=1024))
ReLU(Linear(out_features=1024))
Linear(out_features=1)
Output: Q-value
\end{lstlisting}
\subsection{Hyper-parameters and Training Scheduling} For all our experiments, we use discount factor $\gamma=0.99$ and exploration reward discount $\alpha=0.98$. Actor and critic learning rates are $1e-3$. The latent dynamics model learning rate is $4e-3$. Batch size $B = 128$. We choose a different fraction of demonstrations $p_d$ for each task when we perform importance sampling, as shown in the table below. \begin{table}[h] \centering \begin{tabular}{| c | c | c | c | c | c |} \hline Task & Lift & Open a Door & Stack & Move a Can & Real Robot\\ \hline $p_d$ & 0.15 & 0.15 & 0.15 & 0.2 & 0.2\\ \hline \end{tabular} \caption{\small We vary the fraction $p_d$ of demonstration interactions within each batch depending on the task we are training.} \label{tab:p_d} \end{table} For simulated environments, we perform one Actor-Critic update for each environment step taken. After loading the demonstrations and for every 5000 environment steps, we update the latent dynamics model until convergence. For real robot experiments, we update the latent dynamics model every 300 steps taken by the robot. Since each step takes about 0.5s to execute on the real robot, we keep performing SAC updates while the robot is in motion. This results in about 10 updates per environment step.
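The importance-sampling fraction $p_d$ above simply fixes how each training batch is composed; a minimal sketch of this batch construction is given below, where the buffer interfaces are illustrative placeholders rather than our actual replay-buffer API.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
import numpy as np

def sample_batch(demo_buffer, online_buffer, batch_size=128, p_d=0.15):
    # Draw a fixed fraction p_d of each batch from the demonstration
    # buffer and the remainder from the online replay buffer.
    n_demo = int(round(p_d * batch_size))
    demo_idx = np.random.randint(len(demo_buffer), size=n_demo)
    online_idx = np.random.randint(len(online_buffer),
                                   size=batch_size - n_demo)
    return ([demo_buffer[i] for i in demo_idx]
            + [online_buffer[i] for i in online_idx])
\end{lstlisting}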
{ "arxiv_id": "2302.14314", "language": "en", "timestamp": "2023-03-01T02:09:17", "url": "https://arxiv.org/abs/2302.14314", "yymm": "2302" }
\section{Introduction} Continual acquisition of new knowledge and skills is a desirable trait for intelligent machines. However, in Deep Learning, neural networks may forget previously acquired knowledge when their weights are optimized for new tasks, leading to ``catastrophic forgetting''. Many works have been proposed to address this issue by constraining the weights of neural nets~\cite{kirkpatrick2017overcoming,zenke2017continual} or using data (pseudo-data) of previous tasks~\cite{li2017learning}. A simple way to mitigate this issue is to assign task-specific sub-networks, where only the sub-network is optimized for new tasks and other parameters are task-independent and can be shared across tasks. This approach is particularly effective for Task Incremental Continual Learning (TI-CL), which requires a task-ID to route the data to the corresponding sub-network. As the model is incrementally trained on new tasks, its size grows sub-linearly. This paper explores TI-CL of audio classifiers with Audio Spectrogram Transformers (AST)~\cite{gong2021ast}, which have achieved state-of-the-art results on several audio benchmarks~\cite{piczak2015esc,warden2018speech,gemmeke2017audio}. However, there are two main issues with AST that must be addressed for sequential training: parameter inefficiency and computational inefficiency. \noindent\textbf{Parameter Inefficiency.} In TI-CL, the use of pre-trained transformer-based models like AST can lead to parameter inefficiency due to the large number of trainable parameters in full fine-tuning for sequential tasks. This can cause overfitting, especially when the sequential tasks have limited data. \noindent\textbf{Computational Inefficiency.} The transformer's self-attention mechanism has quadratic computational complexity. Hence, the large number of tokens extracted from larger spectrograms (from long-duration audio) quadratically increases the number of computations. Moreover, audio spectrograms cannot simply be resized, since their dimensions are determined by the audio duration and the number of frequency bins; resizing can discard critical information and degrade their quality. Therefore, transformer-based AST shows significant computational inefficiency when processing long-duration audio. \vspace{-0.1em} Therefore, we propose a TI-CL method based on AST and address the issues of parameter and computational efficiency. We leverage Parameter Efficient Transfer (PET) methods to improve the parameter efficiency of AST. Our study evaluates the efficacy of various PET methods for AST on the ESC-50~\cite{piczak2015esc} and SpeechCommandsV2~\cite{warden2018speech} benchmarks and proposes Convolutional Adapters to address parameter inefficiency. Note that the performance of PET methods for AST audio classifiers has not been studied before. The adapters perform as well as fully fine-tuned models in high-resource settings and even outperform them in low-resource settings with $<$5\% of the trainable parameters. \vspace{-0.1em} Next, we propose Frequency-Time factorized Attention (FTA) to address the computational inefficiency of self-attention for long-duration audio spectrograms. Unlike traditional self-attention, FTA enables an arbitrary token to attend only to the frequency and temporal tokens that share the same position index in either axis, thereby leveraging the orthogonal nature of frequency and time in spectrograms (see Fig.~\ref{fig:FTA}). This factorization greatly reduces complexity and improves computational efficiency.
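The attention pattern implied by this description can be encoded as a simple boolean mask, as in the sketch below; whether patch tokens also attend back to the $[CLS]$ token is an assumption of this illustration, and the snippet is not an excerpt of our implementation.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
import torch

def fta_mask(M, T):
    # (MT+1) x (MT+1) boolean mask; True = attention allowed.
    # Token 0 is [CLS]; patch token index = 1 + f*T + t.
    N = M * T + 1
    mask = torch.zeros(N, N, dtype=torch.bool)
    mask[0, :] = True                          # [CLS] attends to all tokens
    f = torch.arange(M).repeat_interleave(T)   # frequency index per patch
    t = torch.arange(T).repeat(M)              # time index per patch
    same = (f[:, None] == f[None, :]) | (t[:, None] == t[None, :])
    mask[1:, 1:] = same                        # same frequency or same time index
    mask[1:, 0] = True                         # patch tokens attend to [CLS] (assumption)
    return mask
\end{lstlisting}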
To achieve both parameter and computational efficiency, we combine Convolutional Adapter and FTA for TI-CL of audio classification. \vspace{-0.2em} The main contributions of this paper can be summarized as follows, \begin{itemize} \item We provide an empirical study on the performance of various PET methods for AST. \vspace{-0.1em} \item We propose TI-CL of audio classifiers with parameter-efficient AST, using Convolutional Adapters. \item We introduce a novel Frequency-Time factorized Attention (FTA) for compute-efficient AST. \vspace{-0.1em} \item Through comprehensive experiments we demonstrate the advantages of the proposed approach for TI-CL of audio classifiers. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{Data/FTA.pdf} \vspace{-1.0em} \caption{Frequency-Time factorized Attention for a (yellow) token along the frequency and time axis.} \label{fig:FTA} \vspace{-2.0em} \end{figure} \vspace{-1.0em} \section{Related work} \subsection{Continual Learning for Audio} To prevent catastrophic forgetting in continual learning, various methods have been proposed. For example, GIM~\cite{cossu2020continual} incrementally adds new modules to capture drifts in input distribution, DFWF~\cite{ma21b_interspeech} uses a knowledge distillation loss to preserve memory from the original model, and static memory networks~\cite{karam2022task} introduce static memory to reduce memory usage and model complexity. Few-shot CL~\cite{wang2021few} enables fast and interactive model updates in a few-shot learning framework to expand the audio classifier to recognize novel classes, while CTR~\cite{ke2021achieving} addresses both catastrophic forgetting and knowledge transfer issues with a pair of continual learning plugin modules. \subsection{Parameter Efficient Transfer} Many recent works have proposed efficient transfer learning and fine-tuning techniques for downstream tasks, such as Adapter for NLP~\cite{houlsby2019parameter} and similar methods like LoRA~\cite{hulora}, AdaptFormer~\cite{chenadaptformer}, and ConvPass~\cite{jie2022convolutional}. These methods achieve efficient fine-tuning by inserting small trainable bottleneck modules at different locations inside a transformer encoder while freezing other parameters during training. Simple implementations typically involve a down-projection followed by an up-projection. Other methods tune specific parameters in the network, such as BitFit~\cite{zaken2021bitfit}, which adapts the model for different tasks by tuning the bias terms of the transformer layers, LayerNorm Tune~\cite{kim2021adapt}, which tunes the affine transformation parameters in the encoder normalization layers, and Prompt Tuning~\cite{jia2022visual}, which optimizes a set of learnable latent tokens that are prepended to the input sequence at every encoder layer for transfer learning. \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{Data/AICL.pdf} \caption{Performance of the AST model in TI-CL setup for three training modes.} \label{fig:main} \vspace{-1.5em} \end{figure*} \section{Methodology} \subsection{Continual Learning (CL) and AST audio classifier} The objective of continual learning is to sequentially train a parameterized model $f_{\boldsymbol{\theta}}$ over a set of $n$ tasks $D \in \{D_1, D_2, ..., D_n\}$. Each task is defined by $D_i \in \{X_i, Y_i\}, i \in [1, n]$, where $X$ is the set of input samples and $Y$ is the set of corresponding labels. 
The parameterized function $f_{\boldsymbol{\theta}}: x \xrightarrow{} y$ maps the input $x \in X$ to the corresponding label $y \in Y$ and the goal of CL is to train $f_{\boldsymbol{\theta}}$ such that it can correctly predict the label $y$ for an unseen arbitrary input $x$ sampled across $D$. If $D$ is an audio classification task, then $f_{\boldsymbol{\theta}}$ is a pre-trained AST model with total weights $\boldsymbol{\theta}$, $x \in X$ is a spectrogram image and $y \in Y$ is the corresponding audio class label. $f_{\boldsymbol{\theta}}$ extracts tokens of shape $\boldsymbol{Z} \in \mathbb{R} ^ {(MT+1) \times d}$ from $x$, where $M$ and $T$ denotes the tokens in frequency and time axis, $d$ is the embedding dimension and $1$ denotes the class token. These tokens are processed by a series of 12 transformer encoders with Multi-Head Self-Attention (MHSA), Multi Layer Perceptron (MLP) and Layer Normalization (LN) sublayers, and can be formulated as, \begin{equation}\footnotesize \label{eq2} \begin{split} \boldsymbol{Z'_l} &= MHSA(LN_1(\boldsymbol{Z_{l-1}})) + \boldsymbol{Z_{l-1}}, \\ \boldsymbol{Z_l} &= MLP(LN_2(\boldsymbol{Z'_l})) + \boldsymbol{Z'_l}, \end{split} \end{equation} \noindent where $l$ denotes the layer number and $\boldsymbol{Z_l}$ is the extracted tokens from layer $l$. \subsection{Adapter Incremental Continual Learning of AST} Task Incremental Continual Learning is one of the three scenarios for CL~\cite{van2019three}, where it assumes that the tasks $D_i$ are disjoint and the task ID $i$ is known both during training and inference. Full-finetuning $f_{\boldsymbol{\theta}}$ on the sequential tasks by optimizing $\boldsymbol{\theta}$ may not be efficient and may lead to the overfitting issue. A parameter incremental approach to solve TI-CL involves training a parameterized network with multiple task-specific sub-modules denoted as $f_{\boldsymbol{\theta}+\delta{\boldsymbol{\theta}}}$, where $\theta$ is the shared task-independent parameter, $\delta{\boldsymbol{\theta}} \in \{\boldsymbol{\theta_1}, \boldsymbol{\theta_2}, ..., \boldsymbol{\theta_n}\}$ are the task-specific parameters and $\boldsymbol{\theta} >> \delta\boldsymbol{\theta}$. We propose an adapter incremental method for TI-CL called Adapter Incremental Continual Learning (AI-CL), where a Convolutional Adapter (CA) is incrementally added and trained for each task while keeping the shared $\boldsymbol{\theta}$ frozen. We denote the weights of task-specific CA as $\delta\boldsymbol{\theta_i}$ for every new task $D_i$. CA has a bottleneck structure, which consists of a down-projection followed by an up-projection with an additional 2D convolution layer in between. CA processes any input tokens $\boldsymbol{z}$ as, \begin{equation}\footnotesize \label{eq3} CA(\boldsymbol{z}) = \boldsymbol{W_{up}}(Conv2D^*(\boldsymbol{W_{down}^*}(\boldsymbol{z}))), \end{equation} \noindent where $\boldsymbol{W_{down}} \in \mathbb{R} ^ {d \times d'}$, $\boldsymbol{W_{up}} \in \mathbb{R} ^ {d' \times d}$, $d' << d$ and $*$ denotes the non-linear GELU activation. CA runs parallel to both MHSA and MLP layers, which can be represented as, \begin{equation}\footnotesize \label{eq4} \begin{split} \boldsymbol{Z'_l} &= MHSA(LN_1( \boldsymbol{Z_{l-1}})) + \boldsymbol{Z_{l-1}} + CA_1(LN_1( \boldsymbol{Z_{l-1}})), \\ \boldsymbol{Z_l} &= MLP(LN_2(\boldsymbol{Z'_l})) + \boldsymbol{Z'_l} + CA_2(LN_2(\boldsymbol{Z'_l})). 
\end{split} \end{equation} \noindent The proposed AI-CL method using CA is parameter efficient since only the CA weights $\delta\boldsymbol{\theta_i}$ are trainable and saving these weights also occupies less storage. The backbone weights $\boldsymbol{\theta}$ are frozen and shared across tasks, both during the training and inference stage. During inference, when a test audio spectrogram $x$ is passed along with the task ID $i$, the AST model routes the tokens $\boldsymbol{Z}$ to the corresponding CA with the parameter $\delta\boldsymbol{\theta_i}$ and the corresponding classifier. The AST model with multiple task-specific CAs is illustrated in Fig \ref{fig:TICL}. \subsection{Frequency-Time factorized Attention (FTA)} While the AI-CL approach is parameter-efficient, the use of self-attention in AST results in a quadratic increase in computations (\emph{i.e.}, the number of floating point operations or FLOPS) for larger spectrograms. To address this issue, prior alternatives to self-attention either limit self-attention to a local window~\cite{liu2021swin} or factorize self-attention along two orthogonal axis~\cite{arnab2021vivit}, but they were developed for images and videos. Inspired by the factorization approach~\cite{arnab2021vivit}, we propose Frequency-Time factorized Attention (FTA) in the AI-CL method as shown in Figure \ref{fig:FTA}. It factorizes self-attention across the frequency and time axis of a spectrogram, by masking out the undesired tokens. This approach makes AST more computationally efficient, with attention along the frequency (vertical) axis learning the distribution of various frequency components at a given time interval, and attention along the time (horizontal) axis learning how a frequency component evolves over time. The only exception is the $[CLS]$ token, which attends to all the tokens (including itself) since it must summarize the semantic information in a spectrogram. For a token $\boldsymbol{Z} \in \mathbb{R} ^ {(MT+1) \times d}$, the computation complexity $\mathcal{O}$ of Global Self-Attention (GSA) and FTA can be calculated as follows, \begin{equation}\footnotesize \label{eq5} \begin{split} \mathcal{O}_{GSA} &= (MT+1)^2*d, \\ \mathcal{O}_{FTA} &= (MT(M+T+1)+1)*d, \end{split} \end{equation} where $(M+T)<<MT$. Thus, when $M$ and $T$ grow, FTA has much fewer computations than GSA. Empirically, we show that the proposed Frequency Time factorized Attention (FTA) achieves competitive performance to global self-attention with only a fraction of the computations. \begin{table*}[t] \caption{Computational efficiency of the proposed FTA. $k$ denotes the factor of GSA computations required by FTA.} \label{tab:FTA advantages} \centering \resizebox{0.75\textwidth}{!}{% \begin{tabular}{ccccccccc} \toprule Dataset & Duration & Spectrogram Shape & Freq (M Tokens) & Time (T Tokens) & $\mathcal{O}_{GSA}/d$ & $\mathcal{O}_{FTA}/d$ & $k$ \\ \midrule SCv2 & 1s & [128,101] & 12 & 9 & 11881 & 2377 & 0.2 \\ ESC-50 & 5s & [128,501] & 12 & 49 & 346921 & 36457 & 0.105 \\ AVE & 10s & [128,1006] & 12 & 100 & 1442401 & 135601 & 0.094 \\ \bottomrule \end{tabular}} \vspace{-0.8em} \end{table*} \section{Results} \subsection{Experimental Setup} \textbf{Datasets.} The datasets used for PET evaluation and TI-CL experiments are: \begin{itemize} \item ESC-50~\cite{piczak2015esc}, which contains 2,000 5-second audio recordings organized into 50 classes for environmental sound classification. The standard 5-fold cross-validation is used unless otherwise specified. 
\item Speech Commands V2 (SCv2)~\cite{warden2018speech}, which includes 105k 1-second recordings of 35 speech classes for speech recognition. The standard training and test set split is used with 84,843 and 11,005 samples respectively. \item AVE~\cite{tian2018audio}, an event localization dataset of 4,143 samples covering 28 events with a duration of 10 seconds (long duration). Only the audio modality is used, and the original train-test split for audio classification is followed. \end{itemize} \noindent\textbf{Model.} Our system is built upon the AST model, an ImageNet pre-trained ViT/B-16 model with 12 transformer encoders. We process audio input by converting the waveform into a log mel spectrogram with 128 Mel bins, a 25ms Hamming window, and a hop length of 10ms, without any data augmentation. Tokens are extracted using a convolutional feature extractor with a kernel size of 16, a stride of 10, and a dimensionality of 768, with position embeddings added via bilinear interpolation. The model is trained using Adam optimizer with a learning rate of 3e-4 and cross-entropy loss, with batch sizes of 128/32/12 for the SCv2/ESC-50/AVE datasets. We train the model for 5/20/15 epochs on the respective datasets. \begin{table}[t] \caption{Evaluation of PET methods for AST.} \label{tab:PET comparison} \centering \resizebox{0.42\textwidth}{!}{% \begin{tabular}{cccc} \toprule \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Params (Million)}} & \multicolumn{2}{c}{\textbf{Accuracy (\%)}} \\ \cline{3-4} & \\[-1.0ex] & & \textbf{ESC-50} & \textbf{SCv2} \\ \midrule Linear & 0.26 & 71.05 & 81.44 \\ LayerNorm Tune~\cite{kim2021adapt} & 0.27 & 72.75 & 89.2 \\ BitFit~\cite{zaken2021bitfit} & 0.32 & 72 & 87.91 \\ AdaptFormer~\cite{chenadaptformer} & 1.43 & 83 & 92.3 \\ Prompt Tuning~\cite{jia2022visual} & 2.17 & 78.85 & 91.64 \\ LoRA~\cite{hulora} & 2.6 & 79.05 & 92.14 \\ Houlsby~\cite{houlsby2019parameter} & 2.62 & 69.75 & 90.83 \\ \rowcolor{Gainsboro!60}ConvPass~\cite{jie2022convolutional} & \textbf{3.5} & \textbf{83.3} & \textbf{93.42} \\ \hline Full Fine Tuning & \textbf{86.33} & \textbf{82.3} & \textbf{94.58} \\ \bottomrule \end{tabular} } \vspace{-0.5em} \end{table} \vspace{-0.5em} \subsection{Evaluation of PET methods} \vspace{-0.5em} While several PET methods have been proposed for NLP and Vision tasks, their effectiveness in audio classification remains largely unexplored. In this study, we evaluated several PET methods on the ESC-50 and SCv2 datasets, and found that AdaptFormer~\cite{chenadaptformer} and ConvPass~\cite{jie2022convolutional} achieved the highest performance (see Table \ref{tab:PET comparison}). The Linear method was simply adding a trainable linear layer for classification in Table~\ref{tab:PET comparison}. Notably, ConvPass achieved comparable performance to full fine-tuning on SCv2 (with 2.7k samples per class), and even outperformed it on ESC-50 (with only 40 samples per class) while using less than 5\% of trainable parameters. The evaluation provides compelling evidence for the effectiveness of a parameter efficient strategy. Therefore, we adopted the Convolutional Adapter for further investigation in TI-CL. \begin{table}[t] \caption{Comparison of parameter and storage cost for three training modes in TI-CL setup.} \label{tab:TICL params} \centering \resizebox{0.43\textwidth}{!}{% \begin{tabular}{ccccc} \toprule & Trainable Params & Total Params & Storage \\ \midrule Model Seq. & 86.5M & 86.62M & 348MB \\ Model Inc. 
& 86.5M & 259.63M & 1.02GB \\ \rowcolor{Gainsboro!60}Adapter Inc. & 3.5M & 96.6M & 47MB \\ \bottomrule \end{tabular} } \vspace{-1.0em} \end{table} \begin{table}[t] \caption{Performance of FTA vs GSA on three tasks.} \label{tab:FTA performance} \centering \resizebox{0.32\textwidth}{!}{% \begin{tabular}{cccc} \toprule \multirow{2}{*}{Method} & \multicolumn{3}{c}{Accuracy (\%)} \\ \cline{2-4} & \\[-1.5ex] & SCv2 & ESC-50 & AVE (Audio) \\ \midrule GSA & 93.57 & 85.25 & 69.1 \\ \rowcolor{Gainsboro!60} FTA & 92.81 & 83 & 66.42 \\ \bottomrule \end{tabular} } \vspace{-1.0em} \end{table} \subsection{Adapter Incremental Continual Learning of AST} \textbf{Formulation.} The TI-CL setup consists of three tasks: SCv2, ESC-50, and AVE, which are performed in sequential order. In each task of TI-CL, only the corresponding dataset is available for training and the datasets from previous tasks are no longer available. Only the test data of previous tasks are used to evaluate the model performance after training on the current task. \noindent\textbf{Training Modes.} To demonstrate the proposed approach's advantages, we trained the AST model in three different modes, following the sequential training order. These modes are: \begin{itemize} \item Model Sequential: The same AST model is trained repeatedly on new tasks. \item Model Incremental: For every new task, a new AST model is trained independently. \item Adapter Incremental: The proposed approach described in Section 3, where new adapter modules are added to the frozen backbone with FTA for new tasks. \end{itemize} The first two modes rely on GSA, and the ESC-50 task is evaluated with a single fold. \noindent\textbf{Performance vs Parameter-Efficiency.} Figure \ref{fig:main} displays the performance of the AST model for the three training modes. In the Model Sequential setting, catastrophic forgetting occurred: the model weights optimized for a new task forgot the knowledge gained from previous tasks, leading to a significant performance drop. However, the Model Incremental setting trained the models independently for each task, thereby resolving this issue. The proposed Adapter Incremental method also addressed the catastrophic forgetting issue by training independent task-specific adapter modules and showed competitive performance on all three tasks. However, the Model Sequential and Model Incremental settings were less efficient than the Adapter Incremental method in terms of total model parameters and trainable parameters, as illustrated in Table \ref{tab:TICL params}. Note that the total parameter counts are those required for inference upon the completion of sequential training on the three tasks. The Model Incremental setting had a large number of total parameters, and both the Model Sequential and Model Incremental settings required nearly 25 times more trainable parameters than Adapter Incremental. Overall, the proposed Adapter Incremental method for TI-CL combined the best of performance and parameter efficiency, delivering stable performance while minimizing the number of trainable parameters. Also, the Adapter Incremental setting has a substantially lower storage cost because only the adapter weights need to be saved, unlike the other two settings, which store the weights of the entire model(s). \subsection{Impact of FTA} We conducted a study to compare the computational efficiency and performance of our proposed FTA with Global Self-Attention (GSA) on three datasets, each with varying maximum audio durations.
In Table \ref{tab:FTA advantages}, we summarize the details of our comparison. Our results showed that FTA required significantly fewer computations than GSA, especially with longer audio durations (the larger spectrograms). To further evaluate the performance of FTA and GSA, we implemented both methods using the Convolutional Adapter model and measured their audio classification accuracies on the three datasets. The results are presented in Table \ref{tab:FTA performance}. We found that FTA performed competitively with GSA in terms of accuracy, but with only a fraction of the computational resources required by self-attention. Overall, our study demonstrates that FTA is a promising approach for audio classification tasks, as it achieves comparable accuracy to GSA while using significantly fewer computational resources. \section{Conclusions} In this work, we proposed a new method called Adapter Incremental Continual Learning (AI-CL) for audio classification in the context of Task Incremental Continual Learning (TI-CL) of AST audio classifiers. AI-CL improved parameter efficiency with the introduction of Convolutional Adapters for AST. To enhance compute efficiency for longer audio streams, we proposed a new method called Frequency-Time factorized Attention. Our experiments have shown that AI-CL is both parameter-efficient and compute-efficient. AI-CL enables continual learning with minimal resources, which can be scaled effectively for a large number of tasks. \section{Acknowledgements} This work was carried out at the Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University, Singapore. The research is supported by the DSO National Laboratories, under the project agreement No. DSOCL21238. \bibliographystyle{IEEEtran}
{ "arxiv_id": "2302.14244", "language": "en", "timestamp": "2023-03-01T02:06:32", "url": "https://arxiv.org/abs/2302.14244", "yymm": "2302" }
\section*{Abstract} \input{0_Abstract.tex} Keywords: Unsteady; Regular reflection; Mach reflection; Moving wedge; Dynamic shock waves; Supersonic flow; Dual solution domain. \section{Introduction} \input{2_Introduction.tex} \section{Computational Model} \input{3_Computational_Model.tex} \subsection{Model Description} \input{4_Model_Description.tex} \subsection{Governing Equations} \input{5_Governing_Equations.tex} \subsection{Equations of Motion} \input{6_Equations_of_Motion.tex} \subsection{Computational Domain} \input{7_Computational_Domain.tex} \section{Results and Discussion} \input{8_Results_and_Discussion.tex} \subsection{Dynamic Transition from RR to MR} \input{9_Dynamic_transition_from_RR_to_MR.tex} \subsection{Mach Stem Height} \input{10_Mach_Stem_Height.tex} \subsection{Shock Reflection Domain} \input{11_Shock_Reflection_Domain.tex} \section{Conclusion} \input{12_Conclusion.tex}
{ "arxiv_id": "2302.14252", "language": "en", "timestamp": "2023-03-01T02:06:57", "url": "https://arxiv.org/abs/2302.14252", "yymm": "2302" }
\section{Additional Details on FixupResNet20} FixupResNet20 \cite{zhang2019fixup} is obtained from the popular ResNet20 \cite{he2016deep} by removing the BatchNorm layers \cite{ioffe2015batch}. BatchNorm layers normalize hidden activations using the mean and variance computed from the data fed into the model. In our experiment, the data on nodes are heterogeneous. If the models include BatchNorm layers, then even if all nodes have the same model parameters after training, their testing performance on the whole data would differ across nodes, because the mean and variance of the hidden layers are computed from the heterogeneous local data. Thus we use FixupResNet20 instead of ResNet20. \section{Some Key Existing Lemmas} For an $L$-smooth function $f_i$, it holds for any ${\mathbf{x}}, {\mathbf{y}}\in\dom(r)$, \begin{align}\label{eq:assump-to-f_i} \textstyle \big|f_i({\mathbf{y}}) - f_i({\mathbf{x}}) - \langle \nabla f_i({\mathbf{x}}), {\mathbf{y}}-{\mathbf{x}}\rangle\big| \le \frac{L}{2}\|{\mathbf{y}}-{\mathbf{x}}\|^2. \end{align} From the smoothness of $f_i$ in Assumption \ref{assu:prob}, it follows that $f = \frac{1}{n}\sum_{i=1}^n f_i$ is also $ L$-smooth in $\dom(r)$. When $f_i$ is $ L$-smooth in $\dom(r)$, we have that $f_i(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex. Since $r(\cdot)$ is convex, $\phi_i(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, i.e., $\phi_i$ is $L$-weakly convex for each $i$. So is $\phi$. In the following, we give some lemmas about weakly convex functions. The following result is from Lemma II.1 in \cite{chen2021distributed}. \begin{lemma}\label{lem:weak_convx} For any function $\psi$ on $\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, then for any ${\mathbf{x}}_1, {\mathbf{x}}_2, \ldots, {\mathbf{x}}_m\in\mathbb{R}^d$, it holds that \[ \psi\left(\sum_{i=1}^m a_i{\mathbf{x}}_i\right)\leq \sum_{i=1}^m a_i \psi({\mathbf{x}}_i) + \frac{L}{2} \sum_{i=1}^{m-1} \sum_{j=i+1}^m a_i a_j \|{\mathbf{x}}_i-{\mathbf{x}}_j\|^2, \] where $a_i\geq 0$ for all $i$ and $\sum_{i=1}^m a_i=1$. \end{lemma} The first result below is from Lemma II.8 in \cite{chen2021distributed}, and the nonexpansiveness of the proximal mapping of a closed convex function is well known. \begin{lemma} \label{lem:prox_diff} For any function $\psi$ on $\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, then the proximal mapping with $\lambda< \frac{1}{L}$ satisfies \[ \|\prox_{\lambda \psi}({\mathbf{x}}_1)-\prox_{\lambda \psi}({\mathbf{x}}_2)\|\leq \frac{1}{1-\lambda L} \|{\mathbf{x}}_1-{\mathbf{x}}_2\|. \] For a closed convex function $r(\cdot)$, its proximal mapping is nonexpansive, i.e., \[ \|\prox_{r}({\mathbf{x}}_1)-\prox_{r}({\mathbf{x}}_2)\|\leq \|{\mathbf{x}}_1-{\mathbf{x}}_2\|. \] \end{lemma} \begin{lemma} For $\mathrm{DProxSGT}$ in Algorithm \ref{alg:DProxSGT} and $\mathrm{CDProxSGT}$ in Algorithm \ref{alg:CDProxSGT}, we have \begin{gather} \bar{\mathbf{y}}^t =\overline{\nabla} \mathbf{F}^t, \quad \bar{\mathbf{x}}^{t} = \bar{\mathbf{x}}^{t+\frac{1}{2}} = \frac{1}{n} \sum_{i=1}^n \prox_{\eta r}\left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right).
\label{eq:x_y_mean} \end{gather} \end{lemma} \begin{proof} For DProxSGT in Algorithm \ref{alg:DProxSGT}, taking the average among the workers on \eqref{eq:y_half_update} to \eqref{eq:x_1_update} gives \begin{align} \bar{\mathbf{y}}^{t-\frac{1}{2}} = \bar{\mathbf{y}}^{t-1} + \overline{\nabla} \mathbf{F}^t - \overline{\nabla} \mathbf{F}^{t-1}, \quad \bar{\mathbf{y}}^t =\bar{\mathbf{y}}^{t-\frac{1}{2}}, \quad \bar{\mathbf{x}}^{t+\frac{1}{2}} = \frac{1}{n} \sum_{i=1}^n \prox_{\eta r}\left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right), \quad \bar{\mathbf{x}}^{t} = \bar{\mathbf{x}}^{t+\frac{1}{2}},\label{eq:proof_mean} \end{align} where $\mathbf{1}^\top\mathbf{W}=\mathbf{1}^\top$ follows from Assumption \ref{assu:mix_matrix}. With $\bar{\mathbf{y}}^{-1}=\overline{\nabla} \mathbf{F}^{-1}$, we have \eqref{eq:x_y_mean}. Similarly, for CDProxSGT in Algorithm \ref{alg:CDProxSGT}, taking the average on \eqref{eq:alg3_1_matrix} to \eqref{eq:alg3_6_matrix} will also give \eqref{eq:proof_mean} and \eqref{eq:x_y_mean}. \end{proof} In the rest of the analysis, we define the Moreau envelope of $\phi$ for $\lambda\in(0,\frac{1}{L})$ as \begin{align*} \phi_\lambda({\mathbf{x}}) = \min_{\mathbf{y}}\left\{\phi({\mathbf{y}}) + \frac{1}{2\lambda}\|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}. \end{align*} Denote the minimizer as \begin{align*} \prox_{\lambda \phi}({\mathbf{x}}):= \argmin_{{\mathbf{y}}} \phi({\mathbf{y}})+\frac{1}{2\lambda} \|{\mathbf{y}}-{\mathbf{x}}\|^2. \end{align*} In addition, we will use the notation $\widehat{{\mathbf{x}}}^t_i$ and $\widehat{{\mathbf{x}}}^{t+\frac{1}{2}}_i$ that are defined by \begin{align} \widehat{{\mathbf{x}}}^t_i = \prox_{\lambda \phi}({\mathbf{x}}^t_i),\ \widehat{{\mathbf{x}}}^{t+\frac{1}{2}}_i = \prox_{\lambda \phi}({\mathbf{x}}^{t+\frac{1}{2}}_i),\, \forall\, i\in\mathcal{N}, \label{eq:x_t_hat} \end{align} where $\lambda \in(0,\frac{1}{L})$. \section{Convergence Analysis for CDProxSGT} \label{sec:proof_CDProxSGT} In this section, we analyze the convergence rate of CDProxSGT. Similar to the analysis of DProxSGT, we establish a Lyapunov function that involves consensus errors and the Moreau envelope. But due to the compression, compression errors $\|\widehat\mathbf{X}^t-\mathbf{X}^t\|$ and $\|\widehat\Y^t-\Y^t\|$ will occur. Hence, we will also include the two compression errors in our Lyapunov function. 
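For intuition, a standard example of a compressor with the contraction property used below (Assumption \ref{assu:compressor}), namely $\mathbb{E}_Q\big[\|Q[{\mathbf{x}}]-{\mathbf{x}}\|^2\big]\le \alpha^2\|{\mathbf{x}}\|^2$, is the Top-$k$ operator sketched here; this is only an illustration of the assumption and not necessarily the compressor used in our experiments.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
import numpy as np

def top_k(x, k):
    # Keep the k largest-magnitude entries of x and zero the rest.
    # Deterministically, ||top_k(x) - x||^2 <= (1 - k/len(x)) ||x||^2,
    # so it is contractive with alpha^2 = 1 - k/len(x).
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out
\end{lstlisting}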
Again, we can equivalently write a matrix form of the updates \eqref{eq:alg3_1}-\eqref{eq:alg3_6} in Algorithm \ref{alg:CDProxSGT} as follows: \begin{gather} \Y^{t-\frac{1}{2}} = \Y^{t-1} + \nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}, \label{eq:alg3_1_matrix}\\ \underline\Y^{t} = \underline\Y^{t-1} + Q_{\mathbf{y}}\big[\Y^{t-\frac{1}{2}} - \underline\Y^{t-1}\big], \label{eq:alg3_2_matrix}\\ \Y^{t} = \Y^{t-\frac{1}{2}} +\gamma_y \underline\Y^{t}(\mathbf{W}-\mathbf{I}), \label{eq:alg3_3_matrix}\\ \mathbf{X}^{t+\frac{1}{2}} =\prox_{\eta r} \left(\mathbf{X}^t - \eta \Y^{t}\right), \label{eq:alg3_4_matrix}\\ \underline\mathbf{X}^{t+1} = \underline\mathbf{X}^{t} + Q_{\mathbf{x}}\big[\mathbf{X}^{t+\frac{1}{2}} - \underline\mathbf{X}^{t}\big], \label{eq:alg3_5_matrix}\\ \mathbf{X}^{t+1} = \mathbf{X}^{t+\frac{1}{2}}+\gamma_x\underline\mathbf{X}^{t+1}(\mathbf{W}-\mathbf{I}).\label{eq:alg3_6_matrix} \end{gather} When we apply the compressor to the column-concatenated matrix in \eqref{eq:alg3_2_matrix} and \eqref{eq:alg3_5_matrix}, it means applying the compressor to each column separately, i.e., $Q_{\mathbf{x}}[\mathbf{X}] = [Q_x[{\mathbf{x}}_1],Q_x[{\mathbf{x}}_2],\ldots,Q_x[{\mathbf{x}}_n]]$. Below we first analyze the progress by the half-step updates of $\Y$ and $\mathbf{X}$ from $t+1/2$ to $t+1$ in Lemmas \ref{lem:prepare_comp_y} and \ref{lem:Xhat_Xhalf_comp}. Then we bound the one-step consensus error and compression error for $\mathbf{X}$ in Lemma \ref{lem:X_consensus_comperror} and for $\Y$ in Lemma \ref{lem:Y_consensus_comperror}. The bound of $\mathbb{E}[\phi_\lambda({\mathbf{x}}_i^{t+1})]$ after one-step update is given in \ref{lem:phi_one_step}. Finally, we prove Theorem \ref{thm:sect3thm} by building a Lyapunov function that involves all the five terms. \begin{lemma} \label{lem:prepare_comp_y} It holds that \begin{align} \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] \leq &~2 \alpha^2\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + 6 \alpha^2 n \sigma^2 + 4 \alpha^2 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big], \label{eq:2.3.2_1} \\ \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] \leq &~\frac{1+\alpha^2}{2}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \frac{6 n \sigma^2}{1-\alpha^2} + \frac{4 L^2}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big]. \label{eq:2.3.2} \end{align} \end{lemma} \begin{proof} From \eqref{eq:alg3_1} and \eqref{eq:alg3_2}, we have \begin{align} &~ \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] = \mathbb{E}\big[\mathbb{E}_Q\big[\|Q_{\mathbf{y}}\big[\Y^{t+\frac{1}{2}}-\underline\Y^{t}\big]- (\Y^{t+\frac{1}{2}}-\underline\Y^{t})\|^2\big]\big] \nonumber\\ \leq &~ \alpha^2\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}-\underline\Y^{t}\|^2\big] = \alpha^2\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t} +\nabla \mathbf{F}^{t+1}-\nabla \mathbf{F}^{t}\|^2\big]\nonumber\\ \leq &~ \alpha^2(1+\alpha_0)\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \alpha^2(1+\alpha_0^{-1})\mathbb{E}\big[\|\nabla \mathbf{F}^{t+1}-\nabla \mathbf{F}^{t}\|^2\big] \nonumber\\ \leq &~ \alpha^2(1+\alpha_0)\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \alpha^2(1+\alpha_0^{-1}) \left(3 n \sigma^2 + 2 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big]\right), \label{eq:2.3.2_0} \end{align} where the first inequality holds by Assumption \ref{assu:compressor}, $\alpha_0$ can be any positive number, and the last inequality holds by \eqref{eq:y_cons12} which still holds for CDProxSGT. 
Taking $\alpha_0=1$ in \eqref{eq:2.3.2_0} gives \eqref{eq:2.3.2_1}. Letting $\alpha_0=\frac{1-\alpha^2}{2}$ in \eqref{eq:2.3.2_0}, we obtain $\alpha^2(1+\alpha_0) = (1-(1-\alpha^2))(1+\frac{1-\alpha^2}{2}) \leq \frac{1+\alpha^2}{2}$ and $\alpha^2(1+\alpha_0^{-1}) \leq \frac{2}{1-\alpha^2}$, and thus \eqref{eq:2.3.2} follows. \end{proof} \begin{lemma} \label{lem:Xhat_Xhalf_comp} Let $\eta\leq \lambda \leq\frac{1}{4 L}$. Then \begin{align} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ 4\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left( 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2, \label{eq:hatx_xprox_comp}\\ \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ 3\alpha^2 \left(\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+ \mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big]\right), \label{eq:X_-X_1}\\ \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ \frac{16}{1-\alpha^2}\Big( \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]+ \eta^2\mathbb{E}\big[\|\Y^t_\perp\|^2\big]\Big) + \frac{1+\alpha^2}{2} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber\\ &~ +\frac{8}{1-\alpha^2}\left( \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +\eta^2\sigma^2\right). \label{eq:2.2.2} \end{align} Further, if $\gamma_x\leq \frac{2\sqrt{3}-3}{6\alpha}$, then \begin{align} \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \leq &~ 30\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] +4\sqrt{3} \alpha \gamma_x \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +16\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber \\ &~ + 8\mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 8\eta^2\sigma^2. \label{eq:2.2.3} \end{align} \end{lemma} \begin{proof} The proof of \eqref{eq:hatx_xprox_comp} is the same as that of Lemma \ref{lem:Xhat_Xhalf} because \eqref{eq:alg3_4} and \eqref{eq:x_y_mean} are the same as \eqref{eq:x_half_update} and \eqref{eq:x_y_mean}. For $\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}$, we have from \eqref{eq:alg3_5} that \begin{align} &~ \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] = \mathbb{E}\big[\mathbb{E}_Q\big[\| Q_{\mathbf{x}}\big[\mathbf{X}^{t+\frac{1}{2}} - \underline\mathbf{X}^{t}\big] -(\mathbf{X}^{t+\frac{1}{2}}-\underline\mathbf{X}^{t})\|^2\big]\big] \nonumber \\ \leq &~ \alpha^2 \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\underline\mathbf{X}^{t}\|^2\big] = \alpha^2 \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t+\widehat{\mathbf{X}}^t - \mathbf{X}^t+\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber \\ \le & ~ \alpha^2(1+\alpha_1)\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \alpha^2(1+\alpha_1^{-1})\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t + \widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big] \nonumber \\ \leq &~ \alpha^2(1+\alpha_1)\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + 2\alpha^2(1+\alpha_1^{-1})\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+2\alpha^2(1+\alpha_1^{-1})\mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big], \label{eq:X_-X_0} \end{align} where $\alpha_1$ can be any positive number. 
Taking $\alpha_1 = 2$ in \eqref{eq:X_-X_0} gives \eqref{eq:X_-X_1}. Taking $\alpha_1 = \frac{1-\alpha^2}{2}$ in \eqref{eq:X_-X_0} and plugging \eqref{eq:hatx_xprox_comp} give \eqref{eq:2.2.2}. About $\mathbb{E}[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2]$, similar to \eqref{eq:Xplus1-X}, we have from \eqref{eq:compX_hatW} that \begin{align} &~ \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] = \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x - \mathbf{X}^{t} + \gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big] \nonumber \\ \leq&~(1+\alpha_2) \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x-\mathbf{X}^t\|^2\big] + (1+\alpha_2^{-1}) \mathbb{E}\big[\|\gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big]\nonumber\\ \overset{\eqref{eq:Xplus1-X}, \eqref{eq:X_-X_1}}\leq &~ (1+\alpha_2) \left( 3\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2\big] +3\mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + 12 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]\right) \nonumber \\ &~ + (1+\alpha_2^{-1})4\gamma_x^2 \cdot 3\alpha^2 \left( \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2\big] + \mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big]\right) \nonumber \\ \leq &~ 4\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2\big] + 4 \mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + 14\mathbb{E}\big[\|\mathbf{X}^t _\perp\|^2\big] + 4\sqrt{3} \alpha \gamma_x \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big], \nonumber \end{align} where in the first inequality $\alpha_2$ could be any positive number, in the second inequality we use \eqref{eq:X_-X_1}, and in the last inequality we take $\alpha_2 = 2\gamma_x \alpha$ and thus with $\gamma_x\leq \frac{2\sqrt{3}-3}{6\alpha}$, it holds $ 3(1+\alpha_2) +12\gamma_x^2\alpha^2(1+\alpha_2^{-1}) = 3(1+2\gamma_x\alpha)^2 \leq 4$, $12(1+\alpha_2)\leq 8\sqrt{3}\leq 14$, $(1+\alpha_2^{-1})4\gamma_x^2\cdot3\alpha^2 \leq 4\sqrt{3} \alpha \gamma_x$. Then plugging \eqref{eq:hatx_xprox_comp} into the inequality above, we obtain \eqref{eq:2.2.3}. \end{proof} \begin{lemma}\label{lem:X_consensus_comperror} Let $\eta\leq \lambda \leq\frac{1}{4 L}$ and $\gamma_x\leq \min\{\frac{ (1-\widehat\rho_x^2)^2}{60\alpha}, \frac{1-\alpha^2}{25}\}$. Then the consensus error and compression error of $\mathbf{X}$ can be bounded by \begin{align} \mathbb{E}\big[\|\mathbf{X}^{t+1}_\perp\|^2\big] \leq &~ \frac{3+\widehat\rho_x^2}{4} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + 2\alpha \gamma_x (1-\widehat\rho_x^2) \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{9}{4(1-\widehat\rho_x^2)}\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber\\ &~ + 4\alpha \gamma_x (1-\widehat\rho_x^2)\mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 4 \alpha \gamma_x (1-\widehat\rho_x^2)\eta^2\sigma^2, \label{eq:2.4.1}\\ \mathbb{E}\big[\|\mathbf{X}^{t+1}-\underline\mathbf{X}^{t+1}\|^2\big] \leq &~ \frac{21}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +\frac{21}{1-\alpha^2} \eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big]\nonumber\\ &~ + \frac{11}{1-\alpha^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{11}{1-\alpha^2} \eta^2\sigma^2. 
\label{eq:2.5.1} \end{align} \end{lemma} \begin{proof} First, let us consider the consensus error of $\mathbf{X}$. With the update \eqref{eq:compX_hatW}, we have \begin{align} \mathbb{E}\big[\|\mathbf{X}^{t+1}_\perp\|^2\big] \leq &~ (1+\alpha_3)\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x (\mathbf{I}- \mathbf{J})\|^2\big] +(1+\alpha_3^{-1}) \mathbb{E}\big[\|\gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big], \nonumber\\ \leq &~ (1+\alpha_3)\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}(\widehat\mathbf{W}_x - \mathbf{J})\|^2\big] + (1+\alpha_3^{-1})4\gamma_x^2\mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big], \label{eq:XComp_consensus0} \end{align} where $\alpha_3$ is any positive number, and $\|\mathbf{W}-\mathbf{I}\|_2\leq 2$ is used. The first term in the right hand side of \eqref{eq:XComp_consensus0} can be processed similarly as the non-compressed version in Lemma \ref{lem:XI_J} by replacing $\mathbf{W}$ by $\widehat\mathbf{W}_x$, namely, \begin{align} \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}} (\widehat\mathbf{W}_x-\mathbf{J})\|^2\big] \leq &~ \textstyle \frac{1+\widehat\rho^2_x}{2} \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big]+ \frac{2\widehat\rho^2_x \eta^2 }{1-\widehat\rho^2_x} \mathbb{E}\big[\| \Y^{t}_\perp \|^2\big]. \label{eq:XComp_consensus1} \end{align} Plugging \eqref{eq:XComp_consensus1} and \eqref{eq:X_-X_1} into \eqref{eq:XComp_consensus0} gives \begin{align*} &~ \mathbb{E}\big[\|\mathbf{X}^{t+1}_\perp\|^2\big] \leq ~ (1+\alpha_3)\left( \textstyle \frac{1+\widehat\rho^2_x}{2} \mathbb{E}\big[\| \mathbf{X}^{t}_\perp \|^2\big]+ \frac{2\widehat\rho^2_x \eta^2 }{1-\widehat\rho^2_x} \mathbb{E}\big[\| \Y^{t}_\perp \|^2\big]\right) \\ &~ + (1+\alpha_3^{-1})12 \alpha^2 \gamma_x^2 \left(\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+ \mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big]\right)\\ \overset{\eqref{eq:hatx_xprox_comp}}{\leq} &~ \left( \textstyle \frac{1+\widehat\rho_x^2}{2}(1+\alpha_3) + 48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) \right) \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] \nonumber\\ &~+ 12\alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +\left( \textstyle \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2}(1+\alpha_3) +48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1})\right)\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \\ &~ +24 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +24 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1})\eta^2\sigma^2. \end{align*} Let $\alpha_3 = \frac{7\alpha\gamma_x}{1-\widehat\rho_x^2}$ and $\gamma_x\leq \frac{(1-\widehat\rho_x^2)^2}{60\alpha}$. 
Then $\alpha^2 \gamma_x^2 (1+\alpha_3^{-1})=\alpha\gamma_x (\alpha\gamma_x+\frac{1-\widehat\rho_x^2}{7})\leq \alpha\gamma_x (\frac{ (1-\widehat\rho_x^2)^2}{60}+\frac{1-\widehat\rho_x^2}{7})\leq \frac{\alpha\gamma_x (1-\widehat\rho_x^2)}{6}$ and \begin{align*} &~ \textstyle \frac{1+\widehat\rho_x^2}{2}(1+\alpha_3) + 48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) = \frac{1+\widehat\rho_x^2}{2} + 48 \alpha^2 \gamma_x^2 + \frac{7\alpha\gamma_x}{1-\widehat\rho_x^2} + \frac{48\alpha\gamma_x(1-\widehat\rho_x^2)}{7} \\ \leq&~ \textstyle \frac{1+\widehat\rho_x^2}{2} + \frac{48}{60^2}(1-\widehat\rho_x^2)^4 + \frac{7}{60}(1-\widehat\rho_x^2) + \frac{7}{60}(1-\widehat\rho_x^2)^3\leq \frac{1+\widehat\rho_x^2}{2} + \frac{ 1-\widehat\rho_x^2}{4} = \frac{3+\widehat\rho_x^2}{4},\\ &~ \textstyle \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2}(1+\alpha_3) + 48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) = \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2} + 48 \alpha^2 \gamma_x^2 + \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2} \frac{7 \alpha \gamma_x }{1-\widehat\rho_x^2} + \frac{48\alpha\gamma_x(1-\widehat\rho_x^2)}{7}\\ \leq &~ \textstyle \frac{1}{1-\widehat\rho_x^2} \left( 2\widehat\rho_x^2 + \frac{48}{60^2} (1-\widehat\rho_x^2) + \frac{14\widehat\rho_x^2}{60} + \frac{7}{60}(1-\widehat\rho_x^2) \right) \leq \frac{1}{1-\widehat\rho_x^2} \left( 2\widehat\rho_x^2 + \frac{48}{60^2} + \frac{7}{60} \right) \leq \frac{9}{4(1-\widehat\rho_x^2)}. \end{align*} Thus \eqref{eq:2.4.1} holds. Now let us consider the compression error of $\mathbf{X}$. By \eqref{eq:alg3_6}, we have \begin{align} &~\mathbb{E}\big[\|\mathbf{X}^{t+1}-\underline\mathbf{X}^{t+1}\|^2\big] = \mathbb{E}\big[\|(\underline\mathbf{X}^{t+1} - \mathbf{X}^{t+\frac{1}{2}}) \big(\gamma_x(\mathbf{W}-\mathbf{I}) -\mathbf{I}\big) + \gamma_x \mathbf{X}^{t+\frac{1}{2}} (\mathbf{I}-\mathbf{J}) (\mathbf{W}-\mathbf{I}) \|^2\big] \nonumber\\ \leq&~ (1+\alpha_4) (1+2\gamma_x)^2 \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] + (1+\alpha_4^{-1})4 \gamma_x^2 \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}_\perp\|^2\big],\label{eq:2.5.1.0} \end{align} where we have used $\mathbf{J}\mathbf{W}=\mathbf{J}$ in the equality, $\|\gamma_x (\mathbf{W}-\mathbf{I}) -\mathbf{I}\|_2\leq \gamma_x\|\mathbf{W}-\mathbf{I}\|_2+\|\mathbf{I}\|_2\leq 1+2\gamma_x$ and $\|\mathbf{W}-\mathbf{I}\|_2\leq 2$ in the inequality, and $\alpha_4$ can be any positive number. For the second term on the right hand side of \eqref{eq:2.5.1.0}, we have \begin{align} \|\mathbf{X}^{t+\frac{1}{2}}_\perp\|^2 \overset{\eqref{eq:alg3_4}}{=}&~ \left\|\left(\prox_{\eta r} \left(\mathbf{X}^t - \eta \Y^{t}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^t - \eta \bar{\mathbf{y}}^{t}\right)\mathbf{1}^\top\right)(\mathbf{I}-\mathbf{J})\right\|^2 \nonumber \\ \leq&~ \|\mathbf{X}^t_\perp- \eta \Y^{t}_\perp\|^2 \leq 2\|\mathbf{X}^t_\perp\|^2+2\eta^2\|\Y^{t}_\perp\|^2, \label{eq:2.2.1} \end{align} where we have used $\mathbf{1}^\top(\mathbf{I}-\mathbf{J})=\mathbf{0}^\top$, $\|\mathbf{I}-\mathbf{J}\|_2\leq 1$, and Lemma \ref{lem:prox_diff}.
Now plugging \eqref{eq:2.2.2} and \eqref{eq:2.2.1} into \eqref{eq:2.5.1.0} gives \begin{align*} \mathbb{E}\big[\|\mathbf{X}^{t+1}-\underline\mathbf{X}^{t+1}\|^2\big] \leq \left( \textstyle (1+\alpha_4^{-1})8\gamma_x^2+(1+\alpha_4) (1+2\gamma_x)^2\frac{16}{1-\alpha^2}\right) \left( \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big]\right) \nonumber\\ \textstyle + (1+\alpha_4) (1+2\gamma_x)^2\frac{1+\alpha^2}{2}\mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +(1+\alpha_4)(1+2\gamma_x)^2\frac{8}{1-\alpha^2} \left( \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \eta^2\sigma^2\right). \end{align*} With $\alpha_4=\frac{1-\alpha^2}{12}$ and $\gamma_x\leq \frac{1-\alpha^2}{25}$, \eqref{eq:2.5.1} holds because $(1+2\gamma_x)^2 \leq 1 + \frac{104}{25}\gamma_x \leq \frac{7}{6}$, $ (1+2\gamma_x)^2\frac{1+\alpha^2}{2}\leq \frac{1+\alpha^2}{2}+\frac{104}{25}\gamma_x\leq \frac{2+\alpha^2}{3}$, and \begin{align} (1+\alpha_4) (1+2\gamma_x)^2\frac{1+\alpha^2}{2} \leq &~ \frac{2+\alpha^2}{3} + \alpha_4 = \frac{3+\alpha^2}{4}, \label{eq:gamma_x_1}\\ (1+\alpha_4^{-1}) 8\gamma_x^2+ (1+\alpha_4) (1+2\gamma_x)^2\frac{16}{1-\alpha^2} \leq&~ \frac{13}{1-\alpha^2}\frac{8}{625} + \frac{13}{12} \frac{7}{6} \frac{16}{1-\alpha^2} \leq \frac{21}{1-\alpha^2}, \label{eq:gamma_x_2}\\ (1+\alpha_4)(1+2\gamma_x)^2\frac{8}{1-\alpha^2} \leq&~ \frac{13}{12} \frac{7}{6}\frac{8}{1-\alpha^2} \leq \frac{11}{1-\alpha^2}. \nonumber \end{align} \end{proof} \begin{lemma} \label{lem:Y_consensus_comperror} Let $\eta\leq \min\{\lambda, \frac{1-\widehat\rho^2_y}{8\sqrt{5} L} \} $, $\lambda \leq\frac{1}{4 L}$, $ \gamma_x\leq \frac{2\sqrt{3}-3}{6\alpha}$, $\gamma_y\leq \min\{\frac{\sqrt{1-\widehat\rho^2_y}}{12\alpha}, \frac{1-\alpha^2}{25}\}$. Then the consensus error and compression error of $\Y$ can be bounded by \begin{align} \mathbb{E}\big[\|\Y^{t+1}_\perp\|^2\big] \leq &~ \frac{150 L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{20\sqrt{3} \alpha\gamma_x L^2}{1-\widehat\rho^2_y } \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big]+\frac{3+\widehat\rho^2_y }{4}\mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber\\ &~ +\frac{48\alpha^2\gamma_y^2}{1-\widehat\rho^2_y } \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \frac{40 L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 12n \sigma^2, \label{eq:2.4.2} \\ \mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq &~ \frac{180 L^2}{1-\alpha^2}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{24\sqrt{3}\alpha\gamma_x L^2}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] \nonumber\\ &~ +\frac{104\gamma_y^2+ 96\eta^2 L^2}{1-\alpha^2}\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big] + \frac{48 L^2}{1-\alpha^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{10 n}{1-\alpha^2} \sigma^2 .\label{eq:2.5.2} \end{align} \end{lemma} \begin{proof} First, let us consider the consensus of $\Y$. Similar to \eqref{eq:XComp_consensus0}, we have from the update \eqref{eq:Y_hatW} that \begin{align} \mathbb{E}\big[\|\Y^{t+1}_\perp\|^2\big] \leq (1+\alpha_5)\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}(\widehat\mathbf{W}_y-\mathbf{J})\|^2\big] + (1+\alpha_5^{-1})4\gamma_y^2 \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big], \label{eq:Ycomp_conses0} \end{align} where $\alpha_5$ can be any positive number. 
Similarly to \eqref{eq:y_cons1}-\eqref{eq:y_cons2} in the proof of Lemma \ref{lem:YI_J}, we have the bound for the first term on the right hand side of \eqref{eq:Ycomp_conses0} by replacing $\mathbf{W}$ with $\widehat\mathbf{W}_y$, namely, \begin{align} \mathbb{E}\big[\|\Y^{t+\frac{1}{2}}(\widehat\mathbf{W}_y-\mathbf{J})\|^2\big] \leq \textstyle \frac{1+\widehat\rho^2_y}{2} \mathbb{E}\big[\|\Y^{t}_\perp \|^2\big] + \frac{2 \widehat\rho^2_y L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] + 5 \widehat\rho^2_y n \sigma^2.\label{eq:comp_y_cons220} \end{align} Plug \eqref{eq:comp_y_cons220} and \eqref{eq:2.3.2_1} into \eqref{eq:Ycomp_conses0}, and take $\alpha_5 = \frac{1-\widehat\rho^2_y}{3(1+\widehat\rho^2_y)}$. We have \begin{align*} &~ \mathbb{E}\big[\|\Y^{t+1}_\perp\|^2\big] \leq \textstyle \frac{2(2+\widehat\rho^2_y)}{3(1+\widehat\rho^2_y)}\frac{1+\widehat\rho^2_y}{2} \mathbb{E}\big[\|\Y^{t}_\perp \|^2\big] + \frac{24\gamma_y^2}{1-\widehat\rho^2_y} 2\alpha^2 \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] \nonumber\\ &~\quad \textstyle + \frac{24\gamma_y^2}{1-\widehat\rho^2_y} 6\alpha^2 n\sigma^2 + 2\cdot5 \widehat\rho^2_y n \sigma^2 + \left( \textstyle \frac{24\gamma_y^2}{1-\widehat\rho^2_y} 4\alpha^2 L^2 + 2\cdot\frac{2 \widehat\rho^2_y L^2 }{1-\widehat\rho^2_y } \right)\mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \nonumber\\ \leq &~ \textstyle \frac{2+\widehat\rho^2_y}{3} \mathbb{E}\big[\|\Y^{t}_\perp \|^2\big] + \frac{48\alpha^2\gamma_y^2}{1-\widehat\rho^2_y} \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + 11 n \sigma^2 + \frac{5 L^2}{1-\widehat\rho^2_y} \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \\ \leq &~ \textstyle \frac{150 L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{20\sqrt{3} L^2}{1-\widehat\rho^2_y } \alpha \gamma_x \mathbb{E}[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2] + \frac{40 L^2 }{1-\widehat\rho^2_y } \eta^2\sigma^2 + 11 n \sigma^2 \nonumber\\ &~ \textstyle +\left( \textstyle \frac{2+\widehat\rho^2_y}{3}+ \frac{80 L^2 }{1-\widehat\rho^2_y } \eta^2\right) \mathbb{E}\big[\|\Y^t_\perp\|^2\big] +\frac{48\alpha^2\gamma_y^2}{1-\widehat\rho^2_y} \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \frac{40 L^2 }{1-\widehat\rho^2_y}\mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big], \end{align*} where the first inequality holds by $1+\alpha_5 = \frac{2(2+\widehat\rho^2_y)}{3(1+\widehat\rho^2_y)} \leq 2$ and $1+\alpha_5^{-1} = \frac{2(2+\widehat\rho^2_y)}{1-\widehat\rho^2_y}\leq \frac{6}{1-\widehat\rho^2_y}$, the second inequality holds by $\gamma_y\leq \frac{\sqrt{1-\widehat\rho^2_y}}{12\alpha}$ and $\alpha^2\leq 1$, and the third inequality holds by \eqref{eq:2.2.3}. By $\frac{80 L^2 }{1-\widehat\rho^2_y} \eta^2 \leq \frac{1-\widehat\rho^2_y}{4}$ and $ \frac{40 L^2 }{1-\widehat\rho^2_y} \eta^2\leq \frac{1-\widehat\rho^2_y}{8}\leq 1$ from $\eta\leq \frac{1-\widehat\rho^2_y}{8\sqrt{5} L} $, we can now obtain \eqref{eq:2.4.2}. Next, let us consider the compression error of $\Y$. Similar to \eqref{eq:2.5.1.0}, we have by \eqref{eq:alg3_3} that \begin{align} &~\mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq (1+\alpha_6)(1+2\gamma_y)^2 \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] + (1+\alpha_6^{-1})4 \gamma_y^2 \mathbb{E}\big[\|\Y^{t+\frac{1}{2}}_\perp\|^2\big], \label{eq:Y_compress_0} \end{align} where $\alpha_6$ is any positive number.
For $\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}_\perp\|^2\big]$, we have from \eqref{eq:alg3_1} that \begin{align} &~\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}_\perp\|^2\big] =\mathbb{E}\big[\|( \Y^{t} + \nabla \mathbf{F}^{t+1} - \nabla \mathbf{F}^{t})(\mathbf{I}-\mathbf{J})\|^2\big]\nonumber \\ \leq &~ 2\mathbb{E}\big[\|\Y^{t}_\perp\|^2\big] +2\mathbb{E}\big[\|\nabla \mathbf{F}^{t+1}-\nabla \mathbf{F}^{t}\|^2\big] \leq 2\mathbb{E}\big[\|\Y^{t}_\perp\|^2\big] +6 n \sigma^2 + 4 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big], \label{eq:2.3.1} \end{align} where we have used \eqref{eq:y_cons12}. Plug \eqref{eq:2.3.2} and \eqref{eq:2.3.1} back to \eqref{eq:Y_compress_0} to have \begin{align*} &~\mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq \textstyle (1+\alpha_6) (1+2\gamma_y)^2 \frac{1+\alpha^2}{2}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] +(1+\alpha_6^{-1})8\gamma_y^2\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big] \\ &~+ \left( \textstyle (1+\alpha_6^{-1})4\gamma_y^2 +(1+\alpha_6)(1+2\gamma_y)^2 \frac{1}{1-\alpha^2} \right)4 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big] \\ &~ + \left( \textstyle (1+\alpha_6^{-1})4\gamma_y^2 +(1+\alpha_6)(1+2\gamma_y)^2 \frac{1}{1-\alpha^2} \right) 6 n \sigma^2. \end{align*} With $\alpha_6=\frac{1-\alpha^2}{12}$ and $\gamma_y< \frac{1-\alpha^2}{25}$, like \eqref{eq:gamma_x_1} and \eqref{eq:gamma_x_2}, we have $(1+\alpha_6) (1+2\gamma_y)^2 \frac{1+\alpha^2}{2}\leq \frac{3+\alpha^2}{4}$, $8(1+\alpha_6^{-1})\leq\frac{8\cdot13}{1-\alpha^2} = \frac{104}{1-\alpha^2} $ and $ (1+\alpha_6^{-1})4\gamma_y^2 +(1+\alpha_6)(1+2\gamma_y)^2 \frac{1}{1-\alpha^2} \leq \frac{13}{1-\alpha^2}\frac{4}{625}+\frac{13}{12}\frac{7}{6}\frac{1}{1-\alpha^2}\leq \frac{3}{2(1-\alpha^2)}$. Thus \begin{align*} \mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq &~ \textstyle \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] +\frac{104\gamma_y^2}{1-\alpha^2}\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big]+\frac{6 L^2}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big] + \frac{9n \sigma^2}{1-\alpha^2} \nonumber\\ \leq &~ \textstyle\frac{180 L^2}{1-\alpha^2}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{24\sqrt{3} \alpha\gamma_x L^2 }{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] \\ &~ \textstyle +\frac{104\gamma_y^2+ 96\eta^2 L^2}{1-\alpha^2}\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big] + \frac{48 L^2}{1-\alpha^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{48 L^2\eta^2+9n}{1-\alpha^2} \sigma^2, \end{align*} where the second inequality holds by \eqref{eq:2.2.3}. By $48 L^2\eta^2\leq n$, we have \eqref{eq:2.5.2} and complete the proof. \end{proof} \begin{lemma}\label{lem:phi_one_step} Let $\eta\leq \lambda \leq\frac{1}{4 L}$ and $ \gamma_x\leq \frac{1}{6\alpha}$. It holds \begin{align} \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1})\big] \leq&~ \sum_{i=1}^n\mathbb{E}\big[ \phi_\lambda( {\mathbf{x}}_i^{t})\big] + \frac{12}{\lambda}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{7\alpha\gamma_x}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{12}{\lambda} \eta^2\mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber \\ &~+\frac{1}{\lambda}\left( -\frac{\eta}{4\lambda} + 23\alpha\gamma_x \right) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big] + \frac{5}{\lambda} \eta^2 \sigma^2. 
\label{eq:2.7} \end{align} \end{lemma} \begin{proof} Similar to \eqref{eq:phi_update_0}, we have \begin{align} &~ \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1})\big] \overset{\eqref{eq:x_t_hat}}{=} \mathbb{E}\big[\phi(\widehat{\mathbf{x}}_i^{t+1})\big]+\frac{1}{2\lambda} \mathbb{E}\big[\|\widehat{\mathbf{x}}_i^{t+1}-{\mathbf{x}}_i^{t+1}\|^2\big] \nonumber \\ \overset{ \eqref{eq:compX_hatW}}{\leq} &~ \mathbb{E}\bigg[\phi\bigg(\sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}\bigg)\bigg] +\frac{1}{2\lambda} \mathbb{E}\bigg[\bigg\| \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \big(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}- {\mathbf{x}}_j^{t+\frac{1}{2}}\big) - \gamma_x\sum_{j=1}^n \big(\mathbf{W}_{ji}-\mathbf{I}_{ji}\big)\big(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big) \bigg\|^2\bigg] \nonumber\\ \leq&~ \mathbb{E}\bigg[\phi\bigg(\sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}\bigg)\bigg] + \frac{1+\alpha_7}{2\lambda} \mathbb{E}\bigg[\bigg\| \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \big(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big)\bigg\|^2\bigg]\nonumber\\ &~ + \frac{1+\alpha_7^{-1}}{2\lambda} \mathbb{E}\bigg[\bigg\| \gamma_x \sum_{j=1}^n\big(\mathbf{W}_{ji}-\mathbf{I}_{ji}\big)\big(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big)\bigg\|^2\bigg] \nonumber \\ \overset{\mbox{Lemma \ref{lem:weak_convx}}}\leq &~ \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \mathbb{E}\big[\phi( \widehat{\mathbf{x}}_j^{t+\frac{1}{2}})\big] + \frac{ L}{2} \sum_{j=1}^{n-1}\sum_{l=j+1}^n \big(\widehat\mathbf{W}_x\big)_{ji} (\widehat\mathbf{W}_x)_{li}\mathbb{E}\big[\|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-\widehat{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2\big] \nonumber \\ &~ + \frac{1+\alpha_7}{2\lambda} \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\mathbb{E}\big[\| \widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2\big] + \frac{1+\alpha_7^{-1}}{2\lambda}\gamma_x^2 \mathbb{E}\big[\| \sum_{j=1}^n(\mathbf{W}_{ji}-\mathbf{I}_{ji})(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2\big] \nonumber \\ \leq &~ \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}})\big] + \frac{1}{4\lambda} \sum_{j=1}^{n-1}\sum_{l=j+1}^n \big(\widehat\mathbf{W}_x\big)_{ji} (\widehat\mathbf{W}_x)_{li} \mathbb{E}\big[\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2\big] \nonumber \\ &~+ \frac{\alpha_7}{2\lambda} \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\mathbb{E}\big[\| \widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2\big] + \frac{1+\alpha_7^{-1}}{2\lambda}\gamma_x^2 \mathbb{E}\big[\| \sum_{j=1}^n(\mathbf{W}_{ji}-\mathbf{I}_{ji})(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2\big]. 
\label{eq:phi_lambda1} \end{align} In the same way as \eqref{eq:phi_lambda} and \eqref{eq:2_3}, for the first two terms on the right hand side of \eqref{eq:phi_lambda1}, we have \begin{align} \sum_{i=1}^n \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}}) \leq \sum_{i=1}^n \phi_\lambda( {\mathbf{x}}_i^{t}) +\frac{1}{2\lambda} \|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2 - \frac{1}{2\lambda} \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2,\label{eq:2_2_press}\\ \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \big(\widehat\mathbf{W}_x\big)_{ji}(\widehat\mathbf{W}_x)_{li}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2 \leq 8 \|\mathbf{X}^{t}_\perp\|^2+ 8\eta^2 \|\Y^{t}_\perp\|^2. \label{eq:2_3_press} \end{align} For the last two terms on the right hand side of \eqref{eq:phi_lambda1}, we have \begin{align} &~ \sum_{i=1}^n\sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\mathbb{E}\big[\| \widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2\big] = \| \widehat\mathbf{X}^{t+\frac{1}{2}}-\mathbf{X}^{t+\frac{1}{2}} \|^2 \leq 2 \| \widehat\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^{t} \|^2 +2 \| \widehat\mathbf{X}^{t} - \mathbf{X}^{t+\frac{1}{2}} \|^2 \nonumber \\ \leq &~ \textstyle \frac{2}{(1-\lambda L)^2} \| \mathbf{X}^{t+\frac{1}{2}} - \mathbf{X}^{t} \|^2 +2 \| \widehat\mathbf{X}^{t} - \mathbf{X}^{t+\frac{1}{2}} \|^2 \leq 10 \| \mathbf{X}^{t+\frac{1}{2}}- \widehat\mathbf{X}^{t} \|^2+ 8 \| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2, \label{eq:X_-X2}\\ &~ \sum_{i=1}^n \mathbb{E}\big[\| \sum_{j=1}^n(\mathbf{W}_{ji}-\mathbf{I}_{ji})(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2\big] = \mathbb{E}\big[\|(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big]\leq 4\mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big]\nonumber \\ \leq &~ 12\alpha^2 \left(\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+ \mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big]\right), \label{eq:X_-X1} \end{align} where \eqref{eq:X_-X2} holds by Lemma \ref{lem:prox_diff} and $\frac{1}{(1-\lambda L)^2}\leq 2$, and \eqref{eq:X_-X1} holds by \eqref{eq:X_-X_1}. Sum up \eqref{eq:phi_lambda1} for $i=1,\ldots,n$ and take $\alpha_7 =\alpha\gamma_x$.
Then with \eqref{eq:2_2_press}, \eqref{eq:2_3_press}, \eqref{eq:X_-X2} and \eqref{eq:X_-X1}, we have \begin{align*} \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1})\big] \leq & ~\sum_{i=1}^n \mathbb{E}\big[\phi_\lambda( {\mathbf{x}}_i^{t}) \big] + \frac{2}{\lambda}\left( \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big] + \eta^2 \mathbb{E}\big[\|\Y^{t}_\perp\|^2\big]\right) +\textstyle \frac{6\alpha\gamma_x+6\alpha^2\gamma_x^2}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber \\ &~ + \frac{1}{\lambda}\left( \textstyle \frac{1}{2}+11\alpha\gamma_x +6\alpha^2\gamma_x^2\right) \mathbb{E}\big[\| \mathbf{X}^{t+\frac{1}{2}}- \widehat\mathbf{X}^{t} \|^2\big]+ \frac{1}{\lambda}\left( \textstyle -\frac{1}{2}+10\alpha\gamma_x +6\alpha^2\gamma_x^2\right) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big]\\ \leq & ~ \sum_{i=1}^n\mathbb{E}\big[ \phi_\lambda( {\mathbf{x}}_i^{t})\big]+ \frac{2}{\lambda}\left( \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big] + \eta^2 \mathbb{E}\big[\|\Y^{t}_\perp\|^2\big]\right) + \frac{7\alpha\gamma_x}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber \\ &~\quad +\frac{1}{\lambda}\left( \textstyle \frac{1}{2}+12\alpha\gamma_x\right)\mathbb{E}\big[ \|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] +\frac{1}{\lambda} \left( \textstyle -\frac{1}{2}+11\alpha\gamma_x\right) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big]. \nonumber \\ \leq &~ \sum_{i=1}^n\mathbb{E}\big[ \phi_\lambda( {\mathbf{x}}_i^{t})\big] + \frac{12}{\lambda}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{7\alpha\gamma_x}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{12}{\lambda} \eta^2\mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber \\ &~+ \frac{1}{\lambda}\Big( {\textstyle\left(\frac{1}{2}+12\alpha\gamma_x \right) \left( 1-\frac{\eta}{2\lambda} \right) + \left( -\frac{1}{2}+11\alpha\gamma_x\right) }\Big) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big] + \frac{5}{\lambda} \eta^2 \sigma^2, \end{align*} where the second inequality holds by $6\alpha\gamma_x\leq 1$, and the third inequality holds by \eqref{eq:hatx_xprox_comp} with $\frac{1}{2}+12\alpha\gamma_x\leq \frac{5}{2}$. Noticing $$\left(\frac{1}{2}+12\alpha\gamma_x \right) \left( 1-\frac{\eta}{2\lambda} \right) + \left( -\frac{1}{2}+11\alpha\gamma_x\right) = 23\alpha\gamma_x - \frac{\eta}{4\lambda} - \frac{6\alpha\gamma_x\eta}{\lambda}\leq 23\alpha\gamma_x - \frac{\eta}{4\lambda},$$ we obtain \eqref{eq:2.7} and complete the proof. \end{proof} With Lemmas \ref{lem:X_consensus_comperror}, \ref{lem:Y_consensus_comperror} and \ref{lem:phi_one_step}, we are ready to prove the Theorem \ref{thm:sect3thm}. We will use the Lyapunov function: \begin{align*} \mathbf{V}^t = z_1 \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big] + z_2 \mathbb{E}\big[\|\mathbf{X}^{t}-\underline\mathbf{X}^{t}\|^2\big] +z_3\mathbb{E}\big[\|\Y^{t}_\perp\|^2\big]+z_4 \mathbb{E}\big[\|\Y^{t}-\underline\Y^{t}\|^2\big] + z_5 \sum_{i=1}^n \mathbb{E}[\phi_\lambda( {\mathbf{x}}_i^{t})], \end{align*} where $z_1, z_2, z_3, z_4, z_5 \geq 0$ are determined later. 
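As a quick sanity check on the weights $z_1,\ldots,z_5$ and the parameter conditions used in the proof below, the one-step relation $\Omega^{t+1} \leq \mathbf{A}\Omega^t + {\mathbf{b}} \Omega_0^t + {\mathbf{c}} \sigma^2$ derived there can also be evaluated numerically. The following short Python sketch, which is not part of the formal argument, transcribes $\mathbf{A}$, ${\mathbf{b}}$, and ${\mathbf{z}}$ from the proof of Theorem \ref{thm:sect3thm} and checks that ${\mathbf{z}}^\top\mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2},0,0,0,0)$ and ${\mathbf{z}}^\top{\mathbf{b}} \leq -\frac{\eta}{8\lambda}$ hold componentwise. The concrete values $L=1$ and $\alpha=\widehat\rho_x=\widehat\rho_y=0.5$ are assumptions made only for this illustration; any other values satisfying the stated stepsize conditions may be used instead.

\begin{verbatim}
# Numerical sanity check of the weights z used in the proof of Theorem
# thm:sect3thm (CDProxSGT). L, alpha, rho_x, rho_y below are illustrative
# assumptions only, chosen to satisfy the stated conditions.
import numpy as np

L, alpha, rho_x, rho_y = 1.0, 0.5, 0.5, 0.5
ax, ay, aa = 1 - rho_x**2, 1 - rho_y**2, 1 - alpha**2

# stepsizes at (or below) the bounds required by the lemmas and the theorem
lam = min(1 / (4 * L), aa**2 / (9 * L + 41280))
eta = min(lam, ay / (8 * np.sqrt(5) * L),
          aa**2 * ax**2 * ay**2 / (18830 * max(1.0, L)))
gx = min(eta / alpha, ax**2 / (60 * alpha), aa / 25,
         (2 * np.sqrt(3) - 3) / (6 * alpha), 1 / (6 * alpha))
gy = min(np.sqrt(ay) / (12 * alpha), aa / 25, aa * ax * ay / 317)

# A and b from the recursion Omega^{t+1} <= A Omega^t + b Omega_0^t + c sigma^2
A = np.array([
    [(3 + rho_x**2) / 4, 2 * alpha * gx * ax, 9 * eta**2 / (4 * ax), 0, 0],
    [21 / aa, (3 + alpha**2) / 4, 21 * eta**2 / aa, 0, 0],
    [150 * L**2 / ay, 20 * np.sqrt(3) * L**2 * alpha * gx / ay,
     (3 + rho_y**2) / 4, 48 * alpha**2 * gy**2 / ay, 0],
    [180 * L**2 / aa, 24 * np.sqrt(3) * L**2 * alpha * gx / aa,
     (104 * gy**2 + 96 * L**2 * eta**2) / aa, (3 + alpha**2) / 4, 0],
    [12 / lam, 7 * alpha * gx / lam, 12 * eta**2 / lam, 0, 1],
])
b = np.array([4 * alpha * gx * ax, 11 / aa, 40 * L**2 / ay, 48 * L**2 / aa,
              (-eta / (4 * lam) + 23 * alpha * gx) / lam])
z = np.array([52 / ax, 448 * eta / aa, 521 * eta**2 / (ax**2 * ay),
              aa * eta**2, lam])

print("z^T A - z^T =", z @ A - z)   # should be <= (-1/2, 0, 0, 0, 0)
print("z^T b =", z @ b, " bound -eta/(8*lam) =", -eta / (8 * lam))
\end{verbatim}

For this parameter setting both printed quantities satisfy the claimed bounds; such a check is, of course, only an illustration and no substitute for the estimates below.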
\subsection*{Proof of Theorem \ref{thm:sect3thm}} \begin{proof} Denote \begin{align*} &~\Omega_0^t = \mathbb{E}[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2], \quad \Phi^t = \sum_{i=1}^n \mathbb{E}[\phi_\lambda( {\mathbf{x}}_i^{t})], \\ &~ \Omega^t = \left(\mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big], \mathbb{E}\big[\|\mathbf{X}^{t}-\underline\mathbf{X}^{t}\|^2\big], \mathbb{E}\big[\|\Y^{t}_\perp\|^2\big], \mathbb{E}\big[\|\Y^{t}-\underline\Y^{t}\|^2\big], \Phi^t\right)^\top. \end{align*} Then Lemmas \ref{lem:X_consensus_comperror}, \ref{lem:Y_consensus_comperror} and \ref{lem:phi_one_step} imply $\Omega^{t+1} \leq \mathbf{A}\Omega^t + {\mathbf{b}} \Omega_0^t + {\mathbf{c}} \sigma^2$ with \begin{align*} &\mathbf{A} = \begin{pmatrix} \frac{3+\widehat\rho^2_x}{4} &~ 2\alpha\gamma_x(1-\widehat\rho_x^2) &~ \frac{9}{4(1-\widehat\rho^2_x)} \eta^2 &~ 0 &~ 0\\ \frac{21}{1-\alpha^2} &~ \frac{3+\alpha^2}{4} &~\frac{21}{1-\alpha^2} \eta^2 &~ 0 &~ 0 \\ \frac{150 L^2}{1-\widehat\rho^2_y} &~ \frac{20\sqrt{3} L^2}{1-\widehat\rho^2_y }\alpha\gamma_x &~ \frac{3+\widehat\rho^2_y}{4} &~ \frac{48}{1-\widehat\rho^2_y }\alpha^2\gamma_y^2 &~ 0\\ \frac{180 L^2}{1-\alpha^2} &~ \frac{24\sqrt{3} L^2}{1-\alpha^2} \alpha\gamma_x &~ \frac{104\gamma_y^2+96 L^2 \eta^2}{1-\alpha^2} &~ \frac{3+\alpha^2}{4} &~ 0\\ \frac{12}{\lambda} &~ \frac{7\alpha\gamma_x}{\lambda} &~ \frac{12}{\lambda}\eta^2 &~ 0 &~ 1\\ \end{pmatrix}, \\[0.2cm] &{\mathbf{b}} = \begin{pmatrix} 4\alpha\gamma_x(1-\widehat\rho_x^2) \\ \frac{11}{1-\alpha^2} \\ \frac{40 L^2 }{1-\widehat\rho^2_y}\\ \frac{48 L^2}{1-\alpha^2} \\ \frac{1}{\lambda}\left( \textstyle -\frac{\eta}{4\lambda} + 23\alpha\gamma_x \right) \end{pmatrix}, \quad {\mathbf{c}} = \begin{pmatrix} 4\alpha\gamma_x \eta^2 (1-\widehat\rho_x^2) \\ \frac{11 \eta^2 }{1-\alpha^2} \\ 12n \\ \frac{10n}{1-\alpha^2}\\ \frac{5}{\lambda} \eta^2 \end{pmatrix}. \end{align*} Then for any ${\mathbf{z}} = (z_1, z_2 , z_3, z_4, z_5 )^\top\geq \mathbf{0}^\top$, it holds \begin{align*} {\mathbf{z}}^\top \Omega^{t+1} \leq {\mathbf{z}}^\top \Omega^t + ({\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top) \Omega^t + {\mathbf{z}}^\top{\mathbf{b}} \Omega_0^t + {\mathbf{z}}^\top{\mathbf{c}} \sigma^2. \end{align*} Let $\gamma_x\leq \frac{\eta}{\alpha}$ and $\gamma_y\leq \frac{(1-\alpha^2) (1-\widehat\rho^2_x)(1-\widehat\rho^2_y)}{317}$. Take $$z_1=\frac{52}{1-\widehat\rho^2_x}, z_2 = \frac{448}{1-\alpha^2} \eta , z_3 = \frac{521}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \eta^2, z_4=(1-\alpha^2) \eta^2, z_5=\lambda.$$ We have \begin{align*} {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq &~ \begin{pmatrix} \frac{21\cdot448}{ (1-\alpha^2)^2} \eta + \frac{150\cdot521 L^2\eta^2}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2} + 180 L^2\eta^2 - 1 \\[0.2cm] \frac{521\cdot20\sqrt{3} L^2\eta^3}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2} + 24\sqrt{3} L^2\eta^3 -\eta \\[0.2cm] \frac{448\cdot21\eta^3}{ (1-\alpha^2)^2} + 96 L^2 \eta^4 - \frac{\eta^2}{(1-\widehat\rho^2_x)^2}\\[0.1cm] 0 \\[0.1cm] 0 \end{pmatrix}^\top, \\ {\mathbf{z}}^\top{\mathbf{b}} \leq &~ \textstyle -\frac{\eta}{4\lambda} + 23\eta + 48 L^2 \eta^2 + \frac{521\cdot 40 \eta^2 L^2}{(1-\widehat\rho^2_x)^2 (1-\widehat\rho^2_y)^2} + \frac{448\cdot11\eta}{ (1-\alpha^2)^2} + 52\cdot4 \eta,\\ {\mathbf{z}}^\top{\mathbf{c}} \leq &~ \left( \textstyle 52\cdot4\eta + \frac{448\cdot 11\eta}{ (1-\alpha^2)^2} + \frac{521\cdot12n}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}+ 10n + 5 \right)\eta^2. 
\end{align*} By $\eta\leq \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}$ and $\lambda\leq \frac{ (1-\alpha^2)^2}{9 L+41280}$, we have ${\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2}, 0, 0, 0, 0)^\top$, \begin{align*} {\mathbf{z}}^\top{\mathbf{c}} \leq \textstyle \frac{(521\cdot12+10)n+6}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}\eta^2 = \textstyle\frac{6262n+6}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}\eta^2 \end{align*} and \begin{align*} {\mathbf{z}}^\top{\mathbf{b}} ~ \leq &~ \textstyle \eta\Big( -\frac{1}{4\lambda} + 23 + 48 L^2 \eta + \frac{521\cdot 40 \eta L^2}{(1-\widehat\rho^2_x)^2 (1-\widehat\rho^2_y)^2} + \frac{448\cdot11 }{ (1-\alpha^2)^2} + 52\cdot4 \Big) \nonumber \\ \leq &~ \textstyle -\frac{\eta}{8\lambda} + \eta\Big( -\frac{1}{8\lambda} + \frac{ 9 L}{8 } + \frac{5160}{ (1-\alpha^2)^2}\Big) \leq -\frac{\eta}{8\lambda}. \end{align*} Hence we have \begin{align} {\mathbf{z}}^\top \Omega^{t+1} \leq \textstyle {\mathbf{z}}^\top \Omega^{t} -\frac{\eta}{8\lambda} \Omega_0^t -\frac{1}{2}\mathbb{E}[\|\mathbf{X}^t_\perp\|^2] + \frac{6262n+6}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}\eta^2\sigma^2.\label{eq:l_fun_comp} \end{align} Thus summing up \eqref{eq:l_fun_comp} for $t=0,1,\ldots,T-1$ gives \begin{align} \frac{1}{\lambda T}\sum_{t=0}^{T-1} \Omega_0^t +\frac{4}{\eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \leq \textstyle \frac{8\left({\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T}\right)}{\eta T} + \frac{8(6262n+6)}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \eta\sigma^2. \label{eq:thm3_avg-Omega} \end{align} From ${\mathbf{y}}_i^{-1}=\mathbf{0}$, $\underline{\mathbf{y}}_i^{-1}=\mathbf{0}$, $\nabla F_i({\mathbf{x}}_i^{-1}, \xi_i^{-1})=\mathbf{0}$, $\underline{\mathbf{x}}_i^{0} =\mathbf{0}$, ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$, we have \begin{gather} \|\Y^0_\perp\|^2 = \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\leq\|\nabla \mathbf{F}^0\|^2, \quad \|\Y^{0}-\underline\Y^{0}\|^2 = \|\nabla \mathbf{F}^0-Q_{\mathbf{y}}\big[\nabla \mathbf{F}^0\big]\|^2 \leq \alpha^2 \|\nabla \mathbf{F}^0\|^2, \label{eq:initial_thm3_1}\\ \|\mathbf{X}^0_\perp\|^2=0, \quad \|\mathbf{X}^0-\underline\mathbf{X}^{0}\|^2=0, \quad \Phi^0=n \phi_\lambda({\mathbf{x}}^0). \label{eq:initial_thm3_2} \end{gather} Note \eqref{eq:end_thm2} still holds here. With \eqref{eq:initial_thm3_1}, \eqref{eq:initial_thm3_2}, \eqref{eq:end_thm2}, and the nonnegativity of $ \mathbb{E}[\|\mathbf{X}^T_\perp\|^2]$, $\mathbb{E}[\|\mathbf{X}^{T}-\underline\mathbf{X}^{T}\|^2]$, $\mathbb{E}[\|\Y^T_\perp\|^2]$, $\mathbb{E}[\|\Y^{T}-\underline\Y^{T}\|^2]$, we have \begin{align} {\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T} \le \textstyle \frac{521}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \eta^2 \mathbb{E}[\|\nabla \mathbf{F}^0\|^2] + \eta^2 \mathbb{E}[\|\nabla \mathbf{F}^0\|^2] + \lambda n \phi_\lambda({\mathbf{x}}^0) -\lambda n \phi_\lambda^*, \label{eq:them3_Omega0_OmegaT} \end{align} where we have used $\alpha^2\leq 1$ from Assumption \ref{assu:compressor}.
By the convexity of the Frobenius norm and \eqref{eq:them3_Omega0_OmegaT}, we obtain from \eqref{eq:thm3_avg-Omega} that \begin{align} &~ \frac{1}{n\lambda^2} \mathbb{E}\big[\|\widehat\mathbf{X}^{\tau}-\mathbf{X}^{\tau}\|^2\big] +\frac{4}{n \lambda \eta} \mathbb{E}[\|\mathbf{X}^\tau_\perp\|^2] \leq \frac{1}{n\lambda^2} \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2\big] +\frac{4}{n \lambda \eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \nonumber \\ \leq & \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} +\frac{50096n+48}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \frac{\eta}{n\lambda}\sigma^2 \textstyle + \frac{8\cdot521 \eta }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \mathbb{E}\big[ \|\nabla \mathbf{F}^0\|^2\big] + \frac{8\eta}{n\lambda T} \mathbb{E}\big[ \|\nabla \mathbf{F}^0\|^2\big] \nonumber \\ \leq &~\textstyle \frac{8\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta T} +\frac{(50096n+48)\eta \sigma^2}{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} + \textstyle \frac{4176 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}. \label{eq:them_CDProxSGT0} \end{align} With $\|\nabla \phi_\lambda ({\mathbf{x}}_i^\tau)\|^2 = \frac{\|{\mathbf{x}}_i^\tau-\widehat{\mathbf{x}}_i^\tau\|^2}{\lambda^{2}}$ from Lemma \ref{lem:xhat_x}, we complete the proof. \end{proof} \section{Conclusion} We have proposed two decentralized proximal stochastic gradient methods, DProxSGT and CDProxSGT, for nonconvex composite problems with data heterogeneously distributed on the computing nodes of a connected graph. CDProxSGT is an extension of DProxSGT obtained by applying compression to the communicated model parameters and gradient information. Both methods need only a single or $\mathcal{O}(1)$ samples for each update, which is important for good generalization performance when training deep neural networks. Gradient tracking is used in both methods to address data heterogeneity. An $\mathcal{O}\left( \frac{1}{ \epsilon^4}\right)$ sample complexity and communication complexity are established for both methods to produce an expected $\epsilon$-stationary solution. Numerical experiments on training neural networks demonstrate the good generalization performance of the proposed methods and their ability to handle heterogeneous data. \section{Convergence Analysis for DProxSGT} \label{sec:proof_DProxSGT} In this section, we analyze the convergence rate of DProxSGT in Algorithm \ref{alg:DProxSGT}. For better readability, we use the matrix form of Algorithm \ref{alg:DProxSGT}. With the notation introduced in Section~\ref{sec:notation}, we can write \eqref{eq:y_half_update}-\eqref{eq:x_1_update} in the more compact matrix form: \begin{align} & \Y^{t-\frac{1}{2}} = \Y^{t-1} + \nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1},\label{eq:y_half_update_matrix} \\ & \Y^t = \Y^{t-\frac{1}{2}}\mathbf{W},\label{eq:y_update_matrix}\\ & \mathbf{X}^{t+\frac{1}{2}} =\prox_{\eta r} \left(\mathbf{X}^t - \eta \Y^{t}\right) \triangleq [\prox_{\eta r} \left({\mathbf{x}}_1^t - \eta {\mathbf{y}}_1^{t}\right),\ldots,\prox_{\eta r} \left({\mathbf{x}}_n^t - \eta {\mathbf{y}}_n^{t}\right)],\label{eq:x_half_update_matrix} \\ & \mathbf{X}^{t+1} = \mathbf{X}^{t+\frac{1}{2}}\mathbf{W}. \label{eq:x_1_update_matrix} \end{align} Below, we first bound $\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2$ in Lemma~\ref{lem:Xhat_Xhalf}.
Then we give the bounds of the consensus error $\|\mathbf{X}_\perp^t\|$ and $\|\Y_\perp^t\|$ and $\phi_\lambda({\mathbf{x}}_i^{t+1})$ after one step in Lemmas~\ref{lem:XI_J}, \ref{lem:YI_J}, and \ref{lem:weak_convex}. Finally, we prove Theorem \ref{thm:sec2} by constructing a Lyapunov function that involves $\|\mathbf{X}_\perp^t\|$, $\|\Y_\perp^t\|$, and $\phi_\lambda({\mathbf{x}}_i^{t+1})$. \begin{lemma} \label{lem:Xhat_Xhalf} Let $\eta\leq \lambda \leq \frac{1}{4 L}$. Then \begin{align} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ 4 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left( 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2. \label{eq:hatx_xprox} \end{align} \end{lemma} \begin{proof} By the definition of $\widehat{\mathbf{x}}^t_i$ in \eqref{eq:x_t_hat}, we have $0 \in \nabla f(\widehat{\mathbf{x}}^t_i) + \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\lambda}(\widehat{\mathbf{x}}^t_i-{\mathbf{x}}^t_i)$, i.e., \[ \textstyle 0 \in \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\eta} \left(\frac{\eta}{\lambda} \widehat{\mathbf{x}}^t_i-\frac{\eta}{\lambda}{\mathbf{x}}^t_i + \eta \nabla f(\widehat{\mathbf{x}}^t_i) \right) = \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\eta} \left(\widehat{\mathbf{x}}^t_i - \left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i)+ \left(1- \frac{\eta}{\lambda}\right) \widehat{\mathbf{x}}^t_i \right)\right). \] Thus we have $\widehat{\mathbf{x}}^t_i = \prox_{\eta r}\left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i\right)$. Then by \eqref{eq:x_half_update}, the convexity of $r$, and Lemma \ref{lem:prox_diff}, \begin{align} &~ \textstyle \|\widehat{\mathbf{x}}_i^{t}-{\mathbf{x}}_i^{t+\frac{1}{2}}\|^2 = \left\| \prox_{\eta r}\left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i\right)- \prox_{\eta r} \left( {\mathbf{x}}_i^t - \eta {\mathbf{y}}^t_i\right) \right\|^2 \nonumber\\ \leq &~ \textstyle \left\| \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i - ({\mathbf{x}}^t_i-\eta{\mathbf{y}}^t_i) \right\|^2 = \left\| \left(1- \frac{\eta}{\lambda}\right)(\widehat{\mathbf{x}}^t_i -{\mathbf{x}}^t_i )- \eta (\nabla f(\widehat{\mathbf{x}}^t_i) -{\mathbf{y}}^t_i) \right\|^2 \nonumber\\ = & ~ \textstyle \left(1- \frac{\eta}{\lambda}\right)^2 \left\| \widehat{\mathbf{x}}^t_i - {\mathbf{x}}^t_i \right\|^2 + \eta^2\left\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \right\|^2 + 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) + \nabla f({\mathbf{x}}^t_i)-\nabla f(\widehat{\mathbf{x}}^t_i) \right\rangle\nonumber\\ \leq & ~ \textstyle\left(\left(1- \frac{\eta}{\lambda}\right)^2 + 2\left(1- \frac{\eta}{\lambda}\right)\eta L \right) \left\| \widehat{\mathbf{x}}^t_i - {\mathbf{x}}^t_i \right\|^2 + \eta^2\left\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \right\|^2 + 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) \right\rangle , \label{eq:lem1.6.1} \end{align} where the second inequality holds by 
$\left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, \nabla f({\mathbf{x}}^t_i)-\nabla f(\widehat{\mathbf{x}}^t_i) \right\rangle \leq L\left\|\widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t\right\|^2$. The second term in the right hand side of \eqref{eq:lem1.6.1} can be bounded by \begin{align*} &~ \textstyle \mathbb{E}_t [\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\big] \overset{\eqref{eq:x_y_mean}}{=} \mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t + \overline{\nabla} \mathbf{F}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\big] \leq 2\mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t \|^2\big] + 2\mathbb{E}_t\big[\big\| \overline{\nabla} \mathbf{F}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \big\|^2\big] \\ = &~2\mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t \|^2\big] + 2\mathbb{E}_t\big[\| \overline{\nabla} \mathbf{F}^t - \overline{\nabla} \mathbf{f}^t \|^2\big]+ 2\| \overline{\nabla} \mathbf{f}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \|^2 \\ \leq&~ 2\mathbb{E}_t[ \|{\mathbf{y}}_i^t-\bar{\mathbf{y}}^t\|^2\big] + \frac{2}{n^2}\sum_{j=1}^n \mathbb{E}_t\big[\|\nabla F_j({\mathbf{x}}_j^t,\xi_j^t)-\nabla f_j({\mathbf{x}}_j^t)\|^2\big] + 4 \| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \|^2 + 4\| \nabla f({\mathbf{x}}^t_i)- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\\ \leq &~ 2\mathbb{E}_t[ \|{\mathbf{y}}_i^t-\bar{\mathbf{y}}^t\|^2\big] + 2 \frac{\sigma^2}{n} + 4 \| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \|^2 + 4 L^2 \|{\mathbf{x}}^t_i -\widehat{\mathbf{x}}^t_i\|^2, \end{align*} where the second equality holds by the unbiasedness of stochastic gradients, and the second inequality holds also by the independence between $\xi_i^t$'s. In the last inequality, we use the bound of the variance of stochastic gradients, and the $L$-smooth assumption. Taking the full expectation over the above inequality and summing for all $i$ give \begin{align} \sum_{i=1}^n\mathbb{E}\big[\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2 ] \leq 2\mathbb{E}\big[\|\Y^t_\perp\|^2 ] +2\sigma^2 + 8 L^2 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2 ] +4 L^2 \mathbb{E}\big[\| \mathbf{X}^t - \widehat\mathbf{X}^t\|^2 ]. \label{eq:lem161_1} \end{align} To have the inequality above, we have used \begin{align} &~ \sum_{i=1}^n \left\| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \right\|^2 \leq \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n \left\|\nabla f_j({\mathbf{x}}_j^t) -\nabla f_j({\mathbf{x}}^t_i) \right\|^2 \leq \frac{ L^2}{n}\sum_{i=1}^n\sum_{j=1}^n \left\|{\mathbf{x}}_j^t - {\mathbf{x}}^t_i \right\|^2 \nonumber \\ = &~ \frac{ L^2}{n}\sum_{i=1}^n\sum_{j=1}^n \left( \left\|{\mathbf{x}}_j^t - \bar{\mathbf{x}}^t \right\|^2 +\left\|\bar{\mathbf{x}}^t-{\mathbf{x}}^t_i \right\|^2 + 2\left\langle {\mathbf{x}}_j^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle\right) = 2 L^2 \left\|\mathbf{X}^t_\perp\right\|^2, \label{eq:sumsum} \end{align} where the last equality holds by $ \frac{1}{n} \sum_{i=1}^n\sum_{j=1}^n \left\langle {\mathbf{x}}_j^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle = \sum_{i=1}^n \left\langle \frac{1}{n} \sum_{j=1}^n ({\mathbf{x}}_j^t - \bar{\mathbf{x}}^t), \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle =\sum_{i=1}^n\left\langle \bar{\mathbf{x}}^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle=0$ from the definition of $\bar{\mathbf{x}}$. 
About the third term in the right hand side of \eqref{eq:lem1.6.1}, we have \begin{align} & ~\sum_{i=1}^n \mathbb{E}\left[ \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) \right\rangle\right] \overset{\eqref{eq:x_y_mean}}{=} \sum_{i=1}^n \mathbb{E}\left[\left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t -\bar{\mathbf{y}}^t+\overline{\nabla} \mathbf{F}^t -\nabla f({\mathbf{x}}^t_i) \right\rangle\right] \nonumber \\ = & ~ \textstyle \sum_{i=1}^n \mathbb{E}\big[ \langle\widehat{\mathbf{x}}^t_i -\bar{\widehat{\mathbf{x}}}^t, {\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \rangle\big] + \sum_{i=1}^n \mathbb{E}\big[\langle \bar{{\mathbf{x}}}^t - {\mathbf{x}}_i^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \rangle\big] + \sum_{i=1}^n \mathbb{E}\left[ \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, \mathbb{E}_{t} \left[\overline{\nabla} \mathbf{F}^t\right] -\nabla f({\mathbf{x}}^t_i)\right\rangle\right] \nonumber \\ \leq &~ \frac{1}{2\eta} \left( \textstyle \mathbb{E}\big[\|\widehat\mathbf{X}^t_\perp\|^2\big]+ \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]\right) + \eta \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + \textstyle L\mathbb{E}\big[\| \widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big] + \frac{1}{4 L} \sum_{i=1}^n \mathbb{E}\big[\|\overline{\nabla} {\mathbf{f}}^t -\nabla f({\mathbf{x}}^t_i)\|^2\big] \nonumber \\ \leq &~ \left(\textstyle\frac{1}{2\eta(1-\lambda L)^2} + \frac{1}{2\eta} + \frac{L}{2}\right)\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \eta\mathbb{E}\big[\|\Y^t_\perp\|^2\big] + L\mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big],\label{eq:lem161_2} \end{align} where $\textstyle \sum_{i=1}^n \big\langle \bar{\widehat{\mathbf{x}}}^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \big\rangle = 0$ and $\sum_{i=1}^n \left\langle \bar{{\mathbf{x}}}^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \right\rangle = 0$ is used in the second equality, $\mathbb{E}_{t} \left[\overline{\nabla} \mathbf{F}^t\right] = \overline{\nabla} {\mathbf{f}}^t$ is used in the first inequality, and $\|\widehat\mathbf{X}^t_\perp\|^2 =\left\|\left(\prox_{\lambda \phi}(\mathbf{X}^t)- \prox_{\lambda \phi}(\bar{\mathbf{x}}^t)\mathbf{1}^\top\right) (\mathbf{I}-\mathbf{J})\right\|^2\leq \frac{1}{(1-\lambda L)^2}\|\mathbf{X}^t-\bar\mathbf{X}^t\|^2$ and \eqref{eq:sumsum} are used in the last inequality. 
Now we can bound the summation of \eqref{eq:lem1.6.1} by using \eqref{eq:lem161_1} and \eqref{eq:lem161_2}: \begin{align*} & ~ \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big]\\ \leq & ~ \left(\textstyle \left(1- \frac{\eta}{\lambda}\right)^2 + 2\left(1- \frac{\eta}{\lambda}\right)\eta L \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t \|^2\big] \\ & ~ + \eta^2 \left(2\mathbb{E}[ \|\Y^t_\perp\|^2\big] +2\sigma^2 + 8 L^2 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] +4 L^2 \mathbb{E}\big[\| \mathbf{X}^t - \widehat\mathbf{X}^t\|^2\big]\right) \\ & ~ + \textstyle 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left(\textstyle\left(\frac{1}{2\eta(1-\lambda L)^2} + \frac{1}{2\eta} + \frac{ L}{2} \right)\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \eta\mathbb{E}\big[\|\Y^t_\perp\|^2\big] + L\mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big]\right) \\ = & ~ \textstyle \left(1 - 2\eta (\frac{1}{\lambda} - 2 L) + \frac{\eta^2}{\lambda} (\frac{1}{\lambda} - 2 L) + 2 L \eta^2(-\frac{1}{\lambda}+2 L)\right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 2\eta^2\sigma^2 \nonumber\\ & ~ + \textstyle \left( \left(1- \frac{\eta}{\lambda}\right) (1+\frac{1}{(1-\lambda L)^2}+ \eta L) + 8\eta^2 L^2 \right) \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + 2 (2- \frac{\eta}{\lambda}) \eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big]. \end{align*} With $\eta \leq \lambda \leq \frac{1}{4 L}$, we have $\frac{1}{(1-\lambda L)^2}\leq 2$ and \eqref{eq:hatx_xprox} follows from the inequality above. \end{proof} \begin{lemma}\label{lem:XI_J} The consensus error of $\mathbf{X}$ satisfies the following inequality \begin{align} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] \leq \frac{1+\rho^2}{2} \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \frac{2\rho^2 \eta^2 }{1-\rho^2} \mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big]. 
\label{eq:X_consensus} \end{align} \end{lemma} \begin{proof} With the updates \eqref{eq:x_half_update} and \eqref{eq:x_1_update}, we have \begin{align*} &~ \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] = \mathbb{E}\big[\|\mathbf{X}^{t-\frac{1}{2}}\mathbf{W}(\mathbf{I}- \mathbf{J})\|^2\big] = \mathbb{E}\big[\|\mathbf{X}^{t-\frac{1}{2}} (\mathbf{W}-\mathbf{J})\|^2\big] \nonumber\\ =&~ \mathbb{E}\big[\| \prox_{\eta r} \left(\mathbf{X}^{t-1} - \eta \Y^{t-1}\right) (\mathbf{W}-\mathbf{J})\|^2\big] \nonumber\\ =&~ \mathbb{E}\big[\| \left(\prox_{\eta r} \left(\mathbf{X}^{t-1} - \eta \Y^{t-1}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right)\mathbf{1}^\top\right) (\mathbf{W}-\mathbf{J})\|^2\big] \nonumber\\ \leq &~ \mathbb{E}\big[\|\prox_{\eta r} \left(\mathbf{X}^{t-1} - \eta \Y^{t-1}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right)\mathbf{1}^\top\|^2 \|(\mathbf{W}-\mathbf{J})\|^2_2] \nonumber\\ \leq &~ \rho^2 \mathbb{E}\left[ \textstyle \sum_{i=1}^n\| \prox_{\eta r} \left({\mathbf{x}}_i^{t-1} - \eta {\mathbf{y}}_i^{t-1}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right) \|^2\right] \nonumber\\ \leq &~ \rho^2 \mathbb{E}\left[ \textstyle \sum_{i=1}^n\|\left({\mathbf{x}}_i^{t-1} - \eta {\mathbf{y}}_i^{t-1}\right)-\left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right) \|^2\right] = \rho^2 \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp - \eta \Y^{t-1}_\perp \|^2\big] \nonumber\\ \leq &~ \textstyle \big(\textstyle \rho^2 + \frac{1-\rho^2}{2}\big) \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \big( \textstyle\rho^2 + \frac{2\rho^4}{1-\rho^2}\big) \eta^2\mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big] \nonumber\\ = &~\textstyle \frac{1+\rho^2}{2} \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \frac{1+\rho^2}{1-\rho^2} \rho^2 \eta^2\mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big] \nonumber \\ \leq &~\textstyle \frac{1+\rho^2}{2} \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \frac{2\rho^2 \eta^2 }{1-\rho^2} \mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big], \end{align*} where we have used $\mathbf{1}^\top (\mathbf{W}-\mathbf{J})=\mathbf{0}$ in the third equality, $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ in the second inequality, and Lemma \ref{lem:prox_diff} in the third inequality, and $\rho\leq 1$ is used in the last inequality. \end{proof} \begin{lemma}\label{lem:YI_J} Let $\eta\leq \min\{\lambda, \frac{1-\rho^2}{4\sqrt{6} \rho L} \} $ and $\lambda \leq\frac{1}{4 L}$. The consensus error of $\Y$ satisfies \begin{align} \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \leq &~ \frac{48\rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\|\mathbf{X}^{t-1}_\perp\|^2\big] \!+\! \frac{3\!+\!\rho^2}{4} \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] \!+\! \frac{12\rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\|\widehat\mathbf{X}^{t-1}-\mathbf{X}^{t-1} \|^2\big] \!+\! 6 n\sigma^2. 
\label{eq:Y_consensus} \end{align} \end{lemma} \begin{proof} By the updates \eqref{eq:y_half_update} and \eqref{eq:y_update}, we have \begin{align} &~ \mathbb{E}\big[\|\Y^t_\perp\|^2\big] = \mathbb{E}\big[\|\Y^{t-\frac{1}{2}}(\mathbf{W}- \mathbf{J})\|^2\big] = \mathbb{E}\big[\| \Y^{t-1}(\mathbf{W} -\mathbf{J}) + (\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}) (\mathbf{W} -\mathbf{J})\|^2\big] \nonumber\\ = &~ \mathbb{E}\big[\|\Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J})\|^2\big] + \mathbb{E}\big[\|(\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}) (\mathbf{W}-\mathbf{J}) \|^2\big] + 2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}), (\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}) (\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber\\ \leq &~ \rho^2 \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] + \rho^2 \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}\|^2\big] + 2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}),(\nabla \mathbf{f}^t - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big], \label{eq:y_cons1} \end{align} where we have used $\mathbf{J}\mathbf{W}=\mathbf{J}\J=\mathbf{J}$, $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ and $\mathbb{E}_t[\nabla \mathbf{F}^t] = \nabla {\mathbf{f}}^t$. For the second term on the right hand side of \eqref{eq:y_cons1}, we have \begin{align} &~\mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}\|^2\big] = \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{f}^t+\nabla \mathbf{f}^t -\nabla \mathbf{F}^{t-1}\|^2\big] \nonumber\\ \overset{\mathbb{E}_t[\nabla \mathbf{F}^t] = \nabla \mathbf{f}^t}{=}&~ \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{f}^t\|^2\big]+\mathbb{E}\big[\|\nabla \mathbf{f}^t- \nabla \mathbf{f}^{t-1}+\nabla \mathbf{f}^{t-1}-\nabla \mathbf{F}^{t-1}\|^2\big] \nonumber \\ \leq &~ \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{f}^t\|^2\big]+2\mathbb{E}\big[\|\nabla \mathbf{f}^t- \nabla \mathbf{f}^{t-1}\|^2\big]+2\mathbb{E}\big[\|\nabla \mathbf{f}^{t-1}-\nabla \mathbf{F}^{t-1}\|^2\big] \nonumber\\ \leq &~ 3 n \sigma^2 + 2 L^2 \mathbb{E}\big[\|\mathbf{X}^{t}-\mathbf{X}^{t-1}\|^2\big]. 
\label{eq:y_cons12} \end{align} For the third term on the right hand side of \eqref{eq:y_cons1}, we have \begin{align} &~2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] +2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ &~ +2\mathbb{E}\big[\langle (\Y^{t-2} + \nabla \mathbf{F}^{t-1} - \nabla \mathbf{F}^{t-2})\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ &~ +2\mathbb{E}\big[\langle (\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ \leq &~2\mathbb{E}\big[\|\Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \|\big] \nonumber \\ &~ +2\mathbb{E}\big[\|(\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J})\|\big] \nonumber \\ \leq&~ 2\rho^2\mathbb{E}\big[\| \Y^{t-1}_\perp\|\cdot\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|\big] + 2\rho^2\mathbb{E}\big[\|\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1}\|^2\big] \nonumber\\ \leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4}{1-\rho^2}\mathbb{E}\big[\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|^2\big] + 2\rho^2 n \sigma^2 \nonumber \\ \leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4 L^2}{1-\rho^2} \mathbb{E}\big[\| \mathbf{X}^t - \mathbf{X}^{t-1}\|^2\big]+ 2\rho^2 n \sigma^2, \label{eq:y_cons13} \end{align} where the second equality holds by $\mathbf{W}-\mathbf{J}=(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{J})$, \eqref{eq:y_half_update} and \eqref{eq:y_update}, the third equality holds because $\Y^{t-2} - \nabla \mathbf{F}^{t-2} -\nabla \mathbf{f}^{t-1}$ does not depend on $\xi_i^{t-1}$'s, and the second inequality holds because $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ and $\|\mathbf{W}\|_2\leq 1$. Plugging \eqref{eq:y_cons12} and \eqref{eq:y_cons13} into \eqref{eq:y_cons1}, we have \begin{align} \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \leq &~ \textstyle\frac{1+\rho^2}{2} \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] + \frac{2 \rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\| \mathbf{X}^t - \mathbf{X}^{t-1}\|^2\big] + 5 \rho^2 n \sigma^2 , \label{eq:y_cons2} \end{align} where we have used $1+\frac{\rho^2}{1-\rho^2} = \frac{1}{1-\rho^2 }$. 
For the second term in the right hand side of \eqref{eq:y_cons2}, we have \begin{align} &~ \| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2 = \|\mathbf{X}^{t+\frac{1}{2}}\mathbf{W}-\mathbf{X}^t\|^2 = \|(\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t)\mathbf{W} +(\widehat\mathbf{X}^t-\mathbf{X}^t)\mathbf{W} + \mathbf{X}^t (\mathbf{W}-\mathbf{I})\|^2 \nonumber \\ \leq &~ 3\|(\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t)\mathbf{W}\|^2 +3\|(\widehat\mathbf{X}^t-\mathbf{X}^t)\mathbf{W}\|^2 + 3\|\mathbf{X}^t(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{I})\|^2 \nonumber \\ \leq &~ 3\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2 +3\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2 + 12\|\mathbf{X}^t_\perp\|^2,\label{eq:Xplus1-X} \end{align} where in the first inequality we have used $\mathbf{X}^t (\mathbf{W}-\mathbf{I})=\mathbf{X}^t(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{I})$ from $\mathbf{J}(\mathbf{W}-\mathbf{I}) = \mathbf{J}-\mathbf{J}$, and in the second inequality we have used $\|\mathbf{W}\|_2\leq 1$ and $\|\mathbf{W}-\mathbf{I}\|_2\leq 2$. Taking expectation over both sides of \eqref{eq:Xplus1-X} and using \eqref{eq:hatx_xprox}, we have \begin{align*} &~ \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \\ \le &~3 \left( \textstyle 4 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left( 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2\right) +3 \mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + 12 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]\\ = &~ 3 \textstyle \left(2 -\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +12\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 6\eta^2\sigma^2 + 24\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]. \end{align*} Plugging the inequality above into \eqref{eq:y_cons2} gives \begin{align*} \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \leq &~ \left(\textstyle\frac{1+\rho^2}{2} + \frac{24 \rho^2 L^2\eta^2 }{1-\rho^2 } \right) \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] + \textstyle 5 \rho^2 n\sigma^2 +\frac{12 \rho^2 L^2 \eta^2 \sigma^2 }{1-\rho^2 } \nonumber \\ &~\textstyle + \frac{6\rho^2 L^2 }{1-\rho^2 }\left( \textstyle 2- \frac{\eta}{2\lambda} \right) \mathbb{E}\big[\|\widehat\mathbf{X}^{t-1}-\mathbf{X}^{t-1} \|^2\big] + \frac{48 \rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\|\mathbf{X}^{t-1}_\perp\|^2\big]. \end{align*} By $\rho<1$ and $ \eta \leq \frac{1-\rho^2}{4\sqrt{6} \rho L}$, we have $\frac{24 \rho^2 L^2 \eta^2}{1-\rho^2 } \leq \frac{1-\rho^2}{4}$ and $\frac{12 \rho^2 L^2 \eta^2}{1-\rho^2 } \leq \frac{1-\rho^2}{8}\leq n$, and further \eqref{eq:Y_consensus}. \end{proof} \begin{lemma}\label{lem:weak_convex} Let $\eta\leq \lambda \leq\frac{1}{4 L}$. It holds \begin{align} \sum_{i=1}^n \mathbb{E}[\phi_\lambda({\mathbf{x}}_i^{t+1})] \leq &~ \sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})] + \frac{4}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{4 \eta^2}{\lambda} \mathbb{E}[ \|\Y^t_\perp\|^2\big] - \frac{\eta}{4\lambda^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t \|^2\big] + \frac{\eta^2\sigma^2}{\lambda}. 
\label{eq:phi_update} \end{align} \end{lemma} \begin{proof} By the definition in \eqref{eq:x_t_hat}, the update in \eqref{eq:x_1_update}, the $L$-weak convexity of $\phi$, and the convexity of $\|\cdot\|^2$, we have \begin{align} &~\phi_\lambda({\mathbf{x}}_i^{t+1}) \overset{\eqref{eq:x_t_hat}}{=} \phi(\widehat{\mathbf{x}}_i^{t+1})+{\textstyle \frac{1}{2\lambda} }\|\widehat{\mathbf{x}}_i^{t+1}-{\mathbf{x}}_i^{t+1}\|^2 \overset{\eqref{eq:x_1_update}}{\leq} \phi\bigg(\sum_{j=1}^n\mathbf{W}_{ji}\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}\bigg)+{ \frac{1}{2\lambda}} \bigg\|\sum_{j=1}^n \mathbf{W}_{ji}\big(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big)\bigg\|^2 \nonumber \\ &~\overset{\mbox{Lemma \ref{lem:weak_convx}} }{\leq} \sum_{j=1}^n \mathbf{W}_{ji} \phi(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}) +{ \frac{L}{2} }\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-\widehat{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2+{ \frac{1}{2\lambda} }\sum_{j=1}^n \mathbf{W}_{ji} \|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2 \nonumber \\ &~\leq \sum_{j=1}^n \mathbf{W}_{ji} \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}}) + \frac{1}{4\lambda} \sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2, \label{eq:phi_update_0} \end{align} where in the last inequality we use $ \phi(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}) + \frac{1}{2\lambda} \|(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2 = \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}})$, $\|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-\widehat{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2\leq \frac{1}{(1-\lambda L)^2}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2$ from Lemma \ref{lem:prox_diff}, $\frac{1}{(1-\lambda L)^2}\leq 2$ and $ L \leq \frac{1}{4\lambda}$. For the first term on the right hand side of \eqref{eq:phi_update_0}, with $\sum_{i=1}^n \mathbf{W}_{ji}=1$, we have \begin{align} \sum_{i=1}^n \sum_{j=1}^n \mathbf{W}_{ji} \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}}) = &~ \sum_{i=1}^n \phi_\lambda({\mathbf{x}}_i^{t+\frac{1}{2}}) \leq \sum_{i=1}^n \phi_\lambda( {\mathbf{x}}_i^{t}) + { \frac{1}{2\lambda}} \|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2 - { \frac{1}{2\lambda}} \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2, \label{eq:phi_lambda} \end{align} where we have used $ \phi_\lambda({\mathbf{x}}_i^{t+\frac{1}{2}}) \leq \phi(\widehat{\mathbf{x}}_i^{t})+\frac{1}{2\lambda} \|\widehat{\mathbf{x}}_i^{t}-{\mathbf{x}}_i^{t+\frac{1}{2}}\|^2$ and $\phi_\lambda({\mathbf{x}}_i^{t}) = \phi( \widehat{\mathbf{x}}_i^{t}) + \frac{1}{2\lambda} \|\widehat{\mathbf{x}}_i^{t}-{\mathbf{x}}_i^t\|^2$.
For the second term on the right hand side of \eqref{eq:phi_update_0}, with Lemma \ref{lem:prox_diff} and \eqref{eq:x_half_update}, we have \begin{align} &~\sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2 = \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|\prox_{\eta r}({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-\prox_{\eta r}({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ \leq &~ \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ = &~ \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)+(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ \leq&~ 2\sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)\|^2 + 2\sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|(\bar{{\mathbf{x}}}^{t}-\eta\bar{{\mathbf{y}}}^{t})-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ \leq&~ 2\sum_{i=1}^n\sum_{j=1}^{n-1} \mathbf{W}_{ji} \|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)\|^2 + 2\sum_{i=1}^n \sum_{l=2}^n \mathbf{W}_{li}\|(\bar{{\mathbf{x}}}^{t}-\eta\bar{{\mathbf{y}}}^{t})-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber \\ \leq&~4 \sum_{j=1}^{n} \|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)\|^2 \leq 8 \|\mathbf{X}^{t}_\perp\|^2+ 8\eta^2 \|\Y^{t}_\perp\|^2. \label{eq:2_3} \end{align} With \eqref{eq:phi_lambda} and \eqref{eq:2_3}, summing up \eqref{eq:phi_update_0} from $i=1$ to $n$ gives \begin{align*} \sum_{i=1}^n \phi_\lambda({\mathbf{x}}_i^{t+1}) \leq &~ \sum_{i=1}^n \phi_\lambda( {\mathbf{x}}_i^{t}) +{ \frac{1}{2\lambda} }\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2 - { \frac{1}{2\lambda} } \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2 +{ \frac{2}{\lambda} }\left( \|\mathbf{X}^{t}_\perp\|^2+ \eta^2 \|\Y^{t}_\perp\|^2 \right) \end{align*} Now taking the expectation on the above inequality and using \eqref{eq:hatx_xprox}, we have \begin{align*} \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1}) \big] \leq &~ \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda( {\mathbf{x}}_i^{t}) \big] - \frac{1}{2\lambda} \mathbb{E}\big[ \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{2}{\lambda} \mathbb{E}\big[ \|\mathbf{X}^{t}_\perp\|^2+ \eta^2 \|\Y^{t}_\perp\|^2 \big]\\ &~ \hspace{-2cm}+\frac{1}{2\lambda} \left(\textstyle 4 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left(\textstyle 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2 \right). \end{align*} Combining like terms in the inequality above gives \eqref{eq:phi_update}. \end{proof} With Lemmas \ref{lem:XI_J}, \ref{lem:YI_J} and \ref{lem:weak_convex}, we are ready to prove Theorem \ref{thm:sec2}. We build the following Lyapunov function: \begin{align*} \mathbf{V}^t = z_1 \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] +z_2\mathbb{E}[\|\Y^t_\perp\|^2] +z_3\sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})], \end{align*} where $z_1, z_2, z_3 \geq 0$ will be determined later. 
\subsection*{Proof of Theorem \ref{thm:sec2}.} \begin{proof} Denote \begin{align*} \Phi^t = \sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})],\quad \Omega_0^t = \mathbb{E}[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2],\quad \Omega^t = \left(\mathbb{E}[\|\mathbf{X}^t_\perp\|^2], \mathbb{E}[\|\Y^t_\perp\|^2], \Phi^t\right)^\top. \end{align*} Then Lemmas \ref{lem:XI_J}, \ref{lem:YI_J} and \ref{lem:weak_convex} imply $\Omega^{t+1} \leq \mathbf{A}\Omega^t + {\mathbf{b}} \Omega_0^t + {\mathbf{c}} \sigma^2$, where \begin{align*} \mathbf{A} = \begin{pmatrix} \frac{1+\rho^2}{2} &~ \frac{2\rho^2}{1-\rho^2}\eta^2 &~ 0\\ \frac{48\rho^2 L^2 }{1-\rho^2 } &~\frac{3+\rho^2}{4} &~ 0 \\ \frac{4}{\lambda} &~ \frac{4}{\lambda}\eta^2 &~ 1 \end{pmatrix}, \quad {\mathbf{b}} = \begin{pmatrix} 0 \\ \frac{12\rho^2 L^2 }{1-\rho^2 } \\ - \frac{\eta}{4\lambda^2} \end{pmatrix}, \quad {\mathbf{c}} = \begin{pmatrix} 0 \\ 6n \\ \frac{\eta^2}{\lambda} \end{pmatrix}. \end{align*} For any ${\mathbf{z}} = (z_1, z_2, z_3)^\top \geq \mathbf{0}$, we have \begin{align*} {\mathbf{z}}^\top \Omega^{t+1} \leq {\mathbf{z}}^\top \Omega^{t}+ ({\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top)\Omega^t +{\mathbf{z}}^\top {\mathbf{b}} \Omega_0^t + {\mathbf{z}}^\top{\mathbf{c}} \sigma^2. \end{align*} Take $$z_1=\frac{10}{1-\rho^2},\ z_2=\left(\frac{80\rho^2}{(1-\rho^2)^3} + \frac{16}{1-\rho^2}\right)\eta^2,\ z_3 = \lambda.$$ We have $ {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq \begin{pmatrix} \frac{48\rho^2 L^2 }{1-\rho^2 }z_2-1, 0, 0 \end{pmatrix}. $ Note $z_2 \leq \frac{96}{(1-\rho^2)^3}\eta^2$. Thus \begin{align*} {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq \begin{pmatrix} \textstyle \frac{4608\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2-1, 0, 0 \end{pmatrix}, \ {\mathbf{z}}^\top{\mathbf{b}} \leq \textstyle \frac{1152\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2 - \frac{\eta}{4\lambda}, \ {\mathbf{z}}^\top{\mathbf{c}} \leq \textstyle \Big( \textstyle \frac{576n }{(1-\rho^2)^3} + 1\Big)\eta^2 \leq \frac{577n}{(1-\rho^2)^3} \eta^2. \end{align*} With $\eta\leq \frac{(1-\rho^2)^4}{96\rho L}$ and $\lambda \leq \frac{1}{96\rho L}$, we have $ {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2}, 0, 0 )^\top$ and $ {\mathbf{z}}^\top{\mathbf{b}} \leq \left(12\rho L - \frac{1}{8\lambda}\right)\eta - \frac{\eta}{8\lambda} \leq -\frac{\eta}{8\lambda}$. Thus \begin{align} {\mathbf{z}}^\top \Omega^{t+1} \leq \textstyle {\mathbf{z}}^\top \Omega^{t} -\frac{1}{2}\mathbb{E}[\|\mathbf{X}^t_\perp\|^2] -\frac{\eta}{8\lambda} \Omega_0^t + \frac{577n}{(1-\rho^2)^3} \eta^2 \sigma^2.\label{eq:l_fun} \end{align} Hence, summing up \eqref{eq:l_fun} for $t=0,1,\ldots,T-1$ gives \begin{align}\label{eq:avg-Omega} \frac{1}{\lambda T}\sum_{t=0}^{T-1} \Omega_0^t +\frac{4}{\eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \leq \textstyle \frac{8}{\eta T} \left({\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T}\right) + \frac{4616 n\eta\sigma^2}{(1-\rho^2)^3} . \end{align} From ${\mathbf{y}}_i^{-1} =\mathbf{0}, \nabla F_i({\mathbf{x}}_i^{-1},\xi_i^{-1}) = \mathbf{0}, {\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$, we have \begin{align} \|\mathbf{X}^0_\perp\|^2 = 0, \quad \|\Y^0_\perp\|^2 = \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2, \quad \Phi^0=n \phi_\lambda({\mathbf{x}}^0). 
\label{eq:initial_thm2} \end{align} From Assumption \ref{assu:prob}, $\phi$ is lower bounded and thus $\phi_\lambda $ is also lower bounded, i.e., there is a constant $\phi_\lambda^*$ satisfying $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}}) > -\infty$. Thus \begin{align} \Phi^T \geq n \phi_\lambda^*.\label{eq:end_thm2} \end{align} With \eqref{eq:initial_thm2}, \eqref{eq:end_thm2}, and the nonnegativity of $ \mathbb{E}[\|\mathbf{X}^T_\perp\|^2]$ and $ \mathbb{E}[\|\Y^T_\perp\|^2]$, we have \begin{align} \textstyle {\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T} \le \frac{96 \eta^2}{(1-\rho^2)^3} \mathbb{E}[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2] + \lambda n \phi_\lambda({\mathbf{x}}^0) -\lambda n \phi_\lambda^*. \label{eq:Omega0_OmegaT} \end{align} Since $\tau$ is selected from $\{0, 1, \ldots, T-1\}$ uniformly at random, it follows from \eqref{eq:Omega0_OmegaT} and \eqref{eq:avg-Omega} that \begin{align*} &~ \frac{1}{\lambda^2n} \mathbb{E}\big[\|\widehat\mathbf{X}^{\tau}-\mathbf{X}^{\tau}\|^2\big] +\frac{4}{n \lambda \eta}\mathbb{E}\big[\|\mathbf{X}^\tau_\perp\|^2\big] \leq \frac{1}{\lambda^2n T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2\big] +\frac{4}{n \lambda \eta T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] \nonumber \\ \leq &~ \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} + \frac{4616 \eta}{\lambda(1-\rho^2)^3} \sigma^2 \textstyle + \frac{768\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda T(1-\rho^2)^3}. \end{align*} Noting $\|\nabla \phi_\lambda ({\mathbf{x}}_i^\tau)\|^2 = \frac{\|{\mathbf{x}}_i^\tau-\widehat{\mathbf{x}}_i^\tau\|^2}{\lambda^{2}}$ from Lemma \ref{lem:xhat_x}, we finish the proof. \end{proof} \section{Convergence Analysis} In this section, we analyze the convergence of the algorithms proposed in Section~\ref{sec:alg}. The nonconvexity of the problem and the stochasticity of the algorithms both complicate the analysis. In addition, the presence of the nonsmooth regularizer $r(\cdot)$ poses further challenges. To address these challenges, we employ the tool of the Moreau envelope \cite{MR201952}, which has been commonly used for analyzing methods for solving nonsmooth weakly convex problems. \begin{definition}[Moreau envelope] Let $\psi$ be an $L$-weakly convex function, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex. For $\lambda\in(0,\frac{1}{L})$, the Moreau envelope of $\psi$ is defined as \vspace{-0.2cm} \begin{equation*} \psi_\lambda({\mathbf{x}}) = \min_{\mathbf{y}} \textstyle \left\{\psi({\mathbf{y}}) + \frac{1}{2\lambda}\|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}, \vspace{-0.2cm} \end{equation*} and the unique minimizer is denoted as \vspace{-0.2cm} \begin{equation*} \prox_{\lambda \psi}({\mathbf{x}})= \argmin_{{\mathbf{y}}} \textstyle \left\{\psi({\mathbf{y}})+\frac{1}{2\lambda} \|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}.\vspace{-0.2cm} \end{equation*} \end{definition} The Moreau envelope $\psi_\lambda$ has nice properties. The result below can be found in \cite{davis2019stochastic, nazari2020adaptive, xu2022distributed-SsGM}. \begin{lemma}\label{lem:xhat_x} For any function $\psi$, if it is $L$-weakly convex, then for any $\lambda \in (0, \frac{1}{L})$, the Moreau envelope $\psi_\lambda$ is smooth with gradient given by $\nabla \psi_\lambda ({\mathbf{x}}) = \lambda^{-1} ({\mathbf{x}}-\widehat{\mathbf{x}}),$ where $\widehat{\mathbf{x}}=\prox_{\lambda\psi}({\mathbf{x}})$. 
Moreover, \vspace{-0.2cm} \[ \|{\mathbf{x}}-\widehat{\mathbf{x}}\|=\lambda\|\nabla \psi_\lambda({\mathbf{x}})\|, \quad \dist(\mathbf{0}, \partial \psi(\widehat {\mathbf{x}}))\leq \|\nabla \psi_\lambda({\mathbf{x}})\|.\vspace{-0.2cm} \] \end{lemma} Lemma~\ref{lem:xhat_x} implies that if $\|\nabla \psi_\lambda({\mathbf{x}})\|$ is small, then $\widehat{\mathbf{x}}$ is a near-stationary point of $\psi$ and ${\mathbf{x}}$ is close to $\widehat{\mathbf{x}}$. Hence, $\|\nabla \psi_\lambda({\mathbf{x}})\|$ can be used as a valid measure of stationarity violation at ${\mathbf{x}}$ for $\psi$. Based on this observation, we define the $\epsilon$-stationary solution below for the decentralized problem \eqref{eq:decentralized_problem}. \begin{definition}[Expected $\epsilon$-stationary solution]\label{def:eps-sol} Let $\epsilon > 0$. A point $\mathbf{X} = [{\mathbf{x}}_1, \ldots, {\mathbf{x}}_n]$ is called an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} if for a constant $\lambda\in (0, \frac{1}{L})$, \vspace{-0.1cm} \begin{equation*} \textstyle\frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla \phi_\lambda({\mathbf{x}}_i)\|^2 + L^2 \|\mathbf{X}_\perp\|^2\right] \leq \epsilon^2. \vspace{-0.1cm} \end{equation*} \end{definition} In the definition above, $L^2$ before the consensus error term $\|\mathbf{X}_\perp\|^2$ is used to balance the two terms. This scaling scheme has also been used in existing works such as \cite{xin2021stochastic,mancino2022proximal,DBLP:journals/corr/abs-2202-00255}. From the definition, we see that if $\mathbf{X}$ is an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem}, then each local solution ${\mathbf{x}}_i$ will be a near-stationary solution of $\phi$ and in addition, these local solutions are all close to each other, namely, they are near consensus. Below we first state the convergence results of the non-compressed method DProxSGT and then the compressed one CDProxSGT. All the proofs are given in the appendix. \begin{theorem}[Convergence rate of DProxSGT]\label{thm:sec2} Under Assumptions \ref{assu:prob} -- \ref{assu:stoc_grad}, let $\{\mathbf{X}^t\}$ be generated from $\mathrm{DProxSGT}$ in Algorithm~\ref{alg:DProxSGT} with ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$. Let $\lambda = \min\big\{\frac{1}{4 L}, \frac{1}{96\rho L}\big\}$ and $\eta\leq \min\big\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}\big\}$. Select $\tau$ from $\{0, 1, \ldots, T-1\}$ uniformly at random. Then \vspace{-0.1cm} \begin{equation*} \begin{aligned} &~ \textstyle \frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla\phi_\lambda({\mathbf{x}}_i^\tau)\|^2 +\frac{4}{\lambda \eta} \|\mathbf{X}^\tau_\perp\|^2\right] \\ \leq &~ \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} + \frac{4616 \eta}{\lambda(1-\rho^2)^3} \sigma^2 \textstyle + \frac{768\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda T(1-\rho^2)^3}, \end{aligned} \vspace{-0.1cm} \end{equation*} where $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}})> -\infty$. \end{theorem} By Theorem~\ref{thm:sec2}, we obtain a complexity result as follows. \begin{corollary}[Iteration complexity] Under the assumptions of Theorem~\ref{thm:sec2}, for a given $\epsilon>0$, take $ \eta = \min\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}, \frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}\}$. 
Then $\mathrm{DProxSGT}$ can find an expected $\epsilon$-stationary point of \eqref{eq:decentralized_problem} when $T \geq T_\epsilon = \left\lceil \frac{16\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta \epsilon^2 } + \frac{1536\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda (1-\rho^2)^3 \epsilon^2} \right\rceil$. \end{corollary} \begin{remark} \label{remark:DProxSGT} When $\epsilon$ is small enough, $\eta$ will take $\frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}$, and $T_\epsilon$ will be dominated by the first term. In this case, DProxSGT can find an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} in $O\Big( \frac{\sigma^2\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right) }{\lambda(1-\rho^2)^3 \epsilon^4}\Big)$ iterations, leading to the same number of stochastic gradient samples and communication rounds. Our sample complexity is optimal in terms of the dependence on $\epsilon$ under the smoothness condition in Assumption~\ref{assu:prob}, as it matches the lower bound in \cite{arjevani2022lower}. However, the dependence on $1-\rho$ may not be optimal because of our possibly loose analysis, as the \emph{deterministic} method with a single communication per update in \cite{scutari2019distributed} for nonconvex nonsmooth problems has a dependence $(1-\rho)^2$ on the graph topology. \end{remark} \begin{theorem}[Convergence rate of CDProxSGT] \label{thm:sect3thm} Under Assumptions \ref{assu:prob} through \ref{assu:compressor}, let $\{\mathbf{X}^t\}$ be generated from $\mathrm{CDProxSGT}$ in Algorithm \ref{alg:CDProxSGT} with ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$. Let $\lambda = \min \big\{\frac{1}{4 L}, \frac{ (1-\alpha^2)^2}{9 L+41280}\big\}$, and suppose \vspace{-0.1cm} \begin{gather*} \eta \leq~ \min\left\{ \textstyle \lambda, \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}\right\}, \\ \gamma_x\leq~ \min\left\{ \textstyle \frac{1-\alpha^2}{25}, \frac{\eta}{\alpha}\right\}, \quad \gamma_y\leq ~ \textstyle \frac{(1-\alpha^2)(1-\widehat\rho^2_x)(1-\widehat\rho^2_y)}{317}. \end{gather*} \vspace{-0.1cm} Select $\tau$ from $\{0, 1, \ldots, T-1\}$ uniformly at random. Then \vspace{-0.1cm} \begin{equation*} \begin{aligned} &~ \textstyle \frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla\phi_\lambda({\mathbf{x}}_i^\tau)\|^2 +\frac{4}{\lambda \eta} \|\mathbf{X}^\tau_\perp\|^2\right] \\ \leq &~\textstyle \frac{8\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta T} +\frac{(50096n+48)\eta \sigma^2}{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} + \frac{4176 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}, \end{aligned} \vspace{-0.1cm} \end{equation*} where $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}})> -\infty$. \end{theorem} By Theorem~\ref{thm:sect3thm}, we have the complexity result as follows. \begin{corollary}[Iteration complexity] Under the assumptions of Theorem \ref{thm:sect3thm}, for a given $\epsilon>0$, take \begin{gather*} \eta = \textstyle \min \left\{\frac{1}{4 L}, \frac{ (1-\alpha^2)^2}{9 L+41280}, \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}\right.,\\ \textstyle \left. 
\frac{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)\epsilon^2}{2(50096n+48) \sigma^2}\right\}, \\ \textstyle \gamma_x = \min\left\{ \textstyle \frac{1-\alpha^2}{25}, \frac{\eta}{\alpha}\right\}, \quad \gamma_y = \frac{(1-\alpha^2)(1-\widehat\rho^2_x)(1-\widehat\rho^2_y)}{317}. \end{gather*} Then $\mathrm{CDProxSGT}$ can find an expected $\epsilon$-stationary point of \eqref{eq:decentralized_problem} when $T\geq T_\epsilon^c$ where \begin{align*} T_\epsilon^c = \textstyle \left\lceil\frac{16\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta \epsilon^2} + \frac{8352 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)\epsilon^2} \right\rceil. \end{align*} \end{corollary} \begin{remark} When the given tolerance $\epsilon$ is small enough, $\eta$ will take $\frac{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)\epsilon^2}{2(50096n+48) \sigma^2}$ and $T_\epsilon^c$ will be dominated by the first term. In this case, similar to DProxSGT in Remark \ref{remark:DProxSGT}, CDProxSGT can find an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} in $O\Big( \frac{\sigma^2\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right) }{ \lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y) \epsilon^4}\Big)$ iterations. \end{remark} \section{Decentralized Algorithms}\label{sec:alg} In this section, we give our decentralized algorithms for solving \eqref{eq:decentralized_problem} or equivalently \eqref{eq:problem_original}. To perform neighbor communications, we introduce a mixing (or gossip) matrix $\mathbf{W}$ that satisfies the following standard assumption. \begin{assumption}[Mixing matrix] \label{assu:mix_matrix} We choose a mixing matrix $\mathbf{W}$ such that \vspace{-1.5mm} \begin{enumerate} \item [(i)] $\mathbf{W}$ is doubly stochastic: $\mathbf{W}\mathbf{1} = \mathbf{1}$ and $\mathbf{1}^\top \mathbf{W} = \mathbf{1}^\top$; \item [(ii)] $\mathbf{W}_{ij} = 0$ if $i$ and $j$ are not neighbors of each other; \item [(iii)] $\mathrm{Null}(\mathbf{W}-\mathbf{I}) = \mathrm{span}\{\mathbf{1}\}$ and $\rho \triangleq \|\mathbf{W} - \mathbf{J}\|_2 < 1$. \end{enumerate} \vspace{-2mm} \end{assumption} The condition in (ii) above is enforced so that \emph{direct} communications can be made only if two nodes (or workers) are immediate (or 1-hop) neighbors of each other. The condition in (iii) can hold if the graph $\mathcal{G}$ is connected. The assumption $\rho < 1$ is critical to ensure contraction of the consensus error. The value of $\rho$ depends on the graph topology. \cite{koloskova2019decentralized} gives three commonly used examples: when uniform weights are used between nodes, $\mathbf{W} = \mathbf{J}$ and $\rho = 0$ for a fully-connected graph (in which case, our algorithms will reduce to centralized methods), $1-\rho = \Theta(\frac{1}{n})$ for a 2d torus grid graph where every node has 4 neighbors, and $1-\rho = \Theta(\frac{1}{n^2})$ for a ring-structured graph. More examples can be found in \cite{nedic2018network}. \subsection{Non-compressed Method} With the mixing matrix $\mathbf{W}$, we propose a decentralized proximal stochastic gradient method with gradient tracking (DProxSGT) for \eqref{eq:decentralized_problem}. The pseudocode is shown in Algorithm~\ref{alg:DProxSGT}. 
In every iteration $t$, each node $i$ first computes a local stochastic gradient $\nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t})$ by taking a sample $\xi_i^{t}$ from its local data distribution $\mathcal{D}_i$, then performs gradient tracking in \eqref{eq:y_half_update} and neighbor communications of the tracked gradient in \eqref{eq:y_update}, and finally takes a proximal gradient step in \eqref{eq:x_half_update} and mixes the model parameter with its neighbors in \eqref{eq:x_1_update}. \begin{algorithm}[tb] \caption{DProxSGT}\label{alg:DProxSGT} \begin{algorithmic} \small{ \STATE Initialize ${\mathbf{x}}_i^{0}$ and set ${\mathbf{y}}_i^{-1}=\mathbf{0}$, $\nabla F_i({\mathbf{x}}_i^{-1},\xi_i^{-1}) =\mathbf{0}$, $\forall i\in\mathcal{N}$. \FOR{$t=0, 1, 2, \ldots, T-1$} \STATE \hspace{-0.1cm}\textbf{all} nodes $i=1, 2, \ldots, n$ do the updates \textbf{in parallel:} \STATE obtain one random sample $\xi_i^t$, compute a stochastic gradient $\nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t})$, and perform \vspace{-0.2cm} \begin{gather} {\mathbf{y}}_i^{t-\frac{1}{2}} = {\mathbf{y}}_i^{t-1} + \nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t}) - \nabla F_i({\mathbf{x}}_i^{t-1},\xi_i^{t-1}),\label{eq:y_half_update} \\ {\mathbf{y}}_i^t = \textstyle \sum_{j=1}^n \mathbf{W}_{ji} {\mathbf{y}}_j^{t-\frac{1}{2}},\label{eq:y_update}\\ {\mathbf{x}}_i^{t+\frac{1}{2}} =\prox_{\eta r} \left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right),\label{eq:x_half_update} \\ {\mathbf{x}}_i^{t+1} = \textstyle \sum_{j=1}^n \mathbf{W}_{ji}{\mathbf{x}}_j^{t+\frac{1}{2}}. \label{eq:x_1_update} \vspace{-0.2cm} \end{gather} \ENDFOR} \end{algorithmic} \end{algorithm} \vspace{-0.1mm} Note that for simplicity, we take only one random sample $\xi_i^{t}$ in Algorithm \ref{alg:DProxSGT} but in general, a mini-batch of random samples can be taken, and all theoretical results that we will establish in the next section still hold. We emphasize that we need only $\mathcal{O}(1)$ samples for each update. This is different from ProxGT-SA in \cite{xin2021stochastic}, which shares a similar update formula as our algorithm but needs a very big batch of samples, as many as $\mathcal{O}(\frac{1}{\epsilon^2})$, where $\epsilon$ is a target tolerance. A small-batch training can usually generalize better than a big-batch one \cite{lecun2012efficient, keskar2016large} on training large-scale deep learning models. Throughout the paper, we make the following standard assumption on the stochastic gradients. \begin{assumption}[Stochastic gradients] \label{assu:stoc_grad} We assume that \vspace{-1.5mm} \begin{itemize} \item[(i)] The random samples $\{\xi_i^t\}_{i\in \mathcal{N}, t\ge 0}$ are independent. \item[(ii)] There exists a finite number $\sigma\ge0$ such that for any $i\in \mathcal{N}$ and ${\mathbf{x}}_i\in\dom(r)$, \begin{gather*} \mathbb{E}_{\xi_i}[\nabla F_i({\mathbf{x}}_i,\xi_i)] = \nabla f_i({\mathbf{x}}_i),\\ \mathbb{E}_{\xi_i}[\|\nabla F_i({\mathbf{x}}_i,\xi_i)-\nabla f_i({\mathbf{x}}_i)\|^2] \leq \sigma^2. \end{gather*} \end{itemize} \vspace{-2mm} \end{assumption} The gradient tracking step in \eqref{eq:y_half_update} is critical to handle heterogeneous data \cite{di2016next,nedic2017achieving,lu2019gnsd,pu2020distributed,sun2020improving,xin2021stochastic,song2021optimal,mancino2022proximal,zhao2022beer,DBLP:journals/corr/abs-2202-00255,song2022compressed}. 
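To make the updates \eqref{eq:y_half_update}--\eqref{eq:x_1_update} concrete, we include a minimal NumPy sketch of one DProxSGT iteration written in matrix form. It assumes a uniform-weight ring of $n=5$ workers and $r({\mathbf{x}})=\mu\|{\mathbf{x}}\|_1$, so that the proximal step reduces to entrywise soft-thresholding; the toy least-squares losses, the noise level, and all variable names are illustrative only and are not part of the algorithm or of our actual implementation.
\begin{verbatim}
import numpy as np

d, n = 20, 5                      # dimension, number of workers
mu, eta = 1e-4, 0.01              # l1 weight and step size (illustrative values)
rng = np.random.default_rng(0)

# Ring mixing matrix with uniform weights 1/3 (cf. the mixing matrix assumption).
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[j % n, i] = 1.0 / 3.0
J = np.ones((n, n)) / n
rho = np.linalg.norm(W - J, 2)    # spectral norm; rho < 1 for this connected ring

def prox_l1(Z, tau):
    # proximal operator of tau*||.||_1 (soft-thresholding), applied entrywise
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

# Toy heterogeneous least-squares losses f_i(x) = 0.5*||A_i x - b_i||^2.
A = [rng.standard_normal((30, d)) for _ in range(n)]
b = [rng.standard_normal(30) for _ in range(n)]

def stoch_grad(X, noise=0.1):
    # columns stand in for the local stochastic gradients grad F_i(x_i, xi_i)
    G = np.stack([A[i].T @ (A[i] @ X[:, i] - b[i]) for i in range(n)], axis=1)
    return G + noise * rng.standard_normal((d, n))

# One DProxSGT iteration; X, Y, G_prev are carried over between iterations.
X = np.zeros((d, n)); Y = np.zeros((d, n)); G_prev = np.zeros((d, n))
G = stoch_grad(X)
Y_half = Y + G - G_prev                    # gradient tracking
Y = Y_half @ W                             # mix tracked gradients with neighbors
X_half = prox_l1(X - eta * Y, eta * mu)    # proximal gradient step
X = X_half @ W                             # mix model parameters with neighbors
G_prev = G
print("consensus error:", np.linalg.norm(X - X.mean(axis=1, keepdims=True))**2)
\end{verbatim}
Stacking the local vectors as the columns of X and Y matches the matrix notation used in our analysis, and right-multiplication by the mixing matrix W realizes the neighbor averaging in \eqref{eq:y_update} and \eqref{eq:x_1_update}.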
In a deterministic scenario where $\nabla f_i(\cdot)$ is used instead of $\nabla F_i(\cdot, \xi)$, for each $i$, the tracked gradient ${\mathbf{y}}_i^t$ can converge to the gradient of the global function $\frac{1}{n}\sum_{i=1}^n f_i(\cdot)$ at $\bar{\mathbf{x}}^t$, and thus all local updates move towards a direction to minimize the \emph{global} objective. When stochastic gradients are used, the gradient tracking can play a similar role and make ${\mathbf{y}}_i^t$ approach to the stochastic gradient of the global function. With this nice property of gradient tracking, we can guarantee convergence without strong assumptions that are made in existing works, such as bounded gradients \cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized, singh2021squarm} and bounded data similarity over nodes \cite{lian2017can, tang2018communication, tang2019deepsqueeze, vogels2020practical, wang2021error}. \subsection{Compressed Method} In DProxSGT, each worker needs to communicate both the model parameter and tracked stochastic gradient with its neighbors at every iteration. Communications have become a bottleneck for distributed training on GPUs. In order to save the communication cost, we further propose a compressed version of DProxSGT, named CDProxSGT. The pseudocode is shown in Algorithm \ref{alg:CDProxSGT}, where $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ are two compression operators. \begin{algorithm}[tb] \caption{CDProxSGT}\label{alg:CDProxSGT} \begin{algorithmic} \small{\STATE Initialize ${\mathbf{x}}_i^{0}$; set ${\mathbf{y}}_i^{-1}=\underline{\mathbf{y}}_i^{-1}=\nabla F_i({\mathbf{x}}_i^{-1}, \xi_i^{-1})=\underline{\mathbf{x}}_i^{0} =\mathbf{0}$, $\forall i\in\mathcal{N}$. \FOR{$t=0, 1, 2, \ldots, T-1$} \STATE \hspace{-0.1cm}\textbf{all} nodes $i=1, 2, \ldots, n$ do the updates \textbf{in parallel:} \vspace{-0.2cm} \begin{gather} {\mathbf{y}}_i^{t-\frac{1}{2}} = {\mathbf{y}}_i^{t-1} + \nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t}) - \nabla F_i({\mathbf{x}}_i^{t-1},\xi_i^{t-1}),\label{eq:alg3_1}\\ \underline{\mathbf{y}}_i^{t} = \underline{\mathbf{y}}_i^{t-1} + Q_{\mathbf{y}}\big[{\mathbf{y}}_i^{t-\frac{1}{2}} - \underline{\mathbf{y}}_i^{t-1}\big], \label{eq:alg3_2}\\ {\mathbf{y}}_i^{t} = {\mathbf{y}}_i^{t-\frac{1}{2}} +\gamma_y \left(\textstyle \sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t}-\underline{\mathbf{y}}_i^{t}\right), \label{eq:alg3_3}\\ {\mathbf{x}}_i^{t+\frac{1}{2}} =\prox_{\eta r} \left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right), \label{eq:alg3_4}\\ \underline{\mathbf{x}}_i^{t+1} = \underline{\mathbf{x}}_i^{t} + Q_{\mathbf{x}}\big[{\mathbf{x}}_i^{t+\frac{1}{2}} - \underline{\mathbf{x}}_i^{t}\big], \label{eq:alg3_5}\\ {\mathbf{x}}_i^{t+1} = {\mathbf{x}}_i^{t+\frac{1}{2}}+\gamma_x\Big(\textstyle \overset{n}{\underset{j=1}\sum} \mathbf{W}_{ji} \underline{\mathbf{x}}_j^{t+1}-\underline{\mathbf{x}}_i^{t+1}\Big).\label{eq:alg3_6} \vspace{-0.2cm} \end{gather} \ENDFOR} \end{algorithmic} \end{algorithm} \vspace{-0.1mm} In Algorithm \ref{alg:CDProxSGT}, each node communicates the non-compressed vectors $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ with its neighbors in \eqref{eq:alg3_3} and \eqref{eq:alg3_6}. We write it in this way for ease of read and analysis. 
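As a complement to the pseudocode, the following minimal NumPy sketch spells out one CDProxSGT iteration \eqref{eq:alg3_1}--\eqref{eq:alg3_6} in matrix form. It assumes Top-$k$ compressors for both $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$, a uniform-weight ring, stand-in random vectors in place of the stochastic gradients, and $r\equiv 0$ so that the proximal step is the identity; all names and constants are illustrative only.
\begin{verbatim}
import numpy as np

d, n = 20, 5
eta, gamma_x, gamma_y = 0.01, 0.5, 0.5   # illustrative values
k = max(1, int(0.3 * d))                 # Top-k keeps 30% of the entries per column
rng = np.random.default_rng(1)

def top_k(Z, k):
    # keep the k largest-magnitude entries of each column, zero out the rest
    Q = np.zeros_like(Z)
    idx = np.argsort(-np.abs(Z), axis=0)[:k, :]
    np.put_along_axis(Q, idx, np.take_along_axis(Z, idx, axis=0), axis=0)
    return Q

W = np.zeros((n, n))                     # uniform-weight ring mixing matrix
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[j % n, i] = 1.0 / 3.0

# State carried across iterations, initialized as in the pseudocode.
X = np.zeros((d, n)); Y = np.zeros((d, n))
Xu = np.zeros((d, n)); Yu = np.zeros((d, n))   # the underlined estimates
G_prev = np.zeros((d, n))

G = rng.standard_normal((d, n))          # stand-in for the stochastic gradients
Y_half = Y + G - G_prev                  # gradient tracking
Yu = Yu + top_k(Y_half - Yu, k)          # compress and accumulate the y-residue
Y = Y_half + gamma_y * (Yu @ W - Yu)     # partial mixing of the estimates
X_half = X - eta * Y                     # proximal step; identity since r = 0 here
Xu = Xu + top_k(X_half - Xu, k)          # compress and accumulate the x-residue
X = X_half + gamma_x * (Xu @ W - Xu)     # partial mixing of the estimates
G_prev = G
\end{verbatim}
Only the compressed residues (the arguments of top_k above) would need to be transmitted between neighbors, which is exactly the bookkeeping described next.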
For efficient and \emph{equivalent} implementation, we do not communicate $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ directly but only the compressed residues $Q_{\mathbf{y}}\big[{\mathbf{y}}_i^{t-\frac{1}{2}} - \underline{\mathbf{y}}_i^{t-1}\big]$ and $Q_{\mathbf{x}}\big[{\mathbf{x}}_i^{t+\frac{1}{2}} - \underline{\mathbf{x}}_i^{t}\big]$, as explained below. Besides ${\mathbf{y}}_i^{t-1}$, ${\mathbf{x}}_i^{t}$, $\underline{\mathbf{y}}_i^{t-1}$ and $\underline{\mathbf{x}}_i^{t}$, each node also stores ${\mathbf{z}}_i^{t-1}$ and ${\mathbf{s}}_i^{t}$ which record $\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t-1}$ and $\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{x}}_j^{t}$. For the gradient communication, each node $i$ initializes ${\mathbf{z}}_i^{-1} = \mathbf{0}$, and then at each iteration $t$, after receiving $Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big]$ from its neighbors, it updates $\underline{\mathbf{y}}_i^{t}$ by \eqref{eq:alg3_2}, and ${\mathbf{z}}_i^{t}$ and ${\mathbf{y}}_i^t$ by \vspace{-0.2cm} \begin{align*} {\mathbf{z}}_i^{t} =&~ \textstyle {\mathbf{z}}_i^{t-1} + \sum_{j=1}^n \mathbf{W}_{ji} Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big], \\ {\mathbf{y}}_i^{t} =&~ \textstyle {\mathbf{y}}_i^{t-\frac{1}{2}} +\gamma_y \big({\mathbf{z}}_i^{t}-\underline{\mathbf{y}}_i^{t}\big).\vspace{-0.2cm} \end{align*} From the initialization and the updates of $\underline{\mathbf{y}}_i^{t}$ and ${\mathbf{z}}_i^{t}$, it always holds that ${\mathbf{z}}_i^{t}=\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t}$. The model communication can be done efficiently in the same way. The compression operators $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ in Algorithm \ref{alg:CDProxSGT} can be different, but we assume that they both satisfy the following assumption. \begin{assumption} \label{assu:compressor} There exists $\alpha \in [0,1)$ such that $$\mathbb{E}[\|{\mathbf{x}}-Q[{\mathbf{x}}]\|^2]\leq \alpha^2 \|{\mathbf{x}}\|^2, \forall\, {\mathbf{x}}\in\mathbb{R}^d,$$ for both $Q=Q_{\mathbf{x}}$ and $Q=Q_{\mathbf{y}}$. \end{assumption} The assumption on compression operators is standard and also made in \cite{koloskova2019decentralized-b,koloskova2019decentralized,zhao2022beer}. It is satisfied by sparsification operators such as Random-$k$ \cite{stich2018sparsified} and Top-$k$ \cite{aji2017sparse}. It can also be satisfied by rescaled quantizations. For example, QSGD \cite{alistarh2017qsgd} compresses ${\mathbf{x}}\in \mathbb{R}^d$ by $Q_{qsgd}({\mathbf{x}}) = \frac{\mathbf{sign}({\mathbf{x}})\|{\mathbf{x}}\|}{s} \lfloor s \frac{|{\mathbf{x}}|}{\|{\mathbf{x}}\|} + \xi \rfloor $ where $\xi$ is uniformly distributed on $[0,1]^d$ and $s$ is a parameter controlling the compression level. Then $Q({\mathbf{x}})= \frac{1}{\tau} Q_{qsgd} ({\mathbf{x}})$ with $\tau=(1+\min\{d/s^2, \sqrt{d}/s\})$ satisfies Assumption \ref{assu:compressor} with $\alpha^2=1-\frac{1}{\tau}$. More examples can be found in \cite{koloskova2019decentralized}. Below, we make a couple of remarks to discuss the relations between Algorithm \ref{alg:DProxSGT} and Algorithm \ref{alg:CDProxSGT}. \begin{remark} When $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ are both identity operators, i.e., $Q_{\mathbf{x}}[{\mathbf{x}}] = {\mathbf{x}}$ and $Q_{\mathbf{y}}[{\mathbf{y}}] = {\mathbf{y}}$, and $\gamma_x=\gamma_y=1$ in Algorithm \ref{alg:CDProxSGT}, CDProxSGT reduces to DProxSGT. Hence, the latter can be viewed as a special case of the former. 
However, we will analyze them separately. Although the big-batch training method ProxGT-SA in \cite{xin2021stochastic} shares a similar update with the proposed DProxSGT, our analysis will be completely different and new, as we need only $\mathcal{O}(1)$ samples in each iteration in order to achieve better generalization performance. The analysis of CDProxSGT will be built on that of DProxSGT by carefully controlling the variance error of stochastic gradients and the consensus error, as well as the additional compression error. \end{remark} \begin{remark} When $Q_{\mathbf{y}}$ and $Q_{\mathbf{x}}$ are identity operators, $\underline{\mathbf{y}}_i^{t} = {\mathbf{y}}_i^{t-\frac{1}{2}}$ and $\underline{\mathbf{x}}_i^{t+1} = {\mathbf{x}}_i^{t+\frac{1}{2}}$ for each $i\in\mathcal{N}$. Hence, in the compression case, $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ can be viewed as estimates of ${\mathbf{y}}_i^{t-\frac{1}{2}}$ and ${\mathbf{x}}_i^{t+\frac{1}{2}}$. In addition, in a matrix format, we have from \eqref{eq:alg3_3} and \eqref{eq:alg3_6} that \begin{align} \Y^{t+1} =&~ \Y^{t+\frac{1}{2}}\widehat\mathbf{W}_y + \gamma_y\big(\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\big)(\mathbf{W}-\mathbf{I}), \label{eq:Y_hatW}\\ \mathbf{X}^{t+1} =&~ \mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x + \gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I}), \label{eq:compX_hatW} \end{align} where $\widehat\mathbf{W}_y = \gamma_y \mathbf{W} + (1-\gamma_y)\mathbf{I},\ \widehat\mathbf{W}_x = \gamma_x \mathbf{W} + (1-\gamma_x)\mathbf{I}.$ When $\mathbf{W}$ satisfies the conditions (i)--(iii) in Assumption~\ref{assu:mix_matrix}, it can be easily shown that $\widehat\mathbf{W}_y$ and $\widehat\mathbf{W}_x$ also satisfy all three conditions. Indeed, we have $$\widehat\rho_x \triangleq \|\widehat\mathbf{W}_x - \mathbf{J}\|_2 < 1,\quad \widehat\rho_y \triangleq \|\widehat\mathbf{W}_y - \mathbf{J}\|_2 < 1.$$ Thus we can view $\Y^{t+1}$ and $\mathbf{X}^{t+1}$ as the results of one round of neighbor communication on $\Y^{t+\frac{1}{2}}$ and $\mathbf{X}^{t+\frac{1}{2}}$ with mixing matrices $\widehat{\mathbf{W}}_y$ and $\widehat{\mathbf{W}}_x$, plus the estimation errors $\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}$ and $\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}$ passed through one round of neighbor communication. \end{remark} \section{Introduction} In this paper, we consider solving nonconvex stochastic composite problems in a decentralized setting: \vspace{-0.1cm} \begin{equation}\label{eq:problem_original} \begin{aligned} & \min_{{\mathbf{x}}\in\mathbb{R}^d} \phi({\mathbf{x}}) = f({\mathbf{x}}) + r({\mathbf{x}}),\\[-0.1cm] & \text{with } f({\mathbf{x}})=\frac{1}{n}\sum_{i=1}^n f_i({\mathbf{x}}), f_i({\mathbf{x}})\!=\!\mathbb{E}_{\xi_i \sim \mathcal{D}_i}[F_i({\mathbf{x}},\xi_i)]. \end{aligned} \vspace{-0.1cm} \end{equation} Here, $\{\mathcal{D}_i\}_{i=1}^n$ are possibly \emph{non-i.i.d.\ data} distributions on $n$ machines/workers that can be viewed as nodes of a connected graph $\mathcal{G}$, and each $F_i(\cdot, \xi_i)$ can only be accessed by the $i$-th worker. We are interested in problems that satisfy the following structural assumption. \begin{assumption}[Problem structure] \label{assu:prob} We assume that \vspace{-1.5mm} \begin{itemize} \item[(i)] $r$ is closed, convex, and possibly nondifferentiable. 
\item[(ii)] Each $f_i$ is $L$-smooth in $\dom(r)$, i.e., $\|\nabla f_i({\mathbf{x}}) - \nabla f_i({\mathbf{y}})\| \le L \|{\mathbf{x}}- {\mathbf{y}}\|$, for any ${\mathbf{x}}, {\mathbf{y}}\in\dom(r)$. \item[(iii)] $\phi$ is lower bounded, i.e., $\phi^* \triangleq \min_{\mathbf{x}} \phi({\mathbf{x}}) > -\infty$. \end{itemize} \vspace{-2mm} \end{assumption} Let $\mathcal{N}=\{1, 2, \ldots, n\}$ be the set of nodes of $\mathcal{G}$ and $\mathcal{E}$ the set of edges. For each $i\in\mathcal{N}$, denote $\mathcal{N}_i$ as the set consisting of worker $i$ and its neighbors, i.e., $\mathcal{N}_i = \{j: (i,j) \in \mathcal{E}\}\cup \{i\}$. Every worker can only communicate with its neighbors. To solve \eqref{eq:problem_original} collaboratively, each worker $i$ maintains a copy, denoted as ${\mathbf{x}}_i$, of the variable ${\mathbf{x}}$. With these notations, \eqref{eq:problem_original} can be equivalently reformulated as \vspace{-0.1cm} {\begin{align}\label{eq:decentralized_problem} \begin{split} \min_{\mathbf{X} \in \mathbb{R}^{d\times n}} & \frac{1}{n}\sum_{i=1}^n \phi_i({\mathbf{x}}_i), \text{with }\phi_i({\mathbf{x}}_i) \triangleq f_i({\mathbf{x}}_i) + r({\mathbf{x}}_i), \\ \mbox{s.t. } \quad & {\mathbf{x}}_i={\mathbf{x}}_j, \forall\, j\in \mathcal{N}_i, \forall\, i = 1,\ldots, n. \end{split} \end{align}} \vspace{-0.5cm} Problems with a \emph{nonsmooth} regularizer, i.e., in the form of \eqref{eq:problem_original}, appear in many applications such as $\ell_1$-regularized signal recovery \cite{eldar2014phase,duchi2019solving}, online nonnegative matrix factorization \cite{guan2012online}, and training sparse neural networks \cite{scardapane2017group, yang2020proxsgd}. When the data involved in these applications are distributed onto (or collected by workers on) a decentralized network, decentralized algorithms become necessary. Although decentralized optimization has attracted a lot of research interest in recent years, most existing works focus on strongly convex problems \cite{scaman2017optimal, koloskova2019decentralized} or convex problems \cite{6426375,taheri2020quantized} or smooth nonconvex problems \cite{bianchi2012convergence, di2016next, wai2017decentralized, lian2017can,zeng2018nonconvex}. Few works have studied \emph{nonsmooth nonconvex} decentralized \emph{stochastic} optimization like \eqref{eq:decentralized_problem} that we consider. \cite{chen2021distributed, xin2021stochastic, mancino2022proximal} are among the exceptions. However, they either require taking many data samples for each update or assume a so-called mean-squared smoothness condition, which is stronger than the smoothness condition in Assumption~\ref{assu:prob}(ii), in order to perform a momentum-based variance-reduction step. Though these methods can have convergence (rate) guarantees, they often yield poor generalization performance on training deep neural networks, as demonstrated in \cite{lecun2012efficient, keskar2016large} for large-batch training methods and in our numerical experiments for momentum variance-reduction methods. On the other hand, many distributed optimization methods \cite{shamir2014distributed,lian2017can,wang2018cooperative} often assume that the data are i.i.d.\ across the workers. However, this assumption does not hold in many real-world scenarios, for instance, when data privacy requires local data to stay on premise. Data heterogeneity can result in significant performance degradation of these methods. Though some papers do not assume i.i.d. 
data, they require certain data similarity, such as bounded stochastic gradients \cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized} and bounded gradient dissimilarity \cite{ tang2018communication,assran2019stochastic, tang2019deepsqueeze, vogels2020practical}. To address the critical practical issues mentioned above, we propose a decentralized proximal stochastic gradient tracking method that needs only a single data sample (or an $\mathcal{O}(1)$ mini-batch) per worker for each update. With no assumption on data similarity, it can still achieve the optimal convergence rate for solving problems satisfying the conditions in Assumption~\ref{assu:prob} and yield good generalization performance. In addition, to reduce communication cost, we give a compressed version of the proposed algorithm, by performing compression on the communicated information. The compressed algorithm can inherit the benefits of its non-compressed counterpart. \subsection{Our Contributions} Our contributions are three-fold. First, we propose two decentralized algorithms, one without compression (named DProxSGT) and the other with compression (named CDProxSGT), for solving \emph{decentralized nonconvex nonsmooth stochastic} problems. Different from existing methods, e.g., \cite{xin2021stochastic, wang2021distributed, mancino2022proximal}, which need a very large batchsize and/or perform momentum-based variance reduction to handle the challenge from the nonsmooth term, DProxSGT needs only $\mathcal{O}(1)$ data samples for each update, without performing variance reduction. The use of a small batch and a standard proximal gradient update enables our method to achieve significantly better generalization performance than the existing methods, as we demonstrate on training neural networks. To the best of our knowledge, CDProxSGT is the first decentralized algorithm that applies a compression scheme for solving nonconvex nonsmooth stochastic problems, and it inherits the advantages of the non-compressed method DProxSGT. Even applied to the special class of smooth nonconvex problems, CDProxSGT can perform significantly better than state-of-the-art methods, in terms of generalization and handling data heterogeneity. Second, we establish an optimal sample complexity result of DProxSGT, which matches the lower bound result in \cite{arjevani2022lower} in terms of the dependence on a target tolerance $\epsilon$, to produce an $\epsilon$-stationary solution. Due to the coexistence of nonconvexity, nonsmoothness, large stochastic variance (due to the small batch and the absence of variance reduction, both adopted for better generalization), and decentralization, the analysis is highly non-trivial. We employ the tool of the Moreau envelope and construct a decreasing Lyapunov function by carefully controlling the errors introduced by stochasticity and decentralization. Third, we establish the iteration complexity result of the proposed compressed method CDProxSGT, which is of the same order as that for DProxSGT and thus also optimal in terms of the dependence on a target tolerance. The analysis builds on that of DProxSGT but is more challenging due to the additional compression error and the use of gradient tracking. Nevertheless, we obtain our results by making the same (or even weaker) assumptions as those assumed by state-of-the-art methods \cite{koloskova2019decentralized-b, zhao2022beer}. \subsection{Notation}\label{sec:notation} For any vector ${\mathbf{x}}\in\mathbb{R}^{d}$, we use $\|{\mathbf{x}}\|$ for the $\ell_2$ norm. 
For any matrix $\mathbf{A}$, $\|\mathbf{A}\|$ denotes the Frobenius norm and $\|\mathbf{A}\|_2$ the spectral norm. $\mathbf{X} = [{\mathbf{x}}_1,{\mathbf{x}}_2,\ldots,{\mathbf{x}}_n]\in\mathbb{R}^{d\times n}$ concatenates all local variables. The superscript $^t$ will be used for iteration or communication. $\nabla F_i({\mathbf{x}}_i^t,\xi_i^t)$ denotes a local stochastic gradient of $F_i$ at ${\mathbf{x}}_i^t$ with a random sample $\xi_i^t$. The column concatenation of $\{\nabla F_i({\mathbf{x}}_i^t,\xi_i^t)\}$ is denoted as \vspace{-0.1cm} \begin{equation*} \nabla \mathbf{F}^t = \nabla \mathbf{F}(\mathbf{X}^t,\Xi^t) = [ \nabla F_1({\mathbf{x}}_1^t,\xi_1^t),\ldots, \nabla F_n({\mathbf{x}}_n^t,\xi_n^t)],\vspace{-0.1cm} \end{equation*} where $\Xi^t = [\xi_1^t,\xi_2^t,\ldots,\xi_n^t]$. Similarly, we denote \vspace{-0.1cm} \begin{equation*} \nabla \mathbf{f}^t = [ \nabla f_1({\mathbf{x}}_1^t ),\ldots, \nabla f_n({\mathbf{x}}_n^t )].\vspace{-0.1cm} \end{equation*} For any $\mathbf{X} \in \mathbb{R}^{d\times n}$, we define \vspace{-0.1cm} \begin{equation*} \bar{{\mathbf{x}}} = \textstyle\frac{1}{n}\mathbf{X}\mathbf{1}, \quad \overline{\mathbf{X}} = \mathbf{X}\mathbf{J} = \bar{{\mathbf{x}}}\mathbf{1}^\top,\quad \mathbf{X}_\perp = \mathbf{X}(\mathbf{I} - \mathbf{J}), \vspace{-0.1cm} \end{equation*} where $\mathbf{1}$ is the all-one vector, and $\mathbf{J} = \frac{\mathbf{1}\1^\top}{n}$ is the averaging matrix. Similarly, we define the mean vectors \vspace{-0.1cm} \begin{equation*} \overline{\nabla} \mathbf{F}^t = \textstyle\frac{1}{n} \nabla\mathbf{F}^t \mathbf{1},\ \overline{\nabla} \mathbf{f}^t = \textstyle\frac{1}{n} \nabla\mathbf{f}^t \mathbf{1}.\vspace{-0.1cm} \end{equation*} We will use $\mathbb{E}_t$ for the expectation with respect to the random samples $\Xi^t$ at the $t$-th iteration and $\mathbb{E}$ for the full expectation. $\mathbb{E}_Q$ denotes the expectation with respect to a stochastic compressor $Q$. \section{Related Works} The literature on decentralized optimization has grown vast, and it is impossible to exhaust it here. Below we review existing works on decentralized algorithms for solving nonconvex problems, with or without using a compression technique. To ease understanding of how our methods differ from existing ones, we compare them to a few relevant methods in Table \ref{tab:method_compare}. \begin{table*}[t] \caption{Comparison between our methods and some relevant methods: ProxGT-SA and ProxGT-SR-O in \cite{xin2021stochastic}, DEEPSTORM \cite{mancino2022proximal}, ChocoSGD \cite{koloskova2019decentralized-b}, and BEER \cite{zhao2022beer}. We use ``CMP'' to represent whether compression is performed by a method. GRADIENTS represents additional assumptions on the stochastic gradients in addition to those made in Assumption \ref{assu:stoc_grad}. SMOOTHNESS represents the smoothness condition, where ``mean-squared'' means $\mathbb{E}_{\xi_i}[\|\nabla F_i({\mathbf{x}}; \xi_i) - \nabla F_i({\mathbf{y}}; \xi_i)\|^2]\le L^2\|{\mathbf{x}}-{\mathbf{y}}\|^2$, which is stronger than the $L$-smoothness of $f_i$. BS is the required batchsize to get an $\epsilon$-stationary solution. VR and MMT represent whether the variance reduction or momentum are used. Large batchsize and/or momentum variance reduction can degrade the generalization performance, as we demonstrate in numerical experiments. 
} \label{tab:method_compare} \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccccc} \toprule Methods & CMP & $r\not\equiv 0$ & GRADIENTS & SMOOTHNESS & (BS, VR, MMT) \\ \midrule ProxGT-SA & No & Yes & No & $f_i$ is smooth & \big($\mathcal{O}(\frac{1}{\epsilon^2})$, No, No\big) \\[0.1cm] ProxGT-SR-O & No & Yes & No & mean-squared & \big($\mathcal{O}(\frac{1}{\epsilon})$, Yes, No\big) \\[0.1cm] DEEPSTORM & No & Yes & No & mean-squared & ($\mathcal{O}(1)$, Yes, Yes) \\ \textbf{DProxSGT (this paper)} & No & Yes & No & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\ \midrule ChocoSGD & Yes & No & $\mathbb{E}_{\xi}[\|\nabla F_i({\mathbf{x}},\xi_i)\|^2]\leq G^2$ & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\ BEER & Yes & No & No & $f$ is smooth & \big($\mathcal{O}(\frac{1}{\epsilon^2})$, No, No\big) \\[0.1cm] \textbf{CDProxSGT (this paper)} & Yes & Yes & No & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} \subsection{Non-compressed Decentralized Methods} For nonconvex decentralized problems with a nonsmooth regularizer, a lot of deterministic decentralized methods have been studied, e.g., \cite{di2016next, wai2017decentralized, zeng2018nonconvex, chen2021distributed, scutari2019distributed}. When only stochastic gradients are available, a majority of existing works focus on smooth cases without a regularizer or a hard constraint, such as \cite{lian2017can, assran2019stochastic, tang2018d}, gradient tracking based methods \cite{lu2019gnsd,zhang2019decentralized, koloskova2021improved}, and momentum-based variance reduction methods \cite{xin2021hybrid, zhang2021gt}. Several works such as \cite{bianchi2012convergence, wang2021distributed, xin2021stochastic, mancino2022proximal} have studied stochastic decentralized methods for problems with a nonsmooth term $r$. However, they either consider some special $r$ or require a large batch size. \cite{bianchi2012convergence} considers the case where $r$ is an indicator function of a compact convex set. Also, it requires bounded stochastic gradients. \cite{wang2021distributed} focuses on problems with a polyhedral $r$, and it requires a large batch size of $\mathcal{O}(\frac{1}{\epsilon})$ to produce an (expected) $\epsilon$-stationary point. \cite{xin2021stochastic, mancino2022proximal} are the most closely related to our methods. To produce an (expected) $\epsilon$-stationary point, the methods in \cite{xin2021stochastic} require a large batch size, either $\mathcal{O}(\frac{1}{\epsilon^2})$ or $\mathcal{O}(\frac{1}{\epsilon})$ if variance reduction is applied. The method in \cite{mancino2022proximal} requires only $\mathcal{O}(1)$ samples for each update by taking a momentum-type variance reduction scheme. However, in order to reduce variance, it needs a stronger mean-squared smoothness assumption. In addition, the momentum variance reduction step can often hurt the generalization performance on training complex neural networks, as we will demonstrate in our numerical experiments. \subsection{Compressed Distributed Methods} Communication efficiency is a crucial factor when designing a distributed optimization strategy. The current machine learning paradigm oftentimes resorts to models with a large number of parameters, which implies a high communication cost when the models or gradients are transferred from workers to the parameter server or among workers. This may incur significant latency in training. 
Hence, communication-efficient algorithms by model or gradient compression have been actively sought. Two major groups of compression operators are quantization and sparsification. The quantization approaches include 1-bit SGD \cite{seide20141}, SignSGD \cite{bernstein2018signsgd}, QSGD \cite{alistarh2017qsgd}, TernGrad \cite{wen2017terngrad}. The sparsification approaches include Random-$k$ \cite{stich2018sparsified}, Top-$k$ \cite{aji2017sparse}, Threshold-$v$ \cite{dutta2019discrepancy} and ScaleCom \cite{chen2020scalecom}. Direct compression may slow down the convergence, especially when the compression ratio is high. Error compensation or error-feedback can mitigate the effect by saving the compression error in one communication step and compensating it in the next communication step before another compression \cite{seide20141}. These compression operators were first designed to compress the gradients in the centralized setting \cite{tang2019DoubleSqueeze,karimireddy2019error}. Compression can also be applied in the decentralized setting for smooth problems, i.e., \eqref{eq:decentralized_problem} with $r=0$. \cite{tang2019deepsqueeze} applies the compression with error compensation to the communication of model parameters in the decentralized setting. Choco-Gossip \cite{koloskova2019decentralized} is another communication scheme that mitigates the slowdown effect of compression. It does not compress the model parameters themselves but rather the residual between the model parameters and their estimates. Choco-SGD uses Choco-Gossip to solve \eqref{eq:decentralized_problem}. BEER \cite{zhao2022beer} includes gradient tracking and compresses both tracked stochastic gradients and model parameters in each iteration by Choco-Gossip. BEER needs a large batchsize of $\mathcal{O}(\frac{1}{\epsilon^2})$ in order to produce an $\epsilon$-stationary solution. DoCoM-SGT \cite{DBLP:journals/corr/abs-2202-00255} performs similar updates to BEER but adds a momentum term for the update of the tracked gradients, and it needs only an $\mathcal{O}(1)$ batchsize. Our proposed CDProxSGT is for solving decentralized problems in the form of \eqref{eq:decentralized_problem} with a nonsmooth $r({\mathbf{x}})$. To the best of our knowledge, CDProxSGT is the first compressed decentralized method for nonsmooth nonconvex problems without the use of a large batchsize, and it can achieve an optimal sample complexity without the assumption of data similarity or gradient boundedness. \section{Numerical Experiments}\label{sec:numerical_experiments} In this section, we test the proposed algorithms on training two neural network models, in order to demonstrate their better generalization than momentum variance-reduction methods and large-batch training methods, and to demonstrate their success in handling heterogeneous data even when only compressed model parameter and gradient information is communicated among workers. One neural network that we test is LeNet5 \cite{lecun1989backpropagation} on the FashionMNIST dataset \cite{xiao2017fashion}, and the other is FixupResNet20 \cite{zhang2019fixup} on Cifar10 \cite{krizhevsky2009learning}. Our experiments are representative of the practical performance of our methods. Among several closely-related works, \cite{xin2021stochastic} includes no experiments, and \cite{mancino2022proximal,zhao2022beer} test only on tabular data and MNIST. \cite{koloskova2019decentralized-b} tests its method on Cifar10 but needs similar data distributions on all workers for good performance. 
FashionMNIST has a similar scale to MNIST but poses a more challenging classification task \cite{xiao2017fashion}. Cifar10 is more complex, and FixupResNet20 has more layers than LeNet5. All the compared algorithms are implemented in Python with PyTorch and MPI4PY (for distributed computing). They run on a Dell workstation with two Quadro RTX 5000 GPUs. We run 5 workers on the 2 GPUs, which communicate over a ring-structured network (so each worker can only communicate with two neighbors). Uniform weights are used, i.e., $W_{ji} = \frac{1}{3}$ for each pair of connected workers $i$ and $j$. Both FashionMNIST and Cifar10 have 10 classes. We distribute each dataset onto the 5 workers based on the class labels, namely, each worker holds 2 classes of data points, and thus the data are heterogeneous across the workers. For all methods, we report their objective values on training data, prediction accuracy on testing data, and consensus errors at each epoch. To save time, the objective values are computed as the average of the losses that are evaluated during the training process (i.e., on the sampled data instead of the whole training data) plus the regularizer per epoch. For the testing accuracy, we first compute the accuracy on the whole testing data for each worker by using its own model parameter and then take the average. The consensus error is simply $\|\mathbf{X}_\perp\|^2$. \subsection{Sparse Neural Network Training} \label{subsect:RegL1} In this subsection, we test the non-compressed method DProxSGT and compare it with AllReduce (which is a centralized method used as a baseline), DEEPSTORM\footnote{For DEEPSTORM, we implement DEEPSTORM v2 in \cite{mancino2022proximal}.} and ProxGT-SA \cite{xin2021stochastic} on solving \eqref{eq:decentralized_problem}, where $f$ is the loss on the whole training data and $r({\mathbf{x}}) = \mu\|{\mathbf{x}}\|_1$ serves as a sparse regularizer that encourages a sparse model. For training LeNet5 on FashionMNIST, we set $\mu= 10^{-4}$ and run each method to 100 epochs. The learning rate $\eta$ and batchsize are set to $0.01$ and 8 for AllReduce and DProxSGT. DEEPSTORM uses the same $\eta$ and batchsize but with a larger initial batchsize 200, and its momentum parameter is tuned to $\beta=0.8$ in order to yield the best performance. ProxGT-SA is a large-batch training method. We set its batchsize to 256 and accordingly apply a larger step size $\eta=0.3$, which is the best among $\{0.1, 0.2, 0.3, 0.4\}$. For training FixupResnet20 on Cifar10, we set $\mu= 5 \times 10^{-5}$ and run each method to 500 epochs. The learning rate and batchsize are set to $\eta=0.02$ and 64 for AllReduce, DProxSGT, and DEEPSTORM. The initial batchsize is set to 1600 for DEEPSTORM and the momentum parameter is set to $\beta=0.8$. ProxGT-SA uses a larger batchsize 512 and a larger stepsize $\eta=0.1$, which gives the best performance among $\{0.05, 0.1, 0.2, 0.3\}$. \begin{figure}[ht] \begin{center} \includegraphics[width=.9\columnwidth]{./figures/noncompressed} \vspace{-0.2cm} \caption{Results of training sparse neural networks by non-compressed methods with $r({\mathbf{x}}) = \mu \|{\mathbf{x}}\|_1$ for the same number of epochs. Left: LeNet5 on FashionMNIST with $\mu=10^{-4}$. Right: FixupResnet20 on Cifar10 with $\mu=5\times 10^{-5}$.} \label{fig:RegL1} \end{center} \end{figure} The results for all methods are plotted in Figure \ref{fig:RegL1}. 
For LeNet5, DProxSGT produces almost the same curves as the centralized training method AllReduce, while on FixupResnet20, DProxSGT even outperforms AllReduce in terms of testing accuracy. This could be because AllReduce aggregates stochastic gradients from all the workers for each update and thus equivalently, it actually uses a larger batchsize. DEEPSTORM performs as well as our method DProxSGT on training LeNet5. However, it gives lower testing accuracy than DProxSGT and also oscillates much more severely on training the more complex neural network FixupResnet20. This appears to be caused by the momentum variance reduction scheme used in DEEPSTORM. In addition, we see that the large-batch training method ProxGT-SA performs much worse than DProxSGT within the same number of epochs (i.e., data passes), especially on training FixupResnet20. \subsection{Neural Network Training by Compressed Methods} \label{subsect:compress} In this subsection, we compare CDProxSGT with two state-of-the-art compressed training methods: Choco-SGD \cite{koloskova2019decentralized,koloskova2019decentralized-b} and BEER \cite{zhao2022beer}. As Choco-SGD and BEER are studied only for problems without a regularizer, we set $r({\mathbf{x}})=0$ in \eqref{eq:decentralized_problem} for the tests. Again, we compare their performance on training LeNet5 and FixupResnet20. The two non-compressed methods AllReduce and DProxSGT are included as baselines. The same compressors are used for CDProxSGT, Choco-SGD, and BEER, when compression is applied. \begin{figure}[htbp] \begin{center} \includegraphics[width=.9\columnwidth]{./figures/Compressed} \vspace{-0.2cm} \caption{Results of training neural network models by compressed methods for the same number of epochs. Left: LeNet5 on FashionMNIST. Right: FixupResnet20 on Cifar10.} \label{fig:Compress} \end{center} \end{figure} We run each method to 100 epochs for training LeNet5 on FashionMNIST. The compressors $Q_y$ and $Q_x$ are set to top-$k(0.3)$ \cite{aji2017sparse}, i.e., keeping the largest $30\%$ of the entries of an input vector in absolute value and zeroing out all the others. We set the batchsize to 8 and tune the learning rate $\eta$ to $0.01$ for AllReduce, DProxSGT, CDProxSGT and Choco-SGD, and for CDProxSGT, we set $\gamma_x=\gamma_y=0.5$. BEER is a large-batch training method. It uses a larger batchsize 256 and accordingly a larger learning rate $\eta=0.3$, which appears to be the best among $\{0.1, 0.2, 0.3, 0.4\}$. For training FixupResnet20 on the Cifar10 dataset, we run each method to 500 epochs. We take top-$k(0.4)$ \cite{aji2017sparse} as the compressors $Q_y$ and $Q_x$ and set $\gamma_x=\gamma_y=0.8$. For AllReduce, DProxSGT, CDProxSGT and Choco-SGD, we set their batchsize to 64 and tune the learning rate $\eta$ to $0.02$. For BEER, we use a larger batchsize 512 and a larger learning rate $\eta=0.1$, which is the best among $\{0.05, 0.1, 0.2, 0.3\}$. The results are shown in Figure \ref{fig:Compress}. For both models, CDProxSGT yields almost the same curves of objective values and testing accuracy as its non-compressed counterpart DProxSGT and the centralized non-compressed method AllReduce. This indicates about a 70\% saving in communication for training LeNet5 and a 60\% saving for FixupResnet20 without sacrificing the testing accuracy. 
In comparison, BEER performs significantly worse than the proposed method CDProxSGT within the same number of epochs in terms of all three measures, especially on training the more complex neural network FixupResnet20, which can be attributed to the larger batch used by BEER. Choco-SGD can produce comparable objective values. However, its testing accuracy is much lower than that produced by our method CDProxSGT. This is likely because Choco-SGD cannot handle data heterogeneity, while CDProxSGT applies gradient tracking to successfully address it.

\section{Decentralized Algorithms}\label{sec:alg}
In this section, we give our decentralized algorithms for solving \eqref{eq:decentralized_problem} or equivalently \eqref{eq:problem_original}. To perform neighbor communications, we introduce a mixing (or gossip) matrix $\mathbf{W}$ that satisfies the following standard assumption.

\begin{assumption}[Mixing matrix] \label{assu:mix_matrix} We choose a mixing matrix $\mathbf{W}$ such that \vspace{-1.5mm} \begin{enumerate} \item [(i)] $\mathbf{W}$ is doubly stochastic: $\mathbf{W}\mathbf{1} = \mathbf{1}$ and $\mathbf{1}^\top \mathbf{W} = \mathbf{1}^\top$; \item [(ii)] $\mathbf{W}_{ij} = 0$ if $i$ and $j$ are not neighbors to each other; \item [(iii)] $\mathrm{Null}(\mathbf{W}-\mathbf{I}) = \mathrm{span}\{\mathbf{1}\}$ and $\rho \triangleq \|\mathbf{W} - \mathbf{J}\|_2 < 1$. \end{enumerate} \vspace{-2mm} \end{assumption}

The condition in (ii) above is enforced so that \emph{direct} communications can be made only if two nodes (or workers) are immediate (or 1-hop) neighbors of each other. The condition in (iii) can hold if the graph $\mathcal{G}$ is connected. The assumption $\rho < 1$ is critical to ensure contraction of the consensus error. The value of $\rho$ depends on the graph topology. \cite{koloskova2019decentralized} gives three commonly used examples: when uniform weights are used between nodes, $\mathbf{W} = \mathbf{J}$ and $\rho = 0$ for a fully-connected graph (in which case, our algorithms will reduce to centralized methods), $1-\rho = \Theta(\frac{1}{n})$ for a 2d torus grid graph where every node has 4 neighbors, and $1-\rho = \Theta(\frac{1}{n^2})$ for a ring-structured graph. More examples can be found in \cite{nedic2018network}.

\subsection{Non-compressed Method}
With the mixing matrix $\mathbf{W}$, we propose a decentralized proximal stochastic gradient method with gradient tracking (DProxSGT) for \eqref{eq:decentralized_problem}. The pseudocode is shown in Algorithm~\ref{alg:DProxSGT}. In every iteration $t$, each node $i$ first computes a local stochastic gradient $\nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t})$ by taking a sample $\xi_i^{t}$ from its local data distribution $\mathcal{D}_i$, then performs gradient tracking in \eqref{eq:y_half_update} and neighbor communications of the tracked gradient in \eqref{eq:y_update}, and finally takes a proximal gradient step in \eqref{eq:x_half_update} and mixes the model parameter with its neighbors in \eqref{eq:x_1_update}.

\begin{algorithm}[tb] \caption{DProxSGT}\label{alg:DProxSGT} \begin{algorithmic} \small{ \STATE Initialize ${\mathbf{x}}_i^{0}$ and set ${\mathbf{y}}_i^{-1}=\mathbf{0}$, $\nabla F_i({\mathbf{x}}_i^{-1},\xi_i^{-1}) =\mathbf{0}$, $\forall i\in\mathcal{N}$.
\FOR{$t=0, 1, 2, \ldots, T-1$} \STATE \hspace{-0.1cm}\textbf{all} nodes $i=1, 2, \ldots, n$ do the updates \textbf{in parallel:} \STATE obtain one random sample $\xi_i^t$, compute a stochastic gradient $\nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t})$, and perform \vspace{-0.2cm} \begin{gather} {\mathbf{y}}_i^{t-\frac{1}{2}} = {\mathbf{y}}_i^{t-1} + \nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t}) - \nabla F_i({\mathbf{x}}_i^{t-1},\xi_i^{t-1}),\label{eq:y_half_update} \\ {\mathbf{y}}_i^t = \textstyle \sum_{j=1}^n \mathbf{W}_{ji} {\mathbf{y}}_j^{t-\frac{1}{2}},\label{eq:y_update}\\ {\mathbf{x}}_i^{t+\frac{1}{2}} =\prox_{\eta r} \left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right),\label{eq:x_half_update} \\ {\mathbf{x}}_i^{t+1} = \textstyle \sum_{j=1}^n \mathbf{W}_{ji}{\mathbf{x}}_j^{t+\frac{1}{2}}. \label{eq:x_1_update} \vspace{-0.2cm} \end{gather} \ENDFOR} \end{algorithmic} \end{algorithm} \vspace{-0.1mm}

Note that for simplicity, we take only one random sample $\xi_i^{t}$ in Algorithm \ref{alg:DProxSGT} but in general, a mini-batch of random samples can be taken, and all theoretical results that we will establish in the next section still hold. We emphasize that we need only $\mathcal{O}(1)$ samples for each update. This is different from ProxGT-SA in \cite{xin2021stochastic}, which shares a similar update formula to our algorithm but needs a very large batch of samples, as many as $\mathcal{O}(\frac{1}{\epsilon^2})$, where $\epsilon$ is a target tolerance. Small-batch training usually generalizes better than large-batch training \cite{lecun2012efficient, keskar2016large} on large-scale deep learning models.

Throughout the paper, we make the following standard assumption on the stochastic gradients.

\begin{assumption}[Stochastic gradients] \label{assu:stoc_grad} We assume that \vspace{-1.5mm} \begin{itemize} \item[(i)] The random samples $\{\xi_i^t\}_{i\in \mathcal{N}, t\ge 0}$ are independent. \item[(ii)] There exists a finite number $\sigma\ge0$ such that for any $i\in \mathcal{N}$ and ${\mathbf{x}}_i\in\dom(r)$, \begin{gather*} \mathbb{E}_{\xi_i}[\nabla F_i({\mathbf{x}}_i,\xi_i)] = \nabla f_i({\mathbf{x}}_i),\\ \mathbb{E}_{\xi_i}[\|\nabla F_i({\mathbf{x}}_i,\xi_i)-\nabla f_i({\mathbf{x}}_i)\|^2] \leq \sigma^2. \end{gather*} \end{itemize} \vspace{-2mm} \end{assumption}

The gradient tracking step in \eqref{eq:y_half_update} is critical to handle heterogeneous data \cite{di2016next,nedic2017achieving,lu2019gnsd,pu2020distributed,sun2020improving,xin2021stochastic,song2021optimal,mancino2022proximal,zhao2022beer,DBLP:journals/corr/abs-2202-00255,song2022compressed}. In a deterministic scenario where $\nabla f_i(\cdot)$ is used instead of $\nabla F_i(\cdot, \xi)$, for each $i$, the tracked gradient ${\mathbf{y}}_i^t$ can converge to the gradient of the global function $\frac{1}{n}\sum_{i=1}^n f_i(\cdot)$ at $\bar{\mathbf{x}}^t$, and thus all local updates move towards a direction that minimizes the \emph{global} objective. When stochastic gradients are used, gradient tracking plays a similar role and makes ${\mathbf{y}}_i^t$ approach the stochastic gradient of the global function. With this nice property of gradient tracking, we can guarantee convergence without the strong assumptions made in existing works, such as bounded gradients \cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized, singh2021squarm} and bounded data similarity across nodes \cite{lian2017can, tang2018communication, tang2019deepsqueeze, vogels2020practical, wang2021error}.
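To make the update concrete, one iteration of DProxSGT for all $n$ nodes at once can be sketched as follows (a minimal NumPy illustration of \eqref{eq:y_half_update}--\eqref{eq:x_1_update} with $r({\mathbf{x}})=\mu\|{\mathbf{x}}\|_1$, whose proximal mapping is soft-thresholding; the function names are ours, nodes are stacked as rows, and the neighbor exchange is abstracted away):
\begin{verbatim}
import numpy as np

def prox_l1(V, tau):
    # Proximal mapping of tau*||.||_1, applied entrywise (soft-thresholding).
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

def dproxsgt_step(X, Y, G_new, G_old, W, eta, mu):
    # X, Y: (n, d) local models and tracked gradients (one row per node).
    # G_new, G_old: (n, d) stochastic gradients at current/previous models.
    # Rows are nodes, so column mixing "X W" in the paper is "W.T @ X" here.
    Y_half = Y + G_new - G_old                    # tracking, eq. (y_half_update)
    Y_new = W.T @ Y_half                          # mix gradients, eq. (y_update)
    X_half = prox_l1(X - eta * Y_new, eta * mu)   # prox step, eq. (x_half_update)
    X_new = W.T @ X_half                          # mix models, eq. (x_1_update)
    return X_new, Y_new
\end{verbatim}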
\subsection{Compressed Method}
In DProxSGT, each worker needs to communicate both the model parameter and the tracked stochastic gradient with its neighbors at every iteration. Communication has become a bottleneck for distributed training on GPUs. In order to save the communication cost, we further propose a compressed version of DProxSGT, named CDProxSGT. The pseudocode is shown in Algorithm \ref{alg:CDProxSGT}, where $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ are two compression operators.

\begin{algorithm}[tb] \caption{CDProxSGT}\label{alg:CDProxSGT} \begin{algorithmic} \small{\STATE Initialize ${\mathbf{x}}_i^{0}$; set ${\mathbf{y}}_i^{-1}=\underline{\mathbf{y}}_i^{-1}=\nabla F_i({\mathbf{x}}_i^{-1}, \xi_i^{-1})=\underline{\mathbf{x}}_i^{0} =\mathbf{0}$, $\forall i\in\mathcal{N}$. \FOR{$t=0, 1, 2, \ldots, T-1$} \STATE \hspace{-0.1cm}\textbf{all} nodes $i=1, 2, \ldots, n$ do the updates \textbf{in parallel:} \vspace{-0.2cm} \begin{gather} {\mathbf{y}}_i^{t-\frac{1}{2}} = {\mathbf{y}}_i^{t-1} + \nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t}) - \nabla F_i({\mathbf{x}}_i^{t-1},\xi_i^{t-1}),\label{eq:alg3_1}\\ \underline{\mathbf{y}}_i^{t} = \underline{\mathbf{y}}_i^{t-1} + Q_{\mathbf{y}}\big[{\mathbf{y}}_i^{t-\frac{1}{2}} - \underline{\mathbf{y}}_i^{t-1}\big], \label{eq:alg3_2}\\ {\mathbf{y}}_i^{t} = {\mathbf{y}}_i^{t-\frac{1}{2}} +\gamma_y \left(\textstyle \sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t}-\underline{\mathbf{y}}_i^{t}\right), \label{eq:alg3_3}\\ {\mathbf{x}}_i^{t+\frac{1}{2}} =\prox_{\eta r} \left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right), \label{eq:alg3_4}\\ \underline{\mathbf{x}}_i^{t+1} = \underline{\mathbf{x}}_i^{t} + Q_{\mathbf{x}}\big[{\mathbf{x}}_i^{t+\frac{1}{2}} - \underline{\mathbf{x}}_i^{t}\big], \label{eq:alg3_5}\\ {\mathbf{x}}_i^{t+1} = {\mathbf{x}}_i^{t+\frac{1}{2}}+\gamma_x\Big(\textstyle \overset{n}{\underset{j=1}\sum} \mathbf{W}_{ji} \underline{\mathbf{x}}_j^{t+1}-\underline{\mathbf{x}}_i^{t+1}\Big).\label{eq:alg3_6} \vspace{-0.2cm} \end{gather} \ENDFOR} \end{algorithmic} \end{algorithm} \vspace{-0.1mm}

In Algorithm \ref{alg:CDProxSGT}, each node communicates the non-compressed vectors $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ with its neighbors in \eqref{eq:alg3_3} and \eqref{eq:alg3_6}. We write it in this way for ease of reading and analysis. For an efficient and \emph{equivalent} implementation, we do not communicate $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ directly but rather the compressed residues $Q_{\mathbf{y}}\big[{\mathbf{y}}_i^{t-\frac{1}{2}} - \underline{\mathbf{y}}_i^{t-1}\big]$ and $Q_{\mathbf{x}}\big[{\mathbf{x}}_i^{t+\frac{1}{2}} - \underline{\mathbf{x}}_i^{t}\big]$, as explained below. Besides ${\mathbf{y}}_i^{t-1}$, ${\mathbf{x}}_i^{t}$, $\underline{\mathbf{y}}_i^{t-1}$ and $\underline{\mathbf{x}}_i^{t}$, each node also stores ${\mathbf{z}}_i^{t-1}$ and ${\mathbf{s}}_i^{t}$, which record $\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t-1}$ and $\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{x}}_j^{t}$.
For the gradient communication, each node $i$ initializes ${\mathbf{z}}_i^{-1} = \mathbf{0}$, and then at each iteration $t$, after receiving $Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big]$ from its neighbors, it updates $\underline{\mathbf{y}}_i^{t}$ by \eqref{eq:alg3_2}, and ${\mathbf{z}}_i^{t}$ and ${\mathbf{y}}_i^t$ by \vspace{-0.2cm} \begin{align*} {\mathbf{z}}_i^{t} =&~ \textstyle {\mathbf{z}}_i^{t-1} + \sum_{j=1}^n \mathbf{W}_{ji} Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big], \\ {\mathbf{y}}_i^{t} =&~ \textstyle {\mathbf{y}}_i^{t-\frac{1}{2}} +\gamma_y \big({\mathbf{z}}_i^{t}-\underline{\mathbf{y}}_i^{t}\big).\vspace{-0.2cm} \end{align*} From the initialization and the updates of $\underline{\mathbf{y}}_i^{t}$ and ${\mathbf{z}}_i^{t}$, it always holds that ${\mathbf{z}}_i^{t}=\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t}$. The model communication can be done efficiently in the same way.

The compression operators $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ in Algorithm \ref{alg:CDProxSGT} can be different, but we assume that they both satisfy the following assumption.

\begin{assumption} \label{assu:compressor} There exists $\alpha \in [0,1)$ such that $$\mathbb{E}[\|{\mathbf{x}}-Q[{\mathbf{x}}]\|^2]\leq \alpha^2 \|{\mathbf{x}}\|^2, \forall\, {\mathbf{x}}\in\mathbb{R}^d,$$ for both $Q=Q_{\mathbf{x}}$ and $Q=Q_{\mathbf{y}}$. \end{assumption}

The assumption on compression operators is standard and also made in \cite{koloskova2019decentralized-b,koloskova2019decentralized,zhao2022beer}. It is satisfied by sparsification compressors such as Random-$k$ \cite{stich2018sparsified} and Top-$k$ \cite{aji2017sparse}. It can also be satisfied by rescaled quantizations. For example, QSGD \cite{alistarh2017qsgd} compresses ${\mathbf{x}}\in \mathbb{R}^d$ by $Q_{qsgd}({\mathbf{x}}) = \frac{\mathbf{sign}({\mathbf{x}})\|{\mathbf{x}}\|}{s} \lfloor s \frac{|{\mathbf{x}}|}{\|{\mathbf{x}}\|} + \xi \rfloor$, where $\xi$ is uniformly distributed on $[0,1]^d$ and $s$ is a parameter that controls the compression level. Then $Q({\mathbf{x}})= \frac{1}{\tau} Q_{qsgd} ({\mathbf{x}})$ with $\tau=(1+\min\{d/s^2, \sqrt{d}/s\})$ satisfies Assumption \ref{assu:compressor} with $\alpha^2=1-\frac{1}{\tau}$. More examples can be found in \cite{koloskova2019decentralized}.

Below, we make a couple of remarks to discuss the relations between Algorithm \ref{alg:DProxSGT} and Algorithm \ref{alg:CDProxSGT}.

\begin{remark} When $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ are both identity operators, i.e., $Q_{\mathbf{x}}[{\mathbf{x}}] = {\mathbf{x}}$ and $Q_{\mathbf{y}}[{\mathbf{y}}] = {\mathbf{y}}$, and $\gamma_x=\gamma_y=1$ in Algorithm \ref{alg:CDProxSGT}, CDProxSGT reduces to DProxSGT. Hence, the latter can be viewed as a special case of the former. However, we will analyze them separately. Although the large-batch training method ProxGT-SA in \cite{xin2021stochastic} shares a similar update with the proposed DProxSGT, our analysis will be completely different and new, as we need only $\mathcal{O}(1)$ samples in each iteration in order to achieve better generalization performance. The analysis of CDProxSGT will be built on that of DProxSGT by carefully controlling the variance error of stochastic gradients and the consensus error, as well as the additional compression error.
\end{remark}

\begin{remark} When $Q_{\mathbf{y}}$ and $Q_{\mathbf{x}}$ are identity operators, $\underline{\mathbf{y}}_i^{t} = {\mathbf{y}}_i^{t-\frac{1}{2}}$ and $\underline{\mathbf{x}}_i^{t+1} = {\mathbf{x}}_i^{t+\frac{1}{2}}$ for each $i\in\mathcal{N}$. Hence, in the compression case, $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ can be viewed as estimates of ${\mathbf{y}}_i^{t-\frac{1}{2}}$ and ${\mathbf{x}}_i^{t+\frac{1}{2}}$. In addition, in a matrix format, we have from \eqref{eq:alg3_3} and \eqref{eq:alg3_6} that \begin{align} \Y^{t+1} =&~ \Y^{t+\frac{1}{2}}\widehat\mathbf{W}_y + \gamma_y\big(\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\big)(\mathbf{W}-\mathbf{I}), \label{eq:Y_hatW}\\ \mathbf{X}^{t+1} =&~ \mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x + \gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I}), \label{eq:compX_hatW} \end{align} where $\widehat\mathbf{W}_y = \gamma_y \mathbf{W} + (1-\gamma_y)\mathbf{I},\ \widehat\mathbf{W}_x = \gamma_x \mathbf{W} + (1-\gamma_x)\mathbf{I}.$ When $\mathbf{W}$ satisfies the conditions (i)-(iii) in Assumption~\ref{assu:mix_matrix}, it can be easily shown that $\widehat\mathbf{W}_y$ and $\widehat\mathbf{W}_x$ also satisfy all three conditions. Indeed, we have $$\widehat\rho_x \triangleq \|\widehat\mathbf{W}_x - \mathbf{J}\|_2 < 1,\quad \widehat\rho_y \triangleq \|\widehat\mathbf{W}_y - \mathbf{J}\|_2 < 1.$$ Thus we can view $\Y^{t+1}$ and $\mathbf{X}^{t+1}$ as the results of $\Y^{t+\frac{1}{2}}$ and $\mathbf{X}^{t+\frac{1}{2}}$ after one round of neighbor communication with the mixing matrices $\widehat{\mathbf{W}}_y$ and $\widehat{\mathbf{W}}_x$, together with the addition of the estimation errors $\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}$ and $\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}$ after one round of neighbor communication. \end{remark}

\section{Convergence Analysis}
In this section, we analyze the convergence of the algorithms proposed in section~\ref{sec:alg}. The nonconvexity of the problem and the stochasticity of the algorithms both make the analysis difficult. In addition, the presence of the nonsmooth regularizer $r(\cdot)$ poses further challenges. To address these challenges, we employ the so-called Moreau envelope \cite{MR201952}, a tool that has been commonly used for analyzing methods for solving nonsmooth weakly-convex problems.

\begin{definition}[Moreau envelope] Let $\psi$ be an $L$-weakly convex function, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex. For $\lambda\in(0,\frac{1}{L})$, the Moreau envelope of $\psi$ is defined as \vspace{-0.2cm} \begin{equation*} \psi_\lambda({\mathbf{x}}) = \min_{\mathbf{y}} \textstyle \left\{\psi({\mathbf{y}}) + \frac{1}{2\lambda}\|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}, \vspace{-0.2cm} \end{equation*} and the unique minimizer is denoted as \vspace{-0.2cm} \begin{equation*} \prox_{\lambda \psi}({\mathbf{x}})= \argmin_{{\mathbf{y}}} \textstyle \left\{\psi({\mathbf{y}})+\frac{1}{2\lambda} \|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}.\vspace{-0.2cm} \end{equation*} \end{definition}

The Moreau envelope $\psi_\lambda$ has nice properties. The result below can be found in \cite{davis2019stochastic, nazari2020adaptive, xu2022distributed-SsGM}.
\begin{lemma}\label{lem:xhat_x} For any function $\psi$, if it is $L$-weakly convex, then for any $\lambda \in (0, \frac{1}{L})$, the Moreau envelope $\psi_\lambda$ is smooth with gradient given by $\nabla \psi_\lambda ({\mathbf{x}}) = \lambda^{-1} ({\mathbf{x}}-\widehat{\mathbf{x}}),$ where $\widehat{\mathbf{x}}=\prox_{\lambda\psi}({\mathbf{x}})$. Moreover, \vspace{-0.2cm} \[ \|{\mathbf{x}}-\widehat{\mathbf{x}}\|=\lambda\|\nabla \psi_\lambda({\mathbf{x}})\|, \quad \dist(\mathbf{0}, \partial \psi(\widehat {\mathbf{x}}))\leq \|\nabla \psi_\lambda({\mathbf{x}})\|.\vspace{-0.2cm} \] \end{lemma}

Lemma~\ref{lem:xhat_x} implies that if $\|\nabla \psi_\lambda({\mathbf{x}})\|$ is small, then $\widehat{\mathbf{x}}$ is a near-stationary point of $\psi$ and ${\mathbf{x}}$ is close to $\widehat{\mathbf{x}}$. Hence, $\|\nabla \psi_\lambda({\mathbf{x}})\|$ can be used as a valid measure of stationarity violation at ${\mathbf{x}}$ for $\psi$. Based on this observation, we define the $\epsilon$-stationary solution below for the decentralized problem \eqref{eq:decentralized_problem}.

\begin{definition}[Expected $\epsilon$-stationary solution]\label{def:eps-sol} Let $\epsilon > 0$. A point $\mathbf{X} = [{\mathbf{x}}_1, \ldots, {\mathbf{x}}_n]$ is called an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} if for a constant $\lambda\in (0, \frac{1}{L})$, \vspace{-0.1cm} \begin{equation*} \textstyle\frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla \phi_\lambda({\mathbf{x}}_i)\|^2 + L^2 \|\mathbf{X}_\perp\|^2\right] \leq \epsilon^2. \vspace{-0.1cm} \end{equation*} \end{definition}

In the definition above, $L^2$ before the consensus error term $\|\mathbf{X}_\perp\|^2$ is to balance the two terms. This scaling scheme has also been used in existing works such as \cite{xin2021stochastic,mancino2022proximal,DBLP:journals/corr/abs-2202-00255}. From the definition, we see that if $\mathbf{X}$ is an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem}, then each local solution ${\mathbf{x}}_i$ will be a near-stationary solution of $\phi$ and in addition, these local solutions are all close to each other, namely, they are near consensus. Below we first state the convergence results of the non-compressed method DProxSGT and then the compressed one CDProxSGT. All the proofs are given in the appendix.

\begin{theorem}[Convergence rate of DProxSGT]\label{thm:sec2} Under Assumptions \ref{assu:prob} -- \ref{assu:stoc_grad}, let $\{\mathbf{X}^t\}$ be generated from $\mathrm{DProxSGT}$ in Algorithm~\ref{alg:DProxSGT} with ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$. Let $\lambda = \min\big\{\frac{1}{4 L}, \frac{1}{96\rho L}\big\}$ and $\eta\leq \min\big\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}\big\}$. Select $\tau$ from $\{0, 1, \ldots, T-1\}$ uniformly at random. Then \vspace{-0.1cm} \begin{equation*} \begin{aligned} &~ \textstyle \frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla\phi_\lambda({\mathbf{x}}_i^\tau)\|^2 +\frac{4}{\lambda \eta} \|\mathbf{X}^\tau_\perp\|^2\right] \\ \leq &~ \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} + \frac{4616 \eta}{\lambda(1-\rho^2)^3} \sigma^2 \textstyle + \frac{768\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda T(1-\rho^2)^3}, \end{aligned} \vspace{-0.1cm} \end{equation*} where $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}})> -\infty$. \end{theorem}

By Theorem~\ref{thm:sec2}, we obtain a complexity result as follows.
\begin{corollary}[Iteration complexity] Under the assumptions of Theorem~\ref{thm:sec2}, for a given $\epsilon>0$, take $ \eta = \min\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}, \frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}\}$. Then $\mathrm{DProxSGT}$ can find an expected $\epsilon$-stationary point of \eqref{eq:decentralized_problem} when $T \geq T_\epsilon = \left\lceil \frac{16\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta \epsilon^2 } + \frac{1536\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda (1-\rho^2)^3 \epsilon^2} \right\rceil$. \end{corollary}

\begin{remark} \label{remark:DProxSGT} When $\epsilon$ is small enough, $\eta$ will take $\frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}$, and $T_\epsilon$ will be dominated by the first term. In this case, DProxSGT can find an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} in $O\Big( \frac{\sigma^2\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right) }{\lambda(1-\rho^2)^3 \epsilon^4}\Big)$ iterations, leading to the same number of stochastic gradient samples and communication rounds. Our sample complexity is optimal in terms of the dependence on $\epsilon$ under the smoothness condition in Assumption~\ref{assu:prob}, as it matches with the lower bound in \cite{arjevani2022lower}. However, the dependence on $1-\rho$ may not be optimal because of our possibly loose analysis, as the \emph{deterministic} method with single communication per update in \cite{scutari2019distributed} for nonconvex nonsmooth problems has a dependence $(1-\rho)^2$ on the graph topology. \end{remark}

\begin{theorem}[Convergence rate of CDProxSGT] \label{thm:sect3thm} Under Assumptions \ref{assu:prob} through \ref{assu:compressor}, let $\{\mathbf{X}^t\}$ be generated from $\mathrm{CDProxSGT}$ in Algorithm \ref{alg:CDProxSGT} with ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$. Let $\lambda = \min \big\{\frac{1}{4 L}, \frac{ (1-\alpha^2)^2}{9 L+41280}\big\}$, and suppose \vspace{-0.1cm} \begin{gather*} \eta \leq~ \min\left\{ \textstyle \lambda, \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}\right\}, \\ \gamma_x\leq~ \min\left\{ \textstyle \frac{1-\alpha^2}{25}, \frac{\eta}{\alpha}\right\}, \quad \gamma_y\leq ~ \textstyle \frac{(1-\alpha^2)(1-\widehat\rho^2_x)(1-\widehat\rho^2_y)}{317}. \end{gather*} \vspace{-0.1cm} Select $\tau$ from $\{0, 1, \ldots, T-1\}$ uniformly at random. Then \vspace{-0.1cm} \begin{equation*} \begin{aligned} &~ \textstyle \frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla\phi_\lambda({\mathbf{x}}_i^\tau)\|^2 +\frac{4}{\lambda \eta} \|\mathbf{X}^\tau_\perp\|^2\right] \\ \leq &~\textstyle \frac{8\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta T} +\frac{(50096n+48)\eta \sigma^2}{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} + \frac{4176 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}, \end{aligned} \vspace{-0.1cm} \end{equation*} where $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}})> -\infty$. \end{theorem}

By Theorem~\ref{thm:sect3thm}, we have the complexity result as follows.
\begin{corollary}[Iteration complexity] Under the assumptions of Theorem \ref{thm:sect3thm}, for a given $\epsilon>0$, take \begin{gather*} \eta = \textstyle \min \left\{\frac{1}{4 L}, \frac{ (1-\alpha^2)^2}{9 L+41280}, \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}\right.,\\ \textstyle \left. \frac{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)\epsilon^2}{2(50096n+48) \sigma^2}\right\}, \\ \textstyle \gamma_x = \min\left\{ \textstyle \frac{1-\alpha^2}{25}, \frac{\eta}{\alpha}\right\}, \quad \gamma_y = \frac{(1-\alpha^2)(1-\widehat\rho^2_x)(1-\widehat\rho^2_y)}{317}. \end{gather*} Then $\mathrm{CDProxSGT}$ can find an expected $\epsilon$-stationary point of \eqref{eq:decentralized_problem} when $T\geq T_\epsilon^c$ where \begin{align*} T_\epsilon^c = \textstyle \left\lceil\frac{16\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta \epsilon^2} + \frac{8352 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)\epsilon^2} \right\rceil. \end{align*} \end{corollary}

\begin{remark} When the given tolerance $\epsilon$ is small enough, $\eta$ will take $\frac{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)\epsilon^2}{2(50096n+48) \sigma^2}$ and $T_\epsilon^c$ will be dominated by the first term. In this case, similar to DProxSGT in Remark \ref{remark:DProxSGT}, CDProxSGT can find an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} in $O\Big( \frac{\sigma^2\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right) }{ \lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y) \epsilon^4}\Big)$ iterations. \end{remark}

\section{Additional Details on FixupResNet20}
FixupResNet20 \cite{zhang2019fixup} is obtained from the popular ResNet20 \cite{he2016deep} by removing the BatchNorm layers \cite{ioffe2015batch}. BatchNorm layers normalize hidden-layer activations using mean and variance statistics computed from the data fed into the model. In our experiments, the data on the nodes are heterogeneous. If the models included BatchNorm layers, then even if all nodes had the same model parameters after training, their testing performance on the whole testing data would differ across nodes, because the BatchNorm statistics are produced from the heterogeneous local data. Thus we use FixupResNet20 instead of ResNet20.

\section{Some Key Existing Lemmas}
For an $L$-smooth function $f_i$, it holds for any ${\mathbf{x}}, {\mathbf{y}}\in\dom(r)$ that \begin{align}\label{eq:assump-to-f_i} \textstyle \big|f_i({\mathbf{y}}) - f_i({\mathbf{x}}) - \langle \nabla f_i({\mathbf{x}}), {\mathbf{y}}-{\mathbf{x}}\rangle\big| \le \frac{L}{2}\|{\mathbf{y}}-{\mathbf{x}}\|^2. \end{align} From the smoothness of $f_i$ in Assumption \ref{assu:prob}, it follows that $f = \frac{1}{n}\sum_{i=1}^n f_i$ is also $L$-smooth in $\dom(r)$. When $f_i$ is $L$-smooth in $\dom(r)$, we have that $f_i(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex. Since $r(\cdot)$ is convex, $\phi_i(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, i.e., $\phi_i$ is $L$-weakly convex for each $i$. So is $\phi$. In the following, we give some lemmas about weakly convex functions. The following result is from Lemma II.1 in \cite{chen2021distributed}.
\begin{lemma}\label{lem:weak_convx} For any function $\psi$ on $\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, then for any ${\mathbf{x}}_1, {\mathbf{x}}_2, \ldots, {\mathbf{x}}_m\in\mathbb{R}^d$, it holds that \[ \psi\left(\sum_{i=1}^m a_i{\mathbf{x}}_i\right)\leq \sum_{i=1}^m a_i \psi({\mathbf{x}}_i) + \frac{L}{2} \sum_{i=1}^{m-1} \sum_{j=i+1}^m a_i a_j \|{\mathbf{x}}_i-{\mathbf{x}}_j\|^2, \] where $a_i\geq 0$ for all $i$ and $\sum_{i=1}^m a_i=1$. \end{lemma} The first result below is from Lemma II.8 in \cite{chen2021distributed}, and the nonexpansiveness of the proximal mapping of a closed convex function is well known. \begin{lemma} \label{lem:prox_diff} For any function $\psi$ on $\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, then the proximal mapping with $\lambda< \frac{1}{L}$ satisfies \[ \|\prox_{\lambda \psi}({\mathbf{x}}_1)-\prox_{\lambda \psi}({\mathbf{x}}_2)\|\leq \frac{1}{1-\lambda L} \|{\mathbf{x}}_1-{\mathbf{x}}_2\|. \] For a closed convex function $r(\cdot)$, its proximal mapping is nonexpansive, i.e., \[ \|\prox_{r}({\mathbf{x}}_1)-\prox_{r}({\mathbf{x}}_2)\|\leq \|{\mathbf{x}}_1-{\mathbf{x}}_2\|. \] \end{lemma} \begin{lemma} For $\mathrm{DProxSGT}$ in Algorithm \ref{alg:DProxSGT} and $\mathrm{CDProxSGT}$ in Algorithm \ref{alg:CDProxSGT}, we both have \begin{gather} \bar{\mathbf{y}}^t =\overline{\nabla} \mathbf{F}^t, \quad \bar{\mathbf{x}}^{t} = \bar{\mathbf{x}}^{t+\frac{1}{2}} = \frac{1}{n} \sum_{i=1}^n \prox_{\eta r}\left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right). \label{eq:x_y_mean} \end{gather} \end{lemma} \begin{proof} For DProxSGT in Algorithm \ref{alg:DProxSGT}, taking the average among the workers on \eqref{eq:y_half_update} to \eqref{eq:x_1_update} gives \begin{align} \bar{\mathbf{y}}^{t-\frac{1}{2}} = \bar{\mathbf{y}}^{t-1} + \overline{\nabla} \mathbf{F}^t - \overline{\nabla} \mathbf{F}^{t-1}, \quad \bar{\mathbf{y}}^t =\bar{\mathbf{y}}^{t-\frac{1}{2}}, \quad \bar{\mathbf{x}}^{t+\frac{1}{2}} = \frac{1}{n} \sum_{i=1}^n \prox_{\eta r}\left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right), \quad \bar{\mathbf{x}}^{t} = \bar{\mathbf{x}}^{t+\frac{1}{2}},\label{eq:proof_mean} \end{align} where $\mathbf{1}^\top\mathbf{W}=\mathbf{1}^\top$ follows from Assumption \ref{assu:mix_matrix}. With $\bar{\mathbf{y}}^{-1}=\overline{\nabla} \mathbf{F}^{-1}$, we have \eqref{eq:x_y_mean}. Similarly, for CDProxSGT in Algorithm \ref{alg:CDProxSGT}, taking the average on \eqref{eq:alg3_1_matrix} to \eqref{eq:alg3_6_matrix} will also give \eqref{eq:proof_mean} and \eqref{eq:x_y_mean}. \end{proof} In the rest of the analysis, we define the Moreau envelope of $\phi$ for $\lambda\in(0,\frac{1}{L})$ as \begin{align*} \phi_\lambda({\mathbf{x}}) = \min_{\mathbf{y}}\left\{\phi({\mathbf{y}}) + \frac{1}{2\lambda}\|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}. \end{align*} Denote the minimizer as \begin{align*} \prox_{\lambda \phi}({\mathbf{x}}):= \argmin_{{\mathbf{y}}} \phi({\mathbf{y}})+\frac{1}{2\lambda} \|{\mathbf{y}}-{\mathbf{x}}\|^2. \end{align*} In addition, we will use the notation $\widehat{{\mathbf{x}}}^t_i$ and $\widehat{{\mathbf{x}}}^{t+\frac{1}{2}}_i$ that are defined by \begin{align} \widehat{{\mathbf{x}}}^t_i = \prox_{\lambda \phi}({\mathbf{x}}^t_i),\ \widehat{{\mathbf{x}}}^{t+\frac{1}{2}}_i = \prox_{\lambda \phi}({\mathbf{x}}^{t+\frac{1}{2}}_i),\, \forall\, i\in\mathcal{N}, \label{eq:x_t_hat} \end{align} where $\lambda \in(0,\frac{1}{L})$. 
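As a small numerical illustration of Lemma~\ref{lem:xhat_x} (not needed for any proof), consider $\psi(x)=|x|$, whose proximal mapping is soft-thresholding and whose Moreau envelope is the Huber function; the sketch below compares the gradient formula $\lambda^{-1}(x-\widehat{x})$ with a finite-difference gradient of $\psi_\lambda$. All names here are ours and purely illustrative.
\begin{verbatim}
import numpy as np

lam = 0.3
psi = abs                                         # psi(x) = |x|, convex
prox = lambda x: np.sign(x) * max(abs(x) - lam, 0.0)   # prox_{lam psi}(x)
env = lambda x: psi(prox(x)) + (prox(x) - x)**2 / (2 * lam)  # psi_lam(x)

x, h = 0.8, 1e-6
grad_lemma = (x - prox(x)) / lam                  # lambda^{-1}(x - xhat)
grad_fd = (env(x + h) - env(x - h)) / (2 * h)     # central finite difference
print(abs(grad_lemma - grad_fd) < 1e-5)           # True: the two agree
\end{verbatim}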
\section{Convergence Analysis for DProxSGT} \label{sec:proof_DProxSGT} In this section, we analyze the convergence rate of DProxSGT in Algorithm \ref{alg:DProxSGT}. For better readability, we use the matrix form of Algorithm \ref{alg:DProxSGT}. By the notation introduced in section~\ref{sec:notation}, we can write \eqref{eq:y_half_update}-\eqref{eq:x_1_update} in the more compact matrix form: \begin{align} & \Y^{t-\frac{1}{2}} = \Y^{t-1} + \nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1},\label{eq:y_half_update_matrix} \\ & \Y^t = \Y^{t-\frac{1}{2}}\mathbf{W},\label{eq:y_update_matrix}\\ & \mathbf{X}^{t+\frac{1}{2}} =\prox_{\eta r} \left(\mathbf{X}^t - \eta \Y^{t}\right) \triangleq [\prox_{\eta r} \left({\mathbf{x}}_1^t - \eta {\mathbf{y}}_1^{t}\right),\ldots,\prox_{\eta r} \left({\mathbf{x}}_n^t - \eta {\mathbf{y}}_n^{t}\right)],\label{eq:x_half_update_matrix} \\ & \mathbf{X}^{t+1} = \mathbf{X}^{t+\frac{1}{2}}\mathbf{W}. \label{eq:x_1_update_matrix} \end{align} Below, we first bound $\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2$ in Lemma~\ref{lem:Xhat_Xhalf}. Then we give the bounds of the consensus error $\|\mathbf{X}_\perp^t\|$ and $\|\Y_\perp^t\|$ and $\phi_\lambda({\mathbf{x}}_i^{t+1})$ after one step in Lemmas~\ref{lem:XI_J}, \ref{lem:YI_J}, and \ref{lem:weak_convex}. Finally, we prove Theorem \ref{thm:sec2} by constructing a Lyapunov function that involves $\|\mathbf{X}_\perp^t\|$, $\|\Y_\perp^t\|$, and $\phi_\lambda({\mathbf{x}}_i^{t+1})$. \begin{lemma} \label{lem:Xhat_Xhalf} Let $\eta\leq \lambda \leq \frac{1}{4 L}$. Then \begin{align} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ 4 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left( 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2. \label{eq:hatx_xprox} \end{align} \end{lemma} \begin{proof} By the definition of $\widehat{\mathbf{x}}^t_i$ in \eqref{eq:x_t_hat}, we have $0 \in \nabla f(\widehat{\mathbf{x}}^t_i) + \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\lambda}(\widehat{\mathbf{x}}^t_i-{\mathbf{x}}^t_i)$, i.e., \[ \textstyle 0 \in \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\eta} \left(\frac{\eta}{\lambda} \widehat{\mathbf{x}}^t_i-\frac{\eta}{\lambda}{\mathbf{x}}^t_i + \eta \nabla f(\widehat{\mathbf{x}}^t_i) \right) = \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\eta} \left(\widehat{\mathbf{x}}^t_i - \left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i)+ \left(1- \frac{\eta}{\lambda}\right) \widehat{\mathbf{x}}^t_i \right)\right). \] Thus we have $\widehat{\mathbf{x}}^t_i = \prox_{\eta r}\left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i\right)$. 
Then by \eqref{eq:x_half_update}, the convexity of $r$, and Lemma \ref{lem:prox_diff}, \begin{align} &~ \textstyle \|\widehat{\mathbf{x}}_i^{t}-{\mathbf{x}}_i^{t+\frac{1}{2}}\|^2 = \left\| \prox_{\eta r}\left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i\right)- \prox_{\eta r} \left( {\mathbf{x}}_i^t - \eta {\mathbf{y}}^t_i\right) \right\|^2 \nonumber\\ \leq &~ \textstyle \left\| \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i - ({\mathbf{x}}^t_i-\eta{\mathbf{y}}^t_i) \right\|^2 = \left\| \left(1- \frac{\eta}{\lambda}\right)(\widehat{\mathbf{x}}^t_i -{\mathbf{x}}^t_i )- \eta (\nabla f(\widehat{\mathbf{x}}^t_i) -{\mathbf{y}}^t_i) \right\|^2 \nonumber\\ = & ~ \textstyle \left(1- \frac{\eta}{\lambda}\right)^2 \left\| \widehat{\mathbf{x}}^t_i - {\mathbf{x}}^t_i \right\|^2 + \eta^2\left\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \right\|^2 + 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) + \nabla f({\mathbf{x}}^t_i)-\nabla f(\widehat{\mathbf{x}}^t_i) \right\rangle\nonumber\\ \leq & ~ \textstyle\left(\left(1- \frac{\eta}{\lambda}\right)^2 + 2\left(1- \frac{\eta}{\lambda}\right)\eta L \right) \left\| \widehat{\mathbf{x}}^t_i - {\mathbf{x}}^t_i \right\|^2 + \eta^2\left\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \right\|^2 + 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) \right\rangle , \label{eq:lem1.6.1} \end{align} where the second inequality holds by $\left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, \nabla f({\mathbf{x}}^t_i)-\nabla f(\widehat{\mathbf{x}}^t_i) \right\rangle \leq L\left\|\widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t\right\|^2$. The second term in the right hand side of \eqref{eq:lem1.6.1} can be bounded by \begin{align*} &~ \textstyle \mathbb{E}_t [\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\big] \overset{\eqref{eq:x_y_mean}}{=} \mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t + \overline{\nabla} \mathbf{F}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\big] \leq 2\mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t \|^2\big] + 2\mathbb{E}_t\big[\big\| \overline{\nabla} \mathbf{F}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \big\|^2\big] \\ = &~2\mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t \|^2\big] + 2\mathbb{E}_t\big[\| \overline{\nabla} \mathbf{F}^t - \overline{\nabla} \mathbf{f}^t \|^2\big]+ 2\| \overline{\nabla} \mathbf{f}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \|^2 \\ \leq&~ 2\mathbb{E}_t[ \|{\mathbf{y}}_i^t-\bar{\mathbf{y}}^t\|^2\big] + \frac{2}{n^2}\sum_{j=1}^n \mathbb{E}_t\big[\|\nabla F_j({\mathbf{x}}_j^t,\xi_j^t)-\nabla f_j({\mathbf{x}}_j^t)\|^2\big] + 4 \| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \|^2 + 4\| \nabla f({\mathbf{x}}^t_i)- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\\ \leq &~ 2\mathbb{E}_t[ \|{\mathbf{y}}_i^t-\bar{\mathbf{y}}^t\|^2\big] + 2 \frac{\sigma^2}{n} + 4 \| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \|^2 + 4 L^2 \|{\mathbf{x}}^t_i -\widehat{\mathbf{x}}^t_i\|^2, \end{align*} where the second equality holds by the unbiasedness of stochastic gradients, and the second inequality holds also by the independence between $\xi_i^t$'s. 
In the last inequality, we use the bound of the variance of stochastic gradients, and the $L$-smooth assumption. Taking the full expectation over the above inequality and summing for all $i$ give \begin{align} \sum_{i=1}^n\mathbb{E}\big[\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2 ] \leq 2\mathbb{E}\big[\|\Y^t_\perp\|^2 ] +2\sigma^2 + 8 L^2 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2 ] +4 L^2 \mathbb{E}\big[\| \mathbf{X}^t - \widehat\mathbf{X}^t\|^2 ]. \label{eq:lem161_1} \end{align} To have the inequality above, we have used \begin{align} &~ \sum_{i=1}^n \left\| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \right\|^2 \leq \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n \left\|\nabla f_j({\mathbf{x}}_j^t) -\nabla f_j({\mathbf{x}}^t_i) \right\|^2 \leq \frac{ L^2}{n}\sum_{i=1}^n\sum_{j=1}^n \left\|{\mathbf{x}}_j^t - {\mathbf{x}}^t_i \right\|^2 \nonumber \\ = &~ \frac{ L^2}{n}\sum_{i=1}^n\sum_{j=1}^n \left( \left\|{\mathbf{x}}_j^t - \bar{\mathbf{x}}^t \right\|^2 +\left\|\bar{\mathbf{x}}^t-{\mathbf{x}}^t_i \right\|^2 + 2\left\langle {\mathbf{x}}_j^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle\right) = 2 L^2 \left\|\mathbf{X}^t_\perp\right\|^2, \label{eq:sumsum} \end{align} where the last equality holds by $ \frac{1}{n} \sum_{i=1}^n\sum_{j=1}^n \left\langle {\mathbf{x}}_j^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle = \sum_{i=1}^n \left\langle \frac{1}{n} \sum_{j=1}^n ({\mathbf{x}}_j^t - \bar{\mathbf{x}}^t), \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle =\sum_{i=1}^n\left\langle \bar{\mathbf{x}}^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle=0$ from the definition of $\bar{\mathbf{x}}$. About the third term in the right hand side of \eqref{eq:lem1.6.1}, we have \begin{align} & ~\sum_{i=1}^n \mathbb{E}\left[ \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) \right\rangle\right] \overset{\eqref{eq:x_y_mean}}{=} \sum_{i=1}^n \mathbb{E}\left[\left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t -\bar{\mathbf{y}}^t+\overline{\nabla} \mathbf{F}^t -\nabla f({\mathbf{x}}^t_i) \right\rangle\right] \nonumber \\ = & ~ \textstyle \sum_{i=1}^n \mathbb{E}\big[ \langle\widehat{\mathbf{x}}^t_i -\bar{\widehat{\mathbf{x}}}^t, {\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \rangle\big] + \sum_{i=1}^n \mathbb{E}\big[\langle \bar{{\mathbf{x}}}^t - {\mathbf{x}}_i^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \rangle\big] + \sum_{i=1}^n \mathbb{E}\left[ \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, \mathbb{E}_{t} \left[\overline{\nabla} \mathbf{F}^t\right] -\nabla f({\mathbf{x}}^t_i)\right\rangle\right] \nonumber \\ \leq &~ \frac{1}{2\eta} \left( \textstyle \mathbb{E}\big[\|\widehat\mathbf{X}^t_\perp\|^2\big]+ \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]\right) + \eta \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + \textstyle L\mathbb{E}\big[\| \widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big] + \frac{1}{4 L} \sum_{i=1}^n \mathbb{E}\big[\|\overline{\nabla} {\mathbf{f}}^t -\nabla f({\mathbf{x}}^t_i)\|^2\big] \nonumber \\ \leq &~ \left(\textstyle\frac{1}{2\eta(1-\lambda L)^2} + \frac{1}{2\eta} + \frac{L}{2}\right)\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \eta\mathbb{E}\big[\|\Y^t_\perp\|^2\big] + L\mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big],\label{eq:lem161_2} \end{align} where $\textstyle \sum_{i=1}^n \big\langle \bar{\widehat{\mathbf{x}}}^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \big\rangle = 0$ and $\sum_{i=1}^n \left\langle 
\bar{{\mathbf{x}}}^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \right\rangle = 0$ is used in the second equality, $\mathbb{E}_{t} \left[\overline{\nabla} \mathbf{F}^t\right] = \overline{\nabla} {\mathbf{f}}^t$ is used in the first inequality, and $\|\widehat\mathbf{X}^t_\perp\|^2 =\left\|\left(\prox_{\lambda \phi}(\mathbf{X}^t)- \prox_{\lambda \phi}(\bar{\mathbf{x}}^t)\mathbf{1}^\top\right) (\mathbf{I}-\mathbf{J})\right\|^2\leq \frac{1}{(1-\lambda L)^2}\|\mathbf{X}^t-\bar\mathbf{X}^t\|^2$ and \eqref{eq:sumsum} are used in the last inequality. Now we can bound the summation of \eqref{eq:lem1.6.1} by using \eqref{eq:lem161_1} and \eqref{eq:lem161_2}: \begin{align*} & ~ \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big]\\ \leq & ~ \left(\textstyle \left(1- \frac{\eta}{\lambda}\right)^2 + 2\left(1- \frac{\eta}{\lambda}\right)\eta L \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t \|^2\big] \\ & ~ + \eta^2 \left(2\mathbb{E}[ \|\Y^t_\perp\|^2\big] +2\sigma^2 + 8 L^2 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] +4 L^2 \mathbb{E}\big[\| \mathbf{X}^t - \widehat\mathbf{X}^t\|^2\big]\right) \\ & ~ + \textstyle 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left(\textstyle\left(\frac{1}{2\eta(1-\lambda L)^2} + \frac{1}{2\eta} + \frac{ L}{2} \right)\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \eta\mathbb{E}\big[\|\Y^t_\perp\|^2\big] + L\mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big]\right) \\ = & ~ \textstyle \left(1 - 2\eta (\frac{1}{\lambda} - 2 L) + \frac{\eta^2}{\lambda} (\frac{1}{\lambda} - 2 L) + 2 L \eta^2(-\frac{1}{\lambda}+2 L)\right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 2\eta^2\sigma^2 \nonumber\\ & ~ + \textstyle \left( \left(1- \frac{\eta}{\lambda}\right) (1+\frac{1}{(1-\lambda L)^2}+ \eta L) + 8\eta^2 L^2 \right) \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + 2 (2- \frac{\eta}{\lambda}) \eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big]. \end{align*} With $\eta \leq \lambda \leq \frac{1}{4 L}$, we have $\frac{1}{(1-\lambda L)^2}\leq 2$ and \eqref{eq:hatx_xprox} follows from the inequality above. \end{proof} \begin{lemma}\label{lem:XI_J} The consensus error of $\mathbf{X}$ satisfies the following inequality \begin{align} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] \leq \frac{1+\rho^2}{2} \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \frac{2\rho^2 \eta^2 }{1-\rho^2} \mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big]. 
\label{eq:X_consensus} \end{align} \end{lemma} \begin{proof} With the updates \eqref{eq:x_half_update} and \eqref{eq:x_1_update}, we have \begin{align*} &~ \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] = \mathbb{E}\big[\|\mathbf{X}^{t-\frac{1}{2}}\mathbf{W}(\mathbf{I}- \mathbf{J})\|^2\big] = \mathbb{E}\big[\|\mathbf{X}^{t-\frac{1}{2}} (\mathbf{W}-\mathbf{J})\|^2\big] \nonumber\\ =&~ \mathbb{E}\big[\| \prox_{\eta r} \left(\mathbf{X}^{t-1} - \eta \Y^{t-1}\right) (\mathbf{W}-\mathbf{J})\|^2\big] \nonumber\\ =&~ \mathbb{E}\big[\| \left(\prox_{\eta r} \left(\mathbf{X}^{t-1} - \eta \Y^{t-1}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right)\mathbf{1}^\top\right) (\mathbf{W}-\mathbf{J})\|^2\big] \nonumber\\ \leq &~ \mathbb{E}\big[\|\prox_{\eta r} \left(\mathbf{X}^{t-1} - \eta \Y^{t-1}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right)\mathbf{1}^\top\|^2 \|(\mathbf{W}-\mathbf{J})\|^2_2] \nonumber\\ \leq &~ \rho^2 \mathbb{E}\left[ \textstyle \sum_{i=1}^n\| \prox_{\eta r} \left({\mathbf{x}}_i^{t-1} - \eta {\mathbf{y}}_i^{t-1}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right) \|^2\right] \nonumber\\ \leq &~ \rho^2 \mathbb{E}\left[ \textstyle \sum_{i=1}^n\|\left({\mathbf{x}}_i^{t-1} - \eta {\mathbf{y}}_i^{t-1}\right)-\left(\bar{\mathbf{x}}^{t-1} - \eta \bar{\mathbf{y}}^{t-1}\right) \|^2\right] = \rho^2 \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp - \eta \Y^{t-1}_\perp \|^2\big] \nonumber\\ \leq &~ \textstyle \big(\textstyle \rho^2 + \frac{1-\rho^2}{2}\big) \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \big( \textstyle\rho^2 + \frac{2\rho^4}{1-\rho^2}\big) \eta^2\mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big] \nonumber\\ = &~\textstyle \frac{1+\rho^2}{2} \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \frac{1+\rho^2}{1-\rho^2} \rho^2 \eta^2\mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big] \nonumber \\ \leq &~\textstyle \frac{1+\rho^2}{2} \mathbb{E}\big[\| \mathbf{X}^{t-1}_\perp \|^2\big]+ \frac{2\rho^2 \eta^2 }{1-\rho^2} \mathbb{E}\big[\| \Y^{t-1}_\perp \|^2\big], \end{align*} where we have used $\mathbf{1}^\top (\mathbf{W}-\mathbf{J})=\mathbf{0}$ in the third equality, $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ in the second inequality, and Lemma \ref{lem:prox_diff} in the third inequality, and $\rho\leq 1$ is used in the last inequality. \end{proof} \begin{lemma}\label{lem:YI_J} Let $\eta\leq \min\{\lambda, \frac{1-\rho^2}{4\sqrt{6} \rho L} \} $ and $\lambda \leq\frac{1}{4 L}$. The consensus error of $\Y$ satisfies \begin{align} \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \leq &~ \frac{48\rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\|\mathbf{X}^{t-1}_\perp\|^2\big] \!+\! \frac{3\!+\!\rho^2}{4} \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] \!+\! \frac{12\rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\|\widehat\mathbf{X}^{t-1}-\mathbf{X}^{t-1} \|^2\big] \!+\! 6 n\sigma^2. 
\label{eq:Y_consensus} \end{align} \end{lemma} \begin{proof} By the updates \eqref{eq:y_half_update} and \eqref{eq:y_update}, we have \begin{align} &~ \mathbb{E}\big[\|\Y^t_\perp\|^2\big] = \mathbb{E}\big[\|\Y^{t-\frac{1}{2}}(\mathbf{W}- \mathbf{J})\|^2\big] = \mathbb{E}\big[\| \Y^{t-1}(\mathbf{W} -\mathbf{J}) + (\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}) (\mathbf{W} -\mathbf{J})\|^2\big] \nonumber\\ = &~ \mathbb{E}\big[\|\Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J})\|^2\big] + \mathbb{E}\big[\|(\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}) (\mathbf{W}-\mathbf{J}) \|^2\big] + 2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}), (\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}) (\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber\\ \leq &~ \rho^2 \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] + \rho^2 \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}\|^2\big] + 2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}),(\nabla \mathbf{f}^t - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big], \label{eq:y_cons1} \end{align} where we have used $\mathbf{J}\mathbf{W}=\mathbf{J}\J=\mathbf{J}$, $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ and $\mathbb{E}_t[\nabla \mathbf{F}^t] = \nabla {\mathbf{f}}^t$. For the second term on the right hand side of \eqref{eq:y_cons1}, we have \begin{align} &~\mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}\|^2\big] = \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{f}^t+\nabla \mathbf{f}^t -\nabla \mathbf{F}^{t-1}\|^2\big] \nonumber\\ \overset{\mathbb{E}_t[\nabla \mathbf{F}^t] = \nabla \mathbf{f}^t}{=}&~ \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{f}^t\|^2\big]+\mathbb{E}\big[\|\nabla \mathbf{f}^t- \nabla \mathbf{f}^{t-1}+\nabla \mathbf{f}^{t-1}-\nabla \mathbf{F}^{t-1}\|^2\big] \nonumber \\ \leq &~ \mathbb{E}\big[\|\nabla \mathbf{F}^t - \nabla \mathbf{f}^t\|^2\big]+2\mathbb{E}\big[\|\nabla \mathbf{f}^t- \nabla \mathbf{f}^{t-1}\|^2\big]+2\mathbb{E}\big[\|\nabla \mathbf{f}^{t-1}-\nabla \mathbf{F}^{t-1}\|^2\big] \nonumber\\ \leq &~ 3 n \sigma^2 + 2 L^2 \mathbb{E}\big[\|\mathbf{X}^{t}-\mathbf{X}^{t-1}\|^2\big]. 
\label{eq:y_cons12} \end{align} For the third term on the right hand side of \eqref{eq:y_cons1}, we have \begin{align} &~2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] +2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ &~ +2\mathbb{E}\big[\langle (\Y^{t-2} + \nabla \mathbf{F}^{t-1} - \nabla \mathbf{F}^{t-2})\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ &~ +2\mathbb{E}\big[\langle (\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ \leq &~2\mathbb{E}\big[\|\Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \|\big] \nonumber \\ &~ +2\mathbb{E}\big[\|(\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J})\|\big] \nonumber \\ \leq&~ 2\rho^2\mathbb{E}\big[\| \Y^{t-1}_\perp\|\cdot\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|\big] + 2\rho^2\mathbb{E}\big[\|\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1}\|^2\big] \nonumber\\ \leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4}{1-\rho^2}\mathbb{E}\big[\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|^2\big] + 2\rho^2 n \sigma^2 \nonumber \\ \leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4 L^2}{1-\rho^2} \mathbb{E}\big[\| \mathbf{X}^t - \mathbf{X}^{t-1}\|^2\big]+ 2\rho^2 n \sigma^2, \label{eq:y_cons13} \end{align} where the second equality holds by $\mathbf{W}-\mathbf{J}=(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{J})$, \eqref{eq:y_half_update} and \eqref{eq:y_update}, the third equality holds because $\Y^{t-2} - \nabla \mathbf{F}^{t-2} -\nabla \mathbf{f}^{t-1}$ does not depend on $\xi_i^{t-1}$'s, and the second inequality holds because $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ and $\|\mathbf{W}\|_2\leq 1$. Plugging \eqref{eq:y_cons12} and \eqref{eq:y_cons13} into \eqref{eq:y_cons1}, we have \begin{align} \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \leq &~ \textstyle\frac{1+\rho^2}{2} \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] + \frac{2 \rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\| \mathbf{X}^t - \mathbf{X}^{t-1}\|^2\big] + 5 \rho^2 n \sigma^2 , \label{eq:y_cons2} \end{align} where we have used $1+\frac{\rho^2}{1-\rho^2} = \frac{1}{1-\rho^2 }$. 
For the second term in the right hand side of \eqref{eq:y_cons2}, we have \begin{align} &~ \| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2 = \|\mathbf{X}^{t+\frac{1}{2}}\mathbf{W}-\mathbf{X}^t\|^2 = \|(\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t)\mathbf{W} +(\widehat\mathbf{X}^t-\mathbf{X}^t)\mathbf{W} + \mathbf{X}^t (\mathbf{W}-\mathbf{I})\|^2 \nonumber \\ \leq &~ 3\|(\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t)\mathbf{W}\|^2 +3\|(\widehat\mathbf{X}^t-\mathbf{X}^t)\mathbf{W}\|^2 + 3\|\mathbf{X}^t(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{I})\|^2 \nonumber \\ \leq &~ 3\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2 +3\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2 + 12\|\mathbf{X}^t_\perp\|^2,\label{eq:Xplus1-X} \end{align} where in the first inequality we have used $\mathbf{X}^t (\mathbf{W}-\mathbf{I})=\mathbf{X}^t(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{I})$ from $\mathbf{J}(\mathbf{W}-\mathbf{I}) = \mathbf{J}-\mathbf{J}$, and in the second inequality we have used $\|\mathbf{W}\|_2\leq 1$ and $\|\mathbf{W}-\mathbf{I}\|_2\leq 2$. Taking expectation over both sides of \eqref{eq:Xplus1-X} and using \eqref{eq:hatx_xprox}, we have \begin{align*} &~ \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \\ \le &~3 \left( \textstyle 4 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left( 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2\right) +3 \mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + 12 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]\\ = &~ 3 \textstyle \left(2 -\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +12\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 6\eta^2\sigma^2 + 24\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]. \end{align*} Plugging the inequality above into \eqref{eq:y_cons2} gives \begin{align*} \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \leq &~ \left(\textstyle\frac{1+\rho^2}{2} + \frac{24 \rho^2 L^2\eta^2 }{1-\rho^2 } \right) \mathbb{E}\big[\|\Y^{t-1}_\perp \|^2\big] + \textstyle 5 \rho^2 n\sigma^2 +\frac{12 \rho^2 L^2 \eta^2 \sigma^2 }{1-\rho^2 } \nonumber \\ &~\textstyle + \frac{6\rho^2 L^2 }{1-\rho^2 }\left( \textstyle 2- \frac{\eta}{2\lambda} \right) \mathbb{E}\big[\|\widehat\mathbf{X}^{t-1}-\mathbf{X}^{t-1} \|^2\big] + \frac{48 \rho^2 L^2 }{1-\rho^2 } \mathbb{E}\big[\|\mathbf{X}^{t-1}_\perp\|^2\big]. \end{align*} By $\rho<1$ and $ \eta \leq \frac{1-\rho^2}{4\sqrt{6} \rho L}$, we have $\frac{24 \rho^2 L^2 \eta^2}{1-\rho^2 } \leq \frac{1-\rho^2}{4}$ and $\frac{12 \rho^2 L^2 \eta^2}{1-\rho^2 } \leq \frac{1-\rho^2}{8}\leq n$, and further \eqref{eq:Y_consensus}. \end{proof} \begin{lemma}\label{lem:weak_convex} Let $\eta\leq \lambda \leq\frac{1}{4 L}$. It holds \begin{align} \sum_{i=1}^n \mathbb{E}[\phi_\lambda({\mathbf{x}}_i^{t+1})] \leq &~ \sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})] + \frac{4}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{4 \eta^2}{\lambda} \mathbb{E}[ \|\Y^t_\perp\|^2\big] - \frac{\eta}{4\lambda^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t \|^2\big] + \frac{\eta^2\sigma^2}{\lambda}. 
\label{eq:phi_update} \end{align} \end{lemma} \begin{proof} By the definition in \eqref{eq:x_t_hat}, the update in \eqref{eq:x_1_update}, the $L$-weak convexity of $\phi$, and the convexity of $\|\cdot\|^2$, we have \begin{align} &~\phi_\lambda({\mathbf{x}}_i^{t+1}) \overset{\eqref{eq:x_t_hat}}{=} \phi(\widehat{\mathbf{x}}_i^{t+1})+{\textstyle \frac{1}{2\lambda} }\|\widehat{\mathbf{x}}_i^{t+1}-{\mathbf{x}}_i^{t+1}\|^2 \overset{\eqref{eq:x_1_update}}{\leq} \phi\bigg(\sum_{j=1}^n\mathbf{W}_{ji}\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}\bigg)+{ \frac{1}{2\lambda}} \bigg\|\sum_{j=1}^n \mathbf{W}_{ji}\big(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big)\bigg\|^2 \nonumber \\ &~\overset{\mbox{Lemma \ref{lem:weak_convx}} }{\leq} \sum_{j=1}^n \mathbf{W}_{ji} \phi(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}) +{ \frac{L}{2} }\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-\widehat{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2+{ \frac{1}{2\lambda} }\sum_{j=1}^n \mathbf{W}_{ji} \|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2 \nonumber \\ &~\leq \sum_{j=1}^n \mathbf{W}_{ji} \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}}) + \frac{1}{4\lambda} \sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2, \label{eq:phi_update_0} \end{align} where in the last inequality we use $ \phi(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}) + \frac{1}{2\lambda} \|(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2 = \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}})$, $\|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-\widehat{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2\leq \frac{1}{(1-\lambda L)^2}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2$ from Lemma \ref{lem:prox_diff}, $\frac{1}{(1-\lambda L)^2}\leq 2$ and $ L \leq \frac{1}{4\lambda}$. For the first term on the right hand side of \eqref{eq:phi_update_0}, with $\sum_{i=1}^n \mathbf{W}_{ji}=1$, we have \begin{align} \sum_{i=1}^n \sum_{j=1}^n \mathbf{W}_{ji} \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}}) = &~ \sum_{i=1}^n \phi_\lambda({\mathbf{x}}_i^{t+\frac{1}{2}}) \leq \sum_{i=1}^n \phi_\lambda( {\mathbf{x}}_i^{t}) + { \frac{1}{2\lambda}} \|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2 - { \frac{1}{2\lambda}} \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2, \label{eq:phi_lambda} \end{align} where we have used $ \phi_\lambda({\mathbf{x}}_i^{t+\frac{1}{2}}) \leq \phi(\widehat{\mathbf{x}}_i^{t})+\frac{1}{2\lambda} \|\widehat{\mathbf{x}}_i^{t}-{\mathbf{x}}_i^{t+\frac{1}{2}}\|^2$ and $\phi_\lambda({\mathbf{x}}_i^{t}) = \phi( \widehat{\mathbf{x}}_i^{t}) + \frac{1}{2\lambda} \|\widehat{\mathbf{x}}_i^{t}-{\mathbf{x}}_i^t\|^2$.
For the second term on the right hand side of \eqref{eq:phi_update_0}, with Lemma \ref{lem:prox_diff} and \eqref{eq:x_half_update}, we have \begin{align} &~\sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2 = \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|\prox_{\eta r}({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-\prox_{\eta r}({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ \leq &~ \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ = &~ \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)+(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ \leq&~ 2\sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)\|^2 + 2\sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \mathbf{W}_{ji}\mathbf{W}_{li}\|(\bar{{\mathbf{x}}}^{t}-\eta\bar{{\mathbf{y}}}^{t})-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber\\ \leq&~ 2\sum_{i=1}^n\sum_{j=1}^{n-1} \mathbf{W}_{ji} \|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)\|^2 + 2\sum_{i=1}^n \sum_{l=2}^n \mathbf{W}_{li}\|(\bar{{\mathbf{x}}}^{t}-\eta\bar{{\mathbf{y}}}^{t})-({\mathbf{x}}_l^{t}-\eta{\mathbf{y}}_l^t)\|^2 \nonumber \\ \leq&~4 \sum_{j=1}^{n} \|({\mathbf{x}}_j^{t}-\eta{\mathbf{y}}_j^{t})-(\bar{\mathbf{x}}^{t}-\eta\bar{\mathbf{y}}^t)\|^2 \leq 8 \|\mathbf{X}^{t}_\perp\|^2+ 8\eta^2 \|\Y^{t}_\perp\|^2. \label{eq:2_3} \end{align} With \eqref{eq:phi_lambda} and \eqref{eq:2_3}, summing up \eqref{eq:phi_update_0} from $i=1$ to $n$ gives \begin{align*} \sum_{i=1}^n \phi_\lambda({\mathbf{x}}_i^{t+1}) \leq &~ \sum_{i=1}^n \phi_\lambda( {\mathbf{x}}_i^{t}) +{ \frac{1}{2\lambda} }\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2 - { \frac{1}{2\lambda} } \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2 +{ \frac{2}{\lambda} }\left( \|\mathbf{X}^{t}_\perp\|^2+ \eta^2 \|\Y^{t}_\perp\|^2 \right) \end{align*} Now taking the expectation on the above inequality and using \eqref{eq:hatx_xprox}, we have \begin{align*} \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1}) \big] \leq &~ \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda( {\mathbf{x}}_i^{t}) \big] - \frac{1}{2\lambda} \mathbb{E}\big[ \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{2}{\lambda} \mathbb{E}\big[ \|\mathbf{X}^{t}_\perp\|^2+ \eta^2 \|\Y^{t}_\perp\|^2 \big]\\ &~ \hspace{-2cm}+\frac{1}{2\lambda} \left(\textstyle 4 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left(\textstyle 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2 \right). \end{align*} Combining like terms in the inequality above gives \eqref{eq:phi_update}. \end{proof} With Lemmas \ref{lem:XI_J}, \ref{lem:YI_J} and \ref{lem:weak_convex}, we are ready to prove Theorem \ref{thm:sec2}. We build the following Lyapunov function: \begin{align*} \mathbf{V}^t = z_1 \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] +z_2\mathbb{E}[\|\Y^t_\perp\|^2] +z_3\sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})], \end{align*} where $z_1, z_2, z_3 \geq 0$ will be determined later. 
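Before giving the formal proof, we note that the choice of $(z_1,z_2,z_3)$ can be sanity-checked numerically. The sketch below (purely illustrative, with example values of $\rho$ and $L$ and the step sizes from Theorem~\ref{thm:sec2}) forms the recursion matrix $\mathbf{A}$ and vector ${\mathbf{b}}$ constructed in the proof below and verifies the componentwise contraction ${\mathbf{z}}^\top\mathbf{A}\le{\mathbf{z}}^\top$ and ${\mathbf{z}}^\top{\mathbf{b}}\le-\frac{\eta}{8\lambda}$ used there.
\begin{verbatim}
import numpy as np

rho, L = 0.6, 1.0                              # example values only
lam = min(1 / (4 * L), 1 / (96 * rho * L))
eta = min(1 / (4 * L), (1 - rho**2)**4 / (96 * rho * L))

# One-step recursion collected from the three one-step lemmas:
# Omega^{t+1} <= A Omega^t + b Omega_0^t + c sigma^2.
A = np.array([
    [(1 + rho**2) / 2, 2 * rho**2 * eta**2 / (1 - rho**2), 0],
    [48 * rho**2 * L**2 / (1 - rho**2), (3 + rho**2) / 4, 0],
    [4 / lam, 4 * eta**2 / lam, 1],
])
b = np.array([0, 12 * rho**2 * L**2 / (1 - rho**2), -eta / (4 * lam**2)])

z = np.array([10 / (1 - rho**2),
              (80 * rho**2 / (1 - rho**2)**3 + 16 / (1 - rho**2)) * eta**2,
              lam])

print(np.all(z @ A <= z + 1e-12))              # z^T A <= z^T componentwise
print(z @ b <= -eta / (8 * lam))               # z^T b <= -eta/(8*lambda)
\end{verbatim}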
\subsection*{Proof of Theorem \ref{thm:sec2}.} \begin{proof} Denote \begin{align*} \Phi^t = \sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})],\quad \Omega_0^t = \mathbb{E}[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2],\quad \Omega^t = \left(\mathbb{E}[\|\mathbf{X}^t_\perp\|^2], \mathbb{E}[\|\Y^t_\perp\|^2], \Phi^t\right)^\top. \end{align*} Then Lemmas \ref{lem:XI_J}, \ref{lem:YI_J} and \ref{lem:weak_convex} imply $\Omega^{t+1} \leq \mathbf{A}\Omega^t + {\mathbf{b}} \Omega_0^t + {\mathbf{c}} \sigma^2$, where \begin{align*} \mathbf{A} = \begin{pmatrix} \frac{1+\rho^2}{2} &~ \frac{2\rho^2}{1-\rho^2}\eta^2 &~ 0\\ \frac{48\rho^2 L^2 }{1-\rho^2 } &~\frac{3+\rho^2}{4} &~ 0 \\ \frac{4}{\lambda} &~ \frac{4}{\lambda}\eta^2 &~ 1 \end{pmatrix}, \quad {\mathbf{b}} = \begin{pmatrix} 0 \\ \frac{12\rho^2 L^2 }{1-\rho^2 } \\ - \frac{\eta}{4\lambda^2} \end{pmatrix}, \quad {\mathbf{c}} = \begin{pmatrix} 0 \\ 6n \\ \frac{\eta^2}{\lambda} \end{pmatrix}. \end{align*} For any ${\mathbf{z}} = (z_1, z_2, z_3)^\top \geq \mathbf{0}$, We have \begin{align*} {\mathbf{z}}^\top \Omega^{t+1} \leq {\mathbf{z}}^\top \Omega^{t}+ ({\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top)\Omega^t +{\mathbf{z}}^\top {\mathbf{b}} \Omega_0^t + {\mathbf{z}}^\top{\mathbf{c}} \sigma^2. \end{align*} Take $$z_1=\frac{10}{1-\rho^2},\ z_2=\left(\frac{80\rho^2}{(1-\rho^2)^3} + \frac{16}{1-\rho^2}\right)\eta^2,\ z_3 = \lambda.$$ We have $ {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top = \begin{pmatrix} \frac{48\rho^2 L^2 }{1-\rho^2 }z_2-1, 0, 0 \end{pmatrix}. $ Note $z_2 \leq \frac{96}{(1-\rho^2)^3}\eta^2$. Thus \begin{align*} {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq \begin{pmatrix} \textstyle \frac{4608\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2-1, 0, 0 \end{pmatrix}, \ {\mathbf{z}}^\top{\mathbf{b}} \leq \textstyle \frac{1152\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2 - \frac{\eta}{4\lambda}, \ {\mathbf{z}}^\top{\mathbf{c}} \leq \textstyle \Big( \textstyle \frac{576n }{(1-\rho^2)^3} + 1\Big)\eta^2 \leq \frac{577n}{(1-\rho^2)^3} \eta^2. \end{align*} With $\eta\leq \frac{(1-\rho^2)^4}{96\rho L}$ and $\lambda \leq \frac{1}{96\rho L}$, we have $ {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2}, 0, 0 )^\top$ and $ {\mathbf{z}}^\top{\mathbf{b}} \leq \left(12\rho L - \frac{1}{8\lambda}\right)\eta - \frac{\eta}{8\lambda} \leq -\frac{\eta}{8\lambda}$. Thus \begin{align} {\mathbf{z}}^\top \Omega^{t+1} \leq \textstyle {\mathbf{z}}^\top \Omega^{t} -\frac{1}{2}\mathbb{E}[\|\mathbf{X}^t_\perp\|^2] -\frac{\eta}{8\lambda} \Omega_0^t + \frac{577n}{(1-\rho^2)^3} \eta^2 \sigma^2.\label{eq:l_fun} \end{align} Hence, summing up \eqref{eq:l_fun} for $t=0,1,\ldots,T-1$ gives \begin{align}\label{eq:avg-Omega} \frac{1}{\lambda T}\sum_{t=0}^{T-1} \Omega_0^t +\frac{4}{\eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \leq \textstyle \frac{8}{\eta T} \left({\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T}\right) + \frac{577n}{(1-\rho^2)^3} 8\eta\sigma^2 . \end{align} From ${\mathbf{y}}_i^{-1} =\mathbf{0}, \nabla F_i({\mathbf{x}}_i^{-1},\xi_i^{-1}) = \mathbf{0}, {\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$, we have \begin{align} \|\mathbf{X}^0_\perp\|^2 = 0, \quad \|\Y^0_\perp\|^2 = \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2, \quad \Phi^0=n \phi_\lambda({\mathbf{x}}^0). 
\label{eq:initial_thm2} \end{align} From Assumption \ref{assu:prob}, $\phi$ is lower bounded and thus $\phi_\lambda $ is also lower bounded, i.e., there is a constant $\phi_\lambda^*$ satisfying $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}}) > -\infty$. Thus \begin{align} \Phi^T \geq n \phi_\lambda^*.\label{eq:end_thm2} \end{align} With \eqref{eq:initial_thm2}, \eqref{eq:end_thm2}, and the nonnegativity of $ \mathbb{E}[\|\mathbf{X}^T_\perp\|^2]$ and $ \mathbb{E}[\|\Y^T_\perp\|^2]$, we have \begin{align} \textstyle {\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T} \le \frac{96 \eta^2}{(1-\rho^2)^3} \mathbb{E}[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2] + \lambda n \phi_\lambda({\mathbf{x}}^0) -\lambda n \phi_\lambda^*. \label{eq:Omega0_OmegaT} \end{align} By the convexity of the Frobenius norm and \eqref{eq:Omega0_OmegaT}, we obtain from \eqref{eq:avg-Omega} that \begin{align*} &~ \frac{1}{\lambda^2n} \mathbb{E}\big[\|\widehat\mathbf{X}^{\tau}-\mathbf{X}^{\tau}\|^2\big] +\frac{4}{n \lambda \eta}\mathbb{E}\big[\|\mathbf{X}^\tau_\perp\|^2\big] \leq \frac{1}{\lambda^2n T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2\big] +\frac{4}{n \lambda \eta T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] \nonumber \\ \leq &~ \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} + \frac{4616 \eta}{\lambda(1-\rho^2)^3} \sigma^2 \textstyle + \frac{768\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda T(1-\rho^2)^3}. \end{align*} Note $\|\nabla \phi_\lambda ({\mathbf{x}}_i^\tau)\|^2 = \frac{\|{\mathbf{x}}_i^\tau-\widehat{\mathbf{x}}_i^\tau\|^2}{\lambda^{2}}$ from Lemma \ref{lem:xhat_x}, we finish the proof. \end{proof} \section{Convergence Analysis for CDProxSGT} \label{sec:proof_CDProxSGT} In this section, we analyze the convergence rate of CDProxSGT. Similar to the analysis of DProxSGT, we establish a Lyapunov function that involves consensus errors and the Moreau envelope. But due to the compression, compression errors $\|\widehat\mathbf{X}^t-\mathbf{X}^t\|$ and $\|\widehat\Y^t-\Y^t\|$ will occur. Hence, we will also include the two compression errors in our Lyapunov function. Again, we can equivalently write a matrix form of the updates \eqref{eq:alg3_1}-\eqref{eq:alg3_6} in Algorithm \ref{alg:CDProxSGT} as follows: \begin{gather} \Y^{t-\frac{1}{2}} = \Y^{t-1} + \nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1}, \label{eq:alg3_1_matrix}\\ \underline\Y^{t} = \underline\Y^{t-1} + Q_{\mathbf{y}}\big[\Y^{t-\frac{1}{2}} - \underline\Y^{t-1}\big], \label{eq:alg3_2_matrix}\\ \Y^{t} = \Y^{t-\frac{1}{2}} +\gamma_y \underline\Y^{t}(\mathbf{W}-\mathbf{I}), \label{eq:alg3_3_matrix}\\ \mathbf{X}^{t+\frac{1}{2}} =\prox_{\eta r} \left(\mathbf{X}^t - \eta \Y^{t}\right), \label{eq:alg3_4_matrix}\\ \underline\mathbf{X}^{t+1} = \underline\mathbf{X}^{t} + Q_{\mathbf{x}}\big[\mathbf{X}^{t+\frac{1}{2}} - \underline\mathbf{X}^{t}\big], \label{eq:alg3_5_matrix}\\ \mathbf{X}^{t+1} = \mathbf{X}^{t+\frac{1}{2}}+\gamma_x\underline\mathbf{X}^{t+1}(\mathbf{W}-\mathbf{I}).\label{eq:alg3_6_matrix} \end{gather} When we apply the compressor to the column-concatenated matrix in \eqref{eq:alg3_2_matrix} and \eqref{eq:alg3_5_matrix}, it means applying the compressor to each column separately, i.e., $Q_{\mathbf{x}}[\mathbf{X}] = [Q_x[{\mathbf{x}}_1],Q_x[{\mathbf{x}}_2],\ldots,Q_x[{\mathbf{x}}_n]]$. 
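To make the matrix-form updates \eqref{eq:alg3_1_matrix}--\eqref{eq:alg3_6_matrix} concrete, a minimal NumPy sketch of one CDProxSGT iteration is given below. It is illustrative only and not the implementation used later in the experiments: it assumes $r=\mu\|\cdot\|_1$ (so that $\prox_{\eta r}$ is entrywise soft-thresholding) and uses a column-wise top-$k$ compressor of the kind adopted in Section \ref{sec:numerical_experiments}; the stacked stochastic gradients are passed in as placeholder arguments.
\begin{verbatim}
# Minimal sketch (not the experiment code) of one CDProxSGT iteration in the
# matrix form (eq:alg3_1_matrix)-(eq:alg3_6_matrix).  Columns of X, Y are the
# workers' copies; W is the mixing matrix; compressors act column by column.
# Assumptions for this sketch: r = mu*||.||_1 and top-k compressors.
import numpy as np

def prox_l1(V, t):
    # prox of t*||.||_1: entrywise soft-thresholding
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def topk_columns(V, ratio=0.3):
    # keep the largest `ratio` fraction of entries (in magnitude) per column
    out = np.zeros_like(V)
    k = max(1, int(ratio * V.shape[0]))
    for j in range(V.shape[1]):
        idx = np.argsort(np.abs(V[:, j]))[-k:]
        out[idx, j] = V[idx, j]
    return out

def cdproxsgt_step(X, Y, Xu, Yu, G_new, G_old, W, eta, mu, gamma_x, gamma_y):
    # X, Y: current iterates; Xu, Yu: underlined (communicated) copies;
    # G_new, G_old: stacked stochastic gradients at X^t and X^{t-1}.
    I = np.eye(W.shape[0])
    Y_half = Y + G_new - G_old                   # gradient tracking
    Yu = Yu + topk_columns(Y_half - Yu)          # compress the Y-difference
    Y = Y_half + gamma_y * Yu @ (W - I)          # gossip correction for Y
    X_half = prox_l1(X - eta * Y, eta * mu)      # proximal gradient step
    Xu = Xu + topk_columns(X_half - Xu)          # compress the X-difference
    X = X_half + gamma_x * Xu @ (W - I)          # gossip correction for X
    return X, Y, Xu, Yu
\end{verbatim}
In a distributed run, the products with $\mathbf{W}-\mathbf{I}$ only involve the compressed copies $\underline\Y^{t+1}$ and $\underline\mathbf{X}^{t+1}$, so each worker exchanges only compressed information with its neighbors.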
Below, we first analyze the progress made by the half-step updates of $\Y$ and $\mathbf{X}$ from $t+1/2$ to $t+1$ in Lemmas \ref{lem:prepare_comp_y} and \ref{lem:Xhat_Xhalf_comp}. Then we bound the one-step consensus error and compression error for $\mathbf{X}$ in Lemma \ref{lem:X_consensus_comperror} and for $\Y$ in Lemma \ref{lem:Y_consensus_comperror}. The bound on $\mathbb{E}[\phi_\lambda({\mathbf{x}}_i^{t+1})]$ after a one-step update is given in Lemma \ref{lem:phi_one_step}. Finally, we prove Theorem \ref{thm:sect3thm} by building a Lyapunov function that involves all five terms. \begin{lemma} \label{lem:prepare_comp_y} It holds that \begin{align} \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] \leq &~2 \alpha^2\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + 6 \alpha^2 n \sigma^2 + 4 \alpha^2 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big], \label{eq:2.3.2_1} \\ \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] \leq &~\frac{1+\alpha^2}{2}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \frac{6 n \sigma^2}{1-\alpha^2} + \frac{4 L^2}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big]. \label{eq:2.3.2} \end{align} \end{lemma} \begin{proof} From \eqref{eq:alg3_1} and \eqref{eq:alg3_2}, we have \begin{align} &~ \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] = \mathbb{E}\big[\mathbb{E}_Q\big[\|Q_{\mathbf{y}}\big[\Y^{t+\frac{1}{2}}-\underline\Y^{t}\big]- (\Y^{t+\frac{1}{2}}-\underline\Y^{t})\|^2\big]\big] \nonumber\\ \leq &~ \alpha^2\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}-\underline\Y^{t}\|^2\big] = \alpha^2\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t} +\nabla \mathbf{F}^{t+1}-\nabla \mathbf{F}^{t}\|^2\big]\nonumber\\ \leq &~ \alpha^2(1+\alpha_0)\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \alpha^2(1+\alpha_0^{-1})\mathbb{E}\big[\|\nabla \mathbf{F}^{t+1}-\nabla \mathbf{F}^{t}\|^2\big] \nonumber\\ \leq &~ \alpha^2(1+\alpha_0)\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \alpha^2(1+\alpha_0^{-1}) \left(3 n \sigma^2 + 2 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big]\right), \label{eq:2.3.2_0} \end{align} where the first inequality holds by Assumption \ref{assu:compressor}, $\alpha_0$ can be any positive number, and the last inequality holds by \eqref{eq:y_cons12}, which still holds for CDProxSGT. Taking $\alpha_0=1$ in \eqref{eq:2.3.2_0} gives \eqref{eq:2.3.2_1}. Letting $\alpha_0=\frac{1-\alpha^2}{2}$ in \eqref{eq:2.3.2_0}, we obtain $\alpha^2(1+\alpha_0) = (1-(1-\alpha^2))(1+\frac{1-\alpha^2}{2}) \leq \frac{1+\alpha^2}{2}$ and $\alpha^2(1+\alpha_0^{-1}) \leq \frac{2}{1-\alpha^2}$, and thus \eqref{eq:2.3.2} follows. \end{proof} \begin{lemma} \label{lem:Xhat_Xhalf_comp} Let $\eta\leq \lambda \leq\frac{1}{4 L}$.
Then \begin{align} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ 4\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left( 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2, \label{eq:hatx_xprox_comp}\\ \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ 3\alpha^2 \left(\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+ \mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big]\right), \label{eq:X_-X_1}\\ \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~ \frac{16}{1-\alpha^2}\Big( \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]+ \eta^2\mathbb{E}\big[\|\Y^t_\perp\|^2\big]\Big) + \frac{1+\alpha^2}{2} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber\\ &~ +\frac{8}{1-\alpha^2}\left( \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +\eta^2\sigma^2\right). \label{eq:2.2.2} \end{align} Further, if $\gamma_x\leq \frac{2\sqrt{3}-3}{6\alpha}$, then \begin{align} \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \leq &~ 30\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] +4\sqrt{3} \alpha \gamma_x \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +16\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber \\ &~ + 8\mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 8\eta^2\sigma^2. \label{eq:2.2.3} \end{align} \end{lemma} \begin{proof} The proof of \eqref{eq:hatx_xprox_comp} is the same as that of Lemma \ref{lem:Xhat_Xhalf} because \eqref{eq:alg3_4} and \eqref{eq:x_y_mean} are the same as \eqref{eq:x_half_update} and \eqref{eq:x_y_mean}. For $\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}$, we have from \eqref{eq:alg3_5} that \begin{align} &~ \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] = \mathbb{E}\big[\mathbb{E}_Q\big[\| Q_{\mathbf{x}}\big[\mathbf{X}^{t+\frac{1}{2}} - \underline\mathbf{X}^{t}\big] -(\mathbf{X}^{t+\frac{1}{2}}-\underline\mathbf{X}^{t})\|^2\big]\big] \nonumber \\ \leq &~ \alpha^2 \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\underline\mathbf{X}^{t}\|^2\big] = \alpha^2 \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t+\widehat{\mathbf{X}}^t - \mathbf{X}^t+\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber \\ \le & ~ \alpha^2(1+\alpha_1)\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \alpha^2(1+\alpha_1^{-1})\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t + \widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big] \nonumber \\ \leq &~ \alpha^2(1+\alpha_1)\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + 2\alpha^2(1+\alpha_1^{-1})\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+2\alpha^2(1+\alpha_1^{-1})\mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big], \label{eq:X_-X_0} \end{align} where $\alpha_1$ can be any positive number. Taking $\alpha_1 = 2$ in \eqref{eq:X_-X_0} gives \eqref{eq:X_-X_1}. Taking $\alpha_1 = \frac{1-\alpha^2}{2}$ in \eqref{eq:X_-X_0} and plugging \eqref{eq:hatx_xprox_comp} give \eqref{eq:2.2.2}. 
About $\mathbb{E}[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2]$, similar to \eqref{eq:Xplus1-X}, we have from \eqref{eq:compX_hatW} that \begin{align} &~ \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] = \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x - \mathbf{X}^{t} + \gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big] \nonumber \\ \leq&~(1+\alpha_2) \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x-\mathbf{X}^t\|^2\big] + (1+\alpha_2^{-1}) \mathbb{E}\big[\|\gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big]\nonumber\\ \overset{\eqref{eq:Xplus1-X}, \eqref{eq:X_-X_1}}\leq &~ (1+\alpha_2) \left( 3\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2\big] +3\mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + 12 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]\right) \nonumber \\ &~ + (1+\alpha_2^{-1})4\gamma_x^2 \cdot 3\alpha^2 \left( \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2\big] + \mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big]\right) \nonumber \\ \leq &~ 4\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^t \|^2\big] + 4 \mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t \|^2\big] + 14\mathbb{E}\big[\|\mathbf{X}^t _\perp\|^2\big] + 4\sqrt{3} \alpha \gamma_x \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big], \nonumber \end{align} where in the first inequality $\alpha_2$ could be any positive number, in the second inequality we use \eqref{eq:X_-X_1}, and in the last inequality we take $\alpha_2 = 2\gamma_x \alpha$ and thus with $\gamma_x\leq \frac{2\sqrt{3}-3}{6\alpha}$, it holds $ 3(1+\alpha_2) +12\gamma_x^2\alpha^2(1+\alpha_2^{-1}) = 3(1+2\gamma_x\alpha)^2 \leq 4$, $12(1+\alpha_2)\leq 8\sqrt{3}\leq 14$, $(1+\alpha_2^{-1})4\gamma_x^2\cdot3\alpha^2 \leq 4\sqrt{3} \alpha \gamma_x$. Then plugging \eqref{eq:hatx_xprox_comp} into the inequality above, we obtain \eqref{eq:2.2.3}. \end{proof} \begin{lemma}\label{lem:X_consensus_comperror} Let $\eta\leq \lambda \leq\frac{1}{4 L}$ and $\gamma_x\leq \min\{\frac{ (1-\widehat\rho_x^2)^2}{60\alpha}, \frac{1-\alpha^2}{25}\}$. Then the consensus error and compression error of $\mathbf{X}$ can be bounded by \begin{align} \mathbb{E}\big[\|\mathbf{X}^{t+1}_\perp\|^2\big] \leq &~ \frac{3+\widehat\rho_x^2}{4} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + 2\alpha \gamma_x (1-\widehat\rho_x^2) \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{9}{4(1-\widehat\rho_x^2)}\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber\\ &~ + 4\alpha \gamma_x (1-\widehat\rho_x^2)\mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 4 \alpha \gamma_x (1-\widehat\rho_x^2)\eta^2\sigma^2, \label{eq:2.4.1}\\ \mathbb{E}\big[\|\mathbf{X}^{t+1}-\underline\mathbf{X}^{t+1}\|^2\big] \leq &~ \frac{21}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +\frac{21}{1-\alpha^2} \eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big]\nonumber\\ &~ + \frac{11}{1-\alpha^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{11}{1-\alpha^2} \eta^2\sigma^2. \label{eq:2.5.1} \end{align} \end{lemma} \begin{proof} First, let us consider the consensus error of $\mathbf{X}$. 
With the update \eqref{eq:compX_hatW}, we have \begin{align} \mathbb{E}\big[\|\mathbf{X}^{t+1}_\perp\|^2\big] \leq &~ (1+\alpha_3)\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x (\mathbf{I}- \mathbf{J})\|^2\big] +(1+\alpha_3^{-1}) \mathbb{E}\big[\|\gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big], \nonumber\\ \leq &~ (1+\alpha_3)\mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}(\widehat\mathbf{W}_x - \mathbf{J})\|^2\big] + (1+\alpha_3^{-1})4\gamma_x^2\mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big], \label{eq:XComp_consensus0} \end{align} where $\alpha_3$ is any positive number, and $\|\mathbf{W}-\mathbf{I}\|_2\leq 2$ is used. The first term in the right hand side of \eqref{eq:XComp_consensus0} can be processed similarly as the non-compressed version in Lemma \ref{lem:XI_J} by replacing $\mathbf{W}$ by $\widehat\mathbf{W}_x$, namely, \begin{align} \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}} (\widehat\mathbf{W}_x-\mathbf{J})\|^2\big] \leq &~ \textstyle \frac{1+\widehat\rho^2_x}{2} \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big]+ \frac{2\widehat\rho^2_x \eta^2 }{1-\widehat\rho^2_x} \mathbb{E}\big[\| \Y^{t}_\perp \|^2\big]. \label{eq:XComp_consensus1} \end{align} Plugging \eqref{eq:XComp_consensus1} and \eqref{eq:X_-X_1} into \eqref{eq:XComp_consensus0} gives \begin{align*} &~ \mathbb{E}\big[\|\mathbf{X}^{t+1}_\perp\|^2\big] \leq ~ (1+\alpha_3)\left( \textstyle \frac{1+\widehat\rho^2_x}{2} \mathbb{E}\big[\| \mathbf{X}^{t}_\perp \|^2\big]+ \frac{2\widehat\rho^2_x \eta^2 }{1-\widehat\rho^2_x} \mathbb{E}\big[\| \Y^{t}_\perp \|^2\big]\right) \\ &~ + (1+\alpha_3^{-1})12 \alpha^2 \gamma_x^2 \left(\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+ \mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big]\right)\\ \overset{\eqref{eq:hatx_xprox_comp}}{\leq} &~ \left( \textstyle \frac{1+\widehat\rho_x^2}{2}(1+\alpha_3) + 48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) \right) \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] \nonumber\\ &~+ 12\alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +\left( \textstyle \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2}(1+\alpha_3) +48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1})\right)\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] \\ &~ +24 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +24 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1})\eta^2\sigma^2. \end{align*} Let $\alpha_3 = \frac{7\alpha\gamma_x}{1-\widehat\rho_x^2}$ and $\gamma_x\leq \frac{(1-\widehat\rho_x^2)^2}{60\alpha}$. 
Then $\alpha^2 \gamma_x^2 (1+\alpha_3^{-1})=\alpha\gamma_x (\alpha\gamma_x+\frac{1-\widehat\rho_x^2}{7})\leq \alpha\gamma_x (\frac{ (1-\widehat\rho_x^2)^2}{60}+\frac{1-\widehat\rho_x^2}{7})\leq \frac{\alpha\gamma_x (1-\widehat\rho_x^2)}{6}$ and \begin{align*} &~ \textstyle \frac{1+\widehat\rho_x^2}{2}(1+\alpha_3) + 48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) = \frac{1+\widehat\rho_x^2}{2} + 48 \alpha^2 \gamma_x^2 + \frac{7\alpha\gamma_x}{1-\widehat\rho_x^2} + \frac{48\alpha\gamma_x(1-\widehat\rho_x^2)}{7} \\ \leq&~ \textstyle \frac{1+\widehat\rho_x^2}{2} + \frac{48}{60^2}(1-\widehat\rho_x^2)^4 + \frac{7}{60}(1-\widehat\rho_x^2) + \frac{7}{60}(1-\widehat\rho_x^2)^3\leq \frac{1+\widehat\rho_x^2}{2} + \frac{ 1-\widehat\rho_x^2}{4} = \frac{3+\widehat\rho_x^2}{4},\\ &~ \textstyle \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2}(1+\alpha_3) + 48 \alpha^2 \gamma_x^2 (1+\alpha_3^{-1}) = \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2} + 48 \alpha^2 \gamma_x^2 + \frac{2\widehat\rho_x^2}{1-\widehat\rho_x^2} \frac{7 \alpha \gamma_x }{1-\widehat\rho_x^2} + \frac{48\alpha\gamma_x(1-\widehat\rho_x^2)}{7}\\ \leq &~ \textstyle \frac{1}{1-\widehat\rho_x^2} \left( 2\widehat\rho_x^2 + \frac{48}{60^2} (1-\widehat\rho_x^2) + \frac{14\widehat\rho_x^2}{60} + \frac{7}{60}(1-\widehat\rho_x^2) \right) \leq \frac{1}{1-\widehat\rho_x^2} \left( 2\widehat\rho_x^2 + \frac{48}{60^2} + \frac{7}{60} \right) \leq \frac{9}{4(1-\widehat\rho_x^2)}. \end{align*} Thus \eqref{eq:2.4.1} holds. Now let us consider the compression error of $\mathbf{X}$. By \eqref{eq:alg3_6}, we have \begin{align} &~\mathbb{E}\big[\|\mathbf{X}^{t+1}-\underline\mathbf{X}^{t+1}\|^2\big] = \mathbb{E}\big[\|(\underline\mathbf{X}^{t+1} - \mathbf{X}^{t+\frac{1}{2}}) \big(\gamma_x(\mathbf{W}-\mathbf{I}) -\mathbf{I}\big) + \gamma_x \mathbf{X}^{t+\frac{1}{2}} (\mathbf{I}-\mathbf{J}) (\mathbf{W}-\mathbf{I}) \|^2\big] \nonumber\\ \leq&~ (1+\alpha_4) (1+2\gamma_x)^2 \mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] + (1+\alpha_4^{-1})4 \gamma_x^2 \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}_\perp\|^2\big],\label{eq:2.5.1.0} \end{align} where we have used $\mathbf{J}\mathbf{W}=\mathbf{J}$ in the equality, $\|\gamma_x (\mathbf{W}-\mathbf{I}) -\mathbf{I}\|_2\leq \gamma_x\|\mathbf{W}-\mathbf{I}\|_2+\|\mathbf{I}\|_2\leq 1+2\gamma_x$ and $\|\mathbf{W}-\mathbf{I}\|_2\leq 2$ in the inequality, and $\alpha_4$ can be any positive number. For the second term on the right hand side of \eqref{eq:2.5.1.0}, we have \begin{align} \|\mathbf{X}^{t+\frac{1}{2}}_\perp\|^2 \overset{\eqref{eq:alg3_4}}{=}&~ \left\|\left(\prox_{\eta r} \left(\mathbf{X}^t - \eta \Y^{t}\right)-\prox_{\eta r} \left(\bar{\mathbf{x}}^t - \eta \bar{\mathbf{y}}^{t}\right)\mathbf{1}^\top\right)(\mathbf{I}-\mathbf{J})\right\|^2 \nonumber \\ \leq&~ \|\mathbf{X}^t_\perp- \eta \Y^{t}_\perp\|^2 \leq 2\|\mathbf{X}^t_\perp\|^2+2\eta^2\|\Y^{t}_\perp\|^2, \label{eq:2.2.1} \end{align} where we have used $\mathbf{1}^\top(\mathbf{I}-\mathbf{J})=\mathbf{0}^\top$, $\|\mathbf{I}-\mathbf{J}\|_2\leq 1$, and Lemma \ref{lem:prox_diff}.
Now plugging \eqref{eq:2.2.2} and \eqref{eq:2.2.1} into \eqref{eq:2.5.1.0} gives \begin{align*} \mathbb{E}\big[\|\mathbf{X}^{t+1}-\underline\mathbf{X}^{t+1}\|^2\big] \leq \left( \textstyle (1+\alpha_4^{-1})8\gamma_x^2+(1+\alpha_4) (1+2\gamma_x)^2\frac{16}{1-\alpha^2}\right) \left( \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big]\right) \nonumber\\ \textstyle + (1+\alpha_4) (1+2\gamma_x)^2\frac{1+\alpha^2}{2}\mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] +(1+\alpha_4)(1+2\gamma_x)^2\frac{8}{1-\alpha^2} \left( \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \eta^2\sigma^2\right). \end{align*} With $\alpha_4=\frac{1-\alpha^2}{12}$ and $\gamma_x\leq \frac{1-\alpha^2}{25}$, \eqref{eq:2.5.1} holds because $(1+2\gamma_x)^2 \leq 1 + \frac{104}{25}\gamma_x \leq \frac{7}{6}$, $ (1+2\gamma_x)^2\frac{1+\alpha^2}{2}\leq \frac{1+\alpha^2}{2}+\frac{104}{25}\gamma_x\leq \frac{2+\alpha^2}{3}$, and \begin{align} (1+\alpha_4) (1+2\gamma_x)^2\frac{1+\alpha^2}{2} \leq &~ \frac{2+\alpha^2}{3} + \alpha_4 = \frac{3+\alpha^2}{4}, \label{eq:gamma_x_1}\\ (1+\alpha_4^{-1}) 8\gamma_x^2+ (1+\alpha_4) (1+2\gamma_x)^2\frac{16}{1-\alpha^2} \leq&~ \frac{13}{1-\alpha^2}\frac{8}{625} + \frac{13}{12} \frac{7}{6} \frac{16}{1-\alpha^2} \leq \frac{21}{1-\alpha^2}, \label{eq:gamma_x_2}\\ (1+\alpha_4)(1+2\gamma_x)^2\frac{8}{1-\alpha^2} \leq&~ \frac{13}{12} \frac{7}{6}\frac{8}{1-\alpha^2} \leq \frac{11}{1-\alpha^2}. \nonumber \end{align} \end{proof} \begin{lemma} \label{lem:Y_consensus_comperror} Let $\eta\leq \min\{\lambda, \frac{1-\widehat\rho^2_y}{8\sqrt{5} L} \} $, $\lambda \leq\frac{1}{4 L}$, $ \gamma_x\leq \frac{2\sqrt{3}-3}{6\alpha}$, $\gamma_y\leq \min\{\frac{\sqrt{1-\widehat\rho^2_y}}{12\alpha}, \frac{1-\alpha^2}{25}\}$. Then the consensus error and compression error of $\Y$ can be bounded by \begin{align} \mathbb{E}\big[\|\Y^{t+1}_\perp\|^2\big] \leq &~ \frac{150 L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{20\sqrt{3} \alpha\gamma_x L^2}{1-\widehat\rho^2_y } \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big]+\frac{3+\widehat\rho^2_y }{4}\mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber\\ &~ +\frac{48\alpha^2\gamma_y^2}{1-\widehat\rho^2_y } \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \frac{40 L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + 12n \sigma^2, \label{eq:2.4.2} \\ \mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq &~ \frac{180 L^2}{1-\alpha^2}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{24\sqrt{3}\alpha\gamma_x L^2}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] \nonumber\\ &~ +\frac{104\gamma_y^2+ 96\eta^2 L^2}{1-\alpha^2}\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big] + \frac{48 L^2}{1-\alpha^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{10 n}{1-\alpha^2} \sigma^2 .\label{eq:2.5.2} \end{align} \end{lemma} \begin{proof} First, let us consider the consensus of $\Y$. Similar to \eqref{eq:XComp_consensus0}, we have from the update \eqref{eq:Y_hatW} that \begin{align} \mathbb{E}\big[\|\Y^{t+1}_\perp\|^2\big] \leq (1+\alpha_5)\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}(\widehat\mathbf{W}_y-\mathbf{J})\|^2\big] + (1+\alpha_5^{-1})4\gamma_y^2 \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big], \label{eq:Ycomp_conses0} \end{align} where $\alpha_5$ can be any positive number. 
Similarly to \eqref{eq:y_cons1}-\eqref{eq:y_cons2} in the proof of Lemma \ref{lem:YI_J}, we have the bound for the first term on the right hand side of \eqref{eq:Ycomp_conses0} by replacing $\mathbf{W}$ with $\widehat\mathbf{W}_y$, namely, \begin{align} \mathbb{E}\big[\|\Y^{t+\frac{1}{2}}(\widehat\mathbf{W}_y-\mathbf{J})\|^2\big] \leq \textstyle \frac{1+\widehat\rho^2_y}{2} \mathbb{E}\big[\|\Y^{t}_\perp \|^2\big] + \frac{2 \widehat\rho^2_y L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] + 5 \widehat\rho^2_y n \sigma^2.\label{eq:comp_y_cons220} \end{align} Plug \eqref{eq:comp_y_cons220} and \eqref{eq:2.3.2_1} back into \eqref{eq:Ycomp_conses0}, and take $\alpha_5 = \frac{1-\widehat\rho^2_y}{3(1+\widehat\rho^2_y)}$. We have \begin{align*} &~ \mathbb{E}\big[\|\Y^{t+1}_\perp\|^2\big] \leq \textstyle \frac{2(2+\widehat\rho^2_y)}{3(1+\widehat\rho^2_y)}\frac{1+\widehat\rho^2_y}{2} \mathbb{E}\big[\|\Y^{t}_\perp \|^2\big] + \frac{24\gamma_y^2}{1-\widehat\rho^2_y} 2\alpha^2 \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] \nonumber\\ &~\quad \textstyle + \frac{24\gamma_y^2}{1-\widehat\rho^2_y} 6\alpha^2 n\sigma^2 + 2\cdot5 \widehat\rho^2_y n \sigma^2 + \left( \textstyle \frac{24\gamma_y^2}{1-\widehat\rho^2_y} 4\alpha^2 L^2 + 2\cdot\frac{2 \widehat\rho^2_y L^2 }{1-\widehat\rho^2_y } \right)\mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \nonumber\\ \leq &~ \textstyle \frac{2+\widehat\rho^2_y}{3} \mathbb{E}\big[\|\Y^{t}_\perp \|^2\big] + \frac{48\alpha^2\gamma_y^2}{1-\widehat\rho^2_y} \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + 11 n \sigma^2 + \frac{5 L^2}{1-\widehat\rho^2_y} \mathbb{E}\big[\| \mathbf{X}^{t+1} - \mathbf{X}^{t}\|^2\big] \\ \leq &~ \textstyle \frac{150 L^2 }{1-\widehat\rho^2_y } \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{20\sqrt{3} L^2}{1-\widehat\rho^2_y } \alpha \gamma_x \mathbb{E}[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2] + \frac{40 L^2 }{1-\widehat\rho^2_y } \eta^2\sigma^2 + 11 n \sigma^2 \nonumber\\ &~ \textstyle +\left( \textstyle \frac{2+\widehat\rho^2_y}{3}+ \frac{80 L^2 }{1-\widehat\rho^2_y } \eta^2\right) \mathbb{E}\big[\|\Y^t_\perp\|^2\big] +\frac{48\alpha^2\gamma_y^2}{1-\widehat\rho^2_y} \mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] + \frac{40 L^2 }{1-\widehat\rho^2_y}\mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big], \end{align*} where the first inequality holds by $1+\alpha_5 = \frac{2(2+\widehat\rho^2_y)}{3(1+\widehat\rho^2_y)} \leq 2$ and $1+\alpha_5^{-1} = \frac{2(2+\widehat\rho^2_y)}{1-\widehat\rho^2_y}\leq \frac{6}{1-\widehat\rho^2_y}$, the second inequality holds by $\gamma_y\leq \frac{\sqrt{1-\widehat\rho^2_y}}{12\alpha}$ and $\alpha^2\leq 1$, and the third inequality holds by \eqref{eq:2.2.3}. By $\frac{80 L^2 }{1-\widehat\rho^2_y} \eta^2 \leq \frac{1-\widehat\rho^2_y}{4}$ and $ \frac{40 L^2 }{1-\widehat\rho^2_y} \eta^2\leq \frac{1-\widehat\rho^2_y}{8}\leq 1$ from $\eta\leq \frac{1-\widehat\rho^2_y}{8\sqrt{5} L} $, we can now obtain \eqref{eq:2.4.2}. Next, let us consider the compression error of $\Y$. Similar to \eqref{eq:2.5.1.0}, we have by \eqref{eq:alg3_3} that \begin{align} &~\mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq (1+\alpha_6)(1+2\gamma_y)^2 \mathbb{E}\big[\|\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\|^2\big] + (1+\alpha_6^{-1})4 \gamma_y^2 \mathbb{E}\big[\|\Y^{t+\frac{1}{2}}_\perp\|^2\big], \label{eq:Y_compress_0} \end{align} where $\alpha_6$ is any positive number.
For $\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}_\perp\|^2\big]$, we have from \eqref{eq:alg3_1} that \begin{align} &~\mathbb{E}\big[\|\Y^{t+\frac{1}{2}}_\perp\|^2\big] =\mathbb{E}\big[\|( \Y^{t} + \nabla \mathbf{F}^{t+1} - \nabla \mathbf{F}^{t})(\mathbf{I}-\mathbf{J})\|^2\big]\nonumber \\ \leq &~ 2\mathbb{E}\big[\|\Y^{t}_\perp\|^2\big] +2\mathbb{E}\big[\|\nabla \mathbf{F}^{t+1}-\nabla \mathbf{F}^{t}\|^2\big] \leq 2\mathbb{E}\big[\|\Y^{t}_\perp\|^2\big] +6 n \sigma^2 + 4 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big], \label{eq:2.3.1} \end{align} where we have used \eqref{eq:y_cons12}. Plug \eqref{eq:2.3.2} and \eqref{eq:2.3.1} back to \eqref{eq:Y_compress_0} to have \begin{align*} &~\mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq \textstyle (1+\alpha_6) (1+2\gamma_y)^2 \frac{1+\alpha^2}{2}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] +(1+\alpha_6^{-1})8\gamma_y^2\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big] \\ &~+ \left( \textstyle (1+\alpha_6^{-1})4\gamma_y^2 +(1+\alpha_6)(1+2\gamma_y)^2 \frac{1}{1-\alpha^2} \right)4 L^2 \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big] \\ &~ + \left( \textstyle (1+\alpha_6^{-1})4\gamma_y^2 +(1+\alpha_6)(1+2\gamma_y)^2 \frac{1}{1-\alpha^2} \right) 6 n \sigma^2. \end{align*} With $\alpha_6=\frac{1-\alpha^2}{12}$ and $\gamma_y< \frac{1-\alpha^2}{25}$, like \eqref{eq:gamma_x_1} and \eqref{eq:gamma_x_2}, we have $(1+\alpha_6) (1+2\gamma_y)^2 \frac{1+\alpha^2}{2}\leq \frac{3+\alpha^2}{4}$, $8(1+\alpha_6^{-1})\leq\frac{8\cdot13}{1-\alpha^2} = \frac{104}{1-\alpha^2} $ and $ (1+\alpha_6^{-1})4\gamma_y^2 +(1+\alpha_6)(1+2\gamma_y)^2 \frac{1}{1-\alpha^2} \leq \frac{13}{1-\alpha^2}\frac{4}{625}+\frac{13}{12}\frac{7}{6}\frac{1}{1-\alpha^2}\leq \frac{3}{2(1-\alpha^2)}$. Thus \begin{align*} \mathbb{E}\big[\|\Y^{t+1}-\underline\Y^{t+1}\|^2\big] \leq &~ \textstyle \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] +\frac{104\gamma_y^2}{1-\alpha^2}\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big]+\frac{6 L^2}{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|^2\big] + \frac{9n \sigma^2}{1-\alpha^2} \nonumber\\ \leq &~ \textstyle\frac{180 L^2}{1-\alpha^2}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{24\sqrt{3} \alpha\gamma_x L^2 }{1-\alpha^2} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{3+\alpha^2}{4}\mathbb{E}\big[\|\Y^{t} -\underline\Y^{t}\|^2\big] \\ &~ \textstyle +\frac{104\gamma_y^2+ 96\eta^2 L^2}{1-\alpha^2}\mathbb{E}\big[\|\Y^{t}( \mathbf{I}-\mathbf{J})\|^2\big] + \frac{48 L^2}{1-\alpha^2} \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] + \frac{48 L^2\eta^2+9n}{1-\alpha^2} \sigma^2, \end{align*} where the second inequality holds by \eqref{eq:2.2.3}. By $48 L^2\eta^2\leq n$, we have \eqref{eq:2.5.2} and complete the proof. \end{proof} \begin{lemma}\label{lem:phi_one_step} Let $\eta\leq \lambda \leq\frac{1}{4 L}$ and $ \gamma_x\leq \frac{1}{6\alpha}$. It holds \begin{align} \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1})\big] \leq&~ \sum_{i=1}^n\mathbb{E}\big[ \phi_\lambda( {\mathbf{x}}_i^{t})\big] + \frac{12}{\lambda}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{7\alpha\gamma_x}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{12}{\lambda} \eta^2\mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber \\ &~+\frac{1}{\lambda}\left( -\frac{\eta}{4\lambda} + 23\alpha\gamma_x \right) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big] + \frac{5}{\lambda} \eta^2 \sigma^2. 
\label{eq:2.7} \end{align} \end{lemma} \begin{proof} Similar to \eqref{eq:phi_update_0}, we have \begin{align} &~ \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1})\big] \overset{\eqref{eq:x_t_hat}}{=} \mathbb{E}\big[\phi(\widehat{\mathbf{x}}_i^{t+1})\big]+\frac{1}{2\lambda} \mathbb{E}\big[\|\widehat{\mathbf{x}}_i^{t+1}-{\mathbf{x}}_i^{t+1}\|^2\big] \nonumber \\ \overset{ \eqref{eq:compX_hatW}}{\leq} &~ \mathbb{E}\bigg[\phi\bigg(\sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}\bigg)\bigg] +\frac{1}{2\lambda} \mathbb{E}\bigg[\bigg\| \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \big(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}- {\mathbf{x}}_j^{t+\frac{1}{2}}\big) - \gamma_x\sum_{j=1}^n \big(\mathbf{W}_{ji}-\mathbf{I}_{ji}\big)\big(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big) \bigg\|^2\bigg] \nonumber\\ \leq&~ \mathbb{E}\bigg[\phi\bigg(\sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}\bigg)\bigg] + \frac{1+\alpha_7}{2\lambda} \mathbb{E}\bigg[\bigg\| \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \big(\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big)\bigg\|^2\bigg]\nonumber\\ &~ + \frac{1+\alpha_7^{-1}}{2\lambda} \mathbb{E}\bigg[\bigg\| \gamma_x \sum_{j=1}^n\big(\mathbf{W}_{ji}-\mathbf{I}_{ji}\big)\big(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}}\big)\bigg\|^2\bigg] \nonumber \\ \overset{\mbox{Lemma \ref{lem:weak_convx}}}\leq &~ \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \mathbb{E}\big[\phi( \widehat{\mathbf{x}}_j^{t+\frac{1}{2}})\big] + \frac{ L}{2} \sum_{j=1}^{n-1}\sum_{l=j+1}^n \big(\widehat\mathbf{W}_x\big)_{ji} (\widehat\mathbf{W}_x)_{li}\mathbb{E}\big[\|\widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-\widehat{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2\big] \nonumber \\ &~ + \frac{1+\alpha_7}{2\lambda} \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\mathbb{E}\big[\| \widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2\big] + \frac{1+\alpha_7^{-1}}{2\lambda}\gamma_x^2 \mathbb{E}\big[\| \sum_{j=1}^n(\mathbf{W}_{ji}-\mathbf{I}_{ji})(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2\big] \nonumber \\ \leq &~ \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}})\big] + \frac{1}{4\lambda} \sum_{j=1}^{n-1}\sum_{l=j+1}^n \big(\widehat\mathbf{W}_x\big)_{ji} (\widehat\mathbf{W}_x)_{li} \mathbb{E}\big[\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2\big] \nonumber \\ &~+ \frac{\alpha_7}{2\lambda} \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\mathbb{E}\big[\| \widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2\big] + \frac{1+\alpha_7^{-1}}{2\lambda}\gamma_x^2 \mathbb{E}\big[\| \sum_{j=1}^n(\mathbf{W}_{ji}-\mathbf{I}_{ji})(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2\big]. 
\label{eq:phi_lambda1} \end{align} In the same way as for \eqref{eq:phi_lambda} and \eqref{eq:2_3}, for the first two terms on the right hand side of \eqref{eq:phi_lambda1}, we have \begin{align} \sum_{i=1}^n \sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji} \phi_\lambda({\mathbf{x}}_j^{t+\frac{1}{2}}) \leq \sum_{i=1}^n \phi_\lambda( {\mathbf{x}}_i^{t}) +\frac{1}{2\lambda} \|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2 - \frac{1}{2\lambda} \|\widehat\mathbf{X}^t - \mathbf{X}^t\|^2,\label{eq:2_2_press}\\ \sum_{i=1}^n\sum_{j=1}^{n-1}\sum_{l=j+1}^n \big(\widehat\mathbf{W}_x\big)_{ji}(\widehat\mathbf{W}_x)_{li}\|{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_l^{t+\frac{1}{2}}\|^2 \leq 8 \|\mathbf{X}^{t}_\perp\|^2+ 8\eta^2 \|\Y^{t}_\perp\|^2. \label{eq:2_3_press} \end{align} For the last two terms on the right hand side of \eqref{eq:phi_lambda1}, we have \begin{align} &~ \sum_{i=1}^n\sum_{j=1}^n \big(\widehat\mathbf{W}_x\big)_{ji}\mathbb{E}\big[\| \widehat{\mathbf{x}}_j^{t+\frac{1}{2}}-{\mathbf{x}}_j^{t+\frac{1}{2}}\|^2\big] = \| \widehat\mathbf{X}^{t+\frac{1}{2}}-\mathbf{X}^{t+\frac{1}{2}} \|^2 \leq 2 \| \widehat\mathbf{X}^{t+\frac{1}{2}}-\widehat\mathbf{X}^{t} \|^2 +2 \| \widehat\mathbf{X}^{t} - \mathbf{X}^{t+\frac{1}{2}} \|^2 \nonumber \\ \leq &~ \textstyle \frac{2}{(1-\lambda L)^2} \| \mathbf{X}^{t+\frac{1}{2}} - \mathbf{X}^{t} \|^2 +2 \| \widehat\mathbf{X}^{t} - \mathbf{X}^{t+\frac{1}{2}} \|^2 \leq 10 \| \mathbf{X}^{t+\frac{1}{2}}- \widehat\mathbf{X}^{t} \|^2+ 8 \| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2, \label{eq:X_-X2}\\ &~ \sum_{i=1}^n \mathbb{E}\big[\| \sum_{j=1}^n(\mathbf{W}_{ji}-\mathbf{I}_{ji})(\underline{\mathbf{x}}_j^{t+1}-{\mathbf{x}}_j^{t+\frac{1}{2}})\|^2\big] = \mathbb{E}\big[\|(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I})\|^2\big]\leq 4\mathbb{E}\big[\|\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big]\nonumber \\ \leq &~ 12\alpha^2 \left(\mathbb{E}\big[ \|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \mathbb{E}\big[\|\mathbf{X}^{t+\frac{1}{2}}-\widehat{\mathbf{X}}^t\|^2\big]+ \mathbb{E}\big[\|\widehat{\mathbf{X}}^t - \mathbf{X}^t\|^2\big]\right), \label{eq:X_-X1} \end{align} where \eqref{eq:X_-X2} holds by Lemma \ref{lem:prox_diff} and $\frac{1}{(1-\lambda L)^2}\leq 2$, and \eqref{eq:X_-X1} holds by \eqref{eq:X_-X_1}. Sum up \eqref{eq:phi_lambda1} from $i=1$ to $n$ and take $\alpha_7 =\alpha\gamma_x$.
Then with \eqref{eq:2_2_press}, \eqref{eq:2_3_press}, \eqref{eq:X_-X2} and \eqref{eq:X_-X1}, we have \begin{align*} \sum_{i=1}^n \mathbb{E}\big[\phi_\lambda({\mathbf{x}}_i^{t+1})\big] \leq & ~\sum_{i=1}^n \mathbb{E}\big[\phi_\lambda( {\mathbf{x}}_i^{t}) \big] + \frac{2}{\lambda}\left( \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big] + \eta^2 \mathbb{E}\big[\|\Y^{t}_\perp\|^2\big]\right) +\textstyle \frac{6\alpha\gamma_x+6\alpha^2\gamma_x^2}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber \\ &~ + \frac{1}{\lambda}\left( \textstyle \frac{1}{2}+11\alpha\gamma_x +6\alpha^2\gamma_x^2\right) \mathbb{E}\big[\| \mathbf{X}^{t+\frac{1}{2}}- \widehat\mathbf{X}^{t} \|^2\big]+ \frac{1}{\lambda}\left( \textstyle -\frac{1}{2}+10\alpha\gamma_x +6\alpha^2\gamma_x^2\right) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big]\\ \leq & ~ \sum_{i=1}^n\mathbb{E}\big[ \phi_\lambda( {\mathbf{x}}_i^{t})\big]+ \frac{2}{\lambda}\left( \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big] + \eta^2 \mathbb{E}\big[\|\Y^{t}_\perp\|^2\big]\right) + \frac{7\alpha\gamma_x}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] \nonumber \\ &~\quad +\frac{1}{\lambda}\left( \textstyle \frac{1}{2}+12\alpha\gamma_x\right)\mathbb{E}\big[ \|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] +\frac{1}{\lambda} \left( \textstyle -\frac{1}{2}+11\alpha\gamma_x\right) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big]. \nonumber \\ \leq &~ \sum_{i=1}^n\mathbb{E}\big[ \phi_\lambda( {\mathbf{x}}_i^{t})\big] + \frac{12}{\lambda}\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \frac{7\alpha\gamma_x}{\lambda} \mathbb{E}\big[\|\mathbf{X}^t-\underline\mathbf{X}^{t}\|^2\big] + \frac{12}{\lambda} \eta^2\mathbb{E}\big[\|\Y^t_\perp\|^2\big] \nonumber \\ &~+ \frac{1}{\lambda}\Big( {\textstyle\left(\frac{1}{2}+12\alpha\gamma_x \right) \left( 1-\frac{\eta}{2\lambda} \right) + \left( -\frac{1}{2}+11\alpha\gamma_x\right) }\Big) \mathbb{E}\big[\| \widehat\mathbf{X}^{t}- \mathbf{X}^{t} \|^2\big] + \frac{5}{\lambda} \eta^2 \sigma^2, \end{align*} where the second inequality holds by $6\alpha\gamma_x\leq 1$, and the third inequality holds by \eqref{eq:hatx_xprox_comp} with $\frac{1}{2}+12\alpha\gamma_x\leq \frac{5}{2}$. Noticing $$\left(\frac{1}{2}+12\alpha\gamma_x \right) \left( 1-\frac{\eta}{2\lambda} \right) + \left( -\frac{1}{2}+11\alpha\gamma_x\right) = 23\alpha\gamma_x - \frac{\eta}{4\lambda} - \frac{6\alpha\gamma_x\eta}{\lambda}\leq 23\alpha\gamma_x - \frac{\eta}{4\lambda},$$ we obtain \eqref{eq:2.7} and complete the proof. \end{proof} With Lemmas \ref{lem:X_consensus_comperror}, \ref{lem:Y_consensus_comperror} and \ref{lem:phi_one_step}, we are ready to prove the Theorem \ref{thm:sect3thm}. We will use the Lyapunov function: \begin{align*} \mathbf{V}^t = z_1 \mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big] + z_2 \mathbb{E}\big[\|\mathbf{X}^{t}-\underline\mathbf{X}^{t}\|^2\big] +z_3\mathbb{E}\big[\|\Y^{t}_\perp\|^2\big]+z_4 \mathbb{E}\big[\|\Y^{t}-\underline\Y^{t}\|^2\big] + z_5 \sum_{i=1}^n \mathbb{E}[\phi_\lambda( {\mathbf{x}}_i^{t})], \end{align*} where $z_1, z_2, z_3, z_4, z_5 \geq 0$ are determined later. 
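Before giving the proof, we note that the componentwise inequalities ${\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2}, 0, 0, 0, 0)$ and ${\mathbf{z}}^\top{\mathbf{b}} \leq -\frac{\eta}{8\lambda}$ used below can be checked numerically for sample parameter values satisfying the step-size conditions of Theorem \ref{thm:sect3thm}, with $\mathbf{A}$, ${\mathbf{b}}$, and ${\mathbf{z}}$ as specified in the proof; the following sketch is illustrative only (the particular values of $\alpha$, $\widehat\rho_x$, $\widehat\rho_y$, $L$, $\eta$, $\lambda$, $\gamma_x$, $\gamma_y$ are assumptions made for the check) and is not part of the argument.
\begin{verbatim}
# Illustrative numerical sanity check (not part of the proof) of
#   z^T A - z^T <= (-1/2, 0, 0, 0, 0)   and   z^T b <= -eta/(8*lambda)
# with A, b, z as in the proof of Theorem thm:sect3thm, for sample
# parameters satisfying the step-size conditions of the theorem.
import numpy as np

alpha, rho_x, rho_y, L = 0.5, 0.5, 0.5, 1.0
ca, cx, cy = 1 - alpha**2, 1 - rho_x**2, 1 - rho_y**2
eta = 0.5 * ca**2 * cx**2 * cy**2 / (18830 * max(1.0, L))
lam = 0.5 * ca**2 / (9 * L + 41280)
gx = eta / alpha                      # gamma_x <= eta/alpha
gy = 0.5 * ca * cx * cy / 317         # gamma_y within its bound

A = np.array([
 [(3 + rho_x**2) / 4, 2 * alpha * gx * cx, 9 * eta**2 / (4 * cx), 0, 0],
 [21 / ca, (3 + alpha**2) / 4, 21 * eta**2 / ca, 0, 0],
 [150 * L**2 / cy, 20 * np.sqrt(3) * L**2 * alpha * gx / cy,
  (3 + rho_y**2) / 4, 48 * alpha**2 * gy**2 / cy, 0],
 [180 * L**2 / ca, 24 * np.sqrt(3) * L**2 * alpha * gx / ca,
  (104 * gy**2 + 96 * L**2 * eta**2) / ca, (3 + alpha**2) / 4, 0],
 [12 / lam, 7 * alpha * gx / lam, 12 * eta**2 / lam, 0, 1],
])
b = np.array([4 * alpha * gx * cx, 11 / ca, 40 * L**2 / cy, 48 * L**2 / ca,
              (-eta / (4 * lam) + 23 * alpha * gx) / lam])
z = np.array([52 / cx, 448 * eta / ca, 521 * eta**2 / (cx**2 * cy),
              ca * eta**2, lam])

lhs = z @ A - z
assert np.all(lhs <= np.array([-0.5, 0, 0, 0, 0]) + 1e-12)
assert z @ b <= -eta / (8 * lam)
\end{verbatim}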
\subsection*{Proof of Theorem \ref{thm:sect3thm}} \begin{proof} Denote \begin{align*} &~\Omega_0^t = \mathbb{E}[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2], \quad \Phi^t = \sum_{i=1}^n \mathbb{E}[\phi_\lambda( {\mathbf{x}}_i^{t})], \\ &~ \Omega^t = \left(\mathbb{E}\big[\|\mathbf{X}^{t}_\perp\|^2\big], \mathbb{E}\big[\|\mathbf{X}^{t}-\underline\mathbf{X}^{t}\|^2\big], \mathbb{E}\big[\|\Y^{t}_\perp\|^2\big], \mathbb{E}\big[\|\Y^{t}-\underline\Y^{t}\|^2\big], \Phi^t\right)^\top. \end{align*} Then Lemmas \ref{lem:X_consensus_comperror}, \ref{lem:Y_consensus_comperror} and \ref{lem:phi_one_step} imply $\Omega^{t+1} \leq \mathbf{A}\Omega^t + {\mathbf{b}} \Omega_0^t + {\mathbf{c}} \sigma^2$ with \begin{align*} &\mathbf{A} = \begin{pmatrix} \frac{3+\widehat\rho^2_x}{4} &~ 2\alpha\gamma_x(1-\widehat\rho_x^2) &~ \frac{9}{4(1-\widehat\rho^2_x)} \eta^2 &~ 0 &~ 0\\ \frac{21}{1-\alpha^2} &~ \frac{3+\alpha^2}{4} &~\frac{21}{1-\alpha^2} \eta^2 &~ 0 &~ 0 \\ \frac{150 L^2}{1-\widehat\rho^2_y} &~ \frac{20\sqrt{3} L^2}{1-\widehat\rho^2_y }\alpha\gamma_x &~ \frac{3+\widehat\rho^2_y}{4} &~ \frac{48}{1-\widehat\rho^2_y }\alpha^2\gamma_y^2 &~ 0\\ \frac{180 L^2}{1-\alpha^2} &~ \frac{24\sqrt{3} L^2}{1-\alpha^2} \alpha\gamma_x &~ \frac{104\gamma_y^2+96 L^2 \eta^2}{1-\alpha^2} &~ \frac{3+\alpha^2}{4} &~ 0\\ \frac{12}{\lambda} &~ \frac{7\alpha\gamma_x}{\lambda} &~ \frac{12}{\lambda}\eta^2 &~ 0 &~ 1\\ \end{pmatrix}, \\[0.2cm] &{\mathbf{b}} = \begin{pmatrix} 4\alpha\gamma_x(1-\widehat\rho_x^2) \\ \frac{11}{1-\alpha^2} \\ \frac{40 L^2 }{1-\widehat\rho^2_y}\\ \frac{48 L^2}{1-\alpha^2} \\ \frac{1}{\lambda}\left( \textstyle -\frac{\eta}{4\lambda} + 23\alpha\gamma_x \right) \end{pmatrix}, \quad {\mathbf{c}} = \begin{pmatrix} 4\alpha\gamma_x \eta^2 (1-\widehat\rho_x^2) \\ \frac{11 \eta^2 }{1-\alpha^2} \\ 12n \\ \frac{10n}{1-\alpha^2}\\ \frac{5}{\lambda} \eta^2 \end{pmatrix}. \end{align*} Then for any ${\mathbf{z}} = (z_1, z_2 , z_3, z_4, z_5 )^\top\geq \mathbf{0}^\top$, it holds \begin{align*} {\mathbf{z}}^\top \Omega^{t+1} \leq {\mathbf{z}}^\top \Omega^t + ({\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top) \Omega^t + {\mathbf{z}}^\top{\mathbf{b}} \Omega_0^t + {\mathbf{z}}^\top{\mathbf{c}} \sigma^2. \end{align*} Let $\gamma_x\leq \frac{\eta}{\alpha}$ and $\gamma_y\leq \frac{(1-\alpha^2) (1-\widehat\rho^2_x)(1-\widehat\rho^2_y)}{317}$. Take $$z_1=\frac{52}{1-\widehat\rho^2_x}, z_2 = \frac{448}{1-\alpha^2} \eta , z_3 = \frac{521}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \eta^2, z_4=(1-\alpha^2) \eta^2, z_5=\lambda.$$ We have \begin{align*} {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq &~ \begin{pmatrix} \frac{21\cdot448}{ (1-\alpha^2)^2} \eta + \frac{150\cdot521 L^2\eta^2}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2} + 180 L^2\eta^2 - 1 \\[0.2cm] \frac{521\cdot20\sqrt{3} L^2\eta^3}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2} + 24\sqrt{3} L^2\eta^3 -\eta \\[0.2cm] \frac{448\cdot21\eta^3}{ (1-\alpha^2)^2} + 96 L^2 \eta^4 - \frac{\eta^2}{(1-\widehat\rho^2_x)^2}\\[0.1cm] 0 \\[0.1cm] 0 \end{pmatrix}^\top, \\ {\mathbf{z}}^\top{\mathbf{b}} \leq &~ \textstyle -\frac{\eta}{4\lambda} + 23\eta + 48 L^2 \eta^2 + \frac{521\cdot 40 \eta^2 L^2}{(1-\widehat\rho^2_x)^2 (1-\widehat\rho^2_y)^2} + \frac{448\cdot11\eta}{ (1-\alpha^2)^2} + 52\cdot4 \eta,\\ {\mathbf{z}}^\top{\mathbf{c}} \leq &~ \left( \textstyle 52\cdot4\eta + \frac{448\cdot 11\eta}{ (1-\alpha^2)^2} + \frac{521\cdot12n}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}+ 10n + 5 \right)\eta^2. 
\end{align*} By $\eta\leq \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}$ and $\lambda\leq \frac{ (1-\alpha^2)^2}{9 L+41280}$, we have ${\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2}, 0, 0, 0, 0)^\top$, \begin{align*} {\mathbf{z}}^\top{\mathbf{c}} \leq \textstyle \frac{(521\cdot12+10)n+6}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}\eta^2 = \textstyle\frac{6262n+6}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}\eta^2 \end{align*} and \begin{align*} {\mathbf{z}}^\top{\mathbf{b}} ~ \leq &~ \textstyle \eta\Big( -\frac{1}{4\lambda} + 23 + 48 L^2 \eta + \frac{521\cdot 40 \eta L^2}{(1-\widehat\rho^2_x)^2 (1-\widehat\rho^2_y)^2} + \frac{448\cdot11 }{ (1-\alpha^2)^2} + 52\cdot4 \Big) \nonumber \\ \leq &~ \textstyle -\frac{\eta}{8\lambda} + \eta\Big( -\frac{1}{8\lambda} + \frac{ 9 L}{8 } + \frac{5160}{ (1-\alpha^2)^2}\Big) \leq -\frac{\eta}{8\lambda}. \end{align*} Hence we have \begin{align} {\mathbf{z}}^\top \Omega^{t+1} \leq \textstyle {\mathbf{z}}^\top \Omega^{t} -\frac{\eta}{8\lambda} \Omega_0^t -\frac{1}{2}\mathbb{E}[\|\mathbf{X}^t_\perp\|^2] + \frac{6262n+6}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}\eta^2\sigma^2.\label{eq:l_fun_comp} \end{align} Thus summing up \eqref{eq:l_fun_comp} for $t=0,1,\ldots,T-1$ gives \begin{align} \frac{1}{\lambda T}\sum_{t=0}^{T-1} \Omega_0^t +\frac{4}{\eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \leq \textstyle \frac{8\left({\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T}\right)}{\eta T} + \frac{8(6262n+6)}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \eta\sigma^2. \label{eq:thm3_avg-Omega} \end{align} From ${\mathbf{y}}_i^{-1}=\mathbf{0}$, $\underline{\mathbf{y}}_i^{-1}=\mathbf{0}$, $\nabla F_i({\mathbf{x}}_i^{-1}$, $\xi_i^{-1})=\mathbf{0}$, $\underline{\mathbf{x}}_i^{0} =\mathbf{0}$, ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$, we have \begin{gather} \|\Y^0_\perp\|^2 = \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\leq\|\nabla \mathbf{F}^0\|^2, \quad \|\Y^{0}-\underline\Y^{0}\|^2 = \|\nabla \mathbf{F}^0-Q_{\mathbf{y}}\big[\nabla \mathbf{F}^0\big]\|^2 \leq \alpha^2 \|\nabla \mathbf{F}^0\|^2, \label{eq:initial_thm3_1}\\ \|\mathbf{X}^0_\perp\|^2=0, \quad \|\mathbf{X}^0-\underline\mathbf{X}^{0}\|^2=0, \quad \Phi^0=n \phi_\lambda({\mathbf{x}}^0). \label{eq:initial_thm3_2} \end{gather} Note \eqref{eq:end_thm2} still holds here. With \eqref{eq:initial_thm3_1}, \eqref{eq:initial_thm3_2}, \eqref{eq:end_thm2}, and the nonnegativity of $ \mathbb{E}[\|\mathbf{X}^T_\perp\|^2]$, $\mathbb{E}[\|\mathbf{X}^{T}-\underline\mathbf{X}^{T}\|^2]$, $\mathbb{E}[\|\Y^T_\perp\|^2]$, $\mathbb{E}[\|\Y^{T}-\underline\Y^{T}\|^2]$, we have \begin{align} {\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T} \le \textstyle \frac{521}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \eta^2 \mathbb{E}[\|\nabla \mathbf{F}^0\|^2] + \eta^2 \mathbb{E}[\|\nabla \mathbf{F}^0\|^2] + \lambda n \phi_\lambda({\mathbf{x}}^0) -\lambda n \phi_\lambda^*. \label{eq:them3_Omega0_OmegaT} \end{align} where we have used $\alpha^2\leq 1$ from Assumption \ref{assu:compressor}. 
By the convexity of the Frobenius norm and \eqref{eq:them3_Omega0_OmegaT}, we obtain from \eqref{eq:thm3_avg-Omega} that \begin{align} &~ \frac{1}{n\lambda^2} \mathbb{E}\big[\|\widehat\mathbf{X}^{\tau}-\mathbf{X}^{\tau}\|^2\big] +\frac{4}{n \lambda \eta} \mathbb{E}[\|\mathbf{X}^\tau_\perp\|^2] \leq \frac{1}{n\lambda^2} \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2\big] +\frac{4}{n \lambda \eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \nonumber \\ \leq & \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} +\frac{50096n+48}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \frac{\eta}{n\lambda}\sigma^2 \textstyle + \frac{8\cdot521 \eta }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \mathbb{E}\big[ \|\nabla \mathbf{F}^0\|^2\big] + \frac{8\eta}{n\lambda T} \mathbb{E}\big[ \|\nabla \mathbf{F}^0\|^2\big] \nonumber \\ \leq &~\textstyle \frac{8\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta T} +\frac{(50096n+48)\eta \sigma^2}{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} + \textstyle \frac{4176 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}. \label{eq:them_CDProxSGT0} \end{align} With $\|\nabla \phi_\lambda ({\mathbf{x}}_i^\tau)\|^2 = \frac{\|{\mathbf{x}}_i^\tau-\widehat{\mathbf{x}}_i^\tau\|^2}{\lambda^{2}}$ from Lemma \ref{lem:xhat_x}, we complete the proof. \end{proof} \section{Numerical Experiments}\label{sec:numerical_experiments} In this section, we test the proposed algorithms on training two neural network models, in order to demonstrate their better generalization over momentum variance-reduction and large-batch training methods, and their success in handling heterogeneous data even when only compressed model parameters and gradient information are communicated among workers. One neural network that we test is LeNet5 \cite{lecun1989backpropagation} on the FashionMNIST dataset \cite{xiao2017fashion}, and the other is FixupResNet20 \cite{zhang2019fixup} on Cifar10 \cite{krizhevsky2009learning}. Our experiments are representative of the practical performance of our methods. Among several closely-related works, \cite{xin2021stochastic} includes no experiments, and \cite{mancino2022proximal,zhao2022beer} only test on tabular data and MNIST. \cite{koloskova2019decentralized-b} tests its method on Cifar10 but needs similar data distributions across all workers for good performance. FashionMNIST has a similar scale to MNIST but poses a more challenging classification task \cite{xiao2017fashion}. Cifar10 is more complex, and FixupResNet20 has more layers than LeNet5. All the compared algorithms are implemented in Python with Pytorch and MPI4PY (for distributed computing). They run on a Dell workstation with two Quadro RTX 5000 GPUs. We run 5 workers on the 2 GPUs, which communicate over a ring-structured network (so each worker can only communicate with two neighbors). Uniform weights are used, i.e., $W_{ji} = \frac{1}{3}$ for each pair of connected workers $i$ and $j$. Both FashionMNIST and Cifar10 have 10 classes. We distribute each dataset onto the 5 workers based on the class labels, namely, each worker holds data points from 2 classes, and thus the data are heterogeneous across the workers. For all methods, we report their objective values on training data, prediction accuracy on testing data, and consensus errors at each epoch.
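For concreteness, the communication graph and the heterogeneous data split just described can be sketched as below; this is an illustrative NumPy sketch under the stated setup, not the code used for the experiments, and the label array is a placeholder.
\begin{verbatim}
# Illustrative sketch (not the experiment code): a 5-worker ring with uniform
# weights 1/3 and a label-based split of a 10-class dataset (2 classes per
# worker), which makes the local data heterogeneous.
import numpy as np

n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):          # self plus the two ring neighbors
        W[j % n, i] = 1.0 / 3.0          # uniform weight W_ji = 1/3
assert np.allclose(W.sum(axis=0), 1.0) and np.allclose(W, W.T)

def split_by_label(labels, n_workers=5, n_classes=10):
    """Return a list of index arrays; worker i gets classes 2i and 2i+1."""
    per_worker = n_classes // n_workers
    return [np.where(np.isin(labels,
                             np.arange(i * per_worker,
                                       (i + 1) * per_worker)))[0]
            for i in range(n_workers)]
\end{verbatim}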
To save time, the objective value for each epoch is computed as the average of the losses evaluated during the training process (i.e., on the sampled data instead of the whole training data) plus the regularizer. For the testing accuracy, we first compute the accuracy on the whole testing data for each worker by using its own model parameters and then take the average. The consensus error is simply $\|\mathbf{X}_\perp\|^2$. \subsection{Sparse Neural Network Training} \label{subsect:RegL1} In this subsection, we test the non-compressed method DProxSGT and compare it with AllReduce (a centralized method used as a baseline), DEEPSTORM\footnote{For DEEPSTORM, we implement DEEPSTORM v2 in \cite{mancino2022proximal}.} and ProxGT-SA \cite{xin2021stochastic} on solving \eqref{eq:decentralized_problem}, where $f$ is the loss on the whole training data and $r({\mathbf{x}}) = \mu\|{\mathbf{x}}\|_1$ serves as a regularizer that encourages a sparse model. For training LeNet5 on FashionMNIST, we set $\mu= 10^{-4}$ and run each method to 100 epochs. The learning rate $\eta$ and batchsize are set to $0.01$ and 8 for AllReduce and DProxSGT. DEEPSTORM uses the same $\eta$ and batchsize but with a larger initial batchsize 200, and its momentum parameter is tuned to $\beta=0.8$ in order to yield the best performance. ProxGT-SA is a large-batch training method. We set its batchsize to 256 and accordingly apply a larger step size $\eta=0.3$, which is the best among $\{0.1, 0.2, 0.3, 0.4\}$. For training FixupResnet20 on Cifar10, we set $\mu= 5 \times 10^{-5}$ and run each method to 500 epochs. The learning rate and batchsize are set to $\eta=0.02$ and 64 for AllReduce, DProxSGT, and DEEPSTORM. The initial batchsize is set to 1600 for DEEPSTORM and the momentum parameter to $\beta=0.8$. ProxGT-SA uses a larger batchsize 512 and a larger stepsize $\eta=0.1$, which gives the best performance among $\{0.05, 0.1, 0.2, 0.3\}$. \begin{figure}[ht] \begin{center} \includegraphics[width=.9\columnwidth]{./figures/noncompressed} \vspace{-0.2cm} \caption{Results of training sparse neural networks by non-compressed methods with $r({\mathbf{x}}) = \mu \|{\mathbf{x}}\|_1$ for the same number of epochs. Left: LeNet5 on FashionMNIST with $\mu=10^{-4}$. Right: FixupResnet20 on Cifar10 with $\mu=5\times 10^{-5}$.} \label{fig:RegL1} \end{center} \end{figure} The results for all methods are plotted in Figure \ref{fig:RegL1}. For LeNet5, DProxSGT produces almost the same curves as the centralized training method AllReduce, while on FixupResnet20, DProxSGT even outperforms AllReduce in terms of testing accuracy. This could be because AllReduce aggregates stochastic gradients from all the workers for each update and thus effectively uses a larger batchsize. DEEPSTORM performs as well as our method DProxSGT on training LeNet5. However, it gives lower testing accuracy than DProxSGT and oscillates much more severely on training the more complex FixupResnet20. This appears to be caused by the momentum variance reduction scheme used in DEEPSTORM. In addition, we see that the large-batch training method ProxGT-SA performs much worse than DProxSGT within the same number of epochs (i.e., data passes), especially on training FixupResnet20.
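For reference, the consensus error and the averaged testing accuracy described at the beginning of this section can be computed as in the following sketch (an assumed illustration, not the code used for the experiments):
\begin{verbatim}
# Sketch (assumed, not the experiment code) of the reported metrics.
# X has one column per worker; X_perp = X(I - J) with J = (1/n) 1 1^T, so the
# consensus error is the squared Frobenius norm of X minus its column mean.
# The testing accuracy is the plain average of the per-worker accuracies,
# each evaluated with that worker's own parameters.
import numpy as np

def consensus_error(X):
    Xbar = X.mean(axis=1, keepdims=True)
    return np.linalg.norm(X - Xbar, 'fro') ** 2

def averaged_test_accuracy(per_worker_acc):
    return float(np.mean(per_worker_acc))
\end{verbatim}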
\subsection{Neural Network Training by Compressed Methods} \label{subsect:compress} In this subsection, we compare CDProxSGT with two state-of-the-art compressed training methods: Choco-SGD \cite{koloskova2019decentralized,koloskova2019decentralized-b} and BEER \cite{zhao2022beer}. As Choco-SGD and BEER are studied only for problems without a regularizer, we set $r({\mathbf{x}})=0$ in \eqref{eq:decentralized_problem} for the tests. Again, we compare their performance on training LeNet5 and FixupResnet20. The two non-compressed methods AllReduce and DProxSGT are included as baselines. When compression is applied, the same compressors are used for CDProxSGT, Choco-SGD, and BEER. \begin{figure}[htbp] \begin{center} \includegraphics[width=.9\columnwidth]{./figures/Compressed} \vspace{-0.2cm} \caption{Results of training neural network models by compressed methods for the same number of epochs. Left: LeNet5 on FashionMNIST. Right: FixupResnet20 on Cifar10.} \label{fig:Compress} \end{center} \end{figure} We run each method to 100 epochs for training LeNet5 on FashionMNIST. The compressors $Q_y$ and $Q_x$ are set to top-$k(0.3)$ \cite{aji2017sparse}, i.e., keeping the largest $30\%$ of the entries of an input vector in absolute value and zeroing out all others. We set the batchsize to 8 and tune the learning rate $\eta$ to $0.01$ for AllReduce, DProxSGT, CDProxSGT and Choco-SGD, and for CDProxSGT, we set $\gamma_x=\gamma_y=0.5$. BEER is a large-batch training method. It uses a larger batchsize 256 and accordingly a larger learning rate $\eta=0.3$, which appears to be the best among $\{0.1, 0.2, 0.3, 0.4\}$. For training FixupResnet20 on the Cifar10 dataset, we run each method to 500 epochs. We take top-$k(0.4)$ \cite{aji2017sparse} as the compressors $Q_y$ and $Q_x$ and set $\gamma_x=\gamma_y=0.8$. For AllReduce, DProxSGT, CDProxSGT and Choco-SGD, we set their batchsize to 64 and tune the learning rate $\eta$ to $0.02$. For BEER, we use a larger batchsize 512 and a larger learning rate $\eta=0.1$, which is the best among $\{0.05, 0.1, 0.2, 0.3\}$. The results are shown in Figure \ref{fig:Compress}. For both models, CDProxSGT yields almost the same curves of objective values and testing accuracy as its non-compressed counterpart DProxSGT and the centralized non-compressed method AllReduce. This indicates about a 70\% saving in communication for training LeNet5 and a 60\% saving for FixupResnet20, without sacrificing testing accuracy. In comparison, BEER performs significantly worse than the proposed method CDProxSGT within the same number of epochs in terms of all three measures, especially on training the more complex FixupResnet20, which can be attributed to BEER's use of a larger batch. Choco-SGD produces comparable objective values, but its testing accuracy is much lower than that of our method CDProxSGT. This is likely because Choco-SGD cannot handle data heterogeneity, whereas CDProxSGT applies gradient tracking to address it. \section{Conclusion} We have proposed two decentralized proximal stochastic gradient methods, DProxSGT and CDProxSGT, for nonconvex composite problems with data heterogeneously distributed on the computing nodes of a connected graph. CDProxSGT extends DProxSGT by applying compression to the communicated model parameters and gradient information.
Both methods need only a single or $\mathcal{O}(1)$ samples for each update, which is important for good generalization performance when training deep neural networks. Gradient tracking is used in both methods to address data heterogeneity. An $\mathcal{O}\left( \frac{1}{ \epsilon^4}\right)$ sample complexity and communication complexity are established for both methods to produce an expected $\epsilon$-stationary solution. Numerical experiments on training neural networks demonstrate the good generalization performance of the proposed methods and their ability to handle heterogeneous data. \section{Introduction} In this paper, we consider solving nonconvex stochastic composite problems in a decentralized setting: \vspace{-0.1cm} \begin{equation}\label{eq:problem_original} \begin{aligned} & \min_{{\mathbf{x}}\in\mathbb{R}^d} \phi({\mathbf{x}}) = f({\mathbf{x}}) + r({\mathbf{x}}),\\[-0.1cm] & \text{with } f({\mathbf{x}})=\frac{1}{n}\sum_{i=1}^n f_i({\mathbf{x}}), f_i({\mathbf{x}})\!=\!\mathbb{E}_{\xi_i \sim \mathcal{D}_i}[F_i({\mathbf{x}},\xi_i)]. \end{aligned} \vspace{-0.1cm} \end{equation} Here, $\{\mathcal{D}_i\}_{i=1}^n$ are possibly \emph{non-i.i.d.\ data} distributions on $n$ machines/workers that can be viewed as nodes of a connected graph $\mathcal{G}$, and each $F_i(\cdot, \xi_i)$ can only be accessed by the $i$-th worker. We are interested in problems that satisfy the following structural assumption. \begin{assumption}[Problem structure] \label{assu:prob} We assume that \vspace{-1.5mm} \begin{itemize} \item[(i)] $r$ is closed convex and possibly nondifferentiable. \item[(ii)] Each $f_i$ is $L$-smooth in $\dom(r)$, i.e., $\|\nabla f_i({\mathbf{x}}) - \nabla f_i({\mathbf{y}})\| \le L \|{\mathbf{x}}- {\mathbf{y}}\|$, for any ${\mathbf{x}}, {\mathbf{y}}\in\dom(r)$. \item[(iii)] $\phi$ is lower bounded, i.e., $\phi^* \triangleq \min_{\mathbf{x}} \phi({\mathbf{x}}) > -\infty$. \end{itemize} \vspace{-2mm} \end{assumption} Let $\mathcal{N}=\{1, 2, \ldots, n\}$ be the set of nodes of $\mathcal{G}$ and $\mathcal{E}$ the set of edges. For each $i\in\mathcal{N}$, denote $\mathcal{N}_i$ as the set consisting of worker $i$ and its neighbors, i.e., $\mathcal{N}_i = \{j: (i,j) \in \mathcal{E}\}\cup \{i\}$. Every worker can only communicate with its neighbors. To solve \eqref{eq:problem_original} collaboratively, each worker $i$ maintains a copy, denoted as ${\mathbf{x}}_i$, of the variable ${\mathbf{x}}$. With these notations, \eqref{eq:problem_original} can be equivalently reformulated as \vspace{-0.1cm} {\begin{align}\label{eq:decentralized_problem} \begin{split} \min_{\mathbf{X} \in \mathbb{R}^{d\times n}} & \frac{1}{n}\sum_{i=1}^n \phi_i({\mathbf{x}}_i), \text{with }\phi_i({\mathbf{x}}_i) \triangleq f_i({\mathbf{x}}_i) + r({\mathbf{x}}_i), \\ \mbox{s.t. } \quad & {\mathbf{x}}_i={\mathbf{x}}_j, \forall\, j\in \mathcal{N}_i, \forall\, i = 1,\ldots, n. \end{split} \end{align}} \vspace{-0.5cm} Problems with a \emph{nonsmooth} regularizer, i.e., in the form of \eqref{eq:problem_original}, appear in many applications such as $\ell_1$-regularized signal recovery \cite{eldar2014phase,duchi2019solving}, online nonnegative matrix factorization \cite{guan2012online}, and training sparse neural networks \cite{scardapane2017group, yang2020proxsgd}. When the data involved in these applications are distributed onto (or collected by workers on) a decentralized network, the design of decentralized algorithms becomes necessary.
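To make the consensus reformulation concrete, the sketch below shows one round of neighbor averaging with an assumed doubly stochastic mixing matrix $\mathbf{W}$ supported on the graph, together with the consensus error $\|\mathbf{X}_\perp\|^2$ reported in our experiments. This is only an illustration of the notation; it is not the update rule of DProxSGT or CDProxSGT, and all names are ours.
\begin{verbatim}
import numpy as np

def gossip_average(X, W):
    """One round of neighbor averaging; column i of X is worker i's copy.

    X : d x n matrix of local copies
    W : n x n doubly stochastic mixing matrix with W[i, j] > 0 only if
        j is a neighbor of i or j == i (an assumption for illustration).
    """
    return X @ W.T

def consensus_error(X):
    """||X_perp||^2 with X_perp = X (I - J) and J = 11^T / n."""
    X_bar = X.mean(axis=1, keepdims=True)  # column-wise copy of the mean
    return np.linalg.norm(X - X_bar) ** 2

# Toy usage on a ring of n = 4 workers.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
X = np.random.randn(3, 4)
print(consensus_error(X), consensus_error(gossip_average(X, W)))
\end{verbatim}
After one round of averaging the consensus error shrinks; repeated mixing is what drives the local copies toward agreement.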
Although decentralized optimization has attracted a lot of research interest in recent years, most existing works focus on strongly convex problems \cite{scaman2017optimal, koloskova2019decentralized}, convex problems \cite{6426375,taheri2020quantized}, or smooth nonconvex problems \cite{bianchi2012convergence, di2016next, wai2017decentralized, lian2017can,zeng2018nonconvex}. Few works have studied \emph{nonsmooth nonconvex} decentralized \emph{stochastic} optimization like \eqref{eq:decentralized_problem} that we consider; \cite{chen2021distributed, xin2021stochastic, mancino2022proximal} are among the exceptions. However, they either require many data samples for each update or assume a so-called mean-squared smoothness condition, which is stronger than the smoothness condition in Assumption~\ref{assu:prob}(ii), in order to perform a momentum-based variance-reduction step. Though these methods have convergence (rate) guarantees, they often yield poor generalization performance on training deep neural networks, as demonstrated in \cite{lecun2012efficient, keskar2016large} for large-batch training methods and in our numerical experiments for momentum variance-reduction methods. On the other side, many distributed optimization methods \cite{shamir2014distributed,lian2017can,wang2018cooperative} assume that the data are i.i.d.\ across the workers. However, this assumption does not hold in many real-world scenarios, for instance, when data privacy requires local data to stay on-premise. Data heterogeneity can result in significant performance degradation of these methods. Though some papers do not assume i.i.d.\ data, they require certain data similarity, such as bounded stochastic gradients \cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized} or bounded gradient dissimilarity \cite{ tang2018communication,assran2019stochastic, tang2019deepsqueeze, vogels2020practical}. To address the critical practical issues mentioned above, we propose a decentralized proximal stochastic gradient tracking method that needs only a single or $\mathcal{O}(1)$ data samples (per worker) for each update. With no assumption on data similarity, it can still achieve the optimal convergence rate on problems satisfying the conditions in Assumption~\ref{assu:prob} and yield good generalization performance. In addition, to reduce communication cost, we give a compressed version of the proposed algorithm by performing compression on the communicated information. The compressed algorithm inherits the benefits of its non-compressed counterpart. \subsection{Our Contributions} Our contributions are three-fold. First, we propose two decentralized algorithms, one without compression (named DProxSGT) and the other with compression (named CDProxSGT), for solving \emph{decentralized nonconvex nonsmooth stochastic} problems. Different from existing methods, e.g., \cite{xin2021stochastic, wang2021distributed, mancino2022proximal}, which need a very large batchsize and/or perform momentum-based variance reduction to handle the challenge from the nonsmooth term, DProxSGT needs only $\mathcal{O}(1)$ data samples for each update, without performing variance reduction. The use of a small batch and a standard proximal gradient update enables our method to achieve significantly better generalization performance than the existing methods, as we demonstrate on training neural networks.
To the best of our knowledge, CDProxSGT is the first decentralized algorithm that applies a compression scheme for solving nonconvex nonsmooth stochastic problems, and it inherits the advantages of the non-compressed method DProxSGT. Even when applied to the special class of smooth nonconvex problems, CDProxSGT can perform significantly better than state-of-the-art methods in terms of generalization and handling data heterogeneity. Second, we establish an optimal sample complexity result for DProxSGT, which matches the lower bound in \cite{arjevani2022lower} in terms of the dependence on a target tolerance $\epsilon$, to produce an $\epsilon$-stationary solution. Due to the coexistence of nonconvexity, nonsmoothness, large stochastic variance (from using a small batch and no variance reduction, chosen for better generalization), and decentralization, the analysis is highly non-trivial. We employ the tool of the Moreau envelope and construct a decreasing Lyapunov function by carefully controlling the errors introduced by stochasticity and decentralization. Third, we establish the iteration complexity result of the proposed compressed method CDProxSGT, which is of the same order as that for DProxSGT and thus also optimal in terms of the dependence on a target tolerance. The analysis builds on that of DProxSGT but is more challenging due to the additional compression error and the use of gradient tracking. Nevertheless, we obtain our results under the same (or even weaker) assumptions as those made by state-of-the-art methods \cite{koloskova2019decentralized-b, zhao2022beer}. \subsection{Notation}\label{sec:notation} For any vector ${\mathbf{x}}\in\mathbb{R}^{d}$, we use $\|{\mathbf{x}}\|$ for the $\ell_2$ norm. For any matrix $\mathbf{A}$, $\|\mathbf{A}\|$ denotes the Frobenius norm and $\|\mathbf{A}\|_2$ the spectral norm. $\mathbf{X} = [{\mathbf{x}}_1,{\mathbf{x}}_2,\ldots,{\mathbf{x}}_n]\in\mathbb{R}^{d\times n}$ concatenates all local variables. The superscript $^t$ will be used for the iteration or communication index. $\nabla F_i({\mathbf{x}}_i^t,\xi_i^t)$ denotes a local stochastic gradient of $F_i$ at ${\mathbf{x}}_i^t$ with a random sample $\xi_i^t$. The column concatenation of $\{\nabla F_i({\mathbf{x}}_i^t,\xi_i^t)\}$ is denoted as \vspace{-0.1cm} \begin{equation*} \nabla \mathbf{F}^t = \nabla \mathbf{F}(\mathbf{X}^t,\Xi^t) = [ \nabla F_1({\mathbf{x}}_1^t,\xi_1^t),\ldots, \nabla F_n({\mathbf{x}}_n^t,\xi_n^t)],\vspace{-0.1cm} \end{equation*} where $\Xi^t = [\xi_1^t,\xi_2^t,\ldots,\xi_n^t]$. Similarly, we denote \vspace{-0.1cm} \begin{equation*} \nabla \mathbf{f}^t = [ \nabla f_1({\mathbf{x}}_1^t ),\ldots, \nabla f_n({\mathbf{x}}_n^t )].\vspace{-0.1cm} \end{equation*} For any $\mathbf{X} \in \mathbb{R}^{d\times n}$, we define \vspace{-0.1cm} \begin{equation*} \bar{{\mathbf{x}}} = \textstyle\frac{1}{n}\mathbf{X}\mathbf{1}, \quad \overline{\mathbf{X}} = \mathbf{X}\mathbf{J} = \bar{{\mathbf{x}}}\mathbf{1}^\top,\quad \mathbf{X}_\perp = \mathbf{X}(\mathbf{I} - \mathbf{J}), \vspace{-0.1cm} \end{equation*} where $\mathbf{1}$ is the all-one vector, and $\mathbf{J} = \frac{\mathbf{1}\mathbf{1}^\top}{n}$ is the averaging matrix.
Similarly, we define the mean vectors \vspace{-0.1cm} \begin{equation*} \overline{\nabla} \mathbf{F}^t = \textstyle\frac{1}{n} \nabla\mathbf{F}^t \mathbf{1},\ \overline{\nabla} \mathbf{f}^t = \textstyle\frac{1}{n} \nabla\mathbf{f}^t \mathbf{1}.\vspace{-0.1cm} \end{equation*} We will use $\mathbb{E}_t$ for the expectation with respect to the random samples $\Xi^t$ at the $t$-th iteration and $\mathbb{E}$ for the full expectation. $\mathbb{E}_Q$ denotes the expectation with respect to a stochastic compressor $Q$. \section{Related Works} The literature on decentralized optimization has been growing vastly, and it is impossible to exhaust it here. Below we review existing works on decentralized algorithms for solving nonconvex problems, with or without a compression technique. To ease the understanding of how our methods differ from existing ones, we compare a few relevant methods in Table \ref{tab:method_compare}. \begin{table*}[t] \caption{Comparison between our methods and some relevant methods: ProxGT-SA and ProxGT-SR-O in \cite{xin2021stochastic}, DEEPSTORM \cite{mancino2022proximal}, Choco-SGD \cite{koloskova2019decentralized-b}, and BEER \cite{zhao2022beer}. We use ``CMP'' to indicate whether compression is performed by a method. GRADIENTS represents additional assumptions on the stochastic gradients beyond those made in Assumption \ref{assu:stoc_grad}. SMOOTHNESS represents the smoothness condition, where ``mean-squared'' means $\mathbb{E}_{\xi_i}[\|\nabla F_i({\mathbf{x}}; \xi_i) - \nabla F_i({\mathbf{y}}; \xi_i)\|^2]\le L^2\|{\mathbf{x}}-{\mathbf{y}}\|^2$, which is stronger than the $L$-smoothness of $f_i$. BS is the batchsize required to obtain an $\epsilon$-stationary solution. VR and MMT indicate whether variance reduction or momentum is used. A large batchsize and/or momentum variance reduction can degrade generalization performance, as we demonstrate in the numerical experiments.} \label{tab:method_compare} \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccccc} \toprule Methods & CMP & $r\not\equiv 0$ & GRADIENTS & SMOOTHNESS & (BS, VR, MMT) \\ \midrule ProxGT-SA & No & Yes & No & $f_i$ is smooth & \big($\mathcal{O}(\frac{1}{\epsilon^2})$, No, No\big) \\[0.1cm] ProxGT-SR-O & No & Yes & No & mean-squared & \big($\mathcal{O}(\frac{1}{\epsilon})$, Yes, No\big) \\[0.1cm] DEEPSTORM & No & Yes & No & mean-squared & ($\mathcal{O}(1)$, Yes, Yes) \\ \textbf{DProxSGT (this paper)} & No & Yes & No & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\ \midrule Choco-SGD & Yes & No & $\mathbb{E}_{\xi}[\|\nabla F_i({\mathbf{x}},\xi_i)\|^2]\leq G^2$ & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\ BEER & Yes & No & No & $f$ is smooth & \big($\mathcal{O}(\frac{1}{\epsilon^2})$, No, No\big) \\[0.1cm] \textbf{CDProxSGT (this paper)} & Yes & Yes & No & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} \subsection{Non-compressed Decentralized Methods} For nonconvex decentralized problems with a nonsmooth regularizer, many deterministic decentralized methods have been studied, e.g., \cite{di2016next, wai2017decentralized, zeng2018nonconvex, chen2021distributed, scutari2019distributed}.
When only stochastic gradients are available, a majority of existing works focus on the smooth case without a regularizer or a hard constraint, such as \cite{lian2017can, assran2019stochastic, tang2018d}, gradient tracking based methods \cite{lu2019gnsd,zhang2019decentralized, koloskova2021improved}, and momentum-based variance reduction methods \cite{xin2021hybrid, zhang2021gt}. Several works such as \cite{bianchi2012convergence, wang2021distributed, xin2021stochastic, mancino2022proximal} have studied stochastic decentralized methods for problems with a nonsmooth term $r$. However, they either consider a special $r$ or require a large batchsize. \cite{bianchi2012convergence} considers the case where $r$ is an indicator function of a compact convex set; it also requires bounded stochastic gradients. \cite{wang2021distributed} focuses on problems with a polyhedral $r$, and it requires a large batchsize of $\mathcal{O}(\frac{1}{\epsilon})$ to produce an (expected) $\epsilon$-stationary point. \cite{xin2021stochastic, mancino2022proximal} are the most closely related to our methods. To produce an (expected) $\epsilon$-stationary point, the methods in \cite{xin2021stochastic} require a large batchsize, either $\mathcal{O}(\frac{1}{\epsilon^2})$ or $\mathcal{O}(\frac{1}{\epsilon})$ if variance reduction is applied. The method in \cite{mancino2022proximal} requires only $\mathcal{O}(1)$ samples for each update by taking a momentum-type variance reduction scheme. However, in order to reduce the variance, it needs the stronger mean-squared smoothness assumption. In addition, the momentum variance reduction step can often hurt the generalization performance on training complex neural networks, as we will demonstrate in our numerical experiments. \subsection{Compressed Distributed Methods} Communication efficiency is a crucial factor when designing a distributed optimization strategy. The current machine learning paradigm oftentimes resorts to models with a large number of parameters, which implies a high communication cost when the models or gradients are transferred from workers to the parameter server or among workers. This may incur significant latency in training. Hence, communication-efficient algorithms based on model or gradient compression have been actively sought. Two major groups of compression operators are quantization and sparsification. Quantization approaches include 1-bit SGD \cite{seide20141}, SignSGD \cite{bernstein2018signsgd}, QSGD \cite{alistarh2017qsgd}, and TernGrad \cite{wen2017terngrad}. Sparsification approaches include Random-$k$ \cite{stich2018sparsified}, Top-$k$ \cite{aji2017sparse}, Threshold-$v$ \cite{dutta2019discrepancy}, and ScaleCom \cite{chen2020scalecom}. Direct compression may slow down convergence, especially when the compression ratio is high. Error compensation, or error feedback, can mitigate this effect by saving the compression error in one communication step and compensating for it in the next communication step before another compression \cite{seide20141}. These compression operators were first designed to compress gradients in the centralized setting \cite{tang2019DoubleSqueeze,karimireddy2019error}. Compression can also be applied in the decentralized setting for smooth problems, i.e., \eqref{eq:decentralized_problem} with $r=0$. \cite{tang2019deepsqueeze} applies compression with error compensation to the communication of model parameters in the decentralized setting.
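As an illustration of the sparsification and error-feedback mechanisms just described, the sketch below implements a top-$k$ compressor specified by a ratio (as in the top-$k(0.3)$ compressor used in our experiments) together with a generic error-feedback wrapper. It is a minimal sketch for intuition rather than the communication scheme of any particular method; all names are ours.
\begin{verbatim}
import numpy as np

def top_k(v, ratio):
    """Keep the largest `ratio` fraction of entries of v in absolute value."""
    k = max(1, int(np.ceil(ratio * v.size)))
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]  # indices of the k largest |v_i|
    out[idx] = v[idx]
    return out

def error_feedback_step(v, residual, ratio):
    """Compress v + residual and carry the compression error to the next step."""
    corrected = v + residual
    compressed = top_k(corrected, ratio)   # what is actually communicated
    new_residual = corrected - compressed  # error kept locally for next round
    return compressed, new_residual

# Toy usage: entries zeroed out now are accumulated in the residual and
# eventually transmitted in later rounds.
rng = np.random.default_rng(0)
v = rng.standard_normal(10)
residual = np.zeros_like(v)
for _ in range(3):
    msg, residual = error_feedback_step(v, residual, ratio=0.3)
\end{verbatim}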
Choco-Gossip \cite{koloskova2019decentralized} is another communication scheme that mitigates the slow-down effect of compression. Instead of compressing the model parameters directly, it compresses the residual between the model parameters and their estimates. Choco-SGD uses Choco-Gossip to solve \eqref{eq:decentralized_problem}. BEER \cite{zhao2022beer} includes gradient tracking and compresses both the tracked stochastic gradients and the model parameters in each iteration via Choco-Gossip. BEER needs a large batchsize of $\mathcal{O}(\frac{1}{\epsilon^2})$ in order to produce an $\epsilon$-stationary solution. DoCoM-SGT \cite{DBLP:journals/corr/abs-2202-00255} performs updates similar to BEER but with a momentum term for the update of the tracked gradients, and it only needs an $\mathcal{O}(1)$ batchsize. Our proposed CDProxSGT solves decentralized problems in the form of \eqref{eq:decentralized_problem} with a nonsmooth $r({\mathbf{x}})$. To the best of our knowledge, CDProxSGT is the first compressed decentralized method for nonsmooth nonconvex problems that does not require a large batchsize, and it achieves an optimal sample complexity without any assumption of data similarity or gradient boundedness.
{ "arxiv_id": "2302.14250", "language": "en", "timestamp": "2023-03-01T02:06:54", "url": "https://arxiv.org/abs/2302.14250", "yymm": "2302" }
\section{Introduction} \label{sec:intro} \begin{figure}[t!] \centering \includegraphics[width=0.97\linewidth]{compare_archv2.pdf} \vspace{-.1in} \caption{Illustration of the major difference in pipeline between previous WILSS work and {FMWISS}{}. Given a model pre-trained on old classes with pixel-level labels ($\mathcal{Y}^{t-1}$), previous work~\cite{cermelli2022wilson} learns new classes ({\itshape e.g.}, \textit{horse}) via image-level labels ($\mathcal{C}^t$), while {FMWISS}{} improves and effectively utilizes the supervision from complementary foundation models.} \label{fig:compare_arch} \vspace{-.2in} \end{figure} Semantic segmentation is a fundamental task in computer vision and has witnessed great progress with deep learning in the past few years. It aims at assigning each pixel a category label. Modern supervised semantic segmentation methods~\cite{chen2017deeplab,chen2018encoder} are usually based on published large-scale segmentation datasets with pixel annotations. Despite the promising results, a model pre-trained on one dataset tends to forget the learned knowledge when it is retrained on another dataset with new classes. This phenomenon is known as catastrophic forgetting~\cite{mccloskey1989catastrophic}, which is caused by large changes in model parameters when fitting new samples with novel categories without access to old samples. A promising approach to this catastrophic forgetting problem is incremental learning. Many methods have been proposed for the image classification task~\cite{kirkpatrick2017overcoming,chaudhry2018riemannian,zenke2017continual,li2017learning,dhar2019learning,rebuffi2017icarl,castro2018end,shin2017continual,hou2019learning,wu2018memory,ostapenko2019learning}. Recently, a few methods have been presented to address the incremental learning for semantic segmentation (ILSS) task, where only the new classes of the current step's training samples are labeled with pixel annotations and the old classes of the previous step are labeled as background. Modern ILSS methods can be classified into two categories: regularization-based and replay-based. Regularization-based methods~\cite{cermelli2020mib,douillard2021plop,michieli2021sdr} focus on distilling knowledge, {\itshape e.g.}, output probabilities and intermediate features, from the pre-trained model of the previous step. Replay-based methods~\cite{maracani2021recall} propose to store the information of previous old classes or web-crawled images and replay them in new training steps. However, a key barrier to further developing these methods is the requirement for pixel-level annotations for new classes. Very recently, WILSON~\cite{cermelli2022wilson} first proposed a new task, weakly incremental learning for semantic segmentation (WILSS), to incrementally update the model from image-level labels for new classes. Despite the comparable results, image-level labels cannot provide details to accurately locate each segment, which limits the performance and development of WILSS. In this work, we explore how to improve and more effectively utilize the supervision of new classes given image-level labels while preserving the knowledge of old ones. We propose a \underline{\textbf{F}}oundation \underline{\textbf{M}}odel drives \underline{\textbf{W}}eakly \underline{\textbf{I}}ncremental learning for \underline{\textbf{S}}emantic \underline{\textbf{S}}egmentation framework, dubbed FMWISS.
Firstly, as shown in Figure~\ref{fig:compare_arch}, we make the first attempt to leverage pre-trained foundation models to improve the supervision given image-level labels for WILSS in a training-free manner. Specifically, we propose pre-training based co-segmentation to distill the knowledge of vision-language pre-training models ({\itshape e.g.}, CLIP~\cite{radford2021clip}) and self-supervised pre-training models ({\itshape e.g.}, iBOT~\cite{zhou2021ibot}), which can be complementary to each other. However, applying the pre-trained models is not trivial. We first adapt CLIP for category-aware dense mask generation. Based on the initial mask for each new class, we then propose to extract compact category-agnostic attention maps with seed guidance using self-supervised models. We finally refine the pseudo masks via mask fusion. We further propose to optimize the still noisy pseudo masks with a teacher-student architecture, where the plug-in teacher is optimized with the proposed dense contrastive loss. Thus we can more effectively utilize the pseudo dense supervision. Finally, we present a memory-based copy-paste augmentation to remedy the forgetting problem of old classes, which can further improve the performance. The contributions of this paper are as follows: \begin{itemize} \itemsep -0.1cm \item We present a novel and data-efficient WILSS framework, called FMWISS, which is the first attempt to utilize complementary foundation models to improve and more effectively use the supervision given only image-level labels. \item We propose pre-training based co-segmentation to generate dense masks by distilling both category-aware and category-agnostic knowledge from pre-trained foundation models, which provides dense supervision in place of the original image-level labels. \item To effectively utilize the pseudo labels, we use a teacher-student architecture with a proposed dense contrastive loss to dynamically optimize the noisy pseudo labels. \item We further introduce a memory-based copy-paste augmentation to remedy the forgetting problem of old classes, which can also improve performance. \item Extensive experiments on the Pascal VOC and COCO datasets demonstrate the significant efficacy of our {FMWISS}{} framework. \end{itemize} \section{Related Work} \noindent \textbf{Incremental Learning for Semantic Segmentation.} In addition to an exhaustive exploration of incremental learning for image classification~\cite{kirkpatrick2017overcoming,chaudhry2018riemannian,zenke2017continual,li2017learning,dhar2019learning,rebuffi2017icarl,castro2018end,shin2017continual,hou2019learning,wu2018memory,ostapenko2019learning}, relatively few methods~\cite{cermelli2020mib,douillard2021plop,klingner2020cil,maracani2021recall,michieli2019ilt,michieli2021sdr} have been proposed to tackle the incremental learning for semantic segmentation task, and they can be classified into regularization-based and replay-based methods. Regularization-based methods~\cite{cermelli2020mib,douillard2021plop,michieli2021sdr} focus on preserving knowledge from previous training steps. For instance, MiB~\cite{cermelli2020mib} proposes modified versions of the traditional cross-entropy and knowledge distillation loss terms to regularize the probability of old classes and distill previous knowledge, respectively, so as to remedy the background shift problem. PLOP~\cite{douillard2021plop} proposes a multi-scale pooling technique to preserve long- and short-range spatial relationships at the feature level.
SDR~\cite{michieli2021sdr} proposes to optimize the class-conditional features by minimizing the feature discrepancy within the same class. In addition, as a replay-based method, RECALL~\cite{maracani2021recall} uses web-crawled images with pseudo labels to remedy the forgetting problem. Pixel-by-pixel labeling for semantic segmentation is time-consuming and labor-intensive. Recently, some works propose to attain segmentations from cheaper and more available supervision, {\itshape e.g.}, sparse points~\cite{bearman2016s} and image-level labels~\cite{huang2018weakly,lee2019ficklenet,sun2020mining}, which has been attracting more and more attention in recent years. Most image-based weakly supervised semantic segmentation methods~\cite{araslanov2020ss,lee2021eps,wang2020seam} leverage image-level labels to optimize the class activation map (CAM) and then extract pseudo dense annotations. However, image-level labels are rarely explored in incremental learning for semantic segmentation. Very recently, WILSON~\cite{cermelli2022wilson} first proposed a novel weakly incremental learning for semantic segmentation (WILSS) task, which extends a pre-trained segmentation model using only image-level labels and achieves comparable results. In this work, inspired by WILSON~\cite{cermelli2022wilson}, we present the {FMWISS}{} framework to improve and effectively utilize the image-level labels by distilling the knowledge of complementary foundation models. \begin{figure*}[t!] \begin{center} \includegraphics[width=0.9\textwidth]{arch.pdf} \end{center} \vspace{-.2in} \caption{Illustration of the proposed {FMWISS}{} framework. The plug-in teacher module (ASPP~\cite{chen2014deeplabv2}) learns the segments of both old and new classes during training and is eliminated during inference. The model at step \textit{t} is optimized using the outputs of the pre-trained model at step \textit{t}-1 and the learned logits of the online teacher module.} \label{fig:framework} \vspace{-.1in} \end{figure*} \noindent \textbf{Visual Foundation Models.} We mainly focus on two kinds of foundation models in computer vision: vision-language pre-training (VLP) models and self-supervised pre-training models. VLP~\cite{radford2021clip,jia2021scalingalign} plays an important role in multimodal research, {\itshape e.g.}, VQA~\cite{antol2015vqa}, text-to-image generation~\cite{ramesh2022hierarchicaldalle2}, and zero-shot classification~\cite{zhou2021coop,zhou2022cocoop}. A representative VLP work is CLIP~\cite{radford2021clip}, which jointly trains the image and text encoders on 400 million image-text pairs collected from the web and demonstrates promising results on zero-shot image classification tasks. Recently, MaskCLIP~\cite{zhou2021maskclip} adapts CLIP to zero-shot semantic segmentation in a training-free manner, which illustrates the potential of CLIP in category-aware dense prediction. Self-supervised visual pre-training can be classified into three categories: contrastive learning based~\cite{oord2018infonce,chen2020simple,he2020momentum}, distillation based~\cite{grill2020bootstrap,caron2021dino}, and masked image modeling based~\cite{he2022mae,zhou2021ibot}. Among these methods, iBOT~\cite{zhou2021ibot} and DINO~\cite{caron2021dino} are two representative approaches that automatically perform class-agnostic dense feature modeling. These two kinds of foundation models can be complementary to each other.
\section{Method} \subsection{Problem Definition and Notation} Let $\mathcal{X}$ be the input image space, where each image $x \in \mathcal{X}$ consists of a set of pixels $\mathcal{I}$ with $|\mathcal{I}| = H \times W$. Let $\mathcal{Y}$ be the label space. In the incremental learning for semantic segmentation setting~\cite{cermelli2020mib}, the training procedure is arranged into multiple steps, and each learning step $t$ involves novel classes $\mathcal{C}^t$ with pixel annotations, forming a new label set $\mathcal{Y}^t = \mathcal{Y}^{t-1} \cup \mathcal{C}^t$. However, different from the original incremental setting, in the novel weakly incremental learning for semantic segmentation (WILSS) setting, recently proposed by WILSON~\cite{cermelli2022wilson}, pixel annotations are only provided for the initial step, {\itshape i.e.}, $t=0$. For the following steps, we can only access training sets with image-level annotations for the new classes and can no longer access the training samples of previous steps. The goal is to learn and update a model to perform segmentation on new classes without forgetting old classes. \subsection{Pre-training Based Co-segmentation} It is still challenging to use only image-level labels to supervise dense prediction tasks, {\itshape e.g.}, semantic segmentation, since image-level labels cannot provide detailed information to accurately locate each segment. This limitation inspires us to investigate the following question: \textit{how to improve the supervision of new classes from image-level labels?} To tackle this question, we propose a pre-training based co-segmentation method to utilize the knowledge of foundation models in a training-free manner. We distill the complementary knowledge of two kinds of foundation models, namely vision-language pre-training models, {\itshape e.g.}, CLIP~\cite{radford2021clip}, and self-supervised pre-training models, {\itshape e.g.}, iBOT~\cite{zhou2021ibot} and DINO~\cite{caron2021dino}. \noindent \textbf{Initial Mask.} We believe that a pre-trained vision-language model, {\itshape e.g.}, CLIP, has encoded rich semantic information in its features as it learns to associate images with language captions from 400 million image-text pairs. To obtain dense predictions for a new-class image, we apply the pre-trained CLIP model to extract category-aware pixel annotations given image-level labels. As shown in Figure~\ref{fig:framework}, to be specific, given an image $x$ of step $t$ with image-level labels $\mathcal{C}^t$, we first extract dense image features $F$ from the CLIP image encoder $f_I$. We then project $F$ via the final linear projection layer $f_{proj}$ of $f_I$: \begin{equation} \overline{F} = | f_{proj}(F) |_{2}, ~~ \overline{F} \in \mathbb{R}^{h \times w \times d}, \end{equation} where $|\cdot|_{2}$ denotes L2 normalization along the channel dimension and $d$ is the feature dimension of the joint space. We then compute the text embeddings by taking as input prompts with the target classes $\mathcal{C}^t$ of step $t$: \begin{equation} \overline{T} = | f_T({\tt prompt} (\mathcal{C}^t)) |_2, ~~ \overline{T} \in \mathbb{R}^{\mathcal{C}^t \times d}, \end{equation} where $f_T$ is the CLIP text encoder and ${\tt prompt}(\cdot)$ denotes prompt engineering, which ensembles multiple prompt templates as in~\cite{radford2021clip}.
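As a minimal illustration of the initial mask generation, the sketch below puts together the three steps above: L2-normalizing the dense image features and the text embeddings, and multiplying them to obtain the pixel-text score maps described next. It assumes the CLIP encoder outputs are already extracted; tensor shapes follow the notation above and the function names are ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def clip_initial_masks(dense_feats, text_embs):
    """Category-aware initial score maps from CLIP features (sketch).

    dense_feats : (h, w, d) dense image features after the final projection
                  layer of the CLIP image encoder (assumed precomputed).
    text_embs   : (C_t, d) text embeddings of the new classes from the CLIP
                  text encoder with prompt ensembling (assumed precomputed).
    Returns pixel-text score maps of shape (h, w, C_t).
    """
    F_bar = F.normalize(dense_feats, dim=-1)          # L2-normalize image features
    T_bar = F.normalize(text_embs, dim=-1)            # L2-normalize text embeddings
    return torch.einsum("hwd,cd->hwc", F_bar, T_bar)  # F_bar . T_bar^T

# Toy usage with random tensors standing in for encoder outputs.
M_init = clip_initial_masks(torch.randn(32, 32, 512), torch.randn(5, 512))
print(M_init.shape)  # torch.Size([32, 32, 5])
\end{verbatim}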
We then compute the pixel-text score maps using the language-compatible image feature embeddings $\overline{F}$ and the text embeddings $\overline{T}$ by: \begin{equation} \mathcal{M}_{init} = \overline{F} \cdot \overline{T}^{\top}, ~\mathcal{M}_{init} \in \mathbb{R}^{h \times w \times \mathcal{C}^t}, \end{equation} which indicates that each pixel will be assigned a score for each class in $\mathcal{C}^t$, and $\mathcal{M}_{init}$ can be viewed as the initial segmentation result with category information. \noindent \textbf{Refine Mask via Seeds Guidance.} The pseudo mask $\mathcal{M}_{init}$ generated by CLIP~\cite{radford2021clip} can provide rich category-aware pixel annotations, but the mask is noisy since the image-text training paradigm of CLIP makes it better suited to instance-level classification than to segmentation. To improve the mask quality, we propose to distill the knowledge of another kind of foundation model, {\itshape i.e.}, self-supervised pre-training models, which have shown promising performance in local feature modeling~\cite{caron2021dino,zhou2021ibot}. These models can produce compact category-agnostic attention maps. However, \textit{how to extract segmentations for a target class given an image that may contain multiple objects?} To address this issue, we propose to refine the initial mask via category-specific seed guidance. Specifically, we randomly select $N$ seed points $\{(i_p, j_p)\}^{N}_{p=1}$ from the initial mask $\mathcal{M}_{init} \in \mathbb{R}^{h \times w \times \mathcal{C}^t}$ for each target class $c \in \mathcal{C}^t$, and extract the corresponding attention maps from the pre-trained self-supervised model. Let $S$ denote the image encoder of the self-supervised model and $S(x) \in \mathbb{R}^{h \times w \times n}$ denote the output attention maps of the last self-attention block. We extract the category-aware attention map with the guidance of the seeds as follows. For simplicity, we only show the calculation for one class $c \in \mathcal{C}^t$: \begin{equation} \mathcal{M}^{c}_{seeds} = \big [\frac{1}{N} \sum^{N}_{p=1} \frac{1}{n} \sum^{n}_{q=1} S_q(x) \big ]_{\tt binary}, \label{eq:m_seeds} \end{equation} where $n$ denotes the number of attention heads and $S_q(x)$ is the attention map of the $q$-th head extracted at the seed point $(i_p, j_p)$. The $N$ seed points are randomly sampled from the foreground of the binarized $\mathcal{M}_{init}$ for each new class and training step. $[\cdot]_{\tt binary}$ is a binarization operation that sets all values greater than the threshold $\mathcal{T}$ to 1 and all others to 0. $\mathcal{T}$ is dynamically updated to keep the top $\mathcal{K}$\% ($\mathcal{K}=70$ by default) locations of the averaged attention map. As shown in Figure~\ref{fig:seeds}, we visualize the extracted attention maps of two classes (\textit{horse, dog}); nine seeds ($N=9$) already show good clustering performance. Finally, we get the refined mask via simple mask fusion for better performance: \begin{equation} \mathcal{M}_{refine} = \mathcal{M}_{init} \cup \mathcal{M}_{seeds}, ~~\mathcal{M}_{refine} \in \mathbb{R}^{h \times w \times \mathcal{C}^t}, \label{eq:fusion} \end{equation} where $\cup$ represents the mask fusion operation, which is the union operation in our experiments by default. \subsection{Pseudo Label Optimization} \begin{figure}[t!] \centering \includegraphics[width=0.95\linewidth]{contrast.pdf} \vspace{-.1in} \caption{Illustration of the dense contrastive loss calculation process.
The colored points represent pixels of different categories.} \label{fig:contrast} \vspace{-.1in} \end{figure} The previous WILSS work~\cite{cermelli2022wilson} uses a learned CAM to supervise the segmentation learning of new classes, and the CAM is optimized with a binary cross-entropy (BCE) loss against one-hot image-level labels. Now that we have generated pseudo pixel labels, which provide more information than image-level labels, a natural question is: \textit{how to effectively utilize such supervision?} We propose to use a teacher-student architecture to further optimize the still noisy pseudo masks. To be specific, taking the segmentation model as the student model, we introduce a plug-in teacher module (the ASPP network in Figure~\ref{fig:framework}) to dynamically learn better pseudo masks during training. To learn the teacher module, we first propose to use the pixel-wise binary cross-entropy loss $\mathcal{L}^{new}_{\text{BCE}}$ to supervise the predictions of the new classes at step $t$ as follows, and we defer the optimization for old classes to the next section. \begin{equation} \begin{split} \mathcal{L}^{new}_{\text{BCE}} = - \frac{1}{|\mathcal{C}^{t}| |\mathcal{I}|} \sum_{i \in \mathcal{I}} \sum_{c \in \mathcal{C}^t} \mathcal{M}^{c, i}_{refine}\log(p^{c, i}) \\ + (1 - \mathcal{M}^{c, i}_{refine})\log(1 - p^{c, i}), \end{split} \label{eq:pixelbce} \end{equation} where $p^{c, i}$ denotes the predicted probability on new class $c$ of pixel $i$. However, the pixel-wise BCE loss mainly focuses on the optimization of the target foreground class $c$ of the current input image and treats all other classes as background, which ignores the correlation among pixels. To better utilize the multi-class predictions and corresponding pixel-wise pseudo labels over the entire dataset, inspired by the InfoNCE~\cite{oord2018infonce} loss in unsupervised representation learning, we propose to perform dense contrastive learning. Specifically, as depicted in Figure~\ref{fig:contrast}, for a pixel $i$ of a new-class image and its corresponding pseudo annotation, we collect all the pixels with the same class label as $i$ to compose the positive samples $\mathbf{P}_i$, and collect the points of other classes to compose the negative samples $\mathbf{N}_i$. Formally, our dense contrastive loss $\mathcal{L}^{new}_{\text{DCL}}$ is calculated as follows. For simplicity, we only show the loss on pixel $i$: \begin{equation} \begin{aligned} & \mathcal{L}^{new}_{\text{DCL}} \\ & = \frac{1}{|\mathbf{P}_i|} \sum_{q_{+} \in \mathbf{P}_i}-\log \frac{\text{exp}(i \cdot q_{+}/\tau)} {\text{exp}(i \cdot q_{+}/\tau) + \sum\limits_{q_{-} \in \mathbf{N}_i} \text{exp}(i \cdot q_{-}/\tau)}, \end{aligned} \label{eq:dcl} \end{equation} where $\tau$ is a temperature term (0.1 by default). For training efficiency, we randomly sample only ten points for each class contained in the current mini-batch. The pixel-wise BCE loss in Eq.~(\ref{eq:pixelbce}) and the dense contrastive loss in Eq.~(\ref{eq:dcl}) are complementary to each other: together they help the teacher module learn discriminative pixel features and regularize the pixel feature space via intra-class and inter-class pixel feature modeling. \subsection{Memory-based Copy-Paste Augmentation} \begin{figure}[t!] \centering \includegraphics[width=0.8\linewidth]{copypaste.pdf} \vspace{-.1in} \caption{Illustration of the memory-based copy-paste augmentation.
Notably, we will not use the ground-truth labels of new classes of augmented labels.} \label{fig:copypaste} \vspace{-.1in} \end{figure} In addition to improving and effectively leveraging the supervision of new classes for WILSS, we propose a memory-based copy-paste augmentation strategy to stabilize the learning of old classes and can further improve the performance of the segmentation model. As shown in Figure~\ref{fig:copypaste}, we first construct a memory bank for each old class, and each class archive will store $\mathcal{B}$ foreground instances and segmentation labels during the base model training. Then, in step $t$, we randomly pick one pair of foreground images and labels from a randomly selected old class archive, and randomly paste them to the new class image. Now, the training samples contain new class images at step $t$ as well as old class images and pixel labels at step $t$-1. We thus optimize the old class learning of the teacher module as: \begin{equation} \begin{split} \mathcal{L}^{old}_{\text{BCE}} = - \frac{1}{|\mathcal{Y}^{t-1}| |\mathcal{I}|} \sum_{i \in \mathcal{I}} \sum_{c \in \mathcal{Y}^{t-1}} g(f^{t-1}_{\theta}(\hat{x}))^{c,i}\log(p^{c, i}) \\ + (1 - g(f^{t-1}_{\theta}(\hat{x}))^{c,i})\log(1 - p^{c, i}), \end{split} \end{equation} \begin{equation} g(f^{t-1}_{\theta}(\hat{x}))^{c,i} = \begin{cases} 1, & \hat{x}^{c,i}~\text{augmented}, \\ g(f^{t-1}_{\theta}(\hat{x}))^{c,i}, & \text{otherwise}, \end{cases} \end{equation} where $g(\cdot)$ is the logistic function, $f^{t-1}_{\theta}$ is the trained model at step $t$-1, $p^{c, i}$ is the predicted probability on old class $c$ of pixel $i$, $\hat{x}$ denotes the augmented image. \subsection{Overall Optimization} We optimize the segmentation model $f_{\theta}^{t}$ at step $t$ by distilling the knowledge of the trained model $f^{t-1}_{\theta}$ and the dynamically updated teacher module $T^{t}$. Since $T^{t}$ is optimized mainly through the binary cross-entropy loss, we use the BCE loss to distill the prediction of $T^{t}$ to model $f_{\theta}^{t}$. Considering that the learned pseudo mask is not perfect, we use the soft pixel labels as the final supervision for new classes $\mathcal{C}^t$, and use the weighted average value of the old model and teacher module outputs as the supervision for old classes: \begin{equation} q^{c,i} = \begin{cases} \alpha |T^{t}(x)|_{\text{hard}} + (1-\alpha) g(T^{t}(x)) , & \text{if}~ c \in \mathcal{C}^{t}, \\ \beta g(f_{\theta}^{t-1}(x)) + (1-\beta) g(T^{t}(x)), & \text{otherwise}, \end{cases} \label{eq:qci} \end{equation} where $|\cdot|_{\text{hard}}$ denotes one-hot operation to set one to the class with the maximum score for each pixel and zero to others. $\alpha$ and $\beta$ are trade-off parameters and we set $\alpha=0.5, \beta=0.9$ by default. Then, the BCE loss for $f_{\theta}^{t}$ is: \begin{equation} \begin{split} \mathcal{L}^{all}_{\text{BCE}} = - \frac{1}{|\mathcal{Y}^{t}||\mathcal{I}|} \sum_{i \in \mathcal{I}} \sum_{c \in \mathcal{Y}^{t}} q^{c,i}\log(p^{c, i}) \\ + (1 - q^{c,i})\log(1 - p^{c, i}), \end{split} \end{equation} where $\mathcal{Y}^{t}$ is the set of all seen classes and $p=f_{\theta}^{t}(x)$ represents the output of segmentation model at step $t$. The overall learning objective is as follows: \begin{equation} \mathcal{L}_{overall} = \mathcal{L}^{new}_{\text{BCE}} + \lambda \mathcal{L}^{new}_{\text{DCL}} + \mathcal{L}^{old}_{\text{BCE}} + \mathcal{L}^{all}_{\text{BCE}}, \end{equation} where $\lambda$ is the loss weight of $\mathcal{L}^{new}_{\text{DCL}}$. 
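To make the training objective above concrete, the following is a minimal sketch of the dense contrastive term $\mathcal{L}^{new}_{\text{DCL}}$, with pixels of the same pseudo class as positives, pixels of other classes as negatives, and ten sampled points per class as described above. The sub-sampling details and names are ours and only illustrative; the feature normalization is an assumption, not something specified in the text.
\begin{verbatim}
import torch
import torch.nn.functional as F

def dense_contrastive_loss(pixel_feats, pseudo_labels, tau=0.1, per_class=10):
    """Pixel-wise InfoNCE-style loss over sampled pixels (sketch).

    pixel_feats   : (M, d) features of candidate pixels in the mini-batch
    pseudo_labels : (M,) pseudo class indices of those pixels
    """
    # Sub-sample a few pixels per class for efficiency.
    keep = []
    for c in pseudo_labels.unique():
        idx = (pseudo_labels == c).nonzero(as_tuple=True)[0]
        keep.append(idx[torch.randperm(idx.numel())[:per_class]])
    keep = torch.cat(keep)
    feats = F.normalize(pixel_feats[keep], dim=-1)  # assumed normalization
    labels = pseudo_labels[keep]

    sim = feats @ feats.t() / tau                   # pairwise similarities
    same = labels[:, None] == labels[None, :]
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos, neg = same & ~eye, ~same                   # positives exclude self

    loss, count = feats.new_zeros(()), 0
    for i in range(len(labels)):
        if pos[i].any():
            neg_sum = torch.exp(sim[i][neg[i]]).sum()
            log_frac = sim[i][pos[i]] - torch.log(torch.exp(sim[i][pos[i]]) + neg_sum)
            loss = loss - log_frac.mean()           # average over positives of pixel i
            count += 1
    return loss / max(count, 1)
\end{verbatim}
The returned value averages the per-pixel terms over the sampled anchor pixels; it complements the pixel-wise BCE term exactly as discussed above.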
\section{Experiments} \subsection{Datasets and Protocols} To ensure a fair comparison, we follow the experimental settings of the state-of-the-art (SoTA) WILSS method WILSON~\cite{cermelli2022wilson} for datasets and protocols. Different from~\cite{cermelli2020mib,maracani2021recall}, which rely on pixel-wise annotations for new classes, we only use image-level labels for novel classes, as WILSON does. \noindent \textbf{Datasets:} We consider two standard evaluation benchmarks: Pascal VOC 2012~\cite{everingham2010pascalvoc} and COCO~\cite{lin2014coco}. COCO consists of 118,287 and 5,000 images for training and validation with 80 annotated object classes. Pascal VOC is composed of 1,464 training and 1,449 validation images with 20 labeled object classes. We follow the standard method~\cite{ahn2018learning,kolesnikov2016seed,cermelli2022wilson} to augment the VOC dataset with images from~\cite{hariharan2011semantic}, building up 10,582 and 1,449 images for training and validation, respectively. We also follow the practice of~\cite{cermelli2022wilson} to use the train split and the annotations of COCO-Stuff~\cite{caesar2018cocostuff}, which addresses the annotation overlapping problem of COCO~\cite{lin2014coco}. \noindent \textbf{Protocols:} Previous works~\cite{cermelli2020mib} introduce two different incremental learning protocols: \textit{disjoint} and \textit{overlap}. In \textit{disjoint}, the images of each training step only contain pixels of previously seen and current classes. In \textit{overlap}, each training step contains all the images, where pixels can belong to any class. Thus, the overlap protocol is more realistic and challenging than the disjoint one. In our experiments, we follow the previous WILSS work~\cite{cermelli2022wilson} to apply these two protocols on the VOC dataset, including \textbf{15-5 VOC}, where 15 base classes are learned in the first training step and 5 new classes are continuously learned in the second step, and \textbf{10-10 VOC}, where 10 base classes are learned in the first step and another 10 new classes are added in the second step. In addition, we also verify our method on the \textbf{COCO-to-VOC} protocol, which is a new incremental learning scenario proposed in~\cite{cermelli2022wilson}. To be specific, in the first step, we learn the 60 classes of COCO that do not appear in the VOC dataset. Then, we continuously learn the 20 classes of VOC. Following common practice~\cite{cermelli2020mib,maracani2021recall,cermelli2022wilson}, we report the standard mean Intersection over Union (mIoU) results on the validation sets. \subsection{Implementation Details} As in WILSON~\cite{cermelli2022wilson}, we use Deeplab V3~\cite{chen2017deeplabv3} with a ResNet-101~\cite{he2016deep} backbone for VOC and a Wide-ResNet-38 for COCO, both pre-trained on ImageNet~\cite{deng2009imagenet}. We train all the models for 40 epochs with a batch size of 24 and the SGD optimizer with a learning rate of $10^{-3}$, a momentum of 0.9, and a weight decay of $10^{-4}$. Before training the segmentation model, we first warm up the teacher module for five epochs. We set $\lambda=0.1$, $\alpha=0.5$, and $\beta=0.9$. As for the foundation models, in terms of the VLP model, we find that MaskCLIP~\cite{zhou2021maskclip} is an alternative solution to adapt CLIP for better dense prediction, and we use its mechanism based on the pre-trained CLIP with the ViT-B architecture. In terms of the self-supervised model, we use pre-trained iBOT~\cite{zhou2021ibot} with the ViT-L architecture by default.
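For reproducibility, the training configuration stated above can be summarized as follows. This is only a sketch: the dictionary keys and the optimizer-construction helper are ours, while the hyperparameter values are those reported in the text.
\begin{verbatim}
import torch

# Hyperparameters reported above for the VOC experiments.
cfg = dict(
    epochs=40, batch_size=24, lr=1e-3, momentum=0.9, weight_decay=1e-4,
    teacher_warmup_epochs=5, lambda_dcl=0.1, alpha=0.5, beta=0.9,
)

def build_optimizer(model, cfg):
    """SGD optimizer matching the reported training settings."""
    return torch.optim.SGD(
        model.parameters(), lr=cfg["lr"], momentum=cfg["momentum"],
        weight_decay=cfg["weight_decay"],
    )
\end{verbatim}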
\subsection{Baselines} Since weakly incremental learning for semantic segmentation (WILSS) is a novel setting proposed by WILSON~\cite{cermelli2022wilson}, we also compare our framework with both supervised incremental learning and weakly supervised semantic segmentation methods as in~\cite{cermelli2022wilson}. For supervised incremental learning methods using dense pixel-wise annotations, we compare with eight representative state-of-the-art works, including LWF~\cite{li2017lwf}, LWF-MC~\cite{rebuffi2017lwfmc}, ILT~\cite{michieli2019ilt}, MiB~\cite{cermelli2020mib}, PLOP~\cite{douillard2021plop}, CIL~\cite{klingner2020cil}, SDR~\cite{michieli2021sdr}, and RECALL~\cite{maracani2021recall}. As for the weakly supervised semantic segmentation methods adopted to the incremental learning scenario, we compare with the reported results with the pseudo labels generated from class activation maps (CAM), SEAM~\cite{wang2020seam}, SS~\cite{araslanov2020ss}, and EPS~\cite{lee2021eps} as in~\cite{cermelli2022wilson}. \subsection{Results.} \subsubsection{Performance on 15-5 VOC} In this setting, for a fair comparison, we also introduce 5 classes in the incremental step as in~\cite{cermelli2022wilson}: \textit{plant, sheep, sofa, train, tv monitor.} As shown in Table~\ref{tbl:voc15-5}, our {FMWISS}{} achieves the new state-of-the-art results in all the settings (disjoint and overlap) compared to all the pixel label based and image label based methods. To be more specific, compared to pixel label based methods, our WILSS method {FMWISS}{} even outperforms the best method by 3.5\% and 3.2\% in the two settings, respectively. Compared to image label based methods, {FMWISS}{} improves the overall performance by \textbf{3.4\%} and \textbf{6.1\%} in the two settings against previous SoTA WILSON~\cite{cermelli2022wilson}. Notably, {FMWISS}{} significantly improves the performance on new classes (16-20) by \textbf{7.0\%} and \textbf{12.8\%} in the disjoint and overlap settings, respectively. \def{P}{{P}} \def{I}{{I}} \begin{table}[t!] 
\centering \resizebox{\linewidth}{!}{% \begin{tabular}{rc|lll|lll} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Sup} & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\ & & 1-15 & 16-20 & All & 1-15 & 16-20 & All \\ \midrule Joint$^{\ast}$ & {P}{} & 75.5 & 73.5 & 75.4 & 75.5 & 73.5 & 75.4 \\ \hline FT$^{\ast}$ & {P}{} & 8.4 & 33.5 & 14.4 & 12.5 & 36.9 & 18.3 \\ LWF$^{\ast}$~\cite{li2017lwf} & {P}{} & 39.7 & 33.3 & 38.2 & 67.0 & 41.8 & 61.0 \\ LWF-MC$^{\ast}$~\cite{rebuffi2017lwfmc} & {P}{} & 41.5 & 25.4 & 37.6 & 59.8 & 22.6 & 51.0 \\ ILT$^{\ast}$~\cite{michieli2019ilt} & {P}{} & 31.5 & 25.1 & 30.0 & 69.0 & 46.4 & 63.6 \\ CIL$^{\ast}$~\cite{klingner2020cil} & {P}{} & 42.6 & 35.0 & 40.8 & 14.9 & 37.3 & 20.2 \\ MIB$^{\ast}$~\cite{cermelli2020mib} & {P}{} & 71.8 & 43.3 & 64.7 & 75.5 & 49.4 & 69.0 \\ PLOP~\cite{douillard2021plop} & {P}{} & 71.0 & 42.8 & 64.3 & \underline{75.7} & 51.7 & \underline{70.1} \\ SDR$^{\ast}$~\cite{michieli2021sdr} & {P}{} & \underline{73.5} & 47.3 & \underline{67.2} & 75.4 & 52.6 & 69.9 \\ RECALL$^{\ast}$~\cite{maracani2021recall} & {P}{} & 69.2 & \underline{52.9} & 66.3 & 67.7 & \underline{54.3} & 65.6 \\ \midrule CAM$^{\dagger}$ & {I}{} & 69.3 & 26.1 & 59.4 & 69.9 & 25.6 & 59.7 \\ SEAM$^{\dagger}$~\cite{wang2020seam} & {I}{} & 71.0 & 33.1 & 62.7 & 68.3 & 31.8 & 60.4 \\ SS$^{\dagger}$~\cite{araslanov2020ss} & {I}{} & 71.6 & 26.0 & 61.5 & 72.2 & 27.5 & 62.1 \\ EPS$^{\dagger}$~\cite{lee2021eps} & {I}{} & 72.4 & 38.5 & 65.2 & 69.4 & 34.5 & 62.1 \\ WILSON$^{\dagger}$~\cite{cermelli2022wilson} & {I}{} & 73.6 & 43.8 & 67.3 & 74.2 & 41.7 & 67.2 \\ {FMWISS}{} (Ours) & {I}{} & \textbf{75.9 (+2.3)} & \textbf{50.8 (+7.0)} & \textbf{70.7 (+3.4)} & \textbf{78.4 (+4.2)} & \textbf{54.5 (+12.8)} & \textbf{73.3 (+6.1)} \\ \bottomrule \end{tabular}% } \vspace{-.1in} \caption{Results on the 15-5 setting of Pascal VOC. $^{\ast}$ denotes the results are from~\cite{maracani2021recall}, $^{\dagger}$ denotes the results are from~\cite{cermelli2022wilson}. `P' and `I' indicate pixel-level and image-level labels, respectively. The best methods of using image-level labels and using pixel-level labels as supervision are \textbf{bold} and \underline{underlined}, respectively.} \label{tbl:voc15-5} \vspace{-.1in} \end{table} \subsubsection{Performance on 10-10 VOC} In this setting, for a fair comparison, we also introduce 10 classes in the incremental step as in~\cite{cermelli2022wilson}: \textit{dining table, dog, horse, motorbike, person, plant, sheep, sofa, train, tv monitor.} As reported in Table~\ref{tbl:voc10-10}, our {FMWISS}{} achieves the new state-of-the-art performance against the image label based methods and achieves \textit{on par} or even better performance than the pixel label based methods. Specifically, compared to pixel label based methods, we achieve an overall performance of 64.6\% on disjoint protocol, which is very close to ILT's 64.7\%. On overlap protocol, we achieve an overall performance of 69.1\%, which is even 1.8\% higher than SDR's 67.4\%. Compared to image label based methods, our {FMWISS}{} achieves the best results on all the protocols, and we achieve overall performance improvements of \textbf{3.8\%} and \textbf{4.1\%} in both settings compared to WILSON~\cite{cermelli2022wilson}. \begin{table}[t!] 
\centering \resizebox{\linewidth}{!}{% \begin{tabular}{rc|lll|lll} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Sup} & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\ & & 1-10 & 11-20 & All & 1-10 & 11-20 & All \\ \midrule Joint$^{\ast}$ & {P}{} & 76.6 & 74.0 & 75.4 & 76.6 & 74.0 & 75.4 \\ \hline FT$^{\ast}$ & {P}{} & 7.7 & 60.8 & 33.0 & 7.8 & 58.9 & 32.1 \\ LWF$^{\ast}$~\cite{li2017lwf} & {P}{} & 63.1 & 61.1 & 62.2 & \underline{70.7} & 63.4 & 67.2 \\ LWF-MC$^{\ast}$~\cite{rebuffi2017lwfmc} & {P}{} & 52.4 & 42.5 & 47.7 & 53.9 & 43.0 & 48.7 \\ ILT$^{\ast}$~\cite{michieli2019ilt} & {P}{} & \underline{67.7} & \underline{61.3} & \underline{64.7} & 70.3 & 61.9 & 66.3 \\ CIL$^{\ast}$~\cite{klingner2020cil} & {P}{} & 37.4 & 60.6 & 48.8 & 38.4 & 60.0 & 48.7 \\ MIB$^{\ast}$~\cite{cermelli2020mib} & {P}{} & 66.9 & 57.5 & 62.4 & 70.4 & 63.7 & 67.2 \\ PLOP~\cite{douillard2021plop} & {P}{} & 63.7 & 60.2 & 63.4 & 69.6 & 62.2 & 67.1 \\ SDR$^{\ast}$~\cite{michieli2021sdr} & {P}{} & 67.5 & 57.9 & 62.9 & 70.5 & \underline{63.9} & \underline{67.4} \\ RECALL$^{\ast}$~\cite{maracani2021recall} & {P}{} & 64.1 & 56.9 & 61.9 & 66.0 & 58.8 & 63.7 \\ \midrule CAM$^{\dagger}$ & {I}{} & 65.4 & 41.3 & 54.5 & 70.8 & 44.2 & 58.5 \\ SEAM$^{\dagger}$~\cite{wang2020seam} & {I}{} & 65.1 & 53.5 & 60.6 & 67.5 & 55.4 & 62.7 \\ SS$^{\dagger}$~\cite{araslanov2020ss} & {I}{} & 60.7 & 25.7 & 45.0 & 69.6 & 32.8 & 52.5 \\ EPS$^{\dagger}$~\cite{lee2021eps} & {I}{} & 64.2 & 54.1 & 60.6 & 69.0 & 57.0 & 64.3 \\ WILSON$^{\dagger}$~\cite{cermelli2022wilson} & {I}{} & 64.5 & 54.3 & 60.8 & 70.4 & 57.1 & 65.0 \\ {FMWISS}{} (Ours) & {I}{} & \textbf{68.5 (+4.0)} & \textbf{58.2 (+3.9)} & \textbf{64.6 (+3.8)} & \textbf{73.8 (+3.4)} & \textbf{62.3 (+5.2)} & \textbf{69.1 (+4.1)} \\ \bottomrule \end{tabular}% } \vspace{-.1in} \caption{Results on the 10-10 setting of Pascal VOC. $^{\ast}$ denotes the results are from~\cite{maracani2021recall}, $^{\dagger}$ denotes the results are from~\cite{cermelli2022wilson}.} \label{tbl:voc10-10} \end{table} \subsubsection{Performance on COCO-to-VOC} This is a more challenging setting. First, the base model is trained on 60 classes of the COCO dataset, which are not overlapped with VOC. Then we train the additional 20 classes from VOC dataset with image-level labels in the second step. The results are reported in Table~\ref{tbl:coco2voc}, {FMWISS}{} also achieves the new state-of-the-art performance against the WILSS methods. \begin{table}[t!] 
\centering \resizebox{0.85\linewidth}{!}{% \begin{tabular}{rc|lll|l} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Sup} & \multicolumn{3}{c|}{COCO} & VOC \\ & & 1-60 & 61-80 & All & 61-80 \\ \midrule FT$^{\dagger}$ & {P}{} & 1.9 & 41.7 & 12.7 & \underline{75.0} \\ LWF$^{\dagger}$~\cite{li2017lwf} & {P}{} & 36.7 & \underline{49.0} & \underline{40.3} & 73.6 \\ ILT$^{\dagger}$~\cite{michieli2019ilt} & {P}{} & \underline{37.0} & 43.9 & 39.3 & 68.7 \\ MIB$^{\dagger}$~\cite{cermelli2020mib} & {P}{} & 34.9 & 47.8 & 38.7 & 73.2 \\ PLOP$^{\dagger}$~\cite{douillard2021plop} & {P}{} & 35.1 & 39.4 & 36.8 & 64.7 \\ \midrule CAM$^{\dagger}$ & {I}{} & 30.7 & 20.3 & 28.1 & 39.1 \\ SEAM$^{\dagger}$~\cite{wang2020seam} & {I}{} & 31.2 & 28.2 & 30.5 & 48.0 \\ SS$^{\dagger}$~\cite{araslanov2020ss} & {I}{} & 35.1 & 36.9 & 35.5 & 52.4 \\ EPS$^{\dagger}$~\cite{lee2021eps} & {I}{} & 34.9 & 38.4 & 35.8 & 55.3 \\ WILSON$^{\dagger}$~\cite{cermelli2022wilson} & {I}{} & 39.8 & 41.0 & 40.6 & 55.7 \\ {FMWISS}{} (Ours) & {I}{} & \textbf{39.9 (+0.1)} & \textbf{44.7 (+3.7)} & \textbf{41.6 (+1.0)} & \textbf{63.6 (+7.9)} \\ \bottomrule \end{tabular}% } \vspace{-.1in} \caption{Results on the COCO-to-VOC setting. $^{\dagger}$ denotes the results are from~\cite{cermelli2022wilson}.} \label{tbl:coco2voc} \vspace{-.1in} \end{table} \subsection{Ablation Studies} Unless otherwise specified, we perform the ablation experiments with iBOT~\cite{zhou2021ibot} as the self-supervised model with ViT-B architecture based on the 10-10 VOC setting. We provide more ablation details, {\itshape e.g.}, ablations of $\alpha, \beta, \mathcal{K}$, in our ${\tt supplementary}$ materials. \subsubsection{Effect of Number of Seeds} As depicted in Figure~\ref{fig:seeds}, we visualize the impact of number of seeds when calculating $\mathcal{M}_{seeds}$. Based on the category-aware $\mathcal{M}_{init}$ of two classes (\textit{horse, dog}), more random seeds guidance leads to more compact attention maps and nine seeds are enough to get good clustering results for training efficiency. Moreover, the category-agnostic $\mathcal{M}_{seeds}$ can be complementary to the initial $\mathcal{M}_{init}$, which is indicated by red boxes. \begin{figure}[t!] \centering \includegraphics[width=0.94\linewidth]{seedsv2.pdf} \vspace{-.1in} \caption{Effect of the number of randomly selected seeds.} \label{fig:seeds} \vspace{-.1in} \end{figure} \subsubsection{Analysis of Mask Fusion} In this section, we verify the effect of mask fusion operation in Eq.~(\ref{eq:fusion}). As shown in Table~\ref{tbl:mask_fusion}, we compare the results of two fusion operations and the result without fusion (``None''). Performing co-segmentation with ``Union'' can significantly improve performance, especially on new classes (53.25\% $\rightarrow$ 55.45\%, 56.39\% $\rightarrow$ 60.54\%). The results indicate that the proposed pre-training based co-segmentation does improve the supervision and performance for WILSS against original image-level labels. \begin{table}[t!] \centering \resizebox{0.8\linewidth}{!}{% \begin{tabular}{c|ccc|ccc} \toprule Fusion & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\ Operation & 1-10 & 11-20 & All & 1-10 & 11-20 & All \\ \midrule None & 67.83 & 53.25 & 61.96 & 73.25 & 56.39 & 66.45 \\ Intersection & 66.85 & 51.17 & 60.40 & 74.12 & 54.02 & 64.95 \\ Union & 67.23 & 55.45 & \textbf{62.74} & 73.12 & 60.54 & \textbf{67.94} \\ \bottomrule \end{tabular}% } \vspace{-.1in} \caption{Performance comparison of the mask fusion operation. 
``None'' denotes using only the $\mathcal{M}_{init}$ from CLIP.}
\label{tbl:mask_fusion}
\end{table}
\subsubsection{Effect of Dense Contrastive Objective}
We further analyze the effect of the proposed dense contrastive loss in Table~\ref{tbl:contrast}. Compared to the result without using $\mathcal{L}^{new}_{\text{DCL}}$ ($\lambda=0.0$), introducing our proposed dense contrastive loss with a loss weight of 0.1 further improves the performance on new classes by 1.77\% and 0.38\% in the two settings, respectively. We thus set $\lambda=0.1$ in our experiments by default. The results show that the plug-in teacher module equipped with the proposed $\mathcal{L}^{new}_{\text{DCL}}$ can more effectively optimize and leverage the dense supervision for WILSS.
\begin{table}[t!]
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{c|ccc|ccc}
\toprule
\multirow{2}{*}{$\lambda$ of $\mathcal{L}^{new}_{\text{DCL}}$} & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\
& 1-10 & 11-20 & All & 1-10 & 11-20 & All \\
\midrule
0.0 & 67.23 & 55.45 & 62.74 & 73.12 & 60.54 & 67.95 \\
0.1 & 67.94 & 57.22 & \textbf{63.91} & 73.44 & 60.92 & \textbf{68.28} \\
1.0 & 67.15 & 57.94 & 63.87 & 72.08 & 61.05 & 68.17 \\
\bottomrule
\end{tabular}%
}
\vspace{-.1in}
\caption{Ablation experiments on the dense contrastive objective $\mathcal{L}^{new}_{\text{DCL}}$. $\lambda=0.0$ represents eliminating the dense contrastive loss.}
\label{tbl:contrast}
\end{table}
\subsubsection{Effect of Memory-based Copy-paste Augmentation}
As reported in Table~\ref{tbl:copypaste}, we analyze the effect of the proposed memory-based copy-paste augmentation. Without further tuning, we fix the augmentation probability at 0.5. Compared with the results without augmentation, applying the copy-paste augmentation with $\mathcal{B}=50$ already improves the performance on old classes (67.94\% $\rightarrow$ 68.64\% in disjoint, 73.44\% $\rightarrow$ 73.71\% in overlap), so we adopt $\mathcal{B}=50$ to limit memory consumption.
\begin{table}[t!]
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{cc|ccc|ccc}
\toprule
\multirow{2}{*}{Rand.} & Memory-bank & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\
& $\mathcal{B}$ & 1-10 & 11-20 & All & 1-10 & 11-20 & All \\
\midrule
\multirow{5}{*}{0.5} & 0 & 67.94 & 57.22 & 63.91 & 73.44 & 60.92 & 68.28 \\
& 10 & 68.85 & 57.14 & 64.29 & 73.11 & 61.50 & 68.40 \\
& 50 & 68.64 & 57.90 & \textbf{64.56} & 73.71 & 62.06 & \textbf{68.95} \\
& 100 & 68.41 & 58.09 & 64.55 & 73.26 & 61.78 & 68.60 \\
& 500 & 68.73 & 58.29 & 64.80 & 73.50 & 62.28 & 68.97 \\
\bottomrule
\end{tabular}%
}
\vspace{-.1in}
\caption{Ablation experiments on the memory-based copy-paste augmentation. $\mathcal{B}=0$ denotes removing augmentation.}
\label{tbl:copypaste}
\vspace{-.1in}
\end{table}
\subsubsection{Effect of Different Self-supervised Models}
We experiment with two representative self-supervised models~\cite{zhou2021ibot,caron2021dino} based on different ViT architectures. As shown in Table~\ref{tbl:selfup_model}, both bring promising results, and we use iBOT with ViT-L for better and more stable performance.
\begin{table}[t!]
\centering
\resizebox{0.85\linewidth}{!}{%
\begin{tabular}{cc|ccc|ccc}
\toprule
Self-supervised & \multirow{2}{*}{Arch.} & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\
Model & & 1-10 & 11-20 & All & 1-10 & 11-20 & All \\
\midrule
\multirow{2}{*}{iBOT~\cite{zhou2021ibot}} & ViT-B & 68.64 & 57.90 & 64.56 & 73.71 & 62.06 & 68.95 \\
& ViT-L & 68.51 & 58.20 & \textbf{64.64} & 73.79 & 62.26 & \textbf{69.09} \\
\cmidrule{2-8}
\multirow{2}{*}{DINO~\cite{caron2021dino}} & ViT-B & 68.01 & 59.14 & 64.86 & 73.11 & 60.98 & 68.14 \\
& ViT-S & 66.86 & 58.85 & 64.17 & 70.97 & 59.57 & 66.43 \\
\bottomrule
\end{tabular}%
}
\vspace{-.1in}
\caption{Performance comparison using different self-supervised pre-training models.}
\label{tbl:selfup_model}
\end{table}
\subsubsection{Factor-by-factor Experiments}
We conduct a factor-by-factor experiment on the proposed pre-training based co-segmentation, dense contrastive loss, and memory-based copy-paste augmentation. As shown in Table~\ref{tbl:factorbyfactor}, each design has a positive effect, and combining all designs obtains the best performance.
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{ccc|ccc|ccc}
\toprule
Pre-training & \multirow{2}{*}{$\mathcal{L}^{new}_{\text{DCL}}$} & Copy-paste & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\
based Co-seg. & & Aug. & 1-10 & 11-20 & All & 1-10 & 11-20 & All \\
\midrule
& & & 67.83 & 53.25 & 61.96 & 73.25 & 56.39 & 66.45 \\
\checkmark & & & 67.23 & 55.45 & 62.74 & 73.12 & 60.54 & 67.94 \\
\checkmark & \checkmark & & 67.94 & 57.22 & 63.91 & 73.44 & 60.92 & 68.28 \\
\checkmark & \checkmark & \checkmark & 68.64 & 57.90 & 64.56 & 73.71 & 62.06 & 68.95 \\
\checkmark & \checkmark & \checkmark & 68.51 & 58.20 & \textbf{64.64} & 73.79 & 62.26 & \textbf{69.09} \\
\bottomrule
\end{tabular}%
}
\vspace{-.1in}
\caption{Effect of each design proposed in this work. The last row indicates the result of using the ViT-L architecture.}
\label{tbl:factorbyfactor}
\end{table}
\begin{table}[t!]
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{rc|ccc|ccc}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Sup} & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\
& & 1-10 & 11-20 & All & 1-10 & 11-20 & All \\
\midrule
WILSON~\cite{cermelli2022wilson} & I & 64.5 & 54.3 & 60.8 & 70.4 & 57.1 & 65.0 \\
WILSON~\cite{cermelli2022wilson} & P & 69.5 & 56.4 & 64.2 & 73.6 & 57.6 & 66.7 \\
\midrule
{FMWISS}{} (Ours) & I & 68.5 & 58.2 & \textbf{64.6} & 73.8 & 62.3 & \textbf{69.1} \\
\bottomrule
\end{tabular}%
}
\vspace{-.1in}
\caption{Comparison with previous WILSS SoTA~\cite{cermelli2022wilson} trained with direct dense annotations.}
\label{tbl:com_dense}
\end{table}
\begin{table}[t!]
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{rc|ccc|ccc}
\toprule
\multirow{2}{*}{Method} & Train & \multicolumn{3}{c|}{Disjoint} & \multicolumn{3}{c}{Overlap} \\
& Data & 1-10 & 11-20 & All & 1-10 & 11-20 & All \\
\midrule
WILSON~\cite{cermelli2022wilson} & 100\% & 64.5 & 54.3 & 60.8 & 70.4 & 57.1 & 65.0 \\
\midrule
\multirow{3}{*}{{FMWISS}{} (Ours)} & 100\% & 68.5 & 58.2 & 64.6 & 73.8 & 62.3 & 69.1 \\
& 50\% & 66.7 & 56.0 & 62.7 & 72.1 & 60.5 & 67.4 \\
& 30\% & 68.5 & 51.5 & 61.5 & 75.7 & 55.7 & 66.8 \\
\bottomrule
\end{tabular}%
}
\vspace{-.1in}
\caption{Performance comparison with fewer training data.}
\vspace{-.1in}
\label{tbl:datasize}
\end{table}
\subsubsection{Data Efficiency Experiment}
We first compare with WILSON~\cite{cermelli2022wilson} trained with direct dense pixel-level labels (P) for all classes in Table~\ref{tbl:com_dense}. It is worth noting that {FMWISS}{}, based on only image-level labels (I), still outperforms it, especially in the challenging overlap setting. We further evaluate {FMWISS}{} with fewer training data in Table~\ref{tbl:datasize}. It is notable that {FMWISS}{} trained with only 30\% of the data still outperforms WILSON trained with 100\% of the data. Combining Table~\ref{tbl:com_dense}, Table~\ref{tbl:datasize}, and the fast convergence result (Sec.~4) in the {\tt supplementary} materials, we can conclude that {FMWISS}{} is a significantly data-efficient WILSS framework.
\subsubsection{Visualization}
We report qualitative results indicating the superiority of our {FMWISS}{} framework on both old ({\itshape e.g.}, \textit{bottle}) and new ({\itshape e.g.}, \textit{person, horse, dog, sofa}) classes in Figure~\ref{fig:visualize}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.91\linewidth]{vis.pdf}
\vspace{-.1in}
\caption{Qualitative comparison between previous WILSS SoTA~\cite{cermelli2022wilson} and our {FMWISS}{} on the 10-10 VOC setting. From left to right: image, WILSON~\cite{cermelli2022wilson}, {FMWISS}{} and the ground truth.}
\label{fig:visualize}
\vspace{-.1in}
\end{figure}
\section{Limitations}
Like WILSON~\cite{cermelli2022wilson}, the {FMWISS}{} framework is not designed for single-class incremental learning steps, since the proposed dense contrastive objective needs other classes as negative samples.
\section{Conclusion}
In this paper, we present a novel and data-efficient {FMWISS}{} framework for weakly incremental learning for semantic segmentation. {FMWISS}{} is the first attempt to exploit knowledge of pre-trained foundation models for WILSS. We propose pre-training based co-segmentation to generate dense supervision based on image-level labels. We further use a teacher-student architecture with a proposed dense contrastive loss to more effectively utilize the pseudo labels. In addition, we introduce memory-based copy-paste augmentation to alleviate the forgetting of old classes. Extensive experiments demonstrate the superior results of {FMWISS}{}.
{\small
\bibliographystyle{ieee_fullname}
{ "arxiv_id": "2302.14315", "language": "en", "timestamp": "2023-03-01T02:09:17", "url": "https://arxiv.org/abs/2302.14315", "yymm": "2302" }
\section*{Introduction}
In their study of the deformed $\mathcal{W}$-algebras, Frenkel-Reshetikhin~\cite{FR98} introduced a certain $2$-parameter deformation $C(q,t)$ of the Cartan matrix of finite type. In the previous work~\cite{FM}, the present authors gave a categorical interpretation of this deformed Cartan matrix $C(q,t)$ in terms of bigraded modules over the generalized preprojective algebras in the sense of Gei\ss-Leclerc-Schr\"oer~\cite{GLS}. More precisely, we have shown that the entries of the matrix $C(q,t)$ and its inverse $\widetilde{C}(q,t)$ can be expressed in terms of the Euler-Poincar\'e pairings of certain bigraded modules. The generalized preprojective algebra is defined in \cite{GLS} for arbitrary symmetrizable Kac-Moody type, and it admits a Weyl group symmetry~\cite{GLS,AHIKM} and a geometric realization of crystal bases \cite{GLS4}. As a sequel to \cite{FM}, the main purpose of the present paper is to propose a categorification of a several-parameter deformation of an arbitrary symmetrizable generalized Cartan matrix (GCM for short) by considering multi-graded modules over the generalized preprojective algebra.

In the context of theoretical physics, Kimura-Pestun~\cite{KP1, KP2} introduced \emph{the mass-deformed Cartan matrix}, a deformation of the GCM with several deformation parameters, in their study of (fractional) quiver $\mathcal{W}$-algebras, which generalize Frenkel-Reshetikhin's deformed $\mathcal{W}$-algebras. Our deformation essentially coincides with Kimura-Pestun's mass-deformed Cartan matrix under a certain condition, which is satisfied in all the symmetric cases as well as in all the finite and affine cases (see \S\ref{Ssec:KP}).

To explain our results more precisely, let us fix some terminology. Let $C = (c_{ij})_{i,j \in I}$ be a GCM with a symmetrizer $D = \mathop{\mathrm{diag}}\nolimits(d_i \mid i \in I)$. We put $g_{ij} \coloneqq \gcd(|c_{ij}|, |c_{ji}|)$ and $f_{ij} \coloneqq |c_{ij}|/g_{ij}$ for $i,j \in I$ with $c_{ij} < 0$. Associated with these data, we have the generalized preprojective algebra $\Pi$ defined over an arbitrary field (see \cite{GLS} for the precise definition or \S\ref{Ssec:GPA} for our convention). We introduce the (multiplicative) abelian group $\Gamma$ generated by the elements
\[ \{q,t\} \cup \{ \mu_{ij}^{(g)} \mid (i,j, g) \in I\times I \times \mathbb{Z}, c_{ij} < 0, 1 \le g \le g_{ij} \} \]
subject to the relations
\[ \mu^{(g)}_{ij} \mu^{(g)}_{ji} = 1 \quad \text{for all $i, j \in I$ with $c_{ij} < 0$ and $1 \le g \le g_{ij}$}.\]
These elements play the role of deformation parameters. Here, we introduce the parameters $\mu_{ij}^{(g)}$ in addition to $q$ and $t$, inspired by \cite{KP1, KP2}, where the counterparts are called \emph{mass-parameters}. We endow the algebra $\Pi$ with a certain $\Gamma$-grading as in \eqref{eq:deg} below. We can show that this grading on $\Pi$ is universal under a reasonable condition, see \S\ref{subsec:univ_grad}.
With this terminology, we give the following definition of the $(q, t, \ul{\mu})$-deformation $C(q,t, \ul{\mu})$ of the GCM $C$, and propose a categorical framework which organizes the relevant combinatorics in terms of $\Gamma$-graded $\Pi$-modules:
\begin{DefThm}
We define the $\mathbb{Z}[\Gamma]$-valued $I \times I$-matrix $C(q,t, \ul{\mu})$ by the formula
\begin{equation} \label{eq:defC}
C_{ij}(q,t, \ul{\mu}) = \begin{cases} q^{d_i}t^{-1} + q^{-d_i}t & \text{if $i=j$}; \\ - [f_{ij}]_{q^{d_i}} \sum_{g = 1}^{g_{ij}}\mu_{ij}^{(g)} & \text{if $c_{ij} < 0$}; \\ 0 & \text{otherwise}, \end{cases}
\end{equation}
where $[k]_q = (q^{k}-q^{-k})/(q-q^{-1})$ is the standard $q$-integer. We establish the following statements:
\begin{enumerate}
\item Each entry of $C(q,t, \ul{\mu})$ and its inverse $\widetilde{C}(q,t,\ul{\mu})$ can be expressed as the Euler-Poincar\'e pairing of certain $\Gamma$-graded $\Pi$-modules. (\S\ref{Ssec:EP})
\item Moreover, when $C$ is of infinite type, the formal expansion at $t=0$ of each entry of $\widetilde{C}(q,t,\ul{\mu})$ coincides with the $\Gamma$-graded dimension of a certain $\Pi$-module, and hence its coefficients are non-negative. (Corollary~\ref{Cor:chi})
\item\label{main:3} For general $C$, the formal expansion at $t=0$ of $\widetilde{C}(q,t,\ul{\mu})$ admits a combinatorial expression in terms of a braid group symmetry. (\S\S\ref{Ssec:cinv} \& \ref{sec:braid})
\end{enumerate}
\end{DefThm}
Note that, if we consider the above~\eqref{main:3} for each finite type and some specific reduced words, then it recovers the combinatorial formula obtained by \cite{HL15} and \cite{KO} after some specialization. We may see our generalization as an aspect of the Weyl/braid group symmetry of $\Pi$ for general reduced expressions (e.g. \cite{FG,M}).

When $C$ is of finite type, these results are essentially the same as the results in our previous work \cite{FM}. When $C$ is of infinite type, the algebra $\Pi$ is no longer finite-dimensional. In this case, we find it suitable to work with the category of $\Gamma$-graded modules which are bounded from below with respect to the $t$-grading, and its \emph{completed} Grothendieck group. Then, the discussion is almost parallel to the case of finite type. Indeed, we give a uniform treatment which deals with the cases of finite type and of infinite type at the same time.

In the case of finite type, the above combinatorial aspects of the deformed Cartan matrices play an important role in the representation theory of quantum loop algebras; see our previous work~\cite{FM} and references therein. We expect that our results here on the deformed GCM will also be useful in the future study of quiver $\mathcal{W}$-algebras and the representation theory of quantum affinizations of Kac-Moody algebras.

This paper is organized as follows. In \S\ref{Sec:Cartan}, after fixing our notation, we discuss combinatorial aspects (i.e., a braid group action in \S\ref{Ssec:Braid} and the formula for $\widetilde{C}(q,t,\ul{\mu})$ using it in \S\ref{Ssec:cinv}) of our deformed Cartan matrices. The proofs of several assertions require the categorical interpretation and hence are postponed to the next section. In \S\ref{Sec:gpa}, we discuss the categorical interpretation of our deformed GCM in terms of the graded modules over the generalized preprojective algebras. The final \S\ref{Sec:Rem} consists of three remarks, which are logically independent of the other parts of the paper.
In \S\ref{Ssec:KP}, we compare our deformed GCM with the mass-deformed Cartan matrix in the sense of Kimura-Pestun~\cite{KP2}. In \S\ref{subsec:univ_grad}, we show that our $\Gamma$-grading on $\Pi$ is universal among the gradings valued in free abelian groups. In \S\ref{Ssec:species}, we briefly discuss the $t$-deformed GCM, which is obtained from our $C(q,t,\ul{\mu})$ by evaluating all the deformation parameters except for $t$ at $1$, and its categorical interpretation via the classical representation theory of modulated graphs in the sense of Dlab-Ringel~\cite{DR}.
\subsection*{Conventions}
Throughout this paper, we use the following conventions.
\begin{itemize}
\item For a statement $\mathrm{P}$, we set $\delta(\mathrm{P})$ to be $1$ or $0$ according to whether $\mathrm{P}$ is true or false. We often use the abbreviation $\delta_{x,y} \coloneqq \delta(x=y)$ known as Kronecker's delta.
\item For a group $G$, let $\mathbb{Z}[G]$ denote the group ring and $\mathbb{Z}[\![G]\!]$ the set of formal sums $\{\sum_{g \in G} a_g g \mid a_g \in \mathbb{Z}\}$. Note that $\mathbb{Z}[\![ G ]\!]$ is a $\mathbb{Z}[G]$-module in the natural way. If $\mathbb{Z}[G]$ is a commutative integral domain, we write $\mathbb{Q}(G)$ for its fraction field.
\end{itemize}
\section{Deformed Cartan matrices} \label{Sec:Cartan}
\subsection{Notation} \label{Ssec:notation}
Let $I$ be a finite set. Recall that a $\mathbb{Z}$-valued $I \times I$-matrix $C = (c_{ij})_{i,j \in I}$ is called \emph{a symmetrizable generalized Cartan matrix} if the following conditions are satisfied:
\begin{enumerate}
\renewcommand{\theenumi}{\rm C\arabic{enumi}}
\item \label{C1} $c_{ii} = 2$, $c_{ij} \in \mathbb{Z}_{\le 0}$ for all $i, j \in I$ with $i\neq j$, and $c_{ij} = 0$ if and only if $c_{ji} = 0$,
\item \label{C2} there is a diagonal matrix $D = \mathop{\mathrm{diag}}\nolimits(d_i \mid i \in I)$ with $d_i \in \mathbb{Z}_{>0}$ for all $i \in I$ such that the product $DC$ is symmetric.
\end{enumerate}
We call the diagonal matrix $D$ in \eqref{C2} \emph{a symmetrizer of $C$}. It is said to be \emph{minimal} when $\gcd(d_i \mid i \in I) =1$. For $i, j \in I$, we write $i \sim j$ when $c_{ij} < 0$. We say that a symmetrizable generalized Cartan matrix $C$ is \emph{irreducible} if, for any $i, j \in I$, there is a sequence $i_1, \ldots, i_l \in I$ satisfying $i \sim i_1 \sim \cdots \sim i_l \sim j$. In this case, a minimal symmetrizer of $C$ is unique, and any symmetrizer of $C$ is a scalar multiple of it. From now on, by a GCM, we always mean an irreducible symmetrizable generalized Cartan matrix. We say that $C$ is of finite type if it is positive definite, and it is of infinite type otherwise.

Throughout this section, we fix a GCM $C=(c_{ij})_{i,j \in I}$ with its symmetrizer $D= \mathop{\mathrm{diag}}\nolimits(d_i\mid i \in I)$. For any $i, j \in I$ with $i \sim j$, we set
\[ g_{ij} \coloneqq \gcd(|c_{ij}|, |c_{ji}|), \quad f_{ij} \coloneqq |c_{ij}|/g_{ij}, \quad d_{ij} \coloneqq \gcd(d_i, d_j). \]
By definition, we have $g_{ij} = g_{ji}, d_{ij} = d_{ji}$ and $f_{ij} = d_j/d_{ij}$. Let $r \coloneqq \mathop{\mathrm{lcm}}\nolimits(d_i \mid i \in I)$. We note that the transpose ${}^{\mathtt{t}}{C} = (c_{ji})_{i,j \in I}$ is also a GCM, whose minimal symmetrizer is $rD^{-1} = \mathop{\mathrm{diag}}\nolimits(r/d_i \mid i\in I)$.
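For a quick illustration of this notation, consider the rank-$2$ GCM
\[ C=\begin{pmatrix} 2 & -1 \\ -2 & 2 \end{pmatrix}, \qquad D = \mathop{\mathrm{diag}}\nolimits(2,1), \]
for which $DC$ is symmetric. Here $g_{12}=g_{21}=1$, $f_{12}=1$, $f_{21}=2$ and $d_{12}=1$, in agreement with $f_{ij}=d_j/d_{ij}$, while $r=2$ and $rD^{-1}=\mathop{\mathrm{diag}}\nolimits(1,2)$ is indeed the minimal symmetrizer of ${}^{\mathtt{t}}C$.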
Following \cite{GLS}, we say that a subset $\Omega \subset I \times I$ is \emph{an acyclic orientation of $C$} if the following conditions are satisfied:
\begin{itemize}
\item $\{(i,j), (j,i)\} \cap \Omega \neq \varnothing$ if and only if $i \sim j$,
\item for any sequence $(i_1, i_2, \ldots, i_l)$ in $I$ with $l > 1$ and $(i_k, i_{k+1}) \in \Omega$ for all $1 \le k < l$, we have $i_1 \neq i_l$.
\end{itemize}
Let $\mathsf{Q} = \bigoplus_{i \in I}\mathbb{Z} \alpha_i$ be the root lattice of the Kac-Moody algebra associated with $C$, where $\alpha_i$ is the $i$-th simple root for each $i \in I$. We write $s_i$ for the $i$-th simple reflection, which is an automorphism of $\mathsf{Q}$ given by $s_i \alpha_j = \alpha_j - c_{ij}\alpha_i$ for $j \in I$. The Weyl group $W$ is defined to be the subgroup of $\mathop{\mathrm{Aut}}\nolimits(\mathsf{Q})$ generated by all the simple reflections $\{ s_i\}_{i \in I}$. The pair $(W,\{s_i\}_{i \in I})$ forms a Coxeter system.
\subsection{Deformed Cartan matrices}\label{Ssec:DCM}
Let $\Gamma$ be the (multiplicative) abelian group defined in the Introduction. As an abelian group, $\Gamma$ is free of finite rank. Let $\ul{\mu}^\mathbb{Z}$ denote the subgroup of $\Gamma$ generated by all the elements in $\{\mu_{ij}^{(g)} \mid i,j \in I, i \sim j, 1 \le g \le g_{ij}\}$. Then we have $\Gamma = q^\mathbb{Z} \times t^\mathbb{Z} \times \ul{\mu}^\mathbb{Z}$, where $x^{\mathbb{Z}} \coloneqq \{ x^k \mid k \in \mathbb{Z}\}$. If we choose an acyclic orientation $\Omega$ of $C$, we have $\ul{\mu}^\mathbb{Z} = \prod_{(i,j)\in \Omega} \prod_{g=1}^{g_{ij}} (\mu_{ij}^{(g)})^\mathbb{Z}$. In particular, the rank of $\Gamma$ is $2 + \sum_{(i,j) \in \Omega} g_{ij}$.

Consider the group ring $\mathbb{Z}[\Gamma]$ of $\Gamma$. Given an acyclic orientation $\Omega$ of $C$, it is identical to the ring of Laurent polynomials in the variables $q,t$ and $\mu_{ij}^{(g)}$ with $(i,j) \in \Omega$. We define \emph{the deformed generalized Cartan matrix} (\emph{deformed GCM} for short) $C(q,t, \ul{\mu})$ to be the $\mathbb{Z}[\Gamma]$-valued $I \times I$-matrix whose $(i,j)$-entry $C_{ij}(q,t, \ul{\mu})$ is given by the formula~\eqref{eq:defC} in the Introduction. We often evaluate all the parameters $\mu_{ij}^{(g)}$ at $1$ and write $C(q,t)$ for the resulting $\mathbb{Z}[q^{\pm 1}, t^{\pm 1}]$-valued matrix. More explicitly, its $(i,j)$-entry is given by
\[ C_{ij}(q,t) \coloneqq \delta_{i,j} (q^{d_i}t^{-1} + q^{-d_i}t) - \delta(i \sim j) g_{ij}[f_{ij}]_{q^{d_i}}. \]
We refer to the matrix $C(q,t)$ as the \emph{$(q,t)$-deformed GCM}. Note that we have $[d_i]_q C_{ij}(q,t) = -g_{ij}[d_i f_{ij}]_q$ whenever $i\neq j$, and hence the matrix $([d_i]_q C_{ij}(q,t))_{i,j \in I}$ is symmetric.
\begin{Rem}
When the GCM $C$ is of finite type, the matrix $C(q,t)$ coincides with the $(q,t)$-deformed Cartan matrix considered in \cite{FR98}. A deformed GCM of general type, called \emph{the mass-deformed Cartan matrix}, is also considered in \cite{KP1, KP2}. We discuss the difference between our definition and the definition in \cite{KP2} in \S\ref{Ssec:KP}.
\end{Rem}
Let $\Gamma_0 \coloneqq q^\mathbb{Z} \times \ul{\mu}^\mathbb{Z} \subset \Gamma$. Since $\Gamma = t^\mathbb{Z} \times \Gamma_0$, we have $\mathbb{Z}[\Gamma] = \mathbb{Z}[\Gamma_0][t^{\pm 1}]$. Letting $q^{\pm D} \coloneqq \mathop{\mathrm{diag}}\nolimits(q^{\pm d_i} \mid i \in I)$, we can write
\begin{equation} \label{eq:X}
C(q,t,\ul{\mu}) = (\mathrm{id}-tX)t^{-1}q^{D},
\end{equation}
for some $\mathbb{Z}[\Gamma_0][t]$-valued matrix $X$.
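Explicitly, comparing the two sides of \eqref{eq:X} entrywise gives
\[ X_{ii} = -q^{-2d_i}t, \qquad X_{ij} = q^{-d_j}[f_{ij}]_{q^{d_i}}\sum_{g=1}^{g_{ij}}\mu_{ij}^{(g)} \quad (i \sim j), \]
and $X_{ij}=0$ for the remaining pairs with $i \neq j$.
This factorization can also be checked on small examples with a computer algebra system. The following SymPy sketch is included purely as an illustration: it treats the rank-$2$ case $c_{12}=c_{21}=-1$ with the minimal symmetrizer and all mass parameters evaluated at $1$, recovers $X$ from \eqref{eq:X}, and verifies that the truncated geometric series in $tX$ inverts $C(q,t)$ up to order $t^{N}$.
\begin{verbatim}
import sympy as sp

q, t = sp.symbols('q t')

# Rank-2 case with c_{12} = c_{21} = -1, minimal symmetrizer D = diag(1, 1),
# and all mass parameters evaluated at 1.
d = [1, 1]
C = sp.Matrix([[q/t + t/q, -1],
               [-1, q/t + t/q]])
qD = sp.diag(*[q**di for di in d])

# Solve C = (Id - t X) t^{-1} q^D for X.
X = ((sp.eye(2) - C * t * qD.inv()) / t).expand()

# Truncated geometric series: C^{-1} = q^{-D} t (Id + X t + X^2 t^2 + ...).
N = 6
Ctilde = qD.inv() * t * sum(((X * t)**k for k in range(N)), sp.zeros(2, 2))

# The difference C * Ctilde - Id only contains terms of order >= t^N.
print((C * Ctilde - sp.eye(2)).expand())
\end{verbatim}
The same computation applies to any GCM once the matrix $C(q,t,\ul{\mu})$ is entered, since only the factorization \eqref{eq:X} is used.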
In particular, the matrix $C(q,t,\ul{\mu})$ is invertible as a $\mathbb{Z}[\Gamma_0](\!(t)\!)$-valued matrix and its inverse $\widetilde{C}(q,t,\ul{\mu}) = (\widetilde{C}_{ij}(q,t,\ul{\mu}))$ is given by
\[ \widetilde{C}(q,t,\ul{\mu}) = q^{-D}t\left( \mathrm{id} + \sum_{k > 0} X^k t^{k} \right).\]
\begin{Ex}
Even if we begin with a non-invertible GCM $C$, we obtain $C(q, t, \ul{\mu})$ as an invertible matrix. For example, if we take
\[C=\begin{pmatrix} 2 & -2 \\ -2 & 2 \end{pmatrix}\quad \text{and}\quad D= \mathop{\mathrm{diag}}\nolimits(1, 1),\]
then we obtain
\begin{equation} \label{eq:A11}
C(q,t,\ul{\mu})= \begin{pmatrix} qt^{-1}+q^{-1}t & -(\mu_{12}^{(1)}+\mu_{12}^{(2)})\\ -(\mu_{21}^{(1)}+\mu_{21}^{(2)}) & qt^{-1}+q^{-1}t \end{pmatrix}.
\end{equation}
Since $\det C(q,t,\ul{\mu}) = q^2 t^{-2}- (\mu_{12}^{(1)}\mu_{21}^{(2)}+\mu_{21}^{(1)}\mu_{12}^{(2)})+q^{-2}t^2 \in \mathbb{Z}[\Gamma_0](\!(t)\!)^{\times}$, our $C(q,t,\ul{\mu})$ is invertible.
\end{Ex}
\begin{Thm} \label{Thm:tCpos}
When $C$ is of infinite type, the matrix $\widetilde{C}(q,t,\ul{\mu})$ has non-negative coefficients, namely we have $\widetilde{C}_{ij}(q,t,\ul{\mu}) \in \mathbb{Z}_{\ge 0}[\Gamma_0][\![t]\!]$ for any $i,j \in I$.
\end{Thm}
A proof will be given in the next section (see Corollary~\ref{Cor:chi}~\eqref{Cor:chi:2} below).
\begin{Rem}
If we evaluate all the deformation parameters except for $q$ at $1$ in \eqref{eq:A11}, we get a $q$-deformed Cartan matrix $C(q)$, which is different from the naive $q$-deformation $C'(q)$, where
\[C(q) = \begin{pmatrix} [2]_q&-2 \\ -2 & [2]_q\end{pmatrix}, \qquad C'(q) = \begin{pmatrix} [2]_q& [-2]_q \\ [-2]_q & [2]_q \end{pmatrix}.\]
Note that $C(q)$ is invertible, while $C'(q)$ is not invertible. See also Remark~\ref{Rem:qC} below for a related discussion on $q$-deformed Cartan matrices. In the context of the representation theory of quantum affinizations, the choice of $q$-deformation of the GCM affects the definition of the algebra. For the quantum affinization of $\widehat{\mathfrak{sl}}_2$, the matrix $C(q)$ was used by Nakajima \cite[Remark 3.13]{Nak11} and also adopted by Hernandez in \cite{Her11}. See \cite[Remark 4.1]{Her11}.
\end{Rem}
\subsection{Braid group actions}\label{Ssec:Braid}
Let $\mathbb{Q}(\Gamma)$ denote the fraction field of $\mathbb{Z}[\Gamma]$. Let $\phi$ be the automorphism of the group $\Gamma$ given by $\phi(q) = q$, $\phi(t) = t$, and $\phi(\mu_{ij}^{(g)}) = \mu_{ji}^{(g)}$ for all possible $i,j \in I$ and $g$. It induces the automorphisms of $\mathbb{Z}[\![\Gamma]\!]$ and $\mathbb{Q}(\Gamma)$, for which we again write $\phi$. We often write $a^{\phi}$ instead of $\phi(a)$. Consider the $\mathbb{Q}(\Gamma)$-vector space $\mathsf{Q}_\Gamma$ given by
\[ \mathsf{Q}_\Gamma \coloneqq \mathbb{Q}(\Gamma)\otimes_{\mathbb{Z}}\mathsf{Q} = \bigoplus_{i \in I} \mathbb{Q}(\Gamma)\alpha_i.\]
We endow $\mathsf{Q}_\Gamma$ with a non-degenerate $\phi$-sesquilinear hermitian form $(-,-)_\Gamma$ by
\[ (\alpha_i, \alpha_j)_\Gamma \coloneqq [d_i]_q C_{ij}(q,t, \ul{\mu})\]
for each $i,j \in I$. Here the term ``$\phi$-sesquilinear hermitian'' means that it satisfies
\[ (ax,by)_{\Gamma} = a^\phi b (x,y)_\Gamma, \quad (x,y)_\Gamma = (y,x)_\Gamma^\phi \]
for any $x,y \in \mathsf{Q}_\Gamma$ and $a,b \in \mathbb{Q}(\Gamma)$. Let $\{\alpha_i^\vee\}_{i \in I}$ be another basis of $\mathsf{Q}_\Gamma$ defined by
\[ \alpha_i^\vee \coloneqq q^{-d_i}t[d_i]_q^{-1} \alpha_i. \]
It can be thought of as a deformation of the simple coroots.
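Indeed, if we specialize $q=t=1$ and all $\mu_{ij}^{(g)}=1$, then $C_{ij}(q,t,\ul{\mu})$ becomes $c_{ij}$, the form $(\alpha_i,\alpha_j)_\Gamma$ becomes the symmetrized pairing $d_i c_{ij}$, and $\alpha_i^\vee$ becomes $d_i^{-1}\alpha_i$, the usual expression of the $i$-th simple coroot under the standard identification.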
We have
\[(\alpha_i^\vee, \alpha_j)_\Gamma = q^{-d_i}tC_{ij}(q,t,\ul{\mu})\]
for any $i,j \in I$. Let $\{ \varpi_i^\vee \}_{i\in I}$ denote the dual basis of $\{\alpha_i\}_{i \in I}$ with respect to $(-,-)_\Gamma$. We also consider the element $\varpi_i \coloneqq [d_i]_q \varpi_i^\vee$ for each $i \in I$. With these conventions, we have
\begin{equation} \label{eq:ao}
\alpha_i = \sum_{j \in I}C_{ji}(q,t,\ul{\mu})\varpi_j, \qquad \alpha_i^\vee = q^{-d_i}t\sum_{j \in I} C_{ij}(q,t,\ul{\mu})^\phi \varpi_j^\vee.
\end{equation}
For each $i \in I$, we define a $\mathbb{Q}(\Gamma)$-linear automorphism $T_i$ of $\mathsf{Q}_\Gamma$ by
\begin{equation} \label{Baction}
T_i x \coloneqq x - (\alpha_i^\vee, x)_\Gamma \alpha_i
\end{equation}
for $x \in \mathsf{Q}_\Gamma$. In terms of the basis $\{ \alpha_i \}_{i \in I}$, we have
\begin{equation} \label{eq:Troot}
T_i \alpha_j = \alpha_j - q^{-d_i}tC_{ij}(q,t,\ul{\mu}) \alpha_i.
\end{equation}
Thus, the action \eqref{Baction} can be thought of as a deformation of the $i$-th simple reflection $s_i$.
\begin{Prop} \label{Prop:braidrel}
The operators $\{ T_i \}_{i \in I}$ define an action of the braid group associated to the Coxeter system $(W, \{s_i\}_{i \in I})$, i.e., they satisfy the braid relations:
\begin{alignat*}{2}
T_i T_j &= T_j T_i &\qquad & \text{if $c_{ij}=0$}, \\
T_i T_j T_i &= T_j T_i T_j && \text{if $c_{ij}c_{ji}=1$}, \\
(T_i T_j)^k &= (T_j T_i)^k && \text{if $c_{ij}c_{ji}=k$ with $k \in \{2,3\}$}.
\end{alignat*}
\end{Prop}
A proof will be given in \S\ref{sec:braid} below (after Lemma~\ref{Lem:JT}). Given $w \in W$, we choose a reduced expression $w = s_{i_1}s_{i_2} \cdots s_{i_l}$ and set $T_w \coloneqq T_{i_1} T_{i_2} \cdots T_{i_l}$. By Proposition~\ref{Prop:braidrel}, $T_w$ does not depend on the choice of reduced expression.
\subsection{Remark on finite type}
In this subsection, we assume that $C$ is of finite type. Since we always have $g_{ij}=1$ in this case, we write $\mu_{ij}$ instead of $\mu_{ij}^{(1)}$. For any $(i,j) \in I \times I$, we define $\mu_{ij} \coloneqq \mu_{i,i_1} \mu_{i_1,i_2} \cdots \mu_{i_k,j}$, where $(i_1, \ldots, i_k)$ is any finite sequence in $I$ such that $i \sim i_1 \sim i_2 \sim \cdots \sim i_k\sim j$. Note that the element $\mu_{ij} \in \Gamma$ does not depend on the choice of such a sequence. Let $[-]_{\ul{\mu} =1} \colon \mathbb{Z}[\Gamma] \to \mathbb{Z}[q^{\pm 1}, t^{\pm 1}]$ denote the map induced from the specialization $\ul{\mu}^{\mathbb{Z}} \to \{1\}$. Recall $C_{ij}(q,t) = [C_{ij}(q,t,\ul{\mu})]_{\ul{\mu} = 1}$ by definition.
\begin{Lem} \label{Lem:fin}
When $C$ is of finite type, for any $i,j \in I$ and a sequence $(i_1,\ldots,i_k)$, we have
\[ (\varpi_i^\vee, T_{i_1} \cdots T_{i_k} \alpha_{j})_\Gamma = \mu_{ij} [(\varpi_i^\vee, T_{i_1} \cdots T_{i_k} \alpha_{j})_\Gamma]_{\ul{\mu}=1}.\]
\end{Lem}
\begin{proof}
By definition, we have $C_{ij}(q,t,\ul{\mu}) = \mu_{ij}C_{ij}(q,t)$ for any $i,j \in I$. Then the assertion follows from \eqref{eq:Troot}.
\end{proof}
Let $w_0 \in W$ be the longest element. It induces an involution $i \mapsto i^*$ of $I$ by $w_0 \alpha_i = - \alpha_{i^*}$. We consider the $\mathbb{Q}(\Gamma)$-linear automorphism $\nu$ of $\mathsf{Q}_\Gamma$ given by $\nu(\alpha_i) = \mu_{i^*i} \alpha_{i^*}$ for each $i \in I$. It is easy to see that $\nu$ is involutive and the pairing $(-,-)_\Gamma$ is invariant under $\nu$. In particular, we have $\nu (\varpi_i^\vee) = \mu_{ii^*}\varpi_{i^*}^\vee$ for each $i \in I$.
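The simplest case already illustrates the next statement: for $C$ of type $\mathrm{A}_1$ with $D=(d_1)$, there are no mass parameters, $1^{*}=1$ and $\nu=\mathrm{id}$, while \eqref{eq:Troot} gives $T_{w_0}\alpha_1 = T_1\alpha_1 = -q^{-2d_1}t^{2}\alpha_1$; this agrees with Proposition~\ref{Prop:longest} below, since $r=d_1$ and the Coxeter and dual Coxeter numbers of type $\mathrm{A}_1$ are $h=h^{\vee}=2$.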
Denote the Coxeter and dual Coxeter numbers associated with $C$ by $h$ and $h^\vee$ respectively. \begin{Prop} \label{Prop:longest} Assume that $C$ is of finite type. We have $T_{w_0} = - q^{-rh^\vee}t^h \nu.$ \end{Prop} \begin{proof} We know that the assertion holds when $\ul{\mu} = 1$ \cite[Theorem~1.6]{FM}. It lifts to the desired formula thanks to Lemma~\ref{Lem:fin}. \end{proof} \subsection{Combinatorial inversion formulas} \label{Ssec:cinv} Let $C$ be a GCM of general type. Let $(i_k)_{k \in \mathbb{Z}_{>0}}$ and $(j_k)_{k \in \mathbb{Z}_{>0}}$ be two sequences in $I$. We say that $(i_k)_{k \in \mathbb{Z}_{>0}}$ is \emph{commutation-equivalent} to $(j_k)_{k \in \mathbb{Z}_{>0}}$ if there is a bijection $\sigma \colon \mathbb{Z}_{> 0} \to \mathbb{Z}_{>0}$ such that $i_{\sigma(k)} = j_k$ for all $k \in \mathbb{Z}_{>0}$ and we have $c_{i_k, i_l} =0$ whenever $k < l$ and $\sigma(k)> \sigma(l)$. \begin{Thm} \label{Thm:inv1} Let $(i_k)_{k \in \mathbb{Z}_{>0}}$ be a sequence in $I$ satisfying the following condition: \begin{enumerate} \item if $C$ is of finite type, $(i_k)_{k \in \mathbb{Z}_{>0}}$ is commutation-equivalent to another sequence $(j_k)_{k \in \mathbb{Z}_{>0}}$ such that the subsequence $(j_1, \ldots, j_l)$ is a reduced word with $l$ being the length of the longest element $w_0 \in W$ and we have $j_{k+l} = j_{k}^*$ for all $k \in \mathbb{Z}_{>0}$; \item if $C$ is of infinite type, the subsequence $(i_1, i_2, \ldots, i_k)$ is a reduced word for all $k \in \mathbb{Z}_{>0}$, and we have $|\{ k \in \mathbb{Z}_{>0} \mid i_k = i\}| = \infty$ for each $i \in I$. \end{enumerate} Then, for any $i, j \in I$, we have \begin{equation} \label{eq:inv1} \widetilde{C}_{ij}(q,t, \ul{\mu}) = q^{-d_j}t\sum_{k \in \mathbb{Z}_{>0}; i_k = j}(\varpi_i^\vee, T_{i_1} \cdots T _{i_{k-1}}\alpha_j)_\Gamma. \end{equation} \end{Thm} \begin{proof}[Proof of Theorem~\ref{Thm:inv1} for finite type] Note that the RHS of \eqref{eq:inv1} is unchanged if we replace the sequence $(i_1,i_2,\ldots)$ with another commutation-equivalent sequence thanks to Proposition~\ref{Prop:braidrel}. When $C$ is of finite type, we know that the equality \eqref{eq:inv1} holds at $\ul{\mu}=1$ by \cite[Proposition 3.16]{FM}. Since we have $\widetilde{C}_{ij}(q,t,\ul{\mu}) = \mu_{ij}\widetilde{C}_{ij}(q,t)$ for any $i,j \in I$, we can deduce \eqref{eq:inv1} for general $\ul{\mu}$ thanks to Lemma~\ref{Lem:fin}. \end{proof} A proof when $C$ is of infinite type will be given in \S\ref{sec:braid} below (after Corollary~\ref{Cor:braid:ac}). In the remaining part of this section, we discuss the special case of the above inversion formula~\eqref{eq:inv1} when the sequence comes from a Coxeter element and deduce a recursive algorithm to compute $\widetilde{C}(q,t,\ul{\mu})$. Fix an acyclic orientation $\Omega$ of $C$. We say that a total ordering $I =\{i_1, \ldots, i_n\}$ is \emph{compatible with $\Omega$} if $(i_k, i_l) \in \Omega$ implies $k < l$. Taking a compatible total ordering, we define the Coxeter element $\tau_\Omega \coloneqq s_{i_1}\cdots s_{i_n}$. The assignment $\Omega \mapsto \tau_\Omega$ gives a well-defined bijection between the set of acyclic orientations of $C$ and the set of Coxeter elements of $W$. In what follows, we abbreviate $T_\Omega \coloneqq T_{\tau_\Omega}$. 
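For instance, in the rank-$2$ case with $I=\{1,2\}$ and $1 \sim 2$, the two acyclic orientations $\{(1,2)\}$ and $\{(2,1)\}$ correspond to the Coxeter elements $s_1s_2$ and $s_2s_1$, so that $T_\Omega$ is $T_1T_2$ or $T_2T_1$, respectively.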
Letting $I = \{i_1,\ldots,i_n \}$ be a total ordering compatible with $\Omega$, for each $i \in I$, we set \begin{equation} \label{eq:hgamma} \beta^\Omega_i \coloneqq (1-T_\Omega)\varpi_i = q^{-d_i}tT_{i_1} T_{i_2} \cdots T_{i_{k-1}}\alpha_{i_k} \qquad \text{if $i = i_k$}. \end{equation} Note that the resulting element $\beta^\Omega_i$ is independent of the choice of the compatible ordering. \begin{Prop} \label{Prop:inv2} Let $\Omega$ be an acyclic orientation of $C$. For any $i,j \in I$, we have \begin{equation} \label{eq:inv2} \widetilde{C}_{ij}(q,t,\ul{\mu}) = \sum_{k=0}^\infty (\varpi_i^\vee, T^{k}_\Omega \beta^\Omega_j)_\Gamma. \end{equation} \end{Prop} \begin{proof} Choose a total ordering $I = \{i_1, \ldots, i_n\}$ compatible with $\Omega$. Then we have $T_\Omega = T_{i_1} \cdots T_{i_n}$. We extend the sequence $(i_1, \ldots, i_n)$ to an infinite sequence $(i_k)_{k \in \mathbb{Z}_{>0}}$ so that $i_{k+n} = i_k$ for all $k \in \mathbb{N}$. When $C$ is of infinite type, this sequence satisfies the condition in Theorem~\ref{Thm:inv1} by~\cite{Speyer}, and hence we obtain \eqref{eq:inv2}. When $C$ is of finite type, we know that the subsequence $(i_1,\ldots, i_{2l}) = (i_1, \ldots, i_n)^{h}$ is commutation-equivalent to a sequence $(j_1, \ldots, j_{2l})$ such that $(j_1,\ldots,j_l)$ is a reduced word (adapted to $\Omega$) for the longest element $w_0$ and $j_{k+l} = j_k^*$ for all $1 \le k \le l$. Indeed, when $C$ is of simply-laced type, it follows from~\cite{Bed}. When $C$ is of non-simply-laced type, we simply have $\tau_\Omega^{h/2} = w_0$ and $(i_1, \ldots, i_n)^{h/2}$ is a reduced word for $w_0$. Therefore Theorem~\ref{Thm:inv1} again yields \eqref{eq:inv2}. \end{proof} \begin{Lem} \label{Lem:mesh1} For each $i \in I$ and $k \in \mathbb{N}$, we have \begin{equation} \label{eq:rec1} q^{d_i} t^{-1} T^{k+1}_\Omega \beta^\Omega_i + q^{-d_i}t T^{k}_\Omega \beta^\Omega_i + \sum_{j \sim i}C_{ji}(q,t,\ul{\mu}) T_\Omega^{k+\delta((j,i)\in \Omega)}\beta^\Omega_j =0. \end{equation} \end{Lem} \begin{proof} For any $i, j \in I$, we have \[ T_i \varpi_j = \begin{cases} -q^{-2d_i}t^{2}\varpi_i - q^{-d_i}t\sum_{i' \sim i}C_{i'i}(q,t,\ul{\mu})\varpi_{i'} & \text{if $i=j$}, \\ \varpi_j & \text{if $i\neq j$} \end{cases} \] by definition. Using this identity, we obtain \[ q^{d_i}t T_\Omega \varpi_i =- q^{-d_i}t\varpi_i -\sum_{j \sim i} C_{ji}(q,t,\ul{\mu}) T_\Omega^{\delta((j,i)\in \Omega)}\varpi_j.\] Applying $T^{k}_\Omega(1-T_\Omega)$ yields \eqref{eq:rec1}. \end{proof} Once we fix a total ordering $I= \{i_1,\ldots,i_n\}$ compatible with $\Omega$, the equalities \eqref{eq:hgamma} and \eqref{eq:rec1} compute the elements $T^{k}_\Omega \beta^\Omega_i$ for all $(k,i) \in \mathbb{Z}_{\ge 0} \times I$ recursively along the lexicographic total ordering of $\mathbb{Z}_{\ge 0} \times I$. Thus, together with \eqref{eq:inv2}, we have obtained a recursive algorithm to compute $\widetilde{C}_{ij}(q,t, \ul{\mu})$. We say that a GCM $C$ is \emph{bipartite} if there is a function $\epsilon \colon I \to \mathbb{Z} / 2\mathbb{Z}$ such that $\epsilon(i) = \epsilon(j)$ implies $i \not\sim j$. When $C$ is bipartite, we can simplify the above recursive formula by separating the parameter $t$ as explained below. For each $i \in I$, we consider a $\mathbb{Q}(\Gamma)$-linear automorphism $\bar{T}_i$ of $\mathsf{Q}_\Gamma$ obtained from $T_i$ by evaluating the parameter $t$ at $1$. More precisely, it is given by \[ \bar{T}_i \alpha_j = \alpha_j - q^{-d_i}C_{ij}(q,1,\ul{\mu}) \alpha_i \] for all $j \in I$. 
The operators $\{\bar{T}_i\}_{i \in I}$ define another action of the braid group, under which the $\mathbb{Q}(\Gamma_0)$-subspace $\mathsf{Q}_{\Gamma_0} \coloneqq \bigoplus_{i \in I}\mathbb{Q}(\Gamma_0)\alpha_i$ of $\mathsf{Q}_\Gamma$ is stable. \begin{Def} A function $\xi \colon I \to \mathbb{Z}$ is called \emph{a height function} (for $C$) if \[ |\xi(i) - \xi(j)| = 1 \quad \text{for all $i, j \in I$ with $i \sim j$}. \] A height function $\xi$ gives an acyclic orientation $\Omega_\xi$ of $C$ such that we have $(i, j) \in \Omega_\xi$ if $i \sim j$ and $\xi(j) = \xi(i) + 1$. When $i \in I$ is a sink of $\Omega_\xi$, in other words, when $\xi(i) < \xi(j)$ holds for all $j \in I$ with $j \sim i$, we define another height function $s_i \xi$ by \[ (s_i \xi)(j) \coloneqq \xi(j)+2\delta_{i,j}.\] \end{Def} \begin{Rem} There exists a height function for $C$ if and only if $C$ is bipartite. \end{Rem} Given a function $\xi \colon I \to \mathbb{Z}$, we define a linear automorphism $t^\xi$ of $\mathsf{Q}_{\Gamma}$ by \[ t^\xi \alpha_i \coloneqq t^{\xi(i)}\alpha_i \] for each $i \in I$. When $\xi\colon I \to \mathbb{Z}$ is a height function and $i \in I$ a sink of $\Omega_\xi$, a straightforward computation yields $t^{\xi} T_i = \bar{T}_i t^{s_i \xi}$, from which we deduce \begin{equation} \label{eq:hT&T} t^{\xi} T_{\Omega_\xi} = \bar{T}_{\Omega_\xi} t^{\xi + 2}. \end{equation} \begin{Def} Let $\xi \colon I \to \mathbb{Z}$ be a height function. Define a map $\Phi_{\xi} \colon I \times \mathbb{Z} \to \mathsf{Q}_{\Gamma_0}$ by \[ \Phi_\xi(i,u) \coloneqq \begin{cases} \bar{T}^{k}_{\Omega_\xi}(1-\bar{T}_{\Omega_\xi})\varpi_i & \text{if $u = \xi(i) + 2k$ for some $k \in \mathbb{Z}_{\ge 0}$,} \\ 0 & \text{else}. \end{cases} \] \end{Def} The next proposition is a consequence of Proposition~\ref{Prop:inv2} and \eqref{eq:hT&T}. \begin{Prop} \label{Prop:inv3} Let $\xi \colon I \to \mathbb{Z}$ be a height function. For any $i,j \in I$, we have \begin{equation} \label{eq:inv3} \widetilde{C}_{ij}(q,t,\ul{\mu}) = \sum_{u = \xi(j)}^\infty\left(\varpi_i^\vee, \Phi_\xi(j,u)\right)_{\Gamma}t^{u-\xi(i)+1}. \end{equation} \end{Prop} Now, Lemma~\ref{Lem:mesh1} specializes to the following. \begin{Lem} \label{Lem:mesh2} Let $\xi \colon I \to \mathbb{Z}$ be a height function. For any $(i,u) \in I \times \mathbb{Z}$ with $u > \xi(i)$, we have \begin{equation} \label{eq:Phi} q^{-d_i} \Phi_\xi(i,u-1)+q^{d_i}\Phi_\xi(i,u+1) + \sum_{j \sim i} C_{ji}(q,1,\ul{\mu}) \Phi_\xi(j,u) =0. \end{equation} In particular, \eqref{eq:Phi} enables us to compute recursively all the $\Phi_\xi(i,u)$ starting from \[\Phi_{\xi}(i, \xi(i)) = (1-\bar{T}_{\Omega_\xi})\varpi_i= q^{-d_i}\bar{T}_{i_1}\cdots \bar{T}_{i_{k-1}}\alpha_i \quad \text{for all $i \in I$},\] where $I = \{i_1,\ldots,i_n\}$ is a total ordering compatible with $\Omega_\xi$ and $i_k=i$. \end{Lem} Thus, Proposition~\ref{Prop:inv3} combined with Lemma~\ref{Lem:mesh2} gives a simpler recursive algorithm to compute $\widetilde{C}_{ij}(q,t,\ul{\mu})$ when $C$ is bipartite. \begin{Rem} When $C$ is of finite type, the formula~\eqref{eq:inv3} recovers the formulas in \cite[Proposition 2.1]{HL15} (type ADE) and \cite[Theorem 4.7]{KO} (type BCFG) after the specialization $\Gamma_0 \to \{1\}$. \end{Rem} \begin{Rem} When $C$ is of finite type, the above algorithm can be used to compute $\widetilde{C}(q,t,\ul{\mu})$ (or $\widetilde{C}(q,t)$) completely. For example, let us give an explicit formula of $\widetilde{C}(q,t)$ for type $\mathrm{F}_4$. 
We use the convention $I = \{1,2,3,4\}$ with $1 \sim 2 \sim 3 \sim 4$ and $(d_1,d_2,d_3,d_4) = (2,2,1,1)$. Then, for any $i \le j$, we have \[\widetilde{C}_{ij}(q,t) = \frac{f_{ij}(q,t)+f_{ij}(q^{-1},t^{-1})}{q^9t^{-6}+q^{-9}t^6}\] where $f_{ij} = f_{ij}(q,t)$ is given by \begin{align*} f_{11} &= q^7t^{-5}+qt^{-1}, & f_{12} &= q^5t^{-4}+q^3t^{-2}+q, \allowdisplaybreaks \\ f_{13} &= q^4t^{-3}+q^2t^{-1}, & f_{14} &= q^3t^{-2}, \allowdisplaybreaks \\ f_{22} &= q^7t^{-5}+(q^5+q^3)t^{-3}+(q^3+2q)t^{-1}, & f_{23} &= q^6t^{-4}+(q^4+q^2)t^{-2}+1, \allowdisplaybreaks \\ f_{24} &= q^5t^{-3}+qt^{-1}, & f_{33} &= q^8t^{-5}+(q^6+q^4)t^{-3}+(2q+1)t^{-1}, \allowdisplaybreaks \\ f_{34} &= q^7t^{-4}+q^3t^{-2}+q, & f_{44} &= q^8t^{-5}+q^2t^{-1}. \end{align*} For the other case $i > j$, we can use the relation $[d_i]_q \widetilde{C}_{ij}(q,t) = [d_j]_q \widetilde{C}_{ji}(q,t)$. When $C$ is of type $\mathrm{ABCD}$, an explicit formula of $\widetilde{C}(q,t)$ is given in \cite[Appendix C]{FR98}. When $C$ is of type $\mathrm{ADE}$, we have $\widetilde{C}(q,t) = \widetilde{C}(qt^{-1},1)$ and an explicit formula of $\widetilde{C}(q) = \widetilde{C}(q, 1)$ is given in \cite[Appendix A]{GTL} (see also \cite[\S\S 4.4.1,4.4.2]{KO}). \end{Rem} \section{Generalized preprojective algebras} \label{Sec:gpa} Throughout this section, we fix an arbitrary field $\Bbbk$. Unless specified otherwise, vector spaces and algebras are defined over $\Bbbk$, and modules are left modules. \subsection{Conventions} Let $Q$ be a finite quiver. We understand it as a quadruple $Q=(Q_0, Q_1, \mathrm{s}, \mathrm{t})$, where $Q_0$ is the set of vertices, $Q_1$ is the set of arrows and $\mathrm{s}$ (resp.~$\mathrm{t}$) is the map $Q_1 \to Q_0$ which assigns each arrow with its source (resp.~target). For a quiver $Q$, we set $\Bbbk Q_0 \coloneqq \bigoplus_{i \in Q_0} \Bbbk e_i$ and $\Bbbk Q_1 \coloneqq \bigoplus_{\alpha \in Q_1} \Bbbk \alpha$. We endow $\Bbbk Q_0$ with a $\Bbbk$-algebra structure by $e_i \cdot e_j = \delta_{ij} e_i$ for any $i,j \in Q_0$, and $\Bbbk Q_1$ with a $(\Bbbk Q_0, \Bbbk Q_0)$-bimodule structure by $e_i \cdot \alpha = \delta_{i, \mathrm{t}(\alpha)} \alpha$ and $\alpha \cdot e_i = \delta_{i, \mathrm{s}(\alpha)} \alpha$ for any $i \in Q_0$ and $\alpha \in Q_1$. Then the path algebra of $Q$ is defined to be the tensor algebra $\Bbbk Q \coloneqq T_{\Bbbk Q_0}(\Bbbk Q_1)$. Let $G$ be a multiplicative abelian group with unit $1$. By a $G$-graded quiver, we mean a quiver $Q$ equipped with a map $\deg \colon Q_1 \to G$. We regard its path algebra $\Bbbk Q$ as a $G$-graded algebra in the natural way. We say that a $G$-graded vector space $V = \bigoplus_{g \in G}V_g$ is locally finite if $V_g$ is of finite dimension for all $g \in G$. In this case, we define its graded dimension $\dim_G V$ to be the formal sum $\sum_{g \in G}\dim_\Bbbk (V_g) g \in \mathbb{Z}[\![G]\!]$. For a $G$-graded vector space $V$ and an element $x \in G$, we define the grading shift $xV = \bigoplus_{g \in G}(xV)_{g \in G}$ by $(xV)_g = V_{x^{-1}g}$. More generally, for $a = \sum_{g \in G} a_g g \in \mathbb{Z}_{\ge 0}[\![G]\!]$, we set $V^{\oplus a} \coloneqq \bigoplus_{g \in G} (gV)^{\oplus a_g}$. When $V^{\oplus a}$ happens to be locally finite, we have $\dim_G V^{\oplus a} = a \dim_G V$. \subsection{Preliminary on positively graded algebras} \label{Ssec:pga} Let $t^{\mathbb{Z}}$ denote a free abelian group generated by a non-trivial element $t$. 
In what follows, we consider the case when $G$ is a direct product $G = G_0 \times t^{\mathbb{Z}}$, where $G_0$ is another abelian group. Our principal example is the group $\Gamma = t^\mathbb{Z} \times \Gamma_0$ in \S\ref{Ssec:DCM}. For a $G$-graded vector space $V = \bigoplus_{g \in G} V_g$ and $n \in \mathbb{Z}$, we define the $G_0$-graded subspace $V_n \subset V$ by $V_n \coloneqq \bigoplus_{g \in G_0}V_{t^n g}$. By definition, we have $V = \bigoplus_{n \in \mathbb{Z}}V_n$. We use the notation $V_{\ge n} \coloneqq \bigoplus_{m \ge n} V_m$ and $V_{>n} \coloneqq \bigoplus_{m > n}V_m$.

We consider a $G$-graded algebra $\Lambda$ satisfying the following condition:
\begin{itemize}
\item[(A)] $\Lambda = \Lambda_{\ge 0}$ and $\dim_\Bbbk \Lambda_n < \infty$ for each $n \in \mathbb{Z}_{\ge 0}$.
\end{itemize}
In particular, $\Lambda_0$ is a $G_0$-graded finite dimensional algebra. Let $\{ S_j\}_{j \in J}$ be a complete collection of $G_0$-graded simple modules of $\Lambda_0$ up to isomorphism and grading shift. It also gives a complete collection of $G$-graded simple modules of $\Lambda$. For a $G$-graded $\Lambda$-module $M$, the subspace $M_{\ge n} \subset M$ is a $\Lambda$-submodule for each $n \in \mathbb{Z}$. Let $\Lambda \text{-$\mathrm{mod}$}_G^{\ge n}$ denote the category of $G$-graded $\Lambda$-modules $M$ satisfying $M = M_{\ge n}$ and $\dim_\Bbbk M_m < \infty$ for all $m \ge n$, whose morphisms are $G$-homogeneous $\Lambda$-homomorphisms. This is a $\Bbbk$-linear abelian category. Let $\Lambda \text{-$\mathrm{mod}$}_G^+ \coloneqq \bigcup_{n \in \mathbb{Z}} \Lambda \text{-$\mathrm{mod}$}_G^{\ge n}$. Note that $\Lambda \text{-$\mathrm{mod}$}_G^{+}$ contains all the finitely generated $G$-graded $\Lambda$-modules, because it contains their projective covers by the condition (A).
\begin{Lem} \label{Lem:PM}
Given $n \in \mathbb{Z}$ and $M \in \Lambda \text{-$\mathrm{mod}$}_G^{\ge n}$, there is a surjection $P \twoheadrightarrow M$ from a projective $\Lambda$-module $P$ belonging to $\Lambda \text{-$\mathrm{mod}$}_G^{\ge n}$.
\end{Lem}
\begin{proof}
For each $m \ge n$, let $P_m \twoheadrightarrow M_m$ be a projective cover of $M_m$ regarded as a $G_0$-graded $\Lambda_0$-module. Then consider the $G$-graded projective $\Lambda$-module $P \coloneqq \Lambda \otimes_{\Lambda_0}\bigoplus_{m \ge n} t^mP_m$, which carries a natural surjection $P \twoheadrightarrow M$. This $P$ belongs to $\Lambda \text{-$\mathrm{mod}$}_G^{\ge n}$ because $\dim_G P$ is not greater than $\dim_{G} \Lambda \cdot \sum_{m \ge n}t^m \dim_{G_0} P_m$, which belongs to $\mathbb{Z}[G_0][\![t]\!] t^n$.
\end{proof}
For an abelian category $\mathcal{C}$, we denote by $K(\mathcal{C})$ its Grothendieck group. We regard $K(\Lambda \text{-$\mathrm{mod}$}_G^{\ge n})$ as a subgroup of $K(\Lambda \text{-$\mathrm{mod}$}_G^+)$ via the inclusion for any $n \in \mathbb{Z}$. Then, the collection of subgroups $\{ K(\Lambda \text{-$\mathrm{mod}$}_G^{\ge n})\}_{n \in \mathbb{Z}}$ gives a filtration of $K(\Lambda \text{-$\mathrm{mod}$}_G^{+})$. We define the completed Grothendieck group $\hat{K}(\Lambda \text{-$\mathrm{mod}$}_G^+)$ to be the projective limit
\[ \hat{K}(\Lambda \text{-$\mathrm{mod}$}_G^+) \coloneqq \varprojlim_{n} K(\Lambda \text{-$\mathrm{mod}$}_G^{+})/K(\Lambda \text{-$\mathrm{mod}$}_G^{\ge n}).
\] Note that $\hat{K}(\Lambda \text{-$\mathrm{mod}$}_G^+)$ carries a natural $\mathbb{Z}[G_0](\!( t )\!)$-module structure given by $a [M] = [M^{\oplus a_+}] - [M^{\oplus a_-}]$, where we choose $a_+, a_- \in \mathbb{Z}_{\ge 0}[G_0](\!(t)\!)$ so that $a = a_+ - a_-$. \begin{Lem} \label{Lem:Sbasis} The $\mathbb{Z}[G_0](\!(t)\!)$-module $\hat{K}(\Lambda \text{-$\mathrm{mod}$}_G^+)$ is free with a basis $\{[S_j]\}_{j \in J}$. \end{Lem} \begin{proof} For any $n \in \mathbb{Z}$ and $M \in \Lambda\text{-$\mathrm{mod}$}^{\ge n}_G$, we have a unique expression \[[M] = \sum_{j\in J}\left(\sum_{m \geq n}[M_m:S_j]_{G_0}t^m\right)[S_j]\] in $\hat{K}(\Lambda \text{-$\mathrm{mod}$}_G^+)$, where $[M_m:S_j]_{G_0} \in \mathbb{Z}[G_0]$ denotes the $G_0$-graded Jordan-H\"older multiplicity of $S_j$ in the finite length $G_0$-graded $\Lambda_0$-module $M_m$. This proves the assertion. \end{proof} \subsection{Generalized preprojective algebras} \label{Ssec:GPA} We fix a GCM $C = (c_{ij})_{i,j \in I}$ and its symmetrizer $D = \mathop{\mathrm{diag}}\nolimits(d_i \mid i \in I)$ as in \S\ref{Ssec:notation}. Recall the free abelian group $\Gamma$ in \S\ref{Ssec:DCM}. We consider the quiver $\widetilde{Q} = (\widetilde{Q}_0, \widetilde{Q}_1, \mathrm{s}, \mathrm{t})$ given as follows: \begin{gather*} \widetilde{Q}_0 = I, \quad \widetilde{Q}_1 = \{ \alpha_{ij}^{(g)} \mid (i,j, g) \in I \times I \times \mathbb{Z}, i \sim j, 1 \le g \le g_{ij}\} \cup \{ \varepsilon_i \mid i \in I \}, \\ \mathrm{s}(\alpha_{ij}^{(g)}) = j, \quad \mathrm{t}(\alpha_{ij}^{(g)})= i, \quad \mathrm{s}(\varepsilon_i)=\mathrm{t}(\varepsilon_i)=i. \end{gather*} We equip the quiver $\widetilde{Q}$ with a $\Gamma$-grading by \begin{equation} \label{eq:deg} \deg (\alpha_{ij}^{(g)}) = q^{-d_if_{ij}}t\mu^{(g)}_{ij}, \qquad \deg(\varepsilon_i) = q^{2d_i}. \end{equation} Let $\Omega$ be an acyclic orientation of $C$. We define the associated potential $W_\Omega \in \Bbbk \widetilde{Q}$ by \begin{equation} \label{eq:potential} W_\Omega = \sum_{i, j \in I; i \sim j} \sum_{g=1}^{g_{ij}} \mathrm{sgn}_\Omega(i,j)\alpha_{ij}^{(g)} \alpha_{ji}^{(g)}\varepsilon_{i}^{f_{ij}}, \end{equation} where $\mathrm{sgn}_\Omega(i,j) \coloneqq(-1)^{\delta((j,i) \in \Omega)}$. Note that $W_\Omega$ is homogeneous of degree $t^2$. We define the $\Gamma$-graded $\Bbbk$-algebra $\widetilde{\Pi}$ to be the quotient of $\Bbbk\widetilde{Q}$ by the ideal generated by all the cyclic derivations of $W_\Omega$. In other words, the algebra $\widetilde{\Pi}$ is the quotient of $\Bbbk \widetilde{Q}$ by the following two kinds of relations: \begin{itemize} \item[(R1)] $\varepsilon_i^{f_{ij}} \alpha_{ij}^{(g)} = \alpha_{ij}^{(g)} \varepsilon_j^{f_{ji}}$ for any $i,j \in I$ with $i \sim j$ and $1 \le g \le g_{ij}$; \item[(R2)] $\displaystyle \sum_{j \in I: j\sim i}\sum_{g=1}^{g_{ij}}\sum_{k = 0}^{f_{ij}-1}\mathrm{sgn}_\Omega(i,j) \varepsilon_i^k \alpha_{ij}^{(g)} \alpha_{ji}^{(g)} \varepsilon_i^{f_{ij}-1-k} =0$ for each $i \in I$. \end{itemize} \begin{Rem} \label{Rem:ori} Although the definition of the algebra $\widetilde{\Pi}$ depends on the choice of acyclic orientation $\Omega$, it is irrelevant. In fact, a different choice of $\Omega$ gives rise to an isomorphic $\Gamma$-graded algebra. Moreover, one may define $\widetilde{\Pi}$ with more general orientation (i.e., without acyclic condition, as in \S\ref{Ssec:KP} below). Even if we do so, the resulting $\Gamma$-graded algebra is isomorphic to our $\widetilde{\Pi}$. 
\end{Rem} For a positive integer $\ell \in \mathbb{Z}_{>0}$, we define the $\Gamma$-graded algebra $\Pi(\ell)$ to be the quotient \[ \Pi(\ell) = \widetilde{\Pi}/ (\varepsilon^\ell), \] where $\varepsilon \coloneqq \sum_{i \in I} \varepsilon^{r/d_i}$. Note that $\varepsilon$ is homogeneous and central in $\widetilde{\Pi}$. The algebra $\Pi(\ell)$ is identical to the generalized preprojective algebra $\Pi({}^\mathtt{t}C, \ell rD^{-1}, \Omega)$ in the sense of \cite{GLS}. In other words, it is the quotient of $\Bbbk \widetilde{Q}$ by the three kinds of relations: (R1), (R2), and \begin{itemize} \item[(R3)] $\varepsilon_i^{r\ell / d_i} = 0$ for each $i \in I$. \end{itemize} \begin{Lem} \label{Lem:tpos} For any $\ell \in \mathbb{Z}_{>0}$, the algebra $\Pi(\ell)$ satisfies the condition {\rm(A)} in {\rm \S\ref{Ssec:pga}}. \end{Lem} \begin{proof} The fact $\Pi(\ell)_{\ge 0} =\Pi(\ell)$ is clear from the definition \eqref{eq:deg}. For any $n \in \mathbb{Z}_{\ge 0}$, thanks to the relation {\rm (R3)}, the vector space $\Pi(\ell)_n$ is spanned by a finite number of vectors in \[ \{ \varepsilon_{i_0}^{m_0} \alpha^{(g_1)}_{i_0,i_1} \varepsilon_{i_1}^{m_1} \alpha^{(g_2)}_{i_1,i_2} \cdots \varepsilon_{i_{n-1}}^{m_{n-1}}\alpha^{(g_{n})}_{i_{n-1},i_n}\varepsilon_{i_n}^{m_n} \mid i_k \in I, 0 \le m_k < r \ell/d_{i_k}, 1 \le g_k \le g_{i_{k-1},i_k} \}.\] Therefore, we have $\dim_\Bbbk \Pi(\ell)_n < \infty$. \end{proof} In what follows, we fix $\ell \in \mathbb{Z}_{>0}$ and write $\Pi$ for $\Pi(\ell)$ for the sake of brevity. By the definition, we have \[ \Pi_0 \cong \prod_{i \in I} H_i, \qquad \text{where $H_i \coloneqq \Bbbk[\varepsilon_i]/(\varepsilon_i^{r\ell/d_i})$}. \] In particular, for each $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$ and $n \in \mathbb{Z}$, the subspace $e_i M_n$ is a finite-dimensional $H_i$-module for each $i \in I$. We say that $M$ is locally free if $e_i M_n$ is a free $H_i$-module for any $n \in \mathbb{Z}$ and $i \in I$, or equivalently $M_n$ is a projective $\Pi_0$-module for any $n \in \mathbb{Z}$. In this case, we set $\operatorname{\mathrm{rank}}_{i} M \coloneqq \dim_\Gamma e_i (M/\varepsilon_i M) \in \mathbb{Z}[\Gamma_0](\!(t)\!)$. \begin{Thm}[{\cite[\S 11]{GLS}}] As a (left) $\Pi$-module, $\Pi$ is locally free in itself. \end{Thm} For each $i \in I$, let $P_i \coloneqq \Pi e_i$ be the indecomposable projective $\Pi$-module associated to the vertex $i$ and $S_i$ its simple quotient. Consider the two-sided ideal $J_i \coloneqq \Pi (1-e_i) \Pi$. We have $\Pi / J_i \cong H_i$ as $\Gamma$-graded algebras. We write $E_i$ for $\Pi / J_i$ when we regard it as a $\Gamma$-graded left $\Pi$-module. This is a locally free $\Pi$-module characterized by $\operatorname{\mathrm{rank}}_j E_i = \delta_{i,j}$. In $\hat{K}(\Pi\text{-$\mathrm{mod}$}_\Gamma^+)$, we have \begin{equation} \label{eq:E=S} [E_i] = \frac{1-q^{2r\ell}}{1-q^{2d_i}}[S_i]. \end{equation} There is the anti-involution $\phi \colon \Pi \rightarrow \Pi^{\mathrm{op}}$ given by the assignment \[\phi(e_i) \coloneqq e_i, \qquad \phi(\alpha_{ij}^{(g)}) \coloneqq \alpha_{ji}^{(g)}, \qquad \phi(\varepsilon_i) \coloneqq \varepsilon_i.\] Recall the automorphism of the group $\Gamma$ also denoted by $\phi$ in \S\ref{Ssec:Braid}. By definition, if $x \in \Pi$ is homogeneous of degree $\gamma \in \Gamma$, then $\phi(x)$ is homogeneous of degree $\phi(\gamma)$. For a left $\Pi$-module $M$, let $M^\phi$ be the right $\Pi$-module obtained by twisting the original left $\Pi$-module structure by the opposition $\phi$. 
If $M$ is $\Gamma$-graded, $M^\phi$ is again $\Gamma$-graded by setting $(M^\phi)_\gamma \coloneqq M_{\phi(\gamma)}$. In particular, for $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$, we have $\dim_\Gamma (M^\phi) = (\dim_\Gamma M)^\phi$. \subsection{Projective resolutions} Following \cite[\S 5.1]{GLS}, for each $i, j \in I$ with $i \sim j$, we define the bigraded $(H_i, H_j)$-bimodule ${}_i H_j$ by \[ {}_i H_j \coloneqq \sum_{g=1}^{g_{ij}} H_i \alpha_{ij}^{(g)} H_j \subset \Pi. \] It is free as a left $H_i$-module and free as a right $H_j$-module. Moreover, the relation $(R1)$ gives the following: \[ {}_i H_j = \bigoplus_{k=0}^{f_{ji}-1}\bigoplus_{g=1}^{g_{ij}}H_i(\alpha_{ij}^{(g)}\varepsilon_j^k) = \bigoplus_{k=0}^{f_{ij}-1}\bigoplus_{g=1}^{g_{ij}}(\varepsilon_i^k \alpha_{ij}^{(g)}) H_j. \] In particular, we get the following lemma. \begin{Lem}\label{lem:iHjCartan} For $i, j\in I$ with $i\sim j$, we have two isomorphisms \[ {}_{H_i}({}_i H_j) \cong H_i^{\oplus (-q^{-d_j}tC_{ji}(q,t,\ul{\mu})^\phi)}, \qquad ({}_i H_j) {}_{H_j} \cong H_j^{\oplus (-q^{-d_i}tC_{ij}(q,t, \ul{\mu}))} \] as $\Gamma$-graded left $H_i$-modules and as $\Gamma$-graded right $H_j$-modules respectively. \end{Lem} Consider the following sequence of $\Gamma$-graded $(\Pi, \Pi)$-bimodules: \begin{equation} \label{eq:bmodres} \bigoplus_{i\in I} q^{-2d_i} t^2 \Pi e_i \otimes_i e_i \Pi \xrightarrow{\psi} \bigoplus_{i, j \in I: i \sim j} \Pi e_j \otimes_j {}_jH_i \otimes_i e_i \Pi \xrightarrow{\varphi} \bigoplus_{i\in I} \Pi e_i \otimes_i e_i \Pi \rightarrow \Pi \rightarrow 0, \end{equation} where $\otimes_i \coloneqq \otimes_{H_i}$ and the morphisms $\psi$ and $\varphi$ are given by \begin{align*} \psi(e_i \otimes e_i) &\coloneqq \sum_{j \sim i}\sum_{k=0}^{f_{ij}-1}\sum_{g=1}^{g_{ij}} \mathrm{sgn}_\Omega(i,j) \left(\varepsilon_i^{k} \alpha_{ij}^{(g)} \otimes \alpha_{ji}^{(g)} \varepsilon_i^{f_{ij}-1-k}\otimes e_i + e_i \otimes \varepsilon_i^{k} \alpha_{ij}^{(g)} \otimes \alpha_{ji}^{(g)} \varepsilon_i^{f_{ij}-1-k}\right), \\ \varphi(e_j \otimes x\otimes e_i) &\coloneqq x \otimes e_i + e_j \otimes x. \end{align*} The other arrows $\bigoplus_{i\in I} \Pi e_i \otimes_i e_i \Pi \to \Pi \to 0$ are canonical. The relation (R2) ensures that the sequence~\eqref{eq:bmodres} forms a complex. For each $i \in I$, applying $(-)\otimes_{\Pi} E_i$ to \eqref{eq:bmodres} yields the following complex of $\Gamma$-graded (left) $\Pi$-modules: \begin{equation} \label{res3} q^{-2d_i}t^2 P_i \xrightarrow{\psi^{(i)}} \bigoplus_{j\sim i} P_j^{\oplus (-q^{-d_i}tC_{ij}(q,t,\ul{\mu})^\phi)} \xrightarrow{\varphi^{(i)}} P_i \to E_i \to 0. \end{equation} Here we used Lemma~\ref{lem:iHjCartan}. \begin{Thm}[{\cite[Proposition 12.1 and Corollary 12.2]{GLS}, \cite[Theorem 3.3]{FM}}] \label{Thm:Pres} The complexes \eqref{eq:bmodres} and \eqref{res3} are exact. Moreover, the followings hold. \begin{enumerate} \item When $C$ is of infinite type, we have $\mathop{\mathrm{Ker}}\nolimits \psi = 0$ and $\mathop{\mathrm{Ker}}\nolimits \psi^{(i)} =0$ for all $i \in I$. \item When $C$ is of finite type, we have $\mathop{\mathrm{Ker}}\nolimits \psi^{(i)} \cong q^{-rh^\vee}t^h\mu_{i^*i} E_{i^*}$ for each $i \in I$. \end{enumerate} \end{Thm} \subsection{Euler-Poincar\'{e} pairing} \label{Ssec:EP} For a $\Gamma$-graded right $\Pi$-module $M$ and a $\Gamma$-graded left $\Pi$-module $N$, the vector space $M \otimes_\Pi N$ is naturally $\Gamma$-graded. 
Let $\mathop{\mathrm{tor}}\nolimits_k^\Pi (M, N)$ denote the $k$-th left derived functor of $M \mapsto M \otimes_\Pi N$ (or equivalently, that of $N \mapsto M \otimes_\Pi N$).
\begin{Lem} \label{Lem:tor}
If $M \in \Pi^\mathrm{op} \text{-$\mathrm{mod}$}_\Gamma^{\ge m}$ and $N \in \Pi \text{-$\mathrm{mod}$}_\Gamma^{\ge n}$, we have
\[ \mathop{\mathrm{tor}}\nolimits_k^\Pi(M, N) \in \Bbbk \text{-$\mathrm{mod}$}_\Gamma^{\ge m+n} \quad \text{for any $k \in \mathbb{Z}_{\ge 0}$}.\]
\end{Lem}
\begin{proof}
We see that $\dim_\Gamma (M\otimes_\Pi N)$ is not greater than $\dim_\Gamma M \cdot \dim_\Gamma N$, which belongs to $\mathbb{Z}[\Gamma_0][\![t]\!]t^{m+n}$ under the assumption. This proves the assertion for $k =0$. The other case when $k > 0$ follows from this case and Lemma~\ref{Lem:PM}.
\end{proof}
We consider the following finiteness condition for a pair $(M,N)$ of objects in $\Pi \text{-$\mathrm{mod}$}_\Gamma^+$:
\begin{itemize}
\item[(B)] For each $\gamma \in \Gamma$, the space $\mathop{\mathrm{tor}}\nolimits^\Pi_k(M^\phi, N)_\gamma$ vanishes for $k \gg 0$.
\end{itemize}
If $(M,N)$ satisfies the condition (B), their \emph{Euler-Poincar\'{e} (EP) pairing}
\[ \langle M,N \rangle_\Gamma \coloneqq \sum_{k =0}^{\infty} (-1)^k \dim_\Gamma \mathop{\mathrm{tor}}\nolimits^\Pi_k(M^\phi, N)\]
is well-defined as an element of $\mathbb{Z}[\![\Gamma ]\!]$. The next lemma is immediate from the definition.
\begin{Lem} \label{Lem:EP}
Let $M, N \in \Pi\text{-$\mathrm{mod}$}_\Gamma^+$.
\begin{enumerate}
\item If $(M,N)$ satisfies {\rm (B)}, the opposite pair $(N,M)$ also satisfies {\rm (B)} and we have $\langle N,M \rangle_\Gamma = \langle M, N \rangle_\Gamma^\phi$.
\item If $(M,N)$ satisfies {\rm (B)}, the pair $(M^{\oplus a}, N^{\oplus b})$ also satisfies {\rm (B)} for any $a, b \in \mathbb{Z}_{\ge 0}[\Gamma]$ and we have $\langle M^{\oplus a}, N^{\oplus b} \rangle_\Gamma = a^\phi b \langle M, N \rangle_\Gamma$.
\item \label{Lem:EP:3} Suppose that there is an exact sequence $0 \to M' \to M \to M'' \to 0$ in $\Pi \text{-$\mathrm{mod}$}_\Gamma^+$, and the pairs $(M', N)$ and $(M'', N)$ both satisfy {\rm (B)}. Then the pair $(M,N)$ also satisfies {\rm (B)} and we have $\langle M, N\rangle_\Gamma = \langle M', N \rangle_\Gamma + \langle M'', N \rangle_\Gamma.$\qedhere
\end{enumerate}
\end{Lem}
\begin{Prop} \label{Prop:EP}
For any $i,j \in I$, the pair $(S_i, S_j)$ satisfies the condition {\rm (B)} and we have
\begin{align}
\label{eq:ES} \langle E_i, S_j \rangle_\Gamma &= \begin{cases} \displaystyle \frac{q^{-d_i}t \left( C_{ij}(q,t,\ul{\mu}) - q^{-rh^\vee}t^{h}\mu_{ii^*} C_{i^*j}(q,t,\ul{\mu}) \right)}{1-q^{-2rh^\vee}t^{2h}}& \text{if $C$ is of finite type}, \\ q^{-d_i}tC_{ij}(q,t,\ul{\mu}) & \text{otherwise}, \end{cases} \\
\label{eq:SS} \langle S_i, S_j\rangle_\Gamma &= \frac{1-q^{2d_i}}{1-q^{2r\ell}} \langle E_i, S_j \rangle_\Gamma.
\end{align}
Here we understand $(1-\gamma)^{-1} = \sum_{k \ge 0}\gamma^{k} \in \mathbb{Z}[\![\Gamma]\!]$ for $\gamma \in \Gamma \setminus \{1\}$.
\end{Prop}
\begin{proof}
The former formula \eqref{eq:ES} directly follows from Theorem~\ref{Thm:Pres}. The latter \eqref{eq:SS} follows from Theorem~\ref{Thm:Pres} and the fact that $S_i$ has an $E_i$-resolution of the form:
\[ \cdots \to q^{2r\ell + 2d_i}E_i \to q^{2r\ell} E_i \to q^{2d_i}E_i \to E_i \to S_i \to 0. \]
See the proof of \cite[Proposition 3.11]{FM} for more details.
\end{proof}
\begin{Cor}\label{Cor:fin}
For any $M,N \in \Pi\text{-$\mathrm{mod}$}_\Gamma^+$, the pair $(M,N)$ satisfies the condition {\rm (B)}.
Moreover, the EP pairing induces a $\phi$-sesquilinear hermitian form on the $\mathbb{Z}[\Gamma_0](\!(t)\!)$-module $\hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)$ valued at $\mathbb{Z}[\Gamma_0][(1-q^{2r\ell})^{-1}](\!(t)\!)$. \end{Cor} \begin{proof} Given $M,N \in \Pi\text{-$\mathrm{mod}$}_\Gamma^+$, we shall show that $(M,N)$ satisfies the condition {\rm (B)}. Without loss of generality, we may assume that $M, N \in \Pi \text{-$\mathrm{mod}$}_\Gamma^{\ge 0}$. For $\gamma \in \Gamma$ fixed, take $n \in \mathbb{Z}$ such that $\gamma \not \in \Gamma_0 t^{\mathbb{Z}_{> n}}$. By Lemma~\ref{Lem:tor}, we have $\mathop{\mathrm{tor}}\nolimits_k^\Pi(M_{>n},N)_{\gamma} =0$ and therefore $\mathop{\mathrm{tor}}\nolimits_k^\Pi(M,N)_\gamma \simeq \mathop{\mathrm{tor}}\nolimits_k^\Pi(M/M_{>n},N)_\gamma$ for any $k \in \mathbb{Z}_{\ge 0}$. Similarly, we have $\mathop{\mathrm{tor}}\nolimits_k^\Pi(M/M_{>n},N)_\gamma \simeq \mathop{\mathrm{tor}}\nolimits_k^\Pi(M/M_{>n},N/N_{>n})_\gamma$ and hence $\mathop{\mathrm{tor}}\nolimits_k^\Pi(M,N)_\gamma \simeq \mathop{\mathrm{tor}}\nolimits_k^\Pi(M/M_{>n},N/N_{>n})_\gamma$ for any $k \in \mathbb{Z}_{\ge 0}$. By Lemma~\ref{Lem:EP} and Proposition~\ref{Prop:EP}, we know that the condition {\rm (B)} is satisfied for any finite-dimensional modules. Therefore, for $k$ large enough, we have $\mathop{\mathrm{tor}}\nolimits_k^\Pi(M/M_{>n},N/N_{>n})_\gamma = 0$. Thus, the pair $(M,N)$ satisfies the condition {\rm (B)}. Now, by Lemma~\ref{Lem:EP}~\eqref{Lem:EP:3} and Proposition~\ref{Prop:EP}, the EP pairing induces a pairing on the Grothendieck group $K(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)$ valued at $\mathbb{Z}[\Gamma_0][(1-q^{2r\ell})^{-1}](\!(t)\!)$. Lemma~\ref{Lem:tor} tells us that this is continuous with respect to the topology given by the filtration $\{ K(\Pi \text{-$\mathrm{mod}$}_\Gamma^{\ge n})\}_{n \in \mathbb{Z}}$. Therefore, it descends to a pairing on the completion $\hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)$ satisfying the desired properties. \end{proof} Let $\mathbb{F}$ be an algebraic closure of the field $\mathbb{Q}(\Gamma_0)(\!( t )\!)$. We understand that $\mathbb{Q}(\Gamma)$ is a subfield of $\mathbb{F}$ by considering the Laurent expansions at $t = 0$. By Corollary~\ref{Cor:fin} above, the EP pairing linearly extends to a $\phi$-sesquilinear hermitian form, again denoted by $\langle - , - \rangle_\Gamma$, on \[ \hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)_\mathbb{F} \coloneqq \hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)\otimes_{\mathbb{Z}[\Gamma_0](\!(t)\!)} \mathbb{F} \] valued at $\mathbb{F}$. Note that the set $\{ [E_i] \}_{i \in I}$ forms an $\mathbb{F}$-basis of $\hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)_\mathbb{F}$ by Lemma~\ref{Lem:Sbasis} and \eqref{eq:E=S}, and that, if $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$ is locally free, we have $[M] = \sum_{i \in I}(\operatorname{\mathrm{rank}}_i M) [E_i]$. It is useful to introduce the module $\bar{P}_i \coloneqq P_i/P_i \varepsilon_i = (\Pi/\Pi\varepsilon_i)e_i$ for each $i \in I$. We can easily prove the following (see~\cite[Lemma 2.5]{FM}). \begin{Lem} \label{Lem:barP} If $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$ is locally free, we have $\langle \bar{P}_i, M \rangle_\Gamma = \operatorname{\mathrm{rank}}_i M$. In particular, we have $\langle \bar{P}_i, E_j \rangle_\Gamma = \delta_{i,j}$ and $\operatorname{\mathrm{rank}}_i P_j = (\dim_\Gamma e_j \bar{P}_i)^\phi$ for any $i,j \in I$.
\end{Lem} On the other hand, we consider the $\mathbb{F}$-vector space $\mathsf{Q}_\Gamma \otimes_{\mathbb{Q}(\Gamma)}\mathbb{F}$, on which the pairing $(-,-)_\Gamma$ extends linearly. Let $\Psi$ be the $\mathbb{F}$-linear automorphism of $\mathsf{Q}_\Gamma \otimes_{\mathbb{Q}(\Gamma)}\mathbb{F}$ given by \[ \Psi \coloneqq \begin{cases} \displaystyle (1+ q^{-rh^\vee}t^h \nu)^{-1} = \frac{\mathrm{id}-q^{-rh^\vee}t^h\nu}{1-q^{-2rh^\vee}t^{2h}} & \text{if $C$ is of finite type}, \\ \mathrm{id} & \text{otherwise}. \end{cases} \] Here $\nu$ is the linear operator on $\mathsf{Q}_\Gamma$ given by $\nu(\alpha_i) = \mu_{i^*i}\alpha_{i^*}$, which we have already defined in \S\ref{Ssec:Braid}. Let us choose an element $\kappa_\ell \in \mathbb{F}$ satisfying $ \kappa_\ell^2 = q^{r\ell}[r\ell]_qt^{-1}$. \begin{Thm} \label{Thm:chi} The assignment $[E_i] \mapsto \kappa_\ell \alpha_i^\vee \, (i \in I)$ gives an $\mathbb{F}$-linear isomorphism \[ \chi_\ell \colon \hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)_\mathbb{F} \to \mathsf{Q}_\Gamma \otimes_{\mathbb{Q}(\Gamma)} \mathbb{F}\] satisfying the following properties: \begin{enumerate} \item \label{Thm:chi:1} For any $i \in I$, we have $\chi_\ell[S_i] = \kappa_\ell^{-1} \alpha_i$. \item \label{Thm:chi:2} For any $x,y \in \hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)_\mathbb{F}$, we have $\langle x,y \rangle_\Gamma = ( \Psi\chi_\ell(x), \chi_\ell(y))_\Gamma.$ \item \label{Thm:chi:3} For any $i \in I$, we have $\varpi_i^\vee = \kappa_\ell^{-1}\Psi\chi_\ell[P_i]$ and $\varpi_i = q^{-d_i}t\kappa_\ell \Psi \chi_\ell[\bar{P}_i]$. \end{enumerate} \end{Thm} \begin{proof} As the set $\{ [E_i]\}_{i \in I}$ forms an $\mathbb{F}$-basis of $\hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)_\mathbb{F}$, the linear map $\chi_\ell$ is an isomorphism. The properties \eqref{Thm:chi:1} \& \eqref{Thm:chi:2} follow from the identities \eqref{eq:E=S}, \eqref{eq:ES} and \eqref{eq:SS}. Since the basis $\{[P_i]\}_{i \in I}$ (resp.~$\{[\bar{P}_i]\}_{i \in I}$) is dual to the basis $\{[S_i]\}_{i \in I}$ (resp.~$\{[E_i]\}_{i \in I}$ by Lemma~\ref{Lem:barP}), the property \eqref{Thm:chi:3} follows from the property~\eqref{Thm:chi:2}. \end{proof} \begin{Cor} \label{Cor:chi} Let $i,j \in I$. \begin{enumerate} \item When $C$ is of finite type, we have \[ \widetilde{C}_{ij}(q,t,\ul{\mu}) = \frac{q^{-d_i}t}{1-q^{-2rh^\vee}t^h} \left(\dim_{\Gamma}(e_i \bar{P}_j) - q^{-rh^\vee}t^h \mu_{ii^*}\dim_{\Gamma}(e_{i^*} \bar{P}_j) \right) \] \item \label{Cor:chi:2} When $C$ is of infinite type, we have \[ \widetilde{C}_{ij}(q,t,\ul{\mu}) = q^{-d_j}t\dim_{\Gamma}(e_i \bar{P}_j). \] \end{enumerate} \end{Cor} \begin{proof} It follows from Theorem~\ref{Thm:chi} and the inversion of \eqref{eq:ao}. \end{proof} In particular, Corollary~\ref{Cor:chi}~\eqref{Cor:chi:2} proves Theorem~\ref{Thm:tCpos}. \begin{Rem} In the previous paper~\cite{FM}, we dealt with GCMs of finite type and finite dimensional $(q,t)$-graded $\Pi$-modules. Therein, we used the modules $\bar{I}_i \coloneqq \mathbb{D}(\bar{P}_i^\phi)$ and the graded extension groups $\mathop{\mathrm{ext}}\nolimits_\Pi^k$, where $\mathbb{D}$ is the graded $\Bbbk$-dual functor, instead of the modules $\bar{P}_i$ and the graded torsion groups $\mathop{\mathrm{tor}}\nolimits^\Pi_k$. 
Note that the two discussions are mutually equivalent thanks to the usual adjunction (cf.~\cite[\S A.4 Proposition~4.11]{ASS}), i.e., we have $\mathbb{D}(\mathop{\mathrm{tor}}\nolimits_k^\Pi(\mathbb{D}(M),N)) \simeq \mathop{\mathrm{ext}}\nolimits^k_\Pi(N,M)$ for $M,N \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$. In this sense, our discussion here is a slight generalization of that in \cite{FM} with the additional $\ul{\mu}$-grading. \end{Rem} \subsection{Braid group action}\label{sec:braid} Recall the two-sided ideal $J_i = \Pi (1-e_i) \Pi$. For any $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$ and $k \in \mathbb{Z}_{\ge 0}$, we see that $\mathop{\mathrm{tor}}\nolimits_k^\Pi (J_i, M )$ also belongs to $\Pi \text{-$\mathrm{mod}$}_\Gamma^+$. When $C$ is of infinite type, $J_i^{\phi} = J_i$ has projective dimension at most $1$. In particular, the derived tensor product $J_i \overset{\mathbf{L}}{\otimes}_\Pi M$ is an object in the bounded derived category $\mathcal{D}^b (\Pi \text{-$\mathrm{mod}$}_\Gamma^+)$ for each $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$. By the natural identification $K(\mathcal{D}^b (\Pi \text{-$\mathrm{mod}$}_\Gamma^+))\cong K(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)$ and the canonical map $K(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)\rightarrow \hat{K}(\Pi\text{-$\mathrm{mod}$}_\Gamma^+)$, it gives the element \begin{equation} \label{eq:Lotimes} [J_i \overset{\mathbf{L}}{\otimes}_\Pi M] = \sum_{k = 0}^{\infty}(-1)^k[\mathop{\mathrm{tor}}\nolimits^\Pi_k(J_i, M)] = \sum_{j \in I}\langle J_i e_j, M \rangle_\Gamma [S_j], \end{equation} of $\hat{K}(\Pi\text{-$\mathrm{mod}$}_\Gamma^+)$, where the second equality follows since $J_i^\phi = J_i$. When $C$ is of finite type, we define the element $[J_i \overset{\mathbf{L}}{\otimes}_\Pi M]$ of $\hat{K}(\Pi\text{-$\mathrm{mod}$}_\Gamma^+)$ by \eqref{eq:Lotimes}. Recalling the relation $E_i = \Pi/J_i$, we have $[J_ie_j] = [P_j] -\delta_{i,j}[E_i]$ for each $j \in I$, and hence \[ [J_i \overset{\mathbf{L}}{\otimes}_\Pi M] = [M] - \langle E_i, M \rangle_\Gamma [S_i]. \] Sending this equality by the isomorphism $\chi_\ell$ in Theorem~\ref{Thm:chi}, we get \begin{equation} \label{eq:JiM} \chi_\ell[J_i \overset{\mathbf{L}}{\otimes}_\Pi M] = \chi_\ell[M]-(\Psi \alpha_i^\vee, \chi_\ell[M])_\Gamma \alpha_i. \end{equation} In particular, we obtain the following analogue of \cite[Proposition 2.10]{AIRT}. \begin{Lem} \label{Lem:JT} When $C$ is of infinite type, we have \[\chi_\ell[J_i \overset{\mathbf{L}}{\otimes}_\Pi M] = T_i \chi_\ell[M] \qquad \text{for any $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$ and $i \in I$}.\] \end{Lem} \begin{proof} When $C$ is of infinite type, we have $\Psi = \mathrm{id}$ by definition. Thus, the equation~\eqref{eq:JiM} coincides with the defining equation~\eqref{eq:Troot} of $T_i$ in this case. \end{proof} \begin{proof}[Proof of Proposition~\ref{Prop:braidrel}] When $C$ is of finite type, we can reduce the proof to the case of affine type since the collection $\{ T_i \}_{i \in I}$ can be extended to the collection $\{T_i\}_{i \in I\cup \{0\}}$ of the corresponding untwisted affine type. Hence, it suffices to consider the case when $C$ is of infinite type. In this case, the braid relations for $\{ T_i \}_{i \in I}$ follow from Lemma~\ref{Lem:JT} and the fact that the ideals $\{J_i\}_{i \in I}$ satisfy the braid relations with respect to multiplication, which is due to \cite[Theorem~4.7]{FG}. 
For example, when $c_{ij}c_{ji}=1$, we have \[ J_i \overset{\mathbf{L}}{\otimes}_\Pi J_j \overset{\mathbf{L}}{\otimes}_\Pi J_i \simeq J_i \otimes_\Pi J_j \otimes_\Pi J_i \simeq J_iJ_jJ_i = J_jJ_iJ_j \simeq J_j \otimes_\Pi J_i \otimes_\Pi J_j \simeq J_j \overset{\mathbf{L}}{\otimes}_\Pi J_i \overset{\mathbf{L}}{\otimes}_\Pi J_j, \] which implies the desired braid relation $T_iT_jT_i = T_jT_iT_j$. \end{proof} \begin{Cor}\label{Cor:braid:ac} Let $M \in \Pi \text{-$\mathrm{mod}$}_\Gamma^+$ with $\mathop{\mathrm{tor}}\nolimits_1^\Pi (J_i, M)=0$ for $i \in I$. We have \[\chi_\ell[J_i \otimes_{\Pi} M] = T_i\chi_\ell[M]. \] Moreover, if we assume that $M$ is locally free, so is $J_i \otimes_{\Pi} M$. \end{Cor} \begin{proof} The first assertion is a direct consequence of Lemma~\ref{Lem:JT}. For $C$ of infinite type, since the projective dimension of $J_i$ is at most $1$ by Theorem~\ref{Thm:Pres}, our involution $\phi$ yields that $\mathop{\mathrm{tor}}\nolimits_k^\Pi (J_i, M)=0$ also for $k \geq 2$. This shows $[J_i \otimes_{\Pi} M]= [J_i \overset{\mathbf{L}}{\otimes}_{\Pi} M]$ when $C$ is of infinite type. When $C$ is of finite type, our assertion follows easily from the exact embedding to the corresponding untwisted affine type $\hat{\Pi}$. Namely, we have an isomorphism $J_i\otimes_{\Pi} M \simeq \hat{J}_i \otimes_{\hat{\Pi}} M$, where $\hat{J}_i \coloneqq \hat{\Pi}(1-e_i)\hat{\Pi}$. The last assertion follows from an argument analogous to \cite[Proof of Proposition~9.4]{GLS}. \end{proof} \begin{proof}[Proof of Theorem~\ref{Thm:inv1} when $C$ is of infinite type] Assume that $C$ is of infinite type. Let $(i_k)_{k \in \mathbb{Z}_{>0}}$ be a sequence in $I$ satisfying the condition (2) in Theorem~\ref{Thm:inv1}. We have a filtration $\Pi = F_0 \supset F_1 \supset F_2 \supset \cdots$ of $(\Pi,\Pi)$-bimodules given by $F_k \coloneqq J_{i_1}J_{i_2} \cdots J_{i_k}$. This filtration $\{F_k\}_{k \ge 0}$ is exhaustive, i.e., $\bigcap_{k \ge 0}F_k = 0$. Indeed, since the algebra $\Pi$ satisfies the condition~$\mathrm{(A)}$, its radical filtration $\{ R_k\}_{k \ge 0}$ as a right $\Pi$-module is exhaustive. Note that, for any right $\Pi$-module $M$ and $i \in I$, the right module $M/MJ_i$ is the largest quotient of $M$ such that $(M/MJ_i)e_j = 0$ for $j \neq i$. Thanks to this fact and our assumption on the sequence $(i_k)_{k \in \mathbb{Z}_{>0}}$, we can find for each $k > 0$ a large integer $K$ such that $F_K \subset R_k$. Thus, we have $\bigcap_{k} F_k = \bigcap_k R_k = 0$. Moreover, by \cite[Proposition~3.8]{M2}, we have \[F_{k-1}/F_{k} \simeq J_{i_1} J_{i_2}\cdots J_{i_{k-1}}\otimes_\Pi E_{i_k} \qquad \text{as $\Gamma$-graded left $\Pi$-modules}\] for each $k \ge 1$. Note that we have an equality $\mathop{\mathrm{tor}}\nolimits_1^\Pi (J_{i_1}, J_{i_2}\cdots J_{i_{k-1}}\otimes_\Pi E_{i_k})=0$ by \cite[Proof of Proposition~3.8]{M2}. This yields $\chi_\ell [J_{i_1}J_{i_2}\cdots J_{i_{k-1}}\otimes_\Pi E_{i_k}] = \kappa_\ell T_{i_1} T_{i_2}\cdots T_{i_{k-1}} \alpha^{\vee}_{i_k}$ inductively by Corollary~\ref{Cor:braid:ac}. The filtration $\{F_k\}_{k \ge 0}$ induces an exhaustive filtration $\{ F_k e_i \}_{k \ge 0}$ of the projective module $P_i$ such that \[ F_{k-1}e_i / F_{k}e_i \simeq \begin{cases} J_{i_1} \cdots J_{i_{k-1}}\otimes_\Pi E_{i} & \text{if $i_k = i$}, \\ 0 & \text{otherwise} \end{cases}\] for each $k \ge 1$.
Therefore, in $\hat{K}(\Pi \text{-$\mathrm{mod}$}_\Gamma^+)$, we have \[ [P_i] = \sum_{k=1}^{\infty}[F_{k-1}e_i/F_{k}e_i] = \sum_{k \colon i_k = i}[J_{i_1} \cdots J_{i_{k-1}}\otimes_\Pi E_{i}].\] Applying the isomorphism $\chi_\ell$ in Theorem~\ref{Thm:chi} to this equality, we obtain \[ \varpi_i^\vee = \sum_{k \colon i_k = i}T_{i_1}\cdots T_{i_{k-1}} \alpha_i^\vee.\] This is rewritten as \[ \varpi_i = q^{-d_i}t \sum_{k \colon i_k = i}T_{i_1}\cdots T_{i_{k-1}} \alpha_i. \] Since $\widetilde{C}_{ij}(q,t,\ul{\mu}) = (\varpi_i^\vee, \varpi_j)_\Gamma$ by \eqref{eq:ao}, we obtain the desired equality~\eqref{eq:inv1} from this. \end{proof} \begin{Rem} In \cite{IR}, they proved that the ideal semigroup $\langle J_i \mid i \in I \rangle$ gives the set of isoclasses of classical tilting $\Pi$-modules for any symmetric affine type $C$ with $D = \mathrm{id}$. In our situation, our two-sided ideals are $\Gamma$-graded tilting objects whose $\Gamma$-graded endomorphism algebras are isomorphic to $\Pi$ when $C$ is of infinite type by arguments in \cite{BIRS,FG}. In particular, our braid group symmetry on $\hat{K}(\Pi\text{-$\mathrm{mod}$}_\Gamma^+)$ is induced from auto-equivalences on the derived category (cf.~\cite[\S 2]{MP}). From the viewpoint of \S\ref{subsec:univ_grad}, we might expect that our braid group (or Iwahori-Hecke algebra~\cite[Remark 1.7]{FM}) action is some ubiquitous symmetry that is induced from graded tilting objects up to grading shift. \end{Rem} \section{Remarks} \label{Sec:Rem} \subsection{Comparison with Kimura-Pestun's deformation} \label{Ssec:KP} In their study of \emph{(fractional) quiver $\mathcal{W}$-algebras}, Kimura-Pestun~\cite{KP2} introduced a deformation of GCM called \emph{the mass-deformed Cartan matrix}. In this subsection, we compare their mass-deformed Cartan matrix with our deformed GCM $C(q,t,\ul{\mu})$. Let $Q$ be a quiver without loops and $d \colon Q_0 \to \mathbb{Z}_{>0}$ be a function. Following~\cite[\S2.1]{KP2}, we call such a pair $(Q,d)$ \emph{a fractional quiver}. We set $d_i \coloneqq d(i)$ and $d_{ij} \coloneqq \gcd(d_i,d_j)$ for $i,j \in Q_0$. Let $C = (c_{ij})_{i,j \in I}$ be a GCM. We say that a fractional quiver $(Q,d)$ is of type $C$ if $Q_0 = I$ and the following condition is satisfied: \begin{equation} \label{eq:fQ} c_{ij} = 2\delta_{i,j} - (d_j/d_{ij})|\{ e \in Q_1 \mid \{\mathrm{s}(e), \mathrm{t}(e)\} = \{ i,j \}\}| \quad \text{for any $i,j \in I$}.\end{equation} In this case, $D = \mathop{\mathrm{diag}}\nolimits(d_i \mid i \in I)$ is a symmetrizer of $C$ and we have \[ g_{ij} = |\{ e \in Q_1 \mid \{\mathrm{s}(e), \mathrm{t}(e)\} = \{ i,j \}\}|, \quad f_{ij} = d_j/d_{ij} \quad \text{when $i \sim j$.} \] See \S\ref{Ssec:notation} for the definitions. For a given fractional quiver $(Q,d)$ of type $C$, Kimura-Pestun introduced a matrix $C^{\mathrm{KP}} = (C^{\mathrm{KP}}_{ij})_{i,j \in I}$, whose $(i,j)$-entry $C^{\mathrm{KP}}_{ij}$ is a Laurent polynomial in the formal parameters $q_1, q_2$ and $\mu_e$ for each $e \in Q_1$ given by \begin{equation} \label{eq:CKP} C^{\mathrm{KP}}_{ij} \coloneqq \delta_{i,j} (1+q_1^{-d_i}q_2^{-1}) - \frac{1-q_1^{-d_j}}{1-q_1^{-d_{ij}}}\left( \sum_{e \colon i \to j}\mu_e^{-1} + \sum_{e \colon j \to i}\mu_e q_1^{-d_{ij}}q_2^{-1}\right). \end{equation} The parameters $\mu_e$ are called \emph{mass-parameters}. If we evaluate all the parameters to $1$, the matrix $C^{\mathrm{KP}}$ coincides with the GCM $C$ by \eqref{eq:fQ}. 
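For instance, if $(Q,d)$ consists of a single arrow $e \colon 1 \to 2$ with $d_1 = d_2 = 1$, so that $C$ is of type $A_2$, the formula \eqref{eq:CKP} gives
\[ C^{\mathrm{KP}} = \begin{pmatrix} 1+q_1^{-1}q_2^{-1} & -\mu_e^{-1} \\ -\mu_e q_1^{-1}q_2^{-1} & 1+q_1^{-1}q_2^{-1} \end{pmatrix}, \]
which recovers the Cartan matrix of type $A_2$ after evaluating all the parameters to $1$.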
Now we fix a function $g \colon Q_1 \to \mathbb{Z}_{>0}$ whose restriction induces a bijection between $\{ e \in Q_1 \mid \{\mathrm{s}(e), \mathrm{t}(e)\} = \{ i,j \}\}$ and $\{ g \in \mathbb{Z} \mid 1 \le g \le g_{ij} \}$ for each $i,j \in I$ with $i \sim j$. Then consider the monomial transformation $\mathbb{Z}[q_1^{\pm 1},q_2^{\pm 1},\mu_e^{\pm 1} \mid e \in Q_1] \to \mathbb{Z}[\Gamma]$ given by \begin{equation} \label{eq:mt} q_1 \mapsto q^2, \qquad q_2 \mapsto t^{-2}, \qquad \mu_e \mapsto q^{d_{ij}}t^{-1}\mu_{ij}^{(g(e))}, \end{equation} where $i = \mathrm{t}(e)$ and $j = \mathrm{s}(e)$. Note that it induces an isomorphism if we formally add the square roots of $q_1$ and $q_2$. Under this monomial transformation, for any $i,j \in I$, we have \begin{equation} \label{eq:mtCKP} C^{\mathrm{KP}}_{ij} \mapsto q^{-d_j}t\left( \delta_{i,j} (q^{d_i}t^{-1} + q^{-d_i}t) - \delta(i \sim j) [f_{ij}]_{q^{d_{ij}}} \sum_{g = 1}^{g_{ij}}\mu_{ij}^{(g)} \right). \end{equation} \begin{Prop} Under the monomial transformation \eqref{eq:mt}, the matrix $C^{\mathrm{KP}}$ corresponds to the matrix $C(q,t,\ul{\mu})q^{-D}t$ if and only if the following condition is satisfied: \begin{equation} \label{eq:condf} \text{For any $i,j \in I$ with $i \sim j$, we have $f_{ij} = 1$ or $f_{ji}=1$.} \end{equation} \end{Prop} \begin{proof} Compare \eqref{eq:mtCKP} with \eqref{eq:defC} and note that we have $[f_{ij}]_{q^{d_{ij}}} = [f_{ij}]_{q^{d_i}}$ for any $i,j \in I$ with $i \sim j$ if and only if the condition~\eqref{eq:condf} is satisfied. \end{proof} \begin{Ex} If we take our GCM $C$ and its symmetrizer $D$ as \[ C = \begin{pmatrix}2&-6\\-9&2\end{pmatrix} \quad \text{and} \quad D=\mathop{\mathrm{diag}}\nolimits(3, 2) , \] not satisfying \eqref{eq:condf}, then the image of $C^{\mathrm{KP}}$ under \eqref{eq:mt} is \[ \begin{pmatrix} 1+q^{-6}t^{2} & -(q^{-2}+q^{-4})t(\mu_{12}^{(1)}+\mu_{12}^{(2)}+\mu_{12}^{(3)}) \\ -(q^{-1}+q^{-3}+q^{-5})t(\mu_{21}^{(1)}+\mu_{21}^{(2)}+\mu_{21}^{(3)}) & 1+q^{-4}t^2 \end{pmatrix},\] which is different from \[C(q, t, \ul{\mu}) q^{-D}t = \begin{pmatrix} 1+q^{-6}t^2 & -(q + q^{-5})t(\mu_{12}^{(1)}+\mu_{12}^{(2)}+\mu_{12}^{(3)})\\ -(q + q^{-3} + q^{-7})t(\mu_{21}^{(1)}+\mu_{21}^{(2)}+\mu_{21}^{(3)}) & 1 + q^{-4}t^2 \end{pmatrix}.\] \end{Ex} \begin{Rem} The condition~\eqref{eq:condf} is satisfied for all finite and affine types. It is also satisfied when $C$ is symmetric (i.e., ${}^\mathtt{t}C = C$). This \eqref{eq:condf} also appears in \cite[\S C(iv)]{NW} as a condition for two possible mathematical definitions of Coulomb branches of quiver gauge theories with symmetrizers to coincide with each other as schemes. \end{Rem} \begin{Rem} \label{Rem:qC} When \eqref{eq:condf} is satisfied, we can assure that the evaluation at $t = 1$ makes sense in the inversion formulas. More precisely, assuming~\eqref{eq:condf}, we see that the matrix $X$ in \eqref{eq:X} is written in the form $X = q^{-1}X'$ with $X'$ being a $\mathbb{Z}[\ul{\mu}^\mathbb{Z}][q^{-1},t]$-valued matrix (see the proof of \cite[Lemma 4.3]{FO21}), and hence we have $\widetilde{C}_{ij}(q,t,\ul{\mu}) \in \mathbb{Z}[\ul{\mu}^\mathbb{Z}][q^{-1},t][\![(q^{-1}t)]\!]$ for any $i,j \in I$. Thus, under~\eqref{eq:condf}, the evaluation at $t=1$ gives a well-defined element $\widetilde{C}_{ij}(q,1,\ul{\mu})$ of $\mathbb{Z}[\ul{\mu}^{\mathbb{Z}}][\![q^{-1}]\!]$. 
\end{Rem} \subsection{Universality of the grading}\label{subsec:univ_grad} In this subsection, we briefly explain how one can think that our grading \eqref{eq:deg} on the algebra $\widetilde{\Pi}$ is universal. It is stated as follows. We keep the notation in \S\ref{Ssec:GPA}. Let $\widetilde{G}$ be the (multiplicative) abelian group generated by the finite number of formal symbols $\{[a] \mid a \in \widetilde{Q}_1 \}$ subject to the relations \begin{equation} \label{eq:Whom} [\alpha_{i_1j_1}^{(g_1)}] [\alpha_{j_1i_1}^{(g_1)}][\varepsilon_{i_1}]^{f_{i_1j_1}} = [\alpha_{i_2j_2}^{(g_2)}] [\alpha_{j_2i_2}^{(g_2)}][\varepsilon_{i_2}]^{f_{i_2j_2}}\end{equation} for any $i_k,j_k \in I$ with $i_k \sim j_k$ and $1 \le g_k \le g_{i_kj_k}$ ($k = 1,2$). Let $\widetilde{G} \twoheadrightarrow \widetilde{G}_f$ be the quotient by the torsion subgroup. By construction, for any free abelian group $G$, giving a homomorphism $\deg \colon \widetilde{G}_f \to G$ is equivalent to giving $\widetilde{Q}$ a structure of $G$-graded quiver $\deg \colon \widetilde{Q}_1 \to G$ such that the potential $W_\Omega$ is homogeneous. In this sense, we can say that the tautological map $\widetilde{Q}_1 \to \widetilde{G}_f$ gives a universal grading on the algebra $\widetilde{\Pi}$. Now recall our fixed symmetrizer $D = \mathop{\mathrm{diag}}\nolimits(d_i \mid i \in I)$ and set $d \coloneqq \gcd(d_i \mid i \in I)$. Let $\Gamma' \subset \Gamma$ be the subgroup generated by $\{ \deg(a) \mid a \in \widetilde{Q}_1\}$. Note that $\Gamma'$ is a free abelian group with a basis $\{q^{2d}, t^{2}\} \cup \{q^{-d_if_{ij}}t\mu^{(g)}_{ij}\mid (i,j) \in \Omega, 1 \le g \le g_{ij} \}$. \begin{Prop} The degree map \eqref{eq:deg} gives an isomorphism $\deg \colon \widetilde{G}_f \simeq \Gamma'$. \end{Prop} \begin{proof} Choose integers $\{a_{i}\}_{i \in I}$ satisfying $\sum_{i \in I}a_i d_i = d$. Let $e$ and $w$ be the elements of $\widetilde{G}_f$ given by $e \coloneqq \prod_{i \in I}[\varepsilon_i]^{a_i}$ and $w = [\alpha_{ij}^{(g)}] [\alpha_{ji}^{(g)}][\varepsilon_{i}]^{f_{ij}}$ respectively. Note that $w$ does not depend on the choice of $i,j \in I$ with $i \sim j$ and $1 \le g \le g_{ij}$ by \eqref{eq:Whom}. We define a group homomorphism $\iota \colon \Gamma' \to \widetilde{G}_f$ by $\iota(q^{2d}) \coloneqq e$, $\iota(t^2) \coloneqq w$ and $\iota(q^{-d_if_{ij}}t\mu^{(g)}_{ij})\coloneqq [\alpha^{(g)}_{ij}]$ for $(i,j) \in \Omega$, $1 \le g \le g_{ij}$. It is easy to see $\deg \circ \iota = \mathrm{id}$. Now we shall prove $\iota \circ \deg = \mathrm{id}$. First, we observe that $[\varepsilon_i]^{f_{ij}} = [\varepsilon_j]^{f_{ji}}$ when $i \sim j$ by \eqref{eq:Whom}. Since $f_{ij} = d_j / d_{ij}$, we have \[ [\varepsilon_i]^{r/d_i} = ([\varepsilon_i]^{f_{ij}})^{rd_{ij}/d_id_j} = ([\varepsilon_j]^{f_{ji}})^{rd_{ij}/d_id_j} = [\varepsilon_j]^{r/d_j}\] for any $i,j \in I$ with $i \sim j$. Since $C$ is assumed to be irreducible, it follows that $[\varepsilon_i]^{r/d_i} = [\varepsilon_j]^{r/d_j}$ for any $i, j \in I$. Furthermore, since $\widetilde{G}_f$ is torsion-free, we get \begin{equation} \label{eq:epd} [\varepsilon_i]^{d_j/d} = [\varepsilon_j]^{d_i/d} \qquad \text{for any $i, j \in I$}. \end{equation} Using \eqref{eq:epd}, for each $i \in I$, we find \[ \iota( \deg [\varepsilon_i]) = e^{d_i/d} = \prod_{j \in I}[\varepsilon_j]^{a_jd_i/d} = \prod_{j \in I}[\varepsilon_i]^{a_j d_j /d} = [\varepsilon_i].\] The equality $\iota(\deg [\alpha^{(g)}_{ij}]) = [\alpha^{(g)}_{ij}]$ is obvious. 
Thus we conclude that $\iota \circ \deg = \mathrm{id}$ holds. \end{proof} In particular, we have the isomorphism of group rings $\mathbb{Z}[\widetilde{G}_f] \simeq \mathbb{Z}[\Gamma']$. Using the notation in the above proof, we consider the formal roots $e^{1/d}$ and $w^{1/2}$. Then we obtain the isomorphism $\mathbb{Z}[\widetilde{G}_f][e^{1/d}, w^{1/2}] \simeq \mathbb{Z}[\Gamma].$ This means that our deformed GCM $C(q,t,\ul{\mu})$ can be specialized to any other deformation of $C$ which arises from a grading of the quiver $\widetilde{Q}$ respecting the potential $W_\Omega$ (formally adding roots of deformation parameters if necessary). \subsection{$t$-Cartan matrices and representations of modulated graphs} \label{Ssec:species} In this subsection, we discuss the $t$-Cartan matrix $C(1,t)$, which is obtained from our $(q,t)$-deformed GCM $C(q,t)$ by evaluating the parameter $q$ at $1$. Note that this kind of specialization has also been studied very recently by Kashiwara-Oh~\cite{KO} in the case of finite type. Here we give an interpretation of the $t$-Cartan matrix from the viewpoint of certain graded algebras arising from an $F$-species. First, we briefly recall the notion of acyclic $F$-species over a base field $F$ \cite{Gab, Rin}. Let $I=\{1, \dots, n\}$. By definition, an \emph{$F$-species} $(F_i, {}_iF_{j})$ over $F$ consists of \begin{itemize} \item a finite dimensional skew-field $F_i$ over $F$ for each $i \in I$; \item an $(F_i, F_j)$-bimodule ${}_i F_{j}$ for each $i, j \in I$ such that $F$ acts centrally on ${}_i F_{j}$ and $\dim_{F}{}_i F_{j}$ is finite; \item There does not exist any sequence $i_1, \dots, i_l, i_{l+1}=i_1$ such that ${}_{i_k} F_{i_{k+1}}\neq 0$ for each $k=1, \dots, l$. \end{itemize} For ${}_iF_j\neq 0$, we write ${}_{F_i}({}_iF_j) \simeq F_i^{\oplus {-c_{ij}}}$ and $({}_iF_j)_{F_j} \simeq F_j^{\oplus {-c_{ji}}}$. If we put $c_{ii}=2$ and $c_{ij}=0$ for ${}_iF_j=0={}_jF_i$, the matrix $C\coloneqq ({c_{ij}})_{i,j \in I}$ is clearly a GCM with left symmetrizer $D = \mathop{\mathrm{diag}}\nolimits(\dim _F F_i \mid i\in I )$. We have an acyclic orientation $\Omega$ of this GCM determined by the conditions ${}_iF_j\neq 0$. {Following} our convention in \S \ref{Ssec:notation}, we write $\dim_F F_i = {d_i}$. For our $F$-species $(F_i, {}_iF_j)$, we set $S \coloneqq \prod_{i \in I} F_i$ and $B \coloneqq \bigoplus_{(i,j) \in \Omega} {}_iF_j$. Note that $B$ is an $(S,S)$-bimodule. We define a finite dimensional hereditary algebra $T = {T(C, D, \Omega)}$ to be the tensor algebra $T \coloneqq T_S(B)$. Note that we use the same convention for $T(C, D, \Omega)$ as that in Gei\ss-Leclerc-Schr\"oer~\cite{GLS3}, unlike our dual convention for the algebra $\Pi(\ell)$. We can also define the preprojective algebra (see \cite{DR} for details). For $(i,j)\in \Omega$, there exists an $F_j$-basis $\{x_1, \dots, x_{{|c_{ji}|}}\}$ of ${}_iF_j$ and an $F_j$-basis $\{y_1, \dots, y_{{|c_{ji}|}}\}$ of $\mathop{\mathrm{Hom}}\nolimits_{F_j}({}_iF_j, F_j)$ such that for every $x \in {}_iF_j$ we have $x = \sum_{k=1}^{{|c_{ji}|}} y_k(x)x_k.$ We have the canonical element $\mathtt{c}_{ij} = \sum_{k=1}^{{|c_{ji}|}} x_k \otimes_{F_i} y_k \in {}_iF_j \otimes_{F_j} \mathop{\mathrm{Hom}}\nolimits_{F_i}({}_iF_j, F_i)$ which does not depend on our choice of the bases $\{x_k\}$ and $\{y_k\}$. Letting ${}_jF_i\coloneqq \mathop{\mathrm{Hom}}\nolimits_{F_j}({}_iF_j, F_j)$ for $(i,j)\in \Omega$, we can also define a similar canonical element $\mathtt{c}_{ji}\in {}_jF_i \otimes_{F_i} {}_iF_j$.
We put $\overline{B}\coloneqq \bigoplus_{(i,j)\in \Omega}({}_iF_j \oplus {}_jF_i)$, and define the preprojective algebra $\Pi_T=\Pi_T(\ell)$ of the algebra $T$ as $$T_S(\overline{B})/\langle \sum_{(i,j)\in \Omega}\mathrm{sgn}_{\Omega}(i,j)\mathtt{c}_{ij}\rangle.$$ Let $P^T_i$ (resp. $P^{\Pi_T}_i$) denote the indecomposable projective $T$-module (resp. $\Pi_T$-module) associated with $i$, and $\tau_T$ the Auslander-Reiten translation for (left) $T$-modules. Note that this algebra $\Pi_T$ satisfies $P^{\Pi_T}_i=\bigoplus_{k\geq 0} \tau_T^{-k} P^T_i$ by an argument on the preprojective component of the Auslander-Reiten quiver of $T$, similar to \cite[Proposition 4.7]{So}. Note that our $F$-species $(F_i, {}_iF_j)$ is nothing but a \emph{modulated graph} associated with ${(C, D,\Omega)}$ in the sense of Dlab-Ringel~\cite{DR}, although we work with these algebras in the context of a deformation of $C$. Although there is obviously no nontrivial $\mathbb{Z}$-grading on $S$, as each $F_i$ is a finite dimensional skew-field, we can nevertheless endow $T$ and $\Pi_T$ with a $t^{\mathbb{Z}}$-grading induced from their tensor algebra descriptions. Each element of ${}_iF_j$ has degree $t$. We remark that if we specifically choose a decomposition of each ${}_iF_j$ like $F(\!(\varepsilon)\!)$-species $\Tilde{H}$ in \cite[\S4.1]{GLS6} and define its preprojective algebra, then we can also endow these algebras with natural $\ul{\mu}^{\mathbb{Z}}$-gradings and homogeneous relations by using \cite[Lemma 1.1]{DR}. But we only consider the $t^{\mathbb{Z}}$-grading here since our aim is to interpret the $t$-Cartan matrix. By our $t^{\mathbb{Z}}$-grading, our algebra $\Pi_T$ satisfies the condition (A) in \S\ref{Ssec:pga} (with $\Bbbk = F$). We have the following complex of $t$-graded modules for each simple module $F_i$: \begin{Lem} \label{Thm:Pres2} The complex \begin{equation} t^2 P^{\Pi_T}_i \xrightarrow{\psi^{(i)}} \bigoplus_{j\sim i} (P_j^{\Pi_T})^{\oplus (-t{C_{ji}(1,t)})} \rightarrow P^{\Pi_T}_i \to F_i \to 0 \end{equation} is exact. Moreover, the following statements hold. \begin{enumerate} \item\label{res1sp} When $C$ is of infinite type, $\mathop{\mathrm{Ker}}\nolimits \psi^{(i)} =0$ for all $i \in I$. In particular, each object in $\Pi_T\text{-$\mathrm{mod}$}_{t^\mathbb{Z}}^+$ has projective dimension at most $2$. \item\label{res2sp} When $C$ is of finite type, we have $\mathop{\mathrm{Ker}}\nolimits \psi^{(i)} \cong t^h F_{i^*}$ for each $i \in I$. \end{enumerate} \end{Lem} \begin{proof} The statement \eqref{res1sp} is deduced from the Auslander-Reiten theory for $T$ (e.g. \cite[Proposition 7.8]{AHIKM}). The statement \eqref{res2sp} follows from \cite[\S 6]{So}. Note that $C$ is of finite type if and only if $\Pi_T$ is a self-injective finite dimensional algebra, and its Nakayama permutation can be computed similarly to Theorem~\ref{Thm:Pres} by an analogue of \cite[\S 3]{Miz} (see Remark~\ref{rem:AHIKM}). \end{proof} \begin{Cor} \label{Cor:chi2} For any $i,j \in I$, the following statements hold. \begin{enumerate} \item \label{Cor:chi2:1} When $C$ is of finite type, we have \[ { d_i \widetilde{C}_{ij}(1,t) = \frac{t}{1-t^h}\left(\dim_{t^{\mathbb{Z}}}(e_i P_j^{\Pi_T}) - t^h \dim_{t^{\mathbb{Z}}}(e_{i^*} P_j^{\Pi_T}) \right).} \] \item \label{Cor:chi2:2} When $C$ is of infinite type, we have \[{ d_i\widetilde{C}_{ij}(1,t) = t\dim_{t^\mathbb{Z}}(e_i P_j^{\Pi_T}).} \] \end{enumerate} Here $\dim_{t^\mathbb{Z}}$ denotes the graded dimension of $t^\mathbb{Z}$-graded $F$-vector spaces.
\end{Cor} \begin{proof} The equality $[P_j^{\Pi_T}]=\sum_{i\in I} (\dim_{t^{\mathbb{Z}}}(e_iP_j^{\Pi_T}) / \dim_F F_i) [F_i]$ in $\hat{K}(\Pi_T\text{-$\mathrm{mod}$}_{t^\mathbb{Z}})$ and an equality $\dim_{t^{\mathbb{Z}}} e_i \Pi_T \otimes_{\Pi_T} F_j = \delta_{ij} {d_i}$ immediately yield our assertion by Lemma~\ref{Thm:Pres2} with arguments similar to the case of the generalized preprojective algebras in \S \ref{Ssec:EP}. \end{proof} \begin{Rem}\label{rem:AHIKM} In the case of our algebra $\Pi_T$, the two-sided ideal $J_i\coloneqq \Pi_T(1-e_i)\Pi_T$ and the ideal semi-group $\langle J_1, \dots, J_n \rangle$ also gives the Weyl group symmetry on its module category analogously to \cite{IR,BIRS,Miz} (see \cite[\S 7.1]{AHIKM}). Even if we consider the algebra $\Pi_T$ and $t^{\mathbb{Z}}$-homogeneous ideal $J_i$, we can also establish the similar braid group symmetry as \S\ref{sec:braid} after the specialization $q\rightarrow 1$ and $\ul{\mu} \rightarrow 1$ by Lemma~\ref{Thm:Pres2}. \end{Rem} \begin{Rem} The algebra $\Pi_T$ is a Koszul algebra for non-finite types and $(h-2, h)$-almost Koszul algebras for finite types in the sense of \cite{BBK} with our $t^{\mathbb{Z}}$-gradings. Thus Corollary~\ref{Cor:chi2} might be interpreted in the context of \cite[\S 3.3]{BBK}. \end{Rem} As a by-product of this description, we have the following generalization of the formula in \cite[Proposition 2.1]{HL15} and \cite[Proposition 3.8]{Fuj22} for any bipartite symmetrizable Kac-Moody type. For a $t$-series $f(t) = \sum_k f_k t^k \in \mathbb{Z}[\![t, t^{-1}]\!]$, we write $[f(t)]_k \coloneqq f_k$ for $k \in \mathbb{Z}$. \begin{Prop} \label{Prop:Ext} Assume that $C$ is bipartite and take a height function $\xi$ for $C$ such that $\Omega_\xi = \Omega$ (see \S\ref{Ssec:cinv}). Let $(F_i, {}_iF_j)$ be a modulated graph associated with ${(C, D,\Omega)}$ as above. Let $M \simeq \tau_T^{-k}P^T_i$ and $N \simeq \tau_T^{-l}P_j^T$ be any two indecomposable preprojective $T$-modules. When $C$ is of infinite type, we have \begin{equation} \label{eq:ExtT} \dim_F \mathop{\mathrm{Ext}}\nolimits_T^1(M,N) = \left[{d_i\widetilde{C}_{ij}(1,t)}\right]_{(\xi(i)+2k)-(\xi(j)+2l) -1}.\end{equation} When $C$ is of finite type, the equality \eqref{eq:ExtT} still holds provided that \begin{equation} \label{eq:1&h-1} 1 \le (\xi(i)+2k)-(\xi(j)+2l) -1 \le h-1. \end{equation} Otherwise, we have $\mathop{\mathrm{Ext}}\nolimits_T^1(M,N) = 0$. \end{Prop} \begin{proof} We may deduce the assertion by a combinatorial thought using the formula \eqref{eq:inv3} as in \cite{HL15} or \cite{Fuj22}. But, here we shall give another proof using the algebra $\Pi_T$. For any $t^\mathbb{Z}$-graded $T$-module $M$, we have a decomposition $M = \bigoplus_{u \in \mathbb{Z}} M^{[u]}$, where $M^{[u]} \coloneqq \bigoplus_{i \in I}e_iM_{u - \xi(i)}$. Note that each $M^{[u]}$ is an $T$-submodule of $M$, since $\xi$ is a height function satisfying $\Omega_\xi = \Omega$. We have the following isomorphism \begin{equation} \label{eq:isomHP} {}_T(P_i^{\Pi_T})^{[u]} \cong \begin{cases} \tau_T^{-k}P_i^{T} & \text{if $u = \xi(i) + 2k$ for $k \in \mathbb{Z}_{\ge 0}$}, \\ 0 & \text{otherwise} \end{cases} \end{equation} as (ungraded) $T$-modules. 
Now, we have for each $M \simeq \tau_T^{-k}P^T_i$ and $N \simeq \tau_T^{-l}P_j^T$ \begin{align*} \dim_F \mathop{\mathrm{Ext}}\nolimits_T^1(M,N) &= \dim_F \mathop{\mathrm{Ext}}\nolimits_T^1(\tau_T^{-k}P^T_i, \tau_T^{-l}P^T_j) \allowdisplaybreaks \\ &= \dim_F e_j\tau_T^{(k-l-1)}P^T_i&&\text{(cf.~\cite[\S IV 2.13]{ASS})} \allowdisplaybreaks \\ &= \dim_F e_j(P_i^{\Pi_T})^{[\xi(i)+2(k-l-1)]}&&\text{(\ref{eq:isomHP})} \allowdisplaybreaks \\ &= \dim_{F} (e_j{P}^{\Pi_T}_i)_{(\xi(i)+2k)-(\xi(j)+2l)-2}. \end{align*} When $C$ is of infinite type, we deduce the desired equation \eqref{eq:ExtT} from Corollary~\ref{Cor:chi2}~\eqref{Cor:chi2:2}. When $C$ is of finite type, an analogue of \cite[Corollary 3.9]{FM} shows that $(e_j {P}^{\Pi_T}_i)_{(\xi(i)+2k)-(\xi(j)+2l)-2}$ is non-zero only if the condition~\eqref{eq:1&h-1} is satisfied. When \eqref{eq:1&h-1} is satisfied, we get \eqref{eq:ExtT} by Corollary~\ref{Cor:chi2}~\eqref{Cor:chi2:1}. \end{proof} \begin{Rem} When the authors had almost finished writing this paper, a preprint~\cite{KO23} by Kashiwara-Oh appeared on arXiv, which shows that the $t$-Cartan matrix of finite type is closely related to the representation theory of quiver Hecke algebras. Combining their main theorem with Proposition~\ref{Prop:Ext} above, we find a relationship between the representation theory of the modulated graphs and that of quiver Hecke algebras, explained as follows. Let $C$ be a Cartan matrix of finite type, and let $\mathfrak{g}$ denote the simple Lie algebra associated with $C$. Let $R$ be the quiver Hecke algebra associated with $C$ and its minimal symmetrizer $D$, which categorifies the quantized enveloping algebra $U_q(\mathfrak{g})$. We are interested in the $\mathbb{Z}_{\ge 0}$-valued invariant $\mathfrak{d}(S, S')$ defined by using the R-matrices, which measures how far two ``affreal'' $R$-modules $S$ and $S'$ are from being mutually commutative with respect to the convolution product (or parabolic induction). Given an (acyclic) orientation $\Omega$ of $C$, we have an affreal $R$-module $S_\Omega(\alpha)$ for each positive root $\alpha$ of $\mathfrak{g}$, called a cuspidal module. See \cite{KO23} for details. On the other hand, we have a generalization of the Gabriel theorem for $F$-species (see \cite{DR2,DR3,Rin}). In particular, for each positive root $\alpha$ of $\mathfrak{g}$, there exists an indecomposable module $M_\Omega(\alpha)$ over the algebra $T = {T(C,D,\Omega)}$ satisfying $\sum_{i \in I} (\dim_{F_i} e_i M_{\Omega}(\alpha)) \alpha_i = \alpha$, uniquely up to isomorphism. Note that every indecomposable $T$-module is a preprojective module when $C$ is of finite type. Then, \cite[Main Theorem]{KO23} and Proposition~\ref{Prop:Ext} tell us that the equality \begin{equation} \label{eq:d=Ext} \mathfrak{d}\left(S_{{\Omega}^*}(\alpha), S_{{\Omega}^*}(\beta)\right) = \dim_F \mathop{\mathrm{Ext}}\nolimits_T(M_\Omega(\alpha), M_\Omega(\beta)) + \dim_F \mathop{\mathrm{Ext}}\nolimits_T(M_\Omega(\beta), M_\Omega(\alpha)) \end{equation} holds for any positive roots $\alpha$ and $\beta$, where ${\Omega}^*$ denotes the orientation of $C$ opposite to $\Omega$.
In particular, \eqref{eq:d=Ext} implies that the following three conditions are mutually equivalent for any positive roots $\alpha$ and $\beta$: \begin{itemize} \item The convolution product $S_{{\Omega}^*}(\alpha) \circ S_{{\Omega}^*}(\beta)$ is simple; \item We have an isomorphism $S_{{\Omega}^*}(\alpha) \circ S_{{\Omega}^*}(\beta) \simeq S_{{\Omega}^*}(\beta) \circ S_{{\Omega}^*}(\alpha)$ of $R$-modules; \item We have $\mathop{\mathrm{Ext}}\nolimits_T(M_\Omega(\alpha), M_\Omega(\beta)) = \mathop{\mathrm{Ext}}\nolimits_T(M_\Omega(\beta), M_\Omega(\alpha)) =0.$ \end{itemize} Note that an analogous statement in the case of fundamental modules over the quantum loop algebra of type $\mathrm{ADE}$ is obtained in \cite{Fuj22}. \end{Rem} \subsection*{Acknowledgments} The authors are grateful to Christof Gei\ss, Naoki Genra, David Hernandez, Yuya Ikeda, Osamu Iyama, Bernhard Keller, Taro Kimura, Yoshiyuki Kimura, Bernard Leclerc and Hironori Oya for useful discussions and comments. This series of works by the authors is partly motivated by the talk~\cite{Kels} given by Bernhard Keller. They thank him for sharing his ideas and answering some questions. They are indebted to the Laboratoire de Math\'ematiques \`a l'Universit\'e de Caen Normandie for its hospitality during their visit in the fall of 2021. This work was partly supported by the Osaka City University Advanced Mathematical Institute: MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics [JPMXP0619217849].
{ "arxiv_id": "2302.14301", "language": "en", "timestamp": "2023-03-01T02:08:57", "url": "https://arxiv.org/abs/2302.14301", "yymm": "2302" }
\section{Introduction}\label{sec1} Deep learning has achieved unprecedented success in numerous computer vision tasks, including image classification \citep{krizhevsky2012imagenet,he2015deep,dosovitskiy2021image}, object detection \citep{ren2015faster,redmon2016you,he2017mask}, etc. However, these well-performing models are criticized for their lack of robustness to adversarial noises \citep{szegedy2013intriguing,goodfellow2014explaining}, common image corruptions \citep{hendrycks2018benchmarking}, and various types of real-world distribution shifts \citep{barbu2019objectnet,hendrycks2021many,hendrycks2021natural,dong2022viewfool}. For example, the widely studied \emph{adversarial examples}, generated by applying imperceptible perturbations to natural examples, can lead to erroneous predictions of a target model. The problems with the robustness of deep learning have become a formidable obstacle to human-level performance. As deep learning models have been increasingly used in security-sensitive applications (e.g., autonomous driving, medical image processing, etc.), the study of model robustness has become an important research topic in computer vision and machine learning, which spawns a large number of adversarial attack and defense algorithms \citep{Madry2017Towards,Wong2018Provable,dong2018boosting,liao2017Defense,cohen2019certified,zhang2019theoretically,croce2020reliable,pang2020bag}, as well as out-of-distribution (OOD) datasets \citep{hendrycks2018benchmarking,geirhos2018imagenet,barbu2019objectnet,hendrycks2021many,hendrycks2021natural,dong2022viewfool}. To fully understand the effectiveness of existing algorithms and measure actual progress in the field, it is indispensable to comprehensively and correctly benchmark the robustness of models under diverse settings \citep{carlini2019evaluating,dong2020benchmarking,croce2021robustbench}. It can also be an impetus for the development of more robust models. However, the research on model robustness is often faced with an \emph{arms race} between attacks and defenses, i.e., a defense method robust to existing attacks can be further evaded by new attacks, and vice versa \citep{Athalye2018Obfuscated,carlini2019evaluating,tramer2020adaptive}, making robustness evaluation particularly challenging. Various works have been devoted to building robustness benchmarks \citep{ling2019deepsec,dong2020benchmarking,croce2021robustbench,tang2021robustart}, which facilitate fair comparisons of different models. However, the existing benchmarks are not comprehensive enough. First, some of the benchmarks can hardly keep up with the state-of-the-art due to the rapid development of deep learning models, rendering the benchmarking results outdated. Second, most benchmarks mainly focus on adversarial robustness but rarely study robustness to natural distribution shifts and adversarial examples in tandem. As a result, they cannot disclose the relationship between different aspects of model robustness. Moreover, the evaluation metrics lie in the core of robustness benchmarks. The existing benchmarks primarily adopt point-wise evaluation metrics to compare the robustness of models, which cannot comprehensively demonstrate their performance. For example, \cite{tang2021robustart} show that Transformers are more robust than CNNs against adversarial attacks, while \cite{mahmood2021robustness} observe that Transformers provide no additional robustness over CNNs. The conflicting results are due to the different noise budgets adopted in their benchmarks. 
Therefore, the point-wise evaluation metrics are insufficient to provide a global understanding of model robustness. To address this problem, our previous work \citep{dong2020benchmarking} proposes robustness curves as fair-minded evaluation metrics that can help to thoroughly compare the robustness of models at different noise levels. In this paper, we establish a comprehensive and rigorous benchmark called \textbf{ARES-Bench}\footnote{\textbf{ARES-Bench} is named after the platform called \textbf{Adversarial Robustness Evaluation for Safety (ARES)}.} to evaluate model robustness on the image classification task. Our benchmark evaluates both \emph{natural robustness} with 4 real-world and 3 synthesized OOD datasets, and \emph{adversarial robustness} with various white-box and black-box attack methods. We systematically study 55 models on ImageNet \citep{russakovsky2015imagenet} with diverse network architectures, including typical CNNs and Transformers, and different learning algorithms, including normal supervised training, pre-training on large-scale datasets \citep{dosovitskiy2021image}, self-supervised learning (SSL) \citep{chen2021empirical,he2022masked}, and adversarial training (AT) \citep{Madry2017Towards}. Using robustness curves as the evaluation criteria, we conduct extensive experiments, based on which we draw some important findings. First, given a particular architecture, there is a trade-off between adversarial and natural robustness. Specifically, AT degrades natural robustness for most OOD datasets although the adversarial robustness is improved. We find that although AT learns robust features that are more shape-biased and aligned with humans \citep{tsipras2018robustness,zhang2019interpreting,ilyas2019adversarial}, these features generalize poorly to real-world distribution shifts. Second, AT on Transformers performs much better than CNNs. Among the architectures we study, the optimal one is the Swin Transformer \citep{liu2021swin}, on the basis of which AT achieves more than $60\%$ robustness against the $\ell_\infty$-norm bounded perturbations of $4/255$. The main advantage of Swin Transformer is the hierarchical architecture with the self-attention mechanism, which allows AT to concentrate more on global features and benefit the adversarial robustness. Third, pre-training on large-scale datasets (e.g., ImageNet-21K) and by SSL significantly improves natural robustness. For AT, a pre-trained model on larger datasets is a better initialization than a random model, which speeds up AT. More analyses and discussions can be found in Sec.~\ref{sec4}. Based on ARES-Bench, we further provide two case studies. First, we examine the effects of a wide range of tricks (e.g., data augmentation, regularization, weight averaging, pre-training, etc.) in large-scale AT on ImageNet. Most of these tricks prevent overfitting to the training data, which is a severe problem in AT. By performing ablation studies of these tricks, we obtain an optimized setting to achieve the state-of-the-art adversarial robustness compared with the existing methods \citep{xie2019feature,salman2020adversarially,debenedetti2022light}. Second, to explain why one model is more adversarially robust than another, we provide a frequency analysis to exhibit the frequency bias of the robust models. We find that AT models have lower frequency bias than normally trained models, suggesting AT uses more low-frequency or shape-biased features for prediction. 
In summary, our main contributions include: \begin{itemize} \item We establish ARES-Bench, a comprehensive benchmark to evaluate natural and adversarial robustness of image classifiers. Using robustness curves, we systematically investigate the robustness of 55 models on ImageNet with different architectures and learning algorithms. \item Based on the large-scale experiments, we have obtained many insightful findings, that reveal the inherent relationship between natural and adversarial robustness while providing a deeper understanding of robustness of different architectures and learning methods. \item Based on our benchmark, we analyze the training tricks in large-scale adversarial training on ImageNet and achieve the new state-of-the-art robustness. A frequency analysis is provided to exhibit the frequency bias of adversarially robust models. \item We make our benchmark available at \url{https://ml.cs.tsinghua.edu.cn/ares-bench}, which contains the leaderboards under various settings. We also open-source the robustness platform ARES (\url{https://github.com/thu-ml/ares}) and the collection of models in our benchmark. \end{itemize} We hope our benchmark, platform, robust models, and detailed analyses can be helpful for future research. \section{Related Work}\label{sec2} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{fig1_new_new2.pdf} \caption{A comparison between our benchmark ARES-Bench and two existing benchmarks RobustBench \citep{croce2021robustbench} and RobustART \citep{tang2021robustart}. \textbf{(a)} shows the different types of models evaluated with various datasets and attacks in each benchmark. \textbf{(b)} shows the training tricks in adversarial training evaluated with various datasets and attacks in each benchmark. \textbf{(c)} shows the best results under each dataset and attack of the models in each benchmark. Compared with the existing benchmarks, our benchmark evaluates diverse and top-performing models under more OOD datasets and adversarial attacks. Therefore, our benchmark is more comprehensive and represents the state-of-the-art performance. } \label{fig:fig1} \end{figure*} \subsection{Deep Learning Robustness} It has been widely demonstrated that deep learning models have poor robustness. \cite{szegedy2013intriguing} first show that deep neural networks are vulnerable to adversarial examples, which are generated by adding imperceptible perturbations to natural examples but significantly affect the model predictions. A number of works have been devoted to studying adversarial robustness. On the one hand, many adversarial attacks \citep{goodfellow2014explaining,kurakin2016adversarial,carlini2019evaluating,dong2018boosting,xie2019improving,croce2020reliable} have been proposed to improve the effectiveness and efficiency of generating adversarial examples under white-box and black-box settings. Adversarial attacks can serve as an important surrogate to identify the weaknesses and evaluate the robustness of deep learning models. On the other hand, numerous adversarial defense methods \citep{Kurakin2017Adversarial,tramer2017ensemble,Madry2017Towards,Guo2017Countering,Xie2018Mitigating,Wong2018Provable,liao2017Defense,cohen2019certified,zhang2019theoretically,pang2020bag} have been developed to improve model robustness. Among the existing defenses, adversarial training (AT) is arguably the most effective technique \citep{Athalye2018Obfuscated,dong2020benchmarking}, in which the network is trained on the adversarial examples generated by attacks. 
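As a rough illustration of this training paradigm, the following PyTorch-style sketch couples an $\ell_\infty$-norm PGD attack with a single adversarial training step. It is only a minimal sketch: the perturbation budget, step size, and number of attack steps are illustrative placeholders rather than the settings used in our benchmark, and it assumes inputs normalized to $[0,1]$.

\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4/255, alpha=1/255, steps=10):
    # l_inf PGD: ascend the loss and project back to the eps-ball.
    # Assumes image pixels are scaled to [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # One step of adversarial training: train on the attacked batch.
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}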
Besides adversarial robustness, deep learning models are also susceptible to natural distribution shifts. For example, \cite{engstrom2019exploring} demonstrate that an image translation or rotation is sufficient to mislead a target model. \cite{hendrycks2018benchmarking} introduce ImageNet-C/P to benchmark model robustness to common corruptions. \cite{geirhos2018imagenet} show that ImageNet classifiers are biased towards texture rather than shape. Other works \citep{barbu2019objectnet,hendrycks2021many,hendrycks2021natural,dong2022viewfool} have collected real-world datasets to evaluate out-of-distribution generalization performance. For natural robustness, there are also techniques to improve robustness \citep{hendrycks2019augmix,hendrycks2021many,xie2020adversarial}. \subsection{Robustness Benchmarks} Since an increasing number of deep learning methods and robustness improvement techniques have been proposed, a comprehensive benchmark of model robustness is vital to understand their effectiveness and keep up with the state-of-the-art. Many works have developed robustness platforms that implement popular attacks for evaluating the robustness, including CleverHans \citep{papernot2016technical}, Foolbox \citep{rauber2017foolbox}, ART \citep{nicolae2018adversarial}, etc. But these platforms do not include the latest state-of-the-art models and do not provide benchmarking results. Some robustness benchmarks have been further established \citep{ling2019deepsec,dong2020benchmarking,croce2021robustbench,tang2021robustart}. Nevertheless, most benchmarks focus on adversarial robustness rather than natural robustness, thus they are unable to reveal the inherent relationships between them. For example, the widely used RobustBench \citep{croce2021robustbench} mainly evaluates adversarial robustness of adversarially trained models by AutoAttack \citep{croce2020reliable}. The most relevant work to ours is RobustART \citep{tang2021robustart}, which benchmarks robustness of dozens of network architectures and training techniques under various adversarial attacks and out-of-distribution datasets. The main limitation of RobustART is that it adopts aligned training settings to understand the robustness of each component, but this leads to inferior performance in the benchmark (i.e., it does not contain state-of-the-art models). It also incorporates insufficient OOD datasets. Compared with the existing benchmarks, ours is more comprehensive in the following aspects. First, our benchmark integrates 3 IID and 7 OOD datasets to better evaluate natural robustness, as shown in Fig.~\ref{fig:fig1}(a) and (b). Second, we evaluate a wide range of top-performing models, some of which are trained by ourselves through extensive ablation studies, leading to better results of the models in our benchmark compared with those in other benchmarks, as illustrated in Fig.~\ref{fig:fig1}(c). Third, we adopt robustness curves as the evaluation criteria, which can fully show the model performance under varying noise severities. Fourth, we provide a frequency analysis to explore the frequency bias of adversarially trained models and explain the model behaviors. \section{Benchmark Design}\label{sec3} \begin{table*}[t]\footnotesize \caption{Evaluation datasets of robustness in ARES-Bench. We include 3 IID datasets and 7 OOD datasets to evaluate natural robustness. We adopt various white-box and transfer-based black-box attacks to evaluate adversarial robustness. 
} \label{tab:datasets} \begin{center} \begin{minipage}{0.9\textwidth} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llllc@{\extracolsep{\fill}}} \toprule% Robustness & Distribution Shift & Source & Datasets \\ \midrule \multirow{4}*{Natural} & IID & Real-world & ImageNet-Val, ImageNet-V2, ImageNet-Real \\ \cmidrule{2-4} & \multirow{3}*{OOD} & Real-world & ObjectNet, ImageNet-A, ImageNet-R, ImageNet-V \\ \cmidrule{3-4} && Synthesized & ImageNet-C, Stylized-ImageNet, ImageNet-Sketch \\ \midrule \multirow{1}*{Adversarial} & Adversarial & Adversary & White-box attacks, Transfer-based attacks \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} In this paper, we establish \textbf{ARES-Bench}, a comprehensive benchmark for evaluating the robustness on image classification tasks. As two important aspects of model robustness, we investigate the relationship between natural and adversarial robustness using a variety of OOD datasets and adversarial attacks. For image classification models in the evaluation, we consider 55 models on ImageNet covering two mainstream types of network architectures: CNNs and Transformers, and four training paradigms that are closely related to robustness, including normal supervised training, pre-training on large-scale datasets, self-supervised learning (SSL), and adversarial training (AT). We systematically evaluate the robustness of these models using robustness curves. Below, we introduce the evaluation datasets and metrics in Sec.~\ref{sec:3-1}, and the model zoo in Sec.~\ref{sec:3-2}. \subsection{Evaluation Datasets and Metrics}\label{sec:3-1} In this section, we mainly introduce the evaluation datasets and adversarial attacks in our benchmark for natural robustness and adversarial robustness. \subsubsection{Natural Robustness} Natural noise comes from various sources in the real world, such as weather changes, sensor damage, object deformation, etc., which is unavoidable and destructive to deep learning models. To comprehensively evaluate the natural robustness of image classification models under diverse kinds of noises, our benchmark includes 3 independent and identically distributed (IID) datasets (\emph{ImageNet validation set}, \emph{ImageNet-V2} \citep{recht2019imagenet} and \emph{ImageNet-Real} \citep{beyer2020we}), 4 OOD datasets collected in the real world (\emph{ObjectNet} \citep{barbu2019objectnet}, \emph{ImageNet-A} \citep{hendrycks2021natural}, \emph{ImageNet-R} \citep{hendrycks2021many} and \emph{ImageNet-V} \citep{dong2022viewfool}), and 3 algorithmically synthesized OOD datasets (\emph{ImageNet-C} \citep{hendrycks2018benchmarking}, \emph{Stylized-ImageNet} \citep{geirhos2018imagenet} and \emph{ImageNet-Sketch} \citep{wang2019learning}), as shown in Table~\ref{tab:datasets}. These datasets are compatible with the ImageNet classifiers and commonly adopted to evaluate their natural robustness, while they focus on different aspects, such as viewpoint changes, common corruptions, style transfer. An integration of these datasets can thoroughly show the model behavior under various distribution shifts. Most of the datasets utilize classification accuracy as the evaluation metric, except for ImageNet-C. \textbf{ImageNet validation} \citep[\textbf{IN-Val},][]{russakovsky2015imagenet} is by default adopted to evaluate the clean accuracy of ImageNet classifiers. It contains 50,000 images belonging to 1,000 categories. 
\textbf{ImageNet-V2} \citep[\textbf{IN-V2},][]{recht2019imagenet} is a new test dataset for the ImageNet benchmark, which is used to evaluate the generalization ability of image classifiers. It contains three test sets with 10,000 new images each. We only use the subset of matched-frequency-format. \textbf{ImageNet-Real} \citep[\textbf{IN-Real},][]{beyer2020we} adjusts the labels in ImageNet, which considers multiple less important labels of an image. It is used to test whether the model overfits to the idiosyncrasies of the original labeling procedure. \textbf{ObjectNet} \citep[\textbf{ON},][]{barbu2019objectnet} is a real-world test dataset containing 50,000 images, in which object backgrounds, rotations, and imaging viewpoints are random. It is used to test robustness under background and viewpoint changes. \textbf{ImageNet-A} \citep[\textbf{IN-A},][]{hendrycks2021natural} is a collection of hard-to-classify samples against the ResNet50~\citep{he2015deep} model. The samples in ImageNet-A are real-world and unmodified with limited spurious cues. \textbf{ImageNet-R} \citep[\textbf{IN-R},][]{hendrycks2021many} contains real-world images with changes in image style, including art, cartoons, etc. It has renditions of 200 ImageNet classes and totally 30,000 images, which can be used to evaluate robustness to real-world style transferred images. \textbf{ImageNet-V} \citep[\textbf{IN-V},][]{dong2022viewfool} is a recently proposed dataset to evaluate viewpoint robustness of image classifiers. It contains 10,000 images of 3D objects collected from adversarial viewpoints generated by an attack method. \textbf{ImageNet-C} \citep[\textbf{IN-C},][]{hendrycks2018benchmarking} consists of 15 types of synthesized corruptions ranging from noise, blur, weather, to digital ones. Each corruption is associated with 5 severities. ImageNet-C has been widely used to evaluate corruption robustness of image classification models. Commonly, the metric of ImageNet-C is the normalized mean corruption error (mCE). We let $f_{j,k}(\cdot)$ denote the function of the $j$-th corruption type under the $k$-th severity. For the original dataset $\mathcal{D}=\{(\bm{x}_i, y_i)\}_{i=1}^N$ with $N$ samples, the corruption dataset is constructed by applying every corruption function $f_{j,k}(\cdot)$ to the original dataset. Then, the corruption error of a classifier $\mathcal{C}$ under the $j$-th corruption is defined as \begin{equation} \mathrm{CE}(\mathcal{C}, j)=\frac{1}{NK}\sum_{i=1}^N \sum_{k=1}^K\bm{1}(\mathcal{C}(f_{j,k}(\bm{x}_i)) \neq y_i), \end{equation} where $\bm{1}(\cdot)$ is an indicator function. When it comes to multiple corruptions, the mean corruption error (mCE) can be defined as \begin{equation} \mathrm{mCE}(\mathcal{C})=\frac{1}{M}\sum_{j=1}^M \frac{\mathrm{CE}(\mathcal{C}, j)}{\mathrm{CE}(\mathrm{AlexNet}, j)}, \end{equation} where $M$ is the total number of corruptions and the performance of AlexNet is used as the baseline. Apart from the mCE metric, we also introduce a robustness curve of \emph{classification accuracy vs. severity} of the corruptions, which can be used to study robustness under different severities. \textbf{Stylized-ImageNet} \citep[\textbf{SIN},][]{geirhos2018imagenet} is a stylized version of ImageNet, which is generated by style transfer and can be used to measure shape bias of models. \textbf{ImageNet-Sketch} \citep[\textbf{IN-Sketch},][]{wang2019learning} contains 50,000 sketch-like images of the 1,000 ImageNet classes. All images are within the ``black and white'' color scheme. 
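To make the CE and mCE metrics above concrete, we sketch their computation in Python below. The sketch assumes that each corruption type is exposed as a data loader covering all five severities and that images are already preprocessed for the model; the loader and model interfaces (e.g., \texttt{loaders\_by\_corruption}) are illustrative placeholders rather than the interface of our evaluation code.

\begin{verbatim}
import torch

def corruption_error(model, corrupted_loader):
    # Error rate over one corruption type, averaged over all severities.
    wrong, total = 0, 0
    with torch.no_grad():
        for images, labels in corrupted_loader:
            preds = model(images).argmax(dim=1)
            wrong += (preds != labels).sum().item()
            total += labels.numel()
    return wrong / total

def mean_corruption_error(model, alexnet, loaders_by_corruption):
    # Normalized mCE: per-corruption errors divided by AlexNet baselines,
    # then averaged over all corruption types.
    ratios = []
    for loader in loaders_by_corruption.values():
        ratios.append(corruption_error(model, loader) /
                      corruption_error(alexnet, loader))
    return sum(ratios) / len(ratios)
\end{verbatim}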
\subsubsection{Adversarial Robustness} Adversarial robustness is an important aspect of model robustness in the presence of adversaries. To measure adversarial robustness, it is typical to adopt adversarial attacks as surrogates, which craft the worst-case adversarial examples under a specific threat model. To make the adversarial example $\bm{x}^{adv}$ visually indistinguishable from the original example $\bm{x}$, a norm bound is usually adopted to restrict the perturbation as $\|\bm{x}^{adv}-\bm{x}\|_p\leq \epsilon$, in which $\epsilon$ is the perturbation budget. An adversarial example can be generated by solving \begin{equation}\label{eq:adv} \bm{x}^{adv}=\mathop{\arg \max} \limits_{\hat{\bm{x}}:\|\hat{\bm{x}}-\bm{x}\|_p\leq\epsilon} \mathcal{L}(\hat{\bm{x}}, y), \end{equation} where $\mathcal{L}$ is a classification loss (e.g., cross-entropy loss, C\&W loss \citep{carlini2017towards}, etc.). To solve the optimization problem \eqref{eq:adv}, various gradient-based attack methods have been proposed \citep{goodfellow2014explaining,kurakin2016adversarial,carlini2017towards}. These methods calculate the gradient of the loss function w.r.t. input, which are known as \emph{white-box attacks}. Besides, researchers have found that adversarial examples exhibit good cross-model transferability \citep{liu2016delving}, making \emph{black-box attacks} feasible. In our benchmark, we evaluate adversarial robustness with 3 white-box attacks, including \emph{Fast Gradient Sign Method (FGSM)} \citep{goodfellow2014explaining}, \emph{Projected Gradient Descent (PGD)} \citep{Madry2017Towards} and \emph{AutoAttack} \citep{croce2020reliable}, and 5 black-box attacks, including \emph{Momentum Iterative Method (MIM)} \citep{dong2018boosting}, \emph{Diversity Input Method (DIM)} \citep{xie2019improving}, \emph{Translation Invariant Method (TIM)} \citep{dong2019evading}, \emph{Scale-Invariant Nesterov Iterative Method (SI-NI-FGSM)} \citep{lin2019nesterov} and \emph{Variance Tuning Method (VMI-FGSM)} \citep{wang2021enhancing}. We include the representative and state-of-the-art attack methods under the white-box and black-box settings, e.g., AutoAttack for white-box attacks and VMI-FGSM for black-box attacks, which facilitate a more correct evaluation of adversarial robustness. We consider both $\ell_\infty$-norm and $\ell_2$-norm perturbations. The detailed settings of these attacks are shown in Appendix A. For the evaluation metrics, we follow our previous work \citep{dong2020benchmarking} to adopt the robustness curves. In this work, we consider the curve of \emph{accuracy vs. perturbation budget}, which shows the robustness of models under different perturbation budgets, to provide a global understanding of robustness. To efficiently obtain a robustness curve, we first perform a binary search for a sample to find the minimum perturbation budget that can lead to the misclassification of the crafted adversarial example. Then, we compute the percentage of data samples whose minimum perturbations are smaller than each $\epsilon$ to plot the curve. For black-box attacks, we show the transferability heatmap of all models. \begin{table*}[t]\footnotesize \caption{Image classification models in our benchmark. We include typical architectures of CNNs and Transformers with different sizes. The models are trained by normal supervised learning, pre-training on ImageNet-21K, self-supervised learning (SSL) and adversarial training (AT). 
We also include existing adversarially robust models from RobustBench \citep[RB,][]{croce2021robustbench}, Robust Library \citep[RL,][]{robustness}, and feature denoising \citep[FD,][]{xie2019feature}.} \label{tab:arch} \begin{center} \begin{minipage}{0.9\textwidth} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llllccccc@{\extracolsep{\fill}}} \toprule% \multicolumn{3}{c}{\multirow{3}*{Model Architecture}} & \multirow{3}*{Size} & \multicolumn{5}{c}{Training Paradigm} \\ \cmidrule{5-9} &&&&Normal&Pre-training&SSL&AT&Others\\ \midrule \multirow{16}*{CNN} & \multirow{3}*{VGG} & VGG13 &127M& \checkmark \\ && VGG16 &131M& \checkmark \\ && VGG19 &137M& \checkmark \\ \cmidrule{2-9} & \multirow{4}*{ResNet} & ResNet50 &24M& \checkmark&&\checkmark&\checkmark&RB,RL\\ && ResNet101 &42M& \checkmark&&&\checkmark \\ && ResNet152 &57M& \checkmark&&&\checkmark&FD\\ && Wide-ResNet50 &66M& \checkmark&&&\checkmark&RB \\ \cmidrule{2-9} & \multirow{3}*{DenseNet} & DenseNet121 &8M& \checkmark \\ && DenseNet161 &27M& \checkmark \\ && DenseNet201 &19M& \checkmark \\ \cmidrule{2-9} & \multirow{3}*{ConvNext} & ConvNextS &48M& \checkmark&\checkmark&&\checkmark \\ && ConvNextB &84M& \checkmark&\checkmark&&\checkmark \\ && ConvNextL &189M& \checkmark&\checkmark&&\checkmark \\ \midrule \multirow{15}*{Transformer} & \multirow{3}*{ViT} & ViTS &21M& \checkmark&\checkmark&&\checkmark \\ && ViTB &83M& \checkmark&\checkmark&\checkmark&\checkmark \\ && ViTL &290M& \checkmark&\checkmark&\checkmark \\ \cmidrule{2-9} & \multirow{3}*{XciT} & XciTS &45M& \checkmark& &&&RB \\ && XciTM &80M& \checkmark &&&&RB\\ && XciTL &180M& \checkmark &&&&RB\\ \cmidrule{2-9} & \multirow{3}*{T2T} & T2T14 &20M& \checkmark \\ && T2T19 &37M& \checkmark \\ && T2T24 &61M& \checkmark \\ \cmidrule{2-9} & \multirow{3}*{Swin} & SwinS &47M& \checkmark&\checkmark& &\checkmark \\ && SwinB &84M& \checkmark&\checkmark&&\checkmark \\ && SwinL &187M& &\checkmark&&\checkmark \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \subsubsection{Frequency Perspective} \label{sec:3.2.3} Adversarial training has been shown to effectively improve adversarial robustness by replacing the benign inputs with the adversarial examples, but this training paradigm remains opaque. An interpretability technique is necessary for researchers to understand the mechanism of AT and develop better architectures and training strategies. Generally, interpretability approaches highlight the important regions that affect the decision-making \citep{selvaraju2017grad,cao2020analyzing}. However, the adversarial noise is overlapped with the benign image, making it difficult to find out the region of interest in the spatial domain. Inspired by the observation that adversarial perturbations are actually high-frequency signals \citep{maiya2021frequency, zhang2019adversarial}, we intend to find out the differences between AT models and the normally trained ones for their frequency responses. From the perspective of Fourier analysis~\citep{howell2016principles}, signals can be decomposed into several basic trigonometric functions. Therefore, we can isolate the signals that contribute to the final decision to find out the frequency bias of the models. Specifically, we develop a low-pass filter to drop high-frequency signals. By gradually increasing the band-width of the low-pass filter, the minimum cutoff frequency $f_c$ for each sample is determined. The minimum cutoff frequency indicates how many trigonometric signals are sufficient for the correct classification. 
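A minimal sketch of this low-pass filtering step is given below (PyTorch-style). It applies an ideal circular low-pass mask to the centered Fourier spectrum of an image and sweeps the bandwidth to find the minimum cutoff frequency that preserves the correct prediction. The mask shape, the frequency normalization, and the \texttt{model} placeholder are illustrative assumptions rather than the exact implementation used in the benchmark.
\begin{verbatim}
import torch

def low_pass(x, cutoff):
    """Keep only frequencies within `cutoff` of the spectrum center (ideal low-pass)."""
    _, _, h, w = x.shape
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))   # centered 2D spectrum
    yy = torch.arange(h, dtype=torch.float32, device=x.device) - h / 2
    xx = torch.arange(w, dtype=torch.float32, device=x.device) - w / 2
    dist = (yy[:, None] ** 2 + xx[None, :] ** 2).sqrt()          # radial frequency
    mask = (dist <= cutoff).float()
    filtered = torch.fft.ifftshift(spec * mask, dim=(-2, -1))
    return torch.fft.ifft2(filtered).real

@torch.no_grad()
def min_cutoff(model, x, y, max_cutoff):
    """Smallest low-pass bandwidth f_c that still yields the correct prediction for x."""
    for fc in range(1, max_cutoff + 1):
        if model(low_pass(x, fc)).argmax(dim=1).item() == y.item():
            return fc
    return max_cutoff
\end{verbatim}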
The ensemble of the minimum $f_c$ over all samples forms the final frequency bias of a model. To exhibit this ensemble of $f_c$, we introduce the \emph{accuracy vs. low-pass bandwidth} (ACC-LPB) curve. Since it is meaningless to examine $f_c$ for clean samples that are already misclassified, we normalize the accuracy to the range 0--1 by dividing it by the clean accuracy. Thus, we introduce a metric to measure the frequency bias as
\begin{equation}\label{eq:frenq}
f_{bias}=\frac{1}{N'} \sum^{N'}_{i=1} \mathop{\min} f_c(\bm{x}_i),
\end{equation}
where $N'$ is the number of correctly classified samples and $f_c(\bm{x}_i)$ is the minimum low-pass bandwidth that maintains the correct classification. This metric is positively and linearly correlated with the ratio of the area above the ACC-LPB curve.

\subsection{Model Zoo}\label{sec:3-2}
In this section, we introduce the image classification models evaluated in our benchmark. Recently, Vision Transformer \citep{dosovitskiy2021image} has become a prevalent model architecture for image recognition based on the long-range self-attention mechanism. To fully understand the robustness of Transformers compared with convolutional neural networks (CNNs), we include typical networks of both families. Besides normal supervised training on ImageNet-1K, we further consider other typical training paradigms, including pre-training on large-scale datasets (e.g., ImageNet-21K), self-supervised learning (SSL), and adversarial training (AT), to facilitate a deeper understanding of their effects on model robustness, as they are widely studied in the literature. Therefore, we include a total of 55 models in the evaluation, as shown in Table~\ref{tab:arch}. Besides these models, we further study training tricks in large-scale AT on ImageNet, as detailed in Sec.~\ref{sec:3.2.2}.

\subsubsection{Architectures and Training Paradigms}\label{sec:3-2-1}
\textbf{Network architectures.} To comprehensively understand the robustness of Transformers compared with CNNs, we include several typical and advanced architectures. For CNNs, we consider \emph{VGG} \citep{simonyan2014very}, \emph{ResNet} \citep{he2015deep}, \emph{DenseNet} \citep{huang2017densely}, and \emph{ConvNext} \citep{liu2022convnet}. The first three are classical architectures in the field, while the last one is a recent design that achieves performance comparable to Transformers. For Transformer models, we consider \emph{ViT} \citep{dosovitskiy2021image}, \emph{XciT} \citep{el2021xcit}, \emph{T2T} \citep{Yuan_2021_ICCV}, and \emph{Swin Transformer} \citep{liu2021swin}. For these architectures, we also consider different model sizes, as shown in Table~\ref{tab:arch}. We collect most of the normally trained ImageNet-1K models of these architectures. Besides, we also study other training paradigms, as illustrated below.

\textbf{Pre-training on large-scale datasets.} This is a common strategy to prevent overfitting and improve the generalization performance \citep{dosovitskiy2021image,liu2022convnet}. However, its effects on model robustness are rarely explored. To understand its effects, we consider ImageNet-21K as the pre-training dataset, which contains 21K classes and 14M images \citep{deng2009imagenet}. In our benchmark, we include ConvNext, ViT, and Swin based on the released models.

\textbf{Self-supervised learning (SSL).} SSL is an effective method of learning discriminative representations from unlabeled data based on pretext tasks.
The paradigm of pre-training by SSL and fine-tuning on downstream tasks has become a prevailing approach. Despite the effectiveness, the study on the effects of SSL on model robustness is limited. Studying this problem is essential to understand the behavior of SSL when deployed in security-sensitive applications. In our benchmark, we consider two SSL methods, including MOCOv3 \citep{chen2021empirical} and MAE \citep{he2022masked}, as they are among the most representative SSL methods and achieve superior performance. MOCOv3 adopts a ResNet50 backbone and MAE is based on the ViT architecture. \textbf{Adversarial training (AT).} AT augments training data with adversarial examples \citep{goodfellow2014explaining,Madry2017Towards}, which is shown to be the most effective technique of improving adversarial robustness \citep{Athalye2018Obfuscated,dong2020benchmarking,croce2020reliable}. Formally, AT can be formulated as a minimax optimization problem \citep{Madry2017Towards}: \begin{equation} \min \limits_{\bm{w}} \mathbb{E}_{(\bm{x},y) \sim \mathcal{D}}\max \limits_{\hat{\bm{x}}:\|\hat{\bm{x}}-\bm{x}\|_p\leq\epsilon}\mathcal{L}(\hat{\bm{x}},y;\bm{w}), \end{equation} where $\bm{w}$ denotes the weights of a classifier. The inner maximization can be solved by PGD \citep{Madry2017Towards}. In our benchmark, we perform AT with the $\ell_\infty$-norm bounded perturbations of $\epsilon=4/255$. To accelerate training on ImageNet-1K, we adopt the PGD-3 adversary with the step size $2\epsilon/3$. Due to the heavy cost of training, we do not run AT for all architectures, but only consider ResNet, ConvNext, ViT, and Swin to cover the most widely used and state-of-the-art networks of CNNs and Transformers. The training settings of AT are similar to normal training, as detailed in Appendix A. We also introduce the training tricks used in AT in Sec.~\ref{sec:3.2.2}. For comparison, we also incorporate the existing state-of-the-art adversarially robust models according to RobustBench \citep{croce2021robustbench}, including a ResNet50 and a Wide-ResNet50 from \cite{salman2020adversarially}, three XciT models from \cite{debenedetti2022light}, a ResNet50 model from the Robust Library \citep[RL,][]{robustness}, and a ResNet152 based on Feature Denoising \citep[FD,][]{xie2019feature}. \subsubsection{Training Tricks in AT} \label{sec:3.2.2} \begin{table}[t] \begin{center} \begin{minipage}{215pt} \caption{Training tricks in adversarial training.} \label{tab:strategy}% \begin{tabular}{@{}lc@{}} \toprule Category & Method \\ \midrule Data Augmentation & Mixup, RandAugment \\ Regularization & Weight Decay, Label Smoothing \\ Weight Averaging & EMA \\ Pre-training & 21K-pre-training, SimMIM, CLIP \\ \botrule \end{tabular} \end{minipage} \end{center} \end{table} Recent works have found that the training tricks (e.g., weight decay, label smoothing, weight averaging) play an important role in AT \citep{pang2020bag,gowal2020uncovering}. However, previous works mainly study the tricks in AT on smaller datasets (e.g., CIFAR-10), but do not explore large-scale AT on ImageNet. Only \cite{debenedetti2022light} provide a recipe of training robust Vision Transformers on ImageNet. As adversarial training is an important method evaluated in our benchmark, we put a special focus on various training tricks in large-scale AT to thoroughly understand their effectiveness and provide a guideline for training robust models on ImageNet. 
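Before detailing the individual tricks, the following minimal PyTorch-style sketch illustrates the PGD-based adversarial training step formulated in Sec.~\ref{sec:3-2-1}, with $\ell_\infty$ perturbations of $\epsilon=4/255$ and a PGD-3 inner solver with step size $2\epsilon/3$. The \texttt{model} and \texttt{optimizer} objects are placeholders, images are assumed to lie in $[0,1]$, and the actual training code additionally applies the tricks discussed below.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4/255, steps=3):
    """Inner maximization: l_inf PGD starting from a random point in the eps-ball."""
    alpha = 2 * eps / steps                                  # step size 2*eps/3 for PGD-3
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()         # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the model weights on the crafted adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}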
Based on previous studies \citep{pang2020bag,gowal2020uncovering}, we concentrate on four categories of training tricks that have a significant impact on adversarial robustness, including data augmentation, regularization, weight averaging, and pre-training. Table~\ref{tab:strategy} shows the studied methods of each category, most of which have also been integrated into recent normally trained models such as ConvNext and Swin. Below, we detail these methods.

\textbf{Data augmentation (DA).} DA techniques are commonly adopted to prevent overfitting and improve the generalization performance of deep learning models. It has also been shown that DA can improve adversarial robustness \citep{rebuffi2021data}. In this work, we study Mixup \citep{zhang2017mixup} and RandAugment \citep{cubuk2020randaugment} as two typical DA techniques in AT. Specifically, Mixup blends two random samples and their labels with a certain ratio, which can be expressed as
\begin{equation}
\tilde{\bm{x}}=\lambda\bm{x}_i+(1-\lambda)\bm{x}_j;\; \tilde{\bm{y}}=\lambda\bm{y}_i+(1-\lambda)\bm{y}_j,
\end{equation}
where $\bm{x}_i,\bm{x}_j$ are two samples, $\bm{y}_i,\bm{y}_j$ are their corresponding labels, and $\tilde{\bm{x}}, \tilde{\bm{y}}$ are the blended sample and label used for training. RandAugment applies random transformations (e.g., auto-contrast, shear transformation) to images. It reduces the search space of augmentation policies, and its regularization strength can be tailored to different models and dataset sizes.

\textbf{Regularization.} As shown in \cite{pang2020bag}, weight decay and label smoothing are two important regularization methods that can significantly affect the performance of AT. A proper value of weight decay can enlarge the margin of a sample from the decision boundary by imposing an $\ell_2$ regularization on model weights. Label smoothing introduces uncertainty to the one-hot labels, which helps combat label noise in AT and avoid robust overfitting \citep{dong2022exploring}.

\textbf{Weight averaging (WA).} WA is widely used in normal training to find flatter optima and obtain better generalization \citep{izmailov2018averaging}. It has been shown that WA brings a significant robustness improvement \citep{gowal2020uncovering}. For WA, we adopt an exponential moving average \citep[EMA,][]{bolme2009average} of model weights as
\begin{equation}
\tilde{\bm{w}}_t=\beta \cdot \tilde{\bm{w}}_{t-1} + (1-\beta) \cdot \bm{w}_t,
\end{equation}
where $\tilde{\bm{w}}_t$ denotes the averaged model weights at the $t$-th training step, $\bm{w}_t$ denotes the current model weights, and $\beta$ is a hyperparameter commonly set between $0.9$ and $0.999$.

\textbf{Pre-training.} It has been shown that pre-training on large-scale datasets can improve model robustness \citep{hendrycks2019using}. Models pre-trained by SSL can also be adversarially fine-tuned to obtain adversarial robustness on downstream tasks \citep{chen2020adversarial,dong2021should}. Thus, we further study pre-training for large-scale AT. Different from previous works, our purpose is to examine whether pre-trained models can serve as better initializations that accelerate AT; thus, we do not perform adversarial pre-training but instead adopt the pre-trained models released by different methods. Specifically, we consider three widely studied types of pre-training: pre-training on large-scale datasets, masked image modeling, and vision-language pre-training.
For pre-training on large-scale datasets, we adopt ImageNet-21K as the pre-training dataset (denoted as \textbf{21K-pre-training}). We think that more data can prevent models from overfitting and improve the robustness. For masked image modeling, we adopt \textbf{SimMIM} \citep{xie2022simmim}. Although similar to MAE, we choose SimMIM because it adopts the backbone of Swin Transformer, which performs better than ViT adopted in MAE, as shown in the experiment. For vision-language pre-training, we focus on the famous \textbf{CLIP} model \citep{radford2021learning}, which performs contrastive learning on images and the corresponding texts. We only adopt the image encoder from CLIP for adversarial fine-tuning. \renewcommand{\arraystretch}{1.2} \begin{table*}[t]\footnotesize \caption{The natural robustness of CNN-based models, including VGG, ResNet, DenseNet and ConvNext. We show the classification accuracy (\%) on these datasets and $1-\mathrm{mCE}$ on ImageNet-C for consistent comparison (higher is better). We mark the best result within each architecture in \textbf{bold}, and mark the overall best result in {\color{red} \textbf{red}}. } \label{tab:ood-cnn} \begin{center} \begin{minipage}{0.99\textwidth} \setlength{\tabcolsep}{5pt}{ \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ll ccc cccc cccc@{\extracolsep{\fill}}} \toprule% \multirow{3}*{Architecture}& \multirow{3}*{Method} & \multicolumn{3}{c}{IID} & \multicolumn{4}{c}{Real-world OOD} & \multicolumn{3}{c}{Synthesized OOD} & \multirow{3}*{Avg.} \\ \cmidrule{3-5}\cmidrule{6-9}\cmidrule{10-12} & & IN-Val & IN-V2 & IN-Real & ON & IN-A & IN-R & IN-V & IN-C & SIN & IN-Sketch \\ \midrule VGG13 & Normal & 69.6 & 57.3 & 77.0 & 10.4 & 2.2 & 25.5 & 9.1 & 6.0 & 2.8 & 15.4 & 27.5 \\ \cmidrule{2-13} VGG16 & Normal & 71.4 & 58.6 & 78.8 & 12.5 & 2.8 & 27.0 & 10.1 & 9.7 & 3.0 & 16.9 & 29.1 \\ \cmidrule{2-13} VGG19 & Normal & 72.2 & 59.7 & 79.2 & 13.4 & 2.4 & 27.6 & 9.6 & 11.1 & 2.8 & 17.2 & 29.5 \\ \midrule \multirow{5}*{ResNet50} & Normal & \bf75.9 & \bf63.3 & \bf82.6 & \bf17.9 & 0.1 & 35.4 & \bf12.5 & 23.3 & 7.4 & 23.0 & 34.1 \\ & MOCOv3 & 74.6 & 62.0 & 81.3 & 14.3 & \bf4.0 & 36.6 & 9.8 & \bf26.1 & 8.1 & \bf25.2 & \bf34.2 \\ & AT & 66.1 & 53.0 & 73.7 & 9.1 & 2.6 & \bf40.3 & 7.5 & 15.3 & \bf13.2 & 21.8 & 30.2 \\ & RB & 64.2 & 51.4 & 71.9 & 8.5 & 2.3 & 38.3 & 7.1 & 14.9 & 11.7 & 20.9 & 29.1 \\ & RL & 62.9 & 50.2 & 71.0 & 8.0 & 2.0 & 39.4 & 7.2 & 15.0 & 12.5 & 20.8 & 28.9 \\ \cmidrule{2-13} \multirow{2}*{ResNet101} & Normal & \bf77.3 & \bf65.7 & \bf83.6 & \bf19.1 & \bf5.1 & 38.7 & \bf13.6 & \bf29.6 & 9.1 & \bf26.4 & \bf36.8 \\ & AT & 70.4 & 57.7 & 78.3 & 11.8 & 3.7 & \bf42.9 & 8.4 & 20.6 & \bf15.2 & 25.1 & 33.4 \\ \cmidrule{2-13} \multirow{3}*{ResNet152} & Normal & \bf78.2 & \bf66.9 & \bf84.7 & \bf20.0 & \bf6.4 & 40.5 & \bf14.8 & \bf30.7 & 10.3 & 27.9 & \bf38.0 \\ & AT & 72.4 & 60.0 & 79.7 & 13.0 & 5.4 & \bf47.2 & 9.4 & 24.4 & \bf16.7 & \bf29.9 & 35.8 \\ & FD & 65.4 & 49.0 & 72.1 & 8.0 & 3.4 & 43.5 & 6.8 & 17.5 & 15.1 & 23.8 & 30.4 \\ \cmidrule{2-13} \multirow{3}*{Wide-ResNet50} & Normal & \bf81.5 & \bf70.5 & \bf86.6 & \bf22.9 & \bf16.3 & \bf44.2 & \bf16.1 & \bf40.4 & 12.0 & \bf32.5 & \bf42.3 \\ & AT & 70.1 & 56.9 & 78.0 & 11.0 & 3.4 & 42.0 & 7.5 & 18.5 & 13.2 & 23.8 & 32.4 \\ & RB & 68.8 & 55.8 & 76.4 & 10.6 & 3.3 & 41.8 & 8.2 & 19.5 & \bf13.6 & 23.7 & 32.2 \\ \midrule DenseNet121 & Normal & 74.7 & 62.6 & 81.7 & 15.8 & 2.6 & 36.8 & 12.3 & 26.6 & 8.0 & 23.8 & 34.5 \\ \cmidrule{2-13} DenseNet161 & Normal & 77.4 & 65.8 & 83.8 & 17.8 & 4.7 & 39.6 & 14.3 
& 33.6 & 11.3 & 28.1 & 37.6 \\ \cmidrule{2-13} DenseNet201 & Normal & 77.3 & 65.2 & 83.5 & 17.9 & 4.1 & 40.3 & 13.4 & 31.6 & 10.9 & 27.3 & 37.1 \\ \midrule \multirow{3}*{ConvNextS} & Normal & 83.2 & 72.5 & 88.0 & 25.3 & 31.3 & 49.6 & 17.8 & 50.5 & 21.8 & 37.1 & 47.7 \\ & Pre-train & \bf84.6 & \bf74.7 & \bf89.1 & \bf28.7 & \bf44.8 & \bf57.6 & \bf23.2 & \bf56.3 & 19.2 & \bf43.6 & \bf52.2 \\ &AT&76.1&63.8&82.8&15.8&10.5&53.7&11.0&36.5&\bf22.0&39.4&41.2\\ \cmidrule{2-13} \multirow{3}*{ConvNextB} & Normal & 83.8 & 73.7 & 88.2 & 26.3 & 36.6 & 51.3 & 19.9 & 53.2 & 21.5 & 38.2 & 49.3 \\ & Pre-train & \bf85.8 & \bf76.0 & \bf89.6 & \bf30.5 & \bf54.6 & \bf62.0 & \bf24.5 & \bf59.6 & \bf25.1 & \bf48.8 & \bf55.7 \\ &AT&77.1&64.9&83.8&16.5&12.2&56.8&11.5&38.8&23.6&43.6&42.9\\ \cmidrule{2-13} \multirow{3}*{ConvNextL} & Normal & 84.3 & 74.2 & 88.5 & 27.1 & 41.3 & 53.5 & 22.0 & 55.4 & 23.9 & 40.1 & 51.0 \\ & Pre-train & {\color{red} \bf86.6} & {\color{red} \bf77.1} & {\color{red} \bf89.7} & {\color{red} \bf30.6} & {\color{red} \bf59.8} & {\color{red} \bf64.2} & {\color{red} \bf27.3} & {\color{red} \bf63.7} & 24.9 & {\color{red} \bf49.9} & {\color{red} \bf57.4}\\ &AT&78.1&66.2&84.6&17.7&14.1&59.4&12.8&40.7&{\color{red} \bf25.2}&45.1&44.4\\ \botrule \end{tabular*} } \end{minipage} \end{center} \end{table*} \renewcommand{\arraystretch}{1.2} \begin{table*}[t]\footnotesize \begin{center} \begin{minipage}{0.99\textwidth} \setlength{\tabcolsep}{5pt} \caption{The natural robustness of Transformer-based models, including ViT, XciT, T2T and Swin. We show the classification accuracy (\%) on these datasets and $1-\mathrm{mCE}$ on ImageNet-C for consistent comparison (higher is better). We mark the best result within each architecture in \textbf{bold}, and mark the overall best result in {\color{red} \textbf{red}}.} \label{tab:ood-transformer} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ll ccc cccc cccc@{\extracolsep{\fill}}} \toprule% \multirow{3}*{Architecture}& \multirow{3}*{Method} & \multicolumn{3}{c}{IID} & \multicolumn{4}{c}{Real-world OOD} & \multicolumn{3}{c}{Synthesized OOD} & \multirow{3}*{Avg.} \\ \cmidrule{3-5}\cmidrule{6-9}\cmidrule{10-12} & & IN-Val & IN-V2 & IN-Real & ON & IN-A & IN-R & IN-V & IN-C & SIN & IN-Sketch \\ \midrule \multirow{3}*{ViTS} & Normal & 74.4 & 61.6 & 80.0 & 13.1 & 8.8 & 30.4 & 11.2 & 32.0 & 9.1 & 19.9 & 34.0 \\ & Pre-train & \bf81.4 & \bf70.3 & \bf86.8 & \bf22.7 & \bf27.3 & \bf45.7 & \bf16.6 & \bf47.1 & 15.8 & \bf32.5 & \bf44.6 \\ & AT & 70.2 & 57.3 & 77.9 & 11.5 & 6.1 & 46.0 & 8.5 & 27.8 & \bf16.8 & 29.8 & 35.2 \\ \cmidrule{2-13} \multirow{4}*{ViTB} & Normal & 75.8 & 61.6 & 80.9 & 13.2 & 11.4 & 32.8 & 13.3 & 34.3 & 10.9 & 23.7 & 35.8 \\ & Pre-train & \bf84.6 & \bf73.9 & \bf88.8 & \bf27.4 & \bf44.5 & \bf56.8 & \bf19.4 & \bf57.5 & \bf22.6 & \bf43.0 & \bf51.9 \\ & MAE & 83.6 & 73.1 & 88.1 & 24.9 & 37.7 & 49.8 & 18.2 & 49.4 & 20.2 & 36.4 & 48.1 \\ & AT & 73.4 & 60.4 & 80.5 & 12.7 & 8.9 & 50.7 & 9.4 & 36.6 & 22.2 & 35.7 & 39.1 \\ \cmidrule{2-13} \multirow{3}*{ViTL} & Normal & 75.2 & 60.7 & 79.8 & 11.2 & 11.3 & 33.3 & 13.4 & 35.4 & 9.3 & 25.0 & 35.4 \\ & Pre-train & \bf85.8 & \bf76.0 & \bf89.2 & \bf30.5 & \bf56.1 & {\color{red}\bf64.2} & \bf25.5 & {\color{red}\bf65.3} & {\color{red}\bf30.1} & {\color{red}\bf51.8} & {\color{red}\bf57.4} \\ & MAE & 85.1 & 75.6 & 89.0 & 27.3 & 50.6 & 60.0 & 21.5 & 56.2 & 24.1 & 46.4 & 53.6 \\ \midrule \multirow{2}*{XciTS} & Normal & \bf82.4 & \bf71.5 & \bf86.8 & \bf23.7 & \bf31.3 & 45.0 & \bf17.0 & \bf50.1 & \bf19.5 & \bf32.9 & \bf46.0 \\ & RB & 73.3 
& 60.5 & 80.6 & 12.7 & 6.3 & \bf45.7 & 9.7 & 28.5 & 18.4 & 31.2 & 36.7 \\ \cmidrule{2-13} \multirow{2}*{XciTM} & Normal & \bf82.6 & \bf71.0 & \bf86.8 & \bf23.4 & \bf33.3 & 44.7 & \bf17.7 & \bf50.5 & \bf20.3 & \bf33.1 & \bf46.3 \\ & RB & 74.1 & 61.7 & 81.3 & 13.6 & 7.0 & \bf47.1 & 9.5 & 30.2 & 19.7 & 32.6 & 37.7 \\ \cmidrule{2-13} \multirow{2}*{XciTL} & Normal & \bf83.0 & \bf72.0 & \bf86.9 & \bf23.7 & \bf36.2 & 46.2 & \bf17.9 & \bf50.2 & \bf20.4 & \bf34.4 & \bf47.1 \\ & RB & 75.1 & 62.7 & 81.7 & 13.4 & 8.8 & \bf49.0 & 10.7 & 32.0 & 19.9 & \bf34.4 & 38.7\\ \midrule T2T14 & Normal & 81.6 & 70.9 & 86.8 & 22.3 & 24.1 & 44.7 & 16.7 & 46.8 & 17.7 & 32.2 & 44.4 \\ \cmidrule{2-13} T2T19 & Normal & 82.3 & 71.6 & 87.2 & 23.2 & 29.0 & 47.3 & 18.0 & 50.2 & 20.9 & 34.4 & 46.4 \\ \cmidrule{2-13} T2T24 & Normal & 82.4 & 71.7 & 87.2 & 22.9 & 29.7 & 47.9 & 18.0 & 52.0 & 20.8 & 35.1 & 46.8 \\ \midrule \multirow{3}*{SwinS} & Normal & 83.2 & 72.1 & 87.5 & 24.7 & 33.0 & 44.9 & 19.3 & 45.1 & 16.8 & 32.0 & 45.8 \\ & Pre-train & \bf83.3 & \bf73.5 & \bf88.6 & \bf28.1 & \bf43.9 & \bf54.8 & \bf21.3 & \bf50.6 & 17.2 & \bf41.2 & \bf50.3 \\ & AT & 75.8 & 63.3 & 82.6 & 15.3 & 10.6 & 52.5 & 10.8 & 37.1 & \bf21.1 & 37.1 & 40.6 \\ \cmidrule{2-13} \multirow{3}*{SwinB} & Normal & 83.4 & 72.3 & 87.6 & 25.5 & 35.8 & 46.6 & 20.2 & 45.6 & 17.9 & 32.4 & 46.7 \\ & Pre-train & \bf85.1 & \bf75.2 & \bf89.1 & \bf28.8 & \bf51.8 & \bf59.1 & \bf22.7 & \bf56.4 & 19.6 & \bf45.1 & \bf53.3 \\ & AT & 76.8 & 64.5 & 83.4 & 15.5 & 13.1 & 53.5 & 11.8 & 39.3 & \bf22.7 & 39.3 & 42.0 \\ \cmidrule{2-13} \multirow{2}*{SwinL} & Pre-train & {\color{red}\bf86.3} & {\color{red}\bf77.0} & {\color{red}\bf89.6} & {\color{red}\bf31.6} & {\color{red}\bf61.0} & \bf63.6 & {\color{red}\bf26.4} & \bf61.3 & 23.4 & \bf48.8 & \bf56.9 \\ & AT & 78.7 & 66.9 & 84.9 & 18.2 & 18.1 & 57.3 & 11.6 & 43.4 & \bf25.2 & 42.9 & 44.7 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} \section{Evaluation Results}\label{sec4} \begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{inc.pdf} \caption{Robustness curves of classification accuracy vs. severity on ImageNet-C. \textbf{(a)} Robustness curves of normally trained models with different architectures, including VGG19, ResNet152, DenseNet161, ConvNextL, ViTL, XciTL, T2T24, and SwinB. \textbf{(b)} Robustness curves of pre-trained models, including 21K-pre-training, MOCOv3, and MAE. \textbf{(c)} Robustness curves of adversarially trained models compared with normally trained models. The model performance is highly consistent under different severities. } \label{fig:rb-inc} \end{figure*} We first present the evaluation of natural robustness in Sec.~\ref{sec:4.1}. Then, the adversarial robustness under white-box and black-box attacks is shown in Sec.~\ref{sec:4.2} and Sec.~\ref{sec:4.3}, respectively. Finally, an interpretability analysis of adversarial training in the frequency domain is provided in Sec.~\ref{sec:4.4}. \subsection{Natural Robustness Evaluation}\label{sec:4.1} In this section, we evaluate the natural robustness of the 55 models introduced in Sec.~\ref{sec:3-2-1} under 3 IID and 7 OOD datasets. We show the results of CNNs and Transformers in Table~\ref{tab:ood-cnn} and Table~\ref{tab:ood-transformer}, respectively. Note that all the evaluation results are the Top-1 accuracies, except for ImageNet-C, which uses $\mathrm{mCE}$ as the evaluation metric (lower is better). For consistency, we use $1-\mathrm{mCE}$ as the evaluation metric for ImageNet-C. 
Besides, on ImageNet-C, we further show the robustness curve of classification accuracy vs. corruption severity in Fig.~\ref{fig:rb-inc}. Detailed results of ImageNet-C robustness curves of all models will be shown in Appendix B. Based on the experimental results, we have the following observations. \textbf{Model architecture.} We first compare the natural robustness of different architectures. It is obvious that the natural robustness of Transformers is better than that of most CNNs, except for ConvNexts. ConvNextL based on pre-training achieves 57.4\% natural robustness, which exactly matches the performance of the best Transformer model ViTL. Besides, the normally trained ConvNextL achieves 51.0\% natural robustness, which is better than all Transformers based on normal training. The results demonstrate that CNNs can achieve comparable or even better natural robustness compared with Transformers, which is different from the conclusions of previous works \citep{bai2021transformers,shao2021adversarial,paul2022vision}. Therefore, we think that the key factors of modern architectures, including patchified input images, enlarged kernel size, and reduced activation and normalization layers, are essential to natural robustness rather than the self-attention mechanism \citep{wang2022can}. Moreover, a larger model usually leads to better natural robustness within the same architecture family. For example, based on normal training, the natural robustness increases from 34.1\% of ResNet50 to 36.8\% of ResNet101, and finally to 38.0\% of ResNet-152. However, this improvement is not very significant. And for some other architectures, a larger model does not necessarily improve robustness (e.g., 35.8\% of ViTB and 35.4\% of ViTL). From Fig.~\ref{fig:rb-inc}(a), it can be observed that the corruption robustness of typical models is consistent across different severities. \textbf{Pre-training on ImageNet-21K.} From the results in Table~\ref{tab:ood-cnn} and Table~\ref{tab:ood-transformer}, we can see that pre-training on ImageNet-21K significantly improves natural robustness of CNNs and Transformers. For example, pre-training improves the performance of ViTL from 35.4\% to 57.4\%, exhibiting a large margin. We think using more training data prevents models from overfitting to a certain data distribution, and promotes the natural robustness. \textbf{Self-supervised learning (SSL).} MOCOv3 achieves slightly better performance than normal training as shown in Table~\ref{tab:ood-cnn}, and MAE improves the natural robustness significantly based on ViT, as shown in Table~\ref{tab:ood-transformer}. It proves that SSL enables models to learn informative representations, which contain fewer spurious cues corresponding to class labels. Thus SSL has better natural robustness. However, the natural robustness of SSL is inferior to pre-training on ImageNet-21K, demonstrating the advantage of using more data. A promising method is performing SSL on larger-scale datasets to further improve natural robustness. \begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{rb-nonadv.pdf} \caption{Robustness curves of classification accuracy vs. perturbation budget under AutoAttack. \textbf{(a)} Robustness curves of normally trained models with different architectures, including VGG19, ResNet152, DenseNet161, ConvNextL, ViTL, XciTL, T2T24, and SwinB. \textbf{(b)} Robustness curves of pre-trained models on ImageNet-21K. \textbf{(c)} Robustness curves of self-supervised pre-trained models, including MOCO and MAE. 
ViT exhibits better adversarial robustness than others.} \label{fig:rb-nonadv} \end{figure*}

\textbf{Adversarial training (AT).} Despite its effectiveness for adversarial robustness, we observe that almost all AT models have significant performance drops in natural robustness, except for ViT. Generally speaking, this is because there is a large shift between adversarial noise and natural noise. For ViT, we conjecture that adversarial examples serve as a form of data augmentation and compensate for the weak inductive bias of ViT, leading to improved performance. However, our finding somewhat contradicts previous ones \citep{tsipras2018robustness,zhang2019interpreting,ilyas2019adversarial}, which hold that AT learns robust and shape-biased features invariant to spurious changes, such as texture and background changes. Based on those findings, one would expect AT models to have better robustness to natural distribution shifts. In fact, AT models do perform better on some OOD datasets with different image styles (e.g., ImageNet-R, Stylized-ImageNet, and ImageNet-Sketch), corroborating the previous arguments that AT models learn a more shape-biased representation. However, this representation does not generalize well to other real-world distribution shifts, such as viewpoint changes and uncommon objects. Therefore, it remains a challenge to learn generalizable representations that are not only biased towards shape but also invariant to real-world changes of objects.

\textbf{Comparison of different OOD datasets.} We find that the performance on several OOD datasets (e.g., ObjectNet, ImageNet-C) is highly consistent with the clean classification accuracy, which is reasonable due to the consistent performance drops. Given two models, the probability that the model with lower accuracy classifies a sample correctly while the other one with higher accuracy classifies it incorrectly is small, if the distribution shift is relatively small \citep{mania2020classifier}. Besides, ObjectNet, ImageNet-V, and Stylized-ImageNet are harder datasets than the others, since even the best models achieve less than 30\% accuracy on them. This demonstrates that the models are more vulnerable to viewpoint changes and style transfer.

\subsection{White-box Adversarial Robustness Evaluation}\label{sec:4.2}
In this section, we evaluate the adversarial robustness of all models under white-box attacks with $\ell_\infty$-norm bounded perturbations. The experiments on $\ell_2$-norm bounded attacks are provided in Appendix D. The experiments are conducted on 1,000 randomly sampled images from the ImageNet validation set due to the tremendous computational cost of evaluating the full set.

\subsubsection{Results on Normally Trained Models}
We first evaluate the adversarial robustness of normally trained models and pre-trained models, while the results of adversarially trained models are presented in Sec.~\ref{sec:4.2.2}. We adopt AutoAttack as the basic attack method for evaluation, which is a powerful white-box attack widely used for benchmarking adversarial robustness. We provide an extensive evaluation on typical model architectures and two pre-training paradigms, including pre-training on ImageNet-21K and self-supervised learning. The detailed robustness curves are shown in Fig.~\ref{fig:rb-nonadv}. We also provide the full results of all models in Appendix C.
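These robustness curves are obtained with the binary-search procedure described in Sec.~\ref{sec:3-1}: for each sample, we search for the minimum perturbation budget at which the attack succeeds, and the curve reports, for every budget $\epsilon$, the fraction of samples whose minimum budget exceeds $\epsilon$. A minimal sketch is given below; the \texttt{attack\_succeeds} oracle (e.g., running AutoAttack at a given budget and checking misclassification) is a hypothetical placeholder.
\begin{verbatim}
import numpy as np

def min_budget(model, x, y, attack_succeeds, eps_max=16/255, tol=0.25/255):
    """Binary search for the smallest eps at which the attack fools the model on (x, y)."""
    if not attack_succeeds(model, x, y, eps_max):
        return float("inf")          # robust up to the largest budget tested
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if attack_succeeds(model, x, y, mid):
            hi = mid                 # attack succeeds: the minimum budget is <= mid
        else:
            lo = mid                 # attack fails: the minimum budget is > mid
    return hi

def robustness_curve(min_budgets, budgets):
    """Accuracy vs. budget: fraction of samples whose minimum budget exceeds eps."""
    min_budgets = np.asarray(min_budgets)
    return [float((min_budgets > eps).mean()) for eps in budgets]
\end{verbatim}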
From Fig.~\ref{fig:rb-nonadv}(a), some models with higher clean accuracy, such as SwinB, XciTL, and T2T24, tend to drop more rapidly as the perturbation budget increases. This suggests that the decision boundaries of these models are relatively close to the data points. Therefore, the boundary error introduced in \cite{zhang2019theoretically} will increase more rapidly. Besides, ViT is the most resistant architecture to large perturbations. This is due to the fact that ViT has a smoother loss landscape under input perturbations as discussed in \cite{paul2022vision}. The effectiveness of pre-trained models on ImageNet-21K for adversarial robustness is inconsistent, e.g., ViT has better robustness but Swin has worse robustness with pre-training, as shown in Fig.~\ref{fig:rb-nonadv}(b). This could be due to the fact that ViT has a weaker inductive bias, so the pre-trained models on large-scale datasets learn fewer spurious features, which benefit the adversarial robustness. For self-supervised learning methods, MAE has a negative impact on the adversarial robustness especially under large perturbation budgets, but MOCOv3 improves adversarial robustness compared with the normally trained models, as shown in Fig.~\ref{fig:rb-nonadv}(c). We believe it is caused by the different learning methods (i.e., contrastive learning of MOCOv3 is more resistant to image noises). \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{rb-adv.pdf} \caption{Robustness curves of adversarially trained models with different network architectures, including ResNets, ConvNexts, ViTs, XciTs, and Swins under AutoAttack. ConvNextL performs best in CNNs, and SwinL performs best in Transformers.} \label{fig:rb-adv-arch-ens} \end{figure} \begin{table*}[t]\footnotesize \caption{White-box robustness (\%) with different architectures. All the results are obtained by adversarial attacks with $\epsilon=4/255$. ConvNextL performs best in CNNs, and SwinL performs best in Transformers.} \label{tab:rb-adv-arch} \begin{center} \begin{minipage}{0.75\textwidth} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llcccc@{\extracolsep{\fill}}} \toprule% Architecture & Method & Clean Acc & FGSM & PGD100 & AutoAttack \\ \midrule \multirow{3}*{ResNet50} & AT & 67.0 & 44.5 & 38.7 & 34.1 \\ & RB & 65.5 & 46.7 & 42.1 & 36.9 \\ & RL & 63.4 & 42.7 & 34.3 & 30.9 \\ \cmidrule{2-6} ResNet101 & AT & 71.0 & 51.3 & 46.5 & 42.2 \\ \cmidrule{2-6} \multirow{2}*{ResNet152} & AT & 72.4 & 54.6 & 49.6 & 46.7 \\ & FD & 67.0 & 51.6 & 48.8 & 41.0 \\ \cmidrule{2-6} \multirow{2}*{Wide-ResNet50} & AT & 70.5 & 51.8 & 44.6 & 39.3 \\ & RB & 70.2 & 51.1 & 45.2 & 41.7 \\ \midrule ConvNextS&AT&77.3&60.3&56.9&54.3\\ \cmidrule{2-6} ConvNextB&AT&77.2&62.2&59.0&56.8\\ \cmidrule{2-6} ConvNextL&AT&\bf78.8&\bf63.9&\bf61.7&\bf60.1\\ \midrule ViTS & AT & 70.7 & 51.3 & 47.5 & 43.7 \\ \cmidrule{2-6} ViTB & AT & 74.7 & 55.9 & 52.2 & 49.7 \\ \midrule XciTS & RB & 74.5 & 51.6 & 46.0 & 43.1 \\ \cmidrule{2-6} XciTM & RB & 75.3 & 54.8 & 50.2 & 47.0 \\ \cmidrule{2-6} XciTL & RB & 75.8 & 55.7 & 57.1 & 49.6 \\ \midrule SwinS & AT & 76.6 & 61.5 & 58.4 & 55.6 \\ \cmidrule{2-6} SwinB & AT & 76.6 & 63.2 & 60.2 & 57.3 \\ \cmidrule{2-6} SwinL & AT & \bf79.7 & \bf65.9 & \bf63.9 & \bf62.3 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} However, the adversarial robustness of normally trained models, including pre-trained ones, is relatively worse than adversarially trained models. Therefore, the conclusions are only valid for normal training. 
\subsubsection{Results on Adversarially Trained Models}\label{sec:4.2.2} We then evaluate the adversarial robustness of AT models. AT is such an effective method to improve robustness that most of the state-of-the-art robust models under white-box attacks are obtained by AT. It is shown that AT can be used to explore the upper limit of model robustness. To comprehensively show the performance of AT models, we also exhibit the robustness curves of several models in Fig.~\ref{fig:rb-adv-arch-ens}. More detailed results of all models are provided in Appendix C. Besides, we also show the robustness under various white-box attacks with $\epsilon=4/255$ in Table~\ref{tab:rb-adv-arch}. Based on the experimental results, we have the following observations. \begin{table*}[t]\footnotesize \caption{Adversarial robustness (\%) of different training tricks in AT. Most training tricks, including RandAugment (RA), Mixup, label smoothing (LS), weight decay (WD) and EMA, are studied on SwinS. The pre-training methods, including 21K-pre-training and SimMIM, are studied on SwinB. The pre-training method CLIP is studied on ViTB. A combination of RA, Mixup, LS (0.1), and EMA performs best in SwinS. The models pre-trained with ImageNet-21K and SimMIM have similar performance as the model trained from scratch. However, the model pre-trained with CLIP has a performance drop compared with the model trained from scratch.} \label{tab:rb-adv-strategy} \begin{center} \begin{minipage}{0.98\textwidth} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ccccccccccc@{\extracolsep{\fill}}} \toprule% \multirow{3}*{Model} & \multicolumn{6}{c}{Strategy} & \multirow{3}*{Clean Acc} & \multirow{3}*{FGSM} & \multirow{3}*{PGD100} & \multirow{3}*{AutoAttack} \\ \cmidrule{2-7} & RA & Mixup & LS & WD & EMA & Pre-training \\ \midrule \multirow{8}*{SwinS}& \xmark & \xmark & 0.0 & 0.0 & \xmark &\xmark &73.9 & 53.8 &50.3 &48.7\\ &\cmark &\xmark &0.0 &0.0 &\xmark &\xmark & 74.5 &55.7 &52.0&49.7\\ &\cmark &\cmark &0.0 &0.0 &\xmark &\xmark & 76.4 &60.4 &56.8&54.9\\ &\cmark &\cmark &0.1 &0.0 &\xmark & \xmark& 76.9 &61.4 &57.9&55.4\\ &\cmark &\cmark &0.1 &0.01 &\xmark & \xmark& \bf77.1 & 60.5 &57.7&54.7\\ &\cmark &\cmark &0.1 &0.05 &\xmark & \xmark& 76.7 & 60.8 &57.7&54.4\\ &\cmark &\cmark &0.1 &0.1 &\xmark & \xmark& 76.7 &58.3 &56.3 &53.7\\ &\cmark &\cmark &0.1 &0.0 &\cmark & \xmark & 76.6 & \bf61.5 & \bf58.4& \bf55.6 \\ \midrule \multirow{3}*{SwinB} &\cmark &\cmark &0.1 &0.0 &\cmark & \xmark & 76.6 &73.4 &\bf60.2 &57.3 \\ &\cmark &\cmark &0.1 &0.0 &\cmark & 21K & \bf77.5 &\bf74.8 &59.2&\bf57.4\\ &\cmark &\cmark &0.1 &0.0 &\cmark & SimMIM & 77.2 &73.6 &59.1&55.9\\ \midrule \multirow{2}*{ViTB} &\cmark &\cmark &0.1 &0.0 &\cmark & \xmark & \bf74.7 & \bf69.2 & \bf52.2 & \bf49.7 \\ &\cmark &\cmark &0.1 &0.0 &\cmark & CLIP & 69.0 & 63.3 & 46.3 & 42.8\\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*} From Fig.~\ref{fig:rb-adv-arch-ens}, it can be seen that the robustness trends of different architectures under different perturbation budgets are consistent. First, the most robust Transformer model is SwinL, which achieves 62.3\% robust accuracy under AutoAttack with $\epsilon=4/255$. The most robust CNN model is ConvNextL with 60.1\% robust accuracy under the same setting. Both results are clearly state-of-the-art compared with XciTL in RobustBench (with 49.6\% robust accuracy under AutoAttack). 
In contrast to previous debates on whether Transformers have better adversarial robustness than CNNs \citep{tang2021robustart,aldahdooh2021reveal,shao2021adversarial}, we have found that CNNs can achieve competitive (albeit slightly worse) adversarial robustness as Transformers. Therefore, we believe that modern architectures, including patchified input images, enlarged kernel size, and reduced activation and normalization layers, play the most important role in adversarial robustness. Second, there is a trade-off in the perturbation budgets. ResNet152-FD was trained on adversarial examples with $\epsilon=16/255$, resulting in lower robust accuracy (41.0\%) under AutoAttack with $\epsilon=4/255$ than ResNet152-AT (46.7\%). However, the robustness of ResNet-FD under a larger budget (e.g., $\epsilon=8/255$) is better than that of ResNet152-AT. This suggests that we still need robustness curves for a comprehensive evaluation of robustness, rather than a single robust accuracy under a fixed perturbation budget. Third, adversarial robustness is improved with a larger model size. A larger model size means a larger model capacity, which has a positive effect on adversarial robustness. Moreover, the depth design of large models, such as SwinL (with a depth of 2-2-18-2) and ConvNextL (with a depth of 3-3-27-3) is consistent with the conclusion in \cite{huang2021exploring} that reducing capacity in the last stage benefits the adversarial robustness. \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{ablation.pdf} \caption{Robustness curves of adversarially trained models with different training tricks. \textbf{(a)} The curves of some training tricks, including weight decay (W), label smoothing (L), RandAugment (R), Mixup (M), and EMA (E). SwinS\_Base is trained without any of the tricks. The rest of the models are trained with the tricks indicated by the capital initial letter and appended to the name of SwinS (e.g., SwinS\_RMLW001 is trained with RandAugment, Mixup, label smoothing and weight decay 0.01). (b) The curves of pre-training methods including 21K-pre-training and SimMIM of SwinB, and CLIP of ViTB. The robustness curves are nearly parallel to each other, and the robustness is consistent with the results in Table~\ref{tab:rb-adv-strategy}.} \label{fig:rb-adv-strategy} \end{figure*} \subsubsection{Ablation Study on Training Tricks in AT} In this section, we examine the training tricks in large-scale AT. Since some of the recently proposed tricks are not suitable for the previously proposed networks, such as ResNets, we choose SwinS, SwinB and ViTB (intended for CLIP) as the backbone networks. The detailed results are shown in Table~\ref{tab:rb-adv-strategy}. To comprehensively show the robustness under all perturbation budgets, we also show the robustness curves of these AT models in Fig.~\ref{fig:rb-adv-strategy}. In the ablation study, we adopt the control variate method by changing one type of tricks each time, while leaving the other settings unchanged. We first train a baseline model with none of the strategies added, which obtains 48.7\% robust accuracy under AutoAttack. For data augmentations, the adversarial robustness improves by 1.0\% with RandAugment and by another 5.2\% with Mixup. AT easily overfits to certain attack patterns \citep{pang2020bag}, so data augmentations can effectively reduce overfitting and improve the adversarial robustness. For label smoothing, the adversarial robustness improves by 0.5\% with appropriate label smoothing. 
Compared with hard labels, soft labels prevent models from making over-confident decisions, which improves robustness in the presence of noisy inputs and labels \citep{stutz2020confidence,dong2022exploring}. For weight decay, we find that a larger weight decay degrades the robustness, which differs from previous studies \citep{pang2020bag}. We think that adequate data augmentation and advanced architectural components are already sufficient to avoid overfitting, so a large weight decay instead harms the learning ability of the model. For EMA, the robust accuracy improves by 0.2\%. EMA smooths the update of model parameters and avoids fluctuations caused by mini-batch training, which makes the model more robust on test data.

\begin{figure}[!t] \centering \includegraphics[width=0.98\linewidth]{time.pdf} \caption{Training curves of vanilla AT, 21K-pre-training and SimMIM pre-training on SwinB, and CLIP pre-training on ViTB. Pre-training on the ImageNet-21K dataset accelerates AT. The model pre-trained with CLIP is reconstructed during the fine-tuning process, leading to catastrophic performance drops.} \label{fig:rb-time} \end{figure}

\begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{sns_heatmap_normal_vmi.pdf} \caption{Black-box transferability across normally trained, 21K pre-trained and self-supervised pre-trained models. ViTs are generally more robust than other models. ConvNexts act as the best surrogate models. Transformers generally have better black-box robustness than CNNs. The transferability of adversarial examples between CNNs and Transformers is relatively lower than that within similar architectures.} \label{fig:transfer-normal-vmi} \end{figure*}

We also conduct experiments on several pre-training paradigms, including ImageNet-21K pre-training, SimMIM, and CLIP. We adopt the pre-trained weights as the initialization of AT. The main advantage of (self-)supervised pre-training is that it can reduce the convergence time of model training. Therefore, we additionally provide the detailed convergence process in Fig.~\ref{fig:rb-time}, in which the models are tested by the PGD-10 attack with $\epsilon=4/255$ and step size 1/255. For 21K-pre-training, it can be seen that the fine-tuning process is clearly accelerated and the fine-tuned model at the 100th epoch has robustness competitive with the model trained from scratch. This indicates that 21K-pre-training learns useful representations that benefit the fine-tuning process. However, the accuracy on clean samples initially decreases and then increases to a lower level, which means that the fine-tuning process still suffers from catastrophic forgetting. For SimMIM, it can be seen that the robust accuracy curves of the models fine-tuned from SimMIM are close to the curves of the models trained from scratch. This indicates that the representations learned by SimMIM are not beneficial to downstream AT. For CLIP, we observe a drop in performance when using its image encoder as the initialization. Moreover, by analyzing the loss and accuracy during model training, it appears that the CLIP model is largely reconstructed during training, so the final model does not adequately fit the adversarial examples. This is because CLIP models learn from textual information; once the image encoder is separated from the original architecture, it needs to be substantially re-learned to match adversarial training.
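For concreteness, a minimal PyTorch-style sketch of the Mixup and EMA updates studied in this ablation is given below, following the equations in Sec.~\ref{sec:3.2.2}. Sampling $\lambda$ from a Beta distribution and averaging only the model parameters (ignoring buffers) are simplifying assumptions; the integration into the full adversarial training pipeline is omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def mixup(x, y, num_classes, alpha=0.8):
    """Blend two random samples and their one-hot labels with ratio lambda."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    y_onehot = F.one_hot(y, num_classes).float()
    x_tilde = lam * x + (1 - lam) * x[perm]
    y_tilde = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_tilde, y_tilde

@torch.no_grad()
def ema_update(ema_model, model, beta=0.9998):
    """Exponential moving average: w_ema <- beta * w_ema + (1 - beta) * w."""
    for w_ema, w in zip(ema_model.parameters(), model.parameters()):
        w_ema.mul_(beta).add_(w, alpha=1 - beta)
\end{verbatim}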
\subsection{Black-box Adversarial Robustness Evaluation}\label{sec:4.3}
In this section, we evaluate the adversarial robustness of normally trained and adversarially trained models under the black-box attack setting.

\subsubsection{Results on Normally Trained Models}
Here, we use transferability heatmaps instead of robustness curves to illustrate the robustness of models against black-box attacks. We use VMI-FGSM as the black-box attack method. The attack transferability heatmap of the normally trained models is shown in Fig.~\ref{fig:transfer-normal-vmi}, in which the source models are the surrogates attacked by VMI-FGSM and the target models are the models tested on the resulting adversarial examples. Additional results of different adversarial attacks are shown in Appendix~E. From the results, we have the following findings. First, ViTs are generally more robust than other models under black-box attacks, which is consistent with the white-box results, i.e., among normally trained models, ViT is more resistant to adversarial perturbations. Second, ConvNext models act as the best surrogate models: the adversarial examples generated against them achieve much higher attack success rates than those generated against other surrogates. This suggests that ConvNext gradients carry information relevant to both the convolutions in CNNs and the architectural designs of Transformers. Third, similar to white-box robustness, Transformers generally perform much better against transfer-based attacks than CNNs. Fourth, models of one architecture can, to some extent, defend against perturbations generated from models of different architectures, due to the significant difference in gradients between architectures.

\begin{figure}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_adv_vmi.pdf} \caption{Black-box transferability across adversarially trained models. The models with higher white-box robustness generally have better black-box robustness.} \label{fig:transfer-adv-vmi} \end{figure}

\subsubsection{Results on Adversarially Trained Models}
We then evaluate the robustness of the adversarially trained models under transfer-based black-box attacks. The detailed results of VMI-FGSM are shown in Fig.~\ref{fig:transfer-adv-vmi}. Additional results of different adversarial attacks are shown in Appendix E. From the results, we have the following findings. First, the robustness to transfer-based attacks is significantly improved by AT, and models with higher white-box robustness also perform better against black-box attacks. Second, ConvNexts have the best robustness under transfer-based attacks among CNNs, and Swins among Transformers, which is highly consistent with the white-box results.

\subsection{Relationship Between Frequency Bias and Robustness} \label{sec:4.4}
\renewcommand{\arraystretch}{1.0}
\begin{table}[t]\footnotesize \begin{center} \caption{Frequency bias $f_{bias}$ in Eq.~\eqref{eq:frenq} of normally trained and adversarially trained models, including ResNet, ConvNext, ViT, XciT and Swin.
Adversarially trained models have a lower frequency bias than the normally trained ones.} \label{tab:fc-bias}%
\begin{tabular}{@{}llcc@{}} \toprule Architecture & Model & Normal & AT \\ \midrule \multirow{3}*{ResNet} &ResNet50 & 32.1& 22.8 \\ &ResNet101 & 30.0 & 24.9 \\ &ResNet152 & 28.3 & 23.3 \\ \midrule \multirow{3}*{ConvNext} &ConvNextS & 29.9& 23.9 \\ &ConvNextB & 28.1 & 22.5 \\ &ConvNextL & 26.7 & 21.9 \\ \midrule \multirow{2}*{ViT} & ViTS & 29.1 & 22.5 \\ & ViTB & 28.5 & 21.3 \\ \midrule \multirow{3}*{XciT} & XciTS & 28.3 & 23.8 \\ & XciTM & 28.0 & 24.6 \\ & XciTL & 28.6 & 25.7 \\ \midrule \multirow{3}*{Swin} & SwinS & 30.5 & 21.6 \\ & SwinB & 28.1 & 21.6 \\ & SwinL & 24.9 & 20.8 \\ \botrule \end{tabular} \end{center} \end{table}

\begin{figure*}[t] \centering \includegraphics[width=0.93\linewidth]{fc.pdf} \caption{The ACC-LPB curves of models with architectures ResNet, ViT, XciT and Swin. The accuracy of each architecture is normalized to 0--1. A higher curve indicates a lower frequency bias of the corresponding model. Adversarially-trained models generally appear above normally-trained ones.} \label{fig:fc-curve} \end{figure*}

In this section, we analyze the robustness of the models from the perspective of the frequency domain. We use the ACC-LPB curve and the frequency bias metric introduced in Sec.~\ref{sec:3.2.3}. We mainly compare normally trained and adversarially trained models to investigate the frequency bias of AT. The ACC-LPB curves are shown in Fig.~\ref{fig:fc-curve}. For the convenience of comparison, we normalize the accuracy to the range of 0--1. We also calculate the frequency bias of these two types of models, which is shown in Table~\ref{tab:fc-bias}. A higher LPB means that more high-frequency information is retained in the images, which increases the accuracy. A region with a higher growth rate indicates that the model relies more on the information within that frequency band. Compared with the curves of the normally trained models, the curves of the AT models increase more rapidly within the lower LPB region and become flatter within the higher LPB region. This means that the AT models rely more on low-frequency information. The result is consistent with the frequency bias reported in Table~\ref{tab:fc-bias}, which shows that the AT models have a lower frequency bias than normal ones. Low-frequency information commonly contains the shapes of objects in an image, so we conclude that AT forces models to learn more shape-biased features. This conclusion is also consistent with the natural robustness results in Table~\ref{tab:ood-cnn} and Table~\ref{tab:ood-transformer}. Compared with normally trained models, the performance of AT models is improved on some specific OOD datasets, such as ImageNet-R, Stylized-ImageNet and ImageNet-Sketch. The images in these datasets have lost most of their detailed high-frequency textures due to style transfer. In this paper, we utilize the frequency analysis only as a tool to explain the frequency bias of AT; developing it into a regularization method for better adversarial robustness is left for future work.
We performed large-scale experiments on CNN and Transformer models trained with four paradigms, including normal training, pre-training on large-scale datasets, self-supervised learning, and adversarial training. We evaluated the natural robustness on 10 datasets and also reported the robustness curves under the state-of-the-art AutoAttack. Besides, we conducted ablation studies on the training tricks in large-scale adversarial training and provided an optimized setting. Finally, we provided a frequency perspective to show the frequency bias of adversarially trained models. Based on the results, we highlight some important findings.

First, there exists a trade-off between adversarial and natural robustness given a model architecture. Typically, adversarial training significantly degrades the natural robustness on most OOD datasets, although the adversarial robustness is improved. The trade-off we identify complements the well-known accuracy-robustness trade-off \citep{zhang2019theoretically}, while contradicting some previous findings \citep{tsipras2018robustness,zhang2019interpreting,ilyas2019adversarial}. As discussed in Sec.~\ref{sec:4.1}, we think that although adversarial training enables the learning of more shape-biased and human-interpretable representations, such representations do not generalize well to real-world distribution shifts, especially changes in viewpoint and style transfer. Our results suggest that adversarial training is not a universally applicable solution for improving model robustness, and how to achieve natural and adversarial robustness in tandem remains an open problem.

Second, although Transformers outperform most CNNs in terms of natural robustness and adversarial robustness, the more advanced ConvNext models achieve comparable robustness to Transformers. Specifically, ConvNext achieves natural robustness very similar to, and adversarial robustness only slightly worse than, the best Transformer. In contrast to previous findings that Transformers achieve superior robustness over CNNs \citep{bhojanapalli2021understanding,bai2021transformers,tang2021robustart,aldahdooh2021reveal,shao2021adversarial}, we find that modern architectural designs, including patchified input images, enlarged kernel sizes, and reduced activation and normalization layers, rather than the self-attention mechanism, are essential to robustness.

Third, pre-training on large-scale datasets (e.g., ImageNet-21K) or by self-supervised learning significantly improves natural robustness. More training data can prevent models from overfitting to the data distribution and lead to better generalization performance, while self-supervision can make models learn predictive features with fewer spurious cues, which also improves natural robustness. Besides, we find that models pre-trained on larger datasets can serve as better initializations that speed up the fine-tuning process of adversarial training. Starting from models normally pre-trained on ImageNet-21K, we can achieve state-of-the-art adversarial robustness with larger model sizes without incurring huge computational costs.

Finally, adversarial training generally suffers from the problem of overfitting \citep{rice2020overfitting}. Therefore, some training tricks, such as data augmentation, regularization, and weight averaging, can improve adversarial robustness by mitigating overfitting in adversarial training.
Based on the ablation studies, we provide a recipe for training robust models on ImageNet with appropriately designed tricks.
\section*{Statements and Declarations} \begin{itemize} \item Competing interests. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \end{itemize} \renewcommand*{\bibfont}{\small}
\section{Experimental settings}\label{app:a} In this part, we provide the detailed settings for both adversarial attacks and model training in our paper. For adversarial attacks, we utilize FGSM~\citep{goodfellow2014explaining}, PGD100~\citep{Madry2017Towards}, and AutoAttack~\citep{croce2020reliable} for white-box robustness evaluation, and MIM~\citep{dong2018boosting}, DIM~\citep{xie2019improving}, TIM~\citep{dong2019evading}, SI-NI-FGSM~\citep{lin2019nesterov}, and VMI-FGSM~\citep{wang2021enhancing} for black-box robustness evaluation. For $L_\infty$ white-box attacks, the attack epsilon is 4/255. For FGSM, the attack step size is 4/255. For PGD100, the number of attack steps is 100, the attack step size is 1/255, and the attack starts from a random initialization. For AutoAttack, we adopt the standard version. For $L_\infty$ black-box attacks, the attack epsilon is 8/255. For MIM, the number of attack steps is 20 and the decay factor is 1.0. For DIM, the number of attack steps is 20 and the decay factor is 1.0; the input images are augmented by random resize and crop with probability 0.7. For TIM, the number of attack steps is 20 and the decay factor is 1.0; the kernel filter is a Gaussian kernel with size 15 and sigma 3. For SI-NI-FGSM, the number of attack steps is 20, the decay factor is 1.0, and the scale factor is 5. For VMI-FGSM, the number of attack steps is 20, the parameter beta is 1.5, and the sample number is 10. For $L_2$ white-box attacks, the attack epsilon is 0.5. For PGD100, the number of attack steps is 100, the attack step size is 1.0, and the attack starts from a random initialization. For AutoAttack, we adopt the standard version. For adversarial training, we generate adversarial examples by the PGD-3 attack, in which the attack epsilon is $\epsilon=4/255$, the number of attack steps is $\mathrm{step}=3$, and the attack step size is calculated by $\mathrm{size}=2\epsilon/\mathrm{step}$. The basic training setting for Transformers follows the setting used in Swin Transformers \citep{liu2021swin}. We adopt the AdamW~\citep{loshchilov2017decoupled} optimizer with momentum 0.9 and weight decay 0.0. The learning rate schedule uses 300 epochs (when training without pre-trained models), a cosine scheduler, learning rate 5e-4, 5 warm-up epochs, and decay rate 0.1. The input images are first center cropped with crop percentage $87.5\%$, resized to image size 224, and normalized with mean $(0.485, 0.456, 0.406)$ and std $(0.229, 0.224, 0.225)$. The normalized images are then augmented by color jittering 0.4, RandAugment~\citep{cubuk2020randaugment} with parameter rand-m9-mstd0.5-inc1, Random Erasing with probability 0.25, and Mixup~\citep{zhang2017mixup}. We use label smoothing~\citep{szegedy2016rethinking} with parameter 0.1. No dropout is used in adversarial training. The parameter for EMA~\citep{bolme2009average} is 0.9998. The training setting for ConvNext~\citep{liu2022convnet} is the same as that for Transformers.
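To make the $L_\infty$ attack configuration above concrete, the following is a minimal sketch of the PGD procedure with the stated hyperparameters (epsilon 4/255, step size 1/255, 100 steps, random start). It assumes a PyTorch model that takes images in $[0,1]$ and returns logits; it is an illustrative sketch rather than our actual evaluation code. The PGD-3 attack used for adversarial training corresponds to three steps with step size $2\epsilon/\mathrm{step}$, as noted in the final comment.
\begin{verbatim}
import torch

def pgd_linf(model, x, y, eps=4/255, alpha=1/255, steps=100):
    """L_inf PGD with a random start (sketch of the PGD100 setting)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    # random start inside the eps-ball, clipped to the valid image range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend along the gradient sign, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

# PGD-3 used for adversarial training: step size 2 * eps / steps
# x_adv = pgd_linf(model, x, y, eps=4/255, alpha=2 * (4/255) / 3, steps=3)
\end{verbatim}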
However, the basic training setting for ResNet~\citep{he2015deep} is different from that for Transformers, because some of the training strategies are not suitable for ResNet. We adopt SGD with momentum 0.9 and weight decay 0.0. The learning rate schedule uses 95 epochs, a step scheduler, learning rate 0.2, 5 warm-up epochs, decay rate 0.1, and a decay interval of 30 epochs. The remaining settings are the same as those for Transformers. The batch size for adversarial training is 512.
\section{Results on ImageNet-C}\label{app:b} \begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{inc-app.pdf} \caption{Robust curves on ImageNet-C. (a) Robust curves of VGGs. (b) Robust curves of ResNets. (c) Robust curves of DenseNets. (d) Robust curves of ConvNexts. (e) Robust curves of ViTs. (f) Robust curves of XciTs. (g) Robust curves of T2Ts. (h) Robust curves of Swins. OOD robustness under different severity is generally positively correlated with clean accuracy, except for adversarially-trained models, which are more resistant to severe corruptions.} \label{fig:inc-app} \end{figure*}
We also provide detailed robust curves for all the basic models listed in Tab.~\textcolor{blue}{2}. In this section, the robust curves are organized by model architecture. Detailed results are shown in Fig.~\ref{fig:inc-app}. First, 21K pre-training and SSL greatly improve natural robustness. Pre-training on the 21K dataset prevents models from overfitting to a particular distribution, and SSL helps models learn representative features, which benefits natural robustness. Second, natural robustness improves with larger model size, since a larger model has a larger capacity. Third, the variation trends of normally-trained, 21K pre-trained, and self-supervised pre-trained models are similar. However, the robust curves of adversarially-trained models become flat at large severity. This means that adversarial training, to some degree, makes models resistant to more severe natural noise. This is because corruption noise with large severity damages texture information, while adversarial training forces models to focus on shape-biased features, which can overcome the disturbance of severe noise.
\section{Result for different models}\label{app:c} In this section, we provide detailed robust curves of all the basic models. The models are divided into two groups: naturally-trained models (including normally-trained, 21K pre-trained, and self-supervised pre-trained models) and adversarially-trained ones. The robust curves of naturally-trained models are shown in Fig.~\ref{fig:rb-normal-app}. \begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{rb-nonadv-app.pdf} \caption{Robust curves of naturally-trained models (including normally-trained, 21K pre-trained and self-supervised pre-trained models) with AutoAttack. (a) Robust curves of VGGs. (b) Robust curves of ResNets. (c) Robust curves of DenseNets. (d) Robust curves of ConvNexts. (e) Robust curves of ViTs. (f) Robust curves of XciTs. (g) Robust curves of T2Ts. (h) Robust curves of Swins. ViTL pre-trained with the 21K dataset is the most robust model.} \label{fig:rb-normal-app} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{rb-adv-app.pdf} \caption{Robust curves of adversarially-trained models with AutoAttack. (a) Robust curves of ResNets. (b) Robust curves of ConvNexts. (c) Robust curves of ViTs. (d) Robust curves of XciTs.
(e) Robust curves of Swins.} \label{fig:rb-adv-app} \end{figure*}
First, 21K pre-training improves adversarial robustness on ConvNexts and ViTs~\citep{dosovitskiy2021image}. However, the robustness of Swins decreases with 21K pre-training. Second, MAE~\citep{he2022masked} improves adversarial robustness on ViTs, and MOCOv3~\citep{chen2021empirical} also benefits adversarial robustness. Finally, model size is only weakly correlated with adversarial robustness. This is because normally-trained models have poor adversarial robustness, so their robust curves lie close to each other. The robust curves of adversarially-trained models are shown in Fig.~\ref{fig:rb-adv-app}. The robust curves of models with the same architecture are similar. First, ConvNext Large performs best among CNNs, and Swin Transformer Large performs best among Transformers. Both benefit from modern architectural designs, such as patchified input images, enlarged kernel size, and reduced activation and normalization layers~\citep{wang2022can}. Second, there exists a trade-off between different perturbation budgets. ResNet152\_FD is trained with epsilon 8/255, while the other models are trained with epsilon 4/255. This leads to the worse performance of ResNet152\_FD at perturbation budgets smaller than 4/255. Third, a larger model size generally means better adversarial robustness. With adversarial training, the robust curves of models with similar architectures become parallel to each other. In this sense, model capacity also affects adversarial robustness. Moreover, the depth design of large models, such as SwinL (with depth 2-2-18-2) and ConvNextL (with depth 3-3-27-3), is consistent with the conclusion in~\citep{https://doi.org/10.48550/arxiv.2110.03825} that reducing capacity at the last stage benefits adversarial robustness.
\section{White-box attacks}\label{app:d} Apart from the $L_\infty$ norm, we also provide white-box evaluation with $L_2$-norm adversarial examples. We only evaluate models with FGSM, PGD100, and AutoAttack. The epsilon for each attack is set to 0.5. The detailed results are shown in Tab.~\ref{tab:appl2}. The conclusions are similar to those from the $L_\infty$ attacks. ConvNextL is the most robust model among CNNs, and SwinL is the most robust model among Transformers. \begin{table*}[h]\small \begin{center} \begin{minipage}{0.8\textwidth} \caption{White-box robustness with different architectures. All the results are obtained by adversarial attacks with epsilon 0.5 and $L_2$ norm.
ConvNextL is the most robust model in CNNs and SwinL is the most robust model in Transformers.} \fontsize{7}{6}\selectfont \label{tab:appl2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llcccc@{\extracolsep{\fill}}} \toprule% Model & Source & Clean Acc &FGSM& PGD100 & AutoAttack \\ \midrule \multirow{3}*{ResNet50} & AT & 67.0 &59.7& 58.7 & 56.0 \\ & RB & 65.5 &57.2& 55.4 & 53.3 \\ & RL & 63.4 &56.9& 55.9 & 54.0 \\ \cmidrule{2-6} ResNet101 & AT & 71.0 &64.3& 63.1 & 61.2 \\ \cmidrule{2-6} \multirow{2}*{ResNet152} & AT & 72.4&66.0 & 64.2 & 62.0 \\ & FD & 62.0&61.2 & 60.2 & 57.2 \\ \cmidrule{2-6} \multirow{2}*{Wide-ResNet50} & AT & 70.5 &65.3& 64.1 & 61.9 \\ & RB & 70.2 &62.0& 60.8 & 59.0 \\ \midrule ConvNextS&AT&77.3&72.7&72.5&71.4\\ \cmidrule{2-6} ConvNextB&AT&77.2&72.6&72.3&71.4\\ \cmidrule{2-6} ConvNextL&AT&78.8&75.0&74.7&73.7\\ \midrule SwinS & AT & 75.8 &72.3&72.2 & 71.6 \\ \cmidrule{2-6} SwinB & AT & 76.6 &73.3& 73.1 & 72.6 \\ \cmidrule{2-6} SwinL & AT & 79.7 &75.4& 75.3 & 74.7 \\ \midrule ViTS & AT & 70.7 &65.4 & 65.1 & 63.6 \\ \cmidrule{2-6} ViTB & AT & 74.7 &70.3 & 70.2 & 69.5 \\ \midrule XciTS & RB & 74.5&70.7 & 70.5 & 69.8 \\ \cmidrule{2-6} XciTM & RB & 75.3&71.8 & 71.6 & 70.9 \\ \cmidrule{2-6} XciTL & RB & 75.8&71.7 & 71.6 & 71.4 \\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table*}
\section{Black-box attacks}\label{app:e} We also evaluate the black-box robustness of different models with additional adversarial attacks, namely MIM, DIM, TIM, and SI-NI-FGSM. The detailed results of naturally-trained models with these attacks are shown in Fig.~\ref{fig:black-natural-mim-app}-Fig.~\ref{fig:black-natural-si-app}, and those of adversarially-trained models are shown in Fig.~\ref{fig:transfer-adv-mim-app}-Fig.~\ref{fig:transfer-adv-si-app}. First, ViT models are generally more robust than other models under the black-box setting. Second, ConvNext models act as the best surrogate models. Third, naturally-trained models are generally less robust against DIM attacks, while adversarially-trained models are generally less robust against MIM attacks. This is because transferability is not the only decisive factor when attacking adversarially-trained models; the attack's strength on the surrogate model is also important. Fourth, models of one architecture are relatively more robust to adversarial examples generated by surrogate models of another architecture.
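To illustrate the transfer-based protocol behind these observations, the following is a minimal sketch (not our benchmark code) that crafts adversarial examples on a surrogate model with MIM under the stated $L_\infty$ black-box setting (epsilon 8/255, 20 steps, decay factor 1.0; the per-step size of epsilon/steps is an assumption) and then measures the target model's accuracy on them.
\begin{verbatim}
import torch

def mim_attack(surrogate, x, y, eps=8/255, steps=20, decay=1.0):
    """Momentum Iterative FGSM on the surrogate (black-box setting above)."""
    alpha = eps / steps  # assumed per-step size
    momentum = torch.zeros_like(x)
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(surrogate(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # accumulate L1-normalized gradients with momentum
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        momentum = decay * momentum + grad
        x_adv = x_adv.detach() + alpha * momentum.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def transfer_accuracy(target, surrogate, loader, device="cuda"):
    """Accuracy of the target model on examples crafted against the surrogate."""
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = mim_attack(surrogate, x, y)
        with torch.no_grad():
            pred = target(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
\end{verbatim}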
\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_normal_mim.pdf} \caption{Black-box transferability of naturally-trained models, including normally-trained, 21K pre-trained and self-supervised pre-trained models, with black-box attack MIM.} \label{fig:black-natural-mim-app} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_normal_dim.pdf} \caption{Black-box transferability of naturally-trained models, including normally-trained, 21K pre-trained and self-supervised pre-trained models, with black-box attack DIM.} \label{fig:black-natural-dim-app} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_normal_tim.pdf} \caption{Black-box transferability of naturally-trained models, including normally-trained, 21K pre-trained and self-supervised pre-trained models, with black-box attack TIM.} \label{fig:black-natural-tim-app} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_normal_si.pdf} \caption{Black-box transferability of naturally-trained models, including normally-trained, 21K pre-trained and self-supervised pre-trained models, with black-box attack SI-NI-FGSM.} \label{fig:black-natural-si-app} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_adv_mim.pdf} \caption{Black-box transferability of adversarially trained models with black-box attack MIM.} \label{fig:transfer-adv-mim-app} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_adv_dim.pdf} \caption{Black-box transferability of adversarially trained models with black-box attack DIM.} \label{fig:transfer-adv-dim-app} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_adv_tim.pdf} \caption{Black-box transferability of adversarially trained models with black-box attack TIM.} \label{fig:transfer-adv-tim-app} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{sns_heatmap_adv_si.pdf} \caption{Black-box transferability of adversarially trained models with black-box attack SI-NI-FGSM.} \label{fig:transfer-adv-si-app} \end{figure} \end{appendices} \renewcommand*{\bibfont}{\small}
{ "arxiv_id": "2302.14293", "language": "en", "timestamp": "2023-03-01T02:08:29", "url": "https://arxiv.org/abs/2302.14293", "yymm": "2302" }
\subsubsection{#1}} \newcommand{\RQ}[1]{\textit{RQ}${}_{\mathrm{#1}}$} \usepackage{framed} \setlength{\FrameSep}{5pt} \setlength{\OuterFrameSep}{2pt} \newcommand{\Conclusion}[1]{\begin{framed}\noindent #1\end{framed}} \newcommand{\cellcolor{lightgray}}{\cellcolor{lightgray}} \newcommand{\bf}{\bf} \begin{document} \title{Large-Scale Evaluation of Method-Level\\Bug Localization with FinerBench4BL} \author{% \IEEEauthorblockN{Shizuka Tsumita} \IEEEauthorblockA{% \textit{Tokyo Institute of Technology}\\ Tokyo 152--8550, Japan\\ tsumita@se.c.titech.ac.jp} \and \IEEEauthorblockN{Shinpei Hayashi} \IEEEauthorblockA{% \textit{Tokyo Institute of Technology}\\ Tokyo 152--8550, Japan\\ hayashi@c.titech.ac.jp} \and \IEEEauthorblockN{Sousuke Amasaki} \IEEEauthorblockA{% \textit{Okayama Prefectural University}\\ Okayama 700--0961, Japan\\ amasaki@cse.oka-pu.ac.jp} } \maketitle \thispagestyle{plain}
\begin{abstract} Bug localization is an important aspect of software maintenance because it can locate modules that need to be changed to fix a specific bug. Although method-level bug localization is helpful for developers, there are only a few tools and techniques for this task; moreover, there is no large-scale framework for their evaluation. In this paper, we present FinerBench4BL, an evaluation framework for method-level information retrieval-based bug localization techniques, and a comparative study using this framework. This framework was semi-automatically constructed from Bench4BL, a file-level bug localization evaluation framework, using a repository transformation approach. We converted the original file-level version repositories provided by Bench4BL into method-level repositories by repository transformation. Method-level data components such as oracle methods can also be automatically derived by applying the oracle generation approach via bug-commit linking in Bench4BL to the generated method repositories. Furthermore, we tailored existing file-level bug localization technique implementations to the method level. We created a framework for method-level evaluation by merging the generated dataset and implementations. The comparison results show that the method-level techniques decreased accuracy but improved debugging efficiency compared with the file-level techniques. \end{abstract} \begin{IEEEkeywords} bug localization, information retrieval, repository transformation \end{IEEEkeywords}
\section{Introduction}\label{s:introduction} Bug localization is the process of identifying the location of a bug. Developers must fix many bugs in large-scale software projects, and debugging software is difficult and time-consuming~\cite{whyProgramsFail}. As this can be a tedious task, numerous ideas have been proposed to automate the process using software development information. For instance, we can identify the locations of a bug using the description of bug reports, \textit{i.e.}, information retrieval~(IR)-based techniques~\cite{lukins-ist2010,nguyen-ase2011}, or execution traces, \textit{i.e.}, dynamic analysis~\cite{Wong2014BRTracer}. Several hybrid techniques that combine a base technique with additional information have been proposed to improve bug localization accuracy. For example, \BLT{BugLocator}~\cite{Zhou2012BugLocator} improved an IR-based technique with similar bug reports that were previously resolved. \BLT{BLUiR}~\cite{Saha2013BLUiR} incorporated structural information in addition to using similar bug reports.
\BLT{AmaLgam}~\cite{Wang2014AmaLgam} combined the version history, structural information, and similar bug reports. Most existing IR-based bug localization~(IRBL) techniques make recommendations at the file level. They output a ranked list of suspicious files; therefore, their evaluations were performed at the file level. This granularity may be too coarse for developers when debugging. In particular, IRBL approaches may recommend large files with more than 500 lines of code. Debugging effort could be reduced if code fragments to be fixed were recommended at the method level. However, in the literature, there are only a few method-level IRBL techniques and implementations~\cite{Razzaq2021Nsift,Zhang2019Fine,Youm2017BLIA}. To the best of our knowledge, no study has performed a comprehensive comparison and evaluation of method-level IRBL techniques; only plain IR techniques have been compared~\cite{Chakkrit2018IR}. In addition, no publicly available evaluation framework enables comparison at the method level, and we have little knowledge of method-level IRBL approaches. Therefore, in this study, we propose FinerBench4BL, an evaluation framework for method-level bug localization techniques. This framework is based on Bench4BL~\cite{Lee2018Bench4BL}, an existing evaluation framework for file-level bug localization techniques. We built a method-level bug localization dataset by applying a repository transformation to the repositories in the Bench4BL dataset and prepared method-level IRBL implementations by modifying the file-level ones. This study also presents a performance study of method-level IRBL techniques. The results demonstrate that method-level IRBL techniques do not improve accuracy but do reduce debugging effort compared with file-level bug localization. We also observed that the relative performance of the techniques is similar across granularity levels, and we revealed the need for improvements in method-level bug localization. The main contributions of this study with respect to method-level bug localization techniques are as follows: \begin{itemize} \item a transformational approach to obtain method-level IRBL techniques and datasets, \item construction of an evaluation framework that enables comparison of IRBL techniques at the method level, and \item confirmation of the relative superiority and inferiority of techniques at the method level, obtained by replicating existing IRBL approaches at different granularity levels. \end{itemize} The remainder of this paper is organized as follows. In the next section, we briefly introduce IR-based bug localization and its evaluation framework. Section~\ref{s:motivation} discusses the issues in terms of module granularity, and related work is introduced in Section~\ref{s:relatedwork}. The approach proposed in this study to address these issues is presented in Section~\ref{s:approach}. Section~\ref{s:evaluation} presents the experimental setup used in the analysis and the comparison of the results at the file and method levels. Threats to validity are presented in Section~\ref{s:validity}. Finally, the conclusions and future work are presented in Section~\ref{s:conclusion}.
\section{Preliminaries}\label{s:preparation} \subsection{IR-Based Bug Localization} Bug localization is the process of finding buggy locations in source code based on information about a given bug. This study explicitly focuses on IR-based bug localization~(IRBL), which uses textual analysis to determine bug locations.
IRBL techniques use a bug report and source code as inputs and output a ranked list of modules to be fixed. Some IRBL techniques may use past bug reports, stack traces, or historical information as additional inputs. The given bug report serves as a query for searching the given source code. A bug report includes the bug ID, a summary, the dates when the bug was opened and closed, and a detailed description. The description of a bug may include stack traces, which advanced IRBL techniques extract and utilize. Finally, IRBL techniques compute a similarity score between the bug report and the source code files, considering the additional information, and output a list of source files ranked by the score. \begin{table}[tb]\centering \caption{Applying \BLT{BugLocator} to \Bug{CODEC-199}}\label{t:ranklist_eg} \begin{tabular}{rlr} \hline Rank & Filename & Score \\\hline 1 &\cellcolor{lightgray}\Mod{Soundex.java} & 0.800 \\ 2 & \Mod{DaitchMokotoffSoundex.java} & 0.618 \\ 3 & \Mod{RefinedSoundex.java} & 0.564 \\ 4 & \Mod{SoundexTest.java} & 0.500 \\ 5 & \Mod{RefinedSoundexTest.java} & 0.388 \\\hline \end{tabular} \end{table} As an IRBL example, an excerpt of the result of applying \BLT{BugLocator}~\cite{Zhou2012BugLocator} to bug \Bug{CODEC-199}\footnote{https://issues.apache.org/jira/browse/CODEC-199} is shown in Table~\ref{t:ranklist_eg}. The columns indicate the rank, filename, and similarity score of the target file for the query bug report. The highlighted row, \Mod{Soundex.java}, specifies the \emph{oracle}, \textit{i.e.}, the buggy file for this bug. The higher the rank of a file, the more likely it is to contain the bug. Therefore, developers would search for the bug starting with \Mod{Soundex.java} in the table.
\subsection{IRBL Evaluation Framework} Researchers often employ a retrospective approach by applying IRBL techniques to previously resolved bugs to evaluate their performance. When a bug is resolved, the list of fixed modules is regarded as the oracle list of modules that should be localized for that bug. A set of resolved bug reports and their associated lists of fixed modules form an IRBL evaluation dataset. The performance of IRBL techniques can be evaluated by comparing the oracle list of modules with the ranked list of modules produced by the techniques. Bench4BL\footnote{https://github.com/exatoa/Bench4BL}\cite{Lee2018Bench4BL} is a large-scale framework proposed by Lee \textit{et al.}\xspace\ to evaluate the performance of file-level bug localization techniques. Resolved bug reports obtained from the issue tracking systems of 46 projects and their lists of oracle source files to be localized were collected and provided as a dataset for evaluation. In addition to the dataset, they attached implementations of \BLT{BugLocator}~\cite{Zhou2012BugLocator}, \BLT{BLUiR}~\cite{Saha2013BLUiR}, \BLT{BRTracer}~\cite{Wong2014BRTracer}, \BLT{AmaLgam}~\cite{Wang2014AmaLgam}, \BLT{BLIA}~\cite{Youm2015BLIA}, and \BLT{Locus}~\cite{Wen2016Locus}. In Bench4BL, released versions of projects are regarded as source code snapshots to be searched. Each version was created by checking out files from a Git repository in the dataset. An oracle list of files corresponding to a bug report was generated based on the commit history in a repository. Once a bug-fixing commit is detected by matching its commit message against the bug ID, the files changed in that commit are regarded as the files to be fixed to resolve the bug.
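To illustrate this linking idea (a simplified sketch, not the actual Bench4BL script), the following Python fragment, assuming the GitPython library, collects the files changed by commits whose messages mention a given bug ID:
\begin{verbatim}
import re
from git import Repo  # GitPython, assumed to be installed

def oracle_files(repo_path, bug_id):
    """Files changed by commits whose message mentions the bug ID
    (e.g., "CODEC-199"); a simplified view of bug-commit linking."""
    repo = Repo(repo_path)
    pattern = re.compile(r"\b" + re.escape(bug_id) + r"\b")
    fixed = set()
    for commit in repo.iter_commits():
        if pattern.search(commit.message):
            # paths touched by the bug-fixing commit
            fixed.update(commit.stats.files.keys())
    return sorted(fixed)

# Hypothetical usage: oracle_files("repos/commons-codec", "CODEC-199")
\end{verbatim}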
In addition, \BLT{AmaLgam} and \BLT{BLIA} utilize the historical information (also obtained from the Git repository) to improve the bug localization accuracy. In summary, all information used when applying IRBL techniques is derived from the original Git repositories.
\section{Motivation}\label{s:motivation} Most existing IR-based bug localization techniques localize buggy code at the file level. Techniques such as \BLT{BugLocator}~\cite{Zhou2012BugLocator}, \BLT{BLUiR}~\cite{Saha2013BLUiR}, \BLT{AmaLgam}~\cite{Wang2014AmaLgam}, and \BLT{BLIA}~\cite{Youm2015BLIA} all recommend suspicious modules at the file level. This section presents the challenges of file-level bug localization and method-level bug localization. \subsection{Challenges of File-Level Bug Localization} When recommending buggy files, IR-based bug localization techniques may return very large files that contain many methods unrelated to the bug, making it challenging to identify bug locations. For example, consider the bug \Bug{CODEC-221}\footnote{https://issues.apache.org/jira/browse/CODEC-221}. For this bug, the three methods in \Mod{HmacUtils.java} must be fixed. This file is 729 lines long and includes 40 methods. Therefore, significant time is required to identify the bug location in the entire file. It would be more helpful if the three buggy methods were recommended directly. \begin{table*}[tb]\centering \caption{Applying \BLT{BLIA} at File and Method Levels to \Bug{CODEC-221}}\label{t:rankloc} \begin{tabular}{r|lrr|lrr} \hline \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{File level} & \multicolumn{3}{c}{Method level} \\ Rank & Module & Score & LOC & Module & Score & LOC \\\hline 1 &\cellcolor{lightgray}\Mod{HmacUtils.java} & 0.640 & 729 & \Mod{BaseNCodecInputStream\#reset()} & 0.640 & 15 \\ 2 & \Mod{DigestUtils.java} & 0.251 & 752 &\cellcolor{lightgray}\Mod{HmacUtils\#updateHmac(Mac,InputStream)} & 0.573 & 29 \\ 3 & \Mod{HmacUtilsTest.java} & 0.095 & 237 &\cellcolor{lightgray}\Mod{HmacUtils\#updateHmac(Mac,byte[])} & 0.565 & 19 \\ 4 & \Mod{BaseNCodecInputStream.java} & 0.048 & 184 &\cellcolor{lightgray}\Mod{HmacUtils\#updateHmac(Mac,String)} & 0.565 & 19 \\\hline \end{tabular} \end{table*} \begin{table*}[tb]\centering \caption{Comparison of IRBL Studies}\label{t:relatedwork} \begin{tabular}{l|ccccc} \hline & Additional information & Method level & \# techniques & \# projects & Multi-version \\\hline Lee \textit{et al.}\xspace~(Bench4BL)\cite{Lee2018Bench4BL} & $\checkmark$ & & 6 & 46 & $\checkmark$ \\ Youm \textit{et al.}\xspace~(\BLT{BLIA1.5})\cite{Youm2017BLIA} & $\checkmark$ & $\checkmark$ & 1 & 3 & \\ Amasaki \textit{et al.}\xspace~\cite{Amasaki2020} & $\checkmark$ & $\checkmark$ & 1 & 43 & $\checkmark$ \\ Razzaq \textit{et al.}\xspace~(\BLT{BoostNSift})\cite{Razzaq2021Nsift} & $\checkmark$ & $\checkmark$ & 4 & 4 & \\ Chakkrit \textit{et al.}\xspace~\cite{Chakkrit2018IR} & & $\checkmark$ & 4 & 2 & \\ Our Approach & $\checkmark$ & $\checkmark$ & 5 & 37 & $\checkmark$ \\\hline \end{tabular} \end{table*} Table~\ref{t:rankloc} lists the results of applying \BLT{BLIA} to this bug report at both the file and method levels. The table shows the names of the modules recommended by \BLT{BLIA} with their ranks at both levels. The score represents the value used to rank each module, and LOC shows the module's lines of code. Rows with the correct answers are highlighted. In this example, the correct answer at the file level, \Mod{HmacUtils.java}, is ranked first with a score of 0.640.
However, identifying the methods that need to be fixed in this file, which consists of more than 700 lines, is challenging. Conversely, at the method level, all buggy methods are localized within the top four of the list, and the sum of their LOC is only 82. Method-level IRBL thus reduces the reading effort to approximately 11\% of that required to read the 729 lines of \Mod{HmacUtils.java}. In addition, the ratio of buggy methods to the total number of methods in a buggy file is generally small. In the context of bug prediction, Hata \textit{et al.}\xspace\ investigated the ratio of buggy methods in a target project to ascertain the effectiveness of fine-grained bug predictions \cite{Hata2012}. The results showed that the median number of buggy methods was 1--2, whereas that of all methods was 8--22. Therefore, file-level recommendations may be inefficient in identifying bug locations. \subsection{Challenges of Method-Level Bug Localization} Currently, there is a lack of knowledge regarding method-level IRBL because only a few method-level techniques exist. To the best of our knowledge, only a few bug localization techniques, such as \BLT{BLIA1.5}\cite{Youm2017BLIA}, \BLT{FineLocator}\cite{Zhang2019Fine}, or \BLT{BoostNSift}\cite{Razzaq2021Nsift}, work at the method level. Furthermore, there is no unified framework for evaluating techniques at the method level, and the performance differences between the granularity levels and techniques remain unclear. Therefore, it is necessary to establish an evaluation framework for various techniques.
\section{Related Work}\label{s:relatedwork} \subsection{IRBL Approaches} To date, many techniques that recommend bug locations based on information retrieval have been studied. Some of these techniques add other information to the similarity between the source code and bug reports. \BLT{BugLocator}\cite{Zhou2012BugLocator} uses similarity to previous bug reports and file size. \BLT{BLUiR}\cite{Saha2013BLUiR} uses structural information of source code. \BLT{BRTracer}\cite{Wong2014BRTracer} uses the stack trace information in bug reports. \BLT{AmaLgam}\cite{Wang2014AmaLgam} and \BLT{Locus}\cite{Wen2016Locus} use historical data. \BLT{BLIA}\cite{Youm2015BLIA} combines all of these to localize bugs. The aforementioned techniques localize bugs at the file level and were evaluated with limited data, such as old versions of JDT or AspectJ\@. Subsequently, Bench4BL, an evaluation framework proposed by Lee \textit{et al.}\xspace\cite{Lee2018Bench4BL}, evaluated these six techniques on 46 projects at the file level. They suggested that IRBL techniques should be evaluated more accurately using multiple-version matching, which searches the version against which a bug report was submitted. Method-level IRBL is considered to further aid developers in debugging, and several method-level IRBL techniques have been proposed. \BLT{BLIA1.5}, proposed by Youm \textit{et al.}\xspace, extends \BLT{BLIA} for method-level bug localization\cite{Youm2017BLIA}. Amasaki \textit{et al.}\xspace\cite{Amasaki2020} tailored \BLT{BLUiR} to localize bugs at the method level by using information outside the method. \BLT{BoostNSift} was proposed by Razzaq \textit{et al.}\xspace\cite{Razzaq2021Nsift} to filter source code based on bug report text. Each method-level bug localization technique was evaluated using different datasets. Youm \textit{et al.}\xspace\ used AspectJ, SWT, and ZXing, whereas Amasaki \textit{et al.}\xspace\ used a part of the Bench4BL dataset.
Razzaq \textit{et al.}\xspace\ compared \BLT{BoostNSift} with \BLT{BLUiR}, \BLT{BugLocator}, and \BLT{BLIA}, and used the dataset of Youm \textit{et al.}\xspace\ plus Eclipse. In these evaluations, the method-level techniques were not applied to large projects, and the datasets used were not uniform. However, only IR techniques without additional information have been evaluated under a unified framework at the method level. Chakkrit \textit{et al.}\xspace\ investigated the effectiveness of four IR techniques, VSM, LDA, LSI, and Entity Metric, at the method level\cite{Chakkrit2018IR}. They proposed a metric named top-$k$ LOC, the percentage of bug reports for which at least one buggy file is found in the top-ranked files whose cumulative sum of lines of code is at most $k$, to investigate the performance of the techniques. They found that the settings, such as how texts are pre-processed and which parts of the text in bug reports are used, significantly impact the performance of method-level IR techniques. In addition, settings with good performance produce good results at any granularity. They also stated that performance evaluation using LOC is necessary because the effort developers need to find bugs may differ even between techniques with similar accuracy. Table \ref{t:relatedwork} shows the relationship between this study and previous studies. In this table, the `Additional information' column indicates whether a technique considers other information, the `Method level' column shows whether the approach was evaluated at the method level, and the `Multi-version' column shows whether the technique localizes bugs with multiple-version matching. \begin{table}[tb]\centering \caption{Comparison of IRBL Datasets}\label{t:datasets} \begin{tabular}{llrr} \hline Name & Granularity & \# projects & \# bugs \\\hline iBUGS \cite{dallmeier2007extraction} & File & 3 & 390 \\ MoreBugs \cite{rao2013morebugs} & File & 2 & 902 \\ BugLinks \cite{sisman2013assisting} & File & 2 & 5,046 \\ Bench4BL \cite{Lee2018Bench4BL} & File & 46 & 9,459 \\ Bugzbook \cite{akbar2020large} & File & 29 & 21,253 \\ FDS\cite{oscar-bugdesc} & File & 11 & 4,429 \\ MDS\cite{oscar-bugdesc} & Method & 14 & 360 \\ FinerBench4BL & Method & 37 & 3,344 \\\hline \end{tabular} \end{table}
\subsection{IRBL Datasets} Bug localization techniques are evaluated by comparing the list of modules produced by applying the techniques to previously resolved bug reports with the oracle list of modules that were actually fixed when the bugs were resolved. Several sets of resolved bug reports and oracle lists of fixed modules have been packaged and proposed as bug localization evaluation datasets. Table~\ref{t:datasets} summarizes bug localization datasets. iBUGS~\cite{dallmeier2007extraction} is a dataset developed by Dallmeier \textit{et al.}\xspace MoreBugs~\cite{rao2013morebugs}, proposed by Rao \textit{et al.}\xspace, is an extended dataset that adds version history information to the projects adopted by iBUGS. BugLinks~\cite{sisman2013assisting} is a dataset proposed by Sisman \textit{et al.}\xspace\ that contains non-Java projects. iBUGS, MoreBugs, and BugLinks are relatively small datasets, comprising only 2--3 projects. Bench4BL is a large-scale bug localization evaluation framework proposed by Lee \textit{et al.}\xspace Its dataset consists of 46 Java projects and 9,459 bug reports.
In addition, the Bench4BL framework bundles several IRBL technique implementations so that users can easily execute the techniques for projects in the provided dataset. Akbar \textit{et al.}\xspace\ proposed BugzBook~\cite{akbar2020large}, another large-scale bug localization dataset that contains over 20,000 bug reports from 29 projects developed in Java, C/C++, and Python. To the best of our knowledge, most publicly available bug localization datasets are at the file level. As mentioned above, several existing studies on method-level bug localization internally analyzed method-level datasets. However, these datasets are not publicly available. We believe that the absence of method-level datasets and evaluation frameworks hinders the promotion of method-level bug localization research. Note that Chaparro \textit{et al.}\xspace~\cite{oscar-bugdesc} prepared IRBL datasets at the class, file~(FDS), and method level~(MDS) as part of their study on query reformulation for bug localization. These datasets are available upon request.
\section{Approach}\label{s:approach} To address the aforementioned challenges, we used the \emph{method repositories} created by the repository transformation to construct a method-level IRBL dataset, and we modified the existing file-level IRBL implementations into method-level ones. We constructed an evaluation framework for method-level IRBL techniques by combining them. IRBL techniques output a list of source code files in the order of similarity to bug reports, calculated using additional information such as the text of previous bug reports and historical data. They are evaluated by comparing the ranked list of modules obtained by applying IRBL techniques to resolved bug reports with the oracle list of modules. Accordingly, a method-level evaluation framework needs to fulfill the following three requirements: \begin{itemize} \item it should be able to output a ranked list of methods, \item it should be able to link bug reports to the fixed methods, and \item it should be able to obtain historical information about each method. \end{itemize} Most IRBL techniques assume that the input bug report and source code are in the form of files and calculate their similarity. They may consider the size or history of the source files as additional information to be used to calculate the score. A straightforward approach to make such implementations work at the method level is to add, to the existing IRBL implementations, a specific process that extracts information exclusively for the method of interest, which would require substantial engineering work. Conversely, our idea is to prepare \emph{method files} whose contents consist only of the individual methods of interest, let existing IRBL implementations treat these method files as if they were normal source files, and have them operate on these method files. This transforms the complexity of method-level bug localization into the cost of preparing the method files and minimizes the cost of re-implementing each IRBL implementation at the method level. Note that such method files are naturally considered incorrect Java source code for the project as a whole, but even such fake files can be analyzed without issues in most cases because IR-based approaches do not perform deep program analysis but only superficial text analysis.
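To see why such superficial text analysis carries over unchanged, the following is a minimal sketch of a VSM-style ranker, assuming scikit-learn (none of the bundled IRBL implementations is shown here). Because it treats every module as plain text, a directory of method files can be ranked against a bug report in exactly the same way as a directory of ordinary source files; the directory path in the usage comment is hypothetical.
\begin{verbatim}
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_modules(bug_report_text, module_dir):
    """Rank modules (source files or method files) by TF-IDF cosine
    similarity to a bug report; the granularity is invisible to the ranker."""
    paths = sorted(Path(module_dir).rglob("*.java"))
    docs = [p.read_text(errors="ignore") for p in paths]
    vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_]\w+")
    matrix = vectorizer.fit_transform(docs + [bug_report_text])
    n = len(docs)
    scores = cosine_similarity(matrix[n], matrix[:n]).ravel()
    return sorted(zip(paths, scores), key=lambda pair: -pair[1])

# Hypothetical usage: rank_modules(report_text, "versions/codec-1.10-methods/")
\end{verbatim}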
By preparing the method files using a \emph{repository transformation} mechanism and converting every source file into method files within the original Git change history, most of the automated processing provided in the existing file-level IRBL evaluation framework, such as the generation of code snapshots from a repository, extraction of the change history of each file, and identification of oracle files based on the correspondence between bug reports and commits, could be completely reused. This could lead to an ecosystem of IRBL evaluation frameworks with multiple levels of granularity. \begin{figure*}[tb]\centering \includegraphics[width=16cm]{overview.pdf} \caption{Overview of FinerBench4BL.}\label{f:approach} \end{figure*} Accordingly, we propose an approach that, by applying a repository transformation mechanism, transforms the repository itself into one that records the change history at the method level. Figure~\ref{f:approach} shows the repository before and after the transformation. As shown in the figure, the dataset repository provided by the existing evaluation framework was converted to generate source code files split by method. Consequently, the entire dataset was converted to a method-level dataset. We then constructed a dataset that enables the comparison of IRBL techniques at the method level, employing this method-level repository as the input. In addition, only a few changes are required to turn the IRBL techniques into method-level techniques. This approach is described in more detail as follows:
\subsection{Creating Method-Level Repositories} \begin{figure}[tb]\centering \includegraphics[width=\linewidth]{mjava_eg.pdf} \caption{Example of repository transformation.}\label{f:mjava_eg} \end{figure} We used the Git repositories provided by Bench4BL and converted them into method-level repositories using Historinc\cite{Historinc2022}, a repository transformation tool, thereby obtaining method files. Figure~\ref{f:mjava_eg} shows an example of splitting part of the file \Mod{HmacUtils.java} in Apache Commons CODEC\@. By splitting the original source file, the \textit{method files}, \Mod{HmacUtils\#hmac(String).java} and \Mod{HmacUtils\#hmacHex().java}, were generated. The package name, class name, Javadoc comment, and method body were extracted as the body of each method file. Therefore, in this approach, entities outside the method, such as fields and imports of the class, were excluded from the method file. Bugs located outside of methods were therefore excluded from the method-level evaluation.
\subsection{Generating Oracles} The Bench4BL framework includes a script that links the resolved bug reports to fixed files. This script identifies the fixed files by linking bugs to the commits that resolve them, referring to the bug ID in the commit message. Therefore, the bug-report links can be regenerated by applying the same script to the method-level repository. These updated links to method files allow method-level IRBL techniques to be compared within the Bench4BL framework.
\subsection{Modifying IRBL Techniques to Method Level} We modified the existing IRBL techniques to output a ranked list at the method level. As explained, our approach splits source files into method files that are parsable Java source code, so that the results of parsing down to the method internals can be artificially reproduced.
Therefore, if a technique merely uses the information available from a method file, such as file size or source code structure, no modifications are required. Furthermore, we could obtain historical information without modifying implementations because the repository itself is already fine-grained and consists of method-level contents. We need to modify an IRBL implementation only when it considers information from outside the source code, such as stack traces in the bug report description, or when it requires valid source files. For example, consider modifying \BLT{BugLocator} for method-level bug localization. \BLT{BugLocator} outputs the ranked list of files by considering the similarity between source code and a bug report, its file size, and the similarity to past bug reports. In this case, the original implementation could be completely reused without any changes because all the required information was available from the method file.
\section{Evaluation}\label{s:evaluation} In this study, we validate the method-level techniques modified by our approach together with the method-level evaluation framework, and answer the following research questions (RQs): \defHow much modification is required to convert IRBL techniques to the method level?{How much modification is required to convert IRBL techniques to the method level?} \defHow well do the method-level IRBL techniques perform?{How well do the method-level IRBL techniques perform?} \begin{itemize} \item \RQ{1}: How much modification is required to convert IRBL techniques to the method level? \item \RQ{2}: How well do the method-level IRBL techniques perform? \end{itemize} Five of the six techniques provided by Bench4BL were used for evaluation: \BLT{BugLocator}\cite{Zhou2012BugLocator}, \BLT{BLUiR}\cite{Saha2013BLUiR}, \BLT{BRTracer}\cite{Wong2014BRTracer}, \BLT{AmaLgam}\cite{Wang2014AmaLgam}, and \BLT{BLIA}\cite{Youm2015BLIA}, excluding \BLT{Locus}\cite{Wen2016Locus}. We excluded \BLT{Locus} because it crashed during execution, and we could not obtain the results. We found that several techniques provided by Bench4BL had issues that prevented them from correctly calculating the similarities. Therefore, we utilized the \BLT{BLIA} implementation to imitate \BLT{BLUiR}, \BLT{AmaLgam}, and \BLT{BRTracer} by changing specific parameters that drop additional information, thereby avoiding the issues in their implementations.
\subsection{\RQ{1}: How much modification is required to convert IRBL techniques to the method level?} \Heading{Motivation} We investigated the number of modifications to clarify whether our approach can easily convert the existing IRBL technique implementations to the method level. \Heading{Study Design} This analysis is based on the LOC of the modifications required to convert each technique and their percentage of the total LOC of the Java source files of each technique's implementation. \Heading{Results} Table~\ref{t:cost} shows the results for \RQ{1}. We needed to modify only \BLT{BRTracer} and \BLT{BLIA} using the proposed approach. When calculating similarity, these techniques add scores to files that appear in the stack traces in the bug report. We modified the process of obtaining the file name from the stack trace so that it also obtains the names of the methods belonging to that file. For example, the stack trace of bug \Bug{COMPRESS-203}\footnote{https://issues.apache.org/jira/browse/COMPRESS-203} includes \Mod{\seqsplit{org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.writePaxHeaders(TarArchiveOutputStream.java:485)}}.
In this case, the modified method-level techniques were needed to obtain the method name \Mod{writePaxHeaders} and add the similarity score of the corresponding method file. It was easy to find the code location of the feature extracting the source file name from a stack trace and to modify it to extract the method file names by string manipulation, leading to only four line modifications. \begin{table}[tb]\centering \caption{Changes Required to Fix Techniques for Method Level}\label{t:cost} \begin{tabular}{lrrr} \hline IRBL technique & Whole LOC & Modified LOC & Ratio (\%) \\\hline \BLT{BugLocator} & 3,180 & 0 & 0 \\ \BLT{BLUiR} & 10,469 & 0 & 0 \\ \BLT{BRTracer} & 10,530 & 4 & 0.038 \\ \BLT{AmaLgam} & 10,469 & 0 & 0 \\ \BLT{BLIA} & 10,474 & 4 & 0.038 \\\hline \end{tabular} \end{table} \begin{table}[tb]\centering \caption{Target Projects}\label{t:subject} {\tabcolsep=3.5pt\begin{tabular}{ll|rrrrrr}\hline Group & Project & \# files & \# methods & \# versions & \# bugs \\\hline Commons & CODEC & 115 & 1,310 & 6 & 27 \\ & COLLECTIONS & 525 & 6,997 & 5 & 59 \\ & COMPRESS & 265 & 2,591 & 15 & 105 \\ & CONFIGURATION & 447 & 6,073 & 11 & 107 \\ & CRYPTO & 82 & 488 & 1 & 4 \\ & CSV & 29 & 452 & 3 & 6 \\ & IO & 227 & 2,608 & 12 & 70 \\ & LANG & 305 & 6,336 & 15 & 158 \\ & MATH & 1,617 & 15,695 & 15 & 175 \\ & WEAVER & 113 & 473 & 1 & 1 \\ \hline Jboss & ENTESB & 252 & 3,210 & 1 & 4 \\ & JBMETA & 858 & 4,834 & 3 & 15 \\ \hline Spring & AMQP & 408 & 3,996 & 32 & 86 \\ & ANDROID & 305 & 3,582 & 2 & 8 \\ & BATCH & 1,732 & 10,071 & 33 & 335 \\ & BATCHADM & 243 & 1,298 & 4 & 16 \\ & DATACMNS & 604 & 4,512 & 30 & 104 \\ & DATAGRAPH & 848 & 5,190 & 14 & 43 \\ & DATAJPA & 330 & 2,002 & 32 & 107 \\ & DATAMONGO & 622 & 6,703 & 40 & 209 \\ & DATAREDIS & 551 & 9,488 & 15 & 44 \\ & DATAREST & 414 & 2,183 & 23 & 89 \\ & LDAP & 566 & 3,556 & 5 & 46 \\ & MOBILE & 64 & 814 & 3 & 8 \\ & ROO & 1,109 & 7,803 & 15 & 568 \\ & SEC & 1,618 & 9,295 & 41 & 422 \\ & SECOAUTH & 726 & 3,912 & 6 & 61 \\ & SGF & 695 & 5,790 & 19 & 83 \\ & SHDP & 1,102 & 6,348 & 8 & 37 \\ & SHL & 151 & 749 & 2 & 6 \\ & SOCIAL & 212 & 1,344 & 4 & 10 \\ & SOCIALFB & 253 & 1,786 & 4 & 11 \\ & SOCIALLI & 180 & 830 & 1 & 2 \\ & SOCIALTW & 153 & 1,197 & 5 & 6 \\ & SPR & 6,512 & 57,696 & 10 & 89 \\ & SWF & 808 & 6,864 & 19 & 101 \\ & SWS & 925 & 3,505 & 24 & 122 \\ \hline & Total & 25,966 & 211,581 & 479 & 3,344 \\ \hline \end{tabular}} \end{table} \begin{table*}[tb]\centering \caption{Results of MAP}\label{t:map} \begin{tabular}{l|ccccc|ccccc} \hline & \multicolumn{5}{c|}{File level} & \multicolumn{5}{c}{Method level} \\ Project & \BLT{BugLocator} & \BLT{BLUiR} & \BLT{BRTracer} & \BLT{AmaLgam} & \BLT{BLIA} & \BLT{BugLocator} & \BLT{BLUiR} & \BLT{BRTracer} & \BLT{AmaLgam} & \BLT{BLIA} \\\hline CODEC &\B0.631 & 0.623 & 0.283 & 0.629 & 0.621 & 0.192 & 0.352 & 0.094 & 0.345 & \B0.381 \\ COLLECTIONS & 0.572 & 0.604 & 0.283 & 0.585 &\bf 0.614 & 0.366 & 0.369 & 0.094 & 0.377 & \B0.394 \\ COMPRESS & 0.631 &\bf 0.703 & 0.263 & 0.678 & 0.696 & 0.281 & 0.318 & 0.170 & 0.302 & \B0.349 \\ CONFIGURATION & 0.715 & 0.735 & 0.250 & 0.734 &\bf 0.776 & 0.210 & 0.329 & 0.138 & 0.313 & \B0.363 \\ CRYPTO & 0.314 & 0.313 & 0.063 & 0.322 & \B0.323 & 0.106 & 0.060 & \B0.264 & 0.060 & 0.058 \\ CSV & 0.806 &\bf 0.833 & 0.482 &\bf 0.833 &\bf 0.833 & 0.155 & 0.423 & 0.154 & 0.437 & \B0.453 \\ IO & 0.742 & 0.761 & 0.271 & 0.764 & \B0.772 & 0.400 & \B0.506 & 0.227 & 0.499 & 0.495 \\ LANG & 0.696 & 0.710 & 0.277 & 0.707 & \B0.738 & 0.374 & 0.503 & 0.167 & 0.525 & \B0.536 \\ MATH & 
0.499 & 0.553 & 0.145 & 0.571 &\bf 0.578 & 0.263 & 0.355 & 0.122 & 0.355 & \B0.375 \\ WEAVER & \B0.321 & 0.079 & 0.125 & 0.079 & 0.167 & 0.068 & \B0.083 & 0.015 & \B0.083 & 0.038 \\ \hline ENTESB & 0.119 &\bf 0.431 & 0.399 & 0.429 & 0.329 & 0.031 & 0.167 &\bf 0.350 & 0.166 & 0.214 \\ JBMETA &\bf 0.359 & 0.278 & 0.150 & 0.318 & 0.315 & 0.146 &\bf 0.226 & 0.035 &\bf 0.226 & 0.214 \\ \hline AMQP & 0.404 & 0.421 & 0.086 & 0.422 &\bf 0.434 & 0.155 & 0.167 & 0.106 & 0.165 &\bf 0.187 \\ ANDROID & 0.540 & 0.562 &\bf 0.566 & 0.499 & 0.556 & 0.298 & 0.343 & 0.237 & 0.353 &\bf 0.367 \\ BATCH & 0.414 & 0.436 & 0.122 &\bf 0.448 &\bf 0.448 & 0.216 & 0.222 & 0.138 & 0.227 &\bf 0.241 \\ BATCHADM & 0.438 & 0.553 & 0.276 & 0.549 &\bf 0.606 &\bf 0.256 & 0.223 & 0.136 & 0.248 & 0.225 \\ DATACMNS & 0.300 & 0.365 & 0.136 &\bf 0.369 & 0.362 & 0.125 & 0.204 & 0.124 & 0.201 &\bf 0.214 \\ DATAGRAPH & 0.145 & 0.171 & 0.107 & 0.182 &\bf 0.197 & 0.145 & 0.171 & 0.106 & 0.182 &\bf 0.197 \\ DATAJPA & 0.355 & 0.369 & 0.110 & 0.380 &\bf 0.394 & 0.169 & 0.173 & 0.115 & 0.175 & \B0.183 \\ DATAMONGO & 0.296 & 0.317 & 0.111 & 0.316 &\bf 0.340 & 0.114 & 0.144 & 0.096 & 0.142 &\bf 0.165 \\ DATAREDIS & 0.382 &\bf 0.410 & 0.114 & 0.406 & 0.391 & 0.163 & 0.160 & 0.123 & 0.152 &\bf 0.190 \\ DATAREST & 0.264 & 0.272 & 0.119 &\bf 0.290 & 0.289 & 0.100 & 0.127 & 0.077 &\bf 0.128 &\bf 0.128 \\ LDAP & 0.498 & 0.476 & 0.223 & 0.487 &\bf 0.508 & 0.227 & 0.300 & 0.212 &\bf 0.309 &\bf 0.309 \\ MOBILE &\bf 0.530 & 0.427 & 0.251 & 0.427 & 0.427 & 0.372 &\bf 0.627 & 0.376 &\bf 0.627 & 0.595 \\ ROO &\bf 0.414 & 0.375 & 0.127 & 0.384 &\bf 0.414 & 0.200 & 0.193 & 0.087 & 0.200 &\bf 0.228 \\ SEC & 0.505 & 0.533 & 0.221 & 0.545 &\bf 0.559 & 0.299 & 0.334 & 0.186 & 0.339 &\bf 0.362 \\ SECOAUTH & 0.434 & 0.426 & 0.157 & 0.429 &\bf 0.457 & 0.301 & 0.275 & 0.176 & 0.285 &\bf 0.318 \\ SGF &\bf 0.415 & 0.380 & 0.155 & 0.384 & 0.404 &\bf 0.182 & 0.159 & 0.123 & 0.168 & 0.169 \\ SHDP & \B0.387 & 0.361 & 0.187 & 0.360 & 0.379 & 0.200 & 0.161 & 0.166 & 0.160 &\bf 0.211 \\ SHL & 0.503 &\bf 0.593 & 0.217 &\bf 0.593 &\bf 0.593 & 0.227 &\bf 0.327 & 0.226 &\bf 0.327 & 0.314 \\ SOCIAL &\bf 0.692 & 0.570 & 0.292 & 0.600 & 0.684 & 0.316 & 0.499 & 0.406 & 0.499 &\bf 0.501 \\ SOCIALFB &\bf 0.666 & 0.470 & 0.222 & 0.495 & 0.520 & 0.184 & 0.300 &\bf 0.328 & 0.318 & 0.318 \\ SOCIALLI &\bf 0.327 & 0.300 & 0.084 & 0.300 & 0.303 & 0.511 & 0.514 & 0.505 & 0.514 &\bf 0.515 \\ SOCIALTW & 0.693 & 0.462 &\bf 0.694 & 0.462 & 0.512 &\bf 0.488 & 0.320 & 0.193 & 0.320 & 0.282 \\ SPR &\bf 0.415 & 0.277 & 0.119 & 0.321 & 0.339 & 0.203 & 0.167 & 0.141 & 0.227 &\bf 0.254 \\ SWF & 0.461 & 0.424 & 0.253 & 0.411 &\bf 0.476 & 0.212 & 0.243 & 0.191 & 0.229 &\bf 0.271 \\ SWS & 0.270 & 0.257 & 0.105 & 0.264 &\bf 0.285 & 0.272 & 0.256 & 0.095 & 0.265 &\bf 0.283 \\ \hline \end{tabular} \end{table*} Table \ref{t:cost} shows that the modifications of the two techniques were minimal (0.038 \% each). Conversely, there were no modifications to techniques that use information directly related to files, such as file size and historical information. \Conclusion{Our approach could convert five existing bug localization techniques to method level in eight lines. The amount of modification was small, and it was easy to identify the required changes.} \subsection{\RQ{2}: How well do the method-level IRBL techniques perform?} \Heading{Motivation} We evaluated modified techniques in a large-scale unified framework to identify changes in performance to gain new insights into method-level bug localization. 
\Heading{Study Design} We compared the performance of the techniques at the same and different levels. We evaluated the accuracy and the effort required to determine the bug locations. MAP and MRR were used as measures of accuracy, and the top-$k$ LOC\cite{Chakkrit2018IR} was used as a measure of effort. Top-$k$ LOC is the percentage of bug reports for which at least one buggy file is found in the top-ranked files whose cumulative LOC is below $k$. For example, a top-$k$ LOC of 0.1 for $k={}$10,000 means that the correct file appears within the top 10,000 lines for 10\% of the bug reports. We used $k \in \{100,500,1000,5000\}$. We used a two-sided Wilcoxon signed-rank test to compare the outputs of the techniques at the two levels. \Heading{Construction of FinerBench4BL Dataset}\label{ss:dataset} In this experiment, owing to the execution time, we used 37 of the 46 projects provided by Bench4BL to construct the FinerBench4BL dataset. We filtered out bug reports for which any of the five IRBL techniques at either level failed to produce any solution. A typical filtered-out case is a bug whose fix location is outside of any method, so that the list of solution modules at the method level becomes empty. The target projects and the numbers of modules, versions, and bug reports used from Bench4BL and generated for FinerBench4BL are shown in Table~\ref{t:subject}. In total, 25,966 original source files and 211,581 method files across all 479 versions were searched for 3,344 bug reports. \def7.7cm{7.7cm} \begin{figure}\centering{\footnotesize \includegraphics[width=7.7cm]{fm_mapbox.pdf}\\ (a) MAP.\\~\\ \includegraphics[width=7.7cm]{fm_mrrbox.pdf}\\ (b) MRR.\\~\\ \includegraphics[width=7.7cm]{loc.pdf}\\ (c) Top-1,000 LOC.} \caption{Evaluation of IRBL techniques at each level.}\label{f:fm} \end{figure}
\Heading{Results of Accuracy Evaluation} Table~\ref{t:map} presents the MAP results for each project. Box plots of the per-project MAP and MRR comparing the file and method levels are shown in Figs.~\ref{f:fm}(a) and \ref{f:fm}(b). We describe only the MAP results because the MRR results showed a similar tendency. The technique with the highest accuracy for each project is highlighted in bold in Table~\ref{t:map}. First, we investigated the accuracy at each level. \BLT{BLIA} showed the highest accuracy, with the highest values in 20 of the 37 projects at the file level and 26 at the method level. Furthermore, the same techniques showed the highest MAP at both levels for 18 projects (48.6\% of the total). This means that, for each project, the technique that showed the highest accuracy at the file level is likely to recommend buggy methods precisely at the method level. This result suggests that the selection of effective techniques for file-level bug localization may also be helpful at the method level. Second, we investigated how the accuracy of each technique changed when it was modified for the method level. The median MAP differed considerably between the two levels: file-level MAP was 1.33 times higher than method-level MAP for \BLT{BRTracer} at the minimum and 2.07 times higher for \BLT{BugLocator} at the maximum. Furthermore, at both levels, the median MAP value of \BLT{BLIA} was the largest, followed by \BLT{AmaLgam}, \BLT{BLUiR}, \BLT{BugLocator}, and \BLT{BRTracer}. Therefore, the accuracy increased with the amount of additional information handled, except for \BLT{BRTracer}.
The high accuracy of \BLT{BLIA}, which uses most of the information, indicates that it can maintain accuracy by supplementing, from various perspectives, the information missing in method-level bug localization. In contrast, \BLT{BLUiR}, which uses only structural information as additional information, showed an accuracy comparable to that of \BLT{AmaLgam} and \BLT{BLIA}, which also consider other information. This suggests that structural information is adequate for method-level bug localization. For some bugs, the accuracy changes significantly with the bug localization granularity. Two examples of \BLT{BugLocator} outputs are presented below. The first is a case of decreased accuracy. In \Bug{COLLECTIONS-220}\footnote{https://issues.apache.org/jira/browse/COLLECTIONS-220}, \Mod{writeObject()} in \Mod{\seqsplit{UnboundedFifoBuffer.java}} was the cause of the bug. Although the target file was ranked first at the file level, \Mod{writeObject()} was recommended in 789th place at the method level. The words ``increment'', ``tail'', and ``head'' in this bug report did not appear in \Mod{writeObject()}, a seven-line method that provides little information. However, \Mod{add()}, \Mod{remove()}, and other methods included in \Mod{UnboundedFifoBuffer.java} contained several relevant words. This suggests that information from irrelevant methods contributes to the high accuracy in file-level bug localization. The second is a case of improved accuracy. In \Bug{CONFIGURATION-558}\footnote{https://issues.apache.org/jira/browse/CONFIGURATION-558}, \Mod{getList()} in \Mod{\seqsplit{MultiFileHierarchicalConfiguration.java}} was the buggy method. Method-level bug localization improved the rank to seventh, compared with 22nd at the file level. Although the bug report description was short, it included parameter and method names directly related to the target method. These entities might have contributed to the improvement in the accuracy at the method level. This is supported by the results of Wang \textit{et al.}\xspace~\cite{Wang2015BR} and Rahman \textit{et al.}\xspace~\cite{Mohammad2018QueryReform}, who showed that bug reports containing program entities are suitable for IRBL. These examples with significant accuracy variations between the two levels suggest that, at the file level, information on methods irrelevant to the bug location is used for information retrieval. This suggests that a ranked list based on unrelated methods may not help developers find such bug locations. Conversely, the textual information of the method body alone may be insufficient for information retrieval at the method level, even though this level is considered to have a reasonable granularity. Therefore, for method-level IRBL, it is useful to implement a hybrid approach that uses not only the information in the target method body but also information from outside the method, as proposed by Amasaki \textit{et al.}\xspace~\cite{Amasaki2020}. \Conclusion{\BLT{BLIA} showed the highest accuracy at both levels. In 48.6\% of the projects, the best-performing techniques were the same at both levels. Therefore, an accurate technique at the file level also performs well at the method level.
Furthermore, we found that the additional information effectively contributed to the recommendation of buggy methods despite an accuracy drop of up to 2.07-fold.} \Heading{Results of Effort Evaluation} Figure~\ref{f:fm}(c) shows the results of the top-1,000 LOC, which is a measure of the effort needed to find bug locations from the ranked list. Table~\ref{t:topkloc_median} also presents the top-1,000 LOC median, the p-value of the two-sided Wilcoxon signed-rank test, and Cliff's $d$ with an interpretation~\cite{romano-fair2006} for each project. Owing to space limitations, we omit the results for $k \in \{100,500,5000\}$ because they show similar tendencies to the case of $k=1000$. \begin{table}[tb]\centering \caption{Median of Top-1,000 LOC Values}\label{t:topkloc_median} {\tabcolsep=5.5pt\begin{tabular}{l|cc|cl} \hline & File level & Method level & p-value & \multicolumn{1}{c}{Cliff's $d$} \\ \hline \BLT{BugLocator} & 0.865 & 0.875 & 0.644 & 0.063 (negligible) \\ \BLT{BLUiR} & 0.850 & 0.933 & 0.026 & 0.297 (small) \\ \BLT{BRTracer} & 0.708 & 0.865 & 0.024 & 0.301 (small) \\ \BLT{AmaLgam} & 0.855 & 0.933 & 0.022 & 0.305 (small) \\ \BLT{BLIA} & 0.875 & 0.944 & 0.027 & 0.295 (small) \\ \hline \end{tabular}} \end{table} For all techniques, the top-1,000 LOC performance improved at the method level. For the four techniques other than \BLT{BugLocator}, the differences in the top-1,000 LOC between the two levels were significant. Compared with the top-1,000 LOC performance at the file level, \BLT{BugLocator} showed the smallest increase at 2.2\%, and \BLT{BRTracer} showed the largest increase at 22.2\%. At the method level, the best-performing technique, \BLT{BLIA}, identified correct files within the top 1,000 lines of code of the ranked list for 94.4\% of the bug reports. Despite its better accuracy, the file level showed lower top-1,000 LOC performance than the method level because individual files contain many lines of code. This suggests that the number of lines developers need to read could be reduced by current method-level IRBL approaches, even if their accuracy is not high. In terms of the performance order, \BLT{BLIA} showed the highest top-1,000 LOC and \BLT{BRTracer} the lowest at both levels. The ranking of the techniques by top-1,000 LOC at the method level is the same as at the file level, as with the results of the accuracy evaluation, except for \BLT{BugLocator}, which did not show significant differences. This indicates that a technique with good effort performance at the file level also performs well at the method level. Furthermore, as in the discussion of accuracy above, \BLT{BLUiR}, which uses only structural information, achieved top-1,000 LOC performance comparable to that of the other techniques at both levels. Therefore, considering structural information leads to an improvement in the top-1,000 LOC performance. \Conclusion{Except for one technique, the method-level techniques significantly improved the top-1,000 LOC\@. \BLT{BLIA}, which performed best, found fixed method files within the top 1,000 lines of code of the ranked list for more than 94\% of bug reports. Techniques that perform well at the file level often also perform well at the method level. Structural information may improve the performance of method-level bug localization, where information is scarce.} \section{Threats to Validity}\label{s:validity} In this study, we assumed that bug reports were correctly linked with fixed files.
We used scripts provided by Bench4BL to link bug reports and fixed files. However, we did not validate the correctness of this linking; therefore, there may be bug reports that cannot be identified in the commit log, as well as reports linked to non-buggy files. Although the top-$k$ LOC was adopted as an indicator for evaluating effort, how well it reflects the actual effort required to identify bug locations is not evident. In addition, we used the total number of LOC of the files ranked above the correct solution file as an indicator of effort. However, if the solution is ranked at the top, this total is zero, which may differ from the actual effort. Moreover, this indicator may also misestimate effort because developers do not necessarily read all lines of the listed files. \section{Conclusion}\label{s:conclusion} In this study, we proposed an approach to convert IRBL techniques to the method level using repository transformation. We evaluated the techniques at both the file and method levels using a framework that combines the converted techniques with datasets constructed at the method level. This approach can convert existing IRBL techniques into method-level techniques with minor modifications. The evaluation results showed that the converted method-level techniques decreased the accuracy but reduced the debugging effort. However, as Amasaki \textit{et al.}\xspace~\cite{Amasaki2020} stated, there is potential for further performance improvement by using file-level information to compensate for the lack of detail in method-level bug localization. In future research on method-level IRBL techniques, obtaining performance improvements will be easier when our framework is used to adjust parameters and select additional information. The future research directions of this study are as follows: \begin{itemize} \item \emph{Increasing the number of evaluated techniques and datasets.} We believe the evaluation framework can be extended to existing method-level techniques and other datasets to gain more knowledge. \item \emph{Tuning the parameters of method-level techniques.} The method-level techniques in this paper were run with parameters tuned for the file level. Further performance improvements could be achieved by identifying optimal settings for method-level bug localization. \item \emph{Comparing both levels of techniques under more uniform conditions.} When it is necessary to find all methods in a fixed file, the number of solutions that method-level techniques need to recommend is higher than at the file level, contributing to a significant reduction in accuracy. Bug reports with many solution methods should be excluded and the techniques compared under appropriate conditions. \item \emph{Considering code elements outside of methods.} Incorporating information in Java source files, such as fields and imports, which was excluded in this study, may provide different insights. \end{itemize} The FinerBench4BL experimental results of this study are publicly available~\cite{zenodo}. \section*{Acknowledgments} This study was partially supported by JSPS KAKENHI (JP21H04877, JP21K18302, JP21KK0179, 21K11833, and JP22H03567). \IEEEtriggeratref{10}
{ "arxiv_id": "2302.14325", "language": "en", "timestamp": "2023-03-01T02:09:37", "url": "https://arxiv.org/abs/2302.14325", "yymm": "2302" }
\section{Introduction} \label{sec:intro} Place recognition plays an important role in both the map construction and localization phases of long-term Simultaneous Localization and Mapping (SLAM) systems \cite{cadena2016past}. In the map construction phase, it can provide loop closure constraints to eliminate the accumulated drift of the odometry. In the localization phase, it can re-localize the system when pose tracking is lost and improve the robustness of the system. In recent years, many image-based place recognition methods \cite{2012Bags,arandjelovic2016netvlad,2017ORB} have been developed and have achieved satisfactory performance. However, these methods are vulnerable to illumination changes and view variation due to the imaging mechanism of camera sensors. In contrast, point clouds from LiDAR sensors are robust to illumination changes due to active sensing. In addition, the availability of precise depth information can enable more accurate place recognition \cite{angelina2018pointnetvlad,liu2019lpd}. LiDAR-based place recognition can be regarded as a retrieval problem, that is, finding the most similar frame to a query from a pre-built database. The key to solving this problem is to generate a global feature that can model the similarity between point clouds. PointNetVLAD \cite{angelina2018pointnetvlad} provides the first deep-learning solution to the problem of large-scale LiDAR-based place recognition. It uses PointNet \cite{qi2017pointnet} to extract local features from unordered points and NetVLAD \cite{arandjelovic2016netvlad} to generate global features. Many subsequent methods follow PointNetVLAD and introduce auxiliary modules such as attention mechanisms \cite{zhang2019pcan}, handcrafted features \cite{liu2019lpd}, and sparse convolutions \cite{mickloc3d}. Recently, some methods \cite{chen2020overlapnet,OverlapTransformer} based on range images have been developed. A range image is the spherical projection of a point cloud. Due to the projection mechanism, the translation of the range image is equivariant to the rotation of the point cloud. Based on this, OverlapTransformer \cite{OverlapTransformer} uses a convolutional network and a transformer to extract rotation-invariant features from the images. Some methods \cite{kim2018scan,kim20191,chen2020overlapnet} use similar projections and also achieve place recognition robust to view changes. Although the aforementioned methods have made great progress, they still have limitations in terms of generalization ability. This is because both unordered points and range images are sensitive to the motion of the LiDAR sensor. Specifically, for unordered points, the point coordinates and the relative positions between points change severely as the LiDAR sensor moves. For range images, the image content suffers various distortions under translations of the point cloud, although it is robust to rotations. Current methods \cite{angelina2018pointnetvlad,zhang2019pcan,liu2019lpd} force the network to learn these variations through data augmentation. However, as pointed out in \cite{tipooling}, data augmentation requires the network to be flexible enough to capture all the variations, which may result in a large risk of overfitting and poor generalization ability. In this work, we explore the potential of place recognition using bird's eye view (BEV) images. A BEV image is generated by projecting a point cloud onto the ground plane.
In road scenes, the transformations of point clouds are approximately equivariant to the rotations and translations of BEV images \cite{bvmatch2021}. Thus, BEV images are more robust to sensor motions. As shown in Fig. \ref{fig: represention}, the translation of a point cloud causes little appearance changes in the BEV image but introduces geometry distortions to the range image. Since the feature distance measures the structural similarity between point clouds, the BEV image may be a better representation for place recognition. To achieve robustness to viewpoint changes, we design a group convolution \cite{groupconv} network to extract local features from BEV images. Then, we use NetVLAD \cite{arandjelovic2016netvlad} for global rotation-invariant features extraction. Benefiting from the design of rotation invariance for BEV images, our method has the strong ability of place retrieval in the cases of both viewpoint variations and scene changes. In addition, we observe that the distances of the BEV features correlate well with the geometry distances of point clouds. According to this correlation, we map the feature distance to the geometry distance and then estimate the position of the query cloud, which extends the usage of LiDAR-based place recognition. We summarize the contributions of this paper as follows: \begin{itemize} \item We propose a novel LiDAR-based place recognition method called BEVPlace. In the method, we extract rotation-equivariant local features from BEV images based on group convolution, which facilitates the design of rotation invariant global features. \item We explore the statistical correlation between the feature distance and the geometry distance of point cloud pairs. Based on this correlation prior, we compute the geometry distance between the query point cloud and the matched point cloud and use it for position estimation. \item We evaluate our method on three large-scale public datasets, showing that our method is robust to view changes, has strong generalization ability, and achieves state-of-the-art performance in terms of recall rate. \end{itemize} \section{Related Work} In this section, we briefly review the recent developments in the field of LiDAR-based place recognition. For a more comprehensive overview, the readers may refer to \cite{survey}. According to the representations used for feature extraction, we classify the current Lidar-based place recognition methods into two categories, \emph{i.e.,} the methods that utilize 3D points and the methods that use projection images as intermediate representations. \textbf{Place recognition based on 3D points.} PointNetVLAD \cite{angelina2018pointnetvlad} leverages PointNet \cite{qi2017pointnet} to project each point into a higher dimension feature, and then uses NetVLAD \cite{arandjelovic2016netvlad} to generate global features. To take advantage of more contextual information, {PCAN \cite{zhang2019pcan} introduces the point contextual attention network that learns attentions to the task-relevant features.} Both PointNetVLAD and PCAN cannot capture local geometric structures due to the independent treatment for each point. Thus, the following methods focus on extracting more discriminative local features considering the neighborhood information. LPD-Net \cite{liu2019lpd} adopts an adaptive local feature module to extract the handcrafted features and uses a graph-based neighborhood aggregation module to discover the spatial distribution of local features. 
EPC-Net \cite{epcnet} improves LPD-Net by using a proxy point convolutional neural network. DH3D \cite{du2020dh3d} designs a 3D local feature encoder to learn more distinct local descriptors, and SOE-Net \cite{soe} introduces a point orientation encoding (PointOE) module. MinkLoc3D \cite{mickloc3d,mickloc3dv2} uses sparse 3D convolutions in local areas and achieves state-of-the-art performance on the benchmark dataset. Recently, some works including SVT-Net \cite{svtnet}, TransLoc3D \cite{transloc3d}, NDT-Transformer \cite{ndtformer}, and PPT-Net \cite{pptnet} leverage the transformer-based attention mechanism \cite{attention} to boost place recognition performance. However, it was shown that MinkLoc3D outperforms these transformer-based methods while using fewer parameters. \textbf{Place recognition based on projection images.} Steder \emph{et al.} \cite{2011Place} extract handcrafted local features from range images of point clouds and perform place recognition by local feature matching. Kim \emph{et al.} \cite{kim2018scan} project the point cloud into a bearing-angle image and propose the scan context descriptor. They further introduce the concept of the scan context image (SCI) \cite{kim20191} and achieve place recognition by classifying the SCIs using a convolutional network. OverlapNet \cite{chen2020overlapnet} uses the overlap of range images to determine whether two point clouds are at the same place and uses a Siamese network to estimate the overlap. OverlapTransformer \cite{OverlapTransformer} further uses a transformer architecture to learn rotation-invariant global features. Unlike the aforementioned methods, whose image representations are built under polar or polar-like projections, BVMatch \cite{bvmatch2021} projects point clouds into BEV images and extracts handcrafted BVFT features from the images. It then uses the bag-of-words model \cite{2012Bags} to generate global features. However, BVMatch has been shown not to generalize well to unseen environments \cite{bvmatch2021}. Different from BVMatch, we extract rotation-equivariant local features using group convolution \cite{groupconv} and generate global features by NetVLAD \cite{arandjelovic2016netvlad}. Thanks to the network design, our method can generalize to different scenes while maintaining high recall rates. \begin{figure*}[htp] \begin{center} \includegraphics [width=5.7in]{FIG2_pipeline.pdf} \caption{Two modules of our method. In the BEVPlace network, we project point clouds into BEV images and extract rotation-invariant global features. In the position estimator module, we recover geometry distances from the feature space and estimate the positions of query point clouds.} \label{fig: method} \vspace{-6mm} \end{center} \end{figure*} \section{Preliminaries} Let $\textbf{m}_i$ be the point cloud collected by a sensor at the pose $\mathbf{T}_i=(\mathbf{R}_i,\mathbf{t}_i)$, where $\mathbf{R}_i$ is the rotation matrix and $\mathbf{t}_i$ is the position. The database formed by $n$ point clouds and their associated poses can be denoted as $\mathcal{M}=\{(\mathbf{m}_i,\mathbf{T}_i)\}_{i=1,2,...,n}$. Given a query point cloud $\textbf{m}_q$, place recognition aims at finding its most structurally similar point cloud from the pre-built database $\mathcal{M}$. In the problem of LiDAR-based place recognition, two point clouds are usually regarded as structurally similar if they are collected at geometrically close places.
Towards this goal, we design a network $f(\cdot)$ to map the point cloud to a distinct compact global feature vector such that $\Vert f(\mathbf{m}_q)-f(\mathbf{m}_i)\Vert_2<\Vert f(\mathbf{m}_q)-f(\mathbf{m}_j)\Vert_2$ if $\mathbf{m}_q$ is structurally similar to $\mathbf{m}_i$ but dissimilar to $\mathbf{m}_j$. Based on the network $f$, we perform place retrieval by finding the point cloud with the minimum feature distance to the query point cloud. In this work, we train our network based on BEV images of point clouds. In addition to place retrieval, we develop an extended usage that estimates the positions of the query point clouds. \section{Method} Our method is formed by two modules as illustrated in Fig. \ref{fig: method}. In the BEVPlace network, we project the query point cloud into the BEV image. Then, we extract a rotation-invariant global feature through a group convolution network and NetVLAD \cite{arandjelovic2016netvlad}. In the position estimator, we retrieve the closest feature of the global feature from a pre-built database. We recover the geometry distance between the query and the matched point clouds based on a mapping model. The position of the query is estimated based on the recovered distances. \subsection{BEVPlace Network} In road scenes, a LiDAR sensor on a car or a robot can only move on the ground plane. Since we generate BEV images by projecting point clouds into the ground plane, the view change of the sensor will result in a rotation transformation on the image. To achieve robust place recognition, we aim at designing a network $f$ to extract rotation-invariant features from BEV images. Denoting the rotation transformation $\textbf{R}\in SO(2)$ on the BEV image $\textbf{I}$ as $\textbf{R}\circ \textbf{I}$, the rotation invariance of $f$ can be represented as \begin{equation} \label{eq: rotation_invriance} \begin{aligned} f(\textbf{R}\circ \textbf{I}) = f(\textbf{I}). \end{aligned} \end{equation} A straightforward approach to achieve such invariance is to train a network with data augmentation \cite{tipooling}. However, data augmentation usually requires that the network has a larger group of parameters to learn the rotations and may not generalize to the combination of rotations and scenes not occurring in the training set. In this work, we use the cascading of a group convolution network and NetVLAD to achieve rotation invariance. Our BEVPlace has strong generalization ability since the network is designed inherently invariant to rotations. \textbf{Bird's Eye View Image Generation.} We follow BVMatch \cite{bvmatch2021} and use the point density to construct images. We discretize the ground space into uniform grids with a grid size of 0.4 m. For a point cloud $\mathbf{m}$, we compute the number of points in each grid and use the normalized point density as the pixel intensity of the BEV image $\mathbf{I}$. \textbf{Group Convolution Network.} Group convolution treats the feature map as functions of the corresponding symmetry-group \cite{groupconv}. Considering the 2D rotation group $SO(2)$, applying group convolution $f_{gc}$ on a BEV image $\textbf{I}$ results in rotation-equivariant features, which can be written as \begin{equation} \label{eq: rotation_equivariance} \begin{aligned} f_{gc}(\textbf{R}\circ \textbf{I}) = \textbf{R}'\circ f_{gc}(\textbf{I}). 
\end{aligned} \end{equation} That is, transforming the input $\textbf{I}$ by a rotation transformation $\textbf{R}$ and then passing it through the mapping $f_{gc}$ should give the same result as first mapping $\textbf{I}$ through $f_{gc}$ and then transforming the feature with $\textbf{R}'\in SO(2)$. Usually, $f_{gc}$ is designed such that $\textbf{R}'=\textbf{R}$. Group convolution has been studied extensively in recent years, and several mature designs exist \cite{groupconv, gift, e2cnn}. We implemented our network based on GIFT \cite{gift}. GIFT was originally designed for image matching and can produce distinct local features. Our main modification to GIFT is to remove the scale features since there is no scale difference between BEV images. More details of our network implementation are provided in the supplementary materials. \textbf{Rotation invariant global features.} According to Eq. \ref{eq: rotation_equivariance}, the contents of the group convolution feature map remain the same for rotated images and are only transformed by a rotation $\mathbf{R}'$. Thus, we can use a global pooling operation to extract rotation-invariant global features. To capture more information about the statistics of local features, we use NetVLAD \cite{arandjelovic2016netvlad} for feature aggregation. We achieve rotation invariance by cascading the group convolution network and NetVLAD, that is, \begin{equation} \label{eq: rotation_invariance} \begin{aligned} \mathrm{NetVLAD}\left(f_{gc}(\textbf{R}\circ \textbf{I})\right) &= \mathrm{NetVLAD}\left(\textbf{R}'\circ f_{gc}(\textbf{I})\right) \\&= \mathrm{NetVLAD}\left(f_{gc}(\textbf{I})\right). \end{aligned} \end{equation} \textbf{Loss function.} Several loss functions \cite{angelina2018pointnetvlad,soe} have been proposed for the LiDAR-based place recognition problem. In this work, we train our network with the simple and commonly used lazy triplet loss \cite{angelina2018pointnetvlad}, formulated as \begin{equation} \label{eq: loss} \begin{aligned} \mathcal{L} = \max_j([m+\delta_\text{pos}-\delta_{\text{neg}j}]_+), \end{aligned} \end{equation} where $[\,\cdot\,]_+$ denotes the hinge loss, $m$ is the margin, $\delta_\text{pos}$ is the feature distance between an anchor point cloud $\mathbf{m}_a$ and its structurally similar (``positive'') point cloud, and $\delta_{\text{neg}j}$ is the feature distance between $\mathbf{m}_a$ and its $j$-th structurally dissimilar (``negative'') point cloud. We follow the training strategy in \cite{angelina2018pointnetvlad,liu2019lpd,soe} and regard two point clouds as structurally similar if their geometry distance is less than $\epsilon$ meters. \subsection{Position Estimator} The lazy triplet loss forces the network to learn a mapping that preserves the adjacency of point clouds in the geometry space. Although there is no explicit mapping function that reveals the relationship between the feature space and the geometry space, we observe that the distance between global features and the geometry distance between point clouds are inherently correlated. Based on this property, we recover the geometry distance between the query and the match and then use it for position estimation. \textbf{Statistical correlation between the feature and geometry distances}. To reveal the relationship between the feature space and the geometry space, we train our method on the sequence ``\textit{00}'' of the KITTI dataset \cite{kitti}. We then plot the feature distances and the geometry distances of all point cloud pairs in different sequences of the dataset. As shown in Fig.
\ref{fig: bev_distance}, for all the sequences, the feature distance increases approximately monotonically with the geometry distance and saturates when the point clouds are far away from each other. This phenomenon is intuitive since two point clouds are more similar if they are geometrically closer, and consequently their feature distance is smaller. It can be seen that the mean curve and the standard deviation differ across sequences since the sequences are collected in diverse scenes. Despite this, the mean curves have similar shapes and can be depicted using a function based on the generalized Gaussian kernel \cite{ggd}, that is, \begin{equation} \label{eq: ggd} \begin{aligned} \Vert f(\textbf{m}_i)-f(\textbf{m}_j)\Vert_2 = \alpha\left(1-\exp\left(-\frac{\Vert\textbf{t}_i-\textbf{t}_j\Vert_2^\gamma}{\beta}\right)\right), \end{aligned} \end{equation} where $\alpha$ is the maximum feature distance, and $\gamma$ and $\beta$ control the curve shape. \begin{figure}[htbp] \begin{center} \includegraphics [width=3.1in]{FIG3_distance_mapping.pdf} \caption{Relationship between the geometry distance and the feature distance of point clouds in different sequences of the KITTI dataset.} \label{fig: bev_distance} \end{center} \end{figure} \textbf{Mapping Model}. The mapping function above inspires us to recover the geometry distance from the feature distance, and to further estimate the positions of query point clouds. However, this mapping relationship may differ slightly in local areas due to appearance changes of the point clouds. For more accurate geometry distance recovery, we build a mapping function for each point cloud $\mathbf{m}_i$ in the database $\mathcal{M}$. Specifically, we compute its feature and geometry distances to all the other point clouds in $\mathcal{M}$. We then fit the curve with Eq. \ref{eq: ggd} and compute the parameters $\alpha_i$, $\beta_i$, and $\gamma_i$. After this, we can recover the geometry distance of a query point cloud $\mathbf{m}_q$ to $\mathbf{m}_i$ according to Eq. \ref{eq: ggd}, that is \begin{equation} \label{eq: distance_mapping} \begin{aligned} \Vert\mathbf{t}_q-\mathbf{t}_i\Vert_2 =\left(-\beta_i\log(1-\frac{\Vert f(\textbf{m}_q)-f(\textbf{m}_i)\Vert_2}{\alpha_i})\right)^{\frac{1}{\gamma_i}}. \end{aligned} \end{equation} \textbf{Position Recovery}. Since the positions of the point clouds in the database are given, we can compute the position of the query point cloud $\textbf{m}_q$ if we know its geometry distances to at least three reference point clouds. To this end, we first follow the place recognition procedure and find the point cloud $\textbf{m}_r$ most similar to $\textbf{m}_q$. We choose as reference point clouds $\textbf{m}_r$ and the point clouds that are less than $\epsilon$ meters away from $\textbf{m}_r$. Denoting $\Omega = \{k\big|\Vert \mathbf{t}_r-\mathbf{t}_k\Vert_2<\epsilon\}$ and $d_k$ as the recovered geometry distance between $\textbf{m}_q$ and $\textbf{m}_k$, the position of $\textbf{m}_q$ can be easily computed by solving the following minimization problem, \begin{equation} \label{eq: ggd_model} \begin{aligned} \mathbf{t}_q = \arg\min_{\mathbf{t}}\sum_{k\in\Omega}\left(\Vert\mathbf{t}-\mathbf{t}_k\Vert_2-d_k\right)^2.\\ \end{aligned} \end{equation} \textbf{Discussion}. In fact, the monotonicity of the mapping from feature distance to geometry distance also holds for other methods. Fig.
\ref{fig: overlap_distances} plots the relationship between the feature and geometry spaces for two state-of-the-art methods, MinkLoc3D-V2 \cite{mickloc3d} and OverlapTransformer \cite{OverlapTransformer}, on sequences ``\textit{00}'' and ``\textit{06}'' of KITTI. Although the mappings of the methods have quite different shapes, they can all be approximately depicted by Eq. \ref{eq: ggd} with specific parameters, and thus positions can also be estimated based on the mapping model. In the experiments, we compare their position estimation accuracy with that of our method. \begin{figure}[!htp] \begin{center} \includegraphics [width=3.1in]{FIG4_distance_mapping2.pdf} \caption{Relationship between the geometry distance and the feature distance of point clouds in sequences ``\textit{00}'' and ``\textit{06}'' of KITTI for two methods.} \label{fig: overlap_distances} \end{center} \end{figure} \section{Experiments} We compare our method with state-of-the-art place recognition methods including PointNetVLAD \cite{angelina2018pointnetvlad}, LPD-Net \cite{liu2019lpd}, SOE-Net \cite{soe}, MinkLoc3D-V2 \cite{mickloc3dv2}, and OverlapTransformer \cite{OverlapTransformer}. All these methods are deep learning methods, and their open-source code is used for evaluation. For our method, we set the triplet margin to $m=0.3$ and the number of NetVLAD clusters to 64. In the training stage, we choose 1 positive point cloud and 10 negative point clouds when computing the loss. We test the methods in terms of place retrieval with the metrics of recall at Top-1 and recall at Top-1\%. For a more comprehensive evaluation, we also compare the loop closure detection performance using the Precision-Recall (PR) curve. In addition, we test the position estimation accuracy using the absolute translation error (ATE). \subsection{Datasets} We conduct experiments on three large-scale public datasets, i.e., the KITTI dataset \cite{kitti}, the ALITA dataset \cite{alita}, and the benchmark dataset \cite{angelina2018pointnetvlad}. \textbf{KITTI dataset} contains a large amount of point cloud data collected by a Velodyne 64-beam LiDAR under low viewpoint variation. We select the sequences ``\textit{00}'', ``\textit{02}'', ``\textit{05}'', and ``\textit{06}'' from the Odometry subset for evaluation since these sequences contain large revisited areas. We split the point clouds of each sequence into database frames and query frames for place retrieval. The partition of each sequence is summarized in Table \ref{tab: kitti_partition}. For our method, we crop each point cloud with a [$-20 $ m, $ 20 $ m] cubic window and downsample it to 4096 points. We then generate BEV images from the downsampled point clouds. For PointNetVLAD, LPD-Net, and MinkLoc3D-V2, we normalize the point values to fit their input. For OverlapTransformer, we use the full point clouds since its performance is sensitive to the point density. \begin{table}[ht]\footnotesize \centering \caption{Dataset Partition of the KITTI dataset.} \vspace{-3mm} \begin{tabular}{lccccc} \toprule Sequence & 00 & 02 & 05 & 06 \\ \midrule Database & 0-3000 & 0-3400 & 0-1000 & 0-600 \\ Query & 3200-4650 & 3600-4661 & 1200-2751 & 800-1100 \\ \bottomrule \end{tabular} \label{tab: kitti_partition} \end{table} \textbf{ALITA dataset} is a dataset for long-term place recognition in large-scale environments. It contains point cloud data of campus and city scenes under different illuminations and viewpoints.
In this work, we use its subset released in the General Place Recognition Competition\footnote{https://www.aicrowd.com/challenges/icra2022-general-place-recognition-city-scale-ugv-localization}. We evaluate the generalization ability of the methods on its validation set and its test set. Note that the evaluation result on the test set is automatically calculated by the server once we upload the global features to the website. The point clouds of the dataset have been cropped with a [$-20 $ m, $ 20 $ m] cubic window and downsampled to 4096 points. Similar to the processing of the KITTI dataset, we generate BEV images from the downsampled point clouds for our method. We normalize the points to fit the input of PointNetVLAD, LPD-Net, and MinkLoc3D-V2. We do not evaluate OverlapTransformer on this dataset as it cannot adapt to such sparse point clouds. \textbf{Benchmark dataset} is widely used by recent place recognition methods based on unordered points. It consists of four scenarios: an outdoor dataset, Oxford RobotCar, and three in-house datasets of a university sector (U.S.), a residential area (R.A.), and a business district (B.D.). It provides normalized point clouds of 4096 points, which can be directly used by PointNetVLAD, LPD-Net, and MinkLoc3D-V2. For our method, we multiply the point values by 20 and then project the point cloud into the BEV image for feature extraction. Note that the recovered point cloud is not of the actual scale since we do not know the exact coordinate range of the point clouds. In spite of this, our method can adapt to such scale variation thanks to the convolutional network design. For the KITTI dataset and the ALITA dataset, we regard a retrieval as a true positive if the geometry distance between the query and the match is less than $\epsilon=5$ meters. For the benchmark dataset, we set $\epsilon=25$ meters following the configurations in \cite{angelina2018pointnetvlad,liu2019lpd,soe,mickloc3dv2}. \begin{figure*}[htp] \begin{center} \includegraphics [width=6.5in]{FIG5_pr_curve.pdf} \caption{Precision-recall curves for the sequences of the KITTI dataset.} \label{fig: pr_curve} \vspace{-1mm} \end{center} \end{figure*} \subsection{Place Recognition} We train the methods with the point clouds in the database of sequence ``\textit{00}'' of the KITTI dataset. In the training stage, we perform data augmentation with random rotations of the point clouds. To ablate the potential of place recognition using BEV images and of our rotation invariance design, we also train two networks, VGG16-Net \cite{vgg} and VGG+NetVLAD \cite{arandjelovic2016netvlad}, on BEV images. \textbf{Performance on the KITTI dataset.} Table \ref{tab: top1_kitti} shows the recall rate at Top-1 on the sequences of KITTI. It can be seen that, without any dedicated design, VGG-Net and VGG+NetVLAD achieve performance comparable to the state-of-the-art methods. On the other hand, our BEVPlace outperforms the compared methods by large margins due to our rotation invariance design.
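For reference, the sketch below shows one way the Recall@1 and Recall@1\% values reported in the following tables can be computed from precomputed global features and ground-truth positions; it is a minimal illustration, and the function and array names are hypothetical rather than taken from the released evaluation code.

\begin{verbatim}
import numpy as np

def recall_at_top1_and_1pct(q_feat, db_feat, q_pos, db_pos, eps=5.0):
    """Recall@1 and Recall@1% for a retrieval run.
    q_feat, db_feat: (Nq, D) and (Nd, D) global descriptors.
    q_pos, db_pos:   (Nq, 2) and (Nd, 2) ground-truth positions in meters.
    A match is correct if it lies within eps meters of the query."""
    feat_d = np.linalg.norm(q_feat[:, None, :] - db_feat[None, :, :], axis=-1)
    order = np.argsort(feat_d, axis=1)        # database indices, closest first
    geo_d = np.linalg.norm(q_pos[:, None, :] - db_pos[None, :, :], axis=-1)
    top_n = max(1, round(0.01 * db_feat.shape[0]))
    correct = geo_d[np.arange(len(q_feat))[:, None], order] < eps
    return correct[:, 0].mean(), correct[:, :top_n].any(axis=1).mean()
\end{verbatim}

With $\epsilon$ set to 25 meters instead of 5, the same routine matches the evaluation protocol used for the benchmark dataset.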
\begin{table}[htp]\small \centering \caption{Recall at Top-1 on the KITTI dataset.} \vspace{-3mm} \begin{tabular}{lcccccccc} \toprule Sequence & 00 & 02 & 05 & 06 \\ \midrule PointNetVLAD\cite{angelina2018pointnetvlad} &91.6&62.3&76.9&77.8\\ LPD-Net \cite{liu2019lpd} &95.7&72.3&83.6&82.2\\ SOE-Net \cite{soe} &95.0&65.5&84.8&69.6\\ MinkLoc3D-V2 \cite{mickloc3d} &95.9&72.3&86.4&80.4\\ OverlapTransformer\cite{OverlapTransformer} &96.7&80.1&91.9&95.6\\ \midrule VGG \cite{vgg} &91.3&82.9&91.0&96.7\\ VGG+NetVLAD \cite{arandjelovic2016netvlad} &92.4&81.9&85.9&95.6\\ BEVPlace (ours) &\textbf{99.7} &\textbf{98.1} &\textbf{99.3} &\textbf{100.0} \\ \bottomrule \end{tabular} \label{tab: top1_kitti} \end{table} \textbf{Robustness to view changes.} In the testing stage, we randomly rotate the point clouds of KITTI to simulate view changes. As shown in Table \ref{tab: top1_kitti_rot}, our method shows much higher recall rates than VGG and VGG+NetVLAD. This validates the significance of our rotation invariance design. On the other hand, OverlapTransformer also performs well since it is inherently rotation invariant. The recall rates of all the other methods drop considerably despite training with data augmentation. \begin{table}[!htp]\small \centering \caption{Recall at Top-1 on the rotated KITTI dataset.} \vspace{-3mm} \begin{tabular}{lcccccccc} \toprule Sequence & 00 & 02 & 05 & 06 \\ \midrule PointNetVLAD\cite{angelina2018pointnetvlad} &86.1&41.0&69.7&51.5\\ LPD-Net \cite{liu2019lpd} &89.6&61.9&72.2&48.9\\ SOE-Net \cite{soe} &93.1&63.5&82.8&65.5\\ MinkLoc3D-V2 \cite{mickloc3d} &89.4&48.7&83.0&48.1\\ OverlapTransformer\cite{OverlapTransformer} &96.7&80.1&91.9&95.6\\ \midrule VGG \cite{vgg} &75.1&52.9&72.9&68.5\\ VGG+NetVLAD \cite{arandjelovic2016netvlad} &85.5&58.7&80.1&76.3\\ BEVPlace (ours) &\textbf{99.6} &\textbf{93.5} &\textbf{98.9} &\textbf{100.0} \\ \bottomrule \end{tabular} \label{tab: top1_kitti_rot} \end{table} \textbf{Loop closure detection.} Loop closure detection is an important application of place recognition. For a query point cloud, we accept its Top-1 match as positive if the feature distance is less than a threshold. By setting different thresholds, we compute the precision-recall curves and plot them in Fig. \ref{fig: pr_curve}. It can be seen that our method outperforms the compared methods. It is worth noting that although our method is trained on only a part of the point clouds of sequence ``\textit{00}'', it generalizes much better to the other sequences than the other methods. We believe our method can be deployed in LiDAR SLAM systems \cite{loam} to help build globally consistent maps. \textbf{Generalization performance on ALITA.} We test the place recognition performance of the methods on the ALITA dataset based on the model trained on the KITTI dataset. Table \ref{tab: recall_alita} shows the recall rates on the validation set and the test set. It can be seen that our method generalizes well to ALITA. On the other hand, the recall rates of the compared methods degrade considerably.
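As a companion to the loop-closure protocol described above (accepting the Top-1 match when its feature distance falls below a threshold), the following sketch sweeps the threshold to trace a precision-recall curve. It is only an illustration: the false-negative convention (a query with a true loop that is not correctly detected) is one common choice and may differ from the exact evaluation script.

\begin{verbatim}
import numpy as np

def pr_curve(top1_feat_dist, top1_correct, has_true_loop, thresholds):
    """Precision-recall pairs for loop-closure detection.
    top1_feat_dist: (Nq,) feature distance to the Top-1 match.
    top1_correct:   (Nq,) bool, Top-1 match lies within eps meters.
    has_true_loop:  (Nq,) bool, a revisited place exists in the database."""
    curve = []
    for t in thresholds:
        accepted = top1_feat_dist < t
        tp = np.sum(accepted & top1_correct)
        fp = np.sum(accepted & ~top1_correct)
        fn = np.sum(has_true_loop) - tp     # true loops not correctly detected
        precision = tp / max(tp + fp, 1)
        recall = tp / max(tp + fn, 1)
        curve.append((precision, recall))
    return curve
\end{verbatim}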
\begin{table}[!htp]\small \centering \caption{Recall rates on the ALITA dataset.} \vspace{-3mm} \begin{tabular}{lcccccccc} \toprule & \multicolumn{2}{c}{\dlmu[1.8cm]{Val Set}} & \multicolumn{2}{c}{\dlmu[1.8cm]{Test set}} \\ & @1 & @1\% & @1 & @1\% \\ \midrule PointNetVLAD\cite{angelina2018pointnetvlad} &42.3&55.4&39.8&-\\ LPD-Net \cite{liu2019lpd} &51.2&72.7&49.6&-\\ SOE-Net \cite{soe} &66.6&92.8&59.5&-\\ MinkLoc3D-V2 \cite{mickloc3d} &55.6&82.8&55.3&-\\ \midrule VGG \cite{vgg} &37.1&54.1&36.5&-\\ VGG+NetVLAD \cite{arandjelovic2016netvlad} &41.0&70.1&36.8&-\\ BEVPlace (ours) &\textbf{96.7} &\textbf{99.2} &\textbf{91.7} &- \\ \bottomrule \end{tabular} \label{tab: recall_alita} \end{table} \begin{table*}[ht]\small \centering \vspace{-3mm} \caption{Recall rates on the benchmark dataset.} \vspace{-3mm} \renewcommand\tabcolsep{2pt} \begin{tabular}{lllllllll|ll} \toprule & \multicolumn{2}{c}{\dlmu[2.5cm]{Oxford}} & \multicolumn{2}{c}{\dlmu[2.5cm]{U.S.}} & \multicolumn{2}{c}{\dlmu[2.5cm]{R.A.}} & \multicolumn{2}{c}{\dlmu[2.5cm]{B.D}} & \multicolumn{2}{|c}{Mean} \\ & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% \\ \midrule PointNetVLAD \cite{angelina2018pointnetvlad} & 62.8 & 80.3 & 63.2 & 72.6 & 56.1 & 60.3 & 57.2 & 65.3 & 59.8 & 69.6 \\ LPD-Net \cite{liu2019lpd} & 86.3 & 94.9 & 87.0 & 96.0 & 83.1 & 90.5 & 82.5 & 89.1 & 84.7 & 92.6 \\ NDT-Transformer \cite{ndtformer} & 93.8 & 97.7 & - & - & - & - & - & - & - & - \\ PPT-Net \cite{pptnet} & 93.5 & 98.1 & 90.1 & 97.5 & 84.1 & 93.3 & 84.6 & 90.0 & 88.1 & 94.7 \\ SVT-Net \cite{svtnet} & 93.7 & 97.8 & 90.1 & 96.5 & 84.3 & 92.7 & 85.5 & 90.7 & 88.4 & 94.4 \\ TransLoc3D \cite{transloc3d} & 95.0 & 98.5 & - & - & - & - & - & - & - & - \\ MinkLoc3Dv2 \cite{mickloc3dv2} & 96.3 & 98.9 & 90.9 & 96.7 & 86.5 & 93.8 & 86.3 & 91.2 & 90.0 & 95.1 \\ BEVPlace (ours) & \textbf{96.5} \textcolor[rgb]{0,0.5,0}{$\uparrow$0.2} & \textbf{99.0} \textcolor[rgb]{0,0.5,0}{$\uparrow$0.1} & \textbf{98.2} \textcolor[rgb]{0,0.5,0}{$\uparrow$7.3} & \textbf{100.0} \textcolor[rgb]{0,0.5,0}{$\uparrow$3.3} & \textbf{97.2} \textcolor[rgb]{0,0.5,0}{$\uparrow$10.7} & \textbf{100.0} \textcolor[rgb]{0,0.5,0}{$\uparrow$6.2} & \textbf{95.3} \textcolor[rgb]{0,0.5,0}{$\uparrow$9.0} & \textbf{99.5} \textcolor[rgb]{0,0.5,0}{$\uparrow$7.7} & \textbf{96.9} \textcolor[rgb]{0,0.5,0}{$\uparrow$6.9} & \textbf{99.6} \textcolor[rgb]{0,0.5,0}{$\uparrow$4.5} \\ \bottomrule % \end{tabular} \label{tab: recall_oxford} \end{table*} \textbf{Performance on benchmark datasets.} Following the previous works, we train our method using only the Oxford RobotCar training dataset and test the method on the test set. The details of the dataset partition can be found in \cite{angelina2018pointnetvlad}. For a more comprehensive comparison, we also compare our method with the state-of-the-art transformer-based methods, including NDT-Transformer \cite{ndtformer}, PPT-Net \cite{pptnet}, SVT-Net \cite{svtnet}, and TransLoc3D \cite{transloc3d}. For all the compared methods, we directly use the results from their papers. Table \ref{tab: recall_oxford} shows that our method exceeds the state-of-the-art method minklock3D-v2 on the oxford dataset with only 0.2\% in terms of recall@1. However, on other datasets, our method shows strong generalization ability and outperforms other methods including the transformer-based ones with large margins. 
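Before turning to the position-estimation results, the sketch below summarizes the position estimator described earlier: the generalized Gaussian mapping between feature and geometry distances is fitted for each database cloud, inverted for a query, and the recovered distances are passed to a nonlinear least-squares solve. It is a simplified illustration with hypothetical function names and initial guesses, not the released implementation.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit, least_squares

def kernel(d, alpha, beta, gamma):
    # Feature distance as a function of geometry distance d
    # (generalized Gaussian kernel used in the mapping model).
    return alpha * (1.0 - np.exp(-d**gamma / beta))

def fit_mapping(geo_d, feat_d):
    # Fit (alpha, beta, gamma) for one database cloud from its geometry and
    # feature distances to all other database clouds; p0 is a rough guess.
    p0 = (feat_d.max(), 10.0, 1.0)
    (alpha, beta, gamma), _ = curve_fit(kernel, geo_d, feat_d, p0=p0,
                                        maxfev=10000)
    return alpha, beta, gamma

def recover_distance(feat_d, alpha, beta, gamma):
    # Invert the kernel; the clip keeps the logarithm argument valid.
    r = np.clip(feat_d / alpha, 1e-6, 1.0 - 1e-6)
    return (-beta * np.log(1.0 - r)) ** (1.0 / gamma)

def estimate_position(ref_pos, ref_dist, t0):
    # Nonlinear least squares on the recovered distances to the references.
    residual = lambda t: np.linalg.norm(ref_pos - t, axis=1) - ref_dist
    return least_squares(residual, t0).x
\end{verbatim}

In practice, the position of the best-matching database cloud can serve as the initial guess \texttt{t0}, since the references are all within a few meters of it.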
\begin{figure}[!h] \begin{center} \includegraphics [width=2in]{FIG7_fitting_error.pdf} \caption{Distance estimation error distribution.} \label{fig: fit_error} \vspace{-0mm} \end{center} \end{figure} \begin{figure*}[!ht] \begin{center} \includegraphics [width=6.3in]{FIG6_acc_error.pdf} \caption{Cumulative translation error distribution on the KITTI dataset with and without rotations.} \label{fig: acc_error} \end{center} \vspace{-2mm} \end{figure*} \subsection{Position Estimation} We first recover the geometry distances between the query and the matches and then estimate the global positions of the query point clouds. In the following, we evaluate the performance of these two stages. \textbf{Accuracy of the recovered distances}. We compute the errors of the recovered distances on sequence ``00'' of the KITTI dataset. Fig. \ref{fig: fit_error} shows the fitted distribution of the distance errors for each method. It can be seen that our method recovers the geometry distance more accurately. This leads to more accurate position estimation results since the estimation is based on the recovered distances. \textbf{Position estimation}. Fig. \ref{fig: acc_error} (a), (b), (c), and (d) show the cumulative distribution of the translation error on different sequences of the KITTI dataset. Our method and OverlapTransformer, both of which are based on projection images, achieve more accurate position estimation than the compared methods. To validate the performance under view changes, we randomly rotate the point clouds in the testing stage. Fig. \ref{fig: acc_error} (e), (f), (g), and (h) show that our method and OverlapTransformer perform well since they are designed to be rotation invariant. On the other hand, the other methods show poor robustness and their performance degrades considerably. \section{Conclusions} In this work, we explore the potential of LiDAR-based place recognition using BEV images. We design a rotation-invariant network called BEVPlace based on group convolution. Thanks to the use of BEV images and the rotation invariance design, our method achieves high recall rates, strong generalization ability, and robustness to viewpoint changes, as shown in the experiments. In addition, we observe that the geometry and feature distances are correlated, and we model this correlation for position estimation. This model can be adapted to other place recognition methods, but our BEVPlace gives more accurate estimation results. In future work, we will try to encode the rotation information into the global features and estimate the 6-DoF poses of point clouds. {\small \bibliographystyle{ieee_fullname}
{ "arxiv_id": "2302.14330", "language": "en", "timestamp": "2023-03-01T02:09:42", "url": "https://arxiv.org/abs/2302.14330", "yymm": "2302" }
\section{Introduction} \label{Sec:Introduction} Flow control is a central topic in fluid dynamics that is concerned with devising passive or active means of intervention with the flow structure and its underlying mechanisms in a manner that causes desirable changes in the overall flow behavior. Through flow control, it is possible, in principle, to enable favorable outcomes such as, for example, delay of laminar-to-turbulent transition and reduction of skin-friction drag in wall-bounded flows~\cite{Gad-el-Hak_2000}. These scenarios allow for substantial savings in fuel expenditure for air, sea, land vehicles, wind and water turbines, long-range gas and liquid pipelines, and other similar applications. Flow control by active means has been extensively investigated over the past few decades~\cite{Wehrmann_1965,Liepmann_1982,Joslin_1995,Grundmann_2008,Amitay_2016,Jansen_2018}. Passive techniques, on the other hand, are desirable because of their simplicity and low cost, i.e., no active control devices, wires, ducts, slots, etc., are needed and no electric power is required to drive the control process. Passive techniques widely explored in the literature include the use of riblets~\cite{Walsh_1978,Garcia_2011}, roughness~\cite{Cossu_2002,Fransson_2005}, or porous features~\cite{Abderrahaman_2017} on the surface exposed to the flow, or coating the surface with a compliant material~\cite{Kramer_1957,Benjamin_1960,Bushnell_1977,Gad-el-Hak_1984,Carpenter_1985,Lucy_1995,Davies_1997,Luhar_2015,Esteghamatian_2022}. An ideal intervention requires an understanding of the key characteristics of the flow dynamics and using this knowledge to tailor, with dynamical precision, a control stimulus that accounts for the underlying flow mechanisms. Recently, this endeavor has been shown to be possible with the use of phononic materials passively employed in the subsurface of a wall-bounded flow~\cite{Hussein_2015,Barnes_2021}. Flow transition may occur when external disturbances or inherent fluctuations develop and become significant within the flow field. These disturbances may be in the form of unstable waves that represent a small component of the total velocity field; an example of a widely studied type of disturbance in shear flows is a Tollmien-Schlichting (TS) wave~\cite{Tollmien_1928,Schlichting_1933}. In this context, flow disturbances, also known as perturbations or instabilities\footnote{We will use the terms \textit{perturbation} and \textit{instability}, interchangeably, when referring to the flow waves.}, appear at various frequencies, wavenumbers, phases, and orientations and depending on their character may grow in amplitude as they travel downstream. In 2015, the general concept of a \textit{phononic subsurface} (PSub)~\cite{Hussein_2015} was introduced as a means to provide a wave-synchronized intervention with flow instabilities to cause either stabilization or destabilization, as desired.~A PSub is installed in the subsurface region, and is nominally perpendicularly oriented and configured to extend all the way to expose its edge to the flow, forming an elastic fluid-structure interface.~The underlying mechanism that a PSub induces is passive and responsive\footnote{Responsive control implies that no phase locking is required. 
The PSub adaptively responds as desired regardless of the specific phase of the incoming instability wave when it arrives at the control region.} localized control of both the sign and rate of production of the perturbation kinetic energy within the flow field.~A strong (weak) PSub intervention for flow stabilization causes a strong (weak) negative rate of production, effectively shutting off the source of energy intake into the instability from the mean flow.~When a PSub is introduced for flow destabilization, the opposite effect takes place and the instability is forced to acquire energy from the mean flow at a higher rate.~These two scenarios are manifestations of a contiguous solid-fluid flow antiresonance or resonance phenomenon, respectively.~A PSub takes the form of a finite, relatively stiff elastic structure oriented in a manner that enables only small elastic motion perpendicular to the fluid-structure interface to be admitted and transferred into the flow.~Ensuring small vibrations at the surface allows the PSub to modulate mostly (to the extent allowed in practice) a single-velocity component of the instability field (the wall-normal component in the present study), as opposed to simultaneously influencing multiple components at once.~With these conditions in place, the PSub is engineered to exhibit specific frequency-dependent \textit{amplitude} and \textit{phase} response characteristics at the edge exposed to the flow.~These quantities represent the two core properties on which the PSubs design theory is based on.~Figure~\ref{Fig1} provides an illustration of a PSub in operation, showing clearly its ability to attenuate the instability field exactly within the region in the flow where the PSub is installed~\cite{Hussein_2015}. \begin{figure*} \includegraphics{Fig_01.eps}% \caption{Passive flow stabilization by a PSub.~Contours showing the streamwise-component of an instability velocity field when a PnC-based PSub is installed (front) versus an all-rigid-wall surface (back)~\cite{Hussein_2015}. Yellow color represents low-instability intensity, red color represents high-instability intensity. The reduced color intensity at the position where the PSub is placed is indicative of stabilization exactly at that location. } \label{Fig1} \end{figure*} A PSub may be composed of any form of phononic materials\footnote{From a configurational perspective, a PSub, in general, can take any form that achieves the mechanistic intervention with the flow production rate described above.~For example a standard homogeneous and uniform finite structure may be employed. However, a phononic structure provides significantly favorable dynamical properties and attributes because a phononic material has a frequency band structure~\cite{Hussein_2014,Jin_2021} and when rendered finite gives unique local and global resonance characteristics~\cite{wallis1957effect,Camley_1983,Davis_2011,Albabaa_2017,Bastawrous_2022,Albabaa_2022,Rosa_2022} that are not attainable by conventional materials and that are highly controllable by design.}.~The study of phononic materials, in general, is an area that has received tremendous attention in the literature over the past three decades~\cite{Hussein_2014,Jin_2021}. Figure~\ref{Fig2} displays a schematic of two types of PSubs. The model in Fig.~\ref{Fig2}a is based on a phononic-crystal (PnC) rod comprised of a repeated layering of acrylonitrile butadiene styrene (ABS) polymer and aluminum; this is the original configuration used in Ref.~\cite{Hussein_2015}. 
In contrast, the PSub configuration in Fig.~\ref{Fig2}b is formed from a locally-resonant elastic metamaterial (MM) which is here realized in the form of a homogeneous ABS polymer rod with a repeated inclusion of spring-mass units to serve as intrinsic local resonators.~Both structures comprise five unit cells in the schematic and throughout the paper. Phononic crystals draw their unique wave propagation properties from wave interference mechanisms, namely Bragg scattering~\cite{Kushwaha_1993}.~This typically requires the unit cells to be relatively long for intervention at a given frequency regime; e.g., the unit cell used in Ref.~\cite{Hussein_2015} is 40 cm long to enable control of a flow instability near 2~kHz.~A 10-unit-cell PSub, in that case, would be 4~m long extending into the subsurface in the wall-normal direction, prohibiting practical deployment.~The schematic shown in Fig.~\ref{Fig1} is of this particular PSub, passively stabilizing a TS wave~\cite{Hussein_2015}.~Elastic metamaterials, on the other hand, produce their unique wave propagation properties via resonance hybridization$-$an elastodynamic coupling mechanism that frees the unit cell from any length constraints~\cite{Liu_2000}.~An alternative PSub configuration for overcoming this length constraint is a coiled PnC~\cite{Barnes_2021,Willey_2022}. The reader may refer to books~\cite{Deymier_2013,Craster_2013,Phani_2017} and extensive reviews~\cite{Hussein_2014,Jin_2021} for in-depth description and analysis of phononic crystals and elastic metamaterials.~Another key PSub dimension is its length along the streamwise direction; this has to be designed to correlate with the wavelength of the instability waves (or range of wavelengths in the case of multiple instability waves). In this paper, the notion of a metamaterial-based PSub is introduced to mitigate the large unit-cell length limitation imposed on a phononic-crystal-based PSub.~Furthermore, we demonstrate the desirable precision of the PSub's impact on the flow field, showing that the alterations to the flow structure are spatially localized, as targeted, with no (or insignificant) undesirable behavior downstream to the position in the flow where a PSub is installed.~The type and intensity of control are also shown to take effect with design precision.~Finally, we provide a rigorous analysis of the effect of the PSub on the intrinsic flow dynamics, quantitatively demonstrating the mechanism of the rate of energy exchange between a flow instability and the mean flow as a result of the presence of a PSub.~We also examine the PSub's spatial influence on the flow vector field, and, conversely, the flow instability’s spatial influence on the elastodynamic energy field within the PSub itself. The unique ability to passively enact perfect wave synchronization across the PSub and flow domains is explicitly demonstrated. \begin{figure*} [h!] \includegraphics{Fig_02.eps}% \caption{Schematic of two types of PSubs: (a) a phononic-crystal-based PSub~\cite{Hussein_2015} and (b) a locally-resonant elastic metamaterial-based PSub. In this schematic, the PSub in (b) has a unit-cell length 40 times shorter than the PSub in (a). Each PSub is installed in the flow subsurface and extends all the way to allow for direct exposition to the flow. Flow instabilities, e.g., TS waves, will excite the PSub at the top edge (i.e., at the fluid-structure interface), and the PSub, in turn, will respond at or near its structural resonance and out of phase at the excitation point. 
This passive process will repeat and cause sustained attenuation of recurring and continuously incoming instability waves. Alternatively, the PSub could be designed to trigger destabilization instead of stabilization by producing an in-phase elastic response.} \label{Fig2} \end{figure*} \begin{figure*} [t!] \includegraphics[width=0.6\columnwidth]{Fig_03.eps}% \caption{Schematic illustration of key energy exchange mechanisms passively triggered by a PSub installed for channel flow control.~The flow total velocity field $\mathbf{u}$ is decomposed into a mean flow component $\bar{\mathbf{u}}$ and a perturbation (instability) component $\hat{\mathbf{u}}$. When designed for stabilization, the PSub causes energy transfer from the instability component to the mean flow component. When designed for destabilization, the opposite effect takes place.~The flow model details are given in Section~\ref{FlowModel}.} \label{Fig3} \end{figure*} \section{PSub Design Theory} \label{sec:PSubs} The general elements of the PSub design theory were outlined in Ref.~\cite{Hussein_2015}.~A PSub configuration consists of a finite phononic structure with its principal path of elastic wave propagation typically oriented orthogonal to the fluid-structure interface to enable ``pointwise'' spatial control as needed. A PSub is engineered offline to exhibit target frequency-dependent amplitude and phase response characteristics at the edge exposed to the flow, i.e., at the top end of each PSub shown in Fig.~\ref{Fig2}. This pair of response quantities at this location represents the two principal properties targeted by the PSub design theory. In all cases, the PSub edge should be designed to vibrate at, or close to, resonance at the frequency of the instability to be controlled. A high vibration amplitude allows for strong interaction with the flow. Still, the regime of operation is intentionally limited to small elastic vibrations, where the local fluid-structure interface remains practically flat, and large finite deformations of the solid surface are avoided.~As described above, this confines the control to exclusively, or predominantly, the vertical, i.e., wall-normal, component of the perturbation velocity field (see Section~\ref{sec:Res} for an analysis and further discussion on this aspect).~As for the phase, the PSub is designed to display a negative phase (out of phase) if the target is stabilization or a positive phase (in phase) if the target is destabilization.~Given the importance of both the vibration amplitude and phase, a performance metric $P$ was introduced and is defined as the frequency-dependent product of the two quantities. Negative and positive values of $P$ correspond to flow stabilization and destabilization, respectively. The absolute value $|P|$ indicates the strength of the stabilization or destabilization. For example, to impede the growth of a particular instability in order to delay the transition to turbulence, the PSub is designed to exhibit a strongly negative $P$ value at the frequency of the instability. For a range of instability frequencies, the PSub would need to display this property over that frequency range.~As for the spatial size or width of a PSub along the downstream direction, this is tuned according to the wavelength of the flow-wave instability to be controlled.~In Ref.~\cite{Hussein_2015}, the PSub length was set to be roughly one quarter of the wavelength of the unstable flow wave.
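To make the two design quantities concrete, the sketch below evaluates the frequency-dependent amplitude and phase at the driven end of a fixed-free spring-mass chain and forms the performance metric $P$ as their product. The lumped chain, the unit harmonic force, and the damping level are simplifying assumptions that stand in for the layered PnC or resonator-loaded rods considered here; the sketch only illustrates how $P$ could be evaluated for a candidate design.

\begin{verbatim}
import numpy as np

def chain_response(masses, springs, freqs, eta=0.01):
    """Receptance at the driven free end of a fixed-free spring-mass chain.
    masses:  (n,) masses in kg; masses[-1] faces the flow.
    springs: (n,) stiffnesses in N/m; springs[i] connects mass i to mass i-1
             (springs[0] connects mass 0 to the fixed base).
    eta:     light structural damping so resonances stay finite."""
    n = len(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += springs[i]
        if i + 1 < n:
            K[i, i] += springs[i + 1]
            K[i, i + 1] -= springs[i + 1]
            K[i + 1, i] -= springs[i + 1]
    M = np.diag(masses)
    force = np.zeros(n)
    force[-1] = 1.0                    # unit harmonic force at flow-facing end
    amp, phase = [], []
    for w in 2.0 * np.pi * np.asarray(freqs, dtype=float):
        D = K * (1.0 + 1j * eta) - w**2 * M   # dynamic stiffness matrix
        u = np.linalg.solve(D, force)
        amp.append(abs(u[-1]))
        phase.append(np.angle(u[-1]))
    amp, phase = np.array(amp), np.array(phase)
    return amp, phase, amp * phase     # performance metric P = amplitude x phase
\end{verbatim}

A frequency at which the phase is negative while the amplitude remains large yields a strongly negative $P$, i.e., a stabilizing operating point.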
From the flow's perspective, the phase of the elastic waves returning to the flow$-$after being passively processed by the PSub$-$will cause destructive or constructive interference with the vertical velocity component of the continuously incoming instability waves. This, in turn, will influence the work done by or on the instability field, causing either a diminishing or an enhancing effect on the transfer of energy from the mean flow into the instability, depending on whether the PSub is designed to stabilize or destabilize, respectively.~Figure~\ref{Fig3} provides a schematic illustration of this mechanism.~This effect on the exchange of energy with the mean flow is quantified by what is known as the production rate term, an averaged quantity involving the wall-normal and streamwise (vertical and horizontal, respectively, in Fig.~\ref{Fig2}) components of the instability field, which is derived from the Navier-Stokes equations governing the flow~\cite{Hussein_2015}. \begin{figure*} [b!] \includegraphics[width=1\columnwidth]{Fig_04.eps}% \caption{Dispersion diagram for (a) PnC unit cell and (b) locally resonant elastic MM unit cell with a close-up.~The dispersion curves for a corresponding homogeneous rod unit cell in each case are also provided.~Schematics of the unit-cell configurations are shown as insets.~The frequency and wavenumber are nondimensionalized by multiplication with corresponding unit-cell parameters: $a_{\rm PnC}$ and $a_{\rm MM}$ are the PnC and MM unit-cell length, respectively, and $c_{\rm ABS}$ is the long-wave longitudinal speed for the ABS material.} \label{Fig4} \end{figure*} \subsection{Stop-band truncation-resonance approach} \label{sec:PSubsStop} In Ref.~\cite{Hussein_2015}, a PnC was used to form the PSub structure. The frequency band structure of a unit cell for this PnC is shown in~Fig.~\ref{Fig4}a. The finite extent of the PnC represents a symmetry breaking, or truncation, of an otherwise idealized PnC with an infinite extent. The symmetry breaking has been taken to our advantage as it created a \textit{truncation resonance} inside a band gap~\cite{Camley_1983,Davis_2011,Albabaa_2017,Bastawrous_2022,Albabaa_2022,Rosa_2022}, the first Bragg band gap for the unit cell considered. Associated with the truncation resonance, there is a phase change from positive to negative as the frequency is increased, allowing us to yield a negative value of $P$ with a high absolute value at frequencies higher than the truncation resonance frequency. Furthermore, the negative $P$ properties extend over a relatively wide frequency range compared to what is produced by a standard structural resonance associated with, for example, a statically-equivalent homogeneous structure. The higher the value of $\left|P\right|$, the stronger the control, in the negative for stabilization or in the positive for destabilization, and the broader its frequency range, the more robust the control effect. The performance metric for a PnC-based PSub versus a PSub comprising a statically-equivalent homogeneous structure is shown and contrasted in Fig.~\ref{Fig5}a. \begin{figure*} [b!] \includegraphics[width=1\columnwidth]{Fig_05.eps}% \caption{Schematic illustration of two PSub design principles: PnC-based PSub versus MM-based PSub. To produce the desired performance metric properties at the frequency of the instability, in (a) a truncation resonance is utilized (PnC-based PSub), and in (b) a sub-hybridization resonance is utilized (MM-based PSub). 
The performance metric for the corresponding homogeneous structures is shown for comparison.} \label{Fig5} \end{figure*} \subsection{Pass-band lowered-resonance approach} \label{sec:PSubsPass} As mentioned above, a PnC-based PSub must be relatively long to accommodate low-frequency instabilities. To mitigate this limitation, we demonstrate in this paper the concept of a subwavelength PSub using a locally-resonant elastic MM.~An elastic MM may be designed to feature a band gap in the subwavelength regime (i.e., where the wavelength of the elastic wave is larger than the unit-cell size of the periodic medium~\cite{Liu_2000}), as shown in Fig.~\ref{Fig4}b.~In this case, it is also possible to produce a finite structure with a truncation resonance inside a subwavelength band gap~\cite{xiao2013flexural,Sangiuliano_2020,Xia_2020,Park_2022}.~However, here we provide an alternative approach whereby the PSub resonance that we utilize is a \textit{sub-hybridization resonance}.~This global-structure resonance appears at a frequency lower than the subwavelength band gap, which otherwise would appear at a much higher frequency if the band gap did not exist. The performance metric for an MM-based PSub versus the same PSub without the local resonators is shown and contrasted in Fig.~\ref{Fig5}b.~The lowest resonance for the MM-based PSub is near 1550 Hz, whereas the lowest resonance for the same rod without the resonators is near 8000 Hz. \section{Models and Methods} \label{sec:1} As described in Section~\ref{sec:PSubs}, a PSub is designed without the need for any coupled fluid-structure simulations$-$a trait that is indicative of the mechanistic nature of the theory of phononic subsurfaces. The theory entails producing four key plots that allow for full characterization of the properties of a given PSub configuration~\cite{Hussein_2015}.~The first is the dispersion curves (elastic band structure) for the unit cell from which the PSub is formed.~The second and third plots are the frequency-dependent amplitude and phase response of the PSub, with both the excitation and response being at the edge that will be exposed to the flow.~The fourth characterization calculation produces the frequency-dependent performance metric $P$ for the PSub, defined as the product of the amplitude and phase as mentioned earlier. These plots allow for prediction of the changes that will occur in the instability field in the flow when the PSub is installed. A simulation of the flow coupled to the PSub is then run only to verify and assess the performance.~In the simulation, the Navier-Stokes equations are solved simultaneously with Newton's second law governing the elastodynamic motion in the PSub, with appropriate boundary conditions applied at the fluid-structure interface.~This section briefly describes the models, numerical procedures, and physical parameters used throughout the paper.~The reader is referred to Ref.~\cite{Hussein_2015} for more details on the modeling and solution methods. 
\subsection{PSub model and analysis approach} \label{PSubDispersionG} All the PSub structures we investigate are modeled as one-dimensional (1D) linear elastic solid rods with a constant cross-sectional area, where the elastodynamic motion is governed by \begin{equation} \rho_{\rm s} \ddot{\eta}=(E\eta_{,s}+C\dot{\eta}_{,s})_{,s}+f, \label{eq:structure} \end{equation} where the structure's axial spatial coordinate and time are denoted by $s$ and $t$, respectively, and $\rho_{\rm s}=\rho_{\rm s}(s)$, $E=E(s)$, $C=C(s)$, $\eta=\eta(s,t)$, and $f=f(s,t)$ represent the material density, elastic modulus, damping constant, longitudinal displacement, and external force, respectively. Differentiation with respect to position is indicated by $(.)_{,s}$, and the superposed single dot $\dot{(.)}$ and double dot $\ddot{(.)}$ denote the first and second time derivatives, respectively. \\ \indent The dispersion curves for a given PSub unit-cell configuration are obtained by setting the force $f$ to zero in Eq.~(\ref{eq:structure}) and applying Bloch's theorem ~\cite{Bloch_1929,Hussein_2009}; this yields a relationship between the frequency $\omega$ and the wavenumber $\kappa$ for longitudinal wave propagation along the axis of the rod. The amplitude and phase response of a finite version of the PSub composed of $n_c$ repeated unit cells are obtained by solving Eq.~(\ref{eq:structure}) as a boundary value problem. Free and fixed boundary conditions are chosen for the PSub top and bottom edges, respectively, i.e., $\eta_{,s}(0,t)=0$ and $\eta(l,t)=0$, where $l=n_c a_{\rm{UC}}$, and $a_\mathrm{UC}$ is the PSub unit-cell length. The top end is excited harmonically, i.e., $f(0,t) = \tilde{f}(0)\rm{e}^{\rm{i}\omega_{\mathrm{e}} \it{t}}$ and $f(s,t)=0$ for $s>0$, where $\omega_{\mathrm{e}}$ is the excitation frequency and $\tilde{f}$ is the amplitude of the forcing. The displacement response is given by $\eta(s,t) = \tilde{\eta}(s)\rm{e}^{\rm{i}\omega_{\mathrm{e}} \it{t}}$, where $\tilde{\eta}(s)$ is the amplitude of the response. The phase ${\phi}={\phi}(s)$ is formulated to span the range $-\pi/2\leq{\phi}\leq\pi/2$. When running the coupled fluid-structure simulations, the PSub model must be treated as an initial boundary value problem where no assumptions are made for the displacement field's temporal dependency. As in the steady-state analysis, we set $f(s,t^*)=0$ for $s>0$ and the value of $f(0,t^*)$ is fed in from an integration of the pressure field exerted by the flow at each time step in the simulation covering the time interval $0 \leq t^* \leq t^*_{\rm T}$, where $t^*$ and $t^*_{\rm T}$ are the coupled simulation dimensional time and end time (in seconds), respectively.\\ The PSub rod material/structure is numerically analyzed using the finite-element (FE) method utilizing 1D 2-node iso-parametric elements \cite{HusseinJSV06}. Damping is introduced in the form of viscous proportional damping, which yields a unit-cell damping matrix defined as ${\mathbf{C}} = {q_1}{\mathbf{M}} + {q_2}{\mathbf{K}}$, where $q_1$ and $q_2$ are damping constants and ${\mathbf{M}}$ and ${\mathbf{K}}$ denote the FE mass and stiffness matrices, respectively. The dispersion curves are obtained for values of wavenumber in the range $0 \leq \kappa \leq \pi/a_{\rm{UC}}$~\cite{Hussein_PRB_2009}. The number of nodes in the unit cell is denoted by $n_\mathrm{m}$. 
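As a minimal sketch of what this unit-cell analysis amounts to (standard Bloch finite-element analysis, with damping omitted from the dispersion calculation; the specific matrix assembly follows Ref.~\cite{Hussein_PRB_2009}), the Bloch ansatz $\eta(s+a_{\rm UC},t)=\eta(s,t)\,\mathrm{e}^{\mathrm{i}\kappa a_{\rm UC}}$ reduces Eq.~(\ref{eq:structure}) to the unit-cell eigenvalue problem
\begin{equation*}
\left[\mathbf{K}(\kappa)-\omega^{2}\mathbf{M}(\kappa)\right]\tilde{\boldsymbol{\eta}}=\mathbf{0},
\end{equation*}
where $\mathbf{K}(\kappa)$ and $\mathbf{M}(\kappa)$ are the unit-cell stiffness and mass matrices after the Bloch-periodic boundary conditions are applied; its solutions $\omega(\kappa)$ over the stated wavenumber range trace the dispersion curves of Fig.~\ref{Fig4}.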
For the finite version of the PSub, the number of nodes along the full structure is $n_s=n_c(n_{m}-1)+1$.~For the wave propagation simulation problem, the second-order Newmark time integration scheme is used with the dimensional time step increment $\Delta t^*$.~We use an implicit version of the scheme by selecting the parameters $\gamma=1/2$ and $\beta=1/4$ in the formulation provided in Ref.~\cite{Hussein_2015}. \\ \subsection{Model of unstable channel flow with PSub installed and simulation approach} \label{FlowModel} We examine spatially evolving instabilities in fully-developed incompressible plane channel flows, also known as Poiseuille flows. The flow is driven by a mean pressure gradient between two parallel walls that are nominally rigid except for the region where the PSub is located. An exact solution of the Navier-Stokes equations gives the mean velocity for the flow field~\cite{navier1823memoire,stokes1845g}, which is considered the base inflow. The dynamic stability in this flow is governed by the Orr-Sommerfeld equation~\cite{Orr_I_1907,Orr_II_1907,Sommerfeld_1908}, which is obtained by linearizing the Navier-Stokes equations using the normal-mode assumption.~As mentioned in Section~\ref{Sec:Introduction}, we consider TS waves as examples of two-dimensional (2D) evolving instabilities in parallel shear flows.~These waves are represented by growing eigensolutions of the Orr-Sommerfeld equation and have been observed in laboratory experiments for channel flows~\cite{Nishioka_JFM_1975} and earlier in boundary layer flows~\cite{Schubauer_1948,Klebanoff_JFM_1962}. In our coupled fluid-structure simulations, we superimpose a particular Orr-Sommerfeld unstable spatial mode at the channel inflow boundary. This causes an excitation of the parabolic base velocity, which provides a representative model of an unstable spatially-evolving transitional flow in a typical laboratory experiment.\\ \indent The simulations are based on the time-dependent, three-dimensional Navier-Stokes equations, where the channel half-height $\delta$ and the centerline velocity $U_\mathrm{c}$ are used for nondimensionalization. The continuity and momentum equations are, respectively, \begin{equation} \label{eq:cont} \frac{\partial u_i}{\partial x_i} = 0, \end{equation} \begin{equation} \label{eq:NS} \frac{\partial u_i}{\partial t} + \frac{\partial u_j u_i}{\partial x_j} = \frac{2}{Re}-\frac{\partial p}{\partial x_i} + \frac{1}{Re}\frac{\partial^2 u_i}{\partial x_j \partial x_j}, \end{equation} where ${\bf u}(x,y,z,t)=(u,v,w)$ is the velocity vector with components in the streamwise $x$, wall-normal $y$, and spanwise $z$ directions, respectively, and $p$ is the nondimensional pressure. Moreover, $Re=U_\mathrm{c}\delta/\nu_\mathrm{f}$ is the Reynolds number based on the centerline velocity, $\nu_\mathrm{f}$ is the kinematic viscosity, and $t$ (in this context) is the nondimensional time.~The ranges of the wall-normal and spanwise domains are $0\le y\le 2$ and $0\le z\le 2\pi$, respectively.~We decompose the velocity vector in Eqs.~\eqref{eq:cont} and \eqref{eq:NS} into ${\mathbf{u}} = {\bar{\mathbf{u}}} + {\mathbf {\hat{u}}}$, where ${\bf \bar{u}}$ is the mean flow component obtained by averaging ${\mathbf{u}}$ over a time range and ${\bf \hat{u}}$ is the perturbation (instability) component.~With this decomposition, $p = \bar{p} + \hat{p}$, where $\hat{()}$ represents the perturbation part of the flow.
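For reference, the eigenvalue problem governing these TS modes is the Orr-Sommerfeld equation, written here in its standard nondimensional form for 2D modes proportional to $\hat{v}(y)\,\mathrm{e}^{\mathrm{i}\alpha(x-ct)}$ (a sketch; the sign convention adopted for $c$ below differs slightly):
\begin{equation*}
(\bar{u}-c)\left(\hat{v}''-\alpha^{2}\hat{v}\right)-\bar{u}''\,\hat{v}
=\frac{1}{\mathrm{i}\alpha Re}\left(\hat{v}''''-2\alpha^{2}\hat{v}''+\alpha^{4}\hat{v}\right),
\end{equation*}
with primes denoting $d/dy$ and with $\hat{v}=\hat{v}'=0$ at both walls; the streamwise component of the eigenfunction follows from continuity.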
The initial and boundary conditions for the decomposed velocity field for an all-rigid-wall channel are \begin{subequations} \begin{equation} \label{eq:bcgen} {\bf{u}}(x=0,y,z,t) = {\bf{u_b}}(x=0,y,z,t)+{\bf{\hat{u}}}(x=0,y,z,t), \end{equation} \begin{equation} \label{eq:bc1} {\bf u_b}(x=0,y,z,t) = \left(1-(1-y)^2, 0, 0\right), \end{equation} \begin{equation} \label{eq:bc2} {\bf \hat{u}}(x=0,y,z,t) = A_\mathrm{2D}\text{Real}[{\bf u}_{\rm e2D}(y)e^{-{\rm i}\omega_{\mathrm{TS}}t}], \end{equation} \begin{equation} \label{eq:bc3} {\bf u}(x,y=0,2,z,t) = (0,0,0), \end{equation} \end{subequations} where $A_{\rm 2D}$ is the amplitude of the 2D perturbation, ${\bf u}_{\rm e2D}$ is the Orr-Sommerfeld eigenfunction we prescribe, and $\omega_\mathrm{TS}$ is the perturbation dimensionless frequency (which is a real quantity).~Only the $\hat{u}$ and $\hat{v}$ components of ${\bf u}_{\rm e2D}$ are nonzero.~Furthermore, periodic boundary conditions are applied in the $z$ direction and a non-reflective buffer domain is added to the physical domain for the outflow boundary conditions~\cite{Hussein_2015,Dana91,Saiki93,Kucala14}.~The complex wave speed of the perturbation is defined as $c=-\omega_{\mathrm{TS}}/\alpha$, where $\alpha=\alpha_{\rm R}+{\rm i}\alpha_{\rm I}$ denotes the complex wavenumber~\cite{Reynolds69}.~The perturbation grows in space when $-\alpha_{\rm I}>0$. The PSub installation region covers a streamwise distance from $x_{\rm s}$ to $x_{\rm e}$ and extends uniformly across the entire spanwise direction. For the coupled simulations throughout this paper, $(\cdot)^*$ represents dimensional quantities, whereas the omission of the asterisk symbol denotes dimensionless flow quantities.~We define the dimensional wall pressure as $p^*_{\rm w}=\bar{p}\rho_{\rm f} U_{\rm c}^2$, where $\rho_{\rm f}$ is the fluid density and $\bar{p}$ is the averaged pressure between $x_{\rm s}$ and $x_{\rm e}$.~At every time step, this quantity is computed on the fluid-structure interface.~It acts on the top edge of the PSub as a force.~On the other hand, the resultant displacement $\eta(0,t^*)$ and velocity $\dot{\eta}(0,t^*)$ obtained from the time integration of the structure model are imposed as boundary conditions on the flow field at the interface such that~\cite{Hussein_2015} \begin{subequations} \label{eq:structbc} \begin{equation} \label{eq:structbcu} \hat{u}(x_{\rm s}\le x\le x_{\rm e},y=0,z,t)=-\frac{\eta(0,t^*)}{\delta}\frac{du_{\rm b}}{dy}, \end{equation} \begin{equation} \label{eq:structbcv} \hat{v}(x_{\rm s}\le x\le x_{\rm e},y=0,z,t)=\frac{\dot{\eta}(0,t^*)}{U_\mathrm{c}}. \end{equation} \end{subequations} These boundary conditions ensure that the stresses and velocities match at the fluid-structure interface and are valid when $\eta\ll\delta$ is maintained throughout the computations.~Referred to as transpiration boundary conditions~\cite{Lighthill_1958,Sankar_1981}, Eqs.~(\ref{eq:structbcu}) and~(\ref{eq:structbcv}) are obtained by keeping the interface location fixed and retaining only the linear terms following a Taylor series expansion of the exact interface compatibility conditions.~Other boundary conditions have been examined by Barnes et al.~\cite{Barnes_2021}, giving qualitatively similar results.~Given our assumption of small displacements, these fluid-structure interface boundary conditions allow wall motion predominantly along the wall-normal $y$-direction, since $\dot{\eta}\gg\eta$.~The spanwise velocity $w$ is zero at the interface.
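As a brief sketch of how Eqs.~(\ref{eq:structbcu}) and~(\ref{eq:structbcv}) arise (retaining only linear terms, with the wall assumed to move purely vertically), the no-slip and kinematic conditions at the instantaneous wall position $y^{*}=\eta(0,t^{*})$ are expanded about $y^{*}=0$:
\begin{equation*}
u^{*}\big(x^{*},\eta,t^{*}\big)\approx\hat{u}^{*}\big(x^{*},0,t^{*}\big)+\eta\left.\frac{\partial \bar{u}^{*}}{\partial y^{*}}\right|_{y^{*}=0}=0,
\qquad
v^{*}\big(x^{*},\eta,t^{*}\big)\approx\hat{v}^{*}\big(x^{*},0,t^{*}\big)=\dot{\eta},
\end{equation*}
which, upon nondimensionalization by $U_{\rm c}$ and $\delta$, recover the two transpiration conditions above.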
For the flow field, the Navier-Stokes equations are integrated using a time-splitting scheme~\cite{Dana91,Saiki93,Kucala14} on a staggered structured grid system, in which the velocity components are computed at the cell edges, and the pressure is determined at the cell centers.~The wall-normal diffusion term is discretized by implementing the implicit Crank-Nicolson method, and the Adams-Bashforth scheme is used for an explicit treatment of all the other terms.~This numerical procedure was verified against linear theory, giving a maximum deviation of $0.05\%$ in the predicted perturbation energy growth~\cite{Kucala14}.~Since the equations for the fluid and the structure are inverted separately in the coupled simulations, a conventional serial staggered scheme~\cite{Farhat_2000} is implemented to couple the two sets of time integration. \subsection{Model parameters} \label{sec:MP} Table~\ref{Table:phononic} lists the geometric parameters and material properties of the PSubs we examine in this paper.~The unit cell of the PnC rod consists of two layers, aluminum (Al) and ABS polymer.~For the PnC-based PSub, we select the values of 2.4 $\mathrm{GPa}$ and 1040 $\mathrm{kg/m^3}$ for the elastic modulus and density of the ABS polymer, respectively, and the corresponding values of 68.8 $\mathrm{GPa}$ and 2700 $\mathrm{kg/m^3}$ for the Al.~The PSub comprises 5 unit cells, each with a length of $a_{\rm PnC}=40$ cm (i.e., $l_\mathrm{PnC}=2$ m). In the FE analysis, each unit cell is discretized into 50 linear elements; hence, the structure has 250 degrees of freedom considering the fixed end at the bottom. \begin{table} \begin{center} \caption{Geometric parameters and material properties of PSubs} \label{tab:1} \begin{tabular}{p{2cm}p{3cm}p{3cm}p{2cm}p{3.4cm}} \hline Material & Volume Fraction & Elastic Modulus & Density & Damping Constant\\ & ($\%$) & ($\mathrm{GPa}$) & ($\mathrm{kg/m^3}$) & ($q_1$; $q_2$) \\ \hline Aluminum & 90 & 68.8$-$70 & 2700$-$2710 & (0; $6\times10^{-9}$)\\ ABS & 10 & 2.4$-$3 & 1040$-$1200 & (0; $6\times10^{-8}$)\\ \hline \end{tabular} \label{Table:phononic} \end{center} \end{table} The unit cell of the MM-based PSub consists of a homogeneous rod made out of ABS polymer and a local mass-spring resonator attached at the center.~This configuration may be realized in practice by, for example, a rod/beam structure with pillars periodically attached to represent the resonators~\cite{wu2008evidence,pennec2008low,Bilal_2013,Xiao_2013}.~We choose the elastic modulus and density of ABS polymer to be 3 $\mathrm{GPa}$ and 1200 $\mathrm{kg/m^3}$, respectively.~The unit cell has a length of $a_{\rm MM}=1$~cm, and the PSub is formed from either 5 ($l_{\rm MM}=5$~cm), 10 ($l_{\rm MM}=10$~cm), 15 ($l_{\rm MM}=15$~cm), or 20 ($l_{\rm MM}=20$~cm) unit cells. The resonator's mass and spring stiffness are tunable according to the target instability frequency. In the nominal case, the resonator's frequency is set to $f_\mathrm{res}=2000$ Hz. The resonator's point mass is set to be ten times the total mass of the rod portion in the unit cell, $m_\mathrm{res}=10\times \rho_\mathrm{ABS}a_{\rm MM}$; this gives a resonator stiffness of $k_\mathrm{res}=m_\mathrm{res}(2\pi f_\mathrm{res})^2$. The metamaterial unit cell is discretized into seven FE elements (six rod elements and one mass-resonator element); thus, each unit cell has eight degrees of freedom, including that of the resonator.
A 5 unit-cell MM-based PSub therefore has 35 degrees of freedom once the fixed boundary condition is applied at the bottom.~The reader is referred to Ref.~\cite{Khajehtourian_2014} for details on the dispersion behavior of this particular elastic metamaterial configuration. The coupled fluid-structure simulations are based on ${Re}=7500$, incorporating an instability with nondimensional frequency $\omega_\mathrm{TS}=0.25$ and wavenumber $\alpha=1.0004-\mathrm{i}0.0062$, which corresponds to the least-attenuated eigenmode of the Orr-Sommerfeld equation.~Utilizing nondimensional analysis to simulate a given TS wave with a dimensional frequency $f_\mathrm{TS}=\omega_\mathrm{TS}U_{\rm c}/(2\pi\delta)$ Hz, we vary the centerline velocity (velocity scale) ${U_{\rm c}}$ and half-height of the channel (length scale) $\delta$ accordingly in the direct numerical simulation (DNS) code.~All simulations are done for liquid water, for which the kinematic viscosity is $\nu_\mathrm{f}=1\times10^{-6}$ $\mathrm{m^2/s}$.~While not considered here, PSubs may also be designed for air by adjusting the elastic compliance of the PSub surface exposed to the flow.~The value of $\delta$ varies between the different models examined. For example, a value of $\delta=4.23\times10^{-4}$ m is used for a PnC-based PSub targeting strong stabilization of an instability at 1670 Hz (see details in Section~\ref{sec:ResPnC}) and $\delta=4.38\times10^{-4}$ m for an MM-based PSub comprising 5 unit cells and targeting strong stabilization of an instability at 1550.3 Hz (see details in Section~\ref{sec:ResMM}). The corresponding centerline velocities for these PnC-based and MM-based PSub simulations are $U_{\rm c} = 17.72$ m/s and $U_{\rm c} = 17.11$ m/s, respectively. For all the MM-based and PnC-based PSub simulations, the dimension of the channel is fixed as $L_x=20\delta$, $L_y=2\delta$, and $L_z=2\pi \delta$. The fluid domain is discretized into $n_x=225$, $n_y=65$, and $n_z=8$ points in the streamwise, wall-normal, and spanwise directions, respectively. The length of the PSub interface (control surface) along the streamwise direction is approximately a quarter of the instability wavelength, $\lambda_\mathrm{TS}=2\pi\delta/\alpha_\mathrm{R}$. The upstream and downstream edges of the PSub interface in the streamwise direction are at $x_{\rm s}/\delta\approx6$ and $x_{\rm e}/\delta\approx8$, respectively.~The dimensional time step $\Delta t^*$ is selected such that 2000 time steps cover a period of the instability wave. Specifically, $\Delta t^*=3\times10^{-7}$ s and $\Delta t^*=3.22\times10^{-7}$ s for the PnC-based and MM-based PSub simulations, respectively. The dimensional time integration step for the flow is the same as that for the PSub.~All the simulations are run for 3 million time steps until $t_{\rm end}^*\approx 1$ s, where $t_{\rm end}^*$ is the dimensional time at the end of the coupled fluid-structure simulations. The averaging time window for adequately capturing the relevant statistics for the various cases is chosen to begin when the simulation has become quasi-steady, i.e., the effect of the initial conditions has faded, and to extend sufficiently long to cover approximately 1000 TS wave periods.~The buffer region is sized to 40$\%$ of the channel length, ending at the outlet~\cite{Hussein_2015}. All the simulations were executed on the RMACC supercomputer Summit at the University of Colorado using parallel computation.
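As a consistency check on these parameter choices (using the nominal values quoted above; small differences reflect rounding of the quoted $\delta$),
\begin{equation*}
f_{\mathrm{TS}}=\frac{\omega_{\mathrm{TS}}U_{\rm c}}{2\pi\delta}
=\frac{0.25\times 17.72}{2\pi\times 4.23\times10^{-4}}\approx 1667~\mathrm{Hz}
\quad\text{and}\quad
\frac{0.25\times 17.11}{2\pi\times 4.38\times10^{-4}}\approx 1554~\mathrm{Hz},
\end{equation*}
consistent with the 1670~Hz and 1550.3~Hz target frequencies, respectively. To make the serial staggered exchange described in Section~\ref{FlowModel} concrete, the snippet below (Python) is a purely illustrative sketch rather than the solver used in this work: the flow side is replaced by a stand-in harmonic wall-pressure signal, the PSub is reduced to a single-degree-of-freedom oscillator, and all parameter values are placeholders. Only the structure of the per-step exchange$-$wall pressure in, wall-normal interface velocity out, with an implicit Newmark update in between$-$mirrors the coupled simulations.
\begin{verbatim}
import numpy as np

def newmark_step(m, c, k, u, v, a, f_next, dt, gamma=0.5, beta=0.25):
    # One implicit Newmark step (gamma = 1/2, beta = 1/4) for
    # m*u'' + c*u' + k*u = f, returning the updated (u, v, a).
    k_eff = k + gamma/(beta*dt)*c + m/(beta*dt**2)
    rhs = (f_next
           + m*(u/(beta*dt**2) + v/(beta*dt) + (1/(2*beta) - 1)*a)
           + c*(gamma/(beta*dt)*u + (gamma/beta - 1)*v
                + dt*(gamma/(2*beta) - 1)*a))
    u_new = rhs/k_eff
    a_new = (u_new - u)/(beta*dt**2) - v/(beta*dt) - (1/(2*beta) - 1)*a
    v_new = v + dt*((1 - gamma)*a + gamma*a_new)
    return u_new, v_new, a_new

# Placeholder (illustrative) parameters, not those of the paper.
f_ts = 1550.0                  # TS-wave frequency [Hz]
m, c = 0.1, 0.5                # effective mass [kg] and damping [N s/m]
k = m*(2*np.pi*f_ts)**2        # stiffness chosen so the resonance sits at f_ts
area = 1.0e-4                  # interface area converting pressure to force [m^2]
dt = 1.0/(2000.0*f_ts)         # about 2000 steps per TS period, as in the paper
u = v = a = 0.0

for n in range(20000):
    t = (n + 1)*dt
    # (1) "Flow" step: stand-in harmonic wall pressure; in the real code this is
    #     the averaged wall pressure from the Navier-Stokes time-splitting solver.
    p_wall = 10.0*np.sin(2*np.pi*f_ts*t)
    # (2) Structure step: advance the PSub top edge under the interface force.
    u, v, a = newmark_step(m, c, k, u, v, a, p_wall*area, dt)
    # (3) Feedback: v (the top-edge velocity) would be imposed on the flow as the
    #     wall-normal perturbation velocity boundary condition described above.
print("top-edge displacement and velocity at end of run:", u, v)
\end{verbatim}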
\section{Results} \label{sec:Res} We now examine the detailed characteristics and actual performance from coupled fluid-structure simulations of the two types of PSubs considered in Figs.~\ref{Fig2},~\ref{Fig4} and~\ref{Fig5}. \subsection{PnC-based PSub} \label{sec:ResPnC} \begin{figure*} [t!] \includegraphics[width=1\columnwidth]{Fig_06.eps}% \caption{Four key characterization plots that form the foundation of the PSubs theory: (a) Dispersion curves for a unit cell from which the PSub is formed. Steady-state vibration (b) amplitude and (c) phase response of the PSub top edge when harmonically excited at the same location.~(d) Performance metric obtained by multiplying the amplitude by the phase. The phase is between the force and the displacement at the PSub top edge. All plots are obtained by analyzing a stand-alone FE model of the PSub without yet coupling to the flow. These results are for the PnC-based PSub with a 40-cm long unit cell.} \label{Fig6} \end{figure*} The four key characterization plots for the PnC-based PSub configuration whose geometric and material properties are given in Section~\ref{sec:MP} are shown in Fig.~\ref{Fig6}. This structure is identical to that investigated in Ref.~\cite{Hussein_2015} which was designed for a TS instability with a frequency of 1690 Hz, except here it comprises five unit cells instead of 10.~The band structure pertaining to the PSub unit cell features a band gap, as shown by the grey region throughout the four plots. A truncation resonance appears inside the band gap at 1660.3 Hz for five unit cells; and, as shown in the third plot, the phase turns from positive (in-phase) to negative (out-of-phase) at that frequency and stays negative until the next resonance. This, in turn, gives a value of $P$ that is positive at pre-resonance and negative at post-resonance, as shown in the fourth plot. Both the amplitude and phase quantities are determined from isolated steady-state harmonic frequency response analysis of a 5-unit-cell long version of the PSub with fixed support at the bottom, as described in Section~\ref{PSubDispersionG}. This contrasts with Ref.~\cite{Hussein_2015} where the phase spectrum was obtained by running long-time simulations. For comparison, the characterization curves of the statically equivalent homogeneous structure are superimposed in all plots. It is noticeable that the distance between the resonances and the range of the negative phase for the homogeneous structure near the TS wave frequency peak is markedly narrower than that of the PnC-based PSub. Consequently, the dip in the $P$ curve near the TS wave frequency is both wider (broader) and deeper (higher in absolute value) for the PnC-based PSub compared to the corresponding homogenized structure, as marked in Fig.~\ref{Fig6}d.~This advantage is present for both the functions of stabilization and destabilization. \begin{figure*} [t!] 
\includegraphics[width=1\columnwidth]{Fig_07.eps}% \caption{Demonstration of PnC-based PSub performance for both flow stabilization and destabilization.~(a) Performance metric curve (grey) and four vertical lines respectively representing four different instability waves investigated (each characterized by a frequency as indicated).~Green and red regions quantify the intensity and frequency breadth of the stabilization and destabilization capacity of the PSub; the grey region represents the frequency range of the band gap.~Time-averaged (b) kinetic energy of the flow perturbation (instability) and (c) skin-friction coefficient as a function of streamwise position for each of the four cases as obtained from coupled flow-PSub simulations.~The PSub location spans the distance between the two dashed lines as indicated.~The responses quantitatively correlate with the frequency-performance metric intersection values in (a), indicating a perfect prediction of PSub performance. } \label{Fig7} \end{figure*} In Fig.~\ref{Fig7}a, we show a portion of the $P$-function again and mark the frequency values of four different TS wave instabilities. The first from the left (light orange line) is at 1637 Hz, which intersects the performance metric curve at a relatively low positive value ($P=1.02\times10^{-9}$ rad$\cdot$m/N)$-$indicating the ability to trigger \textit{weak destabilization} once the PSub is applied to a flow carrying an instability at this particular frequency. The second line from the left (dark red) is at 1650 Hz and can be seen to intersect the $P$ curve at a higher positive value ($P=2.32\times10^{-9}$ rad$\cdot$m/N), indicating the ability to cause~\textit{strong destabilization}.~The third frequency (dark green line) has a value of 1670 Hz; this intersects with the $P$ curve at a relatively high negative value ($P=-2.46\times10^{-9}$ rad$\cdot$m/N) which would give rise to \textit{strong stabilization}. Lastly, the fourth vertical line (light green line) corresponds to a TS wave with a frequency of 1684 Hz; this intersects with the performance metric curve at a lower negative value ($P=-1.04\times10^{-9}$ rad$\cdot$m/N) which would cause \textit{weak stabilization}.~Figure~\ref{Fig7}b shows the actual performance of the PSub in passively controlling each of these instabilities as seen from four separate coupled fluid-structure simulations.~To serve as a reference case, a fifth simulation is conducted with no PSub installed (i.e., the flow is exposed to a rigid wall all along) with a TS wave at 1660.3 Hz, corresponding to the center between the resonance and anti-resonance peaks in the PSub performance metric shown in Fig.~\ref{Fig7}a.~The figure shows the time-averaged kinetic energy of the perturbation velocity field plotted as a function of the streamwise position.
The perturbation kinetic energy $K_\mathrm{p}^{*}$, in units of ${\rm J}/{\rm m}$, is defined as \begin{equation} K_{\mathrm{p}}^*\left(x^*\right)=\rho_{\mathrm{f}} \int_0^{L_z} \int_0^{2 \delta} \frac{1}{2}\left(\left\langle\hat{u}^{* 2}\right\rangle+\left\langle\hat{v}^{* 2}\right\rangle+\left\langle\hat{w}^{* 2}\right\rangle\right) \mathrm{d} y^* \mathrm{~d} z^*, \end{equation} \noindent where $\hat{u}^{*}$, $\hat{v}^{*}$, and $\hat{w}^{*}$ are the perturbation velocity components in the streamwise, wall-normal, and spanwise directions, respectively.~The symbol $\langle \cdot \rangle$ denotes time-averaged quantities.~The channel flow characteristics are expected to be nonhomogeneous along the streamwise direction due to the presence of the instability and the PSub.~It is clearly observed from Fig.~\ref{Fig7}b that the $K_\mathrm{p}^{*}$ of the instability field rises above the reference rigid-wall case for the destabilization cases and falls below it for the stabilization cases, and this rise or fall takes place exactly where the PSub is installed (as indicated by the two vertical lines). Furthermore, the intensity of the rise or fall of $K_\mathrm{p}^{*}$ is consistent with the absolute value of the performance metric at the frequency intersections in Fig.~\ref{Fig7}a, where a small value of $|P|$ correlates with a weak change in $K_\mathrm{p}^{*}$ and a large value of $|P|$ correlates with a strong change in $K_\mathrm{p}^{*}$. We also observe that the $K_\mathrm{p}^{*}$ levels return to nearly the same level as the reference rigid-wall case downstream of the PSub, which is a desired outcome as it indicates precise local control of the instability field.~The stronger the stabilization or destabilization within the PSub region, the larger the offset of $K_\mathrm{p}^{*}$ in the far downstream region compared to the rigid-wall case. In Fig.~\ref{Fig7}c, we present the skin-friction coefficient calculated at the bottom wall of the channel where the PSub is installed.~The skin-friction coefficient $C_{\rm f}$ for channel flows is defined as \begin{equation} C_{\rm f}(x^{*}) = \frac{\langle\tau_{\rm w}^{*}\rangle}{\frac{1}{2}\rho_{\rm f}U_{\rm B}^2} \end{equation} where $\langle\tau_{\rm w}^{*}\rangle=\left[\mu_\mathrm{f}\frac{\partial \langle u^{*} \rangle}{\partial y^{*}}- \rho_\mathrm{f}\langle \hat{u}^* \hat{v}^*\rangle\right]_{y^{*}=0}$ is the wall mean shear stress, $\mu_\mathrm{f}$ is the fluid's dynamic viscosity, and $U_\mathrm{B}$ is the bulk velocity.~The mean shear stress at the wall was computed using a polynomial fit.~We observe that the skin-friction coefficient decreases in the stabilization cases and increases in the destabilization cases within the PSub control region.~The behavior of the skin friction is, therefore, compatible with what we observe for the perturbation kinetic energy in~Fig.~\ref{Fig7}b. In the region where the perturbation kinetic energy decreases, the wall mean shear stress $\langle\tau_{\rm w}^{*}\rangle$ also reduces, resulting in a drop in the skin-friction coefficient, and vice versa for the destabilization cases.~This reduction (or enhancement) for the stabilization (or destabilization) is mild (less than 0.5$\%$) because the PSub region is relatively small, and the TS wave examined grows slowly and represents a small linear perturbation in the flow.
Nevertheless, the PSub is shown to influence the flow precisely as desired, and a stronger influence on the skin friction will be achieved with greater area coverage by PSubs acting on more dominant instability fields.~A PSub designed to exhibit even higher values of $|P|$ at the instability frequency will also cause stronger changes to the skin-friction coefficient. \begin{figure*} [t!] \includegraphics[width=1\columnwidth]{Fig_08New.eps}% \caption{Synchronized passive phased response and energy exchange between PnC-based PSub and flow perturbation (instability) field. The colored contours show the time-averaged spatial distribution of the perturbation kinetic energy within the flow.~The black curves represent the time-averaged total elastodynamic energy in the PSub. Stabilization ($P<0$) and destabilization ($P>0$) cases are shown in (b) and (d), respectively, whereas a neutral case ($P=0$) is shown in (c). The rigid-wall case is shown in (a) as a reference.}\label{Fig8} \end{figure*} The time-averaged spatial distribution of $K_\mathrm{p}^{*}$ over both the $x$ and $y$ directions is shown in Fig.~\ref{Fig8}, for the rigid-wall (Fig.~\ref{Fig8}a), weak stabilization (Fig.~\ref{Fig8}b), and weak destabilization (Fig.~\ref{Fig8}d) cases.~Figure~\ref{Fig8}c examines a case with an instability frequency of 1444 Hz, which corresponds to $P=0$, thus offering a neutral effect.~For Figs.~\ref{Fig8}b-d, we also show the corresponding time-averaged quantities of the total elastodynamic energy within the PSub, defined as $\Psi(s,t^*) = \frac{1}{2} \left( E \eta_{,s}^2 + \rho_{\rm s} \dot{\eta}^2 \right)$, as obtained simultaneously from the same coupled simulations.~The peaks in the PSub total energy plots correspond to the regions occupied by the aluminum layers, where the speed of sound is higher than that of the ABS polymer layers.~We observe the total energy profile for the stabilization case (Fig.~\ref{Fig8}b) to be lower overall than that of the destabilization case (Fig.~\ref{Fig8}d), which is expected because the former admits out-of-phase (cancelling) wave motion across the fluid-structure assembly, whereas the latter admits in-phase (adding up) wave motion.~The total elastodynamic energy in the neutral case (Fig.~\ref{Fig8}c) is very small (almost negligible) because the PSub response amplitude at that frequency is zero (hence $P = 0$), thus preventing the system from experiencing any substantial fluid-structure interaction.~The results of Fig.~\ref{Fig8} demonstrate, most explicitly, a remarkable passive synchronization of response across both the PSub structure and the coupled flowing fluid. \\ \begin{figure*} [b!] \includegraphics[width=1\columnwidth]{Fig_09.eps}% \caption{Four key characterization plots for MM-based PSubs: (a) Dispersion curves for a unit cell from which each MM-based PSub is formed. Steady-state vibration (b) amplitude and (c) phase response at top edge of a 5-, 10-, 15-, or 20-unit-cell long PSub, all when harmonically excited at the same location.~(d) Performance metric obtained by multiplying the amplitude by the phase. The phase is between the force and the displacement at the PSub top edge.~Insets show the plots over an extended frequency range for the 5 unit-cell case and its corresponding homogeneous rod.~All plots are obtained by analyzing a stand-alone FE model of the PSub without yet coupling to the flow.~The unit cell of the MM-based PSub is 1-cm long.} \label{Fig9} \end{figure*} \begin{figure*} [b!]
\includegraphics[width=1\columnwidth]{Fig_10.eps}% \caption{Demonstration of MM-based PSub performance for flow stabilization and destabilization.~(a) Performance metric curve (grey) and four vertical lines respectively representing four different instability waves investigated (each characterized by a frequency as indicated).~Green and red regions quantify the intensity and frequency breadth of the stabilization and destabilization capacity of the PSub.~Time-averaged (b) kinetic energy of the flow perturbation (instability) and (c) skin-friction coefficient as a function of streamwise position for each of the four cases as obtained from coupled flow-PSub simulations.~The PSub location spans the distance between the two dashed lines as indicated.~The responses quantitatively correlate with the frequency-performance metric intersection values in (a), indicating perfect prediction of PSub performance.} \label{Fig10} \end{figure*} \subsection{MM-based PSub} \label{sec:ResMM} As described in Section~\ref{sec:PSubsPass}, the MM-based PSub design approach utilizes a pass-band resonance that has been lowered in its frequency value due to the presence of a locally resonant hybridization band gap. The unit-cell dispersion diagram of the MM-based PSub configuration whose material and geometric properties are given in Section~\ref{sec:MP} is shown in Fig.~\ref{Fig4}b.~The dispersion curves for the same homogeneous rod but without the resonators are also shown for comparison.~Given that this PSub configuration comprises a slender homogeneous rod with a periodic arrangement of spring-mass resonators, a local-resonance band gap may be tuned to a target resonator frequency by simply adjusting the spring constant and/or mass value. As mentioned in Section~\ref{sec:MP}, we select a target resonator frequency of 2000 Hz and a resonator-to-rod mass ratio of 10; this generates a band gap centered at 4302.3 Hz.~Figure~\ref{Fig9}a shows the same dispersion diagram as the one shown in Fig.~\ref{Fig4}b, but in a rotated view and expressed in terms of dimensional frequency, and Figs.~\ref{Fig9}b,~\ref{Fig9}c, and~\ref{Fig9}d show the three remaining characterization plots, for each of a 5-, 10-, 15-, and 20-unit-cell long PSub.~We utilize the following subwavelength resonance frequencies: 1547.3 Hz (5-unit-cell PSub), 1032.4 Hz (10-unit-cell PSub), 742.2 Hz (15-unit-cell PSub), and 573 Hz (20-unit-cell PSub). The longer the PSub we can afford to install, the lower the frequency we can target for TS wave stabilization or destabilization for a given MM unit-cell configuration.~As seen in Fig.~\ref{Fig9}b, the PSub unit-cell band gap enables the generation of several structural resonances at frequencies lower than the band gap, which itself is already in the subwavelength regime. In particular, for the shortest PSub with 5 unit cells, we employ the resonance at 1547.3 Hz for our flow control objective.
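As a rough, back-of-the-envelope check on the subwavelength character of this design (assuming an ideal fixed-free homogeneous rod), the long-wave speed in the ABS rod is $c_{\rm ABS}=\sqrt{E_{\rm ABS}/\rho_{\rm ABS}}=\sqrt{3\times10^{9}/1200}\approx1.58\times10^{3}$~m/s, so the fundamental resonance of the bare 5-cm rod is
\begin{equation*}
f_{1}\approx\frac{c_{\rm ABS}}{4\,l_{\rm MM}}=\frac{1581}{4\times0.05}\approx 7.9~\mathrm{kHz},
\end{equation*}
in agreement with the value of approximately 8000~Hz quoted in Section~\ref{sec:PSubsPass}, and roughly five times higher than the 1547.3~Hz sub-hybridization resonance exploited here.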
Similar to the results for the PnC-based PSub shown in Fig.~\ref{Fig7}, when comparing Fig.~\ref{Fig10}a with Fig.~\ref{Fig10}b we observe a direct correlation between the $P$ value at the intersection with the TS wave frequency and the corresponding actual $K_\mathrm{p}^{*}$ performance in the flow simulation.~Once again, we observe a perfect~\textit{a priori} prediction of whether the TS wave stabilizes or destabilizes, and at what level in each case.~Furthermore, similar to the PnC-based PSub cases, all the reductions in $K_\mathrm{p}^{*}$ take place exactly where the PSub is placed, and, favorably, the $K_\mathrm{p}^{*}$ levels return to nearly the same level of the reference rigid-wall case downstream to the PSub.~Figure.~\ref{Fig10}c displays the corresponding skin-friction coefficient calculated at the bottom wall of the channel, with qualitatively similar results to the PnC-based PSub results shown in Fig.~\ref{Fig7}c.~The rigid-wall case here is taken for a TS wave at 1547.3 Hz, corresponding to the center between the resonance and anti-resonance peaks in the PSub $P$ metric shown in Fig.~\ref{Fig10}a. \begin{figure*} [t!] \includegraphics[width=1\columnwidth]{Fig_11New.eps}% \caption{Synchronized passive phased response and energy exchange between MM-based PSub and flow perturbation (instability) field. The colored contours show the time-averaged spatial distribution of the perturbation kinetic energy within the flow.~The black curves represent time-averaged total elastodynamic energy in the PSub, with horizontal lines indicating the total energy level of the resonating masses.~Stabilization ($P<0$) and destabilization ($P>0$) cases are shown in (b) and (d), respectively, whereas a neutral case ($P\approx0$) is shown in (c). The rigid-wall case is shown in (a) as a reference.}\label{Fig11} \end{figure*} \begin{figure*} [t!] \includegraphics[width=1\columnwidth]{Fig_12.eps}% \caption{Instantaneous vector field of the perturbation (instability) velocity component $\hat{\mathbf{u}}$.~In the background, the resultant magnitude of the perturbation velocity field $|\hat{\mathbf{u}}|$ is also plotted.~Both are plotted along the $z=\pi$ plane.~In the top and bottom panels, the strongest stabilization and destabilization cases of Fig.~\ref{Fig10} are shown, respectively.~The PSub location spans the distance between the two white dashed lines as indicated.~The corresponding all-rigid-wall case is shown in the middle panel for comparison. Close-up views are shown in the right column.} \label{Fig12} \end{figure*} \begin{figure*} [t!] \includegraphics[width=1\columnwidth]{Fig_13.eps}% \caption{Decomposition of flow kinetic energy.~Time-averaged kinetic energy of (a) perturbation (instability) component, (b) mean flow component, and (c) summation of both components, i.e., total kinetic energy. All results are for the MM-based PSub examined in Fig.~\ref{Fig10}; only the strongest stabilization and destabilization cases are shown.~The PSub location spans the distance between the two white dashed lines as indicated.~In each of (b) and (c), the corresponding kinetic energy curve for the same channel flow without the presence of an instability is shown (light orange). In all sub-figures, the case with the $\hat{u}$ (streamwise) component of the fluid-structure interface boundary conditions not applied (i.e., replacing Eq.~(\ref{eq:structbcu}) with $\hat{u}(x,0,z,t)=0$) is shown in dashed lines. } \label{Fig13} \end{figure*} \begin{figure*} [t!] 
\includegraphics[width=0.8\columnwidth]{Fig_14.eps}% \caption{Instantaneous vector field of the mean-flow velocity component with the base-flow velocity subtracted, i.e., $\bar{\mathbf{u}}-\mathbf{u_{\mathbf{b}}}$.~In the background, the resultant magnitude of this quantity, i.e., $|\bar{\mathbf{u}}-\mathbf{u_{\mathbf{b}}}|$, is also plotted.~Both are plotted along the $z=\pi$ plane.~In the top and bottom panels, the strongest stabilization and destabilization cases of Fig.~\ref{Fig10} are shown, respectively.~The PSub location spans the distance between the two white dashed lines as indicated.~The corresponding all-rigid-wall case is shown in the middle panel for comparison.} \label{Fig14} \end{figure*} The time-averaged spatial distribution of $K_\mathrm{p}^{*}$ in the flow and the corresponding time-averaged total elastodynamic energy $\Psi(s,t^*)$ within the MM-based PSub are shown in Fig.~\ref{Fig11}. The rigid-wall (Fig.~\ref{Fig11}a), strong stabilization (Fig.~\ref{Fig11}b), and strong destabilization (Fig.~\ref{Fig11}d) cases, as well as a neutral case at 1400 Hz where $P\approx0$ (the MM-based PSub does not generate exactly zero $P$ below the resonance frequency) (Fig.~\ref{Fig11}c), are shown. The black horizontal lines represent the total energy level of the locally resonating masses depicted in Fig.~\ref{Fig2}b.~In analogy to Fig.~\ref{Fig8}, we observe the energy in the resonators for the stabilization case (Fig.~\ref{Fig11}b) to be lower overall than that of the destabilization case (Fig.~\ref{Fig11}d), and also note that the neutral case (Fig.~\ref{Fig11}c) experiences very small (almost negligible) energy in the resonators.~As in Fig.~\ref{Fig8} for a PnC-based PSub, the results of Fig.~\ref{Fig11} for an MM-based PSub demonstrate a holistic synchrony in the coupled fluid-structure interaction response, exactly consistent with the corresponding $P$ value in each case. Figure~\ref{Fig12} provides a contour plot of the absolute value of the instantaneous velocity perturbation for the strong stabilization and destabilization cases and the rigid-wall case for comparison.~A snapshot of the instantaneous vector field of the perturbation velocity is overlaid in each subfigure.~It is clear that at the PSub region, the stabilization case attains the lowest value of $|\mathbf{\hat{u}}|$ (smallest and least bright yellow spot), followed by the rigid-wall case (where there is no PSub), and then the destabilization case.~Consistent with this pattern, the perturbation velocity vector field experiences the smallest wall-normal components near the wall at the PSub region for the stabilization case, also followed by the rigid-wall case, and then the destabilization case.~Small wall-normal components compared to the rigid-wall case are indicative of coherent wave cancellation due to the presence of a stabilizing PSub.~In contrast, relatively large wall-normal components near the wall are indicative of constructive interference from a destabilizing PSub.
In Fig.~\ref{Fig13}, we examine the exchange of energy within the flow.~With no PSub installed, Fig.~\ref{Fig13}b shows that the mean-flow kinetic energy drops in the upstream region of the channel as the perturbation kinetic energy grows and acquires energy from the mean flow.~The trend eventually reverses when the mean flow begins to experience structural changes itself as it carries a growing instability.~The time-averaged perturbation kinetic energy for the strong stabilization and destabilization cases is shown, again, in Fig.~\ref{Fig13}a and contrasted with the corresponding mean-flow component that is plotted in Fig.~\ref{Fig13}b.~The sum of both components is given in Fig.~\ref{Fig13}c.~The changes incurred in the controlled mean-flow component are very small due to the small magnitude of the perturbation, but nevertheless reveal valuable qualitative information.~In the presence of a PSub, we observe a short rise (fall) in the mean-flow kinetic energy near the upstream border of the PSub while the perturbation kinetic energy drops (rises) for stabilization (destabilization).~Subsequently, as the perturbation kinetic energy profile reverses direction, a corresponding opposite change in direction is seen in the mean-flow kinetic energy profile.~These trends confirm the energy exchange mechanisms depicted in the Fig.~\ref{Fig3} schematic. Figure~\ref{Fig14} examines the influence on the mean flow from a contour diagram perspective.~In this figure, the base flow field is subtracted from the mean flow field, yielding a $\mathbf{\bar{u}-u_{\rm b}}$ vector field, which is plotted in 2D space.~Furthermore, the corresponding time-averaged quantity $|\mathbf{\bar{u}-u_{\rm b}}|$ is mapped out using color contours.~First, we observe in the rigid-wall case that the velocity vectors point backwards (opposite to the flow direction) near the middle of the half-channel and, conversely, point forwards near the wall.~This pattern reveals that the instability is causing the mean-flow velocity profile to shorten and broaden, demonstrating very early signs of the onset of transition to turbulence.~In the stabilization and destabilization plots, we observe an increase (decrease) in the mean-flow resultant amplitude and a pointing up (down) of the arrows near the wall for the cases of stabilization (destabilization). This reveals a slower (faster) transition process in comparison with the rigid-wall case.~This adds further evidence of the phased energy exchange mechanisms described and discussed earlier. To further examine the underlying anti-resonance and resonance mechanisms within the flow, we compute the production rate of the perturbation energy $P_{\rm r}^{*}$, given by \begin{equation} P_{\rm r}^{*}(x^{*},y^{*}) = \int_{0}^{L_z} \left( -\rho_\mathrm{f} \langle \hat{u}^{*} \hat{v}^{*} \rangle \frac{\partial \langle u^{*} \rangle}{\partial y^{*}}\right) dz^{*}.
\end{equation} This quantity depicts the energy transfer rate between the mean flow and the instability, or more generally, the rate of perturbation generation (turbulence generation in fully-developed turbulent flows~\cite{Davies_1997,Hussein_2015,Prandtl_1922,Morris_1976,Cossu_2004,Cimarelli_2019}).~Without control, the production rate is generally positive for an unstable laminar flow, indicating a flow resonance phenomenon where energy is being transferred from the mean flow to the instability, causing it to grow as it propagates downstream$-$hence the positive, upward trend of $K_\mathrm{p}^{*}$ that we observe in Figs.~\ref{Fig7}b and~\ref{Fig10}b.~In contrast, a negative production rate indicates a PSub-induced flow anti-resonance phenomenon whereby energy is transferred from the instability back to the mean flow.~A negative production rate of the perturbation kinetic energy diminishes the intensity of an instability. In Fig.~\ref{Fig15}, we present the production rate of perturbation kinetic energy (expressed in dimensionless form) with respect to the wall-normal direction at three streamwise $x$-locations (stations) for both strong (Fig.~\ref{Fig15}a) and weak (Fig.~\ref{Fig15}b) passive control.~In both plots, the nominal MM-based PSub with five unit cells is used. Since the TS waves are small linear perturbations, we observe only modest changes in the production rate, on the order of $\sim10^{-6}$ in dimensionless units; however, these changes elucidate the underlying dynamics of the impact of the PSub on the flow field.~In Fig.~\ref{Fig15}, Station 1 is at the left edge of the PSub, $x^*/\delta=6.25$ (solid curves). This is the position where the flow first ``experiences'' the influence of the PSub, and, according to Fig.~\ref{Fig10}b, where the strongest reduction in $K_\mathrm{p}^{*}$ occurs for the stabilization cases.~Station 2 is at the right edge of the PSub, $x^*/\delta=7.86$ (dashed-dotted curves). This is the location where the instability initiates its recovery from the effect of the PSub.~For the stabilization cases, at this station, we notice the perturbation kinetic energy rises substantially, exceeding even the rigid-wall case, but only for a short distance downstream.~The last streamwise station, Station 3, is at $x^*/\delta=12$ (dashed curves), which is far downstream, where the effect of the PSub has practically vanished, confirming that the influence of the PSub is strictly local, within and very closely around the control region.~Similar but opposite trends for $P_{\rm r}^{*}$ are observed for the destabilization cases.~A comparison between Figs.~\ref{Fig15}a and~\ref{Fig15}b clearly reveals that the absolute strength of the production rate of perturbation kinetic energy is larger for strong PSub control.~This is again consistent with the prediction of the performance metric $P$ from Fig.~\ref{Fig10}a.~The impact of the PSub on the production rate along the $y$-direction is also intriguing, showing that it starts with zero at the wall (due to the nominally zero velocity boundary conditions), reaches its peak very close to the wall, and then gradually diminishes to zero again around the centerline.~Moreover, due to the existence of the PSub at the bottom wall, the flow is not symmetric along the wall-normal direction; see Fig.~\ref{FigAppA}.~An analysis of the flux of the perturbation energy for PnC-based PSubs is provided in Ref.~\cite{Hussein_2015}. \begin{figure*} [t!]
\includegraphics[width=1\columnwidth]{Fig_15.eps}% \caption{Production of flow perturbation (instability) kinetic energy in channel with MM-based PSub for (a) strong and (b) weak stabilization or destabilization as a function of the wall-normal direction, $y^*/\delta$ at three streamwise positions (denoted measuring stations). Station 1 is located at the left edge position (beginning) of the PSub, Station 2 at the right edge position (end) of the PSub, and Station 3 at a far position downstream from the PSub.~A schematic of the PSub-installed channel with the station locations marked is shown in the insets.} \label{Fig15} \end{figure*} \section{Conclusions} The theory of phononic subsurfaces enables the design of subsurface structures for the passive responsive control of wall-bounded laminar/transitional flows with growing instabilities.~We have investigated an MM-based configuration of PSubs that operates in the elastic subwavelength regime. This renders a PSub much shorter (5 cm) than the PnC-based PSub investigated in Ref.~\cite{Hussein_2015} (4 m).~We considered channel flows with unstable TS waves as examples for demonstrating the underlying performance of this new form of PSubs. A parallel analysis of a PnC-based PSub was conducted as well for comparison.~A PnC-based PSub is designed by tuning a stop-band truncation resonance to engage the target TS wave~\cite{Hussein_2015,Barnes_2021}, whereas the proposed MM-based PSub uses a pass-band resonance that has been lowered in frequency due to the generation of a subwavelength locally resonant band gap. Both TS wave stabilization and destabilization were demonstrated.~It was reaffirmed that the performance metric curve $P$ for a given PSub design (which is calculated \textit{a priori} without the need for coupled fluid-structure simulations) perfectly predicts both the nature of engagement with the instability (i.e., stabilization versus destabilization) and the intensity of engagement (e.g., weak, moderate, or strong control of the instability).~The results clearly display that the perturbation kinetic energy of the flow instability field is altered as desired specifically near the wall in the channel region where the PSub is installed.~Furthermore, and importantly, it was shown that the time-averaged value of $K_\mathrm{p}^{*}$ returns to nearly the same level as the reference rigid-wall case downstream of the PSub.~This ascertains the \textit{local} nature of PSub-based flow control, which in turn implies the ability to extend control to wider spatial regions by installing more PSubs as desired.~The time-averaged total elastodynamic energy in the PSub was also calculated and shown to be relatively low, zero, or high for stabilization, neutral effect, or destabilization, respectively.~This demonstrates the coherent nature of the PSub controlled coupled fluid-structure interaction and phased response across both media, and confirms the perfect predictability of the actual response by the predetermined value of the performance metric $P$.~Analysis of the rate of production of the flow perturbation kinetic energy, as a function of both the downstream and wall-normal directions, reveals the intrinsic anti-resonance and resonance mechanisms that take place within the flow when a PSub is installed. 
For stabilization, a PSub causes steady-state energy transfer from the flow instability into the mean flow at the start of the control region and vice versa closer to its end.~The opposite effect takes place for a PSub designed to destabilize the flow. The PSubs theory lays the foundation for a mechanistic, spatially precise, and frequency- and wavenumber-dependent passive and responsive flow control paradigm that is fundamentally based on enabling a targeted contiguous synchronization of wave characteristics across both the flow and an interfacing subsurface elastic structure.~Future research will aim to advance PSubs design to enlarge the green area ($A_{\rm P}^{\rm S}$) or red area ($A_{\rm P}^{\rm D}$) under the $P$ curve in Fig.~\ref{Fig10}a for flow stabilization or destabilization, respectively. Emphasis will be on both deepening and widening these green and red regions to further strengthen the control and make it more robust over broad-frequency ranges. Ongoing innovative research in phononics (see reviews by Hussein et al.~\cite{Hussein_2014}, Jin et al.~\cite{Jin_2021}, and others) will drive this track.~Investigation of PSubs will be extended to boundary-layer flows, supersonic and hypersonic flows, advanced transitional flows, and fully developed turbulent flows, among other problems in flow control~\cite{Gad-el-Hak_2000}.~Switchable PSub control using piezoelectrics~\cite{Hagood_1991,Thorp_2001} is also a potential application.~Multifunctional PSub design to target flow control and, simultaneously, vibroacoustic control~\cite{Bilal_2018}, energy harvesting~\cite{Patrick_2021}, and/or structural support~\cite{Meza_2015} is another promising research direction that will build on the current investigation. \section*{Acknowledgement} The authors dedicate this paper to the memory of Professor Sedat Biringen (1945-2020). This work utilized the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University. \bigskip \bibliographystyle{ieeetr}